N=2 Gauge Theories from Wrapped Five-branes

We present string duals of four-dimensional N=2 pure SU(N) SYM theory. The theory is obtained as the low energy limit of D5-branes wrapped on non-trivial two-cycles. Using seven-dimensional gauged supergravity and uplifting the result to ten dimensions, we obtain solutions corresponding to various points of the N=2 moduli space. The more symmetric solution may correspond to a point with rotationally invariant classical vevs. By turning on seven-dimensional scalar fields, we find a solution corresponding to a linear distribution of vevs. Both solutions are conveniently studied with a D5-probe, which also confirms many of the standard expectations for N=2 solutions.

Introduction

Supergravity duals of non-conformal N=2 gauge theories have recently been discussed in the literature from several complementary points of view. They can be obtained as mass deformations of N=4 SYM [1,2,3,4], using fractional branes at orbifold singularities [5,6,7,8,9,10,11], or from M5-branes wrapped on Riemann surfaces [12]. In this paper, we analyze the realization of pure SU(N) N=2 gauge theories using wrapped Type IIB NS5-branes, an approach which proved successful in the study of N=1 gauge theories [13,14]. Pure SU(N) N=2 SYM can be realized as the low energy theory on D5-branes wrapped on a non-trivial cycle of an ALE space. We will also consider the S-dual configuration with NS5-branes. Unlike AdS_5 x S^5 deformations and systems with fractional branes, at high energy the N=2 theory is embedded in the six-dimensional theory living on the NS5-branes. This six-dimensional theory decouples from gravity when the string coupling is sent to zero; it is neither conformal nor even local [16], but it admits a holographic description in terms of a linear dilaton background [17]. We will find N=2 solutions of the appropriate seven-dimensional gauged supergravity which are asymptotic (in the ultraviolet) to the linear dilaton background, and we will uplift them to ten dimensions. The resulting solutions have a complex one-dimensional moduli space for the motion of a D5-probe, as expected for the dual of the SU(N) N=2 SYM theory. We will find symmetric solutions corresponding to classical vevs distributed in a rotationally invariant way. A particular solution in this family may correspond to the strongly coupled theory with zero classical vevs (or vevs smaller than the dynamically generated scale). We will also find solutions corresponding to linear distributions of vevs. The moduli space structure will be studied using a D5-probe, which, as usual, captures the quantum field theory one-loop result. The supergravity solutions we find are all singular. In N=2 theories, an enhançon mechanism [18] is usually invoked for resolving the singularity. Some of the features usually associated with the enhançon mechanism are at work here. In Section 2 we discuss our approach, which uses seven-dimensional gauged supergravity. Since only the bosonic equations of motion of the relevant theory are known [19], we perform the singular limit described in [20] on the fermionic shifts of the maximally gauged supergravity. Evidence for the N=2 supersymmetry of the solution will be given by the probe analysis. In Section 3 we consider the uplifting to ten dimensions of the most symmetric solution. By using a D5-probe, we identify this solution as the point in the moduli space of N=2 SYM with zero classical vevs.
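For orientation before the construction begins, here is a minimal sketch of the linear dilaton background invoked above: the standard near-horizon form of N flat NS5-branes, written up to conventions, with Phi_0 a constant. This display is our addition for the reader's convenience, not a formula taken from this paper:

\[
ds^2_{\rm str} = dx^2_{1,5} + N\alpha' \left( d\rho^2 + d\Omega_3^2 \right), \qquad \Phi = \Phi_0 - \rho ,
\]

so the dilaton is linear in the radial coordinate rho, matching the asymptotic behaviour phi ~ -rho quoted below.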
In Section 4 we turn on scalar fields corresponding to the chiral operators parameterizing the Coulomb branch. With the probe analysis, we identify this solution as corresponding to a linear distribution of vevs in the gauge theory. While this work was being written, a paper [21] appeared where the solution of Sections 2-3 was independently discussed.

Twisting the NS5-brane

In order to obtain N=2 supersymmetry in four dimensions, we wrap N NS5-branes on an S^2 of an ALE space, and then twist the normal bundle as in [22]. The ten-dimensional spacetime is locally of the form R^4 x S^2 x R^2 x R^2, where the first R^2 is part of the ALE space and the second one lies in the transverse flat space. These give a U(1) x U(1) normal bundle. Working with seven-dimensional gauged supergravity, we can perform the twist by identifying the gauge fields in the theory with the spin connection on the sphere, as in [13,14]. We thus choose the U(1) x U(1) truncation [23] of the SO(5) seven-dimensional gauged supergravity [24]. As in [13], the right choice to retain N=2 supersymmetry is to take one of the abelian gauge fields equal to the spin connection on the sphere, setting the other one to zero. Our ansatz for the string frame metric in seven dimensions is given in (1); in the Einstein frame it reads as in (2), with f = -(2/5) Phi_7 and g = -(2/5) Phi_7 + h (Phi_7 is the seven-dimensional dilaton). We also set N = 1 for simplicity. We look for solutions of the equations of motion of U(1) x U(1) gauged supergravity that also preserve N=2 supersymmetry. The theory at hand contains, apart from the metric, two scalars, two abelian gauge fields, a three-form potential (which we set to zero in the following) and the corresponding fermions. The U(1) x U(1) truncation was used in [13] to find N=2 M-theory solutions interpolating between AdS_7 and AdS_5, corresponding to wrapped M5-branes. In order to study NS5-branes in Type IIB, we need to perform a singular limit in the theory, which reduces M-theory to Type II, as discussed in [20]. The bosonic part of the Lagrangian thus becomes: While the maximally gauged supergravity (M-theory compactified on S^4) has AdS vacua, corresponding to the (2,0) CFT, the new theory (Type II on S^3) has only run-away vacua. There is instead a solution corresponding to NS5-branes. The same singular limit on the supersymmetry variations for the fermions gives the expressions in (4) (k = 2m is the gauge coupling). From (2) it follows that the non-trivial components of the spin connection are: where alpha = 0, 1, 2, 3 labels the four-dimensional coordinates in (2), and the hats distinguish the curved coordinates from the flat ones. We impose the ansatz: As explained above, we take A^(1) = -(1/k) cos(theta) d(phi) and A^(2) = 0, so that inserting (2) in (4) and equating the variations (4) to zero we obtain a first-order system whose solutions are given in (8). We have chosen the integration constants for e^{-2g-2f} and f so that u ranges in the interval [0, infinity) and the seven-dimensional dilaton has the canonical NS5 asymptotic behaviour (for rho -> infinity) phi ~ -rho. We also fixed to one the integration constant for e^{lambda_2 + lambda_1}, which has no physical relevance, appearing as an overall factor. We will mainly consider the case K >= 1/4, where u ranges over [0, infinity). For solutions with K < 1/4, u can never reach zero. We explicitly checked that the second-order equations in [19] are satisfied. We will find further evidence for the supersymmetry of the solution in the next section, when we find a moduli space for D5-probes.
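As a hedged aside on the mechanics of the twist (the following is the standard wrapped-brane argument, with our conventions, not a display quoted from this paper): on a two-sphere with metric e^{2h}(d theta^2 + sin^2 theta d phi^2) the relevant spin connection one-form is omega = -cos(theta) d(phi), and a spinor that is constant on the sphere survives precisely when the gauge connection cancels the spin connection in the supersymmetry variation,

\[
\left( \partial_\mu + \tfrac{1}{4}\,\omega_\mu^{\,ab}\gamma_{ab} - k\,A^{(1)}_\mu \right)\epsilon = 0
\quad\Longrightarrow\quad
A^{(1)} = -\tfrac{1}{k}\cos\theta\, d\varphi ,
\]

which, up to normalization conventions, is the identification used above.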
Ten-dimensional solution

We refer to [20] for the lifting of the previous solution to ten dimensions. The string frame metric is (ds^2_7 is given by (1) with N = 1): and the dilaton reads: with Delta = e^{2 lambda_1} mu_1^2 + e^{2 lambda_2} mu_2^2, where phi_{1,2} and (mu_1, mu_2) = (sin theta', cos theta') are angular coordinates of the transverse three-sphere. The solution also incorporates a six-form potential whose field strength is given by: The D5 solution may be obtained from the one above by performing the S-duality transformations: We can now examine the asymptotic behaviour of the metric and the dilaton, making use of the explicit solutions (8). In NS variables, we expect a solution UV-asymptotic to the linear dilaton background [17], with a size of S^2 that grows, reflecting the running of the coupling constant. Alternatively, in D5 variables, when u -> infinity (rho -> infinity) we get: As expected, the dilaton diverges and the S^2 blows up. The details of the u -> 0 limit depend drastically on the value of the integration constant K in (8). When K > 1/4 we find: The metric has a singularity of the bad type according to the criteria of [13] and the dilaton diverges, so we discard this possibility. When K = 1/4 we find instead: The singularity of this metric (located at u = 0, mu_1 = 0) is milder, so we can retain this solution as a dual of N=2 SYM. We can explore the nature of the moduli space of the gauge theory using a probe D5-brane wrapped on S^2, whose low energy effective action is (in units 2 pi alpha' = tau_5 = 1): where the induced metric on the worldvolume appears (alpha, beta = 0, 1, ..., 5 label the worldvolume coordinates, while M, N = 0, 1, ..., 9), and F is the gauge field strength on the brane. We perform our calculations in static gauge, choosing xi^0 = x^0 = t, xi^i = x^i, i = 1, ..., 5, x^m = x^m(t), m = 6, ..., 9, and taking the low velocity limit. From (13) and (14) it follows that, for theta' = 0: which does not depend on the angular coordinates. This term contributes to the effective potential for the probe. The other contribution comes from the Dirac-Born-Infeld part of the action. The BPS configurations correspond to having zero potential. If we look for solutions with the radial coordinate u unfixed, the potential vanishes when mu_1 = sin theta' = 0. In this case the low velocity limit of the DBI part of the action gives (taking F = 0 on S^2): The first contribution in the integral cancels exactly against C_6, after choosing the constant in (20) equal to 2K. This cancellation is independent of the particular choice of K, and can be proved using only the first-order equations of motion. The kinetic term in (21) reads: which, introducing r = e^u, may be written as: This gives a complex one-dimensional moduli space, as expected for the N=2 gauge theory, which can be parameterized by the complex coordinate z = r e^{i phi_2}. We can explicitly write the holomorphic coupling after computing the coefficient of F wedge F from the third term in (18), which equals -2 pi phi_2. After making explicit the dependence on the number N of D5-branes, we find (apart from numerical factors) for the gauge kinetic term: while the scalar kinetic term reduces to: We have thus found the moduli space structure expected for N=2 supersymmetric four-dimensional YM theory. The supergravity description correctly captures all the perturbative contributions to the coupling. Formula (24) is compatible with points in moduli space where all the classical vevs are zero or distributed in U(1)_R invariant configurations. K may distinguish between these cases.
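For comparison, the field theory expectation being matched here is the one-loop coupling of pure SU(N) N=2 SYM on the Coulomb branch. We write it as a sketch, with the overall numerical coefficient left loose (as in the text's "apart from numerical factors"):

\[
\tau(z) \;\sim\; \frac{iN}{\pi}\,\log\frac{z}{\Lambda} ,
\]

whose imaginary part reproduces the logarithmic running of 1/g^2 with |z|, and whose real part, the theta-angle, is linear in phi_2, consistent with the -2 pi phi_2 coefficient of F wedge F found above.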
Results in [21] suggest that the radius of the distribution is bigger than Lambda for K < 1/4. This is compatible with a probe able to move in the region inside the distribution of branes, as found in [21]. It is tempting to associate the fine-tuned value K = 1/4 with zero classical vevs (or vevs smaller than Lambda). For such a strongly coupled vacuum, an enhançon mechanism [18] could be expected: even if classically at the origin, quantum mechanically the branes arrange themselves in a spherical shell of radius Lambda. It is difficult to make this more precise because the quantum field theory region of moduli space below the radius Lambda is hardly visible in the solution. However, many features usually associated with the enhançon mechanism are manifest in the solution with K = 1/4: at u = 0 (z = Lambda) the probe becomes tensionless and extra bulk fields become massless (D3-branes wrapped on the two-sphere, for example). The precise form of the singularity (and its resolution) deserves further investigation.

A more interesting example

We can also turn on other scalar fields in the seven-dimensional gauged supergravity. The scalar fields parameterizing the sphere reduction are expressed in terms of a symmetric SO(4) tensor T_{ij}, i, j = 1, ..., 4. In the previous Sections we retained the U(1) x U(1) singlets T_11 = T_22 = e^{2 lambda_1} and T_33 = T_44 = e^{2 lambda_2}. The scalar fields T_{ij}, i, j = 3, 4 parameterize the motion of the NS5-branes in the untwisted R^2 plane, which preserves N=2 supersymmetry. They are dual to the bilinear scalar operators of the N=2 gauge theory. We expect that, as in similar AdS_5 examples [25,1], we can find solutions corresponding to non-trivial points in the Coulomb branch of the N=2 theory. Up to a gauge rotation, we can take T_11 = T_22 = e^{2 lambda_1}, T_33 = e^{2 lambda_2}, T_44 = e^{2 tilde-lambda_2}. This choice explicitly breaks U(1)^(2). The equations of motion can be consistently truncated to these three scalar fields. We look for N=2 solutions. We now need to consider the full SO(5) gauged supergravity and perform the singular limit described in [20], in order to descend from M-theory to Type II. That this is a sensible procedure is guaranteed by the fact that the first-order BPS equations satisfy the second-order equations [19] for Type II compactifications. The BPS equations are now: and they can be solved as in the previous case. The relations e^{2h} = u and du/d(rho) = e^{(lambda_2 + tilde-lambda_2)/2 - lambda_1} still hold, and moreover we have an important additional relation. For b = 0 we recover the previously discussed solution with lambda_2 = tilde-lambda_2. The uplifting to ten dimensions can be performed using the formulae in [19]. The solution for wrapped D5-branes is: where the mu_i, with sum_{i=1}^{4} mu_i^2 = 1, parameterize S^3. The dilaton reads: with The complete formula for F^(3) can be found in [19]. We can choose (mu_1, mu_2) = sin theta' (cos phi_1, sin phi_1) and (mu_3, mu_4) = cos theta' (cos phi_2, sin phi_2). The probe computation goes through as before. We find that, for theta' = 0, the potential again vanishes, so that there is a complex one-dimensional moduli space, which can be parameterized as before by z = r e^{i phi_2}, u = log r. In this Section we set, for simplicity, Lambda = 1. The gauge field kinetic term is unchanged, while the scalar kinetic term, using formulae (26), (27), reads:

\[
\frac{\log|z|}{|z|^2}\left[\cos^2\!\varphi_2\; e^{-2(2\lambda_1+\lambda_2+2\tilde\lambda_2)} + \sin^2\!\varphi_2\; e^{-2(2\lambda_1+2\lambda_2+\tilde\lambda_2)}\right] dz\, d\bar z \qquad (33)
\]

In the standard coordinates for the N=2 effective Lagrangian, the scalar and gauge kinetic terms coincide. We can achieve this by the holomorphic change of coordinates w = z + b^2/z.
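A hedged sketch of the standard field-theory formula underlying this identification (our addition, not a display quoted from the paper): for vevs distributed on the Coulomb branch with density mu(a), the one-loop probe coupling is

\[
\tau(w) \;\sim\; \frac{i}{\pi}\int da\,\mu(a)\,\log\frac{w-a}{\Lambda} ,
\]

and the map w = z + b^2/z sends the circle |z| = b to the segment [-2b, 2b] of the real axis, which is the support of the linear distribution found below.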
The probe coupling constant now reads: For large |w|, tau(w) has the standard logarithmic behaviour, while for small |w| it reveals a non-trivial distribution of vevs in the gauge theory. We can compute this distribution explicitly by approximating it with a continuum: with mu(a) = N/(pi sqrt(4b^2 - a^2)). We see that our solution represents a point in the Coulomb branch where the vevs are linearly distributed. This is somewhat reminiscent of [3], with the obvious difference that the theory in [3] is a mass deformation of N=4, while our starting point is (a little string UV completion of) pure N=2 without matter. Curiously, this linear density of vevs is of the same type that appears at the N=2 point in moduli space where all types of monopoles become massless [26]. The relation of our solution to this particular point, and to the possible soft breaking to N=1, deserves further investigation. We have given several indications that our solutions are actually N=2. It would be interesting to check the supersymmetry directly in ten dimensions. We understand that this was done in [21] for the solution of Sections 2 and 3.
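As a quick numerical sanity check (a Python sketch; the values of N, b and the grid size are arbitrary choices of ours, not values from the paper), one can verify that the quoted density mu(a) = N/(pi sqrt(4b^2 - a^2)) integrates to the total number of branes N over its support [-2b, 2b]:

```python
import numpy as np

N, b = 100, 1.0

# Substitute a = 2b*sin(t): the endpoint singularities of mu(a) drop out,
# since mu(a) * da/dt reduces analytically to the constant N/pi.
# Midpoint rule avoids evaluating exactly at the endpoints t = +/- pi/2.
n_steps = 20000
dt = np.pi / n_steps
t = -np.pi / 2 + dt * (np.arange(n_steps) + 0.5)
a = 2 * b * np.sin(t)
integrand = (N / (np.pi * np.sqrt(4 * b**2 - a**2))) * (2 * b * np.cos(t))

total = np.sum(integrand * dt)
print(total)  # ~ 100.0: the distribution carries exactly N units of charge
```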
The effects of tax policy and economic regulation on the companies of the electricity sector in Brazil

The objective of this study was to analyze the possible effects of the electric sector's regulations and of the enactment of Laws 10,637/2002 and 10,833/2003 on the collection of the federal PIS and COFINS taxes and on the pricing of electric energy tariffs, identifying the determinant factors for fixing them. Multivariate analysis was used as the analytical approach, taking multiple panel regressions as reference. In general, there was a 113% increase in the payment of the PIS and COFINS social contributions after the enactment of Law 10,833/2003, indicating that the right to deduct credits on certain factors of production was not obtained by companies in the electric energy sector, therefore increasing the tax burden of those companies. The direct consequence of this result was the increase in the average electric energy tariff charged to residential consumers, especially after 2004. In this sense, after a 153% increase in the PIS and COFINS rates, and given the concessionaires' right to revise their tariffs when costs, including taxes, increase, there was a considerable increase in the average electric power tariff, in the order of 2.8% and 8.1% respectively, higher than the increases in production costs. Therefore, it can be inferred that the increase in electric energy tariffs during the study period was mainly due to the increase in the tax burden and to regulatory factors, since the factors of production were not significant in the model.

Introduction

Since the 1990s, the Brazilian electric energy sector has undergone several changes of an institutional nature: expansion of private sector participation, technological innovations, economic infrastructure, deregulation of the sector and the institution of public policies, among them tax policy. In principle, the causes of this process began with the deterioration of the infrastructure, due to the loss of government reinvestment capacity and the difficulties of gaining economies of scale, leading to a privatization process initiated with the privatization Law No. 8,031/1990 (SILVA, 2007). From that moment, the government felt the need for further changes which, in the view of Moraes (2009), Gomes et al. (2009) and Viana et al. (2009), were necessary for the competitiveness and maintenance of the sector, among them: (1) the unbundling of generation, transmission, distribution and commercialization activities (known as MAE/CCEE) and, as of 2004, subdivided into exporters and importers; (2) the purchase of electric power in the transmission and distribution segments started to be done through auctions, observing the lowest-tariff criterion; and (3) the introduction of the independent producer and the self-producer on a larger scale, with the objective of better allocation, production and distribution of resources.
It could be said that the basic concern was to create competitive pressure in the segments where it was possible (generation and commercialization) and regulation in the segments where it was needed (transmission and distribution), as shown in Figure 1. Table 1 presents a summary of the main changes between the models, which resulted in changes in the activities of some agents in the sector: prices freely negotiated in generation and commercialization; in the free environment, prices freely negotiated in generation and commercialization; in the regulated environment, auctions and bidding for the lowest tariff (Source: adapted from Vieira et al., 2009). Another important step in this process was the creation of Law No. 8,631/1993 and Decree No. 774/1993, which regulated the tariff levels to be charged as consideration for the public electricity supply service, according to the specific characteristics of each concession area. From the tariff point of view, these rules extinguished tariff equalization and maintained the cost-of-service regime, with tariff readjustments proposed by the concessionaires and homologated by the Granting Authority; the energy tariff levels charged by the concessionaires were to be fixed taking into account the specific costs of each concessionaire, the amounts related to the prices of electricity purchased from suppliers, the transportation of the electricity generated by Itaipu Binacional, the annual quotas of the Global Reversion Reserve, the apportionment of fuel costs, and the financial compensation for the use of water resources (Article 2, Decree 774/93). The minimum legal remuneration of 10% on investment, in effect since the Water Code of 1934, was thus eliminated, establishing, from these changes, the current tariff regime. Thus, from Law No. 8,631/1993 and Decree No.
774/1993, the current system now includes the so-called binomial rate, consisting of two distinct parcels (power and energy): the recorded electricity consumption (in kWh) is calculated from the power (or demand) values of the various pieces of electrical equipment used (in Watts, W) and their hours of use (h). In addition to this distinction between power and energy, the system added the horo-seasonal segment, which establishes tariffs for peak (HP) and off-peak (HFP) hours. The former refers to the period with the highest energy demand and consists of three consecutive daily hours defined by the distributor, considering the load curve of its electric system, approved by ANEEL for the entire concession area, except for weekends and holidays defined by federal law; on average, there are 66 such hours during the month. The off-peak hours are the hours complementary to the three consecutive hours that make up the peak hours, plus all the hours of weekends and holidays; at these times the energy tariffs are lower than at HP, with an average of 664 hours during the month. The system also fixes different values for the periods of the year between May and November, defined as the dry period (PS), and between December and April, the wet period (PU). The amounts are set by the National Electric Energy Agency (ANEEL), which regulates relations between concessionaires and consumers, establishing the various types of contracts, standards and instructions (Law No. 8,631/1993 and Decree No. 774/1993). Likewise, Article 9 of Law 8,987 of 1995 inaugurated the price-based tariff regime, with the possibility of mechanisms for the readjustment and revision of tariffs. According to the legislation, the tariff charged should be established by the regulatory agency, in this case ANEEL, and be sufficient to cover all costs of the service, including taxes, in order to guarantee the economic and financial balance of the concessionaire and the remuneration of the investments necessary to maintain the services with quality and reliability. Very similarly, Laws No. 8,631/1993 and 8,987/1995 granted the concessionaires the right to a revision of electric energy tariffs whenever there is an increase in operating costs, regardless of origin, as well as upon the creation, alteration or extinction of any tax, except for taxes on income. In 2002 the system of tariff modalities was also introduced, according to Decree No. 4,413/2002: the conventional modality, the green horo-seasonal and the blue horo-seasonal. Lastly, as of January 2015, the flag system was also included in Brazil's electricity tariff structure, which consists of passing on (and demonstrating) monthly to the consumer the additional cost of buying energy under less favorable generation conditions. There are currently three flags: green (favorable conditions; the normal tariff does not increase), yellow (less favorable conditions; an increase of R$ 0.015 per kWh) and red (more expensive generation conditions; an increase of R$ 0.03 or R$ 0.045 depending on the seriousness of the situation) (Normative Resolution No. 574/13, ANEEL, 2013). In all tariff modalities, ICMS, PIS and COFINS are charged on the sum of the installments, and ICMS is not levied on the portion of inactive contracted demand, that is, demand contracted but not used. With regard to the PIS and COFINS taxes, a major change for companies in the electricity sector occurred with the enactment of Laws
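To make the flag arithmetic concrete, here is a small sketch in Python. The consumption figure is hypothetical; the base tariff uses the R$ 0.29 sample average reported later in the paper, and ICMS/PIS/COFINS are deliberately left out:

```python
# Flag surcharges in R$/kWh, as quoted from Normative Resolution No. 574/13.
FLAG_SURCHARGE = {"green": 0.000, "yellow": 0.015, "red (level 1)": 0.030, "red (level 2)": 0.045}

consumption_kwh = 250.0   # hypothetical monthly residential consumption
base_tariff = 0.29        # R$/kWh, the sample's average residential tariff

for flag, extra in FLAG_SURCHARGE.items():
    bill = consumption_kwh * (base_tariff + extra)
    print(f"{flag:15s} -> R$ {bill:7.2f} before taxes")
```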
10,637/2002, 10,833/2003 and 10,865/2004. With the enactment of these laws, PIS and COFINS had their rates changed to 1.65% and 7.6%, respectively, and came to be calculated in a non-cumulative manner. As a result, the average rate of these taxes started to vary with the volume of credits established monthly by the concessionaires and with the PIS and COFINS paid on costs and expenses in the same period, such as electricity purchased for resale to the consumer. Thus, if, on the one hand, the institution of Laws No. 8,631/1993 and 8,987/1995 came to stimulate the search for efficiency by companies in the electricity sector, on the other hand, Laws 10,637/2002, 10,833/2003 and 10,865/2004 increased the tax burden in this sector, albeit with the possibility of credit deductions. Therefore, it is evident that the higher the costs, the higher the final product prices (electric energy) for consumers. It remains to be seen how much of the tax cost on each of the electric power segments is passed on to the final product and how the tariff is fixed in each segment of the industry (generation, transmission, distribution and commercialization).

Non-cumulative incidence regime: Laws 10,637/2002 and 10,833/2003

Concerning the reforms of the Brazilian tax system, the concern with economic competitiveness turned to taxes on consumption and production, which usually include the social contributions for Social Security Financing (COFINS) and for the Social Integration Program (PIS). Another issue that also worries and, according to the available literature, impairs competitiveness and productive efficiency is the cumulative system of taxation present in the tax system. Based on diagnoses of this nature, the government proposed, through Law 10,637/2002, effective December 2002, and Laws 10,833/2003 and 10,865/2004, in force from February and April 2004, respectively, to institute the system of non-cumulative PIS and COFINS for all publicly-held companies, with the exceptions described in Article 8 of Law 10,637/2002 and Article 10 of Law 10,833/2003, observing the provisions of Article 15 of the latter. Among these exceptions are the revenues earned in the purchase and sale of electric power within the scope of the Wholesale Electricity Market (MAE), observing the provisions of Article 47 of Law 10,637/2002. An observation should be made in relation to the other segments (generation, transmission and distribution) of this sector, since none of the legislation mentions their exclusion, showing that they would fall under the new modality, generating confusion among accountants and administrators. In addition to the change in the basis of calculation, these two contributions had their rates increased: in the case of COFINS, the rate increased from 3% to 7.6%, while in the case of PIS it increased from 0.65% to 1.65% (Article 2 of Laws 10,637/02 and 10,833/2003). The new legislation also gave the taxpayer the right to deduct credits, of 9.25%, on the expenses with certain inputs set forth in Article 3 of both laws. It should be noted that expenses with labor paid to individuals and the acquisition of goods or services not subject to payment of the contribution do not generate the right to credit. Wessel (2003) points out that the type of market structure in which a taxed product is inserted is a determining factor for the implications of taxation on generation capacity, transmission and the intensity of supply and demand along the chain.
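The arithmetic behind the 39.45% threshold cited later in the paper can be sketched as follows (revenue and credit-base figures are hypothetical; the rates are those quoted above). The two regimes impose equal burdens when the creditable share of revenue is 1 - 3.65/9.25, i.e. about 60.55%, which is the same as value added of about 39.45%:

```python
def pis_cofins(revenue, creditable_inputs=0.0, cumulative=True):
    """Monthly PIS + COFINS liability under the two regimes (rates from the text)."""
    if cumulative:
        return revenue * (0.0065 + 0.03)                      # 0.65% + 3% on gross revenue
    return (revenue - creditable_inputs) * (0.0165 + 0.076)   # 9.25% net of credits

revenue = 1_000_000.0  # hypothetical gross revenue, R$
for credit_share in (0.38, 0.6055, 0.80):
    old = pis_cofins(revenue)
    new = pis_cofins(revenue, creditable_inputs=credit_share * revenue, cumulative=False)
    print(f"creditable share {credit_share:.0%}: burden changes by {new / old - 1:+.1%}")
```

With the sample's average value added of 62% (a creditable share near 38%), the sketch yields a burden increase of roughly 57%, qualitatively in line with the rise in collection the paper reports.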
Rationale and relevance of the research

In this respect, according to Gremaud et al. (2003), the first repercussion of a tax on products traded in imperfect markets, as is the case of the Brazilian electricity sector, is an increase in the initial marginal cost equal to the value of the tax. According to these authors, the implications observed for the impact of taxation in imperfect competition markets are generally the same as in cases of taxation of products transacted in a monopoly structure, but with the aggravating circumstance of the inefficiencies inherent in monopoly. In this sense, for the monopoly companies (the transmission and distribution segments of electric energy in Brazil), the possibility of a rise in the price of the product greater than the value of the tax is admitted. However, while on the one hand companies in the electric power sector are taxed and "carry" a high tax burden that increases their production costs and, theoretically, having monopoly and competitive segments throughout their chain, could pass this cost on to the price of electricity, on the other hand they are companies that have their prices and supply regulated by government agencies. In this sense, taking into account that the current structure of the Brazilian energy sector is divided into competing companies (generation and commercialization) and monopolies (transmission and distribution), and that these are still companies whose decisions on the quantity offered and the final price are regulated by ANEEL and CCEE, this work sought to present evidence of the impacts that public policies (notably tax policy and regulatory issues) have on the determination of companies' sale prices and on the behavior of production costs in the energy sector, without, however, disregarding the competitive and regulatory aspects that also contribute to these facts. Thus, the choice of the electric power sector as a topic of study was due to: (1) the importance of this activity for the Brazilian economy; (2) the structural changes that the industry has undergone in recent years, which instigate research on the behavior of decisions on product supply, selling price and cost variation; (3) the fact that it is a sector under economic regulation; and (4) the tax changes that occurred in Brazil, which included the electric power sector. For this purpose, the objective of this study was to analyze the possible effects of the electric sector's regulations and of the enactment of Laws 10,637/2002 and 10,833/2003 on the collection of the federal PIS and COFINS taxes and on the pricing of electric energy tariffs, identifying the determinant factors for fixing them.

Methodology

Study area and data source

The study adopted as its space of analysis the set of 54 (fifty-four) Brazilian companies in the electric energy sector with shares traded on the São Paulo Stock Exchange (BM&FBOVESPA) from 2001 to 2012. The data were collected in the Economática® database and made available by BM&FBOVESPA, ANEEL and the Institute of Applied Economic Research (IPEA). The consolidated financial statements of only one share type (common shares), measured at book values, adjusted by the IGP-DI as of 12/31/2012, in thousands of reais, were used.
With respect to the treatment of outliers, companies that did not have all the information available during the review period were excluded. In addition, extreme values were eliminated, considering observations with values outside the limit of three standard deviations, in order to avoid distortions. After these data-filtering procedures on the 54 companies available, a sample of 10 companies was obtained, classified according to share capital and operating segments, as presented in Table 2. It should be noted that the sample was reduced because the tariff variable charged to final consumers by ANEEL was not available for all companies at the date of the survey.

Definition of the variables

In order to evaluate the possible effects that the changes introduced by Laws 10,637/2002 and 10,833/2003 may have had on the collection of the PIS and COFINS contributions of the Brazilian electric power sector, the estimated PIS and COFINS collected by the companies in each year of the study, obtained from published accounting information, was used as the dependent variable and, as independent variables, those described in Table 3 (Source: prepared by the authors). To capture these possible impacts, a dummy variable (DPIS) was created, assigning the value 0 to the years prior to the non-cumulative PIS and 1 to the subsequent years, along with a dummy variable (DCOFINS), assigning the value 0 to the years prior to its validity and 1 to the subsequent years (Table 3). It is expected that all variables related to production factors, except for the Personnel Cost variable, have a negative relation with the total collection of the federal PIS and COFINS taxes, since Article 3 of Laws 10,637/2002 and 10,833/2003 gave companies the right to deduct credits thereon. In order for taxation on value added and on gross revenue to result in the same tax burden, segments should add at most 39.45% of taxes and margin to their cost of production (RAIMUNDI, 2010). In this sense, it is expected that the value added variable (VA) is positively related to the dependent variable PISCOFINS, so that the higher the value added of the company, the higher the PIS and COFINS collected. We expect a positive correlation between GDP and the dependent variable, believing that this sector accompanies the level of economic activity, following the stimulus that this macroeconomic aggregate exerts on the demand for electric energy. As for the PIS and COFINS dummy variables, the expected sign is positive, demonstrating that the 153% increase in the rates of the PIS and COFINS social contributions was not offset by the credit deductions on the factors of production.
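A minimal pandas sketch of the three-standard-deviation filter described above (the function and column names are hypothetical, not taken from the paper):

```python
import pandas as pd

def drop_outliers(df: pd.DataFrame, cols: list[str]) -> pd.DataFrame:
    """Keep only the rows within three standard deviations of the mean in every column."""
    mask = pd.Series(True, index=df.index)
    for c in cols:
        z = (df[c] - df[c].mean()) / df[c].std()
        mask &= z.abs() <= 3.0
    return df[mask]

# Usage (hypothetical column names): filtered = drop_outliers(panel_data, ["PISCOFINS", "TAX", "VA"])
```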
In order to evaluate the possible impact of the changes imposed by Laws 10,637/2002 and 10,833/2003 and of the regulatory rules of the electricity sector on the fixing of the electric energy tariff, the average tariff charged to the final consumer in each year and for each company was used as the dependent variable, based on the revenue composition established by ANEEL for the concessionaires: the concessionaire's revenue is composed of two installments, "Portion A", represented by the non-manageable costs of the company (sector and tax charges, transmission charges, and the purchase of energy for resale and for the use of connection facilities), and "Portion B", which aggregates the manageable costs (operating expenses, such as personnel, material, third-party services and overheads; maintenance expenses and depreciation; and capital expenditures, that is, financial expenses and other capital remuneration expenses). As independent variables of this second model, items directly related to the formation of the sale price described by ANEEL (2010) were selected, as well as the items included in the cost of production that are directly related to the credits established by Laws 10,637/2002 and 10,833/2003. Variables related to enterprise size and growth expectation were also included, besides macroeconomic variables such as real GDP, the exchange rate, Brazilian income and the number of inhabitants. Regarding the size and growth-expectation variables, a negative relation is expected, since the larger the company, the greater the possibility of economies of scale. Regarding the macroeconomic variables, no particular sign is expected, only that they be significant, since all are directly related to consumption, although it cannot be inferred in what way they are related to the fixing of the tariff price. With the objective of verifying the influence of the ownership of the capital stock on the fixing of the residential electric tariff, a dummy was also included, attributing the value 1 (one) to private and 0 (zero) to public enterprises. In this case, as in model 1, dummy variables were included; no particular sign is expected, only that they show significance.

Development of the Panel Model

First, the presence or absence of multicollinearity between the dependent variable and the independent variables was assessed by simple correlation analysis, as in Plata et al. (2005) and Mário (2002). In order to reduce multicollinearity problems among the independent variables, those with a correlation above 75% were eliminated, a threshold that, according to Famá and Melher (1999), indicates strong correlation between the variables. Gujarati (2006) calls this procedure partial correlation analysis. In order to estimate the effects of Laws 10,637/2002 and 10,833/2003 on the total collection of the PIS and COFINS contributions in the 10 companies over the years of analysis, multivariate regression for panel data was used. This technique combines analysis by company (cross-section series) with analysis per unit of time (time series), encompassing elements of both (WOOLDRIDGE, 2006). Models 1 and 2 were estimated for each year (2001 to 2012) and considered only one form of adjustment, the Fixed Effects Model, given the number of degrees of freedom. In order to estimate the influence of the independent variables on each of the dependent variables, the adjusted coefficient of determination (adjusted R²) was analysed. The hypothesis of serial autocorrelation in the residuals was tested using the Durbin-Watson "d" test.
For Marques (2007), the closer the parameter of this test is to 2, the better. The significance level adopted for the entry and exit of each independent variable was up to 5%. After all these methodological procedures, the final model was specified in log-lin functional form, as described in equation 1 (reconstructed here from the variable definitions in the text):

log(PISCOFINS_it) = α_i + β'X_it + δ₁ DPIS_it + δ₂ D04_it + ε_it    (1)

where log(PISCOFINS_it) is the log of the total PIS and COFINS collected by the companies, for each cross-section (i) in each year (t); DPIS is the dummy variable capturing the effect of Law 10,637/2002; D04 is the dummy variable capturing the effect of Law 10,833/2003; DCONT is the dummy variable indicating the effect of public versus private ownership on corporate taxation; X_it is the vector of the remaining firm-level and macroeconomic regressors; ε_it is the error term, independent and identically distributed over (t) and (i); the β are the parameters to be estimated; and α_i measures the heterogeneity, or specific effect, of each group or individual, containing a constant term and a set of variables not observed by the model and not correlated with the regressors. The statistical treatment of the data, in all stages of the models, was carried out with the software Gretl 1.9.3, which enabled the computation of the descriptive statistics and of the multivariate regression coefficients for the panel.

Descriptive analysis of variables

Table 4 presents the descriptive statistics of the variables selected for the tax collection and supply generation models. It can be verified that the average tariff charged to residential final consumers during the analyzed period was R$ 0.29 and that the range between values was R$ 0.22, that is, consumers paid different amounts for the same product. The collection of the PISCOFINS contributions by the companies presented an average of R$ 379,333.00, which represented 6.36% of the companies' gross revenue. On average, the collection of PISCOFINS by the companies increased significantly over the period: from 2001 to 2009 the increase was 187%, and from 2002 to 2003, 2003 to 2004 and 2004 to 2005 the percentage increases were more significant, at 20.40%, 43.48% and 57.19%, respectively (Source: authors). It is also observed that the percentage of value added by the companies presented an average of 62%, which suggests that companies in the electricity sector did not benefit from the institution of Laws 10,637/2002 and 10,833/2003. The exception was the company Ienergia Elétrica (IEN), which managed to obtain, over the analyzed period, more rights to tax credits than debts. It remains to be seen whether it was able to lower the value of its tariff. When checking the behavior of the average tariff charged to final consumers (TAX), it can be seen that the lowest tariffs were charged by the public companies CEB and COPEL, with an average tariff of R$ 0.27. The tariff charged by CEMIG was the highest among the 10 companies (Table 5); this fact has even been the subject of complaints among many consumers. In this sense, at first, it can be stated that the public companies, with the exception of CEMIG, are the ones that present the lowest tariff values; furthermore, we can assume that the electric energy company that benefited from the tax change (IEN) nevertheless presented the third-highest tariff. There was also a 17.6% increase in the average energy tariff price over the periods analyzed, and this increase was greater in the years in which Laws 10,637/2002 and 10,833/2003 came into force. This variation was greater in private companies.
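For readers who want to reproduce the setup outside Gretl, here is a sketch of the fixed-effects estimation in Python. The data frame and variable names are hypothetical stand-ins for the paper's 10-company panel (the paper itself used Gretl 1.9.3):

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS
from statsmodels.stats.stattools import durbin_watson

# Hypothetical stand-in for the 10-firm, 2001-2012 panel.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product([range(10), range(2001, 2013)], names=["firm", "year"])
df = pd.DataFrame({
    "log_piscofins": rng.normal(12.0, 1.0, len(idx)),
    "D04": (idx.get_level_values("year") >= 2004).astype(float),  # Law 10,833/2003 dummy
    "VA": rng.normal(0.62, 0.10, len(idx)),
    "GDP": rng.normal(0.0, 1.0, len(idx)),
}, index=idx)

# Entity fixed-effects estimation of a log-lin model like equation (1).
res = PanelOLS.from_formula("log_piscofins ~ D04 + VA + GDP + EntityEffects", data=df).fit()
print(res.params)
print("Durbin-Watson:", durbin_watson(res.resids))
```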
Therefore, a preliminary analysis of the data showed that, during the analyzed period, there was a positive variation both in the electric energy tariffs charged to consumers and in the PIS and COFINS collected by the companies, and that this increase was greater in the period in which the PIS and COFINS rates suffered a 153% increase. (Table 5 reports, per company, the average tariff charged to the residential consumer over the period.)

Results obtained for the panel regressions

The result of model (1), designed to detect the effects of Laws 10,637/2002 and 10,833/2003 on the collection of the contributions provided for in these laws, presented an adjusted R² in the order of 79.42% and a Durbin-Watson statistic close to 2, indicating a good fit of the model (Table 6). The results show that the total input factor used in the production process (represented by the EST variable) does not seem to be an intrinsic attribute in determining the PIS and COFINS collection, since it did not present statistical significance. Possibly, its significance was captured through the variable for purchases of electric energy for resale and for the use of connection facilities. The same happened with the sales expenses variable (DESPV), whose coefficient was not significant. It should be noted that the model omitted the variable DCONT because of exact collinearity. The sign presented by the DEP variable was contrary to expectations, indicating that companies in the electricity sector did not benefit from the right to deduct depreciation credits. This result can be explained by a change subsequent to Laws 10,637/2002 and 10,833/2003, under which the right to deduct credits on depreciation became valid only for investments made from 2005 onwards. The DESPA variable presented the expected result, indicating that for each R$ 1.00 of expenses with telephone services, general maintenance and the electricity used by the companies themselves, the lower the value of the tax collected, given the possibility of deducting 9.25% of the total amount. Therefore, the credits calculated on the factors of production specified in Article 3 of Laws 10,637/2002 and 10,833/2003 were, for the most part, not deducted by the electric power sector, a result corroborated by the positive and significant coefficient of the dummy variable D04. That is, after the enactment of Law 10,833/2003, effective as from 2004, there was a significant 113% increase in the PIS and COFINS collected from Brazilian companies in the electric energy sector. It should be noted that the variable D03 was not significant, demonstrating that the institution of Law 10,637/2002 did not have an impact on the PIS and COFINS collection, which can be explained by the "maturation" period of a given law on an agent. Another interesting result is that the value added variable (VA) presented a positive coefficient, significant at 5%, indicating that each increase in VA raises the PIS and COFINS collection by 6.52159e-08, corroborating the results of many studies in the tax area according to which companies that aggregate more than 39.45% of taxes and margin in their cost of production were adversely affected by the new non-cumulative system. In this sense, as companies in the electric power sector added slightly more than 62%, the result presented by the VA variable was to be expected.
Finally, the GDP variable presented satisfactory results, both with respect to the significance of the estimate and to the sign of the parameter, an indication that, in times of economic growth, there is a tendency for the taxes collected by the companies, and consequently government tax revenue, to increase. (The Durbin-Watson statistic was 1.7396.) Table 7 presents the results of the estimation of the impact of the tax changes and of economic regulation on the supply generation of Brazilian companies in the electricity sector. The model designed to detect the effects of Laws 10,637/2002 and 10,833/2003 on the fixing of electric energy tariffs presented an adjusted R² in the order of 81.76%, indicating a high adjustment of the model. However, the Durbin-Watson statistic was only 0.68, possibly due to the low correlation between the CTB3 variable and the dependent variable TAX and to possible multicollinearity problems among the AT, INV, CTA, CTB1 and CTB variables that composed the final model, which were not detected in the simple correlation analysis. From the correlation analysis presented in the previous section and from the results presented in Table 6, it can be observed that the variables that form the value of the electric energy tariffs (Portion A, the non-manageable costs of the company, and Portion B, the manageable costs) do not appear to be an intrinsic attribute of stimulus in the generation of supply, or at least a relevant factor for the fixing of the electric energy tariff, since they did not present statistical significance. The same rationale applies to the size of the company (AT) and to the total investments made by the company annually (INV). On the other hand, the companies' value added variable presented a significant and positive coefficient, demonstrating that the price of residential tariffs increases as companies add more value. This fact may eventually be an indication of barriers to entry: the companies in the electricity sector analyzed in this work have high costs, yet they can achieve high profitability given the high tariff prices, thus preventing the entry of other companies that cannot afford the high costs. It is also observed that, in addition to taxes on income, the PIS and COFINS collection is also a determining factor in the fixing of the electric energy tariff and, even though the significance of the parameter was at the 10% level, it can be inferred that the higher the value of the tax collected, the higher the tariff price. In Brazil, both the result presented by the VA variable and the result presented by the PISCOFINS variable corroborate a long-standing discussion: the high Brazilian tax burden.
The macroeconomic variables (GDP, income and the Brazilian population) presented quite curious results regarding the signs of the estimated parameters. It is expected that the larger a country's Gross Domestic Product, the greater its consumption of electric energy; in the same way, the greater the population growth and the income of this population, the greater the consumption of electric energy. Following this reasoning, the coefficients of the GDP, population and income variables were expected to be inversely proportional to the price of the electric energy tariff, since the larger the quantities produced and sold of a product, the greater the possibility of diluting the fixed costs of the company and, consequently, the lower the sale price. In this sense, only the population variable did not present the expected coefficient, indicating that, at the 1% significance level, with each increase in the Brazilian population, the electric energy tariff increases by 2.38612e-08 reais. It should be emphasized that a more detailed analysis should be done to qualify these results. The exchange rate variable, in turn, was not significant for the fixing of the electricity tariff price. As for the dummy variables, these presented positive and significant coefficients in relation to the dependent variable TAX, indicating that, after the period of changes in the tax legislation and after a period of regulatory consolidation, there was a 2.8% increase in the tariff price after 2003 and an 8.1% increase after 2004. In this sense, it can be affirmed that changes in fiscal policies, especially tax policies, and in the economic regulation of a given sector have a significant impact on a country's economy. These results also corroborate the results presented in the previous model: a 113% increase in the PIS and COFINS collection after 2004 was verified, indicating that the right to deduct credits on some factors of production was not obtained by companies in the electric energy sector, thus raising their tax burden. The direct consequence of this result was the increase in the average electric energy tariff charged to residential consumers, especially after 2004. That is, after the increase in the COFINS rate from 3% to 7.6%, and given the concessionaires' right to revise their tariffs when costs, including taxes, increase, there was an 8.1% increase in the price of electric power, higher than the increase in other production costs. As a consequence, companies partly passed the burden of the tax increase on to their final products; in this way, part of the increase in tax rates was borne by producers and part by consumers. Therefore, it can be inferred that the increase in electric energy tariffs during the study period was mainly due to the increase in the tax burden and to regulatory factors, since the factors of production were not significant in the model. The result of the variable Q (quantity of electric power produced and commercialized) reinforces this statement, since it presented a negative coefficient, significant at 1%, indicating that for every 1 kWh of electricity commercialized there is a decrease of 1.46504e-08 in the price of the electricity tariff. This is indicative of economies of scale: the greater the production, given a fixed production structure, the greater the possibility of diluting production and maintenance costs and the lower the product price.
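As a worked reminder of how such percentages read off a log-lin specification (standard econometrics, not a derivation from the paper's tables): the percentage effect of a dummy with estimated coefficient β̂ on the level of the dependent variable is

\[
\%\Delta = 100\left(e^{\hat\beta} - 1\right),
\]

so the reported 113% increase in collection corresponds to β̂ = ln(2.13) ≈ 0.756, and the 8.1% tariff increase to β̂ = ln(1.081) ≈ 0.078.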
As in model 1, the variable DCONT was omitted because of exact collinearity. Therefore, the results of the preliminary analysis (the descriptive analysis of the data), as well as the results presented by the panel models, partially corroborate the theory presented by Gremaud et al. (2004) and Pindyck and Rubinfeld (2004). According to these authors, with the imposition of taxes on imperfect markets, as is the case of the sample companies, formed by oligopolistic and monopolistic firms, there is an increase in the production costs of these industries, a rise in consumer prices and, consequently, a reduction in the aggregate supply of the sector, regardless of whether there is a reduction or increase in the number of companies in the market. They further argue that the cost of the tax burden will fall partly on the consumer and partly on the producer, varying according to the shapes of the demand and supply curves and, in particular, the elasticities of supply and demand.

Conclusions

In general, it was verified that the tax changes instituted by Law 10,833/2003 significantly affected Brazilian publicly traded companies in the electricity sector, in view of the 113% increase in PIS and COFINS collection since 2004, indicating that the 153% increase in the PIS and COFINS rates was not offset by the credits calculated on the production factors allowed by the legislation. The direct consequence of this result was the increase in the average electric energy tariff charged to residential consumers, especially after 2004. The results showed a considerable increase in the average electricity tariff, in the order of 2.8% after 2003 and 8.1% after 2004. In this sense, after the increase in COFINS from 3% to 7.6%, and given the concessionaires' right to revise their tariffs when costs, including taxes, increase, there was an increase in the price of energy higher than the increase in production costs, since no statistical significance was found in the coefficients of the production factors in the model presented. No significant change in the collection of these contributions was found after the institution of Law 10,637/2002. The main contribution of this work lies in the importance of its results for the understanding of the potential effects of public policies on industrial segments. It should be emphasized, however, that the results found in this work should be weighted by the limitations that surround them. In the first place, a dummy variable for each segment of energy activity in the sector could have been generated and taken into account in the estimates, but since the sample consisted of only 10 companies this would have been practically impossible, because several degrees of freedom, important for the significance tests of the parameters, would be lost; the alternative would be to carry out separate analyses of each segment with a larger number of companies. Second, there was the omission by the model of the dummy variable for share control, which is extremely important for the analysis of this work. These are, therefore, suggestions for adjustments in future research.
Another suggestion would be to verify how much of the resources raised by the government (since there was a significant increase in federal tax collection) is reinvested in companies of the electric power sector and in society, through investment, work, healthcare and education, and whether the objectives of fiscal policy are being pursued by Brazilian policymakers in the face of so many changes in tax and regulatory legislation.

Table 1 - Brazilian regulatory models from the point of view of the restructuring of the electric sector. Table 2 - Study sample. Table 3 - Relationship of independent variables and theoretical expectations (model 1) (DESPA: administrative expenses; DESPV: sales expenses; EST: stock; DEP: depreciation expense; Q: total amount of electric energy produced and sold; DCONT: public/private control dummy). Table 4 - Descriptive analysis of variables. Table 5 - Average per company of the average tariff charged to the residential consumer over the period. Table 6 - Coefficients obtained by the estimation of the Fixed Effects model - PISCOFINS. Table 7 - Coefficients obtained by the estimation of the Fixed Effects model - TAX. Note: *** significant at 1%; ** significant at 5%; * significant at 10%. Values in parentheses refer to standard errors. Source: research results.
Droplet formation in expanding nuclear matter: a system-size dependent study

Cluster production is investigated in central collisions of Ca+Ca, Ni+Ni, 96Zr+96Zr, 96Ru+96Ru, Xe+CsI and Au+Au reactions at 0.4A GeV incident energy. We find that the multiplicity of clusters with charge Z > 2 grows quadratically with the system's total charge and is associated with a mid-rapidity source with increasing transverse velocity fluctuations.

In energetic central heavy ion collisions it is generally assumed that, after going through an early stage of hot and compressed nuclear matter, the system undergoes a substantial expansion causing local cooling before freezing out. At beam energies above 10A GeV the current picture is that the hot system, while cooling, passes from a phase involving at least partially deconfined quarks and gluons into a purely confined hadronic phase. At energies below 1A GeV the hot phase is still predominantly a nucleonic gas which, however, in the expansion phase can partially 'liquefy', i.e. clusterize, in analogy to the processes used in clusterization devices in atomic physics [1], although under less controlled conditions. In both energy regimes the aim is to determine basic parameters, such as the critical temperature T_c or the latent heat of the (first order) phase transition. Due to the finite size of the nuclear systems available in accelerator physics and the complexity of the dynamics of heavy ion reactions, convincing progress on this frontier has proven difficult. Concerning the liquid-to-gas transition, onsets of plateaus in caloric curves [2] have been interpreted [3] in ways that relate indirectly to first order transitions. More direct signatures [4,5], such as negative heat capacities [6], have been subjected to critical review [7]. Claims to the determination of T_c [8,9,10] vary in the proposed values and require model assumptions that seem to be in conflict with some experimental data [11]. Indications for the existence of a negative compressibility region in the nuclear phase diagram, leading to spinodal instabilities characterized by an enhancement of events with nearly equal-sized fragments, have been obtained [12]; the very small cross sections for this phenomenon were justified with microscopic simulations. One of the key assumptions in many works is that multifragmentation is a unique mechanism related directly to subcritical and/or critical phenomena [5]. In recent theoretical simulations using Nuclear Molecular Dynamics (NMD), together with a backtracing method, Bondorf et al. [13] have argued that two mechanisms coexist: a dynamical rupture producing spectator-type, relatively cold fragments, and a second process where nucleon-nucleon collisions generate the seeds for completely new fragment creation, coalescing nucleons which were initially far from each other in phase space. In this Letter we present data for central collisions of symmetric heavy ion systems at 0.4A GeV that strongly support these theoretical findings [13]. In order to better understand the finite-size problem, we have varied the system size from Ca+Ca (Z_sys = 40) to Au+Au (Z_sys = 158), investigating five systems of different size. We find that the multiplicity of heavy clusters with charge Z >= 3, when reduced to the same number of available charges, grows linearly with system size and is associated with a mid-rapidity source with increasing transverse momentum fluctuations.
This is in strong contrast to multifragmentation of quasi-projectiles in the Fermi energy regime or of spectator matter at SIS/BEVALAC energies (0.1-2A GeV), where a high degree of 'universality' [14,15] was observed, with, among others, an apparent system-size independence. As parallel studies of the same systems [16] have shown an increasing degree of stopping as well as an increasing generation of flow when passing from light to heavy systems, we can conclude that the increased flow leads to a cooling process favouring gradual 'liquefaction'. This is a non-trivial finding: from earlier experimental studies [17] a suppression of heavy fragment production in systems with strong collective expansion was inferred, and theoretical works [18] have predicted that strong flow gradients would prevent coagulation. A similar behavior is seen in two-particle correlations, where the effective radii of homogeneity are diminished by flow [19]. The data were taken at the SIS accelerator of GSI-Darmstadt using various heavy ion beams of 0.4A GeV and the large acceptance FOPI detector [20,21]. In the experiments involving the systems 40Ca+40Ca, 96Ru+96Ru, 96Zr+96Zr, and 197Au+197Au, particle tracking and energy loss determinations were done using two drift chambers, the CDC (covering laboratory polar angles between 35° and 135°) and the Helitron (9°−26°), both located inside a superconducting solenoid operated at a magnetic field of 0.6 T. A set of scintillator arrays, Plastic Wall (7°−30°), Zero Degree Detector (1.4°−7°), and Barrel (42°−120°), allowed the measurement of the time of flight and, below 30°, also the energy loss. All subdetector systems have full azimuthal coverage. Use of the CDC and Helitron allowed the identification of pions, as well as good isotope separation for hydrogen and helium clusters in a large part of momentum space. The identification of heavier clusters (Z ≥ 3), by nuclear charge only, was restricted to the polar angles covered by the Plastic Wall and the Zero Degree Detector. In a second setup the Helitron was replaced by an array of gas ionization chambers [20], the PARABOLA, allowing charge identification of heavier clusters up to nuclear charge Z = 12. The systems 58Ni+58Ni, 129Xe+CsI and 197Au+197Au were studied in this experiment. The data for Au on Au, measured with both setups, were found to be in excellent agreement. Further details on the detector resolution and performance can be found in [20,21]. Collision centrality selection was obtained by binning distributions of the ratio, Erat [22], of total transverse and longitudinal kinetic energies. In terms of the scaled impact parameter, b^(0) = b/b_max, we choose the same centrality for all the systems: b^(0) < 0.15. We take b_max = 1.15(A_P^{1/3} + A_T^{1/3}) fm as the effective sharp radius and estimate b from the measured differential cross sections for the Erat distribution using a geometrical sharp-cut approximation. In this energy regime Erat selections show better impact parameter resolution for the most central collisions than multiplicity selections [22,23] and do not imply a priori a chemical bias. Autocorrelations in the high transverse momentum population, which would be caused by the selection of high Erat values, are avoided by not including the particle of interest in the selection criterion. The analysis of the particle spectra involves some interpolations and extrapolations (30% in the worst case) to fill the gaps in the measured data.
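The geometrical sharp-cut centrality estimate just described is easy to sketch in code. This is our own minimal illustration, assuming an Erat histogram as input; the function and argument names are ours, not from the paper.

```python
import numpy as np

def scaled_impact_parameter(erat_values, weights, sigma_reaction, a_proj, a_targ):
    """Sharp-cut estimate of b^(0) = b/b_max from an Erat distribution.

    Events are ranked by Erat (most central = largest Erat); the integrated
    cross section of all events above a given Erat is equated to pi*b^2.
    sigma_reaction is the total measured reaction cross section in fm^2.
    """
    order = np.argsort(erat_values)[::-1]             # most central first
    frac = np.cumsum(weights[order]) / weights.sum()  # running event fraction
    b_sorted = np.sqrt(frac * sigma_reaction / np.pi)
    b_max = 1.15 * (a_proj ** (1 / 3) + a_targ ** (1 / 3))  # fm
    return b_sorted[np.argsort(order)] / b_max        # b^(0), original order
```

The most central bin used in this analysis then corresponds to keeping events with b^(0) < 0.15.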
The two-dimensional method used for these interpolations and extrapolations has been extensively tested with theoretical data generated with the transport code IQMD [28], applying apparatus filters. Details will be published elsewhere [29]. Since the present study is restricted to central collisions of symmetric systems, we require reflection symmetry in the center of momentum (c.o.m.) and use azimuthally averaged data. Choosing the c.o.m. as reference frame and orienting the z-axis in the beam direction, the two remaining dimensions are characterized by the longitudinal rapidity y ≡ y_z, given by exp(2y) = (1+β_z)/(1−β_z), and the transverse component of the four-velocity u, given by u_t = β_t γ. Following common notation, β is the velocity in units of the light velocity and γ = 1/√(1−β²). Later we shall also use the transverse rapidity, y_x, which is defined by replacing β_z by β_x in the expression for the longitudinal rapidity. The x-axis is laboratory-fixed and hence randomly oriented relative to the reaction plane, i.e. we average over deviations from axial symmetry. Throughout we use scaled units y^(0) = y/y_p and u_t^(0) = u_t/u_p, with u_p = β_p γ_p, the index p referring to the incident projectile in the c.o.m.. An example of a reconstructed distribution for emitted Li ions in central collisions of Au on Au is shown in Fig. 1; the lower part of the figure illustrates the result of the extrapolation to 4π using the two-dimensional fit method. Turning now to the presentation of results, we show in Fig. 2 charged particle multiplicity distributions as a function of nuclear charge Z. The data for four out of the six measured systems are plotted. As the surface of nuclei has a finite thickness, one expects some degree of transparency even in collisions with perfect geometrical overlap. To minimize such 'corona' effects, we show midrapidity data, |y^(0)| < 0.5. As observed earlier [22], we find that the heavy cluster (Z > 2) multiplicities, M_hc, decrease exponentially with the nuclear charge, i.e. M_hc(Z) ∼ exp(−c*_hc Z); however, the slope parameter c*_hc is seen to vary with the system size (see Fig. 2 and Table 1): heavy cluster production is relatively enhanced in the heavier systems. In the present context we refrain from calling heavy clusters (hc), Z > 2, 'intermediate mass fragments' (IMF), because in this energy regime they are, actually, the heaviest fragments ('droplets') in central collisions. Charge balances show unambiguously that there is no heavy remnant (Z > Z_sys/6) with a sizeable (> 1%) probability (that one might be tempted to call 'liquid'). The slope parameters, obtained for the range Z = 3−6, are listed in Table 1 both for the mid-rapidity data, c*_hc, and the 4π data, c_hc. For Ca+Ca reliable data beyond Z = 4 could not be obtained. The value c_hc = 1.224 ± 0.043 for Au+Au can be compared with our earlier [22] value of 1.170 ± 0.018, which was obtained from a fit over a larger Z range. Although a different method to extrapolate to 4π was used, the main reason for the modest deviation can be traced to the somewhat higher centrality achieved with the present setup compared with the 'PHASE I' setup used in our earlier work. If one were to interpret the slopes as being an indicator of a global freeze-out temperature of systems in chemical equilibrium, the qualitative conclusion would be that smaller systems appear to be hotter. We recall that all systems are studied at the same incident energy and for the same centrality, b^(0) = b/b_max < 0.15.
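The kinematic variables defined above translate directly into code. The following is a small helper of our own (argument names are hypothetical) computing the scaled rapidity and scaled transverse four-velocity from the velocity components:

```python
import numpy as np

def scaled_kinematics(beta_z, beta_t, y_proj, u_proj):
    """Scaled rapidity y^(0) and transverse four-velocity u_t^(0).

    beta_z, beta_t : longitudinal and transverse velocities (units of c)
    y_proj, u_proj : projectile rapidity and u_p = beta_p * gamma_p
                     in the c.o.m. frame
    """
    y = 0.5 * np.log((1 + beta_z) / (1 - beta_z))  # exp(2y) = (1+b)/(1-b)
    gamma = 1.0 / np.sqrt(1 - beta_z**2 - beta_t**2)
    u_t = beta_t * gamma                           # transverse four-velocity
    return y / y_proj, u_t / u_proj                # y^(0), u_t^(0)
```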
However, in complex reactions leading to many outgoing channels with an apparently simple (exponential) statistical distribution, integrated charge distributions give limited information and therefore do not allow one to draw convincing conclusions on the possible emergence of a final state in equilibrium. More details of the mechanism at work are seen in Fig. 3, where we compare the evolution of the longitudinal and the transverse rapidity distributions (left and right panels, respectively) with system size for the 'gas' (protons), which is prevalent for the light (surface-dominated) systems, and the 'droplets', which appear in increasing numbers at mid-rapidity as the system size increases (lower left panel), accompanied by a broadening of the transverse velocities (lower right panel). We plot the smoothened data emphasizing the gradual evolution with system size. The statistical errors are smaller than the symbol sizes, while the absolute systematic errors are 10%. Relative errors are expected to be smaller by a factor of two or more. Note that all yields in Fig. 3 are reduced to a constant system size of 100 nuclear charges by multiplying with (100/Z_sys), where Z_sys is the total system charge. Closer inspection of the system-size evolutions in these reduced scales reveals two other noteworthy features: 1) The 'missing' proton vapour in the heaviest system, which served as source of the developing clusters, is limited to smaller transverse rapidities, |y_x^(0)| ≤ 0.7; see the upper right panel and note the enlarged abscissa scale in the lower right panel. 2) Except for the Ca+Ca system, heavy cluster production is system-size independent around longitudinal rapidities |y^(0)| ≥ 0.8; see the lower left panel. This latter point is not trivial, although 'universal' features in the partition of excited spectator matter are well established [14]. In general, however, the collisions investigated in ref. [14] were more peripheral and 'spectators' could be clearly identified as relatively narrow peaks in the longitudinal rapidity distributions near projectile (or target) rapidity. In the present cases of high centrality, spectator matter, if any, is not readily isolated in the measured rapidity distribution, except for the two-peak structures in the Ca+Ca reaction. But even in that case the evolution towards mid-rapidity is continuous. When we select more peripheral collisions, 'spectator' peaks appear in all systems and we observe [24,25] at this energy the same universal features as in ref. [14]. A way to summarize the observations and at the same time confront them with theoretical simulations is presented in Fig. 4. Starting with the lower left panel, we show the system-size dependence of the rapidity-integrated multiplicity, M_hc, of heavy clusters. Although we plot again reduced yields (per 100 incoming protons), we observe a remarkable linear increase: the least squares fitted straight line follows the data with an accuracy of 4% (as mentioned earlier, the relative accuracy of the data points is expected to be better than the indicated 10% systematic error bars). This means that the system-size dependence of M_hc has a term proportional to Z²_sys or A²_sys (the system mass squared). There is a small irregularity associated with the two data points for Zr+Zr (Z_sys = 80) and Ru+Ru (Z_sys = 88), which represent systems with the same total mass (A_sys = 192), suggesting that besides the size dependence there is also an isospin dependence of heavy cluster production.
The effect is, however, at the limit of the error margins and therefore will not be discussed any further in the present work. Besides the A²_sys term in M_hc(A_sys), a second piece of information comes from looking at multiplicities, M*_hc, confined to the midrapidity interval |y^(0)| < 0.5 (see the lower left panel of Fig. 4): the restricted data run parallel to the integrated data, showing that the quadratic term is associated virtually entirely with the midrapidity region. For comparison we also show the same midrapidity data before acceptance corrections. While the corrections are important, and enhance the effect, it is clear that they are not creating the effect. In the midrapidity region the reduced cluster production is about a factor 5.5 higher for Au+Au than for Ca+Ca. We note that for the lightest system, Z_sys = 40 is still large compared to the most abundant (Z = 3) heavy cluster. Of course the linear trends cannot go on indefinitely, as the number of heavy clusters per hundred protons cannot exceed 33 by definition. We are far from this trivial limit, however. We have also reduced M*_hc by dividing by the sum of charges accumulated at midrapidity (|y^(0)| < 0.5), Z_midy, rather than by the total system charge, Z_sys. In this case the straight line fitted to the data (not plotted, to avoid overloading the Figure) passes very close to the origin of the axes: the relationship M*_hc × 100/Z_midy = 0.0342 Z_midy reproduces the six data points with an accuracy of 6%. This means that, well within our systematic errors, we can say that all of the mid-rapidity heavy cluster production rises with the square of the 'participant' nucleon number. The associated observation of increasingly broad transverse rapidity distributions as the system size is increased, shown in Fig. 3, is summarized in the upper left panel of Fig. 4, where the variances var(y_x^(0)) of these distributions (taken for data within |y_x^(0)| < 1) are plotted and also seen to rise linearly with the system size, although with an offset. The observables M_hc, M*_hc and var(y_x^(0)) are also listed in Table 1. In a global thermal equilibrium picture the variance of rapidity (velocity) distributions is a measure of the temperature and hence, in this scenario, one would conclude that the heaviest system is the hottest system, in contradiction to our earlier conjecture, inferred from the observed partitions, that the lightest system is the hottest. A way out of this contradiction is to introduce radial flow: generated during the expansion, it is accompanied by a local cooling. This mechanism is seen to be stronger for bigger systems. Radial flow, assessed by modelling hydrodynamic expansion [26], or treated phenomenologically [22], has been introduced by our Collaboration with some success in describing the deviations of momentum-space distributions from a global thermal scenario, notably the fragment mass dependences.
Fig. 4. Summary of heavy cluster data measured with FOPI (left panels) and calculated using IQMD (right panels). Lower panels: average multiplicities of heavy clusters per 100 incoming protons (i.e. M_hc × 100/Z_sys) as a function of the system charge Z_sys. In decreasing order the data represent rapidity-integrated (4π) data (open triangles), data confined to the midrapidity interval |y^(0)| < 0.5 (full diamonds) and, in the FOPI case, data limited to the acceptance of the apparatus, i.e. before acceptance corrections (open circles). Upper panels: variance of the transverse rapidity distributions for Li ions versus system charge. All straight lines are linear least squares fits to the respective data. Note that the IQMD multiplicities are multiplied by a factor three. The meaning of the marked square symbols in the IQMD data is explained in the text.
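The linear fits summarized in Fig. 4 are straightforward to reproduce. A sketch with our own function name follows; no data values are assumed, only the reduction and fitting procedure described above:

```python
import numpy as np

def reduced_yield_fit(z_sys, m_hc):
    """Straight-line fit of the reduced multiplicity M_hc * 100 / Z_sys.

    A linear rise of the reduced yield with Z_sys means M_hc itself carries
    a term quadratic in the system charge.  Dividing instead by the charge
    accumulated at midrapidity, Z_midy, the paper's fit passes through the
    origin: M*_hc * 100 / Z_midy = 0.0342 * Z_midy.
    """
    z_sys, m_hc = np.asarray(z_sys), np.asarray(m_hc)
    reduced = 100.0 * m_hc / z_sys
    slope, offset = np.polyfit(z_sys, reduced, 1)
    scatter = np.max(np.abs(reduced - (slope * z_sys + offset)) / reduced)
    return slope, offset, scatter  # scatter is ~4% for the 4pi data above
```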
But some inconsistency in accounting for the cluster yields [22] remained. This difficulty could be due to the failure of correctly accounting for non-equilibrium effects caused by the coronae of the nuclei and, more generally, by partial transparency [16]. In principle, such non-equilibrium effects can be handled by simulation codes based on transport theory [27]. We have used the code IQMD [28], based on Quantum Molecular Dynamics [30], QMD, to see if we could reproduce the features of our data. Clusters are identified after a reaction time of 200 fm/c using the minimum spanning tree algorithm in configuration space with a coalescence radius of 3 fm. For each system and energy we have generated 50000 IQMD events over the complete impact parameter range, which were subsequently sorted using the Erat criterion to obtain an event class with comparable centrality to that selected in the experiment. The reduced heavy cluster multiplicities (4π integrated or at mid-rapidity) versus system size are plotted in the lower right panel of Fig. 4. As in our earlier studies [22], we find that IQMD, as well as other realizations of QMD [31], underestimates cluster production: the yields in the Figure are multiplied by a factor of three. However, qualitatively, the experimental size dependence is reproduced, including the dominant effect of mid-rapidity emissions in accounting for the linear rise. The associated rise of the variance of the transverse rapidity with system size is reproduced almost quantitatively. In this context it is useful to realize that the nuclear stopping phenomenon, as quantified by the shape of proton and deuteron longitudinal rapidity distributions, is rather well described by IQMD [32] at incident energies around 400A MeV. For heavy clusters the simulations also reproduce the yields near projectile (target) rapidity: the underestimation concerns mid-rapidity emissions and might be a general deficiency of semiclassical approaches such as IQMD, which tend to converge towards Boltzmann statistics after many single nucleon collisions (leading to mid-rapidity population) rather than conserving the initial Fermi distribution of nucleons. To shed some light on the cluster creation mechanism, we have performed, for Au+Au, a calculation where the elementary nucleon-nucleon cross sections were raised by a factor two. The results (full squares marked '2σ' in the right panels) show an increased transverse rapidity variance, as one would expect from the increase of the elementary collision frequency, but also a rise of the heavy cluster multiplicity, shown for the interval |y^(0)| < 0.5, supporting our observation that cluster production is directly correlated with flow. The introduction of more copious binary interactions increases the adiabaticity (and hence the cooling effect) of the expansion in a twofold way: 1) the local equilibration is faster and 2) the expansion is slower (since the diffusion time is larger). Clearly, we have found evidence for the 'second process' suggested in [13], where nucleon-nucleon interactions act as seeds for self-organization leading to new clusters.
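The minimum-spanning-tree cluster identification used in the IQMD analysis above reduces to finding connected components under a distance cut. A minimal, self-contained sketch (our own implementation via union-find; the real code operates on the full IQMD phase-space output):

```python
import numpy as np

def mst_clusters(positions, r_coal=3.0):
    """Minimum-spanning-tree cluster finding in configuration space.

    Nucleons closer than the coalescence radius r_coal (fm) are linked;
    connected components are the clusters.
    """
    positions = np.asarray(positions)
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < r_coal:
                parent[find(i)] = find(j)

    return [find(i) for i in range(n)]  # equal labels = one cluster
```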
A key question is whether cluster production is sensitive to relevant features in the nuclear phase diagram, in particular to the existence and location of a critical point and of liquid-vapor coexistence curves. When using the IQMD code with its two available momentum-dependent equations of state, EOS, as input (a so-called 'stiff' one with an incompressibility K = 380 MeV around saturation density, or alternatively a 'soft' EOS with K = 200 MeV), we find that cluster production is indeed sensitive to the EOS. While all calculations presented so far in Fig. 4 were done with the soft EOS, one calculation, again for Au+Au, was performed assuming a stiff EOS: see the open squares (marked 'HM') in the right panels of Fig. 4. The variance var(y_x^(0)) is little changed, i.e. it is caused primarily by the nucleon-nucleon collision frequency, which is only indirectly affected by switching to another (mean field) EOS. Cluster production in the interval |y^(0)| < 0.5, however, is lowered when assuming a stiff EOS. It is too early to draw specific conclusions from this potentially interesting finding. What is missing, so far, is the underlying phase diagram implied by IQMD in its various options. Recently it was shown [33] with codes based on Antisymmetrized Molecular Dynamics (AMD) that caloric curves can be predicted in controlled scenarios (fixed volume or pressure). This presents an important and necessary link between microscopic models for equilibrium thermodynamics (and nuclear structure as well) and the complex dynamic situation found in heavy ion collisions. In summary, cluster production has been investigated in central collisions of Ca+Ca, Ni+Ni, 96Zr+96Zr, 96Ru+96Ru, Xe+CsI and Au+Au reactions at 0.4A GeV incident energy. We find that the multiplicity of clusters with charge Z ≥ 3, when reduced to the same number of available charges, grows linearly with system size and is associated with a mid-rapidity source with increasing transverse velocity fluctuations. An increase by about a factor of 5.5 is observed in the mid-rapidity region between the lightest system (Ca+Ca) and the heaviest one (Au+Au). The results, as well as simulations using Quantum Molecular Dynamics, suggest a collision process where droplets are created in an expanding, gradually cooling, nucleon gas. Expansion dynamics, collective radial flow and cluster formation are closely linked, resulting from the combined action of nucleon-nucleon scatterings and the mean fields.
5,345.2
2004-05-18T00:00:00.000
[ "Physics" ]
A Curvature Inequality Characterizing Totally Geodesic Null Hypersurfaces A well-known application of the Raychaudhuri equation shows that, under geodesic completeness, a null hypersurface on which the Ricci curvature is nonnegative in the null direction is necessarily totally geodesic. The proof of this fact is based on a direct analysis of a differential inequality. In this paper we show, without assuming geodesic completeness, that an inequality involving the squared null mean curvature and the Ricci curvature on a compact three-dimensional null hypersurface also implies that it is totally geodesic. The proof is completely different from the above, since Riemannian tools are used on the null hypersurface thanks to the rigging technique. Introduction Traditionally, the hypersurfaces of a Lorentzian manifold are classified depending on their causal character into three types: timelike, spacelike or null. Spacelike and timelike hypersurfaces are well studied compared to null hypersurfaces because they inherit a nondegenerate metric from the ambient space. Conversely, a null hypersurface inherits a degenerate metric and its orthogonal direction is tangent to the hypersurface itself, so there is no well-defined canonical projection. This issue can be partially overcome in different ways [7,17]. One of them is to fix (arbitrarily) a transverse vector field to the null hypersurface, which is called a rigging [11]. From it we can construct several geometric objects, such as the null second fundamental form and the null mean curvature. Moreover, we can construct a Riemannian metric on the null hypersurface which can be used as an auxiliary tool to prove some results [3,12,13,14]. There are several examples of inequalities involving the squared mean curvature of a hypersurface. For example, Willmore proved that for a compact surface Σ immersed in Euclidean three-space the squared mean curvature satisfies ∫_Σ H² dA ≥ 4π, (1.1) with equality precisely for round spheres. In the Lorentzian setting, the starting point is the following. Proposition 1.1. Let L be a null hypersurface of a Lorentzian manifold satisfying the null convergence condition, that is, Ric(v, v) ≥ 0 for every null vector v. If the null geodesics of L are future complete, then the null mean curvature of L satisfies H ≤ 0. Recall that the sign for the null mean curvature used in this paper is the opposite of that in [9]. Moreover, its sign depends on the chosen time orientation. As a corollary of the above proposition, if the null geodesics of L are also past complete, then the null mean curvature is zero, which jointly with the null convergence condition implies that L is totally geodesic. As pointed out by Galloway, proposition 1.1 is the most rudimentary form of Hawking's black hole area theorem, since it says that cross sections of L do not decrease in area as one moves towards the future. Clearly, the completeness of the null geodesics is necessary to obtain the conclusion of proposition 1.1. Moreover, it does not hold even if there is a complete vector field in L whose integral curves are null pregeodesics, as is shown in example 3.6. In this paper we generalize proposition 1.1 to the case of a compact null hypersurface L in a four-dimensional Lorentzian manifold without assuming the completeness of the null geodesics of L (recall that compactness does not imply geodesic completeness in this context, see example 3.12). Our main result is theorem 3.11, where an inequality between the Ricci curvature and the square of the null mean curvature similar to the inequality (1.1) is assumed instead of the null convergence condition. The proof of proposition 1.1 is a quite simple argument based on the analysis of a differential inequality (see proposition 3.2). However, the proof of theorem 3.11 uses as its main tool the rigging technique introduced in [11], which we will briefly review in the next section. Preliminaries Let (M, g) be an n-dimensional (n ≥ 3) connected Lorentzian manifold.
A hypersurface L is null if the induced metric tensor is degenerate on L. In this case the radical of L, which is given by TL ∩ TL^⊥, is a one-dimensional null distribution and all other directions in L are spacelike. On the other hand, a rigging always exists for a null hypersurface in a time-orientable Lorentzian manifold, since there are no timelike directions tangent to a null hypersurface. From now on we always suppose that a rigging ζ has been fixed for the null hypersurface. From it we can define the rigged vector field ξ as the unique null vector field in L such that g(ξ, ζ) = 1, the null transverse vector field N, and the screen distribution S_p = ζ_p^⊥ ∩ T_pL for all p ∈ L. Taking into account that g(∇_U ξ, ξ) = 0 for all U ∈ X(L) and the decompositions T_pM = T_pL ⊕ span(N_p) and T_pL = S_p ⊕ span(ξ_p), we can write, for all U, V ∈ X(L), ∇_U V = ∇^L_U V + B(U, V) N, (2.1) and ∇_U ξ = −A*(U) − τ(U) ξ, (2.2) where ∇^L is the induced connection, B is the null second fundamental form, A*: TL → S is an endomorphism field called the shape operator, and τ is a one-form. It is easy to check that A* is self-adjoint and A*(ξ) = 0. This last fact has two important consequences. The first one is that A* is diagonalizable, since S is spacelike. The second one is that ∇_ξ ξ = −τ(ξ) ξ, so the integral curves of ξ are null pregeodesics. The null mean curvature of L is given by H_p = Σ_{i=1,...,n−2} g(A*(e_i), e_i) for all p ∈ L, being {e_1, ..., e_{n−2}} an orthonormal basis of S_p. Since A* is diagonalizable, it holds the inequality (1/(n−2)) H² ≤ tr((A*)²) = ||A*||². (2.3) If A* is identically zero, then L is totally geodesic, and if A* is proportional to the identity map, then L is totally umbilical. The following proposition shows how the geometric objects constructed on a null hypersurface from a rigging are transformed under a rigging change. Proposition 2.3. ([19]) If ζ' = ΦN + X_0 + aξ, with X_0 ∈ Γ(S) and Φ, a ∈ C^∞(L), is another rigging for L, then the induced objects transform accordingly for all U, V ∈ X(L); in particular ξ' = (1/Φ) ξ and H' = (1/Φ) H. Therefore, although A* depends on the chosen rigging, being totally geodesic, being totally umbilical or having zero null mean curvature does not depend on the choice. Moreover, a change of the sign of the rigging only produces a change of the sign of the rigged vector field and of the null mean curvature. The curvature tensor R^L of the induced connection ∇^L is related to the curvature tensor R of the ambient by the Gauss-Codazzi equations [7]; for our purposes we only need one of them, which we refer to as equation (2.4). We can also define a Riemannian metric on L, called the rigged metric, as g̃ = g + ω ⊗ ω, (2.5) where ω = i*(α), i: L → M is the canonical inclusion and α is the one-form metrically equivalent to ζ. In this manner, once we have fixed a rigging for a null hypersurface, two connections appear: the induced connection ∇^L and the Levi-Civita connection ∇̃ of g̃, which is called the rigged connection. The difference tensor D^L(U, V) = ∇^L_U V − ∇̃_U V can be computed in terms of L_ξ g̃, where L_ξ is the Lie derivative along ξ (proposition 2.4). If E is a timelike and unitary vector field, then we can construct the Riemannian metric on M given by g_E(X, Y) = g(X, Y) + 2 g(X, E) g(Y, E). (2.6) This is a classical construction which has been used in several situations. For example, in [8] it is used to prove, under suitable conditions, the existence of periodic timelike geodesics in a compact Lorentzian manifold, and in [22] to show that a compact Lorentzian manifold furnished with a timelike Killing vector field is geodesically complete, although the Riemannian metric (2.6) is not explicitly written in this last paper. On the other hand, in [18], to prove the geodesic completeness of certain Lorentzian manifolds, the authors construct a Riemannian metric h on the whole Lorentzian manifold from two null vector fields Z, V ∈ X(M) such that g(Z, V) = 1.
For this, they define h by imposing suitable conditions on Z and V. If we have a null hypersurface L in (M, g), then it inherits a Riemannian metric from the Riemannian manifold (M, h). However, this kind of construction seems to be too rigid, since we need a timelike and unitary vector field, whereas to construct the rigged metric (2.5) we only need a transverse vector field to the null hypersurface. Another fact supporting this is that if L is totally umbilical in (M, g), then it is not totally umbilical in (M, h) even if E is parallel [20, Proposition 6.1]. Nevertheless, some characterizations of totally geodesic null hypersurfaces can be proven using the Riemannian metric (2.6) [20, Theorem 6.5]. As an immediate consequence of proposition 2.4, the difference tensor D^L satisfies several useful identities, which we collect in corollary 2.6. If we take W = ξ and U = X, V = Y ∈ Γ(S) in proposition 2.4, then we get an explicit expression for g̃(∇̃_X ξ, Y) for all X, Y ∈ Γ(S). In particular, we have g̃(∇̃_X ξ, X) = −g(A*(X), X) for all X ∈ Γ(S), (2.7) and therefore diṽ ξ = −H. (2.8) The rigged metric g̃ allows us to use Riemannian tools in the study of null hypersurfaces. One of them is the so-called Bochner technique, which has as its starting point the general equation Ric(Z, Z) = div(∇_Z Z) − div((div Z) Z) + (div Z)² − tr(S²), (2.9) where S is the endomorphism given by S(U) = ∇_U Z. This equation holds for any vector field Z in a semi-Riemannian manifold. Moreover, since the first two terms on the right-hand side are divergences, integrating over a compact manifold gives ∫ Ric(Z, Z) dV = ∫ ((div Z)² − tr(S²)) dV. (2.10) On the other hand, the Raychaudhuri equation states that for a null hypersurface it holds (see for example [9,10]) ξ(H) = −τ(ξ) H + Ric(ξ, ξ) + tr((A*)²). (2.11) This equation can be derived from (2.9). Indeed, fixed p ∈ L, we extend ξ to a null vector field in a neighbourhood p ∈ θ ⊂ M, which we still call ξ. If we take a basis {e_1, ..., e_{n−2}} of S_p, then we can construct a pseudo-orthonormal basis of T_pM by adding ξ_p and N_p. Computing the terms of (2.9) in this basis and taking Z = ξ, we obtain equation (2.11). Main Results An immediate but important consequence of equation (2.11) is stated in the following well-known proposition, which we will use later. Proposition 3.1. Let L be a null hypersurface with identically zero null mean curvature. If Ric(ξ, ξ) ≥ 0, then L is totally geodesic. We also need a slightly different formulation of proposition 1.1. We include the proof to make the paper self-contained. Proposition 3.2. Let L be a null hypersurface such that cH² ≤ Ric(ξ, ξ) for some c ≥ 0. If the null geodesics of L are future complete, then H ≤ 0. Proof. Observe first that, by proposition 2.3, both H² and Ric(ξ, ξ) are multiplied by 1/Φ² under a rigging change, so the condition cH² ≤ Ric(ξ, ξ) does not depend on the chosen rigging. Along a future complete, affinely parametrized null geodesic γ of L, equation (2.11) together with the inequality (2.3) and the hypothesis gives (d/dt) H_{γ(t)} ≥ (c + 1/(n−2)) H²_{γ(t)}. If H_p > 0, then the above inequality implies that H_{γ(t)} is not defined for all t > 0, which is a contradiction. If the null geodesics of L are also past complete, then H = 0 and proposition 3.1 ensures that L is totally geodesic. Corollary 3.3. Let L be a null hypersurface such that cH² ≤ Ric(ξ, ξ) for some c ≥ 0. If there is a rigging for L such that ξ is complete and τ(ξ) = 0, then L is totally geodesic. We can drop the geodesic completeness assumption in proposition 3.2 if we suppose that the null hypersurface is compact. Theorem 3.4. Let L be a compact null hypersurface such that cH² ≤ Ric(ξ, ξ) for some c ≥ 0. If there is a rigging for L such that τ(ξ) is constant, then L is totally geodesic. Proof. The vector field ξ is complete since L is compact. If τ(ξ) = 0, then we can apply corollary 3.3. Suppose for example that τ(ξ) < 0. If α: R → L is an integral curve of ξ, then we can check that γ(t) = α(−(1/τ(ξ)) ln(−τ(ξ)t + 1)) is a null geodesic, which is defined for 1/τ(ξ) < t. Therefore, from proposition 3.2 we get that H ≤ 0. From equation (2.8) we have ∫_L H dg̃ = 0, so H = 0 and proposition 3.1 ensures that L is totally geodesic. As was pointed out in the proof of proposition 3.2, the condition cH² ≤ Ric(ξ, ξ) does not depend on the chosen rigging. However, from proposition 2.3 we see that the condition τ(ξ) constant does depend on the chosen rigging. A null hypersurface L is called a Killing horizon if there is a Killing vector field K such that K_p is null and tangent to L for all p ∈ L. It is a remarkable fact that for a Killing horizon in a spacetime which satisfies the null dominant energy condition there is a null section ξ such that τ(ξ) is constant (see [15] for a proof of this using the rigging technique). Corollary 3.5. Let L be a compact orientable null hypersurface in a time-orientable Einstein Lorentzian manifold.
If there is a rigging for L such that τ(ξ) is constant, then L is totally geodesic. Theorem 3.4 does not hold if L is not assumed compact. The following is an example of this, and a counterexample to corollary 3.3 when either the completeness of ξ or the condition τ(ξ) = 0 is dropped. Example 3.6. Take (F, g_R) a Riemannian manifold and M = R² × F furnished with the Lorentzian metric given by g = 2e^s dt ds + e^{2s} g_R. We can check that L = {t_0} × R × F is a null hypersurface for any fixed t_0 ∈ R. If we consider the rigging ζ = e^{−s} ∂_t, then the rigged vector field is ξ = ∂_s and the screen distribution is given by S = TF. If X, Y ∈ Γ(S), then g(∇_X ξ, Y) = g(X, Y). Therefore A*(X) = −X for all X ∈ Γ(S), and so L is totally umbilical with null mean curvature H = 2 − n. Since g(ξ, ∂_t) = e^s, from the Koszul formula we get −τ(ξ) e^s = g(∇_ξ ξ, ∂_t) = ξ g(ξ, ∂_t) = e^s, and thus τ(ξ) = −1. Moreover, from equation (2.11) we have Ric(ξ, ξ) = 0. Thus the inequality in theorem 3.4 holds with c = 0, but L is not totally geodesic. If we consider the rigging ζ' = e^{−2s} ∂_t, then its rigged vector field is ξ' = e^s ξ and it holds τ(ξ') = 0, but we cannot apply corollary 3.3 because ξ' is not a complete vector field. Now we try to generalize theorem 3.4 in some manner such that τ(ξ) does not need to be constant. From now on we suppose that L has dimension three and is oriented. Fix a g̃-orthonormal local positive basis {X, Y, ξ}. We can define a function which measures the integrability of S: in fact, dω(X, Y) does not depend on the chosen local positive orthonormal basis, so it determines a well-defined global function μ ∈ C^∞(L), locally given by μ = 2 dω(X, Y). (3.1) If μ vanishes everywhere, then S is integrable. Lemma 3.7. In the above situation it holds D^L(X, ξ) = −(1/2) μ Y − τ(X) ξ and D^L(Y, ξ) = (1/2) μ X − τ(Y) ξ. In particular, g̃(D^L(X, ξ), ξ) = −τ(X) and g̃(D^L(Y, ξ), ξ) = −τ(Y). Proof. From proposition 2.4 we have g̃(D^L(X, ξ), X) = g̃(D^L(Y, ξ), Y) = 0. Using again proposition 2.4 we get g̃(D^L(X, ξ), Y) = −(1/2) μ. On the other hand, taking into account equation (2.1) and that ξ is g̃-unitary, we obtain the component along ξ, so D^L(X, ξ) = −(1/2) μ Y − τ(X) ξ. In the same way we get D^L(Y, ξ) = (1/2) μ X − τ(Y) ξ. The final assertion follows from equation (2.2). Now we relate the Ricci curvature of ξ in the ambient Lorentzian manifold and in (L, g̃). For this, recall the general formula (3.2) relating the curvature tensors associated to the two connections ∇^L and ∇̃ on L, valid for all U, V, W ∈ X(L). Theorem 3.8. Let L be an oriented null hypersurface in a four-dimensional Lorentzian manifold. Then Ric(ξ, ξ) admits an explicit expression in terms of the rigged metric, the shape operator A*, the one-form τ and the function μ. Proof. Using equations (2.4) and (3.2) we decompose Ric(ξ, ξ) into four terms, which we compute one by one. The first is handled using corollary 2.6, lemma 3.7 and equation (2.7). For the second, since X, ∇̃_ξ ξ ∈ Γ(S), corollary 2.6 applies, and from the definition of g̃, lemma 3.7 and proposition 2.4 it simplifies further. The third term, g̃(D^L(ξ, ξ), X), vanishes due to corollary 2.6. For the last term we use lemma 3.7. Putting everything together, and computing in the same way the terms relative to Y, we arrive at the formula of the statement. We call S: TL → TL the endomorphism field given by S(U) = ∇̃_U ξ and S* its adjoint with respect to g̃, i.e. S*: TL → TL is the unique endomorphism such that g̃(S(U), V) = g̃(U, S*(V)) for all U, V ∈ X(L). We say that the rigged vector field ξ is orthogonally normal if g̃(S(X), S(X)) = g̃(S*⊥(X), S*⊥(X)) for all X ∈ Γ(S), being S*⊥(X) the g̃-orthogonal projection of S*(X) onto S. For example, if the screen distribution S is integrable or if L is totally umbilical, then ξ is orthogonally normal.
If we have a null hypersurface in a Lorentzian manifold of arbitrary dimension and the rigged vector field ξ is orthogonally normal, then we can also get an explicit expression for Ric(ξ, ξ), similar (but not identical) to the one given in theorem 3.8 [11, Corollary 4.5]. This has been used to prove that, under some additional conditions, a totally umbilical null hypersurface is in fact totally geodesic [11, Theorem 5.1]. In [2] some obstructions to the existence of a compact null hypersurface under a causality hypothesis are given. Concretely, it is proven that in a distinguishing, strongly causal, stably causal or globally hyperbolic Lorentzian manifold there are no compact null submanifolds. However, an example of a causal Lorentzian manifold which contains a compact null hypersurface is shown. We can give an obstruction to the existence of a compact null hypersurface in a causal Lorentzian manifold by adding a curvature condition. Corollary 3.9. Let L be an oriented null hypersurface in a four-dimensional causal Lorentzian manifold. If there is a conformal rigging for L such that τ(X) = 0 for all X ∈ Γ(S) and 0 < Ric(ξ, ξ) + (1/2) μ², then L is not compact. If the rigging ζ is closed, i.e. the one-form α is closed, then the endomorphism S coincides with −A* [11], but in general it carries more information than A*. Lemma 3.10. Let L be an orientable null hypersurface in a four-dimensional Lorentzian manifold. Then tr(S²) can be computed explicitly in terms of ||A*||², μ and τ. Proof. Take {X, Y, ξ} a local g̃-orthonormal positive basis. From equation (2.7) and lemma 3.7 we compute g̃(∇̃_X ξ, Y) and, analogously, g̃(∇̃_Y ξ, X). On the other hand, using equation (3.1) it is straightforward to check the remaining terms, and we get the lemma. Now we are ready to prove the main result of this paper. Theorem 3.11. Let L be a compact oriented null hypersurface in a four-dimensional Lorentzian manifold. Suppose that there is a rigging for L such that τ(ξ) is constant through the integral curves of ξ. If (1/2) H² ≤ Ric(ξ, ξ), then L is totally geodesic. Proof. We integrate with respect to g̃ the formula for Ric(ξ, ξ) in theorem 3.8. Taking into account equation (2.10) we get an integral identity, which we call (3.4). From equation (2.7) we know that tr(S) = −H, and using lemma 3.10 the identity reduces further. Using equation (2.8), we have diṽ(τ(ξ) ξ) = ξ(τ(ξ)) − τ(ξ) H, but ξ(τ(ξ)) = 0 because τ(ξ) is constant through the integral curves of ξ, and so ∫_L τ(ξ) H dg̃ = 0. (3.5) The inequality (2.3) and the hypothesis of the theorem, applied to (3.4), give us Ric(ξ, ξ) = (1/2) H² and ||A*||² = (1/2) H². Taking into account equation (2.11), the null mean curvature satisfies the Riccati differential equation ξ(H) = −τ(ξ) H + H². This equation can be explicitly solved. In fact, if we take γ: R → L an integral curve of ξ with γ(0) = p ∈ L, write τ = τ(ξ) and suppose H_p ≠ 0, then H_{γ(t)} = τ / (1 − (1 − τ/H_p) e^{τ t}), which is defined for all t ∈ R only if 0 < τ H_p. Therefore we deduce that 0 ≤ τ(ξ) H on the whole L. (3.6) From equations (3.5) and (3.6) we get that τ(ξ) H = 0; where τ(ξ) = 0 the Riccati equation reduces to ξ(H) = H², whose complete solutions vanish identically, so necessarily H = 0. Proposition 3.1 ensures that L is totally geodesic. Example 3.12. There is a Lorentzian metric on R × S¹ × S¹ × S¹, with natural coordinates (t, x, y, z), for which the hypersurface L = {0} × S¹ × S¹ × S¹ is null and totally geodesic. The vector field ζ = −∂_t is a rigging for it with associated rigged vector field ξ = ∂_x and τ(ξ) = −g(∇_ξ ξ, ζ) = g(∇_{∂_x} ∂_x, ∂_t) = cos y, which is constant through the integral curves of ξ. It is worth noting that the null geodesics of L are not complete. Indeed, the null geodesic through a point p_0 = (0, x_0, y_0, z_0) ∈ L is (0, x_0 − (1/cos y_0) ln(1 + t cos y_0), y_0, z_0) for −1/cos y_0 < t, if cos y_0 > 0; (0, x_0 + t, y_0, z_0) for t ∈ R, if cos y_0 = 0; and (0, x_0 − (1/cos y_0) ln(1 + t cos y_0), y_0, z_0) for t < −1/cos y_0, if cos y_0 < 0. Observe that we cannot prove theorem 3.11 in the same way as theorem 3.4.
This is because, given an integral curve of ξ, a reparametrization of it will be a future or past complete geodesic depending on the sign of τ(ξ), which can change from one integral curve of ξ to another. On the other hand, under the conditions of theorem 3.11, from equation (2.11) and inequality (2.3) we have that H'(t) ≥ H(t)² − τ H(t), where H(t) = H(γ(t)), γ is an integral curve of ξ and τ = τ(ξ) is some constant. From this differential inequality alone we cannot deduce that H = 0. In fact, there are complete solutions apart from the constant zero function or the constant function H = τ, as for example H(t) = τ/2 + (τ/3) cos((τ/3) t). A null hypersurface in a three-dimensional Lorentzian manifold is always totally umbilical. In this case we can prove the following. Theorem 3.13. Let L be an oriented compact null hypersurface in a three-dimensional Lorentzian manifold. Suppose that there is a rigging for L such that τ(ξ) is constant through the integral curves of ξ. If cH² ≤ Ric(ξ, ξ) for certain c > 0, then L is totally geodesic. Proof. The existence of the rigged vector field ξ implies that the Euler-Poincaré characteristic of L is zero. Therefore, the Gauss-Bonnet theorem ensures that ∫_L Ric̃(ξ, ξ) dg̃ = 0.
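As a consistency check on the non-constant complete solution mentioned above (a sketch under the differential inequality as reconstructed here, taking τ > 0): the function H(t) = τ/2 + (τ/3)cos((τ/3)t) is defined for all t, is neither 0 nor τ, and satisfies the inequality strictly.

```latex
% Check that H(t) = tau/2 + (tau/3) cos(tau t / 3), with tau > 0,
% satisfies H'(t) >= H(t)^2 - tau H(t) for every t.
\[
H'(t) = -\tfrac{\tau^{2}}{9}\,\sin\!\Big(\tfrac{\tau}{3}t\Big)
      \;\ge\; -\tfrac{\tau^{2}}{9},
\qquad
\tfrac{\tau}{6} \;\le\; H(t) \;\le\; \tfrac{5\tau}{6}.
\]
\[
H(t)\big(H(t)-\tau\big)
 \;\le\; \max_{h\in[\tau/6,\,5\tau/6]} h\,(h-\tau)
 \;=\; -\tfrac{5\tau^{2}}{36}
 \;<\; -\tfrac{4\tau^{2}}{36}
 \;=\; -\tfrac{\tau^{2}}{9}
 \;\le\; H'(t).
\]
```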
5,096
2023-01-17T00:00:00.000
[ "Mathematics" ]
A semi-implicit relaxed Douglas-Rachford algorithm (sir-DR) for ptychography Alternating projection based methods, such as ePIE and rPIE, have been used widely in ptychography. However, they only work well if there are adequate measurements (diffraction patterns); in the case of sparse data (i.e. fewer measurements) alternating projection underperforms and might not even converge. In this paper, we propose semi-implicit relaxed Douglas-Rachford (sir-DR), an accelerated iterative method, to solve the classical ptychography problem. Using both simulated and experimental data, we show that sir-DR improves the convergence speed and the reconstruction quality relative to ePIE and rPIE. Furthermore, in certain cases when sparsity is high, sir-DR converges while ePIE and rPIE fail. To facilitate others in using the algorithm, we post the Matlab source code of sir-DR on a public website (www.physics.ucla.edu/research/imaging/sir-DR). We anticipate that this algorithm can be generally applied to the ptychographic reconstruction of a wide range of samples in the physical and biological sciences. Introduction Since the first experimental demonstration in 1999 [1], coherent diffraction imaging (CDI), which directly inverts far-field diffraction patterns into high-resolution images, has been a rapidly growing field due to its broad potential applications in the physical and biological sciences [2][3][4][5]. A fundamental problem of CDI is the phase problem: the measured diffraction pattern contains only the magnitude, while the phase information is lost. In the original demonstration of CDI, phase retrieval was performed by measuring the diffraction pattern from a finite object. If the diffraction intensity is sufficiently oversampled [6], the phase information can be directly retrieved by using iterative algorithms [7][8][9][10][11]. Ptychography, a powerful scanning CDI method, relieves the finite-object requirement by performing a 2D scan of an extended sample relative to an illumination probe and measuring multiple diffraction patterns, with each illumination probe overlapping with its neighboring ones [12,13]. The overlap of the illumination probes not only reduces the oversampling requirement, but also improves the convergence speed of the iterative process. By taking advantage of ever-improving computational power and advanced detectors, ptychography has been applied to study a wide range of samples using both coherent x-rays and electrons [2,5,[14][15][16][17][18][19][20][21][22][23]. More recently, a time-domain ptychography method was developed by introducing a time-invariant overlapping region as a constraint, allowing the reconstruction of a time series of complex exit waves of dynamic processes with robust and fast convergence [24]. The iterative algorithms used for ptychographic reconstruction can be generally divided into three classes: i) the conjugate gradient (CG) [30,34], ii) the extended ptychography iterative engine (ePIE) [31], and iii) the difference map (DM) [13], the last two of which are closely related. ePIE is an alternating projection algorithm, while DM is built on both projection and reflection, which is believed to provide momentum that speeds up the convergence. The relaxed averaged alternating reflection (RAAR) [8] is a relaxation of DM and has been shown to be effective in phase retrieval [39]. All algorithms except ePIE take a global approach, i.e. they use the entire collection of diffraction patterns to perform an update of the probe and object in each iteration.
In contrast, ePIE goes through the measured data sequentially to refine the probe and object. However, ePIE has a slow convergence due to a step size restriction, which may cause divergence if violated. To fix this issue, rPIE was proposed, in which regularization is used for stability [40]. The reported results also show that rPIE obtains a larger field of view (FOV) than ePIE. In this paper, we show that DM and RAAR can also be implemented locally, similarly to ePIE. We then apply non-convex optimization tools to improve the robustness, convergence and FOV of the ptychographic reconstruction. The proposed algorithm incorporates two techniques. The first modifies the update of the probe and object as the algorithm iterates via a semi-implicit method or proximal mapping. The second technique is the implementation of relaxed Douglas-Rachford, a generalized version of DM and RAAR, on the local scale. The proposed algorithm Given N measured diffraction patterns at N positions, the ptychographic algorithm aims to find a 2D object O and probe P that satisfy the overlap constraint and the Fourier magnitude constraint |F(P O_n)| = √(I_n) for n = 1, ..., N, (1) where O_n is the object at the n-th scan position. Here we omit the spatial variables for simple notation and use the notations P, O_n and I_n for both continuous and discrete cases. The absolute value, multiplication, division, conjugate, and square root operators are applied element-wise on P, O_n and I_n, which represent 2D complex matrices of the same size in the discrete case. We can argue that O_n is the object restricted to a sub-domain Ω_n. The overlap constraint can be mathematically interpreted as O_n(r) = O(r + r_n), where {r_n}, n = 1, ..., N, are displacement vectors. In short notation, we write O_n = O|_{Ω_n} to imply that the object is restricted to the sub-domain Ω_n. The equivalent constraint in the discrete case is the agreement between the sub-matrix of O and O_n. We find a better representation of the problem by introducing the exit wave variable Ψ = PO. Denoting the Fourier measurement constraint set by T and the overlap object constraint set by S, we write the ptychography problem in a minimization fashion, min_Ψ i_S(Ψ) + i_T(Ψ), (3) where i_S(Ψ) and i_T(Ψ) are the indicator functions of the sets S and T respectively, defined to be 0 on the set and +∞ outside it. To solve Eq. (3), an alternating projection method is proposed: at each iteration, we select a random position n and update Ψ_n by successive projections onto the two sets. The Frobenius norm is used in this minimization problem and in the entire paper unless a different norm is specified. The minimization of Eq. (6) is difficult due to instability. One way to solve this non-convex problem is to minimize each variable while fixing the other ones. This approach is unstable because of the division, so a cut-off method is used to avoid divergence and zero-division. A modification is recommended by adding a penalizing least square error term (i.e. a regularizer). The idea of regularization appears throughout the literature, for example in proximal algorithms [41,42]. Eq. (9) is more reliable to solve than Eq. (6), but is still very expensive since the variables are coupled. P^{k+1} and O_n^{k+1} can be solved via a Backward-Euler system (Eq. (10)) derived from Eq. (9). ePIE proposes a simple approximation by linearizing the system so that it can be solved sequentially. The system is solved by alternating direction methods (ADM) [43]. The remaining part is to choose appropriate step sizes t and s to ensure stability. ePIE suggests t = β_O/‖P^k‖²_max and s = β_P/‖O^{k+1}‖²_max, where β_O, β_P ∈ (0, 1] are normalized step sizes. The max matrix norm is the element-wise norm, taking the maximum of the absolute values of all elements in the matrix.
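For concreteness, here is a minimal sketch of the projection onto the Fourier magnitude set T that underlies the exit-wave updates discussed below. The function name and the eps safeguard are ours, not from the paper:

```python
import numpy as np

def project_magnitude(psi, sqrt_I, eps=1e-12):
    """Project an exit-wave estimate onto the Fourier magnitude set T.

    Replaces the modulus of F(psi) by the measured sqrt(I) while keeping
    the current phase, then transforms back to real space.
    """
    F = np.fft.fft2(psi)
    F = sqrt_I * F / (np.abs(F) + eps)  # keep phase, impose modulus
    return np.fft.ifft2(F)
```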
The final version of ePIE is given by the sequential updates O_n^{k+1} = O_n^k + (β_O conj(P^k)/‖P^k‖²_max)(Ψ'_n − P^k O_n^k) and P^{k+1} = P^k + (β_P conj(O_n^{k+1})/‖O_n^{k+1}‖²_max)(Ψ'_n − P^k O_n^k), (11) where Ψ'_n denotes the exit wave after enforcing the Fourier magnitude constraint. We will exploit the structure of Eq. (10) to give a better approximation. A semi-implicit algorithm We replace the minimization of Eq. (9) by two steps. Step 1: O_n^{k+1} is obtained by minimizing the regularized objective with the probe fixed at P^k. Step 2: P^{k+1} = argmin of the same objective with the object fixed at O_n^{k+1}. This results in a better approximation to the linearized system of Eq. (11) and is simpler than the Backward-Euler system of Eq. (10). In this uncoupled system, we can derive a closed-form solution for each sub-problem. By choosing the step sizes s and t as in the ePIE algorithm and normalizing the parameters β_O and β_P, we obtain an object update that can be explained as a weighted average between the previous update O_n^k and Ψ_n^k conj(P^k). The object update is similar to the rPIE algorithm when the latter is rewritten in the same form; the difference is that rPIE does not have the parameter β_O in front of the fraction, i.e. rPIE has a larger step size than sir-DR. This helps it converge faster but might also get it trapped in local minima. The regularization (weighted average) in sir-DR is more mathematically grounded and enhances the algorithm's stability. In the next section, we apply the Douglas-Rachford algorithm to solve for the exit wave Ψ. The relaxed Douglas-Rachford algorithm The Douglas-Rachford algorithm was originally proposed to solve the heat conduction problem [44]; it addresses a composite minimization problem min_x f(x) + g(x), and the iteration consists of x^{k+1} = x^k + prox_{tf}(2 prox_{tg}(x^k) − x^k) − prox_{tg}(x^k). Over the past decades, this accelerated convex optimization algorithm has been exhaustively studied in both theory and practice, with many applications [45][46][47][48][49][50]. Here we apply the algorithm to ptychographic phase retrieval. Note that the Douglas-Rachford algorithm reduces to the Difference Map (DM) when f = i_T and g = i_S are characteristic functions of the constraint sets T and S, respectively: Ψ^{k+1} = Ψ^k + Π_T(2Π_S(Ψ^k) − Ψ^k) − Π_S(Ψ^k). We realize that the reflection operator 2Π_S − I helps to accelerate the convergence in the convex case and to escape local minima in the non-convex case. However this momentum, caused by reflection, might be too large and can lead to over-fitting. Therefore, we relax the reflection by introducing the relaxation parameter σ ∈ [0, 1], replacing 2Π_S − I by (1 + σ)Π_S − σI. Since the experimental measurements are contaminated by noise, a direct projection onto the measurement constraint is not an appropriate approach. We thus relax the Fourier magnitude constraint by a least square penalty on the mismatch between |F(Ψ)| and √I. Recall that prox_{tf}(Ψ) then has a closed-form solution: in Fourier space it is the weighted average (1 − τ)F(Ψ) + τ √I F(Ψ)/|F(Ψ)|, where τ = t/(1 + t) ∈ (0, 1) exclusively is the normalized step size. Combining this result with DM, we obtain the relaxed Douglas-Rachford update. When we let β = 1 − τ and σ = 1, the update reduces to RAAR. Therefore, we show that relaxed Douglas-Rachford is a generalized version of RAAR. We now move to our main algorithm. The sir-DR algorithm Combining the semi-implicit algorithm and the relaxed Douglas-Rachford algorithm, we propose the sir-DR algorithm, shown in Fig. 1. In this algorithm, we only apply the semi-implicit method to O_n^k, while P^k can be updated with the Forward Euler (gradient descent) method. τ ∈ [0, 1] is chosen to be small; in most of our experiments we select τ ≈ 0.1, while the choice of σ depends on the specific problem. In many cases σ = 1 works very well (full reflection), but in some specific cases a large σ might cause divergence or a small recovered FOV. We decrease σ in these cases, for example to σ = 0.5. We choose β_O = 0.9 in most cases. β_P is chosen to be large at the beginning (β_P = 1) and decreases as a function of iteration. This adaptive step size has been introduced as a strategy for noise-robust Fourier ptychography [51].
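A local relaxed Douglas-Rachford step can be sketched by composing the pieces just described: the relaxed reflection and the closed-form magnitude proximal map. This is our own illustrative composition under those assumptions, not necessarily the exact sir-DR update of the paper:

```python
import numpy as np

def relaxed_dr_step(psi, sqrt_I, project_overlap, sigma=1.0, tau=0.1):
    """One relaxed Douglas-Rachford update for the exit wave (illustrative).

    project_overlap: projector onto the overlap set S (factorization into
    probe times object); sigma relaxes the reflection (sigma = 1 gives the
    full reflection used by DM); tau is the normalized step of the relaxed
    magnitude proximal map, so tau -> 1 recovers a hard projection onto T.
    """
    p_s = project_overlap(psi)
    reflected = (1.0 + sigma) * p_s - sigma * psi         # relaxed reflector
    F = np.fft.fft2(reflected)
    F = (1.0 - tau) * F + tau * sqrt_I * F / (np.abs(F) + 1e-12)
    return psi + np.fft.ifft2(F) - p_s                    # DR-type correction
```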
Reconstruction from simulated data To examine the sir-DR algorithm, we simulate a complex object of 128 × 128 pixels, with the cameraman and peppers images representing the amplitude and the phase, respectively (Fig. 2). A circular aperture with a radius of 50 pixels is chosen as the probe. We raster scan the aperture over the object with a step size of 35 pixels, resulting in 4x4 scan positions. The overlap is therefore 56.4%, the approximate lower limit for ePIE to work. Poisson noise is added to the diffraction patterns with a flux of 10^8 photons per scan position. We use R_noise, the relative error of the noisy diffraction patterns with respect to the noise-free ones, to quantify the noise level, where P_0 and O_0 denote the noise-free model and the L_{1,1} matrix norm represents the sum of the absolute values of all elements of a matrix. The above flux results in R_noise = 3.73%. As a baseline comparison, Fig. 3 shows that all three algorithms (ePIE, rPIE and sir-DR) correctly reconstruct the object in the ideal case, when the overlap between adjacent positions is high and the noise level is low. Next, we apply the three algorithms to the reconstruction of sparse data, which is centrally important to reducing computation time, data storage requirements and incident dose to the sample. We increase the scan step size to 50 pixels while keeping the same field of view, which reduces the number of diffraction patterns to 3 × 3. Consequently, the overlap is reduced to 39.1%. Not only is the overlap between adjacent positions low, but the total number of measurements is also small, creating a challenging data set for conventional ptychographic algorithms. Fig. 4 shows that sir-DR works well with sparse data, while ePIE and rPIE fail to reconstruct the object faithfully. Optical laser data As an initial test of sir-DR with experimental data, we collect diffraction patterns from a USAF resolution pattern using a green laser with a wavelength of 543 nm. The incident illumination is created by a 15 µm diameter pinhole. The pinhole is placed approximately 6 mm in front of the sample, creating an illumination wavefront on the sample plane that can be approximated by Fresnel propagation. The detector is positioned 26 cm downstream of the sample to collect far-field diffraction patterns. We raster scan across the sample with a step size of 50 µm and acquire 169 diffraction patterns. We perform a sparsity test by randomly choosing 85 diffraction patterns (50% density) and run ePIE, rPIE, and sir-DR on this subset with 300 iterations. If we define the probe diameter by where the intensity falls to 10% of its maximum, then the overlaps are 73% and 46.4% for the full and sparse sets, respectively. Fig. 5 shows that rPIE and sir-DR obtain a larger FOV than ePIE, as both use regularization. Furthermore, sir-DR removes noise more effectively and obtains a flatter background than ePIE and rPIE. We monitor the R-factor (relative error) between the computed and measured diffraction magnitudes to quantify the reconstructions; R_F is 16.94%, 13.95% and 13.28% for the ePIE, rPIE and sir-DR reconstructions, respectively.
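The overlap percentages quoted for the simulated scans follow from the area overlap of two equal circles; a short sketch of our own (assuming circular probes, as in the simulation) reproduces the 56.4% and 39.1% figures above:

```python
import numpy as np

def probe_overlap(radius, step):
    """Fractional area overlap of two equal circles with centers
    separated by `step` (same units as `radius`)."""
    if step >= 2 * radius:
        return 0.0
    lens = (2 * radius**2 * np.arccos(step / (2 * radius))
            - 0.5 * step * np.sqrt(4 * radius**2 - step**2))
    return lens / (np.pi * radius**2)

print(round(100 * probe_overlap(50, 35), 1))  # 56.4 -> 4x4 scan above
print(round(100 * probe_overlap(50, 50), 1))  # 39.1 -> 3x3 sparse scan
```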
Synchrotron radiation data To demonstrate the applicability of sir-DR to synchrotron radiation data, we reconstruct a ptychographic data set collected at the Advanced Light Source [16]. In this experiment, 710 eV soft x-rays are focused onto a sample using a zone plate and the far-field diffraction patterns are collected by a detector. A 2D scan consists of 7,500 positions, which span approximately 10 × 4 µm. The sample is a portion of a HeLa cell labeled with nanoparticles, which is supported on a graphene-oxide layer. Fig. 6 shows the ePIE reconstruction of the whole FOV of the sample. To compare the three algorithms, we choose a subdomain of a 4 × 4 µm region, consisting of 2,450 diffraction patterns. With the same assumption as above, the overlap is computed to be 79.5%. Fig. 7 shows the ePIE, rPIE, and sir-DR reconstructions; sir-DR obtains a better quality reconstruction than ePIE and rPIE, and both sir-DR and rPIE produce a larger FOV than ePIE. With 300 iterations, all three algorithms converge to images of good quality. When reducing the number of iterations to 100, we observe that sir-DR converges faster and reconstructs a larger FOV than ePIE. The individual nanoparticles, which serve as a resolution benchmark, are better resolved in the sir-DR reconstruction than in the ePIE and rPIE ones. Furthermore, the reconstruction by ePIE contains artifacts in the form of a faint square grid, which is removed by rPIE and sir-DR. We next perform a sparsity test by randomly picking 980 out of 2,450 diffraction patterns, i.e. a reduction of the data by 60%. The corresponding overlap of the sparse set is 50.8%. Fig. 8 shows the reconstructions by ePIE, rPIE, and sir-DR with 300 iterations. Both the ePIE and rPIE reconstructions exhibit noticeable degradation; in particular, the nanoparticles are not well resolved. The sir-DR reconstruction, in contrast, shows no noticeable artifacts or noise, and the individual nanoparticles are clearly visible. The quality of the sir-DR reconstruction with a 60% data reduction is still comparable to that of ePIE using all the diffraction patterns. Conclusion In this work, we have developed a fast and robust ptychographic algorithm, termed sir-DR. The algorithm relaxes Douglas-Rachford to improve robustness and applies a semi-implicit scheme (semi-Backward Euler) to solve for the object and to expand the reconstructed FOV. Using both simulated and experimental data, we have demonstrated that sir-DR outperforms ePIE and rPIE with sparse data. Being able to obtain good ptychographic reconstructions from sparse measurements, sir-DR can reduce the computation time, data storage requirements and radiation dose to the sample.
3,806.6
2019-06-05T00:00:00.000
[ "Physics", "Engineering" ]
Conductive Supramolecular Polymer Nanocomposites with Tunable Properties to Manipulate Cell Growth and Functions Synthetic bioactive nanocomposites show great promise in biomedicine for use in tissue growth and wound healing, with the potential for bioengineered skin substitutes. Hydrogen-bonded supramolecular polymers (3A-PCL) can be combined with graphite crystals to form graphite/3A-PCL composites with tunable physical properties. When used as a bioactive substrate for cell culture, graphite/3A-PCL composites have an extremely low cytotoxic activity on normal cells and a high structural stability in a medium with red blood cells. A series of in vitro studies demonstrated that the resulting composite substrates can efficiently interact with cell surfaces to promote the adhesion, migration, and proliferation of adherent cells, as well as a rapid wound-healing ability at a damaged cellular surface. Importantly, placing these substrates under an indirect current electric field at only 0.1 V leads to a marked acceleration in cell growth, a significant increase in total cell numbers, and a remarkable alteration in cell morphology. These results reveal a newly created system with great potential to provide an efficient route for the development of multifunctional bioactive substrates with unique electro-responsiveness to manipulate cell growth and functions. Introduction Polymer-based bioengineering approaches have risen to prominence in the biomedical field as potential materials for use in fabrication with a wide range of tissue engineering applications [1][2][3][4]. Several studies have shown that synthetic bioactive polymers with a tailorable structural composition, surface microstructure and wettability can substantially affect cellular response and growth through the addition of specific functional monomers and control of the monomer addition sequence during the polymerization process [5][6][7]. However, the development of synthetic bioactive polymers is frequently constrained by our limited understanding of how cell adhesion and proliferation are regulated by the extracellular matrix (ECM) [8][9][10][11]. Recently, supramolecular polymers produced via reversible non-covalent interactions have attracted attention owing to the unique mechanical properties of their matrices, such as their environmental stimuli-responsiveness [12,13], self-healing [14,15] and shape-memory behavior [16,17]. Specifically, supramolecular hydrogen-bonding moieties (or synthons) within the polymer that drive self-assembly behavior might be exploited to manipulate supramolecular polymer-cell junctions [18][19][20][21][22]. For example, Dankers and coworkers synthesized novel synthetic supramolecular polymers with quadruple hydrogen-bonding ureido-pyrimidinone (UPy) moieties that could spontaneously self-assemble into a membrane-like structure and improve the bioactivity of cell growth and proliferation, thereby achieving a biocompatible polymer [23,24]. Therefore, functional polymeric materials with supramolecular moieties and a strong hydrogen-bonding capability possess the critical factors required for the development of multifunctional tissue engineering scaffolds with tunable physical properties that can enhance the overall growth of cultured cells. Graphene is a one-atom-thick two-dimensional material with a hexagonal structure that provides distinct features, including a large surface area, excellent thermo-electrical conductivity, high mechanical strength and light transmittance [25][26][27].
In particular, conductive materials such as graphene can play an essential role in tissue regeneration by accelerating cell adhesion and migration during wound healing via electrical stimulation. Cells possess an innate electroactivity that enhances the overall efficiency of cellular wound healing after electrical stimulation [28][29][30][31][32]. Nevertheless, fully carbon-based graphene nanosheets have limitations. In particular, they are extremely hydrophobic, which makes it a challenge to interface them with biological systems and thus significantly inhibits cell growth and functions [28][29][30]. Given the hydrophobic nature of graphene, we speculate that introducing hydrogen-bonded supramolecular polymers into the graphene matrix could significantly improve the hydrophilicity of the composites, thereby improving the affinity between the composite substrate and cells [18,19]. We further propose that electrical stimulation, as an exogenous physiological stimulus, could enhance cell adhesion, proliferation, and differentiation on material substrates [33][34][35]. As one electrical-stimulation strategy, direct electrical stimulation (DES) involves direct interaction between a working electrode surface and components of the cellular electron transport chain, which may cause burn damage to living tissues [36]. In contrast, indirect electrical stimulation (IES) involves the transfer of electrons from a working electrode to the organism without direct interaction with the cellular environment, allowing safe stimulation to promote cellular growth [37]. Therefore, we speculated that combining hydrogen-bonded supramolecular polymers with graphene nanosheets for cell and tissue culture under IES may hold great potential as a high-performance conductive bioactive substrate for manipulating cells in engineered tissues. Recent studies in our laboratory demonstrated that supramolecular exfoliated graphene nanosheets with tunable physical properties could be obtained by controlling the amount of a three-arm adenine-end-capped polycaprolactone polymer (3A-PCL) [38]. This results from the strong tendency of the self-assembled lamellar and spherical nanostructures of 3A-PCL to adsorb onto the surface of the graphite crystals, which subsequently leads to the formation of exfoliated graphene nanosheets. The tailorable graphene-exfoliation level and controlled conductive performance of these graphite/3A-PCL composites inspired us to explore their ability as a conductive bioactive substrate for in vitro cell culture (Scheme 1). The objective of this work was to improve the surface bioactivity of conductive graphite/3A-PCL substrates so as to promote the adhesion, migration and proliferation of cells cultured on the substrates via IES at very low voltage. In this paper, we show that graphite/3A-PCL composites not only exhibit extremely low cytotoxic activity against normal cells and high structural stability in a red blood cell-containing medium, but also significantly enhance wound-healing and cell-growth rates. In addition, we show that graphite/3A-PCL substrates under an indirect-current electric field of only 0.1 V can rapidly and efficiently promote cell adhesion, spreading and proliferation, resulting in a substantial increase in total cell numbers and a significant alteration in cell morphology.
To the best of our knowledge, this is the first study demonstrating a conductive bioactive supramolecular substrate based on hydrogen-bonded adenine units and exfoliated graphene nanosheets that can efficiently control cell growth when exposed to IES. This newly created system has an advantageous combination of composite, amphipathic, conductive and bioactive characteristics with promise as a multifunctional, soft cell-culture scaffold for skin substitute bioconstructs and tissue-engineered regeneration.

Scheme 1. Graphical illustration for the development of supramolecular composites: self-assembled lamellar nanostructures are formed by hydrogen-bonded adenine units within the 3A-PCL macromer, which subsequently adsorb on the surface of exfoliated graphene nanosheets after blending with graphite in THF. The resulting composite is applicable as a conductive bioactive substrate for cell culture under indirect electrical stimulus.

Results and Discussion The molecular structure of the tri-adenine end-capped polycaprolactone macromer (3A-PCL) and its direct self-assembly into well-ordered lamellar structures in the solid state via noncovalent, self-complementary adenine-adenine (A-A) hydrogen-bonding interactions are shown in the upper part of Scheme 1. When 3A-PCL is blended directly with natural graphite in tetrahydrofuran (THF) under ultrasonic treatment, 3A-PCL efficiently self-assembles in THF to form a stable lamellar structure on the graphite surface. This promotes the exfoliation of the bulk graphite into graphene nanosheets with a variable number of layers. The number of layers depends on the amount of 3A-PCL incorporated into the graphite crystals (Figure 1), owing to strong noncovalent intermolecular interactions between the exfoliated graphene surface and the self-organized lamellar 3A-PCL nanostructures. The tunable layer number and physical properties of the graphite/3A-PCL nanosheets in the solid state were analyzed and discussed in detail in our previous work [38]. Here, we expand on that work, exploring the physical properties of graphite/3A-PCL composites and their potential cell-culture applications.

Physical Properties of Graphite/3A-PCL Composites To further extend our previous findings, we explored the effects of the self-assembled lamellar structures on the surface wettability of graphite/3A-PCL composites at 25 °C by measuring the water contact angle (WCA). Spin-coated commercial polycaprolactone (PCL; average molecular weight = 80,000 g/mol) and adenine-functionalized 3A-PCL thin films had WCA values of approximately 80° and 49°, respectively. This confirms that introducing adenine moieties at the end groups of the PCL oligomer increases the surface hydrophilicity of 3A-PCL (Figure 2a). The WCA values of the spin-coated graphite/3A-PCL thin films decreased gradually from 74° to 54° with increasing weight fraction of 3A-PCL, indicating that adjusting the 3A-PCL content within the composites not only significantly affected their surface hydrophilicity, but also effectively regulated their level of surface wettability. An ideal exfoliated graphene-based composite for engineering applications must have high electrical conductivity in the thin-film state. We investigated the electrical resistance of spin-coated graphite/3A-PCL films using a light-bulb conductivity apparatus at 25 °C. The light bulbs lit up instantly when placed on the 3/10 and 5/10 graphite/3A-PCL composite substrates, but not on the 1/10 graphite/3A-PCL composite (Figure 2b). This suggests that an increased proportion of graphite forms enough exfoliated graphene nanosheets to render the composite conductive overall. In addition, four-point probe resistance measurements at 25 °C and approximately 35% relative humidity showed similar trends for all composites (Figure 2b), further demonstrating that the 3A-PCL macromer promotes efficient exfoliation of graphite. The resulting graphene nanosheets show a substantial reduction in electrical resistance compared with pristine graphite. For example, the 3/10 and 5/10 graphite/3A-PCL composites had electrical resistances of 312 and 348 Ohm, respectively, approximately 1.5-2.0 times lower than pristine graphite (544 Ohm) and the 1/10 graphite/3A-PCL composite (665 Ohm). Taken together, the tailorable combination of surface hydrophilicity and electrical conductivity suggests that these composites have strong potential for electrically stimulated cell-culture applications.
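To make the quoted fold reductions concrete, here is a minimal Python sketch (the resistance values are the ones quoted above; the dictionary and variable names are ours, not from the paper's analysis):

```python
# Electrical resistances (Ohm) of spin-coated films, as quoted in the text.
resistance = {
    "pristine graphite": 544,
    "1/10 graphite/3A-PCL": 665,
    "3/10 graphite/3A-PCL": 312,
    "5/10 graphite/3A-PCL": 348,
}

baseline = resistance["pristine graphite"]
for sample, r in resistance.items():
    # Fold reduction relative to pristine graphite (>1 means more conductive).
    print(f"{sample}: {r} Ohm, {baseline / r:.2f}x lower than pristine graphite")
```

Running this gives about 1.74x for the 3/10 composite and 1.56x for the 5/10 composite versus pristine graphite (and roughly 1.9-2.1x versus the 1/10 composite), consistent with the 1.5-2.0 range stated above.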
Graphite/3A-PCL Composites for Cell Culture Applications We therefore further explored the structural stability and cytotoxic activity of the graphite/3A-PCL composites toward normal mouse embryonic fibroblasts (NIH/3T3 cells) and sheep red blood cells (SRBCs). As shown in Figure 3a, sample solutions at concentrations ranging from 0.01 µg/mL to 100 µg/mL had no significant effect on the viability of NIH/3T3 cells after 24 h of treatment in MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assays. Thus, the graphite/3A-PCL composites exerted a low cytotoxic effect on normal NIH/3T3 cells. More surprisingly, the SRBC hemolysis assay clearly demonstrated that the graphite/3A-PCL composites exhibit greater structural stability and biocompatibility with SRBCs than pristine graphite does (see Supporting Information, Figure S1). This might be attributable to the reversible A-A hydrogen-bonding interactions within the composite matrix (see Figure S2 for the explanation and results), which reduce the hemolytic effect of graphite and enable its potential use in vivo [39,40].
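The MTT readout above is conventionally reduced to percent viability by normalizing background-corrected absorbance to an untreated control. The paper does not spell out its normalization, so the following is a generic, hedged sketch with invented absorbance values:

```python
def percent_viability(a_sample: float, a_control: float, a_blank: float) -> float:
    """Standard MTT normalization: background-corrected absorbance of a
    treated well relative to the untreated control, in percent."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

# Hypothetical absorbances for one concentration point (not measured data).
print(f"{percent_viability(a_sample=0.82, a_control=0.85, a_blank=0.08):.1f}% viable")
```

A value near 100% at every concentration from 0.01 to 100 µg/mL is what underlies the "no significant effect on viability" conclusion above.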
To further evaluate the effects of the hydrogen-bonded adenine moieties and self-assembled structures on cell growth, NIH/3T3 cells were seeded on commercial PCL film as a control substrate and on spin-coated 3A-PCL and composite films, and incubated at 37 °C for various periods (24, 48 and 72 h). After 72 h of incubation, the average number of cells on pristine 3A-PCL (1.1 × 10⁶) was 1.5 times higher than on commercial hydrophobic PCL film (7.2 × 10⁵; Figure 3b), indicating that the adenine molecules on the end groups of PCL enhance surface hydrophilicity and thereby accelerate the growth of NIH/3T3 cells [19]. Interestingly, the spin-coated 3/10 graphite/3A-PCL substrate supported cell growth and numbers (1.0 × 10⁶) similar to pristine 3A-PCL, suggesting that the self-assembled structures of 3A-PCL can play a major role in facilitating the proliferation and survival of cells even in the presence of exfoliated graphene nanosheets. However, the 1/10 and 5/10 graphite/3A-PCL substrates yielded significantly lower cell numbers than the 3/10 graphite/3A-PCL substrate (decreasing to approximately 7.7 × 10⁵ in both), suggesting that the exfoliated graphene surfaces are optimally covered by 3A-PCL, giving the desired surface morphology, when the blending ratio of 3A-PCL and graphite is 3:10 [38]. Thus, a further decrease or increase in the 3A-PCL content of the composites apparently alters the surface roughness and wettability significantly, decreasing the total number of NIH/3T3 cells produced. These results confirm that tuning the 3A-PCL content of the composites allows efficient control of cellular growth and proliferation in adherent cells, perhaps owing to strong, multiple high-affinity interactions between NIH/3T3 cells and the hydrogen-bonded adenine groups of 3A-PCL, creating a cell culture platform with tailorable physical substrate properties [19]. To confirm the cell-growth results and assess the interactions at the interface between the adenine units, exfoliated nanosheets and the cell surface, confocal laser scanning microscopy (CLSM) was employed to observe the cell cytoskeleton (F-actin, green) and nuclei (bright blue) using phalloidin and 4′,6-diamidino-2-phenylindole (DAPI) staining, respectively. NIH/3T3 cells seeded and cultured for 24 h on pristine graphite or PCL films displayed an unhealthy cellular shape and insufficient cell density, and lacked the characteristic filamentous cytoskeletal network (Figure 3c). Thus, NIH/3T3 cells did not seem to attach to the hydrophobic graphite or PCL surfaces, leading to inhibited cell movement and little proliferation. In contrast, NIH/3T3 cells did adhere, spread, and grow well on 3A-PCL film and generated a large area of highly aligned fibroblast-like morphology with elongated dendritic filopodia, indicating that introducing the adenine moieties into the PCL structure effectively improved the binding affinity between the cell surface and the adenine-modified PCL substrate. This increased binding affinity in turn helped the cells align into clusters with adjacent cells via intercellular adhesions, further promoting cell migration and proliferation. Surprisingly, on the 3/10 graphite/3A-PCL film, NIH/3T3 cells exhibited a highly uniform cellular structure without a large area covered by filamentous cytoskeletal structures on the substrate.
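As a quick check of the fold changes quoted above, a small sketch (the counts are the means reported in the text; the comparison script is ours, not the authors' analysis):

```python
# Mean NIH/3T3 cell counts after 72 h of culture, from the text.
cells_72h = {
    "PCL": 7.2e5,
    "3A-PCL": 1.1e6,
    "3/10 graphite/3A-PCL": 1.0e6,
    "1/10 or 5/10 graphite/3A-PCL": 7.7e5,
}
for substrate, n in cells_72h.items():
    print(f"{substrate}: {n:.1e} cells, {n / cells_72h['PCL']:.2f}x vs PCL")
```

This reproduces the stated ~1.5x advantage of 3A-PCL over PCL and shows the 3/10 composite essentially matching pristine 3A-PCL.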
The restricted formation of cytoskeletal filamentous networks between neighboring cells was possibly due to the presence of exfoliated graphene nanosheets within the composites, which alter the mechanism of cell proliferation and induce a change in cellular behavior or characteristics [41,42]. While 3A-PCL can help maintain the rate of cell growth, these changes in cell behavior led to different morphological attributes in NIH/3T3 cells compared to cells cultured on the pristine 3A-PCL substrate. Thus, it appears that the combination of exfoliated graphene and bioactive 3A-PCL can create a surface with high cell affinity and tailorable effects on cell culture, and that the resulting composite substrate can be efficiently controlled to regulate the cellular functions involved in cell growth, proliferation, and survival. This has the potential to play a vital role in cell and tissue cultures [43].

Evaluation of the In Vitro Wound-Healing Activity on Conductive Bioactive Substrates To determine whether graphite/3A-PCL composites can significantly improve NIH/3T3 cell growth behavior, we evaluated the effects on cell attachment, migration, and proliferation using an in vitro scratch wound healing assay [44,45]. As shown in Figures 4 and 5a, after 24 h of culture, the wound closure of NIH/3T3 cells on pristine 3A-PCL reached up to 64.5 ± 1.4%, whereas cells on the control tissue culture polystyrene (TCPS) substrate showed only 56.5 ± 2.1% wound closure. We thus conclude that the 3A-PCL substrate can promote cell adhesion and enhance cell spreading and migration through specific interactions between NIH/3T3 cells and the hydrophilic adenine-functionalized 3A-PCL substrate, thereby accelerating cell proliferation and wound closure. After 48 h, NIH/3T3 cells on the 3A-PCL and TCPS substrates showed nearly complete wound closure, revealing that 3A-PCL can be used as a high-efficiency bioactive scaffold for cell and tissue culture applications. In contrast to pristine 3A-PCL, graphite substrates led to delayed wound closure, with the scratch-damaged NIH/3T3 monolayer showing only 38.5 ± 2.0% wound closure after 48 h. This suggests that the hydrophobicity of the graphite surface suppresses cell growth, migration, and cycle progression. However, with the incorporation of 3A-PCL into the graphite matrix, the resulting composite substrates significantly enhance the wound closure rate in NIH/3T3 cells.
For example, the wound closure efficiency of NIH/3T3 cells on the 3/10 and 5/10 graphite/3A-PCL substrates after 48 h of culture reached 83 ± 1.4% and 74 ± 2.8%, respectively. Overall, these findings demonstrate that adjusting the 3A-PCL content is critical to controlling the physical properties of the exfoliated graphene nanosheets [38], and that the resulting composites can be used as bioactive substrates to efficiently manipulate and regulate cell growth and wound healing ability, eventually achieving desirable cell performance in culture.

Assessment of Cell Growth on Conductive Bioactive Substrates with IES Given that graphite/3A-PCL composites have excellent electrical conductivity, we decided to further explore whether these newly developed substrates could enhance cell growth and manipulate cell morphology and shape under an indirect-current electric field [46][47][48]. The cell culture experiment with IES was performed at a low electric voltage of 0.1 V/mm (approximately 1~2 mA current intensity), a level that does not harm cell growth or otherwise negatively affect cells during culture [49,50]. After 24 h of culture under IES, the number of NIH/3T3 cells on the 3/10 graphite/3A-PCL substrate increased from 11.4 × 10⁴ to 15.7 × 10⁴, while little change in cell number was seen on the control TCPS or pristine 3A-PCL substrates with IES treatment (Figure 5b). These results suggest that the exfoliated graphene nanosheets present in the composites create an electric field-sensing medium that effectively stimulates the structural motion of the composite under low-voltage electric fields. This probably promotes increased interaction between the composite and cells at the interface, leading to a significant increase in total cell number. To explore how the graphite/3A-PCL substrates under IES treatment influence cell morphology, NIH/3T3 cells were cultured on pristine 3A-PCL or graphite/3A-PCL substrates for 24 h under 0.1 V and then observed under CLSM to detect changes in cell shape and morphology. The CLSM images showed no significant differences in cell morphology on the pristine 3A-PCL substrate before and after IES treatment (Figure 5c).
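The scratch-assay percentages and the IES-driven increase in cell number reported above follow standard definitions; a minimal sketch (the closure formula is the usual open-area normalization, not taken verbatim from the paper's methods, and the example areas are invented):

```python
def wound_closure_pct(area_initial: float, area_t: float) -> float:
    """Percent wound closure from open-wound areas (e.g., ImageJ measurements)."""
    return 100.0 * (area_initial - area_t) / area_initial

# Hypothetical open-wound areas (arbitrary units) giving ~64.5% closure at 24 h.
print(f"{wound_closure_pct(1.00, 0.355):.1f}% closure")

# IES-driven increase in NIH/3T3 number on the 3/10 composite, from the text.
print(f"{100 * (15.7e4 - 11.4e4) / 11.4e4:.0f}% more cells after 24 h under IES")
```

The second print gives roughly a 38% increase, which is the magnitude behind the significant rise in total cell number reported above.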
In contrast, the morphology of cells on the 3/10 graphite/3A-PCL substrate changed remarkably after IES treatment, with cells displaying interlocking, close-packed bundles of spindle-shaped cells with increased overall cell area and pseudopod numbers (Figure 5c, far right). The increased coalescence and proliferation of cells with their neighbors can be attributed to the exfoliated graphene nanosheets in the graphite/3A-PCL composites, which not only act as an electrical stimulation unit, like an "active trigger", facilitating the segmental motion of the 3A-PCL polymer chains, but also efficiently facilitate interaction between the cells and the substrate under an external electric field. This facilitated interaction accelerates the formation of a closely connected cell morphology by promoting intercellular adhesion, resulting in a significant increase in total cell numbers on the substrate. Overall, we conclude that introducing the adenine moieties into the PCL matrix substantially enhances the binding affinity with cells, which promotes the attachment of cells to the substrate and regulates cellular characteristics, i.e., adhesion, proliferation, and migration. Incorporating graphite into the 3A-PCL substrates enabled effective tuning of the wettability and conductivity of the substrate surface, thereby altering the growth behavior and characteristics of the cells. Importantly, under an indirect-current electric field of only 0.1 V, the graphite/3A-PCL substrates rapidly stimulated the spread and proliferation of cells and significantly increased total cell numbers. Thus, these adenine- and exfoliated graphene-containing bioactive substrates exhibit unique physical and biological properties that efficiently enhance cell growth and manipulate cell morphology and function through their IES-responsive characteristics. These features suggest great potential for a wide variety of biomedical applications, especially as a highly effective scaffold for tissue and cell cultures [43].

Materials and Methods Details regarding the synthetic procedures, the cell experiments and the instrumentation used in this study are given in the Supplementary Information.

Conclusions In summary, we successfully created a high-performance conductive supramolecular nanocomposite containing a hydrogen-bonded adenine-functionalized PCL and exfoliated graphene nanosheets that serves as a highly efficient bioactive substrate for cell culture and the manipulation of cell biophysical properties. The exfoliation of graphite within the 3A-PCL matrix promotes the formation of well-dispersed graphene nanosheets with unique structural and physical properties, owing to the strong interaction between the exfoliated graphene nanosheets and the self-assembled nanostructures of 3A-PCL. The resulting spin-coated composite films can be easily tuned by altering the blending ratio of graphite and 3A-PCL to obtain the required level of surface roughness and achieve the desired surface wettability and electrical conductivity.
The combination of a wide range of tunable physical properties and stable thermo-reversible behavior in the graphite/3A-PCL composites is rare and gives them strong potential for use as cell culture substrates or tissue culture scaffolds. When evaluated under in vitro conditions, these newly developed composites exhibited extremely low cytotoxic activity against normal NIH/3T3 cells, high structural stability, and biocompatibility in SRBC-containing medium. Cell culture, scratch experiments, and fluorescence imaging confirmed that pristine 3A-PCL substrates can efficiently interact with cell surfaces to enhance cell attachment, spreading, migration, and proliferation. With the incorporation of graphite into the 3A-PCL matrix, the resulting composite substrates can efficiently regulate the cellular functions involved in cellular morphological features, without affecting wound healing ability. More importantly, placing the graphite/3A-PCL substrates under an indirect-current electric field of only 0.1 V rapidly stimulated cell responses in terms of adhesion, spreading, viability, and proliferation, leading to a substantial increase in total cell numbers and a significant alteration in cell morphology, especially a gradual broadening of the cell size distribution. The presence of both adenine moieties and exfoliated graphene nanosheets within the composite substrates is crucial for the manipulation of cell growth, morphology, and functions through IES-responsive characteristics. This newly created strategy provides a simple, rapid, and efficient path to biocompatible and biodegradable conductive supramolecular nanocomposites for the development of multifunctional bioactive substrates that can substantially improve the cell culturing process.
6,382.6
2022-04-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Polyphosphazene Based Star-Branched and Dendritic Molecular Brushes. A new synthetic procedure is described for the preparation of poly(organo)phosphazenes with star-branched and star dendritic molecular brush type structures, showing for the first time that controlled, highly branched architectures can be prepared for this type of polymer. Furthermore, as a result of the extremely high arm density generated by the phosphazene repeat unit, the second-generation structures represent quite unique architectures for any type of polymer. Using two relatively straightforward iterative syntheses, it is possible to prepare globular, highly branched polymers with up to 30 000 functional end groups while keeping relatively narrow polydispersities (1.2-1.6). Phosphine-mediated polymerization of chlorophosphoranimine is first used to prepare three-arm star polymers. Subsequent substitution with diphenylphosphine moieties gives poly(organo)phosphazenes that function as multifunctional macroinitiators for the growth of a second generation of polyphosphazene arms. Macrosubstitution with Jeffamine oligomers gives a series of large, water-soluble branched macromolecules with high arm density and hydrodynamic diameters between 10 and 70 nm.

Introduction Polymers with a phosphorus-containing backbone, such as polyphosphazenes [1] and polyphosphoesters, [2,3] have recently merited much attention, especially in the biomedical field, [4,5] due to the tunable hydrolytic stability of the phosphorus-based backbone, and the adjustable rates of (bio)degradation this confers. [6,7] Polyphosphazenes are of particular interest due to their unique properties compared to many conventional carbon-based polymers, [8][9][10][11][12][13] and are currently being investigated for many advanced applications including polymer therapeutics, [5,14,15] vaccine delivery, [16,17] thermoresponsive hydrogels for drug delivery, [18,19] and tissue engineering. [20][21][22] Progress in living polymerization and controlled radical polymerization over recent years has resulted in higher control and a rapid expansion in the molecular architectures available, including molecular brushes, [23] star-branched, [24,25] dendritic, [26] and dendronized molecular brushes, [27] to name a few. Globular, highly branched polymers have gained interest in particular for unique characteristics that cannot be obtained with linear polymers, such as a compact structure and a high density of end groups. [28] Of the advanced architectures available, however, most consist of organic polymers with aliphatic carbon backbones, although the use of phosphorus-based building blocks containing organic units has yielded dendrimers with high functionality and promising properties. [29,30] Despite much research effort, the lack of a controlled synthesis of poly(dichlorophosphazene) ([PNCl2]n), the most used precursor, remains a limiting factor for the preparation of polyphosphazenes. [31][32][33][34][35] The development of a living cationic polymerization of chlorophosphoranimine (Cl3PNSi(CH3)3) has propelled research on polyphosphazenes with controlled structures forward. [36][37][38] Indeed, using (RO)3P=NSi(CH3)3 type phosphoranimines as precursors, three-arm star polymers have been reported. [39] Successive studies by the same authors revealed this innovative grafting-from route not to be as promising as it first appeared, [40] with only a more extensive route, capping branched organic species with phosphoranimines, proving viable.
Although polyphosphazenes can be successfully grafted onto branched organic polymers and dendrimers using this method, [41] it remains a challenge to prepare polyphosphazene branches via this route. Hybrid branched polyphosphazenes, [38] where organic polymers are grafted onto the phosphazene backbone, for example via atom transfer radical polymerization (ATRP), [42] have also been frequently reported. To the best of our knowledge, however, polyphosphazene branches grafted from or onto a polyphosphazene backbone have hitherto not been reported.

Star Branched Polyphosphazenes Herein, the synthesis of a series of star dendritic molecular brush type polyphosphazenes is reported. Applying a recently established one-pot phosphine-mediated living polymerization route, [43] a polyphosphazene tri-arm star polymer (Figure 1a) was first prepared and second-generation polyphosphazene side chains were subsequently added, yielding star dendritic molecular brush structures (Figure 1b). The three-arm star polymer was prepared using 1,1,1-tris(diphenylphosphino)methane as a multifunctional core for the parallel growth of three polymer chains (Figure 1a). The phosphine moieties of the core were first chlorinated with C2Cl6 functioning as a mild chlorinating agent. The choice of solvent here is critical, with CH2Cl2 yielding three fully ionized [RPh2PCl]+ species with Cl− counterions. [44] The living cationic polymerization was then initiated from these cationic phosphorus atoms immediately upon addition of n equivalents of the Cl3PNSi(CH3)3 monomer [45] to give three [PNCl2]n chains linked together by the tris(diphenylphosphino)methane core. The chlorine atoms in the [PNCl2]n chains were then macrosubstituted [1] (the post-polymerization substitution of the macromolecular [PNCl2]n) by Jeffamine, an amino end-capped polyalkylene oxide, yielding the desired star-brush poly(organo)phosphazenes (polymers 1 and 2), with each repeat unit of the [PNCl2]n arms carrying two Jeffamine oligomers. Since all of these steps are a simple addition of reactants in solution to the existing reaction mixture, this can be regarded as a one-pot synthesis.

Star Dendritic Molecular Brushes A second series of three-arm star polymers was then prepared, this time with 3-(diphenylphosphino)-1-propanamine as the substituent on the [PNCl2]p chains (3, 4, Figure 1b). Each diphenylphosphine moiety was then chlorinated, yielding the ionic [RPh2PCl]+ species (3a, 4a) acting as macroinitiators. The addition of Cl3PNSi(CH3)3 results in the growth of two [PNCl2]p chains per repeat unit (3b, 4b), followed by macrosubstitution with Jeffamine, yielding second-generation poly(organo)phosphazene star dendritic molecular brushes (5, 6) (Figure 1b). The completion of each step can be followed by {1H}31P NMR spectroscopy (Figure 2a), including the complete substitution of the P-N backbone, as evidenced by the absence of P-Cl groups from the precursor (Figure 2a (IV)). The growth of [PNCl2]p chains from the macroinitiators (4a to 4b) was confirmed by {1H}31P NMR spectroscopy measurements in CD2Cl2, in which monomer consumption over time showed living chain-growth kinetics (Figure 2c,d). The NMR measurements show the complete consumption of the monomer into the growing [PNCl2]p chains (Figure 2b). The kinetics derived from the measurements are slower than expected from previous studies.
[43] Although this could be partly due to the use of macroinitiators, it is more likely due to the alkyl moiety of the 3-(diphenylphosphino)-1-propanamine. Replacement of the alkyl moiety with a substituted phenyl group would be expected to enhance reaction rates, since previous studies with fully aromatic phosphines showed shorter polymerization times. [43] These star-branched and star dendritic molecular brushes have high effective degrees of polymerization (DP) for polyphosphazenes prepared from Cl3PNSi(CH3)3 and an unprecedented number of end groups, due to the multiplying dendritic effect of two arms emanating from each repeat unit. Since {1H}31P NMR studies show near-complete consumption of the monomer, and assuming also complete incorporation into the growing molecules (as previously shown for this polymerization type [43]), the DP for the star polymer 2 is calculated as 150, i.e., three times n = 50. The second generation consists of two [PNCl2]p arms per repeat unit, each with p = 50, giving an expected overall DP of ≈15 000 for polymer 6. Furthermore, with two organic substituents per repeat unit, polymer 6 carries an estimated 30 000 end groups (Table 1). The molecular weights obtained from size exclusion chromatography measurements confirmed an increase in molecular weight with increasing generation and chain length (Table 1). The values obtained are, as expected, much lower than the theoretical values, due to the reliance on linear polystyrene standards. The polydispersities range from 1.2 to 1.6, suggesting relatively good control of the polymerization. The hydrodynamic volumes of the polymers were also measured by dynamic light scattering (DLS) (Figure 3), with a similar increase in hydrodynamic diameter observed with increasing chain lengths and number of generations. The Jeffamine oligomer side chains not only render the poly(organo)phosphazenes water soluble but also augment the hydrodynamic diameter of the resulting polymers. DLS measurements in H2O showed only a slight increase when the number of repeat units per arm of the star was increased from 10 to 50 (polymer 1 vs 2). However, on going from the first generation to the second generation with short (n = 10) polyphosphazene side-arms (5), the hydrodynamic volume doubles. When each side-arm is extended to 50 repeat units, the hydrodynamic volume is quadrupled (6) and reaches a value of 70 nm (Figure 3); thus, large, unimolecular, water-soluble nanostructures are obtained, as also confirmed by atomic force microscopy (AFM) (Figure SI-1, Supporting Information).

Conclusions Phosphine-mediated living polymerization was used to prepare star-branched polyphosphazenes emanating from a central core containing three phosphine moieties. These three-armed star polymers could be substituted with two phosphine moieties per repeat unit. Upon chlorination, the phosphine moieties subsequently act as macroinitiators for the preparation of second-generation polyphosphazene side chains, yielding star dendritic molecular brush structures. Monomer consumption (and chain growth) was tracked by {1H}31P NMR spectroscopy and showed linear kinetics, and after macrosubstitution with Jeffamine, DLS and AFM measurements confirmed water-soluble, globular, unimolecular structures in the region of 70 nm. The term star dendritic molecular brushes is used to describe these unique macromolecules due to their high degree of branching (two branches per repeat unit) emanating from a central core.
Since these structures fall under the definition of neither classical hyperbranched polymers nor dendritic polymers, the description dendritic molecular brushes was chosen, owing to their similarity to recently reported structures of that name. [46] With two simple one-pot syntheses, it was possible to reach a degree of polymerization of up to 15 000, and thus ≈30 000 end groups, in a relatively simple synthesis. Moreover, the limit on further generations that could be synthesized remains to be explored. Furthermore, due to the high number of functional groups that can easily be introduced by mixed substitution of the poly(dichlorophosphazene) backbone, it should be a simple task to introduce a variety of functional groups, for example for catalyst or drug loading. The hydrodynamic volumes in the 10-70 nm range, in combination with the proven biocompatibility and degradability of similar poly(organo)phosphazenes, as well as their good aqueous solubility, [7] render these materials particularly interesting candidates as polymer therapeutics, [4,5,15] where highly branched, controlled structures could be highly valuable. [47,48]

Supporting Information Refer to Web version on PubMed Central for supplementary material.

Table 1 Polymer characterization. Apparent molecular weight as measured by SEC in DMF versus linear polystyrene standards.
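The degree-of-polymerization bookkeeping in the text above is simple enough to encode directly; a minimal sketch of that arithmetic (structure parameters from the paper; the function and its name are ours):

```python
def star_brush_counts(arms: int, n: int, p: int) -> tuple[int, int]:
    """Overall phosphazene DP and end-group estimate for a second-generation
    star dendritic molecular brush: `arms` first-generation chains of length n,
    each repeat unit initiating two second-generation chains of length p,
    with two Jeffamine substituents per second-generation repeat unit."""
    dp_gen1 = arms * n            # 3 x 50 = 150 for star polymer 2
    dp_gen2 = dp_gen1 * 2 * p     # two [PNCl2]p arms per repeat unit
    end_groups = 2 * dp_gen2      # two organic substituents per repeat unit
    return dp_gen1 + dp_gen2, end_groups

dp, ends = star_brush_counts(arms=3, n=50, p=50)
print(dp, ends)  # 15150, 30000 -> the quoted "DP of ~15 000" and "~30 000 end groups"
```

Note that the paper quotes ≈15 000 by counting only the second-generation units; including the 150 first-generation units gives 15 150, the same figure to the stated precision.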
2,450
2016-05-01T00:00:00.000
[ "Materials Science" ]
Long non-coding RNA MIDEAS-AS1 inhibits growth and metastasis of triple-negative breast cancer via transcriptionally activating NCALD Background Triple-negative breast cancer (TNBC) is a subtype of breast cancer with higher aggressiveness and poorer outcomes. Recently, long non-coding RNAs (lncRNAs) have emerged as crucial gene regulators in the progression of human cancers. However, the functions and underlying mechanisms of lncRNAs in TNBC remain unclear. Methods Based on public databases and bioinformatics analyses, the low expression of lncRNA MIDEAS-AS1 in breast cancer tissues was detected and further validated in a cohort of TNBC tissues. The effects of MIDEAS-AS1 on proliferation, migration and invasion were determined by in vitro and in vivo experiments. RNA pull-down and RNA immunoprecipitation (RIP) assays were carried out to reveal the interaction between MIDEAS-AS1 and MATR3. Luciferase reporter assay, chromatin immunoprecipitation (ChIP) and qRT-PCR were used to evaluate the regulatory effect of the MIDEAS-AS1/MATR3 complex on NCALD. Results LncRNA MIDEAS-AS1 was significantly downregulated in TNBC, which correlated with poor overall survival (OS) and progression-free survival (PFS) in TNBC patients. MIDEAS-AS1 overexpression remarkably inhibited tumor growth and metastasis in vitro and in vivo. Mechanistically, MIDEAS-AS1 mainly localized to the nucleus and interacted with the nuclear protein MATR3. Meanwhile, NCALD was identified as the downstream target, which was transcriptionally regulated by the MIDEAS-AS1/MATR3 complex and in turn inactivated the NF-κB signaling pathway. Furthermore, rescue experiments showed that the suppression of the malignant cell phenotype caused by MIDEAS-AS1 overexpression could be reversed by inhibition of NCALD. Conclusions Collectively, our results demonstrate that MIDEAS-AS1 serves as a tumor suppressor in TNBC by modulating the MATR3/NCALD axis, and MIDEAS-AS1 may function as a prognostic biomarker for TNBC. Supplementary Information The online version contains supplementary material available at 10.1186/s13058-023-01709-1.

Background Breast cancer is one of the most common malignant tumors and has serious effects on the health of women worldwide [1,2]. Triple-negative breast cancer (TNBC), one subtype of breast cancer, lacks estrogen receptor (ER) and progesterone receptor (PR), expresses low levels of human epidermal growth factor receptor 2 (HER-2), and accounts for approximately 15-20% of all breast carcinomas [3][4][5]. TNBC is characterized by higher rates of relapse, greater metastatic potential, and shorter overall survival compared with other major breast cancer subtypes [4,5]. Therapeutic methods for TNBC patients usually include surgery, chemotherapy, radiotherapy, and immunotherapy [6][7][8][9]. Clinically, despite significant progress in the treatment of TNBC over the last decade, recurrence and metastasis remain the principal causes of mortality in patients with this disease [10,11]. Therefore, elucidating the potential molecular mechanisms underlying TNBC progression is of great significance to provide promising novel treatment targets and prognostic biomarkers.
In recent years, the potential roles of long non-coding RNAs (lncRNAs) have attracted increasing attention in different kinds of cancers. LncRNAs are commonly defined as a class of RNA molecules with a length of more than 200 nucleotides and little or no coding capacity [12]. LncRNAs are abnormally expressed in many tumors and closely associated with the prognosis of tumor patients [13]. Various studies have reported that lncRNAs are responsible for human tumorigenesis and cancer progression by functioning either as oncogenes or tumor suppressors [14][15][16], and that they play pivotal roles in tumor cell growth, differentiation, invasiveness, metastasis, anti-apoptosis, and drug resistance [17][18][19]. For example, long non-coding RNA SNHG12 promotes tumor progression and sunitinib resistance in renal cell carcinoma [20]. LncRNA CASC2 inhibits hypoxia-induced pulmonary artery smooth muscle cell proliferation and migration by regulating the miR-222/ING5 axis [21]. Previous reports have shown that lncRNAs are involved in the regulation of gene expression through a multitude of mechanisms depending on their subcellular localization. Cytoplasmic lncRNAs are most commonly reported to function as competing endogenous RNAs (ceRNAs) to regulate gene expression. For example, long non-coding RNA NEAT1 promotes ferroptosis by modulating the miR-362-3p/MIOX axis as a ceRNA [22]. LINC00673 is activated by YY1 and promotes the proliferation of breast cancer cells via the miR-515-5p/MARK4/Hippo signaling pathway [23]. On the other hand, nuclear lncRNAs can regulate gene expression by participating in several biological processes, such as chromatin organization and nuclear structure organization, serving as transcriptional and post-transcriptional regulators, and acting as scaffolds for transcription factors (TFs) [24,25]. Given the abundance and diversified regulatory mechanisms of lncRNAs, more effort is needed to better understand the biological functions and molecular mechanisms of lncRNAs in TNBC and provide information for improving the treatment and prognosis of TNBC. In the present study, we aimed to explore the function and underlying mechanisms of a newly identified lncRNA, MIDEAS-AS1, in TNBC. We discovered that MIDEAS-AS1 was markedly reduced in breast cancer according to the GEO and TCGA databases, which was further confirmed in our cohort. Further, we compared the expression of MIDEAS-AS1 among distinct subtypes of breast cancer, and MIDEAS-AS1 was significantly decreased in TNBC. Moreover, low MIDEAS-AS1 expression was associated with poor prognosis of patients with TNBC. Using in vitro and in vivo experiments, we further demonstrated that MIDEAS-AS1 could inhibit the progression and metastasis of TNBC by interacting with MATR3 to upregulate the transcription of NCALD, subsequently inhibiting the NF-κB signaling pathway. These findings indicate that MIDEAS-AS1 might function as a tumor-suppressor lncRNA with potential as a diagnostic/prognostic marker and may offer a novel target for the treatment of patients with TNBC.

Tissue samples Human breast cancer tissues and adjacent non-tumor tissues were obtained from patients at Qilu Hospital. Written informed consent was provided by all patients, and the study was approved by the Ethical Committee on Scientific Research of Shandong University Qilu Hospital (IRB number, KYLL-2016(KS)-140).
Cell cultures All cell lines were purchased from the American Type Culture Collection (ATCC, Manassas, VA). MDA-MB-231, MDA-MB-468 and HEK293T cells were cultured in DMEM. The media contained 100 U/ml penicillin, 100 μg/ml streptomycin and 10% fetal bovine serum (Cell-Box, HK, China), and cells were cultured with 5% CO2 in a humidified cell-culture incubator at 37 °C.

RNA sequencing analysis Breast cancer gene expression data were downloaded from the TCGA (https://tcga-data.nci.nih.gov/tcga) and GEO (http://www.ncbi.nlm.nih.gov/geo) databases. The data analysis was performed with R software using the DEGseq package. The thresholds set for differential expression were |log2(fold change)| > 1 and p-value < 0.05.

Quantitative real-time PCR (qRT-PCR) Total RNA was extracted using the RNA-easy Isolation Reagent Kit (Vazyme, Nanjing, China). Reverse transcription from 1 μg RNA to cDNA was performed using the PrimeScript reverse transcriptase reagent kit (Takara, Shiga, Japan). Real-time PCR was performed with SYBR qPCR SuperMix Plus (Novoprotein, Suzhou, China) on a QuantStudio 6 Flex Real-Time PCR System. Results were analyzed using the comparative Ct method, normalizing to Actin. The primer sequences are shown in Additional file 1: Table S2.

Subcellular fractionation Separation of nuclear and cytosolic fractions was performed using the PARIS Kit (Invitrogen, Texas, USA) according to the manufacturer's instructions. Afterward, MIDEAS-AS1, GAPDH (cytoplasmic control) and U6 (nuclear control) in the cytoplasmic and nuclear fractions were detected by qRT-PCR.

RNA immunoprecipitation (RIP) The RIP experiments were performed strictly with a Magna RIP RNA-Binding Protein Immunoprecipitation kit (Millipore, Burlington, MA, USA) according to the manufacturer's instructions. Each RIP reaction required 100 μl of lysate from 1 × 10⁷ MDA-MB-231 cells, and each immunoprecipitation required 5 μg of antibody. The expression of MIDEAS-AS1 in the anti-MATR3 and negative control (IgG) precipitates was detected by qPCR, with the content in the IgG precipitate used as a reference. qRT-PCR was performed as described above.

Cell migration and invasion assay After transfection for 24 h, MDA-MB-231 and MDA-MB-468 cells were harvested, resuspended in serum-free DMEM and seeded into upper transwell chambers containing 8 μm pores. For the invasion experiment, the cells were seeded in Matrigel matrix-coated chambers. Culture medium supplemented with 20% FBS was added to the lower chamber. After incubation at 37 °C for 20 h for migration and 24 h for invasion, the chambers were fixed with methanol and stained with 0.5% crystal violet. Then, the cells on the upper surface were wiped off and the chambers allowed to dry at room temperature. The migrated and invasive cells were counted and photographed under a light microscope (200×) (Olympus, Tokyo, Japan).
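The comparative Ct analysis named above is the standard 2^(−ΔΔCt) method. Since the paper only names the method, this is a generic sketch (the Ct values are invented for illustration):

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Comparative Ct (2^-ddCt): target Ct normalized to a reference gene
    (here Actin) and expressed relative to a control sample."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: MIDEAS-AS1 vs Actin in tumor and adjacent tissue.
print(f"{relative_expression(28.4, 17.1, 26.2, 17.0):.2f}-fold vs control")
```

With these example values the output is about 0.23-fold, i.e., roughly four-fold downregulation of the target in the sample relative to the control.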
Wound healing experiment The cells were seeded in 24-well plates at a density of 3.5 × 10⁵ cells per well for MDA-MB-231 and 5 × 10⁵ cells per well for MDA-MB-468, and incubated in cell culture medium at 37 °C overnight. When the cells reached approximately 90% confluence, an artificial wound was made with a 10-µl sterile pipette tip. The detached fragments were washed away thoroughly with PBS, serum-free medium was added to the wells, and the cells were cultured for various amounts of time: MDA-MB-231 cells were incubated for 24 h and MDA-MB-468 cells for 48 h. We used a microscope to observe cell migration near the wound and obtain images. The images were processed using ImageJ software to quantify the open wound area as the average open wound area %, and a histogram was drawn.

Fluorescence in situ hybridization (FISH) FISH was performed using the RNA FISH Probe Mix Kit (GenePharma, Shanghai, China) according to the manufacturer's protocol. Briefly, we placed cell slides in a 24-well plate, seeded the cells at a density of 1.5 × 10⁵ cells per well, and incubated overnight at 37 °C. The medium was discarded and each well washed twice with PBS, then 4% paraformaldehyde was added and the cells were fixed at room temperature for 15 min. After blocking, the cells were incubated with the lncRNA probes at 37 °C for 16 h and washed three times with washing solution for 15 min. Subsequently, we added Hoechst 33342 (Beyotime, Suzhou, China) fluorescent dye solution for 10 min at room temperature in the dark, and then observed the distribution of MIDEAS-AS1 under a Confocal Microscope ZEISS LSM 880 (ZEISS, Berlin, Germany).

Chromatin immunoprecipitation (ChIP)-qPCR assay ChIP assays were performed using a ChIP assay kit (Cell Signaling Technology, MA, USA) according to the manufacturer's instructions. Briefly, cells were fixed for 10 min with 1% formaldehyde and then lysed in SDS lysis buffer. Chromatin was sonicated with a Bioruptor Pico (Diagenode, Belgium) to shear DNA to an average length of 200 to 1000 bp, as verified by agarose gel electrophoresis. Next, chromatin was immunoprecipitated with anti-Flag (Cell Signaling Technology, USA), with normal rabbit IgG used as the negative control. Final DNA extracts were amplified by quantitative PCR using primer pairs covering the sequence in the NCALD promoter region (−2000 bp to +100 bp).

RNA-protein pull-down assays In vitro transcription of sense, antisense or truncated MIDEAS-AS1 was achieved with T7 RNA polymerase (Thermo Fisher, MA, USA). The in vitro transcription product was then biotin-labeled with the Pierce RNA 3' End Biotinylation Kit (Thermo Fisher, MA, USA). Washed streptavidin magnetic beads were incubated with 50 pmol of purified biotinylated transcripts at room temperature for 30 min, followed by the addition of whole-cell lysates (20-200 μg) from MDA-MB-231 cells and incubation for 1 h at 4 °C. The beads carrying RNA and bound proteins were then washed, eluted and boiled, and the precipitated protein was separated by SDS-PAGE and detected by Western blotting analysis.
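A common way to quantify the ChIP-qPCR readout described above is percent-input normalization; the paper does not state which normalization it used, so the following is an illustrative sketch only:

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent-input ChIP-qPCR: adjust the input Ct for the fraction of
    chromatin saved as input, then compare against the IP Ct."""
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values for an NCALD promoter amplicon: Flag IP vs 1% input.
print(f"{percent_input(ct_ip=29.5, ct_input=30.0):.2f}% of input")
```

Enrichment is then judged by comparing the percent-input of the specific antibody against the IgG control amplified with the same promoter primers.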
Immunofluorescence analysis Cell coverslips were placed in a 24-well plate, seeded with transfected MDA-MB-231 cells at a density of 1.5 × 10⁵ cells per well, and incubated overnight at 37 °C. The cells were then fixed with 4% paraformaldehyde for 15 min at room temperature and blocked with 10% goat serum for 30 min. The cells were incubated with rabbit anti-NCALD (Proteintech, Wuhan, China) at 4 °C overnight, washed three times in PBS, and incubated for 1 h at room temperature with a FITC-conjugated goat anti-rabbit antibody (ZSGB-BIO, Beijing, China). After several washes, Hoechst 33342 (Beyotime, Suzhou, China) fluorescent dye solution was added, and the coverslips were mounted on slides using fluorescent mounting medium (PROLONG-GOLD, Thermo Fisher Scientific, MA, USA). Coverslips were imaged on a Nikon Eclipse Ti microscope (Nikon, Tokyo, Japan).

Cell proliferation assay 1.5 × 10³ transfected cells were seeded into each well of five 96-well plates. The cells were cultured for five consecutive days, and 20 μl of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, Beyotime, Suzhou, China) was added. Afterward, the cells were maintained at 37 °C and 5% CO2 for another 4 h, the MTT solution was removed, and 100 μl DMSO was added to each well. The optical absorbance of each well at 490 nm was measured, and proliferation curves were established using GraphPad Prism 8.3.0 software.

Colony formation assay In total, 500 cells were plated in a six-well plate and cultured in DMEM containing 10% FBS. The medium was changed every three days. MDA-MB-231 cells were cultured for two weeks and MDA-MB-468 cells for four weeks. Cells were then washed once with PBS, fixed with methanol for 15 min, and stained with 500 μl of 0.5% crystal violet (Beyotime, Suzhou, China) per well for 30 min. The colonies were imaged and counted.

Cell cycle analysis Cell cycle analysis was performed using Cell Cycle Staining Buffer (Multi Sciences, Hangzhou, China) following the manufacturer's protocol. Briefly, transfected cells were digested, washed, and resuspended in 500 μl of cell cycle staining buffer for 15 min. The cells were examined on a FACSCalibur (BD Biosciences, CA, USA) within 1 h.

Cell apoptosis assay For apoptosis analysis, the FITC-Annexin V/7-AAD double staining method was used (BD Biosciences, CA, USA). Transfected cells were collected, washed twice with cold PBS, and centrifuged at 1000 rpm for 5 min, and the supernatant was discarded. The cells were then resuspended in 1× binding buffer at a concentration of 1 × 10⁶ cells/ml. A 100 μl aliquot was transferred to a 1.5 ml culture tube, 5 μl FITC-Annexin V and 5 μl 7-AAD were added to resuspend the cells, and the reaction proceeded for 15 min at room temperature in the dark. After staining, 400 μl of 1× binding buffer was added to each tube, and the apoptosis percentages were analyzed on a FACSCalibur flow cytometer (BD) within 1 h.
Xenograft tumor formation assay

BALB/c nude mice (female, 4 weeks old) weighing about 20 g were procured from GemPharmatech (Jiangsu, China) for the in vivo study. The transfected cells were injected subcutaneously into the mice, and the width and length of the formed tumors were monitored every 5 days after injection. Tumor volumes were calculated using the formula: tumor volume = length × width² × 0.5. After 25 days, the nude mice were sacrificed, and the collected tumors were weighed and then fixed in formalin for immunohistochemical (IHC) staining and hematoxylin and eosin (H&E) staining. To evaluate the influence of MIDEAS-AS1 on metastasis, the transfected cells were injected into the lateral tail veins of female nude mice (six mice per group). After about 2 months, the mice were sacrificed, and the lungs were collected to count the pulmonary metastatic lesions and then fixed for H&E staining. The animal experiments were approved by the Ethical Committee on Scientific Research of Shandong University Qilu Hospital.

Statistical analysis

Statistical tests in this study were performed using GraphPad Prism 8.3.0 software. Results are expressed as mean ± standard deviation (SD). The Kaplan-Meier method and log-rank test were used to analyze survival rates. Differences between two or more groups were analyzed using Student's t-test or one-way/two-way ANOVA. Statistical significance was set at p < 0.05. All tests were conducted in triplicate.

MIDEAS-AS1 is downregulated in TNBC tissues and low MIDEAS-AS1 level predicts poor prognosis of TNBC

To explore the potential role of lncRNAs in TNBC, we first analyzed the differentially expressed lncRNAs using the GEO (Fig. 1A) and TCGA databases (Additional file 1: Fig. S1A). An integrated analysis was then performed on the lowly expressed lncRNAs in the GSE60689 and TCGA lncRNA databases (Fig. 1B). Among them, the newly named lncRNA MIDEAS-AS1 attracted our attention. A search of the UCSC database (http://genome.ucsc.edu/) and the CPC2 database (http://cpc2.gao-lab.org) revealed that MIDEAS-AS1 is located on human chromosome 14q24.3 and has three exons, without protein-coding capability (Additional file 1: Fig. S1B-C). Based on TCGA datasets, we validated that MIDEAS-AS1 was expressed at markedly low levels in breast cancer tissues compared with adjacent normal tissues (Fig. 1C). Additionally, we analyzed the expression levels of MIDEAS-AS1 in different subtypes of breast cancer: among patients with breast cancer, the expression of MIDEAS-AS1 was lower in TNBC than in other subtypes in the GSE21653 database (Additional file 1: Fig. S1D). To further validate the decrease of MIDEAS-AS1 in breast cancer, we examined its expression by qRT-PCR in breast cancer tissues and their adjacent normal breast tissues, as well as in TNBC and non-TNBC tissues. Compared with the adjacent normal tissues, the expression of MIDEAS-AS1 was significantly downregulated in breast cancer tissues (Fig. 1D). Meanwhile, we confirmed that patients with TNBC had significantly lower expression of MIDEAS-AS1 than patients with non-TNBC (Fig. 1E). Moreover, we further examined the MIDEAS-AS1 expression level in breast cancer organoids and normal organoids, and the result revealed that MIDEAS-AS1 was expressed at significantly low levels in the breast cancer organoids (Fig. 1F).
In addition, a fluorescence in situ hybridization (FISH) assay revealed similar results, with downregulated expression of MIDEAS-AS1 identified in breast cancer organoids compared with normal organoids (Fig. 1G). Previous literature has reported that MIDEAS-AS1 is associated with tumor stage and tumor-node-metastasis (TNM) stage [26]. To assess the clinical significance of MIDEAS-AS1 in TNBC, we analyzed the relationship between MIDEAS-AS1 expression level and clinicopathological characteristics. We found that low expression of MIDEAS-AS1 was significantly correlated with larger tumor size and higher pathological grade (Table 1). Moreover, Kaplan-Meier survival analysis showed that lower expression of MIDEAS-AS1 was correlated with significantly poorer overall survival (OS) (Fig. 1H) and progression-free survival (PFS) (Fig. 1I) in patients with TNBC. Together, these data indicated that MIDEAS-AS1 was downregulated in TNBC and that low expression of MIDEAS-AS1 was associated with poor prognosis in TNBC patients.

Fig. 1 LncRNA MIDEAS-AS1 is downregulated in TNBC tissues and is associated with poor progression. A Heat map showing the significantly differentially expressed lncRNAs in breast cancer samples compared with normal tissues from the GEO (GSE60689) database; red shades represent high expression and blue shades low expression. B Overlapping lowly expressed lncRNAs identified in the GEO and TCGA lncRNA databases. C The TCGA database showed that MIDEAS-AS1 was abnormally lowly expressed in breast cancer tissues. D The expression of MIDEAS-AS1 was lower in breast cancer tissues than in adjacent tissues. E qRT-PCR analysis of MIDEAS-AS1 expression in TNBC and non-TNBC tissues. F qRT-PCR analysis of the relative expression of MIDEAS-AS1 in breast cancer organoids compared with normal organoids. G FISH analysis of MIDEAS-AS1 in breast cancer and normal organoids. Scale bars, 20 μm. H Kaplan-Meier analysis of the association between MIDEAS-AS1 expression and overall survival of TNBC patients. I Kaplan-Meier analysis of the association between MIDEAS-AS1 expression and progression-free survival (PFS) of TNBC patients. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).

MIDEAS-AS1 reduces the proliferation, migration and invasion of TNBC cells in vitro

To explore the potential roles of MIDEAS-AS1, we transfected MIDEAS-AS1 overexpression plasmids into MDA-MB-231 and MDA-MB-468 cells, and the overexpression efficiency was determined by qRT-PCR (Additional file 1: Fig. S2A, S2B). The MTT assay indicated that the proliferation of MDA-MB-231 and MDA-MB-468 cells with MIDEAS-AS1 overexpression was reduced compared with the control group (Fig. 2A). Meanwhile, overexpression of MIDEAS-AS1 also inhibited the proliferation of breast cancer organoids (Fig. 2B). Moreover, overexpression of MIDEAS-AS1 repressed colony-forming activity (Fig. 2C-D). Additionally, flow cytometry was used to measure the apoptosis rate and cell cycle distribution after the different treatments. Our results indicated that MIDEAS-AS1 overexpression led to a remarkably increased apoptosis rate (Fig. 2E) and an increased cell number in G1 phase (Additional file 1: Fig. S2C). On the other hand, MIDEAS-AS1 knockdown promoted the proliferation and colony-formation abilities of TNBC cells (Fig. 2A, D). Furthermore, the apoptosis rate was obviously decreased and the number of cells in G1 phase was lower in the MIDEAS-AS1 knockdown group than in the control group (Fig. 2E, Additional file 1: Fig. S2C).
We then investigated the effect of MIDEAS-AS1 on TNBC cell migration and invasion using transwell assays and a wound healing assay in MDA-MB-231 and MDA-MB-468 cells. The transwell assays showed a significant reduction in the migratory and invasive abilities of MDA-MB-231 and MDA-MB-468 cells after MIDEAS-AS1 overexpression (Fig. 2F). Furthermore, the wound healing assay indicated that the wound closure area of MDA-MB-231 and MDA-MB-468 cells decreased dramatically following MIDEAS-AS1 overexpression (Additional file 1: Fig. S2D). Consistently, MIDEAS-AS1 knockdown led to increased migratory and invasive abilities of TNBC cells (Fig. 2G). Given that epithelial-mesenchymal transition (EMT) is one of the major mechanisms of cancer metastasis, we further examined the effect of MIDEAS-AS1 on the expression of EMT-related marker proteins by Western blot. The results showed that the N-cadherin, vimentin, fibronectin and MMP9 proteins were downregulated, while the E-cadherin protein was upregulated, in MDA-MB-231 and MDA-MB-468 cells after overexpression of MIDEAS-AS1 (Fig. 2H). On the contrary, N-cadherin, vimentin, fibronectin and MMP9 were upregulated and E-cadherin was downregulated in TNBC cells after MIDEAS-AS1 knockdown (Fig. 2I). Therefore, MIDEAS-AS1 may regulate the EMT process to modulate TNBC metastasis. Collectively, these results indicated that MIDEAS-AS1 plays a critical role in suppressing the proliferation and mobility of TNBC cells.

MIDEAS-AS1 directly interacts with MATR3 to carry out its function

It has been suggested that the regulatory mechanisms of lncRNAs are closely associated with their subcellular localization [25]. Therefore, we first examined the subcellular localization of MIDEAS-AS1 in TNBC cells. Following isolation of nuclear and cytoplasmic RNA from MDA-MB-231 and MDA-MB-468 cells, qRT-PCR analysis demonstrated that MIDEAS-AS1 was mainly located in the nucleus (Fig. 3A). The FISH assay yielded similar results (Fig. 3B), suggesting a potential role for MIDEAS-AS1 in transcriptional regulation, for example by acting as a scaffold for transcription factors [25,27]. To identify proteins interacting with MIDEAS-AS1, an RNA pull-down assay was performed. We incubated MDA-MB-231 cell lysates with biotinylated MIDEAS-AS1 or its antisense RNA transcribed in vitro, and silver staining and mass spectrometry (MS) identified MATR3 as one of the major proteins in the MIDEAS-AS1 pull-down precipitates (Fig. 3C). Moreover, mass spectrometry analysis found that MIDEAS-AS1 interacted with MATR3 at the "DLSAAGIGLLAAATQSLSMPASLGR" and "YQLLQLVEPFGVISNHLILNK" peptide sequences (Fig. 3D). The specific binding between the MATR3 protein and MIDEAS-AS1 was further confirmed by RNA pull-down followed by Western blot (Fig. 3E). We then performed an RNA immunoprecipitation (RIP) assay using Flag antibodies and found that MIDEAS-AS1 was significantly enriched in the MATR3 immunoprecipitates (Fig. 3F). Meanwhile, RNA FISH combined with immunofluorescence analysis demonstrated the co-localization of MIDEAS-AS1 and the MATR3 protein in MDA-MB-231 cells (Fig. 3G). To identify the specific binding regions between MIDEAS-AS1 and MATR3, we constructed vectors containing full-length MIDEAS-AS1 or three truncated sequences (Fig. 3H). The RNA pull-down assay indicated that deleting the 217-439 bp sequence (Δ2 vector) abolished the MIDEAS-AS1-MATR3 interaction, consistent with the prediction results obtained from the catRAPID database (Additional file 1: Fig. S3A).
Furthermore, we wondered whether MIDEAS-AS1 could regulate the expression of MATR3. Notably, overexpression or knockdown of MIDEAS-AS1 showed no effect on the RNA or protein expression levels of MATR3 (Additional file 1: Fig. S3B), indicating that MIDEAS-AS1 is not involved in the post-transcriptional regulation of MATR3. Collectively, these results revealed that MIDEAS-AS1 specifically interacts with MATR3 in TNBC cells.

MIDEAS-AS1 affects the progression and metastasis of TNBC by promoting NCALD expression

Given the critical role of the nuclear matrix-associated protein MATR3 in transcriptional regulation, we asked whether MIDEAS-AS1 could regulate the expression of downstream genes through its interaction with MATR3 and thereby affect the progression and metastasis of TNBC. We performed an RNA-sequencing analysis of MIDEAS-AS1-overexpressing and control cells, and 471 differentially expressed genes were identified (Fig. 4A-B), including 252 up-regulated and 219 down-regulated genes. The top 20 up-regulated genes are shown in Fig. 4C. Given the tumor-suppressive role of MIDEAS-AS1 in TNBC, we intersected the genes down-regulated in breast cancer tissues in the TCGA database with the genes up-regulated after MIDEAS-AS1 overexpression in TNBC cells, and 9 candidate genes were selected as potential downstream targets of MIDEAS-AS1 (Fig. 4D). It has been reported that an antisense lncRNA may affect the expression of its sense gene [28], so we also examined the influence of MIDEAS-AS1 on its sense gene (MIDEAS) by qRT-PCR; however, the results showed that MIDEAS-AS1 did not affect the expression of MIDEAS (Additional file 1: Fig. S4B). Meanwhile, the qRT-PCR results revealed that the expression of AF131215.5, CXCL1, and NCALD was remarkably increased after MIDEAS-AS1 overexpression and reduced after MIDEAS-AS1 knockdown (Additional file 1: Fig. S4A). CXCL1 (C-X-C motif chemokine ligand 1) is the most abundant chemokine secreted by TAMs, and CXCL1 can promote migration, invasion, and EMT in breast cancer [29,30]. AF131215.5 is a lncRNA whose function in cancer remains unclear [31,32]. Interestingly, NCALD (neurocalcin delta) expression is lower in lung adenocarcinoma tissues [33], and patients with higher NCALD levels exhibit higher survival rates [34,35]. Among these genes, we hypothesized that NCALD, a tumor suppressor gene, might be a potential target of MIDEAS-AS1 in breast cancer. Indeed, NCALD expression was markedly downregulated in breast cancer tissues compared with normal tissues (Additional file 1: Fig. S4C-D). Moreover, immunohistochemistry (IHC) data from the Human Protein Atlas database showed higher expression of NCALD in normal tissues than in breast cancer tissues (Additional file 1: Fig. S4E). These results indicated that NCALD might play a significant tumor-suppressor role in breast cancer. Therefore, NCALD was selected as the functional downstream mediator of MIDEAS-AS1-mediated cell migration and metastasis in TNBC. The qRT-PCR and Western blot results revealed that the expression of NCALD was remarkably increased after MIDEAS-AS1 or MATR3 overexpression and reduced after MIDEAS-AS1 or MATR3 knockdown in MDA-MB-231 and MDA-MB-468 cells (Fig. 4E-F; Additional file 1). Furthermore, qRT-PCR analysis indicated that MIDEAS-AS1 overexpression promoted the expression of NCALD, while MATR3 knockdown weakened the increase in NCALD expression caused by MIDEAS-AS1 overexpression in MDA-MB-231 and MDA-MB-468 cells (Fig. 4G).
Furthermore, we found that the increase in NCALD protein level caused by MIDEAS-AS1 overexpression could be attenuated by MATR3 knockdown in MDA-MB-231 and MDA-MB-468 cells (Fig. 4H). We then investigated the transcriptional regulation of NCALD by MIDEAS-AS1 and MATR3. A dual luciferase reporter assay showed that overexpression of MIDEAS-AS1 or MATR3 substantially increased the luciferase activity of the NCALD reporter, while knockdown of MIDEAS-AS1 or MATR3 significantly inhibited it, in HEK293T cells (Fig. 4I). Furthermore, rescue experiments revealed that the promoter activity of NCALD was activated by MIDEAS-AS1 overexpression, and this activation was inhibited after co-transfection with si-MATR3 (Fig. 4J). These results indicated that MIDEAS-AS1 regulates the expression of NCALD by modulating the function of MATR3. To identify the specific binding regions of MIDEAS-AS1 and MATR3 on the NCALD promoter, we cloned the full-length NCALD promoter (−2000 to +100 bp) as well as five truncated mutants: NCALD-1 (−2000 to −1564 bp), NCALD-2 (−1563 to −1094 bp), NCALD-3 (−1093 to −654 bp), NCALD-4 (−653 to −269 bp), and NCALD-5 (−268 to +100 bp) (Fig. 4K), and performed luciferase assays. We found that the luciferase activity was most significantly increased for the −1563 to −1094 bp region, indicating the presence of positive regulatory elements that enhance NCALD transcription in this region (Fig. 4L). In addition, ChIP-qPCR assays were performed in Flag-MATR3-transfected breast cancer cells with or without MIDEAS-AS1 overexpression to determine the effect of MIDEAS-AS1 on MATR3 recruitment to the NCALD promoter. Consistent with the luciferase assay, the MIDEAS-AS1-MATR3 complex was found to bind significantly to the −1563 to −1094 bp region of the NCALD promoter in MDA-MB-231 and MDA-MB-468 cells (Fig. 4M). These findings suggested that the MIDEAS-AS1-MATR3 complex enhances NCALD transcription by directly binding to its promoter.

NCALD inhibits TNBC cell proliferation, migration, and invasion in vitro

Previous studies have found that NCALD is not only involved in cell apoptosis, cell cycle progression and other biological processes in several cancers [36], but is also associated with the prognosis of cancers [34,35]. However, there has been no report on the function of NCALD in breast cancer. Our results above revealed downregulated expression of NCALD in breast cancer tissues and cells, indicating a tumor-suppressive role of NCALD in breast cancer. To confirm the function of NCALD, we transfected NCALD-overexpressing vectors and si-NCALD into MDA-MB-231 and MDA-MB-468 cells, and the overexpression and knockdown efficiencies were determined by qRT-PCR and Western blot assays (Fig. 5A-B). The MTT and colony formation assay results indicated that NCALD overexpression reduced TNBC cell proliferation and colony formation abilities (Fig. 5C, D). Additionally, the apoptosis rate was remarkably increased and the number of cells in G1 phase was increased after NCALD overexpression (Fig. 5F, Additional file 1: Fig. S5A). On the other hand, NCALD knockdown led to significantly increased proliferation and colony-formation abilities of TNBC cells (Fig. 5C, E).
Moreover, the apoptosis rate was decreased and the number of cells in G1 phase was lower in NCALD knockdown cells than in control cells (Fig. 5F, Additional file 1: Fig. S5A). Additionally, transwell and wound healing assays revealed that NCALD overexpression significantly reduced TNBC cell migration and invasion abilities (Fig. 5G, Additional file 1: Fig. S5B), while NCALD knockdown led to increased migration and invasion abilities in TNBC cells (Fig. 5H, Additional file 1: Fig. S5B). These data demonstrated that NCALD serves as a suppressive functional factor in TNBC progression. Previous literature reported that NCALD might be related to the ERK1/2, NF-κB, TGF-β and immune response pathways in ovarian cancer [35]. We therefore investigated the potential molecular mechanism underlying the effects of NCALD in TNBC cells. Western blot analysis indicated that overexpression of NCALD significantly inhibited the phosphorylation of p65, but not TGF-β or p-ERK1/2, in MDA-MB-231 and MDA-MB-468 cells (Fig. 5I), while NCALD knockdown produced the opposite results (Fig. 5J). Collectively, our results demonstrated that NCALD inhibits TNBC cell proliferation, migration, and invasion by suppressing the NF-κB signaling pathway.

MIDEAS-AS1 regulates TNBC progression through regulating the expression of NCALD

Our results above showed that the association between MIDEAS-AS1 and MATR3 plays a significant role in initiating NCALD transcription. To further confirm whether MIDEAS-AS1 exerts its tumor-suppressive functions via modulating NCALD, we co-transfected si-NCALD and the MIDEAS-AS1 overexpression plasmid into TNBC cell lines. The MTT and colony formation experiments indicated that NCALD knockdown remarkably rescued the proliferation ability of MDA-MB-231 and MDA-MB-468 cells inhibited by the overexpression of MIDEAS-AS1 (Fig. 6A-B). Moreover, the transwell and wound healing assays showed that NCALD knockdown could partially restore the cell migration and invasion abilities reduced by MIDEAS-AS1 overexpression (Fig. 6C-D). Furthermore, Western blot analysis revealed that NF-κB pathway-related proteins were decreased after overexpression of MIDEAS-AS1, and this decrease was reversed after co-transfection of the MIDEAS-AS1 overexpression plasmid with si-MATR3 (Fig. 6E). Taken together, MIDEAS-AS1 associates with MATR3 to initiate NCALD transcription and inhibit the NF-κB signaling pathway, which further affects TNBC progression.

MIDEAS-AS1 overexpression inhibits TNBC progression and metastasis in vivo

To further evaluate the biological function of MIDEAS-AS1 in vivo, a subcutaneous xenograft model was first constructed. MDA-MB-231 cells stably transfected with MIDEAS-AS1 overexpression vectors or control vectors were subcutaneously injected into the flanks of nude mice. Tumor weight and tumor volume were significantly reduced in xenografts of the MIDEAS-AS1-overexpressing group compared with the control group (Fig. 7A-C). Furthermore, immunohistochemistry (IHC) assays confirmed that MIDEAS-AS1 overexpression caused decreased Ki67 expression and increased NCALD expression, but did not affect MATR3 expression (Fig. 7D-E).
In addition, we constructed a pulmonary metastasis model by intravenously injecting MDA-MB-231 cells stably expressing MIDEAS-AS1 or control vectors to compare metastatic abilities in vivo. The results showed that mice injected with MIDEAS-AS1-overexpressing TNBC cells developed no or fewer metastatic foci compared with the control group (Fig. 7F-H). Together, these results indicated that MIDEAS-AS1 plays a critical role in inhibiting TNBC progression and metastasis.

Discussion

Breast cancer is one of the most common malignancies among women worldwide and a leading cause of cancer-related death. There are several explanations for the high mortality rate of breast cancer, with metastasis to vital organs thought to be the main cause [37]. As reported in the literature, TNBC is the most challenging subtype of breast cancer, with higher rates of relapse and greater metastatic potential than other breast cancer subtypes [4,38,39]. Although various treatment methods exist for patients with metastatic breast cancer, their curative effects are unsatisfactory in clinical practice. Therefore, it is of great significance to investigate the molecular mechanisms of TNBC metastasis and to identify novel prognostic predictors for accurate diagnosis and prediction of prognosis.

It is well known that lncRNAs are abnormally expressed in many different types of cancer and are involved in the regulation of tumor development and progression [40-42]. Recently, various studies have focused on the functions and regulation of lncRNAs in search of novel diagnostic and therapeutic targets for cancer treatment. In this study, we explored the potential role and molecular mechanism of MIDEAS-AS1 in TNBC progression. We determined that MIDEAS-AS1 was significantly downregulated in TNBC tissues compared with non-TNBC tissues, and low expression of MIDEAS-AS1 was associated with poor prognosis in TNBC. Functional studies revealed that MIDEAS-AS1 suppressed the proliferation, migration, invasion and metastasis of TNBC cells and promoted their apoptosis in vitro, indicating a tumor suppressor role in TNBC. Moreover, MIDEAS-AS1 inhibited TNBC tumor progression and lung metastasis in vivo in xenograft models. However, the regulatory mechanism by which MIDEAS-AS1 influences TNBC progression remained unclear and worthy of further exploration.
It has been reported that the localization of lncRNAs within the cell is the primary determinant of their molecular functions [43]. Increasing evidence suggests that cytoplasmic lncRNAs can regulate gene expression by modulating mRNA stability or translation, or by participating in mRNA post-transcriptional regulation as ceRNAs [19,44]. Meanwhile, nuclear lncRNAs participate in several biological processes, including chromatin organization and transcriptional and post-transcriptional gene regulation, and can act as structural scaffolds of nuclear domains [24]. Here, we found that MIDEAS-AS1 was mainly located in the nucleus, based on cell cytoplasmic/nuclear fractionation and RNA FISH assays. Previous studies have reported that lncRNAs can play important roles in transcription by recruiting corresponding proteins [45,46]. Therefore, MIDEAS-AS1 might exert its function by recruiting transcription complexes to enhance or inhibit gene transcription. RNA pull-down followed by mass spectrometry showed the binding potential between MATR3 and MIDEAS-AS1, which was further confirmed by RIP assay. Moreover, we found that MIDEAS-AS1 and MATR3 were co-localized in TNBC cells by RNA FISH combined with immunofluorescence. Subsequently, we identified the specific binding region between MIDEAS-AS1 and MATR3. However, we found that the expression level of MIDEAS-AS1 did not affect the RNA and protein levels of MATR3, leading us to speculate that MIDEAS-AS1 might instead regulate the function of MATR3 in TNBC cells.

It has been reported that MATR3 is an abundant nuclear protein that binds both DNA and RNA [47,48], allowing MATR3 to play crucial roles in RNA splicing and gene transcription [49-51]. To further explore the downstream genes regulated by the MIDEAS-AS1-MATR3 complex, we analyzed the RNA-seq results, and 471 DEGs were revealed. Following integration with the TCGA database, we finally identified 9 candidate genes as potential direct downstream targets of MIDEAS-AS1 and MATR3. NCALD, a member of the neuronal calcium sensor protein family [52], caught our attention due to its remarkable changes after MIDEAS-AS1 overexpression or knockdown. Studies have found that NCALD is associated with the prognosis of several cancers, including non-small cell lung cancer, ovarian cancer, and colorectal cancer, indicating its clinical potential as a prognostic biomarker [35,36]. However, the functional role and molecular mechanism of NCALD in breast cancer had not been reported. In this study, we found that NCALD was downregulated in breast cancer tissues. Moreover, the expression of NCALD could be regulated by MATR3 in TNBC cells. Significantly, rescue experiments revealed that MATR3 knockdown attenuated the increase in NCALD expression caused by MIDEAS-AS1 overexpression in TNBC cells. Subsequently, dual luciferase reporter assays and ChIP-qPCR indicated that the MIDEAS-AS1-MATR3 complex could bind to the NCALD promoter to regulate NCALD transcription.
We further investigated whether NCALD mediates the functional effects of MIDEAS-AS1 in TNBC. Functional experiments revealed that NCALD overexpression inhibited cell proliferation, migration, and invasion in TNBC cells, suggesting a suppressive functional role of NCALD in TNBC progression. A previous study reported that the TGF-β, NF-κB and ERK signaling pathways are associated with NCALD expression in epithelial ovarian cancer [35]. Interestingly, Western blot analysis showed that overexpression of NCALD inhibited the expression of NF-κB pathway-associated proteins, indicating that NCALD inhibits TNBC progression possibly by suppressing the NF-κB signaling pathway. Furthermore, the rescue experiments showed that NCALD knockdown could partially rescue the proliferation and migration abilities of TNBC cells inhibited by MIDEAS-AS1 overexpression. Western blot analysis also revealed that MIDEAS-AS1 and MATR3 not only affected NCALD expression but also significantly modulated the NF-κB signaling pathway. Therefore, our study revealed that MIDEAS-AS1 regulates TNBC progression by recruiting MATR3 to initiate NCALD transcription and suppressing the NF-κB signaling pathway (Fig. 7I).

Conclusions

In this study, we identified MIDEAS-AS1 as a novel tumor suppressor in TNBC, with higher expression of MIDEAS-AS1 associated with better prognosis of TNBC patients. Mechanistically, we revealed that MIDEAS-AS1 inhibits TNBC progression by activating NCALD transcription via recruitment of MATR3. Our study illustrates a novel mechanism involved in TNBC progression and suggests that MIDEAS-AS1 might serve as a prognostic biomarker for TNBC patients.

Fig. 2 MIDEAS-AS1 inhibits TNBC cell proliferation and migration in vitro. A-D The effects of MIDEAS-AS1 overexpression and knockdown on proliferation were examined by MTT assay (A-B) and colony formation assays (C-D). E Flow cytometry was performed to measure the effect of MIDEAS-AS1 on apoptosis. F-G Transwell migration and invasion assays were used to evaluate the motility of MDA-MB-231 and MDA-MB-468 cells transfected with MIDEAS-AS1-overexpressing vector or control vector (F) and si-NC or si-MIDEAS-AS1 (G). H Western blot analysis of EMT-related proteins in MDA-MB-231 and MDA-MB-468 cells after overexpression of MIDEAS-AS1. I Western blot analysis of EMT-related proteins in MDA-MB-231 and MDA-MB-468 cells after knockdown of MIDEAS-AS1. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).
Fig. 3 MIDEAS-AS1 interacts with MATR3. A The expression level of MIDEAS-AS1 in the subcellular fractions of MDA-MB-231 and MDA-MB-468 cells was detected by qRT-PCR, with U6 and GAPDH as nuclear and cytoplasmic markers, respectively. B FISH analysis of the location of MIDEAS-AS1 (red) in the cytoplasmic and nuclear fractions of MDA-MB-231 and MDA-MB-468 cells. Scale bars, 20 μm. C Biotin-labeled MIDEAS-AS1 was incubated with MDA-MB-231 cell lysates for pull-down, followed by SDS-PAGE separation and silver staining. D Mass spectrometry analysis of the peptide sequences interacting with MIDEAS-AS1. E Western blot analysis following the RNA pull-down assay indicated that MIDEAS-AS1 interacts with MATR3. F RIP assay using a Flag antibody showed that MIDEAS-AS1 interacts with MATR3. G Co-localization of MIDEAS-AS1 and MATR3 by immunofluorescence following transfection with MIDEAS-AS1-overexpressing vector or control vector. Scale bars, 20 μm. H The interaction between truncated MIDEAS-AS1 and MATR3 was confirmed by RNA pull-down and Western blot. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).

Fig. 4 MIDEAS-AS1 activates NCALD expression via recruiting the MATR3 complex to the NCALD promoter. A Heat map of differentially expressed genes based on RNA-seq analysis of MIDEAS-AS1-overexpressing cells and control cells. B 471 genes were differentially expressed between MIDEAS-AS1-overexpressing and control cells (252 up-regulated and 219 down-regulated; |log2(fold change)| > 1). C The top 20 up-regulated genes. D Venn diagram showing the intersection between down-regulated genes in the TCGA data (1648 genes) and up-regulated DEGs after MIDEAS-AS1 overexpression (252 genes). E-F qRT-PCR and Western blot analysis of NCALD expression in MDA-MB-231 and MDA-MB-468 cells with knockdown or overexpression of MIDEAS-AS1 or MATR3. G qRT-PCR analysis of NCALD expression in MDA-MB-231 and MDA-MB-468 cells co-transfected with MIDEAS-AS1 overexpression vector or empty vector together with si-MATR3 or si-NC. H Western blot analysis of NCALD expression in MDA-MB-231 and MDA-MB-468 cells co-transfected with MIDEAS-AS1 overexpression vector or empty vector together with si-MATR3 or si-NC. I Luciferase reporter assays validated the binding of MIDEAS-AS1 and MATR3 with NCALD. J Luciferase activity in HEK293T cells co-transfected with MIDEAS-AS1-overexpressing vector and si-MATR3. K-L Relative luciferase activity of the full-length promoter and the five truncated promoter regions of NCALD in HEK293T cells transfected with MIDEAS-AS1-overexpressing vector. M ChIP-qPCR experiments with ten different NCALD promoter primers using an anti-Flag antibody in MDA-MB-231 and MDA-MB-468 cells transfected with MIDEAS-AS1 overexpression plasmid and N-terminal FLAG-tagged MATR3 plasmid. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).
Fig. 5 Overexpression of NCALD inhibits TNBC cell proliferation and migration in vitro. A-B The efficiency of NCALD overexpression and knockdown was confirmed by qRT-PCR (A) and Western blot (B) in MDA-MB-231 and MDA-MB-468 cells. C-E The effects of NCALD overexpression and knockdown on the proliferation of MDA-MB-231 and MDA-MB-468 cells were examined by MTT assay (C) and colony formation assays (D-E). F Flow cytometry was performed to measure the effect of NCALD overexpression and knockdown on apoptosis. G-H Transwell migration and invasion assays were used to evaluate the motility of MDA-MB-231 and MDA-MB-468 cells transfected with NCALD-overexpressing vector or control vector (G) and si-NC or si-NCALD (H). I Western blot analysis of p-ERK1/2, p-NF-κB and TGF-β protein levels in MDA-MB-231 and MDA-MB-468 cells transfected with NCALD overexpression plasmid. J Western blot analysis of p-ERK1/2, p-NF-κB and TGF-β protein levels in MDA-MB-231 and MDA-MB-468 cells transfected with si-NC or si-NCALD. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).

Fig. 6 MIDEAS-AS1 exerts its function by regulating NCALD expression and downstream signaling. A-B The effects of co-transfection with MIDEAS-AS1 overexpression vector or empty vector together with si-NCALD or si-NC on the proliferation of MDA-MB-231 and MDA-MB-468 cells, assessed by MTT assay (A) and colony formation assays (B). C-D Transwell and wound-healing assays were used to evaluate the motility of MDA-MB-231 and MDA-MB-468 cells co-transfected with MIDEAS-AS1 overexpression vector or empty vector together with si-NCALD or si-NC. E Western blot analysis of the corresponding signaling in MDA-MB-231 and MDA-MB-468 cells co-transfected with MIDEAS-AS1 overexpression vector or empty vector together with si-MATR3 or si-NC. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).

Fig. 7 MIDEAS-AS1 overexpression inhibits tumor formation in nude mouse xenograft models. A MDA-MB-231 cells stably transfected with the MIDEAS-AS1 overexpression plasmid or control plasmid were inoculated subcutaneously into nude mice; compared with the control group, MIDEAS-AS1 overexpression inhibited tumor growth. B-C Tumor weight (B) and tumor volume (mm³) (C) were significantly decreased in the MIDEAS-AS1 overexpression group. D Immunohistochemistry with a Ki67-specific antibody was performed on the tumors; MIDEAS-AS1 overexpression led to reduced expression of Ki67. Scale bars, 50 μm. E Representative images of MATR3 and NCALD staining in the tumors; immunohistochemical staining showed that MIDEAS-AS1 overexpression led to increased expression of NCALD, while MATR3 was not affected. Scale bars, 50 μm. F-H Stably transfected MDA-MB-231 cells were injected into the tail veins of nude mice (n = 5). Representative images of lungs (F) and H&E staining of lungs (H) isolated from the mice; MIDEAS-AS1 overexpression resulted in a decreased number of lung metastatic colonies (G). I Schematic diagram depicting the mechanisms underlying the effect of MIDEAS-AS1. Data are shown as mean ± SD (*p < 0.05, **p < 0.01, ***p < 0.001).

Table 1 Correlation between MIDEAS-AS1 mRNA expression and clinicopathological characteristics in 91 triple-negative breast cancer patients
Mathematical Problem-Solving Ability in STAD Learning Assisted by Question Cards in Terms of Student Learning Motivation

This research aims to determine the effectiveness of the Student Teams Achievement Division (STAD) learning model assisted by question cards and to describe mathematical problem-solving ability in terms of students' learning motivation under the STAD learning model assisted by question cards. The method used in this research is mixed methods with a sequential explanatory design. The subjects were six students, two in each learning motivation category (high, moderate, and low). The results of this research showed that: (1) the mathematical problem-solving ability of grade VIII students after participating in STAD learning assisted by question cards on the Probability material reaches learning completeness; (2) the average mathematical problem-solving ability of grade VIII students after participating in STAD learning assisted by question cards is higher than that of grade VIII students who use the conventional learning model; (3) the proportion of grade VIII students achieving completeness in mathematical problem-solving ability after STAD learning assisted by question cards is greater than the corresponding proportion under the conventional learning model; (4) subjects with high learning motivation fulfil all four indicators of mathematical problem-solving ability well, while subjects with moderate learning motivation fulfil three indicators well.

INTRODUCTION

Education is an important aspect of science and technology development. Education plays a role in developing the potential of human resources through teaching and learning activities, so that citizens become innovative and creative and quality human resources are realised who can compete with other nations (Halean et al., 2021). The Act of the Republic of Indonesia Number 20 of 2003 states that national education functions to develop abilities and form the character and civilisation of a dignified nation in order to educate the nation's life, and aims to develop the potential of students to become human beings who are faithful and devoted to God Almighty, noble, healthy, knowledgeable, capable, creative, independent, democratic and responsible citizens. Given the important role of mathematics in everyday life, mathematics learning should pay more attention to, and improve, problem-solving skills for everyday life (Antika, 2019; Fatimah, 2020; Fitriawan et al., 2023). Mathematics is closely identified with solving mathematical problems, and the main purpose of mathematics education is for students to be able to solve problems in everyday life (Simanjuntak, 2021). The National Council of Teachers of Mathematics states that the goal of mathematics is for students to have five standards of mathematical ability, namely problem-solving, reasoning and proof, connection, communication, and representation (NCTM, 2000).
In reality, however, in the 2012 Programme for International Student Assessment (PISA) test, Indonesia ranked 64th out of 65 participating countries with an average score of 375, well below the Organisation for Economic Co-operation and Development (OECD) average of 494 (OECD, 2015). The low PISA results are reinforced by the reality at school. Based on observations and interviews with teachers at SMP Salafiyah Pekalongan, it emerged that students still find it difficult to solve mathematical problems in the form of narrative problems that require deeper understanding and problem-solving. Students are not accustomed to writing down the steps of their solutions, and students' answers often do not match the problems given. This shows that students' mathematical problem-solving skills are still not optimal. The learning process provided by the teacher also still uses conventional, teacher-centred methods, making students less motivated to learn: students only listen to the teacher's explanation and memorise formulas without connecting them to everyday life, and as a result they become bored and sleepy during learning.

Motivation is also an important factor for learners (Baier et al., 2019; Novianti et al., 2020). Learners who lack motivation tend to be more bored in learning activities, leading to low mathematical problem-solving ability and low academic achievement. Conversely, learners who are more motivated obtain higher academic achievement (Järvenoja et al., 2020; Waritsman, 2020). Therefore, learning motivation needs to be possessed by every learner. When learners have high learning motivation, they will always try to solve the problems they face, actively participate in learning, and not give up easily when facing problems. Conversely, learners with low motivation will tend to give up easily and not try to solve a problem. This is in line with research by Rigusti and Pujiastuti (2020), which shows that students with high learning motivation have high problem-solving ability, students with moderate learning motivation have moderate problem-solving ability, and students with low learning motivation have low problem-solving ability.

In addition to a good model, educators also need to use interesting learning media. Learning media are aids to learning of various types, including media based on living things, print, visual, audio-visual, and computers (Muhaimin & Juandi, 2023; Salam et al., 2022). One medium often used in cooperative learning models is question cards (Arifin & Halim, 2021; Atiaturrahmaniah & Fajri, 2020; Sya'adah et al., 2023). Question cards are cards containing questions that must be answered by students (Rohmah et al., 2019). Question cards have also been proven effective in mathematics learning (Fatra et al., 2023; Nada et al., 2020).
METHODS

The research method used in this research is mixed methods with a sequential explanatory research design. Creswell (2018) explains that in explanatory sequential mixed methods researchers first conduct quantitative research, analyse the results, and then elaborate on the results in more detail with qualitative research. The quantitative research design used is a True Experimental Design in the form of a Posttest Only Control Design, in which assignment to the experimental and control groups is done randomly. Following Sugiyono (2013a, 2013b), the research design is described in Table 1.

The variables of this research were mathematical problem-solving ability and students' learning motivation. The data collection methods used were tests, questionnaires, and interviews. The test method was used to collect data on mathematical problem-solving ability after learning mathematics with the STAD learning model assisted by question cards and with the conventional learning model. The questionnaire method was used to measure the learning motivation of the experimental group students, which was then used to classify students into high, moderate, and low motivation categories. The interview method was conducted in an unstructured manner to obtain data on mathematical problem-solving ability in terms of students' learning motivation.

Quantitative data were obtained from the results of the problem-solving ability test. Qualitative data were obtained from interviews with six students selected by learning motivation category. The quantitative data analysis techniques used were (1) initial data analysis, in the form of a normality test, a homogeneity test, and an average similarity test on the final semester test scores of the experimental and control classes, and (2) final data analysis, namely a normality test, a homogeneity test, a z-test for classical completeness, a mean difference test, and a proportion difference test (Natow, 2020; Raskind et al., 2019). The qualitative data were analyzed by data reduction, data presentation, and conclusion drawing, and their validity was examined through credibility, transferability, dependability, and confirmability tests, with credibility established by technique triangulation (tests and interviews).

Indicators of mathematical problem-solving ability used in this research, according to the National Council of Teachers of Mathematics (NCTM, 2000), are: (1) constructing new mathematical knowledge through problem-solving, (2) solving problems that appear in mathematics and other contexts, (3) implementing and adapting a variety of suitable strategies to solve problems, and (4) paying attention to and reflecting on the process of solving mathematical problems.

RESULTS AND DISCUSSION

Before hypothesis testing, the data from the mathematical problem-solving ability test were first tested for normality and homogeneity. The results showed that the mathematical problem-solving ability data of both samples came from normally distributed populations and had the same variance (homogeneous), so hypothesis testing could proceed.
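To make the classical-completeness z-test used in the final data analysis concrete, here is a minimal sketch in Python, assuming the standard one-proportion z statistic against π₀ = 0.75. The counts (28 of 32 students reaching the KKM) are taken from the results reported below; the authors' exact formula may differ slightly, so the computed z need not coincide with their reported value of 1.687.

```python
# Minimal sketch: one-sided z-test for classical completeness.
# H0: the proportion of students reaching the KKM is <= 0.75
# H1: the proportion is greater than 0.75 (classical completeness reached)
from math import sqrt

def one_proportion_z(successes: int, n: int, pi0: float) -> float:
    """z statistic of a one-proportion test against pi0."""
    p_hat = successes / n
    se = sqrt(pi0 * (1.0 - pi0) / n)
    return (p_hat - pi0) / se

z = one_proportion_z(successes=28, n=32, pi0=0.75)
z_crit = 1.64  # one-sided critical value at alpha = 0.05
print(f"z = {z:.3f}, reject H0: {z > z_crit}")
```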
The hypothesis 1 test examines whether the experimental group's mathematical problem-solving ability test data reach the limit of classical completeness, namely that at least 75% of students achieve a minimum score of 70. Based on the calculation results, the value z = 1.687 is obtained. The critical value z_(0.5−α) with α = 5% = 0.05 is 1.64. Because z = 1.687 > 1.64 = z_(0.5−α), H₀ is rejected. This means that the mathematical problem-solving ability of the experimental group students has reached classical learning completeness. The results for hypothesis 1 thus show that mathematical problem-solving ability under the STAD learning model assisted by question cards reaches the minimum learning completeness of 75%: 28 out of 32 students achieved individual completeness, i.e., 88% of the students who took the problem-solving ability test exceeded the KKM (minimum mastery criterion). This can be achieved because, in the experimental group, students get more practice problems, both in groups and individually. Presenting problems in the form of question cards keeps students from feeling bored while solving them, and the award for the students with the highest score further encourages students to attempt the problems given.

The hypothesis 2 test examines whether the average mathematical problem-solving ability of students taught with STAD assisted by question cards is higher than that of students taught with conventional learning. Based on the test results, the average of the group that received STAD learning assisted by question cards was 79.97, with a lowest score of 50 and a highest score of 100. From the mean difference test, the value t = 3.644 was obtained, and the critical value t_table with df = (32 + 32 − 2) = 62 is 1.999. Because t = 3.644 > 1.999 = t_table, H₀ is rejected. That is, the average mathematical problem-solving ability of students in STAD learning assisted by question cards is higher than the average problem-solving ability in conventional learning.

The hypothesis 3 test examines whether the proportion of students achieving completeness in problem-solving ability under the STAD learning model assisted by question cards is greater than the corresponding proportion under the conventional learning model. From the proportion difference test, z = 3.64 was obtained, and the critical value z_(0.5−α) with α = 5% = 0.05 is 1.64. Because z = 3.64 > 1.64, H₀ is rejected. That is, the proportion of completeness of mathematical problem-solving ability in STAD learning assisted by question cards is greater than that of conventional learning.
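The mean-difference and proportion-difference tests above follow standard formulas; here is a minimal sketch of both, using the pooled-variance two-sample t statistic and the pooled two-proportion z statistic. The completeness counts passed in the example call are hypothetical, since the control group's raw counts are not reported here.

```python
# Minimal sketch: the two "final data analysis" comparisons described above.
from math import sqrt

def pooled_t(x1, x2):
    """Two-sample t statistic with pooled variance (equal variances assumed)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    return (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference of two proportions (pooled estimate)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    return (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

# Hypothetical completeness counts for two groups of 32 students each.
print(f"z = {two_proportion_z(28, 32, 17, 32):.2f}")
```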
Learning motivation questionnaires were distributed to the experimental group students who received the STAD learning model assisted by question cards at the end of the meetings. Based on the Likert-scale scoring of the learning motivation questionnaire, 19% of students (6 of 32) had learning motivation in the high category, 69% (22 of 32) in the moderate category, and 13% (4 of 32) in the low category. Based on these results, two research subjects were taken from each category of learning motivation, giving six research subjects. The research subjects in the STAD class assisted by question cards can be seen in Table 2. After the quantitative data analysis and the categorization of student learning motivation, qualitative data analysis was conducted.

Students with high learning motivation, namely subjects T-1 and T-2, have high mathematical problem-solving ability: both answered the 3 questions given, and all 3 correctly. The qualitative analysis of the high-motivation group shows that subjects T-1 and T-2 fulfil all four indicators of mathematical problem-solving ability, namely constructing new mathematical knowledge through problem-solving, solving problems that appear in mathematics and other contexts, implementing and adapting a variety of suitable strategies to solve problems, and paying attention to and reflecting on the process of solving mathematical problems.

Students with moderate learning motivation, namely subjects S-1 and S-2, have moderate mathematical problem-solving ability. Subjects S-1 and S-2 answered the 4 questions given correctly; however, they did not fulfil the indicator of paying attention to and reflecting on the progress of problem-solving in question number 3.

Students with low learning motivation, namely subjects R-1 and R-2, have low mathematical problem-solving ability. Subjects R-1 and R-2 tend to fulfil only the indicator of constructing new mathematical knowledge through problem-solving, so it can be said that subjects with low learning motivation are able to understand problems well. Subjects with low motivation tend to be unable to fulfil the indicators of solving problems in other contexts, using various appropriate steps to solve problems, and observing and reflecting on the progress of problem-solving. Subjects R-1 and R-2 fulfilled the indicator of solving problems in other contexts only in items number 1 and 3. Subject R-2 did not fulfil the indicator of using various appropriate steps for any item, while subject R-1 fulfilled it for items number 2 and 3. The indicator of paying attention to and reflecting on the progress of problem-solving was fulfilled by subject R-1 for items number 1 and 3, and by subject R-2 only for item number 1.
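For completeness, here is a minimal sketch of how Likert-scale motivation scores might be binned into the three categories; the mean ± SD cutoff rule is our own assumption, as the paper does not state its categorization criterion.

```python
# Minimal sketch: classify questionnaire scores into high/moderate/low.
# NOTE: the mean +/- SD cutoff rule is an assumption, not stated in the paper.
from statistics import mean, stdev

def categorize(scores):
    m, s = mean(scores), stdev(scores)
    def label(x):
        if x >= m + s:
            return "high"
        if x <= m - s:
            return "low"
        return "moderate"
    return [label(x) for x in scores]

print(categorize([85, 72, 60, 90, 55, 74]))  # hypothetical scores
```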
CONCLUSION

Based on the results and discussion of mathematical problem-solving ability in terms of students' learning motivation in Student Teams Achievement Division (STAD) learning assisted by question cards, the following conclusions are obtained. (1) The mathematical problem-solving ability of grade VIII students after participating in STAD learning assisted by question cards reaches learning completeness. (2) The mathematical problem-solving ability of students taught with the STAD learning model assisted by question cards is better than that of students in the control class. (3) The proportion of students achieving completeness in mathematical problem-solving ability under STAD learning assisted by question cards is greater than the proportion under the conventional learning model. (4) Students with high learning motivation have better mathematical problem-solving ability than students with moderate or low learning motivation. This is because subjects with high learning motivation fulfil all four indicators of mathematical problem-solving ability well and are able to solve problems correctly and precisely using systematic solution steps. Learners with moderate learning motivation have moderate problem-solving ability, because they fulfil three indicators of mathematical problem-solving but cannot reflect on the results of their work. Meanwhile, students with low learning motivation have low mathematical problem-solving ability, because they can only fulfil one indicator of mathematical problem-solving ability, namely constructing new mathematical knowledge through problem-solving.

Table 1. Posttest Only Control Design (1: test results of the experimental group's mathematical problem-solving ability; 2: test results of the control group's mathematical problem-solving ability.)

The population in this research was students in grade VIII of SMP Salafiyah Pekalongan in the 2022/2023 academic year. Sampling was done by the random sampling method, and two classes were taken: class VIII D as the experimental group and class VIII E as the control group. The experimental group received Student Teams Achievement Division (STAD) learning assisted by question cards, while the control group received the conventional learning model. The subjects in this research were six students of class VIII D, two in each of the high, moderate, and low learning motivation categories.
Can dd excitations mediate pairing?

The Cu-$3d$ states in the high-$T_c$ cuprates are often described as a single band of $3d_{x^2-y^2}$ states, with the other four $3d$ states having about 2 to 3 eV higher energy due to the lower-than-octahedral crystal field at the copper sites. However, excitations to these higher energy states observed with RIXS show indications of strong coupling to doped holes in the $3d_{x^2-y^2}$ band. This relaunches a decades-old question of the possible role of the orbital degrees of freedom that once motivated Bednorz and Müller to search for superconductivity in these systems. Here we explore a direction different from the Jahn-Teller electron-phonon coupling considered by Bednorz and Müller, namely the interaction between holes mediated by $dd$ excitations.

I. INTRODUCTION

In 1986 it was widely believed that superconductivity above the then-known limit of 23 Kelvin was not possible. Who could have predicted that, initiated by Georg Bednorz and Karl Alex Müller's discovery, superconductivity in the cuprates would go through the liquid nitrogen ceiling? The consequences for fundamental science and for applications are numerous and will be detailed throughout the tributes in this special issue.

In their Nobel lecture [1] Bednorz and Müller provided a captivating account of the path that led them to their discovery. A remarkable aspect was the consideration of the 3d orbital degrees of freedom: "For Cu$^{3+}$ with $3d^8$ configuration, the orbitals transforming as base functions of the cubic $e_g$ group are half-filled, thus a singlet ground state is formed. In the presence of Cu$^{2+}$ with $3d^9$ configuration the ground state is degenerate, and a spontaneous distortion of the octahedron occurs to remove this degeneracy. This is known as the Jahn-Teller effect." For a Cu ion surrounded by O$^{2-}$ ligands with an octahedral crystal field, the $3d$ levels are grouped into $e_g$ (i.e., $d_{x^2-y^2}$, $d_{3r^2-z^2}$) and $t_{2g}$ (i.e., $d_{xy}$, $d_{yz}$, $d_{zx}$) levels, the latter at about 2 eV higher energy. Here the notation $d_j$ refers to a hole in an otherwise fully occupied $3d$ shell. In the cuprate superconductors the crystal-field symmetry at the Cu sites is lower than octahedral. This causes the ground state to become $d_{x^2-y^2}$, with $d_{3r^2-z^2}$ at least 1 eV above the ground state. This splitting can be directly observed with resonant inelastic x-ray scattering (RIXS); an example is provided in Fig. 1 [2] for the hole-doped Bi-based cuprate. Since the Jahn-Teller effect relies on the two states being degenerate, it is tempting to conclude that this effect does not play a role here. However, the Jahn-Teller effect is restored, at least in part, if the $d_{3r^2-z^2}$ and $d_{x^2-y^2}$ bands are coupled (citing K. A. Müller [3]) "via vibronic interactions".

Recently the detailed temperature dependence of the dd excitations in Bi-2212 has been studied by, among others, the present authors [2], and it was found that the energies of these modes are influenced by the system becoming superconducting. Superconductivity-induced shifts of +30 meV were observed on the overdoped side, about −12 meV on the underdoped side, and no shift within the error bar for optimal doping. It was argued that these shifts, including the sign change as a function of doping, could be caused by differences in the local spin correlations of the different phases. These observations also clearly indicate that there is a non-negligible coupling between the conducting holes in the cuprates and the dd excitations.
A dd excitation is essentially a local orbital flip from the $d_{x^2-y^2}$ ground state to one of the other $d_j$ states ($j = xy$, $yz$, $zx$ or $3r^2-z^2$). From this perspective it is unavoidable that a coupling exists between the holes in the $d_{x^2-y^2}$ band and the dd excitations, and that this coupling is in fact very strong. We interrupt the flow of logic to point out a commonality with Bednorz and Müller's intuition, namely that the 3d orbital degrees of freedom could be one of the key ingredients. Instead of coupling the d orbital degrees of freedom to vibrations, we will now consider the coupling between dd excitations and conduction electrons or holes. The question then is whether such coupling contributes to the pairing interaction, and whether this contribution is positive or negative.

The possibility that dd excitations in the cuprates could mediate superconducting pairing was pioneered by Werner Weber and collaborators [6,7] and by William Little and collaborators [8-11]. Weber considered a model where conduction is confined to holes in the O-2p band, where the copper ions are purely $3d^9$, and pairing is mediated by dd orbital excitations within the $e_g$ manifold of the Cu $3d^9$. Little et al. used the Eliashberg equations to describe the coupling of conduction electrons or holes to dd excitations. The approach that we explore here differs from the previous ones: compared to the approach of Little et al. we use a more direct description of the coupling to the 3d orbital degrees of freedom, and instead of Weber's model with separate O-2p and Cu-$3d_{x^2-y^2}$ bands, we adopt the model of a single partly filled band of mixed Cu-$3d_{x^2-y^2}$-O-2p character.

II. THE MODEL

The question that we want to address is whether the coupling to dd excitations could provide an attractive short-range interaction between the free carriers. We model the problem by considering a two-dimensional square lattice of atoms. We then dope two additional holes into this lattice, (i) in a configuration where the holes are at infinite distance from each other, giving the energy $E_\infty$, and (ii) where they occupy nearest-neighbor sites, giving the energy $E_{nn}$. The energy difference $E_{nn} - E_\infty$ then constitutes the effective interaction energy of a doped hole pair at nearest-neighbor distance.

In the interest of simplicity, we limit the Hilbert space on each site to two of the 3d orbitals, which is also the minimal model for considering dd excitations. In the hole language, $3d_{x^2-y^2}$ ($|a\rangle$) is the state with the lowest energy, and we represent the other four orbitals, at 2-3 eV above the ground state, by a single state $|b\rangle$. In this subspace the local Hamiltonian reads [12], where $a^\dagger$ ($a$) and $b^\dagger$ ($b$) are the creation (annihilation) operators for a hole with energy $\varepsilon_a$ and $\varepsilon_b$, respectively. The first term of Eq. (1) is the kinetic energy, the second term, proportional to $U$, is the intra-orbital Coulomb interaction, the third and fourth terms represent the inter-orbital Coulomb and Hund's coupling, and the last term involves on-site inter-orbital spin-flips and on-site pairwise orbital flips [13].
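The displayed form of Eq. (1) did not survive in this copy of the text. As a hedged reconstruction, a standard two-orbital Kanamori-type local Hamiltonian consistent with the term-by-term description above (kinetic energy, intra-orbital $U$, inter-orbital Coulomb $U'$, Hund's coupling $J$, and spin-flip plus pair-hopping terms) would read as follows; the grouping of terms and the convention for $U'$ (often $U' = U - 2J$) may differ from the paper's actual Eq. (1).

```latex
\begin{align}
H_{\mathrm{loc}} &= \sum_{\sigma}\left(\varepsilon_a\, a^{\dagger}_{\sigma}a_{\sigma}
  + \varepsilon_b\, b^{\dagger}_{\sigma}b_{\sigma}\right)
  + U\left(n^{a}_{\uparrow}n^{a}_{\downarrow} + n^{b}_{\uparrow}n^{b}_{\downarrow}\right)
  + U'\sum_{\sigma\sigma'} n^{a}_{\sigma}n^{b}_{\sigma'}
  - J\sum_{\sigma} n^{a}_{\sigma}n^{b}_{\sigma} \nonumber\\
 &\quad - J\sum_{\sigma} a^{\dagger}_{\sigma}a_{\bar{\sigma}}\,
    b^{\dagger}_{\bar{\sigma}}b_{\sigma}
  + J\left(a^{\dagger}_{\uparrow}a^{\dagger}_{\downarrow}b_{\downarrow}b_{\uparrow}
  + b^{\dagger}_{\uparrow}b^{\dagger}_{\downarrow}a_{\downarrow}a_{\uparrow}\right)
\end{align}
```

Here $n^{m}_{\sigma}$ is the hole number operator for orbital $m = a, b$ and spin $\sigma$, and $\bar{\sigma}$ denotes the spin opposite to $\sigma$; the last two terms are the inter-orbital spin-flip and pair-hopping (orbital-flip) terms named in the text.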
A hole doped into the Mott-insulating state has the possibility of making virtual excursions to the neighboring sites, which can be either to the same orbital $a$ or to the orbital $b$ at a higher energy. When we dope two holes and keep them at a sufficiently long distance from each other, there is no interference between their respective virtual excursions, but if they occupy two neighboring sites this changes the pattern of virtual excitations, as is schematically illustrated in Fig. 2. In the Appendix, the energies of these various configurations are calculated using second-order perturbation theory and assuming a hopping energy $t_{ab}$ between different orbitals on neighboring sites. The nearest-neighbor interaction energy $\Gamma_{nn}$ is defined as the energy gained by two holes when they benefit from these virtual excitations while sitting on two neighboring sites, rather than at an infinite distance from each other [see Eq. (A.21)].

It is convenient to introduce the dimensionless quantities $u = U/\Omega_{ab}$ and $j = J/\Omega_{ab}$. We further use $\delta_j$ for the analogous dimensionless form of the level shift $\Delta_J$ introduced in the Appendix. With these definitions, we express the nearest-neighbor interaction, Eq. (A.21), as a dimensionless quantity $\gamma_{nn}$ [Eq. (5)].

III. DISCUSSION

A color map of $\gamma_{nn}$ in the $(u, j)$ plane is shown in Fig. 3. Blue (red) indicates zones of attractive (repulsive) interaction. The gray color indicates the region where the ground state of a doped hole becomes unstable toward a different orbital configuration. This happens for $j \geq (1 + \delta_j)/3$. For the purpose of our discussion we will only need to consider $j < (1 + \delta_j)/3$, the parameter range in which Eq. (5) applies.

The experimental values of the $dd$ exciton peak observed by RIXS indicate that $\Omega_{ab} = 2$ eV. Following Ghijsen et al. [14], we find for the cuprates $U = 8.8$ eV and $J = 1$ eV, so that $u = 4.4$ and $j = 0.5$. As pointed out above, this high value of $j$ is outside the range of validity of Eq. (5). This should not come as a surprise: it is a manifestation of the second Hund's rule that, in the absence of a crystal field, the ground state is a high-spin state. For $j < 1/3$ the crystal field is sufficiently strong to quench the high-spin state, but for the more realistic estimate $j = 0.5$ the ground state has $S = 1$. In the actual cuprate materials this does not happen, because the energy of introducing a hole in O-$2p$ is lower than that of creating a $3d^8$ state [15]. In other words, if one considers realistic interaction parameters of the Cu-$3d$ electrons, it is necessary to take into account the O-$2p$ bands [15,16].
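The defining equations for $u$ and $j$ were lost in extraction; the identifications $u = U/\Omega_{ab}$ and $j = J/\Omega_{ab}$ used above are inferred from the quoted numbers, as the following check shows:
$$
u = \frac{U}{\Omega_{ab}} = \frac{8.8~\mathrm{eV}}{2~\mathrm{eV}} = 4.4, \qquad
j = \frac{J}{\Omega_{ab}} = \frac{1~\mathrm{eV}}{2~\mathrm{eV}} = 0.5,
$$
in agreement with the values quoted from Ghijsen et al. [14].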
Meanwhile the question "can $dd$ excitations mediate pairing?" has a non-trivial answer: there exists a set of parameters in the $(u, j)$ plane for which $\gamma_{nn} = 0$ [17], separating regimes of opposite sign. For $u > u_c(j)$ the interaction $\gamma_{nn}$ is repulsive; for $5j - 1 < u < u_c(j)$ it is attractive (always considering $j < 1/3$). For $u < 5j - 1$ (moss-green area on the left of the color map) the state of the two doped holes on nearest-neighbor sites becomes unstable toward a different orbital configuration. As pointed out above, this model cannot be directly applied to the cuprates. On the other hand, as a matter of principle it is of interest that such an attractive interaction is possible, and it may provide a method of tuning an attractive interaction in doped multi-band Mott insulators. Let us therefore take a step back and try to understand what exactly causes the attractive pairing channel. From Eq. (5) we see that in the parameter range $j < (1+u)/5$ and $u < u_c(j)$, the first and second terms of Eq. (5) are both positive. When $j$ approaches $(1 + u)/5$ from below, the denominator of the third term tends to zero and the attractive interaction potential diverges. Clearly the attractive interaction is caused by the third term. In the Appendix, we show that this term originates in virtual transitions that are unique to having two doped holes on nearest-neighbor sites, i.e., $d(R_1)^2 d(R_2)^2$. This opens a channel of virtual transitions to excited states $d(R_1)^1 d(R_2)^3$. The energy cost of such a process is $\Omega_{ab}$ plus the effective on-site interaction energy $U_{\mathrm{eff}} = U - 5J$. $U_{\mathrm{eff}}$ is negative in the parameter range considered, $j < (1 + u)/5$. For $j \to (1 + u)/5$ the effective on-site interaction $U_{\mathrm{eff}} \to -\Omega_{ab}$. Consequently, the energies of the two states $d(R_1)^2 d(R_2)^2$ and $d(R_1)^1 d(R_2)^3$ approach the same value. One can therefore interpret this type of pairing interaction as a resonant process whereby two doped holes temporarily occupy the same site. It is not completely obvious whether we should regard this as an exciton-mediated interaction; however, from a broader perspective, this interaction is effectively mediated by virtual local orbital fluctuations.

We close by speculating whether this type of behavior could be realized in a real system and, if so, in which materials. From Fig. 3, we see that in the region of attractive interaction we have $j \ll 1$, so that $u_c(j) \approx 5j/2$. The conditions $u < u_c(j)$ and $j < (1 + u)/5$ then become $5j - 1 < u < 5j/2$. It is a well-known fact that the Hund's coupling $J$ is a relatively robust atomic parameter that is only weakly screened in a solid-state environment. In the $3d$ series, $J$ varies from 0.9 eV (Ti) to 1.5 eV (Cu), in the $4d$ series from 0.6 eV (Zr) to 1.0 eV (Ag), and in the $5d$ series from 0.7 eV (Hf) to 1.0 eV (Au) [18]. The best targets are Nb$^{4+}$ or W$^{5+}$, both having a $d^1$ configuration. Taking $J = 0.6$ eV and $\Omega_{ab} = 2$ eV, $U$ should be about 1 to 1.5 eV. This is compatible with reported empirical values for the early $4d$ and $5d$ elements [18]. However, since in this case the role of electrons and holes is reversed, the electron in a $d^1$ configuration occupies one of the $t_{2g}$ states. For an octahedral coordination the ground state is then three-fold degenerate, making it Jahn-Teller active. This degeneracy can be lifted by the combination of spin-orbit coupling and a tetragonal crystal field. Electron-doped Sr$_2$NbO$_4$ or K$_2$WO$_4$ and other members of the Ruddlesden-Popper series of these compounds could be candidate materials for observing the exciton-mediated interactions of the type sketched above. Arriving at a quantitative prediction requires implementing spin-orbit coupling and a tetragonal crystal field in the model.
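The compound condition $5j - 1 < u < 5j/2$ reconstructed above can be checked against the numbers quoted for the early $4d$/$5d$ candidates:
$$
j = \frac{J}{\Omega_{ab}} = \frac{0.6}{2} = 0.3
\;\Rightarrow\; 5j - 1 = 0.5 < u < \frac{5j}{2} = 0.75
\;\Rightarrow\; 1~\mathrm{eV} < U < 1.5~\mathrm{eV},
$$
in agreement with the statement that $U$ should be about 1 to 1.5 eV.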
IV. CONCLUSIONS

We calculated the effective interaction between two holes doped into a two-band Mott insulator. We conclude from our study that the coupling to $dd$ excitations (orbital flips) mediates a nearest-neighbor interaction, the sign of which depends on the relative values of the on-site Coulomb repulsion and Hund's-rule exchange. Our analysis demonstrates that exciton-mediated interactions can in principle contribute positively to the pairing interaction, but depending on the parameters they may also have the opposite effect. To obtain a precise understanding it is necessary to include in the model the O-$2p$ states and the full set of Cu-$3d$ states. Better candidate materials for observing the type of interactions described here are members of the Ruddlesden-Popper series of early transition-metal oxides such as Sr$_2$NbO$_4$ or K$_2$WO$_4$.

APPENDIX

1. The undoped ground state

In the limit $U \to \infty$, the ground state of the undoped Mott insulator is the direct product of the single-site eigenstates, with one hole in the orbital $a$ on every site; the bare $dd$ excitation energy is $\Omega_{ab} = \varepsilon_b - \varepsilon_a$.

2. The $d^2$ states

Hole doping causes a finite fraction of the sites to be occupied with two holes. For the case of two holes on a single site, the subspace with spin $S = 0$ is spanned by the doubly occupied orbital states and the inter-orbital singlet. On this basis, straightforward diagonalization of the $S = 0$ Hamiltonian gives the eigenstates and their energies; in particular, the lowest $S = 0$ state $|\tilde{a}a; 0\rangle$ has energy $E_{\tilde{a}a;0} = 2\varepsilon_a + U - \Delta_J$ [cf. Eq. (A.17)]. The basis vectors for the $S = 1$ manifold are constructed analogously.

3. The $d^3$ states

For the evaluation of virtual processes where two doped holes occupy nearest-neighbor sites, we also need the corresponding $d^3$ states.

4. Dressing of a doped hole by excitons on neighboring sites

When a hole is introduced in the system, it can be "dressed" by creating virtual excitons on the neighboring sites. To estimate the energy saving by these processes, we consider a two-site cluster where one hole is introduced in addition to the hole already present at each site, the indices 1, 2 referring to the different sites and the label 1h to one doped hole. The relevant excited states are reached through the intersite hopping term $\hat{H}_t$ connecting different orbitals $a$ and $b$. With the matrix elements of $\hat{H}_t$, we calculate the energy lowering of a site with a single doped hole due to virtual hopping to the neighboring sites [19]; the resulting second-order shift, Eq. (A.16), contains the term
$$-\frac{3\alpha^{2} t_{ab}^{2}/2}{\Omega_{ab} - 3J + \Delta_J} \qquad (\mathrm{A.16})$$
together with a companion term carrying the denominator $\Omega_{ab} - J + \Delta_J$.

5. Two holes at nearest-neighbor sites

If we introduce two doped holes on nearest-neighbor sites, the unperturbed energy of that configuration is
$$E = 2E_{\tilde{a}a;0} = 4\varepsilon_a + 2U - 2\Delta_J, \qquad (\mathrm{A.17})$$
and the virtual excitations whereby a hole hops from one of these sites to the other are modified, because the intermediate state on one of the two sites now has two doped holes in addition to the hole already present in the undoped state (i.e., three holes in total). We calculate the correction to the ground-state energy due to virtual hopping between the two sites sketched in Fig. 2(b) and subtract the contribution from the virtual hopping processes that they replace [see Eq. (A.16)]. This way we obtain the energy of bringing two holes from an infinite distance to a nearest-neighbor distance, Eq. (A.21), whose attractive term carries the resonant denominator $\Omega_{ab} + U - 5J$.

FIG. 2. (a) Two additional holes present on distant sites (blue spheres) are renormalized by the interaction with four nearest neighbors each (green spheres). (b) Two additional holes present on neighboring sites are renormalized by the interaction with six nearest neighbors and by virtual hopping of particles between the two sites (yellow).

FIG. 3. (a) Exciton-mediated hole-hole interaction $\gamma_{nn}$ as a function of the dimensionless $u$ and $j$, using Eq. (5). (b) Zoom of panel (a). The black dot in the top panel shows a realistic combination of parameters for Bi2212. The gray and moss-green areas indicate where perturbation theory cannot be applied.
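Several display equations of this appendix (the $S=0$ basis, the on-site Hamiltonian and its eigenvalues) did not survive extraction. As an illustration only, the following sketch diagonalizes the on-site two-hole $S=0$ block of a generic two-orbital Kanamori model, assuming the common choice $U' = U - 2J$ with pair-hopping amplitude $J$ (an assumption, not the paper's stated convention), and shows how a level shift of the $\Delta_J$ type arises:

```python
import numpy as np

# Illustration only (not the paper's matrix): on-site two-hole S=0 block of a
# generic two-orbital Kanamori model in the hole language, assuming U' = U - 2J
# and pair-hopping amplitude J.
# Basis: |a a>, |b b>, and the inter-orbital singlet |ab; S=0>.
def s0_block(eps_a, eps_b, U, J):
    Up = U - 2 * J
    return np.array([
        [2 * eps_a + U, J,             0.0],   # pair hopping couples |aa>, |bb>
        [J,             2 * eps_b + U, 0.0],
        [0.0,           0.0,           eps_a + eps_b + Up + J],
    ])

eps_a, Om_ab = 0.0, 2.0        # Omega_ab = eps_b - eps_a = 2 eV (RIXS value)
U, J = 8.8, 1.0                # parameters quoted from Ghijsen et al. [14]
E = np.linalg.eigvalsh(s0_block(eps_a, eps_a + Om_ab, U, J))
print(E)                       # the three S=0 levels
print((2 * eps_a + U) - E[0])  # shift of the lowest level below 2*eps_a + U,
                               # the analogue of Delta_J in Eq. (A.17)
```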
4,037.6
2023-08-07T00:00:00.000
[ "Physics" ]
Variational preparation of finite-temperature states on a quantum computer

The preparation of thermal equilibrium states is important for the simulation of condensed-matter and cosmology systems using a quantum computer. We present a method to prepare such mixed states with unitary operators, and demonstrate this technique experimentally using a gate-based quantum processor. Our method targets the generation of thermofield double states using a hybrid quantum-classical variational approach motivated by quantum-approximate optimization algorithms, without prior calculation of optimal variational parameters by numerical simulation. The fidelity of generated states to the thermal-equilibrium state smoothly varies from 99 to 75% between infinite and near-zero simulated temperature, in quantitative agreement with numerical simulations of the noisy quantum processor with error parameters drawn from experiment.

INTRODUCTION

The potential for quantum computers to simulate other quantum mechanical systems is well known [1], and the ability to represent the dynamical evolution of quantum many-body systems has been demonstrated [2]. However, the accuracy of these simulations depends on efficient initial state preparation within the quantum computer. Much progress has been made on the efficient preparation of non-trivial quantum states, including spin-squeezed states [3] and entangled cat states [4]. Studying phenomena like high-temperature superconductivity [5] requires preparation of thermal equilibrium states, or Gibbs states. Producing mixed states with unitary quantum operations is not straightforward, and has only recently begun to be explored [6,7].

In this work, we demonstrate the use of a variational quantum-classical algorithm to realize Gibbs states using (ideally unitary) gate control on a transmon quantum processor. Our approach is mediated by the generation of thermofield double (TFD) states, which are pure states sharing entanglement between two identical quantum systems with the characteristic that when one of the systems is considered independently (by tracing over the other), the result is a mixed state representing equilibrium at a specific temperature. TFD states are of interest not only in condensed matter physics but also for the study of black holes [8,9] and traversable wormholes [10,11]. We use a variational protocol [12] motivated by quantum-approximate optimization algorithms (QAOA) that relies on alternation of unitary intra- and inter-system operations to control the effective temperature, eliminating the need for a large external heat bath. Recently, verification of TFD state preparation was demonstrated on a trapped-ion quantum computer [6]. Our work experimentally demonstrates the first generation of finite-temperature states in a superconducting quantum computer by variational preparation of TFD states in a hybrid quantum-classical manner.

RESULTS

Theory

Consider a quantum system described by Hamiltonian $H$ with eigenstates $|j\rangle$ and corresponding eigenenergies $E_j$: $H|j\rangle = E_j |j\rangle$. The Gibbs state $\rho_{\mathrm{Gibbs}}$ of the system at inverse temperature $\beta$ is
$$\rho_{\mathrm{Gibbs}}(\beta) = \frac{1}{Z} \sum_j e^{-\beta E_j}\, |j\rangle\langle j|, \qquad Z = \sum_j e^{-\beta E_j}.$$
The corresponding TFD state is defined on two copies of the system, labelled $A$ and $B$, as
$$|\mathrm{TFD}(\beta)\rangle = \frac{1}{\sqrt{Z}} \sum_j e^{-\beta E_j/2}\, |j\rangle_B |j\rangle_A.$$
Tracing out either system yields the desired Gibbs state in the other.
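As a numerical sanity check of this construction (not code from the paper), the following sketch builds the Gibbs and TFD states for the $n = 2$ transverse-field Ising Hamiltonian introduced below and verifies the partial-trace property:

```python
import numpy as np
from functools import reduce

# Check: tracing system B out of |TFD(beta)> returns the Gibbs state of A.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
kron = lambda *ops: reduce(np.kron, ops)

g = 1.0
H = kron(Z, Z) + g * (kron(X, I) + kron(I, X))   # sign convention assumed
E, V = np.linalg.eigh(H)

beta = 1.0
w = np.exp(-beta * E / 2)                        # e^{-beta E_j / 2}
Zpart = np.sum(w ** 2)                           # partition function
rho_gibbs = (V * (w ** 2 / Zpart)) @ V.conj().T  # sum_j e^{-beta Ej}/Z |j><j|

tfd = sum(wj * np.kron(V[:, j], V[:, j]) for j, wj in enumerate(w))
tfd /= np.sqrt(Zpart)                            # |TFD> = sum_j w_j |j>_B |j>_A

rho = np.outer(tfd, tfd.conj()).reshape(4, 4, 4, 4)
rho_A = np.trace(rho, axis1=0, axis2=2)          # partial trace over B
print(np.allclose(rho_A, rho_gibbs))             # True
```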
To prepare the TFD states, we follow the variational protocol proposed by Wu and Hsieh [12] and consider two systems, each of size $n$. In the first step of the procedure, the TFD state at $\beta = 0$ is generated by creating Bell pairs $|\Phi^+\rangle_i = \left(|0\rangle_{B_i}|0\rangle_{A_i} + |1\rangle_{B_i}|1\rangle_{A_i}\right)/\sqrt{2}$ between corresponding qubits $i$ in the two systems. Tracing out either system then yields a maximally mixed state in the other.

The next steps to create the TFD state at finite temperature depend on the relevant Hamiltonian. Here, we choose the transverse-field Ising model in a one-dimensional chain of $n$ spins [13], with $n = 2$ [Fig. 1(a)]. We map spin up (down) to the computational state $|0\rangle$ ($|1\rangle$) of the corresponding transmon. The Hamiltonian describing system $A$ is
$$H_A = ZZ_A + g\, X_A,$$
where $ZZ_A = Z_{A2} Z_{A1}$, $X_A = X_{A2} + X_{A1}$, and $g$ is proportional to the transverse magnetic field. The Hamiltonian for system $B$ is the same. We focus on $g = 1$, where a phase transition is expected in the transverse-field Ising model at large $n$ [14].

We use a QAOA-motivated variational ansatz [12,15], where intra-system evolution is interleaved with evolution under a Hamiltonian enforcing interaction between the systems,
$$H_{BA} = XX_{BA} + ZZ_{BA},$$
where $XX_{BA} = X_{B2} X_{A2} + X_{B1} X_{A1}$, and analogously for $ZZ_{BA}$. For single-step state generation, the unitary operation describing the TFD protocol is of the form
$$U(\vec{\alpha}, \vec{\gamma}) = e^{-i\alpha_2 ZZ_{BA}}\, e^{-i\alpha_1 XX_{BA}}\, e^{-i\gamma_2 (X_A + X_B)}\, e^{-i\gamma_1 (ZZ_A + ZZ_B)}.$$
The variational parameters $\vec{\gamma} = (\gamma_1, \gamma_2)$ and $\vec{\alpha} = (\alpha_1, \alpha_2)$ are optimized by the hybrid classical-quantum algorithm to generate states closest to the ideal TFD states. A single step of intra- and inter-system interaction ideally produces the state $|\psi(\vec{\alpha}, \vec{\gamma})\rangle = U(\vec{\alpha}, \vec{\gamma})\, |\Phi^+\rangle_2 \otimes |\Phi^+\rangle_1$ [16].

The variational algorithm extracts the cost function after each state preparation. We engineer a cost function $C$ to be minimized when the generated state is closest to an ideal TFD state [16]; it combines the intra-system terms $\langle H_A + H_B \rangle$ with the inter-system term $\langle H_{BA} \rangle$ weighted by a $\beta$-dependent factor (see below and [16]). We compare the performance of this engineered cost function $C_{1.57}$ to that of the non-optimized cost function $C_{1.00}$, using the reduction of infidelity to the Gibbs state as the ultimate metric of success (see [17]). The engineered cost function achieves an average improvement of 54% across the $\beta$ range covered ($[10^{-2}, 10^{2}]$ in units of $1/g$), as well as a maximum improvement of up to 98% for intermediate temperatures ($\beta \sim 1$). Our choice of the class of cost functions to optimize lets us trade off a slight decrease in low-temperature performance against a significant increase in performance at intermediate temperatures. See [16] for further details on the theory.

The quantum portion of the algorithm prepares the state according to a given set of angles $(\vec{\alpha}, \vec{\gamma})$, performs the measurements, and returns these values to the classical portion. The classical portion then evaluates the cost function according to the returned measurements, performs classical optimization, and generates and returns the next set of variational angles to evaluate on the quantum portion.

FIG. 1. Principle and generation of the thermofield double state. (a) Two identical systems $A$ and $B$ are variationally prepared in an ideally pure, entangled joint state such that tracing out one system yields the Gibbs state on the other. (b) Corresponding qubits in the two systems are first pairwise entangled to produce the $\beta = 0$ TFD state. Next, intra- and inter-system Hamiltonians are applied with optimized variational angles $(\vec{\alpha}, \vec{\gamma})$ to approximate the TFD state corresponding to the desired temperature.

FIG. 2. Device and optimized quantum circuit. (a) Optical image of the transmon processor used in this experiment, with false color highlighting the four transmons employed and the dedicated bus resonators providing their nearest-neighbor coupling. (b) Optimized circuit equivalent to that in Fig. 1(b) and conforming to the native gate set in our architecture. All variational parameters are mapped onto rotation axes and angles of single-qubit gates. Tomographic pre-rotations $R_1$-$R_4$ are added to reconstruct the terms in the cost function $C$ and to perform two-qubit state tomography of each system following optimization.
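A minimal numerical sketch of this single-step ansatz follows (using the operator definitions above; the exponential ordering and the assignment of $\gamma_1, \gamma_2$ to the $ZZ$ and $X$ terms are assumptions, since the display equation was lost):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Qubit ordering (B2, A2, B1, A1), so each Bell pair spans adjacent factors.
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])

def at(P, *sites):            # embed operator P at the given sites (of 4)
    ops = [I2] * 4
    for s in sites:
        ops[s] = P
    return reduce(np.kron, ops)

B2, A2, B1, A1 = 0, 1, 2, 3
ZZ_A, ZZ_B = at(Z, A2, A1), at(Z, B2, B1)
X_A,  X_B  = at(X, A2) + at(X, A1), at(X, B2) + at(X, B1)
XX_BA = at(X, B2, A2) + at(X, B1, A1)
ZZ_BA = at(Z, B2, A2) + at(Z, B1, A1)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
psi0 = np.kron(bell, bell)    # beta = 0 TFD: Bell pairs on (B2,A2), (B1,A1)

def psi(alpha, gamma):        # single-step QAOA-like ansatz (form assumed)
    U = (expm(-1j * alpha[1] * ZZ_BA) @ expm(-1j * alpha[0] * XX_BA) @
         expm(-1j * gamma[1] * (X_A + X_B)) @
         expm(-1j * gamma[0] * (ZZ_A + ZZ_B)))
    return U @ psi0

# At (alpha, gamma) = 0 the state is a +2 eigenstate of XX_BA and of ZZ_BA,
# so <H_BA> = 4 and the beta = 0 cost -<H_BA> attains its minimum of -4.
print(np.real(psi0.conj() @ (XX_BA + ZZ_BA) @ psi0))   # 4.0
```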
Experiment

We implement the algorithm using four of seven transmons in a monolithic quantum processor [Fig. 2(a)]. The four transmons (labelled $A_1$, $A_2$, $B_1$, and $B_2$) have square connectivity provided by coupling bus resonators, and are thus ideally suited for implementing the circuit in Fig. 1(b). Each transmon has a microwave-drive line for single-qubit gating, a flux-bias line for two-qubit controlled-Z (CZ) gates, and a dispersively coupled resonator with dedicated Purcell filter [18,19]. The four transmons can be simultaneously and independently read out by frequency multiplexing, using the common feedline connecting to all Purcell filters. All transmons are biased to their flux-symmetry point (i.e., sweetspot [20]) using static flux bias to counter residual offsets. Device details and a summary of measured transmon parameters are provided in [17].

In order to realize the theoretical circuit in Fig. 1(b), we first map it to the optimized depth-13 equivalent circuit shown in Fig. 2(b), which conforms to the native gate set in our control architecture. This gate set consists of arbitrary single-qubit rotations about any equatorial axis of the Bloch sphere, and CZ gates between nearest-neighbor transmons. Conveniently, all variational angles are mapped to either the axis or angle of single-qubit rotations. Further details on the compilation steps are reported in the Methods section and [17]. Basis pre-rotations are added at the end of the circuit, first to extract all the terms in the cost function $C$ and finally to perform two-qubit state tomography of each system.

Prior to implementing any variational optimizer, it is helpful to build a basic understanding of the cost-function landscape. To this end, we investigate the cost function $C$ at $\beta = 0$ using two-dimensional cuts: we sweep $\vec{\gamma}$ while keeping $\vec{\alpha} = 0$ to study the effect of $U_{\mathrm{intra}}$, and vice versa to study the effect of $U_{\mathrm{inter}}$. Note that owing to the $\beta^{-1.57}$ divergence, the cost function reduces to $-\langle H_{BA} \rangle$ in the $\beta = 0$ limit. Consider first the landscape for an ideal quantum processor, which is possible to compute for our system size. The $\vec{\gamma}$ landscape at $\vec{\alpha} = 0$ is $\pi$-periodic in both directions due to the invariance of $|\mathrm{TFD}(\beta = 0)\rangle$ under bit-flip ($X$) and phase-flip ($Z$) operations on all qubits. The cost function is minimized to $-4$ at even multiples of $\pi/2$ in $\gamma_1$ and $\gamma_2$: $|\mathrm{TFD}(\beta = 0)\rangle$ is a simultaneous eigenstate of $XX_{BA}$ and $ZZ_{BA}$ with eigenvalue $+2$ due to the symmetry of the constituting Bell states $|\Phi^+\rangle_i$. In turn, the cost function is maximized to $+4$ at odd multiples of $\pi/2$, at which the $|\Phi^+\rangle_i$ are transformed to singlets. The $\vec{\alpha}$ landscape at $\vec{\gamma} = 0$ is constant, reflecting that $|\mathrm{TFD}(\beta = 0)\rangle$ is a simultaneous eigenstate of $XX_{BA}$ and $ZZ_{BA}$ and thus also of any exponentiation of these operators. The corresponding experimental landscapes show qualitatively similar behavior.

FIG. 3. Landscape of the cost function for infinite temperature. Panels (a) and (c): landscape cuts obtained in noiseless simulation and in experiment, respectively, when varying control angles $\vec{\gamma}$ while keeping $\vec{\alpha} = 0$ (the ideal solution value). Panels (b) and (d): corresponding cuts obtained when varying control angles $\vec{\alpha}$ while keeping $\vec{\gamma} = 0$ (the ideal solution value). See text for details. These landscape cuts are sampled at 100 points and interpolated using the Python package adaptive [21].
The $\vec{\gamma}$ landscape clearly shows the $\pi$ periodicity with respect to both angles, albeit with reduced contrast. The $\vec{\alpha}$ landscape is not strictly constant, showing weak structure, particularly with respect to $\alpha_2$. These experimental deviations reflect underlying errors in our noisy intermediate-scale quantum (NISQ) processor, which include transmon decoherence, residual $ZZ$ coupling at the bias point, and leakage during CZ gates. We discuss these error sources in detail further below.

The challenge faced by the variational algorithm is to balance the mixture of the states at each $\beta$, in order to generate the corresponding Gibbs state. When working with small systems, it is possible and tempting to predetermine the variational parameters at each $\beta$ by a prior classical simulation and optimization for an ideal or noisy quantum processor. We refer to this common practice [6,22] as cheating, since this approach does not scale to larger problem sizes and skips the main quality of variational algorithms: to arrive at the parameters variationally. Here, we avoid cheating altogether by starting at $\beta = 0$ with the obvious optimal variational parameters for an ideal processor ($\vec{\gamma} = \vec{\alpha} = 0$) as initial guess, and using the experimentally optimized $(\vec{\alpha}, \vec{\gamma})$ at the last $\beta$ as an initial guess when stepping $\beta$ in the range $[0, 5]$ (in units of $1/g$). This approach only relies on the assumption that solutions (and their corresponding optimal variational angles) vary smoothly with $\beta$. At each $\beta$, we use the Gradient-Based Random-Tree optimizer of the scikit-optimize [23] Python package to minimize $C$, using 4096 averages per tomographic pre-rotation necessary for the calculation of $C$. After 200 iterations, the optimization is stopped. The best point is remeasured two times, each with 16384 averages per tomographic pre-rotation, as needed to perform two-qubit quantum state tomography of each system. A new optimization is then started for the next $\beta$, using the previous solution as the initial guess.
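A sketch of this warm-started $\beta$-stepping strategy follows, with scipy's Nelder-Mead standing in for the Gradient-Based Random-Tree optimizer of scikit-optimize; `cost(params, beta)` is an assumed callable that prepares the state on the (real or simulated) processor and returns the measured cost $C$:

```python
import numpy as np
from scipy.optimize import minimize

def sweep(cost, betas, n_params=4, maxiter=200):
    params = np.zeros(n_params)          # ideal solution at beta = 0
    solutions = {}
    for beta in betas:                   # warm start: reuse the last optimum
        res = minimize(cost, params, args=(beta,), method="Nelder-Mead",
                       options={"maxiter": maxiter})
        params = res.x                   # initial guess for the next beta
        solutions[beta] = res.x
    return solutions

# Example (hypothetical): sols = sweep(my_cost, np.linspace(0.0, 5.0, 26))
```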
To begin comparing the optimized states $\rho_{\mathrm{Exp}}$ produced in experiment to the target Gibbs states $\rho_{\mathrm{Gibbs}}$, we first visualize their density matrices (in the computational basis) for a sampling of the $\beta$ range covered (Fig. 4). Starting from the maximally mixed state $II/4$ at $\beta = 0$, the Gibbs state monotonically develops coherences (off-diagonal terms) between all states as $\beta$ increases. Coherences between states of equal (opposite) parity have $0$ ($\pi$) phase throughout. Populations (diagonal terms) monotonically decrease (increase) for even (odd) parity states. By $\beta = 5$, the Gibbs state is very close to the pure state $|\Upsilon\rangle\langle\Upsilon|$, where $|\Upsilon\rangle$ is the ground state of $H$.

FIG. 4. Qualitative comparison of optimized states to the Gibbs state. Visualization of the density matrices (in the computational basis) for the targeted Gibbs states $\rho_{\mathrm{Gibbs}}$ (left) and the optimized experimental states $\rho_{\mathrm{Exp}}$ (right) at (a-b) $\beta = 0$, (c-d) $\beta = 1$ and (e-f) $\beta = 5$. As $\beta$ increases, the Gibbs state monotonically develops coherence between all states, with phase $0$ ($\pi$) for states with the same (opposite) parity. Populations in even (odd) parity states decrease (increase). The optimized experimental states show qualitatively similar trends.

The noted trends are reproduced in $\rho_{\mathrm{Exp}}$. However, the matching is evidently not perfect, and to address this we proceed to a quantitative analysis. We employ two metrics to quantify experimental performance: the fidelity $F$ of $\rho_{\mathrm{Exp}}$ to $\rho_{\mathrm{Gibbs}}$ and the purity $P$ of $\rho_{\mathrm{Exp}}$, given by
$$F = \left(\mathrm{Tr}\sqrt{\sqrt{\rho_{\mathrm{Gibbs}}}\,\rho_{\mathrm{Exp}}\sqrt{\rho_{\mathrm{Gibbs}}}}\right)^{2}, \qquad P = \mathrm{Tr}\left(\rho_{\mathrm{Exp}}^{2}\right).$$
At $\beta = 0$, $F = 99\%$ and $P = 0.262$, revealing a very close match to the ideal maximally mixed state. However, $F$ smoothly worsens with increasing $\beta$, decreasing to 92% at $\beta = 1$ and 75% by $\beta = 5$. Simultaneously, $P$ does not closely track the increase of purity of the Gibbs state: by $\beta = 5$, the Gibbs state is nearly pure, but $P$ peaks at 0.601.
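The two metrics are the standard Uhlmann fidelity and purity; a minimal sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    s = sqrtm(sigma)
    return np.real(np.trace(sqrtm(s @ rho @ s))) ** 2

def purity(rho):
    return np.real(np.trace(rho @ rho))

rho_mm = np.eye(4) / 4            # ideal two-qubit maximally mixed state
print(purity(rho_mm))             # 0.25; cf. the measured P = 0.262 at beta = 0
print(fidelity(rho_mm, rho_mm))   # 1.0
```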
In an effort to quantitatively explain these discrepancies, we perform a full density-matrix simulation of a four-qutrit system using quantumsim [24]. Our simulation incrementally adds calibrated errors for our NISQ processor, starting from an ideal processor (model 0): transmon relaxation and dephasing times at the bias point (model 1), increased dephasing from flux noise during CZ gates (model 2), crosstalk from residual $ZZ$ coupling at the bias point (model 3), and transmon leakage to the second-excited state during CZ gates (model 4). The experimental input parameters for each increment are detailed in the Methods section and [17]. The added curves in Fig. 5 clearly show that model 4 quantitatively matches the observed dependence of $F$ and $P$ over the full $\beta$ range, and identify leakage from CZ gates as the dominant error.

FIG. 5. Performance of the variational algorithm. (a) Fidelity to the Gibbs state as a function of inverse temperature $\beta$ for experimental states obtained by optimization and cheating. (b) Purity of experimental states as a function of $\beta$, and comparison to the purity of the Gibbs state. Added curves in both panels are obtained by numerical simulation of a noisy quantum processor with incremental error models based on calibrated error sources for our device: qubit relaxation and dephasing times, increased dephasing from flux noise during CZ gates, residual $ZZ$ crosstalk at the bias point, and leakage during CZ gates. Leakage is identified as the dominant error source.

DISCUSSION

The power of variational algorithms relies on their adaptability: the optimizer is meant to find its way through the variational parameter space, adapting to mitigate coherent errors as allowed by the chosen parametrization. For completeness, we compare in Fig. 5 the performance achieved with our variational strategy to that achieved by cheating, i.e., using the pre-calculated optimal $(\vec{\alpha}, \vec{\gamma})$ for an ideal processor. Our variational approach, whose sole input is the obvious initial guess at $\beta = 0$, achieves comparable performance at all $\beta$. This aspect is crucial when considering the scaling with problem size, as classical pre-simulations will require prohibitive resources beyond $\sim 50$ qubits, but variational optimizers would not. Given the dominant role of leakage as the error source, which cannot be compensated by the chosen parametrization, it is unsurprising in hindsight that both approaches yield nearly identical performance.

In summary, we have presented the first generation of finite-temperature Gibbs states in a quantum computer by variational targeting of TFD states in a hybrid quantum-classical manner. The algorithm successfully prepares mixed states for the transverse-field Ising model with Gibbs-state fidelity ranging from 99% to 75% as $\beta$ increases from 0 to $5/g$. The loss of fidelity with decreasing simulated temperature is quantitatively matched by a numerical simulation with incremental error models based on experimental input parameters, which identifies leakage in CZ gates as dominant. This work demonstrates the suitability of variational algorithms on NISQ processors for the study of finite-temperature problems of interest, ranging from condensed-matter physics to cosmology. Our results also highlight the critical importance of continuing to reduce leakage in two-qubit operations when employing weakly anharmonic multi-level systems such as the transmon. During the preparation of this manuscript, we became aware of related experimental work [22] on a trapped-ion system, applying a non-variationally prepared TFD state to the calculation of a critical point.

METHODS

We map the theoretical circuit in Fig. 1(b) to an equivalent circuit conforming to the native gate set in our control architecture, exploiting virtual Z-gate compilation [25] to minimize circuit depth. Single-qubit rotations $R_{XY}(\phi, \theta)$, by arbitrary angle $\theta$ around any equatorial axis $\cos(\phi)\hat{x} + \sin(\phi)\hat{y}$ of the Bloch sphere, are realized using 20 ns DRAG pulses [26,27]. Two-qubit CZ gates are realized by baseband flux pulsing [28,29] using the Net Zero scheme [30,31], completing in 80 ns. In the optimized circuit [Fig. 2(b)], CZ gates only appear in pairs. These pairs are simultaneously executed and tuned as one block. Single-qubit rotations $R_1$-$R_4$ are used to change the measurement bases, as required to measure $C$ during optimization and to perform two-qubit tomography [32] in each system to extract $F$ and $P$. A summary of single- and two-qubit gate performance and a step-by-step derivation of the optimized circuit are provided in [17].

The models used to simulate the performance of the algorithm are incremental: model $k$ contains all the noise mechanisms in model $k - 1$ plus one more, which we use for labeling in Fig. 5. Model 0 corresponds to an ideal quantum processor without any error. Model 1 adds the relaxation and dephasing times measured for the four transmons at their bias point. These times are tabulated in [17]. Model 2 adds the increased dephasing that flux-pulsed transmons experience during CZ gates. For this we extrapolate the echo coherence time $T_2^{\mathrm{echo}}$ to the CZ flux-pulse amplitude using a $1/f$ noise model [33,34] with amplitude $\sqrt{A} = 1\,\mu\Phi_0$. This noise model is implemented following [35]. Model 3 adds the idling crosstalk due to residual $ZZ$ coupling between transmons. This model expands on the implementation of idling evolution used for coherence times: the circuit gates are simulated to be instantaneous, and the idling evolution of the system is trotterized. In this case, the residual $ZZ$ coupling operator uses the residual $ZZ$ coupling strengths measured at the bias point [17]. Finally, model 4 adds leakage to the CZ gates, quantified by randomized benchmarking with modifications to measure leakage [30,36], and implemented in simulation using the procedure described in [35]. Leakage to transmon second-excited states is found essential to quantitatively match the performance of the algorithm by simulation. To reach this conclusion it was necessary to first thoroughly understand how leakage affects the two-qubit tomographic reconstruction procedure employed. The readout calibration only considers computational states of the two transmons involved. Moreover, basis pre-rotations only act on the qubit subspace, leaving the population in leaked states unchanged. Using an overcomplete set of basis pre-rotations for state tomography, comprising both positive ($X$, $Y$, $Z$) and negative ($-X$, $-Y$, $-Z$) bases for each transmon, leads to the misdiagnosis of a leaked state as a maximally mixed qubit state for that transmon. This is explained in [17].

This supplement provides additional information in support of statements and claims made in the main text.
Section I presents the optimization of the engineered cost function. Section II provides a step-by-step description of the transformation of the circuit in Fig. 1(b) into the equivalent, optimized circuit in Fig. 2(b). Section III provides further information on the device and transmon parameters measured at the bias point. Section IV presents a detailed description of the fridge wiring and electronic-control setup. Section V summarizes single- and two-qubit gate performance. Section VI characterizes residual $ZZ$ coupling at the bias point. Section VII details the measurement procedures used for cost-function evaluation and for two-qubit state tomography. Section VIII explains the impact of transmon leakage on two-qubit tomography. Section IX explains the package and error models used in the numerical simulation.

I. OPTIMIZATION OF THE COST FUNCTION

We optimize the cost function to maximize the fidelity of the variationally optimized state $|\psi(\vec{\alpha}, \vec{\gamma})\rangle$ to the TFD state $|\mathrm{TFD}(\beta)\rangle$ (assuming an ideal processor). We consider a class of cost functions $C_\varsigma(\beta)$, in which the inter-system term is weighted by $\beta^{-\varsigma}$, and perform a nested optimization of the parameter $\varsigma$ to minimize the infidelity of the variationally optimized state to the TFD state over a range of inverse temperatures. We define the minimization quantity of interest $\Xi$ as this infidelity accumulated over the $\beta$ range, where $O$ is the set of operators entering the cost function and $|\Psi(\varsigma, \beta)\rangle$ is the state optimized using $C_\varsigma(\beta)$. We find the minimum value of $\Xi$ at $\varsigma = 1.57$ [see Fig. S1(a)].

We compare the performance of the optimized cost function $C_{1.57}$ to that used in prior work, $C_{1.00}$, in two ways. First, we compare the simulated infidelity to the TFD state of states optimized with both cost functions in the range $\beta \in [0.1, 10]$. The optimized cost function $C_{1.57}$ performs better over the entire range. Second, we compare the simulated fidelity $F$ of the reduced state of system $A$ to the targeted Gibbs state. As shown in Fig. S2, using $C_{1.57}$ significantly reduces the infidelity $1 - F$ for $\beta \lesssim 3$. We also observe that the purity of the reduced state tracks that of the Gibbs state more closely when using $C_{1.57}$.

II. CIRCUIT COMPILATION

In this section we present the step-by-step transformation of the circuit in Fig. 1(b) into the equivalent circuit in Fig. 2(b), realizable with the native gate set in our control architecture.

Exponentiation of ZZ and XX: We first substitute the standard decomposition of the operations $e^{-i\phi ZZ/2}$ and $e^{-i\phi XX/2}$ using controlled-NOT (CNOT) gates and single-qubit rotations, shown in Fig. S3. The decomposition of $e^{-i\phi ZZ/2}$ uses an initial CNOT to transfer the two-qubit parity onto the target qubit, followed by a rotation $R_Z(\phi)$ on this target qubit, and a final CNOT inverting the parity. The decomposition of $e^{-i\phi XX/2}$ simply dresses the transformations above by pre- and post-rotations transforming from the $X$ basis to the $Z$ basis and back, respectively. The result of these substitutions is the depth-14 circuit of Fig. S4 (Compilation step 1), obtained by replacing the $ZZ$ and $XX$ exponentiation steps in Fig. 1(b) with the circuits of Fig. S3.

Compilation using native gate set: The native gate set consists of single-qubit rotations of the form $R_{XY}(\phi, \theta)$ and CZ gates. We compile every CNOT in Fig. S4 as a circuit using native gates, shown in Fig. S5; note that $R_Y(\theta) = R_{XY}(90^\circ, \theta)$. Applying this replacement throughout yields the circuit of Fig. S6.
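A quick numerical check of this standard decomposition (a sketch, not code from the supplement):

```python
import numpy as np
from scipy.linalg import expm

# CNOT . Rz(phi) on target . CNOT equals exp(-i phi ZZ/2); conjugating with
# Hadamards on both qubits gives the XX version.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

phi = 0.731                                   # arbitrary test angle
lhs = CNOT @ np.kron(I, expm(-1j * phi * Z / 2)) @ CNOT
print(np.allclose(lhs, expm(-1j * phi * np.kron(Z, Z) / 2)))        # True

lhs_xx = np.kron(H, H) @ lhs @ np.kron(H, H)  # basis pre/post rotations
print(np.allclose(lhs_xx, expm(-1j * phi * np.kron(X, X) / 2)))     # True
```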
Reduction of circuit depth: Exploiting the commutations in Fig. S7, together with simple $R_Y(\pm 90^\circ)$ identities, we can bring two identical pairs of CZ gates back-to-back and cancel them out (since CZ is its own inverse). This leads to the depth-12 circuit in Fig. S8 (Compilation step 3), obtained by applying the commutation rule in Fig. S7 and simple identities to the circuit of Fig. S6.

Elimination of Z rotations: All the $R_Z$ gates in Fig. S8 can be propagated to the beginning of the circuit using the commutation relation between $R_Z$ and $R_{XY}$ (a preceding $Z$ rotation simply shifts the equatorial-axis angle $\phi$ of the $R_{XY}$ rotation), and noting that $R_Z$ commutes with CZ. The state $|0\rangle$ is an eigenstate of all $R_Z$ rotations, so we can ignore all $R_Z$ gates at the start of the circuit because they only produce a global phase. This leads to the final depth-11 circuit shown in Fig. S9 (Compilation step 4, obtained by propagating all $R_Z$ gates in Fig. S8 to the beginning of the circuit and then eliminating them), which matches that of Fig. 2(b) upon adding measurement pre-rotations and final measurements on all qubits.

III. DEVICE AND TRANSMON PARAMETERS AT BIAS POINT

Our experiment makes use of four transmons with square connectivity within a seven-transmon processor. Figure S10 provides an optical image zoomed in to this transmon patch. Each transmon has a flux-control line for two-qubit gating, a microwave-drive line for single-qubit gating, and a dispersively coupled resonator with Purcell filter for readout [S1, S2]. The readout-resonator/Purcell-filter pair for $B_1$ is visible at the center of this image. A vertically running common feedline connects to all Purcell filters, enabling simultaneous readout of the four transmons by frequency multiplexing. Air-bridge crossovers enable the routing of all input and output lines to the edges of the chip, where they connect to a printed circuit board through aluminum wirebonds. The four transmons are biased to their sweetspot using static flux bias to counter any residual offset. Table S1 presents the measured transmon parameters at this bias point.

FIG. S10. Optical image of the device, zoomed in to the four transmons used in the experiment. Added false color highlights the transmon pair of system $A$ (blue, $A_1$, $A_2$), the transmon pair of system $B$ (red, $B_1$, $B_2$), and the dedicated bus resonators used to achieve intra-system (red and blue) and inter-system coupling (purple).

IV. EXPERIMENTAL SETUP

The device was mounted on a copper sample holder attached to the mixing chamber of a Bluefors XLD dilution refrigerator with 12 mK base temperature. For radiation shielding, the cold finger was enclosed by a copper can coated with a mixture of Stycast 2850 and silicon carbide granules (15-1000 nm diameter) used for infrared absorption. To shield against external magnetic fields, the can was enclosed by an aluminum can and two Cryophy cans. Microwave-drive lines were filtered using $\sim 60$ dB of attenuation with both commercial cryogenic attenuators and home-made Eccosorb filters for infrared absorption. Flux-control lines were also filtered using commercial low-pass filters and Eccosorb filters with stronger absorption. Flux pulses for CZ gates were coupled to the flux-bias lines via room-temperature bias tees. Amplification of the readout signal was done in three stages: a travelling-wave parametric amplifier (TWPA, provided by MIT-LL [S3]) located at the mixing-chamber plate, a Low Noise Factory HEMT at the 4 K plate, and finally a Miteq amplifier at room temperature. Room-temperature electronics used both commercial hardware and custom hardware developed in QuTech.
Rohde & Schwarz SGS100 sources provided all microwave signals for single-qubit gates and readout. Home-built current sources (IVVI racks) provided static flux biasing. QuTech arbitrary waveform generators (QWG) generated the modulation envelopes for single-qubit gates, and a Zurich Instruments HDAWG-8 generated the flux pulses for CZ gates. A Zurich Instruments UHFQA was used to perform independent readout of the four qubits. QuTech mixers were used for all frequency up- and down-conversion. The QuTech Central Controller (QCC) coordinated the triggering of the QWG, HDAWG-8 and UHFQA. All measurements were controlled at the software level with the QCoDeS [S4] and PycQED [S5] packages. The QuTech OpenQL compiler translated high-level Python code into the eQASM code [S6] forming the input to the QCC.

V. GATE PERFORMANCE

The gate set in our quantum processor consists of single-qubit rotations $R_{XY}(\phi, \theta)$ and two-qubit CZ gates. Single-qubit rotations are implemented as DRAG-type microwave pulses with total duration $4\sigma = 20$ ns, where $\sigma$ is the Gaussian width of the main-quadrature Gaussian pulse envelope. We characterize single-qubit gate performance by single-qubit Clifford randomized benchmarking (100 seeds per run) with modifications to detect leakage, keeping all other qubits in $|0\rangle$. Two-qubit CZ gates are implemented using the Net Zero flux-pulsing scheme, with strong pulses acquiring the conditional phase in 70 ns and weak pulses nulling single-qubit phases in 10 ns. Intra-system and inter-system CZ gates were simultaneously tuned in pairs (using conditional-oscillation experiments as in [S7]) in order to reduce circuit depth. However, we characterize CZ gate performance individually using two-qubit interleaved randomized benchmarking (100 seeds per run) with modifications to detect leakage, keeping the other two qubits in $|0\rangle$. Figure S12 presents the extracted infidelity and leakage for single-qubit gates (circles) and CZ gates (squares).

VI. RESIDUAL ZZ COUPLING AT BIAS POINT

Coupling between nearest-neighbor transmons in our device is realized using dedicated coupling bus resonators. The non-tunability of these couplers leads to residual $ZZ$ coupling between the transmons at the bias point. We quantify the residual $ZZ$ coupling between every pair of transmons as the shift in frequency of one when the state of the other changes from $|0\rangle$ to $|1\rangle$. We extract this frequency shift using the simple time-domain measurement shown in Fig. S13(a): we perform a standard echo experiment on one qubit (the echo qubit), but add a $\pi$ pulse on the other qubit (the control qubit) halfway through the free-evolution period, simultaneous with the refocusing $\pi$ pulse on the echo qubit. An example measurement with $B_2$ as the echo qubit and $B_1$ as the control is shown in Fig. S13(b). The complete results for all echo-qubit/control-qubit combinations are presented as a matrix in Fig. S13(c). We observe that the residual $ZZ$ coupling is highest between $B_1$ and the mid-frequency qubits $B_2$ and $A_1$. This is consistent with the higher (lower) absolute detuning and the lower (higher) transverse coupling between $A_2$ ($B_1$) and the mid-frequency transmons.

VII. MEASUREMENT MODELS, COST FUNCTION EVALUATION, AND TWO-QUBIT STATE TOMOGRAPHY

In this section we present detailed aspects of measurement as needed for evaluation of $C$ and for performing two-qubit state tomography.
We begin by characterizing the fidelity and crosstalk of simultaneous single-qubit measurements using the cross-fidelity matrix as defined in [S8],
$$F_{ji} = 1 - P\left(e_j \mid I_i\right) - P\left(g_j \mid \pi_i\right),$$
where $e_j$ ($g_j$) denotes the assignment of qubit $j$ to the $|1\rangle$ ($|0\rangle$) state, and $\pi_i$ ($I_i$) denotes the preparation of qubit $i$ in $|1\rangle$ ($|0\rangle$). The measured cross-fidelity matrix for the four qubits is shown in Fig. S14. From the diagonal element $F_{ii}$ we extract the average assignment fidelity for qubit $i$, the latter given by $1/2 + F_{ii}/2$ and quoted in Table S1. The magnitude of the off-diagonal elements $F_{ji}$ with $j \neq i$ quantifies readout crosstalk, and is below 2% for all pairs. This low level of crosstalk justifies the simple measurement models that we now describe.

A. Measurement models

The evaluation of the cost function and two-qubit state tomography require estimating the expected value of single-qubit and two-qubit Pauli operators. We do so by least-squares linear inversion of the experimental averages of single-transmon measurements and two-transmon correlation measurements. When measuring transmon $i$, we 1-bit discretize the integrated analog signal for its readout channel at every shot, outputting $m_i = +1$ when declaring the transmon in $|0\rangle$ and $m_i = -1$ when declaring it in $|1\rangle$. The expected value of $m_i$ is given by $\langle m_i \rangle = \mathrm{Tr}(M_i \rho_{\mathrm{Exp}})$, where, in view of the low crosstalk and truncating to three transmon levels, the measurement operator is modelled as
$$M_i = c^{i}_{I}\, I_i + c^{i}_{Z}\, Z_i + c^{i}_{2}\, |2\rangle_i\langle 2|_i,$$
with real-valued $c^{i}_{I}, c^{i}_{Z} \in [-1, 1]$.

When correlating measurements on transmons $i$ and $j$, we compute the product of the 1-bit discretized outputs for each transmon. The expected value of $m_{ji} = m_j \times m_i$ is given by $\langle m_{ji} \rangle = \mathrm{Tr}(M_{ji} \rho_{\mathrm{Exp}})$, where the measurement operator (also in view of the low crosstalk) is modelled as a sum of tensor products of the operators $I$, $Z$ and $|2\rangle\langle 2|$ on the two transmons, with real-valued coefficients $c^{ji}_{lk} \in [-1, 1]$. In experiment, we calibrate the coefficients $c^{i}_{P}$ and $c^{ji}_{PQ}$ ($P, Q \in \{I, Z\}$) by linear inversion of the experimental averages of single-transmon and correlation measurements with the four transmons prepared in each of the 16 computational states (for which $\langle P_i \rangle = \pm 1$ and $\langle Q_j P_i \rangle = \pm 1$). We do not calibrate the coefficients $c^{i}_{2}$, $c^{ji}_{2P}$, $c^{ji}_{Q2}$, or $c^{ji}_{22}$. Measurement pre-rotations change the qubit part of the measurement operator accordingly (allowing, e.g., $\langle X_i \rangle$ to be estimated from $m_i$), but they do not transform the projectors $|2\rangle_i\langle 2|_i$, as they only act on the qubit subspace.

B. Cost function evaluation

To evaluate the cost function $C$, we must estimate the expected value of all single-qubit Pauli operators $X_i$, the two intra-system two-qubit Pauli operators $Z_j Z_i$, and the inter-system two-qubit Pauli operators $X_j X_i$ and $Z_j Z_i$ [the latter only between corresponding qubits in the two systems (e.g., $B_1$ and $A_1$)]. We estimate these by linear inversion of the experimental averages (based on 4096 measurements) of single-transmon and relevant correlation measurements, with the transmons measured in the bases specified in Table S2. As an example, Fig. S15 shows the raw data for the estimation of $C$ with variational parameters $(\vec{\alpha}, \vec{\gamma}) = 0$. Note that every evaluation of the cost function includes readout calibration measurements to extract the measurement-operator coefficients $c^{i}_{P}$ and $c^{ji}_{PQ}$ ($P, Q \in \{I, Z\}$).
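A minimal sketch of this calibrate-and-invert procedure for a single transmon, neglecting the uncalibrated $|2\rangle$ (leakage) term as described above (all numerical values are hypothetical):

```python
import numpy as np

# Model: <m_i> = c_I + c_Z <Z_i>.
# Calibration: prepare |0> (<Z> = +1) and |1> (<Z> = -1), record the mean
# 1-bit-discretized outputs.
m0, m1 = 0.97, -0.93
A = np.array([[1.0,  1.0],    # rows: (1, <Z>) for the two preparations
              [1.0, -1.0]])
c_I, c_Z = np.linalg.lstsq(A, np.array([m0, m1]), rcond=None)[0]

# Estimation: invert the calibrated model for an unknown state.
m_meas = 0.40                 # hypothetical average over 4096 shots
Z_est = (m_meas - c_I) / c_Z
print(c_I, c_Z, Z_est)        # c_I = 0.02, c_Z = 0.95, <Z> estimate = 0.4
```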
C. Two-qubit state tomography

After optimization, we perform two-qubit state tomography of each system separately to assess performance. To do this, we obtain experimental averages (from 16384 shots) of single-transmon and correlation measurements using an over-complete set of measurement bases. This set consists of the 36 bases obtained by using all combinations of bases for each transmon $i$ and $j$, drawn from the set $\{+X, +Y, +Z, -X, -Y, -Z\}$. The expectation values of single-qubit and two-qubit Pauli operators are then obtained from these averages by least-squares linear inversion.

FIG. S15. Raw data measured in the bases of Table S2 and for calibration of the measurement operators (following preparation of the 16 computational states of the four transmons). Panels show the experimental averages of (a-d) single-transmon measurements, (e,h) intra-system correlations and (f,g) inter-system correlations.

VIII. IMPACT OF LEAKAGE ON TWO-QUBIT TOMOGRAPHY

Our linear inversion procedure for converting measurement averages into estimates of the expected value of one- and two-qubit Pauli operators is only valid when the leaked-state matrix elements $|l_j k_i\rangle\langle l_j k_i|$ with $k$ or $l \geq 2$ vanish, i.e., when there is no leakage on either transmon. It is therefore essential, particularly for simulation model 4, to understand precisely how leakage in either or both transmons infiltrates our extraction of the two-qubit density matrix $\rho_{\mathrm{Exp}}$.

First we consider the estimation of expected values of single-qubit Pauli operators $P_i$, taking $Z_i$ as a concrete example, from the average of all measurements on this transmon for the basis combinations (6 in total) in which this specific transmon is measured in the $+Z$ basis, and similarly for the $-Z$ basis. We then consider the two-qubit case, with averages taken for the measurement-basis combinations $(+X_j, +Z_i)$, $(+X_j, -Z_i)$, $(-X_j, +Z_i)$ and $(-X_j, -Z_i)$. Our least-squares linear inversion estimates $\langle X_j Z_i \rangle$ from the balanced combination of these 4 experimental averages,
$$\langle X_j Z_i \rangle_{\mathrm{est}} \propto \bar{m}_{+X_j,+Z_i} - \bar{m}_{+X_j,-Z_i} - \bar{m}_{-X_j,+Z_i} + \bar{m}_{-X_j,-Z_i}.$$
Clearly, owing to the balanced nature of this linear combination (all coefficients of equal magnitude, 2 positive and 2 negative), this estimator is not biased by $c^{ji}_{2I}$, $c^{ji}_{2Z}$, $c^{ji}_{I2}$, $c^{ji}_{Z2}$, and $c^{ji}_{22}$. In other words, the average of $\langle X_j Z_i \rangle_{\mathrm{est}}$ is independent of the value of these coefficients.

We are finally in a position to describe how leakage in the two-transmon system infiltrates our two-qubit tomographic reconstruction procedure. Evidently, the complete description of the two-transmon system would be a two-qutrit density matrix $\rho_{\mathrm{2Qutrit}}$, but our procedure returns a two-qubit density matrix $\rho_{\mathrm{Exp}}$. It is therefore key to understand how elements of $\rho_{\mathrm{2Qutrit}}$ are mapped onto $\rho_{\mathrm{Exp}}$. Table S3 summarizes these mappings and Fig. S16 illustrates them, including several examples. We have verified the mappings by exactly replicating the tomographic procedure in our numerical simulation using quantumsim. To incorporate this into the simulations, we have made use of the experimentally obtained measurement coefficients $c^{i}$. These leakage mappings have also been used when adding leakage in simulation model 4.

TABLE S3. Mapping of the elements $|l_j, k_i\rangle\langle l_j, k_i|$ of $\rho_{\mathrm{2Qutrit}}$ onto $\rho_{\mathrm{Exp}}$.

IX. ERROR MODEL FOR NUMERICAL SIMULATIONS

Our numerical simulations use the quantumsim [S9] density-matrix simulator with the error model described in the quantumsim dclab subpackage. Single-qubit gates are modeled as perfect rotations, sandwiched by two 10 ns idling blocks.
The idling model takes into account amplitude damping ($T_1$), phase damping ($T_2^{\mathrm{echo}}$) (noise model 1 of the main text), and residual $ZZ$ crosstalk (noise model 3). To implement it, we first split the idling intervals into slices of 10 ns or less. These slices include amplitude and phase damping. Between these slices, we add instantaneous two-qubit gates capturing the residual coupling, described by a Hamiltonian that shifts the $|11\rangle$ level of each coupled transmon pair by the measured coupling strength $\zeta_{ij}$. The measurements of $T_1$, $T_2^{\mathrm{echo}}$ and $\zeta_{ij}$ are detailed in the previous sections.

The error model for CZ gates is described in detail in [S10]. The dominant error sources are identified to be leakage of the fluxed transmon to the second-excited state through the $|11\rangle \leftrightarrow |02\rangle$ channel, and increased dephasing (reduced $T_2^{\mathrm{echo}}$) due to the fact that the fluxed transmon is pulsed away from the sweetspot. These two effects implement noise models 4 and 2, respectively. The two-transmon process is modeled as instantaneous, and sandwiched by two idling blocks of 35 ns with decreased $T_2^{\mathrm{echo}}$ on the fluxed transmon. Quasistatic flux noise is suppressed to first order in the Net Zero scheme and is therefore neglected. Residual $ZZ$ crosstalk is not inserted during idling for the transmon pair, because it is absorbed by the gate calibration. The error model described in [S10] allows for higher-order leakage effects, e.g., so-called leakage conditional phases and leakage mobility. We do not include these effects.

The simulation finishes by including the effect of leakage on the tomographic procedure as discussed in Section VIII. The density matrix is obtained at the qutrit level, and the correct mapping for the density-matrix elements is applied. We take special care to use the experimental readout coefficients $c^{i}$ to model the readout signal for the simulated density matrix, according to Eq. (S4) and Eq. (S5). The simulation produces data for the same basis set as shown in Fig. S15. Afterwards, the same tomographic state-reconstruction routine as in the experiment is applied to these data. In this way, noise model 4 properly accounts for the imperfect reconstruction of leaked states, providing a fair comparison to experiment.
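A minimal sketch of the trotterized idling model described above, with hypothetical parameter values (this is not code from the quantumsim package):

```python
import numpy as np
from scipy.linalg import expm

T1, T2e = 20e-6, 25e-6         # assumed relaxation and echo times (s)
zeta = 2 * np.pi * 100e3       # assumed residual ZZ strength (rad/s)
dt = 10e-9                     # slice duration

p = 1 - np.exp(-dt / T1)                  # amplitude-damping probability
gamma_phi = 1 / T2e - 1 / (2 * T1)        # pure-dephasing rate
q = 0.5 * (1 - np.exp(-gamma_phi * dt))   # phase-flip probability

K_amp = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
         np.array([[0, np.sqrt(p)], [0, 0]])]
K_phi = [np.sqrt(1 - q) * np.eye(2),
         np.sqrt(q) * np.diag([1.0, -1.0])]

def damp(rho):   # apply damping Kraus maps to each qubit of a 2-qubit rho
    for K in (K_amp, K_phi):
        for qb in (0, 1):
            ops = [np.kron(k, np.eye(2)) if qb == 0 else np.kron(np.eye(2), k)
                   for k in K]
            rho = sum(o @ rho @ o.conj().T for o in ops)
    return rho

U_zz = expm(-1j * zeta * dt * np.diag([0, 0, 0, 1.0]))  # |11> level shift

rho = np.full((4, 4), 0.25, dtype=complex)   # example state |++><++|
for _ in range(100):                         # 1 us of idling in 10 ns slices
    rho = U_zz @ damp(rho) @ U_zz.conj().T
print(np.real(np.trace(rho @ rho)))          # purity after idling
```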
9,346.4
2020-12-07T00:00:00.000
[ "Physics", "Computer Science" ]
A Superspace Formalism for the Electromagnetism of Generic Nonlocal Continuous Metamaterials: Principal Structures and Applications

An alternative to conventional spacetime is rigorously formulated for nonlocal electromagnetism using the general concept of fiber bundle superspace.

I. INTRODUCTION

In classical electromagnetic (EM) theory, there are no nonlocal interactions or phenomena in vacuum because Maxwell's equations, which capture the ultimate content of the physics of electromagnetic fields, are essentially local differential equations [1]. An effect applied at a point r in space will first be felt at the same location but then spread or propagate slowly into the infinitesimally immediate neighborhood. Long-term disturbances such as electromagnetic waves propagate through both vacuum and material media by cascading these infinitesimal perturbations in outward directions (rays or propagation paths) emanating from the time-varying point source that originated the whole process. On the other hand, nonlocal interactions differ from this vacuum-like picture in allowing fields applied at a position r' to influence the medium at a different location r, i.e., a location that is not infinitesimally close to the source position r'. The distance |r − r'| could be very small in most media (and is certainly zero in vacuum), but in some types of materials, the so-called nonlocal media, an observable response can be found such that the "radius of nonlocality" |r − r'| is appreciably different from zero [2]-[4]. This is the core idea of nonlocality in electromagnetic metamaterials. The research field concerned with the study of the electromagnetism of such material domains is called nonlocal electromagnetism/electromagnetics/electrodynamics. This paper introduces a comprehensive new general theory for this emerging discipline together with a series of selected applications.

The main goal of the present work is to explore at a very general level the conceptual and mathematical foundations of nonlocality in connection with applied electromagnetic metamaterials (MTMs). Our approach is conceptual and theoretical, with the main focus on understanding the mathematical foundations of the subject at a very broad level. Indeed, while a massive amount of numerical and experimental data on all types of nonlocal materials abounds in a literature that goes back to as early as the 1950s, the purpose of the present paper is to attain a clear understanding of the essentials of the subject, particularly in connection with the ability to build a very general superspace formalism for nonlocal electromagnetism without first restricting the formalism to particular classes of materials such as metals, plasmas, or semiconductors.

The superspace formalism has a long history in physics, mathematical physics, and mathematics (see Remark I.1). It will be shown below that nonlocal electromagnetism leads very naturally to a reformulation of its essential configuration space, upgrading conventional spacetime or frequency-space to a larger superspace in which the former spaces serve as base spaces. Such reconsideration of the fundamental structure of the problem may help foster future numerical methods and potential applications, as will be discussed later.
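To make the "radius of nonlocality" picture concrete, the standard phenomenological constitutive relation for a spatially dispersive (nonlocal) medium may be written as follows (a textbook form quoted for orientation; the paper's own notation may differ):
$$
\mathbf{D}(\mathbf{r},\omega)
  = \varepsilon_0 \int_{V} \bar{\bar{\varepsilon}}(\mathbf{r},\mathbf{r}',\omega)
    \cdot \mathbf{E}(\mathbf{r}',\omega)\, d^3 r' .
$$
This reduces to the local relation $\mathbf{D}(\mathbf{r},\omega) = \varepsilon_0\, \bar{\bar{\varepsilon}}(\mathbf{r},\omega) \cdot \mathbf{E}(\mathbf{r},\omega)$ when the kernel collapses to $\bar{\bar{\varepsilon}}(\mathbf{r},\omega)\, \delta(\mathbf{r}-\mathbf{r}')$; the support of the kernel in $|\mathbf{r}-\mathbf{r}'|$ is precisely the domain of nonlocality discussed in what follows.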
Remark I.1 (Superspace Concepts in Other Fields). The concept of superspace is not new and has been proposed in several fields in both physics and mathematics. For a brief but general view of the definition of superspaces, see [5]. As example applications, various superspaces have been proposed as fundamental structures in quantum gravity [6], [7], which are frequently infinite-dimensional. Superspace concepts are also now extensively researched in quantum field theory and the standard model of particle physics, e.g., see [8]-[10]. In general, dealing with topics such as supergravity, supersymmetry, superfields, superstrings, and noncommutative geometry often requires the use of one superspace formalism or another [8]. More related to the subject of nonlocal MTMs is the original superspace concept introduced earlier for the analysis of deformed crystals [11] and subsequently utilized for fundamental investigations of EM nonlocality in incommensurate (IC) superstructures in insulators [12]. Such modulated-structure materials possess spaces with dimensions greater than spacetime [13]. Nevertheless, for fairly concrete models one may exploit group theory to construct finite-dimensional (dimension > 4) approximations of them. The general theory of superspace formalisms in quasiperiodic crystals is presented in [14]. Other examples from condensed-matter physics where superspace methods were applied include mesoscopic superconductivity [15]. In mathematics and mathematical physics, where the concept itself originated, a notable recent example of the superspace concept is the theory of sheaves, which are used in differential and algebraic topology and algebraic geometry and have numerous applications in physics [16]-[18].

The key motivation for our superspace approach is based on explicating the subtle but often overlooked difference between infinitesimal interactions, which characterize local electromagnetism, and interactions occurring in small topological neighborhoods around the observation point. We believe that this topological difference has not received the attention it deserves in the growing theoretical and methodological literature on nonlocal media. In particular, the author believes that a majority of present approaches to nonlocal metamaterials conflate the topologically local (but EM-nonlocal) domain of small neighborhoods and global domains. However, general topology and much of modern mathematical physics are based on clearly distinguishing these two topological levels. It turns out that the standard formalism of local electromagnetism, which is based on spacetime points and their differential, but not topological, neighborhoods as the basic configuration space of the problem, is not the most natural or convenient framework for formulating the electromagnetism of nonlocal materials. This is mainly because the physics-based domain of electromagnetic nonlocality (to be defined precisely below), which captures the effective region of field-matter nonlocal interactions, is not usually built into the mathematical formalism of classical boundary value problems in applied electromagnetism. By investigating the subject from a new perspective, it will be shown that a natural space for conducting nonlocal-metamaterials research is the vector bundle structure, more specifically a Banach bundle [19] in which every element of the fiber superspace is a vector field on the entire domain of nonlocality.
The main result of this paper is that every generic nonlocal domain can be topologically described by a superspace comprising a Banach (infinite-dimensional) vector bundle M. If two materials described by their corresponding vector bundles M1 and M2 are juxtaposed, then one may use topological methods to combine them and to compare their topologies. The present paper's focus is mainly on the first part, i.e., how to construct the material bundle M. That is, the derivation of the various vector bundle structures starting from a generic phenomenological model of electromagnetic nonlocality is the main contribution of the present work. It is our hope that the superspace theory will stimulate new approaches to computational EM by adopting methods from computational topology and differential topology for solving challenging problems in complex material domains.

II. REVIEW OF NONLOCAL ELECTROMAGNETISM AND AN OUTLINE OF THE PRESENT WORK

A. Survey of the Literature on Nonlocal Metamaterials

We first provide a non-exhaustive and selective review of the development of nonlocal electromagnetic materials research. Some of the physical phenomena that cannot be understood using local electromagnetic theory include spatial dispersion effects [20], extreme negative group velocity and negative refraction [21], [22], new diffraction behaviour in optical beams [23], superconductivity [24], natural optical activity [4], [25], [26], and non-Planck equilibrium radiation formulas in nonlocal plasma [27]. Outside electromagnetism but within wave phenomena, there also exist processes that cannot be fully accounted for through simple local material models; for instance, we mention phase transitions, Casimir force effects [28], and streaming birefringence [29]. By and large, spatial dispersion has attracted most of the attention of the various research communities working on nonlocal electromagnetic materials. Indeed, a few book-length treatments of spatial dispersion already exist in the literature, most notably [2]-[4], [20].

The majority of published research on nonlocal media and nonlocal electromagnetism tends to focus on applications and specialized materials (see the references quoted below). The few exceptions include investigations attempting to approach the subject at a more general level; for example, from the perspective of general thermodynamics, see [29], [30]. Some of the topics reexamined within the framework of a general nonlocal field-matter interaction theory include the applicability of optical reciprocity theorems [31]-[34], energy/power balance [35], quantization [36]-[38], operator methods [39], the extension of spatial dispersion to include inhomogeneous media [12], and alternative formulations of spatial dispersion in terms of the Jones calculus [40].
The bulk of the available research on nonlocality is concentrated in the very large area of general field-matter interactions. There already exists a well-attested body of research on nonlocality in metals based on various phenomenological approaches, e.g., see [41] for a general review. Nonlocality has also been investigated extensively in dielectric media, for example semiconductors [20], [42]. A comprehensive recent review of nonlocality in crystal structures is provided in [43], which updates the classic books [4], [44]. Moreover, numerous studies conducted within condensed-matter physics and materials science implicitly or explicitly assume that nonlocality is essentially based on microscopic (hence quantum) processes, and develop an extensive body of work where the spatially dispersive dielectric tensor is deployed as the representative constitutive material relation [45]-[49]. On the other hand, one can also treat nonlocality without resort to spatial dispersion by modeling certain classes of material media as periodic structures [50], where the susceptibility tensor is derived from the symmetry of the overall structure [45], [51], [52] or from the lattice dynamics approach [53], [54].

For solving nonlocal problems, several methods exist to deal with the lack of a universal model at the interface between a nonlocal material and another medium. The Additional Boundary Condition (ABC) approach adjoins new boundary conditions to the standard Maxwell's equations in order to account for additional waves excited at the interface, which otherwise would not be explained by the standard local theory [4]. All ABC formulations are inherently model-specific, since they assume particular types of nonlocal media or postulate specific ABCs based on the physics and applications, e.g., see [44], [55]-[59]. We note that such ABC formalisms are not inevitable, since several boundary-condition-free formulations exist, e.g., see [46], [47], [51].

Numerous homogenization theories for nonlocal MTMs with averaging operations considered over multiple spatial scales have been reported, e.g., see [72], [105]-[108]. We note that the subject of electromagnetic metamaterials (with or without nonlocality) is enormous, and it is beyond the scope of this paper to even summarize the main papers in the field. However, most publications (until recently) have focused on non-spatially-dispersive, local scenarios. This situation has begun to change in recent years, and increasing numbers of reports move away from the old opinion that nonlocality is a "bug" toward the more positive and fruitful perspective that nonlocality provides new dimensions to be exploited in metamaterial system design.
A particularly interesting direction of research in nonlocal media is the recent subject of topological photonics. The main idea was inspired by previous research on Chern insulators and topological insulators [109], where the focus has been on electronic systems. There, it has already been observed that the nonlocal behaviour of the fermionic wavefunction may exhibit a rather interesting and nontrivial dependence on the entire configuration space of the system, in that case the momentum space (the wavevector k space). In addition to the already established role played by nonlocality in superconductors, quantum Hall effects are among the most intriguing physically observable phenomena that turned out to depend fundamentally on purely topological aspects of the electron wavefunction [24]. The major themes exhibited by electrons undergoing topological transition states include the topological robustness of the excited edge (surface) states moving along a 2-dimensional interface under the influence of an external magnetic field. More recently, it was proposed that the same phenomenon may apply to photons (electromagnetism) [110], where the key idea was to use photonic crystals to emulate the periodic potential function experienced by electrons in fermionic systems. However, since photons are bosons, transplanting the main theme of topological insulators into photonics is not trivial and is currently attracting great attention; see for example the extensive review article [111], which provides a literature survey of the field.

One of the most important applications of topological photonics is the presence of "edge states," which are topologically robust unidirectional surface waves excited on the interface between two metamaterials with topologically distinct invariants. Since edge states are immune to perturbations on the surface, they have been advocated for major new applications where topology and physics become deeply intertwined [112]. Topology can also be exploited to devise non-resonant metamaterials [113] and to investigate bifurcation transitions in media [114]. Another related exciting subject illustrating the synergy between topology, physics, and engineering is non-Hermitian dynamics, especially in light of recent work related to the origin of surface waves [115], [116], which is now being considered as an essentially non-trivial topological effect.
The previous old and recent directions of research all point toward a basic fact: topology and physics are destined to come closer to each other over the next decades. The unique feature of this convergence, which in itself is not totally new since Hermann Weyl introduced topological thinking into physics in the 1920s, is the focus on material engineering applications, in our case metamaterials and topology-based devices. For that reason, we propose that in addition to the now mainstream approach to topological materials, where the focus is on the global dependence of the wavefunction on momentum (Fourier) space, there is a need to consider how materials can be assigned a direct structure in the configuration space, i.e., spacetime or space-frequency. Our key observation is that EM nonlocality requires gathering information at microdomains (small regions around every point where the response is nonlocal), then aggregating these microdomains together in order to arrive at a global topological structure. The fundamental insight coming from topology is precisely how this process of "moving from the local to the global" can be enacted. We have found that a very efficient method to do this is the natural formulation of the entire problem in terms of a fiber bundle. In other words, in contrast to most existing works on topological materials, we do not first solve Maxwell's equations to find the state function in Fourier k-space and then study topology over momentum space; instead, we work directly in spacetime (or space-frequency) and formulate the dual problem of the topology over a fiber bundle. It is the hope of the author that such a new perspective may provide a complementary approach to the exciting subject of topological materials and help generate new insights into the physics, as well as novel algorithms for the computation of suitable topological invariants characterizing complex material domains.

B. An Outline of the Present Work

Because of the complexity of the subject, and to make it more accessible to a wider audience of physicists, engineers, and mathematicians, we have divided the argument into different stages with different flavors as follows. In Sec. III, we begin our presentation by introducing a general review of electromagnetic nonlocality targeting a wide audience of mathematicians, physicists, engineers, and applied scientists.
The key ingredients of nonlocal metamaterials/materials are illustrated in Sec. III-A using an abstract excitation-response model. This is followed in Sec. III-B by a more detailed description of the special but important case of spatial dispersion, which tends to arise naturally in many investigations of nonlocal metamaterials. In Sec. IV, we begin the elucidation of the main topological ideas behind electromagnetic nonlocality, most importantly the concept of EM nonlocality microdomains, which provides the key link between physics, material engineering, and topology in this paper. The various physical and mathematical structures are spelled out explicitly, followed in Sec. V by a more careful construction of a natural fiber bundle superspace structure that appears to satisfy simultaneously both the physical and mathematical requirements of EM nonlocality (Secs. V-A and V-B). We then provide a key computational application of the proposed theory in Sec. V-C, where it is shown that the material response function is representable as a special fiber bundle homomorphism over the metamaterial base space. In this way, a map more general than the linear operators of local EM is derived to provide mathematical foundations for future computational topological methods in which the bundle homomorphism, rather than the linear operator itself, is discretized. The fiber bundle superspace algorithm is summarized in Sec. VI, where it is highlighted that the main data needed are the EM nonlocality microdomains, which come from physics. Otherwise, the entire construction of the superspace can proceed using the procedure outlined. To illustrate how the above-mentioned microdomain structure may be estimated in practice, we give in Sec. VII a computational example based on nonlocal semiconductors and also explore in depth the physical origin of nonlocality. Insights into the lack of general EM boundary conditions in nonlocal EM are provided in Sec. VIII based on the superspace formalism. In Sec. IX, various additional current and future applications to fundamental methods, applied physics, and engineering are outlined in brief form. Finally, we end with the conclusion. Some basic familiarity with vector bundles and Banach spaces is assumed, but essential definitions and concepts will be reviewed briefly within the main formulation, and references where more background on vector bundles can be found will be pointed out. The paper intentionally avoids the strict theorem-proof format to make it accessible to a wider audience. Most of the time we give only proof sketches and leave out straightforward but lengthy computations. In general, just the very basic definitions of smooth manifolds, vector bundles, Banach spaces, etc., are needed to comprehend this theory (see also Appendix A for a guide to the mathematical background). The only place where the treatment is mildly more technical is Sec. V-C, where the bundle homomorphism is constructed using the partition of unity technique as a detailed computational application of the superspace theory.
A. The Generic Nonlocal Response Model in Inhomogeneous Media

In order to introduce the concept of nonlocality in the simplest way possible, let us first start with a scalar field theory setting. As mentioned in the introduction, vacuum classical fields cannot exhibit nonlocality, so in order to attain this phenomenon one must consider fields in specialized domains. We then begin by reviewing the broad theory of such media. The goal is to outline the main ingredients of the spacetime configuration space on which such theories are often formulated in the literature. To further simplify the presentation, we work in the regime of linear response theory: all media are assumed to be linear with respect to the field excitation. If the medium response is described by the function R(r, t) while the exciting field is F(r, t), then the most general response is given by an operator equation of the form [29]

R(r, t) = L{F(r, t)},     (1)

where L is the linear operator describing the medium, and is ultimately determined by the laws of physics relevant to the structure under consideration [117]-[119]. Now, the entire physical process will occur in a spacetime domain. In nonrelativistic applications (like this paper), we intentionally separate and distinguish space from time. Therefore, let us consider a process of field-matter interaction where t ∈ R, while we spatially restrict to a "small" region r ∈ D ⊂ R³, where D is an open set containing r. Since the operator L is linear, one may argue (informally) that its associated Green's function K(r, r′; t, t′) must exist. Strictly speaking, this is not correct in general, and one needs to prove the existence of the Green's function for every given linear operator on a case-by-case basis by actually constructing one [16], [120]. However, we will follow (for now) the common trend in physics and engineering by assuming that linearity alone is enough to justify the construction of the Green's function. If this is accepted, then we can immediately infer from the very definition of the Green's function itself that [1], [121]

R(r, t) = ∫dt′ ∫_D d³r′ K(r, r′; t, t′) F(r′, t′).     (2)

The relation (2) represents the most general response function of a (scalar) material medium valid for linear field-matter interaction regimes [45], [48]. The kernel (Green's) function K(r, r′; t, t′) is often called the medium response function. If we further assume that all of the material constituents of the medium are time-invariant (the medium is not changing with time), then the relation (2) may be replaced by

R(r, t) = ∫dt′ ∫_D d³r′ K(r, r′; t − t′) F(r′, t′),     (3)

where the only difference is that the kernel function's temporal dependence is through t − t′ instead of two separate arguments. Such a superficially small difference nevertheless has considerable consequences. Most importantly, working with (3) instead of (2), it becomes possible to apply the Fourier transform in time to simplify the formulation of the problem. Indeed, taking the temporal Fourier transform of both sides of (3) leads to

R(r, ω) = ∫_D d³r′ K(r, r′; ω) F(r′, ω),     (4)

where the Fourier spectra of the fields are defined by

F(r, ω) = ∫dt e^{iωt} F(r, t),  R(r, ω) = ∫dt e^{iωt} R(r, t).     (5)

On the other hand, the medium response function's Fourier transform is given by the essentially equivalent formula

K(r, r′; ω) = ∫dτ e^{iωτ} K(r, r′; τ).     (6)

In this paper, we focus on time-invariant material media and hence work exclusively with frequency-domain expressions like (4).
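To make the content of a relation like (4) concrete, the short Python sketch below evaluates the frequency-domain response of a one-dimensional scalar medium by direct quadrature of the kernel integral. The Gaussian kernel, its nonlocality radius a, and the grid resolution are illustrative assumptions introduced here for demonstration only; they are not models taken from the references.

import numpy as np

# Minimal sketch of the frequency-domain response relation (4),
# R(r) = Int_D K(r, r'; w) F(r') dr', discretized on a 1-D grid.
# The Gaussian kernel and the nonlocality radius `a` are illustrative
# assumptions, not a model taken from the paper.

N = 400
x = np.linspace(-1.0, 1.0, N)          # the bounded domain D
dx = x[1] - x[0]

a = 0.05                                # assumed "radius of nonlocality"
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * a ** 2))  # K(r, r')

F = np.where(np.abs(x) < 0.1, 1.0, 0.0)  # excitation confined near r = 0

# Discretized integral: the response at x[i] gathers F over a whole
# neighborhood of x[i], the hallmark of nonlocality.
R = (K @ F) * dx

# Local-medium limit: K -> K0 * delta(r - r'), so R is pointwise in F.
K0 = 1.0
R_local = K0 * F

Note how the local limit collapses the matrix-vector product into a pointwise multiplication, which is precisely the content of the locality condition discussed in the next subsection.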
The generalization to the 3-dimensional (full-wave) electromagnetic picture is straightforward when the dyadic formalism is employed. The relation corresponding to (2) is

R(r, t) = ∫dt′ ∫_D d³r′ K(r, r′; t − t′) · F(r′, t′),     (7)

where we have replaced the scalar fields F(r) and R(r) by vectors F(r), R(r) ∈ R³. The kernel function K, however, must be transformed into a dyadic function (tensor of second rank) K(r, r′; t − t′) [2], [118], [125]. In the (temporal) Fourier domain, (7) becomes

R(r, ω) = ∫_D d³r′ K(r, r′; ω) · F(r′, ω),     (8)

where

F(r, ω) = ∫dt e^{iωt} F(r, t),     (9)

K(r, r′; ω) = ∫dτ e^{iωτ} K(r, r′; τ).     (10)

The essence of electromagnetic nonlocality can be neatly captured by the mathematical structure of the basic relation (7). In words, it says that the field response R(r) is determined not only by the excitation field applied at the location r itself, but by the field F(r′) at all points r′ ∈ D. Consequently, knowledge of the response at one point requires knowledge of the cause (excitation field) on an entire topologically local set D. On the other hand, if the medium is local, then the material response function can be written as

K(r, r′; ω) = K₀(ω) δ(r − r′),     (11)

where K₀ is a spatially constant tensor and δ(r − r′) is the 3-dimensional Dirac delta function. In this case, (8) reduces to [126]

R(r, ω) = K₀(ω) · F(r, ω),     (12)

which is the standard constitutive relation of linear electromagnetic materials. Clearly, (12) says that only the exciting field data F(r) at r is needed to induce a response at the same location. In a nutshell, locality implies that the natural configuration space of the electromagnetic problem is just the point-like spatial manifold D ⊂ R³ or the entire Euclidean space R³.

Remark III.1. One may attach the "infinitesimally immediate neighborhood" to a given point r where a response is sought. Indeed, according to (12), while only the exciting field at r is needed to compute the response, Maxwell's equations still need to be coupled with that local constitutive relation. The fact that these equations are differential equations implies that the "largest" domain beside the point r needed to carry over the mathematical description of the field-matter interaction physics is just the region infinitesimally close to r. Conventional boundary-value problems in applied electromagnetism are formulated in this manner, i.e., with a 3-dimensional differential manifold as the main problem space on which spatial fields live [118], [119], [121], [125], [127]-[131]. Note that, strictly speaking, the full configuration space in local electromagnetism (also called normal optics) is the 4-dimensional manifold D × R or R⁴, since either the time t or the (temporal) circular frequency ω must be included to engender a full description of electromagnetic fields. However, nonlocal materials are most fundamentally a spatial type of materials/metamaterials where it is the spatial structure of the field that carries most of the physics involved [121], [132]. For that reason, throughout this paper we investigate the required configuration spaces with focus mainly on the spatial degrees of freedom. This will naturally lead to the discovery of the fiber-bundle structure of nonlocality, the main topic of the present work.

B. Spatial Dispersion in Homogeneous Nonlocal Material Domains

Spatial dispersion is considered by some researchers as one of the most promising routes toward nonlocal metamaterials, e.g., see [4], [132]-[134]. It is by and large the most intensely investigated class of nonlocal media, receiving both theoretical and experimental treatments by various research groups since the early 1960s.
The basic idea is to restrict electromagnetism to the special but important case of media possessing translational symmetry, a case attained when the medium is homogeneous. In such a situation, the material tensor function satisfies

K(r, r′; ω) = K(r − r′; ω).     (13)

The spatial Fourier transforms are defined by

F(k, ω) = ∫ d³r e^{−ik·r} F(r, ω),     (14)

with

F(r, ω) = (2π)⁻³ ∫ d³k e^{ik·r} F(k, ω).     (15)

After inserting (13) into (8) and taking the spatial (3-dimensional) Fourier transform of both sides, the following relation is obtained:

R(k, ω) = K(k, ω) · F(k, ω).     (16)

The dependence of K(k, ω) on the wavevector ("spatial frequency") k, in addition to the temporal frequency ω, is the signature of spatial dispersion. As a spectral transfer function of the medium, K(k, ω) includes all the information needed to obtain the nonlocal material domain's response to arbitrary spacetime field excitation functions F(r, t) through the application of the inverse 4-dimensional Fourier transform [4]. An alternative description is the popular multipole model of material media. A comparison between the two material response formalisms, the one based on K(k, ω) and the multipole model, is given in [47], [121], [132].

Remark III.3. (Historical Digression) Historically, spatial dispersion had been under the radar since the 1950s, especially in connection with research on the optical spectra of material domains [137]-[139]. However, the first systematic and thorough treatment of the subject appeared in the 1960s, especially in the first edition of Ginzburg's book on plasma physics, which was dedicated to electromagnetic wave propagation in plasma media. The second edition of the book, published in 1970, contained a considerably extended treatment of the various mathematical and physical aspects of the electromagnetism of spatially dispersive media [2]. Spatial dispersion in crystals had also been investigated by Ginzburg and his coworkers during roughly the same time [140]-[142]. The book [20] contains good summaries of spatial dispersion research up to the end of the 1980s. More recently, media obtained by homogenizing arrays of wires, already very popular because of their connection with traditional (temporal) metamaterials, are known to exhibit spatial dispersion effects, though many researchers ignore that effect in order to focus on temporal dispersion [143]-[145]. Other types of periodic or large finite arrays composed of unit cells like spheres and disks also exhibit spatial dispersion effects [146]. Nonlinear materials with observable nonlocality have also been investigated in the optical regime [147]. More recently, much of the reemergence of interest in spatial dispersion stems from the observation that the phenomenon cannot be ignored at the nanoscale [148], especially in low-dimensional structures like carbon nanotubes [51], [54], [149] and graphene [150], [151]. The subject was also introduced at a pedagogical level for applications involving current flow in spatially dispersive conductive materials like plasma and nanowires [152].

Fig. 1: A generic depiction of an electromagnetic nonlocal metamaterial system. Each of the domains D_n is captured by a general linear nonlocal response function K_n(r, r′).

Complex heterogeneous arrangements of various nonlocal materials can be realized by juxtaposing several subdomains, each of which is homogeneous and hence can be described by a spatial dispersion profile K(k, ω). The idea is that even materials inhomogeneous at a given spatial scale tend to become homogeneous at a finer spatial level, leading to "grid-like" spatially dispersive cellular building blocks at the lower level.
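As a numerical illustration of the spectral relation (16), the following sketch applies an assumed scalar spatial-dispersion profile in k-space using the FFT. The Lorentzian-type profile K(k) = 1/(1 + (Lk)²) and the nonlocality length scale L are hypothetical choices made only for this demonstration.

import numpy as np

# Sketch of the spatial-dispersion relation (16), R(k,w) = K(k,w) F(k,w),
# for a homogeneous 1-D scalar medium at a fixed frequency. The profile
# K(k) and the scale L below are illustrative assumptions.

N = 512
x = np.linspace(-5.0, 5.0, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # wavevector grid

L = 0.3                                    # assumed nonlocality length scale
K_spec = 1.0 / (1.0 + (L * k) ** 2)        # spectral transfer function K(k)

F = np.exp(-x ** 2)                        # spatial excitation profile
F_spec = np.fft.fft(F)

# Multiply in k-space, then return to position space: the response is a
# smeared (nonlocal) version of the excitation.
R = np.real(np.fft.ifft(K_spec * F_spec))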
In Fig. 1, we show a nonlocal metamaterial system with various multiscale structures. A large nonlocal domain, e.g., K₃(r, r′) in the figure, acts like a "substrate" holding together several other smaller material constituents, such as K_n(r, r′), n = 1, 2, 4. We envision that each nonlocal subdomain may possess its own specially tailored nonlocal response function profile serving one or several applications. By concatenating multiple regions, interfaces between subdomains with different material constitutive relations are created. We show here subdomains D_n, n = 1, 2, 3, 4, and the possible material interfaces include those between any two adjacent subdomains D_m and D_n, m ≠ n. Recall that in local electromagnetism each material interface should be assigned a special electromagnetic boundary condition in order to ensure the existence of a unique solution to the problem [125]. This, however, is not possible in nonlocal electromagnetism.

Indeed, as already mentioned earlier, nonlocal electromagnetism introduces several subtle issues that are absent in the local case: additional boundary conditions are often invoked to handle the transition of fields along barriers separating different domains, whether between two nonlocal domains or between one nonlocal and another local domain [4], [44], [153]. The topological fiber bundle theory to be developed in Sec. V will provide a clarification of why this is so, since it turns out that the traditional spacetime approach often employed in local electromagnetism is not necessarily the most natural one. There is a need to examine in more detail the existence of multiple topological scales in nonlocal metamaterials, and this paper will provide some new insights into such issues.

For completeness, we note below three directly observable topological scales that do not require the more elaborate mathematical apparatus to be detailed in later Sections. We list these as follows:
1) The first is the already stated separation between different nonlocal domains like D₁ and D₂.
2) The second is the case captured by the inset on the right-hand side of Fig. 1. Fine "microscopic" cells, each homogeneous and hence describable by a response function of the form K(k, ω), can be combined to build up a complex effective nonlocal response tensor K_n(r, r′) over its topologically global domain D_n. Such juxtaposition at the microscopically local level leading to global behaviour is a classic example of multiscale physics, but here it acquires even higher importance since both the constituent cells (rectangular "bricks" in the inset of Fig. 1) and the global domain level D_n are already electromagnetically nonlocal.
3) Finally, the third directly observable topological scale is that connected to what we termed "topological holes" in Fig. 1. These are arbitrarily-shaped gaps, like holes, vias, etchings, etc., that are intentionally introduced in order to influence the electromagnetic response by modifying the topology of the 3-dimensional material manifolds D_n.

Remark III.4. (Distinction Between Electromagnetic and Topological Locality/Nonlocality) The terms local and global possess two different senses, one electromagnetic, the other spatio-geometric. Elucidating this subtle interconnection between the two senses will be one of the main objectives of the present work, but we will first need to introduce the various relevant micro-scale topological concepts given below (see also Remark VII.1).
A. Introduction

Let the nonlocality domain of the electromagnetic medium, the region D ⊂ R³ in (8), be bounded. Corresponding to (1), a similar operator equation in the frequency domain can be assumed to represent the most general form of a nonlocal electromagnetic medium, namely

R(r, ω) = L_ω{F(r, ω)},     (17)

where the nonlocal medium linear operator L_ω is itself a function of frequency. For simplicity, when it is understood from the context that the material response operator is formulated in the frequency domain, the dependence on ω in its expression will be removed.

We are going to propose a change in the mathematical framework inside which electromagnetic nonlocality is usually defined. This will be done in two stages:
• Initially, in the present Section, we introduce the rudiments of the main physics-based micro-topological structure associated with EM nonlocality without going into considerable mathematical detail. The aim is to familiarize ourselves with the minimal necessary physical setting and how it naturally gives rise to a finer picture of the material domain compared with the traditional (and much simpler) topological structure of local electromagnetism based on spacetime points.
• In the second stage, covered in Sec. V, a more careful mathematical picture is developed using the theory of topological fiber bundles. We eventually show (Sec. V-C) that the EM nonlocal operator (17) can be reformulated as a Banach bundle map (homomorphism) over the 3-dimensional space of the material domain under consideration. Computational examples and applications are provided in the later Sections.

The key conceptual idea behind the entire theory presented here is that of the topological microdomains associated with the electromagnetism of nonlocal media, which we first develop thematically in the next Sec. IV-B before moving subsequently to the rigorous and exact topological formulation of Sec. V.

B. The Concept of Topological Microdomains in Nonlocal Electromagnetism

In conventional frequency-domain local electromagnetism, the boundary-value problem of multiple domains is formulated as a set of coupled partial differential equations or integrodifferential equations interwoven with each other via the appropriate material interface boundary conditions dictating how fields change while crossing the various spatial regions inside which the equations hold [118], [125], [127]. This has traditionally been achieved by taking up the electromagnetic response function K(r, r′; ω) as an essential key ingredient of the problem description, which has been exploited in two stages: First, the constitutive relations enter into the governing equations in each separate solution domain. Second, the constitutive relations themselves are used in order to construct the proper electromagnetic boundary conditions prescribing the continuity/discontinuity behaviour of the sought field solutions as they move across the various interfaces separating domains with different material properties.
Unfortunately, it has been well known for a long time that it is not possible to formulate a universal electromagnetic boundary condition for nonlocal media, especially in the case of spatial dispersion. This will be discussed in more detail in Sec. VIII, but see also the discussion of additional boundary conditions (ABCs) in Sec. II-A. For now, we concentrate on gaining a deeper understanding of the structure of spatial nonlocality in electromagnetism.

Fig. 2: On every nonlocality microdomain V_r a vector field is defined, representing the EM excitation field. The collection of all vector fields on a given set V_r gives rise to a linear topological function space F(V_r). The topologies of the base spaces D_n, the nonlocal microdomains V_r, and the function spaces F(V_r) collectively give rise to a total "macroscopic" topological structure (superspace) considerably more complex than the base spaces D_n.

A key starting observation is how nonlocality forces us to associate with every spacetime point (r, t), or frequency-space point (r; ω), a topological neighborhood of r, say V_r, such that r ∈ V_r. For now, let us assume that the spatial material domain is just an open set in the technical sense of the topology of the Euclidean space R³ inherited from the standard Euclidean metric [154]. By restricting D to be open, we avoid the notorious problem of dealing with boundaries or interfaces between such (possibly overlapping) open sets. That is, the topological closure of D, denoted by cl(D), is excluded from the domain of nonlocality. Let D be the maximal such topological neighborhood for the problem under consideration. We now associate with each point r a smaller open set V_r such that r ∈ V_r and V_r ⊂ D for all r ∈ D. (The fact that D is assumed open makes this possible.)

Now, instead of considering fields like R(r) and F(r) defined on the entire maximal domain of nonlocality D (which can grow "very large"), we propose to reformulate the problem of nonlocal electromagnetic materials in a topologically local form by noting that the physics of field-matter interactions gives the EM response at location r due to excitation fields essentially confined within a "smaller domain" around r, namely the open set V_r. On the other hand, if the response at another point r′ ≠ r is needed, then a new, generally different, small open set V_r′ will be required. That is, in general we allow that V_r ≠ V_r′, even though we expect that typically there is some overlap between these two small local domains of electromagnetic nonlocality, i.e., V_r ∩ V_r′ ≠ ∅.

The "smaller sets" V_r, r ∈ D, will be dubbed nonlocal microdomains, or just microdomains in short. A possible definition is the following:

Definition IV.1. (EM Nonlocality Microdomains) Consider a material domain D with the associated EM response function K(r′, r). We define the EM nonlocality microdomain V_r ⊂ D based at r ∈ D as the interior of the compact support of K(r′, r). The support is defined by supp K(r′, r) := cl{r′ ∈ D : ‖K(r′, r)‖ ≠ 0}, where ‖·‖ is a suitable tensor norm, for example the matrix norm.

Remark IV.1. It can be shown that the collection of open sets {V_r, r ∈ D} induces a topology on the total space occupied by the nonlocal material. This topology will be referred to in what follows by the term microdomains topology. In local media, the microdomains topology reduces to the trivial discrete topology {{r}, r ∈ D}, since the external field interacts only with the point r at which it is applied and hence V_r = {r}.
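The following minimal sketch illustrates how Definition IV.1 could be operationalized numerically: the microdomain V_r is estimated as the region where a stand-in kernel norm ‖K(r′, r)‖ exceeds a small threshold. The exponential kernel, its decay length, and the threshold tol are assumptions used only to make the thresholding step concrete.

import numpy as np

# Sketch of Definition IV.1: estimate the EM nonlocality microdomain V_r
# as the region where the kernel norm ||K(r', r)|| is non-negligible.

N = 300
x = np.linspace(0.0, 1.0, N)            # 1-D stand-in for the domain D

def kernel_norm(xp, x0, a=0.04):
    """Stand-in for ||K(r', r)|| with an assumed decay length a."""
    return np.exp(-np.abs(xp - x0) / a)

def microdomain(x0, tol=1e-3):
    """Points of the (open) set V_{x0} where the kernel exceeds tol."""
    mask = kernel_norm(x, x0) > tol
    return x[mask]

V = microdomain(0.5)
print("estimated radius of nonlocality:", 0.5 * (V.max() - V.min()))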
The set of EM nonlocality microdomains (microdomains for short) defined above explicates the fine micro-topological structure of nonlocal electromagnetic domains at a spatial scale different from that of the (topologically "larger") material domain D itself, and is fundamental for the theory developed in this paper.

C. Construction of Excitation Field Function Spaces on the Topological Microdomains of Nonlocal Media

Next, after enriching the MTM domain D with the finer topology of the nonlocality microdomains V_r, r ∈ D, we equip this total medium with additional mathematical structure based on the physics of field-matter interaction. Consider the set of all sufficiently differentiable vector fields F(r) defined on V_r, r ∈ D. This set possesses an obvious complex vector space structure: for any complex numbers a₁, a₂ ∈ C, the sum a₁F₁(r′) + a₂F₂(r′) is defined on V_r whenever F₁(r′) and F₂(r′) are, while the null field plays the role of the origin. In what follows, we will denote such function spaces by F(V_r), or just F if it is understood from the context on which material spatial domains the fields are defined.

Remark IV.2. It is possible to equip F(V_r) with a suitable topology in order to measure how "near" to each other any two fields defined on V_r are, e.g., see [154]-[156]. Therefore, F(V_r) becomes a topological vector space [154], in particular a Sobolev space, which is not only a Banach space (normed space) but also a Hilbert space (inner product space) [157]-[159].

D. The Global Topological Structure of Nonlocal Electromagnetic Material Domains: First Look

In light of the analysis above, each microdomain V_r induces an infinite-dimensional linear (Sobolev) function space F(V_r), indexed by the position r ∈ D, with topology essentially determined by the geometry of V_r. On the other hand, this latter geometry is obtained from the physics of field-matter interaction in nonlocal media. Consequently, the physical content of nonlocal materials is encoded at the topologically micro-local level expressed by the following structure:

{(V_r, F(V_r)), r ∈ D}.     (18)

If we denote the relevant collections of subsets as

V(D) := {V_r, r ∈ D},  G[V(D)] := {F(V_r), r ∈ D},     (19)-(20)

then (18) can be neatly captured by the ordered triplet

(D, V(D), G[V(D)]).     (21)

Let us unpack this compact structure step by step as follows:
1) Each open domain in D ⊆ R³ will be assigned a distribution V(D) of open sets V_r, i.e., the EM nonlocality microdomains topology defined in Sec. IV-B. Physically, it expresses the fine micro-topological structure of electromagnetic nonlocality.
2) The structure V(D) is solely determined by the field-matter interaction physics. A concrete example will be given in Sec. VII.
3) We further emphasize that the various sets constitute an open cover of D, that is, we have

D = ∪_{r∈D} V_r.     (22)

In this way, the model can accommodate excitation fields F(r) applied at every point r ∈ D.
4) The decomposition of the material domain D into smaller building blocks exemplified by (22) is fundamental for computational topological models of nonlocal MTMs. For example, in Sec. VII we will use this expansion to construct a topological coarse-grained model for inhomogeneous nonlocal semiconductor metamaterials.
5) Finally, the topology V(D) induces the space G[V(D)] of function spaces F(V_r), r ∈ D, where each vector field is defined on one element V_r chosen from the topology V(D).

It is interesting to note how, within the framework proposed above, a kind of constructive "division of labour" is shared between physics and mathematics in order to generate the various required multiscale structures characteristic of electromagnetic nonlocality. This is also the source of some potential difficulties hidden in the topological structure (21). Indeed, we will next try to smooth out the differences between the two main substructures: V(D), controlled mostly by physics, on one side, and G[V(D)], which is dominated by mathematical considerations, on the other. One way to achieve this is by unifying the entire total topological structure (21) within a single, rich enough "metastructure" that can encode all of the substructures of (21): the Banach vector bundle superspace (Sec. V).

E. A Reformulation of the Electromagnetic Response Function

It is now possible to provisionally construct the EM response function by working on the fundamental topological domain structure (21) instead of the global domain D, the latter being the favored arena of conventional electromagnetic theory that we would like to move beyond in this paper. The response function R(r) will be re-expressed by the map

R : {(V_r, F(V_r)), r ∈ D} → C³,     (23)

where the codomain is taken to be C³ because the electric or magnetic response functions D or B, respectively, are complex vector fields in the frequency domain. The value of the EM nonlocal response field due to an excitation field F(r′) applied at a microdomain V_r can be computed by means of

R(r, ω) = ∫_{V_r} d³r′ K(r, r′; ω) · F(r′, ω).     (24)

Although (24) may appear at first sight to be only slightly different from (8), the underlying difference between the two formulas is significant. In essence, the construction of the EM response field R(r) via the map (23) amounts to a topological localization of electromagnetic nonlocality, since the EM response function K(r, r′; ω) is no longer allowed to extend globally onto "large and complicated material domains." Indeed, with the recipe (24), only the response to "small", or more rigorously topologically local, domains, namely the microdomains V_r, is admitted. On the other hand, in order to find the response field R(r) everywhere in D, one needs to use topological techniques to extend the response from one point to another until it covers the entirety of D. This local-to-global extension application of differential topology is discussed in detail in Sec. V-C and again briefly in Sec. IX-A.

In this manner, it becomes possible to provide an alternative, more detailed explication of the behaviour of the medium at topological interfaces (boundary conditions in nonlocal metamaterials are treated, provisionally, in Sec. VIII) and also to explore the effect of the topology of the bulk medium itself on the allowable response functions and the production of non-trivial edge states, with obvious applications to nonlocal metamaterials (see the discussion of nonlocal and topological metamaterials applications in Sec. IX-B).
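A minimal numerical sketch of the localized evaluation (24) is given below: the response at r is obtained by integrating the kernel against the excitation field over the microdomain V_r only, rather than over the whole domain D. The kernel, the excitation field, and the microdomain radius are again illustrative assumptions.

import numpy as np

# Sketch of (24): the response at r0 integrates K(r0, r'; w) F(r') over
# the microdomain V_{r0} only, not over the whole material domain D.

N = 400
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

a = 0.03                                   # assumed microdomain radius
F = np.sin(2 * np.pi * x)                  # excitation field on D

def response_at(r0):
    """R(r0) from (24), restricted to V_{r0} = (r0 - a, r0 + a)."""
    mask = np.abs(x - r0) < a              # characteristic set of V_{r0}
    Kr = np.exp(-np.abs(x - r0) / (0.5 * a))  # stand-in for K(r0, r')
    return np.sum(Kr[mask] * F[mask]) * dx

R_mid = response_at(0.5)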
V. THE FIBER-BUNDLE SUPERSPACE FORMALISM IN THE ELECTROMAGNETISM OF GENERIC NONLOCAL MEDIA

Here, an outline of the direct construction of a fiber (Banach) bundle over an entire (global) nonlocal generic material domain is given, where our purpose is to attach to every point r ∈ U_i a fiber superspace F_i. The contents of this section are the most technically advanced in this paper. Readers interested in applications may skim through Secs. V-A and V-B, skip Sec. V-C, then move directly to Sec. VI for a general summary of the fiber bundle algorithm. Concrete computational models are outlined in Sec. VII using a practical nonlocal model, while additional remarks and discussions about current and future uses of the theory are provided in Sec. IX. However, even readers not fully familiar with differential manifold theory will benefit from reading the present technical Section, because we strive to illustrate the physical intuition behind the various mathematical computations and steps therein.

A. Preparatory Step: Promoting the Material Domain D to a Manifold D

In order to investigate in depth the fundamental physico-mathematical constraints imposed on EM nonlocal metamaterials, the material domain D considered so far should be promoted to a differential manifold structure [19], [155], [156], [161], [162]. There are several reasons why this is highly desirable:
1) It provides a natural and obvious generalization of the basic structure (21) from the mathematical perspective.
2) Engineers often need to insert metamaterials into specific device settings, hence the shape of the material becomes highly restricted. It is therefore important to develop efficient tools to deal with variations of geometric and topological degrees of freedom and how they could possibly impact the design process.
3) Applied scientists and engineers are often interested in deriving fundamental limitations on metamaterials, e.g., what are the ultimate allowable response-excitation relations or constitutive response functions possible given this or that material domain topology?
4) Sophisticated full-wave electromagnetic numerical solvers prefer working with local coordinates in order to handle complicated shapes, even if a global coordinate system is sometimes available, making the deployment of 3-manifold structures for describing the material domain D useful.
5) In topological photonics and materials [111], most applications seem to focus on lower-dimensional states of matter like those associated with quantum Hall effects and edge states (surface waves). There, new phenomena appear in materials where the base space (material domain D) is a 2-surface, which is best described mathematically as a differential 2-manifold.

For all these reasons, it is expedient to ascribe to the domain D the most general mathematical expression possible, which in our case amounts to equipping the material/metamaterial's spatial domain with a smooth manifold structure. If we denote by D this 3-manifold, then, being a subset of R³, there is a natural differential structure defined on it, namely that inherited from the 3-dimensional Euclidean space itself. This differential 3-manifold structure will be presupposed in the remaining parts of this paper. Following the standard theory of smooth manifolds, let (U_i, φ_i) be a collection of charts (an atlas), labeled by i ∈ I, an index set, which equips D ⊂ R³ with a differential 3-manifold structure. This constitutes the differential atlas on D which will be used in what follows.
B. Attaching Fibers to Generic Points in the Nonlocal Material Manifold D

Our current goal is to attach a vector fiber (a linear function space in this case) at every point r ∈ D, namely the function space F(V_r) introduced in Sec. IV-C. It turns out that accomplishing this requires finding suitable "compatibility laws" dictating how coordinates change when two intersecting charts U_i and U_j interact with each other, which is typical in such types of constructions [19]. In particular, we will later need to find the law of mutual transformation of vectors in the fibers F(V_{φi(r)}) and F(V_{φj(r)}). Here, the expression F(V_{φi(r)}) means the fiber superspace attached to the point whose coordinates are φ_i(r), i.e., the function space where all functions are expressed in the language of the ith chart (U_i, φ_i).

In this connection, the major technical problem facing us is the following: since the differential structure associated with the charts (U_i, φ_i) can be fixed by essentially mathematical considerations alone, while the collection of microdomains V(D) = {V_r, r ∈ D} is solely determined by the physics of electromagnetic nonlocality (Sec. IV), there is no direct and simple way to express the transformation of vectors in F(V_{φi(r)}) into vectors in F(V_{φj(r)}), because several coordinate patches other than U_i and U_j, belonging to the atlas of the differential 3-manifold D, might be involved in building up the microdomain V_r.

The above technical problem will be solved in Sec. V-C by using the technique of partition of unity borrowed from differential topology [19], [155], [161]. It will allow us to split up each full microdomain V_r into several suitable sub-microdomains (details below), which can later be joined together in order to give back the original EM nonlocality microdomain V_r.

For now, we start by recalling that the microdomain structure represented by the collection V(D) := {V_r, r ∈ D} is an open cover of the manifold D. Therefore, and since the material domain manifold D possesses a countable topological base [154], it admits a locally finite open cover subordinated to V(D) [19], [161]. Specifically, the theorem just mentioned implies that an atlas (U_i, φ_i), i ∈ I, with diffeomorphisms describing the differential structure of the manifold D exists such that the elements {U_i, i ∈ I} constitute the above-mentioned locally finite subcover subordinated to the microdomains collection V(D). Moreover, the images φ_i(U_i) are open balls centered around 0 in R³ with finite radius a > 0 (henceforth, such balls will be denoted by B_a) [19].

In this way, the physics-based open cover V(D) provides a first step toward the construction of a complete topological description of the electromagnetic nonlocal microdomain structure. The reason is that the coordinate patches (U_i, φ_i), i ∈ I, are subordinated to the microdomains {V_r, r ∈ D} [19].

It is also known that there exists a partition of unity associated with the D-atlas (U_i, φ_i), i ∈ I, constructed above, summarized by the following lemma [19], [155], [156], [161], [162]:

Lemma V.1. There exists a family of functions ψ_i : D → R, i ∈ I, satisfying the following requirements:
1) ψ_i(r) ≥ 0 and each function is C^p, p ≥ 1.
2) The support of ψ_i(r), denoted by supp ψ_i, is contained within U_i, i.e., supp ψ_i ⊂ U_i.
3) Since the open cover U_i, i ∈ I, is locally finite, at each point r ∈ D only a finite number of the U_i will contain r.
4) Let the set of indices of those intersecting U_i's be I_r.
Then we require that

Σ_{i∈I_r} ψ_i(r) = 1,     (27)

where the sum is always convergent because the set I_r is finite.

Remark V.1. It can be shown that the sets φ_i^{-1}(B_{a/3}), i ∈ I, already cover D [161]. The closure cl{φ_i^{-1}(B_{a/3})} may be taken to constitute the support of ψ_i(r), while ψ_i(r) = 0 for all r ∉ supp{ψ_i(r)} [19], [161], [164]. The partition of unity functions ψ_i can be computationally constructed using standard methods, most prominently the bump functions, e.g., see [161], [165] (a concrete numerical construction is sketched below, after Remark V.2).

The motivation behind the deployment of the partition of unity technique and how it immediately arises in connection with our fundamental EM nonlocal structure should now be clear. We have found that the following three-step process is natural:
1) Initially, the physics-based collection of sets V(D) = {V_r, r ∈ D}, i.e., the EM nonlocal microdomain structure based at each point r in the nonlocal metamaterial D, is obtained using a suitable physical microscopic theory or some other procedure.
2) Introduce a differential atlas (U_i, φ_i), i ∈ I, on the smooth manifold D subordinated to V(D) and representing the nonlocal material domain under consideration.
3) Finally, the same atlas is linked to a set of functions ψ_i(r) (partition of unity) that can be recruited as "topological bases" to expand any differentiable field excitation function into a sum of individual sub-fields defined on open subsets of the material domain D (see Sec. V-C).

The three-step process outlined above is summarized in Fig. 3, illustrating how to progressively construct micro-coordinate systems allowing one to see through increasingly smaller spatial scales in the fundamental characterization of electromagnetic material nonlocality.

The key idea to be developed next is that both the base manifold D and the nonlocal EM microdomains V_r are described locally (in the topological sense) by the same collection of charts, namely (U_i, φ_i), i ∈ I. This will permit us to construct a direct unified description of both the base manifold D and its fibers, i.e., the linear topological function spaces F(V_r), the latter being the model of the physical electromagnetic fields exciting the nonlocal material D. The construction of a fiber-bundle superspace for nonlocal electromagnetic materials will be completed in two steps:
• Step I: Construct a tailored fiber bundle based on the partition of unity charts (U_i, φ_i) introduced above.
• Step II: The original physical structure (21) is recovered by gluing together the various sub-microdomains U_i ⊆ V_r of each EM nonlocality microdomain V_r.

We start with Step I and leave the more complicated Step II to Section V-C. Consider (U_i, φ_i), i ∈ I, as our atlas on the 3-manifold D introduced in Sec. V-A. At each point r ∈ U_i, we attach a linear topological space F(U_i) defined through the Sobolev space W^{p,2}(U_i), p ≥ 1, of functions on the open set U_i, i.e., we write

F(U_i) := {ψ_i(r)F(r) : F ∈ W^{p,2}(U_i)},     (28)

where F(r) is a suitable vector field in W^{p,2}(U_i).

Remark V.2. For the precise technical definition of the infinite-dimensional Sobolev function space W^{p,2}(U_i), see [157], [158]. Appendix A provides some additional information on the literature. Sec. IV-C gives a simplified intuitive definition of the physics-based function space F_i; in particular, see Remark IV.2. The intricate details of the theory of such Sobolev function spaces will not be needed for our immediate purposes in what follows (compare with Remark V.3).
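The following sketch constructs a one-dimensional partition of unity from standard bump functions, in the spirit of Remark V.1 and Lemma V.1. The chart centers and the common chart radius a are assumed values; the final normalization enforces the requirement (27) that the functions sum to unity.

import numpy as np

# Sketch of a 1-D partition of unity built from standard bump functions.
# Chart centers and radius are assumptions; normalization enforces (27).

x = np.linspace(0.0, 1.0, 1000)

centers = np.linspace(0.0, 1.0, 6)      # chart centers c_i (assumed)
a = 0.25                                 # common chart radius (balls B_a)

def bump(t):
    """Smooth compactly supported bump, C^infinity on (-1, 1)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

B = np.array([bump((x - c) / a) for c in centers])   # one bump per chart
psi = B / B.sum(axis=0)                              # normalize: sum = 1

assert np.allclose(psi.sum(axis=0), 1.0)             # requirement (27)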
Physically, the multiplication of the global excitation field F(r) by ψ_i(r) in constructions like (28) above and (29) below effectively "localizes" (in the topological sense) the field into a smaller compact subdomain, namely the support of the "topological localization basis function" ψ_i(r) itself. Moreover, because the C^p functions ψ_i(r) have compact support, the localized fields themselves inherit compact support inside U_i.

Remark V.3. We can alternatively define a less complicated function space on U_i by

F′(U_i) := {ψ_i(r)F(r) : ‖F‖_∞ < ∞},     (29)

where the sup-norm is given by

‖F‖_∞ := sup_{r∈U_i} |F(r)|.     (30)

In the case of F′(U_i), one may further consider only C^p vector excitation fields F(r). The choice of which linear function space to work with depends on the particular application under consideration. In what follows, we further simplify notation by writing F_i instead of F(U_i) whenever the partition of unity differential atlas coordinate patches U_i are used.

C. Direct Construction of the Bundle Homomorphism as a Generalization of Linear Operators in Electromagnetic Theory

We now demonstrate how the material constitutive relations of conventional (local) electromagnetic theory may be absorbed into a new structure, the bundle homomorphism, which is the most natural generalization of linear operators in local electromagnetism, taking us into the enlarged stage of the generic nonlocal medium's superspace formalism. In the future, these bundle homomorphisms may be discretized using topological numerical methods, e.g., see [166]. (The discretization of the nonlocal MTM bundle homomorphism itself is outside the scope of the present work and will be addressed elsewhere.) In what follows, we focus on the rigorous exact construction using the technique of partition of unity, which allows computations going from local to global domains.

1) The Basic Definition of the Nonlocal Material Banach (Fiber) Bundle Superspace: The initial step in formally defining the proposed nonlocal MTM bundle superspace is the following disjoint union construction:

Definition V.1. (Preliminary Definition of the Bundle Superspace) Let the MTM superspace be denoted by M, which is also called the total bundle space. We define this space as the disjoint union of all spaces F_i of the form

M := ⊔_{i∈I} {(r, F) : r ∈ U_i, F ∈ F_i}.     (31)

Associated with M is a surjective map

p : M → D,     (32)

which "projects" the fiber onto its corresponding point in the base manifold D, i.e., p((r, F)) := r.

Remark V.4. In the mainstream literature, the fiber bundle is defined slightly differently. Indeed, the fiber of M at r ∈ D is defined as the set p^{-1}(r), provided the map p is given as part of the bundle data. This is how fiber bundles are often introduced in the mathematical literature. However, in this paper we construct the bundle data starting from the physics-based topological structure (21). The map p is called the projection of the vector bundle M onto its base space D. Moreover, from now on we will also use the notation F_r to denote the fiber p^{-1}(r). By construction, it should be clear that p^{-1}(r) = F_i iff r ∈ U_i. From the topological viewpoint, the MTM superspace M locally appears like a product space U_i × F_i. In other words, the map p should behave locally as a conventional projection operator; i.e., in a local domain U_i, the material's total bundle space M is isomorphic to U_i × F_i, and p(U_i × F_i) should be isomorphic to U_i.
In order to complete the specification of the nonlocal MTM superspace, we next construct the linear function space X_i defined by

X_i := W^{p,2}(B_a),     (33)

which is the Sobolev space of W^{p,2} functions on the Euclidean 3-ball B_a. Here, each function is defined with respect to the local coordinates x := φ_i(r), where r ∈ U_i. In fact, it should be straightforward to deduce from the above that there exist maps

τ_i : p^{-1}(U_i) → U_i × X_i,  i ∈ I,     (34)

that are isomorphisms (diffeomorphisms in our case), where such a diffeomorphism may be expressed by

∀i ∈ I : τ_i(r, F) := (r, F ∘ φ_i^{-1}).     (35)

The fact that (34) is such an isomorphism follows from the definitions of the spaces F_i and X_i by (28) and (33), respectively, and from the fact that each φ_i is a diffeomorphism from U_i onto the 3-ball B_a of radius a. We further note that by construction the diffeomorphism τ_i satisfies

proj₁ ∘ τ_i = p,     (36)

where proj₁ is the standard projection map defined by proj₁(x, y) := x. Finally, if we restrict τ_i to p^{-1}(r), the resulting map

τ_i|_{p^{-1}(r)} : F_r → X_i     (37)

is a (linear) topological vector space isomorphism, i.e.,

∀i ∈ I, r ∈ U_i : F_r ≅ X_i.     (38)

Remark V.5. The charts (U_i, τ_i) are called a trivialization covering of the vector bundle M. They provide a coordinate representation of local patches of the vector bundle. (The global topology of the bundle, however, is rarely trivial [155].) Since here all maps are C^p smooth, the τ_i are also called smooth trivialization maps. The complete derivations of the diffeomorphism (35) and the topological vector space isomorphism (38) are straightforward but lengthy, and the full proofs are omitted.

Consider now two patches U_i and U_j with U_i ∩ U_j ≠ ∅. By restricting τ_i and τ_j to U_i ∩ U_j, two diffeomorphisms are obtained, which in turn implies that

τ_j ∘ τ_i^{-1} : (U_i ∩ U_j) × X_i → (U_i ∩ U_j) × X_j,     (39)

or

X_i ≅ X_j,     (40)

as expected. In particular, it can be shown that the composition map possesses the simple form

(τ_j ∘ τ_i^{-1})(r, F) = (r, g(r)F).     (41)

Here, F ∈ X_i and g : U_i ∩ U_j → L(X_i, X_j), where L(X_i, X_j) is the space of linear operators from X_i to X_j [19]. In particular, g(r) is a C^p Banach space isomorphism.

Remark V.6. In the mathematical literature, the smooth maps τ_j ∘ τ_i^{-1} are called the vector bundle transition maps. They are essential technical tools for computing global data by starting from local data and then gluing them together.

We have now succeeded in directly constructing a specialized smooth Banach vector bundle (M, D, τ, p) consisting of the nonlocal MTM's total fiber bundle space M, the base 3-manifold D of the MTM, a set of smooth trivialization charts τ_i, i ∈ I, and a projection map p. The base manifold D itself is described by a differential atlas (U_i, φ_i) associated with the partition of unity (U_i, ψ_i), i ∈ I, as per our discussion in Sec. V-B above.

2) The Nonlocal Material Fiber Bundle Homomorphism: At this point, we need to describe how the evaluation process of the electromagnetic response field (23) may be formulated within the new enlarged framework of the fibered superspace M. The most obvious method is to introduce a new vector bundle with the base space being the same base space D, but with the fibers now taken as the complex Hilbert space C³. This is a well-known vector bundle, which we denote by R and call the range vector bundle. Formally, the structure of this vector bundle is written as (R, D, τ′, p′), where τ′ and p′ are its own smooth trivialization and projection maps. The source vector bundle is taken as M.
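Before turning to the excitation process, the following one-dimensional sketch illustrates the trivialization and transition machinery of (34)-(41): a field on a patch is represented in chart coordinates as F ∘ φ_i^{-1}, and the transition map re-expresses it in a second overlapping chart. The affine charts and the sample field are assumptions chosen purely for illustration.

import numpy as np

# Sketch of (34)-(41) in one dimension: overlapping patches
# U_i = (0, 0.6) and U_j = (0.4, 1.0) with affine charts onto (-1, 1).

phi_i = lambda r: (r - 0.3) / 0.3        # U_i -> (-1, 1)
phi_i_inv = lambda t: 0.3 * t + 0.3
phi_j = lambda r: (r - 0.7) / 0.3        # U_j -> (-1, 1)
phi_j_inv = lambda s: 0.3 * s + 0.7

F = lambda r: np.sin(2 * np.pi * r)      # a field on the overlap

# Chart-i coordinate representation of F (an element of X_i):
Fi = lambda t: F(phi_i_inv(t))

# Transition to chart j: compose with phi_i o phi_j^{-1}. On the overlap,
# both representations describe the same physical field.
Fj = lambda s: Fi(phi_i(phi_j_inv(s)))

r = 0.5                                   # a point in the overlap of U_i, U_j
assert np.isclose(Fi(phi_i(r)), Fj(phi_j(r)))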
The physical process of exciting a nonlocal electromagnetic medium can now be understood as follows:
1) The material domain is mathematically modeled by the Banach bundle M. The response of the medium is to be sought at some point r ∈ D.
2) The bundle structure M associates with r a linear function space, namely the fiber p^{-1}(r), which is a Banach space of functions defined on the region U_i; this fiber is the model of the excitation field F(r) after being restricted (topologically localized) to the EM nonlocal domain U_i.
3) A vector bundle homomorphism (to be formally defined shortly) will map one element of this fiber function space, namely the particular excitation field F(r), r ∈ U_i, to its value in the fiber isomorphic to C³ at r in the range vector bundle R.

We now need a precise definition for maps between bundle superspaces. Formally, the definition is given by [161], [162]:

Definition V.2. (Bundle Homomorphism) A (smooth) bundle homomorphism over a common base space D shared between the two vector bundles M and R is defined by the (smooth) map

L : M → R,     (42)

satisfying p′ ∘ L = p. Moreover, the restriction of L to each fiber p^{-1}(r) induces a linear operator on the vector space of that fiber [161]. In effect, the diagram formed by L together with the projections p and p′ over the common base D commutes.

Remark V.7. Because M and R share the same base manifold D, the action of the map L is effectively reduced to how it acts on each fiber p^{-1}(r) as a linear operator.

Now, since the Banach space X_i is isomorphic to p^{-1}(r), we will express L by giving its expression locally in each element U_i ⊂ D of the open cover {U_i, i ∈ I}. In particular, we define the local action using the source and range bundles' trivialization maps τ_i and τ′_i by the intuitively obvious formula

L|_{p^{-1}(U_i)} := (τ′_i)^{-1} ∘ (id × L_{i,ω}) ∘ τ_i,     (43)

with

L_{i,ω} : X_i → C³,     (44)

where L_{i,ω} is the linear operator defined by

L_{i,ω}(∗) := ∫_{U_i} d³r′ K(r, r′; ω) · ∗(r′),     (45)

in which '∗' stands for an element of the smooth Banach function space X_i. Therefore, within the frequency-domain formulation of this paper, the operator L will leave every point in the base space D unchanged while mapping each smooth function on U_i (a component of the total electromagnetic excitation field, see below) into its complex vector value in C³ at r ∈ U_i.

Physically, L_i models a (topologically) localized "piece" of the global electromagnetic material operator mapping excitation fields F(r) to response fields R(r), where the entire physics is restricted to the EM nonlocal subdomain U_i. The global operator itself is assembled by gluing together all these small pieces using the partition of unity technique, as we endeavour to show next.

3) Computing Global Data Starting from Local Data: The final step is tying together the fundamental source Banach bundle M, the range bundle R, and the EM nonlocal microdomain physics space (21). The essential ingredients of the physics of nonlocal EM field-matter interaction are encoded in the geometrical construction of the collection of microdomains V(D) = {V_r, r ∈ D} and the excitation fields F(r) defined on them, i.e., the sets V(D) and the (excitation) function spaces G[V(D)] combined together in one space, the Banach bundle M. So far, the vector bundle homomorphism L introduced above takes care of excitation fields supported on the open sets U_i, i ∈ I. However, these are the mathematical fundamental building blocks used to construct the source vector bundle M. The question now is how to extend the description of nonlocal EM response operators to excitation fields applied to the entire physical cluster of EM nonlocality microdomains {V_r, r ∈ D}. As mentioned before, it is the partition of unity (U_i, ψ_i), i ∈ I, that will make this expansion of the topological formulation possible.
To see this, let us consider an electromagnetic field F(r) interacting with a nonlocal medium extended over the manifold D. Our goal is to compute the response field R(r), that is, the response at the point r. The fundamental idea of EM nonlocality is that to know the response at one point r, one must know the excitation field in an entire open set V_r, a topological neighborhood of r, and that in general this microdomain will change depending on the position r. The goal now is to find R(r) using the vector bundle map L defined by (43), starting from the data: 1) the region V_r, and 2) the vector field F(r) acting on V_r. To accomplish this, we exploit the properties of the partition of unity functions ψ_i (Lemma V.1) for expanding the excitation field F(r) over all patches U_i covering V_r, resulting in F(r) = Σ_{i∈I_r} ψ_i(r) F_i(r) (48), where (27) was used. The truncated function F_i is equal to F(r) if r ∈ U_i and zero elsewhere. Recall that, according to Lemma V.1, the set I_r is defined as the collection of indices i ∈ I of all U_i having the point r in their common intersection and is always finite. The main construction should now become clear. While each truncated sub-field F_i fails to be differentiable (it is not even continuous), the multiplication by ψ_i(r) fixes this problem. In fact, each function ψ_i(r′)F_i(r′; ω) is a smooth component of the total excitation field F with support fully contained inside the coordinate patch U_i, i.e., supp{ψ_i F_i} ⊂ U_i.

Consequently, the vector bundle map constructed in (43) can be applied to each such component field. From (44)-(46) and (48), the following can be deduced, and, finally, using (47), we arrive at our main superspace map theorem:

Theorem V.2. (Global Superspace Bundle Map) For the fiber bundle superspace M of the nonlocal MTM on the manifold D with material response tensor K, the response and excitation fields R and F can be related to each other via the global bundle map R(r) = Σ_{i∈I_r} L(ψ_i F_i)(r) (51), where ψ_i, i ∈ I, are the partition of unity basis functions subordinated to the D-atlas (U_i, φ_i), i ∈ I.

Physically, Theorem V.2 states that the source bundle M, the range bundle R, and the response map L provide a skeleton through which the total response to any EM excitation field defined on an arbitrary EM nonlocality microdomain can be computed. By aggregating all such microdomains constituting the microstructure of nonlocality of the MTM under consideration, the electromagnetism of the medium can be computed and reformulated in the Banach fiber-bundle superspace M instead of the position space D. In this way, the vector bundle formalism for electromagnetic nonlocality is essentially complete, and the connection between the purely mathematical fiber superspace and the physical microdomain structures is secured by (51).

VI. INTERLUDE: THE FIBER BUNDLE ALGORITHM: SUMMARY AND TRANSITION TO APPLICATIONS

We now summarize the fiber bundle superspace construction by providing the algorithm derived in the previous Section. Our main goal here is to highlight that the entire construction is based on estimating the EM nonlocality microdomain set V(D) = {V_r, r ∈ D} associated with the nonlocal MTM domain D. This data can be obtained only through physical theory and/or measurement. However, once available, the construction of the fibered space proceeds in a computationally well-determined manner. We first summarize the algorithm, then provide a few remarks preparing for some computational examples in later sections.
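The following self-contained Python sketch illustrates the mechanics of Theorem V.2 in one dimension: smooth bump functions are normalized into a partition of unity, the excitation field is split into patch-supported components, and a hypothetical patch-local kernel operator plays the role of L. Everything here (the patch layout, the kernel, the field) is an illustrative assumption, not the paper's construction.

```python
import numpy as np

def bump(x, c, w):
    """Smooth compactly supported bump centered at c with half-width w."""
    t = (x - c) / w
    out = np.zeros_like(x)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

# Overlapping patches U_i = (c_i - w, c_i + w) covering D = [0, 1]
x = np.linspace(0, 1, 401)
centers, w = [0.0, 0.25, 0.5, 0.75, 1.0], 0.3
raw = np.array([bump(x, c, w) for c in centers])
psi = raw / raw.sum(axis=0)          # partition of unity: sum_i psi_i = 1

F = np.sin(2 * np.pi * x)            # global excitation field
F_i = psi * F                        # smooth components supported in U_i

# Hypothetical patch-local response: convolution with a short-range kernel
def L_i(f, gamma):
    k = np.exp(-np.abs(x[:, None] - x[None, :]) / gamma)
    return (k @ f) * (x[1] - x[0])

R = sum(L_i(F_i[i], gamma=0.05) for i in range(len(centers)))
assert np.allclose(psi.sum(axis=0), 1.0)   # Lemma V.1 normalization check
```

The global response R is assembled purely from patch-level data, which is exactly the local-to-global gluing that the bundle map (51) formalizes.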
In Fig. 4 we show two distinct points r_1, r_2 ∈ D and their associated microdomains V_r1 and V_r2, respectively. From the locally finite subcover {U_i}_{i∈I} subordinated to V(D) = {V_r, r ∈ D} we highlight U_i ⊆ V_r1 and U_j ⊆ V_r2, where it is possible in general that V_r1 ∩ V_r2 = ∅ and U_i ∩ U_j = ∅, as indicated in the Figure itself. For the partition of unity (U_i, ψ_i)_{i∈I} subordinated to the open cover {U_i}_{i∈I}, we also depict the two compact sets S_i := supp{ψ_i(r)} and S_j := supp{ψ_j(r)} forming the supports of the corresponding partition of unity functions.

The nonlocal MTM superspace algorithm itself is summarized in Algorithm 1. Once the microdomain dataset V(D) is given, the construction proceeds automatically using the partition of unity basis functions (U_i, ψ_i)_{i∈I}. The latter may be computed directly in terms of the standard bump functions; see [161], [162], [165] and also Remark V.1. Because of the fundamental importance of the EM nonlocality microdomain structure V(D), we will devote Sec. VII entirely to a quantitative practical example illustrating the origin of these microdomains and how they may be estimated in practice. In the subsequent sections, we also explore the usefulness of the homomorphism construction for reformulating boundary-value problems in nonlocal electromagnetic theory and provide some hints at possible other current and future applications.

Algorithm 1 (nonlocal MTM superspace construction):
The open cover V(D) of D induces a locally finite subcover {U_i}_{i∈I} subordinated to V(D). It is then automatically equipped with the differential structure of the manifold D, generating the differential atlas (U_i, φ_i)_{i∈I}.
3: The subcover {U_i}_{i∈I} is equipped with a partition of unity function set {ψ_i}_{i∈I}, producing the partition of unity (U_i, ψ_i)_{i∈I}.
4: Generate an appropriate Banach/Sobolev/Hilbert space X_i attached to each point r ∈ D using constructions such as (33).
5: Declare D the base manifold of the fiber bundle and construct the bundle space M.
6: Construct the projection map p : M → D through the operation (r, X_i) → r.
7: Use (42) to transform vectors from one fiber (function) space to another.

VII. APPLICATION TO REAL-LIFE MATERIAL SYSTEMS: ESTIMATING THE MICRODOMAIN TOPOLOGY OF ISOTROPIC SPATIALLY-DISPERSIVE MEDIA

Here, a concrete example involving spatially-dispersive isotropic media is considered, where the intention is to provide an outline of how the intricate fiber-bundle-type topological fine structure (the topology of microdomains attached to each point explored above) may be estimated in practice.

A. The Electromagnetic Model of Nonlocal Isotropic Domains

One of the simplest, yet still demanding and interesting, nonlocal media is the special case of isotropic, homogeneous, spatially-dispersive, but optically inactive domains [2]. In this case, very general principles force the generic expression of the material response tensor to acquire the concrete form [3], [4], [126]

K(k, ω) = K_T(k, ω)(I − k̂k̂) + K_L(k, ω) k̂k̂, (53)

where k := |k| and k̂ := k/k, with k the wavevector (spatial frequency) of the field. The first term in the RHS of (53) represents the transverse part of the response function, while the second term is clearly the longitudinal component, with behaviour captured by the generic functions K_T(k, ω) and K_L(k, ω), respectively.
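As a quick concreteness check, the following Python fragment builds the 3×3 tensor of (53) from its transverse and longitudinal projectors; the scalar values chosen for K_T and K_L are placeholders.

```python
import numpy as np

def K_tensor(k_vec, K_T, K_L):
    """Isotropic spatially dispersive response tensor of the form (53):
    transverse part on the plane orthogonal to k, longitudinal along k."""
    k = np.linalg.norm(k_vec)
    khat = k_vec / k
    P_L = np.outer(khat, khat)      # longitudinal projector  k^ k^
    P_T = np.eye(3) - P_L           # transverse projector  I - k^ k^
    return K_T * P_T + K_L * P_L

K = K_tensor(np.array([0.0, 0.0, 2.0e7]), K_T=12.0, K_L=11.5)
```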
The tensorial forms involving the dyads k̂k̂, however, are imposed by the formal requirement of satisfying the Onsager symmetry relations in the absence of external magnetic fields [4]. Using a proper microscopic theory, ultimately quantum theory, it is possible in general to derive fundamental expressions for the transverse and longitudinal components of the response functions in (53) [2]-[4], [45], [46], [48]. These forms are often obtained in the following way: 1) First, fundamental theory is deployed to derive analytical expressions for K_T(k, ω; r′) and K_L(k, ω; r′). 2) Afterwards, depending on the concrete values of the various physical parameters that enter into these expressions, e.g., frequency, temperature, molecular charge/mass/spin, density, etc., the obtained analytical expressions are expanded in power series with the proper number of terms. 3) The expression of the dielectric tensor function is then put in the form of either a polynomial or a rational polynomial in k. A concrete example will be given in Sec. VII-C to illustrate the use of such a physics-based dielectric function for the case of exciton-polaritons in semiconductors.

B. A Topological Coarse-Graining Model for Inhomogeneous Nonlocal Material Domains

We now describe a method that can help in transitioning from the generic form (53), valid for homogeneous nonlocal domains, to the inhomogeneous medium situation developed throughout this paper, where nonlocality cannot be captured by a simple global dependence of the dielectric function on k. However, instead of working with the full nonlocal function K(r, r′), an alternative simplified model is proposed, which we name the topological coarse-graining model. The idea is as follows. Consider a global material domain D, which is an open 3-manifold, say an open subset of R^3 that may be either simply connected or disconnected. The material is nonlocal and inhomogeneous. At each point r ∈ D, a microdomain, i.e., an open set V_r ⊂ D, is assigned. The medium is locally isotropic and homogeneous in the sense that within each microdomain we can describe the response to an external field excitation E by means of a relation similar to (24), namely

D(r; ω) = ε_0 ∫_{V_r} K(r − r′; ω) E(r′; ω) d^3r′. (54)

That is, the only difference between (54) and (24) is that in the former we use the correct form of homogeneous nonlocality K(r − r′; ω) instead of the general form K(r, r′; ω). Moreover, we have put in the proper response and excitation fields D(r) and E(r) and inserted the free-space permittivity ε_0.

It may be understood now that, as a topological coarse-graining process, the original inhomogeneous nonlocal medium ultimately described by the material tensor K(r, r′; ω) is subdivided into "small topological cells," the microdomains V_r, r ∈ D, such that each "topological cell" in itself behaves as a homogeneous nonlocal isotropic subdomain and can hence be described by the form (54), with the material tensor itself taking the (topologically) locally correct form (53). This can be considered a quasi-local model where the global domain is EM nonlocal but in small regions (cells) it behaves like an EM local medium, e.g., see [12].
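A toy illustration of this coarse-graining, as a Python sketch on a 1D stand-in for D: each sample point gets a ball V_r = B(r, a_r) with a position-dependent radius, and we check numerically that the balls form an open cover. The radius profile is purely illustrative; in the paper a_r is fixed by the physics via (75).

```python
import numpy as np

# Hypothetical coarse-graining: assign a nonlocality radius a_r to each
# sample point of an inhomogeneous 1D domain D = [0, L].
L, n = 1.0, 50
pts = np.linspace(0, L, n)
a = 0.02 + 0.03 * np.sin(np.pi * pts / L) ** 2   # illustrative a_r profile

balls = [(r, a_r) for r, a_r in zip(pts, a)]      # V_r = B(r, a_r)

# Check that the microdomains form an open cover of D (up to sampling)
dense = np.linspace(0, L, 2000)
covered = np.any(np.abs(dense[None, :] - pts[:, None]) < a[:, None], axis=0)
print("covered fraction:", covered.mean())        # ~1.0 for this profile
```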
Remark VII.1. The term local is used in this paper in two senses. The first sense is the physical one, in which local is set against physical nonlocality, which includes spatial dispersion (EM local/nonlocal). On the other hand, in topology, a local property is one which holds in a small open neighbourhood of a given point, in our case the topological microdomain V_r. The distinction between the two technical senses of the same term should always be clear from the context. In the few cases when there is a risk of confusion, we say topologically local to distinguish the second meaning above from EM local (see also Remark III.4).

Our key objective here is to use the simple estimation of the size of nonlocal domains (spheres) outlined above in order to build the topological content of the microdomain structure developed in the discussion started in Sec. IV. For example, consider a point r_1, which provides a label for one of the micro-cells we may deploy for creating a coarse-grained model of the inhomogeneous medium. To be more specific, let us construct the topological open ball defined by

B(r, a_r) := {r′ ∈ D : d(r, r′) < a_r}, (55)

where a_r ∈ R^+ is a number quantifying the smallness of this "nonlocality ball" centered at r, and d is the distance metric. The number a_r will be determined later based on the actual physics of the problem. Next, the fine-grained topological microdomain structure can be constructed by aggregating all these balls in order to produce a coarse-graining of the overall inhomogeneous nonlocal material domain D. The choice of the shape of the microdomain V_r as a sphere B(r, a_r) defined by (55) is justified by our earlier assumption that the material is (topologically) locally isotropic. However, note that globally the electromagnetic processes need not behave as in isotropic domains.

Fig. 5 provides an illustration of the two processes employing i) the proposed topological coarse-graining model utilizing the set of balls V_r, r ∈ D (left), and ii) the conventional paradigm where the unit cells are nonoverlapping (right). As can be seen from the diagram, in the topological approach there exists an open set (microdomain) V_r attached to each point r ∈ D such that nearby microdomains may overlap with each other, i.e., the set V_r1 ∩ V_r2 is not necessarily empty. On the other hand, the conventional approach to coarse-graining (right) involves subdomains like V′_r1 and V′_r2 that are nonoverlapping, leading to grid-like structures or a "tiling up" of the material domain D where in general no holes are left. In both approaches each type of subdomain, whether V_r or V′_r, is assumed to be homogeneous. The disadvantage of the conventional approach is that the abrupt change in electromagnetic properties when transitioning between every two neighboring subdomains often requires imposing a boundary condition between them for the purpose of arriving at a correct computational assessment of the physics inside the material domain D. On the other hand, this problem does not exist in the topological approach (left) because the microdomains are allowed to overlap, and common regions between overlapping microdomains are treated correctly using the partition of unity basis functions, as described in Sec. V-C.
C. Concrete Example: Resonant Nonlocal Semiconductor Domains (Exciton-Polariton Interactions)

We provide here a concrete application of the topological coarse-graining algorithm proposed above. The specific nonlocal metamaterial is a semiconductor with a dielectric function exhibiting a single strong resonant exciton transition at frequency ω = ω_e.

Remark VII.2. Excitons were introduced by Frenkel very early in the history of condensed-matter physics [167], [168] and were further developed by other researchers such as Wannier [169]. In the late 1950s, excitons were brought into the picture of light-matter interaction through the concept of the exciton-polariton [139], which will be defined below. Pekar [139], Ginzburg [137], and others [58], [170]-[172] affirmed the nonlocal approach to exciton-polariton materials by explicitly highlighting the strong impact of spatial dispersion near excitonic resonances. The subject of excitons is vast and multidisciplinary. For extensive treatments covering various applications in physics, chemistry, and technology, see [4], [44], [173]-[176].

1) Origin of EM Nonlocality in Excitonic Semiconductors: In order to understand the particular nonlocal model to be presented in Sec. VII-C2, let us first briefly explain the relevant physics of exciton-polariton interactions and why they can lead to a strong nonlocal response. In a direct-bandgap semiconductor the minimum of the conduction band is aligned with the maximum of the valence band, allowing electronic transitions from lower (unexcited) to excited bands upon interaction with external EM fields. No free charge carriers are assumed, in contrast to metals and plasmas. An electron exiting the valence band after the absorption of an external photon will leave behind a hole, which acts as an independent quasiparticle that can travel throughout the material in the form of a collective excitation [167], [168], [177], [178]. The exciton is defined as a coupled pair composed of the two bound states of the electron and hole. Here, both electrons and holes must be understood as "dressed" particles (quasiparticles) with effective mass and charge different from those of the bare (noninteracting) particle [179]. We may apply the Bohr model to the exciton (electron-hole pair) with simple modifications: First, the electron mass must be replaced by the reduced exciton mass m_r := m_el m_h/(m_el + m_h), where m_el and m_h are the electron and hole masses, respectively. Second, due to the screening of the Coulomb attraction by the dielectric medium, the effective electron charge e^− = −e should be replaced by e^−/√ε_0, where ε_0 is the static dielectric constant. From this it follows that the exciton binding energy E_b is given by E_b = m_r e^4/(2ℏ²ε_0²). Therefore, the total energy needed to create an exciton state is given by

ℏω_e = E_g − E_b, (57)

where E_g is the semiconductor bandgap energy.
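As a numerical sanity check of the scaled Bohr model above, the short Python snippet below evaluates m_r and E_b, where the binding-energy formula reduces to a scaled Rydberg. The GaAs-like parameter values are illustrative assumptions, not taken from the paper.

```python
# Scaled Bohr model: E_b = m_r e^4 / (2 hbar^2 eps0^2), i.e. a Rydberg
# rescaled by the reduced mass and the static dielectric screening.
RYDBERG_EV = 13.606

m_el, m_h = 0.067, 0.45          # effective masses in units of the bare mass
eps_r = 12.9                     # static (relative) dielectric constant

m_r = m_el * m_h / (m_el + m_h)  # reduced exciton mass
E_b = RYDBERG_EV * m_r / eps_r**2
print(f"reduced mass = {m_r:.3f} m0, binding energy = {1e3*E_b:.1f} meV")
# ~4.8 meV, the right order of magnitude for excitons in GaAs
```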
The key to the origin of nonlocality is the scenario in which the excitation photon has an energy ℏω that is greater than the minimum exciton energy (57). In the case where ω > ω_e, the excess energy will be transformed into kinetic energy. Due to the conservation of momentum, the wavevector of the exciton is equal to the photon wavevector k, and hence the kinetic energy of the exciton is given by ℏ²k · k/(2m_e), where m_e := m_el + m_h is the translational mass of the exciton in the effective-mass approximation [169]. Consequently, the total exciton energy E_e is given by [182]

E_e = ℏω_e(k) := ℏω_e + ℏ²k²/(2m_e). (58)

The exciton frequency ω_e(k) then acquires a dependence on k due to the kinetic energy term. As will be shown below, it is precisely this dependence that eventually leads to the emergence of electromagnetic nonlocality in semiconductors around excitonic resonances when photons couple with excitons.

2) The Nonlocal Exciton-Polariton Model: A polariton is simply a "photon living inside a dielectric medium." That is, the quantization of an electromagnetic wave inside a dielectric domain is often called a polariton instead of a photon (sometimes polaritons are called "dressed photons"). An exciton-polariton is a polariton coupled with a mechanical exciton like the electron-hole pair defined above. It is well known from quantum theory that the dielectric function can be approximated near resonance by the form [4], [59], [137], [138], [170]

ε(k, ω) = ε_0 + α/(ω_e²(k) − ω² − iωΓ), (59)

where, keeping the leading k-dependence implied by (58),

ω_e²(k) ≈ ω_e² + (ℏω_e/m_e) k². (60)

Here, ℏ is the reduced Planck constant while α is a kind of oscillator strength and can be different for transverse and longitudinal excitation fields. The effective mass of the exciton is denoted by m_e. On the other hand, the exciton lifetime τ_e is defined by τ_e := 1/Γ; hence Γ can be thought of as the exciton decay or relaxation rate. In this model, possible dependencies of Γ and the oscillator strength α on k are ignored. In order to obtain significant nonlocality in the material, the condition (62) can be imposed; indeed, it can be shown that under such a condition the kinetic energy term in (58) leads to significant nonlocal effects in (59). A way to realize nonlocal semiconducting metamaterials is to use intrinsic semiconductors satisfying (62) by keeping the temperature low and the material pure (undoped) [20].
The model described by (59) and (60) can be viewed as a natural generalization of the local Lorentz model widely utilized to model temporal dispersion in solids and plasmas [137], [170]. It represents the simplest nonlocal resonant model with a single strong resonance at a characteristic frequency, here ω = ω_e. All other off-resonance excitonic transitions are gathered into the background dielectric constant ε_0 for simplicity. For frequencies well below resonance, ω ≪ ω_e, the exciton-polariton behaves essentially like a photon propagating in a medium with background permittivity ε_0. For ω ≫ ω_e, we again recover photons, but usually with a background described by ε_∞, the high-frequency limit of the permittivity. In general, the difference between the static and high-frequency permittivities is quite small, i.e., |ε_0 − ε_∞| ≪ ε_0, and in this paper for simplicity we treat them as equal (ε_0 ≈ ε_∞), since we are interested in the EM response around a single excitonic resonance and the oscillator strength α in (59) is small. One consequence of this assumption is that the splitting between the longitudinal and transverse modes can be neglected. Indeed, since the longitudinal and transverse frequencies ω_L and ω_T are related to each other through a splitting controlled by the oscillator strength α, a consequence is the near equality of the longitudinal and transverse frequencies, which allows us to considerably simplify the mathematical treatment. In addition, assuming that the oscillator strength α in (59) is nearly the same for both the longitudinal and transverse parts of the response function, it follows that we need only work with a single scalar response function, namely the form (59) itself, instead of the more general tensorial formula (53). Nonlocal effects associated with the model (59) emerge from the quantum mechanical nature of exciton-polariton interactions and the need to enforce conservation of energy/momentum, as discussed in Sec. VII-C1, leading to the strong dependence on k observed in (59).

Remark VII.3. The model (59) itself may be intuitively derived as follows. A generic oscillator model is the Lorentzian form 1/(ω_e² − ω² − iΓω), which models a large number of physical processes, from lattice vibrations to electronic transitions and many others [3], [45], [126]. Substituting the wavevector-dependent ω_e expression (58) into the just-mentioned Lorentzian form, the expression (59) is immediately obtained after keeping only the quadratic terms in k (for a more careful quantum mechanical derivation, see [4], [49], [173]). This provides a first explanation of nonlocality as arising from the quantum mechanical requirement of enforcing energy/momentum conservation in photon-exciton interactions.

There is yet another physical explanation of nonlocality. When the exciton mass approaches infinity (m_e → ∞), the kinetic energy term in (58) drops out and the excitonic dielectric function (59) becomes local. This is why spatial dispersion is sometimes referred to as the "finite-mass model," with some suggestions that the origin of nonlocality in this case is the inertial effects of the exciton [170] (there is a nice parallelism here with temporal dispersion, which is known to arise from the inertial effects of electrons in interaction with radiation fields [45]). In what follows, we assume that the effective mass of the exciton is always finite and positive, i.e., 0 < m_e < ∞. However, it should be noted that since excitons are collective excitations of solids [167], [168], they may have negative mass [173]. While this will not be pursued here, the negativity of the excitonic mass may be exploited in order to further design and control the EM behaviour of nonlocal MTMs constructed using excitonic semiconductors.
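For completeness, here is a minimal sketch of the omitted longitudinal-transverse relation, assuming the Lorentz-type form of (59) at k = 0 and Γ = 0, where the longitudinal mode is defined by the zero of the dielectric function and the transverse resonance sits at ω_T = ω_e:

```latex
\varepsilon(0,\omega_L) = \varepsilon_0 + \frac{\alpha}{\omega_e^2 - \omega_L^2} = 0
\quad\Longrightarrow\quad
\omega_L^2 = \omega_T^2 + \frac{\alpha}{\varepsilon_0}, \qquad \omega_T = \omega_e .
```

The splitting ω_L² − ω_T² = α/ε_0 therefore vanishes in the small-α limit invoked in the text, which is what justifies treating the two frequencies as nearly equal.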
To gain a deeper insight into the various resonance structures of the exciton-polariton response function (59), we rewrite it in the equivalent form (63), where k_e is called the exciton wavenumber. The wavelength λ_e is a fundamental resonance spatial scale, which we will refer to as the exciton wavelength, given by (65). For example, with ℏω_e = 2.5 eV and m_e = 0.9m_el, where m_el is the electron mass, the exciton wavelength λ_e is around 0.0293 nm, which is of the same order of magnitude as the interatomic spacing. The excitation field wavelength λ is at least one order of magnitude larger. Later we will show typical values for the topological microdomain radius a_r; see Table II below.

There are several fundamental spatial and temporal scales involved in the process of describing a generic nonlocal metamaterial domain D. The excitation field E(r) itself introduces its own excitation period T := 2π/ω and spatial scale (wavelength) λ := 2π/k. On the other hand, the excitonic transition as such is associated with the fundamental transition period T_e := 2π/ω_e and the spatial scale λ_e := 2π/k_e. In addition, the radius of a topological microdomain based at a generic position r ∈ D will be shown later to be given by a special formula (75). Nonlocality arises from the delicate interplay between all these different spatial and temporal scales. In what follows we will emphasize their relative roles in determining the rich nonlocal microstructure of the material domain while introducing quantitative examples. Table I gives a summary of all these parameters with their meanings explicitly stated.

Armed with this list of spatial and temporal scales, we are now better positioned to understand the resonance structure associated with the exciton-polariton nonlocal dielectric function (63). Fig. 6 illustrates two cases of resonance studied with respect to variation in the excitation field wavenumber k (wavelength λ). In order to focus on nonlocality, we only plot the nonlocal response part, which is proportional to ε(k, ω) − ε_0. As we can see from the two figures, there is a strong resonance taking place when the ratio k/k_e = λ_e/λ becomes comparable in magnitude with the quantities remaining in the denominator. That is, the spatial resonance condition is

(k/k_e)² ≈ ω²/ω_e² − 1. (66)

However, (66) may hold only if the imaginary part of the denominator, i.e., the quantity ωΓ/ω_e², is very small. Otherwise, since k and k_e are real, the ratio k/k_e can never lead to a strong resonance if the relaxation rate Γ is sufficiently large. Another way to say this is that strong nonlocal spatial resonance happens when dissipation is small, or when the exciton lifetime is long enough. This special case of long exciton lifetime is characterized by the condition

ωΓ/ω_e² ≪ 1. (67)

In this case, it is evident that the spatial and temporal conditions for nonlocal resonance are related to each other by the simple relation

k/k_e ≈ √(ω²/ω_e² − 1). (68)

From this it can be inferred that nonlocal resonances generally occur only for ω/ω_e > 1.
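The following Python sketch locates the spatial resonance numerically, assuming the reduced dimensionless form of (59)/(63) implied by Remark VII.3; the oscillator strength and damping values are illustrative. It reproduces the qualitative behavior described for Fig. 6.

```python
import numpy as np

# Assumed reduced form of the exciton-polariton response (cf. (59), (63)):
# chi(k, w) ~ alpha / (1 + (k/ke)^2 - (w/we)^2 - 1j * w * Gamma / we^2)
def chi(k_over_ke, w_over_we, gamma_over_we, alpha=1.0):
    return alpha / (1.0 + k_over_ke**2 - w_over_we**2
                    - 1j * w_over_we * gamma_over_we)

k = np.linspace(0.01, 3.0, 3000)
for w in (1.5, 2.5):                       # the two operating points of Fig. 6
    peak = k[np.argmax(np.abs(chi(k, w, gamma_over_we=0.01)))]
    print(f"w/we = {w}: peak at k/ke = {peak:.2f}, "
          f"predicted sqrt(w^2 - 1) = {np.sqrt(w**2 - 1):.2f}")
```

The peak moves from k/k_e ≈ 1.1 at ω/ω_e = 1.5 (i.e., λ ≈ λ_e) to k/k_e ≈ 2.3 at ω/ω_e = 2.5 (λ considerably smaller than λ_e), matching the resonance condition (66).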
In Fig. 6 (left), we can see that for the above-resonance condition ω/ω_e = 1.5, the nonlocal domain possesses a spatial resonance at roughly λ ≈ λ_e. On the other hand, if we operate the material at the larger frequency ω/ω_e = 2.5, i.e., well above the exciton transition frequency, then the spatial resonance occurs at excitation field wavelengths λ considerably smaller than λ_e. Finally, we add that when the nonlocal response is plotted as a function of ω instead of k, resonance structures similar to Fig. 6 are obtained under the condition (67), since in that case (68) approximately holds. In general, we would expect that for the best operation of the designed nonlocal MTM (maximal nonlocal response), the operating frequency should be made near the exciton transition frequency, i.e., ω/ω_e ≈ 1, since in general Γ is never exactly zero and hence the condition (67) seldom holds for all frequencies.

D. Quantitative Estimation of the EM Nonlocality Microdomain Structure in the Exciton-Polariton Dielectric Nonlocal Model

The dielectric function in the spatial domain is given by the inverse Fourier transform (69), where F_k^{−1} is the inverse of the forward Fourier transformation defined by (14). We will need the inverse Fourier transform relation (70) (proved in Appendix B), in which the complex constant γ = γ′ + iγ″ is expressed in terms of the model parameters by (72). Therefore, by substituting (79) into (69), we arrive at the decomposition (73) of the dielectric function into local and nonlocal responses. The first term in the RHS of (73) provides the background local response of the medium. On the other hand, all nonlocal effects are relegated to the Green's function (74) of the material under investigation. The Green's function (74) is the most fundamental physical quantity needed for the construction of the microdomain structure V(D) of the nonlocal medium. It has some similarity with the scalar free-space Green's function for radiation fields (the spherical wave exp(ik|r − r′|)/|r − r′|), but there are notable differences.

First, we note that (74) exhibits strong dispersive behaviour due to the dependence of γ′ and γ″ on frequency. Second, the existence of the exponential factor exp(γ″|r − r′|) makes the Green's function ε_NL(r − r′; ω) highly attenuating, in spite of the fact that this attenuation is not mainly due to losses. Indeed, as can be seen from (59), dissipation is controlled by the exciton lifetime τ_e, or equivalently the decay rate Γ. Dissipation decreases as the lifetime increases or when Γ is small. Fig. 7 illustrates some examples where we plot both γ′ and γ″ as functions of frequency. The frequency-dependence behaviour strongly depends on the ratio Γ/ω_e, i.e., the ratio between the relaxation frequency and the excitonic transition frequency. For a small ratio such as Γ/ω_e = 0.1, the attenuation constant γ″ is nearly constant for ω > ω_e, while it assumes higher values for frequencies below ω_e, as can be seen from Fig. 7(a). This is consistent with the high-pass filtering behaviour of this type of resonance, where waves are excited for frequencies slightly larger than the cutoff threshold at ω_e. For the propagation constant γ′ at the same relaxation-to-exciton-transition ratio, Fig. 7(b) shows that it becomes a nearly straight line. Such behaviour, combined with nearly constant attenuation, indicates small dispersion effects. On the other hand, when Γ/ω_e increases, we begin to see strong dispersion effects, manifested by non-constant attenuation and a nonlinear phase constant.
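Since (72) is not preserved in this copy, the Python sketch below assumes the dispersion implied by Remark VII.3, γ² = (m_e/ℏω_e)(ω_e² − ω² − iΓω), and extracts γ′ and γ″ with the sign convention of Appendix B (γ″ < 0). The resulting magnitudes of a few nm⁻¹ are consistent with the values quoted in the Fig. 8 caption, but the numbers are indicative only.

```python
import numpy as np

HBAR = 1.0545718e-34            # J s
M0 = 9.1093837e-31              # kg (bare electron mass)

def gamma_split(w, we, Gamma, me):
    """gamma', gamma'' from the assumed dispersion
    gamma^2 = (me/(hbar*we)) * (we^2 - w^2 - 1j*Gamma*w),
    selecting the root with Im(gamma) < 0 as in Appendix B."""
    g2 = (me / (HBAR * we)) * (we**2 - w**2 - 1j * Gamma * w)
    g = np.sqrt(g2.astype(complex))
    g = np.where(g.imag > 0, -g, g)      # enforce gamma'' < 0
    return g.real, g.imag

we = 3.7977e15                           # rad/s  (hbar*we = 2.5 eV)
w = np.linspace(0.5, 2.0, 7) * we
gp, gpp = gamma_split(w, we, Gamma=0.1 * we, me=0.9 * M0)
print(np.round(gpp * 1e-9, 2))           # gamma'' in nm^-1, order of a few
```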
In fact, the attenuation captured by the constant γ″ is not merely an expression of dissipation but is the signature of nonlocality in exciton-polariton material domains. The medium response weakens as the distance from the source increases, with the characteristic length scale of this nonlocality radius controlled by γ″. Fig. 8 illustrates the real part of the dielectric-function Green's function (74). The ability of the medium to respond to spatially distant sources is graphically illustrated by the spread of the function around the origin |r − r′| = 0. The size of the nonlocal domain is then directly reflected by the rapidity of the decay of the Green's function (74) as one moves away from r′, which is the origin here.

E. The Locally-Homogeneous Model of Nonlocal Semiconducting Domains

Quasi-inhomogeneous or smoothly-inhomogeneous nonlocal media are some of the simplest possible prototypes of general (inhomogeneous) nonlocal materials where the spatial dispersion model ε(k), with only a dependence on the one variable k, is not adequate for the description of the physics of the problem [12], [39]. In general, there have been relatively few investigations aimed at going beyond spatial dispersion in homogeneous media. Examples include inhomogeneous plasmas such as those in controlled-fusion reactors [185], cold collisionless magnetoplasmas [185], and incommensurately-modulated superstructures in insulators [12], [186].

Here, we will analyze a simple inhomogeneous model of semiconductors experiencing exciton-polariton transitions as outlined above. The EM nonlocal model is locally homogeneous in the sense that around each point r ∈ D there exists a microdomain V_r such that the medium can be modeled as homogeneous and spatially dispersive for all r ∈ V_r (i.e., the second mention of "locally" here is meant in the topological sense). However, we allow for variations in the spatial dispersion model to take place from one microdomain V_r to another.

We may then estimate the size of each EM nonlocality microdomain by using the exponential law in (74). To see this, let us first generalize to the inhomogeneous setting, where at each point r ∈ D the parameters of the exciton-polariton model all become, in general, functions of position, i.e., we write γ′(r), γ″(r), ω_e(r), α(r), m_e(r), etc., where it is understood that the medium's microscopic composition may change from one position to another. The main formula for computing the size (radius) of the topological microdomain balls V_r = B(r, a_r) is then

a_r = 1/|γ″(r)|, (75)

which is roughly the characteristic localization length entailed by an exponential law of the form exp(−|γ″|r). Using (72), this becomes an explicit function of the local model parameters Γ(r), m_e(r), and ω_e²(r) (76). This formula is illustrated in some basic examples in Fig. 8(b) for various values of the ratio Γ/ω_e. When this ratio is small, the size of the EM nonlocality domain increases, since the attenuation becomes smaller. Conversely, one may control the size of each EM nonlocality microdomain V_r by modifying the ratio Γ(r)/ω_e(r) evaluated at that position. This provides a path toward an experimental realization of generalized nonlocal MTMs with controlled micro-topological structure. In order to give a view of the numerical values of this structure, Table II provides some relevant microdomain data.
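A self-contained numerical illustration of (75), again under the assumed dispersion γ² = (m_e/ℏω_e)(ω_e² − ω² − iΓω) used in the previous sketch, evaluated below resonance where the radius is most sensitive to Γ; the parameter values are illustrative.

```python
import numpy as np

HBAR = 1.0545718e-34
we, me = 3.7977e15, 0.9 * 9.1093837e-31   # hbar*we = 2.5 eV, me = 0.9 m0

def a_r(w, Gamma):
    """Microdomain radius a_r = 1/|gamma''| under the assumed dispersion."""
    g = np.sqrt(complex((me / (HBAR * we)) * we**2) * (1 - (w/we)**2
                - 1j * Gamma * w / we**2))
    return 1.0 / abs(g.imag)

for ratio in (0.01, 0.1, 0.5):            # Gamma / we
    print(f"Gamma/we = {ratio}: a_r = {1e9 * a_r(0.8 * we, ratio * we):.2f} nm")
# radius shrinks from tens of nm to sub-nm as the damping ratio grows
```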
VIII. APPLICATION TO FUNDAMENTAL THEORY: ELECTROMAGNETIC BOUNDARY CONDITIONS IN THE FIBER BUNDLE APPROACH

The well-known tension between nonlocal electromagnetism and material interfaces has already been mentioned several times above. Here, we provide an application of the fiber bundle theory of Sec. V aimed at elucidating the nature of this tension and at suggesting a possible new formulation of the problem. The starting point is Fig. 2, where a zoomed-in topological picture based on the general structures explicated in Sec. IV is given. The focus now is on the interface between two generic nonlocal domains D_m and D_n. In traditional local electromagnetism, the constitutive-relation material tensor K_n is exploited to deduce conditions dictating how various electromagnetic field components behave as they cross the D_m/D_n interface. However, even if each K_n(r, r′) were to be treated as belonging to a spatially dispersive domain, i.e., by replacing it by K_n(r − r′), the presence of a boundary completely destroys the translational symmetry of the structure on which the very form of the spatially dispersive nonlocal response tensor K_n(r − r′) is based. This was very clearly explained in [4], with several proposals for a solution of the electromagnetic problem. For example, since close to the interface it is very obvious that the material tensor must be reverted back to the most general nonlocal form, namely K_n(r, r′), it was proposed that one may use the latter form only in a thin region containing the interface on both sides. Outside this region, a gradual transition or a continuous profile (tapering) is introduced to transition from the forms K_n(r, r′) and K_m(r, r′) to the spatially dispersive forms K_n(r − r′) and K_m(r − r′) characteristic of "bulk" homogeneous material domains [4].

Another proposal is to keep the spatial dispersion profiles K_n(r − r′) and K_m(r − r′) everywhere but introduce specialized additional boundary conditions (ABCs) at the interface suitable for the problem at hand. Although this latter approach is neither mathematically nor physically consistent (because of the breakdown of symmetry caused by the presence of an interface), it nevertheless remains popular because, at least in outline, nonlocal electromagnetism is thereby held in a form as close as possible to familiar local electromagnetic theory methods, especially numerical techniques such as the Finite Element Method (FEM) [129], the Method of Moments (MoM) [131], and the Finite-Difference Time-Domain Method (FDTD) [130], i.e., established full-wave algorithms where it is quite straightforward to replace one boundary condition by another without essentially changing much of the code.

Nevertheless, each of the two approaches discussed above requires considerable input from microscopic theory, mainly to determine the tapering transition region in the case of the first, and the ABCs themselves in the second. That motivated the third approach, called the ABC-free formalism, where the relevant microscopic theory is utilized right from the beginning in order to formulate and solve Maxwell's equations. For example, in [46], [47], a global Hamiltonian of the matter-field system is constructed and Maxwell's equations are derived accordingly. In [48], the rim zone (the field attached to matter) is investigated using different physical assumptions to understand the transition from nonlocal material domains to vacuum, going through the entire complex near-field zone.
We believe that the main common conclusion from all these different formulations is that in nonlocal electromagnetism it is not possible in general to formulate the electromagnetic problem at a fully phenomenological level. In other words, microscopic theory appears to be in demand more often than in the case of systems involving only local materials. However, since all existing solutions use the traditional spatial manifold D as the main configuration space, the question now is whether the alternative formulation proposed in this paper, the extended fiber bundle approach, may provide some additional insights.

We provide a provisional elucidation of the topological nature of electromagnetism across material interfaces by noting that, in Fig. 2, not only the behaviour of the fields F(r) in the two domains is relevant, but also the entire local topological structure attached to each point near the interface. We build on this key concept in order to illustrate how the problem of nonlocal electromagnetism across interfaces may be reformulated. First, Fig. 9 provides a finer, more structured picture of the topological structure of Fig. 2. It should become clear now that since the two nonlocal material domains possess an extra structure, namely that of the fiber superspaces attached to each point in the base space, we must also indicate how the various elements belonging to the Banach function spaces, the fields defined on the microdomains V_r in Fig. 2, behave as they cross the boundary separating D_m and D_n. One obvious way to do this is to introduce a bundle homomorphism [161] between the two vector bundles M_m and M_n over the interface submanifold ∂D_mn separating D_m and D_n (this mathematical object is similar to the nonlocal response map L introduced by (43)). The motivation behind introducing this bundle homomorphism is for it to serve as a "boundary condition operator" acting on the fiber bundle nonlocal domains M_m and M_n instead of on D_m and D_n. We will not construct this operator in detail here, but provide instead some additional remarks to illustrate the main idea. The traditional boundary condition, applied in the base spaces D_m and D_n, is usually spelled out as a pair of matching relations on ∂D_mn, where ∂D_mn is the boundary between D_m and D_n; here, Γ_b1 and Γ_b2 are "base space boundary functions." On the other hand, the fiber bundle elements, i.e., functions defined on the microdomains V_r, are mapped by a relation of the same kind, where Γ_f is a different fiber superspace boundary function. The full formulation is more complex because the boundary condition operator must also be proved to be compatible with the fiber bundle structures, and so the entire global topology of M_m and M_n will interact with the effective final electromagnetic boundary condition resulting from this process.

The main relevant conclusion here is that the existence of extra or additional structures in the fiber bundle space of electromagnetic nonlocality makes the need for additional boundary conditions, or for information coming from the microscopic topological structure, very natural. The fiber bundle formalism of nonlocal metamaterials does capture the physics of nonlocal domains joined together through interfaces. The full formulation of the proposed fiber bundle boundary condition homomorphism is beyond the scope of this paper, but it is hoped that this initial insight can at least clarify the subject and stimulate further research in the fundamental theory of nonlocal metamaterials.
IX. FURTHER APPLICATIONS AND FUTURE WORK

We give a general outline of several additional possible applications, some of which highlight the theory developed in this paper. Issues pertinent to fundamental considerations (physical and mathematical) and to engineering functions are taken up in Secs. IX-A and IX-B, respectively.

A. Applications to Fundamental Theory and Methods

1) Limitations on Nonlocal Metamaterials: Maps like L (43) can be reformulated in the space of vector bundle sections [155], [161], [164], a subject that is extremely well developed in classic differential topology. In fact, the electromagnetic response field function R itself can sometimes be obtained by working directly with the source bundle superspace M. For example, under some conditions, this can be achieved by replacing each fiber X_i by X_i × C^3. In this way, the entire electromagnetic nonlocal response problem becomes identical to the investigation of how vector bundle sections interact with the topology of the underlying base manifold D. There is an extremely large literature in differential topology and geometry focused on this problem, especially on how local information can be propagated to extend into global structures [19], [156], [161].

The author believes that by starting from local data in a given nonlocal metamaterial domain, e.g., the global shape of the device, the distribution of topological holes, etc., one may then use existing techniques borrowed from differential topology, e.g., the theory of characteristic classes, to determine the allowable EM response functions permissible in principle at the global level. Engineers are typically interested in knowing in advance what the best (or worst) performance measures obtainable from specific topologies are, and hence reformulating the electromagnetism of nonlocal metamaterials in terms of vector bundles could be of help in this respect, since it opens a pathway toward a synergy between general topology, physics, and engineering in the field of metamaterials.
2) Numerical Methods: Traditional full-wave numerical methods are sometimes deployed to deal with nonlocal EM materials, often using the additional-boundary-conditions framework, in spite of the latter's lack of complete generality. At the heart of the traditional approach to numerical methods in local electromagnetism is the concept of operators between linear spaces. However, by reformulating the source space of field-matter interaction in terms of a Banach bundle, it should be possible to reformulate Maxwell's equations to act on this extended geometric superspace instead of the conventional spacetime framework. Instead of the concept of a linear operator, we now have the much more general and richer concept of the bundle homomorphism developed in detail above. One of the advantages anticipated from such a reformulation is the ability to resolve the issue of generalized boundary conditions (Sec. VIII). Moreover, since every point belonging to a fiber superspace is in itself a smooth function defined on an entire material sub-microdomain, by building a new system of discretized recursive equations approximating the behaviour of electromagnetic solutions living in the enlarged superspaces M and R, one may expect a deeper understanding of the physics of nonlocality, since the topology of the nonlocal interaction regime is explicitly encoded into the geometry of the new expanded solution superspace M itself. It is also possible that such numerical methods may emerge as more computationally efficient and broader in applicability than the conventional methods rooted in local electromagnetism. One reason for this is that the Banach vector bundle formulation introduced in this paper is quite natural and appears to reflect the underlying physics of nonlocal metamaterials in a direct manner. In recent years, the subject of computational topology has gained momentum, and some researchers are now building new numerical methods by exploiting the topological structure of the problems under consideration, e.g., see [166], [187].
B. Other Applications of Nonlocal Metamaterials

1) Topological Photonics: One of the main applications of the proposed vector bundle formalism is that it opens the door to a new way of investigating the topological structure of materials. It has already been noticed that nonlocal EM response is essential in topological photonics, e.g., see [100], [111]. Indeed, since in topological photonics the wavefunction of bosons, usually the Bloch state, is examined over the entirety of momentum space (usually the Brillouin zone), it is the dependence of the EM response on k that is at stake, which naturally brings in nonlocal issues. But since, by using our theory, we can associate with every nonlocal material a concrete fiber bundle superspace reflecting the rich information about the multiscale topological microdomain structure, the global shape of the material, and the impact of the boundaries separating various material domains, it is natural to examine whether a topological classification of the corresponding fiber bundles may lead to a new way to characterize the topology of materials, other than the Chern invariants used extensively in the literature. The advantage of the superspace approach in this case is that the complicated topological and geometrical aspects of boundaries and inhomogeneity in nonlocal media can be encoded very efficiently in the local structure of the material fiber bundle. Using standard techniques in differential topology [155], it should be possible to propagate this local information to the global domain (the entirety of the system), for example by computing the fiber bundle's topological invariants, such as its homology groups [164]. Our approach is then "dual" to the standard approach, since we work on an enlarged configuration space (spacetime or space-frequency), while the mainstream approach operates in the momentum space of the wavefunction.
2) Digital Communications: Nonlocal metamaterials offer a very wide range of potential applications in wireless communications and optical fibers. The basic idea is to introduce specially engineered nonlocal domains either as part of the communication channel (e.g., optical fibers, plasmonic circuits, microwave transmission lines) [150], or as a control structure integrated with existing antennas [121]. Spatial dispersion has also been used as a method to engineer wave propagation characteristics in material domains, e.g., see [188] for applications to high-efficiency modulation of free-space EM waves. A general linear partial differential equation explicating how spatial and temporal dispersion can be jointly exploited to produce zero distortion (e.g., constant negative group velocity) was derived and solved in [22]. The main idea originates from the fact that distortion in communication systems emerges from a nonconstant group velocity v_g := ∇_k ω. Since v_g is a strong function of the dependence of the material response tensor K(k, ω) on both k and ω, dispersion-management equations can be derived for several applications. For example, it was proved in [22] that in simple isotropic spatially dispersive media with high symmetry, one may obtain exact solutions where the group velocity is constant over an entire frequency band. This happens because, while strong temporal dispersion is present (which alone causes strong distortion), the presence of optimized spatially dispersive profiles leads to a compensation of the distortion, resulting in essentially distortion-free channels. There is enormous potential for research in this exciting area, since most practical realizations of nonlocal metamaterials involve complex material response tensors and the relevant mathematics of dispersion engineering is still relatively underdeveloped.

3) Electromagnetic Metamaterials: As early as the 1960s, it was proposed that EM nonlocality can be exploited to produce materials with very unusual properties. For example, in [4], negative-refraction materials were noted as one possible application of spatial dispersion, where the path toward attaining this goes through controlling the direction of the group velocity vector. Since in nonlocal media power does not flow along the Poynting vector [2], new (higher-order) effects were shown to be capable of generating arbitrary group velocity profiles by carefully controlling the spatial and temporal dispersion profiles. Overall, the ability of spatial dispersion to induce higher-order corrections to power flow is a unique added advantage enjoyed by nonlocal metamaterials exhibiting spatial dispersion in addition to normal (temporal) dispersion. These extra degrees of freedom provided by space have been researched, reviewed, and highlighted in many publications, including for example [52], [73], [80], [102], [114], [121], [132], [145], [189], [190].
4) Near-Field Engineering, Nonlocal Antennas, and Energy Applications: Another interesting application of nonlocality in electromagnetic media is near-field engineering, a subject that has not yet received the attention it deserves. It was observed in [189] that a source radiating in a homogeneous, unbounded, isotropic spatially dispersive medium may exhibit several unusual and interesting phenomena due to the emergence of extra poles in the radiation Green's function of such domains. Both longitudinal and transverse waves are possible (dispersion relations), and the dispersion-engineering equations relevant to finding suitable modes capable of producing desired radiation field patterns are relatively easy to set up and solve. For example, by carefully controlling the modes of the radiated waves, it is possible to shape the near-field profile, including total confinement of the field around the antenna even when losses are very small, opening the door for applications like energy harvesting, storage, and retrieval in such media. This subject, however, has been explored only for simple materials so far, and mainly at the theoretical level [121]. On the other hand, the subject of far-field radiation by sources embedded into nonlocal media was investigated previously by some authors within the context of plasma domains [34]. Recently, it has been systematized into a general theory for nonlocal antennas with media possessing an arbitrary spatial dispersion profile [191]-[193]. However, no general theory exists for nonlocal media which are inhomogeneous. The superspace formalism proposed in this paper may help stimulate research in this direction.

X. CONCLUSION

We provided a general theoretical and conceptual investigation of nonlocal metamaterials aimed at achieving several goals. First, the subject was reviewed from a new perspective with the intention of introducing it to a wide audience, including engineers, applied physicists, and mathematicians. The various essential ideas behind EM nonlocality were viewed in a new light using an abstract field-response model in three dimensions. Next, the fine-grained topological microstructure of nonlocal metamaterials was explicated in detail. We introduced EM nonlocality microdomains and showed that they present an important structural topological feature of the physics of nonlocal media. After that, it was proved using differential topology that a natural fiber bundle structure serving as a source space can be constructed. The source fiber bundle was shown to have all the required properties of standard fiber bundles while faithfully reflecting the physics of EM nonlocality microdomains. Finally, using the technique of the partition of unity, it was proved that the source fiber bundle can be used to construct and compute the material response function over arbitrary microdomains. This was accomplished by building a bundle homomorphism to replace the material tensor linear operators of conventional electromagnetism. The new homomorphism is a generalization of linear operators and can in turn be discretized in the future using suitable methods available in computational topology.
The new fiber bundle superspace formulation suggests that EM nonlocality can be formulated in an alternative way compared with other existing methods that borrow heavily from the electromagnetism of local media. Most importantly, EM nonlocality forces us to consider an entire infinite-dimensional Banach space attached to each point of the conventional 3-dimensional space on which the material is defined. This extra or additional structure provides a natural explanation of why traditional boundary conditions often fail to account for the physics of nonlocal metamaterials. Moreover, the fiber bundle theory opens the door for several new applications, including the ability to understand the deep connection between topology and electromagnetism in engineered novel artificial media. Overall, the author proposes that future research in metamaterials will gradually require more extensive collaboration between engineers and mathematicians to explore the full consequences of this organic topology/electromagnetism relation.

A. Guide to the Mathematical Background

We provide an informal overview regarding how to read the mathematical portions of this paper and where to find detailed references that might be needed in order to expand some of the technical proof sketches provided in the main text. We emphasize that in this paper only the elementary definitions of 1) differential manifolds, 2) Banach and Sobolev spaces, 3) vector bundles, and 4) the partition of unity are needed to understand the mathematical development.

Differential manifolds. A differential manifold is a collection of fundamental "topological atoms," each composed of an open set U_i and a chart φ_i(x), which serves as a coordinate system, basically an invertible smooth map to the Euclidean space R^n. That is, locally, every manifold looks like a Euclidean space of dimension n. The collection of open sets U_i, i ∈ I, where I is an index set, covers this n-dimensional manifold. Since some of these open sets are allowed to overlap, the key idea of the differential manifold is that on the overlap region U_i ∩ U_j there exists a smooth reversible coordinate transformation function connecting the coordinates of the same point when expressed in the two (different) languages of the topological atoms U_i and U_j. The key concern of topology is how to propagate information from the local to the global. In this sense, differential manifolds present an elementary structure allowing us to model this process using the efficient technology of differential calculus. Only the basic definition of smooth manifolds is required in this paper, which can be found in virtually any book on differential or Riemannian geometry, e.g., see [8], [16], [19], [155], [158], [161], [164], [194].
Banach and Sobolev spaces. In Sec. IV, we introduced a Sobolev space over the open domain D instead of simply a Banach space. However, that was done mainly to simplify the technical development and in anticipation of future work. Indeed, in this paper the fiber bundle M is referred to just as a Banach bundle, not a Sobolev bundle, for the reason that all our essential results and insights apply to the more general concept of a Banach space, which contains Sobolev spaces as a special case. However, Sobolev spaces are easier to implement, and we only invoked here the key definition of the space itself. In particular, none of the other technical properties of Sobolev spaces are needed in the paper. However, since in the future the material bundle space M is expected to be used to construct solutions of Maxwell's equations in a new form, Sobolev spaces are projected to play the most important role, since they have proved very efficient in analysis. For the basic definition of Sobolev spaces and their applications to partial differential equations in mathematical physics and to the finite-element method in engineering, we recommend [158]. The subject of Banach manifolds is less commonly treated in the literature than finite-dimensional manifolds, but good concise treatments of the topic include [19], [164], [194].

Vector bundles. Fiber bundles, of which vector bundles are a special case, are now standard topics in mathematics (topology, geometry, differential equations), theoretical physics (quantum field theory, cosmology, quantum gravity), and applied physics (condensed-matter physics, many-body problems). For the major importance of vector and fiber bundles within the overall structure of modern fundamental physics, see [8], [16], [18]. In quantum field theory, gauge field theories use vector bundles as essential ingredients in the standard model of particle physics [16]. The increasing importance of methods based on quantum field theory in applications to condensed-matter physics has contributed to making knowledge of fiber bundle techniques useful and more widespread in physical and engineering research, e.g., see the area of the Berry phase and the associated gauge connection [24], [111]. The key idea of a vector bundle is to attach an entire vector space to every point on a base manifold D; each such vector space is called the fiber at that point. The tangent space of the manifold is the most obvious example of such a vector bundle. However, more complicated structures than finite-dimensional tangent spaces can also be encoded by the vector bundle concept. In this paper, we have shown that EM nonlocality can be modeled naturally by considering the Banach space of all fields on the microdomains based at a point in the material configuration space. Fiber bundles can then be seen as highly efficient and economic ways to encapsulate large amounts of topological and geometrical data, and they lend themselves easily to complex calculations. Very readable technical descriptions of vector bundles can be found in [16], [156], [161], [164].
Partition of unity techniques. These are somewhat technical tools used by topologists to propagate information from the local to the global, and they are quite handy and easy to apply. The main theorems allow moving from one topological atom to another by "gluing" them together using smooth standard domain-division functions. The technique was stated and used only toward the end of Sec. V to write the expansion (48) and can be skipped on a first reading of the paper. The partition of unity is usually taught in all topology and in some geometry textbooks, e.g., see [19], [156], [161], [164].

B. Computation of the Inverse Fourier Transform (70)

We start from the standard Fourier transform pair (79), where the spatial Fourier transform is defined by (14). The condition Im{γ(ω)} < 0 is due to the physical requirement that fields do not grow exponentially in passive domains [126]. We have also written |r − r′| instead of |r| in anticipation of the fact that the inverse Fourier transform will produce a Green's function.

Our main task now is to make a proper choice of the correct sign when taking the square root of γ². Let us write γ² = Re γ² + i Im γ², where both parts follow from (60). On the other hand, γ can also take the form γ = γ′ + iγ″, where both γ′ and γ″ are real. The goal now is to derive expressions for γ′ and γ″ in terms of Re γ² and Im γ² with the correct sign, since the square root is a many-valued function.

To accomplish this, we use the following elementary theorem: Let x, y, a, b ∈ R. Then the square root of x + iy is given by

√(x + iy) = ±(a + ib), (82)

where the following expressions hold:

a = √((√(x² + y²) + x)/2), b = sgn(y) √((√(x² + y²) − x)/2). (83)

It remains now to find the correct signs. From (79), the condition γ″ = Im{γ(ω)} < 0 must be satisfied. Therefore, we choose the negative sign in (82). The final expressions become γ′ = −a, γ″ = −b, and after inserting γ′ and γ″ into (79), the required relation (70) is obtained.
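The short Python check below verifies the square-root identity (82)-(83) as reconstructed above, together with the selection of the root with negative imaginary part. The test values are arbitrary; only the algebraic identity is being exercised.

```python
import math
import random

def sqrt_split(x, y):
    """Real-arithmetic square root of x + iy per (82)-(83)."""
    m = math.hypot(x, y)
    a = math.sqrt((m + x) / 2)
    b = math.copysign(math.sqrt(max((m - x) / 2, 0.0)), y)
    return a, b

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    a, b = sqrt_split(x, y)
    assert abs(complex(a, b) ** 2 - complex(x, y)) < 1e-12
    # pick the root with negative imaginary part, as required for decay
    g = complex(a, b) if b < 0 else -complex(a, b)
    assert g.imag <= 0
```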
Fig. 2: The micro-topological structure of nonlocal metamaterial systems includes more than just the 3-dimensional spatial domains D_n, n = 1, 2, .... It is best captured by classes V(D_n) composed of various open sets V_r ⊂ D_n based at each point r ∈ D_n. On every such subset a vector field is defined, representing the EM excitation field. The collection of all vector fields on a given set V_r gives rise to a linear topological function space F(V_r). The topologies of the base spaces D_n, the nonlocal micro-domains V_r, and the function spaces F(V_r) collectively give rise to a total "macroscopic" topological structure (superspace) considerably more complex than the base spaces D_n.

Fig. 3: The three-step process of constructing micro-coordinate representations of EM nonlocality, starting from the nonlocal microdomain set and ending with the partition of unity on the MTM superspace.

Fig. 4: An example illustrating the various topological microstructures involved in modeling a generic nonlocal material. The microdomains V_r1, V_r2 ∈ V(D) are open sets and belong to the nonlocal microstructure of the MTM D. The open sets U_i and U_j are the corresponding coordinate sets, and the domains of the partition-of-unity functions {ψ_i}, i ∈ I, are subordinated to V_r1 and V_r2, respectively. The compact sets S_i and S_j are defined by S_i := supp{ψ_i(r)} and S_j := supp{ψ_j(r)}.

Fig. 5: Topological coarse-graining model for an inhomogeneous nonlocal material domain D (left) in comparison with a conventional coarse-graining process (right). The topological microdomains constitute an open cover of the domain in the sense that D = ∪_{r∈D} V_r, which is the obvious generalization of (22). Note how the topological approach allows overlapping microdomains, e.g., between microdomains V_r2 and V_r3. The technique of the partition of unity will take care of electromagnetic data "repeated" in such regions of overlap by assigning proper weights that always sum to unity at each point r ∈ D.

TABLE I: A summary of the various spatial and temporal scales involved in understanding and designing generic nonlocal metamaterials with exciton-polariton resonance-type nonlocality.

Fig. 7: Frequency dependence of γ′ and γ″ for several values of the exciton decay rate Γ. Here, m_e* = 0.9 m_e, where m_e is the electron mass. The exciton transition frequency is ω_e = 3.7977 × 10^15 rad/s (ℏω_e = 2.5 eV).

Fig. 8: (a) Comparison between the real parts of the long-range decay 1/|r − r′| of the excitonic nonlocal domain Green's function ε_NL(r − r′) and its full spatial dependence, including the exponential decay factor exp(γ″|r − r′|), for γ″ = 1 nm⁻¹ and γ″ = 2 nm⁻¹. (b) Frequency dependence of a_r, the radius of the topological microdomain B(r, a_r) centered at some generic point r in the nonlocal excitonic material domain D, for several values of the exciton lifetime Γ⁻¹. Here, m_e* = 0.9 m_e, where m_e is the electron mass. The exciton transition frequency is ω_e = 3.7977 × 10^15 rad/s (ℏω_e = 2.5 eV).

Fig. 9: An abstract representation of the fiber bundle structure behind Fig. 2, based on replacing the spaces D_m and D_n by the corresponding Banach bundle superspaces M_m and M_n, respectively. The thick horizontal curved lines represent the base spaces D_m and D_n, while the wavy vertical lines stand for the fiber spaces X_m and X_n attached to each point in the corresponding base manifolds. The double discontinuous lines at the "junction" of the two base spaces D_m and D_n indicate the joining together of the two vector bundles M_m and M_n.
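To illustrate the overlap-weighting described in the caption of Fig. 5 (weights that always sum to unity wherever microdomains overlap), the following small sketch builds a one-dimensional partition of unity over overlapping microdomains. The bump profile, centers, and widths are invented for the example and are not taken from the paper.

```python
import numpy as np

def bump(t, center, width):
    """Smooth bump compactly supported on (center - width, center + width)."""
    s = (t - center) / width
    out = np.zeros_like(t)
    inside = np.abs(s) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
    return out

t = np.linspace(0.0, 1.0, 201)
centers = [0.2, 0.5, 0.8]          # hypothetical microdomain centers
raw = np.array([bump(t, c, 0.35) for c in centers])
total = raw.sum(axis=0)
mask = total > 0                    # points covered by at least one domain
psi = np.where(mask, raw / np.where(mask, total, 1.0), 0.0)
# on the covered region, the partition-of-unity weights sum to 1
print(np.allclose(psi[:, mask].sum(axis=0), 1.0))
```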
28,704.2
2021-01-28T00:00:00.000
[ "Physics" ]
A qualitative investigation of the digital literacy practices of doctoral students

Academic libraries are currently part of a landscape of rapid growth in digital technologies and electronic resources, and they have responded by developing their research services. Some of the most specialised and complex research in higher education is conducted by doctoral students, and the effective use of digital tools and skills is often crucial to their research workflow and success. The need for digital literacy has been further emphasised during the global pandemic of 2020-21, which has required the maximisation of online working and digital skills to ensure the continuation of education, services and research productivity. This paper presents the findings of a qualitative research study in a UK university exploring factors influencing differences in the digital literacy skills of doctoral students. The literature included has been updated, as digital skills and technologies are a constantly changing area of research.

Due to the complex nature of doctoral research, it was difficult to draw definite conclusions about the many factors which influence the digital literacy practices of research students. Students interviewed in the study discussed their approaches to and understanding of information, digital and media literacy (Jisc, 2016), but the influence of demographic factors such as age, discipline and gender could not easily be evaluated. All students in the study appeared to be under time pressure and required a high level of organisation; this was assisted by digital skills and proficiency and by access to robust hardware and software. They believed they were largely self-taught, and some required appropriate training at the point of need to increase their research productivity. This paper will explore how evidence-based practice and engagement may be used to understand the digital practices of doctoral students and to inform the development of research services within academic libraries.

Introduction

Digital technologies, resources and skills are prevalent in higher education and are increasingly employed during the research lifecycle. Digital skills have become even more important to ensure continued productivity in the context of the global pandemic of 2020-21. Some of the most complex and specialised research is conducted by doctoral students; however, there is not always a deep and clear understanding of their utilisation of digital skills and technologies (Dowling & Wilson, 2015). Information literacy often refers to the ability to retrieve, evaluate, process, manage and disseminate research in all formats (CILIP Information Literacy Group, 2018). This increasingly includes online content such as open access repositories, electronic articles, data and online archives, which means that digital literacy is strongly inter-related with information literacy. Doctoral digital literacy may focus on the appropriate deployment of online resources (such as electronic journals and databases), technologies and digital skills to support the entire research lifecycle. This paper is based on experiences of being a practitioner researcher and includes new and updated literature to complement the original empirical study, in recognition of the changing nature of digital technologies. It includes findings from the author's MA Academic Practice dissertation entitled Which factors may contribute to differences in the digital literacy skills of research students?
It will outline the research objectives, the methodology and approach employed, some of the insights and outcomes, and how these may inform the provision of academic library research services.

Context

The research landscape is constantly changing and there is an increasing move towards open access content such as institutional and data repositories, providing a wider range of digital content to researchers. Digital skills and technologies are widely used in research, for example, to conduct literature reviews, to store and manage search results, and to disseminate research outputs and datasets. In higher education, many universities have increased their emphasis on research (Bent, 2016), for example by offering funding such as doctoral scholarships and graduate teaching contracts. In some disciplines such as Social Sciences and Humanities, students may have to fund themselves. In the past decade, the UK has tended to have a structured PhD programme, often lasting between 4-7 years, which usually has an initial registration, an upgrade process, and the transfer to a full PhD route (Pyhältö et al., 2020). Employing research skills and conducting empirical research in an increasingly digital environment are important aspects of doctoral studies (Gouseti, 2017). While doctoral students may not be the largest cohorts, they produce specialised and unique research outputs and encourage innovative practices in universities (Jisc, 2020).

Institutional context

The research study was undertaken at a London-based university in the UK. Subjects taught and researched in the university include Arts, Humanities and Social Sciences, Health Sciences, and Business Studies. At the time of the study, the university had approximately 20,000 students, 700 of whom were doctoral students, and some of their research was interdisciplinary, conducted across different departments and research centres.

Framework used to inform the research

The research study was informed by some parts of the Jisc Building digital capability: example researcher profile (2016); this will be referred to as the Jisc Researcher capability profile (2016). Researchers can use the profile to consider their individual digital practices and development requirements. It was thought that the framework included certain online skills and capabilities which were relevant to doctoral digital literacy. In particular, three areas of the framework were used: information and communication technology (ICT) skills; information, data and media literacy skills; and online communication and participation. These informed the literature review and the qualitative interview aspects of the research, such as the framing of some of the interview questions. The use of social media and the importance of an online research presence were discussed in the interview part of the study.

Rationale for the research

The rationale for the study was an attempt to reach a greater understanding of factors which may influence the digital literacy skills of doctoral students. Barry (1997) believed that, due to the intensity of their research, doctoral students have the greatest need for information and therefore require high-quality information retrieval and management skills. The research topic was selected as it related to the author's role in a university library, which involves supporting research students and offering them training on areas such as literature searching, using reference management software and creating an online research profile.
Following informal conversations with doctoral students from various disciplines, it appeared that there were differences in how the students approached and utilised digital resources and technologies to support their research processes. The research study attempted to explore why this might be the case. The intention was to use the evidence-based research findings to inform professional practice in Library Services. This included developing training and online content to assist doctoral students with their research workflow. It was also intended that the study could contribute to knowledge and understanding of digital literacy in relation to research students in the fields of Library and Information Science and of Education.

Literature review

The literature review provided useful insights into digital literacy definitions, and previous research and frameworks such as the Vitae Researcher Development Framework (Vitae Careers Research and Advisory Centre, 2021) were useful in showing how this related to doctoral students. It was also insightful to consider the types of digital tools and skills which can support different stages of the research lifecycle. It is, however, difficult to define digital literacy. Gilster (1997), as cited in Lankshear & Knobel (2008), wrote about digital literacy as an overarching approach and conceptualisation of the effective use of digital skills and technologies rather than a prescriptive list of core abilities. Martin (2006, p.151) further developed Gilster's concept, and his definition is useful for this research study as it also applies to the concept of conducting research: Digital literacy is the awareness, attitude and ability of individuals to appropriately use digital tools and facilities to identify, access, manage, integrate, evaluate, analyze and synthesize digital resources […]. Digital literacy, in a wider context, may be defined as a skill set or a range of competencies which enable individuals to live in and contribute to the digital society (Jisc, 2016; List, 2019). In the doctoral context, this may involve becoming effective online researchers through the application of tools, skills and workflows (Ince et al., 2018). It has also been described as a process of continuous change and improvement, which is relevant to the doctoral context (Soltovets et al., 2020). Research students often require many digital skills and tools to assist in the process of conducting their research. Information and communications technology (ICT) skills are useful to enable the researcher to identify and utilise appropriate tools and software to support their research lifecycle. Usage may depend on how much time the researcher has to explore tools, their awareness of suitable products, their previous experience, and perceived success in using certain technologies. Subsequent to the study, a researcher digital insights survey (Jisc, 2020) found that, above all else, researchers would like specialised training, support and access to appropriate software to support their digital skills. In terms of factors which may influence the use of digital tools and skills by research students, Collins et al. (2012) identified firstly supply factors and secondly demand-led ones. Supply in this context means the electronic journals, databases, apps, social media tools, software and online tools used to conduct and manage the research process. The demand-led factors were described as those such as age, gender, discipline, length of research career, etc.
These may contribute to learned and habitual methods of conducting research within a disciplinary context (Green & Macauley, 2007). It is likely that researchers will use digital tools which save them time or enable them to work efficiently. While in many respects digital technologies provide many solutions to conducting online research, they also present challenges because they are constantly changing and can be disruptive to research processes (Laurillard, 2008). It is possible, therefore, that if students do not feel confident in mastering and, to an extent, experimenting with digital technologies, this may affect their digital literacy or capabilities. Demographic factors such as discipline, age and gender were identified in the literature. Similarly, digital factors such as access to training and support, confidence in using technologies, information, digital and media literacy, and online communication and participation using social media were also identified. However, it was not possible to differentiate between factors in respect of their perceived importance or to measure the influence of aspects such as supervision, peer support and training opportunities. To address this, some original, empirical research was conducted. It was thought survey and qualitative interview data could provide a rich source of insight into individual and thematic digital practices.

Design and pilot testing

The aim of the research study was to identify and explore factors which may contribute to differences in the digital literacy skills of doctoral students. Prior to conducting the empirical research, ethical approval was obtained from the appropriate university ethics committee. This ensured that participants were aware of the research purpose along with their anonymity, data security and right to withdraw from the study at any time. Participant information sheets were provided and informed consent was also obtained before the interviews. Following good practice, the survey was pilot tested and feedback was sought from colleagues, academics, and a professional researcher. The pilot testing was very useful in identifying and rectifying potential technical problems with the survey. Pickard (2013) emphasises the importance of consistency and clarity in survey design. Technical adjustments were made to allow multiple responses to some questions, other questions were reworded, and an indication of the number of questions and the approximate completion time was given at the start.

Participants

The study was a small-scale one, and doctoral students in two Schools (250 students from Arts, Humanities and Social Sciences, and 100 from Health Sciences) were invited to participate voluntarily via email. The survey response rate was approximately 8% (27 students), although because the email was forwarded by research administrators to preserve the confidentiality of participants, there was no control over the distribution. These Schools were chosen as it was anticipated that their doctoral students might have different disciplinary approaches to digital skills and tools. For example, Carpenter (2012) indicated that 90% of doctoral students in arts, humanities and social sciences mainly researched independently and experienced a sense of isolation. Health students may be from a practitioner background such as the National Health Service (NHS), and some are mature students returning to education.
As their main discipline, respondents researched across 11 different subjects including International Politics, Music, Food Policy, and Nursing and Midwifery. 26% of respondents were researching Psychology and 15% Sociology. The respondents' ages ranged from the 24-30 bracket up to the 50-59 bracket. 75% of the respondents were female and almost 80% were full-time students. The large range of subjects meant that these samples were too small to provide a basis for statistical analysis.

Research methods (survey and interviews)

The anonymous online survey (see Appendix 1) was adapted with permission from a University of Greenwich digital literacy survey which seemed to cover relevant themes. The survey was a mixture of multiple choice, Likert scale, and free text questions. These were adjusted to suit doctoral participants and the subjects they were researching. The survey was completed by 27 doctoral students in the university and was used to explore literature review themes such as demographic factors, digital tools used by research students, their confidence in using technologies, and their sources of support and training needs. The participants were at different stages of their research and 70% were in year three or above of their study. Semi-structured interviews of 40-45 minutes were conducted with eight participants recruited from the survey (see Appendix 2 for the structured questions). The semi-structured approach meant that all participants answered the same core questions, but the interviews were flexible, with follow-up questions and discussion as appropriate. The interviews explored the themes of: defining digital literacy; exploring disciplinary research practices; student research methods; digital tools used in research; and university, supervisory and peer support. The interviewees ranged in age from the 24-30 bracket to the 50-59 bracket; 50% were male and 50% were female.

Analysis

The survey and interviews tested whether factors such as disciplinary research practices, gender and age appeared to be more important than general factors such as access to training opportunities, ICT skills, digital, data and media literacy, and the online communication and participation of researchers. The survey was designed and analysed using a subscription version of SurveyMonkey, and the data was further analysed in Excel to create tables, charts and graphs. The interviews were recorded on a dictaphone and then transcribed manually, thereby resulting in familiarity with the data. The transcriptions were then analysed thematically using a four-stage qualitative analysis method (Bryman, 2016). This involved reading and searching the transcripts, identifying key themes, and coding and analysing the data. Responses were grouped, and excerpts from each interview were used to provide evidence in support of the themes. The factors identified in the literature review were synthesised with the empirical research findings to address the research question.

Limitations of the study

The study was a small-scale one and limited to doctoral students in Social Sciences, Arts and Health Sciences in a London university. The response to the survey was 8% (27 respondents) and 8 students agreed to be interviewed; therefore it was not possible to draw definitive conclusions from the small data sample. The anonymity of the survey meant it was not possible to clarify any of the answers further, and 75% of the survey respondents were female, so it was difficult to draw gender-based conclusions.
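The four-stage analysis described above was carried out manually. Purely as an illustration, a sketch like the following could mirror the coding-and-tallying stage in software; the theme keywords below are hypothetical and are not the study's actual codebook.

```python
# Illustrative only: toy keyword-based coding of interview transcripts.
# The study's thematic analysis was done by hand; this simply shows how
# a coding tally could be reproduced programmatically.

from collections import Counter

THEMES = {  # hypothetical codebook, not from the study
    "self-teaching": ["self-taught", "taught myself", "self-discovery"],
    "peer support": ["fellow students", "colleagues", "community"],
    "training needs": ["training", "workshop", "session"],
}

def code_transcript(text):
    """Count keyword hits per theme in one interview transcript."""
    lowered = text.lower()
    return Counter({theme: sum(lowered.count(k) for k in kws)
                    for theme, kws in THEMES.items()})

transcripts = ["I am mainly self-taught ...", "There is a strong community ..."]
totals = sum((code_transcript(t) for t in transcripts), Counter())
print(totals.most_common())
```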
Research findings and discussion

Doctoral students face several challenges in regard to their digital literacy in a complex online environment. They are often under time pressure and conducting specialised research, requiring them to have excellent literature searching and information management skills. Large volumes of research data may be generated and need to be collected, analysed and stored appropriately (research data management). Researchers may use a variety of research methods, such as qualitative, quantitative, phenomenographic or mixed methods, for example. Due to their individual research processes and the length of their course, they might not have access to training at the appropriate point of their research. Research students have complex digital literacy needs, which makes it difficult to identify all of the factors which may influence these. Disciplinary practices may influence the research methods adopted. All of the interview participants researched in a cross-disciplinary environment and believed that this influenced their research methods and choice of digital tools to some extent. The most popular technologies used by all of the survey respondents were email and online conferencing tools, and the most popular electronic resources were online journals and databases. Factors which seemed to influence all of the students interviewed were using ICT skills to assist with time management and organisational skills at the point of need, and experiences of and confidence in using technology. The largest debate in the interviews was around the use of social media tools. Half of the students found them very useful, but half thought they could be distracting and time-wasting. A lack of training on areas such as research data management, research methods, and data analysis made the students' research more challenging.

Discipline and interdisciplinarity

One finding from the interviews was that all of the students were conducting interdisciplinary research. Collins et al. (2012) concluded that, in their use of digital technologies, most researchers continued to employ the tools and skills that they had used in their previous academic careers. One student interviewed described how the discipline influenced the choice of research methods: 'Yes, the discipline does actually affect things because I'm doing more qualitative research broadly speaking, so [...] I am just doing interviews and content analysis really' (Student 8). In this context, the researcher may adopt a hybrid approach to digital technologies and employ elements from different disciplines. This increases complexity but may enrich the research experience: 'I think that makes your whole research richer as well when you know you can tackle various disciplines within one study' (Student 6). The students viewed interdisciplinary research positively and believed it offered new possibilities, depth and collaborative digital approaches, and they felt it was becoming increasingly common.

Access to training and support and peer learning

Another finding was that participants had no access to a university-wide doctoral training programme, meaning '[...] structurally and institutionally there's not all necessarily the support there that we need' (Student 3). A lack of training was a challenge to all interviewees to some extent. They required training at appropriate stages of their research lifecycle to increase their productivity in areas such as research methods, research data management, and data analysis.
Only 7% of survey respondents strongly agreed they had been supported by their university in using digital technologies. This might be because of the individual and specialised nature of their research approach. Three quarters of the survey respondents and all of the students interviewed seemed to be in favour of having a general university doctoral training programme: 'I think it might be useful actually if the university did do training sessions on those kind of things because there is a wide range of PhD students who would benefit from that' (Student 1). This correlates with a subsequent researcher digital experience survey by Jisc (2020), which indicated that above all respondents wished for specialised digital skills training, support and access to appropriate software.

Library Services support

Library Services within the university offered a programme of workshops aimed at research students and staff. Research by Gouseti (2017) indicated that doctoral students were open to using new technologies, especially when offered training and support from their university and from Library Services. Regarding training offered on digital resources and tools, interviewees were all aware of these training opportunities. A high proportion of the survey respondents (63%) said they consulted Library Services about their digital technology enquiries, although the nature of these enquiries was not clear. 'I know support is there for example managing your work, for example I think the Library has sessions on sort of managing the research process, managing your research sources' (Student 3). Students valued Library Services support in workshops and individual, tailored appointments for literature searching and similar tasks, and felt Research Librarian roles added value and assisted with current awareness and search techniques and strategies. 'I think the support I have had from the Library has been very good […], I have been able to bring my own issues and have been able to talk through my issues, I have been very impressed with the Library support' (Student 7). The finding was an encouragement to provide individual library support for doctoral students and to be aware that they research in specific contexts and use different research approaches.

Self-teaching and peer learning

From the survey data, 85% of students believed that they had mainly taught themselves digital skills and technologies through practice and experience (Sharpe et al., 2010). Allan (2010) emphasised the importance of the supervisory relationship, but in the study supervisors were not necessarily the main source of digital literacy support. In the interview data, all of the students felt that they had to informally teach themselves to use technologies at certain points of their research. Student 5 spoke of 'self-discovery' and Student 4 of being 'self-taught'. 'My feeling is that I am mainly self-taught I can't remember that I really got teaching, that anyone ever taught me that this would be useful and this is how you use it' (Student 4). Peer learning was very important, and 78% of the survey respondents asked fellow students for advice on digital technologies and valued their input and expertise. 'There is a sense of community, definitely [...] There is definitely a very, very strong willingness of people to share knowledge' (Student 7). Interview participants all valued support from peers and colleagues (including those from other universities).
This may be because they researched in specialised areas and wished to try new digital approaches learned from their peers (Jisc, 2020).

ICT, information, digital and media literacy and online communication skills

ICT skills

The Jisc Researcher capability profile (2016) refers to information and communications technology (ICT) skills. In the interviews, students expressed that they changed their approach to digital technologies based on specific needs such as collecting and managing research references, and data collection and analysis. Persistence and resilience in using technologies were a very strong theme mentioned by most students, in that they experimented with new technologies within their time constraints. IT skills and access to good quality equipment were regarded as useful. Student 5 believed 'a researcher is not just a gatherer but a hunter gatherer in a way but also a selector […]' and requires 'IT skills, a pathological attention to detail'. A key theme for all of the researchers interviewed was time management and organisational skills. Digital technologies which assisted with these at the point of need are likely to be more popular. Generally, students interviewed perceived they were less busy at the start of the doctorate and that this was a good time to learn digital skills and technologies, but they may not have done so. One interviewee mentioned, however, that using digital technologies may save time but was no substitute for being a competent researcher. 'The thing is technology is good if it's kind of time saving in terms of doing the analysis better. I'm not convinced that there are, if you like, software systems or processes will essentially make the thinking process better' (Student 8). Overall, all interviewees regarded information technology skills and access to robust technology and reliable software as beneficial to their research process. Jisc (2016) highlighted the importance of information, digital and media literacy (using digital resources and software across different media) as a research and current awareness tool and also in building an online research reputation. In the research study, 93% of survey respondents felt confident or very confident about performing literature searches. It was quite reasonable to assume that research students would be confident in constructing a complex search strategy for their online literature searching. Some students interviewed had attended individual appointments and workshops with Research Librarians and believed they had benefited from these and, as a result, learned new search techniques and literature searching skills. 73% of survey respondents would like research data management training, which implied that there was potentially a skills gap in that area. 78% of survey respondents were researching full time, and 72% of respondents believed their digital skills could be improved. 100% of part-time students believed their digital skills could be enhanced. They were likely to use electronic materials and tools and, as they may have attended the university less frequently, may have had fewer face-to-face training opportunities. In the area of data collection, there was a lack of confidence in using the data analysis tools SPSS and NVivo. Only 7.5% of survey respondents felt very confident about using SPSS and 15% about using NVivo. This may be due to the specific research methods being used or possibly a lack of training.
Media literacy skills seemed generally of more importance to respondents who were researching subjects such as Journalism or topics requiring either a high level of current awareness or use of multimedia news or digital archival material.

Online communication, participation and reputation

Jisc (2016) referred to the digital communication and participation of researchers, including the use of social media such as Twitter and blogs. In the study, attitudes to developing a social media presence as a researcher were one of the structured interview questions (see Appendix 2). Attitudes to social media in the interviews were very varied. Some students valued it highly and thought it was very useful; others did not wish to engage with it at all: 'But there is a whole PhD community going on on Twitter. Where people connect and follow each other' (Student 1). Some students felt social media had benefits for employability and networking, especially LinkedIn. It was felt there was a certain pressure to use social media in academia to network and build a reputation. 'I would say in general to have a presence is very important because […] in academia lots of things depend on contacts and how visible you are' (Student 4). Sumner (2012) and Bennett and Folley (2014) identified a level of anxiety amongst researchers in using social media. From the interview transcripts, it seemed that this was for a variety of reasons. Twitter was regarded by some as distracting, overused, confusing and, in some cases, less prestigious and credible than academic sources. Facebook was sometimes used privately for social and emotional support. Lupton (2014) identified some of the barriers to using social media, which can be around prestige, clarity of purpose and a perceived lack of quality control. Gouseti (2017) emphasised the importance of critical thinking, digital identity and long-term networking in a doctoral context. One student had intellectual property concerns about putting unfinished doctoral research on social media in case other researchers appropriated the ideas. 'With research I think it's actually different because you don't necessarily want to put your research into the public domain prior to actually completing because you might be worried about the contribution side of things' (Student 8). Student 2 also expressed the idea of being careful online, which correlated with the idea of digital well-being in the Jisc Researcher digital capability profile (Jisc, 2016). Overall, attitudes to social media use were the most polarised topic discussed in the interviews.

Applying the research to professional library practice

It appeared from the research that a bespoke, university doctoral training programme with specific departmental input as appropriate would be highly desirable. Suggestions included having a doctoral training centre at the university; having a cross-university training programme to increase the breadth of possible opportunities; and offering departmental research methods training. In the past year, City, University of London has established a new Doctoral College, online inductions, a university doctoral training programme and events during the year. Enhancing communication channels, such as a university events, training and current-awareness portal, would be useful and time-saving for doctoral students. General, informal events such as social events and inviting external speakers were thought to be useful from the point of view of networking, sharing expertise and socialising with other researchers.
Research methods, data collection and analysis are very important in research, and university training on these and on research data management would be useful to researchers. Students indicated that they conduct multi-disciplinary research and employ different research methods (such as qualitative and quantitative); therefore a flexible approach is inclusive. Assisting students in the use of digital tools and skills to enhance their time management, organisational skills and confidence would benefit them. Attitudes towards Library Services online guides, staff, workshops and individual appointments (for example, for literature searching and systematic reviews) were positive and appreciative. Students spoke of the need for space and time to ask questions in a non-judgemental setting and to learn new techniques, for example creating search alerts. This implies that continuing to develop a range of training opportunities to suit different research approaches is worthwhile. One student spoke of the perception that research library staff conducted research themselves, acquired expertise and then shared their tips (Pickton, 2013). The research-informed expertise of library staff was regarded as positive in terms of enhancing student productivity and effectiveness. It is clear that research student needs are complex, but in the training offered by university Library Services it is useful to continue to offer different approaches such as individual appointments and workshops, and more use is now made of video-conferencing software to assist students. More recently, promoting library workshops through the Doctoral College and offering them online appears to have increased attendance. Potential topics for future training identified in the survey included reference management, using apps for research, impact factors, and citation searching. As students indicated they are largely self-taught in the area of digital skills, access to a range of online courses, such as IT skills and data analysis software, may assist them in training themselves at the appropriate points in their research lifecycle. Read for Research, a patron-driven acquisitions scheme for book and e-book purchasing at City, University of London, aimed at doctoral students and researchers, has proved very popular and is regarded as an example of good practice (Bent, 2016). Students appreciate a personalised service and feel valued by contributing to the development of research collections. Since the pandemic of 2020-21, e-book purchasing has been prioritised to allow equality of access. The area which provoked most discussion at the interviews was attitudes to social media tools. Students tended to be strongly for or strongly against using these tools. Awareness sessions on how the tools could be used to create an online research profile, demonstrating the potential academic or professional benefits to researchers, would be useful. Since the study, such workshops have been delivered, for example with the School of Law. Advocacy for doctoral students has been assisted by presenting the research findings at conferences and writing articles for publication in academic journals to raise awareness of doctoral students' digital skills and their approach to digital technologies in their research process. Future plans include delivering workshops and content on the research process, incorporating video content covering research methodologies, and looking at the user experience and journey mapping of doctoral students in this context.
Reflection

The study offered a unique insight into the use of digital technologies by doctoral students and the way they conduct their research, their training needs and supervisory arrangements. The research data has also informed the development of training and support for research students to enable them to develop their digital skills and online research presence in the future. Since the pandemic of 2020-21, there has been an increasing need for students to rely on digital technologies and resources to continue their research. As a practitioner, it was incredibly worthwhile to explore attitudes to digital skills and technologies and to have the opportunity to survey and interview some doctoral students. The students engaged really well with the survey and were generous both in offering their support for the research project and in giving their time to be interviewed.

Further research

Although demographic factors such as gender, age and research discipline were included in the study, it was difficult to evaluate these as 75% of the survey respondents were female. It is possible that the gender of a student may influence their choice of discipline and therefore indirectly their use of digital technologies. It might be interesting to follow this up in a future study. Further research could also be conducted on ways of assisting research students to use digital technologies to save time and increase efficiency in their research processes and to increase their confidence. A larger study, possibly across multiple universities, would provide a more comprehensive data sample, and any findings could be compared to this study.

Conclusions

The main conclusions of the research were that several factors might influence the adoption of digital technologies and skills by research students. The complexity of doctoral research meant it was not possible to generalise about these factors, and the individual requirements, disciplinary practices and research methods of students were also relevant. The acquisition of research, information, digital and media literacy skills seemed more important than demographic factors such as age and gender. From the research findings, it appeared that all students in the study were affected by factors such as IT skills, time management and organisational skills, and access to and competence with technologies. These skills are often associated with positive outcomes and completion (Lindsay, 2015). Almost all of the study participants would have benefitted from training at appropriate points in their research, for example data analysis after data collection. Some students were influenced by disciplinary or multidisciplinary practices, but it was not entirely possible to assess the significance of gender and age in the study. In the interview data, attitudes to social media use were varied; some students found it very useful, others believed it was a distraction from research.

Appendix 1 (extract): survey questions

Which of the following training would you find useful? (Please tick all which apply.) Building an online research profile (e.g. ResearchGate)/ Social media tools (e.g. blogging, Twitter)/ Using apps to assist in your research/ Reference management software tools/ Research data management/ Using mobile database apps (e.g. EBSCOhost mobile)/ Data analysis tools (please specify).

11. Which of the following do you use to manage references in your research? Creating an online research profile for yourself

16. How confident do you feel about using the following for data collection/analysis?
Digital camera, audio recorder/ Digital audio visual conferencing software/ Transcription software/ Quantitative data analysis software (e.g. SPSS)/ Qualitative data analysis software (e.g. NVivo)/ Digital archives, records/ Field, lab recording tools/ Survey software (e.g. SurveyMonkey).

17. Who do you ask for help with using digital technologies? (Please tick all which apply.) Your supervisor/ Your department/ Library Services/ IT Services/ The Graduate School/ Learning Enhancement and Development Team/ Your workplace colleagues/ Fellow research students/ Online, self-taught/ Other (please specify).

18. I would be prepared to be contacted to discuss attending a follow-up interview (of no longer than an hour) about digital literacy. This would be semi-structured but flexible and allow you to reflect on your own digital literacy. Yes/No. Please give your email address if you are willing to be contacted about a possible follow-up interview.
8,264.2
2021-12-07T00:00:00.000
[ "Education", "Computer Science" ]
Clozapine-Induced Mitochondria Alterations and Inflammation in Brain and Insulin-Responsive Cells

Background

Metabolic syndrome (MetS) is a constellation of factors including abdominal obesity, hyperglycemia, dyslipidemias, and hypertension that increase morbidity and mortality from diabetes and cardiovascular diseases; it affects more than a third of the population in the US. Clozapine, an atypical antipsychotic used for the treatment of schizophrenia, has been found to cause drug-induced metabolic syndrome (DIMS) and may be a useful tool for studying cellular and molecular changes associated with MetS and DIMS. Mitochondrial dysfunction, oxidative stress and inflammation are mechanisms proposed for the development of clozapine-related DIMS. In this study, the effects of clozapine on mitochondrial function and inflammation in insulin-responsive and obesity-associated cultured cell lines were examined.

Methodology/Principal Findings

Cultured mouse myoblasts (C2C12), adipocytes (3T3-L1), hepatocytes (FL-83B), and monocytes (RAW 264.7) were treated with 0, 25, 50 and 75 µM clozapine for 24 hours. The mitochondria-selective probe TMRM was used to assess membrane potential and morphology. ATP levels from cell lysates were determined by bioluminescence assay. Cytokine levels in cell supernatants were assessed using a multiplex array. Clozapine was found to alter mitochondria morphology, membrane potential, and volume, and to reduce ATP levels in all cell lines. Clozapine also significantly induced the production of the proinflammatory cytokines IL-6, GM-CSF and IL-12p70, and this response was particularly robust in the monocyte cell line.

Conclusions/Significance

Clozapine damages mitochondria and promotes inflammation in insulin-responsive cells and obesity-associated cell types. These phenomena are closely associated with changes observed in human and animal studies of MetS, obesity, insulin resistance, and diabetes. Therefore, the use of clozapine in DIMS may be an important and relevant tool for investigating cellular and molecular changes associated with the development of these diseases in the general population.

Introduction

This study addresses the cellular and molecular basis of a highly significant public health problem: metabolic syndrome (MetS). MetS is a constellation of factors including abdominal obesity, hyperglycemia, dyslipidemias, and hypertension that increase morbidity and mortality from diabetes and cardiovascular diseases [1,2,3,4]. According to the most recent National Health Statistics Reports, approximately 34% of the adult population in the U.S. meets the criteria for having MetS [5]. Recent estimates indicate that, independent of cardiovascular disease, risk factors associated with MetS cost an estimated $80 billion annually [6] and are projected to increase between 59% and 157% by 2020 [7]. Because of this significant health problem and its economic burden, there is a great need to better understand the cellular and molecular basis of MetS. There is an abundance of studies investigating MetS, obesity, and diabetes in human and animal model systems. These models are complex, heterogeneous systems representing multiple cellular, biochemical, molecular, and physiological pathways. In this study, we utilize clozapine as a tool for studying drug-induced metabolic syndrome (DIMS) in cultured mammalian cell types that are typically associated with MetS.
Cultured cell models provide a straightforward system for detecting key cellular and molecular changes that may be associated with MetS. Clozapine is an atypical antipsychotic that is highly efficacious for the treatment of schizophrenia. However, along with most atypical antipsychotics, clozapine has been found to cause DIMS, giving rise to adverse metabolic side effects such as obesity and increased diabetes risk [8,9]. The underlying biological causes of clozapine-associated DIMS are unknown. There is a growing consensus in the obesity and diabetes fields that understanding the mechanisms responsible for the adverse metabolic effects of atypical antipsychotics may shed important light on the origin of MetS, and this is the rationale for using this model in the current study. There are three interrelated hypotheses that have been proposed to explain antipsychotic-induced metabolic side effects. First, these drugs negatively affect the proper functioning of mitochondria [10,11,12,13,14]. Specifically, these drugs may alter the function of key metabolic enzymes and thus negatively affect carbon metabolism and/or electron transport during oxidative phosphorylation. Clozapine has been shown to promote the oxidation of mitochondrial proteins involved in energy metabolism in neuroblastoma cells and in lymphoblastoid cells of schizophrenia patients [10,11]. Oxidized proteins included enzymes important in carbon metabolism such as pyruvate kinase and mitochondrial malate dehydrogenase. Analyses of rat or mouse brains have shown that clozapine alters mitochondrial function, energy metabolism, and expression of mitochondrial proteins belonging to the electron transport chain and oxidative phosphorylation pathway, such as succinate dehydrogenase and cytochrome oxidase [12,13]. In addition, alterations in electron transport were demonstrated in peripheral blood cells of patients taking atypical antipsychotics [14]. Second, these drugs may cause increased oxidative stress in cells and tissues [15,16,17]. In addition to direct protein oxidation, antipsychotic treatment has been associated with increased production of reactive oxygen species (ROS) and antioxidant proteins. In a study of patients undergoing long-term clozapine treatment, there were elevated levels of the antioxidant enzyme superoxide dismutase in red blood cells [18]. Further evidence of clozapine-induced ROS production was demonstrated in rat whole blood [19] and rat brain [16,17]. Third, these drugs promote inflammation [20,21,22]. There is evidence to suggest that clozapine influences the production of several cytokines and/or cytokine receptors that modulate immunological responses [20,21,22]. In stimulated blood from healthy donors, clozapine treatment increased levels of IL-4 and IL-17 [13]. In a study which administered clozapine to schizophrenia patients over a six-week period, plasma levels of cytokines, including TNF-α, sTNFR-1, sTNFR-2, IL-6, and sIL-2R, were found to increase significantly over the treatment time [14]. Lastly, it is important to note the interplay between these three proposed mechanisms. Mitochondria that are damaged are known to produce increased levels of ROS and initiate inflammation; oxidative stress itself can damage mitochondria and promote an inflammatory response [23,24]. Similarly, a pro-inflammatory state contributes to increased ROS production and can negatively affect mitochondria function, either directly or through oxidative stress [24,25].
These same mechanisms of mitochondria dysfunction, oxidative stress and inflammation are also attributed to the development of obesity, insulin resistance and other symptoms of diabetes and MetS [26,27,28]. Patients who suffer from obesity, insulin resistance or diabetes have been found to have dysfunctional mitochondria [29]. In these patients, electron transport [30] and oxidative phosphorylation [31,32] are altered. Patients with metabolic disease also have alterations in mitochondria morphology or number [33]. Further, altered mitochondria dynamics, as well as oxidative stress, have been associated with altered muscle metabolism and insulin resistance in mouse skeletal muscle [22]. Regarding oxidative stress, there are several studies which describe in detail the role of ROS and decreased antioxidant capacity in dyslipidemia, obesity, insulin resistance, diabetes and MetS [34,35]. In many disease models, including diabetes, ROS promote a pro-inflammatory environment. It has been shown that mitochondrial ROS trigger expression of pro-inflammatory cytokines as a result of the oxidative stress process [36]. Therefore, it is not surprising that inflammation has also been shown to play a key role in obesity and diabetes [27]. Previous studies have shown that chronic activation of intracellular pro-inflammatory pathways within insulin target cells can lead to obesity-related insulin resistance [28]. Further, inflammation and cytokine production are an extracellular source of ROS. Thus, ROS and inflammation are inextricably and cyclically linked to metabolic disease and metabolic dysfunction. Not surprisingly, elevated cytokines have been found in the serum of patients suffering from diseases where inflammation is a key factor [37]. Inflammatory cytokines such as IL-6, IL-1β, MCP-1 and TNF-α, produced by fat, liver, muscle, and inflammatory cells, have been described as key players in obesity, insulin resistance and diabetes [28]. Patients with MetS have increased circulating levels of numerous cytokines such as IL-2, IL-4, IL-5, IL-12, and IFN-γ, while patients with type II diabetes have circulating T cells which produce increased levels of IL-17 and IFN-γ [38]. Thus, in this study, we explore whether clozapine-associated DIMS may be attributed to mitochondria dysfunction, oxidative stress and inflammation. Importantly, much of the work investigating the mechanisms responsible for antipsychotic-induced metabolic side effects has primarily focused on the brain and blood, for the purposes of understanding how these drugs affect their target population, schizophrenia patients. While those investigations are meaningful and important, there are two problems with this approach. First, they neglect the biology of obesity and diabetes. The major contributing factor in type II diabetes and obesity is insulin resistance [39]. Insulin resistance is thought to be mediated by the release of pro-inflammatory molecules and other mediators from adipocytes and inflammatory cells, which alter the insulin response in fat and muscle tissue [28]. Secondly, this approach is short-sighted; DIMS can be an important tool for studying cellular and molecular changes that lead to MetS in the greater population, not just within the psychiatric milieu. For these reasons, in addition to human neuroblastoma cells, the present study examines the effects of clozapine on insulin-responsive and obesity-associated cell types: cultured mouse fat, muscle, liver and inflammatory cell lines.
The mouse 3T3-L1 (herein referred to as 3T3) adipocyte, C2C12 myoblast, FL83B hepatocyte, and RAW264.7 (herein referred to as RAW) monocyte cell lines, representing fat, muscle, liver, and inflammatory cells, respectively, were used in order to directly assess the cellular and molecular effects of clozapine treatment within a homogeneous cell culture system. In this study, we specifically examine the effects of clozapine on mitochondria morphology and function as well as on the production of proinflammatory cytokines. Understanding how clozapine increases risk for MetS may provide insight into the development of metabolic disease in the general population.

Clozapine-induced mitochondria changes in neuroblastoma cells

Previous studies demonstrated that clozapine treatment in neuroblastoma cells did not alter cell viability or induce apoptosis, but did increase ROS and oxidized mitochondrial proteins important for energy metabolism [11]. To further explore the effects of clozapine on mitochondria function, the effect of clozapine on mitochondria morphology, membrane potential, and volume in neuroblastoma cells was assessed here. After treatment with 10, 20 and 50 µM clozapine or vehicle only, neuroblastoma cells were incubated with the mitochondria-selective probe tetramethylrhodamine methyl ester, perchlorate (TMRM) and imaged by confocal microscopy. At the highest concentration of clozapine (50 µM), the mitochondria were found to be mostly punctate and round, compared to the mixed population observed in the control cells and in cells treated with 10 or 20 µM clozapine: a reticular filamentous arrangement in the perinuclear area along with smaller cytoplasmic mitochondrial puncta (Figure 1B). The analyses of the distribution of TMRM fluorescence provided a measure of the mitochondria membrane potential, which under normal conditions varies between −180 to −120 mV, though more than 24 percent of healthy mitochondria exhibit a membrane potential (ΔΨ) between −180 and −140 mV. Treatment of neuroblastoma cells with clozapine shifted cells from their normal membrane potential distribution (Figure 1A and Table S1). Above 10 µM clozapine, the distribution of mitochondrial membrane potential is observed in two or more distinct populations. A group of mitochondria still possess high membrane potential; however, between 48 to 65% of mitochondria exhibit significantly decreased membrane potential (p ≤ 0.0001) compared to the control untreated cells. Analyses of the percent change in Nernst potential above indicated thresholds, and the percent distribution above the median of the control (to represent the shift in the medians after treatment), are shown in Table S1. In addition, measurements of central tendency (median and mean) and dispersion (IQR and SD) for the distribution of the membrane potential and mitochondria volume data across the various concentrations of clozapine used can be found in Table S2. Lastly, relative to the control cells, the mean volume of mitochondria increased approximately 1.5-fold at 10 µM (p = 0.0029) and over two-fold at 50 µM clozapine (p < 0.0001) (Figure 1C). In the box plot, outliers seen above control and 10 µM clozapine represent the large framework of contiguous mitochondrial networks, with log10 volumes between 4 and 5; the majority of the smaller mitochondrial fragments fall within the inter-quartile range, i.e. between the 25th-75th percentiles of the total values.
The horizontal line across the box shows the median value of mitochondrial volume in each group, flanked by whiskers which are 1.5 times the interquartile range.

Clozapine-induced mitochondria changes in insulin-responsive cells

After observing the effects of clozapine on the mitochondria of neuroblastoma cells, we were interested in examining whether similar changes occurred in cell types associated with whole-body metabolism: fat, muscle, liver and inflammatory cells. Changes in each of these cell types have been closely associated with metabolic alterations such as insulin resistance, obesity and diabetes [28]. The mouse cell lines which were used were: fat, represented by 3T3 adipocytes; muscle, represented by C2C12 myoblasts; liver, represented by FL83B hepatocytes; and inflammatory cells, represented by RAW264.7 monocytes. To determine whether clozapine affected these cells in the same fashion as neuroblastoma cells, we first assessed whether clozapine treatment affected cell viability. After treatment with 25, 50 and 75 µM clozapine or vehicle only, the effect of clozapine, at each dose, for each cell line, on viability was determined by neutral red uptake assay. Similar to the neuroblastoma cells, the concentrations of clozapine used did not cause significant differences in cell viability in three of the four cell lines. At the highest concentration of clozapine (75 µM), RAW monocyte viability decreased significantly relative to the control cells (p = 0.03). The percentage of viable monocytes, relative to the untreated control, was 62.1 ± 11.8%. We then examined whether clozapine also induced changes in mitochondria morphology, membrane potential, and mitochondria volume in the insulin-responsive cell types, as these types of changes are thought to underlie the pathogenesis of several metabolic diseases [29]. Treatment with 25, 50 or 75 µM clozapine, or vehicle only, for 24 hours, was followed by incubation with TMRM and confocal microscopic imaging. Similar to the neuroblastoma cells, each of the cell lines shows a transition of mitochondrial morphology and distribution from a filamentous network to more fissioned mitochondria, having a punctate appearance at the highest clozapine concentration (Figure 2). In addition to the mitochondria morphological changes, treatment of the insulin-responsive cell lines with clozapine shifted cells from their normal mitochondria membrane potential distribution (Figure 3) to a more depolarized state, particularly at the higher concentrations used. Interestingly, each cell type showed a varied distribution pattern of mitochondrial membrane potential. As in the case of the neuroblastoma cells, further analyses of the percent change in Nernst potential and the percent distribution above the median of the control due to treatment are shown in Table S1, as well as central tendency and dispersion data in Table S2. At 25 µM clozapine, about 43% of mitochondria in 3T3 preadipocytes become hyperpolarized, increasing their membrane potential compared to the untreated controls (p = 0.0001). However, mitochondrial membrane potential decreased significantly and clustered in two distinct populations (around −160 to −140 mV and −120 to −100 mV, respectively) at 75 µM clozapine compared to control cells (p = 0.0001). The C2C12 myoblasts showed an initial increase in mitochondrial membrane potential (about 18% of the Nernst values shifted to the left) at 25 µM clozapine relative to controls (p = 0.0001), followed by a shift in the distribution curves of Nernst potential to the right, suggesting depolarization of a large percentage (between 18 and 34%) of mitochondria from −140 mV to potentials ranging between −130 and −110 mV, at both 50 and 75 µM clozapine (p = 0.0001). For the FL83B hepatocytes, the proportion of mitochondria with reduced membrane potential increased by 26% at 75 µM clozapine (p = 0.0001). Similarly, for the RAW monocytes at 75 µM clozapine, there was an increase of 8% in the population of depolarized mitochondria when compared to the control cells (p = 0.0001) (Figure 3).
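The Nernst-based readout referenced above can be sketched numerically. This is a hedged illustration, not the study's actual image-analysis pipeline: TMRM is a cationic probe whose mitochondrial accumulation follows the Nernst relation, so a per-mitochondrion potential can be estimated from the mitochondrial/cytosolic fluorescence ratio. The ratios below are invented for the example.

```python
import numpy as np

# Nernst relation for a monovalent cation accumulating inside mitochondria:
#   delta_psi (mV) ~= -(R*T / F) * ln(F_mito / F_cyto)
R, T, F = 8.314, 310.15, 96485.0           # J/(mol K), 37 C, C/mol
RT_OVER_F_MV = 1000.0 * R * T / F          # ~26.7 mV

def nernst_potential(f_mito, f_cyto):
    """Membrane potential (mV) from per-mitochondrion TMRM intensities."""
    return -RT_OVER_F_MV * np.log(np.asarray(f_mito) / f_cyto)

# e.g. a 200-fold accumulation corresponds to about -142 mV
ratios = np.array([50.0, 200.0, 400.0])    # illustrative F_mito / F_cyto
psi = nernst_potential(ratios, 1.0)
print(np.round(psi, 1))
# summarize the distribution rather than a single average per cell
print(np.median(psi), np.percentile(psi, [25, 75]))  # median and IQR
```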
The C2C12 myoblasts showed an initial increase in mitochondrial membrane potential (about 18% of the Nernst values shifted to the left) at 25 µM clozapine relative to controls (p = 0.0001), followed by a rightward shift of the Nernst potential distribution curves at both 50 and 75 µM clozapine (p = 0.0001), suggesting depolarization of a large fraction (18–34%) of mitochondria from −140 mV to potentials between −130 and −110 mV. For the FL83B hepatocytes, the proportion of mitochondria with reduced membrane potential increased by 26% at 75 µM clozapine (p = 0.0001). Similarly, for the RAW monocytes at 75 µM clozapine, there was an 8% increase in the population of depolarized mitochondria compared to the control cells (p = 0.0001) (Figure 3).

The effect of clozapine treatment on ATP production

Changes in mitochondrial morphology and membrane potential are associated with mitochondrial dysfunction. To determine whether these changes had functional consequences, the effect of clozapine treatment on ATP levels was determined by bioluminescence assay. ATP levels significantly decreased with increasing doses of clozapine for the 3T3, FL83B and RAW cell lines (for all three, p<0.0001; Figure 5). At 50 and 75 µM clozapine, ATP levels in 3T3 cells were significantly reduced by 80% and 96% (p<0.0001), respectively, relative to the controls, and by 86% and 97% (p<0.0001) relative to the 25 µM dose of clozapine. In FL83B cells, ATP levels were significantly reduced by 62% and 64% (p<0.0001) relative to the controls at 50 and 75 µM clozapine, respectively, and by 51% and 55% (p<0.01) relative to the 25 µM dose. Levels of ATP in RAW cells were significantly reduced by 58, 74 and 85% (p<0.0001) relative to the controls at 25, 50 and 75 µM clozapine, respectively, and by 84% (p<0.05) relative to the 25 µM dose at 75 µM clozapine. Changes in ATP levels in C2C12 cells with increasing clozapine concentration were not statistically significant.

Discussion

Previous studies of how atypical antipsychotics such as clozapine may give rise to increased risk for MetS and diabetes have focused primarily on the brain or brain cells, and on understanding this phenomenon in psychiatric patients. In this study, in addition to SKNSH neuroblastoma cells, we examined the effect of clozapine on insulin-responsive cell types and cells associated with obesity: fat, muscle, liver and inflammatory cells. These cell lines provide a simple model system for understanding the mechanistic details of clozapine-induced metabolic changes. Specifically, this study examined the effect of clozapine on mitochondrial functions and inflammation in these cell types, as both mitochondrial dysfunction and inflammation are thought to give rise to obesity, insulin resistance and other symptoms of MetS [26,27,28]. Clozapine induced alterations in mitochondrial morphology in all cell types tested (Figures 1 and 2). These alterations included changes from a contiguous mitochondrial framework to a smaller punctate pattern, and increases in overall mitochondrial volume. Importantly, alterations in mitochondrial size and density have been observed in the skeletal muscle of both genetically-induced and diet-induced obese mice [40] and in individuals with metabolic disease [33], supporting the hypothesis that clozapine-induced alterations in mitochondrial morphology may be associated with the metabolic symptoms observed in patients using clozapine.
Mitochondrial turnover, morphology and remodeling vary in different cell types [41,42]: mitochondria exist in two states, an "individual state" and a "network state", which we refer to in our findings as a punctate/fissioned or a contiguous/filamentous state, respectively [42,43]. Importantly, within each cell the mitochondrial population can be functionally and morphologically heterogeneous, based in part on the demand for oxidative phosphorylation and ATP output [41,43] and on the internal pool of healthy and old/sick mitochondria (though there is no biomarker yet to distinguish the two subtypes). Therefore, any external trigger leading to cellular oxidative stress [44] can disrupt the homeostasis of proteins and transcription factors that control the process of mitochondrial fusion and fission, thus changing mitochondrial number, distribution pattern, size and shape [43], which may lead to the development of diseases such as diabetes [44]. Clozapine also caused mitochondrial depolarization in all cell types examined. In most cases, there appeared to be two distinct populations of mitochondria, based on their membrane potential. Other studies have shown that reticular mitochondria have varying potential across their framework [45]. This aspect was all the more evident at the higher clozapine doses, when the reticular mitochondrial distribution is lost. Moreover, our present findings suggest that fluctuations in mitochondrial membrane potential need stringent examination, and that a single average membrane potential value per cell may be insufficient. Depolarization of the mitochondrial membrane potential is associated with an increase in mitochondrial swelling [46], which we observed after clozapine treatment in all cell types investigated. The observed clozapine-induced swelling of mitochondria may be a mechanism by which cells prevent oxidative damage due to increased generation of mitochondrial ROS (mROS) by the respiratory chain. Several studies have shown that clozapine induces production of ROS [11,47]. Other studies have shown that minor oxidative stress induces mitochondrial swelling and the formation of a mitochondrial "firewall" which prevents propagation of mROS [48]. In addition, a known cell survival strategy involves mild uncoupling of mitochondria leading to weak mitochondrial depolarization without causing cell death [49,50]. This may apply here as the cells adapt to clozapine exposure. This protective mechanism may be the reason why no significant decreases in cell viability were observed for most of the cell lines after clozapine treatment. The observed mitochondrial membrane depolarization is consistent with the observed depletion of ATP at increased concentrations of clozapine. In the monocyte cell line, this depletion of ATP was evident at the lowest dose of clozapine tested, and at the intermediate concentration in the preadipocyte and liver cell lines, suggesting that lower levels of clozapine are capable of severely disrupting energy metabolism in the cell lines tested. Alterations in mitochondrial function are associated with increased inflammation, and increased production of proinflammatory cytokines was observed after clozapine treatment. Inflammation has been shown to play a key role in obesity and diabetes [39]. In fact, the pathogenesis of diabetes and its metabolic complications involves a state of chronic systemic inflammation, and such chronic inflammation increases the expression of circulating inflammatory factors.
Atypical antipsychotics have been reported to be both protective against inflammation and causative of it [51]. However, these studies have been performed on different cell types, under different conditions, and have not been systematically conducted. While there is some debate about the inflammatory nature of clozapine, there is evidence that this drug influences the production of several cytokines and/or cytokine receptors that modulate immunological responses [20,21,22]. In support of this, we observed here that clozapine induces elevated production of proinflammatory cytokines/chemokines in adipocyte, muscle, and monocyte cell lines. Of particular interest is the robust proinflammatory response in monocytes, which includes the production of IL-1β, a cytokine whose production is known to be triggered by the inflammasome [23,24], a multi-protein complex which initiates an inflammatory response to mitochondrial dysfunction or reactive oxygen species. The finding of a clozapine-induced "pro-inflammatory" state in monocytes is important, as monocyte infiltration of adipose and other tissues, followed by local production of proinflammatory cytokines, has been found to be associated with obesity and insulin resistance [52]. One such cytokine is IL-6, whose production was altered in response to clozapine in adipocyte and muscle cells. Circulating IL-6 elicits many types of responses and has been found to correlate with insulin resistance and altered carbon metabolism [53]. Recently it was shown that decreasing inflammation and monocyte infiltration into muscle and liver tissue could reduce the development of MetS in a mouse model [54]. The findings herein suggest that a variety of cell types are susceptible to a clozapine-induced proinflammatory state that may promote cellular dysfunction. In patients, this may be further exacerbated by monocyte infiltration of tissues resulting in further local inflammation. In summary, the findings of clozapine-induced mitochondrial and inflammatory alterations in insulin-responsive cells support the aforementioned link between mitochondrial function and inflammation in risk for MetS, and suggest that alterations in these pathways may underlie the causes of clozapine-induced MetS. It is important to note that, as this was an exploratory study to determine whether a DIMS/cell culture model system could produce cellular and molecular phenomena biologically relevant to MetS, control drugs were not included in the design. Our findings show that a cell-culture DIMS-based approach might be a useful tool for achieving a better mechanistic understanding of the genesis of diabetes and MetS, and that such a tool is but one of many that might be used to fully understand metabolic disease. Further studies with other antipsychotics, both typical and atypical, including those not known to increase risk for weight gain or MetS, should be performed to determine whether the observed effects are unique to clozapine or to atypical antipsychotics; such studies could thereby 1) establish whether these effects are causative of the unique clinical side-effects seen with these drugs and 2) identify relevant cellular and molecular mechanisms which may give rise to MetS.

Cell Culture and Clozapine Treatment

All cell lines were obtained from the ATCC (Manassas, VA, USA). SKNSH human neuroblastoma cells were cultured in DMEM supplemented with 4 mM L-glutamine as previously described [11].
The mouse cell lines 3T3-L1 (3T3) preadipocytes, C2C12 myoblasts, and RAW 264.7 (RAW) monocytes were cultured in DMEM supplemented with 110 mg/L sodium pyruvate, 4 mM L-glutamine and 10% fetal bovine serum. FL83B mouse hepatocytes were cultured in F-12K media containing 10% fetal bovine serum. To determine the effects of clozapine treatment on SKNSH cells, cells were treated with 10, 20 or 50 µM clozapine or vehicle (0.65% DMSO only) for 24 hours. To determine the effects of clozapine treatment on the other cell lines, cells were treated with 25, 50 or 75 µM clozapine or vehicle (0.65% DMSO only) for 24 hours. All assays described below were performed in triplicate at each concentration.

Cell Viability

The effect of clozapine on the viability of RAW, C2C12, 3T3 and FL83B cells was determined by neutral red assay (Sigma, St. Louis, MO, USA). Only viable cells are capable of incorporating neutral red dye by active transport. Briefly, after 24 hours of clozapine or vehicle treatment, the cells were rinsed with PBS and incubated in media containing 0.033% neutral red for two hours. Cells were then washed several times with PBS, and the incorporated neutral red dye was solubilized by gentle rocking for 10 minutes with a solution of 1% acetic acid and 50% ethanol. After 10 min, the solution was collected and the amount of incorporated neutral red dye was determined spectrophotometrically by measuring the absorbance of the solution at 540 nm.

Mitochondria Morphology and Membrane Potential

To visualize mitochondrial morphology and measure the effects of clozapine on membrane potential after 24 hours of treatment, cells were cultured in Nunc chambers and incubated with media containing 30 nM tetramethylrhodamine methyl ester perchlorate (TMRM; Invitrogen, Eugene, OR, USA) for 30 min. Z-series confocal images of the cells were then obtained using an FV1000 imaging system mounted on an Olympus IX-81 inverted microscope with a Plan-Achromat 63x/1.4 Oil DIC objective. To reduce phototoxicity, laser intensity was kept at the lowest level sufficient for TMRM excitation and cell imaging, and laser power settings for each cell type were kept constant throughout the experiment. The collected Z-image series were used to examine the appearance of mitochondria for the characteristic reticular morphology of normal mitochondria, the presence of numerous round fragments of varying size indicative of fission, or filamentous elongation indicative of fusion. The images were further analyzed to measure mitochondrial membrane potential and volume using the Nernst Potential MulPro2D plug-in for ImageJ software (National Institutes of Health, USA). The plug-in identifies individual mitochondria and, using the fluorescence of TMRM and the Nernst equation, calculates the membrane potential for each identified mitochondrion.

ATP Levels

To determine the effect of clozapine treatment on ATP production, ATP levels were determined by bioluminescence assay (Roche Applied Science, Indianapolis, IN, USA), which measures the ATP-dependent conversion of luciferin to oxyluciferin and light. After 24 hours of clozapine or vehicle-only treatment, cells were washed with PBS and an ATP lysate was made for each replicate culture using boiling lysis buffer or boiling water. ATP levels for each replicate were measured in duplicate. Lysates were combined with luciferase reagent per the manufacturer's instructions, and the resulting light emission at 562 nm was quantified by a microplate-format luminometer.
ATP lysates were also quantified for A260 by NanoDrop (Thermo Scientific, Wilmington, DE, USA). The luminescence values were then corrected using the A260 values to account for the number of viable cells contributing to ATP levels.

Cytokine Analysis by Luminex Assay

To determine the effect of clozapine treatment on the production of proinflammatory cytokines, cell culture supernatants from treated and control cells were analyzed for 13 different cytokines using a multiplex Luminex bead-based assay (MILLIPLEX MAP Mouse Cytokine/Chemokine Premixed 13-Plex; Millipore, Billerica, MA, USA) capable of detecting the following analytes: GM-CSF, IFN-γ, IL-10, IL-12 (p70), IL-13, IL-1β, IL-2, IL-4, IL-5, IL-6, IL-7, MCP-1, and TNF-α. After 24 hours of clozapine treatment, culture media from clozapine-treated and control cells was collected, aliquoted and stored at −80 °C until assayed according to the manufacturer's instructions. Briefly, prior to the assay, culture supernatants were concentrated 2- to 3-fold using microconcentrators (Corning Spin-X UF Concentrators; Sigma, St. Louis, MO, USA) with a 10 kDa molecular weight cut-off. The protein content of culture supernatants was quantified by A280 on the NanoDrop before and after concentration. Cytokine levels for each drug dose replicate were measured in duplicate. Equivalent amounts of concentrated culture supernatant samples were incubated with antibody-coated capture beads overnight at 4 °C. Washed beads were further incubated with biotin-labeled anti-mouse cytokine antibodies for 1 h at room temperature, followed by incubation with streptavidin-phycoerythrin for 30 min. Samples were analyzed using a Luminex 200 (Luminex, Austin, TX, USA) and Statlia software (Brendan Technologies Inc., Carlsbad, CA, USA). Standard curves of known concentrations of recombinant mouse cytokines were used to convert median fluorescence intensity (MFI) to cytokine concentration in pg/ml, and these values were further normalized by cell lysate protein concentration to account for the number of viable cells contributing to cytokine levels. For this, cell lysates were prepared from the monolayers from which the culture supernatants were collected; these cells were washed with PBS and lysed using ice-cold RIPA buffer. Lysates were quantified by BCA assay (Thermo Fisher Scientific, Rockford, IL, USA).

Statistical Analyses

Logarithmic (base 10) transformations were applied to the absolute mitochondrial volume values obtained from NIH ImageJ analysis. The data were evaluated for normality using the Shapiro-Wilk test and the Kolmogorov-Smirnov test. Plots comparing Nernst mitochondrial membrane potential distributions across treatments were drawn by adding Gaussian kernel density plots to the histograms. Comparisons of mitochondrial Nernst potential and mitochondrial volume values were made to their respective controls and performed by nonparametric rank-sum and Kruskal-Wallis tests, where for multiple comparisons p<0.016 is considered significant. Statistical analysis and graphics were produced using Stata 11.0 (Stata Corp, College Station, TX). Other statistical analyses were performed using SPSS (IBM, Armonk, NY, USA) or GraphPad Prism (GraphPad Software, La Jolla, CA, USA). Data are expressed as the mean ± SEM. Differences in the endpoints (variables) between doses of clozapine were determined by analysis of variance (ANOVA) incorporating repeated measures across dose.
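The volume analysis just described can be sketched in a few lines. The data below are hypothetical; only the test choices (Shapiro-Wilk, Kolmogorov-Smirnov, Kruskal-Wallis, rank-sum) and the p < 0.016 threshold come from the text:

```python
import numpy as np
from scipy import stats

# Hypothetical per-mitochondrion volume data, log10-transformed as in the study.
rng = np.random.default_rng(1)
groups = {dose: np.log10(rng.lognormal(1.0 + 0.3 * i, 0.7, size=300))
          for i, dose in enumerate(["vehicle", "25 uM", "50 uM", "75 uM"])}

# Normality checks on each group.
for dose, values in groups.items():
    p_sw = stats.shapiro(values).pvalue                                  # Shapiro-Wilk
    p_ks = stats.kstest(values, "norm",
                        args=(values.mean(), values.std())).pvalue       # Kolmogorov-Smirnov
    print(f"{dose}: Shapiro-Wilk p={p_sw:.2g}, KS p={p_ks:.2g}")

# Omnibus nonparametric comparison across all doses.
print(f"Kruskal-Wallis p={stats.kruskal(*groups.values()).pvalue:.2g}")

alpha = 0.016  # multiple-comparison threshold used in the study
for dose in ["25 uM", "50 uM", "75 uM"]:
    p = stats.ranksums(groups["vehicle"], groups[dose]).pvalue
    verdict = "significant" if p < alpha else "n.s."
    print(f"{dose} vs vehicle: rank-sum p={p:.2g} ({verdict})")
```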
For the analyses of cytokine levels, data were log transformed and ANOVA was performed with additional post-hoc analyses to define significant relationships between the individual endpoints. To correct for multiple comparisons, statistical significance for each cell line was determined using a Bonferroni correction, in which p = 0.05 was divided by the number of hypotheses tested (i.e., the number of cytokines detected in that cell line). Therefore, as discussed in the Results, the p value considered significant differed for each cell line.

Supporting Information

Table S1. Change in membrane potential after treatment with clozapine. (DOCX)
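For concreteness, the per-cell-line Bonferroni threshold described above amounts to the following; the cytokine counts here are hypothetical placeholders, not the study's values:

```python
# Minimal sketch of the per-cell-line Bonferroni correction described above:
# alpha is divided by the number of cytokines actually detected in each line.
detected_cytokines = {"RAW": 10, "3T3": 6, "C2C12": 5, "FL83B": 3}  # assumed counts

for cell_line, n_tests in detected_cytokines.items():
    print(f"{cell_line}: {n_tests} cytokines -> significant if p < {0.05 / n_tests:.4f}")
```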
Framework of Artificial Intelligence Learning Platform for Education

Nowadays, information technology is integrated into our everyday activities. It affects not only teaching and learning methods at all levels, but also each teacher's teaching style, which must suit the digital age. A standardized platform should therefore be created for all teachers to effectively serve future education policy. This research aims to synthesize and develop a framework of an artificial intelligence learning platform for education and to evaluate the framework's suitability. The research proceeded in three phases: 1) synthesizing an intelligent learning platform using Artificial Intelligence (AI), 2) developing a framework of an artificial intelligence learning platform for education, and 3) evaluating the suitability of the framework with 15 experts. The evaluation found the suitability of the framework of an artificial intelligence learning platform for education to be very good. The results showed that this framework could be used to develop a learning platform in preparation for the transformation to the digital age.

Introduction

There are many ways to improve the quality of life, and one of the important ones is quality enhancement in education (Voratitipong et al., 2018). The United Nations (UN) has designated education as one of the 2030 Sustainable Development Goals (SDGs). The fourth of the SDGs concerns promoting equal education and lifelong learning for everyone (Aroonsrimorakot & Vajaradul, 2016; Sachs, 2012; United Nations, 2015). From the past to the present, education has undergone tremendous changes which affect all our lives. Education no longer ends in the classroom; its pattern is changing to "learning for life". The classroom scenes of past studies have been replaced with new technology such as smartphones, high-speed internet, and so on. Today, people around the world can learn online about any topic and can access information from a variety of sources across the globe. In addition, all students can learn at any time and anywhere, at their convenience, through mobile phones, tablets, and computers. When technology becomes a part of our life, it is inevitable that it will be a key factor in changing education. The adoption of digital technology as a tool in teaching and learning is known as a "platform of digital learning". It is learning management attuned to the changing world situation, focused on encouraging people to seek knowledge by themselves from digital and social media. Consequently, the digital-age population is able to create and develop innovative learning that meets the needs of self-learning through free-to-use social learning platforms. Guidelines for developing educational personnel with digital skills must start from developing people equitably and integrating new knowledge with the knowledge they already have, focusing on the 70:20:10 learning model. This reduces lecturing and adds other relevant forms of learning instead. A digital platform will support teaching and learning through information technology systems, starting with online teaching, for which trials are already widespread, including training so that teachers and educational personnel can choose correct and appropriate technology. This platform can be used as an add-on for teaching and learning and has a positive effect on teachers and learners (Ratchagit, 2019).
Today, technology plays an important role in changing the world: not only the lifestyle of human beings, but also the response to consumer demand in the business sector and the efficiency of both public services and the government sector (Institute for Innovative Learning, 2020). In particular, it has taken a role in education. Technology can help to enhance and increase the potential of education, especially "Artificial Intelligence, or AI". However, artificial intelligence technology does not replace teachers; it is a combination of automation and the instructor's attention. Learning is not writing code or commanding systems like a robot; it is personalized learning tailored to the individual student. Everyone has equal access to quality learning, and artificial intelligence technology will greatly promote education, which is very useful for students and instructors alike. In addition, artificial intelligence helps reduce teachers' working time and helps to reduce mistakes that may occur, for example, in checking homework or tests and creating effective teaching and learning materials. Such applications promote many studies; AI also helps teachers or tutors answer questions for individual students, a role known as "teacher assistant". It is considered a special channel that allows students to easily consult teachers and get quick answers (Creative Thailand, 2018; Plook, 2019; Tuemaster, 2020). For these reasons, the researcher is interested in developing a framework of an artificial intelligence learning platform for education, to help improve the educational system and curricula so that they suit a changing world and so that teachers and learners can adjust their lifestyles in a balanced way.

• To synthesize the intelligent learning platform using artificial intelligence.
• To develop the framework of an artificial intelligence learning platform for education.
• To evaluate the suitability of the framework of an artificial intelligence learning platform for education.

Digital Learning Platform

A digital learning platform refers to a learning environment that connects with learners in a two-way manner, using technology tools to support all or part of the learning. The tools focus on learners and teachers, with software designed to provide comprehensive help in the educational process. Likewise, these tools can improve the learning experience of learners and turn the learning environment into a digital learning environment with limitless freedom (Artuso & Graf, 2020; Bujang et al., 2020; Pratsri & Nilsook, 2020; Faustmann et al., 2019; Iliashenko et al., 2019; Yanga & Yenb, 2016).

Definition of an Intelligent Learning Platform

An intelligent learning platform refers to a learning system designed to create intelligence by focusing on human-computer interaction. It is thus a tool that helps to improve the efficiency of evaluating academic achievement, analyzing data to monitor learners' technology learning and assessments. Besides, the intelligent learning platform can analyze strengths and weaknesses of learning to improve teachers' teaching as well as stimulate learners' interest in learning and promote the development of balanced learning (Adenowo, 2018; Diao, 2020; Gong, 2020; Yang & Wu, 2017; Zheng, 2018). Table 1 summarizes the elements identified in these studies:

1. Curriculum
2. Learning Achievement (Diao, 2020; Gong, 2020)
3. Content (Artuso & Graf, 2020; Yang & Wu, 2017)
4. Supplement of Advance Learning (Yang & Wu, 2017)
5. Data Analysis (Yang & Wu, 2017)
6. Assessment (Artuso & Graf, 2020; Gong, 2020; Yang & Wu, 2017)
7. Assessment Indicators (Gong, 2020)
8. Quality Monitoring of Students (Gong, 2020; Yang & Wu, 2017)
9. Practice (Diao, 2020)

From Table 1, the intelligent learning platform resulting from the synthesis of relevant research can be summarized in four components, as follows.

Elements of an Intelligent Learning Platform

1) User: learner, teacher, and admin.
2) Learning platform: learning content management systems, learning management systems, classroom management system, virtual learning environments, course management system, user management system, supporting system, intelligent tutoring system, and Massive Open Online Course (MOOC).
3) Intelligent technology: web service, mobile technology, virtual reality, artificial intelligence, online classroom, E-learning, and embedded process monitoring.
4) Curriculum: curriculum, learning achievement, content, a supplement of advance learning, data analysis, assessment, assessment indicators, quality monitoring of students, and practice.

Definition of Artificial Intelligence

Artificial intelligence refers to technology that simulates human intelligence and behavior so as to think like humans and imitate human actions (Anagnostopoulou et al., 2020; Maneehaet & Wannapiroon, 2019; Tang & Hai, 2021; Yu, 2021), developed on established working principles and emerging technology. Moreover, it helps in working or making decisions in place of human intervention, and it works intelligently (O'Brien, 2020). It can also recognize, learn, and automate tasks without human command (Copeland, 2020; Frankenfield, 2020; Haenlein, 2019; Hamet & Tremblay, 2017; Marsden, 2017; Szolovits, 2018; Zhang & Dafoe, 2019). It aims primarily to make computer performance more comprehensive and to cultivate intelligent patterns of thinking, linking humans to computers to make them smarter (Han, 2019). As one of the most advanced information technologies globally, artificial intelligence has made many advances in fields such as speech recognition, automatic control, organization management, and teaching systems (Yang et al., 2018).

Artificial Intelligence Technology

Artificial intelligence technology can be classified into four types of functionality: reactive machines, limited memory, theory of mind, and self-awareness (Hintze, 2016; Johnson, 2020; Lateef, 2020).

Artificial Intelligence Technology to Support Learning Platforms

A learning platform supported by artificial intelligence technology is a further step for the education system. On the other hand, there are still many people who may not realize the benefit or the importance of adopting artificial intelligence technology to help develop and improve the education system; therefore, such technology may not yet be well known worldwide. There are many benefits to implementing artificial intelligence technology to support learning platforms. It not only helps to shorten working time, but also increases capacity in tasks that humans cannot do, for example, analyzing the knowledge level of learners, offering retrospective communication, helping to plan improvements to the teaching and learning curriculum, and aiding teaching and learning to be more effective (Kuprenko, 2020).

Method

The research method was divided into three phases according to the research objectives, as follows. Phase 1: Synthesis of an artificial-intelligence-based intelligent learning platform.
To begin, materials and research on digital learning platforms, intelligent learning platforms, and artificial intelligence technology were reviewed: forty publications in international research databases between 2016 and 2021. After that, the elements of an intelligent learning platform and artificial intelligence technology were synthesized and presented as an illustration plan with accompanying text, as shown in Figure 1 and Figure 2. The validity of the research tool was analyzed by content analysis. Phase 2: A framework of an artificial intelligence learning platform for education was developed. The data obtained from the research in Phase 1 was used to develop a framework of an artificial intelligence learning platform for education, presented as an illustration plan with accompanying text in Figure 4. Phase 3: The suitability of the framework of an artificial intelligence learning platform for education was evaluated. Questionnaires were used as a data collection tool with 15 experts, each with more than five years' experience in the relevant field. Experts were selected for their expertise in the digital learning platform, intelligent learning platform, and artificial intelligence domains. The research instrument was the framework of an artificial intelligence learning platform for education. All survey questions utilized a 5-point Likert scale. The arithmetic mean and standard deviation were utilized in the data analysis.

Elements of the Artificial Intelligence Learning Platform

The artificial intelligence learning platform consisted of four main components: user, learning platform, intelligent technology, and curriculum. Each main component has different sub-components. The first component is the user, consisting of learner, teacher, and admin. The second component is the learning platform, consisting of a user management system, supporting system, intelligent tutoring system, and Massive Open Online Course (MOOC). The third component is intelligent technology, consisting of web service, mobile technology, virtual reality, artificial intelligence, online classroom, E-learning, and embedded process monitoring. The last component is the curriculum, comprising a supplement of advanced learning, data analysis, assessment, assessment indicators, quality monitoring of students, and practice. (See Figure 1.) Artificial intelligence technology, comprising the functionality types (including limited memory and theory of mind) and the capability types (including Artificial Narrow Intelligence), is shown in Figure 2.

Framework Design

This step is the development of a framework of an artificial intelligence learning platform for education, comprising the elements of an intelligent learning platform together with artificial intelligence technology and intelligent education. Outlined in Figure 1 and Figure 2 are the results of the synthesis of an intelligent learning platform and artificial intelligence technology, developed into an artificial intelligence learning framework as shown in Figure 3. All data were then developed into a framework of an artificial intelligence learning platform for education, as presented in Figure 4.

Evaluation Results of the Framework

The suitability of the framework of an artificial intelligence learning platform for education was evaluated. The researchers invited 15 experts to carry out the evaluation. The results are shown in Table 2.
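A minimal sketch of the Phase 3 analysis, assuming hypothetical expert scores: each questionnaire item is summarized by the arithmetic mean and standard deviation of the 15 experts' 5-point Likert ratings, as stated in the Method.

```python
import statistics

# Hypothetical ratings: item -> fifteen expert scores on a 5-point Likert scale.
ratings = {
    "Intelligent learning platform": [5, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
    "Artificial intelligence technology": [4, 4, 5, 4, 5, 5, 4, 4, 5, 5, 4, 5, 4, 4, 5],
}

for item, scores in ratings.items():
    mean = statistics.fmean(scores)   # arithmetic mean
    sd = statistics.stdev(scores)     # sample standard deviation
    print(f"{item}: mean = {mean:.2f}, SD = {sd:.2f}")
```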
Discussion

The framework of an artificial intelligence learning platform for education has three elements: artificial intelligence technology, an intelligent learning platform, and intelligent education. Based on the research on designing a framework of an artificial intelligence learning platform for education, it can be summarized as follows. Artificial intelligence technology consists of two components: functionality AI and capabilities AI. Functionality AI has four types: reactive machines, limited memory, theory of mind, and self-awareness. Capabilities AI contains three types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). Biswal (2020) conducted research on the types of Artificial Intelligence necessary to learn in 2020; the above elements combine to form an intelligent machine that learns from vast volumes of data to perform human-like tasks. Furthermore, the research on types of Artificial Intelligence conducted by Joshi (2019) indicated that the more of these elements are used to make machines emulate human-like functioning, the higher the degree to which an AI system can replicate human capabilities. Regarding the intelligent learning platform, it must have the following components: user, learning platform, intelligent technology, and curriculum, as seen in the study conducted by Artuso and Graf (2020), who examined the Science and Math courses in a Danish digital learning platform. According to that study, a digital learning platform must consist of the elements mentioned above. This is also in line with the research by Gong (2020), who investigated an evaluation mechanism of learning achievement based on an intelligent learning platform using all the elements mentioned above. To summarize, intelligent education is the application of artificial intelligence technology and an intelligent learning platform in the education system, enabling the education system to be more intelligent. In the assessment by 15 experts, the framework of an artificial intelligence learning platform for education was found to be very suitable. As a result, this framework could be used to develop a learning platform for modern smart education in preparation for the digital transformation.

Conclusion

In this study, a framework of an artificial intelligence learning platform for education was presented. The findings revealed that the framework may be used to create a learning platform in preparation for the transition to the digital era. With the rise of artificial intelligence technology, educational institutions need to learn, adapt, and develop constantly. When educational institutions are prepared to cope with and be aware of the current situation, provide more knowledge, and remain open-minded about learning new things, the education system will probably gain the utmost benefit from using artificial intelligence technology. Artificial intelligence technology can be an important tool for teaching and learning, applied in various fields of instructional management. It not only helps teachers manage learning but also enables students to learn and expand their knowledge in more diverse ways. At present, artificial intelligence is a changing and increasingly remarkable field within education.
Moreover, teaching and learning through intelligent systems have developed from small platforms, such as applications in a smart classroom, toward modern educational structures, and are considered an aid that allows the education system to develop further.
Top Seesaw with a Custodial Symmetry, and the 126 GeV Higgs

The composite Higgs models based on the top seesaw mechanism commonly possess an enhanced approximate chiral symmetry, which is spontaneously broken to produce the Higgs field as pseudo-Nambu-Goldstone bosons. The minimal model, with only one extra vector-like singlet quark that mixes with the top quark, can naturally give rise to a 126 GeV Higgs boson. However, without a custodial symmetry it suffers from the weak-isospin violation constraint, which pushes the chiral symmetry breaking scale above a few TeV, causing substantial fine-tuning for the weak scale. We consider an extension of the minimal model that incorporates the custodial symmetry by adding a vector-like electroweak doublet of quarks with hypercharge +7/6. Such a setup also protects the $Zb\bar{b}$ coupling, which is another challenge for many composite Higgs models. With this addition, the chiral symmetry breaking scale can be lowered to around 1 TeV, making the theory much less fine-tuned. The Higgs is a pseudo-Nambu-Goldstone boson of the broken O(5) symmetry. For the Higgs mass to be 126 GeV, the hypercharge +7/6 quarks should be around or below the chiral symmetry breaking scale, and are likely to be the lightest new states. The 14 TeV LHC will significantly extend the search reach for these quarks. Probing the rest of the spectrum, on the other hand, would require a higher-energy future collider.

Introduction

The nature and properties of the Higgs boson have become the focus of particle physics research since its discovery in 2012. The relatively light Higgs boson of 126 GeV suggests that it is either an elementary particle or, if it is a composite degree of freedom of some strong dynamics, a pseudo-Nambu-Goldstone boson (pNGB) of some spontaneously broken symmetry [1-6]. Other than the Higgs boson, the Large Hadron Collider (LHC) has so far not discovered any new physics. The couplings of the Higgs boson are consistent with their standard model (SM) values, though some significant deviations are still possible. If there exists new physics responsible for the origin of electroweak symmetry breaking (EWSB), the current experimental results indicate that it is probably close to the decoupling limit. On the other hand, the naturalness argument strongly prefers the new physics to be near the weak scale to avoid excessive fine-tuning. The tension between these two requirements has become a severe challenge for any model that attempts to explain the electroweak (EW) scale. In a previous paper [7], it was found that in a top seesaw model of dynamical EWSB [8-11], the Higgs boson arises naturally as a pNGB of the spontaneously broken U(3)_L symmetry, which relates the left-handed top-bottom doublet and a new quark $\chi_L$. The top seesaw model fixes the problem of the top quark being too heavy in the top condensation model [12-16] by mixing the top quark with a new vector-like quark χ. It was shown that, in the presence of the approximate U(3)_L symmetry, the Higgs boson mass is highly correlated with, and generically smaller than, the top quark mass. The experimental value of 126 GeV can be obtained with natural values of the parameters of this model. A drawback of this model is that the U(3)_L does not contain a custodial symmetry. As a result, the constraint on weak-isospin violation requires the chiral symmetry breaking scale f to be above 3.5 TeV.
Some significant fine-tuning is needed to obtain the weak scale at v ≈ 246 GeV. Such a high chiral symmetry breaking scale also implies that none of the new states are predicted to be reachable at the LHC. A collider of much higher center-of-mass energy (∼ 100 TeV) would be needed to have any chance of seeing some of the new states. It is therefore desirable to consider extensions of the minimal top seesaw model that include a custodial symmetry. A straightforward extension of the top seesaw model in Ref. [7] is to introduce a "bottom seesaw" by adding a vector-like singlet bottom partner ω. The spontaneously broken U(4)_L symmetry can produce 2 light Higgs doublets. Without additional contributions, the mass of the Higgs boson made of the bottom and ω quarks is related to the bottom quark mass and hence is too light. To avoid this situation, one could introduce scalar mass terms (which come from 4-fermion interactions in the UV theory) to explicitly break the U(4)_L chiral symmetry of $(t_L, b_L, \chi_L, \omega_L)$ down to Sp(4). While the Sp(4) contains the SU(2)_C custodial symmetry which can be used to protect the weak isospin, such a model suffers from the constraint on the $Z \to b\bar{b}$ branching ratio. The most recent results suggest that the SM prediction for the $Z \to b\bar{b}$ branching ratio ($R_b$) is 2.4σ smaller than the measured value [17]. When the bottom quark mixes with a heavy singlet, as required for the bottom seesaw mechanism, the $Zb_L\bar{b}_L$ coupling is reduced (becomes less negative) while the $Zb_R\bar{b}_R$ coupling is not modified. As a result, the $Z \to b\bar{b}$ branching ratio is further reduced. This puts a constraint on the mixing angle ($\theta_{b_L}$) between $b_L$ and $\omega_L$, which pushes the mass of ω to be very large [11,18]. In order not to have a large weak-isospin violation, the masses of χ and ω should be close, again implying a large chiral symmetry breaking scale. By playing with the model parameters, the chiral symmetry breaking scale may only be slightly reduced compared to the original top seesaw model, which means that such an extension still requires stiff fine-tuning. It was pointed out in Ref. [19] that the custodial symmetry which protects the weak isospin can also protect the $Zb_L\bar{b}_L$ coupling under certain conditions. Namely, the new physics needs to be invariant under an O(4) global symmetry, which is the familiar SU(2)_L × SU(2)_R of the SM Higgs sector together with a parity defined as the interchange L ↔ R ($P_{LR}$); also, $b_L$ needs to be charged under both SU(2)_L and SU(2)_R with $T_L = T_R = 1/2$, $T^3_L = T^3_R = -1/2$. This implies that the SM $(t_L, b_L)$ doublet needs to be embedded into a (2, 2) representation of SU(2)_L × SU(2)_R, together with a new doublet quark $(X_L, T_L)$ of hypercharge +7/6, with the quantum numbers given in Table 1.

Table 1: The quantum numbers of $X_L$, $T_L$, $t_L$, $b_L$ under SU(2)_L × SU(2)_R.

             X_L      T_L      t_L      b_L
  T^3_L      1/2     -1/2      1/2     -1/2
  T^3_R      1/2      1/2     -1/2     -1/2

To adopt this setup we introduce SU(2)_W-doublet vector-like quarks, Q ≡ (X, T), with hypercharge +7/6, in addition to the vector-like SU(2)_W-singlet quark χ which is responsible for the top seesaw mechanism. The underlying strong dynamics is assumed to approximately respect the U(5)_L × U(4)_R symmetry among the five left-handed quarks $(t_L, b_L, X_L, T_L, \chi_L)$ and the four right-handed quarks $(X_R, T_R, t_R, \chi_R)$.
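As a quick cross-check of Table 1, the hypercharges and electric charges quoted in the text follow from the standard relations Y = T³_R + X and Q = T³_L + Y, with the U(1)_X charge of the bidoublet taken to be +2/3 (an assumption chosen to be consistent with the quoted Y = +7/6 and +1/6); a short sketch:

```python
from fractions import Fraction as F

# Verify the electric charges implied by Table 1, using Y = T3_R + X and
# Q = T3_L + Y.  X_CHARGE = +2/3 is the assumed U(1)_X assignment.
X_CHARGE = F(2, 3)
fields = {  # field: (T3_L, T3_R)
    "X_L": (F(1, 2), F(1, 2)),
    "T_L": (F(-1, 2), F(1, 2)),
    "t_L": (F(1, 2), F(-1, 2)),
    "b_L": (F(-1, 2), F(-1, 2)),
}

for name, (t3l, t3r) in fields.items():
    y = t3r + X_CHARGE   # hypercharge: 7/6 for (X_L, T_L), 1/6 for (t_L, b_L)
    q = t3l + y          # electric charge: 5/3, 2/3, 2/3, -1/3
    print(f"{name}: Y = {y}, Q = {q}")
```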
To avoid too many light pNGBs after the chiral symmetry breaking, gauge invariant scalar mass terms (arising from 4-fermion interactions in the UV) can be introduced to explicitly break the U(4)_R symmetry and also U(5)_L down to O(5). In this way, only one light Higgs doublet arises from the chiral symmetry breaking of O(5) → O(4). An important difference between our model and the setup in Ref. [19] is that in our model, the custodial symmetry that protects both the weak isospin and the $Zb_L\bar{b}_L$ coupling is only approximately preserved by the new physics, which violates the conditions in Ref. [19]. Nevertheless, we found that within some regions of the parameter space, both corrections are within experimental constraints, while the chiral symmetry breaking scale can be as low as ∼ 1 TeV, significantly ameliorating the fine-tuning of the weak scale. The rest of this paper is organized as follows. In Section 2, we write down the effective theory with composite scalars below the compositeness scale, with U(5)_L × U(4)_R symmetric dynamics of the extended quark sector. In Section 3, we focus on the theory at the TeV scale and show that the Higgs boson arises as a pNGB of the chiral symmetry breaking. We derive an approximate analytic formula for the mass of the Higgs boson ($M_h$) and discuss various possible corrections. It can naturally be around 126 GeV for model parameters within reasonable ranges. In Section 4, we further verify the results of Section 3 with numerical studies. We show that in this model the chiral symmetry breaking scale can be lowered to ∼ 1 TeV without large weak-isospin violation, and a 126 GeV Higgs boson mass can easily be obtained. We also comment on searches for the new states at the LHC and future colliders. The conclusions are drawn in Section 5. The two appendices collect the formula for the T parameter from fermion loops and the estimates of some model parameters.

Composite Scalars with a Custodial Symmetry

As in the usual composite Higgs models, we assume that at a scale Λ ≫ 1 TeV there are no fundamental scalars. To implement the custodial symmetry in the top seesaw dynamics, we introduce an SU(2)_W-singlet vector-like quark, χ, of electric charge +2/3 and SU(2)_W-doublet vector-like quarks, Q ≡ (X, T), with hypercharge +7/6, in addition to the SM gauge group and fermions. For the doublet quarks, T has electric charge +2/3, the same as the SM top quark t, while X has electric charge +5/3. We assume that these new quarks, the left-handed $(t_L, b_L)$ doublet and the right-handed $t_R$ in the SM (but not $b_R$) have some new non-confining strong interactions, which can be represented by 4-fermion interactions with strength proportional to $1/\Lambda^2$. The strong dynamics is further assumed to approximately preserve the U(5)_L × U(4)_R chiral symmetry of the five left-handed fermions $\Psi_L \equiv (t_L, b_L, X_L, T_L, \chi_L)$ and the four right-handed fermions $\Psi_R \equiv (X_R, T_R, t_R, \chi_R)$. The strong dynamics among the fermions at scale Λ is given by the 4-fermion interactions of Eq. (2.1). We assume that the 4-fermion interactions in Eq. (2.1) bind the fermions into composite scalars in a way which also preserves the approximate U(5)_L × U(4)_R symmetry. The scalar field Φ is a 5 × 4 matrix of 20 complex composite scalars, schematically $\Phi_{ij} \sim \bar{\Psi}_{L,i}\Psi_{R,j}$. For each of the 20 complex scalars, the superscript denotes the electric charge and the subscript indicates the fermion constituents of the scalar. For example, $\sigma^-_{tX} \sim (\bar{t}_L X_R)$ has electric charge −1. The fields that contain $\chi_R$ are labelled differently (φ instead of σ) because they contain the light scalars which will be the focus of our study. It is useful to classify the scalar fields in Eq. (2.4) into the following categories:
• … are EW doublets;
• $\sigma^0_{\chi t}$ and $\phi^0_{\chi\chi}$ are EW singlets;
• … contains one EW triplet and one EW singlet.

The vector-like fermions can possess gauge invariant masses, which may be generated by physics at some scale higher than Λ. These fermion mass terms explicitly break the U(5)_L × U(4)_R symmetry. They are assumed to be small compared to Λ so that they do not affect the strong dynamics. Below the compositeness scale, these mass terms are matched to the tadpole terms of the composite scalars. At scales µ < Λ, the Yukawa couplings give rise to the quartic couplings and to corrections to the masses of the scalars. We assume that there are additional explicit U(4)_R breaking effects which distinguish $t_R$, $\chi_R$ and $Q_R$. Since mass terms are quadratically sensitive to the UV physics, such effects could induce a large relative splitting of the masses of $\Sigma_{X,T}$, $\Sigma_t$ and $\Phi_\chi$. Combining the quartic couplings, mass terms and tadpole terms gives the scalar potential below scale Λ, Eq. (2.7). Because $Q_R \equiv (X_R, T_R)$ is an EW doublet, $\Sigma_X$ and $\Sigma_T$ have the same mass-squared $M^2_{\Sigma_{X,T}}$, and $\sigma^0_{XX}$, $\sigma^0_{TT}$ have the same tadpole coefficient $C_Q$. (This guarantees that the VEVs of the triplet scalars are suppressed.) Matching at the scale Λ, the sizes of the tadpole terms are fixed by the fermion mass terms. When the scalars are integrated out at the cutoff scale, the fermion mass terms are recovered. This means that at scales µ < Λ we do not need to include the explicit fermion mass terms of Eq. (2.1); they will appear from the scalar VEVs in the low energy effective theory. The quartic coupling $\lambda_1$ is generated by fermion loops and becomes nonperturbative near Λ. $\lambda_2$, on the other hand, is not induced by fermion loops at the leading order and vanishes at Λ in the large $N_c$ limit. At scales µ < Λ, scalar loops generate a non-zero value for $\lambda_2$ and give corrections to $\lambda_1$. Nevertheless, we expect $\lambda_1 \gg |\lambda_2|$. The spontaneous breaking of the chiral symmetry requires at least one of the scalars to have a negative mass-squared. To obtain the correct SM limit, we require $M^2_{\Phi_\chi} < 0$, while $M^2_{\Sigma_t}$ and $M^2_{\Sigma_{X,T}}$ are assumed to be positive for simplicity. The theory below the compositeness scale Λ is given by Eq. (2.2) and Eq. (2.7). Overall, the scalar sector contains 2 complex triplets, 5 complex doublets and 4 complex singlets. The full theory is rather complicated. However, the main focus of this paper is the low energy (µ ≪ Λ) phenomenology, in particular the mass of the Higgs boson and the constraint from the weak-isospin violating T parameter. To produce the correct top seesaw mechanism, the SM Higgs doublet is required to be mostly the light doublet contained in $\Phi_\chi$. While the heavy scalars are not necessarily ruled out by current experimental constraints, from a naturalness point of view it is more reasonable to assume that their masses are not much smaller than Λ, so that all the degrees of freedom in them are heavy and can be integrated out for µ ≪ Λ to obtain a low energy effective theory with $\Phi_\chi$ only. We will focus on this low energy theory for the rest of this paper.

Higgs Boson as a PNGB of the O(5) Symmetry

We now study the effective theory at scale µ ≪ Λ obtained by integrating out the heavy modes in $\Sigma_X$, $\Sigma_T$ and $\Sigma_t$. For simplicity we will sometimes label them collectively as $\Sigma_{X,T,t}$ and their masses as $M_{\Sigma_{X,T,t}}$. In the effective theory, the lowest order contribution of $\Sigma_{X,T,t}$ simply comes from the VEVs of $\sigma^0_{XX}$, $\sigma^0_{TT}$ and $\sigma^0_{\chi t}$, induced by the tadpole terms in Eq. (2.7).
The subleading corrections, including the VEVs of the other neutral fields in $\Sigma_{X,T,t}$, are suppressed by $1/M^2_{\Sigma_{X,T,t}}$. We will first consider the contributions from the VEVs of $\sigma^0_{XX}$, $\sigma^0_{TT}$ and $\sigma^0_{\chi t}$ only and study the $O(1/M^2_{\Sigma_{X,T,t}})$ corrections later in Section 3.4. The scalar potential at µ ≪ Λ can be written as in Eq. (3.9), where w and $u_t$ denote the VEVs of $\sigma^0_{XX} = \sigma^0_{TT}$ and $\sigma^0_{\chi t}$, respectively. At the lowest order, $\sigma_{XX}$ and $\sigma_{TT}$ have the same VEVs since they have the same tadpole terms. This guarantees that the triplet scalar does not develop a VEV at the lowest order, which might otherwise cause a large weak-isospin violation. Eq. (3.9) has a U(5)_L chiral symmetry which is explicitly broken by the heavy field VEVs w and $u_t$ and the tadpole term $C_{\chi\chi}$. Without the explicit breaking terms, U(5)_L is spontaneously broken to U(4)_L due to the negative mass-squared $M^2_{\Phi_\chi}$, and $\Phi_\chi$ contains 9 NGBs, which include two massless Higgs doublets. If the explicit breaking is small, the theory will have two light Higgs doublets. Although the possibility of additional light scalars is not ruled out, such a theory will not have an EWSB minimum that approximately preserves the custodial symmetry. As we will see in Section 3.1, the VEV w is constrained by searches for the charge +5/3 quark to be at least several hundred GeV. A large w can raise the masses of one of the Higgs doublets by explicitly breaking the U(5)_L chiral symmetry down to an approximate U(3)_L symmetry of $(\phi^0_t, \phi^-_b, \phi^0_\chi)$. However, the U(3)_L symmetry does not contain the SU(2) custodial symmetry, and we just recover the minimal model of Ref. [7] in this limit, which would make the extension with the hypercharge +7/6 quarks (X and T) and the corresponding composite scalars totally pointless! To solve this problem, we introduce mass terms, parameterized by the mass-squared parameter $K^2$, that also explicitly break U(5)_L [Eq. (3.12)], where $A_\chi$ is the CP-odd field in $\phi^0_\chi$ shown later in Eq. (3.16). They can come from gauge invariant 4-fermion operators in the UV theory. We require $K^2$ to be positive. Eq. (3.12) lifts the masses of $A_\chi$ and of one linear combination of the two Higgs doublets, hence breaking U(5)_L down to O(5). The custodial symmetry will approximately hold as long as the value of $K^2$ is large enough ($K^2 \gg \lambda_1 w^2$). (A more explicit discussion is given in Section 3.2.) In this case, the theory has only 4 pNGBs, which form the light SM-like Higgs doublet from the spontaneous breaking of O(5) to O(4). At the same time an approximate custodial symmetry is also retained. Combining Eq. (3.9) and Eq. (3.12) gives the scalar potential, Eq. (3.15). The fields acquire VEVs from the tadpoles, the heavy field VEVs and the negative mass-squared $M^2_{\Phi_\chi}$; we parameterize them as in Eq. (3.16). The electroweak VEV, $v = \sqrt{v_t^2 + v_T^2}$, is required to be about 246 GeV. Due to the explicit breaking from the VEV w, $v_t > v_T$ is required for the potential to be at a minimum. $u_\chi$ is a singlet VEV which is expected to be significantly larger than the electroweak VEV, and it sets the scale f, which is conventionally called the chiral symmetry breaking scale.

Extended top seesaw

Once the scalar fields develop VEVs as in Eq. (3.11) and (3.16), the Yukawa couplings in Eq. (2.2) generate mass terms for the fermions. The X quark has electric charge +5/3 and does not mix with any other fermions; its mass is set by the VEV w. The most recent CMS search has excluded a charge +5/3 quark with a mass below 800 GeV at 95% confidence level (CL), assuming that it decays exclusively to tW [20]. This constrains the value of w to be at least a few hundred GeV.
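The t–T–χ mixing discussed next can be illustrated numerically. The sketch below diagonalizes a seesaw-type 3 × 3 mass matrix by singular value decomposition (the matrix is not symmetric, since left- and right-handed fields rotate independently); the entries are hypothetical numbers chosen only to display the qualitative pattern described in the text — a doublet mass of order w, a large singlet mass of order ξf/√2, and smaller mixings — not the paper's actual matrix.

```python
import numpy as np

# Hypothetical (t, T, chi)_L x (t, T, chi)_R mass matrix in GeV:
# a large chi-chi entry (~ xi*f/sqrt(2)), a vector-like T mass (~ w),
# and electroweak-scale mixings driving the seesaw.
M = np.array([[0.0,    0.0,  900.0],
              [0.0,  900.0,  170.0],
              [500.0,  0.0, 2500.0]])

masses = np.linalg.svd(M, compute_uv=False)  # singular values, descending
m_t3, m_t2, m_t1 = masses
print(f"m_t1 ~ {m_t1:.0f} GeV (top-like), m_t2 ~ {m_t2:.0f} GeV, m_t3 ~ {m_t3:.0f} GeV")
# With these numbers the lightest eigenvalue comes out near the naive seesaw
# estimate (900 * 500) / 2500 = 180 GeV, i.e. of order the top mass.
```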
The T quark, on the other hand, mixes with t and χ, so that the 2 × 2 mass matrix of the usual top seesaw model is extended to a 3 × 3 mass matrix. We denote the three mass eigenstates as $t_1$, $t_2$ and $t_3$, ordered by $m_{t_1} \le m_{t_2} \le m_{t_3}$. Given that w cannot be too small (w ≳ 300 GeV for ξ ∼ 3.6), the top quark is always the lightest mass eigenstate $t_1$, and its mass ($m_{top} \equiv m_{t_1}$) is approximately given by the seesaw formula. As we will see later, $f \gg w$ is required to obtain the correct Higgs mass. The lighter top-partner $t_2$ is mainly T; its mass $m_{t_2}$ is almost degenerate with $m_X$ due to the small mixing. There is also a bound on $m_{t_2}$ from searches for heavy top-like quarks [21,22], similar to but slightly weaker than the bound on $m_X$. The heavier top partner $t_3$ is mostly the EW singlet χ, with a mass given by $m_{t_3} \sim \xi f/\sqrt{2}$. Finally, to obtain the correct top mass in Eq. (3.21), we have a constraint on the couplings, where $y_t$ is the SM top Yukawa coupling, defined by $m^2_{top} \equiv y_t^2 v^2/2$. With the addition of the X and T quarks, $(t_L, b_L)$ and $(X_L, T_L)$ form a (2, 2) representation under SU(2)_L × SU(2)_R, which contains the SU(2)_C custodial symmetry after EWSB. In the limit that the vector-like mass $\mu_Q$ vanishes (or equivalently w = 0), there is no explicit violation of the custodial symmetry in the $(t_L, b_L, X_L, T_L)$ sector, which implies a negative T parameter relative to the SM value, because it removes the SM contribution. On the other hand, if $\mu_Q \to \infty$, then (X, T) decouples and we recover the fermion sector of the minimal model, which gives a large positive contribution to T if the chiral symmetry breaking scale is low. We expect that in a suitable range of the X, T masses the T parameter can be small and consistent with the EW measurements. In Appendix A, we provide the full expression for the T parameter calculated from fermion loops, which we use in the numerical calculations of Section 4. Other contributions, such as the contribution from triplet scalar VEVs, are very small as long as the masses of the heavy scalars $M_{\Sigma_{X,T,t}}$ are sufficiently large. In principle there could be additional model-dependent contributions from unknown UV physics. Here we assume that the custodial symmetry is a good symmetry in the UV and that all major explicit breaking effects have been parameterized in our low energy effective theory, so that these contributions are negligible. Since we only add vector-like quarks, the calculable contributions to the S parameter are negligible. However, there could be UV contributions from heavy vector states [23-27]. While such contributions are model-dependent, they can be estimated following Ref. [28] in terms of $\hat{S} = g^2 S/(16\pi)$, where $m_\rho$ is the mass scale of the heavy vector state. We expect such states to exist, as mentioned later in Section 3.3, which sets the scale where gauge-loop contributions are cut off. For $m_\rho$ = 3 TeV, a typical value for f ∼ 1 TeV, we have S ∼ 0.08, within the 68% CL of the experimental constraint [17]. A larger value of S (up to ∼ 0.27) may still be allowed if we arrange a larger value for T as well, which can easily be achieved in this model. The $Zb\bar{b}$ coupling has been a long-standing issue in beyond-SM model building, particularly for composite Higgs models. The measured value of the $Z \to b\bar{b}$ branching ratio ($R_b$) [29] was known to be larger than the SM prediction. A recent calculation of $R_b$ including two-loop corrections [30] suggests that the SM prediction for the $Z \to b\bar{b}$ branching ratio is 2.4σ smaller than the measured value [17].
On the other hand, the forward-backward asymmetry of the bottom quark A^b_FB measured at the Z-pole exhibits a 2.5σ discrepancy with the SM prediction [17]. The two notable discrepancies together prefer a larger Zb_Rb̄_R coupling compared with the SM value and a Zb_Lb̄_L coupling very close to the SM value [31]. Our model, by construction, does not introduce any modification to the Zbb̄ coupling at tree level. However, there are corrections to Zb_Lb̄_L at loop level, since the custodial symmetry that protects the Zb_Lb̄_L coupling is only approximately preserved; the new states couple to T_R, t_R and χ_R, respectively, and induce corrections to the Zb_Lb̄_L coupling at one-loop level. These corrections are suppressed, either by the large masses of the scalars or due to the vector-like nature of X, T and χ. We found these corrections to be much smaller than the allowed deviation of the Zb_Lb̄_L coupling. Another contribution to the Zb_Lb̄_L coupling comes from the mixing of the top with its vector-like partners. The mixing between t and T is negligible in our model. The correction due to the mixing between t and χ, though suppressed by v²/f², could become non-negligible for small f. Nevertheless, to fulfill the experimental constraints on the Zbb̄ coupling, one needs to introduce additional new physics which enhances the Zb_Rb̄_R coupling. If b_R also couples strongly to the new physics, it is possible to arrange it in some representation under the custodial symmetry that gives a significant enhancement of the Zb_Rb̄_R coupling [19, 33-35]. We will not discuss this possibility in this paper.

Mass of the Higgs boson(s)

Using the extremization conditions (requiring the linear terms of h_t, h_T, h_χ to vanish), one can write the dimensionful parameters M²_Φχ, K² and C_χχ in the scalar potential in Eq. (3.15) in terms of the VEVs and quartic couplings. The second equation in Eq. (3.24) explicitly shows that tan β > 1, since λ₁w² and K² are positive, and that the custodial symmetric limit tan β → 1 corresponds to K² ≫ λ₁w². The constraint on the weak-isospin violating T parameter puts an upper bound on tan β. In Section 4 it will be shown that tan β cannot be much larger than 1 if the chiral symmetry breaking scale is close to the weak scale (f ∼ 1 TeV). Substituting Eq. (3.24) back into the potential in Eq. (3.15), we can write the Higgs mass in terms of the VEVs and quartic couplings; it is the smallest eigenvalue of the 3 × 3 mass-squared matrix of the CP-even neutral scalars (h_t, h_T, h_χ). It is useful to switch to the basis (h₁, h₂, h_χ) by a rotation such that the electroweak VEV is purely associated with h₁. In this basis the mass-squared matrix simplifies, and one can see that h₂ is already a mass eigenstate. The 126 GeV Higgs boson, on the other hand, should correspond to the lighter eigenstate of (h₁, h_χ). At the leading order in v²/f², the Higgs mass (M_h) is given by Eq. (3.29). Since λ₂ is not generated by the fermion loops, we expect that |λ₂/λ₁| ≪ 1. To obtain the correct top quark mass through the top seesaw mechanism, we need u_χ to be large. Eq. (3.29) also shows that the Higgs mass is independent of λ₂ at the leading order. Combining it with Eq. (3.20) and Eq. (3.22), we obtain a relation in which tan β ≡ v_t/v_T, y_t is the SM top Yukawa coupling and m_X is the mass of the heavy quark with charge +5/3. As mentioned earlier, for the case of small f in which we are interested, tan β is restricted to be slightly larger than 1.
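The structure of the (h₁, h_χ) diagonalization can be sketched numerically. The block below only encodes the scaling pattern described in the text (a v² diagonal entry, a v·f mixing entry, an f² singlet entry); the coefficients a, b, c are hypothetical O(1) stand-ins for the quartic combinations of Eqs. (3.26)-(3.29).

```python
import numpy as np

# Schematic (h1, h_chi) mass-squared block; a, b, c are placeholders.
v, f = 0.246, 1.0                     # TeV
a, b, c = 0.55, 0.40, 1.0             # hypothetical O(1) coefficients

M2 = np.array([[a * v**2, b * v * f],
               [b * v * f, c * f**2]])
m2_light, m2_heavy = np.sort(np.linalg.eigvalsh(M2))
print(f"M_h ~ {1e3 * np.sqrt(m2_light):.0f} GeV, singlet ~ {np.sqrt(m2_heavy):.2f} TeV")

# Leading order in v^2/f^2: the singlet mixing depresses the light
# eigenvalue from a*v^2 to a*v^2 - (b*v)^2/c, a relative shift ~ v^2/f^2.
print("LO light eigenvalue:", a * v**2 - (b * v)**2 / c)
```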
The correct Higgs mass (126 GeV) corresponds to λ_h = M²_h/v² ≈ 0.26 at the weak scale. It is typically obtained for an appropriate ratio of m_X/f, which is quantified in Section 4. Among the other CP-even neutral scalars, h₂ is already a mass eigenstate, with its mass-squared set by K². Due to the O(5) symmetry, the heavy-doublet CP-even neutral (h₂), CP-odd neutral, and charged scalars all have the same mass at the lowest order, which we denote collectively as M_H. A large K², required if f is small, would imply that these scalars are significantly heavier than the hypercharge +7/6 quarks, beyond any current experimental bounds. The heavier eigenstate of (h₁, h_χ) is mostly the EW singlet. Its mass-squared is approximately (λ₁ + λ₂)f², which is also much larger than the current bounds.

O(5) breaking from electroweak interactions

The SU(2)_W × U(1)_Y gauge interactions contribute O(5)-breaking terms to the scalar potential, given in Eq. (3.34), where we have assumed that the quartic terms are U(4)_R symmetric for simplicity and parameterized the scalar fields as in Eq. (3.10). Assuming that the SU(2)_W × U(1)_Y gauge interactions are the only O(5)-breaking contribution besides the tadpole terms, the parameters ∆M² and κ_1(2), κ̃_1(2) in Eq. (3.34) can be estimated from the gauge loops. It is then straightforward to repeat the analysis in Section 3.2 by including Eq. (3.34). Additional O(5)-breaking effects may exist besides the SU(2)_W × U(1)_Y gauge interactions. In principle, these effects could break the U(4)_L symmetry, but in order to avoid a large violation of the custodial symmetry, they should at least approximately preserve O(4). If the U(4)_L-breaking effects are mainly in the mass term, they effectively cause a shift of the K² terms in Eq. (3.15) except for A²_χ, and result in a splitting between the mass of A_χ and the mass of the heavy Higgs doublet.

Figure 1: The tree-level diagram which generates the dimension-six operators (with coefficients proportional to λ²) discussed in Section 3.4. The thin lines represent Φ_χ, the thick line represents the heavy field Σ, and the thick dashed lines are the heavy-field VEVs ⟨Σ⟩ (i.e., σ⁰_XX, σ⁰_TT or σ⁰_χt).

Corrections from heavy scalar masses

In Section 3.2 we have only included the lowest order contributions from the heavy scalar fields Σ_X,T,t, which are the VEVs of σ⁰_XX, σ⁰_TT and σ⁰_χt. We now study the corrections that are proportional to 1/M²_Σ_X,T,t. As long as M²_Σ_X,T,t are large, Σ_X,T,t can be integrated out, and the dominant contributions come from dimension-six operators of Φ_χ suppressed by 1/M²_Σ_X,T,t. They are generated by the tree-level diagram in Fig. 1, where we use Σ and λ to denote a generic heavy field and a generic quartic coupling. Replacing the heavy fields with their VEVs, these operators generate quartic couplings of the Φ_χ fields that explicitly break O(5) and hence modify the Higgs mass. With the quartic couplings in Eq. (3.15), we can write down the terms generated by Fig. 1. For simplicity, we assume all the fields in Σ_X and Σ_T have mass M_Σ_X,T and all the fields in Σ_t have mass M_Σt, which is a good approximation for large M_Σ_X,T,t, where the corrections from the tadpoles of σ⁰_XX, σ⁰_TT and σ⁰_χt are negligible. For simplicity, we also ignore the contributions from the EW interactions discussed in Section 3.3. (At the lowest order, different contributions add up linearly.) The leading correction from the heavy scalar masses to the scalar potential is given in Eq. (3.38), which simplifies in the limit λ₂ → 0. Again, it is straightforward to calculate the effects of Eq. (3.38) on the Higgs mass by repeating the analysis in Section 3.2. For simplicity we set λ₂ = 0.
Keeping the lowest orders in 1/M²_Σ_X,T and 1/M²_Σt, we obtain the correction in Eq. (3.40). The other contribution comes from the VEVs of the other neutral components of Σ_X,T,t, which are σ⁰_tT, σ⁰_χT, σ⁰_tt and σ⁰_χt in Eq. (2.4). These fields do not have tadpole terms generated by gauge invariant fermion masses. However, once the other fields develop VEVs, the quartic couplings will induce VEVs for these fields that are suppressed by 1/M²_Σ_X,T,t. Compared to the leading order corrections in Eq. (3.40), which are proportional to λ₁f²/(2M²_Σ), the effects coming from these quartic-coupling-induced VEVs are further suppressed by at least a factor of w²/f² or v²/f². The contributions to the S and T parameters from the triplet scalar VEVs are also negligible as long as M²_Σ_X,T is sufficiently large. We will ignore these effects for simplicity.

Numerical Studies and Phenomenology

In this section, we perform numerical studies of this model to obtain predictions and preferred ranges of the parameters, given the experimental constraints. They serve to verify the approximate analytic results obtained in the previous sections. We also discuss possible phenomenology at the LHC or future colliders. We start with an enumeration of the parameters of this model. At the energy scale µ ≪ Λ, the theory is described by the scalar potential of Section 3. w is related to the mass of the charge +5/3 quark m_X by m_X = ξw/√2. Hence, the spectrum is fully determined by a small set of input parameters. We choose the ratios of couplings λ₁/(2ξ²) and λ₂/λ₁ as independent parameters because they are more convenient and better constrained. To calculate M_h, we match the theory to the SM at the scale of the heavier top partner m_t3, compute the quartic Higgs coupling λ_h, and then evolve λ_h down to the weak scale. Before starting the numerical calculations, we first examine the expected ranges of the input parameters listed in Eq. (4.42). The Yukawa coupling ξ is expected to be ∼ 3-4 in a strongly coupled theory. We will use ξ ≈ 3.6 as the standard reference value [7]. The ranges of λ₁/(2ξ²) and λ₂/λ₁ are discussed in Appendix B and are expected to be 0.35 ≲ λ₁/(2ξ²) ≲ 1 and −0.15 ≲ λ₂/λ₁ ≲ 0. Since the focus of this paper is to reduce the chiral symmetry breaking scale f without violating experimental constraints, we will consider lower values of f (≲ 5 TeV). We often take f = 1 TeV as a benchmark point. As we will see later, to obtain the correct Higgs mass f cannot be much smaller than 1 TeV. In Section 3, we already saw that tan β > 1 is required for the potential to be at a minimum. For small f, we expect tan β to be not much larger than 1 from the T parameter constraint. For the effective theory below the composite scale Λ to be a valid description, the states in the theory should have masses below Λ ∼ 4πf. Furthermore, for the effective theory at µ ≪ Λ described in Section 3 to be a valid description, the heavy scalar masses M_Σ_X,T,t need to be much larger than f. Thus, we require M_ρ ≲ 4πf and f ≪ M_Σ_X,T,t ≲ 4πf. Finally, the current bound from the LHC requires m_X > 0.8 TeV. In this model, we incorporate the custodial symmetry by introducing a vector-like EW doublet (X, T), in order to reduce the chiral symmetry breaking scale f without introducing large weak isospin violation. We would first like to verify whether this can indeed be achieved.
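Before turning to the scans, two ingredients of the numerical procedure just described can be sketched. First, the heavy-scalar corrections retained above scale as λ₁f²/(2M²_Σ); plugging in the benchmark values gives their rough magnitude (a back-of-the-envelope estimate, not Eq. (3.40) itself):

```python
# Order-of-magnitude size of the heavy-scalar corrections.
xi = 3.6
lam1 = 0.7 * (2.0 * xi**2)          # lambda1/(2 xi^2) = 0.7, a typical value
f, M_Sigma = 1.0, 10.0              # TeV; M_Sigma = 10 f as used in Section 4

print(f"relative size ~ {lam1 * f**2 / (2.0 * M_Sigma**2):.1%}")   # ~ 9%
```

Second, the matching-and-running determination of M_h can be sketched with a truncated one-loop evolution (top-Yukawa and self-coupling terms only, gauge contributions dropped; the matching values at m_t3 are hypothetical, not the model's actual output):

```python
import numpy as np

# Evolve the Higgs quartic from mu = m_t3 down to the weak scale with
# simplified one-loop SM beta functions (no gauge couplings).
def betas(lam, yt):
    k = 1.0 / (16.0 * np.pi**2)
    return k * (24.0 * lam**2 + 12.0 * lam * yt**2 - 6.0 * yt**4), k * 4.5 * yt**3

lam, yt = 0.30, 0.95                  # assumed matching values at mu = m_t3
mu_high, mu_low, dt = 3800.0, 173.0, -1e-3
t = np.log(mu_high)
while t > np.log(mu_low):
    b_lam, b_yt = betas(lam, yt)
    lam, yt = lam + b_lam * dt, yt + b_yt * dt
    t += dt

print(f"lambda_h(weak scale) ~ {lam:.3f}")   # compare with 0.26 for M_h = 126 GeV
```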
In Fig. 2, we show the Higgs boson mass M_h as a function of m_X and tan β, fixing f = 1 TeV and the other parameters to some typical values, ξ = 3.6 and λ₁/(2ξ²) = 0.7. For simplicity, we set the heavy scalar masses to M_Σ_X,T,t = 10f, a value close to the compositeness scale. We also show the contours of the T parameter calculated using the expressions in Appendix A. The regions −0.06 < T < 0.1 and −0.11 < T < 0.15 roughly correspond to the 68% and 95% CL (fixing S = 0) [17], which are shown on the plots with different colors. We see that, indeed, there is a region for which the T parameter is within the constraint, while a 126 GeV Higgs boson mass can also be obtained. This demonstrates that the chiral symmetry breaking scale can be lowered from multi-TeV in the minimal model [7] to ∼ 1 TeV, which greatly reduces the tuning. In Section 3.2 we argued that with small f, tan β cannot be much larger than 1, as otherwise the custodial symmetry is badly broken. This is verified in Fig. 2, as one can see that the 68% CL bound of the T parameter requires tan β < 1.4. On the other hand, a small custodial breaking is needed to account for the (t_L, b_L) contribution in the SM, which translates into a lower bound on tan β when m_X is small. The Higgs boson mass is sensitive to λ₁/(2ξ²) and M_ρ/f. To study the effects of these two parameters, we choose a point in Fig. 2, m_X = 0.9 TeV and tan β = 1.25, then vary λ₁/(2ξ²) and M_ρ/f and plot the Higgs boson mass as a function of these two parameters. The result is shown in the left panel of Fig. 3. Due to the running effects, the Higgs boson mass-squared does not vary linearly with λ₁/(2ξ²) as naïvely indicated by Eq. (3.30); the dependence is somewhat milder. The Higgs mass decreases as one increases M_ρ, as expected from Eq. (3.37). If M_ρ is not too large (M_ρ ≲ 7f), its effect can be compensated by different choices of the other parameters to obtain the correct Higgs mass. The Higgs boson mass also receives corrections from the masses of the heavy scalars Σ_X,T,t. We repeat the same exercise (choosing the same point in Fig. 2) for the heavy scalar masses and find that their effect is small unless m_X/f is very small. This suggests that the mass of the heavier top partner is essentially determined once the Higgs boson mass and the other parameters are fixed. This is different from the predictions of many other composite Higgs models that contain more than one top partner, such as MCHM₅ and MCHM₁₀ in Refs. [33,37]. In practice, the required ratio m_X/f depends on other parameters that affect the Higgs boson mass, such as λ₁/(2ξ²) and M_ρ, which are not known a priori. Nevertheless, for any reasonable set of the other parameters, we can find the corresponding value of m_X/f that gives M_h = 126 GeV. The right panel of Fig. 4 shows the spectrum of the heavy scalars, whose masses are related by the O(5) symmetry. For f ∼ 1 TeV, the constraint on the T parameter requires tan β ≲ 1.5 (from Fig. 2), which gives K² ≳ 1.2λ₁w² ≈ 1.7m²_X, so that M_H ≳ 1.3m_X. The CP-even (mostly) singlet scalar has a mass ∼ √λ₁ f, which is related to the mass of the heavier top partner m_t3 ∼ ξf/√2 by the standard NJL relation. We have also assumed that the scalars in Σ_X,T,t have masses much larger than f. Therefore, the hypercharge +7/6 quarks (X, T), being the lightest new states in the model and carrying color, will be the first particles to be discovered if this model is realized in nature. Such hypercharge +7/6 quarks (X, T) are a generic prediction of a composite Higgs model with a low chiral symmetry breaking scale and a custodial symmetry to avoid the T parameter and Zbb̄ coupling constraints.
To unravel the underlying theory we would still need to find the other states and study their properties. On the other hand, if the hypercharge +7/6 quarks (X, T) are excluded up to a few TeV, in our model the chiral symmetry breaking scale would need to be at least as large, making the model as fine-tuned as the minimal model [7], and such an extension would then be less motivated. The estimated reach and exclusion regions for these quarks at the 14 and 33 TeV LHC can be found in the Snowmass 2013 report [38]. To produce the Higgs boson mass at 126 GeV, we found that f needs to be somewhat larger than the X quark mass. The current LHC bound on the X quark mass of 800 GeV renders a lower bound on f of the order of 1 TeV. The tuning, measured by v²/f², can be improved to ∼ 5%, compared to 0.5% in the minimal model. Naturalness does not come without a price. To reduce fine-tuning and to avoid the experimental constraints, we are forced to introduce the X and T quarks and the corresponding composite scalars, making the structure of the theory much more complicated. As a matter of fact, the minimal model in Ref. [7] and the extended model studied in this paper are another example of the so-called "crossroads" situation, and one has to choose between fine-tuning and complexity. Ultimately, both models need to be tested by experiments. The search for the X and T quarks at the 14 TeV (and possibly the 33 TeV) LHC can provide important clues for discriminating the two scenarios. However, to fully probe either model, one needs to go beyond the LHC. There has been discussion of a future 100 TeV hadron collider that could be built either at CERN [40] or in China [41]. Such a collider, if realized, will further probe the origin of EWSB and tell us which road Mother Nature takes.

Acknowledgments

We would like to thank Bogdan Dobrescu and Ennio Salvioni for discussions. This work is supported by the Department of Energy (DOE) under contract no. DE-FG02-91ER40674.

A T parameter from fermion loops

In Section 3.1 we argued that the leading contribution to the T parameter is captured by the fermion loops. In this appendix, we provide an expression for the T parameter calculated from fermion loops. In terms of SU(2)_W eigenstates, the contribution comes from the fermions (t_L, b_L), (X_L, T_L) and (X_R, T_R) [since (X_R, T_R) is also an SU(2)_W doublet]. The charge +2/3 fermions t, T and χ form a 3 × 3 mass matrix, as shown in Eq. (3.19). We denote the three mass eigenstates as t₁, t₂ and t₃, ordered by m_t1 ≤ m_t2 ≤ m_t3, and denote the left-handed and right-handed rotation matrices accordingly. The contribution to the T parameter from fermion loops is then a standard function of the mass eigenvalues and these mixing matrices.

B Estimates of the coupling ratios

The relevant couplings are the Yukawa coupling ξ and the quartic couplings λ₁, λ₂ in Eq. (3.15). It was shown in the previous paper [7] that the ratios of couplings, λ₁/(2ξ²) and λ₂/λ₁, are better estimated than their individual values. At the same time, the predictions of the model, such as the mass of the Higgs boson, also depend more strongly on the ratios of the couplings. This is also true for the model studied in this paper. With the addition of the (X, T) quarks, the estimated coupling ratios are slightly modified from the minimal model [7], while the derivations remain the same. Here we provide a short summary of the results and refer the reader to the appendix of [7] for more details of this study. In the fermion bubble approximation, the ratio λ₁/(2ξ²) is predicted to be 1, while λ₂ is zero since it is not generated by the fermion loops.
These results are modified once the gauge loop corrections and the back-reaction of the scalar self-interactions are included, for example by using the full one-loop RG equations [16]. If the chiral symmetry breaking scale f is not much smaller than the compositeness scale Λ, as in the case we are interested in, one cannot trust the RG analysis, because the couplings are strong and the logarithms are only O(1). Nevertheless, it may provide us with some idea of the possible range of the coupling ratios λ₁/(2ξ²) and λ₂/λ₁. In the one-loop RG equations, N_f is the number of quark flavors. We solve these equations numerically for our model, which has N_L = 5, N_R = 4, N_c = 3 and N_f = 9. We set the initial conditions λ₁ = 2ξ², λ₂ = 0 and choose several different initial values for ξ. The results are shown in Fig. 5. The ratios of couplings are quickly driven to some approximate fixed point values, though we should not trust the exact evolution near Λ due to potentially large higher loop contributions. If the chiral symmetry breaking scale is not far below the compositeness scale, we cannot trust the 1-loop RG results. However, if we assume a smooth evolution, the ratios of couplings are expected to lie between their initial values and the quasi-infrared fixed point values: 0.35 ≲ λ₁/(2ξ²) ≲ 1 and −0.15 ≲ λ₂/λ₁ ≲ 0. We adopt these ranges in Sections 3 and 4.
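The model's own RG equations from Ref. [16] are not reproduced above, so the sketch below illustrates the same quasi-fixed-point behavior with the closely analogous (and standard) one-loop system for the SM top Yukawa and Higgs quartic, keeping only QCD among the gauge couplings: ratios of couplings run toward quasi-fixed values largely independently of their initial conditions at Λ.

```python
import numpy as np

# Standard one-loop beta functions for (g3, y_t, lambda), QCD only.  This
# is NOT the N_L = 5, N_R = 4 system of the paper, just a familiar example
# of the quasi-fixed-point phenomenon described in the text.
k = 1.0 / (16.0 * np.pi**2)

def betas(g3, yt, lam):
    b_g3 = -7.0 * g3**3 * k
    b_yt = yt * (4.5 * yt**2 - 8.0 * g3**2) * k
    b_lam = (24.0 * lam**2 + 12.0 * lam * yt**2 - 6.0 * yt**4) * k
    return b_g3, b_yt, b_lam

t_high, t_low, dt = np.log(1e15), np.log(173.0), -1e-3
for yt0 in (1.5, 2.5, 3.5):                    # several initial values at Lambda
    g3, yt, lam = 0.55, yt0, 2.0 * yt0**2      # compositeness-like condition lam = 2 y^2
    t = t_high
    while t > t_low:
        b1, b2, b3 = betas(g3, yt, lam)
        g3, yt, lam = g3 + b1 * dt, yt + b2 * dt, lam + b3 * dt
        t += dt
    print(f"yt(Lambda) = {yt0}: weak-scale yt = {yt:.2f}, lam/yt^2 = {lam / yt**2:.2f}")
```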
10,694.6
2014-06-25T00:00:00.000
[ "Physics" ]
A Propagator Method for Bistatic Coprime EMVS-MIMO Radar In this paper, a novel two-dimensional (2D) direction-of-departure (DOD) and 2D direction-of-arrival (DOA) estimation algorithm is proposed for a bistatic multiple-input multiple-output (MIMO) radar system equipped with coprime electromagnetic vector sensor (EMVS) arrays. Firstly, we construct the propagator to obtain the signal subspace. Then, the ambiguous angles are estimated by using the rotation invariance technique. Based on the characteristic of the coprime array, unambiguous angle estimation is achieved. Finally, all azimuth angle estimation follows via the vector cross product. Compared to the existing uniform linear array, the coprime MIMO radar occupies a larger array aperture, and the proposed algorithm does not need to obtain the signal subspace by eigendecomposition. In contrast to the state-of-the-art algorithms, the proposed algorithm shows better estimation performance and lower computational cost. The proposed algorithm's effectiveness is proved by simulation results.

Introduction

In the recent ten years, multiple-input multiple-output (MIMO) radar has aroused extensive attention. MIMO radar has many important branches, for example, detection, parameter estimation, waveform design, and synchronization. Among them, the estimation of DOD and DOA is one of the important research fields of bistatic MIMO radar. Up to now, many algorithms have been proposed to solve the above issue, for example, estimation of signal parameters via rotational invariance techniques (ESPRIT) [1], the maximum-likelihood estimator [2], the spectrum peak search method [3], tensor approaches [4,5], the sparsity-aware strategy [6], and the propagator method (PM) [7]. However, most of the existing algorithms can only solve the one-dimensional angle estimation problem. There are just a small number of algorithms focusing on 2D-DOD and 2D-DOA estimation in MIMO radar [8-10]. For the 2D estimation algorithms in [8-10], the key is to utilize the nonlinear geometry of the scalar sensor array. On the other hand, some works propose the use of electromagnetic vector sensors (EMVS) to solve the 2D-DOD and 2D-DOA estimation in bistatic MIMO radar. Different from the scalar sensor array, a single EMVS can not only perform the task of two-dimensional direction angle estimation but also provide the polarization status of the incoming signal. The EMVS was first proposed for bistatic MIMO radar in [11], where an EMVS array is mounted at the transmitter and another EMVS array at the receiver. In [12], a new method for 2D-DOD and 2D-DOA estimation utilizing transmitting and receiving EMVS arrays is proposed. The framework used the ESPRIT-based method to estimate the 2D-DOD and 2D-DOA. Then, the vector cross-product strategy is used to estimate the polarization status and azimuth angles. In [13], a method similar to PM is proposed to avoid the eigendecomposition step of [12]. However, none of the above methods can make full use of the MIMO radar's virtual aperture when the selection matrix is applied to the array measurement. In addition, the methods in [12-14] require extensive pair calculation and offer limited identifiability. To avoid the existing shortcomings of the above algorithms, a parallel factor (PARAFAC) scheme was proposed in [15]. However, low computational efficiency and sensitivity to initialization are the drawbacks of PARAFAC decomposition.
In [16], an improved PM-like method was introduced, which can realize the rotation invariance of the whole virtual steering vector. In addition, it can automatically pair the elevation angles associated with the receiving and transmitting arrays. More recently, an ESPRIT-like approach was investigated in [17], which is suitable for distributed EMVS sensors. In order to eliminate the effect of phase ambiguity, the distance between sensors is required to be less than half a wavelength, which limits the receive and transmit arrays' virtual aperture. In addition, the mutual coupling problem may occur when sensors are closely spaced [18], which may reduce the estimation performance. The coprime array has a promising prospect of settling the above problems, so it has attracted extensive attention [19-21]. Also, the EMVS with nested array geometry has been considered in [22,23]. Two sparse uniform linear arrays (ULAs), whose inter-element spacings are coprime numbers, constitute a coprime linear array. It is ambiguous to estimate the angle from the two subarrays separately, but the coprime property can determine it uniquely. Due to the increase of the array aperture, the coprime array performs better than the ULA in angle estimation. The results show that the coprime EMVS array can effectively improve the performance of parameter estimation [24]. However, there is little research on EMVS-MIMO radar's parameter estimation with coprime arrays. This paper presents a bistatic coprime EMVS-MIMO radar framework different from the ULA manifold [12-16], since both receivers and transmitters are composed of coprime EMVS arrays. On this basis, the method of using the propagator to construct the signal subspace is proposed. Then the transmit elevation and receive elevation angles are estimated and paired one-to-one, but the estimation result is ambiguous. Fortunately, the coprime characteristic allows a unique elevation to be determined. Finally, the cross-product technique is used to estimate the azimuth angles, and the results are automatically paired with the estimated elevation angles. Based on the coprime array, this method increases the aperture, so it has better estimation performance. Numerical examples show that the estimation accuracy is improved. Throughout the paper, lowercase letters represent vectors and uppercase letters denote matrices, respectively. An M × M identity matrix is represented by I_M, 0_M stands for the M × M all-zeros matrix, and 1_n represents a row vector whose n-th entry is one and the others are zero; ⊗ represents the Kronecker product and ⊙ is the Khatri-Rao product. D_n{B} denotes a diagonal matrix whose diagonal entries consist of the n-th row of B; ‖·‖_F represents the Frobenius norm; angle(·) means taking the phase; real(·) returns the real part of its argument, and ⊕ is the Hadamard product. The vector cross product between two column vectors e₁ = [e₁, e₂, e₃]^T and e₂ = [e₄, e₅, e₆]^T is defined as

e₁ × e₂ = \begin{bmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{bmatrix} e₂.

The Proposed Algorithm

2.1. Problem Formulation. As shown in Figure 1, suppose that there is a bistatic EMVS-MIMO radar system whose receive and transmit arrays are distributed in coprime geometries. In terms of the transmit array, let us assume that Subarray 1 is equipped with M_t EMVS and Subarray 2 is equipped with N_t EMVS, where M_t and N_t are coprime integers. Subarray 1's adjacent element spacing is N_tλ/2 and Subarray 2's adjacent element spacing is M_tλ/2, where λ represents the carrier wavelength.
Similar to the transmit array, two EMVS subarrays constitute the receive array, with M_r and N_r elements, respectively. What is more, it is assumed that there are K far-field targets appearing in the same range bin; the k-th target's 2D-DOA pair is (θ_r,k, ϕ_r,k) and its 2D-DOD pair is (θ_t,k, ϕ_t,k), where the elevation angles are denoted by θ_t,k and θ_r,k, and the azimuth angles are denoted by ϕ_t,k and ϕ_r,k. The matched outputs can be expressed as y(τ) = Cs(τ) + n(τ), with C ∈ C^{36MN×K} the virtual steering matrix, where τ represents the slow-time index (pulse index) and s(τ) = [s₁(τ), s₂(τ), ..., s_K(τ)]^T represents the target reflection coefficient vector; a_t,k = [e^{jN_t(M_t−1)π sin θ_t,k}, ..., e^{jN_tπ sin θ_t,k}, 1, e^{−jM_tπ sin θ_t,k}, ..., e^{−jM_t(N_t−1)π sin θ_t,k}]^T is the k-th transmit steering vector, and a_r,k = [e^{jN_r(M_r−1)π sin θ_r,k}, ..., e^{jN_rπ sin θ_r,k}, 1, e^{−jM_rπ sin θ_r,k}, ..., e^{−jM_r(N_r−1)π sin θ_r,k}]^T is the k-th receive steering vector; b_t,k ∈ C^{6×1} represents the k-th polarization response vector of the transmit array, and b_r,k ∈ C^{6×1} represents the k-th polarization response vector of the receive array; B_t and B_r stand for the associated polarization response matrices, respectively. The Gaussian noise vector with zero mean and variance σ² is represented by n(τ). Moreover, γ_t,k and γ_r,k represent the k-th transmit and receive auxiliary polarization angles, respectively, and the corresponding polarization phase differences are indicated by η_t,k and η_r,k, respectively. Let b_tp,k and b_rq,k be the p-th and q-th entries of b_t,k and b_r,k, respectively. It should be emphasized that b_t1,k and b_r1,k represent the electric field information vectors, while b_t2,k and b_r2,k account for the magnetic field information vectors. When the targets are uncorrelated, y(τ)'s covariance matrix is R = CR_sC^H + σ²I, where R_s = diag{r₁, ..., r_K} is s(τ)'s covariance matrix and r_k represents the power of s_k(τ). In practical application, the estimate of R can be obtained via L available snapshots, that is, R̂ = (1/L) Σ_{τ=1}^{L} y(τ)y^H(τ). Herein, estimating the 2D-DOD and 2D-DOA from R̂ is our goal.

Propagator. Let the first K rows of C be C₁ ∈ C^{K×K}, and let the rest be C₂ ∈ C^{(36MN−K)×K}. Since C₁ is a nonsingular matrix, a unique linear transformation of C₁ yields C₂, that is, C₂ = P_cC₁, where P_c ∈ C^{(36MN−K)×K} represents the propagator. Denote the noiseless part of R as R̄ and partition its columns as R̄ = [G, H], where G ∈ C^{36MN×K} and H ∈ C^{36MN×(36MN−K)}. According to the relationship in (10), we have H = GP_c^H. Consequently, P_c can be estimated by fitting Ĥ ≈ ĜP̂_c^H, where Ĝ is the estimate of G and Ĥ is the estimate of H. According to equation (15), we can obtain P_c's least-squares solution P̂_c = [(Ĝ^HĜ)^{−1}Ĝ^HĤ]^H. Thereafter, we can obtain the estimate of P via P̂ = [I_K, P̂_c^H]^H. Based on equation (11), there is a full-rank matrix T such that P = CT. Obviously, P plays a role similar to the signal subspace obtained from eigendecomposition. Let ψ_t1 = (J_N2P)^†J_N1P and ψ_t2 = (J_N4P)^†J_N3P. Obviously, Θ_t1 and Θ_t2 consist of the eigenvalues of ψ_t1 and ψ_t2, respectively. Moreover, T's k-th column is an eigenvector corresponding to the k-th eigenvalue of both ψ_t1 and ψ_t2. Next, we perform the eigendecomposition of ψ_t1 so that we get the eigenvector matrix T as well as the associated eigenvalue matrix Θ_t1 = diag{λ_t1,1, λ_t1,2, ..., λ_t1,K}, that is, ψ_t1 = TΘ_t1T^{−1}. Multiplying ψ_t2 by T^{−1} on the left and by T on the right yields the estimate of Θ_t2, which is denoted by Θ̂_t2.
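Before the eigenvalues are mapped to angles, the propagator construction itself can be checked numerically. A minimal sketch with stand-in dimensions (D plays the role of 36MN, and C is a generic full-column-rank matrix rather than the actual EMVS-MIMO steering matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 12, 3                                   # D stands in for 36MN
C = rng.standard_normal((D, K)) + 1j * rng.standard_normal((D, K))
Rs = np.diag(rng.uniform(1.0, 2.0, K))         # uncorrelated target powers
R_bar = C @ Rs @ C.conj().T                    # noiseless covariance

G, H = R_bar[:, :K], R_bar[:, K:]              # column partition of R_bar
PcH = np.linalg.solve(G.conj().T @ G, G.conj().T @ H)   # least squares, Eq. (16)
P = np.vstack([np.eye(K), PcH.conj().T])       # P = [I_K; P_c]

# P should equal C C1^{-1}, i.e. span the same subspace as C (P = C T)
T = np.linalg.inv(C[:K, :])
print("||C T - P|| =", np.linalg.norm(C @ T - P))   # ~ machine precision
```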
Let the k-th diagonal entry of Θ̂_t2 be λ_t2,k; then the transmit elevation angle can be obtained from Subarray 1 and Subarray 2 as θ^(est)_t1,k = arcsin[angle(λ_t1,k)/(N_tπ)] (24a) and θ^(est)_t2,k = arcsin[angle(λ_t2,k)/(M_tπ)] (24b). Due to the fact that the mapping function angle(·) is wrapped within the range [−π, π], while N_tπ sin θ_t,k and M_tπ sin θ_t,k lie within [−N_tπ, N_tπ] and [−M_tπ, M_tπ], the transmit elevation angles estimated from (24a) and (24b) are ambiguous. Similarly, defining ψ_r1 and ψ_r2 from the receive selection matrices, Θ_r1 and Θ_r2 consist of their eigenvalues. Since the eigenvector matrix T has been estimated previously, we can get the estimates of Θ_r1 and Θ_r2 (denoted by Θ̂_r1 and Θ̂_r2, respectively) by the same left and right multiplication operations on ψ_r1 and ψ_r2, respectively. Let λ_r1,k and λ_r2,k be the k-th diagonal elements of Θ̂_r1 and Θ̂_r2, respectively. The receive elevation angles can be estimated via (28a) and (28b) in the same way. As with the transmit elevation angles, due to the coprime characteristic of the two subarrays, the receive elevation angles estimated from (28a) and (28b) are ambiguous.

Unique Elevation Angle Determination. It is worth noting that θ^(est)_t1,k and θ^(est)_r1,k are obtained via the rotational invariance properties of Subarray 1, while θ^(est)_t2,k and θ^(est)_r2,k are achieved via Subarray 2's rotational invariance characteristics. Because M_t, N_t and M_r, N_r are coprime pairs, θ^(est)_t1,k, θ^(est)_r1,k and θ^(est)_t2,k, θ^(est)_r2,k can be uniquely disambiguated even though they are individually ambiguous. Since transmit Subarray 1's inter-element spacing is N_tλ/2, there are N_t possible answers, which include the one from (24a). The connection between the n_t-th (n_t = 1, 2, ..., N_t) possible solution θ^(n_t)_t1,k and the estimate θ^(est)_t1,k can be expressed as sin θ^(n_t)_t1,k = sin θ^(est)_t1,k + 2n_t/N_t, wrapped into [−1, 1] (29). For transmit Subarray 2, the k-th estimated transmit elevation angle has M_t possible solutions, including the one derived from (24b); the m_t-th (m_t = 1, 2, ..., M_t) possible solution θ^(m_t)_t2,k is obtained analogously (30). Theoretically, the unique solution can be determined by finding the coincident solutions of (29) and (30). The results of (29) and (30) are displayed in Figure 2, where θ = 30°, M_t = 3, and N_t = 4 are considered. In practice, the recovered angles may be closely distributed rather than exactly coincident. Therefore, the elevation can be estimated by the average of the closest solutions of sin θ^(n_t)_t1,k and sin θ^(m_t)_t2,k, where θ̂_t1,k and θ̂_t2,k denote the two nearest solutions' associated angles. Similar to the transmit side, the unique receive elevation angle estimate θ̂_r,k can be obtained.

Azimuth Angle Estimation. To estimate the azimuth angles, we need to estimate B_t and B_r first. According to (17), we can estimate C via Ĉ = P̂T̂^{−1}. Define J₁ = 1_p ⊗ I_{6N} ∈ C^{6N×36MN}, p ∈ {1, 2, ..., 6M}. Once the elevation angles have been estimated, the direction matrices A_t and A_r can be reconstructed (denoted as Â_t and Â_r, respectively). Then B_r can be estimated in the least-squares sense from C_r = J₁C = J₁PT^{−1}. Thereafter, we can compute f_r,k, the vector cross product of b̂_r1,k and the conjugate of b̂_r2,k, where b̂_r1,k represents a vector made up of the first three entries of B̂_r's k-th column and b̂_r2,k is a vector consisting of the last three entries of the k-th column of B̂_r, respectively. Finally, we can estimate ϕ_r,k by ϕ̂_r,k = arctan[f_r,k(2)/f_r,k(1)], where f_r,k(1) is f_r,k's first entry and f_r,k(2) represents the second entry of f_r,k. Similarly, we can define J₂ = I_{6M} ⊗ 1_q ∈ C^{6M×36MN}, q ∈ {1, 2, ..., 6N}, and construct C_t = J₂P. According to (35)-(37), the transmit azimuth angle ϕ_t,k can be obtained.
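The coprime disambiguation step above can be reproduced directly in the Figure 2 setting (θ = 30°, M_t = 3, N_t = 4); the candidate grids follow the wrapped-phase relations of Eqs. (29)-(30).

```python
import numpy as np

theta_true = np.deg2rad(30.0)
Mt, Nt = 3, 4

def wrapped_estimate(spacing_units, theta):
    """Phase of the rotation-invariance eigenvalue, wrapped into (-pi, pi]."""
    phase = np.angle(np.exp(1j * spacing_units * np.pi * np.sin(theta)))
    return phase / (spacing_units * np.pi)

def candidates(sin_est, n_grid):
    """All sin(theta) values consistent with the wrapped estimate."""
    cands = sin_est + 2.0 * np.arange(-n_grid, n_grid + 1) / n_grid
    return cands[np.abs(cands) <= 1.0]

c1 = candidates(wrapped_estimate(Nt, theta_true), Nt)   # from Subarray 1
c2 = candidates(wrapped_estimate(Mt, theta_true), Mt)   # from Subarray 2

# Pick the closest pair between the two candidate sets and average them
i, j = np.unravel_index(np.argmin(np.abs(c1[:, None] - c2[None, :])),
                        (len(c1), len(c2)))
theta_hat = np.degrees(np.arcsin(0.5 * (c1[i] + c2[j])))
print(f"recovered elevation: {theta_hat:.2f} deg")      # -> 30.00
```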
Since the azimuth angle is in one-to-one correspondence with the elevation angle and the perturbation has been compensated, the estimated angles can be matched automatically.

2.6. 2D-TPA and 2D-RPA Estimation. As long as we complete the 2D-DOD and 2D-DOA estimates, we can get the estimates of D_tk and D_rk, represented by D̂_tk and D̂_rk, respectively. Based on equations (3a) and (3b), we can get the estimates of v_tk and v_rk, and the 2D-TPA and 2D-RPA estimation is then finished. Because the premise of estimating the 2D-TPA and 2D-RPA is to get the 2D-DOD and 2D-DOA estimates, the angles θ_t,k, ϕ_t,k, γ_t,k, η_t,k and θ_r,k, ϕ_r,k, γ_r,k, η_r,k are automatically paired. In order to understand the proposed algorithm more easily, the steps of the estimator are as follows:

Step 1: obtain the estimate of the covariance R̂;
Step 2: calculate the propagator P_c by (16) and construct the matrix P;
Step 3: build the selection matrices J_t,n and J_r,n (n = 1, 2, 3, 4), and get ψ_t via (21a), (21b) and (22a), (22b);
Step 4: obtain T and the corresponding eigenvalues by the eigendecomposition of ψ_t, and get θ^(est)_t1/2,k via (24a) and (24b);
Step 5: obtain ψ_r via (26a) and (26b), and get the ambiguous receive elevation angles through (28a) and (28b);
Step 6: recover all the candidate estimates based on (29) and (30), and determine the unique elevations;
Step 7: construct J₁, J₂, Â_r, and Â_t, and then calculate C_t and C_r; thereafter, recover B̂_r and B̂_t, and finally obtain the azimuth angles via (36) and (37).

Complexity. For the proposed algorithm, computing P_c contributes the major complexity. Since the proposed algorithm pairs the angle estimates automatically, the final complexity is 72MNK². Next, the complexities of the PM [16], PARAFAC [15], and ESPRIT [12] algorithms are listed in Table 1 to facilitate comparison with the proposed algorithm. As can be seen from Table 1, compared with the ESPRIT, PM, and PARAFAC algorithms, the proposed algorithm is more efficient.

Simulation Results

We use the Monte Carlo method to verify that the proposed method is effective in this section. In the simulation, we equip the bistatic coprime EMVS-MIMO radar with M transmit sensors and N receive sensors, respectively. The receive array and transmit array are both distributed in coprime geometries, and the subarrays' sensor numbers are M_t, N_t (M = M_t + N_t − 1) and M_r, N_r (N = M_r + N_r − 1), respectively. Assume that there are K = 3 targets with θ_t = (40°, 20°, 30°), ϕ_t = (15°, 25°, 35°), γ_t = (10°, 22°, 35°), η_t = (36°, 48°, 56°), θ_r = (24°, 38°, 16°), ϕ_r = (21°, 32°, 55°), and γ_r = (42°, 33°, 60°) with η_r = (17°, 27°, 39°), respectively. In the simulation, we set p and q to be 1 and 2, respectively. The reflection coefficients are modeled as a complex Gaussian process, and there are L snapshots. Each simulation curve is based on 200 independent trials. At the same time, the root mean square error (RMSE) and average running time (ART) of the elevation estimation are used to evaluate the performance. Herein, the RMSE is defined as RMSE = sqrt[(1/(200K)) Σ_{m=1}^{200} Σ_{k=1}^{K} (θ̂_t/r,k(m) − θ_t/r,k)²], where θ̂_t/r,k(m) (transmit or receive) is the estimate of θ_t/r,k in the m-th trial. For the purpose of comparison, the performances of PM [16], PARAFAC [15], and ESPRIT [12] (all based on ULA geometry with N receivers and M transmitters) as well as the CRB are added.
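The cross-product step of the azimuth estimation can be illustrated for the first target of the simulation setting (θ_r = 24°, ϕ_r = 21°, γ_r = 42°, η_r = 17°). The 6 × 1 response below uses one common EMVS convention, which may differ in detail from the paper's exact definition of b_r,k, and we assume the direction-cosine ordering u = [sin θ cos ϕ, sin θ sin ϕ, cos θ].

```python
import numpy as np

def emvs_response(theta, phi, gamma, eta):
    """6x1 EMVS polarization response (one common convention)."""
    V = np.array([[np.cos(theta) * np.cos(phi), -np.sin(phi)],
                  [np.cos(theta) * np.sin(phi),  np.cos(phi)],
                  [-np.sin(theta),               0.0        ],
                  [-np.sin(phi),                -np.cos(theta) * np.cos(phi)],
                  [ np.cos(phi),                -np.cos(theta) * np.sin(phi)],
                  [ 0.0,                         np.sin(theta)]])
    g = np.array([np.sin(gamma) * np.exp(1j * eta), np.cos(gamma)])
    return V @ g

b = emvs_response(np.deg2rad(24.0), np.deg2rad(21.0),
                  np.deg2rad(42.0), np.deg2rad(17.0))
f = np.cross(b[:3], np.conj(b[3:]))            # e x h*, the Poynting direction
f = np.real(f) / np.linalg.norm(np.real(f))
phi_hat = np.degrees(np.arctan2(f[1], f[0]))   # arctan[f(2)/f(1)]
theta_hat = np.degrees(np.arccos(f[2]))
print(f"azimuth ~ {phi_hat:.1f} deg, elevation ~ {theta_hat:.1f} deg")  # 21.0, 24.0
```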
The RMSE performance of the different methods versus the Signal-to-Noise Ratio (SNR) is presented in Figure 3, where we assume M_t = 3, N_t = 4, M_r = 4, N_r = 5, and L = 200. It is shown that all algorithms achieve lower RMSE with higher SNR. When the SNR is lower than −4 dB, the proposed algorithm's performance is worse than those of the other algorithms, because it is impossible to accurately estimate the propagator in the low SNR region. However, the estimation result of the proposed algorithm is better than those of the other algorithms when the SNR is larger than −4 dB, which implies that the coprime manifold is helpful for improving the estimation accuracy. Figure 4 depicts the ART performance comparison versus SNR. The results show that the proposed algorithm requires less running time than the ESPRIT algorithm in [12], which benefits from the fact that the proposed algorithm avoids the eigendecomposition of the high-dimensional covariance matrix. Figure 5 shows the RMSE of the polarization angle estimation versus SNR, from which we can see that, with the increase of SNR, the RMSE of the polarization angle estimation decreases. Figure 6 shows the different algorithms' estimation performance versus the number of snapshots, where L varies from 50 to 1000 with an interval of 50. Meanwhile, we suppose that M_t = 3, N_t = 4 and M_r = 4, N_r = 5 with SNR = 10 dB. The angle estimation performance of all algorithms improves with increasing L. Notably, the proposed algorithm performs better than ESPRIT, PM, and PARAFAC. Figure 7 illustrates the ART performance. The result shows that the proposed algorithm's ART is, as a whole, much smaller than that of ESPRIT. Besides, the ART performance of the proposed algorithm is quite close to those of PARAFAC and PM. Moreover, with an increased number of snapshots L, the ART curve of the proposed algorithm behaves basically the same as that of PM. The RMSE performance of the polarization angle estimation versus L is shown in Figure 8, and it can be clearly seen that the higher L is, the lower the RMSE is. Figure 9 presents the RMSE performance versus N, where SNR = 10 dB, L = 200, M_t = 3, N_t = 4, and M_r = 1, respectively. N_r is set to vary from 3 to 27 with an interval of 4, which ensures that M_r and N_r are coprime numbers. As expected, all algorithms provide better RMSE when N increases. Interestingly, when N > 7, the proposed algorithm provides obviously better RMSE performance compared to PM, ESPRIT, and PARAFAC in angle estimation. However, the RMSE performance of the proposed algorithm is worse than that of the remaining three when N < 5, owing to the fact that the propagator cannot be accurately estimated with a smaller number of sensors. Figure 10 gives the comparison results of ART for different N. Obviously, the number N has a significant impact on the ART performance. The ART required by all algorithms increases significantly as N grows. In addition, the ART required by the proposed algorithm is much smaller compared with those of ESPRIT, PM, and PARAFAC. The RMSE performance of the polarization angle estimation is shown in Figure 11. It can be seen that the RMSE decreases with the increase of N.

Conclusion

This paper proposes a new angle estimation algorithm for bistatic coprime MIMO radar. The core of the algorithm is to complete the 2D-DOA and 2D-DOD estimation by constructing the propagator.
Because it does not involve high-dimensional eigendecomposition and uses a coprime geometry with a larger aperture, it is shown to be more computationally efficient than the state-of-the-art subspace algorithms. Compared with the existing algorithms, the proposed algorithm can achieve better estimation performance due to the larger virtual aperture. The effectiveness and the improvement of the proposed algorithm are verified by numerical simulation.

Data Availability

No data were used in this paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
5,457.2
2021-05-19T00:00:00.000
[ "Engineering", "Physics" ]
LOCAL REGULARITY OF THE MAGNETOHYDRODYNAMICS EQUATIONS NEAR THE CURVED BOUNDARY. We study a local regularity condition for suitable weak solutions of the magnetohydrodynamics equations near a curved boundary.

1. Introduction. We study the local regularity problem for suitable weak solutions (u, b, π) : Q_T → R³ × R³ × R to the magnetohydrodynamics equations (MHD) in dimension three,

∂_t u − ∆u + (u·∇)u − (b·∇)b + ∇π = 0, ∂_t b − ∆b + (u·∇)b − (b·∇)u = 0, div u = div b = 0, (1)

where Ω is a bounded domain in R³. Here u is the flow velocity vector, b is the magnetic vector and π = p + |b|²/2 is the magnetic pressure. The boundary conditions of u and b are given as no-slip and slip conditions, respectively, namely u = 0, b·ν = 0 and (∇ × b) × ν = 0 on ∂Ω, (2) where ν is the outward unit normal vector along the boundary ∂Ω. The MHD equations describe the macroscopic dynamics of the interaction of moving highly conducting fluids with electromagnetic fields, such as plasmas, liquid metals, and two-phase mixtures (see e.g. [2]). Concerning the existence of weak solutions of the MHD equations, it is well known that they exist globally in time. Moreover, in the two-dimensional case, they become regular [3]. On the other hand, the existence of weak solutions of the MHD equations with boundary condition (2) in dimension three is proved in [8], and it is shown in [10] that weak solutions become regular under certain conditions. However, the regularity question in dimension three remains open, as for the Navier-Stokes equations. We briefly list known results for the MHD equations (1) relevant to the regularity criteria in terms of scale-invariant quantities. Numerous regularity criteria for suitable weak solutions have been studied in terms of scaled norms. In particular, in [6], a regularity criterion for the velocity vector in a half space was proved. Other types of conditions in terms of scale-invariant norms near the boundary are also found in [15] and [16] (compare to [4,14,5] and [18] for the interior cases). We also refer to the papers [11,12,1] and [9] for the Navier-Stokes equations. This paper establishes a regularity criterion for domains near a curved boundary (cf. [7] for the Navier-Stokes equations). To be more precise, our main result is that Hölder continuity of a suitable weak solution u is ensured near a sufficiently regular curved boundary provided that the scaled mixed L^{p,q}-norm of the velocity field u is small (see Theorem 1.1 for the details). For notational convenience, for a point z = (x, t) we write Q_{z,r} = Ω_{x,r} × (t − r², t). For x ∈ Ω̄, we use the notation Ω_{x,r} = Ω ∩ B_{x,r} for some r > 0. If x = 0, we drop x in the above notations; for instance, Ω_{0,r} is abbreviated to Ω_r. A solution u and b of the magnetohydrodynamics equations (1) is said to be regular at z = (x, t) if u and b are Hölder continuous in Q_{z,r} for some r > 0. In such a case, z is called a regular point. Otherwise we say that u is singular at z and z is a singular point. Next, we give the assumption and a remark on the boundary of Ω (see [7], [17]).

Assumption 1. Suppose that Ω has a boundary of class C² ∩ W^{3,∞} (that is, its second derivatives are Lipschitz continuous) such that the following is satisfied: for each point x = (x′, x₃) ∈ ∂Ω there exist absolute positive constants L, µ and r₀, independent of x, such that we can find a Cartesian coordinate system {y_i}³_{i=1} with the origin at x and a function ϕ : D_{r₀} → R describing ∂Ω locally as a graph.

Remark 1. The main point of Assumption 1 is the uniform estimate of the C²-norms of the functions ϕ for each x ∈ ∂Ω.
More precisely, there exists a sufficiently small r₁ with r₁ < r₀, where r₀ is the number in Assumption 1, such that for any r < r₁ the sup-norm bound (4) below holds. This can be easily shown by the Taylor formula. Now we are ready to state the main part of our results, which is a local regularity criterion for suitable weak solutions of the MHD equations.

Theorem 1.1. Let (u, b, π) be a suitable weak solution of the MHD equations (1) according to Definition 2.1. Suppose that for one pair p, q satisfying 3/p + 2/q ≤ 2, 2 < q ≤ ∞ and (p, q) ≠ (3/2, ∞), there exists ε > 0, depending only on p and q, such that for some point z = (x, t) ∈ ∂Ω × (0, T) the velocity u is locally in L^{p,q}_{x,t} near z and the lim sup as r → 0 of the scaled L^{p,q}_{x,t}-norm of u over Q_{z,r} is less than ε. Then u and b are regular at z.

This paper is organized as follows. In Section 2 we introduce the definition of suitable weak solutions and give some known results needed for the proofs. In Section 3 we present the proof of Theorem 1.1.

2. Preliminaries. In this section we introduce some scaling invariant functionals and the notion of suitable weak solutions. We first start with some notations. Let Ω be a bounded domain in R³. For 1 ≤ q ≤ ∞, we denote the usual Sobolev spaces by W^{k,q}(Ω) = {u ∈ L^q(Ω) : D^α u ∈ L^q(Ω), 0 ≤ |α| ≤ k}. As usual, W^{k,q}_0(Ω) is the completion of C^∞_0(Ω) in the W^{k,q}(Ω) norm. We also denote by W^{−k,q′}(Ω) the dual space of W^{k,q}_0(Ω), where q and q′ are Hölder conjugates. We write the average of f on E as (f)_E. We denote by C = C(α, β, ...) a generic constant, which may change from line to line. As defined earlier, we also denote Ω_r = Ω ∩ B_r and Q_r = Ω_r × (−r², 0). Let r₀ and r₁ be the numbers in Assumption 1 and Remark 1, respectively. For any r < r₁, we introduce the scaling invariant quantities, where κ, κ* and λ are numbers satisfying (6). Next we recall suitable weak solutions of the magnetohydrodynamics equations (1) in three dimensions. Let Ω ⊂ R³ be a bounded domain satisfying Assumption 1 and I = [0, T). We denote Q_T = Ω × I. A triple (u, b, π) is a suitable weak solution to (1) if the following conditions are satisfied: (a) the functions u, b : Q_T → R³ and π : Q_T → R belong to the natural energy and integrability classes, where κ, κ* and λ are numbers satisfying (6); (b) (u, b, π) solves the MHD equations in Q_T in the sense of distributions, and u and b satisfy the boundary conditions (2) in the sense of traces; (c) u, b and π satisfy the local energy inequality for all t ∈ I = (0, T) and for all nonnegative functions φ ∈ C^∞_0(R³ × R). Let x₀ ∈ ∂Ω. Under Assumption 1, we can represent Ω_{x₀,r₀} = Ω ∩ B_{x₀,r₀} = {y = (y′, y₃) ∈ B_{x₀,r₀} : y₃ > ϕ(y′)}, where ϕ is the C² function in Assumption 1. Flattening the boundary near x₀, we introduce new coordinates x = ψ(y). Then, using the change of variables (8), the equations (1) result in the transformed equations (9) for v, h and π̃, in which ∇̂ and ∆̂ are differential operators with variable coefficients a_ij and b_i determined by ψ. As mentioned in Remark 1, if we take a sufficiently small r₁ with r₁ < r₀, then (4) holds for any r < r₁. In addition, the estimates (11)-(12) are satisfied for sufficiently small r < r₀. From now on, we fix x₀ = 0 without loss of generality. We suppose that, as above, ψ is a coordinate transformation so that (v, h, π̃) satisfies (9) in ψ(Ω_{r₀}).
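Since the explicit formula for ψ did not survive extraction, the sketch below assumes the standard flattening x′ = y′, x₃ = y₃ − ϕ(y′) and computes the resulting variable coefficients symbolically; here a_ij = Σ_k (∂ψ_i/∂y_k)(∂ψ_j/∂y_k) and b_i = ∆ψ_i are the coefficients of the operator ∆̂ in (10).

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
phi = sp.Function('phi')(y1, y2)

psi = sp.Matrix([y1, y2, y3 - phi])           # assumed flattening map x = psi(y)
J = psi.jacobian(sp.Matrix([y1, y2, y3]))     # Jacobian dpsi_i/dy_k

A = sp.simplify(J * J.T)                      # a_ij: identity plus grad(phi) terms
b = sp.Matrix([sum(sp.diff(psi[i], v, 2) for v in (y1, y2, y3)) for i in range(3)])
print(A)          # [[1, 0, -phi_1], [0, 1, -phi_2], [-phi_1, -phi_2, 1 + |grad phi|^2]]
print(b.T)        # b = (0, 0, -Delta' phi): lower-order coefficients
```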
Remark 2. Due to the suitability of u, b, π (see Definition 2.1), (v, h, π̃) solves (9) in a weak sense and satisfies the following local energy inequality: there exists r₂ with r₂ < r₀, where r₀ is the number in Assumption 1, such that the inequality holds for every η ∈ C^∞_0(B_r) with r < r₂ and η ≥ 0, where ∇̂ and ∆̂ are the differential operators in (10). The next lemma shows relations between the scaling invariant quantities above (see [7]).

Lemma 2.2. Let Ω be a bounded domain satisfying Assumption 1 and x₀ ∈ ∂Ω. Suppose that (u, b, π) and (v, h, π̃) are suitable weak solutions of (1) in Ω × I and of (9) in ψ(Ω_{x₀}) × I, respectively, where ψ is the mapping flattening the boundary in Assumption 1. Let x̃ = ψ(x₀). Then there exist a sufficiently small r₁ and an absolute constant C such that for any 4r < r₁ the comparison estimates between the scaled quantities hold. Lemma 2.2 also holds for the quantities Ê_h(2r), Â_h(2r), M̂_v(2r) and K̂_v(2r).

3. Proof of Theorem. In this section, we present the proof of Theorem 1.1. We first show an ε-regularity criterion for suitable weak solutions of the MHD equations (1) near the boundary. Next we prove a local regularity integration condition for the velocity vector u near the boundary. For simplicity, we write Ψ(r) := A_v(r) + A_h(r) + E_v(r) + E_h(r). Let z = (x, t) ∈ Γ × I; from now on, without loss of generality, we assume x = 0 by translation. We first recall the local energy estimate. Next we prove a local ε-regularity condition near the boundary for the MHD equations. Its proof is similar to the proof of [6, Proposition 3.1], and so we omit it.

Proposition 1. There exist ε* > 0 and r₀ > 0 such that if (u, b, π) is a suitable weak solution of the MHD equations satisfying Definition 2.1, z = (x, t) ∈ Γ × I, and Ψ(r) + Q(r) ≤ ε* for some 0 < r < r₀, then z is a regular point.

The proof of Proposition 1 is based on the following Lemma 3.1, which shows a decay estimate for (u, b, π). We estimate the scaled norms for suitable weak solutions. Next, we continue with the scaled L^{2,2}_{x,t}(Q⁺_{z₀,r})-norm estimate of b.

Lemma 3.3. Let z = (x, t) ∈ Γ × I. Suppose that u ∈ L^{p,q}_{x,t}(Q⁺_{z,r}) with 3/p + 2/q = 2 and 3/2 ≤ p < 3. Then the corresponding decay estimate holds for 0 < r < ρ/4.

Proof. Although the process of the proof is similar to that of [6, Lemma 3.3], we give the details for the convenience of the reader. For convenience, we write x = (x₁, x₂, x₃) = (x′, x₃), and by translation we assume, without loss of generality, that z = (0, 0) ∈ Γ × I. Let ζ(x, t) be a standard cut-off function supported in Q_ρ such that ζ(x, t) = 1 in Q_{ρ/2}. We set g(x, t) on Q⁺_{z,ρ}, and we then define g̃(x, t), an extension of g from Q⁺_ρ onto Q_ρ. This can be done by extending the tangential components of v and h as even functions and the normal components of v and h as odd functions with respect to the x₃-variable, respectively. We denote such extensions by ṽ and h̃ for simplicity. Here we also use the fact that ζ and ∇′ζ are even and ∂_{x₃}ζ is odd with respect to the x₃-variable, where ∇′ = (∂_{x₁}, ∂_{x₂}). In the next lemma we show an estimate of the gradient of the pressure. We remark that, via Young's inequality, (22) can be estimated further. The next lemma shows an estimate of a scaled norm of the pressure. We are now ready to present the proof of Theorem 1.1. Therefore, we obtain Ψ(r) + Q(r) ≤ ε*/2 for all r < r₁, which implies the regularity condition in Proposition 1. This completes the proof.
2,884
2016-01-01T00:00:00.000
[ "Mathematics" ]
Readymade Solutions and Students' Appetite for Plagiarism as Challenges for Online Learning In the context of the COVID-19 pandemic, the importance of online learning has increased. Inherently, the stakes of a sustainable approach to the challenges raised by the wide access to the Internet, the use of readymade solutions to meet didactical tasks, and students' appetite for plagiarism have become higher. These challenges can be sustainably managed via a procedure aimed at constructively converting students' appetite for plagiarism (SAP conversion) into a skill of critically approaching relevant materials that are available online. The solutions proposed by the specialized literature concerned with the problem of plagiarism can be grouped into five categories: better trained students, more involved teachers, the use of anti-plagiarism software, clear anti-plagiarism policies, and ethical education of the youths. The SAP conversion procedure is a solution targeting increased involvement on behalf of teachers. Its partial application in the case of the disciplines included in the undergraduate educational program of Sociology conducted by the Transylvania University of Brașov, where students' evaluation is based on essays, has considerably decreased the amount of student plagiarism.

Introduction

Learning contributes to knowledge perpetuation, and the latter supports community sustainable development by making previous experiences resulting from interaction with the world useful. At the same time, learning is a self-accomplishment tool and, as such, a goal of sustainable development. The future of communities and their members greatly depends on the way the learning processes foreshadow them in the present. There is always a stake in the changes performed in the educational field. Consequently, they must be assessed based on their outcomes at the generational level. The Internet has recently acquired a widely acknowledged didactical utility [1-3], given its rather short history. The COVID-19 pandemic has highlighted the true dimensions and importance of the Internet in conducting educational processes. In many situations, access to resources that were available online was synonymous with access to training. The didactical role of the Internet will become consolidated in the period to come. Once acknowledged, the online component of the educational system is set to gain increased significance. In such a context, examining the prospective threats accompanying that trend and searching for solutions to efficiently manage those threats is useful. The Internet is not just a source of benefits in educational contexts. It favors, for example, the reliance on handy approaches incurring decreased intellectual involvement in accomplishing didactical tasks [4,5]. That is visible both when the solutions available online are incorporated as such in meeting didactical requirements and in the case of students' plagiarism. Some of the didactical requests formulated as part of the educational process, especially in the case of socio-humanities, concern the elaboration of essays and reports, tasks that students frequently approach by resorting to Internet resources. We grouped the solutions by taxonomies, and we present them considering that they are useful for adopting various local and context-based strategies to solve/control the problem of student plagiarism. Changing the perspective on the evaluation of students' educational performance depends on the wider educational context and educational policies.
Such a change should consolidate students' self-confidence, creativity, and responsibility and hence would trigger sustainable consequences. A change in the perspective taken on education is a lengthy process. Until the change of perspective occurs, we deem that an intervention meant to correct and recalibrate didactical requests in order to render students' plagiarism pointless is efficient. The recalibration we explicitly propose as a means to approach the problem of plagiarism implicitly diminishes the effects of acknowledging the sources of the readymade solutions. It forces students to use their minds in a creative and independent manner, thus making them become co-participants in the didactical process in a proactive manner. Its significant advantage lies in its capitalization on the skills already developed by students, as well as on the generous knowledge resources made available by the Internet. An additional advantage of recalibrating the didactical requests is its efficiency, since it simultaneously counters students' tendency to plagiarize and the practice of using readymade solutions as already presented above. In this respect, we designed a procedure consisting of constructively converting students' appetite for plagiarism into a skill of critically approaching the relevant materials that are available online. We called it the SAP conversion. We will present the items characteristic of this procedure along with some of the results generated by its partial use as part of the didactical activities we ran with our students below.

Materials and Methods

We conducted comprehensive bibliographic research on the approaches of various studies and academic articles published over the past two decades on students' plagiarism. We thematically analyzed 60 papers on plagiarism published between 2000 and 2019 and indexed by the most important databases that can be accessed via the ANELIS Plus program, selected as the most relevant. As a result of our analysis, we identified the following themes: a phenomenon on the increase, attempts towards taxonomies, the relative nature of approaches to students' plagiarism, causes and motivations of university students' plagiarism, and solutions identified with regard to students' plagiarism. We will outline the contents related to the last topic in this article. The solutions proposed by the authors we consulted in this matter were grouped based on the taxonomy of the causes of plagiarism proposed by Sorea and Repanovici [18]. Thus, we grouped the identified solutions into five categories: better trained students, more involved teachers, the use of anti-plagiarism software, clear anti-plagiarism policies, and ethical education of the youths. We highlighted what we deem to be the most efficient approach to students' plagiarism based on our own experiences with the issue, as well as in accordance with the solutions proposed by other academics as related to their own exposure to the phenomenon, and based on them we designed the SAP conversion procedure. Over the last five years, we have monitored the behavior of students attending the undergraduate study program in Sociology organized by the Transylvania University of Brașov (Romania) for the disciplines we coordinate that are evaluated (both during and at the end of the semester) by means of essays. The number of students evaluated per academic year is about 250 (out of a total of 1293 students). We have been monitoring the disciplines included in the curricula for the second and third years of study.
We considered students' essays as social documents. During our evaluations, we analyzed their content in terms of the presence and ratio of fragments of plagiarized text. We labeled as plagiarism, and counted as such, the essays in which at least one paragraph was reproduced without acknowledging its source. We only took into account text-based plagiarism or patchwriting. The sources of plagiarism were identified by using Turnitin and/or Google. In order to identify the cases of patchwriting, the search for plagiarized sources was preceded by a partial reconstruction of the modified text. In most cases, the reconstruction involved replacing some terms with the synonyms that best fit the context. We tracked the evolution of the number of plagiarized essays during successive examination periods organized for the second and third years of undergraduate studies, namely for the period when we used items pertaining to the SAP conversion procedure in conducting didactical activities. Results The increased interest in plagiarism is reflected by the growing number of papers on the topic. Thus, a simple search of the term plagiarism as the keyword in ScienceDirect [19] shows an increase in the number of papers published annually from 57 in 2000 to 476 in 2020. A similar search in SpringerLink [17] shows 101 papers published in 2000 and 1069 works published in 2020. The growth rates are graphically presented in Figure 1. The search in SpringerLink also reveals an increase in the total number of entries from 1479 in 2000 to 10,788 in 2020 (and already 10,994 by January 2021). A graphic representation of this increase is given in Figure 2. Identified Solutions to the Problem of Students' Plagiarism Sorea and Repanovici [18] classify the causes of students' plagiarism into five categories, each with its own sub-categories: students, academics, the Internet, the institutional environment, and the educational framework. We have grouped the solutions to students' plagiarism into the same categories. Better Trained Students As far as students are concerned, most of the solutions target the improvement of their education in relation to the rules of academic writing and plagiarism. It is the students themselves who indicate this need [13]. Ramzan et al. [20] and Stabingis, Šarlauskienė, and Čepaitienė [8] highlight that students want theoretical and practical knowledge about the use of sources, and Löfström and Kupila [1] show that students view full access to plagiarism reports as more useful than the presentation of results (that is, similarity percentages). Students must be helped to learn how to write in an academic manner. As a number of authors indicate, improving training [21][22][23] and increasing students' awareness of plagiarism [12][13][14] would reduce the frequency of the phenomenon. The results of several anti-plagiarism didactic initiatives have been reported. Stetter [24] underlines the satisfaction of the participants in a web module on plagiarism and paraphrasing with the increase of their knowledge in the field. Law, Ting, and Jerome [25] show that, upon the completion of a course in academic writing, students used fewer unethical quoting strategies. Additionally, Singh [26] highlights the usefulness of scientific writing workshops. Strittmatter and Bratton [27] emphasize the role of "Plagiarism and Ethics Awareness Training" (PEAT) in educating students on how to quote sources, adequately attribute ideas, and avoid the theft of other people's ideas.
Kier [28] argues for the capacity of the tutorial of the game "Goblin Threat" to increase students' ability to recognize plagiarized paragraphs (compared to the control group, the rate at which players recognized plagiarized paragraphs was 11% higher). Merely using software like Turnitin could favor only superficial learning, whereas the tutorial facilitates understanding the information in the text, as Kier [28] shows. Yang, Stockwell, and McDonnell [29] describe the effects of a Writing in Your Own Voice intervention, conceived to help beneficiaries become aware of the types of plagiarism and avoid common writing problems. For several weeks after its completion, the intervention reduced the cases of plagiarism and the common writing problems by half, as well as the amount of serious plagiarism. Trautner and Borland [30] proposed a pedagogical exercise that employs sociological imagination in building and analyzing scenarios about the reasons and consequences of unethical academic behavior. The exercise was developed for American students, but, in the authors' opinion, it can easily be adapted to various institutional contexts and would help many others simultaneously understand the personal and social implications triggered by the lack of integrity. Concerning students' lack of motivation as the main cause of plagiarism, Stabingis, Šarlauskienė, and Čepaitienė [8] underline the motivating role of switching the focus from the quantity of processed text to a qualitative approach on behalf of students, namely, to originality and innovation. More Involved Teachers Most of the solutions concerning how teachers could reduce the frequency of students' plagiarism focus on teachers' more responsible involvement in managing the situation. That means overcoming the embarrassment that the identification and correction of plagiarism may generate. It is necessary for teachers to change their attitudes so that students use the information available online adequately and formulate their own ideas, as Amiri and Razmjoo [5] show. Adam, Anderson, and Spronken-Smith [31] highlight that students consider feedback a means to improve their writing without the pressure of sanctions. They view interactive personalized support as more useful than the general information published on the university website. Many authors believe that students must be supported to write their papers correctly from an academic point of view. Beginners should be assisted by their coordinators [32]. Students should be guided in planning their tasks and managing their resources, and they should be encouraged to write despite the fear of accidental plagiarism [1,31]. It is the teachers' job to explain to students the rules of academic writing, according to Gómez, Salazar, and Vargas [33], and they should also manage the pressures their students experience when elaborating their papers [20]. Critically assisting students by expanding their knowledge, unveiling the cultural dimension of writing conventions, and admitting them into the community of academic writing is more efficient than applying institutional policies targeting correction. The latter is intimidating and estranging for students because it affects their feeling of belonging to an academic community, considers Bell [34]. Acquiring academic writing skills is a process, and that should not be ignored, according to some authors. Students should learn to write gradually with the aid of their teachers.
Thus, if such an idea is taken into account, unintentional plagiarism could be approached within students' training processes and patchwriting could be corrected, show Adam, Anderson, and Spronken-Smith [31]. Ironically, students are asked to imitate their teachers (the educational process is a mimetic one), signals Bell [34]. Patchwriting should not be considered a transgression of academic norms, but an attempt at joining the academic environment and an opportunity to learn, according to the aforementioned author. If students are caught plagiarizing, they are warned, told in what way they made a mistake, and asked to rewrite their paper. That gradually leads to a decrease in the number of plagiarized works, consider Vanbaelen and Harrison [13]. Teachers should (re)assume the whole responsibility of the educational process. The requests they formulate for their students must be properly worded and clearly explained [26,27]. Merging theoretical concepts and practical applications, in the way specific to each individual subject matter, leads to a better understanding and application of the rules by which plagiarism can be avoided, show Powell and Singh [35]. Dias and Bastos [2] outline the benefits of developing cross-disciplinary competencies and innovative and attractive teaching solutions, as well as the advantages of orienting education toward creative individual work rather than toward memorization and repetition. Direct, live interaction with an instructor increased the efficiency of the results obtained upon taking the web module on plagiarism and paraphrasing, according to the participants [24]. It is not just crude plagiarism that should be considered. Students may be more sensitive to the recommended citation practices than is generally stated, they have a positive attitude towards detection software, and the influence of Turnitin is higher among those who have used it, as it is proven to change the perception of plagiarism, show Childers and Bruton [36]. Students recognized verbatim plagiarism, the joining of fragments, and patchwriting, but it was more difficult for them to recognize the re-use of ideas as plagiarism. The discussion on the complex forms of the phenomenon should be more nuanced because, as the aforementioned authors show, if the discussions with students only focus on the verbatim forms of plagiarism, their capacity to access academic writing is reduced. Concerning the relationship between teachers and students, Stabingis, Šarlauskienė, and Čepaitienė [8] underline the importance and the motivating role of mutual respect and trust. Teachers themselves should use new technologies, believe Heckler and Forde [7]. That would be one of the starting points for conveying their values to the next generation of students. Anti-Plagiarism Software As we mentioned in the Introduction, many authors attribute a central role to the Internet in the spread of student plagiarism, but the Internet also provides tools for detecting unethical academic practices. The solutions concerning the role of the Internet in students' plagiarism focus on detection programs that can be used precisely because of the Internet. Technology is a key aspect in the attempts of universities to reduce the frequency of plagiarism, and Turnitin is the most adequate tool, according to Mphahlele and McKenna [3].
Löfström, Huotari, and Kupila [37] indicate that the introduction of text verification software changes, for the better, both teachers' and students' opinions on plagiarism and academic writing (there are also negative changes, related to the erosion of trust). Singh [26] believes that the implementation of software and the obligation to check texts for plagiarism are fundamental requirements for assuring the originality of the content of students' papers. Some authors believe that the Internet can be reclaimed as a didactic instrument because of the accessibility of the resources it hosts. Even students can appreciate the usefulness of detection software, and such acknowledgment supports learning and understanding academic writing. Moreover, it motivates students to correctly use resources, incentivizes teachers to correctly teach the rules of academic writing, and dissuades dishonest students from obtaining undeserved advantages, as Löfström and Kupila [1] highlight. Teachers' intent is first to check on their students, but detecting plagiarism also has a constructive dimension, as it develops students' writing skills and prompts the review of the guidelines on avoiding plagiarism, mention the same authors. The software is predominantly, and mistakenly, understood merely as an instrument to detect plagiarism, while its educational potential should not be ignored, underline Mphahlele and McKenna [3]. Teachers should ask their students to produce summaries of the materials available online and to self-evaluate with the aid of anti-plagiarism software, consider Granitz and Loewy [38]. Halupa, Breitenbach, and Anast [39] show that training focused on avoiding self-plagiarism is more efficient than general training on plagiarism and support its introduction into the academic curriculum. Dias and Bastos [2] suggest using anti-plagiarism software from secondary school onwards. Clear Anti-Plagiarism Policies As far as the features of the institutional environment are concerned, the identified solutions focus on clarifying the anti-plagiarism rules and renouncing the tolerant attitude towards plagiarism. According to Law, Ting, and Jerome [25], Gullifer and Tyson [40], Vassallo [41], and Uzun and Kilis [23], it is necessary to formulate and unabatedly respect a policy on academic integrity. Codes of honor [11], a moral code [23], and an online-accessible textbook [22] should be elaborated and made available on university websites. Li [42] proposes the editing of an online-available handbook elaborated by COPE (the Committee on Publication Ethics). There should be strict enforcement of sanctions for plagiarism [11,22]. Universities should support reporting on plagiarism. Clemency towards it is not useful and should be replaced by innovative and authentic research, indicate Amiri and Razmjoo [5]. All involved actors should be familiar with the rules of academic writing, consider the authors of some studies. It is necessary to clearly explain to students the expectations, the rules of academic writing, and what the adequate resources are, point out Gómez, Salazar, and Vargas [33]. Specific policies concerning plagiarism must be promoted [7,14,43]. They need to be student-centered, easily accessible, and efficiently approached from an educational perspective, and they should indicate clear roles and responsibilities [44]. Ramzan et al. [20] propose the organization of seminars, workshops, and symposia on plagiarism. Law, Ting, and Jerome [25] highlight the dwindling number of incorrect citations as a result of courses on the correct use of bibliography.
Gunnarsson, Kulesza, and Pettersson [45] report the positive feedback obtained during a master's-level course on research methodology run in collaboration between teachers and librarians and composed of two parts: one concerned with the correct use of sources, the other focused on the legal and ethical aspects of plagiarism. Mansoor and Ameen [15], in their turn, mention the role of librarians in fighting plagiarism, attributing to them the informal role of occasional anti-plagiarism counselors. Additionally, Gibson and Chester-Fangman [46] refer to the formal framework provided by libraries as a means of training students on plagiarism and thus preventing it. As always, prevention is more efficient than sanctioning. Proactive learning in relation to plagiarism and its avoidance would be more useful than its detection by specialized software, believe Jereb et al. [22]. Knowlton and Collins [47] support a change in the reactive and punishing attitude of academics towards more proactive and preventive behavior. In this respect, they show that encouraging students to actively use anti-plagiarism software before submitting their papers would teach them to prevent the phenomenon in a more useful manner than punishing them after the papers are submitted. University policies on the issue can change if the experience shared by faculty members proves that the approach to plagiarism is more useful when it is treated as a stage in the learning process rather than as an opportunity for immediate punishment, according to Stowe [48]. Even in the context of sanction-oriented policies, a continuous dialogue among teachers, and between them and university leadership, on plagiarism and its implications for the academic community is important. Such a dialogue reveals the attitudes of universities and their experience in relation to students' plagiarism, indicates the previously mentioned author. Institutions must acknowledge the potential for tensions between establishing clear rules and explicit procedures concerning the management of plagiarism and teachers' autonomy, show Peytcheva-Forsyth, Mellar, and Aleksieva [49]. They believe that increased involvement in European and e-learning projects leads universities toward the improvement of their procedures in the field of correct academic writing. Pandoi, Gaur, and Gupta [50], in full awareness of the originality of their solution, suggest that universities should shame students with value-focused messages, which, by their manipulative potential, could keep the problem of students' plagiarism under control. According to Stabingis, Šarlauskienė, and Čepaitienė [8], the universities of Lithuania mostly apply four types of prevention measures: adopting codes of ethics, training and consulting students, monitoring their progress, and internally evaluating the quality of the didactic approaches. Ethical Education of the Youths Concerning the educational framework, solutions target the alignment of the approaches to plagiarism with the cultural context in which it occurs, as well as the consolidation of the ethical dimension of the educational process. Pennycook [51] shows that the relations between texts and learning are more complex than those revealed by a simple accusation of plagiarism. Many of the approaches to alleged plagiarism are tactless from a pedagogical perspective and intellectually arrogant.
The conclusion that students plagiarize and the requirement for them to learn how to write academically in a correct manner become adequate only when related to the cultural and historical features that characterize each of these students. Their relation to texts and memory can be very different, according to the aforementioned author. Chinese culture is frequently associated with copying, and that explains the interest of Chinese authors in managing plagiarism. According to Kelm and Sharon [52], this is a constructive approach since it focuses on the production of ethically correct texts. Highlighting the dialectic relation between copying and imitating guides students in finding honest solutions for their didactical tasks. Students can be explicitly taught how to avoid the corrupt borrowing of someone else's writing, shows Li [32]. The rules of academic writing can be correct and appropriate, but students must also learn that plagiarism is an ethical problem. Students must know why plagiarism corrodes and destroys the moral tissue of culture and society, underline Strittmatter and Bratton [27]. Jereb et al. [22] indicate that it is the key task of both the educational system and society at large to educate the youths on morality and ethics while they are teenagers. Since students learn from their teachers, academic integrity should be crucial in teachers' development, show Uzun and Kilis [23]. Building upon Already Manifest Trends in Using the Internet The solutions presented above are, in most cases, focused on preventing, deterring, and sanctioning plagiarism. We believe that Rosenberg [53] is right to view the problem of students' plagiarism from the perspective of Kant's categorical imperative. Even at the risk of offending honest students, and while considering them all innocent until proven otherwise, teachers need to check their work for plagiarism. If they do not, they run the risk of being mere instruments, means by which students achieve their goals, and that only leads to undermining the trust that should exist in the teacher-student relationship. East [54] and Löfström and Kupila [1] also believe that the teachers who have the correct attitude towards students' plagiarism act to the benefit of students and of the academic community. Nonetheless, we believe that, to be consistent in approaching others (i.e., university students) as goals, there should be a shift from focusing on detecting and signaling the presence of plagiarism toward creating a climate conducive to perceiving it as inappropriate. As Townley and Parsell [55] considered, the efforts concerning this phenomenon should be redirected from providing technological solutions to the problem of unethical academic behavior to supporting an intellectual community upholding academic and ethical principles; indirectly teaching students how to plagiarize without being caught does not contribute to this. Focusing on detecting, denouncing, and sanctioning plagiarism is an adequate solution for treating the symptoms. An efficient approach targets motivation. Additionally, sanctions do not seem able to prevent people from being willing to plagiarize [50], and the technical training related to the correct use of research resources does not seem sufficient to prevent plagiarism [56]. Efficiency is about gaining maximum results with minimum effort. Thus, in our case, an efficient approach would focus on demotivating students from plagiarizing. If that can be done by building upon trends that are already manifest, all the better.
Students have discovered and have been using the Internet for a long time. Project-based learning encourages the use of the resources that are available online, but it may implicitly encourage plagiarism [19,57]. Project-based learning builds upon William Kilpatrick's child-centered learning [58,59] and is a form of collaborative learning. "Briefly, based on the constructivist background, the project approach represents a student-centered pedagogy, a comprehensive instructional endeavor which consists of individual, small, or larger group in-depth extended investigation of a topic or problem, worthy of the student's interests, energy, and time" [60] (p. 57). The students who benefit from project-based learning gain more self-confidence, get better results in school, pay more attention [59], are more independent and open, remember better what they learn, and get better results when evaluated [60]. However, in the case of many future university students, these benefits are accompanied by the habit of using the Internet with no ethical constraints. In the pre-university educational system, school projects are assessed by their content, not by the accuracy of employing references. That is the representation of how to use the Internet that students acquire before enrolling in academic programs, and it is one of the causes leading to the large amount of plagiarized work among university students. It is the side effect of a generous educational approach. The Internet is an efficient didactical instrument. The habit of using Internet-based resources should not be discouraged; it must be efficiently and ethically capitalized on. Most of the solutions focus on persuading students that plagiarism is an undesirable (dishonest, useless, and, because of its consequences, dangerous) practice. However, this type of solution implicitly restrains students' interest in using the Internet, and that does not benefit anybody. As Löfström and Kupila [1] show, the fear of plagiarism diverts attention from the very cumulative nature of knowledge and from the rules of academic writing. Moreover, the interest in detecting and punishing plagiarized work runs against current trends in promoting desirable behavior. We believe that the solutions focused on sanctioning plagiarism are useful only as a secondary type of instrument supporting truly efficient solutions, such as reclaiming the Internet from a didactical perspective, as an instrument contributing to knowledge development. We agree with Townley and Parsell [55], who underline that, in the fight between students and teachers over the dishonest use of the Internet, the latter are at a disadvantage. This is the classical situation of the upper hand held by the one who commits an offense over the one who is supposed to investigate and sanction it. Students are very likely to find increasingly sophisticated means to counter the attempts meant to detect their unethical behavior. The foreseeable failure of teachers in this context is also an injustice to honest students. If university students feel tempted to search for and appropriate the solutions that have been proposed by others and that meet their school-related requirements, then we believe it is more efficient for them to be asked to explain and comment on what they find. Plagiarism will no longer be encouraged if university students are asked to identify the relevant information sources for a given topic and to present their content along with their own opinions.
Within this context, the solution we propose is meant to demotivate students from plagiarizing. If the didactical requirement changed from asking students to simply present a topic to critically presenting it, namely asking them to refer to alternative approaches to the topic under discussion from the specialized literature, then students would need to comment on the relevant materials that are available online. They would no longer be able to plagiarize those materials. The effort required to find materials that critically approach the sources initially discovered would be too great. Consequently, that might encourage students to formulate their own critical remarks. Thus, plagiarism would be rendered useless. How to Efficiently Retrieve the Didactical Use of the Internet: SAP Conversion Plagiarism does not automatically lose its appeal when such a request is made by a teacher to a student. The request must be supported by supplementary measures. Rets and Ilya [21] present a series of such actions: asking students to complete creative tasks that require the use of their critical thinking skills; requiring students to elaborate a portfolio that contains not only the final version of the writing task but also all the drafts, as well as copies of the resources used in the writing process; and continuously evaluating the progress of the critical approach based on feedback. We deem such approaches appropriate for deterring students from plagiarizing. In the paragraphs to follow, we present some of the measures that we propose in order to divert students' intention to plagiarize and to constructively employ their skills in using the Internet. The proposed measures are anchored in Granitz and Loewy's [38] suggestion that teachers require their students to summarize some of the materials available online. We grouped them into a procedure for the constructive conversion of students' appetite for plagiarism into a skill of critically approaching the relevant materials that are available online (SAP conversion): • (a) Training students to differentiate academically trustworthy texts from unreliable ones; • (b) Directly coordinating students' choice of bibliographical materials, which requires supplementary effort on behalf of teachers but ensures high efficiency, since it prevents students from selecting the same materials and hence from "borrowing" summaries and critical comments, or from illegitimately sharing attributions in fulfilling the task; moreover, if teachers are at least partially involved in establishing what materials students are to process, the quality of discussions and of documentation for undergraduate theses increases as a result of the collection of texts that is thus established; • (c) Asking students, whenever possible, to include the bibliography in (the electronic form of) the evaluation portfolio, in order to avoid or at least reduce the number of copy-paste fragments used in the presentation of the work submitted by students; • (d) Asking students to include in the portfolio a minimum number of summaries (i.e., the students' own) of recent articles on a given topic; • (e) Asking students to include in their evaluation portfolio an obligatory number of articles from specialized journals acknowledged by the Ministry and/or from publications recognized by well-known databases, in order to limit the choice of easily retrievable materials over relevant ones.
Employing such a requirement as a mandatory one is useful in many ways and does not limit the sources of materials that students can deem relevant; on the contrary, such a requirement not only renders plagiarism useless but also helps students sense the trends related to the issues they investigate. Additionally, it indirectly teaches them to comply with the academic writing standards that they implicitly encounter when reading the texts they need to include in their portfolio; • (f) Asking students to attach to the portfolio an anti-plagiarism evaluation report for the essay; • (g) Reporting plagiarism in detail (i.e., the plagiarized source and the ratio of plagiarized text in the essay) and describing its impact on evaluation results; • (h) Granting the opportunity to redo plagiarized essays. If all of the above requirements and clarifications are clearly presented as the rules of a game whose stake is the acceptance of the evaluation portfolio, students will acquire them since, after all, one of university students' salient features is their capacity to learn. The sum of the requests detailed by the procedure makes the use of readymade solutions almost impossible. Students are thus forced to think critically about their essays and to elaborate them on their own. The readymade solutions that are available online become bibliography, that is, something natural and useful, rather than materials that can be used in a copy-paste manner. To support our claim concerning the efficiency of the proposed procedure, we present some of the results obtained by applying some of its characteristic items in our didactical activities. We have been using items from the procedure with our students for some years, namely items (a), (c), (e), (g), and (h), and we believe their use is efficient. The essays we analyzed were elaborated as part of the academic requirements from the first semester of the second year of study until the end of the third year of the Sociology study program. Based on archived evaluation results, we established the evolution of the weight of plagiarism cases over the study years. The rate of plagiarism attempts is around 40% (203/502) in the case of the first essays (502/1293) presented for evaluation in a discipline taught in the third semester of study, and it gradually decreases to 10% for assessments of a similar type for students in the final years. To exemplify, Table 1 presents the evolution of plagiarism for the disciplines coordinated by the same academic during three consecutive semesters (semesters III, IV, and V). Table 1. Weight of plagiarism for the past five years (cases and weight of plagiarism per class, for semesters III, IV, and V). Figure 3 graphically shows the decrease in the weight of plagiarism for the disciplines in the table for the 2017, 2018, 2019, and 2020 classes. In the case of the 2018/2021 class, the discipline monitored in the third year of study is to be studied during semester VI. In most cases (around 90%), the plagiarized sources are among the first options listed in a simple search on Google. In most essays, different students sharing the same topic, and even students from different generations, use the same sources. The style of plagiarized texts is an efficient indicator of plagiarism. The sentence structure and the vocabulary are not characteristic of undergraduate students' writing style.
They reveal writing skills that students have usually not yet acquired and, through the formulations used, indicate that the authors of the essays belong to previous generations. These rank among the first indicators leading to the assumption of plagiarism, before the actual checking of the text in this respect. In the other cases (around 10%), the plagiarized sources are students' essays downloaded from specialized sites: Scribd, RegieLive, Referat.ro, EreferateRo, Tocilar.ro, etc. In the essays written at the beginning of their studies, there is a marked tendency to plagiarize large parts of a text, and sometimes a whole essay consists of such a chunk of text. By the end of their studies, the number of plagiarized parts in an essay decreases, and cases in which excerpts from various sources are presented successively become more frequent. Most cases (around 96%) involve verbatim plagiarism. In the other essays, the students opted for patchwriting. We suspect that they used software to replace terms with synonyms, because the chosen terms are often inappropriate in the context. As previously shown, replacing them with the terms frequently used in the linguistic contexts characteristic of a given text allows for the identification of the plagiarized text. The decrease in the rate of plagiarism attempts is accompanied by the occurrence of camouflaged intentions to plagiarize (de-plagiarism, as Wrigley [61] calls it). We identified several such practices aimed at preventing, or making more difficult, the detection of plagiarism: changing the fonts of one or several characters in the text (usually the vowels a, e, or i), the introduction of blind characters, or joining some prepositions or conjunctions with the words next to them in order to prevent the electronic identification of the source of the text, etc. We consider that such practices require a separate approach focused on the ethics of academic writing. The application of the current procedure, which we deem an enhanced version of the procedures we have employed for the past few years, should further reduce the rate of plagiarism among our students. The procedure we propose requires extra effort from teachers and a responsible acknowledgment of students' plagiarism as a problem. Therefore, even if it also requires students' effort and the use of the Internet as an instrument, we believe that its place in the classification of identified solutions to plagiarism is at the level of more involved teachers. Discussion and Conclusions Mphahlele and McKenna [3] show that technology plays a major role in the attempts of universities to reduce the incidence of plagiarism and, as we have already mentioned, they consider Turnitin the best detection tool. However, Turnitin is not perfect. It works well in the case of textual plagiarism, but not in the case of intelligent plagiarism such as paraphrasing, translations, or expressing ideas and presenting research results without acknowledging their source [53,62]. To detect cross-language plagiarism, there are various software solutions, such as the fuzzy semantic-based model [63], segmentation by keywords [64], or continuous word alignment-based similarity analysis [65]. Nonetheless, in the case of universities, Turnitin is, for now, the most accessible instrument, considering the large amount of text to be verified.
In the case of our students, the inefficiency of Turnitin in identifying texts plagiarized via translation is partially compensated for by the fact that they use Google Translate without correcting the resulting text, or correct it only superficially. This situation is more frequent in the case of final graduation papers than in the case of essays written for discipline evaluation purposes, as a result of the greater pressure to use important references in the former. The linguistic formulations provided by Google Translate are more often than not inadequate, or they lack the finesse of a good speaker of the Romanian language (a language of Latin origin whose sentence structure is heavily influenced by Slavic [66]). This even turns the translation provided by Google into an indicator of plagiarism. In addition, the effectiveness of Turnitin, which acts by considering each student a possible plagiarist [55], depends on the size of its available database. Thus, it does not detect the plagiarism of texts that are not part of its database, such as texts from old treatises. On the other hand, every text (even drafts of future papers) that is included in the Turnitin database is treated as available for plagiarism. Therefore, the percentage of plagiarism indicated by the software must be interpreted in relation to the detailed report on plagiarized sources. Consequently, the use of Turnitin as an instrument to counter plagiarism requires responsibility and discernment on the part of the academic staff. The Internet is a wide source of knowledge. It is a source for increasing the efficiency of teaching [38] and an opportunity for students [20]. Zarfsaz and Ahmadi [9] underline that, for students, the Internet is the main source of information on plagiarism. On the other hand, the skills required to use the Internet also represent tools for the detection of unethical academic practices [4]. These are arguments that further corroborate the solution we propose. Capitalizing on existing skills is more efficient than encouraging reluctance to use the Internet. Additionally, Singh [26] points out that there are no significant differences in the frequency of plagiarism between the students who use the Internet and those who use printed sources. Accordingly, the practice of plagiarizing is not necessarily connected to the ease of access to the resources available online. It could indicate a general human tendency to minimize (any kind of) effort in gaining the same benefits. We reckon that the lesson of the pandemic will consolidate the Internet as an acknowledged tool and framework for the development of the educational process. Within this context, the stakes of efficiently using the Internet are increasingly high. The solution we suggest is efficient for several reasons. First, it forces students to responsibly and attentively interpret the relevant information provided by the sources available online. It teaches students to express their opinions based on arguments instead of reproducing other people's views, with which they sometimes do not necessarily agree. It instructs them how to constructively use their instruments and not simply disseminate other people's ideas. We believe the solution we propose is a sustainable approach to students' plagiarism because it converts already existing skills that are sanctionable into skills that contribute to a healthy academic environment. Additionally, it supplies useful long-term instruments.
Furthermore, it encourages the efficient use of a resource that is generally acknowledged as valuable. SAP conversion does not undermine the effectiveness of the other solutions suggested for the problem of students' plagiarism. On the contrary, we believe that all these solutions build upon one another. We believe that their arrangement by categories in the previous paragraphs brings them to the attention of academic staff and thus supports them in making optimal decisions in relation to the challenge of managing plagiarism. Local educational contexts calibrate the usefulness of the available solutions. Identifying the most efficient solutions is the academic staff's responsibility. They can mold students' behavior and the importance attributed to the online resources used in the educational process. Moreover, they can impose institutional policies through their representatives. In the long term, by fulfilling their roles of shaping behavior and forming opinions, they can change the entire educational context. We believe that the academic staff is the main actor in the approaches meant to tackle the problem of students' plagiarism.
Reliability Modeling and Evaluation of Electric Vehicle Motor by Using Fault Tree and Extended Stochastic Petri Nets Performing a reliability analysis of the electric vehicle motor has an important impact on its safety. To do so, this paper addresses the reliability modeling and evaluation of the electric vehicle motor by using a fault tree (FT) and extended stochastic Petri nets (ESPN). Based on the concepts of FT and ESPN, an FT based ESPN model for reliability analysis is obtained. In addition, the reliability calculation method is introduced, and this work designs a hybrid intelligent algorithm integrating stochastic simulation and neural networks (NN), namely, an NN based simulation algorithm, to solve it. Finally, taking an electric vehicle motor as an example, its reliability modeling and evaluation are analyzed. The results illustrate the proposed models and the effectiveness of the proposed algorithms. Moreover, the results reported in this work could be useful for the designers of electric vehicle motors, particularly in the process of redesigning the motor and scheduling its reliability growth plan. Introduction Along with increasing energy and environmental problems, more and more countries develop policies to handle these pressing issues, that is, developing new-energy and low-carbon vehicles, implementing the remanufacturing, reuse, and recycling of waste products, and deploying green transportation technologies [1][2][3][4][5]. The electric vehicle, as an important green transportation tool, has attracted more and more researchers [6]. The motor is one of the key components of the electric vehicle, and its reliability has an important impact on system safety. Designers have well recognized the importance of electric vehicle reliability, but, to the best of our knowledge, a detailed reliability analysis is still missing. Although faults have been reduced in the last few years by various measures, they still affect the safety of vehicles, and faults of the mechanical system account for a large proportion of all faults. In the present literature, most current research has discussed electric and electronic system issues and reliability prediction analysis of the electric vehicle. For example, P. Liu and H. P. Liu present a permanent-magnet synchronous motor drive system for electric vehicles [7]. Peng et al. discuss the driving and torque control problems of a direct-wheel-driven electric vehicle with motors in series [8]. Quinn et al. present the effect of communication architecture on reliability [9]. Zhu et al. present a grey prediction model of the motor reliability of an electric vehicle [10] and propose a grey prediction model of the electric vehicle motor based on particle swarm optimization [11]. In addition, Zhu et al. discuss the reliability modeling method of a solar array based on the fault tree (FT) analysis method [11]. It can be seen from the above literature that the current research on reliability modeling of the electric vehicle motor is limited to the FT analysis method. There is no doubt that FT analysis has been widely employed as a powerful technique to evaluate the safety and reliability of complex systems by many scholars [12,13]. However, FT analysis has some limitations in reliability analysis. Firstly, in FT analysis, the probabilities of basic events must be known before the analysis.
Thus, based on this assumption, the reliability analysis of the system is only a probabilistic decision-making process and cannot achieve a real-time description of reliability information [14,15]. Secondly, it is not easy for FT analysis to conduct further quantitative analysis automatically, due to the lack of effective means of mathematical expression. Thirdly, FT analysis cannot precisely capture the dynamics of the system's fault information and cannot describe the propagation process of fault information. The Petri net is one of the mathematical modeling approaches for the description of distributed systems; it consists of places, transitions, and directed arcs [16,17]. Many extensions of Petri nets have been successfully developed and applied in analyzing fault diagnosis, automated manufacturing systems, and product disassembly [18][19][20][21]. The extended stochastic Petri net is a high-level one; it has been used to establish models of reconfigurable manufacturing systems and network attacks due to its better information expression ability and its dynamic description of process features [22,23]. Although some prior works [22][23][24] have proposed using extended stochastic Petri nets to solve reliability issues of nitric acid reactor feeds and reconfigurable manufacturing systems, they merely analyze the average failure rate/lifetime of the system; the real-time probability/reliability issues of the system are not yet addressed by this method. Moreover, to the best of the authors' knowledge, no reference handles the reliability of the electric vehicle motor by the extended stochastic Petri net method. Therefore, this work addresses the reliability modeling and evaluation of the electric vehicle motor by using extended stochastic Petri nets based on a fault tree for the first time. Namely, the aim of this work is to find a new way to analyze the reliability of the electric vehicle motor. The remainder of this paper is organized as follows: the reliability model and the establishment method of the FT and extended stochastic Petri nets of the mechanical system are given in Section 2. Section 3 presents the reliability analysis method and algorithm. In Section 4, taking an electric vehicle motor as an example, its reliability modeling and evaluation are presented. Section 5 concludes our work and describes some future research issues. FT Based Extended Stochastic Petri Nets (ESPN) Models for Reliability Analysis The reliability model is the basis and premise for reliability analysis and evaluation; thus, we first introduce the concept and establishment process of the extended stochastic Petri net model based on the FT for reliability analysis. To easily establish it, the following method is proposed in this work: the FT model for reliability analysis is established based on its related concepts, then transformation rules from the elements of the FT to those of the ESPN are defined, and finally the FT based ESPN model for reliability analysis is established. FT Model for Reliability Analysis. The FT is the most common model for reliability analysis. Many references describe it in detail [25,26]. In this work, we only present its basic elements and a schematic diagram. Basic Elements of FT. Usually, an FT is composed of a series of events and logic gates. The main events include the following. Top event: the most undesirable system failure event and the object of the analysis; it is drawn as a rectangle. Middle event: the subsystem or component failure event and a cause of the top event.
It is also drawn as a rectangle. Basic event: the primary failure event and a cause of the top event or of the middle events; it is drawn as a circle. The main logic gates include the following. Logic OR gate: it indicates that the output event occurs if any one of the input events occurs; it is denoted by the standard OR-gate symbol. Logic AND gate: it indicates that the output event occurs only if all of the input events occur; it is denoted by the standard AND-gate symbol. There may be many other types of events and logic gates involved in complex system reliability analysis. However, for the sake of concision, we only list the most commonly used ones here. For other types of events and logic gates, please refer to [25,26]. FT Model of a Mechanical System. Based on the presented basic elements and the logical relationships of fault occurrence, the schematic diagram of an FT for the reliability analysis of a mechanical system is shown in Figure 1. As shown in Figure 1, this FT is composed of 1 top event, 2 middle events, and 4 basic events: T, M1 and M2, and B1, B2, B3, and B4. Concept of ESPN. The Petri net is a graphic modeling method widely used in modeling and analyzing discrete event systems such as semiconductor manufacturing, transportation, and automated manufacturing systems. An ESPN is a high-level Petri net: a type of improved stochastic Petri net with arbitrarily distributed firing delays. Before giving the formal definition, we present the definition of the PN introduced by Petri in 1962 [27][28][29][30]. A PN is a five-tuple $(P, T, I, O, M)$, where $P = \{p_1, p_2, \ldots, p_m\}$ is a finite set of places; $T = \{t_1, t_2, \ldots, t_n\}$ is a finite set of transitions; $I: P \times T \to \mathbb{N}$ is an input function that defines the set of directed arcs from $P$ to $T$; $O: T \times P \to \mathbb{N}$ is an output function that defines the set of directed arcs from $T$ to $P$; and $M: P \to \mathbb{N}$ is a marking whose $i$th component represents the number of tokens in the $i$th place. An initial marking is denoted by $M_0$. The tokens are pictured by dots. A simple PN and its elements are shown in Figure 2. The four-tuple $(P, T, I, O)$ is called a PN structure and defines a directed graph. A PN models system dynamics using tokens and their firing rules. If every transition in a PN is associated with an exponentially distributed random delay from the enabling to the firing of the transition, the PN becomes a stochastic Petri net (SPN); if every transition is associated with a random delay of arbitrary distribution, the PN is called an ESPN. It is defined as follows [31]. An ESPN is a six-tuple $(P, T, I, O, M, \tau)$, where $P$, $T$, $I: P \times T \to \mathbb{N}$, $O: T \times P \to \mathbb{N}$, and the marking $M: P \to \mathbb{N}$ (with initial marking $M_0$) are as in a PN, and $\tau: T \to \mathbb{R}^{+}$ is a vector whose $j$th component is a firing time delay with an extended (arbitrary) distribution function. Note that, in the ESPN model for reliability analysis, the firing time delay is the lifetime of the corresponding component. Elements Transformation Rules from FT to ESPN. To obtain the ESPN, we introduce the rules for transforming the elements of the FT into those of the ESPN. Based on the different logic relations, the transformation of the AND/OR gates of the FT into the AND/OR transitions of the ESPN is shown in Figure 3. As shown in Figure 3, the top/middle/basic events, logical gates, and logical relation lines of the FT are transformed into the places, transitions, and arcs of the ESPN, respectively. The Establishment of the ESPN Based on the FT. Based on the FT and the element transformation rules from FT to ESPN, the FT based ESPN model for reliability analysis is obtained, as shown in Figure 4. Comparing Figures 1 and 4, we can see that building the FT model needs six types of elements, that is, three types of events, two types of logic gates, and one kind of relation line, while building the ESPN model needs only three types of elements: place, transition, and arc.
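To make the construction concrete, here is a minimal MATLAB sketch (our illustration, not code from the paper) of how the four-tuple PN structure of the Figure 4 net could be encoded; the arc layout and the initial marking are assumptions inferred from the FT of Figure 1.

```matlab
% Assumed encoding of the PN structure (P, T, I, O) for the Figure-4 net:
% 7 places (p1..p4 basic events, p5/p6 middle events, p7 top event)
% and 3 transitions (one per logic gate). M0 is an assumed initial marking.
nP = 7; nT = 3;
I = zeros(nP, nT);              % input function  I: P x T -> N
O = zeros(nT, nP);              % output function O: T x P -> N
I([1 2], 1) = 1;  O(1, 5) = 1;  % arcs p1,p2 -> t1 and t1 -> p5
I([3 4], 2) = 1;  O(2, 6) = 1;  % arcs p3,p4 -> t2 and t2 -> p6
I([5 6], 3) = 1;  O(3, 7) = 1;  % arcs p5,p6 -> t3 and t3 -> p7
M0 = [1 1 1 1 0 0 0].';         % tokens start in the bottom places
```

In this encoding, the three matrices fully describe the directed graph, which is what makes the ESPN representation compact compared with the six element types of the FT.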
The ESPN model is thus more concise than the FT one. In addition, when each transition is associated with its corresponding life distribution function, the model can achieve a real-time description for reliability analysis. This model can also achieve the dynamic delivery and propagation of reliability/fault information, owing to the introduction of transitions and directed arcs. Overall, these observations indicate that using the ESPN method to establish the product reliability model is more convenient, concise, and effective than using the FT model. Reliability Evaluation Parameters and Calculation Method. In this paper, the following two evaluation parameters of a system are adopted: the reliability degree $R(t)$ and the average life $E(T)$. Let the random variable $T$ denote the life of a specified system; then the unreliability $F(t)$ of the system at time $t$ is the probability that $T \le t$; namely, $F(t) = P\{T \le t\}$. The reliability of the system $R(t)$ is $R(t) = 1 - F(t) = P\{T > t\}$. Let the probability density function of the system life be $f(t)$; then the average life $E(T)$ is $E(T) = \int_0^{\infty} t f(t)\,dt$. In addition, for computing the system reliability, the life calculation method for the AND/OR transitions in the ESPN is presented next. Let a specified system consist of $n$ components and let the lives of these components be $t_1, t_2, \ldots, t_n$, respectively. For AND transitions, the system life is expressed as $T = \max(t_1, t_2, \ldots, t_n)$ (4); for OR transitions, the system life is expressed as $T = \min(t_1, t_2, \ldots, t_n)$ (5). For example, in Figure 4, let the lives of the components corresponding to places $p_1$ and $p_2$ be $t_1$ and $t_2$, respectively; then the life of the subsystem corresponding to place $p_5$ is $t_5 = \min(t_1, t_2)$. Let the lives of the components corresponding to places $p_3$ and $p_4$ be $t_3$ and $t_4$, respectively; then the life of the subsystem corresponding to place $p_6$ is $t_6 = \max(t_3, t_4)$. For other systems, the life calculation can be obtained by combining the AND and OR transition calculations; for the system shown in Figure 4, the system life is $t_7 = \min\{t_5, t_6\} = \min\{\min(t_1, t_2), \max(t_3, t_4)\}$. Algorithm for Reliability Evaluation. Stochastic simulation is an effective means to assess and calculate stochastic and probabilistic functions, and it has effectively solved many stochastic programming problems [32][33][34][35]. Since neural networks (NN) have been successfully used to solve many complex industrial evaluation and optimization problems due to their strong nonlinear fitting ability [36][37][38], we propose a stochastic simulation algorithm based on NN to solve the reliability of the proposed FT based ESPN model. Stochastic Simulation of the Reliability Function Step 1. Initialize the fault probability (life) distribution functions associated with each transition in the ESPN model, and set the number of simulation cycles $N$. Step 2. Generate a random life value for each bottom place (basic component) according to its associated distribution. Step 3. Based on the transition transmission rules, that is, (4) and (5), a system life value is obtained from bottom to top in the ESPN model. Step 4. Repeat Steps 1-3 $N$ times; namely, $N$ samples of the system life are obtained. Step 5. Calculate the average value of the $N$ samples of the system life obtained in Step 4; that is, the average life $E(T)$ of the system is obtained. Step 6. Given a time $t$, record the number of samples with life greater than $t$ as $N'$; then the system reliability degree $R(t)$ is obtained; that is, $R(t) = N'/N$. Neural Networks (NN). An NN is treated as a nonlinear mapping system consisting of neurons (processing units) linked by weighted connections. It usually consists of three layers: input, hidden, and output. There is an activation function in the hidden layer; it is defined as the sigmoid function in this paper [36][37][38][39].
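Before detailing the NN design, the simulation component (Steps 1-6 above) can be illustrated with a short MATLAB sketch for the Figure 4 system; the four component life distributions below are assumptions chosen only for the example, not values from the paper.

```matlab
% Monte Carlo estimate of E(T) and R(t) for the Figure-4 system,
% whose life is T = min(min(t1,t2), max(t3,t4)).
% All four distributions below are illustrative assumptions.
N = 5000;                          % number of simulation cycles
T = zeros(N, 1);
for i = 1:N
    t1 = 12000 + 900*randn;        % Norm(12000, 900)
    t2 = 14000 + 1200*randn;       % Norm(14000, 1200)
    t3 = -10000*log(rand);         % Exp with mean 10000 (inverse-CDF sampling)
    t4 = -16000*log(rand);         % Exp with mean 16000
    T(i) = min(min(t1, t2), max(t3, t4));   % transition rules (4) and (5)
end
ET  = mean(T);                     % Step 5: average life E(T)
t0  = 800;                         % evaluation time, in hours
Rt0 = sum(T > t0) / N;             % Step 6: R(t0) = N'/N
fprintf('E(T) = %.0f h, R(%d) = %.3f\n', ET, t0, Rt0);
```

The same loop, run with the distributions fixed by actual life tests, yields the input-output samples that the NN described next is trained on.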
Returning to the NN design: firstly, the method to determine the numbers of neurons in the input, hidden, and output layers is presented as follows. The number of input neurons equals the number of bottom places (component lives) in the ESPN model, and the number of output neurons is 1, representing the single system life function. In terms of the NN structure, the main problem is to determine the best number of hidden neurons. The number can be infinite in theory, but it is finite in practice for two reasons: too many hidden neurons increase the training time and the response time of the trained NN, while too few hidden neurons make the NN lack generalization ability. Therefore, it can usually be determined by the following formula: $h = \sqrt{u + v} + c$, where $u$ and $v$ are the numbers of input neurons and output neurons, respectively, and $c$ is a constant from 1 to 10 [40]. Based on this, for the system shown in Figure 4, $u$ is set to 4, since this system is composed of 4 bottom places, and $v = 1$; thus $h$ is a constant from 3 to 12. Secondly, backpropagation is the most commonly used method to calculate the values of the weight and bias terms of an NN model. In this method, all weights are adjusted according to the calculated error term using a gradient method. Learning in an NN, that is, the calculation of the weights of the connections, is achieved by minimizing the error between its output and the actual output over a number of available training data points. In this paper, the error term is controlled by the MATLAB parameter net.trainParam.goal, which denotes the mean squared error between the output of the neural network and the actual output over the available training data points. Thus, the NN algorithm is as follows. Step 1. Initialize the numbers of neurons at the input, hidden, and output layers, and initialize the weight vector $w$. Step 2. Calculate the output of the hidden layer and the output of the output layer, and adjust the corresponding weights $w$. Step 3. Calculate the error term, namely, the training performance goal; if it is larger than the given error, go to Step 2; otherwise, end. NN Based Simulation Algorithm. Based on the presented stochastic simulation and NN, the steps of the stochastic evaluation algorithm based on NN are as follows. Step 1. Initialize the parameters of the NN structure and the number of training data points. Step 2. Establish an FT based ESPN model for reliability analysis based on the fault logic relationships of each component in the system. Step 3. Based on the relationship between the system life and the corresponding component lives in the FT based ESPN model, generate the input-output data for NN training by the stochastic simulation technology. Step 4. Train the NN to approximate the uncertain function, namely, the transition transmission rule relating the component lives to the system life, and obtain the output data of the system life from the NN. Step 5. Forecast outputs of the system life value are obtained by the NN algorithm. Step 6. Calculate the reliability degree $R(t)$ and the average life $E(T)$ by stochastic simulation and the obtained forecast outputs of the system life. The above algorithm has been implemented in the MATLAB (R2009b) programming language. Reliability Modeling of Electric Vehicle Motor. A schematic graph of the electric vehicle motor is shown in Figure 5. It mainly consists of 3 parts: stator, rotor, and axis. According to its components and their logical relations of fault occurrence, combined with the presented concept of FT, its FT model is shown in Figure 6.
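Before the motor example is completed, the following sketch shows how the NN based simulation algorithm above might be wired together for the Figure 4 system. It is our illustration under assumed data; the toolbox calls (newff, train, sim) follow the classic Neural Network Toolbox API, not code published with the paper.

```matlab
% Sketch of Steps 1-6 of the NN based simulation algorithm (Figure-4 system).
% Training data come from stochastic simulation under assumed distributions.
u = 4; v = 1; c = 8;                   % assumed c within the range 1..10
h = round(sqrt(u + v) + c);            % sizing rule: h = sqrt(u+v) + c
M = 5000;                              % number of data points
X = 12000 + 1500*randn(u, M);          % assumed component-life samples
Y = min(min(X(1,:), X(2,:)), max(X(3,:), X(4,:)));  % system-life targets

net = newff(X, Y, h);                  % one hidden layer of h neurons
net.trainParam.goal = 4e-9;            % MSE goal, value used in the paper
net = train(net, X, Y);                % backpropagation training
Yhat = sim(net, X);                    % forecast system-life values

ET  = mean(Yhat);                      % average life from the forecasts
Rt0 = sum(Yhat > 800) / M;             % reliability degree R(800)
```

In practice, the sampled data would be split into training and testing groups, as the paper does for the motor model below, so that the forecast accuracy can be checked on points the NN has not seen.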
Based on the element transformation rules from FT to ESPN, the FT based ESPN model of the electric vehicle motor is obtained and shown in Figure 7. In addition, the descriptions of the transitions and places of the FT based ESPN model of the electric vehicle motor are listed in Tables 1 and 2, respectively. Additionally, the life distribution types and parameters associated with each transition are listed in Table 2 [excerpt: faults of the shaft and the bond cause the fault of the shaft assembly, with shaft (place 6) life Norm(18000, 1000) and bond (place 7) life Norm(16000, 1100); transition 5: faults of the stator, rotor, and axis systems cause the fault of the motor system, with their life distributions obtained by calculating the lives of their constituent components]. Note that Norm(µ, σ) and Exp(λ) denote normal and exponential distributions, respectively. The distribution parameters can be determined by the life test of the motor, and the unit of life time is the hour [41].

Reliability Evaluation of Electric Vehicle Motor. The parameters of the NN based simulation algorithm are set as follows: for the FT based ESPN model of the electric vehicle motor, since there are 7 bottom places in this model, the number of input neurons is set to u = 7, and the number of hidden neurons is set to h = 12. The given error term value is 0.000000004. Based on the stochastic simulation, 5000 input-output data points are generated; they are separated into two groups: 3000 data points for training and 2000 points for testing. Note that the total number of data points is set to 5000 for two reasons. First, too few data points make the solution of the model inaccurate. Second, too many points increase the training time and the response time of the solution model, while beyond this size the solution accuracy does not increase significantly with the number of data points [36,37].

After the NN based simulation algorithm is executed, the corresponding reliability degree R(t) is obtained, as listed in Table 3. As seen in Table 3, for example, R(800) = 0.871 denotes that the probability that this motor has not failed after running for 800 hours is 0.871. In addition, to support future tests and assess the effectiveness of the proposed methods, the predicted outputs of the NN method and the errors between forecast and actual outputs at the test data points are shown in Figures 8 and 9. From Figures 8 and 9, the predicted and actual results are highly close, which reveals that the proposed algorithm can accurately achieve reliability evaluation of the electric vehicle motor.

Conclusion

The electric vehicle motor is one of the key components in an electric vehicle and has a great impact on vehicle safety; thus it is important to perform its reliability analysis. To do so, currently, researchers have discussed this problem by using the FT analysis method. However, this method has many defects in analyzing the reliability of mechanical systems; for example, it cannot achieve a dynamic description of reliability, and the building process of an FT model needs a variety of elements. To deal with this problem, this paper addresses the reliability modeling and evaluation of an electric vehicle motor by using FT based extended stochastic Petri nets for the first time. Based on the concepts of FT and ESPN, combined with the defined transformation rules of their elements, an FT based ESPN model for reliability analysis of the mechanical system is obtained.
In addition, a reliability calculation method is introduced for the FT based ESPN model of the mechanical system, and this work designs a hybrid intelligent algorithm integrating stochastic simulation and NN, namely the NN based simulation algorithm, to solve it. Finally, taking an electric vehicle motor as an example, its reliability modeling and evaluation are analyzed. The results reveal that the proposed methods are feasible for solving such problems, and the obtained results can guide decision makers toward better designs when electric vehicle motors are developed. Future work is to find and use actual reliability test data to validate this method and provide stronger decision support for the reliability analysis of electric vehicle motors. This work analyzes only the reliability of the mechanical system of the electric vehicle motor; thus, the reliability of the integrated system combining the mechanical system and software needs to be discussed further. In addition, advanced control technology for electric vehicle motors should be studied further to improve their safety [42][43][44].
5,097.4
2014-04-30T00:00:00.000
[ "Engineering" ]
Fine structure in the Sigma Orionis cluster revealed by Gaia DR3

Introduction

Sigma Orionis (σ Ori) is a benchmark open cluster in the nearest giant star formation complex, Orion. There are multiple dark clouds and stellar populations in Orion, indicating a complicated history of star formation over the last 15 Myr (Kubiak et al. 2017; Kounkel et al. 2018; Zari et al. 2019). The σ Ori cluster itself is located in the Orion C region, which is relatively clear of molecular cloud material and where there is no ongoing star formation. Close to it, however, there are heavily obscured regions, such as the Horsehead Nebula (Barnard 33). North of σ Ori are the two Orion OB1 subgroups of OB stars identified by Blaauw (1964). It has been suggested that the low-mass population of OB1b overlaps with that of the northern part of the σ Ori cluster, but the two populations have different distances and radial velocities (Jeffries et al. 2006; Maxted et al. 2008).

The existence of a clustering of B-type stars and late-type young stars around the bright O9.5 star σ Ori was recognised by Garrison (1967) and Walter et al. (1997), respectively. Low-mass stars were identified using the Röntgensatellit (ROSAT) X-ray observations and follow-up optical imaging and spectroscopy. Very low-mass candidate members straddling the stellar-substellar border have been found using wide-area optical and infrared imaging and spectroscopy (Béjar et al. 1999; Zapatero Osorio et al. 2002), and deep imaging and reconnaissance spectroscopic observations have even reached the planetary-mass domain (Zapatero Osorio et al. 2000; Barrado y Navascués et al. 2001; Martín et al. 2001; Peña Ramírez et al. 2012). A pre-Gaia review of the properties of the σ Ori cluster can be found in Walter et al. (2008).

Membership studies using Gaia are essential because the σ Ori cluster is located in a complex region. Gaia Data Release (DR) 2 astrometric data were used in the large spectroscopic membership study by Caballero et al. (2019). They report a significant number of unconfirmed cluster members among the lists of members provided by previous studies. This shows the importance of including astrometric and kinematic information when determining cluster memberships.

The main aim of this paper is to re-assess the membership of the cluster and its subcomponents along the entire mass range available in the Gaia DR3 catalogue. Typical young cluster membership indicators (colour-magnitude diagram, Hα emission, and Li I and Na I absorption) are examined but are not used to determine membership. The structure of this paper is as follows. We describe the input catalogue and the mass estimation in Section 2. Section 3 includes a discussion of the division of the complex star-forming region into subgroups and their membership determination. We comment on the revised membership lists and the fine substructure of the σ Ori region in Section 4 and supplement our membership lists with the spectroscopic indicators. We summarize our findings and conclude in Section 5.

Data

Our membership selection is based on stellar kinematics. We used the stellar coordinates, parallaxes, proper motions, and radial velocities in the Gaia DR3 catalogue (Gaia Collaboration et al. 2016, 2022). We computed distances as 1/parallax. As estimated by Žerjal et al. (2023), the difference between the distances based on the Bayesian approach (Bailer-Jones et al.
2021) and the inverted parallax is small at the distance of Orion, on average less than 1 pc within 400 pc and 4 pc beyond this limit.

Our selection criteria for the initial source list were based on coordinates (80 < α < 90 deg, −4 < δ < 0 deg) and parallax (2 < π < 5 mas). This is a relatively wide cut to ensure that no potential members are left out. We did not perform any quality cuts on the data in order to reach the faintest cluster members. In total, 14% of the stars have radial velocities in Gaia (among them almost 8% of K and 2% of M dwarfs).

The brightest star in the cluster, σ Orionis AB (one single entry, Gaia DR3 3216486443742786048, that is actually an unresolved triple system), is saturated in Gaia (G = 3.4 mag) and therefore lacks parallax and proper motion measurements. We took proper motions for this star from the new reduction of the Hipparcos data despite the large measurement uncertainties (µα* = 22.63 ± 10.83 mas yr−1, µδ = 13.45 ± 5.09 mas yr−1; van Leeuwen 2007). Its distance, however, is most precisely determined by interferometry (387.5 ± 1.3 pc; Schaefer et al. 2016).
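For illustration, the initial selection and the inverse-parallax distances can be sketched as follows (a minimal example, assuming the catalogue has already been downloaded as a table with standard Gaia archive column names):

```python
import numpy as np

def initial_source_list(cat):
    """Apply the coordinate and parallax cuts of the initial source list.
    `cat` is assumed to be a structured array or astropy Table with Gaia
    DR3 columns ra, dec [deg] and parallax [mas]."""
    sel = ((cat["ra"] > 80) & (cat["ra"] < 90) &
           (cat["dec"] > -4) & (cat["dec"] < 0) &
           (cat["parallax"] > 2) & (cat["parallax"] < 5))
    return cat[sel]

def distance_pc(parallax_mas):
    # Distances computed as 1/parallax; at ~400 pc this differs from a
    # Bayesian estimate by only ~1-4 pc (Zerjal et al. 2023).
    return 1000.0 / np.asarray(parallax_mas)
```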
The fifth population is a sparse and slightly older group in front of the σ Ori cluster. It is seen as a populous overdensity in the proper motion space but sparse in the sky. It is related to a group historically named OB1b, and recently characterised as OBP-near by Chen et al. (2020), where OBP stands for the Orion Belt population. Due to its broader extent compared to our input catalogue, certain members of this cluster are not included in our membership list. Consequently, in this study, we designated its members as partial OBP-near (pOBP-near), clarifying that our interest lies in its limited characterisation for the purpose of distinguishing it from the main cluster, rather than conducting a comprehensive analysis, because it belongs to an older star-forming event.

The criteria for the selection of the preliminary members of σ and RV Ori, NGC 2024, and pOBP-near are individually customised. Members of the Flame association were discovered and determined later in the procedure due to its inconspicuous nature, as described in Sect. 4.3.

The preparation of the table with the preliminary members was composed of two steps due to the discrepancies between the pre-Gaia cluster distances in the literature and those estimated from the Gaia parallaxes, and due to the missing information about the newly discovered RV Ori association. The first part consisted of the determination of the centres and radii of the clusters in the positional and velocity spaces and their mean radial velocities, as listed in Table 1. If available, we took this information from the literature; otherwise, we used our own values. In the case of the σ Ori cluster, we used the membership list from Caballero et al. (2019) due to the overlap of this cluster with other overdensities in the proper motion space. We only took objects with µα* between 0 and 3 mas yr−1, µδ between −2 and 1 mas yr−1, and parallaxes between 2 and 5 mas to eliminate the obvious outliers in their list. For RV Ori, NGC 2024, and pOBP-near, the selection was performed manually in TOPCAT (Taylor 2005) by tracing the overdensities in the proper motions and distances, and their location in the sky. These lists allowed the determination of the cluster centres and radii, as listed in Table 1. Radii in the physical and proper motion space were determined by a visual inspection and were chosen to incorporate the majority of the stars in the overdensity. Since these were preliminary lists, we aimed to achieve low contamination rather than high completeness levels.

In the second step, we applied the filters from Table 1 to produce the preliminary lists of members for each group. We centred the fields on the commonly reported (α, δ) coordinates from the literature (e.g. Caballero et al. 2019 and Chen et al. 2020 for σ Ori and pOBP-near; Getman et al. 2014 for NGC 2024) or determined in the first step of the procedure (RV Ori). The spatial selection of stars was done within the angle β in the sky. This angle varies from cluster to cluster and corresponds to a radius of 5 pc at its distance; this is a typical tidal radius for a young cluster within 500 pc (Žerjal et al.
2023). The distance cut-off was applied in a less restrictive manner to accommodate uncertainties in the parallax measurements. Likewise, we centred each cluster on (µα*, µδ) in the proper motion space, and then identified objects falling within the radius rµ. This chosen radius encompassed the central overdensity and remained consistent across all clusters, except for NGC 2024, due to its sparse nature. Subsequently, we implemented the radial velocity boundaries to eliminate evident outliers. These boundaries were quite broad and primarily served to exclude obvious outliers with radial velocities that deviate significantly from typical values, for example by a factor of 3 or more with respect to the mean value for the cluster. The combination of all these cuts successfully isolated N preliminary members, the N_RV stars of which have radial velocity measurements. This selection was very conservative and only served as a starting point for the membership selection algorithm. Figure 1 illustrates the positions of the preliminary members in the sky and the proper motion space.

Mass determination

The systemic velocities of the clusters were determined as the mean velocities of their members, weighted by their mass. We estimated stellar masses in the input catalogue using the colour-mass relation from the PARSEC (PAdova and TRieste Stellar Evolution Code) models (Bressan et al. 2012; Chen et al. 2014, 2015; Tang et al. 2014; Marigo et al. 2017; Pastorelli et al. 2019, 2020) for ∼3 Myr (this is the cluster age estimated by Zapatero Osorio et al. 2002) and solar metallicity. We fitted a ninth-order polynomial to the (G − G_RP)−mass relation, as shown in Figure 2. The model is available up to G − G_RP = 1.45, which corresponds to ∼0.1 M⊙ (mid-M dwarfs). For the redder stars, we fixed the mass to 0.05 M⊙. While this assumption is not ideal, the limited number of low-mass stars, combined with the incompleteness in Gaia at this distance, results in a minimal impact on the computation of the systemic velocity.

The most massive member of the cluster, σ Ori AB, is a hierarchical triple system composed of a wide pair A+B, where star A is a close spectroscopic binary composed of Aa and Ab (Simón-Díaz et al. 2011) with dynamical masses of M_Aa = 16.99 ± 0.20 M⊙, M_Ab = 12.81 ± 0.18 M⊙, and M_B = 11.5 ± 1.2 M⊙ (Schaefer et al. 2016). We manually entered a total mass of 41.3 M⊙ for this star.

We acknowledge the fact that our mass estimates are only approximate and that we did not account for binaries. Assuming a symmetric distribution of binaries in clusters, any potential biases in the determination of the barycentre or the systemic velocity of the cluster should have cancelled out due to the large number of member stars (Perryman et al. 1998; Reino et al. 2018).

Membership list

To determine cluster members, we used a convergent point method developed by Perryman et al. (1998) that is based on the comparison of the kinematic properties of candidate stars with the systemic velocity of the cluster itself. We briefly summarize the method and refer the reader to the original paper (Perryman et al. 1998).

Table 1. Selection criteria of the preliminary cluster members. Notes. Angle β corresponds to a radius of 5 pc at the cluster's distance, while rµ is the radius in the proper motion space. N and N_RV are the numbers of all preliminary members and of those with radial velocity measurements, respectively.
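A minimal sketch of the mass estimation step described above (the isochrone arrays below are placeholders standing in for the real 3 Myr PARSEC grid; the ninth-order fit, the G − G_RP = 1.45 validity limit, and the 0.05 M⊙ floor follow the text):

```python
import numpy as np

# (colour, mass) pairs from a 3 Myr solar-metallicity PARSEC isochrone;
# placeholder arrays here, the real grid comes from the model tables.
iso_colour = np.linspace(-0.1, 1.45, 200)        # G - G_RP [mag]
iso_mass = np.linspace(6.0, 0.1, 200)            # [M_sun], illustrative only

coeffs = np.polyfit(iso_colour, iso_mass, deg=9)  # ninth-order polynomial fit

def estimate_mass(g_minus_grp):
    """Colour-mass relation, valid up to G - G_RP = 1.45 (~0.1 M_sun);
    redder stars are fixed to 0.05 M_sun as in the text."""
    colour = np.asarray(g_minus_grp, dtype=float)
    mass = np.polyval(coeffs, colour)
    return np.where(colour > 1.45, 0.05, mass)
```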
In the first step, we determined the transversal and radial velocities of the clusters from the preliminary lists of members. The values were computed as a mean, weighted by the stellar mass. Next, we prepared the membership list by computing the transversal and radial velocities a member star would have at the location of the candidate star in question. We computed the difference between the expected and the actual velocity (vector z_i for the i-th star) and determined the c value as c = zᵀ Σ⁻¹ z. Here, Σ is a confidence region that helps us estimate the scaled distance between the expected and the observed velocity vector, z_i. It is a sum of two covariance matrices and incorporates measurement uncertainties and correlations between the observables for the measured and expected values. We can understand c as a proxy for membership probability, where low c values represent high membership probability. The value c follows a χ² distribution for the selected number of degrees of freedom. For three degrees of freedom (proper motions and radial velocity are known), a 3σ confidence interval translates to c = 14.16. If the radial velocity is not known (two degrees of freedom), c = 11.83. Stars with c values below these limits are considered members.

The convergent point method identifies stars whose motion in space corresponds to the velocity of the cluster. However, when dealing with close cluster pairs or complex star-forming regions where components have similar velocities, the method may struggle to accurately distinguish between the members of these nearby clusters. The determination of each cluster's membership was conducted independently using the full Gaia input catalogue, which could have led to stars being assigned to more than one cluster. To ensure that each star is only assigned membership to one cluster, particular consideration was given to stars that appeared to have multiple memberships. In such cases, we assigned their membership to the cluster associated with the lowest c value.

The membership list of pOBP-near was prepared in a slightly different way: since the algorithm is not robust enough to deal with such sparse populations, its results encompassed a wider range of proper motions and overdensities in the sky. Effectively, the first results for pOBP-near contained a large fraction of the σ Ori members and the rest of the young groups. We solved this problem by removing the members of the other clusters in this work from the pOBP-near catalogue. Similarly, we eliminated the rest of the clusters from the input catalogue for cluster NGC 2024, and implemented an exclusion criterion for stars with α < 83.5 deg to prevent the inclusion of another nearby prominent overdensity.

To minimise the contamination rate by distant stars with larger astrometric uncertainties that appear to be members by chance, we introduced a radial cut from the clusters' centres based on their tidal radii. We determined the tidal radius of each cluster using Eq. 3 from Röser et al. (2011); their relation, M_cluster = 4A(A − B) r³ / G, estimates the mass of the cluster within the tidal radius r. Oort's constants A = 14.5 km s−1 kpc−1 and B = −13.0 km s−1 kpc−1 were determined from 581 clusters within 2.5 kpc by Piskunov et al.
(2006); G is the gravitational constant. We compared this relation with the cumulative radial mass distribution of the cluster; the intersection with the model gave us the tidal radius. Finally, we limited the volume of each cluster to 5 tidal radii. We present the list of members for the analysed populations in Appendix A and evaluate it in Section 4.

Discussion

Our membership analysis has confirmed the presence of three distinct populations in the σ Ori group, consisting of 251 members in total. The precision of the proper motion measurements has unveiled that σ Ori is not as homogeneous as previously believed. Instead, it consists of two distinct parts, with the second part being RV Ori. RV Ori is spatially separated from the rest of the cluster and exhibits similar but distinguishable proper motions, suggesting that it represents its own distinct population. The third component within 10 pc is the Flame association, which overlaps with the younger Flame Nebula cluster NGC 2024 but has the same age as the σ Ori cluster. Below, we provide a description of each component.

σ Orionis Cluster

σ Ori has traditionally been seen as a homogeneous young population placed behind a sparse and slightly older group that is part of OB1b (OBP-near). Thanks to precision astrometry from Gaia, we were able to split this cluster into the main part and the less populous RV Ori association that is described in the next section.

We identified 217 objects in the σ Ori cluster in the volume of its 5 tidal radii; 82 of them are new and not listed in any known catalogues. Most of the members are concentrated in the centre, which is surrounded by a sparse halo, as shown in Figure 3. On average, the core and the halo show slightly different proper motions, (1.488, −0.617) mas yr−1 and (1.374, −0.915) mas yr−1, respectively, but share the rest of the properties.

The colour-magnitude diagram is shown in Figure 4. The population is composed of nine OBA stars, one F and two solar-type stars; the rest are low-mass objects. There are five AFG stars below the 5 Myr isochrone, and four of them are located in the outskirts of the cluster. Three of these are found in the direction of the Flame Nebula; their potentially high extinction might explain their position in the diagram. We designated all five stars as tentative members.

This cluster was believed to be located at the distance of 388 pc (e.g. Caballero et al. 2019) and centred around the star σ Ori. In Figure 5 we show that the majority of members are found beyond that distance, roughly between 390 and 415 pc, with a median value of 402 pc and with a spread of 9 pc. On the other hand, the distance to the star σ Ori AB measured by interferometry is 387.5 ± 1.3 pc (Schaefer et al. 2016). This is contrary to our expectations; we expected to find such a massive star in the centre of the cluster. This interferometric parallax was computed with measurements from the Center for High Angular Resolution Astronomy array (CHARA), the Navy Precision Optical Interferometer (NPOI), and the Very Large Telescope Interferometer (VLTI). Based on the average systematic offset between Gaia DR2 and very long baseline interferometry astrometry of −75 ± 29 µas, as reported by Xu et al. (2019), future work should explore the potential systematic offset for the data used in this work before making any conclusions about the position of σ Ori AB with respect to the cluster centre.
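The two quantitative ingredients used above, the Perryman-style membership statistic and the Röser et al. (2011) tidal radius, can be sketched in a few lines of Python (a toy implementation under the stated conventions; assembling z and Σ from the Gaia covariances is assumed to happen elsewhere):

```python
import numpy as np
from scipy.stats import chi2

def membership_c(z, Sigma):
    """Statistic c = z^T Sigma^-1 z for the difference z between expected
    and observed velocities; low c means high membership probability."""
    z = np.atleast_1d(z)
    return float(z @ np.linalg.solve(Sigma, z))

# 3-sigma thresholds: c follows a chi-squared distribution with 3 degrees
# of freedom when the radial velocity is known, 2 otherwise.
c_lim_3dof = chi2.ppf(0.9973, 3)   # ~14.16
c_lim_2dof = chi2.ppf(0.9973, 2)   # ~11.83

# Tidal radius from M_cluster = 4 A (A - B) r^3 / G, with the quoted
# Oort constants A = 14.5 and B = -13.0 km/s/kpc.
G = 4.30091e-6            # kpc (km/s)^2 / M_sun
A, B = 14.5, -13.0        # km/s/kpc

def tidal_radius_pc(m_cluster_msun):
    r_kpc = (G * m_cluster_msun / (4.0 * A * (A - B))) ** (1.0 / 3.0)
    return 1000.0 * r_kpc   # e.g. ~5 pc for a cluster of ~50 M_sun
```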
The proper motions of σ Ori are small due to the dominant motion occurring along the radial direction. Consequently, prior to Gaia, cluster membership was primarily surveyed using radial velocities. Jeffries et al. (2006) determined the cluster radial velocity of 31.0 ± 0.1 km s−1, with an external error of ±0.5 km s−1. While we used only Gaia's radial velocities in our kinematic membership determination, we additionally list values from other sources as a supplementary test for membership reliability and to search for potential outliers (in total, we added radial velocities for 14 stars from Sacco et al. 2008). We list these values in Table A.1. Our median values of RV = 30.7 km s−1 and 31.0 km s−1 for σ and RV Ori, respectively, confirm the finding of Jeffries et al. (2006). All outliers in radial velocity are positioned at around ±35 km s−1 from the mean cluster velocity. Their c values are larger than 5, which makes them slightly less reliable members. Five of them have a high RUWE parameter, which makes them potential binaries.

The star σ Ori is the hottest star in the cluster (spectral type O9.5) and is responsible for the illumination of the nearby Horsehead Nebula (Caballero 2007). As mentioned earlier, it is in fact a hierarchical triple system composed of a wide pair A+B and a close spectroscopic binary Aa and Ab (Simón-Díaz et al. 2011). Its c value is 11.4 and its distance from the cluster centre is 12.7 pc.

Other well-known bright members of this cluster in the literature are σ Ori C, D, and E. However, our analysis only found D as a member and rejected C and E. The parallax of σ Ori E (Gaia DR3 3216486478101981056) of 2.308 ± 0.065 mas places it at 433 +12.6 −11.9 pc, which is almost 20 pc behind the cluster. In the case of σ Ori C (Gaia DR3 3216486439450208000), the star is located at 405 pc, but its proper motion of µα* = 0.358 ± 0.028 mas yr−1 and µδ = −1.064 ± 0.027 mas yr−1 does not agree with the cluster parameters.

The list of members includes a white dwarf (Gaia DR3 3216956892983762944). It is listed in the catalogue of white dwarfs in Gaia (Gentile Fusillo et al. 2021). This star is a reliable member of the cluster (c = 7.68), but it is located in the outskirts in the sky, and it appears at the edge of the cluster overdensity in the proper motion space (Figure 6). Since σ Ori is a very young association, we assigned this white dwarf as a tentative member whose membership status should be examined in more detail.

RV Orionis association

We identified the association of 24 stars, located adjacent to σ Ori yet distinct in the parameter space, as the RV Orionis association (RV Ori), named after its brightest member. We identified four members that were not previously recognised as part of the σ Ori cluster. This association is composed exclusively of low-mass stars, and its most luminous star, RV Ori (Gaia DR3 3216500531234897920), is a late-K type and exhibits an apparent Gaia G magnitude of 13.7 mag.

RV Ori partially overlaps with the σ Ori cluster in the sky (Figure 3) but is clearly separated in the proper motion space, as demonstrated in Figure 6. This is also reflected in the transversal velocities in Figure 3, which differ from those of the σ Ori members. It seems that these stars are coming from the projected direction of the nearby Horsehead Nebula that is located at the same distance as Sigma Orionis AB (e.g. Hwang et al.
2023). RV Ori is positioned at the same distance from the Sun as σ Ori (∼390-415 pc, with a median value of 402 pc and a spread of 5 pc; Figure 5). While there is clear evidence that RV Ori is distinct from σ Ori, the two likely formed at the same time because both populations overlap in the colour-magnitude diagram (Figure 4).

Figure 7 reveals the consistency of the radial velocities in this association. The weighted mean for this group is 31.0 km s−1. Another striking observation in this plot is the fact that the majority of the members have very low c values, making them reliable members. The lack of stars with higher c values in this group comes from the fact that our membership determination method includes disentanglement of stars with double membership (see Section 3.3). It turned out that stars with higher c values in this association more likely belong to the main σ Ori cluster.

NGC 2024 and the Flame association

The Flame Nebula Cluster (NGC 2024) is seen as a strong overdensity in the sky near the star Alnitak (ζ Orionis), although they are not related. It appears very sparse in the proper motion space.

Our kinematic results for this cluster and the RV Ori association both revealed stars that are members but have distinct proper motions adjacent to the σ Ori stars, and are all located in the direction of the Flame Nebula (Figure 3). Additionally, their age seems similar to the age of the σ Ori cluster (Figure 4), while NGC 2024 is younger. In fact, Getman et al. (2014) report an age gradient in NGC 2024, with the values increasing from 0.2 Myr in the core to 1.5 Myr at the distance of ∼1 pc from the centre. These stars are embedded in the Flame Nebula and affected by its strong extinction (A_V ∼ 20 at the core and A_V ∼ 6 in the periphery; Getman et al. 2014). We thus named the new group the Flame association. It is composed of 19 stars; all of them are low-mass and are new members of the σ Ori star-formation site. Here we report the Flame association not only surrounding the younger core of NGC 2024, with most of the members on the southern side, but also exhibiting proper motions more akin to those of σ Ori than NGC 2024. This observation underscores the need for further investigation of the complex star-formation scenario in this region.

Our NGC 2024 consists of 62 stellar members, including the outskirts, while the catalogue from Getman et al. (2014) lists 121 objects in a smaller volume around the core. We found a crossmatch for 90 stars from their membership list in the Gaia catalogue. Our input catalogue contains 73 stars from this crossmatched list. Finally, we found 29 stars in common between the work of Getman et al. (2014) and our catalogue. We note that this is a highly extinct region due to the Flame Nebula, and dedicated infrared surveys like the MYStIX project described in Getman et al. (2014) are more suitable to achieve a higher detection rate.
pOBP-near

Our pOBP-near occupies a large volume in the sky, while it is seen as a strong overdensity in the proper motion space. The majority of its members in our volume are located in front of the σ Ori cluster. While pOBP-near is clearly older, the distance distribution of both populations shows that they are located in the proximity of each other. One group seemingly extends into the other. This situation could potentially lead to partial membership confusion, where a small portion of σ Ori members might inaccurately be attributed to pOBP-near, and vice versa. However, this is likely not common. We discuss contamination in Section 4.6. We note again that our list of OBP-near members is not complete (hence the prefix 'p'). We partially characterised it in this work in order to disentangle it from the σ Ori complex and reduce the contamination rate.

Spectroscopic indicators of a young age

We prepared a compilation of the existing spectroscopic data to support the membership status of our members. The table of cluster members with spectroscopic measurements is presented in Appendix C. There are 96 stars with a measured equivalent width of the Li I 6708 Å line, 106 stars with Hα, and 80 stars with the Na I doublet (8183 and 8195 Å).

Very young stars might be affected by non-photospheric veiling coming from hot boundary layers in active accretion discs, which can diminish the equivalent widths of absorption lines (Basri et al. 1991). To identify stars that might be affected by veiling, we applied the chromospheric criterion from Barrado y Navascués & Martín (2003) that is based on the strength of the Hα emission. The details are described in Appendix B, and the veiling flags are given in Table C.1. We show equivalent widths of Hα for members in Figure 8, distinguishing the veiled and non-veiled stars for RV and σ Orionis (no Hα measurements are available for the other populations). Interestingly, among the stars with Hα equivalent width measurements, RV Ori exhibits a higher proportion of veiled stars (53%) than σ Ori (32%). On the other hand, their colour-magnitude sequences overlap and do not indicate any significant age difference.

The presence of lithium has often been used to differentiate between the lithium-rich members of a young cluster and old lithium-depleted background stars. However, the Orion region is populated with many young groups of stars that may still show lithium but are not truly σ Ori cluster members and contaminate our sample (e.g. from the OBP-near group discussed in Section 4.4). We nevertheless prepared lithium data to reduce the possibility of background contamination. We thus performed the lithium test of youth to search for lithium-depleted stars that might be non-members, depending on their colour. Figure 9 shows Li pseudo-equivalent widths (pEWs), described by, for example, Pavlenko et al. (2007), for late-K and M dwarfs in both clusters. We note that the measured pEW(Li) in veiled stars is subject to high variability.

According to Zapatero Osorio et al. (2002), the majority of the members in the σ Ori cluster are considered too young to exhibit depleted lithium. However, some exceptions have been observed, suggesting the presence of a few members with depleted lithium, which might indicate an older age (Kubiak et al. 2017). For instance, Sacco et al. (2008) reported approximately 25 members of σ Ori with a pEW(Li) of less than 150 mÅ.
In our sample, the distribution of lithium indicates a prevalence of very young Li-undepleted stars. However, there are three stars that are not veiled and have no (or almost no) lithium left in their atmospheres. They are all reliable members according to their c value. We plan to address the question of lithium depletion and possible age spread in the cluster with additional spectroscopic observations in our subsequent paper.

Last but not least, the final spectroscopic indicator of youth considered in this work is the Na I subordinate doublet in the far red part of the optical spectrum. The equivalent width of Na I is known to be sensitive to surface gravity in M-type objects, and it has been used as an indicator of young age (Martín et al. 2010). The stellar radius changes fast in young contracting stars on Hayashi tracks, and thus surface gravity can be used as an age indicator to distinguish between stars above and on the main sequence. Figure 10 reveals a colour-dependent trend of pEW(Na), with a few outliers displaying less sodium than the rest of the sample. Similar to lithium, sodium is also subject to variability due to veiling, but to a lesser extent because it is located at a longer wavelength. Most of the outliers are veiled, except stars a, b, c, and d, as annotated in Figure 10. Stars c and b are on the cluster sequence in the colour-magnitude diagram, raising questions as to why their sodium is low, and whether they are truly veiled despite their Hα being low. Active accretion may be episodic. Their Hα and Na measurements were not obtained simultaneously, and thus it may be that they were quiet at the epoch when Hα was observed, but veiled when Na was observed.

Star a appears to be overluminous with respect to the cluster sequence, which is consistent with its small pEW(Na). These observations indicate an age that is younger than the two clusters. Although it only sets an upper age limit, its undepleted lithium (0.51 ± 0.06 Å) speaks in favour of its very young age. This star is a reliable kinematic member (c = 3.4, although there is no radial velocity measurement), so it must have formed at the end of the Sigma Orionis star-formation event. The existence of a few members much younger and much older than the mean age of the σ Ori cluster could provide important information about the star formation history.

Comparison with the literature, contamination, and completeness

The contamination rate is about 12% in σ Ori, based on the number of stars that are not found on the cluster sequence of the colour-magnitude diagram. The majority of these outliers are located in the outskirts of the cluster, so the contamination is likely smaller in the cluster's core.

We would like to emphasize that we refrained from making any cuts in parallaxes, proper motions, or magnitudes, as our primary goal was to achieve maximum completeness in the low-mass range. For almost 90% of the targets, the relative parallax errors are less than 10%, and they remain below 20% even for the faintest targets with G ∼ 20. At this limit in Gaia, the completeness ratio for a 5- and 6-parameter solution is 92.2% (Lindegren et al. 2021).
Most of the existing membership analyses were conducted before the Gaia era. Many of these studies primarily focused on the low-mass members or were part of large-scale investigations (e.g. Kounkel et al. 2018). However, our current work benefits from Gaia's precision measurements, allowing us to delve into the fine structure of this cluster and conduct a more detailed study. In Table 2 we compare our new membership list with the literature. The comparison is based only on the stars from the literature that were also included in our input catalogue. We took all members from σ Ori, RV Ori, and the Flame association into account. The comparison shows 50-85% agreement with the previous works. There are 82 stars in σ Ori, 4 in RV Ori and 10 in the Flame association that are new members and are not found in any other catalogue mentioned here. They are mostly located in the outskirts of the cluster. On the other hand, there are nine stars that are listed as members in three or more reference catalogues and rejected in this work. They have high membership probabilities, but are located beyond 5 tidal radii.

Table 2. Comparison with the literature. Notes. We list the total number of stars in the catalogues and the overlap with our input catalogue. Among the objects in the overlap, we counted the number of stars we confirmed and rejected as members in this work. Some of these rejected members have high membership probability but are located beyond 5 tidal radii from the cluster centre.

Conclusions

Sigma Orionis is an important benchmark cluster in the field of stellar and substellar formation and evolution due to its youth. However, it is located in a complex star-forming site and is thus challenging to isolate. In this work we used high-precision astrometry from Gaia DR3 to re-evaluate its membership using the modified convergent point algorithm. We explored the fine structure of this young star-forming region and described σ Ori, RV Ori, and the Flame association. In total, we report 96 members that had never before been listed in the literature.

We supported their membership and young status with spectroscopic indicators, such as the equivalent widths of lithium, sodium, and Hα. Interestingly, RV Ori exhibits a higher proportion of veiled stars (53%) than σ Ori (33%). On the other hand, their members lie on the same sequence of the colour-magnitude diagram. In future work, we plan to study the age distribution in this complex region by using several indicators (isochrones, lithium depletion, and surface gravity).

Knowledge about the substructure within complex star-forming regions plays a crucial role in stellar astrophysics, particularly when studying the initial mass function. However, in the substellar regime, the data are relatively sparse, leading to less well-constrained results. In this context, σ Ori proves to be highly advantageous for such studies. Its youth, minimal extinction, and relative proximity to the Sun make it particularly convenient. Furthermore, the upcoming Euclid space mission will observe σ Ori, providing valuable insights into substellar objects extending down to planetary-mass objects, and promises to enhance our understanding of these objects in the cluster.
Fig. 1. Input catalogue (grey dots) and the positions of the preliminary members of the four different overdensities in the σ Ori region. The clusters σ Ori and RV Ori show overdensities in the sky as well as in the proper motion space. On the other hand, the pOBP-near component is dense in proper motion space but sparse in the sky, while the opposite is true for the Flame association.

Fig. 2. Colour-mass relation used to estimate stellar masses. We fitted a ninth-order polynomial that is valid up to G − G_RP = 1.45, which corresponds to ∼0.1 M⊙ (mid-M dwarfs).

Fig. 3. The σ Orionis star-forming region in the sky. It comprises three prominent subcomponents (the σ Ori cluster, the RV Ori association, and the Flame association) as well as a halo. The Flame Nebula Cluster (NGC 2024) is located in the same region but appears younger. The right panel shows the central area of the region on top of the Digitized Sky Survey image from Aladin (Boch & Fernique 2014). Each of the populations has its distinct transversal velocities (shown in arbitrary units).

Fig. 4. Colour-magnitude diagram for σ Orionis, RV Orionis, and the Flame association. Yellow dots represent NGC 2024, which is affected by high extinction. The upper plot shows all members, while the bottom plot focuses only on low-mass stars. As a guideline, we added 1, 3, and 5 Myr PARSEC isochrones for [M/H] = 0.

Fig. 5. Distance distribution for all components of the σ Ori star-forming region. The triple star σ Orionis AB is located in front of the cluster according to the interferometric distance from Schaefer et al. (2016).

Fig. 7. Radial velocity distribution versus the membership criterion, c. Outliers in radial velocity have relatively high c values (but are still considered members) and are binary star candidates.

Fig. 8. Hα emission. Stars with very high emission (empty dots) are affected by veiling. The chromospheric criterion (solid line) to distinguish between the veiled and non-veiled stars follows Barrado y Navascués & Martín (2003) and is described in Appendix B.

Fig. 9. Pseudo-equivalent width of lithium for σ and RV Ori members. Most of the stars have the cosmic amount of lithium left. A large fraction of the members are affected by veiling, which causes variation in the measured pEW(Li). Among the stars that are not veiled, we found two stars with completely depleted lithium and one star with semi-depleted lithium.

Fig. 10. Pseudo-equivalent width of sodium in the σ and RV Ori clusters. There is a clear dependence of pEW(Na) on colour. The presence of veiling can affect the measured pEW(Na). Outlier a is likely younger than the rest of the sample because it shows weaker Na equivalent widths and is more overluminous than the rest of the stars in σ Ori with similar colours.

Fig. B.1. Relation between the colour and spectral type from Pecaut & Mamajek (2013) as described in Appendix B. We used a linear interpolation to assign a spectral type number (following the convention from Barrado y Navascués & Martín 2003) from the G − G_RP colour of the star.

Table A.1. The entire table for all clusters in this work is available only in electronic form at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (130.79.128.5) or via https://cdsarc.cds.unistra.fr/cgi-bin/qcat?J/A+A/. (a) Proper motion from Hipparcos (van Leeuwen 2007). (b) Interferometric parallax (Schaefer et al. 2016).
9,160.2
2024-04-03T00:00:00.000
[ "Physics" ]
Update on the geographical distribution of freshwater crabs of the Pseudothelphusidae family in the semi-arid region of northeastern Brazil.

The humid forest zones of northeastern Brazil are recognized as endemic hotspots for pseudothelphusid crabs. In this study, we report new occurrence records of the pseudothelphusids Fredius ibiapaba and Kingsleya attenboroughi in humid soil and streams within humid forests of the semi-arid region of northeastern Brazil. These new records expand the geographical distribution of these crabs, highlighting their potential to inhabit humid forests throughout this region. Furthermore, this information indicates that these crabs, especially K. attenboroughi, are not confined to slopes and patches of humid forests in highland swamps but can also extend to springs in other areas of the Brazilian semi-arid region.

However, recent studies conducted in northeastern Brazil have revealed that the Pseudothelphusidae fauna in this region is underestimated (Pinheiro & Santana, 2016; Pralon et al., 2020; Santos et al., 2020a). Additionally, geographic distribution studies and descriptions of new species have expanded the eastern limits of pseudothelphusid distribution into the states of Ceará (Magalhães et al., 2005; Pinheiro & Santana, 2016; Santos et al., 2020a) and Piauí (Pralon et al., 2020).

The restricted geographic distribution and anthropogenic pressures indicate a favorable scenario for the extinction of freshwater crab species (Vogt, 2013; Magalhães, 2016). Dalu et al. (2017) suggest that establishing the actual distribution area of a species is crucial for conservation actions. Therefore, the present study reports new occurrence records of Pseudothelphusidae freshwater crabs in the semi-arid region of northeastern Brazil, thereby expanding their distributional range.

MATERIAL AND METHODS

Sampling was carried out between June 2021 and November 2022 in poorly explored humid forest areas located in the states of Ceará and Piauí, in the semi-arid region of northeastern Brazil. These two states were explored due to previous records of pseudothelphusid species (Magalhães et al., 2005; Pinheiro & Santana, 2016; Pralon et al., 2020; Santos et al., 2020a, b; Araújo et al., 2022) and the presence of humid forest areas that have been under-explored for freshwater crabs.

During field sampling, crab specimens were collected during the day using the active search method, inspecting small streams on humid forest slopes, as well as soils with litter, the characteristic environments where Pseudothelphusidae occur in northeastern Brazil (Magalhães et al., 2005; Pinheiro & Santana, 2016; Pralon et al., 2020; Santos et al., 2020a, b). The sex was determined by the presence (males) or absence (females) of gonopods, according to Magalhães (2003). Subsequently, the crabs were individually placed in plastic containers and euthanized by cooling on crushed ice. Then, we packed the crabs in a thermal box and transported them to the Laboratório de Crustáceos do Semiárido (LACRUSE) at the Universidade Regional do Cariri (URCA), in the municipality of Crato, Ceará, Brazil.

In the laboratory, the specimens were identified as Fredius ibiapaba and Kingsleya attenboroughi according to Santos et al. (2020a) and Pinheiro & Santana (2016), respectively, based on the morphology of the male gonopods (Figs.
1, 2). Subsequently, specimens were measured using a digital caliper (precision of 0.01 mm) for carapace width (CW = distance between the lateral margins of the carapace), pleon width (PW = width of the 4th pleon somite for females and the 3rd for males), and length of the larger propodus (LP = distance between the base and the distal portion of the larger propodus). Finally, specimens were preserved in 70% ethanol and deposited in the LACRUSE carcinological collection.

The new site showed few signs of anthropic intervention, with dense vegetation, humid soil rich in organic matter, and well-shaded conditions, favoring the occurrence of the species. However, all these factors were present only in a small area, limiting the distribution of specimens to the surroundings of a headwater.

At Fonte do Caranguejo, we found only one male specimen, LACRUSE 305 (CW = 35.05 mm, PW = 11.34 mm, and LP = 26.64 mm), of F. ibiapaba (Fig. 5E) inside a burrow approximately 40 cm deep (Fig. 5B, C), constructed under rocks in the humid soil of a small forested area around the spring near the rocky wall (Fig. 6A). Due to its proximity to the urban area, the site shows strong evidence of human presence, such as solid waste (Fig. 6D-F).

Despite the small number of specimens collected, we observed the presence of several burrows in the humid soil of the visited sites, with openings of various diameters, indicating the presence of other crabs. We also collected some juvenile specimens that were later released.

Remarks: Two specimens of Kingsleya attenboroughi were found in Riacho Jacaré. The specimens were hidden among leaves and rocks, within water puddles in a small spring area (Fig. 7). This new occurrence area is approximately 227.17 km east of the nearest recorded occurrence point (Fig. 4D), located in the Arajara district, municipality of Barbalha, state of Ceará (Fig. 4C) (Pinheiro & Santana, 2016; Araújo et al., 2022). A male (CW = 56.94 mm, PW = 15.41 mm, LP = 54.25 mm) and a female (CW = 46.09 mm, PW = 24.24 mm, and LP = 31.16 mm; LACRUSE 306) were collected (Fig. 5A, B). Riacho Jacaré has tall and dense riparian vegetation, making the creek well shaded and rich in organic matter, important factors for the presence of the species. However, the occurrence of K. attenboroughi in Piauí is restricted to a small area of stream sources in the midst of the caatinga, an environment characteristic of the Brazilian semi-arid region, with several shrubs, twisted trees, and medium-sized trees. This likely contributes to the small population of K. attenboroughi present in this area and makes it sensitive to anthropic pressures.

DISCUSSION

The new records of Fredius ibiapaba and Kingsleya attenboroughi in the semi-arid region of northeastern Brazil presented in this study expand their distributional range. Fredius ibiapaba is found in humid forest enclaves of the Serra da Ibiapaba, in patches of humid soil where it can construct its burrows (Santos et al., 2020a). This type of environment is present in other unexplored locations in the Brazilian semi-arid region, known as humid forests (Ab'Sáber, 1999; Tabarelli & Santos, 2004).

The new occurrence record of F. ibiapaba in Parque Estadual das Carnaúbas, on the northern limit of Serra da Ibiapaba, shares similar characteristics with the occurrence area previously reported by Santos et al.
(2020b). Therefore, our results emphasize the importance of forest zones with humid soil for this species. We believe that the actual geographical distribution of F. ibiapaba may be wider than currently known, as such environments are abundant along the eastern slope of Serra da Ibiapaba (Souza & Oliveira, 2006). Additionally, the new occurrence area of F. ibiapaba is highly restricted, indicating a likely small population and making it vulnerable to any environmental disturbance. Thus, the importance of and necessity for the conservation of Parque Estadual das Carnaúbas are underscored, as other populations, including the population of Sítio Caranguejo in the municipality of Ipu, occur in areas with significant environmental disturbances (Santos et al., 2020b). Therefore, as in other successful examples highlighted by Dalu et al. (2016), the preservation of Parque Estadual das Carnaúbas is crucial for the conservation of the population of F. ibiapaba in the municipality of Granja.

Prior to this study, K. attenboroughi was considered a species endemic to the state of Ceará, occurring only in the narrow strip of humid forest on the slopes of Chapada do Araripe (Pinheiro & Santana, 2016; Araújo et al., 2022), with a strong association with areas near springs, always found in shaded streams with clear water, rich in organic matter and rocks (Pinheiro & Santana, 2016; Lima, 2018; Araújo et al., 2022). However, the records presented here reveal a new occurrence of K. attenboroughi in a small stream near the headwaters of riacho Jacaré, municipality of São João da Canabrava, in the state of Piauí. This area is not located on the slopes of elevations with humid forests, implying new and important aspects for the species. Therefore, K. attenboroughi can no longer be considered endemic to the state of Ceará or to the slopes of highland swamps. This might represent a substantial increase in the potential area of occurrence of K. attenboroughi, making its geographic distribution likely underestimated. Another significant aspect of this new occurrence area is its environmental characteristics, similar to the environments present in already known occurrence points (Araújo et al., 2022), reinforcing the idea that such environmental characteristics are essential resources for the presence of these crabs, as well as consolidating the typical environment of the species.

The present results underscore the importance of biological surveys for freshwater crabs in northeastern Brazil. However, they also reveal that the population of K. attenboroughi from riacho Jacaré, as well as those of F.
ibiapaba from Ceará, face significant anthropic pressures. The main issues encountered by Pseudothelphusidae crabs in northeastern Brazil include deforestation in areas adjacent to springs and streams, the presence of domestic animals such as pigs inside the rivers where the species occur, and the presence of garbage and other pollutants in the areas where the crabs are found, exerting strong anthropic pressure.

ACKNOWLEDGMENTS: We thank the Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP) for the financial support and scholarships for APP and PHPN (#BP4-00172-00173.01.00/20; #BMD-0008-02422.01.08/22), the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for granting scholarships to CAN, CAMM, JGA and WMN (88887.637365/2021-00; 88887.717864/2022-00; 88887.717871/2022-00; 88887.511078/2020-00), and the Universidade Regional do Cariri (URCA) for supporting studies on decapod crustaceans. We also thank João Rafael, Gustavo Alves, Joldam Fortuna, Edimar Machado, Dionísio Ecologista and Djanilson Ecologista for the information about the existence of crabs and for the great support in the field.

Figure 4. (A) Distribution of Pseudothelphusidae in northeastern Brazil; (B) previous records and new occurrences of Fredius ibiapaba; (C) previous occurrence of Kingsleya attenboroughi; (D) new occurrence of K. attenboroughi; (E) previous occurrence of Kingsleya parnaiba. (B-E) Digital Elevation Model (DEM) showing the altimetry of the Pseudothelphusidae areas of northeastern Brazil (warm colors indicate higher elevations). Map created in the free software QGIS 3.16.14.

Figure 6. Habitat of Fredius ibiapaba at Fonte do Caranguejo in the municipality of Viçosa do Ceará. (A) Panoramic view of the place of occurrence; (B) juvenile of F. ibiapaba; (C) burrow of F. ibiapaba built in damp soil; (D-F) traces of strong human presence at Fonte do Caranguejo.
2,350.6
2024-08-09T00:00:00.000
[ "Environmental Science", "Geography", "Biology" ]
Generation and Applications of Plasma (An Academic Review)

Plasma, being the fourth and most abundant form of matter, exists extensively in the universe in the intergalactic regions. It provides an electrically neutral medium of unbound negatively and positively charged particles, which can be produced by subjecting air and various other gaseous mixtures to a strong electromagnetic field, or by heating compressed air or inert gases to create the negatively and positively charged particles known as ions. Nowadays, many researchers are paying attention to the formation of artificial Plasma and its potential benefits for mankind. The literature is sparsely populated with the applications of Plasma. This paper presents specific methods of generation and applications of Plasma that benefit humankind in various fields, such as the electrical, mechanical, chemical and medical fields. These applications include hydrogen production from alcohol, copper bonding, semiconductor processing, surface treatment, Plasma polymerization, coating, Plasma display panels, antenna beam forming, nanotechnology, the Plasma torch, Plasma pencils, the low-current non-thermal Plasmatron, treatment of prostate cancer, Plasma source ion implantation, cutting by Plasma, Plasma etching, pollution control, neutralization of liquid radioactive waste, etc. As a result, the worth of Plasma technology in the medical industry is increasing exponentially, which is closing the gap between its benefits and the cost of the equipment used for generating and controlling it.

Introduction

Plasma has been observed by mankind in nature since long ago. It is recognized as the 4th state of matter. Most of the universe comprises Plasma rather than solid, liquid or gas; astrophysicists have found that galaxies are mostly composed of this form of matter. The term Plasma was initially used by a medical scientist, Purkinje (1787-1869), who referred to it as the clear liquid which is left after the various corpuscle components and protoplasm are removed from blood, but this was not the Plasma which is nowadays recognized as the 4th state of matter. That was first discovered by William Crookes in 1879, and this state of matter was officially named 'Plasma' by Irving Langmuir in 1928.

The theory of the Plasma sheath, the boundary layer formed between solid particles and ionized Plasma, was developed by Langmuir. He also discovered the periodic variations of electron density in certain regions of a Plasma discharge tube, which are known as Langmuir waves. This was how Plasma physics came into being. Plasma mainly consists of electrons and heavy charged particles (ions). Plasma can therefore be defined as a partially ionized gas, usually produced by an electrical discharge at near-ambient temperature. Due to the extensively increasing potential and benefits of Plasma in recent years, the range of Plasma applications is growing very rapidly.

Scientists have long tried to model this form of matter and to derive equations describing this state. These equations apply to many proposed controlled thermonuclear reactors. Many issues regarding this state of matter were raised in the past, such as the localization of anisotropic heating in Plasma. To resolve such concerns, various models have been developed. These include models that give information about the frequency dependence of the resistivity, the associated processes of absorption and emission of radiation, and the effect of frequencies [1][2][3].
Nuclear physics usually requires slow neutrons for many of its operations, such as fission and fusion reactions, and Plasma also finds a role there. Later studies revealed that Plasma exists in several other forms, such as non-equilibrium Plasma, which serves as an appropriate medium for modifying the surface properties of materials. To build an effective treatment with energy- and time-saving features, various Plasma parameters are used, depending on the type of materials and their properties. Such treatment techniques drew on organic chemistry during the late 20th century. Their efficient use in synthetic fabrication has made them popular in various industrial products, and in the field of artificial products they also provide a platform for advanced research on the integration and control of organic materials. Moderately and highly ionized Plasma can also provide temperature-resistive effects for inorganic materials such as ceramics, alloys, and glass. Plasma is used in medical applications during biological and chemical distillation, Plasma therapy, sterilization, and the blood coagulation process. It also offers the ease of producing low-temperature Plasma in open air. Plasma technology can efficiently support the design of equipment for effective diagnosis and monitoring of disease levels, and of tools used during medical procedures. Reviews by pharmaceutical scientists on the use of Plasma for the diagnosis of diseases report several positive effects; for example, Stoffels [4] describes the invention of the Plasma needle, which is of practical use in the treatment of living tissues. Many other applications of Plasma in the medical field will be discussed later. In the future, a key problem to be faced will be the lack of energy resources for power generation. Scientists frequently consider the use of alternative energy resources, among which an important practical direction is the use of Plasma in power production. Electrical scientists have also identified the significance of this newly discovered state of matter and are finding ways to integrate it into electrical systems, as in the portable Plasma torch [5].

Categorization of Plasma
The process of plasma generation requires the application of energy to a gas, which causes the ionization of the gas [6].
i. Hot (thermal) Plasma is composed of electrons and heavy charged particles (ions) which are in thermal equilibrium with each other, i.e., both are at the same temperature. Thermal Plasma is in thermodynamic equilibrium and is abundantly found in the universe [14]. It can be produced by electrothermal and electromagnetic launchers; the Lehrstuhl für Raumfahrttechnik (LRT) [15] investigated micrometeoroid and space debris impacts for surface treatment, using accelerated small glass beads to provide hot, dense, short-wavelength, high-velocity Plasma that directly affects the material surface for coating purposes. The use of a Plasma antenna for beam forming relies on the interaction of Plasma elements, which includes scattering from the boundaries of Argon Plasma cylinders and electromagnetic wave transmission through Argon Plasma, driven by the incident electromagnetic wave. Such a Plasma antenna provides highly efficient beam radiation, beam forming, and beam scanning [16]. Ion implantation based on Plasma is an advanced technique for depositing thin films and modifying surfaces in many industrial fabrication processes. ii. Non-thermal Plasmas (cold Plasma) have electrons at a much higher temperature than the heavy charged particles; cold Plasma thus manifests behavior outside thermodynamic equilibrium [17]. Conventional ion sources include Plasma generators for ion extraction and acceleration; the most commonly used techniques to create the discharge in Plasma generators are electron cyclotron resonance and microwave-frequency ionization sources [18]. In recent years, the technology to produce Plasma at high temperature without gaseous collisions has been of interest due to its demand in space physics, inertial fusion, and magnetic fusion [19]. The frequency and power of the radiation emitted by artificial Plasma can be modified by using vacuum microwave oscillators as ionization sources [20]. The Plasma gun technique for high-energy-density Plasma uses electrodes connected to high-current sources, such as Marx generators or magnetic flux compression generators, to ionize the injected gas, and foils to accelerate the ions by the Lorentz force, which results in the expulsion and flow of the resulting Plasma [21]. Plasma technology is growing rapidly for industrial as well as commercial applications. It has strong applications in metallurgical processes, in nanotechnology tools [22], and in forming antenna beams. Its application in Plasma lenses ensures biocompatibility with human tissues for the treatment of living cell tissues [23]. For surface treatment or implant material treatment targeting blood coagulation or wound healing, the preferred configuration is a floating-potential electrode, in order to avoid electrocution [17][12]. Its use in reactors with corona discharges and in Plasma display panels is more efficient than conventional flat-screen televisions, owing to the development of better-performing, highly efficient, low-cost advanced electronic driver circuit topologies; performance depends on the resonant energy source, the resonant network connection type, and the panel voltage levels [24]. The literature published on the use of Plasma in semiconductor processing is currently far more extensive than that published on its use in fusion research [19].
The purpose of this paper is to provide an overview of the development, advancement, and applications of Plasma, which are discussed in the following sections.

3. Plasma Generation Techniques
i. DC Glow Discharge: This is a non-thermal plasma technique in which a direct current (DC) source is connected between the cathode and anode plates, and the plasma gas is introduced between the plates for plasma generation (Figure 1: system for DC glow discharge). The application of the DC electric field across the cathode and anode plates accelerates the electrons in front of the cathode, which increases inelastic collisions between atoms and electrons and leads to ionization and excitation. Ions and new electrons created by the ionizing collisions are strongly accelerated by the electric field toward the cathode, which releases further electrons through ion-induced secondary electron emission. The increasing ionization collisions raise the concentration of new electrons and ions at the cathode and build up the glow of a self-sustaining discharge plasma. The electrons emitted from the electrodes are usually unable to sustain the discharge when there is no potential difference between the electrodes, while a constant applied potential difference maintains the DC discharge because of the large current flow. This technique is extensively utilized for material processing, as a light source, and for etching, ion deposition, and the physical modification of surfaces.
ii. Radio Frequency Discharge: This produces plasma by coupling energy either inductively or capacitively at frequencies in the radio spectrum (1 kHz to 10^3 MHz), using an AC power supply. Based on the coupling mechanism, there are two kinds of RF plasma discharge: the Capacitively Coupled Discharge (CCD) and the Inductively Coupled Discharge (ICD). In a capacitively coupled discharge system (Figure 2: capacitively coupled discharge system), the AC voltage source is connected to the powered electrode through a capacitor, while the other electrode is solidly grounded. The capacitor charges rapidly during the positive half-cycle of the voltage source, which causes a voltage drop over the plasma. Charging of the capacitor by the ion current and a drop of the plasma voltage also occur in the negative half-cycle, but the effect is much smaller because of the lower ion mobility. CCD is preferably utilized as a lower-temperature plasma processing medium for material processing in the aerospace and microelectronics fields. The inductively coupled discharge (Figure 3: inductively coupled discharge system) uses the configuration of a cylindrical helical coil, in which electromagnetic induction provides the corresponding electric current: passing an RF current through the coil develops a time-varying magnetic flux that induces a sinusoidal RF electric field, which sustains the plasma discharge and accelerates the free electrons. Transformer coupling usually aids in inducing the electromagnetic field between the induction coil and the plasma at a frequency range of 1 to 100 MHz, and it is efficient for generating the oscillation and electron acceleration. In some source configurations the electrons travel in curved orbits, which causes a significant decrease of electron losses to the walls; such sources are preferable for deposition and etching in the processing of semiconductor wafers, for modifying the surfaces of diamond films, and for fabrication purposes. (Table 1 summarizes the Plasma generation techniques and their properties.)
iii. Hydrogen Production from Alcohols: Non-thermal Plasma has more applications in industry than thermal Plasma. Methanol has a high H-to-C ratio, and it can be made from methane, which is abundant. Conversion of methanol to hydrogen can be accomplished by Dielectric Barrier Discharge (DBD) Plasma, corona discharge Plasma, surface-wave discharge Plasma, microwave Plasma, glow discharge Plasma, pulse charge Plasma, etc. [6]. DBD is also known as silent Plasma. It is generated using two electrodes separated by a dielectric barrier a few millimeters in thickness; three different variants of the setup used for producing DBD Plasma are shown in Figure 4 (three basic configurations). The useful product of the decomposition of methanol is hydrogen, with CO and CO2 as byproducts:

CH3OH → 2 H2 + CO
CH3OH + H2O → 3 H2 + CO2

The highest yield of hydrogen achievable by this process to date is about 28% [25].
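As a quick sanity check on the stoichiometry of the two reactions above, the following short Python snippet (our own illustration, not code from the cited work) verifies that both reactions are atom-balanced; note that the 28% figure from [25] is an experimentally reported yield and is not reproduced by this ideal balance.

```python
from collections import Counter

def atoms(terms):
    """Sum atom counts over (multiplier, {element: count}) terms."""
    total = Counter()
    for mult, species in terms:
        for elem, n in species.items():
            total[elem] += mult * n
    return total

CH3OH = {"C": 1, "H": 4, "O": 1}
H2O   = {"H": 2, "O": 1}
H2    = {"H": 2}
CO    = {"C": 1, "O": 1}
CO2   = {"C": 1, "O": 2}

# CH3OH -> 2 H2 + CO
assert atoms([(1, CH3OH)]) == atoms([(2, H2), (1, CO)])
# CH3OH + H2O -> 3 H2 + CO2
assert atoms([(1, CH3OH), (1, H2O)]) == atoms([(3, H2), (1, CO2)])
print("Both decomposition reactions are atom-balanced.")
```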
4. Applications of Plasma
Some of the Plasma applications, for industry as well as for individual users, are as follows:
i. Pollution Treatment: Diesel engines are used in many applications, such as transportation, power generation, farming, construction, and other industrial settings. They produce a significant amount of pollution, especially NOx [26]. NOx storage reduction, selective NOx recirculation, and non-thermal Plasma have been considered increasingly in recent times, with a view to developing techniques to reduce NOx emissions in diesel engines [26]. Non-thermal Plasma technology has been introduced as a promising method of NOx removal, which proceeds by reactions with the free electrons, ions, radicals, and molecules in the Plasma [27], where N2(A) denotes the N2 meta-stable state.
ii. Liquid Radioactive Waste Utilization: Plasma technology can be used for the utilization and neutralization of liquid radioactive waste [28]. The most difficult phase of the nuclear fuel cycle is safe disposal or recycling. This requires separation and extraction of different components to regenerate the irradiated fuel, and neutralization of the waste before disposal. The regenerable irradiated fuel amounts to around 97%, and the waste to around 3%, the latter consisting of U-235 and radioactive isotopes of plutonium [29][30].
iii. Semiconductor Processing: Today, about a hundred chips are made from a silicon wafer of 4-8-inch diameter, and the elements forming these chips must be around 0.25 micrometers in size [19]. This resolution is only possible with Plasma. Plasma helps in the etching process in the following ways: a) atomic species such as fluorine or chlorine, which perform the etching, are produced using Plasma; b) the substrate is prepared using Plasma so that the etchant species act in an effective manner; c) Plasma keeps the etching process in a straight line due to its highly directional properties.
iv. Ion Implantation: Ion implantation is a process in which ions are accelerated toward a target at high energies so that they penetrate below the surface of the target; the depth usually depends on the acceleration energy and on the application for which it is used [31]. Ion implanters are used in the manufacturing of modern integrated circuits (ICs) by modifying silicon or doping several other semiconductors. The process includes the production of an ion beam and its steering into the substrate so that the ions come to rest under the surface. The beamline energy is either the energy at which the ions were extracted from the source material or the energy to which they are decelerated or accelerated by radio-frequency or DC electric fields.
v. Living Tissues Treatment: Cold Plasma treatment, initially dedicated to environmental protection, has been complemented in the last years with bio-medical and bio-decontamination treatments. Treating living tissue imposes requirements on the devices used to produce cold Plasma, namely: a) avoiding electrocution; b) reducing invasive actions on living cells; c) assuring selectivity, so that only the afflicted cells are affected; d) avoiding thermal effects; e) limiting the treatment time to avoid inducing toxic effects in the treated cells [17].
vi. High Energy Density Pinch Plasma: The synthesis of Nano-scale materials with different characteristics involves many optimization and improvement processes and routes, including vapor reduction and milling, and chemical and natural organic routes. Low-temperature (cold) Plasmas have a temperature and density much smaller than those of high-density (hot) pinch and fusion plasma. The very high temperatures present in pinch and fusion plasma can produce fully ionized material, while low-temperature Plasmas are produced by DC or AC energization of a working gas, or by radio-frequency or microwave electromagnetic fields. Such discharges have a low level of ionization, and most of the species remain in the neutral state; the electrons, with kinetic energies of a few tens of electron volts, are used to produce Nano-materials [32][33][34]. Low-temperature plasmas are often preferable for use in nanotechnology applications. They can be further classified as non-equilibrium (non-thermal) and equilibrium (thermal) Plasmas. Non-equilibrium plasma follows the relation Te >> Ti = Tg, where Tg, Ti, and Te are the corresponding temperatures of the background gas, the ions, and the electrons; these values depend on the kinetic energies. Non-equilibrium plasmas are generated as follows: a) by a low-operating-pressure technique, which results in less frequent collisions between neutrals, ions, and electrons, so that thermal equilibrium cannot be reached; b) by high-pressure techniques used together with pulsed discharge, in which the collisions are frequent but last only a short time, so that thermal equilibrium cannot be reached because of the interruption of the pulsed discharge [35]; c) by high-pressure micro-plasma, i.e., high-pressure plasma whose dimensions are reduced to micron size, which enormously increases the kinetic energy of the electrons due to the high electric field. Equilibrium (thermal) Plasmas are in thermal equilibrium, where the temperature equilibration achieved by repeated electron, ion, and neutral collisions in high-pressure plasma discharges results in a substantial degree of ionization, with Te = Ti = Tg. Both equilibrium and non-equilibrium techniques have been widely used in Plasma nanotechnology; the thermal, chemical, and electrical properties of low-temperature Plasmas provide an efficient and versatile tool for nanotechnology [36].
vii. Plasma Pencil: The Plasma pencil, also known as the Plasma plume, is considered one of the most important applications of low-pressure, non-equilibrium plasma. In the medical field, low-temperature plasma presents no arcing risks and generates only a low concentration of ozone during plasma formation [37]. It is frequently used to heal wounds, to kill bacteria inside the mouth, and to modify the surfaces of heat-sensitive materials. The Plasma plume creates a cold Plasma that destroys bacteria without damaging the skin tissue. The Plasma plume device contains two copper-ring electrodes attached to glass disk surfaces, each with a hole of about 3 mm in the center. The power supplied is a high-voltage pulse at very high frequency (in the kHz range), applied to the electrodes while the gas is injected; the device usually works on a 15 W DC power supply. The gas used in this application is mainly Helium with small traces of Oxygen. The Plasma plume produced by this method is around 5 cm in length and can be increased up to 12 cm. The key factors affecting the plume length are the flow rate of the Helium gas through the electrodes and the magnitude of the voltage pulses. A substantial disadvantage of other Plasma-jet-producing devices is that they cause the temperature to rise several degrees above room temperature, which can make them unsafe for the skin; in addition, in such Plasma jet devices the plasma length can be reduced to about a few millimeters [38].
viii. Low-Current Non-Thermal Plasmatron: Hydrogen gas has many important industrial uses; for example, it serves as an efficient power source in aerospace for hydrogen-fueled rockets and in the automobile industry, and it is a useful source of energy in fuel cells and as thermonuclear fuel. It has also found uses in fertilizers, petroleum refining, and food and fat processing [39]. One of the most effective ways of producing hydrogen gas is the low-current Plasmatron, which is also an energy-efficient one: conventional sources used to produce hydrogen require very large plants, have high costs, and their catalysts depreciate very quickly [40]. The new Plasmatron has a 70% conversion efficiency and produces 3% less heat.
ix. Use of Plasma in the Treatment of Prostate Cancer: In the early days, curing cancer was a matter of early detection and timely treatment of the tumors. The techniques used to treat the cancerous cells had quite a few side effects, including damage to healthy body cells, without assurance of the complete elimination of the cancer [41]. Low-temperature atmospheric Plasma has revolutionized the field of medicine and is being used successfully to treat cancer. It has fewer side effects than the conventional techniques, and the chances of the cancer returning to the body are minimal, because the low-temperature plasma breaks down the DNA double strands within the nuclei of prostate cancer cells [42]. The device uses a 6 kV supply at a 30 kHz frequency. Helium gas is used, with 0.3% of Oxygen gas mixed into it. The distance of the plume from the nozzle to the body is 10 mm, the time of operation is usually 5 to 10 minutes, and the body temperature does not go above 36.5 °C during that period [43].
x. Cutting by Plasma: Plasma can also be used to cut materials. It has been used for this purpose for many years; it is cost-effective, depending on the range of materials. Initially, the cutting systems had some limitations regarding the thickness of the material, but the quality of cutting systems has since been improved [44]. Two machines used to cut materials are the CombiCut and RUR machines.
Significant improvements can be seen in Plasma cutting systems: holes for bolts can easily be driven, and the edges are finished excellently with this system, even with thicker material [45]. High-definition Plasma cutting is best for stainless steel cutting, mild steel cutting, and aluminum cutting.
xi. Plasma Etching: Sputtering or ion milling is used for etching; the item being etched is kept on one of the electrodes of the system. Ions are accelerated by a DC, radio-frequency, or microwave electric field, and a magnetic field is used to increase the Plasma density. Either inert or reactive gases can be used for this purpose. Plasma generated by RF has frequencies ranging from kHz to GHz. The Plasma is produced in a region away from the substrate and then diffused toward it, so that high-energy electrons do not cause any harm to the substrate [46]. Plasma etching is used on silicon dioxide for making memory devices, and Plasma gives high-quality results compared with chemical (especially wet) etches [47]. A number of gases are used for Plasma etching, including SF6, CF4, and halogen mixtures (CF4/O2). The Plasma has the ability to break these gases into reactive ions, neutrals, or free radicals, which then interact with the substrate; both ions and neutrals play a role in the etching, especially the neutrals. Reactive Ion Etching (RIE) tools are shown in the corresponding figure.
xii. Glow Discharge Cleaning and Plasma Immersion Ion Implantation: Glow discharge cleaning is similar to sputtering, but it removes the impurities from the surface of the material, so the process is used to clean items such as medical instruments and vacuum surfaces [15]. PIII, or Plasma Immersion Ion Implantation, is used to harden surfaces in order to improve the resistance to wear, fatigue, and corrosion, as with ferrous materials hardened by nitriding. PIII uses Plasma as the source of the ions to be embedded: the substrate is immersed in the Plasma, and with proper biasing the ions are driven into the substrate with the appropriate energy [48].
xiii. Plasma Antenna Beam Forming: This Plasma-based beam formation and radiation technique is used widely in communication [49][50], radio detection, and target directing [51]. In the communication field, engineers use the Plasma element as an effective radiator (an omni-directional radiation source) for the release of electromagnetic energy. In radio detection and target-directing techniques, the Plasma element is considered as a planar-shaped reflector to reflect the electromagnetic wave in a specific direction [52]. The Plasma element can easily be energized by high-frequency AC discharge in vacuum tubes rather than in a big, heavy cylinder. A cylindrical-geometry reflector can also be used instead of the planar shape to scatter the wave energy and then concentrate it in the predicted direction. To prevent the leakage of electromagnetic energy by penetration of the Plasma shield, the radiation source frequency is kept well below the Plasma eigenfrequency, which gives excellent performance in terms of beam forming, beam scanning, and radiation [53]. The performance of this Plasma antenna technique depends strongly on the Plasma density, the Plasma collisions, and the radii and separations of the Plasma elements.
xiv. Atmospheric Pressure Plasma Jet: Conventional low-pressure plasma processes provide directional etching and deposition of thin films at a rate of 10 µm/min, at temperatures as low as 150 °C, which prevents damage to thermally sensitive substrates. A uniform glow discharge enables material processing at the same rate over large substrate areas. Vacuum systems are very expensive, since they require special arrangements for operation and their maintenance is very costly, although this method supports demanding applications such as load locks and robotic assemblies used to shuttle materials; moreover, the size of the object that can be processed is limited by the size of the vacuum chamber. Atmospheric pressure plasma techniques overcome the disadvantages of vacuum operation, but it is very difficult to sustain a glow discharge, because the higher voltages necessary for the breakdown of the gas at 760 Torr often cause arcing between the electrodes [54]. The arcing produced in Plasma torch applications can be tolerated; however, to prevent arcing and to lower the temperature of the gas, pointed electrodes in corona discharges [55] or insulating insertions in dielectric barrier discharges are used [56]. Even so, problems with uniformity throughout the volume still occur in these techniques. A new Plasma-jet-based technique was developed most recently that uses a helium flow and a unique electrode design to prevent arcing [57]; it can etch and deposit materials at low temperatures for a wide range of applications.
xv. Plasma Gun Techniques for Fusion at Mega-Gauss Energy Densities: A simple controlled-fusion Plasma gun technique is used to obtain high-energy ions, but it has some problems, such as achieving adequate quality for fusion Plasma (electrode ablation), separating the reacting Plasma from the permanent portion of the system, and repetitive operation for fusion power reactors [58]. A more advanced technique includes coaxial guns in the form of the Plasma Flow Switch (PFS), operated through the discharge of wire arrays and foils in vacuum, to build up the magnetic energy in a very low-density plasma and then release this magnetized plasma at very high speed. The PFS technique generates fusion-temperature Plasma from an ultrahigh-speed flow [59]. This high-speed electromagnetic drive has applications in the Plasma physics community in controlled fusion research. To provide converging, collimated jets that compress the Plasma away from the Plasma generators and chamber walls, a spherical-array-based quasi-steady plasma gun technique is used, but it is critical for the development of the pulsed system.
xvi. Plasma Ion Implantation and Deposition: In the Plasma-Based Ion Implantation (PBII) and Plasma-Based Ion Implantation and Deposition (PBIID) techniques, the target is used as part of the beam-forming system in which the ion beams are generated and used. The deposition is an integral part of the treatment process. These Plasma immersion ion implantation and deposition techniques are suitable for an innovative non-line-of-sight process: by incorporating a three-dimensionally shaped target (substrate) in the ion acceleration scheme, the treated object is immersed in the Plasma and, by biasing it, becomes part of the ion source [60]. Around the biased target surface, the ion acceleration occurs in a dynamic, self-adjusting sheath.
These technologies have tribological applications, decreasing wear and corrosion through the development of hard, tough, low-friction, smooth, and chemically inert phases and coatings, which are mainly used for low-friction engine components and also have applications in microelectronics [61]. Fundamental physical and economic limitations arise in the conventional technique as the size and the number of substrates increase and the pulsed high voltage reaches a level of 100 kV, which yields a large number of secondary electrons and causes disadvantages. Great technical challenges and high costs result from the pulse modulator having to handle the total current, which is dominated by secondary electrons rather than ions. Generally, shielding against X-rays is required because of the impact of high-energy electrons on the chamber wall and various other components, which further increases the process cost [62]. For this reason, the modern PBII and PBIID techniques require voltage levels of only a few kilovolts or even less, where pulse modulation is affordable. They can therefore be used effectively in emerging large-area processing fields such as biomaterials, deposition of thin films with controlled stress, and synthesis of nanostructured thin films [63].
xvii. Electrothermal and Electromagnetic Plasma for Surface Treatment: These launchers produce hot, dense, short-wavelength, high-velocity Plasma by means of accelerated small glass beads that directly affect the material surface [64], so they can be used directly for coating purposes, in which specific properties of one material are imposed on another to obtain better properties while suppressing undesirable characteristics. Such Plasma pulses at the interface with metallic surfaces produce an extremely hard metallic surface with a very high tendency to resist corrosion [65][66]. The experiments performed on the LRT launcher show that the resistance to friction corrosion increases considerably during the Plasma pulse treatment process, which leaves no oxides on the treated surface layer of the material; for this purpose, both electromagnetic and electrothermal launchers can be used [67][68][69].

Conclusions
Plasma technology is a most challenging field of research, which has a wide range of applications but still has some unsolved issues; many of them are expected to be solved in the near future as research continues. Understanding Plasma, its applications, and its problems requires knowledge of many areas, such as particle and radiation physics, electronics, electromagnetic wave theory, thermodynamics, quantum mechanics, physical chemistry, and many others. The future of Plasma technology is anticipated to provide a better understanding of the creation of the universe and the forces that work in it. The advancements may also provide an inexhaustible supply of electrical power using thermonuclear fusion reactions, reliable communication with space and re-entry vehicles, propulsion devices appropriate for interplanetary travel, devices for producing and amplifying high-frequency radio energy, electrical generators with no mechanical parts, and numerous semiconductor devices. The immediate future must be devoted to a better understanding of the behavior of Plasmas and related phenomena under various conditions. The most significant progress is thus most likely to be made in fundamental studies under controlled conditions. It is difficult to predict what the long-term future will hold.
However, we can predict that Plasma physics will grow both in importance and in scope. The present problems are indeed formidable, but the richness of the rewards for their solution warrants the interest and effort that is and will be devoted to Plasma technology. Plasma has played an important role in every field of life. The applications of Plasma are widespread in the fields of medicine, energy, the environmental sciences, physics, etc. Plasma is also present in human blood, and it is the most abundant form of matter; these facts increase the importance of studies linked to Plasma. We have discussed some of the applications of Plasma with respect to the engineering and medical fields. Other applications include Plasma particle accelerators, ion propulsion of spacecraft, Plasma spray coating, corona dyeing of ink and textiles, isotope separation, ozone production for water purification, etc. As developments in these applications occur, Plasma physics will become more important, will find many other practical uses, and the role of engineering in the applications of Plasma will become integral.
7,191.8
2018-10-03T00:00:00.000
[ "Physics" ]
Foam-like phantoms for comparing tomography algorithms A family of foam-like mathematical phantoms for comparing tomography algorithms is presented. Tomographic algorithms are often compared by evaluating them on certain benchmark datasets. For fair comparison, these datasets should ideally (i) be challenging to reconstruct, (ii) be representative of typical tomographic experiments, (iii) be flexible to allow for different acquisition modes, and (iv) include enough samples to allow for comparison of data-driven algorithms. Current approaches often satisfy only some of these requirements, but not all. For example, real-world datasets are typically challenging and representative of a category of experimental examples, but are restricted to the acquisition mode that was used in the experiment and are often limited in the number of samples. Mathematical phantoms are often flexible and can sometimes produce enough samples for data-driven approaches, but can be relatively easy to reconstruct and are often not representative of typical scanned objects. In this paper, we present a family of foam-like mathematical phantoms that aims to satisfy all four requirements simultaneously. The phantoms consist of foam-like structures with more than 100 000 features, making them challenging to reconstruct and representative of common tomography samples. Because the phantoms are computer-generated, varying acquisition modes and experimental conditions can be simulated. An effectively unlimited number of random variations of the phantoms can be generated, making them suitable for data-driven approaches. We give a formal mathematical definition of the foam-like phantoms, and explain how they can be generated and used in virtual tomographic experiments in a computationally efficient way. In addition, several 4D extensions of the 3D phantoms are given, enabling comparisons of algorithms for dynamic tomography. Finally, example phantoms and tomographic datasets are given, showing that the phantoms can be effectively used to make fair and informative comparisons between tomography algorithms.

Introduction
In tomographic imaging, an image of the interior of a scanned object is obtained by combining measurements of some form of penetrating wave passing through the object. Tomographic imaging is routinely used in a wide variety of application fields, including medical imaging (Goo & Goo, 2017), materials science (Salvo et al., 2003), biomedical research (Metscher, 2009), and industrial applications (De Chiffre et al., 2014). To extract relevant information from the acquired data, the measurements are often processed by several mathematical algorithms in a processing pipeline. Common processing steps include tomographic reconstruction (Kak et al., 2002; Marone & Stampanoni, 2012; Ravishankar et al., 2020), artifact removal (Barrett & Keat, 2004; Münch et al., 2009; Miqueles et al., 2014), and image segmentation (Iassonov et al., 2009; Foster et al., 2014; Perciano et al., 2017). Because of the importance of tomography in practice, a wide variety of algorithms have been developed for these processing steps, and tomographic algorithm development remains an active research field. In addition to classical image processing algorithms, the use of data-driven machine learning algorithms has become popular in tomography in recent years (Jin et al., 2017; Yang et al., 2017; Pelt et al., 2018; Adler & Öktem, 2018; Liu et al., 2020; Yang et al., 2020).
To properly assess the available algorithms, it is essential to compare them with each other in a fair, reproducible, and representative way. Such comparisons are important for algorithm developers to understand how newly developed algorithms compare with existing approaches. Proper comparisons are also important for end users of tomographic imaging to learn which algorithms to use for certain experimental conditions, and to know what results to expect from each available algorithm. A common approach to compare tomography algorithms is to take a set of tomographic datasets, apply several algorithms to the data, and compare results. To make this approach as informative as possible, the chosen datasets should ideally satisfy the following requirements: (i) The datasets should be challenging: it should not be trivial to obtain accurate results for them. (ii) The datasets should be representative of typical objects, experimental conditions, and data sizes that are used in practice. (iii) The datasets should be flexible with respect to object complexity and experimental properties, making it possible to explore the capabilities and limitations of each algorithm for different acquisition modes, experimental conditions, and object complexities. (iv) The datasets should include enough samples to allow for the comparison of data-driven algorithms that require a large number of similar samples for training and testing. The datasets that are used to compare algorithms in the current literature typically satisfy some of the requirements above, but not all. For example, real-world datasets from public databases (Hämäläinen et al., 2015; McCollough et al., 2017; Jørgensen et al., 2017; Singh et al., 2018; De Carlo et al., 2018; Der Sarkissian et al., 2019) are often used for comparison. While these datasets are both challenging and representative since they are obtained in actual tomographic experiments, they are typically not flexible as it is impossible to change the acquisition mode and experimental conditions that were used in the experiment. In addition, while some datasets are specifically designed for data-driven applications (McCollough et al., 2017; Der Sarkissian et al., 2019), other real-world datasets are often not suitable for data-driven approaches, since the number of scanned samples is often limited. A common alternative to comparing results for real-world datasets is to use computer-generated phantom images for which virtual tomographic datasets are computed. One advantage of using such mathematical phantoms is that the true object is readily available, allowing one to compute accuracy metrics with respect to an objective ground truth. Another advantage is that this approach is flexible: since the tomographic experiment is performed virtually, different acquisition modes and experimental conditions can be easily simulated. Popular examples of phantoms used in tomography include the Shepp-Logan head phantom (Shepp & Logan, 1974), the FORBILD head phantom (Yu et al., 2012), and the MCAT phantom (Segars & Tsui, 2009). In addition to predefined phantom images, several software packages have been recently introduced that allow users to design their own custom mathematical phantoms and generate simulated tomography datasets for them (Ching & Gürsoy, 2017; Faragó et al., 2017; Kazantsev et al., 2018). The main disadvantage of popular mathematical phantoms is that they typically consist of a small number of simple geometric shapes (i.e. less than 100).
As a result, the phantoms are often not representative of real-world objects, which typically contain a much larger number of more complicated features. Several often-used phantoms, e.g. the Shepp-Logan head phantom, consist of large uniform regions and can therefore be relatively easy to reconstruct accurately using certain algorithms, making it difficult to compare algorithms using these phantoms. Finally, predefined phantoms usually consist of only a single sample and manually defined phantoms require considerable time to design, making it difficult to effectively use them for data-driven approaches that require multiple samples for training. Because of the aforementioned disadvantages of both real-world datasets and mathematical phantoms, a hybrid approach is used in practice as well (Adler & Öktem, 2018; Leuschner et al., 2019; Hendriksen et al., 2020). In this approach, reconstructed images of real-world tomographic datasets are treated as phantom images for which virtual tomographic datasets are computed. In this way, different acquisition modes and experimental conditions can be simulated for realistic phantom images. However, the approach has several disadvantages when comparing algorithms with each other. First, the reconstructed images have to be represented on a discretized voxel grid and often include various imaging artifacts, resulting in inaccurate representations of the actual scanned objects. Second, the approach can lead to the 'inverse crime', i.e. when the same image formation model is used for both data generation and reconstruction, which can lead to incorrect and misleading comparison results (Guerquin-Kern et al., 2012). Finally, since imaging artifacts such as noise and data sampling artifacts are present in the phantom images, artifact-free objective ground truth images with which to compute accuracy metrics are not readily available. To summarize, new datasets are needed that satisfy all requirements given above for improved comparisons between tomography algorithms. In this paper, we present a family of mathematically defined phantoms that aim to satisfy all requirements. The phantoms consist of three-dimensional foam-like structures and can include more than 100 000 features. Since foam-like objects are often investigated using tomography (Babin et al., 2006; Roux et al., 2008; Hangai et al., 2012; Raufaste et al., 2015; Evans et al., 2019), the proposed phantoms are representative of a popular class of objects. Furthermore, foam-like objects are typically challenging to accurately reconstruct and analyze due to the fact that they exhibit both large-scale and fine-scale features (Brun et al., 2010), making them well suited for comparing tomography algorithms. Tomographic datasets can be computed for the proposed phantoms for a wide variety of experimental conditions and acquisition modes and with data sizes that are common in real-world experiments, making the approach both flexible and representative. Finally, an effectively unlimited number of random variations of samples can be generated, enabling comparisons of data-driven algorithms that require multiple samples for training. The proposed family of simulated foam phantoms has already been used for comparing algorithms in several papers from various research groups (Hendriksen et al., 2019, 2020; Liu et al., 2020; Etmann et al., 2020; Marchant et al., 2020; Renders et al., 2020; Ganguly et al., 2021).
In this paper, a formal definition of the phantoms is given, and mathematical and computational details about both the phantom generation and tomographic experiment simulation are discussed. This paper is structured as follows. In Section 2.1 a mathematical description of the foam phantoms is given, and in Section 2.2 we introduce an algorithm that can compute such phantoms efficiently. We explain how, given a generated foam phantom, tomographic projections can be computed in Section 2.3. Several 4D (i.e. dynamic) variations of the proposed phantoms are introduced in Section 2.4. In Section 3, several experiments are performed to investigate the influence of various parameters on the generated phantoms, the computed projection data, and the final tomographic reconstruction. Furthermore, we discuss the required computation time for generating phantoms and computing projection data. In Section 4, we give a few concluding remarks.

Method
In this section, we give the mathematical definition of the proposed family of phantoms, and describe how such phantoms can be efficiently generated. In addition, we explain how projection images can be computed for both parallel-beam and cone-beam geometries. As explained above, the main inspiration for the design of these phantoms is the continued popularity of investigating a wide variety of real-world foam samples using tomography. In Fig. 1, two examples are given of such samples, in addition to an example of a foam phantom from the proposed family of phantoms, showing the similarities in features between the proposed foam phantoms and real-world foam samples.

Mathematical description
In short, each phantom from the foam phantom family consists of a single-material cylinder with a large number (e.g. > 100 000) of non-overlapping spheres of a different material (or multiple different materials) inside. A more detailed explanation follows. Each phantom is defined in continuous 3D space R^3. In all phantoms, a cylinder is placed in the origin, parallel to the z-axis (the rotation axis). This cylinder has an infinite height and a radius of 1, with all other distances defined relative to this unit radius. Inside the main cylinder, N non-overlapping spheres are placed, which will be called voids in the rest of this paper. Each void i has a radius r_i ∈ R and a position p_i ∈ R^3 with p_i = (x_i, y_i, z_i). The area outside the main cylinder consists of a background material that does not absorb any radiation, i.e. all positions (x, y, z) with (x^2 + y^2)^(1/2) > r_C, where r_C is the radius of the main cylinder (defined to be r_C = 1). The foam itself, i.e. all positions that are within the main cylinder but not inside a void, consists of a single material with an attenuation coefficient of 1, with all other attenuation coefficients defined relative to this. Each separate void in the phantom can be filled with a different material, each with its own user-defined attenuation coefficient c_i ∈ R. In the default settings, all voids are filled with the background material. To summarize, each void i can be completely characterized by a vector s_i ∈ R^5 with five elements: its position x_i, y_i, and z_i, its radius r_i, and its attenuation coefficient c_i. Similarly, each foam phantom is completely characterized by the set of s_i vectors of all voids, S = {s_1, s_2, ..., s_N}. The definition of a foam phantom is shown graphically in Fig. 2. (Fig. 2 caption: Schematic representation of a foam phantom, with an axial (i.e. horizontal) slice shown on the left and a sagittal (i.e. vertical) slice shown on the right. The radius of the main cylinder is fixed to 1, with all other distances defined relative to this radius; similarly, the attenuation coefficient of the main cylinder is fixed to 1 as well. Each void is characterized by its position x_i, y_i, and z_i, its radius r_i, and its attenuation coefficient c_i; for one highlighted void, these parameters are shown in the figure. The vertical size of the phantom is defined by z_max. Note that z_max limits the position of the center of each void, which means that parts of a void can exist at positions larger than z_max or smaller than -z_max.)
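Since each void is fully described by its five-vector s_i, an entire phantom can be held in a single N × 5 array. The following Python/NumPy sketch is our own illustration of this representation and of the geometric constraints defined above; it is not code from the foam_ct_phantom package, and the column order is our choice.

```python
import numpy as np

# One row per void: (x, y, z, r, c), matching the five-vector s_i.
def empty_phantom(n_voids):
    return np.zeros((n_voids, 5))

def overlap_free(phantom, i, j):
    """Non-overlap condition: d(p_i, p_j) >= r_i + r_j."""
    d = np.linalg.norm(phantom[i, :3] - phantom[j, :3])
    return d >= phantom[i, 3] + phantom[j, 3]

def inside_cylinder(phantom, i, r_cyl=1.0):
    """Void i must lie fully inside the main cylinder of radius r_cyl = 1:
    (x_i^2 + y_i^2)^(1/2) + r_i <= 1."""
    rho = np.hypot(phantom[i, 0], phantom[i, 1])
    return rho + phantom[i, 3] <= r_cyl
```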
The vertical size of a phantom is controlled by ensuring that the position z_i of each void i satisfies |z_i| ≤ z_max, with a maximum position z_max ∈ R. In addition, no part of any void is allowed to exist outside the main cylinder, i.e. (x_i^2 + y_i^2)^(1/2) + r_i ≤ 1 for all voids. Also, no part of any void is allowed to overlap with any other void, i.e. d(p_i, p_j) ≥ r_i + r_j for all voids i and j, where d(a, b) is the Euclidean distance between points a and b. Finally, the size of the voids is controlled by choosing a maximum radius r_max and ensuring that r_i ≤ r_max for all voids.

Phantom generation
Voids are placed one at a time. When placing the i-th void at position p_i, the radius of the new void is limited by three considerations: (1) the distance to the outside of the main cylinder, 1 - (x_i^2 + y_i^2)^(1/2), (2) the distance to the closest edge of any existing void, i.e. min_{j=1,...,i-1} [d(p_i, p_j) - r_j], and (3) the maximum allowed radius r_max. The final radius r_i is chosen to be as large as possible, meaning that

r_i = min{ 1 - (x_i^2 + y_i^2)^(1/2), min_{j=1,...,i-1} [d(p_i, p_j) - r_j], r_max }.   (1)

Since placed voids are made as large as possible, the size of newly placed voids naturally becomes smaller during the generation of the phantom: at the end of phantom generation, not much room is left for any new voids, resulting in smaller sizes. Another consequence is that each void i either touches the outside of the main cylinder, i.e. (x_i^2 + y_i^2)^(1/2) + r_i = 1, or touches at least one other void, i.e. d(p_i, p_j) = r_i + r_j for some j. As a result, the radius of none of the voids can be increased without either making the void overlap another void or having part of the void outside the main cylinder. We found empirically that realistic-looking phantoms (see Fig. 1) are obtained when new voids are placed in positions that allow for the largest possible void size out of all possible positions. Finding such optimal positions given a partial set of voids is not trivial, since the number of possible positions is, in theory, infinite. We note that it might be possible to deterministically find an optimal position in a computationally efficient way by using a plane sweep algorithm approach (Nievergelt & Preparata, 1982). However, we propose to use a much simpler approach: a set of N_p randomly picked trial points is available at all times, in which each trial point is a valid position where a void could be placed (i.e. inside the main cylinder but not inside an existing void). Then, out of all trial points, a void is placed at the point that results in the largest void. There are several advantages to this approach: (1) it is relatively simple to implement, (2) it is random in nature, enabling the generation of an infinite number of different phantoms, and (3) it is computationally efficient, allowing generation of phantoms with many voids within reasonable time. To summarize, the algorithm to generate foam phantoms works as follows: (1) Create a list of N_p trial points, randomly placed within the main cylinder and satisfying the maximum height z_max. (2) Pick the trial point that results in the largest void (if multiple exist, randomly select one) and remove it from the list. (3) Add a void at the picked trial point with the largest possible radius [equation (1)]. (4) Remove trial points that are inside the newly placed void from the list. (5) Add new trial points to the list until the list has N_p points again, each randomly placed at a valid position (inside the main cylinder, outside any existing void, and satisfying the maximum height z_max). (6) Repeat steps (2) to (5) until N voids are placed.
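The placement loop in steps (1) to (6) translates almost directly into code. The sketch below is our own simplified Python illustration of this algorithm: it recomputes the allowed radius of every trial point at each step, rather than maintaining the sorted data structure used by the actual foam_ct_phantom implementation, so it is clear but slow.

```python
import numpy as np

rng = np.random.default_rng(12345)

def max_radius(p, voids, r_max):
    """Largest radius allowed at point p by equation (1)."""
    r = min(r_max, 1.0 - np.hypot(p[0], p[1]))   # distance to cylinder wall
    for (q, rq) in voids:
        r = min(r, np.linalg.norm(p - q) - rq)   # distance to existing voids
    return r

def sample_trial_point(voids, z_max):
    """Random valid position: inside the cylinder, outside all voids."""
    while True:
        x, y = rng.uniform(-1, 1, 2)
        z = rng.uniform(-z_max, z_max)
        p = np.array([x, y, z])
        if np.hypot(x, y) >= 1.0:
            continue
        if all(np.linalg.norm(p - q) >= rq for (q, rq) in voids):
            return p

def generate_foam(n_voids, n_trial, r_max, z_max):
    voids = []
    trials = [sample_trial_point(voids, z_max) for _ in range(n_trial)]
    for _ in range(n_voids):
        # Brute-force radius evaluation of all trial points (slow but clear).
        radii = [max_radius(p, voids, r_max) for p in trials]
        best = int(np.argmax(radii))
        p, r = trials.pop(best), radii[best]
        voids.append((p, r))
        # Step (4): drop trial points that ended up inside the new void.
        trials = [t for t in trials if np.linalg.norm(t - p) >= r]
        # Step (5): refill the trial-point list.
        while len(trials) < n_trial:
            trials.append(sample_trial_point(voids, z_max))
    return voids
```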
Note that each foam phantom can be recreated deterministically given the following values: the number of voids N, the number of trial points N_p, the maximum void size r_max, the maximum height z_max, and the random seed used for the random number generator. There are several implementation tricks that can improve the computational performance of generating phantoms in practice. For example, the maximum possible radius [equation (1)] of each trial point can be precomputed and stored in a sorted data structure such as a skip list (Pugh, 1990) to enable fast access to the trial point with the largest possible radius. After placing a new void, the maximum radii have to be updated and reinserted in the sorted data structure, which can be efficiently done during step (4) above. For more details about these implementation tricks, we refer to the computer code that is available under an open-source license (Pelt, 2020).

Computing projections
The foam phantoms presented in this paper were developed for use in tomography research. As such, it is important that tomographic projections of these phantoms can be computed accurately and efficiently. Here, we assume that the projections are formed by the Radon transform: a measurement P_i ∈ R is computed by taking a line integral of the attenuation coefficients of the sample over the virtual X-ray i. The orientation and direction of the virtual ray depend on the tomographic acquisition geometry that is simulated. Measurements collected by 2D pixels with a certain area, which often represent real-world experiments better than individual rays, can be approximated by supersampling, i.e. averaging the measurements of multiple rays within a single pixel. In many tomographic experiments, projections are formed by rotating the sample in front of a 2D detector (or, equivalently, rotating the detector around the sample) and acquiring separate 2D projection images at different angles. In these cases, the projection data are naturally described by a set of 2D projection images, each taken at a specific angle θ ∈ R. Depending on the experimental setup, incoming rays of a single projection image are often assumed to be either parallel to each other (parallel-beam geometries) or to originate from a point source (cone-beam geometries). In many existing comparisons between algorithms in which tomographic experiments are simulated, projections are formed by first discretizing the object on a discrete voxel grid and computing line integrals for the discrete object afterwards. As mentioned above, this approach can lead to the 'inverse crime', which can produce incorrect and misleading comparison results (Guerquin-Kern et al., 2012).
For the proposed foam phantoms, we prevent the inverse crime by computing projections analytically in the continuous domain, using the exact intersection between a ray and the phantom. Specifically, simulated X-ray projections of the proposed foam phantom are computed ray-by-ray. The line integral of the sample over a certain ray is computed by first computing the intersection of the main cylinder with that ray and subsequently subtracting the intersections with all voids, taking into account their attenuation factors. If we denote the intersection of ray i with the main cylinder by L_i and the intersection of ray i with void j by l(i, j), we can describe the projection P_i of ray i mathematically by

P_i = L_i - Σ_j (1 - c_j) l(i, j),   (2)

where c_j is the attenuation factor of void j, as described above. Note that for each void we have to both subtract the intersection that was counted in L_i and add the attenuation of the void itself, resulting in a factor of (1 - c_j). In parallel-beam geometries, the intersection L_i of the main cylinder with ray i can be computed by

L_i = 2 (r_C^2 - dz_i^2)^(1/2),   (3)

where dz_i is the shortest distance between any point along ray i and the z-axis (the rotation axis). The computation of this intersection is shown graphically in Fig. 3. (Fig. 3 caption: Schematic representation of the computation of the intersection of a ray and the main cylinder or a void. The radius of the cylinder or void is given by r, and the closest distance between the ray and the center of the cylinder or void is given by a. The length of the intersection is then equal to 2(r^2 - a^2)^(1/2).) For cone-beam geometries, it is also possible to analytically compute the intersection between a ray and the main cylinder, although it is more complicated than equation (3). For the sake of brevity, we refer to the computer code (Pelt, 2020) for more details about this computation. The computation of intersections between rays and voids is similar to that of the main cylinder:

l(i, j) = 2 [r_j^2 - dv(i, j)^2]^(1/2),   (4)

where dv(i, j) is the shortest distance between the center of void j and any point along ray i, and r_j is the radius of void j, as described above. The derivation of equation (4) is based on Fig. 3, extended to three dimensions. For more details about the analytic expression of the Radon transform of a sphere, we refer to Toft (1996). Note that the shortest distances dv(i, j) can be computed efficiently by projecting the center p_j of void j along the direction of ray i. For parallel-beam geometries, all rays of a single projection image are parallel to each other, which enables precomputing the projections of all void centers for each projection image, significantly reducing the required computation time. Detector noise can be simulated by applying Poisson noise to each measurement. First, a virtual photon count I_i for measurement i is computed using the Beer-Lambert law: I_i = I_0 exp(-μ P_i), where I_0 is the number of incoming photons and μ is a factor that controls the amount of radiation absorbed by the phantom. Afterwards, a noisy photon count Î_i is computed by sampling from a Poisson distribution with I_i as the expected value. The noisy photon count is transformed back to a noisy measurement P̂_i = -μ^(-1) log(Î_i / I_0). In real-world tomographic experiments, other artifacts are often present in the measured data in addition to Poisson noise, for example due to source characteristics [e.g. beam hardening (Barrett & Keat, 2004)], additional photon interactions [e.g. free space propagation (Moosmann et al., 2011)], optical effects (Ekman et al., 2018), and detector defects (Miqueles et al., 2014). Simulating such additional artifacts is not yet supported in the current version of the computer code. However, we note that it might be possible to include such artifacts in the future, either during the computation of projections within the code or as a postprocessing step afterwards, possibly taking advantage of existing software packages that support them (Allison et al., 2016; Faragó et al., 2017).
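As a concrete illustration of equations (2) to (4) and of the noise model just described, the following Python fragment (our own sketch, not code from the package) computes the projection value of a single parallel-beam ray through a phantom and applies Poisson noise. The ray is given by a point o and a unit direction d, and each void by a tuple (p, r, c).

```python
import numpy as np

rng = np.random.default_rng(0)

def chord_length(r, a):
    """Intersection length of a line with a sphere/cylinder of radius r
    whose center (axis) is at closest distance a: 2*sqrt(r^2 - a^2),
    or 0 if the line misses it (Fig. 3)."""
    return 2.0 * np.sqrt(r * r - a * a) if a < r else 0.0

def project_ray(o, d, voids, r_cyl=1.0):
    """Equation (2): P_i = L_i - sum_j (1 - c_j) * l(i, j)."""
    # Distance of the ray to the z-axis: project the ray into the xy-plane.
    o2, d2 = o[:2], d[:2]
    dz = np.linalg.norm(o2 - np.dot(o2, d2) / max(np.dot(d2, d2), 1e-12) * d2)
    P = chord_length(r_cyl, dz)                    # equation (3)
    for (p, r, c) in voids:
        t = np.dot(p - o, d)                       # project void center on ray
        dv = np.linalg.norm(p - (o + t * d))       # closest distance dv(i, j)
        P -= (1.0 - c) * chord_length(r, dv)       # equation (4)
    return P

def add_poisson_noise(P, I0=10_000.0, mu=1.0):
    """Beer-Lambert photon count with Poisson noise, mapped back to a
    noisy line integral as in the text."""
    I = I0 * np.exp(-mu * P)
    I_noisy = max(rng.poisson(I), 1)   # avoid log(0)
    return -np.log(I_noisy / I0) / mu
```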
4D extensions
In recent years, improvements in radiation sources and detector equipment have increased interest in dynamic tomography. To enable quantitative comparisons between algorithms for dynamic tomography (Kazantsev et al., 2015; Mohan et al., 2015; Van Nieuwenhove et al., 2017; Nikitin et al., 2019), 4D phantoms are needed. Similar to 3D phantoms, these phantoms should be challenging, representative, flexible, and suitable for data-driven applications. Here, we introduce such 4D phantoms by adapting the 3D foam phantoms described above, adding time-evolving aspects in different ways. Currently, the computer code includes three types of 4D extensions, which are described below. Additional 4D extensions are planned for future inclusion. The first 4D extension is a moving phantom, in which the voids move along the z-axis. All voids move with the same velocity, but the velocity changes randomly during the experiment. The second extension is an expanding phantom, in which the size of all voids increases during the experiment. The third extension is an infiltration phantom, in which the voids are slowly filled by a material with a different attenuation coefficient than the initial void material. Specifically, all voids at a certain chosen height are filled at the start of the experiment. Then, each unfilled void with an edge close to a filled void is filled after a randomly chosen interval. This process is repeated until all voids are filled. Example phantoms for the three 4D extensions are shown in Fig. 6. For all 4D extensions, several parameters can be chosen to adjust the time evolutions of the generated phantoms. After generating each 4D phantom, a dynamic tomography experiment can be simulated by virtually rotating the phantom during its time evolution, and computing projections as described in Section 2.3. Changes within the sample that happen during the acquisition of a single projection can be modeled by supersampling in time.

Implementation details
Computer code to generate the proposed foam phantoms and simulate tomographic experiments is available as the open-source foam_ct_phantom software package (Pelt, 2020). The code is available for the Windows, MacOS, and Linux operating systems, and can be installed using the Conda package management system. The code is implemented in the Python 3 (Van Rossum & Drake, 2009) programming language. Parts of the code with a high computational cost are implemented in the C programming language (Kernighan & Ritchie, 1988), using OpenMP (Dagum & Menon, 1998) for parallelization. Projections for cone-beam geometries can also be computed using NVidia Graphics Processor Units (NVidia, Santa Clara, CA, USA), which significantly reduces the required computation time. The GPU code is implemented using the Numba package (Lam et al., 2015).
Generated phantoms and projection datasets are stored in HDF5 file containers (Folk et al., 2011), using a simple custom data format that includes metadata about how the phantom or dataset was generated. A skip list (Pugh, 1990) is used to enable fast access to the trial point with the largest possible radius during generation of phantoms, and random numbers are generated using the Mersenne Twister algorithm (Matsumoto & Nishimura, 1998). The experiments in this paper were performed on a workstation with an AMD Ryzen 9 3900X CPU (AMD, Santa Clara, CA, USA), running the Fedora 32 Linux operating system. Experiments involving GPU computations were performed using a server with four NVidia GeForce RTX 2080 Ti GPUs (NVidia, Santa Clara, CA, USA), running the Fedora 30 Linux operating system.

Code examples

Below, we give a few code examples to show how the computer code can be used in practice to generate new phantoms, simulate tomographic experiments, and reconstruct projection data (illustrative sketches of these calls are given at the end of this section). First, new foam phantoms can be generated with a single Python call, in which the five parameters that determine the phantom shape (see Section 2.2) are given by nspheres, ntrialpoints, random_seed, rmax, and zmax. Once a phantom has been generated, parallel-beam projection data can be computed with a second call. Here, the supersampling parameter controls the number of rays that are simulated within each pixel. Specifically, supersampling² (i.e. supersampling squared) rays are simulated within each pixel, evenly distributed over the area of the pixel in a supersampling × supersampling grid. The measured projection of a pixel is then the average value of the measurements of all rays within that pixel. For cone-beam projection data, only the geometry specification has to be changed; its additional parameters sod and odd denote the source-object distance and object-detector distance, respectively. The computer code also includes utility functions to assist in reconstructing the generated projection data using existing tomography toolboxes such as the ASTRA toolbox (Van Aarle et al., 2016) and TomoPy (Gürsoy et al., 2014). For the ASTRA toolbox, functions are included to convert defined geometries to equivalent ASTRA geometries. More code examples are included in the source code of the foam_ct_phantom package.
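For illustration, the sketches below show what these calls might look like. They are modeled on the parameter names mentioned above; the class and function names (FoamPhantom, ParallelGeometry, ConeGeometry, generate_projections, and the ASTRA conversion helper) as well as the exact keyword signatures are assumptions that should be checked against the foam_ct_phantom documentation.

    import numpy as np
    import foam_ct_phantom

    # Generate a phantom; the five shape parameters are those of Section 2.2.
    # NOTE: names and signatures are illustrative, not guaranteed to match
    # the package exactly.
    foam_ct_phantom.FoamPhantom.generate(
        'phantom.h5',
        random_seed=12345,
        nspheres=150000,       # number of voids N
        ntrialpoints=10**6,    # number of trial points N_p
        rmax=0.2,              # maximum void radius r_max
        zmax=1.5)              # maximum height z_max

    # Compute parallel-beam projections with 4 x 4 supersampling per pixel.
    phantom = foam_ct_phantom.FoamPhantom('phantom.h5')
    geom = foam_ct_phantom.ParallelGeometry(
        2560, 2160,                          # detector columns and rows
        np.linspace(0, np.pi, 1024, False),  # projection angles
        3 / 2560,                            # detector pixel size
        supersampling=4)
    phantom.generate_projections('projs_par.h5', geom)

    # For cone-beam data, only the geometry specification changes.
    geom_cone = foam_ct_phantom.ConeGeometry(
        2560, 2160,
        np.linspace(0, 2 * np.pi, 1024, False),
        3 / 2560,
        sod=10,   # source-object distance
        odd=5,    # object-detector distance
        supersampling=4)
    phantom.generate_projections('projs_cone.h5', geom_cone)

    # Convert the defined geometry to an equivalent ASTRA geometry.
    astra_geom = foam_ct_phantom.create_astra_geometry(geom)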
Phantom examples

In this section, we present several examples of generated phantoms, and investigate the effect of the various generation parameters on the phantom characteristics. As explained in Section 2.2, each phantom is defined by the number of voids N, the number of trial points N_p, the maximum void size r_max, and the maximum height z_max. In the following, the values used for generating phantoms are N = 150 000, N_p = 10^6, r_max = 0.2, and z_max = 1.5, and all voids are filled with the background material, unless stated otherwise. Note that the code supports filling voids with other materials as well, making it possible to simulate objects with various characteristics, e.g. with low-contrast features.

In Fig. 4, generated phantoms are shown for various numbers of included voids N. Since the other parameters are identical for all shown phantoms, the figure also shows how a phantom is generated by increasing the number of included voids. As expected, phantoms with a small number of voids mostly include relatively large voids, while the void size decreases with increasing numbers of included voids. In addition, the figures show both the large-scale and fine-scale features that are present in phantoms with relatively large numbers of voids.

In Fig. 5, generated phantoms are shown for three maximum void sizes r_max and N = 150 000. The results show that the phantom features depend significantly on the choice of r_max: for a relatively large maximum void size (r_max = 0.8), there are a few large voids present in the phantom and a large number of relatively small voids, as the large voids restrict the available space for the remaining voids. For a relatively small maximum void size (r_max = 0.05), most voids in the phantom have a similar size. The phantom with an intermediate maximum void size (r_max = 0.2) exhibits both characteristics to a lesser degree. These results show that the proposed phantom family can be used to simulate a wide variety of foam structures. Examples of the phantoms generated by the 4D extensions described in Section 2.4 are shown in Fig. 6.

Projection data and reconstruction examples

In this section, we present several examples of generated projection data and compare reconstruction results for several popular tomographic reconstruction algorithms. In all cases, we use the foam phantom generated with N = 150 000, N_p = 10^6, r_max = 0.2, and z_max = 1.5. Parallel-beam projections are computed for a detector with 2560 × 2160 pixels and 16 rays per pixel (i.e. 4 × 4 supersampling), mimicking a PCO.edge 5.5 sCMOS detector (PCO, Kelheim, Germany) that is commonly used at synchrotron tomography beamlines (Mittone et al., 2017). The width and height of a detector pixel were set to 3/2560, resulting in a detector width of 3, with the sample (which has a fixed radius of 1) projecting onto two-thirds of the detector width. Projections were computed for four imaging scenarios: 'high-dose', with a large number of noise-free projections; 'noise', with a large number of projections with a significant amount of Poisson noise applied; 'few projections', with a relatively low number of noise-free projections; and 'limited range', with a large number of noise-free projections acquired over less than 180°. Specific details about the scenarios are given in Table 1, and example projection data are shown in Fig. 7.

We compare results for several popular tomographic reconstruction algorithms: the filtered backprojection method (FBP) (Kak et al., 2002), SIRT (Kak et al., 2002), CGLS (Scales, 1987), SART (Andersen & Kak, 1984), and SIRT and SART with additional nonnegativity constraints on the pixel values (Elfving et al., 2012). All reconstructed images were computed using the optimized GPU implementations of the ASTRA toolbox (Van Aarle et al., 2016). We compare the reconstructed images using three popular image quality metrics: the root mean square error (RMSE), the peak signal-to-noise ratio (PSNR), and the multiscale structural similarity index (MS-SSIM) (Wang et al., 2003). We also compare the images using two segmentation metrics, in which the images are segmented using thresholding, and Dice scores (Bertels et al., 2019) are computed for voxels inside large voids (with radii r_i ≥ 0.1) and small voids (with radii r_i < 0.05); a minimal sketch of these metric computations is given below. All metrics are computed with respect to a ground truth image that consists of a discretization of the foam phantom with the same number of voxels as the reconstructions, using 64 (4 × 4 × 4) sampling points per voxel.
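As an indication of how such metrics can be computed, the sketch below implements RMSE, PSNR, and the threshold-plus-Dice evaluation in plain NumPy. This is our own minimal version rather than the evaluation code used for the paper; MS-SSIM would additionally require a multiscale SSIM implementation along the lines of Wang et al. (2003).

    import numpy as np

    def rmse(recon, truth):
        # Root mean square error between reconstruction and ground truth.
        return np.sqrt(np.mean((recon - truth) ** 2))

    def psnr(recon, truth):
        # Peak signal-to-noise ratio, taking the ground-truth dynamic range
        # as the peak value.
        peak = truth.max() - truth.min()
        return 20 * np.log10(peak) - 10 * np.log10(np.mean((recon - truth) ** 2))

    def dice(recon, truth, threshold, mask):
        # Threshold both images into segmentations, then compute the Dice
        # score restricted to `mask` (e.g. voxels inside large voids only).
        seg_r = (recon > threshold)[mask]
        seg_t = (truth > threshold)[mask]
        return 2 * np.logical_and(seg_r, seg_t).sum() / (seg_r.sum() + seg_t.sum())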
For the iterative algorithms, the number of iterations that minimizes the RMSE is used, which is determined using the Nelder-Mead method (Nelder & Mead, 1965) for CGLS and a simple grid search for all other algorithms. In Table 2, the quality metrics are given for the central slice of the phantom and the four projection data scenarios given above. The results show that in most cases FBP produces images with the highest RMSE and lowest MS-SSIM values, the three unconstrained iterative methods (SIRT, CGLS, and SART) produce images with lower RMSE and higher MS-SSIM than FBP, and the iterative methods with nonnegativity constraints produce images with the lowest RMSE and highest MS-SSIM. However, the segmentation-based metrics show more nuanced results. For example, in the 'limited range' scenario, the Dice score for large voids of the FBP reconstruction is close to the Dice scores of the iterative algorithms, even though the RMSE is significantly higher and the MS-SSIM significantly lower. This shows that, if the specific application of tomography were to require only the analysis of large voids, the FBP algorithm would be sufficient, even though its image metrics are significantly worse than those of other methods. Similar results are shown in Fig. 8; such observations may help explain the continued popularity of FBP-like methods in practice (Pan et al., 2009).

Figure 6. Examples of 4D extensions to the static 3D foam phantoms. In each case, an early time-point is shown on the left, and a later time-point is shown on the right. Given are an example of a moving phantom, in which the foam moves vertically, an expanding phantom, in which the voids grow in size, and an infiltration phantom, in which the voids fill with a different material over time.

Table 1. Details of the projection datasets used for comparing reconstruction algorithms. 50% absorption means that the parameter μ for noise generation was chosen such that the sample absorbed roughly 50% of the incoming photons.

In Fig. 9, the PSNR of FBP, SIRT, and SIRT with a nonnegativity constraint is given as a function of both the number of projection angles and the number of voids in the phantom. In each case, data were generated with a low amount of Poisson noise (I_0 = 10^7) and a foam material that corresponds to an average absorption of 10% of virtual photons for a phantom with 150 000 voids. The results show how the behavior of each reconstruction algorithm depends on the complexity of the scanned sample and the amount of acquired data. For example, the results show that the accuracy of FBP does not depend significantly on the complexity of the phantoms, while the accuracy of the SIRT algorithms is significantly improved for low-complexity samples compared with high-complexity samples. We note here that the proposed family of foam phantoms is especially well suited for performing such detailed comparisons, as the phantoms exhibit features at multiple scales, the level of complexity is tunable, and complete ground-truth information about the void positions and sizes is known.

Table 2. Reconstruction results for various simulated experimental conditions (see Table 1). Additional nonnegativity constraints are indicated by '>0'. Metrics within 2% of the best metric in each column are shown in bold.

Figure 8. Reconstructed images of the central slice of a foam phantom, for various simulated experimental conditions (see Table 1). Given are results for FBP, SIRT, and SIRT with an additional nonnegativity constraint (SIRT>0).
For another example of a detailed task-based analysis that uses the foam phantoms, we refer to Marchant et al. (2020).

Computation time

In this section, we present results for measurements of the required computation time for generating a foam phantom and computing projection data. A theoretical analysis of the required computation time for phantom generation is technically complicated due to the random nature of placing trial points. However, we hypothesize that, for large numbers of voids, the most time-consuming part is step (5) of the algorithm described in Section 2.2, and that the required computation time scales with N³ N_p² log N_p, where N is the number of voids and N_p the number of trial points used during generation. The various factors come from the fact that the required time for inserting items in a skip list scales with N_p log N_p, the number of new trial points that have to be placed scales with N_p, the number of voids that have to be checked for overlap for each new trial point scales with N, the number of required random trials until a valid position is found scales with N (since the available space decreases when more voids are placed), and step (5) has to be evaluated N times; multiplying these factors gives the overall N³ N_p² log N_p scaling. The required computation time for simulating a projection scales with N_r N, where N_r is the number of simulated rays, which depends on the size of the detector and the amount of supersampling used.

In Fig. 10, the computation time required to generate a phantom with r_max = 0.2 is given as a function of the number of included voids. The results show that phantoms with a large number of voids, e.g. 100 000 voids, can be generated within a few minutes. The results also show that using multiple CPU cores can significantly reduce the required computation time, especially for large numbers of voids. It is interesting to note that three different phases can be identified in Fig. 10. We hypothesize that different parts of the algorithm are dominant within each phase. During generation of the first 100 voids, we expect that most time is spent inserting newly placed trial points in the skip-list data structure, which is not parallelized in the current implementation. Between 100 and around 10^5 voids, we expect that most time is spent updating the maximum possible radius of each trial point, which is highly parallelizable. Finally, for more than 10^5 voids, we expect that most time is spent finding valid positions while randomly placing new trial points, which is also not parallelized in the current implementation. It may be possible to use such observations to reduce the computation time for generating phantoms in the future.

In Fig. 11, the computation time for generating projections is given as a function of the number of rows and columns in each projection, for a phantom with 150 000 voids and r_max = 0.2. The results show that it is possible to compute a parallel-beam projection with common high-resolution numbers of pixels, e.g. 1024 × 1024 pixels, in less than a tenth of a second using a modest multi-core CPU system. This computational efficiency makes it possible to generate full tomographic datasets within a few minutes. As explained in Section 2.3, computing cone-beam projections is more computationally demanding than computing parallel-beam projections. This is indeed visible in Fig. 11, which shows that even when using multiple CPU cores, generating a cone-beam projection can take considerable time.
However, multiple GPUs can be used to significantly speed up these computations, reducing the required computation time to a few seconds per projection for common detector sizes.

Figure 10. The required computation time for generating a foam phantom as a function of the number of voids in the phantom. The number of voids of the phantom that was used in most experiments in this paper (150 000 voids) is indicated by the vertical dashed line.

Figure 11. The required computation time for generating a tomographic projection for a phantom with 150 000 voids as a function of the number of detector rows and columns. Results are given for using a single CPU core, eight CPU cores, a single GPU, and four GPUs.

Conclusion

In this paper, we introduced a family of foam-like phantoms for comparing the performance of tomography algorithms. The generated phantoms are challenging to reconstruct, representative of typical tomography experiments, and flexible, as projections can be calculated for various acquisition modes. In addition, an unlimited number of varying foam-like phantoms can be generated, enabling comparisons of data-driven algorithms. The phantoms consist of a main cylinder with a large number of randomly placed voids (e.g. more than 100 000), resulting in foam-like structures with both large-scale and fine-scale features. We also introduced several 4D extensions to the static 3D phantoms, resulting in time-evolving phantoms for comparing algorithms for dynamic tomography. Computationally efficient ways of both generating a phantom and simulating projection data for a given phantom were discussed, and a software package that implements these algorithms, the foam_ct_phantom package (Pelt, 2020), was introduced. Experimental results show that it is possible to generate phantoms on a modest workstation within a few minutes, and that projection data can be simulated for common high-resolution detector sizes within a few minutes as well. Comparisons between common reconstruction algorithms for several experimental settings show that it is possible to perform detailed analyses of algorithm performance using the proposed phantom family. These results show that the phantoms can be effectively used to make fair and informative comparisons between tomography algorithms.
Networking concert halls, musicians, and interactive textiles: Interwoven Sound Spaces

ABSTRACT

Interwoven Sound Spaces is an interdisciplinary project which brought together telematic music performance, interactive textiles, interaction design, and artistic research. A team of researchers collaborated with two professional contemporary music ensembles based in Berlin, Germany, and Piteå, Sweden, and four composers, with the aim of creating a telematic distributed concert taking place simultaneously in two concert halls and online. Central to the project was the development of interactive textiles capable of sensing the musicians' movements while playing acoustic instruments, and generating data the composers used in their works. Musicians, instruments, textiles, sounds, halls, and data formed a network of entities and agencies that was reconfigured for each piece, showing how networked music practice enables distinctive musicking techniques. We describe each part of the project and report on a research interview conducted with one of the composers for the purpose of analysing the creative approaches she adopted for composing her piece.

Introduction

Interwoven Sound Spaces (ISS) investigated the creative possibilities of telematic performance in contemporary music ensemble practice through textile and network technologies. The aim was to enable rich and tangible interaction between musicians located in different places. Central goals for this artistic research project were to host a telematic concert programme in established, traditional music venues and to work with experienced professional musicians. This was motivated by our intention to bring telematic performance and wearable interaction design practices outside of conventional academic circles and engage with composers, performers, concert halls, and their audiences. The results of the project were presented in a joint interactive concert with the ensemble KNM in Berlin, Germany, and ensemble Norrbotten NEO in Piteå, Sweden. The concert was simultaneously hosted in two concert halls approximately 1800 km apart: Studio Acusticum in Piteå and the Konzertsaal der Universität der Künste in Berlin. The work was motivated by the need to enhance communication and a sense of co-presence between musicians and audiences during live concerts happening concurrently at multiple, distant geographic locations connected via telematic means. To achieve this, we commissioned new works that combined textile wearable technologies, interaction design, machine learning, and distributed performance, thereby extending the experience of playing and experiencing music telematically beyond screen-based applications. The project was the subject of a short documentary directed by Tim Nowitzki et al. (2023). The name Interwoven Sound Spaces was chosen to echo some key concepts of the project: textiles and the interconnectedness of sounds and concert spaces.
In addition to providing tools for telematic ensemble play, the project was particularly interested in the roles and dynamics of sociocultural spaces that characterize live music performance. Factors other than music, such as dress cultures, communication between musicians occurring in addition to the musical interplay, and the socio-cultural environment in which the performance is experienced, contribute to the success of live music performances. The project and technical decisions were guided by these considerations through an iterative development and close collaboration between musicians, composers, researchers, venues, and developers. Composing interactions between musicians and objects located in multiple interconnected environments situates the project in the broader field of ubiquitous music (ubimus). This research and artistic field looks at how musical activities enacted by human agents, material resources, and the relational properties that characterize them can take place in ecosystems supported by a network infrastructure (Keller and Lazzarini 2017). In ISS we aimed to connect two established concert venues, thereby combining traditional music performance contexts with contemporary, distributed networked practices. To our knowledge, this is one of the first artistic projects combining e-textiles and networked music performance.

Telematic music performance and networked musicking

Telematic music performance occurs when geographically separated musicians perform together by means of telecommunication technologies. This means that musicians play together without being in the same room, possibly far away from each other. We consider networked musicking any musical activity that is expressed through a network of connections between musicians, instruments, audiences, and other agents. The architecture of the network defines the relationships between the different interconnected agencies, establishing a system with specific creative affordances. In this context, we favour the term musicking introduced by Small (1998) as it focuses on the activities that constitute a musical experience and the ways to take part in it, 'whether by performing, by listening, by rehearsing or practising, by providing material for performance (what is called composing), or by dancing' (Small 1998, 9).

Since the early explorations of groups like the U.S. ensemble The Hub (Brümmer 2021) and the very first performance by remote musicians, streamed over the internet in 1993, much technological research has been carried out in order to minimize latency and improve audio and video quality in transmission (Rottondi et al. 2016). These attempts may be related to how Minsky (1980) observes that 'the biggest challenge to developing telepresence is achieving a sense of "being there"' (46). Networked music performance practices have grown considerably in the past two decades thanks to faster and more widespread Internet infrastructures, and have also seen an acceleration in interest and development due to the constraints imposed in response to the COVID-19 pandemic (Onderdijk, Acar, and Van Dyck 2021). Several dedicated software platforms such as JackTrip (Cáceres and Chafe 2010), LOLA (Drioli and Buso 2013), and Elk LIVE, a network performance platform built on the Elk Audio OS framework (Turchet and Fischione 2021), have received renewed interest, and some were used to support distributed concerts and enable distance collaboration during the lockdown (Bosi et al. 2021).
A musician's listening involves not only auditory perception but is also cognitively embodied. In telematic performance, the relationships between body, sound, and space are modified, demanding from the performer 'to listen intently while being in a rather fragile, unstable environment' (Schroeder 2013, 225). As observed by Mills (2019, 6), 'while network technology collapses distance in geographical space, tele-improvisation takes place without the acoustic and gestural referents of collocated performance scenarios. This liminal experience presents distinct challenges for performers.'

Several networked performances make use of sensors, transducers, and other technologies focused on motion detection and motion control. For example, the 'Mocap Streamer' system developed by Goldsmiths University (Strutt 2022; Strutt et al. 2021) was used in a series of performances in which the avatars of two remote dancers danced together, the dancers wearing an inertial sensor motion capture system that enables precise data capture. Network technologies have also been used to control body movement remotely. As an example, in some performances by Stelarc, the muscles of the artist could be controlled remotely by the audience through devices placed on the surface of the skin and a web interface (Elsenaar and Scha 2002; Stelarc 1991).

Recently, Comanducci (2023) proposed a comprehensive framework dedicated to networked music performance, designed after experimental studies carried out in collaboration with classically trained musicians. The results indicated that, when confronted with high latency times, 'musicians are able to somehow adapt themselves or at least to adopt different type[s] of strategies' (Comanducci 2023, 122). This resulted in a system that avoided prioritising latency minimization in favour of providing adaptive tools that help musicians cope with network latency, giving particular attention to spatial audio and audiovisual immersion. However, the use of recent network technologies such as 5G shows promising results when it comes to keeping latency at levels considered acceptable for music-making (Turchet and Casari 2023), thereby opening scenarios in which such technologies make networked music performance more easily accessible.

An internet of musical things

ISS can be situated within a broader research field intersecting networked music performance, the internet of things (IoT), and new instruments for musical expression (NIME). Turchet et al. (2018) refer to this interdisciplinary area as the Internet of Musical Things (IoMusT). They envision a network of '[m]usical things, such as smart musical instruments or wearables, […] connected by an infrastructure that enables multidirectional communication, both locally and remotely' as the foundation of IoMusT (2018, 61994). They describe an ecosystem in which performers, audiences, and machines (i.e. networked objects capable of exchanging sounds, notes, and other music-related data) can be colocated and/or remote, each interacting with the other either locally or by means of a network connection.
ISS implemented an ecosystem that closely resembles many of the IoMusT core ideas. Two groups of performers were located in two distant concert venues connected via low-latency audio and video technologies. Audiences could join the concert at both venues and online. The interactive textiles (see section 4.1) and circuit board (section 4.2) were designed to be worn by the performers during the concert and exchanged data over the network, so that the composers could use such data to design interactions (see section 5).

Interactive textiles in music and performance

Electronic textiles (e-textiles) and textile wearable technology are often used to explore new formats for live music performances, enabling physical touch and expressive body-based interaction between performer and computational audio systems. While typically sound is not a major consideration in textile design, e-textile and interaction designers have embroidered, woven, knitted, printed, and knotted musical interfaces and instruments both on and off the body (Perner-Wilson and Satomi 2018; Psarra 2014; Torres 2017). E-textiles originating in both artistic and scientific communities feature in numerous electronic musical performances, highlighting how they can, for example, lead to more accessible forms of music making, or bridge stage dress traditions with interactive functionality (Greinke et al. 2021). In HCI, research dealing with e-textile sensors is increasingly specialized in terms of what outputs are studied, with computational and interactive audio becoming an important topic (Stewart 2019). Maggie Orth, one of the pioneers of electronic textiles in design, created work that included the Musical Jacket, which featured conductive embroidery to control MIDI sound (Post et al. 2000). The Embroidered Musical Ball was a handheld device with embedded capacitive pressure sensors (Weinberg, Orth, and Russo 2000), and Teresa Marrin Nakra (2000) designed a conductor's jacket using e-textile elements, which gathered physiological and gestural data in 16 channels.

Beyond a technical focus, e-textiles have also been investigated as a means to tackle gender disparity in audio engineering communities (Stewart, Skach, and Bin 2018), and for nondeterministic HCI approaches, embracing disruption, failure, or uselessness when conceptualizing new types of interaction (Andersen et al. 2018; Nordmoen et al. 2019; Skach et al. 2018). E-textiles and textile wearables are, however, rarely used in setups involving multiple musicians or larger ensembles. An example is Ann Rosén's work with interactive knitted knee cuffs, worn in performances by The Barrier Orchestra. The knee cuffs consisted of textile sensors paired with small computers, allowing the musicians to play them as synthesizers and samplers while wearing the knitted tubes around their knees. Another example are the prototypes developed for 'Sound Folds' (2021), which investigated the use of folded e-textile sensors to augment acoustic instruments while acknowledging Western traditions of formal stage dress.

Ensembles

Professional musicians from two contemporary music ensembles participated in ISS, with three musicians located in Berlin, Germany, and three in Piteå, Sweden. This section briefly introduces the ensembles.
KNM Berlin is an established ensemble for contemporary music. It was founded in 1988, presenting compositions, concert installations, and concert projects developed in collaboration with international composers, authors, conductors, artists, and stage directors. Three KNM ensemble musicians were involved in this project, playing cello (Cosima Gerhardt), contrabass clarinet (Theo Nabicht), and flute (Rebecca Lenton). All three musicians had long-standing experience with experimental live formats including electronic and digital sound and video. Wearables and telematic music performances, as practised in this project, had not been used by them previously.

Norrbotten NEO, based in Piteå, Sweden, has been at the forefront of contemporary chamber music for over 15 years, being one of the most distinctive voices on the Swedish contemporary music scene. Today the ensemble is the only one of its kind in Sweden, promoting contemporary art music on a national basis. The ensemble continuously commissions new works and collaborates with both younger and more established composers, nationally as well as internationally. Three musicians of the ensemble participated in the project, playing contrabass clarinet (Robert Ek), percussion (Daniel Saur), and viola (Mina Fred). The ensemble has been involved in telematic performances previously, with one of the musicians also being a member of a quartet focused on networked music practice (Ek et al. 2021). This proved to be an advantage, as the technical support during the development phase was limited on the NEO side, with a larger team joining only later for the concert.

All musicians had experience with improvisation and were familiar with the extended instrumental techniques found in contemporary and new music practices. They were very open to experimentation and to working with the technologies developed for the project. Some of them were already familiar with the techniques and concepts employed by the composers. All musicians are experienced professionals and were regularly remunerated for their work.

Composers

The project involved a direct collaboration with four composers: Cat Hope, Ana Maria Rodriguez, Malte Giesen, and Ann Rosén. Each composer was asked to compose a piece tailored for the project, particularly the networked interactions between the two distant ensembles and the textiles designed for the project. This article focuses on an analysis of Cat Hope's approach to composing her piece.

Technical and design actors

People from various disciplines were involved with technical and design development, and with implementing the final concert. These included wearables design and hardware (section 4), and interaction design (section 5). It further involved taking care of streaming and video communication between musicians, and sound engineering (described in more detail in section 6). It should be noted that these parts were also impacted by the telematic setup, meaning that standard technical stage procedures also prevalent in non-telematic performances needed to be adapted.

Interactive textiles for movement detection

This section introduces the design and technical work that went into creating a set of interactive artefacts to be used on stage. While different textile and non-textile mechanisms were embedded, all artefacts are designed to detect movement of the musicians wearing or operating them.
ISS textiles

Three different designs for interactive textiles to be used by the six musicians were developed. Two of these were textile wearables (the Tensile and the String). The third was an interactive rug (the Rug). All textile prototypes had integrated resistive sensors, able to detect stretch, pressure, or acceleration when the musicians moved whilst playing their instruments. Through data processing (some examples are given in section 5) it was possible to detect different body positions and motion patterns.

The overall design concept for the textiles was inspired by networks. This refers both to the structural behaviour of textiles, such as stretch, and to communication through a configuration unconfined by the boundaries of physical space. Two design researchers were responsible for the work described in this section, of which the first was in charge of the overall design concept and the garment design. The second is specialized in constructed textiles and was responsible for producing textiles from yarns (both with integrated sensors and without), as well as designing connections between textiles and hardware using textile techniques. All prototypes were made relying on specialized textile and garment knowledge, through which textile solutions for connecting different hardware parts to the textile structures could be developed. The conceptual design refrained from taking a purely functional route, but instead focused on the poetics of movement. The two wearables were designed as jewellery-like pieces, focusing on a specific movement or body part with expressive significance for the respective instrument. The rug was a modular design, with each musician using their own module. This allowed for adjusting the design on the stage, while at the same time making sure the musicians stayed in a dedicated area, which was streamed and was visible on the stage monitors in the paired location.

The telematic arrangement of the performance had an impact on all other development processes in the project, including the garment design and making. Usually, the tailoring of garments requires several fittings, that is, in-person meetings between garment maker and wearer for discussing designs, taking measurements, and altering the garments to best fit the wearer. Due to the physical distance between the wearables team in Berlin and the NEO ensemble in Piteå, this was not possible in our setting. Designers and NEO musicians were not able to be physically together at any point of the project. While commonly tailored textile wearables are developed in iterative steps, using in-person exchanges and fittings to identify needed alterations, our design required a different approach. This was done in several stages. Firstly, we watched videos of the musicians playing at previous performances, gaining an initial understanding of their body types, learning about typical movements when playing their respective instruments, and identifying areas of the body that would benefit from motion detection. At this stage of the project, not all musicians and instruments involved had been named. This required the fashion designer to also think about approaches for universally designed wearables, where little to no alteration would be needed when designs are worn by different people. Several video calls were held with the musicians known at this stage, which were the cellist, clarinettist, and flautist based in Berlin.
In the next stage, two movement categories were identified that would be used by as many musicians as possible. The first was the movement of the right arm, which was observed as defining for string instrument players holding the bow, as well as for percussionists. The second category was weight-shifting, which we observed mostly in musicians playing woodwind instruments, which in this group were the flautist and the two clarinettists. In addition, all musicians were accustomed to using pedals when playing, either to turn pages in the score or to add effects in contemporary music they had played in the past, which often resulted in more intentional or controlled weight-shifting.

Rehearsals offered opportunities to meet the musicians in person in Berlin. These sessions were also used to train them in dressing the wearables as well as to identify required alterations. Regarding the NEO musicians, no fittings could be scheduled until the wearables were finalized. At this stage, the prototypes were sent to them by post, including printed instructions for dressing and connecting. The designers then met the NEO musicians in a video call, assisting them with dressing and connecting the wearables and hardware.

The Tensile

The Tensile is a machine-knitted sleeve inspired by tensile architectural structures. It has three integrated knitted stretch sensors that detect movement of the right index finger, elbow, and shoulder. Three sleeves were produced and worn by the cellist in Berlin (shown in Figure 1), and the violist and percussionist in Piteå.

The geographical distance between the design researchers and the musicians in Piteå presented a series of design challenges throughout the process. The musicians were required to record and send their own body measurements, and being unfamiliar with the process of recording one's own body in such a manner resulted in imprecise measurements from the musicians. However, the elastic properties of the Tensile textile, both in yarn and knitted structure, provided enough flexibility to create accurately fitted wearables. Figure 2 shows the textile for the Tensile as it is being knitted (a) and the finished wearable (b). For the sleeve worn by the NEO percussionist, an alteration was required, for which the wearable was posted back and forth to be iterated.

The String

The String is a layered shoulder harness with long, soft padded strings containing an integrated accelerometer. Three String wearables were produced, worn by the contrabass clarinettists in both Berlin (see Figure 3) and Piteå, and the flautist in Berlin.

The String comprised a fitted knit jersey shoulder harness with a velcro closure under one arm. The padded strings were stitched from power-mesh and filled with textile wadding. The accelerometer detected the swings that occurred as a result of body movement, acting as a form of subtle motion tracking.

As with the Tensile, geographical distance and limited availability meant some musicians were required to provide their own body measurements, and the properties of the knitted jersey and power-mesh proved elastic enough to compensate for any imprecise measurements.
The Rug

An interactive rug capable of detecting the musicians' weight-shifting when playing their acoustic instruments was designed and prototyped (see Figure 4). Given the musicians' familiarity with pedals, the rug provided a subtle control element that musicians were able to use with little need for training. It further served to mark out the areas on stage that were captured by the streaming camera, ensuring that the musicians stayed within the camera frame at all times for effective streaming.

The Rug was modular, consisting of three independent rug shapes occupied by one musician each. Two sensors were mounted underneath the rugs 30 cm apart, each large enough for the musicians to comfortably stand on and allow them to move around.

ISS circuit board

The ISS hardware served as an interface between the textile sensors and one computer each in Berlin and Piteå, respectively. Each musician carried one circuit board (Figure 5), and an adapted version was used for the rugs. The workstations then connected via Open Sound Control (OSC) (Wright 2005) and thereby formed two central nodes of a network that included all of the textile sensors. The circuit board consists of a microcontroller and break-out boards for connecting textile sensors, haptic drivers, and the power supply. Wireless connectivity was a central requirement, to allow for changing setups during different parts of a performance. We used Arduino Nano 33 IoT microcontrollers that offer both Bluetooth and WiFi connectivity, with the latter being implemented in this project due to the greater range and stability of WiFi for stage applications (Mitchell et al. 2014).

Regarding the connection between each board and the textiles, flexibility and ease of use were central requirements. We used 3.5 mm audio jack connectors to allow for quick and easy connections between the board and the textiles. Each of the boards has four inputs for resistive sensors, one 4-pin input used for the String wearable's external IMU, and one 4-pin output for the haptic motor. Two of the four 2-pin inputs go into a custom signal preprocessing board that helps to achieve a better signal-to-noise ratio (SNR) from the sensors' signals. The signal offset and gain can be modified through adjustable variable resistors. While offering a cleaner signal when ideally tuned, the board can also be an additional source of complexity in setups that need to change quickly. Therefore, making this option available on only some of the inputs proved a good compromise, especially with the outlook of using the board for future projects.

The essential functionality of the board is that of a two-way interface between the workstation and the various sensors. It reads the sensor inputs as 10-bit values and forwards them as OSC packets to the workstation. Likewise, the vibration of the haptic motor can be triggered through the software running on the workstation. The command is sent to the Arduino via OSC, which then triggers the haptic driver and motor. The logic of this behaviour is handled by the Arduino microcontroller and programmed in the Arduino programming language.
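To illustrate the two-way data flow just described, the following is a minimal workstation-side sketch using the python-osc package. It is our own illustration rather than the project's actual software (which was implemented in Max), and the OSC address patterns, IP addresses, and ports are invented for the example.

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    # Client for sending commands back to one board (hypothetical address/port).
    board = SimpleUDPClient('192.168.0.42', 9001)

    def on_sensor(address, *values):
        # Boards forward their sensor readings as 10-bit values (0-1023).
        stretch = values[0] / 1023.0  # normalize for later mapping
        print(address, stretch)
        if stretch > 0.9:
            # Trigger the board's haptic motor, e.g. as a cue to the musician.
            board.send_message('/haptic/trigger', 1)

    dispatcher = Dispatcher()
    dispatcher.map('/tensile/*', on_sensor)  # hypothetical address pattern

    server = BlockingOSCUDPServer(('0.0.0.0', 9000), dispatcher)
    server.serve_forever()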
Composing interactions

To present the expressive possibilities offered by the project, we conducted an online workshop with the composers. This included a presentation of the system parts and concepts, with videos of some composition and interaction design ideas filmed in collaboration with three guest performers. Following the presentation, each composer had a timeslot with the developer and design team to discuss ideas for their pieces and how to implement them. We saw this as a necessary step in the project, as mapping sensor data for musical purposes is known to be a non-obvious and challenging process with many implications for the expressivity of the resulting musical interactions (Hunt, Wanderley, and Paradis 2003). The following two subsections will give an overview of a selection of the interactions proposed during the workshop, while section 7 will provide detail on which interactions were implemented in the piece composed by Cat Hope. Her approach will be discussed further by commenting on excerpts from an interview we conducted to gain a deeper insight into her experience composing for the project. It is worth noting that it was explained to the musicians how the interactive textiles worked and generated data, but the decision on whether their use of the textiles should be deliberate or more unconscious was left to the composers, who scored their pieces with different degrees of explicit instructions on how to interact with wearables and rugs. While some composers gave more explicit directions, Hope opted for informing the performers of how their movements would affect the score and the live electronics without explicitly notating such behaviours in her score. Other composers opted instead for a more choreographed sequence of movements or more explicit instructions on how to use the interactive textiles during the performance.

Interactions proposed during the workshop

We designed a set of interactions between the musicians' movements while performing and sound, in order to showcase some of the creative possibilities of the ISS system. Movement data was captured through the ISS textiles and board (see sections 4 and 5). The data was then used for interactive sound synthesis and haptic cueing, using various techniques implemented in Max as described below.

Stretching sound using the Tensile and machine learning

The first proposed interaction involved the Tensile worn by a cellist. The wearable captured the movements of the right arm while the performer bowed the string, while a microphone mounted on the bridge of the instrument captured the sound. The core idea of the interaction was to process the sound of the instrument using the data from the Tensile and aurally 'stretch' the sound of the cello to mirror the stretching occurring in the textile while the musician bowed the strings. This was implemented using an FFT-based frequency shifter to process the sound captured by the cello microphone. The mapping between the Tensile sensor data and the frequency shifter parameters was defined by a linear regression model created using an interactive machine learning workflow (Visi and Tanaka 2021) and was implemented using the GIMLeT package for Max. It worked as follows (a schematic sketch of this regression workflow is given at the end of this section):

1. The composer defined how the frequency shifter should affect the sound when the body of the musician is in key positions, e.g. when the strings are closest to the bow tip and the Tensile is stretched, or when the strings are closest to the bow frog and the Tensile is released;
2. While the cellist was wearing the Tensile, data in the aforementioned positions was recorded;
3. Sensor data for each position was paired with the corresponding frequency shifter parameters and used to train a neural network for obtaining a linear regression model;
4. The sound of the cello was processed (i.e. 'stretched') in real time as the musician played and new sensor data was fed into the linear regression model.

During the workshop we explained that such an approach can be transferred to other textiles as well as other sound parameters, and that the software interface we built would allow such interactions to be quickly set up during rehearsals. We called each key-position-to-sound-parameters pair an 'anchor point' (Tanaka et al. 2019).

The String: echoing sways

A key concept behind the example interaction for the String wearable was to consider the swaying of the strings hanging down the shoulder as echoes of the full body movement of the musician. We demoed this concept with flute players, who perform standing. Small movements of the body made the strings swing, and these oscillations were captured by the motion sensor in the wearable. To mirror these echoes of motion in the sound of the instrument, we used the sensor data to add layers of modulated echo to the sound of the instrument that became more present as the movement became more intense. Additionally, quick, sudden movements triggered additional impulsive sounds, resembling those made by hitting or shaking a spring reverb unit.

Rug

Each rug defined the area on stage in and around which each musician performed. It has three active areas that measure pressure (see section 4.3). We suggested three possible ways of using the pressure data obtained from the rug for musical interaction, demoed in collaboration with the flute player Diane Barbé:

1. On/off: the simplest interaction with the rug we could think of; the musician stepped on or off the rug. Stepping on the rug resulted in the sound of their instrument being sampled by a granular synthesizer and played back as long as the musician stood on the rug.
2. Weight-shifting: built upon the previous interaction, if the musician shifted their weight to either side while standing on the rug, the synthesis parameters of the granular synthesizer used for sampling the instrument sound changed, adding subtle timbre variations tied to how the musician distributed their weight on the rug. Parameter mapping was done using machine learning to build a linear regression model, similarly to the stretching sound interaction described in section 5.1.1.
3. Pedal-like functions: to echo how effect pedals work, stepping on a third sensitive area of the rug resulted in more dramatic processing of the instrument sound, such as heavy distortion. This third sensing area was later discarded, as it became clear that two sensors were sufficient to serve the composers' ideas.
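The sketch below illustrates the anchor-point workflow in Python, with a plain least-squares regression standing in for the neural-network-derived linear regression mentioned above. The project itself implemented this in Max with the GIMLeT package; the sensor dimensions, parameter names, and training values here are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Anchor points: sensor readings recorded in key body positions, paired
    # with the frequency-shifter parameters chosen for those positions.
    sensor_anchors = np.array([
        [0.05, 0.10, 0.08],   # bow at the frog: Tensile released
        [0.90, 0.85, 0.70],   # bow at the tip: Tensile stretched
    ])
    param_anchors = np.array([
        [0.0, 0.2],           # (shift amount, wet/dry mix) at the frog
        [1.0, 0.8],           # (shift amount, wet/dry mix) at the tip
    ])

    # Fit a linear model mapping sensor space to parameter space.
    model = LinearRegression().fit(sensor_anchors, param_anchors)

    # During performance, incoming sensor data is mapped continuously to
    # synthesis parameters, interpolating between (and beyond) the anchors.
    live_reading = np.array([[0.45, 0.50, 0.40]])
    shift, mix = model.predict(live_reading)[0]
    print(f'shift={shift:.2f}, mix={mix:.2f}')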
ISS performance

The telematic concert took place on the 21st of December 2022, hosted simultaneously at the Konzertsaal der UdK in Berlin, Germany, and at Studio Acusticum in Piteå, Sweden (Figures 6 and 9). On each stage, screens showed a live video feed of the musicians at the other location. This provided a visual representation of the remote musicians. Arranging musicians and screens along an arc (see Figure 8) allowed the screens to be seen both by the performers physically present on stage and by the audience. This created a visuo-spatial relationship on stage between the local and remote musicians, thereby conveying a sense of co-performance. The screens always showed the live video feed from the same angle for the purpose of creating a consistent spatial link between the stages, something more akin to looking through a window than watching a TV. This means that the fields of view of the cameras were set once before the performance, using the rugs as reference points, and then left unchanged throughout the concert. This approach is akin to that of media arts works such as 'Hole in Space' by Galloway and Rabinowitz (1980), in which the artists created a consistent telematic link between two distant places. Screens were essential during the performance to convey a sense of co-presence to both musicians and audience, but they proved useful also during other stages of the project.

During rehearsals, seeing the remote musicians ready through the screens helped the researchers and the local musicians coordinate actions before the beginning of a trial performance. Visual contact was also important when working with the composers during workshops, as it helped composers communicate their ideas to the remote musicians and get a better sense of how the performance was unfolding at the other location. The stage screens were not used to communicate any time-sensitive cues such as conducting gestures, as musicians relied more on the sound and the scores to synchronize their performance. They were also useful for the audio engineers at both locations, as they made it easier for them to understand what was going on at the other stage while sound checking and during the performance. The engineers also used the location of the screens on stage as a spatial reference when mixing, and arranged the position of each instrument in the sound field to reflect where the corresponding screen was placed on stage. The rugs marked the position on stage for each local performer in order for them to be properly shown on screen on the other stage. The stage plan for Studio Acusticum is schematized in Figure 8, and the recordings of the live streams from both locations are available on the project website.
We used JackTrip (Cáceres and Chafe 2010) for streaming uncompressed multichannel audio between the locations. All instruments had their own dedicated microphones. In Berlin, cello and flute were close-miked using clip-on condenser microphones, while the contrabass clarinet was miked using a pair of small-diaphragm condenser microphones mounted on stands placed close to the instrument. A similar solution was used for the contrabass clarinet in Piteå, while the viola was close-miked with a condenser clip-on and the percussion was captured with two overhead condenser microphones. Four channels dedicated to live electronics were added to the 16 audio channels assigned to the microphones (8 per location), for a total of 20 channels. Each location received all separate, uncompressed remote channels via JackTrip. This allowed the audio engineers at both locations to independently mix the sound for the respective concert hall. We measured an overall round-trip delay of 155 ms (i.e. 77.5 ms one-way) in the rehearsal venues, which is comparable to the latencies reported by Bosi et al. (2021). The live video stream from Piteå had the same stereo mix as the hall, while the live video stream from Berlin used a binaural rendering of an ambisonics mix that was made specifically for the live streaming.

Cat Hope's 'The Drift'

This section introduces and discusses in detail one of the four pieces composed and performed for the project: 'The Drift' by composer Cat Hope. It provides an insight into how a selection of the technologies described in the previous sections were implemented in the artistic concept and structural setup of the piece (see section 7.1). Following an interview conducted with Hope, a set of topics was extracted from this discussion. We report and discuss her considerations on the relationships between the piece and the concepts of latency (see section 7.2), liveness (7.3), and wearable sensing (7.4). We conclude the section with reflections on the practical implementation of the ideas of the composition and the musicians' interpretation.

It is worth making clear that the interactions described here were designed specifically for Hope's piece. The pieces composed by the other three composers involved entirely different interactions between the data generated by the interactive textiles, the musicians, and live electronics. Additionally, the other pieces adopted different approaches to scoring, including standard Western music notation, both static and animated, as well as a choreographed sequence of gestures. Describing the other pieces in detail is beyond the scope of this article and will be addressed in future publications.

7.1. Artistic and technical concept of 'The Drift'

Cat Hope's composition 'The Drift' uses data generated by the wearables to alter the motion of notation in a digital animated score. The score images, which are normally fixed and move from left to right in the majority of Hope's work, 'drift' around on the digital page, their movement indicating variations in timbral density for each player. The title of the work is a homage to US singer-songwriter Scott Walker, whose 2006 album of the same name was described by him as being composed by employing 'blocks of sound' (Leone, 2006). Here the work is also developed in blocks, but in this case of notated parts that float across the score 'surface'.
Hope's work focused on how data from the wearables could influence the score read by the musicians. Scored for two contrabass clarinets, viola, percussion, and electronics, the data was also used by Federico Visi in the control of the electronics scored in the piece. The two contrabass clarinettists wore String wearables, with the violist and percussionist wearing the Tensile models.

Hope's compositions are usually presented as animated notation on networked iPads, using the Decibel ScorePlayer application (Hope et al. 2015). Scores can be networked over the Internet in real time using the iPad application, and a 'canvas layer' function in the application enables real-time drawing to occur as the score unfolds, using Max commands (Wyatt, Vickery, and James 2018). Coloured graphic notation, with parts for each instrument, scrolls from left to right across the screen, with a vertical 'playhead' line indicating the point of performance for the musicians. The playhead in this piece provides a timbral 'density' scale, which determines the textural density the performers apply to their part, with the topmost part of the score page being the most 'complex' and the bottom the most 'clean' (see Figure 10).

'The Drift' uses data generated by the wearables to 'drift' individual score parts up and down the vertical axis of the score as they move towards the playhead, meaning the timbral variation of the sound is different at each performance. The score itself was written by the composer; the vertical position of each part is affected live by the data generated by the wearables (a schematic sketch of such a mapping is given below). The score displayed on each screen includes all the parts, and the scrolling is synced via the network; therefore all musicians can see on their devices the full score and how the live data is affecting each part while they perform.

Hope followed the rehearsals on site in Berlin, with an audio-video connection to the musicians in Sweden. She gave detailed feedback to the musicians regarding the performance of her piece, answered questions regarding how certain graphical elements of the score should be interpreted, and gave directions about musical aspects other than timbral density.
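As a schematic illustration of the mapping just described, the sketch below drifts one part along the vertical density axis in response to accelerometer data from a wearable. It is our own reconstruction in Python; the actual piece used the Decibel ScorePlayer and Max, and the smoothing constants and data ranges are invented.

    import numpy as np

    class DriftingPart:
        """Vertical position of one instrument's part on the animated score.

        y = 0.0 is the bottom of the page ('clean' timbre), y = 1.0 the top
        ('complex' timbre). Wearable motion pushes the part upward; with no
        motion it settles back down, echoing the decaying sway of the String.
        """

        def __init__(self, smoothing=0.05, decay=0.99):
            self.y = 0.5          # start in the middle of the density scale
            self.smoothing = smoothing
            self.decay = decay

        def update(self, accel_sample):
            # Motion intensity from a 3-axis accelerometer sample, with the
            # constant gravity component (about 1 g) subtracted out.
            intensity = abs(np.linalg.norm(accel_sample) - 1.0)
            target = np.clip(intensity, 0.0, 1.0)
            # Slow exponential smoothing gives the delayed, swaying response.
            self.y = (1 - self.smoothing) * self.y + self.smoothing * target
            self.y *= self.decay  # drift back toward 'clean' when motion stops
            return self.y

    part = DriftingPart()
    samples = np.random.default_rng(1).normal(0, 0.3, size=(10, 3)) + [0, 0, 1]
    for sample in samples:
        print(round(part.update(sample), 3))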
These considerations point to two aspects of Hope's approach to composing for telematic music performance. Firstly, latency is not necessarily seen as a hindrance, but as a quality of the medium that can also be aestheticized. Secondly, this understanding of latency implies an approach to musical timing in which the pursuit of strict synchronicity is eschewed in favour of ways of composing that allow for, or even integrate, a more fluid approach to timing. This could be seen as a way of composing that is specific to distributed network performance, in which the qualities of the network do not simply affect the performance but rather influence how the music itself is thought of and conceived.

Considerations on liveness

Reflecting on composing 'The Drift', Hope made some considerations regarding the liveness of the performance. Referring to whether the score should be altered by the data obtained from the String wearables in real time, she explains:

I did feel that the ephemerality of the score was key, it had to move in real time, as driven by the performers during the concert. Liveness means little details are different every time, and the performers are much more 'embedded' into the composition.

In a project that makes extensive use of technology, Hope embraces human factors that make each performance unique. The wearables are conceived as fluid interfaces between the bodies of the musicians and the network, rather than controllers designed to achieve an exact and repeatable result:

Yet these variations in how each musician placed the Tensile on their body, along with their fastening of it and its shifting as they performed, further served the intended purpose of the Tensile not as a controller, but as an interface which follows the movements that naturally occur when acoustic instruments are played.

Embracing liveness and not fully predictable outcomes in a network of diverse interconnected technologies creates an environment that affords experimentation but that is also fragile and complex. Hope reflects on realizing this while working on 'The Drift':

The complexity that [using real-time data from the wearable to generate animated scores live] created, however, soon became very apparent to me, and it was clear that a lot of the work we had to do would be in testing the technology rather than creating ideas or rehearsing music.

Wearables in 'The Drift'

Hope explains the relationship she sees between the gestural affordances of the wearables and her musical aesthetic:

I also used The Tensile on viola and percussion because I thought they had the most dramatic arm movements. For example, the percussion hitting the tam tam with a long acceleration, the movement of the viola bow. I liked the fact that these and the swinging movements were long and 'glissando', in line with my musical aesthetic.
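To make the wearable-to-score relationship concrete, the following is a minimal hypothetical sketch of how a normalised sway reading might drive the vertical drift of a score part. The class, constants, and smoothing scheme are illustrative assumptions; they are not the actual ScorePlayer or Max implementation used in the piece.

```python
# Hypothetical sketch: a sway reading drives the vertical 'drift' of a
# score part. All names and constants are illustrative assumptions.

class DriftMapper:
    def __init__(self, sensitivity: float = 0.5, smoothing: float = 0.9):
        self.sensitivity = sensitivity  # scales sway into page units
        self.smoothing = smoothing      # closer to 1.0 = slower drift
        self._y = 0.5                   # vertical position in [0, 1]

    def update(self, sway: float) -> float:
        """Map a sway reading in [-1, 1] to a vertical score position
        in [0, 1], where 0 is 'clean' and 1 is 'complex'."""
        target = 0.5 + self.sensitivity * sway
        # Exponential smoothing: the part drifts rather than jumps,
        # echoing the delayed, swinging motion of the wearable.
        self._y = self.smoothing * self._y + (1 - self.smoothing) * target
        self._y = min(1.0, max(0.0, self._y))
        return self._y

mapper = DriftMapper()
for reading in (0.1, 0.4, -0.2):        # fake sensor frames
    print(round(mapper.update(reading), 3))
```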
The interconnected wearables, musicians' bodies, stages, and sounds are entangled further through Hope's animated score, thereby extending the network of musical things that ISS enacts. Hope explains how she conceives the data collected and transmitted by the wearables as the musicians move: '[it's] like an artefact of the sound and its making, rather than a different insight into the making of a sound.' In other words, the data generated by the wearables depends on the body movements required to play the instrument; it is not an independent process. The data is fed back into the network of musical agencies at the centre of the piece, affecting the score as well as the live electronics, thereby closing an interaction loop involving musicians, instruments, wearables, and sound.

Practical implications of composing for the network

Hope reflected on the implications of composing for musicians distributed in different locations:

being able to come to Berlin and work closely with [clarinettist of KNM ensemble] Theo but then not having that same opportunity with the Piteå musicians shaped the work considerably. [...] The result was that Theo ended up as a kind of soloist, even though that was not really what I had planned or written in the score.

This reflection points to some implications of composing for a networked, distributed ensemble. The configuration of the network and its nodes had practical implications for how established creative practices unfolded during the project. The composer worked differently with the musician physically present on location and could not do the same with the remote ones, and this affected how the composition developed. Hope elaborates further on this:

Telematically, the way people related to each other when in the same space was different than how they related over the Internet. For example, in Piteå, they had an 'acoustic' mix in addition to the recorded sound of them that Theo worked with. I think that might make it difficult to understand what each other were doing, across the telematic reach. The musicians together would rely very much on what they could hear from each other in the room, but Theo could only rely on what he was getting through the speakers in his place on stage.

Again, the topology of the network has implications for the performance that are not immediately obvious. Hope is, however, well aware that a networked performance should not be taken as a surrogate for an in-person one:

If you're having a telematic performance where the performers are expecting it to be like an in person thing, then you're going to fail. Whereas if you have a kind of agreement that what you're doing is designed for that platform, that it's not a poorer form, it's actually just a different one, and the affordances that come to you are particular. You go on a path of discovery. You have to agree to be interested in that journey, and I'm interested in that as a composer. How can I make music that encourages a rewarding telematic experience? I don't want to emulate in person performance with my piece, but I want to try and create a new type of performance experience, and use composition to drive that. You have to get the performers in the state of mind where they're also ready for that adventure and once all the technical stuff is taken care of, I would hope there's some new type of musical engagement. That's what happened.
Conclusion and future work

We provided an overview of the artistic project Interwoven Sound Spaces, which investigated telematic music performance enriched with interactive textiles in contemporary chamber music. The article described the technical and design development of the textiles, hardware, and software system used to design the interactions. The second part of the article analysed how the system was put into use by one of the collaborating composers.

Hope reflects on latency, liveness, and the role of the wearables in the telematic performance of 'The Drift'. Latency is not seen as a drawback but is used as a conceptual and aesthetic component of the composition. This allows for a way of composing in which strict synchrony between musicians is replaced by a more fluid approach to timing. Hope regards the wearables as fluid interfaces connecting the bodies of the musicians with the network: actions are not controlled in a way that achieves exact and repeatable results, which could contribute to the perception of liveness in 'The Drift'. Hope gives an account of how she conceived the relationships between the musicians, the wearables, the data they generate, and the live score. This outlines the complex entanglement of interactions that networked music performance affords, indicating that networks of musical things may bring about new musical ecologies unique to the medium. The view that networked music performance is a medium in its own right, as opposed to a surrogate for physically co-present music performance, is further supported by Hope's account of her experience writing 'The Drift' for ISS.

In implementing Hope's interaction ideas for her piece, we tackled a set of technical and conceptual challenges. Firstly, we had to network the Decibel ScorePlayer application used by the composer with our system in order for it to access the data generated by the wearables at both locations. We implemented a simple solution that allowed the exchange of data between the systems via OSC. That was good enough for the piece, but we felt that a deeper and more flexible integration between the systems would have been desirable. This could probably be achieved more easily in an ecosystem designed following the IoMusT vision. On the other hand, we were pleased with the network infrastructure that we were able to put in place between the two locations, which gave us access to the data from all wearables without requiring much intervention from the musicians aside from turning on the devices and occasionally resetting them using a switch we placed on the side of the board. A more conceptual challenge we addressed had to do with mapping wearable data to the graphical score. With 18 continuous streams, the data obtained from the wearables was complex. To reduce dimensionality and obtain useful descriptors that could be mapped to the score application and the live electronics, we adopted a feature mapping approach (Visi and Tanaka 2021) which aggregated the data from the motion sensor of the String wearable to obtain an overall measure of motion activity for each clarinettist. We then implemented a way to easily adjust the sensitivity with respect to the range of movement we wanted to obtain in the score. A challenge common to all pieces was to give the composers a clear sense of what actions the e-textiles respond well to and how the data obtained would behave. This was partly addressed in the workshop described in section 5. However, practical sessions were necessary for the composers to grasp the affordances of the system, revealing specificities that corroborate the idea that such platforms for networked music performance afford musical approaches that are unique to the medium.
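As an illustration of the bridging and feature-mapping steps just described, here is a minimal hypothetical sketch. It assumes the python-osc package; the OSC address, port, and the simple magnitude-based activity measure are stand-ins for the project's actual code and for the feature mapping approach of Visi and Tanaka (2021).

```python
# Hypothetical sketch of the OSC bridge and motion-activity feature.
# Assumes the python-osc package; address, port, and scaling are
# illustrative stand-ins, not the actual Interwoven Sound Spaces code.
import math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # bridge to the score system

def motion_activity(ax: float, ay: float, az: float,
                    sensitivity: float = 1.0) -> float:
    """Collapse one 3-axis accelerometer frame into a single activity
    value: distance of the acceleration magnitude from rest (~1 g),
    scaled by an adjustable sensitivity."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - 1.0) * sensitivity

# One aggregated value per wearable replaces many raw sensor streams.
activity = motion_activity(0.12, 0.98, 0.31, sensitivity=1.5)
client.send_message("/wearable/string1/activity", activity)
```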
This work has a number of limitations that motivate future work. To better understand the different artistic approaches adopted by the composers, as well as the possibilities offered by ISS as a telematic music performance framework, we are planning interviews with the other composers who took part in the project. This will allow us to carry out further analysis as well as draw comparisons between Hope's approach and that of the other composers. In addition, we are planning to carry out interviews with some of the musicians from the ensembles to gain insights into the performers' perspective. We acknowledge that a deeper understanding of the audience's experience would also be valuable; this, however, is beyond the scope of the current project.

Continued work beyond this project will aim to gain more insight into the usability of interactive textiles in telematic music performances. Firstly, a technical evaluation of the system and its components is needed to understand the reliability and repeatability of the technologies involved. Secondly, user studies conducted with composers and musicians will provide more information about usability and ease of use.

Stefan Östersjö is Chaired Professor of Musical Performance at Piteå School of Music, Luleå University of Technology. He received his doctorate in 2008 for a dissertation on musical interpretation and contemporary performance practice. Östersjö is a leading classical guitarist specialising in the performance of contemporary music. As a soloist, chamber musician, sound artist, and improviser, he has released more than thirty CDs and toured Europe, the USA, and Asia. He has collaborated extensively with composers and in the creation of works involving choreography, film, video, performance art, and music theatre.

Figure 1. The KNM cello player wearing the Tensile.
Figure 2. The textile for the Tensile as it is being knitted on an industrial manual knitting machine (left), and after being sewn into the wearable for the NEO percussionist (right).
Figure 3. The clarinet player of KNM wearing the String wearable.
Figure 4. The Rug under a musician's feet.
Figure 5. ISS circuit board.

The concert was presented at Konzertsaal der UdK in Berlin (Figure 6(a)); at Studio Acusticum Stora Salen in Piteå, Sweden (Figure 6(b)); and online through two live streaming video feeds hosted by UdK's own udk/stream platform. The recordings of the videos livestreamed from Berlin (F. Visi et al. 2023a) and Piteå (F. Visi et al. 2023b) were made publicly available. An audience was present at both locations. Attendance in Berlin can be estimated approximately, as reserving a free ticket was required to attend: 122 tickets were pre-booked. The concert programme was performed only once. Both stages featured three interactive rugs and three 65-inch LCD screens, each placed on a stand and set in portrait orientation. Each screen displayed a live video feed showing the nearly full figure of one of the musicians on stage at the other location, and the screens were large enough to be seen by the musicians on stage as well as by the audience (see Figures 7 and 8).
Figure 7. Two stills of the concert from the live streaming video feeds: view of the stage at Konzertsaal der UdK Berlin (a); view of the stage at Studio Acusticum Stora Salen (b).
Figure 9. The screens on stage at Konzertsaal der UdK in Berlin displaying the musicians performing at Studio Acusticum in Piteå (photo: Nikolaus Brade). The picture was taken during the performance of Cat Hope's piece 'The Drift'. The score was displayed on the screens placed on the floor in front of the musicians.
Figure 10. A screenshot of the performer's view of 'The Drift', showing the 'density playhead' over the score, and score movement indicated with black arrows. Each part is colour-coded (not shown in the black and white print version of the article): green is percussion, blue is viola, purple and pink are the contrabass clarinet parts. The electronic parts are indicated by the finely dashed lines, the colour indicating the wearable source and the shape providing a guide for the electronics musician. The straight, dashed horizontal line serves as a pitch reference for the performers, in this example for one of the contrabass clarinets.

Notes

[...] focuses on experimental material development, specializing in the intersection of heritage hand-weaving techniques, technology and textile engineering.

Philipp Gschwendtner is a media artist, freelance writer and programmer. He is currently studying in the M.A. Design & Computation at UdK Berlin and TU Berlin and holds a B.Sc. in Media Technology. His work reflects current developments in technology from a media theory perspective.

Professor Cat Hope is a composer, musician, artistic director and academic. She is the co-author of 'Digital Arts: An Introduction to New Media' (Bloomsbury, 2014), co-editor of 'Contemporary Virtuosities' (Routledge, 2023) and director of the Decibel new music ensemble. She is a Professor of Music at the Sir Zelman Cowen School of Music and Performance at Monash University, Melbourne, and a Churchill, Civitella Ranieri and Hamburg Institute of Advanced Study Fellow.
Information Theory Based Evaluation of the RC4 Stream Cipher Outputs

This paper presents a criterion, based on information theory, to measure the amount of average information that sequences of RC4 outputs provide about the internal state. The test statistic used is the sum of the plug-in (maximum likelihood) estimates of the entropies H(j_t|z_t), corresponding to the probability distributions P(j_t|z_t) of the sequences of random variables (j_t)_{t in T} and (z_t)_{t in T}, independent but not identically distributed, where z_t are the known values of the outputs, while j_t is one of the unknown elements of the internal state of the RC4. It is experimentally demonstrated that the test statistic allows for determining the most vulnerable RC4 outputs, and it is proposed as a vulnerability metric for each RC4 output sequence with respect to the iterative probabilistic attack.

Introduction

In [1], the iterative probabilistic attack was proposed to reconstruct the internal state of the RC4 algorithm starting from a known output sequence; it was successively improved in [2,3]. In essence, these attacks attempt to extract information about the content of the internal state {(j_t, S_t) : t = 1, ..., T} of the RC4 stream cipher from a known output sequence {(z_t) : t = 1, ..., T}. For this, the conditional probabilities P(j_t|z_t) and P(S_t|z_t) are iteratively recalculated. This type of attack does not yet break RC4, but it constitutes a serious potential threat to its security, which should not be ignored.

Concerning this threat, a criterion has been developed to assess the vulnerability of an RC4 output to this type of attack. The test statistic used is based on the entropy of the conditional probability distributions P(j_t|z_t) for the z_t that appear in the evaluated sample. The test statistic was proposed considering that the values and positions of these z_t determine their probability distribution and associated entropy. The lower the value of the statistic, the more vulnerable the evaluated sample: the attacker's uncertainty about the value of the variable j_t will be lower. This result can have various applications, since it allows for an evaluation of a set of RC4 output sequences according to their vulnerability or theoretical strength in the face of iterative probabilistic attacks. This criterion can characterize the keys that cause the greatest vulnerability, which can lead to the identification of a new class of weak keys. In this work, experimental results evaluating the RC4 output sequences according to their vulnerability to probabilistic attacks are presented.

The structure of this work is as follows: in Section 2 the basic concepts of the research topic are described, including the description of the RC4 algorithm and the reports associated with the iterative probabilistic attack; Section 3 introduces the statistic used to evaluate the vulnerability of the RC4 outputs with respect to the iterative probabilistic attack; Section 4 details the pre-calculation of frequencies that allows the estimation of the joint, marginal, and conditional probabilities and, in turn, of the entropies used to compute the statistic on the output sequences of RC4; in Section 5 experiments are performed to validate the proposed statistic, and the results of applying the statistic to RC4 output sequences are illustrated; finally, Section 6 presents some conclusions.
Description of the RC4 Stream Encryption Algorithm

The RC4 algorithm [4] stands out from other stream ciphers for its wide use in different applications and protocols. The RC4 stream cipher [4] is optimized for 8-bit processors, being extremely fast and exceptionally simple. It was included in network protocols such as secure sockets layer (SSL), transport layer security (TLS), wired equivalent privacy (WEP), and Wi-Fi protected access (WPA), and in various applications used in Microsoft Windows, Lotus Notes, Apple Open Collaboration Environment (AOCE), and Oracle Secure SQL [4]. In the last decade, some applications [5,6] have avoided RC4 encryption, given the weaknesses that have been found [7]. However, although it is not considered very secure [8], RC4 continues to motivate research nowadays [8-10]. Furthermore, this cipher is a good option for measuring the effectiveness of methods that analyze weaknesses in stream ciphers related to those already known in RC4 [11-14], or for checking the performance of hardware or software schemes that make use of cryptography [15-17].

The RC4 has two main components, the key scheduling and the pseudo-random number generator. The key scheduling generates an internal random permutation S of the values from 0 to 255, from an initial permutation, a (random) key K of length l bytes, and two pointers i and j. The maximal key length is l = 256 bytes; see Algorithm 1. The main part of the algorithm is the pseudo-random number generator, which produces a one-byte output at each step. As usual for stream ciphers, the encryption is a XOR of the pseudo-random sequence with the message; see Algorithm 2 (the RC4 pseudo-random generator).

For the RC4 stream cipher, several modifications have been proposed; while some modified only certain components or operations, others completely changed the algorithm, see [18]. It is important to note that even RC4 variants have received a lot of attention in the scientific community, see [19]. The RC4 stream cipher, in its definition, does not prescribe the use of initialization vectors (IVs) [4]. However, it is well known that in practical applications of RC4, as in many other stream ciphers, an IV is combined with a secret key to form a session key. The proposed method is independent of the approach used; it simply operates on the final key material used as input to the cipher.
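Algorithms 1 and 2 are referenced above but not reproduced in the text. For reference, a standard Python rendering of the KSA and PRGA follows; this is the textbook definition of RC4 rather than a listing from the paper.

```python
def rc4_ksa(key: bytes) -> list:
    """Key scheduling (Algorithm 1): build the internal permutation S
    of the values 0..255 from a key of up to l = 256 bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_prga(S: list, length: int):
    """Pseudo-random generation (Algorithm 2): one output byte z_t per
    step; encryption XORs this keystream with the message."""
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

keystream = list(rc4_prga(rc4_ksa(b"Key"), 16))
```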
Iterative Probabilistic Attacks

Here we discuss three important results on probabilistic attacks that try to reconstruct the internal state of RC4 from a known output sequence {(z_t) : t = 1, ..., T}. In [1], the central idea, proposed by Knudsen et al., was to conveniently use Bayes's theorem to recalculate the probabilities P(j_t|z_t) and P(S_t|z_t) for each t in T. In essence, they worked on obtaining probabilistic information about the two variables j_t, S_t from z_t. The reported attack achieved a low probability of success at a high volume of work. To be successful, it required knowing the values of at least d elements of S_0, with d in {150, ..., 160}. The results presented in [1] are independent of the key scheduling and the key size. For sequences of length T = 256 = 2^8, the volume of work was 2^48 in each iteration.

In [2], Knudsen's method was improved by reducing the number of elements of the permutation that must be known, while maintaining the same workload of 2^48 in each iteration. The essential difference is that a more exact way of recalculating the probabilities was proposed, using the entire output sequence Z instead of just the value z_t, to increase the probability of success. Experiments were reported for RC4 with n = 3 and n = 4. Finally, in [3], Golic and Morgari used the same probabilities as the previous article; the novelty of that work was that it proposed a set of 7 improvements to the probabilistic algorithm itself and estimated the minimum number d of elements of S_0 that must be known a priori so that the attack recovers the correct S_0 permutation, concluding that d in {26, ..., 85}, a substantial improvement compared to d in {150, ..., 160}. The workload remained at 2^48 probabilities that must be calculated at each iteration.

In summary, the three aforementioned articles report that these attacks have a low probability of success when no element of the permutation is known a priori, which is why it is concluded that they are not currently applicable to real RC4. In such articles, the authors model the ignorance of the internal state by assuming an initial uniform probability distribution for S and j. It is essential to note that increasing the precision of the recalculated probabilities reduced the number d of elements of the permutation that must be known a priori. Knudsen et al. obtained d in {150, ..., 160}, while Golic and Morgari reduced it to d in {26, ..., 85}. This result suggests that, by increasing the precision of the calculated probabilities in different ways or by improving the iterative algorithm, it could be possible to achieve d ~ 0, i.e., to recover the complete permutation without knowing any of its elements a priori, which would constitute a serious threat to the security of the RC4.
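Schematically, the recalculation at the heart of these attacks has the shape of a Bayes update over the 256 possible values of j_t. The sketch below shows only this generic shape; the actual attacks of [1-3] iterate coupled updates over j_t and S_t, which are not reproduced here.

```python
import numpy as np

def bayes_update(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One generic update step: posterior over the 256 candidate values
    of j_t given an observed z_t, from a prior P(j_t) and P(z_t | j_t)."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

prior = np.full(256, 1 / 256)                   # initial uniform model
likelihood = np.random.dirichlet(np.ones(256))  # stand-in for P(z_t | j_t)
posterior = bayes_update(prior, likelihood)     # refined belief about j_t
```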
Entropy As a Measure of Uncertainty

Let X be a discrete random variable with possible values x_i and respective probabilities p_i = P(x_i), with i = 1, ..., k. Then, Shannon's discrete entropy function H(p_1, ..., p_k) [20] is defined as

H(p_1, ..., p_k) = - Sum_{i=1}^{k} p_i log_2 p_i.

When p_i = 1/k for all i = 1, ..., k, the maximum uncertainty about the value of X is obtained, so the entropy reaches its maximum value, equal to H_max = log_2 k. When there is an i' such that p_{i'} = 1 and p_i = 0 for all i != i', there is no uncertainty about the value of X, so the entropy reaches its minimum value, equal to H_min = 0.

Definition of the Proposed Test Statistic

In this work, the information that z_t contributes about j_t will be modeled probabilistically, by means of a non-uniform probability distribution, from the knowledge of z_t. To support this proposal, we start from the relationship between z_t and j_t and from the result of [4] on the non-equiprobability of the permutation S at the beginning of the pseudo-random generation algorithm (PRGA) stage. Solving for j_t in the equation z_t = S_t[S_t[i_t] + S_t[j_t]] that defines the output in the RC4 algorithm, we obtain:

j_t = S_t^{-1}[S_t^{-1}[z_t] - S_t[i_t]].    (3)

For t = 0, note that the values of i_0 and z_0 are known, while S_0 is unknown; therefore, the distribution of j_0 is determined by S_0. Taking into account that in [4] it is shown that S_0 does not follow a uniform distribution, it is considered that for t = 0, when z_0 is known, this property of non-uniformity is transferred to j_0. Expression (3) does not allow the calculation of j_t, since S_t is unknown. However, it allows one to argue theoretically for the non-equiprobability of j_t conditional on knowing the value of z_t (due to the non-equiprobability of S).

Denote by t* the smallest value of t such that, for t > t*, S_t follows a uniform distribution. In [21,22], the authors tried to estimate the value of t*. From this definition, and following the same reasoning as for S_t at t = 0 (the beginning of the first iteration), it can be assumed that for t in {0, ..., t* - 1} the conditional probability distribution P(j_t|z_t) is non-uniform. In [22], it is described that it is possible to find biases in the output bytes of RC4 up to t* = 512. Thus, for t in {0, ..., 511} the conditional probabilities P(j_t|z_t) do not fit a uniform distribution.

Basis of the Evaluation Criterion

The criterion will be limited to considering only the variable j_t and its conditional probabilities P(j_t|z_t). This choice was made because error-free knowledge of the sequence {(j_t) : t = 1, ..., T} allows the reconstruction of S_0 [5]. The central idea of the criterion is based on the different values z_t that appear at each time step t in T. This can cause different initial conditional probability distributions P(j_t|z_t) to appear at each t in T. This is the essence of the proposed test statistic; i.e., it takes into account which values z_t appear in the sample and in which places (times t) each of those values z_t appears. Under this condition, two samples with different frequency distributions of z_t will have different vulnerabilities to the attack. Even between two samples with the same frequency distribution of the values z_t, the effectiveness would vary depending on their places of appearance.

Definition of the Test Statistic

To measure these differences, a 256 x 256 matrix was pre-calculated at each time t in {1, ..., T}, in which the columns represent all the possible values z_t = 0, ..., 255 and the rows each value j_t = 0, ..., 255. The element (j, z) of the matrix, corresponding to row j and column z, is the conditional probability P(j_t|z_t) at time t in T. In this way, each column is the distribution of conditional probabilities P(j_t|z_t), which probabilistically represents the information about j provided by the appearance of z at that time. The most interesting question one might ask is: how can one compare two different columns? For example, how can one compare two probability distributions? More exactly, which distribution is associated with the greatest uncertainty about j? To solve this problem, the concept of Shannon's discrete entropy is used. For each column, the entropy is calculated, denoted by H_z^t, which is a direct measure of the uncertainty about j_t when the value z_t associated with that column appears at place t of the sample. It is important to mention that the entropy value characterizes the distribution of the 256 possible values of j_t by a single value, facilitating the comparison between probability distributions. By the entropy's properties, if H_z^t = 0, then the value of the variable j_t is determined by z_t, while if H_z^t = 8, knowledge of z_t does not provide any information about the value of j_t. To evaluate, in a sample of length T, the total uncertainty over j, the entropies associated with the values z_t that appeared at each time t = 1, ..., T are added over all times.
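A minimal sketch of the entropy computation over the columns of such a matrix, using NumPy; the uniform matrix used for the demonstration is a stand-in for the precomputed table of Section 4.

```python
import numpy as np

def column_entropies(P: np.ndarray) -> np.ndarray:
    """Given a 256 x 256 matrix whose column z holds P(j_t | z_t = z),
    return the 256 column entropies H_z^t in bits (0 log 0 = 0)."""
    logs = np.log2(np.where(P > 0, P, 1.0))   # log2(1) = 0 kills 0-terms
    return -(P * logs).sum(axis=0)

# Uniform columns give the maximum H_z^t = log2(256) = 8 bits, i.e. the
# case in which z_t provides no information about j_t.
P_uniform = np.full((256, 256), 1.0 / 256)
assert np.allclose(column_entropies(P_uniform), 8.0)
```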
Then, the expression of the test statistic is

Q = Sum_{t=1}^{T} H_{z_t}^{t},

where H_{z_t}^{t} is the entropy of the column corresponding to the value z_t observed at time t. The expected value mu = E(Q) and the variance sigma^2 = V(Q) of the statistic Q are expressed from the expected values and variances of the conditional entropies H_z^t over the T times, and are given by

mu = Sum_{t=1}^{T} E(H_z^t)

and, assuming that the H_z^t, with t = 1, ..., T, are independent of each other,

sigma^2 = Sum_{t=1}^{T} V(H_z^t).

For each entropy H_z^t that appears as a summand in the expression of Q, its distribution can be approximated by a normal distribution according to the result of [23]. However, this plug-in estimator is known to be biased. Its bias and variance [24,25] are given by

Bias(H^) = -(k - 1)/(2n) + O(1/n^2)

and

V(H^) = (1/n) ( Sum_{i=1}^{k} p_i log^2 p_i - H^2 ) + O(1/n^2),

where n is the sample size. If the terms of the bias expression that include unknown parameters are neglected, then the bias is calculable when the cardinality of the alphabet is known, as in this case; the variance is not, since it depends on the unknown probabilities p_i. In this work, the point estimation of the mean mu = E(Q) and the variance sigma^2 = V(Q) of the Q-statistic was carried out directly, using the expressions

mu^ = Sum_{t=1}^{T} E^(H_z^t)  and  sigma^2^ = Sum_{t=1}^{T} V^(H_z^t),

respectively, based on the point estimation [26] of the means E(H_z^t) and the variances V(H_z^t) of each entropy H_z^t for each time t, with t = 1, ..., T.

The lower the value of the Q-statistic, the less uncertainty about j, and, therefore, the more vulnerable the sample is to these attacks. To evaluate a set of samples of equal length, it is enough to calculate the test statistic for them and sort them in increasing order. To compare samples of different lengths, the statistics obtained can be divided by the lengths of their respective samples, obtaining the average uncertainty per symbol, and compared in the same way.

Decision Criteria Using the Q-Statistic

The Q-statistic is defined as the sum of T random variables H_z^t. Following the results obtained in [23] by Zhang and Zhang, the plug-in entropy estimator used in this work follows an approximately normal distribution. In this way, assuming independence between the random variables H_z^t, the Q-statistic follows a normal distribution N(mu, sigma^2) with mean mu and variance sigma^2, because the sum of independent normal variables is also normal. Then, it is possible to standardize the Q-statistic to a random variable with a standard normal distribution:

Q_{0,1} = (Q - mu)/sigma,

where sigma is the standard deviation of Q. As mentioned above, the permutation of RC4 has biases that are transferred to j and the output z. The appearance of biases in the distribution P(j_t|z_t) produces alterations in the values of H_z^t and consequently in the distribution of the Q-statistic, leading to the appearance of extreme values. These extreme values add a slight asymmetry on the left tail of the distribution of Q, since the alteration in the distribution of P(j_t|z_t) decreases the value of H_z^t and, therefore, Q falls below mu. For this reason, we work with the standard normal distribution N(0, 1) with a single (left) tail: using a significance level alpha, it is concluded that the sequences from RC4 that provide more information about the variable j of the internal state are those with the lowest values of Q, such that Q_{0,1} < Z_alpha.
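The statistic and the decision rule can be sketched as follows, assuming the precomputed 256 x T entropy table of Section 4 is available as an array; SciPy's normal quantile supplies Z_alpha.

```python
import numpy as np
from scipy.stats import norm

def q_statistic(H: np.ndarray, z: np.ndarray) -> float:
    """Q = sum over t of H_{z_t}^t; H is the 256 x T entropy table
    (rows: values of z, columns: times t), z the observed outputs."""
    return float(H[z, np.arange(len(z))].sum())

def is_vulnerable(Q: float, mu: float, sigma: float,
                  alpha: float = 0.01) -> bool:
    """Left-tailed test: flag a sequence when Q_{0,1} < Z_alpha."""
    q01 = (Q - mu) / sigma
    return q01 < norm.ppf(alpha)      # Z_0.01 is roughly -2.326

# Toy usage with a uniform table, where every sequence gives Q = 8 T.
T = 512
H = np.full((256, T), 8.0)
z = np.random.randint(0, 256, size=T)
print(q_statistic(H, z))              # 4096.0
```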
Pre-Computing of Probabilities and Estimation of Entropies

The proposed method is divided into two phases, following an idea similar to a time-memory trade-off (TMTO) attack [27]. The first is the precomputation phase, often called the offline phase, where the probabilities and entropies at each time of T are estimated over each output value z_t. The objective of this phase is to estimate the information that the occurrence of the output z_t provides, in general, about the variable j_t. This phase is executed only once, and its results are then used repeatedly in the next phase for the evaluation of N outputs of the RC4. The second is the real-time or online phase, in which a sample of the RC4 keystream is captured and evaluated against the precomputed tables. Each of the M outputs was generated by initializing the RC4 with one of M random inputs of 20 bytes each. To estimate the conditional probabilities P(j_t|z_t) at each time t in T for all possible values z_t, a pre-calculation of frequencies was performed, from which the entropies were estimated. To make a good estimation of the probabilities, in this work we used M = 262,144,000 outputs of RC4, in order to reliably capture as many of the biases of RC4 as possible, taking into account the size k = 256 of the alphabet.

Frequency Pre-Calculation

To calculate the frequencies, M = 262,144,000 outputs of the RC4 of length T = 512 were generated and, at each time t in T, the value of the pair (j_t, z_t) was recorded, obtaining for each fixed z_t the joint distribution of (j_t, z_t) as j_t varies. The value of M was chosen in order to obtain an expected frequency of

E(f(j_t, z_t)) = 262,144,000/(256 x 256) = 4,000

observations per category, under the hypothesis of equiprobability. A matrix of 256 x 256 was obtained for each time t = 1, ..., 512, which represents each value z_t = 0, ..., 255 per column and, in the rows, each value j_t = 0, ..., 255. Thus, we have in row j, column z, the frequency f(j_t, z_t) of joint appearance of the pair (j_t, z_t) at time t (see Table 1).

Estimation of Joint, Marginal, and Conditional Probabilities

From the joint frequencies f(j_t, z_t) we can obtain the marginal frequency f(z_t) at each time t, estimate the joint probability P(j_t, z_t) and the marginal probability P(z_t), and thus reach an estimate of the conditional probability P(j_t|z_t) through the Bayes formula:

P(j_t|z_t) = P(j_t, z_t)/P(z_t) = f(j_t, z_t)/f(z_t).

From the estimation of these probabilities, a table like Table 2 is obtained, which now contains the conditional probability P(j_t|z_t) for each time t = 1, ..., 512.

Entropy Estimation

For each time t in T, the entropy

H_z^t = - Sum_{j_t=0}^{255} P(j_t|z_t) log P(j_t|z_t)

was estimated using the plug-in estimator [28]. This is the entropy of the distribution of j conditioned on the value z_t of that column. Thus, at each time t in T, 256 values of H_z^t are obtained. The output z_t with the highest entropy H_z^t (the distribution closest to uniform) provides the least information about j. Collecting the results obtained for the T = 512 times, a matrix of 256 x 512 is obtained, which contains per column each value of t = 1, ..., 512 and in the rows each value of z = 0, ..., 255. In each cell (z, t) is the entropy value H_z^t corresponding to row z and column t (see Table 3). Then, to evaluate a particular sample, the value H_z^t corresponding to the place (t, z_t) of the matrix is added by the statistic at each time t. In this way, at each time a random variable is obtained whose expected value is

E(H_z^t) = Sum_{z=0}^{255} P(z_t = z) H_z^t = H(j_t|z_t),

which constitutes the average uncertainty over j at time t when z_t is known, i.e., the conditional entropy.
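The frequency pre-calculation can be sketched as below; the generator is instrumented to expose the internal pointer j_t, and M is scaled down here since the paper's M = 262,144,000 is far beyond a demonstration run.

```python
import os
import numpy as np

def rc4_j_z(key: bytes, length: int):
    """Standard KSA, then a PRGA instrumented to yield (j_t, z_t)."""
    S = list(range(256)); j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield j, S[(S[i] + S[j]) % 256]

T, M = 512, 10_000                  # the paper uses M = 262,144,000
freq = np.zeros((T, 256, 256), dtype=np.int64)  # ~256 MB; shrink T to test
for _ in range(M):
    for t, (j, z) in enumerate(rc4_j_z(os.urandom(20), T)):
        freq[t, j, z] += 1

f_z = freq.sum(axis=1, keepdims=True)           # marginal f(z_t)
P = freq / np.maximum(f_z, 1)                   # P(j_t | z_t) via Bayes
```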
Experimental Evaluation

In the experiments run for the present article, T = 512 times were taken, as in the pre-calculation stage, and N = 10,000 output sequences of the RC4 were generated from N random inputs of 20 bytes each. The value of T can be a variable parameter depending on the sample size required, given the pre-calculation performed. Selecting a higher value of this parameter requires deepening the theoretical comparison between the times and carrying out more experiments. Figure 1 shows the distribution of the Q-statistic calculated over the 10,000 generated sequences. The left skewness illustrates the appearance of biases in the P(j_t|z_t) distribution that decrease the value of Q. These biases manifest through the appearance of extreme values in each sequence. To illustrate this, Figure 2 shows the extreme values of the pre-calculated H_z^t distribution that cause such skewness. As can be seen, three groups of extreme values stand out on the left. The first two groups of extreme values are caused by the first and second output bytes of RC4, which are highly biased and not evenly distributed [21,22]. A third group is caused by the existence of bytes z_t in the rest of the outputs, with t > 1, that have a high correlation with j. The last group comprises the remaining values of H_z^t. Finally, for a significance level alpha = 0.01, it was found that 233 of the 10,000 analyzed output sequences do not satisfy Q_{0,1} > Z_alpha = -2.326. In other words, the output sequences that provide more information about the variable j were detected. In this way, the Q-statistic is able to distinguish, within a set of RC4 output sequences, the ones most vulnerable to iterative probabilistic attacks.

Conclusions

A statistical criterion was proposed which allows for distinguishing, within a set of RC4 output sequences, those most vulnerable to iterative probabilistic attacks. The Q-statistic is based on the conditional entropies of j_t given the value z_t known at each time t. It was experimentally verified that the proposed criterion can determine the existence of a class of output sequences more vulnerable to iterative probabilistic attacks. Future work intends to strengthen the proposed criterion by using the conditional probabilities P(S_t|z_t), as well as to extend the criterion to the case in which the output of RC4 is not known and only the ciphertext obtained with that output is available. Another goal is to investigate the possible fit of the distribution of the Q-statistic to known distributions and to determine theoretically the lowest value of M for which the method is effective.
Measuring Semantic Relatedness between Flickr Images: From a Social Tag Based View

Relatedness measurement between multimedia such as images and videos plays an important role in computer vision, and is a basis for many multimedia-related applications including clustering, searching, recommendation, and annotation. Recently, with the explosion of social media, users can upload media data and annotate the content with descriptive tags. In this paper, we aim at measuring the semantic relatedness of Flickr images. Firstly, four information theory based functions are used to measure the semantic relatedness of tags. Secondly, an integration of tag pairs based on a bipartite graph is proposed to remove noise and redundancy. Thirdly, the order information of tags is added to the measure of semantic relatedness, which emphasizes the tags in high positions. A data set including 1000 images from Flickr is used to evaluate the proposed method. Two data mining tasks, clustering and searching, are performed with the proposed method, which shows its effectiveness and robustness. Moreover, applications such as searching and faceted exploration are introduced using the proposed method, which shows that the proposed method has broad prospects for web-based tasks.

Introduction

Relatedness measurement, especially similarity, between multimedia such as images and videos plays an important role in computer vision. Image similarity is a basis for many multimedia-related applications including image clustering [1], searching [2,3], recommendation [4], and annotation [5]. The relatedness problem involves two aspects: image representation and relatedness measurement. The former needs an appropriate model to preserve the relevant information of an image. The latter requires an effective method to compute the relatedness accurately. In the early stage, relatedness measurement was based on low-level visual features such as texture [6,7], shape [8], and gradient [9]. These visual features are used to represent the effective information of an image. Distance metrics including the Chi-Square distance [10], the Euclidean distance [11], the histogram intersection [12], and the EMD distance [13] are used. Overall, these methods ignore high-level features such as semantic information, which can be understood by machines and people easily. These methods are limited in applications that need semantic-level information.

Recently, with the explosion of community-contributed multimedia content available online, many social media repositories (e.g., Flickr (http://www.flickr.com), Youtube (http://www.youtube.com), and Zooomr (http://www.zooomr.com)) allow users to upload media data and annotate the content with descriptive keywords, which are called social tags. We take Flickr, one of the most popular and earliest photo sharing sites, as an example to study the relatedness measurement between images. Flickr provides an open platform for users to publish their personal images freely. The principal purpose of tagging is to make images more accessible to the public. The success of Flickr proves that users are willing to participate in this semantic context through manual annotations [14]. Flickr uses a promising approach for manual metadata generation named 'social tagging', which requires all the users in the social network to label the web resources with their own keywords and share them with others. The characteristics of social tags are as follows. (1) Ontology free.
Ontology-based labeling first defines an ontology and then lets users label the web resources using the semantic markups in the ontology. Social tagging instead requires all the users in the social network to label the web resources with their own keywords and share them with others. Different from ontology-based annotation, there is no predefined ontology or taxonomy in social tagging. Thus, the tagging task is more convenient for users. (2) User oriented. Users can annotate images with their favorite tags. The tags of an image are determined by the users' cognitive ability. For the same image, different users may give different tags. Each image has at least one tag, and each tag may appear in many different images. (3) Semantic loss. Irrelevant social tags frequently appear, and users typically will not tag all semantic objects in the image, which is called semantic loss. Polysemy, synonymy, and ambiguity are further drawbacks of social tagging.

Based on the above characteristics, we aim at measuring semantic relatedness between images using social tags. It is observed that the correlations between the concepts of images can be divided into four kinds: synonymy, similarity, meronymy, and concurrence, as illustrated in Figure 1. Synonymy means the same object with different names. Similarity denotes that two objects are similar. Meronymy means that two objects follow a part-of relation. Concurrence means that two objects appear together frequently. Overall, the above four correlations can be summarized as semantic relatedness [15]. Semantic relatedness is a more generic concept than semantic similarity. Similar concepts are usually considered to be related because of their likeness (synonymy); dissimilar concepts can also be semantically related, for instance through meronymy or concurrence. In this paper, we focus on measuring semantic relatedness between images. (1) Semantic relatedness follows the cognitive mechanism of people. In [16], the author suggests that the association relation is the basic mechanism of the brain. When people encounter a concept such as 'hospital', they may recall a related concept such as 'doctor' for an appropriate understanding of the original concept. Since the goal of relatedness measurement is to facilitate related applications such as searching and recommendation, the proposed method should follow the user's cognitive mechanism. (2) Semantic relatedness can be used to organize images based on their associations. In the recent literature, such as Linked Open Data (LOD) [17] and Semantic Link Network (SLN) [18-20], resources are managed by their semantic relations. The proposed semantic relatedness measures can be used to build semantic links between resources, especially images, which can be easily applied in real applications.

The major contributions of this paper are summarized as follows. (1) We propose a framework to measure semantic relatedness between Flickr images using tags. Firstly, cooccurrence measures are used to compute the relatedness of tags between two images. Secondly, we transform the integration of tag relatedness into an assignment problem in a bipartite graph, which finds an appropriate matching for the semantic relatedness of images. Finally, a decline factor considering the position information of tags is used in the proposed framework, which reduces the noise and redundancy in the social tags. (2) A real data set including 1000 images from Flickr with ten classes is used in our experiments.
Two evaluation methods, clustering and retrieval, are performed, which shows that the proposed method can measure the semantic relatedness between Flickr images accurately and robustly. (3) We extend the relatedness measures between concepts to the level of images. Since the association relation is the basic mechanism of the brain, the proposed relatedness measurement can facilitate related applications such as searching and recommendation.

The rest of the paper is organized as follows. Section 2 reviews related work on social tags and image similarity measures. The problem definition is introduced in Section 3.

Related Work

In this section, we give two related aspects of the proposed work. Research on social tags is introduced first. Then, we give the related work on image similarity measures.

On Social Tags. In the area of usage patterns and the semantic value of social tags, Golder and Huberman [21] mined usage patterns of social tags based on the delicious (del.icio.us/post) data set. Al-Khalifa and Davis [22] concluded that social tags were semantically richer than automatically extracted keywords. Suchanek et al. [23] used YAGO (http://www.mpi-inf.mpg.de/yago-naga/yago) and WordNet (http://wordnet.princeton.edu) to check the meaning of social tags and concluded that top tags were usually meaningful. Halpin et al. [24] examined why and how the power law distribution of tag usage frequency is formed in a mature social tagging system over time. Besides research on mining social tags, some studies modeled the network structure of social tags. Cattuto et al. [25] investigated the network features of a social tagging system, seen as a tripartite graph, using metrics adapted from classical network measures. Lambiotte and Ausloos [26] described social tagging systems as a tripartite network of users, tags, and annotated items. The proposed tripartite network was projected onto bipartite and unipartite networks to discover its structures. In [27], the social tagging system was modeled as a tripartite graph which extends the traditional bipartite model of ontologies with a social dimension. Recently, many researchers have investigated the applications of social tags in information retrieval and ranking. In [28], the authors empirically study the potential value of social annotations for web search. Zhou et al. [29] proposed a model using latent Dirichlet allocation, which incorporates the topical background of documents and social tags. Xu et al. [30] developed a language model for information retrieval based on the metadata properties of social tags and their relationships to annotated documents. Bao et al. [31] introduced two ranking methods: SocialSimRank, which ranks pages based on the semantic similarity between tags and pages, and SocialPageRank, which ranks returned pages based on their popularity. Schenkel et al. [32] developed a top-k algorithm which ranks search results based on the tags shared by the user who issued the query and the users who annotated the returned documents with the query tags.

On Measuring Image Similarity. Measuring semantic similarity is a basic issue in the computer vision field. Usually, low-level visual features are used for similarity measures. For example, shape features, texture features, and gradient features can be extracted from images. Based on the extracted low-level features, distance metrics such as the Euclidean distance, the Chi-Square distance, the histogram intersection, and the EMD distance are used.
In this paper, the proposed method addresses the problem using semantic-level features, namely social tags. Different from the methods using low-level features, a number of recent papers build image representations based on the outputs of concept classifiers [33]. Our observation is that Flickr provides the social tags supplied by web users, which reflect how people on the internet tend to annotate images. Several previous methods [34] learn object models from internet images. These methods tend to gather training examples using image search results. Besides, their approaches have to alternate between finding good examples and updating the object models in order to be robust against noisy images. On the other hand, some papers [35] use images from Flickr groups rather than search engines, which are claimed to be clean enough to produce good classifiers.

Problem Definition

In this paper, we study the problem of measuring semantic relatedness between images or videos with manually provided social tags. Here, a social tag refers to a concept provided by users that is semantically related to the content of an image or a video. The input of the proposed method is a pair of images or videos with social tags. The goal of the proposed method is to identify the semantic relatedness between two images or videos. Figure 2 shows an illustration of a pair of images from Flickr with social tags. These two images are about 'Big Ben' and the 'London eye'. These two images may be dissimilar according to traditional similarity measurement, since they share little common low-level visual similarity. But these two images are semantically related, since both are famous sights of London. With the proposed method, we can compute their semantic relatedness even though they share few similar visual features.

Basic Definitions. We first introduce three important definitions of this paper: the social tag set of an image, the semantic relatedness between tags, and the semantic relatedness between images.

Definition 1 (social tag set of an image). The social tag set of an image i, denoted by T(i), is the set of tags provided by users for that image:

T(i) = {t_1, t_2, ..., t_n}.

For example, in Figure 2, the tags of the right image are 'London' and 'eye' rather than 'London eye'. Since Flickr provides the related tags of each image, we simply download the tags from Flickr. We do not perform any NLP operations on the tags.

Definition 2 (semantic relatedness between tags). The semantic relatedness between tags, denoted by sr(t_1, t_2), is the expected correlation of a pair of tags t_1 and t_2.

Definition 3 (semantic relatedness between images). The semantic relatedness between images, denoted by sr(i_1, i_2), is the expected correlation of a pair of images i_1 and i_2.

The range of sr(t_1, t_2) and sr(i_1, i_2) is from 0 to 1; a higher value indicates stronger semantic relatedness between the tags or images. Please notice that the definition of sr(i_1, i_2) can also be extended to videos with social tags.

Basic Heuristics. Based on common sense and our observations on real data, we have five heuristics that serve as the base of our computation model.

Heuristic 1. Users usually annotate an image with distinct tags, so no weighting scheme over repeated tags is needed.

Different from writing sentences, users usually annotate an image with different tags. For example, the possibility of using the tags 'apple apple apple' for an image is very low. Therefore, in this paper, we do not employ any weighting scheme for tags such as tf-idf [36].

Heuristic 2. The order of the tags may reflect their correlation with the annotated image.
Different tags reflect different aspects of an image. According to Heuristic 1, the weight of a tag with respect to the image cannot be obtained. Fortunately, the order of the tags can be obtained, since users provide tags one by one.

Heuristic 3. Different users may give different tags for the same image.

For example, users may give tags such as 'apple iPhone' or 'iPhone4 mobile phone' for the same image about the iPhone. It is hard to say which annotation is better, even though the latter annotation has three tags.

Heuristic 4. Usually some tags may be redundant for annotating an image.

Of course, users may give similar tags for an image. For example, the tag pair 'apple iPhone' may be redundant, since 'iPhone' is semantically very similar to 'apple'.

Heuristic 5. Usually some tags may be noisy for annotating an image.

Users may give inappropriate or even false tags for an image. For example, the tag 'iPhone' is false for an image about the iPod.

Computation Model

In this section, we propose the computation model for measuring semantic relatedness between images. Based on the above five heuristics, the social tags provided by users are used in our computation model. Overall, the proposed computation model is divided into three steps. (1) Tag relatedness computation. In this step, based on Heuristic 1, the relatedness of all tag pairs between two images is computed. (2) Semantic relatedness integration. In this step, based on Heuristics 3, 4, and 5, the tag pair relatedness values are integrated into the semantic relatedness of the two images. (3) Tag order revision. In this step, based on Heuristic 2, the image relatedness from step 2 is revised. Table 1 shows the variables and parameters used in the following discussion. Figure 3 illustrates an overview of the proposed computation model.

Table 1. Variables and parameters used in the following discussion:
sr(t_1, t_2) - semantic relatedness of two tags
sr(i_1, i_2) - semantic relatedness of two images
c(t) - page counts of a tag
C(T(i)) - set of page counts of the tags of an image
pos(t) - position information of a tag

Tag Relatedness Computation. According to Definition 1, an image can be represented as a set of tags provided by users. As for the semantic relatedness of a pair of images, we can measure the semantic relatedness between the tags of these images. For example, for two images with tags 'apple iPhone' and 'iPod Nano', we can measure the semantic relatedness between these tags. Since the number of occurrences of each tag is usually one, according to Heuristic 1, the semantic relatedness between tags can be computed without considering their weight. Many different methods for measuring semantic relatedness between concepts have been proposed, which can be divided into two groups [37]: taxonomy-based methods and web-based methods. Taxonomy-based methods use information theory and a hierarchical taxonomy, such as WordNet, to measure semantic relatedness. On the contrary, web-based methods use the web as a live and active corpus instead of a hierarchical taxonomy. In the proposed computation model, each tag can be seen as a concept with explicit meaning. Thus, we use equations based on the cooccurrence of two concepts to measure their semantic relatedness. The core idea is that 'you shall know a word by the company it keeps' [38]. In this section, four popular cooccurrence measures (i.e., Jaccard, Overlap, Dice, and PMI) are used to measure semantic relatedness between tags. Besides the cooccurrence measures, the page counts of each tag from a search engine are used. Page counts mean the number of web pages containing the query q. For example, the page counts of the query 'Obama' in Google (http://www.google.com) are 1,210,000,000 (the data was obtained on 9/28/2012). Moreover, the page counts for the conjunction query 'a b' can be considered as a measure of cooccurrence of the queries a and b. For the remainder of this paper, we use the notation c(t) to denote the page counts of the tag t in Google. However, the respective page counts for the tag pair a and b alone are not enough for measuring semantic relatedness; the page counts for the conjunction query 'a b' should also be considered. For example, when we query 'Obama' and 'United States' together in Google, we can find 485,000,000 web pages; that is, c(Obama ∩ United States) = 485,000,000.

The four cooccurrence measures (i.e., Jaccard, Overlap, Dice, and PMI) between two tags a and b are as follows:

Jaccard(a, b) = c(a ∩ b) / (c(a) + c(b) - c(a ∩ b)),    (2)

Overlap(a, b) = c(a ∩ b) / min(c(a), c(b)),    (3)

Dice(a, b) = 2 c(a ∩ b) / (c(a) + c(b)),    (4)

where a ∩ b denotes the conjunction query 'a b'. According to probability and information theory, the mutual information (MI) of two random variables is a quantity that measures the mutual dependence of the two variables. Pointwise mutual information (PMI) is a variant of MI (see (5)):

PMI(a, b) = log( N c(a ∩ b) / (c(a) c(b)) ) / log N,    (5)

where N is the number of web pages indexed by the search engine, which is set to N = 10^11 according to the number of indexed pages reported by Google. Through (2)-(5), we can compute the tag relatedness as follows. (1) Extract the tags of the two images i_1 and i_2, which are denoted by T(i_1) and T(i_2). (2) Issue the tags from T(i_1) and T(i_2) as queries to the web search engine (in this paper, we choose Google for its convenient API (http://developers.google.com)); the page counts are denoted by c(t) for each tag t and c(a ∩ b) for each tag pair. (3) Compute the semantic relatedness between each tag pair from T(i_1) and T(i_2) by (2)-(5). For example, if we use PMI to compute tag semantic relatedness, the equation is

sr(a, b) = log( N c(a ∩ b) / (c(a) c(b)) ) / log N.

From the above steps, the tag relatedness can be computed, which is denoted as a triple <a, b, sr(a, b)>. In the next section, we will give a detailed analysis for choosing the best measure from (2)-(5). Overall, the page counts of each tag are first retrieved; then cooccurrence-based measures are used to compute the semantic relatedness between tags. The reasons for using page-count-based measures are as follows. (1) Appropriate computation complexity. Since the relatedness between each tag pair of two images must be computed, the proposed method must have low complexity. Recently, web search engines such as Google provide APIs for users to obtain the page counts of each query. The web search engine gives an appropriate interface for the proposed computation model. (2) Explicit semantics. A tag given by users may not be a correct concept in a taxonomy. For example, users may give the tag 'Bling Bling' for an image about a lovely girl. The word 'Bling' cannot be found in many taxonomies such as WordNet. The proposed method uses a web search engine as an open intermediary, so the explicit semantics of newly emerging concepts can be obtained from the web easily.
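A minimal sketch of the four measures computed from page counts follows; the count for 'United States' alone is a hypothetical stand-in, since the paper reports only the counts for 'Obama' and for the conjunction.

```python
import math

N = 1e11   # pages indexed by the engine, as assumed in the text

def jaccard(ca: float, cb: float, cab: float) -> float:
    return cab / (ca + cb - cab)

def overlap(ca: float, cb: float, cab: float) -> float:
    return cab / min(ca, cb)

def dice(ca: float, cb: float, cab: float) -> float:
    return 2 * cab / (ca + cb)

def pmi(ca: float, cb: float, cab: float) -> float:
    # Normalised by log N so the score stays roughly within [0, 1].
    return math.log((N * cab) / (ca * cb)) / math.log(N)

# c_a and c_ab are the paper's example counts (queried in 2012); c_b is
# a hypothetical stand-in for the count of 'United States'.
c_a, c_b, c_ab = 1_210_000_000, 2_500_000_000, 485_000_000
print(round(pmi(c_a, c_b, c_ab), 3))
```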
Semantic Relatedness Integration. In Section 4.1, we computed the tag pair relatedness of two images. Obviously, the tag pair relatedness of two images i_1 and i_2 can be treated as a weighted bipartite graph, which is denoted by

G = (T(i_1) ∪ T(i_2), E, w),    (9)

where the edge weights w are the tag pair relatedness values. Based on (9), we transform the semantic relatedness integration of all tag pairs into the assignment problem in a bipartite graph: we want to find a best matching of the bipartite graph G. A matching M is defined as M ⊆ E such that no two edges in M share a common end vertex. An assignment in a bipartite graph is a matching M such that each node of the graph has an incident edge in M. Suppose that the set of vertices is partitioned into two sets V_1 and V_2, and that the edges of the graph have an associated weight given by a function w : (V_1, V_2) → [0, 1]. The function maxRel : (G, V_1, V_2) → [0, 1] returns the maximum weighted assignment, that is, an assignment such that the average of the weights of the edges is highest. Figure 4 shows a graphical representation of the semantic relatedness integration, where the bold lines constitute the matching M. Based on the expression of the assignment in bipartite graphs, we have

sr(i_1, i_2) = maxRel(G) = max_M ( Sum_{(a,b) in M} sr(a, b) ) / min(|T(i_1)|, |T(i_2)|).    (11)

Applying the assignment problem to our context, the variables V_1 and V_2 represent the two images whose semantic relatedness is to be computed; that is, V_1 and V_2 are composed of the tag sets T(i_1) and T(i_2). |T(i_1)| > |T(i_2)| means that the number of tags in T(i_2) is lower than that of T(i_1). According to Heuristic 3, we divide the result of the maximization by the lower cardinality of T(i_1) or T(i_2). In this way, the influence of the number of tags is reduced, and the semantic relatedness of two images is symmetric. Besides the cardinality of the two tag sets T(i_1) and T(i_2), the maxRel function is affected by the relatedness between each pair of tags. According to Heuristics 4 and 5, redundancy and noise should be avoided. In the maxRel function, a one-to-one map is applied to the tags of T(i_1) and T(i_2). Thus, the proposed maxRel function varies with respect to the nature of the two images. Adopting the proposed maxRel function, we are sure to find the global maximum relatedness that can be obtained by pairing the elements of the two tag sets. Alternative methods are able to find only a local maximum, since they scroll through the elements of the first set and, after calculating the relatedness with all the elements of the second set, select the one with the maximum relatedness. Since every element in one set can be connected to at most one element in the other set, such a procedure finds only a local maximum, as it depends on the order in which the comparisons occur. For example, considering the example in Figure 4, tag a_1 is paired to b_1 (weight = 1.0). But when analyzing a_3, the maximum weight is with b_2 (weight = 0.9). This means that a_2 can no longer be paired to b_2, even if that weight is maximal for it, since b_2 is already matched to a_3. As a consequence, a_2 is paired to b_3, and the average of the selected weights is (1.0 + 0.3 + 0.9)/3 = 0.73, which is considerably lower than using maxRel, where the average of the weights is (1.0 + 0.8 + 0.7)/3 = 0.83. Overall, the cardinality of the two tag sets is used to follow Heuristic 3, the one-to-one mapping of tag pairs is used to follow Heuristics 4 and 5, and the maxRel function is used to find the best semantic relatedness integration of two images.
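The maximum weighted assignment can be computed with the Hungarian algorithm. A sketch using SciPy follows; the weight matrix is chosen to be consistent with the worked example above, since the paper does not give the full matrix of Figure 4.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_rel(W: np.ndarray) -> float:
    """W[a, b] = sr(t_a, t_b) for tags of image 1 (rows) and image 2
    (columns). Returns the maximum-weight assignment averaged over the
    smaller tag set, as in Heuristic 3."""
    rows, cols = linear_sum_assignment(W, maximize=True)
    return W[rows, cols].sum() / min(W.shape)

# Weights consistent with the worked example: the optimal matching
# averages (1.0 + 0.8 + 0.7) / 3 ~ 0.83, beating the greedy 0.73.
W = np.array([[1.0, 0.2, 0.1],
              [0.3, 0.8, 0.3],
              [0.2, 0.9, 0.7]])
print(round(max_rel(W), 2))
```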
The function maxRel : (G, V_1, V_2) → [0, 1] returns the maximum weighted assignment, that is, an assignment such that the average of the weights of the edges is highest. Figure 4 shows a graphical representation of the semantic relatedness integration, where the bold lines constitute the matching M. Based on the expression of the assignment in bipartite graphs, we have

sr(I_1, I_2) = ( Σ_{⟨t_i, s_j⟩ ∈ M*} w(t_i, s_j) ) / min(|T(I_1)|, |T(I_2)|),  (11)

where M* is the matching returned by maxRel. Applying the assignment problem in bipartite graphs to our context, the variables V_1 and V_2 represent the two images whose semantic relatedness is to be computed; that is, V_1 and V_2 are composed of the tags T(I_1) and T(I_2). |T(I_1)| > |T(I_2)| means that the number of tags in T(I_2) is lower than that of T(I_1). According to Heuristic 3, we divide the result of the maximization by the lower cardinality of T(I_1) and T(I_2). In this way, the influence of the number of tags is reduced, and the semantic relatedness of two images is symmetric. Besides the cardinality of the two tag sets T(I_1) and T(I_2), the maxRel function is affected by the relatedness between each pair of tags. According to Heuristics 4 and 5, redundancy and noise should be avoided. In the maxRel function, a one-to-one map is applied to the tags of T(I_1) and T(I_2). Thus, the proposed maxRel function varies with respect to the nature of the two images. Adopting the proposed maxRel function, we are sure to find the global maximum relatedness that can be obtained by pairing the elements in the two tag sets. Alternative methods are able to find only a local maximum, since they iterate over the elements in the first set and, after calculating the relatedness with all the elements in the second set, select the one with the maximum relatedness. Since every element in one set may be connected to at most one element in the other set, such a greedy procedure finds only a local maximum, because it depends on the order in which the comparisons occur. For example, considering the example in Figure 4, t_1 will be paired to s_1 (weight = 1.0). But when analyzing t_3, the maximum weight is with s_2 (weight = 0.9). This means that t_2 can no longer be paired to s_2, even though that weight is maximal, since s_2 is already matched to t_3. As a consequence, t_2 will be paired to s_3, and the average of the selected weights will be (1.0 + 0.3 + 0.9)/3 = 0.73, which is considerably lower than with maxRel, where the average of the weights is (1.0 + 0.8 + 0.7)/3 = 0.83. Overall, the cardinality of the two tag sets is used to follow Heuristic 3; the one-to-one map of tag pairs is used to follow Heuristics 4 and 5; and the maxRel function is used to find the best semantic relatedness integration of two images.

Tag Order Revision. According to Heuristic 2, the order of tags should be considered when computing the semantic relatedness between two images. Intuitively, the tags appearing in the first positions may be more important than the later tags. Some research [39] suggests that people tend to select popular items as their tags; meanwhile, the most popular tags are indeed the "meaningful" ones. In this section, the maxRel function proposed in Section 4.2 is revised to consider the order of tags: the relatedness of tag pairs in high positions is enhanced. In addition, we impose a constraint schema.

Schema 1 (identical tag pruning). The identical tag pairs of two images I_1 and I_2 should be pruned in the maxRel function. In other words, the semantic relatedness of a tag appearing in both images is set to 0.
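As a concrete illustration, here is a minimal Python sketch of the Section 4.2 integration with the pruning schema above. It assumes SciPy is available, that sr is any tag-relatedness function in [0, 1] (e.g., the PMI score sketched earlier), and the demo tags and weights are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_rel(tags1, tags2, sr):
    # Build the bipartite weight matrix w : T(I1) x T(I2) -> [0, 1].
    w = np.zeros((len(tags1), len(tags2)))
    for i, t in enumerate(tags1):
        for j, s in enumerate(tags2):
            # Schema 1: identical tag pairs are pruned (relatedness 0),
            # so the score measures relatedness, not plain similarity.
            w[i, j] = 0.0 if t == s else sr(t, s)
    # Hungarian method: globally optimal one-to-one assignment
    # (Heuristics 4 and 5: no tag is reused, damping redundancy/noise).
    rows, cols = linear_sum_assignment(w, maximize=True)
    # Heuristic 3: normalize by the smaller tag-set cardinality so the
    # measure is symmetric and insensitive to the number of tags.
    return w[rows, cols].sum() / min(len(tags1), len(tags2))

# Hypothetical usage with a stub relatedness function.
demo_sr = lambda t, s: {("apple", "ipod"): 0.8}.get((t, s), 0.5)
print(max_rel(["apple", "iphone"], ["ipod", "nano"], demo_sr))
```

On the worked example of Figure 4, this globally optimal assignment recovers the 0.83 average rather than the greedy 0.73.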
We add a decline factor to the maxRel function; the detailed steps are as follows. (1) According to the maxRel function in Section 4.2, the best matching tag pairs are selected, denoted by M = {⟨t_i, s_j⟩}. These selected tag pairs form the best matching of the bipartite graph between images I_1 and I_2. (2) Compute the position information of each tag, denoted by pos(t). (3) Add the position information of each tag to (11), where it can be seen as a decline factor:

sr(I_1, I_2) = Σ_{⟨t_i, s_j⟩ ∈ M} pos(t_i) · sr(t_i, s_j) · pos(s_j),  (14)

We again consider the example in Figure 4; according to (14), the semantic relatedness is revised by weighting each selected tag pair with the positions of its tags. Besides adding the decline factor to the maxRel function, we also add the constraint schema of identical tag pruning. This schema is needed to keep the proposed measure a relatedness measure: if we did not prune the identical tag pairs of two images, the proposed method would turn into a similarity measure. For example, the cosine similarity [36] between two tag vectors counts the number of identical elements of the two vectors. The overall algorithm of the proposed computation model is presented in Algorithm 1.

Experimental Results

In this section, we evaluate the results of using the proposed method for relatedness measurement. In Section 5.1, we introduce the data set used for the evaluation. In Section 5.2, we determine which co-occurrence function to use for tag relatedness measurement. In Sections 5.3 and 5.4, clustering and retrieval tasks are used to evaluate the proposed method.

The Data Sets. We choose Flickr groups as the resources for building the data sets. Users of online photo-sharing sites like Flickr have organized many millions of photos into hundreds of thousands of semantically themed groups. These groups expose implicit choices that users make about which images are similar. Flickr group membership is usually less noisy than Flickr tags because images are screened by group members. We downloaded 1000 images from ten groups. These ten groups can be divided into two classes. The first class includes five groups: car, phone, flower, dog, and boat. The second class consists of another five groups: Louis Vuitton, Dior, Gucci, Cartier, and Chanel. Of course, these images are selected by humans, which reduces the noise of the data set. The reason why we choose two classes of groups is that we want to test the accuracy of the proposed method against the semantic relatedness of the data set. The semantic relatedness of the second class is higher than that of the first class, since the second class is all about luxury brands; for example, almost all of these brands produce handbags. Thus, if the proposed method does well on these groups, we may say that it can measure the semantic relatedness between Flickr images accurately and robustly. Table 2 gives the detailed information of the data set, and Table 3 gives some selected tags from group 2.

Relatedness Function Selection. In Section 4.1, four co-occurrence measures (i.e., Jaccard, Overlap, Dice, and PMI) were given for measuring relatedness between tags. In [40], Rubenstein and Goodenough proposed a data set containing 28 word pairs rated by a group of 51 human subjects, which is a reliable benchmark for evaluating semantic similarity measures. The higher the correlation coefficient against the R-G ratings, the more accurate the method for measuring semantic similarity between words. Figure 5 gives the correlation coefficients of the four functions against the R-G test set.
From Figure 5, we can say that PMI performs best for relatedness measurement, having the highest correlation coefficient. Thus, in the later experiments, we select PMI as the relatedness measure between tags.

Evaluation on Image Clustering. In this section, we evaluate the correctness of using tag order. In Section 4.3, we added the position information of each tag to the semantic relatedness measure; the tags in high positions are treated as the major elements for semantic relatedness measurement. We evaluate the use of tag order via a clustering task. We employ the proposed semantic relatedness of images in a k-means [41] clustering model. Since the k-means model depends on the initial points, we randomly select the core points 100 times. We evaluate the effectiveness of the clustering with three quality measures: F-measure, Purity, and Entropy [41]. We treat each cluster as if it were the result of the proposed method and each class as if it were the desired set of images. Generally, we would like to maximize the F-measure and Purity and minimize the Entropy of the clusters to achieve a high-quality clustering. Moreover, we compare the clustering results of the proposed method with and without tag order. Figures 6 and 7 give the clustering results on the group 1 and group 2 data sets.

Table 3 (selected tags from group 2). Louis Vuitton: "Louis Vuitton" "Alma", "Louis Vuitton" "Tivoli", "Louis Vuitton" "Bolsas", "LV" "Multicolore". Dior: "DIOR" "lipstick" "makeup", "Dior" "Diorskin Nude" "Tan Sun Powder", "Dior" "Makeup" "Palette", "Dior" "Addict 2", "Dior" "Jadore" "Perfume". Gucci: "Gucci" "Leather Belts", "Gucci" "Trainers", "Gucci" "Jolie Leopard" "Orange" "Replica", "Gucci" "Handbags", "Gucci" "Cruise". Cartier: "Cartier" "Pasha" "Chronograph", "CARTIER" "Love Bracelet", "Cartier" "Santos Galbee" "Calibre", "Cartier" "Cartier Watch" "Tank Francaise". Chanel: "Chanel" "Coco Noir", "Chanel" "Chanel Riva", "Chanel nail polish" "Coco Mademoiselle", "Chanel" "No 5" "Chance" "Chanel".

From Figures 6 and 7, we can conclude the following. (1) The proposed method performs better than cosine-based clustering: all three metrics (F-measure, Purity, and Entropy) of the proposed method are better than those of cosine-based clustering. This may be caused by an inherent feature of the proposed method: it is based on semantic relatedness rather than on the term overlap underlying cosine-based clustering. If the tags of two images do not overlap at all, cosine-based clustering may be unavailable. (2) The schema of using tag order is effective: the three metrics are best when tag order is used. The position information reflects the importance of each tag, and the proposed method emphasizes the tags in high positions, which raises the clustering performance. (3) The proposed method is robust across data sets: it performs well on both the group 1 and group 2 data sets. It is worth noting that the difference between the proposed method and the cosine method on group 2 is larger than on group 1. The reason is that the semantic correlation within group 2 is stronger than within group 1. In other words, the performance of the proposed method relies on the semantic correlation of the classes in the data sets: the stronger the semantic correlation between classes, the better the proposed method performs.

Evaluation on Image Searching.
In this section, we evaluate the proposed method on a query-based image searching task. Five queries from group 2 are selected as the test set: "Louis Vuitton", "Gucci", "Chanel", "Cartier", and "Dior". These queries are searched in Flickr, and the top 50 images are used as the data set. Moreover, we remove the query terms from the tags of each image; for example, the tag "Cartier" is removed from the top 50 images returned for the query "Cartier". The reason for this operation is that the proposed method is based on semantic relatedness rather than co-occurrence. We choose cut-off point precision to evaluate the proposed method on image searching. The cut-off point precision P(k) is the percentage of correct results among the top k returned results. We compute P(1), P(5), and P(10) on the group 2 test set. Table 4 lists the comparison of the cut-off point precision between the proposed method and Flickr. From the experimental results, we can conclude the following. (1) The proposed method performs better than Flickr: in Table 4, the P(1), P(5), and P(10) of the proposed method are higher than those of Flickr. The experimental results confirm the effectiveness of the proposed method on the image searching task. (2) The proposed method can handle the relatedness-search problem: it can measure the semantic relatedness of two images robustly and correctly. (3) The proposed method can support faceted exploration of image search. Faceted exploration of search results is widely used in search interfaces for structured databases, and recently it has also appeared in online search engines in the form of search assistants. Since the proposed method can measure the semantic relatedness of two images, given a search query we can select the related images for faceted search.

Conclusions

This paper discusses semantic relatedness measures systematically, puts forward a method to measure the semantic relatedness of two images based on their tags, and justifies its validity through experiments. The major contributions are summarized as follows. (1) We propose a framework to measure semantic relatedness between Flickr images using tags. First, co-occurrence measures are used to compute the relatedness of tags between two images. Second, we transform the tag relatedness integration into the assignment problem in a bipartite graph, which finds an appropriate matching for the semantic relatedness of images. Finally, a decline factor considering the position information of tags is used in the proposed framework, which reduces the noise and redundancy in the social tags. (2) A real data set of 1000 Flickr images in ten classes is used in our experiments. Two evaluation tasks, clustering and searching, are performed, which shows that the proposed method can measure the semantic relatedness between Flickr images accurately and robustly. (3) We extend relatedness measures from the level of concepts to the level of images. Since the association relation is a basic mechanism of the brain, the proposed relatedness measurement can facilitate related applications such as searching and recommendation.
7,946.4
2014-02-23T00:00:00.000
[ "Computer Science" ]
The Feminist and the Bible: Hermeneutical Alternatives

INTRODUCTION

Literature on feminist method is growing at such a pace that it has rather quickly become an extended field of inquiry in itself, of which the present volume is adequate testimony.¹ It is not the purpose of this chapter to attempt a documented history of the feminist movement as it deals with biblical literature. For that reason whatever documentation is given is intended to be not exhaustive but representative. Rather, the intent of the present essay is to explore some of the ways in which feminists, in particular feminist biblical scholars, are meeting the challenge of adequately and sensitively interpreting biblical texts and the biblical tradition in the light of experience. Nor is it my intention to attribute superiority to anyone, but rather to 'objectively' describe and interpret each, bearing in mind of course the axiom of contemporary hermeneuticists that no interpretation is purely objective but is always conditioned by the presuppositions and prejudices of the interpreter.

With that in mind, it would probably be no waste of paper to briefly set out the presuppositions and prejudices that I consciously bring to the undertaking. The careful reader will no doubt detect others of which I am not aware. Thus the interpretive process goes on. First, I belong to a large institutional church with an amazing amount of diversity in its membership and a firmly entrenched patriarchal leadership. Although that should not determine the direction of my critical scholarship, it inevitably affects my experience; and the two cannot be totally separated. Second, I take note that the very fact that we spend so much time and energy wrestling with biblical texts and traditions, the very fact that there is such a thing as 'biblical scholarship', means whether we care to acknowledge it or not that the Bible is more for us than a curious piece of history. It is part of our own living history, a power to be reckoned with in the communities of faith to which we belong or from which our students and friends come. Even those who assume a rejectionist stance toward the Bible admit by their position that there is not much middle ground; indifference to the Bible is a difficult path for the serious student of Christianity to tread.

Third, I judge as the result of my own investigation and reflection that it is unnecessary to throw out the baby with the bath water. The biblical tradition contains enough of lasting and universal value that it is worth salvaging, in spite of the tremendous problems entailed in the salvage operation. Fourth, issues such as authority, inerrancy, revelation, and inspiration must be handled with careful nuances, their theoretical frameworks constructed not in the abstract but in constant interplay with the lived experience of whole communities of faith. Finally, it is my conviction that the elusive entity that we call 'tradition' is the all-encompassing movement that contains within itself the biblical text and the factors leading to its production. It contains as well the reflective interpretation of that articulation in subsequent generations, including our own, as persons in concrete life situations bring the text to bear on their own experience and, no less important, their experience to bear on the text. In other words, tradition is not a boundary but an open road that connects us with the past and points us in the direction of the future.
A discussion of feminist alternatives in biblical interpretation cannot be undertaken in isolation from either recent currents in feminism or in biblical interpretation; hence a few summary remarks about both by way of establishing a context for what follows. Rosemary Ruether (1983:41-45, 216-232) has admirably summed up the three major directions in contemporary feminism as liberal, socialist/Marxist, and romantic/radical.

Liberal feminism takes the model of progress within a capitalist society and works for political reform, equal rights, and improved working conditions, with the assumption that the present social and economic system of Western countries is still redeemable and reformable. It thus carries within it the tendency to classism, to the identification of the rights of upper-middle-class white women with 'women's rights', to the neglect of the plight, interests, and needs of women who are caught in the economically oppressive web of the working classes, minorities, and the poor. Much of the accusation that has been leveled against the feminist movement by working and minority women has identified feminism with this 'liberal feminism', which seems to have little to offer them. It is an indictment of the middle-class feminists of recent years for their failure to see beyond their own horizons.

Socialist or Marxist feminism according to Ruether follows upon the Marxist assumption that full equality can be achieved only by the full integration of labor and ownership; thus only by the complete assimilation of women into the work force, which is at the same time in control of the means of production, can the exploitation of women cease. In the socialist experiences that have so far been tried, however, such has not been the case, because the patriarchal structure of the family has not given way to an egalitarian one commensurate with the political philosophy upon which the public sphere is based. Hence women in socialist societies find themselves under the double burden of making a full contribution in the work force while continuing to be the major source of domestic labor. The only apparent way out of this dilemma is to restructure completely the reproductive and preservative functions of human society in other ways than that of the traditional family, an extreme to which few societies are willing to go.
If liberal and socialist feminism assume that the way to equality is through full participation of women in the public sphere, in what has traditionally been the male world, romantic feminism does just the opposite. It exults in the differences between men and women, upholds the feminine way as innately superior, and glorifies the so-called feminine qualities of sensitivity, creativity, intuition, bodiliness, et cetera as the true female self that the predominant rational, hierarchical, exploitative masculine society consistently tries to repress by patriarchal domination. The reformist branch of romantic feminism sees as its mission the transformation of the morally and aesthetically inferior masculine world through infusion of superior feminine values. The radical branch of romantic feminism proclaims the necessity of total withdrawal from the male world in a separatist stance that will be ultimately the only way to save women for themselves. In either case the resulting end product is simply a reversal of the domination and alienation that are seen to be the major problems within a patriarchal structure. The oppressed will become the oppressors, and no advance toward mutuality will be realized.

In Ruether's schema, a liberation-hermeneutical feminism would represent a fourth type of feminism, which attempts to incorporate the best elements of the other three: the concern for human development and societal egalitarianism of liberal feminism, the social critique and dedication to building a just society of socialist feminism, and the sensitivity to deeper human values of romantic feminism. A true liberation feminism would thus be able to transcend the limits of the other three types. Its focus on the experience of the oppressed would free it from the bourgeois complacency to which liberal feminism is prone. Its vision of a new society would abolish the patriarchalism which socialist feminism has not succeeded in eliminating. Finally, a true liberation feminism would struggle for the liberation not only of women but of all human persons in a community of mutuality in which neither mode of being, 'masculine' or 'feminine', consistently dominates. It is this liberation hermeneutic which makes the strongest claim for biblical grounding, and, as we shall see below, this may be one of its most problematic aspects.

Because we are part of our recent history, because we are involved in the process of creating that history, and because any contemporary hermeneutic must be as deeply grounded in experience as it is in theory, these alternatives in the feminist movement at large provide the basic categories within which biblical feminists also operate, whether we are aware of it or not. While not fitting neatly within the same slots, feminist biblical interpretation raises very similar questions and faces many of the same dilemmas, as we shall see below. Contemporary critical feminism attempts to confront and address the problems inherent in all four of the approaches outlined above.
If critical feminism is at the point of breaking through an impasse into a new consciousness ready to try new alternatives, the same can be said of contemporary biblical method. After nearly a century of domination by the historical-critical method, its limits and inherent prejudices are becoming widely accepted. Although the method itself will continue to hold an important and fundamental place in biblical studies for the foreseeable future, it can no longer be the method, the criterion to which all interpretation must be submitted. Current biblical studies demonstrate a diversity of methods, some new, some of long duration with only minimal recognition: literary criticism, structuralism, social and sociological interpretation, and the various forms of spiritual and psychological interpretation are all adapted from other disciplines, thus giving evidence of the growing awareness that biblical interpretation cannot function in isolation from the social and intellectual world of the interpreter, a world that is too pluriform and complex to be served only within the limited boundaries of historical-critical exegesis. Just as the varieties of feminist critique challenge traditional patriarchy, so too the varieties of biblical method challenge traditional exegesis and demonstrate that its claim to be 'value-free' is simply false.

Others have previously undertaken the task of examining the various methods for approaching biblical material about women with a view to integrating it into a relevant contemporary hermeneutic. For example, Sakenfeld (1981) summarized the alternatives as the following: (1) focusing on texts that portray women in a positive way to counteract the devastatingly negative texts 'against' women; (2) rejecting the Bible altogether as not authoritative and/or useful; (3) looking more broadly to biblical texts that lend themselves to a liberation perspective; (4) taking a culturally comparative approach to analyze the intersection of the stories of ancient and modern women living in patriarchal culture. To these alternatives could be added a fifth: standing back from the specific focus on women as in (3) above, but concentrating on the broader issue of inclusive biblical anthropology, as explored in Adela Yarbro Collins (1978).
Essentially the five options listed above can be reduced to three: focus on women, (1) and (4); situate women within a broader context, (3) and (5); give up on the Bible altogether as hopeless, (2). Teaching and research on women in the Bible in recent years has played on all five. In the following remarks I would like to suggest yet another way of examining the alternatives for feminist biblical hermeneutics, one that I believe is thematically more inclusive and deals with all options previously discussed. Some may question use of the word 'feminist' for some of these alternatives, but the term is to be taken here in its broadest sense, as concern for the promotion and dignity of women in all aspects of society, and in this context especially inasmuch as that promotion and dignity are conditioned by biblical interpretation. Some too may question the appropriateness of 'hermeneutic' as a classification in some cases. Again, I am taking the term in its broadest sense, as a principle of interpretation, while still confining it to interaction with biblical data. Others may consider that one or other of what are proposed here are hardly acceptable as alternatives or options, either within the range of what is life-giving to women or within the limits of possible responses that would remain true to theological premises or contemporary assumptions. I would argue that such judgments are subjective and that as long as a significant number of women in or on the margins of the Western Christian tradition find one or other of these alternatives to be their way of functioning meaningfully within their context - as indeed they do in every case - it is a valid alternative for those who would take it. Bear in mind once again that what follows is description, not advocacy. (These considerations are deliberately limited to the Christian experience in the West, since I do not claim sufficient knowledge of other religious traditions. I leave it to those who do to respond out of their own experience.)

The question proposed then is: When women today in Christian communities become aware of their situation within a patriarchal religious institution, and, moreover, when they recognize that the Bible is a major implement for maintaining the oppression by the patriarchal structure, what are the ways in which they respond and adjust to that situation? I suggest that there are five ways: rejectionist, loyalist, revisionist, sublimationist, and liberationist.

The rejectionist alternative is familiar enough in the recent past. It resembles Sakenfeld's second method, rejecting the Bible as not authoritative or useful, though some rejectionist writers go further, to the total rejection not only of the Bible but of the whole religious tradition it represents. Seen from this perspective, the entire Judeo-Christian tradition is hopelessly sinful, corrupt, and unredeemable. The long-discussed hermeneutical question whether patriarchy is a separable attribute in Judaism and Christianity, from which it could be purged, or whether patriarchalism is an inherent characteristic inseparable from its nature is answered with the latter: because patriarchalism is an essential and corrupt component of Judaism-Christianity, the whole religious tradition must be rejected.
Beginnings of this position can be seen as early as Elizabeth Cady Stanton, who refused to be present at a suffragist prayer meeting at which the opening hymn was 'Guide Us, O Thou Great Jehovah', on the principle that Jehovah had 'never taken any active part in the suffrage movement' (quoted by Schüssler Fiorenza 1983:7). Yet her great project of The Woman's Bible clearly shows that ninety years ago even she was not prepared to reject the whole of her religious tradition, perhaps because she saw too well that she would win more converts by remaining in the struggle.

The primary proponent of the rejectionist alternative today is of course Mary Daly (1973, 1979), whose writings on the subject are well known. For Daly, the only acceptable hermeneutical principle is that of the remnant of women who leave the unsavable Judeo-Christian legacy perpetrated by men and together form a new post-Christian faith capable of conquering the evil of patriarchalism and transcending its negative power. Ultimately this direction leads to a new dualism, in which maleness symbolizes evil and femaleness good, a reversal of the ancient Platonic cosmic/symbolic hierarchy, but a hierarchy nevertheless.² The rejectionist hermeneutic is the most extreme theological form of radical separatism. Carried out faithfully in the social, economic and political spheres, it would be not only very difficult but also very disruptive if successful. Even as a biblical hermeneutic, its implications are quite serious. It not only rejects what is proclaimed to be a major redemptive vehicle of Judaism and Christianity as non-redemptive; but it also rejects the possibility of conversion for its entire structure and its supporters. There is a kind of extreme apocalyptic finalism, rigid and unbending, which cannot yield to a dynamic of conversion. This indicates its major weakness: an almost total rootlessness from the historical past and from much of the historical and social present. Its only roots are in a hypothetical prehistoric past of idyllic goddess worship and a projected eschatological future in which evil (male) will be overcome by good (female).

The second hermeneutical alternative is the loyalist one, in most ways the opposite of the rejectionist. There the foundational premise is the essential validity and goodness of the biblical tradition as Word of God, which cannot be dismissed under any circumstance. The biblical witness as revelation has an independent status which need not be vindicated by human authority: the Bible is the ultimate expression of God's authority, not only descriptive but prescriptive, to which all human inquiry must submit. Yet the Bible, precisely as Word of God, cannot by nature be oppressive. If it is seen to be so, then the mistake lies with the interpreter and interpretive tradition, not with the text. It is the interpreter who is sinful, not the content; the medium which is found wanting, not the message. Biblical revelation is intended to foster the greatest human happiness for all, but such happiness may not always conform to the standards of contemporary culture. The Bible proclaims a message of true freedom and humanization, but according to a divine plan, not a human one. Men and women are intended to live in true happiness and mutual respect within that divine plan, not in oppressive patterns of domination and struggle against one another, which are sinful manifestations of the disorder of human nature without divine grace.
As long as one is dealing with general principles of religious anthropology and virtuous living, such premises pose little problem. But how are these hermeneutical principles to be reconciled with the blatant biblical message of female submission, especially in the household codes of the New Testament? Herein lies the problem. Two somewhat different kinds of responses are offered within this alternative. The first is to employ careful critical exegesis to counter one text with another in order to refute simplistic literalist interpretations of any one passage: for example, 1 Cor 14:34 with 11:5, 1 Tim 2:12 with Titus 2:3, et cetera. By building a carefully constructed argument step by step, totally based on thorough and sound exegesis of actual passages, this approach can demonstrate to the mind that is a priori open to expanding roles of women, but unyielding on the precise definition of biblical authority and revelation, that contrary to conclusions reached by a superficial reading of the texts, the Bible may not at all be condemning women to an inferior position. The problem has been with closed-minded interpreters, not with the text itself.³ Thus the new exposition calls for conversion of social attitudes to the true biblical spirit of mutual respect.

The second form taken by the loyalist hermeneutic is to accept the traditional argument for order through hierarchy as a datum of revelation, but one sorely in need of transformation from within because of its abuse by imperfect human instruments. Thus it is argued that the subordination theme applies only or chiefly to the family, not to society at large, and is totally misunderstood and abused when seen as dominance/submission. Rather the point is the necessary leadership of one and followership of the other as the only and divinely intended way to unity and harmony in society. Far from diminishing the dignity and freedom of women, such a structure adhered to with love promotes the true liberation of both women and men to fulfill their divinely intended destiny.⁴

Those who might tend to dismiss the loyalist hermeneutic too easily should recall that it is a carefully worked out biblical method, usually based on sound use of exegetical method, and that it is found useful by large numbers of intelligent American women as a means of explaining and interpreting their role within their biblical faith. It is an acceptable way of using contemporary exegetical method within a conservative theological structure and is an excellent demonstration that it is not exegesis that will finally determine how one interprets biblical data, but experiential and theological premises. This fact indicates too the chief weaknesses of the loyalist method: it is particularly vulnerable to the temptation to stretch history and the literal meaning of texts, and it tends to be innocent of the political implications of the types of social interaction and relationships that it advocates on the basis of fidelity to the biblical text as divine revelation.
If the rejectionist hermeneutic holds the biblical tradition as unconvertible and the loyalist hermeneutic holds it as not in need of conversion, the third alternative, a revisionist hermeneutic, represents a midpoint between the two. The foundational premise of this hermeneutic is that the patriarchal mold in which the Judeo-Christian tradition has been cast is historically but not theologically determined. Because of social and historical factors the tradition has been male-dominated, androcentric, and discriminatory, but these characteristics are separable from and thus not intrinsic to it. The tradition is capable of being reformed, the perspective revised - and that is precisely the religious challenge addressed to the contemporary feminist.

The method is research into women's history to reveal neglected sources of information in the tradition. In this approach, which combines Sakenfeld's (1) and (4), the historical sources are reexamined and reinterpreted to show how much we really do know about women and their contributions to the formation of history. For example, the role of women in the Jewish scriptures and the Talmud is interpreted against the backdrop of whatever information is available from archaeological and artistic sources; the role of women in the New Testament and early church is interpreted from the portrayal of women in the gospels, the Pauline mission, the apocryphal acts, the martyrdom literature, et cetera. The historical sense of 'reading between the lines' is employed to portray the positive role of women in ancient sources. Meanwhile, the chauvinist-misogynist texts are explained by a combination of exegetical method and interpretation of the influence of cultural context. This approach has produced a long list of books in the last ten years on the role of women in Judaism and early Christianity and the ministry of women in the early church, so numerous that it is unnecessary to give examples. It has also produced a few fine literary studies that have reexamined familiar texts with the tools of literary criticism to reveal the androcentric one-sidedness of traditional interpretations.⁵

The revisionist alternative adopts the position that the tradition is worth saving, and it has thus become the starting point for many feminist religious thinkers with liberal theologies of revelation who are not willing to abandon the tradition entirely as do the rejectionists. It takes the tactic of highlighting the importance of women in our religious history, of portraying their dignity within patriarchy. It moves ultimately - but not fast enough or firmly enough for some - toward the rehabilitation of the tradition through reform. It proclaims in a moderate voice that the situation cannot long remain the same, but that real change is imperative. Its major weakness is that it attacks more the symptoms than the cause of the illness. It musters no direct frontal attack on the system that has caused the suppression of the very evidence which it painstakingly reconstructs. Its subsequent lack of political strategy undermines its efforts in the short run, though for those with historical patience and vision it probably produces some long-lasting results.
The fourth alternative hermeneutic, the sublimationist, includes some aspects of Ruether's classification of 'romantic' feminism, in varying degrees of separatism. Its basic premise is the otherness of the feminine as manifested especially in feminine imagery and symbolism in human culture. As Other, the feminine operates by its own principles and rules, which are totally distinct from those of the male realm. In some versions the feminine is innately superior to the masculine, and therefore any thought of equality or egalitarianism is unthinkable; in other versions the two poles are so different that no comparisons can be made, and social equality is simply a non-issue. The life-giving and nurturing qualities of woman are of a totally different order than the initiative and constructive qualities of man, and any substantial crossing over in sex roles is against nature.

In biblical studies the sublimationist hermeneutic takes the form of the search for and glorification of the eternal feminine in biblical symbolism. Israel as virgin and bride of God, the church as bride of Christ and mother of the faithful, Mary as virgin-mother who symbolizes Israel, the church, and the feminine mystique - these are the symbols upon which the sublimationist hermeneutic focuses. More recently, feminine imagery for God and Christ has been an important drawing point: the Christ-Sophia and maternal imagery applied to Christ in patristic and Christian apocryphal literature, and the feminine symbolism for the Holy Spirit, which recurs elusively but persistently in Christian literature and iconography.⁶

This alternative can identify with much of the mystical tradition of Judaism and Christianity and with a certain amount of traditional Mariology, inasmuch as it can feel at home with erotic imagery in language of prayer and divine union. It is also closely associated with one type of Jungianism, which uses biblical symbols as archetypal assertions of the stability and rightness of distinctive feminine and masculine modes of being. Its response to the problems of patriarchy and androcentrism is not to join battle but, by a kind of philosophical idealism, to transcend the conflict by ascribing greater importance to the world of symbols, and to assert that the way to true freedom will be found only by following their lead.

The sublimationist hermeneutic can provide a helpful way of biblical interpretation for those who are adept at handling symbolism and for whom romantic feminism provides the key to understanding self and world. Its chief weaknesses are its tendencies to exclusivism and separatism from the social-political dimension and its inclination toward dogmatism on the question of female and social roles.
The fifth form of feminist biblical hermeneutics is the most recent and the one now attracting the most attention. Liberationist feminism, pioneered earlier by Letty Russell and others and now being developed principally by Elisabeth Schüssler Fiorenza and Rosemary Radford Ruether,⁷ takes its starting point from the broader perspective of liberation theology. Its basic premise is a radical reinterpretation of biblical eschatology: the reign of God with its redemption is proclaimed as the task and mission of the believer in the world of the present as well as the hope of full realization in God's future. The beginning of its realization for women means liberation from patriarchal domination so that all human persons can be for each other partners and equals in the common task. The oppression of women is part of the larger pattern of dominance-submission which includes political, economic, and social as well as theological dimensions: 'We cannot split a spiritual, antisocial redemption from the human self as a social being, embedded in socio-political and ecological systems'; rather, 'socioeconomic humanization is indeed the outward manifestation of redemption' (Ruether 1983:215-216).

As a biblical hermeneutic, liberationist feminism proclaims that the central message of the Bible is human liberation, that this is in fact the meaning of salvation. It therefore attempts to 'come clean' with bold honesty on the question of exegesis and advocacy. Rather than try to maintain that biblical interpretation can be done objectively and in a value-free framework, as the historical-critical school and more recently structuralist and sociological interpreters would claim, liberationist biblical theologians, denying that possibility for any theology or hermeneutic, will openly admit that theirs is an advocacy theology, already committed to certain causes and assumptions before it begins - as are, in fact, any of the other four hermeneutical alternatives discussed above as well.

Ruether finds the core of the biblical message of liberation in the prophetic tradition. The preaching of conversion from unjust social and economic practices is the call to create a just society free from any kind of oppression. Thus the hermeneutical dynamic springs from biblical texts that do not deal specifically with women, and which in fact can be quite androcentric and patriarchal at times. Freed from their own historical and cultural contexts, however, the texts inspire a message of human liberation through the working of justice which today addresses us authoritatively within our own contemporary awareness of oppression.

Fiorenza turns her attention more directly to those texts of the New Testament which transcend androcentric-patriarchal structures to express a new vision of redeemed humanity. For both authors, as for all liberationist feminists, it is not just a question of reinterpreting texts within a patriarchal framework, but of actually approaching them within an alternate vision of salvation and new creation, which will not stop at biblical interpretation but will lead inexorably to transformation of the social order through both individual and communal, structural conversion. Thus the liberationist alternative does not reject the tradition as unredeemable, but demands a total restructuring of its expression.
For the liberationist, the hermeneutical principle upon which to construct a theology of revelation is quite specific. Stated negatively, 'whatever diminishes or denies the full humanity of women must be presumed not to reflect the divine ... or to be the message or work of an authentic redeemer or a community of redemption'. Stated positively, 'what does promote the full humanity of women is of the Holy, it does reflect true relation to the divine ... the authentic message of redemption and the mission of redemptive community' (Ruether 1983:19); 'biblical revelation and truth are given only in those texts and interpretive models that transcend critically their patriarchal frameworks and allow for a vision of Christian women as historical and theological subjects and actors' (Schüssler Fiorenza 1983:30).

The liberationist hermeneutic holds much promise for creating a new direction in religious feminism. Its principal weakness lies in its almost partisan position on revelation as discussed above. Such a restrictive basis for a theology of revelation can hardly stand up under heavy scrutiny of theological tradition. It seems to equate 'revelatory' with 'authoritative' in an almost simplistic way, then to reject as non-revelatory whatever does not fit according to its own narrow criterion. Moreover, in its historical approach to biblical literature, this narrow criterion of revelation leads the liberationist method to eulogize the prophets, Jesus, and sometimes Paul while writing off other, particularly later New Testament writers, who do not meet the liberation criterion, thus forming a new 'canon within the canon' on very slim foundations. If the liberationist hermeneutic is to exercise the influence for which it has the potential, this weakness must be addressed.

We have surveyed five alternative responses to the question of feminist biblical hermeneutics. They arise from five different sets of women's experiences and assumptions about the Bible. I believe that they are truly alternatives, that is, within the limits imposed upon us by our experience and human conditioning, we really are free to choose our own hermeneutical direction. The category of conversion directed by liberationist feminists to perpetrators of androcentric patriarchy applies to feminists as well, especially to those who by race and class are caught in the double web of being both oppressed and oppressor.
7,007
1997-12-14T00:00:00.000
[ "Philosophy" ]
Production of multi-strangeness hypernuclei and the YN-interaction

We investigate for the first time the influence of hyperon-nucleon (YN) interaction models on the strangeness dynamics of antiproton- and Ξ-nucleus interactions. Of particular interest is the formation of bound multi-strangeness hypermatter in reactions relevant for PANDA. The main features of two well-established microscopic approaches for YN-scattering are first discussed, and their results are then analysed such that they can be applied in transport-theoretical simulations. The transport calculations for reactions induced by antiproton beams on a primary target, including also the secondary cascade beams on a secondary target, show a strong sensitivity to the underlying YN-interaction. In particular, we predict the formation of Ξ-hypernuclei with an observable sensitivity to the underlying ΞN-interaction. We point out the importance of our studies for the forthcoming research plans at FAIR.

Introduction

A central task of flavor nuclear physics is the construction of a realistic physical picture of the nuclear forces between the octet baryons N, Λ, Σ, Ξ [1,2,3,4,5]. Experimental information on the bare hyperon-nucleon interactions is accessible in the S=-1 sector (ΛN and ΣN channels) [6,7,8,9,10]; however, empirical data for the S=-2 channels involving the cascade hyperon are still very sparse [11]. As a consequence, the parameters of the bare YN-interactions in the S=-1 channel are better under control than those in the S=-2 sector, e.g., ΞN → ΞN, ΛΛ. In fact, various theoretical models, which are based on the well-established one-boson-exchange approach for the NN-interaction or rely on more sophisticated models on the quark level, predict quite different results for the ΞN-channels in vacuum [9,11]. This is very clearly manifested in the different energy dependence of the phase shifts, which in one model are compatible with an attractive [12], and in another approach with a repulsive [13], ΞN-interaction in free space. Of extreme interest here is the ΞN → ΛΛ channel, because it also provides information on the bare interaction between the hyperons themselves, and it is the leading channel for the production of double-Λ hypernuclei. Information on these YN interactions at finite baryon density can be obtained from studies of hypernuclei [14,15,16] in reactions induced by mesons (π- and K-beams), by high-energy (anti)protons and heavy-ions, and by electro-production [18,19]. Recently, the FOPI [20] and HypHI [21] Collaborations at GSI have performed experiments on single-Λ hypernuclei, with the analysis still in progress. The experimental investigation of multi-strangeness, e.g., double-Λ, hypernuclei will be realized at the new FAIR facility at GSI by the PANDA Collaboration [22,23]. According to the PANDA proposals, multi-strangeness bound hypermatter is supposed to be created in a two-step process through the capture of cascade particles (Ξ) - produced in primary antiproton-induced reactions - on secondary targets. Theoretical investigations have also been started recently [14,15,16,17]. We have studied in the past the formation and production mechanisms of hyperons [24], fragments [25] and hyperfragments [14,26] in reactions induced by heavy-ions, protons and antiprotons, however, by using fixed interactions for the hyperon-nucleon channels, in particular, in the S=-2 sector.
The task of the present work is to investigate the role and possible observable effects of different YN-interaction models on the strangeness dynamics of hadron-induced reactions. We have extended our earlier works by considering improved parametrizations for the cross sections of the S=-2 YN-channels. They are based on the microscopic calculations of the Nijmegen group by Rijken et al. [12] and by Fujiwara et al. [13]. In sect. 2 we briefly outline the main features of the adopted microscopic YN-calculations and discuss the basic differences between them in terms of scattering observables. For numerical purposes, the theoretical cross sections have been parametrized and implemented into a transport model. We then discuss the transport results of hadron-induced reactions in detail. Our results show strong dynamical effects originating from the different underlying ΞN-interactions. They are clearly visible in the yields of multi-strangeness hypernuclear production in low-energy Ξ-induced reactions. Hence, the proposed production experiments may lead to strong constraints on the high-strangeness YN and YY interactions.

Hyperon-nucleon interaction models in the S=-2 sector

A variety of well-established models concerning the high-strangeness sector exists in the literature. Among others, the chiral-unitary approach of Sasaki, Oset and Vacas [10] and the effective field theoretical models of the Bochum/Jülich groups [10] are representative examples. The Nijmegen models [12] are based on the well-known one-boson-exchange formalism for the NN-interaction, which is then extended to the strangeness sector with the help of SU(3) symmetry arguments. Finally, Fujiwara et al. [13] have developed quark-cluster models for the baryon-baryon (BB) interactions in the S=0,-1,-2,-3,-4 sectors. The parameters are determined by simultaneous fits to NN- and YN-scattering observables in the S=-1 sector. While there are several thousand well-confirmed experimental data points in the S=0 sector (NN scattering), far fewer data points are safely known for YN reactions. They still allow a reasonable determination of the ΛN and ΣN model parameters. Thus, despite model differences, all the theoretical approaches yield similar predictions for, e.g., potential depths and scattering cross sections for exclusive channels between nucleons and hyperons with strangeness S=-1. Remaining uncertainties for the S=-1 interactions are related to the unsatisfactory experimental data base. In the S=-2 sector, where cascade particles (Ξ) are involved, the uncertainties and, correspondingly, the discrepancies between the models are considerably larger [11]. To demonstrate this issue, we have chosen two particular model calculations to be tested in the present work: the one-boson-exchange calculations of the Nijmegen group in the extended soft-core version ESC04 of 2004 (called ESC in the following) [12] and the quark-cluster approach [13] (denoted as FSS in the following). Note that more modern approaches of the Nijmegen group exist, e.g., the more recent models ESC08 [9]. However, our intention is to investigate possible sensitivities in the dynamics of hadron-induced reactions at PANDA; thus we choose two models which utilize very different physical pictures and exhibit the largest differences in the ΞN-interactions. Fig. 1 shows the elastic and inelastic scattering cross sections for the isospin I=1 ΞN → ΞN and ΞN → ΛΛ channels, respectively, as predicted by the two models.
We show cross sections for the relevant exclusive channels only, as they enter the dynamical calculations (see next section). At first, the ESC calculations predict a stronger energy dependence of the ΞN scattering. This effect is most pronounced at lower cascade energies, where the inelastic ΞN → ΛΛ channel dominates relative to the elastic one. In fact, this pronounced energy dependence has been observed indirectly in the production of double-strangeness Λ-hypernuclei, where the double-Λ hypernuclei yields dropped considerably with increasing Ξ-beam energy [14]. On the other hand, the FSS calculations predict a much smoother energy behaviour of both cross sections, with an opposite trend relative to the ESC results: the S=-2 dynamics is largely dominated by the ΞN channel, while the ΛΛ channel is populated to a lesser degree. The differences in the cross sections reflect the rather different predictions for the scattering parameters, e.g., the phase shifts for low-energy s-states. The Nijmegen model in the ESC04 version, which is used here, predicts negative s-wave phase shifts for the I=0 ΞN channel at various cascade energies, indicating a repulsive interaction, while in the ΛΛ case the opposite trend is obtained. Indeed, as explicitly pointed out in [12], no bound ΞN states are expected in the ESC04 parameter set. On the other hand, the FSS model calculations [13] exhibit a completely different energy behaviour of the phase shifts for the same I=0 ΞN channel. The phase shift values are smaller and, in particular, carry positive signs, indicating an attractive ΞN-interaction. Note that both models also lead to different predictions for the single-particle (s.p.) in-medium potential of the Λ and, in particular, of the cascade particles. As discussed in detail in Refs. [9,27] in the framework of G-matrix calculations, the in-medium s.p. Λ-potential is in both cases attractive, with values between -38 MeV and -45 MeV in the ESC and FSS models, respectively, for matter at saturation density, following roughly the quark-scaling rule of a reduction by a factor of 2/3 with respect to the NN case. The in-medium cascade potentials are rather uncertain, ranging at saturation density from repulsion (ESC) to weak attraction (FSS). In order to keep the present discussion transparent, we will use for the s.p. mean-field potentials the same quark-counting scheme for both models. The question arises whether such different model predictions for the S=-2 YN-interaction can be constrained experimentally by PANDA. In spite of the pronounced model dependencies, an answer to this question is not trivial because of secondary processes. Indeed, in contrast to free-space scattering, in hadron-induced nuclear reactions sequential re-scattering also occurs, for instance, quasi-elastic hyperon-nucleon re-scattering with strangeness exchange (ΛN ↔ ΣN). For this purpose we use the elastic and inelastic cross sections for ΞN-scattering of both models in newly derived parametrizations. In our approach we also account for a smooth transition into the high-energy regime (E > 3.4 GeV for baryon-baryon collisions), where the PYTHIA [28] generator is switched on. The parametrizations used for numerical purposes in the transport calculations, which fit the ESC and FSS calculations very well, have been extracted with piecewise polynomial fits.
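As an illustration of this parametrization step, the following Python sketch fits a tabulated cross section piecewise. The (p_lab, σ) sample points and the breakpoint are hypothetical stand-ins for the actual ESC/FSS model output; only the 3.4 GeV hand-over to a high-energy generator is taken from the text.

```python
import numpy as np

# Hypothetical (p_lab [GeV/c], sigma [mb]) sample points standing in
# for the tabulated ESC/FSS cross sections.
p_lab = np.array([0.2, 0.4, 0.6, 0.8, 1.2, 1.8, 2.6, 3.4])
sigma = np.array([45.0, 30.0, 22.0, 17.0, 12.0, 9.0, 7.5, 6.8])

# Fit each energy region with its own low-order polynomial, mimicking
# a piecewise polynomial parametrization with an assumed breakpoint.
p_break = 1.0
low = p_lab <= p_break
fit_low = np.polyfit(p_lab[low], sigma[low], deg=2)
fit_high = np.polyfit(p_lab[~low], sigma[~low], deg=2)

def sigma_param(p):
    # Piecewise parametrized cross section; above 3.4 GeV a transport
    # code would hand over to a high-energy generator such as PYTHIA.
    coeffs = fit_low if p <= p_break else fit_high
    return float(np.polyval(coeffs, p))

print(sigma_param(0.5), sigma_param(2.0))
```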
Transport-theoretical approach to multi-strangeness production

For the theoretical description of the antiproton-induced primary reactions and the subsequently generated secondary reactions with a Ξ-beam we adopt the well-established relativistic Boltzmann-Uehling-Uhlenbeck (BUU) transport approach [29]. The kinetic equations are numerically realized within the Giessen-BUU (GiBUU) transport model [30], where the transport equation is given by

[ k*^µ ∂_µ^x + ( k*_ν F^{µν} + m* ∂^µ_x m* ) ∂_µ^{k*} ] f(x, k*) = I_coll[f],  (1)

with the field tensor F^{µν} = ∂^µ Σ^ν − ∂^ν Σ^µ. Eq. (1) describes the dynamical evolution of the one-body phase-space distribution function f(x, k*) for the hadrons under the influence of a hadronic mean-field (l.h.s. of Eq. (1)) and binary collisions (r.h.s. of Eq. (1)). The mean-field is treated within the relativistic mean-field approximation of Quantum Hadrodynamics [31]. It enters the transport equation through the kinetic 4-momenta k*^µ = k^µ − Σ^µ and the effective (Dirac) masses m* = M − Σ_s. The in-medium self-energies, Σ^µ = g_ω ω^µ + τ_3 g_ρ ρ_3^µ and Σ_s = g_σ σ, describe the in-medium interaction of the nucleons (τ_3 = ±1 for protons and neutrons, respectively). The isoscalar-scalar field σ, the isoscalar-vector field ω^µ and the third isospin component of the isovector-vector meson field ρ_3^µ are obtained from the standard Lagrangian equations of motion [31]. The relevant parameters (meson-nucleon couplings) are taken from widely used parametrizations including non-linear self-interactions of the σ field [32]. The meson-hyperon couplings at the mean-field level are obtained from the nucleonic sector using simple quark-counting arguments. The collision term includes all necessary binary processes for (anti)baryon-(anti)baryon, meson-baryon and meson-meson scattering and annihilation [30]. Important for the present work is the implementation of the new parametrizations for the ΞN-scattering processes, as discussed in the previous section. Having the cross sections for all relevant exclusive elementary channels, the collision integral of the transport equation is modelled with standard Monte-Carlo techniques.

Strangeness production in p̄-induced reactions

We have performed transport calculations for p̄-induced reactions, including the secondary collisions of Ξ-beams on a second target. We focus our studies on the role of the ΞN-interaction models in the reaction dynamics and start the discussion with the primary p̄-induced reactions. Fig. 2 shows the rapidity distributions of the Λ and Ξ hyperons for p̄-induced reactions on a Cu target. At first, the S=-1 hyperon distributions are not affected by the choice of the ΞN model. This is due to the fact that the Λ hyperons are mainly produced in primary p̄p annihilation and, in particular, get redistributed by many secondary processes involving sequential re-scattering of antikaons (K̄) and hyperonic S=-1 resonances. The secondary processes in the S=-1 sector make the Λ distributions broad and unaffected by the particular treatment of the ΞN interaction. Hence, Λ production will serve to explore the S=-1 sector independently. The production of Ξ particles is a comparatively rare process, as clearly seen in Fig. 2 from the strong decrease of the Ξ-rapidity distributions. This is due to the small values of the direct annihilation cross section p̄p → Ξ̄Ξ, which is in the range of a few micro-barns only [33]. Note that the corresponding annihilation cross section into Λ̄Λ pairs is in the range of a few hundred micro-barns [33]. As in the case of the Λ particles, re-scattering also contributes to the broadness of the Ξ rapidity spectra. However, the effect is less pronounced here due to the high production threshold of the Ξ particles.
Strangeness production in p̄-induced reactions
We have performed transport calculations for p̄-induced reactions, including the secondary collisions of the Ξ-beams on a second target. We focus our studies on the role of the ΞN-interaction models in the reaction dynamics and start the discussion with the primary p̄-induced reactions on a Cu target (Fig. 2). First, the S=-1 hyperon distributions are not affected by the choice of the ΞN model. This is due to the fact that the Λ hyperons are mainly produced by p̄p primary annihilation and, in particular, get re-distributed by many secondary processes involving sequential re-scattering of antikaons (K̄) and hyperonic S=-1 resonances. The secondary processes in the S=-1 sector make the Λ distributions broad and insensitive to the particular treatment of the ΞN interaction. Hence, Λ production can serve to explore the S=-1 sector independently. The production of Ξ particles is a comparatively rare process, as clearly seen in Fig. 2 from the strong decrease of the Ξ-rapidity distributions. This is due to the small values of the direct annihilation cross section p̄p → Ξ̄Ξ, which is in the range of a few micro-barns only [33]. Note that the corresponding annihilation cross section into Λ̄Λ pairs is in the range of a few hundred micro-barns [33]. As in the case of the Λ particles, re-scattering also contributes to the broadening of the Ξ rapidity spectra. However, the effect is less pronounced here due to the high production threshold of the Ξ particles. They mainly escape from the residual excited target nucleus. The Ξ-yields are rather stable and only moderately dependent on the choice of the ΞN-interaction model. The results with the FSS ΞN-cross sections (solid curves in Fig. 2) lead to increased re-scattering and thus to a shift of the Ξ-spectra to lower energies, relative to the transport calculations using the ESC ΞN-cross sections (dashed curves in Fig. 2). This is obvious from the high-energy part of the elastic cross sections in Fig. 1, where σ_el strongly decreases with increasing Ξ-energy, in particular in the ESC calculations.

Multi-strangeness production in Ξ-induced reactions
In PANDA the low-energy part of the cascade particles is expected to be used in a second step as a secondary beam. They will react with a secondary target and may produce double-strangeness hypernuclei, thus giving access to S=-2 hypermatter [22]. Since the underlying theoretical models for the ΞN-interaction exhibit the largest differences at low energies, it is of great interest to study in detail the role of the different ΞN-approaches in the dynamics of Ξ-induced reactions. In order to investigate such processes in this section, we consider the ΞN-reactions as "primary" reactions, although under realistic experimental conditions the Ξ hyperons are produced in an initial annihilation process. Fig. 3 shows the time evolution of the net Λ yield (solid curves) together with the corresponding contributions to Λ production/absorption, for Ξ-induced reactions on a Cu target. First, both sets of transport calculations result in a similar total net Λ yield. The calculations using the ESC (FSS) parametrizations lead to values of 0.242 (0.243), respectively, for the total net Λ relative yield (normalized to the number of GiBUU events). However, the reaction dynamics strongly depend on the underlying approach for the ΞN-interaction. The transport calculations adopting the ESC model (panel on the left in Fig. 3) lead to weaker dynamical effects and fewer re-scattering processes between the Ξ-beam and the target nucleons. In fact, the primary ΞN → ΛΛ binary process dominates the dynamics and contributes most of the Λ hyperon production. These findings are compatible with the ESC-based results in Fig. 1, where the inelastic ΞN → ΛΛ elementary channel strongly dominates over the elastic channel. Note that the ΞN → ΛΣ elementary process does not appear at all. In particular, the ΞN → ΛΣ cross section in the ESC model is very small compared to the ΞN → ΛΛ channel, on the order of a few mb only. Therefore, secondary ΣN → ΛN re-scattering only moderately affects the dynamics of the Ξ-nucleus reaction. The situation is different in the transport calculations using the FSS approach (panel on the right in Fig. 3). In contrast to the results in the ESC model, the ΞN → ΛΛ primary process is not the dominant one. Indeed, as one can see in Fig. 3, the primary ΞN → ΛΣ channel also occurs and gives the major contribution to Λ production. This result is also consistent with Fig. 6 of Ref. [13], where at this energy (E_Ξ = 0.15 GeV kinetic energy in the laboratory, corresponding to P_lab = 0.646 GeV/c) the Σ production channel opens. This enhances the Σ production and thus the secondary processes with strangeness exchange, e.g., ΣN → ΛN.
This feature, together with the fact that the elastic and inelastic channels in the FSS approach are of the same order, enhances the dynamical effects of the Ξ particles inside the target nucleus such that the relaxation time of the Λ production increases.

Formation of multi-strangeness hypernuclei
We thus conclude that the underlying ΞN-interaction has important dynamical effects in low-energy Ξ-induced reactions, and we therefore expect clear observable signals in the formation of S=-2 hypernuclei. This underlines the large physics potential of future S=-2 production experiments at PANDA. For this reason we now study the formation of double-strangeness hypernuclei in Ξ-induced reactions and the impact of the ΞN-interaction on the hypernuclear yields. The identification of hypernuclei in the transport calculations is performed as reported in detail in previous work [34]. First, one applies the statistical multifragmentation model (SMM) [35] to the freeze-out configuration of the residual nucleus. This method provides us with stable and cold fragments (SMM-fragments). As a next step, a momentum coalescence between those SMM-fragments and the hyperons is performed, leading to capture and finally providing us with hyperfragments. For the formation of hypernuclei the momentum spectra of the hyperons are crucial. It is therefore of great interest to study the model dependencies of the ΞN-interaction on the momentum distributions of strangeness production. Fig. 4 shows this in terms of the rapidity distributions of Λ (panels on the left) and Ξ particles (panels on the right) for Ξ-induced reactions at three low energies, as indicated. First, the stopping power for Λ hyperons increases in the calculations with the FSS interaction compared to the ESC model. This is because of the enhanced re-scattering involving Λ particles in the FSS interaction. Again, the total Λ yields do not essentially depend on the underlying ΞN-interaction model, but the detailed dynamics does. The rapidity distributions of the cascade hyperons (panels on the right in Fig. 4) turn out to be more interesting. The calculations with the FSS parametrizations give clear evidence of the appearance of bound Ξ-hyperons inside the target nucleus (thick solid curves in Fig. 4), while in the results using the ESC model the probability of having bound cascade matter is extremely low (thick dashed curves in Fig. 4). In our calculations we identify bound particles as those whose radial distance is less than the target ground-state radius plus 2 fm. Note that these spectra are extracted at the final stage of the reaction, i.e., at 60 fm/c. Thus, particles which are still inside the nucleus at such large time scales (relative to freeze-out) can definitely be considered as bound. At capture, the Ξ particles carry an average velocity β̄_Ξ = ⟨v/c⟩ ≈ 0.11, which is much smaller than, e.g., the average velocity of a nucleon at the Fermi surface, β̄_N ≈ 0.25. The coalescence volume in phase space is thus defined by R and β̄_Ξ. This new feature, reported here for the first time, is indeed very promising for the formation of multi-strangeness hypermatter at PANDA (see below). The occurrence of bound cascade hyperons is attributed to the small inelastic cross sections in the FSS model. In fact, as one can see in Fig. 1 (and also in Fig. 6 of Ref. [13] for ΞN → ΛΣ), the inelastic ΞN → ΛΛ cross sections are smaller relative to the elastic ones. Therefore, the secondary Ξ-beam particles can penetrate deeper into the target matter and get captured by re-scattering off, and decelerating with, the target constituents. On the other hand, the probability of having bound Ξ hyperons in the ESC-based transport calculations is obviously extremely low.
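For illustration, the bound-particle criterion described above can be written as a simple test on the transport-code output. This is a minimal sketch assuming hypothetical arrays of positions and velocities, not the actual GiBUU analysis code; the velocity cut beta_max is our own placeholder for the momentum part of the coalescence condition.

```python
import numpy as np

def is_bound(r_particle, r_target_cm, R_target, beta, beta_max=0.25):
    """Bound if the radial distance from the target centre is below the
    ground-state radius plus 2 fm and (assumed here) the particle is slow."""
    dist = np.linalg.norm(np.asarray(r_particle) - np.asarray(r_target_cm))
    return dist < R_target + 2.0 and beta < beta_max

# Hypothetical example: Cu-like target with R ~ 4.8 fm,
# one Xi at 5.5 fm from the centre moving with v/c = 0.11:
print(is_bound([5.5, 0.0, 0.0], [0.0, 0.0, 0.0], 4.8, 0.11))  # True
```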
We have analyzed these transport calculations at different Ξ-beam energies in terms of (hyper)fragmentation. First, we have found that the mass distributions become broader around the target mass region, and that the fission region around half the initial target mass number A_init gets filled with matter as the Ξ-beam energy increases. This is a typical feature of statistical thermodynamical models like the SMM, where with increasing beam energy the excitation of the residual nucleus increases, and thus more fragmentation processes (fission, de-excitation) open as the available energy increases. An example of such fragment distributions is shown in Fig. 5 for SMM-fragments, double-Λ clusters and Ξ-hypernuclei versus their mass number, for inclusive Ξ-induced reactions at a beam energy of 0.3 GeV. It can be seen that both the distributions of SMM fragments and of double-Λ clusters become moderately broader in the transport calculations using the FSS parametrizations (panel on the right in Fig. 5) as compared to the results using the ESC model (panel on the left in Fig. 5). This is due to the increased re-scattering inside the target nucleus in the FSS-based calculations, as discussed in the previous section. It leads to more internal excitation of the residual nucleus and thus to a more pronounced fragmentation dynamics with increased fragment yields. The most interesting feature in Fig. 5 is the strong dependence of the distributions of Ξ-bound hypernuclei on the underlying ΞN model. While in the dynamical calculations with the ESC model the production probability of bound Ξ-matter is relatively low, the calculations using the FSS interactions enhance the production of multi-strange Ξ-systems considerably. Another result of great interest is the prediction of Ξ-hypernuclei far away from the regions of ordinary nuclei, i.e., evaporation (A ∼ A_init) and multifragmentation (A ≤ 4). This can be clearly seen in Fig. 5, where the fission region (A ∼ A_init/2) is filled up by Ξ-hypermatter in the calculations with the FSS model. This region remains free of multi-strangeness Ξ-hypersystems in the ESC-based transport calculations. The differences are of such a magnitude that a sizeable spread is predicted for the S=-2 production cross sections. It will therefore be a challenge to measure exotic multi-strangeness hypermatter at PANDA in the future, in order to better constrain the experimentally still unknown and theoretically very controversial ΞN-interaction, eventually ruling out certain approaches to the YN and YY interactions.

Summary and conclusions
In summary, we have extended our previous studies on hypernuclear physics by considering more recent models for the high-strangeness S=-2 sector and their possible influence on observables in reactions relevant for FAIR. For this purpose we have first studied the differences between two well-established models for the ΞN-interaction and then parametrized their results. Applications of ΞN-interaction models in the dynamics of hadron-induced reactions are discussed in detail for the first time. We found important dynamical effects for reactions at PANDA, depending essentially on the underlying ΞN-approach.
Strong inelastic (absorption) effects in the elementary scattering channels lead to weaker dynamical effects in Ξ-induced reactions, while in the opposite case of less pronounced inelasticity the in-medium dynamics is enhanced. As a consequence, bound cascade particles occur in hadron-induced reactions in the case of an attractive ΞN-interaction model. A coalescence analysis was performed to study the production of (multi-)strangeness hypernuclei in low-energy Ξ-induced reactions using the two models for the ΞN-interaction. It is found that the distributions of pure fragments and double-Λ hyperclusters depend only moderately on the applied ΞN-model. However, the role of the ΞN-interaction is found to be essential in the formation of multi-strangeness Ξ-hypernuclei. In fact, models which predict an attractive ΞN interaction on the microscopic level also lead to a copious production of Ξ-bound hypermatter at PANDA, while in the opposite case the formation of such multi-strangeness systems is a rare process. In particular, we predict the formation of possibly exotic multi-strangeness hypermatter in Ξ-induced reactions. However, there remain uncertainties in the Ξ mean-field dynamics, which we have not studied separately. We consider the investigations presented here as pilot studies serving to constrain the production of S=-2 hypernuclei. It will thus be a challenge for future activities at FAIR to deepen our understanding of the still little-known high-strangeness sector of the hadronic equation of state. Note that the strangeness sector of the baryonic equation of state is crucial for our knowledge in nuclear and hadron physics and in astrophysics. For instance, hyperons in nuclei do not experience Pauli blocking within the Fermi sea of nucleons; thus they are well suited for explorations of single-particle dynamics. In the highly compressed matter of neutron stars the formation of particles with strangeness degrees of freedom is energetically allowed. Of particular interest are hereby the Λ-, Σ-, Ξ- and Ω-hyperons with strangeness S=-1, -2 and -3, respectively. As shown in recent studies [36,37], these hyperons modify the stiffness of the baryonic EoS at high densities considerably, leading to the puzzling disagreement with recent observations of neutron stars in the range of 2 solar masses.
Influence of Al2O3 Processing on the Microtexture and Morphology of Mold Steel: Hydrophilic-to-Hydrophobic Transition
The surface of mold steel was processed by a simple Al2O3 surface treatment, and the influence of processing time on the surface morphology was studied by 3D profilometry and scanning electron microscopy (SEM). Moreover, the wettability of the Al2O3-microtextured surfaces of the mold steel was investigated. The results show that the surface morphology of the mold steel varies with Al2O3 processing time. The initial surface, without any Al2O3 processing treatment, behaves as a hydrophilic surface. With increasing Al2O3 processing time, the surface roughness of the processed, microtextured surface varies correspondingly, and the wettability of the microtextured surfaces changes from hydrophilic to hydrophobic. When the Al2O3 processing time reaches 60 min, the contact angle reaches its maximum, at which point the corresponding surface roughness is at its minimum. This indicates that mold steel with an Al2O3-microtextured surface is a potential candidate for mold-release applications.

Introduction
Conventionally, the surface finishing of metal molds is done by hand lapping after machining and/or electrical discharge machining, in order to attain a small surface roughness without microcracks. However, operator shortcomings cause these manual processing methods to have a number of limitations, while consistency and repeatability are also required. Consequently, the process is extremely time consuming, which leads to high cost [1]. While automated processes are suitable for the finishing of closed dies, they are limited in their application. For example, precision machining using a single-point diamond tool is slow, requires conditions not readily available in an industrial environment, and is limited to flat surfaces [2][3][4]. Chemical micromachining and electrochemical micromachining are limited in their application and can be difficult to control [4][5][6][7][8][9]. The laser has also been widely used as a machine tool to modify the surface of engineering materials, for example in laser surface alloying, laser cladding, surface texturing, laser physical vapor deposition, laser polishing, etc. [2,[10][11][12][13][14][15][16][17][18][19][20]. Ultimately, surface modifications or surface treatments are vitally important for increasing the service life of critical components and devices used for engineering and structural functions. Numerous surface engineering approaches are employed, such as thermal, chemical, and mechanical, as well as hybrid treatments, to improve or change the surface finish. In this study, the influence of alumina-based surface processing on the microtexture, morphology, and wetting behavior of mold steel has been investigated. The morphology of the initial as well as the processed surfaces was investigated as a function of processing time. After processing, the influence of processing time on the surface morphology of the mold steel was studied by 3D profilometry and scanning electron microscopy (SEM). The wettability of the processed surface was also investigated.

Materials
The chemical composition of the mold steel is shown in Table 1.

Methods
The materials were processed into 25 mm × 25 mm × 5 mm slabs and carefully cleaned with acetone and pure ethyl alcohol to remove any contaminants from their surfaces.
The planetary mono mill "Pulverisette 6" (made in Germany) was used for the surface processing, in a stainless steel processing bowl (volume: 500 ml). The process was performed under vacuum, to prevent contamination, for time periods ranging between 15 and 180 min at a processing speed of 250 rpm. All processed specimens were ultrasonically cleaned in an acetone bath for 10 min at a frequency of 28-34 Hz and carefully dried. The surface morphology was observed with a Taylor Hobson Talysurf PGI profilometer, an optical microscope (OM), and a JEOL JSM-5600 scanning electron microscope (SEM). Contact angle (CA) measurements were taken with an advanced contact angle goniometer with DROPimage Advanced (ramé-hart Model 500), fitted with a charge-coupled device video camera (768 × 494 active pixels) and an environmental chamber with temperature control. The volume of the droplet was 10 μL.

Figure 1 shows the Talysurf 3D topography of the original mold steel specimen, while Figure 2 shows its corresponding 2D SEM morphology.

Processing time: 15 min
The morphology of the processed surface after 15 min of processing is shown in Figure 3, and the related SEM image in Figure 4. The results indicate that the processed surface is smoother than the initial surface (Figures 1 and 2). According to Figures 3 and 4a, some small ridges are distributed on the processed surface. The grooves on the surface have disappeared to a certain extent, together with some smaller island-form ridges (such as labels A and B), due to the short processing time. The big ridges can no longer be detected and the remaining chippings have disappeared (Figure 2), resulting in the smooth processed surface shown in Figure 4a. Moreover, magnified parts of Figure 4a are shown in Figure 4b and c, and a crack can be found on the processed surface, as shown in Figure 4c (label C), as a result of the impact of the processing balls.

Processing time: 30 min
The morphology of the processed surface after 30 min of processing is shown in Figure 5; its corresponding texture is shown in Figure 6a, and magnified parts are shown in Figure 6b and c. Although the processing balls remove most high plateaus from the original surface of the specimens (cf. Figures 2 and 6), island-form ridges are still found distributed on the processed surface. As shown in Figure 6b and c, micropits and cracks can be observed on the surface of the substrate because the balls did not mill sufficiently.

Processing time: 60 min
With increasing processing time, the processed surface becomes smoother and smoother, as shown in Figure 7, and the formed ridges become smaller and smaller, as shown in Figure 8b and c, illustrating the uniform change of the surface topography after 60 min of processing. The variation of the topography of the surface processed for 60 min is distinct: the mold steel surface was processed effectively, without any defects (cf. Figures 4, 6, and 8). Generally, an increase in processing time simultaneously improves the properties of the surface substrate. However, too long a processing time is likely to damage the surface of the substrate, subsequently changing the surface topography and mechanical properties of the steel specimens.
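As a reminder of how the roughness values discussed below are defined, the arithmetic mean roughness Ra is the mean absolute deviation of the profile height from its mean line. A minimal sketch, assuming a hypothetical 1D profilometer trace z (in µm):

```python
import numpy as np

def arithmetic_mean_roughness(z):
    """Ra: mean absolute deviation of profile heights from the mean line."""
    z = np.asarray(z, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))

# Hypothetical trace (um) sampled along the evaluation length:
z = [0.12, -0.05, 0.30, -0.22, 0.08, -0.10]
print(f"Ra = {arithmetic_mean_roughness(z):.3f} um")
```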
Processing time: 120 min
Compared with the surface processed for 60 min (Figure 8), the surfaces processed for longer times (Figures 9 and 11) are relatively coarser. Their SEM images (Figures 10a and 12a) and the further magnified counterparts (Figures 10b, c and 12b, c) show signs of the ridges growing larger again, some microparticles aggregating loosely, and microcracks scattered over the processed surface. Such a surface topography, with scattered micro-aggregation and micro ball-like amorphous features (Figure 10c), implies some change in the properties of the mold steel surface. This change is not really anticipated, since it is initially expected that the properties of the processed surface would be the same as in the as-received condition, or better than the initial one. For the surface processed for 180 min (Figure 11), relatively more severe cracks appear, as shown in Figure 12b and c, which are likely to change the properties of the initial surface drastically. Figure 13 shows the relationship between the arithmetic mean surface roughness Ra and the processing time. The results indicate that the initial increase in processing time is accompanied by a decrease in surface roughness until the processing time reaches 60 min, at which point the surface roughness is at its minimum. A further increase in processing time then increases the roughness once again, which agrees well with the morphology shown in Figures 3-12.

Wettability of the milled surface
The variation of the CA with processing is shown in Figures 14 and 15. The evolution of the CA is clearly related to the processing time. The initial surface of the mold steel is hydrophilic. As the processing time increases, the wettability of the surface varies markedly: it changes from hydrophilic to hydrophobic when the processing time is 60 min, an attractive characteristic for mold release. Moreover, the CA increases with processing time in the early processing period. However, when the surface is processed for longer times, such as 120 and 180 min, the surface becomes hydrophilic again, as shown in Figure 15. It is well known that there is a distinction between the "actual surface" of an interface and the "geometric surface," which is measured in the plane of the interface. At the surface of any real solid, the actual surface area will be greater than the geometric one because of surface roughness. Due to this distinction, the contact angle is influenced by the roughness. When the surface roughness is taken into account, the contact angle and droplet profile change to maintain equilibrium. To evaluate the effect of roughness on surface wettability and to calculate the new contact angle θ' on a rough surface, two different models were proposed, by Wenzel and by Cassie and Baxter; their standard forms are recalled below. Both models emphasize that the geometric features of the solid surface act as a critical factor in determining the wettability. Consider a system with a droplet placed on a rough solid surface, as shown in Figure 16, in which the surface texture feature size is much smaller than the droplet, so that the influence of the liquid weight on an indentation can be neglected compared to that of the surface tension. In traditional theory, whether air can be trapped between the liquid and the solid surface is determined by the surface tension of the liquid.
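For reference, the standard relations of the two models (quoted here in their well-known forms, since the explicit equations do not survive in the extracted text) are given below, where θ0 is the intrinsic contact angle, r ≥ 1 is the ratio of actual to geometric surface area, and f is the fraction of the solid surface in contact with the liquid:

```latex
% Wenzel model: the liquid fully wets the rough topography
\cos\theta_w = r\,\cos\theta_0
% Cassie-Baxter model: air pockets remain trapped beneath the droplet
\cos\theta_c = f\,(\cos\theta_0 + 1) - 1
```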
The liquid tends to exhibit its intrinsic contact angle (θ0) at the edges of the islands. On a hydrophilic substrate (θ0 < 90°), concave menisci form in the indentations; the resultant of the liquid surface tension is directed downward and drives the liquid to fill the indentations as much as possible, as shown in Figure 16a. On the other hand, for a hydrophobic substrate (θ0 > 90°), convex menisci form, and the surface tension of the liquid is directed upward and pushes the liquid to remain suspended on the indentations, as shown in Figure 16b. Considering that θ_w is smaller than θ0 for hydrophilic materials and θ_c is larger than θ0 for hydrophobic materials, on an ideal patterned surface the contact angle will always decrease when θ0 < 90° and increase when θ0 > 90°. Therefore, one can choose the processing time that gives the ideal surface property according to the practical application.

Preliminary computations
During the processing, the mean value of the magnitude of the critical torque, τ̄_critical, can be expressed in terms of λ_t, d_t, f_adhesion and F_n, where λ_t is a constant in the range 0.5 < λ_t < 1, d_t is the distance parallel to the plane from the center to one of the asperities in contact, f_adhesion is the adhesive force at each contact, and F_n are the normal body forces. φ̄_critical is the critical angle at which the critical torque τ̄_critical occurs, with 0.63 < λ_φ < 1; P is the total load, R* is the reduced radius, and E* is the reduced elastic modulus. The decelerating torque, τ̄_decelerating, involves γ = −2ξ η_n |d_t|² n_{2π}, where ξ is a coefficient (ξ < 1, or ξ ≪ 1), η_n is the damping coefficient, ω is the angular velocity, n_{2π} is the number of asperities per revolution, and γ plays the role of an adhesive term. The model consists of expressions for the critical angle and torque at which a ball starts to mill, as well as the rate at which they decelerate. Because of the stochastic nature of surface roughness, it is impossible to accurately reproduce all behavior in a model simple enough to be used in reconstructions of the ball-processing phenomena. While the average processing effect can be accurately replicated, the contact between a real ball and the substrate will not necessarily follow this average, due to the details of the geometry. However, in a system of many balls, it is often the case that the average behavior dominates. While variations around the average may have a large effect on the motion of each individual ball, the statistical behavior of a system of many balls will not be significantly changed. The model has been derived for the contact between a ball and a plane. The contact forces in contacts between two balls will be different, and effects such as interactions between asperities make the system far more complex, but the general principle upon which the model is based still applies. The proposed model can provide an adequate approximation of the processing effects in the surface processing when the torque is dominated by the largest scale of roughness.

Conclusions
The morphology of mold steel varies with processing time. An increase in processing time changes the wettability of the processed surface from hydrophilic to hydrophobic. Meanwhile, the initial increase in processing time is accompanied by a decrease in surface roughness until the processing time reaches 60 min, at which point the surface roughness is at its minimum. However, with a further increase of processing time, the wettability of the surface becomes hydrophilic again.
More seriously, too long a processing time is likely to damage the surface of the substrate, subsequently changing the surface morphology and mechanical properties of the steel specimens. This is undesirable, since the properties of the processed surface are expected to be at least equal to, or better than, those of the as-received condition, especially for mold-release applications.
Cannabidiol is an effective helper compound in combination with bacitracin to kill Gram-positive bacteria
The cannabinoid cannabidiol (CBD) is characterised in this study as a helper compound against resistant bacteria. CBD potentiates the effect of bacitracin (BAC) against Gram-positive bacteria (Staphylococcus species, Listeria monocytogenes, and Enterococcus faecalis) but appears ineffective against Gram-negative bacteria. CBD reduced the MIC value of BAC by at least 64-fold, and the combination yielded an FIC index of 0.5 or below in most Gram-positive bacteria tested. Morphological changes in S. aureus as a result of the combination of CBD and BAC included multiple septum formations during cell division, along with membrane irregularities. Analysis of the muropeptide composition of treated S. aureus indicated no changes in the cell wall composition. However, CBD- and BAC-treated bacteria did show a decreased rate of autolysis. The bacteria further showed a decreased membrane potential upon treatment with CBD; yet, they did not show any further decrease upon combination treatment. Noticeably, expression of a major cell division regulator gene, ezrA, was reduced two-fold upon combination treatment, emphasising the impact of the combination on cell division. Based on these observations, the combination of CBD and BAC is suggested as a putative novel treatment in clinical settings for infections with antibiotic-resistant Gram-positive bacteria.

Results
The combination of CBD and BAC is effective against Gram-positive bacteria. Initially, we validated the antimicrobial effect of cannabidiol (CBD) against the Gram-positive bacterium methicillin-resistant Staphylococcus aureus (MRSA), as previously published by Appendino and colleagues 14, and also against Enterococcus faecalis (E. faecalis), Listeria monocytogenes (L. monocytogenes), and methicillin-resistant Staphylococcus epidermidis (MRSE). We found the Minimum Inhibitory Concentration (MIC) to be 4 µg/mL for S. aureus, L. monocytogenes, and the MRSE strain, and 8 µg/mL for E. faecalis, indicating that Gram-positive bacteria are sensitive towards CBD (Table 1). To determine whether CBD would induce a higher susceptibility to BAC in Gram-positive bacteria, MICs of BAC were determined for the four Gram-positive bacteria in the presence of CBD. Remarkably, the MIC of BAC was decreased by 8- to at least 64-fold when combined with 1/2 x MIC of CBD, compared to the MIC of BAC alone, in the different Gram-positive strains (Table 1). Furthermore, the Fractional Inhibitory Concentration (FIC) index was determined for each Gram-positive bacterium. The results showed a FIC index of 0.5 for both MRSA USA300 and MRSE, and of 0.375 for E. faecalis, indicating a weak synergistic effect between the compounds CBD and BAC (Table 1). After combining CBD with other antibiotics, both of similar and of different types, we concluded that CBD had the best effect together with BAC (see Supplementary Figure S1). To assess the potentiating effect of CBD on BAC over time, measurements of bacterial growth over 24 hours in the presence of either CBD alone or in combination with BAC were performed. CBD was assessed at 2 µg/mL, and BAC at 8, 16, and 32 µg/mL. As seen in Fig. 1a, growth of S. aureus is inhibited by 2 µg/mL CBD and 16 µg/mL BAC combined, compared to monotherapies of the individual compounds. The results suggest that CBD can potentiate the antimicrobial effects of BAC. Similarly, growth measurements of E. faecalis, MRSE, and L. monocytogenes
with monotherapies and the combination (Fig. 1b-d) suggest that the combination of CBD and BAC is also useful against other Gram-positive bacteria. To clarify whether CBD and BAC act in synergy, time-kill assays were performed (Fig. 1e). CBD and BAC together reduced the viability by 6 log10 CFU/mL compared to CBD alone. The result shows that a clear synergistic effect indeed exists between CBD and BAC, and that the effect is bactericidal. The slight re-initiation of growth after 8 hours is almost certainly caused by degradation or oxidation of the cannabinoid 16. To verify that the decreased CFU upon combination treatment is caused by killing of the bacteria and not by clustering of the cells, microscopy was performed at 1, 2, 4, and 8 hours post treatment (Supplementary Figure S2). The images show no additional clustering of the cells treated with the combination compared to the other treatments. To further assess the spectrum of use for the combination of CBD and BAC, growth of Gram-negative bacteria upon treatment was measured as well. The Gram-negative bacteria tested were strains of Pseudomonas aeruginosa, Salmonella typhimurium, Klebsiella pneumoniae, and Escherichia coli (Supplementary Figure S3). Experiments with CBD and BAC against the Gram-negative bacteria revealed MIC values above 128 µg/mL for all tested bacteria, presumably due to the outer membrane. In addition, the experiments did not reveal any synergy between CBD and BAC at the concentrations tested, limiting the use of the combination to Gram-positive bacteria.

Transmission electron microscopy of cells treated with the combination of CBD and BAC revealed multiple septa per cell along with membrane irregularities. This result was confirmed by staining the Penicillin Binding Proteins (PBPs) in the membrane using Bocillin-FL, a fluorescence-conjugated penicillin V derivative (Fig. 2b), which showed a similar morphology (red arrows). As peptidoglycan synthesis occurs both at the septal and the peripheral cell wall, irregularities concerning the peptidoglycan can be observed all over the cell surface 17. Upon exposure to either CBD or BAC alone, regular septum formation was visualised; however, when treated with the combination, multiple septum formations appeared in some of the cells, as visualised by Bocillin-FL as well as in the TEM images. This suggests that the combination of CBD and BAC affects the cell envelope, causing irregular cell division visualised by multiple septum formations and an irregular cell membrane. To study whether the cell division defect is specific to the combination of CBD and BAC, microscopy analyses using higher concentrations of CBD and BAC, at 4 and 64 µg/mL, respectively, were performed (Supplementary Information Figure S5). The images show cells with multiple septa upon treatment with 64 µg/mL BAC, indicating that the visualised effect is not specific to the combination of CBD and BAC. However, it further emphasises the CBD-mediated potentiation of BAC, since this phenotype did not appear at the lower BAC concentration. Adding a higher concentration of CBD did not seem to cause any division defects.

The combination of CBD and BAC decreases autolysis in S. aureus. Since treatment with the combination of CBD and BAC shows impaired cell division, probably causing an arrest in cell division and potentially decreased cell wall turnover, one could speculate whether this would also result in decreased autolysis. Therefore, a Triton X-100 induced autolysis assay was performed. S. aureus USA300 was grown until early exponential phase and stressed for one hour with CBD, BAC, CBD+BAC, EtOH, or left untreated. The cells were then washed and incubated with or without Triton X-100.
As suspected, upon treatment with the combination of 1 µg/mL CBD and 16 µg/mL BAC, a significantly decreased autolysis was observed (Fig. 3) compared to the untreated control, from 90 to 300 minutes, except at the 150-minute timepoint, indicating cell division arrest.

The combination of CBD and BAC does not change the cell wall composition or the degree of cross-linking. To further assess the irregularities around the cell envelope and the possible effect on cell wall biosynthesis, the muropeptide composition of the peptidoglycan was analysed. Peptidoglycan was purified from S. aureus USA300 grown in either CBD, BAC, the combination of CBD and BAC, EtOH, or left untreated, and further digested using mutanolysin and analysed using HPLC. The chromatogram of purified, digested muropeptides revealed the typical pattern of S. aureus 18, with the highest peak found in the dimeric fraction (peak 4). Treating the bacteria with CBD and BAC, alone or in combination, did not change the pattern of the HPLC chromatogram of the muropeptides (Fig. 4), indicating no change in the muropeptide composition. Even though the relative amounts of some of the muropeptide fractions were significantly different, the degree of cross-linking was unaltered when compared to the untreated control (Supplementary Tables S3, S4 and S5). Based on these observations, CBD or the combination of CBD and BAC does not seem to cause changes in the cell wall composition.

CBD causes depolarisation of the cytoplasmic membrane. Since the analysis of the muropeptide composition did not reveal any changes, we investigated the membrane. To evaluate effects on the bacterial membrane, the membrane potential was measured upon exposure to either CBD, BAC, or the combination of the two (Fig. 5). Accumulation of the fluorescent dye DiOC2(3) in healthy bacterial cells with an intact membrane potential results in red fluorescence (high red/green ratio), whereas lower concentrations of the dye, due to membrane potential disruption, exhibit green fluorescence (low red/green ratio), as visualised for the depolarised control using CCCP. Thus, the ratio between red and green fluorescence can reveal the state of the membrane potential. As shown in Fig. 5, even very low concentrations of CBD, at 0.1 and 0.2 µg/mL, as well as BAC at 16 µg/mL, resulted in a significantly lower red/green fluorescence ratio compared to either the untreated or the EtOH control, indicating disruption of the membrane potential. However, combining BAC with CBD at either 0.1 or 0.2 µg/mL did not show any significant further membrane depolarisation compared to either CBD or BAC alone.

Transcriptional expression analysis by qPCR. Given the defects in cell division and septum formation observed by TEM, as well as the decreased autolysis, we wished to identify whether the expression of specific genes encoding proteins important for cell division, formation of the divisome, and autolysis was affected by CBD and BAC. Similar to the TEM experiment, S. aureus USA300 was grown for 2.5 hours after exposure to CBD and/or BAC in the exponential growth phase. Analysis of transcriptional changes of selected genes (see Supplementary Table S1) involved in the divisome, cell division and autolysis of S. aureus upon treatment was performed by Reverse Transcriptase qPCR.
Regarding the divisome and cell division genes, ezrA was the most regulated gene upon combination treatment, at approximately 2-fold down-regulation (Fig. 6a). EzrA is an important multifunctional component of the bacterial cell divisome, implicated in peptidoglycan synthesis and assembly of the cell division apparatus 19. The results for the remaining genes analysed can be seen in Supplementary Figure S8. These data support the TEM images by showing that CBD in combination with BAC disrupts cell division. As autolysis was shown to be decreased upon treatment with the combination of CBD and BAC, we studied the expression of selected autolysis genes. Of the genes studied, the expression of lytM and lytN was highly upregulated upon combination treatment, at approximately 2.5-fold (Fig. 6b) and 3.5-fold (Fig. 6c), respectively, whereas the combination treatment did not seem to affect the expression of the other autolysis genes (atl, sle1, lytA) compared to treatment with either CBD or BAC alone (see Supplementary Information Figure S8).

Figure 5. Measurements of membrane potential in USA300 treated with CBD, BAC and the combination, using the BacLight Bacterial Membrane Potential Kit as described in Methods. The ratio between the mean red fluorescence and the mean green fluorescence was calculated as a measure of the membrane potential for each sample, since the dye accumulates in unaffected cells, thus emitting red fluorescence, whereas in cells with affected membranes less accumulation occurs, resulting in emission of green fluorescence. CCCP is a depolarised control. Statistical analysis was done by one-way ANOVA with Bonferroni's Multiple Comparison Test and is shown in the upper part of the figure. ns (not significant) indicates P-values above 0.05; ** indicates P-values below or equal to 0.01; *** indicates P-values below or equal to 0.001.

Discussion
The limited availability of effective therapies against S. aureus has intensified the pursuit of new treatment strategies. Development of new antibiotics is currently undergoing an innovation gap, while research into the use of helper compounds in combination with antibiotics is becoming more intense. It has previously been reviewed that many natural compounds, such as flavonoids and compounds from manuka honey and teas, can potentiate antibiotics [20][21][22]. In this study, we found that the antibacterial effect of BAC against S. aureus, as well as other Gram-positive bacteria, can be enhanced by cannabidiol originating from the Cannabis plant. The potentiation was confirmed through MIC determinations, standard growth experiments, fractional inhibitory concentration determination, and time-kill assays. As expected, the combination turned out to be ineffective against Gram-negative bacteria, as BAC is a mixture of related cyclic peptides which interrupt cell wall synthesis in Gram-positive bacteria and is probably unable to cross the outer membrane of Gram-negative bacteria. BAC interferes with the dephosphorylation of bactoprenol (C55-isoprenyl pyrophosphate), a membrane lipid carrier that transports peptidoglycan precursors across the membrane for peptidoglycan biosynthesis 23,24. The use of BAC in combination with other compounds against S. aureus has been studied before, such as in combination with colistin 25 and alkyl gallates 26,27.
Colistin is believed to damage the cell membrane, thus increasing the entry of BAC into the cell, or to increase the availability of divalent ions such as Zn2+, which are important for the functionality of BAC 25, whereas the mechanism underlying the alkyl gallate-mediated potentiation of bacitracin is unknown. However, alkyl gallates have been shown to bind the bacterial membrane, affecting membrane integrity, suggesting a mechanism for synergy similar to that suggested for colistin 28.

Figure 6. qPCR data of the divisome gene ezrA and the autolysis genes lytM and lytN, studied upon 2.5 hours of treatment with either CBD, BAC, the combination, EtOH, or left untreated, in USA300. Data were obtained using the Roche LightCycler 480 Instrument as described in Methods. Experiments were performed in four biological replicates, and Cp values were generated in technical replicates. Statistical analysis was done by one-way ANOVA with Bonferroni's Multiple Comparison Test. *** indicates P-values below or equal to 0.001.

The use of CBD and other cannabinoids as antibacterial agents was first described in 1976 by Van Klingeren and Ten Ham 13 and again in 2008 by Appendino and colleagues 14; however, since then very little has been published on this topic. CBD is a quite effective antimicrobial compound, with a MIC value of 4 µg/mL against S. aureus USA300 and other Gram-positive bacteria. Appendino and colleagues 14 found MIC values of CBD extracted from powdered plant material in the 0.5-1 µg/mL range towards various drug-resistant strains of S. aureus. However, the mechanism by which CBD and other cannabinoids affect bacteria has not been studied so far. The endogenous endocannabinoid anandamide (AEA) and the endocannabinoid-like arachidonoyl serine (AraS) have been shown to possess poor antimicrobial properties but to have pronounced dose-dependent inhibitory effects on biofilm formation of all tested MRSA strains 12. In this study, we have shown that the cannabinoid CBD is able to potentiate the antibacterial properties of the cell-wall-targeting BAC. Nevertheless, unlike in the case of the endocannabinoids, we did not find any effect on biofilm formation and breakdown in our experimental setup (Supplementary Figure S9). This may indicate a different mechanism of action for CBD. Cell imaging is one approach to obtain indications of the mechanism or site of action of an antimicrobial compound. Cells grown in the presence of either CBD or BAC did not reveal any phenotypical changes compared to the untreated or the EtOH control. However, treatment with the combination of CBD and BAC revealed a remarkable phenotype, visualised by transmission electron microscopy. The TEM images showed bacteria with multiple septa, causing a lack of cell separation during the cell cycle, and a distorted cell membrane. The lack of cell separation and the termination of the cell cycle are consistent with the Triton X-100 induced autolysis assay, which showed a decrease in autolysis upon combination treatment. We therefore sought to study the expression of genes encoding proteins involved in autolysis and found the expression of lytM and lytN to be upregulated upon combination treatment. In a gene-silencing study of the major regulator of cell wall metabolism, walRK, S. aureus was shown to have a phenotype similar to that observed in this study: multiple septa and initiation of septa 29. In addition, it was found that by increasing the expression of lytM, the viability of the cells could be restored, even though the cells had still formed multiple septa.
Based on this, the increased expression of lytM, and perhaps also lytN, might be due to a kind of self-defence mechanism trying to restore cell viability. Regarding the formation of multiple septa, similar characteristics have been recorded by others as well, e.g., by treating S. aureus with the wall teichoic acid biosynthesis inhibitor targocil 30,31, causing both decreased autolysis and impaired cell division, as visualised by multiple septum formations. In addition, the formation of multiple septa has been visualised by others through gene knockouts or gene silencing of genes important for cell cycle regulation. Pang and colleagues showed this phenotype in a Δnoc strain, lacking a very important cell division regulator 32. In addition, Stamsas and colleagues found effects on septum formation in a ΔcozEa strain upon gene silencing of cozEb, which encode proteins that together are important for proper cell division in S. aureus and which interact with the major cell division protein EzrA 33. Furthermore, construction of a conditional ezrA mutant has also been shown to cause impaired cell division and multiple septum formations in S. aureus 34. The fact that ezrA is downregulated approximately two-fold upon exposure to the combination of CBD and BAC indicates that the combination affects cell division in S. aureus. Steele and colleagues 19 showed that S. aureus cells partially depleted of EzrA cannot divide without sufficient levels of EzrA. The authors also showed that EzrA is required for peptidoglycan synthesis. Nevertheless, the combination of CBD and BAC in our experimental setup does not seem to have a noteworthy effect on peptidoglycan synthesis, or at least not on the composition of the peptidoglycan nor on the degree of cross-linking, as the muropeptide analysis showed a similar pattern in the chromatograms of the untreated cells and of the CBD- and BAC-treated cells. However, whether the decreased expression of ezrA is a direct or a secondary effect of the combination treatment is unknown and will be studied further in the future. The exact mechanism of the CBD potentiation of BAC is not yet fully understood; however, the combination was seen to cause cell division complications and envelope irregularities. As mentioned above regarding the combination of colistin and BAC, and presumably alkyl gallates, one could argue that CBD may have a similar mechanism, i.e., affecting the membrane, as visualised by the membrane potential disruption. This would cause either an increase of BAC entry into the cell or an increased divalent-ion availability for BAC. On the other hand, the mechanism of potentiation seems to be specific for bacitracin, since no particular synergy was observed when combining CBD with either dicloxacillin, daptomycin, nisin or tetracycline, indicating mechanisms for the CBD-mediated potentiation of BAC other than an increased uptake due to a disrupted membrane.

Conclusion
In this study, we present a putative novel antimicrobial combination for the treatment of Gram-positive bacterial infections, using the cannabinoid cannabidiol and the cell-wall-targeting antibiotic bacitracin. Through growth experiments, it was found that CBD was able to potentiate the effects of BAC against S. aureus USA300 and other Gram-positive bacteria. However, the combination was found to be ineffective against Gram-negative bacteria.
Upon treatment with the combination of CBD and BAC, TEM revealed that the morphology of the cells had changed compared to cells treated with either CBD or BAC alone or left untreated. The cells showed multiple septum formations, indicating a lack of cell separation during cell division causing reduced autolysis, as well as an irregular membrane. In addition, a very important cell division gene, ezrA, turned out to be transcriptionally downregulated upon combination treatment. The changes observed in morphology were not caused by compositional changes in the cell wall muropeptides. Membrane potential changes for the combination of CBD and BAC, compared to either CBD or BAC treatment alone, did not reveal the mechanism of action of the combination. Future studies are therefore focused on cell division and the cell envelope to identify the mechanism of action.

Methods
Bacteria and growth conditions. The resistant Staphylococcus aureus strain MRSA USA300 FPR3757 35 was the main bacterium used throughout this study. MRSA was grown in Brain Heart Infusion (BHI) or Mueller-Hinton (MH) media on plates or in liquid cultures with agitation at 37 °C. The additional bacteria Enterococcus faecalis (13-327129), methicillin-resistant Staphylococcus epidermidis (933010 3F-16 b4), Listeria monocytogenes EGD, Pseudomonas aeruginosa (PA01), Salmonella typhimurium (14028), Klebsiella pneumoniae (CAS55), and Escherichia coli (UTI89) were grown in BHI, MH or Lysogeny Broth (LB) media on plates or in liquid cultures with agitation at 37 °C. As CBD (Sigma Aldrich) was dissolved in EtOH, a control using the same volume of 96% EtOH was included.

Minimum inhibitory concentration (MIC) and Fractional Inhibitory Concentration (FIC). The MIC was determined using the broth microdilution method 36. The MIC was interpreted as the lowest concentration at which no growth was observed. Briefly, MIC measurements were performed in MH or BHI medium in 96-well plates (Nunc A/S or Sarstedt). A total volume of 100 μl with a bacterial inoculum of approximately 5 × 10^5 CFU/mL was incubated with two-fold dilution series of the compound or antibiotic of interest and incubated at 37 °C for 16-22 hours with agitation. For the FIC index determination, ¼ volume of each compound or liquid medium was added to the wells of the 96-well plate, and a final ½ volume with the same bacterial inoculum was added to the wells afterwards. The plate was incubated as described above. The MIC and FIC determinations were performed using at least three biological replicates. Growth was determined using a Synergy H1 Plate Reader (BioTek). The FIC index was calculated as:

FIC index = (MIC of compound A in combination / MIC of compound A alone) + (MIC of compound B in combination / MIC of compound B alone)
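As a worked illustration of this formula (with made-up MIC values, not the measured ones), the FIC index and its conventional interpretation can be computed as follows:

```python
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """Checkerboard FIC index: the sum of the fractional MICs of both compounds."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical values (ug/mL): compound A alone 4, in combination 2;
# compound B alone 64, in combination 16.
fic = fic_index(4, 2, 64, 16)
label = ("synergy" if fic <= 0.5
         else "additivity/indifference" if fic <= 4
         else "antagonism")
print(f"FIC index = {fic:.3f} ({label})")  # 0.750 (additivity/indifference)
```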
Growth experiments and time-kill assay. For microdilution growth experiments, a 96-well plate (Nunc Edge) was prepared with different concentrations of the compounds of interest in MH media. Diluted overnight cultures (OD 600 nm of 0.005) were added to each well. The plate was incubated at 37 °C (without agitation) for 24 hours. Using an oCelloScope (BioSense Solutions ApS), the bacterial density was measured over a period of 24 hours using the UniExplorer software. The data retrieved were obtained as Background Corrected Absorption (BCA), calculated by an algorithm enabling the determination of bacterial growth kinetics from images taken with the oCelloScope camera 38,39. Experiments were performed with at least three biological replicates. For the macrodilution growth experiment, an ON culture was diluted to OD600 0.02 in BHI media in an Erlenmeyer flask and placed in a water bath at 37 °C with agitation. Bacterial growth was determined by turbidity measurements at OD 600 nm. The time-kill assay was performed as previously described 40. Briefly, a culture grown to OD600 0.2 in BHI was split and treated with CBD or BAC alone and in combination. An untreated control was included. Viability was monitored by OD600 readings and CFU/mL determinations by spotting 10-fold serial dilutions on MH agar plates. The viability assay was performed twice with similar results.

Transmission electron microscopy (TEM). Cultures were prepared for TEM according to Thorsing et al. 40. Briefly, USA300 was grown in BHI media at 37 °C with agitation from OD600 0.02 to the start of exponential growth at 0.2. The culture was diluted 5 times in BHI media and split into different flasks. The cultures were either left untreated or treated with 1 µg/mL CBD and/or 16 µg/mL BAC or ethanol. The cells were incubated at 37 °C with agitation for 2 hours and 30 minutes. Treated cells were harvested, and the pellet was washed twice in PBS, followed by fixation ON in 2% glutaraldehyde diluted in 0.04 M phosphate buffer, pH 7.4. The fixated cells were washed in 0.1 M phosphate buffer, pH 7.4, and the pellet was resuspended in 15% bovine serum albumin and incubated at 20 °C for 1 hour and 15 minutes. The cells were centrifuged and fixed again ON in 2% glutaraldehyde at 4 °C. Fixated cell pellets were cut into pieces and washed three times using 0.1 M phosphate buffer pH 7.4, followed by staining with 1% OsO4 for 60 minutes at 4 °C. The samples were dehydrated using increasing concentrations of ethanol (50-99%) at 4 °C and then embedded in Epon TAAB-812. The samples were cut into ultra-thin sections using an ultra-microtome and collected on nickel grids. The sections were stained using 3% uranyl acetate for 14 minutes at 60 °C, followed by a wash with water, and then stained using lead citrate for 6 minutes at room temperature. Finally, the samples were washed in 20 mM NaOH and water and then dried. The sections were analysed by transmission electron microscopy using a Philips EM 208 microscope equipped with a Quemsa TEM CCD camera and iTEM Digital Imaging Platform software. The experiment was carried out using two biological replicates.

Fluorescence microscopy. Cells were grown and treated as described for transmission electron microscopy. After treatment, cells were washed in PBS and incubated at room temperature with 5 µg/mL Bocillin-FL.

Autolysis assay. The autolysis assay was performed according to Campbell et al. 31. Briefly, an ON culture of S. aureus USA300 was diluted in BHI to OD600 0.02 and grown to early exponential phase, OD600 0.2, at 37 °C with agitation. The culture was treated with either 1 µg/mL CBD, 16 µg/mL BAC, the combination of CBD and BAC, the solvent EtOH, or left untreated, and incubated for one hour at 37 °C with agitation. After incubation, cells were washed in PBS pH 7.2, and the pellet was resuspended in 50 mM Tris-HCl with or without 0.05% Triton X-100, adjusted to OD600 1.0, and incubated at 30 °C with gentle agitation. Turbidity measurements were performed every 30 or 60 minutes for five hours at OD600. The autolysis assay was carried out in three biological replicates.
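Autolysis curves of this kind are conventionally reported as the fraction of the initial turbidity remaining over time. A minimal sketch with hypothetical OD600 readings (not our measured data):

```python
import numpy as np

def percent_initial_od(od_series):
    """Express each OD600 reading as a percentage of the value at t = 0."""
    od = np.asarray(od_series, dtype=float)
    return 100.0 * od / od[0]

# Hypothetical readings every 30 min for a Triton X-100 treated culture:
od600 = [1.00, 0.92, 0.80, 0.66, 0.55, 0.48]
print(np.round(percent_initial_od(od600), 1))  # [100.  92.  80.  66.  55.  48.]
```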
Muropeptide isolation and analysis by reverse-phase HPLC. Muropeptides were isolated according to Kühner et al. 18 with minor changes. Briefly, an ON culture of S. aureus USA300 was treated and grown as described above for TEM. After 2.5 hours of treatment, the cells were harvested at 10,000 × g and the pellet was resuspended in 0.1 M Tris/HCl containing 0.25% SDS and heated to 100 °C for 30 minutes. To remove the SDS, the samples were washed at least 15 times in sterile ddH2O, and the absence of residual SDS was confirmed according to Hayashi, 1975 41. The samples were sonicated for 30 minutes in a sonicator bath, followed by DNase and RNase treatment using DNase I (15 µg/mL) and RNase A (60 µg/mL) with incubation at 37 °C for 1 hour, followed by trypsin digestion (50 µg/mL) for 1 hour at 37 °C. Enzymes were inactivated at 100 °C for 3 minutes. To remove wall teichoic acids, the samples were incubated with 1 M HCl at 37 °C for 4 hours with agitation. The samples were washed to pH 5-6 and resuspended in digestion buffer. Cell walls were digested with mutanolysin (5000 U/mL) (Sigma) at 37 °C with agitation for 17 hours. The samples were then centrifuged, the supernatant was moved to a fresh tube, and 50 µL of a reduction solution containing 10 mg/mL NaBH4 was added; the samples were left at room temperature for 20 minutes with open lids to reduce MurNAc. The reaction was stopped using 15 µL of 85% phosphoric acid. Separation of the samples was carried out at a flow rate of 250 µL/min using an Agilent 1260 Infinity RP-HPLC (Agilent Technologies) and an XSelect Peptide CSH C18 column (130 Å, 3.5 µm, 2.1 mm × 150 mm; Waters) heated to 52 °C. The peptides were eluted by a gradient of solvent B (0.06% trifluoroacetic acid (TFA)/35% methanol) and solvent A (0.06% TFA), as described 18. The muropeptide isolation and subsequent separation were carried out in three biological replicates. The degree of cross-linking was calculated as:

Cross-linking (%) = 0.5 × dimer (%) + 0.67 × trimer (%) + 0.9 × oligomer (%)

Membrane potential. The bacterial membrane potential was analysed using the BacLight Bacterial Membrane Potential Kit (ThermoFisher), and the experiment was performed according to the manufacturer's recommendations. Briefly, an ON culture of USA300 was diluted to OD600 0.02 and grown to 0.3 in BHI media at 37 °C with agitation. The culture was diluted 1:100 in PBS and split into separate tubes. The samples, including an ethanol control, were either left untreated or treated with 5 µM CCCP (depolarised control), 0.1 or 0.2 µg/mL CBD and/or 16 µg/mL BAC, and incubated at room temperature for 5 minutes. Following the incubation, the dye DiOC2(3) was added to a final concentration of 0.03 mM to each sample except an unstained control and left to stain for at least 15 minutes at room temperature, protected from light. The samples were analysed using a BD FACSAria II flow cytometer and FACSDiva version 6.1.2 software. For each sample, 10^4 events were analysed using a laser emitting at 488 nm, and fluorescence was collected in the red and green channels. The experiment was carried out in three biological replicates.
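As an illustration of the red/green ratio readout described above (with made-up per-event fluorescence intensities rather than real FACS data):

```python
import numpy as np

def red_green_ratio(red, green):
    """Mean red / mean green fluorescence: high in polarised cells,
    low when the membrane potential is disrupted."""
    return float(np.mean(red) / np.mean(green))

rng = np.random.default_rng(0)
# Hypothetical intensities for 10,000 events per sample:
untreated = red_green_ratio(rng.normal(900, 50, 10_000), rng.normal(300, 30, 10_000))
cccp = red_green_ratio(rng.normal(350, 40, 10_000), rng.normal(600, 40, 10_000))
print(f"untreated ratio ~ {untreated:.2f}, CCCP (depolarised) ratio ~ {cccp:.2f}")
```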
RNA was purified by a hot acid-phenol procedure 42 using FastPrep and FastPrep beads and treated with 0.2 units of DNase I (NEB) for 15 minutes at 37 °C, followed by heat inactivation. cDNA was synthesised using the High-Capacity cDNA Reverse Transcription Kit (ThermoFisher Scientific) according to the manufacturer's recommendations. Briefly, the cDNA synthesis was carried out at 25 °C for 10 minutes followed by 37 °C for 45 minutes, and finally the enzyme was inactivated at 85 °C for 5 minutes. A No Template Control (NTC) and a No Reverse Transcriptase Control (NRT) were included as well. The experiment was performed using four biological replicates. Quantitative polymerase chain reaction. Reverse transcriptase qPCR was performed using the Roche LightCycler 480 Instrument. For each reaction, 5 µL RealQ Plus Master Mix Green 2× (Ampliqon), 0.75 µL 10 µM primers (Supplementary Table S1), 1 µL sterile ddH2O and 2.5 µL sample were used. Each reaction was made in technical duplicates. Pre-incubation was set at 95 °C for 15 minutes; the amplification cycle was set at 95 °C for 15 seconds, followed by 60 °C for 45 seconds and then 72 °C for 45 seconds, for 45 cycles. A melting curve was created at the end of the procedure by heating to 95 °C for 5 seconds, 60 °C for 20 seconds, and then 97 °C continuously. Data were retrieved using LightCycler 480 Software version 1.5.1.62. Data were normalised using gyrB as a reference gene. Statistical analysis. P-values were calculated by a one-way ANOVA with Bonferroni's Multiple Comparison Test for qPCR and membrane potential data. A two-way ANOVA with Bonferroni's Multiple Comparison Test was used for the autolysis assay. Significance was determined based on the P-values visualised in the figures: ns (not significant), P > 0.05; *, P ≤ 0.05; **, P ≤ 0.01; ***, P ≤ 0.001.
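The paper states only that expression data were normalised against gyrB; the standard calculation behind such reference-gene normalisation is the 2^(-ΔΔCt) method, sketched below in Python. The Ct values, gene names, and the helper function are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def ddct_relative_expression(ct_target_treated, ct_ref_treated,
                             ct_target_control, ct_ref_control):
    """Relative expression of a target gene by the 2^(-ddCt) method,
    normalising to a reference gene (here gyrB) and an untreated control."""
    # dCt = Ct(target) - Ct(reference), computed per condition
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    # ddCt compares the treated condition against the untreated control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Illustrative Ct values from technical duplicates (invented numbers)
fold_change = ddct_relative_expression(
    ct_target_treated=[22.1, 22.3], ct_ref_treated=[18.0, 18.1],
    ct_target_control=[24.0, 24.2], ct_ref_control=[18.2, 18.1],
)
print(f"fold change vs untreated control: {fold_change:.2f}")
```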
7,351.4
2020-03-05T00:00:00.000
[ "Medicine", "Chemistry" ]
Compound-SNE: comparative alignment of t-SNEs for multiple single-cell omics data visualization Abstract Summary One of the first steps in single-cell omics data analysis is visualization, which allows researchers to see how well-separated cell types are from each other. When visualizing multiple datasets at once, data integration/batch correction methods are used to merge the datasets. While needed for downstream analyses, these methods modify feature space (e.g. gene expression) or PCA space in order to mix cell types between batches as well as possible. This obscures sample-specific features and breaks down local embedding structures that can be seen when a sample is embedded alone. Therefore, in order to improve visual comparisons between large numbers of samples (e.g. multiple patients, omic modalities, different time points), we introduce Compound-SNE, which performs what we term a soft alignment of samples in embedding space. We show that Compound-SNE is able to align cell types in embedding space across samples, while preserving local embedding structures from when samples are embedded independently. Availability and implementation Python code for Compound-SNE is available for download at https://github.com/HaghverdiLab/Compound-SNE. Introduction Visualization of high-dimensional data is a key aspect when examining single-cell omics (epigenomics, transcriptomics, proteomics, etc.) data samples. Many different algorithms exist for embedding high-dimensional data into 2D space, though t-distributed Stochastic Neighbours Embedding (t-SNE) and uniform manifold approximation and projection (UMAP) remain the most common (Van der Maaten and Hinton 2008, McInnes et al. 2018). Besides visualizing a single sample, it is important to be able to visually compare multiple single-cell samples, e.g. scRNA-seq from different patient samples, or data modalities such as paired scRNA-seq and scATAC-seq data (Kim et al. 2022) on the same sample or same patient (multi-view data), before moving on to further analyses. If the dataset is complete (i.e. containing all cell states of interest), the reference dataset can be embedded first and other datasets projected onto it (Spitzer et al. 2015, Angerer et al. 2016, Hao et al. 2023). Otherwise, in current approaches, data integration is performed to merge samples together, which are then embedded all at once (Haghverdi et al. 2018, Korsunsky et al. 2019, Hao et al. 2021). While this does achieve a good alignment of different samples, data integration algorithms modify gene expression values in order to best mix samples together, leading, in embedding space, to the dissolution of unique local structures that are seen in original, unintegrated embeddings. Although data integration is still important for other analyses [e.g. cell-type label transfer tasks (Mölbert and Haghverdi 2023)], we propose here an alternative method for visualizing multiple single-cell samples. Compound-SNE performs what we term a soft alignment, aiming to maximize the alignment of multiple embeddings while minimizing the local structural differences from the samples' independent embeddings. This is done in a two-step process: (i) alignment in PCA space via matrix transformation in order to align embedding initializations, and (ii) addition of a force term to the embedding algorithm, which pulls clusters of cells together based on annotations.
Alignment overview The complete workflow of Compound-SNE consists of five steps as follows. Compound-SNE is designed to work with Scanpy (Wolf et al. 2018) formatting, taking in an AnnData object. 1) Annotation: If cell-type annotations are not available, Compound-SNE can generate cluster annotations (Haghverdi et al. 2018) in PCA space. We note that this is not preferred and that these clusters are only used for visual alignment and not for any type of functional identification. Compound-SNE then integer-encodes annotations. For the rest of the paper, we will refer to annotations as cell types. 2) Reference selection: Compound-SNE requires at least one sample to use as a reference. If not specified, Compound-SNE first chooses a primary reference as the sample with the most unique cell types. This sample is used for the primary alignment, as described in the following section. Then, if the primary does not contain all of the cell types, secondary references are chosen in order to complete the set of cells. 3) Primary alignment: Samples are first aligned in PCA space via matrix transformations in order to align cell-type centers. A matrix of cell-type centers (only for shared cell types) by components is found for each sample, which is then aligned to the primary reference, after scaling, via a Procrustes transformation (Schönemann 1966), which scales and rotates a matrix to minimize the sum of squared errors from a reference matrix. The obtained transformation matrix is used to transform the full sample. This impacts the embedding initialization, but has no further impact on the embedding process. 4) Embedding initialization: As Kobak and Linderman (2021) show, initializing a nonlinear embedding optimization with PCA components enhances the preservation of global structures. The authors also report that both t-SNE and UMAP equally preserve global structures when using the same initialization. We therefore use the first two components of the transformed PCA space in order to initialize the embedding for each sample. 5) Alignment via forces: To obtain better alignment between samples with minimal disturbance to local embedding structure, we include an additional force term in the embedding process that, for each sample, pulls together the centers of cell-type clusters (which may deviate from the primary alignment in the process of t-SNE iterations). We first embed the reference sample as normal, then find the centers of each type in embedding space. When embedding the remaining samples, during each embedding step, cell-type centers are found and the distances between embedded sample centers and reference centers are computed, with the goal of minimizing these distances. The total loss function thus becomes L_total = L_tsne + Σ_{i=1}^{K} d_i, where L_tsne is the standard t-SNE cost function (Van der Maaten and Hinton 2008), d_i is the squared distance between the embedded reference center Y_{r,center_i} and the embedded sample center Y_{s,center_i} for cell type i, and K is the number of shared clusters between the reference and the sample data. We expand upon the relation between the alignment force exertion and minimization of the loss function in the Supplementary Methods. For practical implementation, we take advantage of the computational speed of the openTSNE (Poličar et al. 2024) Python library. Compound-SNE alternates between a t-SNE iteration, via openTSNE, and minimizing the distance between cell-type clusters.
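A minimal sketch of the two alignment ingredients just described, using only numpy and scipy; the function names, array shapes, and force weight are our illustrative assumptions, not the actual Compound-SNE implementation (which interleaves the force step with openTSNE iterations and also applies scaling in the Procrustes step).

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def primary_alignment(X_sample, labels_s, X_ref, labels_r):
    """Align a sample to the reference in PCA space via a Procrustes
    rotation of the shared cell-type center matrices (scaling omitted)."""
    shared = sorted(set(labels_s) & set(labels_r))
    C_s = np.array([X_sample[labels_s == c].mean(axis=0) for c in shared])
    C_r = np.array([X_ref[labels_r == c].mean(axis=0) for c in shared])
    R, _ = orthogonal_procrustes(C_s, C_r)  # rotation minimizing the SSE
    return X_sample @ R  # transform the full sample

def center_force_step(Y_sample, labels_s, ref_centers, eta=0.05):
    """One center-pulling step: nudge each cell toward the reference center
    of its cell type, shrinking d_i = ||Y_r,center_i - Y_s,center_i||^2."""
    Y = Y_sample.copy()
    for c, ref_c in ref_centers.items():
        mask = labels_s == c
        if mask.any():
            Y[mask] += eta * (ref_c - Y[mask].mean(axis=0))
    return Y
```

In the actual algorithm, a step of this kind alternates with t-SNE gradient updates, so the t-SNE cost and the sum of center distances are minimized jointly.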
Because not all samples may contain every cell type, as described above, the primary reference is chosen as the one with the most unique cell types. We then identify secondary references, using the minimum number needed to create a set containing all of the present cell types. Secondary references are then aligned sequentially to the primary, using their embeddings to obtain embedding centers of the remaining types. This creates a complete reference of embedding centers for each cell type present across all samples. Application We apply Compound-SNE to datasets consisting of multiple patients and modalities, demonstrating its utility for comparing different but related datasets. One dataset consists of bone marrow samples from six healthy patients, containing both gene expression and surface markers (Triana et al. 2021). The second dataset consists of gene expression and ATAC-seq data for kidney samples from the same patient (Muto et al. 2021). A subset of alignments is shown in Fig. 1, with full alignments in Supplementary Figs S1 and S2. The third dataset consists of gene expression of bone marrow hematopoietic cells for several time points following inflammatory stimulation (Bouman et al. 2024) (shown in Supplementary Fig. S3). In Fig. 1a, using gene expression of patient B6 as a reference, we show that Compound-SNE can be used to align gene expression for several patients. The first column shows the original, independent embeddings for each sample. The second column shows embedding following the primary alignment, and the third column shows embedding with the additional force term. The final two columns show embedding following data integration using Harmony and Seurat, as a comparison to our method. Visually, we see that even using only the primary alignment offers a reasonable improvement over the independent embeddings, with the full alignment providing a much greater visual alignment. Notably, the full alignment yields embeddings that retain much of the cluster shapes seen in the independent embeddings. The two integration methods, while clearly aligning all of the samples, visually erase much of the structures unique to each patient in the independent embeddings. This is because cells are forced to mix well between batches. In Fig. 1c, we align scRNA and scATAC samples from the same patient. While in Fig. 1a the independent embeddings look somewhat comparable between patients (as well as between scRNA and surface markers in Supplementary Fig. S1), the embeddings for scRNA and scATAC look very different from each other initially, obscuring comparison. Primary alignment achieves a modest improvement, while the full alignment yields a much stronger improvement while preserving original cluster shapes. We were unable to integrate the two modalities using Harmony, while Seurat was able to integrate them, again at the cost of dissolving structures present in the independent embeddings. Comparison statistics and evaluations Beyond a visual comparison of embeddings, we calculate several metrics to compare how well-aligned embeddings are to each other and how well embedding structures are preserved between aligned embeddings and the original embeddings.
1) Alignment score: Beyond visually comparing embeddings, we calculate a metric to determine how well-aligned samples are. In embedding space, we find the centers of each cell type for each sample and take the sum of squared errors between points. This value, d, is then transformed via 1/(1 + d), so that a value closer to 1 indicates a better alignment. We see (Fig. 1b, top) that, as we progress from independent embeddings to aligned initializations to aligned with center-based force, we get better alignment, which is consistent with the visual results. We do see that the data integration methods Harmony and Seurat yield the best alignment between samples, which is expected based on the nature of data integration. Alignment scores between scRNA and scATAC for patient K1 are shown directly on the plots of Fig. 1c. 2) Locality preservation: While data integration yields the best alignment between samples, we can visually see that this is at the cost of the original embedding structure (Fig. 1b, bottom). To determine the preservation of local structures present in each embedding, we calculate the k nearest neighbors for each cell in the independent embedding and compare them to the nearest neighbors in each alignment, taking the fraction shared as a metric of structure preservation. We see that the primary alignment obtains the best preservation of the original structure, with alignment with center-forces performing only slightly worse. Data integration, on the other hand, greatly disrupts these local structures. We therefore see that there is a trade-off between structure preservation and sample alignment. Preservation scores for scRNA and scATAC for patient K1 are shown directly on the plots of Fig. 1c. 3) Alignment of data views with highly variable sizes (cell numbers): Furthermore, to demonstrate the alignment of samples with highly different cell densities, we randomly subsample bone marrow B2 to 696 cells (1/10 of the cells) and align it with the full sample for B1 (9751 cells) (Supplementary Fig. S4). We see that this still achieves a nice visual alignment. 4) Computational efficiency: In Supplementary Fig. S5, we compare the runtime for each sample when embedded independently and when embedded with alignment forces. We find that, with a couple of outliers in either direction, the addition of alignment forces does not impact runtime. 5) Clustering for cell annotations: We mentioned that Compound-SNE is able to generate non-cell-type-specific annotations for the sake of performing alignment. Applying the full alignment to these generated annotations for the bone-marrow scRNA samples is shown in Supplementary Fig. S6, which shows a comparable alignment, in this case, to using the original cell-type annotations. Conclusion With Compound-SNE, we demonstrate how to perform a soft alignment of embeddings for single-cell samples from different patients and modalities. This aids visual comparison between many samples, with minimal disturbance to the unique sample structures seen when embedding samples independently. When using Compound-SNE, the usual limitations in interpreting nonparametric data embeddings (like standard t-SNE) should be respected (Chari and Pachter 2023). Whereas comparisons of the overall structure, cluster composition and feature activities (e.g. gene expression) across the map are correct and useful, over-interpretations such as comparison of cell densities over the maps should be avoided.
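A small sketch of the two evaluation metrics just described, assuming embeddings are numpy arrays and cell-type labels are shared across samples; the function names are ours, not the package's.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def alignment_score(Y_a, labels_a, Y_b, labels_b):
    """1/(1+d), where d is the sum of squared errors between
    matched cell-type centers of two embeddings."""
    shared = sorted(set(labels_a) & set(labels_b))
    d = sum(np.sum((Y_a[labels_a == c].mean(axis=0)
                    - Y_b[labels_b == c].mean(axis=0)) ** 2)
            for c in shared)
    return 1.0 / (1.0 + d)

def locality_preservation(Y_independent, Y_aligned, k=30):
    """Mean fraction of each cell's k nearest neighbours in the
    independent embedding that are preserved in the aligned one."""
    nn_i = NearestNeighbors(n_neighbors=k + 1).fit(Y_independent)
    nn_a = NearestNeighbors(n_neighbors=k + 1).fit(Y_aligned)
    idx_i = nn_i.kneighbors(Y_independent, return_distance=False)[:, 1:]
    idx_a = nn_a.kneighbors(Y_aligned, return_distance=False)[:, 1:]
    overlap = [len(set(a) & set(b)) / k for a, b in zip(idx_i, idx_a)]
    return float(np.mean(overlap))
```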
Figure 1. (a) Embeddings for the six patients from the bone marrow dataset, using B6 as the primary reference. Each row corresponds to a different alignment/integration method. All embeddings are on the same spatial scale. (b) Metrics for the embeddings shown in (a). Means with error bars for standard deviation. Top: structure preservation, calculated as the fraction of KNN for each point preserved from the independent embeddings. Bottom: alignment of the embeddings as the distance between normalized cell-type centers. (c) Alignment/integration of scRNA and scATAC samples for K1 of the kidney dataset. Alignment scores of scATAC to scRNA are shown on each scATAC subplot, labeled as A. Structure preservation scores, labeled as P, for scATAC are shown on the subplots, excluding the independent embedding. This score is also shown for the integration methods on scRNA. All embedding coordinates are on the same scale. Cell-type legends for (a) and (c) are shown in Supplementary Fig. S8.
2,958.2
2024-03-03T00:00:00.000
[ "Computer Science", "Biology" ]
Excitation back transfer in a statistical model for upconversion in Er-doped fibres We report a new analytical method to evaluate the accuracy of a statistical model of the migration-assisted upconversion in Er-doped fibres. Unlike the mean-field approach to the excitation back transfer which was used in a previous statistical model, we use a new approximation accounting for the variance of the population of the first excited level. Such an approach presents a more realistic physical description of the excitation-emission processes in heavily-doped Er-based fibres. Implementing these results, we find that the accuracy of upconversion rate calculations is within 13% if the concentration of erbium ions is smaller than the critical one. [DOI: 10.2971/jeos.2007.07027] INTRODUCTION The study of the upconversion process in heavily-doped erbium-based materials is important for the development of efficient fibre optic amplifiers [1]-[8], lasers and sensors [9]-[14], because it affects the efficiency of such devices. For example, an increased upconversion rate leads to the degradation of the performance of high-concentration erbium-doped fiber/waveguide amplifiers (EDFAs/EDWAs) [1]-[8] and to complex dynamic regimes for lasers at 1.5 µm [9,10]. However, for upconversion lasers emitting at 0.550 µm and 3 µm [11]-[13], and for temperature sensors based on green luminescence [14], an increased upconversion rate provides an increased efficiency. To characterise the performance of high-concentration EDFAs/EDWAs and lasers, models accounting for the upconversion of excitation on homogeneously distributed (homogeneous upconversion, or HUC) and clustered erbium ions (pair-induced quenching, or PIQ) have been exploited [1,2,9,10]. Detailed microscopic study of erbium-doped glasses by means of X-ray absorption fine structure spectroscopy (XAFS) has found no evidence of short-range pair-clustering of Er3+ ions [15]. Therefore, more accurate physical models have to be used for fitting experimental results. In our previous publications, we reported a model satisfying such criteria [5]-[8]. Unlike the HUC and PIQ models, this model takes into account the structure of the glass host matrix by means of the pair-correlation function h(R) (the probability density to find two erbium ions at the distance R) and critical distances of upconversion/migration. These distances are directly proportional to the spectral overlaps: excited-state absorption with spontaneous emission, and ground-state absorption with spontaneous emission. To reduce the complex problem of upconversion and migration in an ensemble of uniformly distributed erbium ions, we applied a mean-field approximation to the excitation back transfer, in which the variance in the first excited level population was neglected [5]-[8]. This gave us the opportunity to derive an analytical expression for the upconversion coefficient as a function of the population of the first excited level and the concentration of erbium ions [5,6,8].
At high concentration of erbium ions the distance between ions decreases and, therefore, the probabilities of upconversion and migration increase as well. As a result of reinforced migration, the distribution of excitation tends to a homogeneous one, for which the variance takes its maximum value. Hence, for high concentrations of erbium ions the mean-field approach to the excitation back transfer has to be replaced by another approximation accounting for the variance in the first excited level population. We report in this paper such an approach, which gives us the opportunity to find the correct analytical expression for the upconversion coefficient as a function of the first excited level population and the concentration of erbium ions. Additionally, we evaluate the validity of the statistical model of upconversion from [5] over a wide range of erbium ion concentrations. MEAN-FIELD APPROXIMATION IN STATISTICAL MODEL OF MIGRATION ASSISTED UPCONVERSION Dipole-dipole interactions between erbium ions randomly distributed in a host glass lead to two processes, called excitation upconversion and migration [5]-[8]. Upconversion occurs between two erbium ions excited at the 4I13/2 metastable level and results in the excitation energy transfer from one ion (donor) to the other (acceptor). The donor loses energy and goes to the ground-state level 4I15/2, whereas the acceptor receives energy and moves to the higher excited state 4I9/2 (Figure 1). The acceptor returns back very quickly to the 4I13/2 level through step-wise phonon-assisted relaxation processes (Figure 1). Additionally, the upconversion processes are assisted by excitation migration between excited and unexcited ions. To describe the upconversion and migration processes we use the set of rate equations of [5,6] (Eq. (1)), in which time t is normalised to the lifetime of the first excited level τ2; β = (σa + σe)/σa, where σa and σe are the absorption and emission cross-sections, respectively; Ip and Ips are the power and saturation power of the pump wave; n2k is the probability to find the ion numbered k on the first excited level; N is the number of ions; and n2 is the population of the first excited level (n2 = lim_{N→∞} Σ_{k=1}^{N} n2k/N). The rates of upconversion P_ki and migration W_kj for the dipole-dipole mechanism of excitation energy transfer are given as P_ki = (R_up/R_ki)^6 and W_kj = (R_m/R_kj)^6, where R_up and R_m are the critical distances for upconversion and migration, respectively, and R_ki, R_kj are the inter-ion distances [16]. Since n3 ~ 0.01 n2, we neglect the population of the second excited level n3 in Eq. (1) [5,6]. As follows from [8], the macroscopic equation that is used for the experimental study of upconversion processes can be written in the form of Eq. (2), where C_up is the normalised upconversion rate, which can be found from Eq. (1) by averaging over the distances between all ions and unexcited ions (Eq. (3)) [5,6]. To find the upconversion rate from Eq. (2) we have to find the population n2 by solving Eq. (1), with further averaging over the distances between erbium ions. To simplify the problem, we apply continuous-wave excitation (dn2/dt = 0) and the mean-field approximation [5,6], in which the fluctuating populations in the back-transfer sum are replaced by their mean: Σ_{j=1,j≠k}^{N} W_kj n2j ≈ n2 Σ_{j=1,j≠k}^{N} W_kj (Eq. (4)). In [6] it was found that using the approximation Eq. (4) leads to an equation for n2k in integral form (Eq. (5)). To find the population n2, an averaging over two ensembles should be performed (Eq. (6)): the first ensemble consists of excited ions and the second one contains both excited and unexcited ions. Here h(R) is the pair-correlation function to find two erbium ions at the distance R.
Using the notation for the erbium concentration c_Er = N/V, we find from Eqs. (5) and (6) the expressions Eqs. (7) and (8). Here h(R) is the pair-correlation function to find two erbium ions at distance R, and the function f_{m(up)} is the pair probability density for the excitation to leave an ion by migration or upconversion. As follows from Eqs. (7) and (8), the problem of multi-particle interaction has, with the help of the approximation Eq. (4), been reduced to pair interactions. On the other hand, for closely located unexcited and excited ions, the probability to find the excitation localised on the pair will increase, due to the higher probability of excitation migration between the ions in the pair than of excitation leaving the pair [16]. As a result, the function f_m(R) will decrease as well. To find the correct form of this function, we consider excitation migration between two ions located at distance R [16]. At the initial moment of time t = 0, one ion is excited and the other one is unexcited. The rate equation for the probability density function for the excitation migration from the initially excited ion to the unexcited one at a moment t > 0 takes the form of Eq. (9) [16]. As a result, we find Eq. (10). Applying the pair-correlation function h(R) and substituting Eq. (10) into Eq. (8), we find from Eqs. (7) and (8) the expressions Eq. (11) [5,6]. The general form of Eqs. (11), accounting for the short-range coordination order of erbium ions, is given in detail in [6]. In spite of the fact that the statistical model demonstrated good applicability for fitting experimental data for gain [7] and for the upconversion coefficient [8] in high-concentration erbium-doped fibres, the accuracy of the mean-field approximation Eq. (4) has to be justified in order to find the margins of the parameters within which the statistical model of upconversion in the form of Eqs. (2) and (11) can provide reliable results. VARIANCE IN EXCITATION BACK TRANSFER IN THE STATISTICAL MODEL OF MIGRATION ASSISTED UPCONVERSION With an increased concentration of erbium ions, the distance between ions decreases and, therefore, the probabilities of upconversion and migration increase as well. Migration smoothes out the distribution of excitation towards a homogeneous one, for which the variance takes its maximum value. Hence, for high concentrations of erbium ions the mean-field approach to the excitation back transfer has to be replaced by another approximation accounting for the variance in the first excited level population. We find an appropriate approximation for the statistical model of upconversion accounting for the simplest form of the pair-correlation function. This function and Eqs. (11) have been successfully used to fit experimental results for gain as a function of input signal power in [7]. It has also been found that the parameters in [7] correspond to the condition under which this approach is valid, i.e. n2 ≤ 0.8 [6]. We start the derivation of the correct approximation with the derivation of the distribution function for the stochastic variable W_kj n2j/n2, using the results of [17]. The distribution function to find one ion in the centre of a sphere and the other one at the distance R inside the sphere of radius R_max is φ(R) = 3R²/R_max³. This yields the distribution function for the variable (Eq. (12)), and the Fourier transform of the function Eq. (12) provides an expression for the characteristic function (Eq. (13)) [17]. The characteristic function of a sum of independent stochastic variables x_j equals the product of the characteristic functions of each variable (Eq. (14)).
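The two-ion migration picture behind Eqs. (9) and (10) can be checked numerically. Below is a small Python sketch (our construction, not code from the paper) that integrates the hopping rate equation dp/dt = -W p + W (1 - p) for the probability p(t) that the excitation still sits on the initially excited ion, and compares it with the closed-form solution p(t) = (1 + exp(-2Wt))/2; the dipole-dipole rate W = (R_m/R)^6 is in units of the inverse metastable lifetime.

```python
import numpy as np

def p_exact(t, W):
    """Probability that the excitation remains on the initially excited ion."""
    return 0.5 * (1.0 + np.exp(-2.0 * W * t))

def p_numeric(t_max, W, dt=1e-4):
    """Forward-Euler integration of dp/dt = -W p + W (1 - p), p(0) = 1."""
    p = 1.0
    for _ in range(int(t_max / dt)):
        p += dt * (W * (1.0 - p) - W * p)
    return p

R_m, R = 1.0, 0.8          # critical migration distance and ion separation
W = (R_m / R) ** 6         # dipole-dipole migration rate (units of 1/tau_2)
t = 0.5                    # time in units of the metastable lifetime
print(p_numeric(t, W), p_exact(t, W))  # both approach 1/2 as t grows
```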
Here σ_m² is the variance of the first excited level population n2. Using the Inverse Fourier Transform, we find the distribution function for the sum (Eq. (15)). By means of Eqs. (12)-(15) we can find the distribution functions for the stochastic variables built from the migration and upconversion rates W_kj and P_ki. These functions take the form of Eq. (15) [5,17]. As follows from the definition of the variables and Eq. (15), the stochastic variables S1 and S2 follow the formula of Eq. (16). Using Eq. (16) we rewrite Eq. (1) for continuous-wave excitation (dn2k/dt = 0) as Eq. (17). By solving Eq. (17) with respect to n2k(S1, S2) and averaging over the stochastic variables S1 and S2 with distribution functions similar to Eq. (15), we find the equations for the population of the first excited level n2 and its variance σ_m² (Eq. (18)). Here u = √π γ (n2 + √r/2) / [2(1 + β Ip/Ips)]. RESULTS AND DISCUSSION Using Eqs. (2) and (18), we find the upconversion coefficient C_up and the population fluctuations δ = σ_m²/n2 as functions of the population of the first excited state n2 and the normalised concentration of erbium ions γ. The results of the calculations are shown in Figures 2 and 3. The approximation Eq. (4) corresponds to σ_m² = 0 and results in a simplified model of excitation back transfer which neglects the population variance [5]. For low and high populations of the first excited state, the distribution of excitation is inhomogeneous, with a low value of the variance in population (Figure 2). For low population this is caused by the small number of excited erbium ions. For high population, the upconversion is static, i.e. there is practically no excitation migration [5]. As a result, upconversion depletes the population more intensely in the regions where erbium ions are more closely located, creating an inhomogeneous distribution of excitation. For intermediate values of the population, excitation migration smoothes out the inhomogeneity and, therefore, leads to an increased variance in the population of the first excited state (Figure 2). As can be seen in Figure 3, accounting for the population variance leads to a decrease in the upconversion coefficient. This effect intensifies with an increase in the erbium ion concentration and has to be taken into account for normalised concentrations γ > 1.
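What the mean-field treatment discards can be illustrated numerically. The sketch below (our construction, not from the paper) samples random ion positions around an excited ion and estimates the mean and variance of the migration sum S = Σ_j (R_m/R_j)^6, the quantity whose fluctuations the approximation Eq. (4) neglects; the minimum distance r_min is an assumed cutoff playing the role of the minimum inter-ion separation.

```python
import numpy as np

rng = np.random.default_rng(0)

def migration_sum_samples(n_ions=200, r_min=0.5, r_max=5.0, r_m=1.0,
                          n_trials=2000):
    """Sample S = sum_j (R_m / R_j)^6 for ions uniformly distributed in a
    spherical shell r_min..r_max around an excited ion at the origin."""
    samples = np.empty(n_trials)
    for i in range(n_trials):
        u = rng.random(n_ions)
        # Uniform positions in a shell: invert the r^3 CDF with a cube root
        r = (r_min**3 + (r_max**3 - r_min**3) * u) ** (1.0 / 3.0)
        samples[i] = np.sum((r_m / r) ** 6)
    return samples

s = migration_sum_samples()
# The mean-field approximation keeps only the mean; the sizeable variance
# shows what is being neglected at high erbium concentrations.
print("mean of S:", s.mean(), " variance of S:", s.var())
```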
Solubility of erbium ions in phosphate fibre is higher than in silica and, therefore, only for erbium-doped phosphate fibres can the concentration exceed the critical value without further performance degradation [1,7]. To quantify the precision of the simplified statistical model of migration-assisted upconversion from [5], we use the relative deviation between the upconversion coefficients calculated with and without the population variance. The results of the calculations of this precision as a function of the first excited level population are shown in Figure 4. With increased erbium ion concentration, the distance between ions decreases and more than one closely located ion can appear in the vicinity of an excited ion. As a result, the probability of excitation localisation within this cluster increases and the probability of excitation delocalisation decreases. This leads to a decreased contribution of migration to the acceleration of upconversion and, therefore, results in a decreased value of the upconversion coefficient (Figure 4). For high population, n2 ~ 1, the contribution of migration to the upconversion processes can be neglected, which results in an increased precision of the model considered in [5] (Figure 4). To sum up the theoretical consideration, we emphasise that a decreased upconversion rate leads to improved EDFA/EDWA characteristics and, vice versa, an increased upconversion rate results in improved characteristics of upconversion lasers and green-luminescence-based sensors [1]-[14]. It has been found that control of the short-range order of the erbium ions in the host matrix can be used to control the upconversion processes [1,3,6,15]. Suppression of the short-range order and, therefore, of the upconversion processes can be realised by increasing the solubility of erbium in the host matrix (co-doping by Al [3] or using phosphate glass [1]) or by modification of the deposition process (Direct Nanoparticle Deposition [4]). Otherwise, enhancement of the short-range order of the erbium ions leads to increased upconversion and improved efficiency of upconversion-based devices. As follows from our consideration, for the case of enhanced upconversion, the model accounting for the variance of the first excited level population (Eqs. (2) and (18)) will provide a higher precision for the upconversion characterisation in comparison with the model considered in [5]. In conclusion, we report a new statistical model of migration-assisted upconversion in erbium-doped fibres. Unlike the mean-field approach to the excitation back transfer that was used in the previous statistical model, in the present model we use a new approximation to the excitation back transfer accounting for the variance of the population of the first excited level. Furthermore, the range of validity of the results for the upconversion coefficient, calculated from the simplified statistical model of [5], is evaluated. We find that the maximum deviation is less than 13% for normalised concentrations of erbium ions γ ≤ 1. FIG. 1 Erbium ion transition diagram. Upconversion process on the metastable (4I13/2) level: the donor ion is deactivated whereas the acceptor is excited to the 4I9/2 level. Relaxation of the 4I9/2 level: non-radiative phonon-assisted relaxation, and radiative relaxation from the second excited level (4I11/2), result in 980 nm fluorescence. Radiative relaxation from the metastable level causes 1550 nm fluorescence.
3,446.8
2007-08-21T00:00:00.000
[ "Physics" ]
A systematic mapping study on automated analysis of privacy policies A privacy policy describes the operations an organization carries out on its users’ personal data and how it applies data protection principles. The automated analysis of privacy policies is a multidisciplinary research topic producing a growing but scattered body of knowledge. We address this gap by conducting a systematic mapping study which provides an overview of the field, identifies research opportunities, and suggests future research lines. Our study analyzed 39 papers from the 1097 publications found on the topic, to find what information can be automatically extracted from policies presented as textual documents, what this information is applied to, and what analysis techniques are being used. We observe that the techniques found can identify individual pieces of information from the policies with good results. However, further advances are needed to put them in context and provide valuable insight to end-users, organizations dealing with data protection laws and data protection authorities. Introduction A privacy policy, also known as a privacy notice, is a statement through which an organization informs its users about the operations on their personal data (e.g. collection, transfer) and how it applies data protection principles. The mandatory contents of a given privacy policy depend on the applicable privacy law. For example, in the European Economic Area (EEA), the General Data Protection Regulation (GDPR) Articles 12-14 set the requirements on the information to be provided to EEA citizens whenever an organization wishes to process their personal data. In China, the new Personal Information Protection Law (PIPL) sets similar requirements. In the US, the requirements vary according to the specific circumstances. For example, the Children's Online Privacy Protection Act (COPPA) sets requirements when child data are processed, the Health Insurance Portability and Accountability Act (HIPAA) sets requirements when health data are processed, or the California Consumer Privacy Act (CCPA) sets requirements when California's residents personal data are processed. Most countries have similar legislation in place mandating organizations to inform their users about their personal data practices in clear and plain language so that they can understand the privacy concerns. Privacy policies are typically presented as textual documents [1]. Their automated analysis is becoming a pressing need for different stakeholders. Global organizations need to know whether their policies comply with the varied local privacy laws where they offer their products and services. Supervising authorities overseeing privacy laws require automated means to cope with the myriad of privacy policies disclosing the practices of online systems processing personal data (e.g. websites, smart devices). Users demand new ways of understanding the verbose and complex legal texts they are confronted with e.g., when browsing the web or installing a new fancy app. The automated analysis of written privacy policies is a multidisciplinary problem involving legal (i.e. privacy and data protection legislation) and technical (e.g. natural language processing) domains. Research efforts are scattered across several research communities, resulting in a growing body of knowledge presented at different symposia, conference tracks, and publications. 
To the best of our knowledge, the state of the art still lacks an overview of techniques that can support the different stakeholders in automatically analyzing privacy policies presented as textual documents. To fill this gap, this paper presents the first overview of the different techniques used to analyze privacy policy texts automatically, obtained through a systematic mapping study. It also identifies the concrete information obtained from the policies, and the goals pursued with this analysis. Finally, it discusses the most promising future research lines found. Background This section contains a summary of the main aspects covered in the research. The content and readability of privacy policies Although different privacy and data protection laws set out different requirements on policies, they also mandate some common contents to be found in any privacy policy. One of the salient contributions to the automated analysis of privacy policies is that proposed by Wilson et al. [2], which identified a set of privacy practices usually disclosed in privacy policies. We have leveraged the Wilson et al. scheme to understand what contents can be expected in a privacy policy ( Table 1). The main purpose of privacy policies is to inform users so that they can understand the privacy risks faced. However, while privacy policies may disclose detailed information on the privacy practices carried out, studies have demonstrated that users generally do not understand them [5]. Thus, policy readability is also of the utmost importance. For example, the European GDPR Recital 58 [6] requires that "any information addressed to the public or to the data subject be concise, easily accessible and easy to understand, and that clear and plain language [...] be used". Text readability depends on its contents e.g., vocabulary and syntax, as well as its presentation e.g., font type or font size [5]. Different metrics have been devised to predict text readability, such as word/sentence length, percentage of difficult words or legibility, among others. They are introduced in readability formulas that provide a score or level that predict the overall readability of a given text. Some examples of well-known readability scores are the Flesch-Kincaid Grade Level score, the Gunning-Fog score, the Coleman-Liau Index, the SMOG Index, or the Automated Readability Index. Other aspects can influence readability, such as inconsistent or vague texts. The content and readability metrics extracted from privacy policies can be applied to different goals such as assessing compliance with some laws (Law compliance), checking that the statements of the privacy policy are coherent with the behavior of the system under analysis (System check), informing users of the system of some practices declared in the privacy text (User information), or gathering new knowledge to inform further research (Researcher insight). Natural language processing Privacy policies are typically written texts and, as such, can be automatically analyzed using natural language processing (NLP) techniques. There are two main approaches in implementing NLP systems [7]: 1. Symbolic NLP (also known as "classic") is based on human-developed grammar rules and lexicons to process text and model natural language. 2. Statistical NLP (also known as "empirical") applies mathematical techniques using actual datasets (text corpora) to develop generalized models of linguistic phenomena. 
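The readability formulas mentioned above can be computed directly on policy text; below is a minimal sketch assuming the third-party textstat package, with an invented policy excerpt for illustration.

```python
# pip install textstat
import textstat

policy_excerpt = (
    "We may share your personal information with trusted third parties "
    "for the purposes described in this policy, unless you opt out."
)

# Each formula predicts a (US) school grade level or a readability score
print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(policy_excerpt))
print("Gunning fog:", textstat.gunning_fog(policy_excerpt))
print("SMOG index:", textstat.smog_index(policy_excerpt))
print("Coleman-Liau index:", textstat.coleman_liau_index(policy_excerpt))
print("Automated readability index:",
      textstat.automated_readability_index(policy_excerpt))
```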
Table 1 Contents usually disclosed in privacy policies (fragment):
Policy change: Details on how changes to the privacy policy will be communicated to the data subjects.
Children: Informational aspects related to the processing of children's personal data. Children are considered vulnerable individuals and, as such, processing their personal data usually requires further information, e.g. how parents can exercise control and limit the information collected.
Cookies: A cookie is a small text file stored on the user's device by a website owner (first-party cookie) or other external services (third-party cookie) when users visit a website. Cookies have become a serious privacy threat [3], and under different legislations, websites are required to inform their users about who stores the data and the types of data they store, together with the purpose.
Do Not Track: Do Not Track (DNT) is a World Wide Web Consortium (W3C) Recommendation [4] for an HTTP header to be sent by users' devices to signal to websites that they do not want to be tracked, e.g. by placing cookies on their devices. Websites were expected to inform their users whether they respond to the DNT request.
Other: Privacy-related information not covered by the previous categories. For example, the GDPR mandates organizations sending personal data out of the EEA (international data transfers) to inform their data subjects of the privacy policy.
A text preprocessing stage is usually required in any NLP pipeline to transform text from human language to some more convenient format for further processing. Text preprocessing is done before applying symbolic and statistical approaches. The typical steps in text preprocessing are: 1. Tokenization, which is the process of chopping input text into small pieces (called tokens). 2. Stop words removal, which consists of eliminating terms that do not add relevant meaning (e.g. "the", "a" or "an" in English). 3. Normalization, which is the process of generating the root form of the words. There are several types of normalization, such as stemming (i.e., transforming related words without knowledge of the context) and lemmatization (i.e., normalization considering the morphological analysis of the sentences). Traditionally, symbolic NLP is broken down into several levels, namely morphological, lexical, syntactic, semantic, discourse and pragmatic. The morphological analysis deals with morphemes, which are the smallest units of meaning within words. The lexical analysis studies individual words as regards their meaning and part of speech. The syntactic analysis studies words grouped as sentences. The semantic level is used to capture the meaning of a sentence. Ontologies are closely connected to semantic analysis, as they model domain knowledge and support reasoning over natural language. The discourse level is concerned with how sentences are related to each other. Finally, the pragmatic level deals with the context (external to the input text). The Statistical NLP approaches typically use Machine Learning (ML) algorithms to develop generalized models of some linguistic phenomena. These algorithms are usually classified into two groups depending on the type of learning on which they are based: supervised learning and unsupervised learning. The term supervised learning is applied to algorithms that need a labeled dataset as input so they can learn a specific characteristic of the text that they will have to predict. Some examples of supervised algorithms are Random Forest, Naive Bayes, Support Vector Machines (SVMs), Decision Trees, or K-nearest neighbors (kNN).
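As an illustration of such a pipeline, the sketch below tokenizes, removes stop words, and lemmatizes policy sentences with NLTK, then trains a supervised classifier on them; the tiny labeled corpus is invented for the example, and a real system would train on an annotated corpus such as the OPP-115 dataset of Wilson et al. [2].

```python
# pip install nltk scikit-learn
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

for pkg in ("punkt", "punkt_tab", "stopwords", "wordnet"):
    nltk.download(pkg, quiet=True)

stop = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(sentence):
    # tokenize, drop stop words and punctuation, lemmatize
    tokens = [t.lower() for t in word_tokenize(sentence) if t.isalpha()]
    return " ".join(lemmatizer.lemmatize(t) for t in tokens if t not in stop)

# Invented mini-corpus: sentences labeled with a privacy practice
sentences = [
    "We collect your email address and location data.",
    "Your data may be shared with third-party advertisers.",
    "We retain your information for two years.",
    "Personal data is disclosed to our business partners.",
]
labels = ["collection", "sharing", "retention", "sharing"]

vec = TfidfVectorizer()
X = vec.fit_transform([preprocess(s) for s in sentences])
clf = LinearSVC().fit(X, labels)
print(clf.predict(vec.transform([preprocess("We sell data to partners.")])))
```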
Instead, unsupervised algorithms do not need the input data to be labeled, since their objective is to find hidden patterns in the data in order to understand and organize it. An example of an unsupervised algorithm is K-Means, used for clustering tasks. In recent times, Artificial Neural Networks (ANNs) have been applied as another ML approach to generate prediction models for NLP [8]. Knowledge is spread across the ANN, and the connectivity between units (called neurons or perceptrons) reflects their structural relationship. The application of statistical approaches requires, first, converting natural language (i.e. text) into a mathematical data structure (i.e. numbers) to be used as input to the ML algorithm. This process is commonly known as text data vectorization. Second, a prediction model is created using some training data. After a model is built (or "trained"), it should be evaluated, i.e., its ability to generalize should be measured (in other words, its ability to make accurate predictions on new, unseen data with the same characteristics as the training set). Several metrics are used to measure the performance of the model, such as precision, recall, F1-score, or accuracy. Related work To the best of our knowledge, there are currently no systematic mapping studies, surveys, or reviews that fall into the intersection of the two domains specified in the scope of this study, those being privacy policies and text analysis techniques. Basically, the secondary studies found, which can be considered as related work, can be classified into two groups: those related to text analysis techniques applied to a specific area of knowledge and those related to the analysis of privacy and related aspects. Nevertheless, we have found two reviews touching on privacy policies and text analysis techniques. The following paragraphs describe these related works in more detail. On the borders of our related work, we can find many reviews covering text analysis techniques, mainly NLP techniques, applied to specific areas of knowledge. We have gathered a few of the most relevant. Opinion mining systems have been a very active research area in recent years in the field of NLP techniques. In their review, Sun et al. [9] present an overview of all the approaches in this field and the challenges and open problems related to opinion mining. NLP is also widely used in the healthcare sector, and one very interesting application is the generation of structured information from unstructured clinical free text. A systematic review of the advances in this sector has been carried out by Kreimeyer et al. [10]. A completely different field is covered by Nazir et al. [11]. Text analysis techniques are likewise applied in software requirements engineering in order to achieve goals such as requirement prioritization and classification, and this systematic literature review gathers the main contributions. Finally, Kang et al. carried out a literature review [12] into NLP techniques applied to management research. As regards the second domain of our research, many reviews have been published concerning privacy aspects, and some of them make references to privacy policies. That is the case of the systematic mapping study published by Guaman et al. [13]. There is only one paper in common between their research and ours (ID848), since one of their paper assessment criteria specifically excludes papers exclusively assessing privacy policies (as their focus was on the privacy assessment of information systems).
Nevertheless, they report a great number of articles that use the privacy policy text to check compliance, albeit manually, which was an exclusion criterion in our case, as we are looking for automated means. As in our research, they highlight that the most studied privacy law is the GDPR. Another aspect closely related to privacy is transparency, and Murmann et al. conducted a survey into the available tools for achieving transparency [14]. They also take the GDPR as a point of reference for the definition of transparency and divide it into two types: ex ante transparency, which informs about the intended actions in privacy policies, and ex post transparency, which provides insights into what practices have been carried out. Since their work is more focused on ex post transparency, there are no common articles between their research and ours. A different approach is followed by Becher et al. [15], who present a broad literature review about Privacy Enhancing Technologies (PETs). They include tools that allow users to perform personal privacy policy negotiation, involving the representation of the privacy policy and its personalization. Just one of our papers gathers these two characteristics (ID28), and so it is included in their study. A study closer to ours is the review presented by Kirrane et al. [16]. They analyzed the Semantic Web research domain applied to privacy, security and/or policies. Around 40% of their analyzed papers were related to privacy policies, and they found that the Semantic Web was being used for two purposes with regard to privacy policies: policy communication, in order to help producers write policies, and policy interpretation, to help users understand privacy policies. This latter purpose is the one most closely related to our work, and one of the papers of our research (ID30) is included in this group. Finally, Morel and Pardo [1] studied the different means of expression of privacy policies, namely textual, graphical and machine-readable. They analyzed the information each policy type usually discloses, the tools supporting authoring and analysis, and the benefits and limitations. However, they only report seven analysis techniques for textual policies, while we found 39, including three papers they also found (ID28, ID62 and ID72). Considering all this, our review differs from all the available surveys and reviews, since no one before has focused their attention on the existing techniques to analyze privacy policies automatically. We believe that our research is necessary, since privacy compliance is becoming more and more important nowadays, and automating this task is the only way to start making high-quality assessments of privacy compliance at scale. Methodology A mapping study is a systematic approach to provide an overview of a research area of interest by showing quantitative evidence to identify trends [17]. We have organized our research in three stages: 1. Planning. In this stage, we defined the scope of the research, the main goal, and the Research Questions (RQs). We also formulated the search strategy, the inclusion and exclusion criteria and procedure, and finally the classification scheme and procedure. 2. Conducting. The objective of this phase was to answer the RQs. With this purpose in mind, we carried out the paper search, filtered the results based on our defined criteria, and classified the remaining papers using the classification scheme. 3. Reporting.
We analyzed the results to answer the RQs and discussed interesting trends and gaps discovered during the research process. Scope and research questions The scope of this research is the intersection between two topics: (1) privacy policies and (2) text analysis techniques. Within this scope, our overall goal is (i) to identify the techniques used to analyze privacy policy texts and (ii) to identify what information is retrieved from the privacy policies. These objectives have been divided into three specific RQs. RQ1: What information is obtained from the privacy policies? RQ2: What is the purpose of the policy analysis? RQ3: What techniques have been used to analyze privacy policy texts? Paper search strategy We used the Scopus and Web of Science (WoS) databases to find high-quality peer-reviewed literature. Scopus indexes the most important digital libraries such as IEEE Xplore, Springer Link, Elsevier, Science Direct, or ACM. WoS complements the Scopus database by indexing other journals and conference papers [18].
Table 2 Inclusion and exclusion criteria
Publication year:   2021  2020  2019  2018  2017  2016  2015  2014  <2013
Minimum citations:  0     0     2     3     4     5     5     5     6
We created a search string using terms related to our two topics, privacy policies and text analysis. We used the IEEE Thesaurus to find these terms. To obtain a wider search string, we simply used the stems of these terms in the search string and used the proximity operator ('W/3' in Scopus, 'NEAR/3' in WoS). The resulting strings for each database were these: Scopus: ( ( ( privacy OR "data protection" ) W/3 ( text* OR polic* OR statement* OR term* OR condition* OR notice* ) ) W/3 ( analy* OR process* OR min* OR recogni* OR learn* OR classif* ) ) Web of Science: ( ( ( privacy OR "data protection" ) NEAR/3 ( text* OR polic* OR statement* OR term* OR condition* OR notice* ) ) NEAR/3 ( analy* OR process* OR min* OR recogni* OR learn* OR classif* ) ) To validate the completeness of the set of papers obtained, a senior privacy researcher selected 10 papers to create a test set that should be taken into consideration in the research. In every iteration of the search string definition process, we manually checked how many of them were included. We carried out the final search in these databases, searching on title, abstract and keywords. To mitigate the threat to validity of missing relevant papers, after filtering the results of the database search, we carried out snowballing [19] with the selected papers. This technique consists of analyzing the papers cited by the selected ones (backward snowballing), and those citing the selected ones (forward snowballing). Inclusion and exclusion procedure We conducted an inclusion and exclusion procedure to filter out papers. This procedure consists of an automated filter followed by a manual one.
Automated inclusion-exclusion All the following inclusion criteria must be met for a paper to pass to the manual filter. Thus, we exclude short papers, i.e., heuristically, papers with fewer than 5 double-column pages or 8 single-column pages.
Table 3 Inclusion and exclusion criteria
Inclusion criteria:
- The paper is a primary contribution
- The paper describes a text analysis technique
- The technique described is not completely manual
- The technique described is applied to texts describing privacy aspects in software systems
Exclusion criteria:
- The paper reports a secondary study
- The paper does not report a text analysis technique
- The paper only reports manual techniques for analyzing texts
- The text analysis techniques are not applied to texts related to privacy policies in software systems (e.g., applied to privacy laws)
- The paper only reports the generation of a dataset of annotated texts
- The paper only reports a technique to analyze text but does not apply it to any privacy aspect
- The paper only reports a privacy policy model
- The paper only sets requirements for a privacy policy but does not analyze existing texts
- The paper only reports a technique for text processing, but it is not applied to privacy texts
- The paper only reports the use of existing metrics or scores to assess some text aspects such as readability or legibility
- The paper only reports a tool or a technique to analyze machine-oriented privacy documents
Manual inclusion-exclusion In the manual stage, two screening phases were carried out: a title and abstract screening followed by a full-text screening, both performed through CADIMA (https://www.cadima.info/). We followed the decision tree shown in Fig. 1. The list of inclusion-exclusion criteria used to evaluate the papers is included in Table 3. All criteria must hold for inclusion, but if any exclusion criterion holds then the paper is excluded. Each paper was reviewed by two researchers and inconsistencies were resolved in daily meetings with all the team members. At the beginning of each stage, a pilot, divided into iterations, was conducted to align the criteria of all the researchers. In each iteration, five papers were analyzed by all of the team members and Krippendorff's alpha inter-coder reliability coefficient [20] was used to calculate the inter-rater agreement. To finish a pilot, the agreement coefficients had to be above the 'good' agreement threshold (0.8). Figure 2 shows the numbers of papers considered in each step, distinguishing the ones retrieved from the database search (solid line) from the ones retrieved through snowballing (dotted line). The list of the 39 selected papers can be found in "Appendix A". Classification scheme and procedure We created a classification scheme (Fig. 3) based on our two research areas to obtain all the relevant information to answer our RQs. Before starting the classification stage, a pilot was performed by all the team members to align the coding criteria and to clear up any possible doubts about the categories in the scheme. Once again, Krippendorff's coefficient was used to measure the level of agreement between researchers. Once the coefficient was above 0.8 ('good' agreement) in every category, we moved on to the classification. Each paper was classified by two researchers. A daily meeting was held to check inconsistencies between coders and to reach agreement across the whole team.
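For reference, inter-coder agreement of this kind can be computed with the third-party krippendorff Python package; the two coders' labels below are invented, and np.nan marks papers one coder did not rate.

```python
# pip install krippendorff
import numpy as np
import krippendorff

# Rows are coders, columns are the papers being screened;
# values are the assigned categories (np.nan = not coded).
reliability_data = np.array([
    [1, 2, 2, 1, 3, 2, np.nan, 1],
    [1, 2, 2, 1, 3, 1, 2,      1],
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")  # a pilot passes if above 0.8
```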
When the classification was over, Krippendorff's coefficient was calculated for all the papers, and in all categories the value was above 0.8, which is the recommended value. RQ1-What information is obtained from privacy policies? RQ1 seeks to provide insight into what information has been automatically extracted from privacy policies by previous research. To this end, we focus on the policy contents and text readability. Policy contents Most (nearly 90%) of the papers we have found identified specific content in the privacy policies (Table 4). In this table, the 'Other' group papers focus on contents related to specific privacy laws, i.e., the CCPA (ID81) and the GDPR (ID136). Remarkably, ID136 provides a GDPR conceptual model and a set of classifiers for identifying all these concepts in policy texts, including, for instance, information about automated decision making. Policy readability Only 15% of the papers analyzed focused on how the policy was written. We intentionally excluded papers that only report the use of existing readability scores, as their value lies in the conclusions extracted from the application of the score to a given set of policies rather than in the novelty of the technique. However, we found novel approaches that might become useful for improving a privacy policy's readability by detecting vagueness and inconsistencies. Vagueness introduces ambiguous language that undermines the purpose and value of privacy policies as transparency elements. The system described in paper ID996 (2016) aims to detect whether the words in a privacy policy are vague. Paper ID815 (2018) advances these results by classifying sentences in a privacy policy into different levels of vagueness, resulting in a more complete analysis. RQ2-What is the purpose of the policy analysis? The language in privacy policies is complex and verbose, and most users do not understand it [5]. A set of papers seek to improve users' understanding of privacy policies, following three different paths: extracting and presenting specific information, summarizing different aspects, and answering user-posed questions with the contents available in the policy. Some works focus on a given privacy aspect, and extract and present the related information to the user; e.g. ID796 presents opt-out choices given in privacy policies to the users in their web browsers. Summaries address more than one aspect, and usually take the form of a set of fixed answers (e.g. yes, no, unknown) to predetermined questions (e.g. whether the system collects personal data). ID805 is remarkable as it provides human-like summaries at different compression ratios by applying risk- or coverage-focused content selection mechanisms. ID28 and ID30 further advance these works by supporting free-form queries that are resolved to specific policy text snippets. However, while the latter requires annotated policies to reason over, the former works over previously unseen policies. Finally, most of the articles do not address any specific stakeholder, but provide new valuable techniques for other researchers to leverage. Here we find data extraction techniques focusing on e.g.
data types (ID30), data practices (ID62, ID993) and other policy aspects (ID10, ID33, ID60, ID62, ID64, ID65). Also, a set of contributions focus on characterizing policies as inconsistent (ID175, ID763), vague (ID815, ID996), or able to answer a specific question (ID810). Still, none of them explicitly apply their results to assess the policy or system under research, or to nudge users about privacy aspects, and thus they were not included in the other categories.

Table 6 NLP techniques applied by the papers analyzed

Symbolic
- Morphological and lexical analysis: ID72, ID136, ID773
- Syntactic and semantic analysis: ID48, ID55, ID65, ID110, ID200, ID763, ID848, ID886, ID983, ID1044
- Ontology reasoning: ID30, ID33, ID60, ID175, ID804, ID989

Statistical
- Supervised: ID10, ID17, ID19, ID59, ID62, ID64, ID72*, ID81, ID136*, ID770

Table 6 shows the two broad categories of NLP techniques used to analyze privacy policy texts reported by the papers analyzed, namely symbolic and statistical. There are contributions that combine techniques from both categories, either as a pipeline, where the outcome of one technique is the input of the other (ID136), or in parallel (ID72, ID886), combining their outputs to obtain the final result.

Symbolic NLP approaches

As detailed in Table 6, contributions apply symbolic NLP at three different levels: morphological and lexical analysis of the words, syntactic and semantic analysis, and the use of ontologies to extract the meaning of sentences.

The first levels of NLP (morphological and lexical analysis) yield results similar to more complex techniques when targeting certain privacy practices. There are privacy practices (e.g., those related to encryption) that often use very specific distinctive terms (e.g., SSL). In such cases, a basic keyword-based analysis performs best (e.g., see ID72).

Most symbolic NLP techniques use some form of semantic analysis. The typical procedure followed in these cases consists of five phases: (1) splitting the policy into sentences, (2) parsing the words, e.g. through Part-of-Speech (PoS) tagging, (3) eliciting syntactic patterns related to a privacy practice, such as collection or disclosure, (4) detecting these patterns, and (5) deriving semantic meaning from them; a minimal sketch of this kind of pattern matching is given at the end of this subsection. The main differences between the authors are the proprietary tools or techniques implemented to carry out these tasks and the lexicons or taxonomies used.

We would like to highlight some of the most useful tools found in the study. The tool most used for carrying out syntactic analysis is the Stanford dependency parser, which is available in five different languages. One of the most critical stages is the creation of semantic patterns, which many authors create manually based on collections of privacy policies or on taxonomies such as that created by Anton et al. [21]. By contrast, the authors of paper ID983 use a bootstrapping mechanism from Slankas et al. [22] to automatically find patterns in privacy policies starting from a simple seed pattern. This process allows them to generate more inclusive patterns. Finally, some papers report the use of specific languages to annotate the text and find the patterns. This is the case of paper ID55, which uses Eddy [23], and paper ID1044, which uses Jape [24].
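As an illustration of phases (2)-(4) above, the following minimal sketch uses the open-source spaCy library to detect a toy 'collection' pattern; the pattern and the example sentence are our own invented stand-ins, not taken from any of the surveyed papers.

```python
# Minimal sketch of PoS-based pattern detection with spaCy (phases 2-4);
# the pattern and the example text are invented for illustration.
# Assumes the small English model has been downloaded
# (python -m spacy download en_core_web_sm).
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# A toy "collection practice" pattern: a collection verb, an optional
# determiner/pronoun, then one or more nouns (the information type).
pattern = [
    {"LEMMA": {"IN": ["collect", "gather", "obtain"]}, "POS": "VERB"},
    {"POS": {"IN": ["DET", "PRON"]}, "OP": "?"},  # e.g. "your"
    {"POS": "NOUN", "OP": "+"},                   # e.g. "email address"
]
matcher.add("COLLECTION_PRACTICE", [pattern])

doc = nlp("We may collect your email address and location data.")
for _, start, end in matcher(doc):
    print("match:", doc[start:end].text)
```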
In our research, one in three papers using symbolic NLP techniques report the use of ontologies. Once a privacy policy text is represented as an ontology, information can be automatically extracted with query languages such as SPARQL. The most challenging part of the use of ontologies is the definition and creation of the ontology itself. In most cases (50%), the creation of the ontology is a manual process conducted by a group of experts who annotate the privacy policy texts. Although this is the first step, what is really interesting is the automatic creation of the ontologies, which would allow researchers to analyze large amounts of policies without human intervention. Some examples of this are papers ID33, ID60, and ID175, each of which uses a different approach. The authors of paper ID33 use Tregex patterns to detect information types automatically. In the case of ID60, they use semantic rules to extract relationships between information types. Finally, the authors of paper ID175 have created a method to "capture both positive and negative statements of data collection and sharing practices" based on Hearst patterns. This paper is relevant due to its ability to detect negative statements, in comparison with other more limited approaches like those used by Zimmeck et al. in ID885 and by Yu et al. in ID983.

Statistical NLP approaches

As Table 6 shows, contributions based on statistical NLP use supervised (60%), unsupervised (7%), or ANN-based (26%) techniques, or a combination of them (7%). While supervised techniques are the most used, ANNs have been gaining strength since their appearance in this area around 2016. Supervised algorithms are primarily used for classification tasks, such as detecting which personal data is collected (a minimal sketch of this approach is given at the end of this subsection), while unsupervised algorithms are used for clustering tasks such as topic modeling.

As for the supervised algorithms, geometric algorithms such as Support Vector Machines (SVM) (ID59, ID62, ID64) and Logistic Regression (LR) (ID886, ID978, ID993) are mainly applied. Decision-tree-based models are also used, such as the Decision Tree (DT) (ID81) and the ensembles Random Forest (RF) (ID770, ID783) and AdaBoost (ID783), which tends to outperform the results of DT.

Unsupervised learning techniques apply Hidden Markov Models (HMM) to group segments of policies based on the privacy topic they address (ID1018) and a Latent Dirichlet Allocation (LDA) algorithm to determine the underlying topics in privacy policies (ID1045). Although one of the most interesting attributes of these approaches is that they do not require a labeled corpus, it is important to highlight that in both cases the authors had to create a labeled dataset to evaluate the accuracy of their models.

ANN-based techniques have been applied to tasks such as text classification (ID10), answerability prediction (ID810), and vagueness identification (ID815). We have found papers using different kinds of neural networks: Convolutional Neural Networks (CNN) (ID28, ID805), Recurrent Neural Networks (RNN) (ID996) and Google's BERT (Bidirectional Encoder Representations from Transformers) (ID796, ID810) are the most used. Works comparing ANN and classical supervised learning techniques (ID10, ID810) report better performance for the ANN-based predictions. Another important aspect of ANNs is the technique used to represent each word so that it can be used by the neural network. The analyzed papers have used different techniques to create this word representation: fastText (ID28), Word2Vec (ID810), GloVe (ID815, ID996) and ELMo (ID805). These tools can create a word representation from a pre-trained model or from a specific model trained for the occasion. Authors seem to agree that training the model on a related corpus improves the results.
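The following minimal sketch illustrates the supervised classification approach referenced above, using scikit-learn with TF-IDF features; the tiny training set and labels are invented placeholders, not data from the surveyed papers.

```python
# Minimal sketch of supervised sentence classification with scikit-learn;
# the sentences and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "We collect your email address when you register.",
    "Your data may be shared with advertising partners.",
    "You can delete your account at any time.",
    "We gather device identifiers and location data.",
]
labels = ["collection", "sharing", "other", "collection"]

# TF-IDF features feeding a linear SVM, in the spirit of the SVM-based papers.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(sentences, labels)

print(model.predict(["We may collect your phone number."]))
# A real study would train on thousands of annotated policy segments
# and evaluate with cross-validation (precision, recall, F1).
```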
Annotated privacy policy datasets

Learning algorithms require annotated datasets for training and validation. The creation of such a dataset is a time-consuming task. On the other hand, these datasets are of paramount importance, since the results and performance of the final model depend on the quality and completeness of this data. Table 7 collates the information on all the public datasets of annotated privacy policies found in our research.

Discussion

The analysis of privacy policies seems to be a promising area of research. The first work we found was published in 2005, and we have identified a growing interest, especially in the last five years, in which 72% of the papers were published (Fig. 4). This increasing interest might have been boosted by the adoption of the European GDPR in 2016, which is aligned with a stepped increase in publications. Indeed, we found that all papers explicitly focusing on law compliance have been published since 2016, and two-thirds of them target the GDPR. This evidence is aligned with the findings of previous work highlighting that the GDPR has inspired different privacy legislations worldwide [25] and has had the greatest impact on privacy assessment research [13].

Overall, the identification of policy contents shows good performance. Our findings reveal a preference for classical ML techniques (i.e. non-ANN-based) for analyzing policy texts (Table 6), yet the use of ANN-based techniques is quite recent (Fig. 5). Researchers usually train different classifiers and compare their results, selecting the one that demonstrates the best performance for the problem at hand. However, some of the papers applied both approaches and compared them (ID10, ID810). The authors of ID10 highlight that ANN-based models favor precision while other models favor recall; this may be taken into consideration according to the type of task at hand. The authors of ID810 use three different approaches for answerability prediction and answer sentence selection, namely SVM, CNN and BERT. Their results show that BERT achieves the best F1-score in both tasks. This suggests there is room for improvement in the research on privacy policy texts using ANN-based approaches (e.g., deep learning).

Research challenges

Named-entity recognition is needed to allow for fully automated analysis at scale. We found ample coverage of identifying the presence/absence of the different contents expected in a privacy policy (Table 4). However, oftentimes the techniques do not obtain the specific value of that content. For example, many researchers detect the presence of information on data retention time (which is useful in assessing policy completeness) but not the specific retention time (which would be useful for automatically assessing privacy risks due to excessive retention). Named-entity recognition would support automated workflows that first identify whether a given content is present and then extract and assess its specific value.

The analysis of specific policy contents still requires further research, particularly for contents mandated only by specific privacy laws. Online products and services are offered worldwide, and their policies must meet the requirements set by all the privacy legislations under which they are consumed. A clear example is the transfer of personal data to other countries or international organizations. While the CCPA does not restrict such transfers, the GDPR and PIPL set detailed requirements.
As a result, future work is needed to identify more specific policy contents, so that policy compliance can be automatically assessed against different applicable laws. To this end, the techniques must also be generalized to other languages, as all the contributions found focus on policies written in English.

Context must be considered to improve the privacy analysis. Many articles focus on identifying contents in isolation but do not investigate the relationships between them. For example, gathering a list of the personal information types being collected yields less utility than contextualizing an information flow that includes the personal information type, the associated data processing operation, the organization carrying it out, and the purpose for it. Future work is expected to contextualize data processing practices, particularly to improve users' awareness and understanding of privacy risks. Also, new contributions are needed to analyze not just one but several inter-related policies to detect inconsistencies among them, e.g. between a first-party policy and those of all its third parties collecting and processing data.

Integrating the results into tools for the benefit of different stakeholders. More than 50% of the papers found do not apply their results to support any specific stakeholder, describing that as future work. The contributions identified can support different stakeholders: e.g. end-users in gaining awareness and understanding through privacy scores and summaries, legal counsels in clarifying the legal texts provided by organizations processing personal data, developers in spotting potential non-compliance earlier in the development process, and app stores in improving their app vetting processes. However, future work is needed to increase the maturity of the techniques and integrate them into user-oriented tools.

Threats to validity

The main threat to the construct validity of a mapping study is that the research questions may not completely cover all of the aspects addressed by the studied publications. To deal with this threat, an annotation scheme was created by experts in the field based on known taxonomies and classifications, iteratively updated until it was able to capture all the essential aspects of the selected papers. Another threat to construct validity is bias at the encoding stage. Different actions were taken to avoid this threat. First, the classes and values included in the encoding scheme were discussed by all the team members until a common understanding of all the covered aspects was reached. Second, a pilot of the codification scheme was carried out with 10% of the publications, and Krippendorff's coefficient was measured to assess the agreement between coders. All the measured values were above the threshold of good agreement (0.8). Finally, two researchers analyzed and coded each paper, and their codifications were compared to catch errors. All team members discussed inconsistencies until an agreement was reached.

We addressed the threats to internal validity by identifying all the publications matching our criteria and creating as unbiased a process as possible. First, two different databases were selected, namely Scopus and Web of Science, as they complement each other by indexing different journals and conference papers [18]. Second, the search strings are based on known taxonomies and classifications such as the IEEE Thesaurus.
Furthermore, a group of ten articles identified by the experts as matching the criteria was used to assess the completeness of the search string, which evolved until it matched all of the selected articles. Third, a snowballing technique was performed to include all cited papers and all papers that cited them; this technique is particularly useful for expanding the coverage of a systematic mapping study [19]. This step ultimately ensures that related papers are reviewed even if they use other terms to refer to the main topics. Once all the papers were obtained from the databases, the inclusion and exclusion criteria were applied. The citation-count criterion was created considering the percentile distribution of papers in computer science, as reported by Thomson Reuters, a reliable source on publication relevance. The page-count criterion was established taking into consideration the characteristics of short papers, which normally do not include a validation section. The manual criteria were defined based on the definition of the scope; their formulation allows the researchers to specify which cases are included and which are not. Two researchers reviewed each publication to ensure that the bias of any one researcher did not affect the selection process. A pilot, divided into phases, was performed to assess the coders' agreement. In each stage, five papers were analyzed by all team members, and Krippendorff's coefficient was measured. The inclusion/exclusion phase started once the coefficient value was above the threshold of good agreement (0.8).

After the extraction of all the necessary information from the selected publications in the codification stage, the data were analyzed to obtain aggregate results and conclusions. One of the researchers cleaned and aggregated the data to present it to the rest of the group. The meaning of the results and the most relevant aspects were discussed by all the team members until an agreement was reached. Therefore, all the results and conclusions presented came from common agreement and not from individual opinions.

The external validity of this study is determined by its scope, the intersection between privacy policies and text analysis techniques. Articles that do not concern these two topics fall outside the generalization of the results, and the conclusions reached are not applicable to them. Accordingly, the conclusions do not apply to publications on the generation of privacy policies, publications on the analysis of other kinds of texts, or publications solely covering the manual creation of a dataset of labeled privacy policies.

Conclusion

This paper has identified, classified, and analyzed the existing approaches and techniques for analyzing privacy policies automatically. As a result, it provides an overview of the contents that can be automatically extracted from privacy policies and of the most promising analysis techniques for each task. These techniques have been applied to check a policy's compliance with applicable privacy laws and a system's compliance with its privacy policy, as well as to improve user awareness and understanding of privacy risks. Our future work will focus on the exhaustive compliance analysis of the transparency requirements mandated by privacy laws. To that end, we will leverage and combine privacy policy analysis techniques with system behavior analysis techniques, comparing both results to detect legal breaches.
9,508.8
2022-05-10T00:00:00.000
[ "Computer Science" ]
Dirac Particle in External Non-Abelian Gauge Field

The Green function of a Dirac particle in interaction with a non-Abelian SU(N) gauge field is exactly and analytically determined via the path integral formalism by using the so-called "global projection" approach. The essential steps in the calculation are the choice of a convenient gauge (the Lorentz gauge) and the introduction of two constraints, φ = kx (related to space) and the Grassmannian η = kψ (related to the Dirac matrices). Furthermore, it is shown that certain selected equations obtained during the integrations can also be derived classically.

Introduction

At present the path integral has become indispensable if we want to describe and explain some physical phenomena simply. The use of this tool has spread to all domains of physics, and this success is mainly due to the introduction of transformations which have allowed the basic problems of ordinary quantum mechanics, among others the hydrogen atom, to be solved. In nonrelativistic mechanics, we know that the propagator takes the form of a sum over all possible paths, where each path is assigned a weight depending on the classical action. With this standard form, the propagator has been determined for all soluble cases.

In the relativistic case, the insertion of spin (represented by matrices) and the bound on the relativistic velocity are the difficulties encountered in the path integral formulation, and a similar standard Feynman form is, in principle, no longer valid. However, from the viewpoint of calculation, the standard form seems more appropriate, especially when the aim is to determine the energies and wave functions of a system. Thus the path integral formulation of Fradkin and Gitman [1], where the propagator has an expression similar to the standard Feynman one, seems at present handy and easy to use when performing calculations. In this formulation, the motion is governed by a supersymmetric action, and the parameterization uses two types of variables for the description: (i) bosonic (commuting) variables for the external motion; (ii) fermionic (anticommuting or Grassmannian) variables for the internal motion.

This formalism has been tested on relativistic spin-1/2 particles described by the Dirac equation and moving under the action of fields with simple configurations, such as the Volkov plane wave with or without the presence of an anomaly [2,3], the wave which is nonclassical but quantized [4,5], and other configurations [6]. Indirectly, the Dirac equation is solved via the determination of the propagator by the path integral approach, in order to extract the energies and wave functions of the Dirac particles.

Another configuration, which is important in QCD and in other domains of physics, seemed useful for us to consider in the path integral formalism in this paper. This configuration is that of a non-Abelian gauge field, which has not been considered elsewhere in the literature, except for the path integral formulation that can only be found in the book [7]. It is obvious that additional difficulties related to the non-Abelian group generators arise in the path integral formalism, in addition to those relating to the spin of the particle. The control of these difficulties and their treatment in the formalism can be useful when more complex problems in field theories are considered.
In this paper, we propose an alternative solution for a Dirac particle in a non-Abelian SU(N) gauge field, with the field configuration chosen to be the same as that used in [8], which allowed the Dirac equation to be solved analytically. With this choice, it is shown that the propagator can also be obtained analytically through simple changes of variables.

The field in question has a particular configuration: (i) it is expanded on the basis of the generators $T^a$ of the group SU(N), $A_\mu = A_\mu^a T^a$, where the $A_\mu^a$ are the component fields and the $T^a$ satisfy the commutation and anticommutation relations and the normalization condition

$$[T^a, T^b] = i f^{abc}\, T^c, \qquad \{T^a, T^b\} = \frac{1}{N}\,\delta^{ab} + d^{abc}\, T^c, \qquad \operatorname{tr}(T^a T^b) = \frac{1}{2}\,\delta^{ab},$$

where $f^{abc}$ and $d^{abc}$ are the structure constants of the group SU(N) (antisymmetric and symmetric, respectively, under all permutations of the indices); a worked SU(2) check of these relations is given at the end of this section.

In addition, the field is chosen to be of Volkov type; that is, it obeys the following properties: (i) it is a function only of $\phi = kx$, where $k$ is the wave 4-vector with $k^2 = 0$; (ii) it satisfies the Lorentz gauge condition $k\cdot\dot A = 0$, where $\dot A = \partial A/\partial\phi$ denotes the derivative with respect to $\phi$.

In this paper, we use the so-called "global projection" of Alexandrou et al. [9]: we first determine the Green function $G$ (which contains superfluous states) and then, by projection, i.e., thanks to the relationship between the two Green functions $S$ and $G$, we deduce $S$ (in order to obtain the physical states).

In the present paper, it is suggested to show, after simple changes of variables, that the classical equations play an important role in the calculation of the propagator $S(x_b, x_a)$ for the particular form of field in question. In configuration space, $S(x_b, x_a)$ is formally a matrix element of the operator $\hat S$, which is built from the operators $\hat O^{\pm} = (\gamma P \pm m)$. Knowing how $\hat S$ and $\hat G$ are connected through these operators, the Alexandrou et al. [9] approach is used to calculate $G$ (without the matrix $\gamma^5$ of the path integral formulation of Fradkin and Gitman). It is easy to notice that the squared term appearing in this construction reduces, after a calculation on the light cone, to the expression given in [8].

The Green function to be determined is written in terms of the Hamiltonian $H$ which governs the motion of the particle. To construct the path integral form of $G$, the usual procedure is used: the interval is subdivided into $N$ equal subintervals of length $\Delta$, and then closure relations are inserted, with the appropriate scalar products for the change between position and momentum bases.

The continuous form of $G$ involves a T-product, introduced because of the ordering problem: the generators (operators) and the matrices do not commute. At this level, in order to eliminate the operator-valued generators $T^a$, the following procedure [7] is used: (i) the generators are expressed through a bilinear relationship in operators $\Gamma$; (ii) an anticommutation relation is imposed on the $\Gamma$, so that the $\Gamma$ become fermionic. The elimination of the operators then proceeds as follows: first, anticommuting (Grassmann) variables are introduced using an identity from [1]; then the T-product is removed by means of a corresponding formula. These two formulae are also applicable to the Dirac matrices: it suffices to replace the $\Gamma$ operators and their Grassmann partners by the Dirac matrices and the Grassmann variables $\Psi$.

Let us proceed to the practical calculation of $G$.
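As the worked example promised above, here is a quick SU(2) consistency check of the algebra, taking $T^a = \sigma^a/2$; this verification is our own illustration, not part of the original calculation.

```latex
% Worked SU(2) check of the SU(N) relations, taking T^a = \sigma^a/2
% and using \sigma^a \sigma^b = \delta^{ab} I + i\,\epsilon^{abc}\sigma^c:
\begin{align}
  [T^a, T^b] &= \tfrac{1}{4}\,[\sigma^a, \sigma^b]
              = \tfrac{i}{2}\,\epsilon^{abc}\,\sigma^c
              = i\,\epsilon^{abc}\,T^c
  &&\Rightarrow\; f^{abc} = \epsilon^{abc}, \\
  \{T^a, T^b\} &= \tfrac{1}{4}\,\{\sigma^a, \sigma^b\}
               = \tfrac{1}{2}\,\delta^{ab}\, I
  &&\Rightarrow\; d^{abc} = 0 \quad (N = 2), \\
  \operatorname{tr}(T^a T^b) &= \tfrac{1}{4}\,\operatorname{tr}(\sigma^a \sigma^b)
                             = \tfrac{1}{2}\,\delta^{ab}.
\end{align}
```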
Determination of the Green Function

As the four-potential depends only on $kx$, we set $\phi = kx$; the variable $\phi$ is then rendered independent of $x$ by introducing suitable delta-function identities into the path integral. Then, by making a shift of the integration variable, it becomes possible to integrate over the corresponding pairs of variables; the integrations over the positions can be easily performed and give $\delta(\dot p)$, the Dirac delta function expressing that the momentum is constant during the motion. Thus $G$ is reduced to an expression containing a function $M(\phi)$; the integrations over $\phi$ can then be performed, and this shows that the solution path of $\phi$, which is a straight line, contributes mainly to the calculation of $G$. It is useful to note that the potential is now a function of the variable $\phi$.

Similarly, by setting $\eta = k\psi$, the variables $\eta$ and $\psi$ become independent after an analogous identity, in which the new variables are fermionic, is introduced; the expression of $G$ changes accordingly. We then replace the integration over $\psi$ by one over odd velocities, such that there are no restrictions on the trajectories and the boundary conditions for $\psi$ are satisfied. The action thus appears as a quadratic form of the odd velocities, and with a suitable change of variables the Green function takes a form in which the constraint is replaced by its exponential representation. It then becomes possible to perform the integration over $\eta$. Again a Dirac delta function appears; that is, the path obeying a simple equation contributes mainly to the determination of $G$.

After rearrangement, a standard Gaussian form is obtained for the integral over the velocities. Carrying out this integration, and thanks to the properties of the Volkov field, the remaining integrations become simple and fix the corresponding variables at their classical values. There still remain integrations to be performed over the variables related to the generators of the group SU(N). Using a change of Grassmann variables, the expression of $G$ is transformed, and we may return to the two identities introduced above. After the Grassmann variables have been eliminated, the relationship between the generators and the operators $\Gamma$ can be reintroduced, and Green's function takes a form in which the T-product may be omitted, since there is no longer an ordering problem.

In order to reintroduce the Dirac matrices, the calculation of the derivatives is facilitated by a further identity. After differentiation, it can be noted that the resulting expression for Green's function is not symmetrical, and therefore it is not possible at this level to extract the wave functions. To render it symmetric, a supplementary relation is used; after some manipulations, the Green function becomes totally symmetric. Finally, a simple remaining integration leads to the final form, from which the (normalized) wave functions describing the motion of a Dirac particle in interaction with an external non-Abelian SU(N) gauge field are read off, together with the projectors onto positive- and negative-energy states. The wave function can then be written in the form given in [8].

Conclusion

In the present work, by using the formalism of Alexandrou et al., we have determined the wave function for a Dirac particle moving under the action of a non-Abelian field. Thanks to the introduction of the two constraints, $\phi = kx$ (related to space) and the Grassmannian $\eta = k\psi$ (related to the Dirac matrices), Dirac delta functions appeared; that is, the paths having simple equations were selected, and it is these paths that contributed to the determination of the propagator. From the analytical expression obtained for the propagator, the extracted wave functions are found to be the same as those obtained by direct resolution of the Dirac equation.
Finally, it is shown in the appendix that the selected paths, which played an important role in the determination of the propagator (Green function), can also be obtained from the equations of classical mechanics.
2,402.4
2014-01-22T00:00:00.000
[ "Physics" ]
Financial Accounting Measurement Model Based on Numerical Analysis of Rigid Normal Differential Equation and Rigid Functional Equation

The initial value problem of stiff functional differential equations often appears in many fields such as automatic control and economics, and its theoretical and algorithmic study is of unquestionable importance. This paper proposes an integration-process method of numerical analysis for the stiff functional equations arising in a financial accounting measurement model. The method provides a unified theoretical basis for the stability analysis of the solutions of functional differential equations encountered in integro-differential equations and in the fair value measurement model for investment real estate in financial accounting.

Introduction

At present, there are two subsequent measurement modes for investment real estate in China's current accounting standards: one is cost-model measurement, and the other is fair-value-model measurement. However, in recent years, Chinese real estate market prices have risen above historical cost [1]. Adopting a different measurement model therefore affects the related financial indicators, and some companies change the measurement model of their investment real estate to manage earnings and whitewash their performance, which has an impact on both the company itself and its investors.

We usually use differential equations to describe physical or chemical processes in practical problems in technical and engineering fields such as chemical reactions, biological models, fluid and molecular dynamics, electronic networks and automatic control. These problems often involve many processes that interact but vary greatly in speed. In mathematical language, this stiff phenomenon means that the differential equation itself contains both a fast-changing component that decays rapidly and a slow-changing component that changes very slowly. To ensure the stability of the numerical calculation and achieve a given accuracy, the step size must be kept relatively small for the fast-changing component. When the fast-changing component has decayed sufficiently, the problem enters the slowly varying stage [2]. At this stage, it takes a long time to approach the steady state, and therefore many more calculation steps are required. The increased computation leads to a continuous accumulation of round-off errors and ultimately affects the numerical stability and calculation accuracy. If the problems mentioned above are solved using traditional numerical integration methods for differential equations, difficulties are encountered and ideal numerical solutions cannot be obtained. However, the wide application of stiff differential equations in various practical fields makes the construction of efficient numerical algorithms urgent, and in recent years this has become a research hotspot in computational mathematics.
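As a small illustration of the stiffness issue described above (the step-size restriction on explicit methods), the following sketch compares an explicit and an implicit SciPy solver on a classically stiff linear test problem; the particular equation and tolerances are our own illustrative choices.

```python
# Illustrative comparison of an explicit vs. an implicit solver on a stiff
# linear ODE: y' = -1000*(y - cos t) - sin t has a fast component
# (decay rate 1000) riding on a slow solution close to cos t.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

y0, t_span = [1.0], (0.0, 10.0)

explicit = solve_ivp(f, t_span, y0, method="RK45", rtol=1e-6, atol=1e-9)
implicit = solve_ivp(f, t_span, y0, method="Radau", rtol=1e-6, atol=1e-9)

# The explicit method needs far more right-hand-side evaluations because
# stability (not accuracy) forces tiny steps even after the fast
# transient has decayed; the implicit Radau method does not.
print("RK45  evaluations:", explicit.nfev)
print("Radau evaluations:", implicit.nfev)
```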
The Runge-Kutta (RK) method is a classic single-step method for solving stiff problems. It has high accuracy and good stability; its disadvantage is that it requires a relatively large amount of computation. The more commonly used RK methods include the singly implicit RK method, the diagonally implicit RK method and the Rosenbrock method. The Rosenbrock method is a kind of semi-implicit RK method based on the diagonally implicit RK method. Compared with the fully implicit RK method, it does not need to solve nonlinear algebraic systems in the calculation process, which significantly reduces the computational workload [3]. Although RK methods have achieved much in the solution of stiff differential equations, high-precision numerical results are still scarce. Some scholars improve the algebraic accuracy of the Rosenbrock method by choosing appropriate values for the free parameters. However, the order conditions of the Rosenbrock method form a system of nonlinear equations in the parameters, which makes computing the parameters difficult.

The Chebyshev spectral method is a high-precision method: if the solution of the original problem is sufficiently smooth, the calculation accuracy can reach exponential-order convergence. The main idea of this method is to expand the unknown function in Chebyshev polynomials and approximate the derivatives of the unknown function in the equation through differentiation of the expansion. The original equations are thus transformed into linear algebraic equations satisfied by the Chebyshev expansion coefficients. In recent years, the Chebyshev spectral method has received more and more attention in many practical applications. Some scholars have combined Chebyshev polynomials with stepwise-regression statistical methods to effectively predict medium- and long-term precipitation in southwestern China. Others have described the tidal currents in the Daya Bay area with two-dimensional tidal wave equations, using Chebyshev polynomials as basis functions and distributed pseudospectral methods for numerical simulation. Some scholars have solved the wave equation using the second-kind Chebyshev wavelet method; numerical experiments have established that this method gives higher-precision numerical solutions than some classical wavelet methods.

On the other hand, the Tau method is a special Petrov-Galerkin method; unlike the traditional Galerkin method, it does not require the basis functions to satisfy the boundary conditions of the equation. Some scholars have used the Chebyshev-Tau and Chebyshev-Galerkin methods for high-precision numerical approximation of the boundary value problem of the two-dimensional Poisson equation, compared the convergence speeds of the two methods, and analysed the calculation errors. In addition, some scholars have proposed a Chebyshev-Tau matrix method to solve Poisson-type equations with variable coefficients [4]. It should be pointed out that these methods are based on a numerical differentiation process, which is very sensitive to small errors in the calculation, whereas numerical integration is far less sensitive to errors. Moreover, the condition number of the coefficient matrix obtained by differentiation-based Chebyshev spectral discretisation increases rapidly with the number of unknowns. Especially when solving large-scale and complex problems, the accumulation of round-off errors makes the spectral accuracy suffer a significant loss, and the computational performance is not ideal. Therefore, some scholars have proposed that these difficulties can be effectively overcome by replacing the differentiation process with a numerical integration process. Later, a pseudospectral method based on the integration matrix was proposed, together with a theoretical analysis of the improvement in the condition number.
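To make the exponential-order convergence claim above concrete, here is a small sketch using NumPy's Chebyshev utilities; the test function exp(t) and the chosen degrees are our own illustrative choices.

```python
# Spectral convergence of truncated Chebyshev expansions: for a smooth
# function the maximum error decays roughly exponentially in the degree N.
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 2001)
f = np.exp(t)

for N in (2, 4, 8, 12):
    coeffs = C.chebfit(t, f, N)  # least-squares Chebyshev fit of degree N
    err = np.max(np.abs(C.chebval(t, coeffs) - f))
    print(f"N = {N:2d}   max error = {err:.2e}")
```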
Inspired by the above research, this paper proposes a Chebyshev-Tau method based on the integration process. This method opens up a brand-new idea for the high-precision numerical calculation of stiff differential equations. It starts by expanding the highest-order derivative term in the equation in Chebyshev polynomials. Based on the indefinite integral formula for Chebyshev polynomials and its related operational properties, polynomial approximations of the lower-order derivative terms, down to the unknown function itself, are obtained through the integration process. Extensive numerical experiments show that the method proposed in this paper has certain advantages in solving stiff differential equations [5]. First, the integration constants introduced in the integration process allow the boundary conditions to be handled flexibly. Secondly, compared with the classical Chebyshev spectral method, the coefficient matrix obtained by discretising a one-dimensional problem is well conditioned: the condition number does not grow with the order of the polynomial expansion. In terms of accuracy, exponential convergence is achieved for both linear and nonlinear stiff differential equations. Compared with classical numerical integration methods for differential equations, it consumes less computation and obtains higher accuracy, and the method also shows good stability in long-time numerical calculations.

2 Stiff differential equations

Consider the following initial value problem:

$$u'(t) = f(u(t)), \quad t \in [a, b], \qquad u(a) = u_0,$$

where $u$ is the $m$-dimensional function vector defined on $[a, b]$ and $f : D \subseteq \mathbb{R}^m \to \mathbb{R}^m$ is a given, sufficiently smooth map; $\mathbb{R}^m$ denotes the $m$-dimensional Euclidean space.

3 Chebyshev-Tau method based on the integration process

Equation discretisation. Consider the numerical discretisation of a one-dimensional model problem with a variable-coefficient term $h(t)u(t)$ and a source term $g(t)$, of the form

$$u'(t) + h(t)\,u(t) = g(t).$$

Suppose the unknown functions $u(t)$ and $u'(t)$ have truncated Chebyshev polynomial expansions. If the function values at the nodes are given in advance, the expansion coefficients can be obtained by the fast Fourier transform [6], at a cost of $O(N \log N)$ operations. The derivation rests on the indefinite integral formula for Chebyshev polynomials (3),

$$\int T_n(t)\,\mathrm{d}t = \begin{cases} T_1(t), & n = 0, \\[2pt] \dfrac{T_0(t) + T_2(t)}{4}, & n = 1, \\[6pt] \dfrac{T_{n+1}(t)}{2(n+1)} - \dfrac{T_{n-1}(t)}{2(n-1)}, & n \ge 2, \end{cases}$$

up to integration constants. Writing $a = [a_0, \cdots, a_N]^T$ for the coefficient vector of $u'$, the relationship satisfied by $a$, $b$ and the nodal values $U$ can be expressed in matrix-vector form through the $(N+1) \times (N+2)$ order integration matrices $H_0, H_1$, together with the usual operational properties of matrices and vectors.

For the variable-coefficient term $V(t) = h(t)u(t)$ in the governing equation, the Chebyshev expansion of the product of two functions must be considered. Writing $v_j(t) = h(t)T_j(t)$ and using the corresponding product relations, the coefficient vector $V = [V_0, \cdots, V_N]^T$ can be expressed in terms of the expansion coefficients. With this, the discrete form of the governing equation (4) is obtained, where $g$ is the Chebyshev expansion coefficient vector of the source term $g(t)$. If there is a nonlinear term in the governing equation, the original problem can be transformed into a variable-coefficient equation for discretisation by setting up an iterative scheme [7].
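For illustration, the integration-matrix idea can be prototyped with NumPy's Chebyshev module as below; the helper and its dimensions are our own simplified stand-ins for the paper's $H_0$, $H_1$ matrices, whose exact definitions are not reproduced here.

```python
# Simplified sketch of an integration matrix in the Chebyshev basis:
# H maps the coefficients of u' (length n) to the coefficients of its
# antiderivative with zero integration constant (length n + 1).
# This is a stand-in for the paper's H_0, H_1, not their exact form.
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_integration_matrix(n):
    H = np.zeros((n + 1, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        H[:, j] = C.chebint(e)  # antiderivative of T_j, as coefficients
    return H

# Check on u'(t) = cos(t): integrating the coefficients of u' should
# recover sin(t) up to an integration constant.
t = np.linspace(-1, 1, 400)
a = C.chebfit(t, np.cos(t), 10)           # coefficients of u'
b = cheb_integration_matrix(a.size) @ a   # coefficients of the antiderivative
shift = np.sin(t[0]) - C.chebval(t[0], b) # fix the integration constant
print(np.max(np.abs(C.chebval(t, b) + shift - np.sin(t))))  # ~1e-12
```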
Boundary condition processing. We use Equation (7), in which $Q = l_- H_1$ and $l_- = [1, -1, 1, \cdots]_{1 \times (N+1)}$ (the row vector of the values $T_n(-1) = (-1)^n$), and substitute Equation (10) into Equation (9). Setting $L = H_0 + M_k H_1$, $L_1 = L(:, 1)$, $L_2 = L(:, 2{:}N{+}2)$, Equation (9) is finally transformed into a system of linear algebraic equations satisfied by the coefficient vector $a$:

$$(L_1 Q + L_2)\,a = g - L_1 u_0,$$

where $L_1 Q + L_2$ is a square matrix of order $N + 1$. From the above discrete process it can be seen that the integration constant introduced by the integration enables the boundary conditions to be handled flexibly.

4 Overview of investment real estate and subsequent measurement models

Overview of investment real estate. The accounting treatment of investment real estate measured subsequently under the cost model is similar to that of fixed assets and intangible assets, with monthly depreciation and amortisation. Most listed companies in China adopt the cost model. The cost model has strong practicability [8]: it can fully account for the various forms of consumption of the assets, but it is difficult to calculate depreciation and appreciation accurately, which makes the resulting data insufficiently faithful. Under the fair value model, no depreciation or amortisation is recognised.

Accounting treatment of the conversion of the subsequent measurement mode of investment real estate under the current standards

Handling of changes. A change in the subsequent measurement mode of investment real estate is treated as a change in accounting policy. When the measurement model changes, there will be a difference between fair value and book value; this difference is adjusted against retained earnings at the beginning of the period [9]. An enterprise may change the measurement model of its investment real estate provided the change complies with the provisions of the accounting standards on investment real estate.

Accounting treatment of investment real estate measured under the fair value model after the change. No depreciation, amortisation, or asset impairment provisions are accrued for investment real estate measured under the fair value model [10].

Impact on the balance sheet. Investment real estate is a non-current asset; therefore, the total assets of a company change as its book value changes. Figure 1 illustrates the fair value model. Over the past ten years, Chinese real estate has continued to appreciate. Under normal circumstances, the book value of investment real estate under the fair value model is much greater than the book value under the cost model [11]. Therefore, after a change in measurement model, the scale of the company's book assets increases substantially. In addition, when the measurement model is changed, there is a difference in the book value of the investment real estate between the two models, which affects the company's undistributed profits and other items. After Company A changed the subsequent measurement model of its investment real estate to the fair value model in 2015, the book value of its investment real estate increased significantly, as did its proportion of total assets, as shown in Table 1.

According to the data in Table 1, the book value of Company A's investment real estate rose from 2013 to 2015. In 2015, the book value under the cost model was RMB 270.9776 million [12]; the fair value figure was 5.28 times this amount, raising the share of investment real estate in total assets to 22.47%.
The real estate downturn in 2016 led to a decrease in fair value, and its share of total assets also declined. In 2017, the real estate market rebounded and the fair value increased; by 2018, investment real estate accounted for 31.65% of total assets. According to Table 2, when Company A changed the subsequent measurement model of its investment real estate in 2015, the difference between the book values under the two measurement models increased the company's other comprehensive income by RMB 875.6698 million. Since 2016, the company's other comprehensive income has been rising. On the whole, the fair value model has increased the deferred income tax liabilities associated with Company A's investment real estate, but undistributed profits and other comprehensive income have also increased substantially. It is therefore more favourable for the development of the enterprise than the cost measurement model.

Impact on the income statement. There is an overall upward trend, with fair value higher than book balance. When the gains from changes in fair value increase, the enterprise's profits increase accordingly, which enhances its profitability. According to the data in Table 3, depreciation, amortisation and other cost-related items are not accrued under the fair value measurement mode. Therefore, if housing prices continue to rise while the depreciation and amortisation of the cost model are avoided, the enterprise's profits increase substantially [13]. Since 2017, the gains and losses from changes in the fair value of investment real estate have been positive, and the book value has increased. By 2018, the proportion of gains and losses from changes in fair value in operating profit had increased to 6.56%.

Impact on financial indicators. The financial indicators of listed companies mainly cover three aspects: solvency, profitability, and operating capability. Based on the company's situation, this article mainly calculates and analyses the debt-to-asset ratio, return on net assets, net sales margin, and total asset turnover. According to the values in Table 4, the financial indicators of the enterprise improved significantly in the year Company A changed its investment real estate measurement model (2015) [14]. However, the improvement in the following years was relatively small. The asset-liability ratio dropped significantly at the time of the change in 2015 and fluctuated in 2016-2018. It can be seen that using the fair value model to measure investment real estate will not necessarily keep reducing a company's debt-to-asset ratio, as the fair value of real estate can also decline. Under the fair value measurement model, however, the company's book profits increase, and the rate of return on net assets rises.

To achieve high-precision numerical calculation of stiff differential equations, this paper has proposed an improved Chebyshev-Tau method. Based on the indefinite integral formula of Chebyshev polynomials, the numerical integration process replaces the differentiation process. This significantly improves the condition number of the discrete matrix, thereby effectively controlling the round-off error. From the data, it can be seen that the profits and debt capacity of real estate under fair value have increased significantly.
However, the current market environment in China is not yet mature enough for fair value to be applied widely. Therefore, it is necessary to strengthen the construction and regulation of the Chinese investment real estate market.
3,836
2021-11-22T00:00:00.000
[ "Mathematics" ]
Who is the Blockchain Employee? Exploring Skills in Demand using Observations from the Australian Labour Market and Behavioural Institutional Cryptoeconomics

LinkedIn recently predicted that blockchain will be the most in-demand skill in 2020, and in 2018 blockchain led the list of the fastest growing skills in demand according to Upwork. But what exactly constitutes the skill set of a blockchain employee? We use Australian labour market data to explore what skills are in demand among the blockchain workforce. We also take a deeper dive and explore what educational qualifications and experiences are required of blockchain employees, and how blockchain-related jobs perform on salary scales. We discover that alongside 'hard' software engineering skills such as programming languages or computer science, blockchain-related jobs require candidates to have 'soft' skills such as creativity, communication and leadership. To explain this, we use institutional cryptoeconomics, applied game theory and applied behavioural science to suggest that the demand for skills may be understood as a function of challenges to blockchain adoption. We suggest that for blockchain to enter a mass adoption phase, the industry will need employees with an integrated skill set of both hard software engineering skills and soft behavioural or enterprise skills. Furthermore, blockchain leaders, community leaders and end users will need to gain 'blockchain literacy' to overcome the challenge of coordinating expectations among developers and users, who will create network externalities and facilitate rapid, coordinated adoption. We contribute to the evidence-based blockchain literature by using Australian labour market data to derive insight into the challenges posed to the adoption of blockchain as (and if) it climbs out of the current 'trough of disillusionment'.

Introduction

Blockchain can potentially transform the Australian and global economy by offering greater data transparency, improved traceability, enhanced security and reduced costs across a variety of industries [2][3][4]. (According to the Crypto Encyclopedia, a blockchain is 'a publicly accessible distributed ledger that was initially designed and implemented to enable Bitcoin transactions. It is a piece of information technology infrastructure that serves as a database which is used to keep a continuously growing list of records, so called blocks' [1].) Blockchain allows users to transfer value efficiently in the absence of trusted intermediaries, and it has the potential to form a basis for an 'Internet of Value' by overcoming issues of trust in an online environment [5]. It has the potential to serve as a new type of inter-institutional infrastructure, transforming the roles of traditional institutions including governments, firms, clubs, commons and indeed markets themselves [6]. Whether these changes can be realised is a question predicated on the level of adoption of blockchain as a technology for economic interaction [7].

This article investigates which skills are in demand for blockchain employees as the technology progresses beyond the initial hype that typically follows the introduction of a new technology, through the notorious 'trough of disillusionment' and finally into a 'plateau of productivity', where most of the substantial economic gains can be produced [8]. To do this, we explore two data sets from the Australian labour market in 2015-2019. We then seek a theoretical explanation for our observations.
This approach can be seen as phenomenological [9]: we indeed want readers to experience and explore the data, and hence observe the phenomena, before we position the theory to explain them. We provide the theoretical explanation by drawing on institutional cryptoeconomics, applied game theory and applied behavioural science to explain our observations as a function of the challenges to blockchain adoption. We also discuss the future challenges that Australia might face in meeting the fast-growing demand for blockchain employees seeking to solve the broader problem of securing blockchain adoption.

We first consider the emergence of blockchain jobs globally and in Australia in line with the 'hype cycle'. We then explore data sets on blockchain-related job ads and required skills. Next, we explain our observations drawing on the perspective of behavioural institutional cryptoeconomics. Finally, we discuss the broader significance of our results.

Blockchain hype and skills demand: a historical review

There is no industry in the world today that has not investigated the opportunities of blockchain. In just a decade the technology has facilitated the creation of new products and services in Australia and internationally. Between 2014 and 2018, worldwide venture capital funding of blockchain grew by a factor of 11 to US$5.6 billion [10]. Australia is one of the nations at the forefront of blockchain innovation, with world-leading public and private sector projects such as the Australian Securities Exchange's CHESS replacement [11], Commonwealth Bank's Bond-i [12], IP Australia's IP Rights Exchange and Smart Trade Mark [13,14] and Power Ledger's energy trading platform [15]. Australia also leads the secretariat of the technical committee developing international blockchain standards [16,17].

Along with the emergence of blockchain technology, the demand for blockchain-related skills has been growing. The Bitcoin hype of 2017 sparked a boom in demand for blockchain-related skills, resulting in a competitive global hunt for blockchain employees [18]. For two quarters in a row (Q1-2 2018) blockchain led the list of the fastest growing skills in demand on the freelancing platform Upwork [19]. Blockchain first drew attention on the Upwork skills index in Q3 2017 as the second fastest growing skill, followed by Bitcoin as the third. In Q4 2017, Bitcoin took the lead as the top skill [19] before losing its place to blockchain for Q1 and Q2 of 2018. Since then both Bitcoin and blockchain have slipped off the Upwork skills index list. Similarly, job analytics firm Burning Glass Technologies (BGT) revealed a steady increase in the number of blockchain job postings between 2010 and 2014 in the United States of America (USA). The figure thereafter increased drastically, from 500 jobs in 2014 to almost 1,500 in 2015, before spiking to 3,958 in 2017 [20].

In line with global trends, labour demand in Australia has also experienced fast growth in blockchain-related jobs since 2014/2015 (see Figure 1). The number of job ads grew more than twentyfold, from 19 in 2015/2016 to 408 in 2017/2018. The Australian market, being smaller and less developed than that of the USA, saw explosive growth two years later than the USA did, and the number of blockchain job ads in the USA was almost 10 times higher than in Australia (see Figure 1). The end of 2017 marked the peak of global hype and inflated expectations for blockchain technology.
The following year saw a deepening disillusionment heading into the infamous 'cryptowinter' (see Figure 2). In this period, blockchain began to be thought of as the most over-hyped technology since the beginning of the century. Voices questioning the applicability of blockchain, its maturity and its effectiveness became increasingly prominent among business and government experts [21]. Media messages moved from 'blockchain can solve any problem' and 'all industries have use cases for blockchain' [22,23] in 2017 to more sober accounts of non-blockchain use cases [21,24], with an occasional smattering of 'there are no good uses for blockchain' [25]. Additionally, a global survey of public and private sector leaders showed that early investments in blockchain did not realise their anticipated returns [26]. On average, the respondents expected a 24% return but only realised 10%. Demand for blockchain employees in Australia peaked in this period (see Figure 3); since then it has decreased but remains relatively high.

Globally, in January 2020 LinkedIn predicted blockchain will be the most in-demand hard skill in 2020 on the platform [28]. This may signal a recovery of blockchain-related project investment, and that the sector in general might be plateauing in the trough of disillusionment and potentially recovering from it. This is interesting in and of itself. But job openings contain more information, which allows us to ask the still more interesting question: what does it mean to be a blockchain employee? We will now use the Australian labour market data to consider which skills are required of blockchain employees. We explore blockchain-related job ads in two data sets on the Australian labour market (BGT's Labor Insight™ data set [7] and the Data61 Australian Skills Dashboard [27]). This article extends previous research by Data61 [7].

Data exploration

Burning Glass Technologies data set

One data set was sourced from job analytics firm BGT [29]. BGT data have been used by government and private organisations in Australia and internationally to investigate skills transformation [30], job transitions [31,32], supply and demand [33,34], and education and credentials [35], among other topics. BGT's Labor Insight™ data set includes job vacancy data from company websites, online job boards and other online resources available for web crawling. As of August 2018, BGT covered over 44,000 web page sources across Australia, New Zealand, the United Kingdom, the USA, Singapore and Canada. Once the data are collected, BGT applies natural language processing to remove duplicate job ads and classify job skills. BGT acknowledges their data may include duplicate or miscoded job ads. See [36] for a detailed method and skills taxonomy. We filtered the Labor Insight™ data by searching for ads that included 'blockchain' as a keyword. The final data set included 497 job ads posted between July 2014 and June 2018.

Data61 Australian Skills Dashboard data set

The Data61 Australian Skills Dashboard data set provides a snapshot of the labour market [27]. This dashboard analyses job ad data provided by the labour market platform Adzuna Australia. The data set includes job ads listed directly on the Adzuna Australia platform, ads listed in Australia's major newspapers and ads 'scraped' from other available online resources. Scraped ads must pass a screening process before entering the Adzuna platform, to minimise the number of expired, duplicate or incomplete job ads.
The Data61 Australian Skills Dashboard data set is further cleansed through natural language processing and human coding to remove any remaining job ads that are duplicates or are from unreliable sources [37,38]. Skills required by job ads are categorised using the European Skills, Competences, Qualifications and Occupations skills taxonomy. The dashboard represents the Australian labour market in terms of occupational categories and geographic locations at the state and capital city level [38]. However, job ads in the state of Western Australia, as well as 'blue collar' jobs, may be underrepresented [38]. For the purposes of this article, the Adzuna data set was filtered with 'blockchain' as a keyword. The search returned 1,863 job ads posted between September 2015 and May 2019. We also qualitatively classified the job ad skills into 'soft' skills and 'hard' skills to determine the skills mix demanded in advertised positions.

Skills demand

Examination of the Data61 Australian Skills Dashboard data set demonstrates that employers are looking for a combination of soft and hard skills in the blockchain workforce. The hard skills frequently mentioned in the blockchain-related job ads include computer technologies and, more specifically, knowledge of JavaScript and the Internet of Things. Around half of the skills most frequently mentioned in the job ads, alongside blockchain, are soft skills including creative thinking, customer service and communication, as well as project management and leadership (see Figure 4). Moreover, 84.3% of the job ads required a mix of both soft and hard skills (see Figure 5). For a more detailed exploration of the required hard and soft skills, we looked at BGT job ads posted in 2017-2018. The data reveal the top technical skills desired from prospective blockchain employees (see Figure 6); the listed skills require a background in programming and/or mathematics. The BGT data also show that there is demand for soft skills among blockchain employees (see Figure 7).

Experience required

In the BGT data, 161 job ads mentioned required experience (see Figure 10), with over half of the jobs requiring between three and five years of experience. (Source: BGT data [7]. Note: 68% of records have been excluded because they did not mention required experience; therefore, this chart may not be representative of the full sample.) In the BGT data set, 107 blockchain-related job postings referenced a preferred field of study. The top majors that blockchain job ads required are listed in Figure 9. (Source: BGT data [7]. Note: 77% of records have been excluded because they did not include a major; therefore, this chart may not be representative of the full sample.)

Required educational qualifications

The observed demand for skills was reflected in the desired level of educational qualifications for blockchain employees (see Figure 8). Over 9 in 10 blockchain jobs required either a bachelor's degree or an even higher level of education according to the BGT data. (Figure 10. Qualifications required in Australian blockchain-related job ads between August 2017 and August 2018. Source: BGT data [7].)

Salary distribution of jobs

Almost 60% of the jobs offered to pay blockchain employees above AU$100,000 per year (see Figure 11). This is a higher wage level than most Professional job offers: only around 45% of Professional jobs offered salaries in the same bracket.
However, the data showed no difference in wage level between blockchain employees and Data Scientists and Software Engineers, who have a relatively similar skill set to blockchain developers. Source: BGT data [7]. Note: Professional jobs are defined as jobs requiring at least a bachelor's degree.

The picture of the in-demand blockchain workforce is therefore a somewhat interesting one for a technology-heavy sector building something akin to a digital utility. The typical blockchain employee, at least in Australia, is one who integrates hard technical skills with soft personal and enterprise skills. They are highly educated, typically with a formal higher degree in hand.

Explaining skills demand as a function of blockchain adoption challenges: behavioural institutional cryptoeconomics
We can understand the observed skills demand for blockchain-related jobs in Australia as a response to the challenge of securing blockchain adoption. Blockchain is a different kind of technology from the traditional technologies studied by economic theory: it is an institutional technology [6,39]. Industrial, inter-firm productivity-enhancing technologies have tended to evolve at a relatively rapid rate compared to institutional technologies such as firms, markets, clubs, commons and governments, which take decades and centuries to develop. However, blockchain as an institutional technology will need to be characterised by rapid, coordinated adoption. To understand the challenges being posed to blockchain adoption, we apply some game theory and behavioural science to round out the insights of institutional cryptoeconomics. We call this mix 'behavioural institutional cryptoeconomics'. It shows us that the key challenge to blockchain adoption is building capacity for adoption and then coordinating expectations across that population to facilitate rapid, coordinated adoption. It is the solution to this challenge (a similar challenge to that faced by Facebook, Uber, Airbnb, Amazon, PayPal and YouTube in their early years) that reveals what the Australian labour market for blockchain employees may be responding to.

Institutional cryptoeconomics: platforms and network externalities
Institutional cryptoeconomics identifies that the defining characteristic of blockchain is not that it is a distributed ledger technology (DLT) per se, but rather that it is an institutional technology [6,39]. It introduces a sixth archetype to the traditional five: markets, firms, governments, commons and clubs [40-43]. Such technologies require different kinds of governance, delimiting and enforcing the bounds of acceptable behaviour in society. The contention of institutional cryptoeconomics is that blockchain presents a sixth institutional technology because it is differentiated by the nature of its emergence and operation [6,44]. Blockchain protocols (such as Bitcoin, Ethereum and Monero) delimit a range of interactions on internet platforms that can be considered legitimate and integrated by a consensus algorithm into a record held by a network. The writing and actioning of blockchain protocols to support institutional governance of internet platforms therefore, almost by definition, emerges from a decentralised network and is actioned by that network. It does not require legitimation by government or some other centralised enforcement authority; it can be entirely supported by private entities. Blockchain is thus an institutional technology that allows for privatised emergent governance of internet-based platforms.
The defining problem in blockchain adoption, which makes it different from industrial technology adoption, is that, as a technology that enables institutional governance of internet platforms, it must, as with any platform technology, harness network externalities to achieve rapid, coordinated adoption [45-47]. This is not necessarily the case with industrial technologies [48-52]. Because platform technologies exist to enable and support interactions that would not otherwise be possible, they derive their value from the interactions that are possible within them. Therefore, the value to any one individual or organisation of adopting a platform for interacting with others is contingent upon its adoption by the other individuals and organisations they might like to interact with. In economic theory we call this a network externality [53-56]: the collective adoption of a particular technology affects the value an individual can realise from it.

Applied game theory, network externalities and Schelling-point coordination
Applied game theory allows us to identify why blockchain adoption needs to be rapid and coordinated. Achieving adoption of a platform governed by institutional technology is a special case of Schelling-point coordination [57]. Originally, Schelling-point coordination illustrated why the problem of disarmament is difficult to solve: unilateral disarmament could be disastrous, so all nuclear powers must simultaneously disarm (and maintain their disarmament). To obtain such an equilibrium, the various nuclear powers must believe that all other nuclear powers will disarm simultaneously with them. Schelling-point coordination thus becomes a problem of coordinating expectations between the various nuclear powers to ensure simultaneous disarmament. A similar problem is created by network externalities in the context of platform technology adoption, and therefore in the adoption of blockchains. The value of adopting a given internet-based platform for interaction subject to blockchain-based institutional governance is completely contingent on its adoption by others. Obtaining an equilibrium where a given platform and its blockchain are adopted therefore requires a belief across the population that the population at large will adopt it. Hence, the adoption of blockchain as an institutional technology for platform governance depends on the coordination of expectations across the population of potential users that sufficiently many others in the population will adopt the platform and its blockchain. Lest those expectations be 'dashed' and the adoption 'fizzle', that coordination of expectations must support rapid, coordinated adoption of the internet-based platform for interaction subject to blockchain-based institutional governance under consideration.
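To make this tipping dynamic concrete, the following minimal sketch (our own illustration, not from the original article; the Beta-distributed personal thresholds and the best-response updating are illustrative assumptions) simulates a population in which each agent adopts a platform once the share of adopters they expect exceeds a personal threshold, a threshold one can read as the restraining forces introduced in the next subsection:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
# Hypothetical heterogeneous adoption thresholds: agent i adopts once the
# share of adopters exceeds theta_i.
theta = rng.beta(4, 4, size=N)

def final_share(initial_expectation, steps=200):
    """Iterate best responses starting from a coordinated initial expectation."""
    share = initial_expectation
    for _ in range(steps):
        share = np.mean(theta < share)  # agents whose threshold is met adopt
    return share

for e in (0.4, 0.6):
    print(f"expected share {e:.1f} -> final adoption {final_share(e):.2f}")
# Expectations below the tipping point fizzle towards 0; above it, adoption
# cascades towards 1. Coordinated expectations decide which equilibrium obtains.
```

Lowering the thresholds in this sketch (shifting the distribution down) lowers the tipping point itself, which is exactly the role the following subsection assigns to reducing restraining forces.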
Applied behavioural science and restraining forces in blockchain adoption
The problem of coordinating expectations is fundamentally predicated on human behaviour in a systemic context. Blockchain will not be adopted unless there is rapid, coordinated adoption at the systemic level. Arguably the simplest formulation of psychological theory directly applicable to understanding the solution to this problem is that provided by Kurt Lewin [58]. Lewin sees behaviour as an equilibrium between driving and restraining forces that emerge from the interaction between motivation [59], cognition [60] and environment [61].

The challenge, Lewin suggests, when we approach problems of behaviour change such as securing the adoption of blockchain, is not to increase the driving forces towards that behaviour. The challenge is to reduce the restraining forces, emerging from the interaction between motivation, cognition and environment, that urge the individual away from that behaviour. Restraining forces in blockchain adoption fall into two broad categories: for an individual or an organisation to adopt a platform subject to blockchain governance, they must be (1) able to adopt the platform as a system for interaction with others and (2) willing to adopt the platform (see Figure 12).

In terms of the ability to adopt a platform subject to blockchain governance, the first restraining force is the actual creation and functionality of the code itself. Building the initial system can be difficult, since it often requires collaboration from various users across networks, business units, jurisdictions and systems. The networked nature of blockchain also means that it will typically exist within a 'winner takes all' system, with dominance in systems and protocols often gained by those able to grow rapidly in the initial phases and obtain first-mover advantage [62]. The 'winner takes all' dynamic makes this collaboration delicate and challenging. As such, gaining collaboration to build the initial system often requires strong skills in strategic management. Delivering requires good communication with the technical team, so that the system meets the requirements of the collaborators. If they can achieve this, the technical team is likely to successfully build a system with basic functionality. But human beings are cognitively limited organisms, and usability goes far deeper than engineering alone. To maximise the likelihood of a blockchain platform's adoption, the platform itself must be designed to be user friendly enough that the technical functioning of the platform is essentially invisible to the user experience. The more complex the platform is in terms of user experience, the greater the restraining forces against adoption, because the requisite cognitive capabilities to use the platform cannot be developed. World-class user experience design is necessary for developing the capacity for blockchain adoption among a population of potential adopters. Continuing this, one step further removed from engineering concerns, the usability of a platform subject to blockchain governance depends on the complexity of the institutional arrangements to which it is subject. The more complex the institutional arrangements that govern the platform both internally ('on-chain') and externally ('off-chain'), especially due to external, and often uncertain, regulatory structures, the greater the restraining forces against adoption. Cognitive capabilities are necessary not only for the simple ability to use the platform on a functional level, but also for the ability to use it within the bounds of acceptability delimited by institutional governance. How many laws does one break simply because they are too complex for someone unindoctrinated in the law to understand? Good institutional design and negotiation with external parties is needed to ensure the blockchain governance structure is usable enough for all potential adopters.
Turning to the willingness to adopt a platform subject to blockchain governance, this depends on the extent to which the cognitive dissonance [63] created by ideas about breaking with traditional platforms for interaction and embracing platforms subject to blockchain governance can be overcome. This cognitive dissonance presents a significant restraining force against the adoption of blockchain technology, as it does with any new technology. But in the case of blockchain-based institutional technologies, the existence of network externalities and the pre-existence of established platforms (such as Amazon, Uber and YouTube) make it particularly acute. This restraining force must be overcome by world-class strategic management and marketing of the design of a platform subject to blockchain governance. This strategic management and marketing must integrate design across all aspects of the platform, from the functionality of the code itself to the user interface laid over it, and also integrate this design with strategic marketing that builds sufficient expectations (that will be validated) about the value of adopting the platform and its governance structure. Critically for the validation of these expectations, the strategic management and marketing of the design must be oriented to facilitating rapid, coordinated adoption en masse. Unless this strategic management and marketing of design is strong, it will fail to build and/or validate expectations that reduce the restraining force of cognitive dissonance about the value of adopting a new blockchain-based platform. If that is the case, we will fail to see network externalities harnessed to leverage rapid, coordinated adoption of the platform subject to blockchain governance, and thus we will fail to see adoption at all. Hence astute strategic thinking in the management and marketing of the platform and blockchain design is critical for blockchain adoption.

Behavioural institutional cryptoeconomics: labour market demand for skills as a function of the adoption problem
We are now in a position to understand what we might be observing in the Australian labour market data as the market's response to this problem. We saw that, as a technology for institutional governance of internet-based platforms, the adoption of blockchain technology is subject to network externalities that must be harnessed through Schelling-point coordination. We saw how the achievement of this Schelling-point coordination requires overcoming restraining forces against the adoption of blockchain technology through world-class user experience design, institutional design and astute strategic thinking in the management, marketing and design of platforms subject to blockchain-based governance. To reduce restraining forces in blockchain adoption, it is therefore necessary to integrate software engineering with insights from user experience, negotiation, lawmaking, political theory, strategy, management, marketing and design. While different employees in a development team may differ in their skills and strengths, it will be necessary for at least one to have an integrated skill set across all of them to facilitate their integration across the whole team. At least one employee, in other words, will need to 'speak the language' of both hard and soft skills to facilitate their integration, and this will necessarily require them to have some proficiency in both.
Only if this integration of soft and hard skills occurs will we observe the development of capacity and the coordination of expectations necessary to support rapid, coordinated adoption of blockchain as an institutional technology for internet-based platforms.

Discussion
Our exploration of Australian labour market data would appear to provide hope for blockchain enthusiasts, if the observations are a function of the market responding to the core problem in blockchain adoption. If we were going to observe the adoption of blockchain as an institutional technology for internet platform governance, we ought to be observing the emergence of demand for employees who are skilled in communication strategy, management, marketing and user experience as well as those who are skilled in software engineering. Indeed, we ought to be observing demand for employees who can integrate soft skills with hard skills. Our observations from the Australian job ad data provide some evidence that this may be occurring, revealing a demand for hard skills, soft skills and integrated skill sets from blockchain employees. These results accord with the general findings of empirical studies in labour economics tracking the emergence of the digital economy. As digital technologies advance and more jobs are expected to be replaced or disrupted by automation, we are observing growing demand for technical skills and programming universally across the economy. However, the demand for soft skills is also growing, and in many cases outstripping the demand for technical skills [64]. Based on the data insights and the theoretical frame of behavioural institutional cryptoeconomics, we suggest we are observing, at least in Australia, a labour market response to the challenge of securing blockchain adoption. This might suggest that the technology is poised to emerge from the trough of disillusionment as a new generation of blockchain employees enters the sector. These employees may develop a stronger integration between software design, through the application of hard technical skills, and securing the platform's adoption, through the application of soft skills. This may promote rapid, coordinated adoption of blockchain by overcoming the restraining forces contributed to by network externalities and usability, and cause the technology to become integrated into the technological base of the economy at its core, rather than as a peripheral technology. Our data insights and theoretical frame also suggest that blockchain adoption may require blockchain employees who can help build the combination of technology and complementary skills required by different groups, from blockchain users to blockchain developers. A simple model of the integrated skills hierarchy that we suggest needs to be built, and perhaps is being built, is presented in Figure 13. Blockchain leaders will need to understand the opportunities and limitations of the technology to strategically develop, market and manage blockchains as software, as well as develop a population that can co-develop and use it. Blockchain developers and adopters will play an essential role in the further development and implementation of the new technology across the economy, and will rely on blockchain knowledge and industry expertise, contributing to building the capability for adoption.
Blockchain leaders, community leaders and end users would benefit from 'blockchain literacy', a broader understanding of how the technology works. While the usability of the system should make its technical functioning invisible, end users will need to understand blockchain's value proposition and its key differences from existing systems in order to build expectations of coordinated adoption. Complementary soft skills will be crucial for adopting companies and industries to fit the new approaches to existing legacy systems and to ensure the technology fits jobs, teams and industry-specific requirements. One pressing issue for the development and uptake of blockchain technology is the supply of a qualified workforce to meet the growing demand for blockchain development. Australia might produce fewer potential blockchain employees than other countries, as Australia has fewer Information and Computer Technology (ICT) graduates than countries such as Singapore, Finland and New Zealand. In those countries, more than 6% of all students graduate with ICT qualifications, compared to only 3.5% in Australia [65]. The continuing expansion of blockchain outside the ICT industry, we suggest, will open large markets for educational providers in Australia and internationally. The growing demand for quality blockchain education therefore forms a market niche for accredited Australian educational providers.

Limitations and further research directions
This article explicitly focuses on blockchain as the most popular DLT. There are two reasons for this narrow focus: (1) compared to blockchain, DLT as a term (and keyword) is rarely present in the online job ad data that we used, and (2) DLTs are not tracked by the Gartner Hype Cycle. Although we suspect that our theoretical frame can be applied more broadly to DLTs, this article has not specifically investigated them. Given that blockchain technology is new and still early in the hype cycle, there is a lack of high-quality data with which to understand deeply the challenges to blockchain development and adoption. This makes it difficult to perform much more than the descriptive analyses conducted in this study. Another limitation of the current study lies in the nature of job ad data and skills classifications. Job ads represent the skills employers demand from employees, but they do not necessarily reflect the skills of those who are interviewed or hired, nor do they directly reflect the roles, tasks and responsibilities of those hired. Lastly, in our approach we used theories to explain what we observed in the data. The next logical step would be to validate our explanations should additional or more detailed data become available. Future research could therefore target the collection of higher-quality and larger data sets and conduct inferential statistical analysis. It would also be interesting to perform a comparative analysis across international blockchain job ad data sets, especially for regions with larger labour markets such as the USA. Another direction for future research would be a study of labour market dynamics, as well as the constitution and transformation of skill sets for blockchain (and broader DLTs), in comparison with other emerging technologies such as artificial intelligence or quantum computing.

Conclusion
This article contributed to the evidence-based blockchain literature by examining the in-demand blockchain workforce as (and if) the technology moves through the trough of disillusionment into a plateau of productivity.
The exploration of Australian labour market data showed that the in-demand blockchain workforce is well compensated, experienced and highly educated, with a mix of hard software engineering skills as well as soft enterprise and personal skills. To explain the skills demand, we used behavioural institutional cryptoeconomics, which theorises that coordinating expectations of blockchain adoption among developers and users is necessary to create the network externalities that facilitate rapid, coordinated adoption. We explained that a mix of soft and hard skills is necessary to overcome the challenge of coordinating expectations. More specifically, we argued that hard software engineering skills, together with world-class user experience design and institutional design, are needed to create a functioning blockchain system that can be adopted by end users. Furthermore, strategic management and marketing are needed to give end users the motivation to adopt. We also argued that mass adoption requires blockchain leaders and end users to gain blockchain literacies, as this helps them understand the platform's value proposition, thus boosting their motivation to adopt. The job market demand for both soft and hard skills showed that the blockchain industry, at least in Australia, is aware of the need for a skills mix. Gaining and maintaining this skilled workforce may be what makes or breaks blockchain: whether adoption fizzles due to a lack of strategic management, usability and marketability, or whether it overcomes these challenges and becomes the mass-adopted institutional technology that many are hopeful of.

Ethical approval: Not applicable.
Funding: The Australian Computer Society funded the previous foundational research project that resulted in the publication of the "Blockchain 2030: A look at the future of blockchain in Australia" report. This funding allowed the researchers access to the Burning Glass Technologies data set.
Study of the structure of the Hoyle state by refractive α-scattering

α+¹²C elastic and inelastic (to the Hoyle state, 0₂⁺, 7.65 MeV) differential cross sections were measured at energies of 60 and 65 MeV with the aim of testing the microscopic wave function [1] widely used in modern structure calculations of ¹²C. Deep rainbow (Airy) minima were observed in all four curves. The minima in the inelastic angular distributions are shifted to larger angles relative to those in the elastic ones, which testifies to the enhanced radius of the Hoyle state. In general, the DWBA calculations failed to reproduce the details of the cross sections in the region of the rainbow minima in the inelastic scattering data. However, by using a phenomenological density with an rms radius of 2.9 fm, we can reproduce the positions of the Airy minima.

Introduction and experimental data
The structure of the 0₂⁺, 7.65 MeV "Hoyle" state of ¹²C permanently attracts attention due to its importance for understanding many features of clustering phenomena in nuclei. The microscopic wave function calculated by Kamimura [1] represents the basis of different theoretical approaches to the problem. However, some predictions made using this wave function, e.g., the radius of the Hoyle state, were not confirmed by empirical data [2]. Thus additional tests are highly desirable. It is well known that nuclear rainbow scattering is one of the most powerful instruments for studying the interior of nuclei, so we used it for testing [1] in the inelastic α+¹²C scattering to the Hoyle state. The elastic and inelastic (to the 0₂⁺ state) differential cross sections were measured at energies of 65 and 60 MeV, in the angular ranges where rainbow effects reveal themselves. Pronounced minima at about 70° (65 MeV) and 75° (60 MeV) were observed in the inelastic scattering angular distributions (Fig. 1) and identified as rainbow (Airy) minima, owing to the repetition of similar minima in the far components of the cross sections and the expected shift of the experimental minima with energy (as 1/E). The Airy minima observed in the elastic scattering cross sections (Fig. 2) are located at smaller angles (44° and 50°), thus providing an indication of the enhanced radius of the Hoyle state [3,4]. The measured differential cross sections are presented in Figs. 1 and 2, together with DWBA and optical model calculations excluding the elastic and inelastic transfer of ⁸Be. The experimental minima in the scattering data recognised as Airy minima are seen only in the far-side components of the cross sections calculated with zero absorption. The cross sections at angles larger than ~90° are strongly influenced by the ⁸Be transfer mechanism [5].
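As a quick consistency check of the 1/E scaling quoted above (our own arithmetic, not taken from the paper), one can verify that the product of the Airy-minimum angle and the beam energy is roughly constant for the two inelastic measurements:

```python
# Airy-minimum positions quoted in the text: ~70 deg at 65 MeV, ~75 deg at 60 MeV.
E1, th1 = 65.0, 70.0
E2, th2 = 60.0, 75.0

print(E1 * th1, E2 * th2)   # 4550 vs 4500: theta * E is roughly constant
print(th1 * E1 / E2)        # 1/E scaling predicts ~75.8 deg at 60 MeV, close to 75
```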
Elastic scattering analysis
Unambiguously defined optical potentials describing the elastic scattering in the entrance and exit channels provide the most important basis for the analysis of the inelastic scattering. We used the semi-microscopic potentials obtained within the dispersive optical model (SMDOM) [6] for both channels. The SMDOM potential consists of three terms: (1) the real mean-field potential calculated in a microscopic folding model with the corrected SNKE approximation (for details see [5,6]); (2) the phenomenologically constructed complex Dynamic Polarization Potential (DPP); and (3) the Coulomb potential in the form of a uniformly charged sphere. The folding potentials for the exit channel were calculated with the adopted matter densities of the Hoyle state (see Section 3). The real (V_i) and imaginary (W_i) parts of the DPP use similar radial (Woods-Saxon) shapes for the volume terms and their derivatives for the surface ones. The energy-dependent potential strengths (V_S(E), W_S(E), V_D(E) and W_D(E)) were found by fitting all available elastic scattering data over a wide energy range. Owing to the appearance of new data, we refined some parameters obtained before [5]. The parameters estimated from the energy dependence (Fig. 3) were used for the exit channel. The refined SMDOM potentials, while keeping the quality of the description of the angular distributions in the energy range from 40 to 240 MeV, better reproduce the data on the reaction cross sections, better satisfy the dispersion relations (Fig. 4) and satisfy well the inverse linear energy dependence of the positions of the Airy minimum (Fig. 5).

Inelastic scattering analysis
The results of the DWBA analysis of the inelastic scattering data are shown in Fig. 1. The inelastic form factor was taken as the sum of a nuclear and a Coulomb term, F_if = V_if^N + V_if^C. The first term represents the non-diagonal part of the folding model potential, which was calculated as in [7]. The second term, V_if^C, describes the Coulomb excitation in the standard way, and the reduced matrix element M(EL) is given by the known experimental value taken from [7]. Two approaches were used. In the first, we calculated the matter density of the excited 0₂⁺ state (Fig. 6) and the transition density based on the Kamimura wave functions [1] (Fig. 7). Fitting the data required renormalisation of the transition density by N = 0.4 at both energies, 60 and 65 MeV. In the second approach, we used phenomenological models for the matter density of the 0₂⁺ state and for the transition density. For the matter density (Fig. 6) we used a simple two-parameter Fermi form with radius and diffuseness parameters c = 2.88 fm and a = 0.5 fm, leading to R_rms = 2.9 fm. This value is the same as the empirical one [2] and that predicted by AMD [8], but significantly smaller than that given by [1] (3.53 fm). For the transition density we used a parameterisation based on the derivative of the two-parameter Fermi form, which preserves the number of particles, but with the parameters c and a fitted to describe the angular distribution at angles up to 60° and the position of the first Airy minimum in the far component. We obtained the following parameter values: c = 3.5 fm, a = 0.45 fm and renormalisation parameter ρ₀ = 0.005 at both energies, 60 and 65 MeV (see Fig. 7). Both calculations predict the appearance of the main Airy minimum at angles larger than those in the corresponding elastic scattering angular distributions. However, the Airy minima are seen only in the far components of the calculated cross sections. The experimental cross sections are not reproduced in the rainbow minima regions. The reason for this is not completely clear. In particular, it may be attributed to a contribution of the ⁸Be direct transfer mechanism, which is especially strong at angles larger than 80°, both in the elastic and the inelastic scattering [5]. In general, our analysis led to two interesting results. First, it indicates that the wave function [1] is not fully adequate, as it produces too small a rainbow angle in the inelastic scattering. Secondly, it shows that the rms radius of the state is not the only characteristic that defines the position of the Airy minimum. The phenomenological transition density was used to predict the correct position of the minimum.
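The quoted rms radius can be checked numerically (a sketch of our own, not code from the paper): for a two-parameter Fermi density ρ(r) ∝ 1/(1 + exp((r−c)/a)), the mean square radius is ⟨r²⟩ = ∫ρ r⁴ dr / ∫ρ r² dr, and with c = 2.88 fm and a = 0.5 fm this indeed gives R_rms ≈ 2.9 fm:

```python
import numpy as np

c, a = 2.88, 0.5                           # fm, parameters quoted in the text
r = np.linspace(0.0, 20.0, 20001)          # radial grid well beyond the nuclear surface
rho = 1.0 / (1.0 + np.exp((r - c) / a))    # two-parameter Fermi shape

# <r^2> = int rho r^4 dr / int rho r^2 dr (the normalisation cancels)
r2_mean = np.trapz(rho * r**4, r) / np.trapz(rho * r**2, r)
print(f"R_rms = {np.sqrt(r2_mean):.2f} fm")   # ~2.90 fm
```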
To conclude, rainbow (Airy) minima were observed in the differential cross sections of the inelastic α+¹²C(0₂⁺, 7.65 MeV) scattering. Their positions were reproduced by DWBA calculations with a phenomenological density distribution with an rms radius of 2.9 fm, which is the same as that obtained in [2,8] and differs from the prediction of the microscopic Kamimura wave function [1]. This work was partly supported by grant 12-02-00927 of the Russian Foundation for Basic Research.
A Proposed Web Based Real Time Brain Computer Interface (BCI) System for Usability Testing

Hallway testing, remote usability testing, expert review, automated expert review and A/B testing are the methods commonly used for usability testing. However, there is no reliable system that integrates a Brain Computer Interface (BCI) into the testing process with a focus on user emotion analysis using electroencephalography (EEG) signals. This paper proposes a system that would identify user emotions while users conduct usability tests; these results would increase the accuracy of the usability test. In the proposed system, the results of the usability test would be displayed in real time on a dashboard, and a summary report could be generated for distribution.

Introduction
Brain Computer Interface (BCI) is an area of Computer Science that aims to capture brain activity in the form of electroencephalography (EEG) waves and convert those signals to commands readable by computers using a series of algorithms [1]. Typically, EEG signal analysis is conducted in the medical field to identify patients suffering from neurological diseases such as epilepsy. With the availability of cheap hardware such as OpenBCI [2] and Emotiv EPOC [3], the possibilities for integrating BCI into business applications are vast. A new area being explored is the recognition of emotions in real time using EEG waves [4]. This paper proposes a system which integrates real-time emotion recognition into usability testing: a usability testing method which incorporates BCI into the testing process to identify the user's emotions while they use the system under test. The process of conducting usability tests for software can be refined by integrating BCI into the process. BCI would be used to identify the user's emotions in real time, and the data can be used to improve the outcome of the usability tests conducted. In this paper, we propose a web-based system (dashboard) that would display an analysis of user emotions as the usability test is conducted. The paper is organised as follows: Section II reviews the existing literature on usability testing and BCI, Section III describes the proposed solution, including the hardware, software and system architecture of the proposed system, and Section IV concludes the paper.

2 Literature Review on Usability Testing and BCI

Usability testing and current methods
In the context of human computer interaction, usability is defined as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" [3]. In other words, a system can be said to have achieved its usability goals if its intended users are able to use the system as envisioned. Nielsen identified five attributes of usability, namely efficiency, satisfaction, learnability, memorability, and error rate [5]. A system can be considered usable if it scores a satisfactory level in all these aspects. Usability in software design can also be defined as the degree to which a software product can be used by its intended audience for the purpose for which it was designed, while maintaining a high standard of effectiveness, efficiency, and satisfaction [6]. In order to ascertain whether a system is usable, a series of usability tests have to be conducted with real users.
Among the usability testing methods currently in use are heuristic evaluation, where usability experts judge whether the system follows currently established principles, and cognitive walkthrough, where a detailed list of steps is used to test a user's problem-solving process while using the system [4]. Usability testing, more generally, can be defined as a process which determines how well users are able to use a product through systematic observation. The sphere of usability testing is large, as there are many methods being practised. Typical methods include hallway testing, where randomly selected users are asked to use the system; this helps designers identify major loopholes in the design. Expert review is another method of conducting usability testing, in which experts with experience within the domain test the system to identify its usability. In addition to the traditional methods discussed above, research published in 2017 discusses a pattern-based usability testing method [1]. That paper acknowledges that some usability test guidelines can be tested automatically and suggests different strategies to test those guidelines. Another proposed system describes how usability tests are recorded on paper and how an automated system called the Usability Management System (USEMATE) can be used to automate some aspects of usability testing while increasing the efficiency of each test [4]. The goal of USEMATE is to reduce the costs of conducting usability tests, since usability testing is understood to be one of the most expensive methods of evaluation [7]. As mentioned earlier, heuristic evaluation is one of the common methods of conducting usability tests. A paper on heuristic evaluation suggests that it is one of the most effective methods, as it uses Nielsen's list of 10 usability heuristics to conduct the tests [8]. The research team used public university websites to conduct their experiment; by applying a set of heuristic evaluations, the good and bad websites were identified. Other research argues that Graphical User Interface (GUI) testing is the best way to evaluate the usability of a newly developed system [9]. It is stated that the largest contributors to the usability of a system are event coverage, ease of use, and ease of understanding. This research recommends a fuzzy logic model to predict the usability of a test case. The idea of combining BCI and usability testing is relatively new and largely unexplored. This paper aims to propose a system that identifies user emotions through EEG waves and uses these data to support usability testing.

Problems with current methods
Inability to accurately identify a user's emotional state while they are conducting the usability test: The emotional state of a user is an important aspect to consider in order to measure the user's level of satisfaction with the system under test [10]. An inability to know accurately how a user feels can have a negative effect on the results of the test, making them inaccurate. Although different methods are used to guess the user's emotional state while using the system, there is no standard method to identify and measure emotions in real time. It is important to differentiate the emotion levels at different stages of the usability test, as different stages may trigger different emotions.

Reliability of data collected may not be high: Among the techniques used to collect data are survey questionnaires, shadowing the user, interviews, etc.
Although these methods may be a good approach when collecting quantitative data, they may not be the best way to identify user satisfaction and emotional state while conducting usability tests. This is because there would be a possibility of a high level of bias, such as the user picking an inaccurate representation of their emotion when filling out the survey, which would result in low accuracy.

Brain computer interface (BCI)
According to research by Vallabhaneni, BCI can be classified as a communication method that depends on brain activity [11]. This brain activity, also known as neural activity, is typically recorded from a subject using two different types of techniques, known as invasive and non-invasive. The human brain needs to be explored first to define what neural activity is and what causes it. The brain is the most complex organ known to man and is the centre of the nervous system. In humans, the brain is located in the skull, close to other sensory organs such as the eyes, ears, and nose. According to research by Pelvig, it is estimated that a normal human brain is made up of 13-33 billion neurons that are interconnected through synapses [12]. A network of neurons connects different parts of the brain to each other. These neurons communicate with each other by passing electrical signals from one neuron to another. The electrical impulses are transferred from one cell to another through synapses, via the release and reception of neurotransmitters into the extracellular space [13]. As a by-product of this activity, an electric field is generated in the brain. If a large enough area of the brain is simultaneously activated, the field generated can be detected outside the scalp. The electric field is recorded using a method known as electroencephalography (EEG) [14]. The electric field can be read using the two methods mentioned earlier, invasive and non-invasive. The invasive method receives data through electrodes that have been surgically implanted on the surface of or inside the brain. This is typically done in medical conditions such as epilepsy, where accurate and constant recording of data is required. The type of invasive EEG that sits on the surface of the brain is known as subdural EEG, while the type inside the brain is known as depth EEG. The main advantage of using invasive EEG is the stronger signals that are recorded due to proximity to the source (the brain). The data collected through invasive methods are also referred to as electrocorticography (ECOG). In the non-invasive EEG collection method, the electrodes are placed on the scalp of the subject and data are read from them. These electrodes are "pasted" onto the scalp using a conductive gel to ease the data collection process. It is also common to use a set of electrodes combined into a cap in which the electrodes are embedded, making it easier to use. The main advantage of the non-invasive method is that there is no permanent effect on the subject: it is a painless, temporary process that does not leave any side effects. However, compared to invasive methods, non-invasive methods are susceptible to noise, that is, distortions of the wave created by external factors such as muscle movement [15]. EEG waves can be divided into five main types based on frequency, namely Delta, Theta, Alpha, Beta, and Gamma. Delta ranges from 0 Hz to 4 Hz and is usually the highest in amplitude as well as the slowest.
Delta waves are usually seen in adults during sleep and more frequently in babies. Theta waves have frequencies from 4 Hz to 7 Hz; they are usually seen in young children, or in adults when they are drowsy, and can also be seen during meditation [16]. Abnormal brain activity is usually suspected if there is an excess of theta waves in a patient. Alpha, on the other hand, ranges from 7 Hz to 13 Hz in frequency [17]. Alpha waves are said to be stronger when the eyes are closed and the subject is relaxed, and to weaken when the eyes are open. Beta is the frequency range from 14 Hz to about 30 Hz and can be detected from both sides of the head. "Beta activity is closely linked to motor behaviour and is generally attenuated during active movements" [18]. In a typical scenario, low-amplitude beta waves can be seen during muscle movements and are typically more prominent when a subject is active. Lastly, Gamma waves have a frequency range of 30 Hz to 100 Hz and are currently theorised to reflect the combined activity of different neuronal populations of the brain carrying out a complex task, such as a cognitive or motor one.

EEG data collected through the non-invasive method are always corrupted by noise known as artifacts. These artifacts are electrical signals that originate from outside the brain, typically due to muscle movements or even eye movements. Signal pre-processing and removal of noise have to be conducted in order to remove as many artifacts as possible before analysing the data for other features.

BCI is an area of science that focuses on creating a communication pathway between the brain and an external device. It is typically understood that in BCI there would be a bidirectional information flow: from the brain to the device, with feedback from the device obtained through other sensory organs. Early research into BCI focused on the medical field, for example on neuroprosthetics, which are commonly used to help patients suffering from hearing disabilities. A hearing BCI device, for example, would listen to a sound, convert it to a signal and send it directly to the brain. The brain would handle these signals as if they came from a natural source, provided the device was configured properly. Since the 1990s these practices have been commercialised, and BCI advancements have directly impacted many lives by improving their quality of life. Research by Mor states that BCI research is currently being conducted in areas outside of medicine, such as security, awareness monitoring, gaming, augmented reality, and even commerce [19]. This means there is commercial value in BCI outside of medicine, and it could even lead to Web 5.0. It is accepted that the goal of BCI is not to listen to what a person is thinking, but rather to use thought to conduct an activity or achieve an outcome [20].
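As an illustration of how the frequency bands described above are used in practice (a minimal sketch of our own; the band edges follow the text, while the Welch-PSD choice and the synthetic test signal are our assumptions), one can estimate per-band power of a pre-processed EEG segment:

```python
import numpy as np
from scipy.signal import welch

# Band edges as described in the text (Hz); the 0.5 Hz floor avoids the DC bin.
BANDS = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (7, 13),
         "beta": (14, 30), "gamma": (30, 100)}

def band_powers(segment, fs):
    """Integrate a Welch power spectral density estimate over each EEG band."""
    f, psd = welch(segment, fs=fs, nperseg=min(len(segment), 4 * fs))
    return {name: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in BANDS.items()}

fs = 256                                  # a typical EEG sampling rate
t = np.arange(0, 4, 1 / fs)
segment = np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz (alpha-band) signal
print(band_powers(segment, fs))           # the alpha band dominates, as expected
```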
Feature extraction algorithms used in BCI
According to Jenke, feature extraction is typically conducted by isolating features in a few different domains, namely time, frequency, and time-frequency. Features are usually extracted from signals that originate from a single electrode of the EEG headset. However, signals from multiple electrodes can also be combined to form a unified set of signals, and features can be extracted from there [21]. An analysis of the three different domains used in feature extraction algorithms for EEG waves is given below.

Time domain feature extraction is said to be the weakest among the three domains [21]. Although there are a few different feature extraction algorithms in the time domain, the most efficient is said to be the High Order Crossing (HOC) algorithm. HOC focuses on a more robust feature of EEG, namely the oscillatory pattern of an EEG wave, and applies a sequence of high-pass filters to it. HOC's use is also validated by other research, which uses this algorithm to extract emotion-related features from EEG [22].

Frequency domain feature extraction, on the other hand, focuses on the different frequency bands exhibited by EEG waves to analyse emotions. There are two common approaches to this method, namely the band power approach and the Higher Order Spectra (HOS) approach. The band power method divides EEG waves into bands of different frequencies. Once the features are extracted, they are ready for feature classification, which is also known as feature mapping.

Time-frequency domain feature extraction: algorithms of this kind are much more powerful than the previous two approaches, because they are capable of extracting much more information from non-stationary waves by taking into account the dynamic changes that the wave goes through. Two common algorithms that take this approach are the Hilbert-Huang Spectrum (HHS) and the Discrete Wavelet Transform algorithm. Using the Hilbert-Huang Spectrum (HHS), meaningful features are extracted by comparing the amplitude of the wave against the phase of the wave. The average of the frequencies can also be used, and the benefit of HHS really depends on the classification algorithm that is used after this stage. Another technique of time-frequency feature extraction is the Discrete Wavelet Transform (DWT) algorithm. This algorithm breaks each wave down into different levels based on frequency ranges, while maintaining the time property of each wave. According to research by Murugappan, DWT analysis can be used to extract emotions from EEG by using the db4 wavelet function [23].
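A minimal sketch of DWT-based feature extraction with the db4 wavelet mentioned above (our own illustration using the PyWavelets library; the decomposition level and the per-subband summary statistics are assumptions, as the cited studies use various statistics of the subband coefficients):

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(segment, wavelet="db4", level=5):
    """Decompose an EEG segment and summarise each subband as features."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:  # one approximation array plus one detail array per level
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.asarray(feats)

rng = np.random.default_rng(0)
segment = rng.normal(size=1024)          # placeholder for a cleaned EEG segment
print(dwt_features(segment).shape)       # (level + 1) * 3 = 18 features
```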
Signal mapping algorithms used in BCI
In the previous subsection, we discussed signal feature extraction algorithms. Those algorithms channel their output to an algorithm for feature mapping, or classification. The goal is to extract the features related to emotions from EEG waves and subsequently map them to a specific emotion through a feature mapping algorithm.

Support Vector Machine (SVM): One of the commonly used signal classifiers is the Support Vector Machine (SVM) model [24]. It is a machine learning model which can be used to analyse and classify a set of data. The general method would be to apply a machine learning algorithm to a training data set for a specific emotion, for example happiness, and then create a predictive model with a relatively high level of accuracy. The extracted EEG wave would then be sent to the predictive model for classification. When an arbitrary level of match is obtained, we can safely assume that the specific emotion was present in the EEG wave. The process can be repeated with other emotions as well to obtain a comprehensive analysis. Another advantage of the SVM approach is its compatibility with the DWT algorithm explored in the previous section. It is recommended that a combination of SVM and a Hidden Markov Model (HMM) be used to achieve a higher level of accuracy in the predictive model.

Neural Network: Research by Mangalagowri has recommended a neural network based approach for feature classification [25]. At the neural network level of the data flow, an algorithm known as "feed forward backward" is used to classify features. A training model using the neural network is created to analyse emotions by focusing on a few of the electrodes connected to the neuroimaging device.

3 Proposed Solution
The proposed system would be a web-based system in which the EEG acquisition device (hardware) is connected to a computer capable of constantly uploading the stream of data to the cloud for processing and distribution.

Hardware
Hardware capable of reading and transmitting EEG waves will be used for signal acquisition and transmission. Emotiv EPOC is a device that fits this requirement well. Emotiv EPOC is an EEG acquisition tool by Emotiv that provides up to 14 channels of signals and has a mere 3-5 minute set-up time. This would allow usability tests to be conducted with relative ease and reduce time wasted on setting up. In addition, the data transmission is wireless and the Emotiv EPOC is battery powered. This means the test set-up would not be cluttered with wires, reducing possible data inaccuracies due to incorrect set-up. Fig. 1 shows the top view of the Emotiv EPOC device. The Emotiv EPOC also comes with a series of Application Programming Interfaces (APIs) that can be used for analysis. The Raw EEG API will be used by the proposed system to obtain data from the devices, which will then be used for analysis. The languages which will be used to develop the analytical tools are discussed in the next section.

Web development frameworks - AngularJS
AngularJS is a JavaScript-based front-end framework that can be used to develop single-page applications as well as multi-page web applications [26]. It was developed by Google and is maintained by Google and a community of developers. AngularJS also extends Hypertext Markup Language (HTML), which makes it easy to read and easier to pick up compared to other frameworks [27]. The main advantage of using AngularJS is that applications built with this framework use the Model-View-Controller (MVC) pattern. MVC is a software design pattern in which the application is separated into three main logical components: the model, the view, and the controller. Recent AngularJS versions use a hybrid of the traditional MVC, namely the Model-View-ViewModel (MVVM) design pattern [28]. The model in this design pattern refers to the data; since all data types are treated as objects in JavaScript, the model consists of all the data objects. In AngularJS, the view is the HTML that exists after AngularJS has parsed and compiled it; it also includes all the rendered mark-up as well as the data bindings. The ViewModel, on the other hand, performs a hybrid of the controller function: it provides views with their data as well as methods to alter the data. The main disadvantage of using AngularJS is its performance, especially when handling large data sets. The system would constantly be updated with a large inflow of data, such as raw EEG, as well as an inflow of processed data. Since there is no virtual Document Object Model (DOM) to track changes and render only the updates, the regular DOM would constantly be injected with updates, causing performance delays when handling a large data set [29]. In conclusion, AngularJS may be a good choice, since it is also part of the MEAN stack, which is used to develop full-stack web applications [30].
The MEAN stack is made up of MongoDB, a NoSQL database; Express.js, a web application framework that works in the backend, communicating with the database and the front-end environment; AngularJS; and Node.js, a server-side environment built using JavaScript. However, the performance concerns remain, and a dashboard requiring constant updates from live data streams may need a framework capable of high-performance rendering.

BCI Usability Testing System
The architectural overview of the system is illustrated in Fig. 2. Two types of users are involved. Test Subject User: the individual who uses the system under test while wearing the EEG acquisition device. When selecting this individual, special attention should be given to ensuring the person has a relatively normal medical history and has not been diagnosed with any condition that may cause abnormality in their EEG signals. An emotion-inducing video should be shown beforehand to create a baseline data set unique to each user; this data set would be used for signal classification at the cloud layer later on. Test Supervisor: the individual who conducts the tests and observes the dashboard. Since it is a cloud-based system, other stakeholders can view the results remotely as well. The test supervisor would also be in charge of assisting with the setting up of hardware and ensuring all nodes are connected and active.

The system can be divided into three modules: the System Under Test (SUT), the Cloud Layer and the Dashboard. The System Under Test (SUT) module is the system that is going through the usability testing process. It can be any computer-based system, ranging from simple websites to complex web applications. The SUT can also be two systems that the Test Supervisor wants to compare in an A/B test. Data acquisition and transmission also happen at this layer, as depicted in Fig. 2.

The Cloud Layer module is where the analysis of EEG signals takes place. A database is used in the cloud to store credentials and other data. At this layer, the signal stream goes through a series of algorithms which handle pre-processing (removal of noise) and feature mapping (extraction of features from EEG signals and mapping them to known emotions). As discussed in Section 2, there are three different approaches to feature extraction, namely the time domain, frequency domain, and time-frequency domain approaches, each with numerous algorithms. As the feature extraction algorithm, we recommend the Discrete Wavelet Transform (DWT) algorithm, which is based on the time-frequency approach, because it is capable of time-frequency localisation, multiscale zooming, and noise filtering [24]. The output would then be channelled to a feature mapping (classification) algorithm. For classification, we recommend the Support Vector Machine (SVM) model [24] described in Section 2: a predictive model is trained on labelled data for each emotion, the extracted EEG features are sent to the predictive model for classification, and when a sufficient level of match is obtained we can assume that the specific emotion was present in the EEG wave; the process can be repeated for other emotions to obtain a comprehensive analysis. Given SVM's compatibility with the DWT algorithm [24], it is recommended that a combination of SVM and a Hidden Markov Model (HMM) be used to achieve a higher level of accuracy in the predictive model [31]. These models can be created using cloud service platforms such as Amazon Web Services and Azure.
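A sketch of how the recommended SVM classifier could be trained at the cloud layer (our own illustration with scikit-learn; the feature dimensionality, the emotion labels and the synthetic data are placeholders standing in for the DWT features and the per-user baseline session described above):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Placeholder DWT feature vectors labelled during the baseline
# (emotion-inducing video) session for one Test Subject User.
X_train = rng.normal(size=(200, 18))
y_train = rng.choice(["happy", "neutral"], size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

# Live segments streamed from the SUT would be scored the same way; the
# per-class probabilities act as the 'level of match' for each emotion.
X_live = rng.normal(size=(3, 18))
print(clf.classes_, clf.predict_proba(X_live))
```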
The third module is the Dashboard. This module is where all the data from the cloud server are displayed in a comprehensible format. The dashboard would provide functionality such as a general view, analysis of specific data streams, generation of a report for the session, and downloading or sharing of the report. The emotions that the Test Subject User felt would be shown using graphs, and at the end of the session the aggregated results can also be shown. The Test Supervisor can also view previous results and identify whether usability levels have increased if the same system is retested. The dashboard would also have the option to generate a report of each session, allowing the Test Supervisor to download a summary of the session.

Conclusion
This paper proposed a web-based system that can be used for usability testing and that incorporates BCI into the testing process through emotion recognition. The proposed system aims to improve the outcome of a usability test by giving the testing team the opportunity to know the user's emotions while they are conducting the usability test. The paper described the architecture of the proposed system, as well as recommended tools that should be used with the system. In the future, this research will focus on testing the system and connecting it to other hardware, such as an eye-tracker, with the aim of increasing the accuracy of the results that can be obtained. The goal is to use the proposed system to test any computer-based system, whether existing or currently being developed.
On the electron energy distribution index of Swift Gamma-Ray Burst Afterglows The electron energy distribution index, p, is a fundamental parameter of the synchrotron emission from a range of astronomical sources. Here we examine one such source of synchrotron emission, Gamma-Ray Burst afterglows observed by the Swift satellite. Within the framework of the blast wave model, we examine the constraints placed on the distribution of p by the observed X-ray spectral indices and parametrise the distribution. We find that the observed distribution of spectral indices is inconsistent with an underlying distribution of p composed of a single discrete value but consistent with a Gaussian distribution centred at p = 2.36 and having a width of 0.59. Furthermore, accepting that the underlying distribution is a Gaussian, we find the majority (>94%) of GRB afterglows in our sample have cooling break frequencies less than the X-ray frequency. INTRODUCTION The afterglow emission of Gamma-Ray Bursts (GRBs) is generally described by the blast wave model (Rees & Mészáros 1992; Mészáros et al. 1998), which details the temporal and spectral behaviour of the emission that is created by external shocks when a collimated ultra-relativistic jet ploughs into the circumburst medium, driving a blast wave ahead of it. Fundamental to this model is the electron energy distribution index, p; a characteristic parameter of the process by which the electrons are accelerated to relativistic speeds and by which they radiate via synchrotron emission. This acceleration mechanism, common to many astronomical jet sources (as well as particle acceleration in the solar wind and supernovae, and the acceleration of cosmic rays), is thought to be Fermi diffusive shock acceleration (Fermi 1954) due to the passage of an external shock (Blandford & Ostriker 1978; Rieger et al. 2007), after which the energy of the electrons, E, follows a power-law distribution, dN ∝ E^−p dE, with a cut-off at low energies. This is consistent with recent PIC simulations (Spitkovsky 2008) and Monte Carlo models (Achterberg et al. 2001; Ellison & Double 2002; Lemoine & Pelletier 2003) but at odds with others. The blast wave model describes how synchrotron emission from relativistic electrons produces a smoothly broken, broad-band spectrum that is well characterised by a peak flux and three time-evolving break frequencies (peak frequency, νm; cooling frequency, νc; synchrotron self-absorption frequency, νa) as well as the electron energy distribution index, p (Sari et al. 1998; Granot & Sari 2002). The spectrum is divided into four regimes by the three break frequencies, and within each regime the spectrum is asymptotically described by Fν ∝ ν^−β, where the spectral index, β, is a function of p only. By comparing the observed X-ray spectra to the predicted asymptotic values of the synchrotron spectra, we can extract information about the electron energy distribution index, p, which is dependent only on the underlying micro-physics of the acceleration process. Some (semi-)analytical calculations and simulations indicate that there is a nearly universal value of ∼2.2-2.4 (e.g., Kirk et al. 2000; Achterberg et al. 2001; Spitkovsky 2008), though other studies suggest that there is a large range of possible values for p of 1.5-4 (Baring 2004).
Observationally, different methods have been applied to samples of BATSE, BeppoSAX and Swift bursts which reached the conclusion that the observed range of p values is not consistent with a single central value of p (Chevalier & Li 2000; Panaitescu & Kumar 2002; Shen et al. 2006; Starling et al. 2008; Curran et al. 2009). The latter three showed that the width of the parent distribution is σp ∼ 0.3-0.5. However, in the studies so far there have been some limitations: multi-wavelength studies (Chevalier & Li 2000; Panaitescu & Kumar 2002; Starling et al. 2008; Curran et al. 2009) suffer from limited samples (≲10 sources each) with sufficient temporal and spectral observations, while those studies that rely on X-ray afterglows alone are subject to a large uncertainty because the position of the cooling frequency, relative to the X-ray, is unknown. The only X-ray study of Swift afterglows so far (Shen et al. 2006) used a very limited sample of spectral indices (∼30), dictated by the number of GRBs observed by Swift at the time, to estimate the distribution of p. Nor did they take a statistical approach to the position of the cooling frequency relative to the X-ray regime, as we do here. We interpret a much larger (∼300) and statistically more significant sample of Swift-observed GRB afterglows to constrain the distribution of the values of the electron energy distribution index, p. In §2 we introduce our method, while in §3 we present the results of our Monte Carlo analyses and their implications in the overall context of GRB observations and particle acceleration in general. We summarise our findings in §4. All uncertainties are quoted at the 1σ confidence level. METHOD Our general method is to constrain the electron energy distribution index, p, from the values of the X-ray spectral indices, βX or β, observed by the Swift XRT (Burrows et al. 2005) and detailed by Evans et al. (2009, Table 7). A normalised histogram of the spectral indices of these 301 GRB spectra is plotted in figure 1. We derive p from the spectral index as opposed to the temporal index because, for a given spectral index in the asymptotic limit, there are only two possible values of p, depending on whether the cooling frequency, νc, is less than or greater than the X-ray regime, νX, while for a given temporal index there are multiple possible values which are model dependent (e.g., the simple blast wave model (Rees & Mészáros 1992; Mészáros et al. 1998), or modifications thereof (e.g., Granot & Kumar 2006; Genet et al. 2007; Ghisellini et al. 2007; Uhm & Beloborodov 2007); for a discussion on the choice of X-ray spectral index to derive p see Curran et al. 2009). In accordance with synchrotron emission predicted by the blast wave model, we ascribe the behaviour of the unabsorbed X-ray spectrum to a single power law where the flux goes as Fν(ν) ∝ ν^−β, where β is the spectral index. Under the standard assumptions of slow cooling and adiabatic expansion of the blast wave, the electron energy distribution index is related, in the asymptotic limit, to the spectral index by either p = 2β (νc < νX) or p = 2β + 1 (νc > νX) (e.g., Granot & Sari 2002), implying a difference between the slopes of the two spectral regimes of ∆β = 0.5. Throughout we use the regime probability, X, as the probability that the cooling frequency is less than the frequency of the X-ray regime (i.e., νc < νX), and 1−X as the probability that νc > νX.
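As a concrete illustration of the p-β relation just stated, the following minimal Python sketch (ours, not the paper's code) maps an observed spectral index to p for each cooling regime:

```python
def p_from_beta(beta, cooling_below_xray):
    """Electron index p from the X-ray spectral index beta, in the
    asymptotic slow-cooling limit of the blast wave model:
    p = 2*beta     if nu_c < nu_X,
    p = 2*beta + 1 if nu_c > nu_X."""
    return 2.0 * beta if cooling_below_xray else 2.0 * beta + 1.0

# e.g., beta = 1.18 gives p = 2.36 (nu_c < nu_X) or p = 3.36 (nu_c > nu_X)
```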
We neglect the case where the cooling frequency may be passing through the X-ray regime (since there is no sign of spectral evolution in the sample) and the cases where the peak frequency, νm, or self-absorption frequency, νa, is greater than the X-ray regime, as this is not observed in late time afterglows. To parameterise the underlying distribution of the electron energy distribution index, p, from the X-ray spectral indices observed by Swift, we use a maximum likelihood Monte Carlo method. This method uses a maximum likelihood fit to return the most likely parameters of the assumed underlying model, the errors on which are estimated via a Monte Carlo error analysis. Another Monte Carlo analysis tests the probability that the observed distribution of spectral indices could be obtained from an underlying distribution of p described by the most likely parameters. In this method we first assume a model for the underlying distribution of p, which we transform into a distribution in spectral index space via the regime probability, X. We convolve this distribution with the measured probability of the data set and calculate the log-likelihood of the parameters of the underlying model (see appendix A). To estimate the most likely parameters of the underlying model and the regime probability, X, we minimise the log-likelihood (which equates to maximising the likelihood) using the simulated annealing method (§10.9 of Press et al. 1992, and references therein). Figure 1. Normalised histogram of the data (301 measurements) overlaid with high-resolution normalised histograms of the synthesized data sets (10^4 × 301 data points) from the most likely parameters of a single discrete p (grey line) and a Gaussian distribution of p (blue line), as detailed in Table 1. Uncertainties of the fit parameters are estimated via a Monte Carlo analysis, whereby the observed data are randomly perturbed within their (asymmetric) Gaussian errors and refit multiple times (10^4); the standard deviation of the returned most likely parameters is used as a measure of the uncertainties. We then find the probability that the observed distribution of spectral indices could have originated from an underlying distribution of p based on the most likely parameters estimated from the log-likelihood minimisation. We do so by generating 10^4 synthetic data sets drawn randomly from the underlying probability distribution of p, randomly transformed into spectral index space via the regime probability, X, and further randomly perturbed within the (asymmetric) Gaussian errors of each of our original data points. Each of these synthetic data sets is fit as above and the values of the log-likelihoods recorded; one would expect that if the data are consistent with the underlying model, at the 1σ level, the original log-likelihood value should fall within the 1σ distribution of the synthetic values. With 10^4 synthetic data sets we can measure the percentage to an accuracy of two decimal places and rule out chance agreements at the 4σ level; the ∼2 × 10^5 synthetic data sets required to rule out chance agreements at the 5σ level were considered too costly, computationally. There are two underlying models, or hypotheses, regarding the data that we want to test: that the observed distribution of spectral indices, β, can be obtained from an underlying distribution of p composed of i) a single discrete value (SDp) and ii) a Gaussian distribution (GDp). Details of these models and their log-likelihoods are discussed in the Appendix, A.
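To make the synthetic data set generation concrete, here is a minimal Python sketch under simplifying assumptions of our own: a single symmetric Gaussian measurement error of fixed width (beta_err), rather than the per-point asymmetric errors used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_betas(p_mean, p_sigma, X, beta_err, n=301):
    """Draw one synthetic data set: sample p from the Gaussian (GDp)
    model, map each p to a spectral index via the regime probability X
    (beta = p/2 where nu_c < nu_X, else (p-1)/2), then perturb each
    point by a measurement error."""
    p = rng.normal(p_mean, p_sigma, n)
    below = rng.random(n) < X              # True where nu_c < nu_X
    beta = np.where(below, p / 2.0, (p - 1.0) / 2.0)
    return beta + rng.normal(0.0, beta_err, n)

# 10^4 realizations from the most likely GDp parameters of Table 1
sets = [synth_betas(2.36, 0.59, 1.0, 0.1) for _ in range(10**4)]
```

Each realization would then be refit as described above and its log-likelihood compared with that of the observed data.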
RESULTS AND DISCUSSION 3.1. Distribution of p The results of our analysis are detailed in Table 1, which shows our most likely parameters for the electron energy distribution index, p, the related Gaussian scatter, σp, the probability that the cooling frequency is less than the X-ray frequency (νc < νX), X, and the log-likelihood of that fit, l. Table 1. The most likely values of the electron energy distribution index, p, the related Gaussian scatter, σp, the probability that the cooling frequency is less than the X-ray frequency (νc < νX), X, and the log-likelihood of that fit, l. Values in brackets are the average and error values from the Monte Carlo error analysis of perturbed data sets, while those in square brackets are the average and standard deviations returned from the Monte Carlo analysis of synthesized data sets. % is the percentage of synthesized data sets with a better fit than the original. Though our most likely discrete value of p = 2.25 agrees well with the predicted universal value of p ∼ 2.2-2.3 (e.g., Kirk et al. 2000; Achterberg et al. 2001), it is comprehensively rejected by our tests; the hypothesis that the observed distribution of spectral indices, β, can be obtained from an underlying distribution of p composed of a single discrete value (SDp) is rejected at the 4σ level, as all synthesized data sets had better log-likelihoods. The hypothesis that the observed distribution can be obtained from an underlying Gaussian distribution (GDp) centred at p = 2.36, having a width of 0.590 and regime probability X = 1.000, is acceptable at the 1.5σ level (Figure 2). As a visual aid, we compare the normalised histogram of the observed data with the high-resolution normalised, average histograms of the 10^4 synthesized data sets (Figure 1). Note that the SDp model is clearly a poor fit and exhibits a secondary peak at β ∼ 0.6 due to the fact that X = 0.96 for the likelihood fit of that model to the observed spectral indices. This result confirms the results from previous small-sample GRB afterglow studies (Shen et al. 2006; Starling et al. 2008; Curran et al. 2009) as regards the non-universality of p, the central value at p ∼ 2.0-2.5, and the width of the distribution of σp ∼ 0.3-0.5. However, our results are based on a sample of bursts an order of magnitude larger than these studies, and we took a statistical approach to the position of the cooling frequency relative to the X-ray, using the regime probability, X. Given that the value of p is not universal, it may be possible that it changes suddenly or evolves gradually with time or radius even in a single event as environmental shock parameters (e.g., magnetic field, ambient density) change or evolve (e.g., Hamilton & Petrosian 1992; Vlahos et al. 2004; Kaiser 2005). It is also possible that different components of a structured jet (Mészáros et al. 1998; Kumar & Granot 2003), multicomponent jet (Berger et al. 2003; Huang et al. 2004) or jet-cocoon (Zhang et al. 2003) could have different values of this parameter, though a change or evolution of p should be observable as a change or evolution of the synchrotron spectral index, β, and no significant example of such an evolution has been observed in GRB afterglows. 3.2.
Limits on regime probability, X Though previous multi-band (optical to X-ray) studies (e.g., Panaitescu & Kumar 2002; Starling et al. 2008; Curran et al. 2009) have shown that the cooling break frequencies of a number of GRB afterglows are greater than the X-ray frequency (νc > νX), here, accepting that the underlying distribution of p is a Gaussian, we find that this number is consistent with zero (X = 1). From our Monte Carlo error analysis (where we refit the randomly perturbed data multiple times) we can plot the distribution of possible values of X (Figure 3; note that this is a log-log plot). The distribution of the errors is clearly not Gaussian, so we should not use the standard deviation as a measure of error as we have in Table 1. We can, however, place a nominal 3σ lower limit on the value of X by estimating the value at which there is 99.7 percent containment; we find that this is at 0.935 < X < 0.936, compared to the nominal 1σ (68.2 percent containment) at 0.988 < X < 0.989. Any value of X less than this limit is inconsistent with the distribution of the data and would produce a clear secondary peak at β = (p − 1)/2, not observed in the data (Figure 1). To avoid this peak, an extremely wide distribution is needed, much wider than the spread of the observed data. Hence, the upper limit on the percentage of GRBs in our sample where the cooling frequency is greater than the X-ray frequency (νc > νX) is approximately 6.5 percent, or ∼20 out of 301 GRBs. This is discrepant with the ratio observed in the multi-band studies (e.g., Starling et al.; Curran et al.: 5 out of 10 and 2 out of 6, respectively) but not seriously so, given the extremely low number statistics of those studies. If we create a sample of 6 random bursts from our statistical distribution of 301, there would be a non-negligible (∼5 percent) chance that 2 or more bursts would have a cooling frequency greater than the X-ray frequency, consistent with the aforementioned Swift study (Curran et al.). The study of Starling et al. is based on a sample of BeppoSAX, as opposed to Swift, GRBs, which may have a different limit on the regime probability, X. An investigation as to why the cooling frequencies of afterglows follow this apparent limit, or its implications regarding the distribution of other parameters, is beyond the scope of this work. CONCLUSION We use the X-ray spectral indices of gamma-ray burst afterglows observed by the XRT aboard Swift to parameterise the underlying distribution of the electron energy distribution index, p, within the framework of the blast wave model. The electron energy distribution index is a fundamental parameter of the synchrotron emission from a range of astronomical sources and, in this case, of the synchrotron emission of GRB afterglows. We use a maximum likelihood Monte Carlo analysis to test two hypotheses, namely that the observed distribution of spectral indices, β, can be obtained from an underlying distribution of p composed of i) a single discrete value and ii) a Gaussian distribution. We find that the observed distribution of spectral indices is inconsistent with the first hypothesis but consistent with the second, a Gaussian distribution centred at p = 2.36 and having a width of 0.59. Furthermore, if we accept that the underlying distribution is a Gaussian, the majority (>94 percent) of GRB afterglows in our sample have a cooling break frequency less than the X-ray frequency. We thank the referee for their comments. PAC, PAE, MJP, MdP acknowledge support from STFC.
AJvdH was supported by an appointment to the NASA Postdoctoral Program at the MSFC, administered by Oak Ridge Associated Universities through a contract with NASA. Here L(p, σp, X) is the likelihood function; it is a numerically calculable function that is minimised to find the most likely parameters, errors on which can be estimated via a Monte Carlo error analysis. If the distribution of the electron energy distribution index, p, can be described by a single discrete value,

P(p | p̄) = 1 (p = p̄), 0 (p ≠ p̄),

then the probability of β is

P(β | p̄) = X (β = p̄/2), 1 − X (β = p̄/2 − 0.5), 0 (otherwise), (A6)

and the above convolved probabilities and likelihoods hold with σp = 0.
The Recent Developments in Biobased Polymers toward General and Engineering Applications: Polymers that Are Upgraded from Biodegradable Polymers, Analogous to Petroleum-Derived Polymers, and Newly Developed The main motivation for development of biobased polymers was their biodegradability, which is becoming important due to strong public concern about waste. Reflecting recent changes in the polymer industry, the sustainability of biobased polymers allows them to be used for general and engineering applications. This expansion is driven by the remarkable progress in the processes for refining biomass feedstocks to produce biobased building blocks that allow biobased polymers to have more versatile and adaptable polymer chemical structures and to achieve target properties and functionalities. In this review, biobased polymers are categorized as those that are: (1) upgrades from biodegradable polylactides (PLA), polyhydroxyalkanoates (PHAs), and others; (2) analogous to petroleum-derived polymers such as bio-poly(ethylene terephthalate) (bio-PET); and (3) new biobased polymers such as poly(ethylene 2,5-furandicarboxylate) (PEF). The recent developments and progress concerning biobased polymers are described, and important technical aspects of those polymers are introduced. Additionally, the recent scientific achievements regarding high-spec engineering-grade biobased polymers are presented. Introduction In the mid-20th century, the polymer industry completely relied on petroleum-derived chemistry, refinery, and engineering processes. The negative impacts of these processes on the environment were scientifically discussed in this period, but the processes were not changed in industrial settings until their negative effects reached a critical level around the 1980s. At this point, biodegradable polymers such as polylactides (PLA), poly(hydroxy alkanoates) (PHAs), succinate-derived polymers, and others began to be developed, and practical biodegradable polymers were commercialized and launched, solving many waste problems in the agricultural, marine fishery, and construction industries, among others [1,2]. The development of biodegradable polymers is recognized as one of the most successful innovations in the polymer industry to address environmental issues. Since the late 1990s, the polymer industry has faced two serious problems: global warming and depletion of fossil resources. One solution in combating these problems is to use sustainable resources instead of fossil-based resources. Biomass feedstocks are a promising resource because of their sustainability. Biobased polymers are generally divided into three classes:
• 1st class; naturally derived biomass polymers: direct use of biomass as polymeric material, including chemically modified ones such as cellulose, cellulose acetate, starches, chitin, modified starch, etc.;
• 2nd class; bio-engineered polymers: bio-synthesized by using microorganisms and plants, such as poly(hydroxy alkanoates) (PHAs), poly(glutamic acid), etc.;
• 3rd class; synthetic polymers such as polylactide (PLA), poly(butylene succinate) (PBS), bio-polyolefins, and bio-poly(ethylene terephthalate) (bio-PET) [8,9].
Usually, 1st class polymers are used directly without any purification, and 2nd class polymers are directly produced from naturally derived polymers without any breakdown; they play an important role in situations that require biodegradability. Direct usage of 1st and 2nd class polymers allows for more efficient production, which can produce desired functionalities and physical properties, but chemical structure designs have limited flexibility.
Monomers used in 3rd class polymers are produced from naturally derived molecules or by the breakdown of naturally derived macromolecules through a combination of chemical and biochemical processes. As breakdown processes allow monomers to have versatile chemical structures, polymers comprised of these monomers also have extremely versatile chemical structures. It is practically possible to introduce monomers of 3rd class polymers into the existing production systems of petroleum-derived polymers. For the above reasons, the 3rd class of biobased polymers is the most promising. Some of these 3rd class polymers, such as bio-polyolefins and bio-PET, are not supposed to enter natural biological cycles after use. Thus, the contribution to reducing environmental impact from these polymer classes is mainly derived from reducing the carbon footprint. In Table 1, the chronological development and categorization of biobased polymers based on application fields are displayed and compared with those of petroleum-derived polymers. From 1970 to 1990, PLA (low L-content) and poly(hydroxy alkanoates) (PHAs) were the most important and representative developments of biobased polymers [11]. During that period, scientists developed a fundamental understanding of biobased polymers for future applications. Since the 1990s, biobased polymers have gradually shifted from biodegradable applications to general and engineering applications. High L-content PLLA, high molecular weight PHAs, and stereocomplexed PLA (sc-PLA) (low Tm grade) are important examples of this development. The deliverables of these developments were effectively comparable to certain petroleum-derived polymers. In addition to physical modification and optimization, the importance of chemical modification and optimization has been emphasized, as they allow for further improvement and new functionalities of biobased polymers. This review introduces recent important developments in chemical modifications of biobased polymers and the development of new biobased building blocks for new-generation biobased polymers. Biobased Polymers: Upgraded from Biodegradable-Grade Polymers PLA, PHAs, and succinate polymers are the most common biobased polymers since they have been successfully applied in the biodegradable plastic industry. The biodegradability of these polymers has been utilized to solve environmental issues, such as waste and public pollution. Due to changes in the social requirements for biodegradable polymers, it is necessary to improve the performance of biodegradable polymers so they can be used for general and engineering applications. Recent examples of the development and applications of PLA, PHAs, and succinate polymers are described below. Their fundamental properties and chemistries are also introduced. PLA is generally prepared via ring-opening polymerization (ROP) of lactide, which is a cyclic dimer of lactic acid. Direct polycondensation from lactic acid is also performed, but ROP is the standard process in most industries. PLA has a chirally active chain structure, and controlling it allows one to determine the physical properties of PLA. The relationship between the physical properties and L-unit content of PLA has been comprehensively studied [23]. Regarding the effectiveness of biological production, L-lactic acid has superior productivity compared to D-lactic acid. Therefore, poly(L-lactide) (PLLA) is more commonly commercialized. The parameters listed in Table 2 are related to crystallinity.
The table shows a clear trend in which the physical properties of PLA are improved by increasing the purity of the L-unit content. The growth rate of spherulites is almost proportional to the L-unit content; when the L-unit content is increased by 1.0%, the growth rate of spherulites increases about 2.0 times. Other parameters concerning the crystallization properties of PLA show a similar trend; crystallinity depends on crystallization conditions, but in this report it is described that crystallinity increases 1.3 times when the L-unit content is increased by 1.0%. The common crystal structure of highly pure homo-chiral PLA is pseudo-orthorhombic and consists of left-handed 10₃-helical chains, which are generally called α-forms [24][25][26]. A slightly disordered pseudo-orthorhombic PLA is called an α'-form [27,28]. Because of the slightly disordered structure of the α'-form, an α'-form based PLA crystal has lower thermal and physical properties than those of the α-form. Table 3 summarizes the infrared spectroscopy (IR) frequencies of α-forms and α'-forms. Although both α-forms and α'-forms have the same helical conformation, IR analysis of these forms reveals different results, which can be utilized for detection of the crystallization form of PLA [29]. The chemical structure and conformation of homo-chiral PLLA are shown in Figure 1. Table 3. IR frequencies of amorphous, α'-form, and α-form PLLA [29]. Sc-PLA is a complex form of PLLA and poly(D-lactide) (PDLA) that was initially reported as an insoluble precipitate from solutions [30]. As the chemical properties of PLA change during the formation of sc-PLA, the original solubilities of homo-chiral PLAs are lost. As a result, sc-PLA is selectively precipitated as granules made from sc-PLA crystallites. A sc-PLA film is created from the Langmuir-Blodgett membrane when PLLA and PDLA are combined [31]. In addition, PLLA and PDLA with molecular weights as high as 1000 kDa show preferable stereocomplexation on the water surface. Further, sc-PLA can be assembled on a quartz crystal microbalance (QCM) substrate by stepwise immersion of the QCM in acetonitrile solutions of PLLA and PDLA [32]. The Langmuir-Blodgett membrane and assembly methods are interesting new approaches to achieve nano-ordered structural control of sc-PLA layers. A striking property of sc-PLA is its high Tm (around 230 °C). This is 50 °C higher than the conventional Tm of homo-chiral high L-content PLLA. In contrast to the stereocomplexation of high molecular weight PLA in a solution state, a simple polymer melt-blend of PLLA and PDLA is usually accompanied by homo-chiral crystallization of PLLA and PDLA, particularly when their molecular weight is sufficient for general industrial applications. The homo-chiral crystals deteriorate the intrinsic properties of sc-PLA, but this drawback can be overcome using stereoblock PLA (sb-PLA). A sb-PLA with an equimolar or moderately non-equimolar PLLA to PDLA ratio features 100% selective stereocomplexation [13]. Therefore, the formation of homo-chiral PLA-derived crystallization, which is known to cause the poor physical performance of sc-PLA produced from direct combination of PLLA and PDLA, is prevented. An important issue with sb-PLA is that its Tg is identical to that of homo-chiral PLA, and thus the final thermal durability of sb-PLA is controlled by Tg due to its relatively low crystallinity. The chemical structure of sb-PLA and the conformation of sc-PLA from a combination of PLLA/PDLA are shown in Figure 1.
Examples of PLA Applications Although the application of PLA was limited to biodegradable plastics in the early stage of its development, it has been successfully applied to general and semi-engineering situations and achieved successful commercialization. The most common commercialized PLA in the world is made by NatureWorks, which trademarked their PLA "Ingeo" [35]. Currently, there are more than 20 commercialized types of Ingeo with both amorphous and semi-crystalline structures, allowing customers to choose the PLA that is appropriate for their specific situations (Table 4). Table 4. Representative commercialized Ingeo grades, listing grade name (e.g., 3001D, 4032D, 6060D), typical application (e.g., film, sheet, fiber, non-woven), melt flow rate, Tm (roughly 122-170 °C for semi-crystalline grades), and Tg (45-60 °C). Another important player affecting the industrialization of PLA is Corbion/Total. Now, there are many commercial-grade PLAs on the market, such as Biofoam, made by Synbra; Revode, made by Zhejiang Hisun Biomaterials Biological Engineering; Futerro, made by Futerro; Lacea, made by Mitsui Chemicals; and Terramac, made by Unitika. sc-PLA will play a key role in future engineering applications of PLA. Biofront, made by Teijin, is a good example of the industrial development of sc-PLA [36]. This product features high physical properties, including a melting point of 215 °C, an HDT of 130 °C at 0.45 MPa, and a modulus of 115 MPa at 23 °C. These properties are considered suitable enough for sc-PLA to replace petroleum-derived engineering plastics. Poly(hydroxyalkanoates) (PHAs) PHAs are members of a family of polyesters that consist of hydroxyalkanoate monomers. In nature, they exist as homopolymers such as poly(3-hydroxybutyrate) (P3HB) or copolymers such as poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (P(3HB-co-3HV)) [37]. PHAs exist as granules of pure polymer in bacteria, which use them as an energy storage medium (akin to fat for animals and starch for plants). PHAs are commercially produced using energy-rich feedstock, which is transformed into fatty acids on which the bacteria feed. During industrial production of PHAs, after a few "feast-famine" cycles, cells are isolated and lysed.
The polymer is extracted from the remains of the cells, purified, and processed into pellets or powder [37]. In addition to using pure feedstock as a source of energy for PHA production, there are on-going efforts to use energy-rich waste water as feedstock [38]. Production of PHAs can be improved using genetic modification, either by increasing the amount of PHA-producing bacteria or by modifying plants to start making PHAs [39,40]. Chemical synthesis of PHAs via the ROP of a corresponding lactone is also feasible; ROP of lactones for PHAs can be done via metal-based or enzymatic catalysts [41]. However, the chain of chemically synthesized PHAs is shorter in length than that of biologically synthesized PHAs. The latter also ensures great stereo control and an enantiomerically pure (R) configuration in almost all PHAs. Through depolymerization, enantiomeric purity allows for the creation of an enantiomeric monomer that can be used as a building block [42]. On the other hand, when pure (S)-methyl 3-hydroxybutyrate is used as feedstock for the production of PHAs, the corresponding (S)-configuration polymer is produced [43]. The biological synthesis of P3HB is displayed in Figure 2. Sugars in the feedstock are converted to acetates, which are complexed to coenzyme A to form acetyl CoA. This product is dimerized to acetoacetyl CoA, which is then reduced to hydroxybutyryl CoA and polymerized.
PHAs consisting of 4-14 carbon atoms in the repeating unit are called "short chain length PHAs" (sCL-PHAs) or "medium chain length PHAs" (mCL-PHAs) [44]. Some of these PHAs are commercialized. The average molecular weight (Mw) of PHAs corresponds to their chain length. Typically, the Mw of sCL-PHAs is around 500,000, while that of mCL-PHAs is lower than 100,000. In large part, the chain length of PHAs determines the flexibility of the polymer, with short chain butyrate providing the most rigidity and longer side chains disturbing crystal packing, resulting in more flexibility. Long chain length PHAs, which consist of repeating units of more than 14 carbon atoms, and PHAs that consist of either aromatic or unsaturated side-chains are rarely commercialized. The most commonly commercialized PHAs are P3HB, P(3HB-co-3HV) and P(3HB-co-3-hydroxyhexanoate) (P(3HB-co-3HH)), the thermal and physical properties of which are displayed in Table 5. P3HB has a Tg of 4 °C, which becomes lower when the PHA has a longer chain length. The Tm of PHAs decreases with increasing chain length; P3HB has a melt temperature of 160 °C, while the melt temperature of P(3HB-co-3HV) is only 145 °C. Both Tg and Tm can be altered by changing the ratio of repeating units. The chemical structures of PHAs are shown in Figure 3. Table 5. Thermal and mechanical properties of representative PHAs (P3HB, P(3HB-co-20% 3HV), P(3HB-co-12% 3HH), and poly(4-hydroxybutyrate) (P4HB)) [45]. P3HB crystallizes in an orthorhombic structure (a = 5.76 Å, b = 13.20 Å, and c = 5.96 Å), and its crystallinity can reach 80% [46]. Pure P3HB has poor nucleation density, leading to slow crystallization due to the formation of large crystallites induced by poorly dispersed nucleation points. A promising way to improve crystallization speed is quiescent crystallization under isothermal conditions at a temperature 10-20 °C lower than the usual crystallization conditions. This allows crystallization with the greatest possibility of arranging chains. It should be applied with appropriate nucleation agents for optimum processing in industry. Processing PHAs is challenging compared to conventional petroleum-derived polymers because of their sensitivity to thermal degradation and slow solidification due to slow crystallization. The degradation temperature of PHAs is around 180 °C, which is near the optimum processing temperature for polyester. A rapid increase of shear-induced internal heat can cause severe degradation, leading to a drop in molecular weight and discoloration. For these reasons, it is important to precisely monitor and control the practical temperature in extruders during the processing of PHAs. Processing PHAs is also challenging due to their low durability and tackiness in the final product caused by insufficient crystallinity. Cooling below the Tg can easily decrease tackiness, but the Tg of PHAs is 0 °C or lower, which is not an easily controllable temperature for the conventional extruders and molders used in the plastic industry. Polysaccharides Carbohydrates are probably the most prevalent group of organic chemicals on earth. Encompassing monosaccharides, disaccharides (commonly known as sugars), oligosaccharides, and polysaccharides, they are present in all lifeforms. Polysaccharides include well known polymers, such as cellulose and starch and their derivatives, as well as more exotic polymers, such as chitosan and pectin. In this review, we will briefly focus on cellulose and starch. The chemical structures of cellulose and starch are shown in Figure 4.
Cellulose, or more specifically, cellulose nitrate, has a special place in the history of polymer chemistry: it is the first polymer to have been deliberately synthesized by human beings, during the quest for synthetic ivory. Cellulose nitrate led to further derivatives of cellulose, such as cellulose acetate, because of safety aspects in handling and processing. Cellulose derivatives are still used on a wide scale in film, cigarette filters, and biomedical applications [47]. Other cellulose products, such as paper and cotton clothes, can be viewed as polymeric products. Cellulose is important to the polymer industry due to its abundance in plant fibers. It is not used as a polymer matrix but as an additive; the incorporation of natural fibers (e.g., wood, hemp, and flax) into a polymer compound improves the mechanical properties of the final product. The current focus of cellulose research is nano-cellulose, including cellulose nano-fibers and nano-crystalline cellulose [48][49][50]. Cellulose nano-fibers are delaminated fibrils with a small diameter (5-25 nm) and long length (micrometer scale). Cellulose nano-fibers have interesting properties, such as high tensile strength and absorbance ratio. Nano-crystalline cellulose, tiny crystals of cellulose, is of interest due to its high mechanical load capacity and shear thinning properties. Both materials are produced from wood fibers after intensive physical, chemical, and separation procedures. As previously mentioned, starch is a means for obtaining and storing energy in plants. Starch-rich plants have been used for ages as sources of food, and starch is very commonly extracted for use in industry. Starch is stored in granules containing linear amylose and branched amylopectin. Both feature repeating units of D-glucose linked in an α-1,4 fashion, with amylopectin containing about 6% of 1,6 linkages. Natural starch is not directly processable; rather, starch and water are passed through an extruder, which produces thermoplastic starch (TPS) [51,52]. TPS is, however, not stable, and retrogradation is an issue; i.e., TPS tries to revert to its natural starch form. The main process here is the gelatinization of the starch granules, which causes swelling of the amorphous parts of the granules. To stabilize the thermoplastic starch, plasticizers (e.g., glycols and sugars) are added. An interesting approach to improve starch processability is the formation of an amylose-lysophosphatidylcholine complex to control rheological behavior [53]. The induced lower modulus proved the formation of a particle gel, resulting in less retrogradation. The complexation is also able to decrease the susceptibility of starch granules to amylase digestion [54,55]. Succinate Polymers As biobased succinic acid (SA) becomes more commercially available, more biobased succinate polymers are being developed [56,57]. Poly(butylene succinate) (PBS), which is produced by direct polycondensation of SA and butanediol (BD), is one of the most well-known succinate polymers [58]. Both SA and BD for commercialized PBS were until recently produced only from fossil fuel resources, but the high interest in green sources led to the discovery that the two monomers can be obtained from refined biomass feedstock. The widely commercialized PBS named "Bionolle" was launched by Showa Denko in 1993, and 3000-10,000 tons are produced per year.
Since 2013, Succinity, a joint venture of BASF and Corbion, has been able to produce 10,000 tons of 100% biobased PBS.
Poly(ethylene succinate) (PES), produced via polymerization of succinic acid and ethylene glycol, is biodegradable and could also be sourced from biobased building blocks [59]. PES was commercialized by Nippon Shokubai from fossil resources. It has been claimed that PES is suited for film applications due to its good oxygen barrier properties and elongation. Copolymers of succinic acid and other dicarboxylic acids, such as adipic acid for poly(butylene succinate-co-butylene adipate) [60], poly(butylene succinate-co-butylene terephthalate) [61], and poly(butylene succinate-co-butylene furandicarboxylate) [62], have been reported. Figure 5 shows the chemical structures of the presented succinate polymers. Because of these polymers' relatively long alkyl chains, they usually have soft properties; for instance, PBS has a Tm of 115 °C and a tensile strength of 30-35 MPa. Thus, succinate polymers are usually considered an alternative to polyolefins in the packaging industry. Biobased Polyethylene (Bio-PE) Due to soaring oil prices, bio-ethanol produced by fermentation of sugar streams attracted the fuel industry in the 1970s. Bio-ethanol could also be chemically converted to bio-ethylene for the production of biobased polyethylene (bio-PE) [63]. A drop in the price of oil diminished the bio-PE market, but the polymer continues to be exploited by important players such as Braskem due to increasing oil prices and environmental awareness [64]. The big advantage of bio-PE is the fact that its properties are identical to those of fossil-based PE, which has a complete infrastructure for processing and recycling.
However, it faces direct competition with fossil-based feedstock, the price of which heavily fluctuates (e.g., shale gas is cheap) [65]. The downside of biobased PE is that it is not biodegradable. However, as will be shown next, some plastics produced from fossil fuel feedstock are biodegradable. Biobased Poly(Ethylene Terephthalate) (PET) and Poly(Trimethylene Terephthalate) (PTT) PET is a high-performance engineering plastic with physical properties that are suitable enough to be applied to bottles, fibers, films, and engineering applications. While PET plays an important role in the plastic market, the huge consumption of this polymer results in serious environmental issues, especially regarding waste, because of its poor sustainability and degradability. Polymer-to-polymer material recycling of PET has been launched in some fields, but it is always accompanied by non-negligible deterioration of the polymer's physical properties in the final recycled products due to side-reactions, thermal degradation, hydrolysis, and thermo-oxidative degradation during recycling. Ways to chemically recycle PET are under development, but many technical difficulties, such as the high stability of PET under normal hydrolysis, alcoholysis, or breakdown processes, must be overcome. To realize a truly sustainable PET industry, it is important to establish sustainable production of monomers for biobased PET (bio-PET) from sustainable resources, such as biomass. First, ethylene glycol (EG) from petroleum-derived sources must be replaced by EG from biobased sources. The Coca-Cola Company (TCCC), a beverage giant, has accelerated the production of bio-PET known as "PlantBottle" [66]. PlantBottle, which was launched in 2009, consists of 30% biobased materials: 100% biobased EG (bio-EG) combined with petroleum-derived terephthalic acid (TPA). Following this, biobased terephthalic acid (bio-TPA) is being developed to further improve the sustainability of PET, as bio-TPA is produced from naturally derived sustainable biomass feedstock. Theoretically, combining bio-EG and bio-TPA could achieve bio-PET derived 100% from natural biomass feedstock. Figure 6 shows the proposed development of bio-TPA from biomass feedstock. One of the most important players in the development of bio-TPA is Gevo [67]. Based on technology announced by Gevo, biobased isobutylene, obtained by dehydration of iso-butanol produced from sugar, is a key building block for various chemicals, such as ethyl tert-butyl ether, methyl methacrylate, isooctane, and other alkanes. For bio-TPA production, p-xylene is first produced from isooctane via cyclization and dehydrogenation. Second, the p-xylene is converted to bio-TPA via oxidation. However, this is not the only way to obtain bio-TPA; it has been proposed that bio-TPA could be obtained from other biomass-derived products. Muconic acid, which is produced from sugar through a combination of chemical processes and biorefinery, is one interesting building block for bio-TPA [68]. After a series of stepwise cis-cis to trans-trans transitions of muconic acid, a tetrahydroterephthalic acid (THTA) can be produced by an ethylene addition reaction, dehydrogenation of which produces bio-TPA. Bio-TPA produced from limonene-derived building blocks is also under development. p-Cymene is a limonene-derived precursor that can be produced from chemical refinery of limonene [69]. Oxidation uses concentrated nitric acid for the iso-propyl group, which reacts with potassium permanganate.
This oxidation results in 85% overall conversion from limonene, which is the target in industrial applications. Bio-TPA from furan derivatives should also be featured, as biobased furan derivatives such as 2,5-furandicarboxylic acid (FDCA) are already at the large-scale pilot production stage [70][71][72][73]. The Diels-Alder (DA) reaction is the key chemical reaction in bio-TPA production from furan derivatives. First, furfural is oxidized and dehydrated to produce maleic anhydride, which is then reacted with furan to produce a DA adduct. Dehydration of the DA adduct results in phthalic anhydride, which is converted to bio-TPA via phthalic acid and dipotassium phthalate. Another interesting bio-TPA synthesis pathway involving DA was reported by Avantium, the leading developer of biobased FDCA. Hydroxymethylfurfural (HMF) from fructose is an important precursor of FDCA; hydrogenation converts HMF to dimethylfuran (DMF). DMF is converted to p-xylene through several steps, such as cyclization with ethylene by DA and dehydration, and then the p-xylene is converted to bio-TPA. Another interesting approach is reported by BioBTX, which is building a pilot plant to produce aromatics (benzene, toluene, and xylene, or BTX) by means of catalytic pyrolysis of biomass (e.g., wood and other lignin-rich biomass resources) [74]. Figure 6 shows the four methods of bio-TPA production discussed above. Similar to bio-EG, biobased 1,3-propanediol (bio-PDO) is used for the development of biobased poly(trimethylene terephthalate) (PTT). Because of the good shape recovery properties of PTT due to its unique chain conformation, PTT fibers are used in the carpet and textile industries. Partnering with Tate & Lyle and Genencor, DuPont produces bio-PDO named "Susterra" by fermenting sugars from starches [75]. Susterra is used as a building block for a biobased PTT named "Sorona", which consists of 37 wt % sustainable components. Biobased poly(butylene terephthalate) (PBT) will become an available biobased polyester, since biobased production of the monomer component BD is under steady development [76]. PBT is used for special applications that require high dimensional stability and excellent slidability; it opens new applications for biobased polymers where PET and PTT are rarely used. Biobased Polyamides Development of biobased polyamides has been accelerated by recent progress in the refining of biobased building blocks.
Table 6 lists the chemical structures, typical Tm, moduli, and suppliers of commercialized biobased polyamides. Polyamides 6 and 6.6 are representative petroleum-derived polyamides that are widely used for many general and engineering applications. Thus, making the properties of biobased polyamides similar to those of polyamides 6 and 6.6 is a reasonable milestone when creating realistic development strategies. Figure 7 shows typical methods for producing the building blocks and polymerizing biobased polyamides. As shown in Table 6, the thermal properties of biobased polyamides containing four carbons (4C) are comparable to or higher than those of polyamides 6 and 6.6. The high Tm of 4C polyamides is accompanied by high thermal durability and mechanical strength, but the rigidity of these polyamides should be moderated to ensure stable processing in practical extrusion and injection molding. Among the general techniques for moderating rigidity, branching in the main chain of the polymer may be the most promising for polyamide 4 [77]. In this report, 3- and 4-arm branched polyamide 4 were prepared with high molecular weights, comparable to those of linear polyamides. The branched structure improved the mechanical properties without decreasing Tm. Although promising improvements in physical properties were made, some technical issues regarding polyamide 4 must still be overcome, including the level of gel formation during polymerization. When the amount of initiator for branched polyamide 4 is higher than 3.0 mol %, some gelation occurs, which might negatively affect the physical properties of the final product. For stable industrial production of polyamide 4, the optimum polymerization process must be determined. Table 6 displays polyamides that consist of 4C, 10-carbon (10C), 11-carbon (11C), and 12-carbon (12C) biobased building blocks [77][78][79]. The biobased polyamides comprising 10C, 11C, and 12C building blocks have milder and softer physical properties. However, these properties are prized for applications such as automotive fuel lines, bike tubing, and cable coating, which require flexibility. Sebacic acid for C10 and 11-aminoundecanoic acid for C11 are the building blocks for polyamides 4.10, 6.10, 10.10, and 11. The long alkyl chains of these building blocks result in low water uptake and low density, which are advantages over conventional polyamides. Despite the relatively low Tm of polyamides 6.10, 10.10, and 11 compared to polyamides 6 and 6.6, the flexibility of the long alkyl chains is attractive for engineering applications that require high impact resistance and resilience.
Another interesting approach to creating biobased polyamides is the development of biobased lactams [80][81][82]. The proposed steps of lactam synthesis are rather complicated, but it is expected that the tunability of biobased lactams will lead to new functionalized polyamides in the near future.

PEF from Condensation of Diol and FDCA

In the past, PEF was not considered special; it was regarded as a downgraded PET because of its slow crystallization and low Tm. Although the general flow of the polymerization processes, the physical properties, and the fundamental crystallography of PEF had been reported in the 1940s, the available information was not sufficient for practical application. However, in 2008, reliable information about PEF, including its currently known polymerizations, was reported [83]. Around the same time, the now widely known thermal properties of PEF (Tm around 210 °C and Tg around 80 °C) were reported [84]. Other studies followed, increasing the scientific understanding of the physical properties of PEF. The thermal decomposition temperature of PEF is approximately 300 °C, with decomposition involving the β-hydrogen (β-C-H) bonds [85]. The brittleness and rigidity of PEF result in about 4% elongation at break [86]. PEF is generally produced by polycondensation and polytransesterification of EG with FDCA or its derivatives, such as FDCA dichloride, dimethyl-FDCA, diethyl-FDCA, or bis(hydroxyethyl)-FDCA [85]. Solid-state polymerization (SSP) is the key to obtaining the high molecular weight that makes PEF suitable for engineering applications. These steps are analogous to the industrial processes for producing PET. The results of scientific studies have been successfully applied to pilot and upcoming industrial production of PEF. The most widely known example of industrial PEF production is that of Synvina, a joint venture of Avantium and BASF. Figure 8 shows Avantium's plan for production of PEF from fructose-derived FDCA [87]. First, fructose is converted to 5-methoxymethylfurfural (MMF) by dehydration. MMF is then oxidized to produce crude FDCA, which can be purified to achieve the high-grade FDCA used for production of PEF. With optimal modification, MMF and hydroxymethylfurfural can serve as important intermediate biobased building blocks for fine chemicals; therefore, the side products of FDCA may create a new biobased industry. In Avantium's plan, the side product methyl levulinate is also considered an interesting chemical for the development of biobased building blocks. The important properties and functionalities of PEF are compared with those of PET in Table 7 [88]. The remarkably high gas-barrier properties of PEF should be emphasized; the high O2 barrier is advantageous for packaging, leading to PEF's practical application in the food and beverage industry. The thermal properties of other poly(alkylene furanoates) (PAF) from FDCA and other biobased aliphatic diols, such as C3-C18 linear alkyl chains, have also been reported [85,86,[89][90][91][92][93][94]. Their Tm and Tg steadily drop as the alkylene chain becomes longer: Tm and Tg are 211 °C and 86 °C for PEF, 172 °C and 57 °C for poly(trimethylene furanoate), 172 °C and 44 °C for poly(butylene furanoate) (PBF), and 144 °C and 13 °C for poly(1,6-hexanediol furanoate), respectively [92].
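The chain-length dependence can be made concrete with a few lines of Python; the sketch below simply tabulates the Tm/Tg values quoted above from [92] against the number of carbons in the diol-derived alkylene unit and prints the incremental change in Tg (an illustration of the reported trend, not new data):

```python
# Reported Tm/Tg (°C) of poly(alkylene furanoates) versus the number of
# carbons n in the diol-derived alkylene unit; values quoted from [92].
paf = {
    2: (211, 86),  # PEF
    3: (172, 57),  # poly(trimethylene furanoate)
    4: (172, 44),  # PBF
    6: (144, 13),  # poly(1,6-hexanediol furanoate)
}

ns = sorted(paf)
for a, b in zip(ns, ns[1:]):
    d_tg = (paf[b][1] - paf[a][1]) / (b - a)
    print(f"C{a} -> C{b}: dTg ≈ {d_tg:+.0f} °C per added CH2")
# Prints -29, -13, and -16 °C per CH2: Tg falls monotonically with length.
```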
PAF containing isosorbide, which has a rigid and bulky structure, is also an interesting polymer because of its outstanding Tg [93]. The reported isosorbide-containing PAF shows a Tg of 196 °C with excellent amorphous properties. PAF containing long alkylene chains have also been prepared by an environmentally benign process, i.e., enzymatic polymerization [95]. In this report, the structure-property relationships, for example between alkylene chain length and thermal properties, crystallinity, and alkylene component, were scientifically discussed and summarized. Enzymatic polymerization has also been applied to an FDCA-based polyamide [96] and to a furan-containing polyester made from 2,5-bis(hydroxymethyl)furan and an aliphatic dicarboxylic acid [97].

Although the improvements in polycondensation, polytransesterification, and SSP have enabled high-molecular-weight PEF to be consistently and stably produced, it is still important to develop alternative methods of polymerizing PEF for further functionalization and minimization of side reactions. One interesting alternative is ROP of cyclic compounds consisting of FDCA and linear alkyl diols (Figure 9). ROP is advantageous in that it can precisely control molecular weight by adjusting the initiator-to-monomer ratio. Precise control of polydispersity can also be attained by reducing transesterification. In addition, various sequence structures can be prepared by copolymerization with other lactones and by end-group functionalization.
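Since this molecular-weight control follows the standard living-polymerization relation (degree of polymerization ≈ conversion × monomer-to-initiator feed ratio), a back-of-the-envelope estimate is easy to write down. The Python sketch below is a generic illustration of that textbook relation under ideal living conditions, not the specific catalysis of the studies discussed next; the 90% conversion figure is an assumed example value.

```python
# Ideal living ROP (a hedged sketch, not a reported recipe):
# Mn ≈ ([M]0/[I]0) * conversion * M(repeat unit).
M_REPEAT_PEF = 182.13  # g/mol, PEF repeat unit C8H6O5 (standard atomic masses)

def predicted_mn(monomer_to_initiator: float, conversion: float,
                 m_repeat: float = M_REPEAT_PEF) -> float:
    """Number-average molecular weight expected for an ideal living ROP."""
    return monomer_to_initiator * conversion * m_repeat

# Back-calculation: a target Mn of 50,000 g/mol at an assumed 90% conversion
# corresponds to a feed of roughly 305 repeat units per initiator.
print(f"Mn ≈ {predicted_mn(305, 0.90):,.0f} g/mol")
```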
One study reported ROP of poly(butylene 2,5-furandicarboxylate) (PBF) [98]. In this report, cyclic oligomers of PBF were synthesized by reaction of furandicarbonyl dichloride (FDCC) and 1,4-butanediol in solution, and the obtained cyclic oligomers were used for ROP at 270 °C in a bulk state. The molecular weight of the obtained PBF was too low for practical applications, but the thermal properties were comparable to those achieved using conventional polymerization.

Figure 9. Synthetic scheme of cyclic oligomers for PEF and PBF (m = 1 or 2) [98,99].

High-molecular-weight PBF and PEF were also obtained in another study [99]. The starting cyclic oligomers were prepared by reaction of FDCC and the corresponding diols, and the remaining linear oligomer was carefully removed. The obtained cyclic oligomers were used for ROP catalyzed by stannous octoate. The final molecular weight was 50,000 with Mw/Mn of 1.4 for PEF and 65,000 with Mw/Mn of 1.9 for PBF. Differences in the physical properties of PEF obtained using polycondensation/SSP and ROP have not been deeply studied yet, but these differences will lead to new applications of PEF.
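The quoted Mw/Mn values follow from the standard definitions of the molecular-weight averages. The sketch below computes Mn, Mw, and the dispersity for a small, entirely hypothetical chain population, only to make the meaning of figures such as Mw/Mn = 1.4 concrete.

```python
# Molecular-weight averages from a chain population (hypothetical numbers):
# Mn = total mass / number of chains; Mw = mass-weighted mean molar mass.
def mw_averages(population):
    """population: iterable of (molar_mass, number_of_chains) pairs."""
    n_total = sum(n for _, n in population)
    mass_total = sum(n * m for m, n in population)
    mn = mass_total / n_total
    mw = sum(n * m * m for m, n in population) / mass_total
    return mn, mw, mw / mn

chains = [(30_000, 400), (50_000, 900), (80_000, 300)]  # made-up distribution
mn, mw, dispersity = mw_averages(chains)
print(f"Mn = {mn:,.0f}, Mw = {mw:,.0f}, Mw/Mn = {dispersity:.2f}")
```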
High-Performance PLA from Modified Lactides

The functional groups in the main chains of polylactones, such as methylene, ester, and ether, control the properties and functionalities of these polylactones, but the functional groups in side chains also play an important role. A simple example of the effect of side-group structure is the difference in the thermal properties of polyglycolide (PGL) and PLA. The Tg and Tm of PGL are around 40 °C and 230 °C, respectively, and those of PLA are 55 °C and 170 °C, respectively. In addition, methyl substitution results in higher hydrophobicity, so the hydrolytic stability of PLA is higher than that of PGL. This indicates that a desired functionalization of PLA can be achieved by optimally replacing the methyl group of lactide with other functional groups to overcome the drawbacks of PLA, such as low Tg and transparency. It has been proposed that Tg can be improved by replacing methyl with a bulky functional group, such as phenyl, and a phenyl-substituted lactide has been practically realized using naturally derived, phenyl-containing mandelic acid [100]. Mandelic acid is a biobased α-hydroxy acid that is widely used as a precursor for cosmetics, food additives, and other chemicals. Phenyl-substituted lactide, also called mandelide, can be synthesized by cyclic dimerization of mandelic acid, and the reported Tg of polymandelide (PMA) is higher than 100 °C, which is high enough for PMA to be an alternative to high-Tg petroleum-derived amorphous polymers such as polystyrene. The reported ROP of mandelide is only applicable to meso-mandelide, which produces completely amorphous PMA, because the high Tm and poor solubility of racemic mandelide are not suitable for ordinary ROP in the bulk or solution state. Another interesting polymerization route to PMA is ROP of phenyl-containing 1,3-dioxolan-4-one (Ph-DXO) [101]. This method allows control of the chiral structure of the final PMA by preparing homo-chiral Ph-DXO, as Ph-DXO of either chirality has moderate solubility. In addition, ROP of a cyclic O-carboxyanhydride can be used for the synthesis of isotactic PMA. This is the first report of isotactic, crystalline PMA, and its Tm is reportedly higher than 310 °C [102]. These thermal properties of PMA allow for new applications of biobased polymers. Norbornene-substituted lactide also yields high-Tg PLA [103]. For norbornene substitution, L-lactide is brominated, and an elimination reaction then produces (6S)-3-methylene-6-methyl-1,4-dioxane, which is modified by a DA reaction to give the norbornene-modified lactide. When the norbornene-modified lactide is subjected to ring-opening metathesis polymerization, a polymer with narrow polydispersity and a Tg of 192 °C is obtained.
Substitution of methyl with a longer alkyl chain can be used to soften PLA by decreasing its Tg. For example, ethylene-modified lactide results in a Tg of 66 °C, n-hexyl-modified lactide results in a Tg of −37 °C, and iso-butyl-modified lactide results in a Tg of 22 °C [104]. The ROP scheme of high-Tg PLA is shown in Figure 10, and the Tg values of the abovementioned substituted PLAs are listed in Table 8.

Terpene-Derived Biobased Polymers

Terpenes are a class of naturally abundant organic compounds that are the main component of resins from a variety of plants, especially conifers. Terpenes are used by plants for defense, deterring herbivores and attracting predators or parasites of those herbivores. In addition, some insects, such as termites and the caterpillars of swallowtail butterflies, emit terpenes (the caterpillars from their osmeteria), also for defensive reasons. A variety of chemical modifications and functionalizations, such as oxidation, hydrogenation, and rearrangement of the carbon skeleton, can be applied to terpenes, resulting in compounds called terpenoids. Both terpenes and terpenoids are used in essential oils and fragrances for perfumes, cosmetics, and pharmaceuticals. The polymerizability of economically reasonable terpenes and terpenoids is being studied, and there are interesting reports of biobased polyterpenes, especially high-Tg polymers with excellent amorphous properties. Pinenes are an important and widely known class of terpene. Polypinenes, alicyclic hydrocarbon polymers comprising β-pinene or α-phellandrene, have high Tg (>130 °C), excellent transparency, and amorphous character [105][106][107]. In the early stage of development, polypinenes with high molecular weight were prepared by cationic polymerization using an optimum Lewis acid under suitable polymerization conditions. However, the temperature required for polymerization (lower than −70 °C) is too low for industrial production. Radical polymerization of modified pinenes has been reported as an alternative to cationic polymerization of pinenes [108]. In this report, α-pinene is transformed into pinocarvone, which contains a reactive exo-methylene group, by chemical photo-oxidation under visible-light irradiation, and the pinocarvone is then polymerized radically. Radical polymerization of pinocarvone is performed in a relatively uncommon solvent to achieve high molecular weight and conversion as well as excellent thermal properties (Tg higher than 160 °C). The abovementioned cationic and radical polymerization methods are shown in Figure 11. Limonene is classified as a cyclic terpene and is the reason for the attractive smell of citrus fruits. Limonene is an optically active molecule, and its D-isomer is common in nature. D-limonene is widely used in the cosmetics and food industries. In addition to the economic value of limonene, it features high reactivity in radical polymerization for biobased polymer applications. High-Tg limonene homopolymers can achieve excellent glass morphology [109].
The kinetics study in that report also indicates the possibility of achieving high molecular weight and high polylimonene yield by optimizing the polymerization conditions. Copolymerization of limonene with other vinyl-group-containing monomers is one approach to the synthesis of limonene copolymers [110]. One report presented a striking example of copolymerization of limonene and carbon dioxide to yield a high-molecular-weight polycarbonate [111]. In this report, limonene is converted to limonene oxide, and the polycarbonate obtained from copolymerization and thiol-ene coupling achieved an excellent Tg (>150 °C). There are also interesting reports of chirally active polylimonenes, as these show unique properties and stereocomplexability due to the interaction of chiral counterparts [112][113][114]. The interaction of L-configured and D-configured poly(limonene carbonate) forms a stereocomplex with a Tg of >120 °C. Interestingly, the preferred crystallization of poly(limonene carbonate) occurs only in the stereocomplexed form.

Myrcene is an olefinic organic hydrocarbon. β-Myrcene is one of the main components of essential oils, but α-myrcene has not been found in nature and is used in extremely few situations. In industry, β-myrcene can be cheaply produced by pyrolysis of pinene.
There are several interesting reports concerning polymers comprising myrcenes and their derivatives. For example, myrcene can be converted to a cyclic diene monomer, which produces an amorphous polymer (Figure 11) [115]. In addition, a copolymer of myrcene and dibutyl itaconate can be used for functionalized applications [116], and the low Tg of the copolymer is promising for biobased elastomer applications. The polymerization processes and Tg of the featured polyterpenes are listed in Table 9.

Figure 11. Production of polyterpenes from β-pinene using (a) cationic polymerization [105] and (b) radical polymerization [108]; and (c) production from myrcene [115].

Table 9. Polymerization process and thermal properties of polyterpenes [105][106][107][108][109][110][111][112][113][114][115].

Other Noteworthy Biobased Polymers

This section presents recent studies on new biobased polymers with physical properties that are superior to those of conventional petroleum-derived polymers. By utilizing naturally derived phenols, which contain aromatic rings, biobased liquid crystalline polymers (bio-LCP) can be developed. As the main chemical bonds of bio-LCP are ester bonds, hydroxy- and carboxylic-acid-bearing biobased building blocks from natural phenols can be used to produce bio-LCP. For example, 4-hydroxycinnamic acid (4HCA) is one important phenol that can introduce liquid crystalline properties into polyesters via its aromatic function. 4HCA exists in plant cells as an intermediate metabolite of the biosynthetic pathway of lignin. The mechanical properties of 4HCA-derived bio-LCP are superior to those of other commercialized biobased plastics, with a mechanical strength (σ) of 63 MPa, a Young's modulus (E) of 16 GPa, and a maximum softening temperature of 169 °C [117,118]. One study reported a high-performance biobased polyamide with a Tg of >250 °C [119]. This polyamide consists of repeating units of {(4,4′-diyl-α-truxillic acid dimethyl ester) 4,4′-diacetamido-α-truxillamide}. The monomers are prepared by conversion of naturally derived 4-aminophenylalanine, which involves UV coupling of the cinnamic-acid-derived vinyl groups of each monomer. Scientific investigation of the monomer production and polymerization processes is still under way, but this innovation indicates the possibility of developing super-engineering-grade polymers from naturally derived feedstock. High-Tm biobased esterified poly(α-glucan) can be obtained by in vitro enzymatic synthesis [120].
Naturally available sucrose is used as the starting material for the enzymatically catalyzed polymerization of linear poly(α-glucan), which is then esterified with acetic or propionic anhydride. The Tg and Tm of the acetylated poly(α-glucan) are 177 °C and 339 °C, respectively, and the Tg and Tm of the propionylated poly(α-glucan) are 119 °C and 299 °C, respectively. The molecular weight of these polymers is higher than 150,000; therefore, they are expected to have reliable mechanical strength when processed using the right procedures. The in vitro process is technically challenging in terms of scaling up and stabilizing production, but the promising thermal properties should be featured in future applications. Biobased poly(ether-ether ketone) (bio-PEEK) consisting of FDCA derivatives is a representative super-engineering biobased polymer. Bio-PEEK has a Tm of >300 °C, which is comparable to that of PEEK created from petroleum-derived resources [121]. Synthesis with TPA-derived biobased monomers is a way to replicate conventional PEEK; thus, it is readily applicable to industrial processes as long as a supply chain of biobased furan derivatives is created. Figure 12 displays the chemical structures of the aforementioned biobased polymers.

Discussion

There have been constant and stable improvements in the production of biobased polymers (i.e., in the polymerization and refining processes that yield biobased building blocks) in the past few decades. As a result, the applications of biobased polymers have expanded. In the early stage of development, biobased polymers were recognized as biodegradable polymers for temporary applications, which is still an important part of their use. However, upgraded biodegradable polymers can now be used for general and engineering applications. These polymers, as well as those that are analogous to conventional petroleum-derived polymers, play an important role in the further growth of biobased polymer applications. As the scaling-up of production of monomers for polymers that are analogous to conventional polymers has been successful, production of biobased polymers will also be scaled up. This will result in prices that are competitive with those of petroleum-derived polymers. Moreover, new biobased polymers comprised only of biobased building blocks, such as PEF and biobased polyamides, have unique and promising functionalities and applications. Thus, the goal of biobased polymer production is no longer simply to replace petroleum-derived polymers. Exploration of the topic will be accelerated by the development of high-spec engineering-grade biobased polymers, which have already been reported at the scientific level. We are confident that industrialization of these polymers will be discussed in the near future.
15,303.4
2017-10-01T00:00:00.000
[ "Engineering", "Materials Science", "Environmental Science" ]
Hydroxyapatite Double Substituted with Zinc and Silicate Ions: Possibility of Mechanochemical Synthesis and In Vitro Properties

In this study, the mechanochemical synthesis of substituted hydroxyapatite (HA) containing zinc and silicon ions, with the chemical formula Ca10−xZnx(PO4)6−x(SiO4)x(OH)2−x, where x = 0.2, 0.6, 1.0, 1.5, and 2.0, was carried out. The synthesized materials were characterized by powder X-ray diffraction, Fourier transform infrared spectroscopy, transmission electron microscopy, and inductively coupled plasma spectroscopy. We found that HA co-substituted with zinc and silicate formed up to x = 1.0. At higher concentrations of the substituents, the formation of large amounts of an amorphous phase was observed. The cytotoxicity and biocompatibility of the co-substituted HA were studied in vitro on Hek293 and MG-63 cell lines. The HA co-substituted with zinc and silicate demonstrated high biocompatibility; the lowest cytotoxicity was observed at x = 0.2. For this composition, good proliferation of MG-63 osteoblast-like cells and an increased solubility compared with that of HA were detected. These properties allow us to recommend the synthesized material for medical applications, namely, the restoration of bone tissue and the manufacture of biodegradable implants.

Introduction

The chemical composition of hydroxyapatite (HA), Ca10(PO4)6(OH)2, is close to that of human bone. Synthetic HA is actively used in medicine [1]. In dentistry, it is used to fill bone voids after a tooth has been extracted and in the treatment of pulpitis [2]. In maxillofacial surgery, HA can form bone implants or bioactive coatings on metal implants to improve and accelerate the osseointegration process [3]. However, because of its low mechanical strength, HA cannot be used alone to form implants of the musculoskeletal system, as the latter must withstand high stresses. HA is also a component of many toothpastes and powders, providing protection from the formation of plaque and calculus [4,5].

Synthesis of Substituted HA

A series of ZnSi-HA samples with the general chemical formula Ca10−xZnx(PO4)6−x(SiO4)x(OH)2−x was obtained by mechanochemical synthesis (MS) in a high-energy planetary ball mill (AGO-2 model). The synthesis was carried out in water-cooled steel vials filled with 8 mm steel balls. The rotation speed of the vials was 1200 rpm. The milling time was 5, 10, 15, 20, 25, 30, or 40 min. To prevent contamination of the synthesis product by milling debris, the walls of the vials and the surface of the balls were lined with the reaction mixture (for this, the reaction mixture was milled in the vials for 0.5 min). The initial reagents were freshly calcined calcium oxide CaO, calcium hydrophosphate CaHPO4, hydrated silicon oxide SiO2·0.7H2O, and zinc dihydrogen phosphate dihydrate Zn(H2PO4)2·2H2O. The ratio of the reagents was taken in accordance with the equations given in Table 1, assuming the substitution of zinc ions for calcium ions and of silicate ions for the phosphate groups. All synthesized samples were fine, white powders. The analytical and in vitro investigations, except for the MG-63 cell proliferation analysis, were performed on powdered samples. For the latter, pellets with a diameter of 5 mm and a mass of 0.06 g were prepared. The pellets were obtained by pressing the powder material in a steel die at a pressure of 500 MPa.

Table 1. Ratio of the initial reagents and expected product of the mechanochemical synthesis.
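As an aid for reading the compositional analyses later in the paper, the expected bulk elemental composition of the target phase can be computed directly from the formula above using standard atomic masses. The Python sketch below is purely illustrative; it reproduces neither the Table 1 reagent stoichiometry nor any measured value.

```python
# Expected elemental composition of Ca(10-x)Znx(PO4)(6-x)(SiO4)x(OH)(2-x)
# as a function of x, from standard atomic masses only.
ATOMIC = {"Ca": 40.078, "Zn": 65.38, "P": 30.974, "Si": 28.086,
          "O": 15.999, "H": 1.008}

def zn_si_ha_wt_percent(x: float) -> dict:
    counts = {"Ca": 10 - x, "Zn": x, "P": 6 - x, "Si": x,
              "O": 4 * (6 - x) + 4 * x + (2 - x),  # phosphate, silicate, OH
              "H": 2 - x}
    total = sum(n * ATOMIC[el] for el, n in counts.items())
    return {el: 100 * n * ATOMIC[el] / total for el, n in counts.items()}

for x in (0.2, 0.6, 1.0):
    w = zn_si_ha_wt_percent(x)
    print(f"x={x}: " + ", ".join(f"{el} {w[el]:.2f} wt%"
                                 for el in ("Ca", "Zn", "P", "Si")))
```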
Analysis of Synthesized Compounds

The obtained powders were analyzed with several analytical methods. X-ray diffraction (XRD) patterns were recorded on a D8 Advance powder diffractometer (Bruker, Karlsruhe, Germany) with Bragg-Brentano geometry using Cu-Kα radiation. The X-ray phase analysis of the compounds was carried out using the ICDD PDF-4 database (2011). The unit cell parameters, crystallite size, and amorphous phase content were determined by the Rietveld method in Topas 4.2 software (Bruker, Karlsruhe, Germany). The Fourier transform infrared (FTIR) spectra of the powders were recorded on an Infralum FT-801 spectrometer (Simex, Novosibirsk, Russia). The specimens were prepared by the KBr pellet method. Transmission electron microscopy (TEM) images were obtained using a JEM-2200FS microscope (JEOL Ltd., Akishima, Japan). High-resolution TEM (HRTEM) was also carried out. Energy-dispersive X-ray microanalysis (EDX) of the samples was performed with a four-segment Super-X detector in scanning dark-field mode, with maps of the distributions of the elements constructed from the characteristic lines of the spectrum at each point in the analyzed region. The samples for the TEM examination were dispersed by ultrasonication and deposited from an alcohol-based suspension onto an aluminum substrate. The elemental analysis of the synthesized materials was carried out by inductively coupled plasma atomic emission spectroscopy (ICP-AES) using an iCAP 6500 (Thermo Scientific, Waltham, MA, USA) spectrometer. The powder samples were dissolved in aqua regia (50 mg per 2 mL) upon heating, and the resultant solutions were analyzed. The plasma was observed axially to obtain the best possible sensitivity. The background-corrected signals were used for the quantitative analysis.

In Vitro Investigations

The cytotoxicity and biocompatibility of the obtained materials were studied on the human embryonic kidney Hek293 cell line (ATCC, Manassas, VA, USA) and the human-osteosarcoma-derived MG-63 cell line (Center for Vertebrate Cell Culture Collection, St. Petersburg, Russia), respectively. The viability of the Hek293 cell line was evaluated with Hoechst 33342/propidium iodide staining by the standard method described in [20]. The cells were seeded on 96-well plates at 5 × 10³ cells per well and cultured in Iscove's modified Dulbecco's medium (IMDM, pH = 7.4) supplemented with 10% fetal bovine serum in a CO2 incubator at 37 °C. After 24 h, the cells were treated with a water suspension of the synthesized powders at concentrations of 0.01-50 mg/mL for 48 h. To identify the live, apoptotic, and dead cells, the treated and control cells were stained with a mixture of fluorescent dyes, Hoechst 33342 (Sigma-Aldrich, St. Louis, MA, USA) and propidium iodide (Invitrogen, Waltham, MA, USA), for 30 min at 37 °C. An IN Cell Analyzer 2200 (GE Healthcare, Chicago, IL, USA) was used to perform automatic imaging of four fields per well in bright-field and fluorescence channels. By means of IN Cell Investigator image analysis software (version 1.5, GE Healthcare, Chicago, IL, USA), the cells were classified as live or apoptotic in accordance with their morphological changes. All data shown are the means of three wells. The quantitative data are expressed as the mean ± standard deviation (SD).
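The data reduction implied by this protocol is simple enough to sketch. In the hedged Python example below, the per-well counts are hypothetical placeholders; viability is computed per well and then summarized as mean ± SD over the three replicate wells, as stated above.

```python
# Sketch of the stated reduction: per-well viability from live/apoptotic/dead
# counts, summarized as mean ± SD over three wells (counts are hypothetical).
from statistics import mean, stdev

wells = [  # (live, apoptotic, dead) counts per well
    (412, 23, 15),
    (398, 31, 18),
    (405, 27, 12),
]

viability = [100 * live / (live + apo + dead) for live, apo, dead in wells]
print(f"viability = {mean(viability):.1f} ± {stdev(viability):.1f} %")
```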
The MG-63 cells were cultured in 96-well plates (Costar, Washington, DC, USA). DMEM (State Research Center of Virology and Biotechnology VECTOR, Koltsovo, Russia) supplemented with 1-5% fetal bovine serum (Gibco, Grand Island, NY, USA) served as the growth medium. For the MTT assay, MG-63 cells were attached to the plate bottom, and a sterile synthesized powder was added to the medium to a concentration of 2.5 × 10⁻⁵ g/L in each well. This concentration was selected as optimal for optical-density measurement in a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay. After 96 h, the cell viability was assessed with the MTT assay. For this purpose, 5 µL of an MTT solution (Sigma, St. Louis, MA, USA) was added into the wells and incubated for 4 h. At the end of the incubation, 100 µL of dimethyl sulfoxide (Vecton, Saint Petersburg, Russia) was added, and the cell viability was determined by means of the color intensity of the resultant formazan solution. The optical density was measured on a Tecan Sunrise microplate reader (Tecan, Grödig, Austria) at a wavelength of 492 nm. For the bone cell proliferation assessment, MG-63 cells were seeded on the pellet samples and kept in the growth medium for 7 days. The cell concentration in the growth medium was 2 × 10³ cells per well. The cells on the pellet samples were fixed with 2% glutaraldehyde buffer (pH 7.4) at room temperature for 12 h, subsequently washed three times with HEPES (0.1 M) buffer, and dehydrated in alcohol of increasing concentration (50%, 70%, 95%, and 100%), twice for 10 min at each step. Finally, the samples were chemically dried (four steps with different ratios of alcohol to HMDS) and sputter-coated with gold under a vacuum. The cell morphology was examined using scanning electron microscopy on a TM-1000 Tabletop microscope (HITACHI, Tokyo, Japan). To evaluate the bioresorption, the solubility of the synthesized powders in water was studied. The powders were placed in distilled water at a concentration of 0.2 g/mL and kept for 1, 3, and 5 days. After the specified time, the suspensions were filtered. The quantitative determination of ions in the mother liquor was carried out on an AA-280FS atomic absorption spectrometer (Varian, Inc., Palo Alto, Santa Clara, CA, USA). The work was carried out with a hollow cathode lamp current of 5 mA, an oxidizing air-acetylene flame, a wavelength of 213.9 nm, a monochromator slit width of 1.0 nm, and an optimal working range of 0.01-2 ppm.

Determination of the Optimal Conditions of the Mechanochemical Synthesis

One of the main parameters of mechanochemical synthesis is the time of treatment. If the process is not long enough, the conversion of the components of the initial mixture into the target product will be incomplete, and the reactants will remain in the mixture. When the treatment time is excessively long, complete or partial decomposition of the reaction product is possible. Therefore, determining the optimal time for obtaining the target product is a primary task in mechanochemical synthesis. To determine the optimal synthesis time, the reaction mixtures (Table 1) were treated in a planetary ball mill for 5-40 min. Figure 1 shows the XRD patterns of the reaction mixtures for the samples with high contents of the substituent ions, namely 1.0-ZnSi-HA and 2.0-ZnSi-HA, treated in the mill for different times. The most intense reflection of the HA phase appeared after 5 min of mechanical treatment of 1.0-ZnSi-HA. However, the reflections of the initial reagents were retained in the mixture for up to 20 min of treatment.
In the 2.0-ZnSi-HA sample, due to the high concentration of hydrate water (Table 1), the reaction proceeded faster: the reflections of the initial reagents were already absent after 15 min of treatment. It should be noted that this sample contained a large amount of an amorphous phase (22-40° 2θ), which remained even after 40 min of treatment. This may have been due to a larger amount of water being released during the first seconds of the interaction of the SiO2·0.7H2O and Zn(H2PO4)2·2H2O hydrates with the other components. According to the reaction equations (Table 1), 5.7 mol of water (9.2 wt.%) should be released in the reaction mixture at x = 1, while 9.4 mol of water (14.3 wt.%) should be released at x = 2. At x = 1, the released water participates in the formation of a calcium hydroxide phase (Figure 1). It is possible that at x = 2, there is more water than is involved in the hydroxide formation. Water molecules present in the reaction mixture can be both sorbed by the surface of the particles and incorporated into the HA crystal lattice [21,22]. According to Table 1, at x = 2, an apatite with the composition Ca8.0Zn2.0(PO4)4.0(SiO4)2.0 should crystallize. In this case, there are no hydroxyl groups in the crystal lattice (which would otherwise appear for charge compensation), meaning that the apatite hydroxyl channel contains only hydroxyl vacancies. These vacancies can be occupied by water molecules owing to the large channel diameter [23]. The incorporation of a large number of water molecules into the apatite lattice hinders the formation of HA crystals, as the electrostatic attraction between the ions is disrupted. This leads to the presence of an amorphous phase at high substituent concentrations.
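These paired figures can be cross-checked with one line of arithmetic: the mole number and the weight fraction together imply the total mass of the reaction mixture, m = n·M(H2O)/w. The sketch below runs this check on the numbers quoted above; both compositions imply a batch mass close to the molar mass of one formula unit of the target apatite plus the released water, suggesting the Table 1 equations are written per mole of apatite.

```python
# Consistency check on the quoted water-release figures (pure arithmetic,
# no assumptions beyond the numbers in the text).
M_H2O = 18.015  # g/mol

for x, n_water, wt_fraction in ((1, 5.7, 0.092), (2, 9.4, 0.143)):
    m_total = n_water * M_H2O / wt_fraction
    print(f"x = {x}: implied reaction-mixture mass ≈ {m_total:.0f} g")
# Prints ≈ 1116 g (x = 1) and ≈ 1184 g (x = 2): both of the same order,
# roughly one mole of the apatite product plus the released water.
```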
Despite the disappearance of the reflections of the initial reagents after short treatment times, the parameters and unit cell volume of the HA phase continued to change up to 35 min (Figure 2). Therefore, the optimal time for the mechanochemical synthesis of HA co-substituted with zinc and silicate is 35 min. Figure 2 shows that the lattice parameter a and the unit cell volume of 2.0-ZnSi-HA increased with the time of synthesis, showing trends opposite to those observed for 1.0-ZnSi-HA. The incorporation of a large amount of water molecules into the hydroxyl channel, as mentioned above, increased the channel diameter, which led to an increase in the basal parameters (a and b) and an increase in the unit cell volume.
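Since apatite is hexagonal, the link between the basal parameter and the cell volume is direct: V = (√3/2)·a²·c with a = b. The Python sketch below evaluates this for typical literature values of stoichiometric HA (around a ≈ 9.42 Å, c ≈ 6.88 Å), not for the refined parameters of this study, simply to illustrate why channel widening (larger a and b) raises the volume.

```python
# Hexagonal unit cell volume, V = (sqrt(3)/2) * a^2 * c (a = b, gamma = 120°).
# Parameters below are typical literature values for stoichiometric HA,
# used only for illustration.
import math

def hex_cell_volume(a: float, c: float) -> float:
    """Cell volume in Å^3 for lattice parameters a, c given in Å."""
    return math.sqrt(3) / 2 * a**2 * c

print(f"V ≈ {hex_cell_volume(9.42, 6.88):.1f} Å^3")  # ≈ 528.7 Å^3
```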
Figure 3a shows the FTIR spectra of the HA co-substituted with zinc and silicate together with that of the nonsubstituted HA. The latter shows the absorption bands corresponding to the HA structure: the bands of the phosphate ion (570, 602, 960, 1047, and 1088 cm−1) and of the hydroxyl group (630 and 3573 cm−1) [23]. In addition, low-intensity absorption bands at 875, 1420, and 1480 cm−1 were observed in the spectrum of the HA sample; according to Ref. [24], these bands belong to the carbonate ion in the position of the phosphate group in the HA structure. Upon substitution, broadening of the absorption bands of the phosphate groups and disappearance of the OH-group bands are observed. The absorption bands of the carbonate group are absent in the spectra of the co-substituted HA samples. In addition, the spectra of the co-substituted HA demonstrate an absorption band at 940 cm−1; this band was detected in the FTIR spectra of Si-HA samples and attributed to silicate ions [11]. The wide absorption bands at 1640 and 3423 cm−1 were associated with vibrations of water molecules. Figure 3a also shows that the shape of the absorption band at 3423 cm−1 gradually changes: the band becomes broader and its integral intensity increases with increasing dopant concentration. In the sample with the maximal amount of released water, namely 2.0-ZnSi-HA, this band was more pronounced than in the other samples, and the phosphate ion bands were poorly resolved.

The lattice parameters and crystallite size of the obtained compounds were refined by the Rietveld method (Figure 4). The lattice parameters have a complex dependence on the concentrations of the substituents. The volume of the unit cell and the lattice parameters decreased as x increased from 0 to 0.6 and then increased with a further increase in x. This may have been due to the fact that at low concentrations of the substituents, the smaller ionic radius of zinc in comparison with that of calcium played the main role [25]. At high concentrations, this contribution was insignificant, and the difference between the ionic radii of the silicate and phosphate ions played the main role, causing lattice expansion. In addition, the introduction of water molecules into the lattice can lead to an increase in the volume of the cell [21], which is consistent with the data in Table 1: with an increase in the concentration of the introduced substituent ions, the concentration of the released water increased. Both the released water molecules and the substituent ions themselves complicated the formation of the HA crystal lattice, so the crystallite size decreased with increasing x. Based on the data presented in Figure 4, we consider the substitution limit for the mechanochemical method to be x = 1.0. At higher concentrations of the dopants, water molecules were incorporated into the lattice of the substituted HA.

Mechanochemical Synthesis of ZnSi-HA Samples

As seen in Figure 5, the introduction of dopants changed the particle morphology. Nonaggregated particles smaller than 20 nm were observed in the unsubstituted HA sample. At x = 2, the particles formed aggregates ~100 nm in size. Although the XRD data indicated the presence of a large amount of an amorphous phase in the 2.0-ZnSi-HA sample, HRTEM showed that all particles were crystalline.
This discrepancy can be explained by the fact that amorphous calcium phosphates are prone to crystallization under an electron beam due to the intensive heating during HRTEM measurements [22]. Figure 6 shows that the distribution of the elements in the particles of 2.0-ZnSi-HA was uniform, and their concentrations were close to the expected ones (Table 2). Despite the fact that during the synthesis of 2.0-ZnSi-HA the amount of released water (causing agglomeration of the particles) was 4.7 times that released during HA synthesis (Table 1), the reagents were efficiently mixed. Table 2 presents the results of the ICP-AES analysis of the co-substituted HA, which show that the measured concentrations of the elements increased with an increase in the concentration of the introduced dopants. There was also good agreement with the expected compositions (within the measurement error). Taking into account the results of the elemental analysis performed during the TEM measurements (Figure 6), it can be assumed that the ICP data for silicon may be slightly overestimated.

In Vitro Investigations

Because the double substitution limit for zinc and silicon was found to be x = 1.0, the in vitro studies were carried out only on the HA, 0.2-ZnSi-HA, 0.6-ZnSi-HA, and 1.0-ZnSi-HA samples. The cytotoxicity of the samples was tested against the Hek293 cell line. When Hek293 cells were incubated with different concentrations of the tested compounds for 48 h, good cell viability was observed up to a concentration of 1 mg/mL, as shown in Figure 7. A slight cytotoxic effect appeared at a concentration of 1 mg/mL for all investigated samples and increased at higher powder concentrations. Comparing the samples with different concentrations of the substituent ions, we found that at a substituent concentration of x = 1 (sample 1.0-ZnSi-HA), a slight increase in cell viability was observed at a powder concentration of 1 mg/mL. At a powder concentration of 10 mg/mL, the cell count decreased and became the same for all samples (around 45%). At the maximum studied HA and 1.0-ZnSi-HA concentration (50 mg/mL), a significant decrease in the cell count and an increase in dead and apoptotic cells were observed. This toxic effect on the cells could be associated with hypoxia and nutrient deficiency caused by particle sedimentation over the cells [16,26]. Nevertheless, despite the overall low number of cells observed, the 0.2-ZnSi-HA sample contained the same levels of live, dead, and apoptotic cells as observed at a lower powder concentration, indicating a positive effect of low dopant concentrations on cell viability. The obtained data on the relatively low toxicity of HA and of HA co-substituted with zinc and silicate, and on the increased cell viability, are in agreement with the literature [27,28]. Thus, we concluded that the obtained ZnSi-HA materials are nontoxic. Cytotoxic properties were observed only at a powder concentration in solution of 50 mg/mL, which was most likely due to hypoxia and nutrient deficiency caused by particle sedimentation over the cells. The presence of a small amount of substituents (x = 0.2) in HA reduced the cytotoxicity of the latter at this powder concentration; at the higher substituent concentration (x = 1.0), the cytotoxicity was again at the level of the unsubstituted HA.

A standard MTT assay was performed to determine the powder cytotoxicity toward human MG-63 osteoblasts. Table 3 shows the results of the cell incubation with the HA powders for 4 days. The results showed that none of the studied samples had a cytotoxic effect. Moreover, doping of HA with Zn and Si ions at a degree of substitution of x = 0.2 led to a significant increase in the growth of osteoblasts. This result is consistent with previously reported data [29]. At a higher ion concentration, the osteoblasts grew at a lower rate; for a substituent concentration of x = 0.6, the optical density value was even lower than that of the control. For the bone cell proliferation assessment, the osteoblastic MG-63 cells were cultured on the surface of pellets made of HA with different concentrations of the substituents.
The SEM micrographs of the osteoblasts after 7 days of culturing on the pellets are shown in Figure 8. It can be seen that cells were observed in large numbers on the HA (Figure 8a). Because the samples consisted of particles of the same size, the effect of particle size on cell growth was excluded. Obviously, there was a toxic effect of the substituent ions, most likely zinc. This observation agrees with results obtained by other authors in studies of single substitution by zinc: decreases in cell proliferation [30], biocompatibility, and osteoconductivity [26] were observed for HA containing more than 0.2 mol of Zn. This effect did not occur in silicon-substituted HA [8]. Thus, it can be concluded that the acceptable degree of co-substitution is x = 0.2. At high concentrations of zinc and silicon in the material, the attachment and proliferation of MG-63 cells become weak. The solubility studies of the unsubstituted HA and 0.2-ZnSi-HA samples showed that the solubility of the HA co-substituted with zinc and silicate was approximately twice that of the unsubstituted HA (Table 4).
We concluded that the introduction of zinc and silicon ions accelerates the bioresorption of the material.

Conclusions In this study, we found that mechanochemical synthesis can be used to obtain substituted HA containing zinc and silicon ions in its structure (cosubstituted HA). We found that the formation of the crystal lattice of the cosubstituted HA was complete after 35 min of treatment of the reaction mixtures in a high-energy planetary ball mill. The limit of possible substitution was found to be x = 1.0. With a further increase in the substituent concentration, a significant increase in the unit cell volume was observed. As the dopant concentration increased, the crystallite size of the product decreased, while the concentration of the amorphous phase increased. In the FTIR spectra of the cosubstituted HA, in addition to the absorption bands of the phosphate and hydroxyl groups, the presence of bands of silicate ions was observed. The TEM studies confirmed the uniform distribution of the elements in the particles of the synthesized material. The in vitro studies revealed that the introduction of zinc and silicon ions into the HA structure at a concentration of x = 0.2 improved the biocompatibility, decreased the cytotoxicity, and increased the solubility of the material. This composition is recommended for further in vivo investigations of the biodegradation of a granulated material or bioresorbable scaffold. Samples with a higher concentration of the substituent ions can be used as a biologically active additive in the development of bioresorbable multicomponent materials. The cosubstituted apatite with high concentrations of the substituents can also be used as a source material for producing calcium phosphate coatings by microarc oxidation, magnetron sputtering, or other deposition methods in which the ionization of the sprayed material takes place. In this case, the concentrations of the doping elements in the coating may differ from those in the initial material due to the different deposition rates of different ions. Data Availability Statement: The raw/processed data required to reproduce these results are included in Section 2.
6,885.2
2023-02-01T00:00:00.000
[ "Materials Science" ]
Sinosasa gracilis (Poaceae, Bambusoideae), a new combination supported by morphological and phylogenetic evidence Abstract The results of a phylogenetic analysis based on the whole chloroplast genome, together with a morphological study, support the transfer of a long-ignored bamboo species, Sasa gracilis, to the recently established genus Sinosasa in this study. Morphologically, this species differs from all the other known Sinosasa species by having very short (2–3 mm) foliage leaf inner ligules, which is unusual in this genus. A revised description of its morphology and colour photos are also provided. Introduction Sinosasa L.C.Chia ex N.H.Xia, Q.M. Qin & Y.H.Tong was recently segregated from Sasa Makino and Shibata (1901) to accommodate some species previously placed in Sasa subg. Sasa from China, based on morphological and phylogenetic evidence (Qin et al. 2021). This genus differs from Sasa in having raceme-like (vs. panicle-like) synflorescences, two to three (vs. four to ten) florets per spikelet with a rudimentary terminal floret, three (vs. six) stamens and two (vs. three) stigmas per floret, wavy (vs. usually flat) foliage leaf blades when dry, and relatively long (> 1 cm) (vs. short) foliage leaf inner ligules (Qin et al. 2021). Up to now, Sinosasa contains seven species endemic to subtropical areas of China, usually found growing along river valleys or in moist areas under evergreen broad-leaved forests at elevations of 700-1200 m (Qin et al. 2021). Sasa gracilis B.M. Yang (1990) was described based on the single collection B. M. Yang 06774 from Shangmuyuan, Jiangyong County, Hunan Province, China. After its publication, it was recognised only in 'Bamboos of Hunan' (Yang 1993), edited by the author of this name, 'Iconographia Bambusoidearum Sinicarum' (Yi et al. 2008) and its English version 'Illustrated Flora of Bambusoideae in China' (Shi et al. 2022). However, because of the narrow circulation at that time in China of the journal Acta Scientiarum Naturalium Universitatis Normalis Hunanensis (later renamed Journal of Natural Science of Hunan Normal University) (Deng and Xia 2014), this species was overlooked by the widely distributed monographs, such as 'Flora Reipublicae Popularis Sinicae', 'Flora of China' and 'World Checklist of Bamboos and Rattans' (Hu 1996; Wang and Stapleton 2006; Vorontsova et al. 2016), the well-known database GrassBase - The Online World Grass Flora (Clayton et al. 2016) and some important websites like http://www.ipni.org, http://www.tropicos.org and http://www.theplantlist.org. In the protologue, this species was described as possessing a suite of vegetative characters, such as solitary branches at each branching node, strongly raised supranodal ridges and wavy foliage leaf blades when dry, which fit well with the circumscription of Sinosasa. However, this species has very short foliage leaf inner ligules that are only 2-3 mm long, while all hitherto known Sinosasa species typically have inner ligules more than 1 cm long. Therefore, the taxonomic position of Sasa gracilis needed further study. Materials and methods The specimens of Sasa gracilis were collected from its type locality during a field trip in September 2022. Fresh foliage leaves were preserved in silica gel for DNA extraction. Type specimens of Sasa gracilis deposited in the Herbarium of Hunan Normal University (HNNU) were examined. Observations and measurements were taken using a magnifier and a ruler with a scale of 0.5 mm.
Some minor characters like the indumentum were observed with a stereomicroscope (Mshot MZ101). The morphological terms follow McClure (1966) and Beentje (2016). Herbarium acronyms follow Thiers (2022, continuously updated). To study the phylogenetic position of Sasa gracilis within the tribe Arundinarieae, whole chloroplast genomes were used for building the phylogenetic tree. A total of 24 representatives belonging to all five subtribes of the tribe Arundinarieae (Zhang et al. 2020a) were sampled, and Bambusa multiplex (Lour.) Raeusch. ex Schult. f. from the tribe Bambuseae was used as the outgroup. All the sampled taxa, as well as their voucher information and GenBank accession numbers, are listed in Table 1. Genome assembly followed Wick et al. (2015). Finally, sequence editing was performed manually in Geneious v. 9.1.4 (Kearse et al. 2012) with the structure of LSC-IRa-SSC-IRb. Phylogenetic analysis All the whole chloroplast genomes were aligned with MAFFT v. 7.490 (Katoh and Standley 2013) and combined into a data matrix. Phylogenetic analyses were conducted using Maximum Likelihood (ML) and Bayesian Inference (BI) implemented in the PhyloSuite v.1.2.2 platform (Zhang et al. 2020b). The best substitution model (K81 + GTR) for the combined data was determined using the Bayesian Information Criterion (BIC) in ModelFinder (Kalyaanamoorthy et al. 2017). A standard Maximum Likelihood tree search was performed using IQ-TREE v.1.6.8 (Nguyen et al. 2015). Nodal support (bootstrap support; BS) was assessed using 1000 standard bootstrap replicates. Results The combined matrix contained 3,688 variable sites (2.56%), including 773 parsimony-informative sites (0.54%) and 2,915 singleton variable sites (2.02%). The phylogenetic trees generated by the ML and BI methods were generally consistent in topology, so only the ML tree is shown, with nodal support values from both methods labelled on each node (Fig. 1). As shown in the phylogenetic tree, Sasa gracilis is distantly related to Sasa veitchii Rehder (= Sasa albomarginata (Miq.) Makino & Shibata, the type of Sasa), but forms a monophyletic clade with three Sinosasa species with strong nodal support (BS = 100% and PP = 1.00).
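For readers who want to reproduce this kind of plastome phylogeny, the steps described above (MAFFT alignment, model selection with ModelFinder, and an ML search with 1000 standard bootstrap replicates in IQ-TREE) map onto a short driver script. This is a sketch under stated assumptions: the input file name is hypothetical, and it assumes the mafft and iqtree executables (the v7 and v1.6 command-line interfaces cited above) are on the PATH.

```python
# Sketch of the alignment + ML tree workflow described above.
# Assumes "plastomes.fasta" (hypothetical name) holds the 25 genomes.
import subprocess

# 1) Align the whole chloroplast genomes with MAFFT.
with open("aligned.fasta", "w") as out:
    subprocess.run(["mafft", "--auto", "plastomes.fasta"],
                   stdout=out, check=True)

# 2) ModelFinder (-m MFP, BIC criterion by default) plus an ML search
#    with 1000 standard nonparametric bootstrap replicates (-b 1000).
subprocess.run(["iqtree", "-s", "aligned.fasta",
                "-m", "MFP", "-b", "1000"], check=True)
```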
1,194.2
2023-05-09T00:00:00.000
[ "Biology" ]
Network pharmacology combined with molecular docking and in vitro verification reveals the therapeutic potential of Delphinium roylei munz constituents on breast carcinoma Delphinium roylei Munz is a medicinal plant indigenous to India whose activity against cancer has not been previously investigated, and the specific interactions of its bioactive compounds with vulnerable breast cancer drug targets remain largely unknown. Therefore, in the current study, we aimed to evaluate the anti-breast cancer activity of different extracts of D. roylei and to decipher the molecular mechanism by network pharmacology combined with molecular docking and in vitro verification. The experimental plant was extracted with various organic solvents according to their polarity index. Phytocompounds were identified by the high resolution-liquid chromatography-mass spectrometry (HR-LC/MS) technique, and the SwissADME programme evaluated their physicochemical properties. Next, the target(s) associated with the obtained bioactives or breast cancer-related targets were retrieved from public databases, and the overlapping targets were selected with a Venn diagram. The networks between the overlapping targets and the bioactives were constructed, visualized, and analyzed with the STRING programme and Cytoscape software. Finally, we implemented a molecular docking test (MDT) using AutoDock Vina to explore key target(s) and compound(s). HR-LC/MS detected hundreds of phytocompounds; a few were accepted by Lipinski's rules after virtual screening and were therefore classified as drug-like compounds (DLCs). A total of 464 potential target genes were obtained for the nine quantitative phytocompounds, and using the GeneCards, OMIM and DisGeNET platforms, 12,063 disease targets linked to breast cancer were retrieved. With Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway enrichment, a total of 20 signalling pathways were identified, and a hub signalling pathway (the PI3K-Akt signalling pathway), a key target (Akt1), and a key compound (8-hydroxycoumarin) were selected among the 20 signalling pathways via molecular docking studies. The molecular docking investigation revealed that among the nine phytoconstituents, 8-hydroxycoumarin showed the best binding energy (−9.2 kcal/mol) with the Akt1 breast cancer target. 8-Hydroxycoumarin satisfied all the ADME property predictions made using SwissADME, and the 100-nanosecond (ns) MD simulation of the 8-hydroxycoumarin complex with Akt1 was found to be stable. Furthermore, the D. roylei extracts also showed significant antioxidant and anticancer activity in the in vitro studies. Our findings indicated for the first time that D. roylei extracts could be used in the treatment of BC.
Introduction Breast cancer (BC) is the most commonly diagnosed cancer worldwide, and its burden has been rising over the past decades (Pistelli et al., 2021; Pospelova et al., 2022). Having replaced lung cancer as the most commonly diagnosed cancer globally, breast cancer today accounts for 1 in 8 cancer diagnoses and a total of 2.3 million new cases in both sexes combined (Sung et al., 2021). Representing a quarter of all cancer cases in females, it was by far the most commonly diagnosed cancer in women in 2020, and its burden has been growing in many parts of the world, particularly in transitioning countries (Heer et al., 2020). An estimated 685,000 women died from breast cancer in 2020, corresponding to 16%, or 1 in every 6, cancer deaths in women (Deo et al., 2022). The discovery of new cancer treatments is still regarded as an active research area despite the great advances in the area of chemotherapy. Medicinal plants are regarded as a renewable source of bioactive compounds that can be exploited in the treatment of various ailments, including cancer (Tariq et al., 2021; Mir et al., 2022a; Mir et al., 2022b). Phytochemicals play an important role in the initiation, development, and advancement of carcinogenesis, as well as in suppressing or reversing the early stages of cancer or the invading potential of premalignant cells, and they also regulate cell proliferation and apoptosis signalling pathways in transformed cells (George et al., 2021; Bhat et al., 2022a; Bhat et al., 2022b). Delphinium roylei Munz. is an important medicinal herb of the genus Delphinium. The ethnomedicinal uses of this plant include treating liver ailments and persistent lower back discomfort (Sharma and Singh, 1989; Kumar and Hamal, 2011). Diterpenoid alkaloids and flavanols, which constitute most of the compounds isolated from Delphinium plants, have been tested for various biological activities, such as cholinesterase inhibition and antimicrobial, antineoplastic, insecticidal, anti-inflammatory, and anticancer activities (Yin et al., 2021). Delphinidin is a type of anthocyanin isolated from the genus Delphinium, which has anticancer, anti-inflammatory, and anti-angiogenic properties. Recent in vitro studies showed that delphinidin can inhibit the invasion of the HER-2-positive MDA-MB-453 breast cancer cell line, with low cytotoxicity on normal breast cells (Wu et al., 2021), and of ovarian cancer cells (Lim et al., 2017), and can also induce autophagy in breast cancer cells (Chen et al., 2018). Due to the complex nature of plant extracts and their chemical constituents, it is difficult to understand the molecular mechanism by which they act on certain molecular targets, given the synergistic effects of their chemical constituents and the fact that they could interact with many targets simultaneously. In recent years, network pharmacological analysis has been effectively applied to the prediction of the protein targets and the related disease pathways of plant active constituents. Network pharmacology, based on systems biology, includes network database retrieval, virtual computing, and high-throughput omics data analysis. This approach breaks the old limitation of one-drug-one-biological-target research and is applied mainly to explaining effective mechanisms, active ingredient screening, and pathogenesis research (Yu et al., 2021).
The versatile approach of molecular docking, which is based on the theory of ligand-receptor interactions, is widely used in drug discovery to understand how compounds bind to their molecular targets (Tiwari and Singh, 2022; Saifi et al., 2023). Here, in the present study, we carried out an HR/LC-MS analysis to identify the phytoconstituents present in D. roylei and screened them for drug-like compounds by ADMET analysis. Secondly, we used network pharmacology to predict the potential effective components, corresponding target genes, and pathways of the phytocompounds of D. roylei against breast cancer. Lastly, we explored the molecular mechanism of the most potent bioactive constituent of D. roylei and a hub therapeutic target to alleviate breast cancer based on molecular docking, molecular dynamics (MD) simulation and in vitro experimental analysis. GRAPHICAL ABSTRACT The graphical abstract illustrates a multidisciplinary approach to uncover the therapeutic promise of Delphinium roylei Munz, a medicinal plant, against breast carcinoma. Collection of plant material The root parts of the Delphinium roylei plant were collected from high-altitude areas of the Kashmir Himalaya. The collected samples were identified and confirmed by Akhtar Hussain Malik, Taxonomist, Department of Botany, the University of Kashmir, with voucher specimen No. 2954-(KASH). Extraction Various solvents, such as methanol, ethanol, ethyl acetate, and petroleum ether, were selected as extraction solvents according to their polarity index to obtain the plant extracts using a Soxhlet apparatus. About 200 g of the D. roylei plant material was cleaned with deionized water, dried in the shade for 15 days, pulverized with a mechanical grinder, and placed in an airtight container. The extracts were filtered using Whatman no. 1 filter paper, concentrated using a rotary vacuum evaporator, and then kept at 4 °C for further use. High resolution-liquid chromatography-mass spectrometry (HR/LC-MS) The HR/LC-MS analysis of the ethanolic extract was carried out on a UHPLC-PDA-Detector Mass Spectrophotometer (HR/LC-MS 1290 Infinity UHPLC System, Agilent Technologies®, Santa Clara, CA, USA) (Noumi et al., 2020). It consisted of a HiP sampler, a binary gradient solvent pump, a column compartment, and a Quadrupole Time of Flight Mass Spectrometer (MS Q-TOF) with a dual Agilent Jet Stream Electrospray (AJS ES) ion source. Formic acid (1%) in deionized water (solvent A) and acetonitrile (solvent B) were used as the mobile phases. A flow rate of 0.350 mL/min was used, while MS detection was performed in MS Q-TOF mode. Compounds were identified via their mass spectra and their unique mass fragmentation patterns. Compound Discoverer 2.1, ChemSpider, and PubChem were the main tools for identifying the phytochemical constituents.
Network pharmacology-based analysis Eight (8) compounds were identified according to the UPLC-MS/MS analysis and were subjected to network pharmacology-based analysis. The identification of the target genes linked to the selected constituents was performed using the STITCH database (http://stitch.embl.de/, ver. 5.0), and the obtained results were utilized for the construction of a compound-target (C-T) network using Cytoscape 3.5.1. The Cytoscape combined score of interactions was adopted for judging the importance of nodes in each given network. Information about functional annotation and the pathways that were highly associated with the target proteins was retrieved from DAVID ver. 6.8 (Database for Annotation, Visualization and Integrated Discovery) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. Relevant pathways with p-values <0.05 were selected. Target-pathway and constituent-pathway networks were constructed to visualize the interactions between compounds, targets and cancer-related pathways.

FIGURE 2 Venn diagram representing the common target genes for the compound prescription and the disease. The size denotes the number of target genes; the blue circle symbolizes the target genes of breast cancer, the red circle represents the target genes of the 10 quantitative components in HR/LC-MS, and the coincident part depicts the common target genes.

In-silico drug-likeness and toxicity predictions In silico drug-likeness and toxicity predictions of the top hit compounds were carried out using the SwissADME web browser (http://www.swissadme.ch) (Daina et al., 2017). Drug-likeness and toxicity filtering was based on Lipinski's rule of five (Lipinski, 2004). For example, constituents with a predicted oral bioavailability (OB) ≥30 were considered active. Constituents that satisfied fewer than three criteria were considered inactive. Molecular docking The ligand molecule (8-hydroxycoumarin) was retrieved in 3D Standard Data Format (3D SDF) from the PubChem database (Bolton et al., 2011). The ligand was then converted from 3D SDF to Protein Data Bank (PDB) format using the Avogadro software. The crystal structures of the breast cancer target proteins (Akt1, SRC, EGFR, IL6, HSP90AA and ESR1) were retrieved from the Protein Data Bank. BIOVIA Discovery Studio was used to remove unwanted bound ligand molecules, water molecules, and other contaminants from the macromolecule. Polar hydrogens were then added to the protein for improved interactions, followed by Kollman charges and other modifications. Molecular docking studies were performed between the target proteins of breast cancer and the compounds of the selected plant using AutoDock version 4.2.1 (Trott and Olson, 2010). All other parameters were left at their default values. The grid box was centered at X = 14.09, Y = 15.47, Z = 15.48, with dimensions X: 3.53, Y: 0.58, Z: 12.41. The Lamarckian Genetic Algorithm (LGA) was used for docking studies on the protein and ligand complexes (Fuhrmann et al., 2010). After the docking process was complete, RMSD clustering maps were developed by re-clustering with clustering tolerances of 1, 0.5, and 0.25 to identify the best cluster with the most populations and the lowest energy score.
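The grid-box parameters quoted above translate directly into a docking run. The sketch below is an assumption-laden illustration: the receptor/ligand PDBQT file names are hypothetical, it uses the AutoDock Vina command-line interface (the paper cites both AutoDock and AutoDock Vina), and the center/size values are the ones stated in the text.

```python
# Sketch: launching an AutoDock Vina docking run with the grid box
# quoted above. File names are hypothetical placeholders.
import subprocess

subprocess.run([
    "vina",
    "--receptor", "akt1.pdbqt",            # prepared target protein
    "--ligand", "8_hydroxycoumarin.pdbqt",  # prepared ligand
    "--center_x", "14.09", "--center_y", "15.47", "--center_z", "15.48",
    "--size_x", "3.53", "--size_y", "0.58", "--size_z", "12.41",
    "--out", "docked_poses.pdbqt",
], check=True)
```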
Molecular dynamics (MD) simulation study MD simulations were performed using Desmond 2020.1 from Schrodinger, LLC (Chow et al., 2008; Release, 2017) on the docked complex of Akt1 and the 8-hydroxycoumarin ligand. This system used the OPLS4 force field (Mazurek et al., 2021) and an explicit solvent model containing TIP3P water molecules in a periodic boundary solvation box of 10 Å × 10 Å × 10 Å dimensions. The system was initially equilibrated with restraints on the protein-ligand complex using an NVT ensemble for 10 ns. Following the preceding phase, an NPT ensemble was used to carry out a brief run of minimization and equilibration for 12 ns. The NPT ensemble was assembled using the Nose-Hoover chain coupling method and operated at 27 °C with a relaxation time of 1.0 ps under a pressure of 1 bar for the duration of the investigation. The time step was 2 fs. The Martyna-Tobias-Klein barostat method with a 2 ps relaxation time was adopted for pressure regulation. The radius for Coulomb interactions was set at 9 nm, and long-range electrostatic interactions were calculated using the particle mesh Ewald approach. For each trajectory, the bonded forces were estimated using the RESPA integrator with a time step of 2 fs. Root mean square fluctuation (RMSF), root mean square deviation (RMSD), solvent accessible surface area (SAS Area), and radius of gyration (Rg) were estimated to monitor the stability of the molecular dynamics simulations (Rapaport and Rapaport, 2004).

In vitro antioxidant activity 1,1-Diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity The capacity of the D. roylei extracts to scavenge DPPH radicals was assessed using the method of Gyamfi et al. with slight modification (Obi et al., 2022).

FIGURE 3 (Continued). The common target genes network interaction results. (A) PPI network of the common target genes. The nodes represent target genes; the stuffing of the nodes represents the 3D structure of target genes; the edges represent target gene-target gene associations; the colors of the edges represent different interactions; cyan and purple represent known interactions; green, red, and blue-purple represent predicted interactions; chartreuse, black, and light blue represent others.

A 0.5 mL aliquot of a test extract at various concentrations of 20-160 μg/mL in methanol was mixed with 0.5 mL of a 100 mM DPPH solution. The resulting absorbance was measured at 517 nm after a 30-min incubation period in complete darkness at room temperature. The following formula was used to compute the percentage inhibition: Percentage inhibition = [(Absorbance_control − Absorbance_sample) / Absorbance_control] × 100. Hydroxyl radical scavenging activity Hydroxyl radical scavenging activity was determined by the method of Elizabeth and Rao with slight modification (Nwakaego et al., 2019). The assay measures the 2-deoxyribose breakdown product by condensing it with thiobarbituric acid. Hydroxyl radicals are produced by the Fenton reaction, which comprises a ferric ion-EDTA-ascorbic acid-H2O2 system. The reaction mixture contained these components and different plant extract concentrations (10-80 μg/mL). After a 1-h incubation at 37 °C, 0.5 mL of the reaction mixture was mixed with 1 mL of 2.8% TCA, then 1 mL of 1% aqueous TBA was added, and the mixture was incubated for 15 min at 90 °C to develop the color. The absorbance was measured at 532 nm. Butylated hydroxytoluene (BHT) was used as the standard.
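Both scavenging formulas above (and the one that opens the next assay) reduce to the same control-normalized ratio. A minimal sketch of the calculation, with variable names of our choosing and hypothetical absorbance readings:

```python
# Sketch: the control-normalized inhibition used by the DPPH and
# hydroxyl radical assays above. Inputs are raw absorbance readings.
def percent_inhibition(abs_control: float, abs_sample: float) -> float:
    """[(A_control - A_sample) / A_control] * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbance readings, for illustration only.
print(f"{percent_inhibition(0.820, 0.215):.1f}% inhibition")
```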
Scavenging effect (%) = [(Absorbance_control − Absorbance_sample) / Absorbance_control] × 100. Reducing power The assay was carried out using Oyaizu's method (Zhang et al., 2021). This method estimates the reduction of Fe3+ to Fe2+ by measuring the absorbance of the Prussian blue complex. The procedure relies on the stoichiometric reduction of ferricyanide (Fe3+) by antioxidants. Various concentrations of the plant extracts (1-200 μg/mL) were added to 2.5 mL of 1% potassium ferricyanide [K3Fe(CN)6] and 2.5 mL of 0.2 M phosphate buffer with a pH of 6.6. Then, 2.5 mL of 10% trichloroacetic acid was added to the mixture after 20 min of incubation at 50 °C, and the mixture was centrifuged at 3,000 rpm for 10 min. The upper layer (2.5 mL) was diluted with 2.5 mL of distilled water and 0.5 mL of 0.1% FeCl3, and the absorbance was determined at 700 nm. Rutin was taken as the positive antioxidant control, and the reducing power was calculated from the absorbance values. Superoxide radical scavenging activity This assay was based on the extract's capacity to reduce formazan production by scavenging the radicals produced by the riboflavin-NBT system (Jaganathan et al., 2018). The reaction mixture consisted of 20 μg riboflavin, 50 mM phosphate buffer with a pH of 7.6, 0.1 mg/3 mL NBT, and 12 mM EDTA. The reaction was initiated by illuminating the reaction mixture with various concentrations of plant extracts (10-80 μg/mL) for 90 s. The absorbance was then measured at 590 nm. BHT was used as the standard antioxidant. The percentage scavenging of superoxide anions was calculated using the equation: Percentage inhibition = (1 − A_S/A_C) × 100, where A_S is the absorbance of the test sample and A_C is the absorbance of the control. In vitro anticancer activity Cell culture The MCF-7, MDA-MB-231, and MDA-MB-468 cell lines were purchased from the National Centre for Cell Science (NCCS) in Pune. Prof. Annapoorni Rangarajan, IISC, Bangalore, kindly supplied the 4T1 cell line. Cells were grown in high-glucose DMEM with 10% FBS and 1% penicillin/streptomycin. The cells were cultured in a 5% CO2 incubator at 37 °C. Cytotoxicity assay The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay was utilized to determine cell cytotoxicity (Rezadoost et al., 2019). Breast cancer cells were seeded in 96-well plates at 3 × 10^3 cells per well and allowed to adhere overnight. From a 10 mg/mL stock solution, various concentrations (12.5-400 μg/mL) of the different extracts of Delphinium roylei were prepared in fresh media. The breast cancer cells (MCF-7, MDA-MB-231, MDA-MB-468 and 4T1) were then treated with the extracts of Delphinium roylei for 72 h and the plates were placed in the incubator. MTT solution was applied to every well after incubation. The plate was kept in the incubator at 37 °C for 4 h under dim lighting. The supernatant was removed after 4 h, and 100 μL of DMSO was added to dissolve the purple formazan. An ELISA plate reader was used to read the plate at 595 nm. Cell viability was determined from the optical density (OD): Percentage cell viability = (OD of test sample / OD of control) × 100. Annexin V/PI apoptosis detection We used a BD Biosciences Annexin V apoptosis detection kit to examine the mechanism behind the anticancer effect of D.
roylei ethyl acetate extract. MDA-MB-231 cells were treated with the ethyl acetate extract for 24 and 48 h. As instructed by the manufacturer, adherent and free-floating cells were all collected and stained with the fluorescent dyes FITC-Annexin V and PI. Flow cytometry was performed at the Department of Biotechnology, National Institute of Technology, Rourkela, Odisha, India, on a BD Accuri™ C6 flow cytometer (Mehraj et al., 2022). Quantification of chemical components by HR/LC-MS In our previous investigation, 168 phytocompounds were tentatively identified using the liquid chromatography-mass spectrometry (LC/MS) approach in both negative and positive ionization modes (Mir et al., 2022b). A few more compounds were identified by performing the more advanced HR-LC/MS technique, and the chromatograms in TOF MS ES+ mode are shown in Figures 1A-D. These phytocompounds were identified as chemical markers and listed as potential candidates for further network pharmacology analysis. Network pharmacology Target gene screening and interaction network construction A total of 514 potential target genes were obtained for the 10 quantitative components of HR/LC-MS (shown in Figure 2). Meanwhile, 12,063 disease target genes associated with breast cancer were retrieved using the GeneCards, OMIM and DisGeNET platforms. 464 shared common target genes were identified between the quantitative components of HR/LC-MS and breast cancer. All 10 components of D. roylei, namely 8-hydroxycoumarin, Delsoline, Royleinine, Delsemine B, Herniarin, Aloesin, Talatisamine, Narwedine, Scoparone, and Piperine, were targeted for further analysis. The PPI diagram of the common target genes indicated that there were 464 nodes and 6,035 edges in the PPI network (Figure 3A). The frequency of occurrence of the top 30 common target genes is shown in Figure 3B. Akt1, SRC, EGFR, IL6, HSP90AA, and other target genes exhibited a high frequency of protein interaction and may be the hub proteins of the whole network. The results showed that the selected components of D. roylei had a high binding activity with these target proteins, which could therefore serve as potential targets for treating breast cancer. Screening of key pathways of the HR-LC/MS components for treating breast cancer GO analysis of the common target genes showed that the biological processes mainly involved protein autophosphorylation, peptidyl-tyrosine modification, and positive/negative regulation of phosphorylation (Figure 4; also given in S-2 to S-4). The leading molecular functions were non-membrane-spanning protein tyrosine kinase activity and protein serine/threonine/tyrosine kinase activity. The cyclin-dependent protein kinase holoenzyme complex and the protein-kinase complex were observed among the cellular components. The KEGG pathway enrichment analysis of the aforementioned common target genes is shown in Figure 5 and also given in S-5. After the exclusion of broad pathways, the top 20 signalling pathways are listed in Table 2. This suggested that the phytocompounds identified by HR-LC/MS may act on multiple pathways in treating breast cancer, with complex interactions among these pathways.
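The overlap step described above (514 compound targets intersected with 12,063 disease genes, giving 464 common targets) is a set intersection, and KEGG enrichment p-values of the kind selected here rest on a hypergeometric test. A minimal sketch, with hypothetical gene identifiers and pathway sizes standing in for the real lists:

```python
# Sketch: target overlap (the Venn step) and a hypergeometric
# enrichment p-value of the kind underlying KEGG pathway enrichment.
from scipy.stats import hypergeom

compound_targets = {"AKT1", "SRC", "EGFR", "IL6", "HSP90AA1", "ESR1"}  # toy
disease_genes = {"AKT1", "SRC", "EGFR", "TP53", "BRCA1", "IL6"}        # toy
common = compound_targets & disease_genes
print(sorted(common))  # shared targets, as in the Venn diagram

# Pathway enrichment: M background genes, n genes in the pathway,
# N query genes, k of them falling in the pathway. Only the query
# size (464) comes from the text; the other numbers are hypothetical.
M, n, N, k = 20000, 350, 464, 30
p_value = hypergeom.sf(k - 1, M, n, N)  # P(overlap >= k)
print(f"p = {p_value:.3e}")
```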
Compound prescription-active component-disease target gene-pathway interaction network The compound prescription-active component-disease-target gene-pathway interaction network is shown in Figure 6. The network contained 474 nodes (464 target genes and 10 active components). The interaction network results of the 10 active compounds are shown in Table 3. The degrees of 8-hydroxycoumarin, Delsoline, Royleinine, Delsemine B, Herniarin, Aloesin, Talatisamine, Narwedine, Scoparone, and Piperine were 112, 104, 101, 104, 102, 108, 101, 105, 102 and 110, respectively. The results above show that the quality markers identified by LC-MS may act on the whole biological network system rather than on a single target gene. Molecular docking studies All the binding energy scores were calculated from the best cluster (95%) that falls within the lowest RMSD of 0.25 Å. With the lowest binding energy (ΔG = −9.2 kcal/mol) and inhibition constant Ki (1.14 µM), 8-hydroxycoumarin showed a considerable binding affinity for Akt1 (Figure 7A). During the interaction of the ligand 8-hydroxycoumarin, Trp80 is involved in pi-pi stacking, and the Lys268 and Val270 residues are involved in pi-alkyl interactions at the binding cavity of the protein. Besides hydrogen bonding, Leu264 is involved in a pi-sigma interaction, and the remaining residues form weak non-bonded van der Waals interactions with the ligand (Figure 7A, right panel). Molecular docking studies were performed to verify the affinity of the target protein(s) for the bioactive phytocompounds. The 3D interactions of 8-hydroxycoumarin with the other target proteins (SRC, EGFR, IL-6, Hsp90aa1 and ESR-1) and the 2D structures of 8-hydroxycoumarin interacting with the respective amino acids are shown in Figures 7B-F. Table 4 depicts the docking scores of the D. roylei compounds 8-hydroxycoumarin, Delsoline, Royleinine, Delsemine B, Herniarin, Aloesin, Talatisamine, Narwedine, Scoparone, and Piperine against the active sites of the identified protein targets Akt1, SRC, EGFR, IL-6, Hsp90aa1 and ESR-1, performed using the AutoDock Vina software. Drug-likeness prediction of D. roylei phytoconstituents The compounds retrieved from PubChem were assessed against Lipinski's Rule of 5 for drug-likeness. Furthermore, ADMET evaluation was applied, and compliant compounds were selected for molecular docking to determine their binding affinity with the protein at the active site. Almost all of the nine phytoconstituents satisfy Lipinski's rule with few violations, as shown in Table 5. Molecular dynamic (MD) simulation study Molecular dynamics (MD) simulation studies were conducted to determine the convergence and stability of the Akt1+8-hydroxycoumarin complex. Comparing the root mean square deviation (RMSD) values, the 100 ns simulation showed a stable conformation. The RMSD of the Cα-backbone of Akt1 bound to 8-hydroxycoumarin showed a deviation of 2.1 Å (Figure 8A), while the ligand showed an RMSD deviation of 3.8 Å.
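The Lipinski screen described above is straightforward to reproduce with an open-source toolkit. The sketch below uses RDKit rather than the SwissADME web service (so numeric values may differ slightly), and the SMILES string for 8-hydroxycoumarin is supplied by us for illustration, not taken from the paper.

```python
# Sketch: Lipinski rule-of-five check with RDKit (not SwissADME itself).
from rdkit import Chem
from rdkit.Chem import Descriptors

def lipinski_violations(smiles: str) -> int:
    """Count rule-of-five violations for a molecule given as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES")
    return sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Descriptors.NumHDonors(mol) > 5,
        Descriptors.NumHAcceptors(mol) > 10,
    ])

# 8-hydroxycoumarin (SMILES assumed by us for illustration).
smiles = "O=C1C=Cc2cccc(O)c2O1"
v = lipinski_violations(smiles)
print(f"violations: {v} -> {'drug-like' if v <= 1 else 'flagged'}")
```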
Stable RMSD plots indicate good convergence and stable conformations throughout the simulation. As a result, it may be inferred that 8-hydroxycoumarin bound with Akt1 is quite stable in the complex due to the ligand's increased affinity. The plot of root mean square fluctuations (RMSF) indicates the residual fluctuations due to conformational variations in different secondary structures. Here, the RMSF plot displayed high fluctuations at residue positions 50-60, 110-130, and 240-260 (Figure 8B). The highest fluctuating peaks, ranging from 3.8 Å to 6.7-6.8 Å, might be due to higher-order flexibility conforming into loops (Figure 8B). Therefore, the protein Akt1 has significant flexibility to conform to specific secondary structures to accommodate the ligand. The radius of gyration (Rg) quantifies how compact a protein is with a ligand molecule (Miu et al., 2008). In this investigation, the Akt1 Cα-backbone linked to 8-hydroxycoumarin demonstrated stable radius of gyration (Rg) values between 21.8 and 22.1 Å; hence, there were no sudden changes in the radius of gyration, as shown in Figure 8D. These steady values of Rg suggest that despite the structural changes caused by the compounds, the protein remains folded. The significantly stable gyration (Rg) suggests that the protein is oriented in a highly compact manner when it is attached to a ligand. The number of hydrogen bonds between the ligand and protein indicates the stability and depth of the complex's interaction. During the 100 ns of the simulation, there were significant numbers of hydrogen bonds between Akt1 and 8-hydroxycoumarin (Figure 8C); the average number of hydrogen bonds between Akt1 and the 8-hydroxycoumarin ligand was two. The overall analysis of Rg indicates that the binding of the various ligands causes the corresponding proteins to become less flexible and more compact. Molecular mechanics generalized Born surface area (MM-GBSA) calculations Utilizing the MD simulation trajectory, the binding free energy and the other contributing energies in the form of MM-GBSA were determined for the Akt1 complex bound to 8-hydroxycoumarin. The results (Table 6) suggested that the maximum contribution to ΔGbind in the stability of the simulated complexes was due to ΔG bind Coulomb, ΔG bind vdW, ΔG bind Hbond, and ΔG bind Lipo, while ΔG bind Covalent and ΔG bind SolvGB contributed to the instability of the corresponding complex. The Akt1 complex bound to 8-hydroxycoumarin has a significantly high binding free energy, ΔGbind = −52.23 ± 5.12 kcal/mol (Table 6). These results support that 8-hydroxycoumarin binds to Akt1 with high affinity and efficiency and forms a stable protein-ligand complex (Balasubramaniam et al., 2021).
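The stability metrics interpreted above (RMSD, RMSF, Rg) are simple functions of the trajectory coordinates. The sketch below shows the core arithmetic with plain NumPy, assuming frames have already been superposed on the reference structure; it is an illustration of the definitions, not the Desmond analysis pipeline itself.

```python
# Minimal sketch of trajectory stability metrics, assuming an array
# traj of shape (n_frames, n_atoms, 3) already superposed on traj[0].
import numpy as np

def rmsd(frame, ref):
    """Root mean square deviation between one frame and a reference."""
    return np.sqrt(((frame - ref) ** 2).sum(axis=1).mean())

def rmsf(traj):
    """Per-atom root mean square fluctuation about the mean structure."""
    mean = traj.mean(axis=0)
    return np.sqrt(((traj - mean) ** 2).sum(axis=2).mean(axis=0))

def radius_of_gyration(frame, masses):
    """Mass-weighted radius of gyration of a single frame."""
    com = np.average(frame, axis=0, weights=masses)
    sq = ((frame - com) ** 2).sum(axis=1)
    return np.sqrt(np.average(sq, weights=masses))

# Toy usage with random coordinates (placeholder for a real trajectory).
traj = np.random.default_rng(0).normal(size=(100, 250, 3))
masses = np.ones(250)
print(rmsd(traj[-1], traj[0]), radius_of_gyration(traj[0], masses))
```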
In vitro antioxidant activity DPPH assay All the extracts showed different levels of DPPH radical scavenging activity over the 20-160 μg/mL concentration range, as shown in Figure 9A. The methanolic extract exhibited the strongest DPPH radical scavenging activity compared with the other extracts. The radical scavenging activity of the extracts was effective in the order methanol > ethanol > ethyl acetate > petroleum ether. A maximum of 82.46% ± 0.2% radical scavenging potential was observed at 200 μg/mL of the methanolic extract, whereas for ascorbic acid the scavenging activity was 89.57% ± 0.25%. A minimum of 66.35% ± 0.32% scavenging potential was observed at 200 μg/mL of the petroleum ether extract. The standards and all the extracts showed a dose-dependent inhibition of the DPPH radicals. Hydroxyl radical scavenging assay In this assay, the results showed that the methanolic extract of D. roylei had a higher potential to scavenge hydroxyl radicals than the ethanolic, ethyl acetate, and petroleum ether extracts, as shown in Figure 9B. At a concentration of 80 μg/mL, the methanolic, ethyl acetate, ethanolic, and petroleum ether extracts showed maximum scavenging effects of 81.92% ± 0.49%, 78.72% ± 0.8%, 73% ± 0.7% and 69.88% ± 0.55% inhibition of hydroxyl radicals, respectively. Butylated hydroxytoluene (BHT), taken as a control, showed a greater scavenging effect (90.07% ± 1%) than the plant extracts. Reducing power assay As illustrated in Figure 9C, Fe3+ was transformed to Fe2+ in the presence of the D. roylei extracts and the reference compound Rutin to measure the reductive capability. The reducing power increased with an increase in the concentration of the plant extracts. The methanolic extract of D. roylei showed significant reducing power when compared with standard Rutin. The reducing power demonstrated by the methanolic extract of the plant was 1.429 ± 0.005 at a concentration of 200 μg/mL, as compared with 1.811 ± 0.0035 shown by standard Rutin at the same concentration. The ethanolic, ethyl acetate, and petroleum ether extracts showed lower reducing power (1.246 ± 0.0025, 1.136 ± 0.003, and 0.987 ± 0.005, respectively) compared with the methanolic extract at the 200 μg/mL concentration. Superoxide radical scavenging (SARS) assay All the test extracts exhibited effective superoxide anion (O2•−) scavenging activity in a concentration-dependent manner (10-80 μg/mL), as shown in Figure 9D. The highest activity (scavenging effect of 72.91% ± 0.76%) was shown by the methanolic extract of D. roylei at a concentration of 80 μg/mL, followed by the ethyl acetate, ethanolic, and petroleum ether extracts with scavenging effects of 68.125% ± 0.45%, 65.84% ± 0.41% and 62.27% ± 0.5%, respectively. Standard Rutin showed the highest scavenging potential (77.46% ± 0.7%) at the 80 μg/mL concentration as compared with all four extracts of the plant.
The ethyl acetate extract of the plant showed a maximum cytotoxic effect on MDA-MB-231 with an IC50 value of 116.7 μg/mL, followed by the methanolic, petroleum ether, and ethanolic extracts with IC50 values of 274.1, 317.1, and 549.3 μg/mL, respectively. The methanolic extract of the plant showed the maximum reduction in the growth of the MDA-MB-468 breast cancer cell line with the lowest IC50 value of 166.3 μg/mL, followed by the ethyl acetate, ethanolic, and petroleum ether extracts with IC50 values of 296.1, 335.5 and 415.5 μg/mL, respectively. The ethyl acetate extract showed the maximum anti-proliferative potential against the 4T1 cell line with an IC50 value of 157.9 μg/mL. The ethanolic, petroleum ether and methanolic extracts showed less anti-proliferative potential against 4T1, with IC50 values of 420.7, 598.9, and 689.4 μg/mL, respectively. The ethyl acetate extract is highly specific to the MCF-7 cell line with an IC50 value of 125.5 μg/mL, followed by the methanolic, ethanolic, and petroleum ether extracts with IC50 values of 213, 299.3, and 498.6 μg/mL, respectively. The IC50 values of standard doxorubicin are shown in Table 7. Hence, the ethyl acetate extract of D. roylei showed the highest anticancer activity against the MDA-MB-231, MCF-7 and 4T1 cell lines compared with the other three extracts. In vitro anticancer activity Furthermore, we utilized Annexin V and PI staining to assess the apoptosis induction potential of the ethyl acetate extract of D. roylei. MDA-MB-231 cells were treated for 24 and 48 h, followed by staining with Annexin V and PI. Flow cytometry analysis revealed that the plant extract induced tumour cell death via apoptosis, which was significantly enhanced at higher concentrations, as shown in Figure 11. Discussion In the current study, many secondary metabolites were identified using HR-LC/MS of D. roylei, out of which a few phytocompounds were selected to propose a possible mechanism against breast cancer using network pharmacology, molecular docking, molecular dynamics simulation and in vitro studies. The network pharmacology analysis suggested that the therapeutic efficacy of the D. roylei phytoconstituents against breast cancer was mainly associated with 20 signalling pathways, 30 potential target genes, and 10 bioactives. Through the network pharmacology, we identified the most significant protein (Akt1) associated with the occurrence and development of cancer and a bioactive, 8-hydroxycoumarin (8-hydroxychromen-2-one), from D. roylei. We identified a hub signaling pathway (the PI3K-Akt signaling pathway), which showed the lowest rich factor among the 20 signaling pathways. Akt1 kinase is a protein made according to instructions from the Akt1 gene (Lv et al., 2020). This protein is present in many different cell types throughout the body and plays a crucial role in numerous signaling pathways. For instance, Akt1 kinase is vital in controlling cell survival, apoptosis, proliferation, and differentiation (Jafari et al., 2019). Recent research has demonstrated that the PI3K/Akt signalling pathways, which play a role in the above-mentioned processes, are frequently disrupted in various human malignancies (Xu et al., 2019). This pathway is crucial for tumor growth and potential responsiveness to cancer treatments. Many new targeted agents have been created, especially to target PI3K/Akt-related targets. Therefore, a better understanding of the PI3K/Akt signaling pathway may help enhance the oncologist's accuracy in predicting response to treatment.
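The IC50 values quoted earlier in this section are typically obtained by fitting a sigmoidal dose-response curve to the viability data. A minimal sketch with SciPy, using a four-parameter logistic model; the concentrations and responses below are synthetic placeholders, not the paper's measurements:

```python
# Sketch: estimating an IC50 by fitting a 4-parameter logistic curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([12.5, 25, 50, 100, 200, 400])  # μg/mL (toy data)
viability = np.array([95, 88, 72, 55, 34, 18])  # % viable (toy data)

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[10, 100, 100, 1], maxfev=10000)
print(f"estimated IC50 ≈ {params[2]:.1f} μg/mL")
```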
Based on the degree values of the compounds in the network, we identified the 8-hydroxycoumarin compound as the most active ingredient of D. roylei. Previous studies have revealed that 7- and 4-hydroxycoumarin and their derivatives have numerous therapeutic benefits (Gaber et al., 2021). These are employed as drug intermediates and as antitumor, anti-inflammatory, anti-HIV, antimicrobial, anti-coagulant, antioxidant, and anti-viral agents (Gaber et al., 2021). However, the degrees of the other compounds were also high, indicating that quality markers may affect the entire biological network system instead of only one target gene. Furthermore, the KEGG pathway enrichment analysis of the 30 targets suggested that the top 20 signaling pathways were involved in breast cancer occurrence and development. This indicated that the phytocompounds act on multiple pathways in treating breast cancer, with complex interactions among these pathways. Based on the frequency of each gene in the compound-gene network, Akt1 showed the highest frequency of protein interaction, followed by SRC, MAPK3, EGFR, IL-6, HSP90AA1, ESR-1, and other target genes. Furthermore, an in silico docking analysis of the nine most prevalent compounds was carried out against the 30 potential targets using AutoDock version 4.2.6 software (Morris et al., 2008). All tested compounds showed promising results against the Akt1 protein based on the docking scores. Among them all, the bioactive 8-hydroxycoumarin showed the highest energy score with Akt1. 8-Hydroxycoumarin fits comfortably into the binding sites on the Akt1 protein and interacts favourably with critical amino acid residues (Figure 7): as noted above, Trp80 is involved in pi-pi stacking, Lys268 and Val270 in pi-alkyl interactions, and Leu264 in a pi-sigma interaction besides hydrogen bonding, while the remaining residues form weak non-bonded van der Waals interactions with the ligand (Figure 7A, right panel). All the binding energy scores were determined from the best cluster (95 percent) that falls within the lowest RMSD of 0.25 Å. Therefore, it can be inferred from the molecular docking studies that 8-hydroxycoumarin has a high affinity for the protein Akt1. The stability of the representative Akt1 and 8-hydroxycoumarin complex was further explored using molecular dynamics simulations. The RMSD plots show that the MD results exhibited stable patterns throughout the entire simulation run (Figures 8A,B). The RMSF graphs show that Akt1 has high flexibility to accommodate the ligand at the binding pocket. The Rg plots demonstrate that the protein stayed compact throughout the simulation. According to the average results observed, the protein backbone was compact. To understand how the residues behaved throughout the simulation run, the fluctuations of the residues were analyzed. After ligand binding, the target's solvent-accessible surface area decreased. Additionally, our research aimed to determine the antioxidant and anticancer potential of the D. roylei active extracts by an in vitro approach. In this investigation, the DPPH radical scavenging, hydroxyl radical scavenging, reducing power, and superoxide radical anion scavenging assays demonstrated the antioxidant potential of D.
roylei extracts, which was significant when compared with the positive controls ascorbic acid, BHT, Rutin, and BHT, respectively. These observations are in accordance with previous studies on the antioxidant potential of Delphinium malabaricum extracts, as investigated by the DPPH radical scavenging assay and the ferric reducing antioxidant power (FRAP) assay (Lotfaliani et al., 2021). Further, the petroleum ether, ethyl acetate, methanol, and ethanolic extracts were subjected to a cytotoxicity assay against various breast cancer cell lines. The results shown in Figure 10 demonstrated that the ethyl acetate extract of D. roylei showed the highest anticancer activity against the 4T1, MCF-7, MDA-MB-468, and MDA-MB-231 breast cell lines as compared with the other three extracts. As a result, the phytoconstituents found in the plant extracts might have a greater propensity to suppress the proliferation of cancer cells. D. roylei extracts had relatively low IC50 values against breast cancer cell lines, which may be due to phytocomponents with stronger binding affinities or to the alteration of proteins or pathways implicated in tumor development. It can be observed that the ethyl acetate extract had the highest content of the compounds, verifying the above conclusions about these compounds being the key constituents responsible for the cytotoxic activity of the studied plant. Previous studies (Yin et al., 2020) revealed that Siwanine E, Uraphine, Delpheline, Delcorinine, Nordhagenine A, and Delbrunine from D. honanense and D. chrysotrichum exhibited anticancer potential against MCF-7 cells with IC50 values of 9.62-35.32 μM. Flow cytometry analysis revealed that the plant extract induced tumor cell death via apoptosis, which was significantly enhanced at higher concentrations, as shown in Figure 11. Conclusion In the present study, we explored the potential mechanisms of the phytocompounds present in D. roylei in suppressing breast cancer by network pharmacology-based analysis in combination with chemical profiling, molecular docking, MD simulation, and in vitro studies. HR-LC/MS identified some important phytoconstituents, and the subsequent network pharmacology analysis revealed that 8-hydroxycoumarin, Delsoline, Royleinine, Delsemine B, Herniarin, Aloesin, Talatisamine, Narwedine, Scoparone, and Piperine were the main constituents related to breast cancer targets, while Akt1, SRC, EGFR, IL-6, Hsp-90AA1, and ESR-1 were the main breast cancer-related molecular targets. 20 cancer-related pathways were identified, where neuroactive ligand-receptor interaction was the most enriched, with the highest number of observed genes and the lowest false discovery rate, followed by non-small-cell lung cancer. Molecular docking studies showed that 8-hydroxycoumarin possessed the highest binding energies towards all the target proteins, followed by the other compounds against the studied targets. Furthermore, the in vitro studies showed that the ethyl acetate extract possesses the highest anticancer activity and the methanolic extract showed significant antioxidant activity compared with the other extracts of the studied plant. The study provided a comprehensive understanding of the suggested mechanism of action of Delphinium roylei, which may have potential use in breast cancer treatment. FIGURE 1 (Continued). (A) HR-LC/MS chromatogram of the methanolic extract of D. roylei in positive mode. (B) HR-LC/MS chromatogram of the ethanolic extract in positive mode of D.
roylei. (C) HR-LC/MS chromatogram of the ethyl acetate extract in positive mode of D. roylei. (D) HR-LC/MS chromatogram of the petroleum ether extract in positive mode of D. roylei.
FIGURE 4 GO (biological process, molecular function, and cellular component) analysis (top 20). The node length represents the number of target genes enriched, and the node color from blue to red represents the p-value from large to small.
FIGURE 5 KEGG pathway enrichment analysis (top 20). The node size represents the number of target genes enriched, and the node color from blue to red represents the p-value from large to small.
FIGURE 6 Compound prescription-active component-disease-target gene network. The node colors represent different groups: purple represents the components, and blue represents target genes. Component node size represents the degree; low to high degree is shown by node size from small to large.
FIGURE 10 (A) Cell viability of D. roylei extracts against various breast cancer cell lines. (B) Cell viability of the positive control (doxorubicin) against various breast cancer cell lines.
FIGURE 11 Annexin V & PI staining showed increased apoptotic cells in plates treated with the ethyl acetate extract of D. roylei after 24 h or 48 h periods.
Figures 10A,B show the cell viability (%) of the various breast cancer cell lines (MDA-MB-231, MCF-7, MDA-MB-468, and 4T1) when treated with different concentrations of the methanolic, ethanolic, ethyl acetate, and petroleum ether extracts of D. roylei and the doxorubicin drug. The IC50 values of the four different extracts and the positive control against the various breast cancer cell lines are shown in Table 7.
TABLE 1 List of selected phytocompounds detected in D. roylei by HR-LC/MS. These nine compounds were subjected to network pharmacology, in silico docking, and MD simulation analysis.
TABLE 3 Interaction network details of the 21 active components.
TABLE 4 Binding energies obtained from the docking calculations of the bioactive phytoconstituents with the target proteins.
TABLE 5 Drug-likeness prediction of D. roylei phytoconstituents by ADMET evaluation using the SwissADME software.
TABLE 6 Binding free energy components for the Akt1+8-hydroxycoumarin complex calculated by MM-GBSA.
TABLE 7 IC50 values (in μg/mL) of the various extracts of D. roylei against different breast cancer cell lines.
8,927.8
2023-09-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
The Advanced HLA Typing Strategies for Hematopoietic Stem Cell Transplantation Introduction The occurrence of graft rejection and/or graft-versus-host disease (GVHD) after allogeneic hematopoietic stem cell transplantation (allo-HSCT) largely depends on whether the recipient and the donor have matched HLA types. Under normal circumstances, an individual with completely matched HLA antigens can be the donor. However, due to the high level of HLA polymorphism, the major obstacle in allogeneic hematopoietic stem cell transplantation is to find a donor with HLA antigens that are a perfect match. This can prove to be quite problematic. In 1954, an organ transplantation team led by Dr. Murray at Harvard University successfully completed a kidney transplantation between identical twins for the first time. From then on, the importance of histocompatibility in organ transplantation has been well recognized. The first human bone marrow transplantation between identical twins in 1957 provided a new approach for the treatment of leukemia and other hematologic malignancies. As a result, basic research on HLA as well as HLA typing techniques gained much attention over the next 20 years. The short-term survival rate of organ transplantation has been greatly improved since the 1980s due to the clinical application of immunosuppressive agents such as CsA. Owing to these successes, as well as the defects and limitations of HLA serotyping and cellular typing, the clinical value of HLA typing was largely ignored in the medical community. With the advance of research in immunology and transplantation immunology, particularly in the structure and function of HLA in the 1990s, new technology for HLA typing has emerged and continues to improve. Terasaki and Opelz analyzed a large number of organ transplantation cases performed in major transplantation centers around the world. The role, status and importance of HLA typing in hematopoietic stem cell transplantation have thus been recognized once again. Overall, HLA typing is required in hematopoietic stem cell transplantation. HLA compatibility not only significantly reduces the incidence of acute rejection, but also significantly reduces the incidence of chronic rejection. HLA compatibility is one of the most critical factors that affect the long-term survival of the graft. HLA loci are the most genetically variable gene loci in humans. Two hundred and twenty-four loci of the HLA complex have been identified so far. Among these, 128 are functional loci that encode proteins, and 39.8% of HLA genes are related to the immune system, particularly those belonging to the class II loci. Almost all these genes display immune-related functions. Approximately 100 HLA gene loci have been cloned and named, and at least 18 of them have multiple alleles.
Since these loci have various numbers of alleles and each allele encodes a corresponding HLA antigen, the HLA complex has the most abundant genetic polymorphism in the human immune system (Fig 1). Theoretically, it is very difficult to find an unrelated donor with a perfectly matched HLA genotype (at the allele level) in the general population. The polymorphism of HLA makes it difficult to find a match between unrelated donor and recipient in allotransplantation. Currently, the most commonly used HLA typing in organ transplantations around the world is based on the HLA-A, B, C and DR genes. There are up to 7400 alleles in these genes, corresponding to more than 100 specific antigens. With the increasing number of patients who need hematopoietic stem cell transplantation, the lack of appropriate donors has become a significant challenge. Therefore, there is an urgent need to develop novel scientific, practical, and feasible HLA typing methods in the field of hematopoietic stem cell transplantation. Principles for HLA typing strategy in allogeneic hematopoietic stem cell transplantation The first successful human bone marrow transplantation between identical twins in 1957 provided a new approach for the treatment of leukemia and other hematologic malignancies. After the successful hematopoietic stem cell transplantation between an unrelated donor and recipient with matched HLA, a bone marrow donor registry (National Marrow Donor Program, NMDP) was established in 1988 in the USA. Later on, a public cord blood bank was established. According to the World Marrow Donor Association (WMDA), as of July 2012, the association has 68 bone marrow banks in 49 countries and regions. It also has 46 cord blood banks in 30 countries and regions. The registered bone marrow and umbilical cord blood donors have exceeded 20 million. Meanwhile, the technology of HLA typing has been transformed from simple serotyping to more accurate genotyping. Although there are hundreds of reports regarding the effect of the HLA matching degree on the efficacy of hematopoietic stem cell transplantation, these results are not consistent due to differences in sample size, disease type and stage, and HLA typing. In addition, the interpretation of HLA genotyping results and their biological significance is becoming increasingly complicated. It is challenging for clinicians outside the HLA field to select an unrelated donor with the best-matched HLA. To meet this challenge, the WMDA, the NMDP of the USA and the European Federation of Immunogenetics (EFI) have provided guidelines for HLA typing. Correlation between HLA alleles and HLA antigen specificities There is a fundamental difference in the result and biological significance between HLA serotyping and genotyping. In HLA serotyping, HLA antibodies are used to identify the HLA antigens on the surface of lymphocytes. HLA antigens are proteins that can be recognized by the host immune system during blood transfusion, organ transplantation, as well as pregnancy. Specific antibodies against HLA antigens are the basis of the identification of the HLA antigens. The HLA antisera used in serotyping, regardless of whether they are from the same species or different species, are all produced by immune stimulation with HLA antigens or peptides. In HLA genotyping analysis, a specific HLA gene fragment is amplified in vitro from an individual's genomic DNA using synthetic oligonucleotide probes or primers.
The genetic difference caused by variant HLA alleles is reflected in variation of the DNA sequence. Therefore, HLA genotyping can identify all HLA alleles at the DNA level, while HLA serotyping can only detect part of the variants. The efficacy of bone marrow transplantation is closely related to the HLA matching level between the donor and recipient. However, the HLA genotyping result does not directly reflect the antigen that causes immune rejection after the transplantation. Therefore, for the purpose of clinical relevance, the result of HLA genotyping should be converted to the HLA specificity. To this end, the NMDP and the University of California, Los Angeles (UCLA) established the International Cell Exchange program, through which correlations between HLA alleles and HLA antigen specificities are established by comparing a large amount of testing results worldwide. The dictionary of HLA alleles and their corresponding antigen specificities is constantly updated. As of 2008, 70% of HLA alleles had been correlated to HLA antigen specificities. The remaining 30% are rare alleles with a frequency of less than 1 in 10,000, so their clinical value is relatively low. The HLA genotyping result can be easily converted to the HLA antigen specificity by using this HLA dictionary.

The number of donors with matched HLA genotypes is much lower than that with matched HLA antigens

The criteria for matched HLA between the donor and recipient differ between HLA genotyping and HLA serotyping in bone marrow transplantation. From the HLA dictionary, one can tell that an HLA antigen specificity is unique, while a unique antigen may have one or more corresponding HLA alleles. For example, the HLA-DR10 antigen corresponds only to the HLA-DRB1*1001 allele, while the HLA-DR11 antigen corresponds to 21 alleles such as HLA-DRB1*1101, *1102 and *1103. Therefore, the choice of donor for bone marrow transplantation may differ depending on the method of HLA typing. For example, a donor and recipient listed in Table 1 may have matched HLA according to antigen specificity, while their HLA genotypes may not match. Which method is more accurate for bone marrow transplantation is currently under investigation. Statistical analysis indicates that the chance of finding matched HLA genotypes in a random population is much lower than that of finding matched HLA antigens. For instance, as of February 2002, HLA-A, B and DR comprised 93 specific antigens (25, 50 and 18 antigens respectively), which can generate 2.2 × 10^4 haplotypes; the number of genotypes formed from these haplotypes can be up to 2.5 × 10^13. Currently, about 2,100 alleles have been identified in the HLA-A, B and DRB1 genes. Their combination will yield 3.4 × 10^7 haplotypes. As a result, the number of HLA-A, B and DRB1 genotypes in a population can be up to 5.78 × 10^15, making it almost impossible to find a matched HLA genotype in a random population. In other words, the HLA genotypes of the donor and the recipient are always more or less mismatched in bone marrow transplantation. Because of this, the concept of permissible HLA mismatches has been introduced.

Permissible HLA mismatches

In the case of permissible HLA mismatches, the donor and the recipient have mismatched HLA in a bone marrow transplant; however, the mismatch does not cause a significantly increased rate of GVHD or graft failure, and is acceptable for bone marrow transplantation.
Results from retrospective analyses suggest that mismatched alleles of HLA class I genes, as well as mismatches at the HLA-DQ and DP loci, have minimal impact on the efficacy of bone marrow transplantation.

HLA class I antigen or allele mismatch

Petersdorf et al investigated the effect of the matching level of HLA class I antigens and alleles on the success rate of bone marrow transplantation in 471 patients. The transplant failure rate was 0.7% in 280 cases with matched HLA-A, B and C genes, and 0% in 47 cases with one mismatched heterozygous HLA-A, B or C gene. However, the failure rate in 51 cases with one mismatched HLA-A, B or C antigen was 14%, significantly higher than that in the control group. In 76 cases with two or more mismatched antigens or genes, the transplant failure rate was 17%. These results indicate that a single mismatched allele in an HLA class I gene does not increase the transplant failure rate, while a single mismatched antigen, or two or more mismatched antigens or genes, can significantly increase the transplant failure rate. These results support Petersdorf's hypothesis that the immune response caused by mismatched HLA class I alleles is lower than that caused by mismatched antigens. Therefore, mismatched HLA class I genes are permissible in bone marrow transplantation, as long as the HLA antigens match. Rubinstein et al also believe that transplantation can be considered if there is only one mismatched allele; for example, the recipient's genotype is HLA-A*0202 while the donor's genotype is HLA-A*0203. This kind of mismatch does not increase the rate of immune rejection.

Further analysis indicates that whether a single allele mismatch is allowed in transplantation also depends on the type of the corresponding mismatched amino acid and the position of that amino acid in the HLA class I antigen. HLA class I molecules consist of a heavy chain non-covalently associated with β2-microglobulin. The extracellular fragment of the heavy chain has three domains (α1, α2 and α3), and the α1 and α2 domains form the peptide-binding region. The complex of HLA and its bound peptide on the cell surface constitutes the ligand for the T-cell receptor (TCR), thereby inducing an immune response. If there is only one mismatched allele between the donor and recipient, the number of mismatched amino acids will be much lower and may rarely involve the amino acids for TCR binding. On the other hand, if the donor and the recipient have a mismatched antigen, there may be many mismatched amino acids, and some of these amino acids may be involved in peptide binding and TCR binding. This may explain why the matching of HLA class I antigens is more important than the matching of genotypes (Figs 2 and 3).

HLA typing standard for hematopoietic stem cell transplantation

According to the guidelines of the World Marrow Donor Association (WMDA) and the European Federation for Immunogenetics (EFI), HLA typing of the donor in a large-scale bone marrow center is generally limited to 2 digits after the asterisk in the WHO HLA nomenclature, corresponding to the subtype of a specific HLA antigen. However, high-resolution HLA typing should be performed for recipients and donors with matched HLA. In addition, the typing of HLA class I genes should also include the C locus; due to the increasingly recognized role of the C locus in immune rejection, typing of HLA-C should be performed.
When choosing a donor, the HLA-DRB1 genes of the donor and the recipient should have 4 identical digits after the asterisk in the WHO HLA nomenclature. Although the most commonly used methods for HLA genotyping cannot cover all genes, this does not limit their application in HLA typing for bone marrow transplantation. Among the thousands of identified HLA alleles, most are rare alleles; therefore, it is not necessary to type all HLA alleles. For instance, 244 expressed genes have been identified at the DRB1 locus. Among them, 148 (60%) alleles have corresponding specific DR antigens, while 96 alleles (40%) do not. According to the NMDP, HLA-DRB1 typing results from 65,752 donors show that 86 alleles have a frequency of 0 and that the frequency of another 105 alleles is lower than 0.0002. In addition, the total frequency of 10 alleles without corresponding antigens is 0.000084. Therefore, identification of the remaining 43 DRB1 alleles will cover 99.6% of HLA-DR antigen specificities, which is sufficient for donor screening in hematopoietic stem cell transplantation.

PCR based HLA genotyping methods

The technology for HLA typing has evolved from the serological level to the cellular level, and then to the molecular level. Serotyping was the mainstream method for HLA typing and played a critical role in organ transplantation before the 1990s. However, most HLA antisera are polyclonal and often have cross-reactions, making it difficult to distinguish antigens with subtle structural differences and leading to misidentifications. Furthermore, many factors, such as a prolonged transportation time of the blood sample or an excessive amount of immature cells, may affect the results of serotyping and cellular typing. These are the limitations of traditional HLA typing methods. The development of the polymerase chain reaction (PCR) and its application in the biomedical sciences has made HLA typing at the DNA level possible. Therefore, molecular methods for typing HLA at the DNA level have gradually replaced serotyping and cellular typing. Commonly used DNA based HLA typing methods include PCR with sequence specific primers (PCR-SSP), as well as PCR based restriction fragment length polymorphism (PCR-RFLP), single-strand conformation polymorphism (PCR-SSCP), sequence-specific oligonucleotide hybridization (PCR-SSO) and single nucleotide polymorphism (PCR-SNP) analysis.

PCR-SSP (sequence specific primers)

To identify point mutations in a DNA molecule, Newton invented the amplification refractory mutation system (ARMS) for in vitro DNA amplification. The technique requires an allele sequence specific 3' primer for the PCR amplification; otherwise the PCR reaction will not be effective. This is because the Taq DNA polymerase used in the PCR reaction has 5' to 3' polymerase activity and 5' to 3' exonuclease activity but lacks 3' to 5' exonuclease activity; therefore, the enzyme cannot repair a single mismatched nucleotide at the 3' end of the primer. In order to amplify an allele with a specific sequence, a primer with the corresponding sequence is designed. The conditions of the PCR reaction are strictly controlled so that amplification of the fragment whose sequence perfectly matches the primer is much more effective than that of a sequence with one or more mismatched nucleotides; one mismatched nucleotide between the 3' primer and the template is sufficient to prevent amplification. The PCR product is further analyzed by electrophoresis to determine whether the amplicon corresponds to the anticipated primer-specific product.
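To make the ARMS decision rule concrete, the following toy sketch illustrates how a single 3'-terminal mismatch is assumed to abolish amplification. The sequences, allele names and the one-base critical window are invented for this illustration and are not taken from any real HLA primer set.

```python
# Toy illustration of the ARMS/PCR-SSP decision rule described above:
# amplification is assumed to succeed only if the 3'-terminal base(s) of the
# allele-specific primer perfectly match the template. All sequences and
# allele names here are invented for illustration.

def arms_amplifies(primer: str, template: str, critical_3prime: int = 1) -> bool:
    """Return True if the primer's 3' end matches the template region.

    `template` is the stretch the primer anneals to (written in the same
    strand sense as the primer, for simplicity). A mismatch within the last
    `critical_3prime` bases is assumed to abolish extension, because Taq
    polymerase lacks 3'->5' proofreading activity.
    """
    if len(primer) != len(template):
        return False
    return primer[-critical_3prime:] == template[-critical_3prime:]

# Two hypothetical alleles differing at one position (the SSP target site).
allele_1 = "ACGTGGACCTA"   # ends in ...A
allele_2 = "ACGTGGACCTG"   # ends in ...G

primer_for_allele_1 = "ACGTGGACCTA"  # 3' base A, specific for allele 1

for name, allele in [("allele 1", allele_1), ("allele 2", allele_2)]:
    result = "product" if arms_amplifies(primer_for_allele_1, allele) else "no product"
    print(f"{name}: {result}")
# -> allele 1: product
# -> allele 2: no product   (a single 3' mismatch prevents amplification)
```

In practice a panel of such allele-specific primers is run in parallel, and the presence or absence of each product is read from a gel, as described in the text.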
Since the DNA sequences of the HLA class I and class II genes are known, PCR primers can be designed based on the specific sequence of each allele for PCR-SSP genotyping. The coding sequences of the alleles of various HLA antigens can be amplified with sequence specific primers. By controlling the conditions of PCR, a specific primer can only amplify its corresponding allele, and not other alleles. Therefore, the presence of a PCR product can be used to determine the presence or absence of a specific allele. The specificity of the PCR product can be further determined by agarose gel electrophoresis. Fig 4 shows the principle of PCR-SSP. In the first step of the PCR reaction, double-stranded DNA is denatured into single-stranded DNA. In the second step, specific primers anneal to the template DNA. In the third step, double-stranded DNA is generated by Taq DNA polymerase by incorporating the 4 types of dNTP into the newly synthesized DNA strand. After 30-40 cycles of amplification, the target gene is amplified up to 10^8-fold. The main advantage of this method is that it is simple and fast, and the results are easy to interpret; heterozygosity can be easily detected as well. Therefore, PCR-SSP is currently the most widely used method for HLA typing. There are several FDA approved high-resolution and low-resolution detection kits available for HLA class I and class II typing. Many clinical laboratories in China have been using this method for accurate pre-transplantation HLA typing. The procedure of PCR-SSP is shown in Fig 5. One disadvantage of this method is that it requires multiple primers in order to amplify all relevant alleles.

PCR-RFLP (restriction fragment length polymorphism)

Restriction endonucleases have unique recognition sites. Using computer analysis, restriction endonucleases that can recognize HLA sequence polymorphisms are chosen to digest the PCR product. Because of sequence differences among the alleles, enzyme digestion will yield DNA fragments with unique patterns of length, which can be distinguished by electrophoresis. Compared to serotyping, the PCR-RFLP method is specific, simple and rapid, and does not require probes. It can accurately detect a single nucleotide difference and two linked polymorphic sites. The disadvantage of this method is that if the enzyme cannot completely digest the PCR product, DNA fragments with similar lengths may be difficult to distinguish after electrophoresis. In addition, the alleles need to have endonuclease recognition sites. Furthermore, PCR-RFLP cannot distinguish certain HLA heterozygosities, and it requires multiple endonucleases for alleles with high polymorphism, such as HLA-DRB1, which may produce complicated restriction maps. For these reasons, this method is now rarely used for HLA typing.

PCR-SSCP (single-strand conformation polymorphism)

Suzuki et al in Japan found that a single-stranded DNA fragment has a complex spatial conformation. The three-dimensional structure is generated by intramolecular interactions among the bases, so a change of one nucleotide may affect the spatial conformation of the DNA strand. Single-stranded DNA molecules have unique migration characteristics in polyacrylamide gels due to their molecular weights and three-dimensional structures; therefore, they can be separated by non-denaturing polyacrylamide gel electrophoresis (PAGE). This method is sensitive enough to distinguish molecules with subtle structural differences, and it is called single-strand conformation polymorphism (SSCP) analysis.
The authors later applied SSCP to the detection of mutations in PCR products and developed the PCR-SSCP technique, which further improved the sensitivity and simplicity of mutation detection. This method is simple, rapid and sensitive, requires no special equipment, and is suitable for clinical applications. However, this method can only detect mutations; the location and type of the mutation need to be determined by sequencing. In addition, the conditions of electrophoresis need to be tightly controlled. Furthermore, point mutations at certain locations may have little to no effect on the DNA conformation, so different DNA molecules may fail to separate by PAGE for these and other reasons. Nevertheless, this method has a relatively high detection sensitivity compared with other methods, and it can detect mutations at unknown locations in the DNA molecule. Takao has demonstrated that SSCP can detect 90% of single nucleotide mutations in DNA fragments smaller than 300 bp, and believes that most known single nucleotide mutations can be detected by this method. Mutant DNA molecules can be separated and purified by PAGE due to their different migration rates, and the mutation can eventually be identified by DNA sequencing. In SSCP analysis, the separation of single-stranded DNA by non-denaturing PAGE is based not just on molecular weight and electric charge, but also on the retention force caused by spatial conformation; therefore, the migration rate of a DNA fragment does not reflect its molecular size. Since the wild type and mutant DNA molecules may migrate very closely and be difficult to distinguish, it is generally required that DNA molecules migrate for more than 16-18 cm in the gel, and mobility is calibrated using reference DNA as an internal control. For these reasons, this method cannot unambiguously determine the HLA genotype.

PCR-SSO (sequence specific oligonucleotide)

In PCR-SSO, specific probes are synthesized according to the sequences in the HLA polymorphic region. The target DNA fragment is first amplified in vitro; then a specific probe is hybridized to the PCR product under defined conditions based on base complementarity. The hybridized product can be detected by radioactive or non-radioactive signals. There are two types of SSO method, direct hybridization and reverse hybridization: in direct hybridization the PCR product is fixed on the membrane, while in reverse hybridization the probe is fixed on the membrane. Figure 6 is a diagram of PCR-SSO. In 1986, Saiki et al were the first to report the analysis of DQA1 polymorphism using PCR and 4 allele-specific oligonucleotide (ASO) probes. Mickelson typed the DR loci by serotyping and PCR-SSO in 268 specimens; the success rate of serotyping was 91.0%, while the success rate of PCR-SSO was 97.0%. Overall, PCR-SSO has a high success rate, a wide source of reagents, and high specificity and resolution; it can detect a difference of a single nucleotide. In addition, PCR-SSO can be used for a large number of samples with accurate and reliable results. However, this method is time consuming: it often takes a few days and needs a large number of probes. In addition, it is difficult to detect heterozygous alleles, particularly those of the complicated HLA-DRB1 genes. Overall, PCR-SSO is an accurate HLA genotyping method and can identify all known HLA alleles for accurate analysis of HLA polymorphism. HLA is a super gene family, and new alleles are continually being identified.
SSO probes can only be designed based on the sequences of known alleles. Although PCR-SSO may discover new HLA polymorphisms through its hybridization pattern, dot hybridization often leads to false positives. In addition, when an allele is identified in a sample, it is difficult to determine whether the allele is homozygous or heterozygous. Therefore, HLA allele frequencies and haplotype frequencies cannot be precisely determined by this method.

PCR-SNP (single nucleotide polymorphism)

A single nucleotide polymorphism (SNP) is an inheritable and stable biallelic single nucleotide difference. In the human genome, every 1,000 base pairs contain one to ten SNPs. SNPs may have regulatory functions in gene expression and protein activity. A high SNP density has been found in the HLA class I genes, with one SNP in every 400 bp, setting the basis for high-throughput MHC-SNP analysis. Compared with other methods, SNP analysis is less time consuming and has a low cost. Gou et al developed a simple and effective oligonucleotide microarray to detect SNPs in the coding sequence of the HLA-B locus. Based on the known polymorphisms in exons 2 and 3 of the HLA-B genes, 137 specific probes were designed. In a double-blind experiment, these probes were used in the PCR-SNP analysis of 100 specimens from unrelated individuals. The results showed that this method could explicitly identify all SNPs in the HLA-B locus. Bu Ying et al established a rapid, efficient and cost effective SNP detection method using a single tube. In this method, 4 primers are used for the PCR amplification: two primers amplify the DNA fragment containing the SNP region, and the other two primers are SNP specific. The primer extension error is significantly reduced when the 4 primers simultaneously carry out the PCR reaction, thereby greatly improving the accuracy of SNP analysis. With the development of third-generation genetic markers, it is expected that a series of single nucleotide polymorphisms in the HLA complex will be found and that high-density SNP maps will be generated. In order to develop SNP technology into a simple and effective HLA typing method, the production of high-density SNP maps of the HLA regions and the development of HLA-SNP genotyping kits were proposed at the 13th IHWC conference.

Reference-strand-mediated conformation analysis (RSCA)

Arguello et al devised the double-strand conformation analysis (DSCA) technique in 1998 for the detection and analysis of gene mutations and complex polymorphic loci. Based on this technique, reference strand mediated conformation analysis (RSCA) was developed, a major technical breakthrough in HLA typing. This technique combines sequencing and conformational analysis to overcome the limitations of methods that employ only DNA sequencing or only conformational analysis. The concept behind RSCA is that a fluorescently labeled reference strand is hybridized with the amplified product of a specific gene to form stable double-stranded DNA with a unique conformation. After non-denaturing polyacrylamide gel electrophoresis or capillary electrophoresis, HLA alleles can be detected by laser scanning and computer software based analysis. Figure 7 shows the basic procedure of RSCA. Alleles with different sequences produce DNA duplexes with different spatial structures after hybridization with their fluorescently labeled probes. Even a one-nucleotide difference between two alleles will cause a change in the spatial structure of the hybridized duplex, resulting in an altered migration rate in electrophoresis.
Therefore, RSCA can distinguish alleles with a single nucleotide difference. For example, the HLA-A*0207 and A*0209 alleles differ by only one nucleotide at position 368 of exon 3: at this site, A*0207 has a G while A*0209 has an A. Likewise, HLA-A*0224 and A*0226 differ by only one nucleotide. All these alleles can be distinguished by RSCA. The advantages of RSCA include: (1) high resolution, as illustrated above; (2) high reproducibility, since in RSCA each lane in the non-denaturing polyacrylamide gel has markers and each gel has a DNA ladder, so alterations caused by different gels or lanes can be eliminated; (3) identification of new alleles or mutations, because RSCA is based on the electrophoretic mobility difference caused by the different spatial structures of the duplexes after allele-FLR hybridization, and new alleles or mutations will have an electrophoretic mobility different from that of known alleles; and (4) applicability at a large scale with a low cost. The disadvantages of RSCA are: (1) it is time-consuming for a single sample; (2) it requires high quality samples: PCR-SSP requires 10-100 ng/ml of DNA, which can be obtained with a regular DNA purification kit even from patients with a low white blood cell count, whereas RSCA requires 50-100 ng/ml of DNA and may require an increased amount of blood from patients with low white blood cell levels in order to obtain sufficient DNA; and (3) the reference database is still insufficient.

Pyrosequencing: a high-resolution method for HLA typing

Pyrosequencing is a new HLA genotyping technology based on real time sequencing during DNA synthesis. The reaction system contains 4 enzymes (DNA polymerase, ATP sulfurylase, luciferase and apyrase), a substrate (APS: adenosine 5' phosphosulfate), luciferin, primers and the single-stranded DNA template. After one type of dNTP (dATP, dTTP, dCTP or dGTP) is added to the reaction system, it is incorporated into the newly synthesized chain if it is complementary to the nucleotide on the template. Incorporation of the dNTP generates an equimolar amount of pyrophosphate (PPi). ATP sulfurylase converts APS and PPi into ATP, which provides energy for luciferase to oxidize luciferin and emit light. The amount of light is proportional to the amount of ATP. The optical signal is detected by a CCD (charge-coupled device) camera and generates a peak in the pyrogram. The principle of pyrosequencing is shown in Fig 8. The height of each peak is proportional to the number of nucleotides incorporated. Unincorporated dNTPs and excess ATP are converted to dNDPs, which are further converted to dNMPs by apyrase; the optical signal is quenched and the system is regenerated for the next reaction. The next dNTP can be added to the system to start the next reaction after the unincorporated dNTPs and excess ATP are removed. The reaction cycle continues until the complementary DNA strand is fully synthesized. At room temperature, it takes 3-4 seconds from polymerization to light detection. In this system, 1 pmol of DNA will generate about 6 × 10^11 molecules of ATP, which in turn yield about 6 × 10^9 photons with a wavelength of 560 nm; this signal is easily detected by a CCD camera. For the analysis of DNA with an unknown sequence by pyrosequencing, a cyclic nucleotide dispensation order (NDO) is used: dATP, dGTP, dTTP and dCTP are sequentially added to the reaction, and after one nucleotide is incorporated, the other three are degraded by the apyrase. For DNA with a known sequence, a non-cyclic NDO can be used and will yield a predicted pyrogram, as sketched below.
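The following minimal sketch, an illustration of the principle rather than of any commercial pyrosequencing software, predicts the pyrogram peaks for a known sequence under a chosen NDO. The target sequence and dispensation order are invented for this example.

```python
# A minimal sketch of how a pyrogram can be predicted for a known sequence
# and a chosen nucleotide dispensation order (NDO), as described above:
# each dispensation extends the strand through any homopolymeric run of the
# matching base, and the peak height is proportional to the number of
# nucleotides incorporated.

def predict_pyrogram(target: str, ndo: str):
    """Return (dispensed_base, peak_height) pairs for a known target.

    `target` is the sequence to be synthesized (i.e., the complement of the
    template), so a dispensed dNTP is incorporated when it equals the next
    target base.
    """
    pos = 0
    pyrogram = []
    for base in ndo:
        run = 0
        while pos < len(target) and target[pos] == base:
            run += 1
            pos += 1
        pyrogram.append((base, run))  # height 0 means no incorporation
        if pos == len(target):
            break
    return pyrogram

# Cyclic NDO for an "unknown" sequence; the homopolymer TT in the target
# gives a double-height peak, mirroring the homopolymer issue discussed below.
print(predict_pyrogram("GATTCA", ndo="ACGT" * 6))
```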
The sequence of the complementary DNA strand can then be determined from the NDO and the peak values in the pyrogram. Since nucleotides are differentially incorporated, pyrosequencing can produce high-resolution results. Typing HLA-DRB1*04, *07 and DRB4* in the donor's DRB genes by pyrosequencing not only yields the same result as the SSO typing kit, but does so at a higher resolution. Compared with SSP, SSO and direct or reverse hybridization, pyrosequencing can resolve ambiguous allele combinations of HLA-DQ and HLA-A/B in a short time, and the types of HLA-DQB1 and HLA-DRB alleles have been accurately determined by pyrosequencing. An inherent problem with this technology is de novo sequencing of the polymorphic region in heterozygous DNA, although polymorphisms can be detected in most cases. When the nucleotide in the polymorphic region is substituted, synchronized extension can be achieved by the addition of the substituted nucleotides. If there is a deletion or insertion in the polymorphic region and the deleted or inserted nucleotide is the same as the adjacent nucleotide on the template, the sequence after the polymorphic region will remain synchronized. However, if the deleted or inserted nucleotide is different from the adjacent nucleotide on the template, the sequencing reaction can fall out of phase, making the subsequent sequence analysis difficult. Another issue with this technology is the difficulty of determining the number of incorporated nucleotides in homopolymeric regions: the light signal becomes nonlinear after the incorporation of more than 5-6 nucleotides. Studies on the polymerization efficiency of homopolymeric regions have shown that it is possible to incorporate up to ten identical nucleotides in the presence of apyrase, but a specific signal-integration software algorithm is needed to determine the precise number of incorporated nucleotides. For re-sequencing, the nucleotide is added twice to ensure complete polymerization in the homopolymeric region. A further limitation of this technology is the achievable read length.

Application of flow cytometry in HLA typing

Flow cytometry had failed to become a main method for HLA typing since it was first applied to the field of immunology in 1977, mainly because of the large number of specific probes required for HLA typing. The flow analyzer LABScan100, which combines flow cytometry and reverse SSO technology, is poised to replace the three conventional methods, SSO, SSP and SBT (sequence-based typing, i.e., direct sequencing), in HLA typing. On a suspension platform, multiple types of color-coded beads conjugated with SSO probes specifically bind to the single-stranded DNA. Each type of bead has its unique spectral characteristics due to the different amounts of fluorescent dye conjugated to the beads. When beads pass through the flow cytometer, differences in the light scattering pattern at various angles can distinguish HLA genotypes. Currently, LabType™ SSO is a relatively mature technique for HLA typing compared with others. Its unique advantage is that thousands of molecules can be simultaneously analyzed in a matter of seconds; therefore, this technique can be used for large-scale analysis. Overall, this technique has several advantages, including increased accuracy due to the automated detection system and, because electrophoresis is not required, reduced environmental pollution and reduced potential harm to the staff.
Gene chip or DNA microarray

In a gene chip or DNA microarray, a large number of probe molecules (usually 6 × 10^4 molecules/cm^2) are attached to a solid surface, labeled DNA samples are hybridized to the probes, and the amount and sequence information of the target are determined from the intensity of the hybridization signal. Gene chip or DNA microarray technology was first developed by Affymetrix in the USA and has improved significantly within a few years. The technology is based on the principle of reverse dot hybridization. Thousands of oligonucleotide probes representing different genes are spotted on a solid surface by a robot. These probes bind to radioactive isotope or fluorescent dye labeled DNA or cDNA through complementary sequences. After autoradiography or fluorescence detection, signals are processed and analyzed by computer software. The intensity and distribution of the hybridization signal reflect the expression level of the gene in the sample. The operation process of a microarray is shown in Fig 9. Balazs et al spotted amplified DNA samples on silicon chips and compared the microarray results with PCR-SSO results in 768 specimens; microarray showed high sensitivity and specificity, and the concordance rate of genotyping results between microarray and PCR-SSO was 99.9%. The advantages of the technology include the following. High sensitivity: signals are amplified twice, first by PCR amplification of the template DNA and second by amplification of the fluorescence signal, so the sensitivity is greatly improved. High accuracy: the intensity of the fluorescent signal generated by perfect pairing of probe and sample is 5 to 35 times higher than the signal generated with one or two mismatched nucleotides, and accurate detection of fluorescent signal intensity is the basis of the detection specificity; studies have shown that the consistency between microarray and Sanger sequencing in the detection of mutations and polymorphisms is 99.9%. High efficiency: the whole process is highly automated, which saves manpower and time for data analysis, and genotyping of genes such as HLA-A, B, DR and DQ in multiple samples can be done with one PCR reaction and hybridization on one chip. A high level of standardization: using multi-point synchronized hybridization and automated analysis, human error is minimized to ensure specificity and objectivity. Low cost: since chip fabrication and signal detection are automatic, only small amounts of probes and samples are required, and one chip can be used for the analysis of samples from multiple individuals, which further reduces the cost. The biggest drawback of microarray analysis is the expensive equipment, which prevents it from becoming widely used; only institutions with a large program can afford it.

DNA sequencing technology

For the analysis of gene structure, sequencing is the most direct and accurate method. In this approach, the DNA fragment is amplified by PCR and then sequenced. The basic process of this method is shown in Figure 10. Since the entire nucleotide sequence of the amplified fragment is obtained, this is the most reliable and thorough genotyping method. It can not only identify the sequence and genotype, but also lead to the discovery of new genotypes. Currently, newly identified HLA alleles can only be verified by sequencing.
It has been reported that when the HLA type cannot be determined by serotyping, or when the results from PCR-SSP and PCR-SSO are inconsistent, sequence-based typing (SBT) can often yield accurate and reliable results at high resolution. Hurley et al typed HLA alleles by PCR-SBT in 1,775 bone marrow transplant patients and unrelated donors in the NMDP, USA. After examining the antigen matching results for HLA-A, HLA-B and HLA-DR, the study found that the degree of HLA allele mismatching between recipients and donors in bone marrow transplantation is much higher than previously thought. The advantage of SBT over PCR-SSP and PCR-SSO is its ability to analyze the entire gene sequence, including the non-polymorphic regions. SBT can be used not only for DNA sequencing but also for cDNA sequencing to determine gene expression. With the increasing availability of DNA sequencing technology, the PCR-SBT method has gained much attention for genotyping. PCR-SBT has advantages over other typing methods in terms of accuracy, efficiency and degree of automation. Specialized software and solid phase sequencing kits with automatic loading are available for HLA typing, and the cost of DNA sequencing has been greatly reduced. Therefore, PCR-SBT is an ideal method for HLA typing in research, and with a further decrease in the cost of automated sequencing this genotyping method will be widely used. Currently, PCR-SBT is the gold standard of HLA typing. The method has several advantages: (1) it can accurately determine the genotype by high-resolution sequencing, sufficient to meet research and clinical needs; (2) it can analyze more than 15,000 samples every month with high-throughput detection; (3) automated SOPs and an advanced data management system reduce human error; (4) it has high quality assurance, with ten percent blind samples used repeatedly as internal quality control, 100% accuracy achieved for 10 consecutive rounds of UCLA external quality assurance samples, and results confirmed by SSP; (5) it may lead to the discovery of new alleles; and (6) the HLA genotype can be updated by re-analyzing the sequence after the HLA database is updated.

Figure 10. The diagram of DNA sequencing (exons 2 and 3 of HLA-A, HLA-B and HLA-C).

Conclusion

Hematopoietic stem cell transplantation has become one of the most effective treatments for a variety of hematologic malignancies. However, graft-versus-host disease (GVHD) is still inevitable in some cases. This is mainly due to differences in the major histocompatibility complex (human leukocyte antigen, HLA) between the recipient and the donor. Other known and unknown factors that may cause GVHD include minor histocompatibility antigens (mHA) and tissue specific antigens. GVHD is the main cause of transplant failure in allogeneic transplantation and is therefore the most significant challenge in clinical allogeneic hematopoietic stem cell transplantation. It has been proven that whether the graft survives depends largely on the degree of HLA matching between the recipient and the donor. Therefore, HLA typing of the recipient and the donor before transplantation is particularly important. Currently, PCR-SSP genotyping is a commonly used method for HLA typing in clinical laboratories worldwide. As an SSP-based method, PCR-SSP depends on specific primers for genotyping.
Although the process is simple and rapid, high-resolution genotyping requires a large number of sequence specific primers, which leads to high cost and prolonged operation time. Similarly, the SSO technique is based on sequence-specific oligonucleotide probes, and high-resolution genotyping by SSO significantly increases the cost and complexity; therefore, it is now rarely used for HLA typing. PCR-SNP is a simple and fast method with high resolution, and it is expected to become more popular in HLA typing as the technology continues to improve. Although RSCA and pyrosequencing can achieve high-resolution results, their applications in HLA typing will gradually be phased out as gene chip and sequencing technologies continue to improve and their costs continue to decrease. HLA-chip genotyping is still largely dependent on known sequences and cannot identify new alleles with unknown sequences. At this moment, PCR-SBT technology has significant advantages over other HLA typing methods in terms of accuracy, efficiency and automation. Specialized software and automatically loaded sequencing reagents are available for HLA typing by PCR-SBT, and the operating cost has been greatly reduced. In conclusion, PCR-SBT technology combined with the HLA-chip is the best method for HLA typing in research. With the reduction in the cost of automated nucleic acid sequencing, this genotyping method will be widely used in the field of basic research as well as in clinical transplantation.

Author details

Sun Yuying and Xi Yongzhi*

*Address all correspondence to<EMAIL_ADDRESS>

Department of Immunology and National Center for Biomedicine Analysis, Beijing 307 Hospital Affiliated to Academy of Medical Sciences, Beijing, P. R. China
Calculation of the viscosity of dispersions of nanoparticles with a polymer adsorption layer in a melt

Abstract. As a result of a theoretical analysis of the influence of the chemical structure of the polymer and of the chemically modified surface of nanoparticles, it is shown that it is possible to predict the dependence of the viscosity of the polymer melt on the concentration of nanoparticles, their size, and the molecular weight of the polymer. Two situations have been analyzed: one where a strong intermolecular interaction between polymer chains and polar groups located on the surface of nanoparticles is absent, and one where such a strong intermolecular interaction takes place.

Currently, there is a variety of models and derived equations that relate the viscosity to the volume fraction of dispersed particles. One of the equations describing the viscosity of a dispersion of spherical particles is Einstein's equation:

η = η₀(1 + 2.5φ),   (1)

where η is the viscosity of the suspension, η₀ is the viscosity of the initial fluid, φ is the volume fraction of the dispersed phase, and 2.5 is a parameter applicable only to spherical particles. Another popular equation for the concentration dependence of the viscosity of dispersions was proposed by Mooney [1]:

η = η₀ exp[2.5φ/(1 − φ/φ*)],   (2)

where φ* is the so-called critical concentration (the limiting value of the degree of filling by solid particles). This value depends on the particle shape and the packing method, i.e., on the packing coefficient; for example, φ* = 0.74 in the case of hexagonal packing of solid spherical particles. Among other popular models and the corresponding equations, we note the model of Krieger and Dougherty [2], which in its standard form reads

η = η₀(1 − φ/φ*)^(−Kφ*),   (3)

where K takes values from 2.54 to 3.71. Studies of concentrated suspensions [4,5] belong to R. Pal.
In the works cited, there is a large amount of experimental data that can be used to test the functionality of various models describing the dependence of the relative viscosity of suspensions over a wide concentration range. Let us look at some modern models. In [6], the role of normal stresses in curvilinear flows of suspensions is considered, and an equation is derived for describing the dependence of the relative viscosity on the volume fraction of the dispersed phase. In [7], the nature of the divergence of the low-shear viscosity of colloidal dispersions of hard spheres is discussed. The reviews [8,9] provide an analysis of the viscosity of concentrated suspensions of hard nano-sized particles and of bimodal spheres, and a corresponding relation is proposed there. The work [10] considers the effective viscosity of concentrated suspensions of soft particles under static and high-frequency conditions and arrives at its own relation. In the last few years, numerous models describing the relative viscosity of suspensions have been considered [11,12]; in [11], a generalized equation is proposed and validated for suspensions subjected to steady flow at low Reynolds numbers, with a specific form for suspensions containing solid particles.

All the considered models describe the dependence of the relative viscosity of suspensions on the volume fraction of the dispersed particles over different concentration intervals. However, no model allows one to calculate the relative viscosity of a suspension in which the dispersed particles are stabilized by polymer coatings grafted onto the surface of metal nanoparticles. Here two options should be considered: in one, the dispersed particles are inert; in the other, they have a strong intermolecular interaction with the molecules of the liquid. In this regard, the aim of this paper is to develop a calculation scheme that allows prediction of the dependence of the viscosity on the volume fraction of nanoparticles stabilized by the polymer. The calculation scheme takes into account the chemical structure of the grafted polymer. Two tasks are solved: 1) the direct problem, which allows calculating the value of the relative viscosity based on the chemical structure of the polymer and its concentration; such a calculation can be made using any of the previously proposed equations that include the volume fraction of dispersed particles; 2) the inverse problem, which allows one, on the basis of measurements of the relative viscosity, to estimate the size of the polymer-stabilized nanoparticles. Here we consider single-layer and multilayer polymer coatings of metallic nanoparticles.
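For reference, the classical relations (1)-(3) quoted above can be evaluated with a few lines of code. This is a minimal sketch in which the parameter values (φ*, K and the sample volume fractions) are illustrative choices, not values from the paper.

```python
import math

# A small sketch evaluating the three classical relative-viscosity models
# quoted above (standard forms; parameter values are illustrative).

def einstein(phi):
    # Eq. (1): eta/eta0 = 1 + 2.5*phi, valid for dilute suspensions of spheres
    return 1.0 + 2.5 * phi

def mooney(phi, phi_star=0.74):
    # Eq. (2): eta/eta0 = exp(2.5*phi / (1 - phi/phi_star))
    return math.exp(2.5 * phi / (1.0 - phi / phi_star))

def krieger_dougherty(phi, phi_star=0.74, K=2.54):
    # Eq. (3): eta/eta0 = (1 - phi/phi_star)**(-K*phi_star), K ~ 2.54-3.71
    return (1.0 - phi / phi_star) ** (-K * phi_star)

for phi in (0.05, 0.10, 0.20, 0.40):
    print(f"phi={phi:.2f}  Einstein={einstein(phi):6.3f}  "
          f"Mooney={mooney(phi):7.3f}  K-D={krieger_dougherty(phi):7.3f}")
```

At low volume fractions all three models nearly coincide, while near φ* the Mooney and Krieger-Dougherty relations diverge much more strongly than the linear Einstein law.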
For the analysis, the classical Einstein (1) and Mooney (2) equations have been used. We note at once that equation (1), in the same way as equation (2), becomes inapplicable in its pure form when, as a result of strong intermolecular interactions between the molecules of the liquid and the solid particles, a stable adsorption layer is formed. Consider the calculation of the dependence of the viscosity on the weight fraction of nanoparticles when there is a strong intermolecular interaction between the polar groups present on the surface of the nanoparticles and the molecules of the melt. In this case, an adsorption layer is formed around the nanoparticles.

The surface S_np of one nanoparticle is equal to

S_np = 4πR_np²,   (9)

where R_np is the radius of the nanoparticle. The volume of one nanoparticle is equal to

v_np = (4/3)πR_np³.   (10)

The number of nanoparticles in the system is

N_np = c_np/g_np,   (11)

where c_np is the weight of nanoparticles and g_np is the weight of one nanoparticle. The weight of one nanoparticle is equal to

g_np = ρ_np v_np,   (12)

where ρ_np is the density of the nanoparticle and v_np is the volume of the nanoparticle. Consider 1 g of a system containing the polymer melt and nanoparticles. In this composition, c_np is the weight of the nanoparticles and 1 − c_np is the weight of the polymer. Then the number of repeating units of the polymer in the composition, N_c, is equal to

N_c = (1 − c_np)N_A/M_0,   (13)

where M_0 is the molecular weight of the repeating unit and N_A is Avogadro's number. The number of repeating units per macromolecule is equal to

N = M/M_0,   (14)

where M is the molecular weight of the polymer. If the structure of the melt is globular, the volume of a globule is equal to

v_gl = N ΣΔV_i/k,   (15)

where ΣΔV_i is the van der Waals volume of the polymer repeating unit and k is the coefficient of molecular packing; typically for polymer melts k = 0.64. The value of v_gl can also be written as

v_gl = M ΣΔV_i/(k M_0).   (16)

The radius of the globules can be calculated using the following formula:

R_gl = (3v_gl/4π)^(1/3).   (17)

The area occupied by one globule on the surface of the nanoparticle is equal to

s_gl = πR_gl².   (18)

The total surface of one nanoparticle is calculated using formula (9). The limiting number of globules that can be distributed on the surface of one nanoparticle is determined by the following ratio:

n_gl = S_np/s_gl.   (19)

Let us consider an example. For polyamide-6 (PA-6), the van der Waals volume of the repeating unit is equal to 116 Å3 and M_0 = 113. If the radius of the nanoparticle is equal to 50 Å (5 nm) and the molecular weight of the polymer is M = 14000, the value of n_gl calculated by formula (19) equals 32.6. It is obvious that the radius of one nanoparticle together with the adsorption layer is equal to

R_np+ad.l = R_np + 2R_gl,   (20)

and the volume of a single nanoparticle with the adsorption layer is

v_np+ad.l = (4/3)π(R_np + 2R_gl)³.   (21)

For nanoparticles that do not contain adsorption layers, their volume fraction can be calculated by the following formula:

φ = N_np v_np/(N_np v_np + V_p),   (22)

where V_p is the volume of polymer in the composition. For nanoparticles containing adsorption layers, their volume fraction can be calculated using the following formula:

φ_np+ad.l = N_np(v_np + V_ad.l)/(N_np v_np + G_p/ρ_p),   (23)

where G_p is the weight of the polymer in the composition and ρ_p is the density of the polymer. If we analyze 1 g of the composition, G_p = 1 − c_np. The density of the polymer is calculated by the following formula [13]:

ρ_p = k M_0/(N_A ΣΔV_i).   (24)

Thus, we obtain

V_p = G_p/ρ_p,   (25)

i.e.,

V_p = (1 − c_np)N_A ΣΔV_i/(k M_0).   (26)

The volume of the adsorption layer V_ad.l (the volume of the adsorption layer per one nanoparticle) is

V_ad.l = n_gl v_gl.   (27)

Substituting relations (15) and (19) into formula (27), we get:

V_ad.l = (4R_np²/R_gl²) M ΣΔV_i/(k M_0).   (28)

Substituting formulas (11), (12), (26) and (28) into relation (23), we can write the relation for the volume fraction of nanoparticles containing adsorption layers; the resulting expression is denoted as formula (29). The initial polymer is polyamide-6. The parameters of the polymer system are given in Table 1.
Table 1. Parameters for the system "PA-6 + nanoparticles".

Molecular weight of the repeating unit, M_0: 113
Molecular weight of the polymer, M: 14000
Van der Waals volume of the repeating unit of the polymer, ΣΔV_i: 116 Å3
Coefficient of molecular packing for the melt, k: 0.64
Weight of the nanoparticles contained in 1 g of composition, c_np: 0.1 g
Radius of the nanoparticle, R_np: 50 Å
Density of the nanoparticle, ρ_np: 1.54 g/cm3
Zero-shear viscosity of PA-6 at 463 K, η₀: 15.2 N·s/cm2

If the adsorption layer is absent, the volume fraction of nanoparticles is calculated using the following formula:

φ = (c_np/ρ_np)/(c_np/ρ_np + (1 − c_np)/ρ_p).   (30)

Substituting all the parameters available in Table 1 into formula (30), we obtain φ = 0.0695. If an adsorption layer is formed on the surface of the particles, then using all the parameters from Table 1 and formula (29) we have φ_np+ad.l = 0.2226.

Now let us determine the dependence of the viscosity on the weight fraction of nanoparticles. The type of this dependence is determined by the possibility of the formation of a polymeric adsorption layer on the surface of the nanoparticles. 1 - If there is no strong intermolecular interaction between the polymer chains and the polar groups on the surface of the nanoparticles, the dependence is obtained by substituting all the system parameters from Table 1, together with the volume fraction φ from formula (30), into the Einstein equation (1). 2 - When a strong intermolecular interaction is manifested, the volume fraction φ_np+ad.l from formula (29) is substituted instead. Next, the same dependencies are calculated using the Mooney equation (2) with φ* = 0.74 (hexagonal packing), again using formula (30) when the strong intermolecular interaction is absent and formula (29) when it is present. The dependence of the viscosity on the concentration of nanoparticles calculated by the Einstein equation (1) is shown in Figure 1; the dependencies calculated by the Mooney equation (2) are shown in Figure 2.

Fig. 1. Dependencies of the viscosity on the weight of nanoparticles in 1 g of a composite containing PA-6 and nanoparticles: 1 - there is no strong intermolecular interaction between polymer chains and polar groups located on the surface of nanoparticles; 2 - strong intermolecular interaction between polymer chains and polar groups located on the surface of nanoparticles takes place. The calculations were performed using the Einstein equation (1).

Fig. 2. Dependencies of the viscosity on the weight of nanoparticles in 1 g of a composite containing PA-6 and nanoparticles: 1 - there is no strong intermolecular interaction; 2 - strong intermolecular interaction takes place. The calculations were performed using the Mooney equation (2).

Now let us calculate the dependencies of the viscosity on the radius of the nanoparticles. When the adsorption layer is absent, the viscosity is independent of the size of the nanoparticles. If the adsorption layer is formed, the relations describing the dependence of the viscosity on the radius of the nanoparticles follow from equations (1) and (2) with the volume fraction φ_np+ad.l(R_np) given by formula (29), taking into account the values of all the parameters available in Table 1; these relations are denoted (35) and (36). The dependencies of the viscosity on the radius of the nanoparticles, described by relations (35) and (36), are shown in Figure 3. It can be seen that, in agreement with the Mooney equation, the viscosity increases sharply when the size of the nanoparticles decreases.

Fig. 3. Dependencies of the viscosity on the radius of nanoparticles.
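The worked example above can be checked numerically. The short script below is a sketch using the Table 1 values; the formulas follow the reconstruction given in the text rather than the authors' original computation, and it reproduces n_gl ≈ 32.6 and φ ≈ 0.0695.

```python
import math

# A sketch reproducing the worked example for the PA-6 nanocomposite
# (values from Table 1). The formulas follow the reconstruction in the
# text and should be treated as an illustration.

N_A = 6.022e23
M0, M = 113.0, 14000.0   # molecular weights of the repeating unit / polymer
dV = 116.0               # van der Waals volume of the repeating unit, A^3
k = 0.64                 # coefficient of molecular packing (melt)
c_np = 0.1               # weight of nanoparticles in 1 g of composite, g
R_np = 50.0              # nanoparticle radius, A
rho_np = 1.54            # nanoparticle density, g/cm^3

# Globule size and the number of globules per nanoparticle, formulas (15)-(19)
v_gl = (M / M0) * dV / k                          # globule volume, A^3
R_gl = (3.0 * v_gl / (4.0 * math.pi)) ** (1 / 3)  # globule radius, A
n_gl = (4.0 * math.pi * R_np**2) / (math.pi * R_gl**2)
print(f"R_gl = {R_gl:.1f} A, n_gl = {n_gl:.1f}")  # -> n_gl ~ 32.6

# Volume fraction without an adsorption layer, formulas (24) and (30)
rho_p = k * M0 / (N_A * dV * 1e-24)               # polymer density, g/cm^3
V_np = c_np / rho_np                              # nanoparticle volume in 1 g, cm^3
V_p = (1.0 - c_np) / rho_p                        # polymer volume in 1 g, cm^3
phi = V_np / (V_np + V_p)
print(f"rho_p = {rho_p:.3f} g/cm^3, phi = {phi:.4f}")  # -> phi ~ 0.0695
```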
An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

The adaptive multi-rate wideband (AMR-WB) speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ) of immittance spectral frequency (ISF) coefficients takes a considerable computational load in AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ) algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. This search algorithm is built on a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches need to be executed. In terms of overall search performance, this work is experimentally validated as a superior search algorithm relative to a multiple triangular inequality elimination (MTIE) approach, a TIE with dynamic and intersection mechanisms (DI-TIE), and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS) approach. With a full search algorithm as a benchmark for overall search load comparison, this work provides an 87% search load reduction at a threshold of quantization accuracy of 0.96, a figure far beyond the 55% of MTIE, 76% of the EEENNS approach, and 83% of the DI-TIE approach.

Introduction

With a 16 kHz sampling rate, the adaptive multi-rate wideband (AMR-WB) speech codec [1-4] is one of the speech codecs applied to modern mobile communication systems as a way to remarkably improve the speech quality on handheld devices. AMR-WB is a speech codec developed on the basis of an algebraic code-excited linear-prediction (ACELP) coding technique [4,5], and it provides nine coding modes with bitrates of 23.85, 23.05, 19.85, 18.25, 15.85, 14.25, 12.65, 8.85, and 6.6 kbps. The ACELP-based technique is an excellent speech coding technique with the double advantage of low bit rates and high speech quality, but the price paid is the high computational complexity required in an AMR-WB codec. Using an AMR-WB speech codec, the speech quality of a smartphone can be improved, but at the cost of high battery power consumption.
As in [11], a TIE algorithm was proposed to address the computational load issue in VQ-based image coding. Improved versions of the TIE approach are presented in [12,13]; they reduce the scope of the search space, giving rise to a further reduction in the computational load. However, there exists a high correlation between the ISF coefficients of neighboring frames in AMR-WB, that is, the ISF coefficients evolve smoothly over successive frames. This feature benefits TIE-based VQ encoding, for which a considerable computational load reduction has been demonstrated. Yet a moving average (MA) filter is employed to smooth the data in advance of the VQ encoding of the ISF coefficients, which means that the high-correlation feature is gone, resulting in poor performance in computational load reduction. Recently, a TIE algorithm equipped with dynamic and intersection mechanisms, named DI-TIE, was proposed to effectively simplify the search load, and this algorithm has been validated as the best candidate among the TIE-based approaches so far. On the other hand, an EEENNS algorithm was derived from the equal-average nearest neighbor search (ENNS) and equal-average equal-variance nearest neighbor search (EENNS) approaches [15-19]. In contrast to TIE-based approaches, the EEENNS algorithm uses three significant features of a vector, i.e., mean, variance, and norm, as a three-level elimination criterion to reject impossible codewords.

Furthermore, a binary search space-structured VQ (BSS-VQ) is presented in [20] as a simple as well as efficient way to quantize line spectral frequency (LSF) coefficients in the ITU-T G.729 speech codec [5]. This algorithm demonstrated that a significant computational load reduction can be achieved while speech quality is well maintained. In view of this, this paper applies the BSS-VQ search algorithm to ISF coefficient quantization in AMR-WB. This work aims to verify whether the performance superiority of the BSS-VQ algorithm remains, given that the VQ structure in AMR-WB is different from that in G.729. Another major motivation is to meet the energy saving requirement of handheld devices, e.g., smartphones, for an extended operation time.

The rest of this paper is outlined as follows. Section 2 gives a description of ISF coefficient quantization in AMR-WB. The BSS-VQ algorithm for ISF quantization is presented in Section 3. Section 4 demonstrates experimental results and discussions. This work is summarized at the end of this paper.

ISF Coefficients Quantization in AMR-WB

In the quantization process of AMR-WB [1], a speech frame of 20 ms is first used to evaluate linear predictive coefficients (LPCs), which are then converted into ISF coefficients. Subsequently, quantized ISF coefficients are obtained following a VQ encoding process, which is detailed below.
Linear Prediction Analysis

In the linear prediction, the Levinson-Durbin algorithm is used to compute the 16th order LPCs a_i of the linear prediction filter [1], defined as:

A(z) = 1 + Σ_{i=1}^{16} a_i z^{-i}.   (1)

Subsequently, the LPC parameters are converted into immittance spectral pair (ISP) coefficients for the purposes of parametric quantization and interpolation. The ISP coefficients are defined as the roots of the following two polynomials:

F1(z) = A(z) + z^{-16} A(z^{-1}),   (2)
F2(z) = A(z) − z^{-16} A(z^{-1}),   (3)

where F1(z) and F2(z) are symmetric and antisymmetric polynomials, respectively. It can be proven that all the roots of these two polynomials lie on, and alternate successively around, the unit circle in the z-domain. Additionally, F2(z) has two roots at z = 1 (ω = 0) and z = −1 (ω = π). These two roots are eliminated by introducing the following polynomials, with eight and seven conjugate roots on the unit circle, respectively:

F1(z) = (1 + a[16]) Π_{i=0,2,...,14} (1 − 2q_i z^{-1} + z^{-2}),   (4)
F2(z) = (1 − a[16]) Π_{i=1,3,...,13} (1 − 2q_i z^{-1} + z^{-2}),   (5)

where the coefficients q_i are referred to as the ISPs in the cosine domain and a[16] is the last predictor coefficient. A Chebyshev polynomial is used to solve Equations (4) and (5). Finally, the 16th order ISF coefficients ω_i can be obtained by taking the transformation ω_i = arccos(q_i).

Quantization of ISF Coefficients

Before the quantization process, mean removal and first order MA filtering are performed on the ISF coefficients to obtain a residual ISF vector [1], that is:

r(n) = z(n) − p(n),   (6)

where z(n) and p(n) respectively denote the mean-removed ISF vector and the predicted ISF vector at frame n; the latter is obtained by first order MA prediction, defined as:

p(n) = (1/3) r̂(n − 1),   (7)

where r̂(n − 1) is the quantized residual vector at the previous frame. Subsequently, S-MSVQ is performed on r(n). As presented in Tables 1 and 2, the S-MSVQ is categorized into two types in terms of the bit rate of the coding modes. In Stage 1, r(n) is split into two subvectors, namely a 9-dimensional subvector r1(n) associated with codebook CB1 and a 7-dimensional subvector r2(n) associated with codebook CB2, for VQ encoding. As a preliminary step of Stage 2, the quantization error vectors are split into three subvectors for the 6.60 kbps mode, or five for the modes with bitrates between 8.85 and 23.85 kbps, symbolized as r^(2). For instance, r^(2)_{1,1-3} in Table 1 represents the subvector split from the 1st to the 3rd components of r1, on which VQ encoding is then performed over codebook CB11 in Stage 2. Likewise, r^(2)_{2,4-7} stands for the subvector split from the 4th to the 7th components of r2, after which VQ encoding is performed over codebook CB22 in Stage 2. Finally, a squared error ISF distortion measure, that is, the Euclidean distance, is used in all quantization processes.

BSS-VQ Algorithm for ISF Quantization

The basis of the BSS-VQ algorithm is that an input vector is efficiently assigned to a subspace, where a small number of codeword searches is carried out, using a combination of a fast locating technique and lookup tables as a prerequisite of the VQ codebook search. In this manner, a significant computational load reduction can be achieved.
At the start of this algorithm, each dimension is dichotomized into two subspaces, and an input vector is then assigned to a corresponding subspace according to the entries of the input vector. This idea is illustrated in the following example. There are 2^9 = 512 subspaces for the 9-dimensional subvector r1(n) associated with codebook CB1, and an input vector can then be assigned to one of the 512 subspaces by means of a dichotomy according to each entry of the input vector. Finally, VQ encoding is performed using a prebuilt lookup table containing statistical information on the sought codewords. In this proposal, the lookup table in each subspace is pre-built in a way that requires a large amount of training data. The training and encoding procedures of BSS-VQ are illustrated with the example of the 9-dimensional codebook CB1 with 256 entries in AMR-WB.

BSS Generation with Dichotomy Splitting

As a preliminary step of the training procedure, each dimension is dichotomized into two subspaces, and the dichotomy position is defined as the mean over all the codewords contained in the codebook, formulated as:

dp(j) = (1/CSize) Σ_{i=1}^{CSize} c_i(j),   (8)

where c_i(j) represents the jth component of the ith codeword c_i and dp(j) is the mean value of all the jth components. Taking codebook CB1 as an instance, CSize = 256 and Dim = 9. As listed in Table 3, all the dp(j) values are saved and presented in tabular form. Subsequently, for the vector quantization of the nth input vector x_n, a quantity ν_n(j) is defined as:

ν_n(j) = 2^j if x_n(j) > dp(j), and ν_n(j) = 0 otherwise,   (9)

where x_n(j) denotes the jth component of x_n. Then x_n is assigned to subspace k (bss_k), with k given as the sum of ν_n(j) over all dimensions:

k = Σ_{j=0}^{Dim−1} ν_n(j).   (10)

In this study, 0 ≤ k < BSize, and BSize = 2^9 = 512 represents the total number of subspaces. Taking the input vector x_n = {20.0, 20.1, 20.2, 20.3, 20.4, 20.5, 20.6, 20.7, 20.8} as an instance, ν_n(j) = {2^0, 2^1, 2^2, 0, 0, 0, 0, 0, 2^8} for each j, 0 ≤ j ≤ 8, and k = 263 is obtained by Equations (9) and (10) respectively. Thus, the input vector x_n is assigned to the subspace bss_k with k = 263. By means of Equations (9) and (10), it is noted that this algorithm requires only a small number of basic operations, i.e., comparison, shift and addition, so an input vector is assigned to a subspace in a highly efficient manner.

Training Procedure of BSS-VQ

Following the determination of the dichotomy position for each dimension, a training procedure is performed to build a lookup table in each subspace. The lookup tables give the probability that each codeword serves as the best-matched codeword in each subspace, referred to as the hit probability of a codeword in a subspace for short. The training procedure is stated below as Algorithm 1. With more than 1.56 GB of memory, a duration longer than 876 min and a total of 2,630,045 speech frames, a large speech database covering a diversity of contents and multiple speakers is employed as the training data.

Algorithm 1: Training procedure of BSS-VQ

Step 1. Initial setting: assign each codeword to all the subspaces, and initialize the probability P_hit(c_i|bss_k) that codeword c_i corresponds to the best-matched codeword in bss_k.

Step 2. Referencing Table 3 and through Equations (9) and (10), an input vector can be efficiently assigned to bss_k.

Step 3. A full search is conducted over all the codewords according to the Euclidean distance, given as:

D(x_n, c_i) = Σ_j (x_n(j) − c_i(j))²,   (11)

and the optimal codeword c_opt satisfies:

c_opt = arg min_{c_i} D(x_n, c_i).   (12)

Step 4. Update the statistics on the optimal codeword, that is, P_hit(c_opt|bss_k).

Step 5. Repeat Steps 2-4 until the training has been performed on all the input vectors.
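A compact sketch of the subspace assignment of Equations (8)-(10) is given below. The dp(j) values would normally come from Equation (8) applied to the 256-entry codebook (Table 3); here toy thresholds are chosen so that the worked example's outcome (k = 263) is reproduced, and the strict '>' comparison is an assumption of this sketch, since the tie-breaking direction is not spelled out here.

```python
import numpy as np

# A compact sketch of the BSS subspace assignment, Eqs. (8)-(10).
# dp(j) would normally be the per-dimension mean of the codewords
# (Eq. (8), Table 3); the toy thresholds below are chosen so that the
# worked example above (k = 263) is reproduced.

def subspace_index(x: np.ndarray, dp: np.ndarray) -> int:
    # Eq. (9): nu(j) = 2**j if x(j) > dp(j), else 0 (direction assumed)
    # Eq. (10): k = sum_j nu(j), computed with comparison/shift/add only
    k = 0
    for j in range(x.size):
        if x[j] > dp[j]:
            k += 1 << j
    return k

# Toy dp table: dimensions 0, 1, 2 and 8 exceed their thresholds.
dp = np.array([19.0, 19.0, 19.0, 99.0, 99.0, 99.0, 99.0, 99.0, 19.0])
x = np.array([20.0, 20.1, 20.2, 20.3, 20.4, 20.5, 20.6, 20.7, 20.8])
print(subspace_index(x, dp))   # -> 263 = 2**0 + 2**1 + 2**2 + 2**8
```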
Training Procedure of BSS-VQ

Following the determination of the dichotomy position for each dimension, a training procedure is performed to build a lookup table in each subspace. The lookup tables give the probability that each codeword serves as the best-matched codeword in each subspace, referred to for short as the hit probability of a codeword in a subspace.

The training procedure is stated below as Algorithm 1. With more than 1.56 GB of memory, a duration longer than 876 min, and a total of 2,630,045 speech frames, a large speech database covering a diversity of contents and multiple speakers is employed as the training data.

Algorithm 1: Training procedure of BSS-VQ
Step 1. Initial setting: assign each codeword to all the subspaces, and then set the probability P_hit(c_i|bss_k) that the codeword c_i corresponds to the best-matched codeword in bss_k to zero.
Step 2. Referencing Table 3 and through Equations (9) and (10), an input vector is efficiently assigned to bss_k.
Step 3. A full search is conducted on all the codewords according to the Euclidean distance, given as:

d(x_n, c_i) = Σ_{j} (x_n(j) − c_i(j))^2    (11)

and an optimal codeword c_opt satisfies:

c_opt = argmin_{c_i} d(x_n, c_i)    (12)

Step 4. Update the statistics on the optimal codeword, that is, P_hit(c_opt|bss_k).
Step 5. Repeat Steps 2-4 until the training has been performed on all the input vectors.

A lookup table is built for each subspace following the completion of the training procedure. The lookup table gives the hit probability of each codeword in a subspace. For sorting purposes, a quantity P_hit(m|bss_k), 1 ≤ m ≤ CSize, is defined as the mth-ranked probability that a codeword hits the best-matched codeword in subspace bss_k. Taking m = 1 as an instance, P_hit(1|bss_k) corresponds to the codeword with the highest hit probability in bss_k. As a result, the lookup table in each subspace gives the ranked hit probability in descending order together with the corresponding codeword.

Encoding Procedure of BSS-VQ

In the encoding procedure of BSS-VQ, the cumulative probability P_cum(M|bss_k) is first defined as the sum of the top M values of P_hit(m|bss_k) in bss_k, that is:

P_cum(M|bss_k) = Σ_{m=1}^{M} P_hit(m|bss_k)    (13)

Subsequently, given a threshold of quantization accuracy (TQA), a quantity M_k(TQA) represents the minimum value of M that satisfies the condition P_cum(M|bss_k) ≥ TQA in bss_k, that is:

M_k(TQA) = min{ M : P_cum(M|bss_k) ≥ TQA }    (14)

For a given TQA, a total of 512 values M_k(TQA) are evaluated by Equation (14) over all the subspaces, and the mean value is then given as:

M̄(TQA) = (1/BSize) Σ_{k=0}^{BSize−1} M_k(TQA)    (15)

Illustrated in Figure 1 is a plot of the average number of searches M̄(TQA) for values of TQA ranging between 0.90 and 0.99. Taking TQA = 0.95 as an instance, a mere average of 14.58 codeword searches is required to reach a search accuracy as high as 95%. In simple terms, the search load can be significantly reduced at the cost of a small drop in search accuracy. Furthermore, a trade-off can be made instantly between the quantization accuracy and the search load according to Figure 1. Hence, the BSS-VQ encoding procedure is described below as Algorithm 2.

Algorithm 2: Encoding procedure of BSS-VQ
Step 1. Given a TQA, M_k(TQA) satisfying Equation (14) is found directly in the lookup table in bss_k.
Step 2. Referencing Table 3 and by means of Equations (9) and (10), an input vector is assigned to a subspace bss_k in an efficient manner.
Step 3. A full search for the best-matched codeword is performed on the top M_k(TQA) sorted codewords in bss_k, and the output is the index of the found codeword.
Step 4. Repeat Steps 2 and 3 until all the input vectors are encoded.

The BSS-VQ algorithm is briefly summarized as follows. Table 3 is the outcome of performing Equation (8) and is saved as the first lookup table. Subsequently, the second lookup table, concerning P_hit(m|bss_k) and the corresponding codeword, is built for each subspace according to the training procedure. Accordingly, VQ encoding can be performed using Algorithm 2.
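Putting Algorithms 1 and 2 together, a minimal Python sketch of the training and encoding procedures might look as follows; it reuses subspace_index from the sketch above, and the data structures are illustrative rather than the reference implementation.

```python
import numpy as np

def build_hit_tables(training_vectors, codebook, dp, bsize=512):
    """Algorithm 1: estimate the hit probability of every codeword per subspace."""
    counts = np.zeros((bsize, len(codebook)))
    for x in training_vectors:
        k = subspace_index(x, dp)                 # Eqs. (9)-(10)
        d = ((codebook - x) ** 2).sum(axis=1)     # Eq. (11), Euclidean distance
        counts[k, np.argmin(d)] += 1              # Eq. (12) + Step 4 update
    totals = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    order = np.argsort(-probs, axis=1)            # descending hit probability
    return order, np.take_along_axis(probs, order, axis=1)

def bssvq_encode(x, codebook, dp, order, probs, tqa=0.95):
    """Algorithm 2: search only the top M_k(TQA) codewords of the subspace."""
    k = subspace_index(x, dp)
    m_k = int(np.searchsorted(np.cumsum(probs[k]), tqa) + 1)  # Eqs. (13)-(14)
    candidates = order[k, :m_k]
    d = ((codebook[candidates] - x) ** 2).sum(axis=1)
    return int(candidates[np.argmin(d)])          # index of the best codeword
```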
Experimental Results

Three experiments are conducted in this work. The first is a search load comparison among various search approaches. The second is a quantization accuracy (QA) comparison between a full search and the other search approaches. The third is a performance comparison among the various approaches in terms of ITU-T P.862 perceptual evaluation of speech quality (PESQ) [21] as an objective measure of speech quality. A speech database, completely different from all the training data, is employed for outside testing purposes. With one male and one female speaker, the speech database in total takes up more than 221 MB of memory, occupies more than 120 min, and covers 363,281 speech frames.
Firstly, Table 4 lists a comparison of the average number of searches among the full search, multiple TIE (MTIE) [13], DI-TIE, and EEENNS approaches, while Table 5 gives the search load corresponding to TQA values of the BSS-VQ algorithm. Moreover, with the search load required by the full search algorithm as a benchmark, Tables 6 and 7 present comparisons of the load reduction (LR) with respect to Tables 4 and 5; a high value of LR reflects a high search load reduction. Table 6 indicates that DI-TIE provides a higher value of LR than the MTIE and EEENNS search approaches for all the codebooks. It is also found from Tables 6 and 7 that most LR values of BSS-VQ are higher than those of the DI-TIE approach. For example, the LR values of BSS-VQ are higher than DI-TIE when the TQA is equal to or smaller than 0.99, 0.98, 0.96, and 0.99 in codebooks CB1, CB2, CB21, and CB22, respectively. Accordingly, a remarkable search load reduction is reached by the BSS-VQ search algorithm. In the QA aspect, a 100% QA is obtained by the MTIE, DI-TIE, and EEENNS algorithms as compared with the full search approach. Thus, only the QA experiment of BSS-VQ is conducted. The QA corresponding to TQA values of the BSS-VQ algorithm is given in Table 8. It reveals that the QA approximates the TQA in both inside and outside testing cases. Moreover, this algorithm provides an LR between 77.78% and 93.98% at TQA = 0.90, and an LR between 67.23% and 88.39% at TQA = 0.99, depending on the codebook. In other words, a trade-off can be made between the quantization accuracy and the search load. Furthermore, an overall LR is evaluated to observe the total search load of the entire VQ encoding procedure of an input vector. The overall LR refers to the total search load, defined as the sum over the codebooks of the average number of searches multiplied by the vector dimension of each codebook. An overall LR comparison with the full search as a benchmark is presented as a bar graph in Figure 2. As clearly indicated in Figure 2, the overall LR of BSS-VQ is higher than the MTIE, DI-TIE, and EEENNS approaches, while at the same time the QA is as high as 0.98. Moreover, Table 9 gives a PESQ comparison, including the mean and the standard deviation (STD), among the various approaches. Since MTIE, DI-TIE, and EEENNS provide a 100% QA, they all share the same PESQ as the full search, meaning that there is no deterioration in speech quality. A close observation reveals little difference between the PESQs obtained with a full search and with this search algorithm; that is, the speech quality is well maintained in BSS-VQ at TQA not less than 0.90. The BSS-VQ search algorithm is thus experimentally validated as a superior candidate relative to its counterparts.
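A hedged sketch of the overall LR metric defined above (average searches weighted by vector dimension, benchmarked against a full search); the codebook sizes and search counts below are made-up placeholders, not the paper's measurements.

```python
def overall_load_reduction(avg_searches, full_searches, dims):
    """Overall LR: total weighted search load relative to a full search.

    avg_searches  : average codeword searches per codebook for a method
    full_searches : codebook sizes, i.e., searches needed by a full search
    dims          : vector dimension handled by each codebook
    """
    load = sum(s * d for s, d in zip(avg_searches, dims))
    full = sum(s * d for s, d in zip(full_searches, dims))
    return 1.0 - load / full

# Illustrative values only: CB1 has 256 entries and dimension 9 (from the text);
# the remaining sizes, dimensions, and search counts are placeholders.
print(overall_load_reduction([14.6, 12.0, 5.0, 8.0, 4.0],
                             [256, 256, 64, 128, 32],
                             [9, 7, 3, 3, 3]))
```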
Conclusions

This paper presents a BSS-VQ codebook search algorithm for ISF vector quantization in the AMR-WB speech codec. Using a combination of a fast locating technique and lookup tables, an input vector is efficiently assigned to a search subspace in which only a small number of codeword searches is carried out, and a remarkable search load reduction is reached as a consequence. In particular, a trade-off can be made between the quantization accuracy and the search load to meet a user's needs when performing VQ encoding. The BSS-VQ search algorithm, providing a considerable search load reduction as well as nearly lossless speech quality, is experimentally validated as superior to the MTIE, DI-TIE, and EEENNS approaches. Furthermore, the improved AMR-WB speech codec can be adopted to upgrade VoIP performance on smartphones. As a consequence of the computational load reduction, the energy-efficiency requirement for an extended operation time is also met.

Figure 1. A plot of the average number of searches versus TQA.
Figure 2. Comparison of overall search load reduction among various approaches.
Table 1. Structure of S-MSVQ in AMR-WB in the 8.85-23.85 kbps coding modes.
Table 3. Dichotomy position for each dimension in the codebook CB1.
Table 4. Average number of searches among various algorithms in the 8.85-23.85 kbps modes.
Table 5. Search load of the BSS-VQ algorithm versus TQA values in the 8.85-23.85 kbps modes.
Table 8. Comparison of QA percentage of the BSS-VQ algorithm versus TQA values in the 8.85-23.85 kbps modes among codebooks.
Table 9. Comparison of mean opinion score (MOS) values using the PESQ algorithm among various methods.
5,177.2
2017-04-12T00:00:00.000
[ "Computer Science" ]
Simulation of building damage distribution in downtown Mashiki, Kumamoto, Japan caused by the 2016 Kumamoto earthquake based on site-specific ground motions and nonlinear structural analyses

Most of the buildings damaged by the mainshock of the 2016 Kumamoto earthquake were concentrated in downtown Mashiki in Kumamoto Prefecture, Japan. We obtained 1D subsurface velocity structures at 535 grid points covering this area based on 57 identified velocity models, used linear and equivalent linear analyses to obtain site-specific ground motions, and generated detailed distribution maps of the peak ground acceleration and velocity in Mashiki. We determined the construction period of every individual building in the target area corresponding to updates to the Japanese building codes. Finally, we estimated the damage probability with a nonlinear response model of wooden structures of different ages. The distribution map of the estimated damage probabilities was similar to the map of the damage ratios from a field survey, and moderate damage was estimated in the northwest where no damage survey was conducted. We found that both the detailed site amplification and the construction period of wooden houses are important factors for evaluating the seismic risk of wooden structures.

Introduction

The 2016 Kumamoto earthquake sequence included two major earthquakes, both with epicenters located in Kumamoto, Japan. According to the Japan Meteorological Agency (JMA), the first strong earthquake occurred at 21:26 Japan Standard Time (JST) on April 14 (12:26 coordinated universal time, UTC) and had a magnitude of 6.5 on the JMA magnitude scale (M_JMA), which converts to a moment magnitude (M_W) of 6.2. Its focal depth was 11 km, and the highest JMA seismic intensity of VII was recorded in downtown Mashiki. The second strong earthquake occurred at 01:25 JST on April 16, 2016 (16:25 UTC on April 15, 2016). The reported M_JMA was 7.3 (M_W = 7.0), and the focal depth was 12 km. The highest JMA seismic intensity of VII was also recorded in Mashiki (Sugino et al. 2016; Asano and Iwata 2016; Yamanaka et al. 2016; Nagao et al. 2017; Yamada et al. 2017; Kawase et al. 2017; Yamazaki et al. 2018; Sun et al. 2020). Hereafter, we refer to the 6.5 M_JMA earthquake on April 14, 2016 as the foreshock and the 7.3 M_JMA earthquake on April 16, 2016 as the mainshock. Three strong-motion stations were established at Mashiki: KMMH16 (station of NIED KiK-net), KMMP58 (Mashiki Town Hall station), and Bunkakaikan (Mashiki Community Hall station); their locations are shown in Fig. 1. The inset shows the locations of downtown Mashiki and the epicenter locations of the two earthquakes on Kyushu Island (NIED 2020; Sun et al. 2020). Downtown Mashiki is located to the east of Kumamoto City, Japan, and was heavily damaged by the mainshock. After the mainshock, the Architectural Institute of Japan (AIJ) performed a field survey to estimate the building damage, as shown in Fig. 1 (NILIM and BRI 2016). The damage survey followed a building damage scale that includes seven damage grades, from D0 (no damage) to D6 (fully collapsed), referring to Damage Scale #4 in Fig. 3. The damage ratio of each cell in Fig. 1 is the ratio between the total number of buildings with ≥ D4 damage and the total number of buildings in the cell. D4 means that the building underwent pancake-like collapse, loss of connectivity between beams and columns, or interlayer deformation (Takai and Okada 2001; NILIM and BRI 2016).
[Fig. 1 caption: The damage ratio of each cell = (the number of ≥ D4 damaged buildings in a cell)/(the total building number in a cell). The 2016 and 2018 microtremor observation sites are marked by blue triangles and red rectangles, respectively. The three strong-motion stations at Mashiki are marked by large purple stars (Sun et al. 2020). Site K (black star in the northeast) is the location where a borehole was drilled by Arai (2017) and Shingaki et al. (2017); it is very close to the NIED KiK-net site KMMH16. The inset shows the earthquake mechanism information of the foreshock and mainshock; the colors of the earthquake symbols represent the depths of the centroid moment tensor (CMT) solutions, 17 and 11 km (NIED 2020).]

Most of the heavily damaged buildings appeared in the area between the local road No. 28 and the Akitsu River. Unfortunately, Fig. 1 does not depict the building damage in the northwest. Yamada et al. (2017) estimated the building damage distribution in Mashiki by analyzing aerial photos taken before, after, and during the interval between the foreshock and the mainshock. Moya et al. (2018) discussed the building damage in the southern part of Mashiki using empirical fragility functions. Yamazaki et al. studied the relationship between the building damage ratio and the building construction period in Mashiki. In this study, we simulate the building damage distribution in Mashiki and determine the damage distribution of houses in the northwestern area, considering the building construction periods, by using the nonlinear dynamic analysis of wooden houses (Yoshida et al. 2004; Nagato and Kawase 2004). Estimating building damage based on the local strong ground motions is crucial for urban disaster prevention. An accurate prediction of seismic damage to buildings on a regional scale is important for modern city planning and post-earthquake rescue, which will help to mitigate direct and indirect seismic losses (Hori 2011; Lu et al. 2014). Currently, existing approaches for seismic damage evaluation of buildings on a regional scale mainly include the damage probability matrix method (e.g., Ulrich et al. 2014; Rojahn et al. 1985; Yakut et al. 2006; Polese et al. 2015; Palanci and Senel 2019; Del Gaudio et al. 2020), the spectrum method (Jalayer et al. 2010; FEMA 2012), and methods based on time history analysis (e.g., Kawase 2002, 2004; Yoshida et al. 2004; Benavent-Climent et al. 2014; Lu and Guan 2017; Brunelli et al. 2020). After the 1995 Hyogo-ken Nanbu earthquake (hereafter the Kobe earthquake), several researchers statistically analyzed the damage survey data to study the relationship between structural characteristics and building damage ratios (e.g., Hayashi et al. 2000; Murao and Yamazaki 2000; Miyakoshi 2002). Other researchers attempted to determine the relationship between the peak ground acceleration (PGA) or peak ground velocity (PGV) and the building damage ratio and established empirical vulnerability functions for different building types (e.g., Hayashi et al. 1997; Yamazaki 2000, 2002; Yamaguchi and Yamazaki 2000). However, these methods could not precisely evaluate the damage probability (DP) of every building in an area. For a more quantitative prediction, it is better to build numerical models of structures based on seismic response analyses of the wooden and reinforced concrete (RC) buildings in the areas heavily damaged during previous earthquakes.
Nagato and Kawase (2004) developed a seismic response analysis model (the Nagato-Kawase model) for multi-story structures, based on the building damage statistics for Kobe city after the 1995 Kobe earthquake. They successfully reproduced the damage belt in Kobe after the disaster, based on the estimated ground motions (Kawase 1996). This work established a standard model of the common two-story wooden houses in the Kobe area associated with the Japanese building code. Further, because the Japanese building codes were improved in 1950, 1971, and 1981 in terms of the seismic design forces of buildings, Yoshida et al. (2004) enhanced the standard model of Japanese wooden houses (the Yoshida model) to consider four construction periods: before 1950, 1951-1970, 1971-1981, and after 1982. Thus, both the local ground motions and the construction period should be considered when estimating the damage to wooden houses. In this study, we evaluate the damage probabilities of wooden houses in Mashiki caused by the 2016 Kumamoto earthquakes by considering the construction period and the estimated strong ground motions. We also attempt to estimate the building DP distribution in the northwestern part, where no systematic survey was conducted during the AIJ survey. We have previously identified subsurface velocity structures for this area (Sun et al. 2020). However, as a precondition, we need a spatially denser sampling of ground responses to delineate a detailed map that corresponds to the damage survey. The construction period of every building in this area also needs to be determined.

[Fig. 2 caption (partial): ... (NILIM and BRI 2016). The x-axis denotes the construction period, and the y-axis denotes the damage grade used in the AIJ survey and the corresponding damage index. The number inside the parentheses is the building number of the respective category.]

[Fig. 3 caption: Four damage scale standards and damage index used in Japan. Damage Scale #1 is the European Macroseismic Scale 1998 (Grunthal 1998). Damage Scale #2 is the AIJ 1987 standard. Damage Scale #3 is used in the Yoshida model. Damage Scale #4 was used for the AIJ survey after the 2016 Kumamoto earthquake. This figure is edited based on the results of Takai and Okada (2001).]

2 Data preparation and methodology

Statistical analysis of wooden houses and building damage scales used in Japan

A statistical analysis of the damage to wooden houses obtained from the AIJ survey report is depicted in Fig. 2. The buildings were classified into six categories based on the main construction material: wooden structure, steel structure, RC structure, hybrid structure, others, and unknown (NILIM and BRI 2016). For a hybrid structure, the main building material varies between floors; for example, RC or steel was used on the first floor of the building, whereas wood was used on the second floor. The other structures used steel or other materials in certain parts of the building. In total, the AIJ surveyed 2,... buildings (Okada and Takai 1999; Takai and Okada 2001). Most of the wooden houses in Mashiki were built after 1981, and the damage grades of approximately 27% of the wooden houses were ≥ D4 (Fig. 2). The DP of wooden houses might be correlated with the construction period, as more than two-thirds of the heavily damaged wooden houses (≥ D4) were built before 1981, while wooden houses built after 1981 showed better seismic performance.
Houses with ≥ D4 damage made up 46% (351/770) of the buildings built before 1981, while only 15% (176/1185) of the buildings built after 1981 were damaged to this degree.

Microtremor observation and identified subsurface velocity structures

We performed microtremor observations at 86 sites in the heavily damaged area in 2016 and 2018, at the locations shown in Fig. 1. We then generated 57 one-dimensional (1D) subsurface velocity structures for this area (Sun et al. 2020). Table 1 presents the identified velocity models at KMMH16 and KMMP58 (Town Hall site). The relationship between the shear strain and the shear modulus reduction ratio (γ-G/G_max) and between the shear strain and the damping ratio (γ-h) for the shallow layers was previously obtained at site K (Fig. 1), where Arai (2017) and Shingaki et al. (2017) conducted triaxial tests of borehole soil materials. For horizontal motions at the seismological bedrock (V_s > 3000 m/s), Nagashima and Kawase (2018) estimated the seismic bedrock motions at KMMH16 during the mainshock by using the diffuse field concept for earthquakes (Kawase et al. 2011; Nagashima et al. 2017). Figure 4 shows the acceleration time histories of the calculated bedrock input. We assumed that the same input was incident to the seismological bedrock of all soil columns for the site response analyses (Sun et al. 2020). Based on these results, we estimated the seismic ground motions at KMMH16 during the mainshock by using the linear and equivalent linear analysis (LA and ELA) methods (Yoshida and Iai 1998; Yoshida 2014). In Mashiki, we obtained the ground accelerations for the mainshock at KMMP58 and the ground and borehole accelerations (at 252 m depth) for the mainshock at KMMH16. Figures 5 and 6 show comparisons of the estimated and observed ground motions at KMMH16 for the east-west (EW) and north-south (NS) components, respectively. The horizontal strong ground motions at KMMH16 were simulated well, whereas the linear soil response produced much larger accelerations than observed. According to these results, and as seen when comparing the linear and equivalent linear analyses, consideration of soil nonlinearity in the ground response analysis is necessary.

Nonlinear response models of wooden buildings

Nagato and Kawase constructed a model for estimating the damage probabilities of a group of buildings in an area, including wooden, RC, and steel structures (Nagato and Kawase 2004). They calculated the observed damage ratios (ODRs) based on statistical data for Kobe after the 1995 Kobe earthquake. They also established building models by referring to the Japanese building code (except for wooden houses, for which the dependence on the construction period was not obtained).

[Figs. 5 and 6 caption (partial): ... (Sun et al. 2020). a, b Estimated accelerations and velocities at the ground surface with LA and ELA, compared with the observations at KMMH16 during the mainshock, respectively. c, d Estimated Fourier spectra and acceleration response spectra, respectively. The dotted, dashed, and solid lines denote the LA, ELA, and observation results, respectively.]

Subsequently, they analyzed the seismic responses of the building models and obtained the calculated damage ratios (CDRs; a CDR is the value of the calculated DP of a building) by assuming log-normal distributions of the yield capacities. They compared the ODRs and CDRs, modified the strength of the building model, and repeated the analysis until the deviation between the ODRs and CDRs was sufficiently small.
Finally, they established the Nagato-Kawase model, which was applied to successfully reproduce the distinctive building damage belt caused by the 1995 Kobe earthquake. Nagato and Kawase (2004) used multi-degree-of-freedom models with nonlinear springs for their seismic response analysis. They assumed that RC buildings are characterized by the degrading trilinear hysteresis type (D-tri type) of nonlinear springs (e.g., Fukada 1969; Otani 1981; Nagato and Kawase 2004), as shown in Fig. 7. In particular, Japanese wooden houses were estimated to be, on average, twice as strong as the building standard. Nagato and Kawase (2004) found that the reproduced strong-motion synthetics of Matsushima and Kawase (2000) are sufficient to reproduce the observed severe damage in Kobe, because the possibility of heavy damage and collapse is primarily controlled by the strength of the velocity input. This is because the maximum deformation of a building in the nonlinear regime is correlated with the potential energy that the structural system must absorb. Under such a relatively low-frequency (but high peak ground velocity) input, we would not see a strong correlation between a building's deformation and its resonant frequency, because the resonant frequency of a building is a parameter of the linear regime, so it correlates with deformation only for small inputs (less than 1/100 in terms of the story drift angle), referring to the capacity-demand-diagram results of Chopra and Goel (1999). Additionally, the inelastic response of a building with a higher resonant frequency would result in the same level of response in the inelastic (i.e., potential-energy) regime. Kawase and Masuda (2004) then applied this model to predict the building damage in Yatsushiro City, Japan. Yoshida et al. (2004) followed the construction procedure of the Nagato-Kawase model and updated the parameters related to the Japanese building codes for the four construction periods to analyze the dynamic response of the standard models, for a more detailed damage estimation of Japanese wooden houses. They used 24 representative models for two-floor wooden houses of different strengths. For the first floor, eight representative strength factors relative to the standard strength and their existing ratios were used; these eight strength factors are shown in Fig. 8. For the second floor, they used ratios between the structural wall sufficiency rates (i.e., the ratio between the existing structural wall quantity and the necessary structural wall quantity) of the second and first stories of 1.0, 1.5, and 2.0, respectively (Yoshida et al. 2004; Nagato and Kawase 2004).

[Fig. 7 caption: An example of the nonlinear behavior of the assumed D-tri type. The horizontal axis is the story drift angle, and the vertical axis is the shear force coefficient (Nagato and Kawase 2004).]

The strength of the standard wooden model was defined by the necessary structural wall quantity for a wooden house according to the Japanese building code. Figure 9 shows the three types of nonlinear factors for the standard model used in the Yoshida model. Parameters of the standard wooden house in the Yoshida model are listed in Tables 2 and 3. Furthermore, t_{1,2} and t_{1,3} denote the stiffness ratios of the 2nd stiffness to the 1st stiffness and of the 3rd stiffness to the 1st stiffness for the tri-linear type model, respectively.
Similarly, s_{1,2}, s_{1,3}, and s_{1,R} denote the stiffness ratios of the 2nd stiffness to the 1st stiffness, of the 3rd stiffness to the 1st stiffness, and of the returning stiffness to the 1st stiffness for the slip type model, respectively. The necessary structural wall quantity is satisfied when the base shear coefficient is 0.2 at a deformation angle of 1/120 rad (Kawase and Masuda 2004; Yoshida et al. 2004; Nagato and Kawase 2004).

[Fig. 9 caption (partial): ... (Yoshida et al. 2004), with three types of relationships between the story drift angle and the base shear ratio.]

They also set a parameter α, the ratio between the structural strength of a wooden house for a specific construction period and that of the standard wooden model, as given in Table 4 (Yoshida et al. 2004, 2005). The strength of a model is then the product of α and the standard strength. As for the DP estimated by the Yoshida model, we calculate the sum of the log-normal existing ratios of the heavily damaged models after the analysis; the total percentage of damaged models among the 24 models is obtained as the DP of the site. They successfully extended the Nagato-Kawase model to consider different construction periods because they used the damage statistics for tax evaluation by municipal governments, which require the construction period of each house. The database of building damage used to construct the Yoshida model followed Damage Scale #3 in Fig. 3. Furthermore, the 24 models with different strengths are defined for the nonlinear analysis once the construction period of the building has been identified. Figure 10 presents the predominant frequencies of the 24 models used for the four different construction periods. After the nonlinear analysis, the DP for the input seismic waveform is calculated as the sum of the existing ratios of the models (Fig. 8) that exceed the damage criterion among the 24 models. Note that the first and second frequencies vary as a function of the construction age and of the strength variation with the log-normal distribution of existence. This is because we set the same yield deformation angle of 1/120 for all the models: the higher the yield strength, the higher the stiffness in the linear regime. The actual measured fundamental peak frequencies of wooden houses in Japan are distributed in the same range of 2-10 Hz. As mentioned above, the Yoshida model was built in 2005 for wooden houses in Kobe, based on the estimated ground motions up to 1.75 Hz by Matsushima and Kawase (2000). Again, despite the much higher resonant frequencies of actual wooden houses in Japan, the estimated ground motions had sufficient power to create heavy damage or collapse. Therefore, in this study, it is better to filter the estimated ground motions with a high-cut (1-2 Hz) filter when applying the Yoshida model.
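The site-level DP computation described above can be sketched as follows, assuming the 24 per-model peak story drift angles and their log-normal existing ratios are already available; the 1/30 rad criterion is the one adopted later in this study, and all names are illustrative.

```python
def site_damage_probability(max_drift_angles, existing_ratios, criterion=1.0 / 30.0):
    """DP of a site: sum of the existing ratios of the representative models
    whose peak story drift angle exceeds the damage criterion.

    max_drift_angles : peak story drift angle of each of the 24 models [rad]
    existing_ratios  : log-normal existing ratio of each model (sums to 1)
    """
    return sum(w for drift, w in zip(max_drift_angles, existing_ratios)
               if drift > criterion)
```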
Investigation of construction periods

According to the Yoshida model, the construction period is an important parameter for determining the average yield strength of wooden houses (Yoshida et al. 2004). The model considers four construction periods: before 1950, 1951-1970, 1971-1981, and after 1982. Thus, we needed to obtain the construction period of each building in the target area. After the mainshock, Yamazaki et al. (2018) performed a statistical analysis of the building ages for the entire Mashiki. However, they did not provide details on the construction period of every building; note that our target area is the central part of the entire Mashiki town. Yamada et al. (2017) determined the construction period of every building in the target area in relation to the dates when the aerial photos of the Geospatial Information Authority of Japan (GSI) of Mashiki were taken. Moya et al. (2018) reported building construction period results for the southern area between road No. 28 and the Akitsu River. However, these results did not define the construction periods the same way as the Yoshida model; hence, their data could not be used directly in this study. We obtained the building ages for the AIJ-surveyed area in Mashiki. However, the AIJ survey only covered the area shown in Fig. 1, without the northwestern part. In addition, the AIJ classified building ages only as either before 1981 or after 1982, and thus this information alone (Fig. 1) was insufficient for our study. We needed to generate a database of wooden houses in the target area with the construction periods as defined by the Yoshida model. We used the following steps to determine the construction periods for the target area corresponding to the Yoshida model:

(1) We obtained the coordinates of all building centroids in the target area by referring to OpenStreetMap (OSM).
(2) We determined the building construction periods by referring to Yamada et al. (2017). They had investigated construction periods as before 1967, 1968-1975, 1976-1981, 1982-1986, 1987-1997, 1998-2008, and after 2008. These results were obtained through an analysis of aerial photos shared by the GSI. We then re-investigated the construction periods of our target area as before 1967, 1967-1981, and after 1982.
(3) We used the geographical coordinates of buildings in the AIJ survey with a construction period after 1982. All these buildings were located within the heavy blue dashed line shown in Fig. 11. We checked all these buildings and corrected any information contradicting the AIJ survey data.
(4) For buildings with construction periods before 1967, we compared their locations with aerial photos captured in 1947 and 1956 (GSI). For buildings with a 1967-1981 construction period, we re-checked their locations in aerial photos captured in 1964 and 1975 (GSI). After this step, we obtained a database of construction periods as before 1947, 1947-1956, 1956-1967, 1967-1975, 1975-1981, and after 1981.
(5) We compared our results with those of Moya et al. (2018) to correct and confirm the database for buildings between road No. 28 and the Akitsu River in Mashiki. We thus built a database of the actual construction period of each building in Mashiki.
(6) According to the Yoshida model, 1951, 1971, and 1981 are important transitions between construction periods. Because we could not obtain aerial photos captured exactly in 1951 and 1971, we mapped the construction periods in our database to the construction periods used in the Yoshida model, as given in Table 5.

Finally, we obtained a distribution map of the construction periods for the target area following the same standards as those used in the Yoshida model, as shown in Fig. 11.
[Fig. 11 caption: Definition of the construction periods for all buildings in the target area. The black, blue, and yellow dashed lines denote the heavily damaged area, the AIJ field survey area, and the northwestern part not surveyed by the AIJ, respectively. The three stars mark the strong-motion stations, as in Fig. 1. The dark red, red, yellow, and green dots mark buildings built before 1950, during 1951-1970, during 1971-1981, and after 1982, respectively. The background was obtained from OpenStreetMap.]

We compared the statistical analysis of the building construction periods in our database with that of Yamazaki et al. (2018), as depicted in Fig. 12. The overall match was satisfactory. We had a slightly lower percentage of buildings constructed before 1950 than Yamazaki et al. because we counted buildings based on an aerial photo captured in 1947, so we were missing three years. We had a larger percentage of buildings constructed during 1951-1970 because this period included buildings constructed during 1947-1975. Overall, the combined percentage of buildings constructed before 1950 and during 1951-1970 was slightly larger than that of Yamazaki et al.; this may be because the number of older buildings built before 1970 in the target area was slightly larger than in the entire downtown Mashiki. For buildings built after 1982, our results were close to those of Yamazaki et al.

Research steps

With the necessary introduction above, our research process is clear. First, we observed microtremors in the target area. Second, we identified 57 1D subsurface velocity structures for this area. Third, we obtained 535 interpolated 1D subsurface velocity structures covering the area. Fourth, we obtained the ground motions by the ELA. Finally, we estimated the building DPs through the Yoshida model. The detailed workflow is illustrated in Fig. 13.

[Table 5: correspondence between the construction periods of our database (before 1947, 1947-1956, 1956-1967, 1967-1975, 1975-1981, after 1981) and the construction periods of the Yoshida model (before 1950, 1951-1970, 1971-1981, after 1982).]

Interpolated subsurface velocity structures

Based on the 57 1D subsurface velocity structures, we generated a 30 × 30 grid along the horizontal plane of each subsurface layer for the target area (each grid cell has an area of approximately 47 m × 47 m) by using a linear interpolation method for 3D space (Barber et al. 1996; Virtanen et al. 2020). Finally, we obtained 535 velocity structures for Mashiki. The upper eight layers (from the ground to the engineering bedrock) are shown in Fig. 14. The shallow subsurface structures in the southwest are deeper than in the other areas.

Dynamic response analyses of subsurface velocity structures at 535 sites

We followed the basic procedure of the ground motion simulation scheme of our previous research (Sun et al. 2020) but analyzed the ground response with considerably denser sampling locations. We estimated the ground motions during the mainshock for these 535 grid points by employing the ELA (Yoshida and Iai 1998; Yoshida 2001). Figure 15 depicts the distribution maps of the estimated PGAs and PGVs according to the ELA. Although we previously summarized the distribution of the estimated PGAs and PGVs at the 57 sites, Fig. 15 has a more detailed spatial resolution, especially in the area of large PGA/PGV. A close relationship was not observed between the PGA and the building damage distribution at Mashiki (Fig. 1); however, the PGV distribution of the ELA showed a close relationship with the building damage distribution from the AIJ survey. Moreover, the seismic response was clearly stronger for the EW component than for the NS component. This is primarily because the EW component of the seismic bedrock waves had a larger amplitude than the NS component in the intermediate (~1 Hz) frequency range.
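A minimal sketch of the layer-wise grid interpolation described above, using SciPy's Qhull-based linear interpolation (the two cited references correspond to Qhull and SciPy); coordinates and arrays are placeholders, and in the study only the grid points inside the target area (535 of the 900) would be kept.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

def interpolate_layer(site_xy, layer_values, grid_n=30):
    """Interpolate one subsurface property (e.g., a layer interface depth)
    from the 57 identified sites onto a regular 30 x 30 grid.

    site_xy      : (n_sites, 2) horizontal coordinates of the identified models
    layer_values : (n_sites,) property value of one layer at each site
    """
    x = np.linspace(site_xy[:, 0].min(), site_xy[:, 0].max(), grid_n)
    y = np.linspace(site_xy[:, 1].min(), site_xy[:, 1].max(), grid_n)
    gx, gy = np.meshgrid(x, y)
    lin = LinearNDInterpolator(site_xy, layer_values)    # Delaunay-based (Qhull)
    near = NearestNDInterpolator(site_xy, layer_values)  # fallback outside hull
    z = lin(gx, gy)
    mask = np.isnan(z)
    z[mask] = near(gx[mask], gy[mask])
    return gx, gy, z
```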
Because most of the wooden houses in Mashiki are two-story structures, we used the two-story wooden model of the Yoshida model for analyzing the target area. We applied the Newmark-β analysis method, with β = 0.25 and a time increment Δt = 0.005 s. Instantaneous stiffness-proportional damping was applied with a damping ratio of 5%. For the standard model constructed according to the Japanese building code, the mass of the first floor was set to 15.88 tons and that of the second floor to 11.52 tons. Both floors were set to a height of 2.9 m. A log-normal distribution was assumed, with the standard deviation based on the measured resonant frequency distribution of wooden houses. Because the damage ratios shown in Fig. 1 are those for damage grades of D4 and higher, we set the damage criterion for the nonlinear building response analysis to a story drift angle greater than 1/30 rad. This threshold was used by Yoshida et al. (2004) to represent the total damage grade in the survey for property tax evaluation; this should be similar to a damage grade of D4 or higher, although it is not an exact match. The average yield capacities of the Nagato-Kawase model and the Yoshida model were obtained by using synthetic seismograms of the ground surface in Kobe, simulated by the 3D finite difference method for a frequency range of up to 1.75 Hz. Thus, we apply a high-cut filter of 1-2 Hz to the estimated strong motions of the ground surface at Mashiki. Even though we filter the high-frequency components from the input waves, damage still occurs because the structural period is elongated once the response enters the nonlinear regime. Figures 16 and 17 show representative results of the dynamic response analysis of a wooden house model with a weaker yield strength (construction period of 1971-1981; strength coefficient factors of the first and second floors of 0.29648 and 0.59296, respectively) among the 24 representative models. The filtered estimated ground motions at KMMH16 were applied to this model. The maximum story drift angles of the EW and NS components were 16.7% and 3.8% rad, respectively. Full damage was considered to occur because both were greater than 1/30 rad (3.33%). Table 6 lists the maximum response accelerations, ... By applying the filtered ground motions and setting the different construction periods in the Yoshida model, we estimated the DPs for the four construction periods at every grid point with the two-floor standard wooden model. The DPs corresponding to the construction periods before 1950, 1951-1970, 1971-1981, and after 1982 are designated DP_950, DP_970, DP_980, and DP_990, respectively. We created distribution maps of these DPs through the ELA. Figure 18 presents the estimated DP distributions of Mashiki for the four construction periods with the ground motions estimated by the ELA. Based on the statistical analysis of buildings in Mashiki by Yamazaki et al. (2018), we calculated the composite DP (DP_COM) of each grid point by considering the building percentages of the different construction periods, using Eq. (1):

DP_COM = Per_950 × DP_950 + Per_970 × DP_970 + Per_980 × DP_980 + Per_990 × DP_990    (1)

where Per_950, Per_970, Per_980, and Per_990 are the existing percentages of buildings in the corresponding square cell with construction periods before 1950, 1951-1970, 1971-1981, and after 1982, respectively. Figure 19 shows the distribution map of DP_COM based on the ground motions estimated by the ELA.
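For orientation, a linear two-story shear-building version of the Newmark-β scheme with the parameters quoted above (β = 0.25, Δt = 0.005 s, 5% stiffness-proportional damping, floor masses 15.88 t and 11.52 t, story height 2.9 m) is sketched below. The tri-linear and slip-type springs of the actual Yoshida model are replaced here by constant placeholder stiffnesses, so this is not the paper's model, only the time-stepping skeleton.

```python
import numpy as np

def newmark_beta_2dof(ag, dt=0.005, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark-beta response of a linear two-story shear
    building to a ground acceleration record ag [m/s^2]; returns the time
    history of the first-story drift angle [rad]."""
    m = np.diag([15880.0, 11520.0])      # floor masses [kg] (from the text)
    k1, k2 = 4.0e6, 2.5e6                # story stiffnesses [N/m] (placeholders)
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    w1 = np.sqrt(np.linalg.eigvals(np.linalg.solve(m, K)).real.min())
    C = (2 * 0.05 / w1) * K              # stiffness-proportional 5% damping
    iota = np.ones(2)
    u = np.zeros(2); v = np.zeros(2)
    a = np.linalg.solve(m, -(m @ iota) * ag[0] - C @ v - K @ u)
    Keff = K + gamma / (beta * dt) * C + m / (beta * dt**2)
    drifts = []
    for agi in ag[1:]:
        p = -(m @ iota) * agi
        rhs = (p
               + m @ (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
               + C @ (gamma * u / (beta * dt) + (gamma/beta - 1) * v
                      + dt * (gamma/(2*beta) - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u - dt * v) / (beta * dt**2) - (1/(2*beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        drifts.append(u[0] / 2.9)        # first-story drift angle (height 2.9 m)
    return np.array(drifts)

t = np.arange(0.0, 10.0, 0.005)
ag = 3.0 * np.sin(2 * np.pi * t)         # toy 1 Hz input [m/s^2]
print(np.abs(newmark_beta_2dof(ag)).max())
```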
After obtaining the DPs for the four construction periods at every grid point, we calculated the DP of each building. Within the 30 × 30 grid, each wooden house is surrounded by four grid points, as shown in Fig. 20. The distance between a grid point and a wooden house is denoted Dis(i) (e.g., House1-grid point). In other words, a building is affected by the estimated ground motions at the four surrounding grid points. We then defined an influence coefficient (IC) related to the distance between the building and the four grid points, as shown in Eq. (2), and used the IC to calculate the DP of every building with Eq. (3):

Power(i) = 1/Dis(i),    IC(i) = Power(i) / Σ_{j=1}^{4} Power(j)    (2)

DP_building = Σ_{i=1}^{4} IC(i) × DP(i)_period    (3)

where Power(i) denotes the inverse of each distance, IC(i) represents the influence coefficient of each grid point, DP_building denotes the DP of the target building, and DP(i)_period is the estimated DP at each of the four surrounding grid points for the same construction period as the target building. For example, if House1 is a building as shown in Fig. 11 with a construction period after 1982, then DP(B2)_990, DP(B3)_990, DP(C2)_990, and DP(C3)_990 are the damage probabilities of grid points B2, B3, C2, and C3, respectively, for a construction period after 1982 (Fig. 18d). By considering IC(B2), IC(B3), IC(C2), and IC(C3) of the four grid points, we can obtain the damage probability of this building with Eq. (3). Although we estimated the DP of every building in the target area, we cannot present these individual results because they are private information of the relevant house owners. To compare the estimated damage of houses with the AIJ survey results, we created a new grid similar to that of the AIJ survey and calculated the averaged DP of each cell (DP_cell) with Eq. (4):

DP_cell = (1/N) Σ_{i=1}^{N} DP(i)_building    (4)

where N is the number of buildings located in the cell, and DP(i)_building is the estimated DP of each building in the cell. The size of the new grid is approximately equal to that used in the AIJ survey (Fig. 1). Figure 21 is the distribution map of DP_cell.

[Fig. 21 caption: Averaged damage probability of each cell based on the estimated ground motions of the ELA. The green, yellow, orange, red, and dark red colors denote damage probabilities of 0-15%, 15-25%, 25-50%, 50-75%, and greater than 75%, respectively. The other markers and lines have the same meanings as in Fig. 1.]
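Equations (2)-(4) amount to inverse-distance weighting followed by cell averaging; a compact sketch (array shapes assumed as noted) is:

```python
import numpy as np

def building_dp(house_xy, grid_xy, grid_dp):
    """Eqs. (2)-(3): inverse-distance weighting of the DPs at the four
    surrounding grid points, evaluated for the building's construction period.

    house_xy : (2,) coordinates of the house
    grid_xy  : (4, 2) coordinates of the surrounding grid points
    grid_dp  : (4,) DPs at those grid points for the matching period
    """
    dis = np.linalg.norm(grid_xy - house_xy, axis=1)
    power = 1.0 / np.maximum(dis, 1e-9)   # Power(i) = 1/Dis(i); guard Dis = 0
    ic = power / power.sum()              # IC(i), Eq. (2)
    return float(ic @ grid_dp)            # DP_building, Eq. (3)

def cell_dp(building_dps):
    """Eq. (4): averaged DP of a cell over its N buildings."""
    return float(np.mean(building_dps))
```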
Figure 18 indicates that most of the houses constructed before 1950 and during 1951-1970 had a high DP during the mainshock. Buildings built during 1971-1981 and after 1982 showed better seismic performance than the older ones. Figure 19 shows that the damage grades of the DP_COM distribution map correspond to those of DP_980 and DP_990. The estimated DP distribution shows a close correlation with the estimated PGV distribution (Fig. 18c and d). This is because building damage requires large deformation and therefore a large amount of energy, as shown in Fig. 16; PGV is thus the controlling ground motion index for structural damage in Mashiki, because the input energy is proportional to the square of the PGV. Figure 21 indicates that most of the large DPs were located in the south of Mashiki, and the overall pattern is similar to the AIJ damage survey. Moreover, the estimated DPs in the northwestern area of Mashiki are not negligible. In the southwestern part near the Akitsu River as well as in the northeastern part, several cells with large DPs in Fig. 21 were located outside the heavily damaged cells in Fig. 1. One explanation is that some of the buildings in these cells were actually steel or RC structures, whereas we assumed all buildings to be two-floor wooden structures. Another, more plausible reason is that most of the buildings in these cells were built after 2000 and did not have the same yield strength as that specified for the construction period after 1982, because the Japanese building code for wooden structures was updated in 2000. In addition, we did not consider soil liquefaction in the southwestern part near the Akitsu River; a nonlinear effective-stress analysis of the soil columns needs further study. Moreover, the number of buildings differs between cells, as shown in Fig. 1. The observed damage ratio of a cell is strongly affected by the building count when the number of buildings in the cell is small; this is one reason why there are so many cells with no damage in the observed ratio distribution in Fig. 1. If the expected damage ratio of a cell is less than, say, 25% and the existing number of buildings is less than four, it is highly probable that no damage is observed in that cell. Furthermore, the estimated ratio of heavy damage tended to be slightly greater than the AIJ survey results for the lightly damaged cells (DP < 25%), while the results were the opposite for the heavily damaged cells (DP > 75%). This is because the Yoshida model was developed based on the damage to wooden houses after the 1995 Kobe earthquake (Okada and Takai 1999; Takai and Okada 2001) as surveyed for property tax evaluation by the municipal government (Damage Scale #3 in Fig. 3), whereas the AIJ survey used Damage Scale #4 to evaluate the building damage in Mashiki. The damage grades used to construct the Yoshida model were interpreted to correspond to D1-D2, D3, and D4-D6, but it is difficult to prove the appropriateness of this correspondence. These differences explain the deviation between the estimated damage probabilities and the AIJ survey results. Although the results in Fig. 21 do not perfectly match the survey results for every cell in Fig. 1, they indicate that the Yoshida model can be used to estimate the building damage probabilities in Mashiki. Further improvement is needed to consider the construction period after 2000.

Discussion

Figure 22 shows the overall comparison between the AIJ-investigated DRs and the estimated DPs of the AIJ survey cells with and without the 1.0-2.0 Hz high-cut filter. Even though the estimated DPs without the filter give a slightly better match to the AIJ DRs in the heavy damage grades (DR > 50%) than those with the 1.0-2.0 Hz high-cut filter, they are not close to the AIJ DRs in the slight and moderate damage grades (DR < 50%). In contrast, the results with the 1.0-2.0 Hz high-cut filter are close to the AIJ DRs in the slight and moderate damage grades, while they are underestimated in the heavy damage grades (DR > 50%). According to the AIJ DRs, 74% of the investigated cells were slightly or moderately damaged, which emphasizes the importance of the good match in the slight and moderate damage grades. Figure 23 presents the absolute difference between the estimated DP with the 1.0-2.0 Hz high-cut filter and the middle value of the AIJ DR range in each cell. The difference is less than 35% for 83% of the cells, and only a few cells have a very large difference. Thus, applying the 1.0-2.0 Hz high-cut filter to the estimated ground motions is appropriate for estimating DPs in Mashiki.
Considering that the Yoshida model was built based on the building data of Kobe city, and that most of the large DPs were found for the older buildings (before 1970), the seismic performance of the older buildings in Mashiki was worse than that of the Kobe buildings, while that of the younger Mashiki buildings was similar to the younger Kobe buildings. Figure 24 shows the comparison between the DPs estimated by using the local strong ground motions of each building and those obtained with the interpolation from the four grid points shown in Fig. 20. The two estimated DPs are similar at most sites. Moreover, it is very time-consuming to analyze the site response by LA and ELA and then estimate the DP by nonlinear analysis for more than 3000 buildings in Mashiki. These results indicate that the proposed four-point interpolation method for the DP estimation is sufficiently accurate and efficient.

[Fig. 22 caption: Comparisons of estimated DPs for five cases and observed damage ratios. Grey dots with error bars denote the AIJ-investigated damage ratios in 400 cells (the cell locations are shown in Fig. 1), in which the damage ratio ranges are 0%, 0-25%, 25%-50%, 50%-75%, and 75%-100%. The red rectangles denote the final averaged DP of each cell from the response to the ground motions estimated with the 1.0-2.0 Hz high-cut filter; similarly, the yellow crosses denote the averaged DP of each cell analyzed from the response to the unfiltered estimated ground motions.]

[Fig. 24 caption (partial): ... (as shown in Fig. 20). a, b, c, and d denote the results for buildings in Cell 331, Cell 394, Cell 530, and Cell 796, which are shown in Fig. 21. The horizontal and vertical axes denote the building ID and the estimated DP, respectively.]

Conclusion

In this study, we estimated the detailed ground motions of Mashiki during the mainshock of the 2016 Kumamoto earthquakes through equivalent linear ground response analyses of 1D soil columns from the seismological bedrock to the ground surface. We also estimated the building damage probabilities through nonlinear dynamic response analyses of wooden houses. We evaluated the site responses based on the velocity models at the 535 grid points, determined the construction periods, and estimated the building DP of each wooden house in Mashiki. The whole study began from microtremor observation, which is a very convenient research tool. Based on field data of the buildings, the seismic risk of buildings can be inferred. Before an earthquake, this method can be used to guide the retrofitting of local buildings. After an earthquake, this method can be used to quickly estimate the damage ratios of near-source buildings for immediate rescue and recovery actions. Our main conclusions and findings are as follows:

1. The estimated spatial PGV distribution of the EW component by the ELA was similar to the building damage distribution of the AIJ survey. In addition, the estimated PGAs and PGVs of the EW component were considerably stronger than those of the NS component. The analytical results for the 535 1D subsurface velocity structures have a more detailed resolution (~47 m × 47 m grid) than the previous work. The 535 estimated ground motions provided the basis for analyzing the dynamic response of every building.

2. We determined the construction period of every building in the target area with our research method.
The total percentage of older buildings (constructed before 1950 and during 1951-1970) in central Mashiki was slightly greater than that found by another survey covering all of the Mashiki buildings.

3. The estimated DPs indicated heavy damage to the older buildings (constructed before 1950 and during 1951-1970). The estimated DP distribution of each new cell (Fig. 21) generally matched the DP distribution of the AIJ survey, although our study underestimated the heavily damaged buildings. Moreover, in the northwestern part not covered by the AIJ survey, we found that non-negligible damage probabilities would have been caused by the mainshock. In several cells in the northwestern and southwestern parts, the DPs were clearly greater than those of the AIJ survey. This may be because the buildings were not wooden or were newly built (after the building code was modified in 2000); both factors were not considered in this study. Soil liquefaction may also have reduced the damage to houses in the area near the Akitsu River. We will study these aspects further in the future.

4. The results indicate that the presented building damage evaluation method can be used to estimate the DP distributions of other areas, as long as the necessary building information is obtained in a similar manner. We believe that the detailed estimated ground motions can serve as a basis for dynamic analyses of structures for quantitative damage prediction. This method allows the building responses of a target area to be estimated precisely. We emphasize that the construction period of buildings and the soil conditions must be considered when evaluating the safety of a building.

Intellectual property
We confirm that we have given due consideration to the protection of intellectual property associated with this work and that there are no impediments to publication, including the timing of publication, with respect to intellectual property. In so doing we confirm that we have followed the regulations of our institutions concerning intellectual property.

Ethics statement
We confirm that this work does not have any conflict with social ethics. All research data of this study have been approved by the relevant departments.
10,063.2
2021-05-13T00:00:00.000
[ "Engineering" ]
TopEnzyme: a framework and database for structural coverage of the functional enzyme space

Abstract

Motivation: TopEnzyme is a database of structural enzyme models created with TopModel and is linked to the SWISS-MODEL repository and the AlphaFold Protein Structure Database to provide an overview of the structural coverage of the functional enzyme space for over 200 000 enzyme models. It allows the user to quickly obtain representative structural models for 60% of all known enzyme functions.

Results: We assessed the models with TopScore and contributed 9039 good-quality and 1297 high-quality structures. Furthermore, we compared these models to AlphaFold2 models with TopScore and found that the TopScore differs only by 0.04 on average in favor of AlphaFold2. We tested TopModel and AlphaFold2 for targets not seen in the respective training databases and found that both methods create qualitatively similar structures. When no experimental structures are available, this database will facilitate quick access to structural models across the currently most extensive structural coverage of the functional enzyme space within Swiss-Prot.

Availability and implementation: We provide a full web interface to the database at https://cpclab.uni-duesseldorf.de/topenzyme/.

Background

Recent developments in high-throughput sequencing methods led to a massive increase in sequence information. Databases such as the UniProtKB (The UniProt Consortium 2021) contain over 225 000 000 sequence records, of which over 550 000 are manually annotated and reviewed in Swiss-Prot. In contrast, the Protein Data Bank (PDB) (Berman et al. 2003), the worldwide repository of information about the 3D structures of biomolecules, contained 185 539 crystal structures at the end of 2021, of which many are redundant structures, and not all are enzymes. Generally, the topology (fold) of an enzyme is thought to be the major determinant of its function (Hegyi and Gerstein 1999; Orengo et al. 1999). Currently, enzyme function prediction methods often use structures from the PDB for the training data set. However, this can lead to biases in prediction from the protein topology, especially for proteins with a similar topology but a different function, such as TIM-barrels (Nagano et al. 2002) and Rossmann folds (Medvedev et al. 2019). With recent improvements in protein structure prediction methods (Mulnaes and Gohlke 2018; Mulnaes et al. 2020; Baek et al. 2021; Jumper et al. 2021), the availability of high-quality structural models has increased (Varadi et al. 2022). Such structural models will contribute to better coverage and balance of the structural enzyme space. Currently used databases that categorize structural relationships are, among others, SCOP2 (Andreeva et al. 2014) (Structural Classification of Proteins), CATH (Sillitoe et al. 2021) (CATH Protein Structure Classification Database), and ECOD (Cheng et al. 2014) (Evolutionary Classification of Protein Domains). These databases provide a detailed and comprehensive description of the structural evolutionary relationships between proteins whose 3D structures have been deposited in the PDB. Although these databases provide information on the structural characteristics of proteins and their related functions, further analyses using database-specific classifiers are required to obtain the structural information related to function.
While CATH provides FunFams (functional families), these are not based on the enzymatic functions as classified by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology (IUBMB) (https://iubmb.org/). The IUBMB currently curates the list of Enzyme Commission (EC) numbers. To our knowledge, only the Enzyme Structure Database (ESD) and IntEnz databases from the EMBL-EBI relate structures to the enzyme classification. However, the ESD has not been updated since 2018 and does not cover the newest enzyme class, translocases. The IntEnz database contains no structural information on translocases, as it is based on the non-updated ESD. Furthermore, both databases do not contain any structural information obtained from modeling sources such as the AlphaFold Protein Structure Database (Varadi et al. 2022) (AlphaFold DB) or the SWISS-MODEL repository (Bienert et al. 2017). In this study, we introduce the database TopEnzyme, in which 15 500 representative structural models are categorized by enzyme classification numbers for the currently largest structural coverage of the functional enzyme space in the UniProtKB/Swiss-Prot. TopEnzyme, which includes additional information from the SWISS-MODEL repository and the AlphaFold DB, provides a comprehensive overview and facilitates access to structures associated with specific enzyme functions. When we started the generation of TopEnzyme, only 22% of the structural enzyme space was covered with respect to the available sequence information in the PDB. Using our deep learning- and template-based software TopModel (Mulnaes et al. 2020), we generated structural models of 10 125 enzyme domains covering 4758 different folds, increasing the coverage to 35%. With the release of AlphaFold2 (Jumper et al. 2021) and its goal to model the full UniProtKB/Swiss-Prot (Varadi et al. 2022), the current structural coverage of the functional enzyme space is at 60% across the SWISS-MODEL repository, TopEnzyme, and the AlphaFold DB, covering all available sequences with EC annotation in the manually curated UniProtKB/Swiss-Prot. The recently released AlphaFold2 structures for the unreviewed UniProtKB/TrEMBL are not included in this investigation. We made use of the availability of two complementary protein structure prediction methods to mutually validate structural models and to provide a comparison of the structural quality for 2419 models. Construction and content Using ExpasyEnzyme (Bairoch 2000) (accessed on 12 May 2022), we obtained a complete list of UniprotAC identifiers for 241 125 sequences with an enzyme function annotation according to EC numbers from Swiss-Prot. The first three levels of an EC number represent the main-, sub-, and subsub-class functions, while the fourth level is the specific enzyme function designation. For example, the small monomeric GTPase with designation 3.6.5.2 is a hydrolase (3) that acts on acid anhydrides (6), specifically on guanosine triphosphate (GTP), to facilitate cellular and subcellular movements (5). In total, we find 252 subsub-classes spanning 4926 unique designations with available sequence data. For many unique designations, sequence information is not available in the UniProtKB/Swiss-Prot. Using MMseqs2 (Steinegger and Söding 2017), we clustered the obtained sequences with an identity cut-off of 30%, such that each cluster represents a homologous cluster in the enzyme fold space (Rost 1999; Koehl and Levitt 2002; Pearson 2013).
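To make the bookkeeping concrete, the following is a minimal Python sketch of how EC annotations can be split into the four-level hierarchy and grouped before clustering. It is illustrative only, not the actual TopEnzyme pipeline; the record layout and accession list are assumptions.

```python
from collections import defaultdict

def parse_ec(ec: str) -> tuple:
    """Split an EC number such as '3.6.5.2' into (main, sub, subsub, designation)."""
    parts = ec.split(".")
    if len(parts) != 4:
        raise ValueError(f"not a four-level EC number: {ec}")
    return tuple(parts)

# (UniprotAC, EC number) pairs, e.g. read from an ExpasyEnzyme export (hypothetical data)
records = [("P01112", "3.6.5.2"), ("P62826", "3.6.5.2"), ("P00558", "2.7.2.3")]

by_subsubclass = defaultdict(list)   # 252 subsub-classes in this study
by_designation = defaultdict(list)   # 4926 unique designations in this study
for acc, ec in records:
    main, sub, subsub, _ = parse_ec(ec)
    by_subsubclass[f"{main}.{sub}.{subsub}"].append(acc)
    by_designation[ec].append(acc)

# The grouped sequences would then be clustered at 30% identity, e.g. with
# MMseqs2: mmseqs easy-cluster sequences.fasta clusterRes tmp --min-seq-id 0.3
print(dict(by_subsubclass))
```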
When a cluster contains more than one subsub-class, we split this cluster into smaller clusters such that we identify a representative for each subsub-class function. For each cluster, we aimed at modeling the representative with TopModel, using templates with a sequence identity > 30% and a sequence coverage > 80% (Data S1). The refinement procedure in TopModel was skipped, as the strict restraints set by the chosen templates should provide structures of sufficient quality while keeping computational costs at a feasible level. This was confirmed for 70 structural models, 10 randomly selected from each enzyme main class, which were evaluated as to the effect of the refinement procedure on the TopScore (Mulnaes and Gohlke 2018) compared to models without refinement; TopScore is a meta Model Quality Assessment Program (MQAP) that uses deep neural networks to combine scores from 15 different primary predictors to predict the quality of protein structural models, and it correlates highly with the local superposition-free score lDDT (Mariani et al. 2013). The refinement improved the models by a TopScore of only 0.06 on average (Supplementary Fig. S1). TopModel allows for manual template selection, e.g. offering the ability to use templates with bound ligands. However, by restricting the possible pool of templates, this might limit the availability of templates with sufficient quality. Thus, we opted to use all available templates to create more enzyme structural models. In the case of multi-domain enzymes for which template information is missing for one or more noncatalytic domains, we remove unmodeled regions according to the following criteria (a sketch of this filter follows below): (i) The region must be at least ten residues long. (ii) It must not contain runs of secondary structure elements (α-helix or β-strand) longer than five residues. (iii) It must have a median relative solvent-accessible surface area larger than 0.40. (iv) It must have a median contact density smaller than four contacts; this prevents the removal of loops close to the binding site. (v) We only remove unmodeled regions from the C- or N-terminus, to keep loops within the modeled domain(s). In the case of multi-domain enzymes for which the template contains information on multiple domains, we often model the complete structure.
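A sketch of how these five criteria might be encoded is given below. The per-residue annotations and field names are assumptions for illustration, not the published pipeline internals.

```python
from dataclasses import dataclass
from statistics import median
from typing import List

@dataclass
class Residue:
    rel_sasa: float    # relative solvent-accessible surface area
    n_contacts: int    # residue-residue contact count

@dataclass
class Region:
    start: int                 # first residue index of the unmodeled region
    end: int                   # last residue index
    residues: List[Residue]
    max_ss_run: int            # longest alpha-helix/beta-strand run inside the region

def removable(region: Region, n_residues: int) -> bool:
    """Apply criteria (i)-(v) for trimming an unmodeled region."""
    if len(region.residues) < 10:                              # (i) at least ten residues
        return False
    if region.max_ss_run > 5:                                  # (ii) no helix/strand run > 5
        return False
    if median(r.rel_sasa for r in region.residues) <= 0.40:    # (iii) median rel. SASA > 0.40
        return False
    if median(r.n_contacts for r in region.residues) >= 4:     # (iv) median contact density < 4
        return False
    # (v) only terminal regions are trimmed, keeping internal loops
    return region.start == 0 or region.end == n_residues - 1
```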
We created an interactive treemap of the resulting EC space and associated structural models of enzymes (Fig. 1, https://cpclab.uni-duesseldorf.de/topenzyme/). Each section is mapped by the main-, sub-, and subsub-class, as well as designation EC labels. The size of each section represents the proportion of the EC space as given by the number of representative sequences with this EC number. The color represents the average score of the structural representatives: depending on the model source selected, we provide the pLDDT score (Jumper et al. 2021), a confidence measure of predicted structural quality, for models from AlphaFold2; 1 − TopScore, a meta-MQAP prediction of structural quality, for models from TopModel; and QMEAN6, a linear combination of six statistical potential terms related to the Z-score of X-ray structures (Benkert et al. 2011), for models from the SWISS-MODEL repository. In all three cases, values close to 1 indicate high-quality structures and values close to 0 the opposite. Above the treemap, we provide a search functionality for EC numbers and IUBMB names, together with two filters, for organisms and for keywords. A representative table below the treemap is updated based on the search, filter, and map navigation input of the user. Here, the representatives for the current selection are shown with known EC numbers and links to the PDB, AlphaFold2, TopModel, and SWISS-MODEL models. By clicking on the UniprotAC identifier of a representative, a member table opens containing the same information for the members of that representative's cluster. By clicking on the UniprotAC identifier of a member, an information table opens containing links to the UniProt, ExplorEnz, KEGG, Brenda, and Expasy databases. Furthermore, a summary of the function with experimental evidence identifiers is given. We show the organism and strain from which the sequence is obtained and keywords for finding similar enzymes within the database. We also added a home page describing the user interface and a contact page for any inquiries or questions regarding the TopEnzyme database. The meta-data required to create this treemap is available as a csv file in the Supporting Information Data S2. Using TopScore, we analyzed the quality of the predicted models. Ninety percent of these models are of good quality (TopScore < 0.4), equivalent to 9039 structures spanning 233 subsub-classes (Fig. 2a and e). As to secondary structures, TopModel works better for predicting α-helices and β-sheets than loop conformations (Fig. 2b). This effect might be caused by bypassing the refinement stage (Supplementary Fig. S1). As expected, the quality of our models increases with the sequence similarity of the template, except for models with a sequence similarity > 90% (Fig. 2c). Overall, the structural quality of the binding site is similar to that of the entire enzyme. However, we see a larger spread in per-residue scores around the binding site (Fig. 2d): often, secondary structure features in the binding site are of high quality, whereas loop regions contribute to the lower-scoring residues. The exact structure of these loop regions may be less relevant for characterizing the dynamic nature of the protein binding site, as these loops have often been shown to move to accommodate the space required for binding the ligand. The spread in binding site model quality might also be due to differences in the structural completeness of the binding sites, e.g. in the case of multidomain enzymes or allosteric enzymes where activation depends on domain- and ligand-binding interactions. Using PocketAnalyzerPCA (Craig et al. 2011), we determined the average degree of buriedness (DOB) of our binding sites. The majority (77%) of the binding sites are characterized as inside a domain and, hence, are considered complete (Fig. 2f). For the binding sites on the surface, we distinguish two types: surface and surface (noncomplete). The latter fraction (15% of all binding sites) is determined by mapping the binding site location to the homologous template and identifying the presence of unmodeled complementary surfaces from global stoichiometry information in the template. In contrast, for the surface fraction (8%), we could not find structural information for complementary interfaces. This does not mean that there is no complementary surface present, just that there is no available information. Each structural model contains the residue-wise TopScore in the B-factor column of the PDB file. This allows the user to investigate the model confidence for specific regions (Fig. 2g).
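Since the per-residue TopScore travels in the B-factor column, it can be read back with standard tooling. A sketch using Biopython follows; the model file name is an assumption.

```python
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("model", "Q9CQ28_topmodel.pdb")  # hypothetical file

# Collect the residue-wise TopScore stored in the B-factor column (lower is better)
per_residue = {}
for residue in structure.get_residues():
    if "CA" in residue:                      # use the C-alpha atom's B-factor
        per_residue[residue.get_id()[1]] = residue["CA"].get_bfactor()

# Flag stretches outside the good-quality cut-off used in the text
low_confidence = [pos for pos, s in per_residue.items() if s >= 0.4]
print(f"{len(low_confidence)} of {len(per_residue)} residues have TopScore >= 0.4")
```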
Utility and discussion The intended use of the database is to facilitate research connecting enzyme structure and biochemical function. The database serves this aim with its framework of covering EC space with structural models and its easy applicability for users with different levels of expertise (see below). By using familiar identifiers, UniprotACs and EC numbers, and by linking to other databases, such as the AlphaFold DB and the SWISS-MODEL repository as well as ExplorEnz, KEGG, BRENDA, and Expasy, we provide a framework for comprehensive structural enzyme information linked to enzymatic function. We envision using generated models over crystal structures for prediction methods for several reasons. First, some structural noise could make a machine learning method more robust to uncertain information (Mahajan et al. 2018; Plappert et al. 2018). Second, proteins are not rigid objects; having a uniform way of generating structural models should be advantageous compared to using experimental structures from different sources or binding states. Third, databases of predicted structural models cover a larger functional enzyme space. Last, this allows for extensibility to information from, e.g., metagenomic approaches, where no structural information is available but sequences are deposited in the UniProtKB. Compared to current databases such as SCOP2 and CATH, our focus is on enzymatic function linked to available enzyme structural models. TopEnzyme starts from the enzyme function categorization and provides available structural models from the respective fold space, with easy access from the largest collection of generated enzyme models. There are two methods to interact with the database: (i) For scientists interested in large-scale analysis, we provide a csv file containing all the meta-data for each UniprotAC identifier. This allows users to download the latest release and incorporate the information into their workflows (an example query is sketched below). (ii) For scientists interested in a few cases with specific enzyme functions, we provide a visualization in the form of the treemap (Fig. 1) hosted at https://cpclab.uni-duesseldorf.de/topenzyme/. The treemap allows users to browse enzymes with specific functions and provides a simple download method to obtain the representative models from the linked databases. In our own project, we used TopEnzyme to quickly obtain representative enzyme structures for building large datasets for a deep learning-based EC number classification.
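For the large-scale route (i), a short pandas sketch of the kind of query the metadata csv enables is shown here; the column names are assumptions, so the actual header of the downloaded file should be consulted.

```python
import pandas as pd

meta = pd.read_csv("topenzyme_metadata.csv")  # Data S2, hypothetical local copy

# e.g. all hydrolase representatives acting on GTP (EC 3.6.5.*)
gtpases = meta[meta["ec_number"].astype(str).str.startswith("3.6.5.")]

# narrow further by organism keyword
ecoli = gtpases[gtpases["organism"].str.contains("Escherichia coli", na=False)]

print(ecoli[["uniprot_ac", "ec_number", "model_source"]].head())
```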
Figure 1 Enzyme map presenting the coverage of EC space with structural models. A screenshot of the interactively explorable enzyme map available at https://cpclab.uni-duesseldorf.de/topenzyme/. The color represents the structural scoring for representative models obtained from each database. The rectangle size represents the number of representative structures for the specific function. The treemap is ordered according to the EC classification. By clicking on an area, the next subclass enlarges and shows the information for that enzymatic function. The user can select between different database sources (PDB, TopModel, SWISS-MODEL repository, and AlphaFold2) for model selection. A search bar for EC classes and IUBMB names is provided, along with a filter for organisms and keywords. Tables below the treemap display all available UniprotAC representatives with EC numbers and links to the PDB, TopModel, SWISS-MODEL repository, and AlphaFold2 models. Clicking the UniprotAC identifier in the representative table opens the member table and shows all available models for cluster members. By clicking on the UniprotAC identifier in the member table, the information panel is opened, which contains links to the UniProt, ExplorEnz, KEGG, BRENDA, and Expasy databases. Furthermore, a summary of the function with experimental evidence identifiers is given, together with the organism and strain from which the sequence is obtained and keywords for finding similar enzymes within the database. We plan to update TopEnzyme when there is a new major release of databases of enzyme structural models or structural information for previously unlinked EC classes. As to new features, we intend to integrate a more exhaustive structural data collection from linked databases, as, currently, we collect only the best-ranked structural model from each method, while some methods produce ensembles of models. Furthermore, we will improve the search options to include a list of enzymatic functions to move the treemap to the selected function and include structural visualization when selecting a treemap node. Comparison to AlphaFold2 To obtain further insights into the quality of our structural models, we compared a proportion of our enzyme structural models with AlphaFold2 (Jumper et al. 2021) structures from the AlphaFold DB for the same enzymes. For fairness, we removed disordered domains from the AlphaFold2 structures in the same way as from ours. The current AlphaFold2 implementation only folds single domains, although other implementations based on AlphaFold2 have been described that can predict multimers (Evans et al. 2021; Mirdita et al. 2021). We randomly selected 25% in each main class of our database for comparison to AlphaFold2 (Fig. 3). While the majority of TopEnzyme structures are available in the AlphaFold DB, TopEnzyme contains a few structures not yet modeled in the AlphaFold DB. We compared TopScore to pLDDT for both regimes (Supplementary Fig. S2). As TopScore and pLDDT predict (1 − lDDT) (Mulnaes and Gohlke 2018) and lDDT (Jumper et al. 2021), respectively, both correlate significantly and fairly (P < .001, R² = 0.59). Remarkably, when comparing computational AlphaFold2 models to experimental enzyme structures unseen by both methods (Supplementary Fig. S3), TopScore underestimates and pLDDT overestimates the true lDDT against the X-ray structure. We investigated the majority in the good-quality regime (TopScore < 0.4; n = 1935) and a smaller number in the poor-quality regime (TopScore ≥ 0.4; n = 484) to obtain a comprehensive overview.
Figure 2 Quality assessment of enzyme models generated with TopModel. (a) TopScore for all (n = 9947) models predicted with TopModel. All models are generated with a template identity > 30% and a coverage > 80%. The lines indicate the cut-offs for TopScore values associated with high-quality (TopScore < 0.2; n = 1297) and good-quality (TopScore < 0.4; n = 9039) structures. TopScore values are bounded between 0 and 1; a lower TopScore is better. (b) Residue-wise TopScore for all model residues (n = 3 191 133) grouped by structural features. The continuous line represents the β-sheet residues (n = 629 778), the dash-dot line the α-helix residues (n = 1 961 309), and the dotted line the loop residues (n = 600 039). (c) The distribution of model quality based on the sequence similarity to the template used. The whiskers show the full range of TopScore values. The median is shown by the horizontal notch. (d) TopScore values for all (n = 3 191 133) model residues, clustered by the distance from the binding site. The boxplot properties are the same as in (c). (e) Coverage of the generated structures by main enzyme class. The horizontal bars are separated into low- (TopScore ≥ 0.4), good- (0.2 ≤ TopScore < 0.4), and high-quality (TopScore < 0.2) structures. (f) Fraction of generated structures categorized by main enzyme class with binding interfaces within a domain ('buried'), and binding interfaces on the surface of a domain with ('surface') and without ['surface (noncomplete)'] known complementary domain(s). (g) An example structure (UniprotAC: Q9CQ28) highlighting the structural per-residue quality as judged by TopScore (see color scale; lower is better). The image was generated using PyMOL 2.3.0. In the good-quality regime, AlphaFold2 performs slightly better than TopModel as judged by TopScore values computed for each model pair, which is consistent across all enzyme main classes. However, in the poor-quality regime, TopModel creates better models than AlphaFold2 for most enzyme classes except transferases and hydrolases. This result suggests that for some target sequences a model created from one or more homologous templates might be better. Comparison to experimental structures Besides comparing both structure prediction methods, we also compare both methods to recently released X-ray crystallography structures in the PDB (Fig. 4). These structures were chosen such that they were not part of the training data for AlphaFold2 or TopModel. In general, both methods produce models of comparable quality, with AlphaFold2 models having an average TopScore better by 0.04. In NADH-ubiquinone oxidoreductase (Fig. 4a), the membrane domain is modeled well by both methods, except for the N-terminus, which is uncharacterized in PDB ID 7A23. In the case of TopModel, this part is modeled as a disordered region, which is removed by postprocessing. AlphaFold2 predicts this region as an α-helical structure, albeit with low confidence. In both cases, the N-termini stick straight through the binding site for cardiolipin in the crystal structure. This site is recognized as important for the stability of the protein domain (Soufari et al. 2020). In Salicylate 5-hydroxylase (Fig. 4b), both models were predicted with structural features of excellent quality. However, the loops between the β-strands and the random loops deviate from the crystal structure, which lowers the global score. For the yeast TFIIK (Kin28/Ccl1/Tfb3) complex (Fig. 4c), we focus on the specific N-lobe region of the enzyme (van Eeuwen et al. 2021). The difference in TopScore values between both methods is due to the improved structural features in that domain of the AlphaFold model. However, both methods create a similar deviation in the N-lobe region in that they modeled a larger β-sheet. Note that the crystal structure (PDB ID 7KUE) was refined in conjunction with the enzyme CDK2 (PDB ID 1FIN), which is present in the training databases of both TopModel and AlphaFold2. If we compare the β-sheet region among the models and CDK2, the models agree perfectly with CDK2 (van Eeuwen et al. 2021), where this β-sheet region is much larger. Likely, both methods learned to model this section according to CDK2 instead of predicting the smaller β-sheet seen in Kin28. The TopModel model of Gamma-glutamyl-gamma-aminobutyrate hydrolase (Fig. 4d) is an example of insufficient quality. Even though most of the structural features are in good agreement, a loop region should have been modeled as a β-sheet. Finally, for both Phosphotyrosine protein phosphatase (Fig. 4e) and N-α-acetyltransferase 30 (Fig. 4f), the models generated by either method are very good.
The TopScore values are low (i.e., good), and the structural features are in excellent agreement with the PDB structures (PDB ID 7CUY and PDB ID 7L1K). Furthermore, we take a detailed look at three binding interfaces for recently released X-ray crystallography structures in the PDB (Fig. 5). Only the structural features and loops close to the binding sites are visualized to improve clarity. In the matrix arm of the plant mitochondrial respiratory complex I (PDB ID 7A23, Fig. 5a), FeS and SF4 clusters are important for the proton pump mechanism in ubiquinone reductase (Soufari et al. 2020; Parey et al. 2021). For both models, the structural details around the FeS and SF4 clusters are nearly identical to the crystal structure. In the Mycobacterium tuberculosis protein FadB2 (PDB ID 6HRD, Fig. 5b), the two Rossmann folds, β1-α1-β2 and β4-α4-β5, and the α7-α11 region are very well modeled (Cox et al. 2019). The α2 helix in the TopModel model points slightly away from coenzyme A. Further investigation of the N-lobe in yeast TFIIK reveals that for the TopModel model the flexible linker sticks into the ADP binding site and the activation loop deviates from the crystal structure (Fig. 5c). Yet, predicting these residues exactly as in the crystal structure might be less crucial, as they have been shown to be the least stable residues of the enzyme (Luque and Freire 2000). To conclude, both TopModel and AlphaFold2 can provide high-quality enzyme structural models with, in general, very good structural features of the binding sites. In some cases, TopModel falls short in modeling loops when compared to static crystal structures. However, the exact structure of these loop regions may often be less important due to the dynamic nature of proteins. Conclusions We have developed TopEnzyme, a database and framework for the structural coverage of functional enzyme space. By combining the TopEnzyme, SWISS-MODEL repository, and AlphaFold DB databases, we provide the currently largest collection of enzyme structural models classified according to EC numbers in the UniProtKB/Swiss-Prot. TopEnzyme provides easy access to this collection with two methods: (i) a csv file containing all the metadata required for large-scale analyses; (ii) a treemap hosted at https://cpclab.uni-duesseldorf.de/topenzyme/ that allows the user to investigate specific enzyme functions. With our in-house method TopModel, we added 9039 good-quality structural models, including 1297 of high quality. We compared a subset of these structures with AlphaFold2 models; on average, the TopScore between both models differs by only 0.04. Both methods can provide models with excellent structural features compared to experimental structures, although TopModel models sometimes differ in loop regions. Figure 3 Comparison of structural models generated by TopModel and AlphaFold2. The models generated by TopModel or AlphaFold2 were scored with TopScore. For TopModel, all models are generated with a sequence similarity of > 30% and a coverage of > 80% with respect to the target. The AlphaFold2 models are obtained from the AlphaFold DB, alphafold.ebi.ac.uk. We compared 20% of the models generated by TopModel in the good (TopScore < 0.4; n = 1935) and 5% of the models in the bad (TopScore ≥ 0.4; n = 484) regime for each enzyme main class against AlphaFold2 structures. The diagonal line represents an equal score between the models. Data points above the diagonal favor AlphaFold2 structures, and data points below the diagonal favor TopModel structures.
The blue area (TopScore < 0.4) represents the score range for good-quality models, and the green area (TopScore < 0.2) represents the score range for high-quality models. The panels around the figure show the same content but separated by enzyme main class. Figure 4 Comparison to experimental structures. An overview of six crystal structures (gray) obtained from the Protein Data Bank compared to AlphaFold2 (red) and TopModel (blue) models. Below the structures, the TopScore for the TopModel and AlphaFold2 models is shown. The arrows correspond to structural features discussed in the text. All structures have been deposited recently and are not present in the training databases for either method. (a) NADH-ubiquinone oxidoreductase (PDB ID 7A23) in the plant mitochondrial respiratory complex I. In the AlphaFold2 model, an α-helix incorrectly protrudes in the direction of the complementary subunit. (b) Salicylate 5-hydroxylase (PDB ID 7C8Z). Both methods produce a good-quality structure. (c) Yeast TFIIK (Kin28/Ccl1/Tfb3) complex (PDB ID 7KUE). Both methods predict a larger β-sheet region that is characterized as mostly coil in the PDB file. (d) Gamma-glutamyl-gamma-aminobutyrate hydrolase (PDB ID 6VTV). The TopModel model mispredicts part of the fold and creates a random coil region instead of a β-sheet. (e) Phosphotyrosine protein phosphatase 1 (PDB ID 7CUY). In the TopModel model, a small random coil region diverges from the PDB structure and the AlphaFold2 model. (f) N-α-acetyltransferase 30 (PDB ID 7L1K). Both methods create a good-quality structure. Figure 5 Comparison of binding sites. A cut-out of the binding site for three crystal structures (gray) obtained from the Protein Data Bank and compared to AlphaFold2 (red) and TopModel (blue) models with the corresponding bound ligands. The arrows correspond to structural features discussed in the text. All structures have been deposited recently and are not present in the training databases for either method. Only the structural features close to the binding sites are visualized to improve clarity. (a) The matrix arm of plant mitochondrial respiratory complex I (PDB ID 7A23). In both models, the structures around the FeS and SF4 ligands are of excellent quality. (b) Mycobacterium tuberculosis protein FadB2 (PDB ID 6HRD). Both methods model the two Rossmann folds, β1-α1-β2 and β4-α4-β5, and the α7-α11 region very well. The TopModel model deviates slightly in the α2 helix. (c) Yeast TFIIK (Kin28/Ccl1/Tfb3) complex (PDB ID 7KUE). AlphaFold2 predicts an excellent model, while TopModel places the N-lobe through the ADP binding region. This loop conformation represents another state found in the kinase structure template PDB ID 4ZSG in complex with an inhibitor. With this collection of enzyme structural models charted on functional space, researchers have access to a comprehensive and structured dataset, which should help to facilitate structure-guided investigations of specific enzymes and to develop predictive models for enzyme characteristics.
6,560.4
2022-06-16T00:00:00.000
[ "Computer Science", "Biology" ]
Collisional Growth Within the Solar System's Primordial Planetesimal Disk and the Timing of the Giant Planet Instability The large-scale structure of the Solar System has been shaped by a transient dynamical instability that may have been triggered by the interaction of the giant planets with a massive primordial disk of icy debris. In this work, we investigate the conditions under which this primordial disk could have coalesced into planets using analytic and numerical calculations. In particular, we perform numerical simulations of the Solar System's early dynamical evolution that account for the viscous stirring and collisional damping within the disk. We demonstrate that if collisional damping had been sufficient to maintain a temperate velocity dispersion, Earth-mass trans-Neptunian planets could have emerged within a timescale of 10 Myr. Therefore, our results favor a scenario wherein the dynamical instability of the outer Solar System began immediately upon the dissipation of the gaseous nebula, to avoid the overproduction of Earth-mass planets in the outer Solar System. INTRODUCTION The emergent picture of the evolution of the early Solar System is an instability-driven scenario. Tsiganis et al. (2005) proposed what is commonly referred to as the "Nice model," in which the giant planets formed on circular, coplanar orbits with an accompanying planetesimal disk located between ∼ 15 − 35 au. Subsequent interactions with this primordial disk triggered a dynamical instability. The Nice model reproduces a variety of characteristics of the present-day Solar System, including the current orbits of the giant planets, the inclination distribution of the co-orbital Jupiter Trojans (Nesvorný et al. 2013), and the existence and structure of the Kuiper belt (Levison et al. 2008; Nesvorný & Morbidelli 2012; Nesvorný 2015; Gomes et al. 2018). For a recent review of the early evolution of the Solar System, we refer the reader to . Although the Nice model successfully accounts for the large-scale structure of the Solar System, numerous details remain elusive. In particular, the precise timing of the instability is somewhat unconstrained. A promising avenue to constrain the timing of the instability was to relate it to the Late Heavy Bombardment (LHB) of the Moon. The petrological record of lunar craters implies that the Moon experienced a bombardment flux of planetesimals roughly ∼ 700 Myr after the planets formed (Hartmann et al. 2000). It is uncertain whether this bombardment occurred as a cataclysmic spike or as one that slowly decayed over time. This increase in cratering events could be caused by an increase in the bombardment rate (Tera et al. 1974; Ryder 1990, 2002). Gomes et al. (2005) demonstrated that the migration of the giant planets would naturally produce a sudden flux of planetesimals into the inner Solar System, which would explain the LHB. Levison et al. (2011), however, demonstrated that the exchange of energy between the giant planets and the planetesimal disk would only explain the timing of the LHB if the disk's inner edge was sufficiently removed from the orbit of Neptune. Alternatively, the LHB could also be explained by impacts from leftover planetesimals at the tail end of terrestrial planet accretion (Hartmann 1975; Neukum et al. 2001; Hartmann 2003).
In a set of recent studies, numerous authors have proposed that the Nice model instability and the LHB are unrelated and have argued for an early instability that started < 100 Myr after the formation of the Sun (de Sousa et al. 2020; Nesvorný et al. 2021). Nesvorný et al. (2018) argued that the existence of the binary Jupiter Trojan (617) Patroclus-Menoetius (Grav et al. 2011; Buie et al. 2015) implies that the primordial disk was dissipated by migrating planets within ∼ 100 Myr of the formation of the Sun. Beyond these concerns, aspects of the terrestrial planets are more readily reproduced in the early instability scenario. In particular, the survival of the terrestrial planets is more likely in the early instability model (Clement et al. 2018, 2019). In this work, we study a related but distinct aspect of the instability-driven scenario: the evolution of the planetesimal disk. For computational purposes, most studies ignore self-gravity (and thus the possibility of growth generally) in the planetesimal swarm. Accordingly, the possibility that the planetesimal disk could coalesce into planets has not been explored in detail. In this work, we ask a simple question: if the Sun was encircled by ∼ 20 M⊕ of debris for millions of years, could this debris have formed planets? We answer this question through analytic estimates and direct N-body simulations. The remainder of this paper is organized as follows. In §2, we consider estimates of planetesimal growth rates and the excitation of their velocity dispersion on analytic grounds. In §3, we report the results of our numerical experiments, and we discuss the implications of our results in §4. ANALYTIC ESTIMATES In this section, we consider accretion within the Solar System's planetesimal disk on analytic grounds. In particular, the goal of the following analysis is to obtain an estimate for the timescale of mass growth that is expected to unfold in the primordial trans-Neptunian region of the Solar System, and to highlight its connection to the velocity dispersion. The characteristics of the debris belt that drives planetary migration in the Nice model are readily summarized. The disk is typically assumed to be comprised of solids with a mass M_disk ∼ 20 M⊕, and to extend from r_0 ∼ 15 au to r_out ∼ 35 au. Moreover, the disk is envisioned to emerge from the Solar nebula in a dynamically cold state, meaning that the planetesimals that comprise this belt originate on roughly circular and coplanar orbits. For simplicity, here we adopt the surface density profile proposed in Mestel (1963), with Σ = Σ_0 (r_0/r), where Σ_0 is the planetesimal surface density at the disk's inner edge and r is the radial distance. For simplicity, we assume that the velocity dispersion of the planetesimal belt is initially considerably smaller than the planetesimals' escape velocity, such that the Safronov number, Θ = v_esc² / ⟨v²⟩, satisfies Θ ≫ 1. Here, v_esc and ⟨v⟩ are the escape velocity and velocity dispersion of the planetesimals. The latter assumption is optimistic and implies that the initial growth is predominantly facilitated by gravitational focusing (Safronov 1972). Generically speaking, planetesimal growth can proceed via pebble accretion (Ormel et al. 2010; Lambrechts & Johansen 2012) or through pairwise collisions (Lissauer 1993). Although the former process is nominally faster, it is facilitated by the existence of the gaseous disk and does not operate in the absence of hydrodynamical drag forces.
In this work, we are specifically concerned with post-nebular growth of planetesimals. Therefore, we ignore the effects of pebble accretion altogether and focus entirely on mutual-impact-driven assembly. The collisional mass-accretion rate of planetesimals, Ṁ, can be estimated via an n-σ-v relation. The product of the typical planetesimal mass, m, and number density, n, is given by m n ∼ Σ/h, where h is the characteristic scale height of the planetesimal disk. The velocity dispersion can be written as ⟨v⟩ ∼ v_kep(r) (h/r), where v_kep(r) is the Keplerian velocity at a given radial distance, r. The characteristic accretion rate, Ṁ, of a planetesimal is then given by Ṁ ∼ 4π ρ R² (dR/dt) ∼ π R² Σ Ω (1 + Θ), (1) where ρ is the density of the planetesimal and R its radius. The gravitational focusing is accounted for by the (1 + Θ) enhancement of the collisional cross-section. The derivation of Equation (1) is analogous to the formalism developed in Armitage (2010). This expression yields a constant dR/dt, because M ∝ t³. It can be recast as the characteristic timescale, τ_accr, that is necessary for the radius to increase by some value ΔR, τ_accr ∼ ΔR/(dR/dt) ∼ 4 ρ ΔR / [Σ Ω (1 + Θ)]. (2) We can estimate the value of the surface density using M_disk ∼ 2π r_0 Σ_0 (r_out − r_0), where r_0 and r_out are the inner and outer disk radii, respectively. We obtain τ_accr ∼ 10¹⁰/Θ years for ΔR ∼ 10³ km, ρ ∼ 2 g/cc, and Θ ≫ 1. Equation (2) highlights the fact that significant growth within the planetesimal disk can take place on a ∼ 100 Myr timescale only if the disk remains dynamically cold throughout this time. The ratio of the escape velocity to the orbital velocity is on the order of v_esc/v_kep ∼ 0.01 for an R ∼ 100 km object orbiting at r_0. Therefore, the estimate above implies that sustained growth requires eccentricities and inclinations smaller than ∼ 1%. In reality, the velocity dispersion of the planetesimal disk is controlled by an assortment of factors, including self-gravitational (viscous) stirring, collisional damping, and the system's approach towards equipartition. To estimate the degree of orbital crossing that is expected to develop within the disk, we consider the competing effects of collisional damping and viscous stirring. For this order-of-magnitude calculation, we neglect planet-driven evolution and accretion, such that the disk is comprised of equal-mass bodies. In this regime, equipartition (which drives dynamical cooling of massive objects at the expense of dynamical heating of low-mass planetesimals) is inconsequential, and all objects are characterized by a common velocity dispersion. The evolution of the velocity dispersion follows Armitage (2010) and can be estimated as d⟨v²⟩/dt ∼ G² m Σ Ω ln(Λ) / ⟨v²⟩, (3) where m is the planetesimal mass and ln(Λ) ∼ 10 is the Coulomb logarithm. As above, the collisional timescale is given by the n-σ-v relation and takes the form t_col ∼ (n σ ⟨v⟩)⁻¹ ∼ ρ R / (Σ Ω). (4) In the absence of collisional damping, Equation (3) dictates that the velocity dispersion grows as ⟨v⟩ ∝ t^(1/4). For finite t_col, however, there exists an equilibrium solution. Noting the approximate relationship ⟨v⟩ ∼ e v_kep, this equilibrium corresponds to a characteristic eccentricity given by e ∼ (G² m ρ R ln Λ)^(1/4) / v_kep ∼ 0.01 (5) for an R ∼ 100 km body, which is the typical planetesimal size produced by the streaming instability (Youdin & Goodman 2005; Johansen et al. 2007). This expression provides a useful gauge for the effective dynamical temperature of the disk.
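These scalings are easy to check numerically. The following sketch evaluates Equations (2) and (5) for the disk parameters quoted above; all order-unity prefactors are dropped, so only the orders of magnitude are meaningful.

```python
import math

# Disk and planetesimal parameters from the text (cgs units)
G, M_sun = 6.674e-8, 1.989e33
M_disk = 20 * 5.972e27                 # 20 Earth masses, in grams
au = 1.496e13
r0, r_out = 15 * au, 35 * au
rho, lnLam = 2.0, 10.0                 # g/cc; Coulomb logarithm

# Mestel-like profile: M_disk ~ 2 pi r0 Sigma0 (r_out - r0)
Sigma0 = M_disk / (2 * math.pi * r0 * (r_out - r0))
Omega0 = math.sqrt(G * M_sun / r0**3)  # Keplerian frequency at the inner edge
v_kep = math.sqrt(G * M_sun / r0)

# Equation (2): accretion timescale for Delta R ~ 1000 km (multiply by 1/Theta)
dR = 1e8                               # cm
tau_accr = 4 * rho * dR / (Sigma0 * Omega0)
print(f"tau_accr ~ {tau_accr / 3.156e7:.1e} / Theta yr")   # ~1e10/Theta yr

# Equation (5): equilibrium eccentricity for an R ~ 100 km planetesimal
R = 1e7                                # cm
m = 4.0 / 3.0 * math.pi * rho * R**3
e_eq = (G**2 * m * rho * R * lnLam) ** 0.25 / v_kep
print(f"e_eq ~ {e_eq:.3f}")                                # ~0.01
```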
It is important to note that this estimate does not account for planetary perturbations (which will stir the planetesimal swarm further) or dynamical friction (which will facilitate accretion of larger embryos, as is typical of simulations of oligarchic growth). Generally, these processes cannot be quantified analytically. Since our analytical estimates are conservative and do not account for gravitational focusing, in the next section we use numerical simulations to model trans-Neptunian accretion in the primordial Solar System. NUMERICAL EXPERIMENTS To simulate accretion within the planetesimal disk, we used the mercury6 code (Chambers 1999). Our numerical experiments follow the standard version of the Nice model. Planets were initialized in a compact multi-resonant configuration (Batygin & Brown 2010), encircled by a 20 M⊕ disk of planetesimals. The disk's inner and outer edges were set to 15 au and 35 au, respectively. The calculations were performed using the hybrid Wisdom-Holman/Bulirsch-Stoer algorithm (Wisdom & Holman 1991). The time step, Δt, and accuracy parameter, ε, were set to Δt = 300 days and ε = 10⁻¹², respectively. We modeled the planetesimal disk itself with N_s = 1000 super-planetesimals (to represent the larger population of small planetesimals within the disk) and seeded it with N_b = 100 protoplanetary embryos (assumed to be fully formed at the start of the simulation and able to accrete mass). All planetesimal and embryo masses were set to common values at the beginning of the simulation, 10⁻⁸ and 10⁻⁷ M_⊙, with radii ∼ 2000 km and ∼ 5200 km for the fiducial small and large seeds, respectively. All particles were initialized with negligible eccentricities and inclinations. Nesvorný et al. (2020) presented a methodology to assign radii to superparticles such that numerical simulations have the correct collisional timescale. Here, we assume that each simulated protoplanetary embryo represents a fully formed, spherical particle with a bulk density of 1 g/cm³. While this is an appropriate approximation for the accreting protoplanetary embryos, it is not necessarily appropriate for the superparticles, which represent a larger number of planetesimals. We performed convergence tests with initial superparticle masses of ∼ 1/2 and ∼ 2× the fiducial mass, and verified that the results of the simulations did not sensitively depend on the initial radii of the particles. While the interactions among the embryos and planets were self-consistently modeled in a conventional N-body fashion, self-gravitational coupling among the planetesimals was neglected to conserve computational costs. All collisions were treated as perfect mergers. Despite being a necessary approximation for computational efficiency, the super-particle modeling scheme leads to a strongly over-excited velocity dispersion. This is because a vast population of undamped massive planetesimals generates much stronger gravitational scattering events than those that occurred in the real disk. This leads to an unphysical excitation of the distribution of the planetesimals' eccentricities and inclinations. To counteract this effect in our calculation, we mimicked the effects of collisional damping and dynamical friction by introducing a fictitious acceleration, a_damp, into the equations of motion at each timestep (Papaloizou & Larwood 2000), of the schematic form a_damp ∼ −(1/τ) [ 2 (v · r) r / r² + (v · k) k / |k|² ], (6) where k = (0, 0, z) is the vector corresponding to the z-component of the planetesimal's position, so that the second term damps the vertical velocity component.
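A schematic implementation of this damping term, as it might be applied inside an integrator step, is sketched below. It follows the reconstructed form of Equation (6); the factor of 2 on the radial term and the exact normalization are assumptions.

```python
import numpy as np

def damping_acceleration(r, v, tau):
    """Fictitious damping (after Papaloizou & Larwood 2000, schematic form).

    Leaves circular, planar orbits untouched (v.r = 0 and v_z = 0 give zero),
    but damps radial and vertical velocity components, i.e. eccentricity and
    inclination, on the timescale tau.
    """
    a_ecc = -2.0 * np.dot(v, r) / np.dot(r, r) * r / tau   # radial-velocity damping
    a_inc = -np.array([0.0, 0.0, v[2]]) / tau              # vertical-velocity damping
    return a_ecc + a_inc

# Example: slightly eccentric, slightly inclined orbit at ~15 au (cgs units)
r = np.array([2.24e14, 0.0, 1.0e12])
v = np.array([5.0e3, 7.7e5, 1.0e4])
tau = 5.0e4 * 3.156e7                                      # 50,000 yr in seconds
print(damping_acceleration(r, v, tau))
```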
Simply put, this acceleration does not affect objects that occupy circular and planar orbits, but damps the eccentricities and inclinations on a characteristic timescale, τ, if they develop. To calibrate the damping time in our numerical experiments, we tuned the value of τ until our simulation reproduced the viscous stirring-collisional damping equilibrium discussed in the previous section. We did this by removing the giant planets from the simulation and suppressing physical collisions within the disk, such that the planetesimals and embryos retain equal masses in perpetuity. Varying τ, we carried out each test integration for ∼ 10 damping times, and recovered the e ∝ τ^(1/4) relationship predicted by Equation (3). Accordingly, we found that a damping time of τ = 50,000 years yields e = 0.014, a value that is close to our analytic estimate (see Table 1 and the discussion in the final paragraph of this section). This result is consistent with the numerical results presented by Levison et al. (2011), who carried out detailed simulations of the early dynamical evolution of the Solar System and also found that viscous stirring yields a root-mean-square disk eccentricity of approximately 2%. Adjusting the gravitational potential length scale of each particle could provide an alternative solution for damping the over-excitation of the particles. With our numerical setup calibrated, we restored the giant planets into the simulation and evolved the Solar System, fully accounting for the growth of protoplanetary embryos through pairwise impacts. In Figure 1, we show a schematic of the initial conditions of, and relevant processes included in, the simulation. Intriguingly, our simulations indicate that if the planetary instability or rapid outward migration of Neptune does not ensue shortly after the dissipation of the protosolar nebula, then significant collisional growth of embryos unfolds on a ∼ 10 Myr timescale. A series of snapshots of the dynamical state of the outer Solar System attained within our fiducial simulation is shown in Figures 2 and 3. In some similarity with standard models of terrestrial planet accretion (Chambers 2001; Raymond et al. 2011), our numerical experiment shows how the initially circular and planar disk of planetesimals begins to coalesce; because of dynamical friction, growing embryos settle to the midplane, where their continued accretion is further aided by the diminished velocity dispersion. Within ∼ 10 Myr, numerous ∼ M⊕ planets emerge within the disk, and the system eventually enters a slower mode of accretion. The mass evolution of the accreting embryos within our model disk is shown in Figure 4. Of the 100 initial embryos in the simulation, 2 grew to larger than ∼ 1 M⊕, and ∼ 10% of the objects grew to masses of m ∈ (0.25, 1) M⊕. As seen in Figures 2 and 3, the planetesimal disk spreads beyond its initial outer edge at 35 au. Under the assumption that the disk spread beyond 35 au, Neptune would not stop migrating at 30 au, as it would still have the ability to undergo planetesimal-driven migration. However, the outer regions of the disk that spread viscously are not dynamically cold, and could potentially violate the conditions necessary for planetesimal-driven migration. Moreover, as depicted in Figures 2 and 3, large planetary embryos formed within the disk by T = 8 Myr. Clearly, we do not see either of these features represented in the structure of the Kuiper Belt or Neptune's current orbit.
In order to alleviate this discrepancy, Neptune would have had to undergo rapid planetesimal-driven migration. This would eliminate the opportunity for large planetary embryos to form within the disk. Therefore, we conclude that the instability must have occurred very early in the formation of the Solar System. It is crucial to keep in mind that our results are contingent upon the assertion that the competition between viscous stirring and collisional damping prevents the disk's eccentricity and inclination distributions from widening too much. We verified that in the absence of damping, no collisional growth ensues in our calculation. Because the effects of the disk's dynamical friction are implemented within our model through a fictitious damping acceleration, this effect is governed by our choice of a finite value for τ. As already noted above, however, the τ → ∞ assumption leads to an unphysically over-excited disk, and therefore significantly underestimates the potential for collisional growth within the system. We further note that the scaling between e and τ is rather forgiving, such that choosing a somewhat higher value for the damping timescale in our model should yield similar results. To verify this, we ran analogous simulations with 4 different damping timescales, τ, and calculated the median eccentricity and inclination of all of the planetesimals after the disk reached an equilibrium state, at ∼ 1 Myr in all cases. The results of these experiments are shown in Table 1. Evidently, the median eccentricity and inclination do not sensitively depend on the magnitude of τ. DISCUSSION In this work, we explored the possibility of post-nebular oligarchic growth of protoplanetary embryos in the Solar System's primordial disk of debris. Overall, our numerical experiments indicate that it is plausible for ∼ 1 M⊕ objects to emerge within the ancient trans-Neptunian region (i.e., ∼ 15 − 35 au) on a ∼ 10 Myr timescale, provided that the disk's velocity dispersion remains temperate. Accordingly, our numerical experiments indicate that if the planetesimal disk were to remain unperturbed for ∼ 10 − 100 Myr, the present-day architecture of the Solar System would likely have been very different. In virtually all respects of the physical setup, our simulations are similar to standard realizations of the Nice model (Levison et al. 2008, 2011; Batygin et al. 2011; Nesvorný & Morbidelli 2012). In previously published self-gravitating N-body simulations of early dynamical instabilities in the Solar System, interactions among super-particles led to an unphysical degree of self-excitation within the planetesimal disk (Reyes-Ruiz et al. 2015; Fan & Batygin 2017). In our simulations, we introduced dissipation within the disk to approximately capture the effects of collisional damping. This effect maintains the planetesimals' eccentricities and inclinations on the order of a few percent and a few degrees, respectively, and facilitates accretion of protoplanetary embryos. Moreover, in this scenario, gravitational scattering forces a substantial amount of icy debris to spread beyond the present-day orbit of Neptune. Cumulatively, our results translate to an intriguing constraint on the instability timescale.
That is, if the Solar System's transient phase of planet-planet scattering were to be delayed by hundreds of millions of years, as envisioned in the original version of the Nice model, the disk would have coalesced into planets, compromising its capacity to serve as the trigger mechanism for the instability. Consequently, our results suggest that the outward migration of Neptune would have started shortly after the dissipation of the protoplanetary nebula. This result is consistent with the conclusions of recent work that favored the early instability scenario (Clement et al. 2018, 2019; de Sousa et al. 2020; Nesvorný et al. 2021). There are numerous ways in which our study could be expanded upon. A limitation of our calculation is the relatively small number of particles in the simulation. Numerical methods such as those implemented in the GENGA code (Grimm & Stadel 2014), which uses GPUs to calculate gravitational interactions between bodies and produces results consistent with mercury6 (Chambers 1999), could be used to expand these simulations, as in Quarles & Kaib (2019). We have also ignored the initial size distribution of particles, although Kaib et al. (2021) found that, in order to explain the existence of Pluto and Eris, the number of ∼Pluto-mass bodies in the primordial Kuiper belt would have been as few as ∼ 200 and no greater than ∼ 1000. Shannon & Dawson (2018) provided similar limits on the number of primordial ∼Pluto-mass and ∼Earth-mass protoplanets by considering the survival of ultra-wide binary TNOs, the Cold Classical Kuiper belt, and the resonant population. If these large planetesimals formed during the gaseous disk stage and were present in the primordial disk, this would likely speed up accretion such that Earth-mass objects would emerge on even shorter timescales via the same physical processes explored in our simulations. If instead these objects formed after the disk's dispersal, then the simulations described in the previous paragraph could test this hypothesis. For computational efficiency, our simulations used accreting embryos that were initially slightly larger than Pluto. We expect that Pluto-mass objects could have formed at some t < 10 Myr based on our collisional growth calculations. In any case, considerations of growth in the disk provide an intriguing possibility to further constrain the instability-driven scenario of the early evolution of the Solar System.
5,080.6
2021-07-22T00:00:00.000
[ "Physics" ]
Synthesis and Characterization of Camphorimine Au(I) Complexes with a Remarkably High Antibacterial Activity towards B. contaminans and P. aeruginosa Fourteen new camphorimine Au(I) complexes were synthesized and characterized by spectroscopic (NMR, FTIR) and elemental analysis. The structural arrangements of three selected examples were computed by Density Functional Theory (DFT), showing that the complexes essentially keep the {Au(I)-CN} unit. The Minimum Inhibitory Concentrations (MIC) were assessed for all complexes, showing that they are active towards the Gram-negative strains E. coli ATCC25922, P. aeruginosa 477, and B. contaminans IST408 and the Gram-positive strain S. aureus Newman. The complexes display very high activity towards P. aeruginosa 477 and B. contaminans IST408, with selectivity towards B. contaminans. An inverse correlation between the MIC values and the gold content was found for B. contaminans and P. aeruginosa. However, plots of the MIC values versus Au content for P. aeruginosa 477 and B. contaminans IST408 follow distinct trends. No clear relationship could be established between the MIC values and the redox potentials of the complexes measured by cyclic voltammetry; the MIC values are essentially independent of the redox potentials, either cathodic or anodic. The complexes K3[{Au(CN)2}3(A4L)] (8, Y = m-OHC6H4) and K3[{Au(CN)2}3(B2L)]·3H2O (14, Z = p-C6H4) display the lowest MIC values for the two strains. In normal fibroblast cells, the IC50 values for the complexes are ca. one order of magnitude lower than their MIC values, although higher than that of the precursor KAu(CN)2. Introduction The medicinal properties of camphor have been recognized since ancient times, and it has a long history of traditional applications. Pharmacological uses of camphor include liniments and balms for relief of muscular pain, inhalants for nasal decongestion, antitussives, and expectorants. The nasal decongestant activity of camphor was attributed to the stimulation of cold receptors in the nose [1][2][3]. Such behavior triggered our interest in the ability of camphor derivatives, in particular camphor-derived complexes, to interact with other biological targets, and also in the evaluation of their antibacterial properties. With that purpose, we previously prepared silver and copper complexes and investigated their antimicrobial properties. By reaction of the camphor derivatives (AL, BL or CL) with potassium gold dicyanide (KAu(CN)2), two main types of complexes were obtained: those that kept the KAu(CN)2 unit and those that lost the KCN moiety to afford the {Au(CN)} unit (Scheme 1). The anionic [Au(CN)2]− unit is known to form coordination polymers, acting as a spacer between gold and cationic Cu(II), Zn(II), Ni(II), Co(II), or Sn(II) metal centers [18][19][20]. Since the camphor ligands are neutral, no such type of interaction is feasible.
However, adducts with a variety of metal-to-ligand ratios [{KAu(CN)2}n(L)] (n = 1, 3, 5, 7) were reproducibly obtained. An attempt to rationalize such a reactivity trend is based on the interaction of the potassium ion with several nitrogen atoms of neighboring molecules, as found for the precursor potassium gold dicyanide by X-ray diffraction analysis. Potassium gold dicyanide displays a polynuclear three-dimensional structure formed by alternating linear anionic (Au(CN)2−) and cationic (K+) layers, with each potassium ion interacting with several nitrogen atoms of distinct Au(CN)2 units [21]. Although the camphorimine gold adducts with nuclearity higher than three are intriguing, they will be considered as doped gold potassium dicyanide species and, therefore, they are not further discussed, nor were their biological properties studied. No such type of complexes was obtained for ligand CL. As far as we know, formation of {Au(CN)} from potassium gold dicyanide has not been reported before. All complexes were formulated based on analytical and spectroscopic properties (see Experimental). In order to elucidate the structural arrangement and geometry of the complexes, computational calculations by DFT were undertaken, since no suitable crystals could be obtained to perform single-crystal X-ray diffraction analysis. Computational Calculations DFT calculations were carried out using GAMESS-US [22] version R3 with the B3LYP functional and the SBKJC basis set. Structures with one and two cyanide groups per gold atom were attempted. However, all the essayed structures with the fragment {Au(CN)2} underwent dissociative pathways, releasing the ligand and regenerating the [Au(CN)2]− unit. All the structures converged to [Au(CN)L] in a linear arrangement (see complex 13, Figure 1). In some cases, the released KCN can co-crystallize (1, Figure 1) or a second unit of AuCN is incorporated (complex 6 in Figure 1). The second gold cyanide unit binds to the {Au(CN)L} fragment through a weak Au-Au bond (bond order 0.178) as well as a weak Au-O bond (bond order 0.234). Although stronger, the Au-N bond is very labile, with a bond order of 0.324. Antibacterial Activity The antibacterial properties of the camphorimine Au(I) complexes were assessed experimentally through the determination of the Minimum Inhibitory Concentration (MIC) against the Gram-positive strain S. aureus Newman and the Gram-negative strains E. coli ATCC25922, P. aeruginosa 477, and B. contaminans IST408.
Antibacterial Activity
The antibacterial properties of the camphorimine Au(I) complexes were assessed experimentally through the determination of the Minimum Inhibitory Concentration (MIC) against the Gram-positive strain S. aureus Newman and the Gram-negative strains E. coli ATCC25922, P. aeruginosa 477, and B. contaminans IST408. Experimental results show that all the complexes are active towards the bacterial strains under evaluation. As a general trend, the complexes perform better against P. aeruginosa 477 and B. contaminans IST408 than against E. coli ATCC25922 or S. aureus Newman, although complexes 1, 2, and 10 display a reasonable activity against all the strains under study (Table 1). As previously reported, the ligands are not active against the above-mentioned strains [9]. In order to correlate the characteristics (metal content, ionic/neutral character, nuclearity) of the complexes with their antibacterial activity, the MIC values obtained for P. aeruginosa 477 and B. contaminans IST408 (Table 1) were plotted against the metal content of each complex (Figures 2 and 3). Figures 2 and 3 clearly evidence a direct relationship between the antibacterial activities of the complexes and their gold content for the B. contaminans and P. aeruginosa strains, i.e., the higher the gold content, the higher the activity and the lower the MIC values. However, the correlation pattern is considerably different for the two strains. That is not the case for P. aeruginosa, for which complex 3 displays one of the lowest MIC values (Figure 2). Such a trend evidences that different mechanisms of action operate for the two bacterial species. In fact, the plots defined for P. aeruginosa (Figure 2) stay away from the point defined by the gold cyanide precursor. In this case, the gold cyanide content remains relevant, but structural effects due to the gold cyanide unit {Au(CN)2−} seem less important than for B. contaminans.
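As a minimal sketch of how MIC-versus-gold-content plots of this kind can be assembled, assuming hypothetical MIC and Au (%) values rather than the actual Table 1 data:

```python
# Sketch: scatter MIC against gold content for two strains and overlay a
# simple linear trend, mirroring the analysis behind Figures 2 and 3.
# All numbers below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

au_percent = np.array([25.0, 32.0, 38.0, 45.0, 52.0])        # hypothetical Au (%)
mic = {
    "B. contaminans": np.array([60.0, 40.0, 22.0, 10.0, 2.4]),  # hypothetical MIC
    "P. aeruginosa":  np.array([45.0, 30.0, 18.0,  9.0, 3.9]),  # hypothetical MIC
}

fig, ax = plt.subplots()
for strain, values in mic.items():
    ax.scatter(au_percent, values, label=strain)
    slope, intercept = np.polyfit(au_percent, values, 1)  # linear trend line
    ax.plot(au_percent, slope * au_percent + intercept)
ax.set_xlabel("Au content (%)")
ax.set_ylabel("MIC (µg/mL)")
ax.legend()
plt.show()
```

An inverse trend (higher Au %, lower MIC) would appear as a negative slope for each strain; distinct slopes and intercepts for the two strains correspond to the distinct correlation patterns described above.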
In the case of complexes with bi-camphor ligands, symmetry conceivably plays a role in the activity, since the para spacer in complex 14 considerably enhances the antibacterial activity compared to the meta spacer in complex 13 (Figure 2). The high symmetry of the para bi-camphor ligand [23] renders four binding atoms (N, O) almost equally accessible for interaction with bacterial receptors. The effect of the nuclearity, number, and electronic characteristics of the ligands on the antibacterial properties of the complexes does not show a clear trend. However, it is evident that the ligands fine-tune the antibacterial activity, according to the slopes of the lines (Figures 2 and 3). The distinct activity/gold content trends observed for the two bacterial strains (B. contaminans and P. aeruginosa) cannot be attributed to the cell wall structural arrangement, since both species are Gram-negative, with a complex cell wall composed of an inner and an outer membrane.

Toxicity
The assessment of the cytotoxicity of new complexes is an essential step in the investigation of their application as prospective drugs. The first screenings are in vitro studies using normal cells such as human fibroblasts. The MTT assay was selected to assess cell viability by measuring the activity of a mitochondrial succinate dehydrogenase as the end-point of cytotoxicity [24,25]. Results obtained with the gold camphorimine complexes indicated that, in general, they have high cytotoxicity, presenting IC50 values in the range 0.15-0.40 µg/mL. Such cytotoxic activity is attributed to the contribution of the gold cyanide-based core of the complexes. Notably, KAu(CN)2 displayed an IC50 value (0.06 µg/mL) one order of magnitude below that of the camphorimine gold complexes.
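IC50 values of this kind are typically extracted by fitting a sigmoidal dose-response curve to the viability data. The sketch below does this under that assumption with a four-parameter logistic model and hypothetical viability readings; it is an illustration, not the authors' actual fitting procedure.

```python
# Sketch: estimate an IC50 by fitting a four-parameter logistic curve to
# MTT viability data. The data points below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0])          # µg/mL, hypothetical
viability = np.array([95.0, 88.0, 70.0, 42.0, 15.0, 6.0])  # % of control

popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[0.0, 100.0, 0.2, 1.0], maxfev=10000)
print(f"estimated IC50 = {popt[2]:.2f} µg/mL")  # ~0.2 for this toy data
```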
Redox Properties
Electron transfer is often involved in biological processes, so the study of the electrochemical properties of the camphorimine gold complexes could provide some insight into their redox behavior. The study was undertaken by cyclic voltammetry in acetonitrile using Bu4NBF4 as electrolyte (see Experimental for details). The data obtained are presented in Table 1. As a general trend, the complexes display irreversible anodic waves at potentials lower than that of K[Au(CN)2] (E_p^ox = 1.72 V, Figure 4a). The cathodic processes are also irreversible and fall within the potentials −1.65 and −1.85 V (Table 1). No cathodic process or adsorption wave indicative of gold formation was observed for K[Au(CN)2]. The plots of the anodic and cathodic potentials versus the MIC values for P. aeruginosa 477 and B. contaminans are depicted in Figures 4 and 5, respectively. A random distribution is observed for the points defined by the MIC values and the reduction potentials (Figures 4a and 5a), which fall in the range of values reported for the free ligands [5,26,27]. The anodic processes accommodate within two ranges of potentials for both P. aeruginosa and B. contaminans (Figures 4b and 5b). The complexes with just one ligand (1, 2, 11, 13) tend to fit into the range of the lower potentials, while those with two ligands (5, 7, 10) fit the potential range that includes the KAu(CN)2 precursor. The electron releasing/withdrawing properties of the camphorimine ligands (L = OC10H14NY) may drive the redox behavior and structure of the complexes, since electron-donor ligands (Y = C6H4NH2, C6H4N; Z = m-C6H4, Scheme 1) tend to produce mono-ligand complexes that oxidize in the lower range of potentials.

Materials and Methods
The complexes were synthesized under nitrogen using Schlenk and vacuum techniques. Camphor ligands (OC10H14NY: Y = NH2, C6H5, C6H4NH2-4, C6H4CH3-4, C6H4OH-3, NC10H14NC6H4) were prepared according to reported procedures [27]. Gold potassium dicyanide, camphor, the amines, and hydrazine were purchased from Sigma Aldrich. Acetonitrile (PA grade) was purchased from Carlo Erba, purified by conventional techniques [28], and distilled before use. The FTIR spectra were obtained from KBr pellets using a JASCO FT/IR 4100 spectrometer. The NMR spectra (1H, 13C, DEPT, HSQC, and HMBC) were obtained from CD3CN, DMSO, or CDCl3 solutions using Bruker Avance II+ (300 or 400 MHz) spectrometers. The NMR chemical shifts are referred to TMS (δ = 0 ppm). The redox properties were studied by cyclic voltammetry using a three-compartment cell equipped with a Pt wire electrode and interfaced with a VoltaLab PST050 instrument. The cyclic voltammograms were obtained using solutions of NBu4BF4 in acetonitrile as supporting electrolyte. In a representative synthesis, the reaction mixture was stirred for 1/2 h; acetonitrile (7 mL) was then added, and the mixture stirred for 3 days. Upon solvent evaporation, a yellow precipitate was obtained that was washed with ether and dried under vacuum. Yield 57 mg, 73%.

Computational Calculations
The optimization of the structures and the molecular geometry of the complexes was carried out by DFT calculations using GAMESS-US [22] version R3, with the B3LYP functional and the SBKJC basis set [29]. The structures were confirmed as stationary points by Hessian calculations with non-negative eigenvalues and six near-zero rotational and translational frequencies.

Bacterial Strains
The bacterial strains E. coli ATCC25922, P. aeruginosa 477, B. contaminans IST408, and S. aureus Newman were used in the present work and were kept at −80 °C in 20% glycerol. When in use, bacterial strains were maintained on Luria Broth (LB) solid media (Sigma).

Minimal Inhibitory Concentration Assessment
The minimal inhibitory concentration (MIC) of the Au(I) complexes towards the above-mentioned bacterial strains was determined by microdilution methodologies using Mueller-Hinton broth (MH), as previously described [30].
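The microdilution method relies on testing two-fold serial dilutions of each complex; as a small illustrative helper (the starting concentration below is a hypothetical example, not one from this study):

```python
# Sketch: concentrations tested in a two-fold microdilution series, as used
# for MIC determination. The starting concentration is hypothetical.

def twofold_series(start_ug_ml, n_wells):
    """Return the concentration in each well of a two-fold dilution series."""
    return [start_ug_ml / (2 ** i) for i in range(n_wells)]

series = twofold_series(500.0, 8)   # e.g. 500, 250, 125, ... µg/mL
print([f"{c:g}" for c in series])
# The MIC is read as the lowest concentration with no visible bacterial growth.
```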
Positive (no compound) and negative controls (no bacterial inoculum) were included in each microdilution experiment.

Toxicity Assessment in Normal Cells
The in vitro cytotoxicity of the gold complexes was evaluated in adult human dermal fibroblasts (HDF) (Sigma-Aldrich) using the MTT assay, as previously described [29]. Cells were cultured in Fibroblast Growth Medium (Sigma-Aldrich) following the supplier's instructions for these cells. Cells seeded in 96-well plates were incubated with serial dilutions of the compounds for 24 h. Each compound dilution was performed in four wells. Controls (cells without test compound) were included in each assay. At least two independent experiments were performed for each cytotoxicity analysis. The cytotoxicity of each compound was expressed by the IC50, the concentration of compound causing a 50% decrease in cellular viability.

Conclusions
Camphorimine gold(I) cyanide complexes display relevant antibacterial activity towards the Gram-negative E. coli ATCC25922 and Gram-positive S. aureus Newman bacterial strains and a remarkably high antibacterial activity towards the Gram-negative strains P. aeruginosa 477 and B. contaminans IST408. Depending on the characteristics of the camphorimine ligands, the gold content, and the structural arrangement, the complexes reach rather low MIC values (2.4 µg/mL for B. contaminans IST408 (14) and 3.9 µg/mL for P. aeruginosa 477 (8)). An insight into the relationship between the MIC values and the gold content shows that an inverse correlation exists, i.e., the higher the Au content, the lower the MIC values. The plots of the MIC values versus Au (%) reveal that the performance of the complexes towards the two strains is distinct. In the case of B. contaminans, the antimicrobial activity correlates directly with that of the potassium gold cyanide precursor, while in the case of P. aeruginosa it is essentially independent of it. Except for compound 14, complexes of general formula K[Au(CN)2L] (1, 4, 9) and K[Au(CN)2L2] (10) display the highest activities against B. contaminans. These complexes have in common camphorimine ligands with amine (A1L, A5L) and hydroxy (A6L) substituents, which are easily ionizable, a feature considered to underlie their low MIC values, supported by the observation that the highest MIC value was found for complex [Au(CN)L2] (5), which has low ionizable character. In the case of P. aeruginosa, the plots show that there is no correlation between the values (MIC and % Au) for the Au(I) camphorimine complexes and the precursor KAu(CN)2, which sits apart from all lines. In this case, the MIC values for most of the complexes are lower than that of KAu(CN)2, and there is no evidence for enhancement of the antibacterial activity by protic or hydrogen-bonding substituents at the camphorimine ligand. This trend shows that the coordination of the camphorimine ligands has a positive effect on the antibacterial activity, independently of the substituent. No correlation was found between the redox potentials and the MIC values, consistent with structural rather than electronic aspects driving the interaction of the complexes with bacterial cells.
4,554.2
2021-10-01T00:00:00.000
[ "Chemistry", "Medicine" ]
A Complicated Relationship: Examining the Relationship Between Flexible Strategy Use and Accuracy

This study explores student flexibility in mathematics by examining the relationship between accuracy and strategy use for solving arithmetic and algebra problems. Core to procedural flexibility is the ability to select and accurately execute the most appropriate strategy for a given problem. Yet the relationship between strategy selection and accurate execution is nuanced and poorly understood. In this paper, this relationship was examined in the context of an assessment where students were asked to complete the same problem twice using different approaches. In particular, we explored (a) the extent to which students were more accurate when selecting standard or better-than-standard strategies, (b) whether this accuracy-strategy use relationship differed depending on whether the student solved a problem for the first time or the second time, and (c) the extent to which students were more accurate when solving algebraic versus arithmetic problems. Our results indicate significant associations between accuracy and all of these aspects: we found differences in accuracy based on strategy, problem type, and a significant interaction effect between strategy and assessment part. These findings have important implications both for researchers investigating procedural flexibility as well as for secondary mathematics educators who seek to promote this capacity among their students.

This study uses an assessment where students were asked to complete the same problem twice using different approaches, allowing us to examine the relationship between strategy use and accuracy beyond a student's primary approach of choice. Further, we examine whether the relationship between strategy use and accuracy depends on the problem type (arithmetic or algebraic) to add further nuance to our current understandings about the role of structural features and mathematical domains for the relationship between strategy selection and accuracy.

Relationships Between Strategy Appropriateness and Strategy Accuracy
Among the many dimensions along which problem-solving strategies can differ are two that are of particular interest in this study: strategy appropriateness and strategy accuracy. Strategy appropriateness lies at the core of procedural flexibility (Star, 2005). Some strategies may be better than others for a given problem by virtue of a number of factors, including efficiency of solving and elegance of the steps. Other characterizations of appropriateness consider situational variables and the learner (e.g., Verschaffel et al., 2009); here, we are primarily concerned with the strategies students employ with respect to the specific features of the task and their efficiency and elegance. While mathematicians disagree about how to objectively define elegance in their discipline, they widely agree that strategy appropriateness and elegance go hand-in-hand (Hardy, 1940). For example, within the algebraic domain of linear equation solving, many equations can be solved using a so-called standard algorithm (Buchbinder et al., 2015; Star & Seifert, 2006). Such an algorithm applies to a broad range of problems and is reasonably efficient. A standard algorithm for solving a linear equation such as 3(x + 1) = 15 involves first distributing the coefficient 3 to obtain the expression 3x + 3, then subtracting 3 from both sides, and finally dividing by 3 to arrive at x = 4 (Buchbinder et al., 2015; Star & Seifert, 2006).
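As an illustration (ours, not part of the cited studies), both step sequences for 3(x + 1) = 15 can be checked symbolically; the sketch below encodes the standard path and the divide-first shortcut discussed next.

```python
# Sketch: verify that the standard algorithm and the divide-first shortcut
# for 3(x + 1) = 15 reach the same solution. Illustrative only.
from sympy import symbols, Eq, solve, expand

x = symbols("x")
equation = Eq(3 * (x + 1), 15)

# Standard algorithm: distribute, subtract 3 from both sides, divide by 3.
distributed = Eq(expand(equation.lhs), equation.rhs)   # 3x + 3 = 15
step2 = Eq(distributed.lhs - 3, distributed.rhs - 3)   # 3x = 12
standard_solution = solve(step2, x)                    # [4]

# Better-than-standard: divide both sides by 3 as the first step.
shortcut = Eq(equation.lhs / 3, equation.rhs / 3)      # x + 1 = 5
shortcut_solution = solve(shortcut, x)                 # [4]

assert standard_solution == shortcut_solution == [4]
```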
Similarly, a standard approach for adding a series of integers such as 146 + 12 - 46 + 88 would be to add from left to right: 146 + 12 is 158, 158 - 46 is 112, and 112 + 88 is 200. Among other possible strategies, some are arguably better than the standard algorithm, where better (or "situationally appropriate"; Star et al., 2022) may mean that the strategy is more elegant and/or better matched to the structural features of the problem. To illustrate, for the above equation 3(x + 1) = 15, an arguably better strategy would involve dividing both sides of the equation by 3 as a first step. This approach is considered better than the standard algorithm due to the affordances created by having 3 be a factor of 15 (a structural feature of the problem) and to the fewer steps required to solve it. In the case of the addition problem 146 + 12 - 46 + 88, an arguably better approach would be to recognize a structural, numerical relationship between the to-be-added numbers (that both 146 - 46 and 12 + 88 are easy to compute, as is their sum) and to add the non-consecutive pairs of integers to take advantage of this. Furthermore, recognizing and taking advantage of structural features of a problem when selecting a strategy also illustrates linkages between flexibility and conceptual knowledge (e.g., Schneider et al., 2011). In the former example, use of the better strategy may imply a more sophisticated understanding of the concept of variable, in treating (x + 1) as a variable term. And in the latter example, use of the more situationally appropriate strategy relies both upon the conceptual principle of commutativity as well as the relationship between subtraction and addition.

But is there a general relationship between strategy appropriateness and strategy accuracy? In other words, which strategies tend to be more accurately implemented by students: those that are standard algorithms or those that are more situationally appropriate? On the one hand, an argument can be made that a standard approach tends to be the strategy most related to accuracy. Standard approaches are by definition broadly applicable to a wide range of problems. As a result, such algorithms can be automatically executed without a great deal of attention to the specific structural features of a problem. Such routine execution of standard algorithms can be efficient and reduce the likelihood of error. Prior research has suggested a "freed resources" mechanism behind this potential benefit of standard algorithms, where highly routinized strategies enable students to focus on the relationships and operations in a problem with greater facility (Kotovsky et al., 1985; McNeil & Alibali, 2004; Shiffrin & Schneider, 1977; Shrager & Siegler, 1998). In addition, well-rehearsed standard approaches may also lead to greater confidence and trust among users of these algorithms, which may also be linked to higher accuracy. Finally, use of a standard algorithm may place a lower burden on working memory capacity during anxiety-inducing assessments, enabling students to perform more successfully as compared to students who use other strategies (Torbeyns & Verschaffel, 2013). But on the other hand, one might hypothesize that strategies that are more appropriate or better than the standard algorithm would result in greater accuracy. Situationally appropriate strategies take advantage of structural features of problems and tend to have fewer operations, and this may reduce the opportunity and likelihood for errors.
When students opt for a strategy that departs from a highly routinized, standard approach, they may engage more carefully in on-the-spot encoding of problem features (McNeil & Alibali, 2004); this break from automaticity, as well as the more conscious and deliberate attention to the problem-solving process, may increase accuracy. Further, highly practiced strategies such as standard algorithms cause students to only encode the features of the problem necessary for executing their strategy (McNeil & Alibali, 2004), and this may cause students to miss opportunities for structural efficiencies and shortcuts available to them. For these reasons, better-than-standard approaches may lead to greater success in problem solving. As an alternative to either of these hypotheses, it may instead be the case that a general relationship does not exist between strategy appropriateness and strategy accuracy, but rather that this relationship is an interaction related to the structural features of a given problem. In particular, for problems where it is relatively straightforward to identify an alternative, better strategy (such as in the specific linear equation and integer addition examples previously shown), perhaps the use of the more situationally appropriate strategy leads to greater accuracy, for the reasons noted above. As another example, consider the fraction addition problem 18/36 + 21/42 (Newton, 2008), where there may be considerable advantage in terms of accuracy in recognizing that both fractions are equivalent to 1/2 rather than using the standard algorithm (such problems have been termed "flexibility-eligible" problems; Hästö et al., 2019); a worked check of this example appears below. But for problems where it may not be readily apparent whether there is a better alternative to the standard algorithm, perhaps the standard algorithm holds greater promise for accuracy. For example, consider the linear equation 3(x - 4) - 15x = -72. Alternative strategies may exist, some of which might be considered better than the standard approach (such as dividing each term by 3 as a first step), but given the greater effort required to generate and execute these alternatives, the standard approach may be more likely to lead to the correct solution.
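To make the fraction example concrete, a small check using exact rational arithmetic (illustrative only):

```python
# Sketch: the "flexibility-eligible" fraction problem 18/36 + 21/42, where
# recognizing that both addends equal 1/2 avoids the common-denominator work.
from fractions import Fraction

a, b = Fraction(18, 36), Fraction(21, 42)

# Better-than-standard: both fractions reduce to 1/2, so the sum is 1.
assert a == Fraction(1, 2) and b == Fraction(1, 2)

# Standard algorithm: common denominator (here 36 * 42), then simplify.
standard_sum = Fraction(18 * 42 + 21 * 36, 36 * 42)
assert standard_sum == a + b == 1
```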
Other Considerations for the Relationship Between Accuracy and Strategy Use
An additional consideration with regard to the relationship between strategy appropriateness and strategy accuracy concerns the particulars of how these constructs are assessed. Specifically, in prior studies of procedural flexibility (e.g., Star & Seifert, 2006), a common means for assessing strategy appropriateness is to ask students to re-solve a previously completed problem but using a different approach. Inferences about students' flexibility are then made based on whether a better-than-standard or a standard strategy was used in the student's first solution attempt or in subsequent attempts (e.g., Xu et al., 2017), where typically the use of a better-than-standard approach in a student's first attempt is viewed as an indicator of more sophisticated procedural flexibility. Using this same logic, it is not clear whether students would exhibit greater accuracy in their first attempt or in their second attempt on a problem. Students' first attempt will presumably draw upon the strategy that they are most comfortable and familiar with, which could produce greater accuracy. But alternatively, after having solved the problem once, the students' greater familiarity with the problem (and its solution) could result in greater accuracy on the second attempt. Furthermore, there could conceivably be an interaction between the section of the assessment and the strategy used, whereby (for example) the first-attempt strategy is more accurate if it is the better-than-standard approach but less accurate if it is the standard approach. Finally, it is possible that any relationships that exist between strategy appropriateness and strategy accuracy vary across mathematical domains. Procedural flexibility has been frequently studied in linear equation solving (e.g., Star & Seifert, 2006; Star & Rittle-Johnson, 2008; Newton et al., 2020) but also in fraction operations (Newton, 2008), calculus (Maciejewski & Star, 2016), integer operations (e.g., Torbeyns & Verschaffel, 2016), and other domains. It is unclear whether the substantial differences in these domains (including the age at which learners typically encounter these domains, as well as mathematical differences in the domains) might influence the relationships between strategy accuracy and strategy appropriateness. With respect to mathematical domain differences in the relationship between strategy appropriateness and accuracy, of particular interest here is whether such differences exist between arithmetic problems and algebraic problems. Students employ myriad strategies for computing in arithmetic, including both formal algorithms and informal strategies. With respect to the latter, the richness and variety of children's informal strategies for solving arithmetic problems have been well documented in the literature (e.g., Shrager & Siegler, 1998). For symbolic algebra problems such as equation solving, students tend to demonstrate a greater reliance on formal symbolic algorithms (e.g., Mayer, 1982). For the reasons suggested above, it may be the case that standard algorithms are more accurate for both arithmetic and algebraic problems. However, one might also predict that there will be differences between these domains. On the one hand, the prevalence of a wide variety of informal and innovative strategies for arithmetic problems, and the increasing instructional emphasis on these strategies over the past several decades, especially in the U.S. (e.g., Carroll, 1999), might suggest that standard arithmetic algorithms are on average less accurate than other strategies. But on the other hand, because of the greater abstraction present in algebra problems and the challenges that students often experience when learning algebra (e.g., Chu et al., 2017; Payne & Squibb, 1990; Van Amerom, 2003), it may be the case that standard algorithms are more accurate than alternative strategies in this mathematical domain. In sum, there is considerable potential nuance in the relationship between strategy appropriateness and strategy accuracy within the context of procedural flexibility. Although there has been an increase in interest in and research on procedural flexibility over the past decade, little is known about this relationship. Increasing our knowledge base about the relationship between strategy appropriateness and strategy accuracy is important to the field for the following reasons.
First, scholars, educators, and policymakers have focused on increasing students' ability to employ multiple strategies flexibly in problem solving; however, flexibility itself, absent the dimension of accuracy in problem solving, seems a less desirable outcome. Arguments in favor of flexibility as an instructional goal would be substantially advanced if there were clearer links between flexibility and accuracy, including relationships between strategy appropriateness and accuracy. Attending to the dimension of accuracy in efforts to advance flexibility in practice would better align with broader goals for improving students' mathematical proficiency and conceptual understanding. Second, understanding more about the relationship between strategy appropriateness and accuracy also relates directly to debates in mathematics education about the role of standard algorithms in the curriculum (e.g., Ebby, 2005; Van den Heuvel-Panhuizen, 2010). Algorithms are powerful tools, but over-reliance instructionally on algorithms has been linked in some countries to rote and inflexible knowledge. Under what conditions do standard algorithms offer benefits to students in terms of problem-solving accuracy? Under what conditions do better strategies not only indicate awareness of mathematical structure and deeper conceptual understanding but also yield more accurate results? These questions are core to conversations about what we teach in math and why we teach it.

Current Study
The relationship between strategy appropriateness and strategy accuracy is the focus of the present study. We explore this relationship in the context of algebra equation solving problems and in arithmetic problems, as well as in a task that prompted students to solve problems in different ways. While prior literature has demonstrated the variability in individual students' strategy choices on the same items and across occasions (e.g., Siegler, 1998), we were interested in examining how accuracy varies between standard and above-standard approaches on a task in which students are explicitly asked to show variable strategy use. We ask the following research questions in this study. First (R1), are students more accurate when using standard approaches or better-than-standard approaches? Second (R2), is this relationship between accuracy and strategy appropriateness influenced by whether a problem is being solved for the first time or being re-solved? Third (R3), are students more accurate when solving problems that are algebraic or arithmetic? We did not begin the study with a priori hypotheses about the answers to the research questions, given the considerable nuances and uncertainties described above.

Method
Participants
A convenience sample of 450 high school students from 19 math classes in a single large high school in the Southeastern region of the United States participated in this study. We reduced the sample to the 449 students who completed the assessment. An additional 36 students who were missing demographic data were also excluded, leaving N = 413 students in the full sample. Data were collected in January 2020. The girl-boy ratio was almost evenly distributed, with 47.2% girls and 52.5% boys (see Table 1). About 53% of participants were in 9th grade, 26% were in 10th grade, and 21% were in 11th or 12th grade at the time of the study. Students were taking Algebra 1, Algebra 2, Geometry, or AP Statistics courses.
Students' self-reported grades in math ranged from 41.2% earning A's, 32.7% earning B's, 17.7% earning C's, and 8.5% earning a D or lower. The majority of students, 86.9%, were between the ages of 14 and 16 years old.

Measures
Participants completed a two-part assessment. In Part 1, students were prompted to complete five problems (see Table 2), each of which could be solved with standard and better-than-standard approaches. For example, in Problem 5, a standard approach is to add the numbers from left to right. Alternatively, a better-than-standard approach would be to commute terms and add (12 + 88) and (146 - 46), using 100 as a reference point to simplify the operations. The instructions prompted students not to proceed to the next section until instructed to do so. Then in Part 2, the assessment presented students with the same problems from Part 1 and instructed them to complete each problem using a different method than the one they used before. Students were instructed not to look back at their work in Part 1 while completing the same set of problems in Part 2. Students completed the 5 problems in each of the two parts of the assessment, yielding 4,130 student-problems (resulting from 413 students completing each of 5 questions twice) in the data set. The unit of analysis for this study was student-problems. Problems 2 and 3 represent algebra problems, and Problems 1, 4, and 5 represent arithmetic problems. Our measure of the algebra domain consisted of both replications of Items 2 and 3 across the two assessment parts, totaling 4 items (α = .60). Similarly, our measure of the arithmetic domain consisted of both replications of Items 1, 4, and 5 across the two assessment parts, totaling 6 items (α = .60).

Coding
We coded student-problems for both accuracy and type of strategy; of particular interest here is the distinction between standard and better-than-standard strategies. Two coders independently coded all strategies for strategy type and accuracy and subsequently resolved all disagreements. We elaborate on these types of strategies below and provide student examples in Table 2. The standard approach for Problem 1 involved putting all the fractions in the same form over the common denominator of 9 to get 15/9, 5/9, 3/9, and 4/9, adding the numerators to get 27/9, and finally simplifying to get 3 (see Table 2). For Problem 2, the standard approach involved distributing the 3, subtracting 3 from both sides, and finally dividing both sides by 3 to obtain x = 4. Problem 3 follows a similar approach to Problem 2. Using the standard approach, students first distribute the coefficients 4 and 3, respectively, to get the expression 4x + 8 + 3x + 6 on the left side before combining like terms, subtracting 14 from both sides, and finally dividing both sides by 7 to obtain x = 1. For Problem 4, the standard approach is to first multiply both sets of fractions in line with the order of operations, adding 13/50 with 52/50 to get 65/50, and simplifying to obtain 13/10 or the mixed number 1 and 3/10. Finally, the standard approach for the last problem involves computing each operation in the expression from left to right to arrive at 200. Student-problems were coded as better-than-standard if their approach demonstrated more elegance and innovation than the standard approach, based on similar determinations in prior studies (e.g., Star & Seifert, 2006; Star et al., 2022).
For example, for Question 2, if a student divided by 3 as a first step, this was considered a better-than-standard strategy because the strategy takes advantage of the structural features of the problem (15 is evenly divisible by 3) and can be solved in fewer steps. In Question 3, as another example, if a student noticed that 4(x + 2) and 3(x + 2) were like terms and combined them as a first step to create 7(x + 2) on the left side of the equation, this was coded as better-than-standard. See Table 2 for additional examples. Because our research questions were concerned with differences in accuracy between standard and better-than-standard approaches, we restricted the analysis sample to student-problems that used either of these two types of strategies across both Parts 1 and 2. Given the well-established relationship between worse-than-standard strategies and inaccuracy, we were only interested in examining problem-solving accuracy for standard or better strategies (e.g., Star et al., 2022). Student-problems that were not included in the subsequent analysis included those that were incomplete, left blank, or showed a strategy that was judged by coders to be worse than the standard strategy on at least one part of the assessment. An example of a worse-than-standard strategy is using tallying to simplify an expression containing large integers (e.g., counting up 12 tallies or "tick" marks from 146 to add the expression 146 + 12). Excluding the 2,046 student-problems (or 1,023 student-questions) that were coded as having used neither the standard nor better-than-standard approaches across either of the two assessment parts, we retain n = 2,084 student-problems (from 1,042 student-questions and n = 377 students) in the analysis sample. Of the 1,319 individual student-problems coded for demonstrating a below-standard strategy, 91.2% were marked incorrect, underscoring the need to exclude them from our analysis sample. See Table 3 for a description of the patterns of strategy use across Parts 1 and 2 for the full sample. We found no significant differences in student demographics between the full and analysis samples using chi-square difference tests.

Analysis
To begin investigating differences in accuracy between standard and better-than-standard approaches (RQ1), we present descriptive statistics for the accuracy rate by strategy use in student-problems. To begin to answer our second research question (RQ2) concerning how the relationship between strategy selection and accuracy might differ depending on assessment part, we present the accuracy rate for each strategy within Parts 1 and 2 of the assessment separately. We also show patterns of strategy use and accuracy across the problems in the first and second attempt to contextualize student-problems in the two-part assessment task. We present similar descriptives for the accuracy rate by problem type to begin answering our third research question (RQ3) concerning differences in problem-solving success on algebra problems compared to arithmetic ones. (Table 3 note: n = 2,065 student-questions for N = 413 students; strategies in bold indicate student-questions retained in the analysis sample, n = 1,042 student-questions for n = 377 students.) We then fit a multi-level logistic regression model, with the dichotomous outcome variable corresponding to whether students' answer to each problem was correct or incorrect (see Equation 1).
We created a three-level mixed-effects model with random intercepts across students and classrooms to account for the nesting of student-problem i within student j within classroom k. This modeling of within- and between-student, and within- and between-classroom, differences in problem-solving success enabled us to account for the homogeneity of errors within each group (Raudenbush & Bryk, 2002). The equation for our main model is:

$$\operatorname{logit}(\pi_{ijk}) = \beta_0 + \beta_1\,\mathrm{algebra}_{ijk} + \beta_2\,\mathrm{abovestand}_{ijk} + \beta_3\,\mathrm{part}_{ijk} + \beta_4\,\mathrm{abovestand}_{ijk} \times \mathrm{part}_{ijk} + \epsilon_{ijk} + u_{0jk} + u_{00k}$$

where $\pi_{ijk} = \Pr(\mathrm{correct}_{ijk} = 1)$; $\epsilon_{ijk}$, $u_{0jk}$, and $u_{00k}$ are the Level 1, 2, and 3 variance components, respectively; $\epsilon_{ijk} \sim N(0, \sigma^2)$; $j = 1, \ldots, 377$ students with $i = 2, \ldots, 10$ student-problems per student; and $k = 1, \ldots, 20$ classrooms with $i = 6, \ldots, 186$ student-problems per classroom.

To ensure the appropriateness of a three-level mixed-effects model, we first fit a null (unconditional) model to test whether our outcome of interest, accuracy of student-problems, varied across students and classrooms. We calculated the student-level and classroom-level intraclass correlation coefficients (ICC) to determine the proportion of variance accounted for between students and between classrooms (Raudenbush & Bryk, 2002). The ICC at the classroom level indicated that 6.44% of the chance of answering correctly was explained by between-classroom differences (and 93.56% was explained by within-classroom differences); the ICC at the student level showed that 29.14% of the chance of answering correctly was explained by between-student differences (and 70.86% was explained by within-student differences). We thus proceeded with a three-level model. To answer our research questions about how accuracy may relate to strategy selection (RQ1) and problem type (RQ3), we included three dichotomous level-one variables in the model: the strategy employed (standard or better-than-standard), assessment part (Part 1 or Part 2), and problem type (arithmetic or algebra). To fully answer RQ2, we modeled the interaction between the strategy used and assessment part to determine if the effect of strategy on accuracy depended upon whether the student was on Part 1 or Part 2 of the exam, in which students were asked to re-solve the same problem a second time using a different strategy from the one employed in Part 1. Finally, we conducted post-hoc tests for differences in the likelihood of an accurate response. All computing was completed using Stata version 17.0.

Results
Table 4 presents the frequency counts and percentages correct by strategy for student-problems in the analysis sample. Reliance on the standard algorithm was common across all problems, with 57.1% of all student-problems using this strategy compared to 42.9% employing better-than-standard ones. Of the 1,189 student-problems that used the standard approach, 80.7% were completed accurately, compared to 74.2% correct for the 895 student-problems that used better-than-standard approaches. (Table 4 note: N = 2,084; frequency counts and percentages correct by strategy are also shown separately for assessment Part 1 (n = 1,042) and Part 2 (n = 1,042); column percentages are shown in parentheses.) However, we observed notable differences in accuracy by assessment part. In Part 1, the accuracy rate for student-problems that used standard approaches was 84.8%, compared to 69.7% for better-than-standard approaches.
But in Part 2, the difference in accuracy between the two approaches reverses, with standard and better-than-standard strategies demonstrating accuracy rates of 73.3% and 76.3%, respectively. With regard to differences in accuracy by assessment part alone, we found that 80.7% of student-problems in Part 1 were completed correctly, compared to 75.1% in Part 2. Further, when it came to problem type, 75.1% of arithmetic student-problems were completed correctly, compared to 81.0% of algebraic ones. Students appeared to be more successful on algebra problems compared to arithmetic ones. Table 5 describes patterns of accuracy and strategy across Parts 1 and 2 for all student-questions (n = 1,042). The largest share of student-questions (46.7%) showed the student using a standard approach in Part 1 followed by an above-standard approach in Part 2, and the proportion of these student-questions answered correctly in both parts of the exam was 74.7%. Another 15.4% of student-questions showed the opposite pattern, with the student starting with the above-standard approach in Part 1 followed by a standard strategy in Part 2, and the proportion of these student-questions answered correctly both times was comparable at 73.1%. The results from our mixed-effects logistic regression analysis elaborate on these findings. We begin with our third research question concerning differences in accuracy by problem type, which factors into our discussion of the findings for our first two research questions. We found further support for the finding that responses on algebra problems were more accurate than responses for arithmetic problems, even when controlling for assessment part and strategy type. Algebra problems had an estimated 0.34-logit greater likelihood of a correct response compared to arithmetic problems, β1 = 0.34, z = 2.54, p = .011 (see Table 6). In the next sections, we describe the probability of a correct response by strategy type and assessment part for algebraic and arithmetic student-problems separately. With regard to our first research question concerning differences in accuracy by strategy type and our second research question investigating how this relationship may differ by assessment part, we found further evidence in support of the interaction between strategy use and whether a student was completing a problem for the first or second time. In Part 1 of the assessment, the standard approach was related to a greater likelihood of an accurate response compared to the above-standard approach, β2 = −0.90, z = −4.50, p < .001 (see Table 6). For arithmetic student-problems in Part 1, the standard approach was associated with an estimated 87% chance of a correct response, compared to only a 73% chance with above-standard approaches. For algebra student-problems in Part 1, standard and above-standard approaches were associated with an estimated 90% and 79% probability of success, respectively. See Figure 1 for the estimated probabilities of a correct response by strategy type for arithmetic and algebra problems separately. However, this relationship between strategy use and accuracy changes in Part 2, as evidenced by the significant interaction term between strategy and part, β4 = 0.89, z = 3.40, p = .001 (see Table 6). When students solved the same problems a second time in Part 2 of the assessment, the differences in accuracy by strategy type disappear.
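To see how logit estimates of this kind map onto the reported percentages, the sketch below applies the inverse-logit transform to the Table 6 coefficients. The intercept is not quoted in the text, so we back it out from the reported 87% baseline (arithmetic, standard strategy, Part 1); treat the reconstruction as an approximation rather than the authors' exact model output.

```python
# Sketch: convert fixed-effect logit estimates into predicted probabilities
# of a correct response. Slopes are from Table 6; the intercept is inferred
# from the reported 87% baseline, so it is an assumption on our part.
import math

b0 = math.log(0.87 / 0.13)  # inferred intercept, ~1.90 logits
b_algebra, b_above, b_part2, b_above_x_part2 = 0.34, -0.90, -0.70, 0.89

def p_correct(algebra, above, part2):
    """Inverse-logit of the linear predictor for the fixed effects."""
    logit = (b0 + b_algebra * algebra + b_above * above
             + b_part2 * part2 + b_above_x_part2 * above * part2)
    return 1.0 / (1.0 + math.exp(-logit))

for algebra in (0, 1):
    for part2 in (0, 1):
        for above in (0, 1):
            print(f"algebra={algebra} part2={part2} above={above}: "
                  f"{p_correct(algebra, above, part2):.2f}")
# Reproduces the reported pattern, e.g. ~0.90 for algebra/standard/Part 1
# and ~0.82 for algebra/above-standard/Part 2.
```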
For arithmetic problems in Part 2, standard and above-standard approaches were associated with an estimated 77% and 76% likelihood of a correct response, respectively; for algebra problems in Part 2, both strategy types were associated with an estimated 82% chance of being answered correctly (see Figure 1). For arithmetic and algebra problems alike, in Part 2 of the assessment we found no significant differences in accuracy between standard and better-than-standard approaches, χ²(1, N = 2,084) = 0.01, p = .922. Examining the effectiveness of each strategy type across the two attempts, the standard approach was significantly more likely to yield a correct response in Part 1 compared to Part 2, but this is not the case for the above-standard approach, which demonstrated equal chances of success in both Parts 1 and 2. The standard approach had an estimated 10% greater chance of success in Part 1 (87%) compared to Part 2 (77%) for arithmetic problems and an estimated 8% greater chance of success in Part 1 (90%) compared to Part 2 (82%) for algebra problems, β3 = −0.70, z = −4.11, p < .001 (see Table 6). For above-standard approaches, we found no differences in the chance of obtaining a correct response across Parts 1 and 2, χ²(1, N = 2,084) = 1.02, p = .312. Above-standard approaches had an estimated 73% chance of success in Part 1 and 76% chance of success in Part 2 for arithmetic problems, and an estimated 79% chance of success in Part 1 and 82% chance in Part 2 for algebra problems. See Figure 1 for the estimated probabilities of a correct response by strategy, assessment part, and problem domain. (Figure 1 caption: Proportion of Student-Problems Correct by Strategy, Assessment Part, and Problem Type. Note: test results for differences in the likelihood of a correct response between strategies within the same part of the assessment are shown in the bars; test results for differences between the two assessment parts within the same strategy are shown in brackets. *p < .05. **p < .01. ***p < .001.)

Discussion
In the present study, we investigated differences in the accuracy achieved with standard and better-than-standard strategies and the extent to which accuracy differed by whether a student was solving for the first or second time and whether the problem was arithmetic or algebraic. We found that there is a relationship between strategy use and accuracy, and that the relationship depends on whether students are doing a problem for the first or second time as well as on the problem domain. The standard approach was related to greater success in problem solving only when a student was solving a problem for the first time; when asked to solve the same problems a second time using a different strategy, the standard approach was no more accurate than better-than-standard approaches. Further, examining the patterns of strategy use across the problems in the first and second attempts showed the largest share of student-problems beginning with the standard approach in Part 1 followed by an above-standard approach in Part 2, with the proportion answered correctly both times being 74.7%. Students' flexible strategy use appears to show the dominance of the standard approach as the primary strategy of choice. A small proportion of student-questions began with the above-standard approach as the primary choice of strategy in Part 1 followed by a standard strategy in Part 2, with the proportion answered correctly both times comparable at 73.1%.
For students' primary strategy selection in Part 1, the standard approach may be associated with a higher accuracy rate than the better-than-standard strategy for reasons that may seem intuitive: this approach is the more common, reliable, and routine way of solving a problem. The success of the standard approach here may be attributable to the "freed resources" account of student attention on problems, which posits that highly routinized and well-practiced strategies require fewer cognitive resources from students, "freeing up" their ability to attend to the key features and relationships in the problem (Kotovsky et al., 1985; McNeil & Alibali, 2004; Shiffrin & Schneider, 1977; Shrager & Siegler, 1998). Students who used the better-than-standard strategy in Part 1 may have had fewer cognitive resources at their disposal given the nonroutine nature of these approaches, making it more difficult to attend to the mechanics of solving and simplifying. This "freed resources" account may have made the problem easier to solve using the standard approach, and more likely to lead to an accurate solution, compared to better-than-standard approaches in Part 1. However, when students were asked to go beyond their primary strategy of choice, as our assessment prompted them to do in Part 2, both strategy types were equally related to accuracy. To add to this, the standard approach was significantly more successful in Part 1 compared to Part 2, but better-than-standard approaches were equally successful across the two exam parts. It could be the case that the application of above-standard approaches is a robust indicator of greater flexibility and conceptual understanding, regardless of whether this type of approach is a student's primary or secondary strategy choice. Recognizing the structural features in a problem is related to flexibility and conceptual knowledge (e.g., Schneider et al., 2011), and this may explain the stability of this approach's relation to accuracy irrespective of which attempt a student was on. It could also be the case that students using better-than-standard strategies in Part 2 benefitted from having previously solved using the standard method, improving their accuracy the second time around. Further, students using the standard strategy the second time may have had less practice with this approach, as it was not their primary choice of strategy, and this lack of routinization could have contributed to the reduction in accuracy we found. When students do not have a well-routinized and practiced strategy readily available to them, they need to attend to the specific features of the problem more carefully and encode the information in the problem to devise their approach (McNeil & Alibali, 2004). This on-the-spot encoding could have taken up more working memory, increasing the likelihood of error. It could also be the case that students only familiar with standard approaches tried to apply the same standard procedures a second time but were less successful. This is because highly practiced and internalized strategies, such as the standard ones, dictate a student's encoding when problem solving, and this top-down approach may cause them to attend to and encode only the features necessary for executing their specific strategy (McNeil & Alibali, 2004). This relationship between strategy use and encoding may have contributed to the reduction in accuracy we found in Part 2 with the standard approach compared to the same approach the first time.
Our results point to the value of cultivating flexibility in problem solving for learners. The vast majority of students in our sample who correctly solved a problem twice used some combination of standard and above-standard approaches across the two parts (see Table 5). This is not surprising, given that procedural flexibility is indicative of greater conceptual and procedural knowledge in mathematics (Durkin et al., 2021). Demonstrating successful problem solving using multiple strategies is a more robust indicator of student learning and comprehension than successful problem solving using one strategy alone. Understanding each strategy's likelihood of success on a task that promotes flexible strategy use helps us get a clearer picture of students' procedural flexibility.

The Relationship Between Problem Domain and Accuracy
While our findings more generally indicated that the success of certain strategies depends on whether a student is solving a problem for the first or second time, we observed notable differences in this relationship depending on the problem domain. We found significant differences in accuracy across algebraic and arithmetic student-problems, even controlling for assessment part and strategy type. It is possible this difference in accuracy is due to structural differences between the two problem types. The multi-step equations found in algebra domains may reduce the cognitive burdens of problem solving, given their predictable and less varied nature compared to arithmetic problems, with common patterns of distributing coefficients, combining like terms, and isolating variables. It seems reasonable for students to have common templates for solving, given the common format and structural features found in such equations. It could also be the case that equations that are "flexibility-eligible" (Hästö et al., 2019) may offer more entry points for solving. For arithmetic problems, particularly those containing fraction operations as seen in our assessment, students may have limited facility given the way fractions are taught in many U.S. schools (e.g., Harvey, 2012; Lamon, 2007). Another reason for the greater accuracy found in algebra problems could simply be recency effects: temporal distance from curricula emphasizing the fraction and integer operations more often seen in primary school mathematics may hinder students' success when it comes to these same types of problems later in high school, where the mathematics curricular program is heavily focused on algebra.

Limitations and Future Directions
A limitation of the present study is the use of student-problems as the unit of analysis. Future studies could look across assessment questions for the same student and examine each student's strategy selection and accuracy conditionally on how the student solved a problem for the first time, including those that used a below-standard strategy. Given that we excluded 2,046 student-problems (or 1,023 student-questions) that showed a below-standard strategy in at least one part of the assessment, our study does not generalize to flexible problem-solving situations in which a student uses a standard or better strategy in combination with a below-standard strategy; our results can only speak to the relationship between flexible strategy use and accuracy for students using standard or better strategies. We recommend future work that examines the relationship between below-standard approaches, procedural flexibility, and conceptual understanding.
Future work investigating flexible strategy use might also adapt the choice/no-choice method, which compares problem-solving latencies or times-to-solution between a method of choice and a prescribed method within individual students (Siegler & Lemaire, 1997). This technique could help to illuminate patterns in the relationship between accuracy and flexibility at the student level. The choice/no-choice method has been widely applied in the study of flexible strategy use on a broad array of task types with a broad range of latencies. However, this method works best for mental computation or tasks with limited use of external aids, such as calculators and pencil and paper on short multiplication tasks (Siegler & Lemaire, 1997). Given that the hand-written symbolic manipulation seen in algebraic problem solving increases latencies by up to a factor of 10 compared to prior studies using this method, the choice/no-choice method would likely need to be adapted for the study of flexible strategy use in algebraic problem solving. Another limitation of the choice/no-choice method is that researchers must explicitly prescribe a specific strategy to participants in the no-choice condition. This is more easily done in, for example, simple arithmetic tasks, by telling participants to round to the nearest tens or hundreds place. In algebra problem solving, however, naming the strategy for the student without inadvertently aiding the student through problem solving may be difficult. Students are unlikely to know what the "standard approach" or the "above-standard approach" means, and to accurately prescribe this technique to students by describing the steps to them might undermine the point of investigating students' independent flexible strategy use. We recommend future methodological exploration of how to adapt the choice/no-choice method to longer, hand-written algebraic and arithmetic problem-solving tasks such as the ones used in the current study. In addition, further work could be done to explore more of the variation in strategy selection and accuracy within each problem type. For example, the two algebra equations we used, while typical in secondary mathematics and algebra curricula, do not capture the full array of algebraic problem features students encounter. Similarly, the arithmetic problems in our assessment are primarily concerned with fraction and integer operations. Our results may have been influenced more by the specifics of these problem features rather than arithmetic problems more generally, limiting the generalizability of our findings to these problem domains. Future studies exploring the relationship between accuracy and strategy choice in different problem domains may wish to better account for general characteristics of problems in the mathematical domains of interest, as well as to increase the number of items and thus reliability. Similar to problem domain, future studies may wish to investigate the relationship between strategy selection and accuracy on word problems in mathematics. Prior work has shown that even expert mathematicians struggle to apply simple arithmetic procedures to word problems when the problem presents semantic content that is incongruent with the arithmetic solving procedure (Gros et al., 2019).
Extending our research questions to word-problem situations invoking flexible strategy use and application via problem statement would provide meaningful nuance to our understanding of the phenomenon, especially in this Common Core era of open-ended tasks and word problems in mathematics. Another limitation of the current study is the limited age and grade range of students in the sample (86.9% of students in the sample were between the ages of 14 and 16 years old, and 79.5% were in grades 9 or 10), precluding us from examining the effect of student age, grade, and by proxy grade-based curriculum, on problem-solving success. We recommend a thorough examination of situational variables related to the learner, taking into account student characteristics such as age, grade, math placement, and math performance as they relate to procedural flexibility. Related to situational variables, our sample comes from one specific high school, limiting the generalizability of our findings; future studies on the relationship between flexible strategy use and accuracy may wish to vary the school site and examine contextual factors related to the phenomenon. Finally, more qualitative research on students' encoding of flexible-eligible problems and how a student decides which strategy to apply is needed to better understand students' rationale for the strategies they employ. For example, secondary school students in Spain tend to prefer standard algorithms and approaches.

Conclusion

Our findings have implications for mathematics educators seeking to promote procedural flexibility in their classrooms. There is potential value in using this kind of task (in which students are prompted to re-solve a previously completed problem) for two reasons. First, our results are consistent with prior calls for the inclusion of this type of task, both as a student learning task as well as an assessment task (e.g., Blöte et al., 2001; Star & Seifert, 2006). Being asked to re-solve a problem prompts students to try to generate a different strategy, which can have the effect of building knowledge of multiple strategies and thus potentially flexibility. Second, the presence of multiple strategies, as promoted by this task, affords teachers the potential opportunity to engage students in thinking and discussion around some of the nuanced issues related to flexibility. These include which strategy students feel that they can more accurately and reliably use, as well as which strategy is better (and what 'better' means). Engaging students in such metacognition of strategy appropriateness may deepen student mathematical knowledge and flexibility (Heirdsfield & Cooper, 2002). Further, a task of this nature may prompt learners to recognize and take advantage of structural features in a problem that they may not have noticed when solving the first time, and this may help to develop their conceptual understanding and flexibility (Schneider et al., 2011). Our analysis contributes important nuance to our understanding of procedural flexibility with respect to the relationship between the use of standard and better-than-standard strategies and the accuracy of these strategies, adding complexity to debates about which strategies are better for problem solving.

Funding: The authors have no funding to report.
10,318
2022-11-16T00:00:00.000
[ "Mathematics" ]
Cloning of a cDNA encoding an aldehyde dehydrogenase and its expression in Escherichia coli. Recognition of retinal as substrate.

The biosynthesis of the hormone retinoic acid from retinol (vitamin A) involves two sequential steps, catalyzed by retinol dehydrogenases and retinal dehydrogenases, respectively. This report describes the cloning of a cDNA encoding a heretofore unknown aldehyde dehydrogenase from a rat testis library and its expression in Escherichia coli. This enzyme has been designated retinal dehydrogenase, type II, RalDH(II). The deduced amino acid sequence of RalDH(II) had the highest identity with mammalian aldehyde dehydrogenases that feature low Km values (μM) for retinal: human ALDH1 (72.2%), rat retinal dehydrogenase, type I (71.5%), bovine retina (72.7%), and mouse AHD-2 (71.5%). RalDH(II) expressed in E. coli recognizes as substrates free retinal, with a Km of ∼0.7 μM, and cellular retinol-binding protein-bound retinal, with a Km of ∼0.2 μM. RalDH(II) also can utilize as substrate retinal generated in situ by microsomal retinol dehydrogenases, from the physiologically most abundant substrate: retinol bound to cellular retinol-binding protein. Rat testis expresses RalDH(II) mRNA most abundantly, followed by (relative to testis): lung (6.7%), brain (6.3%), heart (5.2%), liver (4.4%), and kidney (2.7%). RalDH(II) does not recognize citral, benzaldehyde, acetaldehyde, and propanal efficiently as substrates, but does metabolize octanal and decanal efficiently. These data support a function for RalDH(II) in the pathway of retinoic acid biogenesis.

The hormone RA induces a variety of biological responses by modulating gene expression during development and postnatally, to control differentiation or entry into apoptosis of diverse cell types in numerous organs (1-3). Vertebrates require RA for normal hematopoiesis, reproduction, bone remodelling, and sustaining epithelia (4, 5). Yet, excessive RA causes toxicity, both in the embryo and postnatally (4, 6). Presumably, a combination of mechanisms closely controls RA concentrations in vivo, including regulation of biosynthesis and catabolism. RA biosynthesis from retinol (vitamin A) entails two sequential reactions with retinal as an intermediate (7, 8). At least three microsomal RoDH isozymes, members of the short-chain dehydrogenase/reductase gene family, catalyze the first and rate-limiting step that generates retinal from the substrate holo-CRBP (9, 10). The deduced structures of RoDH isozymes suggest globular proteins with only one transmembrane helix at the N terminus, bounded by four hydrophilic residues, consistent with an enzyme anchored to the endoplasmic reticulum, but exposed to the cytoplasm (11-13). Such topology would permit access to RoDHs by the cytosolic proteins CRBP and RalDHs. Liver expresses the mRNA of all three known RoDH isozymes, whereas extrahepatic tissues express RoDH(I) and RoDH(II) mRNAs in quantitatively differing patterns. Thus, RoDHs are distributed throughout multiple tissues, consistent with the widespread ability of tissues to synthesize RA (14), and each of the three RoDH isozymes has a characteristic tissue expression pattern, as do the RA receptors (15, 16). Fractionation of rat tissue cytosol by anion-exchange chromatography revealed at least four RalDH isozymes, with the three liver isozymes having K0.5 values for free retinal of ∼1 μM (17). The quantitatively major isozyme in liver, kidney, and testis, RalDH(I), formerly designated P1, has been purified from rat liver (17).
In addition to free retinal, RalDH(I) recognizes retinal bound to CRBP as substrate, suggesting that CRBP could facilitate relocation of the retinal product from RoDHs to RalDH(I). Attempts to clone the cDNA encoding RalDH(I) from a rat liver library were complicated by the identification of much more abundant aldehyde dehydrogenase cDNA clones, such as rat phenobarbital-inducible aldehyde dehydrogenase. Therefore, our attention turned to a rat testis cDNA library in an effort to avoid abundant hepatic aldehyde dehydrogenases. During screening for RalDH(I), we identified a cDNA clone, distinct from that of RalDH(I), which encodes a heretofore unknown aldehyde dehydrogenase. This report describes the cloning of a cDNA encoding this new aldehyde dehydrogenase, the distribution of its mRNA expression in tissues, and the enzymatic characteristics of the enzyme expressed in Escherichia coli. This isozyme has been designated RalDH(II) because it recognizes as substrate, with Km values < 1 μM, unbound retinal and retinal in the presence of CRBP, and also retinal generated in situ by microsomal RoDH from holo-CRBP.

MATERIALS AND METHODS

A cDNA Encoding RalDH(II)-Rat testis RNA served as the template for reverse transcriptase-PCR with the two primers, 5′-CTTGG(G/A)GG(G/A)AA(G/A)AGC-3′ and 5′-AC(T/C)GG(T/C)CCAAA(T/A/G)ATCTCCTC-3′. RNA (3.5 μg) was allowed to react with 3 μl of random hexamer (Invitrogen), 10 units of rRNase ribonuclease inhibitor and 45 units of avian myeloblastosis virus reverse transcriptase (Promega) in a total volume of 30 μl for 2 h at 39°C. The reaction mixture was diluted 1/20 with water, and 10 μl of the diluted solution were added to a PCR reaction mixture consisting of (final concentrations): 1 μM each primer, 1.5 mM MgCl2, 0.2 mM each dNTP, and 2 units of Taq DNA polymerase (Promega) in 50 μl of 10 mM Tris-HCl, pH 9.0, 50 mM KCl, 0.1% Triton X-100. PCR used 35 cycles of 2 min at 94°C, 2 min at 55°C, 3 min at 72°C, and 10 min at 72°C after the final cycle. The 0.4-kb products were gel-purified and cloned into pBluescript II SK(+). One clone had a 407-base pair insert with ∼70% nucleotide sequence identity with the known members of the aldehyde dehydrogenase superfamily.

Footnotes: This work was supported by National Institutes of Health Grant AG13566. The nucleotide sequence(s) reported in this paper has been submitted to the GenBank/EBI Data Bank with accession number(s) U60063 (RalDH(II)). To whom correspondence should be addressed: 140 Farber Hall, School of Medicine and Biomedical Sciences, SUNY-Buffalo, Buffalo, NY 14214. Tel.: 716-829-2032; Fax: 716-829-2661. The abbreviations used are: RA, all-trans-retinoic acid; CRBP, cellular retinol-binding protein, type I; IPTG, isopropyl-1-thio-β-D-thiogalactoside; PCR, polymerase chain reaction; RalDH(II), retinal dehydrogenase type II; RalDH(II)/rL, recombinant RalDH(II) expressed from the first ATG with an N-terminal histidine tag; RalDH(II)/rS, recombinant RalDH(II) expressed from the second ATG with an N-terminal histidine tag; RoDH, retinol dehydrogenase; PAGE, polyacrylamide gel electrophoresis; kb, kilobase(s); nt, nucleotide(s).
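To make the degenerate-primer notation above concrete, the following sketch expands a primer written with (G/A)-style alternatives into the full set of concrete oligonucleotides it denotes. The function name and output handling are illustrative, not from the paper.

```python
# Expand a degenerate primer written with (X/Y)-style alternative groups,
# as in the RT-PCR primers above, into every concrete sequence it encodes.
from itertools import product
import re

def expand_degenerate(primer: str) -> list[str]:
    """Expand '(X/Y/Z)' alternative groups into all concrete sequences."""
    # Split into fixed segments and alternative groups such as '(G/A)'.
    tokens = re.split(r"(\([ACGT/]+\))", primer)
    choices = []
    for tok in tokens:
        if tok.startswith("("):
            choices.append(tok.strip("()").split("/"))
        elif tok:
            choices.append([tok])
    return ["".join(parts) for parts in product(*choices)]

# The upstream primer encodes 2**3 = 8 concrete sequences.
print(expand_degenerate("CTTGG(G/A)GG(G/A)AA(G/A)AGC"))
```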
This insert was amplified by PCR with the two original primers, radiolabeled with [α-32P]dCTP with the Megaprime DNA Labeling Kit (Amersham), and used to screen a λgt11 rat testis cDNA library (Clontech) with a final wash at 45°C with 0.1× SSC and 0.1% SDS. After three rounds of screening, seven of the original 1.6 × 10^5 plaques were positive. The longest insert, ∼2.2 kb, was excised with EcoRI and cloned into pBluescript II SK(+) to produce pBC/RalDH(II). A series of unidirectional deletions of pBC/RalDH(II) were made and sequenced in both directions by dideoxy chain termination using the fmol DNA Sequencing System (Promega).

Northern Blots-The RNA blot was prepared with adult male rat tissue poly(A)+ as described previously (11). The probe for RalDH(II), a 796-nucleotide BssHII/EcoRI fragment from a unidirectional deletion mutant that contained nt 1474 through 2240 of pBC/RalDH(II), was labeled with [α-32P]dCTP by the method described above. Blot hybridization and washing were done under high-stringency conditions as reported (11). Autoradiography was done with Kodak XAR film and one intensifying screen at −70°C for 24 h.

RNase Protection Assays-The RalDH(II) probe was obtained by in vitro transcription of unidirectional deletion mutants with cDNA from RalDH(II). A plasmid containing nt 1 through 1853 of pBC/RalDH(II) was linearized with ScaI, which cut at nucleotide 1512 to produce a 341-nucleotide fragment consisting of nt 1513 through 1853. This fragment included 291 nt of 3′-untranslated cDNA. The 341-nucleotide fragment was used as template for transcription of 32P-labeled antisense riboprobes by T3 RNA polymerase (Ambion) for 1 h at 37°C in 10 mM dithiothreitol, 0.5 mM each ATP, CTP, and GTP, 5 μM UTP, and 50 μCi of [α-32P]UTP (800 Ci/mmol). The DNA template was removed through DNase I digestion. The 125-nucleotide-long antisense probe for rat β-actin mRNA was transcribed in vitro from p-TRI-β-actin-125-rat (Ambion) under the same conditions. Probes were purified with a 5% PAGE, 8 M urea gel. The ribonuclease protection assay was done with the RPA II Ribonuclease Protection Kit (Ambion). Total RNA (60 μg in each sample) was extracted with guanidinium thiocyanate/phenol/chloroform from adult male Sprague-Dawley rats and was co-precipitated with RNA probes (8 × 10^4 cpm) by 0.5 M ammonium acetate. Each precipitate was resuspended in 20 μl of hybridization buffer (80% deionized formamide, 100 mM sodium citrate, 300 mM sodium acetate, 1 mM EDTA, pH 6.4) and then was incubated at 45°C for 18 h. The same amount of probe was hybridized with 10 μg of yeast RNA as a control. After digestion with RNase A (1 unit/ml) and RNase T1 (40 units/ml) for 30 min at 37°C, the protected fragments were resolved on 5% polyacrylamide, 8 M urea gels and visualized by autoradiography at −70°C overnight with one intensifying screen. Quantitative comparisons were made with an LKB UltroScan XL densitometer.

Expression of RalDH(II) in E. coli-Two single-stranded primers were designed to contain nt sequences 27-44 and 1554-1572 from pBC/RalDH(II): CCGATCATATGCCCGGCGAGGTGAAG and AGGGATCCTGGGCCTCTTAGGAGTT, respectively (underlined nucleotides indicate NdeI and BamHI sites, respectively).
These primers were used to amplify the coding region of RalDH(II) by PCR (final conditions): 1 μM each primer, 200 ng of DNA template, 0.2 mM each dNTP, 2.5 units of Pfu DNA polymerase (Stratagene), 10 mM KCl, 6 mM (NH4)2SO4, 2 mM MgCl2, 0.1% Triton X-100, and 10 μg/ml nuclease-free bovine serum albumin in 0.1 ml of 20 mM Tris-HCl, pH 8.2. PCR was done with 35 cycles of 1 min at 94°C, 1 min at 72°C, 1 min at 52°C, and 10 min at 72°C after the final cycle. The 1.5-kb product was gel-purified and cloned into pBluescript II SK(+) by blunt-end ligation. The insert was excised with NdeI and BamHI, gel-purified, and ligated into pET-14b between the unique NdeI and BamHI sites to yield pET/RalDH(IIL). The insert of pET/RalDH(IIL) was sequenced. pET/RalDH(IIL) expresses an RalDH(II), RalDH(II)/rL, with an additional 20 amino acids on its N terminus containing a histidine tag: MGSSHHHHHHSSGLVPRGSH. E. coli strain BL21(DE3) was transformed with pET/RalDH(IIL) by the CaCl2 method. Bacteria were grown at 37°C until the absorbance at 595 nm reached 0.6. The temperature was lowered to 18°C, 0.4 mM IPTG was added, and incubation was continued for 24 h.

Isolation of RalDH(II)/r from E. coli-Bacteria were pelleted (3500 × g for 10 min), resuspended in 0.25 M sucrose, 1 mM EDTA, 10 mM Tris-HCl, pH 7.5, and were then disrupted with a French press. The lysate was treated with 200 units/ml DNase I for 15 min at ambient temperature in the presence of 2 mM phenylmethylsulfonyl fluoride, 20 μg/ml pepstatin, and 20 μg/ml leupeptin, and then was centrifuged at 10,000 × g for 30 min at 4°C. RalDH(II)/r was isolated from the supernatant by nickel affinity chromatography with the His-Bind Buffer Kit (Novagen, Madison, WI). Typically, 2 ml of supernatant (∼120 mg of protein) were applied to 1.6 ml of resin. The resin was washed with 16 ml of binding buffer, 9.6 ml of wash buffer and then eluted with 9.6 ml of eluting buffer (1 M imidazole, 0.5 M NaCl, 20 mM Tris-HCl, pH 7.9) as recommended by the manufacturer.

RA Synthesis-Unless stated otherwise, assays were done in duplicate (variation was ∼10% of the averages) in buffer A (20 mM Hepes, 150 mM KCl, 1 mM EDTA, and 2 mM dithiothreitol, pH 8.5) at 37°C with 400 ng of RalDH(II). Retinal was added in 2 μl of dimethyl sulfoxide to a final volume of 0.5 ml. When CRBP-bound retinal was substrate, preincubations of retinal and apo-CRBP were done at 37°C for 20 min to allow CRBP-retinal complex formation. Incubations were initiated by addition of 2 mM cofactor (final concentration) and protein and were done for 10 min. Controls were assays done without protein or without cofactors. Reactions were quenched and RA was quantified by normal-phase high performance liquid chromatography as described (18).

RalDH(II) Activity with Aldehydes Other Than Retinal-Assays were done in 1 ml of buffer A by monitoring for 5 min at 25°C the synthesis of NADH (340 nm). Substrates were added in ethanol (<10 μl), and the reactions were initiated by adding NAD (final concentration, 2 mM). At least six concentrations were used for determining kinetic constants of each substrate, ranging from 0.2- to 20-fold the Km value. Kinetic constants were determined under initial velocity conditions linear with time and protein.

Rat Tissues-Microsomes and RNA were prepared as described previously from the tissues of male rats (∼250 g) fed a chow diet (11).

Preparation of CRBP-CRBP was generated in E. coli with the vector pMONCRBP (19), as described (9).
The concentration of functional apo-CRBP was determined by saturating an aliquot with retinol, separating free and bound retinol by size-exclusion chromatography, and determining the A350/A280 ratio. Holo-CRBP denotes CRBP saturated with retinol, whereas CRBP-retinal denotes CRBP saturated with retinal.

RESULTS AND DISCUSSION

Nucleotide and Deduced Amino Acid Sequences of RalDH(II)-We selected testis RNA and cDNA, respectively, as templates for synthesizing a probe and for library screening to avoid abundant hepatic aldehyde dehydrogenases. Primers were designed for reverse transcriptase-PCR from two sequences of amino acids in rat phenobarbital-inducible aldehyde dehydrogenase (residues 267-273 and 399-404), which are highly conserved regions of the mammalian members of the aldehyde dehydrogenase superfamily (20). Reverse transcriptase-PCR with testis RNA produced a 0.4-kb probe (the anticipated size), with a nucleotide sequence only ∼70% identical with known aldehyde dehydrogenases (Fig. 1, nt 864 through 1270). Library screening with this probe identified an ∼2.2-kb clone, which was subcloned to yield pBC/RalDH(II). The single open reading frame in pBC/RalDH(II) has two possible initiator codons; the first starting with adenosine 27 and the second starting with adenosine 63 (Fig. 1). The first one represents the most likely translation initiation site in vivo, according to the Kozak rules (21), and predicts a protein of 511 amino acids. This protein has all 23 of the strictly conserved residues of the aldehyde dehydrogenase superfamily, expressed in phylogenetically diverse organisms (20). It also has all 66 residues that are either strictly conserved or "invariantly similar" in the superfamily (20). The protein encoded by pBC/RalDH(II) has the highest amino acid identities and similarities, in parentheses respectively, with chicken (73, 86%), bovine retina (73, 87%), human ALDH1 (72, 87%), rat RalDH(I) (72, 85%), mouse AHD-2 (72, 85%), and rat phenobarbital-inducible (71, 85%) aldehyde dehydrogenases (22-27). Thus, the overall amino acid sequence similarity, as well as the conservation of specific amino acid residues, denote a heretofore unrecognized aldehyde dehydrogenase. The high amino acid similarity with rat RalDH(I), mouse AHD-2, bovine retina aldehyde dehydrogenase, and human ALDH-1 is especially intriguing, because these enzymes catalyze the conversion of retinal into RA with Km values < 1 μM, except for bovine retina aldehyde dehydrogenase, which has a Km value of ∼9 μM (17, 27-30). These enzymes, however, have not been tested for recognition of retinal in the presence of CRBP or retinal generated in situ from holo-CRBP.

RalDH(II) from E. coli-RalDH(II) was expressed in E. coli to study its enzymatic characteristics. Judging from SDS-PAGE analysis, ∼5% of E. coli protein was RalDH(II)/rL (Fig. 3). Similar results were obtained with the shorter recombinant product, expressed from the second putative initiator (data not shown). RalDH(II)/rL was easily purified via nickel affinity chromatography to a single band on SDS-PAGE. About 1 mg of purified RalDH(II)/rL was obtained from a 100-ml incubation.

RalDH(II)/rL and Retinal-RalDH(II)/rL catalyzed the conversion of 2 μM retinal into RA at a rate linear for at least 12.5 min and up to 0.4 μg of purified protein and functioned optimally at ∼pH 8.5 (not shown). RalDH(II)/rL recognized NAD with a Km of 70 ± 0.6 μM and NADP with a Km of 400 ± 7 μM (± S.E., Enzfitter (31)). The Vmax in the presence of NAD was 5-fold higher than that with NADP.
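Kinetic constants like those above are obtained by nonlinear regression of initial velocities against substrate concentration. The paper used the Enzfitter program; the following is only an illustrative substitute using scipy, with made-up velocity data.

```python
# A minimal sketch of Michaelis-Menten fitting by nonlinear least squares.
# The data points below are hypothetical, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Hypothetical initial-velocity data: substrate (uM) vs rate (nmol/min/mg).
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0, 6.0])
v = np.array([13.0, 24.0, 44.0, 62.0, 78.0, 90.0, 94.0])

(vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[100.0, 1.0])
vmax_se, km_se = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.0f} +/- {vmax_se:.0f} nmol/min/mg, "
      f"Km = {km:.2f} +/- {km_se:.2f} uM")
```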
The apparent Km of RalDH(II)/rL for free retinal (i.e. unbound with CRBP) was 0.7 ± 0.3 μM, and the Vmax was 105 ± 4 nmol/min/mg of protein (± S.D., n = 3) (Fig. 4, top panel). The kinetic constants were re-evaluated in the presence of a 2 molar excess of apo-CRBP at each retinal concentration, which generated an apparent Km of 0.2 ± 0.06 μM and a Vmax of 62 ± 15 nmol/min/mg of protein (n = 3) (Fig. 4, bottom panel). CRBP binds retinal with a Kd between 50 and 100 nM (19, 32). In the presence of a 2 molar excess of CRBP, the concentration of unbound retinal would range from 0.042 μM, at a total retinal concentration of 0.1 μM, to 0.1 μM at a total retinal concentration of 6 μM (calculating from a Kd of 100 nM), i.e. it would remain practically constant. In contrast, the CRBP-retinal concentration would range from 0.058 μM to 5.9 μM. The Michaelis-Menten relationship must therefore occur between CRBP-retinal and the rate of RA production, suggesting that RalDH(II)/rL recognizes retinal in the presence of CRBP as substrate. Because retinal has very poor solubility in aqueous media and CRBP occurs in excess relative to retinal in vivo, retinal in cells would likely occur bound to CRBP, in equilibrium with membranes and/or proteins. The ability of RalDH(II)/rL to catalyze the synthesis of RA in the presence of CRBP-retinal shows that it functions under conditions that more closely model physiological conditions than retinal dispersed in the aqueous medium. Because holo-CRBP is the most abundant form of retinol in vivo, and NADP-dependent microsomal RoDHs are the most active retinol dehydrogenases with respect to holo-CRBP as substrate (33), the ability of RalDH(II)/rL to utilize as substrate retinal generated in situ by microsomal RoDHs from holo-CRBP was determined. Apo-CRBP also was included to ensure complete binding of retinol (Kd = 0.1-1 nM (32)) and because this combination of holo-CRBP and apo-CRBP most closely approximates conditions in vivo. [Displaced figure legend: RNase protection assay of tissues including heart and brain; lane 1, marker DNA; lane 8, digested yeast control; lane 9, probe; anticipated protected fragments observed at 341 nt.] Generation of RA from the retinal produced in situ from holo-CRBP by microsomal RoDHs increased with increasing amounts of RalDH(II)/rL titrated into the incubation mixture (Fig. 5). Maximum RA synthesis from holo-CRBP occurred only in the presence of both microsomes and RalDH(II)/rL along with both cofactors, whereas measurable but much less RA production was observed with a combination of microsomes and RalDH(II)/rL if one cofactor was omitted (Table I). Consistent with previous results, microsomes alone in the presence of both NADP and NAD produced little RA (17, 34). These data show that the NAD-supported RalDH(II)/rL can use retinal generated by the microsomal, NADP-dependent RoDHs to biosynthesize RA.

RalDH(II)/rL Substrate Specificities-RalDH(II)/rL does not recognize citral as substrate and inefficiently catalyzes the dehydrogenations of acetaldehyde, benzaldehyde, and propanal (Table II). These are the prototypical substrates used to assay and classify the mammalian aldehyde dehydrogenases. Thus, RalDH(II) differs from many other members of the superfamily not only in primary amino acid sequence, but also in substrate specificity. Medium-chain aldehydes, however, were metabolized efficiently by RalDH(II).
Of the eight aldehyde dehydrogenase substrates assayed, octanal and decanal had the most favorable Vmax/Km values, whereas retinal had the lowest Km. Although the Vmax/Km for retinal was 3- to 4-fold lower than the values for octanal and decanal, the apparent Km for retinal was an order of magnitude lower. Thus, RalDH(II) can catalyze the dehydrogenation of retinal at much lower concentrations than the other substrates assayed. It should not surprise that medium-chain aldehydes would be accommodated by an active site that recognizes retinal. Octanal and decanal are similar in length to the side chain of retinal and enjoy greater flexibility. Perhaps their greater flexibility and more simple structures (lack of double bonds, lack of methyl groups) and the absence of a conjugated carbonyl account for their greater rates of dehydrogenation, even though they are apparently accommodated less efficiently by the active site.

Characteristics of RalDH(II)/rS-RalDH(II)/rS, the shorter protein expressed in E. coli from the second possible initiator, had enzymatic characteristics similar to those of RalDH(II)/rL. It was NAD-dependent, had a pH optimum of ∼8.5, recognized free retinal with a Km of ∼1 μM, and was inefficient in metabolizing benzaldehyde, acetaldehyde, propanal, and hexanal, but metabolized octanal and decanal efficiently, with Vmax/Km ratios of 175 and 148, respectively. The shorter N terminus, therefore, does not affect activity markedly. This affords the possibility that alternative forms of RalDH(II) are translated in different cell types or under different conditions.

Concluding Summary-The designation of the isozyme reported here as RalDH(II) seems justified by several criteria. Firstly, the amino acid sequence of RalDH(II) has the greatest identity and similarity with RalDH(I) and other aldehyde dehydrogenases that catalyze the conversion of retinal into RA. Secondly, the enzyme recognizes "free" retinal with a low Km and also recognizes retinal in the presence of CRBP with a low Km. Thirdly, RalDH(II) can use retinal generated in situ from holo-CRBP/apo-CRBP by microsomal RoDHs as substrate for RA synthesis. These conditions approximate those that occur in RA-producing tissues. Fourthly, although RalDH(II) recognizes aldehydes other than retinal, their structures are simpler than that of retinal. Their ability to enter into an active site that accommodates retinal would not be unusual, whereas it would be more unusual for the more rigid, branched, and polyunsaturated molecule, retinal, to adapt to a nonspecific enzyme with a low Km and a physiologically effective Vmax.
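As a numerical cross-check of the free versus CRBP-bound retinal estimate quoted under "RalDH(II)/rL and Retinal": for total ligand Rt, total binding protein Pt, and dissociation constant Kd, the complex concentration c solves c^2 - (Rt + Pt + Kd)c + Rt*Pt = 0, and the sketch below reproduces the quoted values. Function names are illustrative.

```python
# Equilibrium partition of retinal between free and CRBP-bound pools.
import math

def bound_and_free(rt, pt, kd):
    """Return (complex, free ligand) at equilibrium; concentrations in uM."""
    b = rt + pt + kd
    c = (b - math.sqrt(b**2 - 4.0 * rt * pt)) / 2.0  # physically meaningful root
    return c, rt - c

kd = 0.1  # Kd of 100 nM, the upper estimate cited for CRBP-retinal
for rt in (0.1, 6.0):
    c, free = bound_and_free(rt, 2.0 * rt, kd)  # 2 molar excess of apo-CRBP
    print(f"total retinal {rt} uM: bound {c:.3f} uM, free {free:.3f} uM")
# -> free retinal stays near 0.042-0.1 uM while CRBP-retinal spans ~0.06-5.9 uM,
#    matching the text's argument that free retinal is practically constant.
```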
5,087.6
1996-07-05T00:00:00.000
[ "Biology" ]
A large-scale metagenomic survey dataset of the post-weaning piglet gut lumen

Abstract

Background: Early weaning and intensive farming practices predispose piglets to the development of infectious and often lethal diseases, against which antibiotics are used. Besides contributing to the build-up of antimicrobial resistance, antibiotics are known to modulate the gut microbial composition. As an alternative to antibiotic treatment, studies have previously investigated the potential of probiotics for the prevention of post-weaning diarrhea. In order to describe the post-weaning gut microbiota, and to study the effects of two probiotic formulations and of intramuscular antibiotic treatment on the gut microbiota, we sampled and processed over 800 faecal time-series samples from 126 piglets and 42 sows.

Results: Here we report on the largest shotgun metagenomic dataset of the pig gut lumen microbiome to date, consisting of >8 Tbp of shotgun metagenomic sequencing data. The animal trial, the workflow from sample collection to sample processing, and the preparation of libraries for sequencing are described in detail. We provide a preliminary analysis of the dataset, centered on a taxonomic profiling of the samples, and a 16S-based beta diversity analysis of the mothers and the piglets in the first 5 weeks after weaning.

Conclusions: This study was conducted to generate a publicly available databank of the faecal metagenome of weaner piglets aged between 3 and 9 weeks old, treated with different probiotic formulations and intramuscular antibiotic treatment. Besides investigating the effects of the probiotic and intramuscular antibiotic treatment, the dataset can be explored to assess a wide range of ecological questions with regards to antimicrobial resistance, host-associated microbial and phage communities, and their dynamics during the aging of the host. Users of this dataset can find all relevant details on experimental design, experimental approach and primary data processing in this manuscript.

- The authors have removed most of the concerning analysis details. A few comments:

1) Data description / OTUs. Are OTUs used or "ASVs"? I understand SortMeRNA was used, which includes QIIME v1.x, but current methods use a DADA2-like approach moving away from OTUs (as QIIME v2.x has for a long period already)?

RE: Although we are aware of the advances that ASV inference methods have led to in the analysis of 16S amplicon sequencing data, there are no rigorous, performant methods that we are aware of to obtain ASVs from shotgun metagenomic reads. We therefore left the existing analysis unchanged.

2) Data description: to largely describe (overall picture), a simple alpha-diversity plot and beta-diversity ordination plot would be an easy/quick way that brings more meaningful insight in describing the overall samples and the individual variation observed. Now all data of the various groups is averaged per piglet or mother in a bar diagram/Krona plot, which does not make sense since there is a large diversity within those groups of treatments.

RE: With regards to the diversity, an analysis of diversity (included in our first submission) suggested that differences in microbial community composition (alpha and beta diversity) between treatment groups were mild, while more prominent shifts of diversity were detected between samples from distinct time points (age of the piglets). The revised manuscript now includes a PCoA to describe the beta diversity of samples.
We removed the Krona plot of the mothers, and merged the Krona plot of the piglets into a panel in a combined figure with the PCoA (beta diversity). The PCoA plots highlight the strong effect of time/aging, and the (dis)similarity between the mothers and piglets at distinct time points in the trial. In the revised submission we also report alpha diversity indices.

3) Supplement Fig 2: make clearer that frequency is N samples. Also add the bin size in the legend for both subfigures.

RE: Done.

4) Supplement Fig 4: would this be a possible result of using CFU, which does not take into account dead/alive ratios? That discussion seems missing in the current text.

RE: Yes, that is true. This has now been added to the manuscript.

Pig trial and sample collection

Animal studies were conducted at the Elizabeth Macarthur Agricultural Institute (EMAI) NSW, Australia and were approved by the EMAI Ethics Committee (Approval M16/04). The trial animals comprised 4-week-old male weaner pigs (n=126) derived from a commercial swine farm and transferred to the study facility in January 2017. These were cross-bred animals of "Landrace", "Duroc" and "Large White" breeds and had been weaned at approximately 3 weeks of age (Supplementary Figure 1). Each room had nine pens, consisting of a set of six and a set of three pens, designated a-f and g-i respectively, with the two sets of pens being physically separate, i.e. animals could come in contact with each other through the pen's bars within each set of pens, but not between sets. The rooms were physically separated by concrete walls and contamination between rooms was minimized by using separate equipment (boots, gloves, coveralls) for each room. In addition, under-floor drainage was flushed twice weekly and the flushed faeces/urine was retained in under-floor channels that ran the length of the facility, so that Rooms 1, 2 were separate from Rooms 3, 4 and flushing was in the direction 1 to 2 and 3 to 4. The pigs were fed ad libitum a commercial pig grower mix of 17.95% protein, free of antibiotics, via self-feeders. On the day of arrival (day 1), 30, 18, 18, and 60 pigs were allocated randomly to Rooms 1, 2, 3 and 4 respectively, in groups of 6, 6, 6 and 6-7 pigs per pen respectively (Supplementary Figure 1A). Pigs were initially weighed on day 2, and some pigs were moved between pens to achieve an initial mean pig weight per treatment of approximately 6.5 kg (range: 6.48-6.70; mean±SD: 6.53±0.08). Pigs were weighed weekly throughout the trial, and behaviour and faecal consistency scores were taken daily over the 6-week period of the trial (Supplementary Table 2). Developmental and commercial probiotic paste preparations ColiGuard® and D-Scour™ from International Animal Health were used in some treatment groups. The animals were acclimatised for 2 days before the treatments were administered. Samples (Supplementary Table 2) were obtained throughout this study. At the end of the trial period, all samples were transported from EMAI to the University of Technology Sydney (UTS) for further processing. The experimental workflow is schematically represented in Figure 3.

Positive controls

As a positive control "mock community" for this study, four Gram positive (Bacillus

DNA extraction

Piglet and sow faecal samples, mock community samples, negative controls and probiotic samples (D-Scour™ and ColiGuard® paste) were allocated to a randomized block design to control for batch effects in DNA extraction and library preparation.
The faecal samples were thawed on ice first, followed by the probiotics and mock community samples. MetaPolyzyme (Sigma-Aldrich) treatment was performed according to the manufacturer's instructions, except for the dilution factor, which we allowed to be 4.6 times higher. Immediately after incubation, DNA extraction was performed with the MagAttract PowerMicrobiome DNA/RNA EP kit (Qiagen) according to the manufacturer's instructions. Quantification of DNA was performed using PicoGreen (Thermofisher) and measurements were performed with a plate reader (Tecan, Life Sciences) using 50 and 80 gain settings. All samples were diluted to 10 ng/µL.

Library preparation

Sample index barcode design using a previously introduced method 4 yielded a set of 96 × 8 nt sequences with a 0.5 mean GC content and none of the barcodes containing 3 or more identical bases in a row. Nine hundred sixty different combinations of i5 and i7 primers were used to create a uniquely barcoded library for each sample. The detailed sample-to-barcode assignment is given in Supplementary Table 3. Library preparation was carried out using a low-bias modification of the Nextera Flex protocol, called Hackflex, that allows the production of low-cost shotgun libraries.

Normalization and sequencing

The master pool was sequenced on an Illumina MiSeq v2 300 cycle nano flow cell (Illumina, USA). Read counts were obtained and used to normalise libraries. The liquid handling robot OT-One (Opentrons) was programmed to re-pool libraries based on the read counts obtained from the previous MiSeq run. The code used to achieve the normalization is available through our Github repository. The read count distribution after normalisation is displayed in Supplementary Figure 2. The normalized and purified pooled library was sequenced on an Illumina NovaSeq 6000 S4 flow cell at the Ramaciotti Centre for Genomics (Sydney, NSW, Australia), generating a total of 27 billion read pairs from 911 samples.

Comparison of the expected and the observed taxonomic profile of the positive controls

All the mock community members, in seven of the eight technical replicates, were detected by MetaPhlAn2 (version 2.7.7) (Supplementary Figure 3). An additional 25 taxa were detected, of which 18 and 7 were identified at the species and at the genus level, respectively. Contaminants were present at a higher concentration in three technical replicates (R3, R7, R8), with the most frequent contaminant (Methanobrevibacter spp.) being present in 5 of the 8 replicates (Supplementary Figure 5). ColiGuard® contained a total of 20 contaminants, of which 16 and 4 were identified at the species and the genus level, respectively. Contaminants were present at a higher level in two technical replicates (R5, R7), with R7 displaying the most diverse and highest contamination rate (R7: 14 taxa; total contaminating reads: 2.67%; R5: 9 taxa; total contaminating reads: 0.30%).

Technical controls in metagenomic studies and methodological limitations

Taxonomic assignment of the raw reads from the positive controls was performed with MetaPhlAn2 8, which relies on ca. 1M unique clade-specific markers derived from 17,000 reference genomes. Such a database suffices for mapping the positive controls against, as these organisms are cultivable and, for this reason, widely studied, hence their sequences are known.
This is not the case for real-world samples, where mapping against a database (the completeness of which relies on studied and often cultivable organisms) would narrow the view on the true diversity within the sample. Positive controls with well-studied members and known ratios within the samples have proven to be a valuable approach to assess consistency among technical replicates across batches and to detect possible biases derived from the DNA extraction method. Systematic taxonomic bias in microbiome studies, resulting from differences in cell wall structures between Gram positive and Gram negative bacteria, has previously been reported; bead beating and sample treatment with enzymatic cocktails can modestly reduce this bias 9-12. Although we implemented such steps in our workflow, it seems that, from the read abundance of our mock community, which contained three

In terms of contamination we concluded that: a) contamination in our study was not batch specific; b) a problem of sample cross-contamination may have occurred at the DNA extraction step between neighbouring wells. During the bead-beating step of DNA extraction, the deep-well plate is sealed with a rubber sealing mat, rotated and placed in a plate shaker for the bead beating to take place. As leakage was observed around the wells despite the presence of the sealing mat, we consider that sample cross-contamination is most likely to occur during this step.

Taxonomic profiling of samples

All

Alpha and beta diversity

The abundance profile of all samples, based on the 16S rRNA reads that passed

Potential uses

This dataset can be utilised to assess a broad range of ecological questions pertaining to host-associated microbial communities of the post-weaning piglet. These include the assessment of: 1. the compositional and functional core faecal microbiome of the post-weaning piglet, 2. the microbial changes that piglets undergo between the first and the 5th week after weaning, 3. the degree of strain-host specificity, 4. the variability of microbiomes within or between host species, 5. the variability of microbiomes between different cross-breeds and small age differences of the hosts, 6. the degree of strain transfer from mothers to piglets, 7. the effects of two probiotic treatments and of intramuscular antibiotic treatment on the post-weaning pig faecal microbiome, 8. species co-occurrence and co-exclusion, 9. the repertoire of antimicrobial resistance genes and how it is impacted by antibiotic and probiotic treatment, 10. the extent of within-host and population evolution of microbes over a 5-week period.

Data availability

The sequencing reads from each sequencing library have been deposited at NCBI

Scholarships. NSW DPI approved the paper before submission for publication.

Competing interests

D-Scour™ was sourced from International Animal Health Products (IAHP). ColiGuard® was developed in a research project with NSW DPI, IAHP and AusIndustry Commonwealth government funding.

Author contributions

Pig
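As a companion to the library preparation section above, here is a minimal sketch of the stated barcode constraints (8 nt, GC content 0.5, no run of three or more identical bases). The published design used the cited method and presumably further criteria (e.g., pairwise distance between barcodes), so this per-barcode filter is only illustrative; the text specifies a mean GC content of 0.5 across the set, for which requiring GC = 0.5 in every barcode is a stricter, sufficient condition.

```python
# Enumerate candidate 8-nt barcodes satisfying the constraints quoted above.
import itertools
import re

def valid_barcode(seq: str) -> bool:
    gc = sum(base in "GC" for base in seq) / len(seq)
    no_homopolymer = re.search(r"(.)\1\1", seq) is None  # no 3+ identical in a row
    return gc == 0.5 and no_homopolymer

candidates = ("".join(p) for p in itertools.product("ACGT", repeat=8))
barcodes = [s for s in candidates if valid_barcode(s)]
# Far more than the 96 barcodes needed remain, leaving room to also enforce
# pairwise distance when selecting the final i5/i7 sets.
print(len(barcodes), barcodes[:5])
```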
2,886.8
2021-06-01T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Relationship classification based on dependency parsing and the pretraining model

As an important part of information extraction, relationship extraction aims to extract the relationships between given entities from natural language text. Based on the pretraining model R-BERT, this paper proposes an entity relationship extraction method that integrates an entity dependency path and a pretraining model, which generates a dependency parse tree by dependency parsing, obtains the dependency path of an entity pair via the given entities, and uses the entity dependency path to exclude information such as modifier chunks and useless entities in sentences. This model has achieved good F1 value performance on the SemEval-2010 Task 8 dataset. Experiments on datasets show that dependency parsing can provide context information for models and improve performance.

Introduction

Information extraction (IE) aims to extract structured information from large-scale semistructured or unstructured natural language text (Golshan et al. 2018). Information extraction tasks are applied, for example, to knowledge graph construction, information retrieval (Wu and Weld 2010), question-answering systems and text summarization. Entity relationship extraction is an important part of information extraction tasks, and its results will affect the performance of follow-up tasks. Entity relationship extraction based on deep learning falls into one of the following two major categories: supervised entity relationship extraction and distantly supervised entity relationship extraction. In supervised entity relationship extraction, entity relationship extraction can be achieved by either pipeline learning or joint learning (Li et al. 2020). The pipeline learning method extracts the relationships between entities directly based on entity identification, and the joint learning method identifies entities while extracting the relationships between entities based on an end-to-end neural network model. Compared with supervised entity relationship extraction, distantly supervised entity relationship extraction, due to the lack of a human-annotated dataset, takes one more step to distantly align the knowledge base to label unlabeled data. To construct the relationship extraction model, there is little difference between distantly supervised entity relationship extraction and the pipeline learning method of supervised entity relationship extraction. The main difference between supervised and distantly supervised entity relationship extraction is the difference in the annotation level of the dataset. For the supervised method, entity and relationship types are given in the dataset. At this time, the relationship extraction task can be performed by the classification task method. With the rise of pretraining models, entity relationship extraction tasks have gradually moved toward pretraining-based methods. Researchers have achieved very good results by only fine-tuning the pretraining model BERT and then performing entity relationship extraction experiments. In 2019, the Alibaba team (Wu and He 2019) took the lead in applying BERT to relationship extraction tasks and achieved the best results at that time. The result led to more researchers focusing on the pretraining model.
After these experiments, most entity relationship extraction models have been based on pretraining models, usually by training their own pretraining models after changing the initialization parameters of the BERT structure or integrating external knowledge. Peng et al. proposed in 2020 that the existing models paid too much attention to the impact of the whole sentence on the relationship classification, without considering the noise caused by content such as modifier chunks in sentences. Moreover, external knowledge has been introduced to assist models in sentence classification, but the syntactic knowledge of the sentence is ignored. This paper proposes a method using dependency parsing, which establishes a dependency tree for each data point and obtains the shortest dependency path between entities via dependency trees. This paper focuses on the word information in the dependency path between entities, rather than on the types of dependency relationships between words used in past work. Parsing is used to enhance the context information the model learns, to avoid noise caused by information such as modifier chunks and unannotated entities in sentences.

Related work

Parsing is one of the key technologies in natural language processing, and its basic task is to determine the syntactic structure of a sentence or to clarify the dependency relationships between words in a sentence. Dependency parsing analyzes a sentence into a dependency syntax tree to describe the dependency relationships between words. The dependency relationship is represented by a directed arc, which is called the dependency arc. The shortest dependency path refers to the shortest path between two words in the dependency syntactic structure. The entity dependency path is the shortest path between two entity nodes in the dependency syntactic structure. The shortest dependency path can express the syntactic relationships between two nodes. According to the characteristics of the shortest dependency path, the entity dependency path can concisely express the syntactic relationships between entities, remove modifier chunks, and retain the backbone that can clearly express the entity relationships. Therefore, dependency parsing is widely used in relationship extraction. Entity relationship extraction, as one of the most critical tasks in natural language processing, is widely used in fields such as information extraction, natural language understanding, and information retrieval. Early relationship extraction methods include feature-based methods and kernel-based methods. Syntactic knowledge was used for relationship extraction as early as the feature-based methods. Today's relationship extraction methods can be divided into two categories: statistical relationship extraction and neural relationship extraction. Statistical relationship extraction annotates the relationships of the target entity pair in the text based on traditional machine learning methods. Among them, classical entity relationship extraction methods can be divided into four categories: supervised, semisupervised, weakly supervised and unsupervised methods, which are distinguished by the degree of annotation in the dataset. Neural relationship extraction applies deep learning to relationship extraction tasks, and the entity relationship extraction tasks of deep learning can be divided into supervised tasks and distantly supervised tasks. Among the classical statistical relationship extraction methods, Zhou (Zhou et al. 2005) and Guo Xiyue et al. (Guo et al.
2014) used SVM as a classifier to study the effects of lexical, syntactic and semantic features on entity relationship extraction. Craven et al. (Craven and Kumlien 1999) first proposed the idea of weakly supervised machine learning in the process of extracting structured data from text to establish a biological knowledge base; Brin (Brin 1998) used the bootstrapping method to extract the relationships between named entities. Hasegawa et al. (Hasegawa et al. 2004) first proposed an unsupervised method for extracting relationships between named entities at the ACL meeting. Traditional methods have the error propagation problem of feature extraction, so entity relationship extraction methods based on deep learning, which can effectively solve this problem, have been considered and have achieved good results. Zeng et al. (Zeng et al. 2014) first proposed using a CNN to extract the meaning of a word and applying softmax for classification in 2014. Zhang et al. (Zhang and Wang 2015) proposed using Bi-LSTM for relationship classification in 2015. Xu et al. (Xu et al. 2015) reintroduced the traditional method and proposed a CNN based on the shortest dependency path. In addition, in recent years, many attention-based models have been applied to relationship extraction tasks. Katiyar et al. (Katiyar and Cardie 2017) first used an attention mechanism and Bi-LSTM to jointly extract entities and classify relationships in 2017. Scholars have also proposed a variety of improvements based on basic methods, such as the fusion of PCNN and multi-instance learning (Zeng et al. 2015) and the fusion of PCNN and an attention mechanism (Lin et al. 2016). Ji et al. (Ji et al. 2017) proposed adding entity description information based on PCNN and attention to assist in learning entity representations. The COTYPE model proposed by Ren et al. (Ren et al. 2016) and the residual network proposed by Huang (Huang and Wang 2017) both enhanced the effect of relationship extraction. After the pretraining model was proposed, Wu et al. first applied the pretraining model to relationship extraction tasks in 2019 and explored the mode of combining entities and entity locations in the pretraining model by adding identifiers before and after the entities to indicate the entity locations rather than using the traditional location vector. The best results were achieved at that time, which led more researchers to focus on the pretraining model. Later, Livio Baldini Soares et al. (Soares et al. 2019) from the Google team proposed the pretraining model BERT-EM + MTB in 2019. In that paper, the effects of input and output on the results of relationship classification under different conditions were discussed, and the matching-the-blanks pretraining task was proposed according to the results to eliminate the error caused by the overutilization of entities. Peng et al. (Peng et al. 2020) conducted experiments based on BERT and MTB in 2020, explored the information types used by the existing models in entity relationship extraction tasks, designed experiments, and finally concluded that the existing models did not make full use of context information. Chen et al. proposed a new pretraining model in 2021 that integrated entity type information during pretraining, conducted experiments on multiple relational classification datasets, and achieved good results on small sample datasets. In this paper, the BERT model is used for experiments.
In addition to the entity information, the entity dependency path is used as a syntactic representation, and sentence information, entity information and syntactic information together form the sentence representation used for relationship classification.

Model introduction

In supervised entity relationship extraction tasks, since the dataset has fully annotated entities and the corresponding relationship types are given, existing models treat these tasks as classification tasks. The model outputs a vector as a sentence representation and predicts the relationship type. This paper proposes a model framework that uses context information for relationship extraction; its architecture is shown in Fig. 1. In this paper, the pretraining model BERT is used as the basic model for relationship extraction, and its structure includes three parts. Given a sentence, the shortest dependency path between entities is obtained first after dependency parsing, which, together with the sentence, is used as the input to the model. The tokens of the sentence obtained through word segmentation are input to an encoder to obtain the vector representation of each token, and the sentence vector, entity vectors and dependency vector are concatenated to obtain the final representation of the sentence, which is also the final vector for classification. This vector is input to the softmax classifier for prediction.

Dependency syntactic parsing

Parsing is one of the key technologies in natural language processing, and its basic task is to determine the syntactic structure of a sentence or the dependency relationships between words in a sentence. Dependency syntax was first proposed by the French linguist L. Tesniere (1959) in his works, which analyzed a sentence into a dependency syntax tree to describe the dependency relationships between words. In the structure of dependency grammars, there is a direct dependency relationship between words forming a dependency pair, one of which is the core word, also known as the governing word, and the other is called the modifier, also known as the dependent word. The dependency relationship is represented by a directed arc, called the dependency arc. Take the sentence ' ' as an example for dependency parsing. The analyzed dependency relationship of the sentence is shown in Fig. 2. To obtain the entity dependency path, the dependency tree should be obtained from the dependency structure of the sentence first. According to the dependency tree and the annotated entities, the path between entities e1 and e2 on the dependency tree can be found, which is the entity dependency path. The entity dependency path is shown in Fig. 3, where the red nodes represent entity nodes, and the dotted line is the entity dependency path. A sketch of this path extraction is given below.

Input

The pretraining model BERT is a multilayer bidirectional Transformer encoder. The input to BERT can be a sentence or a pair of sentences. A special tag [CLS] is the first tag of each sequence. Given a sentence S, the dependency parse tree is obtained through dependency parsing, and the shortest dependency path between entities is found according to the target entities (e1, e2). To prevent the path length from being 0 when one entity directly depends on the other, the entity dependency path is defined to include the entities themselves. Special identifiers are inserted before and after the two target entities to emphasize the entities and assist the model in capturing the locations of the entities.
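As referenced above, extracting the shortest dependency path between two annotated entities amounts to a shortest-path search on the (undirected) dependency tree. The paper does not specify its parser, so the tool choices below (spaCy for parsing, networkx for the path search) and the example sentence are illustrative.

```python
# A hedged sketch of shortest-dependency-path (SDP) extraction; requires the
# spaCy model "en_core_web_sm" to be installed.
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def shortest_dependency_path(sentence: str, e1: str, e2: str) -> list[str]:
    doc = nlp(sentence)
    # Treat the dependency tree as an undirected graph over token indices.
    graph = nx.Graph((tok.i, child.i) for tok in doc for child in tok.children)
    # Locate each entity mention (first matching token, for simplicity).
    i1 = next(tok.i for tok in doc if tok.text == e1)
    i2 = next(tok.i for tok in doc if tok.text == e2)
    # The returned path contains the entity nodes themselves, as in the paper.
    return [doc[i].text for i in nx.shortest_path(graph, source=i1, target=i2)]

print(shortest_dependency_path(
    "The burst has been caused by water hammer pressure.", "burst", "pressure"))
# -> an entity-inclusive path, e.g. ['burst', 'caused', 'by', 'pressure']
```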
The processed sentence and the entity dependency path are entered into the model. The locations of the node words can be obtained from the entity dependency path and are input as one-hot vectors over the path. A [CLS] tag is added to the beginning of the sentence, and the data are input into a tokenizer to obtain its token sequence. The vector representation of each token is generated by the encoder.

Sentence representation

The hidden state sequence H output by the BERT module corresponds to each token. H0 is the hidden state vector corresponding to the [CLS] token, for which an activation operation is performed; the result is input to a fully connected layer, and the resulting vector H0' is used as the representation of the sentence vector. The hidden state vectors of the two pairs of entity tags are represented by Hi, Hj, Hm and Hn. For the vectors of the target entities, the vectors between Hi and Hj represent entity e1, and the vectors between Hm and Hn represent entity e2. These vectors are summed and averaged to obtain a single representation, for which a tanh activation is performed, and the result is input to a fully connected layer to obtain the required entity vector representation. For the vector representation of the dependency syntax, the same method as for the entity vectors is adopted. POS represents the position of a word in the sentence; according to the shortest dependency path, we can obtain the positions in the sentence of the words on the path. Then, tanh activation is performed, and the result is input to a fully connected layer to obtain a single syntax vector representation. After the single representations of the sentence vector, the two entity vectors and the dependency syntactic vector are obtained, the four vectors are concatenated to obtain a vector z. The vector z is input to a fully connected layer, and the resulting vector is the final sentence representation r used for classification. A sketch of this fusion step is given after the dataset description below.

Classification

Given an input x containing the entire sentence and the analyzed shortest dependency path sequence, a vector representation r can be obtained by inputting x to the relational encoder. After the relationship representation is obtained, a fully connected softmax layer is used to predict the relationship of the sentence. Then, a probability distribution P covering all predefined relationship types is obtained: P(y|x, θ) = softmax(W_r r + b_r), where y ∈ Y is the target relationship type, and θ refers to all learnable parameters, including W_r and b_r.

Dataset

In this experiment, the SemEval-2010 Task 8 dataset was used. The dataset was collected from major data sources according to nine preset mutually exclusive relationships, and contains 10,717 examples, including 8,000 for training and 2,717 for testing. All examples in the dataset were annotated with one of the nine relationships or with an other relationship. The distribution of the nine relationship types is shown in Table 1. In addition to the annotated relationship type, each data point also contains two annotated entities e1 and e2. The relationship types other than the other type are directional. For example, cause-effect(e1, e2) and cause-effect(e2, e1) are different. Therefore, in the experiments, 19 relationship types (Table 2) are usually used for prediction.
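As referenced under "Sentence representation", the fusion step can be sketched as follows, under assumed dimensions (hidden size 768, 19 relation labels); layer names, spans and the single-example interface are illustrative, not the authors' implementation.

```python
# Sketch of fusing [CLS], entity-span averages and SDP-token averages into a
# single classification vector, following the description in the text.
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    def __init__(self, hidden=768, n_relations=19):
        super().__init__()
        self.fc_cls = nn.Linear(hidden, hidden)
        self.fc_ent = nn.Linear(hidden, hidden)   # shared by e1 and e2 here
        self.fc_sdp = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(4 * hidden, n_relations)

    def forward(self, h, e1_span, e2_span, sdp_idx):
        # h: (seq_len, hidden) encoder states for one example.
        cls_vec = self.fc_cls(torch.tanh(h[0]))                              # H0'
        e1_vec = self.fc_ent(torch.tanh(h[e1_span[0]:e1_span[1]].mean(dim=0)))
        e2_vec = self.fc_ent(torch.tanh(h[e2_span[0]:e2_span[1]].mean(dim=0)))
        sdp_vec = self.fc_sdp(torch.tanh(h[sdp_idx].mean(dim=0)))
        z = torch.cat([cls_vec, e1_vec, e2_vec, sdp_vec], dim=-1)
        return self.classifier(z)  # softmax is applied inside the loss

head = RelationHead()
h = torch.randn(40, 768)                       # dummy encoder output
logits = head(h, (5, 8), (12, 14), torch.tensor([6, 9, 12]))
print(logits.shape)                            # torch.Size([19])
```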
In this paper, the macro-averaged F1 value computed by the official scoring script provided with the SemEval-2010 Task 8 dataset was used for scoring. Under this scheme, the macro-averaged F1 score over the 9 actual relationships (excluding the other type) is calculated, and the directionality of the relationships is taken into account. The calculation of the F1 value requires precision and recall, as shown in Eqs. (9) to (11): P = TP / (TP + FP), R = TP / (TP + FN), F1 = 2PR / (P + R), where true positives (TP) are the positive predictions that are correct, false positives (FP) are the positive predictions that are incorrect, and false negatives (FN) are the negative predictions that are incorrect. Hyperparameter settings The hyperparameters were set to allow a direct comparison with the baseline model (Wu and He 2019); therefore, most of the hyperparameters were set to the values used by that baseline. Table 3 compares the performance of the model in this paper with various neural network models on the SemEval-2010 Task 8 dataset and shows that the proposed method achieves good results; the highest value in each column of indicators is shown in bold. It can be seen from the results in the table that pretraining models perform much better than neural network models such as CNNs and LSTMs. In this paper, a pretraining model was also used for the experiments, and the R-BERT model was selected as the baseline. R-BERT is based on a pretraining model and highlights the entity information with special identifiers indicating the entity locations; it achieved the best results at the time, with an official F1 evaluation value of 89.25%. On this basis, the shortest dependency path obtained through dependency parsing was integrated into the R-BERT model in this paper so that the model could learn the context information of sentences. The results show that the F1 value of the model reached 89.97% after parsing was introduced, which demonstrates that the context information provided by dependency parsing is effective. Comparison of experimental results We expected the context information provided by the entity dependency path to play a large role: a dependency syntax tree obtained by dependency parsing can provide much context information, and we incorporated one of its paths, the dependency path between the entities, into the pretraining model to provide the model with entity-related context information, which improved the prediction results. We also performed ablation experiments to verify the effectiveness of the entity dependency path. Ablation experiments The above experimental results support the method proposed in this paper. To further understand which factors, in addition to BERT, contributed to these results, three ablation experiments were designed. Since the entity tags '<e1>' and '<e2>' were added to emphasize the entities and provide their boundary information, which significantly improved the classification prediction, these entity tags were retained in each ablation experiment.
In the first experiment, a [CLS] token was added before the sentence input, and the hidden vector of this token was used as the sentence representation; only this vector was used for classification. In the second experiment, the [CLS] vector and the hidden vector of the entity dependency path were concatenated as the sentence representation; here the entity dependency path did not contain entity information. In the third experiment, the [CLS] vector and the hidden vectors of the entities were concatenated as the sentence representation; in this case, the entity information contained the entity tags and thus integrated the boundary information of the entities. SDP denotes the shortest dependency path. In the fourth experiment, we added the SDP, which is the method we propose, on top of the third experiment. After obtaining the SDP, we collected the positions of all nodes on this path, summed the hidden vectors of these nodes and took the average; the resulting vector is the dependency syntactic vector, which was concatenated with the [CLS] vector and the entity vectors to obtain the final representation of the sentence. It can be seen from the results in Table 4 that the results improved after adding the entity identifiers, which provide the model with the boundary information of the entities and emphasize them. There was little difference between using the hidden vector of the entity dependency path as the sentence representation and using the hidden vectors of the entities, but using the entity information gave slightly better results. The experimental results show that the model can make use of context information, but it still needs entity information as a supplement. After combining the entity information with the context information provided by dependency parsing, the model predicts the classification better. Case study This section analyzes the results of the R-BERT model and the model proposed in this paper in detail and compares the results for the various relationship types, as shown in Table 5. The results in the table show that, compared with the baseline, the classification of most relationship types improved after introducing the entity dependency path, and the improvement was most pronounced for relationship types such as content-container, product-producer and instrument-agency, indicating that this experiment successfully integrated the entity dependency path into the pretraining model and that doing so benefits relationship classification. However, the classification of cause-effect and entity-destination did not improve but instead degraded noticeably. We therefore reviewed the classification results of the two models in detail and extracted examples of incorrect classifications from both. Table 6 provides detailed examples of classification errors for these two types. From the results in the table, we can see that for these two types, the model proposed in this paper correctly predicted the relationship types but mispredicted the relationship directions, whereas the relationship types predicted by the baseline model differed from the gold standard.
Therefore, taking cause-effect as an example: given that the recall rates of the two models are similar, the incorrect relationship directions in some predictions caused the model in this paper to predict more instances as cause-effect than the baseline model did, so its precision was lower. As a result, the F1 evaluation of the cause-effect classification results is lower than that of the baseline model. Analysis of these two misclassified relationship types showed that the length of the entity dependency path in the dependency syntax tree obtained by dependency parsing was 0; that is, one entity depended directly on the other. These cases therefore did not obtain additional context information from the dependency syntax tree but only reused the entity information, leading to correct predictions of the relationship types but incorrect predictions of the relationship directions in some cases. The above results show that the method proposed in this paper not only allows the model to learn the context information provided by the dependency syntax but also improves the model's predictions. However, for some relationship types the model underutilizes the context information of the data, resulting in correct classification of the relationship type but incorrect classification of its direction. This shows that there is still room for improvement in the use of context information, which is the focus of our future work. Conclusion Based on a pretraining model, this paper proposed a pretraining model integrating dependency parsing for supervised entity relationship extraction. The shortest dependency path between entities, obtained by dependency parsing, concisely expresses the syntactic relationship between the entities, retains the main part expressing the relationship type, and removes useless modifier chunks and redundant entity information. The context information between entities is obtained through dependency parsing, and a syntactic representation is derived with the same processing method as the entity representations in the R-BERT model; it is concatenated with the sentence vector and entity vectors to obtain the vector representation for classification. The F1 value increased to 89.97% on the SemEval-2010 Task 8 dataset, an increase of 0.72% over R-BERT. The analysis and comparison of the results show that the model in this paper achieves good results, successfully learns the context information of sentences, essentially solves the problems raised and achieves the expected results. The detailed analysis of the results revealed that the length of the dependency path between the entities in some sentences is 0; to avoid this situation, this paper also counts the entities as nodes on the path during data processing. However, in these cases the model cannot obtain enough context information from the data, and the entity information is reused, which leads, for some relationship types, to the relationship type being predicted accurately while its direction is predicted incorrectly, affecting the final prediction results to some extent. Therefore, in future work we will attempt to design strategies to extract context information for these sentences to improve the overall performance of relationship extraction.
In this experiment, we only used the dependency paths between entities; the whole dependency tree and the types of the dependency relations were not effectively utilized. In addition, discarding all information outside the entity dependency path may also explain the poor classification results for some relationship types. Therefore, our next goal is to make effective use of the entire dependency syntax tree and the dependency relation types to obtain more complete context information that helps the model judge the relationship types. For the dependency tree, we plan to try a GCN model to capture the information contained in the tree structure, because a GCN has a strong ability to aggregate information from neighboring nodes. To avoid performance degradation due to noise, we plan to incorporate the dependency relation types during training to selectively aggregate information from neighboring nodes. Author contribution The authors made equal contributions. Funding This work was supported by the National Defense Science and Technology Industrial Technology Research Project (JSQB2017206C002). Availability of data and material Data for this work were obtained from the web (accessed from www.semeval2.fbk.eu/semeval2.php). Declarations Conflicts of interest The authors declare no conflict of interest.
Water-Soluble Polymer Polyethylene Glycol: Effect on the Bioluminescent Reaction of the Marine Coelenterate Obelia and Coelenteramide-Containing Fluorescent Protein The current paper considers the effects of a water-soluble polymer (polyethylene glycol (PEG)) on the bioluminescent reaction of the photoprotein obelin from the marine coelenterate Obelia longissima and on the product of this bioluminescent reaction: a coelenteramide-containing fluorescent protein (CCFP). We varied the PEG concentration (0-1.44 mg/mL) and molecular weight (1000, 8000, and 35,000 Da). The presence of PEG significantly increased the bioluminescence intensity of obelin but decreased the photoluminescence intensity of CCFP; the effects did not depend on the PEG concentration or molecular weight. The photoluminescence spectra of CCFP did not change, while the bioluminescence spectra changed in the course of the bioluminescent reaction. The changes can be explained by the different rigidity of the media in the polymer solutions, affecting the stability of the photoprotein complex and the efficiency of proton transfer in the bioluminescent reaction. The results predict and explain changes in the luminescence intensity and color of marine coelenterates in the presence of water-soluble polymers. CCFP appeared to be a proper tool for the toxicity monitoring of water-soluble polymers (e.g., PEGs). Introduction In recent decades, pollution of the world's oceans by water-soluble and insoluble polymers has become a challenge for ecologists. It is known that water-soluble polymers can change the rates of cellular processes by stabilizing biological structures. It has been shown [1][2][3] that enzymes included in a gelatin or starch gel matrix become more stable, which leads to an increase in activity. Moreover, water-soluble polymers such as polyethylene glycol (PEG) may cause toxic effects on living organisms [4,5]. The toxic effects of polymer pollution are now being intensively studied [6][7][8][9][10]. Bioluminescence assay systems are widely used in ecological science to monitor the toxic effects of chemicals on living organisms. The bioluminescence intensity serves as a test physiological parameter for evaluating toxic effects. Simplicity, sensitivity, and a high registration rate of the emission intensity make bioluminescent tests convenient and widely applied. Marine luminescent bacteria are among the most common bioassays and have been widely used for more than 50 years to monitor the toxicity of media due to their high sensitivity to toxicants [11][12][13]. Another type of bioluminescent assay, the bacterial bioluminescent enzyme system, was suggested in 1990 [14]. An advantage of the enzymatic assays is the possibility of changing the sensitivity to toxic compounds by (a) varying the component concentrations and (b) constructing polyenzymatic coupled systems [15][16][17]. Technological applications of the bioluminescent enzymatic system were reviewed in [18,19]. A question arises whether it is possible to detect suppressive or activating effects of chemicals with a simpler system than a bacterial one. Recently, the authors have suggested an application of luminescent proteins to monitor toxicity in biological liquids [20]. That paper reviews the effects of radiation [21,22], chemical agents [23,24], and high-temperature destruction [25] on the photoluminescence spectra of a coelenteramide-containing fluorescent protein (CCFP).
It was stated that CCFPs could serve as new fluorescence biomarkers with color differentiation to explore the results of destructive exposures; CCFP demonstrated changes in color under the exposures listed above. The question of interest is: will the photoprotein obelin and CCFP, a product of the bioluminescent reaction of obelin, be sensitive to water-soluble polymers such as PEGs? The color and the intensity of the luminescence are of interest in both cases. Note that the studies [20][21][22][23][24][25] applied CCFP, a product of the bioluminescent reaction of the photoprotein obelin from the marine coelenterate Obelia longissima. The structure and mechanisms of photoprotein bioluminescence are now being intensively studied [26][27][28]. Our previous studies did not find changes in the bioluminescence spectra of the photoprotein obelin after the exposures listed above (chemical, radiation, or thermal); only the bioluminescence intensity appeared to be sensitive to the exposures. The sensitivity of the obelin bioluminescence to PEG is therefore a problem of special interest. In general, CCFPs can be isolated from luminous marine coelenterates, e.g., the jellyfish Aequorea [28] and Phialidium (Clytia) [29], the hydroid Obelia longissima [30], etc. The fluorophore of CCFPs is coelenteramide, an external molecule; being a heteroaromatic fluorescent compound, it is non-covalently bound to the protein inside its hydrophobic cavity. The chemical structure of the coelenteramide molecule (neutral and ionized forms) is presented in Figure 1. Coelenteramide is a photochemically active molecule: it can act as a proton donor in its electron-excited states and generate several forms which differ in the energy of their fluorescent states [31] and, hence, in fluorescence color. The contributions of these forms to the visible fluorescence spectra depend on the efficiency of the photochemical process and are governed by the microenvironment of coelenteramide [32][33][34][35][36][37]. Similar proton transfer processes and formations of fluorescent forms occur after chemical/biochemical excitation in the course of the bioluminescence reactions in marine coelenterates.
Figure 1. The chemical structure of the coelenteramide molecule (neutral and ionized forms). The aromatic fragments that can be involved in electronic excitation are marked with the letters F, B, and P, corresponding to the phenolic, benzene, and pyrazine rings [38]. According to [39], the spectra of the obelin bioluminescence and of the light-induced fluorescence of CCFP are a superposition of spectral components (emitters) that correspond to the different forms of coelenteramide. The contributions of the spectral components might change, which is indicative of proton interactions in the active center of the protein complex. The contribution of the spectral components to the integral spectrum determines the color of the luminescence. The spectra of the obelin bioluminescence and of the light-induced fluorescence of CCFP can be deconvolved into Gaussian components [39]. It is shown in these studies that the spectra can include three components in the visible region, with maxima (Figure 2) in the violet, blue-green, and green spectral regions. According to [31][32][33][34][35][36], component I was attributed to the neutral form of coelenteramide, while components II and III were attributed to the ionized forms of coelenteramide (Figures 1 and 2).
Ionized forms II and III can differ in the effective location of a proton in the complex of the polypeptide with coelenteramide, between the phenolic group of coelenteramide as a proton donor and His22 as a proton acceptor; hence, forms II and III can differ in their degree of ionization. There is a possibility to change the spectral characteristics of the photoprotein obelin by varying the rigidity of its environment. The effects of water-soluble polymers on the bioluminescence reaction of obelin and on the light-induced fluorescence of its product, CCFP, have not been studied yet. The dependence of the effects on the polymer characteristics in solution (molecular weight and concentration) might elucidate the influence of water-soluble polymers on aquatic organisms in the world's oceans. This study aims at evaluating the effects of PEGs of different molecular weights and concentrations on the bioluminescent reaction of the photoprotein obelin from the marine coelenterate Obelia longissima and on the photoluminescence of CCFP, the bioluminescence reaction product. Accordingly, two main points were under consideration: (1) effects of PEGs on the intensity of the bioluminescence and photoluminescence responses; (2) effects of PEGs on the spectra of the obelin bioluminescence and CCFP photoluminescence.
The work is original in the area of responses of marine coelenterates to water-soluble polymer pollutants. Results and Discussion We studied the effects of PEGs of different molecular weights (1 kDa, 8 kDa, and 35 kDa) on the bioluminescence of the photoprotein obelin and on the photoluminescence of the coelenteramide-containing fluorescent protein (CCFP). The concentration of PEG varied from 0 to 1.44 mg/mL. The bioluminescence/photoluminescence intensities/yields and spectra were studied. PEG Effect on the Bioluminescence Yield of Obelin The time-dependent change of the relative bioluminescence quantum yield is presented in Figure 3. This figure illustrates the results of exposure to three concentrations of PEG; PEG of 1 kDa was chosen as an example. It can be seen that PEG increases the quantum yield of bioluminescence during the whole registration period (relative quantum yield > 1). The increase in the bioluminescence intensity might be due to stabilization of the protein complex in the course of the bioluminescent reaction and to a decrease in non-radiative relaxation in the electron-excited states of the bioluminescence emitter. We observed an increase in the bioluminescence yields for all three PEG samples (1 kDa, 8 kDa, and 35 kDa) at all the PEG concentrations applied. PEG Effect on the Spectra of the Bioluminescence Reaction The impact of the PEG samples (1 kDa, 8 kDa, 35 kDa) on the bioluminescence spectra was analyzed at three PEG concentrations: 0.01 mg/mL, 0.1 mg/mL, and 1 mg/mL. As an example, Figure 4 shows the changes in the bioluminescence spectra in the presence of PEG, 1 mg/mL (8 kDa). The changes in the spectra upon the addition of PEG are evident from Figure 4, with a decrease in the violet component contribution. We analyzed the effect of PEGs on the contributions of the components to the bioluminescence spectra. The complex bioluminescence spectra in the PEG solutions were deconvoluted into peak components: violet (maximum 400 nm), blue-green (maximum 485 nm), and green (maximum 557 nm). For all the PEG samples, the contributions of the components to the reconstituted bioluminescence spectral outlines were calculated. As an example, Figure 5 demonstrates the kinetics of the violet component contribution during the bioluminescent reaction.
One can see the suppression of the violet component at the beginning and at the end of the bioluminescent reaction and its increase at the 5th-10th second, reaching as much as 190%. The contribution of the sum of components II (blue-green) and III (green) varied as well, but in the direction opposite to that of component I (violet). A similar tendency was observed in solutions of different PEG concentrations; no monotonic concentration dependence was observed, as seen in Figure 5. The bioluminescence kinetics in Figure 5 suggest that PEG makes the deprotonation of coelenteramide less effective in the middle of the reaction compared with its initial and final stages. It is evident that the binding of the photoprotein to fragments of PEG can stabilize the photoprotein structure with a lower efficiency of proton transfer; this process is time-dependent along the bioluminescence kinetics. Details of this dependence should be further studied in dedicated experiments. Figure 6 presents the dependences of the violet contribution on the concentration and molecular weight of PEG. The bioluminescence spectra were analyzed at the beginning (0.6 s), in the middle (6 s), and at the end (12.6 s) of the reaction. The figure demonstrates that the violet contribution does not depend on the PEG molecular weight and confirms its independence of the PEG concentrations chosen in the experiments. The results can depend on temperature, as it is supposed that diffusion in the polymer solutions can be important in the molecular mechanisms of the polymer interaction with the photoprotein. The temperature dependences of the spectral component contributions should also be a subject of further investigations. PEG Effect on the Photoluminescence Intensity of the Coelenteramide-Containing Fluorescent Protein The effect of PEG on the photoluminescence intensity of the coelenteramide-containing fluorescent protein (CCFP) was studied.
Figure 7 illustrates the results of the interaction of CCFP with PEG (1, 8, and 35 kDa) at different PEG concentrations; a photoluminescence decay was observed, but no dependence on the polymer molecular weight was found. These results can be attributed to collisional interactions of CCFP with fragments of the polymer chains. The difference between the effects of PEG on CCFP and on the bioluminescence reaction could be related to post-reaction stabilization of the reaction product, CCFP. This supposition is based on the differences in the structures (and hence in the sensitivities to exogenous compounds) of CCFP and of the bioluminescence emitters [32]. Therefore, the coelenteramide-containing fluorescent protein appeared to be a proper tool for the toxicity monitoring of water-soluble polymers such as PEGs. The photoluminescence inhibition efficiency reached 40% at a PEG concentration of 0.15 mg/mL. Our results suggest that it is not the polymeric molecule as a whole that is responsible for the luminescence suppression but only fragments of the polymeric chains. It can be seen from Figure 8 that no changes in the photoluminescence spectra of CCFP were observed upon variation of the PEG concentration. The spectral maxima (510 nm) did not shift, and the shape of the spectra changed negligibly. This indicates that PEGs affect neither the efficiency of the photochemical proton transfer in the CCFP complex nor the ratio of the protonated and deprotonated forms of coelenteramide in the fluorescent protein (Figures 1 and 2). Similar results were obtained for the polymers of the other molecular weights, 1 and 35 kDa. Hence, only the spectra of the bioluminescence reaction appeared to be sensitive to the presence of the polymer, rather than those of the product of the bioluminescence reaction: the 'discharged photoprotein', or coelenteramide-containing fluorescent protein. This indicates the optimality and stability of the protein structure. Materials Recombinant preparations of obelin D12C were obtained from the Photobiology Laboratory, Institute of Biophysics, SB RAS, Krasnoyarsk, Russia.
A detailed description of the recombinant obelin production was given in [40,41]. Solutions of lyophilized obelin (2.65 mg/mL) in a 0.02 M tris(hydroxymethyl)aminomethane buffer (pH 7) with 0.05 M ethylenediaminetetraacetic acid (Sigma-Aldrich, St. Louis, MO, USA) were applied. Three PEG samples of different molecular weights were used: 1, 8, and 35 kDa. Table 1 provides the physicochemical characteristics of the polymer samples. Instrumentation Bioluminescence and light-induced fluorescence spectra were measured with a Cary Eclipse-2000 spectrofluorometer (Agilent, Santa Clara, CA, USA). Ultraviolet (UV) radiation was emitted by a xenon flash lamp (80 Hz) [44]. Emission spectra for bioluminescence and photoluminescence were recorded in the range from 370 to 600 nm; the photoluminescence excitation wavelength was 350 nm. The scanning rate was 600 nm/min, the spectral bandwidths for emission and excitation were both 10 nm, and the wavelength accuracy was ±1.5 nm. A quartz cuvette with a rectangular cross-section (2 × 10 mm) was used to register the spectra. The emission spectra were converted from wavelength to wavenumber dependences, as described in [45]. Experiment Methodology We examined the PEG effects on the bioluminescent reaction of the marine coelenterate Obelia and on the coelenteramide-containing fluorescent protein. An amount of 2 µL of PEG of different concentrations and molecular weights was added to the recombinant obelin preparation (10⁻⁶ mg/mL). The bioluminescence reaction was triggered by 10 µL of Ca²⁺ solution. The time of bioluminescence registration varied from 1 to 12.6 s. The light-induced fluorescence spectra of the Ca²⁺-discharged obelin (i.e., the photoluminescence of the coelenteramide-containing protein) were measured; the excitation wavelength was 350 nm, and the ambient temperature was 20-25 °C. The changes in the bioluminescence and photoluminescence spectra were studied in five experiments, providing statistical significance. Analysis of the Obelin Bioluminescence Spectra The complex spectra were deconvoluted into peak components by peak analysis using OriginLab 2018. The mathematical treatment involved two steps: the function increment method [46], based on the Gaussian distribution, was applied to identify the spectral components, and the second derivative of the fluorescence intensity was calculated [47] to determine the number of spectral components and their maxima. To compare the areas of the calculated and experimental spectra, the d-values were calculated as d = |S_exp − S_sum|/S_exp × 100%, where S_exp is the area under the experimental spectral outline and S_sum is the area under the sum of the reconstituted spectral outlines. The d-values for all the spectra did not exceed 10%. All the spectra were deconvoluted into a minimum number of components in the coordinates of luminescence intensity versus wavenumber. Microsoft Office Excel was used to carry out the statistical analysis. An example of a deconvoluted spectrum is presented in Figure 2 in the Introduction. Relative bioluminescence quantum yields were evaluated as S_PEG/S_contr, where S_PEG and S_contr are the areas under the experimental spectral outlines in the presence and absence of PEG, respectively, in the coordinates of luminescence intensity versus wavenumber.
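As an illustration of the deconvolution procedure (the authors used OriginLab 2018; this sketch, with toy data, is ours), a spectrum can be fitted with three Gaussian components in wavenumber coordinates, after which the component contributions and the d-value follow directly:

import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(x, *p):
    # p = (A1, mu1, s1, A2, mu2, s2, A3, mu3, s3), all in wavenumber coordinates
    return sum(p[3*i] * np.exp(-((x - p[3*i+1]) ** 2) / (2 * p[3*i+2] ** 2))
               for i in range(3))

# x: wavenumbers (cm^-1); y: measured luminescence intensity (synthetic here)
x = np.linspace(16000, 27000, 300)
y = three_gaussians(x, 1.0, 25000, 900, 2.0, 20600, 1100, 0.5, 17950, 900)
y = y + np.random.normal(0, 0.02, x.size)

# initial guesses near the violet (400 nm), blue-green (485 nm), green (557 nm) maxima
p0 = [1, 1e7/400, 1000, 1, 1e7/485, 1000, 1, 1e7/557, 1000]
popt, _ = curve_fit(three_gaussians, x, y, p0=p0)

areas = [popt[3*i] * popt[3*i+2] * np.sqrt(2 * np.pi) for i in range(3)]
contributions = [a / sum(areas) * 100 for a in areas]   # percent contributions

s_exp = np.trapz(y, x)
s_sum = np.trapz(three_gaussians(x, *popt), x)
d = abs(s_exp - s_sum) / s_exp * 100                    # agreement criterion, < 10%
print(contributions, d)

The same trapezoidal integration of the spectral outlines, with and without PEG, gives the relative quantum yield S_PEG/S_contr.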
Conclusions Generally speaking, studies of the effects of polymers on living marine systems are now of vital importance due to the intensive pollution of the world's oceans with polymeric compounds, including commercial water-soluble polymers. These studies should develop in different directions; marine coelenterates and their protein complexes are a significant part of such investigations. The following direct conclusions can be drawn from the results presented in the current paper: 1. PEGs increase the intensity of the bioluminescence reaction because of stabilization of the photoprotein structure. The efficiency of the bioluminescence activation does not depend on the PEG molecular weight (1, 8, or 35 kDa) or the PEG content in the concentration interval 0.01-1 mg/mL. 2. PEGs initiate spectral shifts and changes in the contributions of the spectral components in the course of the bioluminescent reaction of obelin. The changes are multidirectional and depend on the reaction stage; the increase in the contribution of the violet component is as high as 190%. 3. PEGs decrease the photoluminescence intensity of the coelenteramide-containing fluorescent protein, which is likely due to collisional interactions with fragments of the polymer chains. 4. PEGs do not change the shape of the photoluminescence spectrum of the coelenteramide-containing fluorescent protein. This may indicate that PEG does not affect the efficiency of proton phototransfer in the protein complex. 5. The coelenteramide-containing fluorescent protein appeared to be a proper tool for the toxicity monitoring of water-soluble polymers such as PEGs, which include polar and nonpolar chemical groups and hence are able to interact with different fragments of biological structures, producing integral bioresponses. Author Contributions: Experimental studies using bioluminescence and photoluminescence methods, T.S.S. and A.V.R.; interpretation, N.S.K.; data processing, data analysis, writing-original draft preparation, T.S.S.; writing-review and editing the manuscript, N.S.K. All authors have read and agreed to the published version of the manuscript. Funding: This paper was prepared with the partial financial support of the Russian Science Foundation N23-26-10018, Krasnoyarsk Regional Science Foundation. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Transfer matrix spectrum for cyclic representations of the 6-vertex reflection algebra II This article is a direct continuation of [1], where we began the study of the transfer matrix spectral problem for the cyclic representations of the trigonometric 6-vertex reflection algebra associated to the Bazhanov-Stroganov Lax operator. There we addressed this problem for the case where one of the K-matrices describing the boundary conditions is triangular. In the present article we consider the most general integrable boundary conditions, namely the most general boundary K-matrices satisfying the reflection equation. The spectral analysis is developed by implementing the method of Separation of Variables (SoV). We first design a suitable gauge transformation that enables us to put the spectral problem for the most general boundary conditions into correspondence with another one having one boundary K-matrix in triangular form. In this setting the SoV resolution can be obtained through an extension of the method described in [1]. The transfer matrix spectrum is then completely characterized in terms of the set of solutions to a discrete system of polynomial equations in a given class of functions and, equivalently, as the set of solutions to an analogue of Baxter's T-Q functional equation. We further describe scalar product properties of the separate states, including eigenstates of the transfer matrix. Introduction In recent years, the out-of-equilibrium behavior of closed and open physical systems has attracted a lot of interest, motivated in particular by new experimental results, see e.g. [2][3][4][5][6][7][8][9][10]. Microscopic models able to describe such situations are thus in general characterized not only by their bulk Hamiltonian but also by appropriate boundary conditions. This eventually leads to rather complicated dynamical properties, with possible deformations of the bulk symmetries. In the context of strongly coupled systems, integrable models in low dimension, with boundary conditions preserving integrability, can be used to gain insight into the non-perturbative behavior of such out-of-equilibrium dynamics, see e.g. [11] and references therein. In particular, they can also describe classical stochastic relaxation processes, like the ASEP [12][13][14][15][16], or transport properties in one dimensional quantum systems, see e.g. [17,18]. The algebraic description of quantum integrable models with non-trivial boundary conditions (namely going beyond periodic boundary conditions) goes back to Cherednik [19] and Sklyanin [20]. Such models already have a long history, which started with spin chains and Bethe ansatz [21][22][23][24][25][26][27][28][29] and continued using its modern developments, see e.g. [20]. The key point of the algebraic approaches is an extension of the standard Quantum Inverse Scattering method, see e.g. [82][83][84][85][86][87][88][89][90][91][92][93][94][95][96][97], and of its associated Yang-Baxter algebra; it takes the form of the so-called reflection equations [19,20] satisfied by the boundary version of the quantum monodromy matrix. The integrable structure of the model with boundaries can be described in terms of the corresponding bulk quantities supplemented with boundary conditions encoded in some K-matrices. To preserve integrability, these K-matrices should satisfy reflection equations driven by the R-matrix of the bulk model, which solves the usual Yang-Baxter equation.
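For orientation, we recall the standard form of the reflection equation in the additive parametrization (the multiplicative, cyclic version used in this paper is an adaptation of it):

\[
R_{12}(\lambda-\mu)\,K_1(\lambda)\,R_{21}(\lambda+\mu)\,K_2(\mu)
= K_2(\mu)\,R_{12}(\lambda+\mu)\,K_1(\lambda)\,R_{21}(\lambda-\mu),
\]

together with the bulk Yang-Baxter equation R_{12}(λ−μ) R_{13}(λ) R_{23}(μ) = R_{23}(μ) R_{13}(λ) R_{12}(λ−μ) satisfied by the R-matrix.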
As shown by Cherednik in [19], these reflection equations are just consequences of the factorization property of the scattering of particles on a segment with reflecting ends described by the boundary K-matrices. They encode the compatibility between the scattering in the bulk, described by the R-matrix, and the reflection properties of the ends, encoded in the K-matrices. These are such that there still exists a full series of commuting conserved quantities for the model with boundaries, generated by the boundary transfer matrix [20]. Its expression is quadratic in the bulk monodromy matrix entries and depends on the right and left boundary K-matrices. Then, as in the periodic case, the local Hamiltonian of the boundary model can be obtained from this boundary transfer matrix. This is the standard framework in which to address the resolution of the common spectral problem for the transfer matrix and its associated local Hamiltonian. Quite a number of works have been devoted to boundary integrable models using, as in the periodic situation, various versions of the Bethe ansatz. It appeared, however, that while for special models and boundary K-matrices a method very similar to the standard algebraic Bethe ansatz (ABA), here based on the reflection equations, can be applied, the case of the most general boundary conditions (and associated K-matrices) preserving integrability turns out to be out of the reach of these methods. This motivated the use of different approaches, like in particular the use of q-Onsager algebras, see e.g. [47,48], modifications of the Bethe ansatz [45,46,49-53,59] and the implementation for this case of the separation of variables (SoV) method. For an extensive discussion and comparison of these various methods in the case of boundary integrable models, we refer to the general discussion given in the introduction of our first article [1] and to the references therein. In our article [1] we started the implementation of the SoV method for the cyclic representations of the 6-vertex reflection equation associated to the most general Bazhanov-Stroganov quantum Lax operator [120][121][122][123][124][125]. Let us recall that the case of periodic boundary conditions (spectrum and form factors) was considered in previous works [106][107][108][109][110][111], generalizing in particular [126,127]. The interest in such a problem is due to the fact that special cases include the Sine-Gordon lattice model at roots of unity and the Chiral Potts model [128][129][130][131][132][133][134][135][136][137]. In [1] we started the analysis by considering the special case where one of the boundary K-matrices has triangular form (which is equivalent to one constraint on the boundary parameters). For that situation we were able to apply the SoV method successfully by identifying the separate basis as the eigenbasis of a special diagonalizable B-operator with simple spectrum, which can be constructed from the boundary monodromy matrix entries. Then, using this separate basis, the spectrum (eigenvalues and eigenstates) of the boundary transfer matrix was completely characterized in terms of the set of solutions to a discrete system of polynomial equations in a given class of functions. The purpose of the present article is to address this spectral problem for the most general boundary conditions preserving integrability, namely for the most general K-matrices solution of the reflection equation.
The method to reach this goal is to design a gauge transformation that enables us to map this general situation onto the previous one, namely onto a model having one triangular K-matrix. For that purpose, the standard idea of Baxter's gauge transformations, see e.g. [83,95] and references therein, has to be adapted in a way similar to [68,70] and generalized to these cyclic representations of the 6-vertex Yang-Baxter algebra. Then, using this correspondence, the method and tools developed in our first paper [1] can be used, leading to the complete characterization of the spectrum (again eigenvalues and eigenstates) of the general boundary transfer matrix. We also give a determinant formula for the scalar products of the separate states. Further, we show that the spectrum characterization admits a representation in terms of functional equations of Baxter's T-Q type. Let us note that an analogous inhomogeneous Baxter-like equation has already been proposed in [60] for this model on the basis of purely functional arguments on the fusion of transfer matrices. Thanks to our SoV construction, we prove in the present article that our inhomogeneous Baxter-like equation does characterize the full transfer matrix spectrum. Let us further remark that the inhomogeneous term in the proposal of [60] is presented in terms of the averages of the entries of the monodromy matrix. For general cyclic representations these quantities are defined only through (unresolved) recursion formulae in [60]. Hence in [60] this inhomogeneous term is in fact not given explicitly, which makes a direct comparison with our explicit functional equation impossible. Moreover, we would like to stress that in our formulation the Q-functions are Laurent polynomials of smaller degree compared to those in [60]. This is due to the fact that, the inhomogeneous term being computed explicitly in our SoV derivation, we can remove 4p irrelevant zeros from the T-Q functional equation (p being an integer characteristic value defining the cyclic representation, see (2.10) in section 2), as they can be factored out from each term of this equation (see section 5). This article is organized as follows. In section 2 we recall the basics of the cyclic representations associated to the Bazhanov-Stroganov quantum Lax operator. In section 3 we define the gauge-transformed reflection algebra that puts into correspondence the most general boundary K-matrix with a triangular one. It enables us to adapt the SoV method, already described in our first article [1], to this more general context, leading in section 4 to the complete characterization of the transfer matrix spectrum in this SoV basis. There we also present the scalar product formulae for the so-called separate states, which contain the transfer matrix eigenstates. In section 5 we show that the spectrum characterization admits a representation in terms of functional equations of Baxter's T-Q type. Details of the construction of the gauge transformation are given in Appendices A and B, together with determinant identities used in the spectrum characterization in Appendix C. 2 Cyclic representations of the 6-vertex reflection algebra.
Following Sklyanin's paper [20], we consider the most general cyclic solutions of the 6-vertex reflection equation associated to the Bazhanov-Stroganov Lax operator [121], where the two sides of the equation belong to End(V_1 ⊗ V_2 ⊗ H) and are defined by the corresponding boundary monodromy matrices, V_a ≃ C² being the so-called auxiliary space. Here the R-matrix is the cyclic solution of the 6-vertex Yang-Baxter equation, defined in terms of the Bazhanov-Stroganov Lax operators [121] (2.7), where γ_n = a_n c_n/α_n and δ_n = b_n d_n/β_n (2.8). The u_n ∈ End(R_n) and v_m ∈ End(R_m) are unitary Weyl algebra generators, and q = e^{−iπβ²}, β² = p′/p, with p′ even and p = 2l + 1 odd, l ∈ N (2.10). The local quantum spaces R_n are p-dimensional Hilbert spaces, and the full representation space of the cyclic Yang-Baxter and reflection algebra is the tensor product of the local quantum spaces, i.e. H = ⊗_{n=1}^{N} R_n. Moreover, we consider here the most general boundary matrices, defined in (2.11)-(2.12). We introduce the functions (2.13)-(2.16), where a_0 is a free nonzero parameter, ǫ = ±1, and, for later use, µ_{n,+} ≡ iq^{1/2}(a_n β_n/(α_n b_n))^{1/2}. Following Sklyanin's paper [20], the next proposition holds: Proposition 2.1. The most general boundary transfer matrix associated to the Bazhanov-Stroganov Lax operator in the cyclic representations of the reflection algebra is a one-parameter family of commuting operators satisfying certain symmetry properties, and the boundary quantum determinant is a central element of the reflection algebra. Gauged cyclic reflection algebra and SoV representations In our previous paper we solved the spectral problem associated to the transfer matrix of the cyclic representations under the requirement that one of the boundary matrices is triangular, i.e. b_+(λ) ≡ 0. In this paper we want to solve the same type of spectral problem for the most general boundary conditions. In order to do so, we can follow the same approach used for the transfer matrix associated to the spin-1/2 reflection algebra [68]: we introduce linear combinations of the original reflection algebra generators, where β ≠ ±1, ±q² and α are arbitrary complex values; to simplify the notation, we will not write the dependence on α explicitly. As discussed in Appendices A and B, these operator families still satisfy a set of commutation relations which are gauged versions of the reflection algebra commutation relations; in the following we will refer to these families as the gauge-transformed reflection algebra generators. In the same appendices we prove the following theorem, characterizing the representation of these generators under the conditions (3.7)-(3.8), for any ǫ = ±1, n, m ∈ {1, ..., N} and h ∈ {1, ..., p − 1}: the corresponding sets of states form a left and a right basis of the representation space, defining a decomposition of the identity with the non-zero normalization fixed by (3.14). In this basis the operator family B_−(λ|β) is pseudo-diagonalized, and the operator families A_−(λ|β) and D_−(λ|β), evaluated at the zeros ζ_a^{(h_a)} of B_−(λ|β), act as simple shift operators. Let us comment that the existence of the states ⟨Ω_β| and |Ω_β⟩ can be proven by a general argument which we present in Appendix B. For general representations, the pseudo-spectrum of B_−(λ|β), i.e. the values of b_{−,n}(β) and b_−(β), must be computed by recursion on the number of sites.
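The displayed Weyl algebra relations for u_n and v_m are not reproduced above; as an assumption, based on the standard cyclic Weyl algebra used for Bazhanov-Stroganov representations at a p-th root of unity, they should read

\[
u_n\,v_m = q^{\delta_{n,m}}\,v_m\,u_n, \qquad u_n^{\,p} = v_n^{\,p} = 1,
\]

which is consistent with q^p = e^{−iπp′} = 1 for p′ even, so that each local space R_n indeed carries a p-dimensional cyclic representation.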
In Appendix B, however, we present the explicit expressions for b_{−,n}(β) and b_−(β) in some particular representations. The interest in these gauge-transformed boundary generators lies in the possibility of using them to rewrite the transfer matrix associated to the most general cyclic 6-vertex reflection algebra representations in a simple form, as presented in the following proposition: Proposition 3.1. The quantum determinant can be written in terms of the gauge-transformed boundary generators; moreover, if we set the gauge parameter α appropriately (α_+ and β_+ are defined in (2.17); they are linked to the boundary parameters ζ_+, κ_+ and τ_+, see (2.11)-(2.12)), then the transfer matrix can be written in terms of these generators, in the form (3.27)-(3.28). Proof. The proof of this statement coincides with the one given in [68] for the XXZ spin-1/2 quantum chain with general integrable boundaries; in fact, the statement is representation independent. The only difference is that here we have used a Laurent polynomial form while in the XXZ case it was a trigonometric form. The simple representations (3.27)-(3.28) of the transfer matrix in terms of the gauge-transformed boundary generators and the known actions (3.18)-(3.19) of these operators imply that the transfer matrix spectral problem is separated in the pseudo-eigenbasis of B_−(λ|β). T-spectrum characterization in SoV basis and scalar products In this section we present the complete characterization of the spectrum of the transfer matrix T(λ) associated to the cyclic representations of the 6-vertex reflection algebra. We first present some preliminary properties satisfied by all the eigenvalue functions of the transfer matrix T(λ). Lemma 4.1. Denote by Σ_T the transfer matrix spectrum; then any τ(λ) ∈ Σ_T is an even function of λ, symmetric under the transformation λ → 1/λ, which admits an interpolation formula (4.1) in the variables Λ ≡ λ² + 1/λ² and X ≡ q + 1/q (4.2). We recall that ζ_−, κ_−, τ_−, ζ_+, κ_+ and τ_+ are the boundary parameters, see (2.11)-(2.12), and α_n, β_n, γ_n and δ_n are the bulk parameters, see (2.7). Proof. This lemma coincides with Lemma 5.1 of our previous paper. We introduce a one-parameter family D_τ(λ) of p × p matrices, where for now τ(λ) is a generic function, with entries defined from (2.13) and (3.29) and with the coefficient a(λ) satisfying the quantum determinant condition. The separation of variables then leads to the following discrete characterization of the transfer matrix spectrum. I) The right T-eigenstate corresponding to τ(λ) ∈ Σ_T is defined by its decomposition over the right SoV basis, where the gauge parameters α and β satisfy the condition (3.26) and the coefficients q_{τ,a}^{(h_a)} are the unique nontrivial solutions, up to normalization, of a linear homogeneous system. II) The left T-eigenstate corresponding to τ(λ) ∈ Σ_T is defined by its decomposition over the left SoV basis, where the gauge parameters α and β satisfy the condition (3.26) and the coefficients q̄_{τ,a}^{(h_a)} are the unique nontrivial solutions, up to normalization, of the analogous linear homogeneous system defined from (2.13) and (3.29). Proof. Theorem 3.1 implies that for almost all the values of the gauge-boundary-bulk parameters the conditions (3.7)-(3.8) hold. Here we also need to prove that the required inequality (4.13) holds for almost all the values of the boundary-bulk parameters once we set the ratio α/β as in (3.26). Let us first observe that B_−(λ|β) is a Laurent polynomial in α, β, the inner boundary parameters and the bulk parameters.
So that by (3.26), the one parameter family B − (λ|β) becomes Laurent polynomial in the outer boundary parameters too. Consequently, to prove that (4.13) is satisfied for almost all the values of the boundary-bulk parameters it is enough to prove that we can find some values of these parameters for which (4.13) is satisfied. Indeed, we can chose arbitrary boundary-bulk parameters satisfying the following inequalities: µ p +,n = α ±p + and µ p +,n = −β ±p + , ∀n ∈ {1, ..., N}, (4.14) together with those in (B.64) and (B.65) and impose the N conditions (B.63). Under these conditions, Theorem 3.1 implies the pseudo-diagonalizability of B − (λ|β) and fixes the spectrum of its zeros b −,n (β) by (B.66); so that the inequality (4.13) is satisfied. As we have proven that for almost all the values of the boundary-bulk parameters the inequalities (4.13), (B.64) and (B.65) hold, to prove this theorem we have just to follow the same proof given in the non-gauged case, i.e. the proof of Theorem 5.1 of our previous paper. Let us comment that with respect to this last theorem here we are stating also the diagonalizability of the transfer matrix for almost any value of the parameters of the representation. This last statement can be proven as it follows. Let us consider the following special representation, where the bulk parameters satisfy: and where the boundary matrices are diagonal, K a,− (λ) = K a (λ; ζ − , 0, 0) and K a,+ (λ) = K a (qλ; ζ + , 0, 0) (see (2.11)-(2.12)), with the associated boundary parameters satisfying moreover |ζ − | = |ζ + | = 1. The * operation is the complex conjugation. A simple direct calculation made for example in [110] leads to the following Hermitian conjugate of the monodromy matrix (2.4): where σ y a denotes the Pauli matrix. From this relation, and using the specific inner boundary matrix introduced, one can compute the Hermitian conjugate of the boundary monodromy matrix (2.2): Then, from the definition of the boundary transfer matrix, and for the special choice of representation here chosen, we can show: Thus for this special representation the boundary transfer matrix is normal. Then it follows that the determinant of the p N × p N matrix of elements e i |τ j , where e i | is the generic element of a given basis of covectors and | τ j is the generic transfer matrix eigenvector, is non zero. Noticing that this determinant is a fractional function of the bulk and boundary parameters, non zero for the special choice of the parameters above defined, it follows that it is non zero for almost every choice of the parameters. Which concludes the proof. It is also interesting to remark that we can obtain the coefficients of a left transfer matrix eigenstates in terms of those of the right one. The following lemma defines this characterization and can be proven as in the standard case [1]: Let us introduce a class of left and right states, the so-called separate states, characterised by the following type of decompositions in the left and right separate basis: where the coefficients α These separate states are interesting at least for two reasons: the eigenstates of the boundary transfer matrix are special separate states, and they admit a simple determinant scalar product, as it is stated in the next proposition: Proposition 4.1. Let us take an arbitrary separate left state α| and an arbitrary separate right state |β . 
Then it holds: where the elements of the size N matrix M (α,β) are given by: The proof is quite straightforward, it is based on the fact that one can see a Vandermonde determinant when computing the scalar product. One of the main corollary is the orthogonality of two eigenstates τ | and |τ ′ of the boundary transfer matrix associated to two different eigenvalues τ (λ) and τ ′ (λ): The computation of such scalar products is the very first step towards the dynamics, several further steps being required to reach this characterization for the models associated to cyclic representations of the 6-vertex reflection algebra: the reconstruction of the local operators in separate variables, the identification of the ground state, the homogeneous and the thermodynamic limit. For example a rewriting of the determinant representations for the form factors obtained from separation of variable will be necessary to overcome the standard problems related to the homogeneous limit. This problem has been addressed and solved for the XXX spin 1/2 chain, linking the separation of variable type determinants with Izergin's, Slavnov's and Gaudin's type determinants [118,119]. Functional equation characterizing the T -spectrum The purpose of this section is to characterize the spectrum by functional relations analogous to Baxter's T-Q equation. To begin with, we first need the following property. Proof. The first part of this lemma about the dependence w.r.t. Z of det p D τ (λ) has been already proven in Lemma 5.2 of our previous paper [1] while the second part of this lemma can be proven following the proof given in Proposition 6.1 of the same paper. To adapt this proof here, let us observe that the matrix D τ (i a q h+1/2 ) for a ∈ {0, 1} and h ∈ {0, ..., p − 1} contains one row with two divergent elements, i.e. −a(±1) and −a(±i), respectively for a = 0 and a = 1. Nevertheless the determinants det p D τ (i a q h+1/2 ) are all finites for any a ∈ {0, 1} and h ∈ {0, ..., p − 1} if τ (i b q k+1/2 ) are finite for any b ∈ {0, 1} and k ∈ {0, ..., p−1}. Indeed, by the symmetries λ p → 1/λ p and λ → −λ all the determinants det p D τ (q h+1/2 ) coincide as well as all the determinants det p D τ (iq h+1/2 ). So that we have to prove our statement for one value of q h+1/2 and one value of iq h+1/2 . Now, we can use the expansion of the determinant w.r.t. the central row: and D τ,i,j (λ) denotes the (p − 1) × (p − 1) matrix obtained from D τ (λ) removing the row i and the column j. From the identity: 4) and the regularity of these two determinants for λ → ±1 and λ → ±i, it follows that det p D τ (i a q 1/2 ) are finites too for a ∈ {0, 1}. Now, our statement about the Laurent polynomiality of degree N+2 of det p D τ (λ) w.r.t. Z follows from the symmetries and from the fact that τ (λ) and x(λ) are Laurent polynomials in λ of degree 2N + 4. Let us introduce the following notations: and where we recall that ζ − , κ − , τ − , α − , β − , ζ + , κ + , τ + , α + and β + are the boundary parameters (see (2.11),(2.12) and (2.17)), while a n , b n , c n and d n are the bulk parameters, see (2.7). Then the following results hold: For almost all the values of the boundary-bulk parameters, T (λ) has simple spectrum and τ (λ) of the form (4.1) is an element of Σ T (the set of the eigenvalues of T (λ)) if and only if det p D τ (λ) is a Laurent polynomial of degree N + 2 in the variable Z (see (5.1)) which satisfies the following functional equation: Proof. 
The SoV characterization of the spectrum implies that τ (λ) ∈ Σ T if and only if it holds: and τ (λ) has the form (4.1). In the previous lemma we have shown that det p D τ (λ) is a Laurent polynomial of degree N + 2 in Z, here we show that from τ (λ) of form (4.1) it follows the identities: For the symmetry it is enough to consider the above limit in the case h = 0. Let us denote with D τ (λq 1/2 ) the matrix whose first row is the sum of the first and the last row of D τ (λq 1/2 ) divided for (λ 2 −1/λ 2 ) and whose row (p + 1) /2 is the row (p + 1) /2 of D τ (λq 1/2 ) multiplied for (λ 2 −1/λ 2 ) while all the others rows ofD τ (λq 1/2 ) and D τ (λq 1/2 ) coincide. Clearly it holds: so that we can compute the limits directly for det pDτ (λq 1/2 ). The interesting point is that now all the rows of the matrixD τ (λq 1/2 ) are finites in the limits λ → ±1, ±i, this is a consequence of the identities: τ (±i a q 1/2 ) = a(±i a q 1/2 ), a(±i a q −1/2 ) = 0, ∀a ∈ {0, 1}. (5.12) Explicitly, we have that the nonzero elements of the rows 1, (p + 1) /2 and p are: where we have defined: The remaining rows ofD τ (±i a q 1/2 ) produce the tridiagonal part of this matrix. Then, it is possible to prove that this matrix has linear dependent rows; so that det pDτ (±i a q 1/2 ) = 0. Finally, we can compute the following asymptotic formulae: where we have denoted with t the transpose of the matrix and x = q 2(N+2) . We have that ∆ ∞ is a degree p polynomial in τ ∞ whose zeros are known from the identities: so that we get: This means that we have determined det p D τ (λ) in N + 2 different values of Z together with the asymptotic for Z → ∞. From which the characterization (5.8) trivially follows. The discrete characterization of the spectrum given in Theorem 4.1 can be reformulated in terms of Baxter's type T-Q functional equations and the eigenstates admit an algebraic Bethe ansatz like reformulation, as we show in the next theorem. These type of reformulations of the spectrum holds for several models once they admit SoV description, see for example [69,[108][109][110][115][116][117]. In the following we denote with Q(λ) a polynomial in Λ = λ 2 + 1 λ 2 of degree N Q of the form: Theorem 5.1. For almost all the values of the boundary-bulk parameters such that: (5.26) and the conditions: We recall that a ∞ , a 0 and F (λ) are defined in (5.5)-(5.7) and that X = q + 1/q (4.2). Proof. Let us prove first that if it exists a Q(λ) of the form (5.24) with N Q = (p − 1) N satisfying (5.27) and (5.26) with τ (λ) an entire function, then τ (λ) ∈ Σ T . The r.h.s of the equation (5.26) is a Laurent polynomial in λ as we have: which is finite in the limits λ → ±1, λ → ±i. So that the r.h.s. of (5.26) is a polynomial of degree pN + 2 in Λ, as it is invariant w.r.t. the transformations λ → −λ and λ → 1/λ. Then, the assumption that τ (λ) is entire in λ implies by the equation (5.26) that τ (λ) is a polynomial in Λ of the form (4.1) and that it satisfies the equations: thanks to (5.26) and (5.27), so that we obtain by SoV characterization τ (λ) ∈ Σ T . Let us now prove the reverse statement, i.e. we assume τ (λ) ∈ Σ T and we prove that there exists Q(λ) of the form (5.24) with degree N Q = (p − 1) N satisfying (5.27) and (5.26). 
Let us consider the system of equations: where we have used the notations: From the condition τ (λ) ∈ Σ T and the assumption of general values of the boundary-bulk parameters (5.25), we know that det p D τ (λ) is a non-zero polynomial, so defining: we can solve the previous system of equations for any value of λ ∈ C\Z detpDτ by the Cramer's rule: is the p × p matrix obtained replacing the column i by the column at the r.h.s. of (5.30). Let us now rewrite the system of equation (5.30) bringing the first element in the last one for the two column vectors: where it is easy to see thatD τ (λ) = D τ (λq). Rescaling now the argument of the functions, we can rewrite it as it follows: so that it must hold: where we have used the notation X p (λ) ≡ X 0 (λ), or equivalently: Let us observe now that, from their definition, X a (λ) are continuous functions of λ so the above equation must be indeed satisfied for any value of λ ∈ C. Moreover, from the identity: which we can prove by some simple exchange of rows and columns, and from the fact that: we get the symmetry: X 0 (λ) = X 0 (1/λ), (5.40) which together with the symmetry X 0 (λ) = X 0 (−λ) implies that X 0 (λ) is a function of Λ. By using this last result we can rewrite the first equation of the system (5.30) as it follows ∀λ ∈ C : Note that in the following when we refer to a row k ∈ Z what we mean is the row k ′ ∈ {1, ..., p} with k ′ = k mod p. In the rowh = (p + 1)/2 + h of D (1) τ (±i a q 1/2−h λ) at least one of the three non-zero elements is diverging under the limit λ → ±1, ±i. We can proceed as done in the previous theorem, we define the matrixD (1) τ,h (λ) as the matrix with all the rows coinciding with those of D 42) and the interesting point is that now all the rows of the matrix D (1) τ,h (±i a q 1/2−h λ) are finite in the limits λ → ±1, ±i. We have that the nonzero elements of the rows h, h+1 andh ofD (1) τ,h (±i a q 1/2−h ) reads: where we have defined: The remaining rows ofD (1) τ,h (±i a q 1/2−h ) produce the tridiagonal part of this matrix. It is possible to prove than that for any h ∈ {0, ..., p − 1} the matrixD (1) τ,h (±i a q 1/2−h ) has linear dependent rows; so that det p D (1) τ (±i a q 1/2−h ) = 0 and the following factorization holds: Here P τ (λ) is a Laurent polynomial of degree 2(p − 1)N + 2p in λ, with the following odd parity: Here, we want to prove that in fact: whereQ τ (λ) is a polynomial of degree (p − 1)N in Λ. In order to do so we write down the equation: where for convenience we have denoted R τ (λ) =det p D τ (λ), and we recall Z = λ 2p + 1 λ 2p . The above equation is a direct consequence of the equation satisfied by X 0 (λ) and of the definition of this last function in terms of det p D (1) τ (λ). Now let us consider the following limit on the above equation λ → ±i a with a ∈ {0, 1}: now by using the known identities: we get: and so being x(±i a ) = 0 These results imply the identities: We can now write the functional equation for P τ (λ): τ (λ)P τ (λ) = a(λ)P τ (λ/q) + a(1/λ)P τ (λq) Taking the limit λ → ±i a with a ∈ {0, 1}, we obtain: so that using the previous result (5.61) and the identity: being τ (±i a ) = 0. Let us now compute the functional equation for P τ (λ) in the points λ = ±i a q ǫ for a ∈ {0, 1}, ǫ ∈ {−1, 1}, we obtain: τ (±i a q)P τ (±i a q) = a(±i a q)P τ (±i a ) + a(±i a /q)P τ (±i a q 2 ), (5.66) τ (±i a /q)P τ (±i a /q) = a(±i a /q)P τ (±i a /q 2 ) + a(±i a q)P τ (±i a ), (5.67) implying: being a(±i a q ǫ ) = 0 for a, ǫ ∈ {0, 1}. 
We can iterate these computations for λ = ±i a q bǫ for any a ∈ {0, 1}, ǫ ∈ {−1, 1} and b ∈ {2, ..., (p − 3) /2} obtaining that: In the cases λ = ±i a q ±1/2 as a(±i a /q 1/2 ) = 0 the functional equation for P τ (λ) give us: τ (±i a q ±1/2 )P τ (±i a q ±1/2 ) = a(±i a q 1/2 )P τ (±i a q ∓1/2 ), (5.70) which being P τ (±i a q 1/2 ) = −P τ (±i a q −1/2 ) and τ (±i a q ±1/2 ) =a(±i a q 1/2 ) = 0 implies the identity: so that the factorization (5.54) is proven and we get that: is a polynomial of degree N Q = (p − 1) N in Λ which has the form (5.24). This follows by taking the asymptotic of its functional equation so that we can fix: hence giving a constructive proof of the existence of the polynomial Q-function solution of the equation (5.26). The fact that it is unique is shown observing that ifQ(λ) is another polynomial solution then: and the SoV representation implies the following centrality condition: from which in particular follows: Let us remark now that the r.h.s and the l.h.s of the above equation are continuous w.r.t. the boundary-bulk parameters so that the above identity holds also if we take the special limit µ a,− → q 1−p /µ a,+ for which it holds a(1/ζ (p−1) a ) = 0 and so we get: By definition of the function Q(λ) under these conditions and limit on the bulk parameters we get: Now replacing the first row R 1 with the following linear combination of rows: we getR and so: for generic values of the boundary-bulk parameters. Indeed, as the W a,h+1+i are functions only of the bulk parameter µ a,+ while the ratios a(1/ζ ) are functions of both the boundary and the bulk parameters then we can prove thatW a,h = 0. Explicitly we can compute the asymptotic ofW a,h in the limit µ a,+ → ∞, by using the know asymptotic of the transfer matrix, therefore showing that it is non-zero for general values of boundary-bulk parameters. In the previous theorem we have excluded the boundary-bulk one-constraint cases leading to an identically zero detD τ (λ) for any τ (λ) ∈ Σ T , these specific cases are considered in the next theorem. Let us introduce now the following states: (see (4.5), (4.12) and (4.19)) and the following renormalization of the B − -operator familŷ which is a degree N polynomial in Λ = λ 2 + 1 λ 2 , and where T β is simply a shift on the gauge parameter β (see (B.94)). As first remarked in the papers [61,104], from the polynomial characterization of the Q-function and the SoV characterization it follows the Bethe-like rewriting of the transfer matrix eigenstates stated in the following 1 : Corollary 5.1. The left and right transfer matrix eigenstates associated to τ (λ) ∈ Σ T admit the following Bethe ansatz like representations: .., N Q } are the zeros of Q(λ) and we have imposed the condition (3.26) on the gauge parameters. 1 One should remark that the logic that lead us to the ABA rewriting of the transfer matrix eigenstates is completely different from the one underling the algebraic Bethe ansatz. We get it by rewriting the original SoV form and this allows us to identity the non-trivial state that takes a role similar to a reference state. Note however that it has properties rather different from an ABA reference state as in general it is not an eigenstate of the transfer matrix! For simpler models, for which such a reference state can be naturally guessed, one can also follow the ABA logic i.e. 
to make an ansatz on the form of the ABA states and then to compute the action of the transfer matrix on these states deriving the Bethe equations by putting to zero the so-called unwanted terms. This is what it has been done in the paper [53] for the quantum spin 1/2 chains. Proof. These identities follow from the polynomiality of the Q-functions, which implies the following identity: where theb h (λ b ) is the eigenvalue of the operatorB − (λ|β) and the are the zeros of the Q-function as defined in (5.24). Now we have just to do the action of the monomial: on the right state (5.107) and use that by definition: to prove that the vector in (5.109) coincides, up to the sign, with the vector (4.8) and so it is the corresponding transfer matrix eigenvector; similarly one shows that the covector in (5.109) coincides with the covector (4.10). Conclusions In this second article we have shown how to implement the SoV method to characterize the transfer matrix spectrum for integrable models associated to the Bazhanov-Stroganov quantum Lax operator and to the most general integrable boundary conditions. For that purpose it was necessary to perform a gauge transformation so as to recast the problem in a form similar to the one studied in our first article, i.e., such that one of the boundary K-matrices becomes triangular after the gauge transformation. Let us stress that the separate basis was designed again as the (pseudo)-eigenvector basis of some gauged operator of the reflection algebra having simple spectrum. What remains to be done is the construction of integrable local cyclic Hamiltonian having appropriate boundary conditions and commuting with the boundary transfer matrices considered here. This amounts to use trace identities involving the fundamental R-matrix acting in the tensor product of two cyclic representations [121,124,125] and to construct the associated K-matrices, hence also acting in these cyclic representations. The reflection equations will have to be written for arbitrary choices (and mixing) of the spin-1/2 and cyclic representations. Correspondingly, there will be compatibility conditions between the different K-matrices acting in these two different representations. We will address this question in a forthcoming article [138]. A.2 Pseudo-reference state for the gauge transformed Yang-Baxter algebra In the following, we want to study the conditions for which a nonzero state identically annihilated by the action of the operator family A(λ|α, β, γ) exists: It is an easy consequence of the gauge transformed Yang-Baxter commutation relations that under the condition that this state exists and is unique then it is a pseudo-reference state for the gauge Here, we show that we can construct such a pseudo-reference state if and only if we impose at least N + 1 constraints on the bulk and gauge parameters. Let us remark that if the condition (A.27) are not satisfied we can still derive the left and right local reference states imposing some case dependent condition on the gauge parameters; here for simplicity we have chosen to omit the description of these cases. for fixed N-tuples of ǫ n = ±1 and k n ∈ {0, ..., p − 1}, moreover it is uniquely defined by: c n q rn−1/2 + d n q 1/2−rn a n q rn−1/2 + b n q 1/2−rn Proof. 
The operator family A(λ|α, β, γ) is a degree N Laurent polynomial of the form: where the A n (α, β, γ) are operators, for example we write explicitly: For general values of the parameters these are invertible operators so that we have to impose at least N + 1 constraints to have that their common kernel is at least one dimensional. We can find the set of constraints by using induction and decomposing A(λ|α, β, γ) in terms of gauged operators on two subchains one of N − 1 sites and one of 1 site. The most general decomposition reads: from which it follows: Then A N,...,1 (λ|α, β, γ) admits a non-zero state annihilated by its action once we impose that it is true for A N,...,2 (λ|α, β, x 1 y 1 ) and A 1 (λ|1, 1/x 1 y 1 , γ) or for A 1 (λ|1, y 1 /x 1 , γ) and A N,...,2 (λ|α, β, x 1 /y 1 ), and this state is given by the tensor product of the ones on the two subchains. As the parameters x 1 and y 1 are arbitrary in fact these two conditions are equivalents and so we can chose just one of them. So let us say we ask the second one and we repeat the same argument for A N,...,2 (λ|α, β, x 1 /y 1 ), i.e. A N,...,2 (λ|α, β, x 1 /y 1 ) admits such a state if A 2 (λ|1, y 2 /x 2 , x 1 /y 1 ) and A N,...,3 (λ|α, β, x 2 /y 2 ) do. So on by induction we get that the existence condition is equivalent to the existence conditions for the following N local operators: A n (λ|1, y n /x n , x n−1 /y n−1 ) for any n ∈ {1, ..., N } , (A. 66) where we have denoted while the y n /x n for any n ∈ {1, ..., N − 1} are free parameters to be used to satisfy the existence condition for the local operators A n (λ|1, y n /x n , x n−1 /y n−1 ). From the previous lemma for A n (λ|1, s n , r n ), the existence condition is equivalent to: From this it is clear that the existence conditions of such a state for A N,...,1 (λ|α, β, γ) coincides with the simultaneous existence for the N local operators (A.66) and that the state is just the tensor product of the states (A.69) so that our proposition is proven. Similarly, we can prove the statement for the right state and using the previous lemma we can prove our statement on the action of the operator B(λ|α, β) on these states. B Gauge transformed Reflection algebra B.1 Gauge transformed boundary operators The gauged two-row monodromy matrix can be defined as it follows: Note that one can expand this last gauged monodromy matrix in terms of the gauged bulk ones. Moreover, U − (λ|α, β) does not depend on the internal gauge parameter γ, so we are free to chose it at will. The following decompositions hold: Explicitly, for B − (λ|α, β), it holds: Then it holds:K for ǫ = ± and These gauge transformed boundary operators satisfies the following gauge deformed reflection algebra. Proposition B.1. The gauge transformed boundary operators satisfy the following commutation relations: and Similar commutation relations involving C − (λ|β) can be written by using the following β-symmetries: Moreover, these gauge transformed operators satisfy the following parity properties: Proof. Both the commutation relations and the parity properties here presented coincide with those derived in [68] for the case of the XXZ spin 1/2 quantum chain with general integrable boundaries. This is the case as they are clearly representation independent. Here we are just writing them in a Laurent polynomial form instead of a trigonometric form. 
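As a consistency check on the reflection-algebra structure underlying Proposition B.1, the sketch below verifies the boundary Yang-Baxter (reflection) equation numerically in its simplest degeneration: the rational spin-1/2 R-matrix R(u) = u Id + P with the scalar boundary matrix K(u) = diag(ζ+u, ζ−u). This is only a toy check of the representation-independent relation; the cyclic trigonometric case of the text has the same structure, with Laurent-polynomial entries in place of linear ones.

```python
import numpy as np

# Toy check of the reflection equation
#   R12(u-v) K1(u) R12(u+v) K2(v) = K2(v) R12(u+v) K1(u) R12(u-v)
# in the rational degeneration R(u) = u*Id + P (Yang's solution) with the
# scalar boundary matrix K(u) = diag(zeta+u, zeta-u); R21 = R12 here.
Id = np.eye(4)
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)          # permutation on C^2 x C^2
R  = lambda u: u * Id + P
K1 = lambda u, z: np.kron(np.diag([z + u, z - u]), np.eye(2))  # K in space 1
K2 = lambda u, z: np.kron(np.eye(2), np.diag([z + u, z - u]))  # K in space 2

u, v, z = 0.7, -1.3, 2.1
lhs = R(u - v) @ K1(u, z) @ R(u + v) @ K2(v, z)
rhs = K2(v, z) @ R(u + v) @ K1(u, z) @ R(u - v)
assert np.allclose(lhs, rhs)   # boundary Yang-Baxter (reflection) equation
```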
B.2 Representation of the gauge transformed Reflection algebra In the bulk of the paper we have anticipated that for almost all the values of the boundary, bulk and gauge parameters the operator family B − (λ|β) is pseudo-diagonalizable. We will show this statement in the last subsection of this appendix, but for now we want to write explicitly the representation of the other gauge transformed boundary operator families in the left and right basis formed out of the pseudo-eigenstates of B − (λ|β). Theorem B.1. The action of the reflection algebra generator A − (λ|βq 2 ) on the generic state β, h| is given by the following expression: once the parameter α has been fixed by (3.26). Proof. The following interpolation formula: where we have defined: is a direct consequence of the functional dependence with respect to λ: and of the identities: which are representation independent. Instead the asymptotic operators A ∞,0 − (βq 2 ) depend on the representation and we can compute them observing that using the definition (3.1) of A − (λ|βq 2 ) it holds: are the asymptotic limits of the ungauged elements of U − (λ). The identities: following from (B.17)-(B.19), and imply the main identity: This identity allows to compute these asymptotic operators once we use the interpolation formula to write A 0 − (βq 2 ) in terms of A ∞ − (βq 2 ) as it follows: Similarly, the following theorem characterizes the right SoV representation of the gauged cyclic reflection algebra: Theorem B.2. The action of the reflection algebra generators D − (λ|β) on the generic state |β, h , can be written as it follows: where: Proof. The following interpolation formula is derived as in the previous theorem using the polynomiality of the operator family D − (λ|β): we have just to compute the asymptotic operator D ∞ − (β). The following identities: trivially follows by the definition of the operator family D − (λ|β), from which we get: while from the interpolation formula we get: from which the statement of the theorem follows easily . B.3 SoV spectral decomposition of the identity The Theorem 3.1 states the pseudo-diagonalizability of B − (λ|β) for almost all the values of the boundary-bulk-gauge parameters, so that for almost all the values of these parameters the left and right states β, h| and |β, k are well defined nonzero left and right states describing a left and right basis in the space of the representation. We can now defines the following p N × p N matrices U (L,β) and U (R,βq 2 ) defining the change of basis from the original left and right basis: (B.42) where κ is an isomorphism between the sets {0, ..., p − 1} N and {1, ..., p N } defined by: It follows from the pseudo-diagonalizability of B − (λ|β) that the p N × p N square matrices U (L,β) and U (R,βq 2 ) are invertible matrices for which it holds: We can prove that it holds, with the same notation as in Theorem 3.1: For almost all the values of the boundary-bulk-gauge parameters it holds: so that fixed the normalization factor: in the left and right pseudo-eigenstates, then the p N ×p N matrix M ≡ U (L,β) U (R,βq 2 ) is the following invertible diagonal matrix: from which the following spectral decomposition of the identity I follows: Proof. 
The following identity holds: From it follows the fact that the matrix M is diagonal: as there exists at least a n ∈ {1, ..., N} such that h n = k n and then: |βq 2 ) we get: from which one can prove: This proves the proposition being by our choice of normalization: B.4 Proof of pseudo diagonalizability and simplicity of B − (λ|β) We prove the pseudo-digonalizability and pseudo-simplicity of B − (λ|β) in two steps. We first consider some special representation for which such statement is proven by direct computation then we use this result to prove our statement for general representations. The following theorem holds: Theorem B.3. Let us assume that the conditions on the bulk-gauge parameters: are satisfied for fixed N-tuples of ǫ n = ±1 and k n ∈ {0, ..., p − 1} and that the conditions (A.27) hold together with the following ones: and µ 2p n,ǫn = ±1, µ 2p n,ǫn = α 2pǫ − , µ 2p n,ǫn = −β 2pǫ − , µ 2p n,+ = µ 2pǫ m,− , µ p n,ǫn = µ p m,ǫn , (B.65) for any ǫ 0 = ±1 and n, m ∈ {1, ..., N}, then the operator family B − (λ|β) has simple pseudo-spectrum characterized by: b −,n (β) = µ n,ǫn q 1/2 ∀n ∈ {1, ..., N}, i.e. independent w.r.t. β, and the left pseudo-eigenbasis characterized by the formulae (3.9) by fixing: for fixed N-tuples of ǫ n = ±1 and k n ∈ {0, ..., p − 1}, then the operator family B − (λ|β) has simple pseudo-spectrum characterized by fixing : b −,n (β) = µ n,ǫn q −1/2 ∀n ∈ {1, ..., N}, i.e. independent w.r.t. β, and right pseudo-eigenbasis characterized by the formulae (3.10) by fixing: c n q rn−1/2 + d n q 1/2−rn a n q rn−1/2 + b n q 1/2−rn a p n + b p n c p n + d p so that it holds: and consequently: so that for the pseudo-eigenvalue it holds: respectively on the left and the right. This fixes the values of the b − (β) and b −,a (β) to those stated in this theorem. Note that the condition (B.64) implies that: Let us now prove that the states (3.9) and (3.10) are all nonzero states. The reasoning is done explicitly only for the left case as for the right one we can proceed similarly. We know by construction that the state Ω β | is nonzero so let us assume by induction that the same is true for the state β,h (0) | = β, h N | is nonzero. We have that: j | is nonzero. Using this we can prove that all the states β, h 1 + x 1 , ..., h N + x N | with x j ∈ {0, 1} for any j ∈ {1, ..., N} are nonzero, which just proves the validity of the induction. Note that the same statements hold if we substitute the given value of β fixed in (B.69) with any value β/q 2a for any a ∈ {1, ..., p − 1}; i.e. we have that Ω β/q 2a | is nonzero and from that we prove similarly the induction. Let us now prove that the sets of left and right states define respectively a left and a right basis of the linear space of the representation. Let us consider the linear combination to zero of the left states: and let us act on it with the following product of operator: 2 |β/q 2p ) · · · B − (ζ where the generic monomial in it: contains only the p − 1 arguments ζ (km) m with k m ∈ {0, ..., p − 1}\{h n } and h≡ {h 1 , ..., h N } is a generic element of Z N p . Then, it easily to understand that it holds: where k ′ n = k n if k ′ n < h n and k ′ n = k n − 1 if h n < k n . 
Now the simplicity of the pseudo-spectrum of B − (λ|β) implies that: Note that in the bulk of the paper we have chosen to present the construction of the SoVbasis starting from a state |Ω β associated to the pseudo-eigenvalue b 0 (λ|β) just to simplify the simultaneous presentation of the left and right basis; in fact, we can construct the right basis also starting from the state |Ω β associated to b 1 (λ|β), which is the state constructed directly here for the considered special representations. as a consequence of the commutation relations (B.12). The result of the previous section implies that for some special choice of the boundary-bulk-gauge parameters all the operatorsB −,a,β are invertible as B − (λ|β) is pseudo-diagonalizable and it admits the following representation: where the B −,a (β) are commuting and invertible operators. Then the fact that this operators depend continuously on these parameters implies that this statement is true for almost any values of these parameters. This also implies that for almost all the value of the boundary-bulk-gauge parameters we can use the above representation for B − (λ|β). We can now recall that, thanks to the result of the Lemma A.1 of our previous paper, we can always find a nonzero simultaneous eigenstate of commuting operators such as the B −,a (β) for any a ∈ {1, ..., N}. This is a pseudo-eigenstate of the operator family B − (λ|β). Now, for the same set of representations considered in the previous section we know that the pseudo-eigenvalues of B − (λ|β) satisfy the conditions (3.7) and (3.8). Then, we can use once again the continuity argument to argue that the eigenvalues on the common eigenstate still satisfy (3.7) and (3.8). We can now prove the Theorem 3.1, by using the results of the previous sections. The statements about the spectral decomposition of the identity of the theorem have been already given in Proposition B.2. C Properties of cofactor In this appendix we prove a lemma giving the main properties of the cofactors of the matrix D τ (λ). Proof. Let us remark that independently from the explicit form of τ (λ) the following identities hold: so that C 1,p (λ) is a non-zero polynomial in λ which implies the statement on the rank of D τ (λ). The proof of the above symmetry properties is standard we just need to make some exchange of rows and columns to bring the matrix in the determinant defining the cofactor in the l.h.s into the matrix defining the cofactor in the r.h.s.. Let us show our statement on the form of C 1,1 (λ). In order to do so we have to prove that C 1,1 (λ) is finite in the points 2 λ = ±i a q h for any h ∈ {1, ..., p − 1}. More precisely, in the line p − h there is at least one element of the matrix M 1,1 (λ) associated to C 1,1 (λ) which is diverging in the limit λ → ±i a q h . Here, we have to distinguish three cases. For the case h = (p ± 1)/2, we can proceed as done in the bulk of the paper. We can define the matrix M (h) 1,1 (λ) as the matrix with all the rows coinciding with those of M 1,1 (λ) except the row (p + 1)/2 − h, which is obtained by summing the row (p−1)/2−h and (p+1)/2−h of M 1,1 (λ) and dividing them by ((λ/q h ) 2 −(q h /λ) 2 ), and the row p − h, obtained multiplying the row p − h of M In the remaining cases, if h = (p ± 1)/2 then the row (p ± 1)/2p − h = p mod(p) is not contained in M 1,1 (±i a q h ) so that we cannot remove here the divergence as we have done before. 
However, we can proceed differently, let us explain it in the case h = (p + 1)/2 as in the other case we can proceed similarly. In the last row of M 1,1 (±i a q (p+1)/2 λ) under the limit λ → 1 the last element tend to τ (i a q 1/2 ), finite nonzero value, and the next to last tend to a(±i a q −1/2 ) = 0, all the others on this row are zero. So that C 1,1 (±i a q (p−1)/2 ) is finite iff det p−2 D (1,p),(1,p) (±i a q (p+1)/2 ) is finite. This is shown using the following expansion of the determinant: det p−2 D (1,p),(1,p) (±i a q (p+1)/2 λ) = τ (λ)det p−3 D τ,(1,(p+1)/2,p),(1,(p+1)/2,p) (±i a q (p+1)/2 λ) (C.9) + x(λ)det p−3 D τ,(1,(p+1)/2,p),(1,(p+1)/2−1,p) (±i a q (p+1)/2 λ) λ 2 − 1/λ 2 (C.10) 1,1 (λ) as the matrix with all the rows coinciding with those of M 1,1 (λ) except the row (p + 1)/2, which is obtained by summing the row (p − 1)/2 and (p + 1)/2 of M 1,1 (λ) and dividing them by (λ 2 − 1/λ 2 ), this matrix has finite elements on the row (p + 1)/2 also in the limit λ → ±i a . Similarly to the previous cases one can show that the rows of M from which our statement on the form of C 1,1 (λ) follows. Similarly, we can prove our statement on C 1,p (λ).
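As a small numerical companion to the cofactor manipulations of this appendix, the following sketch (with a generic matrix in place of D_τ(λ), which is not reproduced here) checks the definition C_{i,j} = (−1)^{i+j} det(minor_{i,j}) against the Laplace expansion of the determinant along a row, the identity used repeatedly above and in Section 5.

```python
import numpy as np

# Cofactor C_{i,j} = (-1)^{i+j} det(minor_{i,j}) and the Laplace expansion
# det M = sum_j M[i,j] * C_{i,j}, illustrated on a generic matrix.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))

def cofactor(M, i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

i = 0
expansion = sum(M[i, j] * cofactor(M, i, j) for j in range(M.shape[1]))
assert np.isclose(expansion, np.linalg.det(M))
```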
$R+\alpha R^n$ Inflation in higher-dimensional Space-times We generalise Starobinsky's model of inflation to space-times with $D>4$ dimensions, where $D-4$ dimensions are compactified on a suitable manifold. The $D$-dimensional action features Einstein-Hilbert gravity, a higher-order curvature term, a cosmological constant, and potential contributions from fluxes in the compact dimensions. The existence of a stable flat direction in the four-dimensional EFT implies that the power of space-time curvature, $n$, and the rank of the compact space fluxes, $p$, are constrained via $n=p=D/2$. Whenever these constraints are satisfied, a consistent single-field inflation model can be built into this setup, where the inflaton field is the same as in the four-dimensional Starobinsky model. The resulting predictions for the CMB observables are nearly indistinguishable from those of the latter. Introduction As one of the first examples of single-field slow-roll inflation, Starobinsky proposed a model of extended gravity with f (R) = R + αR 2 that leads to a scalar field theory with an exponentially flat potential [1]. By means of a Legendre-Weyl transformation the nontrivial gravity action can be recast into the form of Einstein-Hilbert gravity with a minimally coupled scalar field, φ, whose scalar potential takes the form (1.1) This model of inflation is, over three decades after its proposal, compatible with the latest observational constraints [2]. The aim of this work is to study possible generalisations of the underlying f (R) theory in D > 4 dimensions; it is based on [3]. Recently, there has been further research in this direction [4], which shares some of the conclusions of the present work, without addressing the important aspect of moduli stabilisation. Whenever higher-dimensional theories are compactified, deformation modes of the internal manifold enter the four-dimensional effective field theory (EFT) as additional scalar fields. Usually those fields must be stabilised in a suitable way to not cause a variety of problems. We study the interplay between inflation from f (R) gravity in higher dimensional spacetimes and moduli stabilisation using a simple toy model. We show that, without ingredients other than the gravitational action and a cosmological constant, the potential is generically unstable along the direction of the volume of the compact space. Following the original idea of Freund and Rubin [5], we demonstrate that non-vanishing two-form flux on the compact space can lead to sufficiently stable minima with a Minkowski or de Sitter space-time in four dimensions. However, we show that there are no stable inflationary trajectories ending in those minima. While for large values of the scalar field φ the potential features a plateau -as in the original Starobinsky model -this plateau is always unstable in the direction of the volume modulus. Finally, we propose a solution to this problem using a more general p-form flux background on the compact space. This allows us to separate moduli stabilisation from the inflationary dynamics. This work fits well in a line with previous studies of plateau inflation in higher-dimensional theories. For example, the authors of [6] use a similarly simple toy model of moduli stabilisation to investigate its interplay with inflationary theories. Moreover, the past decade has seen substantial progress in string theory implementations of plateau inflation models, also including the study of moduli stability, cf. [7][8][9][10][11][12][13]. 
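The display (1.1) is elided in this copy; for reference, in the standard conventions $S = (M_p^2/2)\int d^4x\,\sqrt{-g}\,f(R)$ with $f(R)=R+\alpha R^2$ and $M_p=1$, the Einstein-frame potential takes the well-known exponentially flat form

$V(\phi) = \frac{1}{8\alpha}\left(1 - e^{-\sqrt{2/3}\,\phi}\right)^{2},$

whose plateau at large $\phi$ drives slow-roll inflation.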
The remainder of this paper is organised as follows. In Section 2 we first give the ansatz for the D-dimensional f (R) theory. Second, we give the resulting four-dimensional action for the two involved scalar fields in the Einstein frame, after compactification on a sphere. Third, we demonstrate that two-form flux cannot sufficiently stabilise the volume modulus during inflation. Finally, we solve this problem by introducing p-form flux on the sphere and discuss the ensuing observational footprint of the model. In Section 3 we conclude, and compile the details regarding the main result, which is the four-dimensional action in Einstein frame, in Appendix A. Starobinsky's model in D dimensions The starting point of our discussion is a generalisation of Starobinsky's model in D spacetime dimensions. Following [3] the D-dimensional action features an Einstein-Hilbert term, a higher-order curvature term, a cosmological constant, and the kinetic term of a (p−1)-form gauge potential. In total, we have and n and Λ are treated as free parameters. Moreover, M denotes the D-dimensional Planck mass. In the following we are interested in the four-dimensional effective field theory (EFT) after compactification of D − 4 dimensions on a sphere. 1 F p only has non-vanishing components in the compact space to satisfy Lorentz invariance in the EFT, so for D = 4, n = 2, and Λ = 0 (2.1) reduces to the standard Starobinsky action. Einstein frame and compactification The action above is written in a D-dimensional Jordan frame. To extract the physical predictions of the EFT after compactification of D − 4 dimensions, an Einstein frame description is particularly useful. The strategy to obtain the desired four-dimensional action is as follows. First, by introducing an auxiliary scalar field A we can remove the term proportional to R n in (2.1). Second, using a conformal transformation of the D-dimensional metric, we can transform the result to the D-dimensional Einstein frame. Subsequently we compactify the D − 4 internal dimensions on a sphere. Finally, as the result is again given in a four-dimensional Jordan frame, we perform another conformal transformation to obtain the four-dimensional Einstein frame action of the EFT. The details of this procedure can be found in Appendix A.1. Here we merely state the final result, where g and R now denote the respective four-dimensional quantities. This action is given in terms of four-dimensional natural units, i.e., we have set the four-dimensional Planck mass to unity, M p = 1. Also, compared to (A.30) in the appendix we have dropped the hats for convenience. The four-dimensional EFT apparently contains Einstein gravity and two dynamical scalar fields. Here A is the would-be inflaton field analogous to the one in Starobinsky's model and σ is the radial modulus of the compact sphere. The canonically normalised variables, φ and Σ, can be deduced from (2.3) and are defined by The scalar potential V (σ, A) features contributions from the D-dimensional higher-order curvature term, from the integrated curvature of the compact space, and from compact space fluxes. It reads where V flux is the potential generated by the non-vanishing integral over |F p | 2 on the sphere. It is generally a function of both A and σ; its form is given below. 
If (2.5) is to have a plateau at large values of A, as is typical of four-dimensional Starobinsky inflation, the dimensionality of space-time must be related to the power of the Ricci scalar as follows [3,4], We stress that the violation of this condition does not exclude the existence of a flat patch of the potential where inflation can take place. However, in the remainder of the paper we consider setups that feature an infinite plateau in the A direction, for which (2.6) is a necessary (but not sufficient) condition. As argued below, the stability of this plateau places non-trivial constraints on the functional form of V flux . Before we analyse in detail the interplay between the flux stabilisation of the volume modulus and the existence of a stable and flat inflationary trajectory, let us note that, by taking the limit D → 4 and n → 2 while setting V flux = Λ = 0, one recovers the standard four-dimensional Starobinsky potential in terms of the non-canonical variable A. Volume stabilisation with two-form fluxes One crucial observation following from the result (2.5) is that, if V flux = 0, the theory always has a runaway direction towards σ → 0. To arrange for a (meta-)stable minimum of the volume modulus, we can turn on fluxes in the compact space which contribute to the fourdimensional scalar potential. Like in the original Freund-Rubin compactification [5] (see also [14,15] for a relation to string theory), we may try to employ two-form field strengths. In that case the last term of our starting action (2.1) reads which, upon dimensional reduction, gives rise to in the four-dimensional Einstein frame. The integer flux constant f is defined in (A.24). Thus, the full scalar potential in this case, assuming D = 2n, reads (2.10) We may now study whether this potential has a sufficiently stable minimum with vanishing (four-dimensional) cosmological constant and a stable inflationary trajectory. A vacuum with the desired properties seems to exist in a limited region of parameter space for any value of n. For example, with n = 3 one finds after solving ∂ σ V = ∂ A V = V = 0, with λ = f 4 − 48α. Thus, the existence of a post-inflationary vacuum implies the parameter constraint f 4 > 48α. Inflation, however, seems challenging to realise. One can check that for any n, the only potentially viable inflationary trajectory in the potential (2.10) is along the coordinate A [3]. We can evaluate the potential for large values of A as follows, The result does not depend on A, so the potential develops a plateau as in the original setup of Starobinsky. However, the plateau is always unstable in the direction of the modulus σ. In fact, V lim has a single local extremum at , which does feature a positive value for the scalar potential on the plateau, but a mass for σ that is always negative for n > 2, This leads us to exclude the possibility of Starobinsky inflation in D > 4 dimensions in cases where the radial modulus of the compact dimensions is stabilised by two-form flux. This is ultimately due to the fact that, as a result of the dimensional reduction and conformal transformation, the flux term in (2.10) depends inversely on A. Hence, for large A the crucial stabilising term is eliminated. In the following, we show how this problem can be avoided in a more general flux background. 
Volume stabilisation with p-form fluxes In order to disentangle problem of moduli stabilisation from the potential of the would-be inflaton field, one can consider the more general case of stabilisation via p-form fluxes with p > 2. The corresponding term in the original action is then After noticing that the source of difficulties in the two-form case is the A dependence in (2.9), we consider flux terms that are invariant under the Legendre-Weyl transformation that recasts the D-dimensional action into the Einstein frame. 2 This implies a link between the rank of the p-form and the dimensionality of space-time, This degree of flux is only possible if D ≥ 8, since it must be p ∈ N and p ≤ D − 4. As shown in Appendix A.1, upon dimensional reduction (2.16) gives rise to the following term in the four-dimensional Einstein-frame action, where the last equality follows from imposing D = 2n. As advertised, this stabilising term is independent of A. The full scalar potential then reads Again we find a stable Minkowski vacuum for any n, given by (2.20) As in Section 2.2 we may look for the possibility of a plateau at large values of A. Indeed one finds in the A → ∞ limit In this regime the volume modulus actually develops a local minimum at σ c , defined by which implies that the height of the plateau at large A is given by This situation is different from the one with two-from fluxes in Section 2.2. The plateau is actually stable in a certain parameter regime, since the mass of σ can be positive and large compared to the inflationary energy scale. In particular, one finds for the mass of the canonically normalised modulus at σ c , Requiring the inflationary dynamics to be described by a single-field system, i.e., imposing that σ can be integrated out consistently, leads to the following two constraints on the parameters of the model, The latter constraint comes from the requirement that the dynamics of σ are negligible during the inflationary epoch. These constraints imply a tuning of the parameters such that 2 < σ 2 c (nα) 1 1−n < 2n − 12/5 n − 2 . (2.26) With n > 3, as has to be the case in our setup, one finds 2n−12/5 n−2 < 4. Moreover, note that (2.20), (2.22), and (2.26) imply that in the desired parameter regime σ 0 ≈ σ c . This means that the back-reaction of the inflationary energy density on the expectation value of the volume modulus is negligible. Validity of the four-dimensional EFT In order to evaluate the validity of the four-dimensional EFT one must compare the energy scales in the problem to the Kaluza-Klein (KK) scale of the compactification. Let us expand σ 2 c (nα) where δ 1. One can then show that For the four-dimensional description to be valid both energy scales must be below the KK scale, V plat m 2 Σ M 2 KK , which is given by Since one can tune δ 1 it automatically follows that the Hubble parameter during inflation is parametrically smaller than the square of the KK scale. The situation of m Σ is more subtle since for D = 2n ≥ 8 one finds We therefore conclude that the mass of the volume mode is below, but very close to the KK scale. We note that by tuning the dimensionality of space-time the ratio can be made smaller but that a hierarchical separation is hard to achieve. This renders the moduli stabilisation physics discussed above vulnerable to corrections coming from higher-dimensional physics. 
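The window (2.26) can be made concrete with a short evaluation. Writing s ≡ σ_c²(nα)^{1/(1−n)}, the allowed interval 2 < s < (2n − 12/5)/(n − 2) narrows towards s → 2 as n = D/2 grows:

```python
# Allowed single-field window (2.26): 2 < s < (2n - 12/5)/(n - 2),
# with s = sigma_c^2 * (n*alpha)**(1/(1-n)); it narrows towards 2 as n grows.
for n in (4, 5, 6, 8, 10):
    print(f"n = {n:2d} (D = {2*n:2d}):  2 < s < {(2*n - 12/5)/(n - 2):.3f}")
```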
Inflationary footprint If (2.26) is fulfilled we can describe inflation in terms of a single-field Lagrangian with the scalar potential V (A) ≈ V (σ 0 , A) to very high accuracy. We can then determine the observational footprint of the model as follows. The inflationary potential in terms of the canonical variable φ, defined in (2.4), reads where κ ≡ 2n−2 2n−1 and the C i can be read off from (2.19) after setting σ = σ 0 . Notice that the value of κ plays a pivotal role in the determination of the observables for this class of potentials. For interesting cases one finds . (2.32) As mentioned above, the single-field regime of this setup can be reached whenever the conditions (2.26) are imposed. The closer the parameter choice is to saturating the lower bound on the left-hand side of (2.26), the more robust the mass hierarchy between volume modulus and the inflaton becomes. Furthermore, the correct normalisation of the scalar perturbations requires that at horizon exit V ∼ 10 −10 in Planck units. Hence the closer the parameter choice is to saturating the lower bound, the smaller the radius of the compact space σ 0 , and consequently the smaller the required values of the parameters f and α. For illustration, Figure 1 depicts one correctly normalised example with a large mass hierarchy. In what concerns CMB observables, even in D > 4 dimensions, one recovers values similar to the well-known ones for Starobinsky-type potentials, namely (2.33) where N e denotes the number of e-folds of expansion. Therefore, while these models lie at the centre of the Planck 1−σ region [2], they are essentially indistinguishable among themselves and also from the four-dimensional Starobinsky model with κ = 2/3. Discussion In this paper we have explored the relation between an R + αR n gravitational theory in a D-dimensional space-time and the occurrence of inflation in four dimensions. This work constitutes an obvious extension of the Starobinsky model of inflation. Using the example of a sphere with a single volume modulus, we have found that the stabilisation and dynamics of the extra-dimensional manifold is closely connected to inflation and that disentangling the two requires judicious choices of the model parameters. This situation is analogous to well known results in string inflation, where the interplay between inflation and moduli stabilisation has been extensively studied over the last decade. The stand-out feature of the original four-dimensional Starobinsky proposal, apart from the fact that after 30 years it is nowhere near being excluded by CMB data, is the existence of an infinite plateau at large field values. In our D-dimensional case, demanding the scalar potential in the Einstein frame to have a similar plateau constrains the form of the initial gravitational action to f (R) = R + αR D/2 . Requiring stability of the compact space during inflation further constrains the form of the action, determining the extra degrees of freedom that can be present in the UV limit. More concretely, it excludes stabilisation of the compact space with a two-form field strength, as in Freund-Rubin compactifications. Instead, one may stabilise the volume via p-forms, where the rank p is related to the dimensionality of space-time, p = D/2. This last constraint combined with four-dimensional Lorentz invariance forces us to consider spaces of even dimensionality with D ≥ 8. 
Once all these conditions can be met, it is possible to tune the microscopic parameters -such as the amount of flux, the D-dimensional cosmological constant, and the strength of the R n term -to generate viable models of single-field inflation, compatible with the latest observational constraints, that exit into a viable post-inflationary minimum. a function of a scalar field and the f (R) frame is the one in which the gravitational part of the action is expressed as a (non-linear) function of the Ricci scalar. In this paper, in a slight abuse of nomenclature, we refer to the Jordan and f (R) frames indiscriminately. Let us first focus on the pure gravity part of the action in order to write it in the Einstein-Hilbert form in D dimensions. We decompose (2.1) into S = S grav + S matt , where The D-dimensional cosmological constant Λ is dimensionless and the field strength p-forms have mass dimension one. Let us introduce an auxiliary field χ with mass dimension 2, and write the action as [16] Note that, at this level, χ is a genuine auxiliary field because its action has no time derivatives. The equation of motion for χ following from this action is where we used that ∂ 2 f (χ) From this point onwards α (like Λ) is dimensionless. So far, the total action is thus In order to transform this to the D-dimensional Einstein frame, we perform a conformal transformation and write the action in terms of the metricg defined by One can show that under a Weyl rescaling R transforms as [17] where quantities with a tilde are understood with respect tog M N . In order forg to be the D-dimensional Einstein frame metric it must be Then the action in the D-dimensional Einstein frame reads In what follows we ignore the total derivative∇ 2 ln A. where g (4) and g (D−4) are the determinants of g µν and g mn , respectively. Then the Ddimensional Ricci scalar decomposes as follows, where R (4) is the curvature of the four-dimensional metric g µν , R (D−4) is the curvature of the (D − 4)-dimensional metric g mn and ∇ µ is the four-dimensional covariant derivative with respect to g µν . Under this decomposition the D-dimensional Einstein-Hilbert action becomes One can, at this point, perform the integral over the (D − 4)-dimensional internal space. We remember that for the Euler characteristic of the compact sphere. Moreover, the volume of the compact space is given by where we used that A further conformal transformation is necessary to yield the four-dimensional Einstein frame, so we define the new metricĝ via g µν = Ω −2ĝ µν , g µν = Ω 2ĝµν , −g (4) = Ω −4 −ĝ (4) . where have used (A.7). This is the result given in Section 2.1, where we set M p = 1 and omit the hats and indices on g and R for clarity.
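As a numerical epilogue, the CMB predictions quoted in Section 2.4 can be reproduced with a short slow-roll computation. The sketch below assumes the four-dimensional Starobinsky normalisation V(φ) ∝ (1 − e^{−√(2/3)φ})², to which the D-dimensional models are observationally near-identical, and evaluates n_s and r at N_e e-folds before the end of inflation.

```python
import numpy as np

# Minimal slow-roll sketch, assuming V(phi) = (1 - exp(-sqrt(2/3)*phi))**2
# (overall scale dropped, M_p = 1). phi_end solves epsilon(phi_end) ~ 1.
k = np.sqrt(2.0 / 3.0)
V   = lambda p: (1.0 - np.exp(-k * p)) ** 2
Vp  = lambda p: 2.0 * k * np.exp(-k * p) * (1.0 - np.exp(-k * p))
Vpp = lambda p: 2.0 * k**2 * np.exp(-k * p) * (2.0 * np.exp(-k * p) - 1.0)

eps = lambda p: 0.5 * (Vp(p) / V(p)) ** 2   # slow-roll epsilon
eta = lambda p: Vpp(p) / V(p)               # slow-roll eta

phi_end = 0.94                              # eps(phi_end) ~ 1
phis = np.linspace(phi_end, 8.0, 200001)
integrand = V(phis) / Vp(phis)              # dN/dphi in slow roll
N = np.concatenate(
    [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phis))]
)

for Ne in (50, 60):
    ps = np.interp(Ne, N, phis)             # phi at horizon exit
    print(f"N_e = {Ne}: n_s = {1 - 6*eps(ps) + 2*eta(ps):.4f}, "
          f"r = {16*eps(ps):.4f}")
```

Running this gives n_s ≈ 0.96-0.97 and r of a few times 10⁻³, squarely in the Planck-favoured Starobinsky region, in line with (2.33).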
Priming and bonding metal, ceramic and polycarbonate brackets Abstract Objective: To investigate if primers can be used to modify bonding characteristics of orthodontic brackets. Materials and methods: Stainless steel, zirconia-alumina ceramic and polycarbonate brackets were bonded to enamel with and without universal and bracket material specific primers on the bracket base. Orthodontic adhesive cement (Transbond™XT) was used for bonding. The primers in each group (n = 10) were silane based (RelyX™ Ceramic Primer) and universal primer (Monobond Plus) for ceramic and metal brackets, and adhesive resin (Adper™ Scotchbond™ Multi-Purpose Adhesive) and composite primer (GC Composite Primer) for polycarbonate brackets. Controls with no primer were used for all bracket types. Teeth with bonded brackets were stored in distilled water in 37 °C for 7 days and debonded with static shear loading. Debonding forces were recorded and analyzed with ANOVA. Adhesive remnant index (ARI) was determined and enamel damage examined. Results: The bond strength without primers was 8.14 MPa (±1.49) for metal, 21.9 MPa (±3.55) for ceramic and 10.47 MPa (±2.11) for polycarbonate brackets (p < .05). Using silane as primer increased the bond strength of ceramic brackets significantly to 26.45 MPa (±5.00) (p < .05). ARI-scores were mostly 2–3 (>50% of the adhesive left on the enamel after debonding), except with silane and ceramic brackets, ARI-score was mostly 0–1 (>50% of the adhesive left on the bracket). Debonding caused fractured enamel in four specimens with ceramic brackets. Conclusions: Bond strength was highest for ceramic brackets. Silane primer increased bond strength when used with ceramic brackets leading to enamel fractures, but otherwise primers had only minor effect on the bond strength values. Introduction In fixed orthodontic appliances, the bracket-enamel adhesion should provide a strong attachment of the bracket during treatment, but allow debonding of the bracket without enamel damage at the end of the treatment. The rate of bracket failure is found to be varied, but somewhere between 2-20% of the brackets fail prematurely during the treatment [1][2][3]. The bond strength can be modified by affecting the properties of the adhesive cement or by increasing the mechanical retention by changing the design of the bracket base. In case of bonding failure or in post-treatment removal of the brackets, the break-off can take place either between the bracket and the adhesive cement or between the adhesive cement and the enamel, or at both interfaces. A too high bonding strength may lead to breakage inside enamel or even at the dentinoenamel junction when the bracket is removed [4]. There are many different types of brackets available, the most common materials being metal, ceramic and polycarbonate. The brackets are bonded to enamel with light-curing adhesive cement. When bonding metal brackets, penetration of light under the bracket is limited, and without chemical bonding between the metal bracket and the adhesive cement, the bond strength tends to remain low compared to translucent or chemically bondable ceramic brackets [5][6][7]. Consequently, with metal brackets there is a higher risk for bracket failure but on the other hand, removing brackets is easy. A break-off usually takes place at the bracket-adhesive interface, and enamel damage is only rarely encountered [8][9][10][11]. The bond strength of metal brackets can be improved e.g. 
by sandblasting, microetching and silanation of the bracket base [12-15]. Translucent ceramic brackets allow a more complete photopolymerization of the adhesive, and some ceramic brackets rely on chemical bonding in addition to mechanical retention, resulting in high bond strength [4,7,9,16-18]; but because of the strong attachment, there is a higher risk of enamel damage during bracket removal. Similarly to ceramic brackets, polycarbonate brackets are translucent, but they are reported to have lower bonding properties than ceramic or metal brackets [19,20].

Primers are used in dentistry to promote adhesion between dissimilar substrates that do not naturally bond with each other. Primers are substrate specific, and with some substrates, chemical bonding can be achieved. However, despite their substrate specificity, improvement of the wettability of the bonding surface is a property common to all primers. Silane-based primers are used with ceramic and also with metal substrates, but for polymer composite substrates there are specific primers. Recently, universal primers, which can be used with various types of substrates, have been introduced [21,22]. The objective of this study was to investigate whether different primers could be used to modify the bonding characteristics of metal, ceramic and polycarbonate brackets to achieve adequate bond strength without increasing the risk of enamel damage at debonding.

Materials and methods

Brackets of three different materials (stainless steel, zirconia-alumina ceramic and polycarbonate) were bonded to enamel using bracket-material-specific or universal primers on the bracket base with orthodontic adhesive cement (Transbond™ XT). The brackets were upper central incisor brackets: Inspire ICE by Ormco (a ceramic monocrystalline aluminum oxide bracket with a base covered in small zirconia spheres), Spirit MB by Ormco (a filler-reinforced polycarbonate bracket), and Ortomat Minimat by Ormco (a stainless steel bracket). The brackets of each material were divided into three groups (n = 10) according to the primer used in the bonding procedure. The primers were selected to match the different bracket types based on their universal affinity or material specificity for the bracket materials, and they were a silane-based primer (RelyX™ Ceramic Primer) and a universal primer (Monobond Plus) for ceramic and metal brackets, and an adhesive resin (Adper™ Scotchbond™ Multi-Purpose Adhesive) and a composite primer (GC Composite Primer) for polycarbonate brackets. Two types of adhesion promoters, the methacryloxypropyltrimethoxysilane (MPS) of the silane-based primer and the methacrylated phosphoric acid ester (MDP) combined with MPS of the universal primer, were chosen to be used with ceramic and metal brackets because of their ability to bond with multiple types of substrates. For the polycarbonate brackets, the composite primer is specifically aimed at bonding between composite substrates, and the adhesive resin was chosen because of the similar solubility parameters of polycarbonate and BIS-GMA, which would allow the primer to dissolve and penetrate into the polycarbonate. A control group with no primer was used with all bracket types. The brackets and primers used in the study are listed in Table 1. The teeth used in the study were extracted molars acquired from the teaching clinic of the Institute of Dentistry, University of Turku, Turku, Finland.
The teeth were examined and only sound molars with sufficiently large and not too curved enamel areas, similar to upper central incisors, were included. The teeth were embedded vertically in blocks of acrylic resin so that the roots were inside the acrylic; they were cleaned with pumice, etched for 15 s using a 32% phosphoric acid etching gel, rinsed and air-dried. The selected primer was applied on the base of the bracket and air-dried/light-cured according to the manufacturer's instructions (Table 2), Transbond XT primer was applied on the enamel, a small amount of Transbond XT adhesive cement was applied on the bracket base, and the bracket was placed firmly on the enamel. Excess adhesive cement was removed with an instrument and the adhesive cement was light-cured for 10 s (5 s from both sides) according to the manufacturer's instructions. The specimens were stored in distilled water at 37 °C for 7 days and debonded with static loading using a testing machine (LLOYD Instruments, AMETEK Lloyd Instruments Ltd, West Sussex, UK) in a so-called shear bond strength test with a cross-head speed of 1 mm/min. The tip of the testing blade was positioned above the bracket wings close to the bracket base, the distance of the tip from the bracket base varying between 0.5 and 1 mm due to differences in the thickness of the brackets, the metal brackets being thinner than the ceramic or polycarbonate brackets. The debonding force and the load-displacement curve were recorded. Testing was performed in air at room temperature. After the testing, the specimens were analyzed, and the adhesive remnant index (ARI) (Table 3) and enamel damage were determined using a stereomicroscope (Wild 3MZ stereomicroscope, Wild Heerbrugg, Geis, Switzerland). Statistical analysis was performed with SPSS Statistics version 22.0 using the Kruskal-Wallis test. A few specimens (one each from the met1, cer1 and polyc1 groups) were not included in the results due to testing machine error.

Table 3. Adhesive remnant index (ARI), definition of scores.
Score 0: No adhesive remained on enamel.
Score 1: Less than 50% of adhesive remained on enamel.
Score 2: More than 50% of adhesive remained on enamel.
Score 3: All adhesive remained on enamel.

Results

SEM micrographs of the brackets are presented in Figure 1. The brackets had different base designs: the metal bracket had a mesh base, the ceramic bracket base was covered with small spheres (Ø approximately 40 µm) and the polycarbonate base had large square protuberances of varying sizes (approximately 200-500 µm). There was considerable variance in the height of the texture on the bracket bases between different bracket types: the difference between the highest and the deepest point in the base was approximately 125 µm for the metal, 50 µm for the ceramic, and 150 µm for the polycarbonate bracket, as can be seen in the profile graphs in Figures 2-4. None of the brackets fractured during testing. The bond strength of brackets without primers was 8.14 MPa (±1.49) for metal, 21.9 MPa (±3.55) for ceramic and 10.47 MPa (±2.11) for polycarbonate brackets (Figure 5). There were no differences between the different primers and the control group with metal or polycarbonate brackets. The bond strength of ceramic brackets used with the silane primer was 26.45 MPa (±5.00), which was significantly higher compared to the control group and the universal primer group (p < .05). ARI scores were mostly 2-3, except when the silane primer was used with ceramic brackets, where the ARI score was mostly 0-1 (Figure 6).
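Shear bond strength is simply the recorded debonding force divided by the bonded base area; the sketch below (Python) illustrates the arithmetic. The base area value is hypothetical and chosen only for the example; the paper reports strengths in MPa directly and does not list bracket base areas.

    # Minimal sketch: convert a recorded debonding force to shear bond strength.
    # 1 N / mm^2 = 1 MPa, so no unit-conversion factor is needed.
    def shear_bond_strength_mpa(debond_force_n: float, base_area_mm2: float) -> float:
        """Shear bond strength = debonding force / bonded base area."""
        return debond_force_n / base_area_mm2

    # Hypothetical example: a 120 N debonding force over a 12 mm^2 bracket base.
    print(shear_bond_strength_mpa(120.0, 12.0))  # -> 10.0 MPa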
ARI scores 2-3 indicate that all or more than 50% of the adhesive remained on the enamel after debonding, whereas ARI scores 0-1 indicate that all or more than 50% of the adhesive remained on the bracket (Table 3). Stereomicroscope images of the different ARI scores can be seen in Figure 7 (images 1-4). An enamel fracture was observed in four specimens with ceramic brackets: three with the silane-based primer and one with the universal primer. No enamel fractures were observed with ceramic brackets without primer, or with metal or polycarbonate brackets. As a group, ceramic brackets had a significantly higher incidence of enamel fractures compared to metal or polycarbonate brackets (p < .05). Stereomicroscope and SEM images of a specimen with fractured enamel can be seen in Figure 7 (images 5-6).

Discussion

Thanks to modern resin composites and the acid etching technique, the bond strength between enamel and the resin composite is quite high [23,24]. Therefore, if the bond between the bracket and the adhesive cement also becomes very strong, the risk of enamel damage increases. In this study, ceramic brackets yielded significantly higher bond strength than metal or polycarbonate brackets. The higher bond strength of ceramic brackets, especially with chemical bonding, can result in enamel damage during debonding [25,26], which was evident also in this study in group cer2 with the silane primer, where enamel fractures were observed in three specimens (Figure 7, images 5-6). In fact, to avoid enamel fractures, bonding of ceramic brackets should be based on mechanical retention rather than on chemical bonding, as other studies have also suggested [16,18]. An additional risk of enamel damage is posed by the low fracture toughness of ceramic brackets, especially monocrystalline ones, which may lead to fracture of the bracket itself during debonding [17,27-32]. A part of a ceramic bracket remaining on the enamel can be shaped such that it cannot be removed with pliers, and therefore needs to be removed with a rotary instrument. Due to the hardness of the ceramic bracket, this has to be done with a diamond bur, which can result in enamel damage [33]. It has been reported that the chance of enamel fracture during debonding of ceramic brackets could be diminished, e.g., by using a laser in the debonding procedure [34], or by applying the debonding force by compression rather than by shearing off the bracket, since this leads to a more favorable stress distribution in the enamel [35]. Compression from two sides of the bracket models the clinical case of using pliers, but it has been found that there is no difference in the failure mode between using pliers to detach brackets and shearing them off with a testing machine [20]. Because of the risk of enamel damage, it is usually considered safer if the breakage happens at the bracket-adhesive interface, even though higher ARI values leave more cement to be cleaned from the enamel surface [36]. The findings of the present study showed that when the bond strength between the bracket and the adhesive cement was not increased, the breakage usually happened at the bracket-adhesive interface, but stronger bonding, achieved by added chemical retention, resulted in lower ARI scores, i.e. breakage of the bond at the enamel-adhesive interface (Figures 6 and 7) or even within the enamel. Similar findings have been reported in earlier studies [13,37].
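The ARI definitions in Table 3 amount to a simple threshold rule on the fraction of adhesive left on the enamel. The small helper below encodes that rule; treating exactly 50% as score 2 is our own boundary convention, since Table 3 only distinguishes "less than" and "more than" 50%.

    # Map the fraction of adhesive remaining on enamel (0.0-1.0) to an ARI score,
    # following the definitions in Table 3 of the paper.
    def ari_score(fraction_on_enamel: float) -> int:
        if fraction_on_enamel <= 0.0:
            return 0  # no adhesive remained on enamel
        if fraction_on_enamel < 0.5:
            return 1  # less than 50% of the adhesive remained on enamel
        if fraction_on_enamel < 1.0:
            return 2  # more than 50% remained (0.5 itself: our convention)
        return 3      # all adhesive remained on enamel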
ARI score values of 0 and 1 were found significantly more often when primers were used with ceramic brackets (Figure 6). When ceramic brackets were bonded without primers, most of the adhesive cement was left on the enamel, which is in accordance with the findings of previous studies [17,33,38]. The remaining adhesive cement must be removed, and some minor damage to the enamel seems inevitable, but using, e.g., a carbide bur for the clean-up is less destructive to enamel than removing a piece of a bracket with a diamond bur [39].

Silanes are organosilicon coupling agents. The most commonly used silane in dentistry is methacryloxypropyltrimethoxysilane (MPS), which contains a methacrylate group and three alkoxy groups attached to a Si atom. The methacrylate group reacts with the methacrylate groups in the composite resin, and the alkoxy groups are hydrolyzed and form acidic silanol groups, which then form bonds with the hydroxyl (-OH) groups on the surface of the substrate, e.g. a glass ceramic [21,22,40]. When bonding to substrates that contain silica, which is spontaneously covered by -OH groups from the ambient moisture, siloxane linkages (-Si-O-Si-) are formed, and thus the resin is covalently bound to the silica surface [41]. Weaker adhesion is achieved to metals (-Si-O-M-) because of the fewer -OH groups on the oxidized metal surface. Silanes cannot sufficiently bond to chemically more inert substrates, e.g. oxide ceramics such as fully crystallized zirconia [42]. However, the surface of an inert ceramic can be conditioned for chemical reactivity with silanes, e.g. chemical bonding to zirconia is possible after tribochemical silica-coating conditioning [43,44]. In addition, other adhesion promoters, such as organophosphate ester monomers (MDP), can be used to enhance bonding to oxide ceramics [42,45-47]. It has been suggested that the bond strength of ceramic brackets could be further increased by mechanical retention, e.g. air abrasion or selective infiltration etching [40]. A problem with silane-promoted bonding is its poor hydrolytic stability, which leads to bond deterioration over time [48,49]. In this study, the specimens were stored in water for seven days prior to testing, and it is possible that with a longer water storage time the bond would have started to deteriorate.

In the present study, the silane primer with MPS increased the bond strength of the ceramic brackets, whereas the universal primer with both MDP and MPS had no effect, even though it is suggested to bond to oxide ceramics [42,45-47]. Our findings differ from those of earlier studies, which reported no effect of MPS on the bond strength of ceramic brackets [20] and an enhancing effect of MDP on the bonding of ceramic brackets to ceramic substrates [50]. In addition to bonding with surface hydroxyl groups, another mechanism of action of silane coupling agents is based on improving the wettability of the substrate surface for the monomers of the resin. This could explain the high bond strength of the samples in group cer2: it seems that the silane was able to improve the wettability of the ceramic bracket more than the universal primer did.

Polycarbonate is a translucent thermoplastic polymer with somewhat better mechanical properties than the commonly used denture base polymer, poly(methyl methacrylate). Because polycarbonate is not a strong material, polycarbonate brackets are often reinforced with fillers or fibers. Polycarbonate brackets have been reported to yield lower bond strength values than metal brackets [19].
However, our findings indicate stronger bonding of the polycarbonate brackets compared to the metal brackets. Composite primers function either via the inorganic filler particles of the resin composite or by acting on the polymer matrix through dissolution and polymerization. Typically, composite primers are solvents and methacrylate monomers with photoinitiators for polymerization [51,52]. Dissolution of the polymer substrate surface requires a linear polymer structure of the substrate, and therefore cross-linked polymers cannot be dissolved. The actual bonding is based on the formation of an interpenetrating polymer network at the interface of substrate and adhesive [53]. Composite primers can be mixtures of monomers and silanes, but the silanes have been shown to be inactivated in the mixtures during the shelf life, and the function of the silane component has been questioned [54,55]. In the present study, the effect of the composite primer or the adhesive resin on the bond strength of polycarbonate brackets was statistically insignificant. One way to significantly improve the bond strength when bonding polycarbonate brackets with glass fibers as fillers is to first expose the fibers by sandblasting and then apply silane as a coupling agent [19].

The design of the bracket base is a key factor in creating mechanical retention, and it greatly affects the bonding properties of the brackets. The brackets in this study had very different types of base designs (Figure 1), and each required a different debonding force. The more irregular the base of the bracket, the higher the surface roughness, which creates mechanical retention [56]. The small spheres of the ceramic brackets provide a large surface area and undercut areas, which seem to provide better retention than the mesh on the metal bracket or the square protuberances on the polycarbonate bracket, even though the height difference of the base texture was lowest in the ceramic bracket (Figures 2-4).

(Figure 6. ARI scores of the test groups; see Table 3 for the ARI score descriptions.)

In metal brackets, bonding at the bracket-adhesive interface is based on mechanical retention, and the macroscopically retentive design of the bracket base is therefore of primary importance [57-62]. In metal brackets with a mesh base design, larger mesh apertures have been shown to correlate with higher bond strengths, since they allow better resin penetration into the bracket base and allow air to be displaced from under the adhesive [63]. Improvement of the bond strength of metal brackets without enamel damage has been achieved by applying a metal primer containing 4-META (4-methacryloxyethyl trimellitate anhydride) to the base of the bracket [64].

There are many different types of brackets available, and even within the same material category they differ in many of their properties; e.g., ceramic brackets include mono- and polycrystalline brackets, chemically bonding brackets and purely mechanically bonding ones, and the size of the brackets and the design of the base vary considerably. Therefore, the results for a certain type of bracket cannot be generalized to all other brackets of the same material category, which makes it difficult to compare the results of one study with another.

Figure 7. Examples of ARI scores. Images 1-5 are light microscope images:
1 = ARI score 3, 2 = ARI score 2, 3 = ARI score 1, 4 = ARI score 0, 5 = ARI score 1 with an enamel fracture; 6 = SEM image of the same sample as image 5, showing adhesive remnants and fractured enamel (magnification ×25).

However, with a growing body of research, the benefits and risks of different brackets will become clearer.

Conclusions

Bond strength values were highest for ceramic brackets, followed by polycarbonate brackets, and lowest for metal brackets. The silane primer increased the bond strength when used with ceramic brackets, but otherwise the primers had only a minor effect on the bond strength values. There is a risk of enamel damage with ceramic brackets when a silane primer is used and the bond strength reaches very high values. Since the effects of the primers tested in this study were either insignificant or adverse, the use of these primers on the base of the brackets in orthodontic bonding cannot be recommended.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work was supported by the Finnish Dental Society Apollonia, the Finnish Cultural Foundation, and the Emil Aaltonen Foundation.
4,754.4
2019-11-06T00:00:00.000
[ "Materials Science", "Medicine" ]
Dimension-eight Operator Basis for Universal Standard Model Effective Field Theory We present the basis of dimension-eight operators associated with universal theories. We first derive a complete list of independent dimension-eight operators formed with the Standard Model bosonic fields characteristic of such universal new physics scenarios. Without imposing C or P symmetries the basis contains 175 operators - that is, the assumption of Universality reduces the number of independent SMEFT coefficients at dimension eight from 44,807 to 175. 89 of the 175 universal operators are included in the general dimension-eight operator basis in the literature. The 86 additional operators involve higher derivatives of the Standard Model bosonic fields and can be rotated in favor of operators involving fermions using the Standard Model equations of motion for the bosonic fields. By doing so we obtain the allowed fermionic operators generated in this class of models, which we map into the corresponding 86 independent combinations of operators in the dimension-eight basis of Ref. [1].

I. INTRODUCTION

The Standard Model (SM), based on the SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y gauge symmetry, has been extensively tested at the Large Hadron Collider (LHC) and, so far, no deviation from its predictions [2] nor any new heavy state has been observed [3]. The natural conclusion is that there must be a mass gap between the electroweak scale and the beyond the Standard Model (BSM) physics required to address the well-known shortcomings of the SM. In this scenario, precision measurements of SM processes are an important tool to probe BSM physics, and Effective Field Theory (EFT) [4-6] has become the standard tool employed to search for hints of new physics.

The paradigmatic advantage of EFTs for BSM searches is their model independence, since they are based exclusively on the low-energy accessible states and symmetries. Assuming that the scalar particle observed in 2012 [7,8] belongs to an electroweak doublet, the SU(2)_L ⊗ U(1)_Y gauge symmetry can be realized linearly at low energies. The resulting model is the so-called Standard Model EFT (SMEFT), which can be written as

L_SMEFT = L_SM + Σ_{j>4} Σ_n (f_n^{(j)} / Λ^{j-4}) O_n^{(j)} ,

where the higher-dimension operators O_n^{(j)} involve gauge-boson, Higgs-boson and/or fermionic fields, with Wilson coefficients f_n, and Λ is a characteristic scale.

There is a plethora of analyses of the LHC data in terms of the SMEFT up to dimension six; see for instance [9-22] and references therein. In order to assess the importance of the different contributions in the 1/Λ expansion in such analyses, as well as to avoid the appearance of phase-space regions where the cross section is negative [14], one is required in many cases to perform the full calculation at order 1/Λ^4. As is well known, the consistent calculation at order 1/Λ^4 requires the introduction of the contributions stemming from dimension-eight operators.

At this point the advantage of the model-independent approach mentioned above becomes a limitation, due to the large number of Wilson coefficients. Already at dimension six there are 2499 possible operators when taking flavor into account [23,24]. At dimension eight the number grows to 44,807 [1,25]. Clearly, such a large number of operators precludes a complete general analysis at any order beyond the leading one in 1/Λ, and we are forced to reintroduce some model-dependent hypothesis. In this realm, identifying physically motivated hypotheses able to capture a large class of BSM theories becomes the new paradigm.
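To make the power counting behind the "full calculation at order 1/Λ^4" explicit, here is the standard schematic expansion of an observable (a sketch in a common normalization, not a formula taken from this paper): the interference of dimension-eight operators with the SM enters at the same order as the square of the dimension-six terms,

\[
|\mathcal{A}|^2 \;=\; |\mathcal{A}_{\rm SM}|^2
\;+\; \frac{2\,{\rm Re}\big(\mathcal{A}_{\rm SM}^*\mathcal{A}_6\big)}{\Lambda^2}
\;+\; \frac{|\mathcal{A}_6|^2 + 2\,{\rm Re}\big(\mathcal{A}_{\rm SM}^*\mathcal{A}_8\big)}{\Lambda^4}
\;+\; \mathcal{O}(\Lambda^{-6}) ,
\]

which is why a consistent O(1/Λ^4) analysis cannot drop the dimension-eight contributions.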
One such well-motivated hypothesis is that of Universality, which in brief refers to BSM scenarios where the new physics (NP) dominantly couples to the gauge bosons of the Standard Model. It was first put forward in the context of the analysis of electroweak precision data from LEP and low-energy experiments, with the introduction of the oblique parameters S, T, U [26,27] (or ε_1, ε_2, ε_3 [28]), which captured the dominant NP effects in the observables. In the context of the SMEFT, Universality formally refers to BSM models for which the low-energy effects can be parametrized in terms of operators involving exclusively the SM bosons, hereon referred to as bosonic operators [29]. Ultraviolet (UV) completions that satisfy this specific definition of universal theories include theories in which the new states couple only to the bosonic sector, as in composite Higgs models [30], as well as models where the SM fermions are coupled to new states via SM-like currents [31,32], as in type I two-Higgs-doublet models [33].

In the EFT framework not all operators at a given order are independent, as operators related by local changes of variables with a Jacobian determinant equal to one at the origin exhibit the same S-matrix elements [34,35]. In particular, operators connected by the use of the classical equations of motion (EOM) of the SM fields lead to the same S-matrix elements [36-39].¹ In general, a given SMEFT basis trades some of the bosonic operators for other bosonic operators and for operators involving fermions, hereon called fermionic operators, in order to keep only independent operators. Therefore, the action of a rotated operator is equivalent to a relation between the Wilson coefficients in the basis. These relations for universal dimension-six operators were obtained in Ref. [29]. This work represents the next step in the exploration of the BSM effects of universal theories by presenting the SMEFT operator basis and relations implied by the Universality hypothesis at dimension eight. As a first step we search for a complete list of independent dimension-eight operators composed exclusively of SM bosons before the use of EOM. A large fraction of these operators involve higher derivatives of the gauge bosons and/or the Higgs field and therefore, in the existing dimension-eight bases [1,25], they have been generically eliminated in favor of fermionic operators. Consequently, in universal theories only a subset of the fermionic operators of the general dimension-eight operator basis are generated and, furthermore, their Wilson coefficients are related. In this work we use, for concreteness, the basis presented by Murphy in Ref. [1], which we refer to as M8B. Thus, the program at hand is first to identify a suitable basis of independent bosonic operators at dimension eight, and then, by application of the EOM, to identify the combinations of fermionic operators of M8B associated with universal theories.

The relevance of constructing the most general EFT within a minimal set of assumptions, such as that of Universality, is precisely to provide a tool for phenomenological studies that is as model independent as possible within that assumption. On this front, it is important to stress that the universality assumption allows us to perform detailed studies at order 1/Λ^4 without resorting to very simplified hypotheses where just one dimension-eight operator is considered, or to specific UV completions. For instance, working in the framework of universal models, Ref.
[42] studies the impact of dimension-eight operators on the experimental analysis of anomalous triple gauge couplings by combining the available electroweak precision data and electroweak diboson (W⁺W⁻, W±Z, W±γ) production. It is interesting to notice that the inclusion of dimension-eight operators breaks the relation λ_γ = λ_Z that holds for the dimension-six operators.

¹ When considering higher orders in the 1/Λ expansion, one needs to take care when applying the EOM. While they are consistent at the highest order in the expansion considered, at lower orders one needs to include terms "beyond linear order". Alternatively, the application of field redefinitions is always consistent [40,41].

Another possible application is the complete 1/Λ^4 analysis of Drell-Yan processes [43], which goes beyond the S, T, W and Y oblique-parameter analysis [44] with the introduction of further contributions to the electroweak gauge boson propagators.

For the sake of illustration we also present in Section VI a few simple UV completions of the SM that give rise only to bosonic operators when the heavy states are integrated out at tree level. As expected, once a specific UV model is chosen, only a subset of the possible dimension-eight universal operators is generated, and its number grows with the complexity of the UV completion and its mass spectrum. Thus the results in this paper can be generically utilized in two different approaches. Firstly, as mentioned above, they allow one to perform a complete 1/Λ^4 analysis in a totally model-agnostic way by considering all universal dimension-six and -eight operators which contribute to the process of interest. Alternatively, they can be of practical use when working within a specific universal UV completion matched to the SMEFT by integrating out the heavy states to obtain the generated bosonic effective operators up to dimension eight. In this case the results in Appendix A can be used to rotate these generated bosonic operators to M8B, without having to repeat each time the exercise of applying the equivalence of operators by integration by parts, Fierz identities or equations of motion, because it has already been taken care of.

The work is organized as follows. In Section II we present our notation and framework. Section III is dedicated to presenting our basis of independent dimension-eight universal bosonic operators, while in Sec. IV we construct the Lorentz structures involving fermions associated with the product of SM currents, which are used in Sec. V to obtain the basis of universal fermionic operators. In Section VI we introduce a few simple bosonic UV completions and the corresponding low-energy operators, while we present our final remarks in Sec. VII. The work is complemented with three appendices. The full explicit expressions of the relations between the bosonic and fermionic operators for universal theories are presented in Appendix A. For convenience we include in Appendix B a compilation of the relations more frequently employed, and we reproduce in Appendix C the subset of M8B operators which appear in the universal operators.
II. NOTATION AND FRAMEWORK

Our conventions are such that the SM Lagrangian takes the standard form, where G^A_μν, W^a_μν and B_μν stand for the field strength tensors of SU(3)_c, SU(2)_L and U(1)_Y respectively. We denote the quark and lepton doublets by q and ℓ, while the SU(2)_L singlets are u, d and e, and the respective Yukawa couplings are y^{u,d,e}. We also define H̃^j = ε^{jk} H^{k†} with ε^{12} = +1. The covariant derivative for objects in the fundamental representation reads

D_μ = ∂_μ − i g_s G^A_μ T^A − i g W^a_μ τ^a/2 − i g' Y B_μ ,

where Y is the hypercharge of the particle, T^A are the SU(3)_c generators and τ^a stands for the Pauli matrices. On the other hand, the covariant derivatives of the field strengths are taken in the adjoint representation, where f^{ABC} are the SU(3)_c structure constants. We denote the SU(3)_c completely symmetric constants by d^{ABC}.

As mentioned above, the first step in the program is to obtain the basis of independent dimension-eight operators consisting only of SM bosons. In order to do so, we first obtained the number of independent operators belonging to each of the different bosonic classes before applying the EOM, using available packages like BASISGEN [45], a modified version of ECO [46] given in Ref. [47], and GrIP [48]. Next, we wrote down all possible operators satisfying the SM gauge symmetry and Lorentz invariance. In this process, we worked with the irreducible Lorentz representations of the field strengths, X^μν_{L,R} = ½ (X^μν ∓ i X̃^μν), where we defined the Levi-Civita totally antisymmetric tensor by ε^{0123} = −ε_{0123} = +1. The transformation properties of these fields under the Lorentz group are simple.

At this point, we obtained all possible linear relations between our set of operators using SU(3) and SU(2) Fierz transformations [49-51], summarized in Appendix B. Further linear relations between the effective operators in a given class can be obtained using integration by parts (IBP), for which we follow a procedure similar to the one described in Ref. [52]. In brief, given the field content and number of derivatives in a given class, we obtain all operators invariant under gauge and Lorentz transformations. To obtain the relations among them implied by IBP, we write all the vector structures y^ν_j that contain one less derivative than the operator class under consideration; the IBP relations are then obtained by setting D_ν y^ν_j = 0. At this point, we consider the Fierz and IBP linear relations and eliminate as many operators as there are independent relations. In order to apply the EOM more easily, we then express the final set of operators in terms of the field strengths X^μν and their duals.

As an illustration of the above procedure, let us consider the B_L H^4 D^2 operator class, which contains eight members x_1, ..., x_8. At this stage, we consider operators and their hermitian conjugates as different structures. In this example, linear Fierz relations can be obtained using Eq. (B2); from these relations we can clearly trade (x_5, x_6, x_7, x_8) for (x_1, x_2, x_3, x_4). Therefore, we focus on the latter operator set when obtaining the IBP relations, which are derived from the corresponding vector operators y^ν_j via D_ν y^ν_j = 0; they read, for instance, x_1 + x_2 + x_4 = 0. Just two of these relations are independent, so we are left with two independent operators, which we choose to be x_1 and x_3, since this choice renders the rotation of these operators into M8B straightforward.
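The elimination step just described is linear algebra: collect the Fierz and IBP relations as rows of a matrix over the candidate operators and count how many can be removed. A minimal sketch in Python (the second relation below is invented purely to make the example two-dimensional; only x_1 + x_2 + x_4 = 0 is quoted in the text):

    import sympy as sp

    # Rows encode linear relations sum_i c_i * x_i = 0 among four candidate
    # operators (x1, x2, x3, x4) of a class.
    relations = sp.Matrix([
        [1, 1, 0, 1],   # x1 + x2 + x4 = 0, the IBP relation quoted in the text
        [0, 1, -1, 1],  # a second, hypothetical relation for illustration
    ])

    n_candidates = relations.cols
    n_independent_relations = relations.rank()
    # One operator is eliminated per independent relation:
    print(n_candidates - n_independent_relations)  # -> 2 independent operators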
Once the set of independent bosonic operators has been identified, we apply the EOM to those with one or more derivatives acting on the gauge field strength tensors and two or more acting on the Higgs field. With our conventions, the EOM take the standard form, and we have defined the fermionic "currents"

J_H^j = Σ_{ab} [ y^{u†}_{ab} (ū_a q_{bk}) ε^{jk} + y^d_{ab} (q̄^j_a d_b) + y^e_{ab} (l̄^j_a e_b) ] ,
J^{†}_{H j} = Σ_{ab} [ y^u_{ab} (q̄^k_a u_b) ε_{kj} + y^{d†}_{ab} (d̄_a q_{bj}) + y^{e†}_{ab} (ē_a l_{bj}) ] .

J_H^j does not contain the CKM matrix because the fermion fields in these equations are in gauge eigenstates (labeled with the Latin indices a, b or c), and so are the Yukawa matrices y^f. In addition, we denote the SU(2)_L indices as i, j, k.

Expressing the fermionic operators generated by products of these currents and their derivatives in terms of operators in the M8B basis requires, in some cases, trivial but lengthy field manipulations which make use of identities involving the SU(2) and SU(3) generators as well as Fierz field rearrangements [49-51]; see Appendix B for the more frequently employed relations. In addition, the simplification also involves the equations of motion for the fermions, together with the covariant conservation of the gauge currents and the commutators of the covariant derivatives acting on the gauge currents.

III. INDEPENDENT BOSONIC OPERATORS

The building blocks of the operator basis for universal theories are the Higgs field H, the SM field strengths (X^μν_{L,R} ∼ B^μν_{L,R}, W^{aμν}_{L,R}, G^{Aμν}_{L,R}) and covariant derivatives D. As mentioned above, we obtain the number of independent operators with this field content using the packages BASISGEN [45] and ECO [46,47]. Doing so one finds that, prior to the application of the EOM and without imposing C and P symmetries, there are 175 independent bosonic operators at dimension eight. Of those, 89 can be chosen to be those included in M8B, which, for convenience, we list in Table I. They include all independent operators without derivatives acting on the gauge field strength tensors and with up to one derivative acting on each Higgs field. They lead to a rich and well-known phenomenology. For example, the operators in the classes X^4, X^3 X' and X^2 X'^2 generate anomalous quartic and higher gauge self-couplings that have no triple gauge vertex associated with them [53,54]. The operator in the H^8 class modifies the Higgs self-couplings, and the operators in the X^3 H^2 class give rise to multi-Higgs [55-58] and gauge boson [59,60] vertices, e.g. anomalous triple gauge couplings [42,61]. Furthermore, the operators in the X^2 H^4 class give finite renormalizations of the SM input parameters [42], and they also generate multi-Higgs and gauge boson vertices [62,63].

The first task at hand is, therefore, to identify a suitable set for the remaining 86 operators following the procedure sketched in the previous section. Since our final objective is to find the corresponding combinations of fermionic operators generated after application of the EOM, we select the 86 operators for which the transformation can be most directly implemented. With this in mind, we make the following choice of operators.
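For orientation, the kind of EOM used in these rotations is, for the hypercharge field (a schematic in a common sign convention, not necessarily the one of this paper),

\[
\partial^\nu B_{\nu\mu} \;=\; g'\, j^B_\mu , \qquad
j^B_\mu \;=\; \sum_{f} Y_f\, \bar f \gamma_\mu f \;+\; i\,Y_H \big( H^\dagger D_\mu H - (D_\mu H)^\dagger H \big) ,
\]

so a bosonic operator containing ∂^ν B_{νμ} contracted with some bosonic structure K^μ is traded for g' j^B_μ K^μ, i.e. for two-fermion operators plus additional bosonic pieces from the Higgs current.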
A. Operators with Higgs fields and two or more derivatives

Prior to applying the EOM, the classes H^6 D^2, H^4 D^4 and H^2 D^6 contain 18 independent bosonic operators, of which five are those included in the corresponding classes in Table I. As for the remaining 13 independent bosonic operators, 2 of them are in the class H^6 D^2, 10 are in the class H^4 D^4, and there is only one in the class H^2 D^6. As we will see, upon application of the EOM they generate combinations of fermionic operators with two fermions in the classes ψ^2 H^5 and ψ^2 H^3 D^2, and operators with four fermions in the classes ψ^4 H^2 and ψ^4 D^2 with chiralities (LL)(RR), (LR)(LR) and (LR)(RL), with related Wilson coefficients. Explicit expressions for the relations can be found in Eqs. (A1)-(A13) of Appendix A.

B. Operators with gauge field strengths and derivatives

There are 19 independent operators in the classes X^3 D^2, X^2 X' D^2 and X^2 D^4, none of which is included in M8B. Four involve three powers of the W field strength tensor and another four involve three powers of the G tensor. Eight operators contain two powers of W_μν or G_μν together with B_μν. These operators modify the triple (multi) gauge couplings; upon application of the EOM they lead to combinations of two-fermion operators. Finally, there are three operators in X^2 D^4, one per gauge boson. They affect the gauge boson propagators and can give rise to ghosts [64], in addition to anomalous multi gauge boson vertices. The equations of motion rotate these three operators to combinations of two-fermion operators.

C. Operators with field strengths, Higgs fields and derivatives

There are 62 independent bosonic operators in the class X^2 H^2 D^2 prior to the use of the EOM. M8B contains 18 operators in this class; see Table I. There are, therefore, 44 additional independent bosonic operators in the class X^2 H^2 D^2, of which 9 contain two powers of the hypercharge field strength tensor, another 9 contain two powers of the gluon field strength tensor, 13 contain two powers of the W field strength tensor, and another 13 contain the product of the hypercharge and W field strength tensors. Generically, operators in this class modify the gauge couplings of the Higgs boson and vertices with two scalars and two or more gauge bosons. As we will see, upon application of the EOM they generate combinations of fermionic operators with two fermions belonging to the classes ψ^2 H^5, ψ^2 H^4 D, ψ^2 X^2 H, and ψ^2 X H^2 D, and also operators with four fermions in the classes ψ^4 H^2 involving chiralities (LL)(RR), (LR)(LR), (LR)(RL), and (RR)(RR). Explicit expressions for the relations can be found in Eqs. (A33)-(A76) of Appendix A.

The class X H^4 D^2 contains 10 independent operators, six of them in M8B and another four which we chose as given in Eq. (3.9). As seen in Eqs. (A77)-(A80), these four bosonic operators are rotated by the EOM to combinations of two-fermion operators in the classes ψ^2 H^5 and ψ^2 H^4 D. Finally, there are six independent operators in the class X H^2 D^4, none of which is in M8B, given in Eq. (3.10). Application of the EOM to these six operators gives two-fermion operators in the classes ψ^2 H^5, ψ^2 H^4 D and ψ^2 H^3 D^2, and four-fermion operators in the classes ψ^4 H^2 and ψ^4 H D.
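As a quick consistency check on the counting in this section, the per-class numbers of additional independent bosonic operators should total 86 and, together with the 89 M8B operators of Table I, reproduce the 175 universal operators:

    # Tallies quoted in Sec. III (operators beyond the 89 already in M8B).
    additional = {
        "H6D2": 2, "H4D4": 10, "H2D6": 1,      # Higgs fields and derivatives
        "X3D2 and X2X'D2": 16, "X2D4": 3,      # field strengths and derivatives
        "X2H2D2": 44, "XH4D2": 4, "XH2D4": 6,  # mixed classes
    }
    total = sum(additional.values())
    print(total)        # -> 86
    print(total + 89)   # -> 175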
We finish this section by pointing out that an alternative basis of 86 dimension-eight purely bosonic operators has been presented in Refs. [65,66], motivated by the study of off-shell Green's functions. The universal basis presented here and that in these references are related by IBP and Bianchi identities. As mentioned above, the basis of bosonic operators presented in this section was selected with the aim of allowing for a more direct implementation of the EOM and a more transparent identification of the resulting Lorentz structures involving fermions and the corresponding fermionic operator combinations associated with universal theories, as we discuss next.

IV. PRODUCTS OF FERMIONIC CURRENTS

In universal theories, fermionic operators are either generated directly involving the SM fermionic currents or originate through the use of the EOM for the bosonic fields on purely bosonic operators. As such, the only possible fermionic Lorentz structures are those listed in Eq. (2.23). Consequently, the Wilson coefficients of the possible fermionic operators in universal theories obey well-defined relations. At this point, it is interesting to identify the possible current combinations which are generated by the application of the EOM to the bosonic operators listed in Sec. III. These combinations contain two and four fermion fields.

Most of the operators exhibiting two fermionic fields originate from the direct contraction of the gauge and Higgs currents in Eq. (2.23) with dimension-five bosonic structures. In addition, some two-fermion operators contain derivatives of the fermionic currents in Eq. (2.23) contracted with dimension-four bosonic structures. In order to facilitate the comparison with M8B we have transformed the last two of the generated structures using the relations in Appendix B; in principle the same procedure could have been applied to the first two relations, however, we kept the form used in M8B.

Conversely, most operators containing four fermion fields originate from the product of two currents in Eq. (2.23) contracted with a field strength tensor, two Higgs fields or the derivative of a Higgs field. The operator rotations to M8B require the knowledge of sixteen current products. There are three structures coming from the product of two scalar J_H's; in writing their right-hand sides we have again made use of the relations listed in Appendix B to express the fermion currents in the combinations appearing in M8B.

The product of two gauge currents J^μ_{B,W,G} gives rise to Lorentz scalar and tensor structures; for later convenience, we have Fierz transformed the first two of the tensor structures related to bosonic operators. There are only two products of the scalar current J_H with a gauge current that are generated. Finally, some operators with four fermions come from the direct contraction of derivatives of two currents; these last structures do not need any further simplification, as their present form appears in M8B.
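The SU(N) rearrangements referred to above (the paper's Appendix B) are of the standard type; for reference, the usual generator identities read (standard results, quoted in the common normalization Tr(T^A T^B) = δ^{AB}/2):

\[
(T^A)_{ij}(T^A)_{kl} \;=\; \tfrac{1}{2}\,\delta_{il}\delta_{kj} \;-\; \tfrac{1}{2N}\,\delta_{ij}\delta_{kl} ,
\qquad
(\tau^a)_{ij}(\tau^a)_{kl} \;=\; 2\,\delta_{il}\delta_{kj} \;-\; \delta_{ij}\delta_{kl} ,
\]

which allow products of color-octet or weak-triplet currents to be rewritten in terms of the singlet current combinations used in M8B.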
V. FERMIONIC OPERATORS FOR UNIVERSAL THEORIES

We are now in a position to present the combinations of dimension-eight fermionic operators that are associated with universal theories. We call such combinations universal fermionic operators, since they are the ones with independent couplings. That is, in universal theories the couplings of the fermionic operators must be linear combinations of the 86 independent couplings of the universal fermionic operators listed here.

The 86 universal fermionic operators are formed by the contraction of the fermionic Lorentz structures listed in Sec. IV with the remaining bosonic pieces of the universal operators listed in Sec. III. For convenience, we express them in terms of the fermionic operators in M8B and we employ the M8B naming and numbering of the operator classes. Also for convenience, we reproduce in Appendix C the subset of M8B operators which appear in the universal operators listed here. In addition, we have included a factor of i to make the operators hermitian whenever possible.

The full relations between the 86 bosonic operators in Sec. III, the universal fermionic operators, and the bosonic operators in M8B can be found in Appendix A.

A. Two-Fermion Operators

There are 62 independent universal combinations of two-fermion operators in the following classes:

• Class 9: ψ^2 X^2 H + h.c.: there are 16 universal operators in this class, arising from the direct contraction of the Higgs fermionic current of Eq. (2.23) with two gauge boson field strength tensors; schematically, they involve the Yukawa-weighted combinations of the M8B operators Q^{(1,2)}_{qdB^2H}, Q^{(1,2)}_{qdW^2H}, Q^{(1,2)}_{qdG^2H}, Q^{(1,2)}_{quWBH}, Q^{(1,2)}_{qdWBH} and Q^{(1,2)}_{leWBH}, together with the hermitian conjugates of the above operators. These universal fermionic operators are generated when applying the EOM to some of the operators in the class X^2 H^2 D^2, as can be seen in Eqs. (A33)-(A36), (A42)-(A45), (A55)-(A58), and (A64)-(A67).

• Hybrid class 9&14: ψ^2 X^2 H(D) contains 8 operators exhibiting specific combinations of operators in the classes ψ^2 X^2 H and ψ^2 X^2 D, originating from the contraction of the fermionic structures in Eqs. (4.3) and (4.4) with two gauge field strength tensors (e.g. combinations involving Q_{leWBH} + h.c.).

• Class 11: ψ^2 H^2 D^3: the 2 operators in this class arise from the contraction of the structures in Eqs. (4.1) and (4.2) with a current containing two symmetrized covariant derivatives acting on the Higgs field. These operators are generated directly from the application of the EOM of the Higgs field to the operators in class X^2 D^4, as in Eqs. (A30) and (A31), and in class X H^4 D^2, see Eqs. (A83) and (A86).

• Class 12: ψ^2 H^5 + h.c. contains 2 operators originating from the contraction of the Higgs fermionic currents in Eq. (2.23) directly with Higgs fields, plus the hermitian conjugate (a schematic example of this operator class is sketched after this class list). These operators appear directly in the application of the Higgs EOM to the two operators in class H^6 D^2, Eqs. (A1)-(A2), but, as seen in Appendix A, they also arise in the rotation of a large fraction of the 86 bosonic operators. This is so because these two operators in class H^6 D^2 are generically generated when reducing the products of the Higgs-gauge currents introduced by the gauge boson EOM to the bosonic operators in M8B.

• Class 13: ψ^2 H^4 D: there are four universal fermionic operators in this class, which appear in the contraction of the electroweak fermionic currents in Eq. (2.23) with the product of two Higgs pairs, one of them containing one derivative.
• Class 15: ψ^2 X H^2 D: it contains 24 operators, generated by the contraction of the fermionic gauge currents in Eq. (2.23) with a gauge field strength tensor and a pair of Higgs bosons with one derivative. In twelve operators the fermionic and Higgs currents are contracted with the SU(2)_L field strength tensor. They originate from the direct application of the EOM of the electroweak gauge bosons in operators in the classes X^3 D^2 (Eqs. (A14), (A15), (A22), and (A23)) as well as X^2 H^2 D^2 (Eqs. (A46)-(A49) and Eqs. (A51)-(A54)); they are also generated in the rotation of the operators R^{(1)}_{BH^2D^4} (A83) and R_{WH^2D^4} (A86). In eight operators in this class the fermionic structures couple to the hypercharge field strength tensor. They are generated by the direct application of the EOM of the electroweak gauge bosons in operators in the class X^2 H^2 D^2 (Eqs. (A37)-(A40) and Eqs. (A72)-(A75)); they are also generated in the rotation of some of the R^{(3)} operators. And finally, four operators involve the gluon field strength tensor; they stem from the direct application of the gluon EOM in operators in the class X^2 H^2 D^2, as in Eqs. (A59)-(A62), and from the corresponding R^{(1)} operators.

• Class ψ^2 H^3 D^2: it is generated by the direct contraction of the Higgs fermionic current in Eq. (2.23) with one Higgs field and two derivatives of Higgs fields; there are six independent such contractions (Eq. (5.9)).

B. Four-Fermion Operators

We obtain 24 universal four-fermion operators in the following classes:

• Class 18: ψ^4 H^2: contains eight universal fermionic operators, obtained from the product of the four-fermion currents in Eqs. (4.5)-(4.7) and Eqs. (4.12)-(4.15) with a pair of Higgs fields; schematically, they involve Yukawa-weighted combinations of M8B operators such as Q^{(1)}_{q^2udH^2} and Q^{(5)}-type structures, together with the hermitian conjugate of Q^{(2)}_{ψ^4H^2}. Part of them arise from the direct application of the EOM for the gauge bosons in operators of the class G^2 H^2 D^2 (A63), and of the EOMs for B_μν and W_μν in the corresponding R operators.

• Class 19: ψ^4 X: the universal operators in this class involve combinations such as Q_{q^4W} + 2 Q^{(3)}_{q^2e^2G} + Q^{(2)}-type structures. They are all generated by the direct application of the EOM for the weak and strong gauge bosons in operators in the class X^3 D^2; in particular, the Q_{ψ^4W} combinations arise when using the EOM of W_μν in the operators R^{(1,2)}.

• Class 20: ψ^4 H D: there are four universal four-fermion operators, generated by the contraction of the gauge-Higgs fermion currents in Eqs. (4.16) and (4.17) with the derivative of a Higgs field (Eq. (5.12)), together with their hermitian conjugates. They are generated by applying the EOM for the electroweak gauge bosons and the Higgs in the four operators of the class X H^2 D^2: R^{(1,2)}_{WH^2D^4} (Eqs. (A84)-(A85)). Notice that, to keep the notation compact, we took the liberty of reordering the fermion labeling for some operators in the first equation; in particular, in the case of Q^{(1)}_{f^2leHD}, when f = q, the operator needs to be identified with the corresponding M8B operator.

Further four-fermion structures, such as the Yukawa-weighted Q^{(1)}_{q^2udD^2} combinations, originate, respectively, from applying the EOM for the Higgs in R_{H^2D^6} (Eq. (A13)) and for the hypercharge gauge boson in the corresponding R operators. Notice that, for convenience, in the equation for Q^{(2)}_{e^4D^2} we refer to the M8B Q_{e^4D^2} operator; we have included this minimal change of labeling when listing the operators of this class in Table IX.
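As anticipated in the Class 12 item above, a representative of the ψ^2 H^5 class has the schematic form (our illustrative rendering, with p, r flavor indices; the precise M8B normalization and label may differ):

\[
Q_{leH^5}^{pr} \;=\; (H^\dagger H)^2\, (\bar l_p\, e_r\, H) \;+\; {\rm h.c.} ,
\]

i.e. a Yukawa-like structure dressed with an extra factor of (H†H)^2, which is exactly what the Higgs EOM produces when applied to the H^6 D^2 bosonic operators.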
A straightforward way to construct universal extensions of the SM is to enlarge its scalar sector with new scalar fields which do not couple to the SM fermions. The simplest bosonic UV completion is therefore that of the SM supplemented by a real singlet scalar S. After tree-level integration of the heavy S, the low-energy effective theory contains a known set of operators up to dimension eight [41,67,68], and the last two dimension-eight operators can be written in our basis by the corresponding relations. From the results presented in Appendix A, we can see that the rotation of the operator in Eq. (6.3) to M8B only generates one fermionic universal operator, the real part of Q_{ψ^2H^5} (see Eqs. (A1) and (A2)), while the operator in Eq. (6.4) generates a linear combination of five fermionic operators, the real parts of Q^{(1)}_{ψ^4H^2}-type operators (see Eqs. (A5), (A6), and (A8)-(A10)).

Another possibility is the addition of a scalar SU(2)_L triplet field φ^a with Y = 0. In this case the new terms in the Lagrangian couple the triplet to the Higgs sector, and the low-energy effective theory of this model, up to dimension-eight operators, can be written in terms of our basis [41,69,70]. In this case, from Appendix A we note that the rotation of the operators to M8B generates the real parts of several Q_{ψ^4H^2}-type fermionic operators.

Next we consider a scenario presenting a hidden sector whose particles are not charged under the SM gauge group. In the kinetic mixing model, the hidden sector exhibits a U(1)_X gauge symmetry and possesses a gauge boson V_μ which interacts with the SM via kinetic mixing. For heavy V_μ we can integrate it out at tree level and obtain dimension-six and -eight operators [69,71]. The dimension-eight operator is identified with the operator R_{B^2D^4} in our basis, which is associated to seven fermionic universal operators in M8B, including the real parts of Q^{(1)}_{ψ^2H^4D}-type operators, as can be seen in Eq. (A30).

At large energy, composite Higgs models possess a strongly interacting sector that is naturally connected to the SM bosons. Ergo, possible high-energy resonances can give rise to a plethora of bosonic low-energy effective operators, depending on the spectrum in the UV region. As an illustration, let us consider the minimal model based on the coset SO(5)/SO(4) [30,72,73] and consider a vector resonance ρ_μ that transforms in the adjoint of SO(4). In this scenario many operators are generated, and we will list a few of them.

The formalism developed by Coleman, Callan, Wess and Zumino [74] allows us to write down the allowed interactions of this resonance [75,76]. Denoting by Π = h^a T^a, where T^a are the SO(5) broken generators and h^a the Goldstone bosons, we define U = exp(iΠ). For simplicity, we assume that the SM gauge group satisfies G_SM ⊂ SO(4). In order to write down the ρ_μ interactions we need the building blocks D_μ and E_μ, with T^A being the unbroken generators and A_μ the SM gauge fields; their lowest-order terms are straightforward to obtain. The most general two-derivative SO(5)-invariant action for the ρ_μ can then be written down. Assuming that ρ_μ is heavy, we can integrate it out to obtain the effective interactions ΔL of Ref. [75], where g_ρ is the coupling constant; terms containing four or more derivatives and four or more field strength tensors have been omitted. The first term of the resulting expression leads to dimension-six operators, while the second term of Eq. (6.13) gives rise to dimension-six and -eight operators.
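The tree-level matching pattern underlying all of these examples is standard; as a sketch (with a generic trilinear coupling A, not the notation of this paper), for a heavy singlet with L ⊃ ½(∂S)² − ½M²S² − A S(H†H) one solves the EOM iteratively and plugs back:

\[
S \;=\; -\frac{A}{M^2}\Big(1 - \frac{\Box}{M^2} + \dots\Big)(H^\dagger H)
\quad\Rightarrow\quad
\Delta\mathcal{L}_{\rm eff} \;=\; \frac{A^2}{2M^2}\,(H^\dagger H)^2
\;+\; \frac{A^2}{2M^4}\,\partial_\mu(H^\dagger H)\,\partial^\mu(H^\dagger H) \;+\; \mathcal{O}(M^{-6}) ,
\]

purely bosonic operators, as expected for a universal completion; dimension-eight terms arise at the next order in 1/M² and from additional couplings of S.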
Upon rotation to M8B, the operators in Eq. (6.16) give rise, respectively, to two linear combinations involving ten and seven universal operators, given in Eqs. (A31) and (A30).

In addition, possible four-derivative ρ interactions can produce genuine anomalous quartic gauge couplings [53], i.e. anomalous quartic couplings that do not have triple couplings associated with them. For instance, a suitable ρ self-interaction generates a low-energy effective interaction that contains operators of this kind.

VII. FINAL REMARKS

In the absence of a smoking-gun signal for new physics at the LHC, EFTs, in particular the SMEFT, have become the standard tool for model-agnostic BSM explorations. Unfortunately, their power is in some sense also their weakness since, taken in their greatest generality, the number of parameters (i.e. the number of independent Wilson coefficients) is prohibitively large already at dimension six. Identifying physically motivated hypotheses which reduce the EFT parameter space while still capturing a large class of BSM theories presents a motivated route to predictability. Universality, i.e. the assumption that the NP couples dominantly to the Standard Model bosons, is one such hypothesis. At dimension six, universality reduces the dimensionality of the SMEFT parameter space from 2499 to 16, allowing for a constrained effective parametrization of NP effects [29].

In this work we have taken the next step by constructing the dimension-eight SMEFT operator basis which, at the high-energy matching scale, can be used to encode the effects of universal extensions of the SM. To do so we have identified a suitable basis of independent operators formed with the Standard Model bosonic fields at dimension eight. It contains 175 operators - that is, the assumption of Universality reduces the number of independent SMEFT coefficients at dimension eight from 44,807 to 175. 89 of these operators are part of the general SMEFT dimension-eight basis in the literature (see Table I). Our choice of the additional 86 operators is presented in Eqs. (3.1)-(3.10). In the general dimension-eight SMEFT basis in the literature these 86 operators have been traded for combinations of the 89 bosonic operators in the basis and additional operators involving fermions. Thus, in universal theories, only a subset of fermionic operators is generated (see Appendix C) and their Wilson coefficients have well-defined relations: they must be linear combinations of the 86 independent couplings of the universal fermionic operators presented in Section V.

The drastic reduction of independent parameters implied by the Universality assumption opens up the possibility of employing it for quantitative phenomenology, because just a few of them contribute to a specific reaction. For example, the direct effect of the dimension-eight universal operators can be seen in the production of multiple H, W± and Z.
Operators in the classes X^3 H^2 and X^3 D^2 modify the triple gauge boson vertices, so contributing to diboson production at tree level. Moreover, many classes contribute to modifications of the quartic couplings among the SM gauge bosons, as well as to vertices with Higgs and gauge bosons. In addition, an interesting subset of operators from the rotated basis are those which include, after rotation, the Murphy basis operators Q^{(i)}_{f^2H^2D^3} for i ∈ {1, 3, 4}. These operators introduce novel kinematics in the Higgs associated production process [62], and they are generated by rotating some of the R operators of Sec. III. Of course, the phenomenological program first requires a careful accounting of the relevant field redefinitions and other finite renormalization effects, since some of the operators contribute to the definition of the SM parameters [42]. Moreover, even in a scenario where the high-energy model is universal, there will be non-universal effects in the low-energy EFT due to the renormalization group running [77], controlled, however, by the universal parameters at the matching scale. They should also be taken into account in phenomenological studies. Furthermore, the rich phenomenology of dimension-eight operators is subject to many constraints originating from the causality and analyticity of the scattering amplitudes; see, for instance, references [78-82]. These bounds define the regions of the parameter space associated with well-defined ultraviolet completions of the SM. We leave the quantitative exploration of the phenomenology of universal SMEFT at dimension eight for future work.

• Lorentz tensor fermionic Fierz identities: in the expressions below, S^μν represents a piece which is symmetric under the exchange μ ↔ ν and which does not contribute to the operators for which these relations are used, so for simplicity we have not included its explicit form; it can be found in Ref. [51].

Operators such as Q^{(5)}_{leH^3D^2}, and their hermitian conjugates, are generated directly from applying the Higgs field EOM to the operators in the classes H^4 D^4 and H^2 D^6 (see Eqs. (A3)-(A8) and (A13)). In addition, they also appear in the rotation of operators in X H^2 D^4, as in Eqs. (A81)-(A82) and (A84)-(A85), arising in the reduction of the products of the Higgs gauge currents introduced by the gauge boson EOM to the bosonic operators in M8B.

The Q_{ψ^4H^2} combinations are generated by the use of the Higgs EOM directly in operators in the classes H^4 D^4 and H^2 D^6, see Eqs. (A9)-(A13), and in the rotation of operators in the class X^2 D^4, as in Eqs. (A30)-(A31). When writing Q^{(4)}_{ψ^4H^2} in terms of operators in M8B, we have added a superscript (1) in Q^{(1)}_{f^4H^2}, where f = u, d, e, and in Q^{(1)}_{e^2u^2H^2} and Q^{(1)}_{e^2d^2H^2}; this minimal change of labeling is reflected also when we list the operators in class 18 in Table VI.

• Class 19: ψ^4 X: the eight universal operators in this class are formed by the contraction of the four-fermion tensor currents in Eqs. (4.8)-(4.11) with a gauge field strength tensor; see Q^{(1)}_{leq^2HD} in Table VIII.

• Class 21: ψ^4 D^2: finally, there are four universal four-fermion operators generated directly by the contraction of the derivatives of two fermion currents in Eqs. (4.18)-(4.21), e.g. in the rotation of Q_{W^2D^4}. We notice in passing that the results in Sec. VI show that the simple composite Higgs model presented there generates both R^{(1)}_{B^2D^4} and R^{(1)}_{W^2D^4}, while R^{(1)}_{B^2D^4} also emerges in the minimal U(1)_X kinetic-mixing scenario.
f 2 H 4 D , where f = u, d, e, we have added a superscript of (1) to the M8B operators. The subscripts p, r are weak-eigenstate indices.

TABLE III. The dimension-eight operators in the M8B with particle content ψ 2 H 5 , ψ 2 H 4 D and ψ 2 H 3 D 2 that are generated in universal theories. For the operators in the first column, their hermitian conjugates are a priori independent operators. Operators in class 13 are hermitian. For operators Q

TABLE IV. The dimension-eight operators in the M8B with particle content ψ 2 XH 2 D generated in universal theories. All operators are either hermitian or anti-hermitian. Once again, the subscripts p, r are weak-eigenstate indices.

TABLE IX. The dimension-eight operators in the M8B with particle content ψ 4 HD generated in universal theories. All operators are either hermitian or anti-hermitian. For the operator Q (1) e 4 D 2 , we have added a superscript of (1) to the M8B operator. The subscripts p, r, s, t are weak-eigenstate indices.
9,896.6
2024-04-04T00:00:00.000
[ "Physics" ]
Controllable Clustering Algorithm for Associated Real-Time Streaming Big Data Based on Multi-Source Data Fusion

Aiming at the problems of poor security and low clustering accuracy in current data clustering algorithms, a controllable clustering algorithm for associated real-time streaming big data based on multi-source data fusion is proposed. The FIR filter structure model is used to suppress network interference, and the ant colony algorithm is used to detect abnormal data in the big data. Through iterative optimization, paths whose pheromone concentration rises to the front positions are flagged as abnormal data points, and a filter is introduced to remove them. The fusion scope of multi-source data fusion is set. Combined with the data similarity function, the multi-source data fusion concept is used to construct the associated real-time streaming big data fusion device, and the deduplicated data are fed into the fusion device to obtain the clustering result. The experiments show that the proposed algorithm has a high safety factor, good clustering accuracy, high clustering efficiency, and low energy consumption.

Introduction

The arrival of big data not only promotes industrial development but also promotes the evolution of new business models [1,2]. To this end, the rapid mining and discovery of knowledge from massive data has become the focus of researchers and companies. Data clustering is an important research direction in data mining. It aims to divide data into categories consisting of similar objects [3,4]. However, the associations in real-time streaming big data are versatile and complex. Traditional frameworks only consider computing resources for data stream extraction and storage and cannot effectively use computing resources to provide more and faster clustering services [5][6][7]. This requires a new data clustering algorithm [8,9].

Cao and Qian proposed a big data clustering algorithm based on local key nodes. To address the initial node uncertainty and the time consumption caused by the fitness function calculation, local key nodes were introduced and the fitness formula was improved to reduce the time consumption. Experiments against classical algorithms on small-scale and large-scale data networks showed that the clustering time was short, but the approach suffered from poor security [10][11][12][13]. Zhang et al. proposed a SOM mixed-attribute data clustering algorithm based on heterogeneous value difference metrics. The algorithm used a self-organizing map neural network as the framework and used heterogeneity differences based on sample probability to measure the dissimilarity of mixed-attribute data. The frequency of occurrence of a categorical feature in the Voronoi set was used as the basis of the reference-vector update rule for categorical attribute data, and the updates of the numerical-attribute and categorical-attribute rules were realized through a hybrid update rule. The proposed clustering algorithm was tested on the categorical-attribute and mixed-attribute datasets in the UCI public database. The experimental results showed that the algorithm had low running complexity but suffered from poor classification accuracy [14]. Wang et al. proposed a data classification algorithm based on Spark-FCM. Firstly, the matrix was distributed by horizontal partitioning, and different vectors were stored in different nodes.
Then, based on the computational characteristics of the FCM algorithm, a distributed, cache-sensitive common matrix operation and the Spark-FCM algorithm were designed. The main data structure adopted distributed matrix storage, which required less data movement between nodes and offered distributed computing at each step. The experimental results showed that the algorithm had good stability, but it was time-consuming [15][16][17][18]. Liu et al. proposed an improved manifold clustering algorithm based on density peak search. The global and local spatial manifold distribution of the dataset was comprehensively considered, and the local density of each sample point was defined. According to the local density of each sample point and its relationship to the local densities of other sample points, a cluster center criterion was defined and clustering was implemented. But the algorithm had limited classification accuracy, and cluster energy consumption was high [19].

Aiming at the problems in the existing research results, a controllable clustering algorithm for associated real-time streaming big data based on multi-source data fusion is proposed. The general framework is as follows:

(1) The anti-interference filtering of the associated real-time streaming big data is realized by the FIR filtering algorithm, and abnormal data in the filtering result are detected and filtered out by the ant colony algorithm to improve data clustering security in real time.
(2) Redundant data in the associated real-time streaming big data are removed to reduce the energy consumption of data clustering.
(3) Data clustering is achieved through multi-source data fusion.
(4) Experiments and discussion are used to verify the controllable clustering algorithm for associated real-time streaming big data based on multi-source data fusion.
(5) The full text is summarized and the next research direction is proposed.

Processing Abnormal Associated Real-Time Streaming Big Data

In order to improve the security of the data clustering process, abnormal data need to be eliminated [20][21][22][23]. In this process, the FIR filtering algorithm is first used to realize anti-interference filtering of the associated real-time streaming big data. The structure diagram is shown in Figure 1. Assuming that the data traffic is generated by a linearly correlated nonlinear time series, the FIR filter structure model of equation (1) is used for interference suppression, where x_n represents the network traffic information model of the data center, a_0 represents the sampling amplitude of the initial network traffic, x_{n-i} represents the scalar time series of network traffic with the same mean and variance in the data center, b_j represents the oscillation amplitude of the network traffic in the data center, and η_n represents the delay sequence of data transmission. According to the calculation of equation (1), the network traffic information flow of the data center is Fourier transformed to obtain the time series x(k), and the oscillation attenuation of the network traffic after interference filtering is obtained, where a represents the inter-domain variance coefficient of network traffic, m represents the embedding dimension of the phase space, and B_H(t) represents the correlation function of data-flow anomaly feature detection. Assuming that the input sequence x(k) is a wide-sense stationary time series, the transfer function H_B(c) of the filter is expressed in terms of G(c), where G(c) represents the filter transfer model.
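The paper does not reproduce the filter coefficients of equation (1), so the following is only a minimal numerical sketch of FIR interference suppression; the tap vector b and the synthetic traffic series are illustrative placeholders, not the authors' values:

```python
import numpy as np
from scipy.signal import lfilter

def fir_suppress(x, b):
    """Apply an FIR filter y[n] = sum_j b[j] * x[n-j] to a traffic series."""
    return lfilter(b, [1.0], x)  # denominator [1.0] makes this a pure FIR

rng = np.random.default_rng(0)
# Synthetic "network traffic" series: a slow trend plus interference noise.
x = np.sin(np.linspace(0, 20, 500)) + 0.5 * rng.standard_normal(500)
b = np.ones(8) / 8.0          # illustrative 8-tap moving-average FIR taps
y = fir_suppress(x, b)        # interference-suppressed output series
```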
Based on the above calculations, the designed interference-suppression FIR filter for network traffic is obtained. According to the above calculation and analysis, the final data streaming output c(t) of the FIR anti-interference filtering process is expressed in terms of x(t), the real part of the data streaming time series, y(t), the imaginary part of the data streaming time series, and n(t), the other influence vectors.

Based on the above interference suppression result, the ant colony algorithm is used to detect the abnormal data. The detection process is mainly divided into the following aspects:

(6) Return to step (4) until convergence or until the termination condition is met.
(7) Select all paths whose pheromone is greater than the set threshold, and save or modify S as required. Pheromone concentrations at the front positions of the table are determined to be abnormal data.

In step (2), the pheromone χ_{i′} of each edge e_{i′} (1 ≤ i′ ≤ En) on DG is defined, where En represents the number of edges in DG. Also in step (2), the table S = (T, A, V, M), where T represents the tuple address of the ant walking the path, A is the target attribute name, V is the target attribute value in the tuple, and M is the attribute measure value of A in the tuple. In step (4), given a node v, the probability P_{i′}(t) of the ant selecting the adjacent edge e_{i′} is expressed in terms of the edge heuristic factor λ_{i′}; the larger its value, the greater the probability of selecting this path. In step (5), the pheromone on each edge of the most recent path L is updated with volatilization rate ρ, which effectively inhibits the ant colony from rapidly converging onto paths already traversed. As the pheromone is continuously updated, pheromones whose concentration exceeds the set threshold w are stored in the table S, and the abnormal data are identified. An abnormal data filter is then introduced to filter out the abnormal data, and the resulting normal dataset can be expressed in terms of C(t) and F(x), where C(t) represents the normal dataset in the associated real-time streaming big data and F(x) represents the abnormal data filter.
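Since the full procedure is not spelled out here, the following is a hedged sketch of the pheromone bookkeeping just described — selection probability proportional to pheromone times heuristic factor, evaporation at rate ρ, and flagging of edges whose pheromone exceeds the threshold w. The graph size, deposit amount, and threshold are illustrative assumptions:

```python
import numpy as np

def ant_colony_flag(n_edges, walks, rho=0.1, deposit=1.0, w=2.0,
                    iters=50, seed=0):
    rng = np.random.default_rng(seed)
    tau = np.ones(n_edges)                  # pheromone chi_i on each edge
    lam = rng.random(n_edges)               # edge heuristic factors lambda_i
    for _ in range(iters):
        p = tau * lam / np.sum(tau * lam)   # selection probability P_i(t)
        chosen = rng.choice(n_edges, size=walks, p=p)
        tau *= (1.0 - rho)                  # evaporation at rate rho
        np.add.at(tau, chosen, deposit)     # reinforce recently used edges
    return np.where(tau > w)[0]             # edges exceeding threshold w

suspect_edges = ant_colony_flag(n_edges=100, walks=10)
```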
Deduplication of Real-Time Streaming Big Data

In order to reduce the burden of data clustering, reduce its energy consumption, and improve classification accuracy, redundant data need to be cleared [24]. For a new data segment, its similarity characteristics fall into two cases, shown as the black points A_1 and A_2 in Figure 2. The black dot A_1 indicates that the position of the data segment in the plane is outside the three class boundaries, its most similar data segment being the closest of the white points lying on a class boundary [25,26]. The black dot A_2 indicates that the position of the data segment in the plane is inside a class, its most similar data segment being the closest of the white dots within the same class. Since the similarity of data objects in the same class is high, selecting the data segments at class boundaries to build the check cache provides a good deduplication ratio [27]. According to this problem description, redundant data are filtered out using the check-metadata deduplication algorithm.

Firstly, the weighted dataset is clustered; then the compressed-neighbor algorithm is used to obtain a weighted subset, and similar metadata are eliminated based on the weighted subset, thereby reducing the size of the index. Eliminating metadata with high similarity can effectively reduce the amount of metadata and further reduce system resource overhead while maintaining the deduplication ratio [28,29]. The whole deduplication process is divided into two parts: similarity clustering of data segments and deduplication.

In the similarity clustering phase, for clarity in the subsequent description, we define the following symbols: the sim fingerprint set of the data to be checked is S′ = {s′_1, ..., s′_n}; the clusters before similarity clustering are C′ = {C′_1, ..., C′_K}; the clusters after clustering are C″ = {C″_1, ..., C″_K}; the distance measure between two similar data segments is dist(s′_i, s′_j); and the Hamming distance between two sim fingerprint values is Ham(s′_i, s′_j). Here, the saved sim fingerprint value information is used to represent a data object, and the fingerprint set S′ = {s′_1, ..., s′_n} of the check data is obtained. Then the data objects in the set S′ are clustered, and the data segments are divided into the K classes C″ = {C″_1, ..., C″_K}. The distance measure between two similar data segments is expressed as the Hamming distance of their sim fingerprint values. Therefore, the entire clustering process can be described as follows:

(1) Select K representative objects b_1, ..., b_K from S′ as the initial center points. The object with the lowest total cost is selected as the new center point.
(4) Repeat steps (2) and (3) until the K center points no longer change. The K clusters C″ = {C″_1, ..., C″_K} are then obtained, that is, the required K classes of similar data.

In the deduplication phase, the check weight subset is defined as S″ = {s″_1, ..., s″_n}, and the procedure is as follows. The sim fingerprint value of each data segment is used in place of the data segment itself, and the sim fingerprint value set S′ = {s′_1, ..., s′_n} of all data segments is obtained. Two stores, st and gr, are set up for the set S′. All the samples of S′ are put into gr, and the sim fingerprint value of one data segment is randomly extracted from gr and put into st. A fingerprint value s′_K of a data segment is randomly extracted from gr, and the sim fingerprint values in st are used as the reference set. s′_K is classified, and the s′_nK closest to s′_K in st is found. If s′_nK and s′_K belong to the same cluster C′_i, the redundancy judgment is considered correct and s′_K is deleted; otherwise, s′_K is stored in st as a new category. The above steps are performed for all samples in gr until gr is empty. The set remaining after redundant data removal is S‴ = {s‴_1, ..., s‴_n}. According to the above process, redundant data in the real-time streaming big data can be cleared.
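A minimal sketch of the deduplication phase described above, assuming 64-bit integer sim fingerprints and a precomputed cluster label for each fingerprint (both assumptions; the paper does not specify the fingerprint width):

```python
def hamming(a: int, b: int) -> int:
    """Ham(s_i, s_j): Hamming distance between two sim fingerprint values."""
    return bin(a ^ b).count("1")

def deduplicate(gr, cluster_of):
    """gr: list of fingerprints; cluster_of: dict fingerprint -> cluster id."""
    st = []                                   # retained reference fingerprints
    for s_k in gr:
        if st:
            s_nk = min(st, key=lambda s: hamming(s, s_k))  # closest reference
            if cluster_of[s_nk] == cluster_of[s_k]:
                continue                      # same class: judged redundant
        st.append(s_k)                        # otherwise keep as new category
    return st
```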
Controllable Clustering for Associated Real-Time Streaming Big Data Based on Multi-Source Data Fusion

Assume that the fusion range of multi-source data fusion is [q_min, q_max], where q_min represents the minimum number of neighbors and q_max represents the maximum number of neighbors. The q nearest neighbors found by searching the N″ data points are mapped to a similarity function, and the attribute similarity of a data point pair (z_1, z_2) is recorded as similarity(z_1, z_2), as in equation (12). The similarity function of equation (12) is then updated: for a data point pair (z_1, z_2), assume z_1 is among the q neighbors of z_2, or vice versa. The loop variable is incremented, t″ = t″ + 1; if t″ < N″, the similarity function is reconstructed; otherwise, the following steps are performed. Based on the similarity function formed by the above process, the multi-source data fusion concept is used to construct the real-time streaming big data fusion device, whose output is the final clustering result, where ⊗ represents the connector of attribute-similarity data in the real-time streaming big data and δ represents the clustering factor; controlling its value within [0.6, 0.7] improves the clustering precision.

Results

In order to verify the effectiveness of the controllable clustering algorithm for associated real-time streaming big data based on multi-source data fusion, an experiment was conducted. The experimental hardware environment is shown in Figure 3. The algorithm was implemented in C under Linux together with the associated real-time streaming database. The experimental indicators and results are as follows.

Analysis of Figure 4 shows that the big data clustering algorithm based on local key nodes and the SOM mixed-attribute data clustering algorithm based on heterogeneous value difference metrics have lower operational safety factors. The controllable clustering algorithm for associated real-time streaming big data based on multi-source data fusion has a higher data clustering safety coefficient under different types of attack data. This is mainly because the algorithm detects and eliminates anomalous data before clustering, which effectively enhances its security performance. Analysis of Figure 5 shows that the proposed algorithm is superior to current data clustering algorithms in clustering accuracy. The algorithm uses filtering technology to suppress network interference, which initially improves clustering accuracy; the clustering factor is introduced to improve the accuracy further. In Figures 6 and 7, the clustering time of the proposed controllable clustering algorithm is not affected by the amount of clustered data, the clustering efficiency is high, and the overall energy consumption of clustering is lower. This is mainly because eliminating redundant data reduces the time spent on clustering, the energy consumption of clustering, and the clustering burden, effectively controlling the time and energy consumption of data clustering and enhancing the overall performance of the proposed algorithm.

Discussion

In the discussion, the clustering factor δ is taken as the object of study, and the influence of its value range on data clustering is observed. Matlab 2017a is used to simulate the influence of changes in its value on the data clustering accuracy rate. It is observed that when the value is controlled within the interval [0.6, 0.7], the data clustering accuracy rate is the highest; that is, δ can effectively improve the clustering accuracy in this interval. The results are as follows.
It can be seen from Figure 8 that when the value of δ is in [0.4, 0.6], the data clustering accuracy rate improves overall but is not ideal; when the value of δ is in [0.7, 0.9], the data clustering accuracy rate decreases; and when the value of δ is in [0.6, 0.7], the data clustering accuracy rate is the highest, about 98%. The comparison shows that the value of δ has a strong influence on clustering accuracy and that values in [0.6, 0.7] give the best clustering accuracy, indicating that this value range is reliable.

Conclusions

Efficient clustering of large-scale data is very important for data utilization. Since the performance of existing approaches leaves room for improvement, a controllable clustering algorithm for associated real-time streaming big data based on multi-source data fusion is proposed. By detecting and eliminating abnormal data and redundant data, it lays a foundation for data clustering, and it realizes clustering by constructing an associated real-time streaming big data fusion device. The experimental results show that the proposed algorithm has strong clustering performance and is feasible. In future work, the following aspects can be research priorities: data clustering technology is constantly evolving, including clustering systems; the clustering system can be integrated with the clustering algorithm, using the clustering algorithm to control the clustering system and further improve data clustering performance.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
4,516.4
2022-02-23T00:00:00.000
[ "Computer Science" ]
Gene x environment interactions as dynamical systems: clinical implications

The etiology and progression of the chronic diseases that account for the highest rates of mortality in the US, namely cardiovascular diseases and cancers, involve complex gene x environment interactions. Yet despite the general agreement in the medical community given to this concept, there is a widespread lack of clarity as to what the term 'interaction' actually means. The consequence is the use of linear statistical methods to describe processes that are biologically nonlinear, resulting in clinical applications that are often not optimal. Gene x environment interactions are characterized by dynamic, nonlinear molecular networks that change and evolve over time, and by emergent properties that cannot be deduced from the characteristics of their individual subcomponents. Given the nature of these systemic properties, reductionist methods are insufficient for fully providing the information relevant to improving therapeutic outcomes. The purpose of this article is to provide an overview of these concepts and their relevance to prevention and interventions.

Introduction

The accumulating evidence that gene x environment interactions play a major role in the chronic diseases accounting for high rates of worldwide morbidity and mortality (e.g., cardiovascular diseases and cancers) has led to increasing interest in studying mechanisms of interaction related to treatment and prevention. However, despite the lip service given to the concept that the environment interacts with genotype, the actual definition of 'interaction' is still medically and scientifically fuzzy. Not only can it have different connotations depending on whether it is viewed from a statistical or biological perspective, but the biological mechanisms are still widely misunderstood. The purpose of this paper is to clarify some of the concepts related to gene x environment interactions in chronic diseases, especially concepts related to the little discussed but immensely important topic of nonlinear dynamics in chronic disease etiology and progression.
Statistical approaches to gene x environment interactions and "heredity"

Statistically, the term interaction means that the independent variables in an equation (for example, smoking and gender) do not have a linear additive effect on the outcome variable. For instance, smoking might be a risk factor for an outcome in women but not in men. This type of interaction was found in a study of the angiotensinogen genotype AGT M235T, which had been reported in multiple studies to have a risk polymorphism associated with hypertension. In that study, no significant association (main effect) was found in either men or women between this genotype and heart rate (HR), an endophenotype significantly associated with hypertension [1]. When significance is found in some GWAS studies but not in others, it is referred to as "failure to replicate". However, it should not be assumed that this means 'no genetic contribution'. The analyses in this particular study also included anxiety, because it too had been reported in a number of studies to be associated with hypertension. But analyses of anxiety and HR also showed no main effect in men, whereas in women the association was the inverse of what would have been expected, i.e., lower HR with high anxiety. (This type of result can often indicate confounding.) When interaction terms were included in the equation, there was a gene x environment x gender interaction such that men with the TT (hypothesized risk) genotype had significantly higher HR — but only if they also had high anxiety — than did low-anxiety men with the same genotype or high-anxiety men with the MM genotype. There was no such interaction in women. Thus, the hypothesized risk genotype did increase risk in men, but only in those with high anxiety, and showed no interaction in women [1]. This is an example of a 'failure to replicate' genetic study in the analysis of complex diseases that does not reflect a lack of genetic risk, but results from a failure to include interaction terms that would expose context vulnerability. Since the inclusion of interaction terms in GWAS studies is not common, one wonders whether lack of context might be responsible for the large number of failure-to-replicate GWAS studies [2,3] in a broad range of areas. In the type of complex diseases discussed here, influence from environmental factors can change the expression of a gene, overriding the effect of a specific genetic polymorphism. The mechanisms through which these gene x environment interactions occur are explained further in the section on biological interactions.
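For readers who want to fit this kind of model themselves, full interaction terms can be specified in a few lines; the dataset file and column names below (hr, genotype, anxiety, sex) are hypothetical, and the cited study did not necessarily use this software:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort with columns: hr, genotype (MM/MT/TT), anxiety, sex.
df = pd.read_csv("agt_cohort.csv")

# '*' expands to all main effects plus every two- and three-way interaction,
# so the genotype x anxiety x sex term is estimated alongside main effects.
model = smf.ols("hr ~ C(genotype) * anxiety * C(sex)", data=df).fit()
print(model.summary())
```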
The use of inappropriate statistical methods can happen inadvertently when there is a general lack of knowledge concerning the biology of gene x environment interactions. If one does not know that the biology is nonlinear and non-additive, there is no reason not to use linear, additive statistical methods to define "heredity", and these methods of modeling heredity were common before the sequencing of the human genome. Unfortunately, they are still in use today. The problem arises because although genotype remains constant throughout the lifespan, environmental and lifestyle factors change, and these factors can have major influences on the extent to which many genes are expressed. Thus, an oncogene that is usually in the "off" position can be turned "on", and a tumor suppressor gene that is "on" can be turned off, by a change in environmental exposures, in which case environment becomes dominant over genotype [4]. So although genotype does not change, the genetic contribution to phenotype is not always constant, because expression can be inhibited or enhanced by changes in the surrounding microenvironment.

The statistical modeling of "heredity", which is based on phenotypic similarity between monozygotic (MZ) and dizygotic (DZ) twins, is still utilized by some researchers despite the availability of more accurate molecular genetic techniques. It assumes that genotype explains a certain "fixed" percentage of the population variance related to disease prevalence and that environmental factors are added to that percentage to make up the difference (so that they sum to 100). These models have traditionally measured neither genotype nor environment, only phenotypic differences between MZ and DZ twins in the outcome variable. A simplified version of the concept can be seen below [5]:

Twin A phenotype = Twin B phenotype + Environ + (Twin B phenotype × Environ)

In practical use, the models can be quite a bit more complex, parsing genetic and environmental variance into many more subcomponents (e.g., shared and "non-shared" environment, additive genetic variance, dominant genetic variance, etc.). But the principle of the calculations does not change: all permutations involve linear, additive models.
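A toy simulation (not from the cited literature) makes the point concrete: when the true phenotype contains a multiplicative G x E term, a purely additive fit leaves that variance unexplained and so misstates the genetic and environmental contributions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
g = rng.normal(size=n)                    # latent genetic value
e = rng.normal(size=n)                    # environmental value
p = g + e + 1.5 * g * e                   # true phenotype includes G x E

X = np.column_stack([np.ones(n), g, e])   # additive-only design matrix
beta, *_ = np.linalg.lstsq(X, p, rcond=None)
r2 = 1 - np.sum((p - X @ beta) ** 2) / np.sum((p - p.mean()) ** 2)
print(f"additive R^2 = {r2:.2f}")         # the G x E variance is unexplained
```

With these settings the additive model recovers only about half of the phenotypic variance; the missing half is exactly the interaction term that an additive decomposition cannot represent.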
Like all models, this one has underlying assumptions, and the problem arises when the assumptions are not met. These additive models do not take into account the nonlinear complexity inherent in the biology of gene x environment interactions [6]; i.e., they do not account for the dynamically changing, nonlinear epigenetic influences on gene expression, nor for the fact that gene expression is a combination of both intracellular and extracellular factors. They also assume that there are no differences in the prenatal environments of MZ and DZ twins. In cases where MZ twins share the same amniotic sac (e.g., are monochorionic), this assumption is not met, because the twins are essentially competing for the same nutrients. Differences in nutritional status between MZ twins can in some cases be more important than genetic differences between MZ and DZ twins with respect to subsequent disease phenotypes. This occurs when there are differences in fetal growth and nutrition resulting in divergences in birthweight that can lead to lifetime differences in risk for CVD and other illnesses [7]. Linear additive models have also traditionally assumed no differences in shared family environments (e.g., rearing practices between siblings), an assumption which has subsequently been shown to be inaccurate [8]. Furthermore, studies of personality measuring behavioral characteristics on rank-order scales treat them statistically as if they were interval scales, i.e., as if there were an equal distance between the quantitative units, of the kind found between millimeters of mercury in blood pressure measurements. This is not the case. There is no evidence whatsoever that a unit of difference between 14 and 15 on the CES depression scale is the same as that between 19 and 20 on that scale. This is a very real problem, since virtually all of the twin research on personality is based on these types of rank-order scales but has been incorrectly interpreted as measuring quantitative trait differences [9][10][11]. It is not surprising that studies analyzing actual genomic data related to cognitive and personality traits have often reported much lower genetic associations than the "heredity" implied by these models [12,13]. The numbers add up but, as we shall see below, the biology does not. There is a strong need for a more nuanced understanding in the scientific community of precisely how genes and environment interact biologically, so that more appropriate methodology can be utilized.

The biological meaning of interaction

A primary characteristic of most chronic diseases is complexity, which refers to the fact that etiology and progression are usually multifactorial (involving a combination of genetic, lifestyle, environmental and biological factors), with the individual contributors interacting in a way that makes the whole (i.e., the phenotype) more than the sum of its parts. This signifies that the phenotype has functions and characteristics not found in any of its individual subcomponents. Properties that are not reducible to those of their constituent parts are referred to as "emergent," meaning that they cannot be adequately understood with reductionist methods. This is especially important with respect to genetics and gene expression. The concept of emergence in gene x environment interactions also cannot be adequately depicted statistically by using additive linear models.
The reason that the dynamically fluctuating microenvironment is so important is the primacy of its role in gene expression. Genes are essentially passive biochemical codes, like blueprints, for making amino acids and proteins. Like a cookbook recipe, they cannot "read" themselves or "bake" the chocolate cake. Genes are "read" (transcribed) and translated into proteins by factors in the surrounding microenvironment. This means that gene function is to a large extent dependent on environmental cues. When DNA is not active, it is wrapped tightly around histone proteins like thread around a spool, which consolidates it for storage in the nucleus but also protects it from being activated at inappropriate times by circulating transcription factors. In order for a gene to become activated, factors in the microenvironment must initiate the unwinding of the DNA strand from the histones so that it is accessible for activation and transcription. Thus, biologically, interaction between genes and the surrounding microenvironment begins at the most fundamental molecular level. Additive statistical models simply do not reflect this reality. Furthermore, many genes have multiple functions, and the role they play at any particular point in time depends on the stimuli they encounter from the surrounding microenvironment. For instance, a P53 tumor suppressor gene can function variously in repair of DNA damage, cell-cycle arrest, differentiation, apoptosis and cellular senescence [14,15]. Macrophages, phagocytic cells in the immune system with distinct biological functions that can either fight or promote tumor development, are also strongly influenced by the microenvironment [16]. The function a macrophage assumes depends on the cues it receives from the surrounding environment, e.g., from other genes, intercellular communications, and a myriad of extracellular factors such as hormones, enzymes, etc. Therefore, the input signals are extremely important (whether they inhibit or activate gene expression), but so is their timing. The order in which they arrive is very important for phenotypic outcome [17]. Indeed, it has long been known in the field of molecular biology that "one gene, one function" is an outmoded concept. There are simply not enough genes to perform all the roles required to maintain a healthy system. So the assumption that phenotype can be understood as a simple process of translating the genetic code into proteins does not reflect the dynamic, ongoing interactions inherent in the complexity of biological systems. Understanding the dynamic nature of these systems is important from the standpoint of designing research. A cell does not do the same thing in an intact animal that it does in a petri dish, because the surrounding environment is completely different and elicits different responses. Nor does a genetically modified animal respond in the same way to environmental toxins or medication as one that is intact. Physiological systems in an organism are inextricably interrelated, so the removal of one gene unavoidably affects the function of more than one system.
It has been suggested that a better way to understand phenotype would be to conceptualize it as the result of processes in a reactive system [17]. That means that the relative contribution of genes and environment varies within the same person at different points in time. According to this conceptualization, a given subsystem in the body is receiving and reacting to multiple inputs simultaneously. The emergent characteristics of the resulting complex systems are responsible for a protein's ability to assume properties and functions that are not inherent in its individual amino acids. This characteristic extends to cells and the formation of tissues and organs. It also seems to apply to "life" itself. It has been demonstrated that the entire DNA can be removed from a bacterial cell and synthetic DNA inserted to reprogram it into a different type of bacterium [18]. However, the synthetic DNA does not have the property of "life" but requires a living cell to start reproducing the different type of bacterium. Thus, the synthetic DNA is inserted into a cell without DNA that is, nevertheless, "living" (the DNA was removed to make room for the synthetic DNA). In this case, the DNA is the software that reprograms the cell, but it does not constitute the cell's "life", since removing it does not remove life. This illustrates that emergent properties cannot be understood by examining the characteristics of individual components (e.g., genes). It also implies that phenotypic "causality" is not unidirectional (e.g., starting at the molecular level and moving to the organ or systemic levels) but bidirectional: the microenvironment can influence gene expression, just as the gene can influence its microenvironment. With the exception of relatively rare monogenic diseases, in most illnesses — especially chronic diseases such as cardiovascular disease and cancers — phenotype evolves from a constant interaction between genes and the environment. With respect to carcinogenesis, many lifestyle and environmental factors influence gene expression by epigenetically activating or repressing oncogenes and tumor suppressor genes. In cases where tumor suppressor genes are inhibited or quiescent oncogenes activated, the microenvironment is dominant over genotype. Reductionism is inadequate for understanding this type of complexity.

The clinical importance of these interactions is that although knock-out animal models (e.g., those with reduced immune function) help identify the function of specific genes or signaling pathways, they are of less use as clinical models, because their physiological systems do not reflect the response of an intact animal (or human).
Dynamic equilibrium

The human body is a dissipative thermodynamic system, i.e., an open system that exchanges energy and matter with the environment. Thus, it often functions far from thermodynamic equilibrium. Years of research in the field of nonlinear dynamics have taught us that everything in the body is in a state of continuous flux whose function is to maintain robustness and adaptability in the face of shifting needs and input signals from multiple sources. Organelles, cells, tissues and organs are systems unto themselves but also components of larger systems that interact with each other at many levels [17]. This complexity provides the flexibility for timely response to the constant but temporally varying demands made on different cells and organs throughout the day. In healthy systems, these moment-to-moment variations serve to adjust and fine-tune responses as local needs change. Just as evolution has been a balancing act between robustness and adaptability, the human body, in order to maintain health, must be flexible enough to adapt to new circumstances (e.g., changes in diet, increased physical exertion) while remaining robust enough to resist random minor perturbations, such as exposure to bacteria or mistakes in gene transcription, that might otherwise cause ill health. Examples of physiological feedback loops that serve the purpose of health maintenance include immune mechanisms that fight invading bacteria or renegade cancer cells, mechanisms that repair or destroy DNA mutations, and renal processes that lower blood pressure. The interactions within and between these health maintenance systems are highly complex, and they are also characterized by redundancy. If one system becomes overloaded, there are multiple back-up systems in place that kick in so as to restore equilibrium. Thus, "stasis" exists only in death, and the term "dynamic equilibrium" more accurately describes systemic physiological interactions than the term "homeostasis."

Genes and networks

Genes are essentially biochemical codes (recipes) for making amino acids and proteins. They do not initiate action but are instead activated or repressed by other cellular factors. Like recipes in a cookbook, they are passive; i.e., they remain quiescent until switched on or off by transcription factors interacting with cis-regulatory mechanisms. They cannot "read" their own code (transcribe) nor "bake the chocolate cake" (translate the code into amino acids and proteins) without wide-ranging help from the surrounding microenvironment, including cis-regulatory networks, RNA-binding proteins, RNA polymerase, ribosomes, microRNAs, transcriptional co-factors, and numerous other transcription factors [19].
Mechanisms that contribute to dynamic equilibrium by supporting redundancy at the genetic level include multiple enhancers and transcription factors for single genes that can bind to the same cis-regulatory element [19], creating a back-up for transcription failure. Thus, feedback loops exist not only within and between tissues and organs of larger functional systems (e.g., the sympathetic nervous system) but also within and between cells, genes, and protein networks. Like other systems in the body, gene regulation is not linear but involves complex interactions at multiple levels of molecular and systemic functioning. Not only can genes belong to multiple networks involving different functions, but genes and transcription factors are regularly modified by other microenvironmental factors (e.g., enzymes, hormones, immune factors, the basement membrane), which are in turn influenced by exchanges between the individual and the outside environment. What contributes to the nonlinear dynamics of these systems is the scale-free nature of their networks.

Scale-free networks are not random. They reflect a self-organizing capacity of organic life that, regardless of system type, creates connectivity between vertices (nodes) with a distribution that follows a power-law function [20]. These networks are characterized by a few nodes that are connected to many others, forming hubs, while the majority of nodes have very few connections. This network structure supports robustness to perturbation (knocking out one gene or protein seldom knocks out an entire system) while allowing the network to retain its adaptability. Part of the adaptability comes from the propensity of networks to expand in a non-random, preferential manner, with new vertices attaching to already well-connected nodes [20]. The dynamic nature of these networks results in the emergence of new characteristics or functions needed by the system, contributing to both robustness and adaptability.

However, when network expansion continues long enough, it can reach a threshold where a giant connected component emerges, with distances so close that perturbation of a single gene or protein can actually propagate through the entire network, having multiple, unrelated effects. This results in an increase in flexibility and adaptability by allowing a gene to belong to multiple networks. A simplistic system that required separate proteins for every single function would be unwieldy and dysfunctional with respect to robustness and adaptability. The flexible structure of molecular networks facilitates pleiotropy, a common characteristic of protein interaction networks [21]. The P53 tumor suppressor gene mentioned earlier is a typical illustration: expanded network membership allows it to respond to a broad range of signals and fulfill multiple functions [22,23]. To summarize, new knowledge of the interconnected nature of gene and cellular networks has led to a reassessment of gene function as an attribute that should be shifted from the individual gene to the network in which the gene participates [21].
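The preferential-attachment mechanism described above is easy to demonstrate; the following illustration (not from the paper) grows a Barabási-Albert network and shows the hub-dominated, heavy-tailed degree structure characteristic of scale-free networks:

```python
import networkx as nx
import numpy as np

# Growth with preferential attachment: each new node attaches to m existing
# nodes, preferring those that are already well connected.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)
degrees = np.array([d for _, d in G.degree()])

print("median degree:", int(np.median(degrees)))  # most nodes: few links
print("max degree   :", degrees.max())            # a few hubs: many links
```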
Conceptually, it should now be clear that the diverse environmental exposures (lifestyle, environmental, biological) that have been associated with complex diseases such as cardiovascular diseases and cancer are mediated by influences at multiple levels — inflammation, gene transcription, the maintenance of mRNA stability, mRNA translation and protein stability [19] — via enzymes, hormones, and metabolic processes. How input signals affect different systems depends on both their interactions with other input signals and the timing of their arrival. Thus, they can be synergistic, repressive, additive, or multiplicative, or they can cancel each other out.

Thus, biological networks, like organ systems in the body, are characterized by complexity, meaning that gene function is often dynamic and flexible [24]. The fact that molecular networks adapt and evolve can result in long-distance spatiotemporal patterns arising from what originated as local neighbor-neighbor interactions [25], indicating the difficulty of predicting network dynamics from single nodes [26].

The edge of criticality

The nature of complex gene, protein and cellular networks contributes to another important property of healthy, dynamic systems, namely that their most efficacious functional range lies at the edge of criticality between order and chaos [26][27][28]. That means that they tend to be dynamically stable under random perturbations but can react with global state changes to targeted perturbations [27]. It is precisely at this critical juncture of minimal information loss that they provide enough order for robustness while retaining enough flexibility to adapt to the changes required for optimal responsiveness. Functioning "on the edge" explains the well-known adage from chaos theory that a small change can result in a major phase transition. This is exemplified by macrophage functionality. Depolarization of just a single mitochondrion at the edge of criticality can stimulate network collapse [25], and macrophages can undergo global gene expression changes in response to targeted perturbations, coordinating complex behavior with minimal information loss [27]. Furthermore, because the structure of many biological networks is not static but evolves both structurally and functionally over time, stability and flexibility are balanced through self-organization to a dynamically critical state [29]. These adaptive networks can evolve through continued use (synaptic activity), lack of use, or evolutionary fitness (gene regulatory networks) [29].

Sudden changes are termed "phase transitions" or bifurcations and can occur when systemic load progressively increases, calling on more and more systems to be on the alert for maintenance and restoration of equilibrium. This tips the balance toward disequilibrium and disorder. Systems that are required to fulfill more than one function (their regular function plus back-up) for more than a short period of time begin to redirect some of their energy away from efficiency in order to take on a higher quantity of work. One result is that disposal of waste products becomes less efficient and disorder increases. The more the burden increases, the farther out of balance the system becomes. The term used to characterize systemic burden and "wear and tear" on the body is "allostatic load" [30][31][32][33][34].
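As a purely mathematical illustration of such a bifurcation (the logistic map, not a biological model from this paper), a small change in a single control parameter moves the system from a stable periodic attractor into chaos:

```python
import numpy as np

def orbit(r, x0=0.5, burn=500, keep=8):
    """Iterate the logistic map x -> r*x*(1-x), discarding a burn-in."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    return [x := r * x * (1 - x) for _ in range(keep)]

print("r=3.4 :", np.round(orbit(3.4), 3))  # settles on a period-2 attractor
print("r=3.9 :", np.round(orbit(3.9), 3))  # aperiodic (chaotic) trajectory
```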
When system overload goes from acute to chronic, it can tip the dynamics away from a healthy attractor toward dysfunctionality. The functional state of a system can only be maintained up to a certain threshold of burden (disorder) before it collapses, resulting in a phase transition that can be likened to the straw that broke the camel's back. It bifurcates from order into chaos, where new dynamical attractors emerge that may support disease rather than health.

Clinical relevance

The importance of these dynamics is their direct relevance to clinical interventions. Multifactorial etiology calls for multifactorial approaches to treatment and prevention [35]. The lack of a major gene for disease risk can easily mask genetic susceptibility that is expressed only in the presence of certain environmental conditions. Genes that contribute to the body's ability to metabolize toxins are a typical example. It has been reported that in utero exposure to pesticides is associated with increased risk of leukemia in simple exposure analyses [36], but in the presence of specific polymorphisms of the CYP1A1, CYP2D6, GSTT1 and GSTM1 genes, the risk increases 7-fold [37]. This has major health policy implications for prevention. The GSTT1 null genotype and the GSTP1 ValVal genotype have also been reported to increase vulnerability to prostate cancer [38]; however, the results are inconsistent [39]. The fact that environmentally induced epigenetic changes can determine how a gene is expressed explains some of the inconsistency in studies focused solely on genetics and clarifies the importance of understanding gene x environment interactions. Just as policy measures have been implemented to reduce and prevent smoking, more focus needs to be placed on environmental factors that increase epigenetic risk for chronic diseases. Knowledge of genotype alone is not sufficient for assessing risk for complex diseases. The glutathione S-transferase (GST) family of genes is involved in regulating the metabolism of a wide range of chemicals, including carcinogens [40]. But previous research has also reported that high intake of cruciferous vegetables combined with the GSTM1 genotype contributes to a reduction of prostate cancer risk [39]. Thus, multi-level interventions involving health policy at the state and community level (e.g., regulation of pollution sources), individual behavioral factors (e.g., diet, physical activity, smoking and alcohol consumption), and family-level factors that can influence health behaviors are all important for addressing the complex gene x environment interactions underlying complex etiology [35]. System dynamics (network structure), as well as the type and sequence of signal inputs from the surrounding microenvironment, are important multifactorial contributors to clinical phenotype. They also illustrate why traditional pharmaceutical approaches that tend to target single mechanisms have so often failed to achieve permanent improvement [41][42][43][44].
Because complex diseases such as cardiovascular diseases and cancers progress slowly and involve innumerable gene x environment interactions, clinical care would be greatly facilitated by more detailed knowledge of systemic risk indicators. Just as timing is crucial to the interactions between microenvironmental inputs and molecular network dynamics, it is also crucial to intervention strategy. Because genes can belong to multiple networks and change their functional expression under varying conditions, a step towards improved clinical care would be to develop assessment measures that reflect not only individual risk factors but also the risk of emergent systemic dysfunction. If clinicians could be alerted to increasing systemic burden early in the subclinical disease process, heightened preventive measures and/or perturbations that reboot the system back towards the healthy attractor might facilitate reversal, avoiding an increase in disorder and bifurcation into chaos. On the other hand, in a state of advanced disease progression with unhealthy attractors present, care would be helped by additional clinical information about the network dynamics driving the dysfunction.

Conclusion

Globally, cardiovascular diseases and cancers account for 43.3% of mortality in women and 40.2% of mortality in men [45]. Unlike monogenic diseases such as Huntington's, where genotype determines phenotype, the biology of these diseases is highly complex and nonlinear. The next step in improving outcomes necessitates a more nuanced understanding of the systemic and molecular nature of the interactions between genes, lifestyle, environmental factors and individual biology. It will also require the development of more accurate, nonlinear methods of analysis to identify the timing and contribution of individual components, as well as the emergent network phenotypes.
6,099.8
2015-12-21T00:00:00.000
[ "Biology" ]
Lack of miRNA-17 family mediates high glucose-induced PAR-1 upregulation in glomerular mesangial cells

Upregulation of the thrombin receptor protease-activated receptor 1 (PAR-1) is known to contribute to chronic kidney diseases, including diabetic nephropathy; however, the mechanisms are still unclear. In this study, we investigated the effect of PAR-1 on high glucose-induced proliferation of human glomerular mesangial cells (HMCs) and explored the mechanism of PAR-1 upregulation through alterations in microRNAs. We found that high glucose stimulated proliferation of the mesangial cells, whereas PAR-1 inhibition with vorapaxar attenuated the cell proliferation. Moreover, high glucose upregulated PAR-1 at the mRNA and protein levels while not affecting the enzymatic activity of thrombin in HMCs after 48 h of culture; the high glucose-induced PAR-1 elevation was therefore likely due to alterations in transcription or post-transcriptional processing. Among the eight microRNAs examined, the miR-17 family members miR-17-5p, -20a-5p, and -93-5p were significantly decreased only in high glucose-cultured HMCs, whereas miR-129-5p, miR-181a-5p, and miR-181b-5p were markedly downregulated in both high glucose-cultured HMCs and the equivalent osmotic pressure control compared with normal glucose culture. miR-20a-5p was therefore selected to confirm the role of the miR-17 family in PAR-1 upregulation, and it was found that miR-20a-5p overexpression reversed the high glucose-induced upregulation of PAR-1 at the mRNA and protein levels in HMCs. In summary, our findings indicate that PAR-1 upregulation mediated the proliferation of glomerular mesangial cells induced by high glucose, and that deficiency of the miR-17 family resulted in PAR-1 upregulation.

Introduction

Diabetic nephropathy (DN) is one of the most common diabetic microvascular complications, characterized by continuous proteinuria, glomerular mesangial expansion, abnormal accumulation of extracellular matrix, and thickening of the glomerular basement membrane, eventually developing into glomerular sclerosis (Chen et al. 2019). Moreover, proliferation of the glomerular mesangial cells occurs at the early stage of DN and is the main cause of increased synthesis and deposition of the extracellular matrix in glomeruli (Xu et al. 2020; Zhao et al. 2021). Among the influencing factors, chronic inflammation plays a very important role in the early pathological process of DN (Zhu et al. 2018; Tang et al. 2020).

Activation of thrombin receptor protease-activated receptor 1 (PAR-1) signaling has been shown to contribute to chronic kidney diseases, including DN. An early report showed that the mRNA level of PAR-1, rather than PAR-4, is upregulated in the isolated glomeruli of diabetic db/db mice, and that PAR-1 plays a role in the progression of glomerulosclerosis in DN (Sakai et al. 2009). Moreover, recent studies further demonstrated that PAR-1 deficiency or inhibition protects against DN in streptozotocin-induced diabetic mice (Waasdorp et al. 2016, 2018). On the other hand, PAR-1 contributes to other chronic kidney diseases, as evidenced by glomerular injury induced by doxorubicin in mice (Guan et al. 2017), renal injury and interstitial fibrosis during chronic obstructive nephropathy (Waasdorp et al. 2019), and microcirculation failure and tubular damage in renal ischemia-reperfusion injury in mice (Guan et al. 2021), indicating that PAR-1 antagonism has therapeutic potential.
Importantly, our recent study further demonstrated that PAR-1 upregulation participates in the early pathological process of DN, and that the NLRP3 inflammasome and NF-κB signaling mediate the pro-inflammatory effects of PAR-1. Furthermore, our study indicated that PAR-1 upregulation was not associated with thrombin activity in either the renal cortex of diabetic rats or high glucose-cultured glomerular mesangial cells, and discussed possible causes of PAR-1 upregulation involving other proteases. Thus, how high glucose induces PAR-1 upregulation in the renal cortex is still unknown.

MicroRNAs (miRs), an evolutionarily well-conserved class of small non-coding RNAs, negatively regulate gene expression at the post-transcriptional level in a sequence-specific manner. Reports show that PAR-1 is negatively regulated by several microRNAs, such as miR-190a (Chu et al. 2014) and the miR-17 precursor family including miR-20a/b, -93, and -106a/b (Saleiban et al. 2014). Moreover, microRNAs are involved in the pathogenesis of DN (Fiorentino et al. 2013; Kolling et al. 2017), and alterations of microRNAs result in proliferation of glomerular mesangial cells and matrix expansion (Bera et al. 2017; Wang et al. 2018). However, whether microRNAs take part in the upregulation of PAR-1 caused by high glucose in glomerular mesangial cells has not been reported.

In this study, we aimed, firstly, to explore whether the upregulation of PAR-1 induced by high glucose in glomerular mesangial cells was related to a deficiency of particular microRNAs; secondly, to identify which microRNAs potentially take part in the regulation of PAR-1 in glomerular mesangial cells; and finally, to confirm the relationship between PAR-1 and the target microRNA in glomerular mesangial cells under both normal and high glucose conditions.

Cell culture and treatments

The human mesangial cell (HMC) line was kindly provided by Dr. Wei in our laboratory and purchased from FuHeng Biology (Cat. #FH0241), Shanghai, China. The cells were cultured in DMEM containing 10% FBS. After incubation for 24 h under normal conditions (medium containing 5.6 mmol/L glucose, 10% FBS, 100 U/mL penicillin and 100 μg/mL streptomycin; 5% CO2; 37 °C) and cell cycle synchronization for 12 h, HMCs were divided into the following groups: normal glucose group (NG, 5.6 mmol/L glucose), high glucose group (HG, 30 mmol/L glucose), and PAR-1 antagonist group (HG + Vor, 30 mmol/L glucose plus 0.1 μmol/L vorapaxar). Vorapaxar was dissolved in dimethyl sulfoxide and made into a stock solution for use. In addition, an osmotic pressure control (NG + MA, 5.6 mmol/L glucose plus 24.4 mmol/L mannitol) was included when the selected microRNAs were determined; the additional 24.4 mmol/L mannitol was used to mimic the osmotic pressure produced by 30 mmol/L glucose. After treatment for 48 h, the cells were harvested for analysis. The culture time was selected according to the changes of PAR-1 protein in HMCs cultured with 30 mmol/L glucose for 24 h, 48 h, and 72 h, respectively, in our previous report.

MiR-20a-5p overexpression

Lentivirus carrying an hsa-miR-20a-5p mimic was transfected into HMCs, establishing a stable cell line with miR-20a-5p overexpression. After 72 h of screening with puromycin, samples of the stable cell strains were collected. The cells were divided into two categories, miRNA mimic and mimic control, and the overexpression efficiency of miR-20a-5p was confirmed. After different treatments for 48 h, PAR-1 mRNA and protein levels were detected.
CCK-8 assay for cell proliferation

Cell proliferation of HMCs was analyzed using the CCK-8 method. Briefly, cell suspension (100 μl/well, 1.0 × 10^6/ml) was pre-incubated in a 96-well plate for 24-48 h at 37 °C in a humidified atmosphere of 5% CO2. After the cells were incubated under the different group conditions for 24 h, 10 μl of CCK-8 solution was added to each well and incubated for 2 h in the incubator. After 10 μl of 1% (w/v) SDS was added to each well in the dark at room temperature, the absorbance was determined at 450 nm using a microplate reader. The net absorbance of the normal glucose group was considered 100% cell proliferation viability.

Determination of thrombin activity

Thrombin activity was assessed using a fluorometric assay based on the cleavage rate of the synthetic thrombin substrate Boc-Asp(OBzl)-Pro-Arg-AMC, performed according to our previous report. The reaction was triggered by adding 60 μg of protein at 37 °C for 50 min, and the signal was immediately measured by fluorescence spectroscopy at an excitation wavelength of 360 nm and an emission wavelength of 465 nm. The net signal from cells cultured with normal glucose was considered 100% thrombin activity.

Real-time qPCR assay

Total RNA was isolated using TRIzol reagent, and the RNA was reverse-transcribed into single-stranded cDNA using different primers corresponding to mRNA or microRNA. The Roche LightCycler® 480 system with SYBR Green dye binding to the PCR product was used to quantify target mRNA or miRNA accumulation by fluorescence PCR, using human β-actin or U6 as the reference. The human primers of the associated genes used for quantitative PCR in this study are listed in Table 1. For the amplification reaction in each well, a Cp value was observed in the exponential phase of amplification, and quantification of relative expression levels was achieved using standard curves for both target and endogenous control samples. Relative transcript abundance of a gene is expressed as 2^−ΔΔCp (ΔCp = Cp_target − Cp_reference; ΔΔCp = ΔCp_treatment − ΔCp_NG or ΔCp_NC).

Western blot assay

Cells were lysed in RIPA buffer with 1 mmol/L PMSF and 1 mmol/L phosphatase inhibitor cocktail at 4 °C for 30 min, followed by 12,000 × g centrifugation at 4 °C for 15 min to obtain the supernatant. The BCA protein assay was performed to determine the protein concentration according to the manufacturer's instructions. The protein samples were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred to a polyvinylidene fluoride membrane. The membrane was blocked with 2% milk powder solution for 60 min and incubated overnight at 4 °C with primary antibodies, including rabbit anti-PAR-1 antibody. The proteins were detected using IRDye 800CW goat anti-rabbit IgG (H + L) secondary antibody, and an infrared imaging system was applied to detect immunoreactive blots. Signal densities on the blots were measured with ImageJ software and normalized to rabbit anti-GAPDH antibody as an internal control.

Statistical analysis

All statistical analysis was done with GraphPad Prism 7.0 software. All data showed a normal distribution. Differences between groups were assessed using the unpaired t-test or one-way analysis of variance followed by Tukey's multiple comparisons test. Data in the different experimental groups are expressed as the mean ± SD. P < 0.05 was considered statistically significant.
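The 2^(−ΔΔCp) relative-quantification step lends itself to a short worked example. The Python sketch below uses entirely hypothetical Cp values (not data from this study) to show how a fold change is obtained from the target and reference Cp values of a treatment group and its control.

```python
# Minimal sketch of the 2^(-ΔΔCp) relative-quantification step described
# above. All Cp values below are hypothetical placeholders.
def relative_expression(cp_target, cp_ref, cp_target_ctrl, cp_ref_ctrl):
    """Fold change of a target gene vs. its control via 2^(-ΔΔCp)."""
    delta_cp_treat = cp_target - cp_ref            # ΔCp(treatment)
    delta_cp_ctrl = cp_target_ctrl - cp_ref_ctrl   # ΔCp(NG or NC)
    return 2.0 ** (-(delta_cp_treat - delta_cp_ctrl))

# Example: PAR-1 vs. β-actin in HG-treated cells relative to the NG control
fold = relative_expression(cp_target=24.1, cp_ref=17.0,
                           cp_target_ctrl=25.3, cp_ref_ctrl=17.1)
print(f"fold change: {fold:.2f}")   # > 1 means upregulation vs. control
```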
HG induced mesangial cell proliferation and PAR-1 upregulation in HMCs

HG significantly induced cell proliferation in HMCs after 48 h of culture compared with the NG culture (p < 0.01, Fig. 1a), while co-treatment with vorapaxar, a selective inhibitor of PAR-1, inhibited the cell proliferation induced by HG (p < 0.05, Fig. 1a). Moreover, HG markedly increased the protein expression of PAR-1 in HMCs (p < 0.01, Fig. 1b). These data indicated that PAR-1 upregulation mediated the mesangial cell proliferation stimulated by the HG condition.

Effects of HG on thrombin/PAR-1 signaling in HMCs

To seek the reason for the increased protein expression of PAR-1 induced by HG in HMCs, we examined the alteration of thrombin/PAR-1 signaling. HG did not change the enzymatic activity of thrombin (Fig. 2a) but elevated the mRNA level of PAR-1 (p < 0.01, Fig. 2b) in HMCs, suggesting that the PAR-1 upregulation induced by HG was likely due to alterations at the transcriptional or post-transcriptional level.

Overexpression of miR-20a-5p in HMCs

To further verify whether loss of the miR-17 family indeed led to the upregulation of PAR-1 caused by HG, the family member miR-20a-5p was selected for further study, and HMCs overexpressing miR-20a-5p were established. Figure 5a shows that both LV-hsa-miR-20a and the mimic control efficiently infected the cells. Q-PCR data indicated that miR-20a-5p was dramatically increased in the LV-hsa-miR-20a group compared with the mimic control group (p < 0.01, Fig. 5b), indicating that HMCs overexpressing miR-20a-5p were successfully obtained.

Discussion

Glomerular mesangial cells are a class of important renal inherent cells, and proliferation of mesangial cells is one of the early pathological manifestations of DN. In the present study, we first found that chronic high glucose stimulated cell proliferation of HMCs, a human glomerular mesangial cell line, and affected thrombin/PAR-1 signaling, namely unchanged thrombin activity but upregulation of the thrombin receptor PAR-1. Moreover, PAR-1 inhibition attenuated cell proliferation in high glucose-cultured HMCs. Second, we predicted potential microRNAs that could bind to the 3′-UTR of F2R and examined 8 of 14 candidate microRNAs, finding that the miR-17 family members miR-17-5p, -20a-5p, and -93-5p were markedly decreased due to high glucose rather than osmotic pressure in high glucose-cultured HMCs. Finally, we confirmed that miR-20a-5p overexpression reversed the upregulation of PAR-1 induced by high glucose in HMCs. Our findings partially clarify the role of the miR-17 family in PAR-1 upregulation in DN.

Fig. 3 Effects of high glucose on the chosen microRNAs in HMC cells. Decreased microRNAs due to high glucose stimulation in HMC cells (a); decreased microRNAs due to the effect of osmotic pressure in HMC cells (b). MicroRNA levels were assayed by stem-loop RT-qPCR. NG, HG, and NG + MA represent 5.6 mmol/L glucose, 30 mmol/L glucose, and 5.6 mmol/L glucose plus 24.4 mmol/L mannitol (MA), an osmotic pressure control, respectively. Mean ± SD, n = 4 independent experiments. * p < 0.05, ** p < 0.01, vs. NG

PAR-1 upregulation contributes to DN. In 2009, Sakai et al. report that PAR-1 promotes mesangial expansion and abnormal urinary albumin excretion in DN in mice (Sakai et al. 2009). Recently, Waasdorp et al. find that PAR-1-deficient mice develop less kidney damage after induction of
diabetes, as evidenced by diminished proteinuria, plasma cystatin C levels, expansion of the mesangial area, and tubular atrophy, and that PAR-1 signaling in mesangial cells leads to increased proliferation and expression of matrix proteins, indicating that PAR-1 may be an attractive therapeutic target to pursue in DN (Waasdorp et al. 2016).

Fig. 4 Sequence alignment information for the seed region of three members of the miR-17 family as well as miR-129-5p and miR-181a-5p with the 3′-UTR of F2R (the gene symbol of PAR-1), obtained using the miRDB bioinformatics platform

Fig. 5 Confirmation of miR-20a-5p overexpression in HMC cells. Typical pictures of lentivirus (LV) infection using the LV-hsa-miR-20a-5p mimic and mimic control (a); miR-20a-5p level in the LV-hsa-miR-20a-5p mimic and mimic control groups (b). Mean ± SD, n = 3 independent experiments. ** p < 0.01, vs. mimic control. Scale bar: 100 µm

Furthermore, Waasdorp et al. verify that streptozotocin-induced diabetic mice treated with vorapaxar, a selective inhibitor of PAR-1, do not show significant albuminuria, mesangial expansion, or glomerular fibronectin deposition, suggesting that PAR-1 inhibition prevents the development of DN in this preclinical animal model of type 1 diabetes (Waasdorp et al. 2018). Moreover, our recent study demonstrated that the natural product sarsasapogenin alleviated DN in rats by downregulating PAR-1 signaling. Additionally, dual blockade of PAR-1 and PAR-2 additively ameliorates DN in male type 1 diabetic Akita mice (Mitsui et al. 2020). Taken together, PAR-1 plays an important role in the development of DN.

Inflammation mediates the effects of PAR-1 in the pathological process of DN. An early report indicates that activation of PAR-1 amplifies crescentic glomerulonephritis and augments inflammatory renal injury (Cunningham et al. 2000). Recently, PAR-1 antagonism was shown to ameliorate kidney injury and tubulointerstitial fibrosis by inhibiting ERK1/2 and transforming growth factor-β-mediated Smad signaling and by suppressing oxidative stress, pro-inflammatory cytokine overexpression, and macrophage infiltration into the kidney (Lok et al. 2020). Moreover, edoxaban, an inhibitor of activated factor X, inhibits renal tubulointerstitial injury by attenuating PAR-1-mediated macrophage infiltration and inflammatory molecule upregulation in unilateral ureteral obstruction mice (Horinouchi et al. 2018). Further, our recent study demonstrated that PAR-1 participated in the pathogenesis of DN through activating the NLRP3 inflammasome and NF-κB signaling. These studies demonstrate that chronic inflammation mediates the effects of PAR-1 in chronic kidney diseases, including DN.

Deficiency of microRNAs results in PAR-1 upregulation in renal inherent cells under high glucose. PAR-1 is the prototype receptor of thrombin, and PAR-1 upregulation could theoretically be due to increased thrombin activity, but the enzymatic activity of thrombin was unchanged in high glucose-stimulated mesangial cells. A microRNA pathway is another mechanism of regulating gene expression: mature microRNAs regulate the expression of their target genes by mRNA degradation or translational inhibition. MiR-20a, miR-20b, miR-17, miR-93, miR-106a, and miR-106b all belong to the miR-17 family, and Saleiban et al. report that PAR-1 is post-transcriptionally regulated by miR-20b in human melanoma cells (Saleiban et al. 2014). In our study, three members of the miR-17 family, miR-17-5p, miR-20a-5p, and miR-93-5p, were found to be significantly decreased in HG-cultured HMCs but not in the osmotic pressure control.
Although the seed regions of miR-129-5p and miR-181a/b-5p have two binding sites in the 3′-UTR of F2R, these microRNAs were decreased not only by high glucose but also by mannitol at the equivalent osmolarity (30 mmol/L in total) in HMCs. Nevertheless, this effect needs to be further confirmed. Based on these results, the relationship between miR-20a-5p and PAR-1 was further studied using HMCs overexpressing miR-20a-5p. Our results showed that miR-20a-5p overexpression reversed the PAR-1 upregulation induced by HG in HMCs. On the other hand, PAR-1 can indirectly regulate miR-17 family members by affecting NF-κB in breast cancer epithelial-mesenchymal transformation and metastasis (Zhong et al. 2017). Thus, deficiency of the miR-17 family, at least of miR-20a-5p, participates in the upregulation of PAR-1 caused by HG in HMCs.

Fig. 6 Effects of miR-20a-5p overexpression on PAR-1 mRNA level (a) and protein expression (b) in HMC cells. NC and NC + HG represent mimic control and mimic control cultured with 30 mmol/L glucose, respectively. One-way ANOVA with Tukey post-test was used for the analysis of statistical significance. Mean ± SD, n = 3 independent experiments (mRNA), n = 4 independent experiments (protein). ** p < 0.01, vs. NC; ## p < 0.01, vs. NC + HG

It is reported that miR-190a inhibits cell migration and invasiveness of breast cancer by targeting PAR-1 expression (Chu et al. 2014). Moreover, our previous results indicated that miR-190a alleviated neuronal damage by decreasing PAR-1 expression in high glucose-cultured SH-SY5Y cells, a commonly used cell line of central neurons (unpublished data). However, in the present study, the miR-190a level was not affected by high glucose in HMCs. Additionally, a recent study shows that miR-582-5p negatively regulates PAR-1 and inhibits the apoptosis of neuronal cells after cerebral ischemic stroke (Ding et al. 2019).

This study has some limitations. First, miR-17-5p and miR-93-5p were not also tested to further confirm the relationship between PAR-1 upregulation and lack of the miR-17 family in glomerular mesangial cells under high glucose conditions. Second, excluding miR-129-5p and miR-181a/b-5p solely on the basis of the osmotic pressure effects may not be sufficient. Finally, potential regulation of PAR-1 by miR-129-5p should have been investigated.

Conclusions

The present study demonstrated that PAR-1 upregulation mediated the proliferation of glomerular mesangial cells stimulated by chronic high glucose and that expression deficiency of the miR-17 family contributed to the upregulation of PAR-1 caused by high glucose. Furthermore, miR-20a-5p served as a representative member to confirm the role of the miR-17 family in the regulation of PAR-1 expression in mesangial cells. Our study partially clarifies the role of the miR-17 family in PAR-1 upregulation in the pathogenesis of chronic kidney diseases.
4,271.4
2021-09-17T00:00:00.000
[ "Biology", "Medicine" ]
Multigrid for Q_k Finite Element Matrices Using a (Block) Toeplitz Symbol Approach

In the present paper, we consider multigrid strategies for the resolution of linear systems arising from the Q_k Finite Element approximation of one- and higher-dimensional elliptic partial differential equations with Dirichlet boundary conditions, where the operator is div(−a(x)∇·), with a(x) continuous and positive over Ω, Ω being an open and bounded subset of R^d. While the analysis is performed in one dimension, the numerics are carried out also in higher dimension d ≥ 2, showing an optimal behavior in terms of the dependency on the matrix size and a substantial robustness with respect to the dimensionality d and to the polynomial degree k.

Introduction

We consider the solution of large linear systems whose coefficient matrices arise from the Q_k Lagrangian Finite Element approximation of the elliptic problem

div(−a(x)∇u) = f,  x ∈ Ω ⊆ R^d,   (1)

with Ω a bounded subset of R^d having smooth boundaries and with a being continuous and positive on Ω. Based on the spectral analysis of the related matrix-sequences and on the study of the associated spectral symbol [1,2], this paper deals with ad hoc multigrid techniques where the choice of the basic ingredients, i.e., that of the smoothing strategy and of the projectors, has a foundation in the analysis of the symbol provided in [3]. Indeed, in the systematic work [3], tensor rectangular Finite Element approximations Q_k of any degree k and of any dimensionality d are considered and the spectral analysis of the stiffness matrix-sequences {A_n} is provided in the sense of:

• spectral distribution in the Weyl sense and spectral clustering; and
• spectral localization, extremal eigenvalues, and conditioning.

We observe that the information obtained in [3] is strongly based on the notion of spectral symbol (see [1,2]) and is studied from the perspective of (block) multilevel Toeplitz operators [4,5] and (block) Generalized Locally Toeplitz sequences [6,7]. We recall that a similar analysis is carried out in [8] for the Finite Element approximations P_k for k ≥ 2 and for d = 2: the analysis for d = 1 is contained in [3] trivially, because Q_k ≡ P_k for every k ≥ 1, while, for d = 2, and even more for d ≥ 3, the situation is greatly complicated by the fact that we do not encounter a tensor structure. Nevertheless, the picture is quite similar and the obtained information in terms of the spectral symbol is sufficient for deducing a quite accurate analysis concerning the distribution and the extremal behavior of the eigenvalues of the resulting matrix-sequences. It is worth noticing that the information regarding the conditioning determines the intrinsic difficulty in the precision of solving a linear system, that is, the impact of the inherent error, and it is also important in evaluating the convergence rate of classical stationary and non-stationary iterative solvers. On the other hand, the spectral distribution and the clustering results represent key ingredients in the design and in the convergence analysis of specialized multigrid methods and preconditioned Krylov solvers [9], such as the preconditioned conjugate gradient (PCG) method (see [7], Subsection 3.7, and [10–16]). As proven in [11], the knowledge of the spectral distribution allows explaining the superlinear convergence history of the PCG, thanks to the powerful potential theory.
We emphasize that in [3,8] the final goal is the analysis and the design of fast iterative solvers for the associated linear systems. In the current note, we go exactly in this direction, by focusing our attention on multigrid techniques.

Structure of the Paper

The outline of the paper is as follows. In Section 2, we fix the notation and present background results on multigrid methods, on matrix-valued trigonometric polynomials, and on the related block-Toeplitz matrices. Section 3 is devoted to the analysis of the structure and of the spectral features of the considered matrices and matrix-sequences. The multigrid strategy definition and the symbol analysis of the projection operators are given in Section 4, together with selected numerical tests. The paper is concluded by Section 5, where open problems are discussed and conclusions are reported.

Two-Grid and Multigrid Methods

Here, we concisely report a few relevant results concerning the convergence theory of algebraic multigrid methods, and we present the definition of block-Toeplitz matrices generated by a matrix-valued trigonometric polynomial. We start by considering the generic linear system A_m x_m = b_m of large dimension m, where A_m ∈ C^{m×m} is a Hermitian positive definite matrix and x_m, b_m ∈ C^m. Let m_0 = m > m_1 > ... > m_s > ... > m_{s_min}, and let P_s^{s+1} ∈ C^{m_{s+1}×m_s} be a full-rank matrix for any s. Finally, let us denote by V_s a class of stationary iterative methods for linear systems of dimension m_s. In accordance with [17], the algebraic Two-Grid Method (TGM) can easily be seen as a stationary iterative method whose generic steps are reported below, where we refer to the dimension m_s by means of its subscript s:

1. apply ν_pre pre-smoothing iterations of the method in V_s to A_s x_s = b_s;
2. compute the residual r_s = b_s − A_s x_s and restrict it, r_{s+1} = P_s^{s+1} r_s;
3. solve exactly the coarse equation A_{s+1} e_{s+1} = r_{s+1};
4. correct, x_s ← x_s + (P_s^{s+1})^H e_{s+1};
5. apply ν_post post-smoothing iterations of the method in V_s.

In the first and last steps, a pre-smoothing iteration and a post-smoothing iteration are applied ν_pre times and ν_post times, respectively, in accordance with the considered stationary iterative method in the class V_s. Furthermore, the intermediate steps define the exact coarse grid correction operator, which depends on the considered projection operator P_s^{s+1}. The resulting iteration matrix of the TGM is then defined as

TGM_s = V_{s,post}^{ν_post} [ I^{(s)} − (P_s^{s+1})^H (A_{s+1})^{−1} P_s^{s+1} A_s ] V_{s,pre}^{ν_pre},

where V_{s,pre} and V_{s,post} represent the pre-smoothing and post-smoothing iteration matrices, respectively, and I^{(s)} is the identity matrix at the s-th level. By employing a recursive procedure, the TGM leads to a Multi-Grid Method (MGM): the standard V-cycle is obtained by replacing the exact solution of the coarse equation with one recursive call of the same procedure at level s + 1, down to the coarsest level s_min. From a computational viewpoint, it is more efficient to compute the matrices A_{s+1} = P_s^{s+1} A_s (P_s^{s+1})^H once in the so-called setup phase, in order to reduce the related costs. According to the previous setting, the global iteration matrix of the MGM is recursively defined as MGM_{s_min} = O ∈ C^{m_{s_min}×m_{s_min}} and, for s < s_min,

MGM_s = V_{s,post}^{ν_post} [ I^{(s)} − (P_s^{s+1})^H ( I^{(s+1)} − MGM_{s+1} ) (A_{s+1})^{−1} P_s^{s+1} A_s ] V_{s,pre}^{ν_pre}.
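As a concrete illustration of the scheme above, the following Python sketch performs one TGM iteration for a Hermitian positive definite matrix, with the Galerkin coarse operator formed on the fly. Weighted Jacobi is used as the stationary smoother purely for brevity (the numerics in this paper employ Gauss-Seidel), and all parameter values are illustrative.

```python
import numpy as np

# One iteration of the algebraic TGM sketched above. P maps fine to coarse
# (shape m_{s+1} x m_s, full rank); weighted Jacobi stands in for
# Gauss-Seidel only to keep the sketch short.
def two_grid(A, b, P, x0, nu_pre=1, nu_post=1, omega=0.7):
    D = np.diag(A)                           # Jacobi diagonal

    def smooth(x, nu):
        for _ in range(nu):
            x = x + omega * (b - A @ x) / D  # weighted Jacobi sweep
        return x

    x = smooth(x0.copy(), nu_pre)            # pre-smoothing
    r = b - A @ x                            # fine-level residual
    A_c = P @ A @ P.conj().T                 # Galerkin coarse matrix
    e_c = np.linalg.solve(A_c, P @ r)        # exact coarse solve
    x = x + P.conj().T @ e_c                 # coarse grid correction
    return smooth(x, nu_post)                # post-smoothing
```

Replacing the exact coarse solve with a recursive call of the same routine yields the V-cycle, exactly as described in the text.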
Definition 1. Let M_k be the linear space of the complex k × k matrices and let f : (−π, π) → M_k be a measurable function with Fourier coefficients given by

\hat f_j = (1/(2π)) ∫_{−π}^{π} f(θ) e^{−i j θ} dθ,  j ∈ Z.

Then, we define the block-Toeplitz matrix T_n(f) associated with f as the kn × kn matrix given by

T_n(f) = Σ_{|j|<n} J_n^{(j)} ⊗ \hat f_j,

where ⊗ denotes the (Kronecker) tensor product of matrices and J_n^{(j)} is the matrix of order n whose (i, k) entry equals 1 if i − k = j and zero otherwise. The set {T_n(f)}_n is called the family of block-Toeplitz matrices generated by f, which is called the generating function or the symbol of {T_n(f)}_n.

Remark 1. In the relevant literature (see, for instance, [10]), the convergence analysis of the two-grid method splits into the validation of two separate conditions: the smoothing property and the approximation property. Regarding the latter, with reference to scalar structured matrices [10,15], the optimality of two-grid methods is expressed in terms of proper conditions that the symbol p of a family of projection operators has to fulfill. Indeed, consider T_n(f) with n = 2^t − 1 and f a nonnegative trigonometric polynomial, and let θ_0 be the unique zero of f. Then, the optimality of the two-grid method applied to T_n(f) is guaranteed if we choose the symbol p of the family of projection operators such that

|p|^2(θ) + |p|^2(θ + π) > 0 for every θ,  limsup_{θ→θ_0} |p|^2(θ + π)/f(θ) < ∞,

where θ + π (mod 2π) plays the role of the mirror point of θ. Informally, this means that the optimality of the two-grid method is obtained by choosing a family of projection operators associated with a symbol p such that |p|^2(θ) + |p|^2(θ + π) does not have zeros and |p|^2(θ + π)/f(θ) is bounded (if we require the optimality of the V-cycle, then the second condition is a bit stronger; see [10]). In a differential context, the previous conditions mean that p has a zero of order at least α at θ = π whenever f has a zero at θ_0 = 0 of order 2α. In our specific block setting, by interpreting the analysis given in [18], all the involved symbols are matrix-valued and the conditions that are sufficient for two-grid convergence and optimality are the following:

(A) a zero of order 2 at θ = π of the proper eigenvalue function of the symbol of the projector for Q_k, k = 1, 2, 3 (mirror point theory [10,15]);
(B) positive definiteness of p(θ)p^H(θ) + p(θ + π)p^H(θ + π); and
(C) commutativity of p(θ) and p(θ + π).

Even if the theoretical extension to the V-cycle and W-cycle convergence and optimality is not given, in the subsequent section we propose specific choices of the projection operators, numerically showing how this leads to two-grid, V-cycle, and W-cycle procedures converging optimally or quasi-optimally with respect to all the relevant parameters (size, dimensionality, and polynomial degree k). Our choices are in agreement with the mathematical conditions set in Items (A) and (B), while Condition (C) is not satisfied. The violation of Condition (C) is discussed in Section 5, while, in relation to Condition (A), we observe that a stronger condition is met, since the considered order of the zero at θ = π is k + 1, which is larger than 2 for k = 2, 3.
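The two scalar conditions recalled in Remark 1 are easy to verify numerically. The Python sketch below samples them on a grid for the concrete instance f(θ) = 2 − 2cos(θ) and p(θ) = 1 + cos(θ), i.e., the Q_1 symbols used in the next section; the grid size and the small offset avoiding θ_0 = 0 are purely illustrative choices.

```python
import numpy as np

# Sampled check of the scalar two-grid conditions of Remark 1 for
# f(θ) = 2 - 2cos(θ) (zero of order 2 at θ0 = 0) and p(θ) = 1 + cos(θ)
# (zero of order 2 at the mirror point θ = π).
theta = np.linspace(-np.pi + 1e-6, np.pi - 1e-6, 10001)
f = 2.0 - 2.0 * np.cos(theta)
p2 = (1.0 + np.cos(theta)) ** 2           # |p|^2(θ)
p2_mirror = (1.0 - np.cos(theta)) ** 2    # |p|^2(θ + π)

print("min |p|^2 + |p(.+π)|^2 :", (p2 + p2_mirror).min())  # ~2, no zeros
print("max |p(.+π)|^2 / f     :", (p2_mirror / f).max())   # ~1, bounded
```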
Structure of the Matrices and Spectral Analysis

We report some results derived in [3] for the Lagrangian Finite Elements Q_k ≡ P_k, d = 1. Let us consider the Lagrange polynomials L_0, ..., L_k associated with the reference knots t_j = j/k, j = 0, ..., k,

L_i(x) = Π_{j≠i} (x − t_j)/(t_i − t_j),  i = 0, ..., k,   (5)

and let ⟨·,·⟩ denote the scalar product in L^2([0,1]), i.e., ⟨φ, ψ⟩ := ∫_0^1 φψ. In the case a(x) ≡ 1 and Ω = (0, 1), the Q_k stiffness matrix for Equation (1) equals the matrix K_n^{(k)} described in Theorem 1, where the subscript "−" means that the last row and column of the whole matrices in square brackets are deleted, while K_0 and K_1 are k × k blocks whose entries are built from the scalar products ⟨L_i′, L_j′⟩ of the derivatives of the Lagrange polynomials in (5). In particular, following the notation in Definition 1, K_n^{(k)} is the (nk − 1) × (nk − 1) leading principal submatrix of the block-Toeplitz matrix T_n(f_{Q_k}), where f_{Q_k} : [−π, π] → C^{k×k} is the Hermitian matrix-valued trigonometric polynomial given by

f_{Q_k}(θ) = K_0 + K_1 e^{iθ} + K_1^H e^{−iθ}.   (8)

An interesting property of the Hermitian matrix-valued functions f_{Q_k}(θ) defined in Equation (8) is reported in the theorem below from [3]: from the point of view of the spectral distribution, the message is that, independently of the parameter k, the spectral symbol is of the same character as 2 − 2cos(θ), which is the symbol of the basic linear Finite Elements and of the most standard Finite Differences. Theorem 2 from [3] makes this precise through a closed-form expression involving the Lagrange polynomials L_0, ..., L_k in Equation (5), with the convention that the determinant of the empty matrix equals 1. Furthermore, a generalization of the previous result in higher dimension is given in [8]: Theorem 3 from [8] extends the statements to the symbols f_{Q_k} in dimension d ≥ 1.

Multigrid Strategy Definition, Symbol Analysis, and Numerics

Let us consider a family of nested meshes, where each mesh T_h is obtained by adding the midpoints of the subintervals of the previous mesh T_{2h}, so that T_{2h} ⊂ T_h. Clearly, the same inclusion property is inherited by the corresponding Finite Element functional spaces, and hence we find V_{2h} ⊂ V_h. Therefore, to formulate a multigrid strategy, it is quite natural to follow a functional approach and to impose that the prolongation operator p_h^{2h} : V_{2h} → V_h be the identity (inclusion) operator, that is,

p_h^{2h} v = v for every v ∈ V_{2h}.   (12)

Thus, the matrix representing the prolongation operator is formed, column by column, by representing each function of the basis of V_{2h} as a linear combination of the basis of V_h, the coefficients being the values ϕ_i^{2h}(x_j^h) of the coarse basis functions at the fine mesh grid points. In the following subsections, we consider in detail the case of the Q_k Finite Element approximation with k = 2 and k = 3, the case k = 1 being reported in short just for the sake of completeness.

Q_1 Case

Firstly, let us consider the case of Q_1 Finite Elements, where, as is well known, the stiffness matrix is the scalar Toeplitz matrix generated by f_{Q_1}(θ) = 2 − 2cos(θ), and, for the sake of simplicity, let us consider the case of a T_{2h} partitioning with five equispaced points (three internal points) and a T_h partitioning with nine equispaced points (seven internal points) obtained from T_{2h} by considering the midpoint of each subinterval. In the standard geometric multigrid, the prolongation operator matrix is defined as

      [ 1/2   0    0
         1    0    0
        1/2  1/2   0
P =      0    1    0     (13)
         0   1/2  1/2
         0    0    1
         0    0   1/2 ].

Indeed, the basis functions with respect to the reference interval [0, 1] are φ̂_1(x̂) = 1 − x̂ and φ̂_2(x̂) = x̂, and, according to Equation (12), the coefficients ϕ_i^{2h}(x_j^h) take the values 1/2, 1, 1/2 at the three fine grid points in the support of each coarse basis function, giving the columns of the matrix in Equation (13). However, we can think of the prolongation matrix above as the product of the Toeplitz matrix generated by the polynomial p_{Q_1}(θ) = 1 + cos(θ) and a suitable cutting matrix (see [15] for the terminology and the related notation), defined as the m_{s+1} × m_s matrix K_{m_{s+1}×m_s} with (K_{m_{s+1}×m_s})_{i,2i} = 1 and all other entries equal to zero, (14) i.e., the prolongation is A_{m_s}(p_{Q_1})(K_{m_{s+1}×m_s})^T, with A_{m_s}(p_{Q_1}) denoting the Toeplitz matrix of order m_s generated by p_{Q_1}. Two-grid/multigrid convergence with the above-defined restriction/prolongation operators and a simple smoother (for instance, the Gauss-Seidel iteration) is a classical result, both from the point of view of the literature on approximated differential operators [17] and from the point of view of the literature on structured matrices [10,15].
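The factorized form just described is easy to reproduce numerically. In the Python sketch below, the Toeplitz matrix of p_{Q_1}(θ) = 1 + cos(θ) is applied to the transpose of the cutting matrix for the 7-fine/3-coarse example, and the product recovers the columns (1/2, 1, 1/2) of the geometric prolongation in Equation (13); the sizes and the 0-based indexing are illustrative conventions.

```python
import numpy as np

# Q1 prolongation as Toeplitz(1 + cos θ) times the transposed cutting matrix.
def toeplitz_1_plus_cos(n):
    # Fourier coefficients of 1 + cos(θ): c_0 = 1, c_{±1} = 1/2
    return np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))

n_fine, n_coarse = 7, 3
K = np.zeros((n_coarse, n_fine))
for i in range(n_coarse):
    K[i, 2 * i + 1] = 1.0              # keep every second fine grid point

P = toeplitz_1_plus_cos(n_fine) @ K.T  # columns: (1/2, 1, 1/2) stencils
print(P)
```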
In the first panel of Table 1, we report the number of iterations needed to achieve the predefined tolerance 10^−6 when increasing the matrix size in the setting of the current subsection. Indeed, we use A_{m_s}(p_{Q_1})(K_{m_{s+1}×m_s})^T and its transpose as restriction and prolongation operators and Gauss-Seidel as the smoother. We highlight that only one iteration of pre-smoothing and only one iteration of post-smoothing are employed in the current numerics. Therefore, considering the results of Remark 1 and the subsequent explanation, there is no surprise in observing that the number of iterations needed for the two-grid, V-cycle, and W-cycle convergence remains almost constant when we increase the matrix size, numerically confirming the predicted optimality of the methods in this scalar setting.

Table 1. Number of iterations needed for the convergence of the two-grid, V-cycle, and W-cycle methods for k = 1, 2, 3 in one dimension with a(x) ≡ 1 and tol = 1 × 10^−6 (columns: number of subintervals, followed by the TGM, V-cycle, and W-cycle iteration counts for each of k = 1, 2, 3).

Q_2 Case

For the sake of simplicity, let us consider the case of a T_{2h} partitioning with five equispaced points (three internal points) and a T_h partitioning with nine equispaced points (seven internal points) obtained from T_{2h} by considering the midpoint of each subinterval. Thus, with respect to Equation (12), the coefficients are obtained by evaluating the coarse Q_2 basis functions at the fine grid points, proceeding for each basis function as for the first couple. Notice also that, to evaluate the coefficients, for the sake of simplicity, we refer to the basis functions on the reference interval, as depicted in Figure 1. To sum up, one obtains the corresponding prolongation matrix.

Hereafter, we are interested in setting such a geometric multigrid strategy, proposed in [17,19,20], in the framework of the more general algebraic multigrid theory, and in particular in the one driven by the matrix symbol analysis. To this end, we represent the prolongation operator quoted above as the product of a Toeplitz matrix generated by a polynomial p_{Q_2} and a suitable cutting matrix. We recall that the Finite Element stiffness matrix can be thought of as a principal submatrix of the block-Toeplitz matrix generated by the matrix-valued symbol f_{Q_2}, which, from Equation (8), has a compact form. It is then quite natural to look for a matrix-valued symbol for the polynomial p_{Q_2} as well. In addition, the cutting matrix is also formed through the Kronecker product of the scalar cutting matrix in Equation (14) and the identity matrix of order 2. Taking into account the action of the cutting matrix (K_{m_{s+1}×m_s})^T ⊗ I_2, we can easily identify from Equation (15) the generating polynomial p_{Q_2}. A very preliminary analysis, just by computing the determinant of p_{Q_2}(θ), shows that there is a zero of third order at the mirror point θ = π. Moreover, the analysis can be made more detailed, as highlighted in Section 2.
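The block cutting matrix just described is a one-line construction: the scalar cutting matrix of Equation (14) is tensorized with the identity of order 2, so that both degrees of freedom attached to each retained node are kept. The Python sketch below rebuilds it for the illustrative 3-coarse/7-fine sizes used above; the exact index convention inside the scalar factor is an assumption of the sketch.

```python
import numpy as np

# Block cutting matrix for the Q2 setting: K ⊗ I_2, with K the scalar
# cutting matrix of Equation (14). Sizes and indexing are illustrative.
def scalar_cutting(n_coarse, n_fine):
    K = np.zeros((n_coarse, n_fine))
    for i in range(n_coarse):
        K[i, 2 * i + 1] = 1.0          # keep every second fine grid point
    return K

K_block = np.kron(scalar_cutting(3, 7), np.eye(2))
print(K_block.shape)                   # (6, 14): two unknowns per node
```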
We highlight that our choices are in agreement with the mathematical conditions set in Items (A) and (B), while Condition (C) is violated; we discuss this in Section 5 and Remark 2. Nevertheless, it is possible to derive TGM convergence and optimality sufficient conditions, Equations (18) and (19), that should be verified by f and p = p_{Q_2}, exploiting the idea in the proof of the main result of [18]; here q(θ) = [p(θ)^H p(θ) + p(θ + π)^H p(θ + π)]^{−1}, O_k is the k × k null matrix, γ > 0 is a constant independent of n, and we denote by A > B (respectively, A ≤ B) the positive definiteness (respectively, non-positive definiteness) of the matrix A − B. The condition in Equation (19) requires the matrix-valued function R(θ) to be uniformly bounded in the spectral norm. These conditions are obtained from the proof of the main convergence result in [18], where, after several technical derivations, it was concluded that the above conditions are the final requirements needed. To this end, we have explicitly formed the matrices involved in the conditions in Equations (18) and (19) and computed their eigenvalues for θ ∈ [0, 2π]. The results are reported in Figure 2 and are in perfect agreement with the theoretical requirements.

In the second panel of Table 1, we report the number of iterations needed to achieve the predefined tolerance 10^−6 when increasing the matrix size in the setting of the current subsection. Indeed, we use A_{m_s}(p_{Q_2})(K_{m_{s+1}×m_s})^T and its transpose as restriction and prolongation operators and Gauss-Seidel as the smoother. Again, we remind the reader that only one iteration of pre-smoothing and only one iteration of post-smoothing are employed in our numerical setting. As expected, we observe that the number of iterations needed for the two-grid convergence remains constant when we increase the matrix size, numerically confirming the optimality of the method. Moreover, we notice that the V-cycle and W-cycle methods also possess optimal convergence properties. Although this behavior is expected from the point of view of approximated differential operators, it is interesting in the setting of algebraic multigrid methods. Indeed, constructing an optimal V-cycle method for matrices in this block setting might require a specific analysis of the spectral properties of the restricted operators (see [18]).

Q_3 Case

Hereafter, we briefly summarize the case of Q_3 Finite Elements, following the very same path we already considered in the previous section for Q_2 Finite Elements. The basis functions with respect to the reference interval [0, 1] are the cubic Lagrange polynomials associated with the knots 0, 1/3, 2/3, 1. For the sake of simplicity, let us consider the case of a T_{2h} partitioning with seven equispaced points (five internal points) and a T_h partitioning with 13 equispaced points (11 internal points) obtained from T_{2h} by considering the midpoint of each subinterval. Thus, proceeding as in Equation (12), one obtains the corresponding prolongation matrix. Taking into consideration that the stiffness matrix is a principal submatrix of the block-Toeplitz matrix generated by the matrix-valued function f_{Q_3}, we look for a matrix-valued symbol p_{Q_3} in the same fashion, and it is easy to identify the generating polynomial. A trivial computation again shows that the determinant of p_{Q_3}(θ) has a zero of fourth order at the mirror point θ = π. However, the main goal is to verify the conditions in Equations (18) and (19): we have explicitly formed the matrices involved and computed their eigenvalues for θ ∈ [0, 2π]. The results are in perfect agreement with the theoretical requirements (see Figure 4). This analysis links the geometric approach proposed in [17,19,20] to the novel algebraic multigrid methods for block-Toeplitz matrices.

In the third panel of Table 1, we report the number of iterations needed to achieve the predefined tolerance 10^−6 when increasing the matrix size in the setting of the current subsection. Indeed, we use A_{m_s}(p_{Q_3})(K_{m_{s+1}×m_s})^T and its transpose as restriction and prolongation operators and Gauss-Seidel as the smoother (one iteration of pre-smoothing and one iteration of post-smoothing). As expected, we observe that the number of iterations needed for the two-grid convergence remains constant when we increase the matrix size, numerically confirming the optimality of the method.
As in the Q_2 case, we also notice that the V-cycle and W-cycle methods possess the same optimal convergence properties. Comparing the three panels in Table 1, we also notice a mild dependency of the number of iterations on the polynomial degree k. In addition, we can see in Tables 2 and 3 that the optimal behavior of the two-grid, V-cycle, and W-cycle methods for k = 2, 3 remains unchanged if we test different tolerance values.

Table 2. Number of iterations needed for the convergence of the two-grid, V-cycle, and W-cycle methods for k = 2 in one dimension with a(x) ≡ 1 and tol = 1 × 10^−2, 1 × 10^−4, and 1 × 10^−8.

Remark 3. It is worth stressing that the results hold also in dimension d ≥ 2. In fact, interestingly, we observe that the dimensionality d does not affect the efficiency of the proposed method, as is well shown in Table 4 for the case d = 2. We finally recall that the tensor structure of the resulting matrices highly facilitates the generalization and extension of the numerical code to the case d ≥ 2. Indeed, the prolongation operators in the multilevel setting are constructed by a proper tensorization of those in 1D.

Table 4. Number of iterations needed for the convergence of the two-grid, V-cycle, and W-cycle methods for k = 1, 2, 3 in dimension d = 2 with a(x) ≡ 1.

Concluding Remarks

In the present paper, we considered multigrid strategies for the resolution of linear systems arising from the Q_k Finite Element approximation of one- and higher-dimensional elliptic partial differential equations with Dirichlet boundary conditions, where the operator is div(−a(x)∇·), with a(x) continuous and positive over Ω, Ω being an open and bounded subset of R^d. While the analysis has been given in one dimension, the numerics are shown also in higher dimension d ≥ 2, showing an optimal behavior in terms of the dependency on the matrix size and a substantial robustness with respect to the dimensionality d and to the polynomial degree k (see Remark 3).

We mention the fact that our analysis might be of interest for several variations of the problem in Equation (1). Indeed, if we impose different boundary conditions, our procedure can be applied with slight changes. In fact, the resulting stiffness matrices differ from the ones analyzed in the present paper by a small-rank correction matrix. Therefore, they share the same asymptotic spectral properties, which means we only have to take care of possible outliers, affecting the choice of the proper smoother. By interpreting the analysis given in [18] in our specific block setting, we provide a study of the relevant analytical features of all the involved spectral symbols, both of the stiffness matrices f_{Q_k} and of the projection operators p_{Q_k}, k = 1, 2, 3. While the two-grid, V-cycle, and W-cycle procedures show an optimal or quasi-optimal convergence rate with respect to all the relevant parameters (size, dimensionality, polynomial degree k, and diffusion coefficient), the theoretical prescriptions are only partly satisfied. In fact, our choices are in agreement with the mathematical conditions set in Items (A) and (B), while Condition (C) is violated. Here, by quasi-optimal convergence rate, we mean that the convergence speed does not depend on the size (optimality with respect to this parameter) and depends only mildly on the other relevant parameters, such as the dimensionality, the polynomial degree k, and the diffusion coefficient.
By looking at the mathematical derivations in [18], we observe that the latter condition is indeed a technical one. In reality, we believe that Condition (C) is not essential and that the commutation requirement can be substituted by a less restrictive one, possibly following the considerations in Remark 2. Such a point is, in our opinion, important for widening the generality of the theory, and it will be the subject of future investigations. In conclusion, regarding the computational cost of the proposed algorithm, we highlight that the choice of the optimal smoother from a computational viewpoint is beyond the scope of the present paper. Indeed, in the case where the matrices possess a tensor structure, a further analysis will be performed to devise a more competitive method.
6,085.8
2019-12-18T00:00:00.000
[ "Computer Science", "Mathematics" ]
Physiological modeling of toxicokinetic interactions: implications for mixture risk assessment.

Most of the available data on chemical interactions have been obtained in animal studies conducted by administering high doses of chemicals by routes and scenarios different from anticipated human exposures. A mechanistic approach potentially useful for conducting dose, scenario, species, and route extrapolations of toxic interactions is physiological modeling. This approach involves the development of mathematical descriptions of the interrelationships among the critical determinants of toxicokinetics and toxicodynamics. The mechanistic basis of the physiological modeling approach not only enables the species, dose, route, and scenario extrapolations of the occurrence of toxicokinetic interactions but also allows the extrapolation of the occurrence of interactions from binary to multichemical mixtures. Examples are presented to show the feasibility of predicting changes in the toxicokinetics of the components of complex chemical mixtures based on the incorporation of binary interaction data within physiologically based models. Interactions-based mixture risk assessment can be performed by simulating the change in the tissue dose of the toxic moiety of each mixture component during combined exposures and calculating the risk associated with each tissue dose estimate using a tissue dose versus response curve for all components. The use of such a mechanistic approach should facilitate the evaluation of the magnitude and relevance of chemical interactions in assessing the risks of low-level human exposures to complex chemical mixtures.

Toxicity data for chemical mixtures are seldom available at relevant exposure concentrations in both the test animal species and humans. Several environmental chemicals interact with each other by various mechanisms that are dependent on the dose, dosing regimen (i.e., single or repeated exposure), exposure pattern (i.e., pretreatment, coadministration, or postadministration), and/or exposure route of one or both chemicals (3). Most of the interaction studies reported to date have been conducted in laboratory animals by administering high doses of one or both chemicals by routes and scenarios often different from anticipated human exposures (4). Further, information on the toxicological consequences of low-level or chronic exposures to binary chemical mixtures that show significant interactions when administered acutely is often unavailable (3). Typically, in the health risk assessment process for chemical mixtures, the available data on toxic interactions among components are not taken into account. The present situation of neglecting data on binary chemical interactions (e.g., synergism, antagonism) will not change unless and until a tool or methodology can be developed for extrapolating the results of animal studies on binary chemical interactions to humans (i.e., by accounting for the route, scenario, exposure concentration, and species differences) and for predicting the potential modulation of binary chemical interactions by other chemicals present within complex mixtures. These concerns can be addressed with the use of a physiologically based modeling approach. Physiologically based modeling refers to the process of reconstructing mathematically the anatomic-physiological characteristics of the organism of interest and describing the complex interplay among the critical determinants of toxicokinetics and toxicodynamics.
The biological and mechanistic basis of the physiological modeling approach allows the conduct of various extrapolations (i.e., high dose to low dose, route to route, species to species, scenario to scenario, and binary to more complex mixtures) of the occurrence of toxicokinetic interactions among components of chemical mixtures. This article provides an overview of the conceptual basis of physiological models used for simulating toxicokinetic interactions in chemical mixtures and discusses its implications for developing mechanistic mixture risk assessment strategies.

Physiological Modeling of Toxicokinetic Interactions: Conceptual Basis

The feasibility of using physiologically based toxicokinetic (PBTK) models to describe, predict, and extrapolate the extent and magnitude of the occurrence of interactions for various dose levels, scenarios, species, routes, and mixture complexities arises from the very nature and basis of these models. PBTK models of mixtures correspond to a set of individual chemical models interconnected via the mechanism of interaction. In PBTK models for individual chemicals, the organism is frequently represented as a network of tissue compartments (e.g., liver, fat, slowly perfused tissues, and richly perfused tissues) interconnected by the systemic circulation. Absorption of chemicals in the environmental medium may be via the pulmonary, dermal, or oral routes. The amount absorbed per unit time can be calculated within the model; alternatively, the change in chemical concentration in blood or in the tissue representing the portal of entry may be simulated using appropriate equations (Table 1). The chemical in the arterial supply binds to blood components and/or enters the various tissue compartments, where it may dissolve in lipid and water components and bind to tissue macromolecules. The rate of change in the amount of chemical in each tissue compartment is described with mass balance differential equations (MBDEs). In the case of metabolizing tissues, these MBDEs accommodate appropriate mathematical descriptions (e.g., saturable or first-order processes). Finally, the concentration of chemical excreted in biological fluids (exhaled air, urine) may be calculated using algebraic or differential equations (5).
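To make the compartmental structure concrete, here is a minimal Python sketch of the MBDEs for a four-compartment inhalation PBTK model with saturable (Michaelis-Menten) hepatic metabolism, following the structure outlined above. All parameter values are illustrative placeholders, not constants for any real chemical.

```python
import numpy as np
from scipy.integrate import odeint

# Four tissue compartments, flow-limited uptake, saturable liver metabolism.
TISSUES = ["liver", "fat", "rich", "slow"]
Q = {"liver": 4.0, "fat": 0.5, "rich": 4.5, "slow": 3.0}   # blood flows, L/hr
V = {"liver": 1.0, "fat": 2.0, "rich": 1.0, "slow": 20.0}  # volumes, L
P = {"liver": 3.0, "fat": 50.0, "rich": 3.0, "slow": 1.5}  # tissue:blood PCs
Qc, Qp, Pb = 12.0, 9.0, 10.0    # cardiac output, ventilation, blood:air PC
Vmax, Km, Cinh = 5.0, 0.5, 0.1  # metabolic constants; inhaled conc, mg/L

def rhs(A, t):
    Cv = {tis: A[i] / (V[tis] * P[tis]) for i, tis in enumerate(TISSUES)}
    Cven = sum(Q[tis] * Cv[tis] for tis in TISSUES) / Qc   # mixed venous
    Ca = (Qc * Cven + Qp * Cinh) / (Qc + Qp / Pb)          # arterial blood
    dA = [Q[tis] * (Ca - Cv[tis]) for tis in TISSUES]      # tissue MBDEs
    dA[0] -= Vmax * Cv["liver"] / (Km + Cv["liver"])       # RAM in liver
    return dA

t = np.linspace(0.0, 8.0, 200)          # 8-hr inhalation exposure
amounts = odeint(rhs, [0.0] * 4, t)     # mg of chemical in each tissue
```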
The equations listed in Table 1 contain several parameters, i.e., critical determinants of chemical uptake and disposition. These determinants can be classified into four categories, namely, physiological, biological, physicochemical, and biochemical (Table 2). If the numerical value of any of these determinants of chemical uptake and disposition is altered during coexposure to other chemicals, then a toxicokinetic interaction is likely to result. Physiological models of toxicokinetic interactions should then account for such alterations at the mechanistic level to provide simulations of the blood or tissue concentration profiles of chemicals in mixtures.

To construct a PBTK model for a binary mixture, one must initially construct two single-chemical models. The two models are then linked mechanistically at the level of a tissue or blood compartment. This is accomplished by modifying the numerical value(s) of the mechanistic determinant(s) in the mathematical expressions of absorption, distribution, metabolism, or excretion. Toxicokinetic interactions can then be viewed as a consequence of one chemical modifying a mechanistic determinant of the toxicokinetics of another chemical in the mixture.

The absorption profile of one chemical may be altered in the presence of another chemical as a result of interference with an active uptake process or, perhaps, of modulation of the critical biological determinants of uptake (e.g., Q_p, Q_c). For example, hydrogen cyanide at low exposure concentrations causes an increase in Q_p, thus potentially increasing the pulmonary uptake of other chemicals (6). To describe such absorption-level interactions in a PBTK model, chemical-specific quantitative information on the nature and magnitude of the alteration is required.

The distribution profile of a chemical can be altered by another chemical if it affects certain critical physicochemical, physiological, or biochemical parameters. Some chemicals may alter the solubility characteristics of another chemical. For example, cyanide forms complexes with essential metals, resulting in a change in their tissue concentrations and distribution pattern due to changes in solubility and stability (7-12). Several dithiocarbamates increase the uptake of lead through the blood-brain barrier by forming lipophilic complexes (13). Distribution-phase interactions resulting from changes in physiological parameters such as liver volume (V_l) or hepatic blood flow rate (Q_l) are invoked by phenobarbital and ethanol. Chemicals in a mixture, or their metabolites, may compete for binding to various macromolecules such as hemoglobin, albumin, metallothionein, or tissue receptors. Competition for plasma protein binding has been modeled by accounting for the number of binding sites, the total protein level, the dissociation constants, and the concentration of the unbound form of both the substrate and the inhibitor (14). Some chemicals can also increase the number of binding sites through an induction process [e.g., induction of microsomal binding proteins and cytosolic proteins by 2,3,7,8-tetrachlorodibenzo-p-dioxin (15)].

Metabolic interactions may occur because of one chemical inducing or inhibiting the metabolism of another chemical in the mixture. Metabolic inhibition occurs when a chemical competes directly with another chemical for an enzymatic binding site (competitive inhibition), when a chemical binds directly to the enzyme-substrate complex but not to the free enzyme (uncompetitive inhibition), or when it does both (noncompetitive inhibition). The inhibitory effect of one chemical on another is modeled by including a term that describes the quantitative manner in which Km and Vmax are modified (Table 3).

Table 1 footnote: Adapted from Krishnan and Andersen (5). Abbreviations: A, amount; C, concentration; D, dose; E, hepatic extraction ratio; K_o, rate of oral absorption; P, partition coefficient; PA, permeation across tissue membrane; Q, blood flow rate; S, skin surface; V, volume. Subscript abbreviations: a, arterial blood; bd, bound; fr, free; i, inhaled; l, liver; met, metabolized; o, oral; s:a, skin:air; s:b, skin:blood; sk, skin; t, tissue; v, venous blood; vt, venous blood leaving tissue. All other symbols are defined in Table 2.
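As an illustration of the inhibition term, the Python sketch below computes the rate of the amount metabolized (RAM) for a substrate whose apparent Km is inflated by the factor (1 + C_I/K_i) of a competitive inhibitor, which is the standard form assumed throughout this discussion. All numerical values are illustrative.

```python
# RAM for a substrate S in the presence of a competitive inhibitor I:
# the apparent Km is scaled by (1 + C_I / Ki). Values are illustrative.
def ram_competitive(cvl_s, vmax_s, km_s, cvl_i=0.0, ki=1.0):
    return vmax_s * cvl_s / (km_s * (1.0 + cvl_i / ki) + cvl_s)

alone = ram_competitive(cvl_s=0.2, vmax_s=5.0, km_s=0.5)
mixed = ram_competitive(cvl_s=0.2, vmax_s=5.0, km_s=0.5, cvl_i=0.4, ki=0.3)
print(f"RAM alone: {alone:.3f} mg/hr, with inhibitor: {mixed:.3f} mg/hr")
```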
Once the mechanism of interaction is hypothesized or determined and the two individual chemical PBTK models are interconnected, the integrated binary chemical mixture model is ready to be used for predicting the consequences of toxicokinetic interactions at various dose levels, exposure routes, species, and scenarios. This kind of modeling exercise can help determine the relative importance of an interaction for risk assessment purposes. Of the several interaction PBTK models published so far [reviewed by Simmons (16)], most have only been used to evaluate the mechanistic basis of toxicokinetic interactions.

High-Dose to Low-Dose Extrapolations

High-dose to low-dose extrapolations of toxicokinetic interactions can be accomplished by PBTK modeling because the mathematical descriptions used in PBTK models account for the nonlinear kinetic behavior of the individual chemicals and for the mixture effects. The toxicokinetic nonlinearity is often related to a change in the numerical values of biochemical, physiological, or physicochemical determinants that is not strictly proportional to the change in dose or exposure concentration. The ability to conduct high-dose to low-dose extrapolation of toxicokinetic interactions using PBTK models may be examined with metabolic induction/inhibition as the mechanism. In such cases, the binary toxicokinetic interaction model accounts for the nonlinearity arising from two phenomena: the saturable nature of the metabolism of the individual chemicals and the relative importance of the metabolic interaction mechanism as a function of substrate concentration. The saturation of metabolism at a high exposure concentration of a chemical leads to a situation characterized by the absence of significant inhibitory interaction effects on the hepatic extraction ratio at such concentrations; the interactive effect becomes more evident at subsaturation concentrations. In the case of enzyme induction, the effect is more pronounced at high exposure concentrations (i.e., at which metabolism is capacity limited) than at low exposure concentrations (i.e., at which metabolism is perfusion limited). The relative importance, and thus the influence, of metabolic interactions will depend on the substrate concentration, particularly the range in which saturation occurs. The saturable nature of the metabolism of the inhibitor chemical will also lead to nonlinearity in its effects at a constant exposure concentration of the substrate. Because the saturable nature of the metabolism of the mixture components and the magnitude and mechanism of the interactive effects are incorporated within the PBTK models, these models are useful for conducting high-dose to low-dose extrapolations of the consequences of toxicokinetic interactions. This particular application of PBTK models has been explored in the context of determining the relative importance of interaction data for health risk assessments and for establishing interaction thresholds (19-23).
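The dose dependence argued in this section can be illustrated in a few lines: with competitive inhibition, the fractional reduction in the metabolic rate is largest at subsaturating substrate concentrations and nearly vanishes once metabolism saturates. The Python sketch below uses purely illustrative constants.

```python
# Fractional inhibition of metabolism vs. substrate concentration for a
# fixed competitive inhibitor level; all constants are illustrative.
vmax, km, ci, ki = 5.0, 0.5, 0.5, 0.3
for cs in (0.05, 0.5, 5.0, 50.0):                 # low to high substrate
    r0 = vmax * cs / (km + cs)                    # no inhibitor
    r1 = vmax * cs / (km * (1.0 + ci / ki) + cs)  # with inhibitor
    print(f"Cs = {cs:5.2f}  inhibition = {100.0 * (1.0 - r1 / r0):5.1f}%")
```

Running it shows the inhibitory effect falling from roughly 60% at the lowest substrate concentration to under 2% at the highest, matching the qualitative argument above.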
Route-to-Route Extrapolations

The tissue or blood concentration versus time-course profiles of chemicals may differ depending on the route of exposure (e.g., intravenous, oral, inhalation, dermal). Therefore, the time course of the effective concentration of a chemical at the interaction site may differ according to the route of exposure, leading to a route-dependent magnitude and profile of the interaction effect. The route-to-route differences in bioavailability and toxicokinetics can be accounted for by using appropriate mathematical descriptions of absorption (Table 1). Route-to-route extrapolations of toxicokinetics using PBTK models have been successfully performed for several individual chemicals. The approach involves shutting off one pathway and introducing the chemical via another route (5). Because equations for absorption via each of the multiple routes are included in the PBTK model for each of the mixture components, all one needs to do is specify the chemical concentration in the respective exposure media. In the case of a mixture PBTK model, the same or different routes can be chosen for the components. For example, both chemicals may be administered orally, or one by oral ingestion and another by inhalation. Such an approach has been used by Pelekis and Krishnan (23) to conduct route-to-route extrapolation of the occurrence of metabolic interactions between toluene (TOL) and dichloromethane (DCM). The PBTK modeling methodology, in effect, should permit the simulation of the extent and consequence of toxicokinetic interactions following combined multiroute exposures to chemicals.

Interspecies Extrapolations

The usefulness of PBTK models in the conduct of interspecies extrapolations for single chemicals is well documented (5,24). However, there have been only limited efforts so far to demonstrate the usefulness of rodent PBTK models in predicting the toxicokinetics of chemical mixtures in humans. The procedure involves substituting the numerical values of the physiological, physicochemical, and biochemical parameters, including the interaction parameters, with those for the species of interest. This must be done for each component model of the mixture PBTK model. The rat-to-human extrapolation of toxicokinetic interactions using the PBTK modeling approach has been validated for a binary mixture (TOL/m-xylene [XYL]) and a ternary mixture (TOL/XYL/ethylbenzene [EBZ]) (20,25). In these studies, the values of the interaction parameter (i.e., Ki for the competitive inhibition mechanism) determined in animal studies were kept unchanged in the human PBTK models. Treating the metabolic inhibition constants as species-invariant implies that the nature and magnitude of the competition between the alkyl benzenes for binding to cytochrome P4502E1 for metabolism do not change between species. This assumption can be accepted as the default because the substrates (TOL, XYL, EBZ) and the isoenzyme form (2E1) involved are the same despite the fact that the species being considered are different (rat vs. human). Although the inhibition potency of a chemical is assumed to be the same regardless of the species, it is obvious that the hepatic venous concentrations of the chemicals might vary from one species to another, thus accounting for species differences in the overall inhibition effects, if any. Such an approach has recently been used by Pelekis and Krishnan (23) to evaluate the relevance for humans of a toxicokinetic interaction between TOL and DCM characterized in rats. Application of PBTK models along these lines would facilitate an a priori characterization of the threshold of interactions in humans and identification of those interactions that may be of concern to humans exposed to low concentrations.

Scenario-to-Scenario Extrapolations

Scenario extrapolations to assess the potentially varying nature and magnitude of toxicokinetic interactions are essential because changing the exposure scenario may invoke different mechanisms or alter the kinetic profiles of the chemicals. If the change in exposure duration is the sole factor of concern, all one must do is adjust the length of exposure (TCHNG) in the PBTK model. Accordingly, the length of time during which a particular tissue concentration is maintained might vary according to the scenario of exposure.
This may have a direct influence on the extent of the interactive effects seen, as the effective concentration of the inhibitor is an important determinant of the outcome of interactions. The application of PBTK models for simulating the kinetics of various chemicals as a function of exposure scenarios is well documented (26-28). The same approach can be used to describe the change in exposure scenarios in the individual chemical models of a mixture PBTK model. In some cases, mathematical descriptions accounting for time-dependent processes, such as the induction of a metabolizing or binding protein, may have to be additionally incorporated (15,29). The applicability of PBTK models for simulating the kinetics of chemical mixtures in humans following changes in exposure duration and concentration has been demonstrated by Tardif et al. (20).

The inclusion of information on toxic interactions between two chemicals within the regular risk assessment process (even if the data are for a relevant exposure scenario, species, route, and dose level) has been hampered by the fact that such data do not represent the complete picture (30). Chemicals do not coexist just in pairs but in larger numbers, and as such the toxicity of two interacting chemicals might be further altered by the other components of the mixture. The toxicity of complex mixtures is determined by the outcome of interactions not only at the binary level but also at higher levels (e.g., ternary, quaternary). There has been a lack of tools essential for predicting higher order interactions within complex mixtures. The potentially useful tools in this context are statistical, empirical, and mechanistic models. Statistical and empirical models are constructed on the basis of available data and therefore cannot be used for extrapolations. On the other hand, mechanistic PBTK models are constructed on the basis of the quantitative interrelationships among the critical determinants of the process of interest and are therefore potentially useful in this context.

Extrapolation from Binary to Higher Order Levels

One of the approaches for modeling toxicokinetic interactions in mixtures involves identifying and linking all individual chemical models via interaction terms. PBTK models for mixtures of any level of complexity can then be created as long as the quantitative information on the mechanism for each interacting chemical pair is available or can be hypothesized. According to this methodology, for modeling the kinetics of the components of complex mixtures, only the plausible binary interactions need to be characterized. In a mixture of three chemicals, for example, there are three two-way interactions (Figure 1). The first step is to write the models for each component of the mixture. Then the single-chemical models should be interconnected at the binary level by modifying the appropriate equations. If we consider competitive metabolic inhibition as the mechanism of interaction, the equation for calculating the rate of the amount metabolized (RAM) of each component should be modified appropriately (Table 4).
If the model considers interactions at the binary level alone, how is it possible to simulate the consequence of a higher order interaction (e.g., one involving three chemicals)? This is where the unique usefulness of PBTK modeling becomes evident. Let us assume that the binary chemical interaction between A and B has been modeled. Following the addition of another chemical, C, the PBTK model simulates not only the binary interactions involving C (i.e., C-A, C-B) but also the modulatory effect of C on the interaction between A and B. Once we describe the inhibitory effect of C on B, this results in a reduction in the rate of B metabolized and consequently an increase in its concentration in the venous blood leaving the liver (CVLB). CVLB is the numerator of the ratio in the term representing the inhibitory effect of B on A (i.e., 1 + CVLB/KiBA) (Table 4). Because exposure to chemical C increases CVLB, this translates into a modification of the magnitude of the interactive effect of B on A. Similarly, C may also affect the concentration of A, which would then result in a change in the magnitude of the interactive effect of A on B. The PBTK model framework can also simulate similar phenomena affecting the concentration of C, because all components of the mixture are linked. By the same logic, it is possible to predict the influence of the addition of another chemical, D, to the ternary mixture, and so forth (Table 4). When D is added to an existing ternary mixture PBTK model for chemicals A, B, and C, we need to consider only three additional binary interactions (Figure 2, small dashed lines). By doing this, the modulating effect of D on the C-A and B-A interactions is automatically simulated because all components are linked with each other within the PBTK framework. The effect of D on the kinetics of A will in turn affect the kinetics of B, C, and D. Any modulation of a binary interaction will affect the kinetics of the other chemicals that are part of the network of binary interactions present in the mixture. The same considerations apply when another chemical, E, is added to the quaternary mixture. After the four new binary interactions (E-A, E-B, E-C, and E-D) are added, chemical E becomes an integral part of the network of the components of the mixture, and any modulation of a binary interaction involving E will have repercussions on all the others (Figure 2). The novel feature of this approach is that it requires only data on binary interaction mechanisms for predicting the magnitude and consequence of higher order interactions within complex mixtures. Tardif et al. (25) validated this approach for predicting the kinetics of the components of a ternary mixture of alkyl benzenes. These authors hypothesized that, because alkyl benzenes are substrates of cytochrome P4502E1, they are likely to compete for the enzyme catalytic site. Initially, individual chemical PBTK models were constructed in the rat. All individual chemical models were then interconnected by inserting a binary interaction term for metabolic inhibition (i.e., alternate descriptions of competitive, uncompetitive, and noncompetitive inhibition) in the RAM equations. Once all individual chemical PBTK models were linked by binary interaction terms, the metabolic inhibition constants for each pair of mixture components were obtained by fitting model simulations to experimentally measured venous blood concentrations in rats exposed to mixtures.
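The fitting step can be pictured with a simplified stand-in (Python; synthetic numbers). The study fitted full venous blood time-courses from a mixture PBTK model; here, to keep the sketch self-contained, a single hypothetical Ki is recovered from metabolic rates observed at varying inhibitor concentrations:

```python
import numpy as np
from scipy.optimize import curve_fit

VMAX, KM, CVL = 5.0, 0.5, 0.3  # assumed substrate kinetics (illustrative)

def ram_vs_inhibitor(c_inh, ki):
    # Competitive inhibition: the apparent Km grows with inhibitor concentration.
    return VMAX * CVL / (KM * (1.0 + c_inh / ki) + CVL)

# Synthetic "observations" generated with a known Ki plus 2% noise.
c_inh = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
rng = np.random.default_rng(0)
obs = ram_vs_inhibitor(c_inh, 0.4) * (1.0 + 0.02 * rng.standard_normal(5))

popt, _ = curve_fit(ram_vs_inhibitor, c_inh, obs, p0=[1.0])
print(f"estimated Ki = {popt[0]:.2f}")  # should recover roughly 0.4
```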
The Ki values were determined for all three types of inhibition, and competitive inhibition appeared to be the most plausible mechanism of metabolic interaction (25). With the inclusion of the Ki values for all binary interactions, the mixture PBTK model was used to simulate the kinetics of each component in rats following a 4-hr inhalation exposure to a mixture of 100 ppm each of TOL, XYL, and EBZ. For all three chemicals, the venous blood concentration kinetics simulated by the binary interactions-based mixture PBTK model compared well with experimental data (25). This approach has also been used to predict the carboxyhemoglobinemia resulting from DCM exposure in the presence of aromatic hydrocarbon solvents, i.e., TOL, XYL, and EBZ (19). Based on the results of this simulation study, Krishnan and Pelekis (19) reported that the threshold for the DCM-TOL interaction diminished with an increasing number of inhibitor chemicals. This observation can be explained by the change in CVL that occurs during multichemical interactions at the site of metabolism. In the study by Krishnan and Pelekis (19), for example, connecting the PBTK model for TOL to the existing DCM model on the basis of a competitive inhibition mechanism increases the liver venous blood concentration of DCM. The addition of a third chemical, XYL, to the binary mixture affects the magnitude of the existing binary interaction of DCM-TOL by increasing the CVL of TOL and that of DCM (Figure 3).

Figure 3. Predictability of the effect of benzene (X) on the interacting binary pair of dichloromethane (A) and toluene (B). These chemicals are substrates of CYP2E1 and are assumed to compete for metabolism. Vmax refers to the maximal velocity for biotransformation; Km refers to the Michaelis affinity constant; C terms are concentrations of unchanged chemicals in the venous blood leaving the liver. The equations on the left side of the thin arrows represent the metabolism of A and B when they are present alone, and the equations on the right-hand side represent the modification of the rate of their metabolism in the presence of an inhibitor (A or B). With the addition of X to the binary mixture of A-B, all one does is include the constants representing metabolic inhibition between X and A and between X and B in the PBTK model. In this example, the inhibition constants are set equal to the Km of the inhibitor. Adapted from Krishnan and Pelekis (19).

Similarly, the addition of EBZ to the ternary mixture affects the kinetics of all three solvents. The magnitude of the modulation of interactions invoked by the addition of another chemical to an existing network of binary interactions depends on its inhibition potency and also on its CVL. With increasing complexity of mixtures, the Ki for binary interactions is not modified; rather, the CVL is increased according to the potency and number of the inhibitors. The increasing effective concentration of chemicals in a mixture is due to a cascade of inhibitory events at the binary level. This is why a more marked inhibitory effect on the metabolism of a substrate at a given exposure concentration tends to be seen with an increasing number of inhibitors. Though the preceding discussion and the examples published to date address competitive inhibition as the sole mechanism of interaction, this conceptual modeling approach is applicable to situations involving various other kinds of interaction mechanisms.
For example, these can be at the level of absorption (e.g., alteration of the ventilation rate or the skin permeability coefficient), distribution (e.g., competition for protein binding sites, alteration of the concentrations of binding proteins, alteration of blood flow), and metabolism (e.g., enzyme induction). The methodological approach reviewed in the preceding paragraphs provides a systematic framework for modeling and dealing with interactions occurring in increasingly complex mixtures. Every time a new chemical is added to an existing mixture, one must simply describe the additional binary interactions within the PBTK model framework. This alone would appear to be sufficient for predicting changes in tissue dosimetry due to the higher order interactions occurring in complex chemical mixtures.

Physiological Modeling of Toxicokinetic Interactions: Implications for Mixture Risk Assessment The extrapolation issues (i.e., high dose to low dose, route to route, interspecies, scenario to scenario, binary to multicomponent mixtures) relating to the consideration of interaction data in the risk assessment process can be addressed with the use of PBTK models. However, the exact manner in which the PBTK model-based extrapolations can be used for quantitative mixture risk assessment must be clarified. The mixture PBTK models simulate the altered tissue dose of chemicals during mixed exposures. Therefore, the model simulations of the change in tissue dose of mixture components can be used along with the tissue dose-response curves available for each of the mixture components to assess the risk attributed to each of the mixture components. It may be useful to briefly review the tissue dose-based quantitative risk assessment approach for individual chemicals (31). The following steps represent the methodological aspects involved in the conduct of a tissue dosimetry-based cancer risk assessment of single chemicals (a numerical sketch follows this list):
1. Determining the quantitative relationship between the target tissue dose of the toxic moiety and the exposure concentration of the parent chemical in the test animal (using a rodent PBTK model);
2. Determining the relationship (e.g., with the linearized multistage model) between the target tissue dose obtained in step 1 and the tissue responses seen in animal toxicology studies (e.g., cancer bioassays);
3. Using the relationship in step 2 to determine the tissue dose that corresponds to a given level of risk, e.g., one case of excess cancer per million individuals exposed over a lifetime; and
4. Using a human PBTK model to estimate the exposure concentration of the chemical that provides a tissue dose equivalent to that estimated in step 3. This will then be the environmental concentration of the chemical that corresponds to a predefined level of acceptable risk (i.e., one in a million in the present example).
This approach improves on the conventional dose-response relationship by enabling the examination of the relationship between exposure concentration and tissue dose, and between tissue dose and tissue response. In the context of complex chemical mixtures, the change in response, i.e., infra-additive or supra-additive toxicities, can be viewed as a consequence of either a change in the target tissue dose of the toxic moiety per unit exposure concentration or a change in the tissue response per unit tissue dose during combined exposures. The former is a consequence of toxicokinetic interactions, whereas the latter is the consequence of toxicodynamic interactions.
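The four steps can be chained numerically. The sketch below uses made-up exposure-to-tissue-dose curves and a made-up linearized dose-response slope purely to show the order of operations:

```python
import numpy as np

# Step 1: exposure -> tissue dose curves from rodent and human PBTK models
# (monotone, illustrative values only).
exposure_ppm      = np.array([1.0, 10.0, 50.0, 100.0, 500.0])
rat_tissue_dose   = np.array([0.2, 2.1, 11.0, 24.0, 140.0])
human_tissue_dose = np.array([0.1, 1.2, 6.5, 14.0, 80.0])

# Step 2: linearized dose-response anchored to the rat tissue doses
# (slope is a hypothetical risk per unit tissue dose).
slope = 4e-8
bioassay_risk = slope * rat_tissue_dose

# Step 3: tissue dose corresponding to an acceptable risk of one in a million.
target_dose = 1e-6 / slope

# Step 4: invert the human exposure -> tissue dose curve at that target.
acceptable_ppm = np.interp(target_dose, human_tissue_dose, exposure_ppm)
print(f"acceptable exposure concentration ~ {acceptable_ppm:.0f} ppm")
```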
In this paper, the examples and methodology discussed are limited to the consideration of toxicokinetic interactions. Accordingly, toxicological interactions (i.e., an increase or decrease in toxicity) result from an increase or decrease in the target tissue dose per unit exposure concentration of a chemical in complex mixtures. For example, a chemical at an exposure concentration of 10 ppm (arbitrary) produces A units of internal dose in humans. The corresponding risk (carcinogenic or noncarcinogenic) can be assessed by using the tissue dose versus response curve (Figure 4). On the other hand, during combined exposure with, for example, 10 other chemicals at 1 ppm (arbitrary), the tissue dose of the toxic moiety of the first chemical may no longer be A; it might be greater than A, i.e., S. Consequently, the risk associated with 10 ppm of the first chemical would be greater than what is expected based on the additivity principle (Figure 4). The modulation of the target tissue dose of each individual chemical within a complex mixture can be simulated using the PBTK modeling approach as long as the binary interaction mechanisms are known or hypothesized. The tissue dose-based risk assessment for mixtures can then be conducted in the same way as for individual chemicals. The following steps are involved in the application of PBTK models for the conduct of risk assessment for chemical mixtures (a sketch follows this list):
1. Construct exposure concentration versus tissue dose and tissue dose-response curves for all relevant components of a mixture using single-chemical PBTK models;
2. Obtain predictions of the tissue dose of each mixture component in humans exposed to predefined levels using the interactions-based PBTK model; and
3. Combine the results of steps 1 and 2 to determine the potentially altered response for each component during mixed exposures.
Here, the PBTK-based dose-response curves generated for all mixture components are used along with the simulations of the mixture PBTK model that correspond to the tissue dose anticipated during mixed exposures. If the tissue dose of a component predicted using the interactions-based PBTK model is the same as that provided by a single-chemical PBTK model, it would mean that the target tissue dose was not altered during combined exposures. Overall, the PBTK modeling approach to mixture assessment requires the identification and characterization of the modulation of the mechanistically relevant dose surrogate for each mixture component during combined exposures. For interactions that occur because of toxicokinetic alterations, there is no need to generate new dose-response curves for mixtures of various complexities, because the toxicological potency of the components does not change; it is only the tissue dose that changes. In the equation for calculating risk, all one does is account for the change in target tissue dose due to kinetic interactions (Figure 5). In cases where toxicodynamic interactions occur, the same equation depicted in Figure 5 can be used; however, it is important to note that in such cases the toxicological consequences during mixed exposures do not result from a change in tissue dose but from a modification of the toxicodynamics or overall potency of the mixture components. Current research in the areas of toxicokinetic and toxicodynamic modeling should facilitate the implementation and further validation of the interactions-based approaches for the risk assessment of chemical mixtures.
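In the spirit of the risk equation in Figure 5 (which is not reproduced here), the adjustment can be sketched as follows, with hypothetical slopes and tissue doses: the potency (slope) of each component is left untouched, and only the tissue dose delivered per unit exposure is replaced by the interactions-based PBTK prediction.

```python
# Hypothetical values: dose-response slopes (risk per unit tissue dose) and
# tissue doses predicted by single-chemical vs. interactions-based PBTK runs.
slopes       = {"DCM": 4e-8, "TOL": 1e-8, "XYL": 2e-8}
dose_alone   = {"DCM": 12.0, "TOL": 8.0, "XYL": 5.0}   # single-chemical model
dose_mixture = {"DCM": 19.0, "TOL": 9.5, "XYL": 6.0}   # mixture model

for chem in slopes:
    risk = slopes[chem] * dose_mixture[chem]            # potency unchanged
    ratio = dose_mixture[chem] / dose_alone[chem]       # kinetic modulation
    print(f"{chem}: risk {risk:.1e} (tissue dose x{ratio:.2f} vs. alone)")
```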
Abbreviations: Michaelis-Menten affinity constant (Km); oral absorption rate constant; dermal permeability coefficient; mass balance differential equations; number of binding sites per protein; blood:air partition coefficient; physiologically based toxicokinetic model (PBTK); parts per million (ppm); skin:air partition coefficient; skin:blood partition coefficient; tissue:blood partition coefficient; cardiac output; liver blood flow; alveolar ventilation; tissue blood flow rate; rate of amount metabolized (RAM); exposed skin surface area; length of exposure (TCHNG); toluene (TOL); blood volume; liver volume; maximal velocity for metabolism (Vmax); tissue volume; m-xylene (XYL).
7,317.8
1998-12-01T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Early Stages of Obesity-related Heart Failure Are Associated with Natriuretic Peptide Deficiency and an Overall Lack of Neurohormonal Activation: The Copenhagen Heart Failure Risk Study

Objective: This study evaluated the associations between natriuretic peptide activity and the neurohormonal response in non-obese and obese outpatients with and without heart failure (HF). Background: Obesity-related HF may be a distinct subtype of HF. Obesity is associated with lower plasma concentrations of natriuretic peptides. The associations between obesity and neurohormonal activation, estimated by mid-regional pro-adrenomedullin (MR-proADM) and copeptin, in patients with HF have not been elucidated. Methods: This prospective cohort study included 392 outpatients aged ≥60 years, with ≥1 risk factor(s) for HF (hypertension, ischemic heart disease, atrial fibrillation, diabetes, chronic kidney disease), and without known HF. Patients were categorized as 'non-obese' (BMI 18.5–29.9 kg/m2; n = 273) or 'obese' (BMI ≥ 30 kg/m2; n = 119). The diagnosis of HF required signs, symptoms, and abnormal echocardiography. NT-proBNP, MR-proANP, MR-proADM, and copeptin were analyzed. Results: Obese patients were younger and had a higher prevalence of diabetes and chronic kidney disease but a lower prevalence of atrial fibrillation. A total of 39 (14.3%) non-obese and 26 (21.8%) obese patients were diagnosed with HF. In obese patients, HF was not associated with higher plasma concentrations of NT-proBNP (estimate: 0.063; 95% CI: –0.037 to 1.300; P = 0.064), MR-proANP (estimate: 0.207; 95% CI: –0.101 to 0.515; P = 0.187), MR-proADM (estimate: 0.112; 95% CI: –0.047 to 0.271; P = 0.168), or copeptin (estimate: 0.093; 95% CI: –0.333 to 0.518; P = 0.669). Additionally, obese patients with HF had lower plasma concentrations of NT-proBNP (estimate: –0.998; 95% CI: –1.778 to –0.218; P = 0.012) and MR-proANP (estimate: –0.488; 95% CI: –0.845 to –0.132; P = 0.007) compared to non-obese patients with HF, whereas plasma concentrations of MR-proADM (estimate: 0.066; 95% CI: –0.119 to 0.250; P = 0.484) and copeptin (estimate: 0.140; 95% CI: –0.354 to 0.633; P = 0.578) were comparable. Conclusions: Patients with obesity-related HF have natriuretic peptide deficiency and lack increased plasma concentrations of MR-proADM and copeptin, suggesting that patients with obesity-related HF have blunted overall neurohormonal activity.

Introduction The global obesity epidemic represents an increasing burden of HF risk [1,2]. Obesity is a well-established risk factor for HF, independent of obesity-related conditions like hypertension, type 2 diabetes, and ischemic heart disease [3]. Obesity has also been associated with subclinical changes in the structure and function of the left ventricle (LV), especially changes of the diastolic function and LV hypertrophy [4,5]. Thus, obesity has a strong correlation with HF with preserved ejection fraction (HFpEF) [6]. Various hypotheses for the pathophysiological mechanisms of obesity-related HF have been proposed: volume overload and increased afterload leading to hypertrophy of the LV [7], lipotoxicity caused by increased circulating free fatty acids [8], altered metabolism in the myocardium due to insulin resistance [9], and activation of the leptin-aldosterone-neprilysin axis leading to sodium retention, plasma volume expansion, and ventricular remodelling [10].
Furthermore, obesity has been associated with lower plasma concentrations of amino-terminal pro-B-type natriuretic peptide (NT-proBNP) and mid-regional pro-atrial natriuretic peptide (MR-proANP) [11,12,13], a condition described as natriuretic peptide deficiency. However, the mechanisms behind, and the implications of, the natriuretic peptide deficiency are incompletely understood. Structural remodelling of the heart in obese patients and the type of HF may influence the secretion of the natriuretic peptides, since HFpEF is associated with lower plasma concentrations of the natriuretic peptides than HF with reduced ejection fraction (HFrEF) [14,15]. Increased epicardial fat in obese patients with HFpEF has been suggested to reduce LV wall stress through enhanced pericardial resistance and thus to impair the stimulus for natriuretic peptide synthesis [16]. Metabolic factors may also influence the plasma concentrations of the natriuretic peptides; e.g., insulin resistance has been associated with augmented elimination of the natriuretic peptides from the circulation [17]. Considering that blockade or modulation of the neurohormonal activation is a cornerstone of the recommended treatment of patients with HFrEF [18] and is under evaluation in patients with HFpEF, obesity-related HF may be a subtype of HF that, due to natriuretic peptide deficiency, would benefit from different management, e.g., early treatment with sacubitril-valsartan. It is unknown whether obesity-related HF is associated with lower plasma concentrations of novel cardiac biomarkers like mid-regional pro-adrenomedullin (MR-proADM) and copeptin, which reflect neurohormonal activation. Accordingly, we evaluated the plasma concentrations of natriuretic peptides and the neurohormonal activation estimated by copeptin and MR-proADM in elderly non-obese and obese outpatients with and without HF. Further, to improve the understanding of echocardiographic findings in obesity-related HF, we assessed the relationship between plasma concentrations of natriuretic peptides and important echocardiographic parameters stratified according to the presence or absence of obesity.

Methods The Copenhagen Heart Failure Risk Study was a prospective cohort study evaluating the prevalence of early stages of HF and undiagnosed HF among elderly outpatients with a high risk of HF but without known or suspected HF [19]. Study population The inclusion and exclusion criteria have been described in detail previously [19]. Briefly, patients were eligible if they were 60 years of age or above and had one or more risk factor(s) for HF (hypertension, ischemic heart disease, atrial fibrillation, diabetes, stroke, chronic kidney disease). The main exclusion criteria were: a history of HF or reduced LVEF, suspected HF, ongoing advanced cardiac disease (including known moderate-severe valvular disease), recent or planned cardiac procedure, moderate-severe chronic obstructive pulmonary disease, and estimated glomerular filtration rate (eGFR) < 15 mL/min/1.73 m2. Enrolment commenced in December 2014 and ended in June 2017. Patients were consecutively screened at the Department of Cardiology, the Clinic of Diabetes, and the Clinic of Nephrology, Herlev and Gentofte University Hospital, Denmark; eligible patients were included after discharge from the Department of Cardiology or after a scheduled visit at the outpatient Clinic of Diabetes or the Clinic of Nephrology. In total, 400 patients were enrolled in the Copenhagen Heart Failure Risk Study.
Echocardiography was missing in one patient, who was therefore excluded from the analysis. Further, in the present study we excluded patients with BMI < 18.5 kg/m2 (n = 7), resulting in a study population of 392 patients. Written informed consent was obtained from all patients prior to enrolment. The Committee on Health Research Ethics for the Capital Region of Denmark approved the study (H-3-2014-016). The study was conducted according to the Declaration of Helsinki. Physical examination was performed on the day of enrolment; this included a medical history, current medication, blood pressure, venous blood samples, and an echocardiogram. Patients' weight and height were registered. BMI was calculated as weight (kg) divided by height squared (m2). Patients were categorized as 'obese' (BMI ≥ 30 kg/m2) or 'non-obese' (BMI ≥ 18.5 to < 30.0 kg/m2). The diagnosis of HF was evaluated according to the 2012 guidelines on HF from the European Society of Cardiology [20], which include clinical signs of HF, symptoms of HF, and specific echocardiographic abnormalities. The 2016 guidelines on HF from the European Society of Cardiology [18] have incorporated plasma concentrations of BNP/NT-proBNP in the diagnosis of both HFpEF and HFrEF. Since this study investigated the influence of obesity on plasma NT-proBNP concentrations and other cardiac biomarkers, we used the 2012 definition of HF. Patients with symptoms of HF, clinical signs of HF, and an abnormal echocardiography were diagnosed with HF. Symptoms of HF were patient-reported, and we required a combination of at least two symptoms: a) dyspnoea/orthopnoea and b) oedema/treatment with loop diuretics. Biomarkers Biochemical analyses were performed at the local laboratory. Routine biomarkers were analysed using fresh samples: haemoglobin, creatinine, cholesterol, high-sensitivity C-reactive protein, haemoglobin A1c, alanine aminotransferase, and alkaline phosphatase. Likewise using fresh samples, plasma concentrations of NT-proBNP (ng/L) were measured using a two-site chemiluminescent immunometric assay, IMMULITE 2000 NT-proBNP (Siemens Healthcare Diagnostics). The analytical sensitivity of the assay reported by the manufacturer was 10 pg/mL, with a reportable range of 20-35,000 pg/mL. The precision has been tested: for mean plasma concentrations of 145-8884 pg/mL, the within-run coefficient of variation was 2.3-3.2% and the total coefficient of variation was 4.0-4.9% according to the manufacturer, as confirmed in a multicentre evaluation [21]. Blood for subsequent analyses was centrifuged for 10 minutes at 3000 g and 20°C; plasma was frozen and stored at -80°C. Plasma concentrations of MR-proANP (pmol/L) were measured using an automated immunofluorescent assay, BRAHMS MR-proANP KRYPTOR (Thermo Fisher Scientific); assay information was provided by the manufacturer. The limit of quantitation was 4.5 pmol/L, and the precision for plasma concentrations between 20 and 1000 pmol/L showed an intra-assay coefficient of variation <2.5% and an inter-assay coefficient of variation <6.5%; the analytical performance has been tested in healthy individuals [22]. Plasma concentrations of MR-proADM (nmol/L) and copeptin (C-terminal pro-vasopressin) (pmol/L) were measured using dedicated automated immunofluorescent assays, BRAHMS KRYPTOR (Thermo Fisher Scientific) [23,24].
The limit of quantification for the MR-proADM assay has been reported as 0.23 nmol/L, and the precision for plasma concentrations between 0.2 and 6.0 nmol/L showed an intra-assay coefficient of variation ≤10% and an inter-assay coefficient of variation ≤11%, with the highest coefficient of variation for plasma concentrations between 0.2 and 0.5 nmol/L. The limit of detection for the copeptin assay has been reported as 0.69 pmol/L, and the precision for plasma concentrations between 2.0 and >50 pmol/L showed an intra-assay coefficient of variation <15% and an inter-assay coefficient of variation <18%, with the highest coefficient for plasma concentrations below 4 pmol/L. Assay information was provided by the manufacturer. Echocardiography Image acquisition was performed according to a predefined study protocol using the Vivid E9 ultrasound system (General Electric Vingmed Ultrasound, Horten, Norway). Analyses were performed using EchoPAC (version 201.70.1, General Electric Vingmed Ultrasound, Horten, Norway). Analyses were performed off-line, in accordance with current guidelines [25,26], by a trained physician blinded to clinical and biochemical data. Dimensions of the LV were obtained from the parasternal long-axis view. Inter-ventricular septal thickness, LV internal diameter, and LV posterior wall thickness were measured at end-diastole, and LV mass was calculated. LV mass indexed to BSA may underestimate hypertrophy in obese patients; therefore, we calculated LV mass indexed both to BSA and to height [27]. The apical 4- and 2-chamber views were used to assess LVEDV and LVEF. We used Simpson's biplane method to calculate LVEF; in 19 patients this was not possible, and LVEF was instead calculated using the Teichholz method. Wall motion score was evaluated in the three apical views using the 16-segment model. Left atrial volume was assessed by planimetry in the apical 2- and 4-chamber views. Diastolic function was evaluated using pulsed-wave Doppler at the mitral inflow (peak early = E; atrial = A) and the mitral valve deceleration time, and with tissue Doppler for septal and lateral myocardial velocities (peak early = e'; peak systolic = s'). Right ventricular function was evaluated by the tricuspid annular plane systolic excursion at the lateral tricuspid annulus. Longitudinal strain was obtained in the three apical views, and global longitudinal strain (GLS) was calculated. Circumferential strain was obtained in the parasternal short-axis view at the papillary muscle level. Statistics Baseline characteristics and echocardiographic parameters are presented in four groups: non-obese patients with and without HF, and obese patients with and without HF. Categorical variables are presented as numbers of patients and percentages and compared using the chi-square test. Continuous variables are presented as medians with 25th and 75th percentiles or as mean ± standard deviation (SD) and compared using the Wilcoxon test. To account for skewness, continuous variables were log2-transformed for comparison where appropriate. Obese and non-obese patients with HF were compared, and obese and non-obese patients without HF were compared, respectively. The relationships between obesity, the diagnosis of HF, and the cardiac biomarkers were evaluated with linear regression models. Interaction analysis showed a significant interaction between HF and obesity; this interaction was therefore included in the linear regression models. Cardiac biomarkers were log2-transformed. Comparisons were made with two-way ANOVA, adjusted for age, sex, and eGFR.
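As an illustration of the model just described (the study itself used SAS; the variable names and synthetic data below are hypothetical), a sketch in Python with a log2-transformed biomarker, the HF-by-obesity interaction retained, and adjustment for age, sex, and eGFR might look as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study dataset; all values are made up.
rng = np.random.default_rng(1)
n = 392
df = pd.DataFrame({
    "hf":       rng.integers(0, 2, n),       # heart failure (0/1)
    "obese":    rng.integers(0, 2, n),       # BMI >= 30 kg/m2 (0/1)
    "age":      rng.normal(72, 6, n),
    "sex":      rng.integers(0, 2, n),
    "egfr":     rng.normal(70, 15, n),
    "ntprobnp": rng.lognormal(5.5, 1.2, n),  # skewed, hence the log2 transform
})

# log2-transformed outcome with the HF-by-obesity interaction, adjusted for
# age, sex, and eGFR, mirroring the linear models described above.
fit = smf.ols("np.log2(ntprobnp) ~ hf * obese + age + sex + egfr", data=df).fit()
print(fit.summary())
```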
The diagnostic ability of the cardiac biomarkers for the diagnosis of HF in non-obese and obese patients, respectively, was evaluated using receiver operating characteristic (ROC) curves, and the area under the curve (AUC) was calculated. The influence of obesity on the association between plasma concentrations of NT-proBNP or MR-proANP and selected echocardiographic parameters was evaluated using linear regression models. Echocardiographic parameters were the response variables. Plasma concentrations of NT-proBNP and MR-proANP were log2-transformed. The models were adjusted for age, gender, eGFR, and atrial fibrillation. Each model was tested for interaction between obesity and plasma concentrations of NT-proBNP or MR-proANP, respectively. The diagnostic findings in obese and non-obese patients were evaluated as the prevalence of HF, the prevalence of patient-reported symptoms of HF, the prevalence of clinical signs of HF, and the prevalence of abnormal echocardiography, respectively. Trend tests were performed using the Cochran-Armitage trend test. For all statistical analyses, the level of significance was set at P < 0.05 (two-sided). Analyses were performed using SAS Enterprise, version 7.11 (SAS Institute Inc., Cary, NC, USA). Study population In total, 119 patients were categorized as 'obese' and 273 patients as 'non-obese'. The median age was 72 years, and 48% were female. The prevalence of undiagnosed HF was comparable among non-obese and obese patients; in total, 39 of the non-obese patients (14.3%) and 26 of the obese patients (21.8%) were diagnosed with HF according to the diagnostic criteria for HF (P = 0.076 for comparison). The baseline characteristics are presented in Table 1. In general, clinical and biochemical characteristics were similar between the obese and the non-obese patients with HF. However, obese patients with HF were younger than non-obese patients with HF and had a higher prevalence of diabetes but a lower prevalence of atrial fibrillation. As expected, plasma concentrations of NT-proBNP were higher in patients with HF compared to patients without HF among both non-obese and obese patients (median: 202 vs. 1050 ng/L and 120 vs. 196 ng/L; mean ± SD: 479 ± 698 vs. 2334 ± 4060 ng/L and 259 ± 305 vs. 805 ± 1981 ng/L, respectively; P < 0.001 and P = 0.047). The plasma concentrations of MR-proANP were higher in non-obese patients with HF compared to non-obese patients without HF. The echocardiographic parameters are presented in Table 2. No major differences were encountered between obese patients with HF and non-obese patients with HF. Systolic function was largely preserved; the median LVEF was 55.7% in obese patients with HF and 61.2% in non-obese patients with HF (P = 0.668). There was no difference in the prevalence of the HF subtypes between the obese and the non-obese patients; in total, 8 (2.9%) non-obese and 4 (3.3%) obese patients had HFrEF (LVEF < 40%), 5 (1.8%) non-obese and 2 (1.7%) obese patients had HFmrEF (LVEF 40-49%), and 26 (9.5%) non-obese and 20 (16.8%) obese patients had HFpEF (LVEF ≥ 50%) (P > 0.05 for all). The overall diastolic function was mildly abnormal when considering the median values of all patients [26]. Diagnosis of HF and relation with obesity Obesity was associated with HF in a multivariate logistic regression model (Table 3).
In analogous analyses, with BMI categorized as normal weight, overweight, and obese, the association remained significant for obese patients (OR 3.10; 95% CI 1.34-7.16; P = 0.003) but not for overweight patients (OR 1.32; 95% CI 0.61-2.87; P = 0.350), adjusted for traditional risk factors (Supplemental Table 1). Among the obese patients, 21.8% were diagnosed with HF; in addition, 24.4% of the obese patients reported symptoms of HF, 37.0% had clinical signs of HF, and 33.6% had an abnormal echocardiography without fulfilling the diagnostic criteria for HF (Figure 1). Among the non-obese patients, 14.3% were diagnosed with HF. Among the non-obese patients without a diagnosis of HF, the prevalence of patient-reported symptoms of HF or clinical signs of HF was lower compared to the obese patients, at 15.0% and 26.0%, respectively (P for trend < 0.05 for both). The prevalence of an abnormal echocardiography among patients without a diagnosis of HF was comparable (non-obese: 41.0%, obese: 33.6%; P for trend = 0.336) (Figure 2A-D). Cardiac biomarkers and obesity ROC curves of the diagnostic ability of the cardiac biomarkers for HF showed poor performance in this outpatient population. The AUCs were higher in non-obese than in obese patients for all cardiac biomarkers; however, the 95% CIs were overlapping (Figure 3). In non-obese patients, the diagnostic accuracy was comparable between NT-proBNP, MR-proANP, and MR-proADM (P > 0.05 for each comparison), whereas NT-proBNP (P = 0.002), MR-proANP (P = 0.009), and MR-proADM (P = 0.003) each had a higher accuracy than copeptin. In obese patients, no difference was found in the diagnostic accuracy between the cardiac biomarkers (P > 0.05 for all). Echocardiographic parameters, natriuretic peptides, and obesity The associations between echocardiographic parameters, plasma NT-proBNP concentrations, and obesity are shown in Figure 4A-D. Plasma NT-proBNP concentrations were positively associated with E/e' septal, left atrial volume index, and LV mass index, and inversely associated with LVEF. Additionally, in the same linear regression models, obesity was significantly associated with LV mass index and LVEF (Supplemental Table 2). The associations between echocardiographic parameters, plasma MR-proANP concentrations, and obesity are shown in Figure 5A-D. Plasma MR-proANP concentrations were positively associated with E/e' septal, left atrial volume index, and LV mass index, but not with LVEF. In the same linear regression models, obesity was significantly associated with E/e' septal, LV mass index, and LVEF (Supplemental Table 2). Discussion The present study has two major findings: I) in obese patients, the diagnosis of HF was associated neither with increased plasma concentrations of NT-proBNP or MR-proANP nor with increased plasma concentrations of MR-proADM or copeptin, suggesting an overall impaired neurohormonal activation in obesity-related HF; II) the relationships between LV mass and LVEF and the plasma concentrations of the natriuretic peptides differ according to the presence or absence of obesity, indicating that natriuretic peptide deficiency may play an important role in the development of HF in obese patients already at a rather low BMI. Cardiac biomarkers and obesity In the present study, obesity was accompanied by a natriuretic peptide deficiency. Previous studies have also demonstrated lower plasma concentrations of NT-proBNP in obese patients [11,12,28,29], as well as an increase in plasma concentrations of NT-proBNP in relation to an intended weight loss [30].
This should be considered when plasma concentrations of NT-proBNP are used as a diagnostic tool in obese patients, where diagnostic challenges might be present, since poor physical condition due to obesity can mimic symptoms of HF and thereby increase the risk of misdiagnosis of HF even when plasma concentrations of NT-proBNP are accurately measured and normal. However, the mechanism behind this lack of increase in the plasma concentrations of the natriuretic peptides is unknown [13,30], and obesity-related HF may be a distinct phenotype characterized by impaired glucose metabolism and increased inflammation, together with altered hemodynamics. Whether this is the case in the BMI range of 30-35 kg/m2 is less well elucidated [16]. In the present study, the neurohormonal activation was also evaluated by plasma concentrations of MR-proADM and copeptin. Both were significantly higher in obese patients without HF compared to non-obese patients without HF. Additionally, plasma concentrations of MR-proADM were significantly higher among the non-obese patients with HF compared to the non-obese patients without HF, but not among obese patients with HF, suggesting that obesity influences the plasma concentrations of MR-proADM and copeptin in patients without HF and that obese patients with early stages of HF have an overall impaired neurohormonal response. A previous study has described higher plasma MR-proADM concentrations in obese patients [31]. In a population-based study, Sinning et al. [29] reported similar results regarding plasma concentrations of NT-proBNP and MR-proANP, but in contrast to our results, plasma concentrations of MR-proADM were significantly higher in obese patients with HF compared to non-obese patients with HF. This discrepancy may be explained by differences in the duration of HF between the studies and differences in the diagnosis of HF. Obesity, echocardiographic parameters, and natriuretic peptides The echocardiographic findings in the present study were in accordance with previously reported structural and functional changes in obese patients [4,5]. Thus, obesity was correlated with both structural and functional changes in the heart and with natriuretic peptide deficiency. The associations between plasma concentrations of NT-proBNP and MR-proANP and LV mass were shifted upward in obese patients, while the association between plasma concentrations of NT-proBNP and LVEF was shifted downward in obese patients, indicating that for any given plasma concentration of the natriuretic peptides, obese patients have a higher LV mass and a lower LVEF. This may be explained by an increased afterload in obese patients, possibly due to increased arterial stiffness [32]. The association between plasma concentrations of the natriuretic peptides and the left atrial volume index was not influenced by obesity. Left atrial remodelling reflects the burden of increased pressure and volume over time but may be affected by conditions other than obesity, e.g., atrial fibrillation and age [26]. E/e' is a surrogate marker of LV filling pressure under certain circumstances [26]. Our findings suggest that for any given value of the plasma NT-proBNP concentration, the LV filling pressure is equal in obese and non-obese patients, whereas any given value of the plasma MR-proANP concentration reflects a higher LV filling pressure in obese compared to non-obese patients. The discrepancy between NT-proBNP and MR-proANP may reflect a type I error, or MR-proANP may be more closely associated with LV filling pressures than NT-proBNP [16].
Diagnosis of HF in obese patients Obesity mimics the symptoms of HF and thereby augments the challenge of diagnosing HF in obese patients. Dyspnoea may arise from a lower exercise capacity in obese patients [16]. The increased blood volume in obese patients predisposes to peripheral oedema [7]. The clinical assessment of HF may be complicated by obesity, and the image quality of echocardiography in clinical practice may also be lower in obese patients. Accordingly, we observed a higher prevalence of patient-reported symptoms of HF and clinically evaluated signs of HF in obese patients without HF compared to non-obese patients without HF. However, these patients did not fulfil the diagnostic criteria for HF, even though both symptoms and signs of HF were present in a few of them. The prevalence of an abnormal echocardiography was comparable in obese and non-obese patients. Cardiopulmonary exercise testing is a valuable test for discriminating cardiac from non-cardiac dyspnoea [33]. Thus, in the present study, cardiopulmonary exercise testing might have reduced the risk of either a type 1 error (obesity-associated oedema and dyspnoea with normal filling pressures) or a type 2 error (increased filling pressures during exercise despite a lack of symptoms at rest). In the 2016 guidelines on diagnosis and treatment of HF from the European Society of Cardiology, plasma concentrations of NT-proBNP >125 ng/L were introduced as a diagnostic criterion for all subtypes of HF [18]. Since plasma concentrations of natriuretic peptides are lower in obese patients, their diagnostic value for HF may be reduced in obese patients. Among the obese patients with HF, only 73.1% had plasma concentrations of NT-proBNP >125 ng/L, whereas 89.7% of the non-obese patients with HF had plasma concentrations of NT-proBNP >125 ng/L. Accordingly, the risk of underestimating the prevalence of HFpEF and HFmrEF is higher in obese patients. Strengths and limitations Some limitations of the study should be noted. The study participants were included from specialized units at Herlev and Gentofte University Hospital, Denmark, according to specific inclusion and exclusion criteria. Results from this study therefore do not translate to the general population but instead bring insight into an elderly high-risk population. Nevertheless, these patients (elderly outpatients with one or more risk factors for HF and a recent contact with the hospital) are not uncommon in the healthcare system, and the results therefore still seem clinically relevant. The study was a cross-sectional study, without temporal information about the duration of obesity, weight gain or weight loss, or the duration of symptoms or clinical signs of HF, and without repeated measurements of the cardiac biomarkers. Accordingly, we are not able to evaluate the effect of the duration of obesity or of changes in BMI over time, or to evaluate causality between the impaired neurohormonal response and obesity. We used BMI to determine the obesity status of the patients. BMI is a surrogate measure of body fat; it is an established risk factor for HF with high reproducibility and is easily accessible. However, BMI has some limitations that should be considered; e.g., it does not account for muscle mass and fat distribution [34].
Accordingly, other measures of body composition, such as muscle mass, body fat percentage, and hip-waist circumference, could be important to explore in the future, as they may contribute to a better understanding of the mechanisms behind the natriuretic peptide deficiency in obese patients, and perhaps even of the obesity paradox in HF. The diagnosis of HF is often difficult in obese patients, with a risk of misclassification. To ensure that patient-reported symptoms of HF were indeed cardiac, we required the patients to present with both dyspnoea (orthopnoea or dyspnoea) and oedema (peripheral oedema or treatment with loop diuretics). Clinical signs were based on clinical judgment and assessment of the NYHA functional class. The echocardiographic analyses were without systematically missing parameters, and the analyses were validated by a trained physician. Baseline characteristics, echocardiographic parameters, and biochemical results in the present study were in accordance with findings in prior studies of obese patients with HF. Furthermore, no unexpected differences were encountered between the obese and non-obese patients with HF, suggesting a reliable diagnosis of HF among obese patients. The cardiac biomarkers were measured using commercially available assays. However, test variability should be considered in the evaluation of biomarkers. Prior studies have reported substantial intra-individual variability in plasma concentrations of NT-proBNP, related to, e.g., age, sex, and BMI [35]. In the present study, we examined the influence of obesity on the plasma concentrations of specific cardiac biomarkers associated with HF; however, we did not calculate the intra-individual variability of the plasma concentrations. In the statistical analysis, we adjusted for several covariates, known confounders from the Framingham HF risk study, and other relevant parameters, but there is still a risk of unmeasured confounding. Conclusion Obesity-related HF was associated with natriuretic peptide deficiency and an overall impaired neurohormonal activation in elderly outpatients with risk factors for HF. Structural and functional changes of the myocardium in the early stages of obesity-related HF seemed closely correlated with increased afterload rather than volume overload. Clinical perspectives Obesity-related HF is characterized by lower age, a higher prevalence of diabetes, and a lower prevalence of atrial fibrillation compared to HF in non-obese patients. The plasma concentrations of NT-proBNP and MR-proANP are lower in obese patients; this should be considered in clinical practice when interpreting these biomarkers. Patient-reported symptoms and clinical signs of HF are more prevalent among obese patients. This complicates the diagnosis of HF in obese patients, alongside the natriuretic peptide deficiency. Additional methods may be needed for the evaluation of cardiac performance in obese patients, e.g., cardiopulmonary exercise testing or invasive hemodynamic testing. Translational outlook Future studies should seek to increase knowledge of the mechanisms behind obesity-related HF, e.g., by investigating the hemodynamic changes in the early stages of HF in obese patients. Further, these data suggest an overall impaired neurohormonal activation in obesity-related HF. Accordingly, the conventional treatment strategy for HF, with modulation of the neurohormonal activation, may not have the same effect in obese patients with HF; instead, novel treatment strategies should be considered, e.g.,
intended weight loss, mineralocorticoid receptor antagonists, sodium-glucose co-transporter-2 inhibitors, or LCZ696. However, these concepts need to be tested in randomized controlled trials. Additional Files The additional files for this article can be found as follows:
6,354.6
2020-03-25T00:00:00.000
[ "Medicine", "Biology" ]
MUSLIM WOMEN AND POLITICS OF INDIA Political engagement is the mother of politics: it produces, nurtures, develops, rules, and shapes politics, and is therefore of the utmost importance to both the country and the individual. For this reason, political engagement in every aspect of a nation's affairs defines its politics; yet, although women make up close to half of the population, their political weight is far less than that of men. As history makes evident, women have always been viewed as second-class citizens in communities dominated by men. The potential of women has been limited by forcing them to carry out domestic tasks within four walls. Indian women were not allowed to leave their homes. Their freedoms and rights had been revoked. Enrolling them in school was prohibited. They were expected to conform to the ideal of the perfect housewife. They had no access to opportunities in society, politics, the economy, or health. If civilisation is to advance, women must be given more influence. Muslims have a tremendous influence on Indian society; they constitute the largest minority in the nation. In 2011, Muslims made up 13.4% of the overall population of India, with Muslims forming the majority in Lakshadweep and Jammu & Kashmir. Emerging nations like India are currently concerned about the empowerment of women. It is thought that a number of factors, including the 'invisible' role and 'marginal' social status of women in Muslim societal dynamics, have hampered the development of Muslim society. Within this largest Muslim minority, the pace of women's emancipation is at risk. The lack of social opportunities for Muslim women is a severe issue that needs immediate attention. This essay primarily examines the position of Muslim women in the politics of India.
INTRODUCTION Women are the group most at risk in the advancement of a better society in all spheres: political, economic, cultural, and social. Women are unable to advance in many fields due to patriarchal and feudal structures. The struggle to free women from social ills has been difficult, but they are showing signs of empowerment in a variety of occupations. The emancipation of women across all faiths also reveals the alarming fact that some communities are still being left behind in the contemporary world as a result of outdated taboos about access to formal education, the workplace, and political power. Muslim women and men are persistently underrepresented in emerging countries despite Muslims being the second-largest religious community in the world. Since they are denied fundamental human rights like education, employment, health care, and sanitation, Muslim women in particular are at risk. In recent years, the rise of new Muslim women leaders has been observed. The new Muslim woman leader is both audacious and intelligent. She wants to take part in the democratic conversation happening in the nation and is not content to be restricted to the four walls of her house. Significantly, she does not trust the orthodox clergy to speak for her. As a citizen and a practicing Muslim, she is aware of her rights, and she will not tolerate anyone who infringes on them. The Constitution grants women rights, and various Acts have been passed from time to time to give them more authority. Strong attempts at positive discrimination are being made in order to mainstream them, and various programs have also been developed at the federal and state levels. On paper, each of these provisions appears flawless; in reality, there is frequently a gender divide, and Muslim women are particularly hard hit. Regardless of gender, religion, or other characteristics, residents of a free India were guaranteed liberty and equality under the Constitution. As a result, although women and men are legally on an equal footing, in actuality women's political engagement is lower than that of men. Because of this, despite all of the constitutional protections, they play a relatively insignificant role in both the Indian Parliament and the State Legislatures. There are still millions of individuals who do not engage in politics. The primary causes include a lack of education, family discord, an unfavorable political climate, a paucity of women in local leadership positions, and the significant financial costs associated with elections. In every political party, women constantly face fierce competition from male candidates for seats in the assemblies and in Parliament. Research Objectives The Indian Constitution gives citizens the right to be elected, free expression, and the freedom to organize and vote. The Indian Constitution bars discrimination based on sex and class. Holding up half the sky, women in the twenty-first century are essential to the wellbeing of the family, society, community, civic bodies, institutions, and the State. They cannot be disregarded in decision making, policy formation, or other essential citizen obligations. The aim of this paper is to highlight the participation of Muslim women in the politics of India. Methodology This study relies mainly on secondary sources of data: books, articles, journals, unpublished dissertations, newspapers, and internet sources were examined in order to arrive at conclusions in an objective way.
DISCUSSION AND RESULTS Since women are an integral part of society, almost every nation after World War II committed to empowering them and amending its constitution as needed to ensure that women had the same rights as men. In practice, though, they were only occasionally treated as men's equals. Women represent roughly the same percentage of the population as men, but there is no evidence that this translates into equality with men in any field. Women from underrepresented groups are consequently at a distinct disadvantage. It is striking how underrepresented and anonymous they are in politics. In numerous countries, it took considerable time and effort to win women the 'right to vote'. India, the largest democracy in the world, grants all women the right to vote. Caste prejudice has been addressed with some success in India, but work remains to be done in the areas of gender and religious belief. The word 'democracy' implies that all citizens ought to be treated equally: every citizen above the age of 18 who is not barred by law may vote, and those who meet the additional eligibility requirements may run for office. Despite these guarantees, women have never been represented equally in any of the 16 Lok Sabhas or in the state legislatures. The world has been hit by a tsunami of division and tension, and those who promote communal ideals and divisive policies for personal gain are riding these waves triumphantly. India is no exception. In the current context, one group within our society stands out as being burdened by everything that has torn the social fabric apart: Muslim women. Muslim women have a history of identity crises in politics, and this has grown into a topic of debate and conflict. Since independence, there have been no Muslim women MPs in five of the sixteen Lok Sabhas, and there have never been more than four Muslim women MPs in the 543-seat lower house of parliament. One of the main reasons why women are underrepresented in politics is the patriarchal structure of society.
Women's participation According to information provided by the Election Commission, the proportion of women elected to India's 543 Lok Sabha seats increased from 11% in 1999 to 16% in 2009; 49 women were elected to the lower chamber in 1999. Punjab (30.8%) had the highest percentage of newly elected female officials in 2009, followed by Madhya Pradesh (20.7%) and Haryana (20.0%). Meanwhile, the proportion of women among India's eligible voters has increased from 44.3% to 45.8%. The patriarchal, hereditary nature of Indian politics calls for intervention on many levels and in many forms. A 33% reservation, the critical mass required, would be one of the most important reforms in enabling women to participate in Indian democracy not just as voters but also as leaders. Muslims make up a much smaller portion of India's political representation than their share of the population, which is estimated to be around 14% but is widely believed to be higher. Muslim women must be able to speak up in the Indian Parliament. More significantly, this is because of the false assumption held by male lawmakers that Muslim women have no issues, even though numerous issues relating to women need to be discussed in parliament and can best be raised by female Muslim lawmakers. It is therefore necessary to examine the kind and degree of women's participation in the political process from the standpoint of women's development. This would be an essential tool for furthering women's interests as well as a good indicator of how far women have come in terms of their own understanding and expectations. It would also increase the visibility of women in the media, which is valuable for strengthening their political participation, their sense of self-worth, and the role models they look up to. After independence, when the Constituent Assembly took its final form, only fourteen women, following Mahatma Gandhi's advice, were elected to the state legislatures on the Congress Party's slate. But every one of them did an admirable job of playing her part. It followed that when the Constitution was passed, women would be acknowledged as full citizens and legal equals, and these ideas became official in India's democratic constitution.
Muslim Women and Political Participation The Muslim community is more conservative, more patriarchal, and more intent on preserving its religious identity than the Hindu and Christian communities. This makes it harder for women to participate in public life in the Muslim community than in the Hindu and Christian communities, where religion does not act as a strict barrier to the advancement of women. In secular India, the Muslim clergy has a great influence on Muslim women, especially those from poor, illiterate, or socially backward groups. In strictly orthodox Islamic cultures, women are not allowed to leave the house without permission from their husband or father. Some Muslims in some places still think that a Muslim woman's only role is to be a mother and wife, even though things have changed and more Muslim women are taking part in politics. Scholars from many different traditions and worldviews have made the same argument. In the 20th century, however, more Muslim women were talking about politics, writing books, and doing research in universities. In the same way, Muslim women in India want equality and an end to unfair treatment. For that fight to be effective, there should be enough Muslim women in Parliament. If non-Muslim women in Parliament who understand and care about the situation of Muslim women speak up for them, the Muslim community accuses them of interfering in religious matters. In 1996, the government proposed a bill that would reserve 33% of the seats in Parliament and other legislative bodies for women. Male MPs have held up the bill for a long time because they are unwilling to make way for women. Muslim women in politics have often struggled with questions of identity, a matter of ongoing discussion and debate. Since India became independent, there have been 16 Lok Sabhas, but no more than four Muslim women have ever held seats at a time in the 543-member lower house of parliament. The patriarchal structure of society is a major reason why women do not get involved in politics.
Participation in Lok Sabha Only 22 of the 543 members of the outgoing Lok Sabha are Muslims, although Muslims make up roughly 14% of the population according to the 2011 census. Muslim women, who account for about 6.9% of the population, hold just 0.7% of the seats, which demonstrates how little political parties have done to correct this under-representation in the Lok Sabha. Approximately 21 of the 612 women who have been elected across the 16 Lok Sabhas since independence are Muslim women. In five of the sixteen Lok Sabhas there have been no Muslim women MPs at all, and their representation in the 543-seat lower house has never exceeded four. Currently, 24 of the 29 states have no Muslim woman serving in the legislature. India has 14 Lok Sabha constituencies with a Muslim majority and 13 more where the Muslim population exceeds 40%; in 101 seats, more than 20% of the people are Muslims. In India's 2019 elections, 78 women were elected to the lower house, making up 14% of the legislative body. It did not, however, go smoothly for Muslim women: only one, Sajda Ahmed, is now a member of the lower chamber, down from four before the May election. The situation is not very different at the state level, as discussed below, and Muslim women are nearly absent from leadership positions: only three of the 36 Lok Sabha committees are currently led by women, and none of them by a Muslim woman. If Muslim women were represented in proportion to their numbers, they would always have numbered more than 35 (taking the Muslim share of the population as 13.5% and assuming, since women and men are roughly equal in number, that about 7% of the population are Muslim women). However, there were never more than three Muslim women in any of the sixteen Lok Sabhas, and on perhaps five occasions only a handful were present at all. Participation in Rajya Sabha As of 28 October 2014, there were 30 women in the Rajya Sabha, of whom just 4 were Muslims. From 1952 to 2010, only 15 Muslim women served in the Upper House, in various capacities. Similarly, none of the 12 standing committees in the Rajya Sabha is led by a Muslim woman (the others are joint committees). Across the 16 Lok Sabhas there has never been a Muslim woman Speaker, and there has never been a Muslim woman Chairman of the Rajya Sabha. A Muslim woman has held the position of Deputy Chairman of the Rajya Sabha on four of the eighteen occasions; notably, Najma Heptullah filled the post on all four of them.
State Legislatures The situation is not much different at the state level. Less than 8% of the members of the state assemblies are women, and Muslim women are nearly absent: of the 14 women in the Assam Legislative Assembly, only one is a Muslim. Out of the 29 states and 7 union territories, only three have female chief ministers, and none of them is Muslim. Only two of the governors and lieutenant governors/administrators of the 29 states and 7 union territories are women, and none of them is Muslim. Political Heads and Executive The picture is no different at the top. There have been 16 Lok Sabhas so far, but only one woman has served as prime minister, and no Muslim, man or woman, has held the post. Similarly, India has had only one female president, and no Muslim woman has yet held that office. Only three of the 29 states and seven union territories have women chief ministers, and only two of the governors, lieutenant governors, or administrators are women; none of them is Muslim. Committees Parliament at the federal level and the state legislative assemblies set up advisory groups, called committees, to help them do their work; some meet only once, while others meet continuously. There are about 36 committees in the Lok Sabha at present, but only three are led by women, and none by a Muslim woman. In the same way, none of the 12 standing committees in the Rajya Sabha is led by a Muslim woman (the other committees are joint committees). Managing the business of the Houses: women have presided over the Houses (the Lok Sabha and the Rajya Sabha in the Indian Parliament, and the Legislative Assemblies and Legislative Councils at the state level), and there is currently a woman Speaker, but there has never been a Muslim woman as Chairman of the Rajya Sabha or as Speaker in any of the 16 Lok Sabhas. A Muslim woman has been the Rajya Sabha's Deputy Chairperson on four of the last eighteen occasions; interestingly, Najma Heptullah was the holder on all four of them. People often make stereotyped assumptions about Muslim women, but most of the time these women have stood up to such assumptions and shown what they can do; they have also run for office on their own as independent candidates.
The caste system has significantly altered the Muslim societies of India; Islam has, in this way, been "Indianized". Indian Muslims face a variety of social and economic problems stemming from their low levels of education, and this is also true for Muslim women, owing to the social, cultural, and patriarchal components of Islam as it is practised in India. They frequently have trouble enrolling even in primary school, let alone in higher education. Matchmaking by family members distracts young girls from their studies and makes them lose interest in school. Even those who attend a reputable institution are frequently talked out of enrolling in college, especially abroad, because the likelihood that a girl will find a good husband is believed to decrease with her level of education; highly educated women, and girls who have studied overseas, are often viewed with unfounded suspicion, which reinforces this. Accustomed to being exploited, women do not dispute the rulings of religious leaders. Only education can free them from the bonds of ignorance, illiteracy, and exploitation. CONCLUSION People often assume that Muslim women are rigid, socially backward, poor, and culturally deprived, and that they merely follow strict rules. Why are they seen that way, and who put them in this box? It cannot be their religion, because Islam is the most open-minded religion and has given women equal rights, and women have repeatedly broken out of their stereotypical roles and shown what they can do when given responsibility. Rather, factors such as the patriarchal past and the mindset of political leaders appear mainly to blame. Constitutionally, Muslim women have the same decision-making rights as men, but in practice this is mostly just for show: they are continually affected by decisions that other people, mostly men, make for them. Lastly, in the world-system tradition, macro factors such as trade networks, foreign direct investment, national debt, and GDP put them at a disadvantage. At the current rate of progress, it will take Muslim women a long time to close this political gap.
4,508
2023-01-26T00:00:00.000
[ "Political Science", "Economics" ]
A Novel Three-Gene Model Predicts Prognosis and Therapeutic Sensitivity in Esophageal Squamous Cell Carcinoma To precisely predict the clinical outcome and determine the optimal treatment options for patients with esophageal squamous cell carcinoma (ESCC) remains challenging. Prognostic models based on multiple molecular markers of tumors have been shown to have superiority over the use of single biomarkers. Our previous studies have identified the crucial role of ezrin in ESCC progression, which prompted us to hypothesize that ezrin-associated proteins contribute to the pathobiology of ESCC. Herein, we explored the clinical value of a molecular model constructed based on ezrin-associated proteins in ESCC patients. We revealed that the ezrin-associated proteins (MYC, PDIA3, and ITGA5B1) correlated with the overall survival (OS) and disease-free survival (DFS) of patients with ESCC. High expression of MYC was associated with advanced pTNM-stage (P=0.011), and PDIA3 and ITGA5B1 were correlated with both lymph node metastasis (PDIA3: P < 0.001; ITGA5B1: P=0.001) and pTNM-stage (PDIA3: P=0.001; ITGA5B1: P=0.009). Furthermore, we found that, compared with the current TNM staging system, the molecular model elicited from the expression of MYC, PDIA3, and ITGA5B1 shows higher accuracy in predicting OS (P < 0.001) or DFS (P < 0.001) in ESCC patients. Moreover, ROC and regression analyses demonstrated that this model was an independent predictor of OS and DFS, which could also help determine a subgroup of ESCC patients that may benefit from chemoradiotherapy. In conclusion, our study has identified a novel molecular prognostic model, which may serve as a complement to current clinical risk stratification approaches and provide potential therapeutic targets for ESCC treatment. Introduction Esophageal cancer is the sixth leading cause of cancer-related deaths and the eighth most common type of malignant gastrointestinal cancer in the world [1,2]. Adenocarcinoma and squamous cell carcinoma (ESCC) are the two major types of esophageal cancer, with the latter accounting for 90% of cases worldwide [3]. In China, ESCC still has the highest incidence and cancer-induced mortality rates, and the long-term survival of patients with ESCC is less than 20%, despite improvements in treatments such as surgical resection and adjuvant chemoradiation [4,5]. This poor prognosis for ESCC patients is highly associated with the difficulty of diagnosing early-stage ESCC and the frequent occurrence of local invasion and distant metastasis [5]. In addition, conventional chemotherapy and radiotherapy treatments are relatively ineffective [6]. Therefore, seeking novel molecular prognostic markers that can help identify patients at high risk and improve their prognosis is an urgent need in the clinic. However, a single molecular marker cannot meet the clinical requirements for biomarkers, such as high sensitivity and specificity and accuracy beyond that of the current clinical staging system [7]. In the last few years, studies have demonstrated that combinations of multiple biomarkers are more sensitive and reliable than a single molecular marker. Although several prognostic biomarkers for ESCC have been reported [8-12], there is still no ideal biomarker for clinical use. Ezrin, a member of the ezrin/radixin/moesin (ERM) protein family, plays an important role in regulating the growth and metastasis of cancer [13,14].
In our previous studies, we showed that ezrin was upregulated in ESCC and promoted cellular proliferation and invasiveness of ESCC cells [15]. Furthermore, ezrin might be a new prognostic molecular marker for ESCC patients [16]. Ezrin is also known as a key molecule connected with many other molecules in the biology of tumor development [17]. Among these ezrin-related proteins, our previous studies identified three, i.e., MYC, PDIA3, and ITGA5B1, that correlated with patients' survival [11,12]. MYC, a proto-oncogene, plays an integral role in a variety of normal cellular functions [18]. MYC amplification is a recurrent event in many tumors and contributes to tumor development and progression [19-22]. The progress of MYC-induced tumorigenesis in prostate cancer cells entails MYC binding to the ezrin gene promoter and the induction of its transcription [23]. Meanwhile, the induction of ezrin expression is essential for MYC-stimulated invasion [23]. PDIA3 (protein disulfide isomerase family A, member 3), also known as ERp57, is one of the main members of the protein disulfide isomerase (PDI) gene family and was identified primarily as an enzymatic chaperone for refolding misfolded proteins within the endoplasmic reticulum (ER) [24]. Several studies have linked PDIA3 to different types of cancer, including breast [25], ovarian [26], and colon [27] cancers. In ESCC, we found that PDIA3 interacts with ezrin and is not only involved in the development and progression of ESCC but also related to the OS and DFS of ESCC patients [12]. ITGA5B1 is a member of the integrin family, which plays a significant role in cell adhesion to the extracellular matrix (ECM) [28,29]. In ESCC, ITGA5B1 upregulates the expression of ezrin through L1CAM [30]. Although ezrin plays a pivotal role in ESCC progression, the clinical significance of the ezrin-related proteins MYC, PDIA3, and ITGA5B1 has not been thoroughly investigated in ESCC patients. Clinicopathological analyses of these ezrin-interacting proteins may further our understanding of the function of ezrin and provide therapeutic targets for ESCC. In the current study, we found that a three-gene signature comprising MYC, PDIA3, and ITGA5B1 could independently predict ESCC patient survival. Patients and Specimens. For this retrospective study, 284 cases of formalin-fixed, paraffin-embedded ESCC tissue were collected from the Shantou Central Hospital between November 2007 and January 2010. All patients underwent curative resection and were confirmed as having ESCC by pathologists in the Clinical Pathology Department of the hospital. Information on age, gender, and histopathological factors was obtained from the medical records and is shown in Table 1. An independent validation set (GSE53622 and GSE5364) was obtained from the publicly available GEO database (https://www.ncbi.nlm.nih.gov/). We excluded ESCC patients without clinical survival information; the clinicopathological information is shown in Table S1. Overall survival (OS) was defined as the interval between surgery and death from tumors, or between surgery and the last observation for surviving patients. Disease-free survival (DFS) was defined as the interval between surgery and diagnosis of relapse or death.
Ethical approval was obtained from the ethical committee of the Central Hospital of Shantou City and the ethical committee of the Medical College of Shantou University, and only resected samples from surgical patients giving written informed consent were included for use in research. Tissue Microarrays and Immunohistochemistry (IHC). TMAs were constructed based on standard techniques as previously described [12]. IHC was performed using the PV-9000 2-step Polymer Detection System (ZSGB-BIO, Beijing, China) and Liquid DAB Substrate Kit (Invitrogen, San Francisco, CA) according to the manufacturer's instructions, as described in our previous studies [12]. The primary mouse monoclonal MYC antibody (1:100 dilution; Santa Cruz Biotechnology, USA), anti-PDIA3 antibody (polyclonal, 1:700 dilution; Sigma, Saint Louis, MO), and anti-ITGA5B1 antibody (monoclonal, 1:50 dilution; Millipore, USA) were used in this study. Evaluation of IHC Variables. The protein expression was evaluated by an automated quantitative pathology imaging system (PerkinElmer, Waltham, MA, USA), as described previously [11]. Briefly, as shown in Figure S1, automated image acquisition and color images were obtained using Vectra 2.0.8 software. Subsequently, the spectral libraries were constructed using Nuance 3.0 software. The color images were then evaluated with Inform 1.2 software as follows: (1) segmentation of the tumor region from the tissue compartments, (2) segmentation of cells within the tumor region, and (3) H score calculation (= (% at 0) * 0 + (% at 1+) * 1 + (% at 2+) * 2 + (% at 3+) * 3) based on the optical density, which produces a continuous protein expression value in the range of 0 to 300. Construction of a Survival Predictive Model. Firstly, we used univariate Cox proportional hazards regression analysis to evaluate the correlation between survival and each protein. Subsequently, we constructed a predictive model by the summation of the expression of each biomarker (high = 1, low = 0) multiplied by its regression coefficient, as described in the following equation: Y = (β1) × MYC + (β2) × PDIA3 + (β3) × ITGA5B1 [9]. Patients were then divided into three groups (high-risk, medium-risk, and low-risk) by the cut-off values generated with the X-tile software [31]. Statistical Analysis. The SPSS v19.0 program was used for statistical analysis. Cumulative survival time was calculated by the Kaplan-Meier (K-M) method and analyzed by the log-rank test. The association of biomarkers and clinicopathological factors was evaluated by Fisher's exact test. The Cox proportional hazards regression model was used for univariate and multivariate analyses. The predictive value of the parameters was determined by receiver operating characteristic (ROC) curve analysis. P < 0.05 was considered statistically significant. Immunohistochemical Characteristics of the 3 Biomarkers. The expression levels of MYC, PDIA3, and ITGA5B1 protein in ESCC were examined by IHC. As shown in Figure 1(a), MYC, PDIA3, and ITGA5B1 were mainly localized in the cytoplasm. We further investigated the association between the expression of these 3 biomarkers and clinicopathological parameters. There was no significant correlation between the 3 markers and age, gender, tumor size, histologic grade, or invasive depth, etc. Nonetheless, low expression of PDIA3 or high expression of ITGA5B1 significantly correlated with lymph node (LN) metastasis, whereas no correlation was found between MYC and LN metastasis (Table 2).
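The scoring pipeline just described can be summarized in a few lines of Python. This is a minimal sketch, not the study's actual code: the regression coefficients are those reported below for this cohort (β1 = 0.347, β2 = -0.482, β3 = 0.501), while the patient H-scores and all cut-off values are hypothetical placeholders, since the real cut-offs were derived from the data with X-tile.

```python
# Sketch of the H-score and risk-score computation described in Methods.
# Coefficients follow the text; all thresholds below are illustrative only.

def h_score(pct_0, pct_1, pct_2, pct_3):
    """H score = (% at 0)*0 + (% at 1+)*1 + (% at 2+)*2 + (% at 3+)*3, range 0-300."""
    return pct_0 * 0 + pct_1 * 1 + pct_2 * 2 + pct_3 * 3

# Univariate Cox regression coefficients reported for this cohort.
BETA = {"MYC": 0.347, "PDIA3": -0.482, "ITGA5B1": 0.501}

def risk_score(expression, cutoffs):
    """Y = sum(beta_i * I[H-score_i above cutoff_i]), with high = 1, low = 0."""
    return sum(BETA[m] * (1 if expression[m] > cutoffs[m] else 0) for m in BETA)

def risk_group(y, low_max, medium_max):
    # The two thresholds partition patients into low/medium/high risk;
    # they stand in for the X-tile-derived values, which are cohort-specific.
    if y <= low_max:
        return "low"
    if y <= medium_max:
        return "medium"
    return "high"

# Hypothetical patient: H-scores (0-300) for the three markers.
patient = {"MYC": 180.0, "PDIA3": 60.0, "ITGA5B1": 210.0}
cutoffs = {"MYC": 150.0, "PDIA3": 150.0, "ITGA5B1": 150.0}  # placeholders
y = risk_score(patient, cutoffs)
print(y, risk_group(y, low_max=0.0, medium_max=0.5))
```

In practice both the dichotomization cut-offs and the group thresholds would be re-derived on each cohort, as done in the study with X-tile.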
Regarding clinical stage, PDIA3 had a negative correlation, while MYC and ITGA5B1 had a positive correlation, with pTNM-stage (Table 2). In support of these correlation analyses, MYC and ITGA5B1 showed increased expression in tumors with high clinical stage; in contrast, PDIA3 expression was downregulated in stage III tumors compared with those of stages I and II (Figure 1(b)). Prognostic Significance of MYC, PDIA3, and ITGA5B1 in Patients with ESCC. To further explore the clinical significance of MYC, PDIA3, and ITGA5B1 in ESCC patients, Kaplan-Meier analysis and the log-rank test were performed. As shown in Figure 2, high expression of MYC or ITGA5B1 was significantly associated with poor prognosis (MYC: OS, P = 0.024, DFS, P = 0.024; ITGA5B1: OS, P = 0.001, DFS, P = 0.009, Figures 2(a) and 2(c)). However, the overexpression of PDIA3 tended to predict a favorable OS (P = 0.002) and DFS (P = 0.003, Figure 2(b)). Besides, because ITGA5B1 is a heterodimer of alpha and beta subunits, we used the expression level of ITGA5 instead of ITGA5B1 in the microarray data, and the predictive value of MYC, PDIA3, and ITGA5 was further validated in an independent cohort (GSE53622 and GSE5364). The results for the validation set were in line with those of the generation set (Supplementary Figure S2(a)). Univariate Cox regression analysis further identified that these 3 molecules were significantly associated with OS (MYC: P = 0.026; PDIA3: P = 0.003; ITGA5B1: P = 0.001) and DFS (MYC: P = 0.026; PDIA3: P = 0.004; ITGA5B1: P = 0.010, Table 3). A Molecular Prognostic Model of the 3-Biomarker Signature. We then evaluated the prognostic value of a molecular model that takes all 3 biomarkers into consideration. To this end, we calculated the risk score Y = (β1) × (MYC) + (β2) × (PDIA3) + (β3) × (ITGA5B1). In this dataset, the regression coefficients (β1 = 0.347, β2 = -0.482, β3 = 0.501) were calculated by univariate Cox proportional hazards analysis. All patients were divided into low-, medium-, and high-risk groups based on the Y scores, and the optimal cut-off values were determined by the X-tile software based on patients' prognosis [31]. Kaplan-Meier analysis further demonstrated that patients in the low-risk group indeed had markedly prolonged survival (OS: P < 0.001; DFS: P < 0.001, Figure 3(a)). The 5-year OS for the low-, medium-, and high-risk groups was 62.9%, 41.3%, and 24.5%, respectively. Similar results were obtained for 5-year DFS in those groups, which was 56.0%, 37.4%, and 24.5%, respectively (Figure 3(a)). To validate whether this molecular prognostic model can serve as an independent predictor for OS and DFS, we carried out both univariate and multivariate analyses. As shown in Table 3, our newly defined molecular prognostic model, along with pTNM-stage and tumor size, was an independent prognostic factor. Moreover, receiver operating characteristic (ROC) analysis indicated that the predictive power of this molecular prognostic model was higher compared with each biomarker individually or with the pTNM-stage (Figure 3(b)). The predictive value and power of the molecular model for OS also yielded similar results in the validation set, as shown in Figure S2(b). The Potential of the Molecular Prognostic Model in Identifying ESCC Patients Who Can Benefit from Chemoradiotherapy. As shown in Table 1, chemoradiotherapy did not markedly prolong the OS and DFS of ESCC patients. To test the utility of the molecular prognostic model for predicting therapeutic efficacy, we performed K-M survival analysis.
Our results showed that the OS and DFS of patients who were treated with surgery only were higher compared with those who received surgery + radiotherapy or surgery + chemotherapy in the low-risk group (Figure 4(a)). However, the opposite was true for patients in the high-risk group, in which ESCC patients who received only surgery had an unfavorable outcome (Figure 4(c)). Radiotherapy and chemotherapy tended to prolong patients' survival as the risk determined by our molecular prognostic model went up. In particular, patients treated with surgery + chemotherapy in the high-risk group had the most favorable OS and DFS compared with surgery alone and surgery + radiotherapy (Figure 4). Discussion ESCC is one of the most prevalent and lethal cancers in Asia [4]; however, there are no effective molecular signatures for predicting the effectiveness of adjuvant treatments and prognosis in the clinic. Previous studies demonstrated that cytoskeleton changes are intimately associated with cancer invasion and metastasis [32]. In support of this notion, our research has confirmed that the membrane-cytoskeletal linking protein ezrin contributes significantly to ESCC progression [15]. In this study, we attempted to generate an effective molecular model based on the ezrin-related proteins MYC, PDIA3, and ITGA5B1 for potential clinical applications. Our data highlight that a molecular model elicited from MYC, PDIA3, and ITGA5B1 has superior prognostic value compared with pTNM-stage, and also facilitates the identification of ESCC patients who may benefit from chemoradiotherapy. Ezrin, a membrane-cytoskeleton linker, plays a major role in promoting tumor progression [23,33]. Our previous study identified the mislocalization of ezrin during ESCC development, in which membranous ezrin in normal epithelial cells becomes cytoplasmic in ESCC [34]. This abnormal localization changes the interacting proteins of ezrin, which has been shown to be critical for regulating tumor cell survival, invasion, and metastasis [12,17]. The expression of MYC, PDIA3, and ITGA5B1 has been demonstrated to play critical roles in various malignant tumors, and these proteins are independent prognostic factors in certain cancers [12,35,36]. It is important to note that although ESCC patients with higher risk, as predicted by our three-protein molecular model, had poor prognosis, these patients might benefit from adjuvant therapies such as chemoradiotherapy, which improved their survival compared with surgical treatment alone. Compared with the model using three different genes (PPARG, MDM2, and NANOG), which we reported in 2015 [9], the current molecular model not only accurately predicts the OS of patients with ESCC but also predicts the DFS and sensitivity to chemoradiation. This makes it much more practical for clinical application. Our results are in line with other clinical studies, which have shown that high expression and rearrangement of MYC are associated with better response to chemoradiotherapy compared with patients without these abnormalities [37,38]. The mechanism behind this observation is probably related to the biological function of MYC in promoting DNA replication and cell cycle progression [39]. As chemoradiotherapy utilizes the effects of DNA damage-induced cytotoxicity in neoplastic cells, it is not surprising to see an association between MYC and chemo/radiosensitivity in ESCC patients.
Indeed, overexpression of MYC has been shown to render tumor cells susceptible to chemotherapeutics such as etoposide, doxorubicin, and camptothecin [40]. Nevertheless, MYC remains an attractive molecular target for therapy due to its high oncogenic properties [41]. Antisense oligonucleotides (ASOs) targeting MYC have been shown to block cell proliferation and induce apoptosis in solid and hematologic tumors [41,42]. Compared with MYC, relatively little is known about the biological function of ITGA5B1 in carcinoma. Recent studies suggest that ITGA5B1 can prevent cell anoikis by suppressing inflammation- and oxidative stress-related genes [43,44]. ITGA5B1 is especially notable in regulating cell adhesion [45], and it can promote early peritoneal metastasis in serous ovarian cancer [46]. In line with the protumorigenic role of ITGA5B1, we are the first to uncover the high expression of this protein in more advanced and metastatic ESCC tumors with unfavorable prognosis. Further studies are needed to delineate the mechanisms behind the deregulation of ITGA5B1 and its biological function in ESCC. PDIA3 has been shown to confer chemo/radioresistance to various types of tumor cells, such as ovarian carcinoma [47,48]. The PDIA3 expression level is correlated with the clinical outcome of patients with ovarian carcinoma who receive chemoradiotherapy, and the sensitivity to paclitaxel can be enhanced by PDIA3 silencing [47,48]. In ESCC, we found that PDIA3 decreased gradually with the progression of stage and was related to favorable prognosis, which is in accord with the findings in gastric cancer [49] but contrary to those in hepatocellular carcinoma [50]. The favorable prognostic value of PDIA3 in ESCC implies that ESCC patients with high expression of PDIA3 may be more sensitive to chemotherapy such as paclitaxel, but further studies are warranted. These contrasting observations can be attributed to differences in the carcinogenic machinery between ESCC and other carcinomas. Taken together, these data suggest that MYC, PDIA3, and ITGA5B1 may serve as potential therapeutic targets for ESCC treatment, and cotargeting of these biomarkers might be more effective than targeting a single biomarker alone. Importantly, this study provides a clinically applicable molecular model that can more precisely predict clinical outcome than pTNM-stage, and that may also facilitate the identification of ESCC patients who can benefit from radiotherapy or chemotherapy. Data Availability The clinical data and protein expression used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no conflicts of interest. Figure S1: representative images showing the scoring process by the automated quantitative pathology imaging system. Figure S2: predictive value of the three genes and the molecular model in the validation dataset. Table S1: the clinicopathological information of patients in the validation set.
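As a supplement to the statistical methods described above, the following sketch shows how the risk groups could be compared with Kaplan-Meier estimates and a log-rank test using the lifelines package. The DataFrame and its column names ("os_months", "death", "risk_group") are hypothetical; this illustrates the type of analysis reported, not the study's actual code.

```python
# Sketch of the K-M / log-rank comparison across the three risk groups,
# assuming a DataFrame with per-patient follow-up time, event indicator,
# and the risk group assigned by the three-gene model. Column names are
# illustrative assumptions, not taken from the paper.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

def compare_risk_groups(df: pd.DataFrame):
    kmf = KaplanMeierFitter()
    for group, sub in df.groupby("risk_group"):
        kmf.fit(sub["os_months"], event_observed=sub["death"], label=group)
        # Survival probability at 60 months, comparable to the 5-year OS
        # figures quoted in the text.
        print(group, float(kmf.predict(60)))
    # Global log-rank test across the groups.
    result = multivariate_logrank_test(df["os_months"], df["risk_group"], df["death"])
    print("log-rank p =", result.p_value)
```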
4,214
2019-11-25T00:00:00.000
[ "Biology", "Medicine" ]
Analysis of the attainment of boundary conditions for a nonlocal diffusive Hamilton-Jacobi equation We study whether the solutions of a parabolic equation with diffusion given by the fractional Laplacian and a dominating gradient term satisfy Dirichlet boundary data in the classical sense or in the generalized sense of viscosity solutions. The Dirichlet problem is well posed globally in time when boundary data is assumed to be satisfied in the latter sense. Thus, our main results are \emph{a)} the existence of solutions which satisfy the boundary data in the classical sense for a small time, for all H\"older-continuous initial data with H\"older exponent above a critical value, and \emph{b)} the nonexistence of solutions satisfying the boundary data in the classical sense for all time. In this case, the phenomenon of loss of boundary conditions occurs in finite time, depending on a largeness condition on the initial data. Introduction The present work is a contribution to the study of the qualitative properties of a nonlinear parabolic equation involving nonlocal diffusion. Specifically, we study the occurrence of loss of boundary conditions (LOBC, for short; see Sec. 2 for a precise definition) for the following problem: u_t + (−∆)^s u = |Du|^p in Ω × (0, T), (1.1) u = 0 in (R^N \ Ω) × (0, T), (1.2) u(x, 0) = u_0(x) in Ω. (1.3) Here, Ω ⊂ R^N is a bounded domain with C^2 boundary, T > 0, and (−∆)^s denotes the well-known fractional Laplacian operator, defined as (1.4) (−∆)^s u(x, t) = C_{N,s} P.V. ∫_{R^N} [u(x, t) − u(y, t)] / |x − y|^{N+2s} dy, where C_{N,s} is a normalization constant. See [13] for details. In addition to s ∈ (0, 1), we impose the following restriction on s and p: (1.5) s + 1 < p < s/(1 − s). As with (1.5), the regularity condition (1.6) on the initial data might not be optimal; it is explained in the context of our Theorem 1.1 (see below). Equation (1.1) can be seen as a generalization of the so-called viscous Hamilton-Jacobi equation, (1.8) u_t − ∆u = |Du|^p in Ω × (0, T). For p = 2, this corresponds to the deterministic Kardar-Parisi-Zhang equation, proposed by these authors in [21] as a model for the profile of a growing interface. Due to its mathematical relevance as a simple model for an equation with nonlinear dependence on the gradient of the solution, (1.8) has been studied from numerous points of view and with different qualitative results in mind: existence and uniqueness of classical solutions ([16]); existence and nonexistence of global, classical solutions and gradient blow-up and related phenomena ([31], [32], [34], [23]; see also [26] and the references therein for a broader context); global existence of viscosity solutions, assuming boundary conditions in the viscosity sense ([4]); and, closest to our work, regarding LOBC, [24]. Some of the previous results have been extended to more general equations, still in the second-order setting: e.g., to degenerate equations in [1], [2]; and, by the authors, to fully nonlinear, uniformly parabolic equations in [25], which the present work closely parallels. Under the structural assumptions of nondegeneracy of the diffusion and coercivity of the first order term, which are easily shown to be satisfied by (1.1) (see Remark 2.2), and the compatibility condition (1.7), together with the notion of boundary conditions in the viscosity sense, the existence of a unique solution of (1.1)-(1.2)-(1.3) defined globally in time, u ∈ C(Ω × [0, ∞)) ∩ L^∞(Ω × (0, T)) for all T > 0, is shown in [7], Theorem 7.1.
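For readers who want a concrete handle on the operator in (1.4), the following Python sketch evaluates the one-dimensional fractional Laplacian of a smooth bounded function by quadrature of the symmetrized principal-value integral. The normalization constant used is one standard choice, and the truncation radius R and step h are accuracy parameters of this illustration; none of this is part of the paper.

```python
import numpy as np
from math import gamma, pi, sqrt

def frac_lap_1d(u, x, s, h=1e-3, R=50.0):
    """Approximate (-Delta)^s u(x) for a smooth, bounded u: R -> R, 0 < s < 1.

    Uses the symmetrized form of the principal-value integral in (1.4):
        C_{1,s} * int_0^inf [2u(x) - u(x+z) - u(x-z)] / z^(1+2s) dz,
    which is absolutely convergent for smooth u. The normalization
    C_{N,s} = 4^s * s * Gamma(N/2 + s) / (pi^(N/2) * Gamma(1 - s)), with N = 1,
    is one common convention (an assumption of this sketch).
    """
    C = (4.0 ** s) * s * gamma(0.5 + s) / (sqrt(pi) * gamma(1.0 - s))
    z = np.arange(h, R, h)  # truncate the integral to [h, R]
    integrand = (2.0 * u(x) - u(x + z) - u(x - z)) / z ** (1.0 + 2.0 * s)
    return C * h * np.sum(integrand)

# Sanity check: the Fourier symbol of (-Delta)^s is |xi|^(2s), so
# (-Delta)^s cos = cos; the quadrature should return approximately cos(0.3).
print(frac_lap_1d(np.cos, 0.3, s=0.5), np.cos(0.3))
```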
The existence result just quoted is proved by means of a comparison result ([7], Theorem 3.2) and a subsequent application of Perron's method. See Sec. 2 for a precise definition of the notion of solution employed and further remarks on the application of the results of [7] to our problem. We remark that there exist certain results concerning the regularity of solutions for problems related to (1.1)-(1.2)-(1.3) (see, e.g., [5], [6]). However, even if they were adapted to our setting, they do not provide the regularity needed for the proof of Theorem 1.2. For this reason we resort to a regularization procedure. See the remarks after the statement of Theorem 1.2. 1.1. Main results. The first of our results concerns local existence, i.e., the existence of solutions which, for small time, satisfy (1.2) in the classical sense. Due to the results of [7], it suffices to show that the globally defined viscosity solution of (1.1) satisfies (1.2) in the classical sense (see Sec. 2). This is accomplished by a barrier argument, i.e., the construction of a supersolution of (1.1)-(1.2)-(1.3) in a neighborhood of ∂Ω. It is here that the restriction (1.5) comes into play. Consider s ∈ (0, 1) fixed. The upper bound p < s/(1 − s) implies, for the critical exponent in (1.6), that β* < s, while the construction of the barrier ultimately relies on computing (−∆)^s d^β for β ∈ (0, 2s); more precisely, on the fact that F(β) < 0 for each β ∈ (0, s) (see [11], [27] for details). We note that neither the corresponding local existence result for (1.8) in [16] nor its extension to the fully nonlinear case in [25] requires an upper bound for p (or, more generally, for the rate of growth of the gradient nonlinearity). In this sense our result might not be optimal. It is stable, however, in the sense that the restriction disappears as we approach the second-order case, since s/(1 − s) → ∞ as s → 1⁻. The statement of our second result, concerning LOBC, involves the principal eigenfunction of the fractional Laplacian. We denote by (λ_1, ϕ_1) the solution pair for the eigenvalue problem (1.9) (−∆)^s ϕ_1 = λ_1 ϕ_1 in Ω, ϕ_1 = 0 in R^N \ Ω, where the solution is normalized so that ‖ϕ_1‖_∞ = 1. If the initial data is sufficiently large (in a sense made precise in Sec. 5), then the viscosity solution of (1.1)-(1.2)-(1.3) has LOBC at some finite time prior to T. The proof of Theorem 1.2 uses a key argument from Theorem 2.1 in [31], sometimes referred to as the "principal eigenfunction method." In adapting this argument to the current setting, the main difficulty is the lack of regularity of solutions. More precisely, we would need solutions to satisfy the equation either in the weak sense or in some pointwise sense; in the latter case, all terms in the equation must be summable. As mentioned earlier, the existing theory for our problem does not provide such regularity. We note that even in the case of (1.8) (for which LOBC is obtained in [24], among other results), where viscosity solutions are shown to be smooth, some approximation procedure is necessary to "integrate" the equation. We remedy this problem by using regularization by inf-sup-convolutions, a method introduced in [22]. Afterwards, we require that various estimates related to (1.9) remain uniform with respect to the regularization parameters. In particular, we obtain the stability of solutions to (1.9) with respect to the varying domain (see Subsec. 4.2). For this part we also rely on fundamental estimates for the Dirichlet problem for the fractional Laplacian from [27]. The part of (1.5) that is relevant to Theorem 1.2 is s + 1 < p.
This assumption appears only in the crucial Lemma 4.16, which states that a certain negative power (given by p and s) of the principal eigenfunction on an approximate domain is summable. The restriction (1.5) is related to our method of proof. However, a lower bound for p in terms of s is necessary for LOBC to occur: it is known that solutions of (1.1) satisfying (1.2) in the classical sense for all T > 0 exist for s ∈ [0, 1] and p ≤ 2s (see [3], Theorem 4, for the nonlocal case and [4] for the local case, i.e., s = 1). A natural question which we leave open is whether there is classical solvability or whether LOBC occurs when s ∈ [0, 1) and 2s < p ≤ s + 1. For simplicity, we have restricted our analysis to the case of homogeneous boundary conditions, as in (1.2). Local existence for more general boundary conditions can be obtained in the same way as in Theorem 1.1, following the construction of [11]. Theorem 1.2 applies to the case of general boundary conditions with practically no modification (see Remark 5.1). The methods of Theorem 1.2 apply to more general nonlinear operators as well, provided they satisfy the nonlocal equivalent of having divergence form (see [30], Sec. 3.6). This restriction is due to the essential use of "integration by parts" in the so-called principal eigenfunction method. For instance, the result can be extended to an equation with diffusion given by the so-called p-fractional Laplacian, defined for s ∈ (0, 1), p > 1 and x ∈ R^N as (−∆)_p^s u(x) = C P.V. ∫_{R^N} |u(x) − u(y)|^{p−2} (u(x) − u(y)) / |x − y|^{N+sp} dy. In this case the technical results of Sec. 4 can be reproduced following [20] and [12]. 1.2. Organization of the article. In Sec. 2 we recall the notion of solution from [7], which is used throughout our work, and provide some remarks on the relevant results contained in that work. Sec. 3 is devoted to the proof of Theorem 1.1. In Sec. 4 we provide the technical results required for the proof of Theorem 1.2, which is proved in Sec. 5. 1.3. Notation. We write d : R^N → R, d = d(x), for the distance to the boundary of the set Ω, extended by zero to the whole of R^N, i.e., (1.12) d(x) = dist(x, ∂Ω) for x ∈ Ω, and d(x) = 0 for x ∈ R^N \ Ω. For δ > 0, we write Ω_δ = {x ∈ Ω : d(x) < δ}, where d = d(x) is defined as above. Similarly, Ω^δ = {x ∈ Ω : d(x) > δ}. However, to avoid confusion, we abstain from using both notations in the same section: e.g., in Sec. 3 we use the notation Ω_δ, and from Sec. 4 onwards we use only Ω^δ. The closure and boundary operations on sets are performed "after" specifying a subset in terms of the distance: e.g., the closure of Ω_δ is {x ∈ Ω : d(x) ≤ δ}. In Sec. 4 and Appendix A we write, for η > 0, d_η = d_η(x) for the distance to the boundary of Ω^η (extended by zero outside this set, as in (1.12)). Nonnegative constants whose precise value does not affect the argument are denoted collectively by C, and the value of C may change from line to line. When convenient, dependence of C on certain parameters is indicated in parentheses, e.g., as in (1.11). Dependency on Ω, N, s, and p is sometimes omitted for simplicity. Constants we wish to keep track of are numbered accordingly (c_1, c_2, …, C_0, C_1, etc.). Remark 2.2. To be precise, the equation to which the results of [7] apply is (2.4) u_t + (−∆)^s u + |Du|^p = 0 in Ω × (0, T), which differs from (1.1) only in the sign of the nonlinearity (here Ω, T, s, and p are as before). This does not in any way complicate the analysis, since a simple sign change allows us to go from one equation to the other; i.e., if u is a subsolution (resp. supersolution) of (2.4), then ũ = −u is a supersolution (resp. subsolution) of (1.1). Remark 2.3.
Definition 2.1 also interprets the initial condition (1.3) in the viscosity sense, given that Ω × {t = 0} (the "bottom" of the domain) is part of the parabolic boundary of Ω × (0, T). However, as noted in [7], Lemma 4.1, there is no LOBC on this set. That is, if u and v are respectively a bounded, upper-semicontinuous subsolution and a bounded, lower-semicontinuous supersolution of (1.1)-(1.2)-(1.3), then u(x, 0) ≤ u_0(x) ≤ v(x, 0) for all x ∈ Ω. Similarly, as a consequence of [7], Proposition 4.3, and Remark 2.2 above, there is no LOBC for supersolutions of (1.1)-(1.2)-(1.3). Remark 2.4. An important consequence of the comparison result of [7] is that solutions of (1.1)-(1.2)-(1.3) are uniformly bounded for all 0 ≤ t ≤ T. Indeed, from the assumptions on the initial data, v ≡ 0 and the constant ‖u_0‖_∞ are respectively sub- and supersolutions to (1.1)-(1.2)-(1.3). Local existence From the discussion in the Introduction and from Remark 2.3, Theorem 1.1 follows if we can construct a suitable supersolution satisfying (1.2) in the classical sense. To this end we follow the corresponding construction in [11], which addresses a similar (stationary) problem. For convenience, we state the key estimates obtained therein. Lemma 3.1. Let Ω ⊂ R^N be a bounded, C^2 domain and s ∈ (0, 1). Then, there exists δ > 0 such that, for each 0 < α < s, there exists c_1 > 0 for which the corresponding barrier estimate holds in Ω_δ; the constant c_1 depends on N, s and α. Proof. This is a special case of Lemma 3.1 in [11]. The companion estimate, which we refer to as Lemma 3.2, is a special case of Lemma 3.3 in [11]. Since v_y − u_0 is strictly positive over the compact set Ω \ Ω_δ, there exists ε > 0 such that v_y − u_0 ≥ ε there. Recall that the continuous viscosity solution u satisfies (1.3) in the classical sense (see Remark 2.3). This implies that u(·, t) → u_0 as t → 0⁺ uniformly over Ω. Therefore, there exists T* > 0 such that u(·, t) ≤ u_0 + ε for 0 ≤ t ≤ T*. Applying Lemmas 3.1 and 3.2, and using that d(x) ≤ |x − y|, we obtain the required bound for all sufficiently small δ > 0; on the other hand, using that α < 1, we estimate the remaining term. Combining these estimates, and taking first µ > 0 large enough that µc_1 − λc_2 > µc_1/2, then δ > 0 small enough, we conclude that v_y satisfies (3.1). By standard arguments, the function v(x, t) = inf_{y ∈ ∂Ω} v_y(x, t) is a viscosity supersolution of (3.1). It also satisfies (3.3) and (3.4), since these are satisfied by v_y for all y ∈ ∂Ω. Furthermore, v is continuous across ∂Ω and, by (3.6), dominates u there. Therefore, applying the comparison principle of [7] over the domain Ω_δ × (0, T*), the conclusion follows. Remark 3.3. Local existence can be proven for initial data with "critical" regularity, i.e., u_0 ∈ C^{β*}(Ω), β* = (p − 2s)/(p − 1), in exactly the same way, provided [u_0]_{β*,Ω} is sufficiently small. More precisely, proceeding as above, we set µ > 0 so that µc_1 − c_2 > µc_1/2, and require a smallness condition on [u_0]_{β*,Ω} instead of (3.8). Note that the estimate corresponding to (3.7) is satisfied automatically. Technical results 4.1. Regularization. In this section we use regularization by inf-sup-convolution, introduced in [22], to obtain a supersolution of (1.1) which has the regularity needed for the proof of Theorem 1.2. This function approximates the viscosity solution u of (1.1) uniformly over Ω × [0, T] for any T > 0 as the regularization parameters tend to zero. We can also define v^{ε,κ} and v^ε similarly. Note the use of just one superscript when regularization is performed only in the space variable. We collect a series of well-known facts regarding these operations which will be used shortly. (i) Both operations preserve pointwise upper and lower bounds, where inf and sup are taken over Ω × (0, T).
(ii) The sup and inf in the definition of the convolutions are achieved, provided we are at sufficient distance from the boundary. (iii) Both u^{ε,κ} and u_{ε,κ} are Lipschitz continuous in x, with constant K/√ε; similarly, they are Lipschitz continuous in t with constant K/√κ. (iv) u^{ε,κ}, u_{ε,κ} → u uniformly as ε, κ → 0, and similarly for u^ε. (vi) With the notation above, the analogous properties hold for the iterated convolutions. The easier proofs follow more or less directly from the definitions (see, e.g., [14]), while (vii) and (viii) may be found in [9]; (ix) is Proposition 4.5 in [9]. Property (v) uses the well-known theorems of Rademacher and Alexandrov on the differentiability of Lipschitz and convex functions, respectively; see [15] and the Appendix of [10]. For a given v ∈ C(Ω × [0, T]), we will obtain a function which is Lipschitz continuous with respect to t and C^{1,1} with respect to x, following [9]. First, denote by ṽ the lower "0-extension" of v, as defined in (2.3). That is, ṽ = v_g with g ≡ 0 (this notation is only to avoid writing "v_0"). For v ∈ C(Ω × [0, T]) with v|_{∂Ω} ≡ 0, this leads to ṽ ∈ BUC(R^N × [0, T]). We remark that this is precisely what a solution of (1.1)-(1.2)-(1.3) with no LOBC satisfies. We then iterate the convolution operators defined above, obtaining a function w, where ε, δ, κ > 0; the expression furthest to the right follows from Proposition 4.2, (vii). As a first step we recall (Lemma 4.3) that inf-convolution (resp. sup-convolution) preserves supersolutions (resp. subsolutions) in the viscosity sense, albeit in a proper subset of the original domain. Proof. This is a time-dependent version of Proposition 5.5 in [8], in the particular case where the equation in its entirety is translation invariant, i.e., when there is no "(x, t)-dependence". In this case, the regularized function satisfies exactly the same inequality as the original supersolution. A simple computation then shows that |ŷ − x̄| ≤ ε* and |ŝ − t̄| ≤ κ*, as defined in Lemma 4.3. Therefore, to ensure that (ŷ, ŝ) ∈ Ω × (0, T), so that we can test the equation at this point, we require that d(x̄) > ε* and κ* < t̄ < T − κ*. We also remark that, although slightly different from the one given in Definition 2.1, the notion of solution given in [8] is essentially the same concerning the behavior of sub- and supersolutions at interior points (in particular, for the purposes of Lemma 4.3). We now state a key proposition concerning the eigenvalues of a regularized function. Proposition 4.5. Let v ∈ BUC(R^N), δ > 0, and suppose that w = (v^δ)_δ is differentiable everywhere. If for some x̄, w(x̄) < v(x̄) and w is twice differentiable at x̄, then D²w(x̄) has −1/δ as an eigenvalue. Proof. This is Proposition 4.4 in [9], save for the order in which the inf- and sup-convolutions are performed. The proof is entirely analogous. 4.2. The principal eigenvalue problem for the Dirichlet fractional Laplacian. In this section we provide some results regarding the principal eigenvalue problem for the fractional Laplacian on domains approximating Ω, i.e., (4.5) (−∆)^s ϕ = λ ϕ in Ω^η, ϕ = 0 in R^N \ Ω^η. The existence of a solution pair (λ_1^η, ϕ_1^η) of (4.5), where λ_1^η > 0 and ϕ_1^η is nonnegative in Ω^η and unique up to a multiplicative constant, is proved in [29], Proposition 9. The solution obtained in that work is in the weak, or variational, sense. In particular, ϕ_1^η ∈ H^s(Ω^η). Furthermore, in [28], Proposition 4, it is proved that ϕ_1^η ∈ L^∞(Ω^η). We set (4.6) ‖ϕ_1^η‖_∞ = 1 for all η > 0. From here, it is possible to apply the results of [27] to obtain that ϕ_1^η ∈ C^s(R^N).
Once the "right-hand side" of (4.5) is continuous, the notions of weak and viscosity solution coincide (see [27], Remark 2.11). Moreover, "bootstrapping" the results contained in [27] (see also [8]), the solution can be shown to be regular enough in the interior of Ω η for (4.5) to hold in a classical, pointwise sense. Additionally, since ∂Ω η is smooth (see Remark 4.8), it can be shown that as a consequence of Hopf's lemma and the strong maximum principle ( [18]) that Towards the proof of our main result, our aim is to provide estimates that remain uniform with respect to the varying domain (i.e, independent of η). To this end, we recall a few basic facts concerning the geometry of the domains Ω η , η > 0. First, since Ω is C 2 by assumption, there exists an η 0 > 0 such that the distance function d| Ω\Ω 2η 0 is C 2 ; in particular Ω η is C 2 for all η ∈ (0, 2η 0 ). Remark 4.10. In particular, a C 2 domain satisfies an exterior uniform sphere condition. Furthermore, at any given point of the boundary, the radius of the exterior tangent sphere is bounded by below by the smallest of the radii of curvature, which are equal to the inverses of the principal curvatures. Proposition 4.9 allows us to extend the uniform exterior sphere condition to domains close to Ω in a uniform way. More precisely, there exists a positive constant ρ 0 , depending only on Ω and η 0 , as given by Remark 4.8, such that for all η ∈ (0, η 0 ), and for all y ∈ ∂Ω η , there exists y 1 ∈ R N \Ω η such that B ρ0 (y 1 ) ∩ Ω η = {y}. Stability of eigenfunctions. Theorem 4.11. Let η 0 be as in Remark 4.8. Then, there exists C depending only on Ω, N , and s, such that, for all η ∈ (0, η 0 ), the positive solution of (4.5), normalized as above, satisfies Proof. Theorem 4.11 follows readily from the corresponding estimate for the Dirichlet problem. Indeed, we apply Proposition 1.1 from [27] to the solution of (4.5), recalling the normalization (4.6), and obtain for some positive C 2 depending on N , s, and Ω η . For the last inequality we used the fact that Ω η0 ⊂ Ω η for η < η 0 , and therefore λ η 1 ≤ λ η0 1 , by (4.7). It remains only to verify that, once η 0 is fixed, the constant C 2 = C 2 (N, s, Ω η ) in (4.9) (i.e., the constant in Proposition 1.1 from [27]) can be taken uniformly for η ∈ (0, η 0 ). We perform this analysis in Appendix A. Remark 4.13. The following is a consequence of Lemma 4.12 that will be useful later on: given K ⊂⊂ Ω and η ′ > 0 small enough, there exists a positive constant c, depending on K and η ′ , such that, for all η ∈ (0, η ′ ), ϕ η 1 (x) > c for all x ∈ K. A uniform Hopf 's lemma. Lemma 4.14. Let η 0 be as in Remark 4.8. Then, there exists C 3 , depending only on N, s, and Ω, such that, for all η ∈ (0, η 0 ), For a fixed domain, this is contained in Lemma 3.9 in [27]. For completeness, we go into the details of the proof of this result to show that the right-hand side of (4.10) is uniformly bounded. To this end, we also use elements from the proof of the corresponding result for the (more general) case of the fractional p-Laplacian in [20] (Theorem 3.6). Additionally, we employ Proposition 4.9. This, together with (4.15), (4.16) and (4.17), implies that . . , κ η N −1 ). Combining the above estimates with (4.12) and Proposition 4.9, we conclude that f η ∞ is uniformly bounded by a constant that depends only on N, s, and Ω; more specifically, on N, s, and the the principal curvatures of ∂Ω. The following computation is adapted from [12]. 
Lemma 4.16. There exists C_4, depending only on N, s, and Ω, such that for all η ∈ (0, η_0), the solution of (4.5) satisfies ∫_{Ω^η} ϕ_1^η(x)^{−1/(p−1)} dx ≤ C_4. Proof. Write K_0 = Ω^{η_0} for short and define, for A > 0, a comparison function v built from A χ_{K_0}, where χ_{K_0} denotes the characteristic function of K_0. Note that v ∈ USC(R^N) and v ∈ C(R^N \ K_0). From Lemma 4.14, outside K_0 we can compare with the Hopf barrier, where C_3 is the constant from Lemma 4.14 and C(K_0) > 0. Hence, for sufficiently large A, (−∆)^s v(x) ≤ 0. Using the change of variables Ψ_η ∈ C^2(R^N, R^N) introduced in Lemma 4.14, together with a covering argument and the estimates provided therein, the contribution near the boundary is controlled by a multiple of ∫ d(x)^{−s/(p−1)} dx. By assumption (1.5), −s/(p−1) > −1, hence this last integral is finite. On the other hand, using that Ω^{η_0} ⊂⊂ Ω, together with Lemma 4.12 and Remark 4.13, we have ϕ_1^η(x) > c_4 > 0 for all x ∈ Ω^{η_0}, where c_4 depends only on Ω, N and s (through η_0). Hence, the contribution of Ω^{η_0} is bounded as well. Nonexistence of global solutions and LOBC As in our previous work [25], the proof of Theorem 1.2 uses key ideas from that of Theorem 2.1 in [31]. For completeness, we reproduce some of the elements of [25]. We remark that some care is required in choosing certain parameters appearing in our argument in the correct order, a difficulty which is not present in [31]. Specifically, we first choose u_0 large in an appropriate sense, then take η (which depends on u_0 and the regularization parameters of Sec. 4) sufficiently small. Proof of Theorem 1.2. Consider the differential inequality (5.1) ẏ(t) ≥ C y(t)^p, y(t_0) = M_0, where C, M_0 > 0. We can integrate (5.1) explicitly to obtain y(t)^{1−p} ≤ M_0^{1−p} − C(p − 1)(t − t_0). Since 1 − p < 0, this implies y(t) → +∞ in finite time. Alternatively, for a fixed t_1 > t_0, blow-up occurs for t < t_1 provided we have M_0^{1−p} ≤ C(p − 1)(t_1 − t_0). So fix T > 0 and assume that the viscosity solution u of (1.1) satisfies (1.2) in the classical sense. We will later specify the largeness condition on u_0 in terms of M_0 above, but may consider it set from now on, since it depends only on constants already available. The constant C in (5.1) is also specified later, depending only on the appropriate quantities. In particular, it is independent of both η and u_0. Recall the approximate equation obtained in Proposition 4.6, where now w is obtained by regularization of the viscosity solution u of (1.1). Define z(t) = ∫_{Ω^η} w(x, t) ϕ_1^η(x) dx, where ϕ_1^η is the unique positive solution of (4.5) normalized so that ‖ϕ_1^η‖_∞ = 1. From Proposition 4.2 (iv) and Remark 2.4, we obtain, for sufficiently small η, that ‖w‖_∞ ≤ ‖u‖_∞ + 1 ≤ ‖u_0‖_∞ + 1. Thus z(t) is uniformly bounded for 0 ≤ t ≤ T. In what remains of the proof, we show that z satisfies (5.1), by using the assumption that the solution u satisfies (1.2) in the classical sense, and hence blows up, a contradiction. Since w(·, t) ∈ C^{1,1}(R^N) for all t ∈ [t_0, t_1], (−∆)^s w(·, t) is classically defined a.e. in Ω, precisely at the points where w has a second order expansion. Moreover, at such points the integral with respect to y converges as ω → 0 and, by the standard computation, the truncated integrals are uniformly controlled. Hence, by the dominated convergence theorem, we may pass to the symmetrized double integral; the integrand is then no longer singular, and we write dV for the measure on R^{2N}. Applying first Fubini's theorem, and then symmetry (i.e., interchanging x and y), and starting from the last integral, we repeat the above computation "in reverse" to pass the operator onto ϕ_1^η and use the associated eigenvalue problem (4.5). Since now w ≡ 0 in R^N \ Ω, this yields the desired identity. Using that w, ϕ_1^η ≥ 0, and again that ‖ϕ_1^η‖_∞ = 1, we arrive at (5.7). Recall that p > 1, hence z(t)^p is the dominating term in the right-hand side of (5.7).
Therefore, taking u_0 sufficiently large and η sufficiently small ensures both that ż(t) ≥ (C_5/2) z(t)^p for t > t_0 and that z(t_0) ≥ M_0, which together are equivalent to (5.1). This gives the desired contradiction. Remark 5.1. Given the indirect nature of the preceding proof, we would like to highlight the role played by the main assumptions leading to LOBC: the fact that the gradient term is "dominating" in the equation, i.e., p > s + 1 from (1.5), is used only in Lemma 4.16. On the other hand, the assumption that leads to the contradiction, namely that (1.2) is satisfied in the classical sense, is used only in (5.6) and in applying Poincaré's inequality. The case of more general boundary conditions can be treated in exactly the same way as above. Assuming u = g in (R^N \ Ω) × (0, T) is satisfied in the classical sense, with g ∈ C_b((R^N \ Ω) × (0, T)), we obtain ż(t) ≥ −λ_1^η z(t) + C z(t)^p − C_6 instead of (5.7), where C_6 depends on ‖g‖_{L^∞(∂Ω×(0,T))}. From here on, the proof continues as above. Remark 5.2. Assuming higher regularity for the initial data, e.g., u_0 ∈ C^2(Ω), it is possible to obtain estimates for u_t (see, e.g., [33], Proposition 4.1, for an example of this method in the local setting). This allows the application of regularity results available for stationary problems (e.g., those of [7]) to our problem, essentially by treating u_t as a bounded "right-hand side". Global Hölder estimates can then be obtained for the solution of (1.1)-(1.2)-(1.3). This situation is analogous to that of gradient blow-up for (1.8). Appendix A. Uniform C^s regularity for the approximate domains In this Appendix we state a version of results from [27] which concern the regularity of solutions to the Dirichlet problem for the fractional Laplacian, (A.1) (−∆)^s u = g in Ω^η, u = 0 in R^N \ Ω^η, with g ∈ L^∞(Ω^η). We revisit the corresponding proofs to show that the estimates are uniform with respect to varying domains such as those appearing in (4.5). In this way we conclude the analysis postponed in the proof of Theorem 4.11. Proposition A.1. Let u be a solution of (A.1). Then u ∈ C^s(R^N) and ‖u‖_{C^s(R^N)} ≤ C ‖g‖_{L^∞(Ω^η)}, where C is a constant depending only on Ω and s. In particular, the constant C can be taken uniformly for η ∈ (0, η_0). This is Proposition 1.1 from [27], save for the dependency of C on the parameter η, which we require to be uniform. To this end, we outline the manner in which this result was obtained. We begin by stating the key lemmas leading up to it. Lemma A.2. Assume that w ∈ C^∞(R^N) is a solution of (−∆)^s w = h in B_2. Then, for every β ∈ (0, 2s), ‖w‖_{C^β(B_{1/2})} ≤ C (‖w‖_{L^∞(R^N)} + ‖h‖_{L^∞(B_2)}), where the constant C depends only on N, s and β. Remark A.4. Lemmas A.2 and A.3 have no dependence on the domains Ω or Ω^η. Therefore, they apply directly to our setting. Lemma A.5. Let Ω^η and g be as above, and let u be the solution of (A.1). Then |u(x)| ≤ C ‖g‖_{L^∞(Ω^η)} d_η(x)^s for all x ∈ Ω^η, where C depends only on Ω and s. In particular, C can be taken uniformly in η ∈ (0, η_0). Lemma A.5 relies on the following result: Lemma A.6. Let Ω^η be as above and let g ∈ L^∞(Ω^η). Let u be the solution of (A.1). Then ‖u‖_{L^∞(R^N)} ≤ C (diam Ω^η)^{2s} ‖g‖_{L^∞(Ω^η)}, where C is a constant depending only on N and s. Proof of Lemma A.5: This is Lemma 2.7 in [27]. For points near ∂Ω^η, the estimate is obtained by scaling the supersolution from Lemma A.3 to the annular region B_{2ρ_0} \ B_{ρ_0}, where B_{ρ_0} is an exterior tangent ball to ∂Ω^η, and applying comparison. Owing to Remark 4.10, the scaling can be done uniformly with respect to η ∈ (0, η_0). For the remaining points in Ω^η, Lemma A.6 is employed. Proof. This is a special case of Lemma 2.9 in [27].
Although more intricate than those of the previous results, the proof of this result uses only a scaling of the interior estimate of Lemma A.2 to the ball B_R(x_0), the upper barrier for ‖u‖_{L^∞(Ω)} obtained in Lemma A.5, and a covering argument. As such, the constant C in Lemma A.7 now depends on the measure of Ω as well. This quantity, however, varies continuously for Ω^η with η ∈ (0, η_0). Proof of Proposition A.1: It remains only to extend the estimate from Lemma A.7 up to the boundary. For this we provide an argument from [20]. Through a covering argument, Lemma A.7 extends to an interior bound on any compact subset of Ω^η.
7,618.8
2018-04-13T00:00:00.000
[ "Mathematics" ]
Phase transitions in pseudospin-electron model with direct interaction between pseudospins The analysis of thermodynamic properties of the pseudospin-electron model in the case of zero electron transfer, with the inclusion of a direct pseudospin-pseudospin interaction of ferroelectric type, is performed. The equilibrium conditions in the regimes μ = const and n = const are investigated in the mean field approximation. It is shown that the interaction with electrons leads, at a fixed value of μ, to the possibility of a first order phase transition at the change of temperature, with a jump-like behaviour of ⟨S^z⟩. In the regime n = const there takes place an instability with respect to phase separation in a wide range of values of the asymmetry parameter h. Introduction The pseudospin-electron model (the so-called Müller model [1]) is one of the theoretical models proposed in connection with the investigation of characteristic features of the electron spectrum and lattice dynamics in high temperature superconductors. In this model strong Hubbard-type electron correlations are taken into account, and the pseudospin formalism is used to describe the locally anharmonic lattice vibrations. The Hamiltonian of the model is of the following form [2,3]: H = Σ_i [U n_{i↑} n_{i↓} − μ(n_{i↑} + n_{i↓}) + g(n_{i↑} + n_{i↓}) S^z_i − Ω S^x_i − h S^z_i] + Σ_{ijσ} t_{ij} c†_{iσ} c_{jσ} − (1/2) Σ_{ij} J_{ij} S^z_i S^z_j, (1) where, in the single-site part, in addition to the Hubbard correlation U there are terms that describe the tunnelling splitting (Ω) and the asymmetry of the local anharmonic potential (longitudinal field h). Hamiltonian (1) also contains terms which describe the electron transfer t_{ij} and the direct interaction between pseudospins J_{ij}. The energy is measured from the level of the chemical potential μ. Most attention while investigating this model has been paid to examining the electron states and an effective electron-electron interaction, to studying the dielectric instabilities or ferroelectric-type anomalies, as well as the tendency towards spatially modulated charge and pseudospin orderings [4,5]. This work is devoted to the study of the thermodynamics of the pseudospin-electron model in the case of zero electron transfer (t_{ij} = 0) and zero frequency of tunnelling splitting (Ω = 0). The direct interaction between pseudospins is supposed to be long-ranged (J_{ij} ∼ J/N), which makes it possible to use the mean field approximation (MFA). In this approximation, the Hamiltonian of the model takes the form H_MFA = Σ_i [U n_{i↑} n_{i↓} − μ(n_{i↑} + n_{i↓}) + (g(n_{i↑} + n_{i↓}) − h − Jη) S^z_i] + N J η²/2. (2) The interaction J_{ij} is taken to be of ferroelectric type (J(q) has its maximum value at q = 0; J(0) ≡ J > 0); the order parameter η = ⟨S^z_i⟩ does not depend on the unit cell index. We investigate the possible states and phases of the system, as well as the transitions between them, at the change of temperature and of the values of the model parameters. Mean field approximation; Ω = 0 The thermodynamic potential of the model, calculated per lattice site, is obtained in the MFA in the standard way. To investigate the equilibrium conditions we distinguish two regimes: μ = const and n = const. The equilibrium in these regimes is defined by the minimum of the thermodynamic potential ((∂Ω/∂η)_{T,μ,h} = 0) or the minimum of the free energy F = Ω + μN ((∂F/∂η)_{T,n,h} = 0), respectively. The equation for the order parameter, equation (5), is obtained from these conditions. The average number of electrons, n = (1/N) Σ_{iσ} ⟨n_{iσ}⟩, is determined by expression (6). Equation (5) and expression (6) form a set of equations for the parameter η and for the chemical potential.
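As a numerical illustration of this set of equations in the regime μ = const, the following Python sketch iterates the single-site thermal averages to self-consistency. The explicit single-site energies are an assumption consistent with the mean field Hamiltonian (2) as written above (the constant term Jη²/2 drops out of the averages), and all parameter values are illustrative.

```python
import numpy as np
from itertools import product

def site_averages(eta, mu, U, g, h, J, beta):
    """<S^z> and <n> for the single-site mean-field Hamiltonian, assuming
    E = U*nu*nd - mu*(nu + nd) + (g*(nu + nd) - h - J*eta)*Sz, as in (2)
    reconstructed above (this explicit form is an assumption of the sketch)."""
    Z = Sz_avg = n_avg = 0.0
    for nu, nd, Sz in product((0, 1), (0, 1), (0.5, -0.5)):
        E = U * nu * nd - mu * (nu + nd) + (g * (nu + nd) - h - J * eta) * Sz
        w = np.exp(-beta * E)
        Z += w
        Sz_avg += Sz * w
        n_avg += (nu + nd) * w
    return Sz_avg / Z, n_avg / Z

def solve_eta(mu, U, g, h, J, beta, eta0=0.4, tol=1e-10, max_iter=10000):
    """Damped fixed-point iteration for the self-consistency condition
    eta = <S^z>(eta) at fixed chemical potential (regime mu = const)."""
    eta = eta0
    for _ in range(max_iter):
        new, _ = site_averages(eta, mu, U, g, h, J, beta)
        if abs(new - eta) < tol:
            break
        eta = 0.5 * eta + 0.5 * new  # damping stabilizes the iteration
    return eta

eta = solve_eta(mu=0.5, U=2.0, g=1.0, h=0.2, J=1.0, beta=8.0)
print(eta, site_averages(eta, 0.5, 2.0, 1.0, 0.2, 1.0, 8.0)[1])
```

Among multiple fixed points, the root minimizing the thermodynamic potential must be selected, as stated above.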
After eliminating the chemical potential, the equation for the order parameter transforms to the following form: where and the b/a² ratio is given by the expression Equations (5) and (7) determine the values of the parameter η which correspond to the extrema of the thermodynamic potential Ω and of the free energy F, respectively. From the set of all possible roots, only those which provide the minimum values of Ω or F should be taken into consideration. The equation of state η = η(h) and phase diagrams at T = 0 The equation for the parameter η, obtained in the regime of a fixed number of electrons, can be solved analytically in the limit T = 0. A typical example of the η(h) dependence is shown in figure 1, which corresponds to values of n in the interval 0 ≤ n ≤ 1. In some regions of values of h, where the function η(h) possesses an S-like behaviour and has three or more values, first order phase transitions with jumps of the parameter take place at the change of h. Metastable states occur between the phase transition points and outside them. At a change of the parameter values the regions where metastable phases occur can overlap; some phase transitions then disappear, and therefore some intermediate phases are not realizable. In the case 1 ≤ n ≤ 2 the dependence η(h) is generally similar. Phase 3 and phase 2′ at η = 3/2 − n, which now appears instead of phase 2, may play the role of intermediate phases. (Figure 1: the dependence of the parameter η on the field h at T = 0, in the case g > J/2.) The analysis shows that the states described by the lines 1 (z_1 ≤ h ≤ z_2), 2 (z_3 ≤ h ≤ z_4), 3 (z_5 ≤ h ≤ z_6) are unstable, while the states 1…4 correspond to the minimum values of the free energy. The intersections of the F_1, F_2, F_3, F_4 lines determine the points of the first order phase transitions between the relevant phases. Some of these phase transitions are not realizable if the relevant crossing points lie above some other F_k line. The values of the field h = h_ik, at which the phase transitions i ↔ k occur, are the following: at 0 ≤ n ≤ 1, and Some examples of (n, h) phase diagrams, showing the areas of occurrence of the phases 1…4 at T = 0, are presented in figure 2. Their appearance depends on the values of the constants U and g. In addition, they show the possibility of transformations of the phases at a change of the electron concentration n. To study this problem more closely, we investigated the dependence μ(n) described by equation (6). It is known that the dependence of the particle concentration on the chemical potential is one of the factors that determine the thermodynamic equilibrium of the system. The state with a homogeneous distribution of particles is unstable at (∂n/∂μ)_T < 0; hence, a phase separation into regions with different concentrations takes place. Let us take, for instance, the case U > g, 0 < h < g; the corresponding phase diagram in the (n, h) plane is shown in figure 2a. One can see that in phases 1 and 4 the chemical potential increases (or remains constant) with the increase of n, while within phase 2 there is a possibility of a descending dependence of μ on n. (Figure 3: the dependence of the chemical potential on the electron concentration n at T = 0; U > g; g > J/2; 0 < h < J/2.) A descending dependence of μ on n means the instability of the homogeneous state of the system. In the case illustrated in figure 3, phase 2 splits at T = 0 into phase 4 (n = 0, η = 1/2) and phase 1 (n = 1, η = −1/2) with weight coefficients 1 − n and n, respectively.
Phase separated states are bordered by binodal lines. For example, at T = 0 and U > g they are: Such boundaries surround the regions of occurrence of the intermediate phases 2, 2′, and 3. The mentioned phases might be stable only if there were some factors maintaining the spatial homogeneity of the electron concentration. Phase transition at μ = const The dependence of the order parameter on the field h and the temperature θ at a constant value of the chemical potential is determined according to equation (5). Here we consider the case of non-zero temperature. Among all possible solutions η = f(θ, h, μ) of equation (5), let us separate the zero ones. In the (θ, h) plane they define curves which are described by an equation From this equation one can get where After substituting this expression for the field h into equation (10) we get the equation η = (1/2) tanh(βJη/2), which defines the order parameter along the line h = θ ln ξ_1 ξ_2 and has the standard form of a molecular field equation. In addition to the zero solution, non-zero ones occur at θ < θ_c = J/4. One can conclude that at temperatures θ < θ_c the relation (14) has the meaning of an equation describing the line of phase equilibrium (the first order phase transition curve), and the temperature θ_c corresponds to the critical point. The critical value h_c is given by the expression A feature of this phase transition is that the curve of phase equilibrium is, in general, not parallel to the temperature axis. The presence of a bend in the coexistence curve points to the possibility of a first order phase transition at the change of temperature, with a jump of the order parameter η, if the value of the field h lies in the interval between the values 2H* and h_c. At temperatures below the critical value, crossing the phase coexistence curve leads to a jump of the electron concentration between the values that correspond to the phases involved in the phase separation (this corresponds to the break point in the dependence Ω(μ)). At the same time these values are points of the binodal lines, which are determined according to the Maxwell rule from the plot of the function μ(n). An illustrative example is presented in figure 4; the plots are obtained by numerical calculations based on formulas (5) and (6). Conclusions The investigation performed shows that the pseudospin-electron model with a long-range interaction possesses some features that make it different from the ordinary Ising model. They are as follows: - the possibility of a first order phase transition occurring at the change of temperature and at a fixed value of the field (regime μ = const); - an instability with respect to phase separation in a wide range of the parameter h (regime n = const), with the emergence of regions with different electron concentrations and different orientations of the pseudospins. The results obtained give a more substantiated interpretation of the behaviour of the pair correlation function and the dielectric susceptibility of the model in the case t = 0, J = 0 investigated in the GRPA [5,7]. It is possible to conclude that the divergence of the susceptibility χ_ss|_{μ=const} and other related quantities at certain values of the temperature and at h ∼ g (at U = ∞) corresponds to the point of the high-temperature phase instability. The temperature of this instability lies below the temperature of the first order phase transition that leads to the jump of 〈S^z〉.
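To make the molecular field equation quoted above concrete, the following minimal Python sketch (not part of the original paper; the value of J, the temperature grid and the fixed-point iteration are illustrative choices) solves η = (1/2) tanh(βJη/2) numerically and shows the nonzero solution disappearing above θ_c = J/4, as stated in the text.

import numpy as np

def order_parameter(theta, J=1.0, tol=1e-12, max_iter=100000):
    """Solve eta = 0.5*tanh(J*eta/(2*theta)) by fixed-point iteration."""
    eta = 0.49  # start near saturation to pick the nonzero branch when it exists
    for _ in range(max_iter):
        eta_new = 0.5 * np.tanh(J * eta / (2.0 * theta))
        if abs(eta_new - eta) < tol:
            break
        eta = eta_new
    return eta

J = 1.0
for theta in [0.10, 0.20, 0.24, 0.25, 0.30]:
    print("theta/J = %.2f   eta = %.6f" % (theta, order_parameter(theta, J)))
# The nonzero solution vanishes above theta_c = J/4 = 0.25.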
When the model is used to describe the thermodynamics of the oxygen subsystem in YBa2Cu3O7 crystals, the results of this work may serve as a basis for describing the bistability phenomena in the apex oxygen sublattices, as well as for studying the experimentally observed spatial nonuniformities and phase-separated states in monocrystalline specimens [8,9].
2,577.4
1999-01-01T00:00:00.000
[ "Physics" ]
Efficient connectivity in Wireless Sensor Network using adaptive strategy in evolutionary computation — At present, wireless sensor networks (WSNs) have received intense attention from researchers because of their utility in various applications. In a WSN, sensors are battery-operated devices, and in most practical cases it is not possible to replace a battery once it reaches the end of its life. There are various reasons for energy consumption, and one very significant factor among them is inefficient connectivity between sensors for sharing relevant information across the network. The connectivity situation can become even worse when a dynamic environment exists in the network. In this paper, an evolutionary approach based on various forms of the genetic algorithm is proposed to handle this issue. Different strategies, such as redefinition of agents, inclusion of flying agents, and carrying over of experience, are included to enhance the quality of the solution. connectivity of networks and elevating network lifespan. The NP-complete problem is articulated using a combinatorial optimization method; the authors suggested a potential field deployment algorithm (PFDA) and a multi-objective deployment algorithm (MODA). [6] This paper analyzes the metric relationship model of a 3-D Clifford sensor network. The Clifford sensor network contains a connection graph that is independent of coordinates and is reliable for different targets in spaces of diverse dimensions. [7] The authors proposed a deliberate queuing model to obtain an optimal solution for reducing the energy depletion of the sensor node. [8] investigated the connectivity of randomly deployed nodes in a WSN with a Gaussian distribution; this method provides better results when simulating a nonlinear distribution of sensors. Important problems in WSNs are the coverage and connectivity problems, which have a large effect on the performance of WSNs; an improved method with an efficient node deployment approach is used. In [9], the authors categorized the problem of coverage in WSNs from different angles and defined the estimation metrics using suitable algorithms. In [10], the node deployment problem is articulated as a multi-objective optimization (MO) problem, where the objective is to discover a deployment of sensor nodes that maximizes the coverage of targets, minimizes the network energy depletion, improves the network lifespan, and provides connectivity between the source and destination nodes for accurate transmission of information with a minimum number of sensor nodes. [11] presented an essential study on the connectivity between nodes in wireless sensor networks and on the efficient coverage of targets, obtained from mathematical modeling, theoretical study, and performance assessment perspectives. [12] In this paper the authors presented a node deployment pattern with a polygon shape to obtain optimal positions of sensors in wireless sensor networks for efficient coverage and connectivity; the main objective is to provide efficient connectivity and to maximize the coverage with a minimum number of sensors. Obtaining a better solution with minimum computational resources for the NP-hard node deployment problem is a perplexing research problem in WSNs. [13] presented an outline of WSNs and the node deployment problem in wireless sensor networks, with deliberations on metaheuristics, and demonstrated how to use metaheuristic methods to resolve the node deployment problem in WSNs.
An efficient method is used to curtail energy consumption by means of a Sleep Scheduling (SS) mechanism and to enhance the lifespan of wireless sensor networks. In [14], the authors presented a software-based algorithm to manage the energy of the wireless sensor network with sleep scheduling of nodes. In [15], a review presented the issues built on the archetypes of WSNs, structured and non-structured, for data gathering and aggregation, and also discussed the importance of clustering and routing in wireless sensor networks for better energy preservation and network lifespan. In [16], the authors presented a distributed resource-constrained recovery method used to restructure a network that has been subdivided into disconnected segments, by deliberate relocation of nodes; in cases where the relocated nodes are inadequate to form a stable inter-segment topology, mobile data gatherers with elevated routes are used to reduce data delay. III. PROPOSED SOLUTION To obtain the minimum distance, we have implemented adaptive genetic algorithms: the redefined agents genetic algorithm [RAGA], the flying agents genetic algorithm [FAGA], and the experienced agents genetic algorithm [EAGA]. These algorithms provide better connectivity in dynamic environments of wireless sensor networks. Dynamic environments arise when the sensors are moved from one place to another, for example with the help of animals, robots or human beings (soldiers). The adaptive genetic algorithms provide better solutions when the sensors are in a dynamic environment. In WSNs there are a number of applications where a dynamic topology exists, for example (a) tracking animals and (b) soldier strategy on the battlefield. To handle dynamic wireless sensor networks, a high level of adaptability is required. In natural systems, evolution can be considered the best example of adaptability; hence the genetic algorithm platform is adopted, but the remaining challenges are: to find the optimal connectivity from one sensor to another for communication; to detect a change in topology; and, when the topology changes, to re-establish the optimal connectivity as soon as possible. To handle all of these, three different approaches are developed, as shown in Fig. 1. RAGA (remaining steps): evaluate the fitness F_fun(MPOP); 5. create the next-generation population by tournament selection on the fitness values (f_vi) and detect a topology change from f(f_vi − f_v(i−1)); 6. if a change is detected, Current Generation = new random agent population, else Current Generation = Next Generation; 7. go to step 1. Flying agents with time [FAGA]: New members are always incorporated into the population; as a result better diversity is available, and if a dynamic condition appears this diversity helps to find solutions. With this approach it is possible to handle dynamic conditions faster. Starting from the initial definition of the population, the genetic operators are applied to create an offspring population, and a new population is created through tournament selection. The fitness values are estimated, the weak members are replaced by newly created flying agents, this population replaces the old one, and the process goes on. The pseudocode for FAGA is given below.
Pseudocode for FAGA Experienced agents with time [EAGA]: If it is possible to place experienced agents in the next generation, there is always a very good chance of handling a dynamic topology in an efficient manner, because these agents have the knowledge to handle the change in the topology over time; as a result optimal solutions can be achieved in very little time and the chance of failure is minimal. In EAGA, after the initial population is defined and the genetic operators are applied, a new population is created through the selection process; whichever member of the new population has the highest fitness value is stored in memory. This process goes on in every generation. The previously stored experienced solution replaces the weakest solution in the current population to create a new population, and the process continues. The pseudocode is given below. Fundamentally, two different approaches are applied to handle the dynamics of the topology: (i) detect the change in topology and take action; (ii) be adaptive at all times, so that any change in the topology is handled by the same process. The RAGA concept is applied for the detect-and-act approach, while FAGA and EAGA are applied to be adaptive at all times. Pseudocode for EAGA It is observed experimentally that RAGA may take a long time to generate optimal solutions, and in the meantime the topology may change further, which makes the situation worse and renders the obtained results useless. If a high level of change occurs in the topology, it may be difficult for FAGA to find the optimal solutions within the time span. EAGA has shown superior results in all cases in terms of a high level of adaptivity and a faster exploration of the global solution; this is possible because experience from past events helps to handle a new event when it occurs. V. CONCLUSION The objective of increasing the connectivity of a WSN can be achieved with the concept of adaptive genetic algorithms. In the present paper a concept for increasing the connectivity of a WSN using adaptive genetic algorithms has been presented, which helps to find solutions that obtain high connectivity between source and destination. Experiments with various sensors have been performed on the redefined adaptive genetic algorithm (RAGA), the flying agents adaptive genetic algorithm (FAGA), and the experienced adaptive genetic algorithm (EAGA), and they show very clearly that the proposed solution provides high connectivity between the source and destination by choosing the minimum distance. In future work, hybrid adaptive genetic algorithms will be developed to further enhance connectivity in dynamic environments of wireless sensor networks.
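To illustrate the experience-carrying idea behind EAGA, here is a minimal, hypothetical Python sketch (not the authors' code): a genetic algorithm evolves relay routes between a source and a destination, stores the best route found so far as an "experienced agent", and re-injects it into the population after a simulated topology change. The node count, the operators and the fitness definition (total route distance) are all illustrative assumptions.

import random
import math

def route_length(route, pos):
    return sum(math.dist(pos[a], pos[b]) for a, b in zip(route, route[1:]))

def make_route(nodes):
    middle = nodes[1:-1]
    random.shuffle(middle)
    return [nodes[0]] + middle + [nodes[-1]]

def crossover(p1, p2):
    # order crossover on the interior nodes; source and destination stay fixed
    inner1, inner2 = p1[1:-1], p2[1:-1]
    cut = random.randint(1, len(inner1) - 1)
    head = inner1[:cut]
    tail = [n for n in inner2 if n not in head]
    return [p1[0]] + head + tail + [p1[-1]]

def mutate(route, rate=0.3):
    r = route[:]
    if random.random() < rate:
        i, j = random.sample(range(1, len(r) - 1), 2)
        r[i], r[j] = r[j], r[i]
    return r

def eaga(pos, nodes, generations=200, pop_size=30):
    pop = [make_route(nodes) for _ in range(pop_size)]
    memory = min(pop, key=lambda r: route_length(r, pos))   # experience store
    for g in range(generations):
        if g == generations // 2:          # simulated topology change:
            for n in nodes[1:-1]:          # interior nodes drift to new spots
                x, y = pos[n]
                pos[n] = (x + random.uniform(-15, 15), y + random.uniform(-15, 15))
        pop.sort(key=lambda r: route_length(r, pos))
        memory = min([memory, pop[0]], key=lambda r: route_length(r, pos))
        pop[-1] = memory                   # experienced agent replaces the weakest
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = min(pop, key=lambda r: route_length(r, pos))
    return best, route_length(best, pos)

random.seed(0)
nodes = list(range(10))
positions = {n: (random.uniform(0, 100), random.uniform(0, 100)) for n in nodes}
best, dist = eaga(positions, nodes)
print("best route:", best, "total distance: %.1f" % dist)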
1,918
2017-02-28T00:00:00.000
[ "Computer Science", "Engineering" ]
A Global Optimization Method to Determine the Complex Material Constants of Piezoelectric Bars in the Length Thickness Extensional Mode: Optimization methods have been used to determine the elastic, piezoelectric, and dielectric constants of piezoelectric materials from admittance or impedance measurements. The optimal material constants minimize the difference between the modeled and measured admittance or impedance spectra. In this paper, a global optimization method is proposed to calculate the optimal material constants of piezoelectric bars in the length thickness extensional mode. The algorithm is applied to a soft PZT and a hard PZT and is shown to be robust. Introduction Piezoelectric constants d_31 and d_33 are indispensable parameters when we evaluate the performances of new piezoelectric materials. A common method to determine d_31 from capacitance and admittance measurements has been described in an IEEE standard [1]. First, the dielectric constant ε_33^T is calculated from the capacitance measured at 1 kHz. Second, the elastic compliance constant s_11^E is calculated from the frequency of maximum conductance f_s in the length thickness extensional mode. Third, the electromechanical coupling factor k_31 is calculated from f_s and the frequency of maximum resistance f_p. The piezoelectric constant d_31 is finally calculated from ε_33^T, s_11^E, and k_31 [1]. To circumvent the need for the capacitance measurement, researchers have developed iterative and non-iterative methods to determine the material constants (s_11^E, ε_33^T, and d_31) from only the admittance measurements in the length thickness extensional mode. Smits proposed an iterative method to calculate the complex material constants from the admittances at three frequencies [2]. According to the one-dimensional model in the IEEE standard [1], the three complex admittances suffice to determine the three complex material constants [3], whose imaginary parts represent the losses in the material [4]. Smits's method involved the manual selection of three data points on an admittance curve, which was later automated by Alemany et al. [3]. Non-iterative methods have also been proposed to calculate the material constants from particular near-resonance frequencies and the corresponding admittances [5,6]. In contrast to the iterative and non-iterative methods that use a limited number of admittance data, fitting methods are based on the whole admittance spectra and are therefore more robust. In a fitting method, the material constants are determined by fitting the modeled admittance or impedance spectra to the measurements. The optimal material constants are those that minimize the difference between the modeled and measured admittance or impedance spectra. When the modeled admittance or impedance spectra come from the one-dimensional model of the length thickness extensional mode, the optimal material constants can be calculated with local optimization algorithms, such as the simplex algorithm [7], the Levenberg-Marquardt algorithm [8], and the Nelder-Mead algorithm [9]. Unlike the global optimization algorithms that were recently proposed for the thickness extensional mode [10] and the radial mode [11], these local optimization algorithms may not converge to the globally optimal material constants, especially when the initial guess of the material constants is inaccurate. To propose a global optimization algorithm for the length thickness extensional mode is the first motivation of this paper.
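For orientation, the IEEE-standard chain described above ends with the well-known relation k_31^2 = d_31^2/(s_11^E ε_33^T), so d_31 follows from the other three quantities. A small numeric sketch (the input values are assumed, not measured):

import math

eps0 = 8.854e-12
epsT33 = 1800 * eps0   # from the capacitance at 1 kHz (assumed value), F/m
s11E = 16e-12          # from f_s (assumed value), m^2/N
k31 = 0.35             # from f_s and f_p (assumed value)
d31 = k31 * math.sqrt(s11E * epsT33)   # k31^2 = d31^2/(s11E * epsT33)
print("d31 = %.0f pC/N" % (d31 * 1e12))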
The modeled admittance or impedance spectra in the fitting methods can also come from finite element simulations [12][13][14][15][16]. Finite element simulations are able to predict the admittance at any frequency, even when different vibration modes are coupled. On the contrary, each one-dimensional model only applies to a specific vibration mode. Furthermore, one-dimensional models are only valid for samples with particular shapes and dimensions. The constraint is unnecessary in the finite element simulations. The fitting methods based on finite element simulations, however, are too complicated to be used in everyday research for most researchers. Although there are several fitting methods based on the one-dimensional model of the length thickness extensional mode [7][8][9], no open-source software is available. Consequently, researchers have to write their own code to realize the algorithms if they cannot afford a commercial software license [17]. The second motivation of this paper is to publish an open-source code to determine the material constants in the one-dimensional model of the length thickness extensional mode. The global optimization method proposed in this paper has the following advantages over other methods. It does not need a capacitance measurement to determine the dielectric constant ε_33^T, in contrast to the common method in the IEEE standard [1]. Unlike the iterative and non-iterative methods [2,3,5,6], our method takes full advantage of the measured admittance spectra and is less prone to experimental errors. Compared with the local optimization methods [7][8][9], the present global optimization method is able to find the globally optimal material constants even when their initial guesses are chosen randomly. Based on the one-dimensional model, the present method is more efficient and convenient than the fitting methods based on finite element simulations [12][13][14][15][16]. This paper is arranged as follows. In Section 2, we introduce the global optimization method to fit the modeled admittance and impedance spectra to the measured ones. The algorithm is applied to calculate the optimal material constants of PZT materials and the results are presented in Section 3 and discussed in Section 4. Conclusions are given in Section 5. Materials and Methods In the one-dimensional model of the length thickness extensional mode, the electrical admittance of a rectangular bar is [1]: where i = √(−1) and f is the frequency of the driving voltage. The density (ρ), length (l), width (w), and thickness (t) of the bar are experimentally measured. The one-dimensional model applies when l ≫ w and w > 3t [1]. The admittance also depends on an elastic compliance constant (s_11^E), a dielectric constant (ε_33^T), and a piezoelectric constant (d_31) of the material. The three material constants are assumed to be complex numbers. The imaginary parts of s_11^E, ε_33^T, and d_31 are related to the elastic, dielectric, and piezoelectric losses, respectively [4]. The six-dimensional real vector y is defined as: where "Re" and "Im" represent the real and imaginary parts, respectively. The superscript "T" represents the transpose of the vector. The experimental admittances are denoted as Y_exp(f_n), where f_n (1 ≤ n ≤ N) are N frequencies around the resonance frequency of the first length thickness extensional mode.
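As a rough illustration of the fitting problem, the sketch below (not the paper's published code) implements a common IEEE-standard form of the one-dimensional 31-mode admittance and fits the six real components of y with a local least-squares step via SciPy. The bar dimensions, the synthetic "measurement" and the soft-PZT-like constants are illustrative assumptions, and the paper's own Equation (1) and its Levenberg-Marquardt/Newton scheme may differ in detail.

import numpy as np
from scipy.optimize import least_squares

rho, l, w, t = 7.85e3, 12e-3, 3e-3, 1e-3   # density and bar dimensions

def admittance(f, s11E, eps33T, d31):
    """One common form of the 1-D 31-mode bar model (complex constants)."""
    omega = 2 * np.pi * f
    x = omega * np.sqrt(rho * s11E) * l / 2
    return 1j * omega * (w * l / t) * (
        eps33T - d31**2 / s11E + (d31**2 / s11E) * np.tan(x) / x)

def pack(y):
    """Vector of 6 reals -> three complex constants."""
    return y[0] + 1j * y[1], y[2] + 1j * y[3], y[4] + 1j * y[5]

def residuals(y, f, Y_exp):
    Y_mod = admittance(f, *pack(y))
    # relative errors of admittance and impedance, stacked as real numbers
    r = np.concatenate([(Y_mod - Y_exp) / np.abs(Y_exp),
                        (1 / Y_mod - 1 / Y_exp) * np.abs(Y_exp)])
    return np.concatenate([r.real, r.imag])

# synthetic "experiment" from plausible soft-PZT-like constants
true = np.array([16e-12, -0.3e-12, 1.6e-8, -8e-10, -170e-12, 8e-12])
f = np.linspace(110e3, 140e3, 301)
Y_exp = admittance(f, *pack(true))

guess = true * (1 + 0.1 * np.random.default_rng(1).standard_normal(6))
fit = least_squares(residuals, guess, args=(f, Y_exp), method="lm")
print("recovered s11E, eps33T, d31:", pack(fit.x))

The paper's global step would repeat such a local fit from many random starting points and keep the smallest residual; the snippet shows only one inner run.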
We denote the average relative error of the admittance and impedance in the one-dimensional model as where Here, G = Re{Y}, B = Im{Y}, Z = 1/Y, R = Re{Z}, and X = Im{Z} are the conductance, susceptance, impedance, resistance, and reactance, respectively. The subscripts "mod" and "exp" represent the modeled and measured quantities, respectively. The optimal material constants correspond to the vector y that minimizes the average relative error E(y). The minimum average relative error is calculated with the Levenberg-Marquardt modification of Newton's method as follows [18]. 1. Measure the admittance, density, and dimensions of the sample. Calculate the experimental conductance, susceptance, resistance, and reactance from the measured admittance. 2. Randomly select a set of material constants (s_11^E, ε_33^T, d_31) as an initial guess, where: 3. Define x (Equation (14)) such that the absolute value of each component of x is close to 1. 4. Calculate the average relative error E from Equations (3)-(7). Note that E is also a function of x. 5. Calculate the gradient and the Hessian matrix of E and denote them as g(x) and F(x), whose expressions are derived in Appendix A. 6. Calculate the average relative errors E(x + αΔx) for step sizes α = 1/25, 1/5, 1, and 5. Find the minimum average relative error E(x + α_opt Δx), where α_opt is the optimal step size. Take x + α_opt Δx to be the new x. 7. In the inner iteration, repeat steps 4-6 until the absolute values of all components of α_opt Δx are less than 10^−8. Then E(x) is the locally minimum average relative error. Calculate the locally optimal material constants (s_11^E, ε_33^T, d_31) from Equation (14). 8. In the outer iteration, repeat steps 2-7 for 100 times with different sets of initial material constants. The globally minimum average relative error is defined as the minimum among all locally minimum average relative errors. The corresponding material constants are the globally optimal material constants. The flow chart of the algorithm is shown in Figure 1. In the end, we check whether the optimal material constants are reasonable. For a material to be passive, i.e., energy is dissipated instead of generated in the material, the following conditions must be satisfied [4]: Results The algorithm proposed in Section 2 is applied to a soft PZT and a hard PZT to calculate their optimal material constants in the length thickness extensional mode. The densities of the soft PZT and the hard PZT are 7.85 × 10^3 kg/m^3 and 7.81 × 10^3 kg/m^3, respectively [7]. Both samples have the same dimensions. The length, width, and thickness are 12 mm, 3 mm, and 1 mm, respectively [7]. The experimental admittance and impedance of the soft PZT are obtained from Figure 1 in [7] and plotted with symbols in Figure 2. Note that the data are not evenly spaced in the frequency range between 110 kHz and 140 kHz, which corresponds to the first length thickness extensional mode. For example, the reactance data are denser when f ≈ 134.5 kHz (Figure 2h). Consequently, we use the linearly interpolated admittance and impedance at 301 frequencies evenly spaced between 110 kHz and 140 kHz as the experimental data in our calculations. The modeled admittance and impedance that best fit the experimental data are then calculated with the present algorithm and plotted with lines in Figure 2. The globally minimum average relative error is 6.5% and is found in 40 runs among 100 runs with different initial material constants.
The optimal material constants are listed in Table 1 and are confirmed to satisfy Equations (15)-(17). The relative differences between our results and the previous results [7] are less than 3% and 7% for the real and imaginary parts of the material constants, respectively. Similarly, the experimental admittance and impedance of the hard PZT in the first length thickness extensional mode at 251 frequencies evenly spaced between 148 kHz and 158 kHz are linearly interpolated from Figure 2 in [7]. The original experimental data and the best fitting curves are plotted in Figure 3 with symbols and lines, respectively. The globally minimum average relative error is 10.4% and is found in 48 runs among 100 runs. The optimal material constants are listed in Table 1 and are confirmed to satisfy Equations (15)-(17). The relative differences between our results and the previous results [7] are less than 7% for the real parts of the material constants. The imaginary parts of the present optimal ε_33^T and d_31, however, do not agree with the previous results [7]. Compared with the previous best fitting results [7], which are plotted with red dashed lines in Figure 3, our best fitting admittance and impedance are closer to the experimental results. Discussion In Section 3, the minimum average relative errors are 6.5% and 10.4% for the soft PZT and the hard PZT, respectively. Note that these average relative errors represent the relative differences between the modeled admittance and impedance spectra and the measured ones. The errors of the optimal material constants are difficult to estimate but should be far less than 10%. Otherwise, the differences between the modeled and measured resonance frequencies would have been evident in Figures 2 and 3. The average relative error of the modeled admittance and impedance spectra cannot be reduced to zero due to the following two reasons. First, the measured admittance and impedance may not be accurate. They are sensitive to the holding position of the sample. Second, the one-dimensional model (Equation (1)) only applies when l ≫ w and w > 3t [1]. When these conditions are not satisfied or when the sample is not a perfect rectangular bar, the one-dimensional model is only an approximation. The present global optimization method depends on the range of the initial guess of material constants. If this range is larger than that in Equations (8)-(13), the algorithm may need more runs to find the globally optimal material constants. The algorithm may even find different locally optimal material constants that have almost the same average relative error. Actually, according to Theorem A1 in Appendix B, different materials may have similar admittances around the same resonance frequency. For example, if we have several materials and the material constants of the n-th material are (2n−1)^2 s_11^E, ε_33^T + 4n(n−1)d_31^2/s_11^E, and (2n−1)^2 d_31, where n ≥ 1, then the n-th resonance frequency of the n-th material is close to the first resonance frequency of the first material. Furthermore, the admittance spectrum of the n-th material near its n-th resonance frequency is similar to that of the first material near the first resonance frequency. This phenomenon is illustrated in Figure 4, where the first material is chosen as the soft PZT. Although the present algorithm may converge to different optimal material constants, we can easily check whether the vibration mode in the given frequency range is the first mode.
According to Figure 4, those conductance spectra with local maxima at lower frequencies outside the given frequency range should be neglected. Only the optimal material constants that correspond to a modeled admittance spectrum with the first mode in the given frequency range are the ones we need. Conclusions In this paper, we proposed a global optimization method to determine the complex material constants of piezoelectric bars in the length thickness extensional mode. The calculated optimal material constants and the best fitting admittance and impedance are consistent with the previous results [7] for a soft PZT and a hard PZT. Unlike the previous algorithms [7][8][9], the present algorithm involved 100 runs from random initial material constants and was able to find the globally minimum average relative error of the modeled admittance and impedance. Even when the initial material constants were randomly chosen from a wide range, at least 40% of runs found the globally minimum average relative error, which shows the robustness of the current algorithm. On the contrary, the local optimization methods [7][8][9] depend on well-chosen initial material constants, otherwise they may find a local minimum instead of the global minimum of the average relative error. The present algorithm can also be modified to determine the optimal material constants in other vibration modes with one-dimensional models. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
3,511.2
2021-07-22T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Formation of InxGa1−xAs nanocrystals in thin Si layers by ion implantation and flash lamp annealing The integration of high-mobility III–V compound semiconductors emerges as a promising route for Si device technologies to overcome the limits of further down-scaling. In this paper, a non-conventional approach of the combination of ion beam implantation with short-time flash lamp annealing is employed to fabricate InxGa1−xAs nanocrystals and to study their crystallization process in thin Si layers. The implantation fluence ratio of Ga and In ions has been varied to tailor the final nanocrystal composition. Raman spectroscopy and x-ray diffraction analyses verify the formation of ternary III–V nanocrystals within the Si layer. Transmission electron microscopy reveals single-crystalline precipitates with a low number of defects. A liquid epitaxy mechanism is used to describe the formation process of III–V nanocrystals after melting of the implanted thin Si layer by flash lamp annealing. The fabricated InxGa1−xAs nanocrystals are mainly Ga-rich with respect to the implanted Ga/In ratio. Introduction The steady progress of microelectronic technology has come to a point where a further decrease of device dimensions approaches physical limits. The integration of III-V compound semiconductors into Si-based systems is a promising path to circumvent these limits. The high charge carrier mobilities in III-V materials in comparison to Si, especially for n-type material, is the outstanding feature which predestines them for the integration in future device technologies [1][2][3]. Another prominent characteristic of most III-Vs is their direct band gap in the visible or near-infrared range allowing them to be used for optical applications [4][5][6]. In ternary III-V compounds, the band gap can even be engineered by varying the ratio within the group-III or within the group-V elements [7]. Further applications in non-volatile memories [8,9] and photovoltaic devices [10] as well as for dilute ferromagnetic semiconductors when combined with e.g. Mn [11] highlight the broad range of technologies which can be accessed with III-V compound semiconductors. Due to this large potential, there is a great interest for the integration of III-V compound semiconductors into Si technology. This integration can be performed by several techniques, where molecular beam epitaxy [12][13][14], metal-organic vapor phase epitaxy [15][16][17] and wafer bonding [18,19] are the most prominent ones. Another approach is ion beam synthesis of III-V crystallites within the Si host material [20,21] using sequential ion beam implantation and thermal annealing [22,23]. With the transition to ultra-short annealing times by flash lamp annealing (FLA), the control of nanocrystal (NC) size and quality could be enhanced [24]. A combination of this III-V integration method with additional reactive ion etching resulted in the formation of p-n-heterojunction diodes based on InAs NCs on top of Si nanocolumns [25]. The integration of III-V compound semiconductor NCs in Ge by this preparation technique [26] allows the combination of the high electron mobility of III-V compounds with the high hole mobility of Ge. Recently, ion beam synthesis of InxGa1−xAs NCs in bulk Si in combination with rapid thermal annealing has been demonstrated as well [27]. In this paper, the fabrication of ternary InxGa1−xAs NCs in thin Si layers by a combination of ion implantation and subsequent ms-range FLA is demonstrated.
Using different fluence ratios of Ga and In ions, InxGa1−xAs NCs of various compositions were prepared. With the aim to characterize the NCs and to understand the formation mechanism, optical and microstructural properties were investigated. Methods For sample preparation, thermally oxidized SOI (Si-on-Insulator) substrates with a final layer stack of 65 nm SiO2/60 nm Si/150 nm SiO2/bulk Si were used. Initially, the Si device layer had a (100) orientation. These substrates were implanted sequentially with As, Ga and In ions. The implantation energies used led to a projected range centered in the middle of the thin Si layer. The As fluence was 3×10^16 ions cm^−2, while the Ga and In fluences were varied to achieve different Ga/In ratios. The overall ion fluence was 6×10^16 ions cm^−2. For the sake of comparison, one sample was self-implanted with an equivalent fluence of Si+ ions. The fluences used for the different samples are given in table 1. In the following, we will use the nominal InxGa1−xAs composition (value of x_nom) deduced from the fluence ratios to unambiguously identify the individual samples. In every case, the ion fluence was high enough to fully amorphize the thin Si layer. After implantation, the samples were thermally annealed by FLA to recrystallize the Si layer and to form III-V NCs. For this, the samples were pre-heated for 3 min at 470 °C and flashed for 20 ms at energy densities ranging from 46.4 to 97.2 J cm^−2. The resulting annealing temperature can be calculated by considering the optical constants, heat capacity and heat conductivity of the treated samples. For this purpose the temperature profiles have been simulated by solving the one-dimensional heat equation with the commercial software COMSOL Multiphysics® [28] (see figures 4(c) and A1 in the appendix). Using the simulated temperature profiles, the theoretical annealing temperatures range between 1000 °C and 1400 °C. In order to investigate the InxGa1−xAs samples, μ-Raman spectroscopy, x-ray diffraction (XRD) and scanning (SEM) as well as transmission electron microscopy (TEM) analyses were performed. During Raman spectroscopy the samples were exposed to a 532 nm Nd:YAG laser in backscattering geometry and the Raman shift was measured between 100 and 600 cm^−1. XRD was performed on an Empyrean Panalytical 4-circle θ-θ diffractometer using Cu-Kα radiation. Cross-sectional TEM analysis was done with an image Cs-corrected FEI Titan 80-300 microscope at an accelerating voltage of 300 kV. Besides bright-field TEM and high-resolution TEM imaging, high-angle annular dark-field scanning TEM (HAADF-STEM) and energy-dispersive x-ray spectroscopy (EDXS) were performed to collect information about the chemical composition, defects and interfaces of the III-V NCs. Results and discussion In figure 1(a) Raman spectra of SOI samples implanted with different In/Ga fluence ratios are compared. Additionally, the Raman spectrum of the Si-implanted reference sample (black) is given. All samples were annealed at 97.2 J cm^−2 (ca. 1370 °C). Comparing the InxGa1−xAs samples with the reference, it can be clearly seen that the formation of a crystalline III-V component was successfully achieved by the combination of ion implantation and FLA. Figure 1(b) shows the shift of the peak positions of the InAs-like and GaAs-like TO and LO phonon modes as a function of the In content.
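As an aside on the annealing-temperature estimate mentioned in the Methods, a heavily simplified finite-difference sketch of the one-dimensional heat equation is given below (the paper instead uses COMSOL with layer-resolved optical and thermal constants). The constant Si-like properties, the assumed absorbed fraction of the fluence, and the insulated backside (radiative and convective cooling neglected) are all illustrative simplifications, as is the neglect of the latent heat of melting.

import numpy as np

L_wafer = 525e-6                   # m, full wafer thickness (illustrative)
N = 200
dx = L_wafer / (N - 1)
kappa, rho_c = 150.0, 1.6e6        # W/(m K); J/(m^3 K): rough Si values
alpha = kappa / rho_c
dt = 0.4 * dx ** 2 / alpha         # explicit-scheme stability limit
flash = 20e-3                      # s, FLA pulse length
absorb = 0.7                       # assumed absorbed fraction of the fluence
q_surf = absorb * 97.2e4 / flash   # 97.2 J/cm^2 -> absorbed flux in W/m^2

T = np.full(N, 470.0 + 273.15)     # preheated wafer, K
t, T_peak = 0.0, 0.0
while t < 40e-3:
    q = q_surf if t < flash else 0.0
    T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0] = T[1] + q * dx / kappa   # absorbed flux enters at the front face
    T[-1] = T[-2]                  # insulated backside (cooling neglected)
    T_peak = max(T_peak, T[0])
    t += dt
print("peak surface temperature: about %.0f K (%.0f C)" % (T_peak, T_peak - 273.15))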
The peak positions have been extracted by fitting the Raman spectra and follow the trend of the theoretical curves for the phonon mode shift with varying InxGa1−xAs composition [29]. (Table 1. Implantation fluence, nominal InxGa1−xAs composition, as well as InxGa1−xAs compositions as deduced from Raman (x_Raman) and XRD measurements (x_XRD). In case of a bimodal peak structure in XRD both calculated compositions are given and the major contribution is marked by an asterisk. Columns: ion fluence; InxGa1−xAs composition.) For the binary III-V compounds, namely InAs (x_nom = 1) and GaAs (x_nom = 0), the characteristic phonon modes are observed at 219 (TO) and 237 cm^−1 (LO) for InAs [30] and at 268 (TO) and 286 cm^−1 (LO) for GaAs [31], respectively. The spectra of the ternary phases show a two-mode phonon behavior [32]. With decreasing In content (decreasing x_nom) the observed phonon modes change from an InAs-like mode to a GaAs-like one. The InAs-like phonon modes lose intensity and converge while the GaAs-like phonon modes evolve. Due to this two-mode phonon behavior it is possible to draw conclusions about the composition of InxGa1−xAs. These calculated compositions (see table 1) differ from the nominal compositions in such a way that the fabricated InxGa1−xAs NCs are Ga-rich (x_Raman < x_nom). It is assumed in a first approximation that the system is sufficiently relaxed and that the observed shifts of the peak positions are mainly due to composition. Nevertheless, the width of the peaks suggests a size or composition distribution of the ternary NCs or, more probably, a combination of both. The Si 2TA phonon mode present at 301 cm^−1 in all spectra indicates the recrystallization of the thin Si layer. This is supported by the very strong TO+LO phonon mode peak at 520 cm^−1. The broad peak shifting from 360 to 390 cm^−1 with increasing Ga content represents the amphoteric doping of Si in III-Vs. The substitutional defects Si_Ga (384 cm^−1) and Si_As (399 cm^−1) and their donor-acceptor pair (393 cm^−1) in GaAs are Raman-active [33]. For InAs the Si_In defect shows a Raman shift of 359 cm^−1 [34]. We suspect a mixture of these defect-related Raman modes to be responsible for this broad peak. At high In contents Si_In defects are more dominant, giving rise to a signal at lower wavenumbers, while at high Ga contents the peak shifts towards higher wavenumbers because Si_Ga and Si_As defects affect the Raman spectra more. In conclusion, Raman spectroscopy proves that InxGa1−xAs compound semiconductor NCs with variable composition can be successfully formed in SOI substrates via sequential ion implantation and subsequent FLA. Furthermore, these InxGa1−xAs NCs are heavily doped with Si. Figure 2(a) shows XRD 2θ scans in the 2θ range from 22° to 100° measured at a grazing incidence angle of 1°. For the Si self-implanted reference sample the XRD pattern provides only the Si reflections (blue bars), which proves polycrystalline recrystallization of the implanted thin Si layer during FLA. The samples implanted with In, Ga and As show additional Bragg peaks which are related to III-V compound formation (yellow bars). With decreasing In content, the 111, 220 and 311 reflections of the III-V compound NCs shift to higher angles, indicating a decrease of the lattice constant and thus a shift from InAs to GaAs. Having a closer look at the reflections, a peak splitting is observed for the ternary phases, especially for x_nom = 0.5.
This splitting is due to the coexistence of at least two InxGa1−xAs phases, which are denoted as In-rich and Ga-rich InxGa1−xAs with respect to the nominal composition. Figures 2(b) and (c) show the crystallite sizes and the μ-strain of the III-V and Si crystallites deduced from a Williamson-Hall analysis. For both III-V phases, the crystallite sizes are around 30 nm. In contrast, the Si crystallites within the thin reference sample are about 10 nm in size. Despite the grazing incidence, the crystallite size measured by XRD is a statistical average value and represents almost exclusively the vertical dimensions of the III-V NCs. As shown in figure 2(c), the III-V crystallites have lower μ-strain than the Si crystallites in the reference sample. The μ-strain itself reflects a variation of the lattice parameter within the crystallites. In extreme cases, this variation can be attributed either to lattice imperfections or to a variation in the composition of the crystallites. However, in the present case the origin of the μ-strain is probably a mixture of both. The μ-strain of both ternary III-V phases shows a slight trend to higher values with decreasing In content (x_nom), but absolute values around 0.1% are small compared to the lattice mismatch expected for InxGa1−xAs on Si, which is between 11.5% (InAs) and 4.1% (GaAs). Hence, the μ-strain within the III-V crystallites is negligible. The variation of the μ-strain denoted by the error bars is smaller in the binary samples (x_nom = 1 and 0) than in the ternary samples. Therefore, we attribute the μ-strain mainly to a variation in the composition of the ternary III-V NCs. InxGa1−xAs has a cubic crystal structure, and as the μ-strain is negligible, the lattice parameter of the different ternary III-V phases can be deduced from the XRD peak positions. Furthermore, the composition of the InxGa1−xAs NCs can be calculated according to Vegard's law [35], which states that the lattice parameter of a solid solution changes linearly with the variation of its composition. The determined values (x_XRD) are given in table 1. As a peak splitting is observed in the XRD patterns for the ternary compounds, two x_XRD values are displayed for a specific implantation fluence ratio. Since there are no hints of texture, comparing the intensities of the individual III-V XRD peaks points to a major and a minor phase contribution. The NC compositions of the major contribution obtained by XRD verify the statement from Raman spectroscopy that the fabricated InxGa1−xAs NCs are mainly Ga-rich with respect to the nominal composition. This fact is supported by the appearance of the In 101 and 110 Bragg reflections at 32.9° and 39.1° (red bars), which proves the presence of a metallic In phase in these samples. With decreasing In content, the In reflections disappear. In order to get more information about the morphology of the NCs, SEM and TEM analyses were performed. Top-view SEM images in figures 3(a)-(e) depict arbitrarily shaped, bright areas representing particles with sizes ranging from a few tens of nanometers to about 400 nm for all fluence ratios. Comparing samples implanted with different fluence ratios, it is observed that the mean particle size tends to decrease with decreasing In content (x_nom → 0). However, the number of particles does not change significantly, which leads to a decrease in the overall volume fraction of the precipitates.
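The two XRD analyses just mentioned reduce to short calculations; the following sketch (with made-up peak breadths rather than the paper's data) shows the Vegard's-law composition extraction and a Williamson-Hall line fit β·cosθ = Kλ/D + 4ε·sinθ for crystallite size and micro-strain.

import numpy as np

A_GAAS, A_INAS = 5.6533, 6.0583       # lattice constants, Angstrom

def vegard_x(a):
    """In content x of InxGa1-xAs from its cubic lattice constant."""
    return (a - A_GAAS) / (A_INAS - A_GAAS)

print("a = 5.85 A  ->  x = %.2f" % vegard_x(5.85))

# Williamson-Hall: instrument-corrected peak breadths beta (radians) at
# Bragg angles 2*theta for Cu K-alpha; the numbers below are demo values.
lam, K = 1.5406, 0.9                   # Angstrom; Scherrer constant
two_theta = np.radians([27.0, 45.0, 53.5])   # e.g. 111, 220, 311 reflections
beta = np.array([0.0048, 0.0060, 0.0066])
theta = two_theta / 2
slope, intercept = np.polyfit(np.sin(theta), beta * np.cos(theta), 1)
print("crystallite size D = %.0f A" % (K * lam / intercept))
print("micro-strain eps = %.2e" % (slope / 4))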
In the corresponding HAADF-STEM micrographs of figures 3(f)-(k), cross-sections of these samples are depicted. From top to bottom, the SiO2 capping layer, the thin Si device layer, the SiO2 buried oxide layer and the bulk Si can be distinguished. In each sample, the thin Si device layer comprises several bright areas which represent the III-V precipitates. These particles mainly appear block-like and are limited in height by the surrounding SiO2 layers. Additionally, smaller precipitates which are located directly at the interfaces are observed, and some of them are connected by filaments. Regarding the reduction of particle size with decreasing In content, as observed in the SEM images, the number of III-V precipitates visible in the HAADF-STEM micrographs is not high enough for an adequate statistical evaluation. Besides the III-V NCs located within the thin Si layer, a dotted line of bright spots is observed in the surrounding SiO2 layers close to their interfaces with the thin Si layer, indicating the presence of material with a high atomic mass. A statistical particle size analysis of the SEM micrographs has been performed to verify the tendency of the mean particle size to decrease with decreasing In content, using the open source software ImageJ [36]. Figure 4 shows the results of this statistical evaluation, depicting the particle size histograms of the different SOI samples (a) and the change of the mean particle size with decreasing In content (x_nom) (b). Most of the particles formed after FLA have sizes below 100 nm, independent of the nominal composition. However, for the samples with higher In content, there are more particles with larger sizes than for the samples with lower In content. This can also be seen in the mean particle size of the ternary III-V NCs, which decreases from 99 nm for the InAs sample to 81 nm for the GaAs sample. The error bars represent the standard deviation of the mean particle size and decrease with decreasing In content, resulting in a narrower particle size distribution for samples with lower In content. The rather large variation in lateral particle size is responsible for the peak broadening of the III-V phonon modes observed during Raman spectroscopy. Figure 4(c) displays the time profile of the surface temperature during FLA with an energy density of 97.2 J cm^−2. After a fast cooling phase caused by heat conduction, further cooling due to thermal radiation and heat convection takes place on a timescale of several s. Whereas GaAs already solidifies after less than 100 μs, InAs and In stay much longer in the liquid phase. In figure 5 a more detailed TEM analysis of these particles is exemplarily given for the sample with x_nom = 0.5 treated with FLA at 97.2 J cm^−2 (ca. 1370 °C). The cross-sectional bright-field TEM micrograph in figure 5(a) depicts the implanted layer stack with the thin SiO2 capping layer, the recrystallized Si device layer with ternary III-V NCs, the SiO2 box layer and the Si substrate at the bottom. Due to combined mass-thickness and diffraction contrast in the bright-field image, a clear distinction between Si and III-V crystallites in the device layer is not possible. To obtain micrographs showing almost exclusively atomic number contrast, HAADF-STEM imaging was performed.
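The ImageJ particle statistics described above can be reproduced in outline with scikit-image; the sketch below runs on a synthetic binary image, since the thresholding details of the actual SEM analysis are not given in the paper, and the pixel-to-nanometer scale is an assumption the user would supply.

import numpy as np
from skimage import draw, measure

# synthetic "SEM image": bright disks of random size on a dark background
rng = np.random.default_rng(2)
img = np.zeros((512, 512), dtype=bool)
for _ in range(40):
    r, c = rng.integers(30, 482, size=2)
    rr, cc = draw.disk((r, c), rng.uniform(4, 20), shape=img.shape)
    img[rr, cc] = True

labels = measure.label(img)                   # connected components
props = measure.regionprops(labels)
# equivalent-circle diameter per particle, in pixels (scale by nm/px as needed)
diam = np.array([np.sqrt(4 * p.area / np.pi) for p in props])
print("N = %d particles, mean diameter = %.1f px, std = %.1f px"
      % (len(diam), diam.mean(), diam.std()))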
In figure 5(b), taken from the area marked by the yellow box in figure 5(a), one can see that there are two types of NCs: (1) block-like precipitates penetrating the whole Si layer and (2) triangular-shaped crystallites located at the interfaces. When having a closer look at the type (2) NCs, it appears that thin filaments, which also pierce through the Si layer, are attached to them. According to the fast Fourier transform (FFT) given in figure 5(d), the block-like precipitate in figure 5(c) is single-crystalline. Based on the cubic zincblende structure and the measured interplanar distances, the FFT in figure 5(d) corresponds to a ⟨112⟩-type zone axis pattern of InxGa1−xAs. EDXS analyses (not shown here) performed at various III-V precipitates show variations in the relative Ga/In ratio, supporting the existence of different InxGa1−xAs phases within one sample. Similar to the HAADF-STEM micrographs in figure 3, additional small clusters in the surrounding SiO2 layers close to the SiO2/Si interfaces can be observed in the HAADF-STEM image of figure 5(b). These small clusters have sizes ranging from 2 to 10 nm and appear to be crystalline. Since the tails of the implantation profiles extend into the SiO2 layers, it is likely that these III-V NCs have been formed by a solid phase crystallization process, as the SiO2 layers remain solid during FLA. They grow by a process similar to Ostwald ripening, but their size is limited by the low amount of available group-III and -V atoms and the lower diffusion constant in solid SiO2. Based on the present results, a liquid phase epitaxy (LPE) mechanism is proposed as the formation mechanism responsible for the III-V NCs in the Si device layer. The schematic growth mode is depicted in figure 6. After ion implantation, the Si device layer is amorphous and the implanted group-III and group-V ions have a Gaussian-like depth distribution (figure 6(a)). The a-Si layer has a melting point which is about 200 K lower than that of c-Si [37], and the impurities decrease it even further, which is why the applied 20 ms FLA pulses are sufficient to melt the entire a-Si layer already at lower energy densities. The fast diffusion of group-III and -V atoms in molten Si results in a homogeneous distribution over the whole Si layer (figure 6(b)). The diffusion coefficients D of these elements in liquid Si are several orders of magnitude higher than in c-Si close to the melting point [38,39]. Immediately after the FLA pulse, crystalline Si seeds start to evolve when the temperature falls below the actual melting point of c-Si and an undercooled melt is formed (figure 6(c)). These c-Si seeds grow with further cooling and, as the group-III and -V elements have segregation coefficients k below one [39], the molten phase is enriched with these elements (figure 6(d)). During further cooling the different c-Si grains come into contact with each other and merge into bigger polycrystalline regions with grain boundaries decorated with group-III and -V ions (figure 6(e)). (Figure 4. Change of the mean particle size of the ternary III-V NCs depending on the In content (a). With decreasing In content, the mean particle size slightly decreases (b). Temperature profile of the SOI wafer surface including the cooling phase after the FLA pulse using an FLA energy density of 97.2 J cm^−2 (c). Dashed lines represent the melting points of c-Si, GaAs, a-Si, InAs and metallic In (from top to bottom).) The Si grain
growth is also limited by the SiO2/Si interfaces, which in turn may also be decorated with the implanted species. Finally, when the temperature decreases below the melting point of the III-V compound, III-V NCs form between the polycrystalline Si grains (figure 6(f)). This III-V formation initially utilizes the recrystallized Si as a template for epitaxial growth but then continues on its own, leading to two types of interface regions: those where the crystal orientation of the Si grain matches that of the III-V NC and those where it does not. The Si grains limit the lateral size of the III-V NCs while their height is governed by the thickness of the Si layer, leading to a uniform height distribution over the whole sample. By recrystallization from the melt, the strain in the III-V crystals originating from the lattice mismatch between the III-V compounds and Si is reduced. This leads to a lower number of defects, e.g. dislocations, and higher crystalline quality than in the case of III-V integration via the solid phase. The formation of mainly Ga-rich InxGa1−xAs NCs is attributed to the melting point difference between InAs and GaAs. For the ternary compound, the melting point decreases with increasing In content [40]. Therefore, a Ga-rich InxGa1−xAs phase has a higher melting point and starts to recrystallize earlier during cooling of the molten layer (figure 4(c)). The excess In not consumed in the III-V NC formation forms metallic In precipitates when the temperature is low enough. However, there are other effects influencing the final stoichiometry of the NCs, such as the available space from which group-III and -V elements can be drawn and the probability of more than one NC forming simultaneously. In the end, a distribution of InxGa1−xAs compositions is possible, although the majority will be Ga-rich with respect to the nominal value (x_nom). The decrease in the overall III-V volume fraction with decreasing x_nom can be attributed to the different segregation coefficients of In and Ga. The segregation coefficient of Ga is closer to 1 than that of In, which lowers the amount of group-III atoms available during III-V NC formation in samples with high Ga content, resulting in a decreased amount of III-V material relative to the Si matrix. Conclusion We have demonstrated a process for the integration of III-V NCs in thin Si films. Ternary InxGa1−xAs NCs have been fabricated by a combination of high-dose ion beam implantation and millisecond-range FLA. Investigations with Raman spectroscopy and XRD prove the formation of these NCs by the characteristic bulk phonon modes and typical diffraction peaks, respectively. TEM analyses present single-crystalline III-V compound semiconductor NCs in a thin polycrystalline Si layer after implantation and FLA. The formation mechanism is described by an LPE mechanism, which is influenced by the diffusion and segregation of the group-III and -V ions in molten Si as well as by the melting points of the particular species. Due to recrystallization via the liquid phase, the strain due to the lattice mismatch between the III-V compounds and Si can be lowered and the number of resulting dislocations can be reduced. In the ternary samples, more than one InxGa1−xAs stoichiometry can be present. Depending on the nominally implanted In/Ga/As ratio, the final ternary composition can be adjusted within a certain range.
In our experiments, x values ranging from 0 to 1 have been achieved, although the InxGa1−xAs precipitates are Ga-rich compared to x_nom, which is accounted for by the melting point difference between InAs and GaAs. The presence of mainly Ga-rich ternary NCs and the low segregation coefficient of In lead to the formation of metallic In clusters. The III-V particles are constrained in height by the Si/SiO2 interfaces, leading to control of the size distribution in one dimension, although they have a rather broad lateral particle size distribution. Further control of the lateral size distribution, as well as of the III-V particle shape, can be achieved by introducing a patterned capping layer as an implantation mask. Acknowledgments The support by the Ion Beam Center (IBC) at HZDR is gratefully acknowledged. The authors would like to thank C Neisser and G Schnabel for their careful sample preparation, T Schumann for the flash lamp treatment, A Scholz for performing the x-ray measurements as well as M Missbach and A Kunz for TEM specimen preparation.
Recent Progress in the Application of Hydroxyapatite for the Adsorption of Heavy Metals from Water Matrices Wastewater treatment remains a critical issue globally, despite various technological advancements and breakthroughs. The study of different materials and technologies has gained new momentum in recent years, aiming at cheap and efficient processes and a cleaner environment for future generations. In this context, the present review presents the new achievements in the materials domain, with highlights on apatitic materials used for the decontamination of water loaded with heavy metals. The main goal of this review is to present the adsorptive removal of heavy metals using hydroxyapatite-based adsorbents, offering a general overview of the recent progress in this particular area. In developing the current review, an attempt has been made to give appropriate recognition to the most recent data regarding the synthesis methods and precursors, the targeted pollutants, the morphological characteristics of the adsorbent materials and the effectiveness of the processes. Introduction Treating and cleaning contaminated water is a very difficult process, and water pollution represents a critical issue for our society. The industrial and agricultural sectors generate huge quantities of chemical compounds that can cause serious environmental damage, heavy metal pollution being one of them; heavy metals are more persistent than organic contaminants such as pesticides or petroleum byproducts. Heavy metals can produce serious human health problems (including hyperkeratosis, cancers, diabetes, anemia, disorders of the immune, nervous and reproductive systems, etc. [1]). Today, the challenges in finding new materials and technologies for water depollution are directed towards a balance between operating costs (including easy scale-up of the decontamination procedures and materials synthesis) and contaminant removal efficiency. Remediation techniques such as adsorption, membrane technology, ion exchange, coagulation or electrochemical treatment are the most versatile applications, and they can be comprehensively upgraded for the remediation of heavy-metal-contaminated environments. Among the depollution strategies, adsorption is well recognized as a viable and easy-to-apply method at the industrial level for the removal of both organic and inorganic pollutants [2,3]. The research into adsorption materials has gained new momentum, extending from the study of minerals, biopolymers, microalgal and fungal biomass to waste materials/byproducts or nanotechnology products, in order to achieve efficient heavy metal removal in acidic, neutral or alkaline conditions [4][5][6][7]. Moreover, it is of great interest to use innovative materials with good biocompatibility and enhanced biodegradability and bioreactivity [8]. What is important in environmental depollution practice is that the "material-technology" tandem has to offer not only economic benefits, but must also contribute to sustainable development by being an active part of the global waste management process. After their role is fully achieved, these used materials must be removed from the systems so as not to become a threat to plants, animals and even humans through bioaccumulation. In this context, this review reports the recent applications of hydroxyapatite and hydroxyapatite-based materials in the area of heavy metals removal.
Due to their chemical characteristics (enhanced thermal and chemical stability, acid-base properties, low solubility, adsorption and ion-exchange ability), these materials can be an important part of the adsorption methods for hazardous contaminants in the field of water and wastewater treatment. Considering these aspects, the main goal of this review is to evaluate and offer a general overview regarding the recent progress on hydroxyapatite-based adsorbents in this particular area. In developing the current review, an attempt has been made to give appropriate recognition to the most recent data regarding the synthesis methods and precursors, the targeted pollutants, the morphological characteristics of the adsorbent materials and the effectiveness of the processes (Scheme 1 depicts a flow diagram of the structure of this review).
As the industry is continuously searching for alternative methods to decrease the environmental burden of both the pollutants present and the methods applied for depollution, this review also attempts to present the use of hydroxyapatite of natural origin, along with materials obtained from synthetic precursors, mechanistic aspects, as well as conclusions regarding the current limitations and future perspectives. Scheme 1. Flowchart describing the structure of the present review. In order to select the works presented in the current review, the popular database Scopus was used, applying as keywords "hydroxyapatite", "adsorption" and "heavy metals" and, as a search criterion, works published after 2019. The works returned were manually checked in order to remove false-positive results, and a decision regarding their insertion in the review was made after careful reading of the in extenso paper, considering several factors, including the relevance for the envisaged research area, the characterization of the adsorbent materials and the complexity of the study. Adsorption Process Modeling and Mechanism Evaluation of adsorption processes and adsorbents is based on thorough modeling using different kinetic models, equilibrium isotherms and thermodynamic data [9]. Together with physicochemical characterization and correlated with operational parameters (temperature, pH, pollutant concentration, adsorbent dose, etc.), these approaches allow a complete picture of adsorbent performance and, consequently, process optimization. Several kinetic models [10,11] are primarily used for obtaining relevant insights into the adsorption mechanism and rate-controlling steps (mass transport and/or reaction processes, the time to reach the equilibrium state, etc.). Generally, for heavy metal adsorption on hydroxyapatite or modified hydroxyapatite, the pseudo-second-order model is more appropriate for lower pollutant concentrations, while for higher concentrations the pseudo-first-order model is in good concordance with experimental data (a fitting sketch is given below).
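To make this fitting workflow concrete, the sketch below fits the two kinetic models just mentioned to hypothetical uptake-versus-time data, fits the Langmuir and Freundlich isotherms discussed in the next paragraphs to hypothetical equilibrium data, and runs a Van't Hoff analysis. All numerical values are invented for illustration only, and SciPy's curve_fit is just one possible fitting tool; none of this is taken from the reviewed studies.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# --- Kinetics: hypothetical uptake-vs-time data, t (min), qt (mg/g) ---
t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
qt = np.array([18.0, 29.5, 41.2, 50.8, 54.9, 58.6, 59.7])

def pfo(t, qe, k1):
    # pseudo-first-order: physisorption-type rate law
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # pseudo-second-order: chemisorption-type rate law
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# --- Isotherms: hypothetical equilibrium data, Ce (mg/L), qe (mg/g) ---
Ce = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
qe_eq = np.array([8.3, 17.1, 27.9, 43.6, 54.2, 60.8])

def langmuir(Ce, qm, KL):
    # monolayer adsorption on sites of equal affinity
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # heterogeneous sites with a distribution of affinities
    return KF * Ce ** (1.0 / n)

fits = [("PFO", pfo, t, qt, (60.0, 0.02)),
        ("PSO", pso, t, qt, (60.0, 0.001)),
        ("Langmuir", langmuir, Ce, qe_eq, (70.0, 0.05)),
        ("Freundlich", freundlich, Ce, qe_eq, (10.0, 2.0))]
for name, model, x, y, p0 in fits:
    popt, _ = curve_fit(model, x, y, p0=p0)
    sse = np.sum((y - model(x, *popt)) ** 2)             # error function (SSE)
    r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)         # R2 for model selection
    print(f"{name:10s} params={np.round(popt, 4)} SSE={sse:.2f} R2={r2:.4f}")

# --- Thermodynamics: Van't Hoff plot of ln(Keq) against 1/T ---
T = np.array([298.0, 308.0, 318.0])         # K
Keq = np.array([3.2, 4.1, 5.3])             # hypothetical Keq = qe/Ce values
vh = linregress(1.0 / T, np.log(Keq))
R = 8.314                                   # J/(mol K)
dH0 = -vh.slope * R                         # slope = -dH0/R
dS0 = vh.intercept * R                      # intercept = dS0/R
dG0 = dH0 - 298.0 * dS0
print(f"dH0={dH0/1000:.1f} kJ/mol, dS0={dS0:.1f} J/(mol K), dG0(298 K)={dG0/1000:.1f} kJ/mol")
```

With these invented data the fit reports a positive dH0, a positive dS0 and a negative dG0, i.e. the spontaneous, endothermic pattern discussed below for many HAP systems.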
Adsorption isotherms represent the main approach to assess the optimal adsorption capacity and the interactions between the solid and the pollutant in the liquid phase, and to evaluate the distribution of the pollutant between them. There are several mathematical equations [12] applied for fitting the experimental data, although the most used are Langmuir and Freundlich, in linear or non-linear form, since they satisfactorily cover two general assumptions regarding the sorbate-adsorbent system. For the Langmuir model, the main assumption is that binding sites have similar affinity for heavy metal ions, while the Freundlich model refers to heterogeneous systems where adsorption sites have different affinities for sorbate species. Thermodynamic data are one of the most important methods for characterizing the adsorption process, especially since temperature is an important operational parameter. The Van't Hoff Equation (1) allows the determination of the thermodynamic functions, namely the changes in entropy, enthalpy and free energy: ln K eq = ∆S 0 /R − ∆H 0 /(RT) (1), where K eq = q e /C e [11]. From these data, it can be determined whether the process is spontaneous (∆H 0 < 0, ∆S 0 > 0, ∆G 0 < 0) or non-spontaneous (∆H 0 > 0, ∆S 0 < 0, ∆G 0 > 0), and, consequently, the performance of the adsorption process can be evaluated. Applying different models/equations to the experimental data is the first step, followed by selecting the one that fits best. This is usually accomplished by comparing the coefficient of determination (R 2 ), which is employed to select the appropriate model. However, there are several error functions which better compensate for the experimental data error (Table 1); they can be applied to both kinetic and equilibrium data and are critical for process optimization. Rahman et al. [12] observed that the adsorption of Pb 2+ , Fe 2+ and Zn 2+ on Kappaphycus sp. followed a Langmuir model according to the R 2 parameter, while, applying another error function, the best fit was for the Temkin model. In addition, the best fit for the Langmuir model was found for Cd(II) adsorption on modified HAP [12] using the sum of squared errors and hybrid functions. Table 1. Several isotherm, kinetic and thermodynamic model equations commonly used in adsorption studies [12,23]. Notation: C 0 (mg/L), adsorbate initial concentration; C e (mg/L), adsorbate equilibrium concentration; q e (mg/g), observed biosorption capacity at equilibrium; q m (mg/g), maximum biosorption capacity; K L (L/mg), Langmuir constant related to the energy of adsorption; R L , a dimensionless constant known as the separation factor; K F ((mg/g)/(mg/L) 1/n ), Freundlich constant. Several low-cost materials were successfully applied in adsorption studies. Table 2 presents a selection of some materials, as well as their maximum adsorption capacities, for comparison purposes. Application of Natural-Derived Hydroxyapatite for the Removal of Heavy Elements Hydroxyapatite is, first of all, a naturally occurring mineral. As such, the use of natural-derived hydroxyapatite (HAP) is encountered in several published works. Natural hydroxyapatite can be obtained by the processing of several resources, including animal and fish bones, coral or egg shells [35]. Natural-derived HAP is mainly used in biomedical applications, due to its intrinsic properties, such as the presence of trace elements or its natural structure.
Nevertheless, natural HAP can also be encountered in a series of environmental applications, including the one targeted by the present review, the removal of heavy metals (Table 3; abbreviations: SSA, specific surface area; PV, pore volume; PD, pore diameter). In this category, we also mention the studies presenting the separation of one component (typically CaO) from natural sources (wastes of animal origin or algae), followed by other steps to develop the final hydroxyapatite material. Hassan et al. [36] presented the sorption of Sr(II) on a gamma-radiation-prepared composite including natural-derived hydroxyapatite and poly(acrylamide-acrylic acid). The authors proposed a complex mechanism for the Sr(II) uptake, including ion exchange (between the Ca and Sr ions) and surface complexation. The equilibrium data presented suggested that the best fitting isotherm models were Langmuir (for 298 K) and Freundlich (for 318 and 328 K). In addition, the uptake capacity of the composite was superior to that of other popular sorbents, including carbon nanotubes, kaolinite or biosorbents, in the studied concentration range (10-50 mg/L). A HAP sorbent, also obtained from bovine bones, was applied by Caballero et al. [37] for the removal of Pb(II) in a wide range of concentrations (400-1400 mg/L). Their results suggested that the best fitting isotherm for the adsorption process is the Freundlich isotherm, thus supporting adsorption on a heterogeneous surface. The authors also suggested that the optimum adsorbent concentration was 0.7 g/L, at which a removal efficiency of approx. 100% was reached. Adsorption of Pb(II) was also evaluated by Vahdat et al., using both chicken-derived HAP and a magnetic HAP composite, at lower concentrations (1-10 mg/L). The obtained results also suggested pseudo-second-order kinetics, with the Freundlich isotherm being the best model to describe the process. The higher uptake capacity observed can, in our opinion, be assigned on the one hand to the different origin of the HAP and, on the other hand, to the different lead concentrations used. The removal of Cu(II) was studied by Ngueagni et al. [44], using as sorbent material hydroxyapatite obtained from the core of ox horns. The study involved metal concentrations in the range 100-500 mg/L, while the characteristics of the sorbent were determined by the calcination temperature (400-1100 °C), with the Ca/P ratio varying between 1.22 and 1.61 and the specific surface area between 130 and 1 m 2 /g. The copper adsorption process for the sample calcined at 400 °C was best described by the Langmuir isotherm, reaching a maximum adsorption capacity of 99.98 mg/g at room temperature and a pH of 5, the phenomenon being governed, according to the authors, by a cation exchange process. The adsorption capacity recorded was superior to that of other sorbents, such as hazelnut activated carbon or bentonite, but inferior to that of other, more complex materials. The same group utilized the material to study the adsorption of lead and cadmium ions, obtaining similar results, the most promising adsorbent being the sample calcined at 400 °C, in a process best described by the Langmuir isotherm. HAP obtained from bovine femur was applied by Ramdani et al. [47] in sorption studies, using Pb(II) and Cd(II) as heavy metals. The obtained results were compared with those obtained with commercial HAP, with superior results for both ions.
The natural HAP had a superior pore size distribution and pore volume compared with the commercial sample, and a similar specific surface area. The adsorption processes were found to fit the Langmuir isotherm model for the natural HAP and the Freundlich isotherm model for the commercial sample. This could be explained, in our opinion, by the differences in morphological characteristics between the two samples. A similar approach (with similar results) was applied by the same group [48], who compared the efficiency of natural HAP with that of commercial HAP for the removal of copper and iron(III) ions. The adsorption process was found to be best fitted by the Langmuir isotherm, and the results were superior for the natural HAP in terms of maximum adsorption capacity. Fish scales were used by Sricharoen et al. [52] to obtain HAP by an ultrasound-assisted method, with application in the uptake of Hg(II). The material obtained had a high uptake capacity for the targeted ion, superior to several types of complex adsorbents, a phenomenon the authors assigned to ion exchange with the Ca 2+ in the HAP structure, as well as to electrostatic interactions between the positively charged Hg 2+ and the HAP surface. An interesting approach was presented by Bi et al. [50]. Using Chlorella powder and a microwave-assisted method, the authors obtained hollow microspheres with multicomponent nanocores, in which the dominant phase was HAP, alongside whitmoreite, magnetite and chlorapatite. The material was used for the removal of cadmium ions, with a relatively high adsorption capacity. In addition, the magnetite phase present in the composite allowed its magnetic recovery. A composite based on natural materials (HAP obtained from bovine cortical bone, chitosan obtained from shrimp shells, and snail shell powders) was applied by Bambaeero et al. [59] for the removal of copper and zinc ions. The process obeyed a pseudo-second-order model, while the isotherms that best fitted the experimental data were the Langmuir and Temkin isotherms. The authors reached ion removals of 90% and 60%, respectively, for a 3 mg/L initial ion concentration, 0.02 g of adsorbent and a pH of 5.5. A particular case regarding natural-derived hydroxyapatite is the use of natural material to obtain the precursors for the hydroxyapatite synthesis. Typically, the material obtained is CaO (noted in Table 3 as Ca precursor), which is used to obtain HAP by the addition of phosphoric acid (or other P-containing precursors). This approach was used by Xia et al. [39], the HAP being applied for the adsorption of Sr(II). According to their results, the process is best fitted by the Liu isotherm model, reaching a maximum adsorption capacity of over 45 mg/g. Núñez et al. [40] applied HAP obtained by a similar recipe for the removal of lead, cadmium and copper ions. The process was better fitted by the Langmuir model but, most importantly, the authors also performed a selectivity study, suggesting a competitive effect between the different ions. The lead ions were preferentially adsorbed by HAP, most probably due to their high electronegativity and an ionic radius closer to that of Ca(II). Elsanafeny et al. [55] applied a similar method for obtaining the calcium precursor from eggshells and applied the synthesized HAP as such, or in the form of polymer-modified HAP, in order to propose a method for the treatment of wastewater containing radioactive cobalt and strontium.
The obtained results showed a superior ion uptake capacity for the HAP. Using egg shells as a source of CaCO 3 , Zeng et al. [43] obtained an anionic/cationic substituted HAP by an ultrasound-assisted procedure. The partial substitution of Ca 2+ and PO 4 3− with Na + , SiO 4 4− and CO 3 2− led to a macroporous structure with a relatively high pore volume compared to the other presented data. The material was applied for the adsorption of lead and cadmium ions, with a superior maximum adsorption capacity by comparison with the HAP previously presented, and the authors described the uptake process as a mixture of ion exchange, precipitation and electrostatic interactions. The stability of the adsorbent was also established by regeneration studies, the material remaining effective after four regeneration cycles. Adsorption of Heavy Metals Using Synthesized Hydroxyapatite The synthesis of hydroxyapatite can be achieved by various routes, including heterogeneous and homogeneous chemical deposition, hydrothermal synthesis, sol-gel methods and many others [64]. Chemical synthesis methods allow the development of materials with tailored morphological properties and compositions for the envisaged applications [65]. The recent trend in the development of hydroxyapatite-based materials is towards as-simple-as-possible synthesis methods with scale-up possibilities, in order to achieve their application at the industrial scale. Another important aspect in the synthesis of HAP is the necessity of a homogeneous composition of the final material. Generally speaking, one approach to achieve this goal is the use of highly soluble calcium compounds [64]. Another parameter encountered in most of the synthesis methods is the requirement of a high pH value; the replacement of the classical concentrated ammonia with milder alternatives allows superior control of the process [64]. Other important aspects to be controlled during the synthesis process, especially for environmental applications, are the particle dimensions, the specific surface area and porosity, as well as the presence and potential leaching of other elements (in the case of substituted or doped hydroxyapatite). Table 4 presents some of the most recent published works detailing the application of hydroxyapatite for the uptake of heavy metals (abbreviations: SS, specific surface area; PV, pore volume; PD, pore diameter). The most frequently encountered synthesis method is co-precipitation. This method can lead to the development of hydroxyapatite that can be used as such or in complex combination with other materials. For example, Ivanets et al. [67] studied the influence of the crystallinity degree and porous structure on HAP's adsorption properties for metals from a multi-component solution (Cd 2+ , Co 2+ , Cu 2+ , Fe 3+ , Ni 2+ , Pb 2+ and Zn 2+ ). The variation in crystallinity and porous structure was achieved by the use of Mg 2+ ions and hydroxyethylenediphosphonic acid (HEDP). The authors obtained an almost complete adsorption of Cd 2+ , Cu 2+ , Fe 3+ , Pb 2+ and Zn 2+ within 6 h, at a sorbent dose of 5-10 g/L, when using the HAP prepared with Mg 2+ ions. On the other hand, the HAP prepared with HEDP showed the highest efficiency for the removal of Cd 2+ , Cu 2+ , Fe 3+ , Pb 2+ and Zn 2+ ions.
The proposed order of adsorbent efficiency was Mg 2+ -prepared HAP > HAP > HEDP-prepared HAP, while the most probable adsorption mechanisms are ion exchange and dissolution-precipitation. The relatively high adsorption capacity of nano-sized HAP is expected, considering that, according to the study of Zheng and Zhang [99], commercial HAP proved to be a superior adsorbent for copper and zinc ions in static and cylinder dynamic experiments, compared with other often-used materials, such as medical stone, nano-carbon and biochar. Other encountered methods are hydrothermal synthesis, sol-gel, microwave synthesis and ultrasonication. All these methods usually lead to rod-shaped particles, well within the nanometric range, with a good adsorption capacity towards heavy metals. Calcium deficiency was also proven to have a direct impact on the metal uptake capacity of HAP. This was demonstrated by Van Dat et al. [69], who obtained calcium-deficient HAP by selecting Ca/P ratios below the stoichiometric one. According to the presented results, the calcium-deficient HAP has a superior metal uptake capacity compared with the non-deficient HAP. The authors also hypothesized that at lower metal concentrations (under 0.01 mol/L) the ion-exchange mechanism was dominant, while at higher concentrations additional precipitation also occurred. As such, the calcium-deficiency strategy can be applied for enhancing the HAP metal uptake capacity and providing superior adsorbent characteristics. Another important parameter is the synthesis temperature. The influence of this parameter on HAP synthesis via a hydrothermal route, and on the U(VI) uptake capacity, was evaluated by Zheng et al. [105]. As the synthesis temperature influenced the final morphology of the HAP (nanosheets, nanoribbons and, at a working temperature of 180 °C, blocky structures being obtained), other important characteristics were also influenced, including the surface area and the pore volume, in direct dependence on the temperature. The U(VI) uptake capacity was also influenced, ranging from 336.58 to 403.91 mg/g. The adsorption process obeyed a pseudo-second-order kinetic model and a Freundlich adsorption isotherm model. By using modern techniques, such as electrospinning, other HAP morphologies can be obtained (such as hollow fibers), which also proved to have a good efficiency in heavy metal uptake [79]. The variation of HAP morphology as a result of changes in pH, reaction temperature and reactant ratio was evaluated by Zou et al. [72]. The authors obtained a large variety of morphologies, including fluffy spongy deposits, porous spheres, solid spheres and nanotubes, with dimensions varying between a few nanometers and hundreds of nanometers. The strongest effect on the final morphology was assigned to the pH, while the reactant ratio had little effect on the morphology, only affecting the HAP yield. The porous nanosphere morphology was chosen for further experiments, exhibiting a good adsorption capacity for individual ions (Table 4), while for the complex matrix, involving the presence of all ions, HAP exhibited an uptake of >99% for Hg 2+ , Pb 2+ , Cu 2+ , Ni 2+ and Co 2+ ions. The kinetics study, performed using Pb(II) as a model pollutant, revealed that the adsorption process followed a pseudo-second-order kinetic model and a Langmuir isotherm model, with a maximum uptake capacity of over 250 mg/g.
The presence of organic pollutants (such as oxytetracycline, OTC) has a different effect on the metal uptake by HAP, depending on the studied metal. The adsorption of copper was found to be greatly increased up to 0.25 mmol/L OTC, the lead uptake increased up to 0.10 mmol/L OTC, followed by a decrease below the level of single-metal presence at 0.25 mmol/L OTC, while the cadmium uptake was not influenced by the OTC presence [74]. The recorded results were explained by the authors through the formation of organic-metallic complexes with different affinities towards HAP, or the blocking of metal sites by OTC (in the case of lead). Another important practical aspect is related to the recovery of the adsorbent from the solution. This can be easily achieved by the development of composites based on HAP and a magnetic phase. Several authors presented the incorporation of magnetic phases in HAP (i.e., α- or γ-Fe 2 O 3 , Fe) and the application of the obtained magnetic composites for the removal of U(VI), Cd(II) or Cu(II). In all cases, high maximum adsorption capacities were obtained in processes obeying the pseudo-second-order kinetic and Langmuir isotherm models. Rodrigues et al. [92] synthesized HAP from calcium hydroxide and phosphoric acid and used it to develop adsorbent materials with hydrotalcite (HT) and multi-wall carbon nanotubes (MWCNT) by ultrasonic and hydrothermal treatment. The authors applied the HAP/HT and HAP/HT/MWCNT composites (at two carbon nanotube concentrations) for the removal of Cr(VI) in fixed-bed column experiments. The experimental results proved a very good uptake capacity (between approx. 4 g/g and 5.8 g/g), increasing with the increase in MWCNT content. The kinetic studies indicated a feasible, spontaneous and endothermic physisorption, which could be applied for the removal of Cr(VI) from leather industry wastewater. A multi-component adsorbent was also proposed by Hokkanen et al. [93], using HAP, cellulose and bentonite clay, for the removal of As(III). The process was proven to follow a pseudo-first-order kinetic model and a Langmuir isotherm model. The best results were obtained in the pH range 4-7, with the equilibrium being reached within 5 min. Synthesized HAP can also be found in more complex adsorbent structures, such as composites with cyclodextrin, activated carbon, carbon nanotubes, hydrotalcite, bentonite or alendronate, or even coated on ceramic supports as filtration membranes, with significant adsorbent activity against a series of heavy metals (Cd 2+ , Cu 2+ , Cr 6+ , As 3+ , Pb 2+ ) over a wide range of pH values, metal and adsorbent concentrations. Mechanistic Aspects and Current Limitations The development of the proposed adsorbents, either of natural or synthetic origin, can be summarized according to Figure 1. The natural route involves two major possible pathways, as previously described. One is the recovery of the calcium source and its introduction into the synthetic route (as depicted in Figure 1), while the other is the direct recovery of hydroxyapatite through mechanical processing and carbonization (and in some cases calcination) of the raw material (typically animal bones).
Regarding the synthetic route, a calcium and a phosphorus source are required, followed by different synthesis methods (typically co-precipitation, but other methods, such as hydrothermal synthesis, ultrasonication or sol-gel, are also encountered) and post-treatments (drying, vacuum-drying, calcination). The presence of heavy metals in water effluents can have a negative influence over very long periods of time [138]. Most heavy metals originate from various industries, such as the textile industry, mining, smelting, fertilizer use, sewage discharge, etc. [139,140]. Regardless of their origin, in the absence of appropriate decontamination methods, heavy metals end up in water effluents, having a direct negative effect on flora, fauna and, finally, human health (Figure 2). The development of appropriate decontamination methods (for example, adsorption) can not only protect us against these effects, but also allows the recovery of heavy metals, thus lowering the pressure on existing resources. At the same time, we must consider that the application of efficient decontamination methods has to be affordable, in order for industry and the general public to be ready to accept and implement it. In this regard, not only is the adsorption process more economically efficient compared with other methods, but it also has smaller associated costs (including operating costs), while the proposed adsorbents are also low-cost, thus offering a viable depollution alternative [141][142][143].
Hydroxyapatite can be easily produced on a large scale, thus allowing the scale-up of the technologies, while at the same time being rapidly regenerated; the metals can then be recovered without any expensive procedures. Hydroxyapatite has also proven to be easily tunable for particular applications, its extraordinary versatility allowing its use under the hardest working conditions. As can be seen from the examples provided above, HAP adsorbents are not only efficient in the uptake of common pollutants (such as Pb 2+ , Cu 2+ , Cd 2+ , Cr 6+ , Ni 2+ , Zn 2+ , etc.), but can also take up metals which are very dangerous and hardly removable by other methods (such as As 3+ ) or radioactive metals (U 6+ serving as a model for such elements), under the conditions encountered in practice (in terms of pH value and temperature). Hydroxyapatite-based adsorbents present a multifunctional adsorption capacity, as revealed by FTIR and XRD studies of the modifications observed on the hydroxyl, carbonate and phosphate groups and on Ca 2+ through substitution, ion exchange, complexation or precipitation. Thus, substitution takes place at Ca 2+ by metal ions in the form of M 2+ , at phosphate groups by different oxyanions (AsO 4 3− , VO 4 3− , etc.) and at OH − groups by monovalent anions (F − , Cl − , etc.). In addition, composite HAP/organic structures can be significantly improved in terms of the pollutant species that can be eliminated from aqueous solutions. Regarding the uptake capacity of the proposed materials, the general conclusion that can be drawn from the presented studies is that the capacity increases with the increase in the specific surface area. Even though natural hydroxyapatite presents a relatively good adsorption capacity towards a variety of metals, the capacity is increased when the natural resource is used only as a calcium source, as in the case of lead, for example, for which Q max increases from around 250 mg Pb(II)/g for HAP originating from bovine horns to approx. 700 mg/g for HAP obtained using eggshells as the calcium precursor [43,45]. A very good adsorption is achieved using HAP obtained from fish scales for Hg removal (over 200 mg/g) [52].
The adsorption capacity of synthesized HAP is increased not only by comparison with the natural materials, but also by functionalization of the material or by incorporation into composites with increased adsorption capacity. For example, the adsorption capacity of esterified HAP reaches approx. 2400 mg Pb(II)/g [81], and it exceeds 2000 mg U(VI)/g for mesoporous HAP obtained by freeze-drying [136]. Most of the studies concerning the adsorption of heavy metals on natural-derived or synthetic HAP suggest that the adsorption process obeys a pseudo-second-order kinetic model, supporting chemisorption as the main process (with some of the authors also suggesting the presence of physisorption, demonstrated by the applicability of a pseudo-first-order kinetic model), while the most appropriate isotherm to fit the experimental data was the Langmuir isotherm, suggesting monolayer adsorption (at a fixed number of well-defined sites). The less frequently encountered Freundlich isotherm suggests a different model, presuming that the concentration of the adsorbate on the adsorbent surface increases with its concentration in solution. The Liu isotherm (one of the rarely applied models) is in turn a combination of the two, suggesting that the adsorbate has preferred sites for occupation, which can in turn become saturated. In our opinion, although less encountered in the literature, this isotherm model should be further studied for the adsorption of heavy metals on HAP adsorbents. The wide acceptance of the pseudo-second-order kinetic model for heavy metal uptake by HAP implies that the sorption process is controlled by chemical reactions, such as ion exchange, surface complexation and/or precipitation, and to a lesser extent by physical adsorption (as would be suggested by the pseudo-first-order kinetic model) (Figure 3). An important parameter which governs the adsorption mechanism to a large extent is the pH value correlated with pH PZC , which predicts to some extent the solid surface charge and the metal speciation. Hence, for some pH intervals, in which the solid particle and the pollutant have opposite charges, electrostatic attraction occurs, while in other conditions ionic exchange or precipitation is the main adsorption route. Regeneration of such adsorbents can be easily achieved using, for example, HNO 3 (0.1 M), Ca(NO 3 ) 2 (0.5 M, pH = 3) or HCl (1.5 M) solutions at room temperature, as demonstrated by Zeng et al. [43], Ma et al. [119], Shen et al. [125] and Ahmed et al. [127], the adsorbent materials preserving their properties after several cycles. Thus, the regeneration step can be carried out in industrial installations, without the need for very complicated equipment. Once recovered, the heavy metals can be reintroduced into the industrial processes. The application of hydroxyapatite-based adsorbents raises another issue that should be addressed more thoroughly in future studies, namely the competition between metallic species for the binding sites in real-life multicomponent cases, as observed by some of the cited authors and also in studies regarding other natural adsorbents [144]. Another important aspect is related to data presentation in the published studies.
In our opinion, future studies regarding the adsorbent properties of HAP should focus on establishing the mechanisms through which the processes take place, as well as on working in conditions relevant for real systems. Conclusions and Future Perspectives Wastewater treatment remains a critical issue globally, despite various technological advancements and breakthroughs. Apatitic materials offer a sustainable, safe and clean route for pollutant removal from contaminated environments, with the great advantage over other materials that they can be obtained from natural sources and, more importantly, from waste. Due to their chemical and physical characteristics, they can successfully replace expensive materials; they can also be easily regenerated or converted from used materials into value-added products, thus applying a zero-waste concept. The preparation method of apatitic materials is an important step which can direct the efficiency of the depollution technology. Judging from the presented data, the most promising preparation approach is the synthetic route, for which the final morphology can be more easily controlled to obtain, for example, mesoporous structures with enhanced adsorption capacity. However, the natural route should not be disregarded, especially when discussing the synthesis process from a bio-economic perspective, considering the re-use of natural wastes. In addition, the particular synthesis method should be selected considering multiple factors, including but not limited to the availability of the raw materials, the targeted metals and their levels in the water matrices. These depollution systems represent important progress for environmental treatment, providing high efficiency without high costs and with easy logistics, with the primary condition of using optimized, tailored materials.
Future research is needed in order to obtain optimized materials which can be used in real water systems, where the matrix is very complex and, from the point of view of heavy metals, there is an abundance of various competing ions in different concentrations, from trace levels to high amounts. These challenges, related to the scale-up of technologies from the laboratory to commercial adsorption processes, must be addressed together with the economic limitations and the excessive use of chemicals. Even though the adsorption process is predominant and well-established, the development of low-cost and sustainable materials with enhanced selectivity and stability remains a primary challenge. Moreover, detailed studies on the adsorption mechanism are still required for more efficient adsorption processes and optimized installation designs, and toxicity, selectivity, multi-metal adsorption and reusability are some of the key challenges to be addressed in the coming years.
Gross transcriptomic analysis of Pseudomonas putida for diagnosing environmental shifts Summary The biological regime of Pseudomonas putida (and any other bacterium) under given environmental conditions results from the hierarchical expression of sets of genes that become turned on and off in response to one or more physicochemical signals. In some cases, such signals are clearly defined, but in many others, cells are exposed to a whole variety of ill-defined inputs that occur simultaneously. Transcriptomic analyses of bacteria passed from a reference condition to a complex niche can thus expose both the type of signals that they experience during the transition and the functions involved in adaptation to the new scenario. In this article, we describe a complete protocol for the generation of transcriptomes aimed at monitoring the physiological shift of P. putida between two divergent settings, using as a simple case study the change between a homogeneous, planktonic lifestyle in a liquid medium and growth on the surface of an agar plate. To this end, RNA was collected from P. putida KT2440 cells at various times after growth in either condition, and the genome-wide transcriptional outputs were analysed. While the role of individual genes needs to be verified on a case-by-case basis, a gross inspection of the resulting profiles suggested that cells cultured on solid media consistently had a higher translational and metabolic activity, stopped production of flagella and were conspicuously exposed to a strong oxidative stress. The methodology described herein is generally applicable to other circumstances for diagnosing lifestyle determinants of interest. Introduction Before the onset of genomic technologies, the way to inspect complex environmental adaptations of microorganisms largely relied, whenever possible, on the generation of mutant libraries followed by phenotypic characterization. Today, the most popular approach to the same end typically starts with the generation of a differential transcriptome for comparing genome-wide gene expression patterns in condition A vs. condition B. While the platforms and technologies available to this end have evolved over the years, for example, from DNA arrays to deep sequencing of transcripts (RNA-Seq), the results deliver a list of genes that go up or down depending on the specific scenarios and their differences (Wang et al., 2009). Such profiles not only expose global physiological responses, but also pinpoint the roles of distinct genes that can then be separately studied. Unsurprisingly, Pseudomonas putida has not been alien to such technical and conceptual developments, and a suite of transcriptomes of this bacterium has been published in recent years with different methods for inspecting the responses of this microorganism to different physicochemical settings (Yuste et al., 2006; Kim et al., 2013, 2016; La Rosa et al., 2015; Bojanovic et al., 2017; Molina-Santiago et al., 2017). However, such experiments were run by different laboratories with diverse RNA extraction and preparation methods, and with different data analysis and representation tools, which makes comparisons challenging. In this context, the protocols below are an attempt to set a standardized workflow for the generation of complete and reliable transcriptomes of Pseudomonas putida aimed at identifying genes and functions involved in lifestyle transitions.
To this end, we have chosen two simple conditions that are habitual in the laboratory but involve completely different sets of circumstances: growth in liquid medium in a rotating tube and growth on the agar surface of a Petri dish. The nutritional conditions in either case are the same (see below), but the rest of the physicochemical settings are very different. Comparison of the two should thus shed some light on what is being sensed as a new niche and also on what functions are involved in the transition and in thriving in either place. Obviously, one major change is the shift from a basically homogeneous, liquid and water-saturated environment towards one that is semisolid, exposed to aerial desiccation and with nutrients available only from the lower, stiff substratum layer. Genes are thus anticipated to appear involved in surface sensing and in changing from a planktonic to a sessile lifestyle (Schembri et al., 2003; Oggioni et al., 2006; Dotsch et al., 2012), but many others could be expected as well. It is remarkable that, despite intensive studies of P. putida, many questions on the physiology of this bacterium on an agar plate remain unanswered. To shed light on these uncertainties, we simply compared the transcriptomes generated with RNA extracted from P. putida KT2440 grown in liquid M9 mineral medium supplemented with citrate as the sole carbon source, either in tubes on rotation or on Petri plates made with the same components but containing 1.5% bacteriological agar. As shown below, the results provided hints about the functions and pathways involved in adaptation from liquid to solid media, as well as indications of the type of environmental conditions experienced by bacteria on an agar surface. Culture incubation conditions For this experiment, the strain Pseudomonas putida KT2440 was used. Therefore, the required temperature for cultivation was 30°C, but this can be changed depending on the microorganism or strain under study. (i) Place an inoculum from the glycerol stock in a 15 ml tube with 5 ml of M9 minimal medium supplemented with 0.2% (w/v) citrate and rotate overnight (O/N) with good aeration at 30°C. (ii) After that incubation period, inoculate 5 ml of M9 0.2% (w/v) citrate liquid medium in the same type of tube with 100 µl of the O/N culture and place it in a rotator wheel at 30°C. (iii) Synchronized with step 2, take another 100 µl of the O/N culture and spread it over 25 ml plates of M9 0.2% citrate solidified with 1.5% (w/v) agar. Spread the culture all over the surface using a sterile glass spreader. Place the plates at 30°C. RNA extraction For the RNA extraction and the following DNA degradation, it is important to utilize clean material reserved for this use only. To this end, tips (preferably with filters) should be fresh or double-autoclaved. Pipettes and the bench should be cleaned thoroughly with ethanol, and brand-new gloves are necessary during the whole process. Avoid touching untreated surfaces during the application of the methods. The procedure requires having an RNeasy Kit (Qiagen, Venlo, Netherlands) at hand. Buffer RLT, included in the kit, should be mixed fresh with 1% β-mercaptoethanol right before use. Also, a fresh 3 mg ml−1 lysozyme solution in 10 mM Tris-HCl (pH = 8.0) should be ready at the time RNA extraction starts. (i) After the required incubation period, collect cells from the cultures in liquid and solid media.
In the case of liquid media, the whole 5 ml has to be centrifuged in order to obtain enough RNA using a column of the RNeasy kit (see footnote 1), and spin down as in step 8. (x) Transfer the column again to a new collection tube and discard the previous tube. Add 500 µl of buffer RPE provided by the kit (see footnote 1) and spin the same way as previously. Discard the flow-through. (xi) Add 500 µl of Buffer RPE and centrifuge for 2 min at maximum speed. (xii) Transfer the column to a new collection tube and centrifuge for 1 min at maximum speed. (xiii) Move the column to a double-autoclaved 1.5 ml tube and elute with RNase-free water. Centrifuge for 1 min at maximum speed. (xiv) Repeat step 13 in order to have a total elution volume of 60 µl. DNA removal We suggest employing the DNA-free kit (Ambion) (Invitrogen, Carlsbad, CA, USA) for the removal of DNA from the samples, as follows. (i) Add 6.5 µl of 10× Buffer to the 60 µl RNA elution from the previous step. (ii) Add 1 µl of DNase I to each tube and mix gently by pipetting (not vortexing). (iii) Incubate at 37°C for 60 min. (iv) Before use, the DNase inactivation buffer should be thawed slowly during the incubation of step 3. (v) Add 7 µl of inactivation buffer and leave the tube for 2 min at room temperature, flicking it every 30 s. (vi) Centrifuge at 10 000 g for 1.5 min. (vii) Transfer the supernatant to a clean RNase-free 1.5 ml tube very carefully. Re-purification (i) Adjust the sample volume to 100 µl with RNase-free water. (ii) Add 350 µl of β-mercaptoethanol/RLT buffer and mix using the pipette. (iii) Add 250 µl of ethanol and mix by pipetting again. (iv) Transfer the 700 µl to a mini-spin silica membrane column and centrifuge for 15 s at 16 000 g. Discard the flow-through and place the column in a new collection tube. (v) Add 500 µl of the RPE buffer provided by the kit and spin as in step 4. Place the column in a new collection tube as previously. (vi) Repeat the previous step, but centrifuging for 2 min at 16 000 g. (vii) Place the column in a 1.5 ml tube and add 50 µl of RNase-free water. (viii) Centrifuge for 1 min at the same speed. Second DNase treatment and RNA quality control Repeat section 3 (DNA removal), taking special care to transfer only the supernatant to a new RNase-free 1.5 ml tube and not the pellet resulting from the last centrifugation. The RNA concentration and its purity can be measured with a NanoDrop machine (Eppendorf, Hamburg, Germany). In addition, we recommend performing a qRT-PCR analysis of all the samples with known primers, using P. putida genomic DNA as a positive control, in order to ensure that the RNA preparations give no amplification (sometimes the DNA is not completely degraded). Should this happen, the DNA elimination step should be performed once more. Primers usable for this control included, in our case, sequences targeting the polysaccharide transporter PP_3132 in the pea operon and PP_1795 in the peb operon of P. putida (Table S1). RNA-Seq The actual deep sequencing of the high-quality RNA samples prepared as explained above is typically outsourced to a core facility of the institution where the experiments are done, or arranged with a commercial sequencing company. For the data discussed below, the RNA samples were processed for sequencing at the transcriptome analysis service of the Helmholtz Centre for Infection Research (HZI) in Braunschweig, Germany. RNA-Seq data processing For the processing of raw data, we adopted a generally utilized methodology and recommend some bioinformatic programmes for every step.
Note, however, that the same tasks can be accomplished using other existing software. Prior to the analyses, raw reads in FASTQ format need to be checked for quality with a suitable tool, for example FastQC (Brown et al., 2017). After that, the procedure goes as follows: (i) Each sample has to have its single-end reads aligned against the reference genome, in our case the Pseudomonas putida KT2440 genome (assembly: GCA_000007565.2 ASM756v2). This can be done with Bowtie 2 (Langmead et al., 2009; Langmead and Salzberg, 2012). (ii) SAM alignment files are frequently converted to BAM files, sorted and indexed using SAMtools (Li et al., 2009; Li, 2011). (iii) In order to visualize the files, the Integrative Genomics Viewer (IGV) is an appropriate programme (Robinson et al., 2011; Thorvaldsdottir et al., 2013). (iv) HTSeq (Pyl et al., 2014), set in union mode, is a valuable tool to count the number of reads per gene and quantify expression levels. Gene coordinates are necessary for this step; they can be downloaded in GFF format from the suitable webpage. For our analyses, we used https://www.ncbi.nlm.nih.gov/genome/?term=Pseudomonas%20putida%20KT2440 (v) Subsequent normalization of read counts and differential gene expression analysis are required. For that objective, DESeq2 (Love et al., 2014) works correctly. (vi) Preparation of tables should include several columns of informative data, such as raw and normalized counts, logFoldChanges, P-values, P-adjusted values and a functional description for each gene, in order to facilitate their transfer and visualization using the FIESTA viewer (Oliveros, 2007a). The comparisons were set taking the results of the liquid culture as the reference, so positive fold-change values are interpreted as upregulation on solid media, while negative fold-change values represent downregulation on agar plates. For further details about this section and additional features of the above-mentioned programmes, see Oliveros (2017). A scripted sketch chaining these steps is given at the end of this section. Genes undergoing significant variations can be selected using the FIESTA viewer v.1.0 software (Oliveros, 2007a,b), setting the parameters to a P-adjusted value < 0.001 and a fold change of < −2 or > 2. However, these can be changed whenever required, and < 0.01 and < 0.05 are acceptable options, too. Functional analyses The list of selected genes with a differential regulation has to go through a last processing step. To this end, we suggest uploading the resulting lists into the DAVID v6.8 platform (Huang da et al., 2009a,b). DAVID software was used in order to organize the list of genes according to their functions and their relevance to this study. For this purpose, the EASE Score, also known as the modified Fisher's exact P-value (Hosack et al., 2003), is a parameter that considers the number of differentially expressed genes from a specific functional group, and it compares these genes not only to the total number in the results, but also to the number of annotations with that specific role present in the genome. Therefore, the EASE Score is a good statistic to visualize how important a group of genes is for the experimental conditions tested: a smaller EASE Score means a higher enrichment of the annotation, so, to make the statistic more intuitive, the -log(EASE Score) is usually calculated and plotted. In that way, a higher value of the -log(EASE Score) also means a higher enrichment of the function.
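As an illustration only, the sketch below chains the alignment and counting steps (i), (ii) and (iv) from Python and then applies the selection threshold and the -log(EASE Score) transform described above. File names, the Bowtie 2 index prefix, the GFF attribute used for gene identifiers and the column names of the exported tables are all assumptions to be adapted to the actual data set; DESeq2 itself runs in R, and only a hypothetical CSV export of its results is consumed here.

```python
import subprocess
import numpy as np
import pandas as pd

# --- Steps (i), (ii) and (iv): align, sort/index and count one sample ---
sample = "solid_6h_rep1"                                  # hypothetical file name stem
subprocess.run(["bowtie2", "-x", "KT2440_index",          # index prefix (assumed)
                "-U", f"{sample}.fastq", "-S", f"{sample}.sam"], check=True)
subprocess.run(["samtools", "sort", "-o", f"{sample}.bam", f"{sample}.sam"], check=True)
subprocess.run(["samtools", "index", f"{sample}.bam"], check=True)
with open(f"{sample}_counts.txt", "w") as out:
    # htseq-count in union mode; feature type and ID attribute depend on the GFF used
    subprocess.run(["htseq-count", "-f", "bam", "-m", "union",
                    "-t", "gene", "-i", "locus_tag",
                    f"{sample}.bam", "KT2440.gff"], stdout=out, check=True)

# --- Steps (v)-(vi): select differentially expressed genes from a DESeq2 export ---
res = pd.read_csv("deseq2_solid_vs_liquid_6h.csv")        # hypothetical export
# |fold change| > 2 corresponds to |log2FoldChange| > 1; positive values are
# genes upregulated on solid medium (the liquid culture is the reference)
selected = res[(res["padj"] < 0.001) & (res["log2FoldChange"].abs() > 1)]
selected.to_csv("differential_genes_6h.csv", index=False)
print(f"{len(selected)} differentially expressed genes selected")

# --- Enrichment plotting aid: -log10 of the EASE Score, higher = more enriched ---
david = pd.read_csv("david_functional_chart_6h.csv")      # hypothetical export
david["minus_log_ease"] = -np.log10(david["EASE"])        # column name assumed
print(david.sort_values("minus_log_ease", ascending=False).head())
```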
Functional analyses

The list of selected genes with differential regulation has to go through a last processing step. To this end, we suggest uploading the resulting lists to the DAVID v6.8 platform (Huang da et al., 2009a,b). DAVID software was used in order to organize the list of genes according to their functions and their relevance to this study. For this purpose, the EASE Score, also known as the modified Fisher's exact P-value (Hosack et al., 2003), is a parameter that considers the number of differentially expressed genes from a specific functional group and compares these genes not only to the total number in the results, but also to the number of annotations with that specific role present in the genome. Therefore, the EASE Score is a good statistic to visualize how important a group of genes is for the experimental conditions tested: a smaller EASE Score means a higher enrichment of the annotation, so, to make a more intuitive representation of the statistic, the −log(EASE Score) is usually calculated and plotted. In that way, a higher value of the −log(EASE Score) also means a higher enrichment of the function. It has to be noted that this statistic considers neither whether the genes are negatively or positively regulated, nor the fold-change value. So, when more detailed information is needed, fold-change values should be consulted in the lists. For more specific records on distinct genes, the Pseudomonas Genome DB (Winsor et al., 2016) is a most useful resource. Although there are multiple ways to group data with the DAVID software, we propose a functional annotation chart setting with only four parameters: three types of Gene Ontology (GO) terms (GOTERM_BP_DIRECT, GOTERM_CC_DIRECT and GOTERM_MF_DIRECT) and one pathway (KEGG_PATHWAY; Huang da et al., 2009a,b). The DAVID output has to be downloaded and can be plotted using Excel and GraphPad Prism 6. Another useful online tool, for producing Venn diagrams, is VENNY 2.1 (Oliveros, 2007b).

Gross transcriptome results

The workflow of the protocol described above is sketched in Fig. 1. Bacteria grown on solid and liquid media were harvested at three different time points in order to have an illustration of three stages of the bacterial physiological state: 6 h as a representation of the exponential phase, 12 h as the time of early stationary phase and 24 h as a time of late stationary phase. Two biological replicates were run per condition. The gross results of the experiments are summarized in Fig. 2. The number of genes that came out as differentially expressed under the selection parameters applied (P-adjusted value < 0.001 and fold change < −2/> 2) was 465 when liquid and solid cultures were stopped at 6 h, 273 at 12 h and 736 at 24 h (full lists are attached in Tables S2-S7). One remarkable result was the lack of differential transcription of rpoS (PP_1623) at comparable times in liquid and solid media. Since this gene codes for the stationary-phase sigma factor, this result indicated that the growth stage is comparable in either condition and thus that the differences ought to be due to other physicochemical circumstances. In addition, rpoD (PP_0387), the RNA polymerase σ70 factor, sometimes used as a negative control for qRT-PCRs, also remained constant in all the comparisons.

Transcription and translation activities

As shown in Fig. 3 and Table 1, translation seems to be the most differentially regulated function at 6 h of incubation. There was a conspicuous appearance of genes related to ribosome subunits and translation itself, with some of the lowest EASE Scores and, therefore, high values of the negative logarithm. This general upregulation is connected to the observation that some genes involved in transcription are also positively regulated, including sigma factors. These results suggest that bacteria cultured on solid media have a higher metabolic activity and that more proteins are produced as a consequence. These are probably enzymes and structural proteins that allow bacteria to face the characteristic conditions of solid surfaces, for example special requirements to internalize nutrients, production of extracellular polymeric substances (EPS), resistance to stresses (such as desiccation and oxidation) and other functions which demand a higher level of transcription and translation. This upregulation is generally maintained through 12 and 24 h, although there were notable exceptions that included, for example, regulators of iron metabolism, e.g. the sigma factor σ19 and the pvdI gene, which were clearly downregulated at 6 h (although the differences tend to level off with time).
The same was true for the regulators of flagellar production and motility, such as fliA or fleS, which become remarkably undertranscribed at later times (see Tables S1-S7).

Motility

Among the most affected genes in the transcriptomic comparisons were those involved in the regulation of flagellar activities, which become of considerable importance at 12 and 24 h (Fig. 3, Table 1). Several GO and KEGG annotations can be noticed as relevant functions with a high −log(EASE Score), such as flagellar motility, motor activity, flagellar basal body or flagellar assembly. As a part of the flagellar motor, the stator portion is a very sensitive structure under sophisticated regulation that involves external factors such as viscosity or ion pressure, and it also acts as a modulator of flagellar assembly and movement (Baker and O'Toole, 2017). If we take a close look at Table 1 and Tables S2 and S3, just a few flagellar genes appear negatively regulated after 6 h of incubation (flhB, fliJ, fliI, flgE). But at 12 h, downregulation involves not only individual genes but also full operons (e.g. PP_4341 to PP_4394), and their fold-change numbers are even lower at 24 h. In Table 1, we can also notice the onset of proteins with ATP-binding motifs, with a notable EASE Score at 12 and 24 h, perhaps also related to flagellar functions. Downregulation of some ORFs with homology to eukaryotic cilia (and thus likely to be involved in some type of motility) became noticeable at 12 and 24 h of incubation as well. Moreover, bacterial chemotactic activities became significantly undertranscribed at 12 h and after (PP_4332, PP_4888, PP_5020). Taken together, these observations strongly suggest that Pseudomonas putida stops the production of flagella, their assembly and the action of the rotor when cells are on solid, water-unsaturated surfaces, presumably because they cannot swim and the maintenance of the machinery has a high metabolic cost (Martinez-Garcia et al., 2014).

Carbon metabolism

Central metabolism annotations such as the tricarboxylic acid (TCA) cycle and the intimately related oxidative phosphorylation are also overrepresented at 6 h of incubation according to the main GO terms and KEGG pathways appearing with low EASE Scores. It has to be noticed that P. putida KT2440 has a special metabolic network for central carbon metabolism that differs in some characteristics from the more studied E. coli pathways: although P. putida possesses most of the machinery for the most common Embden-Meyerhof-Parnas (EMP) glycolytic pathway, the gene pfk is missing from the genome and consequently the EMP pathway does not work as such. Instead, the EMP enzymes merge in a gluconeogenic direction with the Entner-Doudoroff (ED) pathway and the pentose phosphate (PP) pathway to create a distinct cycle that then links to the TCA cycle (Chavarria et al., 2013; Nikel et al., 2015; Sanchez-Pascuala et al., 2017). On the other hand, glyoxylate shunt activity is very low in this microorganism. Taken together, these singularities endow P. putida with a high level of reducing power and, therefore, a higher resistance to oxidative stress (Nikel et al., 2015). Consistently with this, we found an upregulation of the ED, PP (upregulated rpe, tktA) and TCA pathway enzymes, especially in its reductive branch: lpdG, sucB, sucA, PP_2652 and PP_0897. This effect was accompanied by activation of pathways further from central carbon metabolism, including the oxidation of fatty acids.
This suggests that cells on solid medium have higher energy requirements (at least at 6 h) than cultures at an equivalent growth stage in liquid. We also noticed a positive regulation of the EMP pathway (tpiA, eno), the glyoxylate shunt and biosynthesis from C2 compounds (at the level of pyruvate dehydrogenase activity, through the enzymes acoA, ace and lpdG), indicating that intermediary metabolism is also more active in cells placed on solid medium (Arce-Rodriguez et al., 2016; Reeves et al., 1996; Ramos et al., 2001). In sum, a general upregulation of carbon metabolism on agar plates becomes consistently apparent at all times (Table 1 and Tables S2-S4).

Table 1 shows that peroxidase activity constitutes an important function at the earlier times (6 and 12 h). Later (24 h), the oxidative stress of solid cultures seems to demand fewer specific redox homeostasis functions. Taking a closer look at the list of results, we can also see a reorganization of the genes that act against oxidation over the three time points, where the regulation of the activity of superoxide dismutases is more important at 6 h of incubation and the activity of catalases becomes more relevant at 12 h. This suggests a high presence of superoxide radicals at 6 h and a more prevalent action of hydrogen peroxide at 12 h and later. Remarkably, the catalase gene katA increased its fold change from 45.36 in the 6 h comparison to a dramatic 253.01 at 12 h; finally, at 24 h, katA becomes the most upregulated gene of all, with a fold-change value of 593.45.

Stress responses

The large overregulation of this group of oxidative stress-related genes is due in part to the conditions of the solid media, for example a higher level of desiccation and exposure to the metabolism of neighbours in other layers. Such stress could also be traced directly to attachment to a solid surface, as has been documented in other cases. For example, some evidence relates oxidative stress with a higher production of c-di-GMP in Klebsiella pneumoniae. In this instance, the c-di-GMP phosphodiesterase YjcC is directly involved with the oxidation levels of the bacteria, as its transcription is controlled by the soxRS regulon (Huang et al., 2013). In P. aeruginosa, ROS and the oxidant agent hypochlorite (HClO) induce the expression of several diguanylate cyclases and biofilm formation as a possible protection strategy against oxygen radicals (Chua et al., 2016; Echeverz et al., 2017; Strempel et al., 2017), and in E. coli, the attachment ability can be induced by the stressor agent paraquat, mediated by the PilZ-domain protein DgcZ (Lacanna et al., 2016; Echeverz et al., 2017). The interplay between attachment, oxidation and c-di-GMP is intriguing and deserves further study. Other stress-related genes that become overexpressed on solid medium include PP_4707 (water subsaturation, unpublished), PP_4855, PP_1353, cmpX and some extracytoplasmic sigma factors.

Other functions

Other genes and functions that appear in Table 1 become manifest at only one time point of the transcriptomic comparisons. For example, at 6 h, solid cultures show a drop in the metabolism of aromatic compounds and in the transport of those molecules, but these functions are not differentially regulated at 12 or 24 h. We also found a downregulation of iron metabolism genes, as well as of important transporters and transcription factors for these functions, such as pvdS.
Here too, the difference tends to disappear at 12 and 24 h, when the markers of iron metabolism tend to be the same under the two growth conditions. Similarly, pathways related to some amino acids such as lysine, tyrosine or arginine are divergent between solid and liquid medium at 6 h, but not at later times. It is remarkable that many components of the electron transport chain (e.g. the nuo genes) also become overexpressed at 6 h but not at 12 or 24 h. This could be related to the situation hinted at by the data above, namely that bacteria need both more energy and extra reducing power to face oxidative stress. In contrast, some chemotaxis activities peak at 12 h, whereas other functions related to the storage of carbon sources (e.g. fatty acid biosynthetic activities and starch metabolism) change by 24 h. We can even find genes encoding polyhydroxyalkanoate granule-associated proteins (PP_5007, PP_5008) in the results list, suggesting that these modes of carbon saving become important. Other activities that pop up at 24 h include biosynthesis of aromatic amino acids, further responses to oxidative stress, ammonium transport, cytoplasmic activities and cell morphogenesis.

Further inspection of the gene/function lists of Table 1 and Tables S2-S4, beyond the classification with DAVID, allows identification of upregulated genes of type VI secretion systems (T6SSs). T6SSs have been described as molecular devices that perform different activities, in particular the delivery of toxins to eukaryotic or (mostly) prokaryotic neighbours, biofilm formation and regulation of some genes (Silverman et al., 2011). Among our data, we find increased transcription of hcp1 (PP_3089) and other components of T6SS cluster 1 (from PP_3088 to PP_3100) at 6 h. Nevertheless, just a few components of T6SS cluster 3, such as tssM3 (PP_2627), are upregulated at 12 h of incubation. Surprisingly, very few of the 43 proteins annotated in P. putida's genome as containing diguanylate cyclase/phosphodiesterase domains appeared in our analysis with either negative or positive fold-change numbers. The molecule produced or degraded by them, c-di-GMP, is the main secondary messenger controlling attachment to solid surfaces (Römling et al., 2013). This is not exclusive to P. putida, as the same set of genes also varies little in Pseudomonas fluorescens growing on solid vs. liquid media (Dahlstrom et al., 2018). It is plausible that the major regulatory layer of these proteins is posttranslational (Dahlstrom et al., 2015, 2018), and thus their transcriptional regulation may not be that critical.

Conclusion

The protocols and the study case presented above exemplify a complete workflow for inspecting the major physiological differences that P. putida undergoes when grown under two different conditions, in this specific case the surface of agar in a Petri dish vs. growth in a liquid medium. The most conspicuous dissimilarities include a much higher translational and metabolic activity when growing on the surface (at ~6 h) and the inhibition of flagellar genes (and thus reduced motility). Moreover, cells grown on a surface also present multiple indications of being subject to a strong oxidative stress, perhaps related to direct exposure of their biomass to the oxygen in air. These are general trends that are supported by the large number of functional genes that consistently vary under these circumstances. However, studies on the role of specific genes need to be verified (e.g.
by qRT-PCR) and followed up on a case-by-case basis, an endeavour that is beyond the scope of this protocol article. The methodology described herein is generally applicable to any other situation where the consequences of environmental shifts between two or more conditions need to be assessed. Although the methods are optimized for P. putida strain KT2440, they should be applicable to other Pseudomonas strains and to Gram-negative bacteria in general. While the computational platforms for transcriptomic data analysis will surely keep improving over time, the reliability of the results will always depend to a large extent on the quality of the RNA preparations.

Supporting information

Additional supporting information may be found online in the Supporting Information section at the end of the article.
Table S1. Primers used for DNA detection in RNA samples by qRT-PCR.
Table S2. List of down- and upregulated genes comparing P. putida KT2440 cultures in solid vs. liquid media at 6 h of incubation.
Table S3. List of down- and upregulated genes comparing P. putida KT2440 cultures in solid vs. liquid media at 12 h of incubation.
Table S4. List of down- and upregulated genes comparing P. putida KT2440 cultures in solid vs. liquid media at 24 h of incubation.
Table S5. DAVID functional analysis results for the 6 h comparison.
Table S6. DAVID functional analysis results for the 12 h comparison.
Table S7. DAVID functional analysis results for the 24 h comparison.
Autoimmune Disease Classification Based on PubMed Text Mining

Autoimmune diseases (AIDs) are often co-associated, and about 25% of patients with one AID tend to develop other comorbid AIDs. Here, we employ the power of data mining to predict the comorbidity of AIDs based on their normalized co-citation in PubMed. First, we validate our technique in a test dataset using earlier-reported comorbidities of seven known AIDs. Notably, the prediction correlates well with comorbidity (R = 0.91) and validates our methodology. Then, we predict the association of 100 AIDs and classify them using principal component analysis. Our results are helpful in classifying AIDs into one of the following systems: (1) gastrointestinal, (2) neuronal, (3) eye, (4) cutaneous, (5) musculoskeletal, (6) kidneys and lungs, (7) cardiovascular, (8) hematopoietic, (9) endocrine, and (10) multiple. Our classification agrees with experimentally based taxonomy and ranks AIDs according to affected systems and gender. Some AIDs are unclassified and do not associate well with other AIDs. Interestingly, Alzheimer's disease correlates well with other AIDs such as multiple sclerosis. Finally, our results generate a network classification of autoimmune diseases based on PubMed text mining and help map this medical universe. Our results are expected to assist healthcare workers in diagnosing comorbidity in patients with an autoimmune disease, and to help researchers in identifying common genetic, environmental, and autoimmune mechanisms.

Introduction

Autoimmune diseases (AIDs) have a multifactorial etiology including genetic and environmental components. The incidence of AIDs is estimated at 3-5% worldwide. Autoimmunity is known to have a genetic component; yet, concordance rates of AIDs in monozygotic twins are incomplete, indicating a multifactorial etiology. The differences in autoimmunity incidence rates in different ethnic groups and geographical locations suggest the involvement of environmental factors such as lifestyle, exposure to infection, and nutrition, among others. Autoimmune diseases occur when the immune system attacks self-molecules as a result of a breakdown of immunologic tolerance to autoreactive immune cells [1]. These cells promote the secretion of high-affinity autoantibodies that target and react with the "self" molecules [2]. AIDs are sometimes comorbid, and a higher susceptibility to a second AID is regarded as an indication of potential common pathogenic mechanisms among autoimmune diseases [3]. AIDs are infrequent causes of death, yet they do contribute to mortality and to the quality of life, and their classification is an unmet need [4]. This study focuses on identifying a more extensive association by comparing all of the known AIDs according to their co-citation in the literature. The idea that different AIDs are associated is not new, and ~25% of patients with one AID tend to develop other comorbid AIDs [5]. A population-based study of 12 AIDs highlighted an overall association between autoimmune disorders [6]. The statistical analysis of the association in that study showed that the number of people with more than one AID was significantly higher than the number predicted under the null hypothesis. In another study, there were significantly more cases than expected of ALS associated with a prior diagnosis of asthma, celiac disease, type 1 diabetes, multiple sclerosis, myasthenia gravis, myxedema, polymyositis, Sjögren's syndrome, systemic lupus erythematosus, and ulcerative colitis [7].
These results are in agreement with the supposition of common pathogenic mechanisms between autoimmune diseases. Several approaches have been used to classify human AIDs [1,8]. These approaches have been complicated by the lack of major histocompatibility complex and autoantibody associations in diseases tentatively labelled as AIDs [9]. As such, classifications have led to revised definitions of autoimmunity, yet these fail to deal with self-directed tissue inflammation that is not autoimmune [10]. Furthermore, AID classification has been complicated by deviations in age of onset, gender ratio, monozygotic twin concordance, etc. As such, more recent classifications use SNP allele associations [11], environmental associations [12], and, most recently, machine learning [13].

Bioinformatics uses this wealth of information to analyze biological data. Text mining and citation counts are often used to identify trends and patterns in medicine [14]. Several studies have used text mining; notably, Bork et al. have captured the phenotypic effects of drugs based on the side-effect resources published by the FDA [15]. In another study, Jensen and coworkers used text mining to associate diseases and genes, and to establish a web-based database named DISEASES [16]. In the past, we have used frequency analysis of PubMed citations to successfully classify antibiotic resistance [17], antimalarial resistance [18], and Alzheimer's disease comorbidities [19]. Here, we use PubMed citations to document the co-occurrence of AIDs and determine their interrelated taxonomy. First, we validate our methodology on a small subset of AIDs, and then we apply it to classify AIDs.

PubMed Count

To mine for AID associations, we counted the number of co-citations on PubMed. The association of two AIDs was taken as the number of PubMed citations of the AID pair (e.g., "Lupus" AND "Sjögren"), divided and normalized by the number of PubMed citations for each disease alone (e.g., "Lupus" OR "Sjögren"), according to the equation below, in which AID1 and AID2 correspond to an AID pair:

Association(AID1, AID2) = N(AID1 AND AID2) / N(AID1 OR AID2),

where N(query) denotes the number of PubMed citations matching the query. The numerator of this equation is biased towards highly cited AIDs. To counter this bias, the association is divided and normalized by the denominator, which reduces the association of highly cited AIDs and increases the association of poorly cited AIDs. This equation also assumes random noise, and the signal-to-noise ratio is expected to grow with the square root of the signal, as in a random walk. To further reduce the chance of random co-occurrence, we queried only the title, abstract, and other terms of PubMed citations, but not the full-text paper. Finally, to facilitate the laborious process of counting citations and to prevent copying errors, a Python script was used. The script uses the "wget" function and rapidly loops over all AID pairs. The search was performed on a standard PC equipped with 16 GB RAM and operating under a Microsoft OS. At an internet speed of 40 Mbps, one query took less than a second, and a complete run of the script took ~6 h.
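For illustration, the counting step can be reproduced with the NCBI E-utilities; the endpoint, the [tiab] (title/abstract) field tag, and the disease terms below are our own assumptions rather than a transcript of the original script.

```python
# Minimal sketch of the normalized co-citation measure defined above.
import json
import time
from urllib.parse import quote
from urllib.request import urlopen

ESEARCH = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
           "?db=pubmed&retmode=json&term=")

def pubmed_count(query):
    """Number of PubMed records matching `query`."""
    with urlopen(ESEARCH + quote(query)) as resp:
        return int(json.load(resp)["esearchresult"]["count"])

def association(aid1, aid2):
    """Co-citations of the pair, normalized by citations of either AID."""
    both = pubmed_count(f"({aid1}[tiab]) AND ({aid2}[tiab])")
    either = pubmed_count(f"({aid1}[tiab]) OR ({aid2}[tiab])")
    return both / either if either else 0.0

for pair in [("lupus", "Sjogren"), ("lupus", "celiac disease")]:
    print(pair, round(association(*pair), 4))
    time.sleep(0.4)  # respect NCBI's request-rate limits when looping pairs
```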
List of Autoimmune Diseases

To validate our technique, we calculated the association values of a limited dataset of seven AIDs reported by Cifuentes et al. [3]. The training dataset comprised systemic lupus erythematosus (SLE), Sjögren's syndrome (SS), type 1 diabetes (T1D), autoimmune thyroid disease (AITD), rheumatoid arthritis (RA), multiple sclerosis (MS), and systemic sclerosis (SSc). To evaluate our technique, we calculated the association between 150 AIDs (listed in Supplementary Table S1). The initial list of 150 diseases was obtained by downloading several lists of AIDs and merging duplicates. Autoimmune lists were obtained from the Autoimmune Association website (https://autoimmune.org/disease-information, accessed on 12 July 2021), the Autoimmune Institute website (https://www.autoimmuneinstitute.org/resources/autoimmune-disease-list, accessed on 12 July 2021), and the Autoimmune Registry (https://www.autoimmuneregistry.org/the-list, accessed on 12 July 2021). Inadvertently, some of the 150 diseases were not AIDs and were omitted from our classification. Furthermore, temporary diseases were also excluded because they do not fit with the progressive nature of AIDs. Finally, indistinct AIDs with low PubMed citation counts were also omitted. The final list included only 100 AIDs.

Clustering

To classify AIDs, we used heatmap clustering based on association values. In particular, the ClustVis website was used for visual classification [20]. In addition, Circos software was used for preparing chord diagrams of AIDs from the association values.
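To make this step concrete, the sketch below performs the kind of average-linkage hierarchical clustering that ClustVis applies, assuming the pairwise association values have already been assembled into a symmetric matrix (the file name is hypothetical).

```python
# Hierarchical clustering of an AID association matrix; `assoc` is assumed
# to be symmetric, with values in [0, 1] and ones on the diagonal.
import pandas as pd
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

assoc = pd.read_csv("aid_association_matrix.csv", index_col=0)  # placeholder
dist = squareform(1.0 - assoc.values, checks=False)  # similarity -> distance
tree = linkage(dist, method="average")
leaves = dendrogram(tree, no_plot=True, labels=list(assoc.index))["ivl"]
print(leaves[:10])  # diseases in clustered order, as listed in Figure 2
```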
Validation

To validate our technique, we calculated the association values of seven AIDs. Figure 1 shows the AID association matrix of systemic lupus erythematosus (SLE), Sjögren's syndrome (SS), type 1 diabetes (T1D), autoimmune thyroid disease (AITD), rheumatoid arthritis (RA), multiple sclerosis (MS), and systemic sclerosis (SSc), alongside the association matrix reported by Cifuentes et al. [3]. Remarkably, the correlation between these two matrices is high (R = 0.91) and corroborates our predicted association of AIDs [3].

Figure 1. Validation of autoimmune disease association. Shown on the left is our autoimmune disease association matrix based on PubMed co-citation. Shown on the right is the same matrix based on genetic evidence (adapted from Cifuentes et al. [3]). Note that association values range from 0 (white background) to 1 (gray background). Notably, the correlation between the two matrices is high (R = 0.91).

Notably, the association values reported in Figure 1 are small. However, if multiplied by 100, the association values could represent an approximate percent comorbidity. For example, Sjögren and lupus, which are both collagen diseases, are associated in more than 3.5% (0.0358 × 100) of cases. Notably, associations of less than 0.00001 are unrealistic and have not been included in our study.

Association of Autoimmune Diseases

Figure 2 shows a classification of the top co-cited 100 AIDs, discussed in the following paragraphs, in light of their association and comorbidity. The AIDs are listed in the order obtained from ClustVis, and remarkably their clustering follows gender, system, onset age, and frequency. Notably, the automatic classification meticulously organizes all comorbid AIDs according to the major system affected. The affected systems include: (1) gastrointestinal, (2) neuronal, (3) eye, (4) cutaneous, (5) musculoskeletal, (6) kidneys and lungs, (7) cardiovascular, (8) hematopoietic, (9) endocrine, and (10) multiple. Interestingly, the clustering also organizes AIDs according to the major gender affected. Remarkably, the gender classification separates at the male and female gonad AIDs, namely autoimmune orchitis, which affects the testes, and autoimmune oophoritis, which affects the ovaries. Yet, the male/female disease classification cannot single-handedly explain the clustering and could arise from other associations (such as collagen, which is expressed more in men than in women). Note that each affected system is listed twice, once for each gender. The patient gender, average age of onset, and frequency in the population are detailed in Supplementary Table S2 and were taken from Bender et al. [21].
Figure 2. Autoimmune disease classification. Shown is a classification of the top co-cited 100 autoimmune diseases on PubMed. The diseases are clustered using association distances and listed in the order obtained from ClustVis [20]. Notably, the clustering follows gender, system, onset, and prevalence. For a detailed list of the autoimmune diseases, gender ratio, average onset age, and prevalence in the population, please see Supplementary Table S2. (NA: not applicable; *: varies in cold and warm weather.)

Figure 3 shows the autoimmune association as a chord diagram. Several examples are non-trivial and clearly illustrate the need to classify AIDs [4]. Notably, most AIDs are comorbid, as shown in Figure 2, and the top 20 diseases involve collagen tissues, which explains male predominance. For example, eosinophilic esophagitis is an inflammation of the esophagus (collagen type 1). Among others, it is associated with AIDs such as autoimmune pancreatitis, retroperitoneal fibrosis, asthma [23], and celiac disease [24], as shown in Figures 2 and 3. Retroperitoneal fibrosis, which is caused by the buildup of inflammatory and fibrous tissue (collagen type 1) in the retroperitoneum [25], is associated with Hashimoto's thyroiditis, Graves' disease, vasculitis, psoriasis, and autoimmune pancreatitis, among others, as shown in Figure 3. These comorbidities involve multiple systems (Figure 2) such as lymph nodes, pancreas, salivary glands, kidneys, bile ducts, and lachrymal glands. Castleman disease is a lymphoproliferative condition, which presents either idiopathically, in association with herpesvirus [1], or with a malignant tumor. Notably, Castleman disease and POEMS syndrome are associated (Figures 2 and 3), and the former is a major criterion in the diagnosis of the latter [26]. (POEMS syndrome is an acronym of Polyneuropathy, Organomegaly, Endocrinopathy, Myeloma, and Skin changes; note the collagen involvement.) Guillain-Barré syndrome occurs upon demyelination of peripheral nerves (collagen type 6) and is often induced by an infection or a drug [27].
Guillain-Barré syndrome shares clinical similarities with chronic inflammatory demyelinating polyneuropathy [28], and with peripheral neuropathies such as POEMS, chronic inflammatory demyelinating polyneuropathy, and multifocal motor neuropathy, as shown in Figure 2. Notably, Guillain-Barré syndrome and myasthenia gravis occasionally coincide [29], as reflected in Figure 3. Relapsing polychondritis is characterized by recurrent inflammation of cartilaginous tissues (collagen type 2) throughout the body [30]. Figure 2 shows that relapsing polychondritis is associated with scleritis, uveitis, and conjunctivitis, and both relapsing polychondritis and scleritis are associated with autoimmune otorhinolaryngitis [31]. Figure 3 shows that sympathetic ophthalmia and Vogt-Koyanagi-Harada disease are also associated with uveitis [32]. Goodpasture's syndrome occurs through the deposition of autoantibodies in the basement membranes (collagen type 4) of kidneys and lungs, eliciting rapidly progressive glomerulonephritis and pulmonary hemorrhage [33]. Figure 2 also shows that fibrosing alveolitis, also known as idiopathic pulmonary fibrosis, is associated with Goodpasture's syndrome. Neonatal lupus and congenital heart block can lead to atrioventricular conduction abnormalities diagnosed in utero or within the first month of life [34]. Maternal autoimmune disease is responsible and presents in the neonatal heart membranes (collagen type 1) [35]. Figure 2 shows that congenital heart block and neonatal lupus are highly associated [36]. Figure 3 shows that fetal congenital heart block is associated with maternal autoimmunity and with lupus [37]. Eosinophilic fasciitis involves inflammation of the tissue under the skin and over the muscle, called fascia (collagen type 1), and it tops the list of AIDs associated with collagen and male predominance [38]. Next, stiff person syndrome is caused by autoantibodies to glutamic acid decarboxylase (GAD), which lead to muscle weakness. Figure 3 shows that GAD spectrum disorders include cerebellar ataxia, autoimmune epilepsy, and encephalitis, among others. Surprisingly, undifferentiated connective tissue disease (UCTD) is not clustered with other connective tissue diseases, and Figure 2 suggests it has less in common with them. Differentiating between UCTD and early-stage SLE is important to avoid irreversible target-organ damage [39]. Cold agglutinin disease is a form of autoimmune hemolytic anemia following exposure to cold temperatures. Interestingly, the onset age depends on the climate, and chilly weather precipitates the disease. As such, as seen in Figure 2, it does not associate with other blood and bone marrow autoimmune diseases such as pure red cell aplasia, hypogammaglobulinemia, and agammaglobulinemia, which arise from a bone marrow autoimmune disease [40]. Parry-Romberg syndrome (PRS) is characterized by the progressive degeneration of the tissues of one side of the face, leading to hemifacial atrophy. Figure 3 shows it is comorbid with other AIDs, such as SLE, rheumatoid arthritis, inflammatory bowel disease, ankylosing spondylitis, vitiligo, and thyroid disorders [41]. Notably, autoimmune involvement is absent in most cases of PRS, and Figure 2 suggests that the association with the aforementioned AIDs is weak without the presence of antinuclear antibodies [42]. Cogan's syndrome is an autoimmune inner ear disease, and Figures 2 and 3 show associations with sympathetic ophthalmia and Vogt-Koyanagi-Harada disease [43].
Lambert-Eaton myasthenic syndrome is an autoimmune disorder characterized by muscle weakness of the limbs [44]. Notably, one could expect that Lambert-Eaton syndrome (33% female predominance) would be associated with myasthenia gravis (75% female predominance), yet this is not the case, and comorbidity of the two is unheard of [45]. Susac's syndrome is a rare autoimmune disease that mainly affects young women. It is characterized by endotheliopathy, which presents as encephalopathy, retinal vaso-occlusive disease, and hearing loss [46]. Figure 2 does not classify Susac's syndrome with other cardiovascular AIDs, perhaps due to its relatively recent classification as an AID, as attested by antinuclear antibodies, or to a different etiology. Remarkably, our PubMed clustering differentiates between male and female AIDs. Here too, autoimmune oophoritis and autoimmune orchitis are gonad AIDs, and antibodies bind to both testicular and ovarian target antigens during their development [47]. Mucha-Habermann disease (MHD), also known as "Pityriasis lichenoides et varioliformis acuta", is a skin disease characterized by rashes and small lesions. Figure 2 does not classify MHD with other cutaneous AIDs, perhaps due to its male predominance or to a different etiology. In fact, MHD is a spectrum of diseases and is often triggered by infectious agents, an inflammatory response secondary to a T-cell dyscrasia, or an immune complex-mediated hypersensitivity [48]. Progesterone dermatitis (PD) is a skin disease due to progesterone toxicity, for example during the menstrual cycle [49]. Both PD and MHD develop in response to an endocrine imbalance or an external toxic stimulus, which explains their association in Figure 2. Notably, MHD affects more men, while PD affects mainly women, and the two are unlikely to be comorbid. Finally, without the presence of antinuclear antibodies, their classification as AIDs could be biased. Graves' disease is the most common cause of hyperthyroidism. It is an autoimmune disorder with systemic manifestations that primarily affect the heart, skeletal muscle, eyes, skin, bone, and liver. Figure 2 shows that Graves' disease and Hashimoto's thyroiditis are highly associated, and both have been reported to coexist in the same individual, reflecting their common autoimmune origin [50]. Hashimoto's thyroiditis and Graves' disease are some of the most common autoimmune endocrine diseases. Pernicious anemia is commonly caused by a deficiency of vitamin B12 (cobalamin). Sometimes, it is also classified as an autoimmune disease, as it comprises salient features of autoimmune chronic atrophic gastritis and cobalamin deficiency [51]. Figure 2 classifies it along with other polyglandular syndromes, as it is often comorbid with these, suggesting a potentially common B12 deficiency. Trivially, polyglandular syndromes (PGS) types 1, 2, and 3 are highly associated (Figure 2). PGS type 1 is an autosomal recessive syndrome due to mutation of the AIRE gene, resulting in hypoparathyroidism, adrenal insufficiency, hypogonadism, vitiligo, candidiasis, and others [52]. PGS type 2 combines Addison's disease with autoimmune thyroid disease and/or type 1 diabetes [53]. Less trivially, PGS type 2 is also named Schmidt's syndrome, which explains the latter's association with all PGS and its omission from Figure 2, although the name also refers to a brainstem syndrome leading to hemiparesis.
Finally, PGS type 3 is the combination of autoimmune Hashimoto's thyroiditis with another organ-specific autoimmune disease, such as diabetes mellitus, pernicious anemia, vitiligo, alopecia, myasthenia gravis, and Sjögren's syndrome, as shown in Figure 3. The distinction between these is blurred by their association with coinciding AIDs [52]. Notably, PGS are also known as autoimmune polyendocrine syndromes. Dermatitis herpetiformis (formerly known as Duhring's disease) is associated with other skin diseases such as bullous pemphigoid and pemphigus [54], as shown in Figure 2. Cicatricial pemphigoid (i.e., bullous pemphigoid, mucous membrane pemphigoid) and ocular cicatricial pemphigoid are different manifestations of the same bullous disease, which is manifested through blisters all over the body [55]. Figure 3 shows that these bullous diseases can coincide with other AIDs, underlining the involvement of an overactive immune system [56]. Discoid lupus is a dermatological AID that can lead to rashes, scarring, hair loss, and hyperpigmentation of the skin, which tends to get worse when exposed to sunlight. Figure 2 shows that discoid lupus is highly associated with both lichen planus and lichen sclerosus, and these diseases often overlap clinically [57]. Figure 3 shows they are also associated with sarcoidosis, erythema nodosum, and pyoderma gangrenosum, underlining a common genetic etiology [58]. Alopecia areata is characterized by a well-defined non-scarring alopecic patch or patches that can extend to the entire scalp or lead to total body hair loss [60]. Figure 2 clusters it with vitiligo, and the two often overlap clinically. Figure 2 shows a high association between ulcerative colitis and Crohn's disease [61]. Despite several differences, ulcerative colitis and Crohn's disease are different grades of the same chronic bowel inflammation [62]. Notably, the differences arise from different gut microbiota, hormonal factors, and, to a lesser extent, from autoimmune variations [61]. Interestingly, higher rates of comorbid celiac disease, Crohn's disease, or ulcerative colitis are found in patients with eosinophilic esophagitis than in the general population [63]. Primary biliary cirrhosis and primary sclerosing cholangitis are associated in Figure 2. Both are variations of autoimmune hepatitis with cholestatic characteristics, such as autoantibody-negative autoimmune hepatitis, giant cell hepatitis, primary biliary cholangitis, and primary sclerosing cholangitis [64]. Myositis is an inflammation of the muscles responsible for movement. Trivially, myositis, polymyositis, dermatomyositis, and inclusion body myositis are highly associated in Figure 2 [65], are often comorbid, and share autoimmune antibodies [66]. Polymyositis is associated with scleroderma, conferring an increased risk of connective tissue diseases, such as interstitial lung disease, inflammatory joint disease, SLE, and Sjögren's syndrome (Figure 3). Scleroderma is another connective tissue disorder, both local and systemic, and can further be classified as limited systemic sclerosis, formerly known as the CREST syndrome (Figure 2), which comprises calcinosis, Raynaud phenomenon, esophageal dysmotility, sclerodactyly, and telangiectasia [67]. Figure 3 also shows a link between systemic sclerosis and biliary cholangitis [68]. Psoriasis and psoriatic arthritis are trivially associated in Figure 2 [69].
Psoriasis also increases the risk of rheumatoid arthritis, and these diseases have similar comorbidity profiles, with overlapping therapeutic options [69]. Likewise, reactive arthritis, juvenile arthritis, and ankylosing spondylitis (i.e., Bechterew's disease) are also highly associated under the umbrella term spondyloarthritis (Figure 3) [29]. Figure 2 clusters sarcoidosis with predominantly female AIDs affecting multiple systems, with which it could share a common etiology. Sarcoidosis has been linked to infectious organisms such as Mycobacterium and Cutibacterium, and certain manifestations of sarcoidosis have been linked to specific HLA alleles, but the overall pathogenesis remains uncertain [70]. Primary Sjögren's syndrome is a systemic AID that is characterized by a triad of symptoms, namely dryness, pain, and fatigue [71]. Figure 2 associates it with rheumatoid arthritis and SLE. Rheumatoid arthritis is a chronic inflammatory joint disease, predominantly of autoimmune origin. Rheumatoid arthritis autoantibodies include the rheumatoid factor and antibodies against citrullinated proteins [72]. Systemic lupus erythematosus (SLE) is a multisystem AID with varied clinical manifestations and a complex pathogenesis [72]. SLE is the most comorbid of AIDs (Figure 3), and few diseases are as devastating as lupus [73]. Figure 3 shows that SLE can also accompany arthritis, scleroderma, myositis, inflammatory bowel diseases, and celiac disease [74]. SLE has also been associated with neuronal AIDs such as myasthenia gravis [75]. Figure 2 clusters SLE with Sjögren's syndrome, and despite several differences, antiphospholipid antibodies have been identified in both [76]. Glomerulonephritis, IgA nephropathy, and amyloidosis are strongly and frequently associated with arthritis, as shown in Figure 2 [77]. Next are cardiovascular AIDs, and Figure 2 shows that uveitis and Behçet's disease are highly linked; indeed, uveitis is an ocular manifestation of Behçet's disease [78]. Conspicuously, Figure 3 shows that uveitis is associated with ankylosing spondylitis, psoriatic arthritis, arthritis, colitis, sarcoidosis, Behçet's disease, and Posner-Schlossman syndrome, among others [79]. Although vasculitis is not specifically autoimmune, inflammation of the blood vessels is associated with AIDs. IgA vasculitis (formerly called "Henoch-Schönlein purpura") and IgG and IgM vasculitis are three types of leukocytoclastic vasculitis, and their high association is trivial [80]. Rarer vasculitides include granulomatosis with polyangiitis and microscopic polyangiitis. Kawasaki disease and polyarteritis are highly associated cardiovascular AIDs in Figure 2, and together with Takayasu's arteritis, polyarteritis nodosa, ANCA-associated vasculitis, and giant-cell arteritis, they are coronary artery vasculitides [81]. Figure 3 extensively associates vasculitis with sarcoidosis, erythema nodosum, and pyoderma gangrenosum, among others [82,83]. Mounting evidence suggests that narcolepsy is a neuronal AID, and it is associated with restless legs syndrome (not an AID and omitted from Figure 2). In fact, the same patients who find themselves unable to move for a few minutes report that restless legs syndrome could be triggered [84]. Myasthenia gravis manifests as muscle weakness caused by antibodies against the nicotinic acetylcholine receptor. Figure 3 associates it with other AIDs [85].
Figures 2 and 3 show that autoimmune encephalomyelitis and multiple sclerosis are associated with chronic inflammatory demyelinating polyneuropathy, multifocal motor neuropathy, CLIPPERS syndrome, neuromyelitis optica spectrum disorders, and tumefactive demyelinating lesions [86]. Likewise, there is an increased risk of dementia in autoimmune hypothyroidism [87]. Of particular interest to the authors is the association of multiple sclerosis and Alzheimer's disease, and we shall return to this finding in the discussion. Optic neuritis, neuromyelitis optica, and transverse myelitis are highly associated in Figure 2. In agreement with this observation, transverse myelitis and optic neuritis are both elements in the diagnosis of neuromyelitis optica [88]. Figure 3 shows that transverse myelitis is also associated with other AIDs, such as lupus [89]. Neutropenia is clustered with other hematopoietic diseases in Figure 2. Notably, neutropenia is associated with several non-AIDs, such as endometriosis, fibromyalgia, and interstitial cystitis, which have been omitted from Figure 2. In turn, fibromyalgia is associated with SLE, Sjögren's syndrome, and IBD [90]. Evans syndrome and hemolytic anemia are also comorbid but do not share the same antibodies.

Misfit AIDs

Some AIDs are omitted from Figure 2 because they are not sufficiently distinct or are included in other categories. For example, juvenile myositis is not distinct from myositis, except that it affects children. Juvenile arthritis is indistinct from arthritis and is omitted from Figure 2. Adult Still's disease is included in Still's disease, and it is also omitted. Perivenous encephalomyelitis is a subcategory of encephalomyelitis, sometimes identified as acute disseminated encephalomyelitis, and is omitted from Figure 2. Likewise, Baló's disease (or "Baló's concentric sclerosis") forms a subcategory of multiple sclerosis and does not receive a distinct status in Figure 2. IgG4-related sclerosing disease is a subcategory of sclerosis and is omitted from Figure 2 [91]. Likewise, linear IgA disease is a rare autoimmune blistering disease, with linear IgA deposits along the basement membrane zone [92]. It is associated with other bullous diseases and is omitted from Figure 2. Some AIDs are secondary to other AIDs and do not present as primary AIDs. For example, Raynaud's phenomenon is idiopathic and not known as a primary AID, and it is omitted from Figures 2 and 3.

Pathogen-Induced Diseases

Some of the potential AIDs are vector-borne, result from pathogen infections, and are not listed in Figure 2. For example, coxsackievirus myocarditis is not an AID, as it is caused by a virus. Chagas disease is also completely out of place; it is not an AID, as it is due to a Trypanosoma parasitic infection. Additionally, Lyme disease is out of place, as it is caused by a Borrelia bacterium infection. Mooren's ulcer is a painful type of peripheral ulcerative keratitis and has been associated with microbial infection [93]. Another example is chronic recurrent multifocal osteomyelitis, a painful bone inflammation, which, despite its association with multiple AIDs, may be more associated with bacterial infection [94]. Tolosa-Hunt syndrome is caused by an idiopathic granulomatous inflammation of the cavernous sinus and is characterized by painful ophthalmoplegia [95]. It does not present with autoantibodies, and as such is not an autoimmune disease. Essential mixed cryoglobulinemia is associated with infections, malignancy, and autoimmune diseases, but may be idiopathic [96].
Up to 90% of reported cases are associated with hepatitis C virus (HCV) infection, and as such it is omitted from Figure 2. Finally, rheumatic fever is a disease that can affect the heart, joints, brain, and skin if bacterial infections are not treated properly. Rheumatic fever and subacute bacterial endocarditis are thus highly associated. Rheumatic fever should not be confused with rheumatoid arthritis, and it is excluded from Figure 2.

Tumor-Induced Diseases

Paraneoplastic cerebellar degeneration is caused by cancer and as such is not included in Figure 2.

Toxicity-Induced Diseases

Some AIDs are direct results of xenobiotic environmental exposure, and their sudden onset is related to known causes. For example, autoimmune urticaria (i.e., hives) is often due to chemical exposure. Benign mucosal pemphigoid is mostly associated with external noxious stimuli, unlike the AID mucous membrane pemphigoid [97], and as such it is omitted from Figure 2. In addition, some of the names are not necessarily associated with AIDs, further confounding their classification. Palindromic rheumatism and autoimmune urticaria are temporary conditions and do not fit with the progressive nature of AIDs, and they are not included in Figure 2.

Injury-Induced Diseases

Some of the potential AIDs result from injury and are not included in Figure 2. For example, Menière's disease is an inner ear disorder that can lead to vertigo and hearing loss and is associated with head injury. Ligneous conjunctivitis is another example. Autoimmune retinopathy is an immune response to external injury, and all three are omitted from Figure 2 [98]. Postpericardiotomy syndrome is a subgroup of post-cardiac injury syndromes, together with postmyocardial infarction syndrome (Dressler's syndrome) and posttraumatic pericarditis, and all are omitted from Figure 2 [99]. Giant cell myocarditis (GCM) is not an autoimmune disease in more than 80% of cases [100] and was omitted from Figure 2. Parsonage-Turner syndrome, also known as neuralgic amyotrophy, is a poorly understood neuromuscular disorder affecting peripheral nerves, mostly within the brachial plexus distribution, but it can also involve other sites, including the phrenic nerve [101]. The etiology of the syndrome is unclear, and it has been reported in various clinical situations, such as postoperative, postinfectious, posttraumatic, and postvaccination. As such, Parsonage-Turner syndrome is omitted from our list.

Alzheimer's Disease Spectrum Includes Autoimmunization

Alzheimer's disease (AD) is an incurable chronic neurodegenerative disorder and the leading cause of dementia in the elderly. Despite rigorous multinational efforts and decades of costly research, no scientific consensus regarding the causes of AD has been achieved. Still, an exclusive pathological event leading to AD development remains a mystery. Growing evidence indicates that AD results from several intertwining pathologies [102,103]. It was also hypothesized that AD possesses an autoimmune component [104,105]; however, little scientific attention has been paid to this theory. Nevertheless, it has been pointed out that AD-associated autoimmunity could be triggered by several self-tolerance and pathogen-related mimicry mechanisms [106], including those of bacterial [107] and viral [108] origin. Several groups have tried to provide evidence linking autoimmune diseases with dementia.
Wotton and Goldacre performed a retrospective, record-linkage cohort study to determine whether hospital admission for autoimmune disease is associated with an elevated risk of future dementia [109]. The researchers found that 18 different autoimmune diseases, such as lupus, psoriasis, and MS, demonstrated a significant association with dementia. Notably, AD and vascular dementia showed the most significant positive associations with autoimmune pathologies. The authors speculate that the association with vascular dementia is a component of a broader relationship between autoimmune pathologies and neurovascular damage. Another recent investigation demonstrated a significant decrease in total and resting regulatory T cells in AD patients. Surprisingly, a similar phenotype was detected in MS patients. The authors suggest that the alterations in regulatory T cell number and activity observed in both diseases play a role in the impairment of T cell-mediated immunological tolerance, which indicates a link between these pathologies [110]. Of note, mutual mechanisms for AD and MS were proposed earlier by Avinash Chandra, who hypothesized that amyloid ameliorates disease-associated inflammation [111]. Here, we support this hypothesis and show a high association between AD and MS (Figure 2). Moreover, we suggest that AD and MS share common mechanisms of neurodegeneration. The elevation of amyloid precursor protein expression levels in axons around plaques in MS and the correlation of amyloid-β with distinct stages of multiple sclerosis clearly indicate a role of amyloidosis in MS pathogenesis. Undoubtedly, revealing the putative autoimmune components of AD is of primary clinical importance. A novel complex approach to disease treatment and diagnosis, which considers several intricate mechanisms, may pave the way to efficient therapy.

Conclusions

Here, we quantify the association of some 100 different AIDs. The affected systems include: (1) gastrointestinal, (2) neuronal, (3) eye, (4) cutaneous, (5) musculoskeletal, (6) kidneys and lungs, (7) cardiovascular, (8) hematopoietic, (9) endocrine, and (10) multiple. Interestingly, our clustering differentiates between predominantly male and female AIDs. The male/female classification does not explain why most AIDs are more prevalent among women, while others are more prevalent among men. One reason could be the type of tissue affected: if collagen is targeted, then female predominance could be masked by the high collagen level in male skin. In addition, our clustering of AIDs follows average onset age and the systems affected. Our results are expected to assist the diagnosis of comorbidity and the identification of common genetic and environmental factors.
Rationality, perception, and the all-seeing eye

Seeing—perception and vision—is implicitly the fundamental building block of the literature on rationality and cognition. Herbert Simon and Daniel Kahneman's arguments against the omniscience of economic agents—and the concept of bounded rationality—depend critically on a particular view of the nature of perception and vision. We propose that this framework of rationality merely replaces economic omniscience with perceptual omniscience. We show how the cognitive and social sciences feature a pervasive but problematic meta-assumption that is characterized by an "all-seeing eye." We raise concerns about this assumption and discuss different ways in which the all-seeing eye manifests itself in existing research on (bounded) rationality. We first consider the centrality of vision and perception in Simon's pioneering work. We then point to Kahneman's work—particularly his article "Maps of Bounded Rationality"—to illustrate the pervasiveness of an all-seeing view of perception, as manifested in the extensive use of visual examples and illusions. Similar assumptions about perception can be found across a large literature in the cognitive sciences. The central problem is the present emphasis on inverse optics—the objective nature of objects and environments, e.g., size, contrast, and color. This framework ignores the nature of the organism and perceiver. We argue instead that reality is constructed and expressed, and we discuss the species-specificity of perception, as well as perception as a user interface. We draw on vision science as well as the arts to develop an alternative understanding of rationality in the cognitive and social sciences. We conclude with a discussion of the implications of our arguments for the rationality and decision-making literature in cognitive psychology and behavioral economics, along with suggesting some ways forward.

We argue that the literature on rationality features a unifying but problematic (and generally implicit) assumption about vision and perception that is best characterized by an "all-seeing eye" (cf. Koenderink, 2014; see also Hoffman, 2012; Hoffman & Prakash, 2014; Rogers, 2014). We focus particularly on how the all-seeing view of perception manifests itself in research on rationality, cognition, and decision-making. We point to the pioneering work of both Herbert Simon and Daniel Kahneman to illustrate our points (Kahneman, 2003a, b, 2011; Simon, 1956, 1980, 1990). Overall, the assumption of an all-seeing eye takes different forms across the social sciences. In some cases the all-seeing eye is assumed in the form of the rationality of some or all agents, or of the system as a whole. In other cases the all-seeing eye is an emergent result of learning and visual, computational or information processing, or of broader agent-environment interactions. In many cases the all-seeing eye is introduced in the form of the scientist, who imputes illusion, bias, or other forms of error or veridicality to subjects when they fall short of omniscience (Simon, 1979; cf. Kahneman, 2003a). Each of these forms of all-seeing-ness, however, as we will illustrate, is problematic and is symptomatic of a representational, computational, and information-processing-oriented conception of perception. In essence, much of the literature on rationality places an emphasis on psychophysics and inverse or ecological optics, ignoring the psychology and phenomenology of awareness (Koenderink, 2014).
The emphasis is placed on the actual, physical nature of environments and the objects within them (specifically, characteristics such as size, distance, and color) rather than on the organism-specific, directed, and expressive nature of perception. We provide the outlines of a different approach to perception by drawing on alternative arguments about vision. Our critique of extant work in the cognitive and economic sciences focuses explicitly on perception and vision, and thus differs from Gigerenzer's (1991, 1996) approach, which emphasizes the "ecological" rationality of judgmental heuristics (Gigerenzer & Todd, 1999; Todd & Gigerenzer, 2012). The heuristics literature builds on a frequentist, Bayesian, or "probabilistic view of perception" (Chater & Oaksford, 2006; Kruglanski & Gigerenzer, 2011; Vilares & Kording, 2011), and more generally on the "statistics of visual scenes" (Knill & Richards, 1996; Yuille et al., 2004; cf. also Koenderink, 2016). The central argument in this literature is that perception, over time, is in fact veridical rather than biased: organisms perceive and interact with their environments and over time learn their true, objective nature. Though we link up with some of the ways in which this literature interprets (and indeed rightly questions) visual illusions, we also disagree with the way this work characterizes vision and perception, and we point toward an alternative approach. We conclude with a discussion of how our arguments bear on the rationality and decision-making literature in psychology and behavioral economics.

Perception and cognition: From omniscience to bounded rationality

Any model of cognition, rationality, reasoning, or decision-making implicitly features an underlying theory of and assumptions about perception (Kahneman, 2003a; Simon, 1956). That is, any model of rationality makes assumptions about which options are seen or not, how (or whether) these options are represented and compared, and which ones are chosen and why. The very idea of rationality implies that someone (the agents themselves, the system as a whole, or the scientist modeling the behavior) perceives and knows the optimal or best option and thus can define whether, and how, rationality is achieved. Rationality, then, is defined as correctly perceiving different options and choosing those that are objectively the best. In emphasizing rationality, cognitive and social scientists are incorporating, most often implicitly, certain theories and assumptions about perception: about the abilities and ways in which organisms or agents perceive, see, and represent their environments, or compute and process information, compare options, behave, and make choices. Assumptions about perception and vision, as we will discuss, are at the very heart of these models and are the focus of our paper. Neoclassical economics has historically featured some of the most extreme assumptions about the nature of perception and rationality. These have taken the form of assuming some variant of a perfectly rational or omniscient actor and an associated "efficient market" (Fama, 1970; cf. Buchanan, 1959; Hayek, 1945).1 This work, in its most extreme form, assumes that agents have perfect information and thus that there are no unique, agent-specific opportunities to be perceived or had: the environment is objectively captured and exhausted of any possibilities for creating value. Markets are said to be efficient because they, automatically and instantaneously, anticipate future contingencies and possibilities (Arrow & Debreu, 1954).
Much of this work assumes that there is, in effect, an "ideal observer" (cf. Geisler, 2011; Kersten et al., 2004), represented either by the omniscience of all agents or by the system as a whole, and thus an equilibrium (Arrow & Debreu, 1954). As noted by Buchanan, economists "have generally assumed omniscience in the observer, although the assumption is rarely made explicit" (1959: 126). The omniscient agent of economics has of course been criticized both from within and outside economics, as it does not allow for any subjectivity or individual-level heterogeneity. For example, as Kirman argues, this approach "is fatally flawed because it attempts to impose order to the economy through the concept of an omniscient individual" (1992: 132). Thomas Sargent further argues that "The fact is that you simply cannot talk about differences within the typical rational expectations model. There is a communism of models. All agents inside the model, the econometrician, and God share the same model" (Evans & Honkapohja, 2005: 566).2 Although the death of the omniscient agent of economics has been predicted for many years, it continues to influence large parts of the field. It is precisely this literature in economics, which assumes different forms of global or perfect rationality, that led to the emergence of the behavioral and cognitive revolution in the social sciences, to challenge the idea of agent omniscience.3

Herbert Simon was the most influential early challenger of the traditional economic model of rationality. He sought to offer "an alternative to classical omniscient rationality" (1979: 357), and he anchored this alternative on the concept of "bounded rationality," a concept specifically focused on the nature of vision and perception (Simon, 1956). Simon's work was carried forward by Daniel Kahneman, who also sought to develop "a coherent alternative to the rational agent model" (2003a: 1449) by focusing on visual metaphors, illusions, and perception. We revisit both Simon's and Kahneman's work next. To foreshadow our conclusion, we argue that both Simon and Kahneman, as well as later psychologists and behavioral economists, have unwittingly replaced the assumption of economic omniscience with perceptual omniscience, or an all-seeing view of perception. Neither Simon's nor Kahneman's model has overcome the paradigmatic assumption of omniscience, even though (or because) they have critiqued it. Instead, these models have merely introduced a different form of omniscience. We find it particularly important to revisit this work because it shows how the behavioral revolution was, and continues to be, deeply rooted in arguments about perception and vision. Though this work has sought to develop a psychologically more realistic and scientific approach to understanding rationality, we argue that this work can be challenged on both counts.

Bounded rationality and perception

As noted above, Herbert Simon challenged the assumption of agent omniscience (particularly pervasive in economics) with the idea of bounded rationality. The specific goal of his research program was, to quote Simon again, "to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kind of environments in which such organisms exist" (1955: 99, emphasis added).
Rather than presume the omniscience of organisms or agents, Simon hoped to interject psychological realism into the social sciences by modelling the "actual mechanisms involved in human and other organismic choice" (1956: 129). Bounded rationality became an important meta-concept and an influential alternative to models of the fully rational economic agent: a transdisciplinary idea that has influenced a host of the social sciences, including psychology, political science, law, cognitive science, sociology, and economics (e.g., Camerer, 1998, 1999; Conlisk, 1996; Evans, 2002; Jolls et al., 1998; Jones, 1999; Korobkin, 2015; Luan et al., 2014; Payne et al., 1992; Puranam et al., 2015; Simon, 1978, 1980; Todd & Gigerenzer, 2003; Williamson, 1985). These notions of rationality continue to influence different disciplines in various ways, including recent work on universal models of reasoning, computation, and "search" (Gershman et al., 2015; Hills et al., 2015). To unpack the specific problems associated with bounded rationality, as it relates to vision and perception, we revisit some of the original models and examples provided by Simon. We then discuss how these arguments have been extended and have evolved in the cognitive and social sciences more broadly (Kahneman, 2003a), including the domain of behavioral psychology and economics. In most of his examples, Simon asks us to imagine an animal or organism searching for food in its environment (e.g., 1955, 1956, 1964, 1969; Newell & Simon, 1976; cf. Luan et al., 2014).4 This search happens on a predefined space (or what he also calls a "surface") where the organism can visually scan for food (choice options) and "locomote" and move toward and consume the best options (Simon, 1956). Initially the organism explores the space randomly. But it learns over time. Thus vision is seen as a tool for capturing information about and representing one's environment. What is central to the concept of bounded rationality, and most relevant to our arguments, is the specification of boundedness itself. Simon emphasizes the organism's "perceptual apparatus" (1956: 130). The visual scanning and capturing of the environment for options is given primacy: "the organism's vision permits it to see, at any moment, a circular portion of the surface about the point in which it is standing" (Simon, 1956: 130, emphasis added). Rather than omnisciently seeing (and considering) the full landscape of possibilities or environment (say, options for food), as models of global rationality might specify things, Simon instead argues that perception (the relevant, more bounded consideration set of possibilities) is delimited by the organism's "length and range of vision" (1956: 130-132). Similar arguments have recently been advanced in the cognitive sciences in universal models that emphasize perception and search (e.g., Fawcett et al., 2014; Gray, 2007; Luan et al., 2014; Todd et al., 2012). One of Simon's key contributions was to acknowledge that organisms (whether animals or humans) are not aware of, nor do they perceive or have time to compute, all alternatives in their environments (cf. Gibson, 1979). Rather than globally seeing and optimizing, the organism instead "satisfices" based on the more delimited set of choices it perceives in its immediate, perceptual surroundings. Additional search, whether visually or through movement, is costly.
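Simon's foraging story is, at bottom, a simple algorithm, and stating it as one makes the contrast with global optimization vivid. The following minimal sketch is our own illustrative construction, not code from Simon; the landscape, vision radius, and aspiration level are all assumed values.

```python
import random

# Minimal sketch of satisficing search (our construction, not Simon's):
# the organism sees only a window of the landscape around itself and
# accepts the first option that clears its aspiration level, rather
# than optimizing over the whole (mostly unseen) landscape.

random.seed(0)
landscape = [random.random() for _ in range(1000)]  # food value at each site

def satisfice(landscape, start, vision=5, aspiration=0.8, max_steps=200):
    pos = start
    for step in range(max_steps):
        lo = max(0, pos - vision)
        hi = min(len(landscape), pos + vision + 1)
        visible = range(lo, hi)                     # the "circular portion"
        best_local = max(visible, key=landscape.__getitem__)
        if landscape[best_local] >= aspiration:     # good enough: stop searching
            return best_local, landscape[best_local], step
        # otherwise move toward the best visible site and look again
        pos = best_local if best_local != pos else min(pos + vision, len(landscape) - 1)
    return pos, landscape[pos], max_steps

site, value, steps = satisfice(landscape, start=500)
print(f"satisficed at site {site}: value {value:.2f} after {steps} moves")
print(f"global optimum (never seen by the organism): {max(landscape):.2f}")
```

The point of the sketch is Simon's own: the stopping rule refers only to what falls within the vision window and the aspiration level, while the "global optimum" line is available only to the programmer, the all-seeing eye outside the model.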
Organisms thus search, scan, and perceive their environments locally, and the tradeoffs between the costs of additional search and the payoffs of choosing particular, immediate options drive behavior. In all, organisms only consider a small subset of possibilities in their environment (that which they perceive immediately around them) and then choose the options that work best among that subset, rather than somehow optimizing based on all possible choices, which Simon argues would require god-like computational powers and omniscience. These ideas certainly seem reasonable; but they are nonetheless rooted in a problematic conception of vision and perception. We foreshadow some central problems here, problems that we will address more carefully later in the paper when we discuss Kahneman's (2003a, b) work and revisit some of the common visual tasks and perceptual examples of bounded rationality and bias. First, note that a central background assumption behind bounded rationality is that there is an all-seeing eye present which can determine whether an organism in fact behaved boundedly or rationally, or not. As Simon put it, "rationality is bounded when it falls short of omniscience" (1978: 356). For this shortfall in omniscience to be specified and captured, it requires an outside view, an all-seeing eye (in this case, that of the scientist) that somehow perceives, specifies, computes, or (exhaustively) sees the other options in the first place, then identifies the best or rational one, which in turn allows one to point out the shortfall, boundedness, or bias. From the perspective of vision research, Simon's "falling short of omniscience" specification of bounded rationality can be directly linked to the "ideal observer theory" of perception (e.g., Geisler, 1989, 2011; Kersten et al., 2004). Similar to the standard of omniscience, the "ideal observer is a hypothetical device that performs optimally in a perceptual task given the available information" (Geisler, 2011: 771, emphasis added).5 Naïve (or bounded) subjects can be contrasted with a form of camera-like ideal observer who objectively records the environment. The comparison of objective environments with subjective assessments of these environments (or of objects within them) has been utilized in the lab as well as in natural environments (Geisler, 2008; also see Foster, 2011; McKenzie, 2003). These approaches build on a veridical model of perception and objective reality, a sort of "Bayesian natural selection" (Geisler & Diehl, 2002) where "(perceptual) estimates that are nearer the truth have greater utility than those that are wide of the mark" (Geisler & Diehl, 2003). The environment is seen as objective, and subjects' accurate or inaccurate responses are used as information about perception and judgment. This approach can be useful if we demand that subjects see something highly specific (whether they miss or accurately account for some stimulus specified by the scientist), though even the most basic of stimuli, as we will discuss, are hard to conclusively nail down in this fashion. Extant work raises fundamental questions about whether perception indeed tracks truth (or "veridicality") in the ways of an ideal observer (e.g., Hoffman et al., 2015). For example, evolutionary fitness maps more closely onto practical usefulness than onto any idea of truth or objectivity. Bayesian models of perception can be built on evolutionary usefulness rather than on truth and accuracy (e.g., Hoffman & Singh, 2012; Koenderink, 2016).
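It is worth pausing to make concrete what the ideal-observer benchmark discussed above involves. The sketch below shows the textbook Gaussian case; the prior, noise level, and stimulus value are illustrative assumptions of ours, not figures from Geisler or any particular study. The observer reports the posterior mean, the estimate that is optimal given the available information, and a subject's "bias" is then defined as deviation from this device.

```python
import numpy as np

# Minimal sketch of a Bayesian ideal observer for a one-dimensional
# stimulus (say, a line length). All numbers are illustrative assumptions.

rng = np.random.default_rng(0)

prior_mean, prior_sd = 10.0, 2.0   # prior over stimulus magnitude
noise_sd = 1.0                     # sensory measurement noise
true_stimulus = 13.0

measurement = true_stimulus + rng.normal(0.0, noise_sd)

# With a Gaussian prior and Gaussian noise, the posterior is Gaussian and
# the optimal (posterior-mean) estimate is a precision-weighted average:
w = prior_sd**2 / (prior_sd**2 + noise_sd**2)
ideal_estimate = w * measurement + (1 - w) * prior_mean

print(f"noisy measurement: {measurement:.2f}")
print(f"ideal estimate:    {ideal_estimate:.2f} (pulled toward the prior)")
```

Note what the construction quietly requires: the scientist supplies the prior, the noise model, and the "true" stimulus against which performance is scored, which is precisely the third-person omniscience at issue in this section.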
Supernormal stimuli highlight how illusory seemingly objective facts in the world can be (Tinbergen, 1951). We discuss these issues more fully later. The problem is that the very specification of an objective landscape, space, or environment assumes that the scientist him or herself, in effect, is omniscient and has a god-like, true view of all (or at least a larger set of) options available to the organism under study: a type of third-person omniscience. The scientist sees all (or more) and can, ex ante and post hoc, specify what is the best course of action and whether the organism in fact perceived correctly, acted boundedly, or behaved rationally. But, in most cases, simply labelling something as biased or bounded does not amount to a theoretical explanation. Indeed, it serves as a temporary holding place that requires further investigation as to the reasons why something was perceived or judged in a certain way. Perhaps the organism simply did not have enough time to identify the optimal solution, or the organism could not see certain possibilities. The fact that perception and rationality consistently fall short of the standards set forth by scientists raises questions not only about the standards themselves but also about why this is the case.

5 This idea of an ideal observer can be reasonable in highly restricted settings of psychophysics. Low-level vision is limited by photon statistics, and visual acuity by the wavelength of electromagnetic radiation and photon statistics. Here vision is indeed limited by the physics, and the ideal observer is easily defined and useful. We can also do so in acoustics. But there is no way to conceive of ideal observers where meaning and awareness are concerned; the "available information" is actually "structural complexity" in the Shannon sense. However, these models are about vision as a physiological device, again ignoring awareness.

The second problem is that perception as seen by Simon is a camera-like activity where organisms capture veridical images of, and possibilities in, their environments and store or compare this information (cf. Simon, 1980). Granted, the camera used by organisms (perception and vision) is specified as bounded in that it captures only a small, delimited portion of the surrounding environment in which it is situated, that which can be immediately perceived (for example, "a circular portion" around an organism: Simon, 1956: 130), rather than assuming omniscient awareness of the full environment. Whether only some or all of the environment is captured within the choice set of an organism, the approach assumes that perception generates objective representations or copies of the environment. Perception is equated with "veridical" or true representation, and only the bounds of what is perceived are narrowed, compared to the more omniscient models featured in economics and elsewhere. Simon et al.'s "CaMeRa" model of representation illustrates the point, specifically where "mental images resemble visual stimuli closely" (Tabachneck-Schijf et al., 1997: 309), an assumption we will return to when discussing Kahneman's more recent work. Perception as representation, and the effort to map true environments onto true conceptions of those environments, is the sine qua non of much of the cognitive sciences. Frequent appeals to learning, bias, boundedness, and limitations only make sense by arguing that there is a true, actual nature to environments (which can be learned over time).
The standard paradigm uses a world-to-mind, rather than a mind-to-world, model of perception that is, quite simply, not true to the nature of perception. Perception is not (just) representation (e.g., Purves, 2014) or world-to-mind mapping. The emphasis on representation places undue weight on the environment itself, and the objects within it, rather than on the organism-specific factors that in fact might originate and direct perception. Simon's view of perception, then, falls squarely into the domain of psychophysics and inverse optics (cf. Marr, 1982): the attempt to map objective environments onto the mind. It implies a form of pure vision or veridical optics where the world can properly be captured and represented, if only there were enough eyes on it, or enough computational or perceptual power to do so (cf. Simon, 1955, 1956). Environmental percepts are treated as relatively deterministic and passive data and inputs to be represented in the mind. The third and perhaps most central concern is the way that perception is implicitly seen as independent of the perceiver. Simon argues that the nature of the organism does not meaningfully impact the argument, as highlighted by his interchangeable use of universal mechanisms applied to organisms in general, animals and humans alike. For example, he argues that "human beings [or ants], viewed as a behaving system, are quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself" (1969: 64-65). No attention is paid to the organism-specific factors associated with perception; the focus is on the computation of perceived alternatives and the representation of an objective environment.6 Simon's work was undoubtedly influenced in some form by behaviorism and its focus on the environment instead of the organism. He heralded the coming of a universal cognitive science (Simon, 1980, Cognitive Science), where a set of common concerns across "psychology, computer science, linguistics, economics, epistemology and social sciences generally" focused on one idea: the organism as an "information processing system." Perception, information gathering, and processing provided the underlying, unifying model for this approach.7 The universality and generality of the arguments were also evident in Simon's interest in linking human and artificial intelligence or rationality. In an article titled "The invariants of human behavior," Simon argues that "since Homo Sapiens shares some important psychological invariants with certain nonbiological systems, the computers, I shall make frequent reference to them also" (1990: 3, emphasis added). He then goes on to delineate how human and computer cognition and rationality share similarities and are a function of such factors as sensory processing, memory, computational feasibility, bounded rationality, search, and pattern recognition. This approach represents a highly behavioral, externalist, and automaton-like conception of human perception and behavior (cf. Ariely, 2008; Bargh & Chartrand, 1997; Moors & De Houwer, 2006). The concern with these arguments is that they do not recognize that perception is specific to an organism or a species; they instead assume a universality that has little empirical support.
To suggest and assume that there is some kind of objective environment which the organism searches is not true to nature. Instead of generic or objective environments, organisms operate in their own "Umwelt" and surroundings (Uexküll, 2010), where what they perceive is conditioned by the nature of what they are (Koenderink, 2014). The work of Tinbergen and Lorenz in ethology makes valuable contributions by showing how organism-specific factors are central to perception and behavior. Yet the standard paradigm bypasses the hard problem of perception (its specificity and comparative nature) by jumping directly to environmental analysis and by assuming that perception is universal and equivalent to inverse optics (the mapping of objective stimuli to the mind). Although we may seek to identify general factors related to objects, or to environmental salience or objectivity across species, this simply is not possible, as what is perceived is determined by the nature of the organism itself.

6 Simon briefly mentions that "we are not interested in describing some physically objective world in its totality, but only those aspects of the totality that have relevance as the 'life space' of the organisms considered" (1956: 130). However, there is no subsequent discussion of organism-specific factors related to this life space of organisms, in either his early or his later work. The emphasis is on universal factors that apply across species (see Simon, 1990).

7 Simon remarked to a friend that we need a "less God-like and more rat-like chooser" (Crowther-Heyck, 2005: 6). These types of arguments link with behaviorism, which emphasized environments over organisms: "the variables of which human behavior is a function lie in the environment" (Skinner, 1977: 1). Skinner further argued that "the skin is not that important as a boundary" (1964: 84). Behaviorism also focused heavily on environments and external stimuli, at the expense of understanding the (comparative) nature of the organism.

Simon's notion of objective environments, which then can be compared to subjective representations of those environments, is also readily evident in a large range of theories across the domain of psychology and cognition. For example, in his influential Architecture of Cognition, Anderson (2013; also see Anderson & Lebiere, 2003) builds on precisely the same premise of universal cognition, seeking to develop a "unitary theory of mind" focused on external representation and the mind as a "production system" (inputs-outputs and if-then statements driving organism interaction with the environment). This research builds on the longstanding "Newell's dream" (Allen Newell, Herbert Simon's frequent co-author) of building a computational and unified theory of cognition.

Kahneman on perception

A timely example of how problematic models of perception and vision continue to plague the rationality and decision-making literature is provided by Kahneman's Nobel Prize speech and subsequent American Economic Review publication (2003a), titled "Maps of Bounded Rationality." A version of this article was also co-published in the American Psychologist (2003b). The article explicitly links the current conversations in cognitive psychology and behavioral economics with Simon's work and our discussion in the previous section. However, Kahneman's work focuses even more directly on perception and vision.
He argues that his approach is distinguished by the fact that "the behavior of agents is not guided by what they are able to compute" (à la Simon) "but by what they happen to see at a given moment" (Kahneman, 2003a: 1469). Sight thus takes center stage as a metaphor for arguments about rationality. What is illustrative of Kahneman's focus on perception and sight is that he "[relies] extensively on visual analogies" (2003a: 1450). The focal article in fact features many different visual tasks, pictures, and illusions, which are used as evidence and examples to make his points about the nature and limits of perception and rationality. We will revisit, and carefully reinterpret, some of these visual examples. Kahneman's emphasis on vision and perception is not all that surprising, as his early work and scientific training, in the 1960s, were concerned with psychophysics, perception, and inverse optics: the study and measurement of physical and environmental stimuli. This early work focused on perception as a function of such factors as environmental exposure and contrast (Kahneman, 1965; Kahneman & Norman, 1964), visual masking (Kahneman, 1968), time intensity (Kahneman, 1966), and thresholds (Kahneman, 1967b). In other words, the study of perception is seen as the study of how (and whether) humans capture objects and environments based on the actual characteristics of those objects and environments. These assumptions from Kahneman's early work, and from the broader domain of psychophysics, have carried over into the subsequent work on the nature of rationality. This view of perception is also center stage in, for example, Bayesian models of rationality (e.g., Oaksford & Chater, 2010). The background assumption in all of this research is that "responding to [the actual attributes of reality] according to the frequency of occurrence of local patterns reveal[s] reality or bring[s] subjective values 'closer' to objective ones" (Purves et al., 2015: 4753). In the target article, Kahneman (2003a) conceptualizes individuals, similar to Simon, as "perceptual systems" that take in stimuli from the environment. As put by Kahneman, "the impressions that become accessible in any particular situation are mainly determined, of course, by the actual properties of the object of judgment" (2003a: 1453, emphasis added). This notion of perception explicitly accepts vision and perception as veridical or "true" representation (e.g., Marr, 1982; Palmer, 1999). Similar to Simon, the approach here is to build a world-to-mind mapping where "physical salience [of objects and environments] determines accessibility" (Kahneman, 2003a: 1453). Perception is the process of attending to, seeing, or recording, in camera-like fashion (as suggested by Kahneman's language of "impressions" and "accessibility" throughout the article), physical stimuli in the environment, based on the actual characteristics of objects and environments themselves. The emphasis placed on the environment is evident in what Kahneman calls "natural assessments" (cf. Tversky & Kahneman, 1983). Natural assessments are environmental stimuli, characterized by the "actual," "physical" features of objects, that are recorded or "automatically perceived" or attended to by humans and organisms (Kahneman, 2003a: 1452).
These physical features or stimuli include "size, distance, and loudness, [and] the list includes more abstract properties such as similarity, causal propensity, surprisingness, affective valence, and mood" (Kahneman, 2003a: 1453). This work closely links with psychophysics: efforts to understand perception as a function of such factors as threshold stimuli or exposure (e.g., Kahneman, 1965). Important to our arguments is that Kahneman equates perception, on a one-to-one basis, with rationality, intuition, and thinking itself, thus implying a specific environment-to-mind mapping of mind. This is evident in the claim that the "rules that govern intuition are generally similar to the rules that govern perception," or, more succinctly: "intuition resembles perception" (Kahneman, 2003a: 1450). Kahneman draws both analogical and direct links between perception and his conceptions of rationality, decision-making, and behavior. Visual illusions, for example, are seen as instances and examples of the link between perception and rationality. The discrepancy between what is seen (and reported) and what in fact is there provides the basis for ascribing bias or irrationality to subjects. Visual illusions have thus become the example of choice for highlighting bias and the limits of perception. The assumed camera-like link between perception and cognition emerges across a wide range of literatures in the domain of rationality, reasoning, and cognition. For example, Chater et al. argue that the "problem of perception is that of inferring the structure of the world from sensory input" (2010: 813). Most Bayesian models of cognition, rationality, and decision-making feature similar assumptions (cf. Jones & Love, 2011). The precise nature of these inferences, from a Bayesian perspective, is based on encounters with an objective environment, the nature of which can be learned with time and repeated exposure (cf. Duncan & Humphreys, 1989). The social sciences, then, are building on a broader psychological and scientific literature that treats "object perception as Bayesian inference" (Kersten et al., 2004; also see Chater et al., 2010). Bayesian perception compares observation and optimality (Ma, 2012; cf. Verghese, 2001), where the effort is to "accurately and efficiently" perceive in the form of "belief state representations" and to match these with some true state of the world (Lee, Ortega, & Stocker, 2014). Oaksford and Chater (2010) discuss this Bayesian "probabilistic turn in psychology" and the associated "probabilistic view of perception" in the social sciences, where repeated observations help agents learn about the true, objective nature of their environments. Bayesianism is now widely accepted; as Kahneman argues, "we know…that the human perceptual system is more reliably Bayesian" (2009: 523).8

Revisiting and reinterpreting Kahneman's examples

In the focal articles, Kahneman (2003a, b) provides five different visual illustrations and pictures to make his point about the nature and boundedness of perception and rationality. Scholars in the cognitive and social sciences have indeed heavily focused on visual tasks and illusions to illustrate the limitations, fallibility, and biases of human perception (e.g., Ariely, 2001; Gilovich & Griffin, 2010; Vilares & Kording, 2011). These visual examples are used to illustrate the (seeming) misperceptions associated with objectively judging such factors as size, color and contrast, context and comparison, and perspective.
These examples are also used to point out perceptual salience and accessibility, the role of expectations and priming, and the more general problem of perceiving "veridically," as an example of boundedness and bias (Kahneman, 2003a). However, visual illusions are commonly misinterpreted (Rogers, 2014). First, they rarely provide a good example of bias in perception; instead, they can be interpreted as illustrations of how the visual system works. Second, illusions and perceptual biases are simply an artefact of the problem of singularly and exhaustively representing objective reality in the first place. Thus we next point to some of Kahneman's (2003a) examples and argue that these examples are wrongly interpreted, on both counts. In one illustration, Kahneman (2003a: 1460) highlights the problem of accurately judging or comparing the size of objects by using a two-dimensional picture that seeks to represent a three-dimensional environment. Similar to the classic Ponzo illusion (see Fig. 1, copied from Gregory, 2005: 1243; cf. Ponzo, 1912), in the picture the focal objects (in this case, the white lines) that are farther away (or higher, in the two-dimensional image) are seen as larger by human subjects, even though the objects are the same size on the two-dimensional surface. Kahneman calls this "attribute substitution" and argues that the "illusion is caused by the differential accessibility of competing interpretations of the image," and further that the "impression of three-dimensional size is the only impression of size that comes to mind for naïve observers; painters and experienced photographers are able to do better" (Kahneman, 2003a: 1461-1462). The perceptual naiveté of subjects, compared with experts, is indeed a popular theme in the rationality literature.

8 Bayesian models of cognition and rationality have of course been criticized in the literature, for their seeming similarities to behaviorism, their lack of attention to underlying mechanisms, their equating of rationality with computation, a lack of empirical findings, an overly strong focus on rationality, and so on (e.g., Bowers & Davis, 2012; Jones & Love, 2011). Our focus here is different, in that we highlight the perception- and vision-related assumptions made by this literature.

The problem is in how the visual task that is purported to illustrate perceptual illusion and bias is set up and how it is explained. The concern here is that the image features conflicting stimuli, namely, a conflict between the image and what it seeks to represent in the world. The reason that the top, white line in Fig. 1 (at first glance) appears to be longer is because the image features both two- and three-dimensional stimuli. Since the white line at the bottom (Fig. 1) is shorter than the railway ties it overlaps with, and the railway ties are presumed to be of equal length, it is natural to make the "mistake" of judging that the top line in fact is longer than the bottom line. The catch, or seeming illusion, is that the two white lines are of equal length in two-dimensional space. The issue is that the vertical lines disappearing into the horizon (the railroad tracks themselves) suggest a three-dimensional image, though the focal visual task relates to a two-dimensional comparison of the lengths of the two horizontal, white lines.
To illustrate the problem of labelling this an illusion, we might ask subjects whether the vertical lines (the railroad tracks) are merging and getting closer together (as they go into the horizon), or whether they remain equidistant. On a two-dimensional surface it would be correct to report that the lines are getting closer together and merging. This is how things appear in the image. But if the picture is interpreted as a representation of reality (of space, perspective, and horizon), then we might also correctly say that the lines are not getting closer together or merging. Furthermore, if the top, horizontal white line were in fact part of the three-dimensional scene that the picture represents, it would be correct to say that the top line indeed is longer. Experimental studies of visual space, using Blumenfeld alley experiments, provide strong evidence for the point that there is nothing straightforward about representing space on a two-dimensional surface or plane (e.g., Erkelens, 2015). Furthermore, consider what would happen if subjects were asked to engage in the same task in a natural environment, rather than looking at a picture: standing in front of railroad tracks that go off into the horizon. What visual illusions could be pointed to in this setting? The subjects might, for example, report that the tracks themselves appear to remain equidistant and that the railroad ties appear to remain the same size. If the subjects slowly lifted a 1-meter-long stick, held horizontally in front of them, at some point the stick would indeed be of seemingly equal (two-dimensional) length to one of the horizontal railroad ties visible further toward the horizon. We might briefly note that another interpretation of these types of perspective-based illusions is that they not only play with two and three dimensions, but also capture motion (e.g., Changizi et al., 2008). That is, human perception is conjectural and forward-looking, for example anticipating oncoming stimuli when in motion. Thus the converging or ancillary lines in the background of an image, commonly used in visual illusions (e.g., the Ponzo, Hering, Orbison, and Müller-Lyer illusions), can be interpreted as suggesting motion, and thus as appropriately "perceiving the present" and anticipating the relative size of objects. Visual illusions are only artificially induced by taking advantage of the problem of representing a three-dimensional world in two dimensions. The discrepancies between two and three dimensions, the so-called data or evidence of visual illusions and bias, are not errors but simply (a) examples of how the visual system in fact works and (b) artefacts of the problem that two-dimensional representation is never true to any three-dimensional reality (we will touch on both issues below). The use of perspective-based visual illusions as evidence for fallibility, misperception, or bias is only a convenient tool for pointing out bias. But any bias is only the result of having subjects artificially toggle between representation and reality (or, more accurately, one form or expression of reality). To say that scientists have accurately captured some sort of bias is simply not true (Rogers, 2014). Visual illusions based on perspective inappropriately exploit and interpret a more general problem, which is that two-dimensional images cannot fully represent three-dimensional reality.
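Returning to the railroad thought experiment: the stick comparison comes down to elementary visual-angle arithmetic. In the sketch below (the sizes and distances are illustrative assumptions of ours), a 1 m stick held 2 m away subtends exactly the same visual angle as a hypothetical 2.6 m railroad tie 5.2 m away. Reporting the two as "equal" is a correct answer to the two-dimensional question and an incorrect answer to the three-dimensional one, which is the whole ambiguity the illusion trades on.

```python
import math

# Visual angle subtended by an object: 2 * atan(size / (2 * distance)).
# Sizes and distances are illustrative assumptions.

def visual_angle_deg(size_m, distance_m):
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

stick   = visual_angle_deg(1.0, 2.0)   # 1 m stick held 2 m from the eye
far_tie = visual_angle_deg(2.6, 5.2)   # hypothetical 2.6 m tie, 5.2 m away

print(f"stick:   {stick:.2f} degrees")
print(f"far tie: {far_tie:.2f} degrees  (identical retinal size)")
```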
Moreover, as we will discuss, the very notion of appealing to some kind of singular, verifiable reality as a benchmark for arbitrating between what is illusion or bias and what is not is fraught with problems from the perspective of vision science (Koenderink, 2015; Rogers, 2014; see also Frith, 2007). We might note that some scholars in the area of cognition and decision-making have recently observed that visual illusions are incorrectly used to argue that perception and cognition are biased. For example, Rieskamp et al. write: "Just as vision researchers construct situations in which the functioning of the visual system leads to incorrect inferences about the world (e.g., about line lengths in the Müller-Lyer illusion), researchers in the heuristics-and-biases program select problems in which reasoning by cognitive heuristics leads to violations of probability theory" (Rieskamp, Hertwig, & Todd, 2015: 222). We agree with this assessment, but our point of departure is more fundamental and pertains to the nature of perception itself. Specifically, the extant critiques of bias (and the associated interpretations of visual illusions) propose that humans eventually learn the true nature of the environment, and thus focus on alternatives such as a Bayesian probabilistic view of perception. But the problem is that "probability theory is firmly rooted in the belief in [an] all seeing eye" (Koenderink, 2016: 252). In other words, the idea of Bayesian "ecological rationality" (Goldstein & Gigerenzer, 2002; Todd & Gigerenzer, 2012) builds on a model of ecological optics (cf. Gibson, 1977), where perception is also seen in camera-like fashion: humans learn the true nature of environments over time. The notion of ecological rationality and optics implies that illusions are mere temporary discrepancies between representations and the real world. We propose a fundamentally different view, one that suggests it is difficult (if not impossible) to disentangle illusion, perception, and reality. Thus, while we agree with the critique, our point of departure anchors on a very different view of perception, which we will outline in the next section. To illustrate further concerns with how perception is treated in this literature, we focus on another visual example provided by Kahneman (see Fig. 2, from Kahneman, 2003a: 1455). This example is used by Kahneman to show the "reference-dependence" of vision and perception (2003a: 1455). He specifically points to reference-dependence by discussing how the perception of brightness or luminance can be manipulated by varying the surrounding context within which the focal image is embedded (see Fig. 2, from Kahneman, 2003a: 1455). In other words, it would appear that the inset squares in Fig. 2 differ in brightness, due to the varied luminance of the surrounding context. But the two inset squares in fact have the same luminance. Kahneman thus argues that the "brightness of an area is not a single-parameter function of the light energy that reaches the eye from that area" (2003a: 1455). Stopping short of calling this an illusion, the implication is that the reference-dependence of vision says something about our inability to judge things objectively and veridically, even though actual luminance in fact can be objectively measured.9 A wide variety of brightness- and color-related illusions have of course been extensively studied by others as well (Adelson, 1993, 2000; Gilchrist, 2007).
The concern with this example is that the use of color or luminance tasks artificially exploits the fact that no objective measurement of color or luminance is even possible (Koenderink, 2010).10 Using shadows, or changing the surrounding context or luminance of a focal image, a common approach to pointing out illusions, is not evidence that perception itself is biased or illusory. Kahneman is correct when he says that color or luminance is "reference-dependent." But the underlying assumption remains that there also is a true, objective way to measure luminance itself, by the scientist, and to highlight how human judgment deviates from this objective measurement. Unfortunately, no such measurement is possible for color (Koenderink, 2010; cf. Maund, 2006). As discussed by Purves et al., any "discrepancies between lightness and luminance…are not illusions" (2015: 4753). We may infer that the "true" state of luminance is not observed by a subject (Adelson, 1993), but any observation, measurement, or perception is always conflated with a number of factors that cannot fully be separated (Koenderink, 2010). We may only care about the focal retinal stimulus itself, but perception and observation are also a function of illumination, reflectance, and transmittance (Purves et al., 2015). These factors are all inextricably conflated in a way that makes it impossible to extract true measurement (Koenderink, 2010). Similar to perspective-based visual illusions (where the illusion is artificially created by exploiting the gap between two-dimensional representation and three-dimensional reality), with luminance-based tasks scientists are only tricking themselves by pointing out observational discrepancies between perception and reality, rather than meaningfully pointing out bias. Color and luminance are always confounded by context (which includes a host of factors), and no objective measurement is possible (cf. Gilchrist et al., 1999; Gilchrist, 2006). Kahneman would seem to agree with this when he notes the context-dependence of perceptions. But his underlying "veridical" approach to perception and vision is in direct conflict with this argument (Kahneman, 2003a: 1460).11 Most importantly, the nature of the perceiver matters. As discussed by Rogers, "there can be no such thing as 'color information' that is independent of the perceptual system which is extracting that information" (2014: 843). The way color or luminance is perceived depends on who and what, in what context, is doing the perceiving. The human visual system is highly specific: it sees or registers a select portion of the light spectrum, responding to wavelengths between 390 nm and 700 nm. We would not point to illusion or bias if someone were not able to see spectra outside this range, for example ultraviolet light, which can be measured. As discovered by Newton, we see some aspects of light or color but not others. Chromatic aberrations highlight how white light includes a spectrum of colors. Indeed, the very idea of "light" could be cast as an illusion, as alternative realities (e.g., the color spectrum) can be measured and proven. Of course, any discussion of color needs to wrestle with and separate colorimetry and the phenomenology of light and color (Koenderink, 2010). Note also that the way any particular, seemingly objective color is represented or subjectively sensed varies across species. A bat sees the world very differently than humans do (cf. Nagel, 1974).
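The conflation noted above can be made concrete with a toy calculation. To a first approximation (ignoring transmittance), the luminance reaching the eye is the product of illumination and surface reflectance, so a single measured luminance is compatible with indefinitely many scenes; the numbers below are made up for illustration.

```python
# Toy illustration of the luminance confound: luminance ~ illumination *
# reflectance (transmittance ignored). All numbers are made up.

scenes = [
    ("white surface in shadow",       100.0, 0.80),  # illumination, reflectance
    ("grey surface in average light",  400.0, 0.20),
    ("black surface, brightly lit",   1600.0, 0.05),
]

for name, illumination, reflectance in scenes:
    luminance = illumination * reflectance
    print(f"{name:30s} -> luminance {luminance:.0f}")

# All three scenes deliver the same luminance (80), so the retinal input
# alone cannot determine whether the surface is white, grey, or black.
```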
Luminance or color has no "true" or objective nature (Koenderink, 2010). It is mental paint. Different species not only see the same colors differently, or do not see them at all, but they have different interpretations of the very same inputs, stimuli, and data. Furthermore, the human's built-in mechanism for maintaining color constancy should not be regarded as an illusion (cf. Foster, 2011), though it is often used as such (cf. Albertazzi, 2013). For example, in the real world we assume color constancy in the presence of shadows, even though this information can wrongly be used as evidence for illusion or bias when judging luminance or color in pictures (Adelson, 2000; cf. Gilchrist, 2006; Purves, 2014; Rogers, 2014). In all, although we can measure (and thus "objectively" show the existence of) a large range of possible frequencies across the electromagnetic spectrum, with various instruments, the human visual system nonetheless admits only certain types of input. This is true not only for luminance, but also for many other visual and perceptual factors. This very argument casts doubt on any one way of measuring perception and reality in the first place, an argument we turn to next.

An alternative approach to perception

Throughout this manuscript, in criticizing extant conceptions of perception and rationality, we have broadly alluded to some ways forward. We now outline an alternative approach to perception and subsequently discuss its implications for the study of human judgment and decision-making, as well as for models of rationality. The core of our argument is that perception and vision are species-specific, directed, and expressive, rather than singular, linear, representative, and objective. We are not the first to question the assumption of an all-seeing view of perception; yet much extant work across the cognitive sciences continues to rely on this assumption. Perception necessarily originates from a perspective, or point of view.

Organism-specific perception

The focus on the limits, errors, and boundedness or bias of perception misses a fundamental point about perception, namely that perception is organism- and species-specific. In an effort to develop general models of cognition and rationality (across different organisms, and even to account for artificial intelligence: Simon, 1980, 1990), scholars have lost sight of central insights from domains such as ethology. Ethology is the branch of biology that focuses on species-specificity, the comparative nature of organisms. Instead of attempting to generate models "claimed to be general" (Tinbergen, 1963: 111), ethology is concerned with the comparative and unique nature of organisms, in terms of vision, perception, the senses, rationality, behavior, and any number of other domains (Lorenz, 1955; for a historical review see Burkhardt, 2005). One of the pioneers of the ethological approach to perception was the biologist Jakob von Uexküll (1921; cf. Riedl, 1984). Von Uexküll argued that each organism has its own, unique "Umwelt," by which he meant the context of existence. He noted that "every animal is surrounded with different things, the dog is surrounded by dog things and the dragonfly is surrounded by dragonfly things" (2010: 117). These Umwelten or surroundings are not objective, but they comprise what the organism attends to, sees, and ignores. Hence, Umwelten vary across species and even across individual organisms within a species.
Any object in an environment, say a tree, is and means very different things, depending on the observer or species in question. A tree is a place of shelter for one species, a nesting location for another, an object of beauty, an obstacle, shade, a source of food, or a lookout point. The list of possible "affordances" for any object is long (Uexküll, 2010; cf. Gibson, 1979). Importantly, different aspects of "tree" are visible to different species. Awareness is not conditioned by what is there, but by the nature of the observer. Some focus on or simply see a particular color, and others focus on, say, size. To give an example from another context, stickleback fish are attracted by and attuned to the color red, at the expense of seeing other, more "real" features of potential mates (Tinbergen, 1951). What is perceptually "selected," attended to, or seen (which units, portions, and boundaries are relevant to the organism) varies significantly. Perception therefore depends more on the nature of the organism than on the nature of the environment. We cannot point to a single, objective characteristic of an object (whether color or size, as is done in psychophysics) or environment to capture some form of true perception. Although there are overlaps both in affordances and in what is perceived (what might be called "public objects"; Hoffman, 2013), species see things in radically different ways. Perception requires a deeper "grammar," an understanding of the nature of the perceiving organism itself. Similar to language learning (Chomsky, 1957), we can focus on and measure environmental inputs (exposure, repetition, and stimuli) to explain, say, language, as the behaviorists did, or we can focus on the underlying, latent, developing, and species-specific capacity for language despite impoverished inputs or stimuli. No amount of exposure to linguistic or perceptual stimuli, no matter how frequent or how intense, will create the capacity to speak or perceive something if the underlying capacity or nature to receive those stimuli does not exist in the first place. To provide a stylized example: if a child carried around a hypothetical pet bee throughout its childhood, both child and bee would be exposed to the same environments, percepts, and stimuli. Yet the child would not develop the navigational abilities of the bee, and the bee would not develop the language or perceptual capacities of the child (Chomsky, 2002). Each would have very different (neither right nor wrong, but different) perceptions of their environments. Perception requires an ability and readiness to respond to relevant stimuli (MacKay, 1969). The problem of perception has instead, in the rationality literature, been framed as one of needing to deal with (or somehow properly compute, capture, or see) the overwhelming inputs or correct, external stimuli, and to represent the world in accurate ways (cf. Kahneman, 2003a, b). But a more fundamental issue is the directedness of perception due to a priori factors associated with the organism itself. In psychology there is indeed a parallel program of research which focuses on perception and the a priori or "core" knowledge of humans, in reaction to extant empiricist, "periphery-inward"-type, input-output models of perception and behavior (e.g., Spelke et al., 1992). Stepping back, our intent is to focus on a different way of conceiving the nature of organisms, with particular attention to perception and vision.
As noted by Simon, the appropriate specification of the underlying nature of organisms is indeed a fundamental starting point for any scientific analysis: "Nothing is more fundamental in setting our research agenda and informing our research methods than our view of the nature of the human beings whose behavior we are studying" (1985: 303). This underlying nature, for Simon and in subsequent work by Kahneman and others, centers on perceptual boundedness, inputs and outputs, and computational limitation, generating models of rationality that can be verified against objective realities. We agree with Simon that the underlying specification of human nature matters. But we argue for a radically different, organism-specific understanding of nature, perception, and rationality.

Perception as a user interface

A powerful way of thinking about perception (and objects or environments) is as a species-specific user interface (Hoffman, 2009; Hoffman et al., 2015; Koenderink, 2011, 2015). What organisms, humans included, perceive is not the actual nature of things. As noted by Frith, "we do not have direct access to the physical world. It may feel as if we have direct access, but this is an illusion created by our brain" (2007: 40; cf. Kandel, 2013). Perception and vision thus constitute, in effect, a species-specific interface that presents salient objects and features.12 What is visible on the interface (the way that objects or "icons" are, or how they are perceived) can be thought of as species-specific mental paint. Just as a computer's interface does not match any actual reality (and its icons could vary wildly in, for example, color), and in fact is an illusion, so perception is some part illusion (or hallucination), albeit a very useful one. The perceptual interface hides much of reality behind a set of things that are salient to a species. The fact that many aspects of reality are hidden is useful, rather than a computational problem or a lack of objectivity on the part of the organism or observer. The perception of particular objects also reflects the specific nature and capability of any organism. The lack of a capacity to see something as "x" and not as "y" (just like any species-specific capacity: bird-like flight, bat-like echolocation, or bee-like navigation) is not somehow problematic, or data to be utilized for highlighting bias or boundedness, but simply inherent to the nature of the organism itself. The notion of perception as a user interface reinforces our claim that there is no possible way to point to or verify any one objective reality against which we might test susceptibility to illusion or bias. Any discussion of color or luminance illustrates this. For all practical purposes we can treat color as real in our day-to-day interactions and behavior, without getting into details about color spectra, the phenomenology of color, the nature of light or electromagnetic waves, and radiation (cf. Wilczek, 2016). In other words, our perceptual interface is useful and serves us quite well, without our having to get into the actual physical or objective nature of things (as is done in the rationality literature). The problem is that even the most real, tangible, and physical of objects, say a table, is not verifiable in a scientific sense (though pragmatically we of course see it), despite the physicalism and materialism emphasized by many in science.
Just as a laptop provides a useful, perceptual interface that hides other realities (which in turn hide yet other realities), so a table or any other physical thing can be seen as a species-specific icon. As discussed by Eddington (1927: 11-16), a physical object such as a table is not just what we see (and any physical features we might ascribe to it or measure: color, size, weight); it is also, counter to what is visible to us, largely made up of "empty space." Even the most basic or essential of actual, physical elements, the atom, in fact "has no physical properties at all" (see Heisenberg, 1933; also Bell, 1990; Wilczek, 2016). In modern physics, compared with classical physics, there are neither any meaningfully physical properties (e.g., Mermin, 1998; Mohrhoff, 2000) nor any form of objective observer-independence (e.g., Bub, 1999; Maudlin, 2011; Wilczek, 2016). A problem is then introduced by the demands that existing work on rationality places on the "physical" and "actual properties of the object of judgment" (Kahneman, 2003a; cf. Chater et al., 2010; Kersten et al., 2004). These actual properties are impossible to pin down, due to their multidimensionality. We might say that focusing on the actual, objective reality simply represents a pragmatic and empirical stance: objectivity only applies to what humans can actually touch and see (or verify), thus circumventing any discussions that might get into metaphysics or the nature of reality. But as our discussion of various visual tasks and examples illustrates (e.g., luminance and visual illusions), it is impossible to point to any one true way that things really are.13 Objects can be seen, described, and represented in a large variety of ways, as we will further emphasize next. We may be able to momentarily trap subjects into seeming illusions, into not seeing things in the one specific and rational way that we might demand of them. But these illusions are only an artefact of demanding that perception conform to one point of view, even though other views are possible, depending on the perspective. Rather than anchor on any form of computation or environmental and camera-like representation, our focus is not just on the species-specificity and user-interface nature of perception, but also on the directedness of perception. This idea of the directedness of perception might informally be captured by Popper's (1972) contrast between bucket theories of mind and searchlight theories of mind. Bucket theories represent a stimulus- and input/output-oriented model of mind where environmental information and perceptions are passively and automatically, without meaning (cf. Koenderink et al., 2015; Pinna, 2010; Powell, 2001), poured in as a function of exposure, the actual nature of stimuli, and experience. The searchlight model of mind assumes that perception is driven by the set of guesses, questions, conjectures, hypotheses, and theories that the mind (or organism) brings to the world (cf. Brown, 2011; Koenderink, 2011). The notion of a searchlight theory of mind might be compared to the idea of "perception as hypotheses" (cf. Gregory, 1980). From this perspective, perception is actively directed toward certain features, and it is expressive. Perception is not a process of identifying or learning some set of capital-T truths about environments and the objects within them; rather, emphasis is placed on the organism-specific factors that direct perception and attention.
In contrast, the "what you see is what you get" approach to perception (Hoffman, 2012) treats vision "as an inverse inference problem," where the visual system seeks to "match the structure of the world" (Knill et al., 1996: 6). This approach treats perception as an effort to map "sensory input to environmental layout" (Chater et al., 2006: 287), or sees it as an effort to infer "the structure of the world from perceptual input" (Oaksford and Chater, 2007: 93). But the efforts to map the external world onto the mind cannot be retrofitted into the perspective that we are suggesting here. Some argue that the idea of perception as a user interface is simply a version of Bayesian perception (Feldman, 2015). This argument is that perception does not track capital-T truth or beliefs in the world, but that perception tracks usefulness, and that by doing so it leads to fitness and improved performance for organisms and species. But focusing on usefulness, instead of truth, is fundamentally in violation of the underlying assumptions and foundation of Bayesian approaches to perception and vision (Hoffman & Singh, 2012; Hoffman et al., 2015).

Perception, perspective, and art

The problems and opportunities encountered by artists and scholars who study the psychology and perception of art provide a useful window into the nature of vision (cf. Arnheim, 1954; Clark, 2009; Gombrich, 1956; Grootenboer, 2005; Helmholtz, 1887; Hyman, 2006; Ivins, 1938; Koenderink, 2014; Kulvicki, 2006, 2014; Panofsky, 1955, 1991). In this section we show how the arts teach us that any attempt at veridical representation and perception necessarily results in illusion (cf. Kandel, 2012). We concur with Arnheim, who wrote that "perception turns out to be not a mechanical recording of the stimuli imposed by the physical world upon the receptor organs of man and animal, but the eminently active and creative grasping of reality" (1986: 5). No true representation-more specifically, no single objective representation-is possible, as there are many possibilities for representing reality (Koenderink et al., 2015; Rauschenbach, 1985). Placing an emphasis on any one element when seeking to represent reality necessarily means that other parts are not represented. Any one representation is just that: one representation chosen amongst a very large set of possibilities. 13 Reality can be expressed in many ways. Various potential representations and expressions are not necessarily mutually exclusive, but useful for particular purposes, making different features salient. Thus it is hard to distinguish whether one representation is better or more veridical than another. Instead we might look for veridicality on certain dimensions (for example, whether three dimensions are appropriately captured), or better yet, for usefulness in making certain features salient.

13 We can reduce anything into "pointer readings" (Eddington, 1927: 247-252; also Koenderink, 2012)-measurements of size, position or motion-though the actual nature of objects (and particularly how we perceive these objects) is far more complex, and further it also depends on perspective and the observer. And, beyond any pointer readings, complexity is emergent beyond any physical factors that might be measured (Ellis, 2005).
Perhaps the best way of illustrating the problem of perception and representation, as informed by the arts, is by focusing on "linear" perspective and the aforementioned problem of capturing three-dimensional reality on a two-dimensional surface (Kandel, 2012; Mausfeld, 2002). The delineation of a Euclidean space allows three dimensions to be represented on a two-dimensional surface (cf. Koenderink, 2012). This is done by taking a fixed position, a point of view, and then identifying a vanishing point-a horizon where vertical, parallel lines meet-where distance is represented by size and convergence. The problem is that the use of Euclidean space and vanishing points on a two-dimensional canvas necessarily produces an illusion, as the vertical lines do not in fact converge (e.g., the railroad tracks in Fig. 1). Incorporating distance and space into a representation is beyond the capacity of the medium (a two-dimensional surface), necessitating illusion and the omission of other aspects of reality. As vividly articulated by the Russian mathematician Pavel Florensky, "linear perspective is a machine for annihilating reality" (2006: 93; cf. Koenderink et al., 2015; Rauschenbach, 1985). Or to soften the tone: the use of linear perspective annihilates some realities, omitting the possibility of their representation, while making three-dimensional aspects more salient. In other words, the use of a vanishing point hides a host of other things that could be represented, but now can't be, once the demand for depth is introduced. However, despite this, representations that properly depict three dimensions are often seen as more veridical and true to reality, even though they also hide much. Naïve or "flat" representations-for example Egyptian or Byzantine art-are seen as distorting reality by omitting perspective altogether (Gombrich, 1956; Panofsky, 1991). The representation itself of course is not the reality, but merely a map of it (that is, it focuses us on some portions of reality and makes them salient). 14 Consider how painting and fine art changed in the late 19th century when photography became available. Neurophysiologist Eric Kandel discusses how the work of artists at this time in Vienna "sought newer truths that could not be captured by the camera… [and] turned the artist's view inward-away from the three-dimensional outside world and toward the multidimensional inner self" (Kandel, 2012: 4; also see Kandel, 2013). The camera could capture outward surfaces or "skins," but not inward aspects that of course prove equally real. Artists such as Gustav Klimt "abandoned three-dimensional reality for a modern version of two-dimensional representation that characterizes Byzantine art" (Kandel, 2012: 113). Klimt captured the subject in flat, icon-like fashion, featuring symbolism and ornamentation. One form of representing reality (more photograph-like) is abandoned to give way to highlighting other aspects. The modernist mantra of turn-of-the-century Vienna-which united psychologists, artists, and neuroscientists alike-was that "only by going below surface appearances can we find reality" (Kandel, 2012: 16). 15 Kandel suggests that it was this tradition, which "questioned what constitutes reality," and which provocatively concluded that "there is no single reality," that in fact gave rise to cognitive science and neuroscience (2012: 14, 113). 16 The critical point here is that any visual scene can be represented in a number of different ways.
We could compare different depictions of the same visual scene by, say, a photographer versus a photorealist, impressionist, surrealist, cubist, or symbolist painter. There is no sense in which one or another of these representations is more true to actual reality (cf. Koenderink et al., 2015). Each representation points to or expresses different aspects. Some aspects of a visual scene are made more salient by one depiction, necessitating the abandonment of other aspects. Surface appearances or three-dimensional realities might be foregone to capture other aspects. Even photographs are scarcely objective or neutral, as photos of the same visual scene can vary significantly-and thus capture different aspects of reality, hiding others-based on choices about aperture, shutter speed, and exposure (Koenderink, 2001). Any number of other technologies could be used to enhance, express, measure, elicit, or point out different features within a visual field. At the most basic level, a painting can simply be described by what is physically there. Thus, prior to any demands for accurate depiction, we might objectively see a finished painting as constituted by its physical parts: a wooden frame, a canvas of some size, and the color pigment on the canvas. 17 This is one description. The painting can also be considered more closely: the composition and arrangement of the pigments can be noted and perhaps some kind of judgment can be made about whether these appropriately capture, say, Euclidean space or perspective. This is another description, but not the only alternative. The list of possible demands for representation is too large to be captured on a two-dimensional surface. Of course, the most obvious problem in anchoring on the physical aspects of representation or perception is that it misses a wide swath of activity concerned with meaning and symbols. A painting is more than the sum of its physical elements, a canvas, and pigment. The way that the pigments are arranged, and the subject matter of the representation, feature elements of meaning that scarcely can be captured in any physical way (Langer, 1953; Panofsky, 1955; also see Gormley, 2007). The arts teach us that a representational approach to perception cannot address how the physical things on a canvas-composed and arranged-elicit more than the luminance and other physical factors that could be measured. Recent work on Gestalt psychology reinforces this point. Central to perception, then, is the "beholder's share" (Gombrich, 1956). Observation is always theory-laden (Popper, 1972) and there is no innocent eye that somehow directly captures or speaks truth to data or reality. The beholder's share is not only captured by the species-specific nature of perception, but also by the experiences, theories, and insights that the beholder brings to any encounter. We might again cite Florensky, who argues that "the visual image is not presented to the consciousness as something simple, without work and effort, but is constructed…such that each [image] is perceived more or less from its own point of view" (2006: 270; see also Panofsky, 1991). Our argument is not merely a stylistic or artistic one; it is directly applicable to science. What the arts reveal is that reality and perception are multifarious. We might, and perhaps should, observe and measure this multifariousness scientifically as well (Kandel, 2013; Koenderink, 2014). Many factors are not perceptible by the human eye, but are nonetheless there. Science goes beyond naïve perception.
We use all manner of perception-enhancing scientific tools and measurements to learn about the nature of reality. In all, the above research raises fundamental questions about the emphasis that Kahneman places on the "actual," "physical," and "veridical" aspects of reality (2003: 1453-1460). As we have discussed, perception simply does not give us this type of direct access to reality (cf. Frith, 2007), or certainly not to the type of singular, objective reality that Kahneman has in mind. Furthermore, scholars interpret the fact that perception can be "primed" (for example by size, contrast, order), and that individuals can be led to see things in very different and discrepant ways, as evidence for bias (Kahneman & Frederick, 2002). The evidence from top-down priming is not evidence for bias, but rather evidence for the openness of reality to be interpreted and expressed in many different ways. What the arts illustrate is that rather than demand that subjects meet the requirements of, for example, linear perspective, there are a multitude of other demands that might also be made for representing, expressing, or seeing reality. Any single demand for verity is necessarily incomplete and illusory.

Perception and rationality: So what?

Our arguments about perception may seem abstract and perhaps far removed from practical concerns about the study of rationality, of human judgment and decision-making. However, our thesis has significant implications. First, there are two fundamentally different conceptions of human nature and rationality. One conception assumes that errors and mistakes are the critical phenomena to be demonstrated and explained (cf. Krueger & Funder, 2004). This literature uses the norm of omniscience as a convenient "null hypothesis" 18 -granting scientists themselves an all-seeing position-against which human decision making is measured. The conventional and even ritualistic use of this null hypothesis has endowed it with a normative force. Yet repeated rejections of this null hypothesis are of limited interest or concern when the normative status of the theory is itself questionable. We can only expect the list of deviations, biases, and errors to grow, indefinitely, without fresh theoretical light being shed. Unfortunately, many of these tests "reveal little more than the difficulty of the presented task" (Krueger & Funder, 2004: 322). The other approach to rationality focuses not on mistakes and error (from some omniscient norm), but on the nature of rationality itself. Such a theory needs to capture the accuracy manifest in human judgment (Jussim, 2015), as well as the fact that many of the seeming biases have heuristic value and lead to better judgments and outcomes (e.g., Gigerenzer & Brighton, 2009). Furthermore, this alternative theory needs to recognize that many of the simplistic tests of rationality omit important contextual information and also do not recognize that even simple stimuli, cues, and primes can be interpreted in many different ways. Thus, while psychology and behavioral economics can take credit for introducing psychological factors into judgment and decision making (cf. Thaler, 2015), we think that the literature cited here calls for a significant shift in the psychological assumptions about human nature.

18 In his recent book on behavioral psychology and economics, Richard Thaler recounts what psychologist Thomas Gilovich said to him: "I never cease to be amazed by the number of convenient null hypotheses economic theory has given you" (Thaler, 2015: 97).
The problem is that omniscience indeed is an all too convenient null hypothesis, which is easy to demonstrate as false in an infinite variety of ways. However, beyond continuing to point to deviations from this convenient null hypothesis, future work also needs to more proactively account for what rationality is. We see both perception and rationality as a function of organisms' and agents' active engagement with their environments, through the probing, expectations, questions, conjectures and theories that humans impose on the world (Koenderink, 2012). The shift here is radical: from an empiricism that focuses on the senses to a form of rationalism that focuses on the nature, capacities, and intentions of the organisms or actors involved. While empiricism emphasizes the actual, physical characteristics within a visual scene (Kahneman, 2003), rationalism focuses us on the perceivers themselves. From this perspective, much of the work on bias, blindness, or bounded rationality-as we will illustrate next-can be interpreted quite differently. Research by developmental psychologists shows how even infants have ex ante theories or "core knowledge" about the world, which guide expectations and object perceptions (e.g., Spelke et al., 1992; also see Gopnik & Meltzoff, 1997), thus challenging empiricism and the overly strong focus on the senses. We submit that a new generation of theories should start with a different premise, one which grants human actors the same theoretical and scientific tools that we as scientists use to understand the world. The present asymmetry-between our assumptions about subjects versus the implied assumptions that we have about science itself-deserves attention. It has been touched on in economics, where Vernon Smith argues that "our bounded rationality as economic theorists is far more constraining on economic science, than the bounded rationality of privately informed agents" (2003: 526). When we experimentally whittle rationality down to the simplest of stimuli or cues, we lose valuable contextual information, held by these "privately informed agents," which shapes perception and interpretation. The problem is that even the simplest of cues or stimuli afford wildly different interpretations. 19 Thus the beliefs, ideas, conjectures and theories of agents deserve more careful attention. It is worth noting that this form of theorizing is scarcely new. It may be found in developmental psychology (e.g., Spelke et al., 1992) and in the history of philosophy, for example in the work of Plato, Kant, or Goethe. In the context of social science this premise links up with the type of theoretical endeavor envisioned by Adam Smith, who argued that ultimately our theory of human nature and rationality-as paraphrased by Emma Rothschild-"must be a theory of people with theories" (2001: 50). Second, our arguments might yield alternative interpretations of existing theories and experimental findings of bias, boundedness, or blindness. Part of our concern is that the findings of bias and error are affected by scientists' own theoretical assumptions and expectations (cf. Bell, 1990), 20 much like perceiving and awareness depend on people's beliefs and expectations. If our theories postulate irrationality, and if we craft experimental tasks to prove this, we will find evidence for it. There is a large variety of stimuli that could be pointed to (and proven) but missed by human subjects in the lab or in the wild. But these types of findings can be interpreted in a number of different ways.
Consider a telling example. In their famous experiment on inattentional blindness, Simons and Chabris (1999) show how subjects miss a chest-thumping person in a gorilla suit walking across the scene, because these subjects were asked (primed) to count the number of basketball passes (cf. Chugh & Bazerman, 2007). Kahneman argues that the gorilla study points out something very fundamental about the mind, namely, that it is "blind to the obvious" (2011: 23-24). However, obviousness, from the perspective of perception-and awareness in particular-is far more complicated. If subjects were primed to look for the gorilla, and then asked to report on the number of basketball passes they observed, presumably they would also not be able to get the correct answer. Primes are equivalent to questions which direct awareness (cf. Koenderink, 2012), in the presence of visual fields that feature an extremely large (if not near infinite) variety of possible things that could be attended to. In the gorilla experiment, subjects could be asked to report on any number of things: the hair color of the participants, the gender or ethnic composition of the group, the expressions or emotions of the participants, the color of the floor, or whether they noticed what large letters were spray-painted on the wall (two large "S" letters). Any of these visual stimuli are evident-even obvious-though only if you are looking for them (or not looking for something else). Missing any one of them is not blindness or bias-though the stimuli are evident and obvious-though it can be framed as such. Missing the gorilla is a success, given the task at hand. Thus these types of experiments provide evidence for the directedness of perception and awareness, and highlight how a very large set of things can be attended to and reported in any visual scene. Primes and cues (rightly) direct the attention and awareness of subjects. In short, awareness and perception have little to do with the nature of the stimulus (Koenderink, 2012), even though this is the explicit assumption of behavioral work (Kahneman, 2003). What we are arguing for is thus a fundamentally different view of cognition. Awareness and perception are instead a function of the perceiver, of the questions, probes, and theories that any of us impose on even the simplest of visual scenes or surroundings, or on reality more generally. Shifting the emphasis to perceivers, rather than the nature of stimuli, provides a significant opportunity for future work. Rationality and perception research has engaged in an exercise where scholars pre-identify and focus on a single percept or stimulus and then look for a common response, or point to a systematic deviation from a single, sought-after, rational answer (Koenderink, 2001). Of course, it is important that theories allow and dictate certain observations. But an a priori focus on irrationality leads to an unknown quantity of pre-publication trial-and-error of different experimental tasks, to find and report those results that indeed provide evidence of bias or illusion. Any number of tests and experiments could be devised to highlight irrationality, blindness, and bias-as even the simplest of visual scenes exhausts our abilities to describe it. Missing something obvious (and thus surprising) in a visual scene of course provides an important basis for publication.
This tendency has been noted in the context of social psychology: "when judgments consistent with the norm of rationality are considered uninformative, only irrationality is newsworthy" (Krueger & Funder, 2004: 318). But again, the vast amount of decision making that humans get right receives little attention (e.g., Funder, 2012; Jussim, 2015). And, more importantly, the actual mechanisms of rationality and awareness never get addressed-a significant opportunity for future work. The third, and perhaps most basic, implication of our arguments is that the rationality literature needs to rethink the multitude of visual examples and perceptual metaphors that are utilized to highlight bias. As we have discussed, visual illusions do not provide evidence of bias (Rogers, 2014; cf. Hoffman & Richards, 1984). Instead they reveal how the perceptual system works (well) in the presence of incomplete, degraded, or ambiguous input information (Koenderink, 2012; Zavagno et al., 2015). Visual illusions reveal that multiple responses, or ways of seeing, are equally rational and plausible, as highlighted in our discussion of the Ponzo illusion (see Fig. 1). Rational judgment, then, much like visual perception, can be seen as "multistable" (Attneave, 1971). As noted by Schwartz et al., "multistability occurs when a single physical stimulus produces alternations between different subjective percepts" (2012: 896, emphasis added). Whereas Kahneman and others working in the heuristics-and-biases tradition emphasize the "physical" or "actual properties of the object of judgment" (2003: 1453) and thereby focus on a single, fixed, and veridical interpretation (i.e., the rational response), we argue that even simple stimuli are characterized by indeterminacy and ambiguity. Perception is multistable, as almost any percept or physical stimulus-even something as simple as color or luminance (Koenderink, 2010)-is prone to carry some irreducible ambiguity and is susceptible to multiple different interpretations. Conscious perception is the result of ambiguity-solving processes, which themselves are not determined by the stimulus input. Similarly, the human susceptibility to priming and sensitivity to salient cues is not prima facie evidence of irrationality, but rather provides evidence of this multistability. 21 Whether we are dealing with perception or reasoning, in information-deprived and ambiguous situations humans use whatever evidence or cues (or demand characteristics) are available to make judgments. This is also the basis for saying that apparent biases might be seen as rational and adaptive heuristics (Gigerenzer & Gaissmaier, 2011; McKenzie, 2003). The specific opportunity for future research, suggested by our arguments, is to recognize the multistability and indeterminacy of judgment and rationality. Modal, average, or common responses can be useful for some purposes, but scholars might also take advantage of the large variance in judgments and use this information to understand heterogeneity in both perception and reasoning. The rationality literature has a tendency to label certain outcomes as biases or mistakes-and the catalogue of different biases now numbers in the hundreds. But this labeling has not allowed us to understand the actual reasons why humans behave in particular ways (Boudon, 2003).
Furthermore, judgment and decision making often happen in ambiguous and highly uncertain environments, where specifying a single form of optimality is scarcely possible, except perhaps with the benefit of hindsight. While the biases and bounded rationality literature is getting much traction in business and managerial literatures and settings, we wonder whether it even meaningfully applies to settings characterized by high levels of uncertainty (cf. Felin, Kauffman, Koppl, & Longo, 2014). It is precisely in these settings where the literature on rationality might in fact study how agent beliefs, expectations, and theories guide judgment and behavior, and how humans adjust as they make errors and learn from their behavior. Furthermore, the biases and rationality literatures have been extremely individualistic, scarcely accounting for the social dimensions of rationality. That is, human interaction in social, institutional, and organizational settings is likely to significantly shape how rationality "aggregates." This is certain to be far more complicated than simple, linear addition, given complex, emergent outcomes. Thus, further theoretical and empirical attention is needed on the disparate social and organizational contexts within which judgment and decision making happen.

21 Of course, not just anything is "prime"-able or susceptible to so-called "top down" (e.g., categories or language) effects on perception (as discussed by Firestone & Scholl, 2015). However, our emphasis is on the fact that most perceptual cues and stimuli can be interpreted in different ways, far from yielding singular responses. The use of attentional cues or primes in experiments merely is an (adaptive and rational) response to having to deal with ambiguous stimuli in uncertain environments.

Conclusion

The purpose of this paper has been to show how the bounded rationality and biases literature-in behavioral economics and cognitive psychology-has implicitly built its foundations on some problematic assumptions about perception. Arguments about perception are inadvertently interwoven into the rationality literature through the use of visual illusions, metaphors, and tasks as examples of bias, boundedness, and blindness. The behavioral literature features an all-seeing view of perception, which we argue is untenable and in fact closely mirrors the assumption of omniscience which this literature has sought to challenge. We provide evidence from vision and perception science, as well as the arts, to make our point-along with suggesting some ways forward. We hope that our arguments can help build a foundation for alternative ways of thinking about judgment, decision making, and rationality. Just as the perception literature-some of which we have cited-features a more pragmatic and multi-dimensional approach to seeing and vision, the rationality literature might also consider the "usefulness" (and striking successes) of human reasoning and judgment in disparate contexts that feature much ambiguity and possibility. The literature on "biases as heuristics" (Gigerenzer & Todd, 1999) has begun to move us in this direction, although it has also inherited some problematic assumptions about perception. But there is also an opportunity to study the varied organism-specific and contextual factors that shape human cognition and decisions in natural settings.
Furthermore, human agents actively engage with the world on the basis of their expectations, conjectures, and theories, which also provides a promising opportunity for future work. Most real-world settings feature a wild assortment of possible stimuli and cues, allowing for varied types of rationalities and interpretations (even of the same stimulus), thus requiring us to expand the scope of how rationality is specified, studied, and understood. If our suggested reorientation of the study of rationality takes hold, then it will move the literature toward recognizing cognition, judgment, and rationality as a multistable affair. We hope that our paper, while provocative, has at least opened up a conversation about the perception-rationality link, and perhaps even a conversation about the very nature of rationality.
Angiopoietin-1 Mimetic Nanoparticles for Restoring the Function of Endothelial Cells as Potential Therapeutic for Glaucoma

A root cause for the development and progression of primary open-angle glaucoma might be the loss of the Schlemm's canal (SC) cell function due to an impaired Angiopoietin-1 (Angpt-1)/Tie2 signaling. Current therapeutic options fail to restore the SC cell function. We propose Angpt-1 mimetic nanoparticles (NPs) that are intended to bind in a multivalent manner to the Tie2 receptor for successful receptor activation. To this end, an Angpt-1 mimetic peptide was coupled to a poly(ethylene glycol)-poly(lactic acid) (PEG-PLA) block co-polymer. The modified polymer allowed for the fabrication of Angpt-1 mimetic NPs with a narrow size distribution (polydispersity index < 0.2), with the size of the NPs ranging from about 120 nm (100% ligand density) to about 100 nm (5% ligand density). NP interaction with endothelial cells (HUVECs, EA.hy926) as surrogate for SC cells and with fibroblasts as control was investigated by flow cytometry and confocal microscopy. The NP-cell interaction strongly depended on the ligand density and size of the NPs. The cellular response to the NPs was investigated by a Ca2+ mobilization assay as well as by real-time RT-PCR and Western blot analysis of endothelial nitric oxide synthase (eNOS). NPs with a ligand density of 25% significantly opposed VEGF-induced Ca2+ influx in HUVECs, which could possibly increase cell relaxation and thus aqueous humor drainage, whereas the expression and synthesis of eNOS was not significantly altered. We therefore suggest Angpt-1 mimetic NPs as a first step towards a causative therapy to recover the loss of SC cell function during glaucoma.

Introduction

Primary open-angle glaucoma (POAG) is a chronic, progressive neuropathy of the optic nerve and one of the leading causes of blindness worldwide [1,2]. Intraocular pressure (IOP) is considered the only modifiable risk factor for POAG development and progression [3]. The IOP is generated in the anterior chamber of the eye and is maintained by the balance between the production of aqueous humor in the ciliary body and its efflux through the trabecular outflow pathway. Pathologically altered tissues in the trabecular outflow system that are accountable for an increased IOP are the juxtacanalicular connective tissue (JCT), together with the inner wall endothelium of the Schlemm's canal (SC) [4,5]. Most anti-glaucoma drugs on the market act only symptomatically and do not engage the pathological changes in the trabecular outflow system. More innovative drugs that were recently approved are Rho kinase inhibitors such as netarsudil (Rhopressa) and NO donors such as latanoprostene bunod (VYZULTA). They address the abnormally higher extracellular matrix (ECM) synthesis and increased cell contractility of JCT and SC cells. However, they still fail to rescue the function of the SC [2,5-7]. As yet, there is no drug on the market that specifically targets the inner wall endothelium of the SC. A certain level of SC cell loss has a more prominent negative impact on IOP compared to the aforementioned ECM accumulation in the JCT [8].
Therefore, it is of utmost importance to specifically target SC cells, especially after the disease has progressed. Recently it was demonstrated that the integrity and functionality of the SC is maintained by the signaling between angiopoietin-1 (Angpt-1) and its tyrosine kinase receptor Tie2 [9]. Reduction or even inactivation of the Angpt-1/Tie2 signaling during adulthood induces SC degeneration and is a key factor for IOP disbalance [4,9,10]. Restoring this pathway by adding recombinant Angpt-1 as a therapeutic agent is not straightforward because Angpt-1 is prone to aggregation and is therefore not suitable as a therapeutic agent [11]. Recently, an Angpt-1 mimetic peptide sequence (HHHRHSF) was discovered [12-14]. We hypothesize that the Angpt-1 mimetic peptide immobilized on the surface of nanoparticles (NPs) could be able to restore SC cell function. For successful receptor activation, the clustering of multiple Tie2 receptors is required [15]. Because NPs may bind to several cell surface receptors simultaneously in a so-called multivalent manner, the crosslinking of multiple Tie2 receptors should thus be possible (Scheme 1).

Scheme 1. Concept of multivalent Angiopoietin-1 (Angpt-1) mimetic nanoparticles (NPs) intended to restore Schlemm's canal (SC) function. Polymer NPs are surface functionalized with the Angpt-1 mimetic peptide sequence HHHRHSF. After reaching SC cells, particles bind to the Tie2 receptor in a multivalent manner, inducing Tie2 receptor clustering and activation. After activation of the signaling cascade, several signaling pathways (not shown) are activated, restoring SC cell function. The dimensions do not correspond to reality.

In the present study, our aim was to take a first step towards developing a first therapeutic targeting the Tie2 receptor. Therefore, we developed Angpt-1 mimetic NPs. For this purpose, we attached the Angpt-1 mimetic peptide covalently to a poly(ethylene glycol)-poly(lactic acid) (PEG-PLA) block copolymer. In the next step, we used the modified polymer together with the polymer poly(lactic-co-glycolic acid) (PLGA) to produce NPs. First, we characterized the NPs physicochemically. For cellular experiments, three different cell types were chosen: a human endothelial cell line (EA.hy926) as surrogate cells for SC cells, human umbilical vein endothelial cells (HUVECs) as the primary counterpart, and primary fibroblasts as control cells. All three cell types were examined regarding their Tie2 receptor expression levels. Subsequently, NP uptake experiments were performed, and the cellular effects were demonstrated by a Ca2+ mobilization assay and by real-time reverse transcriptase (RT)-PCR and Western blot analyses of endothelial nitric oxide synthase (eNOS).
Preparation and Characterization of Multivalent Angpt-1 Mimetic NPs

For the development of Angpt-1 mimetic NPs, we chose polymer NPs on the basis of PEG-PLA block co-polymers and biodegradable PLGA, both known for their excellent biocompatibility (Figure S1) [16-19]. Such a particle system enables the premodification of block co-polymers with a peptide sequence, allowing for precise control of the NP composition and thus offering the possibility of modular NP preparation [20,21]. First, PEG-PLA polymers with either longer (COOH-PEG5k-PLA10k) or shorter (MeO-PEG2k-PLA10k) PEG chains were synthesized via ring-opening polymerization of cyclic lactide (Figure 1A). The combination of polymers of different lengths and terminal functionalization offered the possibility, later, to influence the particle size and to design NPs of varying ligand content per particle. Therefore, in a second step, HHHRHSF containing a leucine residue at the NH2 terminus was covalently coupled to the longer COOH-PEG5k-PLA10k block copolymer via EDC/NHS (1-ethyl-3-(3-dimethylaminopropyl)carbodiimide/N-hydroxysuccinimide) activation. The coupling efficiency was around 95% as determined by an iodine assay and a Pauly assay (Figure 1B-E). The shorter polymer (MeO-PEG2k-PLA10k) was not modified further.
Figure 1. (A) COOH-PEG5k-PLA10k was coupled to the leucine residue of HHHRHSF via EDC/NHS (1-ethyl-3-(3-dimethylaminopropyl)carbodiimide/N-hydroxysuccinimide) chemistry (for more details, please refer to Section 4). (B) Coupling efficiency of the synthesized PLA10k-PEG5k-LHHHRHSF was determined by independently measuring the concentration of both PEG and LHHHRHSF using the iodine assay ((C) corresponding calibration curve) and the Pauly reaction ((D) corresponding calibration curve), respectively. The coupling efficiency between polymer and peptide was about 95%. (E) Principle of the Pauly reaction, which specifically detects histidine. Under alkaline conditions, diazotized sulfanilic acid forms the red-to-orange colored c-azo complex with histidine. Absorbance of the solution was measured at λ = 490 nm. Results are presented as mean ± SD of at least n = 3 measurements. DBU: 1,8-diazabicyclo[5.4.0]undec-7-ene; DCM: methylene chloride; BME: β-mercaptoethanol; DMF: dimethylformamide; RT: room temperature.

Angpt-1 mimetic NPs are intended to specifically bind to Tie2 receptors in a multivalent manner. To guarantee that the Angpt-1 mimetic ligand on the particle surface is accessible for cells, the longer functionalized polymer was combined with the non-functionalized shorter polymer prior to NP preparation. Longer and shorter polymers and PLGA as the NP core stabilizing component were mixed in the desired ratios and NPs were prepared by bulk nanoprecipitation in 0.1× DPBS. Thus, it was possible to produce NPs with a ligand density ranging from 100 to 0% (Figure 2A). With decreasing surface ligand density (100 to 0%), the NP size decreased from about 120 to about 90 nm (Figure 2B). The polydispersity index (PDI) of all samples was below 0.2, indicating an overall monodisperse particle distribution without considerable aggregates. The zeta potential decreased from -3 to -10 mV in the same order (Figure 2B). Figure 2C shows the numbers of ligands per NP assuming a spherical shape for the NPs (calculated according to Abstiens et al. [20]). With decreasing ligand density, the number of ligands per NP decreased from about 2.4 × 10⁴ (100% ligand density) over 8.5 × 10³ (50% ligand density) to about 3.4 × 10² (5% ligand density). The particles were stable over 25 h in DPBS (37 °C) (Figure 2D). Only a slight decrease of the size as well as of the PDI was observed over this time period. To guarantee colloidal stability in cell experiments, we further investigated the particle size distribution in culture medium supplemented with 10% serum over 24 h (Figure 2E). A slight right shift of the main NP peak at around 100 nm was observed in all samples over time. Additionally, the intensity of the smaller peak at around 10 nm, which is associated with the presence of free serum proteins such as albumin, decreased. These observations are indicative of serum protein adsorption to the NP surface [22].

Figure 2. (C) Numbers of ligands per NP were calculated according to Abstiens et al. [20]; with decreasing ligand density, the size as well as the calculated number of ligands per particle decrease. (D) NP stability measurement in DPBS over 25 h. Particles were analyzed regarding size and PDI after 0, 7 and 25 h and showed an overall stable particle size distribution over this time. Results are presented as mean ± SD of at least n = 3 measurements. (E) NP stability measurement in culture medium. Intensity-weighted average size distribution of NPs incubated in culture medium supplemented with 10% serum. Samples were analyzed after 0, 2, 5 and 24 h. Over time, particles underwent a slight right shift.
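To make the scale of multivalency concrete, the following is a rough back-of-the-envelope sketch of the ligands-per-NP estimate. The spherical-shape assumption follows the text; the fixed footprint of about 1.9 nm² per PEG-PLA chain is our own fitted assumption, not the actual procedure of Abstiens et al. [20], so the outputs only approximately track the reported values.

```python
import math

def ligands_per_np(diameter_nm: float, ligand_fraction: float,
                   chain_footprint_nm2: float = 1.9) -> float:
    """Rough estimate of peptide ligands displayed per nanoparticle.

    Assumes a spherical NP whose surface is fully covered by PEG-PLA
    chains, each occupying ~chain_footprint_nm2 of surface area, with
    'ligand_fraction' of those chains carrying the HHHRHSF peptide.
    The footprint value is a fitted assumption, not a measured quantity.
    """
    surface_area = math.pi * diameter_nm ** 2        # sphere surface in nm^2
    total_chains = surface_area / chain_footprint_nm2
    return ligand_fraction * total_chains

# Sizes roughly as reported in the text (~120 nm at 100%, smaller at lower density):
for d, frac in [(120, 1.00), (110, 0.50), (100, 0.05)]:
    print(f"{frac:>5.0%} ligand density, {d} nm: ~{ligands_per_np(d, frac):,.0f} ligands/NP")
```

With these assumptions the 100% case lands near the reported 2.4 × 10⁴ ligands per particle; the low-density cases come out in the same order of magnitude as the reported values, which is all such a simplified geometric model can be expected to deliver.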
Investigation of NP Interaction with Cells of Varying Tie2 Receptor Expression

A prerequisite for the specific interaction of Angpt-1 mimetic NPs with cells is a sufficiently high expression of the Tie2 receptor. For this reason, three different cell types were studied regarding their Tie2 receptor expression. As the availability of SC cells is limited, and most importantly because SC cells usually lose essential signaling in conventional culture systems, HUVECs and EA.hy926 cells were used as surrogates for SC cells [23]. Using primary low-passage HUVECs instead of SC cells is a common procedure in the literature [24]. The EA.hy926 cell line stably expresses the Tie2 receptor over many passages [13,25]. Primary fibroblasts were used as control cells. Figure 3 shows the cellular Tie2 receptor expression level of these cell types. Initially, cells were observed by confocal microscopy using a specific primary anti-human Tie2 antibody followed by a fluorescently labeled secondary antibody (Figure 3A). A high fluorescence intensity of the Tie2 receptor was detected for EA.hy926 cells, a less strong one for HUVECs, and almost no fluorescence for fibroblasts. This first impression was confirmed by flow cytometric analysis (Figure 3B). Trypsinized and fixed cells were incubated with the primary antibody against Tie2 and the appropriate fluorescently labeled secondary antibody. The cell-associated Tie2 receptor mean fluorescence intensity (MFI) was measured for each cell type. For EA.hy926 cells the fluorescence intensity was significantly higher by 11-fold compared to fibroblasts, and for HUVECs by 5-fold, respectively.
Figure 3. (A) Confocal microscopy of the Tie2 receptor expression levels. Upper panels: Representative images of cell-associated Tie2 receptor fluorescence (green). After permeabilization, cells were incubated with a primary anti-human Tie2 antibody followed by an appropriate Alexa Fluor 488-labeled secondary antibody. EA.hy926 cells showed the highest fluorescence intensity, followed by HUVECs and fibroblasts. Lower panels: Staining without primary antibody showed a negligible background fluorescence intensity. Bars indicate 10 µm. (B) Flow cytometry analysis of the Tie2 receptor expression levels. Presented is the fold change of the cell-associated mean fluorescence intensity (MFI) of the Tie2 receptor compared to fibroblasts. After trypsinization, fixation and permeabilization, cells were incubated with a primary anti-human Tie2 antibody followed by an appropriate Alexa Fluor 488-labeled secondary antibody. Tie2 receptor-derived fluorescence was excited at 488 nm and the emission was detected using a 660/20 nm bandpass filter. EA.hy926 cells showed an 11-fold higher and HUVECs a 5-fold higher MFI compared to fibroblasts. Results are presented as mean ± SD of at least n = 3 measurements. * p < 0.05, ** p < 0.01, *** p < 0.001.

To get a first impression of the NP-cell interaction, HUVECs, EA.hy926 cells and fibroblasts were observed by confocal microscopy after incubation with fluorescently labeled Angpt-1 mimetic NPs (Figure 4A). The surface ligand density of the NPs varied again from 100 to 0%. After incubation with NPs of the highest ligand content, almost no particle fluorescence was detected in all three cell types. With decreasing ligand density, more and more fluorescent spots were observed. These observations were consistent over all three cell types but were particularly pronounced for HUVECs and EA.hy926 cells and to a much lesser extent for fibroblasts.
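Both the Tie2 staining and the NP uptake experiments that follow are expressed as fold changes of mean fluorescence intensity. For clarity, here is a minimal sketch of this read-out; the MFI values are invented and chosen only to echo the roughly 11-fold EA.hy926 result.

```python
import statistics

def fold_change_mfi(mfi_treated: list[float], mfi_control: list[float]) -> float:
    """Fold change of cell-associated mean fluorescence intensity (MFI):
    stained (or NP-incubated) cells relative to the control population,
    as used for the Tie2 staining and the NP uptake read-outs."""
    return statistics.mean(mfi_treated) / statistics.mean(mfi_control)

# Invented MFI replicates in arbitrary fluorescence units:
ea_hy926 = [5500, 5200, 5800]   # Tie2-stained EA.hy926 cells (assumed values)
fibro = [480, 510, 530]         # Tie2-stained fibroblasts (assumed values)
print(f"fold change: {fold_change_mfi(ea_hy926, fibro):.1f}")  # ~11-fold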
To quantify the NP-cell interaction and to confirm the first visual impression, we next performed a flow cytometry analysis. Cells were incubated with NPs at a concentration of either 300 or 15 pM for 2 h. Figure 4B,C show the fold change of mean NP fluorescence in comparison to untreated cells at each ligand density. In the case of HUVECs and EA.hy926 cells, fluorescence increased with decreasing ligand density, independent of the particle concentration. In general, the NP-cell interaction decreased in the following order: HUVECs > EA.hy926 > fibroblasts. A closer look at the data of the cells treated with 300 pM NPs revealed that between 25 and 0% ligand density, the fluorescence was significantly (p < 0.0001) higher for both cell types compared to fibroblasts (Figure 4C). Plotting the fluorescence intensity against the ligand density demonstrated that the NP-cell interaction of HUVECs and EA.hy926 cells correlated with the number of ligands in a roughly exponential manner (Figure 4D).

Impact of Angpt-1 Mimetic NPs on VEGF-Induced Ca2+ Influx in Cells

To investigate the cellular effects that were elicited by Angpt-1 mimetic particles, a Ca2+ mobilization assay was performed. From the literature, it is known that Angpt-1 inhibits extracellular Ca2+ influx and that vascular endothelial growth factor (VEGF) acts as its counterpart [26,27]. Intracellular Ca2+ levels were determined after loading cells with fura-2 AM and recording the fura-2 fluorescence emission after excitation at 340 and 380 nm. The higher the fluorescence emission ratio of fura-2 AM in Figure 5, the higher was the intracellular Ca2+ concentration, and vice versa.
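The study reports raw 340/380 nm excitation ratios rather than absolute concentrations. For readers who want to translate such ratios into Ca2+ levels, the standard route is the calibration of Grynkiewicz et al. (1985); the sketch below is a minimal illustration in which Rmin, Rmax and Sf2/Sb2 are invented, instrument-dependent values, not parameters taken from this study.

```python
def fura2_ratio_to_ca(R: float, Rmin: float, Rmax: float,
                      sf2_sb2: float, Kd_nM: float = 224.0) -> float:
    """Convert a fura-2 340/380 nm excitation ratio R into [Ca2+] in nM
    using the Grynkiewicz et al. (1985) calibration:
        [Ca2+] = Kd * (R - Rmin)/(Rmax - R) * (Sf2/Sb2)
    Rmin/Rmax are the ratios at zero and saturating Ca2+; Sf2/Sb2 is the
    380-nm fluorescence of Ca2+-free vs. Ca2+-bound dye. Kd ~224 nM is
    the commonly cited fura-2/Ca2+ dissociation constant."""
    return Kd_nM * (R - Rmin) / (Rmax - R) * sf2_sb2

# Illustrative calibration constants (assumed, instrument-dependent):
Rmin, Rmax, sf2_sb2 = 0.2, 2.0, 5.0
for label, R in [("control", 0.46), ("VEGF", 0.56)]:
    print(f"{label}: R = {R} -> ~{fura2_ratio_to_ca(R, Rmin, Rmax, sf2_sb2):.0f} nM Ca2+")
```

With these assumed constants, the control and VEGF ratios reported below would correspond to intracellular Ca2+ in the low hundreds of nM, i.e., a plausible resting level and a moderate VEGF-induced rise; the absolute numbers shift with the calibration, but the ordering does not.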
Figure 5. Angpt-1 mimetic NPs reduced VEGF-induced Ca2+ influx in endothelial cells. Cells were loaded with fura-2 AM for 2 h. Fura-2 AM allows for precisely measuring the intracellular concentration of Ca2+. After cell loading, HUVECs (A) and EA.hy926 cells (B) were incubated with Angpt-1 mimetic NPs (150 pM) of varying ligand density (100% to 0%) or COMP Angpt-1 at a concentration of 400 ng/mL, both in the presence of VEGF. A concentration of 400 ng/mL for COMP Angpt-1 was chosen because it is in the range of concentrations usually used in the literature [11,27-29]. Presented is the Ca2+-dependent ratio of fluorescence emission of fura-2 AM at 510 nm after excitation at 340 and 380 nm, respectively. Angpt-1 mimetic particles opposed VEGF-induced Ca2+ influx in both HUVECs and EA.hy926 cells. Results are presented as mean ± SD of at least n = 3 measurements. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001; ns: not significant. Y-axes are discontinued between 0.50 and 0.54 (A,B).

Cells were incubated with NPs of varying ligand density (100 to 0%) or with recombinant COMP Angpt-1, each sample in the presence of VEGF. Figure 5A shows the fluorescence emission ratio of HUVECs after 10 min of incubation. As expected, after incubation with VEGF only, the fluorescence emission ratio increased from a control level of 0.46 to a value of 0.56, which can be interpreted as an increase of the cellular Ca2+ level. Angpt-1 mimetic NPs seemed to attenuate this increase because the fluorescence emission ratio was lower than after application of VEGF alone. When NPs with a ligand density of 25% were applied, the effect was statistically significant (p < 0.01). The NPs of all applied ligand densities decreased the fluorescence emission ratio. Unexpectedly, incubation with COMP Angpt-1 at a concentration of 400 ng/mL led to a similar value as after incubation with VEGF. This means that soluble COMP Angpt-1 did not oppose the Ca2+ influx induced by VEGF to the same extent as the Angpt-1 mimetic NPs. EA.hy926 cells showed a similar trend, with the exception that particles without ligand exerted an effect as well.

Impact of Angpt-1 Mimetic NPs on eNOS Expression

To investigate the effect of Angpt-1 mimetic NPs on eNOS expression, a potential downstream target of the Tie2 signaling cascade [30], a real-time RT-PCR as well as a Western blot analysis was performed after cells were incubated for 24 h with COMP Angpt-1 (50 and 200 ng/mL) and Angpt-1 mimetic NPs of varying ligand densities (0-25% ligand density). Figure 6A shows the semi-quantitative analysis of eNOS mRNA. With an increasing concentration of COMP Angpt-1, the expression of eNOS was 1.3-fold higher after 50 ng/mL and 1.4-fold higher after 100 ng/mL relative to control. The same was true for the treatment with Angpt-1 mimetic NPs: with a ligand density of 10% the eNOS expression was 1.3-fold and with 25% 1.6-fold higher compared to control NPs (0% ligand density). However, for both (NPs and COMP Angpt-1), the increase was not significant, which was confirmed on the protein level, where no increase of eNOS was observed after NP or protein treatment (Figure 6B,C).
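The fold changes in Figure 6A are described as semi-quantitative. A common way to obtain such numbers from real-time RT-PCR data is the 2^-ΔΔCt method of Livak and Schmittgen (2001); the sketch below assumes this method and uses invented Ct values, since the study does not state its exact quantification procedure.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """2^-ΔΔCt fold change of a target gene (here: eNOS) relative to a
    reference gene, treated sample vs. control (Livak method)."""
    d_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated
    d_control = ct_target_control - ct_ref_control   # ΔCt, control
    return 2 ** -(d_treated - d_control)             # 2^-ΔΔCt

# Invented Ct values purely for illustration (reference gene assumed stable):
print(f"{fold_change_ddct(22.1, 17.0, 22.8, 17.0):.1f}-fold")  # ~1.6-fold up-regulation
```

A ΔΔCt of -0.7, as in this invented example, corresponds to the roughly 1.6-fold increase reported for the 25% ligand density; whether such a change is meaningful then depends on the replicate scatter, which is why the study reports it as not significant.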
Figure 6. (A) Real-time RT-PCR analysis of eNOS expression. (a) With an increasing concentration of COMP Angpt-1, the expression of eNOS was 1.3-fold higher after 50 ng/mL and 1.4-fold higher after 100 ng/mL relative to control. (b) Considering the Angpt-1 mimetic NP treated samples relative to the particle control (NPs with 0% ligand density), the expression of eNOS was 1.3-fold and 1.6-fold higher with 10 and 25% ligand density, respectively. (B) Representative Western blots for eNOS. (C) Western blot analysis of eNOS. Presented are the relative ratios of eNOS and α-tubulin expression. (a) Treatment with Angpt-1 mimetic NPs as well as COMP Angpt-1 had no impact on total eNOS expression. (b) Considering the expression ratios of Angpt-1 mimetic NP treated samples relative to 0% NP ligand density, no increase of total eNOS can be observed. Results are presented as mean ± SEM. ns: not significant.

Discussion

POAG is one of the leading causes of blindness worldwide. Due to its multifactorial disease character, the precise pathogenesis remains unclear [31]. However, in the last few years, therapies targeting the cells of the conventional outflow pathway to restore pathological changes and thereby decrease IOP have become the focus of attention [3]. With the advent of rho-associated protein kinase (ROCK) inhibitors, a first drug class has been introduced for causative POAG management.
However, market-ready approaches fail to counteract the endothelial cell loss that is associated with SC regression. With SC degeneration, important functions that are essential for IOP control, such as transcellular aqueous humor filtration, are lost [9]. To address these shortcomings, we propose in this study polymeric Angpt-1 mimetic NPs that specifically bind to the Tie2 receptor and restore SC cell function in the long run. The Angpt-1 mimetic peptide sequence HHHRHSF has been applied in other contexts. Van Slyke et al. and David et al. both coupled the peptide sequence HHHRHSF to a branched PEG polymer for the treatment of diabetic ulcers and microvascular endothelial barrier dysfunction, respectively [12–14]. In that way, a maximum of four peptide molecules could be coupled per PEG molecule. In contrast, we coupled the peptide to a linear PEG-PLA block copolymer. Thus, on the one hand, only about one peptide molecule was attached per polymer chain. On the other hand, our strategy allows for the formation of NPs, so that a much higher number of ligands per particle becomes possible. To give an impression: at a ligand density of 100%, one NP carries about 24 × 10³ ligands (50% ligand density: 8.5 × 10³ ligands/NP; 10% ligand density: 1 × 10³ ligands/NP). NPs were fabricated using longer and shorter polymer chains. More specifically, the longer PEG5k-PLA10k chains were functionalized with the peptide, while the shorter ones (MeO-PEG2k-PLA10k) remained unfunctionalized. By decreasing the amount of longer functionalized polymer chains while increasing the amount of shorter polymer chains, it was possible to prepare NPs of varying surface ligand densities and to modulate NP size. Additionally, this approach allows the ligand to protrude from the particle surface and be more accessible for receptor interaction [32–34], because ligands gain flexibility when the distance between neighboring ligands increases (Figure 2A) [21]. Ligand flexibility is of utmost importance for the binding of one particle to multiple receptors at the same time, inducing Tie2 receptor clustering and, finally, phosphorylation. Filling up the PEG shell with shorter-chain polymers is called backfilling and ensures NP shell integrity [21,34]. A PEG shell on the particle corona is essential for stealth properties, i.e., remaining inconspicuous in biological systems and thereby avoiding clearance of the particles from the bloodstream [21,35]. Due to PEG's highly hydrophilic and flexible nature, a dense PEG shell protects NPs against opsonization and serum protein adsorption, while enhancing NP diffusivity and prolonging circulation time [35,36]. The NP core was formed by PLGA. PLGA makes the NPs detectable by allowing the covalent attachment of a fluorescent dye, and it gives the particles sufficient stability and structural integrity for NP preparation by bulk nanoprecipitation [32]. Moreover, the lipophilic PLGA core represents a potential reservoir for drug candidates that could be encapsulated for targeted delivery; such particles would have a dual mode of action. All of the compounds used are known for their excellent biocompatibility. As expected, the NP size decreased with decreasing ligand density. With decreasing ligand density, the ligands gained space and flexibility in the PEG shell, allowing the particles to pack into smaller sizes.
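As a rough plausibility check, the ligands-per-particle values quoted above can be reproduced from an areal ligand density and the hydrodynamic diameter, assuming a spherical particle (the same assumption used later in the Methods). The sketch below uses illustrative numbers, not the study's measured values:

```python
import math

def ligands_per_particle(surface_density_per_um2: float, diameter_nm: float) -> float:
    """Number of ligands on one spherical NP from an areal ligand density.

    surface_density_per_um2 -- ligands per square micrometer (illustrative here;
                               in practice derived from peptide/PEG quantification)
    diameter_nm             -- hydrodynamic diameter from DLS
    """
    radius_um = diameter_nm / 2 / 1000.0           # nm -> um
    surface_area_um2 = 4.0 * math.pi * radius_um ** 2
    return surface_density_per_um2 * surface_area_um2

# Illustrative check against the ~24,000 ligands/NP quoted for 100% ligand
# density and a ~120 nm particle (which implies roughly 5.3e5 ligands/um^2):
print(round(ligands_per_particle(5.3e5, 120.0)))   # ~24,000
```

Note that halving the ligand fraction does not simply halve the ligand count, because the particle diameter shrinks with it, which is consistent with the reported drop from 24 × 10³ to 8.5 × 10³ ligands between 100% and 50% ligand density.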
The resulting particles carried a negative zeta potential that approached zero with increasing ligand content. Because the isoelectric point of the peptide is predicted to be 10.5, the peptide carries a net charge of about +1.0 at a physiological pH of 7.4 [37]. It is therefore expected that the zeta potential becomes less negative with increasing numbers of positively charged ligand molecules on the particle surface. In any case, a negative zeta potential could be beneficial with respect to the trabecular outflow pathway as a potential delivery route for NPs. Since the tissues of the trabecular meshwork outflow pathway are negatively charged due to the presence of hyaluronic acid, small, negatively charged particles will most likely not be hindered in their mobility and can diffuse freely through the trabecular meshwork [38–40]. The particles showed excellent stability in DPBS over 24 h. This guarantees NP integrity for optimal ligand display and a low risk of NP aggregation, which in turn ensures NP mobility. Because serum-containing medium is a biologically demanding environment and an important parameter to know, NP stability was further challenged in culture medium supplemented with 10% serum. Despite the stealth properties of the PEG shell, the NPs seemed to form a thin protein corona, as a slight increase in particle size was observed. With regard to the environment to which the particles are exposed in the eye, corona formation will most likely be much less pronounced, because the protein content of the aqueous humor corresponds to a serum concentration of only about 0.35% [41]. Primary HUVECs and the EA.hy926 cell line were included to evaluate the NP-cell interaction as well as the cellular response to NPs, and were compared with fibroblasts as a control cell type. To estimate the cell response that can be expected for each cell type, the cells were evaluated for their Tie2 receptor expression levels. In accordance with Van Slyke et al., the EA.hy926 cell line expressed much higher levels of Tie2 than primary HUVECs [13]. Mostly low levels of Tie2 receptor were detected in fibroblasts, which is consistent with observations by Teichert et al. [42]. With this knowledge, further experiments evaluating NP-cell interaction were performed. In agreement with previous publications, NP-cell interaction depended largely on the ligand density on the NP surface. In our case, however, the NP-cell interaction improved with decreasing ligand density, which was counterintuitive because one would expect the opposite. To explain this phenomenon, the size of the NPs has to be taken into account. In our study, NP size increased with increasing ligand density: NPs with 0% ligand had a diameter of about 90 nm, while NPs with 100% ligand were about 120 nm in diameter, and with increasing ligand density the NP-cell interaction decreased. In other words, larger particles with higher ligand content were not taken up as efficiently as smaller particles with lower ligand content. It is well known from the literature that small NPs (e.g., 50 nm) are internalized by cells better than large NPs (e.g., 100 nm) [43]. This suggests that, in our case, NP size had a greater influence on NP-cell interaction than ligand density. Importantly, the interaction of NPs with fibroblasts was much lower than with HUVECs and EA.hy926 cells, which could be an advantage for NP application. Consider again the trabecular outflow pathway as the potential delivery route.
During glaucoma development, trabecular meshwork cells undergo a switch from a mesenchymal to a myofibroblast-like phenotype [2,44]. According to our results, these cells may be expected to show reduced NP interaction, which should enhance NP diffusion through the trabecular meshwork and allow NPs to reach their target cells in the SC. In this study, we evaluated the cellular effects of Angpt-1 mimetic NPs by a Ca2+ mobilization assay as well as by real-time RT-PCR and Western blot analysis of eNOS expression. Jho et al. demonstrated that Angpt-1 opposes VEGF-induced Ca2+ influx in endothelial cells in a dose-dependent manner [27]. Besides Angpt-1 mimetic NPs, we evaluated COMP Angpt-1, a more stable variant of recombinant Angpt-1 that forms pentamers rather than tetramers [11]. In contrast to COMP Angpt-1, Angpt-1 mimetic NPs at a concentration of 150 pM reversed the effect of VEGF and reduced the Ca2+ concentration in a ligand density-dependent manner. In HUVECs, NPs with a ligand density of 25% opposed VEGF-induced Ca2+ influx significantly. The fact that cellular effects were more pronounced at lower ligand density may be related to the less intense NP-cell interaction at higher ligand densities described above. This also fits well with the results of Van Slyke et al., who demonstrated that too high concentrations of clustered Angpt-1 mimetic peptide were not capable of activating the Tie2 receptor [13]. A reason for the failure of COMP Angpt-1 to counteract VEGF-induced Ca2+ influx in HUVECs could also be that COMP Angpt-1 does not activate the Tie2 receptor in the same way as the native form does [45]. In any case, since the tone of vascular smooth muscle cells, such as SC cells, is controlled primarily by Ca2+ levels, reduced intracellular Ca2+ levels could increase cell relaxation, which should be beneficial for aqueous humor drainage in glaucoma [5,46]. One of the downstream targets of the Tie2 signaling cascade is eNOS [47]. For eNOS expression analysis, we used NPs with ligand densities ranging from 0 to 25%, as they showed high NP-cell interaction. In our study, a trend of increasing eNOS mRNA in HUVECs was observed after both COMP Angpt-1 and Angpt-1 mimetic NP treatment, with increasing concentration and ligand density, respectively. This could be beneficial for glaucoma therapy, as eNOS-overexpressing mice have been demonstrated to have a decreased IOP and an increased pressure-dependent outflow facility [48]. However, these results were not significant, and the trend could not be confirmed at the protein level. Regarding the effect of angiopoietins on the Tie2-AKT-eNOS pathway, an analysis of the different phosphorylation patterns would be more informative; further studies are therefore needed with regard to the phosphorylation states of Tie2, AKT and eNOS.
Materials and Methods
If not stated otherwise, all chemicals were purchased from Sigma-Aldrich in analytical grade. Ultrapure water for dialysis and NP preparation was obtained from a Milli-Q water purification system (Millipore, Schwalbach, Germany).
Polymer Synthesis and Characterization
MeO-PEG2k-PLA10k and COOH-PEG5k-PLA10k block copolymers were synthesized via ring-opening polymerization as described previously [33]. The molecular weight of the polymers was determined by NMR spectroscopic analysis in deuterated chloroform at 295 K using an Avance 400 spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany; Figure S2).
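For readers unfamiliar with how a block copolymer molecular weight is read off a 1H-NMR spectrum, the standard route is an integral ratio against the known PEG block. The sketch below assumes the conventional repeat-unit assignments (PEG backbone near 3.6 ppm, 4 H per ethylene oxide unit; PLA methine near 5.2 ppm, 1 H per lactate unit) and uses illustrative integrals, not the actual spectra of this study:

```python
def pla_mn_from_nmr(i_pla_ch: float, i_peg_ch2: float, peg_mn: float = 5000.0) -> float:
    """Estimate the PLA block Mn of a PEG-PLA copolymer by end-group analysis.

    i_pla_ch  -- integral of the PLA methine signal (~5.2 ppm, 1 H per lactate unit)
    i_peg_ch2 -- integral of the PEG backbone signal (~3.6 ppm, 4 H per EO unit)
    peg_mn    -- known Mn of the PEG macroinitiator (here 5 kDa)
    """
    MW_EO, MW_LA = 44.05, 72.06                # repeat-unit molar masses (g/mol)
    n_eo = peg_mn / MW_EO                      # EO units per chain (~113 for PEG5k)
    protons_per_chain_peg = 4.0 * n_eo
    # Lactate units per chain, from the per-proton integral ratio:
    n_la = (i_pla_ch / 1.0) / (i_peg_ch2 / protons_per_chain_peg)
    return n_la * MW_LA

# Illustrative integrals chosen so the PLA block comes out near 10 kDa:
print(round(pla_mn_from_nmr(i_pla_ch=139.0, i_peg_ch2=454.0)))  # ~10,000
```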
Peptide Quantification and Coupling Efficiency Determination
The LHHHRHSF coupling efficiency of the synthesized LHHHRHSF-PEG5k-PLA10k was determined via independent measurement of the molar concentrations of both PEG and LHHHRHSF. For LHHHRHSF quantification, a previously described variant of the Pauly reaction was applied [49]; for determination of the PEG content, a colorimetric iodine complexing assay was used. The procedure was as follows: LHHHRHSF-modified polymer (PLA10k-PEG5k-CON-LHHHRHSF) was dissolved in acetonitrile (ACN; 10 mg/mL), and the samples were kept for 5 min on ice. Next, 15 µL of 5% (m/v) aqueous sodium nitrite solution were added. After three minutes of incubation, the samples were taken from the ice and 150 µL of ethanol (70%; v/v) were added to each sample. 180 µL of the mixture were pipetted into a 96-well plate (Greiner Bio One, Frickenhausen, Germany), and the absorbance was measured at λ = 490 nm using a FluoStar Omega fluorescence microplate reader (BMG Labtech, Ortenberg, Germany). Dilutions of LHHHRHSF (0–175 µg/mL) served as calibration. The iodine complexing assay for PEG quantification was performed as previously described [50]; MeO-PEG5k served as calibration. For determination of the coupling efficiency, the molar LHHHRHSF content was finally normalized to the molar PEG content.
PLGA Labeling with Fluorescent Dye
Particles were made detectable by covalently linking a fluorescent dye (CF™ 647) to the core-forming PLGA prior to NP preparation. In brief, CF™ 647 amine (one equivalent) and carboxylic acid-terminated PLGA (Resomer RG 502H; one equivalent) were diluted in dimethylformamide (DMF). Twenty equivalents of (1H-benzotriazol-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate (HBTU) and forty equivalents of diisopropylethylamine (DIPEA) were added to the solution and allowed to react overnight at RT. Labeled PLGA was precipitated using diethyl ether, centrifuged, and redissolved in ACN. This procedure was repeated three times to remove unreacted reactants. Finally, the fluorescently labeled PLGA was dried and stored at −80 °C until use.
NP Preparation
NPs were fabricated by nanoprecipitation using a standard solvent evaporation technique in accordance with previously published protocols [21,33]. In short, appropriate amounts of PEG-PLA polymers and PLGA were combined in a 70/30 (m/m) ratio and diluted in ACN to a final concentration of 10 mg/mL. For particles with different ligand contents (as indicated), functionalized polymer (LHHHRHSF-PEG5k-PLA10k) was mixed with unfunctionalized polymer (MeO-PEG2k-PLA10k) at the desired ratios, while keeping the above-mentioned mass ratio of PEG-PLA to PLGA constant. Finally, the polymer solution was added dropwise into vigorously stirred 0.1× DPBS and kept stirring for 4 h under the fume hood at RT, allowing the organic solvent to evaporate. The obtained NP dispersions were then concentrated by centrifugation at 1400× g for 30 min using Pall Microsep filters (molecular weight cut-off, 30 kDa). The molar concentration of NPs was determined and calculated as previously described, considering the exact gravimetric NP content, determined by lyophilization; the PEG content of the NPs, determined by the iodine complexing assay (see above); the hydrodynamic diameter of the NPs, determined by dynamic light scattering (DLS); and the density of the NPs (1.25 g/cm³ [36]) [20,21]. Finally, the concentration of the NP stock dispersion was adjusted to 3 nM, and the stock was stored in the fridge (2–8 °C) until use.
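The molar NP concentration described above follows from three measured quantities: the gravimetric NP mass concentration, the DLS diameter, and the assumed particle density of 1.25 g/cm³. A minimal sketch of that conversion, with illustrative inputs rather than the study's measured values:

```python
import math

AVOGADRO = 6.022e23

def np_molar_concentration(mass_conc_mg_ml: float, diameter_nm: float,
                           density_g_cm3: float = 1.25) -> float:
    """Molar concentration (mol/L) of spherical NPs from gravimetric content.

    mass_conc_mg_ml -- NP mass concentration (here determined by lyophilization)
    diameter_nm     -- hydrodynamic diameter from DLS
    density_g_cm3   -- assumed NP density (1.25 g/cm^3 in the text)
    """
    d_cm = diameter_nm * 1e-7                                  # nm -> cm
    particle_mass_g = density_g_cm3 * (math.pi / 6.0) * d_cm ** 3
    particles_per_l = (mass_conc_mg_ml * 1e-3) / particle_mass_g * 1000.0
    return particles_per_l / AVOGADRO

# Illustrative: ~1 mg/mL of 100 nm particles corresponds to a few nanomolar,
# consistent with stock dispersions being adjusted to 3 nM.
print(f"{np_molar_concentration(1.0, 100.0) * 1e9:.1f} nM")    # ~2.5 nM
```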
NP Characterization
The hydrodynamic diameter (reported as size in the following) and the zeta potential of the NPs were determined by DLS and by measuring the electrophoretic mobility, respectively, using a Malvern Zetasizer Nano ZS (Malvern, Herrenberg, Germany). For DLS and zeta potential measurements, particle stock dispersions were diluted 1:10 in DPBS, and the general-purpose mode with automatic measurement position was applied. For zeta potential measurements, particles were analyzed using capillary cells (Malvern, Herrenberg, Germany). To determine NP stability, NPs were incubated for 24 h at 37 °C in DPBS as well as in Dulbecco's modified Eagle's medium (DMEM; Pan Biotech GmbH, Aidenbach, Germany) containing 10% (v/v) fetal bovine serum (FBS) and 0.01% (m/v) sodium azide (Merck KGaA, Darmstadt, Germany). At the indicated time points, samples were taken and the size was measured at 37 °C as described. Data were recorded using the Malvern Zetasizer software 7.11 (Malvern Instruments, Worcestershire, UK).
Calculation of the Number of Ligands Per Particle
The absolute number of ligands per particle for each particle formulation (100% to 0% ligand density) was calculated according to Abstiens et al. [20]. A spherical NP shape was assumed, allowing the number of ligands per particle to be calculated from the ligand density in ligands/µm².
Cell Culture
HUVECs were obtained from PromoCell GmbH (Heidelberg, Germany) and were used until passage 6. HUVECs were cultured in EBM™-2 basal medium supplemented with EGM™-2 supplements and 2% (v/v) FBS, all from Lonza Group Ltd. (Basel, Switzerland). Primary cultures of human foreskin fibroblasts were cultured in DMEM supplemented with 4.5 g/L glucose and 10% (v/v) FBS (Thermo Fisher Scientific, Waltham, MA, USA). The vascular endothelial cell line EA.hy926 was purchased from ATCC (Manassas, VA, USA) and cultured in DMEM containing 1.5 g/L sodium bicarbonate and 10% (v/v) FBS. For the Ca2+ mobilization assay, Leibovitz's L-15 medium was purchased from Thermo Fisher Scientific (Gibco, NY, USA). If not otherwise stated, cells were serum starved overnight and incubated with either particles or COMP Angpt-1 (Enzo Life Sciences, Farmingdale, NY, USA) at the indicated concentrations.
Confocal Laser Scanning Microscopy (CLSM) Analysis of Tie2 Receptor Expression
HUVECs, EA.hy926 cells and fibroblasts were seeded into Ibidi 8-well µ-slides (Ibidi GmbH, Planegg, Germany) at a density of 2.5 × 10⁴ cells/well and cultured for 24 h at 37 °C and 5% CO₂. Thereafter, cells were washed with 0.1-fold phosphate buffer (0.1× PBS) and fixed in 4% paraformaldehyde (PFA) for 10 min. After two washing steps with 0.1× PBS, cells were permeabilized with 0.5% Triton X-100 diluted in 0.1× PBS for an additional 10 min. After blocking for 30 min with blocking buffer (2% bovine serum albumin (BSA) in 0.1× PBS supplemented with 0.1% Triton X-100), cells were washed with dilution buffer (0.1% BSA in 0.1× PBS containing 0.1% Triton X-100) and incubated for one hour with 150 µL of a primary mouse anti-human Tie2 monoclonal antibody (10 µg/mL; R&D Systems, Inc., Minneapolis, MN, USA) diluted in dilution buffer. Afterwards, cells were washed with 0.1× PBS containing 0.1% BSA (washing buffer), followed by incubation with a secondary Alexa Fluor 488-labeled goat anti-mouse antibody diluted in dilution buffer (1:1000) for one hour at RT.
Finally, cells were washed twice with washing buffer to remove excess antibody and embedded in Dako Fluorescence Mounting Medium (Agilent Technologies, Santa Clara, CA, USA). Control cells were stained with the secondary antibody without prior incubation with the primary antibody. Cells were analyzed using a Zeiss LSM 510 Meta confocal microscope (Carl Zeiss Microscopy GmbH, Jena, Germany).
Flow Cytometry Analysis of Tie2 Receptor Expression
HUVECs, EA.hy926 cells and fibroblasts were trypsinized and seeded into centrifuge tubes at a density of 1 × 10⁶ cells/tube. The obtained cell suspension was washed with 0.1× PBS and fixed in 4% PFA for 10 min. The subsequent permeabilization and staining procedures were the same as for the CLSM analysis, except that the fixed cells were centrifuged (200× g; 5 min) after each incubation step. After the final staining with the secondary antibody, cells were resuspended in 0.1× PBS and analyzed using a BD FACSCanto II (BD, Heidelberg, Germany). Again, control cells were incubated with the secondary antibody without prior incubation with the primary antibody. Tie2-derived fluorescence was excited at 488 nm, and the emission was recorded using a 530/30 nm bandpass filter. The appropriate cell population was gated and analyzed using Flowing Software 2.5.1 (Turku Centre for Biotechnology, Turku, Finland).
CLSM Analysis of NP-Cell Interaction
To obtain a first impression of the NP-cell interaction, HUVECs, EA.hy926 cells and fibroblasts were seeded into Ibidi 8-well µ-slides at a density of 5.0 × 10⁴ cells/well (HUVECs and EA.hy926 cells) or 2.5 × 10⁴ cells/well (fibroblasts) and cultured for 24 h at 37 °C and 5% CO₂. Beforehand, cells were serum starved for 24 h. Freshly prepared NPs of varying ligand density (100% to 0%) with a CF647-modified PLGA core were diluted to 300 or 15 pM in appropriate pre-warmed serum-free culture medium. Prior to NP treatment, cells were stained with CellMask Green plasma membrane stain (1:400) for 30 min. Thereafter, cells were washed, and 200 µL of the NP dilutions were added to each well. Cells were incubated for two hours at 37 °C and 5% CO₂. Subsequently, cells were washed with DPBS and fixed in 4% PFA in DPBS for 15 min at RT. Cells were washed again, nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI), and cells were finally embedded in Dako Fluorescence Mounting Medium after an additional washing step. Samples were examined using a Zeiss LSM 710.
Flow Cytometry Analysis of NP-Cell Interaction
To determine the NP-cell interaction, HUVECs, EA.hy926 cells and fibroblasts were seeded at densities of 2 × 10⁴, 2 × 10⁵ and 1 × 10⁵ cells/well, respectively, and incubated for 94 h (HUVECs) or 24 h (EA.hy926 cells and fibroblasts) at 37 °C and 5% CO₂. Subsequently, cells were serum starved for 24 h. Freshly prepared CF647-labeled NPs with the indicated ligand densities were diluted to 300 or 15 pM in appropriate culture medium. Cells were washed and incubated with the NP dilutions for two hours at 37 °C. Afterwards, cells were washed with DPBS, trypsinized, and centrifuged for 5 min (200× g; 4 °C). The washing step was repeated twice. The final cell suspensions in DPBS were analyzed using a BD FACSCanto II. NP-associated fluorescence was excited at 633 nm, and the emission was measured using a 660/20 nm bandpass filter.
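Downstream of such cytometry measurements, NP-cell interaction is typically quantified by comparing the median per-cell fluorescence of NP-treated cells with that of untreated controls. The sketch below uses synthetic log-normal intensity data as a stand-in for the study's actual FCS files, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def median_fold_change(treated: np.ndarray, control: np.ndarray) -> float:
    """Fold change of the median per-cell fluorescence over an untreated control."""
    return float(np.median(treated) / np.median(control))

# Synthetic per-cell intensities (arbitrary units); cytometry signals are often
# roughly log-normal, and higher NP uptake shifts the median upwards.
control = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)
treated = rng.lognormal(mean=5.5, sigma=0.5, size=10_000)
print(f"uptake fold change: {median_fold_change(treated, control):.1f}x")
```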
Ca2+ Mobilization Assay
To investigate NP binding to the Tie2 receptor, intracellular Ca2+ levels were measured using fura-2 as a Ca2+ chelator, as previously described with slight modifications [33]. HUVECs and EA.hy926 cells were trypsinized, collected and incubated in Leibovitz's L-15 medium containing 5 µM fura-2 AM, 2.5 mM probenecid and 0.05% Pluronic F-127 for 2 h at RT under gentle shaking (50 rpm). Subsequently, cells were collected by centrifugation and resuspended in Leibovitz's L-15 medium at a concentration of 1.6 × 10⁶ cells/mL. Samples of 20 µL of NPs or COMP Angpt-1 at the indicated concentrations, supplemented with or without VEGF (final concentration 1 µg/mL; BioLegend, San Diego, CA, USA), were pipetted into 96-well plates (half-area; Costar, Corning, Inc., Kennebunk, ME, USA). Finally, 80 µL of cell suspension (final cell concentration 1.3 × 10⁵ cells/well) were added, and the fluorescence signal was measured after 10 min using a FluoStar Omega fluorescence microplate reader with excitation filters at 340/20 nm and 380/20 nm and an emission filter at 510/20 nm. To check whether the cells had been successfully loaded with fura-2 AM, the maximum and minimum ratios of Ca2+-bound to Ca2+-unbound fura-2 were measured by incubating loaded cells with 0.1% (v/v) Triton X-100 alone or combined with 45 mM ethylene glycol-bis(2-aminoethyl ether)-N,N,N′,N′-tetraacetic acid (EGTA) (data not shown).
Protein and RNA Isolation
After HUVECs had been incubated with Angpt-1 mimetic NPs or COMP Angpt-1 for 24 h, as indicated, cells were washed twice with DPBS and lysed with 500 µL TRIzol reagent (Invitrogen, Thermo Fisher Scientific, New York, NY, USA). Total RNA and protein were isolated according to the manufacturer's instructions (TRIzol reagent).
Statistical Analysis
All data are presented as mean ± standard deviation (SD) or standard error of the mean (SEM) of at least 3 measurements. Standard deviations of relative values were calculated according to the rules of error propagation (Figures 1B, 3B and 4B–D). One-way ANOVA followed by Tukey's multiple comparisons test (Figures 3B, 5 and 6A,C) as well as two-way ANOVA followed by Tukey's multiple comparisons test (Figure 4C) were performed using GraphPad Prism 8.3.0 (GraphPad Software Inc., La Jolla, CA, USA) to assess statistical significance. Statistical significances were assigned as indicated.
Conclusions and Outlook
The SC forms the central outflow pathway for aqueous humor and is thus a key player in the pathogenesis of POAG. This study demonstrated that Angpt-1 mimetic NPs are a promising tool to target SC cells and restore SC cell function for glaucoma therapy. A major goal for further studies will be to enhance the specific interaction of the NPs with cells. Because the Angpt-1 mimetic sequence is positively charged, it may interact with the negatively charged particle core. This may cause the Angpt-1 mimetic peptide to fold back towards the particle core and no longer be available on the NP surface for specific NP-Tie2 interaction. In the long run, however, Angpt-1 mimetic NPs potentially offer a dual mode of action. First, when Angpt-1 mimetic NPs bind to the Tie2 receptor, the impaired intracellular signaling cascade is reactivated. Second, by encapsulating a potential anti-glaucoma drug into the NP core, a second signaling cascade can be targeted.
Such a delivery system could be extremely potent for glaucoma therapy, as it offers the possibility to address two different mechanisms involved in glaucoma progression. The development of Angpt-1 mimetic NPs is therefore an initial step towards restoring the impaired SC cells that are essential for long-lasting IOP reduction.
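As a technical footnote to the Statistical Analysis section above: for a ratio of two independent quantities, the "rules of error propagation" reduce to the standard first-order quadrature formula. A minimal sketch with illustrative numbers (not the study's data):

```python
import math

def ratio_with_sd(a: float, sd_a: float, b: float, sd_b: float) -> tuple[float, float]:
    """Mean and SD of q = a/b for independent a and b (first-order propagation).

    SD(q) = |q| * sqrt((sd_a/a)^2 + (sd_b/b)^2)
    """
    q = a / b
    sd_q = abs(q) * math.sqrt((sd_a / a) ** 2 + (sd_b / b) ** 2)
    return q, sd_q

# Illustrative: a treated signal normalized to its control.
mean, sd = ratio_with_sd(0.56, 0.02, 0.46, 0.015)
print(f"relative value: {mean:.2f} +/- {sd:.2f}")   # 1.22 +/- 0.06
```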
Endothelial exosomes contribute to the antitumor response during breast cancer neoadjuvant chemotherapy via microRNA transfer
The interaction between tumor cells and their microenvironment is an essential aspect of tumor development. Therefore, understanding how this microenvironment communicates with tumor cells is crucial for the development of new anti-cancer therapies. MicroRNAs (miRNAs) are small non-coding RNAs that inhibit gene expression. They are secreted into the extracellular medium in vesicles called exosomes, which allow communication between cells via the transfer of their cargo. Consequently, we hypothesized that circulating endothelial miRNAs could be transferred to tumor cells and modify their phenotype. Using an exogenous miRNA, we demonstrated that endothelial cells can transfer miRNA to tumor cells via exosomes. Using miRNA profiling, we identified miR-503, which exhibited downregulated levels in exosomes released from endothelial cells cultured under tumoral conditions. The modulation of miR-503 in breast cancer cells altered their proliferative and invasive capacities. We then identified two targets of miR-503, CCND2 and CCND3. Moreover, we measured increased plasmatic miR-503 in breast cancer patients after neoadjuvant chemotherapy, which could be partly due to increased miRNA secretion by endothelial cells. Taken together, our data are the first to reveal the involvement of the endothelium in the modulation of tumor development via the secretion of circulating miR-503 in response to chemotherapy treatment.
… setting [4]. Consequently, a better understanding of the interactions between cancer cells and the tumor microenvironment is necessary to unravel the complexity of tumor physiology and to limit the development of resistance mechanisms during anti-cancer treatments. MicroRNAs (miRNAs) are small non-coding RNAs that are essential for the regulation of various physiological and pathological processes, including development, differentiation, proliferation and cancer [5,6]. These transcripts bind to the 3' untranslated regions (UTRs) of messenger RNAs to either induce their degradation or inhibit their translation into proteins [7,8]. Cell-free miRNAs were found inside exosomes within biological fluids a few years ago [9,10]. Exosomes are small vesicles, ranging from 30 to 100 nm in size, composed of RNAs, microRNAs, and soluble and membranous proteins [11,12]. There is accumulating evidence that these vesicles play a key role in intercellular communication via the transfer of their molecular contents [13–15]. Furthermore, recent findings demonstrate that circulating miRNAs are promising biomarkers for the diagnosis of several pathologies, demonstrating notable abilities to discriminate between cancer types and stages and to monitor treatment responses [16–18]. Several studies have demonstrated that tumor exosomes are able to modulate the tumor microenvironment by activating angiogenesis, promoting the formation of cancer-associated fibroblasts (CAFs) and modulating the immune response [19–22]. However, little is known regarding the role of exosomes derived from cells of the tumor microenvironment in the regulation of tumor cell metabolism. In the present study, we examined the potential transfer of miRNAs from endothelial to tumor cells via exosomes and their role in tumor behavior.
We identified the endothelial miRNA miR-503, the expression of which is regulated by breast cancer neoadjuvant chemotherapy and which is able to inhibit tumor cell proliferation and invasion.
Results
Endothelial exosomes allow the transfer of miRNAs to tumor cells
To investigate the role of circulating endothelial miRNAs in tumor cell physiology, we isolated and characterized exosomes from human umbilical vein endothelial cells (HUVECs). As demonstrated by dynamic light scattering, exosomes from endothelial cells show the typical size range of these vesicles, with a maximum peak at approximately 95 nm (Fig. 1A). Flow cytometry analysis also confirmed the presence of two well-known exosomal markers, CD63 and CD9, on the exosome surface (Fig. 1B-C). Because the membranous protein composition of exosomes is representative of the originating cells, we compared the presence of endothelial markers on HUVECs and HUVEC exosomes. As expected, both compartments presented a similar composition of markers, including αvβ3 integrins, CD31, CD105, E-selectin, ICAM-1, VCAM-1 and VE-cadherin. However, VEGFR2, which was strongly expressed on HUVECs, was not found on exosomes (Fig. 1D and Fig. S1A-P). In addition, electron microscopy visualization of the exosomes revealed a characteristic cup-shaped morphology, with a diameter of approximately 100 nm. Furthermore, immunogold labeling was positive for the exosome marker CD63 and the endothelial marker CD105 (Fig. 1E). Next, using an exogenous mouse miRNA that is not conserved in humans, mmu-miR-298, we sought to investigate the ability of endothelial cells to transfer miRNAs to human tumor cells. The miRNA was overexpressed in HUVECs, and the transfection efficiency was monitored using qRT-PCR (Fig. S1Q). Transfected HUVECs were then placed in a transwell coculture system with the cells separated by a membrane with 0.2-µm pores to prevent the transfer of miRNAs from other vesicles. This assay was applied to four tumor cell lines (lung carcinoma: A549; colorectal carcinoma: HCT116; breast adenocarcinoma: MDA-MB-231; and glioblastoma: U87) (Fig. 1F). Whereas HCT116 cells presented markedly low levels of mmu-miR-298, the three other tumor cell lines showed significant incorporation of the exogenous miRNA after 48 h. Exosomes were also purified from endothelial cells overexpressing mmu-miR-298, and the presence of the miRNA in exosomes was assessed using qRT-PCR (Fig. S1R). In addition, mmu-miR-298- and control-loaded exosomes were incubated with the various tumor cell lines. As observed in the coculture system, mmu-miR-298 was detected in all cell lines, but HCT116 cells still displayed reduced transfer levels (Fig. 1G). To study the interaction of endothelial exosomes with tumor cells, we labeled exosomes with the fluorescent lipid dye PKH67 and monitored their uptake by the four tumor cell lines. Fluorescence microscopy revealed that all of the cell lines took up the exosomes, but uptake by HCT116 cells was less pronounced (Fig. 1I). This observation was confirmed via flow cytometry (Fig. 1H). Notably, the exosome incorporation profile was similar to the mmu-miR-298 levels transferred via either coculture or endothelial exosomes, suggesting a major contribution of exosomes to the transfer of miRNAs. Moreover, the variation in uptake efficiencies between different tumor cell types strongly suggests the selective incorporation of endothelial exosomes.
To further visualize the mechanism of exosome capture, we monitored exosome uptake over time using electron microscopy. For this experiment, we chose the MDA-MB-231 cell line, as these cells displayed a high level of exosome incorporation. When no exosomes were added to the tumor cells, no specific patterns could be observed inside the endocytic vesicles. However, after 2 hours, entities with the characteristic cup shape of exosomes could already be observed inside the endosomes; these entities accumulated over time, as observed after 8 and 24 h (Fig. 1J). These data demonstrate that endothelial exosomes are taken up by tumor cells via endocytosis, allowing the intercellular transfer of miRNAs.
The tumor environment modifies the export of a subset of endothelial miRNAs
Several studies have shown that miRNAs can be transferred from tumor cells to modulate angiogenesis. Here, we speculated that the exchange could also occur in the opposite direction and hypothesized that tumor cells might elicit an anti-tumor response through the secretion of miRNAs from the endothelium. We thus investigated the miRNA content of endothelial exosomes to identify miRNAs that could modify tumor growth. We first performed miRNA expression profiling using PCR panels (Exiqon) to compare HUVECs and their exosomes. As observed in other studies [9,23], most of the miRNAs were expressed at similar levels in cells and exosomes, although some were detected only in cells (10 miRNAs) or only in exosomes (16 miRNAs) (Fig. 2A-B and Fig. S2A). To identify endothelial miRNAs that could affect tumor development, we then profiled the miRNA content of exosomes from HUVECs cultured in a basal medium or in a tumor-mimicking medium enriched with growth factors. The basal medium was composed of 5% serum, whereas the tumoral medium contained a mix of growth factors optimized for HUVEC culture, supplemented every day with high doses of VEGF (50 ng/mL) and bFGF (20 ng/mL), two well-known activators of tumor angiogenesis [24].
[Figure 2 caption fragment: miR-146a (F) and miR-503 (G) levels, evaluated by qRT-PCR in exosomes of HUVECs cultured under tumoral or basal conditions; miR-146a (H) and miR-503 (I) levels, evaluated by qRT-PCR in HUVECs cultured under tumoral or basal conditions. All data are the mean ± SD (A-E, n = 2; F-I, n = 3). *P < 0.05, **P < 0.01 and ***P < 0.001 vs. the respective control. Additionally, see Fig. S2.]
As measured by protein quantification, the first notable observation was the radical decrease in exosome secretion by HUVECs cultured in the tumor medium compared with those cultured in the basal medium (Fig. 2C). Only miRNAs that were detected in all samples, displayed a variation lower than 2 between replicates and had an individual Ct value lower than 40 were considered for further analysis. These criteria led to the selection of 204 miRNAs (Fig. S2B). When comparing the miRNA ratios derived from exosomes and HUVECs, 108 miRNAs were found to be modulated in HUVEC-derived exosomes by at least twofold (Fig. 2D-E and Table S1). As shown in the volcano plot, the 3 most upregulated miRNAs were miR-502-5p, miR-744 and miR-373*, whereas the most downregulated miRNAs were miR-146a, miR-205 and miR-503 (Fig. 2D-E). For further investigation, we examined the 3 most downregulated miRNAs, which, according to our hypothesis, might exhibit antitumor properties.
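The selection criteria applied to the profiling data (detection in all samples, replicate variation below 2, Ct below 40, at least twofold modulation) amount to a straightforward filter. A pandas sketch is shown below; the column names are assumptions for illustration, since the actual Exiqon export layout is not given, and "variation lower than 2" is interpreted here as less than a twofold difference, i.e., less than 1 Ct, between replicates:

```python
import pandas as pd

def select_modulated_mirnas(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the profiling filters described in the text.

    Expects one row per miRNA with hypothetical columns:
      ct_basal_1, ct_basal_2, ct_tumor_1, ct_tumor_2 -- raw Ct values per replicate
      log2_fc -- normalized log2 fold change (tumoral vs. basal exosomes)
    """
    ct_cols = ["ct_basal_1", "ct_basal_2", "ct_tumor_1", "ct_tumor_2"]
    detected = df[ct_cols].notna().all(axis=1)        # detected in all samples
    below_40 = (df[ct_cols] < 40).all(axis=1)         # individual Ct < 40
    replicates_ok = (
        (df["ct_basal_1"] - df["ct_basal_2"]).abs() < 1.0
    ) & (
        (df["ct_tumor_1"] - df["ct_tumor_2"]).abs() < 1.0
    )                                                 # <2-fold ~ <1 Ct spread
    modulated = df["log2_fc"].abs() >= 1.0            # at least twofold change
    return df[detected & below_40 & replicates_ok & modulated]
```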
The decreases in miR-146a and miR-503 levels between tumoral and basal exosomes were confirmed using TaqMan microRNA assays; however, we were unable to confirm the alteration in miR-205 levels (Fig. 2F-G). Interestingly, we observed that miR-146a levels were also decreased in the exosome-producing endothelial cells, whereas miR-503 levels were not modified under tumor-mimicking conditions (Fig. 2H-I). We decided to further study miR-503 because this miRNA undergoes a selective export mechanism under tumor-mimicking conditions. In addition, miR-503 is a member of the extended miR-16 family, whose members have been widely described in the literature as anti-tumor miRNAs that regulate cell cycle progression and the proliferation status of cells [25-27].
Endothelial miR-503 impairs tumor growth in vitro
To investigate the impact of miR-503 on tumor growth, we performed gain- and loss-of-function studies. MDA-MB-231 cells were transfected with either pre- or anti-miR-503, and the transfection efficiency was monitored using qRT-PCR (Fig. S3A). We used MDA-MB-231 cells constitutively expressing luciferase to quantify proliferation in a coculture system by measuring luminescence. Moreover, tumor cell invasion was assessed by quantifying the sprouting of tumor spheroids in a 3D collagen matrix. Increasing miR-503 levels via the transfection of miRNA mimics (pre-miRs) into MDA-MB-231 cells decreased both cell proliferation and invasion. Conversely, inhibition of miR-503 via the transfection of miR-503 antisense LNAs (anti-miRs) increased both of these processes compared with the control (Fig. 3A-B). Moreover, the effects of modulating miR-503 on tumor proliferation and invasion were confirmed by measuring BrdU incorporation and by performing Boyden chamber assays, respectively (Fig. S3B-C). We next sought to explore the effect of endothelial-derived miR-503 on MDA-MB-231 cells. For this purpose, HUVECs were transfected with pre-miR-503, and the transfection efficiency was monitored using qRT-PCR (Fig. S3D). The effective transfer of the miRNA was then assessed using endothelial exosomes loaded with miR-503. Upon incubation of miR-503-loaded exosomes with MDA-MB-231 cells, we observed increased miRNA levels in the cells using qRT-PCR (Fig. 3C). We next investigated whether miR-503 secreted from endothelial cells could modify the phenotype of MDA-MB-231 cells. HUVECs overexpressing miR-503 were cocultured with MDA-MB-231 cells, which led to reduced tumor cell proliferation and invasion compared with control conditions (Fig. 3D-E). Importantly, the addition of anti-miR-503 to the tumor cells rescued the effects of endothelial miR-503 in both functional assays. This observation suggests that miR-503 is the main effector in endothelial exosomes responsible for the inhibition of tumor cell proliferation and invasion. To confirm that these effects were caused by the transfer of endothelial miR-503 via exosomes, we treated MDA-MB-231 cells with miRNA-loaded HUVEC exosomes. This treatment also resulted in reduced MDA-MB-231 proliferation compared with cells treated with mock HUVEC exosomes (Fig. 3F).
MiR-503 inhibits CCND2 and CCND3 expression in MDA-MB-231 cells
To explore the molecular mechanism responsible for the inhibition of tumor cell proliferation and invasion by miR-503, we searched for target genes of the miRNA involved in these processes. We used a computational approach involving the TargetScan algorithm (http://www.targetscan.org/) to obtain a list of genes predicted to be targets of miR-503.
We then used the STRING algorithm (http://string-db.org/), which builds networks of proteins with functional or physical interactions, to identify associations between the predicted targets of miR-503. We observed a large cluster of proteins that influence cell cycle progression and pathways that regulate the proliferation/apoptosis status of cells (Fig. S4C). Using qRT-PCR, we tested a subset of miR-503 target genes that displayed many links with each other in order to identify the genes responsible for the observed antitumor phenotype. From this subset of genes, we identified two targets of miR-503, CCND2 and CCND3, which were downregulated at the RNA and protein levels upon overexpression of the miRNA (Fig. 4A-C). Importantly, inhibiting endogenous miR-503 using anti-miR transfection led to increased CCND2 and CCND3 protein levels. The interaction sites of miR-503 within the CCND2 and CCND3 3'-UTRs are pictured in Fig. S4D. Notably, expression of the homologue CCND1 was not affected by miR-503, even though this gene is a validated target of miR-503 (Fig. S4A-B). Because CCND2 had never been described as a target of miR-503, we analyzed whether the miRNA directly interacts with its 3'-UTR. Indeed, there are 3 binding sites for miR-503 in the CCND2 3'-UTR: one 8-mer site and two 7-mer-1A sites. We therefore constructed a luciferase reporter vector encoding the 3'-UTR of CCND2 downstream of the luciferase coding sequence. Reduced luciferase activity was observed in MDA-MB-231 cells transfected with this vector and overexpressing miR-503 compared with the control (Fig. 4D). Moreover, when the sequences that bind the seed region of miR-503 were mutated in the CCND2 3'-UTR, the miRNA was no longer able to inhibit translation of the luciferase mRNA, leading to normalized luminescence levels. We next determined whether the inhibition of CCND2 and CCND3 was responsible for the inhibition of tumor cell proliferation and invasion observed upon miR-503 modulation. Indeed, silencing CCND2 and CCND3 in MDA-MB-231 cells with siRNA led to decreased proliferation and invasion (Fig. 4E-F). Taken together, these results demonstrate that endothelial-derived miR-503 inhibits tumor cell proliferation and invasion via the inhibition of CCND2 and CCND3.
Neoadjuvant chemotherapy increases circulating miR-503 levels
To study the potential endothelial release of miR-503 during cancer, we analyzed plasmatic miR-503 levels in breast cancer patients subjected to various therapies. Interestingly, we observed increased miR-503 levels in patients receiving neoadjuvant chemotherapy, whereas no changes were observed in patients treated only with surgery (Fig. 4A-B and D-E). To investigate the influence of neoadjuvant chemotherapy more deeply, miR-503 levels were analyzed in tumor biopsies and in residual tumors of patients subjected to this treatment who presented an incomplete pathological response. Surprisingly, no changes in miR-503 levels were observed between samples taken before treatment and after post-chemotherapy surgery (Fig. 4C and F). This observation favors the view that the increased circulating miR-503 levels after chemotherapy do not come from the tumor. Thus, we decided to investigate whether endothelial cells could be responsible for the increased miR-503 levels upon neoadjuvant chemotherapy.
We treated endothelial cells with the two chemotherapeutic agents used in this regimen (epirubicin and paclitaxel) and analyzed the consequences for miR-503 expression and exosome secretion using qRT-PCR. We first observed a drastic increase in exosome production in endothelial cells treated with the chemotherapeutic agents; this effect was more pronounced with paclitaxel than with epirubicin treatment (Fig. 4G). Moreover, we observed increased miR-503 levels in exosomes following epirubicin and paclitaxel treatments compared with control conditions, whereas decreased levels were observed in the exosome-producing HUVECs (Fig. 4H-I). These data suggest that the elevated circulating miR-503 levels observed in patients after neoadjuvant chemotherapy could originate, in part, from exosome modification and miR-503 secretion by endothelial cells.
Discussion
Over the past few years, exosomes have emerged as important players in intercellular communication. Notably, several studies have demonstrated the role of tumor exosomes in regulating major processes of tumor progression, such as angiogenesis, immune modulation and metastasis dissemination. However, until now, the effects of exosomes secreted from endothelial cells on tumor cells have not been explored. Endothelial exosomes have been shown to mediate several mechanisms, such as the regulation of angiogenesis in other endothelial cells [28,29] and the atheroprotective stimulation of smooth muscle cells [15]. It is well known that cells within the tumor microenvironment can act on tumor cells, and a role for exosomes in this context has emerged in the recent literature. For example, exosomes secreted from mesenchymal stem cells can regulate tumor growth, whereas exosomes from dendritic cells can induce tumor regression [30-32]. In this study, we demonstrate that endothelial cells, which are also important players in the tumor environment, produce exosomes that are able to transfer miRNAs to tumor cells. Our data reveal that this transfer involves endocytosis, because incorporated exosomes are observed in endocytic vesicles inside tumor cells. An important finding of our work is the observation that the miRNA content of endothelial exosomes differs from that of the producing cells. This phenomenon has been described in other studies and demonstrates that, at least in part, miRNAs can be selectively packaged into exosomes [9,23]. In addition, our profiling experiment also revealed that the miRNA content of exosomes is altered by the culture conditions. This experiment led to the identification of miR-503, whose exosomal levels are decreased under tumor-mimicking conditions despite unaffected cellular levels. Notably, miR-424, which belongs to the same cluster as miR-503, also showed reduced levels under the same conditions. MiR-503 is known to inhibit proliferation, migration and tube formation in endothelial cells under high-glucose stress conditions [33]. Moreover, miR-503 has been widely described in the literature as an anti-tumor miRNA that regulates the expression of key cell cycle proteins, such as CDC25A and cyclins D1 and E1, as well as cell proliferation via E2F3 and PI3K regulation and apoptosis via BCL-2 inhibition [6,25-27,33,34]. Notably, miR-503 also prevents tumor growth by interacting with the tumor microenvironment and reducing the secretion of the proangiogenic factors FGF2 and VEGF by tumor cells [35]. We obtained similar results by modulating the expression of this miRNA in MDA-MB-231 cells.
The overexpression of miR-503 led to decreased tumor cell proliferation and invasion in multiple assays, whereas its inhibition produced the opposite results. The role of endothelial miR-503 contained in exosomes was also investigated by coculturing HUVECs overexpressing miR-503 with tumor cells. This manipulation likewise led to reduced proliferative and invasive properties, which could be reversed by adding anti-miR-503 to the MDA-MB-231 cells. When investigating the mRNA targets of miR-503 that could explain these effects, we identified that miR-503 regulates cyclins D2 and D3. Cyclin D3 is already a validated target of miR-503; however, the regulation of cyclin D2 had not yet been demonstrated [26]. Overall, these data show that endothelial cells cultured under tumoral conditions release miR-503, which can exert antitumoral effects, into the medium. Our human studies revealed a role for miR-503 in the response to neoadjuvant therapy. Plasmatic miR-503 levels were elevated in breast cancer patients receiving neoadjuvant chemotherapy. As suggested by our in vitro data, the elevation of miR-503 in the blood after chemotherapy could originate, at least in part, from the increased secretion of miR-503 by endothelial cells following paclitaxel and epirubicin treatment. Because decreased miR-503 expression was observed in the exosome-producing endothelial cells, it is likely that the chemotherapeutic agents promote the transfer of miR-503 to exosomes rather than the induction of its expression. A similar effect has been reported in endothelial cells subjected to ionizing radiation [36]. On the other hand, neither the plasmatic miR-503 levels of patients treated only with surgery nor the tumor miR-503 levels of patients under chemotherapy were affected. It is therefore likely that circulating endothelial miR-503 originates from the entire endothelium: as previously described, anthracyclines and taxanes induce an endothelial toxicity that also affects endothelial cells outside the tumor [37]. MiR-503 might thus act as a stress-induced miRNA that is essential for cell cycle regulation; its expression is increased upon serum starvation of mesenchymal stem cells and is modulated according to cell cycle progression [38,39]. We therefore propose a model in which endothelial cells, in response to unfavorable conditions such as chemotherapy or radiation treatment, release circulating miR-503 into the surrounding environment (Fig. 5). Endothelial exosomes loaded with miR-503 might then inhibit tumor growth by acting directly on tumor cells, thereby contributing to the direct effects of these therapies. To the best of our knowledge, this is the first report of a miRNA transferred from endothelial cells to tumor cells via exosomes. Our data also reveal the involvement of the endothelium in the modulation of tumor development upon chemotherapy treatment. In this context, miR-503 appears to be an antitumor miRNA secreted by endothelial cells that is able to regulate tumor cell proliferation and invasion via the inhibition of CCND2 and CCND3. This process might complement the direct effects of chemotherapy and thereby help the host fight the tumor.
Materials and Methods
Cell culture
The isolation and culture of HUVECs (passages 6-11) were previously described [40]. A549 and U87 cells were cultured in EMEM supplemented with 10% FBS. HCT116 cells were cultured in McCoy's 5A medium supplemented with 10% FBS. MDA-MB-231 cells were cultured in DMEM 4500 supplemented with 10% FBS.
Cell transfections and treatments
Pre-miRs (25 nM; Ambion) and anti-miRs (25 nM; Exiqon) were transfected into HUVECs and MDA-MB-231 cells using DharmaFECT 4 (Dharmacon Research Inc.) according to the manufacturer's instructions. CCND2, CCND3 and control siRNAs (20 nM) were transfected using the calcium phosphate transfection method. Transfected HUVECs and MDA-MB-231 cells were plated in EGM2 or complete DMEM, respectively. After a 24-hour transfection, the cells were washed and kept in their media for an additional 48 or 72 hours. Functional assays were performed as described above.
Exosome purification
Exosomes were isolated and purified from the supernatants of HUVEC cultures using differential centrifugation. HUVECs were cultured in EGM2 medium containing exosome-depleted serum. After 72 h, the medium was collected and centrifuged at 2000× g for 15 min at 4 °C and then at 12,000× g for 45 min at 4 °C. Supernatants were then passed through a 0.22-µm filter (Millipore) and ultracentrifuged at 110,000× g for 90 min at 4 °C. The pellets were washed with phosphate-buffered saline (PBS), subjected to a second ultracentrifugation at 110,000× g for 90 min at 4 °C, and then resuspended in PBS. The protein levels of the exosome preparations were measured using the BCA Protein Assay kit (Pierce) following the manufacturer's instructions.
PKH67 labeling of exosomes
Exosomes were labeled with PKH67 dye (Sigma) according to the manufacturer's instructions and incubated with cells for 24 h. The cells were then washed twice with PBS and mounted on a slide for observation under a fluorescence microscope.
Dynamic light scattering
Exosomes were suspended in PBS at a concentration of 50 µg/mL, and analyses were performed with a Zetasizer Nano ZS (Malvern Instruments, Ltd.). Intensity, volume and distribution data for each sample were collected continuously for 4 min in sets of 3.
Patients
Ethical approval was obtained from the Institutional Review Board (Ethical Committee of the Faculty of Medicine of the University of Liège) in compliance with the Declaration of Helsinki. Patients with newly diagnosed primary breast cancer were prospectively recruited at the CHU of Liège (Liège, Belgium) from July 2011 to July 2013. All patients signed a written informed consent form. This work was a prospective study and did not influence the treatment of the enrolled patients; 29 patients were included. Blood samples were collected into 9-mL EDTA-containing tubes. Plasma was prepared within 1 h by retaining the supernatant after double centrifugation at 4 °C (10 min at 815× g and 10 min at 2500× g) and then stored at −80 °C. Seventeen patients with primary locally advanced breast cancer received neoadjuvant chemotherapy (NAC) with 3 or 4 courses of alkylating agents (cyclophosphamide or fluorouracil) and anthracycline-based chemotherapy (epirubicin), followed by 3 or 12 courses of tubulin-binding agents (docetaxel or paclitaxel). Eight of these patients did not achieve a pathological complete response (ypT0N0, following the AJCC-UICC classification). For these 8 non-responders, 4-µm tumor slices from formalin-fixed paraffin-embedded (FFPE) tissue samples were obtained from diagnostic core-needle biopsies (2 to 3 passes in the primary tumor) and from the corresponding residual tumor. The histological status of all 8 tissue samples was established by a pathologist using hematoxylin and eosin staining of the FFPE sections.
Circulating miR-503 levels were also analyzed in a cohort of twelve primary breast cancer patients who did not receive any chemotherapy; their plasma was collected 8 days before and 3 months after surgery.
Electron microscopy of whole-mounted immunolabeled exosomes
Exosomes were placed on Formvar-carbon-coated nickel grids for 1 h, washed 3 times with PBS and fixed with 2% paraformaldehyde for 10 min. After 3 washes, the grids were incubated for 2 h with the following antibodies: anti-CD63 or anti-CD105. Exosomes were then washed 5 times and incubated with a 10 nm gold-labeled secondary antibody. They were washed 5 more times and post-fixed with 2.5% glutaraldehyde for 10 min. Samples were contrasted using 2.5% uranyl acetate for 10 min, followed by 4 washes and a 10-min incubation in lead citrate. The grids were finally washed 4 times in deionized water and examined with a JEOL JEM-1400 transmission electron microscope at 80 kV.
MicroRNA profiling
Total RNA was extracted with the miRNeasy kit (Qiagen) following the manufacturer's protocol, and cel-miR-39 and cel-miR-238 were spiked into the exosome samples. Reverse transcription was then performed using the miRCURY LNA™ Universal RT microRNA PCR polyadenylation and cDNA synthesis kit (Exiqon, Denmark). Quantitative PCR was performed according to the manufacturer's instructions on microRNA Ready-to-Use PCR panel 1. The controls included reference genes, inter-plate calibrators run in triplicate (Sp3) and negative controls.
Cell coculture and functional assays
For cocultures, endothelial donor cells were seeded onto 6-well plates. After 8 hours, transwells were added, and tumor cells were seeded onto the inner part of the transwell membranes. After 48 h of incubation, the tumor cells were collected and analyzed.
Proliferation assays
For the luminescence proliferation assay, MDA-MB-231 cells were transfected or incubated with transfected HUVECs in a 24-well plate. After 48 h, 150 µg/mL of luciferin was added per well, and the luminescence was quantified using the bioluminescent IVIS imaging system (Xenogen-Caliper). For the BrdU incorporation assay, MDA-MB-231 cells were transfected and seeded into 96-well plates for 40 h. BrdU was added for 8 h, and proliferation was assessed using the Cell Proliferation ELISA BrdU (colorimetric) kit (Roche) following the manufacturer's instructions.
Spheroid invasion assay
Spheroids were prepared as previously described [41]. Briefly, spheroids composed of transfected MDA-MB-231 cells alone or together with transfected HUVECs were allowed to form in 96-U-well suspension plates for 48 h. Spheroids were then collected and seeded for 48 h inside a 3D collagen matrix with culture medium. Pictures were taken to quantify the invasion level by measuring the area of invasion using ImageJ software.
Boyden chamber assay
Transfected MDA-MB-231 cells were seeded into 8-µm 24-well Boyden chambers (Transwell; Costar Corp) and subjected to cell invasion assays. The lower chamber was filled with 600 µL of complete DMEM, and transfected MDA-MB-231 cells were placed in 300 µL of serum-free DMEM in the upper chamber and allowed to migrate for 4 h at 37 °C. After fixation, cells were stained with 4% Giemsa and counted on the lower side of the membrane using ImageJ software.
Chemotherapy treatment of endothelial cells
HUVECs were cultured in EGM2 supplemented with 1 µg/mL epirubicin or 20 ng/mL paclitaxel.
After 24 h, the medium was replaced with exosome-depleted medium, and the cells were cultured for 72 additional hours before being collected for exosome purification.
Preparation of cell extracts and Western blot analysis
Cells were washed with PBS and scraped into lysis buffer [50 mM Tris-HCl (pH 7.5); 1% NP-40; 0.5% sodium deoxycholate; 1 mM EDTA; protease inhibitor cocktail cOmplete Mini, EDTA-free (Roche)] on ice. Insoluble cell debris was removed by centrifugation at 10,000× g for 15 min. Aliquots of the protein-containing supernatant were stored at −20 °C, and protein concentrations were measured using the BCA Protein Assay kit (Pierce) following the manufacturer's instructions. Soluble cell lysates (50 µg) were resolved by SDS-PAGE (12%) and transferred to polyvinylidene fluoride membranes (Millipore). Blots were blocked overnight with 8% milk in Tris-buffered saline with 0.1% Tween 20 and probed for 1 h with the following primary antibodies: anti-CCND1 (Cell Signaling), anti-CCND2 (Cell Signaling), anti-CCND3 (Cell Signaling) and anti-beta-tubulin (ab6046, Abcam). After 3 washes with Tris-buffered saline containing 0.1% Tween 20, antigen-antibody complexes were detected with a peroxidase-conjugated secondary antibody and an enhanced chemiluminescence system (ECL; Pierce Biotechnology). Quantifications were performed using ImageJ software and are presented in bar graphs normalized to the levels of the corresponding loading control (β-tubulin).
RNA extraction, miRNA and mRNA expression analysis using the TaqMan microRNA assay and quantitative real-time PCR
Total RNA was extracted using the miRNeasy kit (Qiagen) following the manufacturer's protocol, and cel-miR-39 and cel-miR-238 were spiked into the exosome and plasma samples. TaqMan assays were used to assess miRNA expression. Briefly, 10 ng of RNA was reverse transcribed into cDNA using the TaqMan microRNA Reverse Transcription kit and the TaqMan microRNA assay stem-loop primers (Applied Biosystems). The resulting cDNAs were used for quantitative real-time PCR with the TaqMan microRNA assay and TaqMan universal PCR master mix reagents (Applied Biosystems). Thermal cycling was performed on an Applied Biosystems 7900 HT detection system (Applied Biosystems). For cells, the relative miRNA levels were normalized to 2 internal controls, RNU-44 and RNU-48. For plasma and exosomes, the relative miRNA levels were normalized to the 2 spiked-in miRNAs, cel-miR-39 and cel-miR-238 (Applied Biosystems). For mRNA expression analysis, RNA was extracted using the miRNeasy kit (Qiagen) according to the manufacturer's protocol. cDNA synthesis was performed with 1 µg of total RNA using the iScript cDNA Synthesis kit (BioRad), according to the manufacturer's instructions. The resulting cDNA transcripts (20 ng) served as templates for quantitative real-time PCR using the SYBR green method (Roche Applied Sciences, Bioline and Thermo Fisher Scientific) on an ABI Prism 7900 HT Sequence Detection System (Applied Biosystems). For all reactions, no-template controls were run, and random RNA preparations were also subjected to sham reverse transcription to verify the absence of genomic DNA amplification.
The relative transcript level of each gene was normalized to the housekeeping genes cyclophilin-A (PPIA) and/or glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Primers were designed using Primer Express software and selected to span exon-exon junctions to avoid the detection of genomic DNA (primer sequences are provided below). Sequences of qRT-PCR primers, siRNAs and antagomir All primers, siRNAs and control siRNAs were synthesized by IDT-DNA. Data analysis All values are expressed as the mean ± SD (in vitro experiments) or the mean ± SEM (patient analyses). Comparisons between various conditions were assessed using two-tailed Student's t tests. Analyses of patients before and after treatments (Fig. 4) were performed using two-tailed paired t tests. P values less than 0.05 were considered statistically significant.
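The normalization and statistics just described lend themselves to a short illustration. The sketch below is minimal and purely illustrative, not the authors' analysis code: it computes spike-in-normalized relative miRNA levels (2^-ΔCt against the mean of the cel-miR-39/cel-miR-238 controls) and runs the two-tailed paired t test used for the before/after patient comparisons. All Ct values and sample sizes are hypothetical placeholders.

```python
# Minimal sketch of the normalization and statistics described above.
# All Ct values below are hypothetical; the spike-in columns stand in for
# cel-miR-39 and cel-miR-238.
import numpy as np
from scipy import stats

def relative_level(ct_target, ct_spike39, ct_spike238):
    """2^-dCt of the target miRNA against the mean of the two spiked-in controls."""
    dct = ct_target - (ct_spike39 + ct_spike238) / 2.0
    return 2.0 ** (-dct)

# Paired comparison: plasma miR-503 before vs. after treatment (one value per patient).
before = np.array([0.8, 1.1, 0.9, 1.3, 1.0, 0.7])   # hypothetical relative levels
after  = np.array([1.6, 1.9, 1.4, 2.2, 1.8, 1.5])

t_stat, p_value = stats.ttest_rel(before, after)      # two-tailed paired t test
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```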
7,284.2
2015-03-10T00:00:00.000
[ "Biology", "Medicine" ]
DNA intercalation optimized by two-step molecular lock mechanism The diverse properties of DNA intercalators, varying in affinity and kinetics over several orders of magnitude, provide a wide range of applications for DNA-ligand assemblies. Unconventional intercalation mechanisms may exhibit high affinity and slow kinetics, properties desired for potential therapeutics. We used single-molecule force spectroscopy to probe the free energy landscape for an unconventional intercalator that binds DNA through a novel two-step mechanism in which the intermediate and final states bind DNA through the same mono-intercalating moiety. During this process, DNA undergoes significant structural rearrangements, first lengthening before relaxing to a shorter DNA-ligand complex in the intermediate state to form a molecular lock. To reach the final bound state, the molecular length must increase again as the ligand threads between disrupted DNA base pairs. This unusual binding mechanism results in an unprecedented optimized combination of high DNA binding affinity and slow kinetics, suggesting a new paradigm for rational design of DNA intercalators. DNA intercalation represents an invasive, yet reversible, mode of DNA-ligand binding. These essential features of DNA intercalation allow a wide range of precisely modulated therapeutic and biotechnological applications. Conventional intercalators, such as acridine 1 and ethidium [2][3][4], bind into the DNA lattice by direct insertion of planar aromatic moieties between the base pairs, for which the primary rate-limiting step is breathing of the DNA double helix. In contrast, unconventional intercalators require further DNA deformation during association in order to accommodate bulky non-intercalating moieties, for example fitting cyclic polypeptide chains in DNA grooves, or breaking base pairs to thread a bulky moiety through DNA, before reaching the final intercalated state [5][6][7]. This strong DNA deformation is the primary rate-limiting step, giving unconventional intercalators much slower binding and dissociation from the final intercalated state, which is a desirable property for many DNA applications, including anti-cancer drugs [8][9][10]. Figure 1A shows the equilibrium dissociation constant and the dissociation rate from the final intercalative state for all the DNA intercalators that have been studied by single-molecule force spectroscopy, a reliable method for quantitatively determining intercalation affinity and kinetics, including the intercalating system we report here. It shows clusters of two different types of ligands. The fast (conventional) intercalators have dissociation time constants ranging from milliseconds to seconds, and the slow (unconventional) intercalators have dissociation time constants ranging from tens of seconds to tens of minutes. This plot illustrates the distinct nature of each type of intercalating system in terms of dissociation rates, governed by two different regimes of DNA structural fluctuations. Based on the available equilibrium and kinetic single-molecule studies of DNA intercalators 5,7,[11][12][13][14], for cyanine dyes (cyan symbols), all but YOYO are conventional intercalators, while polypeptide intercalators (red) and threading intercalators (purple) are unconventional intercalators.
We consider YOYO to be an unconventional intercalator because its relatively long linker must be accommodated before its second moiety is fully intercalated, resulting in overall slower intercalation relative to conventional intercalators 12. In this work we introduce a new mechanism of DNA intercalation, where the intercalating moiety is converted from a fast-assembling conventional intercalative state to a slow-assembling final intercalative state. This intercalative conversion is characterized for the rotationally flexible binuclear ruthenium complex [μ-bipb(phen)4Ru2]4+ (Pi) shown in Fig. 1B 15,16. Considering the structure of Pi, note that it has the same bulky side groups in a similar right-handed Δ chirality as the previously reported threading mono-intercalator P 5 and threading bis-intercalator Pc 11, but it has a different bridging moiety (Fig. 1B; shared bulky side group in black; different bridging moieties in green, blue, and orange for P, Pc and Pi, respectively). While two monomers are bridged by a single semi-rigid bond for P and by a flexible long linker for Pc, the two monomeric units of Ru(phen)2ip2+ in Pi are linked by two single bonds via a benzene ring (orange), providing additional rotational degrees of freedom. Previous bulk measurements reported by Chao et al. showed that Pi elongates DNA 16, but linear dichroism (LD) experiments by Andersson et al. found that the DNA-Pi complex does not preserve the free-DNA orientation, in contrast to conventional intercalators 15. In addition, a ligand/base-pair ratio of 1:10 resulted in complete DNA condensation, limiting the ability of bulk experiments to characterize the Pi-DNA intercalation mechanism 15. Strong DNA condensation by cationic ligands with high affinity to DNA poses a great challenge in bulk experiments 15,17. However, in single-molecule experiments the ends of the DNA molecule are pulled apart, preventing the molecule from dropping out of solution when condensed, and preventing condensation at high pulling forces 18. In addition, in single-molecule experiments a much smaller concentration of ligands is required to observe significant binding, which also greatly diminishes DNA condensation 19. We used dual-beam optical tweezers to conduct single-molecule force spectroscopy experiments using a force clamp, allowing us to fully characterize the affinity, kinetics, and the governing structural dynamics of this unique intercalating system. (Fig. 1A data are from ref. 12, except for the Kd of YOYO, which is from Murade et al. 14; polypeptide data (red symbols) are from Camunas-Soler et al. 13 and Paramanathan et al. 7 for thiocoraline (Thio) and actinomycin D (ActD), respectively; threading intercalator data (purple) are from previous reports by Almaqwashi et al. 5 and Bahira et al. 11.) We found that this ligand possesses the highest DNA binding affinity measured with this method, combined with one of the slowest dissociation rates from the final intercalative state (Fig. 1A). Results Rapid intercalation followed by slow conversion to a fully intercalated state. We examined the kinetics of Pi interacting with single λ-DNA molecules as a function of constant applied forces of 10 to 50 pN and ligand concentrations of 0.15 to 40 nM. Figure 2A shows force-clamp measurements in which DNA-ligand intercalation is monitored over tens of minutes, starting from the free DNA extension until the DNA-ligand complex reaches equilibrium.
The DNA elongation measurements illustrate two distinct phases during association: rapid intercalation that is analogous to conventional intercalation, followed by very slow intercalation that approaches equilibrium with a rate comparable to that observed for other threading intercalators 5,11,23. We then measured Pi dissociation from DNA after rinsing the binding ligands from the surrounding solution (Fig. 2B), and observed that the DNA-ligand complex extension decreases to the DNA-only extension over a timescale longer than the association process. Interestingly, the dissociation measurements fit well to a single rate that is comparable to the dissociation rate estimated for the threading mono-intercalator P 5,23. In order to test our hypothesis of rapid formation of an intermediate state followed by slow formation of a final intercalated state, we probed the occupancy of the intermediate state by stopping the intercalation process at intermediate times and washing off the ligand (Fig. 2C). We observe a growing fraction of fast-dissociating ligand when the ligand flow incubation time becomes shorter than the time needed for Pi to convert from its intermediate intercalated state to its final intercalated state, which occurs on a ~100 s time scale. The fast dissociation time from the intermediate intercalated Pi state is less than ~10 s, which is in the range of dissociation rates for conventional DNA intercalators (Fig. 1A). After an incubation time of tens of minutes, the amplitude of the fast dissociation fraction vanishes, indicating that all ligands are now in their final intercalated state. This is consistent with Pi conversion from a rapidly forming conventional intercalated state to a slowly forming unconventional, final threaded, intercalated state. These results in turn show that the intermediate state is in pre-equilibrium, an approximation employed below in analyzing the kinetics of this complex system 5,11,19. After the conversion is completed (Fig. 2C, dark blue measurement), the slow Pi dissociation reflects the unthreading process, involving the energetically costly melting of several DNA base pairs. Two-step kinetics analysis reveals fundamental DNA intercalation rates and affinities. The traditional single-transition model of a mono-intercalating system is not found to reflect the observed kinetics, as two rates are required to adequately fit the data (see Fig. 2A, dashed-line fit). We therefore used a two-step kinetics analysis 11,20,21 (outlined in Methods) to analyze the fast (kf) and slow (ks) association rates obtained from the double-exponential fits of the DNA-Pi complex time-dependent extension measured at constant force. In this model, an initial fast, bimolecular binding event is followed by a slower, unimolecular binding event, as demonstrated above. Fits to this model reveal the fundamental reaction rates for both steps of this process, k1c, k−1, k2 and k−2, where k1c and k−1 represent association and dissociation to and from the intermediate intercalated state (I‡), while k2 and k−2 represent association and dissociation to and from the final (I) intercalated threaded state (see Methods, Eq. 2). Figure 3A shows fits to the concentration dependence of the measured kf(C) and ks(C) at two constant forces, while dissociation experiments that directly determine k−2 show no concentration dependence, as expected for a unimolecular dissociation rate.
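To make the double-exponential analysis concrete, the following is a minimal sketch, not the authors' analysis code, of fitting a force-clamp extension trace to fast and slow association phases. The synthetic trace, initial guesses, and all parameter values are hypothetical placeholders.

```python
# Minimal sketch: fit a force-clamp extension trace to a double exponential
# to extract the fast (k_f) and slow (k_s) association rates.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, L0, A_f, k_f, A_s, k_s):
    """Extension vs. time: fast and slow phases approaching equilibrium."""
    return L0 + A_f * (1 - np.exp(-k_f * t)) + A_s * (1 - np.exp(-k_s * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 2000, 400)                      # time, s
true = (13.0, 0.6, 0.05, 1.4, 0.004)               # synthetic "measurement", µm and 1/s
ext = double_exp(t, *true) + rng.normal(0, 0.02, t.size)

p0 = (13.0, 0.5, 0.1, 1.0, 0.001)                  # initial guesses
popt, pcov = curve_fit(double_exp, t, ext, p0=p0)
L0, A_f, k_f, A_s, k_s = popt
print(f"k_f = {k_f:.3g} 1/s (fast phase), k_s = {k_s:.3g} 1/s (slow phase)")
```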
The elementary rates k1, k−1, k2 and k−2 are then fit to an exponential force dependence (Methods, Eq. 5), as shown in Fig. 3B, revealing the zero-force rates and their associated DNA extension lengths (Table 1). The elementary rates, in turn, allow determination of the equilibrium constants Kd1, K2 and Kd for the first and the second intercalation steps, as well as for the complete process of DNA-Pi threading. The force-dependent equilibrium constants are then fit to an exponential force dependence as shown in Fig. 3C, yielding the zero-force equilibrium constants and kinetic rates as well as their related DNA deformation lengths. These results, which validate the assumption that the intermediate state achieves rapid equilibrium before binding is complete, are summarized in Table 1. Equilibrium parameters from DNA-Pi complex elongation confirm kinetics results. The force-extension curve obtained for the saturated DNA-Pi complex, L_eq^sat, is fit to the WLC model [20][21][22] (Fig. 4A), as described in Methods. The obtained equilibrium elastic properties of the saturated DNA-Pi complex, including its contour length, persistence length, and elastic modulus, are comparable to values of these parameters previously measured for threading intercalators 23,24. Figure 4A also shows the effects of aggregation on the DNA stretching curves, which can only be observed at high concentrations when holding DNA at very low initial extensions, comparable to bulk experimental conditions, under which aggregation was also observed 15. During the first DNA stretch in the presence of ligand in solution on the time scale of ~100 s, the DNA elongation due to initial fast intercalation is observed, but the final state is not reached even after several consecutive stretch and release cycles. The equilibrium extensions of the DNA-Pi complex can only be obtained after Pi-DNA incubation for more than 10 min at each ligand concentration and applied DNA stretching force. These measurements determine the L_eq(C) values presented in Fig. 4B for the four different force values, which give the DNA-Pi titration curves that can further be fit to the McGhee-von Hippel (M-H) model 5,25-28, as described in Methods, to obtain the equilibrium constant Kd at each applied force, plotted in Fig. 4C. The fit of the Kd(F) dependence obtained in this equilibrium analysis confirms the Kd(F) values determined from the fitted elementary reaction rates from our kinetics measurements. The zero-force equilibrium dissociation constant Kd(0) = 11 ± 2 nM obtained from this equilibrium analysis is consistent with its value determined from kinetics measurements. In addition, we determine the equilibrium DNA elongation upon complete Pi threading to be Δx_eq = 0.27 ± 0.03 nm. Discussion The DNA intercalation affinity found here for the two-step threading of Pi is ~5-fold higher than that of its closely related parent molecule P, which exhibits one-step intercalation in single-molecule studies 5,7,12. The quantified dynamic DNA deformations show that two-step intercalation also exhibits a molecular lock mechanism, in which the equilibrium DNA deformation in the final state is less than the dynamic DNA deformation that is required for the full DNA-ligand assembly process. Thus, while the overall DNA elongation in the forward transition is x_on = x_{+1} + x_{+2} = 0.35 nm, the DNA-Pi complex relaxes to Δx_eq = 0.27 nm in the final equilibrium state. These findings confirm a previous prediction by Bahira et al.
11 that combining two-step intercalation 11,13,29 with a molecular lock mechanism 5,7 would result in higher DNA intercalation affinity than that observed for each of these properties alone. The first intercalative step of the binuclear complex Pi has an equilibrium constant of ~80 nM, which is close to the ~100 nM equilibrium constant reported in bulk for conventional DNA intercalation by an analogous mononuclear version of Pi (which has only one of the bulky threading units, enabling the exposed benzene ring to intercalate conventionally) 30. This shows that the additional requirements of DNA deformation and ligand threading for the binding of Pi strongly enhance its affinity, by an order of magnitude relative to intercalation by the benzene ring alone. (Fig. 3 caption, continued: (B) The elementary rates, obtained from the measured kf, ks and k−2 rates, are fitted to the exponential force dependence described by Eq. 5; the elementary rates extrapolated to zero force are summarized in Table 1. (C) Force-dependent binding constants Kd1, K2 and Kd (as color coded) determined from the elementary rates fitted to an exponential force dependence. The fitted zero-force values of the equilibrium dissociation constants for the whole intercalation process, and for each of its steps, as well as associated equilibrium length changes, are also collected in Table 1. The error bars in B and C correspond to a confidence level of 68% in constant chi-squared boundaries.) The results reveal a model for DNA-Pi assembly, illustrated in Fig. 5, based on the free energy landscape of this unique double-transition mono-intercalating system. In this proposed intercalation mechanism, the rapidly forming intermediate intercalative state represents conventional intercalation by the benzene ring, leading to unwinding of the double helix, which makes the subsequent threading transition an order of magnitude energetically more favorable than the threading of the same dumbbell moiety in the threading intercalator P. Following the threading transition, the DNA-Pi complex reaches an equilibrium state in which DNA intercalation by the benzene ring is optimized. Figure 6 approximately illustrates the Pi intermediate (top row) and final intercalating states (bottom row), where only the bridging benzene ring is the properly intercalating moiety. Note that the rotational flexibility of Pi may facilitate intercalative minor groove binding, the transition from the intermediate state to the final state, and the accommodation of both ends of the dumbbell in the minor and major grooves. This outlined intercalation mechanism indicates that the intercalating moiety is partially inserted between two initial base pairs, then the threading transition occurs between adjacent base pairs, resulting in stacking of the larger aromatic ring area of the ligand with the adjacent bases, thereby leading to the higher-affinity state. The slowness of the second step is associated with threading of the bulky Ru(phen)2 2+ moiety through the DNA base pair, requiring major duplex disruption. However, threading of this pre-intercalated DNA duplex is ~10-fold faster than the threading intercalation of the P ligand (compare k2 for Pi and k1·Kd = k−1 for P), and overall ~5-fold more driven.
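The exponential force dependence used throughout this analysis (Eq. 5 of Methods) is a Bell-type relation, k(F) = k(0)·exp(F·x/kBT), where x is the length change associated with the transition. Below is a minimal illustrative sketch of extrapolating a measured rate to zero force; the rates and forces are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: extrapolate a force-dependent rate to zero force with the
# Bell-type relation k(F) = k(0) * exp(F * x / kBT).
import numpy as np
from scipy.optimize import curve_fit

kBT = 4.06  # thermal energy in pN*nm at 21 C

def bell(F, k0, x):
    """k(F) = k(0) * exp(F*x / kBT); x is the length change to the transition state."""
    return k0 * np.exp(F * x / kBT)

F = np.array([10., 20., 30., 40., 50.])            # applied forces, pN
k = np.array([2e-4, 3.5e-4, 6e-4, 1.1e-3, 2e-3])   # hypothetical measured rates, 1/s

popt, _ = curve_fit(bell, F, k, p0=(1e-4, 0.2))
k0, x = popt
print(f"zero-force rate k(0) = {k0:.2e} 1/s, length change x = {x:.2f} nm")
```

The same fit applies to the equilibrium constants Kd1, K2 and Kd, with x replaced by the corresponding equilibrium length changes.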
These novel findings for Pi not only overcome the limitation of bulk measurements in resolving the binding mode of the DNA-Δ,Δ-Pi interaction, but also present a convincing illustration of the utilization of single-molecule studies to provide important insights for the rational design of DNA-targeting ligands. The specific mechanism dissected here represents the first measurement of a two-step intercalator combined with a molecular lock mechanism. This mechanism results in the highest affinity measured for such an unconventional intercalator, as well as one of the slowest binding mechanisms observed. Using this information, DNA-ligand structural dynamics may be optimized for effective antitumor treatment in the presence of other non-intercalating DNA-targeting agents. Methods Experimental measurements. All experiments were conducted using dual-beam optical tweezers (laser wavelength 830 nm). A single bacteriophage λ-DNA, labeled on opposite strands with biotin, was attached between two streptavidin-coated polystyrene beads (~5.6 μm); one bead is held in the optical trap and the other bead is held by a glass micropipette. A piezoelectric positioner displaces the micropipette (±10 nm) to maintain a fixed stretching force (±1 pN) on the DNA molecule. After the attachment between the two beads, the DNA-only stretching curve is obtained at a pulling rate of ~200 nm/s. Then, the DNA is stretched rapidly (~2 s) to reach the assigned constant force, and the elongation from the DNA-only equilibrium extension due to the threading intercalation by Pi is traced. After the DNA-Pi complex reaches equilibrium elongation, the ligand is rinsed out by flowing ligand-free buffer solution. As the ligand dissociates, the DNA-Pi complex elongation is traced back to the free DNA extension. The force feedback reacts to any sudden force change, as fast as 50 ms, by displacing the micropipette to maintain the assigned force. The experiments were carried out in a flow cell chamber of 100 μl volume with a constant ligand flow rate of ~2 μl/s. Constant-force measurements were obtained on at least three DNA molecules for each averaged data point. All measurements were obtained at 21 °C under buffer conditions of 10 mM Tris, 100 mM NaCl and pH 8. Pi was synthesized and purified as described elsewhere 15. Kinetics rate analysis. The time-dependent DNA elongations are fit to a double-exponential dependence on fast and slow rates. (Table 1 header: Complex; equilibrium parameters from fundamental rates: Kd (nM), Δx_eq (nm), Kd1 (nM), x1 (nm), K2 (-).) For the conditions (k1·C + k−1 ≫ k2 + k−2), in which the non-intercalative state (NI) and the intermediate intercalative state (I‡) rapidly equilibrate before the second transition to the final intercalative state (I), we can fit the measured fast and slow rates in terms of the elementary rates. From the elementary rates we determine the equilibrium constants of the first and second transitions as well as the final equilibrium state, respectively: Kd1 = k−1/k1, K2 = k2/k−2, and the overall Kd (Eqs. 3 and 4). The zero-force rates, zero-force equilibrium constants, and their DNA deformation lengths are obtained from chi-squared-minimized fitting to an exponential dependence on force 3,4 (Eqs. 5 and 6). Here Ki is Kd1, K2 or Kd and X is x1, x2 or Δx_eq, respectively. It is important to note that the force dependence resulting from the structural elongation is a property only of the elementary rates.
It is these elementary rates that determine the free energy landscape, rather than the overall measured fast and slow intercalative rates. The fractional lengthening directly corresponds to the fractional ligand binding, Θ(F, C), for a particular Pi concentration at a fixed force, and is given by the ratio of the lengthening observed due to the binding of the intercalator, ΔL_eq(F, C), to the complex lengthening observed at saturated ligand binding, ΔL_eq^sat(F): Θ(F, C) = ΔL_eq(F, C)/ΔL_eq^sat(F) (8). The fractional binding obeys the McGhee-von Hippel isotherm, Θ/(nC) = (1/Kd)·(1 − Θ)·[(1 − Θ)/(1 − Θ(1 − 1/n))]^(n−1) (9). Eqs. (8) and (9) can be substituted into Eq. (7) to calculate L_eq as a function of concentration at each constant force. Matching these calculated and measured L_eq(F, C) values allows us to determine the equilibrium dissociation constant Kd(F) and the intercalative occluded site size n. The force-dependent Kd(F) values are then fit to the exponential force dependence as given by Eq. 6. (Fig. 6 caption: Left: major groove view; middle: side view; right: minor groove view. First row: intercalation from the minor groove; second row: threading intercalation. Note that the bridging ligand is non-planar due to rotation about the single bonds; the benzene ring (orange) is stacked between A and T on the same strand. The bridging ligand is planar and symmetrically disposed in the minor groove; also shown is the benzene ring, the only properly intercalating moiety, stacked between A bases from opposite strands. The red arrows point to structural differences between the initial intercalation from the minor groove and the final threading intercalation state. The pictures were obtained by several steps of manual docking and subsequent energy minimization in vacuo using the Amber force field in the HyperChem software package (HyperCube Inc.). The charge of the complex was set to zero to mimic the electrostatic screening of the buffer.)
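As an illustration of this equilibrium analysis, the sketch below numerically inverts the McGhee-von Hippel isotherm (Eq. 9) to build a titration curve Θ(C) and then fits Kd and the occluded site size n. All concentrations and the synthetic data are hypothetical placeholders rather than the measured values.

```python
# Minimal sketch of the McGhee-von Hippel analysis: fractional binding from a
# titration curve, then Kd and the occluded site size n by least squares.
import numpy as np
from scipy.optimize import brentq, curve_fit

def mvh_concentration(theta, Kd, n):
    """Invert the M-H isotherm: free ligand concentration giving saturation theta."""
    nu = theta / n                                   # bound ligand per base pair
    return nu * Kd / ((1 - n * nu) * ((1 - n * nu) / (1 - (n - 1) * nu)) ** (n - 1))

def theta_of_C(C, Kd, n):
    """Numerically solve the isotherm for theta at each concentration C."""
    return np.array([brentq(lambda th: mvh_concentration(th, Kd, n) - c, 1e-9, 1 - 1e-9)
                     for c in np.atleast_1d(C)])

rng = np.random.default_rng(0)
C = np.array([0.15, 0.5, 1.5, 5., 15., 40.])         # ligand concentrations, nM
theta_meas = theta_of_C(C, 11., 2.5) + rng.normal(0, 0.01, C.size)  # synthetic data

popt, _ = curve_fit(theta_of_C, C, theta_meas, p0=(10., 2.),
                    bounds=([1., 1.], [100., 10.]))
print(f"Kd = {popt[0]:.1f} nM, occluded site size n = {popt[1]:.2f}")
```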
4,879
2016-12-05T00:00:00.000
[ "Biology", "Chemistry" ]
Stabilization of highly polar BiFeO$_3$-like structure: a new interface design route for enhanced ferroelectricity in artificial perovskite superlattices In ABO3 perovskites, oxygen octahedron rotations are common structural distortions that can promote large ferroelectricity in BiFeO3 with an R3c structure [1], but suppress ferroelectricity in CaTiO3 with Pbnm symmetry [2]. For many CaTiO3-like perovskites, the BiFeO3 structure is a metastable phase. Here, we report the stabilization of the highly polar BiFeO3-like phase of CaTiO3 in a BaTiO3/CaTiO3 superlattice grown on a SrTiO3 substrate. The stabilization is realized by a reconstruction of oxygen octahedron rotations at the interface from the pattern of nonpolar bulk CaTiO3 to a different pattern that is characteristic of a BiFeO3 phase. The reconstruction is interpreted through a combination of amplitude-contrast sub-0.1 nm high-resolution transmission electron microscopy and first-principles theories of the structure, energetics, and polarization of the superlattice and its constituents. We further predict a number of new artificial ferroelectric materials, demonstrating that nonpolar perovskites can be turned into ferroelectrics via this interface mechanism. Therefore, a large number of perovskites with the CaTiO3 structure type, which include many magnetic representatives, are now good candidates as novel highly polar multiferroic materials [3]. INTRODUCTION New mechanisms to generate ferroelectricity (FE) have recently been the subject of active research, due to both fundamental interest and the technological importance of ferroelectrics and related materials [4]. Novel ferroelectrics have potentially higher performance for practical applications, as well as potential compatibility with other functional properties such as magnetism, yielding multiferroics and other multifunctional materials [3,5,6]. Artificially structured perovskite superlattices offer rich opportunities for novel ferroelectricity [7][8][9][10][11]. Nonbulk phases for the constituent layers can be stabilized by the mechanical and electrical boundary conditions characteristic of a superlattice [12,13], potentially turning constituents that are nonpolar in bulk form into ferroelectrics [14,15]. Competing low-energy metastable phases can be readily found in perovskites with low tolerance factors, promoting oxygen octahedron rotation (OOR) instabilities along the Brillouin-zone-boundary R-M line. The ground-state structure in such cases is generally the nonpolar orthorhombic Pbnm structure. As a typical example, the oxygen octahedron in CaTiO3 (CTO) can be described by a rotation around the [110] axis and an in-phase rotation around the [001] axis (a⁻a⁻c⁺ in Glazer notation). Such a pattern of OOR favors antipolar behavior instead of FE [2]. On the other hand, OOR with a different pattern can also promote large FE. As one famous example, in BiFeO3 (BFO) with the R3c structure, the oxygen octahedron can be characterized by a rotation around [110] and an out-of-phase rotation around [001], yielding a fairly large polarization along [111] (a⁻a⁻a⁻ in Glazer notation). Compared to the widespread occurrence of CTO-like materials, BFO-like perovskites are relatively rarely seen. As a result, OOR is generally thought to suppress FE in perovskites. However, for many perovskites, the BFO-like structure serves as a low-energy metastable phase [2].
Therefore, it would be beneficial if an artificial perovskite superlattice could stabilize this metastable phase, for the entire constituent layers or in a region near the interface. To this end, a reliable design mechanism can be derived only from precisely determined atomic positions in experiments followed by theoretical interpretations based on first-principles calculations. EXPERIMENTAL AND FIRST-PRINCIPLES RESULTS Aberration-corrected high-resolution transmission electron microscopy (HRTEM) is a powerful method for accurate visualization of oxygen octahedron distortions [16,17]. Recently, it was shown that amplitude-contrast imaging in HRTEM could be used to discriminate heavy and light element columns based on channeling contrast [18], allowing one to locate the exact interface and to visualize OOR angles in different atomic layers (see Supplementary Materials S1). The image in Fig. 1 was obtained by correcting both spherical and chromatic aberrations to achieve amplitude-contrast imaging conditions (Cs = 3 µm, Cc = 1 µm). In this image, channeling contrast between Ca and Ba columns is clearly observed: atomic columns of CaO and BaO appear as bright and dark dots, respectively; oxygen and Ti columns appear as bright dots. Due to the interdiffusion of Ba and Ca at the interface, the intensity at the A site varies depending on the ratio of Ca and Ba, as discussed in detail in the supplementary material (S2). It is seen that BTO and CTO grow coherently on the STO substrate, showing the same in-plane lattice constant as that of STO, an elongated c-axis in the BTO layer, and a shortened c-axis in the CTO layer (see Supplementary Materials S3). Within the CTO layer of the superlattice (box 1), a strongly corrugated TiO2 plane is observed in which the oxygen atoms displace upward and downward with respect to the central Ti atoms, corresponding to an OOR around [110] by 9°, comparable to that of bulk CTO. For TiO2 planes between two BaO planes (box 3), the alternating displacement of the oxygen atoms, and thus the amplitude of the OOR, is negligible, consistent with the fact that bulk BTO strongly resists OORs. For TiO2 planes between one BaO and one CaO plane (box 2), the OOR around [110] is 3°, smaller than that in the interior of the CTO layers. For comparison, in Fig. 1(b) we present the simulated HRTEM image using the atomic positions of the 4BTO/4CTO superlattice obtained from first-principles calculations. The simulated HRTEM image for the computed structure shows the same pattern of OOR as in the experiment (compare boxes 1, 2, and 3 in Fig. 1(a) and (b)), with amplitudes of 12.5° in the CTO layer and 5.5° at the interface. The quantitative difference in the OOR angles around [110] from the experimental observation can be partly attributed to the fact that the experiments were performed at T = 300 K, while the ground-state structural relaxation by density functional theory was at T = 0 K. In addition, in this image it is possible to discern the small uniform displacement of the oxygens relative to the Ti atoms in the TiO2 plane, which is associated with the spontaneous polarization of the superlattice. While this displacement is present in all the TiO2 layers, it can be more easily identified in those belonging to the interior BTO layers, which do not have the corrugation associated with OOR. We use the atomic-scale information from the first-principles results for a detailed layer-by-layer investigation of the properties of the superlattice.
We focus our discussion on the 6BTO/6CTO superlattice, which allows a clearer distinction between the interface and interior layers. (Table I caption: R_xy (°), ADIS (Å), P_001 (µC/cm²), P_110 (µC/cm²), total polarization P_T (µC/cm²), c/a ratio, and total energies (eV) for strained BTO and CTO on an STO substrate, modeled in 20-atom supercells. Both fixed electric (E) field and fixed displacement (D) field boundary conditions are considered, which are used to describe the electric boundary conditions of a perovskite in its natural bulk form or within an insulating superlattice.) According to the dielectric slab model [14], the structure of the constituent layers of the superlattice should be closely related to those of strained bulk materials under the electrical boundary condition of a fixed displacement (D) field, imposed by the superlattice, as summarized in Table I. Indeed, as shown in Fig. 2, the interior BTO layers have negligible OOR with a polarization of 32 µC/cm² along the [001] direction. This is consistent with the structure and large polarization of strained BTO; the reduction from the strained bulk value of 42 µC/cm² can be attributed to the electrostatic cost of polarizing the nonpolar CTO layer. Both bulk CTO and strained bulk CTO are characterized by strong OORs due to structural instabilities at the zone-boundary M and R points. Therefore, the interior CTO layers are dominated by R_xy and R_z^i, which are the OOR around [110] and the in-phase OOR around [001], respectively, as shown in Fig. 2. In addition, a large antipolar (AFE) mode develops in the CTO layers that can be clearly identified by the zig-zag A-site displacement along the [110] direction. It should be stressed that this antipolar distortion is a structural distortion at the X point favored by the trilinear coupling due to the pattern of OOR in CTO-like materials. The above distortion in the interior CTO layers can be clearly seen in Fig. 2, as well as in the TEM image in Fig. 1(a) (see Supplementary Materials S5). This AFE mode was also recently pointed out to be the key to the suppressed FE in all CTO-like perovskites [2]. Due to the applied tensile epitaxial strain, the interior CTO layers are polar along the [110] direction with a magnitude of 26.4 µC/cm², just like strained CTO [20]. If the interface effect were negligible, the dielectric slab model could be used to predict the polarization, yielding a value of 22.4 µC/cm² along the [001] direction. The first-principles calculation gives P_001 = 29.0 µC/cm². The discrepancy from the dielectric slab model suggests that the interface effect cannot be neglected. Such a large enhancement of the polarization (∼25%) is a strong indication of a highly polar interface reconstruction. Indeed, examination of Fig. 2 reveals that the structure at the interface of the CTO layers differs significantly from that of strained bulk CTO, with the OOR being suppressed at the interface of the superlattice. The AFE-type displacement, which is driven by the trilinear coupling [2] involving the OORs of CTO, is suppressed too. Furthermore, a new structural pattern of OOR emerges at the interface: an OOR around the [110] axis and an out-of-phase OOR around the [001] axis for a TiO6 octahedron sandwiched between two interface CaO layers, with rotation angles comparable to those of the strained bulk in-phase OOR. This new structural pattern is exactly the same as one would observe for oxygen octahedron rotation in BiFeO3 and similar perovskites with R3c symmetry.
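For readers unfamiliar with the dielectric slab model invoked above, the following is a minimal sketch of the underlying estimate, assuming linear dielectric layers that share a common displacement field D under short-circuit boundary conditions; the layer permittivities and polarizations in the example are illustrative placeholders, not the paper's computed values.

```python
# Minimal sketch of a dielectric slab (series-capacitor) estimate: the layers
# share a common displacement field D, and short-circuit conditions impose
# sum(l_i * E_i) = 0 with E_i = (D - P_i)/(eps0 * eps_i), which gives
# D = sum(l_i * P_i / eps_i) / sum(l_i / eps_i).

def slab_model_D(lengths, polarizations, epsilons):
    """Common displacement field of stacked linear-dielectric layers."""
    num = sum(l * P / e for l, P, e in zip(lengths, polarizations, epsilons))
    den = sum(l / e for l, e in zip(lengths, epsilons))
    return num / den

# Equal-thickness BTO/CTO layers; [001] polarizations in uC/cm^2 (strained-bulk
# values from Table I would be used here; these numbers are placeholders).
D = slab_model_D(lengths=[1.0, 1.0], polarizations=[42.0, 0.0], epsilons=[60.0, 30.0])
print(f"slab-model estimate of the out-of-plane polarization: {D:.1f} uC/cm^2")
```

Any excess of the first-principles polarization over this estimate, as in the 29.0 vs. 22.4 µC/cm² comparison above, signals physics beyond the slab model, here the interface reconstruction.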
MICROSCOPIC MECHANISM Here, we propose that this change in structure at the interface can be interpreted as the local stabilization of a BFO-like structure different from that of bulk CTO. As far as the topology of the oxygen octahedron rotation network is concerned, oxygen octahedra in both BFO and CTO rotate around [110]; however, BFO differentiates itself from CTO by its out-of-phase OOR around [001], instead of the in-phase counterpart in CTO. The out-of-phase and in-phase OORs around [001] originate from symmetry-nonequivalent structural instabilities at the R and M points, respectively. This stabilization of a BFO-like structure in CTO layers near the interface is derived from the metastable polar e-R3c phase and is compatible with a much larger polarization than that in bulk CTO, as shown in Table I. It has been shown that this phase cannot be stabilized relative to the Pbnm phase by epitaxial strain alone [21]. However, in the superlattice, the suppression of the tilt angles by proximity to BTO, assisted by the electrical and mechanical boundary conditions that favor a phase with a component of polarization along [001], is sufficient to stabilize the structure [22]. To explore the stabilization of this phase more quantitatively, we constructed first-principles-based models for the strained Pbnm phase (designated E_"CTO"(R_z^i, R_xy, AFE_xy, FE_xy)) and for the metastable e-R3c phase (E_"BFO"(R_z^o, R_xy, FE_xy, FE_z)). Facilitated by space-group symmetry analysis, the models of both E_"CTO" and E_"BFO" are built through polynomial expansions of the total energy from first-principles calculations with respect to the high-symmetry reference structure (P4/mmm phase) in terms of the amplitudes of the relevant modes. In the above, R_z^i, R_z^o, R_xy, AFE_xy, FE_xy and FE_z represent the mode amplitudes of the in-phase OOR around [001], the out-of-phase OOR around [001], the OOR around [110], the in-plane antipolar mode, and the in-plane and out-of-plane FE modes, respectively. The resulting energy surfaces for E_"CTO" and E_"BFO" (Eqs. 1 and 2; see Supplementary Materials S7 for the fitted coefficients) are shown in Fig. 3. The total FE mode amplitudes are also presented by the color spectrum in the base plane in Fig. 3. It can be seen that when the angles are fixed to the values of the bulk-like CTO regions in the superlattice, as shown in Fig. 2 (R_z^i = 8.3° and R_xy = 12.6°), the CTO-like phase is strongly favored in energy. In the CTO-like phase, as shown in Fig. 2 and Table I, the antipolar distortion is favored over the FE distortion due to the large trilinear coupling term ∼R_z^i·R_xy·AFE_xy in Eq. 1. Notably, when the amplitudes of the OORs are reduced, the BFO-like phase becomes energetically more stable than the CTO-like phase, as shown in Fig. 3. This indicates that the BFO-like phase can be stabilized over the CTO-like phase when the OOR is reduced. When this transition takes place, the OOR around [001] changes abruptly from in-phase rotation to out-of-phase rotation, signifying a drastic change in the topology of the oxygen octahedron network, as guided by the yellow plane at ∆E = 0 in Fig. 3. In addition to the pattern change of the OOR, the BFO-like phase is generally found to have a much larger polarization than the CTO-like phase, as shown by the color spectrum in Fig. 3.
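To illustrate the competition just described, the toy sketch below minimizes two Landau-type expansions of the E_"CTO" and E_"BFO" form over their polar/antipolar amplitudes at fixed rotation amplitude. All coefficients are hypothetical stand-ins for the fitted first-principles values (given in Supplementary Materials S7), chosen only so that the trilinear-coupled branch wins at large tilts while the four-linear-coupled branch wins once the tilts are suppressed.

```python
# Toy sketch of the E_"CTO" vs. E_"BFO" competition; all coefficients are
# hypothetical placeholders, and amplitudes are in arbitrary units.
import numpy as np

def E_cto(Rz, Rxy, AFE, FE):
    # in-phase tilt branch: trilinear coupling ~ Rz*Rxy*AFE favors the antipolar mode
    return (-1.00*Rz**2 + 0.3*Rz**4 - 1.0*Rxy**2 + 0.3*Rxy**4
            + 0.30*AFE**2 + 0.25*AFE**4 + 0.10*FE**2 + 0.25*FE**4
            - 0.80*Rz*Rxy*AFE)

def E_bfo(Rz, Rxy, FEz, FExy):
    # out-of-phase tilt branch: soft polar modes plus the four-linear coupling
    # ~ Rz*Rxy*FEz*FExy that promotes polarization along both directions
    return (-0.85*Rz**2 + 0.3*Rz**4 - 1.0*Rxy**2 + 0.3*Rxy**4
            - 0.30*FEz**2 + 0.25*FEz**4 - 0.30*FExy**2 + 0.25*FExy**4
            - 0.15*Rz*Rxy*FEz*FExy)

def min_energy(E, R):
    """Minimize one branch over its two polar/antipolar amplitudes at fixed tilts."""
    a = np.linspace(-3.0, 3.0, 121)
    return min(E(R, R, u, v) for u in a for v in a)

for R in (1.3, 0.6):  # bulk-like vs. interface-suppressed rotation amplitude
    print(f"R = {R}: E_CTO = {min_energy(E_cto, R):+.3f},"
          f" E_BFO = {min_energy(E_bfo, R):+.3f}")
```

With these placeholder coefficients the printout mirrors the behavior of Fig. 3: the CTO-like branch is lower in energy at the bulk-like amplitude, while the BFO-like branch becomes favored at the reduced, interface-like amplitude.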
The much stronger FE polarization is expected, originating from the e-R3c phase; it can also be easily understood from the large four-linear coupling term ∼R_z^o·R_xy·FE_z·FE_xy, which promotes FE in both the in-plane and out-of-plane directions. This mechanism leads to the BFO-like phase that exists at the interface of the BTO/CTO superlattice. Assuming the octahedra to be fairly rigid, the reduction of the OOR is imposed by the adjoining BTO layer, which is strongly resistant to OORs. A direct consequence of the stabilization of the BFO-like structure at the interface is that the polarization of the superlattice is greatly enhanced. For a particular choice of angles with R_z^o = 5.7° and R_xy = 6.6°, similar to those at the interface of the BTO/CTO superlattice, the computed polarization of the BFO-like phase is over 54.0 µC/cm², which is comparable to the polarization in bulk e-R3c CTO, as shown in Table I. In the superlattice, the BFO-like phase is further favored by both the electric and mechanical boundary conditions imposed by the polarization of the BTO layer, according to Eq. 2. Under the continuous displacement field along the [001] direction, the electric boundary condition tends to polarize the CTO components with a larger FE_z. Under the tensile strain, the mechanical boundary condition effectively enhances FE_xy. The larger FE_z and FE_xy tend to further lower the energy through ∼R_z^o·R_xy·FE_z·FE_xy and stabilize the BFO-like phase. INVERSE DESIGN OF NEW FERROELECTRIC MATERIALS It has long been recognized that oxygen octahedron rotation can play different roles in perovskites, promoting FE in BFO-like materials [1,23] but suppressing FE in CTO-like materials [24]. However, the results presented here suggest that a transition between these two phases can be achieved through interface engineering in a superlattice. In addition to improving the fundamental understanding of these transitions, these results suggest a new pathway to induce FE in functional oxide materials. The enhanced polarization observed in the BTO/CTO superlattice studied here demonstrates this mechanism. To explore the potential of this approach, we predict a few more superlattices A'BO3/A"BO3, as listed in Table II. (Table II caption: Predicted superlattices with enhanced polarizations. P^sbulk_A'BO3 (µC/cm²) and P^sbulk_A"BO3 (µC/cm²) denote the computed polarizations for strained A'BO3 and A"BO3, respectively. P_M (µC/cm²) and P_Cal. (µC/cm²) are the expected polarizations from the dielectric slab model [14] and the computed polarizations from first principles in nA'BO3/nA"BO3, respectively. Polarization enhancement (Enh.) is calculated as (P_Cal. − P_M)/P_M. Sub. denotes the proposed substrate for the epitaxial growth of the superlattice.) As a result, the overall polarizations of the superlattices are enhanced compared to the predictions of the dielectric slab model, which is equivalent to applying only the charge continuity principle and completely neglecting possible interface reconstruction. This approach of creating new FE materials by interfacial control can also be used to create new materials even where the building blocks come only from nonpolar perovskites. In Table III, we list a few predicted 1A'BO3/1A"BO3 superlattices within this category. These interface materials also provide us a good opportunity to perform rigorous mode decompositions based on space-group theory, followed by a careful comparison between the interface materials and the parent bulk compounds.
The resulting mode decompositions and the local properties are also shown in Table III. The A'BO3 is again a "CTO-like" perovskite with strong oxygen octahedron rotations. This property is clearly represented by the large mode amplitudes of Q_Rxy and Q_Rz^i, which correspond to the OOR around [110] and the in-phase OOR around [001], as shown in Table III. Under such a pattern of OORs, the antipolar mode Q_AFExy is favored, and FE is strongly suppressed, resulting in zero polarization along all directions. On the other hand, A"BO3 is a strong "cubic" perovskite [28][29][30] that does not display structural distortions associated with either OORs or polarization in its ground state, as shown in Table III. Strikingly, when A'BO3 and A"BO3 form a 1A'BO3/1A"BO3 superlattice, the resulting structural distortions are significantly different from those of their parent bulks. The differences come not only from the amplitudes of the modes but also from the symmetries associated with these modes. In Table III, the OORs around [110], Q_Rxy, are preserved in all these superlattices, but with largely reduced mode amplitudes compared with those in bulk A'BO3. In contrast, the in-phase OOR around [001], Q_Rz^i, completely disappears and is replaced by a large mode amplitude Q_Rz^o associated with an out-of-phase OOR around the same axis in all the predicted new materials. As we have seen repeatedly in the previous discussions, such a new pattern of OOR signifies the stabilization of a "BFO-like" structure in all these artificial materials. Accordingly, large polarizations develop along both the [001] and [110] directions, with the generated total polarization vector roughly along the [111] direction due to the broken symmetry in the e-R3c phase. It can be noted that the polarization of BiFeO3 is exactly along the [111] direction in the R3c symmetry. At the same time, the antipolar mode Q_AFExy is completely eliminated. Here, we want to stress that none of the component perovskites in the predicted superlattices is polar, either in its natural bulk or in its strained bulk form! OUTLOOK Currently, there are two widely adopted interface approaches to induce FE in oxide superlattices, namely the tricolor [10] and hybrid improper [31] methods. An artificially induced broken inversion symmetry lies at the heart of both of the above methods. In the former, the broken inversion symmetry along the out-of-plane direction is introduced by the number of species in the superlattice, while in the latter, the broken inversion symmetry along the in-plane direction is facilitated by the differences in the antipolar modes of the two perovskite materials across the interface. However, it should be noted that the interface approach discussed here is a new route that is conceptually different from the above. Instead of introducing artificial inversion symmetry breaking, the ferroelectric polarization is stabilized by favoring a "BFO-like" structure, which is a metastable phase for many perovskite materials. Due to the nature of the energy term that stabilizes the "BFO-like" structure (∼R_z^o·R_xy·FE_z·FE_xy), it is expected that switching the FE does not necessarily require switching the directions of the oxygen octahedron rotations, which usually requires much larger energy, as is implied in the hybrid improper mechanism. Indeed, FE polarization switching has already been successfully demonstrated in 2BTO/2CTO by Lee's group [32].
Based on nudged elastic band (transition state) theory [33,34] and a single-domain assumption, the energy barrier for switching the FE in 2BTO/2CTO (154 meV) is found to be close to that of the predicted material 1CdSnO3/1BaSnO3 (119 meV), both of which are modeled in 40-atom supercells. In conclusion, by combining HRTEM experiments and first-principles approaches, we introduced a comprehensive interface design method to stabilize a highly polar "BFO-like" metastable phase in perovskite materials. Both the electric and mechanical boundary conditions are taken into account as well. This scheme introduces a conceptually novel way to design artificial FE materials. By predicting some new materials, we demonstrate this approach of exploring novel functional materials. For example, if FE could be recovered in orthorhombic RFeO3 (R = Y, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) [35][36][37] by this approach, the synthesis of a new family of room-temperature multiferroic materials could be achieved. Furthermore, the result of our current work indicates that, through an interface design mechanism, short-period superlattices can have stronger FE than longer ones. This is promising for modern device applications based on ultrathin films.
4,995.2
2015-07-02T00:00:00.000
[ "Physics", "Materials Science" ]
Hybrid Metaheuristics Web Service Composition Model for QoS Aware Services Recent advancements in cloud computing (CC) technologies mean that several distinct web services are presently developed and hosted at the cloud data centre. Currently, web service composition gains maximum attention among researchers due to its significance in real-time applications. Quality of Service (QoS) aware service composition concerns the selection of candidate services that maximize the overall QoS. But existing models have failed to handle the uncertainties of QoS: the resulting QoS of a composite service observed by clients becomes unstable and subject to the risk of the composition failing for end-users. On the other hand, trip planning is an essential technique in supporting digital map services. It aims to determine a set of location-based services (LBS) which cover all client-intended activities specified in the query. But the available web service composition solutions do not consider the complicated spatio-temporal features. For resolving this issue, this study develops a new hybridization of the firefly optimization algorithm with a fuzzy logic based web service composition model (F3L-WSCM) in a cloud environment for location awareness. The presented F3L-WSCM model involves a discovery module which enables the client to provide a query related to trip planning, such as flight booking, hotels, car rentals, etc. At the next stage, the firefly algorithm is applied to generate composition plans and to minimize the number of composition plans. Following that, fuzzy subtractive clustering (FSC) selects the best composition plan from the available composite plans. Besides, the presented F3L-WSCM model involves four input QoS parameters, namely service cost, service availability, service response time, and user rating. An extensive experimental analysis takes place on the CloudSim tool and exhibits the superior performance of the presented F3L-WSCM model in terms of accuracy, execution time, and efficiency. Introduction With the developments of Cloud Computing (CC) and Software as a Service (SaaS), an increasing number of applications and processing resources have been encapsulated as Web Services (WSs) and offered on the web [1]. A WS is a loosely coupled, flexible, cross-platform software process that can address various needs for establishing versatile, agile, and cross-enterprise applications [2]. Service composition acts as a solution for the unified combination of business applications to generate novel value-added services from existing ones. As several WSs are established online with similar operations but distinct quality features, it becomes critical to select candidate services so that the subsequent composite service (CS) attains optimum efficiency. As WSs are increasingly made available and utilized by different applications over the web, Quality of Service (QoS) aware service composition is in great demand, since a WS is understood and selected based on both its QoS and its functional abilities. In recent years, WS composition has gained a lot of interest in the research field. There are 3 stages for QoS-aware service composition. In the initial design-time stage, the sub-processes needed by the CS, together with their control flow, data flow, and interactions, are identified. A recognized language, WS-BPEL, is generally utilized for modeling abstract compositions.
Next, at the pre-processing stage, concrete WSs are established to match the abstract services based on function, using a semantic/syntactic technique [3]. Consequently, a list of functionally equivalent WSs (i.e., candidates) with distinct QoS is attained for each task. During the final run-time stage, the selection of WSs depends upon their determined QoS. The whole QoS is defined by the QoS of the structural element services, which should be optimized to fulfill the user's end-to-end QoS limitations. The relevant Combinatorial Problem (CP) is recognized as NP-hard [4]. Though the challenge of QoS-aware WS composition is often tackled by present investigations [5], a major problem, namely the uncertainty of QoS, is yet unsolved and unconsidered. In real-time application areas, WSs are regularly subject to fluctuation and variation because of several unexpected features like congestion and network connectivity. The subsequent QoS of a composite WS monitored by clients is not stable and is subject to the risk of failing the QoS needs of the end-user. On the other hand, personal trip planning has been established as a commonly utilized urban computing service and is assisted by digital map service suppliers like Microsoft MapPoint and Google Maps. In previous years, trip planning has been a hot research field, which aims to search for a suitable trip for the client using querying search methods, indexing, and effective data modeling with several beneficial methods. Several methods were presented to process user-defined trip planning queries (TPQs) in an effective way. While computing TPQs, it is essential to consider each activity's limitations. For instance, when the client is planning for lunchtime, the suggested trip is expected to guide the client to a restaurant [6,7]. These kinds of activity-based trip planning queries are extensively utilized in spatial crowdsourcing, personal trip suggestion, etc. For TPQs, extensive works were undertaken to investigate the selection of a qualified location-based service. But in some instances, a client might have several intended activities, and possibly no individual location-based service closer to the query location is capable of supporting all of them [8]. With this motivation, this study focuses on the design of an effective WS composition model for location awareness. This study introduces a novel hybridization of the firefly optimization algorithm with a fuzzy logic-based web service composition model (F3L-WSCM) in a CC environment for location identification. The presented F3L-WSCM model comprises a discovery module which enables the client to provide a query related to trip planning, such as flight booking, hotels, car rentals, etc. The presented model involves the firefly (FF) algorithm, which is inspired by the flashing patterns of fireflies, for WS composition plan generation. Besides, the fuzzy subtractive clustering (FSC) technique is employed for the selection of optimal composition plans from the existing composite plans. A series of simulations were performed on a benchmark dataset to demonstrate the promising results of the F3L-WSCM model over the existing methods. The remaining sections of the paper are organized as follows. Section 2 reviews the recent WS composition planning techniques. Section 3 then describes the F3L-WSCM model, and Section 4 validates the experimental results. Finally, Section 5 concludes the study. Related Works Zhu et al. [9] established a technique that integrates FL with the Graph Plan technique.
Fuzzy rules are utilized for evaluating and ranking the services based on user preferences; then, the result with the optimum QoS value is chosen and utilized in the Graph Plan construction. Alhadithy et al. [10] presented an approach to compose WSs by fuzzy rules, in which the selection of a WS from the cloud depends upon the QoS that can satisfy the client's needs and limitations. Furthermore, the developed method is provided with a component to monitor the execution of the composed services; in the event that any of the composed services becomes inaccessible, the agent interchanges the inaccessible service with a new service which matches the client's need, and the produced fuzzy rules make a novel composition strategy. Rhimi et al. [11] recommend a solution that represents clients' uncertain preferences with fuzzy sets. Lastly, they insert conventional features for ensuring global optimization with an effective composition process. Ma et al. [12] proposed a new semantic WS composition technique with fuzzy colored Petri nets (FCPN). The FCPN method and an algebraic definition of the basic service composition structure are provided. Sangaiah et al. [13] presented an effective approach to resolve the challenge of WS composition utilizing biogeography-based optimization (BBO). It is a simple technique with few adjustable parameters. The developed technique provides significant results for this problem. Da Silva et al. [14] introduced a technique that integrates two concepts: creating novel compositions depending upon data kept in a graph database, and later enhancing their quality by utilizing genetic programming. Experiments have been conducted comparing the efficiency of the newly developed method against the existing work. Outcomes demonstrate that the novel method performs quicker compared to the formerly projected work, although it doesn't always attain the same result quality as the compositions that work generates. Xu et al. [15] investigate the challenges of process-aware location-based service composition, which aims to return a rational trip formed by a group of location-based services that are spatially dispersed, while guaranteeing each intended activity and their temporal workflow limitations. Mainly, it proposes a set of spatial keyword search based techniques to accelerate the query process. Wang et al. [16] proposed a novel service composition system depending upon Deep Reinforcement Learning (DRL) for adaptive and large-scale service composition. The projected method is preferable for partially observable service platforms, which makes it work better for real-time scenarios. A recurrent neural network (RNN) is adopted to develop the RL method so that it can predict the decision criterion and improve the capability to generalize and express. In Mallayya et al. [17], a user preference based WS ranking (UPWSR) technique is developed for ranking the WSs depending upon user preferences and the QoS factors of the WS. If the user request cannot be satisfied by an individual atomic service, various present services must be delivered and composed as a composition. The presented architecture enables the client to specify global and local limitations for complex WSs, which enhances flexibility. The UPWSR method recognizes the optimum-fit services for every task in the client request and, by selecting the number of candidate services for every task, decreases the time to create composition strategies. Cai et al.
[18] proposed a cloud service composition technique depending upon multi-granularity clustering, organizing services from the viewpoint of granularity to meet user needs in service composition. The disordered services are organized by multi-granularity clustering, which involves 3 stages: basic service clustering based on multi-granularity service clustering, correlation mining, and message semantic similarity computing. The study illustrates that, by using the presented technique, diverse and personalized user requirements are fulfilled and the performance of service composition is greatly improved. Fig. 1 showcases the overall working process of the proposed F3L-WSCM model. The presented F3L-WSCM model develops a novel WS composition technique for trip planning. The presented model is combined with a GIS application to produce an interactive interface for tourism. The presented model involves three major modules, namely a discovery module, a selection module, and an execution module. Firstly, the discovery module enables the client to provide a query related to trip planning, such as flight booking, hotels, car rentals, etc. The discovery module holds details related to flights, hotels, taxi services, etc. Besides, the presented F3L-WSCM model involves 4 input QoS parameters, namely service cost, service availability, service response time, and user rating. Then, the FF algorithm is executed to generate a set of WS composition plans, which are then fed into the selection module. During the selection process, the FSC technique elects an optimal WS composition plan. Finally, in the execution phase, the chosen plan is sent to the client and the CC server for further processing. The Proposed Model The aims of the WS selection from discovered services depend upon the business requirements; the selected services then go to the composition procedure. A set of detected WSs is separated into divisions of services; such a division is known as the candidate services for a provided client request and its requirements. Hence, in the created group, one division of services implements an essential process and other divisions execute different types of tasks; as mentioned before, they realize a process as WS logic or a service action. Therefore, in process selection, we must determine a group of candidate WSs s_i, i ∈ [1…n], that could perform a set of processes t_j, j ∈ [1…m]. The main objective, given a collection of candidate WSs for every provided task, is to decide which WS completes the provided task; this is the method of identifying services in the composition process. The QoS non-functional model is composed of 4 variables for the quality of the WS [19]: namely service availability, response time, user rating, and cost. All candidate services attain a value demonstrating each of these WS quality standards. i) Service Cost The cost quality c_ij denotes the money that the service requester should pay for executing service i for task j, with c_ij, i ∈ [1…n], j ∈ [1…m]. Assume that c_ij is undefined in case service i cannot perform task j. ii) Service Availability The availability quality a_ij represents the likelihood that the service can be used and accessed. It denotes the ratio of the number of responses the service has returned to the overall number of requests made to the service: a_ij = N_response(i)/N_request(i). iii) User rating The user rating represents a measure of reliability. It is based on user experience with the service. Various end users could have distinct views regarding similar services.
Reputation (or user rating) is determined by the average rank given to the service by end users: for a service rated by u users with ranks R_1, …, R_u, the reputation is (1/u) Σ_{k=1}^{u} R_k.

iv) Service Response Time
The time quality t_ij measures the elapsed time between the moment a request is sent to service i for task j and the instant the outcome is obtained.

Firefly Algorithm Based Composition Plan Model
The FF algorithm is a nature-inspired metaheuristic optimization method based on the social (flashing) behavior of fireflies (lightning bugs) in tropical regions, following swarm behaviors such as those of insects, fish, and bird flocks [20]. The FF technique shares several features with other swarm intelligence (SI) techniques but is simpler in both concept and implementation [21]. Recent research shows that the technique is highly effective and outperforms traditional methods such as the genetic algorithm (GA) on various optimization challenges. Its major advantage is that it mostly uses real random numbers and relies on global communication among the swarming particles (the fireflies); it therefore appears highly efficient for multi-objective optimization problems such as WS composition plan generation. Fig. 2 illustrates the FF technique.

The FF technique rests on three rules derived from the flashing behavior of real fireflies. (1) All fireflies are unisex, and a firefly moves toward brighter, more attractive ones regardless of sex. (2) The attractiveness of a firefly is proportional to its brightness, which decreases with the distance from another firefly because air absorbs light; when no firefly is brighter than a given one, it moves randomly. (3) The brightness (light intensity) of a firefly is determined by the objective function value of the problem at hand. For the composition plan generation problem, the light intensity equals the objective function value. The intensity decreases with distance following the inverse-square law, Eq. (5):

I(r) = I_s / r²   (5)

When light travels through a medium with absorption coefficient γ, the intensity at distance r from the source is given by Eq. (6):

I(r) = I_0 e^{−γr}   (6)

where I_0 is the light intensity at the source [22]. Likewise, the attractiveness β is given by Eq. (7):

β(r) = β_0 e^{−γr²}   (7)

and the generalized attractiveness function for ω ≥ 1 is given by Eq. (8):

β(r) = β_0 e^{−γ r^ω}   (8)

In practice, any monotonically decreasing function may be used. In this technique, each randomly generated feasible solution, called a firefly, is assigned a light intensity according to its fitness under the objective function. This brightness determines the attractiveness of the firefly, which is directly proportional to its light intensity. Once the intensities of the solutions are assigned, every firefly follows the fireflies with better light intensity, while the brightest firefly performs a local search by moving randomly in its neighborhood. Thus, for two fireflies, if firefly j is brighter than firefly i, then firefly i moves toward firefly j using the update rule in Eq. (9),
x_i^{t+1} = x_i^t + β_0 e^{−γ r_{ij}²} (x_j^t − x_i^t) + α ε_i   (9)

where β_0 is the attractiveness toward x_j at r = 0 (researchers have suggested β_0 = 1 for implementation), γ is the algorithm parameter controlling how strongly the update depends on the distance r_ij between the two fireflies, α is the step-length parameter of the random move, and ε_i is a random vector drawn from the uniform distribution on [0,1]. For the brightest firefly x_b, the second (attraction) term in Eq. (9) is dropped, giving Eq. (10):

x_b^{t+1} = x_b^t + α ε_b   (10)

The best solution found so far is recorded, and these position updates continue iteration by iteration until an end condition is met: a maximum number of iterations, a tolerance on the best value once it is recognized, or no improvement over consecutive iterations.

Fuzzy Subtractive Clustering Based Optimal Composition Plan Selection Model
Once the set of WS composition plans has been generated, the FSC technique is executed to choose the optimal plan from the existing ones. Subtractive clustering (SC) treats high-potential data points as candidate cluster centers based on the density of neighboring data points. Consider a group of n data points X = {x_1, x_2, …, x_n}, where x_i is a vector in an M-dimensional space [23]. The SC technique proceeds as follows.

Step 1: Initialize r_a, η with η = r_b/r_a, the acceptance ratio ε̄, and the rejection ratio ε_.

Step 2: Compute the density of each data point using Eq. (11):

P_i = Σ_{j=1}^{n} exp(−4 ‖x_i − x_j‖² / r_a²)   (11)

where P_i is the density of the i-th data point, r_a is a positive constant defining a neighborhood radius, and ‖·‖ is the Euclidean distance. The data point with maximum density is elected as the first cluster center.

Step 3: The density of every data point is revised using Eq. (12):

P_i = P_i − P* exp(−4 ‖x_i − x*‖² / r_b²)   (12)

where r_b is a positive constant with r_b = η·r_a and the recommended choice η = 1.5.

Step 4: Let x* be the data point with the maximum revised density P*, and let d_min be the minimum distance between x* and all previously created cluster centers. If the acceptance test is passed, x* becomes a new cluster center and the procedure returns to Step 3; otherwise set P(x*) = 0, elect the x* with the next maximum density P(x*), and repeat Step 4.

Step 5: Output the clustering result. The membership degree of a data point in the k-th cluster can be defined using Eq. (13):

μ_{ik} = exp(−4 ‖x_i − x_k‖² / r_a²)   (13)

The SC technique thus requires four parameters: the acceptance ratio ε̄, the rejection ratio ε_, the cluster radius r_a, and the squash factor η (or r_b); the chosen values significantly affect the clustering results. When ε̄ and ε_ are large, the number of cluster centers shrinks, so these parameters are sources of uncertainty in the SC technique. Moreover, SC evaluates the potential of a data point as a cluster center from the density of neighboring points, which in turn depends on the distances between that point and the remaining points; SC therefore carries uncertainties both in the distance measure and in the parameter initialization. When x_k is the k-th cluster center with potential P*_k, the potential of every data point is revised by the corresponding subtraction step. The clustering outcome is strongly affected by the choice of the fuzzifier parameter m: when m is small, few cluster centers are created, whereas an excessive m creates too many. By adjusting m, an optimal clustering result can be obtained that does not depend on the particular SC parameter setting. A runnable sketch of the firefly update and the SC density computation follows.
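To make the two core computations above concrete, the following Python sketch implements the firefly position update of Eqs. (9)–(10) and the initial subtractive-clustering densities of Eq. (11). It is a minimal illustration under stated assumptions, not the authors' implementation: the objective `plan_quality`, the encoding of composition plans as real vectors, and all parameter values are hypothetical placeholders.

```python
import numpy as np

def plan_quality(x):
    # Hypothetical objective: aggregate QoS score of a composition
    # plan encoded as a real vector (higher is better).
    return -np.sum((x - 0.5) ** 2)

def firefly_step(X, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One iteration of the firefly update, Eqs. (9)-(10)."""
    if rng is None:
        rng = np.random.default_rng(0)
    brightness = np.array([plan_quality(x) for x in X])
    X_new = X.copy()
    n, dim = X.shape
    for i in range(n):
        moved = False
        for j in range(n):
            if brightness[j] > brightness[i]:        # firefly j is brighter
                r2 = np.sum((X[i] - X[j]) ** 2)      # squared distance r_ij^2
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness, Eq. (7)
                # random term uniform on [0,1] as stated after Eq. (9)
                X_new[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(0, 1, dim)
                moved = True
        if not moved:                                # brightest firefly, Eq. (10)
            X_new[i] += alpha * rng.uniform(0, 1, dim)
    return X_new

def sc_densities(X, r_a=0.5):
    """Initial subtractive-clustering potentials P_i, Eq. (11)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-4.0 * d2 / r_a ** 2).sum(axis=1)

# Toy run: 20 candidate plans described by 4 normalized QoS values.
rng = np.random.default_rng(42)
plans = rng.uniform(0.0, 1.0, (20, 4))
for _ in range(50):
    plans = firefly_step(plans, rng=rng)
center = plans[np.argmax(sc_densities(plans))]  # densest plan: first cluster center
```

In a full implementation, the densities would then be reduced iteratively via Eq. (12) and the acceptance test of Step 4 applied to pick further centers.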
The fuzzifier m alters the outcome of the mountain (potential) function and therefore considerably affects the clustering outcome. Using the fuzzifier m, the dependency of the clustering results on the initial parameter values of the technique can be reduced, and an improved clustering outcome can be attained regardless of the parameter initialization. Through the FSC technique, the optimal composition plan is selected from the many available composition plans for location identification.

Performance Validation
This section reports the experimental validation of the presented F3L-WSCM model on two benchmark datasets. The records in the datasets comprise a user_ID, a geographical location, and tips in English; records belonging to the same object together form the textual description of that object.

On analyzing the accuracy (distance to query) of the proposed F3L-WSCM method on dataset-2, the F3L-WSCM model accomplished improved results by offering a higher distance to query under distinct activities. For instance, the F3L-WSCM model attains a distance to query of 28.29, whereas the NNB, DDB, and CSCB models reach reduced distances to query, the NNB value being 22.69. Concurrently, the F3L-WSCM technique attains a distance to query of 107.37, whereas the NNB, DDB, and CSCB models reach lower values of 97.92, 50.68, and 6.59, respectively. Likewise, the F3L-WSCM methodology attains a distance to query of 129.76, whereas the NNB, DDB, and CSCB models reach minimum values of 124.86, 75.53, and 6.59, respectively. The experimental results confirm that the proposed F3L-WSCM model is significantly more effective than the existing methods.

Conclusion
This study has introduced a novel F3L-WSCM model in a CC environment for effective location awareness. The presented model first allows the client to submit a query related to location awareness. Next, it executes the FF algorithm, inspired by the flashing patterns of fireflies, for WS composition plan generation, using four input QoS parameters: service cost, service availability, service response time, and user rating. The FSC technique is then applied to pick the best composition plan from the available composite plans. A series of simulations on the benchmark datasets demonstrated the promising results of the F3L-WSCM model, and the experimental outcomes highlighted its improved performance over the other methods across different measures. In future work, the presented F3L-WSCM model can be deployed in real-time applications.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
5,164.6
2022-01-01T00:00:00.000
[ "Computer Science" ]
CONSTRUCTION OF AXISYMMETRIC STEADY STATES OF AN INVISCID INCOMPRESSIBLE FLUID BY SPATIALLY DISCRETIZED EQUATIONS FOR PSEUDO-ADVECTED VORTICITY

An infinite number of generalized solutions to the stationary Euler equations with axisymmetry and prescribed circulation are constructed by applying the finite difference method in the spatial variables to an equation of pseudo-advected vorticity. They are proved to be different from exact solutions which are written with trigonometric functions and a Coulomb wave function.

Introduction
In a domain Ω (⊂ ℝ³), the velocity u (: Ω → ℝ³) of a steady-state inviscid incompressible fluid is described by the stationary Euler equations with a boundary condition,

(u · ∇)u + ∇p = 0,  ∇ · u = 0 in Ω,  u · n = 0 on ∂Ω,   (1.1)

or equivalently,

(∇ × u) × u + ∇(p + |u|²/2) = 0,  ∇ · u = 0 in Ω,  u · n = 0 on ∂Ω.   (1.2)

Here, p (: Ω → ℝ) is the pressure, ∂Ω is the boundary, and n is the unit outward normal vector on ∂Ω. In the axisymmetric case, the existence of solutions to (1.2) was discussed as the problem of vortex rings in, for example, [2,4,5,8]. Their methods were based on a variational principle for kinetic energy.

By contrast, Vallis et al. [13] proposed a completely different approach to the solvability of (1.2). Assume that the pair (v, q) : Ω × {t > 0} → ℝ³ × ℝ satisfies the nonstationary system (1.3) globally in time t, where ω = ∇ × v and α is a nonzero constant. They asserted the decay v_t → 0 and the relaxation of (v, q) to the above (u, p) as t → ∞. For example, in the axisymmetric case (where v(·,·,t) is a function of the radial and axial coordinates r, z and has no azimuthal component), the azimuthal component ω of ω(r,z,t) satisfies the transport equation (1.4). This means that the integral ∫_D f(ω/r) r dr dz with any smooth function f is conserved in t, where D is the cross-section of Ω in the meridian plane. In addition, from (1.3) we can derive the decay ∫_D |v_t|² r dr dz → 0 as t → ∞, whether α < 0 or α > 0, provided ∫_D (ω/r)² r dr dz < ∞ at t = 0. This is worthy of remark, because we can obtain an axisymmetric solution to (1.2) whose iso-(ω/r)-lines are "topologically accessible" from initially given lines, as was mentioned by Moffatt [7, Section 5]. Some readers may criticize (1.3) as artificial and unphysical. They should note that in variational approaches to (1.2), all (physical or unphysical) divergence-free fields that deform streamlines or vortex lines are considered in order to obtain energy extrema (see [3, Chapter II, Section 2]). The method of Vallis et al. means that an energy extremum is reached automatically as t → ∞ if vortex lines are deformed by the divergence-free field v + α v_t.

From a rigorous point of view, the theory of Vallis et al. has not been proved true in its entirety. Indeed, the nonlinearity of α ω × v_t seems too strong for the temporally global solvability of (1.3) to be obtained rigorously.

In order to make use of (1.3) and construct axisymmetric solutions to (1.2) in a rigorous manner, the author in [9] applied the Galerkin method. He approximated (1.3) with n basis functions in Ω and let n and t go to infinity simultaneously to evade the difficulty of the term α ω × v_t. (The equality (10) in [9] should be corrected to the inequality ‖r⁻¹∇ × u‖ ≤ ‖r⁻¹∇ × v₀‖.) Nevertheless, a question was left open in [9]. As a set of basis functions, that is, an orthonormal system in a square-integrable space with the weight r, the author in [9] used {w^(k)}_{k∈ℕ} such that each of its elements satisfies (1.2). He could not exclude the possibility of the trivial case in which every constructed solution to (1.2) is written in the form c w^(k) with a constant c and some k.
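Since the conservation of ∫_D f(ω/r) r dr dz is used repeatedly below, a one-line check may help. The sketch assumes, as the text states, that (1.4) transports ω/r by the divergence-free field w = v + α v_t tangent to the boundary; it is an informal aid, not part of the original argument.

```latex
% Assume \partial_t(\omega/r) + (w\cdot\nabla)(\omega/r) = 0 with
% w = v + \alpha v_t axisymmetric, divergence-free, tangent to \partial\Omega.
% Since (1/r)\partial_r(r w_r) + \partial_z w_z = 0, we have
% \nabla_{(r,z)}\cdot\bigl(r F w\bigr) = r\, w\cdot\nabla F for any F = f(\omega/r):
\begin{aligned}
\frac{d}{dt}\int_D f\!\Bigl(\frac{\omega}{r}\Bigr)\, r\,dr\,dz
 &= \int_D f'\!\Bigl(\frac{\omega}{r}\Bigr)\,
    \partial_t\!\Bigl(\frac{\omega}{r}\Bigr)\, r\,dr\,dz \\
 &= -\int_D (w\cdot\nabla)\, f\!\Bigl(\frac{\omega}{r}\Bigr)\, r\,dr\,dz
  = -\int_{\partial D} f\!\Bigl(\frac{\omega}{r}\Bigr)\, r\, w\cdot n\, ds = 0 .
\end{aligned}
```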
In this paper, we note another system, (1.5), in which P_σ f = f + ∇Q with Q satisfying ΔQ = −∇ · f and (f + ∇Q) · n|_∂Ω = 0, and again α is a positive or negative constant. This system was introduced by the author in [11] in the two-dimensional context. It is based on the idea of Vallis et al. Indeed, in the axisymmetric case, it leads to our equation (1.6) for pseudo-advected vorticity, which has the same property as (1.4). Moreover, (1.5) yields the decay of ∫_D |v_t|² r dr dz as t → ∞ if we assume its temporally global solvability (which seems as difficult to obtain as the solvability of (1.3)). Again, some readers may criticize (1.5) for its artificiality. As mentioned above, it should be taken not as a physical model but as a substitute for variational methods for constructing stationary Euler flows.

The aim of this paper is to approximate (1.6) by the finite difference method in r, z in a cylindrical domain Ω and to prove that it generates an infinite number of generalized solutions to (1.2) which are axisymmetric, periodic in z, equipped with prescribed circulation, and different from the above c w^(k). The difficulty of proving the temporally global solvability of (1.6) is evaded by letting the lattice scale h → 0 and t → ∞ simultaneously. This is done, and a generalized solution to (1.2) is constructed, in Section 5, after some preparations in Sections 2 and 3 and the introduction of the approximation of (1.6) in Section 4. For a fundamental theory of the finite difference method, we refer to [6, Chapter VI]. By repeating the process used in Section 5, an infinite number of generalized solutions to (1.2) are generated in Section 6. In our case, each element of {w^(k)}_{k∈ℕ} = {w^(m,n)}_{m∈ℕ,n∈ℤ} is written concretely with a trigonometric function and the regular Coulomb wave function of order zero, as shown in Section 6. It satisfies (1.2). As far as the author knows, no previous paper has introduced this set of exact solutions to (1.2).

An advantage of the finite difference method over the Galerkin method is that we obtain (5.7), which is what we mean by the above "prescribed circulation." By virtue of (5.7), we can show that our generalized solutions do not have the form c w^(m,n) (see Theorem 6.1). In [11], the author discussed the stationary Euler equations in a square domain in ℝ² by using the finite difference method and proved a theorem analogous to Theorem 5.2. Although our axisymmetric case is more complex, it brings a better result, namely the construction of an infinite number of generalized solutions in Theorem 6.1. A characteristic of our case is that a small r matches a small lattice scale h, which lets us prove (3.6), while such an estimate could not be obtained in the planar case in [11].

Preliminaries
Let us introduce our notation. We assume that Ω is a cylindrical domain with constant radius a, that is, Ω = {(r, θ, z) | 0 ≤ r < a}, and that the flow is periodic in z; for simplicity, the period is set equal to a. The unit vectors in the r-, θ-, and z-directions of the cylindrical coordinate system are denoted by e_r, e_θ, and e_z, respectively.

For h = a/N with a sufficiently large positive integer N, we define the index sets Λ_h^r and Λ̃_h^r in (2.1). The complements of Λ_h^r and Λ̃_h^r in ℤ are denoted by (Λ_h^r)^c and (Λ̃_h^r)^c, respectively.
For {f_{j,k} ∈ ℝ}_{(j,k)∈ℤ²}, we define the difference quotients

D⁺_{h,r} f_{j,k} = (f_{j+1,k} − f_{j,k})/h,  D⁻_{h,r} f_{j,k} = (f_{j,k} − f_{j−1,k})/h,

and D⁺_{h,z}, D⁻_{h,z} analogously with respect to the index k. These operators D⁺_{h,r}, D⁻_{h,r}, D⁺_{h,z}, and D⁻_{h,z} are mutually commutative. For a set of vectors {f_{j,k} = f^r_{j,k} e_r + f^z_{j,k} e_z}_{(j,k)∈ℤ²}, we define D^±_{h,s} componentwise with s = r or z. The difference versions of the gradient and divergence operators are defined in (2.4). It is easy to verify the discrete product rules, and we will often use the summation-by-parts identities, whose boundary contributions vanish for s = z whenever f_{j,k} = f_{j,k+N} and g_{j,k} = g_{j,k+N}. It is also convenient to note several inequalities, the last of which, (2.11), is known as the discrete Poincaré inequality.

Let {·,·}_S and |{·}|_S, for sets of scalars {f_{j,k}}, {g_{j,k}} or of vectors {f_{j,k}}, {g_{j,k}}, be defined by (2.12)–(2.14). In order to discuss the limit h → 0, it is convenient to use the interpolation operators A_h and B_h defined in (2.15), where [r/h] denotes the integer in (r/h − 1, r/h]. These operators correspond to ũ_h and u_h in [6, Chapter VI]. They are also applied to sets of vectors as in (2.16). It is useful to note that B_h{f_{j,k}} is a continuous, piecewise bilinear function and that the inequalities (2.17)–(2.18) hold; moreover, we have (2.19).

The scalar product ⟨·,·⟩ for scalar functions f(r,z), g(r,z) or vector functions f(r,z), g(r,z) is defined with the weight r, and the norm ‖·‖ is the one it induces. Let X₀ and X₁ be the spaces equipped with this scalar product. We consider the construction of solutions to (1.2) in X₁, particularly in a subspace X̃₁ of it; similarly, X̃₀ is a subspace of X₀. We also use the Sobolev space of first order on (0,a)², denoted by W₂¹((0,a)²), and the space of vector functions whose components belong to W₂¹((0,a)²).

Difference operator Ξ_h
In the axisymmetric case, the relation between the stream function φ and the vorticity ω is represented by (3.1), where the operator Ξ is defined in (3.2) (see [2,4,5,8]). Let the difference operator Ξ_h, a difference approximation of Ξ, be defined by (3.3). The following lemmas will be used in Section 5. From now on, we frequently denote positive constants independent of h by C or C′ without distinction.

Lemma 3.1. Let {f_{j,k}} be a set such that f_{j,k} = 0 for all j ∈ (Λ_h^r)^c and f_{j,k} = f_{j,k+N}. Then the estimate (3.4) holds.

Proof. By (2.6) and (2.8), we have the decomposition (3.8) with remainder (3.9). Therefore, using (2.11), we obtain a bound which leads to (3.4). □

Spatially discretized equations for pseudo-advected vorticity
First, we introduce the linear system (4.1) for {φ_{j,k}}_{(j,k)∈ℤ²} with {ζ_{j,k}}_{(j,k)∈Λ_h} given. It is uniquely solvable: indeed, if ζ_{j,k} = 0 is assumed for all (j,k), then φ_{j,k} = 0 follows from (3.4) with (2.11). Therefore, we can represent φ_{j,k} in Λ_h as in (4.3). Suppose that f_{j,k} is given for every (j,k) ∈ Λ_h^r × ℤ so that f_{j,k} = f_{j,k+N}, and that {f^r_{0,k} | f^r_{0,k} = f^r_{0,k+N}}_{k∈ℤ} is also given. We define the operator P_{σ,h} by (4.4), where Q_{j,k} is determined by (4.5). It is easily verified that grad⁺_h Q_{j,k} is uniquely determined for given {f_{j,k}} and {f^r_{0,k}}: indeed, if f_{j,k}|_{(j,k)∈Λ_h^r×ℤ} = 0 with f^r_{0,k}|_{k∈ℤ} = 0 is assumed, then, using (2.7), we find that grad⁺_h Q_{j,k} vanishes. As an approximation of (1.6) by the finite difference method in r and z, we present the system (4.8) of ordinary differential equations for ζ_{j,k}(t), (j,k) ∈ Λ_h. Here α is a fixed positive or negative number, and φ_{j,k} is given by (4.3) for (j,k) ∈ Λ_h and by (4.2) for the others. It should be noted that (4.1) is not always valid for j = 0 or N. A numerical illustration of the underlying difference operators is given below.
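As an informal aid to the preceding definitions, the following sketch implements the forward and backward difference quotients on a lattice that is periodic in z and checks the summation-by-parts identity numerically. It is not part of the paper's proofs; the grid size and test data are illustrative.

```python
import numpy as np

N = 16
h = 1.0 / N                        # lattice scale h = a/N with a = 1
rng = np.random.default_rng(0)
f = rng.standard_normal((N, N))    # f[j, k], periodic in k: f[j, k+N] = f[j, k]
g = rng.standard_normal((N, N))

def Dp_z(u):
    """Forward difference in z: (u[j,k+1] - u[j,k]) / h, periodic in k."""
    return (np.roll(u, -1, axis=1) - u) / h

def Dm_z(u):
    """Backward difference in z: (u[j,k] - u[j,k-1]) / h, periodic in k."""
    return (u - np.roll(u, 1, axis=1)) / h

# Commutativity of the difference operators:
assert np.allclose(Dp_z(Dm_z(f)), Dm_z(Dp_z(f)))

# Summation by parts in z (no boundary term thanks to periodicity):
#   sum_k (D+_z f) g = - sum_k f (D-_z g)
lhs = np.sum(Dp_z(f) * g)
rhs = -np.sum(f * Dm_z(g))
assert np.isclose(lhs, rhs)
```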
The sets {ζ_{j,k}} and {φ_{j,k}} correspond to ω/r in (1.6) and φ in (3.1), respectively. As a result of the first condition of (4.2), the mean velocity in (0,a)² of our flows in Theorem 5.2 is equal to zero, while the case with φ_{j,k}|_{j≤0} = 0 and φ_{j,k}|_{j≥N} = const. (≠ 0) remains open. Clearly, the system (4.8) is uniquely solvable at least locally in time once we prescribe the initial data (4.10).

Construction of solutions to the stationary Euler equations
Let us define a generalized solution to (1.2): if u satisfies the weak formulation (5.1) for any f ∈ X₁, then u is said to be an axisymmetric generalized solution to (1.2). If this generalized solution belongs to the C¹-class, then it is a classical solution to (1.2), by the well-known orthogonality of divergence-free and gradient fields. Defining the relevant quantities accordingly, we obtain Theorem 5.2. In Section 6, each φ^(m,n) is shown to be a smooth solution to (6.1) with μ = μ_{m,n}, and {φ^(m,n)}_{m∈ℕ,n∈ℤ} is a complete orthogonal system in L²((0,a)²) with the weight r.
2,980.6
2005-01-01T00:00:00.000
[ "Mathematics", "Physics" ]
Double scaling limit of N=2 chiral correlators with Maldacena-Wilson loop

We consider $\mathcal N=2$ conformal QCD in four dimensions and the one-point correlator of a class of chiral primaries with the circular $\frac{1}{2}$-BPS Maldacena-Wilson loop. We analyze a recently introduced double scaling limit where the gauge coupling is weak while the R-charge of the chiral primary $\Phi$ is large. In particular, we consider the case $\Phi=(\text{tr}\varphi^{2})^{n}$, where $\varphi$ is the complex scalar in the vector multiplet. The correlator defines a non-trivial scaling function at fixed $\kappa = n\,g_{\rm YM}^{2}$ and large $n$ that may be studied by localization. For any gauge group $SU(N)$ we provide the analytic expression of the first correction $\sim \zeta(3)\,\kappa^{2}$ and prove its universality. In the $SU(2)$ and $SU(3)$ theories we compute the scaling functions at order $\mathcal O(\kappa^{6})$. Remarkably, in the $SU(2)$ case the scaling function is equal to an analogous quantity describing the chiral 2-point functions $\langle\Phi\overline\Phi\rangle$ in the same large R-charge limit. We conjecture that this $SU(2)$ scaling function is computed at all orders by a $\mathcal N=4$ SYM expectation value of a matrix model object characterizing the one-loop contribution to the 4-sphere partition function. The conjecture provides an explicit series expansion for the scaling function and is checked at order $\mathcal O(\kappa^{10})$ by showing agreement with the available data in the sector of chiral 2-point functions.

Introduction
Recently, a certain interest has been devoted to the large R-charge limit of four dimensional N = 2 superconformal theories [1,2]. This limit may be conveniently combined with a weak coupling expansion and tuned so as to produce a non-trivial scaling behaviour. In this mixed regime one can neglect instanton contributions while keeping some interesting (scaled) coupling dependence. Typically, one has a vanishing Yang-Mills coupling g_YM → 0 while the R-charge grows like 1/g²_YM. Extremal correlators of chiral primaries in conformal N = 2 QCD have been computed in such a double scaling limit [3]. If Φ_n is a chiral primary whose R-charge increases linearly with n, one may consider the normalized ratio between the 2-point functions in the N = 2 theory and in the N = 4 SYM universal parent theory. This defines the scaling function for gauge group SU(N),

F_Φ(κ; N) = lim_{n→∞} ⟨Φ_n Φ̄_n⟩_{N=2} / ⟨Φ_n Φ̄_n⟩_{N=4},   (1.1)

where κ = n g²_YM is the fixed coupling at large R-charge. For certain classes of chiral primaries Φ_n, it is possible to compute the perturbative expansion of the function F_Φ by exploiting the integrable structure of the N = 2 partition function. In the simplest setup, Φ_n is the maximal multi-trace tower Φ_n = Ω_n ≡ (Tr ϕ²)^n, where ϕ is the complex scalar field belonging to the N = 2 vector multiplet. The 2-point functions ⟨Ω_n Ω̄_n⟩ are then captured by a Toda equation following from the four dimensional tt* equations [4], i.e. the counterpart of the topological anti-topological fusion equations of 2d SCFTs [5,6]. By exploiting the Toda equation to control the R-charge dependence, it is possible to compute the scaling function in (1.1) at rather high perturbative order [7]. This approach, based on decoupled semi-infinite Toda equations, has been proved to admit generalizations to broader classes of primaries and is believed to be a general feature of Lagrangian N = 2 superconformal theories [8].
In this paper, we reconsider the double scaling limit in (1.1), but for a different class of correlators, namely the 1-point function ⟨Φ_n W⟩ of large R-charge chiral primaries Φ_n in the presence of a circular ½-BPS Maldacena-Wilson loop W [9,10]. In conformal SQCD, such correlators may be computed by localization, as thoroughly studied in [11]. Other applications of localization to Wilson loops in N = 2 superconformal theories may be found in [12][13][14][15]. For our purposes, we shall again be interested in the maximal multi-trace case, i.e. Φ = Ω_n. As is well known, the localization computation is based on the partition function of a suitable deformation of the N = 2 theory on S⁴ [16][17][18]. However, the map from S⁴ to flat space requires one to disentangle a peculiar operator mixing induced by the conformal anomaly [19]. This is an annoying feature as far as the analysis of the n → ∞ limit is concerned: the mixing structure becomes more and more involved with growing n. In particular, one would need results like those in [11], but with a fully parametric dependence on n. Besides, as in the study of the large R-charge limit of chiral 2-point functions, we want to work with a generic SU(N) gauge group at (finite) fixed N. The drawback is that finite-N results may display a deceiving complexity at large R-charge. For a discussion of the simplifications that occur in the mixing problem at large N, see [20][21][22][23].

To overcome these difficulties, we exploit special features of the Ω_n operators. For generic R-charge n and gauge group SU(N), we provide the solution to the mixing problem in the maximally supersymmetric N = 4 SYM theory. Similarly, we give exact expressions for the first genuine N = 2 correction ∼ ζ(3). These findings are a useful guide to the study of higher transcendentality contributions. Our results allow us to consider the n → ∞ limit of the ratio of 1-point functions taken at fixed κ = n g²_YM in the SU(N) theory,

G(κ; N) = lim_{n→∞} ⟨Ω_n W⟩_{N=2} / ⟨Ω_n W⟩_{N=4}.   (1.2)

The quantity in (1.2) is the simplest natural object to be studied in the presence of the Wilson loop, corresponding to (1.1). Our analysis shows that the limiting scaling function G(κ; N) is well defined and non-trivial. In the SU(2) theory, we check the remarkable equality G(κ; N = 2) = F(κ; N = 2) at least at order O(κ⁶). This identity does not hold for N > 2, as follows from a study of the G function in the SU(3) theory, again at order O(κ⁶). The case of SU(2) is definitely special. Based on certain universality arguments, we formulate a simple conjecture for the all-order expansion of F(κ; 2) in terms of a certain N = 4 expectation value of the one-loop contribution to the N = 2 matrix model partition function. We have checked the conjecture at order O(κ¹⁰) by reproducing the results of [7].

The plan of the paper is the following. In Section 2 we summarize recent developments on the large R-charge double scaling limit in the sector of chiral 2-point correlators. In Section 3 we briefly set up the calculation with the Maldacena-Wilson loop and recall the localization algorithm mapping flat space correlators to definite matrix model integrals. In Section 4 we exploit some special features of the operators Ω_n to solve the associated mixing problem in the N = 4 theory, to compute the correlator with the Wilson loop for generic R-charge, and to evaluate the first correction ∼ ζ(3) for general gauge algebra rank. Section 5 is devoted to a focused study of the large R-charge limit.
We first discuss in a rigorous way the universality of the ζ(3) correction. Next, simple educated assumptions allow us to compute the scaling function G(κ; N) at sixth order in κ for the SU(2) and SU(3) theories. In the final Section 6 we present a conjecture for the all-order expression of the scaling function in the SU(2) theory, based on the identification of a particular universal and natural object in the N = 2 matrix model which turns out to encode the scaling function itself.

2 Large R-charge double scaling in N = 2 conformal SQCD
We work in flat 4d space and consider N = 2 conformal SQCD, i.e. SYM with gauge group SU(N) and 2N fundamental hypermultiplets. Chiral primary operators are primaries annihilated by half of the supercharges. They have scaling dimension Δ and quantum numbers (R, r) of the R-symmetry SU(2)_R × U(1)_r obeying R = 0 and Δ = r/2. If ϕ is the complex scalar field in the vector multiplet, a generic scalar chiral primary is labeled by a vector of integers n = (n_1, …, n_ℓ) and reads

O_n = Tr ϕ^{n_1} Tr ϕ^{n_2} ⋯ Tr ϕ^{n_ℓ}.   (2.1)

Superconformal symmetry predicts the diagonal 2-point function

⟨O_n(x) Ō_m(0)⟩ = G_{n,m}(g) / |x|^{Δ_n + Δ_m},  nonvanishing only for Δ_n = Δ_m,   (2.2)

where G_{n,m} is a function of the gauge coupling g. In the recent papers [3,7] a special role has been played by the operators

Ω_n ≡ O_{2,…,2} = (Tr ϕ²)^n,   (2.3)

and the associated 2-point function coefficients G_{2n} = ⟨(Tr ϕ²)^n (Tr ϕ̄²)^n⟩ have been computed. The functions G_{2n} are captured by a semi-infinite Toda equation [19,3,8] that allows one to compute them, after normalization to their N = 4 SYM value, in the large R-charge limit

n → ∞,  g → 0,  κ = n g² = fixed.   (2.4)

One finds an expansion (2.5) whose coefficients c_s^{(ℓ)}(N) have been computed at order O(κ¹⁰) in [7]; the first cases are displayed in (2.6). It has been conjectured that all terms associated with multiple products of zeta functions vanish for N = 2. For this value, i.e. for the SU(2) theory, the expansion (2.5) reduces to the explicit series (2.8), whose last known term is of order κ¹⁰/π²⁰ with a rational coefficient involving the integer 1127171217162240, plus O(κ¹¹); starting at order κ⁶, products of zeta functions appear. This seemingly technical or accidental feature will have a role in the following discussion. A natural issue is then to explore the possibility of non-trivial scaling functions in other partially protected sectors. To this aim, we shall consider here chiral correlators of one primary with a Maldacena-Wilson loop.

3 Chiral 1-point function with ½-BPS Wilson loop
We consider the ½-BPS Maldacena-Wilson loop W defined in (3.1) [9,10], where g is the gauge coupling, C is a circle of radius R, ϕ is the complex scalar field in the vector multiplet, and the trace is taken in the fundamental representation. Conformal invariance fully constrains the position dependence of the 1-point function of the chiral primary operator O_n with W. For a loop placed at the origin, one has the form (3.2) [11], where |x|_C is a suitable SO(1,2) × SO(3)-invariant distance between x and the loop respecting the conformal subgroup unbroken by the loop. All the remaining information about the 1-point function is encoded in the coupling-dependent normalization A_n(g) in (3.2).

Localization results
As discussed in [11], the function A_n(g) may be computed from the same matrix model that appears in the partition function and encodes the localization solution of the N = 2 theory on S⁴ with a specific (finite) Ω-deformation [16][17][18]. Since we are interested in a weak-coupling expansion, we shall drop the instanton contribution. After this simplification, the sphere partition function is associated with a perturbed Gaussian matrix model.
Up to a g-independent normalization, the sphere partition function reads as in (3.3), and the non-Gaussian interacting action S_int(a) is an infinite series (3.4) whose coefficients s_n(a) are invariant functions of a. Its first terms can be written explicitly, and higher powers of g are associated with terms of higher transcendentality. As usual, correlation functions are computed by the matrix model average (3.6). The N = 4 SYM theory is obtained as a special limit in which the interacting action is dropped and the instanton contribution, already discarded at weak coupling, is also removed. In this limit, the partition function is computed by a very simple Gaussian model and correlators are obtained after full Wick contraction, the basic contraction being ⟨a_b a_c⟩ = δ_{bc}. In the following we shall be interested in the multi-trace expectation values ⟨O_n(a)⟩, cf. (2.1). As discussed in [11], the function A_n(g) in (3.2) can be computed by the prescription (3.8), where :O_n(a): is obtained by Gram-Schmidt orthogonalization with respect to all operators of dimension smaller than |n|. In [11], several interesting results are obtained for the function A_n(g) at specific values of the multi-index n, both at finite N and in the planar limit N → ∞ with fixed N g². This is achieved by exploiting recursion relations with respect to n [25]. In general, a potentially unavoidable problem is that it may be difficult to obtain results parametric in n, whereas this is essential for our purposes. This difficulty already shows up in the N = 4 theory because of the complexity of the Gram-Schmidt procedure required to construct the normal ordering. In the N = 2 theory, complications are worse due to the g-dependence introduced by the interaction term (3.4). In the next Section we shall consider the special class of operators Ω_n in (2.3), show how to solve the above problems, and derive a set of results depending parametrically on n. This will be the starting point for the discussion of the large R-charge limit n → ∞ in Section 5.

Remark. Concerning notation, in the following we shall need to distinguish quantities evaluated in the N = 2 or N = 4 theories; in such cases, we shall label them accordingly.

4 The correlators ⟨W Ω_n⟩
As explained in the Introduction, we are interested in the functions A_n^{N=2}(g) and A_n^{N=4}(g). As we discussed, to some extent it is straightforward to evaluate the perturbative expansion of A_n^{N=2} at fixed n. However, our main concern is the limit (2.4) and the associated asymptotic ratio (4.2), for which a different approach is required. Notice that in the case of the chiral 2-point function, the existence of a Toda equation, equivalent to the tt* equations, has been crucial in this respect [7]. In this section, we first present results for A_n(g) in the N = 4 theory, one of the pieces entering the ratio (4.2). Next, we move to the N = 2 theory and compute the first non-trivial correction. The normal-ordered operator is defined by the expansion (4.3), where the constants c are determined by the orthogonality condition on :Ω: against all lower-dimensional operators. A useful remark is that (4.3) is equivalent to the subtraction of all possible (partial) Wick pairings inside Ω [25,11]; this is why the expansion (4.3) is dubbed normal ordering and denoted by the usual double-colon notation. To prove this, notice that by linearity we can reduce the problem to field monomials in the higher order traces Tr(a^n). Let W(Ω) be the operator constructed by subtraction of Wick pairings. The correlator ⟨W(Ω) Ω′⟩ is obtained after full Wick pairing.
If dim Ω′ < dim Ω, some pairing necessarily remains inside W(Ω) and we get zero. By uniqueness, :Ω: ≡ W(Ω).

The case of Ω_n
We now consider the special case of the operators Ω_n = (Tr a²)^n. Their expectation value follows easily from (3.3) and is given in (4.4), which also defines the constant α. According to the remarks in the previous paragraph, we can write the following specialized version of (4.3):

:Ω_n: = Ω_n + Σ_{k=0}^{n−1} c_k^{(n)} Ω_k.   (4.6)

In the following it will sometimes be convenient to adopt an explicit matrix-vector notation for (4.6), as in (4.7). To prove (4.6), we start from the orthogonality conditions ⟨:Ω_n: Ω_m⟩ = 0 for m = 0, 1, …, n−1, which we write as in (4.8). Using (4.4), these conditions may be rewritten as in (4.9). The solution is unique; its explicit form shows that c_m^{(n)} involves the ratio Γ(α + n)/Γ(α + m) of (4.12), whose right-hand side vanishes for integer m ≠ n and equals one as m → n.

A useful elementary identity
The specific explicit coefficients (4.9) allow one to prove the following elementary identity: for any function F(g a) we can write the derivative representation (4.13). To verify it, we compute ⟨O_m Ω_n⟩ by iterating the recursion relation for ⟨O_m Ω_1⟩ proved in [25], which gives (4.15). Finally, using the mixing solution (4.6), we obtain, for |m| > 2n (otherwise the result vanishes by construction of :Ω_n:), the expression (4.16); replacing this in (4.14) reproduces (4.13).

Remark 1. It may be interesting to compare (4.13) with what is obtained by taking derivatives with respect to 1/g², i.e. essentially with respect to Im τ, where τ is the complexified gauge coupling. In that case one obtains Gaussian averages with various insertions of powers of Ω_1. Instead, derivatives with respect to g² automatically build up the normal-ordered operators :Ω_n:. In particular, the N dependence of the normal ordering expansion (4.6) is fully taken into account by the N dependence of the basic expectation ⟨F⟩.

Remark 2. The relation (4.13) may be written in explicit form by simply rearranging derivatives, as in (4.18).

A useful application of formula (4.13) is the case when F is the Wilson loop (3.8), with expectation value ⟨W⟩ in the SU(N) theory given by (4.21) [11]. One finds

⟨W :Ω_1:⟩ = g² ∂_g ⟨W⟩,  ⟨W :Ω_2:⟩ = g⁴ ∂_g (−1 + g ∂_g) ⟨W⟩,   (4.22)

and so on. As a check, the first two lines agree with Eqs. (4.7) and (4.11) of [11]. Thanks to the special structure of ⟨W⟩ in (4.21), it is also simple to determine the closed expression of (4.22) for any given N, a kind of formula that will be useful in the following; the first cases are displayed in (4.24). The SU(2) computation is particularly simple and is easily checked by direct evaluation of the partition function and correlators in the Gaussian matrix model, as briefly reviewed in App. A. The advantage of the relations proved in this section is that they are parametric in N, i.e. in the gauge group rank.

Checks in the planar limit
From the result (4.34) it is easy to work out the planar limit N → ∞ with fixed λ = N g². Expanding as in (4.38), at leading order we obtain (4.39) and therefore (4.40). Rearranging derivatives, we can write (4.40) in the form (4.41); sample cases are given in (4.42). These may be compared with the general formula conjectured in [11] and, of course, there is agreement after some rearrangement of the Bessel functions.

5 Large R-charge limit and universality
We now make full use of the results of the previous section to discuss the large R-charge limit of the normalized one-point function of the chiral primary Ω_n(ϕ) with the ½-BPS Wilson loop in the N = 2 theory. A numerical cross-check of the Gaussian averages used below is sketched next.
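As an informal cross-check of the Gaussian averages quoted above, the sketch below Monte Carlo samples the N = 4 matrix model for su(2). The normalization conventions, the basic contraction ⟨a_b a_c⟩ = δ_{bc} over the N² − 1 components and Tr T^b T^c = δ^{bc}/2, are assumptions consistent with the text but not fixed by the surviving formulas, so the numbers illustrate only the Γ-function growth pattern of ⟨Ω_n⟩.

```python
import numpy as np
from math import gamma

# Monte Carlo estimate of <(Tr a^2)^n> in the Gaussian matrix model for su(2).
# With <a_b a_c> = delta_{bc} and Tr T^b T^c = delta^{bc}/2 (assumed conventions),
# Tr a^2 = (1/2) * sum_b a_b^2 over the N^2 - 1 = 3 Gaussian components, hence
# <(Tr a^2)^n> = Gamma(alpha + n) / Gamma(alpha) with alpha = (N^2 - 1)/2.
rng = np.random.default_rng(1)
d = 3                                    # dim su(2) = N^2 - 1
alpha = d / 2.0
a = rng.standard_normal((200_000, d))    # samples of the components a_b
tr_a2 = 0.5 * np.sum(a ** 2, axis=1)     # Tr a^2 per sample

for n in range(1, 5):
    mc = np.mean(tr_a2 ** n)
    exact = gamma(alpha + n) / gamma(alpha)
    print(n, round(mc, 3), round(exact, 3))   # agree within Monte Carlo error
```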
In particular, the result (4.34) controls exactly the contribution ∼ ζ(3). When combined with explicit low N calculations, it provides a guide to understand the behaviour of higher transcendentality contributions, as we are going to illustrate. Analysis of the ζ(3) correction We define the asymptotic ratio of one-point functions, cf. (4.2), The very existence of the large n limit in (5.1) is something that deserves investigation. To begin our analysis, let us consider the simplest SU (2) case. Using (4.41) and the first equation in (4.24) we obtain the ratio 6144n(2n + 1)(2n + 3) + 640g 2 (2n + 1)(2n + 3) + 40g 4 (2n + 3) + g 6 ζ(3) + h.z., (5.2) where "h.z." stands for higher transcendentality terms ζ(5), ζ(3) 2 , and so on. The expression (5.2) is exact in the gauge coupling g, i.e. it resums all contributions ∼ ζ(3) g 4+2 k , with k = 0, 1, 2, . . . . For large n and fixed κ = n g 2 the limit (5.1) reads and comes entirely from the ζ(3) g 4 term in (5.2). The same analysis may be repeated for higher N , see (4.24). In all cases one finds the same leading behaviour as in (5.3). In general, one can show that for SU (N ) the ζ(3) correction reads and thus (5.3) is valid for any N . Indeed, our results allow to check that the higher order terms in coupling g are always accompanied by lower powers of n and do not contribute in the n → ∞ limit. Notice however, that the complexity of the detailed dependence on n increases with N . For instance, even for the simplest ζ(3) correction, the O(g 6 ) term in (5.4) reads for SU (N ) with N = 2, 3, 4 We cannot give a general formula parametric in N , but in all cases, the correction term in the square brackets goes like g 2 /n = κ/n 2 and is negligible at large n. The series (6.6) has a finite convergence radius, being convergent for |κ| < π 2 . A simple representation of its analytic continuation that may be used for any κ > 0 is where J p are standard Bessel functions. For κ 1, one has log F(κ) = − log 2 π 2 κ + O(log κ), see App. C for more information. Of course, this large κ expansion must be taken with some caution because F(κ) does not include the instanton contribution. Instantons are expected not to contribute at finite κ -and in particular in the weak coupling expansion -but may be important when attempting to reach the large κ regime. A Direct evaluation of N = 4 correlators in the SU (2) theory The analysis of the low gauge group rank cases may be carried on explicitly by direct evaluation of the partition function, at least in the simple Gaussian case, i.e. for the N = 4 theory. Let us briefly discuss the SU (2) case as an illustration and to write down some expressions that are used in the main text. The partition function (3.3) may be written in terms of a k , k = 1, . . . , N , the eigenvalues of a 10 Z S 4 = C : Ω n : = (−1) n n! L 1/2 n (2 a 2 ) = where L 1/2 n is the generalized Laguerre polynomial L q n with q = 1/2, and H n (z) are Hermite polynomials. As a check one can compute W : Ω n : = 1 2 2n+1 Now, let us assume that (4.13) holds, with the expansion (4.7) in the l.h.s. We find F : Ω n+1 : = g 2(n+1) d dg 2 g −2n F : Ω n : = g 2(n+1) d dg 2 g −2n F
5,370.4
2018-10-24T00:00:00.000
[ "Physics", "Mathematics" ]
A Statistical Channel Model for Stochastic Antenna Inclination Angles

The actions of a person holding a mobile device are not a static state but can be considered a stochastic process, since users change the way they hold the device very frequently within a short time. The change in antenna inclination angles under these random actions results in varying received signal intensity. However, very few studies or conventional channel models capture these features. In this paper, the relationships between the statistical characteristics of the electric field and the antenna inclination angles are investigated and modeled based on a three-dimensional (3D) fast ray-tracing method that accounts for both diffraction and reflections; the radiation patterns of an antenna with arbitrary inclination angles are derived and included in the method. The line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in an indoor environment are both discussed. Furthermore, based on the statistical analysis, a semiempirical probability density function of the antenna inclination angles is presented. Finally, a novel statistical channel model for stochastic antenna inclination angles is proposed, and the ergodic channel capacity is analyzed.

Introduction
Wireless communication technology has been widely used in communication systems for its mobility, convenience, flexibility, and lower cost compared with wired transmission. However, signals are significantly affected by the surrounding environment and undergo fading and time variation before arriving at the receiver. To achieve higher-rate and more reliable communication, the acquisition of accurate channel state information (CSI) and channel modelling are fundamental and crucial in designing a wireless communication system and have been attracting researchers' attention.

A basic framework of the geometry-based stochastic channel modelling approach (GSCMA) is developed in [1] for three different scenarios, with channel parameters such as delay spread, angle spread, shadow fading, angle of departure, angle of arrival, and delay power spectrum extracted from a large number of measurements. A polarized channel model is also proposed based on cross-polarization discrimination (XPD) to account for the depolarization effect of channels on electromagnetic waves. The WINNER II channel model [2] extends the number of scenarios to more than a dozen while following the same channel modelling approach; furthermore, it allows transitions between line-of-sight (LOS) and non-line-of-sight (NLOS) conditions within the same scenario. Analogously, a number of channel models have been established using GSCMA under the assumption that scatterers are distributed on regular two- or three-dimensional geometries, such as the one-ring model [3], the twin-cluster model [4], and the elliptical model [5], which consider only the azimuth angle, and the double-cylinder model [6], the two-sphere model [7], and the multiconfocal ellipsoid model [8], which include the influence of the elevation angle. However, these conventional channel models are assumed to be generally stationary, which is insufficient for the channels of massive multiple-input multiple-output (MIMO) systems, recognized as one of the most important candidate technologies for fifth-generation (5G) mobile communication because of their potential advantages over conventional MIMO technologies [9][10][11].
Consequently, several novel models [12][13][14] have been developed to capture new features observed in measurements, such as the spherical wave front and the nonstationarity along the antenna-array axis and the time axis.

The above channel models are independent of the antenna configuration and element radiation patterns. In contrast, correlative channel models such as the Kronecker model [15] and the Weichselberger model [16] use the correlation matrices at the mobile station (MS) and base station (BS) without knowledge of the distribution of scatterers or clusters, resulting in lower complexity. However, few studies have addressed channel modelling for stochastic antenna inclination angles. Mobile devices are not fixed to walls or desks like routers or computers; instead, people communicate with a mobile device whenever and wherever possible, for instance while lying down, standing, or walking. The way a person holds a mobile device is not a static state but a stochastic process, since users can change their grip very frequently, even within a few seconds. Consequently, the antenna inclination angles change with the rotation of the device, and the received signals vary because of the polarization mismatch between the signals and the antennas. In this paper, a statistical channel model for stochastic antenna inclination angles in the indoor environment is developed based on a modified three-dimensional (3D) fast ray-tracing method. The LOS and NLOS conditions are investigated for a common indoor scenario, a T-shaped corridor. Furthermore, to capture the stochastic characteristics of how people hold a mobile device, a semiempirical probability density function (PDF) of the antenna inclination angles is proposed, and closed-form expressions for the radiation patterns of a half-wave antenna at arbitrary inclination angles are derived from the principle of coordinate transformation. Finally, the ergodic capacities under the two conditions are analyzed based on the proposed channel model. This paper is organized as follows. A modified 3D fast ray-tracing method is introduced, and its validity and accuracy in predicting electromagnetic fields are verified, in Section II. In Section III, the statistical channel model for stochastic antenna inclination angles is presented in detail. The numerical results are analyzed in Section IV, and conclusions are drawn in Section V.
Simulation Environment, Method, and Validation
In this section, a modified 3D fast ray-tracing method based on space subdivision is introduced and used to predict the electromagnetic fields in a T-shaped corridor. The layout and dimensions of the corridor are shown in Figure 1. It is composed of brick walls and a concrete floor and ceiling with the following parameters: relative permittivity ε_r = 4.0 and conductivity σ = 0.343 S/m for the walls, and ε_r = 6.14 and σ = 1.005 S/m for the floor and ceiling [17]. If all angles in the corridor are assumed to be right angles, the whole space can be divided into hexahedrons, and each hexahedron can be further split into five tetrahedrons; the hexahedrons and tetrahedrons must be seamless and nonoverlapping. It is worth noting that there are two types of faces and lines in the model. One is called a real face (RF) or real line (RL), since the face or line exists in the realistic scene, such as the surface or edge of a wall. The other is invisible in reality and is introduced only for subsequent analysis and computation, so it is called a virtual face (VF) or virtual line (VL). The space meshing should give each face of a tetrahedron exactly one property, real or virtual.

Since the size of an antenna is small relative to the realistic propagation environment, a single antenna can be approximated as a transmitting point. Given the coordinates of the transmitting point, the initial tetrahedron containing it can be determined. Rays depart from the transmitting point and arrive at a receiving point through a number of tetrahedrons, and the path of each ray can be traced using the method proposed in [18]. However, that calculation is cumbersome, so we modify the expression and give a more compact form,

a_i = ⟨OB, n_i⟩ / ⟨r, n_i⟩,   (1)

where OB denotes the vector from the transmitting point O to an arbitrary vertex of the i-th face of the initial tetrahedron as shown in Figure 2, n_i and a_i represent the normal vector and the extension coefficient of the i-th face, respectively, r is the unit vector of propagation, and ⟨·,·⟩ denotes the inner product of two vectors. Consequently, the face hit by the ray is the one with the minimum positive extension coefficient (a code sketch of this selection rule is given below). Note that there are two propagation cases depending on the type of face. If the face is a VF, the incident ray passes through it and reaches the adjacent tetrahedron, as in case 1 of Figure 2. Otherwise, the face is an RF and the incident ray is reflected within the current tetrahedron, as in case 2.
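A minimal sketch of the face-selection rule of Eq. (1) follows; the tetrahedron data structure, the parallel-ray tolerance, and the function name are illustrative, not the authors' implementation.

```python
import numpy as np

def next_face(O, r, faces):
    """Index of the tetrahedron face hit first by the ray O + a*r.

    Each face is a pair (B, n): an arbitrary vertex B and the face normal n.
    The extension coefficient follows Eq. (1): a_i = <OB, n_i> / <r, n_i>;
    the hit face is the one with the minimum positive a_i.
    """
    best, hit = np.inf, None
    for i, (B, n) in enumerate(faces):
        denom = np.dot(r, n)
        if abs(denom) < 1e-12:            # ray parallel to the face plane
            continue
        a = np.dot(B - O, n) / denom      # extension coefficient a_i
        if 0 < a < best:
            best, hit = a, i
    return hit, best
```

The same routine applies whether the selected face turns out to be virtual (pass through to the adjacent tetrahedron) or real (reflect in the current one).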
If the roughness of all surfaces in the environment is neglected, the reflected field can be determined according to Fresnel's laws of reflection. The reflection coefficients for perpendicular and parallel polarization are

R_⊥ = (cos θ_i − √(ε_r − sin² θ_i)) / (cos θ_i + √(ε_r − sin² θ_i)),   (2)

R_∥ = (ε_r cos θ_i − √(ε_r − sin² θ_i)) / (ε_r cos θ_i + √(ε_r − sin² θ_i)),   (3)

where ε_r is the relative permittivity and θ_i is the incident angle. Furthermore, if an obstacle much larger than the wavelength of the incident wave lies in the propagation path, diffraction should be taken into consideration. To determine the diffracted field, Holm's heuristic diffraction coefficients are selected because of their simple expressions and good agreement with the rigorous solution for finite conductivity, as shown in [19]. The diffraction coefficients for perpendicular and parallel polarization are expressed in the compact form (4), where R_0^{⊥/∥} and R_n^{⊥/∥} are the reflection coefficients of (2) and (3) for the 0-face and the n-face [19], φ′ and φ represent the incident and diffraction angles, respectively, k is the wave number, (2 − n)π is the inner angle of the wedge (here n = 1.5 because of the right-angle assumption for the corridor), s is the distance between the diffraction point and the observation point, and s′ is the distance between the source point of the incident ray and the diffraction point.

The process of 3D fast ray-tracing is shown in Figure 3, where N and M denote the number of reflections and the number of reflections after diffraction, respectively. In the ray-tracing process, each ray is assumed independent of the others, and the field around a ray is represented only by that ray. It is also necessary to determine whether a ray contributes to the field at a receiving point; an effective method is the reception sphere [20], whose radius for each ray is expressed as

r_s = α d / √3,   (5)

where α is the angle between two adjacent rays and d is the path length from the transmitting point to the receiving point.

Note that if the receiving point lies in the overlapping area of rays, a double-counting error [21] is generated. A method for reducing it is presented as follows (a sketch of the reception test appears below). First, to determine whether a ray is received, all its adjacent rays are tested simultaneously. Second, if an adjacent ray is received, one checks whether the two rays have the same number of reflections on the same surfaces along their whole paths. If so, the distances from the receiving point to the two rays are compared and the farther one is discarded as a repeated ray; otherwise, both rays are retained.
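A compact version of the reception test might look as follows. The radius formula r_s = αd/√3 is the common choice in the ray-launching literature, used here because the paper's own display was lost in extraction, and the path representation is illustrative.

```python
import numpy as np

def ray_received(rx, path_points, alpha, total_len):
    """Reception-sphere test for a traced ray at receiver rx.

    The ray contributes if the perpendicular distance from rx to its last
    segment is below the sphere radius r_s = alpha * d / sqrt(3), with d the
    unfolded path length and alpha the angular separation of adjacent rays.
    """
    radius = alpha * total_len / np.sqrt(3)
    p0, p1 = path_points[-2], path_points[-1]        # last ray segment
    seg = p1 - p0
    t = np.clip(np.dot(rx - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
    dist = np.linalg.norm(rx - (p0 + t * seg))       # point-to-segment distance
    return dist <= radius
```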
It is known that when people communicate with mobile devices, the rapidly changing way they hold the devices makes the antenna inclination angles vary randomly. Assume half-wave dipole antennas are used on both the transmitting and receiving sides, and take the coordinate system of the antenna as the local coordinate system and the coordinate system of the corridor environment as the global coordinate system, as shown in Figures 1 and 4; the radiation patterns of the antenna then change with the antenna inclination angles in the global coordinate system. By coordinate transformation, the radiation patterns of a half-wave dipole antenna with arbitrary inclination angles are expressed as in (6), where F_θ and F_φ represent the radiation patterns in the θ and φ directions of the global coordinate system, α and β are the zenith and azimuth angles of the tilted antenna in the global coordinate system (Figure 4), and θ and φ are the zenith and azimuth angles in the global coordinate system. For α = 0°, the patterns simplify to

F_θ(θ, φ) = cos((π/2) cos θ) / sin θ,  F_φ(θ, φ) = 0,   (7)

the familiar electric field pattern of a vertically polarized half-wave dipole antenna.

Assume the transmitting antenna is fixed in the vertical polarization while the angle of the receiving antenna varies randomly. The intensity of the electric field received by the antenna then differs at the same receiving point, since only the field components parallel to the antenna contribute to the received signal. Consequently, after all the effective rays are determined, the total complex E-field can be obtained as

E = Σ_{i=1}^{K} ( E_{θ,i} ⟨e_{θ,i}, S_Ant⟩ + E_{φ,i} ⟨e_{φ,i}, S_Ant⟩ ),   (8)

where K is the total number of effective rays, i denotes the i-th effective subpath, e_θ and e_φ are the unit direction vectors in the θ and φ directions, S_Ant is the antenna axial direction vector, and ⟨·,·⟩ is the inner product of two vectors. The relative received power (RRP, relative to the transmitting power) is then obtained as in (9) [17], where G_T and G_R represent the transmitting and receiving antenna gains and PL_0 is the free-space path loss from transmitter to receiver.

To verify the accuracy and effectiveness of the modified 3D fast ray-tracing method in predicting the propagation characteristics of electromagnetic waves in the T-shaped corridor, the simulated RRP is compared with the measurement results of [17], as shown in Figures 5 and 6. The simulation is performed at 5.3 GHz following the configuration and parameter settings of the measurement system in [17], with half-wave dipole antennas assumed at both the transmitting and receiving sides. The transmitting power is set to 29 dBm in the simulation, and omnidirectional vertically polarized antennas with different gains but the same transmitting power are assumed at the transmitting side for the LOS and NLOS corridors. As shown in Figure 5, with transmitting and receiving antenna gains G_T = G_R = 1 dBi, the relative received power ranges from -85 dB to -45 dB and decreases along the path from point A to point B in the LOS corridor of Figure 1. It is known that the direct path plays a dominant
role in the received power; signal attenuation is mainly due to energy spreading and to reflections from the walls, floor, and ceiling when scattering and transmission are neglected. In the other case, with G_T = 13 dBi and G_R = 1 dBi, the relative received power ranges from -100 dB to -60 dB along the path from point C to point D in the NLOS corridor, as shown in Figure 6, and it is lower than in the LOS condition despite the larger transmitting antenna gain. This is because there is no direct path in the NLOS corridor, and multiple reflections and diffraction cause heavy attenuation. The simulation results agree well with the measurements for both LOS and NLOS conditions, despite some differences caused by the complicated realistic corridor environment. The method is therefore verified to be accurate and effective for predicting the electric field in the T-shaped corridor environment.

The intensity of the electric field received by an antenna changes with the inclination angles at a receiving point because of polarization mismatch and the changing field patterns, and it can be analyzed with the modified 3D fast ray-tracing method. For instance, assuming the azimuth angle equals zero degrees, the relationships between the relative received power and the zenith angle along the paths from point A to point B in the LOS corridor and from point C to point D in the NLOS corridor are shown in Figures 7 and 8, respectively. In the LOS corridor, the relative received power decreases as the zenith angle increases, and the offsets relative to the vertically polarized antenna (α = 0°) increase while keeping almost the same trend from 0 to 75 degrees. For the horizontally polarized antenna (α = 90°), however, the relative received power is far smaller than for the vertical polarization and follows a different trend. This is because the transmitting antenna is vertically polarized and the main propagation mechanisms are the direct path and reflections; the dominant direct path in the LOS corridor results in a relatively weak depolarization effect. In the NLOS corridor, the relative received power of the vertically polarized antenna is still the largest, but the offsets of the other angles relative to it are smaller than in the LOS corridor, especially for the horizontally polarized receiving antenna. This is because there is no direct path in the NLOS corridor, and the main propagation mechanisms, diffraction and reflections, lead to a serious channel depolarization effect. A sketch of the polarization projection underlying this behavior is given below.
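The polarization-mismatch mechanism just described can be made explicit with a short sketch of the projection in Eq. (8). The antenna-axis parametrization and the per-ray input format are illustrative assumptions, not the authors' code.

```python
import numpy as np

def antenna_axis(alpha, beta):
    """Unit axis vector S_Ant for zenith angle alpha and azimuth beta (radians)."""
    return np.array([np.sin(alpha) * np.cos(beta),
                     np.sin(alpha) * np.sin(beta),
                     np.cos(alpha)])

def received_field(rays, alpha, beta):
    """Superpose ray contributions projected on the antenna axis, cf. Eq. (8).

    Each ray is (E_th, E_ph, e_th, e_ph): the complex theta/phi field
    components and the corresponding unit polarization vectors at the receiver.
    """
    s = antenna_axis(alpha, beta)
    total = 0.0 + 0.0j
    for E_th, E_ph, e_th, e_ph in rays:
        total += E_th * np.dot(e_th, s) + E_ph * np.dot(e_ph, s)
    return total
```

For α = 0 the axis is e_z and only the vertically polarized components survive, which is why the copolarized case dominates in the LOS corridor.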
A Statistical Channel Model for Stochastic Antenna Inclination Angles

In order to obtain the statistical CSI of the T-shaped corridor, 169 user positions from A to B in the LOS corridor and 101 user positions from C to D in the NLOS corridor are selected for simulation and investigation. Half-wave dipole antennas are assumed at both ends of the transceiver with a transmitting power of 30 dBm; the transmitting antenna is fixed in the vertical polarization whereas the inclination angles of the receiving antenna are variable. The simulation results of the electric field at the different receiving points are obtained according to (8) using the ray-tracing method, and the statistical characteristics of the amplitude and phase of the electric field can then be extracted. The results show that the amplitude of the electric field, when expressed in decibels, follows a Gaussian distribution for both the LOS and NLOS corridors. For instance, the statistical distributions of the amplitude of the received electric field for the vertical polarization or z-polarization, i.e., α = 0°, and the two horizontal polarizations, x-polarization and y-polarization, i.e., α = 90° with β = 0° and β = 90°, respectively, are shown in Figures 9 and 10. For the LOS condition, the mean value is −3.7 dB for the z-polarization, much larger than the −19.2 dB for the x-polarization and −25.9 dB for the y-polarization shown in Figure 9, since the vertical and horizontal polarizations are copolarized and cross-polarized relative to the transmitting antenna, respectively, and the existence of the direct path leads to relatively weak depolarization, as mentioned above. Furthermore, the value for the x-polarization is larger than that for the y-polarization, since the reflection surfaces in the LOS corridor are in the y direction, resulting in heavier depolarization for the y-polarization component than for the x-polarization and z-polarization components. For the NLOS corridor, as shown in Figure 10, the mean value is −19.5 dB for the z-polarization, compared with approximately −25 dB for the x-polarization and −24.2 dB for the y-polarization, due to the strong depolarization after the diffraction and multiple reflections. Furthermore, the reflection surfaces in the NLOS corridor are in the x direction, leading to nearly equal mean values for the two horizontal polarizations.
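One consequence of the Gaussian-in-decibels observation worth making explicit: the linear-scale amplitude is then log-normally distributed, since

```latex
A_{\mathrm{dB}} = 20\log_{10} A \sim \mathcal{N}(\mu,\sigma^{2})
\quad\Longrightarrow\quad
A = 10^{A_{\mathrm{dB}}/20}\ \text{is log-normal.}
```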
In order to obtain the statistical characteristics of received signals, the mean value and variance of the amplitude of the electric field, expressed in decibels, for different zenith and azimuth angles in the LOS corridor are depicted in Figures 11 and 12, respectively. The results show that the mean values range from −26 dB to −2 dB and decrease with increasing zenith angle, due to the increased polarization mismatch. The effect of the azimuth angle is not obvious at low zenith angles but becomes significant at larger zenith angles; in particular, when the zenith angle is zero, the mean value is independent of the azimuth angle. This is because there are three basic polarizations, as mentioned above, and polarization mismatch also exists between the two horizontal polarizations. Furthermore, the main reflection surfaces in the LOS corridor are in the y direction, which results in heavier depolarization for the y-polarization than for the x-polarization. The variance approximates a constant of 4.25 dB at low zenith angles, but fluctuations occur at larger angles, as shown in Figure 12.

In the NLOS corridor, the mean values and variances of the amplitude of the electric field as functions of the antenna inclination angles are shown in Figures 13 and 14, respectively. The mean values range from −30 dB to −19 dB and are smaller than those of the LOS condition, due to the heavy attenuation after diffraction and multiple reflections. The depolarization transfers more power from the copolarized components into the cross-polarized ones. In addition, since the main reflection surfaces in the NLOS corridor are in the x direction, the gaps among the three polarizations are reduced. Similar to the LOS condition, the variance also tends to a constant of 3.5 dB at low zenith angles, but is approximately uniformly distributed between 2.5 dB and 5 dB at larger zenith angles.

According to the fitting results, the mean value of the amplitude of the electric field varies sinusoidally with the zenith angle or azimuth angle for both the LOS and NLOS corridors, as shown in Figures 15 and 16, and the fitting function can be expressed in terms of the coefficients y_0, A, x_c, and w (a plausible sinusoidal form is sketched below). Consequently, the functional relationship between the mean value of the amplitude of the electric field and the antenna inclination angles can be deduced and expressed in decibels for the LOS condition (Eq. (11)) and for the NLOS condition (Eq. (12)), where α and β represent the zenith angle and azimuth angle of the tilted antenna, as shown in Figure 4. According to the statistical analysis, as shown in Figure 17, the phase of the electric field is uniformly distributed between −90° and 90° for both the LOS and NLOS corridors.

A statistical channel model for stochastic antenna inclination angles can then be established, where H_{M_R×N_T} is the M_R × N_T channel matrix, N_T and M_R are the numbers of antennas at the transmitting and receiving sides, respectively, A and Φ represent the amplitude and phase of the channel impulse response, α and β represent the zenith angle and azimuth angle as shown in Figure 4, and j is the square root of −1.
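The fitted equations and the channel-matrix entries are not reproduced above. A minimal reconstruction consistent with the stated coefficients (y_0, A, x_c, w) and variables (A, Φ) is given below; the sine form matches the common Origin-style fitting model and the dB-to-linear conversion is our assumption, so neither should be read as the authors' exact expressions:

```latex
% Sinusoidal fit of the mean amplitude (in dB) versus an inclination angle x
% (zenith or azimuth):
\mu(x) = y_{0} + A\,\sin\!\left(\pi\,\frac{x - x_{c}}{w}\right)

% Entry of the M_R x N_T channel matrix for inclination angles (alpha, beta),
% with A_dB ~ N(mu(alpha, beta), sigma^2) and Phi ~ U(-90 deg, 90 deg):
h = 10^{A_{\mathrm{dB}}/20}\; e^{\,j\Phi}
```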
The process for generating channel coefficients is as follows. First, select the simulation scenario, LOS or NLOS, and randomly generate the inclination angle of the antenna. Based on the obtained inclination angles, the mean value of the amplitude of the channel coefficient for these angles is obtained from (11) and (12). For simplicity, the variance is taken to be a constant, 4.25 dB for the LOS condition or 3.5 dB for the NLOS condition, as shown in Figures 12 and 14. The amplitude of the channel coefficient is then randomly generated according to the statistical distribution, and the phase factor is generated in a similar way. Finally, the channel coefficient is obtained; the flow chart of the process for generating channel coefficients is shown in Figure 18.

Numerical Analysis

As mentioned above, the changing actions of people holding their mobile devices result in random variation of the antenna inclination angles. This is not a static state but can be considered a stochastic process, since it may change frequently even within a very short time. However, very few studies have been performed on the statistical characteristics of the way people hold their mobile devices, and the effect of stochastic antenna inclination angles on received signals has not been taken into account in conventional channel models.

In [22], an interesting study of the way that people naturally hold and interact with their mobile devices was performed, based on 1333 observations of people using mobile devices in different situations at different places in seven cities. The pie chart of the survey results is shown in Figure 19. It shows that talking in voice calls occupies 22.28% of the users, while 18.98% are engaged in passive activities such as listening to audio or watching a video. Furthermore, the most common way of holding a mobile device is one-handed use, accounting for 28.96%, comprising 67% right-handed and 33% left-handed use. A further 21.01% of users cradle their mobile devices in the left hand or right hand, with shares of 79% and 21%, respectively. Only 8.78% corresponds to two-handed use, in two different situations: the first, accounting for 90%, is holding the device vertically, i.e., in portrait mode, and the other is holding it horizontally, i.e., in landscape mode.
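A minimal Python sketch of the generation flow just described (Figure 18), combined with a Monte Carlo evaluation of the ergodic capacity used in the next subsection (Eq. (14)). The sinusoidal-fit coefficients are hypothetical placeholders, since the fitted Eqs. (11) and (12) are not reproduced in the text; the inclination-angle statistics (mean 45.5°, standard deviation 9.44°) follow the survey-based PDF proposed below, and the unit-gain normalization is a design choice of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder coefficients for the sinusoidal mean-amplitude fit (dB);
# the paper's fitted Eqs. (11)-(12) are not reproduced here.
Y0, AMP, XC, W = -14.0, 12.0, 90.0, 90.0

def mean_amplitude_db(alpha_deg):
    """Sinusoidal fit of the mean amplitude (dB) versus the zenith angle (assumed form)."""
    return Y0 + AMP * np.sin(np.pi * (alpha_deg - XC) / W)

def channel_matrix(m_r=4, n_t=4, sigma_db=4.25):
    """One M_R x N_T channel realization with a stochastic inclination angle."""
    alpha = rng.normal(45.5, 9.44)                       # zenith angle (deg), survey-based PDF
    a_db = rng.normal(mean_amplitude_db(alpha), sigma_db, size=(m_r, n_t))
    phi = np.deg2rad(rng.uniform(-90.0, 90.0, size=(m_r, n_t)))
    return 10.0 ** (a_db / 20.0) * np.exp(1j * phi)      # h = A * exp(j * Phi)

def ergodic_capacity(snr_db, trials=2000, m_r=4, n_t=4):
    """Monte Carlo estimate of C = E[log2 det(I + (rho/M_T) H H^H)] in bps/Hz."""
    rho = 10.0 ** (snr_db / 10.0)
    caps = []
    for _ in range(trials):
        h = channel_matrix(m_r, n_t)
        h = h / np.sqrt(np.mean(np.abs(h) ** 2))         # normalize average entry gain to 1
        gram = np.eye(m_r) + (rho / n_t) * (h @ h.conj().T)
        caps.append(np.log2(np.linalg.det(gram).real))
    return float(np.mean(caps))

print(f"ergodic capacity at 10 dB SNR: {ergodic_capacity(10.0):.2f} bps/Hz")
```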
Although the data in [22] were used to evaluate how people actually use their mobile phones, we can empirically associate these data with antenna inclination angles, as shown in Table 1. Consequently, a semiempirical PDF is proposed based on the survey results. According to the statistical results, the antenna inclination angle obeys a Gaussian distribution with a mean of 45.5° and a standard deviation of 9.44°. After obtaining the statistical distribution of antenna inclination angles, the channel coefficients can be generated according to the method introduced in the previous section. In order to evaluate the channel model, taking the MIMO system with M_T = M_R = 4 as an example, the ergodic capacity for MIMO channels can be obtained according to the formula [23]

C = E[ log_2 det( I_{M_R} + (ρ/M_T) H H^H ) ] bps/Hz,  (14)

where I_{M_R} is the M_R × M_R identity matrix, ρ is the system signal-to-noise ratio (SNR), (⋅)^H is the Hermitian transpose, M_R and M_T denote the numbers of receivers and transmitters, respectively, and H is the matrix of the M_R × M_T channel gains. The ergodic channel capacity for the LOS and NLOS conditions is depicted in Figures 20 and 21. The results show that the ergodic capacity increases with increasing average SNR and is larger in the LOS condition than in the NLOS channel at the same SNR. Furthermore, the ergodic channel capacity increases with the number of antennas at the transmitting and receiving sides, and for the same number of antennas, the capacity of the channel model with stochastic antenna inclination angles is smaller than that with the antenna fixed in vertical polarization, due to the effect of polarization mismatch.

Conclusion

A 3D statistical channel model for stochastic antenna inclination angles in a T-shaped corridor environment, based on a modified 3D fast ray-tracing method, has been proposed in this paper. The radiation patterns for arbitrary inclination angles of a half-wave dipole antenna have been deduced and incorporated into the method. Based on the statistical analyses, the relationships between the statistical characteristics of the electric field and the antenna inclination angles have been analyzed for both the LOS and NLOS corridors. Furthermore, a semiempirical probability density function of antenna inclination angles has been proposed and used to describe the stochastic process of people holding their mobile devices. As future work, the channel model can serve as a preliminary attempt; the parameters used in the channel model can be verified and extracted from channel measurements and extended to more scenarios. Combined with existing channel models, a more general, accurate, and low-complexity channel model may be developed and applied to next-generation wireless communication systems.

Figure 1: The three-dimensional layout of the T-shaped corridor.
Figure 2: The demonstration of determining the next tetrahedron.
Figure 3: Flow chart of the 3D fast ray-tracing method.
Figure 4: The demonstration of the two coordinate systems. x-y-z is the global coordinate system, and x′-y′-z′ is the local coordinate system.
Figure 6: The relative received power from C to D in the NLOS corridor (N = 10, M = 8, G_T = 13 dBi, and G_R = 1 dBi).
Figure 9: The probability density function (PDF) of the amplitude of the electric field in the LOS corridor.
Figure 11: The mean value of the amplitude of the electric field for different inclination angles in the LOS corridor (scatter diagram).
Figure 13: The mean value of the amplitude of the electric field for different inclination angles in the NLOS corridor.
Figure 15: The mean value of the amplitude of the electric field for different inclination angles in the LOS and NLOS corridors (β = 0°).
Figure 16: The mean value of the amplitude of the electric field for different inclination angles in the LOS and NLOS corridors (α = 90°).
Figure 17: The statistical distribution of the phase of the electric field for the LOS and NLOS corridors.
Figure 18: The flow chart of the process of generating channel coefficients.
Figure 19: The pie chart of the survey results for people holding their mobile devices.
Table 1: The relationship between the way of holding the mobile devices and the antenna inclination angles.
Impact of local secondary gas addition on the dynamics of self-excited ethylene flames Advanced combustion strategies for gas turbine applications, such as lean burn operation, have been shown to be effective in reducing NOx emissions and increasing fuel efficiency. However, lean burn systems are susceptible to thermo-acoustic instabilities, which can lead to deterioration in engine performance. This paper focuses on one of the common industrial techniques for controlling combustion instabilities, secondary injection, which is the addition of small quantities of secondary gas to the combustor. This approach has often been employed in industry on a trial-and-error basis using the primary fuel gas for secondary injection. Recent advances in fuel-flexible gas turbines offer the possibility of using other gases for secondary injection to mitigate instabilities. This paper explores the effectiveness of using hydrogen for this purpose. The experiments presented in this study were carried out on a laboratory-scale bluff-body combustor consisting of a centrally located conical bluff body. Three different secondary gases, ethylene, hydrogen and nitrogen, were added locally to turbulent imperfectly-premixed ethylene flames. The total calorific value of the fuel mixture and the momentum ratio were kept constant to allow comparison of the flame response. The heat release fluctuations were determined from the OH* chemiluminescence, while the velocity perturbations were estimated from pressure measurements using the two-microphone method. The results showed that hydrogen was the most effective in reducing the magnitude of the self-excited oscillations. Nitrogen had a negligible effect, while ethylene only showed an effect at high secondary flow rates, which resulted in sooty flames. © 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)

Introduction

While lean burn operation in gas turbine combustion is beneficial in reducing emissions of nitrogen oxides (NOx), these systems are susceptible to thermo-acoustic oscillations, commonly known as combustion instabilities [1-5]. These instabilities are caused by unsteady combustion, which can alter the heat release rate; the resulting disturbances can even travel upstream in the combustor and interfere with the air/fuel mixing process [6]. These instabilities are problematic as they result in severe vibrations of the combustor walls, excessive heat transfer, flame blow-off, unacceptable noise levels and component wear, potentially leading to a complete failure of the system [7-11]. Studying the interactions between flames and large-scale coherent structures (vortices), caused by hydrodynamic instabilities or acoustic disturbances, is crucial to improving our understanding of the fundamental mechanisms in turbulent combustion and combustion instabilities. Flame-vortex interactions can be used to analyse phenomena relating to the structure of the flame when it is wound up by a vortex (also known as flame roll-up), the formation of a central core, secondary vorticity generation, quenching of the reaction zone, ignition dynamics, mixing and combustion enhancement [12-15]. In order to determine correlations between flame and flow dynamics, flame-vortex interactions have been studied in several different configurations: counterflow diffusion flames [16,17], turbulent swirl flames [18-22], propagating flat flames [23,24] and turbulent jet flames [25,26].
Optical diagnostic techniques, such as high-repetition-rate PIV (particle image velocimetry) and PLIF (planar laser-induced fluorescence), have recently been used to study transient phenomena caused by vortex structures, such as lean blowout [27,28], thermoacoustic oscillations [29,30], local extinction [29,31,32] and flashback [30,33]. Non-premixed flames are particularly sensitive to roll-up, as their location depends on the transport processes in the diffusive layers [34]. If a flame (premixed or non-premixed) is weak enough and the vortex residence time in the flame is long enough, the flame can be considerably lengthened and rolled up, which leads to an increase in global heat release and could cause local flame extinction due to excessive strain and curvature induced by the vortex [12,13,35]. Combustion oscillations are amplified through a feedback loop between the combustion processes and the acoustic oscillations. Hence, in order to dampen these oscillations, the coupling between the oscillatory heat release and the acoustic perturbations needs to be disrupted, which alters the phase relation between the two sets of oscillations and leads to a decay in their amplitudes [1,2,36]. Combustion instability can be reduced through passive and active control methods. Passive control methods, as outlined in [3], involve modifications to the hardware, such as modifying the fuel-air feed channel length [37], changing the bluff body shape [38] or altering the combustor geometry [39,40], and have been shown to be quite effective. However, these methods are only effective over a limited range of frequencies and operating conditions, and the modifications can be quite costly and time-consuming. Active control methods, on the other hand, provide a more dynamic approach to controlling oscillations by introducing an actuator that provides a closed-loop response to sensors placed in the combustion chamber. The actuator dynamically modifies parameters in response to the measured signals, resulting in a more effective approach to decoupling the acoustic and heat release oscillations. One common approach to active control is the dynamic introduction of fuel close to the base of the flame, also known as secondary fuel addition or pilot injection, to create a localised richer zone. This changes the local equivalence ratio, and the resulting variation in flame dynamics alters the phase relation between the heat release and the acoustic pressure modulation. Several experimental [41-50] and numerical [51-54] studies have been conducted to understand the impact of secondary fuel addition on combustion oscillations. Emris and Whitelaw [44] investigated low-frequency oscillations in turbulent natural gas flames and observed that redirecting a small quantity of fuel from the main flow to a secondary injector, placed close to the combustor entrance, reduced the pressure oscillations and improved the overall stability of the flame. However, this improvement was only observed within a narrow range of operating conditions. Marzouk et al. [52] numerically showed that the addition of methane (as secondary fuel) to methane-air flames had a significant impact on the combustion process, including reaction zone broadening, burning rate enhancement, and extension of the flammability limit towards leaner mixtures. The spatial gradients of radical concentration and temperature produced by the unsteady equivalence ratio mixtures provided the flame with the ability to burn at leaner mixtures. In the work carried out by Albrecht et al.
[46], secondary fuel injectors placed at the flame base and at the combustor dump plane were used to locally introduce fuel and vary the overall equivalence ratio. The authors reported a reduction in pressure oscillations and NOx emissions, and concluded that this was mainly achieved by the high jet momentum of the secondary injectors. In addition, the combined fuel injections prevented lift-off of the main flame, which appeared to be another damping source for the pressure perturbations. The studies presented above utilise the same fuel for both primary and secondary injection. The use of alternative gases as secondary fuel to curtail combustion oscillations has received considerable attention in the recent past. Hydrogen (H2) is one such potential fuel; blends of hydrocarbon fuels and hydrogen have been reported to significantly improve flame stability and ignitability [48,55-59]. The addition of H2 increases the lean limit of operation, reducing the risk of flame blow-off, which is a common problem in lean premixed combustors [38,60-63]. Recent works [56,64,65] have also reported considerable reductions in heat release perturbations as a result of H2 addition, attributable to the higher burning velocity of H2. However, findings from Wicksall and Agrawal [66] demonstrated that their premixed flame exhibited strong instabilities with H2 addition. Chen [67] also concluded that the addition of H2 to methane-air flames could destabilise the flame. Hydrogen is also preferred as a fuel because it does not produce carbon emissions; however, it does increase flame temperatures, which could augment NOx production. H2 addition results in higher reaction rates, which expand the reaction zones and make the recirculation zones smaller. This reduces the availability of the relatively cooler gases, leading to an increase in the global temperature and thus enhancing the production of NOx [57,64,65,68-70]. However, several studies have reported appreciable reductions in NOx emissions with H2 addition [45,66,71]; this was speculated to be because the addition of H2 allows the flame to burn at ultra-lean conditions, hence significantly reducing the global flame temperatures and curtailing NOx production. While hydrogen enrichment does have a promising role in the development of future low-emission combustion technologies, it can be seen from the review of the literature that researchers who have investigated H2 addition within the context of combustion instability and NOx reduction have reported contradictory results. In addition, the effect of H2 on combustion oscillations is not very well understood. Recent advances in fuel-flexible gas turbines offer the possibility of using other gases for secondary injection to mitigate instabilities. This work investigates the effect of the local addition of various gases (fuels and a diluent) to turbulent imperfectly-premixed self-excited ethylene flames. The specific objectives of this work are to study the effectiveness of the following methods in reducing combustion instabilities: (i) fuel-splitting, that is, supplying part of the primary fuel via the secondary fuel ports, (ii) local H2 addition and (iii) local addition of N2 (diluent). The following sections of the paper provide details of the experimental hardware and testing methodology employed, followed by results and discussion on the effect of the local addition of the three gases, and a summary of the key findings from this study.
Combustor

A bluff-body-stabilised combustor, the design of which is based on Ref. [72], was employed for this study. The schematic of the combustor is shown in Fig. 1(a). The plenum had an internal diameter (ID) of 100 mm and a total length of 300 mm, with divergent and convergent cross-sections at the inlet and exit of the plenum, which prevented any flow separation during the expansion and contraction of the gas. The flow was streamlined using a section of honeycomb mesh placed as a flow straightener. The plenum delivers the mixture to the combustion chamber via a long duct with an ID of 35 mm and a length of 400 mm; this enables the acoustic pressure measurements for the two-microphone method. The pressure fluctuations in the flow at the inlet to the combustion chamber were measured using two high-sensitivity pressure transducers (KuLite model XCS-093, sensitivity 4.2857 × 10^-3 mV/Pa). The signals from the transducers were amplified using a Flyde Micro Analogue amplifier and acquired onto a PC using National Instruments data acquisition systems. The bluff body used for this study was conical in shape (Fig. 1(b)), with a diameter of 25 mm, resulting in a blockage ratio of 50%. As shown in Fig. 1(b), the bluff body had two provisions for introducing fuel into the air stream: either via the primary fuel ports (six holes, each of diameter 0.25 mm, placed circumferentially on the main fuel pipe) or via the secondary injection ports, which are six 2 mm ports placed 2 mm below the burner. This configuration allows for three different modes of operation: (1) fully premixed mode, in which fuel and air are mixed completely before entering the combustor; (2) imperfectly-premixed mode, in which fuel is introduced a short distance upstream of the flame (via the primary fuel ports) to achieve partial premixing; and (3) local addition (from the secondary fuel ports) combined with either of the above modes (see Fig. 1(b)). Two long concentric tubes were welded to the bluff body, which served as fuel supply lines for the two sets of injection ports. In this study, due to the possibility of flame flashback during self-excitation under fully premixed conditions, only imperfectly-premixed flames were considered. The combustion zone of the combustor was enclosed using a combination of quartz and stainless steel cylinders of internal diameter 70 mm and a combined length of 300 mm. The bottom section of the enclosure, of length 100 mm, was made of quartz for optical diagnostics, while the rest was stainless steel. The enclosure prevented local equivalence ratio fluctuations due to air entrainment from the surroundings. The air was delivered from a central compressor while the other gases were supplied from compressed gas cylinders. The flow rates of all the gases were controlled using digital mass flow meters (Red-Y Smart), which have an accuracy of ±1.5% of the full-scale reading.

Optical diagnostic arrangement

The layout of the optical diagnostic facility is shown in Fig. 2. High-speed light sheet tomography (LST), or simply laser tomography, was used to identify the boundary between the reactants and the burnt products and extract a 2-D flame contour, as described in [73]. The tomography setup included a Pegasus laser (532 nm, maximum frequency 10,000 Hz) and a Photron high-speed camera (maximum rate 20,000 frames per second, fps). The laser beam was expanded into a laser sheet to illuminate a region of 12.5 × 12.5 mm, capturing the flame between 2.5 and 15 mm above the base of the combustor.
The flow was seeded with fine olive oil droplets and the Mie scattering was captured using the high-speed camera at 3000 fps. The discrimination between the Mie scattering intensity in the unburnt region and the virtually absent signal in the burnt region allowed identification of the boundary between the unburnt reactants and the burnt combustion products. The flame contours evaluated with this method were used to produce time-series snapshots of the evolution of the flame length (which is indicative of the flame surface area). OH* chemiluminescence was measured using a Hamamatsu side-on photo-multiplier tube (PMT) fitted with an interference filter with a centre wavelength of 307 nm and a bandpass of ±10 nm. A plano-convex lens was also mounted in front of the PMT, which focused the entire flame on the PMT collection window. Data were recorded at a sample rate of 10,000 Hz over a period of 2 seconds using National Instruments data acquisition software. A DSLR camera was also used to capture photographic images of the flames to show differences in their general appearance. The DSLR camera was set at a fixed position, aperture size, shutter speed, ISO and white balance to allow comparison between flame images for different flow conditions.

Flow conditions

The following procedure was used to achieve self-excitation of the imperfectly-premixed ethylene flames. The flow rate of air was fixed at 250 slpm, and the air-fuel mixture was ignited at a global equivalence ratio at which the flame is not acoustically self-excited, that is, no dominant frequency is observed in the power spectrum (note that the global equivalence ratio is based on the air and fuel flow rates and does not take into account the extent of premixedness of the air/fuel mixture). The flow rate of ethylene was then increased until the amplitude of the pressure oscillations exhibited a significant increase and a clear dominant frequency was observed in the power spectrum. In the current work, self-excitation was observed at a global equivalence ratio, φ_Global, of 0.812. As described before, this work investigates the effect of the local addition of gases in order to control the acoustic oscillations. Table 1 shows the flow conditions used for the local addition of the three secondary gases, (a) ethylene, (b) hydrogen (H2) and (c) nitrogen (N2), to imperfectly-premixed ethylene flames. The three gases were introduced through the secondary fuel ports of the bluff body (see Fig. 1(b)). Ethylene was added to investigate whether fuel stratification could help reduce combustion oscillations. Hydrogen and nitrogen were added to understand the effect of diluents, both reactive (H2) and inert (N2), on the heat release fluctuations. Several parameters were kept constant in order to enable comparison between the three secondary gases. The air flow rate was fixed at 250 slpm. The input power, calculated from the calorific value of the fuel mixture, was kept constant at 13.11 kW. The momentum ratio, defined by Eqn. (1) as the ratio of the momentum fluxes of the secondary jet and the primary flow, MR = (ρ_sec u_sec^2)/(ρ_pri u_pri^2), where sec denotes the secondary gas added and pri the primary mixture of ethylene and air, was kept constant between the three secondary gases for each flow condition, in order to maintain the same level of penetration of each locally added gas (ethylene, hydrogen and nitrogen) into the primary ethylene-air flow. This was done by adjusting the volume flow rates of hydrogen and nitrogen so that they matched the momentum ratio of the locally added ethylene, as can be seen in Table 1.
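As an illustration of how the matched flow rates in Table 1 can be derived, the short sketch below estimates the total ethylene flow implied by the stated 13.11 kW input power and the secondary H2 or N2 flow that reproduces the port momentum of a given secondary ethylene flow. The momentum-flux form of Eqn. (1) and the gas properties are assumptions of this sketch (standard handbook values near room temperature), not values quoted by the authors:

```python
import math

# Approximate gas properties near 25 C (handbook values; assumptions of this sketch).
RHO = {"C2H4": 1.18, "H2": 0.082, "N2": 1.15}   # density, kg/m^3
LHV_C2H4 = 47.2e6                                # lower heating value of ethylene, J/kg

# Total ethylene flow implied by the stated 13.11 kW input power:
# Q [m^3/s] = P / (LHV * rho), converted to standard litres per minute.
q_c2h4_total_slpm = 13.11e3 / (LHV_C2H4 * RHO["C2H4"]) * 1000.0 * 60.0
print(f"total ethylene for 13.11 kW: {q_c2h4_total_slpm:.1f} slpm")   # ~14 slpm

def matched_secondary_flow(q_c2h4_sec_slpm, gas):
    """Secondary flow of `gas` (slpm) giving the same port momentum flux
    (rho * u^2, with u = Q/A and identical ports) as the given secondary
    ethylene flow, i.e. rho_gas * Q_gas^2 = rho_C2H4 * Q_C2H4^2."""
    return q_c2h4_sec_slpm * math.sqrt(RHO["C2H4"] / RHO[gas])

for gas in ("H2", "N2"):
    print(f"{gas}: {matched_secondary_flow(0.5, gas):.2f} slpm "
          f"matches 0.50 slpm of secondary ethylene")
```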
Data analysis

Global heat release fluctuation

Both the pressure and the OH* chemiluminescence signals had a cyclic response under the dominant acoustic excitation. The pressure transducer signals were analysed using the two-microphone technique to calculate the velocity fluctuation, u′, normalised by the bulk velocity, ⟨U⟩, at the inlet to the combustion chamber, as detailed in [72,74]. The flame describing function, FDF (also known as the non-linear flame transfer function, NFTF), was determined from the ratio of the heat release fluctuations to the velocity perturbations, FDF(f) = [Q′(f)/⟨Q⟩]/[u′(f)/⟨U⟩]. The normalised values of ⟨OH*⟩ were used to represent the heat release fluctuation, Q′(f)/⟨Q⟩.

Image processing

Flame images were captured using the high-speed tomography setup for a duration of 100 ms. The flame contour was obtained by post-processing the images through various steps of filtration, binarization and tracing of the flame boundary, to obtain a single-pixel-thick flame front. The flame contour length (FL) was taken to be indicative of the flame surface area. Fig. 3 shows the region of the flame captured with the laser tomography technique, which was 12.5 mm wide and 12.5 mm high (from 2.5 mm to 15 mm downstream of the base of the combustor). Similar to the OH* chemiluminescence, the flame contour length was also put through an FFT and the power spectral density (PSD) was calculated, which was then normalised with the averaged flame length ⟨FL⟩ to obtain FL′(f)/⟨FL⟩.

Table 1: Experimental flow conditions for the local addition of (a) ethylene, (b) hydrogen and (c) nitrogen to imperfectly-premixed turbulent ethylene flames.

Fig. 4 shows the power spectral density (PSD) plot, generated by applying a fast Fourier transform (FFT) to the OH* chemiluminescence signal, for self-excited imperfectly-premixed ethylene flames with no local addition, that is, the base test case (see Table 1). The PSD plot shows a sharp peak at 338 Hz, clearly demonstrating that this is the dominant self-excitation frequency. Fig. 5(a) shows the time-series plot of the flame surface area, while Fig. 5(b) shows the power spectrum obtained from the time-series data to determine the frequency at which the flame surface area fluctuations were taking place. Significant fluctuations in the flame surface area, with peak-to-peak values of about 1-1.5, close to the base of the combustor can be observed in Fig. 5(a), which was observed to cause roll-up of the flame front. The PSD plot (Fig. 5(b)) shows that the frequency of the flame surface area fluctuations is 340 Hz, which is very close (barring experimental inaccuracies) to the dominant frequency of oscillations from the OH* signal (Fig. 4). Hence, this confirms that the heat release oscillations can be captured by the flame surface modulation.

Self-excited ethylene combustion

The time-series of images in Fig. 6 depicts the flame-vortex interactions, particularly the evolution of the coherent structures during self-excited oscillations, for imperfectly-premixed ethylene flames without any local addition of secondary gases. The instances of the flame were captured every 0.33 ms, and the images show periodic formation, and subsequent destruction, of flame vortices. It is clear from the images that the appearance of coherent structures corresponded with the least amount of flame element, and it is likely that this point would have coincided with the trough of the flame contour length (Fig. 5(a)).
The coherent structures continue to convect upstream, which would potentially contribute to an increase in the heat release oscillations, followed by complete disintegration of the flame structures.

Effect of fuel-splitting on the dynamics of self-excited ethylene flames

The effectiveness of fuel-splitting, that is, splitting off part of the primary fuel and adding it locally to turbulent imperfectly-premixed ethylene flames, in reducing combustion oscillations was investigated. For this set of tests, the overall equivalence ratio was kept the same, while the fuel flow rates in the two fuel streams (via the primary and secondary ports) were varied (see Table 1). Fig. 7 presents photographs of the flame with increasing percentages of locally added ethylene as a secondary fuel. It can be observed in Fig. 7 that as the amount of secondary ethylene is increased, the flames become increasingly yellow and sooty. This is most likely due to poor mixing of the locally added ethylene with the primary-flow air, creating a locally rich mixture and resulting in unburnt hydrocarbons and the consequent high soot formation. The observed soot formation could result in inaccurate estimates of the heat release from the OH* chemiluminescence readings of the PMT (due to broad black-body radiation from soot). Furthermore, these soot-rich areas caused bright spots in the tomography imaging. Hence, for the local addition of ethylene, only changes in the velocity perturbations were determined from the pressure measurements, and both the normalised pressure and velocity oscillations are shown in Fig. 8. It can be seen in Fig. 8 that when 1.4% of ethylene (by volume) is removed from the primary flow and injected locally through the secondary ports, the pressure and velocity perturbations increase. However, subsequent addition of ethylene through the secondary ports (and removal from the primary) results in considerable reductions in the velocity oscillations. The initial increase in the perturbations could be attributed to the shift of the local equivalence ratios towards stoichiometric values as a result of the local addition of small amounts of ethylene (up to 1.4%), which would increase the local premixedness and hence contribute to an increase in the heat release. Further addition of ethylene would have made the local equivalence ratios rich (as evidenced by the appearance of soot in the flame photographs), which would result in decreased flame temperatures and a reduction in the magnitude of the oscillations.

Local addition of hydrogen to self-excited ethylene flames

The results of the tests carried out to understand the effectiveness of local hydrogen addition for combustion instability control are presented in this section. H2 was added via the secondary fuel ports during the self-excited oscillations of imperfectly-premixed ethylene flames. The effects of H2 addition on the pressure, velocity and heat release perturbations, and on the flame describing function, are shown in Fig. 9. It can be observed from the figure that the addition of up to 10% H2 results in a significant reduction in the normalised velocity oscillations (Fig. 9(b)); however, subsequent H2 addition does not show any appreciable change. A similar trend is observed for the heat release response, that is, a reduction of almost 60% when 10% H2 is added, with no change observed beyond 10% H2 addition (Fig. 9(c)).
The flame describing function (Fig. 9(d)), determined from the velocity and heat release fluctuation data, exhibits a steady reduction with H2 addition. This reduction in the pressure perturbations is consistent with the findings reported by Barbosa et al. [64], who investigated the local addition of H2 as an instability control technique. In order to explain this reduction, the authors [64] captured phase-locked images of OH* and CH*, which showed that the hydrogen jet breaks the coupling between the acoustic and heat release oscillations. The power spectra calculated from the global OH* chemiluminescence and from the flame length estimates are shown in Fig. 10(a) and Fig. 10(b), respectively. The magnitudes of the PSD based on the OH* chemiluminescence signal and on the flame contour length (FL) show a significant reduction in the peak at the dominant frequency when the H2 addition is increased up to 10%. Beyond 10% H2 addition, both PSD plots exhibit minimal changes with increasing H2 addition. Both these observations correlate well with the trends in pressure and velocity perturbations observed in Fig. 9. Considering the time-based evolution of the flame boundary images shown in Fig. 11 for 5% and 20% H2 addition, it is apparent that an increase in the local addition of H2 leads to an overall increase in the flame contour length. Even at the lowest point of the heat release, the flame elements disintegrate to a lesser degree with increasing H2 addition. It could be speculated that the reduction in the velocity and heat release perturbations with H2 addition saturates because the flame roll-up becomes a self-limiting factor, with no further flame destruction possible with increasing H2 addition. This can be observed in Fig. 11(c), which shows instantaneous images of the flame boundary with different levels of H2 addition for self-excited ethylene flames. Due to the flame-vortex interaction in the inner recirculation zone, a large part of the flame gets annihilated (as shown by the regions circled in red), and as H2 is added beyond 20% the vortex is not able to cause any further flame roll-up.

Addition of nitrogen to self-excited ethylene flames

The local addition of nitrogen (N2), via the secondary fuel ports, to self-excited ethylene flames was carried out as a control. N2 is inert and does not contribute energy to the combustion process, thereby acting primarily as a diluent. It is clearly evident that the flame boundary images shown in Fig. 12 for 9.7% N2 addition are largely similar to those for pure ethylene flames (Fig. 6). Fig. 13 compares the time-series plots of the flame length for pure ethylene and ethylene with 9.7% nitrogen, and only an insignificant reduction in the peaks and troughs of the surface area can be observed with N2 addition, which shows that the added N2 had a negligible impact on the flame roll-up occurrence in the self-excited flames. Fig. 14 shows the normalised velocity and heat release oscillations for up to 9.7% (by volume) N2 addition, while the power spectra calculated from the global OH* chemiluminescence and the flame length estimates are shown in Fig. 15(a) and Fig. 15(b), respectively. No effect of N2 addition can be observed in Fig. 14 and Fig. 15, except for a slight increase in the perturbations at 1.4% N2 addition. Although it is known that the addition of N2 reduces the flame temperature, it is speculated that the level of N2 added in this work was too low to have any appreciable impact.
The observations with N2 also confirm that the effects observed with ethylene and H2 addition are primarily due to these gases taking part in the combustion reaction, and not to their acting as diluents in the combustion mixture.

Conclusions

This study investigated the effect of the local addition of three secondary gases, ethylene, hydrogen and nitrogen, to imperfectly-premixed self-excited ethylene flames. The secondary gas flow rates were adjusted to keep the momentum ratio between the secondary gas and the primary ethylene/air fuel mixture constant and thus match the level of penetration each secondary gas had into the main (primary) flow. In addition, the combined calorific value of the combustible mixture (primary and secondary) was kept constant by reducing the main ethylene flow as the secondary gas flow was increased. The self-excited ethylene flames experienced recurring oscillations of pressure, velocity and heat release, accompanied by a periodic formation and destruction of the flame roll-up in the region close to the base of the combustor. The addition of H2 reduced the pressure, velocity and heat release perturbations, and decreased the initial size of the flame roll-up. However, these effects were only observed for up to 10% H2 addition; subsequent H2 addition (of up to 30%) did not exhibit any appreciable changes beyond that level. The local addition of ethylene (to observe the effects of fuel-splitting), of up to 2.9%, resulted in an increase in the normalised pressure and velocity oscillations. Higher levels of ethylene addition (beyond 2.9%) reduced the magnitude of the oscillations; however, the flame became increasingly yellow, indicating the presence of soot and unburnt hydrocarbons, and hence lower combustion efficiency. No noticeable change in the velocity and heat release perturbations, or in the flame surface area fluctuations, was observed with the addition of nitrogen, leading to the conclusion that the thermal dilution effect of N2 was not significant enough to alter the dynamic flame response. Hence, it can be concluded from this work that the local addition of H2 is the most effective in reducing the degree of perturbation of self-excited ethylene flames, which is attributed to the disruption of the coupling between the heat release and acoustic oscillations. For the operating parameters tested in this work, the local addition of gaseous fuel (ethylene) and inert gas (nitrogen) was not observed to be as effective in controlling combustion instability. This approach of secondary addition of hydrogen is not only able to mitigate combustion instabilities but also has the potential to keep emissions low (through lean burn operation) and achieve higher efficiency.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Enhanced Optical and Electronic Properties of Silicon Nanosheets by Phosphorus Doping Passivation In this paper, we use the spin-on-dopant technique for phosphorus doping to improve the photoelectric properties of soft-chemical-prepared silicon nanosheets. It was found that the luminescence intensity and luminescence lifetime of the doped samples were approximately 4 fold those of the undoped samples, due to passivation of the surface defects by phosphorus doping. Meanwhile, phosphorus doping combined with high-temperature heat treatment can reduce the resistivity of multilayer silicon nanosheets by 6 fold compared with that of the as-prepared samples. In conclusion, our work brings soft-chemical-prepared silicon nanosheets one step closer to practical application in the field of optoelectronics.

Introduction

Since the discovery of graphene in 2004, two-dimensional (2D) nanomaterials have received increasing attention, including single-element 2D materials such as graphene and phosphorene and metal dichalcogenides represented by MoS2 [1-4]. The special 2D structure with the quantum confinement effect provides unusual photonic, magnetic, catalytic, and electronic properties, conferring on these materials outstanding performance in sensors, ferroelectricity, catalysis, field-effect transistors, batteries, supercapacitors, and thermoelectricity. As a group IVA element material, two-dimensional silicon (the silicon nanosheet) is expected to have superior properties similar to graphene, while also preserving the advantages of silicon, which is the most important material in modern life. Therefore, two-dimensional silicon may have broad application prospects in the fields of electricity, optics, chemistry and even biology [5-10]. At present, the main preparation methods for silicon nanosheets are chemical vapor deposition (CVD), molecular beam epitaxy (MBE) and the soft chemical method. The CVD and MBE methods follow the "bottom-up" growth mode, which can accurately regulate the thickness of the silicon nanosheets by controlling the growth time or the growth rate, resulting in high-quality silicon nanosheet crystals [11,12]. Because these two methods adopt a high-vacuum or hydrogen atmosphere as the growth environment, the surfaces of the prepared silicon nanosheets are mainly composed of dangling bonds and Si-H bonds. However, the need for a large flux of hydrogen as a carrier gas, the low growth efficiency, expensive growth equipment and other deficiencies make these two methods unsuitable for practical production. In contrast, the soft chemical method has the advantages of a simple process and easy mass production. The soft chemical method is a "top-down" preparation method; the chemical reaction inevitably damages the crystal structure of the original material, so the silicon nanosheets obtained by this method have poorer crystal quality than the former. In order to weaken the interaction between the silicon layers in the original material and enhance their stability in solution or air, silicon nanosheets prepared by the soft chemical method usually require the introduction of surface modifiers by attaching a large number of organic groups to the surface [13,14]. Based on their two-dimensional crystal structure with low defect density, the silicon nanosheets prepared by the CVD and MBE methods show excellent photoelectric performance.
The silicon nanosheets prepared by the CVD method have achieved tunable luminescence in the UV-VIS band, and on this basis a white-light organic light-emitting diode (OLED) has been obtained [15]. Single-layer silicon nanosheets prepared by the MBE method have a graphene-like band structure and exhibit bipolar electrical characteristics under the regulation of a gate voltage, with a resistivity of approximately 2.04 × 10^-3 Ω·cm and a carrier mobility of approximately 100 cm^2 V^-1 s^-1 at room temperature [16]. Silicon nanosheets prepared by the soft chemical method also show the quantum confinement effect and photoluminescence [17,18], but the uncontrollability of the thickness and the adverse effect of crystal defects on the luminescence properties limit the application of soft-chemical-prepared silicon nanosheets in light-emitting devices. In addition, the surface defects of silicon nanosheets prepared by the soft chemical method capture the carriers, and the surface modifiers impede the transmission of electrons between the nanosheets, resulting in poor conductivity, with an average resistivity of 7.24 Ω·cm and a carrier mobility of 36 cm^2 V^-1 s^-1 [19,20]. Referring to the relevant studies on the doping of silicon nanocrystals, we find that phosphorus doping can passivate the dangling bonds on the surface of silicon nanomaterials, thus reducing the non-radiative recombination channels and carrier traps [21-23]. Therefore, we believe that phosphorus doping of silicon nanosheets fabricated by the soft chemical method will pave the way to improving their photoelectric properties. In this paper, the effect of phosphorus doping on the photoelectric properties of silicon nanosheets was investigated. It is shown that phosphorus doping can effectively overcome the problem of the high defect density on the surface of soft-chemical-prepared silicon nanosheets, and significantly improve the luminescence intensity, luminescence lifetime and electrical conductivity of the material, outperforming the traditional surface modification method using organic modifiers.

Materials and Methods

We selected the Zintl compound CaSi2 (Ca ≈ 30%, Alfa Aesar, Haverhill, MA, USA), composed of alternating layers of calcium ions and silicon ions, as the original reactant and CuCl2 (98%, Shanghai Macklin Biochemical Co., Ltd., Shanghai, China) as the oxidant to produce silicon nanosheets by a de-intercalation reaction in a water bath at 40 °C. CuCl2 oxidizes the negatively charged silicon ions in CaSi2, breaking the ionic bonds within the compound, allowing the calcium ions between the silicon layers to desorb and enter the solvent, and leaving only the layered silicon skeleton. Then, ethanol is added for ultrasonic treatment, and the silicon particles with weak interlayer interaction are separated into ultra-thin silicon nanosheets by the mechanical force. Finally, the remaining reactants or insufficiently exfoliated silicon particles in the samples were removed by low-speed centrifugation, and the supernatant, which contained the ultra-thin silicon nanosheets, was collected for the subsequent experiments.
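A plausible balanced form of the de-intercalation redox reaction described above, consistent with the CuCl byproduct reported in the XRD results below; the exact stoichiometry (including any partial reduction to the metallic Cu particles mentioned later) is our assumption:

```latex
\mathrm{CaSi_{2} + 2\,CuCl_{2} \;\longrightarrow\; 2\,Si_{(layered)} + CaCl_{2} + 2\,CuCl}
```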
Phosphorus doping was realized by the spin-on-dopant method. First, the silicon nanosheets stored in anhydrous ethanol were coated on a SiO2/Si substrate (the thickness of the SiO2 is 300 nm) and dried at 60 °C, and the liquid phosphorus dopant (P509, Filmtronics, Butler, PA, USA) was coated on a silicon substrate and dried at 200 °C for approximately 30 min. Then, the two substrates, placed opposite each other with an interval of 400 µm, were heated in an annealing furnace under vacuum to 925-1025 °C for 5-10 min. Finally, the cooled doped samples were rinsed quickly in 1% hydrofluoric acid solution to remove the surface oxide layer and residual doping sources. The undoped samples used as references underwent the same heat treatment process on a SiO2/Si substrate.

Results and Discussion

Figure 1 shows the X-ray diffraction (XRD) patterns of the original reactant CaSi2, the layered silicon nanosheets after CuCl2 oxidization and the dispersive silicon nanosheets after ultrasonic treatment. The CaSi2 (red line) we used is a mixture of three crystalline phases, with the PDF numbers shown in Figure 1. The diffraction peak at approximately 28° demonstrates that each layer of Si atoms in CaSi2 is arranged in the same way as the (111) crystal plane of crystalline silicon. As can be seen from the XRD pattern of the layered silicon nanosheets (blue line), the reaction product corresponds to cubic-phase crystalline silicon. Though there is a diffraction peak of the residual byproduct CuCl, no sign of the original reactant CaSi2 (red line) is found in the pattern. This proves that Ca2+ can be effectively removed through the redox reaction of CaSi2 and CuCl2, and that the silicon skeleton in CaSi2 is preserved. Only one diffraction peak can be observed in the XRD pattern of the dispersive silicon nanosheets (yellow line in Figure 1), indicating that they no longer have the periodicity of crystalline silicon in the other directions and reflecting their quasi-two-dimensional character. The diffraction peak corresponds to the (111) crystal plane, illustrating that the procedure amounts to inserting copper particles between adjacent silicon layers of CaSi2 to weaken the interaction, making them easy to peel off in the perpendicular direction under an external force. Compared with the layered silicon nanosheets, partial amorphization occurs in the dispersive ones, weakening the intensity of the diffraction peak. Figure 2a shows the scanning electron microscope (SEM) image of the layered silicon nanosheets obtained by the CaSi2 de-intercalation reaction. It can be seen from the image that there are many interspaces between the layers, which are formed by the insertion and desorption of Cu particles [24]. As we know, due to thermodynamic instability, very thin lamellae tend to form folds and undulations to release energy; therefore, the silicon nanosheets present as wavy lamellae, as shown in Figure 2a. Comparing this with the SEM image of the doped sample (Figure 2b), it can be seen that there is no obvious difference in morphology between them. The doped nanosheets remain discrete, without re-bonding during the heat treatment process. A clear peak of the P element can be observed in the energy dispersive spectroscopy (EDS) spectrum of the doped samples (inset of Figure 2b), and the weight percentage of the P element is roughly 4.5%. Transmission electron microscope (TEM) characterization of the silicon nanosheets obtained by ultrasonic exfoliation showed that the samples were uniform in thickness and smooth at the surface. As can be seen from Figure 2c,d, the size of the nanosheets varies, with side lengths ranging from 1 to 8 µm. By contrast with the broken particles in the image, the thickness of the silicon nanosheets can be roughly determined to be less than 10 nm.
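As a consistency check, the (111) peak near 2θ ≈ 28° implies a lattice spacing in agreement with the 0.312 nm HRTEM fringe spacing reported below, assuming Cu Kα radiation (λ = 1.5406 Å), which the text does not state explicitly:

```latex
d_{111} \;=\; \frac{\lambda}{2\sin\theta}
        \;=\; \frac{1.5406\ \text{\AA}}{2\sin 14.2^{\circ}}
        \;\approx\; 3.14\ \text{\AA} \;\approx\; 0.31\ \mathrm{nm}
```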
The typical diffraction spots of cubic silicon can be found in the indexed selected area electron diffraction (SAED) pattern of the multilayer silicon nanosheets (inset at the upper-right corner of Figure 2c), corresponding to the XRD peaks of the layered silicon nanosheets (blue line in Figure 1). The high-resolution transmission electron microscope (HRTEM) image (inset at the bottom left of Figure 2c) proves that these ultra-thin silicon nanosheets still retain clear lattice fringes with a lattice spacing of 0.312 nm, corresponding to the (111) crystal plane of cubic silicon, which is consistent with the XRD diffraction peak of the dispersive silicon nanosheets (yellow line in Figure 1). Similar to the XRD results, some amorphous regions, especially at the edges, appear in the dispersed silicon nanosheets. This is because nanosheets less than 10 nm thick are sensitive to oxygen in the air, deionized water and the oxygen free radicals generated by high-speed electron beams, and are easily oxidized to silicon oxide [25]. According to research results on silicon nanocrystals, because of the similar atomic sizes of phosphorus and silicon, phosphorus doping does not cause significant changes in the crystal structure of silicon nanocrystals, whether the phosphorus is distributed on the surface or enters the interior to form substitutional phosphorus [26]. As a result, there is no significant difference between the doped and undoped silicon nanosheets in the structural characterization by TEM (Figure 2d), HRTEM (inset of Figure 2d), XRD and Raman spectroscopy (not shown here). Figure 3 shows the steady-state and transient photoluminescence (PL) spectra of the undoped and doped silicon nanosheets. The luminescence peak of the undoped sample is at approximately 480 nm. According to the luminescence mechanisms of silicon nanocrystals reported in the literature, the 480 nm luminescence peak may come from the indirect band gap recombination of silicon or from surface state recombination. According to the luminescence model of porous silicon proposed by Li et al. [27], luminescence with a microsecond lifetime comes from the silicon particle core, while luminescence with a nanosecond lifetime comes from the surface oxide layer. It can be seen from Figure 3b that the lifetime of the undoped sample conforms to the lifetime scale of indirect band gap recombination. The PL spectrum of the doped sample also shows the luminescence peak at 480 nm, derived from the same radiative recombination process. However, the luminescence intensity of the doped sample at this wavelength is approximately 4 fold that of the undoped sample. This is because the introduction of phosphorus impurities reduces the surface defect states and inhibits the non-radiative recombination processes in the silicon nanosheets. The surface passivation effect of the phosphorus impurities is also reflected in the lifetime. In addition, a new luminescence peak appears in the doped sample at approximately 440 nm. By comparing the PL spectrum of the doped silicon nanosheets with the PL data in the literature [17,28], we believe that this luminescence peak derives from the quasi-direct band gap transition of silicon nanosheets with a thickness close to that of a single layer. Without the doping treatment, these ultra-thin silicon nanosheets are unstable in air and easily oxidized, resulting in the absence of the 440 nm luminescence.
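For reference, converting the two PL peak wavelengths to photon energies shows that both emissions lie well above the roughly 1.12 eV indirect gap of bulk silicon, consistent with the quantum confinement picture invoked above:

```latex
E(\lambda) = \frac{1239.84\ \mathrm{eV\cdot nm}}{\lambda}
\;\;\Rightarrow\;\;
E(480\ \mathrm{nm}) \approx 2.58\ \mathrm{eV},
\qquad
E(440\ \mathrm{nm}) \approx 2.82\ \mathrm{eV}
```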
It can be seen from the figure that the number and position of the peaks in the two spectra are basically the same. The identification of these peaks shows that the main elements in the samples are silicon and oxygen, and no other elements remain. Semi-quantitative analysis shows that the Si/O ratio of the undoped sample is 0.67. The oxidation degree of the undoped sample is relatively serious because there are many dangling bonds on the surface of the silicon nanosheets, which gives them stronger surface reactivity than bulk silicon and makes them easier to oxidize in air. The doped sample is less oxidized, with a Si/O ratio of 0.8. This is because some of the dangling bonds are passivated by phosphorus atoms, which increases the stability of the silicon nanosheets in air. For both undoped and doped samples, two Si 2p binding energy peaks located at 99 and 103 eV are detected, corresponding to the Si0 state and the oxidized Si4+ state, respectively. By comparing the areas of the two peaks, the content ratio Si0/Si4+ is approximately 0.4 in the undoped sample; for the doped sample, this value is approximately 0.6. It can be seen from the ratios of these two states that, although a certain degree of oxidation of the silicon nanosheets is inevitable during preparation, storage and testing, the degree of oxidation can be effectively reduced by the passivation treatment. Figure 4b shows the P 2p spectra of the undoped and doped samples. No phosphorus-related signal was observed in the spectrum of the undoped samples. The doped sample shows a peak at approximately 129 eV. According to the literature, the binding energies of the P–O bond and the P–Si bond are 134.5 and 128.4 eV, respectively [29]; thus, the peak is assigned to the P–Si bond. The appearance of the P–Si signal proves that phosphorus doping is achieved successfully by the spin-on-dopant method. Together with the optical results above, it is inferred that phosphorus does not enter the interior of the silicon nanosheets to form substitutional dopants but is distributed on the surface. Therefore, the added phosphorus does not cause Auger recombination that would reduce the luminescence performance of the silicon nanosheets but instead enhances the luminescence by passivating the surface. The device prepared for electrical testing is shown in the inset of Figure 5. The silicon nanosheet suspension is dripped onto a SiO2/Si substrate, and 100 nm gold electrodes are evaporated onto the sample, where the channel length is 25 µm and the width is 100 µm. By fitting the I–V curve, it can be found that the resistivity of the undoped silicon nanosheets is significantly higher than that of silicene reported in the literature [16,30]. This is because there are many defects on the surface of silicon nanosheets synthesized by the soft-chemical procedure, which act as traps for charge carriers. Moreover, in this device the conduction process does not occur strictly within a single nanosheet; it also involves carrier transport across the interfaces between nanosheets. The surface oxidation layer and the interspaces between adjacent nanosheets impede this transport and thus reduce the conductivity. To address these problems, we further improved the conductivity of the multilayer silicon nanosheets by annealing at high temperature and phosphorus passivation. The edge contact regions of adjacent silicon nanosheets can be better connected or re-bonded by annealing.
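As a worked illustration, an ohmic I–V trace can be reduced to a resistivity estimate using the stated device geometry (channel length 25 µm, width 100 µm) and the 4 µm film thickness taken from the SEM cross-section. The sketch below is a minimal example with placeholder data rather than measured values:

```python
import numpy as np

# Device geometry from the paper; the current trace below is a placeholder.
LENGTH = 25e-6     # channel length (m)
WIDTH = 100e-6     # channel width (m)
THICKNESS = 4e-6   # film thickness from the SEM cross-section (m)

voltage = np.linspace(-1.0, 1.0, 21)                               # applied bias (V)
current = voltage / 2.0e7 + 1e-10 * np.random.randn(voltage.size)  # fake ohmic trace (A)

conductance, _ = np.polyfit(voltage, current, 1)        # slope of the linear fit (S)
resistance = 1.0 / conductance                          # two-terminal resistance (ohm)
resistivity = resistance * WIDTH * THICKNESS / LENGTH   # rho = R * A / L (ohm m)
print(f"R = {resistance:.3g} ohm, rho = {resistivity:.3g} ohm m")
```

Because the film contains voids, the effective cross-section is smaller than W × t, so an estimate of this kind is an upper bound on the true resistivity.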
It can be seen from the blue line in Figure 5 that the resistance of the silicon nanosheets after annealing is reduced by a factor of four compared with the as-prepared samples. Due to the self-cleaning effect of nano-sized materials, the diffused phosphorus impurities are mainly distributed on the surface of the silicon nanosheets. The extra electrons donated by phosphorus can neutralize the charged carrier traps on the surface. It can also be found that phosphorus passivation further reduces the resistance by a factor of two compared with the annealed samples, as shown by the red line in Figure 5. Furthermore, the breakdown voltage of the doped sample is below 5 V, while that of the undoped sample is above 20 V. For silicon, the critical value separating tunnel breakdown from avalanche breakdown is approximately 5 V. We can infer that the breakdown mechanism of the undoped sample is avalanche breakdown under high voltage. However, the increased carrier concentration in the doped sample reduces the barrier width and increases the tunneling probability, so that the breakdown mechanism transforms into tunnel breakdown. According to the relationship between the breakdown voltage of bulk silicon and the impurity concentration, the doping concentration of the silicon nanosheets is in the range of 10¹⁸–10¹⁹ cm⁻³. Table 1 compares the resistivity of silicon nanosheets prepared by different methods. Due to the limitations of crystal quality and surface defects, the resistivity of silicon nanosheets prepared by soft-chemical methods is much higher than that of nanosheets prepared by MBE and CVD, as explained in the introduction. Compared with other soft-chemically prepared silicon nanosheets, our sample effectively reduces the resistivity, and its conductivity is closer to that of silicon nanosheets with high crystal quality. It is worth noting that the film thickness used in the calculation of resistivity is the value of 4 µm measured in the SEM cross-section image (Figure 2b). It can be seen from the SEM cross-section image that there are many gaps in the multilayer silicon nanosheets, so the actual thickness of the sample should be less than the measured value, and the actual resistivity of the sample may be lower.

Conclusions

A large number of ultra-thin silicon nanosheets were prepared by the de-intercalation reaction, and their quasi-two-dimensional properties were confirmed by morphology and structure characterization. According to the results of EDS and XPS, the spin-on-dopant method successfully achieved phosphorus doping of silicon nanosheets prepared by the soft-chemical method. The optical performance tests show that the luminescence peak of the undoped ultra-thin silicon nanosheets is at 480 nm, which derives from the indirect band gap recombination of silicon. By passivating surface defects, phosphorus doping enhances the luminescence intensity to approximately four times that of the undoped sample and extends the lifetime approximately fourfold. In addition, since phosphorus achieves surface passivation and improves the stability of the silicon nanosheets, the doped samples also recover the quasi-direct band gap transition of the silicon nanosheets at approximately 440 nm, which is otherwise easily quenched by surface defects and surface oxidation.
Electrical performance tests show that the surface defects can be passivated by phosphorus doping, which reduces the resistivity of the multilayer silicon nanosheets sixfold. In conclusion, phosphorus doping can effectively overcome the adverse effects caused by the large number of surface defects in silicon nanosheets prepared by the soft-chemical method and improve the optical and electrical properties of the silicon nanosheets simultaneously.
4,551.4
2023-01-26T00:00:00.000
[ "Materials Science" ]
Adaptive Maximums of Random Variables for Network Simulations In order to enhance the precision of network simulations, this paper proposes an approach to adaptively decide the maximum of the random variables that create the discrete probabilities used to generate nodal traffic on simulated networks. A statistical model is first suggested to manifest the bound of the statistical errors. Then, according to the minimum probability that generates nodal traffic, a formula is proposed to decide the maximum. In the formula, a precision parameter is used to represent the desired degree of simulative accuracy. Meanwhile, the maximum adaptively varies with the traffic distribution among nodes because the decision depends on the minimum probability generating nodal traffic. In order to verify the effect of the adaptive maximum on simulative precision, an optical network is introduced. After simulating the optical network, the theoretic average waiting time of nodes on the optical network is exploited to validate the exactness of the simulation. The proposed formula deciding the adaptive maximum can be exploited generally in the simulations of various networks. Based on the precision parameter K, a recursive procedure will be developed in the future to automatically produce the adaptive maximum for network simulations.

Introduction

Simulations are an important technique for the design of systems, the estimation of performance, and the maintenance of systems [1,2]. They are widely used in various fields. For the simulation of complex systems, how to save computing time is an important topic. Moreover, it is also worthwhile to discuss how to reach acceptable simulative precision. In general, promoting simulative precision lowers simulative efficiency. Therefore, in order to enhance the simulative precision of complex systems, it is very important to strike an appropriate tradeoff between precision and efficiency. For example, DQDB networks are systems with asynchronous transfer mode (ATM) and time-division multiple access (TDMA) [3]. Their medium access control (MAC) protocol is so complex that performance analysis of the network is very difficult [4,5]. An exact analysis of the performance of the network is almost impossible [5]. Most papers estimate the performance of the network by simulations [6-20]. Thus, exact comprehension of the behavior of this complex system depends entirely on the precision of the simulations. However, if simulative precision is over-promoted, simulative efficiency will be suppressed. Therefore, how to reach acceptable simulative precision while striking a good tradeoff between precision and efficiency is an important problem for simulating complex systems. It is worthwhile to explore, but few papers discuss the problem. In simulations, random variables are used to create various probability distribution functions. These probabilities are applied to direct the input amplitude of signals or noise when simulating communication systems [1,2]. For network simulations, probabilities are exploited to control the generation of nodal traffic. The probabilities assigned to represent nodal traffic are continuous, whereas the probabilities created from random variables are discrete. Due to this inherent difference between the continuous and discrete probabilities, simulative errors will inevitably occur.
In order to obtain precise simulations, this paper suggests a statistical model. Based on the model, it is clear that simulative precision depends only on the maximum of the random variables controlling the generation of network traffic. Based on this observation, a simple formula is proposed to decide the feasible lower bound on the maximums of the random variables. In the formula, the feasible lower bound depends on a precision parameter, denoted by K, and the minimum probability generating nodal traffic. The larger the precision parameter, the greater the simulative precision. Then, a prime number that is slightly greater than the lower bound can be chosen as the maximum of the random variables. The chosen maximum adapts to the traffic distribution of the simulated networks. Thus, the adaptive maximum can not only result in acceptable simulative precision but also maintain the desired efficiency. In practice, owing to the approach deciding the adaptive maximum, simulative systems can make an optimal tradeoff between reaching high precision and saving computing time. So as to understand the effect of the adaptive maximum on simulative precision, an optical TDMA network is introduced. The MAC protocol of the optical TDMA network implements traffic control. The average waiting time of a node on the network is in inverse proportion to the traffic of the node [21]. Based on the quantitative analysis of the optical TDMA network, the root-mean-square (rms) difference between the simulative and theoretic average waiting times of nodes is calculated to validate the exactness of the simulations. In Section 2, the suggested statistical model exhibits the relationship between the statistical error and the maximum of the random variables. The discussion of how to decide the adaptive maximum is also given in this section. The MAC protocol performing traffic control and the working conditions assumed for simulating optical TDMA networks are presented in Section 3. Section 4 illustrates the effect of the adaptive maximum on the performance estimated by simulations. The validation of the simulations is also shown in this section. Section 5 concludes.

The Decision of Adaptive Maximums

Before a simulation, a set of continuous probabilities is assigned to predefine the distribution of nodal traffic. In the simulation, a corresponding set of discrete probabilities controls the generation of nodal traffic. The difference between the continuous and discrete probabilities is used to manifest the influence of the maximum of the integral random variables on simulative precision. Then the minimum probability in the set of continuous probabilities is exploited to decide the adaptive maximum of the integral random variables. In simulated networks, every node has one queue. Queues consist of cell (packet) buffers and provide first-in-first-out (FIFO) service. The first cell buffer in a queue is attached to the transmission system of the simulated network. When an available slot on the transmission system passes, the contents of the first cell buffer are written into the available slot.
The number of cells temporarily stored in queues depends on the traffic generated by the nodes. The heavier the nodal traffic, the longer the queuing delay. Under the complex MAC protocols of most networks, an available slot on the transmission system appears for a given node at random. Hence, predicting the queuing delay of a specified cell is difficult. In order to estimate performance, most simulations assume that networks are under heavy load [4,6,11,21]. This assumption lengthens the queues, so that the theoretic analysis of the queuing delay of a specified cell becomes even more difficult. Therefore, how to enhance the accuracy of queuing delays is a key topic for network simulations. In order to obtain precise simulations, a statistical model must first be introduced. Based on the model, the data resulting from simulations can be applied to calculate the nodal mean of the queuing delays. The queuing delay is defined as the period for which a cell stays in the queues. For a node within the network, every cell generated by disassembling procedures is first stored in the queue. Before the cell is transferred to the transmission system, it must be sequentially shifted into the first cell buffer of the nodal queue. When the cell is in the first cell buffer, the MAC protocol decides the moment after which the node can write the cell out. In the above operations, there are two moments relevant to the queuing delay. The first moment is the instant at which a cell enters the queue. The second is the instant at which the cell is sent onto the transmission system. Let IM_{i,j} and OM_{i,j} denote, in sequence, the first and second moments of the ith cell generated by the jth node, where i and j are the ordinal numbers of cells and nodes, respectively. The queuing delay of the ith cell generated by the jth node is designated C_{i,j} and can be represented as

$$C_{i,j} = OM_{i,j} - IM_{i,j}, \qquad 1 \le i \le \mathrm{Max}(R,j), \quad 0 \le j \le N-1, \tag{1}$$

where Max(R, j) is the ordinal number of the last cell generated by the jth node, R is the maximum of the integral random variable used to create the set of discrete probabilities, and N is the number of nodes within the network. Let P_I denote the probability that the Ith node generates traffic. These probabilities, the P_I's, are chosen in accordance with the simulative scenarios of interest. The chosen probabilities are continuous, but the corresponding probabilities generated by an integral random variable are discrete. When a discrete probability corresponds to a continuous probability, the two should theoretically be equal. In practice, however, the discrete probability inherently differs from its corresponding continuous probability. If the difference between them is large, the statistical delays cannot converge to an acceptable level of precision. Let X represent an integral random variable distributed uniformly over [0, R]. Its probability density function f_X(x) can be presented as

$$f_X(x) = \frac{1}{R+1}, \qquad x = 0, 1, \ldots, R. \tag{2}$$

The random variable X is applied to create the P_I's for simulations. Let P̃_I denote the discrete probability corresponding to P_I. The difference between P_I and P̃_I can be represented as

$$\Delta_I = P_I - \tilde{P}_I. \tag{3}$$

From (3), if R theoretically approaches infinity, the difference becomes zero. Therefore, (3) can be rearranged as

$$\lim_{R \to \infty} \tilde{P}_I = P_I. \tag{4}$$

If a simulation has a precise probability set {P̃_I = P_I, I = 0, ..., N − 1}, the statistical delays of the network simulation will also be precise. In a word, when R theoretically approaches infinity, the statistical queuing delay converges precisely.
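To make the convergence in (4) concrete, a minimal sketch can compute the exact discrete probabilities induced by an integer variable uniform on {0, ..., R} and compare them with their continuous targets; the target probabilities below are arbitrary placeholders.

```python
import numpy as np

P = np.array([0.35, 0.25, 0.25, 0.15])  # continuous target probabilities P_I
edges = np.cumsum(P)                     # partition of [0, 1] into node intervals

for R in (10, 100, 1000, 10000):
    # X is uniform on the integers {0, ..., R}; map each lattice point onto [0, 1]
    # and compute the exact discrete probabilities P~_I that each interval receives.
    lattice = np.arange(R + 1) / R
    idx = np.minimum(np.searchsorted(edges, lattice, side="left"), len(P) - 1)
    P_tilde = np.bincount(idx, minlength=len(P)) / (R + 1)
    print(f"R = {R:6d}   max |P_I - P~_I| = {np.abs(P - P_tilde).max():.2e}")
```

The maximum discrepancy shrinks roughly as 1/R, which is the behaviour the statistical model bounds.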
Having discussed the enhancement of statistical precision, we now present the statistics of the queuing delay. According to (1), {C_{i,j}} is a set of positive random numbers. Let μ(j) denote the mean of {C_{i,j}}. Then μ(j) can be calculated as

$$\mu(j) = \frac{1}{\mathrm{Max}(R,j)} \sum_{i=1}^{\mathrm{Max}(R,j)} C_{i,j}. \tag{5}$$

The mean μ(j) represents the average queuing delay of the jth node. If R were theoretically infinite, the computing time taken by simulations would also be infinite. In practice, R can be chosen in accordance with the P_I's. If the P_I's are small, R must be large enough to ensure statistical precision. In other words, R must adaptively vary with the P_I's to promote simulative precision. Let P_min represent the minimum probability among the P_I's. For the minimum precision requirement in simulations, P_min must be greater than the inverse of R, that is,

$$P_{\min} > \frac{1}{R}. \tag{6}$$

In order to guarantee that the degree of precision is acceptable, R must be chosen so that

$$R \cdot P_{\min} \ge K, \tag{7}$$

where K > 1 is a positive real number. Then R, the adaptive maximum, can be represented by

$$R \ge \frac{K}{P_{\min}}. \tag{8}$$

In (8), the value K is a precision parameter. Enlarging K results in higher precision. On the other hand, if simulative procedures use an overlarge K, they consume more computing time while gaining little precision. Thus, the choice of K depends on the acceptable degree of precision. Because the recursive formula of the power-residue method is computationally very efficient [1,22], it is widely adopted for generating random variables with a uniform distribution. When the power-residue method is exploited to generate the uniformly distributed random sequence, the maximum of the integral random variable must be a prime number. Therefore, the lower bound on R, denoted R_LB, is first obtained from

$$R_{LB} = \frac{K}{P_{\min}}. \tag{9}$$

Then a prime number that is slightly greater than R_LB can be assigned as the adaptive maximum R. The adaptive maximum results in an optimal tradeoff between simulative precision and efficiency.

Optical TDMA Networks

In order to comprehend the effect of adaptive maximums of random variables on the precision of network simulations, an optical TDMA network is introduced. Before depicting the structure of the network, the derivation of the average waiting time of nodes on the network (the waiting mean) is presented. The waiting time of a cell is the queuing delay for which the cell waits in the first cell buffer of the queue for an available slot on the transmission system. Based on the structure of the network, several working conditions are assumed for the simulations. Under these working conditions, a MAC protocol implementing traffic control is described. For TDMA networks, a node must send requests to reserve empty slots when it is going to transmit messages. More requests reserve more slots. As the number of reserved slots of a node becomes large, the waiting mean of the node is reduced. Therefore, if a node has more traffic, its waiting mean decreases. The relationship between the waiting mean and the traffic of the Ith node [21] can be presented as in (10), where μ(I) and T(I) are the waiting mean and the traffic of the Ith node, respectively, and S is the slot rate of the optical TDMA network. Because optical TDMA networks are high-speed networks, the slot rate can approach infinity. Therefore, μ(I) can be rearranged as

$$\lim_{S \to \infty} \mu(I) = \frac{1}{T(I)}. \tag{11}$$

In (11), μ(I) is a function of T(I). It shows that the waiting mean of a node on an optical TDMA network is in inverse proportion to the traffic of the node. This theoretic waiting mean will be exploited to validate the simulations in Section 4.
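A minimal sketch of the adaptive-maximum rule of (8) and (9): compute R_LB = K/P_min, round up to the next prime, and use the prime as the modulus of a power-residue (Lehmer) generator. The multiplier below is the classical Lewis-Goodman-Miller choice and is an assumption, since the paper does not specify one; in general a multiplier must be matched to the prime modulus.

```python
from sympy import nextprime  # any next-prime routine would do

def adaptive_maximum(K, p_min):
    """Return the adaptive maximum R: the first prime above R_LB = K / P_min."""
    r_lb = K / p_min
    return nextprime(int(r_lb))

def lehmer_sequence(modulus, multiplier=16807, seed=1):
    """Power-residue (Lehmer) generator x_{n+1} = a * x_n mod m.

    The multiplier 16807 is an assumption; a full-period multiplier should be
    chosen to suit the particular prime modulus.
    """
    x = seed
    while True:
        x = (multiplier * x) % modulus
        yield x

# Scenario from the paper: N = 40 nodes and T_N = 0.25 give
# P_min = T_B = T_N / (N - 1), so K = 300 yields R_LB = 46,800.
p_min = 0.25 / 39
R = adaptive_maximum(300, p_min)
gen = lehmer_sequence(R)
samples = [next(gen) / R for _ in range(5)]  # uniform values in (0, 1)
print(R, samples)
```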
The structure of the optical TDMA network is shown in Figure 1. In Figure 1, the medium between the slot generator and the slot terminator is an optical fiber. The slot flow on the optical fiber is sent by the slot generator and sinks into the slot terminator. The number of nodes within the network is N. Nodes are numbered from 0 to (N − 1) in sequential order. The ordinal number of every node also corresponds to the nodal position in the topology. The period in which the slot generator completely sends one slot onto the optical fiber is called a slot time. A slot length is the distance that a slot spans on the optical fiber. Other working conditions concerning the spacing between adjacent nodes, the length of messages, and the traffic distribution among nodes are described as follows. For all simulative scenarios of interest, the spacing between adjacent nodes is one slot length. Messages are similar in length, and every message can be contained in the payload of a single slot. The traffic distribution among nodes affects the operation of traffic control in the MAC protocol. For ease of performing traffic control, a basic traffic, denoted T_B, is introduced. The amount of T_B depends on the defined traffic distribution. In a scenario, the traffic of a node can be several times the amount of T_B. In order to present clearly the influence of the adaptive maximum on simulative precision, it is assumed that traffic is uniformly distributed among the nodes in every simulative scenario. Hence, the traffic of every node in a simulative scenario is equal to one T_B. Because the optical fiber is a one-way bus and no messages are transmitted out of the network, the (N − 1)th node does not generate any traffic. Let T_N denote the network traffic. Then T_N can be presented as

$$T_N = (N - 1)\, T_B, \tag{12}$$

so the T_B used in simulations is

$$T_B = \frac{T_N}{N - 1}. \tag{13}$$

Based on the introduction of T_B, the approach to traffic control can be described as follows. In this paper, slot frames are used to implement traffic control. The slot flow on the optical fiber is partitioned into frames, with 1/T_B slots in a frame. When a frame passes the Ith node, the node may write only one message into an empty slot within the frame, provided its queue is not empty. After this operation, the node must immediately stop writing messages out, regardless of whether its queue is empty or not. The node then waits for the arrival of the next frame to restart the controlling process.
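The frame-based control just described is simple enough to sketch directly. The following minimal simulation (with simplified slot bookkeeping and placeholder queue contents) lets each backlogged node claim exactly one empty slot per frame:

```python
from collections import deque

def run_frames(queues, slots_per_frame, n_frames):
    """Simulate the per-frame traffic control: one message per node per frame.

    queues: list of deques of pending cells, one per node (node N-1 stays empty).
    Returns the number of cells each node managed to send.
    """
    sent = [0] * len(queues)
    for _ in range(n_frames):
        free_slots = slots_per_frame
        for node, q in enumerate(queues):   # nodes see the passing frame in order
            if q and free_slots > 0:
                q.popleft()                 # write one message into an empty slot
                sent[node] += 1
                free_slots -= 1             # then stop until the next frame
    return sent

# Scenario with N = 40 and T_N = 0.25: T_B = 0.25/39, so a frame holds 156 slots.
N, T_N = 40, 0.25
slots_per_frame = round((N - 1) / T_N)      # 1 / T_B = 156
queues = [deque(range(10)) for _ in range(N - 1)] + [deque()]
print(slots_per_frame, run_frames(queues, slots_per_frame, n_frames=5))
```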
Simulations

The simulation is applied to present the influence of the adaptive maximum of a random variable on simulative precision. The waiting mean of the optical TDMA nodes manifests the degree of precision produced by different adaptive maximums. Simulative efficiency depends on the size of the adaptive maximum: the larger the maximum of the random variable, the lower the efficiency of the simulative system. On the other hand, the theoretic waiting mean calculated by (11) is used to validate the simulative data. The rms difference between the simulative and theoretic data, denoted D_rms, is defined as

$$D_{rms} = \sqrt{\frac{1}{N} \sum_{I=0}^{N-1} \bigl(\mu_s(I) - \mu(I)\bigr)^2}, \tag{14}$$

where μ_s(I) and μ(I) are the simulative and theoretic waiting means of the Ith node, respectively. Two parameters must be chosen before the simulations: the number of nodes N and the network traffic T_N. The chosen values of N and T_N are based on the scenarios of interest. After choosing them, (13) can be exploited to calculate the basic traffic T_B. Because messages are one slot in length, the basic traffic T_B can be regarded as the minimum probability P_min. In order to distinguish clearly between the degrees of precision corresponding to different adaptive maximums, a small P_min is necessary. Thus, the chosen T_N in all scenarios of interest equals 0.25. Two scenarios are considered in the simulations. In order to manifest the effect of adaptive maximums clearly, the values of N in the two scenarios are 40 and 50, respectively. To exhibit sufficiently the influence of the precision parameter on the simulative degree of precision, three values of K are assigned in every scenario: 300, 3000, and 30,000. Then (9) is used to calculate the R_LB corresponding to every K. In accordance with the power-residue method, the adaptive maximum R, which is a prime number slightly greater than its corresponding R_LB, can finally be found. The parameters derived from T_N, N, and K as described above are listed in Table 1. For traffic control, the number of slots in a frame, which equals 1/T_B, must vary with the scenario. From (13), the number of slots in a frame, denoted N_SF, can be presented as

$$N_{SF} = \frac{1}{T_B} = \frac{N - 1}{T_N}. \tag{15}$$

Hence, the N_SF of the scenario with 40 nodes is 156 and that of the scenario with 50 nodes is 196. Because a uniform traffic distribution is assumed, the traffic of every node is equal to T_B in each scenario. Consequently, the theoretic waiting mean calculated by (11) is the same as the N_SF in each scenario. In the following figures showing simulative results, the horizontal axis is the ordinal number of the nodes. Because the ordinal number of nodes is discrete, all curves in the figures consist of piecewise lines. The waiting mean of the nodes on the vertical axis is expressed in slot times. After the simulations, (5) is used to calculate the simulative waiting mean of the nodes. Figures 2 and 3 show the variation of the waiting means for the two scenarios, respectively. In every figure, the horizontal solid line represents the theoretic waiting mean. The other three curves correspond to the simulative data for the three precision parameters. The two figures clearly exhibit that the curves become smoother when the precision parameter K is enlarged.
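The validation step of (14) reduces to a one-line rms computation once the simulative waiting means are available. In the sketch below, the simulated values are placeholders standing in for the output of the N = 40 scenario, whose theoretic waiting mean is N_SF = 156 slot times:

```python
import numpy as np

def d_rms(mu_sim, mu_theory):
    """Root-mean-square difference (14) between simulative and theoretic waiting means."""
    mu_sim = np.asarray(mu_sim, dtype=float)
    return np.sqrt(np.mean((mu_sim - mu_theory) ** 2))

# Placeholder simulated waiting means for N = 40 nodes, jittered around the
# theoretic value of 156 slot times; real values would come from (5).
rng = np.random.default_rng(0)
mu_sim = 156 + rng.normal(0, 3, size=40)
print(f"D_rms = {d_rms(mu_sim, 156):.2f} slot times")
```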
Based on the theoretic waiting mean, the D_rms calculated by (14) is used to validate the simulations. Table 2 presents these D_rms values. Every D_rms represents the rms difference between the horizontal solid line and one simulative curve in each figure. Observing Table 2, a larger precision parameter K and a smaller N result in a smaller D_rms. Therefore, a simulative curve approaches the horizontal solid line as the precision parameter K is consecutively enlarged. However, the decrease of D_rms does not correspond linearly to the increase of K. Consequently, simulative procedures with an overlarge K consume more computing time while gaining little precision. Adjusting K to a proper value yields an adaptive maximum that achieves an acceptable level of precision with relatively high efficiency. In a word, network simulations with appropriate adaptive maximums strike an optimal tradeoff between simulative precision and efficiency.

Conclusions

The paper discusses the tradeoff between simulative precision and efficiency. Based on the statistical model of queuing delays, the difference between continuous and discrete probabilities is used to manifest the effect of the maximum of the random variables on the statistical error. A simple method with a precision parameter K is then proposed to decide the maximum of the random variables. The maximum adapts to the minimum probability among the P_I's, and must be enlarged as that minimum probability becomes smaller. In order to manifest the effect of the adaptive maximum on simulative precision, an optical TDMA network whose MAC protocol performs traffic control is simulated. The average waiting time of an optical TDMA node is in inverse proportion to the traffic of the node. The theoretic average waiting time is exploited to calculate the D_rms values that validate the exactness of the simulations. The simulative results exhibit that the adaptive maximum strikes an optimal tradeoff between simulative precision and efficiency. In network simulations, the adaptive maximum not only results in an acceptable degree of precision but also suitably saves computing time. Based on the precision parameter K, a recursive procedure will be developed to automatically generate the adaptive maximum in the future.

Figure 1: The structure of optical TDMA networks. Table 1: Relative parameters in the two scenarios. Table 2: D_rms values corresponding to the values of K and N.
4,796.4
2009-01-01T00:00:00.000
[ "Computer Science" ]
Accurate Image Multi-Class Classification Neural Network Model with Quantum Entanglement Approach Quantum machine learning (QML) has attracted significant research attention over the last decade. Multiple models have been developed to demonstrate the practical applications of the quantum properties. In this study, we first demonstrate that the previously proposed quanvolutional neural network (QuanvNN) using a randomly generated quantum circuit improves the image classification accuracy of a fully connected neural network against the Modified National Institute of Standards and Technology (MNIST) dataset and the Canadian Institute for Advanced Research 10 class (CIFAR-10) dataset from 92.0% to 93.0% and from 30.5% to 34.9%, respectively. We then propose a new model referred to as a Neural Network with Quantum Entanglement (NNQE) using a strongly entangled quantum circuit combined with Hadamard gates. The new model further improves the image classification accuracy of MNIST and CIFAR-10 to 93.8% and 36.0%, respectively. Unlike other QML methods, the proposed method does not require optimization of the parameters inside the quantum circuits; hence, it requires only limited use of the quantum circuit. Given the small number of qubits and relatively shallow depth of the proposed quantum circuit, the proposed method is well suited for implementation in noisy intermediate-scale quantum computers. While promising results were obtained by the proposed method when applied to the MNIST and CIFAR-10 datasets, a test against a more complicated German Traffic Sign Recognition Benchmark (GTSRB) dataset degraded the image classification accuracy from 82.2% to 73.4%. The exact causes of the performance improvement and degradation are currently an open question, prompting further research on the understanding and design of suitable quantum circuits for image classification neural networks for colored and complex data. Introduction The theory of machine learning is an important subdiscipline in both artificial intelligence and statistics, with roots in artificial neural networks and artificial intelligence research since the 1950s [1]. Data processing using quantum devices is known as quantum computing. Because operations can be performed on numerous states simultaneously, the capacity of quantum states to be in a superposition can significantly speed up computation in terms of complexity in a broader machine learning context. Several quantum machine learning (QML) variations of classical models have recently been developed, including quantum reservoir computing (QRC) [2], quantum circuit learning (QCL) [3][4][5], continuous variable quantum neural networks (CVQNNs) [6], quantum kitchen sinks (QKSs) [7][8][9], quantum variational classifiers [10,11], and quantum kernel estimators [12,13]. Recent literature surveys on QML are available [14][15][16]. We note that the main approach taken by the community consists in formalizing problems of interest as variational optimization problems and using hybrid systems of quantum and classical hardware to find approximate solutions [15]. The intuition is that by implementing some subroutines on classical hardware, the requirement of quantum resources is significantly reduced, particularly the number of qubits, circuit depth, and coherence time, making the quantum algorithms suitable to be implemented on noisy, intermediate-scale quantum (NISQ) devices [15]. 
Recent examples in this direction include the work by Arthur and Date, who proposed a hybrid quantum-classical neural network architecture in which each neuron is a variational quantum circuit [17], and the work by Sagingalieva et al., who proposed a combination of classical convolutional layers, graph convolutional layers, and quantum neural network layers to improve on drug-response prediction over a purely classical counterpart [18]. Among the many proposals to combine classical machine learning methods with quantum computing, the method proposed by Henderson et al. in [19] has the advantage of being implementable in quantum circuits with a smaller number of qubits and shallow gate depths, and it can be applied to more practical applications. This method utilizes quantum circuits as transformation layers to extract features for image classification using convolutional neural networks (CNNs). These transformation layers are called quanvolutional layers, and the method is herein referred to as the quanvolutional neural network (QuanvNN). An important question raised was whether the features produced by the quanvolutional layers could increase the accuracy of machine learning models for classification purposes. Henderson et al. attempted to address this question by applying randomly created quantum circuits and comparing the classification accuracy of the QuanvNN with the results obtained by a conventional CNN. The results did not show a clear advantage in classification accuracy over the classical model [19]. The QuanvNN was further updated in [20], implemented on quantum computer hardware (Rigetti's Aspen-7-25Q-B quantum processing unit), and evaluated on a satellite imagery classification task. However, the image classification accuracy of the QuanvNN did not improve compared with that of the conventional CNN. An implementation of the QuanvNN on a software quantum computing simulator, PennyLane [21], was provided by Mari [22]. Mari's implementation of the QuanvNN differs from that of Henderson et al. in at least two aspects. Firstly, Mari's implementation combined a quanvolutional layer with a neural network (NN) instead of a CNN. Secondly, the output of the quantum circuit (a set of expectation values) was fed directly into the following neural network layer, whereas in the original QuanvNN proposal by Henderson et al. the output of the quantum circuit was converted into a single scalar value using a classical method. In Mari's implementation, 50 training and 30 test images from the Modified National Institute of Standards and Technology (MNIST) dataset (a 10-class dataset of handwritten digits [23]) were applied and tested. No clear improvement in the classification accuracy of the QuanvNN over the NN was observed in [22]. In this paper, we first show that a QuanvNN using a randomly generated quantum circuit (four qubits with 20 single-axis rotations and 10 controlled NOTs (CNOTs), extending Mari's implementation from one random layer to five random layers) improves the image classification accuracy of a classical fully connected NN against MNIST and the Canadian Institute for Advanced Research 10 class (CIFAR-10) dataset (a 10-class photographic image dataset [24]) from 92.0% to 93.0% and from 30.5% to 34.9%, respectively. We then propose a new model, termed Neural Network with Quantum Entanglement (NNQE), using a strongly entangled quantum circuit (four qubits with 20 three-axis rotations and 20 CNOTs) combined with Hadamard gates, instead of random quantum circuits.
Our newly proposed NNQE further improves the image classification accuracy against MNIST and CIFAR-10 to 93.8% and 36.0%, respectively. These improvements were obtained using a quantum circuit consisting of only four qubits and without introducing any additional parameters into the machine learning optimization process. Unlike other QML methods, the proposed method does not require optimization of the parameters inside the quantum circuits; hence, it requires only limited use of the quantum circuit. Given the small number of qubits and the relatively shallow depth of the proposed quantum circuit, the proposed method is well suited for implementation on noisy intermediate-scale quantum computers. However, using QuanvNN or the proposed NNQE degrades the image classification performance when applied to the more complicated German Traffic Sign Recognition Benchmark (GTSRB) dataset (43 classes of real-life traffic sign images [25]): relative to the classical NN accuracy of 82.2%, the accuracy drops to 71.9% (QuanvNN) and to 73.4% (NNQE). Nevertheless, we note that NNQE improved the image classification accuracy over QuanvNN from 71.9% to 73.4%. The exact causes of the performance improvement and degradation are currently an open question, prompting further research on the understanding and design of suitable quantum circuits for image classification neural networks for colored and complex data. We note that a similar result, in which QuanvNN did not improve the image classification accuracy of an NN when tested against GTSRB, was also recently reported in [26], which is consistent with our findings. The remainder of this paper is organized as follows: Section 2 presents the methodology for the proposed model. The details of our experiment are provided in Section 3. The results and discussion are presented in Section 4, followed by conclusions in Section 5.

Methods

For the implementation of the QuanvNN, readers are referred to [22], noting that the number of random layers was increased from 1 to 5. This results in the QuanvNN using a quantum circuit with 20 random single-axis rotations and 10 CNOTs. Figure 1 shows a flowchart of our proposed NNQE model. We assume that the input image is a two-dimensional matrix of size m-by-m and that each pixel value x satisfies 0 ≤ x ≤ 1. The extension to a multichannel pixel image is expected to be straightforward. A section of size n-by-n is extracted from the input image, where n = 2; the extension to n > 2 is left for future study. Given n = 2, we use a 4-qubit quantum circuit. The four qubits are initialized in the ground state, and the four pixel values are then encoded using RY rotations with θ = πx as in (1). The outputs from the RY gates are fed into the quantum circuit. NNQE uses four Hadamard gates, 20 three-axis rotations, and 20 CNOTs. One Hadamard gate is applied to each qubit immediately after encoding. The remaining gates are grouped into five layers, each consisting of four three-axis rotations and four CNOTs. A three-axis rotation is applied to each qubit within a layer. The rotation angles were chosen randomly and uniformly between 0 and 2π rad. The four CNOTs within each layer connect the qubits randomly, but without overlap. The Hadamard and CNOT gates can be described mathematically as in (2) and (3), respectively. The outputs from the measurement operations are given as expectation values between −1 and 1 and form the output features. The output features are transformed into a one-dimensional vector using a flattening layer, as shown in Figure 1.
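For reference, the standard forms of these gates, consistent with the description above, are:

$$R_Y(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}, \qquad \theta = \pi x, \tag{1}$$

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \tag{2}$$

$$\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}. \tag{3}$$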
The output of the flattening layer is connected to the fully connected layer to classify and predict the image labels for testing model learning. The dotted box in Figure 1 is expanded in detail in Figure 2. The circuit is expanded into multiple rotations and CNOTs in the expectation that it will achieve better feature extraction than the random circuit. In particular, the design of the quantum circuit was inspired by Circuit 15 in [27], which was found to retain high expressibility with a strong entangling capability. In addition, an extra layer of Hadamard gates was added by trial, which showed further performance improvements.

Experiment

The method proposed in this study was implemented on a quantum computing simulator using Python (version 3.7.0) and the PennyLane libraries (release 0.27.0) [21]. The random quantum circuit and the strongly entangled quantum circuit were implemented using PennyLane's built-in RandomLayers and StronglyEntanglingLayers functions. Unless otherwise stated, the Adam optimizer and a batch size of 128 were used to train the network. Table 1 summarizes the parameters of the three image datasets used in this experiment. The method was run on a MacBook Pro (Intel Core i7 2.5 GHz CPU); processing the MNIST data with NNQE, for example, took approximately two days.

Testing Dataset MNIST

The MNIST dataset [23] consists of 60,000 training and 10,000 testing images of handwritten digits from 0 to 9. Each image measures 28 × 28 pixels. The original images are grayscale with values between 0 and 255, which were scaled by dividing them by 255. Figure 3 shows an example of the MNIST dataset images and the corresponding output features obtained using the NNQE circuit.
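The patch circuit described above maps naturally onto PennyLane's templates. The following minimal sketch encodes one 2 × 2 patch with RY(πx), applies the Hadamard layer, and then applies qml.StronglyEntanglingLayers with fixed (non-trainable) random weights; the template's exact CNOT wiring is an assumption and may differ in detail from the circuit in Figure 2.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
n_layers = 5
dev = qml.device("default.qubit", wires=n_qubits)

# Fixed random rotation angles in [0, 2*pi); these are never optimized.
shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = 2 * np.pi * np.random.uniform(size=shape)

@qml.qnode(dev)
def nnqe_patch(pixels):
    # Encode a 2x2 patch of pixel values x in [0, 1] as RY rotations, theta = pi * x.
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # One Hadamard per qubit immediately after encoding.
    for i in range(n_qubits):
        qml.Hadamard(wires=i)
    # Five layers of three-axis rotations and CNOTs with strong entanglement.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Four expectation values in [-1, 1] form the output features of the patch.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

print(nnqe_patch([0.1, 0.4, 0.7, 0.9]))
```

Sliding this circuit over every 2 × 2 patch of an image yields the feature maps that are flattened and fed to the classical head.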
Testing Dataset CIFAR-10

The CIFAR-10 dataset [24] consists of 50,000 training images and 10,000 testing images. The photographic images are colored and consist of ten classes. The original images are in RGB color; they were converted into grayscale values between 0 and 255 and then scaled by dividing them by 255. Examples of CIFAR-10 dataset images are shown in Figure 4. Figure 5 shows an example of the original CIFAR-10 dataset images and the corresponding output features obtained using the NNQE circuit.

Testing Dataset GTSRB

The GTSRB dataset [25] consists of 34,799 training and 12,630 test images of 43 classes of traffic signs captured from actual use under various conditions. These images were captured at night, during rainy weather, and in fog under various illumination conditions, which can make it challenging for any machine to learn concealed features from dark and relatively unclear images. The original dataset has image sizes varying between 15 × 15 pixels and 222 × 193 pixels. As suggested by Sermanet and LeCun in [28], the images were scaled to 32 × 32 pixels. The original images are in RGB color; they were converted into grayscale values between 0 and 255 and then scaled by dividing them by 255. Examples of the GTSRB dataset images are shown in Figure 6, whereas the original images and the corresponding output features obtained using the NNQE circuit are shown in Figure 7. Figure 8 shows the variation in the classification accuracy on the test set as a function of the training epoch using the MNIST dataset. As shown in Figure 8, QuanvNN improves the accuracy on the test set over the classical NN. The performance is further improved by the NNQE circuit. Again, we emphasize that this improvement was obtained without introducing any additional optimizing parameters in the machine learning process.
Figure 10 shows the variation in the accuracy on the test set using the GTSRB dataset. Unlike the cases using the MNIST and CIFAR-10 datasets, the test set accuracy obtained using the QuanvNN was reduced compared with that of the classical NN. However, the proposed NNQE circuit outperforms the QuanvNN, as shown in Figure 10. We note that in each case of the MNIST, CIFAR-10, and GTSRB datasets, other classical methods, such as CNNs, which are algorithmically more complex but can be implemented efficiently on modern processors, can in practice produce a higher image classification accuracy than our proposed NNQE method. However, the benefit of our proposed NNQE method is the observation that applying the quantum circuit can improve the image classification accuracy over a classical method. Understanding the exact causes of this observation is expected to lead to a better design of quantum circuits that are more beneficial in practice. The exact cause of this phenomenon is currently unknown and requires further investigation. We believe one plausible reason could be better correlations between the image pixels, which may be enhanced by the strong entanglement between the qubits, thereby leading to an overall improvement in accuracy. A summary of the results is presented in Table 2. To investigate the characteristics of the proposed NNQE, eight optimizers and five batch sizes were tested using the GTSRB dataset. These optimizers have different effects on model execution and training. The following optimizers were used to check the performance of our proposed NNQE circuit: Adam, AdaDelta, RMSProp, Adagrad, AdaMax, SGD, Nadam and FTRL. Figure 11 shows the test set accuracy using the different optimizers and batch sizes against GTSRB.
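The optimizer set listed above matches the optimizers available in tf.keras, so a sweep of this kind can be sketched as follows; the use of Keras, the classifier head, and the synthetic stand-in data are all assumptions, since the paper does not document its training code.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for the flattened quantum features and 43 GTSRB labels;
# the real inputs would be the NNQE output features of the training set.
n_features, n_classes = 1024, 43
x_train = np.random.rand(1000, n_features).astype("float32")
y_train = np.random.randint(n_classes, size=1000)

def build_head():
    # Fully connected classification head; the layer sizes are an assumption.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

for name in ["adam", "adadelta", "rmsprop", "adagrad", "adamax", "sgd", "nadam", "ftrl"]:
    for batch_size in (10, 30, 60, 90, 120):
        model = build_head()
        model.compile(optimizer=name,
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        hist = model.fit(x_train, y_train, epochs=5,
                         batch_size=batch_size, verbose=0)
        print(name, batch_size, hist.history["accuracy"][-1])
```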
It is evident from Figure 11 that the Nadam-based optimizer performs better than all the other optimizers used in this study. For the different batch sizes tested in Figure 11, the results show only a small difference among the best-performing optimizers over a wide range of batch sizes (10, 30, 60, 90, and 120).

Conclusions and Future Directions

In this study, we developed a new NNQE method and investigated its image classification performance using three different well-known image datasets. As shown in Table 2, the testing accuracy against MNIST (handwritten digits) was improved from 92.0% by the classical NN to 93.0% by the previously proposed QuanvNN, and further to 93.8% by our proposed NNQE. Similarly, the testing accuracy against CIFAR-10 (colored images) was improved from 30.5% by the classical NN to 34.9% by QuanvNN, and further to 36.0% by NNQE. Both MNIST and CIFAR-10 have 10 distinct classes. While the exact cause of this improvement is not yet clear and requires further investigation, one plausible reason could be better correlations between the image pixels, which may be enhanced by the strong entanglement between the qubits, thereby leading to an overall improvement in accuracy. However, the performance of the proposed model was degraded in comparison with the classical NN when applied to real-life, complex, colored images of traffic signs (GTSRB), which has 43 classes. This is shown in Table 2 as follows: the testing accuracy against GTSRB by the classical NN was found to be 82.2%, which was reduced to 71.9% by the previously proposed QuanvNN. The testing accuracy against GTSRB was improved from 71.9% to 73.4% by our proposed NNQE. However, this is still a reduction from the 82.2% achieved by the classical NN. This indicates that further development of the NNQE model may be necessary for datasets with relatively more classes and greater complexity, such as the real-life traffic signs of GTSRB. We also tested different optimizers for the proposed model to further demonstrate the efficacy of the NNQE model. The results showed that the Nadam-based optimizer produced the best results. This is perhaps attributable to the Nadam algorithm being an extension of the Adam optimizer that adds Nesterov's Accelerated Gradient (NAG), or Nesterov momentum, providing an improved type of momentum for the search procedure. Future research could include increasing the number of qubits from four, as well as investigating the indicators of performance improvement, or relative degradation, in comparison with the classical NN. These studies could involve proposing new methodologies for designing quantum circuits, building on the present study, and tests with more complex datasets with larger numbers of classes or concealed data features.
5,595.8
2023-03-01T00:00:00.000
[ "Computer Science" ]
Automated Low-Cost Photogrammetric Acquisition of 3D Models from Small Form-Factor Artefacts

The photogrammetric acquisition of 3D object models can be achieved by Structure from Motion (SfM) computation of photographs taken from multiple viewpoints. All-around 3D models of small artefacts with complex geometry can be difficult to acquire photogrammetrically, and the precision of the acquired models can be diminished by the generic application of automated photogrammetric workflows. In this paper, we present two versions of a complete rotary photogrammetric system and an automated workflow for all-around, precise, reliable and low-cost acquisition of large numbers of small artefacts, together with consideration of the visual quality of the model textures. The acquisition systems comprise a turntable and (i) a computer and digital camera or (ii) a smartphone, designed to be ultra-low cost (less than $150). Experimental results are presented which demonstrate an acquisition precision of less than 40 µm using a 12.2 Megapixel digital camera and less than 80 µm using an 8 Megapixel smartphone. The novel contribution of this work centres on the design of an automated solution that achieves high-precision, photographically textured 3D acquisitions at a fraction of the cost of currently available systems. This could significantly benefit the digitisation efforts of collectors, curators and archaeologists as well as the wider population.

Introduction

In recent years, the loss, damage and destruction of cultural artefacts in the Middle East has captured worldwide attention [1] and has motivated digital preservation and virtual conservation efforts. For example, Rekrei (formerly Project Mosul) [2] and The Million Image Database Project [3] are projects that collect and curate photographs to digitally preserve heritage and to create 3D models of current, lost, or at-risk heritage. The inspiration for the work presented here was the challenge of resourcing 3D models for the Virtual Cuneiform Tablet Reconstruction Project [4-6]. Cuneiform is one of the earliest known systems of writing. Emerging from a simple system of pictograms some five thousand years ago, the script evolved into a sophisticated writing system for communication in several languages. Cuneiform signs were formed with wedge-shaped impressions in hand-held clay tablets. It was the original portable information technology, and it remained in use for over three thousand years in Mesopotamia, the region in and around modern-day Iraq and Syria. Excavated cuneiform tablets are typically fragmented and their reconstruction poses a puzzle of considerable complexity [7]. The puzzle's "pieces" are small complex 3D forms (the dimensions of 8000 catalogued tablets extracted from the Cuneiform Digital Library Initiative (CDLI) database [8] had an average width and length of 4.3 and 5.1 cm, respectively [9]), they belong to an unknown number of complete or incomplete tablets, and they are distributed within and between museum collections worldwide.
Many thousands of inscribed cuneiform tablet fragments have been excavated in the last 200 years; the largest collections include those of the British Museum in London, the Penn Museum in Philadelphia, the Iraq Museum in Baghdad, and the Louvre in Paris. Virtual reconstruction of the tablets obviates issues such as geographical distance as well as practical issues concerning the necessarily limited accessibility and the physical fragility of the fragments [10]. It also makes possible the use of computer-automated matching tools [11]. A significant challenge in the virtual reconstruction of fragmented artefacts is the acquisition of the virtual artefacts themselves [12]. Conventional laser scanners and structured light scanners are costly and not easily portable. In addition, the scanning process can be labour intensive, requiring training and skills in order to acquire partial 3D models from multiple viewpoints before manually 'stitching' the parts together to form a complete 3D mesh. Similar problems affect the usability of 'dome' techniques such as Photometric Stereo [13,14] that can only acquire a single hemisphere at a time. The potential of photogrammetric acquisition for digital heritage is well-established [15]. Ahmadabadian et al. [16] demonstrated that, with sufficient numbers of photographs and robust calibration procedures, precisions comparable to laser scans can be achieved. However, photogrammetric acquisition of 'problematic' artefacts such as small form-factor objects with challenging texture, optical properties or complex geometry can be achieved but requires "high attention and experience" in image acquisition [17]. In addition, even with the highest quality images, generic unadapted application of automated photogrammetric workflows can diminish the quality of the 3D model [18]. In multi-viewpoint photogrammetric acquisition [19], sets of photographs are obtained by moving a camera around an object as illustrated in Figure 1A. This ensures that the lighting conditions remain constant and that there are no changes to background features that could confuse the reconstruction processing. In the rotary photogrammetric approach proposed here, the camera is fixed on a tripod and the object is rotated by a turntable as shown in Figure 1B. Lighting conditions are kept constant by the use of a diffuse, overhead, central light source and background features are eliminated by the use of a matt, monochrome tabletop cover. It is impossible to completely reconstruct an artefact from a single set of photographs gathered using a turntable because there will be no information about the underside of the object. The conventional solution is to take several scans with the object in different orientations and then to merge the resulting models to form a single 3D mesh using a point-cloud registration technique such as the iterative closest point algorithm [20]. This works well for non-textured models, but, when texture mapping is applied, the appearance can be unsatisfactory. In areas where the meshes from two or more different partial models intersect, the texture can have a 'patchy' appearance due to the different illuminations of each viewpoint and, at the boundaries of each partial model, 'seams' can be observed for the same reason [21]. An automated solution to this problem that integrates point-cloud registration, meshing, and texture mapping processes is described in this paper.
The rotary photogrammetric acquisition method and the camera and smartphone versions of the system are described in Section 2, and the signal processing workflow is described in Section 2.4. Experimental results evaluating the precision of the system are presented in Section 3. Rotary Acquisition System A block diagram of the acquisition systems is shown in Figure 2 and photographs of the complete smartphone version are shown in Figure 3. The motivation for the development of the smartphone system version was the achievement of an ultra-low-cost system; the complete acquisition system, including a suitable smartphone, can be assembled for less than $150. The use of smartphones for photogrammetric acquisition has been widely reported for a range of object scales, from large rock formations [22] down to close-range acquisitions, for example, prosthetic socket interiors [23]. The rotating platform is an adapted turntable 200 mm in diameter, originally intended for use in jewellers' shop window displays. It contains a small motor that drives a pinion gear connected by a reducing gear to a larger gear moulded into the underside of the rotating top surface. In order to control the rotation more precisely, the original 1.5 V DC motor was replaced by a 5 V stepper motor. New control electronics were added inside the turntable base and power is supplied via a USB connection. As shown in the block diagram in Figure 2, a microcontroller receives instructions from either a computer via a USB serial link or a smartphone via a Bluetooth receiver module. An Arduino Nano [24] was used for the central microcontroller module, chosen for its small size, low cost and built-in USB serial adapter, used both for programming the microcontroller and for communicating with the computer to synchronise the motor and camera. A Bluetooth module was added to provide wireless serial communications with the smartphone. The firmware was designed to respond to commands from either the Bluetooth module or the USB port, allowing either smartphone or computer control with a single unit. Four of the digital input/output ports of the microcontroller were used to drive the stepper motor via a ULN2003A Darlington transistor array [25]. Application software running on either the computer or smartphone synchronises the turntable motion and the camera trigger. In the case of the smartphone, the phone's own built-in camera is used, whilst for the computer a Digital Single Lens Reflex (DSLR) camera is used, although any compatible high-resolution digital camera would suffice. The computer used in the prototype system was a Windows 10 laptop. Without a known datum, the scale of a photogrammetrically acquired model is arbitrary [16]. Unlike other similar systems (e.g., Nicolae et al. [17] and Porter et al. [26]), a pseudo-random calibration pattern is adhered to the top surface of the turntable for the automated calibration of the reconstructed 3D model: a process described in Section 2.4.1.
Acquisition Software For computer control, acquisition software was developed to perform three main tasks: triggering the turntable controller, triggering the digital camera shutter, and storing and indexing the images. The camera used in our experiments was a Canon (Tokyo, Japan) DSLR camera controlled, via USB, using the Canon Digital Camera Software Development Kit [27], which provides control of the camera's settings, remote shutter release, and direct download of captured photographs to the computer. An application using the Canon SDK was developed to manage the acquisition process. Smartphone control required a similar acquisition application to be developed using the Android Studio Integrated Development Environment (IDE) (Version 2.2, Google LLC, Mountain View, CA, USA) [28]. The only significant differences were that the Android app uses the smartphone's integrated camera and communication with the turntable controller is via a wireless Bluetooth link. For both computer and smartphone acquisitions, a complete set of 36 photographs taken at 10° intervals (derived empirically) can be performed by a single user request. In order to manage the data filing, an artefact ID is provided by the user. Successive acquisitions of the same object are stored in a series of sub-directories contained within a directory with the ID as its name. At the end of the acquisition process, the photographs are ready for photogrammetric reconstruction processing. Photography Although the system has been conceived to require minimal photographic expertise, there are some issues concerning camera parameters to consider. Depth-of-Field Macro photography typically suffers from limited depth-of-field due to the short subject distances relative to the focal length of the camera lens. In order to fill the image frame whilst maintaining focus over the entire target area of the calibration plate, the measures available are to reduce the aperture setting to a level where the depth-of-field is acceptable or to raise the camera elevation so that the required depth-of-field reduces. When using the DSLR camera with a 50 mm macro lens at f/22 aperture, the camera elevation was approximately 40° and the object-to-camera distance was 550 mm. Assuming a circle of confusion of 20 µm, and applying the calculations detailed by Kraus [29], the depth-of-field would be 98 mm. Referring to Figure 4, the width of the calibration pattern is 130 mm, which, when viewed at a 40° angle, reduces to a depth of d_1 = 130 cos(40°) ≈ 100 mm. This would suggest that the depth-of-field is barely adequate for this application, especially given that the f/22 aperture would be expected to introduce diffraction blurring of approximately 10 µm [29]. However, given that the average dimensions of the objects of interest are 43 mm by 51 mm [9], giving d_2 ≈ 50 cos(40°) ≈ 38 mm, the depth-of-field has proven to be sufficient in all practical cases. When using the smartphone camera, the aperture was fixed at f/2.4. The focal length of the lens was 4.6 mm and the sensor size is 6.4 mm. At an object-to-camera distance of 250 mm and assuming a circle of confusion of 3 µm, the depth-of-field is reduced to 42 mm. This is inadequate for larger cuneiform tablet fragments and leads to a degradation in the reconstruction precision for smartphone-acquired photographs (see Section 3).
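The depth-of-field figures quoted above can be checked with the standard hyperfocal-distance approximation; the short Python sketch below reproduces both the 98 mm and 42 mm values. It is a generic thin-lens estimate rather than the exact formulation of Kraus [29], so treat it as an illustrative cross-check.

```python
# Depth-of-field estimate from the hyperfocal distance (thin-lens
# approximation). All lengths are in millimetres.

def depth_of_field(f, N, c, s):
    """f: focal length, N: f-number, c: circle of confusion, s: subject distance."""
    H = f * f / (N * c) + f                  # hyperfocal distance
    d_near = s * (H - f) / (H + s - 2 * f)   # near limit of acceptable focus
    d_far = s * (H - f) / (H - s)            # far limit (assumes s < H)
    return d_far - d_near

# DSLR: 50 mm macro lens at f/22, CoC 20 um, subject at 550 mm -> ~98 mm
print(round(depth_of_field(50.0, 22.0, 0.020, 550.0)))   # 98
# Smartphone: 4.6 mm lens at f/2.4, CoC 3 um, subject at 250 mm -> ~42 mm
print(round(depth_of_field(4.6, 2.4, 0.003, 250.0)))     # 42
```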
Lighting Photogrammetric reconstruction relies on the matching of features between pairs of photographs. It is essential that the features remain consistent when the object is rotated, requiring lighting conditions that are constant with rotational angle. For this reason, it is usually recommended that uniform, diffuse illumination is used [30], which is ideal for objects with colour variations on their surface. The clay composition of cuneiform tablets, however, is relatively homogeneous and, when photographed under isotropic lighting, they appear to be featureless. Our compromise solution was to illuminate the object using an LED desk lamp approximately 200 mm above the centre of the turntable. At the f/22 aperture described previously, this required a shutter speed of no more than 1/4 s at ISO 200. This illumination geometry ensured there would be no rotational variation of lighting but that the features formed by the broken edges and cuneiform wedges would be clearly visible with good contrast. This does give rise to the problem that varying levels of illumination result in areas of light and shade on the surface that can subsequently be 'baked' into the 3D model's texture [19]. Steps to diminish this effect are described in Section 2.4.6. Image Processing Transforming the sets of photographs into the resulting 3D models consists of several processes, as illustrated in the block diagram in Figure 5. During acquisition, M photographs are taken for each of N artefact orientations. The subsequent processing produces a photographically textured, high-precision 3D model using a completely automatic and unsupervised workflow, as described in the following sections. Camera Properties and Geometry Estimation A Scale-Invariant Feature Transform (SIFT) [31] followed by bundle adjustment [32], using the implementation by Wu et al. [33], is used to generate estimates of the cameras' parameters and geometries. In the absence of calibrated metric cameras, the estimated camera positions have an arbitrary scale factor. A conventional solution is to include additional coded targets and/or scale bars in the image scene. In our workflow, automatic calibration is achieved by adding a sequence of virtual reference 'photographs' to each of the image sets. The reference 'photographs' are actually artificially generated replicas of the calibration plate, shown in Figure 3C, rendered using a sequence of viewpoints comparable to typical camera viewpoints. A sequence of twelve images rendered at 30° rotational intervals, all from a 45° elevation, was empirically found to work well. SIFT feature points are extracted and matched for this set of images prior to processing the 'real' photographs.
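As an illustration of this feature extraction and matching step, the sketch below uses OpenCV's SIFT implementation with Lowe's ratio test. The file names are placeholders, and the actual pipeline uses the implementation of Wu et al. [33] rather than OpenCV.

```python
# Illustrative SIFT matching between a rendered calibration view and a real
# photo. File names are placeholders; the paper's workflow uses the Wu et al.
# toolchain (VisualSFM), not OpenCV.
import cv2

virtual = cv2.imread("calibration_render_030deg.png", cv2.IMREAD_GRAYSCALE)
photo = cv2.imread("turntable_photo_010.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_v, des_v = sift.detectAndCompute(virtual, None)
kp_p, des_p = sift.detectAndCompute(photo, None)

# Lowe's ratio test: keep matches clearly better than the runner-up.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_v, des_p, k=2)
        if m.distance < 0.7 * n.distance]
print(f"{len(good)} putative correspondences for bundle adjustment")
```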
The camera geometry estimation process uses a well-established workflow [33], beginning by extracting the SIFT features of the real set of photographs and matching them with the existing calibration model. Bundle adjustment refines the estimated parameters of the unknown cameras whilst keeping the virtual camera positions, poses and intrinsic parameters fixed at their known, exact values (see Figure 6). By estimating the real camera positions relative to a static, calibrated set of feature points, the correct scale factors are automatically ensured. The final stage of this process is the removal of the virtual photographs from the image set prior to dense point-cloud reconstruction using only the real photographs and the corresponding calibrated camera parameter estimates. Using this approach, the discrete coded targets conventionally used for calibration are substituted by a single extended coded target filling most of the image scene that is not occupied by the object being acquired itself. This gives robust auto-calibration results and works with objects of many shapes and sizes. The complete geometry estimation workflow was implemented using code from Wu et al. [33] combined with customised code written in a combination of C++ and Matlab. A Windows batch script was used to automate the process. Dense Point-Cloud Reconstruction The open source Clustering Views for Multi-View Stereo (CMVS) algorithm [34,35] is used to reconstruct a dense 3D point-cloud from each set of photographs. The point density (typically between 100 and 300 points per square millimetre) is sufficient to resolve the features of the impressed cuneiform script and has proven more than adequate for the purpose of matching fragmented tablets [11]. There are other dense point-cloud reconstruction applications available, but CMVS was chosen for our application because it is free, has a well-documented command line interface making it easily automated, and produced results comparable, in our tests, to commonly recommended alternatives such as Photoscan by Agisoft (St. Petersburg, Russia) [36]. No modifications were made to the CMVS algorithm. Cropping Cropping is the removal of unwanted points that do not form part of the object being acquired: mostly points representing the turntable and calibration pattern and any supporting material used to stabilise the object during the acquisition. A dark-coloured supporting foam material was chosen to contrast with the much lighter colour of the cuneiform fragments so that it could be automatically identified and removed. The calibration model used during bundle adjustment lies on the z = 0 plane, so most of the extraneous points lie outside an axis-aligned bounding box with an x and y extent equal to the size of the calibration pattern and a z extent from the maximum z-coordinate in the point-cloud down to a few millimetres above the turntable surface. The actual lower z-limit used is derived by progressively calculating the average luminosity of points in millimetre-by-millimetre slices, starting at z = 2 mm above the turntable and increasing z until an average luminosity threshold is exceeded. By this process, the dark-coloured supporting material is automatically removed from the artefact's point-cloud.
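The cropping rule described above can be sketched in a few lines of NumPy. The point-cloud layout (one row of x, y, z, R, G, B per point) and the luminosity threshold value are assumptions made for illustration.

```python
# Sketch of the automated cropping step: an axis-aligned bounding-box crop
# plus an upward luminosity scan to find the top of the dark supporting foam.
# Point layout (x, y, z, R, G, B per row) and the threshold are assumptions.
import numpy as np

def crop_point_cloud(points, pattern_half=65.0, lum_threshold=100.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    lum = points[:, 3:6].mean(axis=1)          # simple luminosity proxy

    # Keep points inside the calibration-pattern footprint (130 x 130 mm).
    inside = (np.abs(x) <= pattern_half) & (np.abs(y) <= pattern_half)

    # Scan upward in 1 mm slices from z = 2 mm until the slice is, on
    # average, brighter than the dark supporting foam.
    z_low = 2.0
    while z_low < z.max():
        slice_mask = inside & (z >= z_low) & (z < z_low + 1.0)
        if slice_mask.any() and lum[slice_mask].mean() > lum_threshold:
            break
        z_low += 1.0

    return points[inside & (z >= z_low)]
```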
Point-Cloud Registration After photogrammetric point-cloud reconstruction has been applied to each set of M photographs, a set of N three-dimensional calibrated point-clouds has been generated. The process of merging the N partial models is one of point-cloud registration: matching the overlapping regions of the point-clouds from pairs of partial models. Point-cloud registration is performed in two stages:
• automatic coarse alignment of point-clouds using the Super 4PCS algorithm [37];
• fine alignment using a Weighted Iterative Closest Point (W-ICP) algorithm.
Super 4PCS is a reliable and efficient open-source algorithm for achieving coarse alignment between point-clouds in an automatic, unsupervised process. The algorithm described by Mellado et al. [37] is used without modification. Fine alignment is a modified version of the well-established ICP algorithm [38]. In the conventional algorithm, in order to orient one point-cloud, P, so that it matches another point-cloud, Q, an error function of the following form is minimised:

$E(\mathbf{A}_k) = \sum_{i} \lVert \mathbf{A}_k \mathbf{p}_i - \mathbf{q}_j \rVert^2, \qquad (1)$

where p_i is the i-th member of the point-cloud P, A_k is the current estimate of the optimal rigid transform matrix, and q_j is the j-th member of the fixed, reference point-cloud Q; j is chosen to select the closest point in the cloud to p_i. At each iteration of the process, the point-cloud correspondences are re-estimated and a new transform, A_k, is calculated to minimise the error function. The ICP algorithm in this form works under the assumption that all points in both point-clouds are equally precisely estimated, which, in our application, is not necessarily the case. Figure 7A,B illustrate the problem, showing two partial models of an object acquired from different viewpoints. Assuming view n_1 was photographed using a typical camera elevation of around 40°, the points p_1 and p_2 would have been photographed from a very shallow grazing angle and would be poorly illuminated by the overhead light source. As a result, their photogrammetric reconstruction would not have been as precise as that of the other points illustrated in the model. From viewpoint n_2 (the same object turned over), the corresponding points p_3 and p_4 would be better illuminated and in better view from the camera and, so, would be more precisely reconstructed. If the expected precision of each point can be estimated, their relative importance in the point-cloud registration process can be weighted accordingly. In addition, after registration, the overlapping regions can be automatically 'cleaned up' by eliminating unreliable points where a better close-by alternative can be found in another partial model.
As part of the dense point-cloud reconstruction process, each point is associated with a surface normal vector. The vertical (z) components of the correspondingly rotated normal vectors form a good first estimate of the potential reliability of each point. Figure 7C illustrates this concept. Points such as p_5 with poor expected precision can be identified by the negative vertical (z) component of the normal vector (i.e., the normal points downwards). Points along the top of the object, such as p_6, would all be expected to have good precision and can be identified by their large, positive z components. The only exceptions to this rule are found near the base of the object, e.g., point p_7. In this area, shadowing can cause such poor reconstruction that the normal vector itself can be imprecise. These points can be identified by their z coordinate relative to the bottom of the cropped object. These criteria are combined to form a single confidence metric for each point:

$c = \max(n_z, 0) \cdot \min(z/\lambda, 1), \qquad (2)$

where n_z is the z component of the point's normal vector and z is the point's z coordinate in millimetres relative to the cropping height used in the previous section. The constant, λ, sets the height-range of the subset of points near the base that are expected to be less precise; a value of approximately 1 mm has been found to work well in practice. An example illustrating the use of this confidence metric is shown in Figure 8. Points on the top, upward-facing surface of the object have confidence values close to the maximum, whilst those near the bottom show a confidence close to zero. The sides of the object show varying confidence values according to the inclination of the surface. A comparison of Figure 8A,B shows that the regions around the sides of the object that were poorly lit or whose view was obscured do, correctly, receive correspondingly lower confidence values. In order to make use of the confidence values, a modified Weighted ICP (W-ICP) algorithm is used. This algorithm is the same as conventional ICP except that the error function from Equation (1) is modified to be:

$E(\mathbf{A}_k) = \sum_{i} c(\mathbf{p}_i)\, c(\mathbf{q}_j)\, \lVert \mathbf{A}_k \mathbf{p}_i - \mathbf{q}_j \rVert^2, \qquad (3)$

i.e., the contribution of each point-to-point distance is weighted by the product of the confidence values of each point in the pair. This ensures that the most precisely estimated points contribute most to the error function and will, therefore, be registered most precisely. After the last iteration, each pair of points is combined to form a single interpolated point using the same confidence values. Given a pair of points p_i and q_j, the resulting interpolated point, r_i, in the merged point-cloud would be:

$\mathbf{r}_i = \frac{c(\mathbf{p}_i)\,\mathbf{p}_i + c(\mathbf{q}_j)\,\mathbf{q}_j}{c(\mathbf{p}_i) + c(\mathbf{q}_j)}. \qquad (4)$

This merging operation helps to ensure that no 'seams' remain at the edges of the original point-clouds resulting from imprecise unpruned points. Meshing Following the point-cloud registration and merging operations described above, the points require connecting to form a surface mesh before texturing can be applied. Poisson surface reconstruction [39] was used for this stage, chosen for its relative simplicity and reliability. Some care is needed to avoid loss of detail on inscribed surfaces. We found that an octree depth of 14 gave a good compromise, retaining the detail of the inscriptions with a tractable computational complexity.
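A compact NumPy sketch of the confidence metric and the confidence-weighted merge is given below. The functional form of the metric follows the reconstruction in Equation (2), so it should be read as a plausible interpretation of the paper's description rather than the authors' exact code; the value of λ is likewise an assumption.

```python
# Sketch of the per-point confidence metric (Eq. 2) and the confidence-
# weighted merge of matched point pairs (Eq. 4). The exact functional form
# of the metric is an assumption reconstructed from the description above.
import numpy as np

def confidence(normals, z, lam=1.0):
    """normals: (N, 3) unit normals; z: (N,) heights above the crop plane (mm)."""
    n_z = np.clip(normals[:, 2], 0.0, None)   # downward-facing points -> 0
    base = np.clip(z / lam, 0.0, 1.0)         # points within lam of the base
    return n_z * base

def merge_pairs(p, q, c_p, c_q):
    """Confidence-weighted interpolation of matched point pairs."""
    w = (c_p / (c_p + c_q))[:, None]
    return w * p + (1.0 - w) * q
```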
Texturing The texturing process refers back to the original sets of photographs and the camera position information calculated during the sparse point-cloud reconstruction to determine the detailed appearance (i.e., texture) of each face of the mesh. Meshlab [40] provides relatively easy-to-use parameterisation and texturing processing suitable for this task, provided the camera locations can be imported in the correct format. This is a more complex task than in the conventional single-scan photogrammetry workflow [41] because the merging process will have reoriented the component parts of the mesh, requiring the camera positions to be moved accordingly. The starting point of the process is the set of camera locations estimated by the bundle adjustment processing of VisualSFM [42]. The position of the m-th camera during the n-th partial acquisition can be expressed in the form of a single 4-by-4 view-matrix, V_mn. As a result of the point-cloud registration stage, each of the N partial meshes has been transformed from the location assumed by the bundle adjustment process. Consequently, before texturing, the mesh must also be transformed by a model-matrix, M_n, which is the inverse of the optimal transform calculated during point-cloud registration for the n-th partial acquisition. Thus, the rotation and translation of the m-th camera during the n-th partial acquisition is expressed by the model-view-matrix, V_mn M_n. An example of the resulting ensemble of camera positions is illustrated in Figure 9. Having correctly repositioned and reoriented the cameras, Meshlab is able to parameterise and texture the mesh, producing the final 3D model. Figure 10 shows an example of a completed reconstruction. Performance Evaluation An obvious approach to determining the precision and resolution of a 3D acquisition system is to test its performance with geometrically simple calibration objects of known dimensions (e.g., cubes or spheres [43]). The difference between the resulting point-cloud and the calibration object can then be calculated. However, for photogrammetric systems, the acquisition process relies on the detection of multiple feature points on the object surface. Smooth, regular geometric shapes lack the features needed for precise reconstruction. Strategies have been developed to compensate for a lack of features [17,44] but would introduce unnecessary uncertainty to this comparison process. Our approach was to use 3D printed replicas of cuneiform tablet fragments and to compare the 3D models produced by the rotary acquisition system to those produced by a high-resolution 3D scanner. The replicas used were made from high-resolution laser scans of cuneiform tablet fragments and were fabricated using stereolithography 3D printing at a resolution of 10 µm. The scanner used was a Ceramill Map300 3D dental scanner (Amann Girrbach AG, Austria), chosen for its rated precision of less than 20 µm. These scans were used as the ground-truth data for evaluating the 3D acquisition precision of the photogrammetric system.
Two 3D printed fragment replicas were photographed using the rotary acquisition system described in Section 2, from three viewpoints each, and the point-clouds were reconstructed using the processing outlined. Each fragment was photographed using a Canon EOS 450D 12.2 Megapixel digital SLR camera and a Google Nexus 4 Android smartphone (Google LLC, Mountain View, CA, USA and LG Electronics, Seoul, South Korea) with a built-in 8 Megapixel camera. For comparison purposes, scans were also taken using a commercial 3D structured light scanner, the DAVID-SLS-2 system (DAVID Vision Systems GmbH, Koblenz, Germany) [45]. Such 3D scanners project patterns of light onto the object surface and estimate the shape of the object from the distortions of the patterns observed by a camera viewing from a different angle to the projector. Results and Discussion Figure 11 shows photographs of the two 3D prints during acquisition and the resulting 3D models. After reconstruction of each 3D model, a comparison was made with the high-resolution scanned model. The root-mean-squared (RMS) error between the vertices in the model's surface mesh and the corresponding closest points on the surface mesh of the high-resolution scan was calculated using the CloudCompare 3D processing application [46]; the results are summarised in Table 1. The experimental results presented in Table 1 show an improved precision for the DSLR camera compared with the smartphone. This was anticipated given the improvement in image resolution (12.2 Megapixel DSLR vs. 8 Megapixel smartphone), the greater depth of field (see Section 2.3.1) and the superior optics of the 50 mm macro lens used by the DSLR compared with the built-in 5 mm lens of the smartphone camera. Nevertheless, both are comparable with the performance achieved by the structured light scanner, and both compare favourably with the 100 µm errors reported in similar applications with much more expensive laser scanning equipment [47]. Both the DSLR camera and smartphone photogrammetry systems have been used for the scanning of cuneiform tablet fragments at the British Museum. Tablet fragments, including the fragment shown in Figure 10, have been acquired from the collection of tablets excavated at Ur, which is thought to contain many matching but unjoined fragments. Our ambition is to assist reconstruction by identifying virtual joins. An example of a close-fitting virtual join between two fragments acquired from the Ur collection using this system is shown in Figure 12; the closeness of the fit between the fragments is only possible with high-precision models such as those provided by the system. The fragments shown in Figure 12 have been made available in an online interaction [48] with an interface designed to support joining and study tasks [49]. The acquisition system has also been used in the virtual joining of two tablet fragments from the third tablet of the Old Babylonian version of the Atrahasis epic [50][51][52]. The physical tablet fragments are separated by 1000 km: one in the British Museum in London and the other in the Musée d'Art et d'Histoire in Geneva.
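The comparison itself was performed in CloudCompare; the sketch below is a simplified stand-in that computes the RMS of nearest-neighbour distances (point-to-point, rather than the point-to-mesh distance CloudCompare uses), with placeholder vertex arrays.

```python
# Simplified stand-in for the CloudCompare comparison: RMS of nearest-
# neighbour distances from the photogrammetric model's vertices to the
# high-resolution reference scan (point-to-point, not point-to-mesh).
import numpy as np
from scipy.spatial import cKDTree

def rms_error(test_vertices, reference_vertices):
    tree = cKDTree(reference_vertices)
    d, _ = tree.query(test_vertices)          # closest reference point per vertex
    return np.sqrt(np.mean(d ** 2))

# rms = rms_error(model_verts, scan_verts)    # in mm; e.g. ~0.04 for the DSLR
```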
One area that has not been a focus of this work has been processing time. The total processing time of the workflow is typically between 2 and 4 h depending on the computer and GPU specification, the number of viewpoints used, and the size of the physical object. Most of this processing time is taken by the CMVS dense point-cloud algorithm, the point-cloud registration used to join partial scans from different viewpoints, and the photographic texturing. The fully automated workflow allows processing to be offloaded to resources such as high performance computing clusters or cloud computing services, meaning the long processing time does not impede the throughput of acquisitions of large numbers of artefacts. There is scope for further work toward the optimisation of operating conditions with the aim of improving heuristic estimates of parameters such as the number of photographs taken and the relative positioning of camera and lighting. There is also scope for further work toward refinements in texture processing. Currently, rendered artefact textures have subjectively pleasing photo-realistic appearances, but, under certain conditions, there can be a discrepancy between the colour of some regions and the actual albedo of the real artefact. For example, an artefact region photographed only whilst partially lit will appear darker in its virtual form. With appropriate calibration and knowledge of the lighting conditions, these discrepancies can be predicted and corrected, giving an improved photo-realistic appearance. Hopefully, there will be further efforts toward the realisation of low-cost, high-definition, 3D acquisition systems, ideally through open source initiatives. Conclusions The rotary photogrammetric acquisition systems presented in this paper are very low-cost but high-performance solutions for the 3D acquisition of small form-factor objects. The workflow innovations presented enable automation of the acquisition and reconstruction processes such that no user intervention is required after the photographs are acquired. Experimental testing of the 3D acquisition precision has shown the performance using the 12.2 Megapixel DSLR camera to be better than a commercial 3D scanner. Models created using this system have been successfully used to join fragmented tablets automatically, thereby demonstrating the application of this system for the automatic reconstruction of fragmented cuneiform tablets.
Figure 1. (A) Multi-viewpoint camera approach for photogrammetric acquisition of a fixed artefact; (B) the rotary photogrammetric approach, achieving the same image set with a fixed camera and moving turntable.
Figure 2. Block diagram of the hardware components of the rotary acquisition system.
Figure 3. (A) The turntable and smartphone acquisition app in use; (B) inside the turntable base; (C) the 130 × 130 mm calibration pattern on the turntable platter.
Figure 4. Side view of the camera geometry illustrating the depth-of-field required for focusing over the entire depth of the turntable (depth-of-field d_1), and over just the depth of the artefact (depth-of-field d_2).
Figure 5. Block diagram of the signal processing workflow (in the experiments presented in this paper, M = 36 and N = 3 or 4).
Figure 6. Results from the estimation of camera properties and geometry. Estimated camera positions and poses are shown for the real and virtual/reference cameras. The sparse point-cloud formed from the feature points used for matching is visible at the bottom of the figure.
Figure 7. Example cross-section of two point-clouds, (A) n_1 and (B) n_2, acquired from the same object. Due to the camera and lighting geometry, points p_1 and p_2 would not be expected to have been estimated with the same precision as points p_3 and p_4. (C) Surface normals used to estimate point-cloud precision. Point p_5 has a normal vector with a negative vertical component, indicating poor expected precision. Point p_6 has a large, positive vertical component, indicating good expected precision. Point p_7 has a positive vertical component; however, its proximity to the base of the object indicates that the point, as well as the estimation of its normal vector, may nonetheless be imprecise.
Figure 8. (A) A reconstructed dense point-cloud acquired from a cuneiform tablet fragment (U 30056) after the automated cropping process; (B) the corresponding confidence metric of each point.
Figure 9. Camera positions and orientations estimated from four sets of 36 photographs (M = 36, N = 4). The complete textured model is shown in Figure 10.
Figure 10. Obverse, reverse, and side views of a completed cuneiform fragment model (U 30080) acquired using the system with the DSLR camera.
Figure 11. (Ai) and (Bi) are photographs of the two 3D printed cuneiform tablet fragment facsimiles (created from scans of W 18349 and Wy 777) used to test the acquisition precision. (Aii) and (Bii) are 3D models created using the rotary acquisition system with the DSLR camera. (Aiii) and (Biii) are 3D models created using the rotary acquisition system with the smartphone. (Aiv) and (Biv) are 3D models created using the DAVID-SLS-2 structured light scanner.
Figure 12. Visualisations of a pair of cuneiform tablet fragments, (A) UET 6/748 and (B) UET 6/759, automatically joined in virtual form with the result shown in (C). The 3D models were acquired using the rotary photogrammetric acquisition system with the DSLR camera.
Table 1. Quantitative results comparing acquisition precision.
7,826
2019-12-01T00:00:00.000
[ "Computer Science" ]
Variability of the Linguistic Consciousness Development of an Individual in the Artificial Bilingualism Conditions Purpose. The purpose of this study was to outline the variable markers of the development of an individual's linguistic awareness/consciousness in the conditions of artificial bilingualism. Introduction The linguistic awareness of an individual means the purposefulness of linguistic actions, which are built on the basis of the anticipatory reflection of objective reality with the help of generalized meanings objectified in words and fixed in social experience. Numerous studies in the fields of linguistics, sociolinguistics and psycholinguistics (Yavorska, 2000; Selihey, 2009; Tokareva, 2018; Ivaniuk, Goroshko & Melnyk, 2020, etc.) prove that language awareness directs a person's speech and language activity, "forms, preserves and transforms linguistic signs, the rules of their conjugation, usage, views and attitudes towards language and its elements" (cited in Selihey, 2009: 13), which allows an individual to realize and interpret the facts of the linguistic behavior of individuals and/or national communities. In her study of prescriptive linguistics as a discourse, Yavorska (2000) considers the linguistic awareness or consciousness of an individual as a result of reflection; language reflection takes place on two levels: the first (surface) consists of language views and assessments inherent in a certain community, while the second (deep) is realized when choosing language options for text creation. Therefore, attitudes about language behavior and a person's evaluative attitude towards language are possible only where there is variability. In this context, Yavorska (ibid.) interprets linguistic awareness as "a set of culturally and socially determined attitudes towards language that reflect collective value orientations" (ibid.: 145). Given the debatable nature of the categorization of linguistic consciousness in psycholinguistic science (Tokareva, 2018; Ivaniuk, Goroshko & Melnyk, 2020; Shymko, 2021, etc.), we must specify the concept as used in this study: we consider linguistic awareness as one of the levels of the structure of a holistic picture of the human world, an invariant among the multitude of possible schemes of mastering reality that is most suitable for the purposes of communication between people (Ivaniuk, Goroshko & Melnyk, 2020: 64). In this context, language is not only a communication tool, but also a way of perceiving, organizing and encoding (or decoding) the surrounding reality. On the basis of the polyfunctional language system internalized by the linguistic personality, the human consciousness produces a kind of conceptual-linguistic universe, a language world view; the logical-semantic constructs reflected in it (subject, predicate, modal, discursive and other "quanta" of meaning) form a coherent unity (a kind of collective philosophy, a projection of the sociolect of speech subjects), which forms the conceptual space of the mental continuum (Tokareva, 2018). Predictors of the language world view perceived by the subject and the content characteristics of speech activity determine the vectors of self-expression and self-affirmation, and the expression of the national identity of the community (people) to which the individual belongs.
An important marker of the individual worldview is bilingualism (from the Latin bi, "two", and lingua, "language"), driven by the spread of population migration due to political and social crises in society and closely related to the life activities of marginalized groups and their educational and economic problems. The phenomenological field of bilingualism is not new to the scientific community (Weinreich, 1953; Fabbro, 1999; Howat & Widdowson, 2004), but it still remains ambiguous and hotly debated; terminological difficulties in defining this concept and describing its essential features persist. The debate about the influence of the practice of alternating use of two languages on the development of individual intelligence started at the beginning of the 20th century with the research of the well-known theoreticians of language contact Uriel Weinreich (Weinreich, 1953) and Einar Haugen (Haugen, 1953). In linguistic studies, scientists stated that bilingualism occurs when a person switches from one language code to another in specific conditions of speech communication (regardless of whether it is a transition from one language to another, from a national language to a dialect, or to a language of international communication). At the same time, Weinreich (1953) recognized that the degree of mastery of two languages cannot be formulated purely in linguistic terms; this is one of the numerous aspects of bilingualism, for the study of which linguistics needs cooperation with psychology and the social sciences. At the end of the 20th century, Azhniuk, within sociolinguistic discourse, carried out a study of bilingualism markers in the language of the Ukrainian diaspora in the USA; he used the term "metalanguage consciousness", interpreting it as an individual's ability to mentally compare the languages they know. Azhniuk's research proves that bilinguals who used Ukrainian and English in communication deepened their understanding of various linguistic phenomena, formed appropriate assessments, and had the opportunity to be in two "language worlds" at the same time (cited in Selihey, 2009: 13). Modern polydiscourse studies, including psycholinguistic studies, are marked by the interdisciplinary nature of the analysis of the bilingualism phenomenon; polymodal aspects of the manifestation of bilingualism are the subject of study in psycholinguistics, cognitive psychology, the psychology of speech, sociolinguistics, and linguistic and cultural studies. Research paradigms for the analysis of bilingualism highlight, first of all, the mechanisms of processing linguistic material in the minds of bilinguals (Schneller, 2013; Marks et al., 2022), linguistic and psycholinguistic patterns of foreign language (L2) acquisition in childhood (Kempert & Hardy, 2015; Creel, Rojo & Paullada, 2016; Garton & Copland, 2019; Cahyati, Parmawati & Atmawidjaja, 2019; Giguere & Hoff, 2023; Oh, Bertone & Luk, 2023) and in adulthood (Forsyth, 2014; Bergmann, Sprenger & Schmid, 2015; Marks et al., 2022). Sociolinguistic and psycholinguistic markers of the bilingualism landscape have also begun to attract the attention of scientists.
In particular, it has been proven that children and adolescents master a foreign language more easily (increasing vocabulary, reading speed, speaking skills and reading comprehension) in conditions of natural bilingualism, when the mastery of two languages occurs in direct contact with both language environments; under such conditions, children and adolescents not only learn the foreign language (L2) at school, but also hear it at home (Creel, Rojo & Paullada, 2016; Aquino-Sterling & Salcedo-Potter, 2019; Persici, Majorano, Bastianello & Hoff, 2022; Oh, Bertone & Luk, 2023; Giguere & Hoff, 2023) or actively use it in direct communication with peers (Bergmann, Sprenger & Schmid, 2015; Rankin, Grosso & Reiterev, 2016; Frigolé & Tresserras, 2023; Giguere & Hoff, 2023). As a result, in the linguistic consciousness of a naturally bilingual person, there is an adequate replacement of the language and culture constructs of the native mental background with foreign ones. Natural bilingualism reflects the way of thinking and the integrity of the dual world view of an individual. At the same time, in the conditions of artificial bilingualism (formal bilingualism, which ensures the mastery of a foreign language (L2) during its specialized study (Voronin & Rafikova, 2017)), incomplete acquisition of foreign speech systems has been noted (deficiencies in the planning of speech structures, insufficient articulation skills, defects in tempo and rhythm), along with manifestations of negative interlanguage transfer (Forsyth, 2014; Bergmann, Sprenger & Schmid, 2015; Tumbull, 2018; Frigolé & Tresserras, 2023). Similar trends are also observed in the translation of texts (Bergmann, Sprenger & Schmid, 2015; Creel, Rojo & Paullada, 2016; Tumbull, 2018; Zasiekin, 2019; Saienko, Novikova & Sozykina, 2022; Frigolé & Tresserras, 2023). This allows us to assert that a person in the conditions of artificial bilingualism unconsciously translates language structures, modeling the communicative situation within the native culture. At the same time, researchers of bilingualism emphasize that the indicator of conscious (not formal) mastery of a foreign language is language awareness. To this end, when learning a foreign language (L2), students are not only introduced to its structural forms (grammar, rules of reading, writing, etc.), but also encouraged to understand the system-functional resources of the non-native language; this stimulates students' interest, actualizes motivation and, as Selihey (2009) emphasizes, strengthens their commitment to language, instilling resistance to the policy of linguicide and the phenomena of language manipulation (ibid.: 14). Therefore, artificial bilingualism in the realities of the permanent transformations of wartime creates for an average person, first of all, a formal opportunity to communicate in a foreign language and expand knowledge about the culture of another nation, thus optimizing human adaptive resources. A person's ability to communicate (in particular, to produce texts) in a foreign language (L2), directing their consciousness to the subject matter rather than the form of speech activity, is one of the main psychological characteristics of the formation of individual bilingual competences. At the same time, we consider it expedient to emphasize that the limitations of artificial bilingualism (lack of a language environment, insufficient development of metacognitive forms of mental activity, meager practice of text creation, etc.)
make the ability to independently build the logical and semantic structure of speech expression the most problematic aspect of bilingual personality development (Tokareva, 2018); another problematic aspect is the understanding of textual constructions of a foreign language (Cahyati, Parmawati & Atmawidjaja, 2019; Tokareva & Tsehelska, 2020). In general, the analysis of the scientific and scientific-methodological discourses of the specified problem allows us to state that, despite the wide continuum of theoretical and empirical research presented by the world cohort of scientists, it is too early to speak of a holistic concept of the development of the linguistic consciousness of an individual in the modern polylingual space. Therefore, taking into account the debatable content of the given problem, as well as realizing the importance of understanding the trends of the development of an individual's linguistic consciousness in the unstable chronotope of being, we conducted an empirical study of the markers of an individual's linguistic consciousness; we consider this study a pilot project for defining the vectors of an integrative theory of the development of the linguistic consciousness of the individual in the modern polylingual space. The psycholinguistic dimensions of the development of the linguistic consciousness of an individual were chosen as the object of research. We considered the subject of the study to be the concretization of trends in the development of the linguistic consciousness of an individual in the conditions of artificial bilingualism. Within the scope of this study, we consider bilingualism as mastery of a foreign language (L2) at a level sufficient for communication and the exchange of ideas with speakers of a foreign language; in this context, what matters is not in which language a person thinks, but whether a person can communicate and exchange thoughts with other subjects of communication using a foreign language. We defined the purpose of this study as the delineation of the variable markers of the development of an individual's linguistic consciousness in the conditions of artificial bilingualism. As a research task, we considered the possibility of evaluating the variability of the use of bilingual competences in the subjects' foreign language communication, taking into account the experience of artificial bilingualism. Methods The polymodality of the development of the linguistic consciousness of an individual in the conditions of the instability of the modern sociocultural situation necessitated the use of the evolutionary-synergistic paradigm of scientific rationality, which allows analyzing the phenomenological field of speech development from the standpoint of the self-organization of open dissipative systems in the unity of sociocultural, psychological, and psycholinguistic contexts. Taking into account that the contextuality of the organization of the structural dimensions of speech expression is formed (in the unity of the "language - speech - speech activity" system) and fixed in the process of the formatting of an individual's linguistic consciousness, we made a hypothetical assumption about the existence of differences in the detection of markers of the development of a person's linguistic consciousness, taking into account the experience of artificial bilingualism.
In order to test the hypothesis, we implemented an empirical research program; the main method of the study was selective observation with the fixation of markers of the development of language awareness and of qualitative indicators of the respondents' mastery of English as a foreign language in the conditions of artificial bilingualism. Two potential standards of artificial bilingualism were used for comparison: (i) a passive-mechanical (imitation) model of artificial bilingualism, based on the respondents' mastery of the grammatical (formal) structure of the non-native language (morphology, word formation rules, syntax); (ii) an active (cognitive-communicative) model of artificial bilingualism, in the educational space of which the modeling of the respondents' linguistic consciousness was implemented through mastery of the system-functional resources of the non-native language. In the format of the monitoring research project, the markers for detecting the respondents' linguistic awareness were: knowledge of the language, culture and speaking skills (at a level sufficient for formulating and expressing thoughts in the process of interpersonal communication in a foreign language), and manifestations of language socialization (mastering the norms of listening, perceiving and speaking in a foreign language (L2) at a level sufficient for communication and the agreement of semantic codes between subjects of communication). The obtained data were subjected to content analysis, which made it possible to evaluate the markers of the development of the respondents' linguistic awareness in the given realities of artificial bilingualism and to reveal the level of activity in using a foreign language in communication. The computer statistical program IBM SPSS Statistics 19 ("Statistical Package for the Social Sciences") was used for summarizing and analyzing the empirical materials. The variables were checked for normality of distribution. In order to statistically confirm the significance of the obtained data, the method of one-factor variance analysis (Fisher's φ-criterion) was used. At the second stage of data processing, a multidimensional procedure of cluster analysis (K-means clustering) was applied, which made it possible to single out subsets of levels of linguistic awareness development among the respondents of the research project. Participants The Private Enterprise "Educational Center 'Interclass'" (Kryvyi Rih, Ukraine), certified as an institution of extracurricular humanitarian education, was chosen as the base of the empirical study. The sample of respondents was formed randomly; it was made up of 38 students of the primary education groups of the "Interclass" Educational Center, whose learning of the English language (L2) took place in the format of an active model of bilingualism with the involvement of the developmental resource of the system-functional learning paradigm. The age of the respondents (6-7 years old) is due to the peculiarities of the age development of younger schoolchildren; it is this age group of children that is most sensitive to the systematic development of metacognitive forms of mental activity, the formatting of the "learned bilingualism" experience (Voronin & Rafikova, 2017) and the constructive modeling of the secondary language personality (Cahyati, Parmawati & Atmawidjaja, 2019; Tokareva & Tsehelska, 2020).
As a control group, a first-grade class (35 children aged 6-7 years, randomly selected) of a secondary school in the city of Kryvyi Rih (Ukraine) was chosen. The foreign language training of the junior schoolchildren in the control group was carried out in the format of a passive model of artificial bilingualism with a focus on speech grammar. In order to comply with the principles of ethical and professionally adequate behavior accepted in the scientific environment, the authors obtained informed consent from the parents of potential participants of the empirical study regarding the involvement of children of primary school age in learning English in the format of an active model of artificial bilingualism (see: Tokareva & Tsehelska, 2020). Results The modeling of the research paradigm of the bilingual markers of primary schoolchildren's linguistic personality was carried out taking into account the fact that, in the modern coordinates of comprehensive internationalization, the development of human linguistic consciousness in the areas of artificial bilingualism is gaining special relevance. The polylingualism of students' life activity determines the need to go beyond "correctly formatted sentences" and direct the development of linguistic consciousness into the range of discourse of foreign language contexts. In response to today's demands, on the basis of the Educational Center "Interclass" (Kryvyi Rih, Ukraine), in the format of a longitudinal research project, children of primary school age are taught English in the dimensions of a system-functional approach using metacognitive schemes (see: Tokareva, 2022). In this context, the actualization of metacognitive processes in the system of development of reflective and metacognitive forms of mental (and, in particular, language) activity is recognized as an axiomatic scenario for the development of the resource potential of primary school students. The given conditions of artificial bilingualism (in the format of an active model) determine the schoolchildren's mastery of the English language world view and the formation of a conscious attitude to a foreign language, which helps them understand the new socio-cultural reality and promotes the development of students' linguistic awareness. In the context of the above, the results of this pilot project concern the identification of markers of the development of the linguistic awareness of the individual (in particular, children of primary school age) in the conditions of artificial bilingualism (the measurement was taken based on the results of work during 6 months, i.e., the 1st semester). The following were subject to analysis: representation of knowledge of a foreign language (denotative representation of content units, adequacy of the ordering of syntactic constructions, ability to produce formal (grammatical, lexical, syntactic) constructions), skills in perceiving and understanding messages in a foreign language, and competent text creation in the given conditions of artificial bilingualism.
In the course of the research, normative language and speech constructs of the corresponding semantic and/or grammatical series, the repetition of lexemes and syntactic constructions in the respondents' speech, and the frequency of typical speech reactions among the respondents were recorded (the absolute frequency of meaning creation, adequate text creation and congruent dialogue was calculated); this made it possible to reconstruct, to a certain extent, the vectors of the respondents' linguistic awareness and to reveal the level of activity of younger schoolchildren in the use of a foreign language in the conditions of artificial bilingualism. The generalized results of measuring the quantitative indicators of the absolute frequency of the demonstration of polymodal markers of the development of linguistic awareness of primary schoolchildren in the conditions of artificial bilingualism (in the educational space of mastering the English language) in the compared groups of the sample are shown in Table 1; the markers include the perception of information expressed in a foreign language in conditions of direct and indirect intercultural communication (listening) and an adequate verbal and/or non-verbal response to the perceived information. The results of the statistical stage of data analysis proved that the differences between the experimental and control groups of primary school students in identifying markers of linguistic awareness are reliably significant with respect to individual predictors of the logical ordering of the speech-thinking dimensions of language awareness (at p ≤ 0.05). The respondents of the experimental group demonstrate statistically better results in listening (p = 0.038 and p = 0.001), in understanding the content of oral expression in a familiar everyday context (p = 0.041), and in the identified communicative competences: the ability to create simple real-time messages using a few short sentences (p = 0.017) and the ability to interact with other individuals in different communication situations (p = 0.005). At the same time, the results of the empirical research allow us to state that there are no differences between the respondents in knowledge of a foreign language or in the skills of critical evaluation of information; the indicators of pronunciation and accentuation of commonly used words by the primary schoolchildren of the experimental and control groups do not differ statistically. Participant observation of the work in the "Interclass" study groups also shows that the children of the experimental group behave more confidently in situations of artificial bilingualism and meaningfully solve complex tasks (listening, perception, understanding) taking into account the given conditions of the bilingual culture; mastering the grammatical structure of a foreign language, they use personal intellectual resources and adequately format the logical-semantic and system-functional predictors of text messages in a foreign language. In the context of the above, the efficiency of using a foreign language (L2) should be considered one of the main indicators of the formation of the respondents' bilingual competences. Further use of the multidimensional procedure of cluster analysis of the average indicators of the isolated markers of linguistic awareness (K-means clustering) made it possible to identify clusters (groups of respondents) characterized by the compatibility of the features of the dominant indicators.
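As an illustrative equivalent of this analysis pipeline, the following Python sketch runs a normality check, a group comparison, and K-means clustering into three levels on placeholder data. The paper used IBM SPSS Statistics 19 and Fisher's φ-criterion; a standard one-way ANOVA is shown here as a stand-in, and the data arrays are synthetic.

```python
# Illustrative equivalent of the SPSS analysis: normality check, one-way
# ANOVA across the two groups (a stand-in for Fisher's phi-criterion), and
# K-means clustering of marker scores. The data arrays are placeholders.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
experimental = rng.normal(0.6, 0.1, (38, 5))   # 38 pupils x 5 marker scores
control = rng.normal(0.4, 0.1, (35, 5))        # 35 pupils x 5 marker scores

for k in range(5):
    w, p_norm = stats.shapiro(experimental[:, k])             # normality check
    f, p = stats.f_oneway(experimental[:, k], control[:, k])  # group comparison
    print(f"marker {k}: normality p={p_norm:.3f}, group difference p={p:.3f}")

# Three clusters corresponding to the initial / medium / high levels.
scores = np.vstack([experimental, control])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
# 'labels' assigns each pupil to one of the three levels of development.
```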
Three groups of respondents with different levels of crystallization of markers of individual development of linguistic awareness were identified: initial, medium and high, the gradation of which differs in the experimental and control groups (Fig. 1). In the study of the phenomenology of linguistic consciousness and its level, we share the opinion of Selihey (2009), who emphasized that the level of linguistic consciousness is not so much an ontological concept as a heuristic one, which "helps to study and typologize the attitude of different people to language and linguistic reality" (ibid.: 17). We consider the identification of three levels of development of the linguistic consciousness of an individual to be evident, the detection markers of which are combined into three clusters. The first cluster marked the initial level of identifying markers of the respondents' linguistic awareness, which is characterized by unstructured knowledge of a foreign language, the absence of regulators of the linguistic behavior of the individual, and the axiological infantilism of the respondents' foreign language worldview (16.0% of the younger schoolchildren of the experimental group and 63.0% of the respondents of the control group). The second cluster records markers of the detection of an average level of linguistic awareness development; in children of this group (74.0% of respondents of the experimental group and 34.0% of the control group) the bilingual indicators of language activity are more organized, the role of intellectual operations in speech activity is strengthened (understanding of the chronotope of events, elementary skills of categorization and comparison of objects and phenomena are formed). At the same time, schoolchildren perceive a foreign language (L2) only as a pragmatic (utilitarian) means of communication, their ideas about the foreign language world view are contradictory and subjective. The third cluster indicates a relatively high level of development of the respondents' linguistic awareness; it is characterized by a more meaningful linguistic world view, the expression of all structural and functional components of a foreign language (formation of valuable ideas about a foreign language (L2), conscious assimilation of norms of linguistic behavior etc.), a fairly high level of foreign culture generalization. At this level of linguistic consciousness development, a person does not need special motivation, mastering the language moves to the stage of conscious perception. And although the high level of linguistic awareness development among primary schoolchildren is just beginning to take shape (tendencies to identify markers of this level of linguistic awareness are traced only in 10.0% of respondents of the experimental group and in 3.0% of respondents of the control group), it attests to the prospect of the individual's linguistic development in the format of bilingual education. In general, the generalization of the empirical study data proves that the primary schoolchildren of the experimental group in the conditions of artificial bilingualism achieved higher quality results in the linguistic awareness development. 
The positive results of introducing an active (cognitive-communicative) model of artificial bilingualism into the English language learning process of primary school students, and in particular a system-functional program with extensive use of the resources of metacognitive schemes (Tokareva, 2022), confirm the possibility of purposeful addition and ordering of competence scenarios for mastering a foreign language by schoolchildren (listening, perception, understanding, speaking) in the schemas of the internalized experience of artificial bilingualism. Systematic use of metacognitive schemes of a bilingual order (see, for example: Tokareva, 2023) turns students into active creators of personal bilingual experience; in this context, the productive acquisition of bilingual competences can be interpreted as an open evolutionary process of modeling the linguistic consciousness of an individual. Discussions Recognizing that language is a certain socio-cultural relay of the polylingual experience of humanity, we are fully aware that one of the predictors of optimizing the development of a person's linguistic consciousness is the formation of foreign language competence (in particular, in the process of mastering a foreign language). At the same time, it should be taken into account that in the conditions of a modern multilingual society, bilingualism appears as an interdisciplinary phenomenon that illuminates the linguistic situation, implicitly affects the development of the linguistic consciousness of subjects of linguistic activity, and structures the vectors of the architecture of the language worldview, and therefore requires the attention of related sciences. Linguistic, psycholinguistic, sociolinguistic, and cognitive studies of recent years convincingly prove the productivity of artificial bilingualism as a system of "the functioning of two linguistic and cultural codes in the linguistic consciousness of an individual and an effective tool that promotes the acquisition of a foreign language" (Saienko, Novikova & Sozykina, 2022: 279). The scientific community recognizes that the language codes of a person's native and foreign (L2) languages interact with each other (in particular, at the level of language interference), and that the native language conditions the formation and formulation of thought in a foreign language, implementing a program of co-activation (parallel activation) of languages and weakening the barrier of cognitive load on the individual in the conditions of artificial bilingualism (Rankin, Grosso & Reiterev, 2016; Saienko, Novikova & Sozykina, 2022: 280-281). Sharing the stated positions, we at the same time consider it expedient to differentiate more clearly the resource potential of the active (cognitive-communicative) model of artificial bilingualism, focused on the systematic development of the linguistic consciousness of the individual in the context of learning a foreign language. We also consider it necessary to emphasize that the optimization of the multilingual dimensions of the modern educational space requires adequate state support (comprehensive programs for the bilingual development of the linguistic consciousness of the individual, the creation of bilingual infrastructure in educational institutions, the variability of modeling standards of the linguo-didactic bilingual language space, etc.).
Conclusions The generalization of the results of the theoretical-empirical study of the psycholinguistic continuum of the development of the linguistic consciousness of the individual in the dimensions of artificial bilingualism provided the basis for the following conclusions: (1) in the realities of the permanent transformations of the modern information society, bilingualism appears as a predictor of mastery of a foreign language at a level sufficient for communication and exchange of ideas with other subjects of linguistic activity; (2) language (and a foreign language in particular) is not only a communication tool, but also a way of perceiving, organizing and encoding (or decoding) the surrounding reality; in this context, linguistic consciousness is interpreted as an invariant of possible schemes of mastering reality, which is most suitable for the purposes of communication between people; (3) the analysis of the markers of the development of linguistic awareness of primary school students proved that in the conditions of the implementation of the active
The contribution of general language ability, reading comprehension and working memory to mathematics achievement among children with English as additional language (EAL): an exploratory study ABSTRACT An increasing number of high-stakes mathematics standardised tests around the world place an emphasis on using mathematical word problems to assess students’ mathematical understanding. Not only do these assessments require children to think mathematically, but making sense of these tests’ mathematical word problems also brings children’s language ability, reading comprehension and working memory into play. The nature of these test items places a great deal of cognitive demand on all mathematics learners, but particularly on children completing the assessments in a second language that is still developing. This paper reports findings from an exploratory study on the contribution of language to mathematics achievement among 35 children with English as an Additional Language (EAL) and 31 children with English as their first language (FLE). The findings confirm the prominent role of general language ability in the development and assessment of mathematical ability. This variable explained more variance than working memory in word-based mathematics scores for all learners. Significant differences were found between the performance of FLE learners and EAL learners on solving mathematical word-based problems, but not on wordless problems. We conclude that EAL learners need to receive more targeted language support, including help with specific language knowledge needed to understand and solve mathematical word problems. Introduction The role of language in mathematics learning and teaching is undeniably crucial. Building on the work of Bruner (1966), Haylock and Cockburn (2013) argue that language is one of the four key experiences of mathematics learning in addition to concrete manipulation, pictures and symbols. However, not only do mathematics learners have to learn a large number of technical and abstract terms that are specific to the subject (e.g. trapezoid and rhombus; sine, cosine and tangent functions), they also need to have a good knowledge of everyday vocabulary as mathematical problem solving has become increasingly more grounded in everyday contexts. Potential confusion arises when the boundary between these two types of language becomes blurry. For example, a lexically ambiguous term, such as odd can be taken to describe something that is strange or abnormal in the everyday context, but when it is used as a mathematical term in relation to numbers, it can be taken to describe any integer that cannot be divided exactly by 2. More examples of lexically ambiguous words include 'volume', 'degree' and 'root' among several other examples. To some mathematics learners, the boundary between these two types of language becomes even more blurry when they encounter homophones, that is words that sound the same, but have different meanings in different contexts, for example, 'pi' vs 'pie' and 'serial' vs 'cereal' (Adams, Thangata, and King 2005). The difficulties involved in distinguishing between the everyday usage of terms and their specific academic meaning illustrate Cummins' (2003) distinction between basic interpersonal communicative skills (BICS) and cognitive academic language proficiency (CALP): the former is taken to refer to the language required for social situations while the latter refers to language which is necessary for academic success. 
Acquiring CALP in a second language is particularly challenging for learners for whom the language of instruction is not their first language. According to Cummins, these learners may take five to ten years to develop age-appropriate command of CALP. In the context of England, such linguistic complexities are particularly relevant as the recently revised primary mathematics curriculum (Department for Education 2013, 4) emphasises the role of context in mathematics teaching and learning; for example, Year 2 pupils (6-7 years old) should be taught to 'solve simple problems in a practical context involving addition and subtraction of money of the same unit, including giving change'. These linguistic features highlight the important role of mathematics learners' vocabulary knowledge and reading comprehension skills, particularly in the context of solving mathematical word problems, which have increasingly become a standard assessment tool to measure learners' mathematical understanding and performance. Not only can these word problems be found in England's Standardised Assessment Tasks (SATs), they can also be found in other national tests, such as the USA's National Assessment of Educational Progress (NAEP) and Australia's National Assessment Program - Literacy and Numeracy (NAPLAN), as well as international tests, such as the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA). Whilst the linguistic features, as previously outlined, place a great deal of cognitive demand on all mathematics learners, they are arguably much more challenging for learners for whom English is an additional language (EAL) and who are more limited in terms of their vocabulary knowledge and reading comprehension skills, particularly when compared to their peers who have English as their first language (FLE) (Burgoyne et al. 2009). While there is considerable evidence that bilinguals (including EAL children) have smaller vocabularies than monolinguals, and smaller vocabularies are thought to be in part responsible for lower levels of reading comprehension in EAL children (e.g. Murphy and Unthiah 2015), there is virtually no research into the contribution of these variables to EAL children's mathematics achievement. The current study aims to fill this gap by furthering our understanding of the contribution of reading skills and language ability to mathematical knowledge in EAL children. EAL children's academic achievement Studies from different countries have found that EAL children's mathematical achievement is generally higher than their language achievement. In Australia, for example, the gap in academic performance of EAL children is less marked in mathematics than it is for reading (Australian Curriculum, Assessment and Reporting Authority 2016). According to Strand, Malmberg, and Hall (2015), this is also the case in the UK, where the scores of EAL children on mathematics assessments are always higher than on reading assessments at every age. At age 11, there is a sizeable gap for reading only, but no gap for mathematics, grammar, punctuation and spelling, whilst by age 16, EAL students are slightly more likely than FLE students to achieve a good pass in mathematics for the General Certificate of Secondary Education (GCSE), the examination taken in England at age 16.
Yet, when compared with other contexts, 'there is far less consistency with respect to EAL children's L1 background' in the UK (Murphy and Unthiah 2015, 34), with over 240 different languages reportedly spoken as a first language by primary school-aged children in the UK (Demie, Hau, and McDonald 2016). Furthermore, even though the number of EAL children in UK primary schools has doubled from 7.6% in 1997 to 16.2% in 2013 (Strand, Malmberg, and Hall 2015), much less research has been undertaken with EAL learners in the UK than elsewhere, underscoring the need for further studies of mathematics within that context. One possible reason for the limited amount of research conducted in the UK is that assessing EAL children with tests that were developed for monolinguals is not appropriate for many reasons and monolingual norms associated with these tests are often inappropriate for bilinguals (Martin et al. 2003;Gathercole 2013). Furthermore, the heterogeneous nature of the EAL learner population in the UK makes it problematic to refer to the existence of an 'EAL gap'. Strand, Malmberg, and Hall (2015), for example, note that there is a difference between (a) EAL students who report English as their main language and (b) EAL students who report a language other than English as their main language, with the first group obtaining higher scores than the second and they conclude that the range of achievement among EAL students is as wide as for FLE students. Understanding the causes of this variability is very important for policy makers and teachers. A recent report on secondary school attainment in England states that proficiency in English is the key factor behind the differences in academic achievement (Demie, Hau, and McDonald 2016). Exploring potential relationships between general language ability, reading comprehension, and mathematical ability As previously indicated at the end of the introduction section, general language ability and reading comprehension are closely linked to pupils' ability to make sense of mathematical word problems. However, given the lack of research exploring the contributions of these variables to mathematical proficiency in EAL children, we thus hope to address this research gap. Each of these two variables will now be explored in turn. General language ability There is evidence to suggest that proficiency in mathematics is related to general language ability (e.g. Fuchs et al. 2015). We subscribe to the view that there are generic as well as specific components in individual language proficiency. According to Harsch (2014, 153) 'language proficiency can be conceptualised as unitary and divisible, depending on the level of abstraction and the purpose of the assessment and score reporting'. Investigations of general language ability and its relationship with mathematical ability, need, however, a valid and reliable measure of the former. Arriving at a single score (which is based on a set of separate scores for different skills or components of language proficiency) to represent unitary aspects of language proficiency is a challenge. There are few tests which claim to measure EAL learners' general language ability, apart from the large scale tests that are used for admission to university, such as the IELTS or TOEFL for English which measure a wide range of skills which are then converted to an overall score. If such a test exists, it should correlate strongly with tests of different skills and/or components of language ability (vocabulary and grammar). 
Eckes and Grotjahn (2006) claim that the C-test, which was developed by Raatz and Klein-Braley (1981), does tap into general language proficiency, rather than specific skills (e.g. either reading or writing). In their study, the C-test correlates strongly with written as well as oral tests of German as a foreign language. Similar results were obtained by Dörnyei and Katona (1992), who show that the C-test can be used to measure global language proficiency. In Eckes and Grotjahn's (2006) study, strong correlations were found between the C-test and vocabulary and grammar, although the correlations with vocabulary were stronger than those with grammar. A C-test is a variant of the cloze test: instead of deleting entire words, the second half of every second word is deleted, and by convention the deleted word parts are given in bold in published examples. The test generally consists of five or six short texts in which approximately 20 gaps need to be filled. Using factor analysis and Rasch analysis, Eckes and Grotjahn (2006) provide comprehensive evidence that their German C-test is unidimensional. The C-test format could potentially reveal very interesting differences between EAL learners and children whose first language is English, because the C-test is an integrative test which does not only tap micro-level skills (word-level skills) but also macro-level skills: some of the gaps can only be filled if the grammar or the lexis in other parts of the text are taken into account (Klein-Braley 1994). We know little about EAL learners' ability to deal with macro-level skills, which is why the C-test could reveal important new information about the specific language proficiency profiles of EAL learners. The current project thus aims to contribute not only to the discussion about the differences in language proficiency between FLE and EAL mathematics learners, but also to further our understanding of the suitability of this particular test format in assessing EAL children. Reading comprehension On the basis of the evidence reviewed in their paper, Murphy and Unthiah (2015) suggest that weaker reading comprehension skills may in part be responsible for lower levels of academic achievement among EAL learners. It might also be worth noting that one needs both decoding (word recognition) and linguistic comprehension skills (the ability to use lexical information to achieve sentence and discourse level interpretation) in order to be able to read (Hoover and Gough 1990). Decoding and linguistic comprehension are the two key dimensions of a widely used model of reading ability: the Simple View of Reading (Gough and Tunmer 1986). While most researchers in the field accept that mastery of these two dimensions is needed, it is also important to note that they are independent of each other. It is possible to have very good comprehension skills but poor decoding skills (as is the case in dyslexia), whilst there are also readers with good decoding skills but poor comprehension, often referred to as 'poor comprehenders' (Yuill and Oakhill 1991). The implications of poor comprehension for mathematics learning are apparent in a study by Pimperton and Nation (2010) of twenty-eight 7-8-year-old children who took three tasks (of mathematical reasoning, verbal ability, and phonological short-term memory) over two testing sessions.
Their findings regarding poor comprehenders' significantly lower scores than controls on the mathematical reasoning task suggest that poor comprehenders need to be identified early, in order to rectify potential linguistic deficiencies that impact on other aspects of learning. Building upon Pimperton and Nation (2010), Bjork and Bowyer-Crane (2013) examined the cognitive skills used by sixty 6-7-year-old children in solving mathematical word problems and numerical operations. The children in their study completed five tasks (reading accuracy, reading comprehension, phonological awareness, verbal ability, as well as numerical operations and mathematical word problems) over two testing sessions. Their findings suggest poor reading comprehension is related to difficulties with mathematical word problems, though not numerical operations. The available evidence suggests that EAL learners have similar comprehension problems to those of poor comprehenders, whilst single word reading is less problematic for this group (Hutchinson et al. 2003; Burgoyne et al. 2009). In Burgoyne et al.'s (2009) study, EAL children even outperformed monolingual children in tests of decoding (accuracy of reading single words or connected text), but they clearly lagged behind their monolingual peers in listening and reading comprehension. Vocabulary has been found to be a key predictor of reading comprehension among both first language learners (Ouellette and Beers 2010; Tunmer and Chapman 2012) and EAL learners (Melby-Lervåg and Lervåg 2014). Burgoyne et al. (2009, 742) conclude that 'the weaker vocabulary skills of EAL learners […] place significant constraints on their comprehension of written and spoken text'. In particular, Burgoyne et al. (2009) suggest it is because of their smaller vocabulary that EAL learners are less able to make good use of written texts to support the formulation and expression of responses to comprehension questions. While vocabulary is recognised by many authors as being of key importance, little research has been done into the contribution of general language ability to reading comprehension and academic achievement. There is a lack of consensus as to whether aspects of oral language skills have a similar relation to reading comprehension for L1 and L2 learners. In a Canadian longitudinal study, Lesaux, Rupp, and Siegel (2007) followed L1 and L2 English-speaking kindergarteners to the fourth grade. It was found that a measure of syntactic skills (cloze task) in kindergarten predicted reading comprehension in both groups of children. Conversely, Droop and Verhoeven (2003) found a stronger effect of morphosyntactic skills (sentence repetition) on the reading comprehension performance of L1 Dutch-speaking children (8-10-year-olds) than on that of their L2 peers. Finally, Babayiğit (2014) explored the relationship between oral language (i.e. vocabulary and morphosyntactic skills [sentence repetition]) of English-speaking L1 and L2 children (9-10-year-olds) in England. Evidence was found to suggest a predictive relationship between oral language and reading comprehension levels, with a tendency for the relationship to be stronger in the L2 than in the L1 group. In a more recent study, Babayiğit (2015) extended her findings to show that weaker oral language skills (vocabulary, sentence repetition, verbal working memory) explained the lower reading comprehension performance of English-speaking L2 learners (10-year-olds).
The current study aims to shed more light on this field by including a different measure of general language ability (C-test) among the variables studied in relation to EAL children's academic achievement in the domain of mathematics. Relationships between general language ability, reading comprehension and mathematical ability A traditional method of measuring mathematical ability has been to assess pupils' ability to solve word problems (Abedi and Lord 2001). Typically, pupils are expected to choose and collate relevant information from the problem, and to use it to solve the problem. Making sense of mathematical word problems can thus present a challenge for EAL learners whose command of the English language is still developing. Vilenius-Tuohimaa, Aunola, and Nurmi (2008) subscribe to the view that there is a close relationship between reading comprehension and mathematical ability, to the extent that reading comprehension skills appear to predict performance on mathematical word problems. Other studies appear to suggest that the lexical complexity (e.g. word frequency, the use of idiomatic phrases and words with multiple meanings, and culturally specific non-mathematical vocabulary items) of mathematics word problems might influence comprehension difficulties for EAL pupils with mathematical word problems (Abedi and Lord 2001). Taken together, research exploring the relationship between reading comprehension, language and mathematical ability is fragmented, with previous studies arguably tending to focus on exploring either the relationships between mathematical performance and reading comprehension or between mathematical performance and vocabulary knowledge separately (e.g. Fuchs et al. 2015). Additionally, the findings are, to an extent, inconclusive. For example, while Vilenius-Tuohimaa, Aunola, and Nurmi's (2008) study of 225 Grade 4 (Year 5) children in Finland found that children's performance on solving mathematical word problems was strongly related to performance in reading comprehension, Imam, Abas-Mastura, and Jamil's (2013) study of 666 public high school students in the Philippines found no statistically significant correlation between these two variables. The current study will attempt to contribute to this field by comparing and contrasting EAL and FLE children's mathematical performance, and the relationship, if any, with their reading comprehension skills and general language ability. Exploring relationships between children's working memory performance and their mathematical ability Closely linked to both linguistic and mathematical proficiencies is working memory, or the mental space that is involved in the controlling, regulating, and maintaining of relevant information needed to achieve complex cognitive tasks (Miyake and Shah 1999). With particular regard to mathematical ability, such complex tasks require competence and control for specific procedures (e.g. arithmetic, algebra) as well as problem solving, which requires the temporary holding of information needed to arrive at particular solutions, calling upon working memory resources. While numerous studies suggest that working memory is a significant predictor of mathematical abilities and outcomes (Siegel 2001, 2004; Passolunghi and Pazzaglia 2004), the same predictive power was not found in other studies, for example, Bull, Johnston, and Roy's (1999) study of 7-year-old children and Swanson's (2011) study of 6-7-year-old children.
The relationship between working memory and mathematical ability is therefore far from clear, and the mixed findings indicate that there is some way to go before we can evaluate how working memory and mathematical ability might be interrelated, particularly in relation to EAL learners. This is thus another area that this study sets out to investigate. The current study The current study builds on the research outlined in the previous section by focusing on the following three research questions: 1. How does EAL learners' performance on mathematics (word problems and wordless problems) tests differ from that of FLE learners? 2. How does EAL learners' performance on reading comprehension, general language ability (English) and working memory tests differ from that of FLE learners? 3. To what extent are EAL children's reading comprehension level, general language ability and working memory scores related to their mathematics word problem solving performance? Participants This study focused specifically on Year 5 children (9-10 years old) because they are one of the two age cohorts (9-10 and 13-14 years old) that are tested, every four years, in the internationally recognised Trends in International Mathematics and Science Study (TIMSS). One of the key benefits of such alignment is the access to a wealth of TIMSS's publicly released and well-constructed mathematics test items that set out to measure (9-10-year-old) children's mathematical proficiency in three different domains, namely number, geometric shapes and measures, and data display. Additional details of these test items are included in the following section. Fifty-two urban schools across the south of England were approached because of their reported statistics of EAL children in their schools. Nine schools were happy to take part in the study; the percentage of EAL children in these schools ranged from 46.8% to 73.7%, and the percentage of children who were eligible for free school meals (an index of deprivation widely used in the UK) ranged from 20.6% to 32.4%. Of these schools, 35 EAL children (16 boys: 19 girls) and 31 FLE children (17 boys: 14 girls) agreed to participate, and they represented a wide range of ability levels in mathematics and reading. The EAL children in our study came from 11 different countries (China, Egypt, Germany, Iceland, India, Italy, Pakistan, Poland, Portugal, Romania or Uganda) and they spoke 10 different first languages (Arabic, German, Icelandic, Italian, Konkani, Luganda, Polish, Romanian, Telugu or Urdu) in addition to English. A purposive sampling strategy was employed in this study, whereby Year 5 children who had arrived in the UK from a non-English speaking country within the past five years were invited to take part. At the time of data collection, the children in the EAL group had been in the UK for 2.72 years on average (SD 1.61). The following distribution gives further details: 0-12 months: 9 children; 13-24 months: 11 children; 25-36 months: 5 children; 37-48 months: 4 children; and 48-60 months: 3 children (with missing data for 3 children). Survey instruments and procedure The data collection took place between December 2014 and March 2015, and in June 2016. A battery of five tests was administered to each child, and this took around an hour per child to complete. To assess mathematical achievement, the children were asked to solve 20 mathematical test items: five wordless arithmetic problems; five word problems on numbers (e.g. 'Paint comes in 5 L cans, Sean needs 37 L of paint. How many cans must he buy?'
Correct answer: 8); five word problems on shapes and measures (e.g. 'The school playground is a square. The playground is 100 metres long. Ruth walks all the way around the edge of the playground. How far does she walk?' Correct answer: 400 metres); five word problems on data handling (e.g. 'There is a picture of a scale with apples on it, with an arrow pointing towards a mark in between two points labelled 200 and 250. How much do the apples weigh?' Correct answer: 220). The fifteen word problems were obtained from the Grade 4 (Year 5) TIMSS 2011 study with permission from the International Association for the Evaluation of Educational Achievement (IEA), Boston College. The five wordless problems (e.g. '28 × 29 = ?' and '41 ÷ 5 = ?') were created by the research team, drawn from England's primary mathematics curriculum (Department for Education 2013). Children had 25 minutes to complete the mathematics assessment. Snowling et al.'s (2009) York Assessment of Reading for Comprehension (YARC) was used to assess children's reading ability (accuracy, comprehension and rate). The test consists of two different components, namely a Single Word Reading Test (SWRT) and a reading comprehension test. The former comprises 60 words, ranging from 'see' and 'look' to 'haemorrhage' and 'rheumatism'. For each child, an accuracy score and a reading rate were recorded for the SWRT. The SWRT score also informs the level of the first YARC reading passage they are expected to read. If children scored four or less in answering questions relating to the first passage, they would be asked to read a passage of a lower difficulty level. If they scored more than four, they would be asked to read a passage of a higher difficulty level. General language ability was measured with a C-test, which consisted of three different texts. Two of these were created on the basis of texts from Corbett's (2013) writing models for Year 5, and the third (Ali Baba) came from the animated tales for Year 5, published by the Hamilton Trust (2016). This third text was taken from a different source because there were not enough suitable texts with clearly different topics in Corbett's volume. We first created a C-test following the traditional deletion principle, deleting the second half of every second word, but found in piloting that this task was very difficult for the children. Therefore, we simplified it by deleting the second half of every third word instead. We also simplified some of the vocabulary items by replacing low-frequency items with higher-frequency words. For example, in The Stormy Rescue: 'the rain had finally slackened' was changed to 'the rain had finally stopped' and 'she squinted out through the side windows' became 'she jumped out through the side windows'. While we acknowledge that in some cases we were unable to match the central meanings of the replaced words (e.g. 'squinted' vs 'jumped'), in such cases our primary intention was to increase the likelihood that our texts would be accessible to both subject groups. Each passage contained 20 missing word parts (deletions in bold), for example, 'She had left the window down just in case the fox came back' (The Stormy Rescue). The maximum score for each passage was 20. To assess working memory, the children were asked to complete the backward digit span working memory task from the Automated Working Memory Assessment (AWMA; Alloway 2007).
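To make the modified deletion principle described above concrete, here is a minimal sketch (not the authors' script) that removes the second half of every third word; the punctuation handling and the rounding for odd-length words are our own assumptions, and real C-tests typically also leave the opening sentence intact and cap the gaps at 20 per passage.

```python
# Minimal sketch of the "second half of every third word" deletion rule.
import re

def make_ctest(text: str, every: int = 3) -> str:
    words = text.split()
    out = []
    for i, w in enumerate(words, start=1):
        # Strip trailing punctuation so it survives the deletion.
        core = re.match(r"^(\w+)(\W*)$", w)
        if i % every == 0 and core and len(core.group(1)) > 1:
            # Keep the first half of the word (rounded up for odd lengths).
            keep = (len(core.group(1)) + 1) // 2
            out.append(core.group(1)[:keep] + "___" + core.group(2))
        else:
            out.append(w)
    return " ".join(out)

print(make_ctest("She had left the window down just in case the fox came back"))
```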
In order to ensure that the working memory task was accessible to the EAL and FLE groups, we avoided vocabulary-specific working memory tasks (e.g. Gaulin and Campbell 1994), as well as domain-specific working memory tasks (e.g. Passolunghi and Costa 2016). The adopted task ranges from Span 2 (two digits) to Span 7 (seven digits). The discontinuation rule is to stop after 3 consecutive errors. If the child was successful on four trials of any given span level, they could proceed to the next level. Working memory scores were measured using the longest backwards digit score. To avoid any potential order effect, half of the children (comprising both EAL and FLE children) were asked to take the mathematics test before the reading comprehension test, and the other half were asked to take the reading comprehension test before taking the mathematics test. Preliminary data analysis The children's responses on the reading comprehension test were carefully interpreted and scored by members of the research team according to guidance provided in the YARC scoring manual. To investigate reliability, two members of the research team randomly chose the reading comprehension test responses of ten children (from the total of 66 children), representing around 15% of the sample, to be moderated. Of these ten children, five were those with EAL and the other five were those with FLE. The inter-rater reliability for the reading comprehension was very high at 96.05%, calculated using Cohen's kappa (Cohen 1960). We also computed ability scores for each of the two passages and a mean of these two variables following the procedures in the YARC manual. The reliability of the two reading comprehension passages was high, with a Cronbach's alpha coefficient of .873. Similarly, the C-test marking was also moderated to ensure reliability. Two members of the research team marked the tests independently and then met to draw up a list of children's responses that required further discussion. Altogether, 20 responses across the passages were highlighted, ranging from misspellings (e.g. cliff vs clif) and grammatical errors (e.g. bags vs bag) to semantic differences (e.g. soon vs somehow). While for adults it is common to accept only correctly spelled answers (Eckes and Grotjahn 2006), this is not necessarily the best approach with low-proficiency learners. In, for example, Linnemann and Wilbert's (2010, 117) study of Grade 9 learning disabled students in Germany, 'each correct restoration scored one point, that is, the original word or an orthographically, grammatically and semantically correct variation'. In our study, on careful consideration of the learner proficiency levels and the consistency of some minor errors, we took a slightly different approach and awarded a point for grammatically and semantically correct variations as well as simple misspellings (e.g. cliff vs clif), as it can be argued that they still demonstrate comprehension by our young participants. The reliability of the language ability test was high, with a Cronbach's alpha (CA) coefficient of .883. Finally, the mathematics test was highly reliable, with a CA coefficient of .805. Research Question 1: How does EAL learners' performance on mathematics (word problems and wordless problems) tests differ from that of FLE learners? The mathematics test data were normally distributed and did not violate the assumptions for the use of parametric tests.
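As an illustration of the two reliability checks just described, the following sketch computes Cohen's kappa and Cronbach's alpha with standard Python routines; the rater codes and item scores are toy data, not the study's.

```python
# Minimal sketch of inter-rater agreement (Cohen's kappa) and internal
# consistency (Cronbach's alpha). Toy data for illustration only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 2, 0, 1, 2, 2, 1]   # scores assigned by marker A
rater_b = [1, 0, 1, 1, 2, 0, 1, 2, 1, 1]   # scores assigned by marker B
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = np.array([[18, 16, 17], [9, 8, 10], [14, 15, 13], [20, 19, 18]])
print("Cronbach's alpha:", cronbach_alpha(scores))
```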
Therefore, we first ran an ANOVA to see if there were significant differences between the total mathematics scores for the two groups. This turned out not to be the case (see Table 1 below). We expected the differences to be non-significant, because the total mathematics score consists of wordless and word-based problems and we expected differences to emerge only with the word-based problems. Therefore, we carried out a Multivariate Analysis of Variance (MANOVA) with the total wordless mathematics score and the word-based problems as dependent variables and the grouping variable (EAL versus FLE) as the independent variable, which was indeed significant (F(2, 61) = 3.261, partial η² = .097, p = .045). Further analyses revealed that of these two, only performance on word-based mathematics problem solving was significantly different between the two groups (F(1, 62) = 6.543, partial η² = .095, p = .013). Among the three components of word-based mathematics items (i.e. numbers, shapes and measures, and data handling), the numbers component was the only one which was significantly different (F(1, 64) = 6.153, partial η² = .088, p = .016). Research Question 2: How does EAL learners' performance on reading comprehension, general language ability (English) and working memory tests differ from that of FLE learners? Before examining the relationship between reading comprehension ability, general language ability, working memory and mathematics scores, we first examined how EAL and FLE learners differ in these areas. Working memory scores and single word reading test (SWRT) scores were not normally distributed. Therefore, a series of non-parametric Mann-Whitney U-tests was conducted, with the results displayed in Table 2. In line with previous research (e.g. Twist, Schagen, and Hodgson 2007), FLE learners outperformed EAL learners in reading comprehension. In fact, all language test scores were significantly higher for the FLE group than for the EAL group. Effect sizes ranged from .27 for the YARC reading comprehension test to 3.9 for the C-test. The high standard deviations for the latter indicate that there was considerable variation in performance within both groups too. However, there were no significant differences in working memory score between the two groups. The finding that SWRT scores were significantly lower for EAL learners is surprising in the light of Burgoyne et al.'s (2009) assertion that single word reading and decoding is less problematic for EAL learners compared to reading comprehension. This issue will be taken up in more detail in the discussion section. For the EAL group, length of residence correlated moderately and significantly with all measures: reading comprehension (r_s = .491, p = .008), general language ability (r_s = .373, p = .036), single word reading (r_s = .489, p = .004) and word-based mathematical problem solving (r_s = .399, p = .029). Unsurprisingly, those learners who had been in the UK longer performed better on all tests. Research Question 3: To what extent are EAL children's reading comprehension level, general language ability and working memory scores related to their mathematics word problem solving performance? To answer this question a series of Spearman correlations was performed and the results are displayed in Table 3.
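A minimal sketch of these non-parametric analyses (Mann-Whitney U tests for the group comparisons and a Spearman correlation for length of residence) is given below; the variable names and values are illustrative assumptions, not the study data.

```python
# Minimal sketch: Mann-Whitney U tests and Spearman correlation with scipy.
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr

eal = pd.DataFrame({"swrt": [31, 40, 28, 45, 37], "reading_comp": [4, 6, 3, 7, 5]})
fle = pd.DataFrame({"swrt": [48, 52, 44, 50, 47], "reading_comp": [6, 8, 5, 9, 7]})

for measure in ["swrt", "reading_comp"]:
    u, p = mannwhitneyu(eal[measure], fle[measure], alternative="two-sided")
    print(f"{measure}: U = {u:.1f}, p = {p:.3f}")

# Spearman rank correlation, e.g. length of residence vs reading comprehension.
residence_months = [6, 30, 14, 50, 22]
rho, p = spearmanr(residence_months, eal["reading_comp"])
print(f"residence vs reading comprehension: r_s = {rho:.3f}, p = {p:.3f}")
```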
The results indicate that there were differences in the relationship between word-based mathematical problem-solving performance and measures of literacy between the EAL and FLE group. For the EAL group, there was a significant correlation between reading comprehension and word-based mathematics problem solving performance, which is in line with results of previous research (e.g. Abedi and Lord 2001). Single word reading ability, general language ability and working memory correlated significantly with mathematics scores in both groups. The correlation between, on the one hand, word-based mathematics, and, on the other hand, reading comprehension, general language ability and the SWRT supports Clarkson's (1992) claim that linguistic proficiency plays an important role in mathematics attainment. A final remaining question is how we can explain the differences between the FLE and the EAL groups with respect to their word-based problem skills. A series of hierarchical regression analyses was conducted to investigate which factors explained word-based mathematics scores of the group as a whole, and then of the FLE and EAL groups separately. While some measures were not normally distributed, the distribution of residuals was checked and found to be within normal ranges, therefore confirming the appropriateness of using regression analysis. Firstly, a two-stage hierarchical regression analysis was conducted for all learners using the stepwise method. To control for the effect of working memory, this variable was entered at stage 1, which accounted for 16% of the variance observed (R² = .155, F(1, 59) = 10.858, p = .002). At stage 2, reading comprehension score, SWRT score and the language ability score were entered into the model. Language ability and SWRT were excluded in model 2, which comprises working memory and reading comprehension. The final model accounts for 42% of the variance in word-based mathematics scores (R² = .421, F(3, 56) = 26.658, p < .001). Collinearity was assessed and the statistics fell well within the accepted range. As can be seen from the regression model statistics displayed in Table 4 (regression analysis of word-based mathematics scores, all learners), reading comprehension score is the variable that explains the most variance in word-based mathematics scores for all learners. The same regression analysis was conducted on the groups of FLE and EAL learners separately to explore differences between the two groups of learners. For the FLE group, model 1 (including working memory only) explained 26% of the variance (R² = .255, F(1, 28) = 9.907, p = .004) (see Table 5). The reading comprehension score, the SWRT score and the language ability score were entered into model 2. For the FLE learners, reading comprehension and SWRT were excluded in model 2, which comprises working memory and language ability (C-test) only. The final model accounts for 44% of the variance in word-based scores for FLE learners (R² = .443, F(1, 28) = 9.464, p = .005). When entered in the model alone, language ability accounts for 29% of the variance (R² = .288, F(1, 29) = 11.752, p = .002). For the EAL group, again using stepwise hierarchical regression, the final regression model included reading comprehension only, since working memory and language ability were excluded. SWRT was not included in the model as collinearity statistics were well above acceptable levels.
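The following sketch shows one way to set up such a two-stage hierarchical regression in Python: working memory is entered first, the language measures second, and the R² change is inspected. Variable names and data are assumptions, and the SPSS-style stepwise exclusion used by the authors is not reproduced.

```python
# Minimal sketch of a two-stage hierarchical regression with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "word_maths":   [8, 4, 10, 6, 12, 5, 9, 7, 11, 3],
    "working_mem":  [4, 2, 5, 3, 6, 2, 4, 3, 5, 2],
    "reading_comp": [6, 3, 8, 5, 9, 4, 7, 5, 8, 2],
    "c_test":       [40, 22, 51, 30, 55, 25, 44, 33, 49, 18],
})

m1 = smf.ols("word_maths ~ working_mem", data=df).fit()                      # stage 1
m2 = smf.ols("word_maths ~ working_mem + reading_comp + c_test", data=df).fit()  # stage 2

print(f"Stage 1 R^2 = {m1.rsquared:.3f}")
print(f"Stage 2 R^2 = {m2.rsquared:.3f} (change = {m2.rsquared - m1.rsquared:.3f})")
print(m2.summary().tables[1])  # coefficients, t-values, p-values
```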
The model explained 44% of the variance in word-based mathematics scores for EAL learners (R² = .436, F(1, 29) = 22.458, p < .001) (see Table 6). A significant finding of this study is that general language ability, as measured by the C-test, plays a key role in performance on word-based mathematics assessments for FLE learners. Additionally, C-test scores correlated strongly with SWRT for both groups and with reading comprehension for both groups, but in particular for the EAL group (see Tables 7 and 8). Potential reasons for such high correlations will be taken up in the discussion. (In Tables 4-8: *significant at the .05 level, **significant at the .01 level, ***significant at the .001 level, all two-tailed.) Discussion and conclusions The findings of the current study underline the prominent role language plays in the development and assessment of mathematical ability. Consistent with the findings of previous studies, the results show that FLE learners significantly outperform EAL learners in the word-based component of the mathematics test only. The results of this study also indicate that there are differences in how reading comprehension ability is related to mathematical word problem solving performance for FLE and EAL learners. Our results are in line with previous research that has shown English reading comprehension ability to be related to mathematical word problem solving performance for EAL learners (e.g. Abedi and Lord 2001). However, contrary to the findings of Vilenius-Tuohimaa, Aunola, and Nurmi (2008), the correlation between reading comprehension and word-based mathematics for FLE learners is not significant, which confirms findings of Imam, Abas-Mastura, and Jamil (2013). The lack of correlation between reading comprehension and word-based mathematics for the FLE learners may be accounted for by the higher level of reading comprehension of these learners, who already possess the linguistic knowledge required for the mathematics test. A strength of the current study lies with the inclusion of the language ability (C-test) and SWRT alongside a measure of reading comprehension, which allows for a more in-depth investigation of the contribution of different aspects of language and reading ability to the assessment of mathematical knowledge in EAL children. Our results differ from those of Burgoyne et al. (2009) in that the FLE learners in our study outperform the EAL learners on the Single Word Reading Test. Burgoyne et al. (2009) found that EAL learners' decoding ability was better than that of FLE learners, and argue that vocabulary knowledge rather than decoding is the key factor in reading comprehension ability among EAL learners. The differences between the two studies could be due to the fact that our sample included EAL children who had been in the UK for a relatively short period of time. Burgoyne et al. (2009) comment that efforts to improve reading ability should be focused on comprehension strategies. Research with young language learners (e.g.
Samo 2009) has shown that less successful readers with lower levels of language proficiency employ local, bottom-up strategies, thereby focusing on smaller units (word or sentence level) when constructing meaning, whereas more proficient and successful learners may employ a wider range of top-down, global strategies for text comprehension. The C-test encourages learners to use a combination of bottom-up, word-based strategies along with top-down, text-based strategies. While SWRT and C-test scores are highly correlated for the EAL learners, the regression analysis showed that the C-test scores explained variation in mathematics scores over and above what is explained by word decoding skills. Our finding that general language ability as measured by the C-test correlates significantly with reading comprehension levels of both L1 and L2 students furthers evidence stemming from previous research in English (Lesaux, Rupp, and Siegel 2007; Babayiğit 2014, 2015) and in Dutch (Droop and Verhoeven 2003). More importantly, it extends our knowledge of how general language ability might affect comprehension in the context of word-based mathematics problem solving for FLE and EAL students. While general language ability scores predicted FLE students' performance in word-based mathematics problems, it was reading comprehension scores that predicted EAL students' performance. This finding, together with the strong correlation between C-test scores and reading comprehension of EAL students, might still support the important role of general language ability for EAL word-based problem solving performance. Taken together, the findings from the two groups of children suggest that the relationship between language ability and mathematical word problem solving performance may not follow the exact same trajectory for FLE and EAL learners. A further interesting finding of the current study was that working memory was more strongly associated with mathematical word problem solving performance in the FLE group than in the EAL group. One possible explanation for this is that the effect of working memory is eclipsed by EAL learners' reading comprehension levels. Another issue could be that EAL learners found the backwards digit span task very difficult, possibly because the number system is not as automatised in EAL learners as in FLE learners. It was evident that some learners were still not completely familiar with numbers in English, which is a linguistic rather than a numeracy issue. In future research, a non-verbal test of working memory should be used with EAL learners to isolate this from linguistic knowledge. Within the EAL group of learners, there was a great deal of variation in scores on all measures and also in length of residence in the UK. The results of the current study emphasise the importance of length of residence when accounting for the variation seen between learners, as this factor correlated strongly with all outcomes. Some learners had only been resident in the UK for several months, whereas others had lived in England for nearly five years. Those learners who had been resident for five years actually performed on a par with monolingual peers. On the one hand, this is a very positive finding. On the other hand, this highlights questions around the suitability of assessment instruments (both school-based and for research) for learners with less exposure to English and therefore lower proficiency levels. Length of residence is a factor that needs to be considered when assessing progress.
While the current study was situated in the UK and has made references to the UK context, it is our strong belief that our findings are also applicable to other settings. As mentioned, this is particularly relevant as an increasing number of high-stakes mathematics standardised tests around the world place an emphasis on using mathematical word problems to assess students' mathematical understanding. The results presented here have immediate implications for both practice and policy. The current study provides empirical evidence of the variability between our FLE and EAL learners in terms of language ability, which accounts for differences both between and within groups in terms of mathematics and literacy scores. This suggests that teachers should focus on those EAL learners with lower-level general language ability and also focus on explicit vocabulary learning to improve reading comprehension. This could be done by, for example, making explicit the specific mathematical meaning of lexically ambiguous words (e.g. 'root', 'volume', etc.) at the beginning of mathematics lessons, to help children develop their CALP. Additionally, the use of mathematics-specific illustrated storybooks to teach mathematical concepts to young children (e.g. HarperCollins's MathStart series, Kane Press's Mouse Math series, etc.) has recently been found to help develop children's linguistic and mathematical abilities (Purpura et al. 2017), though research exploring the effect of using mathematics-specific illustrated storybooks on the development of linguistic and mathematical abilities of EAL children specifically would be particularly useful. In relation to practitioners, the use of this particular resource should be seriously considered by mathematics teachers, not just in England (where the study took place) but elsewhere too. In addition, the results also indicate that test designers should consider the complexity of the language used in mathematical word problems. Research by Abedi and Lord (2001) showed that simplification of language in mathematics tests benefited lower proficiency learners. Finally, as the current study has shown that the C-test is potentially a powerful tool to measure generic language ability, future studies should focus on the role of such holistic measures in uncovering the differences between the specific abilities of EAL learners and FLE learners.
Convolutional Neural Networks Promising in Lung Cancer T-Parameter Assessment on Baseline FDG-PET/CT Aim To develop an algorithm, based on convolutional neural network (CNN), for the classification of lung cancer lesions as T1-T2 or T3-T4 on staging fluorodeoxyglucose positron emission tomography (FDG-PET)/CT images. Methods We retrospectively selected a cohort of 472 patients (divided in the training, validation, and test sets) submitted to staging FDG-PET/CT within 60 days before biopsy or surgery. TNM system seventh edition was used as reference. Postprocessing was performed to generate an adequate dataset. The input of CNNs was a bounding box on both PET and CT images, cropped around the lesion centre. The results were classified as Correct (concordance between reference and prediction) and Incorrect (discordance between reference and prediction). Accuracy (Correct/[Correct + Incorrect]), recall (correctly predicted T3-T4/[all T3-T4]), and specificity (correctly predicted T1-T2/[all T1-T2]), as commonly defined in deep learning models, were used to evaluate CNN performance. The area under the curve (AUC) was calculated for the final model. Results The algorithm, composed of two networks (a "feature extractor" and a "classifier"), developed and tested achieved an accuracy, recall, specificity, and AUC of 87%, 69%, 69%, and 0.83; 86%, 77%, 70%, and 0.73; and 90%, 47%, 67%, and 0.68 in the training, validation, and test sets, respectively. Conclusion We obtained proof of concept that CNNs can be used as a tool to assist in the staging of patients affected by lung cancer. Introduction In recent years, advanced analysis of medical imaging using radiomics, machine learning, and deep learning, including convolutional neural networks (CNNs), has been explored. These approaches offer great promise for future applications for both diagnostic and predictive purposes. CNNs are non-explicitly programmed algorithms that identify relevant features on the images that allow them to classify an input object. They have been applied in various tasks such as detection (e.g., breast lesions on mammographic scans), segmentation (e.g., liver and liver lesions on computed tomography (CT)), and diagnosis (e.g., lung lesions on screening low-dose CT). CNNs are a machine-learning technique based on an artificial neural network with deep architecture, relying on convolution operations (the linear application of a filter or kernel to local neighbourhoods of pixels/voxels in an input image) and downsampling or pooling operations (grouping of feature map signals into a lower-resolution feature map). The final classification or regression task relies on higher-level features, representative of a large receptive field, that are flattened into a single vector. The development of an algorithm entails (a) selection of the hyperparameters, (b) training and validation, and (c) testing. The hyperparameters include the network topology, the number of filters per layer, and the optimisation parameters. During the training process, the dataset of input images (divided into training and validation sets) is repeatedly submitted to the network to capture the structure of the images that is salient for the task. Initially, the weights for each artificial neuron are randomly chosen. Then, they are adjusted at each iteration, targeting minimisation of the loss function, which quantifies how close the prediction is to the target class. The performance of the trained model is then evaluated using an independent test dataset.
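To fix ideas, here is a minimal PyTorch sketch of the two-network design named in the abstract above: a slice-level "feature extractor" built from convolution, ReLU, and max-pooling layers, and a patient-level "classifier" fed with the mean of the slice features. The layer counts, channel widths, and feature dimension are our own assumptions for illustration; the paper does not publish the exact topology in the text.

```python
# Minimal sketch, not the authors' implementation. Shapes follow the paper's
# 128 x 128 CT-PET input patches (2 channels) and T1-T2 vs T3-T4 output.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.conv = nn.Sequential(                 # input: 2 x 128 x 128 (CT + PET)
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.features = nn.Linear(64 * 16 * 16, n_features)  # second-to-last layer
        self.head = nn.Linear(n_features, 2)                 # slice-level classification

    def forward(self, x):
        f = self.features(self.conv(x).flatten(1))
        return self.head(torch.relu(f)), f

class PatientClassifier(nn.Module):
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.fc = nn.Linear(n_features, 2)

    def forward(self, slice_features):             # N_slices x n_features
        return self.fc(slice_features.mean(dim=0, keepdim=True))

extractor, classifier = FeatureExtractor(), PatientClassifier()
slices = torch.rand(12, 2, 128, 128)               # one patient, 12 slices
_, feats = extractor(slices)
probs = torch.softmax(classifier(feats), dim=1)    # P(T1-T2), P(T3-T4)
print(probs)
```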
The overfitting problem can arise in the case of limited datasets with too many parameters compared with the dataset size, in which case a model "memorises" the training data rather than generalising from them [1]. In the field of lung imaging, CNNs have been tested in nodule segmentation from CT images; average Dice scores of 82% and 80% for the training and test datasets, respectively, have been reported [2]. CNNs have been demonstrated to achieve better results than conventional methods for the purpose of nodule detection [3,4]. Moreover, a model for the assessment of cancer probability in patients with pulmonary nodules has been proposed; the area under the curve (AUC) was found to be 0.90 and 0.87 for the training and test sets, respectively [5]. Stage assessment has not yet been described. The present study, as a first step towards complete TNM parameter assessment, aimed to develop an algorithm for the classification of lung cancer as T1-T2 or T3-T4 on staging fluorodeoxyglucose positron emission tomography (FDG-PET)/CT images. Study Design and Patient Selection. In this retrospective single-centre investigation, we screened all patients who underwent FDG-PET/CT between 01/01/2011 and 27/06/2017 for the purpose of staging a suspected lung lesion, within 60 days before biopsy or surgical procedure. The inclusion criteria were (1) age > 18 years and (2) histological diagnosis of primary lung cancer. The exclusion criteria were (1) inconclusive histology due to an inadequate biopsy sample and (2) diagnosis of nonmalignancy. The study was approved by the Institutional Ethics Committee. Image Acquisition and Postprocessing. FDG-PET/CT was performed according to standard institutional procedures, previously detailed [6]. Postacquisition processing was performed to generate an adequate dataset for the CNN. The original CT and PET image sizes were 512 × 512 × N_slices and 128 × 128 × N_slices, respectively, where N_slices is the number of slices in which the lesion appears. The CT images were clipped between −1000 and 400 Hounsfield units. PET images were resampled in the CT space. Then, both images were rescaled to lie between 0 and 1. Figure 1: Study workflow and networks' architecture. The network (consisting of two neural networks, the feature extractor and the classifier) was trained using a cross-validation strategy. The entire dataset was divided into 5 randomly chosen parts. At each training run, 4/5 of the dataset were used as the training set and the remaining 1/5 was used as the validation set. Subsequently, the final model network was adjusted and tested for performance using the dataset divided into three sets: training, validation, and test. In the feature extractor CNN, the input images, both PET and CT, were submitted to a series of convolutions producing a stack of feature maps containing low-level features, rectified linear units (ReLU), and max pooling layers that downsample the feature maps (MaxPool), to produce higher-level features. In the classifier network, these higher-level features are used to perform the final classification T1-T2 (output label 0) vs T3-T4 (output label 1). Consequently, the dataset consisted of 3D bounding boxes on both PET and CT images, cropped around the lesion centre, identified by two nuclear medicine physicians (M.S. and M.K.), with dimension 128 × 128 × N_slices.
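A minimal sketch of the postprocessing described above, assuming the PET volume has already been resampled onto the CT grid (the resampling itself is not shown) and that the lesion centre lies far enough from the image border; helper names are illustrative:

```python
import numpy as np

def preprocess(ct, pet_on_ct_grid, hu_min=-1000.0, hu_max=400.0):
    """Clip CT to [-1000, 400] HU and rescale both modalities to [0, 1]."""
    ct = np.clip(ct, hu_min, hu_max)
    ct = (ct - hu_min) / (hu_max - hu_min)
    pet = pet_on_ct_grid.astype(float)
    pet = (pet - pet.min()) / (pet.max() - pet.min() + 1e-12)
    return ct, pet

def crop_box(volume, centre_yx, size=128):
    """Crop a size x size in-plane bounding box around the lesion centre;
    all N_slices slices containing the lesion are kept (axis 0)."""
    cy, cx = centre_yx
    half = size // 2
    return volume[:, cy - half:cy + half, cx - half:cx + half]
```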
Data augmentation, a strategy commonly used by deep-learning methods, was performed: image patches were rotated in 2D space around the lesion centre, about the z-axis, by an angle randomly selected in the range [−10°, 10°]. This processing step artificially expands the size of the training set and reduces overfitting phenomena. CNN Development and Analysis. The study workflow and the networks' architecture are summarised in Figure 1. During the training phase, a fivefold cross-validation strategy was adopted by dividing the cohort into a training dataset and a validation dataset. To assess the performance of the final model, the cohort was divided into training, validation, and test datasets. The algorithm was composed of two networks: a feature extractor and a classifier. The feature extractor was a CNN that took a CT-PET image patch of 128 × 128 pixels as input and performed classification (T1-T2 with label = 0 and T3-T4 with label = 1) according to the appearance of the image patch; it aimed to extract the most relevant features from a single patch. The classifier took as input the mean of the second-to-last layer of features extracted from all slices of a single patient and aimed to perform a classification (T1-T2 vs. T3-T4) for that patient. The softmax function was applied to the last layer of both networks, in order to obtain the probabilities of being T1-T2 and T3-T4; the class with the highest probability was assigned to each patient. Both models were trained with the Adam algorithm [7]. Table 1 summarises the parameters for the feature extractor and classifier networks. The TNM classification system, 7th edition [8], was used as reference. The results were classified as Correct (concordance between reference and prediction) and Incorrect (discordance between reference and prediction). Accuracy (Correct/[Correct + Incorrect]), recall (correctly predicted T3-T4/[all T3-T4]), and specificity (correctly predicted T1-T2/[all T1-T2]), as commonly defined in deep-learning models, were used to evaluate CNN performance. The area under the curve (AUC) was calculated for the final model. Results From the institutional database, a cohort of 586 patients was selected by applying the abovementioned criteria. Patients with distant metastases or with a histology different from adenocarcinoma and squamous cell carcinoma were excluded from the present analysis. Therefore, 472 patients were included in the study (T1-T2 = 353 patients, T3-T4 = 119 patients). Staging was clinical and pathological in 97 and 375 cases, respectively. Subsequently, the patients were randomly divided into training (n = 303), validation (n = 75), and test (n = 94) sets. The patients' characteristics are summarised in Table 2. Table 3 summarises the results of the cross-validation analysis. The algorithm developed and tested in the present work achieved an accuracy of 69%, a recall of 70%, and a specificity of 67% in the test set, for the identification of T1-T2 and T3-T4 lung cancer, in the final model analysis. The AUC was 0.83, 0.73, and 0.68 in the training, validation, and test sets, respectively. Results of all metrics for the final model in the training, validation, and test sets are reported in Table 4. Discussion The algorithm developed and tested in the present work achieved an accuracy of 87%, 69%, and 69% in the training, validation, and test sets, respectively, for the classification of T1-T2 and T3-T4 lung cancer. The lower performances of the CNN in the validation and test sets compared with the training dataset are probably related to the sample sizes (n = 75, n = 94, and n = 303, respectively).
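A sketch of the two-network design described in the Methods, written here in PyTorch; the layer counts and sizes are illustrative assumptions, since the paper's Table 1 is not reproduced in this text:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Per-slice CNN: a 2-channel (CT + PET) 128 x 128 patch passes through
    Conv/ReLU/MaxPool stages; the penultimate feature vector is what the
    patient-level classifier consumes."""
    def __init__(self, n_features=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 16 * 16, n_features)  # penultimate layer
        self.head = nn.Linear(n_features, 2)           # slice-level T1-T2 vs T3-T4

    def forward(self, x):                    # x: (batch, 2, 128, 128)
        feats = self.fc(self.conv(x).flatten(1))
        return feats, self.head(feats)

class PatientClassifier(nn.Module):
    """Takes the mean of the penultimate features over all slices of one
    patient and outputs class probabilities via softmax."""
    def __init__(self, n_features=64):
        super().__init__()
        self.head = nn.Linear(n_features, 2)

    def forward(self, slice_feats):          # (n_slices, n_features)
        pooled = slice_feats.mean(dim=0)     # average over the patient's slices
        return torch.softmax(self.head(pooled), dim=-1)
```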
The TNM staging system is the reference method for prognostication and treatment decision-making in cancer patients, including those with lung cancer. Pathological assessment is considered the gold standard. However, patients with tumours of the same stage can experience variations in the incidence of recurrence and in survival. These variations may be related to tumour biology and other factors, including potential differences between the pathological stage at diagnosis and after neoadjuvant treatment [9] and the possible effect of the number of surgically removed lymph nodes on the N-parameter [10]. Finally, pathological TNM staging is not feasible in advanced stages. Medical imaging is the reference when pathological assessment is not feasible. Hybrid FDG-PET/CT is noninvasive and provides whole-body assessment, making it essential in baseline lung cancer staging. The CT scan is the cornerstone of lung cancer imaging, providing the information needed for clinical T staging, while FDG-PET/CT outperforms other modalities in terms of diagnostic accuracy for mediastinal nodal involvement and extrathoracic disease detection [11,12]. In lung CT imaging, deep-learning approaches have been used to detect lung nodules [3,13], to segment them [2], and to classify them as benign or malignant [5,14]. Some preliminary data are available on radiation treatment planning [15] and outcome prediction [16]. Recently, the value of CNNs for the classification of mediastinal lymph node metastasis on FDG-PET/CT images has been investigated in non-small cell lung cancer: CNNs proved promising in identifying lymph node involvement, with higher sensitivities but lower specificities compared with physicians [17]. The present work reports an innovative application of deep learning providing an automated staging classification. Some limitations of this study have to be acknowledged. Firstly, the limited number of patients precluded the design of a network for a classification task with four outputs (T1, T2, T3, and T4); moreover, better performance is expected with larger datasets. In future work, we plan to include a larger number of cases in a multicentre framework. Secondly, the reference used to define the T-parameter was based on the pathological assessment in surgically treated patients (80% of cases), while in patients not suitable for surgery, clinical assessment was used as the standard. The choice to use the pathological staging when available, instead of the clinical one, was aimed at achieving higher algorithm performance. Finally, we did not test the CT and PET image modalities separately. It could be speculated that, in this first-step investigation, the CNN nodes were mostly activated by the weighted features coming from the CT component, while for the N- and M-parameters PET can be expected to give a major contribution. A comparison between the performances of a CNN trained using either CT or PET images will be considered in future studies. However, our final objective is to develop a whole-body assessment tool processing both CT and PET images together. In conclusion, the key result of the present preliminary investigation is the feasibility and promising performance of CNNs in assessing the T-parameter in lung cancer. The developed tool is able to provide, within a few minutes from baseline PET/CT, the probability of a patient being T1-T2 or T3-T4. Further investigations are needed to develop robust algorithms for a complete TNM assessment.
Compared with radiomics, CNNs have the advantage of eliminating the need for tumour segmentation and for feature calculation and selection, which are even more critical issues in small lesions. Still, the possibility of a complementary role of radiomics and artificial-intelligence techniques should be addressed. Moreover, improvement in risk stratification is foreseen with the incorporation of patients' clinical features into the neural network algorithm. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Additional Points Clinical Relevance. The development of an automated system, based on neural networks, that is able to perform a complete stage and risk assessment will ensure more accurate and reproducible use of imaging data and will allow deeper integration of this information into the therapeutic plan for each individual patient. Ethical Approval The study was approved by the Institutional Ethics Committee. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Conflicts of Interest A. Chiti received speaker honoraria from General Electric, Blue Earth Diagnostics, and Sirtex Medical System, acted as a scientific advisor for Blue Earth Diagnostics and Advanced Accelerator Applications, and benefited from an unconditional grant from Sanofi to Humanitas University. All honoraria and grants are outside the scope of the submitted work. All other authors have no conflicts of interest.
3,355.8
2018-10-30T00:00:00.000
[ "Medicine", "Computer Science" ]
Molecular Surface Estimation by Geometric Coupled Distance Functions Estimating the surface from given atoms with location and size information is a fundamental task in many fields, such as molecular dynamics and protein analysis. In this paper, we present a novel method for such surface estimation. Our method is based on level-set representations, which can efficiently handle complex geometries. The proposed method is analyzed from both a mathematical and a computational point of view. The method does not require any prior information about the surface; this property is fundamentally important for the surface estimation task. The presented method is evaluated on both synthetic and real data. Several numerical experiments confirm that our method is effective and computationally efficient. Finally, the method is applied to protein surface estimation. This method is suitable for high-performance molecular dynamics studies, protein surface analysis, etc. I. INTRODUCTION Given the location and size information of some atoms, it is important to know the geometric surface that compactly contains these atoms. Such a geometric surface is important for function-related applications, such as protein docking and the Molecular Hydrophobicity Potential. Therefore, surface estimation from such atoms is fundamentally important for the related research fields. One example is shown in Fig. 1, where the surface is estimated by our method and the color indicates its mean curvature. Before presenting the molecular surface estimation, we first introduce a surface reconstruction method for point clouds. Differently from the molecular case, these points lie exactly on the surface and do not have a size or radius. We show this method and its accuracy on both synthetic and real data; then, we extend it to molecular surface estimation in Section V. A. POINT CLOUD REPRESENTATION A point cloud P = {x_i ∈ S} is a fundamental representation of a surface S. If an object is small compared to S, it can be treated as a point. For example, nuclei are treated as points to represent cells in tissue; a small region is treated as a point during tissue development; a protein is treated as a point on the cell membrane. All points together can represent the geometry on which they live. This idea has been used in super-resolution microscopy techniques such as Photo-Activated Localization Microscopy (PALM, invented by Eric Betzig, who shared the 2014 Nobel Prize for it) and Stochastic Optical Reconstruction Microscopy (STORM). However, surface reconstruction from unstructured point clouds is challenging due to the absence of connectivity information between the points, which may lead to topological ambiguities. Even if there is a model to choose the topology according to the sample's geometry, this requires prior knowledge about the sample. Moreover, high curvature (sharp corners or edges), irregular sampling, and noise in the point positions often complicate the task. Conceptually, there are two approaches to recovering the surface S from which the points have been sampled: interpolation (find missing data) and model fitting (reduce error).
Traditionally, both approaches require predefined basis functions that ideally reflect geometric properties of the true surface, such as connectivity, smoothness, sparsity, and curvature. These implicitly assumed properties constitute the prior knowledge (regularization) about the unknown geometry. Even though their imposition may render the reconstruction problem well-posed, these priors may mask details or patterns in the signal, for example by smoothing out texture, or bias the reconstruction result toward the imposed priors. Regularization-free geometry reconstruction is desirable, especially for biological data, where prior knowledge is scarce or needs to be investigated. If a prior were imposed, it would be difficult to decide whether a property of the resulting geometry stems from the prior or from the signal. This prior-freeness requirement makes most state-of-the-art approaches to surface reconstruction fail in this context. We present a regularization-free method for geometry reconstruction from point clouds to tackle this challenge. B. PREVIOUS WORKS ON SURFACE RECONSTRUCTION Numerous methods have been proposed for surface reconstruction from unstructured point clouds. This includes approaches based on FFT (e.g., [1]), Poisson surfaces (e.g., [2]), and moving least squares (e.g., [3]). Most of those methods require that the surface normals be first estimated from the data. From a differential-geometry point of view, normals are first-order information about the geometry, while position is zeroth-order information. However, estimating normals from unstructured point sets is challenging because global consistency across the entire surface must be ensured. Traditionally, normals are estimated using Principal Component Analysis (PCA), a local method that cannot guarantee global consistency. Several methods, for example level-set methods [4], [5], do not require normal information. Instead, they attempt to minimize an energy of the form E(S) = (∫_S d(x)^q dA)^(1/q) + λ R(S) [5], where q is a real number, d(·) is the distance field to the unknown implicit surface S (see Fig. 2(a)), R(S) is a regularization term, and λ is the regularization coefficient. In this energy-based approach, methods from image segmentation have also been adapted to surface reconstruction [6]. Thanks to the convexity of the energy, fast solvers (e.g., split-Bregman [7], [8] or primal-dual) can be used. The regularization term R(S) in the model, however, tends to smooth the result and remove details from the surface. Moreover, the computational cost depends on proper initialization and step-size control. Previous works also demonstrated several strategies to save memory and reduce the computational complexity of level-set algorithms, including narrow-band formulations [9], multi-scale methods [10], [11], and DT-grids [12]. The need for computationally expensive level-set re-initialization has been overcome by adding an additional penalty (regularization) term to the energy [13]. II. OUR METHOD Our work is motivated by the symmetry property of a distance field and the antisymmetry of the level-set representation [14]. It is inspired by coupled level-set methods [15]-[17], but addresses some of their shortcomings when reconstructing high-curvature regions, while guaranteeing the signed-distance property. The connection of our method with others can be found in Section II-D. We are given an unordered point set P = {x_i : x_i ∈ R^n, i = 1, ..., N}.
Usually, n = 2 for image-processing problems, such as estimating a contour from feature points or filling gaps between edge fragments, and n = 3 for computer-graphics problems, such as constructing a surface model from a point cloud, or for stereo-vision problems. Even though we focus on two- and three-dimensional problems, the method presented here also works in higher dimensions, for example for constructing surfaces in manifold learning. For surface reconstruction, we present a coupled Signed Distance Functions (cSDF) method. The key idea in cSDF is to use two spatially coupled signed distance functions, φ_in and φ_out, to capture the surface S. This geometric constraint is in contrast to coupled level-set methods, which are based on topological constraints [18]. The cSDF idea is inspired by the difference between the reflection symmetry of the distance field d and the reflection antisymmetry of the level-set function φ. From a variational point of view, the cSDF method attempts to minimize an energy functional (Eq. 2) in which φ_max = max{(φ_in)², (φ_out)², d²}. However, as described in the following, we do not need to evolve any Partial Differential Equation (PDE) in order to compute the result; simple thresholding is enough. The reason for this becomes apparent from the Eikonal equation (Eq. 3), where thresholding is equivalent to wave propagation if the wave speed is constant. We first illustrate cSDF in 2D and then apply it to synthetic and real-world data in 3D. The method proceeds in three steps, as detailed by Algorithms 1 to 3 below. The corresponding steps are illustrated in Fig. 2(a). The coupled level-set functions φ_out and φ_in are illustrated in Fig. 2(b) for two examples. A simple example of the intermediate results after every processing step is shown in Fig. 3. A. STEP 1: DISTANCE FIELD For a given point cloud P (exemplified in Fig. 3(a)), we compute the distance field d(x) on a predefined Cartesian grid G = m × n of uniform resolution h. This amounts to solving the Eikonal equation |∇d(x)| v(x) = 1 with boundary condition d(x) = 0 for x ∈ P (Eq. 3), the boundary-value formulation of the Hamilton-Jacobi problem associated with Eq. 2. Several methods are available to numerically solve this equation, including the Fast Marching Method [9], the Group Marching Method [19], the Fast Sweeping Method (FSM) [20], the Fast Iteration Method [21], and direct Hamilton-Jacobi solvers [22]. Here, we show three alternative methods to compute the distance field. The first one uses an extended-window FSM restricted to a narrow band N_b of width b, where Eq. 3 is only solved for x ∈ N_b. We ensure the distance property by setting v(x) ≡ 1. This method was first published in [23]. The second method directly computes the distance field from the point cloud using a Sparse Voxel Oct-tree (SVO) data structure in the narrow band. The third method solves an inhomogeneous Helmholtz equation using FMM [24], similar to how it is done in a Schrödinger distance transform [25]. 1) EXTENDED-WINDOW FAST SWEEPING METHOD FSM sweeps the grid until convergence, which can be inefficient for points far from the interface. Fast iterative methods relax this by using locks [21]; these locks, however, cause additional serialization. Here, we avoid these locks and accelerate FSM by using a larger window size w > 1 (see Algorithm 1). The min method of FSM [20] is hence extended to account for all points in a w-neighborhood, as illustrated in Fig. 4(a) for w = 2. We initialize the algorithm with Eq. 5; the original FSM [20] is recovered for w = 1.
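Before the extended-window refinements below, the essence of Step 1, an unsigned distance field on a uniform grid, can be sketched with a brute-force nearest-neighbour query; this is a sketch for clarity, not the paper's narrow-band solvers:

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_field(points, grid_shape, h=1.0):
    """Unsigned distance d(x) from every node of a uniform Cartesian
    grid (spacing h) to the nearest sample point. A KD-tree query
    stands in for the Eikonal sweep; both compute the same field
    when the wave speed v(x) = 1."""
    tree = cKDTree(points)
    axes = [np.arange(n) * h for n in grid_shape]
    nodes = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    d, _ = tree.query(nodes.reshape(-1, points.shape[1]))
    return d.reshape(grid_shape)
```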
For w > 1, the present extended-window FSM is more accurate than the original FSM, because the integrated error is reduced as the local window size w grows. Moreover, it converges faster, since the larger window causes a wave of higher speed (not further elucidated in this paper). The local update cost increases from 2 to w(w + 3)/2 (not (2w + 1)², thanks to the symmetry property). In our implementation, we further accelerate initialization and fast sweeping by using a Look-Up Table (LUT) for the distances in the local window. Instead of directly computing the distances of each sample point x_i to each grid node x_{i,j} within the local window, we subdivide each grid cell into 4 × 4 bins, represented by the green/white checkerboard pattern in Fig. 4(a). We then build a LUT of the distances between each checkerboard bin center and each grid node in the local window. The distance of a sample point to a grid node is then taken to be the distance between the center of the bin in which the point resides and the grid node (black arrows in Fig. 4(a)). This accelerates the initialization and sweeping steps as follows: • Initialization step: there are two ways to initialize the distance field. The sample point x_i, represented by a red dot in Fig. 4(a), can be used to directly compute the distance to all its neighbors on the grid. The other way is to use the LUT: as shown in Fig. 4(a), x_i must lie in one of the grid cells neighboring x_{i,j}, and all distances in the LUT are computed with respect to the center of the checkerboard bin containing x_i. • Sweeping step: we determine d(x) in the local window using a distance LUT with respect to the center node x_{i,j}. Thanks to the symmetry property of d(x), only w(w + 3)/2 real numbers need to be stored in the LUT. The sweeping process only uses addition and comparison operations, which are fast on modern CPUs. Fig. 3(b) shows an example d(x) computed using FSM with w = 3. 2) DIRECT COMPUTATION The distance field can alternatively be computed directly using a Sparse Voxel Oct-tree (SVO) data structure [26]. SVO has recently attracted a lot of attention in the computer-graphics literature for its nice properties in ray tracing, which essentially is also a distance-computation task. We use a Hilbert curve (illustrated in Fig. 4(b)) to encode the narrow-band grid points. Then, the so-encoded narrow band is organized in an oct-tree, which is constructed bottom-up. The distance is then directly determined by a nearest-neighbor search in the tree. For more details, we refer to Ref. [26]. 3) HELMHOLTZ EQUATION We can embed the distance field d(x) into a homogeneous Helmholtz-type equation, Δψ(x) = ψ(x)/τ² (Eq. 6), where ψ(x) = exp(d(x)/τ), δ ≤ τ < 0, δ is a negative number close to zero, and Δ is the Laplace operator. For τ → 0 (τ < 0), the solution of this Helmholtz equation also satisfies the Eikonal Eq. 3. After solving Eq. 6, the distance field can be recovered as d(x) = τ ln ψ(x). The advantage of this method is that several very efficient solvers exist for Eq. 6, such as FFT (DCT, DST)-based solvers, multigrid solvers, and Fast Multipole Methods [24]. Here, we use a DCT-based solver in order to directly impose homogeneous Neumann boundary conditions, i.e., the gradient of ψ is zero in the normal direction at the band edge. B. STEP 2: COUPLED SIGNED-DISTANCE FUNCTIONS We aim to compute φ(x), the signed-distance function associated with d(x). The key idea in cSDF is to apply distance-preserving shift transformations to the output of Algorithm 1, thus solving the boundary-value problem in Eq. 3 without (pseudo-)time evolution.
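The phrase "without (pseudo-)time evolution" can be made concrete with a simplified NumPy sketch: thresholding the unsigned field at T labels a band around the samples, from which a signed field follows by standard Euclidean distance transforms. This illustrates the thresholding idea only; it is not the paper's coupled Algorithms 2-3, which maintain two layers φ_in and φ_out shifted by T_s:

```python
import numpy as np
from scipy import ndimage

def signed_field_from_threshold(d, T, h=1.0):
    """Threshold d at T (the shifted level set d - T) and construct a
    signed distance to the band boundary: negative inside, positive
    outside. Distance transforms in pixel units are scaled by h."""
    band = d < T
    inside = ndimage.distance_transform_edt(band) * h
    outside = ndimage.distance_transform_edt(~band) * h
    return np.where(band, -inside, outside)
```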
Specifically, we shift d by an offset T in order to determine the functions φ_in^bin and φ_out^bin that indicate whether the shifted level set d − T is inside or outside of S (see the shaded areas in Fig. 2(a)). The threshold T defines the separation between the regions to be labeled; therefore, T > √3·h. (Algorithm 1, extended-window fast sweeping method in 2D: input the threshold tol and the window size w; initialize d(x) using Eq. 5; define the loop sets; sweep through the loop sets until convergence.) After thus identifying φ_in^bin and φ_out^bin, we shift d down by T_s, yielding the level sets φ_in1 and φ_out1. Then, the function d − T_s is shifted up by exactly the same T_s, yielding φ_in2 and φ_out2. It is clear that T < T_s < b in order to keep the two layers separate. As shown later, T_s is a scale parameter. The complete procedure is given in Algorithm 2. C. STEP 3: SURFACE RECONSTRUCTION USING cSDF After computing φ_in2 and φ_out2, a joint estimate of the signed-distance function φ of the reconstructed surface S is computed from φ_in2, φ_out2, and d(x), as described in Algorithm 3. (Algorithm 3, surface reconstruction using cSDF: for each grid point, φ is taken from the pair of fields corresponding to the local curvature case below; output φ.) There are only three possible curvature (c) cases for any point x_i on S: positive, negative, or zero curvature (as illustrated in Fig. 3(a)), corresponding to the three cases in Algorithm 3: • c > 0: S is convex at x_i; therefore, S is captured by d and φ_out2. • c < 0: S is concave at x_i; therefore, S is captured by d and φ_in2. • c = 0: S is planar at x_i; therefore, S is captured by φ_in2 and φ_out2. A simple average of the corresponding two fields provides the estimate for φ. Fig. 3(c) shows an example φ computed using this procedure. D. RELATIONS TO OTHER METHODS The cSDF method is related to coupled level-set methods and to the Hilbert-Huang transform. However, cSDF uses a geometric coupling, while coupled level sets and the Hilbert-Huang transform only provide topology control. This difference is illustrated in Fig. 5: for example, the inner level set in coupled level sets can evolve arbitrarily as long as it stays inside the outer level set (left panel of Fig. 5). This is not possible in cSDF, where the two layers must be geometrically shifted by T_s (right panel of Fig. 5). E. NOISE, OUTLIERS, AND PARAMETERS Since the present cSDF method is free of regularization and intends to reconstruct even minute details of the surface, it is sensitive to noise in the input point set. If the input point positions are noisy or contain outliers, we hence use a different algorithm for computing the distance field d(x). This robust algorithm is presented below; all downstream processing, in particular the cSDF construction, then remains unaffected. We also analyze below how the parameter T_s controls the scale of the surface details to be recovered; this parameter hence is a scale-space parameter for the cSDF method. 1) ROBUST DISTANCE FIELD In the presence of outliers or noise in the input point set, we use a local variance-weighted method to compute the distance field, which is robust against noise and outliers. This choice is motivated by the fact that in practice the positions x_i are often uncertain due to, e.g., measurement errors. We model this uncertainty as i.i.d. Gaussian noise of mean zero and standard deviation σ_i added to the point positions.
Usually, this σ_i can be obtained directly from the measurement or imaging device that acquired the data. If σ_i is unknown, we estimate it by K_n-nearest-neighbor clustering or by singular value decomposition. In the clustering method, we first compute the K_n-nearest-neighbor distances of all points and their average; then, we use the difference between a point's distance and this average as the weight. When using singular value decomposition, we first compute the covariance matrix of the point set and use SVD to compute the distance to the plane defined by the first eigenvectors. In what follows, we use the clustering-based method, which is also called 'inverse distance weighting' in the literature. The weighted distance field is then defined as d_w(x) = (1/W) Σ_i w_i D_i, where W = Σ_i w_i is a normalization factor, D_i = |x − x_i| is the distance between x and x_i, and x is an arbitrary position on the grid. A comparison between the so-obtained robust distance field and the one obtained using Algorithm 1 is shown in Fig. 6. 2) SCALE PARAMETER T_s The cSDF coupling parameter T_s controls the scale space in which the interpolated signal lives. Let d̄ be the average (across all data points) distance between K_n-nearest neighbors. Then, T_s has to be at least of the order of d̄/2 in order for neighboring points to meet. This defines a lower bound on how small the details recoverable from the samples by cSDF can be. An illustration is shown in Fig. 7: the green dot represents a grid point x_{i,j}, and the red dots represent the input samples x_i. The two red samples within the shaded disk around the green dot are indistinguishable by x_{i,j}. Therefore, T_s must be larger than the radius of the shaded disk in order to ensure that it is unnecessary to distinguish between those two samples (lower bound). As mentioned, thresholding with T_s is equivalent to wave propagation. Thus, this step turns the explicit discrete-sample representation of the surface into an implicit continuous representation. Increasing T_s, however, is not equivalent to smoothing or to a regularizer: T_s only defines the scale space for the interpolation, but does not limit the curvature within that space. As seen in the area highlighted by the green rectangle in Fig. 8, surface details are not lost when increasing T_s. However, between closely apposed surfaces, a too coarse scale space may lead to topological problems, as shown for example in the red rectangle in Fig. 8. This issue can be avoided by adaptively changing T_s in a standard scale-space approach, or by using a spatially adapted T_s(x). FIGURE 8. Changing the scale parameter T_s does not introduce surface smoothing (green rectangle), but may lead to topological problems (red rectangle) due to scale-space coarsening. III. NUMERICAL VALIDATION We validate cSDF in 2D and show its accuracy by measuring the error under the ℓ1 and ℓ2 norms with uniformly sampled points on a circle. We further perform 3D benchmarks on computer-graphics models. A. 2D BENCHMARKS We test the accuracy of cSDF by sampling N points uniformly on a circle of radius R and comparing the reconstructed circle to the ground truth for decreasing N. We use a 200 × 200 grid for all N ∈ [45, 360] and R ∈ [40, 70], and directly compute distances without the LUT. We linearly interpolate the resulting φ at each of the original x_i ∈ S; the correct value would be φ_c = 0 for all x_i. We then compute the overall (reconstruction plus interpolation) ℓ1 and ℓ2 errors as the mean absolute value and the root-mean-square value of φ(x_i) over all samples, respectively. The result is shown in Fig. 9.
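A sketch of the benchmark error evaluation, assuming the ℓ1 and ℓ2 errors are the mean absolute and root-mean-square values of the interpolated φ at the sample points (the exact formulas are garbled in the source):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reconstruction_errors(phi, samples, h=1.0):
    """l1 and l2 errors: phi is linearly interpolated at the original
    sample positions x_i, where the exact value would be phi_c = 0."""
    coords = (np.asarray(samples) / h).T          # (ndim, N) index coordinates
    vals = map_coordinates(phi, coords, order=1)  # linear interpolation
    return np.abs(vals).mean(), np.sqrt((vals ** 2).mean())
```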
B. 3D BENCHMARKS We benchmark cSDF in 3D by using the vertices of the triangulated surfaces of the well-known computer-graphics models 'Armadillo' and 'Buddha' as input point clouds. The number of points for each model, the CPU time for cSDF reconstruction of the implicit surface representation, and the resulting errors against the known ground truth at the vertex positions are given in Table 1. The code is implemented in C and run on a 2 GHz Intel Core i7. When the distance LUT is used, the CPU time is further reduced to 12.8 s and 15.9 s for 'Armadillo' and 'Buddha', respectively. These timings compare favorably with the > 200 s CPU time of an efficient Bregman code [27] for 'Buddha' at similar resolution. Figure 10 shows the resulting reconstructions and close-ups (Fig. 10(b) and (d)) with the input point cloud overlaid, demonstrating the method's capability to represent high-curvature regions without grid refinement (see, for example, the 'Armadillo' claws). IV. SURFACE RECONSTRUCTION In this section, we illustrate the use of cSDF in several applications on real biological data. Since cSDF is prior-free, the resulting geometry is based only on the point cloud. Therefore, surface properties such as the normal or curvature distribution stem from the geometry itself, without being corrupted by any prior. The first data set comprises the 3D positions of atoms in a protein conformation obtained from molecular-dynamics simulations (data courtesy of Dr. Anton Polyansky, Zagrovic group, MFPL, Vienna). This is an example of noise-free data where the surface is to be reconstructed as accurately as possible. We use cSDF to reconstruct the molecular surface of the protein and to locally shade it according to the Molecular Hydrophobicity Potential (MHP). The result is shown in Fig. 11(a) and (b). The second case considers a 2D PALM (photo-activated localization microscopy) super-resolution image. PALM intrinsically produces point clouds, as it detects the centroids of single fluorescent molecules. cSDF can then be used to reconstruct the surface (e.g., the membrane) on which the fluorescent molecules live. The PALM image in Fig. 11(c) and (d) shows fluorescent lamin proteins of the nuclear lamina. We use cSDF to reconstruct the nuclear envelope from these point detections. Due to the large amount of outliers and noise in this data set, we use the robust distance field method presented above. A. COMPARISON WITH TV In its current implementation, cSDF is about 6 times faster than a highly efficient Bregman code for surface reconstruction [27]. This runtime can be further reduced by parallelizing the code on multi- or many-core hardware, such as GPUs or computer clusters. (FIGURE 12. 2D example with sharp corners, compared with TV regularization [6] with parameter µ = 10⁻⁶ (left), 2 × 10⁻⁶ (middle), and 3 × 10⁻⁶ (right), and the presented method.) cSDF computes the distance field three times and then estimates the signed distance field; thus, the key point of cSDF's parallelization is to compute the distance field in parallel, which has been extensively studied in computer graphics [28], [29]. Due to its regularization-free nature, cSDF can capture high curvature without an adaptive grid. It also does not bias the reconstruction result toward a prior, nor does it excessively smooth the reconstructed surface.
This way, cSDF is for example able to perfectly reconstruct and represent the sharp corners and edges of a cube, whereas regularization-based methods round them even for the smallest amount of regularization, as shown in Fig. 12. B. MEAN CURVATURE ESTIMATION From φ, thanks to the signed-distance property of cSDF, the mean curvature can be computed directly as H = ∇·(∇φ/|∇φ|), which reduces to the Laplacian of φ since |∇φ| = 1. Examples are shown in Fig. 13. Based on the prior-free property, the normal and curvature are guaranteed to be features of the surface itself, without being corrupted by any prior. V. PROTEIN SURFACE ESTIMATION cSDF can be extended to volume data, where the inner distance field vanishes. The surface of the volume is captured by the outer distance field and the distance field of the point cloud. Since the inner distance field does not exist, concave regions cannot be accurately recovered (sharp inside corners get smoothed). The algorithm is summarized in Algorithm 4. The process is very similar to the shrinking effect in traditional level-set methods. cSDF can be further extended to volumetric data in which a point x_i becomes a sphere centered at x_i with a given radius r_i; Algorithm 4 handles this case by letting d(x) be negative inside the spheres. A. MEAN CURVATURE DISTRIBUTION PRIOR Since cSDF is regularization-free, we can use it to obtain prior knowledge about the surfaces. We prepare 17 different proteins from molecular-dynamics simulations with 1000 time steps each. (Figure caption: the mean curvature distribution of (a); (c) surface reconstruction, normal estimation, and curvature estimation for a part of a human aorta point cloud (data courtesy of Dr. George Bourantas, MPI-CBG), with the mean curvature color-coded after curvature histogram equalization for better visualization; (d) zoom of (c); the distribution relies only on the surface, without being corrupted by any prior, since cSDF is prior-free.) The proteins are summarized in Table 2; the names are the same as in the Protein Data Bank. We have five independent runs for ubiquitin (UBQ) and five independent runs for UBM2. In total, we have 25 trajectories, each of which has 1000 time steps. The number of points for each protein is shown in Table 2. We use Algorithm 4 to construct the 25,000 protein surfaces and estimate their mean curvature. To reduce the resolution effect, instead of studying H, we study H·h², which is called the mean-curvature half-density [30], [31] or weighted curvature [32]; it is independent of the resolution h. Two distributions of H·h² from the examples are shown in Fig. 14. Even though the two surfaces are very different, their H·h² distributions have a similar, near-Gaussian shape. We compute a distance matrix M between all proteins at each time step: M(j, k) is the χ² distance between the H·h² distributions of protein i1 at time step t1 and protein i2 at time step t2, where j = (i1 − 1)·1000 + t1 and k = (i2 − 1)·1000 + t2. The result is shown in Fig. 15a. We define the average distribution for each protein as the mean of its 1000 per-time-step distributions. Similarly, we can compute a distance matrix M̃, where M̃(j, k) is the χ² distance between the average distributions of proteins j and k. We use the Isomap algorithm [33] to reorder the proteins, revealing their mutual relationships. The reordered distance matrix is shown in Fig. 16a and b; the rearrangement shows the relationship between the proteins. B. CURVATURE DISTRIBUTION MODELING Noticing that the distributions of H·h² in Fig. 14 can be well approximated by a Gaussian, we use a Gaussian model to approximate the distribution of H·h² for each of the 25,000 protein surfaces.
The Gaussian model is defined as G(x) = a·exp(−(x − b)²/(2σ²)). The modeling results are shown in Fig. 17. The parameters a and σ are stable across all tested proteins, and the parameter b is stable for each protein over the time steps. This stability guarantees that the mean curvature distribution can be used as a prior. We can also rearrange the proteins by simply sorting the parameter b; the result is shown in Fig. 18b, and the reordered parameter b is shown in Fig. 19b. (FIGURE 18. Distance matrix after the rearrangement of these proteins.) The similarity between the result from Isomap and the result from sorting the parameter b suggests that b is a dominant parameter in this modeling. VI. SUMMARY We have presented a regularization-free method for geometry reconstruction from unstructured point clouds. The result is guaranteed to be a signed-distance function, dispensing with the need for re-initialization and regularization. We benchmarked the method on 2D and 3D artificial datasets and showed its accuracy and computational efficiency. We further showed its application in real-world surface reconstruction for protein molecular surfaces and PALM microscopy data. Thanks to the regularization-free property, the mean curvature distributions of the estimated surfaces can be obtained and modeled as a prior; we showed how to compute and model them for a Molecular Dynamics Simulation dataset. Thanks to its computational efficiency, our method can reach real-time performance and can be applied in many fields, such as protein surface estimation, studying the relationship between structure and function, molecular dynamics, etc.
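As a sketch of the curvature pipeline described above — mean curvature from the signed-distance function and a Gaussian fit to the H·h² histogram — the following assumes the H = Δφ/2 convention and standard SciPy fitting; neither detail is specified by the source:

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_curvature(phi, h=1.0):
    """Mean curvature from a signed-distance function phi on a uniform
    grid: with |grad phi| = 1, H reduces to a multiple of the Laplacian
    of phi (the 1/2 factor is an assumed convention)."""
    lap = sum(np.gradient(np.gradient(phi, h, axis=a), h, axis=a)
              for a in range(phi.ndim))
    return 0.5 * lap

def fit_gaussian(Hh2, bins=100):
    """Fit the Gaussian model G(x) = a*exp(-(x-b)^2/(2*sigma^2)) to the
    histogram of the weighted curvature H*h^2; returns (a, b, sigma)."""
    counts, edges = np.histogram(Hh2, bins=bins, density=True)
    x = 0.5 * (edges[:-1] + edges[1:])
    gauss = lambda x, a, b, s: a * np.exp(-(x - b) ** 2 / (2 * s ** 2))
    p0 = (counts.max(), Hh2.mean(), Hh2.std())
    (a, b, s), _ = curve_fit(gauss, x, counts, p0=p0)
    return a, b, s
```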
6,945.4
2020-01-01T00:00:00.000
[ "Computer Science" ]
Non-linear optical measurements and crystalline characterization of CdTe nanoparticles produced by the ‘electropulse’ technique We propose to extend the ‘electropulse’ technique to synthesize CdTe nanoparticles for optical components. To do so, we create nuclei of the expected material on a titanium electrode immersed in an electrolytic solution by using an electrochemical pulse. The nanoparticles are expelled from the titanium horn surface by cavitation bubbles, which are produced in the solution by a high-intensity ultrasound pulse generated by a piezoelectric crystal. Optical absorption spectra and third-order non-linear optical properties of a colloidal solution of CdTe particles produced by this technique are presented. The non-linear refractive index was characterized using the single-beam z-scan technique, and measurements were carried out at several incident intensities. Non-linear indices of refraction between −1.2 × 10⁻⁷ and −2 × 10⁻⁸ cm² W⁻¹ were measured. Introduction Non-linear optics is considered to be a new frontier area between science and technology and is expected to play an essential role in the emergent technology of photonics. Recently, molecular materials, polymeric systems, and nanocrystals have emerged as a new range of promising materials for optimizing non-linear optical properties and applying them in new devices. The synthesis of materials of nanometric dimensions has expanded rapidly in the last few years. Structures with a controlled size and shape, from clusters with a few atoms to nanostructures with some thousands of atoms, can be produced with a predefined architecture. At the nanometric scale, materials possess special chemical and physical properties, modified by the quantum confinement of the electrons. These properties strongly depend on the shape and size of the particles, and they differ from those at the macroscopic scale. The interaction of II-VI semiconductor materials with light has been studied: nanoparticles present strong electronic resonances near which non-linear optical properties were observed [1]-[4]. The electronic and optical properties of nanocrystalline semiconductors are specific because of quantum confinement, defined as the elementary electronic excitation in a reduced volume; this is a step between the bulk and an atom. Quantum confinement acts especially on exciton quasi-particles [5,6]. Such materials are very promising for optoelectronic design. The 'electropulse' technique is a method of producing semiconductor nanoparticles. The non-linear optical properties of the CdTe colloidal solutions are analysed by the single-beam z-scan technique, which is based on the intensity-dependent refractive index. The overall refractive index of the material is given by n = n₀ + n₂I, where n₂ is called the non-linear refractive index. It originates from the third-order susceptibility of the material, which cannot be neglected when the incident light intensity is high. Synthesis of CdTe nanoparticles Different methods have been developed for the synthesis of semiconductor nanoparticles. Among these one can cite nanocrystallite growth 'in situ' inside a porous glass matrix with a controlled pore size, prepared by the sol-gel technique [7,8], growth in an ionic crystal [9], or growth in a zeolite [10]. Several researchers have explored the two most popular methods of synthesis of the II-VI semiconductors: the method developed by Murray et al [11] and the aqueous preparation technique pioneered by Henglein and Weller [12,13].
The 'electropulse' technique is derived from electrochemistry. The experimental setup was developed at the Université Libre de Bruxelles (ULB) by Delplancke's group [14]-[17]. An electrochemical growth process is divided into three steps: random nucleation of germs on the working electrode's surface; their independent growth under the continuous influence of the electric field and current; and, finally, total coverage of the surface with the deposited compound. The production of nanocrystals results only from the first two steps. Next, the nanoparticles are expelled from the titanium horn by the cavitation bubbles produced by an intense ultrasound field. High-intensity ultrasound waves were produced in the titanium horn combined with two piezoelectric ceramic elements, placed between a counter-mass and the titanium rod and connected to a high-frequency AC generator. The resonance frequency of the titanium horn and the ultrasonic frequency produced inside the horn are both around 20 kHz. The intensity of the ultrasound wave was typically 50 W cm⁻² on the small area where the deposition occurred (cf. figure 1). The experimental conditions were adjusted to be above the cavitation threshold of the electrolyte solution. Under ultrasound irradiation, a cloud of bubbles appeared below the titanium horn. The implosion of these bubbles propelled a jet of liquid perpendicular to the substrate; this jet was able to detach the previously electrochemically synthesized particles from the titanium horn surface. A sequential mode was used: T_on is the deposition interval time and controls the nanoparticles' size, T_off the non-deposition time, and T_us the ultrasound duration time. The size of the particles was directly linked to the time T_on (in the hundreds-of-milliseconds range). The titanium horn was connected to a Tacussel PRT-20X potentiostat. The electrochemical design was a three-electrode configuration: the titanium electrode was the 'working electrode', on which the deposition occurs; the reference electrode was an Ag/AgCl electrode; and the anode (or counter-electrode) was a platinum foil. Results from previous studies [18] dictated the choice of the electrolyte: a CdSO₄ (1 M) and HTeO₂⁺ (40-150 µM) solution, the latter obtained from TeO₂ in aqueous solution, at pH = 2.4 adjusted with sulphuric acid (H₂SO₄). The temperature of the solution was maintained at 85 °C. The polarization curve of the titanium substrate in this electrolyte is shown in figure 2. Deposition takes place between −200 and −600 mV with reference to the Ag/AgCl electrode. The optimum potential for the production of the stoichiometric CdTe compound was determined by long constant-potential deposition experiments followed by X-ray diffraction analysis of the obtained deposits; this value was −570 mV versus the Ag/AgCl reference electrode. The electrochemical process is potentiostatic, which means that the potential is controlled and the current across the solution is automatically determined by the nature of the electrolyte, as shown in figure 2. This technique allows us to produce CdTe nanoparticles with a mean radius ranging between 4 and 200 nm, corresponding to deposition times T_on of 5 and 400 ms, respectively.
The preliminary preparation of the 'working electrode' is a rather important step for producing particles of good quality. This electrode must be carefully polished (roughness less than 30 µm). Next, a thin film of TiO₂ is coated on the deposition zone of the working electrode; this film favours the instantaneous nucleation and growth process, as described in [19]. To produce an optically transparent sample, optical diffusion and diffraction inside the material by aggregates of particles must be minimized. Consequently, the coalescence of quantum dots must be avoided. Moreover, the optical quality of the particle surface should be as high as possible, which means that surface states resulting from defects should be annihilated, e.g., by a surfactant; trioctylphosphine oxide (TOPO) was used. Under these conditions, the properties of the material may be attributed to the individual particles. Since the nanoparticles are produced in an aqueous medium, their surface is surrounded by adsorbed water molecules. The newly produced particles are filtered, dried, and added to a TOPO/TOP solution at 70 °C. High-intensity ultrasound waves are required to disperse the particles and to split up the aggregates. The TOPO/TOP solution with the particles is then heated to 230 °C for half an hour under continuous magnetic stirring. Before adding methanol in excess to precipitate the particles, the temperature is reduced to 60 °C. Finally, the TOPO-coated particles are dispersed in a non-polar solvent such as toluene or hexane. The colloidal solution remains stable for several weeks. Crystalline characterization of nanocrystals The nanocrystals were characterized by high-resolution scanning transmission electron microscopy (STEM), x-ray fluorescence (XRF), and x-ray diffraction (XRD). Figure 3 is a 125 × 10³ times magnified image of a set of CdTe particles with low crystallite coverage on the copper grid. It allows statistical size measurement of ∼100 individual nanocrystallites in the image. The average size is 22 nm ± 10%. Moreover, it shows that the particles are not agglomerated. Figure 4 is a blow-up of figure 3; the selected particle has a size of 22 nm and spherical symmetry. This is an interesting result since, due to the growth of the particle on a plane, it was expected to grow along the axis perpendicular to the plane of the horn. At the nanometre scale, the sizes of the nuclei are so small that the surface-free-energy minimum corresponds to a pseudo-spherical symmetry. Figure 5 illustrates the size distribution of the particles present in figure 3. Figure 6 depicts a very high magnification, 1.15 × 10⁶ times, of a single particle, pulled out of a sample of quantum dots with a mean radius of 6 nm ± 7%. Despite the pseudo-spherical symmetry, cleavage planes are easy to identify in figure 6. The statistical measurements were repeated for the different samples presented in this paper; the distribution width varies from 7 to 10%. Table 1 shows the different samples synthesized in this work. The distribution width is explained by a more or less deep alteration of the deposition surface and of the TiO₂ film. Moreover, during pulse sequences, the ultrasound waves are not always evenly distributed over the horn surface. The difference in surface polishing from one experiment to another and the accumulation of material attached to the horn are two probable reasons for this lack of homogeneity of the ultrasound pulses.
These variations indeed locally modify the resonance frequency of the horn. Nevertheless, the dispersion of the size distribution can be improved by selective precipitation after synthesis. X-ray diffraction (XRD) and EDX are useful to determine the crystalline character and the composition of the samples. Figure 7 illustrates the XRD results: the structure of the nanoparticles is periodic and crystalline, and corresponds to the cubic structure of CdTe. All samples showed similar diffraction spectra. The widening of the peaks, due to the Scherrer effect, is an argument in favour of the small sizes of the particles, confirmed by the electron-microscopy results. However, figure 7 also shows the presence of TeO₂, but in very small amounts; it comes from undissolved TeO₂ reactant in the aqueous solution. EDX measures the atomic composition of the nanoparticles: Cd and Te are found in equal amounts in all samples. Linear optical absorption spectra of nanocrystals It is known that decreasing the size of nanometric particles induces a blue shift of the energy gap and a set of discrete energy levels in the semiconductor. This is the effect of quantum confinement, which acts mainly on the exciton quasi-particle energy levels. The spectra (figures 8 and 9) were recorded on a low-concentration colloidal solution of CdTe in toluene. The recording temperature was 300 K and the scan rate was 120 nm min⁻¹ with a resolution of 1 nm; the spectrophotometer was a PerkinElmer 559 UV/Vis. The absorption spectra show the evolution of the energy-level distribution with the particle size. Two spectra are shown in figure 8. The first spectrum (continuous line) corresponds to a deposition of several minutes under the conditions described previously, producing particles bigger than 1 µm; such larger particles are characterized by an absorption spectrum similar to that of bulk CdTe. The dashed line is the absorption spectrum of 100 nm particles. Figure 9 gathers the absorption spectra of samples b-e from table 1. The blue shift, corresponding to the evolution of the first excited-state absorption towards short wavelengths, is noticeable. This state corresponds to the creation of an exciton, the bound state of an electron and a hole. The peaks of the spectrum of sample c are not clear; the large size distribution is responsible for the widening of the absorption peaks. Table 1 also shows the optical density of the nanoparticles at 633 nm, which is the excitation wavelength of the z-scan setup. Rayleigh scattering was subtracted from the curves to obtain the response of the nanoparticles alone.
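The Scherrer peak widening invoked in the XRD analysis above can be turned into a crystallite-size estimate with the standard Scherrer equation; in this sketch, the shape factor K ≈ 0.9 and the Cu Kα wavelength are assumptions, not values taken from the paper:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)) from the
    Scherrer equation; beta is the peak FWHM in radians and two_theta
    the diffraction angle. Wavelength defaults to Cu K-alpha (assumed)."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers: a peak at 2*theta = 24 degrees with 0.4 degree
# FWHM gives a crystallite size of roughly 20 nm.
print(f"{scherrer_size(24.0, 0.4):.1f} nm")
```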
Non-linear optics: z-scan The z-scan technique is used to characterize the non-linear optical properties of the CdTe colloidal solutions. Developed by Sheik-Bahae et al [20], the z-scan allows the characterization of transparent materials exhibiting third-order non-linear effects. It is based on the intensity-dependent refractive index, n = n₀ + n₂I (equation (1)), where n is the index of refraction, n₀ the linear index, I the intensity, and n₂ the non-linear index of refraction; n₂ is associated with the real part of χ⁽³⁾ in the Taylor expansion of the material's polarizability. A Gaussian radial distribution of the incident laser beam (TEM₀₀ mode) induces a refractive-index modulation inside the sample; this modulation creates an induced lens. The z-scan is therefore a self-induced-lens technique. The principle is to move the sample along the optical axis in the vicinity of the focus of an external lens. For each position of the sample around the focus, the lens induced inside the sample possesses a different focal length, which depends on the incident Gaussian profile. The experiment consists of measuring, for each sample position relative to the focus, the irradiance transmitted through a small aperture in the far field. The sign of n₂ determines the vergence of the induced lens, positive or negative. For a positive n₂ (convergent lens) and a negative sample position (relative to the external lens focus), the transmitted beam is radially wider at the aperture than the beam without the sample, because the beam converges before the focus of the fixed lens; the transmitted irradiance through the aperture is then lower. Consequently, for a sample position between the focus and the aperture (positive z), the focusing due to the induced lens occurs after the focus of the fixed lens, and the irradiance at the aperture increases. A similar reasoning can be made for a divergent induced lens, with the opposite result. The experimental setup is shown in figure 10. The laser is a 20 mW continuous-wave He-Ne laser. The beam parameters were carefully determined with a beam analyser (Beamscan 1180 from Photon Inc.): the beam waist is ω₀ = 63 µm and the Rayleigh length is z₀ = 17.5 mm. The sample is a 1 mm thick quartz cell filled with the CdTe colloidal solution (see table 1); the weight fraction of CdTe crystallites in each sample is about 10%. As the samples did not present any non-linear absorption, the curves are fitted using the empirical law based on the on-axis phase shift ΔΦ₀ = 2π n₂ I₀ L_eff / λ (equation (2)), where n₂ is the non-linear index of refraction from equation (1), λ the wavelength, I₀ the on-axis intensity at the waist, and L_eff = (1 − e^(−αL))/α. The analysed samples are those described in table 1. The data were collected for different incident beam intensities; the half-wave plate shown in the setup of figure 10 allows an efficient and simple tuning of the beam power. The output laser powers were 0.5, 1, 2, 3, 4, and 5 mW, respectively. Figure 11 illustrates the results obtained for sample c. As expected, the non-linear features increase with the beam intensity, because the weight of the second term of the intensity-dependent refractive index equation increases. Equation (2) is valid as long as |ΔΦ₀| is less than π [4]. The fitting gives a relative uncertainty on |ΔΦ₀| of 3%, and error propagation gives a relative uncertainty on n₂ of approximately 9% for each measurement. The principal error sources come from the intensity and waist measurements, as indicated by the error bars in figure 12. Figure 12 shows the variation of the non-linear refractive index versus the incident beam intensity for samples b-e. The phase modulation associated with the intensity-dependent refractive index, as well as the Gaussian profile of the beam, decreases for all samples at high intensities. However, for the smaller-size samples (d and e), the onset of the curve is different. The local warming, which produces a circular temperature gradient, creates a local variation of the refractive index called a thermal lens. Reference [21] reports cw non-linear optical measurements on CdTe nanoparticles embedded in a glass matrix; the non-linear refractive index value reported there is of the order of 10⁻⁷ esu (i.e., around 10⁻⁹ cm² W⁻¹).
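A sketch of extracting n₂ from a closed-aperture z-scan under the empirical law quoted above; the peak-valley relation ΔT(p-v) ≈ 0.406(1 − S)^0.25 |ΔΦ₀| is the standard Sheik-Bahae result and is an assumption here rather than a formula quoted from the paper:

```python
import math

def n2_from_zscan(delta_T_pv, S, I0, alpha, L, wavelength):
    """Magnitude of the non-linear refractive index n2 (cm^2/W) from a
    closed-aperture z-scan, assuming dT_pv = 0.406*(1-S)**0.25*|dPhi0|
    and equation (2), dPhi0 = 2*pi*n2*I0*L_eff/lambda, valid for
    |dPhi0| < pi. The sign of n2 follows from the peak/valley order of
    the measured trace.

    S: aperture linear transmittance; I0: on-axis intensity at the
    waist (W/cm^2); alpha: linear absorption (1/cm); L: cell length
    (cm); wavelength in cm.
    """
    L_eff = (1.0 - math.exp(-alpha * L)) / alpha
    dphi0 = delta_T_pv / (0.406 * (1.0 - S) ** 0.25)
    assert dphi0 < math.pi, "empirical law valid only for |dPhi0| < pi"
    return dphi0 * wavelength / (2.0 * math.pi * I0 * L_eff)
```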
around 10−9 cm2 W−1). In our measurements the reported n2 values are larger. The explanation lies in the thermal conductivity of the medium surrounding the nanoparticles: the thermal lens is more pronounced when the thermal conductivity is low, as is the case in our measurements. The thermal conductivity of glass is 1.38 W m−1 K−1 at 20 °C and increases with temperature, whereas the thermal conductivity of toluene is 0.120 W m−1 K−1 and decreases with temperature [22]. It has also been verified that, under these conditions, the solvent alone did not show any non-linear effect, and no non-linear absorption was detected at the He-Ne laser emission wavelength. Non-linear optical measurements with pulsed lasers have been reported in [23,24]. Those measurements are very different from the ones presented in this paper: in [23,24], electronic effects contributed to the intensity-dependent refractive index, whereas in the present measurements it is due to thermal effects. Conclusion CdTe nanoparticles were synthesized by the 'electropulse technique'. X-ray diffraction and EDX showed that the nanoparticles' composition corresponds to the expected CdTe stoichiometry and that they are crystalline. HRSTEM images showed that the size of the nanoparticles is nanometric, the size distribution being relatively narrow, between 7 and 10%. The exciton state of the particles is observed as a function of particle size by optical absorption measurements. Third-order optical non-linearities of a colloidal solution of CdTe in toluene were measured by the z-scan technique at the emission wavelength of a continuous wave He-Ne laser. The value of n2 varies between −1.2 × 10−7 and −2 × 10−8 cm2 W−1.
Development of a Software System for Selecting Steam Power Plant to Convert Municipal Solid Waste to Energy : A software system that enhances the selection of the appropriate power plant capacity to convert combustible municipal solid waste (MSW) into energy was developed. The aggregate of waste to be converted was determined and the corresponding heating value was established. The capacities of the steam power plant components required for the conversion were determined using thermodynamic mathematical models. An algorithm based on the models used to determine the energy potential and power potential of MSW, and the capacities of the components of the steam power plant, was translated into software code using the Java programming language; saturated and superheated steam tables, together with the thermodynamic properties of the power plant required, were incorporated into the code. About 584 tons of MSW with a heating value of 20 MJ/kg was the quantity of waste considered for energy generation. This information was input into the software as data and processed. The software predicted an energy potential of 3245.54 MWh for this quantity of waste, and an electrical power potential of 40.54 MW. The capacities of the steam power plant components that were predicted include 100.35 MW of boiler power, 40.54 MW of turbine power, and 59.80 MW of condenser power. The methodology adopted will make it easy for managers in the waste-to-energy sector to appropriately select the suitable capacity of the steam power plant required to convert any quantity of MSW at any geographical location, without going through the engineering calculations and the stress or rigor involved in plant capacity design. Moreover, the accuracy obtained for the software is greater than 99%. Introduction Municipal solid waste (MSW) is a collection of materials resulting from man's daily activities in his environment, which are considered not useful and are disposed of as waste. MSW generation has been on the increase over the years around the globe, due to the increase in population, changes in taste and fashion, advancement in technology, and consumerism growth [1,2]. Islam [3] and Ibikunle [4,5] reported that the growth in MSW generation is due to the global quest for urban, social, and industrial development. Johari [6] stated that the sporadic increase in waste generation will give rise to environmental challenges unless it is adequately and promptly managed. Ibikunle [7] remarked that the waste generated in the Ilorin metropolis of Kwara State is huge, and the waste management system available is insufficient; this makes people indulge in indiscriminate disposal of waste, which results in pollution, unsightly scenes, and blockage of waterways. The numbers of power plants in China, the US, Japan, Germany, the UK, and France are 166, 88, 822, 78, 14, and 37 respectively; power generation in China is 18.7 million MWh, 14 million MW in the US, 1.9 million kWh in Japan, 5768 GWh in Germany, 2782 GWh in the UK, and 1999 GWh in France [23]. Despite the enormous waste generated in African nations, and the recent development in WTE technology, Africa is still behind in waste-to-energy (WTE) practices. The first waste-to-energy (WTE) power plant in Africa is the 110 MW plant of Ethiopia, and Ghana is proposing a 60 MW Armech thermal power plant [24].
Oladende [25] reported that the Nigeria Electricity Supply Industry (NESI) has 12 power stations in Nigeria with a total power output of 11,165 MW, and yet could not produce power during an off-peak period of the Christmas holiday; moreover, the only 26,000-L capacity biogas power plant, at Ikosi-Ketu market in Lagos, with about 10 kVA daily capacity, has not commenced operation. In South Africa, the renewable energy projects put in place under the Renewable Energy Independent Power Producer Procurement Programme (REIPPPP) amount to about 6327 MW altogether, of which landfill and biogas take 60 MW [26]. The Ilorin metropolis selected for this study produces 302,000 tons of MSW per year, with a generation rate of 0.78 kg/capita/day; 70% of the aggregate waste generated is combustible, yet the city still faces an energy crisis [7]. The power supply by the Power Holding Company of Nigeria (PHCN) to the Ilorin metropolis is insufficient for its social and economic demand. Therefore, if energy recovery from waste components is adopted in Ilorin, it will provide a dual solution: efficient waste management and an alternative to energy from fossil fuels, which can complement the power supply from PHCN. To practice WTE in any environment, the sufficiency and efficiency of the waste fractions required must be ascertained: the sufficiency of MSW concerns the quantity of the waste fractions available for energy production and their regular flow in terms of generation rate, while the efficiency of the waste concerns its heating value and the correlation between the physicochemical properties of the MSW and its calorific value. When the sufficiency and the efficiency of the waste have been determined, the choice of the appropriate WTE technology can be made. In this study, software was developed using the Java programming language from an algorithm based on the equations used to determine the energy and electrical power potentials of the municipal solid waste, together with the mathematical models used to determine the capacity of each piece of equipment in the waste-fired power plant; Java is one of the common programming languages used in numerical analysis as well as with a unified modeling language (UML) [27]. This study aims to provide software that will serve as a management and technical decision tool to predict the capacity of the steam power equipment that will convert MSW into energy at any geographical location, provided the quantity of waste available for energy and its low heating value (LHV) are established. This study solves one of the technical challenges that could be encountered while trying to ascertain the capacity of the plant required. The software developed in this study will also predict the available energy and power potentials in the waste to be converted, as well as the thermodynamic properties of the stages involved in the process. This software is referred to as a power plant selection support system because it helps in taking decisions on the appropriate capacity of the steam power plant required for a specific quantity of waste once its low heating value has been ascertained. The accuracy of the software is established by comparing the manually calculated values to the values generated by the software, and it is found to be above 99%.
Decision Support Systems (DSSs) The computer-based interactive systems adopted to help users make the right decision or choice of activities and judgment are called DSSs; they are characterized by a data bank and retrieval facilities. A DSS enhances access to information and provides functions that support modeling and model-based reasoning. DSSs also support problem framing and solving. In many situations, they help improve the standard or quality of the decisions taken and correct the human error involved in decision-making. Disciplines including operations research, economics, and statistics have developed various methods for decision-making, and these methods can be enhanced by different scientific techniques based on artificial intelligence. Computer programs that either operate as tools on their own or as integrated computing environments are adopted for complex decisions; such computing environments are called decision support systems (DSSs). The broadness of the DSS concept is responsible for the variation in its definitions; every definition depends on the author's point of view. To embrace all perspectives, a DSS can be defined as a computer-based system that aids users in judgment and choice of activities [28]. This is the reason the software developed in this study, to select the appropriate capacity of the steam power plant that will convert any quantity of combustible MSW to energy in any geographical location, is considered a decision support system. The design and analysis of a DSS are complex and involve the use of specific and adequate instruments and methodologies to model the decision processes. A DSS can be implemented with customized, purpose-built features, or by utilizing a generalized DSS that is later customized. DSSs can either be specifically developed for an establishment using programming languages, or built with DSS generators. The software development strategy either engages a general-purpose programming language (GPL), such as C++, PASCAL, BASIC, or COBOL, or engages a fourth-generation language (4GL) environment, such as VISUAL BASIC .NET, C# .NET, VISUAL J# .NET, DELPHI, JAVA, or VISUAL C++ [29]. The most appropriate area for the installation of a biomass power plant can best be selected by managers using a spatial decision support system, in which fuzzy logic and fuzzy membership functions are used for the creation of criteria layers and suitability maps, and a Multicriteria Decision Analysis methodology (the Analytical Hierarchy Process) combined with fuzzy system elements is used for the determination of the weight coefficients of the participating criteria [30]. Fatima [31] incorporated a decision support system in a power plant simulation tool to provide the qualitative synthesis needed for the plant design process, and to assist design engineers in performing a better choice-evaluation for technical quantitative and qualitative analysis. Konstantinos [32] stated that decision support systems (DSSs) are designed as tools to help managers take decisions by accelerating the relevant processes. The DSS developed in this study is designed to accelerate the processes required in the design of power plants, and concurrently predict the energy and power potentials available in a quantity of MSW.
This DSS can predict the energy and power potentials of the MSW fractions that are available for power generation, predict the thermodynamic properties involved in each stage of the power cycle of the steam power plant, and also predict the capacity of each piece of equipment in the power plant that will convert a particular quantity of MSW whose net heating value is known. Materials and Methods The MSW materials investigated in this article are the 9 combustible MSW fractions of the total 19 waste fractions identified in the Ilorin metropolis. These include paper, packaging boxes, grass/garden trimmings, rags, wood, food residue, nylon, polypropylene sacks, and plastic bottles. The quantity of MSW available per day for energy production was established, and the net heating value was determined using an e-2k combustion calorimeter. The heat energy and the power potentials of the MSW were determined, and the reheat Rankine cycle was used to design the capacities of the steam power plant equipment required for energy production. The mathematical models utilized in determining the energy potential and in designing the power plant capacity form the algorithm used in the development of the software, called the decision support system. The software interfaces were developed using Java Swing utilities that comprise JFrames, JTextFields, JButtons, and a JEditorPane for data display. Action listeners and action events were used to control the button functionalities, handle calculations, and manage the panel/EditorPane displays [25]. Determination of the Aggregate MSW Generated in Ilorin/Year, Using Collection Facts and Figures The aggregate MSW produced in the Ilorin metropolis of Kwara State between January and June 2021 was estimated by using the mathematical model in Equation (1), without the aid of a weighing bridge, as suggested by Kosuke [22] and Ibikunle [4,5,10]:

Q_MSW = Σ_j Σ_{i=1}^{n} (C_i × V_i × d_i × t_ij), (1)

where the total number of trucks is n, the capacity of a truck is C_i (m3/truck), the loading volume ratio of a truck is V_i, the density of MSW loaded on the truck is d_i (tons/m3), and the number of trips by truck i on day j is t_ij (frequency of trips/day). Characterization of the MSW Fractions It was reported by NT ENVIR 001 [33], Issam [34], and Ibikunle [4,5,10] that the huge MSW collected from different locations in the city can be effectively characterized through random sampling of selected heaps of waste deposited in the dumpsite. NT ENVIR 001 [33] suggests that a 240 L bin full of MSW is appropriate as a unit sample. The waste streams were characterized twice a week for six months, making 48 different batches of samples for the characterization investigation at the Lasoju dumpsite, to avoid any consequences from the effect of insufficient samples. Each component identified was sorted into a different receptacle and weighed. Moisture Content Analysis of MSW Components The moisture content of the MSW combustible fractions was estimated using an electric oven (model DHG 9053) with a 200 °C capacity. About 1 g of powder of the sample of each component was measured into a crucible and dried in the oven, maintained at a temperature of 110 °C for an hour. The weight loss is considered the moisture content of the waste component, as suggested by Vairam [35], Shi [36], Titiladunayo [7], and Ibikunle [4,7,37], using Equation (2):

MC (%) = 100 × (W2 − W3)/(W2 − W1), (2)

where MC is the percentage moisture content, W1 is the mass of the empty crucible (g), W2 is the mass of the crucible and sample (g), W3 is the mass of the crucible and sample after heating (g), W2 − W1 is the mass of the sample (g), and W2 − W3 is the moisture content (g).
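For illustration, a minimal Python sketch of Equations (1) and (2) is given below; the truck records and crucible masses are placeholders, not the study's raw data:

```python
# Minimal sketch of Equations (1) and (2); all input values are illustrative.

def aggregate_msw(trucks):
    """Equation (1): sum C_i * V_i * d_i * t_ij over all trucks and days (tons)."""
    return sum(c * v * d * sum(trips_per_day) for c, v, d, trips_per_day in trucks)

def moisture_content(w1, w2, w3):
    """Equation (2): MC (%) = 100 * (W2 - W3) / (W2 - W1)."""
    return 100.0 * (w2 - w3) / (w2 - w1)

# (capacity m^3, loading ratio, density tons/m^3, trips for each day)
trucks = [
    (10.0, 0.9, 0.35, [4, 5, 4]),
    (7.5, 0.8, 0.35, [6, 6, 5]),
]
print(f"Aggregate MSW: {aggregate_msw(trucks):.1f} tons")
print(f"MC: {moisture_content(20.00, 21.00, 20.85):.1f} %")
```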
Ultimate Analysis of MSW Components Ultimate analysis was carried out to ascertain the quantitative percentages of carbon (C), hydrogen (H), nitrogen (N), sulphur (S), and oxygen (O) contained in the MSW components. The investigation was carried out using an elemental analyzer (Flash EA 1112 model), based on ASTM D5291. About 0.5 g of powdered sample of each component was measured into a crucible and combusted. The oxides of carbon, hydrogen, nitrogen, and sulphur produced were analyzed by a thermal conductivity detector (TCD), and the electrical signal that emerged was processed by the 'Eager 300' software to give the percentages of the elements contained in the sample. The samples were replicated, and the average of the values obtained is considered the typical value [2]. Estimation of the Heating Value of the MSW Combustible Fractions Islam [3] and Shi [33] suggested that the high heating value (HHV) of solid fuel can be determined using the e-2k combustion calorimeter shown in Figure 1. The low heating value (LHV) of the nine combustible waste fractions presented in Table 3 was obtained by applying Equation (3), as adopted by Islam [3], Ibikunle [4], Ibikunle [10], Ibikunle [37], and Kumar [38], as well as by adopting the Dulong and Steuer models in Equations (4) and (5) respectively. The average of the LHV obtained from the three equations is considered the typical LHV [4,7]. In Equation (3), W_msw is the weight fraction (%) of the combustible MSW and HHV_msw is the high heating value obtained by using a bomb calorimeter [4,10,37-39]. In Equations (4) and (5), LHV_ii and LHV_iii are the heating values estimated by adopting the Dulong and Steuer models respectively, and carbon (C), hydrogen (H), sulfur (S), and oxygen (O) are the chemical elements obtained from the ultimate analysis.
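Since the paper's Equations (3)-(5) are not reproduced here, the sketch below uses the classical textbook form of Dulong's formula to indicate how an LHV estimate follows from the ultimate analysis; the coefficients and the sample composition are assumptions and may differ from those actually used in Equations (4) and (5):

```python
# Hedged sketch of a Dulong-type heating value estimate; the coefficients are
# the classical textbook values, not necessarily those of Equations (4)-(5).
def hhv_dulong(c, h, o, s):
    """HHV (MJ/kg) from ultimate analysis given in percent by mass."""
    return 0.338 * c + 1.442 * (h - o / 8.0) + 0.094 * s

def lhv_from_hhv(hhv, h):
    """Deduct the latent heat of the water formed from hydrogen (H in %)."""
    return hhv - 0.02442 * 8.94 * h

# Illustrative biomass-like composition (percent by mass), not Table 4 data
hhv = hhv_dulong(c=50.0, h=6.0, o=40.0, s=0.1)
print(f"HHV ~ {hhv:.1f} MJ/kg, LHV ~ {lhv_from_hhv(hhv, 6.0):.1f} MJ/kg")
```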
Determination of the Energy Potential (EP_msw) of the Municipal Solid Waste Daura [39] and Ibikunle [10] reported that the energy potential (EP_msw) of MSW can be determined by applying Equation (6):

EP_msw (kWh) = (1000 × W_msw × LHV_msw)/3.6, (6)

where EP_msw is the energy potential of the MSW, W_msw (tons) is the weight of MSW, and LHV_msw is the net low heating value of the MSW (MJ/kg); the conversion ratio is 1 kWh = 3.6 MJ. The Electrical Power Potential (EPP_msw) of the MSW Is Determined Using Equation (7), as Suggested by Ibikunle [7,10] and Daura [39]:

EPP_msw = (EP_msw × η)/24 = 40,602 kW, (7)

where η is the conversion efficiency of the power plant, which is within a range of 20-40% as reported by Muhammad [40] and Ibikunle [7,10]; a conversion efficiency of 30% is adopted for this work. The power delivered to the grid is

Power to grid = η_g × η_p × W_T (MW), (8)

where η_g is the generator efficiency (90% selected) and η_p is the transmission efficiency (75% selected) applied to the turbine work (W_T). The generator efficiency range is 85-90% and the turbine efficiency is within a range of 75-80% [41].
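A minimal sketch of Equations (6)-(8) for the study's inputs (584 tons at 20 MJ/kg) is shown below; it reproduces the order of magnitude of the reported figures:

```python
# Sketch of Equations (6)-(8); inputs are the study's 584 tons at 20 MJ/kg.
W_msw = 584.0               # tons of combustible MSW per day
LHV = 20.0                  # MJ/kg
eta = 0.30                  # plant conversion efficiency adopted in the text
eta_g, eta_p = 0.90, 0.75   # generator and transmission efficiencies

EP_kWh = W_msw * 1000.0 * LHV / 3.6    # Equation (6), 1 kWh = 3.6 MJ
EPP_kW = EP_kWh * eta / 24.0           # Equation (7), power over a day
grid_kW = eta_g * eta_p * EPP_kW       # Equation (8), power delivered to grid

print(f"EP ~ {EP_kWh / 1000:.0f} MWh")    # ~3244 MWh (paper: 3245.54 MWh)
print(f"EPP ~ {EPP_kW / 1000:.2f} MW")    # ~40.6 MW (paper: 40.54 MW)
print(f"grid ~ {grid_kW / 1000:.1f} MW")  # not quoted in the paper
```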
Determination of the Steam Power Plant Capacity Required for Energy Production Thermodynamic analysis of the Rankine cycle is used to enhance the reliability and efficiency of steam power plants [42]. In this study, the reheat Rankine cycle presented in Figure 2b is used to determine the capacity of the steam power plant that will convert the waste to electrical power. The reheat Rankine cycle was used for the estimation of the steam power plant capacity because it has a higher cycle efficiency than the ordinary superheat Rankine cycle shown in Figure 2a. The combination of a temperature of 400 °C and a pressure of 40 bar is commonly used in the capacity determination/design of steam power plants to minimize investment costs: a temperature higher than 400 °C may result in high-temperature strain and corrosion of the superheater tubes, while a pressure below 40 bar lowers the requirements for pretreatment of the feed water [7,43,44]. The reheat Rankine cycle in Figure 2b is a modification of the simple Rankine cycle in Figure 2a. The schematic representation of a steam power plant using the reheat Rankine cycle is shown in Figure 3. The work output (W_T12) of the high-pressure turbine (HPT) can be determined using Equation (9), as suggested by Kaspooria [42], Akhator [12], Arabkooshar [45], and Loni [46]:

W_T12 = h1 − h2, (9)

where W_T12 (kJ/kg) is the HPT work output per unit mass of steam, h1 and h2 are the enthalpies at states 1 and 2 during the isentropic expansion process in the HPT, and m_st denotes the mass flow rate of the steam. According to Jordi [47] and Hesham [48], the heat (Q26) supplied to the steam in the reheat tube can be determined using Equation (10):

Q26 = h6 − h2, (10)

where Q26 is the heat supplied to the steam during the constant-pressure heating process from state 2 to state 6 in the reheat tube. The work output (W_T67) of the low-pressure turbine (LPT) can be determined as suggested in Equation (11) by Akhator [12] and Arabkooshar [45]:

W_T67 = h6 − h7, (11)

where W_T67 (kJ/kg) is the LPT work output, and h6 and h7 are the enthalpies at states 6 and 7 of the reheat Rankine cycle, during the isentropic expansion process in the LPT. The heat (Q73) rejected at the condenser, according to Akhator [12], Arabkooshar [45], and Loni [46], is presented in Equation (12):

Q73 = h7 − h3, (12)

where Q73 (kJ/kg) is the heat rejected in the condenser during the isothermal heat-rejection process, and h7 and h3 are the enthalpies at states 7 and 3 of the reheat Rankine cycle respectively. The heat supplied in the boiler is

Q31 = h1 − h3, (13)

where Q31 (kJ/kg) is the heat supplied to the boiler, and h1 and h3 are the enthalpies at states 1 and 3 of the reheat Rankine cycle respectively. The total boiler heat is

Q_B = Q31 + Q26, (14)

where Q_B is the sum of the heat Q31 and the reheat Q26. The MSW consumption rate is m_f = (m_st × Q_B)/(η_B × LHV), where m_f is the rate of fuel consumption, η_B is the boiler efficiency, and LHV is the low heating value of the MSW; an efficiency of 80% is assumed for the boiler (the range of boiler efficiency is 80-90%) [38]. The boiler power Q_BP = m_st × Q_B is the power generated per day by the boiler for the steam power plant. The turbine power output is P_T = m_st × W_T, where W_T = W_T12 + W_T67 is the total work output of the turbine. The condenser power is obtained analogously from the rejected heat, P_C = m_st × Q73. Development of Software for Selecting Power Plant Capacity The nomenclature of all the parameters used in modeling the energy and electrical potentials of MSW, and of the parameters used in modeling the power plant capacity required, was prepared.
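To make the chain of Equations (9)-(14) concrete, the sketch below evaluates them for illustrative enthalpies; the actual state values are those taken from the steam tables (Table 6a), which are not reproduced here:

```python
# Sketch of Equations (9)-(14) with illustrative enthalpies (kJ/kg); the
# real state values come from the steam tables in Table 6a, not shown here.
h = {1: 3214.0, 2: 2860.0, 3: 173.0, 6: 3273.0, 7: 2500.0}  # assumed values

W_T12 = h[1] - h[2]    # HPT work, Eq. (9)
Q_26 = h[6] - h[2]     # reheat heat, Eq. (10)
W_T67 = h[6] - h[7]    # LPT work, Eq. (11)
Q_73 = h[7] - h[3]     # condenser heat rejected, Eq. (12)
Q_31 = h[1] - h[3]     # boiler heat, Eq. (13)
Q_B = Q_31 + Q_26      # total boiler heat, Eq. (14)

m_st = 28.0            # steam mass flow rate (kg/s)
print(f"W_T = {W_T12 + W_T67:.0f} kJ/kg, Q_B = {Q_B:.0f} kJ/kg")
print(f"P_T = {m_st * (W_T12 + W_T67) / 1000:.1f} MW")
```

Note that, by construction, Q_B minus the total turbine work equals Q73, so the sketch also verifies the energy balance of the cycle.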
An algorithm based on Equations (2)-(19) was prepared and translated into software code using the Java programming language (one of the common languages used in numerical analysis and with a unified modeling language) [25]. The flow chart upon which the algorithm is based is shown in Figure 4. The software interface was developed using Java Swing utilities, comprising mostly JFrames, JTextFields, JButtons, and a JEditorPane for data display. Action listeners and action events were used to control the button functionalities, handle calculations, and manage the panel/EditorPane displays. Java was used due to its flexibility, its object-oriented nature, and its multi-platform capability, which allows it to function on any operating system (OS). The conversion ratios for MSW in tons to kilograms, the heating value in MJ/kg to energy potential in kWh, the saturated and superheated water tables, and the steam properties were incorporated into the software's database [25]. The Flowchart for the Decision Support System
1. The weight of MSW available for energy generation, the heating value of the MSW, the boiler efficiency, the conversion efficiency, and the turbine efficiency are supplied into the corresponding designated boxes in the input interface shown in Figure 5.
2. The boiler pressure, boiler temperature, pressure in the reheat tube, and the pressure required in the condenser are selected from the corresponding drop-down icons provided in the input interface.
3. After the required parameters have been supplied/selected, the system requests to proceed, and the proceed button is clicked.
4. A verification interface, shown in Figure 6, then appears to ask if the data are OK; if so, click on the proceed button, otherwise click on the recheck data button and continue.
5. Then click on the process button, and the data will be processed automatically. After that, click on the proceed button and the result will automatically be displayed on an output interface, together with the date and exact time of processing, as presented in Figure 7.
The results displayed include the energy and electrical power potentials of the MSW, the work output of the high-pressure and low-pressure turbines, the heat rejected at the condenser, the heat supplied in the boiler, the boiler power, the turbine power, the condenser power required to convert the quantity of MSW to power, and the enthalpies at the different nodes of the reheat Rankine cycle. Results and Discussion In this section, the aggregate of the MSW produced in the Ilorin metropolis, the components in the MSW streams, their energy content, and the capacity of the waste-fired power plant required for electricity generation are established. Moreover, the manually-calculated values and the values generated by the selection support system are compared. These include the heat energy and electrical power potentials of a quantity of MSW, the energy at the thermodynamic states of the Rankine cycle, and the required capacity of the power plant that will convert the MSW; this is to ascertain the accuracy of the decision support system. The Estimated Aggregate of MSW Produced The quantity of MSW generated in a city must be considered to be able to estimate the quantity of the fractions that are suitable for energy production.
The MSW generated in Ilorin between January and June 2021, as presented in Table 1, was predicted to be about 101,912 tons, based on the facts and figures of the waste collection system, without the use of a weighing machine. This compares with 370,706 tons/year generated in Onitsha (eastern Nigeria), 10,000 tons of MSW/day in Lagos, the commercial center of Nigeria [55], 250 tons of MSW/year in Ado-Ekiti (western Nigeria) [56], 30 million tons/year in the US [57], 2.15 million tons/year in Sweden [58], 46 million tons/year in China [59], and 169,120 tons of MSW/day in Africa [60]. The aggregate waste produced during this study implies that the MSW generated in Ilorin is sufficient for a renewable energy process. The quantity collected for disposal was estimated following Ogunjuyigbe [13], who suggested that the aggregate of MSW collected and disposed of in most developing nations is about 74% of the quantity generated. Table 2 reveals that the quantity of MSW characterized during the study is about 985 kg. The population responsible for MSW generation during this study is 1,222,294 people, as predicted by Ibikunle [5]. The aggregate waste characterized during June is about 229 kg, followed by 182 kg in May, with the least, about 156 kg, in February. The reason June has the highest quantity of MSW, about 23% of the total waste characterized, might be that newly harvested crops were arriving from the farms during this period, making food items and other goods cheaper and more affordable for many people. February had the least waste, which might be due to the dry season, when there was no planting or harvesting of food crops, and because people had just concluded the new year celebration and did not have enough money to purchase items. In this characterization study, plastic bottles have the highest fraction, about 13%, followed by grass/garden trimmings, about 8%, with leather having the least, about 1%. The reason for the high fraction of plastic bottles is that many people consume bottled water and other drinks. Leather has the lowest fraction because it is treasured for ornamental crafts, bags, and shoes in Ilorin; hence its waste fractions are rare to come by. The generation rate of MSW during this study is 0.145 kg/capita/day. The Combustible MSW Fractions That Are Considered for Energy Production Nine combustible fractions of the 19 MSW components that were characterized are considered for energy production, as presented in Table 3. The table shows that about 612 kg of combustible waste fractions were characterized during this study, about 62% of the waste generated. This implies that about 63,185 tons of combustible MSW were available for energy production during the investigation. Ibikunle et al. (2019) estimated the combustible components of MSW in Ilorin for the year 2016 to be about 71%, compared with 84% in Ghana in 1983 and 27% in 2014, 7.7% in South Africa, 74% in Nigeria, and 3.6% in the US [61]. This implies that energy recovery from MSW via combustion in 2016 would have reduced the quantity of waste deposited in the dumpsite in the Ilorin metropolis by about 71%, whereas this recent study revealed that energy recovery via combustion of waste fractions could reduce the aggregate waste deposited in the dumpsite in Ilorin by about 62% within six months. Table 3. Physical characterization of MSW (of 612 kg) for energy production between January and June.
Table 4 presents the ultimate analysis of the waste fractions and the high heating value (HHV) of the components obtained from the bomb calorimeter. The analyses reveal that the average elemental composition of the waste fractions is 29.14% carbon, 0.11% hydrogen, 3.95% nitrogen, 0.50% sulfur, and 0.15% oxygen. The average HHV of the MSW components is 25 MJ/kg. These values are very important in the estimation of the energy and power potentials of MSW. The low heating value (LHV) of the waste components, which is paramount in the estimation of the power potential, as determined by adopting the Dulong and Steuer models in Equations (4) and (5), is presented in Table 5a. The average HHV from the bomb calorimeter is 25 MJ/kg, the average LHV_b from the Dulong model is about 21 MJ/kg, and LHV_c from the Steuer model is about 20.8 MJ/kg. Moreover, the LHV obtained from Equation (3) is about 19 MJ/kg. Therefore, considering the results from the three equations, the typical value for the LHV is taken to be 20 MJ/kg. MSW with an LHV of 20 MJ/kg produces, per unit mass, about 43% of the energy in petrol, 45% of that in diesel, 48% of that in natural gas, 49% of that in coal, and about 100% of the energy content of woody biomass [62]. The Energy Potential (EP_MSW) and Electrical Power Potential (EPP_MSW) of the MSW The methodology established that about 62% of the MSW characterized is combustible; therefore, considering about 584 tons out of the about 63,185 tons of combustible waste fractions available during the study, with a low heating value (LHV) of 20 MJ/kg, a heat energy potential of 3.2 GWh and an electrical power potential of 40.6 MW can be produced in a day, as presented in Table 5b. The available energy potential (EP_MSW) of 3.2 GWh in the MSW is equivalent to the energy in 603 tons of wood, 395 tons of coal, 94 tons of hydrogen, 352,591 L of petrol, and 320,000 L of diesel [63]. The electrical power potential (EPP_MSW) of the MSW is about 41 MW; this will meet about 15% of the power requirement of Kwara State [64] and will help achieve about 9% of the Nigeria Renewable Energy Master Plan (REMP) goal for 2025. The Capacity of the Steam Power Plant Components Required to Convert the MSW The capacities of the steam power plant components that will convert the available combustible MSW in Ilorin to energy were determined. It is established that a steam power plant output of 41 MW is required, operating on a reheat Rankine cycle of 40% efficiency, based on design specifications, at a steam pressure and temperature of 40 bar and 400 °C respectively. A temperature higher than 400 °C can bring about high-temperature strain and corrosion in the boiler superheater tubes, and a pressure below 40 bar can lower the requirements for pretreatment of the feed water [27,44,45]. The plant has an MSW fuel consumption rate of 40 tons/h, a boiler capacity of 3600 kJ/kg (heat input), a turbine output capacity of 1455 kJ/kg of steam, a steam mass flow rate of 28 kg/s (100 tons/h), and a specific steam consumption (SSC) of 2.48 kg/kWh. The parameters determined for each node (state) during the thermodynamic processes required in the steam power generating procedure are given in Table 6a. The energy balance of the processes is given in Table 6b, and the capacities for each piece of equipment of the power plant required are given in Table 6c.
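The reported per-kilogram figures can be cross-checked with a few lines of Python; the sketch below uses only the numbers quoted in this paragraph:

```python
# Cross-check of the plant capacities using the per-kg figures quoted above.
m_st = 28.0    # steam mass flow rate (kg/s)
q_B = 3600.0   # boiler heat input per kg of steam (kJ/kg)
w_T = 1455.0   # total turbine work per kg of steam (kJ/kg)

Q_BP = m_st * q_B / 1000.0   # boiler power (MW)
P_T = m_st * w_T / 1000.0    # turbine power (MW)
P_C = Q_BP - P_T             # condenser power by energy balance (MW)
ssc = 3600.0 / w_T           # specific steam consumption (kg/kWh)

print(f"Boiler ~ {Q_BP:.1f} MW, turbine ~ {P_T:.1f} MW, condenser ~ {P_C:.1f} MW")
print(f"SSC ~ {ssc:.2f} kg/kWh, cycle efficiency ~ {w_T / q_B:.0%}")
```

These few operations reproduce, to within rounding, the boiler, turbine, and condenser capacities and the SSC reported in Table 6c.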
Tables 6a,b contain the pressures and temperatures selected from the steam table and incorporated into the power plant design calculations to determine the enthalpy, entropy, work done, and heat at the different nodes of the thermodynamic stages. Table 6c presents the heat, work, and power estimated for the boiler, turbine, and condenser discretely, while determining the capacity of the power plant that will convert 584 tons of MSW to energy. This agrees with 'ZGB' (a manufacturer of boilers in China), whose boilers with a rating range of 101-150 tons/h of steam mass flow rate have a power of 70-105 MW. Software-Generated Values on Power Potential and Power Plant Capacity The software developed serves as the supporting tool for the selection of the waste-to-energy (WTE) power plant capacity that will utilize MSW as fuel. The system determines the energy and power potentials of MSW when the known weight of MSW in tons and its low heating value (MJ/kg) are entered into the input interface of the software shown in Figure 5. The data entered are verified in the interface in Figure 6, to ensure error-free processing, before the data are processed. The results obtained using the software are presented in the output display of Figure 7. This decision support system will predict the capacity of the power plant required for energy recovery from any quantity of MSW at any location, as well as the heat enthalpy at the nodes of the Rankine cycle during the thermodynamic processes, provided the quantity of waste to combust and its low heating value are known. Comparison between Manually-Calculated and Software-Generated Values on Energy Potentials and Power Plant Capacity The efficiency of the software developed is validated by comparing the manually-calculated values to the values obtained from the software package [65]. The values presented in Table 7 show that the accuracy of the software is >99%. This software can be used to predict the energy and power potentials of MSW in any location at any time, provided the weight and the low heating value are known. It will also excellently predict the quantity of work or heat involved in each piece of the power plant's equipment, as well as the capacity of the required boiler, turbine, and condenser that will convert the MSW to electrical energy. Conclusions The software developed can effectively serve as a supporting system for the selection of the appropriate capacity of a steam power plant to convert any quantity of MSW, as fuel, to energy. The results obtained revealed that a steam power plant with about 100 MW of boiler power, about 40 MW of turbine power, and 60 MW of condenser power, with a steam mass flow rate of about 28 kg/s, will convert about 584 tons of municipal solid waste (MSW) with a 20 MJ/kg heating value to an electrical power potential of about 41 MW and an energy potential of about 3243 MWh. The comparison of the results obtained by manual calculation and those of the software shows an accuracy greater than 99%. It is concluded that the software will successfully predict the energy and electrical power potentials of MSW, and the power-to-grid potentials, for any quantity of combustible waste fractions at any time, irrespective of geographical variations. It will also predict the capacities of the individual components of the power plant required to convert the waste to energy, provided the quantity of the municipal solid waste (MSW) to be converted in tons and the net heating value of the MSW (MJ/kg) are supplied.
The software will also determine the cycle efficiency of the power plant, the steam mass flow rate, and the MSW (fuel) consumption.
Joint Opportunistic Scheduling of Cellular and Device-to-Device Communications The joint scheduling of cellular and D2D communications to share the same radio resource is a complex task. On the one hand, D2D links provide very high throughputs. On the other hand, the intra-cell interference they cause impacts the performance of cellular communications. Therefore, designing algorithms and mechanisms that allow an efficient reuse of resources by the D2D links with a reduced impact on cellular communications is a key problem. In general, traditional Radio Resource Management (RRM) schemes (D2D grouping and mode selection) focus on finding the most compatible D2D pair for an already scheduled cellular User Equipment (UE). However, such an approach limits the number of possible combinations to form the group (composed of a cellular UE and a D2D pair) to be scheduled in the radio resource. To overcome that, in this work a unified Joint Opportunistic Scheduling (JOS) of cellular and D2D communications is proposed, which is able to improve the total system throughput by exploiting the spatial compatibility among cellular and D2D UEs. But more complexity is brought to the scheduling problem; thus, a low-complexity suboptimal heuristic, Joint Opportunistic Assignment and Scheduling (JOAS), is also elaborated. Results show that it is possible to reduce the computational complexity and still improve the overall performance in terms of cellular fairness and total system throughput, with less impact on cellular communications. Transmissions conveyed to cellular and D2D UEs on the same radio resource are coupled by intra-cell interference. Therefore, designing algorithms and mechanisms that allow an efficient reuse of resources by the D2D links, as a means to improve the spectrum utilization with a reduced impact on cellular communications, is a key problem. Fig. 1 exemplifies the problem of spectrum sharing among cellular and D2D communications within a single cell. Several distance-based studies in the literature have shown that the efficient sharing of resources heavily depends on the distance between UEs with D2D traffic and their positions with respect to cellular UEs [1], [4]-[6]. In [4], an analysis demonstrates the feasibility of the coexistence of both communication modes (i.e., the cellular and D2D communication modes, hereafter referred to as mode 1 and mode 2, respectively) and shows that D2D communications bring benefits in interference-limited local area scenarios. Cellular communications happening close to the Evolved Node B (eNB) (in LTE systems) and D2D communications occurring near the cell-edge provide the most favorable scenario for sharing resources. Thus, the potential benefits of D2D communications are strongly constrained by the network topology (in this work, the use of terms like spatial conditions, spatial reuse, or spatial compatibility, which are usually employed in multi-antenna systems, relates to the geographical distribution of UEs and their channel conditions). The best overall capacity depends mainly on the position of the D2D transmitter relative to the cellular UE when reusing downlink resources, and relative to the eNB when reusing uplink resources [1]. This indicates that differentiating between transmitters and receivers, and exploiting the UEs' geographical distribution, are extremely important for interference mitigation by Radio Resource Management (RRM) schemes.
B. Background: Traditional RRM Schemes While some previous works in the literature have pointed out that the overall capacity of a cellular network with underlaid D2D communications always outperforms the conventional cellular network when cellular radio resources are reused by D2D communications in favorable conditions [1], [4], other works have proposed solutions to extend the range of situations in which D2D links are useful through RRM schemes: D2D grouping, mode selection, and power allocation [5], [7], [8]. In [5], a heuristic approach aiming at power minimization achieves a suboptimal performance in terms of spectral efficiency and throughput fairness through joint mode selection, D2D grouping, and power allocation. Thus, RRM schemes that efficiently apply interference coordination become a major issue in cellular networks with D2D communications. Also, most of the proposed schemes for resource assignment of D2D communications have considered a pre-selected cellular UE [1], [4], [6], [7], such that the cellular scheduling runs independently of the establishment of D2D links. In [8], the available channels are firstly allocated to cellular UEs while D2D pairs form a priority queue for each of those channels. Then, the eNB sequentially selects the D2D pair with the highest priority and sets the transmit power for each resource. In general, greedy RRM schemes solve three separate subproblems, as shown in Fig. 2 (RRM schemes based on a greedy approach). The cellular scheduling selects a primary cellular UE. After that, the D2D grouping step assigns a pair of D2D UEs to the primary cellular UE. Then, the mode selection is responsible for the decision-making between cellular and D2D mode [9]. However, when the primary cellular UE is chosen by a greedy approach, it is not possible to ensure that the group including this UE and a secondary D2D pair is the most spatially compatible one for sharing the radio resource. The choice of a primary cellular UE as the head of a greedy search, as done by these RRM schemes, limits the exploitation of the overall multi-UE diversity. This issue is further aggravated when mode selection avoids the shared mode because the most compatible D2D pair of the primary selected cellular UE does not contribute to an increased throughput. Perhaps all available D2D pairs are spatially incompatible with that primary cellular UE, i.e., they do not have good channel conditions to share the same resource, while other cellular UEs could be prioritized instead to improve the overall spectrum utilization. While it is known that the system capacity can be improved by exploiting the multi-UE diversity [5], the aforementioned works [1], [4], [6]-[8] have neglected the benefits of such diversity, as they just focused on finding the most spatially compatible D2D pairs with respect to a scheduled cellular UE. Besides that, since the instantaneous throughput information of all cellular UEs and D2D pairs must already be available within a cell for mode selection purposes, these measurements may be used instead in a unified framework for joint resource assignment of cellular and D2D communications. Moreover, in [10] the authors claim that they consider joint mode selection, channel assignment, and power control to maximize the overall system throughput. Three modes are then considered: dedicated cellular mode (herein mode 1), dedicated D2D mode, and reuse mode (herein mode 2).
However, the optimization problem is decomposed into two subproblems: transmit power control for both cellular and D2D UEs, and joint mode selection and channel assignment for each D2D UE. Thus, once again, a greedy approach is followed. C. Problem Statement To the best of our knowledge, prior works have not proposed an optimization model for resource assignment that takes into account cellular scheduling, D2D grouping, and mode selection in a unified framework aiming to exploit the multi-UE diversity and to provide fairness. Hence, we propose the unified Joint Opportunistic Scheduling (JOS) problem for resource assignment among cellular and D2D communications, as shown in Fig. 3. The JOS problem allows the opportunistic exploitation of instantaneous channel fluctuations and prioritizes communications which are in better conditions for sharing resources, so as to obtain an improved system capacity. To avoid an Exhaustive Search (ES) over all possible resource assignment, scheduling, and mode selection decisions, simple and effective heuristic algorithms are required for alleviating concerns about processing complexity. Hence, we propose the Joint Opportunistic Assignment and Scheduling (JOAS) heuristic as an efficient low-complexity solution for the JOS problem. The proposed heuristic forms various candidate groups and evaluates them using a modified Proportional Fair (PF) metric, but processes only the UEs and groups most likely to be scheduled. In addition, assuming that the maximum throughput in mode 2 is achieved when only one D2D pair is enabled per cell at each scheduling instance in an LTE network [9], [10], especially in small cell cases, the JOAS heuristic limits the number of possible combinations so as to group only one cellular UE and one D2D pair. Besides that, it also considers a mechanism to limit the impact of D2D communications on the performance of cellular communications. The main contributions of this work are summarized as follows: 1) Propose the JOS problem, whose objective is to maximize a modified PF metric that takes into account the instantaneous throughput of cellular and D2D communications; 2) Elaborate a low-complexity suboptimal JOAS heuristic to solve the JOS problem, based on resource assignment and joint scheduling of groups of cellular and D2D UEs, along with optional pre-selection schemes and a protection mechanism for cellular communications. The remaining sections of this manuscript are organized in the following manner. Section II presents the system modeling. In Section III, a formulation for the JOS problem is proposed, while Section IV describes the JOAS heuristic. In Section V, traditional RRM schemes are briefly described. Simulation results are presented and discussed in Section VI. Finally, in Section VII, conclusions and future perspectives are drawn. II. System Model In this section the system model is addressed. Let us assume a multi-cell LTE network with C cells. The shape of each cell is a regular hexagon with an eNB placed in its center [11]. To model a scenario where D2D communications are likely to happen underlaying such a network, a percentage of the total J UEs within the cell coverage area is clustered inside a rectangular hotspot zone located near the cell-edge (which ensures the favorable conditions for sharing radio resources [6]), while the others are uniformly distributed over the remaining area [12]. As an example, for a percentage of 25%, if there are J = 16 UEs per cell, J_CELL = 12 of them are cellular UEs and J_D2D = 4 are D2D UEs.
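For concreteness, the following Python sketch reproduces this UE drop, including the random pairing described in the next paragraph; the cell geometry and hotspot placement are simplified placeholders:

```python
# Sketch of the UE drop described above: J UEs per cell, a fraction of them
# clustered in a cell-edge hotspot and randomly paired into D2D links.
import random

def drop_ues(j_total=16, d2d_fraction=0.25, cell_radius=250.0):
    n_d2d = int(j_total * d2d_fraction)          # e.g. 4 D2D UEs -> M = 2 pairs
    cellular, d2d = [], []
    for _ in range(j_total - n_d2d):             # simplified uniform drop
        cellular.append((random.uniform(-cell_radius, cell_radius),
                         random.uniform(-cell_radius, cell_radius)))
    for _ in range(n_d2d):                       # cluster in a cell-edge hotspot
        d2d.append((random.uniform(0.8 * cell_radius, cell_radius),
                    random.uniform(-20.0, 20.0)))
    random.shuffle(d2d)                          # random (transmitter, receiver) pairing
    pairs = [(d2d[i], d2d[i + 1]) for i in range(0, n_d2d, 2)]
    return cellular, pairs

cellular_ues, d2d_pairs = drop_ues()
print(len(cellular_ues), "cellular UEs,", len(d2d_pairs), "D2D pairs")
```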
Considering that UEs inside the hotspot are very close to each other, and far away from most cellular UEs, D2D pairs are obtained by randomly pairing those UEs (transmitter and receiver). Following the previous example, with J_D2D = 4 D2D UEs there exist M = 2 D2D pairs per cell. LTE systems employ Orthogonal Frequency Division Multiple Access (OFDMA) for multi-UE transmissions in the downlink direction. As such, subcarriers are grouped in blocks of 12 adjacent subcarriers spaced by 15 kHz, which gives a total bandwidth of 180 kHz per block. The information is transported on that bandwidth over a slot that lasts 0.5 ms and seven Orthogonal Frequency Division Multiplexing (OFDM) symbols. This frequency-time block is designated a Physical Resource Block (PRB) and it is the minimum allocable radio resource unit. Nevertheless, for practical reasons, each scheduled UE takes two slots (a subframe). To this end, the scheduling happens at each Transmission Time Interval (TTI) (1 ms) on a PRB basis for the total number of N PRBs, which are fully reused in all cells (reuse-1). Moreover, the channel response for each PRB is represented by the complex channel coefficient associated with its middle subcarrier and first OFDM symbol, and the channel coherence bandwidth is assumed to be larger than the bandwidth of a single PRB, leading to a flat fading channel over each of them. Furthermore, the modeling of the complex channel coefficients includes the propagation effects on wireless channels, namely pathloss, shadowing, and fast fading for the urban-microcell environment [11], [13], [14]. Now, let us assume that t(m) and r(m) represent, respectively, the transmitter and receiver belonging to the D2D pair m ∈ {0, 1, . . ., M}, and that j ∈ {1, 2, . . ., J_CELL} denotes a cellular UE. Note that m = 0 represents the case where there are only cellular UEs. Also, in the multi-cell network, eNBs (likewise cells; in this work the terms eNB and cell are sometimes used interchangeably) are referred to by their index c ∈ {1, 2, . . ., C}. The total transmit power, P_UE from a UE and P_CELL from an eNB, is divided among the allocated PRBs. Hence, the transmit power per resource n ∈ {1, 2, . . ., N} for the cellular link is denoted p_c,n, and for the D2D link p_t(m),c,n; the cellular link's channel is represented by h_j,c,n ∈ C, while h_r(m),t(m),c,n ∈ C denotes the channel of D2D pair m within cell c at PRB n. The reported Channel State Information (CSI) from all involved UEs does not suffer any error or delay, i.e., the CSI knowledge is perfect. Admitting η^2 as the average power of the additive white Gaussian noise at the receivers in downlink, the Signal-to-Interference-plus-Noise Ratios (SINRs) γ^CELL_j,m,c,n and γ^D2D_j,m,c,n perceived, respectively, by the cellular UE j and the D2D receiver r(m) may be written as presented in (1):

γ^CELL_j,m,c,n = p_c,n |h_j,c,n|^2 / (η^2 + p_t(m),c,n |h_j,t(m),c,n|^2 + I_j,c,n),
γ^D2D_j,m,c,n = p_t(m),c,n |h_r(m),t(m),c,n|^2 / (η^2 + p_c,n |h_r(m),c,n|^2 + I_r(m),c,n), (1)

where the first interference term in each denominator is the intra-cell interference (from the D2D link at the cellular UE, and from the cellular link at the D2D receiver), and I_j,c,n and I_r(m),c,n collect the inter-cell interference from the cellular and D2D links of the neighboring cells. Table I has the description of the channels and transmit powers, and Fig. 4 illustrates them for a better understanding. For this work we restrict ourselves to the downlink communication direction, no power control is used, and a single omnidirectional antenna is installed in all eNBs and UEs. In the case of multiple antenna systems, the uplink direction, or allowing multiple D2D pairs to be grouped with a cellular UE, the required modifications in (1) shall be straightforward.
In a given TTI t (note that, whenever possible and to simplify notation, the index t is omitted from the following equations), and considering that the link adaptation function f_LA(·) selects the Modulation and Coding Scheme (MCS) which yields the maximum data rate [15], [16], the throughput of the cellular and D2D links is calculated as follows:

R^CELL_j,m,c,n = f_LA(γ^CELL_j,m,c,n), R^D2D_j,m,c,n = f_LA(γ^D2D_j,m,c,n). (2)

In the cellular scheduling without D2D communication, i.e., m = 0, a standard PF metric ψ^STD_j,0,c,n is chosen. Such scheduling is performed by estimating the instantaneous throughput R^CELL_j,0,c,n in each resource n and updating the average throughput T^CELL_j,0,c [17]; the low-pass filtered average throughput of UE j after transmission at TTI t + 1 is calculated in the following manner:

T^CELL_j,0,c(t + 1) = (1 − 1/Δ_PF) T^CELL_j,0,c(t) + (1/Δ_PF) Σ_n R^CELL_j,0,c,n(t), (3)

where R^CELL_j,0,c,n denotes the actual throughput achieved by UE j within cell c at PRB n and TTI t, and Δ_PF is the length of the exponentially weighted time window. Thus, resources are sequentially allocated to the UE j with the best PF metric, particularly

j* = argmax_j ψ^STD_j,0,c,n, with ψ^STD_j,0,c,n = R^CELL_j,0,c,n / max(T^CELL_j,0,c, R_min), (4)

where R_min > 0 denotes the instantaneous throughput provided by the lowest MCS per PRB in LTE systems. III. Joint Opportunistic Scheduling Problem The objective of the current section is to propose a unified optimization problem for the Joint Opportunistic Scheduling (JOS) of cellular and D2D communications. The JOS problem is designed so that it pursues the maximization of the PF ratio and is based on the same priority calculation performed by the PF metric in (4). However, the instantaneous cellular throughput is replaced by the total instantaneous throughput R^TOTAL_j,m,c,n, which is given as

R^TOTAL_j,m,c,n = R^CELL_j,m,c,n + R^D2D_j,m,c,n, (5)

where R^CELL_j,m,c,n and R^D2D_j,m,c,n are, respectively, the instantaneous throughput values of the group (j, m)_c,n (hereafter, the indexes c and n are omitted for notation simplicity) composed of the cellular UE j and the D2D pair m within cell c while sharing the same PRB n. Hence, the modified priority ψ^JOS_j,m,c,n of each group is calculated as follows:

ψ^JOS_j,m,c,n = R^TOTAL_j,m,c,n / max(T^CELL_j,0,c, R_min). (6)

The principle behind the standard PF in (4) is to schedule the UE j which has the largest ratio between the achievable instantaneous throughput and the average one. Looking at (6), the same principle applies as before, but for the group (j, m). Therefore, if for any reason the cellular UE experiences poor channel conditions but the D2D pair may achieve a large throughput, the group (j, m) is scheduled. In addition, when a cellular UE j is not scheduled for several TTIs, its average throughput T^CELL_j,0,c is low, which further increases (6). As such, the fairness among cellular UEs is still preserved. Defining an optimization binary variable x_j,m,c,n, the JOS problem can be formulated as

maximize Σ_j Σ_m x_j,m,c,n ψ^JOS_j,m,c,n, (7a)

subject to assignment and throughput constraints (7b)-(7e), the last of which applies per PRB n for groups with m > 0 (7e), where R_min denotes the instantaneous throughput provided by the lowest MCS per PRB in LTE systems. On the one hand, looking at (7a) and from (6), (5), (2), and (1), it can easily be observed that ψ^JOS_j,m,c,n is intrinsically related to the transmit power, which is the coupling element between the cellular scheduling, D2D grouping, and mode selection subproblems. Therefore, if x_j,m,c,n equals zero, i.e., the group (j, m) is not scheduled to transmit, then the power allocated to the channels h_j,c,n and h_r(m),t(m),c,n in (1) is also zero. On the other hand, the instantaneous throughput estimated by the scheduling policy needs to accurately reflect the achievable throughput in each radio resource [17]. However, such information is known only after power allocation is performed, and so may be unavailable to the scheduling policy [18]. To simplify this problem, Equal Power Allocation (EPA) is used. The full reuse of PRBs combined with EPA in the downlink direction permits the exact knowledge of the transmit power by the scheduling policy at the eNB (i.e., p_c,n = p_c',n = P_CELL/N). However, for D2D UEs, the number of resources is allocated opportunistically and, therefore, the transmit power (i.e., p_t(m),c,n and p_t(m'),c',n) is hard to predict. In such a case, a realistic prediction of the average transmit power per resource for each UE at TTI t + 1 is computed based on an exponential moving average, as in [18]:

p̂_t(m),c,n(t + 1) = (1 − 1/Δ_AVG) p̂_t(m),c,n(t) + (1/Δ_AVG) p_t(m),c,n(t), (8)

where P_UE/N ≤ p̂_t(m),c,n ≤ P_UE is the estimated average transmit power from the D2D transmitter t(m) within cell c at PRB n used by the scheduling policy, p_t(m),c,n is the transmit power effectively allocated, and Δ_AVG is the length in TTIs of the exponentially weighted time window. It is clear that (7) belongs to the family of Mixed-integer Nonlinear Programming (MINLP) problems, which combine the combinatorial difficulty of optimizing over discrete variable sets with the challenges of handling nonlinear functions [19], and is therefore a Nondeterministic Polynomial Time (NP)-hard combinatorial problem, which is intractable.
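Since (7) is intractable in general, a tiny brute-force sketch over a single cell and PRB helps to fix ideas about the metric in (6); the throughput and average-throughput values below are placeholders:

```python
# Brute-force JOS selection for one PRB, following (5)-(6): evaluate the
# modified PF metric of every group (j, m), with m = 0 meaning no sharing.
def jos_schedule(r_cell, r_d2d, t_avg, r_min=1.0):
    """r_cell[j][m], r_d2d[j][m]: instantaneous throughputs of group (j, m);
    t_avg[j]: average cellular throughput of UE j."""
    best, best_metric = None, -1.0
    for j in range(len(r_cell)):
        for m in range(len(r_cell[j])):              # m = 0: cellular-only mode
            r_total = r_cell[j][m] + (r_d2d[j][m] if m > 0 else 0.0)
            metric = r_total / max(t_avg[j], r_min)  # equation (6)
            if metric > best_metric:
                best, best_metric = (j, m), metric
    return best, best_metric

# Two cellular UEs and one D2D pair (columns: m = 0, m = 1), placeholder values
r_cell = [[10.0, 7.0], [6.0, 5.5]]
r_d2d = [[0.0, 12.0], [0.0, 2.0]]
t_avg = [8.0, 6.0]
print(jos_schedule(r_cell, r_d2d, t_avg))  # -> ((0, 1), 2.375)
```

The exhaustive enumeration over all (j, m) combinations is exactly what the JOAS heuristic of the next section seeks to avoid.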
On the other hand, the instantaneous throughput estimated by the scheduling policy needs to accurately reflect the achievable throughput in each radio resource [17]. However, such information is known only after power allocation is performed, which may be unavailable to the scheduling policy [18]. To simplify this problem, Equal Power Allocation (EPA) is used.

The full reuse of PRBs combined with EPA in the downlink direction permits exact knowledge of the transmit power by the scheduling policy at the eNB (i.e., p_{c,n} = p_{c′,n} = P_CELL/N). However, for D2D UEs, resources are allocated opportunistically and, therefore, the transmit power (i.e., p_{t(m),c,n} and p_{t(m′),c′,n}) is hard to predict. In such a case, a realistic prediction of the average transmit power per resource for each UE at TTI t + 1 is computed based on an exponential moving average, as in [18], yielding the estimate p̂_{t(m),c,n}, where P_UE/N ≤ p̂_{t(m),c,n} ≤ P_UE is the estimated average transmit power from D2D transmitter t(m) within cell c at PRB n used by the scheduling policy, p_{t(m),c,n} is the transmit power effectively allocated, and ∆_AVG is the length in TTIs of the exponentially weighted time window.

It is clear that (7) belongs to the family of Mixed-Integer Nonlinear Programming (MINLP) problems, which combine the combinatorial difficulty of optimizing over discrete variable sets with the challenges of handling nonlinear functions [19]; it is therefore a Nondeterministic Polynomial time (NP)-hard combinatorial problem, which is intractable.

IV. Joint Opportunistic Assignment and Scheduling Heuristic

In this section, a heuristic called JOAS is elaborated as a low-complexity solution for the JOS problem stated in (7). JOAS runs within each cell on a PRB basis, independently from other cells. For notation simplicity, the indexes c and n are therefore omitted from the following equations.

Based on the principle of multi-UE scheduling, the main idea of the JOAS heuristic is to opportunistically improve the total throughput of cellular and D2D communications. In order to avoid testing all combinations of cellular UEs and D2D pairs, the proposed method allows the processing of the candidate groups most likely to be scheduled in each PRB. For this to happen, the heuristic is divided into two main steps (see Fig. 5):
• Using a certain metric to measure the spatial compatibility among cellular UEs and D2D pairs, the assignment selects the most spatially compatible D2D pair for each cellular UE. Thus, it enables the processing for scheduling of only a reduced set among all the possible candidate groups that would be considered by an Exhaustive Search (ES);
• The scheduling, which occurs jointly and opportunistically, prioritizes groups that were assigned by the previous step whenever it is possible to increase the total throughput. Hence, an implicit mode selection happens, i.e., either the scheduling of a group (j, m), hereafter referred to as mode 2, or the scheduling of just a cellular UE j, without resource sharing, denoted as mode 1, when the UE alone achieves a larger throughput.

Furthermore, some optional mechanisms are also designed to reduce the processing complexity by pre-selecting UEs and to protect cellular communications:
• The pre-selection schemes, namely the pre-assignment and pre-scheduling schemes, avoid a full wide search over the unlikely scheduling candidates [20]. They reduce the processing complexity of both the assignment and scheduling steps by selecting only the cellular UEs and groups most likely to be scheduled;
• The protection mechanism avoids the selection of D2D pairs that would highly harm the performance of cellular communications, by focusing on the total throughput and interference coordination.

A. Assignment

As said before, the assignment step is designed to avoid the increasing complexity in terms of throughput calculations for the scheduling step. So, the assignment selects the D2D pair most likely to be scheduled with each cellular UE. Herein, an assignment metric φ_{j,m} that measures the spatial compatibility of each cellular UE j and D2D pair m is employed. Therefore, the most compatible D2D pair for the cellular UE j is assigned according to (9).

Different link measurements can be used for the assignment metric. These measurements try to capture the effects related to spectral efficiency gain and interference reduction. Also, depending on the measurement, the assignment may be done on a slow time basis (e.g., seconds) and at once for all PRBs (i.e., the whole bandwidth). To avoid increasing the computational complexity and the overhead of reporting measurements to the eNB, the assignment metric shall be as simple as possible.

As the desired and interfering cellular links are assumed to be known at the eNB, and the D2D link is presumed to be known by the communicating UEs, only the D2D interfering link at the cellular UE is difficult to obtain. In this case, e.g., a minimum reference power used by the D2D discovery procedure over that link [21] can be used to infer the channel h_{j,t(m),c,n} in Fig. 4. The assignment metrics are detailed in the following, and the measurements required by each of them are indicated in Table II, whereas the channels are described in Table I (apart from cell and PRB indexes); ẑ_j and ẑ_{r(m)} are, respectively, the interference estimates at the cellular UE j and the D2D receiver r(m).

1) Random (RND): The RND metric randomly selects a D2D pair such that each pair within the cell has equal chances to be scheduled [6]. It configures a scenario with non-network-assisted D2D communications, which does not take into account any link measurement and, therefore, disregards the spatial compatibility among cellular UEs and D2D pairs. This metric is employed only to obtain a lower bound on the performance of the assignment step.
2) Signal-to-Leakage-plus-Noise Ratio (SLNR): The SLNR is based on the ratio of the desired signal power of a D2D communication to its undesired leakage. Measurements of the channel gain and of the generated interference (leakage) are closely related to good and bad spatial conditions for sharing resources. Thus, this metric differentiates between D2D transmitters (interferers) and cellular receivers to measure how good the geographical separation between them is. The SLNR metric is given in (10), where |h_{r(m),t(m)}|² is the channel gain between the transmitter and receiver of D2D pair m, p_{t(m)} is the transmit power from D2D transmitter t(m), and |h_{j,t(m)}|² is the interfering channel gain between D2D transmitter t(m) and cellular UE j, whereby |h_{j,t(m)}|²·p_{t(m)} is the leakage.

3) Exhaustive Search (ES): The ES metric (strictly speaking, ES should not be considered a metric, because it just searches over the space of all possible solutions and selects the best one) estimates the achievable throughput of all possible (j, m) groups and assigns to the cellular UE j the D2D pair m that leads to the highest modified PF priority, expressed in (6). The ES metric is given by φ_{j,m} = ψ^JOS_{j,m} (11).

The time complexity (number of comparisons) of both (11) and (10) is expected to be very similar. However, the computational complexity of the ES metric shall be higher, especially in situations of high UE density, since it depends on (6), (5), (2), and (1). This solution is not necessarily optimal and is only used as an upper bound on the performance of the assignment step.

B. Scheduling

The scheduling step performs a joint opportunistic processing of the estimated instantaneous throughput values of both cellular and D2D links to obtain improved total gains. Each radio resource is mandatorily allocated to one cellular UE, while its sharing with a D2D pair depends on the total throughput. This step has the flexibility of avoiding the scheduling of a D2D communication when it does not provide additional gain to the total throughput, thus performing mode selection. The scheduling follows (12), where m = 0 represents the mode 1 case, which schedules the cellular UE j through the standard PF metric (4b), and m ≠ 0 represents the mode 2 case, which schedules the cellular UE j and D2D pair m through the JOS metric (6); a sketch of both the assignment and scheduling steps is given after the remarks below.

Furthermore, some additional comments on scheduling are worth highlighting:
• Groups compete for resources by using the JOS metric, whose historical average throughput values are based only on the cellular throughput. So, as explained in Section III, the fairness obtained by (12) is still guaranteed for cellular UEs, just as by the standard PF in (4);
• Cellular UEs in groups with low spatial compatibility tend to be scheduled in mode 1 because of the low total throughput in mode 2;
• In mode 2, cellular UEs always have their instantaneous throughput reduced due to the intra-cell interference from the D2D link. However, under favorable conditions for resource sharing, i.e., with a large total instantaneous throughput, they still have high priority to be selected in subsequent scheduling rounds because of the slow increase of the historical throughput compared to mode 1.
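To illustrate the two steps just described, the sketch below first assigns to each cellular UE the D2D pair maximizing an SLNR-style metric and then performs the mode selection of (12) by comparing the standard PF and JOS priorities. The noise term in the SLNR denominator and all names are our own assumptions for the sketch, not definitions taken from the paper.

# Sketch of the JOAS assignment and scheduling steps on one PRB.
# Assumes the SLNR form |h_d2d|^2 * p / (|h_leak|^2 * p + noise) and
# the mode selection of (12); all names are illustrative.
from typing import Dict, Tuple

def slnr(h_d2d_gain: float, h_leak_gain: float, p_tx: float,
         noise: float) -> float:
    """SLNR-style assignment metric: desired D2D power over leakage
    toward the cellular UE plus noise (the noise term is an assumption)."""
    return (h_d2d_gain * p_tx) / (h_leak_gain * p_tx + noise)

def assign_pairs(phi: Dict[Tuple[int, int], float],
                 ues: list, pairs: list) -> Dict[int, int]:
    """Assignment step (9): pick, for each cellular UE j, the D2D
    pair m maximizing the assignment metric phi[(j, m)]."""
    return {j: max(pairs, key=lambda m: phi[(j, m)]) for j in ues}

def schedule(psi_std: Dict[int, float],
             psi_jos: Dict[Tuple[int, int], float],
             assigned: Dict[int, int]) -> Tuple[int, int]:
    """Scheduling step (12): compare mode 1 (cellular only, m = 0)
    against mode 2 (group with the assigned pair) and return the
    winning (UE, pair) tuple, where pair 0 means mode 1."""
    best, best_prio = (0, 0), float("-inf")
    for j, m in assigned.items():
        for cand, prio in (((j, 0), psi_std[j]), ((j, m), psi_jos[(j, m)])):
            if prio > best_prio:
                best, best_prio = cand, prio
    return best

# Example: UE 1 paired with D2D pair 2 wins in mode 2.
phi = {(1, 1): 2.0, (1, 2): 3.5, (2, 1): 1.0, (2, 2): 0.5}
assigned = assign_pairs(phi, ues=[1, 2], pairs=[1, 2])  # {1: 2, 2: 1}
print(schedule({1: 1.2, 2: 0.8}, {(1, 2): 2.5, (2, 1): 0.6}, assigned))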
C. Pre-selection Schemes

The pre-selection schemes alleviate concerns with the extensive processing in scenarios with high UE density through a controlled trade-off between capacity and complexity [20]. Herein, pre-selection schemes are applied to reduce the complexity of the assignment and scheduling steps. While the pre-assignment scheme allows the assignment processing only for those cellular UEs most likely to be scheduled, the pre-scheduling scheme allows the scheduling processing only for the groups most likely to be scheduled. In the following, both pre-selection schemes are detailed.

1) Pre-assignment: It reduces the complexity of the assignment step, and consequently of the scheduling step, by picking up only a fraction F_A of those UEs most likely to be scheduled in mode 1. Therefore, the assignment step is applied only to a subset A ⊂ J_CELL of the total set J_CELL of cellular UEs within the cell, with |A| = F_A·|J_CELL|, where |·| is the cardinality operator (recall that |J_CELL| = J_CELL) and F_A is a fraction of the cellular UEs with the highest standard PF priority values.

2) Pre-scheduling: The complexity of the scheduling step is reduced by allowing total throughput calculations only for a fraction F_S of the most spatially compatible groups, which are likely to be scheduled in mode 2. From the groups G provided by the assignment step, total throughput calculations are applied only to a subset S ⊂ G of groups, with |S| = F_S·|G|, where F_S is a fraction of the groups with the highest assignment metric values.

D. Protection Mechanism

As the total throughput may hide a low cellular throughput behind a high D2D throughput, a protection mechanism is designed to prevent the negative impact of excessive interference from the D2D link on the performance of cellular communications. It protects the cellular UEs by ensuring a minimum throughput requirement for cellular communications. Since it is performed before the scheduling step, mode 2 can be avoided when the loss in the instantaneous throughput of a cellular UE, due to the interference generated by its assigned D2D pair, is higher than the maximum allowable one.

Let f_{j,m} denote the percentage throughput loss of a cellular UE j due to the impact of its assigned D2D pair m, which is given in (13), where R^CELL_{j,0} is the instantaneous throughput of the cellular UE j in mode 1 and R^CELL_{j,m} in mode 2. As such, the protection mechanism switches off a candidate group by using a reversed step function to nullify its JOS priority (thus avoiding mode 2), as in (14), where F_I denotes the maximum admissible percentage throughput loss of a cellular communication when D2D communications are enabled for sharing resources, compared to a baseline scenario with only cellular communications (F_I = 0 %).

Power control methods at D2D transmitters for interference mitigation would be desirable to minimize the impact on cellular communications. However, such methods are out of the scope of this work, since they would mask the benefits of JOAS.
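As a minimal sketch of the protection mechanism, assume the loss in (13) takes the natural relative form (R_mode1 − R_mode2)/R_mode1 and that the reversed step function in (14) zeroes the JOS priority whenever the loss exceeds F_I; both forms are our reading of the description above, and the names are illustrative.

# Sketch of the protection mechanism (13)-(14): nullify the JOS
# priority of a group whose cellular throughput loss exceeds F_I.
# The exact form of (13) is an assumption based on the text.

def throughput_loss(r_cell_mode1: float, r_cell_mode2: float) -> float:
    """Percentage throughput loss f of a cellular UE when its assigned
    D2D pair shares the PRB (assumed form of (13))."""
    return (r_cell_mode1 - r_cell_mode2) / r_cell_mode1

def protect(psi_jos: float, r_cell_mode1: float, r_cell_mode2: float,
            f_i: float) -> float:
    """Reversed step function of (14): keep the JOS priority only if
    the loss stays within the admissible impact factor f_i."""
    loss = throughput_loss(r_cell_mode1, r_cell_mode2)
    return psi_jos if loss <= f_i else 0.0

# Example: with F_I = 5 %, a group costing the cellular UE 10 % of its
# mode-1 throughput is switched off (priority forced to zero).
print(protect(psi_jos=3.2, r_cell_mode1=1.0e6, r_cell_mode2=0.9e6, f_i=0.05))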
V. Traditional RRM Schemes

For comparison with JOAS, in this section the traditional RRM schemes used to enable the sharing of radio resources between cellular and D2D communications are explained. Usually, such schemes divide the JOS problem (7) into cellular scheduling, D2D grouping, and mode selection subproblems. An algorithmic description of these procedures is provided in Fig. 6 and detailed as follows:
1) The standard PF scheduler selects the primary cellular UE j* according to the priority ψ^STD_{j,0} defined in (4);
2) The D2D grouping scheme assigns the most spatially compatible D2D pair m* to the primary cellular UE j* by using the assignment metric in (9);
3) The mode selection algorithm only allows D2D communications when the total throughput in mode 2 is higher than in mode 1. Moreover, the cellular throughput is also protected, such that it must be higher than zero.

1: for each TTI t, cell c, and PRB n do
2:   Schedule the cellular UE j* with the largest PF metric ψ^STD_{j,0}, using (4)
3:   Group the D2D pair m* that has the largest assignment metric φ_{j*,m} with the primary cellular UE j*, applying (9)
4:   if R^TOTAL_{j*,m*} < R^CELL_{j*,0} then
5:     Remove the D2D pair m*
6:   end if
7: end for
Fig. 6. RRM schemes: cellular scheduling (line 2), D2D grouping (line 3), and mode selection (lines 4 to 6).

In the following, GRP refers to D2D grouping and GRP+MS to D2D grouping with mode selection.

VI. Results and Analysis

This section provides a performance assessment of the JOAS heuristic in the multi-cell scenario through system-level simulations aligned with LTE systems [11], [13], [14], as described in Section II. The main simulation parameters are given in Table III. Note that when the number of UEs in the system increases, the dimension (complexity) of the problem also increases, scaling exponentially, which brings computational issues. Therefore, the maximum number of UEs per cell is fixed at 16.

Fig. 7 presents the total system spectral efficiency achieved by the JOAS heuristic in comparison with the traditional RRM schemes for several numbers of UEs per cell. As shown, the JOAS heuristic is able to exceed the performance improvement achieved by both the GRP and GRP+MS schemes for any UE density. As an example, with 16 UEs per cell, the total throughput gain is about 9 %. For 4 UEs per cell, there is only one D2D pair in each cell. Hence, a spatially incompatible D2D pair might be forcibly assigned to the primary cellular UE by the GRP scheme, and to cellular UEs by the assignment step of the JOAS heuristic. The GRP+MS scheme has the flexibility of avoiding these harmful situations. However, it was not able to provide total performance improvements. In its turn, the improvement achieved by the JOAS heuristic is due to the prioritization, by the scheduling step, of cellular UEs which are in better conditions for sharing resources. The benefits of resource sharing are promising in terms of the total system spectral efficiency, especially for high UE densities. But there is a trade-off between the gains of enabling D2D communications and the impact on cellular communications.

Table IV presents the number of occurrences (in percentage terms) of D2D communications and Jain's index of fairness for cellular communications, which is defined as Π = (Σ_{j=1}^{J} x_j)² / (J · Σ_{j=1}^{J} x_j²), with x_j = R^CELL_{j,0,c,n} [22]. The higher number of D2D communications achieved by the GRP scheme greatly impacts the fairness of cellular communications (compared to the PF in the conventional scenario), which happens due to the strong intra-cell interference induced by mode 2.
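Jain's index as written above translates directly into code; the short sketch below follows the standard definition, with illustrative names.

# Jain's fairness index over per-UE cellular throughputs:
# Pi = (sum_j x_j)^2 / (J * sum_j x_j^2), as in [22].

def jain_index(x: list) -> float:
    """Returns 1.0 for perfectly equal throughputs and 1/J in the most
    unfair case, where a single UE gets all the throughput."""
    j = len(x)
    return sum(x) ** 2 / (j * sum(v * v for v in x))

print(jain_index([1.0, 1.0, 1.0, 1.0]))  # 1.0 (perfect fairness)
print(jain_index([4.0, 0.0, 0.0, 0.0]))  # 0.25 = 1/J

By construction, Π ranges from 1/J to 1, which is why the strong mode-2 interference of GRP shows up as a lower index in Table IV.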
As mentioned in Section I, D2D links should not be mandatorily assigned in every radio resource, which is the case with GRP. In this context, the GRP+MS scheme improves the performance of cellular communications by reducing the occurrences of D2D communications when it is not possible to achieve any gain in the total throughput. In its turn, the JOAS heuristic increases the occurrences of D2D communications and keeps the fairness of cellular communications, in comparison with the GRP+MS scheme. These results indicate the potential benefits brought by the JOAS heuristic.

Fig. 8 presents the performance of cellular and D2D communications achieved by the JOAS heuristic and the traditional RRM schemes. The GRP and GRP+MS schemes present almost the same total spectral efficiency. However, the impact on cellular communications is clearly higher with the GRP scheme because it does not consider the cellular throughput information. On the other hand, such impact is reduced with the GRP+MS scheme, so that the D2D throughput is reduced and the cellular one is improved. Moreover, the JOAS heuristic allows a higher impact on the throughput of cellular communications than the GRP+MS scheme in order to achieve a higher total throughput gain (recall that this is the main objective of the JOS problem), while maintaining the fairness of cellular communications, as can be seen in Table IV.

Fig. 9 illustrates the total system spectral efficiency gains considering different RRM schemes and assignment metrics. The scenario in which D2D communications occur in hotspot zones near the cell edge is the most favorable one for resource sharing. Even so, the GRP scheme is not effective when using an assignment metric which poorly measures the spatial compatibility among UEs, such as the RND metric. Indeed, all the reliability of GRP rests on the efficiency of the assignment metric (note that the gap between the GRP and GRP+MS schemes shrinks for the ES case). On the other hand, the scheduling step of the JOAS heuristic is able to avoid the scheduling of D2D pairs inefficiently assigned by the RND assignment metric, which would highly harm the total throughput, giving the largest gap between JOAS and the traditional RRM schemes among the three assignment metrics. In summary, while the JOAS heuristic improves the performance obtained with the GRP+MS scheme by 10 % for the ES metric, the improvement is 26 % for the RND metric.

Additionally, it is worth noticing that the simpler SLNR metric provides results similar to those of the more computationally demanding ES metric, which makes the former preferable for implementation in real networks.

In order to diminish the processing complexity of the JOAS heuristic, pre-selection schemes are used to reduce the number of D2D pairs and groups evaluated by the assignment and scheduling steps, respectively.
Fig. 10 shows the total system spectral efficiency of the pre-selection scheme when varying the pre-assignment factor F_A for the three considered assignment metrics. Inspecting the figure, even when a reduced number of cellular UEs is processed by the JOAS heuristic, e.g., F_A = 50 % of the total (i.e., only 8 among the 16 UEs in the cell), there is no performance loss (in comparison with F_A = 100 %) for any of the assignment metrics. This means that the standard PF priority in cellular mode is efficient in pre-selecting the UEs most likely to be scheduled. Furthermore, for F_A = 25 % and the SLNR metric, the pre-assignment scheme reduces the probability of occurrence of D2D communications, with some improvement for cellular communications, while the total performance is kept almost constant. However, inefficient assignment metrics such as the RND metric tend to be more sensitive to pre-selection for very low pre-assignment factor values.

Fig. 11 presents the total system spectral efficiency of the pre-selection scheme when varying the pre-scheduling factor F_S for the three considered assignment metrics. In the case of the ES metric, the reduction in the number of processed groups does not impact the overall performance because this metric captures very well the group with the maximum priority in mode 2. On the other hand, as the RND metric is already limited because it does not take any spatial compatibility into account in the assignment step, its total performance is further reduced when pre-scheduling limits the number of evaluated UEs. Finally, the SLNR metric, which does not capture the spatial compatibility as fully as the ES metric, presents a considerable impact on the total performance, but only for pre-scheduling factor values lower than F_S = 50 %.

As observed in all results, D2D links always impact the performance of cellular communications, even in favorable conditions. In the following, the protection mechanism for cellular communications described in Section IV-D, which prevents excessive intra-cell interference from D2D communications, is evaluated. Fig. 12 provides the system spectral efficiency gains for cellular and D2D communications for different admissible impact factor values F_I. Analyzing the figure, it may be seen that high total throughput gains are obtained with a minimum impact on the performance of cellular communications, as for F_I = 5 %. Even when a group achieves low values of system spectral efficiency and the impact on the estimated throughput of the cellular UE is small, the scheduling of the assigned D2D pair is avoided. Therefore, only D2D pairs in very good conditions for sharing resources can be scheduled.

Table V shows the occurrences of D2D communications, the cellular throughput losses, and the total throughput gains for low and high impact factor values, for 16 UEs per cell (losses and gains are measured in comparison with the baseline scenario, as in Fig. 12). As can be seen, for F_I = 20 %, the JOAS heuristic presents the same number of D2D occurrences as the GRP+MS algorithm, but with a lower cellular throughput loss and a higher total throughput gain, which demonstrates its effectiveness.

VII. Conclusions

The two main objectives of this work may be summarized as: 1) proposing a unified optimization problem for the JOS of cellular and D2D communications; 2) elaborating a suboptimal JOAS heuristic for the JOS problem.
The JOS problem is an NP-hard combinatorial problem and, therefore, intractable, which makes low-complexity solutions necessary. In this way, the JOAS heuristic was designed to jointly and opportunistically schedule both cellular and D2D communications in the same radio resource, exploiting the spatial compatibility among UEs. Results showed that fairness, spectrum reuse, and the overall system capacity were improved with the JOAS heuristic in comparison with the traditional RRM schemes.

While the reliability of the GRP scheme is intrinsically related to the efficiency of the assignment metric, and mode selection (the GRP+MS scheme) may still avoid the scheduling of a group in bad spatial conditions for sharing resources, the JOAS heuristic is less dependent on such a metric. For example, the JOAS heuristic improved the performance obtained with the GRP+MS scheme by 10 % when using a good assignment metric, but the gain was about 26 % for an unreliable metric. Indeed, the good performance of the JOAS heuristic is more related to the exploitation of the spatial organization of the UEs, so that cellular UEs which are in very good resource-sharing situations are opportunistically prioritized.

Furthermore, the JOAS heuristic achieved a considerable complexity reduction with minimum impact on the total throughput by pre-selecting the UEs and/or groups that are most likely to be scheduled. Also, to control the trade-off between the performance of cellular and D2D communications, it was possible to limit the number of occurrences of D2D links, so that they only happened when there were good conditions for sharing resources, i.e., when the impact on cellular communications was minimal but the overall performance was high.

As future perspectives, the JOS problem may be investigated in more challenging scenarios: when multiple D2D communications of the same or of different hotspots are allowed per PRB in the same cell, or when relays (i.e., more than one hop) are used [23], multiple layers need to be processed and the scheduling problem tends to be more complex. Hence, the cooperation among cells of the same site, such as in Coordinated Multipoint (CoMP) systems, may be considered to jointly schedule multiple D2D pairs and the cellular UE.

Fig. 1. An example of cellular spectrum sharing with D2D communications within a cell in downlink. The red (r) cellular UE is not in a good spatial condition for sharing resources with the blue (b) and green (g) D2D UEs because it may perceive strong intra-cell interference, but the blue and green cellular UEs may potentially share resources with the D2D pairs of the same color.

Fig. 3. Proposed JOS problem for the joint scheduling of cellular and D2D communications.
Table I. Description of channels and transmit powers in (1):
h_{j,c,n}: desired channel between eNB c and UE j at PRB n
h_{j,t(m),c,n}: interfering channel between D2D transmitter t(m) within cell c and UE j at PRB n
h_{j,c′,n}: interfering channel between eNB c′ and UE j at PRB n
h_{j,t(m′),c′,n}: interfering channel between D2D transmitter t(m′) within cell c′ and UE j at PRB n
h_{r(m),t(m),c,n}: desired channel between D2D transmitter t(m) and receiver r(m) within cell c at PRB n
h_{r(m),c,n}: interfering channel between eNB c and receiver r(m) at PRB n
h_{r(m),c′,n}: interfering channel between eNB c′ and receiver r(m) at PRB n
h_{r(m),t(m′),c′,n}: interfering channel between D2D transmitter t(m′) within cell c′ and receiver r(m) at PRB n
p_{c,n}: transmit power from eNB c at PRB n
p_{t(m),c,n}: transmit power from D2D transmitter t(m) within cell c at PRB n
p_{c′,n}: transmit power from eNB c′ at PRB n
p_{t(m′),c′,n}: transmit power from D2D transmitter t(m′) within cell c′ at PRB n

Fig. 4. Illustration of the channels and transmit powers in (1); a triangle represents an eNB, a circle a cellular UE, and a square a D2D UE. A double line (circle or square) indicates a receiver. Moreover, solid lines are desired links, while dashed lines represent interfering links.

Fig. 7. Total system spectral efficiency of the RRM schemes employing the ES assignment metric. The PF scheduling provides the performance in the conventional scenario without D2D communications underlaying the cellular network, while all other schemes have D2D communications enabled, such that, for example, considering J = 16 UEs/cell, there exist M = 4 D2D pairs within each cell, because the percentage of D2D UEs is 50 %.

Fig. 8. Total system spectral efficiency of cellular and D2D communications for different RRM schemes using ES as the assignment metric, for 16 UEs/cell.

Fig. 10. Total system spectral efficiency of the pre-selection scheme when varying the pre-assignment factor F_A [%] and the assignment metric for the JOAS heuristic, considering 16 UEs/cell.

Fig. 11. Total system spectral efficiency of the pre-selection scheme when varying the pre-scheduling factor F_S [%] and the assignment metric for the JOAS heuristic, considering 16 UEs/cell.

Fig. 12. Analysis of the protection mechanism for cellular communications when varying the impact factor F_I [%] and using the ES assignment metric for the JOAS heuristic, considering 16 UEs/cell. F_I = 0 % represents the baseline scenario, i.e., without any D2D communication.

Acknowledgment
This work is supported by the Innovation Center, Ericsson Telecomunicações S.A., Brazil, under the EDB/UFC.33 and EDB/UFC.40 Technical Cooperation Contracts. Also, Carlos Silva and José Mairton would like to acknowledge Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for the financial support. Likewise, Tarcisio Maciel would like to acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for its financial support under grants 426385/2016-0 and 308398/2015-7.
10,046.4
2017-08-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Observation of the Mating Behavior of Honey Bee (Apis mellifera L.) Queens Using Radio-Frequency Identification (RFID): Factors Influencing the Duration and Frequency of Nuptial Flights

We used radio-frequency identification (RFID) to record the duration and frequency of nuptial flights of honey bee queens (Apis mellifera carnica) at two mainland mating apiaries. We investigated the effect of a number of factors on flight duration and frequency: mating apiary, number of drone colonies, queen's age and temperature. We found significant differences between the two locations concerning the number of flights on the first three days. We also observed an effect of the ambient temperature, with queens flying less often but longer at high temperatures compared to lower temperatures. Increasing the number of drone colonies from 33 to 80 colonies had no effect on the duration or on the frequency of nuptial flights. Since our results agree well with the results of previous studies, we suggest RFID as an appropriate tool to investigate the mating behavior of honey bee queens.

Introduction

Honey bee (genus Apis/Apis mellifera) queens are polyandrous and mate only during one period in their life [1-5]. Usually within 1 week after emergence [6,7], mating takes place in free flight, supposedly at mating leks that are also known as drone congregation areas (DCA) [6,8,9]. At these sites, which have been shown to be temporally stable [10,11], between 8,000 and 15,000 drones can be present during specific times of the day [12,13]. Usually, queens perform a small number of short flights of a few minutes for orientation prior to true nuptial flights, which are usually longer [14-16]. A queen can mate several times during a single flight [17-19], but queens often perform several consecutive nuptial flights on the same day or on different days [14,17,20-22]. Between one and six flights over a couple of days have been reported [14,21,23-25]. The number of drones a queen mates with ranges between 6 and 26, with an average of 12-14 [26-28]. The number of spermatozoa in the spermatheca [20,29] and the number of matings itself [17] have been hypothesized to serve as a signal for undertaking an additional flight or for starting egg laying. In contrast, Tarpy and Page [30] state that queens have no control over the number of times they mate and that the high mating frequency of honey bees is simply a stochastic by-product of mating behavior and mate availability. Several factors have been identified to influence the mating behavior of honey bee queens, most prominently among them the age of the queen [16], but also environmental conditions such as temperature, wind, and cloud cover [23,31-33]. The number of available drones within the flight range of a queen has also been discussed as an important variable influencing the number and duration of individual nuptial flights [14]. Since mating is fatal to the males, which leave part of their endophallus in the sting chamber of the queen, the mating success of a flight can often be determined upon return of a queen to her colony [34,35]. The mating success and the number of mating partners of a given queen can be assessed via indirect methods such as counting spermatozoa in the spermatheca [14,18,20] or determining the number of patrilines by microsatellite analysis [26,28,36].
In contrast, the mating behavior itself, taking place in free flight, remains hardly accessible to observation, despite the development and availability of advanced methods and techniques for its study. While early reports were based upon direct observation of free flying or tethered queens [37-40], subsequent studies used wooden queen dummies or pheromone-treated preserved queens fixed to a rotating beam on a mast to stimulate and study the mating behavior of flying drones [34,41]. The flight activity and congregation behavior of drones have also been extensively studied using pheromone traps [42] attached to helium-filled balloons [13,43,44] and by X-band radar [45-47]. While sexually mature drones fly frequently and continuously until they mate [48], an individual queen performs comparatively few flights [23,24]. The study of queens' behavior requires continuous observation of the flight entrance of the respective mating nuc(s) and/or the use of a queen excluder to prevent an unobserved departure. Although technical means to monitor the departure and arrival of queens, such as photoelectronic control, were developed comparatively early [49], the standard method still remains direct observation of the nuc entrance combined with a queen excluder. This method limits the number of nucs that can be observed simultaneously, and it may also interfere with the behavior of the queen [23,33], since she has to wait until the observer removes the excluder, both at departure and return. Recently, radio-frequency identification (RFID) has been successfully used in Hymenoptera to investigate social interactions in ants [50], nest-drifting behavior in wasps [51], individual activity of bumblebees [52], and the effect of insecticides on the foraging behavior of A. mellifera workers [53,54]. To evaluate the suitability of this technique for the study of the mating flight behavior of honey bee queens, we labeled queens (Apis mellifera carnica) with RFID transponders to record their departure from and return to their mating nucs, which were equipped with reader modules. In this paper, we report results on the duration and frequency of queen nuptial flights from RFID data. We analyze the flight behavior of our experimental queens in relation to their age and to environmental variables such as the location of the mating apiary and the ambient temperature. In addition, we manipulated the number of available drones in the vicinity of the mating apiaries to study the effects on queen flight duration and frequency.

Field Work

The fieldwork was conducted at two mating apiaries situated in the rural district 'Ilm-Kreis' in Middle-Thuringia, Germany (Figure 1). One apiary is located near Gehlberg (50°40'57'' N, 10°46'30'' E, altitude 605 m) and the second one near Oberhof (50°42'36'' N, 10°43'50'' E, altitude 798 m). The distance between the two mating apiaries is 4.4 km. Each mating apiary is protected by an area of 154 km² (a radius of 7 km around each station) where, according to regional law, beekeepers are allowed to keep only colonies headed by queens provided by the respective mating apiary. During the whole experiment, 13 drone colonies (Gehlberg colonies) were situated 2.3 km to the north-east of the nucs with virgin queens at Gehlberg, and 20 drone colonies (Oberhof colonies) were located close to the nucs with virgin queens at Oberhof (Figure 1).
In calendar weeks 24 and 27 of 2011 (Table 1), we placed two trailers with 47 additional drone colonies next to the queens at mating apiary Gehlberg (about 100 m, Figure 1). These additional colonies were removed again before new groups of queens were placed at the mating apiaries. (Table 1: number of drone colonies per location; Gehlberg 13, or 13 + 47 in the weeks with additional colonies, and Oberhof 20.)

In each experiment week (Table 1), we placed eight nucs with virgin sister queens at each mating apiary. These queens were individually marked with radio-frequency identification (RFID) transponders (mic3-TAG 64RO, carrier frequency: 13.56 MHz, Microsensys GmbH, Erfurt, Germany). The dimensions of the tags were 1 × 1.6 × 0.5 mm, with a weight of 2.4 mg. A reader module (iID MAJA reader module 4.1, Microsensys GmbH, Erfurt, Germany) connected to a host (iID HOST type MAJA 4.1, Microsensys GmbH, Erfurt, Germany) was attached to the entrance of each mating nuc. Every time a tagged queen was leaving or entering the mating nuc, the RFID tag of the queen was registered by the reader module, and the date, time, reader ID and tag ID number were stored on the host. The data were downloaded from the host and processed manually using Microsoft Excel 2010. Climate data from the meteorological station Schmücke were used to determine the impact of temperature on nuptial flight duration and flight frequency.

Data Analysis

The number and duration of the flights of each queen were calculated from the RFID readings. We excluded flights of less than three minutes or more than 60 min from the analysis. We counted flights of less than three minutes as orientation flights rather than as nuptial flights [14]. Nuptial flights of more than one hour duration also appear unlikely [23]. Occasionally, queens do not enter their mating nuc immediately after returning, but instead cluster together with worker bees in front of the entrance (own observation).

In the four weeks of the experiment, it was not always possible to provide queens of the same age. Therefore, and to avoid small sample sizes, queens were pooled into three categories in regard to their age when they left their nucs for a flight (Table 2). Likewise, since not all queens flew an equal number of times, the ranks of flights were also pooled into four categories (Table 2). The groups of queens in different experimental weeks differed in the number of days they were able to fly (due to environmental reasons such as weather, or due to reasons connected to experiment logistics). For this reason, only the flights of the first three days of all queens were used for the analysis of flight frequency. The mean temperature for these first three flying days was calculated from the hourly measurements between 12:00 and 17:00.

Table 2. Grouping of the data concerning the queen's age and the sequence of flights. (All queens were kept in a dark room at 15 °C until they were brought to the mating stations.)

We analyzed our data using univariate general linear models (GLM). For the comparison of two or more samples, we used Mann-Whitney tests and Kruskal-Wallis tests (since the data were not normally distributed), respectively. P values of multiple pairwise tests were obtained after a sequential Bonferroni adjustment [55]. We performed all analyses using the statistics software package SPSS 19.0.0 (IBM).
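As an illustration of the flight-duration computation described above, the sketch below pairs consecutive departure and return readings per queen and applies the 3-60 min filter; it assumes the readings alternate cleanly between exits and entries, an idealization that, as discussed later, single-antenna readers do not guarantee.

# Sketch of deriving nuptial-flight durations from RFID reader
# timestamps and applying the 3-60 min filter used in the analysis.
# Assumes alternating exit/entry readings per queen (an idealization).
from datetime import datetime, timedelta

def flight_durations(readings: list) -> list:
    """Pair consecutive readings as (departure, return) and keep
    only durations between 3 and 60 minutes as nuptial flights."""
    flights = []
    for out, back in zip(readings[0::2], readings[1::2]):
        duration = back - out
        if timedelta(minutes=3) <= duration <= timedelta(minutes=60):
            flights.append(duration)
    return flights

reads = [datetime(2011, 6, 15, 13, 2), datetime(2011, 6, 15, 13, 20),
         datetime(2011, 6, 15, 14, 0), datetime(2011, 6, 15, 14, 1)]
print(flight_durations(reads))  # first flight kept (18 min), second dropped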
Survival Rates and General Flight Behavior of the Queens

From the total of 64 queens observed, 11 (17.19%) did not return to their mating nucs (Table 3). Three of the lost queens tried to enter a foreign mating nuc, and one of them was recorded in the wrong entrance on two consecutive days. Two queens (3.13%) were not recorded at all during the whole experiment; however, they started egg laying afterwards. On average, the queens left their nucs on 2.20 ± 0.98 flying days (min. 1; max. 5), with most of the nuptial flights (82.49%) taking place between 13:00 and 16:00 h (Figure 2). The earliest and latest departures were recorded at 11:50 and 17:38 h, respectively. The average total number of recorded flights per queen was 5.04 ± 3.11 (min. 1; max. 16), with a maximum of seven flights of one queen on one day. The mean daily number of nuptial flights per queen was 2.30 ± 1.35, with a mean duration of 17.69 ± 13.19 min (min. 3.08; max. 57.07; Figure 3).

Table 3. Number of queens with recorded nuptial flights, number of mated queens (queens which started egg laying) and lost queens (queens which did not return from the mating station) for each location and experiment week. Two of the mated queens were not recorded at all.

Location | Calendar week | Monitored queens | Queens with recorded flights | Mated queens | Lost queens
Gehlberg | 22 | 8 | 7 | 7 | 1
Gehlberg | 24 | 8 | 7 | 7 | 1
Gehlberg | 26 | 8 | 5 | 5 | 3
Gehlberg | 27 | 8 | 3 | 3 | 5
Oberhof | 22 | 8 | 7 | 7 | 1
Oberhof | 24 | 8 | 6 | 8 | 0
Oberhof | 26 | 8 | 8 | 8 | 0
Oberhof | 27 | 8 | 8 | 8 | 0

These results are in accordance with previous studies. Queen losses of 10% to 20% during nuptial flights, due to weather conditions, predation or difficulties in returning to the mating nuc, are not unusual [17,56]. In general, honey bee queens perform several flights on a couple of days. Schlüns et al. [17] and Woyke [20] reported up to three flights for queens on the mainland. Alber et al. [23] observed one to four mating flights (queens returned with a mating sign) on one to three days for queens on an island. Verbeek [33] reported up to 10 flights on three days and a mean number of flights per queen of 12.5 (mainland), which appears to be rare for queens on mainland mating apiaries. However, this many flights, and even more, have been reported for queens on islands [23,33]. Our results are well in agreement with previously reported observations from various climates and at various latitudes (northern hemisphere) in that A. mellifera queens and drones take nuptial flights between 12:00 and 17:00 h, with a maximum flight activity between 13:00 and 16:00 h [23,24,31,33,48,57-60]. Data on the flight duration of honey bee queens provided by different authors vary considerably, between 2 and 57 min [21,33,61,62], with a mean duration between 7 and 26 min [1,14,18,33].

Duration and Frequency of Queen Nuptial Flights

First, we considered all flying days of all queens and observed no significant differences between the two mating apiaries concerning the number of flying days per queen (Mann-Whitney test; U = 264.50; p = 0.270) or the total number of nuptial flights per queen (Mann-Whitney test; U = 229.00; p = 0.085). When the queens of both mating apiaries were considered together, the number of flights per day differed significantly between consecutive flying days (Kruskal-Wallis test; Chi² = 12.30, p = 0.015, df = 4), with the number of flights per day increasing until day 4 and then decreasing again. Significant differences were observed between days 1 and 3 (Mann-Whitney test; U = 250.00; p = 0.015), days 1 and 4 (Mann-Whitney test; U = 19.50; p = 0.005), and days 2 and 4 (Mann-Whitney test; U = 25.50; p = 0.023; Figure 4).
However, after sequential Bonferroni adjustment of the p-values for multiple testing, only the difference between the number of nuptial flights on days 1 and 4 remained significant. We then restricted the further analysis of flight frequency to the first three flying days, since the groups of queens differed in the number of days on which flights were possible (see Section 2.2). The number of nuptial flights on the first three flying days differed significantly between the two locations, with queens of Oberhof flying more often than queens of Gehlberg (Table 4, Figure 5a). In addition, the total number of flights of the queens was higher on days with lower mean temperatures than at higher temperatures (Table 4, Figure 5b). However, we observed no effect of the number of drone colonies on the total number of nuptial flights of a queen (Table 4). In contrast to the number of nuptial flights, their mean duration did not differ significantly between the two mating apiaries (Table 5). The mean daily nuptial flight duration of each queen did not differ between flights taken on different days (Kruskal-Wallis test; Chi² = 2.58, p = 0.631). We also found no effect of the number of drone colonies on the duration of the nuptial flights (Table 5). However, flight duration significantly depended on the age of the queen (Table 5, Figure 6a), with the youngest (<9 days) and oldest (>15 days) queens flying significantly longer than queens of medium age (10 to 14 days). Flight duration also depended on the rank of the flight (Table 5, Figure 6b), with the first flights being the longest ones. In addition, the duration of the nuptial flights significantly increased with increasing temperature (at departure; Table 5, Figure 6c).

Table 5. Effects of queen's age, mating apiary, number of drone colonies, rank of consecutive flights and temperature (at departure; included as a covariate) on the duration of the nuptial flights. Type III sum of squares, denominator degrees of freedom (Den. d.f.), mean square, F-value and p-value are given for each factor.

Several studies have identified environmental conditions such as temperature, wind, and cloud cover as important factors influencing the mating behavior of honey bees [31-33,60,63]. Alber et al. [23] showed that matings hardly ever occur at temperatures below 20 °C, with overcast sky and wind velocity above 30 km/h. According to Lensky and Demter [31], high wind velocities (9-14 km/h) in combination with low temperatures (15 °C-20 °C) lead to an increase of short queen flights. In our study, the flight duration also decreased with decreasing temperatures, whereas the flight frequency increased. The mating behavior of honey bee queens can also vary between different types of mating apiaries. For example, nuptial flights of queens on island mating apiaries are shorter and more frequent than on mainland mating apiaries [33,64]. In addition, queens on island mating apiaries mate less often compared to queens on the mainland [28]. These differences have been considered to be caused by different climatic conditions (especially wind velocity) between islands and the mainland [28,33,64]. Our study was conducted at two mating apiaries separated by a distance of only 4.4 km. Nevertheless, local climate conditions are likely to differ between the two locations, especially since the altitude of Oberhof is about 200 m higher than that of Gehlberg.
This difference may explain the different flight frequencies at the two apiaries, with queens of Oberhof flying more often than queens of Gehlberg. The number of available drones in the environment has been hypothesized to influence the mating flights of honey bee queens, although the reports of different author teams differ in their conclusions. While Koeniger and Koeniger [14,57] observed an increase in mean flight duration from 13.7 ± 6.1 min with plenty of drones (>10,000) to 21.8 ± 9.67 min with few (~2,500) drones, Woyke [18] showed the opposite effect, with an abundance of drones in close proximity to the queens leading to longer nuptial flights. However, the nuptial flights of the queens from an apiary with plenty of drones were not more effective (concerning the amount of semen the queens received) than those of queens from an apiary where no colonies with drones were present within 2.5 km [18]. Neumann et al. [28] compared queen mating frequencies and found no effect with numbers of drone colonies varying between 10 and 42. In our study, we increased the number of drone colonies from 33 to 80 and found no effect on either the flight duration or the flight frequency. Possibly, 33 drone colonies are already enough for an excess supply of drones, so that the effect of additional drone colonies is rather weak.

We found no previous studies reporting differences in flight duration between queens of different ages, whereas we observed a significant effect. The flights of both the youngest and the oldest queens were significantly longer than those of medium-aged queens. However, due to reasons connected to experiment logistics, our groups of queens in different experimental weeks were not always all of the same age. Therefore, we cannot exclude the possibility that the effect of the queen's age may have resulted from differences between experimental weeks. To confirm our observation, it will thus be necessary to observe queens of different ages simultaneously at the same mating apiary.

Direct visual observation of queens is time-consuming, and the number of individuals which can be monitored simultaneously, as well as the time span during which this can be done, is limited. To prevent unobserved nuptial flights, queen excluders are used for direct observations, which have to be checked continuously by a patrolling observer. Both the presence of an observer and the queen excluder itself can disturb the queens and alter their behavior [23,33]. In contrast, use of the RFID technique provides a suitable method to monitor several individuals simultaneously without disturbing them. Of course, the use of RFID does not permit detecting whether a queen returns with a mating sign, so it cannot be determined whether she mated on a flight or not. However, even with direct observation, one cannot always be certain whether a queen was successful on her nuptial flight, as mated queens sometimes (8% to 36%) return without a mating sign [1,23,65]. One can only be sure that a queen returning with a mating sign mated with at least one drone. A more serious problem occurring with RFID is the fact that sometimes a queen can pass the reader module without being registered. Three of our queens were not recorded at all during the whole experiment; however, they started egg laying afterwards. Although it is possible that they lost their transponders, we also found that the orientation of the transponder relative to the reader module influences the readings.
It may thus be possible that queens leave unregistered when they pass the entrance in a position that is unfavorable for a reading. Another technical problem was that we sometimes could not unambiguously decide whether a queen left her mating nuc for a nuptial flight or whether, after having passed the reader module, she did not leave after all but returned into the nuc instead. Both of these difficulties can be solved by the use of reader modules with two antennae. Queens passing such a reader module are registered twice, which enables the observer to determine the direction of the movement [52,66].

Conclusions

Radio-frequency identification (RFID) is an appropriate tool to investigate the duration and frequency of nuptial flights of honey bee queens, since our results agree well with the results of previous studies. Marking the queens with RFID transponders does not lead to increased queen losses. In our study, the number of queens which did not return to their mating nucs was within the range of losses reported by other authors and rather seems to depend on the mating apiary (10 queens were lost at Gehlberg and 1 queen at Oberhof). To reduce the rate of ambiguous readings, reader modules with two antennae should be used.
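The two-antenna solution can be made concrete with a small sketch: with one antenna on the nuc side and one on the field side of the entrance, the order in which the two antennae register the tag yields the direction of movement. The antenna labels and return values are illustrative assumptions, not part of the cited hardware's interface.

# Sketch of direction inference with a two-antenna reader module:
# a queen passing the entrance is read by both antennae, and the
# read order (inner->outer vs. outer->inner) gives the direction.

def movement_direction(first_antenna: str, second_antenna: str) -> str:
    """Antennae are labeled 'inner' (nuc side) and 'outer' (field
    side); the labels are illustrative, not from the cited hardware."""
    if (first_antenna, second_antenna) == ("inner", "outer"):
        return "departure"
    if (first_antenna, second_antenna) == ("outer", "inner"):
        return "return"
    return "ambiguous"  # e.g., queen turned back between the antennae

print(movement_direction("inner", "outer"))  # departure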
4,991
2014-07-01T00:00:00.000
[ "Biology" ]
Yeast β-Glucan Altered Intestinal Microbiome and Metabolome in Older Hens

The prebiotics- and probiotics-mediated positive modulation of the gut microbiota composition is considered a useful approach to improve gut health and food safety in chickens. This study explored the effects of yeast β-glucan (YG) supplementation on the intestinal microbiome and metabolite profiles as well as mucosal immunity in older hens. A total of 256 43-week-old hens were randomly assigned to two treatments, with 0 and 200 mg/kg of YG. Results revealed YG-induced downregulation of toll-like receptors (TLRs) and cytokine gene expression in the ileum without any effect on the intestinal barrier. 16S rRNA analysis showed that YG altered α- and β-diversity and enriched the relative abundance of the class Bacilli, the orders Lactobacillales and Enterobacteriales, the families Lactobacillaceae and Enterobacteriaceae, the genera Lactobacillus and Escherichia–Shigella, and the species uncultured bacterium-Lactobacillus. Significant downregulation of cutin and suberin, wax biosynthesis, atrazine degradation, vitamin B6 metabolism, phosphotransferase system (PTS), steroid degradation, biosynthesis of unsaturated fatty acids, aminobenzoate degradation and quorum sensing, and upregulation of ascorbate and aldarate metabolism, C5-branched dibasic acid metabolism, glyoxylate and dicarboxylate metabolism, pentose and glucuronate interconversions, steroid biosynthesis, carotenoid biosynthesis, porphyrin and chlorophyll metabolism, sesquiterpenoid and triterpenoid biosynthesis, lysine degradation, and ubiquinone and other terpenoid-quinone biosyntheses were observed in YG-treated hens, as substantiated by the findings of the untargeted metabolomics analysis. Overall, YG manifests prebiotic properties by altering gut microbiome and metabolite profiles and can downregulate the intestinal mucosal immune response of breeder hens.

INTRODUCTION

The gastrointestinal tract of chickens harbors a highly diverse ecosystem comprising more than 900 bacterial species (Gong et al., 2002). Accumulating studies have validated the significant role of the gut microbiota in feed digestion, nutrient absorption, breakdown of toxins, exclusion of pathogens, intestinal development, and development and stimulation of the immune system, which maintains the homeostasis of the gastrointestinal tract and endocrine activity in mammals as well as in chickens (Yeoman and White, 2014; Lynch and Pedersen, 2016; Thaiss et al., 2016). Moreover, a variety of bioactive substances, such as peptidoglycans, lipopolysaccharides, DNA and extracellular vesicles, and metabolites, including short-chain fatty acids (SCFAs), aryl hydrocarbon receptor ligands, polyamines, trimethylamine, secondary bile acids, bacteriocins, quorum-sensing autoinducers, vitamins, carotenoids, neurotransmitters, and phenolic compounds, are contributed by the intestinal flora. By interacting with the host cells via the portal vein, these substances are directly or indirectly involved in diverse physiological processes, including host development, metabolism, cell-to-cell signal communication, immune regulation, health, and disease (Clemente et al., 2012; Nicholson et al., 2012). Research has highlighted several predisposing factors, such as genetics, environment, age, diet, additives, antibiotics, and pathogens, that influence or regulate the host gut microbiota composition, diversity, and function, thereby altering metabolism and immunity (Belkaid and Hand, 2014; Lynch and Pedersen, 2016; Thaiss et al., 2016).
An attractive approach for the amelioration of host production performance and immunity, the regulation of metabolism, and the prevention or treatment of diseases includes the beneficial modulation of the gut microbial communities through dietary interventions or nutritional strategies (Flint et al., 2015). There is increasing evidence that polysaccharides from plants and microbes modulate the composition of the gut microbiota as well as microbial-derived metabolites in humans, animals, and poultry, which in turn improves host immunity, meaning they could be used to treat several ailments (Jayachandran et al., 2018; Tang et al., 2019; Yin et al., 2020). For example, polysaccharides extracted from purple sweet potato were found to alleviate immunosuppression by increasing the relative abundances of SCFA-producing bacteria in cyclophosphamide-treated mice (Tang et al., 2018). By metabolizing prebiotic polysaccharides, the gut microbiota can produce a wide range of primary and secondary metabolites, some of which can, in turn, affect host physiology and immunity (Koh et al., 2016).

Significant alterations in the gut microbiota of egg-laying hens are noted from the day of hatching until 60 weeks of age, and the microbiota is further modified as the hen ages (Callaway et al., 2009; Videnska et al., 2014). With increasing age, hens gradually exhibit age-related deteriorations, mainly a degenerative digestive system; compromised feed nutrient utilization; declines in productive performance, egg quality, and reproductive performance; increased intestinal permeability; chronic intestinal inflammation; reduced antioxidant capacity; and poor health status, incurring major economic loss for the poultry industry (Joyner et al., 1987; Bain et al., 2016). Significant shifts of the intestinal microbiota are suggested to be responsible for these changes. Some studies have indicated that nutritional strategies can positively regulate the gut microbiota and mucosal immunity and thus can be employed as a novel and effective approach to improving egg performance, egg quality, immunity, and antioxidant capacity in the later laying period of hens (Pan and Yu, 2014; Lee et al., 2016; Wang et al., 2018). Our previous study reported the efficacy of yeast β-glucan in enhancing the cellular and humoral immune function as well as in reducing mortality and improving egg quality and reproductive performance in aged hens (Zhen et al., 2020). Nevertheless, the impact of yeast β-glucan supplementation on both the intestinal microbiome and intestinal metabolites in older hens has rarely been investigated to date. This study aimed to explore whether yeast β-glucan can regulate the immune function of older hens by affecting gut microbiota composition and metabolism. Thus, we determined the gut microbiome, microbial metabolite profiles, and the expression of immunity-related genes in the ileal mucosa of aged breeder hens after feeding yeast β-glucan.

Experimental Design, Diets, Animal Management, and Sample Collection

The China Agricultural University Animal Care and Use Committee, Beijing, China, approved the experimental animal protocols for this study (permit number SYXK 2019-0026). Two hundred and fifty-six 43-week-old Hy-Line Brown breeder hens were randomly divided into two groups with eight replicates (16 hens per replicate). Hens received basal diets without or with 200 mg/kg yeast β-glucan (Glu200). The amount of yeast β-glucan supplementation was derived from our dose-response study conducted previously (Zhen et al., 2020).
The yeast β-glucan used in this study was acquired from Zhuhai TianXiangYuan Biotech Holding Co., Ltd. (Zhuhai city, Guangzhou province, China), with a purity of ≥80% and a molecular weight of 1,223 kDa. Its structural characterization has been described previously (Zhen et al., 2020). The composition of the corn-soybean meal-based diet used in this study followed the requirements of the National Research Council (1994). The specific composition of the basal diet and the nutrient levels were the same as in our previous article (Zhen et al., 2020). Hens were reared in cages under a daily lighting regimen (16L:8D) and were provided with feed and water ad libitum. Room temperature was maintained between 18 and 23°C. The study lasted 9 weeks, with a 7-day adaptation period. At 51 weeks of age, one chicken was randomly chosen from each replicate and euthanized by cervical dislocation. Ileal contents and ileum were collected, frozen in liquid nitrogen, and stored at −80°C until further analysis.

DNA Extraction and Illumina MiSeq Sequencing

The total bacterial DNA from the ileal contents (about 100 mg per sample) was extracted with a QIAamp Fast Stool Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions and was preserved at −80°C before further analysis. The concentrations of the DNA extracts were measured on a NanoDrop 2000 spectrophotometer (Thermo Scientific, MA, United States). The V3-V4 region of the 16S rRNA gene was amplified using primers F341 and R806 (F341: ACTCCTACGGGRSGCAGCAG, R806: GGACTACVVGGGTATCTAATC). A Qiagen Gel Extraction Kit (Qiagen, Germany) was employed to purify the PCR amplification products, which were further quantified with a Qubit 2.0 fluorometer (Thermo Fisher Scientific, Waltham, United States) and pooled at an even concentration. Amplicon libraries were sequenced on the Illumina MiSeq PE250 platform (Illumina, San Diego, United States). Raw sequences were merged by FLASH, and the merged sequences were quality-filtered by Trimmomatic. Subsequently, UCHIME was adopted to remove the chimeric sequences, yielding the effective tags, which were further analyzed by Biomarker Technologies (Beijing) Co., Ltd. Clustering of sequences at a level of 97% similarity (USEARCH, version 10.0) was followed by filtering the operational taxonomic units (OTUs) using 0.005% of all sequence numbers as a threshold. Thereafter, the Greengenes database was used for the taxonomy-based analysis of the OTUs through the RDP algorithm. Analyses of OTU extraction, overlapping of OTUs, clustering, α-diversity, and β-diversity were conducted with the QIIME2 (2019.4) software with Python scripts. A species diversity matrix was presented based on binary Jaccard, Bray-Curtis, and weighted and unweighted UniFrac algorithms. The linear discriminant analysis (LDA) effect size (LEfSe) was adopted to identify the differential abundance of taxa. PICRUSt was applied to predict the function of the ileal microbiota community (Langille et al., 2013). The predictions were summarized at multiple levels, and the Statistical Analysis of Metagenomic Profiles package (STAMP) was used to compare the functional categories between the Control and Glu200 groups (Parks et al., 2014). The raw data were uploaded to the National Center for Biotechnology Information's Sequence Read Archive database (SRA accession: PRJNA752702).
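As an illustration of the α-diversity computations carried out in QIIME2, the Shannon index of a single sample can be obtained from its OTU counts as below; this is the textbook definition with illustrative names, not code from the actual pipeline.

# Shannon diversity index H' = -sum(p_i * ln p_i) over OTU relative
# abundances, one of the alpha-diversity measures computed in QIIME2.
import math

def shannon_index(otu_counts: list) -> float:
    """Compute Shannon diversity from a sample's OTU read counts."""
    total = sum(otu_counts)
    props = (c / total for c in otu_counts if c > 0)
    return -sum(p * math.log(p) for p in props)

# Example: an evenly spread community is more diverse than a skewed one.
print(shannon_index([25, 25, 25, 25]))  # ln(4) ≈ 1.386
print(shannon_index([97, 1, 1, 1]))     # ≈ 0.168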
Sample Preparation for LC-Q-TOF/MS Analysis

Thawed ileal contents (about 100 mg per hen) were extracted with 1 ml of pre-cooled 50% methanol, vortexed for 1 min, and incubated at room temperature for 10 min; the extraction mixture was then stored at −20°C. Following centrifugation at 4,000 g for 20 min, the supernatants were collected and stored at −80°C for UPLC-Q-TOF/MS analysis on an ultra-performance liquid chromatography (UPLC) system (SCIEX, United Kingdom) coupled to a TripleTOF 5600 Plus high-resolution tandem mass spectrometer (SCIEX, United Kingdom) at LC-Bio Technologies (Hangzhou) Co., Ltd. Meanwhile, 10 μl of each extraction mixture was combined to obtain pooled quality control (QC) samples.

LC-Q-TOF/MS Analysis of Ileal Contents

Chromatographic separations were performed by UPLC following the procedure of Zhan et al. (2020). Reversed-phase separation was carried out on an ACQUITY UPLC T3 column (100 mm × 2.1 mm, 1.8 μm, Waters, United Kingdom). The column oven was maintained at 35°C. The flow rate was 0.4 ml/min, and the mobile phase comprised solvent A (water, 0.1% formic acid) and solvent B (acetonitrile, 0.1% formic acid). The gradient elution conditions were as follows: 0-0.5 min, 5% B; 0.5-7 min, 5-100% B; 7-8 min, 100% B; 8-8.1 min, 100-5% B; and 8.1-10 min, 5% B. The injection volume for each sample was 4 μl. The metabolites eluting from the column were detected by the TripleTOF 5600 Plus high-resolution tandem mass spectrometer (SCIEX, United Kingdom). The Q-TOF was operated in both positive and negative ion modes. The curtain gas was set at 30 PSI, ion source gas 1 at 60 PSI, and ion source gas 2 at 60 PSI, and the interface heater temperature was 650°C. The ionspray floating voltage was set at 5,000 V for positive ion mode and −4,500 V for negative ion mode. The mass spectrometry data were acquired in IDA mode. The TOF mass range was 60 to 1,200 Da. Survey scans were acquired in 150 ms, and up to 12 product ion scans were collected when a threshold of 100 counts per second (counts/s) was exceeded with a 1+ charge state. The total cycle time was fixed at 0.56 s. Four time bins were summed for each scan at a pulse frequency of 11 kHz by monitoring the 40 GHz multichannel TDC detector with four-anode/channel detection. Dynamic exclusion was set for 4 s. The mass accuracy was calibrated every 20 samples during acquisition. Furthermore, the stability of the LC-MS system over the whole acquisition was assessed by acquiring a QC sample after every 10 samples.

LC-Q-TOF/MS Data Acquisition and Processing

LC-MS raw data were processed with the XCMS, CAMERA, and metaX toolboxes implemented in the R software. The online KEGG and HMDB databases and an in-house fragment spectrum library of metabolites were used to annotate the metabolites by matching the exact molecular mass data (m/z) of the samples against those databases. The KEGG database was used to analyze the metabolic pathways of the differential metabolites. Peak intensity data were processed with metaX. Differences in metabolite levels between the two treatments were identified using Student's t tests, with p values adjusted for multiple testing by the Benjamini-Hochberg false discovery rate (FDR); supervised partial least squares discriminant analysis (PLS-DA) was conducted in metaX to discriminate the variables separating the groups.
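The differential-metabolite statistic just described (a per-feature Student's t test followed by Benjamini-Hochberg FDR adjustment) can be sketched as follows. This is a minimal illustration, not the authors' metaX (R) workflow; the intensity matrix is randomly generated placeholder data.

```python
# Per-metabolite Student's t test with Benjamini-Hochberg FDR correction.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# rows = metabolite features, columns = samples (8 Control, 8 Glu200)
control = rng.lognormal(mean=10.0, sigma=0.5, size=(500, 8))
glu200 = rng.lognormal(mean=10.1, sigma=0.5, size=(500, 8))

# Student's t test for each metabolite feature (row-wise)
_, pvals = ttest_ind(control, glu200, axis=1)

# Benjamini-Hochberg adjustment across all features
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} features significant at FDR < 0.05")
```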
The data supporting the findings of this study have been deposited in the CNGB Sequence Archive (CNSA) of the China National GeneBank DataBase (CNGBdb) under accession number CNP0002101.

Statistical Analysis

Immunological parameter data were analyzed using the one-way analysis of variance (ANOVA) procedure of SPSS 17.0 (SPSS Inc., Chicago, IL, United States). A value of p ≤ 0.05 was considered significant, and p values between 0.05 and 0.10 were classified as trends.

Gene Expression in the Ileum of Hens

As evident from Table 2, compared to the control group, significant downregulation (p < 0.05) of the mRNA levels of the TLR2, TLR4, IL-6, IL-8, IL-10, IL-12, TGF-β3, IFN-γ, and TNF-α genes was observed in hens fed 200 mg/kg yeast β-glucan, whereas no significant effect was documented on CLDN1, FABP2, ZO-1, occludin, TLR6, IL-1β, IL-2, MyD88, and NF-κB mRNA expression (p > 0.05). In general, the addition of 200 mg/kg yeast β-glucan lowered the mRNA expression levels of these genes in the ileum of hens compared with the control group; however, it did not affect the intestinal barrier.

The Diversity and Composition of Gut Microbiota

A Venn diagram identified two unique OTUs in the control group and one unique OTU in the Glu200 group (Figure 2A). The ileal microbiota was dominated by two phyla, Firmicutes and Proteobacteria (Figure 2B). Bacterial phyla showed no difference between the two groups (p > 0.05). At the genus level (Figure 2C), yeast β-glucan addition significantly increased the relative abundance of Lactobacillus in the ileum compared with the control group (p < 0.05; Figure 2D). LEfSe analysis revealed a significantly higher relative abundance of uncultured bacterium-g-Lactobacillus, Bacilli, Lactobacillales, Lactobacillaceae, Lactobacillus, Enterobacteriales, Enterobacteriaceae, and uncultured bacterium-Escherichia-Shigella in the Glu200 group than in the control group (LDA score > 4; Figures 2E,F). Figure 3 summarizes the microbial function predictions at level 2 of the KEGG pathways. Between the control and Glu200 groups, 38 functional pathways were identified. Compared with the control group, the yeast β-glucan group showed significant enrichment of several functional pathways, including translation, nucleotide metabolism, replication and repair, lipid metabolism, carbohydrate metabolism, xenobiotic biodegradation and metabolism, energy metabolism, metabolism of terpenoids and polyketides, folding, sorting and degradation, cell growth and death, transcription, and drug resistance, while showing suppression of other pathways such as amino acid metabolism, cell motility, metabolism of cofactors and vitamins, membrane transport, signal transduction, biosynthesis of other secondary metabolites, and metabolism of other amino acids.

Ileal Metabolites and Metabolic Pathways

To elucidate whether supplementation of yeast β-glucan affects the intestinal microbiota and mucosal immune response by altering intestinal metabolites, we further profiled the metabolites of the ileal contents using a UPLC-MS/MS-based non-targeted metabolomics approach. The PCA and PLS-DA score plots clearly indicated a distinct separation of the identified intestinal metabolites, in both positive and negative ion modes, between the unsupplemented control and the Glu200-supplemented groups (Figure 4).
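The two kinds of score plot in Figure 4 are an unsupervised and a supervised projection of the same feature matrix. A minimal sketch of both, assuming a hypothetical samples-by-metabolites matrix rather than the authors' data:

```python
# PCA (unsupervised) and PLS-DA (supervised) score projections.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 500))       # 16 samples, 500 features (placeholder)
X[8:] += 0.5                         # crude group effect for the Glu200 rows
y = np.array([0] * 8 + [1] * 8)      # 0 = Control, 1 = Glu200

Xs = StandardScaler().fit_transform(X)

# Unsupervised PCA: the first two scores give the PCA score plot
pca_scores = PCA(n_components=2).fit_transform(Xs)

# Supervised PLS-DA: regress the class label on the features;
# the latent x-scores give the PLS-DA score plot
pls = PLSRegression(n_components=2).fit(Xs, y)
plsda_scores = pls.transform(Xs)
print(pca_scores.shape, plsda_scores.shape)  # (16, 2) (16, 2)
```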
This finding confirmed that yeast β-glucan induced significant alterations of the intestinal metabolic profiles of the hens. The PLS-DA models were validated by cross-validation (R2 = 0.998 and Q2 = 0.883 for positive ion mode and R2 = 0.999 and Q2 = 0.816 for negative ion mode). Figure 5 shows the differential features in positive ion mode in the ileal contents: 264 molecular features were increased and 234 decreased, based on a VIP > 1.0 within 95% jack-knifed confidence intervals. In negative ion mode, the levels of 136 differential metabolites were increased and those of 73 were decreased. The differentially represented metabolic pathways are detailed in Figure 6 and Supplementary Tables S1-S4. The results disclosed that the levels of differential metabolites related to cutin, suberin, and wax biosynthesis, atrazine degradation, vitamin B6 metabolism, the phosphotransferase system (PTS), steroid degradation, biosynthesis of secondary metabolites, biosynthesis of unsaturated fatty acids, aminobenzoate degradation, and quorum sensing were significantly downregulated (Supplementary Tables S1 and S2), whereas the levels of differential metabolites mapped to ascorbate and aldarate metabolism, C5-branched dibasic acid metabolism, pentose and glucuronate interconversions, glyoxylate and dicarboxylate metabolism, benzoate degradation, ubiquinone and other terpenoid-quinone biosyntheses, steroid biosynthesis, carotenoid biosynthesis, porphyrin and chlorophyll metabolism, metabolic pathways, sesquiterpenoid and triterpenoid biosyntheses, and lysine degradation were upregulated (Supplementary Tables S3 and S4) in the ileal contents of hens in the yeast β-glucan group compared with the non-supplemented control group.

DISCUSSION

Research on prebiotics and immunostimulants has evolved rapidly because they offer a sustainable way to improve chicken health while being subject to fewer regulatory restrictions on food safety than other methods (Lillehoj and Lee, 2012). Because of its strong immunomodulatory potential, yeast β-glucan, a polysaccharide, is widely applied as a feed supplement in the poultry industry (Soltanian et al., 2009; McFarlin et al., 2013; Samuelsen et al., 2014; Stier et al., 2014; Fuller et al., 2017; Jayachandran et al., 2018). Our previous study indicated that dietary yeast β-glucan significantly enhanced systemic immunity, reduced the mortality rate of laying hens, and improved egg quality and fertile egg hatchability, suggesting beneficial effects of yeast β-glucan addition on the reproductive performance of aged hens (Zhen et al., 2020). Gut health exerts important effects on egg quality and reproductive performance. To trace the cause of the beneficial effects of yeast β-glucan treatment on egg yolk color and fertile egg hatchability, we first investigated the impact of yeast β-glucan treatment on the gene expression of intestinal mucosal immune-related factors. Interestingly, our observations confirmed the downregulation of ileal mucosal TLR-mediated immune responses upon feeding 200 mg/kg yeast β-glucan. Immunity and inflammation in the gut are largely governed by the TLR-mediated signaling pathway and its related effector molecules (Abreu, 2010; Shao et al., 2016; Nawab et al., 2019).
Decreased expression of TLR-mediated immune-related genes was observed in the ileum of aged hens following yeast β-glucan treatment, suggesting that, during the later laying period, the inclusion of yeast β-glucan in the diet could alleviate the chronic intestinal inflammatory responses resulting from age-related gut dysbiosis, thereby improving immune status. In line with our findings, previous studies have substantiated the role of yeast β-glucan supplementation in maintaining the homeostasis of the organism and in protecting chickens against an excessive immune response when subjected to pathogen challenge or other inflammatory stimuli, by inhibiting the activation of the TLR-mediated signaling pathway and restoring the balance of immune responses (Tzianabos, 2000; Huff et al., 2006; Chen et al., 2013; Shao et al., 2013, 2016; Tian et al., 2016). Therefore, we proposed that yeast β-glucan supplementation downregulated ileal immune responses, which might be beneficial to the gut health and intestinal integrity of older hens. The tight junction proteins CLDN1, ZO-1, and occludin, along with FABP2, are used as biomarkers to predict gut barrier function and health in chickens (Chen et al., 2015). The nonsignificant changes in CLDN1, FABP2, ZO-1, and occludin mRNA levels following yeast β-glucan administration in our study suggested that feeding yeast β-glucan to older hens did not impair intestinal barrier function or integrity. These observations indicated that the improvement in egg performance and egg quality by yeast-derived β-glucan might be attributed to reducing the chronic intestinal inflammatory state caused by aging, maintaining gut homeostasis, and ameliorating gut function in aged hens. Recently, growing evidence has linked the gut microbiota and its metabolites to the immune regulation involved in local and systemic inflammatory responses (Abreu, 2010; Belkaid and Hand, 2014; Bae et al., 2020; Lavelle and Sokol, 2020). To further elucidate the mechanism by which yeast β-glucan treatment inhibits intestinal immune responses in aged hens, we evaluated the modulatory effect of yeast β-glucan administration on the gut microbiome and metabolome. In this study, the increased Shannon index, decreased Simpson index, and altered ileal microbiota community composition and structure indicated that dietary yeast β-glucan supplementation significantly affected the α-diversity and β-diversity of the ileal microbiota. A more diverse microbial community signifies stronger homeostasis of the gut microbial community. The significant alteration of ileal microbiota α-diversity and β-diversity induced by yeast β-glucan treatment suggests that yeast β-glucan could ameliorate the intestinal microbial community and structure, which may be related to gut homeostasis. Interestingly, taxon analysis revealed an enriched population of ileal Lactobacillus in the Glu200 group relative to the control group. LEfSe analysis also confirmed enrichment of the relative abundance of uncultured bacterium-g-Lactobacillus, Bacilli, Lactobacillales, Lactobacillaceae, Lactobacillus, Enterobacteriales, Enterobacteriaceae, and uncultured bacterium-Escherichia-Shigella following yeast β-glucan addition. In accordance with our observations, several recent studies using mammalian models and in vitro simulated fermentation systems have demonstrated that yeast β-glucan alters the structure of the gut microbiota and exhibits prebiotic-like activity. For example, Xu et al.
(2020) reported that yeast β-glucan alleviated Aβ1-42-induced cognitive deficits by enriching beneficial bacteria (Lactobacillus and Bifidobacterium) and reducing pathogenic bacteria (Oscillibacter, Mucispirillum, Alistipes, Anaerotruncus, and Rikenella), which could also effectively alleviate immunosuppression (Wang et al., 2019). Similarly, attenuation of DSS-induced colitis resulted from oral administration of soluble yeast β-glucans to mice via increased populations of intestinal Lactobacillus johnsonii and Bacteroides thetaiotaomicron (Charlet et al., 2018). Cao et al. (2016, 2018) also observed that orally administered yeast β-1,3-glucan enriched the gut proportion of Akkermansia and could thereby suppress chronic inflammation in diabetic mice. In mouse T1D (type 1 diabetes) and colitis models, supplementation with yeast β-glucans manifested anti-diabetic and anti-inflammatory effects by enriching Bacteroidetes and Verrucomicrobia while diminishing the phylum Firmicutes (Gudi et al., 2019). In early life, the addition of yeast β-glucans alters the gut microbiota composition, including Methanobrevibacter, Fusobacterium, and Romboutsia, during the pre-weaning period in piglets (de Vries et al., 2020). Additionally, using a simulated large-intestine fermentation system, Wang et al. (2020a) observed that yeast β-glucan could be broken down by gut microbiota fermentation and selectively enhanced the growth of the beneficial gut bacterium Bifidobacterium longum, impeded the proliferation of harmful intestinal flora, and decreased the ratio of Firmicutes to Bacteroidetes. Thus, our findings establish that yeast β-glucan can be fermented by the gut microbiota of hens and possesses prebiotic-like activity for modulating the gut microbiota of laying hens. As an important probiotic, Lactobacillus stimulates the gut health of poultry (Tsai et al., 2012; Ashraf and Shah, 2014). Compared with the control hens, a higher proportion of Lactobacillales, Lactobacillaceae, Lactobacillus, and uncultured bacterium-g-Lactobacillus was observed in the gut of aged hens given dietary yeast β-glucan in this study, suggesting that yeast β-glucan addition improves the balance of the gut microbiota and the gut health of aged hens. After feeding with yeast β-glucan, intestinal immune responses declined, which was possibly related to the promotion of the growth of the beneficial intestinal genus Lactobacillus. Concurrently, a remarkable increase in the relative abundance of Enterobacteriales, Enterobacteriaceae, and uncultured bacterium-Escherichia-Shigella was observed in the yeast β-glucan group compared with the control group. Though some Enterobacteriaceae belonging to the Proteobacteria phylum are regarded as pathogens (such as Salmonella and Shigella), most Enterobacteriaceae in the gut are considered commensals, as they benefit their host by fermenting glucose with acid production, generating vitamin B12 and vitamin K, promoting the maturation of the host immune system, and competitively eliminating pathogens in the intestine (Leimbach et al., 2013; Conway and Cohen, 2015; Litvak et al., 2019). Considering the constructive effects of yeast β-glucan on the gut microbiota composition of chickens, the present results indicate that yeast β-glucan could improve gut health and is a candidate prebiotic for stimulating the intestinal health of aged hens.
However, further exploration is essential to elucidate the mechanism by which yeast β-glucan promotes the growth of intestinal commensal Enterobacteriaceae. PICRUSt analysis determined that yeast β-glucan contributed to the enrichment of nucleotide metabolism, lipid metabolism, xenobiotic biodegradation and metabolism, carbohydrate metabolism, and energy metabolism, while suppressing amino acid metabolism and the metabolism of cofactors and vitamins in the ileal microbiota. Consistent with our findings, Taylor et al. (2020) reported a notable change in the function of the gut microbiota of DSS-induced mice, with enhanced carbohydrate metabolism, glycan biosynthesis and metabolism, and fatty acid biosynthesis, resulting from β-1,3-d-glucan treatment. Wang et al. (2020a) employed a simulated large-intestine fermentation system and observed that cell motility, lipid metabolism, transport and catabolism, transcription, enzyme families, membrane transport, neurodegenerative diseases, cellular processes, and signaling were significantly enriched in the yeast β-glucan group relative to the control group. Similarly, yeast β-glucan supplementation induced a shift in gut microbiota composition and structure and increased the concentrations of fecal SCFAs such as acetic, propionic, and butyric acids in normal mice and in Alzheimer's disease model mice (Xu et al., 2020).

FIGURE 6 | Bubble diagram of the KEGG pathway enrichment analysis of differential metabolites. The x axis (Rich Factor) represents the ratio of the number of differential metabolites annotated to a pathway to all the metabolites annotated to that pathway; the higher the value, the higher the degree of enrichment of differential metabolites in the pathway. The color of each point represents the p value of the hypergeometric test; the smaller the value, the greater the reliability and statistical significance of the test. The size of each point represents the number of differential metabolites annotated to the pathway; the larger the point, the more differential metabolites in the pathway. (A) Positive ion mode analysis and (B) negative ion mode analysis.

As reported, the gut microflora can metabolize nondigestible carbohydrates into SCFAs, which are known to boost intestinal health through their trophic, anti-inflammatory, and immunomodulatory effects (Lavelle and Sokol, 2020). The hindgut microbiota usually ferments amino acids into amines, ammonia, and gases such as sulfide, methane, and hydrogen, which are genotoxins, cytotoxins, and carcinogens associated with colon cancer and inflammatory bowel diseases (Neis et al., 2015; Lavelle and Sokol, 2020). Although reports of gut microbiota composition and structure, along with predicted microbial community functions following yeast β-glucan treatment, vary across experimental studies, our observations suggest that dietary yeast β-glucan administration beneficially modifies the gut microbiota composition and community structure toward enhanced carbohydrate metabolism and suppressed amino acid metabolism, thereby contributing to the dampening of age-related chronic gut inflammation in hens.
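The pathway enrichment behind the Figure 6 bubble diagram rests on a hypergeometric test and the Rich Factor defined in its caption. A minimal sketch with hypothetical counts (not the authors' data):

```python
# Hypergeometric pathway-enrichment test for one KEGG pathway:
# is the pathway over-represented among the differential metabolites?
from scipy.stats import hypergeom

M = 800   # all annotated metabolites (background)
n = 40    # metabolites annotated to this pathway
N = 120   # differential metabolites
k = 15    # differential metabolites falling in this pathway

# P(X >= k): survival function evaluated at k - 1
p_value = hypergeom.sf(k - 1, M, n, N)
rich_factor = k / n  # the "Rich Factor" on the x axis of Figure 6
print(f"Rich Factor = {rich_factor:.2f}, p = {p_value:.2e}")
```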
The inhibition of the TLR/NF-κB signaling pathway and the reduced intestinal pro-inflammatory IFN-γ, TNF-α, and IL-8 levels upon yeast β-glucan treatment in our study lend support to this notion. Therefore, 16S rRNA gene sequence analysis emphasized the prebiotic-like effect of yeast β-glucan in hens, which could improve their gut health. The gut microbiome influences intestinal function through its metabolites. In this study, UPLC-MS/MS-based non-targeted metabolomics analysis revealed that yeast β-glucan supplementation significantly downregulated the levels of differential metabolites related to the PTS, quorum sensing, biosynthesis of unsaturated fatty acids, steroid degradation, and aminobenzoate degradation pathways, compared with the control (Murakami et al., 1993; Liu et al., 2013; Zhao et al., 2019). Differential metabolites related to the PTS pathway (N-acetyl-d-glucosamine (GlcNAc) and N-acetyl-d-galactosamine) have been implicated in regulating the virulence of certain pathogens and in anti-inflammatory effects. Several metabolites engaged in the biosynthesis of unsaturated fatty acids pathway (13,16-docosadienoic acid, erucic acid, eicosadienoic acid, and 5Z,8Z,11Z,14Z,17Z-eicosapentaenoic acid) contribute antioxidant, antiviral, and anti-inflammatory activities (Weldon et al., 2007; Chen et al., 2020; Liang et al., 2020). Immunosuppressive effects were manifested by the metabolites pertaining to steroid degradation (androstenedione, 5-androstene-3,17-dione, testosterone, 5-α-androstane-3,17-dione, dehydroepiandrosterone, and 3,4-dihydroxy-9,10-secoandrosta-1,3,5(10)-triene-9,17-dione) (Muriel et al., 2017). Bacterial pheromones involved in the quorum-sensing pathway (N-hexanoyl-l-homoserine lactone, N-heptanoyl-homoserine lactone, and N-octanoyl-l-homoserine lactone) affect flagellar adhesion, biofilm formation, virulence factor gene transcription, and bacterial toxin production, thereby directly promoting infection by pathogens (Li et al., 2018). Differential metabolites mapped to aminobenzoate degradation (anthranilate, 4-aminobenzoate, (S)-4-hydroxymandelate, vanillate, and 3-hydroxy-5-oxohexanoate) have been reported to interfere with bacterial biofilm formation while exhibiting immunosuppressive properties for inflammation-related diseases without inducing cell death (Li et al., 2017). Thus, the reduced differential metabolites and related metabolic pathways in the ileal contents of hens following yeast β-glucan administration indicate that yeast β-glucan inhibits or regulates the growth and virulence factor production of potential intestinal pathogens and prevents intestinal pathogen infection, thereby suppressing the intestinal inflammatory responses of older breeder hens and maintaining gut immune homeostasis. The decline in the intestinal content of homoserine lactone pheromones, which are involved in the virulence factor gene transcription of potential intestinal pathogens, together with the increase in the relative abundance of Lactobacillus after feeding with yeast β-glucan, further supports this interpretation. The downregulation of the vitamin B6 metabolism pathway following yeast β-glucan treatment in the present study was also consistent with the PICRUSt prediction from this study. Vitamin B6 is known to participate in virtually all amino acid metabolism, mainly in the form of pyridoxal 5-phosphate, as a coenzyme in transamination reactions (Wilson et al., 2019).
This observation may imply that feeding yeast β-glucan could impede amino acid utilization by the ileal microbiota or reduce amino acid deamination in the hindgut by suppressing the vitamin B6 metabolism pathway of the hindgut microbiota. On the other hand, our research highlighted downregulation of the levels of differential metabolites involved in the cutin, suberin, and wax biosynthesis pathway after feeding with yeast β-glucan, while highlighting upregulation of ascorbate and aldarate metabolism, C5-branched dibasic acid metabolism, pentose and glucuronate interconversions, glyoxylate and dicarboxylate metabolism, steroid biosynthesis, porphyrin and chlorophyll metabolism, carotenoid biosynthesis, ubiquinone and other terpenoid-quinone biosyntheses, sesquiterpenoid and triterpenoid biosyntheses, and lysine degradation in the ileal contents of hens in the yeast β-glucan supplementation group compared with the non-supplemented group. Differential metabolites related to the cutin, suberin, and wax biosynthesis pathway (cis-9,10-epoxystearic acid and 18-hydroxyoleate) are responsible for cellular lipid metabolism disorder, increasing the intracellular lipid content and inhibiting fatty acid oxidation in peroxisomes and mitochondria. Antioxidative, antiaging, anti-inflammatory, and immunomodulatory effects, as well as improved lipid metabolism, were attributed to the biomarkers engaged in ubiquinone and other terpenoid-quinone biosynthesis pathways (α-tocotrienol, δ-tocopherol, 2-methyl-6-phytylquinol, γ-tocopherol, β-tocopherol, and 2,3-dimethyl-5-phytylquinol) (Jiang, 2014; Galli et al., 2017). Differential metabolites participating in ascorbate and aldarate metabolism and in pentose and glucuronate interconversions (ascorbate, d-glucuronolactone, and 5-dehydro-4-deoxy-d-glucuronate) manifest various functions, such as antioxidant, immunoregulatory, and/or liver-detoxification activities (Zhu et al., 2019; Gouda et al., 2020). The antioxidative and anti-inflammatory potential of the differential metabolites mapped to carotenoid biosynthesis (zeaxanthin, adonixanthin, and lutein) has encouraged their use as feed additives for the coloring of poultry meat and eggs (Rao and Rao, 2007; Maoka et al., 2013; Moraes et al., 2016; Milani et al., 2017). The association of the biomarkers mapped to sesquiterpenoid and triterpenoid biosynthesis (farnesol and nerolidol) with antioxidative and anti-inflammatory properties has been documented (Jahangir et al., 2006; Khan and Sultana, 2011). Similarly, in vivo and in vitro studies have reported that yeast β-glucan displays antioxidative and anti-inflammatory activities in mice (Lei et al., 2015; Cao et al., 2016, 2018; Charlet et al., 2018; Gudi et al., 2019, 2020). Moreover, supplementation with tocopherol, lutein, or ascorbate was found to increase the fertility and hatchability of breeding eggs in poultry (Elibol et al., 2001; Panda et al., 2008; Zhu et al., 2019). Furthermore, our results also revealed enrichment of the porphyrin and chlorophyll metabolism pathways in the ileal contents of hens receiving yeast β-glucan. The main eggshell pigment of brown-shelled eggs has been identified as protoporphyrin, a differential metabolite involved in the porphyrin and chlorophyll metabolism pathways.
Hence, the increase in these differential metabolites after yeast β-glucan addition indicated that yeast β-glucan might possess antioxidative, anti-inflammatory, antiaging, and liver-detoxification functions, along with modulating lipid metabolism and the pigment formation of brown-shelled eggs through modification of the above metabolic pathways, resulting in improved egg color, enhanced fertile egg hatchability of aged hens, and decreased intestinal immune-inflammatory responses after feeding yeast β-glucan. Interestingly, a significant increase in the concentrations of calcidiol, 7-dehydrocholesterol, vitamin D3, and cholesterol, all involved in the steroid biosynthesis pathway, was noted in the ileal contents of the yeast β-glucan group compared with the control group. Calcidiol and vitamin D3 augment the absorption of calcium and phosphorus in breeder hens, thereby promoting bone growth and improving eggshell quality and fertile egg hatchability (Wen et al., 2019; Adhikari et al., 2020). Strikingly, vitamin D3 and its derivatives have recently been reported not only to display immunomodulatory and anti-inflammatory properties but also to stimulate the growth of beneficial gut microbiota and reinforce intestinal barrier function, thereby enhancing host disease resistance (Sassi et al., 2018; Fakhoury et al., 2020). Thus, the overrepresented differential metabolites mapped to the steroid biosynthesis pathway after yeast β-glucan administration provide compelling evidence for yeast β-glucan-mediated suppression of intestinal inflammation and improved egg quality and reproductive performance of hens in the later laying period. Collectively, the metabolomics analysis suggested that, beyond its immunomodulatory effects in hens, yeast β-glucan might possess additional functions, including antioxidative, liver-detoxification, and lipid-metabolism-modulating activities mediated through altered gut microbial metabolic pathways. Future research will need to confirm these observations and to investigate the underlying mechanism of yeast β-glucan in improving gut health in more depth by employing different experimental models. Furthermore, the correlations between the gut microbiota, the gut metabolome, and intestinal mucosal immune responses also need to be further explored.

CONCLUSION

The present study is the first to report the prebiotic-like properties of dietary yeast β-glucan provided to aged hens, acting through altered gut microbiome and metabolite profiles of the ileal contents. Furthermore, dietary yeast β-glucan supplementation could repress ileal chronic inflammation in breeder hens in the later laying period. Overall, this study sheds light on a promising strategy for the prevention of age-related immune hypofunction and chronic intestinal inflammation in aged hens with the help of dietary-supplement-based immunomodulators.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih.gov/, PRJNA752702; https://db.cngb.org/, CNP0002101.

ETHICS STATEMENT

The animal study was reviewed and approved by the China Agricultural University Animal Care and Use Committee.

AUTHOR CONTRIBUTIONS

ZW and WZ designed the research. WZ performed the research and wrote the manuscript. WZ and YL analyzed the data. ZW, YS, YM, YW, FG, WA, and YG participated in the revision of the manuscript.
All authors contributed to the data interpretation and approved the final version of the manuscript.

FUNDING

This research was funded by Zhuhai TianXiangYuan Biotech Holding Co., Ltd. The funders had no role in the study design, analysis, or writing of this article.

ACKNOWLEDGMENTS

We would like to thank Editage (www.editage.cn) for English language editing. The authors are grateful to the staff at the Department of Animal Science and Technology of China Agricultural University for their assistance in conducting the experiment.
8,072.4
2021-12-17T00:00:00.000
[ "Biology" ]
Indefinite Graphene Nanocavities with Ultra-Compressed Mode Volumes

Explorations of indefinite nanocavities have attracted surging interest in the past few years, as such cavities enable light confinement to exceptionally small dimensions, relying on the hyperbolic dispersion of their constituent medium. Here, we propose and study indefinite graphene nanocavities, which support ultra-compressed mode volumes with confinement factors up to 10⁹. Moreover, the nanocavities we propose manifest anomalous scaling laws of resonances and can be effectively excited from the far field. The indefinite graphene cavities, based on low-dimensional materials, present a novel route to squeeze light down to the nanoscale, rendering a more versatile platform for investigations into ultra-strong light–matter interactions at mid-infrared to terahertz spectral ranges.

Introduction

Optical nanocavities provide an indispensable platform to study various sorts of light-matter interactions, including light emission [1][2][3], optical nonlinearity [4][5][6], optomechanics [7,8], and quantum effects [9,10]. Surface plasmonic modes can be employed to shrink optical cavities down to the subwavelength scale; in particular, graphene plasmons offer simultaneously extraordinary confinement and flexible tunability, covering the spectral regimes from the mid-infrared to terahertz (THz) ranges [11,12]. When a graphene sheet is placed close to a metal surface, it supports a special type of highly confined and low-loss electromagnetic mode called acoustic graphene plasmons [13,14], based on which an acoustic graphene plasmon nanocavity with ultra-compressed mode volumes can be achieved [15,16]. Resonant metastructures supporting ultrasharp states with small mode volumes, such as toroidal resonances, surface lattice resonances, and bound states in the continuum, have also been discussed in previous work [17,18]. Recently, hyperbolic metamaterials (HMMs) and metasurfaces [19][20][21][22][23] have attracted much research interest. Hyperbolic media have principal elements of the permittivity or permeability tensor with different signs, which can be exploited for applications concerning the enhancement of the optical density of states, heat-transfer engineering, nonlinear effects, optical forces, superlensing [24][25][26][27][28][29][30], optical topological transitions [31][32][33], etc. [34]. The open and extended hyperbolic dispersion curves or surfaces [35] accommodate propagating waves with huge wave vectors, which are essential for optical cavity miniaturization [36][37][38]. To be specific, hyperbolic dispersions have been achieved in nanowire arrays [39,40] and layered metal-dielectric structures [41,42], based on which indefinite optical cavities have been obtained from the visible to near-infrared spectral regions. Besides metallic structures, HMMs based on 2D materials such as graphene [43][44][45] in the mid-infrared and terahertz ranges have also been proposed. Nevertheless, the possibility of constructing indefinite nanocavities relying on graphene-based HMMs and their contrasting optical properties has not been sufficiently explored. In this article, we demonstrate a novel type of graphene indefinite nanocavity consisting of alternating graphene and silicon layers. The hyperbolic dispersion of such a graphene-silicon HMM allows for propagating waves with large wave vectors and high effective indices. This leads to ultra-confined modes with volume confinement factors up to 10⁹.
Moreover, such modes show anomalous scaling rules compared to conventional optical cavities. The indefinite graphene nanocavity can be efficiently excited by far-field illumination over a broadband range, and its capability to confine light into tiny dimensions can play a significant role in infrared spectroscopy [46], biosensing [47,48], and other applications [49,50], over the spectral regime from the mid-infrared to THz.

Theoretical Model

The scheme for an indefinite graphene nanocavity is presented in Figure 1a, which consists of layered graphene-silicon HMMs. Figure 1b represents the 2D cavity array. In both scenarios, the thickness of a single dielectric silicon layer is d = 9 nm and its relative permittivity is set to ε_d = 11.56. The graphene can be effectively treated as a surface current sheet characterized by its surface electric conductivity σ = σ_inter + σ_intra, where σ_inter and σ_intra denote contributions from the inter-band and intra-band transitions of electrons, respectively [51]:

\sigma_{\mathrm{intra}} = \frac{2ie^2 k_B T}{\pi\hbar^2(\omega + i\tau^{-1})}\,\ln\!\left[2\cosh\!\left(\frac{E_F}{2k_B T}\right)\right], \qquad (1)

\sigma_{\mathrm{inter}} = \frac{e^2}{4\hbar}\left[\frac{1}{2} + \frac{1}{\pi}\arctan\!\left(\frac{\hbar\omega - 2E_F}{2k_B T}\right) - \frac{i}{2\pi}\ln\frac{(\hbar\omega + 2E_F)^2}{(\hbar\omega - 2E_F)^2 + (2k_B T)^2}\right], \qquad (2)

where T is the temperature, k_B is the Boltzmann constant, e is the elementary electric charge, and ħ is the reduced Planck constant. In this research, T = 300 K, τ = μE_F/(e v_F²), v_F = c/300 is the Fermi velocity, the carrier mobility of graphene is μ = 10⁴ cm²/(V·s), and the chemical potential of the doped graphene is E_F = 0.64 eV. Detailed simulation methods are described in Appendix A.

Dispersion and Resonance Conditions

The iso-frequency contour of the HMM is calculated from the following relation [52] at f_0 = 30 THz for the transverse-magnetic polarized mode, which belongs to type II HMM [35]:

\cos(k_z d) = \cosh(\gamma d) + \frac{\beta\gamma}{2}\sinh(\gamma d), \qquad (3)

where β = −iZ_0σ/(ε_d k_0), Z_0 = \sqrt{\mu_0/\varepsilon_0} is the vacuum impedance, and γ = \sqrt{k_x^2 − ε_d k_0^2}. This HMM is effectively isotropic in the transverse x-y plane [53]: ε_x = ε_y = ε_d + iσZ_0/(k_0 d) and ε_z = ε_d, indicating that the layered graphene-dielectric structure is effectively uniaxially anisotropic. The effective longitudinal permittivity along z (ε_z) equals the permittivity of silicon and remains positive, while the transverse effective permittivity can be negative due to the material dispersion of the monolayer graphene. A conventional optical cavity is limited by the closed iso-frequency contours of its constituent medium. However, much larger wave vectors are allowed in this HMM, which sustain the ultra-compressed mode volumes of the indefinite optical cavity. The cavity modes of the graphene indefinite cavity and the iso-frequency contour of the HMM are both shown in Figure 2. Here, the red stars represent the resonant wave vectors of the graphene indefinite cavity (see Figure 3A-F) calculated through the Fabry-Perot resonance condition

k_i l_i + \delta\varphi_i = m_i \pi, \quad i = x, y, z, \qquad (4)

where δφ_i is the boundary phase shift, k_i is the wave vector of the mode, the integer m_i represents the mode order, and l_x, l_y, and l_z are the lengths of a single cavity along the x-, y-, and z-directions, respectively. We assume that l_y is infinite for 2D indefinite cavities (see Figure 1b) and l_x = l_y for 3D indefinite cavities, and we confine our discussion to the special scenario of k_x = k_y.

Figure 2. Iso-frequency contour of the graphene-silicon HMM in the x-z plane at 30 THz. Hyperbolic curves (green lines) represent allowed propagating modes inside the multilayered metamaterial (calculated from Equation (3)), and the red stars denote resonant cavity modes. The black circle of radius k_0 around the origin represents a cross section of the light cone in air (LCA).
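As a self-contained numerical check (our own sketch, not the authors' COMSOL model), the conductivity of Equations (1)-(2) and the effective transverse permittivity ε_x = ε_d + iσZ_0/(k_0 d) can be evaluated directly; with the parameters above, Re(ε_x) < 0 while ε_z = ε_d > 0 at 30 THz, confirming the type II hyperbolic regime.

```python
# Evaluate the graphene conductivity (Equations (1)-(2)) and the
# effective transverse permittivity eps_x = eps_d + i*sigma*Z0/(k0*d)
# at f0 = 30 THz, using the parameters given in the text.
import numpy as np

e, hbar, kB, c = 1.602e-19, 1.055e-34, 1.381e-23, 3e8
Z0, eps_d, d = 376.73, 11.56, 9e-9
T, EF = 300.0, 0.64 * 1.602e-19        # temperature, chemical potential (J)
vF = c / 300                           # Fermi velocity
mu = 1e4 * 1e-4                        # 1e4 cm^2/(V s) -> m^2/(V s)
tau = mu * EF / (e * vF**2)            # scattering time from mobility

def sigma_graphene(omega):
    """Local (Kubo) conductivity: intraband (Drude) + interband terms."""
    intra = 2j * e**2 * kB * T / (np.pi * hbar**2 * (omega + 1j / tau)) \
        * np.log(2 * np.cosh(EF / (2 * kB * T)))
    inter = e**2 / (4 * hbar) * (
        0.5 + np.arctan((hbar * omega - 2 * EF) / (2 * kB * T)) / np.pi
        - 1j / (2 * np.pi) * np.log((hbar * omega + 2 * EF)**2
            / ((hbar * omega - 2 * EF)**2 + (2 * kB * T)**2)))
    return intra + inter

omega = 2 * np.pi * 30e12              # f0 = 30 THz
k0 = omega / c
eps_x = eps_d + 1j * sigma_graphene(omega) * Z0 / (k0 * d)
print(eps_x)   # Re(eps_x) < 0 while eps_z = eps_d > 0: type II hyperbolic
```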
Anomalous Scaling Rules of Graphene Indefinite Nanocavities

The spatial distributions of the electric field for a 2D indefinite cavity (see Figure 1b) are presented in Figure 3. It is clear that identical optical modes can be supported in these six cavities of different sizes (l_x, l_z) at the same resonant frequency and the same mode order (m_x, m_z) = (1, 1), and the confinement ability is comparable to that of acoustic graphene plasmon modes [15,54]. Indefinite graphene cavities of different sizes can resonate at a fixed frequency as long as the required resonant wave vectors are located on the same iso-frequency contour. This means that the resonant wave vector can move along the iso-frequency curve as the size of the cavity scales down, which is not possible for conventional optical cavities. For example, the refractive indices (n_x, n_z) = (k_x/k_0, k_z/k_0) for graphene indefinite cavities with the size combinations (62, 54), (44, 36), and (26, 18) nm are (80.6, 92.5), (113.6, 138.8), and (192.2, 277.6), respectively, all located on the same iso-frequency curve. We can further shrink the dielectric thickness, making the effective mode index along z reach almost 300. However, quantum effects should then be taken into consideration if the distance between neighbouring graphene sheets enters the sub-nanometer regime. Figure 4A-F shows the electric field distributions of the cavity modes with a fixed size of (63, 54) nm for modes of different orders (2, m_z). In sharp contrast to conventional cavities, the higher-order modes resonate at lower frequencies, as is manifest in Figure 4. This is because the transverse and longitudinal components of the effective permittivity tensor (ε_x < 0 and ε_z > 0) of this bulk HMM have opposite signs. When the mode order m_z increases, the larger resonant k_z corresponds to a lower resonant frequency according to the characteristic hyperbolic dispersion relation [44]

\frac{k_x^2}{\varepsilon_z} + \frac{k_z^2}{\varepsilon_x} = k_0^2. \qquad (5)
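This anomalous ordering of resonances can be illustrated with a simplified numerical sketch. Two simplifications here are ours, not the paper's: the conductivity is reduced to its lossless Drude (intraband) part, and the boundary phase shifts δφ_i in Equation (4) are neglected, so that k_i = m_iπ/l_i. Under these assumptions, solving Equation (5) for the (2, m_z) modes of the (63, 54) nm cavity indeed yields resonant frequencies that decrease with m_z.

```python
# Sketch of the anomalous scaling (simplifications: lossless Drude
# conductivity only; boundary phase shifts neglected, k_i = m_i*pi/l_i).
# For the fixed (63, 54) nm cavity, higher m_z resonates at LOWER f,
# because eps_x < 0 in k_x^2/eps_z + k_z^2/eps_x = k_0^2 (Equation (5)).
import numpy as np
from scipy.optimize import brentq

e, hbar, c = 1.602e-19, 1.055e-34, 3e8
Z0, eps_d, d = 376.73, 11.56, 9e-9
EF = 0.64 * e                          # chemical potential, J
lx, lz = 63e-9, 54e-9                  # cavity size used in Figure 4

# Lossless Drude limit: eps_x(w) = eps_d - B / w^2
B = e**2 * EF * Z0 * c / (np.pi * hbar**2 * d)
eps_x = lambda w: eps_d - B / w**2
w_c = np.sqrt(B / eps_d)               # frequency where eps_x crosses zero

def dispersion(w, mx, mz):
    """Equation (5) with k_x, k_z fixed by the cavity size and mode order."""
    kx, kz = mx * np.pi / lx, mz * np.pi / lz
    return kx**2 / eps_d + kz**2 / eps_x(w) - (w / c)**2

for mz in (1, 2, 3):
    w = brentq(dispersion, 2 * np.pi * 5e12, 0.999 * w_c, args=(2, mz))
    print(f"(2, {mz}) mode: f = {w / (2 * np.pi * 1e12):.0f} THz")
```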
Far-Field Excitation of Indefinite Graphene Nanocavities

As a next step, we perform simulations of a 2D nanocavity array (see Figure 1b) with a 50% filling ratio for different cavity sizes (i.e., the period of the cavity array is p = 2l_x), illuminated by a far-field free-space plane wave. Figure 5 shows the transmission spectra for (1, 1) modes of cavities of different sizes, all resonant at the same frequency of 37 THz. As is also evident from Figure 5, for any cavity of fixed size, the lower-order (1, 0) mode exhibits a higher resonant frequency than the higher-order (1, 1) mode, confirming the anomalous scaling rules discussed in the previous section.

Ultra-Compressed Mode Volumes of Graphene Indefinite Nanocavities

As a last step, we study the mode volume of graphene indefinite cavities by means of quasi-normal mode theory [15]. Figure 6 shows the obtained normalized mode volume of a 3D graphene indefinite cavity (red symbols and line), defined as V_ca/λ_0³, where V_ca = l_x l_y l_z. For each vertical size l_z, we calculate the vertical wave vector k_z from the z-direction Fabry-Perot resonance condition, Equation (4); the tangential wave vector k_x and the tangential size l_x of the small cavity are then obtained from the hyperbolic iso-frequency contour shown in Figure 2 and the x-direction cavity resonance condition. Because the effective permittivity of a multilayer system is isotropic and unchanged along all tangential directions, we can assume k_y = k_x and l_x = l_y for a 3D cavity to roughly characterize the mode volume. A bowtie photonic crystal cavity with a record deep-subwavelength mode confinement factor of 10⁻⁵-10⁻⁴ has been experimentally demonstrated [55], and similar metal-insulator-metal (MIM) indefinite cavities have also been shown to achieve strong field confinement in the near-infrared range [41]. Although the resonant wavelength of the graphene indefinite cavity (approximately 10 µm here, or longer) is much larger than that of the MIM indefinite cavity (approximately 2 µm; see Ref. [41]), the obtained normalized mode volume of our graphene indefinite cavities can reach 10⁻⁹ (that is, a mode-volume confinement factor of 10⁻⁹), which is approximately two orders of magnitude smaller than that of the indefinite MIM cavities. The normalized mode volumes achieved here are comparable to those of the recently reported acoustic graphene plasmon cavities [15]. The extraordinary confinement we have achieved mainly relies on the open hyperbolic dispersion curves, which can be employed to further squeeze light down to atomic scales [54].

Figure 6. Normalized mode volume as a function of the vertical length l_z of a single indefinite nanocavity: the graphene indefinite cavity modes of Figure 3 (red; the red line is the fitting curve) and the MIM indefinite cavity resonating in the near-infrared range (black, calculated from reference [41]).

Conclusions

In conclusion, we propose and demonstrate graphene indefinite nanocavities with ultra-compressed mode volumes and extraordinary optical confinement in the mid-infrared and THz spectral regimes. The normalized mode volume can reach approximately 10⁻⁹, two orders of magnitude smaller than that of the widely studied MIM indefinite cavities. Our indefinite graphene nanocavities can be efficiently excited from the far field and manifest anomalous scaling laws of the resonances, and they can serve as a promising playground to study extreme light-matter interactions and to explore tunable high-performance metadevices with desired functionalities.

Data Availability Statement: Relevant data are included in the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Numerical Simulations: The proposed system is calculated with numerical simulations (the commercially available finite element software COMSOL Multiphysics). A surface current density boundary condition J = σE is employed in the software to describe the electric properties of graphene. To explore the spatial distributions of the electric field for a 2D indefinite cavity of different cavity sizes at a chosen resonant frequency, the Mode Analysis study node in the COMSOL Multiphysics model builder is used. Before numerical simulation, the tangential and vertical dimensions of the small cavity are roughly estimated from the iso-frequency contour of the HMM and the Fabry-Perot resonance condition; the cavity dimensions (l_x, l_z) are then set in the simulation model. Meanwhile, the Eigenfrequency study node of COMSOL Multiphysics is used to calculate the resonant frequencies of modes of different orders at a fixed cavity size. As for the spectral response, the nanocavity array is simulated under far-field free-space plane-wave illumination at normal incidence using Port excitation in COMSOL.
The electric field of the incident wave is polarized along the x-direction. Periodic boundary conditions are applied along the x-direction, and perfectly matched layer conditions are applied along the z-direction.
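Finally, as a rough geometric cross-check (not the quasi-normal-mode calculation used above), the normalized mode volume V_ca/λ_0³ can be estimated directly from the (1, 1) cavity sizes quoted in the main text, assuming l_y = l_x; the values fall to the 10⁻⁸ scale for the smallest size listed and approach 10⁻⁹ as the cavity shrinks further, consistent with the trend reported in Figure 6.

```python
# Geometric estimate of the normalized mode volume V_ca / lambda_0^3
# for the (1, 1) modes at f_0 = 30 THz; l_y = l_x is assumed, and the
# cavity sizes are the (l_x, l_z) combinations quoted in the text.
c = 3e8
f0 = 30e12
lam0 = c / f0                          # free-space wavelength, ~10 um

for lx_nm, lz_nm in [(62, 54), (44, 36), (26, 18)]:
    lx, lz = lx_nm * 1e-9, lz_nm * 1e-9
    V_ca = lx * lx * lz                # V_ca = l_x * l_y * l_z with l_y = l_x
    print(f"({lx_nm}, {lz_nm}) nm: V_ca/lambda0^3 = {V_ca / lam0**3:.1e}")
```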
3,077.2
2022-11-01T00:00:00.000
[ "Physics", "Materials Science" ]