Distribution and nests of paper wasps of Polistes (Polistella) in northeastern Vietnam, with description of a new species (Hymenoptera, Vespidae, Polistinae)
Abstract Seven species of the subgenus Polistella Ashmead of the genus Polistes Latreille, including a new species, P. brunetus Nguyen & Kojima, sp. n., described here, are recognized to occur in northeastern Vietnam, the easternmost part of the eastern slope of the Himalayas. A key to these species is provided, and their distribution records are discussed. Nests of P. delhiensis Das & Gupta, P. mandarinus de Saussure and P. brunetus are also described.
Introduction
Of the four subgenera in the cosmopolitan paper wasp genus Polistes, Polistella, with some 85 extant species, is the largest in terms of the number of species among the three subgenera endemic to the Old World (Gyrostoma Kirby & Spence, Polistella Ashmead, and Polistes Latreille). The subgenus Polistella is known to show a high species diversity in the northern part of Indochina, the area on the eastern slope of the Himalayas. This is especially the case, together with strong endemism, for the Polistella species that are characterized by a basally strongly swollen second metasomal sternum. These species may form a monophyletic group and show the distribution pattern of so-called "Himalayan Corridor origin", namely, they occur in the zone from the southern slopes of the Himalayas, through the eastern slope of the Himalayas and the eastern coastal areas of continental Asia and Taiwan, to Ussuri and eastern Siberia in Russia and Hokkaido in Japan. Located in the easternmost part of the eastern slope of the Himalayas, the Polistella fauna in the northern parts of Vietnam would be a key to understanding the process that formed the current distribution pattern of these Polistella wasps.
While the Polistella fauna of northwestern Vietnam has been more or less well studied, that of northeastern Vietnam has remained little known. The present study recognizes seven species of Polistes (Polistella), including a new species described herein, as occurring in northeastern Vietnam. Their distribution records are discussed. Nests of three species (P. delhiensis Das & Gupta, P. mandarinus de Saussure and P. brunetus Nguyen & Kojima, sp. n.) are also described.
Materials and methods
Based on its geographical and climatic features, "northeastern Vietnam" is used in the present paper for the area consisting of the following provinces: Ha Giang, Cao Bang, Tuyen Quang, Bac Kan, Thai Nguyen, Lang Son, Bac Giang and Quang Ninh (Fig. 1). The specimens examined in the present study are, unless otherwise mentioned, deposited in the Institute of Ecology and Biological Resources in Hanoi; they were mainly collected by ourselves during a research trip to Cao Bang, Bac Kan and Bac Giang made in 2012.
The adult morphological and color characters, except for the male terminal sterna and genitalia, were observed on pinned-and-dried specimens under a stereomicroscope. Apical parts of the male metasomata were dissected for the terminal sterna and genitalia; these were put in lactic acid for several hours, washed in distilled water, and observed in glycerin under a stereomicroscope. The terminology of the male genitalia follows Kojima (1999). Drawings were made with the aid of a drawing tube. Photos were taken with a Panasonic Lumix DMC-FX 100 camera and a Leica EZ4HD 3.0-megapixel digital stereomicroscope, using the LAS microscopy software (LAS EZ 2.0.0).
In the descriptions of morphology, the following abbreviations are used: POD, distance between the inner margins of the posterior ocelli; OOD, distance between the outer margin of the posterior ocellus and the inner margin of the eye at vertex; Od, transverse diameter of the posterior ocellus.
The parts measured for the morphometrics are defined as follows: body length, the lengths of the head, mesosoma and first two metasomal segments combined; clypeus width, the distance between the uppermost points where the clypeus touches the eyes; clypeus height, the distance from the bottom of the dorsal emargination to the apex; distance between inner eye margins at vertex and at clypeus, respectively the distance between the inner eye margins at the level of the anterior ocellus in frontal view of the head and at the level where the inner eye margins approach each other most closely; interantennal and antennocular distances, respectively the distance between the inner margins of the antennal sockets and the distance between the outer margin of the antennal socket and the inner eye margin at the level of the middle of the antennal socket; antennal socket width, its transverse diameter; eye and gena width, the maximum width of each in strictly lateral view of the head; metasomal tergum I length, the distance in lateral view from the posterior end of the basal slit for the reception of the propodeal suspensory ligament to the posterodorsal end of the tergum; metasomal tergum II length, the distance in lateral view from the bottom of the basal depression or "neck" to the posterodorsal end of the tergum; metasomal terga I and II width, the maximum width of each in dorsal view.

Remarks on distribution records. The distribution records of P. mandarinus reported so far may need confirmation, as several species have been erroneously identified as "P. mandarinus" (see Kojima 1997). In Vietnam, this species has been known from the provinces of Quang Tri (Nguyen and Ta 2008), Phu Tho, Vinh Phuc, Thua Thien Hue (Nguyen et al. 2011), Cao Bang, Bac Kan and Hai Phong (present study); it may occur in the areas north of the Hai Van Pass, but its occurrence in northwestern Vietnam requires further research. The species has also been recorded from eastern China and Tibet (Hou et al. 2012) and Korea (Carpenter 1996); however, its occurrence in Korea may need confirmation (J.K. Kim & J. Kojima, unpublished data).

Remarks on distribution. This species could be placed in the "Stenopolistes" group and has been recorded from Delhi in India and northern Vietnam [Son La (Nguyen and Pham 2011), Ha Giang, Bac Kan, Phu Tho, Hoa Binh, Vinh Phuc (present study)]. The other two species of the "Stenopolistes" group occurring in Vietnam, P. nigritarsis Cameron and P. khasianus Cameron, similarly have such disjunct distribution records, which are probably due to the lack of intensive fieldwork in the areas on the southern slope and the western part of the eastern slope of the Himalayas.

(Nguyen and Khuat 2003), Phu Tho, Hai Phong (Nguyen et al. 2005), Quang Binh, Quang Tri, Thua Thien Hue, Quang Nam (Nguyen and Ta 2008), Son La (Nguyen and Pham 2011), Bac Kan, Lang Son, Bac Giang, Ninh Binh, Thanh Hoa, Nghe An, Ha Tinh (present study): these records show that the species is widely distributed in Vietnam except for the southern provinces. This species could occur widely in the eastern parts of subtropical and temperate Asia, from Vietnam, through the eastern parts of continental China, to Korea and Honshu Island of Japan; its closely related species, P. formosanus Sonan, may co-occur with it in Taiwan, and only P. formosanus is known to occur in the Nansei Islands (Saito et al. 2007).

Remarks on distribution. The following three subspecies are currently recognized in P. strigosus: the nominotypical subspecies, known to occur in Laos, China and Taiwan; minimus Bequaert, 1940, distributed in Nepal, Malaysia (Sabah) and the Philippines; and atratus Das and Gupta, 1984, in India. The color form from Vietnam agrees with none of the above-mentioned subspecies: it has the head reddish brown, the mesosoma dark yellowish brown with the metanotum and propodeum dark yellow, metasomal terga I-III dark yellow, and the other metasomal terga brownish black (in some specimens, all metasomal terga dark yellow). This species is widely recorded, from the provinces of Hai Phong (Nguyen et al. 2005 …).

Diagnosis. This species can be distinguished from the other Polistes (Polistella) species by the following combination of characters: pronotum with dense and coarse punctures, their edges forming reticulation; metasomal sternum II in lateral view swollen ventrally in anterior half; sternum IV with two long parallel longitudinal ridges medially (this character also occurs in P. japonicus); proximal margin of the penis valve of the male genitalia in lateral view produced proximoventrally into a small tooth.
Head in frontal view about 1.1 times as wide as high (Fig. 2); in dorsal view weakly swollen laterally behind eyes, then narrowed posteriorly, with posterior margin shallowly and broadly emarginate. Vertex slightly raised in area among ocelli, slightly sloped down behind posterior ocelli towards occipital carina; POD:OOD = about 1:1.7; POD about 1.2 times Od (Fig. 3). Gena in lateral view about 0.8 times as wide as eye (Fig. 4); occipital carina fine, evanescent in ventral one-third of gena. Inner eye margins weakly convergent ventrally, in frontal view about 1.1 times further apart from each other at clypeus than at vertex (Fig. 2). Antennal sockets closer to inner eye margins than to each other; anterior tentorial pit slightly further from antennal socket than from inner eye margin; interantennal space weakly raised. Clypeus in frontal view as wide as high, produced ventrally into a blunt angle; in lateral view weakly swollen anteriorly (Fig. 3); length of lateral margin of clypeus lying along inner eye margin longer than diameter of antennal socket and about as long as malar space. Antenna (Fig. 5): scape more than 3 times as long as its maximum width; flagellomere I about 3 times as long as its maximum width, about 1.2 times as long as flagellomeres II and III combined; flagellomeres II and III each longer than wide; terminal flagellomere bullet-shaped, about 1.4 times as long as its basal width.
Pronotal carina sharply raised, produced dorsally into a thin lamella in dorsal part, slightly sinuate backward on lateral side, reaching ventral corner of pronotum. Mesoscutum weakly convex, about 0.9 times as long as wide between tegulae; anterior margin broadly rounded. Scutellum convex, slightly concave medially. Metanotum weakly convex, disc nearly flat but strongly depressed along anterior margin. Propodeum short; posterior face widely (about half the maximum width of propodeum) and shallowly excavated medially, more or less smoothly passing into lateral faces; propodeal orifice elongate, about 1.8 times as long as wide (measured at widest part), somewhat narrowed in dorsal half. Wings hyaline; jugal lobe of hind wing rounded (Fig. 7).
Metasomal tergum I short and thick, about 0.8 times as long as its apical width, in lateral view abruptly swollen dorsally just behind basal slit for reception of propodeal suspensory ligament; corner between anterior and dorsal faces bluntly angled (Fig. 8). Sternum II in lateral view swollen ventrally in a smoothly curved line in anterior half, then with ventral margin nearly straight and parallel to ventral margin of the tergum. Clypeus with scattered large punctures, each bearing a sharply pointed golden bristle; tomentum on clypeus medially restricted to dorsal one-fourth of clypeus, laterally extending ventrally. Mandible with several small and shallow punctures at base and deep punctures at anterior margin. Frons covered with deep punctures. Vertex and gena with sparse, small and shallow punctures; area around ocelli smooth; ventral one-third of gena with coarse punctures. Pronotum with dense, coarse punctures, their edges forming reticulation (Fig. 6). Mesoscutum densely covered with coarse, flat-bottomed punctures; punctures on scutellum and metanotum dense, coarser but smaller than those on mesoscutum. Mesepisternum with dense, coarse, well-defined punctures in posterodorsal part (punctures at dorsal margin similar to those on pronotum), with scattered punctures in anteroventral part; border between posterodorsal and anteroventral parts indistinct. Dorsal metapleuron with striae and shallow large punctures; ventral metapleuron with sparse strong punctures. Propodeum with strong transverse striae; lateral face with sparse, ill-defined punctures. Metasomal segments covered with minute punctures in addition to scattered small punctures (stronger and larger on sterna), except sternum IV with two long, medial, parallel longitudinal ridges along the sternum and several shorter ridges on each side of the long ones, each ending in a large shallow puncture (Fig. 9), the area between the paired longitudinal ridges smooth; sterna II-IV each with a tuft of long hairs at apical margin; sterna V and VI entirely covered with long hairs.
Dark brown; following parts yellow to orange-yellow: clypeus except apical black margin, mandible except a black spot at base and apical margin, and narrow band along inner eye margin extending from bottom of frons to middle of eye emargination; following parts black: area around ocelli, apical margin and a longitudinal line along lateral faces and at the middle of propodeum, spot on valvula, mid and hind coxae and trochanters beneath.
Male. Body length about 13.5-15.5 mm; fore wing length about 15.5-16.5 mm. Like female, but differing from the latter as follows: head about 1.2 times as wide as high in frontal view (Fig. 10); eye strongly swollen laterally; inner eye margins about as far apart from each other at vertex as at clypeus; gena in lateral view about half as wide as eye (Fig. 11), with a weakly raised blunt ridge running along posterior margin of eye; clypeus in frontal view about as wide as high (Fig. 10), only slightly produced ventrally, evenly and slightly convex apically, in lateral view weakly convex in dorsal part (Fig. 11). Antenna (Fig. 12) slenderer than in female; scape short, about 2.8 times as long as its maximum width; flagellomere I longer than flagellomeres II and III combined; flagellomeres II and III each longer than wide; terminal flagellomere elongate, slightly curved, about 2.5 times as long as its basal width. Metasomal sternum VII depressed medially (Fig. 13), without a tubercle (Fig. 14).
Body surface sculpture as in female, but clypeus without large punctures and densely covered with long golden hairs and with a faint longitudinal ridge medially.
Genitalia in general as in Polistes species, with the following specific characters: digitus in inner aspect of paramere (Fig. 15) about 3.2 times as long as wide (measured at widest part), distinctly swollen near base, gradually narrowed apically to mid-length, then slightly swollen towards the rounded apex; aedeagal penis valves (Figs 16-17) slightly longer than aedeagal basal apodeme, in ventral view narrowest near mid-length, weakly swollen proximally from mid-length, then strongly swollen and distinctly produced laterally near proximal margins, in lateral view slightly thickened in proximal one-fourth and with dorsal margin weakly and smoothly sinuate, with proximoventral corner produced into an obtuse angle (Fig. 17); ventral margins of penis valves finely serrated along entire length.
Color and marking pattern similar to female, but more extensively marked with yellow as follows: clypeus except a broad longitudinal median band, narrow long band on gena along posterodorsal margin of eye, antennal flagellomeres beneath, narrow band along pronotal lamella, narrow band on basal metanotum, paired longitudinal lines on lateral face of propodeum, valvulae, and narrow bands at apical margins of terga I-III and sterna II and III; more extensively marked with black as follows: two longitudinal bands at lateral margins of mesoscutum, a wide band on basal margins of terga I and II (sometimes on tergum III), and a spot at upper corner of mesepisternum (close to dorsal part of metapleuron); propodeum and legs more extensively black.
Etymology. The specific name, brunetus, is a Latin adjective, referring to the brown body color.
Distribution. Known only from localities in northern Vietnam listed above.
Polistes (Polistella) mandarinus de Saussure, 1853
Hou et al. (2012) described the nest of this species based on nests observed in Tibet, which are light ferruginous brown (judging from the figures). A nest (#VN-NE2012-P-02) (Fig. 18) that we collected, together with 3 females and 11 males, at Phi Oac NR, Cao Bang Province, has features similar to those described by Hou et al. (2012), although it differs in coloration. Our nest has 19 cells and had produced more than ten adult wasps. Its structural and color characters are as follows. Comb "paper"-like in texture, made mainly of long fine plant fibers and adult oral secretion, more or less uniformly dark greyish-brown in cell walls, suboval (about 30 mm × 20 mm) in view from the side of the cell openings, expanded eccentrically from the single terminal petiole, with the surface corresponding to the cell bottoms weakly convex. Petiole single, terminal, attached to the border between the bottoms of the first two cells, 2.5 mm long and 1.2 mm × 1.5 mm thick at mid-length, with a thin central core of plant fibers, coated with adult oral secretion, blackish brown and lustrous; secretion coat widely expanded on the comb back around the petiole and onto the substrate in a thin film holding the fern vein. Cells generally arranged in regular rows, pentagonal at the open end when surrounded by other cells, with free margins rounded; each cell weakly expanded towards the open end, 5.3 mm × 5.6 mm (range 5.0 mm × 5.4 mm to 5.8 mm × 5.9 mm; n=17) wide at the open end, 3.4 mm (range 3.1-3.8 mm; n=11) wide at the bottom and 19 mm (range 15-22.5 mm; n=13) deep in cells containing pupae or having produced adults; cell wall about 1.12 mm thick. Cocoon cap white, produced beyond the rim of the cell by 0.5-4.5 mm, slightly domed.
Polistes (Polistella) brunetus Nguyen & Kojima, sp. n.
A pre-emergence stage (before the emergence of any adult wasps) nest (#VN-NE2012-P-02) (Fig. 19) was collected, together with 4 adult females, at Kim Hy NP, Bac Kan Province. The nest was attached to a rattan shoot at about 2.5 m above the ground and has 26 cells, with fifth (= last) instar larvae as the oldest immatures (for immature composition, see Fig. 19). The fifth instar larvae were artificially fed with fresh hornet (Vespa) eggs, and one of them successfully spun its cocoon. The structural and color characters are as follows. Comb "paper"-like in texture, made mainly of long fine plant fibers, usually with 2-3 mm wide horizontal stripes of different colors (pale gray to gray and pale brownish-gray) in cell walls, subcircular (about 30 mm × 25 mm) in view from the side of the cell openings, expanded concentrically from the single petiole, with the surface corresponding to the cell bottoms weakly convex. Petiole single, central, attached to the border between the bottoms of the first two cells, 3.8 mm long and 1.
Key to species of Polistes (Polistella) of northeastern Vietnam
The characters given in the key are applicable to both sexes unless otherwise specified.
1 Metasomal sternum II basally strongly swollen, in lateral view bulging anteriorly (Fig. 21) …
- Metasomal sternum II gradually swollen posteriorly, in lateral view with ventral margin weakly and smoothly curved (Fig. 8) …
2 Clypeus in lateral view only weakly convex anteriorly (Fig. 22). Disc of scutellum nearly flat, in lateral view smoothly continuing the dorsal margin of mesoscutum (Fig. 24) …
- Clypeus in lateral view distinctly convex (Fig. 23). Disc of scutellum convex (Fig. 25). Pronotal striation regular and very strong (Fig. 27). Border between dorsal and lateral surfaces of pronotum distinctly angled. Male metasomal sternum VII with a weak tubercle (Fig. 28) …
3 Pronotum with dense, coarse punctures, their edges forming reticulation (Fig. 6). Disc of scutellum convex. Metasomal sternum II in lateral view convex ventrally in anterior half. Anterior margin of male clypeus nearly straight (Fig. 10) …

The Polistella fauna of northeastern Vietnam, with only seven species, is poorer, even though the environmental, and especially climatic, conditions in northeastern Vietnam are expected to be more diverse and hence to harbor a richer fauna, in terms of the number of species, than those in the mountainous areas of northern Vietnam. On the other hand, in contrast to the fact that all 14 Polistella species occurring in the mountainous areas of northern Vietnam may belong to a possibly monophyletic species group that is characterized by a basally strongly swollen second metasomal sternum and shows the distribution pattern of so-called "Himalayan Corridor origin", the seven species recognized in northeastern Vietnam comprise at least three species groups and are thus phylogenetically more diverse than those of the mountainous areas of northern Vietnam. Namely, other than P. dawnae and P. mandarinus in the species group characterized by a basally strongly swollen second metasomal sternum, P. delhiensis belongs to the so-called "Stenopolistes" group, whose species are distributed in tropical and subtropical continental Asia, the so-called Sunda Land (Malay Peninsula, Sumatra and Borneo) and also in the Papuan Region, including the Pacific Islands. The last group, comprising the four remaining species recognized in northeastern Vietnam (P. japonicus, P. sagittarius, P. strigosus and P. brunetus), is the P. sagittarius group of Carpenter (1996), which is rather ill-defined and known to be widely distributed in the Oriental Region and East Asia.
"Biology",
"Environmental Science"
] |
Associations of SAA1 gene polymorphism with lipid levels and osteoporosis in Chinese women
Background The development of osteoporosis is associated with several risk factors, such as genetic polymorphisms and environmental factors. This study assessed the correlation between the SAA1 gene rs12218 polymorphism and HDL-C levels and osteoporosis in a population of Chinese women. Methods A total of 387 postmenopausal female patients who were diagnosed with osteoporosis (case group) based on bone mineral density measurements via dual-energy x-ray absorptiometry and 307 females without osteoporosis (control group) were included in this study. Correlations between the SAA1 gene rs12218 polymorphism and osteoporosis and HDL-C level were investigated by identifying rs12218 genotypes using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). Results The TT genotype of rs12218 was more frequent in osteoporosis patients than in control subjects (P<0.001), and rs12218 was found to be associated with plasma TG, HDL-C, LDL-C, and BMD levels in osteoporosis patients (P<0.05). Conclusions The present results indicate that both osteoporosis and lipid levels are associated with the TT genotype of rs12218 in the human SAA1 gene.
Background
Osteoporosis is a major public health problem with growing prevalence. Obesity and hyperlipidemia have been demonstrated to be closely related to osteoporosis [1][2][3]. Osteoporosis is especially prevalent in the elderly population, and it is a significant public health issue that reduces patient functioning and quality of life. Moreover, both osteoporosis and obesity have a high genetic predisposition, and the genetic correlation between them has been established across different ethnic groups [1,4]. Serum amyloid A (SAA) is an apolipoprotein that is primarily synthesized in the liver and by activated monocytes and macrophages [5]. As an apolipoprotein, SAA is associated with HDL-C and during inflammation can contribute up to 80% of its apoprotein composition [6]. Many studies have demonstrated that sustained high expression of SAA may contribute to atherogenesis [7,8], and that an elevated concentration of SAA is associated with an increased risk of CVD [9]. Several studies have indicated that rs12218 in the SAA1 gene is associated with carotid atherosclerosis [10] and peripheral arterial disease [11]. However, the relationships between SAA gene polymorphisms and osteoporosis remain unclear.
In the present study, we aimed to study the relationship between the SAA1 gene polymorphism rs12218 and HDL-C level and osteoporosis. Table 1 shows the clinical characteristics of the study participants; the following values were significantly different between the 2 groups: systolic blood pressure and age. There was no significant difference between the 2 groups in the following variables: DBP, body mass index (BMI), and plasma concentrations of total cholesterol (TC), TG, HDL-C and LDL-C. Table 2 shows the distribution of the genotypes and alleles of rs12218. The genotype distribution of rs12218 did not show a significant difference from the Hardy-Weinberg equilibrium values (data not shown). For the total participants, the genotype and allele distributions of rs12218 differed significantly between the osteoporosis patients and the control participants (both P<0.001). The TT genotype and T allele were more common in the osteoporosis patients than in the control participants. Logistic regression was performed with and without lipid disorders and other confounders; the TT genotype of rs12218 still differed significantly between these two groups (P<0.001, OR=7.610, 95% CI: 3.484-16.620, Table 3). Table 4 shows the relationship between rs12218 and TG, TC, HDL-C, LDL-C, and BMD levels. In the osteoporosis group, we found that rs12218 was not only significantly associated with plasma TC, HDL-C, and LDL-C levels (P=0.021, P=0.009, and P=0.009, respectively), but also associated with BMD (P<0.05). However, this association was not found in the control group, and we did not find that the TG level was significantly associated with rs12218 in either group. In addition, we found that rs12218 was associated with plasma SAA levels not only in the case group but also in the control group (Figure 1).
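As a hedged illustration of this analysis, the snippet below sketches the genotype chi-square test and the TT-versus-non-TT odds ratio. The counts in `table` are hypothetical placeholders (the paper's actual contingency data are in Tables 2-3), and scipy's chi-square test stands in for the covariate-adjusted logistic regression actually used.

```python
# Sketch of the genotype-association computations described above.
# The counts below are hypothetical, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: osteoporosis cases, controls; columns: CC, CT, TT genotype counts.
table = np.array([[120, 180, 87],
                  [150, 130, 27]])

chi2, p, dof, _ = chi2_contingency(table)
print(f"genotype chi-square = {chi2:.2f}, df = {dof}, P = {p:.4g}")

# Odds ratio for TT vs. non-TT carriers, with a Woolf 95% confidence interval.
tt_case, tt_ctrl = table[0, 2], table[1, 2]
non_case, non_ctrl = table[0, :2].sum(), table[1, :2].sum()
odds_ratio = (tt_case * non_ctrl) / (tt_ctrl * non_case)
se_log_or = np.sqrt(1/tt_case + 1/tt_ctrl + 1/non_case + 1/non_ctrl)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR(TT) = {odds_ratio:.2f}, 95% CI: {ci[0]:.2f}-{ci[1]:.2f}")
```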
Results and discussion
In the present study, we found that variation in the SAA1 gene is associated with both osteoporosis and TC, HDL, and LDL levels in osteoporosis in a Chinese population. This is the first study to observe a relationship between the SAA1 gene and osteoporosis. Osteoporosis is characterized by low bone mass, an increase in bone fragility, deterioration in bone microarchitecture, and an increase in the risk of fracture [12]. Some metabolic changes, such as those that occur due to a lack of estrogen, immobilization, metabolic acidosis, hyperparathyroidism, and systemic and local inflammatory diseases, affect the osteoclast count and the activity associated with bone turnover [13]. Prostaglandins, insulin-like growth factors (IGFs), interleukins (IL-1, IL-6, and IL-11), tumor necrosis factor (TNF), and several local factors in bone, such as transforming growth factor (TGF), also contribute to the regulation of bone formation and resorption [13,14].
The SAA1 gene was considered as a candidate for osteoporosis because it encodes an important inflammatory factor, SAA, which is synthesized by the liver. A relationship between SAA1 gene polymorphisms and cardiovascular diseases has been reported previously [11,12,15]. Previous studies have investigated the SAA1 rs12218 polymorphism in the Chinese population, but its relationship with osteoporosis had not been investigated; this relationship was examined in our study for the first time. In the present study, the TT genotype of rs12218 significantly differed between osteoporosis patients and control participants, indicating that the risk of osteoporosis is increased in participants carrying the T allele of rs12218. Logistic regression analysis adjusted for several confounders showed that the TT genotype distribution of rs12218 significantly differed between the osteoporosis patients and the control participants.
In addition, we found that rs12218 was significantly associated with plasma TC levels in the osteoporosis patients; this finding is in line with the reports of Xie et al. In the early 1970s, SAA was identified as the plasma protein responsible for forming the tissue deposits called "amyloid (AA-type)" seen in diseases with underlying persistent acute inflammation [16,17]. Soon after its discovery, SAA was shown to be an acute-phase protein produced by the liver within hours of tissue injury regardless of cause; its plasma concentration can increase 1000-fold within 24 h [18,19]. In plasma, SAA is associated with HDL [20,21] and, during severe inflammation, can contribute 80% of its apoprotein composition [22]. The displaced apoA-I is rapidly cleared by the liver and kidneys [23], together with a sharp decline in apoA-I gene expression during inflammation [24].
Conclusions
In conclusion, the SAA1 gene polymorphism was associated with osteoporosis in a Chinese population. This association may be related to the lipid disorders resulting from SAA1 gene polymorphisms.
Subjects
Postmenopausal females who were admitted to the Department of Endocrinology, First Affiliated Hospital of Chongqing Medical University were informed of the study, and patients who opted for inclusion in the study were evaluated. Patients diagnosed with parathyroid, thyroid, liver, or rheumatological diseases that affect bone metabolism; patients with a history of malignancy or surgically induced menopause; and patients who used drugs affecting bone metabolism (e.g. corticosteroids, anticonvulsants, and heparin) during the clinical and laboratory assessments were excluded from the study. Erythrocyte sedimentation rate, complete blood count, serum alkaline phosphatase, calcium, phosphorus, serum glutamic oxaloacetic transaminase, serum glutamic pyruvic transaminase, gamma-glutamyl transpeptidase, blood urea nitrogen, creatinine, glucose, uric acid, albumin, total protein, urine calcium/creatinine, thyroid-stimulating hormone, parathyroid hormone, cortisol and vitamin D levels were measured prior to the study. A total of 694 patients satisfied the study criteria and were included in the study. The age, height, weight, and body mass index (BMI) of the participants were recorded. All participants underwent dual-energy x-ray absorptiometry (DEXA) evaluations; 387 postmenopausal females were diagnosed with osteoporosis based on this assessment (osteoporosis group), and 307 patients without osteoporosis were included in the control group. All participants provided informed consent in compliance with the study protocol, which was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University.
Bone mineral density
The participants underwent DEXA scanning using a Hologic QDR 4500 W system (Hologic, Inc., Bedford, USA) to assess bone mineral density (BMD); the lumbar spine (vertebrae L1-L4) and hip (femoral neck) were evaluated. Patients with a BMD of 2.5 SD or more below the young adult mean (T-score ≤ -2.5) were diagnosed with osteoporosis, as recommended by the World Health Organization (WHO).
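A minimal sketch of the WHO rule cited above, assuming the BMD has already been expressed as a T-score; the osteopenia band is included for completeness, although this study only distinguished osteoporosis from non-osteoporosis.

```python
def classify_bmd(t_score: float) -> str:
    """Classify a DEXA T-score using the WHO cut-offs."""
    if t_score <= -2.5:
        return "osteoporosis"   # 2.5 SD or more below the young adult mean
    if t_score < -1.0:
        return "osteopenia"     # low bone mass; not a case group in this study
    return "normal"

print(classify_bmd(-2.7))  # -> osteoporosis
```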
Genotyping
Genomic DNA was extracted from peripheral blood leukocytes using a DNA extraction kit (Beijing Bioteke Company Limited, Beijing, China). Genotyping was performed by polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) analysis. The primers for rs12218 were designed according to Xie's protocol [10,11] as follows: sense, 5'-AACAGGGAGAATGGGAGGGTGGG-3'; antisense, 5'-GCAGGTCGGAAGTGATTGGGGTC-3'. The PCR mixture was subjected to 35 cycles of 60 s at 94°C, 30 s at 54°C, and 40 s at 72°C following an initial denaturation for 3 min at 94°C. The PCR products were digested with the Bgl I restriction enzyme according to the manufacturer's instructions. To verify the results, we used sequenced genomic DNAs as positive controls in our assays.
Statistical analysis
The data were evaluated using SPSS Version 17 software (IBM Corp., Armonk, New York, USA). The continuous variables were not normally distributed based on the Shapiro-Wilk test for normality, so the Mann-Whitney U test was used for comparisons between the two groups, with medians (quartiles) provided as descriptive statistics. The Pearson chi-square test was conducted for categorical variables, with N and % values provided. P<0.05 was considered statistically significant.
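The pipeline above can also be reproduced with scipy; the sketch below runs it on simulated data rather than the study's measurements, so the numbers it prints are illustrative only.

```python
# Shapiro-Wilk normality check, Mann-Whitney U comparison, and
# median (quartile) descriptives, as described in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cases = rng.lognormal(mean=0.2, sigma=0.4, size=387)    # e.g. a lipid variable
controls = rng.lognormal(mean=0.0, sigma=0.4, size=307)

print("Shapiro-Wilk P:", stats.shapiro(cases).pvalue)   # small P -> non-normal

u, p = stats.mannwhitneyu(cases, controls, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, P = {p:.4g}")

q1, med, q3 = np.percentile(cases, [25, 50, 75])
print(f"cases: median {med:.2f} (quartiles {q1:.2f}-{q3:.2f})")
```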
"Biology",
"Medicine"
] |
Experimental ancilla-assisted qubit transmission against correlated noise using quantum parity checking
We report the experimental demonstration of a transmission scheme which has the ability to transmit any state of a photonic qubit over unstabilized optical fibres, regardless of whether the qubit is known, unknown, or entangled to other systems. A high fidelity to the noiseless quantum channel was achieved by adding an ancilla photon after the signal photon within the correlation time of the fibre noise and by performing a measurement which computes the parity. Simplicity, the maintenance-free nature and robustness against path-length mismatches among the nodes make our scheme suitable for multi-user quantum communication networks.
Introduction
Quantum communication networks with many participants will provide various communication and computation tasks based on the nature of quantum physics, such as quantum key distribution (QKD) [1]-[3], quantum teleportation [4], quantum repeaters [5], measurement-based quantum computing [6], and others [7]-[9]. Such a system inevitably involves the manipulation of multipartite entanglement, and requires faithful node-to-node transmission of an information carrier that is already entangled to other systems. Widespread use of such networks also demands plug-and-play connectivity, which avoids the need for complicated stabilization and calibration tasks among distantly located users. Recent studies on practical quantum communication systems have mainly focused on QKD. Among the most promising implementations for QKD are the plug-and-play schemes [10]-[17] and those utilizing double Mach-Zehnder interferometers (MZIs) [18]-[20], both of which share the common feature of robustness against correlated noise during the transmission of quantum states in optical fibres. In the plug-and-play systems based on the auto-compensation of birefringence effects during a round trip of light pulses [10,11], the encoding of the transmitted states is done by choosing a manipulation on the incoming pulse, which implies that the transmitted state must be known to the sender. The plug-and-play feature can also be achieved by utilizing multi-photon entangled states in decoherence-free subspaces (DFS) [12]-[17]. In this case, the transmission of a photonic qubit in an unknown state requires an encoding process into multi-photon entangled states, which is difficult using present technology. On the other hand, double MZI systems can be used for arbitrary quantum states, whether known or unknown; however, the need for subwavelength optical-delay adjustments in the MZIs at each node is a disadvantage, especially when the number of participants in the network increases. The existing schemes therefore lack either the plug-and-play feature or the ability to transmit a qubit that is in an unknown state or is entangled to other systems. Achieving both features at the same time is not only of practical importance but also of fundamental interest, since it amounts to a faithful transmission of quantum states between parties who do not share a reference frame [21].
In this paper, we experimentally demonstrate such a faithful transmission scheme fulfilling both of the above requirements, for single-photon polarization states through optical fibres. This is achieved by adding an ancilla photon of a fixed polarization after the signal photon within the correlation time of the phase fluctuations in the fibre and by quantum parity checking. After the transmission of these photons through unstabilized optical fibres, the channel fidelity to the noiseless quantum channel is 0.958. Evidently, a transmission channel which is very close to a noiseless one is achieved without the stabilization of optical components.
Ancilla-assisted qubit transmission scheme
We first introduce the idea [22] of the scheme and then describe the experimental demonstration using two photons generated by spontaneous parametric down-conversion (SPDC), linear optical elements and photon detectors. Suppose that Alice is given a signal photon in an unknown state α|H⟩ + β|V⟩, where |H⟩ and |V⟩ represent the horizontal (H) and vertical (V) polarization states, respectively, and |α|² + |β|² = 1. Alice uses another photon as a reference, which she prepares in a fixed state |D⟩ ≡ (|H⟩ + |V⟩)/√2. She sends the signal photon in a time-bin following that of the reference photon with a temporal delay t_A. The two-photon state can be written as

|D⟩_0 (α|H⟩_{t_A} + β|V⟩_{t_A}),   (1)

where the subscripts represent the temporal delay from the front time-bin. As shown in figure 1, the photons in the H- and V-polarization states are transmitted through the channels C_H and C_V, respectively. While ordinary single-mode fibres can be used for these quantum channels at the cost of a decreased success probability [22], here we use polarization-maintaining optical fibres (PMF) for the simplicity of the experiments. In this case, polarization rotations of the photons in each channel do not occur, but unknown phase shifts φ_H and φ_V are added to the photons in each channel independently, owing to fluctuations of the optical path lengths. We assume the interval t_A between the signal and reference photons is much shorter than the correlation time of the fluctuations, so that the phase shifts are considered to be correlated, i.e., common to the two photons travelling through the same channel. At Bob's location, the photons in both modes, C_H and C_V, are mixed together, and the received state becomes the state (2). Here the optical path lengths of C_V and C_H may differ, which is indicated by the temporal delay τ in the subscripts of the V-polarization states. We can easily see that the state α|V⟩_τ|H⟩_{t_A} + β|H⟩|V⟩_{t_A+τ} is invariant under the phase shifts. It is worth mentioning that in the previous DFS schemes [12]-[17], Alice prepares entangled states in the DFS. In our scheme, Alice's two photons (equation (1)) are not correlated, let alone entangled; it is Bob who sifts out the entangled states in the DFS. Bob can, in principle, project the state (2) onto the state α|V⟩_τ|H⟩_{t_A} + β|H⟩|V⟩_{t_A+τ}, which happens with a probability of 1/2, and decode the projected state into the faithful signal α|H⟩ + β|V⟩. In our experiment, this extraction of the faithful signal state from the received state is performed by passive linear optical elements and postselection using the photon detectors.

(Figure 2 caption.) All detectors D_X and D_Y are silicon avalanche photodiodes, and they are placed after single-mode optical fibres. The histogram shows the number of delayed coincidence events at various delays between the detectors D_X and D_Y, recorded by a time-to-amplitude converter (TAC). The central peak shows the events where the signal and reference photons have passed S and L, respectively. We accept the events in a 2.5 ns time window around the central peak as the successful ones. Note that the two peaks separated from the central peak by t_A correspond to the case where both photons pass through S or L, and the remaining two peaks correspond to the case where the signal passes L and the reference passes S.
Experiment
The schematics of the experimental set-up are shown in figure 2. Two photons in distinct modes are generated by SPDC from type-I phase-matched, 2 mm thick β-barium borate (BBO) crystals. One photon in |H⟩ passes through the long path and is transformed into an arbitrary signal polarization state α|H⟩_{t_A} + β|V⟩_{t_A} by rotating the polarization with a half-wave plate HWP_S and adding a phase shift with a liquid crystal retarder LCR_S. The other photon in |H⟩ passes through the short path and is transformed into the fixed reference polarization state by HWP_R. These photons are mixed by a non-polarizing beamsplitter BS_A. Here we can prepare the two photons in the state (1) with probability 1/4 when two photons are generated by SPDC. The temporal delay t_A between the signal and the reference photon is about 3 ns. The photons are split into the H- and V-polarization modes by a polarizing beam splitter PBS_A, which transmits H-polarized photons and reflects V-polarized photons. These photons are then transmitted to Bob through 10 m of PMF_H and PMF_V.
At Bob's location, these photons are mixed by PBS_B again. If the optical path lengths of PMF_H and PMF_V were precisely adjusted with high stability, the received state would be the same as the state prepared by Alice. However, we did not perform any such stabilization in the following experiments, which took several hours of data accumulation, during which the phase shifts φ_H and φ_V fluctuated randomly.
The extraction of the signal state from the received two-photon state can be passively performed in the following way. The received two photons are first split into a long path (L) and a short path (S) by BS_B, then mixed by PBS_P again. HWP_L rotates the polarization of the photons in the long path by 90°. Using HWP_X, PBS_X, and a photon detector D_X, the polarization of the photon in mode X is projected onto the diagonal state |D⟩. The difference between the lengths of L and S corresponds to a temporal delay t_B, which is adjusted by the mirrors (M) on a motorized stage. The successful events are postselected by discriminating the time delay between the arrivals of photons at detectors D_X and D_Y, using the time-resolving coincidence detection shown in figure 2. HWP_Y and the quarter-wave plate QWP_Y in front of D_Y are used for the analysis of the successfully extracted signal states.
Here we only consider the successful case where the signal photon passes through S and the reference photon passes through L. This happens with probability 1/4 when the two photons arrive at BS_B. In this case the state just before PBS_P can be written as the state (3), where the superscripts represent the spatial modes. Here φ_H and φ_V include the phase shifts added in Bob's interferometer. If one photon is emitted into each of the modes X and Y, the output state just after PBS_P is the state (4). This operation is referred to as quantum parity checking [23], which is also useful for other quantum information tasks [24,25]. Let us consider the case where t_A = t_B. When the detector D_X finds one photon, the state in mode X is projected onto |D⟩^X_{t_A}. At that time, the state in mode Y is projected onto the state α|H⟩^Y_{t_A+τ} + β|V⟩^Y_{t_A+τ}, implying that we faithfully obtain the signal state in mode Y. It is worth mentioning here that the delay τ only affects the arrival time but not the fidelity of the output states, as long as the correlation time of the fluctuations of the phase shifts added in Bob's interferometer is much longer than τ.
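As a consistency check on this argument, the toy calculation below reproduces the parity-check extraction in the polarization degree of freedom alone: keeping only the even-parity terms (one photon per output mode of PBS_P) and projecting the ancilla onto |D⟩ returns the signal state with unit fidelity. It is an idealized sketch (lossless, single pair, time-bin labels ignored), not a model of the full interferometer.

```python
# Toy numpy check of the quantum parity-check extraction.
import numpy as np

H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
D = (H + V) / np.sqrt(2)

alpha, beta = 0.6, 0.8j                       # arbitrary normalized amplitudes
signal = alpha * H + beta * V
psi = np.kron(signal, D)                      # signal (x) reference input

# Postselect even parity: keep |HH><HH| + |VV><VV|.
P_even = np.kron(np.outer(H, H), np.outer(H, H)) \
       + np.kron(np.outer(V, V), np.outer(V, V))
post = P_even @ psi

# Project the ancilla (second factor) onto |D>, leaving the signal qubit.
out = post.reshape(2, 2) @ D
out = out / np.linalg.norm(out)

print(f"fidelity = {abs(np.vdot(signal, out))**2:.6f}")  # -> 1.000000
```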
We first show that the above scheme can extract a faithful signal state in mode Y by properly adjusting the optical delay t_B, when the signal state is |D⟩_{t_A}. As shown in figure 3, by varying the optical delay by moving M, we can clearly see the interference effects. The upper and lower curves show the coincidence rates on the bases |D⟩_Y and |D̄⟩_Y ≡ (|H⟩_Y − |V⟩_Y)/√2, respectively. The observed visibility at zero delay is 0.959 ± 0.013, a clear signature that coherence is preserved during quantum state transmission. The small deviation from 100% visibility is due to residual mode mismatch as well as multi-photon-pair generation during the SPDC. The full width at half maximum (FWHM) of the interference fringe, which corresponds to the coherence length l_c of the photons, is found to be ∼75 µm. This is roughly 100 times larger than the wavelength of the photons, implying the robustness of the scheme against path-length mismatches and fluctuations up to the order of many wavelengths. The requirement for the precision of alignment and stability will be further relaxed if we choose photons with a longer l_c. In order to characterize the performance of our transmission scheme precisely, we analysed the output states via tomographic reconstruction of the density matrices for various signal states. Real and imaginary components of the density matrices of the output states were reconstructed for the input signal states |H⟩_{t_A}, |V⟩_{t_A}, |D⟩_{t_A}, and |L⟩_{t_A} ≡ (|H⟩_{t_A} + i|V⟩_{t_A})/√2, and the results are shown in figure 4. The fidelities of these reconstructed density matrices to those of the initial signal states are calculated as 0.991 ± 0.031, 0.985 ± 0.030, 0.999 ± 0.030, and 0.985 ± 0.030, respectively, which clearly shows that the output states are very close to the input signal states.
Since the above experimental results are enough to completely characterize the quantum operation E effectively applied in our system, we can calculate various quantities for the demonstrated operation E. In order to characterize quantitatively how close the operation E is to the noiseless quantum channel, we calculate the average fidelity F̄(E) [26], which is defined as the average of the fidelities F_i = ⟨ψ_i|E(ψ_i)|ψ_i⟩ over all input states |ψ_i⟩. It has been shown that F̄(E) is connected to the entanglement fidelity F_e(E) ≡ ⟨φ|(I_R ⊗ E)(φ)|φ⟩ by the simple formula F̄(E) = (2F_e(E) + 1)/3 for qubit channels [27,28], where |φ⟩ represents a Bell state; E acts on one member of the Bell state |φ⟩ and I_R acts on the other. Instead of measuring F_e(E) by preparing the Bell state |φ⟩, we can estimate F_e(E) from the above reconstructed density matrices as 0.958 ± 0.033, and F̄(E) is calculated to be 0.972 ± 0.022. This clearly shows that our qubit transmission scheme provides a high fidelity to a noiseless quantum channel.
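As a quick arithmetic check of the quoted relation:

$$\bar{F}(E) = \frac{2F_e(E) + 1}{3} = \frac{2 \times 0.958 + 1}{3} = \frac{2.916}{3} \approx 0.972,$$

which agrees with the reported average fidelity of 0.972 ± 0.022.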
The proof-of-principle experiment demonstrated here uses passive linear optical elements, and thus the probability of success is rather smaller than in the ideal case. However, the success probability will increase by a factor of 16 if BS_A and BS_B are replaced by fast optical switches, and by a further factor of 2 by using feed-forward decoding techniques. Excluding the fibre losses and the detection loss, these improvements will enable a success probability of 1/2. In the present experiment, we used PMFs as a channel, but our scheme also allows the use of standard single-mode fibres [22]. It is worth noting that photon losses in optical fibres may affect the efficiency of this two-photon quantum communication scheme more than that of single-photon transmission, but the efficiency will be greatly improved by the use of quantum repeaters [5].
Conclusion
This work demonstrates that the two-photon interference effect together with quantum parity checking can be used for faithful transmission of qubits in arbitrary unknown quantum states with the help of ancillas without active control and stabilization mechanisms. Simplicity, versatility, and the maintenance-free nature of the scheme will be important for future quantum communication networks.
"Physics"
] |
Constructions of statistical estimates of digital measurement error in the case of small samples
The paper presents a computational-experimental method for constructing statistical estimates of the error of digital measurements carried out on complex technical objects with metrological support. An algorithm is described for constructing an estimate of the distribution density function of the random measurement error on the basis of the combined application of the theory of characteristic functions, the theory of Lie operator series, and statistical modeling methods when processing a limited amount of statistical information under small-sample conditions. The solution of the practical problem of constructing an estimate of the measurement error distribution density, and of the problem of the zero mark of the measuring instrument in a probabilistic formulation, is given. The presented computational-experimental method can be used to construct statistical estimates for probability distribution densities of a general type, including those given by implicit functions, characteristic functions and operator series.
Introduction
At the current stage of economic development there is a need to manage projects for the creation of high-tech construction objects, construction materials and products [1][2][3][4][5][6][7][8][9][10][11], as well as projects for the creation of automated production facilities equipped with modern high-precision digital measuring instruments and control and measuring devices. At the same time, the share of digital and intelligent measurements is steadily increasing. Therefore, the development of methods for the processing and interpretation of digital measurement results on the basis of statistical processing of sample data, including under small-sample conditions, seems to be an urgent task.
In modern metrological practice [12][13][14][15][16], various distribution laws, including the uniform distribution, are used to describe random measurement errors. The following errors have a uniform distribution: errors of observation results rounded to the nearest count with an inaccuracy of a whole (or fractional) division of the scale; errors of approximate calculations with rounding to the nearest significant digit; adjustment errors within the permissible limits; backlash errors; and variations in the readings of measuring instruments.
Currently, there is a need to develop specific methods and a corresponding analytical apparatus, the main goal of which should be to ensure the most efficient processing and interpretation of a limited amount of statistical information [17][18][19][20][21][22][23], including under small-sample conditions [19][20][21][22][23].
The object of the study is complex technical systems with metrological support. The subject of the study is methods of statistical processing of measurement information. The aim of the study is to develop a computational-experimental method for constructing statistical estimates of the error of digital measurements under small-sample conditions. To achieve the goal of the research, the methods of the theory of characteristic functions, Lie series (a kind of operator series) and statistical modeling are used. The novelty of the results presented in the article consists in the combined application of these methods to the problems of metrological support of complex technical systems. The practical and theoretical significance consists in solving the problem of the zero mark of the measuring instrument (the slippage/absence of slippage of the grouping center) in a probabilistic formulation.
Methods
We will describe the main provisions of the Method of Selecting the Type of Distribution Law (MSTDL), the Method of Characteristic Functions (MCF), the Method of Operator Series (MOS) and the Method of Statistical Modeling (MSM), which form the basis for the computational-experimental method of statistical processing of small samples proposed in this article.
MSTDL
Let there be N measurements (a sample). It is necessary to find an estimate F*(x) of the distribution function F(x). This is a simplified formulation of the problem, since no requirements are imposed on the properties of the estimate. The problem will be considered solved if it is possible to find an estimate f*(x) of the distribution density f(x). Figure 1 shows a region of the (β₁, β₂) plane [20], where β₁ is the square of the asymmetry (skewness) coefficient and β₂ is the excess (kurtosis) coefficient, defined as follows:

β₁ = μ₃² / μ₂³,   β₂ = μ₄ / μ₂²,

where μ_k is the central moment of order k of the random variable, k = 2, 3, 4. The region under consideration is divided into subregions, each of which corresponds to a certain class of distributions. When choosing a model in this way, it is suggested to use the significant differences between classes of distributions in terms of asymmetry and peakedness.

For example, for a normal distribution β₁ = 0 and β₂ = 3, while for an exponential distribution β₁ = 4 and β₂ = 9; each of these distributions is therefore displayed on the plane as a single point. Other distributions, for example the Student and log-normal distributions, correspond to curves in figure 1, and still others correspond to whole subregions.

To select a model using this approach, it is necessary to calculate estimates of the coefficients β₁ and β₂ and then to find the point corresponding to the obtained estimates in figure 1. It should be noted that for large samples the calculation of these estimates presents no difficulty, but for small samples certain difficulties may arise [20].
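A minimal sketch of this selection step, assuming the moment-based definitions of β₁ and β₂ given above; the function name is illustrative.

```python
# Locate a sample on the Pearson (beta1, beta2) plane from its central moments.
import numpy as np

def pearson_coordinates(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    mu2, mu3, mu4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return mu3**2 / mu2**3, mu4 / mu2**2      # beta1, beta2

rng = np.random.default_rng(1)
print(pearson_coordinates(rng.normal(size=10_000)))       # ~ (0, 3)
print(pearson_coordinates(rng.exponential(size=10_000)))  # ~ (4, 9)
```

For small samples these moment estimates are strongly biased and noisy, which is exactly the difficulty noted above.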
MCF
Let us first consider the most commonly used statistic, the sample mean x̄. Let the numbers x₁, x₂, …, xₙ form a sample from a uniform distribution with density

f(x) = 1/(2r) for |x − a| ≤ r, and f(x) = 0 otherwise,   (1)

where a and r are constants. It is required to determine the characteristic function and the distribution law of the sample mean x̄. The characteristic function of the uniform distribution (1) and that of the sample mean have the form [19]

φ(t) = e^{iat} sin(rt)/(rt),   φ_x̄(t) = e^{iat} [sin(rt/n)/(rt/n)]ⁿ.   (2)

Using (2) and the inversion formula for the Fourier transform [19], we represent the distribution density of the sample mean as

f_x̄(x) = (1/2π) ∫ e^{−itx} φ_x̄(t) dt.   (3)

Note that as the sample size n → ∞, the statistic x̄ is asymptotically normal. When the sample size n is small, it is not possible to represent (3) as an analytical function f(x). It should be noted that characteristic functions give a simple and powerful method of finding the limiting distribution functions of mean values and sample sums, but in real small-sample cases, except for some special situations, the calculation of an integral of type (3) is difficult.
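Although (3) has no convenient closed form for small n, the integral can be evaluated numerically. The sketch below, written under the uniform model (1) with arbitrary parameter values, cross-checks the inversion of (2)-(3) against a direct simulation.

```python
# Numerical Fourier inversion of the sample-mean characteristic function.
import numpy as np

a, r, n = 0.0, 1.0, 5                          # parent parameters, sample size

t = np.linspace(-200.0, 200.0, 400_001)
dt = t[1] - t[0]
# phi_mean(t) = e^{iat} [sin(rt/n) / (rt/n)]^n  -- equation (2)
phi = np.exp(1j * a * t) * np.sinc(r * t / (n * np.pi))**n

def f_mean(x):
    """Density of the sample mean via the inversion formula (3)."""
    return ((np.exp(-1j * t * x) * phi).sum() * dt).real / (2 * np.pi)

rng = np.random.default_rng(2)
sims = rng.uniform(a - r, a + r, size=(200_000, n)).mean(axis=1)
mc = np.mean(np.abs(sims - a) < 0.05) / 0.10   # crude density estimate at x = a
print(f"inversion: {f_mean(a):.3f}   Monte Carlo: {mc:.3f}")
```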
In general, the method of distribution function inversion based on formula (4) can be formulated as follows. Let the distribution function F(x) describe the probabilities with which a random variable takes the values x₁, x₂, …, and let us order the values xᵢ in ascending order of F(xᵢ). Let x₀ be a point at which the distribution density function is not equal to zero. Then, based on (4), it can be shown that the function defined by the operator series (5) is the inverse of the distribution function F(x) (the quantile function). The condition for the existence of expression (5) is the analyticity of the function F(x) and the existence of a non-zero derivative, F′(x₀) ≠ 0. A necessary condition for the legitimacy of using the inverse (5) is the convergence of the series; the convergence is proved using d'Alembert's ratio test.
Let us return to the problem of finding the distribution of the sample mean (3). At x₀ = a, we write down the first three coefficients of the terms of the series (5). The determination of the subsequent terms causes no fundamental difficulties, since at the chosen reference value x₀ = a it reduces to the calculation of tabulated integrals of the form given in [24]. As a result, the quantile of the distribution of the sample mean from a uniform population can be represented as the series (6). Using this relation, we can determine the confidence region for the estimate of the parameter a. Differentiating (6) gives (7). Note that for a sample size different from the one considered in the example, the numerical values of the coefficients will differ slightly.
From (6) and (7) we obtain (8). Recall that it turned out to be impossible to construct an estimate of the sample-mean distribution directly from the characteristic function (3). Let us now study the problem of constructing an estimate of the grouping center of a random variable (the problem of the zero mark of a measuring instrument). Let a sample of volume n, x₁, x₂, …, xₙ, be extracted from a uniform general population and ordered in ascending order of magnitude: x₍₁₎ ≤ x₍₂₎ ≤ … ≤ x₍ₙ₎. It is required to find a quantile function for the grouping center and, at the significance level α, to draw a conclusion about "slippage" or "no slippage" of the grouping center.

The distributions of the extreme members of the variational series, x₍₁₎ and x₍ₙ₎, have the form

F₍₁₎(x) = 1 − [1 − F(x)]ⁿ,   F₍ₙ₎(x) = [F(x)]ⁿ,

where F(x) is the distribution function of the uniformly distributed random variable X. Without violating the generality of the reasoning, we can further assume the coordinate of the grouping center to be equal to zero, a = 0. Let us then represent the quantile of the distribution of the resulting statistic using operator series, as in (9)-(11); note that in this case the beta function appearing in (11) can be defined using the standard relations. Similarly, we find the subsequent coefficients. Hence the critical region [16,20] for the statistic z at the significance level α is defined by (12). The conclusion about the slippage (or lack of slippage) of the grouping center is made on the basis of standard statistical hypothesis theory [16,20].

Testing the goodness of fit of sample data is usually carried out with the help of statistical criteria and the statistics of Pearson or Kolmogorov [17,22]. Obviously, under small-sample conditions it is not possible to use the limiting distributions of these statistics. On the other hand, for small samples it is often possible to form statistics that depend only on standard random variables and do not depend on the distribution parameters of the general population. It seems reasonable to determine the distribution function of such statistics either as a result of statistical modeling or, where possible, to construct it analytically with the help of characteristic functions.
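The same critical region can be obtained by simulation instead of the operator-series expansion. In the sketch below the statistic z is assumed to be the midrange (x₍₁₎ + x₍ₙ₎)/2, a natural estimator of the centre of a uniform distribution; the exact statistic used in (9)-(12) is not fully recoverable from the text.

```python
# Monte Carlo critical region for the grouping-centre ("zero mark") test.
import numpy as np

n, alpha, reps = 7, 0.05, 200_000
rng = np.random.default_rng(3)

samples = rng.uniform(-1.0, 1.0, size=(reps, n))       # H0: centre a = 0
z = 0.5 * (samples.min(axis=1) + samples.max(axis=1))  # assumed statistic
lo, hi = np.quantile(z, [alpha / 2, 1 - alpha / 2])
print(f"accept H0 (no slippage) if {lo:.3f} <= z <= {hi:.3f}")
```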
Results
Suppose that three numbers are random variables uniformly distributed in the interval (0, 1). Let us introduce a statistic based on the sequence of measured values (the variational series) ordered by value, x_(1) ≤ x_(2) ≤ x_(3). This statistic does not depend on the distribution parameters of the general population but is determined by the specific sample. Analytical determination of the distribution function of this statistic is difficult due to the mutual dependence of its numerator and denominator. Therefore, we apply the theory of statistical modeling.
The Monte Carlo method was used to calculate the probability values, which are presented in Table 1.
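The following Python sketch illustrates such a Monte Carlo tabulation; since the exact form of the statistic is not reproduced here, a hypothetical scale-free statistic of three ordered uniform values is used in its place.

import numpy as np

rng = np.random.default_rng(0)

def statistic(sample):
    # Hypothetical scale-free statistic of the ordered sample x(1) <= x(2) <= x(3):
    # the position of the middle order statistic within the sample range.
    x1, x2, x3 = np.sort(sample)
    return (x2 - x1) / (x3 - x1)

vals = np.array([statistic(rng.uniform(0.0, 1.0, size=3)) for _ in range(100_000)])

# Empirical probabilities of the kind tabulated in Table 1.
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"P(t <= {q}) = {np.mean(vals <= q):.3f}")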
Discussion
The presented computational-experimental method can be applied to the construction of statistical estimates for other types of distribution density laws (normal, exponential), as well as distributions of general type, including those defined by implicit functions, characteristic functions and operator series. In this case, it is necessary that the sample volume should be at least one unit larger than the number of parameters of the distribution function under study.
Conclusion
A computational-experimental method of estimating parameters of complex technical systems with metrological support is developed. The results of modeling the practical problem of constructing an estimate of the density function of the digital measurement error distribution in the case of a uniform distribution are presented. The solution of the problem of the zero-mark departure of the measuring instrument (slippage/non-slippage of the grouping center) in a probabilistic formulation is presented.
The function inverse to a single-valued analytic function Y = F(X), in the neighborhood of the point y_0 = F(x_0) where the function F has a non-zero derivative at x_0, can be represented by a Lie operator series.
Figure 2. Assessment of statistical stability of the statistical modeling method. The estimates of the density and the distribution function allow us to conclude that the procedure of constructing the distribution function by the Monte Carlo method is statistically stable.
"Engineering",
"Computer Science"
] |
Electrochemical, Spectroscopic, and Computational Investigations on Redox Reactions of Selenium Species on Galena Surfaces
Despite previous studies investigating selenium (Se) redox reactions in the presence of semiconducting minerals, Se redox reactions mediated by galena (PbS) are poorly understood. In this study, the redox chemistry of Se on galena is investigated over a range of environmentally relevant Eh and pH conditions (+0.3 to −0.6 V vs. standard hydrogen electrode, SHE; pH 4.6) using a combination of electrochemical, spectroscopic, and computational approaches. Cyclic voltammetry (CV) measurements reveal one anodic/cathodic peak pair at a midpoint potential of +30 mV (vs. SHE) that represents reduction and oxidation between HSeO3− and H2Se/HSe−. Two peak pairs with midpoint potentials of −400 and −520 mV represent the redox transformation from Se(0) to HSe− and H2Se species, respectively. The changes in Gibbs free energies of adsorption of Se species on galena surfaces as a function of Se oxidation state were modeled using quantum-mechanical calculations, and the resulting electrochemical peak shifts are −0.17 eV for HSeO3−/H2Se, −0.07 eV for HSeO3−/HSe−, 0.15 eV for Se(0)/HSe−, and −0.15 eV for Se(0)/H2Se. These shifts explain the deviation between Nernstian equilibrium redox potentials and observed midpoint potentials. X-ray photoelectron spectroscopy (XPS) analysis reveals the formation of Se(0) at potentials below −100 mV and of Se(0) and Se(−II) species at potentials below −400 mV.
Introduction
While a vital nutrient at low concentrations, the capacity of selenium (Se) to act as a toxic contaminant has been evinced by cases such as the Kesterson reservoir in California, where high concentrations of Se in sediments and aquatic systems have led to increased wildlife mortality rates and birth defects [1,2]. Owing to the chemical similarity to sulfur (S), such hazardous accumulations in the environment are often linked to the weathering of coal [3] and Se-bearing sulfide minerals [4,5]. Moreover, long-lived radionuclides of Se (e.g., 79Se) can be introduced to the environment where effluents are generated from spent nuclear fuel or reprocessing this fuel [6].
The mobility and fate of Se in the environment is highly dependent on the oxidation state, which dictates factors such as sorption affinities and solubility [4]. For example, Se(VI) and Se(IV) species are dominant under oxidizing conditions and are not only more bioavailable and toxic than reduced forms but also more soluble, granting them greater mobility [4] (Figure 1). Under reducing conditions, Se is found as various Se(−II) species in addition to insoluble Se(0). While Se(−II) can occur as soluble H2Se/HSe− species, it also occurs as insoluble metallic selenides such as clausthalite (PbSe) or ferroselite (FeSe2), and as an incorporated species in a number of sulfides. Semiconducting minerals are not only naturally occurring in the environment but are also viable electron sources and sinks and allow for "conducting" electrons between redox-active species adsorbed some nm away from each other [7][8][9][10][11][12][13][14][15]. Consequently, interactions between contaminants and semiconducting minerals pose the chance for mobilization or immobilization by mediating redox reactions which alter the speciation of the contaminant and therefore its behavior [16]. With a band gap of 0.4 eV [17], galena (PbS) is not only a viable semiconducting mineral, but also one that offers particular geochemical relevance as a plausible substrate for Se interactions due to the common accumulations of Se in sulfide minerals [4,5,18].
The goal of the present study is to identify Se redox reactions mediated by galena surfaces over a range of environmentally relevant Eh and pH conditions (+0.3 to −0.6 V; pH 4.6) and to characterize the effect of galena as a mediating substrate on those reactions. Results from this study will provide implications for the fate and mobility of Se in the environment not only by identifying the specific physicochemical conditions under which speciation changes occur, but also by accounting for how those conditions deviate from expected theoretical conditions. Finally, this paper aims to demonstrate a useful combination of experimental and theoretical techniques and, for these purposes, electrochemical, spectroscopic and computational approaches have been used as outlined below.
Cyclic voltammetry (CV) measurements were performed using a galena electrode, and the collected data were analyzed to identify the specific redox species responsible for electric signals at given pH and Eh. The electrochemical preparation allows various redox parameters to be quantified, such as reduction potential, chemical reversibility, and the oxidants and reductants participating in specific reactions. It is important to note that in all experiments performed in this study, galena is not the primary electron donor in the reduction processes described, but rather transmits electrons from and to the electrochemical setup. In addition, galena catalyzes the reaction by helping in the dehydration of adsorbing species from solution and by the potential overlap of orbitals between the mineral surface and the redox-active orbitals of the adsorbing species. Even if a Se-reducing electron originates from an S 3p orbital (i.e., a sulfur 3p orbital), the missing S 3p electron will be replaced by an electron from the electrode.
The adsorption modes of Se species with different oxidation states on galena slabs were simulated based on molecular orbital modeling. Energetic contributions of adsorption are included in evaluating Se redox transformations mediated by galena, and the deviation between equilibrium reduction potentials and observed midpoint potentials is determined using the calculated adsorption Gibbs free energies. Finally, X-ray photoelectron spectroscopy (XPS) was employed to acquire direct evidence for Se redox reactions catalyzed by the galena electrode by determining the composition ratios of different Se species on the galena surface.
Sample Characterization
Galena sourced from Missouri was characterized by scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDS) using a 15 kV accelerating voltage and a 112.6 µA beam current (Figure 2). In addition to lead (Pb) and S, copper (Cu) impurities were detected, which are commonly found in galena originating from Missouri [33][34][35]. The dominant form of powdered galena is cubic, which represents cleaving along (100) planes.
Voltammetry
Voltammetry was conducted in a conventional three-electrode cell controlled by a Princeton Applied Research EG&G model 263A potentiostat with Powersuite software. A Pt counter electrode (Sigma Aldrich) and an Ag/AgCl reference electrode (CH Instruments) were used; however, all potentials quoted in this study are with respect to the SHE. A powder microelectrode (PME) was used as the working electrode, which acts effectively as a cleaved mineral electrode of greater surface area [36]. The increased surface area on a relatively small volume allows for relatively fast reaction kinetics, which is important for redox processes in cyclic voltammetry that happen on a second-to-minute timescale. The PME was prepared by conventional glass blowing techniques and was composed of a Pt wire encased in soda lime glass with the Pt wire exposed flush to the end of the electrode. A cavity of 100 µm diameter (dictated by the diameter of the Pt wire) and ~20 µm depth was made by etching the exposed Pt in aqua regia (~80 °C) for ~2.5 h [37]. The cavity was packed by tapping the electrode in galena powdered in an agate mortar and pestle, and the powder was removed from the cavity after use via sonication.
Unless otherwise stated, voltammetry was performed on freshly powdered, pristine galena in 10 mM Na2SeO3 + 100 mM NaCl adjusted to pH 4.6 at 50 mV/s, initiating from the open circuit potential (OCP) in the positive-going direction. The OCP (~110 ± 5 mV) was monitored for 30 s before initiating each scan. The pH of all solutions was adjusted using HCl and NaOH. All reagents (excluding galena) were obtained from Sigma Aldrich. Solutions were sparged with argon (Ar) gas for 30 min prior to voltammetry to remove dissolved oxygen.
On voltammograms, anodic and cathodic peaks are denoted with an "A" and "C", respectively. The midpoint potential (E_mid) of the anodic (E_A) and cathodic (E_C) peaks comprising a redox couple was defined as E_mid = (E_A + E_C)/2. Peak areas and peak currents were obtained using the Origin 8.5 plotting software (OriginLab Corporation, Northampton, USA). For these purposes, peaks were defined by either the two major inflection points or the local minima or maxima bounding the peak of interest, determined using a Savitzky-Golay smoothing function for the first and second derivatives. Using these bounds, peak areas were obtained through integration and peak currents by subtracting the baseline current.
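The following Python sketch illustrates this peak-bounding and integration procedure on a synthetic trace; the window length, polynomial order, and data are illustrative choices, not the parameters used with Origin.

import numpy as np
from scipy.signal import savgol_filter

E = np.linspace(-0.65, 0.3, 500)                          # potential, V
i = 1e-5 * np.exp(-((E + 0.44) / 0.03) ** 2) + 1e-7 * E   # synthetic current, A

# Second derivative of the smoothed trace; its sign changes mark inflection points.
d2 = savgol_filter(i, window_length=31, polyorder=3, deriv=2)
peak = np.argmax(np.abs(i))
left = np.where(np.diff(np.sign(d2[:peak])))[0][-1]       # inflection before the peak
right = peak + np.where(np.diff(np.sign(d2[peak:])))[0][0]  # inflection after the peak

area = np.trapz(i[left:right], E[left:right])             # peak area, A*V
scan_rate = 0.05                                          # V/s
charge = area / scan_rate                                 # Coulombs
print(f"peak bounds: {E[left]:.3f} to {E[right]:.3f} V, charge = {charge:.2e} C")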
Figure 2. Scanning electron microscope-energy dispersive X-ray spectroscopy (SEM-EDS) image of powdered galena (above) and the resulting spectrum (below). In addition to Pb and S, Cu was detected, an impurity commonly reported for galena sourced from Missouri.
X-Ray Photoelectron Spectroscopy (XPS)
XPS spectra were obtained using a Kratos Axis Ultra X-ray photoelectron spectrometer with a monochromatized Al Kα source (1486.6 eV) primed to 8 mA and 14 kV at no more than 5 × 10−7 Torr. Core spectra obtained using a 20 eV pass energy with 0.1 eV resolution were calibrated with respect to the C 1s spectrum of adventitious carbon of each respective sample, assuming a binding energy of 284.8 eV. Peak fitting and calculation of atomic ratios were performed using CasaXPS 2.3.17 software (Casa Software Ltd, Teignmouth, UK) and the relative sensitivity factors provided by [38]. Due to the small diameter of the PME, XPS analysis was conducted on bulk galena electrodes prepared by mounting cleaved galena (polished to 1200 grit) to a copper wire via conductive silver paste (Ted Pella Inc., Redding, CA, USA) and insulating with non-conductive epoxy (Loctite). Bulk electrodes were polarized at potentials of interest (see Section 3.2) for 30 min in a solution containing 10 mM Na2SeO3 + 100 mM NaCl adjusted to pH 4.6. After polarization, bulk electrodes were stored in an anaerobic glove bag (5% H2 + 95% N2) for no more than two hours before being transferred into the spectrometer where evacuation could begin. Samples were not rinsed or further treated at any point after polarization. Analytical grade Na2SeO4, Na2SeO3, and Se(0) reagents (Sigma Aldrich) were used as reference standard materials.
Calculation of Adsorption
The Gaussian 09 package was used to model the atomic structures and adsorption energies of possible surface species based on molecular orbital calculation. All calculations were made at the B3LYP [39,40] and LANL2DZ [41][42][43] level. B3LYP is a hybrid method combining density functional theory (DFT, to include electron correlation) and the Hartree-Fock (HF) approach (to approximate electron exchange well). In order to approximate polarization due to bonding, additional polarization functions (extra basis functions containing no electrons) were added. LANL2DZ uses an all-electron description for atoms of the first-row elements and small-core relativistic effective-core potentials (RECP) for the inner electrons, combined with double-zeta functions for the valence electrons of heavier atoms of elements such as S, Se, and Pb [44]. These computational parameters have been successfully applied to calculating systems involving galena or Se molecules [44][45][46].
Modeling of adsorbate structures on galena (Fm3m space group) was performed on a 4 × 4 × 2-atom PbS cluster. This cluster represents the {100} surfaces of galena, in good agreement with the SEM-EDS observations (Figure 2). One layer of the cluster was allowed to relax while the edge atoms on that layer were fixed in order to minimize edge effects that are inherent to the cluster size [46]. To calculate the energetics of adsorption reactions occurring in aqueous phases, hydration was simulated by imposing the solute in a cavity within a self-consistent reaction field (SCRF) based on the PCM (polarizable continuum model [47]) solvation model.
The adsorption Gibbs free energy is calculated from the computed Gibbs free energies of the chemical species (X), the galena slab (PbS), and the species adsorbed on the slab (PbS≡X) as described in Equation (1):

∆G_ads = G(PbS≡X) − G(PbS) − G(X) (1)

In Equation (1), Gibbs free energies are obtained by adding vibrational contributions (c_p T to obtain the enthalpy and −T∆S to include the vibrational entropy; both obtained from frequency calculations on the optimized structures) to the molecular/electronic energy. Entropy changes due to hydration were included in the PCM solvation model.
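In code, Equation (1) is simple bookkeeping; the following Python helper (with hypothetical input energies for illustration, not values from the calculations reported here) shows the unit handling from Hartree to kJ/mol.

HARTREE_TO_KJ_MOL = 2625.4996

def adsorption_gibbs(G_slab_X, G_slab, G_X):
    # dG_ads = G(PbS=X) - G(PbS) - G(X); inputs in Hartree, each already
    # including the vibrational enthalpy/entropy terms described above.
    return (G_slab_X - G_slab - G_X) * HARTREE_TO_KJ_MOL

# Hypothetical Gibbs free energies (Hartree), chosen only to illustrate scale.
print(f"{adsorption_gibbs(-1234.567, -1200.000, -34.541):.1f} kJ/mol")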
Results
CV measures current as a function of electrochemical potential and was used here to characterize redox transformations between species with different Se oxidation states as mediated by the galena powder in the working electrode (Section 3.1). Resulting electrochemical signals were used to evaluate reduction potentials of possible redox pairs and the amounts of species changing their oxidation state within a given reduction or oxidation peak.
Cyclic Voltammetry of Galena with and without Se(IV)
CV was performed in the scan range of −0.65 to +0.3 V using the galena PME in the absence of Se to establish any voltammetric signatures contributed by reactions involving the Pt and galena electrode materials (Figure 3a). In the absence of Se, the PME with galena present in the cavity exhibits a cathodic peak at −440 mV (labeled C_0 in Figure 3a). This peak is likely indicative of the reduction of oxidized surface elemental S [48][49][50]. CV performed in the scan range of −0.65 to +0.3 V using the galena PME with Se(IV) (added as Na2SeO3) present in solution exhibited three anodic peaks and three cathodic peaks, denoted A_1 to A_3 and C_1 to C_3 (Figure 3a). The observation of peaks A_1 to A_3 and C_1 to C_3 in the scan with Se(IV) present in solution, and their clear distinction from the features of the scan performed in the absence of Se, attributes these peaks to Se redox reactions. The contribution of Pt to the total electrochemical signal can be considered minute because the galena component of the PME produces electrical signals of current one order of magnitude greater than that of the empty PME (Figure 3a). Association of these CV peaks with Se redox reactions was also substantiated by the fact that the reduction peaks C_1, C_2, and C_3 increase in current magnitude with increasing Se(IV) concentration (Figure 3b). While the peak current of A_1 increased with Se concentration, anodic peaks A_2 and A_3 showed negative current with varying Se concentration but did not necessarily increase with the concentration. One possible reason is that Se reduction is dominant over negative potential ranges and decreases the overall current even upon anodic scans below 0.0 V. Upon measurement using the galena PME, the intersection between the anodic scans with and without Se(IV) occurs at about 0 to −0.1 V (Figure 3a,b), where there is a transition between reductive and oxidative processes involving Se(IV) and its reduced products. Similar observations of anodic scans leading to negative current are also reported from studies regarding Se reduction measured using Au electrodes [19,51].
It should be noted that peaks A_2, A_3, C_2, and C_3 were not attributed to hydrogen (H) evolution on Pt (or galena), i.e., the redox transformation between H and H+ and their adsorption/desorption. Typical CV patterns of this one-electron transfer process show near chemical reversibility (i.e., comparable areas of the cathodic and anodic peaks) and a separation of the corresponding peaks as small as 59 mV [52], which was not the case for the current-potential region involving A_2, A_3, C_2, and C_3. Furthermore, while these reactions are commonly observed in measurements in strongly acidic solutions, the voltammetric signature of H evolution is unlikely to contribute to an observable degree under the current experimental conditions, where the concentration of Se is higher than that of protons by two to three orders of magnitude. Given the same pH of 4.6 in each case, any H evolution occurring in scans performed in the presence of Se would have been observed in those in the absence of Se.
While galena can, as a redox catalyst, influence the kinetics of the reaction, the formation of bonds between different Se species and the galena surface can lead to a variation in the peak position [46]. We address this energetic contribution of adsorption using quantum-mechanical calculations (Section 3.3). It is important to note that in this electrochemical system, the ultimate source and sink for electrons is the electrode, i.e., the potentiostat with electrons being transferred by the platinum wire and galena powder, and not, e.g., the S 3p orbitals of galena.
Peak Pairing and Assignment to Se Redox Transformations
Prior to peak quantification and reaction assignment, it was necessary to pair oxidation and reduction peaks to specific Se redox transformation mediated by galena. To aid pairing the Se-related peaks, linear sweep voltammetry (LSV) was used to limit scans to a single direction rather than including a reverse scan such as with CV. This allowed us to examine individual oxidation and reduction processes observed in narrow scan ranges.
Successive linear sweeps were performed in the negative-going (i.e., cathodic) and positive-going (anodic) direction over a potential range from +300 to −100 mV. In separate scans on fresh galena powder, growth of C_1 was observed with successive cathodic sweeps (Figure 4a), while no noticeable feature was observed with anodic sweeps performed over the same potential range. However, growth of A_1 is observed with anodic sweeps scanned after successive cathodic sweeps producing C_1 are performed on the same galena powder (Figure 4b). These results indicate that A_1 involved the reversible reaction of C_1, constituting a redox couple (denoted C_1/A_1). The peak potentials of A_1/C_1 were observed at +120/−60 mV (Figure 3a). Figure 5 shows cathodic linear sweeps over a potential range from −265 to −615 mV. The measurements exhibited growth of one broad, large peak over the first few sweeps, which evolved into peaks C_2 and C_3 in later sweeps. Peaks A_2 and A_3 were assigned to the reverse reactions of C_2 and C_3, constituting redox couples C_2/A_2 and C_3/A_3, due to their later emergence upon anodic scans with progressive CV cycling. The peak potentials of A_2/C_2 are observed at −360/−440 mV, and those of A_3/C_3 at −500/−540 mV (Figure 3a). Given the peak potentials of a redox couple, E_mid can be defined, allowing a first-order approximation of a reduction potential. For peak assignment, the observed E_mid is compared with the equilibrium reduction potentials of Se redox pairs. The Nernst equation defines the deviation of the equilibrium reduction potential from the standard reduction potential as a function of pH and the concentrations of the reductants and the oxidants:

E = E0 − (RT/nF) ln Q

where E is the equilibrium reduction potential, E0 is the standard reduction potential, n is the number of electrons transferred, dictated by the reaction stoichiometry, R is the gas constant, T is the temperature, F is the Faraday constant, and Q is the reaction quotient. The Nernst equations and the standard reduction potentials of Se redox pairs relevant to the experimental conditions of this study are summarized in Table 1. In Table 2, the equilibrium reduction potentials are estimated for a solution pH of 4.6 and the presumed concentrations of the relevant selenium species. Since HSeO3− is the most dominant Se(IV) species at pH 4.6 (pK_a1 = 2.7 and pK_a2 = 8 for H2SeO3), it is reasonable to assume that the initial Se(IV) concentration (0.01 M) is approximately equal to [HSeO3−]. In the bulk solution (50 mL), the concentrations of the other Se species would be minute because they can only be produced from the Se(IV) redox reactions occurring by means of the galena PME. One constraint on the Se(−II) species is that the ratio of [HSe−] to [H2Se] is 5 at pH 4.6 (pK_a1 = 3.9 for H2Se). For the C_1/A_1 couple, E_mid was defined at +0.03 V, and the best agreement with the equilibrium reduction potentials (Table 2) was found for the HSeO3−/H2Se and HSeO3−/HSe− couples.
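As an illustration, the following Python sketch evaluates the Nernst equation for the HSeO3−/H2Se couple at pH 4.6. The standard potential used below is a placeholder, since Table 1 is not reproduced here, and the half-reaction stoichiometry (HSeO3− + 7H+ + 6e− → H2Se + 3H2O) is our own balancing.

import numpy as np

R, F_CONST, T = 8.314, 96485.0, 298.15

def nernst(E0, n, log10_Q):
    # E = E0 - (2.303*R*T/(n*F)) * log10(Q)
    return E0 - (2.303 * R * T / (n * F_CONST)) * log10_Q

E0, pH = 0.36, 4.6                    # E0 is a placeholder, V vs. SHE
hseo3, h2se = 1e-2, 1e-6              # assumed concentrations, M
log10_Q = np.log10(h2se / hseo3) + 7 * pH   # Q = [H2Se]/([HSeO3-][H+]^7)
print(f"E = {nernst(E0, 6, log10_Q):.3f} V vs. SHE")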
Adsorption of Se Species on Galena
Adsorption is needed for a dissolved species to remain in contact with a solid surface and occurs before electrons are transferred between the electrode surface and the chemical species. Interaction between a mineral surface and a chemical species can be an important parameter in quantifying the thermodynamics of electron transfer mediated by a mineral surface in that it can cause deviation from theoretical equilibrium redox potentials within a few hundred mV [46,[54][55][56]. Here, energetic contributions of adsorption to Se redox thermodynamics were calculated using the computational code, Gaussian 09. Comparison between the observed midpoint potential and the equilibrium redox potential (Section 3.2) was further explored and it was examined whether consistency between the observed and the theoretical reduction potentials is improved with correction for the adsorption contributions to the reduction potentials.
The adsorbate structures shown in Figure 6 are the most likely reaction products according to the calculations presented in this study. The bidentate mode of HSeO3− on galena includes an oxygen bond with surface Pb (Pb-O = 2.4 Å) and an attractive interaction between H and surface S (Figure 6a,b). Following a frequency analysis, the ∆G_ads of HSeO3− on galena was calculated to be −69 kJ/mol.
Interaction of HSe− with the galena surface was in a monodentate mode (Figure 6c). The calculated Se-Pb distance is 3.0 Å and the ∆G_ads is −30 kJ/mol. H2Se interacts weakly with the galena surface (Se-Pb of 3.5 Å), which is indicated by the positive ∆G_ads (= +29 kJ/mol) (Figure 6d). This also indicates that reduction of Se(IV) to H2Se, especially at pH values more acidic than the experimental value of 4.6, would lead to the release of H2Se into solution, while HSe− is calculated to stay adsorbed to the galena surface.
The energetic contribution of adsorption on galena to redox thermodynamics was evaluated by calculating the difference in Gibbs free energy between the redox half reaction for dissolved species (Equation (4)) and that for adsorbed species (Equation (5)). This difference (Equation (6)) is equivalent to the reaction involving the oxidant adsorbed on galena (PbS≡Ox) and the dissolved reductant (Red) as reactants and the adsorbed reductant (PbS≡Red) and the dissolved oxidant (Ox) as products (Figure 7). In turn, this reaction equation corresponds to the subtraction of the oxidant adsorption equation (Equation (7)) from the reductant adsorption equation (Equation (8)).
Ox + ne− ↔ Red (4)
PbS≡Ox + ne− ↔ PbS≡Red (5)
PbS≡Ox + Red ↔ PbS≡Red + Ox (6)
PbS + Ox ↔ PbS≡Ox (7)
PbS + Red ↔ PbS≡Red (8)

The change in redox potential by adsorption (∆V) is related to the difference in the adsorption Gibbs free energies as described in Figure 7 and Equation (9):

∆V = −[∆G_ads(Red) − ∆G_ads(Ox)]/(nF) (9)

Overall, the computed contribution of adsorption to redox thermodynamics ranges from −0.05 to −0.18 V, which indicates a shift of the redox potential position to more negative values than the theoretically derived equilibrium redox potentials (Table 2). The midpoint potentials of Se-attributed redox transformations measured from galena CVs (Sections 3.1 and 3.2) are in good agreement with reduction potentials corrected for the adsorption contributions and support the peak assignment: the observed midpoint potential of C_1/A_1 (= 0.03 V) was very close to the corrected redox potentials for the redox couples HSeO3−/H2Se and HSeO3−/HSe−. To sum up, based on the evaluation of equilibrium reduction potentials (Section 3.2) and the adsorption contribution to the selenium redox thermodynamics on galena (Section 3.3), the observed CV peaks were assigned to individual redox pairs, and the intimate growth of peaks C_2 and C_3 observed during the linear cathodic scans (Figure 5) was attributed to the reduction of Se(0) to Se(−II) in association with selenide speciation.
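The sign convention of Equation (9) as written above can be checked against the reported values: using the adsorption energies of −69 kJ/mol (HSeO3−), −30 kJ/mol (HSe−), and +29 kJ/mol (H2Se) with n = 6, a short Python computation reproduces the −0.17 and −0.07 V shifts quoted in the abstract.

F_CONST = 96485.0  # Faraday constant, C/mol

def adsorption_shift(dG_ads_red_kJ, dG_ads_ox_kJ, n):
    # Equation (9): potential shift (V) caused by adsorption on galena.
    return -(dG_ads_red_kJ - dG_ads_ox_kJ) * 1000.0 / (n * F_CONST)

# HSeO3- (Ox, -69 kJ/mol) reduced to H2Se (Red, +29 kJ/mol), n = 6 electrons:
print(f"{adsorption_shift(+29.0, -69.0, 6):+.2f} V")   # -0.17 V
# HSeO3- (Ox, -69 kJ/mol) reduced to HSe- (Red, -30 kJ/mol), n = 6 electrons:
print(f"{adsorption_shift(-30.0, -69.0, 6):+.2f} V")   # -0.07 V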
XPS Measurements
XPS spectra were obtained to identify the products of Se redox transformations mediated by galena. For this purpose, bulk galena electrodes were polarized for 30 min in a solution containing 10 mM Na2SeO3 + 0.1 M NaCl adjusted to pH 4.6. The potential was held at two values, −0.125 and −0.49 V, corresponding to the approximate positions of peaks C_1 and C_2 observed with cyclic voltammetry.
Core scans for the Se 3d peak are presented in Figure 8. Galena polarized at −0.125 V yields peaks that indicate the presence of Se(IV) and Se(0), with 3d_5/2 spin-orbit split peaks located at 58.4 and 55.0 eV, respectively (Figure 8a and Table 3). The peak observed at 55.0 eV was attributed to Se(0), which is in agreement with the standard value of 55.2 eV and within the range of 54.8 to 56.3 eV [32,57] commonly reported for Se(0). The proportions of Se oxidation states are 51%/49% for Se(0)/Se(IV). From the galena electrode polarized at −0.49 V, the Se core scans reveal three peaks attributed to Se(IV) at 58.3 eV, Se(0) at 54.4 eV, and Se(−II) species at 52.8 eV (Figure 8b and Table 3). Formation of a red film on the galena electrode was macroscopically observable after it was held at a potential of −0.49 V for 30 min, which is evidence of Se(0) as reported in previous studies [30,58,59]. The peak at 52.8 eV indicates the formation of Se(−II) species such as HSe− and H2Se, in good agreement with the assignment of peak C_2. This Se 3d_5/2 peak is within the range of binding energies typically reported for Se(−II) compounds [32]. The relative proportions of Se oxidation states are 29%/32%/39% for Se(−II)/Se(0)/Se(IV), respectively.
Possible Forms and Behavior of Products of Se Reduction Mediated by Galena
The electrochemical and spectroscopic data presented in this study reveal the formation of Se(0) at potentials below +30 mV (i.e., the midpoint potential of C_1/A_1) and of protonated Se(−II) below −400 mV. Here, possible forms and behavior of Se(0) and Se(−II) at the galena-solution interface are discussed.
The binding energies of Se(0) shown in the XPS spectra of the galena surfaces polarized at −0.125 and −0.49 V deviate by ~0.6 eV, which indicates that the Se(0) produced at these negative potentials is likely to be in different forms. Se(0) occurs in various forms including red monoclinic Se(0) and grey (or black) trigonal Se(0) [58][59][60]. The Se(0) spin-orbit splits for the −0.125 V sample are very close to those for grey Se(0) (Table 3). Additionally, a red film characteristic of Se(0) formation was observed on galena bulk electrodes after a potential hold at −0.49 V for 30 min (Section 3.4). These observations suggest possible reduction potential-dependent regions of stability for a particular type of Se(0), in agreement with Espinosa et al. [61], who reported red Se(0) formation on carbon paste electrodes only if potentials were scanned below −0.2 V. This possibility is also in accord with X-ray absorption spectroscopic (XAS) data by Scheinost and Charlet [62] reporting grey and red Se(0) as products of Se(IV) reduction in the presence of Fe-bearing minerals.
A number of electrochemical studies have attributed the formation of Se(0) to comproportionation between Se(IV) and Se(−II) species [25,28,61]. When Se(−II) species (HSe− and H2Se) are produced at C_1 to C_3, subsequent comproportionation reactions can proceed in Se(IV)-containing solutions (Equations (10) and (11)):

HSeO3− + 2H2Se + H+ → 3Se(0) + 3H2O (10)
HSeO3− + 2HSe− + 3H+ → 3Se(0) + 3H2O (11)

By comparing the charge (in Coulombs) passed by the reverse peak (C_R) to that of the forward peak (C_F) of a redox couple, a charge ratio is obtained, where C_R/C_F = 1 for an ideal, Nernstian reaction. The electron charge of a given peak is obtained by integrating the current of the peak over the applied voltage and dividing the result by the scan rate. As of the final cycle of Figure 3a, the C_1/A_1, C_2/A_2, and C_3/A_3 couples have charge ratios of ~0.27, ~0.44, and ~0.14, respectively. The C_R/C_F < 1 for all three couples indicates that the anodic scan is capable of reoxidizing only a limited amount of the cathodic product of the respective reaction. One possible explanation for the limited chemical reversibility is that oxidation of Se(0) produced from the reduction processes mediated by galena is slow and irreversible. Another possibility is the loss of a fraction of cathodically produced H2Se to the solution (and possibly gas) phase, which has been reported in a number of studies employing CV coupled with an electrochemical quartz crystal microbalance [20,22,24,25].
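For concreteness, the charge-ratio bookkeeping reads as follows in Python; the peak areas are hypothetical numbers chosen to give the ~0.27 ratio of the C_1/A_1 couple.

def charge_ratio(area_reverse, area_forward, scan_rate):
    # C = peak area (A*V) / scan rate (V/s); the scan rate cancels in the ratio.
    return (area_reverse / scan_rate) / (area_forward / scan_rate)

# Hypothetical peak areas (A*V) at a 50 mV/s scan rate.
print(f"{charge_ratio(2.7e-7, 1.0e-6, 0.05):.2f}")   # 0.27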
The stability of Se(−II) species increases at more negative potentials, where Se(0) is reduced further into HSe− and H2Se, as inferred from the assignment of peaks C_2 and C_3 (Sections 3.2 and 3.3). From voltammetry, a fraction of the H2Se produced via C_3 diffuses away from the reacting volume around the electrode surface and the rest is retained on the surface, as indicated by the low C_R/C_F of ~0.14 and the positive ∆G_ads. Retention of otherwise soluble H2Se would be due to the presence of red Se(0), which is capable of strongly adsorbing Se(−II) species [59].
Geochemical Implications
In this study, electrochemical measurements are used to evaluate possible Se redox processes mediated by galena under reducing (Eh +0.3 to −0.6 V) and acidic (mainly, pH 4.6) conditions. HSeO3− is the dominant Se species with an oxidation state of +IV and is found to be redox-active under these conditions. The cyclic voltammetry measurements combined with XPS analysis reveal the potential-dependent reduction of Se(IV) to Se(0), and further to a mixed phase of Se(0) and Se(−II), at the galena surface, where Se(0) is the main product of Se reduction under intermediate reducing conditions (Eh = −125 mV) while the formation of HSe− and H2Se results from reduction of Se(0) under highly reducing conditions (−440 to −490 mV). Equilibrium redox potentials (and potentials adjusted for adsorption contributions) and the limited chemical reversibility of the redox pairs Se(0)/HSe− and Se(0)/H2Se observed with cyclic voltammetry indicate that protonated Se(−II) species would be the dominant products of Se(IV) reduction. Since Pb(II) is generated from mineral dissolution in environmental systems involving galena, the formation of Pb(II)/Se(−II) solids such as clausthalite (PbSe) is likely to occur in settings where adsorbed/dissolved Se(−II) species produced from Se reduction subsequently react with Pb(II).
The abiotic reduction of redox-sensitive elements can be mediated by a semiconducting mineral surface when the reduction is coupled with the oxidation of other species on the surface [37,63]. While in the electrochemical part of this study, electron transfer is triggered by applying an electrical potential, the mineral surface oxidation or the adsorption of reductants from solution to the surface plays an equivalent role in natural systems [64]. In a system where acidic dissolution of galena occurs in the presence of dissolved Se(IV), adsorbed or structural S(−II) species such as H 2 S and HS − may be possible sources for electrons being transferred through the galena surface to Se(IV). Similar mechanisms have been suggested from previous studies where reductants are sourced from the main constituent ions of the mineral catalyst [62,65].
Conclusions
One important finding from our computational modeling of Se species adsorbed on galena is that Se redox thermodynamics is influenced by interaction with the galena surface: observed redox potentials are shifted relative to equilibrium redox potentials, and our modeling suggests that this is caused by the contribution of adsorption Gibbs free energies. Specifically, the redox potential is calculated to shift towards more negative or positive values by 70 to 180 mV, depending on the ∆G changes of adsorption for the different Se redox reactions mediated via the galena surface (Table 2).
Our experimental and computational results contribute toward a fundamental understanding of the catalysis of redox transformations occurring at mineral surfaces and are critical in evaluating Se fate and mobility in the environment. The computational approaches of this study have broad applications and can be applied to future studies that aim at accounting for energetic variations caused by the interaction of chemical species with mineral surfaces, for instance, in electrochemical and spectroscopic measurements.
"Materials Science"
] |
OptDesign: Identifying Optimum Design Strategies in Strain Engineering for Biochemical Production
Computational tools have been widely adopted for strain optimization in metabolic engineering, contributing to numerous success stories of producing industrially relevant biochemicals. However, most of these tools focus on single metabolic intervention strategies (either gene/reaction knockout or amplification alone) and rely on hypothetical optimality principles (e.g., maximization of growth) and precise gene expression (e.g., fold changes) for phenotype prediction. This paper introduces OptDesign, a new two-step strain design strategy. In the first step, OptDesign selects regulation candidates that have a noticeable flux difference between the wild type and production strains. In the second step, it computes optimal design strategies with limited manipulations (combining regulation and knockout), leading to high biochemical production. The usefulness and capabilities of OptDesign are demonstrated for the production of three biochemicals in Escherichia coli using the latest genome-scale metabolic model iML1515, showing highly consistent results with previous studies while suggesting new manipulations to boost strain performance. The source code is available at https://github.com/chang88ye/OptDesign.
■ INTRODUCTION
A growing population and fast economic development are leading to an increasing demand for various daily products and industrial raw materials, many of which are derivatives of oil and petroleum. Over the past decades, important efforts have been made to develop sustainable production processes that convert biomass or other renewable resources to bioproducts through cell platforms. 1 A key challenge in this respect is the design of high-performance strains with efficient metabolic conversion routes to desired products. Recent advances in genome-scale metabolic modeling (GSMM) 2 have made it possible to have a system-level understanding of cell physiology and metabolism, leading to rational prediction of metabolic interventions for strain development. Systems strain design 1 has helped to improve the production of numerous biochemicals, including lycopene, 3 malonyl-CoA, 4 alkanes and alcohols, 5 and hyaluronic acid. 6 A number of tools have been developed for strain design. 7,8 OptKnock, 9 which was developed to block some reactions in metabolic networks, is one of the earliest such tools. OptKnock identifies the knockout targets that lead to maximal biochemical production in the context of flux balance analysis, 2 which is subject to mass balance and thermodynamic constraints. This results in a bilevel optimization problem which can be solved through mathematical reformulation into a standard mixed-integer linear program (MILP). 9 The OptKnock model was later extended to consider gene up/down-regulation, 10 swaps of cofactor specificity, 11 and the introduction of heterologous pathways 12 for biochemical production. It was also adapted to identify synthetic lethal genes for anti-cancer drug development. 13 Some improvement strategies, such as GDBB 14 and GDLS, 15 have been proposed to improve the efficiency of OptKnock in solving the bilevel problem. There also exist numerous approximate solutions to the OptKnock model, including genetic algorithms 16 and swarm intelligence. 17 Designing strains that couple production to growth has received increasing attention in recent years, mainly due to the great production potential of growth-coupled strains in adaptive laboratory evolution. 18 Consequently, a number of computational tools along this direction have been developed to design strains with various growth-coupled phenotypes. 19,20 OptCouple 20 simulates jointly gene knockouts, insertions, and medium modifications to identify growth-coupled designs, although gene expression regulation is not considered. In addition, game theory has been introduced into metabolic engineering. 21,22 NIHBA 22 considers metabolic engineering design as a network interdiction problem involving two competing players (host strain and metabolic engineer) in a max-min game enabling growth-coupled production phenotypes, and the problem is solved by an efficient mixed-integer solver. Furthermore, there are also some studies which do not rely on optimality principles for phenotype prediction. Among these, the minimum cut set (MCS)-based approach, 23,24 which aims to find the smallest number of interventions blocking undesired production phenotypes, has been extensively studied. Despite high computational complexity, MCS-based approaches have successfully predicted strain design strategies leading to in vivo biochemical production. 25
Another important approach of the same kind is OptForce, which identifies metabolic interventions by exploring the difference in flux distributions between the wild type and the desired production strain. 26 OptForce has shown good predictions for in vivo malonyl-CoA production. 4 The use of computational tools is of undisputed importance to strain development in metabolic engineering. 27 However, there are several limitations which may prevent the wide applicability of the above-mentioned approaches. First, most of the tools focus on prediction of either knockout targets or regulation targets alone, with a few exceptions that are capable of predicting both interventions, such as OptForce 26 and OptRAM. 28 These exceptions highlight that a combination of knockout and up/down-regulation often leads to higher biochemical production compared to a single strategy. OptForce encourages the use of flux measurements while identifying optimum design strategies. OptRAM considers regulatory networks from which transcriptional factors can be optimized for biochemical production. However, both OptForce and OptRAM rely heavily on the precise expression level of regulation targets; for example, desired production phenotypes can only be achieved at the exactly suggested flux values (OptForce) or up/down-regulation fold changes (OptRAM). It is known that gene expression is a complex process with many uncertainties. The underlying strict expression requirements in these approaches may miss theoretically non-optimal but practically feasible design strategies. In addition, both approaches rely on a reference flux vector of the wild type, which can be incorrectly chosen from many steady-state flux distributions if it cannot be uniquely determined. Second, many existing strategies rely on the assumption of optimality principles, for example, maximal growth in OptKnock 9 and its derivatives, in the cell metabolism. However, this assumption is not always an accurate representation of how cells respond to metabolic perturbations or environmental changes. 29 NIHBA 22 showed that reducing unnecessary surrogate biological objectives helps to identify many non-optimal but biologically meaningful knockout solutions.
This paper introduces a new computational tool, called OptDesign, that uses a two-step strategy to predict rational strain design strategies for biochemical production. OptDesign has the following capabilities: (C1) overcomes the uncertainty problem as there is no assumption of exact fluxes or fold changes that cells should have for production. As a result, non-optimal but good feasible solutions are not missed.
(C2) allows two types of interventions (knockout and up/ down-regulation).
(C3) disregards the assumption of (potentially unrealistic) optimal growth in the production mode.
(C4) can use with or without reference flux vectors.
(C5) guarantees growth-coupled production (if desired up/ down-regulations are achievable in vivo).
OptDesign is the only tool that combines these five capabilities, as shown in Table 1. In the remainder of this paper, we describe OptDesign and benchmark it considering three case studies, demonstrating high consistencies of predicted design strategies with previous in vivo and in silico studies.
■ MATERIALS AND METHODS
A metabolic network of m metabolites and n reactions has a stoichiometric matrix S formed by the stoichiometric coefficients of the reactions. Let J be the set of n reactions and v_j the reaction rate of j ∈ J; Sv then represents the concentration change rates of the m metabolites. The flux space FS is defined as the space spanned by all possible flux distributions v for the system subject to thermodynamic constraints at steady state (i.e., the concentration change rate is 0 for all the metabolites). Mathematically, FS can be described as

FS = { v ∈ R^n : Sv = 0, lb_j ≤ v_j ≤ ub_j for all j ∈ J }

where lb_j and ub_j are the lower and upper flux bounds of reaction j, respectively. We use the notation FS_w for the wild type and FS_m for the mutant strain. Flux balance analysis (FBA) determines a single solution in FS when a surrogate biological objective is provided.
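As a minimal illustration of FS and FBA (not part of OptDesign itself), the following Python sketch solves an FBA problem on a hypothetical three-reaction network using scipy; the network, bounds, and objective are toy assumptions.

import numpy as np
from scipy.optimize import linprog

# Toy network: one internal metabolite M and three reactions,
#   R1: -> M (uptake, bounded), R2: M -> biomass, R3: M -> product.
S = np.array([[1.0, -1.0, -1.0]])                   # stoichiometric matrix (1 x 3)
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]    # lb_j <= v_j <= ub_j
c = np.array([0.0, -1.0, 0.0])                      # maximize v_2 <=> minimize -v_2

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
print("FBA-optimal biomass flux:", res.x[1])        # 10.0, the uptake limit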
(Footnote to Table 1: the original OptKnock may not always achieve growth-coupled production, but its derivative RobustKnock 30 is guaranteed to achieve this.)
OptDesign recognizes metabolic changes from the wild type to production (mutant) strains. Let v ∈ FS w denote a flux vector of the wild type and Δv denotes the flux change needed for v to transition into a desired production state. Obviously, v + Δv represents a flux vector of the production strain, and it needs to satisfy mass balance and some production requirements, that is, v + Δv ∈ FS m . Note that flux measurements can be used to customize the flux bounds in FS w and FS m if available; otherwise, the flux bounds can be set according to flux variability analysis (FVA) predictions. For example, FS m can be constrained by imposing production requirements on the lower bounds of the production reaction and biomass.
OptDesign introduces the concept of a noticeable flux difference δ (mmol/gDW/h) between the wild-type strain and the production strain in reactions. OptDesign uses this concept to identify an optimal set of manipulations leading to the production phenotype FS_m. To do so, OptDesign performs two key steps of optimization. First, OptDesign identifies a minimal set of reactions that must deviate from their wild-type flux by at least δ in order to achieve FS_m. This set of reactions forms the candidate regulation targets. Second, OptDesign searches through the regulation candidates, together with knockout candidates, for the optimal combination of manipulations to maximize biochemical production. The following two subsections present these two steps in detail.
Selecting Up/Down-Regulation Reaction Candidates. This step of OptDesign is to identify the minimum number of reactions whose flux must change noticeably if the cellular metabolism shifts from the wild type to the required production state. A reaction is considered a candidate for up-regulation if its flux in the mutant is at least δ units more than that in the wild type. Conversely, the reaction is considered for down-regulation if its flux in the mutant is at least δ units less than that in the wild type. Note that this directional up/down-regulation definition is used for computational convenience; final regulation targets identified by OptDesign will be rationally grouped by contrasting the wild type to the mutant strain by their absolute flux values (detailed later in this section). In any other situation, the reaction is not considered as a candidate for genetic manipulation. Figure 1 illustrates this concept with a toy network of five reactions: if δ is set to 2 units for all five reactions, then R4 and R5 are considered for down-regulation and up-regulation, respectively, whereas R1, R2, and R3 are not selected as regulation candidates since their flux changes from the wild type to the mutant are within the predefined threshold δ. A small sketch of this classification follows.
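The following Python sketch reproduces the classification for the toy network; the numeric fluxes are hypothetical values chosen only to be consistent with the Figure 1 example.

def regulation_candidates(v_wild, v_mut, delta):
    # Classify reactions by directional flux change (toy-example convention).
    up, down = [], []
    for rxn in v_wild:
        change = v_mut[rxn] - v_wild[rxn]
        if change >= delta:
            up.append(rxn)
        elif change <= -delta:
            down.append(rxn)
    return up, down

# Hypothetical wild-type and mutant fluxes (mmol/gDW/h), delta = 2:
v_wild = {"R1": 10, "R2": 5, "R3": 5, "R4": 5, "R5": 0}
v_mut  = {"R1": 10, "R2": 6, "R3": 4, "R4": 2, "R5": 3}
print(regulation_candidates(v_wild, v_mut, delta=2))  # (['R5'], ['R4'])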
An MILP procedure is employed to minimize the number of reactions that must change their flux from the wild type to the mutant by at least δ units. The objective of this MILP is to minimize the sum of the binary variables y_j+ and y_j−, which represent that the flux of reaction j increases or decreases, respectively, by at least a noticeable level δ_j > 0 from the wild type to the production phenotype. Constraints 2b−2e enforce the flux increase and decrease, respectively, where Δv_j^min and Δv_j^max are the lower and upper bounds of the flux change Δv_j. Special reactions whose fluxes are not allowed to decrease (increase), for example, non-growth-associated maintenance, should have a zero value for the lower (upper) bound of their corresponding Δv components. A reaction cannot increase and decrease flux simultaneously, which implies constraint 2f (y_j+ + y_j− ≤ 1). Constraints 2g describe the flux spaces of the wild type and mutant strain at steady state (v ∈ FS_w, v + Δv ∈ FS_m), where FS_w and FS_m are constrained differently by specifying a minimum growth rate and a minimum target production rate, respectively. It is worth noting that the minimum target production is not a requirement for the producing strain. Instead, it is mainly used to identify the set of reactions that must change their flux in order to produce the target compound. In practice, we can fulfil this purpose by setting the minimum target production rate to the maximum theoretical production rate (computed by FBA with an objective to maximize the flux through the reaction acting on the target product).
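One plausible formulation of this step, written in LaTeX and assuming standard big-M linking constraints between the flux changes and the binary indicators (the constraint labels follow the prose above), is:

\begin{aligned}
\textbf{step 1:}\quad
& \min \sum_{j\in J}\bigl(y_j^{+}+y_j^{-}\bigr) \\
\text{s.t.}\quad
& \Delta v_j \ge \delta_j\,y_j^{+}+\Delta v_j^{\min}\,(1-y_j^{+}), && \forall j\in J \quad (2\text{b},2\text{c})\\
& \Delta v_j \le -\delta_j\,y_j^{-}+\Delta v_j^{\max}\,(1-y_j^{-}), && \forall j\in J \quad (2\text{d},2\text{e})\\
& y_j^{+}+y_j^{-}\le 1, && \forall j\in J \quad (2\text{f})\\
& v\in \mathrm{FS}_w,\; v+\Delta v\in \mathrm{FS}_m, && (2\text{g})\\
& y_j^{+},\,y_j^{-}\in\{0,1\}, && \forall j\in J.
\end{aligned}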
Note that this procedure does not need a known reference flux vector for either the wild type or the mutant; instead, it takes into account all possible wild-type and mutant flux distributions that meet the engineering requirements (e.g., growth rate, production/yield). However, it is recommended to make use of flux measurements for the wild-type and mutant strains, if available, in order to select a rational set of regulation targets effectively.
Identifying Optimal Manipulation Strategies. The solution to the MILP (2) yields a flux-increase set F+ (corresponding to reactions with y_j+ = 1) and a flux-decrease set F− (corresponding to reactions with y_j− = 1), in addition to the suggested flux change Δv. However, these two sets are not the minimum number of manipulations needed for the required production state, as the effects of some manipulations can propagate through the whole metabolic network. 26 In addition, there is an engineering cost in manipulating reactions (through gene−protein−reaction associations), and therefore it is assumed there is a limit K_m on the number of genetic manipulations, including up/down-regulations and knockouts. We allow gene/reaction knockout in this step for two reasons. First, reactions having an unnoticeable flux change (below δ_j and thus not in F+ or F−) may sometimes be good manipulation targets, especially when they are involved in completing pathways. Taking them as potential knockout targets could improve biochemical production (essentially a relaxation of the optimization model). Second, there may be reactions in the regulation candidate set that carry near-zero fluxes in the production strain. From a practical point of view, completely deactivating them by gene knockout is easier than regulating their gene expression precisely to the suggested minute fluxes. Reaction knockout candidates (denoted by the set F×) can be selected by a preprocessing approach, 22 which excludes reactions that are essential, irrelevant, or unlikely to be good knockout targets.
Here, we treat the strain design task as a network interdiction problem 22 that maximally forces cells to violate their wild-type phenotypes for production, that is, we choose the optimum manipulations from F+ ∪ F− ∪ F× in favor of biochemical production regardless of what the wild-type flux distribution is. As a result, we develop the following network interdiction problem, where c_P is a coefficient vector for the target biochemical. y_j× = 1 represents the knockout of reaction j, leading to zero flux in this reaction as enforced by constraint 3b. Constraints 3f and 3g limit the allowable number of knockouts and the total number of manipulations, respectively.
The above formulation is a special bilevel problem and can be reformulated as a standard MILP using duality theory 9 (see the reformulation in Supporting Information Data 1). The resulting MILP can be handled either by a modern MILP solver or, if numerous alternative solutions are desired in a single run, by the hybrid Benders algorithm. 22 The solution to problem 3 contains some regulation-associated binary variables with a value of 1, that is, y_j = 1 for some j ∈ F+ ∪ F−. These integer values represent the reaction targets whose flux needs to change for biochemical production. In order to determine how they are to be regulated in experimental implementation, the following classification rule is applied: reaction j is placed in the up-regulation set if j ∈ F+ and y_j = 1, and in the down-regulation set if j ∈ F− and y_j = 1. The output of model (3) predicts which reactions should be up- or down-regulated by at least the chosen flux change threshold. It does not impose exact fluxes on the mutant strain to guarantee high production of the target chemicals. In this sense, the manipulations suggested by OptDesign could be experimentally more feasible than those obtained by existing tools.
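Stated in code, the classification rule amounts to the following small helper; this is a sketch whose set and variable names mirror the notation above rather than any published implementation:

```python
def regulation_targets(y, F_plus, F_minus):
    """Split active regulation variables into up/down-regulation sets."""
    up = {j for j in F_plus if y[j] == 1}     # j in F+ with y_j = 1
    down = {j for j in F_minus if y[j] == 1}  # j in F- with y_j = 1
    return up, down

# Example: y maps reaction ids to the binaries returned by problem (3).
print(regulation_targets({"CS": 1, "LDH_D": 1, "PFL": 0},
                         F_plus={"CS"}, F_minus={"LDH_D", "PFL"}))
```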
Computational Implementation. OptDesign relies on model reduction and candidate selection for computational efficiency. Genome-scale metabolic (GEM) models can be significantly simplified by compressing linearly linked reactions and removing dead-end reactions (those carrying zero flux). Similarly, many reactions can be excluded from consideration with a priori knowledge that, for example, they are vital for cell growth or their knockout is unlikely to improve target production. We followed the model reduction and candidate selection procedure 22,31 (the detailed procedure can be found in Supporting Information Data 1), resulting in a much smaller knockout candidate set for each target product in the latest Escherichia coli GEM iML1515. 32 The flux change threshold δ_j = 1 mmol/gDW/h was used throughout the paper unless otherwise stated. Algorithm 1 presents the pseudocode of OptDesign. It was implemented in MATLAB 2018b to be compatible with the Cobra Toolbox 3.0. 33 All MILPs were solved by Gurobi 9.0.2. 34 It is worth noting that a couple of minutes is enough for a modern optimization solver like Gurobi to identify a reasonably small set of up/down-regulation candidates. The source code is available for download at https://github.com/chang88ye/OptDesign.
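For illustration, the dead-end removal part of this preprocessing can be reproduced with COBRApy along the following lines. This is a sketch only: the paper's actual reduction pipeline is the MATLAB procedure cited above, and loading iML1515 by name assumes COBRApy's built-in model fetcher can retrieve it.

```python
from cobra.io import load_model
from cobra.flux_analysis import find_blocked_reactions

model = load_model("iML1515")             # fetches the E. coli GEM
blocked = find_blocked_reactions(model)   # reactions that can carry no flux
model.remove_reactions(blocked, remove_orphans=True)
print(f"removed {len(blocked)} blocked reactions; "
      f"{len(model.reactions)} remain")
```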
■ CASE STUDIES
The OptDesign framework was tested by identifying metabolic manipulations for the production of three industrially relevant biochemicals using the latest genome-scale metabolic model iML1515 32 for E. coli. Glucose was used as the sole carbon substrate, and its maximum uptake rate was set to 10 mmol/gDW/h. These biochemicals are among the compounds that have been experimentally studied in the literature; in particular, we focused on the native succinate and the non-native lycopene and naringenin 26,35 in our case studies. A comparison between OptDesign predictions and experimentally validated interventions for another nine target biochemicals can be found in Supporting Information Data 2. The heterologous biosynthesis pathways added to iML1515 for the production of the two non-native biochemicals can be found in Supporting Information Data 1. The newly added reactions were charge- and mass-balanced. The growth conditions were the same as in iML1515, except that a minimal cell growth of 0.1 h−1 was imposed on mutant strains for biochemical production. 31 All other parameters remained the same as in the original iML1515. 32 At most 10 manipulations, including no more than 5 knockouts, were allowed; the restriction on knockouts is intended to favor gene expression manipulation over gene knockout. All optimization problems in OptDesign were solved by Gurobi 9.0.2 on a MacBook with a 3.3 GHz Intel Core i5 processor and 16 GB RAM. The optimization process was terminated by whichever of multiple stopping criteria was met first, including the time limit (10^4 s) and the optimality gap (5%). Indeed, we observed that the incumbent solution did not improve either after 3000 s or once the optimality gap reached 5%.
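The following COBRApy snippet sketches this setup for the succinate case. The reaction identifiers used are the standard iML1515 ones (glucose exchange, core biomass, succinate exchange); treat the exact IDs as assumptions if your model build differs.

```python
from cobra.io import load_model

model = load_model("iML1515")   # the published E. coli GEM
# Glucose as the sole carbon source, max uptake 10 mmol/gDW/h:
model.reactions.get_by_id("EX_glc__D_e").lower_bound = -10.0
# Minimal growth of 0.1 1/h imposed on production (mutant) strains:
model.reactions.get_by_id("BIOMASS_Ec_iML1515_core_75p37M").lower_bound = 0.1
# Maximum theoretical production rate via FBA (used as the step-1 target):
model.objective = "EX_succ_e"
print("max theoretical succinate flux:", model.optimize().objective_value)
```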
Case Study 1: Succinate Overproduction. As a starting point, we wondered which reactions are likely to be good regulation targets and how they are distributed in metabolic networks. Therefore, we extracted the candidate regulation targets from the first step of OptDesign and reorganized them into different metabolic subsystems, as shown in Figure 2. It can be observed that the majority of regulation candidates are from the Krebs cycle and from the fermentation products (i.e., formate, acetate, and ethanol) that share the precursor acetyl-CoA with succinate. It is suggested that all the reaction candidates from the Krebs cycle should increase their flux and that those related to the formation of formate, acetate, and ethanol should lower their activity. This prediction is consistent with many studies of succinate production. 26,36,37 In addition to these two main subsystems, reactions from glucose metabolism, pyruvate metabolism, and cofactor conversion/formation are also possible regulation targets. For example, the glucose transporter (GLCptspp) predicted for down-regulation here has been a deletion target in another study 38 to enhance succinate production. Figure 3a shows a few final design strategies identified by OptDesign that can improve succinate production. It suggests eight primary manipulations, including the knockout of five reactions, up-regulation of citrate synthase (CS) and pyruvate dehydrogenase (PDH), and down-regulation of periplasmic ATP synthase (ATPS4rpp). Two crucial enzymes in the formation of the fermentation products lactate and ethanol, that is, lactate dehydrogenase (LDH_D) and acetaldehyde dehydrogenase (ACALD), are suggested to be deactivated, as they are considered competing pathways consuming succinate precursors. These two manipulation targets have been observed in several studies. 39,40 The knockout of FAD reductase (FADRx) increases the availability of NADH, which has been shown to be an effective approach to high succinate production. 41 The methylglyoxal synthase (MGSA) pathway to lactate is another primary knockout target predicted by OptDesign. The removal of this minor pathway should result in pyruvate accumulation for succinate biosynthesis. Interestingly, this knockout has been implemented in previous studies, 37,40 resulting in an increased flux in the Krebs cycle. Ribulose-phosphate 3-epimerase (RPE) is another manipulation target identified by OptDesign, whose knockout blocks the conversion of ribose-5-phosphate (R5P) to D-xylulose 5-phosphate (xu5p-D) in the pentose phosphate pathway. 42 Therefore, it is expected that the primary glycolytic flux flows into the precursors of succinate, for example, phosphoenolpyruvate and pyruvate.
OptDesign suggests overexpressing two enzymes in the succinate biosynthetic pathway, that is, PDH and CS, which is intuitively straightforward to understand. In anaerobic E. coli, PDH activity is either low or undetectable in order to maintain redox balance. 43 However, it has been observed that an E. coli mutant with activated PDH, providing extra NADH, improves succinate production. 40 Overexpression of CS, which is also suggested in another study, 26 has been observed to increase the flux in the Krebs cycle in a malic acid production E. coli strain. 44 OptDesign further predicts that high succinate production requires either up-regulation of glucokinase (HEX1) or down-regulation of a phosphoenolpyruvate-dependent phosphotransferase system (PTS)-related reaction (GLCptspp), both of which have the same effect: glucose transport is favorably routed through the ATP-consuming HEX1 rather than the more efficient but phosphoenolpyruvate-dependent PTS route. Consequently, this improves the availability of PEP, which is a precursor for biomass formation and for many biochemicals including succinate. Both manipulation approaches have been observed to improve the succinate yield. 36 However, the increased ATP demand due to glucose transport via HEX1 has to be compensated by increased ATP production by other means. For this reason, OptDesign suggests down-regulation of ATPS4rpp to reduce the cleavage of ATP to ADP in order to meet metabolic energy requirements. This prediction has also been suggested in another study. 45 OptDesign also identifies a number of additional modification targets, such as pyruvate formate lyase (PFL), phosphotransacetylase (PTAr), and acetate kinase (ACKr), that have been widely used as knockouts to increase the flux toward the Krebs cycle in succinate-focused studies. 37,40 However, here OptDesign suggests down-regulating these enzymes instead of deactivating them completely. In addition, phosphoenolpyruvate carboxylase (PPC) and malate dehydrogenase (MDH) are also predicted as promising overexpression targets. This result is consistent with experimental studies that show increased succinate production through up-regulating these two enzymes. 40,46

Case Study 2: Naringenin Production. A three-step pathway for naringenin was introduced into the metabolic network of E. coli (see Figure 3b), and unlimited coumaric acid (cma) was supplemented in the growth medium. 35 OptDesign predicts that naringenin production requires four primary knockouts, one up-regulation, and two down-regulations. The first two primary knockouts are the dihydroorotic acid dehydrogenases (DHORD2 and DHORD5) that catalyze the oxidation of dihydroorotate to orotate in the pyrimidine biosynthesis pathway. The knockout of pyrD, the gene underlying these two reactions, results in a reduced growth rate, 47 which might save the carbon source for naringenin biosynthesis. The knockout of succinate dehydrogenase (SUCDi) creates a surplus of the biosynthetic precursor acetyl-CoA for naringenin, which has been experimentally observed in a previous study. 35 Phosphoenolpyruvate carboxylase (PPC), a metabolic shortcut for the conversion of phosphoenolpyruvate to oxaloacetate and the byproduct phosphate, is also listed as a primary knockout.

Figure 3. Design strategies identified by OptDesign for biochemical production in E. coli. Reaction names and their arrow symbols in the same color mean that they must be manipulated in mutant strains. Reaction names colored only (i.e., red, green, or blue) mean that they are alternative manipulations.
Dashed arrows represent a merge of multiple conversion steps between metabolites. Design strategies are summarized in boxes above the simplified metabolic maps. Abbreviations of metabolite names are as follows: g6p, glucose-6-phosphate; f6p, D-fructose 6-phosphate; g3p, glyceraldehyde-3-phosphate; 13dpg, 3-phospho-D-glyceroyl phosphate; 3pg, 3-phospho-D-glycerate; 6pgc, 6-phospho-D-gluconate; ru5p-D, D-ribulose 5-phosphate; r5p, alpha-D-ribose 5-phosphate; xu5p-D, D-xylulose 5-phosphate; dhap, dihydroxyacetone phosphate; mthgxl, methylglyoxal; pep, phosphoenolpyruvate; pyr, pyruvate; lac-D, D-lactate; dxyl5p, 1-deoxy-D-xylulose 5-phosphate; ipdp, isopentenyl diphosphate; frdp, farnesyl diphosphate; ggdp, geranylgeranyl diphosphate; phyto, all-trans-phytoene; ppi, diphosphate; pi, phosphate; gly, glycine; mlthf, 5,10-methylenetetrahydrofolate; flxso, flavodoxin semi-oxidized; flxr, flavodoxin reduced; accoa, acetyl-CoA; cit, citrate; icit, isocitrate; akg, 2-oxoglutarate; succ, succinate; fum, fumarate; mal-L, L-malate; oaa, oxaloacetate; hom-L, L-homoserine; thr-L, L-threonine; dhor-S, (S)-dihydroorotate; orot, orotate; malcoa, malonyl-CoA; cma, coumaric acid; cmcoa, coumaroyl-CoA; chal, naringenin chalcone; fad, flavin adenine dinucleotide (oxidized); fadh2, flavin adenine dinucleotide (reduced). Abbreviations of reaction names follow the iML1515 model definitions.
We postulate that, in addition to avoiding the accumulation of phosphate, its deletion could not only direct the flux through pyruvate to acetyl-CoA but also reduce the consumption of acetyl-CoA in the Krebs cycle for the replenishment of oxaloacetate. In fact, PPC mutants were found to have a flux increase from pyruvate to acetyl-CoA in a 13C-labeling experiment. 48 Two linear reactions, that is, threonine synthase (THRS) and homoserine kinase (HSK), which are involved in the formation of L-threonine from L-homoserine, are predicted as down-regulation targets. This manipulation is expected to reduce carbon consumption in competing pathways, thereby increasing the carbon flux toward naringenin. Another down-regulation target is the inorganic diphosphatase (PPA), which catalyzes the conversion of one pyrophosphate ion to two phosphate ions. This manipulation is not intuitively straightforward and is believed to create a combined effect with other manipulations to boost naringenin production. Since PPA down-regulation produces less phosphate, which is needed in the added naringenin biosynthesis pathway, phosphate has to be balanced through an increase in its transport channel, that is, the phosphate transporter (PItex).
Aside from the above primary manipulations, it is also predicted that naringenin production strains must block at least one of the following reactions: two reactions of the Entner−Doudoroff pathway (EDD/EDA), pyruvate synthase (POR5), isocitrate lyase (ICL), and PDH. Blocking EDD/EDA might increase the use of glycolysis, producing more ATP, which is needed in the heterologous naringenin pathway. The removal of POR5 or PDH forces E. coli to use alternative conversion routes from pyruvate to acetyl-CoA without depleting coenzyme A (CoA), another primary precursor for naringenin biosynthesis. The knockout of ICL prevents the malate synthase reaction from consuming acetyl-CoA. In addition, OptDesign also predicts that the up-regulation of acetyl-CoA carboxylase (ACCOAC) helps to increase the production of naringenin, which has been implemented in another study. 35

Case Study 3: Lycopene Production. A non-native lycopene biosynthetic pathway consisting of three key reactions was added to the metabolic network of E. coli (see Figure 3c). A preliminary execution of OptDesign predicted the need for only one modification, namely the overexpression of the gene encoding dimethylallyltranstransferase (DMATT) or the one encoding geranyltranstransferase (GRTT). While this manipulation intuitively makes sense, gene overexpression only in the upstream biosynthesis pathway of lycopene does not lead to high lycopene production due to a low concentration of precursors, as experimentally illustrated in another study. 49 Therefore, we ran our tool again while disallowing DMATT/GRTT as valid regulation targets. Consequently, a variety of design strategies were identified, as shown in Figure 3c. Specifically, all the design strategies are combinations of seven manipulations, consisting of five knockouts, two up-regulations, and one down-regulation; they differ from each other in only two knockout targets. The three primary knockouts, that is, ribose-5-phosphate isomerase (RPI), triose-phosphate isomerase (TPI), and PDH, are linked to two precursors (i.e., glyceraldehyde-3-phosphate and pyruvate) of lycopene biosynthesis. The knockout of RPI reroutes the carbon flux flowing into the lycopene precursors through more effective metabolic routes (e.g., glycolysis) rather than the non-oxidative pentose phosphate pathway, which is consistent with a previous study, 3 in addition to slowing down cell growth due to reduced ribose-5-phosphate formation for RNA and DNA synthesis. Both TPI and PDH knockouts should immediately increase the availability of the lycopene precursors, with the latter already having been confirmed experimentally to increase lycopene biosynthesis in another study. 50 Apart from glyceraldehyde-3-phosphate and pyruvate, acetyl-CoA is also an important precursor that forms isopentenyl diphosphate, a building block for lycopene, via a different pathway. Therefore, it is expected that increasing the availability of acetyl-CoA should also improve lycopene production. Unsurprisingly, POR5 is predicted as an up-regulation target to compensate for the loss of PDH for acetyl-CoA formation. Also, reducing the amount of acetyl-CoA flowing into the Krebs cycle was found to increase the flux toward isopentenyl diphosphate. 51 This is fulfilled by removing either fumarase (FUM) or SUCDi in this study. Each of these two knockouts has to be paired with an additional knockout outside the Krebs cycle.
This leads to the three most frequent pairs, that is, the glycine cleavage system (CLYCL) with FUM, CLYCL with SUCDi, and SUCDi with EDD/EDA. The predicted CLYCL knockout is believed to help reduce the cleavage of 3-phospho-D-glycerate into the glycine biosynthetic pathway so that more pyruvate can accumulate. Alternatively, blocking the Entner−Doudoroff pathway allows more flux into glycolysis, leading to a higher production of the two lycopene precursors (i.e., glyceraldehyde-3-phosphate and pyruvate).
The NADPH-dependent flavodoxin reductase (FLDR2) is another primary up-regulation target predicted by OptDesign. Overexpressing FLDR2 is thought to balance the significantly increased ratio of NADPH to NADP+ caused by the last step of the lycopene biosynthetic pathway. Last, it is predicted that reducing the phosphate uptake rate improves lycopene production. This is probably because two out of the three reactions added for lycopene biosynthesis produce diphosphate, which can be converted to phosphate, and a flux decrease in this uptake reaction rebalances phosphate in the system.
■ DISCUSSION
This paper has presented a new computational tool, called OptDesign, to aid strain development through rational identification of genetic manipulations including reaction knockout and flux up/down-regulation. This tool has been benchmarked via three case studies of different biochemicals, demonstrating its capability of identifying high-quality strain design strategies to improve biochemical production.
OptDesign predicts well, in its first computational step, a set of candidates that can potentially be used as experimental regulation targets, as shown in the succinate case. In a second computational step, the algorithm further prunes this set to a realistically acceptable size while optimizing biochemical production. Interestingly, many of the predicted manipulations have been experimentally implemented in previous studies. Taking succinate production as an example, 10 out of 14 manipulations (MGSA, ACALD, LDH_D, HEX1, PDH, PFL, PTAr/ACKr, GLCptspp, PPC, and MDH) suggested by OptDesign have been employed in succinate-producing strains. 36,37,40,46 Specifically, it has been shown that the engineered E. coli strains KJ060 and KJ073 produce succinate yields of 1.2−1.6 mol/mol glucose after removing competing pathways that lead to the byproducts ethanol, acetate, formate, and lactate. 40 These strains were developed with acetate added to the culture media, because the deletion of PFL causes acetate auxotrophy under anaerobic conditions. 37 However, OptDesign suggests that there is no need to completely deactivate PFL; instead, down-regulating it avoids acetate auxotrophy while still achieving high succinate production. Additionally, it has been observed that glucose transport favoring glucokinase over the PEP-dependent PTS yields higher succinate production. 52 Furthermore, the overexpression of PPC in E. coli for increasing succinate yields has been confirmed in a previous study. 46 OptDesign also suggests a few new modifications, such as the deletion of FADRx and RPE, the up-regulation of CS, and the down-regulation of ATPS4rpp, which to the best of our knowledge have not been experimentally implemented for succinate production. While up-regulation of CS has been shown to increase malic acid production, 44 it remains unclear whether this manipulation is also useful for succinate production. The suggested flux modifications on FADRx and ATPS4rpp reconfirm the importance of ATP and redox balance in succinate-producing strains. 40 Deletion of RPE has been associated with low flux in the Krebs cycle, 53 suggesting that metabolic bottlenecks may exist upstream of the Krebs cycle. The design strategies predicted by OptDesign imply that a synergistic effect of the RPE knockout with the other identified flux modifications can lead to high production of succinate. Similar observations can also be made for the production of the two non-native biochemicals, naringenin and lycopene, studied in this paper.
We have so far assumed that regulation targets can be selected only from the minimal regulation set derived from the first computational step of OptDesign. This assumption ensures that as few regulation manipulations as possible are used, since suggested regulation levels cannot be exactly guaranteed in experimental implementation. However, in the case that multiple metabolic routes exist between two metabolites, the minimal regulation set will include only one of them. In view of this, we have also computed the maximal regulation set by maximizing the number of reactions that can have noticeable flux changes. Taking lycopene as an example, the number of regulation candidates increases sharply from 43 in the minimal regulation set to 119 in the maximal regulation set (see Supporting Information Data 1). Consequently, the resulting larger solution space makes it possible to identify design strategies with a better minimum guaranteed flux for lycopene (see Figure 4). In both cases, the design strategies identified by OptDesign couple lycopene production with growth, although the strategy from the maximal regulation set yields a higher production rate than that from the minimal regulation set.
OptDesign has two key parameters, that is, the flux change δ and the minimum required growth rate, which influence the quality of solutions for high production. Figure 5 shows the sensitivity of OptDesign to these two parameters when identifying design strategies for succinate production (a sensitivity analysis of these parameters for lycopene and naringenin production can be seen in Supporting Information Data 1). It is observed that high succinate production is achieved near the anti-diagonal line in the 2-D parameter space. Low production strategies are seen when both parameters have either small or big values. This is because when the minimum required growth is high, there is little room to adjust flux for biochemical production; on the contrary, when the minimum required growth is small, large flux changes (and more regulation candidates to choose from, as indicated in Supporting Information Data 1) can be made to boost biochemical production. δ impacts production too, as it affects not only the candidate regulation set but also the flux of the candidate reactions for regulation on metabolic networks (see details in Supporting Information Data 1). In practice, careful selection of δ and the growth threshold is required to yield optimum design strategies. In addition, OptDesign can easily be used with a reference flux vector v* by binding the flux bounds of the wild type to v* in eq 3. Figure 6 shows the production envelopes of the design strategies, identified with and without the use of in silico reference flux vectors, for the three target products. It can be observed that the use of reference flux vectors increases the size of the production envelopes, and the maximum growth rate of reference-guided mutants is higher than that of reference-free mutants. This may be explained by the fact that fixing the wild-type flux vector v in eq 3 at v* reduces the room for flux adjustments, hence impacting growth rate less. The production envelopes for succinate also suggest that reference-guided design can sometimes lead to better solutions. Figure 6 also demonstrates the capability of OptDesign to create (strongly) growth-coupled producing strains whose (minimum) target production increases with growth, regardless of reference flux vectors.
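A production envelope such as those in Figure 6 can be traced, for example, with COBRApy's built-in helper. The snippet below is a sketch for the unmodified iML1515 (the designed mutants themselves are not reconstructed here, and the reaction IDs are the standard iML1515 ones):

```python
from cobra.io import load_model
from cobra.flux_analysis import production_envelope

model = load_model("iML1515")                     # E. coli GEM
model.reactions.EX_glc__D_e.lower_bound = -10.0   # glucose uptake limit
env = production_envelope(
    model,
    reactions=["BIOMASS_Ec_iML1515_core_75p37M"], # growth on the x-axis
    objective="EX_succ_e",                        # succinate secretion
    points=30,
)
# env holds min/max product flux at each sampled growth rate.
print(env.head())
```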
Furthermore, OptDesign highlights the benefit of flux regulation in strain design. For example, with a limit of 5 manipulations (including knockouts and flux regulations), OptDesign found numerous design strategies for naringenin, with the best having a minimum guaranteed production flux of 1.73 mmol/gDW/h. In contrast, some existing strain design tools (e.g., OptKnock 9 and NIHBA 22 ) using knockouts only did not identify any strategies leading to naringenin production. Similar to OptDesign, there also exist a few tools, for example, OptForce 26 and OptReg, 10 that can identify both flux regulation and knockout targets. We compared OptDesign with OptForce and OptReg in terms of manipulation targets. For succinate production (see Figure 7), there is a large overlap between the design strategies predicted by these tools, and the common interventions, which tend to increase the flux toward succinate, are all from the core central metabolism, highlighting that intervention on these common targets is effective for increasing the availability of succinate precursors. In addition, Figure 7 also shows that OptDesign can find more novel manipulations than the other two tools, demonstrating its capability C1 (Table 1), which enables the search for near-optimal alternatives.

Figure 4. Production envelopes of different growth-coupled design strategies consisting of no more than five manipulations for lycopene. The production envelope illustrates the minimum and maximum production rates a production strain can achieve at different growth rates compared to the wild type. The solid blue production envelope is for the design strategy using the minimal regulation set: ALCD19 (knockout), TKT2 (knockout), DXPS (overexpressed), PItex (overexpressed), and TPI (underexpressed). The dashed red production envelope is for the design strategy using the maximal regulation set: FUM (knockout), R1PK (knockout), ADK3 (overexpressed), PItex (overexpressed), and ADK1 (underexpressed). Reaction names are consistent with the genome-scale metabolic network model of E. coli iML1515.
OptReg identified fewer regulation targets than the others, as it tends to couple the maximum growth rate with target production. OptForce eliminated possible near-optimal but important manipulations by restricting its overproduction target, thereby producing fewer intervention targets than OptDesign.
Finally, OptDesign has been developed to identify metabolic manipulations regardless of what the wild-type flux distribution looks like, and it can be used with flux measurements if available. Indeed, a measured wild-type flux vector can help refine the manipulation candidates, leading to a more accurate prediction of design strategies. In addition, the threshold for noticeable flux change defined in this work can be further adjusted with measured data, and different reactions can have distinct values for this parameter. Dedicated threshold values allow for a better prediction of rational flux modifications. Although OptDesign has been implemented for reaction-level phenotype prediction, it can easily be modified to predict design strategies at the gene level. For example, OptDesign can be applied to metabolic network models with an advanced stoichiometric representation of gene−protein−reaction associations, 55 from which design strategies consisting of gene targets can be identified.
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acssynbio.1c00610. Bilevel problem reformulation, lycopene and naringenin biosynthetic pathways, model reduction, and impact of OptDesign parameters on biochemical production (PDF). Comparison between in silico predictions and in vivo manipulations for nine compounds; knockout and regulation candidates for succinate, lycopene, and naringenin; impact of reference flux vectors (XLSX).

Figure 7. Comparison of different strain design tools without reference flux vectors for succinate overproduction. The intervention targets were identified using the default genome-scale metabolic network model of E. coli iML1515. 32 A 100% theoretical succinate yield was used in OptForce, and the regulation parameter C in OptReg was set to 0.5. Reaction names are consistent with the iML1515 model. | 9,237.2 | 2021-12-13T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Lesion outline and thermal field distribution of ablative in vitro experiments in myocardia: comparison of radiofrequency and laser ablation
Objectives To explore the lesion outline and thermal field distribution of radiofrequency ablation (RFA) and laser ablation (LA) in myocardial ablation in vitro. Materials and methods Twenty-four fresh porcine hearts were ablated with RFA or LA in vitro. The radiofrequency electrode or laser fiber and two parallel thermocouple probes were inserted into the myocardium under ultrasound guidance. The output power was 20 W for RFA and 5 W for LA, and the total thermal energies were 1200 J, 2400 J, 3600 J, and 4800 J. The extent of the ablation lesions was measured, and temperature data were recorded simultaneously. Results All coagulation zones were ellipsoidal with clear boundaries. The center of the LA lesion was carbonized more obviously than that of RFA. With the accumulation of thermal energy and the extension of time, the ablation lesions induced by both RFA and LA were enlarged. Comparing the increase in thermal energy between the two groups, both the short-axis diameter and the volume change showed significant differences between the 1200 J and 3600 J groups and between the 2400 J and 4800 J groups (all P < 0.05). Both the short-axis diameter and the volume of the coagulation necrosis zone formed by LA were always larger than those of RFA at the same accumulated thermal energy. The temperatures at the two thermocouple probes increased with each energy increment. At the same accumulated energy, the temperature of LA was much higher than that of RFA at the same point. The initial temperature increase at 0.5 cm for LA was rapid: the temperature reached 43 °C as the accumulated energy reached 1200 J, after approximately 4 min; after that, the temperature increased at a slower rate to 70 °C. For RFA at the 0.5 cm point, the initial temperature increased rapidly to 30 °C at the same accumulated energy of 1200 J, after only 1 min. Within the 4800 J range of accumulated thermal energy, only the temperature of LA at the 0.5 cm point exceeded 60 °C, when the energy reached approximately 3000 J. Conclusions Both RFA and LA were shown to be reliable methods for myocardial ablation. The lesion outline and thermal field distribution of RFA and LA should be considered when performing thermal ablation in the intramyocardial septum for hypertrophic obstructive cardiomyopathy.
Introduction
Over the past decade, invasive therapeutic options for hypertrophic obstructive cardiomyopathy (HOCM), including surgical myectomy, alcohol septal ablation (ASA), radiofrequency catheter ablation (RFCA), dual-chamber pacing and the implantable cardioverter defibrillator (ICD), have been developed to improve clinical symptoms and to relieve left ventricular outflow tract (LVOT) obstruction [1][2][3][4][5][6]. Recently, a novel therapy of transthoracic echocardiography-guided percutaneous intramyocardial septal radiofrequency ablation (PIMSRA) has been investigated [7,8]. PIMSRA is a percutaneous intramyocardial, non-transaortic and non-transcoronary operation to reduce LVOT obstruction. This procedure avoids traditional sternotomy and damage to the conduction system, which is distributed underneath the endocardium. The treatment effectively improved the hemodynamics and symptoms of patients with HOCM during 6 months of follow-up [7]. To protect the conduction system, the investigators recommended maintaining a 3 mm safety margin between the outline of the ablated zone and the endocardium of both the left and right ventricles. Although an ECG monitor is used to detect changes in rhythm or configuration, it is meaningful to pre-estimate the range of coagulation necrosis in ablation lesions.
Currently, this procedure is performed by ablating the hypertrophic interventricular septum (IVS) with a radiofrequency needle. Radiofrequency ablation (RFA) is a safe and effective treatment method for tumors, and it can achieve a definite and stable ablation boundary [9,10]. Unlike tumor ablation, cardiac ablation is concerned with safety boundaries to avoid injuring the conduction system. In addition to RFA, laser ablation (LA) can also achieve a definite ablation boundary, and this procedure is usually used in small organs such as the thyroid and prostate [11,12]. Moreover, the needle for LA is 21-gauge, which is much finer than the 17-gauge needle used for RFA; this difference may result in less injury and bleeding. Numerous trials with different treatment algorithms have confirmed the clinical effectiveness and safety of LA in solid thyroid nodules or lesions with variable fluid components [12][13][14]. Therefore, LA might be a potential option for cardiac ablation.
In this study, we used porcine hearts to investigate the lesion outline and thermal field distribution of RFA and LA for myocardial ablation in vitro according to different thermal energies and power outputs.
Materials and methods
No institutional review board approval was necessary, as no human subjects participated in this study. Furthermore, animal committee approval was not necessary, as we used an in vitro porcine heart model. Twenty-four fresh room-temperature porcine hearts were obtained from the same local butcher. The average weight of the porcine hearts was 500 ± 50 g.
RFA was performed using the Cool-tip™ radiofrequency system (Valleylab, Boulder, CO, USA). This system consists of the following components: an RF generator (maximum power: 200 W, operating frequency: 480 kHz), a 17-gauge internally cooled monopolar electrode with a 2-cm exposed tip, a peristaltic perfusion pump, and a 10-cm² grounding pad. The porcine hearts were immersed in saline. The grounding pad was placed at least 30 cm away from the electrode. A mechanical pump was used to cool the electrode with an internal circulation of sterile saline (4 °C) at a flow rate of approximately 100 mL/min. During RFA, the electrode was placed 3 cm into the porcine heart. The output power was controlled at 20 W, and the ablation times were set to 1 min, 2 min, 3 min and 4 min.
The LA equipment was an EchoLaser type X4 (Esaote Company, Florence, Italy) with a 300 μm plane-cut optic fiber sheathed by a 21-gauge PTC needle, radiating laser light at a wavelength of 1064 nm. The PTC needle was inserted into the target position under ultrasound guidance, and the core needle was then pulled out. The fiber was inserted through the needle sheath into the same position. The PTC needle was then withdrawn by 5 mm, leaving the tip of the fiber in direct contact with the tissue. LA was launched at an output power of 5 W, and the gross energies were set to 1200 J, 2400 J, 3600 J and 4800 J.
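As a side note, the ablation times and energy groups above are linked by simple arithmetic (energy = power × time), which the snippet below makes explicit: at 20 W, 1200 J is reached in 1 min, whereas at 5 W it takes 4 min, matching the temperature timings reported in the Results.

```python
# Energy groups (J) versus time (min) for the two output powers used.
for power_w, label in ((20, "RFA"), (5, "LA")):
    for energy_j in (1200, 2400, 3600, 4800):
        minutes = energy_j / power_w / 60
        print(f"{label}: {energy_j} J at {power_w} W -> {minutes:.0f} min")
```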
Before ablation, ultrasonography was used to guide the insertion of the electrode or laser fiber. The electrode or laser fiber was inserted into the thickest part (2-3 cm) of the interventricular septum of each porcine heart. Temperatures were measured throughout each ablation using a SENDAE SD-TC02B thermal monitor (Sun-Gun Automation Engineer Company, Guangzhou, China). Two thermocouple probes were implanted at the same depth, parallel to the needle, at distances of 5 mm and 10 mm (Fig. 1a, b). Because only the thermocouple tip measures temperature, its position was verified with real-time ultrasound imaging. The temperature data were collected every 10 s for the entirety of each ablation.
Each post-ablation coagulation specimen was sectioned along the electrode shaft. The diameter of the coagulative necrosis zone along the needle insertion axis was taken as the long-axis diameter, while the short-axis diameter was measured perpendicular to the long-axis diameter (Fig. 1c). According to previous experiments, the short-axis diameters were usually symmetrical in this in vitro setting; thus, the short-axis diameter was used twice in the volume calculations. The coagulation volume (V) was calculated using the formula for ellipsoids: V = (π × long-axis diameter × short-axis diameter × short-axis diameter)/6 [15]. Measurements were performed only on the central zone of coagulative necrosis, which consisted of a central charring zone and a white coagulation zone. Two individuals (WL, JL) working in consensus measured all ablation zones. All of the operations described above were performed three times in each group.

Fig. 1 a Photograph shows radiofrequency ablation (triangle) in a porcine heart. Two thermocouple probes (asterisk) were placed at the same depth, parallel to the needle, at distances of 5 mm and 10 mm to monitor the local tissue temperature during the procedure. b Photograph shows laser ablation (arrowhead) in a porcine heart, with the two thermocouple probes (asterisk) placed in the same way. c The coagulation necrosis zone of radiofrequency ablation; the long-axis diameter (a) and short-axis diameter (b) are shown. d The coagulation necrosis zone of laser ablation; the center of laser ablation was carbonized more obviously than that of radiofrequency ablation.
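For a worked check of the ellipsoid formula above, the snippet below reproduces the laser-ablation lesion volume reported in the Results for the 1200 J group (2.1 cm long axis, 1.1 cm short axis, about 1.3 cm³):

```python
import math

def lesion_volume_cm3(long_axis_cm: float, short_axis_cm: float) -> float:
    """Ellipsoid volume with the short axis used twice: pi*a*b*b/6."""
    return math.pi * long_axis_cm * short_axis_cm ** 2 / 6.0

print(round(lesion_volume_cm3(2.1, 1.1), 1))  # -> 1.3 (cm^3)
```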
Statistical analysis
Data analysis was performed with SPSS 22.0 software (SPSS Inc., Chicago, IL). The results are presented as the mean ± SEM (standard error of the mean). Comparisons between two groups were analyzed using a two-sided Student's t test, and comparisons among multiple groups were analyzed using multiway analysis of variance (ANOVA). The temperature rise curves were plotted in Origin software (OriginLab, version 8.5) and analyzed with repeated-measures ANOVA. P values less than 0.05 indicated statistically significant differences.
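To illustrate the two-group comparison, a minimal SciPy equivalent of the t test is sketched below; the three measurements per group are illustrative placeholders, not the study's raw data:

```python
from scipy import stats

# Illustrative lesion volumes (cm^3) for two groups, n = 3 each.
la_volumes = [1.1, 1.4, 1.3]
rfa_volumes = [0.3, 0.3, 0.4]
t_stat, p_value = stats.ttest_ind(la_volumes, rfa_volumes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```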
Results
The coagulation necrosis zones in the porcine hearts after ablation were pale and hard. The ablation zones were almost ellipsoidal with clear boundaries and did not show undesired extension of coagulation along the needle shaft. The center of the LA lesion was carbonized more obviously than that of RFA (Fig. 1d).
The comparison of lesion outlines between the two ablation modes
The size changes, including the long-axis diameter, short-axis diameter and volume, after thermal ablation at different thermal energies induced by LA and RFA are shown in Table 1. The long-axis diameters, short-axis diameters and volumes for both thermal methods were enlarged with increasing thermal energy.
For LA, the smallest ablation lesion was 1.3 ± 0.3 cm³ in volume, with a long-axis diameter of 2.1 ± 0.1 cm and a short-axis diameter of 1.1 ± 0.1 cm at 1200 J, while the largest ablation lesion was 5.3 ± 0.2 cm³ in volume, with a long-axis diameter of 2.5 ± 0.1 cm and a short-axis diameter of 2.1 ± 0.1 cm at 4800 J. The long-axis diameter at 1200 J (2.1 ± 0.1 cm) was significantly different from that of the 4800 J group (2.5 ± 0.1 cm, P < 0.05). The short-axis diameter changed significantly between 1200 J (1.1 ± 0.1 cm) and 3600 J (1.9 ± 0.1 cm) and between 2400 J (1.5 ± 0.1 cm) and 4800 J (2.1 ± 0.1 cm) (all P < 0.05). The volume change was significantly different between all the groups (all P < 0.05) (Fig. 2). For RFA, the smallest ablation lesion was 0.3 ± 0.0 cm³ in volume, with a long-axis diameter of 1.2 ± 0.1 cm and a short-axis diameter of 0.6 ± 0.0 cm at 1200 J, while the largest ablation lesion was 2.2 ± 0.2 cm³ in volume, with a long-axis diameter of 2.3 ± 0.0 cm and a short-axis diameter of 1.4 ± 0.1 cm at 4800 J. Comparing the increases in thermal energy between the groups, the long-axis diameter change was statistically significant only between 1200 and 4800 J; the short-axis diameter and volume changes showed significant differences between 1200 and 3600 J and between 2400 and 4800 J (Fig. 3). At the same accumulated thermal energy, LA had a larger long-axis diameter than RFA at 1200 J and 4800 J. Both the short-axis diameter and the volume of the coagulation necrosis zone formed by LA were always larger than those of RFA at each accumulated thermal energy (Fig. 4).
Temperature measurement evaluation
The average temperature measurements of the two thermocouple probes are shown in Fig. 5. The temperature increased with each energy increment. The temperature near the antenna, at 0.5 cm, showed a higher value and a greater change than that in the far-field region at 1.0 cm, owing to the localization of heating. In contrast, the temperature curves in the region far from the antenna were much smoother, since thermal conduction drove the temperature rise there. At the same accumulated energy, the temperature of LA was much higher than that of RFA at the same point. Interestingly, the initial temperature increase of LA at 0.5 cm was rapid: the temperature reached 43 °C at an accumulated energy of 1200 J after approximately 4 min, and after that the temperature increased at a slower rate to 70 °C.

Fig. 2 The changes in the a long-axis diameter, b short-axis diameter and c volume after myocardial ablation with different laser ablative energies. *The results between the different energies are significantly different (P < 0.05). Fig. 3 The changes in the a long-axis diameter, b short-axis diameter and c volume after myocardial ablation with different radiofrequency ablative energies. *The results between the different energies are significantly different (P < 0.05)
Discussion
In this study, with increasing accumulated thermal energy, both the long-axis and short-axis diameters were enlarged; in particular, the short-axis diameter reached 14 mm for RFA and 21 mm for LA, which covers the ablation requirement for ventricular septal thickness (≥ 15 mm). The ablation lesions formed by LA were larger than those formed by RFA under the same output energy, showing a greater efficiency of LA than of RFA in our settings (< 4800 J). In terms of the thermal distribution, the temperature of the far-field region was lower and increased more slowly than that of the central zone for both LA and RFA. Moreover, the temperature of the LA group was always higher than that of the RFA group at the same point under the same output energy, which also suggests that LA might be more efficient than RFA at the same energy consumption. Minimally invasive thermal ablation of lesions has become common since the advent of modern guidance methods [16]. Percutaneous thermal ablation was initially used for the treatment of small, unresectable tumors or for patients who are poor surgical candidates [17][18][19][20]. As technology advances and exploration continues, thermal ablation will no longer be confined to the treatment of tumors but will be used in other tissues and organs such as the lung and heart. RFA is the most widely used ablation method, especially for the therapy of hepatocellular carcinoma (HCC) [21]. The heating principle of RFA is that the radiofrequency electrode generates an electric field with a high-frequency alternating current; frictional heat is generated as the ions in the tissue attempt to follow the changing directions of the alternating current. LA is another minimally invasive local-ablative technique, less investigated and used than RFA. However, some published data have shown that LA is equivalent to RFA in terms of both tumor control and long-term outcomes for the percutaneous treatment of HCC [22][23][24]. The principle of LA is based on the spontaneous emission of characteristic photons by excited atoms: light energy, produced by the laser equipment from electrical energy, acts on tissues to generate heat. The local temperature can rise above 200 °C, causing the local tissue to coagulate, become necrotic, char, or even vaporize. However, because laser light is easily scattered and absorbed, this modality has limited tissue penetration, and hence the ablation area is very small, approximately 1-2 cm². Under these conditions, LA is typically used in small organs such as the thyroid, prostate and nerves [25][26][27][28][29]. Therefore, LA has advantages in terms of precision and efficiency. Additionally, multiple laser fibers can be used together, on account of their thin profile, to improve effectiveness and cover a wider range.
We compared the ablative effects of LA and RFA on the myocardium in vitro. Similar results have not been reported in previous studies. However, there are several limitations in this study. First, this study used healthy porcine hearts as models, which differ from diseased hearts in thickness and biological structure. This constrained us to a single needle and a single ablation, and a low output power was set during radiofrequency ablation. Second, this experiment only addressed the effectiveness of myocardial ablation in vitro, and neither live animal models nor HOCM patients were involved. As a result, some important factors were ignored, such as blood perfusion, myocardial motion and the heat sink effect. Furthermore, we did not perform pathologic studies to confirm that the ablations were complete, and the thermal field had limitations in terms of identifying incomplete ablations. Therefore, the results of this study provide only a restricted reference for animal models and HOCM patients in vivo, and more intensive exploration is needed.
Conclusions
This study reports that the thermal ablation techniques RFA and LA are technically feasible and promising approaches for the treatment of HOCM because of their controlled and effective necrosis and the relatively secure temperature changes. We found that LA had better ablation efficiency than RFA in the ablation zone range and resulted in temperature changes with limited thermal output energy. Certainly, long-term investigations and experiments, especially in vivo assessments of animal models and HOCM patients, should be implemented. | 4,088 | 2020-10-20T00:00:00.000 | [
"Medicine",
"Engineering"
] |
PowerFactory-Python based assessment of frequency and transient stability in power systems dominated by power electronic interfaced generation
The deployment of variable renewable energy based power plants is increasing all over the world; however, unlike conventional power plants, these are mostly connected to the grid via power electronic interfaces. High penetration of power electronic interfaced generation (PEIG) has an important impact on the inertia of the system, which is of major concern for frequency and large-disturbance rotor angle (transient) stability. Therefore, it is desirable to study the effectiveness of widely used approaches for assessing the stability of a system with high penetration of PEIG. This paper deals with the modelling and control aspects of a power system for the evaluation of the most widely used metrics (indicators) for assessing power system dynamics related to frequency and rotor angle stability. The functionalities of Python are used to automate the generation of operational scenarios, the execution of time domain simulations, and the extraction of signal records to compute the aforesaid indicators. The paper also provides a discussion of possible improvements in the application of these indicators in monitoring tasks.
I. INTRODUCTION
The electrical power grid is a massive and complex system with non-linear dynamic behaviour; it can be excited by different types of disturbances, which can manifest in different forms of stability phenomena [1].
The main challenge for the dynamic behaviour of the future power system is to expand properly with the massive inclusion of power electronic interfaced generation (PEIG) [2], which, due to its output variability and decoupling from the transmission network, impacts the overall stability of the system, thus motivating a revision of the approaches used in monitoring and control tasks.
Displacement of conventional power plants based on synchronous generators by PEIG lowers the inertia, which decreases the robustness of the system against disturbances and is reflected in larger excursions of frequency [3] and machine rotor angles [4]. Hence, there is a renewed interest in evaluating the approaches used to estimate the proximity of the system to frequency or transient instability.
Constant monitoring of system parameters is vital to prevent widespread disruptions and system collapse.
Several studies have proposed numerous options for early detection of stability issues, which employ computational intelligence tools to predict the value of a selected stability indicator. Such approaches have been developed and tested in systems dominated by synchronous generation [5], [6]. Nevertheless, further research effort is needed to improve the accuracy and reliability of prediction (or, alternatively, classification) throughout changing operating conditions (load level, generation dispatch, and topology). This is especially critical in systems with reduced inertia and short circuit capacity.
This paper provides an evaluation of the suitability of selected indicators, widely used in both industry and academia, for frequency and transient stability assessment in systems with high penetration of PEIG. The study is conducted on a three-area system, originally introduced in [7] and modified to have high penetration of wind power plants (62% of the total installed capacity), to measure the distance to frequency and transient instability in power systems with high penetration levels of PEIG.
II. A REVIEW ON STABILITY INDICATORS
The lack of effective participation of PEIG in the frequency containment period raises a concern regarding the assessment of frequency performance in that interval, in which the lack of assistance by PEIG can result in a large frequency gradient [8]. Therefore, the selected frequency indicators are related to this period. For rotor angle stability assessment, the power angle-based stability margin indicator and the COI-referred rotor angles TSI (transient stability indicator) are considered.
A. Frequency performance indicators 1) Rate Of Change Of Frequency (ROCOF): This metric corresponds to the frequency gradient after an imbalance event between active power generation and load demand [9]. The frequency starts deviating from its rated value as an immediate result of a generation loss.
The ROCOF is defined analytically as shown in (1):

ROCOF(t) = df(t)/dt (1)

For the computation of the frequency derivative, current practice is to compute the ROCOF in two ways: the first is to use the approximation taken from [10], given by (2), for a qualitative assessment of the frequency performance within the time window of the system inertial response; the second is to compute the slope of the frequency decrease in a fixed time window of 0.5 s after a disturbance.

df/dt = ΔP · f / (2 (E_sys − E_lost)) (2)

where ΔP is the MW lost (i.e., the power deficit), f is the system frequency, E_sys is the system kinetic energy in MWs, and E_lost is the kinetic energy lost in MWs.
The relevance of the ROCOF lies in the data acquisition speed of the equipment associated with frequency measurement and protection, for which the frequency should not change faster than this equipment can detect.
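To make the two computations concrete, the sketch below implements both the inertial approximation of (2) and the 0.5 s fixed-window slope on a sampled frequency trace; the numerical values are illustrative assumptions, not results from the test system:

```python
import numpy as np

def rocof_inertial(dP_mw: float, f_hz: float,
                   e_sys_mws: float, e_lost_mws: float) -> float:
    """Approximation (2): initial frequency gradient after losing dP_mw."""
    return dP_mw * f_hz / (2.0 * (e_sys_mws - e_lost_mws))

def rocof_window(t_s: np.ndarray, f_hz: np.ndarray,
                 t_event_s: float, window_s: float = 0.5) -> float:
    """Slope of a least-squares fit to f(t) over window_s after the event."""
    mask = (t_s >= t_event_s) & (t_s <= t_event_s + window_s)
    slope, _intercept = np.polyfit(t_s[mask], f_hz[mask], 1)
    return slope

# Illustrative numbers: 1000 MW lost in a 50 Hz system with 120 GWs of
# kinetic energy, of which 6 GWs belong to the tripped unit.
print(rocof_inertial(1000, 50.0, 120e3, 6e3), "Hz/s")
```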
B. Transient performance indicators
1) Power angle-based stability margin: This indicator gives a percentage value based on the maximum angular deviation between any two synchronous machines within the electrical system. It is defined as follows:

Margin = (360° − δ_max) / (360° + δ_max) × 100% (3)

where δ_max is the maximum angle separation between any two generators of the system at the same time in the post-fault response [11].

The relevance of this indicator lies in the information it provides about possible islanding, because it monitors the rotor angles in the system. The loss of synchronism and the activation of out-of-step relays are reflected in lower values of this metric. The range of δ is [−180°, +180°], for which a value of Margin ≅ 33.3% means a total separation between areas of 180°.
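A quick numerical check of (3), confirming that a 180° separation corresponds to the ≅ 33.3% value quoted above:

```python
def angle_margin_percent(delta_max_deg: float) -> float:
    """Power angle-based stability margin of eq. (3)."""
    return (360.0 - delta_max_deg) / (360.0 + delta_max_deg) * 100.0

print(round(angle_margin_percent(180.0), 1))  # -> 33.3
```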
2) COI-referred rotor-angles TSI: This indicator is based on equivalent inertia values as a representation of the total inertia of each area and of the entire system [12], and it is defined as shown in (4).
The terms δ_COIj and δ_COIsystem are the equivalent rotor angles of area j and of the entire system, respectively. In the literature these equivalents are known as the Center Of Inertia, or COI. To compute these values, (6) is used:

δ_COIj = (Σ_i H_i δ_i) / H_j, δ_COIsystem = (Σ_j H_j δ_COIj) / H_T (6)

where H_j is the equivalent inertia of area j and H_T is the overall inertia of the system.
A typical value for δ_lim is π/3 [12], which is the maximum allowed angle determined by the steady-state constraint.
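A minimal sketch of the COI computation in (6) is given below; inertias on a common base and rotor angles in radians are assumed:

```python
import numpy as np

def coi_angle(H, delta):
    """Inertia-weighted (COI) equivalent rotor angle, eq. (6)."""
    H, delta = np.asarray(H, float), np.asarray(delta, float)
    return float(np.sum(H * delta) / np.sum(H))

# Area COIs from their machines, then the system COI from the areas.
H_area_a, delta_a = [6.5, 3.2], [0.40, 0.55]   # illustrative values
H_area_b, delta_b = [4.0, 4.0], [0.10, 0.20]
H_eq = [sum(H_area_a), sum(H_area_b)]
delta_coi = [coi_angle(H_area_a, delta_a), coi_angle(H_area_b, delta_b)]
delta_coi_system = coi_angle(H_eq, delta_coi)
print(delta_coi, delta_coi_system)
```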
A. Modified PST16 benchmark system
The PST16 benchmark system shown in Fig. 1, taken from [13] and modified to have 62% installed capacity of PEIG, is used for both frequency and rotor angle stability studies on the same simulation platform (DIgSILENT PowerFactory).
The grid consists of three strongly meshed areas, 66 buses, 16 generators, 28 transformers and 51 transmission lines, of which 3 are considered weak transmission lines because of their length (200 km); these lines are used to interconnect the areas. The loads are concentrated in area C, and power is transferred from areas A and B to area C through two long tie-lines. The generation and load demand are distributed as shown in Table I. The system comprises 5 hydro power units, 7 thermal (coal) units and 4 nuclear units; the last two types are located in areas B and C. The wind park installations are located in these two areas.
Synchronous generators are modelled using the built-in ElmSym and TypSym objects of PowerFactory, based on the sixth-order model. The excitation system corresponds to the modified IEEE Type 1 model, while the governor model depends on the generation technology, i.e., whether the prime mover is steam or water; the implemented models are TGOV1 and HYTGOV1, respectively. Detailed information is available in [13].
B. Wind turbine model
The wind turbine model was built based on the IEC 61400-27 series of standards [14]. In that standard, the modular structure of the WT models can use Type 1 (1A or 1B), Type 2, Type 3 (3A or 3B) or Type 4 (4A or 4B) wind turbines (see [14] for detailed information). However, for the sake of implementation in PowerFactory, in this development the Type 4 WT uses the same aerodynamic model as Type 3, and its simplified active power control model was replaced by the more detailed one of Type 3. This implies that the only difference between the two WT types is the generator system block; therefore, WT Type 3A, 3B or 4 can be selected through this block. Because of this, only one model is used to represent both turbines, and the type of wind turbine (Type 3A, 3B or 4) can be changed by changing the generator system. As can be seen in Fig. 1, there are eight wind farms, of which seven are Type 4 (representing 8899.45 MW of installed capacity) and one is Type 3 (954 MW of installed capacity). Fig. 2 can be used as a reference for the overall structure of the WT control scheme.
C. Generated operating conditions and disturbances
The wind parks were installed with the same capacity as the synchronous generators at their point of common coupling (PCC), with the intention of studying a change in the power share in the system, but also of studying the case when a wind park completely replaces a synchronous generator without modifying the overall power generation. It is worth clarifying that such a situation (a wind park installed at the same node as a conventional power plant) may not happen in reality; but given that the system is not a detailed representation of a real system, adding a few lines and transformers to recreate a new generation addition would not significantly change the simulation outcomes.
There are three load demand cases taken into account: Winter (100%), Spring (80%) and Summer (60%), where 100% represents 15565 MW, as shown in Table I. For each case several dispatch scenarios are studied, where the main variation is in the power share between synchronous and wind generation, i.e., for each season several simulations are run where only the power share is changed (the power flow direction is not altered).
In each simulation case a set of operating scenarios is designed such that wind generation progressively takes over the synchronous generation, specifically the thermal and nuclear units in areas B and C, including the removal of a whole thermal power plant from the system. A sketch of how such a scenario table could be generated is given below.
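The following hypothetical sketch builds one row per (season, wind-share) combination; the share values, file name and column names are illustrative only (writing the Excel file requires the openpyxl package).

    import itertools
    import pandas as pd

    seasons = {"Winter": 1.00, "Spring": 0.80, "Summer": 0.60}
    wind_shares = [0.00, 0.12, 0.24, 0.36, 0.48, 0.62]  # fraction of generation from PEIG
    peak_load_mw = 15565.0  # Winter peak from Table I

    rows = [{"season": season,
             "load_mw": peak_load_mw * load_frac,
             "wind_share": w,
             "sync_share": 1.0 - w}
            for (season, load_frac), w in itertools.product(seasons.items(), wind_shares)]
    pd.DataFrame(rows).to_excel("dispatch_cases.xlsx", index=False)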
For the assessment of ROCOF, the normative contingency for continental Europe, according to [9], is the outage of the two biggest generation units on one busbar. However, this does not apply directly to a test system like the one used in this paper, because such an event exacerbates the instability of the system and prevents unveiling interesting results found at the edge of instability. For this reason, the outage of the biggest conventional generation unit (1000 MW, representing 6.3% of total power) is considered the most critical generator outage for this system.
On the other hand, for transient stability studies the most critical contingency is a short circuit on tie line A-C with a fault clearing time of 152 ms, which is shorter than the critical clearing time (found to be 156 ms). The criticality of the outage was corroborated by a steady-state analysis of the system in the N-1 case, where the Power Flow Index (PFI), defined in [15] as shown in (7), is used to find the line where a short circuit causes the biggest impact on the system; in (7), S_i,pos is the actual apparent power flow through the i-th line and S_i,lim is the apparent power flow limit in MVA.
The outage of tie line A-C caused the biggest post-contingency PFI in the system (i.e., the condition in which the system is most vulnerable to transient instability), which is interpreted as the biggest electrical stress; therefore, it was selected as the worst-case scenario.
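Since the exact form of (7) is given in [15] and is not reproduced here, the sketch below only assumes a plausible normalised-loading form of the PFI (mean of the per-line ratios S_i,pos/S_i,lim) to illustrate how the critical N-1 outage could be ranked; treat it as an assumption, not the paper's definition.

    import numpy as np

    def power_flow_index(s_pos_mva, s_lim_mva):
        # Assumed PFI form: mean per-line loading ratio S_i,pos / S_i,lim.
        return float(np.mean(np.asarray(s_pos_mva) / np.asarray(s_lim_mva)))

    def critical_outage(post_outage_flows):
        # post_outage_flows maps each N-1 outage to the (flows, limits) of
        # the remaining lines; the outage giving the largest PFI is taken
        # as the critical one.
        return max(post_outage_flows,
                   key=lambda name: power_flow_index(*post_outage_flows[name]))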
IV. SIMULATION RESULTS
The software used for simulation is DIgSILENT PowerFactory 2016. Other software, such as Python 3.4, MATLAB 2016 and Excel, is also used to complement the simulations. The dispatch cases are established as tabulated scenarios in Microsoft Excel and are dynamically read and set in PowerFactory by Python, where different events and faults are established per dispatch case. A zoom-in on the procedural blocks of the simulation process is shown schematically in Fig. 3, where the data extraction block, programmed in Python, is broken down into detailed steps. In the figure it can be seen that branch outages were automated, while the event was always the same (one synchronous generator outage). A sketch of this Excel-to-PowerFactory coupling is given below.
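The coupling could look roughly like the following sketch, which relies on PowerFactory's built-in Python API (the "powerfactory" module); the spreadsheet name, column names and the simple uniform dispatch scaling are assumptions for illustration, not the authors' actual script.

    import pandas as pd
    import powerfactory  # available inside PowerFactory's embedded Python

    app = powerfactory.GetApplication()
    cases = pd.read_excel("dispatch_cases.xlsx")  # table produced above

    gens = app.GetCalcRelevantObjects("*.ElmSym")  # all synchronous machines
    base_dispatch = {g: g.pgini for g in gens}     # remember base set-points

    for _, case in cases.iterrows():
        for g in gens:
            # crude uniform scaling of the synchronous dispatch per scenario
            g.pgini = base_dispatch[g] * case["sync_share"]
        app.GetFromStudyCase("ComInc").Execute()   # initial conditions
        app.GetFromStudyCase("ComSim").Execute()   # run the RMS simulation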
Numerical experiments were performed on a Dell Latitude E7450 personal computer with an Intel(R) Core(TM) i7-4600U CPU at 2.10 GHz and 8 GB of RAM.
A. Effect of increased wind generation on frequency stability performance
Different generation profiles are configured to run simulations and compute the ROCOF as the PEIG gradually replaces the conventional units. The simulations are such that first the wind generation gradually replaces conventional units in area B, while areas A and C remain untouched; then the wind generation gradually replaces conventional units in area C, while areas A and B remain untouched; and finally the wind generation gradually replaces conventional units in both areas B and C.
At every dispatch profile the power output from the synchronous generators is reduced and that of the wind turbines is increased. Complementary to the dispatch scenarios already described, the following settings complete the simulation profile: line A-C is set out of service (the same simulation case is run twice, once with no topological changes and once with line A-C out of service); the event is the outage of generator A1bG (see Fig. 1), representing 6.3% of the total generation; and Winter (as shown in Table I), Spring (taken as 80% of Winter) and Summer (taken as 60% of Winter) peak load demands are configured.
The results of all simulation cases described above are shown in Fig. 4, where it can be observed that ROCOF values are higher when the inertia is low, which happens when wind generation replaces synchronous generators. This means that, as is well known, there is a clear relation between the frequency response and the overall system inertia, as stated in (2). However, when the system topology changes (e.g., a crucial line outage) and/or the load demand varies such that the electrical stress of the system also varies, the frequency tends to respond more abruptly (although with the same trend), which is observed in the dotted lines in Fig. 4 even though the inertia is the same. This figure shows all simulations plotted together, and different values of ROCOF can be read for the same inertia level depending on the loading level and topology of the system.
The electrical stress of the system is higher for winter than for spring, and higher for spring than for summer, which can be captured with the PFI, shown in Fig. 5. However, these values get even higher when a line outage is applied (as does the ROCOF); i.e., the increase of the PFI, caused in this test system by a tie line outage or by load demand variation, reflects that the electrical stress in the system has increased, and it results in higher values of ROCOF even when there is no change in the system inertia or in the active power imbalance (same generator outage).
Fig. 4 and Fig. 5 reveal a dependence of the ROCOF not only on system inertia but on system loading levels as well. From these figures it could be suggested that an overload metric such as the PFI could work as an offset value for ROCOF in each case. Further investigation is needed to find a more suitable metric to associate the loading/stress level of the system with the value of ROCOF for a given inertia level. Such a metric should take into account the properties of the load (e.g., voltage/frequency dependency).
B. Effect of increased wind generation on transient stability performance
Different operating conditions are also generated to evaluate the transient stability performance, in order to gain a better understanding of the sources of possible unstable operating conditions. The main factor modified across simulation cases is the penetration level of wind power generation.
The simulations are such that first the synchronous generation units in area B are gradually replaced by wind parks while areas A and C remain untouched (6 dispatch cases); then the synchronous generation units in area C are gradually replaced by wind parks while areas A and B remain untouched (4 dispatch cases).
In this set of simulation cases area B does not return to zero wind generation; instead it remains at 100% while area C increases its wind power share, which implies that the decrease of synchronous generation is continuous along each simulation case. The event is as described in Section III-C (short circuit on tie line A-C with a fault clearing time of 152 ms).
The results are shown by comparing the three seasons in one figure, as the product of the sensitivity analysis with respect to three different loading levels. For the COI-referred rotor-angle TSI, Fig. 6 shows the results for each simulation case. By the definition of this metric, the closer the results are to 1 p.u., the worse the transient stability of the system.
From Fig. 6 some interesting results are observed: 1) Because the values of this indicator are predominantly around 0.5 p.u., with very low variation, it does not reveal approaches to dangerous values (instead it jumps from 0.5 p.u. to unstable). It is not possible to find at which operating condition the system is critically stable.
2) The transient stability of the system varies greatly depending on the loading level (seen in the figure as Winter, Spring and Summer); the system load demand is therefore a crucial aspect to consider on a regular basis, as it affects the stability of the system.
3) This indicator properly flags an unstable operating condition, so it can be used to classify the stability status (as stable or unstable), but not to assess the distance and tendency to move towards an unstable condition (e.g., as a consequence of a decrease in the power share of synchronous generators due to the increase of the wind power share). This is evidenced by the 30% and 36% total-wind-generation cases in Fig. 6 for Spring, where the wind power shares are relatively close to each other and the indicator did not show any dangerous (but still stable) values; instead, it jumped abruptly to reflect the occurrence of instability.
Fig. 7 shows the results of the second transient stability indicator, the power-angle-based stability margin (hereafter referred to as margin), for each simulation case. By the definition of this metric, the closer the results are to 30%, the bigger the angular separation; it thus reflects synchronous generators approaching a loss of synchronism (see Section II-B).
From Fig. 7, some interesting results are observed: 1) Beyond 36% of total wind generation, when PEIG is increased in area C, there is a trend for the system to become more stable: the more wind generation is present, the more transiently stable the system is. This is due to the fact that more wind power generation is used to cover the demand locally (area C is predominantly consuming), so less power is transferred over the tie lines and the synchronous generators of the system have reduced output power (active and reactive). It is important not to draw general conclusions based on the mentioned PEIG value, since it is also observed that a clear relationship between the margin index and the penetration level of PEIG (or, equivalently, the remaining share of synchronous generation) cannot be defined. This emphasizes the non-linear nature of transient stability.
2) This metric is more descriptive than the COI-referred rotor-angle TSI, since it utilizes almost the entire range of possible results. This is important because intermediate conditions (stable but close to instability) can be read from it. 3) This metric shows more clearly the effect of the load demand (Winter, Spring, Summer) on the stability of the system.
A margin of 33% represents an angular difference between two generators of 180°, and 56% a difference of 100°, while higher values (above 70%) represent smaller and safer angular differences between generators. This indicator properly displays the values that are taken as dangerous, since a value of 50% does not trigger any out-of-step relay but might cause an islanding condition in the system.
V. OUTLOOK OF IMPROVEMENTS FOR TOOLS TO MONITOR PROXIMITY TO INSTABILITY IN THE CONTROL ROOM
From the cases analysed in the previous section it is valid to state that most of the current indicators for frequency and transient stability assessment remain valid when studying high penetration of PEIG. However, these indicators are the result of measurements of physical phenomena such as the lowest frequency value, the speed of the frequency decrease, or the maximum angular deviation between generators. ROCOF and margin are single values from instantaneous measurements, without further information about the future behaviour of the system.
In order to measure a possible distance to instability it is necessary to develop new methodologies that use advanced platforms such as WAMS to assess the distance to instability and the impact of operational changes on a regular basis (e.g., intra-hour and real-time applications). Reliable and accurate estimation of system inertia from PMU data is crucial for this purpose.
VI. CONCLUSIONS
The modelling of an electrical power system with high penetration of power-electronics-interfaced generation was developed and exploited using the simulation capabilities of DIgSILENT PowerFactory, together with Python for automated execution of the simulations. The results of the multiple cases run in this work show that, for frequency stability, the current practices may have a level of dependency on the overloading level of the system when assessing frequency stability in the containment phase. This work makes a call to incorporate this information into the calculations used to estimate a possible distance to instability. On the other hand, for rotor-angle stability studies, the current practices are found to be appropriate for monitoring the behaviour of the system in real-time applications, although there are limitations in some popular metrics, such as the COI-referred rotor-angle TSI, which is found to be suitable for classifying the stability status but not for assessing the distance to instability.
Figure 2: Block diagram showing the overall structure of the WT controller.
Figure 3: Flow diagram for data extraction in PowerFactory and Python. SS DB refers to the steady-state results database, while time DB refers to the time-domain results database.
Figure 4: ROCOF vs. system inertia for different dispatch configurations. The subscript "NL" refers to simulation cases with no line out, while "Line out" refers to simulation cases where line A-C was out of service.
Figure 5: Power Flow Index (PFI) for the sensitivity analysis. This figure is complementary to Fig. 4.
Figure 6: Results of COI-referred rotor-angle TSI computations for the aforementioned batch of simulations.
Figure 7: Results of margin index computations.
TABLE I: Winter peak load and generation distribution in the PST 16 benchmark system.
"Engineering"
] |
A Comparative Study of Ir(dmpq)2(acac) Doped CBP, mCP, TAPC and TCTA for Phosphorescent OLEDs
In this work, we present the fabrication and characterization of solution-processable red Phosphorescent Organic Light-Emitting Diodes (PhOLEDs). The proposed approach is based on an Ir(III) complex, namely Bis(2-(3,5-dimethylphenyl)quinoline-C,N)(acetylacetonato)Iridium(III), also known as Ir(dmpq)2(acac), which was doped in four different host materials: (a) 4,4′-Bis(N-carbazolyl)-1,1′-biphenyl (CBP), (b) 1,3-Bis(N-carbazolyl)benzene (mCP), (c) 1,1-Bis[(di-4-tolylamino)phenyl]cyclohexane (TAPC), and (d) tris(4-carbazoyl-9-ylphenyl)amine (TCTA). The metal-organic complex offers unique optical and electronic properties arising from the interplay between the inorganic metal and the organic material. The optical and photophysical properties of the produced thin films are investigated in detail using spectroscopic ellipsometry and photoluminescence, whereas the structural characteristics are examined by atomic force microscopy. This comparative study of the four different Host:Ir-complex systems provides valuable information for evaluating the emission characteristics in order to achieve pure red light. Finally, these materials were applied as a single emissive layer in PhOLED devices, and the electroluminescence characteristics were studied.
Introduction
Red organic light-emitting devices (OLEDs) containing phosphorescent emitters constitute an attractive research topic because of their various applications in different fields of medicine and energy [1][2][3][4][5][6]. It is well established that red OLEDs are of intense academic and industrial interest for energy-saving solid-state lighting as well as for flat-panel displays [3,4]. It is also important to mention that red OLEDs have become promising light sources for compact and "imperceptible" biomedical devices that use light to probe, image, manipulate, or treat biological matter [1]. In this regard, the fabrication of red phosphorescent OLEDs (PhOLEDs) has attracted great attention.
PhOLEDs are a breakthrough because they can theoretically reach a much higher internal quantum efficiency than fluorescent devices: the maximum achievable internal quantum efficiency (IQE) of fluorescent devices is 25%, since only the singlet exciton states emit. For this reason, special emphasis has been given to PhOLEDs, as their strong spin-orbit coupling (SOC) and fast intersystem crossing can lead to harvesting of both singlet and triplet excitons in the emitting layer, achieving an internal quantum efficiency as high as 100% theoretically [5][6][7]. The most promising strategy to fabricate PhOLEDs is based on transition metal complex luminescent materials. Among the transition metal complexes, Ir(III) complexes are considered the most promising emitters in PhOLEDs owing to their relatively high phosphorescence quantum yields, short triplet excited-state lifetimes, excellent color tunability from blue to deep red, and splendid thermal and electrochemical stability [7][8][9][10][11]. Beyond the optical characterization of the fabricated devices, this work aims to give insight into the structural surface characteristics of the thin films using atomic force microscopy (AFM). Finally, these Hosts:Ir-complex were applied in solution-processable PhOLEDs as an emissive single layer, and their electroluminescent properties were assessed. The determination of the optical properties, in combination with the photo- and electro-emission characteristics, provides a thorough characterization and evaluation of the photoactive materials and PhOLED devices in order to achieve high color purity and stability.
PhOLED Fabrication
The architecture of the solution-processed PhOLEDs and the molecular structure of the phosphorescent dopant are illustrated in Scheme 1a,b, respectively. Firstly, pre-patterned Indium-Tin-Oxide-coated glass substrates (received from Ossila, Sheffield, UK) were extensively cleaned by sonication in DI water, acetone, and ethanol for 10 min, followed by drying with nitrogen. Then, the substrates were transferred to the glove box, where they were also treated with oxygen plasma at 40 W for 3 min. The PEDOT:PSS layer, used as the hole transport layer (HTL), was deposited by spin coating onto the glass/ITO substrate, followed by annealing at 120 °C for 5 min. The emitting layers (EML) were spun at the same speed, specifically 2000 rpm for 1 min, onto the PEDOT:PSS layer. Finally, a bilayer of Ca (6 nm thick) and Ag (125 nm thick) was used as the cathode electrode and was deposited through the appropriate shadow masks by vacuum thermal evaporation (VTE).
Thin Film and Device Characterization
Spectroscopic ellipsometry (SE) is a powerful, robust, non-destructive, and surface-sensitive optical technique that allows the determination of the optical properties as well as the thickness of the light-emitting polymers. Through the SE technique, we measure the pseudodielectric function ⟨ε(E)⟩ = ε1(E) + iε2(E) of the studied thin films. In addition, by applying the appropriate modelling and fitting procedures we can extract significant information about the dielectric function ε(E), the thickness of the thin films with nanometer-scale precision, the absorption coefficient, and the optical constants, such as the fundamental band gap and the higher-energy optical gaps. The SE measurements were acquired using a phase-modulated ellipsometer (Horiba Jobin Yvon, UVISEL, Palaiseau, France) at photon energies between 1.5-6.5 eV with a 20 meV step, at a 70° angle of incidence. The SE experimental data were fitted to model-generated data using the Levenberg-Marquardt algorithm, taking into consideration all the fitting parameters of the applied model.
The surface morphology of the emitting thin films was investigated by atomic force microscopy (AFM) (NTEGRA, NT-MDT). The measurements were performed under ambient conditions in tapping mode, using silicon-based cantilevers with a high-accuracy conical tip of nominal tip roundness <10 nm.
Finally, the photoluminescence (PL) and electroluminescence (EL) characteristics of the active layers and of the final PhOLED devices, respectively, were measured inside the glove box, without encapsulation, using the Hamamatsu absolute PL quantum yield measurement system (C9920-02) and the external quantum efficiency measurement system (C9920-12), which measures the brightness and light distribution of the devices.
Spectroscopic Ellipsometry
Spectroscopic ellipsometry (SE) in the NIR-Vis-fUV spectral region (1.5-6.5 eV) can provide valuable information on the optical and electronic properties, as well as the thickness, of the metal-organic complex films through the analysis of the measured pseudodielectric function ⟨ε(E)⟩ = ε1(E) + iε2(E). At this point, it is important to mention that the optical properties are thus a function of both chemical nature and processing.
The investigation of the optical properties of the pristine host materials CBP, mCP, TAPC, and TCTA was initially carried out in order to serve as a reference for the analysis of the ⟨ε(E)⟩ of the respective Host:Ir-complex. In order to extract quantitative information, we modelled and fitted the measured ⟨ε(E)⟩ spectra by applying a three-phase theoretical model consisting of the layer sequence air/host-material/glass, according to the Levenberg-Marquardt minimization algorithm. The optical properties of the host materials were described using the modified Tauc-Lorentz (TL) oscillator model, which has been successfully applied to amorphous organic semiconductors [23]. More specifically, in the TL model, the imaginary part ε2(E) of the dielectric function is determined by multiplying the Tauc joint density of states by the ε2(E) obtained from the Lorentz oscillator model, and is described by the following expressions [24]:

ε2(E) = [A E0 Γ (E − Eg)²] / {[(E² − E0²)² + Γ²E²] E}, for E > Eg
ε2(E) = 0, for E ≤ Eg

The real part ε1(E) is obtained by Kramers-Kronig integration as shown below [24]:

ε1(E) = ε∞ + (2/π) P ∫_{Eg}^{∞} [ξ ε2(ξ) / (ξ² − E²)] dξ

where ε∞ is a constant that accounts for the existence of electronic transitions at higher energies, which are not taken into account in ε2(E). The TL model provides the ability to determine the energy position of the fundamental band gap Eg, the amplitude A of the oscillator, the Lorentz resonant energy E0, and its broadening term Γ. For each organic host film, the appropriate number of TL oscillators was used for the accurate description of each individual dielectric response.

Concerning the Host:Ir-complex layers, we followed the same methodology as above in order to derive the optical and electronic properties. For the fitting analysis, the five-phase geometrical model air/Host:Ir-complex/PEDOT:PSS/ITO/glass was applied, and the bulk dielectric function ε(E) of the Host:Ir-complex films was described using the TL dispersion equation. The same number of TL oscillators was used for each Host:Ir-complex as in the case of the respective pristine host material. This is justified by the fact that the percentage of Ir(dmpq)2(acac) in the films was as low as 4%. Figure 1a-d show the calculated real ε1(E) and imaginary ε2(E) parts of the bulk dielectric function ε(E) of the undoped and doped organic thin films, as reproduced using the best-fit parameters of the above analysis. Indeed, regarding the emissive layers based on the Host:Ir-complex, their ε(E) exhibit strong similarities with those of the host materials. The comparison between the measured (exp) and theoretical (fit) ⟨ε(E)⟩ of the doped films is demonstrated in the insets of the respective figures. The theoretical ⟨ε(E)⟩ spectra were derived from the best-fit parameters of the analysis of the measured ⟨ε(E)⟩, which include the parameters of the applied dispersion equation and the thickness.
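As a numerical illustration of the TL expressions above, the following Python sketch evaluates ε2(E) and approximates ε1(E) by a discrete Kramers-Kronig integral; the parameter values and the crude principal-value handling are illustrative, not the fitting code actually used in the analysis.

    import numpy as np

    def _trapz(y, x):
        # version-proof trapezoidal rule
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def tl_eps2(E, A, E0, G, Eg):
        # Imaginary part of the Tauc-Lorentz model (zero below the band
        # gap); assumes E > 0 in eV.
        E = np.asarray(E, dtype=float)
        num = A * E0 * G * (E - Eg) ** 2
        den = ((E ** 2 - E0 ** 2) ** 2 + G ** 2 * E ** 2) * E
        return np.where(E > Eg, num / den, 0.0)

    def tl_eps1(E, A, E0, G, Eg, eps_inf, xi_max=50.0, n=20001):
        # Real part via Kramers-Kronig with a crude principal-value
        # treatment: the grid points nearest the pole are dropped.
        E = np.atleast_1d(np.asarray(E, dtype=float))
        xi = np.linspace(Eg + 1e-6, xi_max, n)
        e2 = tl_eps2(xi, A, E0, G, Eg)
        out = np.empty_like(E)
        for i, e in enumerate(E):
            with np.errstate(divide="ignore", invalid="ignore"):
                integrand = xi * e2 / (xi ** 2 - e ** 2)
            integrand[np.abs(xi - e) < 2 * (xi[1] - xi[0])] = 0.0
            out[i] = eps_inf + (2.0 / np.pi) * _trapz(integrand, xi)
        return out

    # Illustrative parameters only (eV units), not fitted values:
    E = np.linspace(1.5, 6.5, 251)
    eps2 = tl_eps2(E, 100.0, 4.5, 1.5, 3.0)
    eps1 = tl_eps1(E, 100.0, 4.5, 1.5, 3.0, eps_inf=1.0)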
It can be easily recognized that all the studied hosts exhibit low electronic absorption up to 3 eV or above. Their doping with Ir(dmpq)2(acac) affects mainly the characteristics of the absorption edge, as can be easily deduced from the calculated absorption coefficients of the films. These results are illustrated in Figure 2a-d. In all films, an increase of the absorption in the sub-bandgap range was obtained, the most pronounced being those of CBP and mCP. It should be noted that there are only small modifications in the characteristics of the electronic absorptions of all the host organic materials at higher energies.

There are several absorption bands for each material. In principle, the absorption coefficient spectrum of the Ir(III)-compound doped in the CBP host exhibits absorption bands similar to those of the absorption spectrum of net CBP. In the CBP spectrum, the absorption peak around 295 nm could be assigned to the carbazole-centered π-π* transitions, whereas the absorptions in the range of 319-340 nm could be attributed to π-π* transitions between the carbazole unit and the central biphenyl unit in the molecule [13,22,25,26]. On the other hand, for the Ir(III)-complex doped in the CBP host, the absorption between 250-350 nm is CBP-host and ligand-centered (LC) based. More specifically, for the LC transition we assign the strong absorption band in the UV region to the spin-allowed π-π* transition of the cyclometalated quinoline ligands of the Ir(III)-compound.
The weak absorption band at wavelengths higher than 350 nm could be assigned to a handful of charge-transfer transitions. In particular, it could be assigned to the admixed MLCT (metal-ligand charge transfer) and MLCT/LC transitions, the latter of which are usually spin-forbidden but become allowed due to the strong spin-orbit coupling (SOC) induced by the heavy metal Iridium(III) [2,7,9,12,27-29].
Furthermore, comparing the metal-organic Ir(III)-complex in the mCP host with the net mCP material, they present similar spectral features. The absorption spectra of net mCP exhibit three absorption bands located at 250-350 nm, which could be associated with the n-π* and π-π* transitions of the carbazolyl units [30,31]. It is worth noting that the doping of mCP with the organometallic complex brings about some differences in the absorption edge, which may be assigned to states within the energy gap of the host material. It is recognized that the absorption tail of the Ir(III)-complex doped in mCP is higher, in the range from 360 nm and above, and this can also be associated with the MLCT transitions.
Moreover, both TAPC and the Ir(III)-compound doped in TAPC exhibit similar absorption characteristics. However, the broad absorption band located at 313 nm is evident in the case of the Ir(III)-complex in TAPC, and a noticeable absorption tail was also observed. It is also significant that the absorption features based on TAPC differ from the absorption spectra of the other host organic materials. This may be attributed to the fact that TAPC is composed of two tri(p-tolyl)amine (TTA) molecules chemically bridged by a cyclohexane ring, and thus presents different absorption characteristics from the host materials based on the carbazolyl compound [32,33].
TCTA and the Ir(III)-complex doped in TCTA present similarly shaped absorption spectra. In particular, the absorption peaks at 293 and 327 nm could be assigned to the n-π* and π-π* transitions of triphenylamine and carbazole, respectively. In the absorption spectrum of the metal-organic complex doped in TCTA, the absorption tail is obvious in the range from 370 nm and above, and this can be attributed to the MLCT/LC transitions [34,35].
Photoluminescence
The PL emission spectra of the Host:Ir-complex films are illustrated in Figure 3. In a host-guest system such as the Host:Ir-complex, energy transfer between the host and the guest is the main emission mechanism. Generally, excitons are primarily formed in the host and then transfer their energy to the guest through the Förster and/or Dexter mechanisms [36]. In order to clarify the emission mechanism of the Host:Ir-complex we also measured the PL spectra of the host materials, which are plotted in the same figure for completeness. The PL spectra were recorded upon excitation at 340 nm. For all Hosts:Ir-complex, the dominant PL emission band is located in the region between 560-800 nm. On the other hand, the PL emission profile of the CBP, mCP, and TCTA host materials is centered at 350-500 nm, in the blue region, whereas that of TAPC covers a significantly wider range, up to 700 nm. As can be seen, the emission from the hosts is quenched by Ir(dmpq)2(acac). This indicates that an efficient energy transfer mechanism from the host to the guest takes place [37]. However, negligible emission exists in the region between 380-480 nm, which is assigned to the emission of the host materials. In addition, the broad, structureless spectral features lead us to conclude that the phosphorescence originates primarily from the MLCT states [7,10,28]. In particular, the dominant PL peak for all Hosts:Ir-complex is centered at 620 nm, and there is a subtle shoulder peak at approximately 650 nm, except for the Ir(III) complex doped in TAPC. The latter Host:Ir-complex exhibits a PL emission maximum at 612 nm; compared to the other three, a blue shift is observed. This could be associated with the PL emission of the TAPC host.
Atomic Force Microscopy
Figure 4a-d present the AFM height images of the spin-coated thin films, used to evaluate the film-forming ability and morphological properties of the mixed films and the effects of dopant distribution on film morphology. Moreover, the AFM results listed in Table 1 show the values of the mean roughness (Sa), the root-mean-square roughness (Sq), the peak-to-peak value (Sy), and the thicknesses of the Hosts:Ir-complex films calculated through the SE analysis. One can observe that the surface morphology of all samples was homogeneous and adequately covered the substrate. The image analysis revealed that smooth and continuous films were formed, with low root-mean-square (RMS) roughness values. In more detail, comparing the RMS values of the Host:Ir-complex films, it is found that doping with Ir(dmpq)2(acac) results in the formation of smooth thin films, as the RMS values are below 0.32 nm for all Hosts:Ir-complex. The thin films are continuous with smooth surface morphology and quite small RMS values, which means that the Host:Ir-complex exhibits morphological stability without any obvious particle aggregation or phase separation. All these desirable features favor the host-dopant combinations for use in PhOLED fabrication and operation.
Electroluminescence
We have investigated the potential of these Hosts:Ir-complex as emissive materials in phosphorescent OLED applications using devices with the configuration glass/ITO/PEDOT:PSS/Host:Ir-complex/Ca/Ag. Figure 5a-d show the respective experimental EL spectra of the studied devices, obtained at 12 V, as well as the corresponding PL spectra for comparison. For their better evaluation, a deconvolution fitting analysis of the experimental EL and PL spectra was carried out, revealing the existence of three main peaks. The results of this analysis were the wavelengths at which the emission maxima of the films are located and the full width at half maximum (FWHM). According to the EL deconvolution analysis, it is noteworthy that all EL spectra of Ir(dmpq)2(acac) doped in the different hosts exhibit emission approximately between 550-750 nm. The EL emission of the host is completely quenched, the dopant emission completely dominates, and a red emission from Ir(dmpq)2(acac) results.

Comparing the PL and EL emission, it is obvious that the tendency of the EL spectra is similar to that of the PL spectra. However, for all studied Hosts:Ir-complex, the EL emission profile is blue-shifted in comparison to the PL emission profile. According to the deconvolution analysis, there are fundamental differences in the shifts of the peaks, either between the PL and EL of the different peaks of the same Host:Ir-complex film or between the same peak of different films. These results are demonstrated in Figure 6a, in which the horizontal lines indicate the mean wavelength values of the three peaks, and the arrows denote the shift between the respective PL and EL peaks. Thus, we can distinguish that the smallest peak shifts are obtained for Ir(dmpq)2(acac) doped in CBP and the largest for Ir(dmpq)2(acac) doped in TCTA.

It is remarkable that the PL and EL emission are based on different mechanisms. It is established that in PL the emission results from the radiative recombination of the photo-excited carriers. On the other hand, EL depends not only on the optical and physical properties of the emitting layers but also on the electrical properties of the two conductive regions used for injection and transport of carriers [38]. Thus, the EL emission is related to two mechanisms: energy transfer and charge trapping. A sketch of the three-peak deconvolution procedure is given below.
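A minimal sketch of such a three-peak deconvolution, assuming Gaussian line shapes and synthetic data in place of the measured spectra, could look as follows (peak positions and widths are illustrative only).

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, center, fwhm):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

    def three_peaks(x, *p):
        # p packs (amplitude, center, FWHM) for each of the three peaks.
        return sum(gaussian(x, *p[i:i + 3]) for i in range(0, 9, 3))

    # Synthetic spectrum standing in for a measured PL/EL curve.
    wl = np.linspace(550.0, 780.0, 400)
    rng = np.random.default_rng(0)
    spectrum = three_peaks(wl, 1.0, 620.0, 30.0, 0.45, 652.0, 35.0, 0.15, 695.0, 45.0)
    spectrum += rng.normal(0.0, 0.01, wl.size)

    p0 = [1.0, 618.0, 25.0, 0.5, 650.0, 30.0, 0.2, 700.0, 40.0]
    popt, _ = curve_fit(three_peaks, wl, spectrum, p0=p0)
    centers, fwhms = popt[1::3], popt[2::3]  # fitted peak positions and widths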
As already mentioned, in host-dopant systems three paths can lead to phosphorescent emission by the dopant. It has been suggested that (i) the singlet excitons generated in the host molecule under electrical excitation can be transferred to the singlet excited state of the dopant via Förster and Dexter energy transfer processes; they can then be converted into triplet excitons for radiative decay by an efficient intersystem crossing (ISC) process. (ii) Alternatively, triplet excitons generated in the host molecule can be transferred to the triplet excited state of the phosphorescent dopant through Dexter energy transfer. (iii) Holes and electrons can directly recombine on the phosphorescent dopant via the charge trapping mechanism [39]. We speculate that the emission in the red region could be assigned both to the complete energy transfer mechanism from the host matrix to the guest Ir(dmpq)2(acac) and to charge trapping on Ir(dmpq)2(acac). According to our results, we can assume that excitons on host molecules, triplet or singlet, are formed by the capture of opposite charges injected from both electrodes. These excitons then transfer their energy to the nearby guest molecules through the Förster or Dexter mechanism. At this point, it is also important to mention that the direct trapping of charges on the Ir(III)-compound, followed by guest exciton formation and radiative decay, may be another mechanism responsible for the guest emission [37,40,41].
The chromaticity diagram with the Commission Internationale de l'Eclairage (CIE) coordinates derived from the PL and EL measurements is illustrated in Figure 6b.
The CIE coordinates of the Hosts:Ir-complex confirm the PL emission in the red region. It was found that the emission of the Hosts:Ir-complex is located at the edge of the red region in the CIE coordinate map. This red emission from the Host:Ir-complex leads to the assumption that efficient energy transfer takes place from the singlet excited state of the host to the singlet 1MLCT band of the guest, Ir(dmpq)2(acac), followed by fast intersystem crossing to the triplet state 3MLCT of Ir(dmpq)2(acac) and, consequently, emission from its triplet state.
In the case of the EL emission, it can be observed that the devices based on the Host:Ir-complex emitted reddish-orange light. In more detail, the CIE coordinates are (0.660, 0.339) for the device based on the Ir compound doped in CBP. Note that these coordinates are very close to the National Television System Committee (NTSC) standard for red subpixels (0.67, 0.33) [42]. The device with the Ir-complex doped in TAPC exhibits CIE coordinates very close to the ideal red emission, with values of (0.610, 0.347). In the case of the other two Hosts:Ir-complex, the CIE coordinates are shifted towards orange emission; specifically, the Ir-complex doped in mCP and TCTA gives values of (0.571, 0.356) and (0.546, 0.400), respectively. Thus, from the comparison of the EL and PL emission spectra of each emitting material, the Ir(III)-complex doped in CBP exhibits red color selectivity in its emission both in thin-film form and in a PhOLED device.
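For reference, chromaticity coordinates of the kind quoted above can be computed from an emission spectrum by weighting it with the CIE 1931 colour-matching functions; the sketch below assumes the CMF table has been loaded from a data file, and is not the instrument software actually used.

    import numpy as np

    def _trapz(y, x):
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def cie_xy(wavelength_nm, intensity, cmf):
        # cmf: (N, 4) table [lambda, xbar, ybar, zbar] of the CIE 1931
        # 2-degree colour-matching functions, e.g. loaded from a CSV file.
        s = np.interp(cmf[:, 0], wavelength_nm, intensity, left=0.0, right=0.0)
        X = _trapz(s * cmf[:, 1], cmf[:, 0])
        Y = _trapz(s * cmf[:, 2], cmf[:, 0])
        Z = _trapz(s * cmf[:, 3], cmf[:, 0])
        return X / (X + Y + Z), Y / (X + Y + Z)

    # cmf = np.loadtxt("cie1931_2deg.csv", delimiter=",")  # assumed data file
    # x, y = cie_xy(wl, el_spectrum, cmf)  # e.g. ~(0.66, 0.34) for the CBP device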
Finally, the performance of the fabricated PhOLED devices was evaluated by measuring their current density-voltage (J-V) and luminance-voltage characteristics, and the results are depicted in Figure 7a,b, respectively. The luminance measured for the Ir compound doped in CBP reaches 293 cd/m² at 14 V, whereas the Ir compound doped in the other three hosts, mCP, TAPC, and TCTA, presents lower luminance values. Specifically, the luminance measured for the Ir compound doped in mCP is 91 cd/m² at 14 V, for the Ir compound doped in TAPC 62 cd/m² at 10 V, and for the Ir compound doped in TCTA 45 cd/m² at 14 V. The comparison of the highest luminance values between the photoactive films can be justified by their thicknesses, listed in Table 1.

To reinforce the improvement in red color purity, the emissive layers based on the Host:Ir-complex are newly proposed in this study. Thus, in this work we compare the Ir(III) complex doped in four different host materials. Among the Hosts:Ir-complex, the Ir(dmpq)2(acac) organometallic compound doped in CBP is the most promising material to achieve red emission stability and selectivity. The Ir(dmpq)2(acac) doped in CBP is therefore an encouraging step towards red solution-processable PhOLEDs, as it provides good exciton confinement within the emitting layer, and the CIE coordinates obtained from the EL measurements approach the ideal red-light emission. A thorough investigation of the fabrication parameters and device architectures could be a decisive factor for improving the efficiency of the produced PhOLED devices.
Conclusions
In conclusion, we have studied the Ir-complex doped in four different host materials, namely CBP, mCP, TAPC, and TCTA, for solution-processed red PhOLEDs. A thorough investigation of the absorption and emission behavior of the Ir(III)-complex doped in these host materials is presented. Afterwards, these materials were applied as a single emissive layer in the wet fabrication of PhOLED devices using the spin-coating process. A comprehensive study of the PL and EL emission of the spin-coated thin films was also carried out in order to evaluate the color selectivity. We found that the devices based on the Host:Ir-complex demonstrate emission in the orange-red region. Compared to the other host materials, the Ir(III) complex doped in CBP is a promising candidate for achieving red-light emission from the phosphorescent device, as its CIE coordinates are very close to the National Television System Committee (NTSC) standard for red subpixels.
Author Contributions: Writing-original draft preparation, investigation, formal analysis, data curation, D.T.; methodology, investigation, writing-review and editing, data curation, L.P.; investigation, methodology, validation, writing-review and editing, K.P.; data curation, validation, writing-review and editing, V.K.; supervision, conceptualization, visualization, writing-review and editing, funding acquisition, M.G. All authors have read and agreed to the published version of the manuscript.
Funding: This research has been co-funded by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH-CREATE-INNOVATE (project code: T1EDK-01039).
Informed Consent Statement: Not applicable.
Data Availability Statement: Data presented in this article is available on request from the corresponding author.
"Materials Science",
"Engineering",
"Chemistry",
"Physics"
] |
Identification of Trichoderma spp. by DNA Barcode and Screening for Cellulolytic Activity
Abstract
Species identification of isolates of Trichoderma from different locations of the Nile delta of Egypt was performed and their cellulolytic activities were analyzed. On the basis of morphological characteristics, 75% of the isolates were identified to species level and divided into four aggregate groups. Morphological characterization alone was insufficient to precisely identify Trichoderma species because they have relatively few morphological characters and limited variation, which causes overlapping and misidentification of isolates. Therefore, it was necessary to use a molecular technique to compensate for the limitations of morphological characterization. DNA sequencing of the 5.8S-ITS region was carried out using the specific primers ITS1 and ITS4. By comparing the sequences of the 5.8S-ITS region to the sequences deposited in GenBank using the BLAST program, all isolates could be identified to species level with a homology of at least 99%. In addition, the TrichOKEY search tool was used to assess the reliability of GenBank, and the results were in 92% agreement with the BLAST results. The data indicated a narrow species diversity, with two main species predominating, namely T. longibrachiatum and T. harzianum. The distribution of nucleotides as well as the (G+C) content in the ITS region of the isolates indicated a wide range of interspecies variation. Finally, the isolates were assessed for their total cellulase activities using the cellulose azure method, for exoglucanase activity using the Avicel method, and for endoglucanase activity using the carboxymethyl cellulose (CMC) and acid-swollen cellulose methods. Consequently, eleven isolates were selected as the best among the 28 isolates examined for cellulolytic ability.

Introduction
Trichoderma species are cosmopolitan fungi, frequently present in all types of soil, manure and decaying plant tissues. Their dominance in soil may be attributed to their diverse metabolic capability and aggressive competitive nature [1]. The economic importance of Trichoderma is due to their production of extracellular industrial enzymes, such as cellulolytic enzymes [2]. These enzymes are extensively used in industry, for example in the degradation of cellulose materials in the textile and paper industries, in wastewater treatment, and in the biodegradation of plant lignocellulosic materials [3]. Cellulases are an enzymatic complex comprising exo-β-1,4-glucanases (EC 3.2.1.91), endo-β-1,4-glucanases (EC 3.2.1.4) and β-glucosidases (EC 3.2.1.21), which act synergistically in the hydrolysis of the β-1,4-glycosidic bonds present in cellulose polymers (for a review, see [4]). Therefore, many methods have been developed to screen for and select Trichoderma isolates that are high cellulase producers. The cellulose azure method is one of the choices: a cellulose azure assay using dyed cellulose measures primarily cellobiohydrolase activity by dye release [5]. The cellulose azure method is the most reliable qualitative assay for cellulolysis, and it also tests for the simultaneous action of all cellulase enzymes. Degradation of cellulose results in the release of a bound dye, whose vertical migration can be observed; the intensity of the blue dye indicates the activity of the cellulase [6]. Another assay was developed for screening isolates that are high exoglucanase producers, using microcrystalline cellulose (Avicel), a crystalline pure cellulose, as substrate [7]. Also, since a cellulase can only degrade a specific substrate, screening of cellulase-producing Trichoderma can be performed on agar plates using a cellulosic substrate such as Avicel or carboxymethylcellulose (CMC) as the carbon source for Trichoderma growth [8]. At the same time, a quantitative assay of endoglucanase activity can be performed using carboxymethyl cellulose (CMC), by detecting a clear zone around the colony with the Congo red stain [9].
Finally, another method for the selection of hypercellulolytic Trichoderma spp. uses Petri plates with Walseth cellulose as the sole carbon source [10].
Due to the diverse economic applications of Trichoderma, correct species identification is vital. Morphological characterization of Trichoderma isolates to species level is currently based largely on criteria such as conidial form, size, color and ornamentation, branching pattern, side branches, phialides, and the formation and elongation of hyphae from conidiophores [11]. However, incorrect species identification using morphological characters is very common, even for experts, because of the high similarity of these characters [12]. Recently, however, many molecular methods and identification tools based on DNA sequence analysis have been developed. It is therefore now possible to identify every Trichoderma isolate to its species [13].
There are several molecular methods for characterizing fungal species. Sequence analysis of the ITS region is the best known among them. In eukaryotic cells, two internal transcribed spacers flank the 5.8S gene; the two spacers, together with the 5.8S gene, are normally referred to as the ITS region [14]. The rRNA genes are universally conserved, while the ITS region and the intergenic spacer (IGS) are highly variable [15]. The ITS region is one of the fastest-evolving regions and may vary among species within a genus; thus, its sequences can be used to identify closely related species [16]. Sequence analysis of the ITS region has been used successfully to generate specific primers capable of differentiating closely related fungal species [17]. It has typically been most useful for molecular systematic studies at the species level, and even within species [18].
Finally, the use of ITS sequence analysis to identify an isolate at the species level involves submission of sequences to the NCBI BLAST web site and identification of the respective species on the basis of the degree of sequence similarity (e.g., >98%). In addition, the International Subcommission on Trichoderma and Hypocrea Taxonomy has developed a method named TrichOKEY 2, a program for molecular identification of Trichoderma at the species level based on an oligonucleotide ITS DNA barcode (http://www.isth.info). The objectives of the present study were therefore: 1) species identification of unknown isolates from different locations of the Nile delta of Egypt, and 2) documentation of their cellulolytic activity.
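In practice, the BLAST step described above can be scripted. The following minimal Python sketch, assuming Biopython is installed and using a placeholder query sequence, submits an ITS sequence to NCBI and applies an illustrative >98% identity threshold; it is not the exact pipeline used in this study.

```python
# Minimal sketch of ITS-based species assignment via remote NCBI BLAST,
# assuming Biopython is installed; the query sequence is a placeholder.
from Bio.Blast import NCBIWWW, NCBIXML

def identify_by_its(its_sequence: str, min_identity: float = 0.98) -> str:
    """Return the title of the best GenBank hit if it passes the
    identity threshold used in the text (e.g., >98%)."""
    handle = NCBIWWW.qblast("blastn", "nt", its_sequence)  # remote BLAST
    record = NCBIXML.read(handle)
    if not record.alignments:
        return "no hit"
    best = record.alignments[0]
    hsp = best.hsps[0]
    identity = hsp.identities / hsp.align_length
    if identity >= min_identity:
        return f"{best.title} ({identity:.1%} identity)"
    return f"unidentified (best hit only {identity:.1%})"

# Example call with a hypothetical 5.8S-ITS fragment:
# print(identify_by_its("TCCGTAGGTGAACCTGCGGAAGGATCATTACCGAGT"))
```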
Samples collection
Different types of samples, including soil and decomposed organic matter such as wheat straw and rice straw, were collected from six governorates of Egypt. The samples were taken from a depth of 15 cm and collected in sterile polyethylene bags, which were transported to the laboratory and stored at 4 °C until use [19].
Isolation of Trichoderma sp.
A serial dilution technique [20] was followed and a 10^3-fold dilution of each sample was prepared. 250 μL of each dilution was pipetted onto Potato Dextrose Agar (PDA) plates amended with 1 g/L streptomycin (Merck) and incubated at 28 °C for one week [21]. The culture plates were examined daily; individual colonies were isolated and uncommon colonies were re-isolated onto fresh PDA plates [22]. Morphological characteristics were observed for identification and the plates were stored at 4 °C [23]. In addition, four isolates were kindly provided by Prof. Dr Medhat Aldenary, Faculty of Agriculture, Tanta University, and three isolates from our laboratory were included in this investigation.
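For orientation, the plating arithmetic implied by this protocol can be written out as follows; the colony count in the example is hypothetical, since no counts are reported here.

```python
# Illustrative arithmetic for the serial-dilution plating step; the colony
# count below is hypothetical, since the paper does not report counts.
def cfu_per_gram(colonies: int, plated_volume_ml: float, dilution_factor: float) -> float:
    """CFU per gram of sample = colonies / (volume plated * dilution)."""
    return colonies / (plated_volume_ml * dilution_factor)

# 42 colonies from 0.25 mL of a 10^-3 dilution -> 1.68e5 CFU/g
print(cfu_per_gram(colonies=42, plated_volume_ml=0.25, dilution_factor=1e-3))
```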
Morphological identification
Two techniques, visual observation on Petri dishes and micromorphological study in slide culture, were adopted for identification of Trichoderma species. For visual observation, the isolates were grown on PDA for 3-5 days, and the mode of mycelial growth, color, odor and changes in medium color were examined daily for each isolate. For micromorphological study, a slide culture technique was used; examination of the shape, size, arrangement and development of conidiophores and phialides provided a tentative identification of Trichoderma spp. [24]. The identification was further verified and confirmed at the Plant Pathology Research Institute, Agricultural Research Center, Giza, Egypt.
Molecular identification
Molecular identification based on sequence analysis of the internal transcribed spacer (ITS) region of ribosomal DNA (rDNA) was carried out. PCR was used to amplify the ITS region of the rRNA gene cluster using the primer pair ITS1 and ITS4 as designed by Hermosa et al. [25], with modifications. For sequence analysis, the sequences of the ITS1-5.8S-ITS4 region of all isolates were analyzed using Molecular Evolutionary Genetics Analysis software (MEGA, version 5.10). The sequencing data were compared against the GenBank database (http://www.ncbi.nlm.nih.gov/BLAST/), where the nucleotide BLAST program was used to identify the homology between the PCR fragments and the sequences in the GenBank database. In addition, the 5.8S-ITS sequences were compared to a Trichoderma-specific database using the TrichOKEY 2 program, which is available online from the International Subcommission on Trichoderma and Hypocrea Taxonomy (ISTH, www.isth.info) [13].
Screening of Trichoderma isolates for cellulolytic activity
Total cellulase activity was assayed as described by Pointing [6]: cellulolysis was assessed by monitoring the release of azure dye from the cellulose-dye complex and its diffusion into clear agar not containing cellulose azure, as follows. 6% w/v agar was transferred to 25 mL glass culture bottles, autoclaved and allowed to solidify. Then a 1 mL aliquot of CBM medium supplemented with 1% w/v cellulose azure (azure I dye, Sigma C.I. 52010) was carefully and aseptically loaded onto the surface of the solidified agar as an overlay. Media were inoculated with 5×10^5 spores of the Trichoderma isolates and incubated at 28 °C in darkness. Migration of dye into the clear lower layer indicated the presence of cellulases. The relative cellulolytic activity of each isolate was scored by comparing the intensity of the blue color of the medium with a standard blue color scale of 1 to 10 (maximum) over an incubation period of 15 days [26].
Exoglucanases
The method of Bose et al. [27], with some modifications, was used to screen for high cellulase-producing isolates. The basic medium consisted of Mandel's mineral solution supplemented with trace elements and 2% agar [28]. Avicel PH-101 NF was used as the sole carbon source at a concentration of 1%, and 0.01% Triton X-100 was added to limit colony size and facilitate screening. The cellulose agar plates were seeded with 5×10^5 spores in 20 µL and incubated at 28 °C for up to 21 days until clear zones around the fungal colonies were observed; the diameter of the clear zone was then measured.
Endoglucanases
Dye staining of carboxymethylcellulose agar (CMC agar): the cellulolytic activity of the fungal strains was determined by their ability to grow and form cleared zones around colonies on Mandel's agar medium (MAM) supplemented with 0.5% w/v low-viscosity carboxymethyl cellulose (CMC) [29]. The medium was autoclaved, dispensed into Petri dishes, allowed to solidify, inoculated with 5×10^5 spores (in 20 µL) of the Trichoderma isolates and incubated at 28 °C. After growth for 5 days, the plates were flooded with 1% aqueous Congo red and allowed to stain for 15 minutes. The stain was washed off the agar surface with distilled water and the plates were then flooded with 1 M NaCl for 15 minutes to destain, after which the NaCl solution was removed. CMC degradation around the colonies appears as a yellow-opaque area against the red color of undegraded CMC. The diameter of the clear zone was measured [6].
Walseth cellulose Plate-clearing assay
The method of Khiyami et al. [26], with some modifications, was used to screen for high cellulase-producing isolates. The basic medium consisted of Mandel's mineral solution supplemented with trace elements and 2% agar. Phosphoric acid-swollen cellulose (Walseth cellulose) was used as the sole carbon source at a concentration of 1%, and 0.01% Triton X-100 was added to limit colony size and facilitate screening [30]. The plates were seeded with 5×10^5 spores in 20 µL and incubated at 28 °C for 6 days until clear zones around the fungal colonies were observed; the diameter of the clear zone was then measured.
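Across the three plate assays above (Avicel, CMC/Congo red and Walseth cellulose), activity is read from clear-zone diameters. The short Python helper below additionally normalizes the clear zone by colony diameter, a common "enzymatic index"; this normalization is an illustrative addition, not a step reported in this study.

```python
# Hedged helper for the plate-clearing assays: the study records clear-zone
# diameters directly; dividing by colony diameter ("enzymatic index") is a
# common extra normalization, shown here only as an illustration.
def enzymatic_index(clear_zone_mm: float, colony_mm: float) -> float:
    if colony_mm <= 0:
        raise ValueError("colony diameter must be positive")
    return clear_zone_mm / colony_mm

# Hypothetical measurements for one isolate:
print(enzymatic_index(clear_zone_mm=28.0, colony_mm=12.0))  # -> 2.33...
```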
Statistical analysis
One-way ANOVA followed by Duncan's multiple range test (DMRT) was used to assess the statistical significance of changes in all indices, with the level of significance set at p < 0.05. Statistical analysis software (SPSS 16.0.0 release; SPSS Inc., Chicago, IL) was used for all analyses.
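A minimal open-source sketch of this analysis is given below, with hypothetical triplicate clear-zone diameters. Note that Duncan's multiple range test has no standard scipy/statsmodels implementation, so Tukey's HSD is substituted here purely to illustrate the post-hoc step.

```python
# One-way ANOVA across isolates, assuming replicate clear-zone diameters.
# The study used Duncan's test in SPSS; Tukey's HSD stands in for it here.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements for three isolates:
trich1 = [27.5, 28.1, 27.9]
trich7 = [30.2, 29.8, 30.5]
trich9 = [22.4, 23.0, 22.7]

f_stat, p_value = f_oneway(trich1, trich7, trich9)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # post-hoc comparison only if the ANOVA is significant
    values = trich1 + trich7 + trich9
    groups = ["Trich1"] * 3 + ["Trich7"] * 3 + ["Trich9"] * 3
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```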
Results
A total of 28 fungal isolates were analyzed in this investigation. Twenty-one isolates were recovered from samples collected in six governorates of the Nile delta of Egypt, namely Menufia, Gharbia, Kafr el Sheikh, Sharqia, Dakahlia and Ismalia (Figure 1). In addition, four isolates were obtained from the Tanta University collection and three isolates were from our laboratory collection. Preliminary screening of all isolates showed that they were Trichoderma species. The sample types and isolate codes are shown in Table 2.
Morphological identification of the 28 Trichoderma isolates was performed and all 28 isolates were assigned to the genus Trichoderma (Table 3). However, seven isolates (25%) could not be identified to species level. The remaining isolates (75%) were identified to species and fell into four groups: seven isolates were identified as Trichoderma harzianum, six as Trichoderma hamatum, five as Trichoderma viride and three as Trichoderma koningii.
For molecular identification, genomic DNA of all Trichoderma isolates was extracted and PCR amplification of the 5.8S-ITS region was conducted using the specific primers ITS1 and ITS4. 5.8S-ITS DNA fragments were amplified from all Trichoderma isolates and the PCR products were sequenced. The BLAST program was then used to determine the species identity of the isolates; the BLAST results are presented in Table 4. According to these results, 15 isolates were identified as T. longibrachiatum (54%) and seven isolates as T. harzianum (25%), while five isolates were identified as T. album, T. virens, T. viride, T. asperellum and T. saturnisporum. Only one isolate, MNF-MSH-Trich23, could not be identified to species level. A TrichOKEY search was also used to assess the reliability of the BLAST results; three isolates were not identified to species level by this tool, namely MNF-MSH-Trich4, MNF-MSH-Trich22 and MNF-MSH-Trich23.
First, the isolates were assessed for their total cellulase activities using dye diffusion from a cellulose-dye complex (cellulose azure agar). Ten isolates scored 10 on this scale, indicating good cellulolytic ability (Table 7). Secondly, Avicel was used for measuring exoglucanase activity, because it has a low degree of polymerization and is relatively inaccessible to attack by endoglucanases despite some amorphous regions. Six isolates showed highly significant Avicelase activity, namely MNF-MSH-Trich1, MNF-MSH-Trich7, MNF-MSH-Trich8, MNF-MSH-Trich14, MNF-MSH-Trich22 and MNF-MSH-Trich26 (Table 7).
Thirdly, the Trichoderma isolates were screened for endoglucanase activity in a plate-clearing assay using carboxymethyl cellulose (CMC) and Congo red dye, with activity indicated by the formation of a clear zone. Based on these measurements, six isolates gave clear zones with diameters significantly larger than those of the other isolates, namely MNF-MSH-Trich5, MNF-MSH-Trich7, MNF-MSH-Trich9, MNF-MSH-Trich10, MNF-MSH-Trich14 and MNF-MSH-Trich21. Carboxymethylcellulose (CMC) is a substrate for endoglucanase and so can be used as a test for endoglucanase and β-glucosidase activity (Table 7).
Another plate-clearing assay, using acid-swollen cellulose as substrate, was conducted to assess endoglucanase activity. Recording the clearance of cellulose within the growth medium can be difficult, particularly with dense or dark hyphal growth (Table 7). Six isolates showed high activity, namely MNF-MSH-Trich1, MNF-MSH-Trich8, MNF-MSH-Trich14, MNF-MSH-Trich22, MNF-MSH-Trich26 and MNF-MSH-Trich27.
Discussion
The present study is essentially a domestic assessment of Trichoderma isolates representing six Egyptian governorates. Morphological characterization has conventionally been used in the identification of Trichoderma species, and it remains a potential method for this purpose [31]. According to the results of this investigation, however, morphological and cultural characteristics could not distinguish the Trichoderma isolates efficiently at the species level. Similarly, most researchers have faced difficulty in identifying Trichoderma species owing to the high level of structural similarity [32]. Information from morphological study alone is therefore insufficient to identify a Trichoderma species precisely, because Trichoderma species have relatively few morphological characters and limited variation, which may cause overlapping and misidentification of isolates [31]. Moreover, morphological characteristics are influenced by culture conditions [33]. Hence, molecular techniques are needed to compensate for the limitations of morphological characterization.
In this study, DNA sequencing of the 5.8S-ITS region was carried out using the specific primers ITS1 and ITS4. The ITS region is one of the most reliable loci for identifying a strain at the species level [34]. By comparing the sequences of the 5.8S-ITS region to the sequences deposited in GenBank, all of the Trichoderma isolates except MNF-MSH-Trich23 could be identified to species level with a homology of at least 99%. However, Druzhinina et al. noted that the GenBank database contains many sequences of Trichoderma isolates that may have been incorrectly identified and deposited under a false name [35]. Hence, the TrichOKEY search tool, a program that specifically compares ITS1 and ITS2 sequences to a Trichoderma database generated only from vouchered sequences, was used to assess the reliability of the BLAST results. TrichOKEY has been used in many studies and has resulted in successful identification of Trichoderma isolates [31]. From the TrichOKEY results, all isolates except MNF-MSH-Trich23 were identified; the results were in 92% agreement with the BLAST results. Isolate MNF-MSH-Trich23 was identified only as an unknown Trichoderma species, and its morphological data were likewise insufficient for identification. The main differences between the results of the two databases concerned two isolates, MNF-MSH-Trich15 and MNF-MSH-Trich19. The results of this investigation confirm the difficulty faced by other researchers: the morphological characterization was not reliable, agreeing with the molecular identification for only one isolate, MNF-MSH-Trich1. It was concluded that morphological characterization is not reliable for identification of these isolates, and that the oligonucleotide barcode is a powerful tool for the identification of Trichoderma species that should be useful as an alternative or complement to morphological methods. The molecular data from ITS sequencing are therefore more trustworthy for characterization and identification of the isolates under study. Consequently, according to the molecular findings, two main species predominated among these isolates: T. longibrachiatum accounted for about 50% of the isolates and T. harzianum for about 25%, most of them coming from the collected field materials. These data indicated a narrow species diversity [36]. Other results from the south of Egypt likewise indicated the presence of the two species T. harzianum and T. longibrachiatum [37]. The predominant T. longibrachiatum is evolutionarily the youngest clade of Trichoderma [38]; it is a soil fungus found all over the world, but mainly in warmer climates [39]. The second predominant species, T. harzianum, is the most commonly reported species in the genus, occurring in diverse ecosystems and ecological niches [40]. This low degree of diversity may be due to specific biotic or abiotic factors such as plant species, microbial competition, soil physical and chemical properties, and the application of pesticides or fertilizers in the geographical region [4].
In addition, within-species comparison of the isolates was carried out in this study. The results showed that although the rDNA ITS sequence is very conservative, there was variation in sequence and length among different isolates, i.e., genetic differentiation to various degrees, so variation among individuals of the same species was noticed. For instance, within T. longibrachiatum the GC% varied from 42.6% to 68.8%, while the ITS length was between 561 and 888 nucleotides; within T. harzianum, the length varied between 546 and 1028 nucleotides and the GC% between 49.6% and 55.7%. This indicated a wide range of intraspecific variation, which is consistent with the presence of haplotypes within species [4].
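The per-isolate quantities compared here are simple to compute; the following self-contained Python snippet computes ITS length and GC% for a placeholder sequence (not one of the study's accessions).

```python
# GC content and length of an ITS sequence, as compared among isolates
# above; pure Python, no external dependencies. The sequence is a
# placeholder, not one of the study's accessions.
def gc_percent(seq: str) -> float:
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

its = "TCCGTAGGTGAACCTGCGGAAGGATCATTACCGAGT"
print(f"length = {len(its)} nt, GC = {gc_percent(its):.1f}%")
```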
As mentioned before, Trichoderma spp. exhibited the highest cellulase activity and consistency in producing cellulase when compared with other microorganisms [41]. Total cellulase activity was measured using the cellulose azure agar method, which is highly recommended as the most reliable qualitative assay for cellulolysis [42]. The cellulose azure experiment was run for 15 days, a period that allows Trichoderma to degrade the link between the azure dye and cellulose; Fadel et al. reported that each microorganism requires a different incubation time for the enzymatic reaction on the substrate [43]. Further analysis was conducted to differentiate the isolates into two categories, those producing mainly endoglucanases and those producing mainly exoglucanases. Avicel has been used for measuring the activity of exoglucanases, which cleave the accessible ends of cellulose molecules to liberate glucose and cellobiose [44]. To identify isolates producing high levels of endoglucanases, two methods were used, the CMC and Walseth cellulose plate assays. CMC is chemically modified to resemble cellulose; Congo red colorizes only the cellulose, so the decolorized area indicates endoglucanase activity [45]. Hydrolysis of cellulose by the endoglucanases in particular is crucial, as it initiates the subsequent synergistic actions involving β-glucosidase and the cellobiohydrolases [46]. The Walseth cellulose plate assay involves converting the crystalline fraction of cellulose to the amorphous form by adding o-phosphoric acid to produce phosphoric acid-swollen cellulose (PASC) [47]. None of the screening methods reported in this research had sufficient precision to allow the selection of particular cellulase enzymes, which may be due to the complexity of the cellulolytic system produced by Trichoderma. However, eleven isolates were selected as the best among the 28 isolates used: MNF-MSH-Trich1, MNF-MSH-Trich5, MNF-MSH-Trich7, MNF-MSH-Trich8, MNF-MSH-Trich9, MNF-MSH-Trich10, MNF-MSH-Trich14, MNF-MSH-Trich21, MNF-MSH-Trich22, MNF-MSH-Trich26 and MNF-MSH-Trich27. These findings are consistent with the observation that most of the isolates belonged to T. longibrachiatum and T. harzianum, species that have been adopted in various industries because of their ability to secrete large amounts of protein and metabolites [4].
Conclusion
According to the results of this investigation, morphological characterization is not reliable for identification of the isolates, whereas the oligonucleotide barcode is a powerful tool for the identification of Trichoderma species and should be useful as an alternative or complement to morphological methods. According to the molecular findings, two main species predominated in the middle Delta area of Egypt, namely T. longibrachiatum and T. harzianum. Eleven isolates were selected as the best of the 28 isolates for cellulolytic activity.
"Biology"
] |
Protease-inhibiting, molecular modeling and antimicrobial activities of extracts and constituents from Helichrysum foetidum and Helichrysum mechowianum (compositae)
Background Helichrysum species are used extensively for stress-related ailments and as dressings for wounds normally encountered in circumcision rites, bruises, cuts and sores. It has been reported that Helichrysum species are used to relieve abdominal pain, heart burn, cough, cold, wounds, female sterility and menstrual pain. Results From the extracts of Helichrysum foetidum (L.) Moench, six known compounds were isolated and identified: 7,4′-dihydroxy-5-methoxy-flavanone (1), 6′-methoxy-2′,4,4′-trihydroxychalcone (2), 6′-methoxy-2′,4-dihydroxychalcone-4′-O-β-D-glucoside (3), apigenin (4), apigenin-7-O-β-D-glucoside (5) and kaur-16-en-18-oic acid (6), while two known compounds, 3,5,7-trihydroxy-8-methoxyflavone (12) and 4,5-dicaffeoyl quinic acid (13), together with a mixture of phytosterols, were isolated from the methanol extract of Helichrysum mechowianum Klatt. All the compounds were characterized by spectroscopic and mass spectrometric methods, and by comparison with literature data. Both extracts and all the isolates were screened for protease-inhibiting, antibacterial and antifungal activities. In addition, the phytochemical profiles of both species were investigated by ESI-MS experiments. Conclusions These results showed that the protease inhibition by H. foetidum can be attributed mainly to its flavonoid glycoside constituents (3, 5), while compound (13) from H. mechowianum contributes to the stomach-protecting effects. In addition, among the antibacterial and antifungal activities of all the isolates, compound (6) was found to possess a potent inhibitory effect against the tested microorganisms. The heterogeneity of the genus is also reflected in its phytochemical diversity. The differential bioactivities and the determined constituents support the traditional use of the species. Molecular modelling was carried out by computing selected descriptors related to drug absorption, distribution, metabolism, excretion and toxicity (ADMET). Graphical abstract Compounds isolated from Helichrysum species (Compositae). Electronic supplementary material The online version of this article (doi:10.1186/s13065-015-0108-1) contains supplementary material, which is available to authorized users.
Background
The genus Helichrysum (Compositae) consists of more than 600 species, with a major center of distribution in South Africa [1]. Several Helichrysum species have been used in the folk medicine of different countries for the protection of post-harvest food [2]. Moreover, Helichrysum species are used extensively for stress-related ailments and as dressings for wounds normally encountered in circumcision rites, bruises, cuts and sores [3]. It has also been reported that Helichrysum species are used to relieve abdominal pain, heart burn, cough, cold, wounds, female sterility and menstrual pain [4], to treat conditions such as gastric disorders [5-7], gastroduodenal disorders, gastric ulcers and gastritis [8], stomach damage [9,10], acute hepatitis, fever or oedema [11], and as diuretics and remedies for inflammatory and allergic conditions [12,13]. In addition, some of these species have been reported to possess antimicrobially active compounds [14-16].
Chemical studies on Helichrysum species have been carried out by many investigators, and the presence of flavonoids, phloroglucinols, α-pyrones, coumarins and terpenoid compounds has been reported [17-25]. H. foetidum is used to treat influenza, infected wounds, herpes, eye problems and menstrual pains, to induce trance, and is reported to possess antifungal properties [2,26]. H. mechowianum is used for the treatment of stomach damage and cephalgy [9,27] and possesses anti-ulcerogenic activity [28,29]. In continuation of these studies, we extended our search for biologically active compounds from Helichrysum species [17,18] to the protease-inhibiting activity of extracts and isolated compounds from Helichrysum foetidum and Helichrysum mechowianum, using a fluorescence resonance energy transfer (FRET) pepsin inhibition assay as a pharmacological model for anti-ulcer compounds [30]. Besides excessive stomach acid and Helicobacter pylori, pepsin is one of the major factors in the pathophysiology of peptic ulcer disease and reflux oesophagitis. In addition, the antibacterial and antifungal effects of both species were evaluated against Bacillus subtilis and the yeast Cladosporium cucumerinum, respectively.
The chemical profile of methanol extracts of H. mechowianum and H. foetidum was investigated. To our knowledge, this is the first report about constituents of H. mechowianum. The compounds identified have been reported previously from other Helichrysum species in different compositions.
In order to assess the drug-likeness profiles of the isolated metabolites, low-energy computer models were generated and a number of ADMET-related descriptors calculated, with a view to drug metabolism and pharmacokinetics (DMPK) evaluation.
Biological tests
The methanol leaf extracts of Helichrysum foetidum and Helichrysum mechowianum showed significant activity in the pepsin protease FRET assay, while no activity was detected against the serine protease subtilisin (Table 1). The extract of H. foetidum exhibited the higher pepsin protease inhibition (37.4 and 35.6% inhibition at 50 and 25 μg/ml) (Table 1). Therefore the previously isolated constituents 1-6 (Fig. 1) from H. foetidum and 12-13 from H. mechowianum were also tested. The best results at a concentration of 50 μg/ml were obtained with apigenin-7-O-β-D-glucoside (5) and 6′-methoxy-2′,4-dihydroxychalcone-4′-O-β-D-glucoside (3), with moderate inhibition of 46.3 and 37.4%, respectively (Table 1), while 3,5,7-trihydroxy-8-methoxyflavone (12) and 4,5-dicaffeoyl quinic acid (13) showed weak activity. These results suggest that the inhibition of the aspartate protease observed with the H. foetidum extract can be attributed mainly to the glycosidic compounds (3) and (5). In contrast, in the inhibition assay with the serine protease subtilisin, neither the crude extracts nor the isolated substances of either species showed significant activity (Table 1). We conclude that the substances present in the crude extract of H. foetidum are selective for aspartate proteases. The negative results may be due to auto-fluorescent debris from subtilisin cleavage of these compounds, or to the absence of bioaffinity interactions between the substances present in the crude extract of H. foetidum and the serine protease subtilisin [31]. The observed protease-inhibiting activity may have mucosal protective effects and may therefore help to reduce peptic ulceration. From the black birch fungus (Inonotus obliquus), which is used in folk medicine in Russia for the treatment of gastrointestinal tract disorders, the flavonoid fraction was likewise shown to possess anti-ulcer activity [31]. In addition, the crude extracts of both species and all the isolated compounds were subjected to in vitro antimicrobial assays against reference strains of the bacterium Bacillus subtilis and the yeast Cladosporium cucumerinum.
It has been reported that extracts with MIC values below 8000 μg/ml possess some antimicrobial activity [32], and MIC values below 1000 μg/ml are considered noteworthy [33,34]. Thus, a crude extract active at 1000 μg/ml or lower against all the pathogens studied demonstrates potential anti-infective properties. Compounds with antimicrobial activity below 64 μg/ml are accepted as having notable antimicrobial activity [33], and those active at concentrations below 10 μg/ml are considered "clinically significant" [32,33]. On this basis, the crude leaf and flower extracts of H. foetidum showed a significant and concentration-dependent growth inhibition of Bacillus subtilis of 85.4% at a concentration of 1 mg/ml and 21.8% at 0.1 mg/ml, whereas the crude extract of H. mechowianum at 1 mg/ml and 0.1 mg/ml caused only moderate growth inhibition of 36.2 and 29.8%, respectively (Table 2). Likewise, the crude leaf and flower extracts of H. foetidum also exhibited antifungal activity against Cladosporium cucumerinum, shown by the development of inhibition zones on the bioautography plate. In contrast, the extracts of H. mechowianum were only slightly active against this fungus.
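These thresholds can be summarized as a small decision rule; the sketch below encodes the cut-offs cited above, with category names paraphrased from the text rather than taken from a standard scale.

```python
# Hedged encoding of the activity thresholds cited above ([32-34]); the
# category labels paraphrase the text and are not a standard scale.
def classify_mic(mic_ug_per_ml: float, is_pure_compound: bool) -> str:
    if is_pure_compound:
        if mic_ug_per_ml < 10:
            return "clinically significant"
        if mic_ug_per_ml < 64:
            return "notable antimicrobial activity"
        return "weak or inactive"
    if mic_ug_per_ml <= 1000:
        return "noteworthy (potential anti-infective)"
    if mic_ug_per_ml < 8000:
        return "some antimicrobial activity"
    return "inactive"

print(classify_mic(500, is_pure_compound=False))  # noteworthy
print(classify_mic(8, is_pure_compound=True))     # clinically significant
```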
Furthermore, all of the isolated compounds were subjected to the in vitro antimicrobial assay. It was interesting to note that compounds (1-6) from H. foetidum exhibited notable growth inhibition of Bacillus subtilis, in the range of 75.0 to 85.0%, and of the yeast Cladosporium cucumerinum, in the range of 56 to 70%, at a concentration of 1 mg/ml, whereas compounds (12-13) from H. mechowianum showed only moderate growth inhibition of Bacillus subtilis, in the range of 30.8 to 40.2%, at 1 mg/ml (Table 2). Of all the isolates, compound (6) exhibited the highest growth inhibition of Bacillus subtilis, 85.0% at a concentration of 1 mg/ml (Table 2).
These results indicate that the diterpenoid possesses antimicrobial activity against the Gram-positive bacterium. The antibacterial activity of the H. foetidum extract might therefore be associated with its high content of kaurenoic acid (6). This justifies the use of these plant species in folk medicine and corroborates previous reports on the antibacterial activities of Helichrysum species [2,35,36]. Kaur-16-en-19-oic acid isolated from extracts of the Asteraceae (Senecio erechtitoides and Wedelia calendulaceae) was previously shown to possess high inhibitory activity against several bacterial strains [37,38].
Chemical constituents
The main constituents of both species were characterized by detailed ESI-MS investigations. The combination of LC-MS, MS/MS and FTICR-HRMS allowed the detection of various components simultaneously. The MS experiments show that H. foetidum and H. mechowianum have different chemical compositions. The leaf extract of H. foetidum is dominated by the chalcones 2 and 3, the flavonoids 4 and 5 and by diterpenoids [18], whereas the main constituents of H. mechowianum are quinic acid derivatives, with a less prominent bioactivity profile.
A more detailed ESI-MS investigation of the crude extract of Helichrysum mechowianum indicates the presence of quinic acid derivatives (Additional file 1) [38,39], which are also present in the French H. stoechas var. olonnense [40]; both are used as digestives. A similar compound composition is known for the artichoke, Cynara scolymus L., which is used for its choleretic, lipid-lowering, hepatostimulating and appetite-stimulating actions [41]. Extracts and constituents of artichoke were also shown to possess antibacterial and antifungal activities; the extracts and constituents of H. mechowianum, however, showed only weak antifungal activity against the yeast Cladosporium cucumerinum. The observed quinic acid derivatives might be responsible for the stomach-protecting effects of H. mechowianum. Chromatographic separation of the partitioned extracts of H. mechowianum resulted in the isolation of a phytosterol mixture from the n-heptane fraction, of 3,5,7-trihydroxy-8-methoxyflavone (12) from the ethyl acetate fraction, and of 4,5-dicaffeoyl quinic acid (13) ([M − H]− 515.11950) from the water fraction. The relative composition of the phytosterol fraction was determined by GC-MS as campesterol (2%), stigmasterol (9.3%), campest-7-en-3-ol (61.4%), chondrillasterol (18.9%), β-sitosterol (61.4%) and an unidentified sterol (1.2%). Compounds (12) [42] and (13) [43] were identified by comparison of their spectral data with literature data. In addition, the position of the caffeoyl residues in compound (13) was determined by 2D NMR measurements; in particular, HMBC correlations from H-4 and H-5 of the quinic acid to C-9′, C-8′ and C-7′ of the caffeoyl residues indicate substitution at positions 4 and 5 (Table 3). Since this compound is reported to possess cytotoxic and apoptosis-inducing activity [44,45], its anticancer activity against the prostate cancer cell line PC-3 was tested; however, at concentrations of 50 nM or 50 μM no effect was observed with this cell line (Table 4).
In silico pharmacokinetics assessment
Many bioactive compounds do not make it to clinical trials because of adverse pharmacokinetic properties. It is therefore imperative to assess the pharmacokinetic profiles of potential drugs early enough to judge their potential for further development. A summary of twenty-two of the computed molecular descriptors used to assess the drug-likeness profiles of the isolated metabolites is given in Table 5. These include the #stars or 'drug-likeness' parameter, the molecular weight (MW), the solvent-accessible surface area (SASA) along with its hydrophobic (FOSA) and hydrophilic (FISA) components, the molecular volume, the numbers of hydrogen bond acceptors (HBA) and donors (HBD), the n-octanol/water partition coefficient (log P), the solubility parameter (log S), the predicted IC50 values for blockage of the human ether-a-go-go potassium ion (HERG K+) channels (logHERG), the predicted permeability of Caco-2 cells, the blood-brain barrier partition coefficient (log BB), the permeability of Madin-Darby canine kidney (MDCK) monolayers, the skin permeability (log Kp), the number of predicted primary metabolites (#metab), the binding affinity to human serum albumin (log KHSA), the percentage human oral absorption (PHOA), the numbers of violations of Lipinski's 'Rule of Five' (Ro5) and Jorgensen's 'Rule of Three' (Ro3), and the polar surface area (PSA). The range of values of each parameter for 95% of known drugs is given beneath Table 5. Five of these compounds (1, 2, 3, 5 and 12) showed #stars = 0, which indicates that all the computed parameters fell within the recommended range for 95% of known drugs, while compounds 4 and 6 showed only #stars = 1. The overall ADME-compliance score, the drug-likeness parameter (indicated by #stars), was used to assess the pharmacokinetic profiles of the isolated compounds; it indicates the number of property descriptors computed by QikProp [46] that fall outside the optimum range of values for 95% of known drugs. The methods implemented were developed by Jorgensen et al. [47-49] (Table 5).
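QikProp is proprietary, but several of these descriptors can be reproduced with open-source tools. The following RDKit sketch computes MW, log P, HBD, HBA and PSA and counts Lipinski Ro5 violations for apigenin (compound 4), whose SMILES is written out here from its known structure; the values will not match QikProp's exactly.

```python
# Rough open-source analogue of part of the descriptor calculation: RDKit
# is used to compute MW, log P, HBD, HBA and PSA and to count Lipinski
# Rule-of-Five violations. The SMILES encodes apigenin (compound 4).
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

apigenin = Chem.MolFromSmiles("C1=CC(=CC=C1C2=CC(=O)C3=C(C=C(C=C3O2)O)O)O")

mw = Descriptors.MolWt(apigenin)
logp = Crippen.MolLogP(apigenin)
hbd = Lipinski.NumHDonors(apigenin)
hba = Lipinski.NumHAcceptors(apigenin)
psa = Descriptors.TPSA(apigenin)

violations = sum([mw > 500, logp > 5, hbd > 5, hba > 10])
print(f"MW={mw:.1f}, logP={logp:.2f}, HBD={hbd}, HBA={hba}, PSA={psa:.1f}")
print(f"Lipinski Ro5 violations: {violations}")
```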
General methods
NMR spectra were referenced to the solvent signals of CDCl3 (δ = 77.0 ppm, 13C) or CD3OD (δ = 49.0 ppm, 13C), respectively. The high-resolution ESI mass spectra were obtained on a Bruker Apex III Fourier transform ion cyclotron resonance (FTICR) mass spectrometer (Bruker Daltonics, Billerica, USA) equipped with an Infinity™ cell, a 7.0 Tesla superconducting magnet (Bruker, Karlsruhe, Germany), an RF-only hexapole ion guide and an external APOLLO electrospray ion source (Agilent, off-axis spray; voltages: endplate, −3,700 V; capillary, −4,200 V; capillary exit, 100 V; skimmer 1, 15.0 V; skimmer 2, 10.0 V). Nitrogen was used as drying gas at 150 °C. The sample solutions were introduced continuously via a syringe pump at a flow rate of 120 μl/h. All data were acquired with 512 k data points, zero-filled to 2048 k, by averaging 32 scans. The XMASS software (Bruker, version 6.1.2) was used for evaluating the data. The positive ion ESI mass spectra and the collision-induced dissociation (CID) mass spectra were obtained on a TSQ Quantum Ultra AM system.
Extraction and isolation
Leaves and flowers of H. foetidum and leaves of H. mechowianum were extracted exhaustively with 90% methanol for a period of 72 h, and the solvent was removed by evaporation under reduced pressure. From the crude flower extract of H. foetidum, purification by successive column and preparative TLC chromatography on silica gel using chloroform/methanol gradient systems yielded the compounds 7,4′-dihydroxy-5-methoxy-flavanone (1) and kaur-16-en-18-oic acid (6), while the compounds 6′-methoxy-2′,4,4′-trihydroxychalcone (helichrysetin) (2), 6′-methoxy-2′,4-dihydroxychalcone-4′-O-β-D-glucoside (3), apigenin (4) and apigenin-7-O-β-D-glucoside (5) were isolated from the leaves and flowers of H. foetidum. The aqueous residue of the crude extract of H. mechowianum leaves was partitioned successively with n-heptane and ethyl acetate. The n-heptane and ethyl acetate extracts were further purified by silica gel column chromatography using n-hexane/ethyl acetate gradient systems, resulting in the isolation of a phytosterol fraction and of 3,5,7-trihydroxy-8-methoxyflavone (12), respectively. The water fraction was further separated on Diaion HP20, eluted with water, methanol, ethyl acetate and acetone, followed by chromatography of the methanol fraction on Sephadex LH20 to give 4,5-dicaffeoyl quinic acid (13).
Protease inhibition assay
Initially, the extracts or purified constituents were dissolved in DMSO, and dilutions of the samples were made in the respective buffer for each enzyme, i.e., 0.1 M sodium phosphate (pH 7.5) for subtilisin and 0.1 M sodium acetate (pH 4.4) for pepsin. Samples (0.01-50 μg/ml) were pre-incubated with subtilisin (37 nM) or pepsin (1.7 nM) for 30 min and then transferred to a black opaque microplate. The substrate EDANS-DABCYL (2 μM), prepared in the specific buffer for each protease, was injected automatically; the final volume was 100 μl. Experiments were performed separately for each protease, which was prepared on the day of the experiment. Readings were made over a period of 5 min at 1 min intervals, with the temperature controlled at 37 °C. The mean, standard deviation and relative standard deviation (RSD) of triplicates and the percentage of inhibition were calculated using the final fluorescence intensity measured.
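The final calculation step of this assay reduces to simple arithmetic on the end-point fluorescence intensities. The sketch below illustrates it with hypothetical readings; the definition of percent inhibition relative to an uninhibited control is an assumption consistent with, but not spelled out in, the text.

```python
# Sketch of the final FRET-assay arithmetic: percent inhibition relative
# to an uninhibited control, plus mean/SD/RSD of triplicates. All
# fluorescence intensities below are hypothetical.
from statistics import mean, stdev

def percent_inhibition(sample_fluorescence: float, control_fluorescence: float) -> float:
    return 100.0 * (1.0 - sample_fluorescence / control_fluorescence)

control = 12500.0                      # final intensity, no inhibitor
triplicate = [7900.0, 8150.0, 8020.0]  # final intensities with extract

inhibitions = [percent_inhibition(f, control) for f in triplicate]
m, sd = mean(inhibitions), stdev(inhibitions)
print(f"inhibition = {m:.1f}% +/- {sd:.1f} (RSD = {100 * sd / m:.1f}%)")
```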
Antifungal assay
The antifungal activity against the phytopathogenic fungus Cladosporium cucumerinum was tested by bioautography on silica gel plates [51] at concentrations of 50, 100, 200 and 400 μg/cm. Amphotericin B was used as positive control for growth inhibition.
Cytotoxicity assay
Cytotoxicity was determined by the XTT method, using the Cell Proliferation Kit II (Roche). The human prostate cancer cell line PC-3 was maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum, 1% L-alanyl-L-glutamine (200 mM), 1% penicillin/streptomycin and 1.6% HEPES (1 M). For the measurement of cytotoxicity, the same medium was used without antibiotics. PC-3 cells were seeded overnight at 500 cells/well into 96-well plates and exposed to serial dilutions of each compound for three days.
Molecular modeling
All molecular modelling was carried out on a Linux workstation running on a 3.5 GHz Intel Core2 Duo processor (Santa Clara, USA). Low-energy 3D structures of the thirteen isolated compounds were generated using the MOE software package [52] and the Merck molecular force field [53] and saved in mol2 format. These were initially treated with LigPrep [54], distributed by Schrodinger, Inc. (Camberley, UK); this step was carried out with the graphical user interface (GUI) of the Maestro software package (New York, USA) [55], using the OPLS force field [56-58]. Protonation states at biologically relevant pH were assigned correctly (group I metals in simple salts were disconnected, strong acids were deprotonated and strong bases protonated, while topological duplicates were removed and explicit hydrogens added). A set of ADMET-related properties (a total of 46 molecular descriptors) was calculated using the QikProp program (New York, USA) [46] running in normal mode. QikProp generates physically relevant descriptors and uses them to perform ADMET predictions. An overall ADME-compliance score, the drug-likeness parameter (indicated by #stars), was used to assess the pharmacokinetic profiles of the compounds; the #stars parameter indicates the number of property descriptors computed by QikProp that fall outside the optimum range of values for 95% of known drugs. The methods implemented were developed by Jorgensen et al. [47-49].
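As an open-source stand-in for this workflow, the RDKit sketch below generates a low-energy 3D structure with the same Merck molecular force field (MMFF) mentioned above; it only approximates the MOE/LigPrep procedure, and the apigenin SMILES is reused from the earlier example.

```python
# Open-source stand-in for the conformer-generation step: the authors used
# MOE with the Merck molecular force field (MMFF); RDKit exposes the same
# force field, so a comparable low-energy 3D structure can be sketched.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("C1=CC(=CC=C1C2=CC(=O)C3=C(C=C(C=C3O2)O)O)O")
mol = Chem.AddHs(mol)                        # explicit hydrogens, as in LigPrep
AllChem.EmbedMolecule(mol, AllChem.ETKDG())  # initial 3D embedding
AllChem.MMFFOptimizeMolecule(mol)            # MMFF energy minimization
Chem.MolToMolFile(mol, "apigenin_3d.mol")    # save for downstream tools
```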
Conclusion
Eight known compounds were identified from the extracts of two species of the genus Helichrysum (Compositae) harvested in the South West of Cameroon (Central Africa). The results showed that the flavonoid glycosides (3, 5) from H. foetidum exhibited protease inhibition, while compound (13) from H. mechowianum contributes to the stomach-protecting effects. In addition, compound (6) was found to possess a potent inhibitory effect against the tested microorganisms, demonstrating its antibacterial and antifungal activities. The differential bioactivities and the determined constituents support the traditional use of the species. Molecular modelling studies showed that five of the isolated compounds have physicochemical properties that fall completely within the recommended range for 95% of known drugs, while two compounds have only one violation.
"Chemistry"
] |
On a New Equation for Critical Current Density Directly in Terms of the BCS Interaction Parameter, Debye Temperature and the Fermi Energy of the Superconductor
Recasting the BCS theory in the larger framework of the Bethe-Salpeter equation, a new equation is derived for the temperature-dependent critical current density jc(T) of an elemental superconductor (SC) directly in terms of the basic parameters of the theory, namely the dimensionless coupling constant [N(0)V], the Debye temperature θD and, additionally, the Fermi energy EF—unlike earlier such equations based on diverse, indirect criteria. Our approach provides an ab initio theoretical justification for one of the latter, text book equations invoked at T = 0 which involves Fermi momentum; additionally, it relates jc with the relevant parameters of the problem at T ≠ 0. Noting that the numerical value of EF of a high-Tc SC is a necessary input for the construction of its Fermi surface—which sheds light on its gap-structure, we also briefly discuss extension of our approach for such SCs.
Introduction
The critical current density (jc) of a superconductor (SC) is the maximum current density that it can carry, beyond which it loses the characteristic of superconductivity. It is an important parameter because the greater its value, the greater the practical use to which the SC can be put. The basic relation between jc and the critical velocity (vc) of Cooper pairs (CPs) at any temperature T and an applied field H is jc(T, H) = ns e* vc(T, H) = ns e* Pc/(2m*), (1) where ns is the number of CPs and e*, Pc and (2m*) are, respectively, the charge, the critical momentum, and the effective mass of a CP. We note that, since formation of CPs in the BCS theory is synonymous with the formation of their condensate [e.g., 1], Pc in (1) may also be defined as the minimum momentum that causes dissociation of the condensate.
As alternatives to (1), several derived relations for jc can be found in the literature [2-5], some of which have been reproduced in Table 1. Salient features of these relations are: 1) they are obtained via indirect approaches based on diverse criteria, such as the type of SC being dealt with (type I or II) and its geometry; 2) they lead to values of jc that are generally much greater than the experimental values; and 3) only one of them involves the Fermi energy EF (via the Fermi momentum) of the SC; this will be discussed further below.
EF of an SC is an important parameter too because, as has been remarked [6], "There is every evidence that the remarkable low value of EF (<100 meV) and the strong coupling of carriers with high-frequency phonons is the cause of high Tc in all newly discovered superconductors." Furthermore, input of the numerical value of EF is essential to construct the Fermi surface Ej(k) of an SC via Ej(k) = EF, from which it is seen [7; p. 117] that the whole process of determining theoretically the shape of the Fermi surface involves calculating Ej(k) over the entire Brillouin zone and then constructing the particular constant-energy surface that corresponds to EF. However, this assumes that the actual numerical value of EF is available, which may well not be the case. The importance of the Fermi surface stems from the fact that it sheds light on the gap-structure of the SC, since it marks the boundary between the occupied and the unoccupied parts of the band j. This explains the considerable experimental effort that has been expended on constructing the Fermi surfaces of a variety of high-Tc SCs, as reported in [e.g., 8,9] and, more recently, in [10-12]. In particular, in the latter of these references, the gaps observed in iron-pnictide SCs as nodes or line nodes on the Fermi surface have evinced considerable interest. For a quantitative account of the Tc and the multiple gaps of a prominent member of the iron-pnictide family, namely Ba0.6K0.4Fe2As2, in the framework of the generalized BCS equations (GBCSEs) [13], which will be discussed further below, we draw attention to [14]. The purpose of this note is to present an approach in which Pc(T), defined as the momentum at which the binding energy of the CPs vanishes (this is equivalent to the vanishing of the gap [13]), is calculated via the dynamics of CPs. As will be seen, we are then led via (1) to an equation for jc(T) directly in terms of the familiar BCS parameters, namely the dimensionless coupling constant [N(0)V], the Debye temperature θD and, additionally, EF of the SC. The framework employed by us is that of the Bethe-Salpeter equation (BSE), for reasons to be spelled out shortly.
The paper is organized as follows. In the next section, we obtain equations for Pc(T, H = 0) and Pc(T = 0, H = 0) for a simple SC. The solutions of these equations for Sn are obtained in Section 3 and compared with similar results obtained by a different method. Extension of our approach to non-elemental SCs is presented in Section 4. In Section 5 we make four brief comments. The final section sums up our conclusions.
Equations for Pc(T, H = 0) and Pc(T = 0, H = 0) for a Simple SC
Our starting point is the T = 0, H = 0 BSE [15] for the bound states of particles a, b bound via the interaction kernel; q and p denote the momenta of the two electrons in the centre of mass (c.m.) frame, and P is the 4-momentum of the c.m. of the CP in the laboratory frame. (We use natural units, ℏ = c = 1, with mass, momentum, energy, etc. in eV.) Customization of this equation for CPs requires that a, b should be electrons, where m is the electron mass. In our earlier work [13] based on (2), it sufficed to set the total energy of a CP equal to E; it then turned out that W is the binding energy of the pair, and the BCS interaction kernel in (2) was the model interaction given in (5). The role of the 4th dimension in (2) is simply to provide the means to temperature-generalize the theory at the outset via the Matsubara recipe. Thus, following the steps that have been detailed in [13,16], we obtain from (2) the 3-dimensional equation (6). If we simply carry out the integration in (8), we obtain the usual T = 0 theory; subjecting it to the Matsubara recipe, however, we obtain an equation valid at any temperature, causing the theory to incorporate many-body effects. With the aid of the Matsubara recipe, (8) yields [13,16] equation (9), where β = 1/(kB T), kB being the Boltzmann constant. Since the critical velocity is defined as the velocity of CPs at which W = 0, we now need to consider (2) for the case of moving CPs. Hence (4) is replaced by (11), where P is the 3-momentum of the c.m. of a CP. It is pertinent at this stage to draw attention to the interaction Hamiltonian corresponding to (2), which is apparent from the structure of the equation: here ψ is the electron field and φ the phonon field, exchanges of the latter between the electrons with coupling strength g being responsible for pairing. Both for elemental and non-elemental SCs, one is now enabled to calculate not only the Tcs and the Δs, as has been shown [17,18], but also the Pc(T)s of the pairs, as will be seen below.
Because the BSE formalism accommodates CPs having non-zero c.m. momentum, it constitutes a larger framework than the original BCS formalism, which restricts the Hamiltonian at the outset to comprise terms corresponding to pairs having zero c.m. momentum.
Since the energies of the electrons forming a CP now take on the values given in (12), substituting (9)-(12) into (6) we obtain (13) and (13a); it is seen that, as is well known for a constant kernel, the wave function for the pair is a constant (the limits L and U will be dealt with shortly). Multiplying the resulting equation through and simplifying, we obtain (14), so that, given the integration range, we obtain from (14) equation (16), in which (13a) and the definition of E in (4) have been used. In the natural units employed by us, both m and EF are in eV; the second pre-factor within the square brackets on the RHS of (16) is therefore recognized as the 3-dimensional density of states at the Fermi surface (with the dimensions of eV^−1·cm^−3 in the units customarily employed in the BCS theory). Henceforth we denote this factor by N(0). Note that the term corresponding to P·p·x/2m in the expansion of (P/2 ± p)^2/2m has been written as Pαx by using (15) and the definitions of α and x that follow (16). We now specify the limit L; it follows from (12), where (15) has been used. We now put W = 0 in (16) in order to determine the critical momentum Pc(T) at any temperature. Simultaneously, we neglect Pc^2/8m everywhere (an a posteriori justification follows), except in the denominator of the integrand, where it must be retained so as to avoid the singularity at ξ = 0. It is then seen that it is an excellent approximation to write (16) as (19).
Equation (19) affords a consistency check of our procedure so far: putting Pc = 0 causes the x-integral to yield unity and the two tanh-functions to add up, leading to the correct BCS equation for Tc. Note that when T = 0 (β = ∞), f1(Pc, ξ, x) = 1, whereas the value of f2(Pc, ξ, x) depends on whether ξ is less than or greater than Pc·αx. Therefore, when T = 0, we can write (19) as (21), with (22) and (23). Carrying out the elementary integrations in (22) and (23), (21) yields (24); since, as will be seen below (cf. (32)), we may write it more compactly as (25) in terms of a dimensionless parameter.
Solutions of (25) and (19) for Sn and Comparison of Results for jc via (1) with Those Obtained via an Alternate, Indirect Approach
We deal with Sn because superconducting properties based on its jc have been discussed in standard texts such as [3; p. 248] and [19; p. 138]; the equation invoked for jc at T = 0 in these texts is (26). Using (27) and the experimental value of jc for Sn (~2 × 10^7 Ampere cm^−2), (26) is invoked to calculate ns, since it is the most uncertain quantity in the equation. It is thus found that ns = 8.50 × 10^21 cm^−3, (28) which, it has been remarked [19], is appreciably less than one electron per atom, but not unreasonable in view of the complicated band structure of tin, which has been discussed in [7, p. 294]. In our approach, we first need the value of λ to solve (25). Substituting the experimental value of Tc quoted above and θD = 195 K in the BCS equation for Tc, we obtain λ; the value of α given after (16) has also been used. Using (29) and (1), we obtain (31), where 2m* is the effective mass of a CP, m* having been taken as 1.26 times the free electron mass as before, e* is twice the electronic charge, and the value of EF is in eV.
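For reference, the BCS equation for Tc invoked here is the standard weak-coupling relation; solving it for λ with the tabulated Tc of Sn (3.72 K) and θD = 195 K reproduces the value λ = 0.2445 used in Figure 1:

```latex
% Standard weak-coupling BCS relation for T_c (the equation referenced
% above); solving it for \lambda with T_c(\mathrm{Sn}) = 3.72\,\mathrm{K}
% and \theta_D = 195\,\mathrm{K} reproduces the value used in Figure 1.
\[
  k_B T_c = 1.14\, k_B \theta_D \, e^{-1/\lambda}, \qquad \lambda \equiv [N(0)V],
\]
\[
  \lambda = \frac{1}{\ln\!\left(1.14\,\theta_D/T_c\right)}
          = \frac{1}{\ln\!\left(1.14 \times 195/3.72\right)} \approx 0.2445 .
\]
```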
Since we have already determined A via the dynamics of the problem, e* and B are known constants, and j0 is known from experiment, (31) involves two unknowns, ns and EF; knowledge of either of them enables one to calculate the other. Guided by text book wisdom, if we use the values of j0 (2 × 10^7 Amp cm^−2) and EF (given in (27)), we obtain from (30) and (31) the following results: v0 = 1.50 × 10^4 cm sec^−1 (33) and ns(CPs) = 4.17 × 10^21 cm^−3. (34) The values of E1, E2 and E3 in (32) justify the approximation made in reducing (24) to (25). The result in (33) is almost identical with the value obtained via (26) and quoted in (27), while the result in (34) translates into 8.34 × 10^21 cm^−3 for the number of super electrons, which, again, is in excellent agreement with the value quoted in (28). It is thus established that the approach followed in this paper provides an ab initio theoretical justification for the text book equation (26) valid at T = 0; additionally: 1) it relates jc with the relevant parameters of the problem at T ≠ 0 via (19), and 2) it can easily be extended to bring non-elemental high-Tc SCs under its purview, as will be discussed in the next section.
With P0 known, it is convenient to solve (19) in terms of the reduced (or normalized) variables defined as t = T/Tc and p(t) = Pc(t)/P0. Figure 1 gives the results of this exercise for 0 ≤ t ≤ 1. We have also studied the variation of p with t for five other elements, Pb, Hg, In, Tl and Nb, taking for their EFs the values given by the free electron model [20, p. 248], and found it to be similar to that of Sn. The Tcs and the multiple gaps of several non-elemental high-Tc SCs (other than iron-pnictide SCs) have been dealt with in [17,18] via GBCSEs. We recall from [13,16] that these equations constitute a generalization of the BCS equations because: 1) they incorporate the mechanism of multi-phonon exchanges for the formation of Cooper pairs besides the usual one-phonon exchange mechanism; and 2) they invoke more than one Debye temperature (which is another way to specify the mass-dependent Debye frequency of an ion species) to characterize the SC.
In order to calculate Pc in the scenario in which CPs are bound via, say, a two-phonon exchange mechanism in a composite superconductor (CS) AxB1−x, we need to generalize (19) and (24). This is accomplished by replacing the propagator in (12) by a superpropagator [13], in which the relevant quantities are the BCS model interactions for the species of phonons belonging to A and B in the combined state of the constituents, to be distinguished from the free-state interactions of A and B. Following now the sequence of steps between (8) and (24), we obtain the generalized version of (19) as (39). The solution of (39) for MgB2, for example, requires as inputs the two coupling constants and the two Debye temperatures; in addition, we require EF of the CS. Such solutions will be addressed elsewhere.
Discussion
We have dealt above with equations that were obtained via positive energy projection operators (PEPOs). This suffices for the problem addressed because Pc corresponds to the situation when W = 0; in this limit, it has been shown in [21] that the equation obtained via the negative energy projection operators is identical with the one obtained via the PEPOs, and also that: 1) CPs formed via electron-electron and hole-hole scatterings make equal contributions to the BS amplitude; and 2) the amplitudes for the formation of CPs corresponding to the mixed energy projection operators are zero. Note that if we concern ourselves with the ratios of jcs at different temperatures, which seems to be a realistic application of our equations, then the choice of the effective mass of the electron in (1) becomes immaterial.
Even a cursory survey of the literature shows that the jc of an elemental or non-elemental SC can vary between wide limits, depending upon the shape, size and alloying materials of the sample. The study presented here suggests that this variation comes about because each sample has its own set of intrinsic parameters: Tc, θD and EF. Substituting these into the equation for jc (which is known from experiment) leads to a relation involving ns and EF; knowledge of either of them then determines the other.
We finally note that the equations for j c (T) presented in this paper can be generalized to include an external magnetic field via the Landau quantization scheme-as has been done to obtain dynamics-based equations for critical magnetic fields for both elemental and non-elemental H = 0) and P c (T = 0, H = 0) for a Simple SC Our starting point is the T = 0, H = 0 BSE[15] for the bound states of particles a, b bound via the interaction kernel , momenta of the two electrons in the centre of mass (c.m.) frame, and P is the 4-momentum of the c.m. of the CP in the laboratory frame.
Figure 1. Variation of reduced critical momentum with reduced temperature for Sn, obtained via (19) with the inputs λ = 0.2445, $\theta_D$ = 195 K, and $E_F$ = 1.74 eV.
Table 1. Some of the relations in the literature for calculating the critical current densities ($j_c$'s) of different types of superconductors, obtained via diverse, indirect methods.
(Among the tabulated relations is the Kim et al. model [5], in which the thermodynamic critical field is obtained from experiment; the equation invoked for $j_c$ at T = 0 in the texts [3, p. 248] and [19, p. 138] is also listed.)
"Physics"
] |
Imaging molecular geometry with electron momentum spectroscopy
Electron momentum spectroscopy is a unique tool for imaging the orbital-specific electron density of a molecule in momentum space. However, the molecular geometry information is usually veiled due to the single-centered character of the momentum space wavefunction of a molecular orbital (MO). Here we demonstrate the retrieval of interatomic distances from the multicenter interference effect revealed in the ratios of electron momentum profiles between two MOs with symmetric and anti-symmetric characters. A very sensitive dependence of the oscillation period on interatomic distance is observed, which is used to determine the F-F distance in CF4 and the O-O distance in CO2 with sub-Ångström precision. Thus, using one spectrometer, and in one measurement, the electron density distributions of MOs and the molecular geometry information can be obtained simultaneously. Our approach provides a new robust tool for imaging molecules with high precision and has the potential to be applied to ultrafast imaging of molecular dynamics if combined with ultrashort electron pulses in the future.
The physical and chemical properties of molecules depend directly on their geometries and electronic structures, both of which have long been central issues in molecular physics. The geometry of a molecule is conventionally obtained by the methods of X-ray 1,2 or electron diffraction [3][4][5][6], from which the atomic positions are determined with sub-Ångström spatial resolution. An alternative imaging approach that emerged in the past decade, referred to as laser-induced electron diffraction [7][8][9][10][11], has also been demonstrated to image molecular structures with sub-Ångström precision. In this technique, an intense laser field is employed to extract an electron from the molecule itself, and within one laser period a fraction of the tunneled electron wave packet is forced back to re-collide and diffract from the parent molecular ion. The well-established methods of conventional electron diffraction are then applicable to retrieve the bond lengths of the molecule.
On the other hand, the tunneled electron wave packet that directly emerges into the vacuum retains information about the orbital from which the electron is ionized 9. By measuring the momentum distribution of these direct electrons, the fingerprint of the highest occupied molecular orbital can be observed through the filter of the suppressed binding potential through which the electron tunnels 9. Thus, in this laser-induced electron tunneling and diffraction technique, one set of measurements simultaneously identifies the orbital wavefunction of the molecule and the positions of the atoms within it. Information about the ionizing orbital of the neutral molecule is also imprinted on the high-harmonic radiation produced by the recombination of the re-collision electron with the parent ion in the laser field, which allows the three-dimensional shape of the highest electronic orbital to be measured 12.
Electron momentum spectroscopy (EMS), which is based on the electron-impact single ionization or (e, 2e) experiment near the Bethe ridge, is a well-established technique that can in principle obtain the spherically averaged electron momentum distributions, or electron momentum profiles (see Supplementary Information Note 1), for any individual molecular orbital (MO) [13][14][15]. This unique ability to image MOs makes EMS a robust technique for exploring the electronic structures of molecules in the gas phase 16. However, the geometry information of a molecule is usually veiled due to the single-centered character of the momentum space wavefunction of the MO. In momentum space, for a MO which can be approximated by a linear combination of atomic orbitals (LCAOs), the information about the equilibrium nuclear positions R_J is only present in the phase factors exp(−ip·R_J) introduced by the Fourier transform of the wavefunction from position space to momentum space (see Methods for details). Therefore the electron momentum distribution of a MO will be modulated by a cosine or sine function of p·R_ab, where R_ab = R_Ja − R_Jb is the vector between atoms J_a and J_b. This oscillation phenomenon is usually referred to as bond oscillation 17, and it can also be regarded as a result of the Cohen-Fano-type 18 or Young-type interference effect originating from the coherent superposition of the (e, 2e) amplitudes from the atoms in the molecule. This type of molecular-scale interference was first proposed by Cohen and Fano 18 in photoionization and was successively demonstrated in the ionization of molecules induced by heavy ions [19][20][21][22][23][24][25], photons [26][27][28][29][30][31][32][33][34][35], as well as electrons [36][37][38].
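As a minimal illustration of how these phase factors produce the modulation, consider a homonuclear two-centre MO built from the same AO φ on atoms a and b (a simplified special case of the general LCAO expression above):

$$\psi_\pm(\mathbf{p}) \propto \varphi(\mathbf{p})\left(e^{-i\mathbf{p}\cdot\mathbf{R}_a} \pm e^{-i\mathbf{p}\cdot\mathbf{R}_b}\right) \;\Rightarrow\; |\psi_\pm(\mathbf{p})|^2 \propto |\varphi(\mathbf{p})|^2\left[1 \pm \cos(\mathbf{p}\cdot\mathbf{R}_{ab})\right],$$

and spherical averaging over molecular orientations replaces cos(p·R_ab) by sin(pR_ab)/(pR_ab), so a symmetric/anti-symmetric pair of MOs oscillates in antiphase with a period in p set by the interatomic distance R_ab.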
In EMS experiments, the interference effect was first discussed in the 1980s 17 and clearly observed only recently in experiments on CF4 37, H2 38,39, and SF6 40,41. Direct observation of the interference pattern in an electron momentum distribution is usually very difficult due to the weak modulation on the rapidly decreasing intensity at large momentum. A feasible way is to compare the experimental cross section of a molecule with the theoretical or experimental one-center atomic cross section 37,39,40, or to compare the cross sections between two different vibrational states 38. Kushawaha et al. 33, in their photoionization work, suggested a more obvious way to observe the interferences: measuring the ratio of two cross sections corresponding to MOs with symmetrical and anti-symmetrical characters, which are expected to give oscillations in antiphase, thus magnifying the interference pattern.
In the present work, a similar scheme has effectively been applied in EMS experiments to uncover the multi-center interferences in CF4 and CO2. The scheme is pictorially illustrated in Fig. 1a. With CF4 as an example, the three outermost MOs (1t1, 4t2, 1e) of this molecule are essentially due to lone-pair electrons, or 2p atomic orbitals (AOs), on the F atoms. Figure 1b shows the calculated electron momentum profiles (see Supplementary Information Note 2) for the 4t2 and 1e orbitals at equilibrium geometry. On the logarithmic scale both momentum profiles show weak oscillations extending to the large-momentum region due to the multi-center interferences from the ionization of the four F atoms. Different orientations of the constituent 2p AOs in the 4t2 and 1e orbitals lead to oscillations almost completely in antiphase (Fig. 1b) 37. The interference pattern can be significantly magnified by plotting the ratio σ(1e)/σ(4t2) of the momentum profiles for these two MOs, as illustrated in Fig. 1c. In this study, accurate measurements are carried out for CF4 and CO2 by using a high-sensitivity angle and energy dispersive multichannel electron momentum spectrometer with simultaneous detection in the 2π angle range 42. Two-dimensional (2D) electron density maps of binding energy and relative azimuthal angle for the outer-valence MOs of these two molecules have been obtained. The experimental electron momentum profiles for the relevant MOs are extracted. A strong dependence of the oscillation period on the interatomic distance is observed in the ratios of electron momentum profiles between two MOs with oscillations in antiphase, which is used to determine the F-F distance in CF4 and the O-O distance in CO2 with sub-Ångström precision. Thus, in our new approach, we can simultaneously obtain the electron density distributions of MOs and the molecular geometry information in one set of measurements. Benefiting from the wide momentum range (from 0 to 8 a.u.) of this new version of the EMS spectrometer 42, more than two periods of oscillation are included in the interference fringes. Besides, the present observation of the interference effect depends entirely on the experimental measurements and does not rely on comparison with a one-center atomic cross section. These features make our approach a robust tool for imaging molecules with high precision and give it the potential to be applied to ultrafast imaging of molecular dynamics if combined with ultrashort electron pulses 43 in the future.
Results
2D electron density maps. Figure 2a and b show the 2D electron density maps for CF4 and CO2 measured at an impact energy of 1.2 keV plus the binding energy (see Methods). These 2D maps are the (e, 2e) TDCSs as functions of binding energy and relative azimuthal angle φ (i.e. the momentum of the orbital electron) and contain all the information on binding energy spectra (BES), electron momentum distributions, and symmetries for the various ionization states. Figure 2c and d show the total BES summed over all the measured φ for CF4 and CO2, respectively. Gaussian functions, shown as the solid curves and corresponding to the ionizations from different MOs, are used to fit the BES. The MO-specific electron momentum profiles can be extracted by deconvoluting the BES.

Multicenter interference effect. The orbital images for the 1t1, 4t2, 1e MOs of CF4 and for the 3σu, 4σg MOs of CO2 are shown at the top right of Fig. 3. For the CF4 molecule, the three outermost MOs, 1t1, 4t2 and 1e, are composed of 2p lone-pair electrons on the F atoms. As we have mentioned, both the momentum profiles for the 4t2 and 1e orbitals show weak oscillations due to the multi-center interferences from the ionization of the four F atoms. The phase of the interference factor depends on the different orientations of the constituent 2p AOs in the MOs 37. In the 4t2 orbital the 2p AOs of the four F atoms orient parallel to each other, while in the 1e orbital the 2p AOs of each two pairs of F atoms are in opposite orientations. The different orientations lead to interference oscillations of the momentum profiles almost completely in antiphase (Fig. 1b). Besides the 4t2, 1e orbital pair, the momentum profiles of the 1t1, 4t2 orbital pair of CF4 and the 3σu, 4σg orbital pair of CO2 are also modulated by interference factors in antiphase (see Supplementary Information Note 3 and Fig. S1 for detail).
The interference pattern is significantly magnified by plotting the ratio of the momentum profiles, as indicated in Fig. 1c. Figure 3a and b show the ratios of the measured momentum profiles σ(1t1)/σ(4t2) and σ(1e)/σ(4t2) for CF4 as solid circles. Both ratios exhibit significant oscillations around constant values with more than two periods, which is distinct evidence of the multi-center interference effect. The constant is the product of the ratio of the electron occupation numbers of the MOs (6 for 1t1 and 4t2, and 4 for 1e) and the ratio of the pole strengths of the corresponding ionization peaks. The pole strengths of the main ionization peaks for the outer valence orbitals of molecules are usually approximately equal to unity. Therefore, the constant depends roughly on the ratio of the electron occupation numbers, which is about 1 for σ(1t1)/σ(4t2) and 0.67 for σ(1e)/σ(4t2), as is the case shown in Fig. 3a and b. We also illustrate in the figures the theoretical ratios σ(1t1)/σ(4t2) and σ(1e)/σ(4t2) for CF4 calculated at the equilibrium interatomic F-F distance R_FF = 2.1551 Å 44, as well as at distances changed by −0.2 Å, −0.1 Å, +0.1 Å, and +0.2 Å. The theoretical momentum profiles are calculated with the B3LYP density functional method using aug-cc-pVTZ basis sets (see Supplementary Information Note 2). A very sensitive dependence of the interference pattern on the interatomic F-F distance can be observed. The theoretical results at equilibrium geometry give the best agreement with the experiments.
For the CO2 molecule, the 3σu and 4σg MOs, which are hybrid orbitals of the oxygen (O) lone-pairs, are anti-symmetrical (u) and symmetrical (g), respectively, and are thus expected to give oscillations in antiphase. The experimental and theoretical momentum profile ratios of the 3σu and 4σg MOs are shown in Fig. 3c. As expected, the experimental ratio presents a regular oscillation around a constant of about 0.85, which corresponds to the pole strength ratio of 4σg and 3σu (0.72/0.85) 45. Similar to the situation of CF4, a very sensitive dependence of the interference pattern on the interatomic O-O distance is observed, and the theoretical result at equilibrium geometry (R_OO = 2.3267 Å 44) gives approximately the best agreement with the experiment.
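How strongly the oscillation period tracks the interatomic distance can be reproduced with a deliberately crude stand-in for the B3LYP profiles, namely the spherically averaged two-centre factor sketched above. The following snippet (all numbers are illustrative; this is not the paper's actual calculation) shows that stretching R compresses the oscillation in p:

    import numpy as np

    def toy_ratio(p, R):
        # spherically averaged two-centre factors 1 -/+ sin(pR)/(pR);
        # the anti-symmetric/symmetric ratio magnifies the oscillation
        s = np.sinc(p * R / np.pi)      # np.sinc(x) = sin(pi*x)/(pi*x)
        return (1.0 - s) / (1.0 + s)

    p = np.linspace(0.2, 8.0, 800)      # momentum grid (a.u.), cf. the 0-8 a.u. range
    for R in (3.9, 4.07, 4.3):          # trial distances in a.u. (~2.06-2.28 Angstrom)
        k = np.argmax(toy_ratio(p, R))
        print(f"R = {R:.2f} a.u. -> first ratio maximum near p = {p[k]:.2f} a.u.")

The position of the first maximum scales as 1/R, which is the sensitivity exploited by the fitting procedure described next.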
It should also be noted that the experimental data obviously deviate from the theoretical predictions at large momentum. These deviations should be ascribed to the distorted-wave effect, which is a common phenomenon in EMS 14 in the large-momentum region, and such an effect may differ between MOs. However, it still remains an unresolved problem to include the distorted-wave effect in the calculations for molecular systems.

Determining interatomic distance. As discussed above, the oscillation period of the interference pattern is very sensitive to the change of interatomic distance, which provides a possible way to determine interatomic distances with high precision. This is the well-known benefit in precision of any interferometric measurement, like Young's double-slit experiment. In order to determine the exact values of the equilibrium interatomic distances from the present experimental data, a series of theoretical momentum profile ratios are calculated at various interatomic distances R and a least-squares fitting procedure is performed (see Supplementary Information Note 4). The χ² values, defined as the sum of the squared differences between experimental and theoretical momentum profile ratios, are shown as open circles in Fig. 4 as functions of the relative interatomic distance (R − R_eq)/R_eq, where R_eq are the equilibrium interatomic distances of CF4 and CO2 reported in ref. 44. Third-order polynomials (solid lines) are used to fit the χ² distributions. As can be seen in Fig. 4a-c, the minimum points of the χ² values are (R − R_eq)/R_eq = 0.033, 0.018 and −0.059 for the momentum profile ratios 1t1/4t2 and 1e/4t2 of CF4 and 4σg/3σu of CO2. The equilibrium interatomic distances of the present work can thus be determined to be R_FF = 2.23 Å or 2.19 Å (2.21 Å on average) for CF4 and R_OO = 2.19 Å for CO2. On the other hand, the uncertainty of the χ² value, shown as error bars in Fig. 4, can be deduced from that of the experimental data, which includes the statistical and deconvolution uncertainties. The corresponding error bars show that the minimum points of the χ² distributions can just be resolved from the points (R − R_eq)/R_eq = 0.00, 0.07 for 1t1/4t2 of CF4, (R − R_eq)/R_eq = −0.01, 0.05 for 1e/4t2 of CF4 and (R − R_eq)/R_eq = −0.09, −0.03 for 4σg/3σu of CO2, as indicated by the dashed lines in Fig. 4a-c. The uncertainties of the determined equilibrium interatomic distances are thereby ±0.08 Å or ±0.06 Å (±0.07 Å on average) for CF4 and ±0.07 Å for CO2, about 3-4% of the interatomic distances. By further improving the momentum resolution and reducing the statistical uncertainty, it would not be difficult to reach 1% or better in geometry determination.
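The least-squares procedure is straightforward to prototype. The sketch below re-creates its logic end to end, with the toy ratio above standing in for the B3LYP/aug-cc-pVTZ profiles and synthetic noisy data in place of the experiment (grid, noise level, and seed are illustrative; this is not the authors' code):

    import numpy as np

    rng = np.random.default_rng(1)

    def toy_ratio(p, R):
        s = np.sinc(p * R / np.pi)
        return (1.0 - s) / (1.0 + s)

    p = np.linspace(0.5, 8.0, 60)
    R_true = 4.07                                   # a.u. (~2.155 Angstrom)
    ratio_exp = toy_ratio(p, R_true) + rng.normal(0.0, 0.02, p.size)

    # chi^2 on a grid of trial distances, then a third-order polynomial fit
    # around the minimum, mirroring Fig. 4
    R_grid = R_true * np.linspace(0.90, 1.10, 21)
    chi2 = np.array([np.sum((ratio_exp - toy_ratio(p, R)) ** 2) for R in R_grid])
    coef = np.polyfit(R_grid, chi2, 3)
    stationary = np.roots(np.polyder(coef))         # roots of the fitted cubic's derivative
    minima = [r.real for r in stationary
              if abs(r.imag) < 1e-12 and np.polyval(np.polyder(coef, 2), r.real) > 0]
    print(f"recovered R = {minima[0]:.3f} a.u. (true value {R_true} a.u.)")

The same machinery, fed with the real experimental ratios and quantum-chemical profiles, yields the distances and χ²-based uncertainties quoted above.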
Discussion
We demonstrate a robust method for the retrieval of interatomic distances from the multicenter interference effect of molecules with EMS. A sensitive dependence of the oscillation period on the interatomic distance is observed in the ratios of electron momentum profiles between two MOs with oscillations in antiphase. A least-squares fitting procedure is used to precisely determine the equilibrium F-F distance in CF4 and O-O distance in CO2 with sub-Ångström precision. The result for the F-F distance is R_FF = 2.21 ± 0.07 Å, which is consistent with the value reported by electron diffraction 44 within the experimental uncertainty. The O-O distance in CO2 is determined to be R_OO = 2.19 ± 0.07 Å, slightly smaller than the value from the electron diffraction experiments 44. EMS is already a well-established technique for obtaining the spherically averaged electron momentum distributions of individual MOs. Therefore, by unveiling its new ability to determine molecular bond lengths, EMS is now able to obtain the electron density distributions of MOs and the molecular geometry information simultaneously in one set of measurements. On the other hand, recent advances in ultrashort electron pulses allow one to achieve 4D electron diffraction 3-6 as well as 4D electron microscopy 46,47. The most recent work 48,49 also demonstrated the feasibility of time-resolved EMS measurements of short-lived transient species, where an ultrashort photon pulse is used to excite the dynamics of interest and an ultrashort electron pulse is applied to probe the system as a function of the delay time between them. Therefore, by employing the new approach of the present work together with ultrashort electron pulses, EMS has the potential to be applied to ultrafast imaging of molecular dynamics, exploring not only the change of electron densities but also the change of molecular structures of transient species.
Methods
Experiment. The experiment is carried out using a high-sensitivity angle and energy dispersive multichannel electron momentum spectrometer with a nearly 2π azimuthal angle range (2π-EMS). The details of the 2π-EMS can be found in ref. 42. Briefly, the experiment involves coincidence detection of the two outgoing electrons produced by electron-impact ionization of the target molecule. The electron beam generated from a thermal cathode electron gun is accelerated to an energy of 1200 eV plus the binding energy and collides with the gas-phase target in the gas cell. Symmetric non-coplanar kinematics is employed. The scattered and ejected electrons with equal polar angles (θ1 = θ2 = 45°) and energies are analyzed by a spherical electrostatic analyzer with a 90° sector and 2π azimuthal angle range. The two outgoing electrons are detected in coincidence by a position-sensitive detector placed at the exit plane of the analyzer. The pass energies of the energy analyzer are 600 eV for CF4 and 200 eV for CO2, respectively. The performance of the 2π-EMS is calibrated by electron-impact ionization of argon before the experiment. The energy resolution, polar angle resolution and azimuthal angle resolution are determined to be ΔE = 2.2 eV, Δθ = 1.0° and Δφ = 2.4° for the CF4 experiment and ΔE = 1.4 eV, Δθ = 1.0° and Δφ = 2.9° for the CO2 experiment, respectively.
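For orientation, the mapping from the relative azimuthal angle φ to the magnitude of the struck electron's momentum in symmetric non-coplanar kinematics can be sketched as follows. The momentum-balance relation used is the standard one for this geometry, under the assumption that the two outgoing electrons share 1200 eV equally; the function name and sign conventions are ours:

    import numpy as np

    HARTREE_EV = 27.2114

    def target_momentum(binding_ev, phi_deg, e1_ev=600.0, theta_deg=45.0):
        # incident and outgoing electron momenta in atomic units
        e0 = 2.0 * e1_ev + binding_ev                 # 1200 eV plus binding energy
        k0 = np.sqrt(2.0 * e0 / HARTREE_EV)
        k1 = np.sqrt(2.0 * e1_ev / HARTREE_EV)
        th, ph = np.radians(theta_deg), np.radians(phi_deg)
        p_par = 2.0 * k1 * np.cos(th) - k0            # recoil along the beam axis
        p_perp = 2.0 * k1 * np.sin(th) * np.sin(ph / 2.0)   # out-of-plane component
        return np.hypot(p_par, p_perp)

    # e.g. a 15.7 eV binding energy at a 20 degree relative azimuth:
    print(f"p = {target_momentum(15.7, 20.0):.2f} a.u.")

At φ = 0 the recoil momentum is close to zero (the Bethe-ridge condition), and sweeping φ over the detector's 2π range covers the broad momentum interval quoted in the text.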
"Physics",
"Chemistry"
] |
Two-dimensional adaptive dynamics of evolutionary public goods games: finite-size effects on fixation probability and branching time
Public goods games (PGGs) describe situations in which individuals contribute to a good at a private cost, but others can free-ride by receiving a share of the public benefit at no cost. The game occurs within local neighbourhoods, which are subsets of the whole population. Free-riding and maximal production are two extremes of a continuous spectrum of traits. We study the adaptive dynamics of production and neighbourhood size. We allow the public good production and the neighbourhood size to coevolve and observe evolutionary branching. We explain how an initially monomorphic population undergoes evolutionary branching in two dimensions to become a dimorphic population characterized by extremes of the spectrum of trait values. We find that population size plays a crucial role in determining the final state of the population. Small populations may not branch or may be subject to extinction of a subpopulation after branching. In small populations, stochastic effects become important and we calculate the probability of subpopulation extinction. Our work elucidates the evolutionary origins of heterogeneity in local PGGs among individuals of two traits (production and neighbourhood size), and the effects of stochasticity in two-dimensional trait space, where novel effects emerge.
Recommendation?
Accept with minor revision (please list in comments)
Comments to the Author(s)
In this manuscript, the authors study the adaptive dynamics of production and neighborhood size in the nonlinear public goods game, where the benefit and cost functions are nonlinear, when the population size is large. By performing theoretical analysis and numerical simulation, they reveal that the population size plays an important role in determining the final state of the population. Furthermore, when the population size is small, the authors calculate the probability of extinction of a subpopulation and capture some interesting evolutionary outcomes which are not captured by deterministic theory. I find that the work is of broad interest. However, I still have the following comments and questions on it.
(1) In page 1, the authors use the abbreviation PG in the abstract, but do not give the full name.
(2) In page 3, what does 1D games mean? The authors should give an explanation.
(3) In page 4, the authors state that in the evolutionary process the neighborhood size-trait is also a continuous variable between 1 and N. But the authors also need to consider that the neighborhood size should be an integer.
(4) I do not see the parameter values which are used to plot figure 2 and figure S1.
(5) In page 7, the probability that the opponent replaces the individual and becomes the parent of an offspring in the next generation (q) will be negative when P_opp<P_focal. The authors should clarify this.
(6) The public goods dilemma frequently appears in the real world, and some works about the public goods game could be mentioned in the literature review, e.g., Journal of the Royal Society Interface, 2013, 10: 20120997; New Journal of Physics, 2014, 16: 083016; Mathematical Models and Methods in Applied Sciences, 2019, 29: 2127-2149; Nonlinear Dynamics, 2019, 97: 749-766.
(7) There are some typos in the work. For example, in page 17, lines 29-32, "First, we demonstrated that …. Then, we show that…"; in page 13 of SI,
Review form: Reviewer 2
Is the manuscript scientifically sound in its present form? Yes
Are the interpretations and conclusions justified by the results? Yes
Is the language acceptable? Yes
Recommendation? Accept with minor revision (please list in comments)
Comments to the Author(s)
This paper studies the adaptive dynamics of the nonlinear public goods game in a population with evolving neighborhood size. The authors find the spontaneous emergence of two distinct subpopulations, comprised of producers who make large investments and free-riders who contribute very little, each with a different neighborhood size, from an initially monomorphic population in trait space, driven by evolutionary branching. Furthermore, the intrinsic stochasticity arising from population size is shown to play a dominant role in determining the final state of the population, i.e., branching or extinction. I think the novelty of the work lies in its consideration of the co-evolution of neighborhood size and public good production, which is, as far as I know, not investigated in previous papers focusing on adaptive dynamics. Besides, the theoretical analysis of adaptive dynamics employed also seems to be sound. Considering the acceptance criterion of Royal Society Open Science, I support its publication if the following suggestions are incorporated.
(1) The evolutionary branching of cooperators and defectors in social dilemma games is not new [A tale of two contribution mechanisms for nonlinear public goods. Sci. Rep. 3, 2021 (2013)]. They have shown that nonlinearity of public goods production alone is sufficient for inducing the phenomenon of evolutionary branching. Then what particular role does the evolving population structure, i.e., the evolving neighborhood size, play in the adaptive dynamics of nonlinear public goods? Considering the above facts, I suggest the authors compare their results with those in [A tale of two contribution mechanisms for nonlinear public goods. Sci. Rep. 3, 2021 (2013)]. (2) An important issue I do feel confused about is the way in which the evolving population structure, or more specifically, the evolving neighborhood size, is modelled. I would prefer if the authors could give more specific examples that indeed support such a modelling manner of population structure.
(3) With reference to the nonlinear relationship between the common benefits and the overall contributions, I suggest citing the following very relevant paper [Impact of critical mass on the evolution of cooperation in spatial public goods games. Phys. Rev. E 81, 057101 (2010)]. Besides, regarding the multi-player social dilemma games on structure populations, I recommend the author to cite a recent review paper [Statistical physics of human cooperation. Phys. Rep. 687, 1-51 (2017)] to strengthen the general background.
(4) There are a few English mistakes or vague expressions in the manuscript. Please carefully proofread the article before resubmission.
Review form: Reviewer 3
Is the manuscript scientifically sound in its present form? Yes
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? Yes
Recommendation?
Accept with minor revision (please list in comments)
Comments to the Author(s)
The tragedy of the commons is a long-standing puzzle, and public goods game (PGG) theory provides a potential route to resolve it. In this paper, the authors explored the effect of nonlinear benefit and cost functions on branching and extinction in PGGs, and found that population size plays a crucial role in determining the final state of the population.
In my opinion, the current idea is interesting, and I am willing to consider the potential recommendation. Yet, before formal acceptance, I have some technical comments on the related contents as follows:
1. As the authors state, in Figure 1B, the branch point location and probability in trait space are interesting. However, I was wondering whether the qualities of the branch point location and probability can be shown more obviously in Fig. 1B, or can be marked with some more detailed numbers of generations.
2. Population size plays a crucial role in determining the final state. Then, how to ascertain the population size needs to be described. Meanwhile, is the assumption of an infinitely large population reasonable in practice, and why are the small population sizes selected as 150, 200?
3. The English spelling needs to be checked carefully. For example, P6, line 40, "…shown for both the monorphic…" should be revised as "…shown for both the monomorphic…"; P12, lines 14, 32, etc., "…figure …" should be "…Figure 2…".
4. Regarding the PGG, many mechanisms have been proposed to enrich the understanding of the emergence of cooperation. As an example, the adaptive reputation mechanism is an effective one; two recent works [Evolution of cooperation in the spatial public goods game with adaptive reputation assortment. Physics Letters A, 2016, 380: 40-47; Effect of memory, intolerance and second-order reputation on cooperation. Chaos, 2020, 30: 063122.] are worth mentioning here.
5. The references in Appendix D are empty!

Decision letter (RSOS-210182.R0)

We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.
Dear Dr Kimmel
On behalf of the Editors, we are pleased to inform you that your Manuscript RSOS-210182 "Branching and extinction in evolutionary public goods games" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referees' reports. Please find the referees' comments along with any feedback from the Editors below my signature.
We invite you to respond to the comments and revise your manuscript. Below the referees' and Editors' comments (where applicable) we provide additional requirements. Final acceptance of your manuscript is dependent on these requirements being met. We provide guidance below to help you prepare your revision.
Please submit your revised manuscript and required files (see below) no later than 7 days from today's date (31-Mar-2021). Note: the ScholarOne system will 'lock' if submission of the revision is attempted 7 or more days after the deadline. If you do not think you will be able to meet this deadline please contact the editorial office immediately.
Please note article processing charges apply to papers accepted for publication in Royal Society Open Science (https://royalsocietypublishing.org/rsos/charges). Charges will also apply to papers transferred to the journal from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry). Fee waivers are available but must be requested when you submit your revision (https://royalsocietypublishing.org/rsos/waivers).
===PREPARING YOUR MANUSCRIPT===

Your revised paper should include the changes requested by the referees and Editors of your manuscript. You should provide two versions of this manuscript and both versions must be provided in an editable format: one version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); a 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. This version will be used for typesetting.
Please ensure that any equations included in the paper are editable text and not embedded images.
Please ensure that you include an acknowledgements section before your reference list/bibliography. This should acknowledge anyone who assisted with your work but who does not qualify as an author per the guidelines at https://royalsociety.org/journals/ethicspolicies/openness/.
While not essential, it will speed up the preparation of your manuscript proof if you format your references/bibliography in Vancouver style (please see https://royalsociety.org/journals/authors/author-guidelines/#formatting). You should include DOIs for as many of the references as possible.
If you have been asked to revise the written English in your submission as a condition of publication, you must do so, and you are expected to provide evidence that you have received language editing support. The journal would prefer that you use a professional language editing service and provide a certificate of editing, but a signed letter from a colleague who is a native speaker of English is acceptable. Note the journal has arranged a number of discounts for authors using professional language editing services (https://royalsociety.org/journals/authors/benefits/language-editing/).
===PREPARING YOUR REVISION IN SCHOLARONE===
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre -this may be accessed by clicking on "Author" in the dark toolbar at the top of the page (just below the journal name). You will find your manuscript listed under "Manuscripts with Decisions". Under "Actions", click on "Create a Revision".
Attach your point-by-point response to referees and Editors at Step 1 'View and respond to decision letter'. This document should be uploaded in an editable file type (.doc or .docx are preferred). This is essential.
Please ensure that you include a summary of your paper at Step 2 'Type, Title, & Abstract'. This should be no more than 100 words to explain to a non-scientific audience the key findings of your research. This will be included in a weekly highlights email circulated by the Royal Society press office to national UK, international, and scientific news outlets to promote your work.
At Step 3 'File upload' you should include the following files:
--Your revised manuscript in editable file format (.doc, .docx, or .tex preferred). You should upload two versions: 1) one version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) a 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them.
--If you are requesting a discretionary waiver for the article processing charge, the waiver form must be included at this step.
--If you are providing image files for potential cover images, please upload these at this step, and inform the editorial office you have done so. You must hold the copyright to any image provided.
--A copy of your point-by-point response to referees and Editors. This will expedite the preparation of your proof.
At Step 6 'Details & comments', you should review and respond to the queries on the electronic submission form. In particular, we would ask that you do the following:
--Ensure that your data access statement meets the requirements at https://royalsociety.org/journals/authors/author-guidelines/#data. You should ensure that you cite the dataset in your reference list. If you have deposited data etc. in the Dryad repository, please only include the 'For publication' link at this stage. You should remove the 'For review' link.
--If you are requesting an article processing charge waiver, you must select the relevant waiver option (if requesting a discretionary waiver, the form should have been uploaded at Step 3 'File upload' above).
--If you have uploaded ESM files, please ensure you follow the guidance at https://royalsociety.org/journals/authors/author-guidelines/#supplementary-material to include a suitable title and informative caption. An example of appropriate titling and captioning may be found at https://figshare.com/articles/Table_S2_from_Is_there_a_trade-off_between_peak_performance_and_performance_breadth_across_temperatures_for_aerobic_sc ope_in_teleost_fishes_/3843624.
At Step 7 'Review & submit', you must view the PDF proof of the manuscript before you will be able to submit the revision. Note: if any parts of the electronic submission form have not been completed, these will be noted by red message boxes.
Reviewer: 1 Comments to the Author(s)

(1) In page 1, the authors use the abbreviation PG in the abstract, but do not give the full name.
We have changed the sentence "We allow the PG production…" -> "We allow the public good production…".

(2) In page 3, what does 1D games mean? The authors should give an explanation.
By 1D games, we mean adaptive dynamics where only a single trait is allowed to evolve over time. In this 1D setting, branching, if it should occur, always does. We have added the clarification: "we observe that branching may be deterministically favored only for a finite amount of time in our two-dimensional model: the monomorphic population can drift away from the region where branching is deterministically favored, a feature not seen in the single trait games."

(3) In page 4, the authors state that in the evolutionary process the neighborhood size-trait is also a continuous variable between 1 and N. But the authors also need to consider that the neighborhood size should be an integer.
We appreciate this feedback. When determining the payoff (two paragraphs later), we describe the procedure where we round to the nearest integer when determining the neighborhood size. We have made a brief comment at the definition that this is done.
(4) I do not see the parameter values which are used to plot figure 2 and figure S1.
We have added these parameter values to the captions of figures 2 and S1; they are the values presented in the table.
(5) In page 7, the probability that the opponent replaces the individual and becomes the parent of an offspring in the next generation (q) will be negative when P_opp < P_focal.

We thank the reviewer for catching this. The simulations are all correct; we were missing the condition that if P_opp is smaller, then q = 0. We have updated equation (6).
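Since the manuscript's equation (6) itself does not appear in this correspondence, the sketch below only illustrates the shape of the fix being described, i.e., a payoff-difference replacement rule clipped at zero. The function name, the normalisation, and the rule itself are hypothetical placeholders, not the manuscript's actual equation:

    def replacement_probability(p_focal, p_opp, payoff_range):
        # Probability q that the opponent replaces the focal individual.
        # Hypothetical payoff-difference rule: proportional to the opponent's
        # payoff surplus, clipped so q = 0 whenever P_opp <= P_focal
        # (the correction described in the authors' response).
        if p_opp <= p_focal:
            return 0.0
        return min(1.0, (p_opp - p_focal) / payoff_range)

    print(replacement_probability(0.8, 1.2, 2.0))   # 0.2
    print(replacement_probability(1.2, 0.8, 2.0))   # 0.0, never negative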
(6) The public goods dilemma frequently appears in the real world, and some works about the public goods game could be mentioned in the literature review, e.g., Journal of the Royal Society Interface, 2013, 10: 20120997; New Journal of Physics, 2014, 16: 083016; Mathematical Models and Methods in Applied Sciences, 2019, 29: 2127-2149; Nonlinear Dynamics, 2019, 97: 749-766.

We thank the reviewer for these references and have added them along with a brief description.
(7) There are some typos in the work.

Reviewer: 2 Comments to the Author(s)

We further explore continuous investment as it relates to a new model in which there are two co-evolving traits. Analyzing two traits in our model allows us to explore the impact of finite population size in a game which features co-evolution, which is likely present in most real-world applications.

(2) An important issue I do feel confused about is the way in which the evolving population structure, or more specifically, the evolving neighborhood size, is modelled. I would prefer if the authors could give more specific examples that indeed support such a modelling manner of population structure.
We appreciate the input and have added an example which justifies our modeling of the population structure and neighborhood size.
(3) With reference to the nonlinear relationship between the common benefits and the overall contributions, I suggest citing the following very relevant paper [Impact of critical mass on the evolution of cooperation in spatial public goods games. Phys. Rev. E 81, 057101 (2010)]. Besides, regarding the multi-player social dilemma games on structure populations, I recommend the author to cite a recent review paper [Statistical physics of human cooperation. Phys. Rep. 687, 1-51 (2017)] to strengthen the general background.
We appreciate the additional recommendations and will add the first of these to our model introduction. However, we were previously advised by a reviewer to avoid human behavioral applications and for that reason we took the liberty to not add the second paper.
(4) There are a few English mistakes or vague expressions in the manuscript. Please carefully proofread the article before resubmission.
Typos have been addressed and clarity improved, thank you.
Reviewer: 3 Comments to the Author(s)

1. As the authors state, in Figure 1B, the branch point location and probability in trait space are interesting. However, I was wondering whether the qualities of the branch point location and probability can be shown more obviously in Fig. 1B, or can be marked with some more detailed numbers of generations.
With several simulations overlapping, it is difficult to show exactly where each one of them branches without cluttering the figure. Instead, Fig. 1B provides a qualitative assessment of the stochasticity, which is further elucidated and quantified in Figures 2-4. We have modified the caption for Figure 1 to address this: "However, the generation at which branching occurs varies, highlighting the importance of stochasticity in our game. This stochasticity will be the focus of the following analysis, as shown in greater depth in Figures 2-4." It is difficult to define when a successful branch has occurred. We have used a K-means (with K = 2) detection algorithm and have stated that a branching event occurs when the distance between the means is above some predefined threshold. We have marked these locations by color-coded arrows in Figure 1B.
2. Population size plays a crucial role in determining the final state. Then, how to ascertain the population size needs to be described. Meanwhile, is the assumption of an infinitely large population reasonable in practice, and why are the small population sizes selected as 150, 200?
The final state, the probability of branching, and other parts of the dynamics are all population-size dependent. We are confused by what the reviewer means by "ascertain the population size"; this is a predefined quantity that remains fixed throughout the simulation. The impact of population size was investigated both numerically and analytically. The sizes of 150-200 were chosen because these were the population sizes which branch somewhat consistently but also show extinction events, as seen in figure 4. For these reasons, they were ideal population sizes for further study, as they are capable of all three interesting events (branching, no branching, and extinction). This explanation has been added to the caption for Figure 3.
Typos have been addressed and clarity improved, thank you.
4. Regarding the PGG, many mechanisms have been proposed to enrich the understanding of the emergence of cooperation. As an example, the adaptive reputation mechanism is an effective one, two recent works [Evolution of cooperation in the spatial public goods game with adaptive reputation assortment. Physics Letters A. 2016,380: 40-47; Effect of memory, intolerance and second-order reputation on cooperation. Chaos, 2020, 30: .063122.] are worth mentioning here.
We have added these to the introduction, thank you.
5. The references in Appendix D are empty!

Fixed, thanks.
"Mathematics",
"Economics"
] |
Differences in self-reported health between low- and high-income groups in pre-retirement age and retirement age. A cohort study based on the European Social Survey
Highlights
• Older persons show significant increases in self-reported health from 2002 to 2018.
• Relative, but not absolute, social inequality declined during this time.
• Self-reported health has increased within virtually every examined demographic.
• Gaps between high-income and low-income demographics remain significant, however.
• High-income, post-retirement groups report better health in 2018 than low-income, pre-retirement groups.
Introduction
Several studies have shown a correlation between income and health status within countries, concluding that high or increasing income might have a positive effect on health [1,2]. Moreover, health and income are interrelated, and multiple studies show that, for example, the transition into retirement may largely affect (subjective) health status, not only due to multiple physical, mental, and social aspects, but also due to decreased income [3][4][5][6][7]. At the same time, poor health might lead to early retirement.
In the nexus of health and income, older persons (aged 65 years and more) are a group of particular interest: on the one hand due to an increasing prevalence of chronic diseases and multimorbidity, and on the other because of changing income situations, e.g. due to retirement or the death of a spouse. In view of the above-mentioned potentially positive effect of higher income on health, the aim of our study is to analyse the differences in self-reported health between older persons in low- and high-income households, as well as the development of the health status within and between cohorts and the development of status-related health inequalities.
Therefore, our study depicts the state of self-rated health in older age cohorts in 17 countries in 2002 and 2018 using data from the European Social Survey (ESS). Cross-country comparisons bear additional challenges. Considering life expectancy, Pickett and Wilkinson argue that in an inter-country comparison there is "(…) not simply a lack of statistically significant relation with life expectancy, but no relation whatsoever. Average real incomes can be almost twice as high in some developed countries as in others without consequences for life expectancy" [2]. Using gross national income per capita at purchasing power parities, the authors conclude that "what matters may be social position, or income relative to others, rather than material living standards regardless of others" [2]. The European perspective allows us to show a general trend across countries over a comparatively long timespan. We focus on the association between income and health and analyse differences in self-rated health between the lowest and the highest income tercile. Our main hypothesis is that the self-rated health of older persons in high-income households is significantly better than that of those living in low-income households. We want to investigate how differences have developed over time and between income groups. Therefore, these cohorts are analysed at two points in time and thus at a different age. By that, we answer the following research questions:
• Did the self-rated health status of older age cohorts between 49 and 64 years and between 65 and 80 years improve between 2002 and 2018?
• Did absolute inequalities between income groups increase between 2002 and 2018?
As the coverage of association between self-rated health and income is already extensive in the literature, the following section summarises selected findings focusing mainly on studies analysing self-reported health.
Health and income: A review of the literature
In a systematic narrative review, Read et al. [8] found and analysed 71 studies on the health status of older (60+) people in Europe published between 1995 and 2013. 44 of those studies covered self-rated health. Of those studies, 7 out of 7 analysing the association between self-rated adequacy of income and self-rated health show an association. However, the authors point out that "the association between income and self-rated health was less clear" [8], since out of 19 studies, only 11 reported such an association.
An analysis of the European Values Study 2008/2009 shows that low income is significantly related to less-than-good-health in 23 of 42 European countries [9]. Based on ESS 8 (2016/2017) data from 23 countries, Papazoglou and Galariotis [10] confirm the assumption that higher individual income raises the probability of self-reported good or very good health.
Based on EU-SILC data, Forster et al. [11] show the differences in self-reported health between the lowest and the highest income quintile in terms of the percentage reporting good health. Across the 28 EU countries at that time, the respective figures for the two groups are 60% and 78% [11]. Further, they report on EQLS (European Quality of Life Surveys) data showing that between 2007 and 2016 the shares of people reporting bad health were stable in both the bottom income quartile (14% and 13%) and the top quartile (around 5%).
Furthermore, an analysis of 42 European countries with data from the European Values Study (EVS) 2008/2009 shows higher income inequality being related to higher inequality in self-reported health [12]. Like the ESS, the EVS measures income via categories and not as an exact figure.
Overall, the highlighted studies show mixed results on the association between income and health status. This, however, is caused by different measures of income and health. Nevertheless, most studies listed show that there is a correlation between a high income and a good health status. Concerning long-term trends for past decades, several authors found increasing health inequalities (for European countries [13,14]; for Central and Eastern European countries [15,16]).
Dataset and timespan
The ESS is a biennial cross-national survey in Europe, established in 2001. Data is collected via face-to-face interviews with newly selected cross-sectional samples. We chose ESS data (core questionnaire) because it covers a long period of time (2002 to 2018). As we use data from ESS 1 (2002) and ESS 9 (2018), we aim for the maximum time span coverable by the dataset. Two 8-year steps (2002 to 2010 and 2010 to 2018) could have provided a distinction between rather recent developments and developments before 2010. Yet, we chose the 16-year step instead of two 8-year steps, since 2010 data may be affected by the 2008 financial crisis [9,13,[16][17][18][19][20]. Our analysis contains all 17 countries which took part in both rounds. A complete list is shown in Table 1.
Although the ESS is not a panel study, we created the age groups in such a manner that birth cohorts enter the next higher age group between 2002 and 2018 (Fig. 1). We constructed the age boundaries based on the "classic" upper boundary for the "working age population" at age 64, used e.g. by the OECD [21], although numerous European countries have raised their legal retirement age beyond the age of 65.
The youngest birth cohort was middle-aged (33-48 years) in 2002 and is roughly in pre-retirement age in 2018 (49-64 years). The second birth cohort (49-64 years in 2002) was in pre-retirement age in 2002 and in retirement age in 2018 (65-80 years). The oldest cohort (65-80 years) was in retirement age in 2002. This cohort is not analysed for 2018, since the number of cases between 81 and 90 years is too low (there are no persons older than 90 covered in ESS 9 in the 17 countries). In sum, we can assess the development of the self-rated health status, and of health inequalities in this status, for the youngest cohort (33-48 years in 2002) as they enter pre-retirement age (49-64 years) and for the second cohort (49-64 years in 2002) as they enter old age (65-80 years). The self-reported health status reflects an individual's perception of their social, biological, and psychological health. We can also compare the self-rated health status and inequalities amongst those in old age (65-80 years) in 2002 and 2018.
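The cohort bookkeeping is simple enough to verify in a few lines (the birth-year labels are back-calculated from the interview years and are approximate, since fieldwork spreads over months):

    # age bounds in ESS 1 (2002) -> age bounds in ESS 9 (2018), 16 years later
    cohorts_2002 = {"born 1954-1969": (33, 48),
                    "born 1938-1953": (49, 64),
                    "born 1922-1937": (65, 80)}
    for label, (lo, hi) in cohorts_2002.items():
        # the oldest cohort maps to 81-96 in 2018 and is therefore not analysed
        print(f"{label}: {lo}-{hi} in 2002 -> {lo + 16}-{hi + 16} in 2018")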
Measure of health and income
As for the measurement of health, we use a single item. The item assesses the subjective perception of health (on a 5-point scale: very bad, bad, fair, good, very good). Due to the ordinal scale of measurement, similar to other authors (e.g. [10] and [22]), we dichotomised the variable into very good or good health on the one side and fair, bad or very bad health on the other side. This subjective assessment can be too positive or too negative, depending e.g. on individual psychological predispositions, lack of relevant information about one's own health, and social desirability effects. Further, the question asks about 'health' in general; only the footnote with further explanations for interviewers elaborates that the definition includes 'physical and mental health'. Therefore, it is possible that respondents answer this question mainly according to their physical health.
As we are interested in health differences between income groups in and between cohorts, it has to be noted how "different patterns of health inequalities emerge depending on the measure" [23]. Looking at the literature, studies either use income or education. For example, a study by Read et al. [8] has shown that education is used more frequently than income. Balaj et al. [24] analysed data from the ESS 7 (2014) rotating module on social inequalities in health and found considerable differences in the extent of health inequalities between the 21 countries [24]. They argue that education is less prone to reverse causation than income, which seems plausible given that the level of formal education is mainly determined in the first third of the life course.
Although the ESS data also contain information on respondents' (and their spouses') educational level (ISCED), we focus on income and argue that the position of a person in society is defined by income rather than formal education. Further, by using income adjusted for household composition, we address a gap in current research on health inequality between status groups. Using net household income adjusted for household composition according to the OECD equivalence scale [25], our study refines the methodology of previous studies on the relationship between income and health [10,26-30]. It must be noted, however, that income is calculated in various ways across these studies. For example, Papazouglu and Galariotis [10] reduced the ten ESS categories of total net household income to five groups (deciles 1&2, 3&4, etc.). Jen et al. [26] used income before taxes rather than net income. Mulatu and Schooler [27] used family income not adjusted for household size. Christelis et al. [28] used SHARE data and weighted income according to the OECD scale. Artazcoz et al. [29] measured household income and divided it by the square root of the number of people living in the household (with the square-root scale, cohabiting couples or families are estimated to be richer than with the OECD equivalence scales). Bovasso et al. [30] used 17 categories for personal and household income but did not mention any adjustment for household size.
According to their household's net income adjusted for household composition, respondents are categorised into three terciles (low, medium and high income). We focus on the differences in self-rated health between the low- and the high-income tercile.
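The equivalisation and tercile assignment can be illustrated as follows. This is a minimal sketch assuming a continuous income variable and the OECD-modified scale (1.0 / 0.5 / 0.3); the ESS in fact records household income in banded categories, and terciles would be computed within country and round, so the real computation differs in detail.

```python
# Illustrative sketch of OECD-modified equivalisation and tercile assignment.
# Assumptions: continuous net household income and made-up household data.
import pandas as pd

df = pd.DataFrame({
    "hh_income":  [24000, 51000, 18000, 78000, 33000, 42000],
    "n_adults":   [2, 2, 1, 3, 1, 2],
    "n_children": [1, 0, 2, 0, 0, 3],
})
# OECD-modified scale: 1.0 first adult, 0.5 per further adult, 0.3 per child
df["eq_size"] = 1.0 + 0.5 * (df["n_adults"] - 1) + 0.3 * df["n_children"]
df["eq_income"] = df["hh_income"] / df["eq_size"]
# In practice terciles are computed within country and survey round
df["tercile"] = pd.qcut(df["eq_income"], 3, labels=["low", "medium", "high"])
print(df[["eq_income", "tercile"]])
```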
As Kjellsson et al. [31] conclude, "it is generally a sensible idea to present the reader with both relative and absolute versions of inequality measures to compare inequality between populations". Absolute inequality is often reported when comparing percentages of different groups at different times. We define absolute health inequality as the gap, in percentage points, between the shares of respondents in good or very good health in the low- and the high-income tercile, while relative inequality is calculated as the percentage rise that would be needed to go from the lower to the higher value. If 20 percent of the population in the low-income tercile are in good or very good health and 40 percent of those in the high tercile, the chances of a person in the high-income tercile being in good or very good health are twice as high as those of a person in the low-income tercile. If the later figures are 40 and 60 percent respectively, the chances are only 50 percent higher, so that absolute inequality remained stable while relative inequality declined. If, by contrast, we measured not the percentages in good or very good health but those in fair, bad or very bad health, the figures would be 80/60 at the beginning and 60/40 later, so that the difference in probabilities, and therefore relative inequality, would have increased from 1.33 to 1.50. Therefore, although the concept of relative inequality seems plausible, since it reflects individual probabilities, results on developments depend on an arbitrary choice of the reference category.
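The arithmetic of this example can be checked directly; the figures below are the hypothetical ones from the text, not survey results.

```python
# Worked check of the example above: the same populations, two framings.
low, high = 20, 40            # % in good/very good health, earlier time point
print(high - low, high / low) # absolute gap 20 points, relative ratio 2.00
low, high = 40, 60            # later figures
print(high - low, high / low) # gap still 20 points, ratio 1.50 -> declines
# Counting fair/bad/very bad health instead (the complements 80/60 and 60/40):
print(80 / 60, 60 / 40)       # 1.33 -> 1.50: relative inequality now rises
```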
Development of self-rated health status
Across all four age groups, the share of respondents in good or very good health rose to a roughly similar degree (Table 1). This means not only that the low-income tercile did not reach in 2018 the value the high tercile already had in 2002, but also that, amongst those between 65 and 80, the high-income tercile in 2018 had a higher value than the low-income tercile in the age group 49-64 (57.7 and 53.9 percent, respectively). The high tercile's value may be slightly inflated by respondents comparing their health to others in the same age group (the opposite applying to the low tercile in the younger group). According to what is stated in the surveys, however, those in old age in the high-income tercile feel healthier than the low-income tercile in the age group that is 16 years younger.
Considering the younger cohort of 2002 and the older cohort of 2018 (the same birth cohort, born 1938-1953), it becomes clear that this cohort showed a slight decline in its share of persons in good or very good health. Amongst the low-income tercile, the share declined from 45.6 percent to 41.8 percent, and amongst the high-income tercile from 66.0 to 59.7. The high-income tercile thus lost more absolute percentage points (6.3, compared with 3.8 for the low-income group). Although the income position is not necessarily stable and people can leave or enter the three income positions, for those born between 1938 and 1953 the health differences between low and high earners clearly did not increase. They rather declined as this birth cohort left middle age (49-64 years) and entered old age (65-80 years). One explanation might be that, especially in the low tercile, some respondents in bad health died, which improved this group's share of those in good or very good health.
There are strong correlations between the country values in Table 1: within 2002 as well as 2018, countries with a high share of respondents in good or very good health in the low-income tercile report high values in the high-income tercile as well (and vice versa). Good or bad health of income groups is thus a sub-aspect of generally good or bad health in the cohort; there seems to be no remarkable trade-off between income groups. Further, a high share in 2002 for a cohort translates into a high share in 2018 for the same cohort (that is, one age group higher), for the low as well as the high tercile. The low (high) tercile in middle age in 2002 is not the same group as the low (high) tercile in old age in 2018, but the groups overlap by tendency, since low- (high-) income workers tend to become low- (high-) income retirees. Pearson's R values lie between 0.734 and 0.910 (all significant at the 0.1% level). This means that (1) within countries, the self-reported state of health of the low tercile is strongly related to that of the high tercile; (2) also within countries, the younger group's self-reported state of health is strongly related to that of the older group: the state of health of the 1938-1953 birth cohort in 2002, as a middle-aged group (49-64 years), correlates with this cohort's health status in 2018, as an old-age group (65-80 years), with values of 0.909 for the low-income group and 0.880 for the high-income group; and (3) differences between countries are largely stable, so that countries with a high share of respondents reporting good or very good health in a certain age group in 2002 usually also had high shares in 2018 in the same age group (that is, for the later birth cohort).
Better health for the low tercile reduces health inequality. Focusing on absolute inequality and on age groups (not birth cohorts), improved health of the low tercile in the middle age group (49-64 years) is related to lower inequality (Pearson's R −0.475, not significant at the 5% level); a corresponding correlation was computed for the old age group (65-80 years).

(Table note: Sig.: * ≤ 0.05 | ** ≤ 0.01 | *** ≤ 0.001; blank spaces indicate that the inter-group difference is not significant at the 5% level. The mean value is unweighted, so that every country has the same weight irrespective of population or sample size.)

Fig. 2 shows that, in the mean across the 17 countries, absolute health inequalities remained largely unchanged. There is a rise of 0.18 percentage points in the mean for those between 49 and 64 (median: −0.90 percentage points), and a rise of 1.3 percentage points for those between 65 and 80 (median: +0.60 percentage points). Therefore, in contrast to the studies mentioned above, which refer to time periods before our time span or overlapping only with its beginning, we do not even find increasing absolute health inequalities.
Limitations
The ESS question on the "household's total income, after tax and compulsory deductions, from all sources" (ESS 9 Source Questionnaire) comes with some problems, for example missing responses, answers distorted by social desirability effects, and incorrect or incomplete information on the respondents' side [9]. Further, since long-term circumstances and behaviours affect health status, the fact that households partly change the income tercile they belong to implies that older respondents may be assigned to a certain income group although they possibly spent most of their life in a different one. This again underlines why we refrain from causal interpretations and merely depict the self-rated health situation. Lastly, it is plausible that respondents assess their own health not solely in comparison to their past or future health or to the average health status in their society, but partly in comparison to others in their own age group. This may lead to rather positive assessments by the oldest age groups.
Conclusions
Our study shows a considerable increase in older persons reporting good or very good health between 2002 and 2018, in all four groups examined. Absolute differences between status groups remained stable. In 2018 the high-income tercile of those between 65 and 80 still reported better health than the low-income tercile of those between 49 and 64. Overall, one could argue that self-rated health has improved in the countries examined. This does not take responsibility off policymakers, as these results need to be investigated further. Our findings, and especially the low shares of those in good or very good health in the lowest tercile, underline the importance of policy measures aimed at improving the health status of (older) persons, particularly of those in a low-income situation. Health inequalities have social origins and are therefore avoidable [32], at least to some degree. Further, health is affected not only by policies directly focused on health. Yet the state of research on socio-economic inequality and population health is ambiguous: several older meta-studies and some single studies [2,22] show that higher inequality is related to worse population health, but studies published later rather find no such effects (probably also due to better methods [2,33]). These results suggest that a wider view on (non-spurious) effects from societal circumstances on health and health inequalities could be useful.
Before Name-Calling: Dynamics and Triggers of Ad Hominem Fallacies in Web Argumentation
Arguing without committing a fallacy is one of the main requirements of an ideal debate. But even when debating rules are strictly enforced and fallacious arguments punished, arguers often lapse into attacking the opponent by an ad hominem argument. As existing research lacks solid empirical investigation of the typology of ad hominem arguments as well as their potential causes, this paper fills this gap by (1) performing several large-scale annotation studies, (2) experimenting with various neural architectures and validating our working hypotheses, such as controversy or reasonableness, and (3) providing linguistic insights into triggers of ad hominem using explainable neural network architectures.
Introduction
Human reasoning is lazy and biased, but it perfectly serves its purpose in the argumentative context (Mercier and Sperber, 2017). When challenged by genuine back-and-forth argumentation, humans do better in both generating and evaluating arguments (Mercier and Sperber, 2011). The dialogical perspective on argumentation has been reflected in argumentation theory most prominently by the pragma-dialectic model of argumentation (van Eemeren and Grootendorst, 1992). Not only does this theory sketch an ideal normative model of argumentation, it also distinguishes the wrong argumentative moves, fallacies (van Eemeren and Grootendorst, 1987). Among the plethora of prototypical fallacies, notwithstanding the controversy of most taxonomies (Boudry et al., 2015), the ad hominem argument is perhaps the most famous one. Arguing against the person is considered faulty, yet it is prevalent in online and offline discourse (according to 'Godwin's law' known from internet pop culture; https://en.wikipedia.org/wiki/ ). Although the ad hominem fallacy has been known since Aristotle, there are surprisingly few empirical works investigating its properties. While Sahlane (2012) analyzed ad hominem and other fallacies in several hundred newspaper editorials, others usually rely on only a few examples, as observed by de Wijze (2002). As Macagno (2013) concludes, ad hominem arguments should be considered as multifaceted and complex strategies, involving not a simple argument but several combined tactics. However, such research, to the best of our knowledge, does not exist. Very little is known not only about the feasibility of ad hominem theories in practical applications (the NLP perspective) but also about the dynamics and triggers of ad hominem (the theoretical counterpart).
This paper investigates the research gap at three levels of increasing discourse complexity: ad hominem in isolation, direct ad hominem without dialogical exchange, and ad hominem in a large inter-personal discourse context. We asked the following research questions. First, what qualitative and quantitative properties do ad hominem arguments have in Web debates, and how does that reflect the common theoretical view (RQ1)? Second, how much of the debate context do humans and machine learning systems need for recognizing ad hominem (RQ2)? And finally, what are the actual triggers of ad hominem arguments, and can we predict whether a discussion is going to end up with one (RQ3)? We tackle these questions by leveraging Web-based argumentation data (Change My View on Reddit), performing several large-scale annotation studies, and creating a new dataset. We experiment with various neural architectures and extrapolate the trained models to validate our working hypotheses. Furthermore, we propose a list of potential linguistic and rhetorical triggers of ad hominem based on interpreting parameters of trained neural models (an attempt to address the plea for thinking about problems, cognitive science, and the details of human language; Manning, 2015). This article thus presents the first NLP work on multi-faceted ad hominem fallacies in genuine dialogical argumentation. We also release the data and the source code to the research community.
Theoretical background and related work
The prevalent view on argumentation emphasizes its pragmatic goals, such as persuasion and group-based deliberation (van Eemeren et al., 2014), although numerous works have dealt with argument as product, that is, treating a single argument and its properties in isolation (Toulmin, 1958). Yet the social role of argumentation and its alleged responsibility for the very skill of human reasoning, explained from the evolutionary perspective (Mercier and Sperber, 2017), provide convincing reasons to treat argumentation as an inherently dialogical tool. The observation that some arguments are in fact 'deceptions in disguise' was made already by Aristotle (Aristotle and Kennedy (translator), 1991), for which the term fallacy has been adopted. Leaving the controversial typology of fallacies aside (Hamblin, 1970; van Eemeren and Grootendorst, 1987; Boudry et al., 2015), the ad hominem argument is addressed in most theories. Ad hominem argumentation relies on the strategy of attacking the opponent and some feature of the opponent's character instead of the counterarguments (Tindale, 2007). With few exceptions, the following five sub-types of ad hominem are prevalent in the literature: abusive ad hominem (a pure attack on the character of the opponent), tu quoque ad hominem (essentially analogous to the "He did it first" defense of a three-year-old in a sandbox), circumstantial ad hominem (the "practice what you preach" attack and accusation of hypocrisy), bias ad hominem (the attacked opponent has a hidden agenda), and guilt by association (associating the opponent with somebody of low credibility) (Schiappa and Nordin).
The topic of fallacies, which might be considered a sub-topic of argumentation quality, has recently been investigated in the NLP field as well. Existing works are, however, limited to the monological view (Wachsmuth et al., 2017; Habernal and Gurevych, 2016a,b; Stab and Gurevych, 2017), or they focus primarily on learning fallacy recognition by humans (Habernal et al., 2018a). Another related NLP sub-field covers abusive language and personal attacks in general. Wulczyn et al. (2017) investigated whether or not Wikipedia talk page comments are personal attacks and annotated 38k instances, resulting in a highly skewed distribution (only 0.9% were actual attacks). Regarding the participants' perspective, Jain et al. (2014) examined principal roles in 80 discussions from the Wikipedia: Articles for Deletion pages (focusing on stubbornness or ignoredness, among others) and found several typical roles, including 'rebels', 'voices', or 'idiots'. In contrast to our data under investigation (Change My View debates), Wikipedia talk pages do not adhere to strict argumentation rules with manual moderation and have a different pragmatic purpose.
Reddit as a source platform has also been used in other relevant works. Saleem et al. (2016) detected hateful speech on Reddit by exploiting particular sub-communities to automatically obtain training data. Wang et al. (2016) experimented with an unsupervised neural model to cluster social roles on sub-reddits dedicated to computer games. Zhang et al. (2017) proposed a set of nine comment-level dialogue act categories, annotated 9k threads with 100k comments, and built a CRF classifier for dialogue act labeling. Unlike these works, which were not related to argumentation, Tan et al. (2016) examined persuasion strategies on Change My View using word-overlap features; in contrast to our work, they focused solely on successful strategies with delta-awarded posts. Using the same dataset, Musi (2017) recently studied concession in argumentation.

Dataset

Change My View (CMV) is a community whose stated purpose is to help posters 'understand other perspectives on the issue', in other words an online platform for 'good-faith' argumentation hosted on Reddit (https://www.reddit.com/r/changemyview/). A user posts a submission (also called original post(er); OP) and other participants provide arguments to change the OP's view, forming a typical tree-form Web discussion. A special feature of CMV is that the OP acknowledges convincing arguments by giving a delta point (∆). Unlike the vast majority of internet discussion forums, CMV enforces strict rules (such as no 'low effort' posts, or no accusing the opponent of being unwilling to change their view), whose violation results in the comment being deleted by moderators. These formal requirements of an ideal debate, with the notion of violating rules, correspond to incorrect moves in critical discussion in the normative pragma-dialectic theory (van Eemeren and Grootendorst, 1987). Thus, violating the rule of 'not being rude or hostile' is equivalent to committing the ad hominem fallacy. For our experiments we scraped, in cooperation with Reddit, the complete CMV including the content of deleted comments, so that we could fully reconstruct the fallacious discussions, relying on the rule-violation labels provided by the moderators. The dataset contains ≈ 2M posts in 32k submissions, forming 780k unique threads.
We set the stage for further experiments by providing several quantitative statistics of the dataset. Only 0.2% of posts in CMV are ad hominem arguments. This contrasts with typical online discussions: Coe et al. (2014) found 19.5% of comments under online news articles to be uncivil. Most threads contain only a single ad hominem argument (3,396 threads; there are 3,866 ad hominem arguments in total in CMV); only 35 threads contain more than three ad hominem arguments. In 48.6% of threads containing a single ad hominem, the ad hominem argument is the very last comment. This corresponds to the popular belief that once one is out of arguments, one starts attacking and the discussion is over. This trend is also shown in Figure 1, which displays the relative position of the first ad hominem argument in a thread. Replying to ad hominem with another ad hominem happens in only 15% of the cases; this speaks for the attempts of CMV participants to keep up the standards of a rather rational discussion.
Regarding ad hominem authors, about 66% of them start attacking 'out of the blue', without any previous interaction in the thread. On the other hand, 11% of ad hominem authors write at least one 'normal' argument in the thread (we found one outlier who committed ad hominem after writing 57 normal arguments in the thread). In only 20% of cases is the ad hominem thread an interplay between the original poster and a single other participant, which means that usually more people are involved in an ad hominem thread. Unfortunately, sometimes the OP herself also commits ad hominem (12%). We also investigated the relation between the presence of ad hominem arguments and the submission topic. While most submissions are accompanied by only one or two ad hominem arguments (75% of submissions), there are also extremes with over 50 ad hominem arguments. Manual analysis revealed that these extremes deal with religion, sexuality/gender, U.S. politics (mostly Trump), racism in the U.S., and veganism. We elaborate on this later in Section 4.2.
Experiments
The experimental part is divided into three stages of increasing discourse complexity. We first experiment with ad hominem in isolation in section 4.1, then with direct ad hominem replies to original posts without dialogical exchange in section 4.2, and finally with ad hominem in a larger inter-personal discourse context in section 4.3.
Ad hominem without context in CMV
The first experimental set-up examines ad hominem arguments in Change My View regardless of their dialogical context.
Data verification
Ad hominem arguments labeled by the CMV moderators come with no warranty. To verify their reliability, we conducted the following annotation studies. First, we needed to estimate the parameters of crowdsourcing and its reliability. We sampled 100 random arguments from CMV without context: positive candidates were the reported ad hominem arguments, whereas negative candidates were sampled from comments that either violate other argumentation rules or carry a delta label. To ensure maximal content similarity between these two groups, for each positive instance the semantically closest negative instance was selected. We then experimented with different numbers of Amazon Mechanical Turk workers and various thresholds of the MACE gold-label estimator (Hovy et al., 2013); comparing two groups of six workers each at a 0.9 threshold yielded almost perfect inter-annotator agreement (0.79 Cohen's κ). We then used this setting (six workers, 0.9 MACE threshold) to annotate another 452 random arguments sampled in the same way as above.
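The agreement check between the two worker groups can be sketched as follows. MACE itself is a separate tool (Hovy et al., 2013) whose API we do not reproduce here; only the Cohen's κ comparison is illustrated, on made-up labels.

```python
# Sketch of the inter-annotator agreement check between two disjoint worker
# groups; the label vectors are hypothetical MACE-aggregated outputs.
from sklearn.metrics import cohen_kappa_score

group_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # aggregated labels, worker group A
group_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # aggregated labels, worker group B
print(cohen_kappa_score(group_a, group_b))  # the paper reports kappa = 0.79
```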
Crowdsourced 'gold' labels were then compared to the original CMV labels (a balanced binary task: positive instances (ad hominem) and negative instances), reaching an accuracy of 0.878. This means that the ad hominem labels from the CMV moderators are quite reliable. Manual error analysis of the disagreements revealed 11 missing ad hominem labels: these were not spotted by the moderators but were annotated as such by crowd workers.
Recognizing ad hominem arguments
We sampled a larger balanced set of positive instances (ad hominem) and negative instances using the same methodology as in section 4.1.1, resulting in 7,242 instances, and cast the recognition of ad hominem arguments as a binary supervised task. We trained two neural classifiers, namely a 2-stacked bi-directional LSTM network (Graves and Schmidhuber, 2005) and a convolutional network (Kim, 2014), and evaluated them using 10-fold cross-validation. Throughout the paper we use pre-trained word2vec word embeddings (Mikolov et al., 2013). Detailed hyperparameters are described in the source codes (link provided in section 1). As the results in Table 1 (prediction of ad hominem arguments) show, the task of recognizing ad hominem arguments is feasible and almost reaches the human upper-bound performance.
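A minimal sketch of a Kim (2014)-style convolutional classifier for this binary task is given below (PyTorch). The hyperparameters, vocabulary size and random inputs are illustrative assumptions, not the authors' released configuration (their actual code is linked from the paper).

```python
# Minimal Kim (2014)-style CNN sentence classifier; illustrative sketch only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_classes=2):
        super().__init__()
        # In the paper, embeddings are initialised from pre-trained word2vec
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb, seq)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # ad hominem vs. not

model = TextCNN(vocab_size=20000)
logits = model(torch.randint(1, 20000, (8, 120)))      # 8 dummy arguments
```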
Typology of ad hominem
While the binary classification of ad hominem presented above might be sufficient for the purpose of red-flagging arguments, theories provide us with a much finer granularity (recall the typology in section 2). To validate whether this typology is empirically relevant, we executed an annotation experiment classifying ad hominem arguments into the five given types (plus 'other' if none applies). We sampled 200 ad hominem arguments from threads in which the interlocution happens only between two persons and which end up with ad hominem. The Mechanical Turk workers were shown this last ad hominem argument as well as the preceding one. Each instance was annotated by 16 workers to achieve a stable distribution of labels, as suggested by Aroyo and Welty (2015). While 41% of the arguments were categorized as abusive, the other categories (tu quoque, circumstantial, and guilt by association) were found to be rather ambiguous, with very subtle differences. In particular, we observed a very low percentage agreement on these categories, and label distributions spiked around two or more categories. After a manual inspection we concluded that (1) the theoretical typology does not account for longer ad hominem arguments that mix different attacks, and that (2) there are actual phenomena in ad hominem arguments not covered by the theoretical categories. These observations reflect those of Macagno (2013, p. 399) about ad hominem moves as multifaceted strategies. We thus propose a list of phenomena typical of ad hominem arguments in CMV based on our empirical study. For this purpose, we followed up with another annotation experiment on 400 arguments, with seven workers per instance. The goal was to annotate the text span that made the argument an ad hominem; a single argument could contain several spans. We estimated the gold spans using MACE and performed a manual post-analysis, designing a typology of causes of ad hominem together with their frequency of occurrence. The results and examples are summarized in Table 2.
Results and interpretation
The data verification annotation study (section 4.1.1) has two direct consequences. First, the high κ score (0.79) answers RQ2: for recognizing an ad hominem argument, no previous context is necessary. Second, we still found about 5% overlooked ad hominem arguments in CMV, so a moderation-facilitating tool might come in handy; this can be served by the well-performing CNN model (0.810 accuracy; section 4.1.2).
The existing theoretical typology of ad hominem arguments, as presented for example in most textbooks, provides only a very simplified view. On the one hand, some of the categories which we found in the empirical labeling study (section 4.1.3) do map to their corresponding counterparts (such as the vulgar insults). On the other hand, some ad hominem insults typical to online argumentation (illiteracy insults, condescension) are not present in studies on ad hominem. Hence, we claim that any potential typology of ad hominem arguments should be multinomial rather than categorical, as we found multiple different spans in a single argument.
Triggers of first level ad hominem
In the following section, we increase the complexity of the studied discourse by taking the original post into account.
Annotation study
We already showed that ad hominem arguments are usually preceded by a discussion between the interlocutors. However, 897 submissions (original posts; OPs) have at least one immediate ad hominem, in other words, the original post is directly attacked. We were thus interested in what triggers these first-level ad hominem arguments. We hypothesize two causes: (1) the controversy of the OP, similarly to some related works on news comments (Coe et al., 2014), and (2) the reasonableness of the OP (whether the topic is reasonable to argue about). We model both features on a three-point scale, namely controversy: 1 = 'not really controversial', 2 = 'somehow controversial', 3 = 'very controversial', and reasonableness: 1 = 'quite stupid', 2 = 'neutral', 3 = 'quite reasonable'. (Examples of 'not really controversial': "I Don't Think Monty Python is Funny"; 'very controversial': "Blacks are generally intellectual inferior to the other major races"; 'quite stupid': "Burritos are better than sandwiches"; 'quite reasonable': "Nations whose leadership is based upon religion are fundamentally backwards".) We sampled two groups of OPs: those which had some ad hominem arguments in any of their threads but no delta (ad hominem group), and those without ad hominem but with some deltas (delta group). In total, 1,800 balanced instances were annotated by five workers each, and the resulting values were averaged per item; a pilot crowdsourcing annotation with 5 + 5 workers showed fair reliability for controversy (Spearman's ρ 0.804) and medium reliability for reasonableness (Spearman's ρ 0.646). Statistical analysis of the 1,800 annotated OPs revealed that ad hominem arguments are associated with more controversial OPs (mean controversy 1.23), while delta-awarded arguments occur under less controversial OPs (mean controversy 1.06; Kolmogorov-Smirnov (K-S) test, a non-parametric test making no assumptions about the underlying probability distribution: statistic 0.13, p-value 7.97 × 10⁻⁷). On the other hand, reasonableness does not seem to play such a role: the difference between ad hominem in reasonable OPs (mean 1.20) and delta in reasonable OPs (mean 1.11) is not as statistically strong (K-S test statistic 0.07, p-value 0.02).
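The distributional comparison used here is a standard two-sample K-S test. The sketch below uses synthetic ratings whose assumed proportions roughly mimic the reported group means; it is not the real annotation data.

```python
# Two-sample K-S comparison of controversy ratings (synthetic 3-point data;
# ties make the K-S test approximate on ordinal scales, fine for a sketch).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
controversy_ah = rng.choice([1, 2, 3], size=900, p=[0.80, 0.17, 0.03])
controversy_delta = rng.choice([1, 2, 3], size=900, p=[0.94, 0.05, 0.01])
stat, pval = ks_2samp(controversy_ah, controversy_delta)
print(stat, pval)  # the paper reports statistic 0.13, p = 7.97e-7
```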
Regression model for predicting controversy and reasonableness
We further built a regression model for predicting the controversy and reasonableness of the OPs. Along with the Bi-LSTM and CNN networks (the same models as in 4.1.2), we also developed a neural model that integrates a CNN with a topic distribution (CNN+LDA). The motivation for a topic-incorporating model was based on our earlier observations presented in section 3. In particular, we trained an LDA topic model (k = 50) (Blei et al., 2003) on held-out OPs, and during training/testing we merged the estimated topic-distribution vector with the output layer after convolution and pooling. We performed 10-fold cross-validation on the 1,800 annotated OPs and obtained reasonable performance for controversy prediction (ρ 0.569) and medium performance for reasonableness prediction (ρ 0.385), both with the CNN+LDA model (see Table 3).
We then used the trained model and extrapolated on all held-out OPs (1,267 ad hominem and 10,861 delta OPs, respectively). The analysis again showed that ad hominem arguments tend to be found under more controversial OPs, whereas delta arguments occur under less controversial ones (K-S test statistic 0.14, p-value 1 × 10⁻¹⁸). For reasonableness, the rather low performance of the predictor does not allow us to draw any conclusions on the extrapolated data.

Table 2: Typology of phenomena in ad hominem arguments, with their share of occurrence and example spans.
• Vulgar insult (31.3%): "Your just an asshole", "you dumb fuck", etc.
• Illiteracy insult (13.0%): "Reading comprehension is your friend", "If you can't grasp the concept, I can't help you"
• Condescension (6.5%): "little buddy", "sir", "boy", "Again, how old are you?"
• Ridiculing and sarcasm (6.5%): "Thank you so much for all your pretentious explanations", "Can you also use Google?"
• 'Idiot'-insults (6.5%): "Ever have discussions with narcissistic idiots on the internet? They are so tiring"
• Accusation of stupidity (4.3%): "You have no capability to understand why", "Nobody with enough brains to operate a computer could possibly believe something this stupid"
• Lack of argumentation skills (4.3%): "You're making the claims, it's your job to prove it. Don't you know how debating works?", "You're trash at debating."
• Accusation of trolling (3.9%): "You're just a dishonest troll", "You're using troll tactics"
• Accusation of ignorance (3.5%): "Please dont waste peoples time pretending to know what you're talking about", "Do you even know what you're saying?"
• "You didn't read what I wrote" (3.0%): "Read what I posted before acting like a pompous ass", "Did you even read this?"
• "What you say is idiotic" (2.6%): "To say that people intrinsically understand portion size is idiotic.", "Your second paragraph is fairly idiotic"
• Accusation of lying (2.6%): "Possible lie any harder?", "You are just a liar."
• "You don't face the facts and ignore the obvious" (1.7%): "Willful ignorance is not something I can combat", "How can you explain that? You can't because it will hurt your feelings to face reality"
• Accusation of ad hominem or other fallacies (1.7%): "You started with a fallacy and then deflected.", "You still refuse to acknowledge that you used a strawman argument against me"
• Other (8.3%): "Wow. Someone sounds like a bit of an anti-semite", "You're too dishonest to actually quote the verse because you know it's bullshit"
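For concreteness, the CNN+LDA idea described above (concatenating a k = 50 topic distribution with the pooled convolutional features before the final regression layer) can be sketched as follows. The sizes and the single convolution are illustrative simplifications, and the topic vector is assumed to come from a separately trained LDA model.

```python
# Sketch of the CNN+LDA wiring; not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNWithLDA(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, n_filters=100, n_topics=50):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3)
        self.out = nn.Linear(n_filters + n_topics, 1)    # scalar score

    def forward(self, token_ids, topic_dist):            # topic_dist: (batch, 50)
        x = self.embedding(token_ids).transpose(1, 2)
        pooled = F.relu(self.conv(x)).max(dim=2).values  # (batch, n_filters)
        joint = torch.cat([pooled, topic_dist], dim=1)   # append LDA topics
        return self.out(joint).squeeze(1)                # controversy score

model = CNNWithLDA(vocab_size=20000)
scores = model(torch.randint(1, 20000, (8, 200)),
               torch.full((8, 50), 1.0 / 50))            # uniform dummy topics
```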
Results and interpretation
Controversy of the original post immediately heats up the debate participants and correlates with a higher number of direct ad hominem responses. This corresponds to observations made on newswire comments, where 'weightier' topics tended to stir incivility (Coe et al., 2014). On the other hand, 'stupidity' (or 'reasonableness') does not seem to play any significant role. The CNN+LDA model for predicting controversy (ρ 0.569) might come in handy for signaling potentially 'heated' discussions.
Before calling names
In this section, we focus on the dialogical aspect of CMV debates and the dynamics of ad hominem fallacies. Although ad hominem arguments appear in many forms (Section 4.1.3), we treat all ad hominem arguments as equal in the following experiments.
Data sampling
So far we have explored what makes an ad hominem argument and whether the debated topic influences the number of immediate attacks. However, the possible causes of the argumentative dynamics that ends up with an ad hominem argument remain an open question, one which, to the best of our knowledge, has been addressed neither in argumentation theory nor in cognitive psychology. We thus cast the explanation of triggers and dynamics of ad hominem discussions as a supervised machine learning problem and draw theoretical insights by retrospective interpretation of the learned models.
We sample positive instances by taking the three contextual arguments preceding the ad hominem argument from threads which are an interplay between two persons. Negative samples are drawn similarly from threads in which the argument is awarded a ∆, as shown in Figure 2; to ensure as much content similarity as possible, we used the same similarity-based sampling as in section 4.1.1. Each instance consists of the three concatenated arguments delimited by a special OOV token. This resulted in 2,582 balanced training instances.
Neural models
The alleged lack of interpretability of neural networks has motivated several lines of approaches, such as layer-wise relevance propagation (Arras et al., 2017) or representation erasure (Li et al., 2016), both applied to sentiment analysis. As our task deals with multi-party discourse that presumably involves temporal relations important for the learned representation, we opted for a state-of-the-art self-attentive LSTM model. In particular, we re-implemented the Structured Self-Attentive Embedding Neural Network (SSAE-NN) (Lin et al., 2017), which learns an embedding-matrix representation of the input using attention weights. To make the attention even more interpretable, we replaced the final non-linear MLP layers with a single linear classifier (softmax). By summing over one dimension of the attention embedding matrix, each word from the input sequence gets associated with a single attention weight that gives us insights into the classifier's 'features' (still indirectly, as the true representation is a matrix; see the original paper). The learning objective is to recognize whether the thread ends up in an ad hominem argument or a delta point. We trained the model in 10-fold cross-validation, and although our goal is not to achieve the best performance but rather to gain insight, we also tested a CNN model (accuracy 0.7095), which performed slightly worse than the SSAE-NN model (accuracy 0.7208).
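A sketch of the structured self-attention of Lin et al. (2017) with the linear classifier head described above, including the per-word weights obtained by summing the attention matrix over its hops, might look as follows. The dimensions are common defaults, not necessarily the authors' exact settings.

```python
# Structured self-attentive classifier sketch (Lin et al., 2017 style).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentiveClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=150, d_a=100, hops=10):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.w_s1 = nn.Linear(2 * hidden, d_a, bias=False)
        self.w_s2 = nn.Linear(d_a, hops, bias=False)
        self.classifier = nn.Linear(hops * 2 * hidden, 2)  # linear head, no MLP

    def forward(self, token_ids):
        h, _ = self.lstm(self.embedding(token_ids))       # (batch, seq, 2*hidden)
        a = F.softmax(self.w_s2(torch.tanh(self.w_s1(h))), dim=1)  # over seq
        m = torch.einsum("bsh,bsd->bhd", a, h)            # hop-wise summaries
        logits = self.classifier(m.flatten(1))            # AH thread vs. delta
        word_weights = a.sum(dim=2)                       # one weight per word
        return logits, word_weights

model = SelfAttentiveClassifier(vocab_size=20000)
logits, weights = model(torch.randint(1, 20000, (4, 300)))
```

Projecting `word_weights` back onto the tokens yields heat maps of the kind analyzed in the next subsection.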
Results and interpretation
When testing the model, we projected attention weights onto the original texts as heat maps and manually analyzed 191 true positives (ad hominem threads recognized correctly), as well as 77 false positives (delta threads misclassified as ad hominem) and 84 false negatives (ad hominem threads misclassified as delta), in total about 120k tokens. The full output is available in the supplementary materials; we use IDs as references in the following text.
In the following analysis, we relied solely on the weights of words or phrases learned by the attention model; see an example in Figure 3. Based on our observations, we summarize in Table 4 several linguistic and argumentative phenomena, with examples, that are most likely responsible for ad hominem threads.
The identified phenomena have a few interesting properties in common. First, they are all topic-independent rhetorical devices (except for the loaded keywords at the bottom). Second, many of them deal with meta-level argumentation, i.e., arguing about argumentation (such as missing support or fallacy accusations). Third, most of them do not contain profanity (in contrast to the actual ad hominem arguments, of which a third are vulgar insults; cf. Table 2). And finally, all of them should be easy to avoid.
Misleading 'features'

False positives revealed properties that misled the network into classifying delta threads as ad hominem threads.
• These include topic words (such as racism, blacks, slave, abortion), which reflect the implicit bias in the data.
• Actual interest mixed with indifference.

(Figure 3: An example of a reconstructed word-weight heat map extracted from the attention matrix for a thread which ends up in ad hominem; the three previous arguments are shown. See Figure 2 for sampling details.)
False negatives were caused basically by the presence of many 'informative' content words (980: unemployment, quarterly publication, inflation data; 474: actual publications, this experiment, biological ailments, medical doctorate; 1214: graduate degree, education, health insurance) and by misinterpreted sarcasm (285: "Also this is a cute analogy").
Conclusion
In this article, we investigated ad hominem argumentation on three levels of discourse complexity. We looked into qualitative and quantitative properties of ad hominem arguments, crowdsourced labeled data, experimented with prediction models (0.810 accuracy; 4.1.2), and proposed an updated typology of ad hominem properties (4.1.3). We then looked into the dynamics of argumentation to examine the relation between the quality of the original post and immediate ad hominem arguments (4.2). Finally, we exploited the learned representation of the Structured Self-Attentive Embedding Neural Network to search for features triggering ad hominem in one-to-one discussions. We found several categories of rhetorical devices as well as misleading features (4.3.3).
There are several points that deserve further investigation.
First, we have ignored meta-information about the debate participants, such as their overall activity (i.e., whether they are spammers or trolls). Second, the proposed typology of ad hominem causes has not yet been post-verified empirically. Third, we expect that personality traits of the participants (Big Five) may also play a significant role in the argumentative exchange. We leave these points for future work.
We believe that our findings will help the community better understand, and hopefully refrain from, ad hominem fallacies in good-faith discussions.
Effect of Land Expropriation on Land-Lost Farmers’ Health: Empirical Evidence from Rural China
With rapid urbanization and industrial development, China has witnessed substantial land acquisition. Using rural household survey data, this paper examines the impact of land expropriation on land-lost farmers' self-reported health with an ordered probit model and investigates the possible mechanisms. The results show that land expropriation imposes higher health risks on land-lost farmers: their health status is significantly worse than that of farmers who keep their land. Land expropriation has a negative impact on land-lost farmers' health through income effects and psychological effects. The health status of land-lost farmers can be enhanced by amending current land-requisition policies, increasing the amount of compensation, improving the earning capacity of land-lost farmers and strengthening mental-health education.
Introduction
With the advancement and acceleration of industrialization and urbanization in China, land expropriation has caused a large number of farmers to leave their land and become land-lost farmers. Statistics show that there are 50 million land-lost farmers in China, and the living standards of about 30 million of them have declined greatly [1,2]. After their land is expropriated, land-lost farmers face pressures from changes in their economic situation and social interactions and must re-adapt, which may affect their health, both physical and mental. Health, in turn, affects individual behavior, and it affects both individual productivity and output [3]. Farmers with poor health are more likely to end up in poverty [4,5].
The existing literature has mainly focused on the pension system and income of land-lost farmers; research on whether land expropriation affects farmers' health is still relatively rare. We would therefore like to know whether land-lost farmers face health risks against the background of urbanization. If so, what are the health effects of land expropriation on rural residents? Are these effects positive or negative? What are the formation mechanisms and channels of influence? Is the influence consistent for farmers of different gender and socio-economic status? How can a health-safeguard system be effectively built for land-lost farmers? Answering these questions is the main purpose of this research. Identifying the channels allows targeted interventions to improve the health of land-lost farmers, which is of great theoretical and practical significance for formulating a reasonable farmland-compensation system and for implementing the Rural Vitalization Strategy and beautiful-countryside construction in China.
Among the existing literature, few studies have quantitatively examined the link between land loss and farmers' health status at the local level, largely due to the lack of detailed information and micro data. This work adds to the literature in the following ways. Firstly, previous studies focus on qualitative description and case analysis, while empirical analysis based on micro data is still scarce; we conducted a rigorous empirical study based on the 2011 survey data of rural households in Chengdu, adding to the growing empirical literature on the relationship between land expropriation and land-lost farmers' health at the micro level. Secondly, although the influence of land expropriation on farmers' health has been considered in previous studies, this study explores the issue from a new perspective by investigating this complex mechanism through income and psychological effects. Our results provide new insights into Chinese rural land expropriation, and our study consequently calls for reinforced action from the Chinese public authorities to improve the well-being of land-lost farmers. Thirdly, existing empirical studies on the effect of land expropriation on land-lost farmers' health adopted simple OLS (ordinary least squares) and probit models; we adopt an ordered probit model so as to identify the health effects of land loss more accurately.
The remainder of the paper is organized as follows. In Section 2 we review the related literature. In Section 3 we introduce the data source, the definition of related variables and the construction of the model. The empirical results are presented in Section 4. We make a further discussion on the formation mechanism of the health effects of land-lost farmers in Section 5. The last section summarizes the main conclusions.
Literature Review
There is a large literature on the factors that affect residents' health. Most of it builds on the theory of the demand for health (Grossman, 1972) and explores the determinants of health along dimensions such as age, gender, income level and social capital [6-14]. At the micro level, age, gender, educational level, marital status, environmental quality, water quality, sanitation level, income level and religious beliefs all have an impact on the health of rural residents [11-17]. At the macro level, health expenditure in rural areas, accessibility of medical services and the outflow of the rural labor force are closely related to the health of rural residents [18-20].
However, for a long time little attention was paid to whether land expropriation affects farmers' health, mostly due to the lack of detailed survey data. Jacobs (2004) and Marco-Thyse (2006) found that land expropriation driven by urbanization in African countries reduced the health status of farmers [21,22]. In the opinion of Summerfield (2007) and Friedman (2009), improving the health status of land-lost farmers requires improvements in their medical care and retirement benefits [23,24]. Fearnside (2001) and Campbell et al. (2010) claimed that land expropriation could lead to a decline in the health of land-lost farmers and cause social instability [25,26]. Yang (2017) found that people living in communities with land expropriation had depression scores that were 0.75 units lower [27]. Wu et al. (2009) studied the health status of land-lost farmers in northern Jiangsu and found that their health problems were serious [28]. Another study, based on a probit model, found that urbanization has reduced the health of land-lost farmers [1]. Yu (2012) argued that psychological gaps and imbalances are more likely to breed among land-lost farmers and induce health risks [29].
To sum up, the association between land expropriation and health has been little discussed, and it is not yet clear whether such an effect exists in rural China. Against the background of rapid economic development and urbanization, rural land expropriation will become more and more common in China; the association between land expropriation and health is therefore worth discussing, and this study aims to address this research gap. We hope that these results provide new evidence on land-lost farmers' health and new ideas for improving the relevant post-expropriation policies.
Data
The data set used for the analysis comes from the 2011 rural household survey in Chengdu. The survey was conducted from June to September 2011, with no follow-up investigation. The data were created within projects funded by the World Bank and the Chinese Academy of Social Sciences. With the advancement of urbanization, the scale of land expropriation in Chengdu has gradually increased: by June 2015, a total of 74,600 hectares of farmland had been expropriated and about 637,000 farmers had lost their land in whole or in part [30].
A multi-stage sampling method was used in the survey. In the first stage, sampling proportional to size was applied: three counties were randomly selected from Chengdu. In each county, three townships were randomly selected, and three administrative villages were selected from each township. Finally, household surveys were conducted on rural households using a combination of random sampling and typical sampling within the administrative villages. Questionnaires with missing or incomplete information were eliminated, as were the samples of students in school and of farmers who had entirely lost their working capability through illness or disability. Missing data on the land expropriation, health, hukou and age variables were handled by listwise deletion. After data processing, a total of 3,945 rural residents were used as the research sample, of which 1,199 were land-lost farmers, accounting for 30.39% of the total sample.
Dependent Variable
Quantifying health status is always challenging. Self-reported health status as an indicator of personal health may be subjectively influenced by individual respondents, but its main advantage is that it can reflect both the physiological and the mental health of individuals. It has been widely used in the previous literature, and many studies have confirmed the rationality of this indicator [8,13,14,31-37]. This paper also uses self-reported health status as the primary health indicator.
Self-reported health was collected during the main interview. The respondents were asked to rate their own general health on a five-point Likert scale: Would you say your health is very good, good, fair, poor or very poor? The answers were coded as (1) "Very Poor", (2) "Poor", (3) "Fair", (4) "Good", (5) "Very Good".
The Explanatory Variable and Control Variables
Whether or not a farmer had lost land was the main explanatory variable: a dummy variable indicating land-expropriation status, taking the value one if the farmer had lost all or part of the land and zero otherwise. Most of the farmers had lost their land 2 to 4 years before the survey. Land expropriation was compensated monetarily, and the government did not provide any jobs for the land-lost farmers. This survey therefore provided us with a good source of data for discovering the effect of land expropriation on the health of rural residents.
We controlled for other factors that may affect an individual's health, mainly gender, age, education level and family factors such as family size, family income per capita and family housing market value [6-8,12,13,15,16,31]. Table 1 provides an overview and descriptive statistics of the variables used in the empirical analysis; the Spearman correlation matrix of all variables is shown in Table S1 in the supplementary material. Table 1 shows that the average value of self-reported health was 3.823, indicating that the self-reported health of rural residents lies between "fair" and "good", consistent with the previous literature on the health of rural residents [37]. The average value of land lost was 0.304, showing that land-lost farmers accounted for about one third of the total sample. The proportion of females was 49.39%, consistent with the demographic characteristic that there are fewer females than males in China. The average age of the rural residents surveyed was 45.01 years, the average education was 7.992 years, and the average family size of respondents was 4.116.
Model Specification
According to Grossman's theory, factors such as the age, education level and gender of residents affect their health status. Since health is an ordered, discrete categorical variable in this study, ordinary least squares (OLS) is not suitable for estimating the equation. Therefore, we adopted an ordered probit model to examine the impact of land expropriation on land-lost farmers' self-reported health.
Following the literature [37-39], our general estimation approach is

Health_i = F(Landlost_i, X_i, ε_i),  (1)

where the dependent variable Health_i is the self-reported health status of rural residents described above. Landlost_i is the core explanatory variable: a dummy variable taking the value 1 when the farmer lost his land and 0 otherwise. X_i represents the set of control variables reflecting the characteristics of individuals and families, mainly including the gender, age, marital status, family size and per capita income of rural residents. F(·) is a non-linear link whose specific form is the ordered probit: the unobservable latent variable

Y*_i = β Landlost_i + γ′X_i + ε_i

is mapped to the observed categories by

Health_i = j  if  r_{j-1} < Y*_i ≤ r_j,  j = 1, …, J,

where the estimated parameters r_1 < r_2 < … < r_{J-1} are the cut points (thresholds).
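As an illustration of the estimation and of how category probabilities respond to the land-loss dummy, a minimal sketch with statsmodels' OrderedModel is given below. The data are simulated, since the survey data are not public, and all variable names and magnitudes are our own assumptions, not results from the paper.

```python
# Ordered probit sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "landlost": rng.integers(0, 2, n),
    "female":   rng.integers(0, 2, n),
    "age":      rng.integers(18, 81, n),
})
latent = -0.15 * X["landlost"] - 0.02 * X["age"] + rng.normal(size=n)
health = pd.cut(latent, [-np.inf, -2.0, -1.4, -0.8, -0.2, np.inf],
                labels=[1, 2, 3, 4, 5]).astype(int)  # 1 = very poor ... 5 = very good

res = OrderedModel(health, X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())

# Discrete analogue of the marginal effect of land loss: the change in the
# predicted probability of each health category when landlost flips 0 -> 1
p1 = res.predict(X.assign(landlost=1))
p0 = res.predict(X.assign(landlost=0))
print(np.asarray(p1 - p0).mean(axis=0))
```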
Econometric Analysis Results
Firstly, we studied whether losing land could affect farmers' health status, based on the ordered probit model. Secondly, marginal effects were analyzed. Lastly, we tested the robustness of our results. Table 2 presents estimates of the health Equation (1). We mainly focus on the effect of land expropriation on the health status of rural households. Model 1 controlled only for basic individual characteristics such as gender, age, educational level and marital status. Model 2 added further control variables such as medical expenses in the previous year and participation in the New Rural Cooperative Medical Scheme. Model 3 reports the estimated coefficients for the full set of control variables. The symbols *, ** and *** indicate statistical significance at the 10%, 5% and 1% levels, respectively.
Model Estimation Results
Models (1) to (3) yield the same qualitative results: a negative and significant effect of land expropriation on self-reported health. This means that land expropriation imposes higher health risks on land-lost farmers, whose health status is significantly worse than that of farmers with land. In column four of Table 2, we re-estimated the model with an ordered logit model; the results for the main effect are very similar to those of the ordered probit model.
Regarding the control variables, age, hukou and medical expenses were significantly negative at the 1% level. Gender (male), education level and family income per capita were significantly positive at the 1% level; family size and family house market value were significantly positive at the 5% level. The gender results show that males report far better health than females, and age is negatively correlated with health. Educational level, family size and family per capita income improve the health status of the farmers.
Marginal Effects Analysis
The marginal effects of the explanatory variable and the other control variables from the ordered probit model are presented in Table 3. They show that, controlling for other variables, farmers who lost their land were more likely to report poor self-rated health. Specifically, land expropriation reduced the probability of a farmer rating their health "very good" by 3.847% and "good" by 0.211%, while increasing the probability of "fair" by 2.117%, "poor" by 1.726% and "very poor" by 0.225%.
The marginal effects of the other control variables are consistent with expectations. For example, for each ten-thousand-yuan increase in per capita household income, the probabilities of "very poor", "poor" and "fair" self-rated health drop by 0.249%, 1.978% and 2.516%, respectively, while the probabilities of "good" and "very good" rise by 0.167% and 4.576%. Education is positively correlated with self-reported health, and age is negatively correlated with health: with each additional year of age, the probabilities of "very poor", "poor" and "fair" self-rated health rise by 0.04%, 0.31% and 0.39%, respectively, while the probabilities of "good" and "very good" drop by 0.03% and 0.71%.
Robustness Checks
In this section, we investigated the robustness of our results in several ways.
(1) Adding further control variables. We conducted sensitivity checks by adding other control variables that may also affect farmers' health status, including an indicator of whether a farmer was engaged in business in the past year (Businesse), expenditures on nutrition and health care (Expendnutri) and a dummy variable for whether a farmer is a Chinese Communist Party member (Communistpm). Adding these variables had little effect on the estimated effect of land expropriation. The coefficients of the added controls were all significant and carried the expected signs. More importantly, as Table 4 shows, our main results were unchanged and the coefficient estimates varied little with the choice of whether to include the additional controls. In the following regressions we therefore report only the parsimonious specification and omit the estimates of the other control variables due to space constraints.

(2) Replacing the dependent variable. Instead of the ordered self-rated health measure, we used a binary variable to reflect the health status of the farmers. The dichotomous variable took the value one if farmers considered their health status good or very good, and zero otherwise. We estimated the equation with probit and logit models. The alternative measures confirmed, and even strengthened, our previous findings; the results of Model 8 and Model 9 are shown in Table 5 (a code sketch of this check is given at the end of this subsection).

(3) Changing the sample range. We tested whether our results were sensitive to the sample of farmers considered in the estimations. First, we excluded farmers over the age of 65 years, considering only working-age farmers in the 18-65 years range; the third column (Model 10) of Table 6 presents the results, and the sample restriction had no impact on our previous conclusions. Second, we considered only the sample of rural residents with rural household registration; Model 11 in Table 6 presents these results. Third, farmers were divided into two groups based on the poverty-line income level. According to the poverty line set by China in 2011 (annual per capita net income of 2300 yuan), the total sample was divided into a sub-sample under the poverty line and one above it. As can be seen from Table 6, land expropriation had a negative effect on the health of farmers at both income levels: the coefficient of land expropriation was −0.123 among farmers under the poverty line and −0.153 among those above it, suggesting that the income effect was more evident for the poorer farmers.

Having passed the above tests, the results remain robust: land expropriation affected the health status of rural residents, and the health status of land-lost farmers was markedly worse than that of farmers who kept their land. The robustness tests therefore support the stability and reliability of the results.
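As an illustration of check (2), the binary recode and the probit/logit re-estimation could look like the following sketch (same hypothetical variable names as before; the good/very good threshold follows the text):

```python
import statsmodels.api as sm

# Binary health indicator: 1 if self-rated "good" (4) or "very good" (5).
good = (df["health"] >= 4).astype(int)
Xc = sm.add_constant(X)                      # binary models need an intercept

probit_res = sm.Probit(good, Xc).fit(disp=False)
logit_res = sm.Logit(good, Xc).fit(disp=False)
print("probit landlost:", probit_res.params["landlost"])
print("logit  landlost:", logit_res.params["landlost"])
```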
Discussion
The above analysis shows that land expropriation has a significant negative effect on the health status of land-lost farmers. To clarify this relationship, this section offers the following possible explanations.
Income Effect
The first possible explanation is that land expropriation reduces farmers' incomes, resulting in a decline in their health status. Residents' self-reported health improves with increasing income [40-42]. When the Chinese government expropriates agricultural land, the compensation policy may be unreasonable: the current "multiple-of-output" compensation policy leads to low compensation levels, far below the market price, and the compensation is paid to farmers in a lump sum. This policy deprives farmers of a long-term source of income and of income security, and cannot guarantee the basic living standard of land-lost farmers. In addition, the employment channels of land-lost farmers change. The shift from agriculture-related to non-agricultural sectors, combined with the low education level of land-lost farmers, leaves them at a disadvantage in income and employment. The resulting lower levels of income and employment mean that land-lost farmers lack the resources to devote to health-related activities, leading to the deterioration of their health status.
Loss of land may thus lead to falling incomes and a lack of protection for farmers' long-term livelihoods. The available literature supports this: land expropriation has been found to reduce farmers' total income by 12.7% and their agriculture-related income by 36.8%; the compensation system damages the basic economic rights and interests of land-lost farmers; the compensation standard is less than 10% of the current land value-added income, leaving land-lost farmers in poor condition; and more than 90% of land-lost farmers are dissatisfied with the compensation standards of local governments [43-46]. In our study, the effect of land expropriation on the different income components of farmers was used to investigate the income effect. Table 7 presents these results. Land expropriation significantly reduces agriculture-related income and increases transferred income, while its effect on non-agricultural income and total income is not significant. The results show that land expropriation did not raise farmers' incomes. Therefore, compensation for land-lost farmers should be strengthened and their income-earning ability improved, so that they have the means to invest in their health.
Psychological Effect
Another possible channel for the effect of land expropriation is psychological. Firstly, more and more farmers are losing the land they previously depended upon. Once the land on which generations depended for survival is expropriated, it is lost forever. Such an event can greatly affect farmers' psychological condition and cause anxiety about their future lives, easily leading to psychological imbalance and growing insecurity, and in turn to health problems. Secondly, the value of expropriated land has increased substantially as more and more agricultural land has been converted to non-agricultural use, yet land-lost farmers cannot share in the benefits of this land value increment, which also causes psychological imbalance. In addition, land compensation is paid in a lump sum, so land-lost farmers lack appropriate social security once the compensation is used up. Facing the dual pressures of subsistence and employment, they are prone to psychological imbalance and abnormalities. Land-lost farmers therefore experience more psychological anomalies, which can seriously endanger their health.
Due to the lack of suitable data, we cannot discriminate precisely among the psychological mechanisms of land expropriation, nor quantify the specific contribution of land expropriation to farmers' mental health. Instead, we first provide some indirect evidence for the psychological effects, and then examine the effect of land expropriation on land-lost farmers from a gender perspective.
Land-lost farmers' psychological status has shown a trend of deterioration [47]. Their overall mental health is not optimistic, exhibiting anxiety, hostility, paranoia and other mental problems [48]. More than 80% of land-lost farmers miss their former way of life of "going to work at sunrise and returning home at sunset", and are greatly concerned about their life after land expropriation [29]. All of these studies show that farmers' psychological condition changes after land expropriation. The genders may also differ in their psychological response to losing land. As can be seen from Table 8, the effects on males and females differ: the effect of land expropriation on females is significantly negative, while the effect on males is negative but not significant. Possible explanations are as follows. On the one hand, the traditional Chinese culture of "men go out and women stay home" gives women fewer social activities than men. Rural women are mostly responsible for cumbersome family affairs and bear more mental stress than men; they are psychologically less able to bear the loss of the land, and worry and anxiety about life after losing it can easily lead to psychological imbalance and endanger their health. On the other hand, women in rural areas are generally less educated, less able to accept new things, less able to adapt to new situations and less able to cope with stress, which leaves them more vulnerable to mental health problems. Women tend to grow more anxious than men and are thus more severely affected by land expropriation. In addition, women are generally more open when talking about health issues than their male counterparts; they consider more factors than men and may underestimate their own health status.
Conclusions
Using micro data from a rural household questionnaire survey in Chengdu, this paper analyzes whether land expropriation affects the health status of rural residents and investigates the possible mechanisms with an ordered probit model, exploring a new research angle. The empirical results demonstrate that land expropriation exposes land-lost farmers to higher health risks and that their health status is significantly worse than that of farmers who retained their land. Land expropriation reduces the probability that farmers self-rate their health "very good" by 3.847%, reduces the probability of "good" by 0.211% and increases the probability of "very poor" by 0.225%. Land expropriation harms land-lost farmers' health through income and psychological effects.
These results contribute new evidence on the questions posed above and clarify the mechanisms through which land expropriation affects the health status of land-lost farmers. Given that land expropriation brings about a worse health status, it is recommended that both compensation-based and target-based policies be considered in relation to land expropriation. Rural land in China bears the double function of agricultural production and social security, and that must be embodied in the compensation. Several policy implications can be drawn from these results. First, the Chinese public authorities may raise the standards for land compensation and resettlement fees in line with the level of social and economic development. Second, various modes of compensation should be adopted; compensation based on market prices combined with job placement could become a new approach to land expropriation. Third, it is essential to provide training to land-lost farmers, both to increase their knowledge (raising their self-health-care consciousness) and to improve their employability (enhancing their income-earning ability). Accordingly, this study will hopefully encourage further research on the health problems of villagers and farmers in developing countries.
Nonetheless, the present study has some limitations. First, it used only self-reported health to measure health status and ignored other indicators; a comprehensive indicator would be more helpful in revealing how land expropriation affects health. Due to data limitations, only a self-reported health variable could be used here; however, numerous studies have confirmed the validity and rationality of such variables [8,13,14,31-37]. Second, this study examined only the income and psychological effects and neglected other potential explanations at the societal or community level; no data were available in this survey to identify those mechanisms, so the investigation of these issues is left for future research. Finally, since the study was based on cross-sectional data, we were unable to establish the causal relationship between land expropriation and health. These issues should all be addressed in further studies.
Conflicts of Interest:
The authors declare no conflict of interest.
Reliable Characterization of Organic & Pharmaceutical Compounds with High Resolution Monochromated EEL Spectroscopy
Organic and biological compounds (especially those related to the pharmaceutical industry) have always been of great interest to researchers due to their importance for the development of new drugs to diagnose, cure, treat or prevent disease. As many new APIs (active pharmaceutical ingredients) and their polymorphs occur in nanocrystalline form or in amorphous form blended with an amorphous polymeric matrix (known as amorphous solid dispersion, ASD), their structural identification and characterization at the nm scale with conventional X-ray/Raman/IR techniques becomes difficult. During any API synthesis/production, and in the formulated drug product, impurities must be identified and characterized. Electron energy loss spectroscopy (EELS) at high energy resolution in a transmission electron microscope (TEM) is expected to be a promising technique to screen and identify the different (organic) compounds used in a typical pharmaceutical or biological system and to detect any impurities introduced during the synthesis or formulation process. In this work, we propose the use of monochromated TEM-EELS to analyze selected peptides and organic compounds and their polymorphs, and we validate the EELS fingerprints (in the low loss/optical region) by correlation with advanced DFT simulations.
Introduction
Many pharmaceutical compounds may exist as polymorphs (i.e., they crystallize into different packing arrangements while having the same chemical formula). Drug polymorphism is critically important in the pharmaceutical industry, as many of the solid-state properties of a compound depend on its polymorphic form. For example, different polymorphic phases dissolve at different rates (solubility, bioavailability), affecting the absorption of the compound in vivo, making it essential to control which polymorphic form is dosed to patients [1,2]. There is hence a requirement that the polymorphic behavior of a drug compound be thoroughly investigated and understood.
Moreover, when a drug or active pharmaceutical ingredient (API) is approved for release onto the market for medication purposes, it is important that the marketed polymorphic form is well characterized (e.g., under the rules of the US Food and Drug Administration (FDA) in the United States and the European Medicines Agency (EMA) in Europe). In practice, the approved API together with its other polymorphic forms is protected by patent/IP rights before being launched onto the market, and the released pharmaceutical product must contain exclusively the specific approved API form. It is therefore very important to characterize all the forms, not only the marketed ones, as some polymorphic forms might (or might not) have the same therapeutic effects as the marketed API, or could be harmful to health and the environment [3].
On the other hand, during the synthesis/production of a specific API in a particular polymorphic form, sudden transformation to another (more stable) polymorph may occur (e.g., the case of the Ritonavir API, where the marketed polymorph converted over time to a more stable but much less soluble form) [4,5]. Such transformed polymorphs may be present in "trace" quantities, co-existing with the main compound, and often go unobserved during initial screening. As previously mentioned, the detection of impurities from the initial synthetic material or introduced during the production process is also relevant. There is therefore a need to develop new, reliable techniques and approaches to detect such trace polymorphic impurities (or other impurities) if present.
In the current state of the art, the standard characterization of an API (crystalline or amorphous), its polymorphs and its blend within a polymer matrix is usually performed using conventional or synchrotron X-ray powder diffraction (XRPD) (detection limit of 0.01 wt %). Other techniques like Fourier-transform infrared (FT-IR)/Raman spectroscopy, differential scanning calorimetry (DSC) and solid-state nuclear magnetic resonance (NMR) can also be used, but with limited spatial resolution (from several hundreds of microns down to nm); for example, Raman spectroscopy can only distinguish compounds at the micron scale, and photothermal-induced resonance (PTIR) has a spatial resolution reaching ~100 nm [6-11].
In the case of crystalline materials, electron diffraction (ED) in a TEM is ideal to identify whether an individual crystallite belongs to a new polymorph phase [12-16] at a very local scale (10-20 nm), by collecting experimental diffraction patterns from an area of the sample and comparing them with simulated diffraction patterns of well-known phases. Electron diffraction based 3D tomography [17-21] can also be applied to determine ab initio the unit cell and crystal structure of each individual crystallite, but its application can be very time consuming because of the significant number of API nanocrystals that may exist in a ng-quantity powder sample. There is therefore a need for a technique to finely screen and detect (at the nm scale) different APIs with various possible polymorphs (whether crystalline or amorphous) and/or to screen the various possible phases in an amorphous solid dispersion (ASD) [22], with nm spatial resolution. In the case of an amorphous material, the electron pair distribution function can also be used to study pharmaceutical polymorphs [23].
It was recently shown that electron energy loss spectroscopy (EELS) in the low loss regime (0-50 eV) can be a suitable technique to distinguish amorphous organics at very local (nm range) scale [24]. It has been shown that EELS spectroscopy might be used to quantify local concentrations of API drugs through the amorphous polymer matrix with high accuracy at sub-100 nm resolution in a thin-film-like sample by recording the spectral signatures of the different compounds. In the work of Ricarte et al. [24], analysis of phenytoin/HPMCAS ASDs showed that drug and polymer were intimately mixed throughout the ASD, even at high drug loadings.
TEM-EELS spectroscopy therefore appears to be a potentially complementary tool to identify and screen, with high spatial resolution, a number of organic small molecules (APIs, polymers in ASDs) that otherwise cannot be distinguished by other analytical techniques. In this work we explore further the possibility of finely characterizing various crystalline and amorphous APIs (including several polymorphs) using high energy resolution monochromated STEM-EELS.
With EELS, we can study the kinetic energy lost by electrons passing through or near a sample as they excite the sample itself. In particular, in the low-loss regime the electron beam probes the optical transitions of the material in the ultraviolet to infrared range, providing information similar to that of optical UV-Vis spectroscopies. The spectra are approximately described by the so-called energy loss function Im[−1/ε̃(ω)] (ELF).
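As a reference for what is plotted later, the ELF follows directly from the complex dielectric function. The sketch below uses a synthetic Drude-like ε(ω) with an assumed plasmon energy purely for illustration; it is not a model of any of the compounds studied here.

```python
import numpy as np

def energy_loss_function(eps1, eps2):
    """ELF = Im(-1/eps) = eps2 / (eps1**2 + eps2**2)."""
    return eps2 / (eps1**2 + eps2**2)

# Synthetic Drude dielectric function, for illustration only.
omega = np.linspace(0.5, 40.0, 1000)           # energy loss (eV)
wp, gamma = 21.5, 1.5                          # assumed plasmon energy and damping (eV)
eps = 1.0 - wp**2 / (omega**2 + 1j * gamma * omega)
elf = energy_loss_function(eps.real, eps.imag) # peaks close to the plasmon energy wp
```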
Materials and Specimen Preparation/Instrumental Configuration-Data Collection
To identify APIs in crystals and ASDs we acquired spatially resolved maps of EELS spectra through the so-called STEM-EELS spectrum imaging mode, in the Titan3 microscope from Thermo Fisher Scientific installed at EMAT-Antwerp, equipped with a Wien-filter monochromated electron source, a probe aberration corrector and a Gatan Enfina electron spectrometer featuring a fast electrostatic shutter. As organic compounds are very beam sensitive (the critical dose for organics varies from 10-120 e/Å²) [25,26], to reduce beam damage, low-loss EELS spectra from micron-sized grains were collected at 300 kV under low-dose conditions at room temperature, with an effective energy resolution of 0.2 eV. The low-loss scattering events have a much higher cross section than higher-energy transitions (e.g., the promotion of core-shell electrons to the valence levels), thus helping minimize the dose.
Using optimized low-dose data acquisition, EELS data were acquired in STEM mode without any cryo-cooling techniques, and no beam damage was observed in our samples. We used a low convergence semi-angle (~0.5 mrad) to reduce the current density within the probe, and a collection semi-angle of 25 mrad. Since high-energy electrons can induce radiation damage even at a distance of a few nm from their trajectory, we acquired the spectra from widely spaced (~50 nm) probe positions to avoid cross-damage between neighboring positions [27]. Each spectrum was acquired for 20 ms; the scans covered areas between 0.5 and 2 µm², and the dose was estimated to be 1 e/Å²·s. Beam damage in the crystalline samples was monitored by observing the high-angle reflections in the diffraction pattern; the typical lifetimes of crystals were of the order of minutes under the present measuring conditions. Progressive amorphization by the electron beam proceeded via progressive loss of long-range order (fading of the high-angle diffraction spots in crystalline APIs). For non-crystalline material, beam damage was monitored by watching for any sample shape change in the STEM image. In addition, possible beam damage was also monitored by observing any low-loss spectral change (1-5 eV EELS region) over time.
EELS data from several pristine sample areas were collected to check the reproducibility of our results. EDX data were also collected from all samples to confirm the elements present in the molecules (especially for peptides TH_15 and TH_27: C, N, O, S) and to check for the presence of impurities (Figure 1). The EDX spectrometer used for data acquisition is a Bruker Super-X, which uses four windowless detectors to reach a total collection angle of ~1 steradian (sr). The windowless design allows effective detection of the signal of light elements, while the high acceptance angle greatly increases the dose efficiency (detecting a higher fraction of the X-rays emitted by the sample), allowing EDX mapping even of organic materials.
The EELS data were treated with the Python package HyperSpy [28]. We applied, in order, a Savitzky-Golay filter and then a Richardson-Lucy deconvolution, not with the purpose of increasing the energy resolution but with that of reducing the impact of multiple scattering. We then removed the zero-loss peak by fitting it to a pseudo-Voigt shape, and subsequently subtracted from every point the spectral signature of the support film. From these pretreated data, we manually selected the thinnest areas in order to obtain the best quality spectra.
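A sketch of this pipeline in HyperSpy is given below. The file name, smoothing window, fit range and iteration count are placeholders, and `richardson_lucy_deconvolution` is assumed to be available on the EELS signal class of the installed HyperSpy version; the pseudo-Voigt fit is done manually with SciPy.

```python
import numpy as np
import hyperspy.api as hs
from scipy.optimize import curve_fit

s = hs.load("lowloss_map.hspy")                # hypothetical STEM-EELS map
s.set_signal_type("EELS")
s.smooth_savitzky_golay(polynomial_order=3, window_length=9)  # in-place smoothing

zlp = s.isig[-1.0:1.0].deepcopy()              # measured ZLP used as the PSF
s = s.richardson_lucy_deconvolution(zlp, iterations=10)  # multiple-scattering reduction

def pseudo_voigt(E, amp, cen, fwhm, eta):
    """Linear Gaussian/Lorentzian mix used to model the zero-loss peak."""
    sig = fwhm / 2.3548
    gauss = np.exp(-((E - cen) ** 2) / (2 * sig**2))
    lorentz = 1.0 / (1.0 + ((E - cen) / (fwhm / 2)) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)

E = s.axes_manager[-1].axis
spec = s.inav[0, 0].data                       # one probe position as an example
mask = np.abs(E) < 1.0                         # fit only around the ZLP
popt, _ = curve_fit(pseudo_voigt, E[mask], spec[mask],
                    p0=[spec.max(), 0.0, 0.3, 0.5])
clean = spec - pseudo_voigt(E, *popt)          # zero-loss-subtracted spectrum
# (the support-film signature would then be subtracted point by point)
```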
The samples used for the experiments were the following: beta-cyclodextrin, hexacarboxy cyclohexane, tannin, peptide TH_15, peptide TH_27, piroxicam forms 1 and 2, and aripiprazole forms 2 and 4 (Figure S1). All samples were crushed gently between two glass plates and the resulting powder was sprinkled on a continuous carbon TEM grid. The particles were relatively large (µm size) and most sample areas were thick; EELS observations were therefore performed only in thin sample areas.
Results
For the synthetic peptides TH_15 and TH_27, fluorine impurity traces were detected by EDX (Figure 1); such traces might have come from the synthesis procedure, more specifically from the use of trifluoroacetic acid (CF₃CO₂H) during the cleavage of the peptide from the solid support (resin) [29].
In order to establish a fairly conclusive screening and identification methodology to readily differentiate among various organic molecular crystals and their polymorphs, we first analyzed various "reference" organic crystals by EELS; in most cases several characteristic low-loss (<20 eV) peaks were identified that were entirely specific to each organic compound (Figure 2). From our results it appeared that the use of TEM-EELS (Figure 2) with a monochromated source (0.2 eV resolution) was necessary to reveal the fine spectral details in the low-loss EELS region of organic compounds; without it, many fine spectral details would not have been clearly distinguishable (for example in the EELS spectra of tannin). All studied compounds (with the exception of the beta-cyclodextrin sample) revealed a unique EELS signature in the low loss/optical region, arising from π-π* transition peaks intimately tied to the molecular structure. Based on these initial results, EELS fingerprinting (centered on the low loss/plasmon loss region) applied to organic compounds could potentially differentiate between them. These results further confirm earlier results obtained with low-resolution EELS spectra on organic compounds [24]. To evaluate whether EELS could distinguish between different polymorphic forms of the same compound, we acquired spectroscopic data from two different prepared forms of piroxicam and two different prepared forms of aripiprazole [30,31]. By choosing spectra from the thinnest parts of the analyzed samples (relative thickness of 3 mean free paths for both samples), we were able to extract spectra showing a clear difference between the two forms of piroxicam (Figure 3). Though this thickness is at the limit for low-loss EELS analysis (for qualitative mapping with edges <800 eV the thickness should be between 0.1-1.2 mfp, and for low-loss EELS analysis between 0.1-3.0 mfp), the relative thickness was the same for both samples, so the results in this case were trustworthy and reproducible [32]. The relative thickness was estimated with the Gatan DM software using the log-ratio (relative) method [33]. The difference tended to disappear when thicker regions were analyzed, suggesting that multiple scattering limits the sensitivity of EELS for differentiating polymorphic samples. Such a clear difference, however, could not be observed between the two polymorphic forms of aripiprazole, even when the EELS data were extracted from the thinnest part of the sample. During EELS fingerprinting it is therefore advisable to estimate the thickness, in terms of mean free paths, of the area from which the low-loss EELS data are extracted; otherwise thickness variations might affect the result. To gain deeper insight into the low-loss EELS differences between polymorphs of the same API, we performed theoretical simulations of the electronic structures of the triclinic and monoclinic polymorphs of piroxicam, for which there was a clear difference in the experimental EELS spectra (Figure 3).
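The log-ratio thickness estimate used above is simple enough to reproduce outside Gatan DM; a minimal sketch follows (the zero-loss integration window is an assumption):

```python
import numpy as np

def relative_thickness(E, spectrum, zlp_halfwidth=1.0):
    """Egerton's log-ratio method: t/lambda = ln(I_total / I_zlp)."""
    total = np.trapz(spectrum, E)            # whole recorded low-loss range
    zlp = np.abs(E) < zlp_halfwidth          # integrate the zero-loss peak only
    i0 = np.trapz(spectrum[zlp], E[zlp])
    return np.log(total / i0)

# Spectra with t/lambda up to ~3 were considered usable for low-loss work here.
```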
Theoretical calculations of the electronic structure of the two piroxicam forms were performed using the augmented plane wave plus local orbitals (APW + lo) method of DFT, as implemented in the WIEN2k code [34-37]. The exchange-correlation functional used was that of Perdew, Burke and Ernzerhof [38]. Crystallographic information for both phases was used as input. According to the literature, piroxicam form 1 corresponds to a monoclinic phase with space group P2₁/c and 36 non-equivalent atomic positions, while piroxicam form 2 corresponds to a triclinic phase (space group P1) with 78 non-equivalent atomic positions [30]. The self-consistent cycle was converged to 10⁻⁵ eV and the residual forces on the atoms were below 0.01 eV/Å, with 1000 k-points for both crystallographic phases. The complex dielectric function (CDF) and the ELF were computed using the OPTIC package of WIEN2k.
The energy bandgaps obtained for both phases were underestimated with respect to the experimental values, as is usual in DFT. The simulated data were therefore calibrated using a scissor operator [39]; the required shift was computed using the first peak of the experimental data (common to both forms), located at 3.825 eV.
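The scissor correction reduces to a rigid shift of the simulated energy axis; a sketch anchored on the 3.825 eV experimental reference peak mentioned above:

```python
def scissor_shift(E_sim, first_peak_sim, first_peak_exp=3.825):
    """Rigidly shift a simulated energy axis so that the first interband
    peak coincides with the experimental one (DFT gaps are underestimated,
    so the shift is normally positive); intensities are left untouched."""
    return E_sim + (first_peak_exp - first_peak_sim)
```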
In Figure 4, the real (dashed line) and imaginary (dotted line) parts of the dielectric functions of both forms are plotted. Form 1 presented a plasmon resonance at 21.56 eV, where Re(ε) = 0, and three main interband transitions (the first peaks of Im(ε)); form 2 presented the plasmonic transition at a slightly lower energy, 20.53 eV, and a set of three interband transitions could likewise be detected. Comparing the two forms, a clear difference in the relative intensity of the first transition was detected. From the CDF the ELF can be obtained directly. Figure 5 shows the ELF compared with the experimental low-loss spectra [40]. The difference in relative intensity in the imaginary part of the CDF caused form 2 to have broader peaks at low energy losses.
Looking in detail at the interband transitions (Figure 6), we concluded that the simulated data presented good agreement with the experimental EELS data. The three main signals appeared in both; however, peak 3 had too low an intensity to be detected in either form, peaks 1 and 2 had exactly the same energy position for both phases, and peak 4 presented a redshift in form 2. In view of these results, we concluded that, while it is still impossible to completely exclude the presence of residual thickness or damage effects in the experimental data, EELS can be used to detect the fingerprints of both polymorphs of piroxicam through the change in the relative intensity of the first signal and the redshift.
Discussion
Detection and identification of various polymorphs, organic structures and APIs can be done with current Raman spectroscopy instrumentation at micron resolution; the "plasmon loss" EELS mapping technique instead achieves 10-50 nm spatial resolution and is far more general, as it may work for individual APIs and enable the efficient detection and screening of new pharmaceutical polymorphs during API synthesis, even in very low trace quantities (<0.1 wt %). Assuming that each organic compound/polymorph generally exhibits a unique (and distinguishable) π-π* transition plasmon loss peak, it should be possible to create "plasmon loss" maps (filtered at a particular loss peak characteristic of each compound) that could potentially differentiate between various organic compounds/polymorphs. Such EELS maps are performed routinely for materials science applications (e.g., Si-Ge based layered semiconductors, core-shell nanoparticles, perovskites, etc.). Alongside the known principal polymorph/crystal phase of an API, the generation of such "plasmon loss" EELS maps may reveal the existence of other polymorphic phases that may be present at <0.01 wt % in an ASD/formulated product, where their further characterization could be done using electron diffraction tomography techniques [17,19] for crystalline materials or the electron pair distribution function for amorphous materials [23].
Many pharmaceutical API formulations (up to 80% in their crystalline form) are insoluble in water, and as a result their bioavailability is not adequate for appropriate medication; the same API compounds in amorphous form show much higher bioavailability/solubility rates (up to 10-1600 times) [41] than their crystalline counterparts. As amorphous API forms cannot be conserved over long timescales (e.g., years), which is mandatory for market acceptance [42], in industrial practice poorly soluble crystalline APIs are prepared as a solid dispersion of amorphous API in a specific polymer mix (called an ASD). In this way the API/polymer ASD can be maintained in an "amorphous state" over longer timescales (e.g., years). Although the chemistry of the API-polymer mix is not well understood, it is generally assumed that in an "ideal" ASD the API molecules are surrounded by polymer molecules without effective interaction between them. Moreover, in the case of an ASD where both the API and the polymer mix contain the same chemical elements (e.g., C, N, O), low-loss EELS mapping may enable detection of phase separation in the ASD as early as possible. It is also important to note that the use of a scanned beam as in STEM, and working in the "low loss" part of the EELS spectrum, reduces possible beam damage in beam-sensitive (organic/pharmaceutical) compounds. The dose efficiency and sensitivity of EELS are being further enhanced by direct detection cameras, which are starting to gain acceptance in the TEM community. A low-loss EELS spectrum is also useful to quantify the water content of various cellular components in cells [43-45].
The challenge to overcome for generalized use of the low-loss EELS technique is that, in many cases, the low-loss energy differences between various organics are below 0.9 eV (the conventional resolution of Gatan EELS spectrometers without a monochromated electron source). Therefore, the use of advanced TEM microscopes with EELS and a monochromated electron source (energy resolution < 0.2 eV, down to 0.01 eV in the best examples) seems mandatory to distinguish between the various compounds.
Inelastic scattering between a fast electron and the sample can occur even if the beam passes at a slight distance from the sample, a phenomenon known as inelastic delocalization. This phenomenon imposes a fundamental physical limit on the spatial resolution of the inelastic signal in EELS, even if the signal were recorded with an ideal instrument. Inelastic delocalization increases for higher beam energies and for lower energy losses [32]. It can be estimated by the root mean square impact parameter d_rms proposed by Pennycook [46]; it is ~9 nm for a 3 eV loss with a 300 kV beam and ~4 nm for a 7 eV loss with a 100 kV beam. In most cases drug crystal nucleation or phase separation in an ASD happens at the 10-100 nm scale, so inelastic delocalization in EELS will not hinder the observation of phase separation or drug nucleation in ASDs [9,47,48].
While Cherenkov radiation has been a limiting factor in some EELS applications (e.g., bandgap measurements in semiconductors), it was not a limiting factor in the low-loss EELS characterization of organic compounds [49]. The refractive indices of the studied compounds were not particularly high (e.g., ~1.6 for β-cyclodextrin, ~1.4 for hexacarboxy cyclohexane, ~1.9 for tannin, ~1.7 for piroxicam, ~1.6 for aripiprazole), meaning that, although the electron velocity could be above the Cherenkov threshold depending on the beam energy, the emission would not be strong enough to significantly distort the fingerprints of the different organic compounds and would at most give a small contribution to the featureless background.
Conclusions
The present work further confirms TEM-EELS as a potentially powerful complementary analytical tool to identify and screen, with high spatial resolution, organic small molecules (APIs, peptides, polymorphic forms of APIs) that otherwise cannot be distinguished by other analytical techniques at that spatial resolution. Further work should be performed to increase the sensitivity of the method so that all possible polymorphic forms of an organic compound can be distinguished. It is also important to bear in mind that, in order to obtain reliable EELS spectra for phase identification, the sample thickness should be small enough (approx. <100 nm) to reduce multiple scattering effects. To avoid such effects, appropriate sample preparation protocols have to be developed, e.g., with ultramicrotomy or cryo-FIB.
Self-stratification studies in waterborne epoxy-silicone systems
Waterborne epoxy-silicone coating formulations are prepared by combining selected ingredients in optimum quantities to produce ~250-275 μm (dry film thickness, DFT) films applied on polyester, sandblasted steel, smooth steel, acrylic, polypropylene and aluminum substrates. The self-stratification of the applied coatings is then evaluated using Fourier Transform Infrared spectroscopy - Attenuated Total Reflectance (FTIR-ATR) and Scanning Electron Microscopy (SEM) with Energy Dispersive X-Ray (EDX) analysis. The difference in the absorption spectra of the top and bottom surfaces of these films, the significant difference in topography seen in the SEM images, and elemental mapping through EDX analysis of the cross-section confirm the occurrence of stratification on the polyester substrate. The observed stratification results are then compared with a theoretical model in which the primary mechanism driving the separation is assumed to be the surface free energy difference of the resins and their respective wetting of the substrate. The influence on this model's predictions of using two different theories for the interfacial surface tension and solid surface free energy computations, Wu's Harmonic Mean Method and the Owens-Wendt Method, is also tested. The theoretical predictions support the observed results for most cases of the formulated waterborne systems; only in the case of sandblasted steel do the observations not match the predictions. The possible reasons for the difference between prediction and observation in this specific case are also elucidated.
Introduction
The multilayer application of many coating systems could be replaced if self-stratifying coatings could be commercially designed and formulated. Fewer materials would be used, and good interlayer adhesion is likely to be achieved, since the concentration gradient between the two layers removes the need for the intermediate coat/tie coat commonly used in multilayer coating systems. Moreover, the application of different layers of a multilayer system requires considerable processing time, due to the individual curing of the layers and the required overcoat intervals. A diagrammatic representation of the replacement with a self-stratified coating is shown in Fig. 1.
Many attempts to formulate self-stratified coating systems have been reported in the literature [1-6]. The most common formulation routes to a spontaneous separation into layers are driven by the evaporation of the solvent or aqueous medium, by the selective crosslinking reaction of the polymers with the hardener, or by both, in systems comprising immiscible polymers [7,8]. Moreover, phase separation in thin films and coating formulations after application is also known to be substrate-driven [9-11]. Self-stratifying coatings are therefore in fact a subset of 'evaporation-induced' [8,12], 'substrate-induced' [13-15] or 'reaction-induced' [2] self-assembling or phase-separating [16] systems.
From previously studied systems (summarized in Table 1), it is known that the difference between the rate of the crosslinking reaction and the rate of solvent evaporation has led to promising self-stratified systems [8,17]. These competing phenomena are responsible for the evolution of stratification and the subsequent film formation, as depicted in Fig. 2a. As the hardener selectively crosslinks with Resin -2 (shown in the figure), the density and chain length of Resin -2 rapidly increase and the crosslinked resin solidifies out of the system. As the solution becomes poorer in Resin -2 it is enriched in Resin -1, spontaneously raising Resin -1 towards the surface until this resin physically dries to form the second (top) film. This occurs because the rate of change in solubility of Resin -1 as the solvent evaporates is lower than the rate of the crosslinking reaction between Resin -2 and the hardener.
An alternative scenario is one in which both resins in the system are thermosetting. In this case, the addition of two different hardeners to the one-pot system could also lead to self-stratification, due to a difference in the curing rates of the two resins with their individual hardeners. A diagram illustrating this is shown in Fig. 2b.
The process of evolution of a phase-separating system followed by selective chemical curing described above could prove to be advantageous from the sustainability point of view for the coatings industry.
Besides the components of the formulated coating system, it is known that both the substrate material and the substrate surface profile influence the surface properties of the substrate. The choice of substrate therefore also plays an important role in the stratification after the coating has been applied, as its surface free energy influences the selective wetting by the resins in the system.
In this work, specifically, two waterborne epoxy-silicone systems applied to six different substrates of varying surface free energies are studied for the possibility of self-stratification, which could potentially serve as a replacement for the three-layer fouling release coating, consisting of an epoxy primer, a tie coat and a silicone topcoat, typically used by the marine industry.
Theory
Several models, both thermodynamic and transport phenomena-based, have been reported in the literature to predict self-stratification [17-19]. The most commonly used ones for a system of two polymers or resins in a solvent mixture are the UNIFAC model, a model based on surface tension relationships, and a model based on the overlap of the Hansen solubility spheres of the polymers [17]. However, all three of these thermodynamic models should be used with care, as they do not correlate well with experimental results in every case. For instance, the UNIFAC model can predict the phase separation point only for certain compositions of epoxy-acrylic mixtures, while the 'surface energy model' predicts the stratification on polycarbonate but not on Teflon. Apart from these models, the construction of 'phase state diagrams' has also been proposed for such mixtures.
Moreover, for systems consisting of colloidal particles in addition to the resins and solvents, models based on a 'diffusion gradient' [20] and on the 'diffusiophoresis of colloids' [21,22] have been proposed. The parameters proposed in the diffusiophoresis-based model that considers jamming effects [22] have been validated in the work of Schulz et al. [23], wherein stratification is achieved in a system of two colloidal particles, consisting of resins with a particle size distribution in the μm range, dispersed in water. More recently, a 'dynamical density functional theory' [24] has also been reported for predicting self-stratification in such systems. In the work of Zhou et al. [25], Sear [26] and Sear and Warren [21], the conditions and parameters at which a stratified structure can be obtained in colloidal mixtures have been derived.

Fig. 1. Replacement of a multi-layered coating system with a self-stratified coating system.

Table 1. Overview of previously studied self-stratified coating systems.

Fig. 2. Idealized view of the evolution of a self-stratifying system after solvent evaporation and chemical curing on addition of a) one hardener, b) two hardeners.

Fig. 3. Difference in substrate-resin interfacial surface tensions leading to a difference in wettability of the substrate. a) High value of |γS1 - γS2|; b) Low value of |γS1 - γS2|.
In this work, systems consisting of resins dispersed in aqueous media as solid particles, or available as emulsions, have been considered for the evaluation of self-stratification when applied on substrates as films of 500 μm wet film thickness. The formulation and curing were carried out at room temperature and ambient humidity. Under these conditions, and at low solution viscosities, the specific gravity difference between any two polymer resins is very low (~0.05-0.07) [27]. Moreover, considering the thickness of the films, the surface free energy difference between these resins, which acts over short range, is quite significant (~15-20 mN/m). The selective wetting of the substrate by the individual polymer resins can be described by the surface free energies of the substrate and polymer resins as well as by the interfacial surface tensions [17]. When two phases separate, a new interface along their boundary is created, which requires work of adhesion for the formation of a new interface with air. According to Dupré, this work of adhesion is related to the magnitudes of the surface energies of the two polymers, the substrate and the interactions across their interface [28]. Therefore, among the models discussed above, we focus in this work only on the ability of the model based on surface energy relationships to predict self-stratification. This model comprises three thermodynamic expressions reported in the work of Carr and Wallström [14], who showed that it can predict which resin combinations will give rise to self-stratifying systems, given the surface energy/concentration relationship of the pure resins in solution and assuming the systems have phase separated. The interfacial surface tension between the two polymers, whose surface tensions with respect to air are γ1 and γ2, is denoted by γ12, while the interfacial surface tensions of the substrate with respect to each of the polymers are denoted by γS1 and γS2. A comprehensible explanation of each equation of the model is given below.

Wettability of substrate by polymer: the interfacial surface tension of the polymer migrating towards the substrate with respect to the substrate, γS2, should be significantly less than the difference between the interfacial surface tension of the polymer coming out on top with respect to the substrate, γS1, and the interfacial surface tension between the two polymers, γ12. This means that the polymer requiring less energy (less work of adhesion) to form the substrate-polymer interface migrates to the bottom and wets the substrate better than the other polymer. This is represented by the expression:

γS1 - γS2 - γ12 > 0 (1)

There are two distinct cases within Eq. (1). When |γS1 - γS2| is a significantly large value, one resin wets the substrate more than the other, as illustrated in Fig. 3a; whereas, if |γS1 - γS2| is close to 0, both resins wet the substrate equally, as illustrated in Fig. 3b.
Layer sequence: the total interfacial surface tension of the more feasible layer sequence in the system should be lower than that of the opposite sequence. For the sequence with polymer 1 at the air interface and polymer 2 at the substrate (sequence 1), this condition reads

(γ2 + γS1) - (γ1 + γS2) > 0 (3)

since the common term γ12 appears in the totals of both sequences and cancels. The two possible layer sequences, 1 and 2, are shown in Fig. 4a and b respectively.
The interfacial surface tensions γ12, γS1 and γS2 can be calculated using the Harmonic Mean Method given by Wu [29], shown in Eq. (6):

γ12 = γ1 + γ2 - 4[γ1d·γ2d/(γ1d + γ2d) + γ1p·γ2p/(γ1p + γ2p)] (6)

where d denotes the dispersive component and p denotes the polar component of the individual surface tensions γ1 and γ2 of components 1 and 2. Alternatively, the interfacial surface tension can be calculated more precisely by the Owens-Wendt (OW) Method, shown in Eq. (7) [30]:

γ12 = γ1 + γ2 - 2·(γ1d·γ2d)^(1/2) - 2·(γ1p·γ2p)^(1/2) (7)

However, besides the surface free energies of the polymers and the substrate, several other parameters, such as the curing temperature, the rate of solvent evaporation, the rate of crosslinking and differences in molecular weight and glass transition temperature, are also known to influence the self-stratification of coatings comprising immiscible resins in a carrier medium [17,31].
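A sketch of Eqs. (1), (6) and (7) in code form is given below; the dispersive/polar components (in mN/m) are the inputs, which in practice would come from the contact angle measurements described later.

```python
import numpy as np

def wu_harmonic(g1d, g1p, g2d, g2p):
    """Eq. (6): Wu's Harmonic Mean interfacial tension (inputs in mN/m)."""
    return (g1d + g1p) + (g2d + g2p) - 4.0 * (g1d * g2d / (g1d + g2d)
                                              + g1p * g2p / (g1p + g2p))

def owens_wendt(g1d, g1p, g2d, g2p):
    """Eq. (7): Owens-Wendt (geometric mean) interfacial tension."""
    return ((g1d + g1p) + (g2d + g2p)
            - 2.0 * np.sqrt(g1d * g2d) - 2.0 * np.sqrt(g1p * g2p))

def wettability_criterion(gS1, gS2, g12):
    """Eq. (1): positive when resin 2 preferentially wets the substrate."""
    return gS1 - gS2 - g12
```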
Materials
Two waterborne formulations are prepared. Both consist of an aqueous-based epoxy resin (Resin -1) and an aqueous amine adduct (Hardener -1). The second resin in each formulation is an aqueous-based silicone resin, but the weight percentage of silicone solids differs between the two. The first formulation contains a silicone resin with 60 wt% solids (Resin -2) and is named Formulation A, while the second contains a silicone resin with 54 wt% solids (Resin -3) and is named Formulation B. Resin -2 has a comparatively low viscosity of ~100 mPa·s, Resin -3 a value of ~1000 mPa·s, and Resin -1 the highest viscosity of ~3500 mPa·s.
The ingredients of the formulations A and B and their respective quantities are presented in Table 2.
Further, the particle size distribution of the resins is measured using a Mastersizer. The median particle size by volume of the epoxy resin (Resin -1) is 1.01 μm, while that of the silicone resin (Resin -2) is 2.93 μm, as shown in Fig. 5a) and b) respectively.
Formulated coating and sample preparation
Formulations A and B are prepared according to the compositions indicated in Table 2. The quantity of hardener added to the system is chosen based on the epoxide-amine chemistry, such that 90-95 % of the epoxy resin is able to crosslink with the amine hardener. The formulation is then left to stand for 15 min in the fume hood, at a room temperature of 20 ± 2 °C and an RH of 82-85 %, to ensure that entrained air bubbles are released from the system. The waterborne formulations are then applied using a film applicator on polyester, sandblasted steel, smooth steel, aluminum, polypropylene and acrylic at a wet film thickness of 500 μm. The substrates chosen for the study cover a wide range of surface free energies, with values above, below or in between those of the binary pair of resins in a formulation, allowing testing and validation of the theoretical model described in Section 2. The technical details and suppliers of these substrates are given in Table 3. After application, the coatings are left to cure for 24 h under the fume hood at room temperature (20 ± 2 °C) and ambient humidity (82-85 % RH).
Characterization of self-stratification of the coating
An Attenuated Total Reflection Fourier Transform Infrared (ATR-FTIR) spectroscopic analysis of the top and bottom surfaces of formulations A and B coated on the polyester substrate is performed. On the other substrates the adhesion of the coating is too strong, so the bottom surface of the coating cannot be analyzed by this technique. The analysis is performed on a ThermoFisher Scientific infrared spectrometer with an ATR diamond unit. Spectra of the surfaces are recorded in the range of 4000-500 cm⁻¹. One background spectrum is collected just before starting the measurements.
The formulations for which a difference in the fingerprints of the absorption spectra of the top and bottom surfaces of the free film is observed are further tested for the extent of self-stratification on the other substrates. This extent of self-stratification is tested using SEM-EDX analysis of cross-sections of these formulations applied on all six substrates. A ThermoFisher Scientific Scanning Electron Microscope with an Everhart-Thornley detector (ETD) is used for this purpose, with the voltage maintained at 20 kV. The cross-section of a coating applied on a metal substrate is obtained by cutting the coated substrate with a metal cutter, while the cross-section of a free film of the coating applied on polyester/polypropylene is obtained by fracturing in liquid nitrogen.
Besides the characterization of the coated substrates, contact angle measurements are performed in order to determine the surface free energies of the different substrates and the individual resins. These measurements are carried out using a Krüss Scientific Drop Shape Analyser DSA30E. The contact angles are measured using two liquids and evaluated through Young's equation:

γs = γsl + γl·cos θ

where γs and γl are the surface tensions of the solid and liquid respectively, γsl is the interfacial surface tension between solid and liquid, and θ is the measured contact angle. The values corresponding to each substrate are presented in Table 3. Further, to study the curing kinetics of the reaction between the epoxy resin and the hardener, a ThermoFisher Differential Scanning Calorimeter is used; the computer and software set the heating and cooling rates for temperature profiling of the material. Heating from −40 °C to 150 °C is carried out at 10 °C/min. The sample is then cooled from 150 °C to −40 °C at 20 °C/min and heated again at the same rate in order to check whether the curing reaction has reached completion. 11.5 mg of sample is used for this evaluation.
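The Owens-Wendt evaluation of the measured angles can be sketched as follows. The two probe liquids are not named in the paper, so the water/diiodomethane parameters used here are common literature values given purely as an assumption:

```python
import numpy as np

# Assumed probe liquids (total, dispersive, polar surface tension in mN/m);
# the paper does not name its two liquids, so these are placeholder values.
LIQUIDS = {"water": (72.8, 21.8, 51.0),
           "diiodomethane": (50.8, 50.8, 0.0)}

def owrk_surface_energy(theta_deg):
    """Solve the linearized Owens-Wendt relation
    g_l (1 + cos theta) / 2 = sqrt(gs_d * gl_d) + sqrt(gs_p * gl_p)
    for the solid's dispersive and polar components from two contact angles."""
    A, b = [], []
    for (gl, gld, glp), theta in zip(LIQUIDS.values(), theta_deg):
        A.append([np.sqrt(gld), np.sqrt(glp)])
        b.append(gl * (1.0 + np.cos(np.radians(theta))) / 2.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    gs_d, gs_p = x**2                    # x = [sqrt(gs_d), sqrt(gs_p)]
    return gs_d, gs_p, gs_d + gs_p

# Example: owrk_surface_energy([70.0, 40.0]) -> (dispersive, polar, total)
```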
The measured contact angles used for the estimation of the surface energies of the substrates are presented in the Supplementary material and summarized in Appendix B.
Results and discussion
The FTIR-ATR analysis of Formulation B applied on the polyester substrate showed the absorption peaks of polydimethylsiloxane (PDMS) on the top and of epoxy on the bottom (Fig. 6). The resins applied to the polyester substrate and individually dried were also characterized by FTIR-ATR, as shown in Fig. 7. The similarity between the absorption peaks obtained for the top and bottom surfaces of the formulated coating and those of the individually dried resins confirms the self-stratification of Formulation B. This difference between the absorption peaks of the top and bottom surfaces was not observed for Formulation A, in which the solids weight percentage of the silicone resin is 60 %. In this formulation the silicone resin has a very low viscosity (~100 mPa·s) compared to that of the epoxy resin (~3500 mPa·s), and hence the miscibility of the two resins is likely to be good; this could be one reason why no self-stratification is observed in Formulation A.
Therefore, the SEM-EDX analysis, specifically the mapping of the Si content across the cross-section of the applied coating, was performed only for Formulation B. The SEM analysis showed a different, more porous topography on the top, while a denser structure was seen at the bottom, as shown in Fig. 8a. A continuous concentration gradient, with a very low Si concentration close to the base and a high Si concentration at the top (Fig. 8b), was seen only when polyester was used as the substrate. For all other substrates the Si distribution across the cross-section was uniform, indicating that no stratification had occurred. The film thickness of the dried stratified coating is measured to be approximately 275 μm.
In order to check whether these observed results can be predicted by the theoretical model described in Section 2 (Eqs. (1), (3) and (5)), the surface free energies of the resins are estimated: γ1 = 27.16 mN/m and γ2 = 53.0 mN/m for the silicone and the aqueous amine adduct cured epoxy resin respectively, when the contact angle measurements and the OW method are used. Using the contact angle measurements together with the Wu method, they are instead estimated to be γ1 = 32.1 mN/m and γ2 = 58.84 mN/m. A comparison of the predictions of the surface energy based theoretical model using each set of these estimates, from the OW and Wu methods, with the observed results is summarized in Tables 4 and 5 respectively.
According to the theoretical prediction shown in Table 4, stratification is expected when polyester (surface free energy 61.07 mN/m) or sandblasted steel (surface free energy 57.42 mN/m) is chosen as the substrate, as all three expressions of the surface energy model are greater than zero. For the other substrates, one or two of these expressions are less than zero, so stratification is not expected. Here the interfacial surface tensions are calculated using Wu's Harmonic Mean Method (Eq. (6)); the calculations from the OW theory (Eq. (7)) are shown in Table 5, and both methods yield the same conclusions. The fact that self-stratification is not observed on sandblasted steel can be explained by the comparable wettability of that substrate by both the silicone and the epoxy resin, as seen from the low value of 0.59 mN/m for the first expression (Eq. (1)) of the model for sandblasted steel in Table 5. However, if the surface roughness of the sandblasted steel (mean roughness depth RZ60) were modified by surface preparation, the substrate surface tension γS of the sandblasted steel could be altered. According to the Cassie-Baxter theory [32], this could influence the difference in wettability of the resins and possibly increase the chances of self-stratification on sandblasted steel as well.
Predictions from the surface energy model were also made for Formulation A, using both Wu's harmonic mean method and the OW method. We observe that no stratification is expected for any of the substrates. This is due to the large negative value of Eq. (3), showing that the alternate layer sequence is more feasible because of the higher interfacial surface tension γs2 between the epoxy resin and the substrate. The results are summarized in Appendices A.1 and A.2 in Appendix A.
Besides the influence of the substrate on self-stratification, the rate of the crosslinking reaction between the hardener and the epoxy resin would also influence self-stratification. This has been studied by Lemesle et al. [2] for epoxy-silicone solvent-borne systems, wherein two bio-based amine hardeners were compared.
Differential scanning calorimetry of the curing reaction of the epoxy resin and the hardener showed an exothermic peak at T = 105 °C (Fig. 9). This implies that a faster reaction rate at an elevated temperature, compared to room-temperature conditions, could further support self-stratification in the system; a full analysis together with the drying kinetics would help determine the optimum curing conditions. However, no comparative analysis between different hardeners was performed in this work, owing to the unavailability of waterborne amine-based hardeners other than Hardener 1 in our lab.
Overall, in this work on waterborne epoxy-silicone formulations, self-stratification is predicted and observed only for Formulation B. The choice of substrate plays a significant role in self-stratification:
• Stratification is observed only when polyester is used as the substrate.
• The model based on surface energy predicts the possibility of self-stratification for both sandblasted steel and polyester as substrates.
• The absence of self-stratification on sandblasted steel can be attributed to the similar wettability of both resins on this substrate, as seen from the lower value of the expression γs1 − γs2 − γ12 for sandblasted steel (0.59 mN/m) compared to polyester (4.39 mN/m).
Conclusion
The role of the surface free energies of the resins and the substrate in self-stratification in waterborne epoxy-silicone systems has been studied. Two waterborne formulations applied to six different substrates were analyzed for self-stratification using FTIR-ATR of the front and back surfaces and SEM-EDX analysis of the cross-section.
Of the two formulations tested, only the formulation consisting of a silicone resin of 54 wt% solids and surface free energy 27.16 mN/m, together with an amine-cured epoxy resin of surface free energy 53.0 mN/m, applied to polyester, showed self-stratification.
The experimental results are compared with a surface energy model wherein the interfacial surface tensions and solid surface energies are calculated using two interfacial theories: Wu's harmonic mean and Owens-Wendt. The model predictions are unaffected by switching between these two theories.
The model predictions match the observed results in almost all cases for both formulated systems. Only for the same formulation as above applied to sandblasted steel is no stratification observed, even though it is predicted by the model. This can be explained by the similar wettability of both resins in this formulation on sandblasted steel. Another reason for the discrepancy in this case could be that the model considers only the surface free energy values of the cured resins. In reality, the surface free energies of the individual resin solutions change as the solvent evaporates and the crosslinking reaction proceeds.
This model, together with values quantifying the rate of the crosslinking reaction and the epoxy equivalent weights of the resins used in the formulations, can serve as input for the computer-aided design of self-stratifying coatings. It can further be coupled with knowledge of differences in the physicochemical properties of the immiscible resins, such as viscosity, as well as other kinetic phenomena such as solvent evaporation, sedimentation, and particle diffusion, to predict self-stratification using model-based tools in both non-particulate and colloidal coating systems.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
No data was used for the research described in the article.
"Materials Science"
] |
Structure-dependent growth control in nanowire synthesis via on-film formation of nanowires
On-film formation of nanowires, termed OFF-ON, is a novel synthetic approach that produces high-quality, single-crystalline nanowires of interest. This versatile method utilizes stress-induced atomic mass flow along grain boundaries in the polycrystalline film to form nanowires. Consequently, controlling the magnitude of the stress induced in the films and the microstructure of the films is important in OFF-ON. In this study, we investigated various experimental growth parameters such as deposition rate, deposition area, and substrate structure which modulate the microstructure and the magnitude of stress in the films, and thus significantly affect the nanowire density. We found that Bi nanowire growth is favored in thermodynamically unstable films that facilitate atomic mass flow during annealing. A large film area and a large thermal expansion coefficient mismatch between the film and the substrate were found to be critical for inducing large compressive stress in a film, which promotes Bi nanowire growth. The OFF-ON method can be routinely used to grow nanowires from a variety of materials by tuning the material-dependent growth parameters.
Introduction
Recently, we reported a new nanowire growth method, termed on-film formation of nanowires (OFF-ON), that combines the advantages of simple thin film deposition and whisker formation to achieve highly crystalline nanowires [1]. OFF-ON is a template- and catalyst-free synthetic approach that utilizes thermally induced compressive stress within a polycrystalline thin film to obtain nanowires as small as tens of nanometers in diameter. Because of its direct growth capability via atomic mass flow and compatibility with multi-component materials, OFF-ON can be used to grow, sequentially or in parallel, single-element [1] and compound nanowires [2]. Importantly, there is no need to use catalysts, thus avoiding cross-contamination that degrades the properties of the resultant nanowires. These capabilities make OFF-ON a unique and highly desirable tool for growing defect-free, high-quality and single-crystalline nanowires composed of a material of interest.
The first demonstration of OFF-ON was carried out with bismuth (Bi) nanowires [1]. Unlike those grown by other methods [3-10], typical Bi nanowires grown by OFF-ON are as long as hundreds of micrometers with exceptional uniformity in diameter and can be used as unique building blocks linking integrated structures over large length scales. The advantage of using OFF-ON to grow Bi nanowires has been demonstrated by oscillatory and non-oscillatory magnetoresistance measurements, which show that nanowires grown via OFF-ON are high-quality single crystals [11,12]. Subsequently, OFF-ON has been extended to grow a wide variety of materials and structures, including Bi2Te3 [2], Bi-Te core/shell [Kang J, Roh JW, Ham J, Noh J, Lee W: Reduction of thermal conductivity in single Bi-Te core/shell nanowires with rough interface, submitted], Bi-Te superlattice [Kang J, Ham J, Noh J, Lee W: One-dimensional structure transformation by on-film formation of nanowires: Bi-Te core/shell nanowires to Bi/Bi14Te6 multi-block heterostructure, submitted], nanoparticle-embedded [Ham J, Roh J, Shim W, Noh J, Lee W: Nanostructured thermoelectric materials: Al2O3 nanoparticle-embedded Bi nanowires for ultra-low thermal conductivity, submitted], and self-assembled Bi nanowires [13]. OFF-ON is a promising nanowire growth platform; however, the factors that ultimately control many important growth parameters for increasing nanowire density have not been investigated. Herein, we report the effect of various parameters on Bi nanowire growth, namely the microstructure and size of the as-deposited Bi films and the substrate structures on which they are deposited. Clarification of these effects provides optimized conditions for achieving high nanowire densities for specific applications.
Experimental details
Bi nanowires were fabricated by the OFF-ON method simply by annealing a Bi film at relevant temperatures, without conventional templates, catalysts, or starting materials (Figure 1a). Details of the preparation of the substrates, deposition of the thin films, and annealing procedure are presented in [1]. In this study, the effect of several major parameters on Bi nanowire growth was examined. First, the effect of the Bi film microstructure, which can be modulated by the film deposition rate, on the growth of nanowires was investigated. For this purpose, Bi thin films were deposited onto thermally oxidized Si (100) substrates at deposition rates of 2.7 Å/s (RF power: 10 W) and 32.7 Å/s (100 W), using UHV radio frequency (RF) sputtering. Second, the effect of the Bi film area, where the Bi nanowires are grown, on nanowire density was addressed. To study this, Bi films of various areas were fabricated using photolithography and lift-off. Four different Bi film areas were tested: (10^4 μm)^2, (10^3 μm)^2, (10^2 μm)^2, and (10 μm)^2. Third, we examined the effect of the magnitude of the compressive stress in the Bi film, which is modulated by the thermal expansion of the substrate, on Bi nanowire density. For this study, two different substrates were used: a thermally oxidized Si substrate and a Si substrate without SiO2 on top.
Bi nanowires and Bi thin films were characterized by high-resolution X-ray diffraction (Rigaku D/MAX-RINT XRD), atomic force microscopy (DI 3100 AFM with a Nanoscope IVa controller), scanning electron microscopy (FE-SEM, JEOL 6701F), and optical microscopy (Olympus OM). The topology of Bi thin films deposited at rates of 2.7 and 32.7 Å/s was examined by contact-mode AFM after heat treatment. To calculate the Bi nanowire density, each Bi thin film was divided into 16 parts; the number of nanowires on two randomly selected parts was then counted using OM, and the average nanowire density was calculated.

Results and discussion

Figure 1b,c shows the diffraction patterns of the Bi films deposited at 2.7 Å/s (RF power: 10 W) and 32.7 Å/s (RF power: 100 W), respectively, before and after thermal annealing. For both deposition rates, identical 50-nm-thick Bi films were obtained by controlling the deposition time. From Figure 1b,c it is evident that the Bi film grown at 100 W has preferential orientations of (003), (006), and (009) after heat treatment, while the film deposited at 10 W has additional orientations of (012) and (104). Interestingly, Bi nanowires grew from Bi films deposited at 100 W at far higher densities than from Bi films deposited at 10 W (see Figure 2). This implies that the preferential (00ℓ) orientation of a Bi film facilitates Bi nanowire growth. At a fixed growth temperature, the impinging flux of Bi atoms onto the surface of a substrate is expected to be higher for the higher RF power of 100 W, leading to a shorter time interval between encounters of adatoms and, in turn, creating a local excess of adatoms, called supersaturation [14]. This prevents adatoms from settling into equilibrium positions, giving the Bi film a non-equilibrium microstructure and a non-uniform surface. In such a Bi film, Bi atoms are more likely to occupy unstable positions and are susceptible to migration upon thermal activation. This is why the grain orientations of the Bi film deposited at 100 W are redirected to (00ℓ) through thermal annealing, as shown in Figure 1c.
The inference above is more directly observed in the AFM images. Figure 2a,b shows AFM images of annealed Bi thin films grown at rates of 2.7 Å/s (10 W) and 32.7 Å/s (100 W). The film grown at 100 W is rougher and shows a greater number of protrusions on the surface than the film deposited at 10 W. Figure 2c,d shows SEM images of Bi nanowires grown on annealed Bi thin films that were initially deposited at rates of 2.7 and 32.7 Å/s, respectively. In contrast with the film grown at 2.7 Å/s, where few nanowires are observed, many long Bi nanowires are found on the Bi film deposited at 32.7 Å/s. Figure 2e shows that the ratio of the Bi nanowire densities for the two cases reaches approximately 800. Based on a localized model [15], the surface oxide layer may strongly affect nanowire growth, because a nanowire can grow only if it can break the naturally formed oxide layer at the cost of stored compressive stress. The surface oxide layer is less likely to form on sharp protrusions. Therefore, we assume that a higher density of Bi nanowires can be achieved on films grown at a higher deposition rate partly because such films have a higher density of protruding regions that can easily break the surface oxide layer at a given compressive stress. Moreover, a high deposition rate tends to induce a fine grain structure because of the limited surface migration of adatoms, as mentioned above, and Bi atomic diffusion during thermal annealing is expected to favor nanowire growth through the enlarged grain-boundary network. These results indicate that the surface morphology and grain structure of a Bi film, along with the preferential orientations noted in Figure 1, are critical factors in determining how easily Bi nanowires can grow on it. Consequently, the deposition rate of a Bi film is an important parameter that controls all of these factors; a high deposition rate promotes Bi nanowire growth.

Compressive stress stored in Bi films is thought to be the driving force for spontaneous Bi nanowire growth by the OFF-ON method. In order to test this hypothesis and to study the effect of another parameter on Bi nanowire growth, we investigated the effect of Bi film area. For this, we fabricated Bi thin-film patterns with four different areas: (10^4 μm)^2, (10^3 μm)^2, (10^2 μm)^2, and (10 μm)^2. Figure 3a,b,c,d shows SEM images of Bi nanowires grown on the different Bi film areas (A), where the Bi films were deposited on SiO2/Si substrates at a rate of 32.7 Å/s. If the compressive-stress hypothesis is correct, then a larger Bi film area should result in a higher density of Bi nanowires, because compressive stress is generally less relieved at the center of a film and more relieved at its edges. Indeed, we found that the density of Bi nanowires at the edge is higher by a factor of 1.3 than that at the center, and that the total density increased with Bi film area after annealing at 270°C for 10 h (see Figure 3e). This indirectly shows that compressive stress is a driving force for Bi nanowire growth by the OFF-ON method, and that preventing stress relief is another key factor for promoting nanowire growth. In this sense, Bi film area is another parameter that determines the Bi nanowire density. The magnitude of the stress and its correlation with the nanowire density are discussed in detail elsewhere [16].
In addition, the above result proves that Bi nanowire growth is not driven by the thermal evaporation of Bi atoms during annealing; if this were the case, then Bi nanowire density should be independent of Bi film area.
Finally, the effect of the substrate layer structure on Bi nanowire density was investigated to elucidate the role of the thermal expansion mismatch between the substrate and the film. For this study, two different film stack structures, Bi/SiO2/Si and Bi/Si, with different thermal expansion mismatches, were exploited; the Bi films were deposited at an identical rate of 32.7 Å/s for both stacks. Figure 4a schematically shows Bi nanowires grown on the Bi/SiO2/Si and Bi/Si stacks, illustrating that the nanowire density on the Bi/SiO2/Si stack is much larger than on the Bi/Si stack. In fact, the Bi nanowire density on the Bi/SiO2/Si stack was measured to be 5400 cm^-2, much higher than that on the Bi/Si stack (240 cm^-2), as shown in Figure 4b. The thermal expansion mismatch that causes compressive stress in a film results from the large difference in the thermal expansion coefficients of Bi (13.4 × 10^-6/°C) and SiO2 (0.5 × 10^-6/°C) or Si (2.4 × 10^-6/°C). It is inferred that the roughly 20-times-larger Bi nanowire density on the Bi/SiO2/Si stack results from its larger mismatch of thermal expansion coefficients between the substrate and the Bi film compared to the Bi/Si stack (note the difference in the thermal expansion coefficients of Si and SiO2). Therefore, choosing a substrate structure that maximizes the thermal expansion mismatch with the film is crucial for optimizing nanowire growth. This principle may be universally applicable to OFF-ON nanowire growth in any material system.
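A back-of-the-envelope estimate of the stress difference is consistent with this picture. The sketch below uses the standard biaxial thin-film relation sigma = E_f (alpha_f − alpha_s) ΔT / (1 − nu_f) with the expansion coefficients quoted above; the elastic constants of Bi and the temperature excursion are assumed illustrative values, not measurements from this work.

```python
# Rough estimate of the thermally induced biaxial stress in the Bi film,
# using the standard thin-film relation
#   sigma = E_f / (1 - nu_f) * (alpha_film - alpha_substrate) * dT.
# Expansion coefficients are from the text; E_f, nu_f and dT are ASSUMED
# illustrative values, not measurements reported in this work.

E_BI = 32e9        # Young's modulus of Bi [Pa] (assumed)
NU_BI = 0.33       # Poisson's ratio of Bi (assumed)
ALPHA = {"Bi": 13.4e-6, "SiO2": 0.5e-6, "Si": 2.4e-6}   # [1/degC]
DT = 270.0 - 25.0  # heating from room temperature to the 270 degC anneal

for substrate in ("SiO2", "Si"):
    sigma = E_BI / (1.0 - NU_BI) * (ALPHA["Bi"] - ALPHA[substrate]) * DT
    print(f"Bi on {substrate}: ~{sigma / 1e6:.0f} MPa (compressive)")
```

The larger mismatch for the SiO2-terminated stack yields a proportionally larger stored stress, in line with the 20-fold difference in nanowire density reported above.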
Conclusions
We have investigated the effect of major growth parameters on Bi nanowire growth by the OFF-ON method. A rough Bi film surface and a fine Bi grain structure, induced by a high deposition rate, were found to facilitate Bi nanowire growth. The Bi nanowire density increases with Bi film area and with the difference in thermal expansion coefficients between the substrate and the Bi film, confirming that compressive stress acts as the driving force for Bi nanowire growth by the OFF-ON method. These results indicate that the major parameters must be set properly to achieve the highest density of Bi nanowires using OFF-ON. The OFF-ON method can be used equally well to grow nanowires from other materials by adjusting these material-dependent growth parameters.
"Engineering",
"Materials Science",
"Physics"
] |
Thermodynamics of deterministic finite automata operating locally and periodically
Real-world computers have operational constraints that cause nonzero entropy production (EP). In particular, almost all real-world computers are ‘periodic’, iteratively undergoing the same physical process; and ‘local’, in that subsystems evolve whilst physically decoupled from the rest of the computer. These constraints are so universal because decomposing a complex computation into small, iterative calculations is what makes computers so powerful. We first derive the nonzero EP caused by the locality and periodicity constraints for deterministic finite automata (DFA), a foundational system of computer science theory. We then relate this minimal EP to the computational characteristics of the DFA. We thus divide the languages recognised by DFA into two classes: those that can be recognised with zero EP, and those that necessarily have non-zero EP. We also demonstrate the thermodynamic advantages of implementing a DFA with a physical process that is agnostic about the inputs that it processes.
These analyses use minimal physical descriptions of the computations performed by the abstract constructs of computer science theory [20,21]. Some recent work has instead probed the thermodynamics of certain types of hardware, such as CMOS-based electronic circuits [22,23]. However, there exist practical constraints on physical computation that are not specified by the overall computation performed, but which are nonetheless relevant beyond a particular type of hardware. The thermodynamic costs of these constraints are not resolved by either of the approaches above, although their consequences can be significant [24]. Accordingly, we ask: which kinds of thermodynamic costs necessarily arise when implementing a computation in a physical system, solely due to constraints that seem to be shared by all real-world physical systems that implement digital computation? To begin to investigate this issue, here we consider the minimal entropy production (EP) that arises due to two ubiquitous constraints on real-world digital computers. First, the vast majority of modern physical computers are periodic: they implement the same physical process at each iteration (or clock cycle) of the computation. Second, all modern physical systems that perform digital computation are "local", i.e., not all physical variables that are statistically coupled are also physically coupled when the system's state updates. Ultimately, the reason that this constraint is imposed in both abstract models of computation and real-world computers is that it allows us to break down complex computations into simple, iterative logical steps.

In this work we explore how and when operating under these constraints imposes lower bounds on the EP of a computation modeled as a CTMC, regardless of any other details about how the computation is performed (equivalent results apply even in a quantum setting [9]). Taken together, the constraints impose necessary EP through mismatch cost [8-11] of two types: "modularity" cost [6,8,12,13] and what we call "marginal" mismatch cost. Both types of mismatch cost have been identified in the literature as possible sources of EP in any given physical process; here we argue that they are in fact inescapable in complex computations. In particular, we demonstrate their effects for one of the simplest nontrivial types of computer, deterministic finite automata (DFA).

DFA have important applications in the design of modern compilers, as well as in text searching and editing tools [25]. They are also foundational in computer science theory, sitting at the foot of the Chomsky hierarchy [26,27], below push-down automata [21] and Turing machines [20,28,29]. These properties make DFA particularly well suited for an initial study of the consequences of locality and periodicity in computational systems. We thus take the first step towards investigating the thermodynamic consequences of locality and periodicity in all the computational machines of computer science theory.

We next introduce our modelling approach and key definitions. We subsequently outline the general consequences of locality and periodicity for arbitrary computations, in the form of a strengthened second law. Having discussed these strengthened second laws, we then derive specific expressions for constraint-driven EP in DFA, and explore how DFA could be designed to minimize the expected and worst-case costs that result. Next, we analyse how this EP relates to the underlying computation performed; surprisingly, the most compact DFA for a given language is generally neither especially thermodynamically efficient nor inefficient. Finally, we consider regular languages, i.e., the sets of strings such that every string in the set can be recognized by some DFA. We show that such languages can be divided into a class that is thermodynamically costly for a DFA to recognise, and a class that is inherently low-cost.
A. Deterministic Finite Automata
A DFA [6,26,27] is a 5-tuple (R, Λ, r_∅, r_A, ρ) where: R is a finite set of (computational) states; Λ is a finite alphabet of input symbols; ρ : Λ × R → R is a deterministic update function specifying how the current DFA state is updated based on the next input symbol; r_∅ ∈ R is a unique initial state; and r_A ⊂ R is a set of accepting states. An example is shown in Fig. 1. The set of all finite input strings is denoted Λ*.

The DFA starts in state r_∅ and an input string λ ∈ Λ* is selected. The first symbol of the selected input string, λ_1, is used to change the DFA's state to ρ(λ_1, r_∅). The computation proceeds iteratively, with each successive component of the vector λ used as input to ρ alongside the then-current DFA state to produce the next state. We write λ_{−i} for the entire vector λ except its i'th component.
We write the DFA's computational state just before iteration i as r_{i−1}, and we use r_i for the state after the update. The update in iteration i is then the map

(λ_i, r_{i−1}) → (λ_i, ρ(λ_i, r_{i−1})).

We refer to this map as the local dynamics, and define the set of local states as

Z = Λ × R,

with elements z ∈ Z. z_i^0 is the local state just before update i, z_i^0 = (λ_i, r_{i−1}), and z_i^f = (λ_i, r_i) is the local state after update i. Note that z_i^f ≠ z_{i+1}^0 in general, since z_{i+1}^0 involves λ_{i+1}, not λ_i. The local update function fixes the full update function of the entire state space, since λ_{−i} is unchanged during an update.

FIG. 1: Example DFA with states R = {0, 1, 2, 3}, alphabet Λ = {a, b}, initial state r_∅ = 0 and accepting set r_A = {0, 1, 2}. The update function ρ is illustrated in (a); the current computational state and the current input symbol specify the next computational state. This DFA accepts input strings that do not contain three or more consecutive bs. (b) shows the evolution of the local state through three iterations; the input string is read from left to right.
A DFA accepts λ if its state is contained in r_A after processing the final symbol. The language accepted by a DFA is the set of all input strings it accepts. Many DFA accept the same language L; the minimal DFA for L has the smallest set of computational states R among all DFA that accept L [26,27].
Fig. 1(a) shows a DFA with four computational states that processes words built from the two-symbol alphabet {a, b}. This DFA accepts all strings without three or more consecutive bs. Three iterations of this DFA fed with the input (a, b, b) are shown in Fig. 1(b).
DFA can be divided into those with an invertible local map ρ and those with a non-invertible ρ. The map ρ defines islands in the local state space: an island of ρ is the set of all inputs to ρ that map to the same output (i.e., it is the pre-image of an output of ρ). If the local dynamics defined by ρ is invertible, all islands have size 1; otherwise Z is partitioned by ρ^{-1} into non-intersecting islands, some of which contain multiple elements. We write c_i for the island that contains z_i^0. The DFA in Fig. 1 is non-invertible, since z_i^f = (a, 0) could have arisen from z_i^0 = (a, 0), (a, 1) or (a, 2), which together comprise an island.
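A minimal sketch in Python (not from the paper) makes these definitions concrete: it implements the Fig. 1 DFA, groups local states into islands as the pre-images of the local map, and confirms that the map is non-invertible.

```python
from collections import defaultdict

# The DFA of Fig. 1: reject once three consecutive 'b's have been seen.
R = [0, 1, 2, 3]                  # computational states (3 = absorbing reject)
ALPHABET = ["a", "b"]
R_INIT, R_ACC = 0, {0, 1, 2}

def rho(symbol, r):
    """Local update function rho: (symbol, state) -> next state."""
    if r == 3:
        return 3
    return min(r + 1, 3) if symbol == "b" else 0

def accepts(word):
    r = R_INIT
    for s in word:
        r = rho(s, r)
    return r in R_ACC

def islands():
    """Group local states z = (symbol, r) by their image under the local
    map; each group is the pre-image of one output of rho, i.e. an island."""
    pre = defaultdict(list)
    for s in ALPHABET:
        for r in R:
            pre[(s, rho(s, r))].append((s, r))
    return list(pre.values())

print(accepts("abba"), accepts("abbba"))    # True False
isl = islands()
print(max(len(c) for c in isl) == 1)        # False -> rho is non-invertible
print([c for c in isl if len(c) > 1])
# [('a',0),('a',1),('a',2)] and [('b',2),('b',3)], as described in the text
```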
B. Thermodynamic description of DFA
Details of our thermodynamic modelling of DFA are given in the Methods. In short, we assume that the logical states of the device are instantiated as well-defined, discrete physical states. At each iteration, a control protocol µ(t) is applied that drives a deterministic update of the DFA's state according to the logical rules of the computation.
Although the overall update is deterministic, we assume that the input word is sampled from a distribution p(λ), representing the possible computations that the DFA may be required to perform. We use λ to represent the random variable corresponding to the input word. The randomness of λ means that the computational state after update i, the local state before and after update i, and the island occupied during iteration i are also random variables; we denote them r_i, z_i^0, z_i^f and c_i, respectively.

As outlined in the Methods, when a time-dependent control protocol µ(t) is applied to a thermodynamic system X with a finite set of states X = {x_1, x_2, ...}, the mismatch cost [6,8,9] is a lower bound on EP. Here, the protocol µ(t) drives an evolution from p(x) to p′(x′) = Σ_x P(x′|x) p(x), or p′ = Pp. The mismatch cost is the drop in KL divergence due to the matrix P:

σ_µ = D(p || q_µ) − D(Pp || Pq_µ),

where the distribution q_µ, known as the prior distribution [6,15,30], is specific to the applied protocol µ(t), and D(p || q_µ) is the Kullback-Leibler (KL) divergence between p and q_µ. σ_µ is zero if p = q_µ, and non-negative by the data processing inequality. Intuitively, the mismatch cost is the contribution to the EP of the misalignment between the actual input distribution p(x) and an optimal distribution q_µ(x) specified by the physical process µ(t). If the input distribution is well matched to the protocol applied, p(x) = q_µ(x), EP is minimised.
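The following toy calculation (a sketch, not from the paper) illustrates the mismatch cost as the drop in KL divergence; the two-state merging map and the distributions are arbitrary illustrative choices.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q), with the convention 0 ln 0 = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def mismatch_cost(P, p, q):
    """Drop in KL divergence under the update matrix P:
       sigma = D(p || q) - D(Pp || Pq), non-negative by data processing."""
    return kl(p, q) - kl(P @ p, P @ q)

# Toy 2-state example: a map that merges both states into state 0
# (an "island" of size 2), with an arbitrary actual input distribution p
# and a uniform prior q.  All numbers are illustrative.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
p = np.array([0.9, 0.1])
q = np.array([0.5, 0.5])
print(mismatch_cost(P, p, q))   # > 0 since p != q and P is non-invertible
```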
In the Methods, we outline how the EP of two co-evolving subsystems X_a and X_b that are not physically coupled during a period of evolution can be split into the EP of the two subsystems in isolation plus a term related to the change in their mutual information. In the special case where X_a evolves during the time period in question and X_b = X_{−a} is static, the dynamics of X_a under µ(t) = µ_a(t) is said to be solitary [6,8,13]. In this case, the mismatch cost is [6,8,12,13]

σ = D(p_a || q_{µ_a}) − D(P_a p_a || P_a q_{µ_a}) − ∆I.   (4)

Here, p_a(x_a) is the initial marginal distribution for subsystem a, and ∆I is the change in the mutual information between X_a and X_{−a} over the period in question. The first term in Eq. (4) is the non-negative mismatch cost generated by X_a running in isolation, having marginalised over the other degrees of freedom. We call this the marginal mismatch cost, σ^mar. Like any other mismatch cost, it is non-negative. The second term is the reduction in mutual information between X_a and X_{−a} [7,13], which we call the modularity mismatch cost, σ^mod, after Ref. [12]. By the data processing inequality [31], σ^mod ≥ 0. Intuitively, this term reflects the fact that information about the statistical coupling between X_a and X_{−a} is a store of non-equilibrium free energy, and that this information is reduced in a solitary process.
To analyse the minimal thermodynamic costs of operating DFA under local and periodic constraints, we consider the effect of these constraints on the overall mismatch cost at each iteration. As discussed in the Methods, any additional entropy production can, in principle, be taken to zero.
Locality
In principle, one could build a DFA that physically couples the entire input word λ to the local subsystem z_i during update i. However, this coupling is not required by the computational logic, which is local to z_i. Moreover, it would be extremely challenging to implement in practice; modern computers do not physically couple bits that need not be coupled by the logical operation in question. Accordingly, we assume that the evolution of the local state z_i is solitary. As a result, the global mismatch cost splits into two non-negative components: a marginal mismatch cost associated with the evolution of the local state in isolation, and a modularity mismatch cost associated with non-conserved information between the local state and the rest of the system.
Periodicity
The marginal mismatch cost for iteration i will depend on the similarity of p(z_i^0), the initial distribution over local states, and q_{µ_i}(z_i^0), the prior distribution for the protocol µ_i(t) implemented at iteration i. Typically, p(z_i^0) will vary with i. In theory, one could design µ_i(t) to match these variations, ensuring q_{µ_i}(z_i^0) = p(z_i^0) at each update and eliminating σ^mar. However, designing such a protocol would require knowledge of p(z_i^0), which in turn would require running a computation emulating the DFA before running the DFA, gaining nothing. Moreover, one of the major strengths of computing paradigms such as DFA, Turing machines and real-world digital computers is that their logical updates are not iteration-dependent. It is therefore natural to impose a second constraint: the protocol µ_i(t), like the logical update ρ, is identical at each update i (µ_i(t) = µ(t)). Formally, we define a local, periodic DFA (LPDFA) as any process that implements a DFA via a repetitive, solitary process on the local state z_i.
D. General consequences of local and periodic constraints
We briefly consider the consequences of locality and periodicity in general, before re-focussing on DFA. The mismatch and modularity costs introduced in Section II B are well established. However, systems that perform nontrivial computations by iterating simpler logical steps on subsystems are exposed to these costs in a way that simpler operations, like erasing a bit, are not. The need to operate iteratively on an input that is evolving from iteration to iteration makes the mismatch cost unavoidable. Additionally, modularity-cost-inducing statistical correlations result from the need to carry information between iterations, which is not required in simpler systems.
Consider a physical realisation of an arbitrary computation that is local and periodic in a way that reflects the locality and periodicity of the computational logic. Then the marginal and modularity mismatch costs set a lower bound on EP, regardless of any further details about how the computation is implemented. Specifically, let X be the computational system and X_i the local subsystem that is updated at iteration i. Then over the course of N iterations, the system will experience a total marginal mismatch cost

σ^mar = Σ_{i=1}^{N} [ D(p(x_i) || q_µ(x_i)) − D(P p(x_i) || P q_µ(x_i)) ],   (5)

where P is the update matrix, p(x_i) is the initial distribution of the local state at iteration i, and q_µ(x_i) is the prior built into the actual protocol µ(t).
Eq. (5) depends on the details of µ(t) beyond the locality and periodicity constraints. However, some choice of q_µ (and hence of µ(t)) will minimize σ^mar, setting a lower bound on EP that is independent of these details:

σ^mar ≥ min_{q_µ} Σ_{i=1}^{N} [ D(p(x_i) || q_µ(x_i)) − D(P p(x_i) || P q_µ(x_i)) ].   (6)
Unless p(x_i) is identical for all i, or P is a simple permutation, it is not generally possible to choose a single q_µ that eliminates σ^mar at every iteration i. In this case, Eq. (6) provides a strictly positive periodicity-induced lower bound on the EP that depends purely on the logic of the computation performed.
Similarly, the accumulated modularity cost follows directly as

σ^mod = − Σ_{i=1}^{N} ∆I(X_i ; X_{−i}),   (7)

where ∆I(X_i ; X_{−i}) is the change in mutual information between X_i and X_{−i} due to update i. As with Eq. (6), this contribution to EP is entirely determined by the computational paradigm used and the distribution of inputs; it is independent of the details of the implementation, given the assumption of locality and periodicity. Taken together, σ^mar and σ^mod from Eqs. (6) and (7) constitute a strengthened second law for periodic, local computations that depends only on the logic of the computation, not the details of its implementation. These implementation-independent lower bounds, alongside the qualitative observation that computing systems are particularly vulnerable to modularity and mismatch costs, are the first main result of this work. These results apply to any computational system implemented using a periodic, local process. For the rest of the paper, we focus on DFA; doing so allows us to illustrate the consequences of the local and periodic restrictions in a concrete computational model.
E. Entropy production for LPDFA
Under our assumptions, the EP when applying a solitary dynamics µ(t) to an initial distribution p(z_i^0, λ_{−i}) at the update stage of iteration i of a DFA is

σ_i = σ_i^mar + σ_i^mod,   (8)

where

σ_i^mar = D(p(z_i^0) || q_µ(z_i^0)) − D(P p(z_i^0) || P q_µ(z_i^0))   (9)

is the marginal mismatch cost of update i, and

σ_i^mod = −∆I(z_i ; λ_{−i})   (10)

is the modularity mismatch cost of update i. A variant of the modularity cost in Eq. (10) was considered in isolation in Ref. [32], for the special case of DFA operating in steady state. Henceforth, for simplicity, we suppress the dependence of σ_i on µ, since µ is constant over all iterations.

The KL divergences in Eq. (9), giving σ_i^mar, can be simplified for LPDFA. Since each update in an LPDFA deterministically collapses all probability within an island to one state, p(z_i^f | c_i) = q(z_i^f | c_i). As shown in Section 2 I of the Supplementary Information, this simplification implies that

σ_i^mar = Σ_{c_i} p(c_i) D( p(z_i^0 | c_i) || q_µ(z_i^0 | c_i) ),   (11)

which is the second main result of this work. σ_i^mar is therefore the divergence between the initial and prior distributions, conditioned on the island of the initial state.
In Fig. 2, we explore the properties of σ_i^mar for the DFA shown in Fig. 1. The four sub-figures show σ_i^mar for four distinct distributions p(λ) and a fixed (uniform) prior q_µ. We immediately see that σ_i^mar depends strongly on both the distribution of input words and the iteration, with σ_i^mar non-monotonic in i in all four cases. σ_i^mar is determined by a combination of how well tuned the prior is to the input distribution within a given island, and the probability of that island at each iteration. At the start of iteration 1, particularly for subfigure (b), there is a high probability of the system being in the island {(a, 0); (a, 1); (a, 2)}, and the uniform prior is poorly aligned with the actual initial condition within this island (all of the probability is in (a, 0)). At larger i, this cost drops both because the probability of being in that island drops and because the conditional distribution within the island becomes more uniform.

FIG. 2: EP in a simple system shows non-trivial dependence on iteration and input word distribution. We plot the total EP σ_i, and its decomposition into σ_i^mar and σ_i^mod, for the DFA in Fig. 1(a), which accepts all words that do not contain three or more consecutive bs. In all cases we use a uniform prior q_µ(z_i^0 | c_i) within each island and consider input words of fixed length N = 15, but vary the distribution of input words p(λ).
For iterations i ≥ 3, the system has a non-zero probability of being in the other non-trivial island {(b, 2); (b, 3)}. The uniform prior is initially poorly matched to the conditional distribution within this island (at the start of iteration i = 3, the system cannot be in (b, 3)). Additionally, the probability of the system being in this island is quite low in subfigures (a) and (b), but much higher in (c) and (d), explaining the jumps in those traces.
The third main result of this work is a simple expression for the modularity mismatch cost for DFA. As we show in Section 3 of the Supplementary Information,

σ_i^mod = H(z_i^0 | c_i).   (12)

Surprisingly, σ_i^mod, a global quantity, is given by the entropy of the local state at the beginning of the update, conditioned on the island occupied at the start of iteration i. This result holds regardless of the distribution of input strings or the DFA's complexity.
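Building on the earlier DFA sketch, the self-contained snippet below propagates the distribution of local states for i.i.d. input symbols and evaluates Eqs. (11) and (12) directly, assuming a uniform prior within each island; it can be used to explore the iteration dependence discussed around Fig. 2.

```python
import numpy as np
from collections import defaultdict

# Per-iteration marginal (Eq. 11) and modularity (Eq. 12) mismatch costs
# for the Fig. 1 DFA fed i.i.d. symbols, with a uniform prior on each island.
R, ALPHABET = [0, 1, 2, 3], ["a", "b"]

def rho(s, r):                       # local update of the Fig. 1 DFA
    if r == 3:
        return 3
    return min(r + 1, 3) if s == "b" else 0

# islands = pre-images of the outputs of the local map z -> (s, rho(s, r))
pre = defaultdict(list)
for s in ALPHABET:
    for r in R:
        pre[(s, rho(s, r))].append((s, r))
ISLANDS = list(pre.values())

def costs(p_b=0.8, n_iter=15):
    p_r = np.zeros(4); p_r[0] = 1.0          # the DFA starts in r = 0
    p_sym = {"a": 1.0 - p_b, "b": p_b}
    out = []
    for _ in range(n_iter):
        p_z = {(s, r): p_sym[s] * p_r[r] for s in ALPHABET for r in R}
        s_mar = s_mod = 0.0
        for c in ISLANDS:
            pc = sum(p_z[z] for z in c)
            if pc == 0.0:
                continue
            cond = np.array([p_z[z] / pc for z in c])
            nz = cond[cond > 0]
            s_mar += pc * float(np.sum(nz * np.log(nz * len(c))))  # D(p||unif)
            s_mod += pc * float(-np.sum(nz * np.log(nz)))          # H(z|c)
        out.append((s_mar, s_mod))
        p_new = np.zeros(4)                  # push the state forward
        for (s, r), w in p_z.items():
            p_new[rho(s, r)] += w
        p_r = p_new
    return out

for i, (mar, mod) in enumerate(costs()[:5], 1):
    print(f"iteration {i}: sigma_mar = {mar:.3f}, sigma_mod = {mod:.3f}")
```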
To understand Eq. (12) intuitively, we note that z_i^0 in general contains information about λ_{−i}. After the update, any information provided by λ_i alone is retained, since the input symbol is not updated by the DFA. Moreover, for islands of size 1, the combined values of λ_i and r_i are just as informative about λ_{−i} as λ_i and r_{i−1} were. However, for non-trivial islands, the extra information provided by r_{i−1} on top of λ_i is lost, yielding Eq. (12). We see from our example system in Fig. 2 that modularity costs behave very differently from marginal mismatch costs. In general, σ_i^mod tends to zero as the probability of being absorbed into state 3 increases: in this case, z_i^0 has no entropy. Modularity costs stay high for system (b), in which bbb substrings are infrequent.
Modularity costs are relatively low in Fig. 2(d), in which symbols of the input word are correlated. Naïvely, one might have assumed that the larger I(z_i^0 ; λ_{−i}) generated by a correlated input word would be more susceptible to large modularity costs. We explore this question in more detail in Fig. 3, for both the DFA illustrated in Fig. 1(a) and a second DFA that accepts words that are concatenations of bb and baa substrings (Fig. 3(a)).
In Fig. 3(b) we plot the total modularity cost, Σ_{i=1}^{N} σ_i^mod, for both DFA processing a Markovian input, as a function of the degree of correlation, P(λ_{i+1} = λ_i). We see that in both cases the uncorrelated input words with P(λ_{i+1} = λ_i) = 0.5 have relatively high (though not maximal) modularity cost, and fully correlated strings have σ^mod = 0.
To understand why, consider Fig. 3(c), in which we plot the mutual information between the local state and the rest of the input word before (I_0 = I(z_i^0 ; λ_{−i})) and after (I_f = I(z_i^f ; λ_{−i})) the update of iteration i, for the original DFA in Fig. 1(a). We consider uncorrelated input words (P(λ_{i+1} = λ_i) = 0.5) and moderately correlated input words (P(λ_{i+1} = λ_i) = 0.8). At early iterations, I_0 is larger for the correlated input, as would be expected (at later times, the DFA with correlated input is more likely to be absorbed into state 3, reducing I_0). More importantly, the system with correlated inputs retains more of its information in the final state. Because λ_{−i} is correlated with the current symbol λ_i, it is a better predictor of the final state of the update. In the limit of P(λ_{i+1} = λ_i) = 1 or 0, there is no modularity cost, as z_i^f is perfectly predictable from λ_{−i}.

FIG. 3: (b) The modularity cost is plotted as a function of the probability that subsequent symbols in the word have the same value. (c) Mutual information between the local state and the rest of the input word before (I_0) and after (I_f) the update of iteration i, for the DFA in Fig. 1(a). Data are plotted for P(λ_{i+1} = λ_i) = 0.8 (correlated) and P(λ_{i+1} = λ_i) = 0.5 (independent).

FIG. 4: (a) Results are plotted for different values of q_µ((a, 0) | c*), where c* = {(a, 0), (a, 1), (a, 2)} is the island containing (a, 0). q_µ(z_i^0 | c_i) is otherwise unbiased, and q_µ((a, 0) | c*) = 1/3 corresponds to a totally unbiased prior. (b) Equivalent to (a), but for input p(a) = 0.2, p(b) = 0.8, and applying a bias to q_µ((b, 3) | c**), where c** = {(b, 2), (b, 3)} is the other non-trivial island for this DFA. q_µ(z_i^0 | c_i) is otherwise unbiased, and q_µ((b, 3) | c**) = 1/2 corresponds to a totally unbiased prior.

Combining Eqs. (11) and (12) gives

σ_i = Σ_{c_i} p(c_i) H( p(z_i^0 | c_i), q_µ(z_i^0 | c_i) ),   (13)

where

H(p, q) = −Σ_z p(z) ln q(z)   (14)

is the cross entropy between q_µ(z_i^0 | c_i) and p(z_i^0 | c_i). This total entropy production is also shown for the example DFA of Fig. 1(a) in Fig. 2.
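The step from Eqs. (11) and (12) to Eq. (13) is the standard identity relating divergence, entropy and cross entropy, applied within each island; a one-line check:

```latex
% Within each island, D(p||q) + H(p) = H(p, q), since
\begin{align*}
D(p \,\|\, q) + H(p)
  = \sum_z p(z)\ln\frac{p(z)}{q(z)} - \sum_z p(z)\ln p(z)
  = -\sum_z p(z)\ln q(z)
  = H(p, q).
\end{align*}
```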
F. Reducing the marginal mismatch cost through choice of priors
1. Applying a bias to the prior
It is natural to ask how q_µ(z_i^0 | c_i) might be chosen to minimize EP for a given p(λ) and a given DFA. One might hope that q_µ(z_i^0 | c_i) could be tuned to p(λ) alone, without any reference to the operation of the DFA. Such an approach will fail, however. The states within each island all have the same value of λ, because the update map (λ_i, r_{i−1}) → (λ_i, ρ(λ_i, r_{i−1})) does not update the input symbol. Applying a prior that is a function of λ alone therefore results in a uniform q_µ(z_i^0 | c_i). Reducing the mismatch cost through the choice of prior thus requires some understanding of the computational state, not just the inputs. For example, for the DFA in Fig. 1(a), the computation starts in the state r_∅ = 0. Biasing q_µ(z_i^0 | c_i) towards states with r = 0, as we show in Fig. 4(a), can reduce the marginal mismatch cost of the first step. If the bias is too strong, increased costs at later iterations overwhelm the initial reduction. It is possible, however, to reduce the total EP with a moderate bias of q_µ(z_i^0 | c_i) towards states with r = 0. Alternatively, one could bias q_µ(z_i^0 | c_i) towards states with r = 3, since most trajectories will eventually be absorbed. As shown in Fig. 4(b), doing so incurs an extra cost at short times, particularly at iteration i = 3. At the start of the third iteration, the DFA is moderately likely to be in computational state r = 2 but cannot be in computational state r = 3, so the biased prior is a poor match for p(z_i^0 | c_i). At later iterations, however, the biased prior performs better. Again, a moderate bias performs best overall.
2. Advantages of a uniform prior
Section II F 1 shows that it is possible to reduce EP by applying biased priors. However, we also saw that strongly biased priors can lead to very high EP. As noted in Ref. [33], in which a result similar to Eq. (11) was derived in the absence of distinct islands, σ_i^mar penalizes an overconfident prior q_µ(z_i^0 | c_i): if q_µ(z_i^0 | c_i) = 0 for a state with p(z_i^0 | c_i) ≠ 0, Eq. (13) implies σ_i^mar → ∞. The authors of Ref. [33] therefore hypothesised that a uniform q_µ(z_i^0 | c_i) may be optimal. As a fourth main result of this work, we present three important properties of a prior q_µ(z_i^0 | c_i) that is uniform within each island, i.e., q_µ(z_i^0 | c_i) = 1/L_{c_i}, with L_c the size of island c. First, for such a prior, Eq. (13) becomes

σ_i = Σ_{c_i} p(c_i) ln L_{c_i} ≤ ln L_{c_max}.   (15)

Here, L_{c_max} is the size of the largest island of ρ. Eq. (15) gives a finite upper bound on the EP of an LPDFA employing the uniform prior q_µ(z_i^0 | c_i) = 1/L_{c_i}, constrained by the size of the largest island.
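The bound in Eq. (15) follows in one line from the cross-entropy form of Eq. (13): with a uniform prior, the cross entropy within each island is independent of p.

```latex
% Assuming the cross-entropy form of Eq. (13) and q_mu(z|c) = 1/L_c:
\begin{align*}
\sigma_i
  = \sum_{c_i} p(c_i) \sum_{z_i^0 \in c_i} p(z_i^0 \mid c_i)\,\ln L_{c_i}
  = \sum_{c_i} p(c_i)\,\ln L_{c_i}
  \;\le\; \ln L_{c_{\max}} .
\end{align*}
```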
Second, for any protocol, the worst-case EP is at least ln L_{c_max}; a uniform prior q_µ(z_i^0 | c_i) = 1/L_{c_i} therefore minimizes the worst-case EP. To verify this claim, consider the input distribution p(z_i^0) = δ_{z_i^0, z_min}, where z_min is a state that minimizes q_µ(z_i^0 | c_i) within the largest island. For such a distribution, Eq. (13) reduces to

σ_i = −ln q_µ(z_min | c_max) ≥ ln L_{c_max},   (16)

where the final inequality follows from q_µ(z_min | c_max) ≤ 1/L_{c_max}. Finally, the uniform prior q_µ(z_i^0 | c_i) = 1/L_{c_i} minimizes the predicted average EP if a designer is maximally uncertain about p(z_i^0, λ_i). A designer may not know the input distribution p_i(z_i^0 | c_i) at iteration i, either because p(λ), or the DFA's dynamics on p(λ), is unknown. Thus the choice of protocol µ(t), and hence of q_µ(z_i^0 | c_i), is made under uncertainty over not just the input state, but also the distribution from which that state is drawn.
Let the designer's belief about the distributions p(c_i) and p(z_i^0 | c_i) be represented by a distribution π(v, w) over an (arbitrary) discrete set of possible distributions indexed by v and w: p_v(c_i), p_w(z_i^0 | c_i). The designer's best estimate of the expected EP at iteration i is then (see Section 4 of the Supplementary Information)

⟨σ_i⟩ = H(z_i^0 | c_i, v, w) + I(z_i^0 ; w | c_i, v) + Σ_{c_i, v} p(c_i, v) D( p̄(z_i^0 | c_i, v) || q_µ(z_i^0 | c_i) ).   (17)

Here, H(z_i^0 | c_i, v, w) and I(z_i^0 ; w | c_i, v) are defined with respect to the estimated joint distribution p(v, w, z_i^0, c_i), and p̄(z_i^0 | c_i, v) = Σ_w π(w | v) p_w(z_i^0 | c_i) is the designer's estimate of the probability distribution within an island, having averaged over the uncertainty quantified by π(w | v).
All three terms in Eq. (17) are non-negative. The first is σ_i^mod averaged over v and w. The third is the marginal mismatch cost between p̄(z_i^0 | c_i, v) and q_µ(z_i^0 | c_i). However, even if q_µ(z_i^0 | c_i) matches the average estimated distribution within an island, p̄(z_i^0 | c_i, v) = q_µ(z_i^0 | c_i), the best estimate of σ_i^mar is non-zero. The second term, I(z_i^0 ; w | c_i, v), quantifies how much of the uncertainty in w actually manifests as uncertainty in the input distribution; variability about p̄(z_i^0 | c_i, v) gives positive expected EP. An equivalent term was previously identified in Ref. [34] for arbitrary processes with a single island.
H(z_i^0 | c_i, v, w) and I(z_i^0 ; w | c_i, v) are protocol-independent and cannot be changed for a given computation. D( p̄(z_i^0 | c_i, v) || q_µ(z_i^0 | c_i) ), however, can be minimized by choosing q_µ(z_i^0 | c_i) = p̄(z_i^0 | c_i, v). Given maximal uncertainty, the designer's best estimate will be uniform: p̄(z_i^0 | c_i, v) = 1/L_{c_i}. In this case, a uniform q_µ(z_i^0 | c_i) = 1/L_{c_i} minimizes the estimated average EP.
The results hitherto apply to LPDFA, but do not reflect the actual computation performed. The results for σ_i^mar (the optimality of a uniform prior) apply to any deterministic process; the LPDFA's restrictions simply justify why q_{µ_i}(z_i^0 | c_i) cannot be tuned to p(z_i^0 | c_i) at each i. The results for σ_i^mod are more specific, relying on a solitary process that uses a single symbol λ_i from an unchanging input string, and a device whose state after the update is unambiguously specified by the symbols λ_j with j ≤ i. Nonetheless, σ_i^mod in Eq. (12) is not directly related to the computational task. We now explore how EP is related to ρ and to the language accepted by the DFA.

G. Relating EP to computational tasks

The EP in Eq. (13) is zero only if q_µ(z_i^0 | c_i) = 1 for every z_i^0 and c_i for which p(z_i^0 | c_i) ≠ 0 and p(c_i) ≠ 0. In particular, EP is positive whenever an island c_i with p(c_i) > 0 has at least two elements z_i^0 with p(z_i^0 | c_i) > 0. There are two ways to avoid this EP. One is if all islands have a single element, i.e., the local update function ρ is invertible (this observation was made for σ^mod alone in Ref. [32]). The second is if the distribution of input strings p(λ) is such that, for every island c_i with at least two elements, all but one of those elements always have p(z_i^0 | c_i) = 0. In that case, however, q_µ(z_i^0 | c_i) must be finely tuned to match this condition when the physical system implementing the computation is constructed. As discussed in Sections II F 1 and II F 2, this strategy risks high costs for overconfidence.
We now focus on the former way of achieving zero EP, asking what determines whether ρ is invertible. Since ρ preserves the input symbol λ_i, it can only be non-invertible if it maps two distinct computational states to the same output for the same symbol λ_i. If we illustrate ρ by a series of directed graphs, one for each value of λ_i, then a non-invertible DFA will have at least one state with at least two incoming transitions for at least one value of λ_i. We label states with more than one incoming transition for a given λ_i as conflict states; the conflict states of the DFA in Fig. 1(a) are shown in Fig. 5.
The minimal DFA for a given language does not generally minimize or maximize EP
The minimal DFA for a language L has the smallest set of computational states R among all DFA that accept L. This minimal DFA has just enough memory to sort parsed substrings into classes of equivalent strings, so that information can be passed forward to complete the computation [26,27,35]. More formally, define input strings λ and µ to be equivalent with respect to language L iff λν ∈ L ⇐⇒ µν ∈ L for any string of input symbols ν, where λν is the concatenation of ν after λ. The Myhill-Nerode theorem states that the number of states of the minimal DFA for L is the number of equivalence classes of this equivalence relation [26,27,35].

Perhaps surprisingly, minimal LPDFA do not in general either maximise or minimise EP. This claim is our fifth main result. To illustrate it, first consider the two DFA in Fig. 6, which both have Λ = {a, b} and accept input strings with an even number of bs. Fig. 6(a) is the minimal DFA for this language. It is invertible, and so has zero EP. The larger DFA in Fig. 6(b) is non-invertible, and so σ_i(p(z_i^0, λ_{−i})) > 0 in general. For example, EP is positive if the sequences (λ_{i−2}, λ_{i−1}, λ_i) = (a or b, b, a) and (λ_{i−2}, λ_{i−1}, λ_i) = (b, a, a) both have non-zero probability. The minimal LPDFA never has higher EP than this larger DFA, and often has lower EP.

Now consider the two DFA in Fig. 7. Both accept any input string constructed from Λ = {a, b} with no b symbols, and Fig. 7(a) is the minimal DFA for this language. Neither DFA is invertible, so EP is generally non-zero for both. However, the non-minimal LPDFA in Fig. 7(b) delays entropy production by a single iteration relative to Fig. 7(a). As outlined in Section 5 of the Supplementary Information, this delay ensures that the overall EP for the larger LPDFA is always less than or equal to the EP for the minimal LPDFA.
H. Languages are divided into costly and low-cost classes by the structure of their minimal DFA

The DFA in Fig. 7(b) can be extended, further delaying non-zero EP. However, a finite number of additional states cannot prevent EP for arbitrary-length inputs, and DFA are necessarily finite. Indeed, the sixth main result of our work, proven in detail in Section 6 of the Supplementary Information, is that if a minimal DFA is non-invertible, any DFA that accepts the same language must also be non-invertible: one cannot eliminate conflict states without disrupting the sorting of strings into equivalence classes. Thus if the minimal DFA for a regular language L is non-invertible, recognising that language is inherently costly. Conversely, if the minimal DFA that accepts L is invertible, recognising that language is low-cost.
As an example, consider a DFA that takes as input integers expressed in base n and accepts an integer y if y is divisible by m. As we show in Section 7 of the Supplementary Information, the minimal DFA for this computation is invertible iff n and m have no common factors. It is therefore inherently costly to decide whether a number is divisible by 9 if the number is expressed in base 3, but not if the number is expressed in base 2, showing that even conceptually similar computations can have very different thermodynamic consequences.
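A short check of this criterion is sketched below, using the standard remainder construction for a divisibility DFA (state = current remainder, updated as ρ(r, d) = (n·r + d) mod m); this construction is assumed here for illustration and is not taken from the Supplementary Information.

```python
from math import gcd

def div_dfa_invertible(base, m):
    """Check invertibility of the local map of the base-`base`
    divisibility-by-`m` remainder DFA: rho(r, d) = (base*r + d) % m.
    The map is invertible iff, for each digit d, r -> rho(r, d) is
    injective on the m remainder states."""
    for d in range(base):
        images = {(base * r + d) % m for r in range(m)}
        if len(images) < m:          # two states collide -> non-invertible
            return False
    return True

for base, m in [(2, 9), (3, 9), (10, 7), (10, 4)]:
    print(base, m, div_dfa_invertible(base, m), gcd(base, m) == 1)
# invertible exactly when the base and the divisor share no common factor
```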
III. DISCUSSION
Breaking down complex computations into simple periodic updates, involving small parts of the computational system, is at the heart of both theoretical computer science and real-world computing devices. It is natural that physical systems designed to implement computations involve physical processes that are also local and periodic; that is how synchronous, clocked digital computers are designed.
However, physical systems that implement periodic, local computations are subject to stronger lower bounds on EP than the zero bound of the second law. Any physical operation, including a computation, can in principle be performed in a thermodynamically reversible way with a sufficiently well-designed protocol [36]. The nature of non-trivial computations, however, means that such a protocol would need to reflect not just the distribution of possible inputs to the computer, but also how those inputs are processed and the subtle statistical coupling that is generated as the computation proceeds.
We have illustrated how these challenges manifest as marginal and modularity mismatch costs in DFA with non-invertible local update maps. Interestingly, the overall computation performed by a DFA, mapping the input word and starting computational state to the same input word and a final computational state, is always invertible. The logical properties of the overall computation are therefore not helpful in understanding the necessary EP of a local, periodic device.
We have only a qualitative, system-specific understanding of why the curves in Fig. 2 and Fig. 4 have the forms they do. Additionally, although similar results will hold for quantum-mechanical or finite-heat-bath treatments of DFA thermodynamics, additional subtleties will arise. More generally, DFA are just the simplest machine in the Chomsky hierarchy, and it is unknown how marginal and modularity mismatch costs behave for other paradigms. The constraints of locality and periodicity also apply to (physical systems implementing) other machines in the hierarchy, such as push-down automata, RAM machines, or Turing machines. We expect that variants of the results concerning σ^mod and σ^mar presented here also apply to those systems. However, there will also be important differences. For example, the overwriting of input and/or memory that occurs in machines more powerful than DFA will affect σ^mod in ways not considered in this paper. Moreover, Turing machines and push-down automata have access to an infinite memory. DFA, by definition, do not; indeed, it is this restriction that divides regular languages into low- and high-cost classes.
Finally, it is interesting to consider how the consequences of locality and periodicity relate to other resource costs. Recent work on transducers, computational machines that generate an output corresponding to a hidden Markov model, has shown that a quantum advantage over a classical implementation exists if and only if the machine is not locally invertible [37]; it is unclear whether a similar result holds for DFA. The role of the input distribution in determining the thermodynamic costs in our work is also reminiscent of the way computational complexity depends on the distribution over inputs.
IV. METHODS

Consider a system X with a finite set of states X = {x_1, x_2, ...}. There is a distribution p(x) over X at some initial time, and that distribution evolves according to a (potentially time-dependent) Markov process µ(t). We assume that the system is attached to a single heat bath during this process, choosing units so that the bath's temperature equals 1/k_B. We also assume that µ(t) obeys local detailed balance with respect to that bath and the system's (potentially time-evolving) Hamiltonian [4]. Although we need not specify whether the Markov process is discrete-time or continuous-time, to fix the reader's intuition (and accord with real-world digital computers) we can assume that it is continuous-time.
Suppose that the process runs for some pre-fixed time. The distribution over X at the end of that time is a linear function of the initial distribution, which we write as p'(x') = Σ_x P(x'|x) p(x), or just p' = Pp for short, where P is implicitly fixed by the stochastic process µ(t). A given P will partition X into islands. Two states x and x' are within the same island if and only if P(x''|x) ≠ 0 and P(x''|x') ≠ 0 for some state x''.
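To make the island construction concrete, the following sketch (our own illustration, not code from the paper; all names are ours) partitions the input states of a column-stochastic matrix P into islands by merging states whose output distributions share support, closing the relation transitively:

```python
import numpy as np

def islands(P, tol=1e-12):
    """Partition input states of the column-stochastic matrix P into islands.

    P[x_out, x_in] holds P(x_out | x_in). Two input states are merged when
    their output distributions share support; union-find closes the relation
    transitively."""
    n = P.shape[1]
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for x in range(n):
        for xp in range(x + 1, n):
            # shared support in the output distributions -> same island
            if np.any((P[:, x] > tol) & (P[:, xp] > tol)):
                parent[find(x)] = find(xp)

    groups = {}
    for x in range(n):
        groups.setdefault(find(x), []).append(x)
    return list(groups.values())

# Example: a map that merges states 0 and 1 but leaves state 2 alone.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
print(islands(P))  # [[0, 1], [2]]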
Let q^c_µ(x) be the initial probability distribution that minimizes the entropy production under µ(t) for distributions with support restricted to the island c. This optimal distribution will be unique within each island.
No matter what the actual initial distribution p is, and regardless of the specific details of the process µ(t) that implements P, so long as each q^c_µ has full support within island c, the EP when the process is run with the initial distribution p will be [8,9,11]

$$\sigma(p) = D(p \,\|\, q_\mu) - D(Pp \,\|\, Pq_\mu) + \sum_c p(c)\,\hat{\sigma}_\mu(c), \qquad (18)$$

where σ̂_µ(c) denotes the residual EP associated with island c. Here, the index c runs over the islands of the process, p(c) = Σ_{x∈c} p(x), and q_µ(x) = Σ_c q_µ(c) q^c_µ(x) is called the prior distribution [6,15].
Note that the distribution over islands, q_µ(c), is arbitrary. Any distribution q_µ(x) that is a sum over the set of optimal distributions {q^c_µ(x)} could be used with the same results. In practice, the existence of many possible q_µ does not affect our analysis; we shall simply use a convenient q_µ with q_µ(c) ≠ 0 for all c.
The first two terms in Eq. 18 are the mismatch cost [6,8,9] of the process. The final term in Eq. 18 is the residual entropy production. Unlike the statistical mismatch cost, the residual EP depends on the physical details of the process implementing µ(t). Each term in the sum is non-negative, but can be reduced to zero using a quasi-static process [6,8,9].
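To make the first two terms of Eq. 18 concrete, here is a minimal sketch, assuming the same toy map as in the previous snippet; the particular prior used is only one admissible choice of q_µ, and the numbers are illustrative:

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) in nats; assumes supp(p) is contained in supp(q)."""
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def mismatch_cost(P, p, q_prior):
    """First two terms of Eq. 18: D(p || q) - D(Pp || Pq)."""
    return kl(p, q_prior) - kl(P @ p, P @ q_prior)

# Map that merges states 0 and 1 (one island) and fixes state 2 (another island).
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
p = np.array([0.7, 0.2, 0.1])    # actual initial distribution
q = np.array([0.45, 0.45, 0.1])  # a prior: uniform within island {0, 1}
print(mismatch_cost(P, p, q))    # non-negative by construction
```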
Marginal and modularity mismatch costs
Let X_a and X_b be two co-evolving systems that are physically separated from one another during a time period [0, 1], though they may have been coupled in the past. Due to this separation, we may consider separate protocols µ_a(t) and µ_b(t). Moreover, the prior for the overall process must be a product distribution, q_µ(x) = q_{µ_a}(x_a) q_{µ_b}(x_b). Taking p(x_a) and p(x_b) as the marginal distributions of the initial joint distribution p(x_a, x_b), the drop in KL divergence during [0, 1] is

$$D\big(p \,\|\, q_{\mu_a} q_{\mu_b}\big) - D\big((P_a \otimes P_b)\,p \,\|\, (P_a q_{\mu_a})(P_b q_{\mu_b})\big),$$

where P_a, P_b are the two matrices corresponding to the conditional distributions of ending states given initial states. This drop equals

$$\Delta H(X_a, X_b) - \Delta H\big(p(x_a) \,\|\, q_{\mu_a}\big) - \Delta H\big(p(x_b) \,\|\, q_{\mu_b}\big),$$

where H is the entropy, H(· || ·) is cross-entropy, and Δ means change from beginning to end of the evolution under P. Adding and subtracting marginal entropies, this form can be re-expressed as

$$\big[\Delta H(X_a, X_b) - \Delta H(X_a) - \Delta H(X_b)\big] + \big[\Delta H(X_a) - \Delta H\big(p(x_a) \,\|\, q_{\mu_a}\big)\big] + \big[\Delta H(X_b) - \Delta H\big(p(x_b) \,\|\, q_{\mu_b}\big)\big].$$

By the definition of the change of mutual information between X_a and X_b, ΔI, we obtain

$$-\Delta I + \big[\Delta H(X_a) - \Delta H\big(p(x_a) \,\|\, q_{\mu_a}\big)\big] + \big[\Delta H(X_b) - \Delta H\big(p(x_b) \,\|\, q_{\mu_b}\big)\big].$$

We may thus write the EP during [0, 1] as the sum of the two marginal mismatch costs and -ΔI (plus residual EP), which simplifies to Eq. (4) if X_b = X_{-a} and X_{-a} is static.
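The decomposition above can be checked numerically. The sketch below (our illustration; the distributions, maps and priors are arbitrary) verifies that the total drop in KL divergence under a product prior equals the two marginal drops plus -ΔI:

```python
import numpy as np

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def mutual_info(pj):
    pa, pb = pj.sum(1), pj.sum(0)
    return kl(pj.ravel(), np.outer(pa, pb).ravel())

# Hypothetical 2x2 example: correlated joint input, independent local maps.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])               # p(x_a, x_b)
Pa = np.array([[1.0, 0.7],
               [0.0, 0.3]])              # column-stochastic map on X_a
Pb = np.array([[0.5, 0.2],
               [0.5, 0.8]])              # column-stochastic map on X_b
qa, qb = np.array([0.5, 0.5]), np.array([0.6, 0.4])  # product prior

p_final = Pa @ p @ Pb.T                  # joint evolves as (Pa x Pb) p

total_drop = kl(p.ravel(), np.outer(qa, qb).ravel()) - \
             kl(p_final.ravel(), np.outer(Pa @ qa, Pb @ qb).ravel())
mar = (kl(p.sum(1), qa) - kl(Pa @ p.sum(1), Pa @ qa)) + \
      (kl(p.sum(0), qb) - kl(Pb @ p.sum(0), Pb @ qb))
mod = mutual_info(p) - mutual_info(p_final)          # -delta I

print(np.isclose(total_drop, mar + mod))             # True
```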
Eq. (4) may, at first glance, seem inconsistent with the general discussion in Ref. [8], which used a more general Bayes net formalism. In fact there is no inconsistency. In the language of Ref. [8], the variables in z^0_i are the "parents" of r_i, resulting in the same marginal and modularity mismatch costs as derived here.
B. Physical model of DFA
In order to apply stochastic thermodynamics to the computational model of DFA, it is necessary to make assumptions about how the logic is instantiated in a physical system. We assume that all the possible logical states of the system, defined by the set R × Λ* × Z⁺ (combining the possible computational states, input words and iteration steps), correspond to well-defined discrete physical states [4,38]. For example, the DFA could be a molecular assembly processing a copolymer tape [14]. Metastable configurations of the assembly would represent the computational state, the sequence of the copolymer the state of the input word, and the position of the polymer the iteration. We also assume that if it is necessary to implement ρ, the DFA has access to ancillary hidden states, which with probability 1 are unoccupied at the start and end of any update [36].
Computation will, in general, involve an externally applied control protocol that varies the physical conditions of the system over time; in the case of the molecular computer, we would use time-varying concentrations of molecular fuel [14]. This protocol defines the dynamics µ(t) discussed in Section IV A. Although the dynamics will be stochastic, strictly speaking, we assume that µ(t) biases trajectories sufficiently to obtain effectively deterministic computation by the end of each update. More formally, we are interested in the limits of stochastic protocols under which they approximate deterministic dynamics to arbitrary accuracy [33]. We abuse notation, using µ(t) to refer to both the external protocol and the dynamics it induces over the system's states.
We take the input word λ to be a random variable sampled from a distribution p(λ). We use r_i, z^0_i, z^f_i and c_i to represent the random variables corresponding to the computational state of the DFA after update i, the local state before and after update i, and the island occupied during iteration i, respectively.
We will consider a distribution p(λ) in which all words are the same finite length N. Within this setup, a distribution of input words with lengths less than or equal to N could be simulated by adding to the alphabet an extra null symbol that induces no computational transitions. Processing these null input symbols would have no thermodynamic cost under the assumptions considered here. For simplicity, we do not include these null symbols in our examples.
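As a sketch of this setup, the following toy implements an update map ρ together with the hypothetical null symbol just described. The transition table is our reading of Fig. 1(a) from its caption (words with three or more consecutive bs are rejected, with state 3 the absorbing reject state); it is illustrative, not code from the paper:

```python
# Local update map rho: (computational state, input symbol) -> new state.
rho = {(0, 'a'): 0, (0, 'b'): 1,
       (1, 'a'): 0, (1, 'b'): 2,
       (2, 'a'): 0, (2, 'b'): 3,
       (3, 'a'): 3, (3, 'b'): 3}

def run(word, r0=0):
    """Iterate the local update; the null symbol '-' leaves the state
    unchanged, so shorter words can be padded to a fixed length N."""
    r = r0
    for lam in word:
        if lam != '-':
            r = rho[(r, lam)]
    return r

print(run('abba-'))  # 0: padded word, no three consecutive bs
print(run('abbb-'))  # 3: rejected once 'bbb' is seen
```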
C. Thermodynamic costs of DFA
Different measures of cost
In this paper we focus on entropy production as the fundamental thermodynamic cost of running DFA. EP represents the lost ability to extract work from a system, and is a metric for thermodynamic irreversibility. In certain contexts, the work required to perform a process, or the heat transferred to the environment therein, are also used to quantify the thermodynamic cost of a process.
The operation of a DFA does not increase the entropy of the computational degrees of freedom of the system, since the map from (r_0 = r_∅, λ) to (r_N, λ) is one-to-one if the full input word is taken into account. If the computational states all have the same energy and intrinsic entropy [4,38], as is typically assumed, the energy and entropy change of the system will thus be zero. Any EP is equal to the heat transferred to the environment, which must be exactly compensated by the work done on the system. All three measures of thermodynamic cost are therefore identical.
Costs considered in analysing the model
We do not consider further the residual EP, nor the costs of incrementing i (both can, in principle, be made arbitrarily small). We also neglect costs associated with actually generating µ(t) itself, as discussed in Ref. [14]. Given these assumptions, whenever we use the term "(minimal) EP", we refer to the (minimal) EP due to the mismatch cost (and its decomposition into marginal and modularity mismatch cost).
Decomposition of EP generated at each iteration
In general, when applying the mismatch cost formula to a computation there are multiple choices for the times of the beginning and end of the underlying process. This choice matters, because the mismatch cost contribution to EP is not additive over time. For example, the drop in KL divergence for a two-timestep computation will generally differ from the sum of the drops in KL divergence for each of those timesteps.
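To see this non-additivity concretely, the following minimal sketch (ours; the matrices and distributions are arbitrary) compares the single drop in KL divergence over a two-step computation with the sum of per-step drops when the same prior is re-imposed at each step, as happens for an iterated protocol:

```python
import numpy as np

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def drop(P, p, q):
    """Drop in KL divergence between p and prior q under map P."""
    return kl(p, q) - kl(P @ p, P @ q)

P1 = np.array([[0.9, 0.2], [0.1, 0.8]])
P2 = np.array([[1.0, 0.5], [0.0, 0.5]])
p = np.array([0.3, 0.7])
q = np.array([0.6, 0.4])   # prior re-imposed at each step

two_step = drop(P2 @ P1, p, q)                  # single mismatch cost
stepwise = drop(P1, p, q) + drop(P2, P1 @ p, q)  # per-step costs
print(two_step, stepwise)                        # generally different
```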
One could consider a single mismatch cost evaluated over the entire computation. Under this choice, none of the details of how the conditional distribution P of the overall computation arises by iterating the conditional distributions of each step are resolved by the mismatch cost. All that matters is the drop in KL divergence between the initial distribution, when the computer is initialized, and the ending distribution, when the output of the computation is determined. This approach has been used to analyze Turing machines [7,15] as well as DFA [39].
An alternative choice is to focus on the EP generated at each iteration of the DFA, with the total EP of the entire computation being a sum of those iteration-specific EPs. Doing so allows us to manifest restrictions on the applied protocol inherent to the iterative process in the mismatch cost, rather than burying them in the residual entropy production of the computation as a whole. Given that we focus on costs arising from the iterative nature of the computation, it is natural to focus on the EP at each iteration of the DFA. We first explicitly write the divergences as a sum over the islands and then a sum over states within islands. Since the update deterministically collapses all probability within an island to a Kronecker delta, p(z^f_i|c_i) = q(z^f_i|c_i). Thus the final two terms in Eq. 1 cancel and we obtain

$$\sigma_i = \sum_{c_i} p(c_i)\, D\big(p(z^0_i|c_i) \,\|\, q_\mu(z^0_i|c_i)\big).$$
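A direct way to evaluate this per-iteration expression is to sum, over islands, the KL divergence between the actual and prior within-island distributions. The sketch below is our own illustration; the state labels and probabilities are arbitrary:

```python
import numpy as np

def iteration_ep(p_z0, island_of, q_prior):
    """Per-iteration mismatch EP:
    sigma_i = sum_c p(c) * D( p(z0 | c) || q(z0 | c) ).

    p_z0      : dict z0 -> probability at the start of iteration i
    island_of : dict z0 -> island label c
    q_prior   : dict z0 -> prior probability within its island
                (summing to 1 over each island, e.g. uniform)"""
    p_c = {}
    for z, pz in p_z0.items():
        p_c[island_of[z]] = p_c.get(island_of[z], 0.0) + pz

    sigma = 0.0
    for z, pz in p_z0.items():
        if pz > 0:
            c = island_of[z]
            sigma += pz * np.log((pz / p_c[c]) / q_prior[z])
    return sigma

# Two local states (r, lam) in one island plus a singleton island;
# uniform prior within the first island (values are illustrative only).
p_z0 = {('b', 0): 0.3, ('b', 1): 0.1, ('a', 0): 0.6}
island_of = {('b', 0): 'c1', ('b', 1): 'c1', ('a', 0): 'c2'}
q_prior = {('b', 0): 0.5, ('b', 1): 0.5, ('a', 0): 1.0}
print(iteration_ep(p_z0, island_of, q_prior))  # non-negative
```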
2. SIMPLIFICATION OF σ^i_mod FOR LPDFA.

To calculate the modularity mismatch cost in an LPDFA, it is helpful to separate λ_{j>i}, the input string variables for j > i, from λ_{j<i}, the variables for j < i. Making that separation and then using the chain rule for mutual information, we obtain an expansion of σ^i_mod. Next, if we express the local state variable z_i in terms of the DFA's state variable and the current input symbol's state variable, apply the chain rule again and cancel terms, we get a form containing two conditional information terms. Due to the deterministic and sequential operation of a DFA, both r_i and r_{i-1} are unambiguously determined by the first i variables in the input string, λ_{j≤i}. As a result, the two conditional information terms in the final line of Eq. 4 are both zero. Applying the chain rule for mutual information twice to the remaining terms and simplifying, we obtain a simpler expression. Again using the fact that the first i variables in the input string, λ_{j≤i}, unambiguously specify both r_i and r_{i-1}, I(r_{i-1}; λ_{j≤i}) = H(r_{i-1}) and I(r_i; λ_{j≤i}) = H(r_i). Thus, using the definition of the conditional entropy and mutual information, the expression can be rewritten, where the last line follows by adding and subtracting H(c_i). Finally, since the deterministic collapse of all inputs to a single output within an island ensures H(z^f_i|c_i) = 0, we can further reduce the modularity mismatch cost to the compact form quoted in the main text. This result establishes the claim made in the main text.
3. ESTIMATING ENTROPY PRODUCTION FOR AN UNCERTAIN INPUT DISTRIBUTION.
The designer's best estimate for the entropy production is obtained by averaging Eq. 13 of the main text over π(v) and π(w|v). Expanding the cross-entropy yields terms involving H(z^0_i|c_i, v, w) and I(z^0_i; w|c_i, v), which are defined with respect to the joint distribution estimated by the designer, p(v, w, z^0_i, c_i) = π(v)π(w|v)p_v(c_i)p_w(z^0_i|c_i); here p(z^0_i|c_i, v) = Σ_w π(w|v)p_w(z^0_i|c_i) is the designer's estimate for the probability distribution within an island, having averaged over π(w|v).

4. MINIMAL DFA ARE NOT NECESSARILY MORE THERMODYNAMICALLY EFFICIENT THAN LARGER DFA.

We claim that, for any distribution of inputs and choice of iterated protocol for the LPDFA in Fig. 7(a) of the main text, it is possible to choose a protocol for the LPDFA in Fig. 7(b) that results in EP that is less than or equal to the EP of the LPDFA in Fig. 7(a). To prove this claim, note that the EP at iteration i for the minimal DFA in Fig. 7(a) is given by considering only the island defined by {(b, 0), (b, 1)}. Thus

$$\sigma_i(p(z^0_i, \lambda_{-i})) = p_i(b, 0) \ln q_\mu(b, 0 \mid b, 0\ \text{or}\ 1) + p_i(b, 1) \ln q_\mu(b, 1 \mid b, 0\ \text{or}\ 1).$$
For the larger LPDFA in Fig. 7(b), the EP at iteration i is entirely due to the two islands defined by {(b, 0), (b, 1)} and {(a, 0), (a, 1)}. Thus

$$\sigma'_i(p'(z^0_i, \lambda_{-i})) = p'_i(b, 0) \ln q'_\mu(b, 0 \mid b, 0\ \text{or}\ 1) + p'_i(b, 1) \ln q'_\mu(b, 1 \mid b, 0\ \text{or}\ 1) + p'_i(a, 0) \ln q'_\mu(a, 0 \mid a, 0\ \text{or}\ 1) + p'_i(a, 1) \ln q'_\mu(a, 1 \mid a, 0\ \text{or}\ 1), \qquad (11)$$

with primed quantities referring to the larger DFA for clarity. Given the well-defined starting state of the LPDFA, it is possible to say that none of these states are occupied at the first step: p'_1(b, 0) = p'_1(a, 0) = p'_1(b, 1) = p'_1(a, 1) = 0. Moreover, assuming the same distribution of input strings to both devices, the related structure of both devices implies p_i(b, 0) = p'_{i+1}(b, 0) + p'_{i+1}(a, 0) and p_i(b, 1) = p'_{i+1}(b, 1) + p'_{i+1}(a, 1). If we then choose protocols for the larger DFA so that q'_µ(b, 0|b, 0 or 1) = q'_µ(a, 0|a, 0 or 1) = q_µ(b, 0|b, 0 or 1) and q'_µ(b, 1|b, 0 or 1) = q'_µ(a, 1|a, 0 or 1) = q_µ(b, 1|b, 0 or 1), we obtain σ'_i(p'(z^0_i, λ_{-i})) = σ_{i-1}(p(z^0_{i-1}, λ_{-(i-1)})) for i > 1, and σ'_i(p'(z^0_i, λ_{-i})) = 0 for i = 1. As a result, for any finite number of iterations N,

$$\sum_{i=1}^{N} \sigma'_i \le \sum_{i=1}^{N} \sigma_i.$$

5. ANY DFA THAT ACCEPTS THE LANGUAGE OF A NON-INVERTIBLE MINIMAL DFA IS ALSO NON-INVERTIBLE.

To prove the claim, recall that a non-invertible DFA has at least one "conflict state" to which multiple input computational states are mapped by the same input symbol under ρ (see Fig. 5 of the main text). We consider the network ρ_λ, defined by the mapping between computational states for an input symbol λ corresponding to such a conflict state in a minimal DFA D_L that accepts the language L. If D_L has M computational states, there are exactly M directed edges in ρ_λ. Thus the existence of a conflict state with more than one inward edge implies at least one state with zero inward edges. The existence of such a state r† in ρ_λ implies that there are no transitions into the equivalence class represented by r† due to the symbol λ.
The states in any other DFA D'_L that accepts L can be partitioned into non-overlapping sets, each of which corresponds to an equivalence class of L (one of the states of D_L; see Refs. [26, 27] of the main text). The transitions between these non-overlapping sets must exactly match the transitions defined by ρ in D_L; otherwise D'_L would fail to sort input strings into equivalence classes of L. Therefore, if r† has no inward edges in the network ρ_λ defined by D_L, none of the states in the set corresponding to the equivalence class represented by r† can have inward edges in the network ρ'_λ defined by D'_L. The existence of at least one state in the network ρ'_λ with zero inward edges implies the existence of at least one conflict state with two or more inward edges in ρ'_λ, since the total number of edges is equal to the total number of states. Therefore any D'_L that accepts the same language as a non-invertible minimal DFA D_L must exhibit conflict states, and must also be non-invertible.

6. THE INVERTIBILITY OF DFAS THAT ACCEPT WORDS IN BASE n THAT ARE DIVISIBLE BY m.

In the context of these DFA, it is helpful to refer to the alphabet using numerical indices. We assume that the integer y is written on the tape in base n so that its most significant figure is λ_1, its second most significant figure is λ_2, etc.
Let y_i be the integer represented by the first i entries in the input word. The DFA will be in the absorbing state r_A after iteration i if and only if y_i mod m = 0. Moreover, after the next iteration, the system will be in r_A if and only if

$$y_{i+1} \bmod m = (\lambda_{i+1} + n(y_i \bmod m)) \bmod m = 0. \qquad (16)$$

The value of y_i mod m is thus sufficient to specify the equivalence class of the word fragment y_i, since it is the only information needed from the first i digits to determine whether the full word is divisible by m. We note, however, that knowledge of y_i is not necessary to specify the equivalence class. In general, words with distinct values of y_i mod m can belong to the same equivalence class. Nonetheless, the equivalence class corresponding to the absorbing state necessarily only contains word fragments with y_i mod m = 0.
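Eq. 16 translates directly into a transition function. The following sketch (ours, not the paper's code) runs the divisibility DFA on a base-n word; the state r tracks y_i mod m, and r == 0 plays the role of r_A:

```python
def delta(r, lam, n, m):
    """Eq. 16: state update when digit lam (base n) is read."""
    return (lam + n * r) % m

def accepts(digits, n, m):
    """True iff the base-n word `digits` represents an integer divisible by m."""
    r = 0
    for lam in digits:
        r = delta(r, lam, n, m)
    return r == 0

print(accepts([1, 1, 0], 2, 3))  # 110 in base 2 is 6, divisible by 3: True
print(accepts([1, 0, 1], 2, 3))  # 101 in base 2 is 5: False
```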
m and n have no common factors
Due to the arguments in Section 5 of the Supplementary Information, it is sufficient to show that some DFA accepting this language is invertible. We may therefore consider a DFA in which there are m states, each one corresponding to a single value of y_i mod m. Let us assume, for the sake of contradiction, that such a DFA is non-invertible. For this to be true, we require that two distinct values of y_i mod m, which would lead to different computational states after iteration i, result in the same value of y_{i+1} mod m for a given λ_{i+1}. Using the expression for y_{i+1} mod m in Eq. 16,

$$(\lambda_{i+1} + nk) \bmod m = (\lambda_{i+1} + nl) \bmod m, \qquad (17)$$

where l, k are two distinct integers between 0 and m − 1. We will assume k > l without loss of generality.
Using the properties of modular arithmetic, we may rewrite Eq. 17 as

$$(n(k - l)) \bmod m = 0. \qquad (18)$$
The left-hand side of Eq. 18 can only be zero if k − l is zero, which would violate the requirement that l ≠ k, or if the union of the prime factors of n and k − l is a superset of the prime factors of m. Since 0 < k − l < m, the prime factors of k − l alone cannot be a superset of the prime factors of m (counted with multiplicity); that is, m cannot divide k − l. Therefore n and m must share at least one prime factor, violating the initial assumption and proving the claim by contradiction.
m and n have at least one common factor
We now prove that the minimal DFA that accepts words written in base n that are divisible by m is non-invertible if n and m have at least one common factor. To do so, it is sufficient to show that at least one non-zero value of y_i mod m results in y_{i+1} mod m = 0 for λ_{i+1} = 0, since this operation corresponds to a non-accepting state being mapped to r_A by λ_{i+1} = 0, while r_A will also be mapped to r_A by λ_{i+1} = 0. In other words, we require

$$(0 + nk) \bmod m = 0 \qquad (19)$$

for some integer k with 0 < k < m. For any n, m that share a common factor g, there will always be a k = m/g for which Eq. 19 holds. Thus any DFA that accepts words written in base n that are divisible by m will be non-invertible if n and m have at least one common factor.
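The two cases above can be checked by brute force: for each symbol λ, the per-symbol map r ↦ (λ + nr) mod m over the m states is a bijection exactly when there is no conflict state. A minimal sketch (ours), confirming the gcd criterion:

```python
from math import gcd

def is_invertible(n, m):
    """Check whether every per-symbol map r -> (lam + n*r) % m is a bijection
    on the m states; a repeated image signals a conflict state (cf. Fig. 5)."""
    for lam in range(n):
        images = {(lam + n * r) % m for r in range(m)}
        if len(images) < m:  # some image repeated -> conflict state
            return False
    return True

# The per-symbol map is a bijection exactly when gcd(n, m) == 1:
for n in range(2, 8):
    for m in range(2, 12):
        assert is_invertible(n, m) == (gcd(n, m) == 1)
print("invertible iff gcd(n, m) == 1 for all cases tested")
```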
FIG. 2: EP in a simple system shows non-trivial dependence on iteration and input word distribution. We plot total EP σ_i, and its decomposition into σ^i_mar and σ^i_mod, for the DFA in Fig. 1(a), which accepts all words that do not contain three or more consecutive bs. In all cases we use a uniform prior q_µ(z^0_i|c_i) within each island, and consider a distribution of input words with fixed length N = 15, but vary the distribution of input words p(λ). (a) Input words have independent and identically distributed (IID) symbols with p(a) = p(b) = 0.5. (b) Input words have IID symbols with p(a) = 0.8 and p(b) = 0.2. (c) Input words have IID symbols with p(a) = 0.2 and p(b) = 0.8. (d) Input words are Markov chains. The first symbol is a or b with equal probability, and subsequently P(λ_{i+1} = λ_i) = 0.8.
FIG. 3: Correlated input words do not generate high modularity costs. (a) A 4-state DFA that processes words formed from a two-symbol alphabet, accepting those formed by concatenating bb and baa substrings. (b) Total modularity cost Σ_{i=1}^{N} σ^i_mod for the DFA in (a) and the DFA in Fig. 1(a), when processing words of length N = 15 that are generated using a Markov chain. Modularity cost is plotted as a function of the probability that subsequent symbols in the word have the same value. (c) Mutual information between the local state and the rest of the input word before (I_0) and after (I_f) the update of iteration i, for the DFA in Fig. 1(a). Data are plotted for P(λ_{i+1} = λ_i) = 0.8 (correlated) and P(λ_{i+1} = λ_i) = 0.5 (independent).
FIG. 5: Decomposition of the DFA in Fig. 1(a) into networks of transitions for each input symbol, ρ_λ. (a) Network for λ_i = a, where the state r = 0 is a conflict state. (b) Network for λ_i = b, where the state r = 3 is a conflict state.
FIG. 6: Two DFA that accept input strings with an even number of bs built from Λ = {a, b}. (a) The minimal DFA for this language; it is invertible. (b) A larger DFA that accepts the same language but is non-invertible; state 0 is a conflict state for ρ_b and state 2 is a conflict state for ρ_a.
"Physics",
"Computer Science"
] |
An elementary proof of Fermat’s last theorem for all even exponents
Abstract An elementary proof that the equation x^{2n} + y^{2n} = z^{2n} cannot have any non-zero positive integer solutions when n is an integer ≥ 2 is presented. To prove that the equation has no integer solutions, it is first hypothesized that the equation has integer solutions. The absence of any integer solutions of the equation is justified by contradicting the hypothesis.
Theorem. The equation
x^n + y^n = z^n (1) has no non-zero integer solutions when the exponent n is an integer > 2.
Previous works. Equation (1) has been of great interest to number theorists for a long time. In 1837, E. E. Kummer [2,9] proved that if (1) has integer solutions then n ≡ (mod 8). Rothholtz [2] extended Kummer's result to prove that (1) has no integer solution if the exponent n is a prime of the form n = t + or one of the variables x, y, z is a prime. In 1977, Terjanian [9] offered a surprisingly simple proof that if (1) is satisfied for non-zero integers then n divides x or y. Equivalently, Terjanian proved Fermat's last theorem for the first case with even exponents. In this paper, a simple proof of the theorem is offered for all even exponents.
Simplification of the theorem. Any integer > 2 is either divisible by 4 or by an odd prime. Fermat's last theorem is already known to be true when n is a multiple of 3 or 4 (see [10]). Again, x, y, z must not have any common factor; otherwise, both sides of the equation can be divided by the common factor to obtain a smaller solution. Also, for consistency, only one of the variables can be even. When z is even, the left-hand side of (1) is congruent to 2 (mod 4) and the right-hand side of (1) is congruent to 0 (mod 4). This leads to an inconsistency. Again, since Fermat's equation deals with the situation where all three variables have like powers, it is enough to prove the theorem when the three variables x, y, z are relatively prime integers, y is even, the exponent n is a prime k > 3 and none of the variables is a prime (see [2]).
Search for integer solutions.
Throughout the paper all the variables are positive integers. By (x, y, z) = 1 we mean that x, y, z are coprime integers and 2 | y. By (a, b) = 1 we mean that a, b are coprime integers and 2 | b.
Hypothesis. Fermat's equation with an even exponent has integer solutions.
Equation (1) can be written as (2), where (2) can have integer solutions, as seen from a simple example. The objective here is to show that the solutions of (2) cannot be of the required form. Since integer solutions of (1) are assumed, by using Terjanian's result [9] one notes k | Y.
Under the assumption that z, g, h are all integers > 0, equation (7) represents a right triangle ZGH whose sides and area are integers, and z is the hypotenuse. Therefore ZGH is a rational right triangle [6]. Equivalently, (g, h, z) is a Pythagorean triple. Consequently, we get (8) and (9), where tan H = h/g and 0 < H < π/2. Substituting (8) and (9) in (1) with n = k, we get x^k + y^k = z^k[cos²(kH) + sin²(kH)].
Since cos²(kH) + sin²(kH) = 1, we conclude that x and y as obtained in (8) and (9) are indeed the parametric solutions of (1).
From (3) we get X = Real[(g + ih)^k] and Y = Imag[(g + ih)^k]. Thus we get (10) and (11), where j = (k − 1)/2 and C_1, C_2, . . . are integers, each divisible by k, with f = +1 if k ≡ 1 (mod 4) and f = −1 otherwise. The sign of f will influence only the orientation of X and Y but will have no impact on the integer solutions of (1). Equations (10) and (11) are rewritten as (12) and (13), respectively, where Q, R are real integers. Since (g, h) = 1 and k | h, we conclude that (g, Q) = 1 and (h, R) = k. Therefore, if X and Y are k-th powers, then g, Q, h, and R must take values of particular forms in which (u, v) = 1, u, r, v, d are integers > 0, and w is an integer > 0. From (12) and (13) we thus obtain (14). The impossibility of (14) will imply the impossibility of (1). By expanding tan kH in terms of tan H (see [5, p. 111]), we get k(d/w)^k = U/V, with (U, V) = 1, U = kd^k, V = w^k, e = h², f = g². Thus we get (15) and (16), where the coefficients C_1, C_2, . . . , C_{p−1} and D_1, D_2, . . . , D_{p−1} are non-zero integers. It will be enough to prove that e is not an integer given that f is an integer. This will imply that h is not an integer given that g is an integer. With this assumption, equations (15) and (16) are transformed into (17) and (18). To prove that both g and h cannot be integers, it is enough to prove that at least one of (17) and (18) cannot be satisfied. Consequently, (14) cannot be satisfied under the given conditions. Therefore, the hypothesis is contradicted. This proves Fermat's last theorem for all even exponents.
"Mathematics"
] |
Factor XII Silencing Using siRNA Prevents Thrombus Formation in a Rat Model of Extracorporeal Life Support
Heparin anticoagulation increases the bleeding risk during extracorporeal life support (ECLS). This study determined whether factor XII (FXII) silencing using short interfering RNA (siRNA) can provide ECLS circuit anticoagulation without bleeding. Adult male Sprague-Dawley rats were randomized to four groups (n = 3 each) based on anticoagulant: (1) no anticoagulant, (2) heparin, (3) FXII siRNA, or (4) nontargeting siRNA. Heparin was administered intravenously before and during ECLS. FXII or nontargeting siRNA were administered intravenously 3 days before the initiation of ECLS via lipidoid nanoparticles. The rats were placed on pumped, arteriovenous ECLS for 8 hours or until the blood flow resistance reached three times its baseline resistance. Without anticoagulant, mock-oxygenator resistance tripled within 7 ± 2 minutes. The resistance in the FXII siRNA group did not increase for 8 hours. There were no significant differences in resistance or mock-oxygenator thrombus volume between the FXII siRNA and the heparin groups. However, the bleeding time in the FXII siRNA group (3.4 ± 0.6 minutes) was significantly shorter than that in the heparin group (5.5 ± 0.5 minutes, p < 0.05). FXII silencing using siRNA provided simpler anticoagulation of ECLS circuits with reduced bleeding time as compared to heparin. http://links.lww.com/ASAIO/A937
Extracorporeal life support (ECLS) is a lifesaving therapy for patients with cardiac and/or respiratory failure. 1 Unfortunately, ECLS is plagued by a high rate of both bleeding and thrombotic complications. [1][2][3] Currently, unfractionated heparin is the most widely used anticoagulant. 4 Although its use can delay thrombotic complications, such as oxygenator failure, it also increases the risk of intracerebral, pulmonary, and surgical-site bleeding. 1,5,6 Therefore, major improvements in anticoagulation are needed to reduce these complications and the need for careful coagulation monitoring.
Thrombus formation during ECLS is initiated primarily by three synergistic processes: (1) activation of factor XII (FXII) and the intrinsic coagulation cascade, (2) platelet binding and activation to fibrinogen that is adsorbed to the artificial surfaces of the circuit, and (3) shear-induced activation of platelets caused by high-resistance circuit components and the pump. 7 Each of these modes of activation accelerates activation of the common coagulation cascade and, ultimately, fibrin formation. Heparin anticoagulation inhibits fibrin formation by inhibiting thrombin and activated factor X of the common coagulation cascade. Direct inhibition of FXII could serve a similar role, with the advantage of not inhibiting tissue hemostasis. FXII has little to no role in normal hemostasis, and thus, people who do not produce FXII live normal lives without excessive bleeding. 8,9 This makes FXII or activated FXII (FXIIa) inhibition an attractive, safe alternative to heparin during ECLS.
To date, several FXII and FXIIa inhibitors have been developed including (1) natural inhibitors, (2) small-molecule inhibitors, (3) monoclonal antibodies, (4) antisense oligonucleotides (ASO), and (5) aptamers. [10][11][12][13][14][15][16][17][18] Among these, the recombinant fully human FXIIa-blocking antibody 3F7 and small-molecule inhibitor FXII900 have shown that FXIIa inhibition can significantly reduce thrombus formation without bleeding complications during ECLS. 11,17,18 An alternative, long-acting means is to eliminate hepatic FXII synthesis for days to weeks with a single administration using oligonucleotide-based drugs, including ASO and short interfering RNA (siRNA). In one study, an FXII-specific ASO attenuated catheter-induced thrombosis up to 35 days in rabbits. 19 However, this ASO has a delayed onset of action (4 weeks) that renders it unusable for ECLS. In contrast, a single siRNA dose can reduce plasma protein concentrations within one day. siRNA is a synthetic RNA duplex consisting of two unmodified annealed 21-23-mer oligonucleotides. Once siRNA enters the cytosol, it binds to the RNA-induced silencing complex and degrades target messenger RNA (mRNA), preventing mRNA translation into a specific protein. 20 This study determined whether FXII silencing using siRNA can provide sufficient anticoagulation for ECLS, maintain normal tissue coagulation, and reduce the need for careful anticoagulation monitoring.
Methods
This study was approved by the Allegheny Health Network Institutional Animal Care and Use Committee (Project No. 1071). Adult male Sprague-Dawley rats (Taconic Biosciences, Germantown, NY) were used and received humane care in compliance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. 21 FXII siRNA was purchased from Sigma-Aldrich (SASI_Rn01_00106775_s, St. Louis, MO). MISSION siRNA Universal Negative Control #1 (SIC001, Sigma-Aldrich) was used as an untargeted siRNA control. Lipidoid nanoparticles (LNPs) (Method 1, Supplemental Digital Content 1, http://links.lww.com/ASAIO/A933) were used as the delivery vehicle for siRNA, as they have been previously identified as an effective and safe siRNA delivery material. [22][23][24][25] Rats were randomized to four groups (n = 3 each) based on the means of anticoagulation: (1) no anticoagulant, (2) heparin, (3) FXII siRNA, or (4) nontargeting siRNA. In group one, no anticoagulant was administered. In group two, intravenous 50 IU/kg heparin was administered to achieve an activated clotting time (ACT) of 180-250 seconds before initiating ECLS. Afterward, heparin was administered continuously at 50 IU/kg/h for 1 hour and then at 25 IU/kg/h for 7 hours. 26 In groups three and four, 3 mg/kg of FXII or nontargeting siRNA was administered 3 days before ECLS. Before siRNA administration, a blood sample was taken to evaluate FXII concentration (0.3 ml), ACT (15 µl), and activated partial thromboplastin time (aPTT) (0.4 ml). The dose of 3 mg/kg and timing of ECLS were determined based on a preliminary (n = 2) kinetic study evaluating the effect of siRNA dose and time on FXII concentration, ACT, and aPTT (Figures 1 and 2, Supplemental Digital Content 2, http://links.lww.com/ASAIO/A934). For the presentation and statistical analysis, FXII concentration was normalized to the percentage of the baseline FXII concentration before siRNA delivery.
The rats were placed on pumped arteriovenous ECLS by cannulating the left carotid artery for access and the right jugular vein for return, with blood flow maintained at 2 ml/min. The ECLS circuit was described previously. 26 Briefly, it consists of small-bore tubing and a custom-designed mock-oxygenator (Figures 3 and 4, Supplemental Digital Content 3, http://links.lww.com/ASAIO/A935) and has a 2.5-ml priming volume. During ECLS, mean arterial pressure (MAP), heart rate (HR), and peripheral oxygen saturation (SpO2) were monitored continuously by using the BIOPAC MP150A-CE data acquisition system (BIOPAC Systems, Inc., Santa Barbara, CA) and a Heska VetOx Plus 4800 Vital Signs Monitor (Heska, Loveland, CO). MAP was maintained over 60 mm Hg by administering norepinephrine (0-0.5 µg/kg/min). The targeted HR was 250-350 beats/min, and the targeted SpO2 was > 94%. Any rats not meeting these criteria were euthanized. In total, two rats in the FXII siRNA group were euthanized because of air embolism during right jugular vein cannulation. The mock-oxygenator inlet and outlet pressures were monitored continuously using the BIOPAC MP150A-CE data acquisition system and recorded at 0.05 (baseline), 0.08, 0.16, 0.5, 1, 2, 3, 4, 5, 6, 7, and 8 hours after flow initiation. Blood flow resistance was calculated in standard fashion as (inlet pressure − outlet pressure)/(blood flow).
The bleeding time was also measured before and at the end of ECLS, as previously described. 26 Blood samples were also taken after cannulation for ECLS and at the conclusion of ECLS to measure ACT (15 µL), aPTT (0.4 ml), and platelet count (Plt) (0.3 ml). Arterial blood gases were not measured due to restrictions on blood sampling volumes. For presentation, the Plt was corrected for hemodilution during circuit priming using the formula corrected Plt = raw Plt × (hematocrit before ECLS)/ (hematocrit at the end of ECLS).
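For reference, the resistance and hemodilution-correction formulas above amount to the following sketch (the values are illustrative, not data from the study):

```python
def blood_flow_resistance(p_in_mmHg, p_out_mmHg, flow_ml_min):
    """Resistance = (inlet pressure - outlet pressure) / blood flow."""
    return (p_in_mmHg - p_out_mmHg) / flow_ml_min

def corrected_platelets(raw_plt, hct_before, hct_end):
    """Correct the platelet count for hemodilution during circuit priming."""
    return raw_plt * hct_before / hct_end

print(blood_flow_resistance(80.0, 40.0, 2.0))  # mmHg per (ml/min)
print(corrected_platelets(600e3, 0.45, 0.38))  # platelets per microliter
```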
The experiment was continued for 8 hours or until the mock-oxygenator blood flow resistance reached three times its baseline resistance, at which point it was considered failed. The circuit was disconnected and gently flushed with heparinized saline (2 IU/ml), and the rats were euthanized using pentobarbital (175 mg/kg). The saline in the mock-oxygenator was flushed using air for 10 seconds before and after the experiment, and the amount of flushed volume was measured by weight (g) and converted to volume using the density of saline (1.0 g/ml). Then the difference between the pre- and post-experimental volume was determined to be the clot volume. From the clot volume, the percent of the mock-oxygenator filled with the clot was calculated.
Statistical Analysis
All data are presented as mean ± standard error. A paired t-test was used to examine the effects of the siRNA and the LNPs delivery vehicle on the FXII level, ACT, and aPTT. A one-way analysis of variance (ANOVA) was used to examine the effect of different anticoagulation groups on ECLS thrombus volume. A linear mixed model was used to assess differences in ACT, aPTT, blood flow resistance, and bleeding time during ECLS. The anticoagulant type (FXII siRNA, heparin, etc.), time, and the interaction of anticoagulant and time were used as fixed effects, and the animal ID was used as a random effect. For post hoc correction, Bonferroni correction was used for all pairwise comparisons. A one-way ANOVA was used to compare the Plt before ECLS and the loss in platelets during ECLS between groups. The Kaplan-Meier method was performed to compare mock-oxygenator failure rates, using the log-rank test to assess significant differences in failure rate. All statistical analyses were performed using SPSS software (version 27.0; SPSS Inc., Chicago, IL). All statistical tests were two-sided, and a p value < 0.05 was considered statistically significant.
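As a sketch of how such a linear mixed model could be fit in open-source software rather than SPSS, the following uses statsmodels; the file name and column names are hypothetical placeholders, not the study's actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per measurement, with columns 'aptt',
# 'group' (anticoagulant), 'time' (before / end of ECLS), 'animal_id'.
df = pd.read_csv("ecls_measurements.csv")  # hypothetical file name

# Fixed effects: anticoagulant, time, and their interaction;
# random intercept per animal, as described above.
model = smf.mixedlm("aptt ~ C(group) * C(time)", data=df,
                    groups=df["animal_id"])
result = model.fit()
print(result.summary())
```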
Results
The mean weight of the rats was 549 ± 5 g in the no anticoagulant group, 516 ± 25 g in the heparin group, 586 ± 24 g in the siRNA group, and 538 ± 6 g in the nontargeting siRNA group. The differences in weight were not statistically significant (p = 0.11).
Three days after a single intravenous dose of 3 mg/kg, FXII siRNA reduced the plasma FXII concentration to 26% of its baseline level. In contrast, 3 mg/kg of nontargeting siRNA had no effect on plasma FXII concentration (102% ± 3% after treatment, p = 0.48), indicating that the reduction of FXII using FXII siRNA was specifically due to the siRNA and not the LNPs delivery vehicle (Figure 1A). The nontargeting siRNA also had no effect on aPTT (20.5 ± 0.6 seconds before treatment and 20.7 ± 2.7 seconds after treatment, p = 0.91) (Figure 1B). However, the nontargeting siRNA unexpectedly caused a statistically significant ACT prolongation from 75 ± 4 seconds before treatment to 123 ± 2 seconds 3 days after treatment (p < 0.05, Figure 1C). Therefore, the combination of no effect on FXII concentration or aPTT with an increase in the ACT suggests an effect on the coagulation system outside the coagulation cascade factors.
The aPTT during ECLS for each anticoagulant group is shown in Figure 2A. The aPTT before and at the end of ECLS in the FXII siRNA group was 93.6 ± 10.0 and 102.3 ± 8.9 seconds, respectively, which were statistically prolonged when compared to the no anticoagulation (22.2 ± 1.8 seconds before and 26.5 ± 3.5 seconds at the end of ECLS, p < 0.05) and the nontargeting siRNA groups (20.7 ± 2.7 seconds before and 23.3 ± 2.9 seconds at the end of ECLS, p < 0.05) (Figure 2A). Similarly, the use of heparin created a statistically significant increase in aPTT (135.2 ± 31.3 seconds before and 46.9 ± 5.2 seconds at the end of ECLS) compared to no anticoagulation (p < 0.05) and nontargeting siRNA (p < 0.05). There was no significant difference in the aPTT between the FXII siRNA and the heparin groups (p > 0.99). The nontargeting siRNA group showed no significant difference in aPTT when compared to the no anticoagulant group (p > 0.99).
The ACT for each anticoagulant group is shown in Figure 2B. The ACT for the heparin group was significantly prolonged (202 ± 8 seconds before and 170 ± 5 seconds at the end of ECLS) when compared to the FXII siRNA (139 ± 2 seconds before and 137 ± 11 seconds at the end of ECLS, p < 0.05), the nontargeting siRNA (123 ± 2 seconds before and 126 ± 3 seconds at the end of ECLS, p < 0.05), and the no anticoagulant groups (104 ± 9 seconds before and 120 ± 3 seconds at the end of ECLS, p < 0.05). The ACT before and at the end of ECLS in the FXII siRNA group was also prolonged compared to that in the nontargeting siRNA and the no anticoagulant groups, but without statistical significance (p > 0.99 and p = 0.14, respectively).

Figure 3A shows mock-oxygenator blood flow resistance throughout the experiments. The no anticoagulant group exhibited a significantly greater mock-oxygenator blood flow resistance than the FXII siRNA (p < 0.05) and the heparin groups (p < 0.05). For all rats not receiving an anticoagulant, mock-oxygenator resistance rapidly increased to greater than three times the baseline resistance before 10 minutes had elapsed. However, the blood flow resistance in the FXII siRNA group did not change significantly during the experiment (p = 0.94), and no devices reached three times the baseline resistance. No heparin group mock-oxygenators reached three times the baseline blood flow resistance, and there was no significant difference between the FXII siRNA and the heparin groups' resistance (p > 0.99). Unexpectedly, the resistance of all devices in the nontargeting siRNA group did not increase as rapidly as in the no anticoagulant group. The mock-oxygenator failure curve exhibiting significant differences (p < 0.05) between the four groups is shown in Figure 3B. The mean time from initiation of ECLS until oxygenator failure in the no anticoagulant and the nontargeting siRNA groups was 7 ± 2 and 113 ± 27 minutes, respectively.

Figure 3C shows thrombus volume in the mock-oxygenator after circuit detachment. The thrombus volume in the FXII siRNA, heparin, no anticoagulant, and nontargeting siRNA groups was 5% ± 5%, 0% ± 2%, 49% ± 4%, and 15% ± 2%, respectively (p < 0.05). The thrombus volume in the FXII siRNA group was significantly lower than that in the no anticoagulant group (p < 0.05). There was no significant difference in the thrombus volume between the FXII siRNA and the heparin groups (p > 0.99). The thrombus volume in the nontargeting siRNA group was also significantly lower than that in the no anticoagulant group (p < 0.05), mirroring the unexpected ACT and blood flow resistance results. Last, the change in platelet counts during ECLS is shown in Figure 5, Supplemental Digital Content 4, http://links.lww.com/ASAIO/A936.

The bleeding time for each anticoagulant group is shown in Figure 4. The bleeding time for the heparin group was significantly prolonged (6.5 ± 0.3 minutes before and 5.5 ± 0.5 minutes at the end of ECLS) compared to the FXII siRNA (2.8 ± 0.4 minutes before and 3.4 ± 0.6 minutes at the end of ECLS), the nontargeting siRNA (1.7 ± 0.4 minutes before and 2.0 ± 0.3 minutes at the end of ECLS), and the no anticoagulant groups (0.7 ± 0.2 minutes before and 0.8 ± 0.2 minutes at the end of ECLS) (p < 0.05). The bleeding time before and at the end of ECLS in the FXII siRNA group was significantly prolonged compared to the no anticoagulant group (p < 0.05).
There was no significant difference between the nontargeting siRNA and the no anticoagulant groups (p = 0.45) and between the FXII siRNA and nontargeting siRNA groups (p = 0.18).
Discussion
This study determined that FXII siRNA provided sufficient anticoagulation for at least 8 hours of ECLS. A single intravenous dose of siRNA (3 mg/kg) administered 3 days before the initiation of ECLS reduced plasma FXII concentration to 26% of the normal level, and this prevented thrombus formation within the mock-oxygenator over 8 hours, similarly to heparin anticoagulation. At the same time, it had a lesser effect on prolonging tissue bleeding when compared to heparin. It should also be noted that direct FXII inhibitors have no effect on tissue bleeding times. 11,17,18 Therefore, further work should be performed to optimize the siRNA sequence and dosing to eliminate any effect on tissue bleeding. Once optimized, replacing heparin with FXII silencing could reduce bleeding complications that occur in approximately 15% to 30% of ECLS cases. 1,5,6,27,28 Moreover, the ability of FXII siRNA to provide anticoagulation for multiple days would greatly simplify anticoagulant management. During clinical ECLS management, heparin is given continuously and titrated to balance the risk of thrombotic and bleeding complications. This requires regular measurement of clotting times and subsequent adjustment of heparin infusion rates. In the current study, a single intravenous dose of siRNA provided a significant effect that lasted for 3 days and did not require management or titration during ECLS. Clinically, siRNA could be repeated on some regular schedule with little or no required measurement of clotting times. The combination of sufficient ECLS anticoagulation and a reduction in bleeding complications will make ECLS safer to employ, and the simplicity of dosing once every 3-4 days would simplify patient care and reduce ECLS costs.
Previous studies of shorter-acting FXIIa inhibitors have provided similar results. Wilbs et al. 17 demonstrated that FXII900, a bicyclic peptide inhibitor of FXIIa, reduced thrombus formation and blood flow resistance in oxygenators over 4 hours in the rabbit veno-venous ECLS model when compared to no anticoagulation. In that study, the oxygenator thrombus volume was significantly lower (p < 0.05) for the FXII900-treated rabbits (10%) than for the untreated rabbits (37%). 17 Similarly, Larsson et al. found that the recombinant human FXIIa-blocking antibody (3F7) significantly reduced thrombus formation and increases in oxygenator pressure gradient in a rabbit veno-arterial ECLS model during 6-hour testing when compared to no anticoagulant use. 11 Of similar importance, FXII900 and 3F7 preserved normal tissue bleeding times, unlike heparin. 11,17,18 In contrast, FXII silencing also provides several days of anticoagulation with a single dose. The FXII siRNA used in this study provided at least 3 days of effective silencing in rats at a dose of 3 mg/kg. FXII900 had to be continuously administered as it was eliminated by the kidney, although it could be bound to long-chain polymers (e.g., polyethylene glycol, polycarboxybetaine) to increase its half-life to as long as a few days. In contrast, siRNA provides a long-lived reduction in FXII concentrations. In our preliminary study, siRNA showed a dose-dependent reduction of plasma FXII and prolongation of aPTT on day 3, but these returned to nearly baseline values on day 7. Further optimization of the siRNA sequence and delivery vehicle could lead to more effective, longer-lasting silencing. Cai et al. observed that 0.01 mg/kg of siRNA provided an 82% reduction in FXII and 1 mg/kg of siRNA provided a 99% reduction in FXII on day 7 in rats. 29 Similarly, a single dose of GalNAc-siRNA targeting FXII (ALN-F12) provided a dose-dependent maximal FXII suppression of 51-55% at 0.3 mg/kg and 93% at 1 mg/kg on day 10, lasting for 64 days. 30,31 This holds the promise for single dosing during traditional ECLS or infrequent dosing during extended ECLS.
Additionally, the current study demonstrated an increase in the bleeding time in rats treated with the FXII siRNA, unlike FXIIa inhibition using 3F7 or FXII900 or silencing via ALN-F12. 11,17,18,31 Similarly, Cai et al. also reported that rats treated with FXII siRNA delivered via cationic LNPs displayed a statistically significant increase in bleeding time. 29 Therefore, there may be direct side effects on platelet function or off-target gene silencing of other proteins in the hemostatic system when using this delivery method. [32][33][34] Ultimately, the time course and effectiveness of the FXII silencing will depend on the delivery vehicle, siRNA sequence, and potential chemical modification of the siRNA. In the current study, FXII siRNA was administered 3 days before ECLS because the half-life of FXII is 50-70 hours, and it would take at least 2 days for circulating FXII to disappear completely. 8 However, Stavrou et al. demonstrated very little difference in FXII levels between 1 and 3 days after treatment. 35 Therefore, FXII concentrations may decrease within 24 hours after treatment. Clinically, a reasonable use of FXII siRNA would be to administer it as soon as a decision has been made to put the patient on ECMO; heparin or a short-acting FXII inhibitor would also be provided before cannulation to give the siRNA sufficient time to decrease the FXII concentration. 17,18 In patients using ECMO as a bridge to lung transplantation, administration of FXII siRNA 1 day before cannulation may be feasible.
This study had several limitations. The ability to anticoagulate the ECLS circuit was evaluated in a miniaturized system with components different from a clinical ECLS system, and for a period of only 8 hours in the rat model. An initial, short-term, small animal study was deemed necessary, however, to provide the proof of concept prior to embarking on significantly more expensive long-term studies in larger animals. Furthermore, we have not yet determined the optimal delay between siRNA delivery and the initiation of ECLS, nor fully evaluated the side effects of LNPs. The nontargeting siRNA showed mildly prolonged ACT and a mild reduction in ECLS thrombus formation for approximately 2 hours. These LNPs demonstrated no toxicity in mice and rats. 22,23 However, a more detailed examination of their effects on the coagulation cascade might reveal some unknown mild effects, as seen in our study.
In future studies, the siRNA sequence, doses, and delivery vehicles should first be optimized for this application to increase efficacy and decrease off-target effects, and the time course of FXII decrease should be investigated. Thereafter, large-animal studies could be used to evaluate anticoagulation and anti-inflammatory efficacy over a typical course of ECLS for acute respiratory failure (5-7 days) or bridge to transplant (2-4 weeks) using a commercial ECLS system. FXII siRNA might also reduce inflammation by reducing contact activation and bradykinin production. 8,9 Additionally, the optimal use of anticoagulant monitoring (aPTT, etc.) should be studied, as less frequent monitoring would likely be necessary due to the longer duration of action and decreased bleeding risk. Last, the siRNA efficacy should be investigated during ECLS in patients with hypercoagulative and hyperinflammatory states.
Conclusions
FXII silencing can provide simpler, long-acting anticoagulation in ECLS circuits. Owing to the reduced bleeding time and the reduced need for coagulation monitoring, FXII siRNA would enable simpler and safer ECLS.
"Medicine",
"Biology"
] |
On delayed choice and quantum erasure in two-slit experiment for testing complementarity
The principle of complementarity is one of the cornerstones of quantum theory. The aim of this study was to advance our understanding of complementarity by analyzing the role of delayed choice and quantum erasers in two-slit experiments, and by proposing experiments for verifying the analysis. The analysis is based on models consisting of measurable spaces and probability measures involved in the experiments. The main findings are as follows: (a) The complementarity principle manifests itself in such a way that wave and particle behaviors cannot be simultaneously observed almost surely with respect to any single, fixed measure. (b) Described by different measures, complementary properties can coexist in the same experimental setup. (c) Which-way information will not preclude or erase interference fringes. (d) Delayed choice and quantum erasers are irrelevant to testing complementarity. (e) It is possible for us to know through which slit each quantum object passed almost surely with respect to the measure corresponding to the slit while the interference pattern is intact. Based on the experiments analyzed, realizable experiments are proposed for verifying the above results.
Introduction
Is it possible for us to know, with probability one, through which slit each quantum object (particle or photon) passed without disturbing interference fringes? This question concerns complementarity illustrated by the famous Young's double-slit experiment, which embodies the counterintuitive features of quantum physics [1]. It is well known from the literature that the standard answer to this question is no; an affirmative answer is considered inconsistent with the complementarity principle. The negative answer is usually explained by Heisenberg's uncertainty relation: simultaneous observation of wave and particle behaviors in the same experimental setup violates the uncertainty relation and hence is prohibited [2,3].
The uncertainty relation is not the only explanation, however. For instance, researchers have proposed an alternative explanation based on two-slit quantum-eraser experiments for testing complementarity [4][5][6]. According to the alternative explanation, which-way (particle-like) information precludes or erases interference (wave-like) fringes, but the precluded or erased interference pattern can be restored by erasing which-way information after a particle or a signal photon has been detected. For the experiment introduced in [4], the choice of erasing or not erasing which-way information is made by experimenters. For the experiment reported in [6], the 'choice' is made randomly by an idler photon, which is the entangled twin of the signal photon.
The alternative explanation was not accepted by some researchers; according to their argument [7], the uncertainty relation is relevant to the double-slit experiment, for any which-way measurement causes a momentum transfer, which is large enough to violate the uncertainty relation and destroy interference fringes. In the literature, this issue was addressed by observing a weak-valued momentum-transfer distribution [8]. This distribution is obtained by weak measurements [9][10][11][12][13]. According to the experiment reported in [8], interference fringes are destroyed by which-way information measured in the experiment; this is in agreement with the alternative explanation, because weak-valued probabilities can be negative [14], and the momentum-transfer distribution has a variance consistent with zero. However, this momentum-transfer distribution also supports the argument against the alternative explanation; as a weak-valued distribution [8], it exhibits features characteristic of both the alternative explanation given in [4][5][6] and the argument in [7].
The present paper concentrates solely on two experiments. One is the two-slit atom interference gedanken experiment introduced in [4]; the other is the real experiment with pairs of entangled signal-idler photons reported in [6]. The former illustrates the notion of 'delayed-choice quantum eraser' straightforwardly. The latter is a full demonstration of the original scheme of the delayed-choice quantum eraser proposed in [5]. Each of the two experiments involves several probability measures on their common measurable space. However, the measures involved in both experiments are not specified, but the corresponding probability densities are used to explain the experimental results, which makes the explanation questionable.
The aim of this study was to advance our understanding of complementarity by analyzing the role of delayed choice and quantum erasers in the above experiments, and by proposing realizable experiments for verifying the analysis. The analysis is based on models consisting of measurable spaces and probability measures involved in the experiments. The findings reported in this paper are as follows. Which-way information can be used to specify two exclusive subsets of spots produced by quantum objects on a screen; each of the subsets serves as the domain of a random vector (or variable) representing coordinates of spots only in the subset. A probability conditional on this subset determines the density of the corresponding random vector (or variable). The density describes a pattern formed by spots of the subset, and characterizes the particle-like behavior. Addition of such densities is invalid, because the sum of the densities implies a false assumption, and violates the total probability theorem. The well-known interference pattern is described by the density of a random vector (or variable) representing coordinates of spots in a set different from those specified by which-way information. This density characterizes the wave-like behavior. The complementarity principle manifests itself in such a way that wave and particle behaviors cannot be simultaneously observed almost surely with respect to any single, fixed measure. Nevertheless, based on the total probability theorem, it is shown that the wave-like aspect and the particle-like aspect are described by different measures, and can coexist in the same experimental setup. Which-way information will not preclude or erase interference fringes observed in the experiments. Delayed choice and quantum erasers are irrelevant to testing complementarity. Without violating the complementarity principle, the experiments reveal a possibility for us to know through which slit each quantum object passed almost surely with respect to the measure corresponding to the slit.
The gedanken experiment [4] and the real experiment [6] are analyzed in section 2 and section 3, respectively. Based on the experiments analyzed, realizable experiments are proposed in section 4 for verifying the results reported in this paper. The paper is concluded in section 5.
On delayed choice in two-slit quantum-eraser experiment with atoms
The gedanken experiment [4] has demonstrated convincingly a way to bypass Heisenberg's positionmomentum uncertainty obstacle. A brief description of this experiment is as follows [3,4]. A quantum-eraser detector sits between two micro-maser cavities, one in front of each slit. Two shutters shield the cavities from each other (figure 1).
The cavities act as which-way detectors. Initially, both cavities are empty and the shutters are closed. Atoms are sent to the apparatus one at a time, and collimated by a series of wider slits before they arrive at the narrow slits where the interference pattern originates. Immediately preceding the cavities, a laser beam is introduced to excite collimated atoms. After an atom passes through the laser beam, it absorbs a short-wavelength photon from the laser and becomes excited. The excited atom emits a longer-wavelength photon in one of the cavities, and goes through the corresponding slit with its motion unaffected. Hence the longer-wavelength photon carries potential which-way information. The quantum-eraser detector is used to detect the 'tell-tale' photon emitted by the atom. Before the shutters are opened, the wave associated with the photon consists of two partial waves, one in each cavity. After the atom has hit the screen and produced a spot there, the potential which-way information becomes actual, and experimenters can erase the which-way information by opening the shutters. Once the shutters are opened, the two partial waves become a single one. But the tell-tale photon may or may not be detected, depending on whether the partial waves reinforce or extinguish each other at the site of the quantum-eraser detector. The above procedure is then repeated for the next atom. As this process continues, a sequence of spots will be produced. The spots are elementary outcomes of the experiment.
Model of experimental outcomes
To analyze the role of delayed choice and the quantum eraser in the above experiment, it is necessary to specify the involved probability measures on their common measurable space to describe the experimental outcomes. Let (S, A) be the measurable space, where S is the sample space consisting of all the spots produced by atoms of an ensemble prepared for the experiment, and A the σ-algebra of subsets of S. Let i label the slits, i = 1, 2. For a specific i, denote by H_i(s) a proposition, which characterizes a property of a spot s, where H_i(s) means that s is produced by an atom that went through slit i. This proposition concerns the particle-like behavior of an atom. According to whether H_i(s) is true or false for s, all the spots fall into two subsets

S_1 = {s ∈ S : H_1(s)}, S_2 = {s ∈ S : H_2(s)},

which constitute a partition of S. Spots in S_i carry (actual) which-way information, because they imply a 'slit-spot' correlation between slit i and a spot s produced by an atom that passed through slit i. Denote this correlation by C_i(s) if H_i(s) is true for given i and s. Besides the property characterized by H_i(s), each spot s also has a property characterized by one of two propositions, H_+(s) or H_−(s), where H_+(s) means that s is produced by an atom with its tell-tale photon detected when the shutters are opened, and H_−(s) is the negation of H_+(s). The two propositions concern wave-like behaviors of atoms. Write

S_+ = {s ∈ S : H_+(s)}, S_− = {s ∈ S : H_−(s)},

which constitute a partition of S different from that formed by S_1 and S_2. Spots in S_± imply a 'spot-photon' correlation between s and the tell-tale photon of an atom that produced s. Similarly, if H_+(s) or H_−(s) is true, the correlation is represented by C_+(s) or C_−(s). Let P be a probability measure on (S, A). This measure characterizes uniquely the distribution of spots on the screen, and does not depend on specific coordinate systems. By definition, probabilities are values in the interval [0, 1] assigned by a probability measure to events of a random experiment. Although probabilities are often expressed by numerical values, they are not pure numbers, because probabilities are calculated according to rules different from those for calculating pure numbers. To calculate probabilities, it is necessary to distinguish different events, especially when their probabilities are numerically identical. If probabilities are expressed by numerical values without specifying the associated events, the distinction is impossible. This can cause confusion. Fortunately, probabilities can also be expressed by symbols. When it is necessary to avoid confusion, it is helpful to express probabilities by symbols, indicating the associated events explicitly. In this paper, the probabilities of S_i and S_± are expressed by the symbols P(S_i) and P(S_±). Their numerical values satisfy

P(S_1) + P(S_2) = 1 and P(S_+) + P(S_−) = 1.

Denote by V a random vector defined for each s ∈ S, with V(s) representing the coordinate of s. Let P_V and f_V be the distribution and density of V on the measurable space (R³, ℛ³), where R³ is the set of all ordered triples r = (x, y, z), and ℛ³ the σ-algebra of subsets of R³. Write I_r = I_x × I_y × I_z, where I_x = (x, x + dx] is an interval of an infinitesimal length dx on the real line; similarly I_y = (y, y + dy] and I_z = (z, z + dz]. The probability measure P on (S, A) determines f_V and P_V uniquely. By definition, G(r) = {s ∈ S : V(s) ∈ I_r}. If s belongs to G(r) for some r, the only information contained in G(r) is that V(s) belongs to I_r. To use the correlation C_i(s), let V_i be the restriction of V to S_i.
To use the correlation C_±(s), denote by V_+ and V_− the restrictions of V to S_+ and S_−. By definition, if U is a random vector on S, and S′ is a subset of S with P(S′) > 0, the restriction of U to S′ is the random vector defined only for s ∈ S′, with the same values as U there.
Spots in G_i(r) imply the correlation C_i(s), and spots in G_±(r) imply the correlation C_±(s), as shown by the following simple properties of G_i(r) and G_±(r):

G_i(r) = {s ∈ S_i : V_i(s) ∈ I_r} ⊂ G(r), G_±(r) = {s ∈ S_± : V_±(s) ∈ I_r} ⊂ G(r).
The distributions and densities of V_i and V_± are given by conditional probabilities determined by P on (S, A). Similar to P, the measures P_i and P_± are independent of specific coordinate systems. By definition, for any event E in A,

P_i(E) = P(E ∩ S_i) / P(S_i);    (1)

for P_±, a similar result follows immediately.
The distributions and densities of V, V_i, and V_± are defined on (R³, ℛ³), with r serving here as coordinates of spots. Properties of a spot s, such as those characterized by H_1(s) and H_+(s), cannot be described by its coordinate. To describe the properties of s, it is necessary to use P_i and P_± determined by P. However, because these measures are all defined on (S, A), the meaning of 'with probability one' or 'almost surely' becomes ambiguous. A convention in measure theory adopted here is helpful to prevent confusion and make the exposition easier: if, for each s ∈ S, Q(s) is a proposition concerning s, and if P′ is a probability measure on (S, A) such that P′{s ∈ S : Q(s)} = 1, then Q(s) is said to hold almost surely with respect to P′. Consider the question raised at the beginning of section 1: Is it possible for us to know, with probability one, through which slit each quantum object (particle or photon) passed without disturbing interference fringes? If 'probability' in the phrase 'with probability one' is P (or any single, fixed measure), it is impossible to tell through which slit each atom passed almost surely with respect to P. By (1), P_1(S_1) = 1, i.e., H_1(s) holds almost surely with respect to P_1 for s ∈ S_1. Because s must be somewhere on the screen, there is some r such that s ∈ G(r). Similarly, for each s ∈ S, one and only one of the two relations below must hold:
s ∈ G_1(r) for some r, or s ∈ G_2(r) for some r.
Spots in different subsets of S may constitute different patterns. To avoid saying clumsily 'a pattern formed by all spots in a set' very often in the following analysis, let Π(E) stand for a pattern formed by all spots in a set E ⊂ S. This pattern is described by the density of a random vector, which represents coordinates of s ∈ E. For instance, Π(S) is formed by all s ∈ S and described by f_V.
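To make the role of these conditional densities concrete, the following is a minimal numerical sketch in Python (not part of the experiment in [4]; the screen density f and the labelling rule playing the role of H_+(s) are hypothetical, chosen only for illustration). It samples spot coordinates, sorts the spots into two exclusive subsets, and checks two claims made above: the density of all spots is the P(S_±)-weighted mixture of the conditional densities, while the unweighted sum of the conditional densities is not a probability density.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Toy screen density on [-pi, pi] (hypothetical, for illustration only).
def f(x):
    return (1 + np.cos(6 * x)) / (2 * np.pi)

# Rejection sampling of spot coordinates; the maximum of f is 1/pi.
xs = np.empty(0)
while xs.size < N:
    cand = rng.uniform(-np.pi, np.pi, N)
    keep = rng.uniform(0, 1 / np.pi, N) < f(cand)
    xs = np.concatenate([xs, cand[keep]])
xs = xs[:N]

# Hypothetical labelling rule playing the role of H_+(s): each spot is put
# into S_+ or S_- with an x-dependent probability (any rule would do here).
plus = rng.uniform(0, 1, N) < 0.5 * (1 + np.sin(3 * xs))

bins = np.linspace(-np.pi, np.pi, 61)
w = bins[1] - bins[0]

dens_all = np.histogram(xs, bins)[0] / (N * w)                   # f_V
dens_plus = np.histogram(xs[plus], bins)[0] / (plus.sum() * w)   # f_V+
dens_minus = np.histogram(xs[~plus], bins)[0] / ((~plus).sum() * w)

p_plus, p_minus = plus.mean(), 1 - plus.mean()   # P(S_+), P(S_-)

# Total probability theorem: f_V = P(S_+) f_V+ + P(S_-) f_V-, exactly.
assert np.allclose(dens_all, p_plus * dens_plus + p_minus * dens_minus)

# The unweighted sum f_V+ + f_V- integrates to 2, so it is not a density.
print((dens_plus + dens_minus).sum() * w)   # ~2.0
```

The same bookkeeping applies verbatim to the partition formed by S_1 and S_2.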
Analysis and discussion
In the gedanken experiment [4], the role of the quantum eraser is to manipulate which-way information.
Experimenters use a laser beam to excite an atom before it enters one of the cavities (figure 1). The excited atom emits a photon in the cavity, and passes through the corresponding slit with its motion undisturbed. Hence the photon carries potential which-way information. Once the atom hits the screen, the potential which-way information becomes actual and available for experimenters to exploit the correlations C_i(s) or C_±(s) for sorting the data. The cavities are separated by a shutter-detector combination. Experimenters can erase the which-way information by opening the shutters after the atom has produced a spot. When the laser is turned off, no atom becomes excited, and experimenters do not have which-way information. In the absence of which-way information, the motion of the center-of-mass of the atom (in the interference region) is described by the following wave function, which is the sum of two terms referring to the two slits:

Ψ(r′) = (1/√2)[ψ_1(r′) + ψ_2(r′)] |i⟩,    (10)

where r′ is the coordinate of the center-of-mass of the atom before it hits the screen, and |i⟩ represents the internal state of the atom. Corresponding to (10), the density of coordinates of spots on the screen is

f(r) = (1/2)|ψ_1(r)|² + (1/2)|ψ_2(r)|² + R[ψ_1*(r)ψ_2(r)],    (11)

which describes the well-known interference pattern, where R represents the real part of a complex number, and ψ_1*(r) the complex conjugate of ψ_1(r). According to [4], if the laser is turned on, after an atom passes through one of the cavities and makes the transition from the excited state to the unexcited state |b⟩, the system of the atom, cavities, and quantum-eraser detector is described by the state

Ψ(r′) = (1/√2)[ψ_1(r′)|1_1 0_2⟩ + ψ_2(r′)|0_1 1_2⟩] |b⟩ |d⟩,    (12)

where |d⟩ is the ground state of the quantum-eraser detector before the shutters are opened, and |1_1 0_2⟩ and |0_1 1_2⟩ denote the cavity states. For example, |1_1 0_2⟩ means that there is one photon in cavity 1 and none in cavity 2.
The following is a brief description of how the quantum eraser works [4]. In terms of the symmetric and antisymmetric atomic states ψ_± = (ψ_1 ± ψ_2)/√2 and the symmetric and antisymmetric states |±⟩ of the radiation fields in the cavities, the state given by (12) appears as

Ψ(r′) = (1/√2)[ψ_+(r′)|+⟩ + ψ_−(r′)|−⟩] |b⟩ |d⟩.    (13)

Once the atom has arrived at the screen and produced a spot there, experimenters open the shutters. The action of the quantum eraser is to change (13) into

Ψ(r′) = (1/√2)[ψ_+(r′)|e⟩ + ψ_−(r′)|d⟩] |b⟩,    (14)

where |e⟩ is the excited state of the quantum-eraser detector after it absorbs a photon. According to [4], (14) results in a density

(1/2)f_{V+}(r) + (1/2)f_{V−}(r) = (1/2)f_{V1}(r) + (1/2)f_{V2}(r),    (15)

with

f_{V+}(r) = |ψ_+(r)|² = (1/2)|ψ_1(r)|² + (1/2)|ψ_2(r)|² + R[ψ_1*(r)ψ_2(r)],    (16)

f_{V−}(r) = |ψ_−(r)|² = (1/2)|ψ_1(r)|² + (1/2)|ψ_2(r)|² − R[ψ_1*(r)ψ_2(r)].    (17)

(16) and (17) describe wave-like properties of spots in S_+ and S_−; (16) is identical to (11), and (17) corresponds to the so-called 'anti-fringes'.
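The algebra behind (15)-(17) can be verified directly. The sketch below uses hypothetical Gaussian one-slit amplitudes (any square-integrable ψ_1, ψ_2 would do) and checks that the equal-weight mixture of fringes and anti-fringes has no interference term:

```python
import numpy as np

x = np.linspace(-5, 5, 2001)

# Hypothetical one-slit amplitudes: Gaussian envelopes with opposite phase ramps.
psi1 = np.exp(-((x - 1.0) ** 2)) * np.exp(+3j * x)
psi2 = np.exp(-((x + 1.0) ** 2)) * np.exp(-3j * x)

# Symmetric / antisymmetric combinations, as in (13).
psi_plus = (psi1 + psi2) / np.sqrt(2)    # fringes, cf. (16)
psi_minus = (psi1 - psi2) / np.sqrt(2)   # anti-fringes, cf. (17)

lhs = 0.5 * np.abs(psi_plus) ** 2 + 0.5 * np.abs(psi_minus) ** 2
rhs = 0.5 * np.abs(psi1) ** 2 + 0.5 * np.abs(psi2) ** 2

# |psi1 + psi2|^2 + |psi1 - psi2|^2 = 2(|psi1|^2 + |psi2|^2): the interference
# terms cancel identically, reproducing the two sides of (15).
assert np.allclose(lhs, rhs)
```

What is disputed in the analysis below is not this identity, which is plain algebra, but whether either side of (15) may be read as the density of a single random vector.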
In [4], it is claimed that Π(S) is described by (15) when the laser is turned on. The left-hand side of (15) is due to the orthogonality of |e⟩ and |d⟩, see (14). If the final state of the quantum-eraser detector is unknown, then the interference terms in f_{V+} and f_{V−} cancel each other, and the cancellation is due to addition of these two densities; if experimenters observe the state of the quantum-eraser detector and correlate the observed state to the corresponding spot on the screen, then f_{V+} describes the fringes retrieved by erasing which-way information, and f_{V−} describes the anti-fringes. The right-hand side of (15) is also a consequence of the orthogonality of |1_1 0_2⟩ and |0_1 1_2⟩, which makes the interference terms disappear, see (12). (15) implies an assumption: in (11), (16), and (17), the terms |ψ_i|² serve as f_{Vi}. By this assumption, |1_1 0_2⟩ and |0_1 1_2⟩ are correlated with ψ_1 and ψ_2, respectively, which leads to the assertion that interference fringes are precluded simply by knowing which-way information or even by having the ability to acquire which-way information [4,19].
However, the assumption underlying (15) is false, and the assertion is merely a consequence of explaining the role played by which-way information based on this false assumption. The assumption is false, for it results in illegitimate addition of densities. As shown by (4), (5), (8), and (9), f_{V+}(r) and f_{V−}(r) on the left-hand side of (15) are determined by probabilities conditional on mutually exclusive events; similarly, f_{V1} and f_{V2} are also determined by conditional probabilities; addition of such densities is illegitimate. The falsity of the assumption indicates that (15) is invalid, and hence Π(S) is not described by (15).
No doubt, the state of the quantum-eraser detector implies information about wave-like properties of spots produced by atoms, and tell-tale photons carry potential which-way information, which becomes actual after the corresponding atoms hit the screen. But such information does not preclude interference fringes on the screen, and cannot change the experimental results. Experimenters can use acquired information to construct the restrictions of V to S i or S ± . Such restrictions only serve to describe the spots on the screen from different aspects, but cannot modify the wave function given by (10). Therefore, (10) and the corresponding experimental results, which are represented by the interference pattern Π(S), are intact. In other words, the density that describes Π(S) is (11), whether or not which-way information is available. Hence experimenters need not use the quantum eraser to restore the fringes.
In the following, f_V refers to (11). As components of f_{V+}, f_{V−}, and f_V, the terms |ψ_i|² do not serve as f_{Vi}, even if |ψ_i|² and f_{Vi} have the same functional form. By (16) and (17), if G_−(r) contains most spots of G(r), then G_+(r) consists of the few spots remaining in G(r), and vice versa. All spots in G_−(r) and G_+(r) constitute G(r), a portion of Π(S).
After the above clarification, the experiment can now be further analyzed based on the total probability theorem. This theorem holds for all physically meaningful probability densities, including those obtained by Born's probabilistic interpretation of wave functions. By the total probability theorem, the wave-like aspects (interference fringes) described by P_+[G_+(r)] and P_−[G_−(r)] can coexist with the particle-like aspect (which-way information) described by P_i[G_i(r)], as shown below.
P[G(r)] = P_+[G_+(r)]P(S_+) + P_−[G_−(r)]P(S_−),    (19)

P[G(r)] = P_1[G_1(r)]P(S_1) + P_2[G_2(r)]P(S_2).    (20)
Because G_+(r), G_−(r), G_1(r), and G_2(r) are characterized by random vectors different from V, their probabilities cannot be calculated with f_V. By definition, each event characterized by V can be represented by an event in A, and f_V can be used to calculate the probability of any event characterized by V. However, there are events in A that describe complementary phenomena but cannot be characterized by V. In other words, the complementary nature of the experimental results cannot be captured by using f_V alone. Nevertheless, as shown above based on the total probability theorem, complementary properties of atoms can be described by several probability densities in the same experimental setup, where the densities are determined by different measures.
The requirements of the total probability theorem are necessary to prevent illegitimate addition of probabilities conditional on different events. Probabilities must be calculated according to the rules for operations with probabilities. (15) not only fails to meet the requirements but also breaks the rules, because it implies the false assumption that the terms |ψ_i|² in the expressions of f_{V+}, f_{V−}, and f_V serve as f_{Vi}. Although this assumption allows (15) to be expressed so that both of its sides satisfy the normalization condition, neither side is legitimate for calculating probabilities of any events relevant to the experiment. As shown in section 2.1, f_{V+}(r)dr, f_{V−}(r)dr, f_{V1}(r)dr, and f_{V2}(r)dr are probabilities conditional on S_+, S_−, S_1, and S_2. Moreover, the sums f_{V+} + f_{V−} and f_{V1} + f_{V2} are not even probability densities; they are invalid and have nothing to do with the experiment. Probabilities are not pure numbers. Even though P(S_+), P(S_−), P(S_1), and P(S_2) are numerically identical, these probabilities do not represent and hence cannot be treated as the same pure number. The distributive law for calculating pure numbers is not applicable to (19) or (20), because they are not expressions of pure numbers and cannot be calculated as pure numbers. Hence P[G(r)] is irrelevant to

(1/2)f_{V+}(r)dr + (1/2)f_{V−}(r)dr = (1/2)f_{V1}(r)dr + (1/2)f_{V2}(r)dr,    (21)

and (21) violates the total probability theorem. As shown above, (15) and (21) are essentially the same; they cannot be used to calculate probabilities of events in A. This shows again that (15) is invalid. Because the interference pattern Π(S) is described by f_V, and because neither side of (21) describes Π(S), the complementarity principle manifests itself in such a way that wave and particle behaviors of atoms cannot be simultaneously observed almost surely with respect to P or any single, fixed measure. Nevertheless, as shown by (19) and (20), wave and particle aspects of atoms are described by different measures, and can peacefully coexist in the same experimental setup. Moreover, by (3), the correlation C_i(s) implied by G_i(r) and the correlation C_±(s) implied by G_±(r) cannot change G(r), and hence Π(S) remains intact in the presence of the correlations. In particular, ascertaining which-way information, i.e., H_i(s) almost surely with respect to P_i, does not preclude or erase the fringes.

On delayed choice in the quantum-eraser experiment with entangled photons

In the experiment [6], a pump laser beam illuminates two regions, A and B, of a nonlinear optical crystal; region A corresponds to slit A, and region B corresponds to slit B. Thus the regions play the role of the slits. Pairs of entangled signal-idler photons are generated from either region A or region B. In the apparatus, there is at most one pair of entangled signal-idler photons. Signal photons from both regions are sent through a lens to a screen. The lens is used to achieve the 'far field' condition. Detector D_0, movable along the x-axis, is used to detect signal photons. Idler photons from both regions are sent to an interferometer, consisting of one prism, three 50-50 beam splitters (BSA, BSB, and BS), and two reflecting mirrors (M_A and M_B), see figure 3. The prism separates idler photons into different paths corresponding to various measurement options. Detectors D_i (i = 1, 2, 3, 4) are used to detect idler photons; coincidences between D_0 and D_i are recorded by a coincidence circuit, see figure 2.
The delayed 'choice' to observe either wave or particle behavior of a signal photon is not made by the experimenters; the 'choice' is made randomly by the idler photon in the same pair. Once the signal photon hits the screen, the delayed 'choice' made by the idler photon cannot change the position of D_0 where the signal photon is detected. Because photons in the same pair are entangled and hence have the same properties, experimenters can observe the path taken by the idler photon, and infer the behavior of the signal photon without disturbing it in any way. Each idler photon travels along one of the six paths listed below, and is detected at least 7.7 ns later than the detection of its twin, the corresponding signal photon:
1) from region A to D_4;
2) from region B to D_3;
3) from region A, via BS, to D_1;
4) from region A, via BS, to D_2;
5) from region B, via BS, to D_1;
6) from region B, via BS, to D_2.
Model of experimental outcomes
To describe outcomes of the above experiment, it is necessary to specify the involved probability measures on their common measurable space. The notations used here are the same as (or similar to) those used in section 2, but their meanings may be redefined. For a spot s, H_i(s) now means that the idler photon of the pair was detected by D_i, and S_i = {s ∈ S : H_i(s)}. If H_1(s) or H_2(s) is true, the idler photon took one of the paths 3)-6) of [6] (listed before section 3.1), and hence the signal photon came from either region A or region B. In contrast, if H_3(s) is true, the idler photon took path 2), and hence the signal photon came from region B. Similarly, if H_4(s) is true, the idler photon took path 1) and the signal photon came from region A. As shown above, no which-way information is carried by sample points in S_1 ∪ S_2. Only those in S_3 ∪ S_4 carry which-way information.
Denote by X a random variable defined for each s ∈ S, with X(s) representing the x-coordinate of s. Let P_X and f_X be the distribution and density of X on (R, ℛ), where R is the real line and ℛ the σ-algebra of subsets of R. Write I_x = (x, x + dx], an interval of infinitesimal length dx, and define G(x) = {s ∈ S : X(s) ∈ I_x}. The probability measure P on (S, A) determines f_X and P_X uniquely. If s belongs to G(x) for some x, the only information contained in G(x) is that X(s) belongs to I_x. Let X_i be the restriction of X to S_i, and let P_{Xi} and f_{Xi} be the distribution and density of X_i. Define G_i(x) = {s ∈ S_i : X_i(s) ∈ I_x}.
It is easy to see that the G_i(x) have the following properties:

G(x) = G_1(x) ∪ G_2(x) ∪ G_3(x) ∪ G_4(x), with the G_i(x) mutually disjoint,    (22)

G_i(x) ⊂ G(x), i = 1, 2, 3, 4.    (23)
The events G_1(x) and G_2(x) describe wave-like behaviors; G_3(x) and G_4(x) describe the particle-like behavior. By definition, the distributions and densities of the X_i are given by conditional probabilities determined by P on (S, A). Similar to P, the measures P_i are independent of specific coordinate systems.
However, by (22), for each spot s only one of the propositions H_i(s) is true, and the others are false. In the next subsection, the experiment reported in [6] will be analyzed based on the probabilistic model specified. As we shall see again, the sorting of the full ensemble into sub-ensembles, characterized by known paths or interference patterns with full fringe visibility, is not sufficient for us to capture quantitative aspects of wave-particle duality, as it may not necessarily lead to a correct explanation of the experiment.
Analysis and discussion
As shown in the experiment [6], properties of signal photons can be inferred from paths taken by their entangled twins. For i = 1, 2, 3, 4, experimenters can construct X_i, the restriction of X to S_i, by using coincidences between D_0 and D_i. Moreover, experimenters can simultaneously use Γ(f_{Xi}), the graph of f_{Xi}, to describe Π(S_i), the pattern formed by s ∈ S_i. However, S_i ≠ S; the pattern Π(S) is formed by all sample points produced by signal photons, and described by Γ(f_X). After signal photons are detected by D_0, the sample points and their coordinates are fixed, and so are Π(S) and Γ(f_X). This is simply a banal fact and needs no experimental verification. But what does Π(S) look like? The following experiment is proposed to answer this question.
Proposed experiment 1. This is a largely simplified version of the experiment reported in [6], and hence it is of course realizable. The purpose of this experiment is to observe Π(S) through Γ(f_X). Accordingly, the interferometer, detectors D_i, i = 1, 2, 3, 4, and the coincidence circuit are all removed (figure 4). Clearly, in this simplified experiment, which-way information is absent. According to quantum mechanics, Π(S) is a standard interference pattern, just like what we see in an ordinary Young's double-slit experiment. The shape of Γ(f_X) is well known.
Therefore, in the experiment [6], although paths taken by idler photons can be used to divide the sample space S into exclusive subsets, dividing S into subsets can neither change Π(S), the interference pattern formed by all s ∈ S, nor alter f_X, the probability density describing Π(S). Undoubtedly, 'clicks' at D_3 or D_4 can provide which-way information without disturbing the signal photons detected by D_0 in any way. However, according to the interpretation given in [6], which-way information erases interference fringes exhibited in Π(S_1) and Π(S_2), and 'clicks' at D_1 or D_2 erase which-way information and restore the erased fringes. This interpretation is incorrect, because it violates the total probability theorem, as we shall see below.
For a real number x, G(x) is a portion of the interference pattern Π(S) characterized by X, and G_i(x), i = 1, 2, 3, 4, are subsets of G(x), see (23). By the total probability theorem,

P[G(x)] = P_1[G_1(x)]P(S_1) + P_2[G_2(x)]P(S_2) + P_3[G_3(x)]P(S_3) + P_4[G_4(x)]P(S_4).

Here, 'probability' in the phrase 'with probability one' refers to the measure corresponding to the region (slit). That is, we can ascertain H_i(s), i = 3, 4, almost surely with respect to P_i while interference fringes remain intact.

Proposed experiment 3. In this experiment, the interferometer used in [6] is modified such that BSA, BSB, D_3, and D_4 are removed, but the other components of the original experimental setup remain unchanged (figure 6). With this simplified setup, we can readily see the relation between interference fringes and anti-fringes. No sample point in this experiment carries which-way information, because each idler photon can only take one of the paths below:
1) from region A, via BS, to D_1;
2) from region A, via BS, to D_2;
3) from region B, via BS, to D_1;
4) from region B, via BS, to D_2.
Conclusion
Recall the question concerning complementarity illustrated by Young's double-slit experiment: Is it possible for us to know, with probability one, through which slit each quantum object (particle or photon) passed without disturbing interference fringes? So far, in the literature, the standard answer to this question has been no. Traditionally, the negative answer is explained by Heisenberg's uncertainty relation. In contrast, the explanation given in [4,6] claims that interference fringes are precluded or erased by which-way information, but the precluded or erased fringes can be restored by erasing which-way information after an atom or a signal photon has been detected.
The explanation given in [4,6] relies on densities determined by probabilities conditional on mutually exclusive events. These events are sets of spots. Such sets serve as domains of different random vectors (or variables). However, the probability measures that determine these densities are not specified in [4,6], but the densities are used to explain the experimental results, which leads to violation of the total probability theorem and makes the explanation questionable.
The present paper has shown that the complementarity principle manifests itself in such a way that wave and particle behaviors cannot be simultaneously observed almost surely with respect to any single, fixed measure. Nevertheless, based on the total probability theorem, it is shown that complementary aspects of quantum objects are described by different measures, and can peacefully coexist in the same experimental setup. Which-way information will not preclude or erase interference fringes observed in the experiments. Delayed choice and quantum erasers are irrelevant to testing complementarity.
In conclusion, for the question about complementarity illustrated by the double-slit experiment, an affirmative answer is not only conceivable but also reasonable: Without violating the complementarity principle, we may tell through which slit each quantum object passed almost surely with respect to the measure corresponding to the slit. This answer cannot be found in the literature, but it can be tested by experiment. Based on the experiments analyzed, realizable experiments are proposed for verifying the results reported in this paper. The results and proposed experiments may be helpful to advance our understanding of complementarity.
Walking on Mild Slopes and Altering Arm Swing Each Induce Specific Strategies in Healthy Young Adults
Slopes are present in everyday environments and require specific postural strategies for successful navigation; different arm strategies may be used to manage external perturbations while walking. It has yet to be determined what impact arm swing has on postural strategies and gait stability during sloped walking. We investigated the potentially interacting effects of surface slope and arm motion on gait stability and postural strategies in healthy young adults. We tested 15 healthy adults, using the CAREN-Extended system to simulate a rolling-hills environment which imparted both incline (uphill) and decline (downhill) slopes (± 3°). This protocol was completed under three imposed arm swing conditions: held, normal, active. Spatiotemporal gait parameters, mediolateral margin of stability, and postural kinematics in anteroposterior (AP), mediolateral (ML), and vertical (VT) directions were assessed. Main effects of conditions and interactions were evaluated by 2-way repeated measures analysis of variance. Our results showed no interactions between arm swing and slope; however, we found main effects of arm swing and main effects of slope. As expected, uphill and downhill sections of the rolling-hills yielded opposite stepping and postural strategies compared to level walking, and active and held arm swings led to opposite postural strategies compared to normal arm swing. Arm swing effects were consistent across slope conditions. Walking with arms held decreased gait speed, indicating a level of caution, but maintained stability comparable to that of walking with normal arm swing. Active arm swing increased both step width variability and ML-MoS during downhill sections. Alternately, ML-MoS was larger with increased step width and double support time during uphill sections compared to level, which demonstrates that distinct base of support strategies are used to manage arm swing compared to slope. The variability of the rolling-hills also required proactive base of support changes despite the mild slopes to maintain balance.
INTRODUCTION
Everyday walking environments are complex as they vary in levelness and regularity (Allet et al., 2008). Challenging terrains require gait pattern modifications, through changes in spatiotemporal gait characteristics, kinematics, and kinetics, to accommodate the mechanical constraints. Responses to challenging terrain by the postural control system can be seen in adjustment of spatiotemporal gait characteristics. Compensatory changes such as increased double-support time or step width are a means of coping with uphill or downhill slopes, respectively (Kawamura and Tokuhiro, 1991; Sun et al., 1996; Gottschall and Nichols, 2011). The effectiveness of such changes may be determined by additionally quantifying stability. For example, taking wider steps has been linked to increased mediolateral margin of stability (ML-MoS) (McAndrew Young and Dingwell, 2012), indicating enhanced stability. Vieira et al. (2017) found downhill walking decreased ML-MoS and uphill walking increased ML-MoS compared to level walking, but not all concomitant gait strategies were explored.
During walking, the natural 1:1 contralateral arm-leg swing pattern reduces gait's metabolic cost by controlling angular momentum about the vertical axis of the center of mass (COM) (Meyns et al., 2013). This antiphase arm-leg swing pattern can be modulated by adjusting either arm motion or leg motion, which demonstrates the bidirectional nature of this relationship (Bondi et al., 2017). Different arm swing strategies have been shown to have unique impacts on gait stability. For example, walking with arms held may improve stability by increasing trunk inertia, which limits CoM movement (Bruijn et al., 2010; Pijnappels et al., 2010). Conversely, some studies found decreased postural control and increased metabolic cost when walking without arm swing (Collins et al., 2009; Punt et al., 2015; Yang et al., 2015), or no difference in postural control between absent and normal arm swing (Bruijn et al., 2010; Hill and Nantel, 2019). Alternatively, active arm swing may increase stability by more aptly counterbalancing torques that act on the COM's trajectory (Nakakubo et al., 2014; Punt et al., 2015; Yang et al., 2015; Wu et al., 2016). However, evidence on active arm swing's contribution to walking stability remains conflicting (Collins et al., 2009; Bruijn et al., 2010; Meyns et al., 2013), especially when walking on challenging terrains. The purpose of this study was to examine the effect of arm swing on spatiotemporal gait parameters, margin of stability, and postural strategies during uphill and downhill sections of a rolling-hills terrain. We expected that walking on slopes (uphill or downhill sections) with arms held would produce compound increases in compensatory gait strategies that may increase stability, while the gait changes from active arm swing would conflict with the compensatory strategies used to navigate sloped walking.
METHODOLOGY
Fifteen healthy adults (8 male, 7 female; age 23.4 ± 2.8 years; height 170.2 ± 8.1 cm; weight 72.3 ± 13.5 kg) volunteered from the Ottawa area. An a priori power analysis revealed that 12 participants were adequate to achieve a statistical power of 0.8. Participants had no neurological or orthopedic disorders affecting gait and no musculoskeletal injuries in the previous 6 months. The study was approved by the Institutional Review Board (University of Ottawa) and the Ottawa Hospital Research Ethics Board; all participants provided written informed consent.
Data Collection
Three-dimensional motion capture was completed using the Computer-Assisted Rehabilitation Environment (CAREN; CAREN-Extended, Motek Medical, Amsterdam, The Netherlands, Figure 1). This system combines a 6-degree-of-freedom platform with an integrated split-belt instrumented treadmill (Bertec Corp., Columbus, OH), a 12-camera VICON motion capture system (Vicon 2.6, Oxford, UK), and a 180° projection screen. Participants wore a torso harness attached to an overhead structure when on the treadmill. Platform motion was tracked by three markers, and full body kinematics were collected using a 57-marker set. Motion data were gathered at a rate of 100 Hz.
Experimental Protocol
For each trial, participants walked in a virtual park scenario which included a 20 m simulated rolling-hills terrain preceded and succeeded by 40 m of level walking. The rolling-hills terrain was produced by platform oscillations in the sagittal plane (pitch) based on a sum of four sines with frequencies of 0.16, 0.21, 0.24, and 0.49 Hz (Sinitski et al., 2015). Treadmill speed used the self-paced algorithm described by Sloot et al. (2014) (their Methods 2c), which incorporated anterior-posterior pelvis position, velocity, and acceleration, referenced to the person's initial standing position (heels at the anterior-posterior midline of the treadmill). Visuals on the projection screen matched treadmill and platform conditions in speed and slope.
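A sketch of the resulting platform pitch command is given below; the four frequencies are those reported above, while the equal amplitudes, zero phases, and ±3° rescaling are assumptions, since the text does not specify them.

```python
import numpy as np

FS = 100.0                       # sampling rate (Hz), matching the motion capture
t = np.arange(0.0, 120.0, 1 / FS)  # a 2-minute illustration window

freqs = (0.16, 0.21, 0.24, 0.49)   # Hz, from Sinitski et al. (2015)
pitch = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Rescale so the simulated platform pitch spans the +/- 3 degrees reported.
pitch *= 3.0 / np.max(np.abs(pitch))
```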
Trial order was randomized. Separate trials occurred for the three arm conditions: held, normal, and active. Instructions for the held condition were to volitionally hold arms in a still, relaxed position at the participant's sides. For the active condition, participants were instructed that the arms should be roughly horizontal at peak anterior arm swing.
Uphill sections included steps occurring when the average slope of the platform was between +1 and +3 degrees; downhill sections included steps occurring when the average slope of the platform was between −1 and −3 degrees (Figure 2). No uphill or downhill steps spanned a peak or trough in the rolling-hills terrain. Level walking included steps from the middle 20 m of the 40 m flat section preceding the rolling-hills terrain.
Data Analysis
Data were imported into Visual3D (C-Motion, Germantown, MD). Kinematic data were filtered at 10 Hz using a 4th-order, zero-lag low-pass Butterworth filter, chosen using a residual analysis approach (Winter, 2009). Heel strike and toe-off gait events were calculated using a velocity-based algorithm as previously described (Zeni et al., 2008) and verified using ground reaction forces. Spatiotemporal parameters included speed, step length, step width, step time, percent double-support time (DST), and coefficients of variation (CoV) for step length, step width, step time, and percent double-support time. Speed was retrieved from D-Flow [Motek Medical, Amsterdam, The Netherlands; (Geijtenbeek et al., 2011)], which served as the control software for the CAREN system; we then averaged the speed over each step. Gait stability was quantified using the mediolateral margin of stability (ML-MoS) and ML-MoS CoV using previously reported methods (Hof et al., 2005; Hak et al., 2013).
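The filtering and event-detection steps can be sketched as follows. The zero-lag filter is implemented as a 2nd-order Butterworth run forward and backward with filtfilt, which yields the 4th-order zero-lag response described; the event detector shown is the closely related coordinate-based variant of Zeni et al. (2008), since the exact velocity-based implementation and its verification against ground reaction forces are not detailed in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0  # motion-capture sampling rate (Hz), per the text

def lowpass(signal, cutoff=10.0, fs=FS):
    # A 2nd-order Butterworth applied forward and backward by filtfilt
    # gives a zero-lag, effectively 4th-order response.
    b, a = butter(2, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, signal)

def gait_events(heel_ap, toe_ap, sacrum_ap):
    """Coordinate-based variant of Zeni et al. (2008): heel strikes at peaks
    of the heel's anterior position relative to the pelvis, toe-offs at peaks
    of the toe's posterior position. Inputs are anteroposterior coordinates."""
    rel_heel = lowpass(np.asarray(heel_ap) - np.asarray(sacrum_ap))
    rel_toe = lowpass(np.asarray(toe_ap) - np.asarray(sacrum_ap))
    heel_strikes, _ = find_peaks(rel_heel, distance=int(0.5 * FS))
    toe_offs, _ = find_peaks(-rel_toe, distance=int(0.5 * FS))
    return heel_strikes, toe_offs
```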
Step length was calculated for each step as the hypotenuse of the vertical and anteroposterior distance between the feet at heel strike of the leading leg. The MoS was calculated bilaterally at both heel strikes and defined as the distance of the Extrapolated Center of Mass (xCoM) to the right/left lateral heel marker:

ML-MoS = heel_lateral − xCoM.

The formula for xCoM was:

xCoM = CoM_p + CoM_v / ω_0,

where CoM_p = CoM's position and CoM_v = CoM's velocity. ω_0 was calculated as:

ω_0 = √(g / l).

In this term, g = 9.81 m/s² and l is the length of the inverted pendulum, determined as the average distance of the right/left lateral heel marker to the CoM at heel strikes. Visual3D was used to calculate the CoM's position and velocity. Kinematic measures included trunk angle (mid-point of the posterior superior iliac spine markers to C7 compared to global vertical, measured in the AP direction, with a larger trunk angle indicating increased forward inclination) and trunk acceleration root-mean-square (RMS) in the ML, AP, and VT directions as a measure of upper body variability [with larger RMS values indicating greater variability (Menz et al., 2003; Marigold and Patla, 2008)]. All data reduction prior to statistical analysis was performed using the Julia programming language (Bezanson et al., 2017) and custom code (MacDonald et al., 2021).

FIGURE 1 | The CAREN-Extended virtual reality system used in this study.
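A minimal sketch of the ML-MoS computation at a heel strike follows; the sign convention (lateral heel marker minus xCoM) is an assumption, as the study does not state one.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def ml_margin_of_stability(com_pos_ml, com_vel_ml, heel_ml, leg_length):
    """ML margin of stability at heel strike (Hof et al., 2005).

    com_pos_ml, com_vel_ml : ML position (m) and velocity (m/s) of the CoM
    heel_ml                : ML position (m) of the lateral heel marker
    leg_length             : inverted-pendulum length l (m)
    """
    omega0 = np.sqrt(G / leg_length)           # pendulum eigenfrequency
    xcom = com_pos_ml + com_vel_ml / omega0    # extrapolated centre of mass
    return heel_ml - xcom                      # lateral boundary minus xCoM
```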
Statistical Analyses
Separate 2-way repeated measures ANOVAs were used to examine differences between each slope (uphill, downhill) and level, and across arm conditions (held, normal, active), as well as potential interactions, for all variables using IBM SPSS Statistics 26 (IBM Analytics, Armonk, USA). The assumption of normality was confirmed using a Shapiro-Wilk test, and the Greenhouse-Geisser corrected p was reported when Mauchly's Test of Sphericity was violated. The significance level was set at p < 0.05. A Bonferroni correction was used for post-hoc tests.
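A hypothetical sketch of the same design in Python is shown below (the file and column names are invented; the study used SPSS, and statsmodels' AnovaRM does not apply the Greenhouse-Geisser correction, so this parallels rather than reproduces the analysis):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per participant x slope x arm
# condition, with slope in {uphill, level, downhill} and arms in
# {held, normal, active}; AnovaRM expects exactly one row per cell.
df = pd.read_csv("gait_outcomes.csv")   # columns: subject, slope, arms, ml_mos

res = AnovaRM(df, depvar="ml_mos", subject="subject",
              within=["slope", "arms"]).fit()
print(res)   # F and p for slope, arms, and the slope x arms interaction
```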
RESULTS
No significant interaction effects between arm swing and surface slope were found. Statistical information regarding main effects is included in Tables 1, 2, with significant post-hoc findings presented in the following text. See Tables 3, 4 for spatiotemporal results and Table 5 for stability and postural results.
Arm Swing During Uphill and Downhill Sections of the Rolling-Hills
In this section, corrected p-values for each result are presented in parentheses.
Walking with arms held decreased walking speed compared to normal (p ≤ 0.044) and active arm swing (p ≤ 0.031).
Step length increased with increasing arm swing (p ≤ 0.01) and, during uphill sections only, step length CoV was greater when walking with arms held compared to normal arm swing (p = 0.027). Active arm swing increased step width CoV compared to normal (p ≤ 0.047). Active arm swing increased step time compared to held (p = 0.005) and normal (p = 0.001). Active arm swing also decreased double support time compared to held (p ≤ 0.001) and normal (p ≤ 0.014). During downhill sections only, active arm swing increased ML-MoS compared to normal (p = 0.014). Active arm swing decreased trunk angle compared to held (p < 0.001) and normal (p ≤ 0.006). AP-RMS magnitude was larger with active arm swing compared to held and normal (p < 0.001) and smaller with arms held compared to normal (p ≤ 0.003). During uphill sections only, a main effect was found for VT-RMS, but post-hoc tests showed no significant pairwise differences.

TABLE 3 | Comparison of speeds, spatiotemporal gait parameters, and coefficients of variation (CoV) in the three arm swing conditions (held, normal, active) during uphill, level, and downhill walking.
Uphill vs. Level
Walking on uphill sections decreased walking speed and step length and increased step width, step time, and double support time compared to level. Uphill walking also increased ML-MoS compared to level. Uphill walking increased step time CoV and decreased step length and ML-MoS CoV. Uphill walking increased trunk angle, and decreased AP-RMS magnitude compared to level.
Downhill vs. Level
Walking on downhill sections decreased walking speed, step length, and step time compared to level. Downhill walking decreased step length CoV and increased step time CoV and double support time CoV. Downhill walking decreased trunk angle compared to level.
DISCUSSION
This study investigated the effect of various arm swings on spatiotemporal parameters and postural strategies during uphill and downhill sections of a rolling-hills terrain compared to level walking. Regardless of slope, active arm swing increased step time and decreased double-support time and trunk angle, while walking with arms held decreased walking speed and trunk angle. During both uphill and downhill sections, walking speed was consistently slower, and postural and spatiotemporal variables changed compared to level walking.
Variability of Rolling-Hills Condition Required Proactive Base of Support Changes
When walking on the rolling-hills terrain, the magnitude and timing of surface fluctuations was unpredictable (oscillating between −3° and +3°) and required participants to navigate continuous changes in surface slope. For example, a posterior tilt in the surface shifting to an incline may interfere with a leg in late swing and precipitate unplanned foot contact, and an anterior tilt to a decline may induce a stepping response to catch balance. Prentice et al. (2004) investigated walking from a level surface onto a ramp and found that even the smallest incline (3°) required adaptations to the swing limb trajectory (Prentice et al., 2004). We believe that the increased step time CoV found in our study could be the result of a similar proactive strategy to optimize the base of support during the rolling-hills terrain. Using the rolling-hills terrain condition, Sinitski et al. (2019) similarly found that healthy adults increased step time variability as well as step length variability compared to level walking (Sinitski et al., 2019). They also reported that participants increased step width during the rolling-hills condition compared to level walking. While they only investigated the rolling-hills as a single walking condition, we found increased step width to be specific to the uphill sections. However, the steps counted within the uphill and downhill sections can each be considered a transition step which reflects characteristics of both the current state and the upcoming state (Gottschall and Nichols, 2011). Therefore, it remains uncertain whether the increased step width is attributable to the current uphill section or in preparation for the upcoming downhill section. In either case, participants proactively modified their base of support to stabilize the COM when navigating the rolling-hills terrain. The increased step width and double support time during uphill sections coincided with increased ML-MoS and decreased ML-MoS CoV. Vieira et al. (2017) similarly found increased ML-MoS during uphill sections, which increased stability, but their results showed decreased ML-MoS during downhill sections, which we did not find (Vieira et al., 2017). Our results are somewhat different from Kawamura and Tokuhiro (1991), who found no step width increase during uphill sections (Kawamura and Tokuhiro, 1991). However, Kawamura's study examined a relatively narrow ramp which may have affected participants' ability to increase step width. The decrease we found in ML-MoS CoV may also be linked to uphill steps being consistently wider compared to level walking. In healthy individuals, decreased step width variability is thought to reflect greater active attention toward foot placement (Maki, 1997; Siragy and Nantel, 2018). Additionally, increases in ML-MoS during perturbations may indicate a compensation response to mitigate destabilizing effects of the terrain, particularly as this finding was unique to the present study compared to previous investigations of ML-MoS during both uphill and downhill walking (Vieira et al., 2017). This demonstrates that the healthy young adults did adjust to the incline, even though the slope was minor, and successfully maintained stability.
Mild Uphill and Downhill Slopes Required Spatiotemporal and Postural Modifications
Speed was slower for both uphill and downhill sections compared to level. This is somewhat similar to Kawamura and Tokuhiro (1991), who found a decrease in walking speed for both uphill and downhill conditions at 12°, but not at lower slopes (3, 6, 9°) (Kawamura and Tokuhiro, 1991). Our finding of decreased walking speed with slopes ranging from −3° to +3° may, therefore, be linked to the continuously varying nature of the rolling-hills terrain condition wherein a more cautious gait was employed for the duration of the terrain. Trunk posture was more backward during downhill sections and more forward during uphill sections, as hypothesized. Uphill walking is typically accompanied by a forward inclination of the trunk to aid in forward propulsion and stepping up (Leroux et al., 2002). Conversely, downhill walking is typically accompanied by a less forward trunk posture which assists in stepping down and meeting the frictional demands of the downhill slope (Leroux et al., 2002). The decreased walking speed and altered spatiotemporal and postural variables demonstrate that participants did make accommodations for the mild (≤ 3°) slopes encountered. Therefore, participants navigated the rolling-hills primarily by decreasing walking speed, but even the mild slopes caused spatiotemporal and postural changes.
Active Arm Swing Required Proactive Strategies to Increase ML-MoS During Downhill Walking
We hypothesized that active arm swing may additionally perturb gait and require strategies that interact with those adopted for sloped walking. Instead, we found that the gait strategies used to manage active arm swing remained relatively consistent across slope conditions. However, the increase in ML-MoS seen with active arm swing compared to normal was only observed during downhill walking and corresponded to increased step width CoV. Hill and Nantel (2019) also found increased step width variability with active arm swing compared to normal during level walking (Hill and Nantel, 2019). They postulated that the more variable step width stemmed from the decreased coordination also found in the active arm swing condition and may have contributed to the concomitant increase in trunk local dynamic stability. The higher step width variability may demonstrate a proactive strategy to help stabilize the COM when walking with active arm swing, which was successful insofar as it also increased ML-MoS in the downhill walking condition. This potentially shows that participants improved their mediolateral stability by varying their step width when managing the active arm swing.
Arm Swing Effects Were Consistent Across Uphill and Downhill Sections of Rolling-Hills
We hypothesized that walking with arms held would lead to compound compensatory strategies during both uphill and downhill sections of the rolling-hills to increase stability. In both uphill and downhill sections, walking with arms held decreased speed compared to normal and active, which may indicate an extra level of caution when walking without arm swing. However, this did not appear to alter any strategies adopted during sloped walking. In fact, spatiotemporal differences from arm swing primarily existed with active arm swing compared to held and normal, with no significant differences between held and normal. For example, compared to held and normal, active arm swing increased step time, seemingly to preserve the coupling of arm-to-leg swing when the arms had further to swing (Bondi et al., 2017). This is further evidenced by the concomitant increase in step length during the active arm swing condition. It may be the case that the speed adjustment made by participants during the held condition was adequate to approximate normal walking stability and limit further need for spatiotemporal adjustments. Conversely, walking speed during active arm swing was not significantly different from normal but led to significant spatiotemporal differences from normal arm swing. Compared to normal arm swing, both held and active conditions caused distinct postural differences. Adopting a larger trunk angle with arms held projects the CoM further anteriorly, potentially reflecting an attempt to facilitate forward progression (Leroux et al., 2002). In contrast, the more upright posture (smaller trunk angle) during active arm swing may be an attempt to compensate for the forward-shifted CoM from increased anterior arm swing. While held and active arm swing elicited different strategies, these strategies remained separate from those used to navigate the slopes.
Limitations
Both the "held" and "active" arm swing conditions could have led to increased attention compared to normal arm swing, which may approach the attentional requirements of some dual tasks. It is uncertain to what extent this affects the outcome parameters. The rolling-hills was a continuous slope condition wherein a range of angles were used rather than specific slope angles. While this is a more naturalistic terrain, it cannot provide insight to the strategies used to overcome specific surface angles or the extent of the spatiotemporal or postural strategies.
CONCLUSION
Our study demonstrates that arm swing caused equivalent changes in all surface conditions. ML-MoS and step width CoV both increased within downhill sections of the rolling-hills terrain with the use of active arm swing compared to normal. This indicates that young, healthy participants may have improved their mediolateral stability by varying their step width when managing the active arm swing. Alternatively, the increase in ML-MoS during uphill sections compared to level was accompanied by wider steps and longer double support time. Because stability increased during active arm swing with ongoing base of support adjustments and during sloped walking with consistently wider steps and longer double support, this demonstrates that different stepping strategies were used to manage active arm swing compared to a mild incline. Participants successfully navigated the rolling-hills by decreasing walking speed, but even the mild slopes caused spatiotemporal and postural changes. Specifically, the variability of the rolling-hills required participants to proactively modify their base of support to stabilize the COM. As this study tested healthy young adults, the current findings can be used as a baseline comparison in future investigations of other populations. Future research should focus on sloped walking in populations at risk of or with gait impairments (i.e., older adults or those with gait disorders).
DATA AVAILABILITY STATEMENT
The software and dataset produced and analyzed during this work are openly available in Zenodo at: https://doi.org/10.5281/ zenodo.5608535.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board of the University of Ottawa and the Ottawa Hospital Research Ethics Board. The patients/participants provided their written informed consent to participate in this study.
Characteristics of the Board of Commissioners, Directors, and Financial Distress
ABSTRACT
INTRODUCTION
Companies experiencing financial difficulties will have difficulty paying bills, their income is not enough to cover all their expenses, and they will experience losses (Zulfa et al., 2021). Similarly, financial distress is defined as a situation where a company's operating cash flow is insufficient to cover its existing obligations (Wesa & Otinga, 2018). Examining the reasons behind financial distress is a subject that has long been studied; research dates back to the 1960s (Beaver, 1966; Altman, 1968). The majority of earlier research focused on analyzing accounting and financial data using a variety of statistical methods. However, financial distress cannot be fully explained by financial figures alone. The necessity for the firm to be supported by sound corporate governance is one of the factors relevant to financial hardship (Fathonah, 2016). This claim is supported by research showing that, in stable economic times, corporate governance measures may improve a company's performance and shield it from harm in the event of a financial crisis (Erkens et al., 2012; Orazalin et al., 2016).
This issue is important for this study for several reasons. First, although the prediction of financial distress has attracted much attention, previous research findings remain unresolved, and the present study seeks to bridge this gap in the literature. So far, most studies of how financial distress develops have been conducted in developed countries (Tsai, 2014; Baklouti et al., 2016; Al-Tamimi, 2012); therefore, the researchers extend this line of inquiry to a developing country for a more in-depth examination. Second, because the Board of Commissioners and Board of Directors are features of corporate governance that typically play a major role in the company, this study focuses on the characteristics of the Board of Commissioners and Directors (Begum et al., 2023). In addition to the ownership structure and the reputation of public accountants, board characteristics are the most important components of corporate governance (Detthamrong et al., 2017). Thus, this study identifies four attributes of the Board of Directors and four attributes of the Board of Commissioners, namely the number of female directors, independent directors, the size of the Board of Directors, the education of the Board of Directors, the proportion of female commissioners, independent commissioners, the education of the Board of Commissioners, and the size of the Board of Commissioners.
This research was conducted at State-Owned Enterprises throughout Indonesia. The researchers used State-Owned Enterprises because they play an important role in running a country's economy and contribute to providing state revenue. The purpose of this study is to answer the following question: Does financial distress depend on the characteristics of the board of directors and commissioners? The researchers are eager to investigate this subject further to make the findings useful for shareholders, management, regulators, and other stakeholders interested in how the characteristics of the board of commissioners and directors affect financial distress. The theory used in this research is agency theory. Agency theory explains that, in companies, there are differences in interests between principals (investors) and agents (managers), which can cause agency problems (Jensen & Meckling, 1976). Because the board is an internal organ of the corporation that is tied to shareholders, one of the major ramifications of this agency problem is regulation related to corporate governance, specifically in terms of oversight and board composition. Therefore, agency theory examines and attempts to resolve issues in the interaction between principals (shareholders or owners) and agents (business management). Terjesen et al. (2016) state that the board of commissioners is tasked with supervising and disciplining management in an organization. Examining how commissioner traits affect financial distress is thus an intriguing field of study. The presence of women on the board of commissioners is one such trait. Compared to male leaders, female leaders are typically more circumspect and risk-averse (Kristanti & Isynuwardhana, 2018). This bears on the idea of high reward at high risk: having a large number of female executives in the organization will lessen risk, but if little risk is taken, the firm will also see little profit, and a low return might also bring on financial distress. Numerous other studies (Rahimipour, 2017; Hoseini & Gerayli, 2018) have indicated a strong negative influence of the presence of women on the board of commissioners on financial hardship. Thus, the probability of financial hardship decreases with the number of women on the board. On the other hand, research conducted by Salloum et al. (2016) and García-Meca & Santana-Martín (2022) shows that having female members on the board of commissioners does not significantly reduce financial suffering; companies experiencing losses cannot overcome their financial difficulties by adding more women to the board of commissioners. So, the following is the research's hypothesis:

H1: The Proportion of Females on the Board of Commissioners Negatively Affects Financial Distress

Representation of women on the board of directors also helps lessen the company's financial problems. A female board of directors might encourage creativity in the company's management (Guizani & Abdalkrim, 2023). The proportion of female directors has been found to be negatively related to financial distress, implying that there will be fewer financial issues the more female directors there are. This was discovered by Benkraiem et al. (2017)
in an earlier study. Having more women on the board of directors may improve function and productivity, leading to greater corporate success. However, the research of Solakoglu & Demir (2016) revealed that financial hardship and the proportion of female directors are positively correlated. This implies that an excessive number of women serving on the board of directors might have a negative impact on the company, owing to the perception that men are more competent than women and that women's success is due more to luck in making decisions. This differs from research conducted by Santen & De Bos (2015), which found that the proportion of female directors did not influence financial distress. So, the following is the research's hypothesis:

H2: The Proportion of Female Directors Negatively Affects Financial Distress

According to agency theory, the opportunistic conduct of directors needs to be monitored and restrained by independent commissioners. Independent commissioners are better equipped to keep an eye on and control the actions of corporate directors if their proportion is higher. This idea is consistent with a study by Fathonah (2018), which discovered a negative correlation between financial difficulty and the independence of the board of commissioners: the more effective the independent board of commissioners' oversight, the more the directors' deviations may be reduced. This is in contrast to an earlier study by Widhiadnyana & Ratnadi (2019), which looked at the influence of independent commissioners and discovered a positive link between independent commissioners and financial distress. This is because the controlling role of the company's independent commissioners has not been functioning effectively; the process of selecting independent commissioners in companies is still limited to satisfying the requirements of corporate governance laws. So, the following is the research's hypothesis:

H3: Independent Commissioners Negatively Affect Financial Distress

The presence of independent directors is also important for the organization. Independent directors are believed to reduce agency problems. Previous research by Widiatami et al. (2023) found that independent directors have the ability to act independently because they are free from affiliation. This allows them to supervise company directors in making decisions and limit policies that may conflict with the interests of certain parties. Independent directors therefore have a detrimental effect on financial hardship. However, in reality, according to Dharma et al.
(2021), being overly independent might be bad for the firm: when there are too many independent directors, the board cannot freely implement policies that could benefit the company, which can increase the risk of financial difficulties. In that case, the existence of independent directors in the company is only for compliance requirements and is more or less ceremonial. So, the following is the research's hypothesis:

H4: Independent Directors Have a Negative Impact on Financial Distress

Members of the board of commissioners with greater expertise are more capable of overseeing the company and making wiser judgments than those lacking a certain degree of education (Singhal et al., 2021). Research by Permana & Serly (2021) revealed a negative correlation between financial difficulty and the board of commissioners' educational background. This implies that the likelihood of the firm experiencing financial difficulties decreases with the level of education possessed by the board of commissioners. Highly educated members of the board of commissioners are better equipped to oversee the board of directors' performance and can make better decisions for the company because they have a deeper understanding of the financial industry. By contrast, according to a study by Kharis & Nugrahanti (2022), the oversight of the company's board of directors was not impacted by the high or low educational levels of the board of commissioners. This may be because a wide range of educational backgrounds can influence the Board of Commissioners' decision-making, which might lead to inefficiency in providing monitoring. So, the following is the research's hypothesis:
H5: Board of Commissioners Education Has a Negative Influence on Financial Distress
The educational background of the board of directors plays a major role in determining the company's strategy. Since a director's education influences the success of the firm, a prior study by Kristanti et al. (2016) revealed a negative correlation between financial problems and a director's level of education. This is because knowledgeable directors with good judgment and strategic understanding may help the company avoid financial trouble. The findings differ from those of Mahardini & Framita (2022), who discovered a positive correlation between financial difficulty and the board of directors' educational attainment. This implies that financial strain may also rise with a higher degree of education on the board of directors, because the directors' educational backgrounds may diverge from the nature of the company's industry, leaving them unable to support the company's continued operations. This also contrasts with a study by Budiningsih et al. (2022) that discovered no relationship between board education and financial difficulty. So, the following is the research's hypothesis: H6: Board of Directors Education Has a Negative Influence on Financial Distress. The board of commissioners is responsible for supervising and directing the board of directors in managing and representing the corporation. Reduced board performance may arise from a bigger board of commissioners' inability to effectively carry out its supervisory role; as a result, financial difficulties for the company are expected (Agustina & Anwar, 2021). On the other hand, a large and diverse board of commissioners with various skills and expertise can provide a broader perspective for the company and help make better judgments; thus, the size of the board of commissioners has a negative impact on financial distress (Lestari & Wahyudi, 2021). In contrast, in the research conducted by Kalbuana et al. (2022), the size of the board of commissioners has a positive impact on financial hardship, since a large board serves as a less effective watchdog over the company's directors. So, the following is the research's hypothesis: H7: Board of Commissioners Size Has a Negative Impact on Financial Distress. One of the most crucial corporate governance mechanisms is the board of directors, whose presence affects how well the business performs (Triwahyuningtias & Muharam, 2012). A large board of directors is anticipated to influence the efficacy of business strategies that improve long-term performance and enhance the company's reputation (Kusanti & Andayani, 2015). A large number of directors not only improves the company's image but also lessens the likelihood of financial difficulty; therefore, financial difficulty is negatively impacted by the size of the board of directors. Vania & Supatmi (2014) claim instead that board size has a positive impact on financial distress, because the board of directors does not function in line with its obligations and only serves to satisfy the prerequisites for establishing a company. This differs from research conducted by Nuswantara et al. (2023), which found no relationship between financial distress and board size. So, the following is the research's hypothesis: H8: Board of Directors Size Has a Negative Impact on Financial Distress
RESEARCH METHODS
This research is quantitative and employs secondary data. The sample comprises all Indonesian state-owned enterprises, with an observation period of 2016-2021. The research uses a purposive sampling strategy, meaning that the following conditions must be met: 1) the firms must be state-owned and have financial records and annual reports available from 2016 to 2021; 2) the state-owned businesses must not have failed or declared bankruptcy. Based on these criteria, a research sample of 37 companies was obtained, yielding 222 units of analysis (37 state-owned companies over six financial reporting periods). Companies are classified as either financially troubled or financially healthy based on their Z-score (Guizani & Abdalkrim, 2023). A company is categorized as financially sound if its Z-score is 1.81 or above (scoring 0), and it is considered to be in financial hardship if its Z-score is less than 1.81 (scoring 1). The Z-score and a firm's financial difficulty have an inverse relationship: the lower the Z-score, the higher the likelihood of the company going bankrupt (formula 1).
Z-score = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 1.05X5

where:
X1 = Working Capital / Total Assets
X2 = Retained Earnings / Total Assets
X3 = Profit Before Interest and Tax / Total Assets
X4 = Market Value of Equity / Book Value of Total Debt
X5 = Sales / Total Assets

Gender diversity is an independent variable in this study. It is measured as the number of women (WOMEN) on the board of commissioners as a percentage of the total number of commissioners, and as the number of female members of the board of directors (WOMEN) as a percentage of the total board of directors (Kabir et al., 2022). In addition, the researchers used the Blau index to calculate a gender diversity measure for the board of commissioners and the board of directors based on the number of gender categories (men and women) (Guizani & Abdalkrim, 2023). The Blau index can be measured by formula 2.
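To make formula 1 and the cutoff rule concrete, here is a minimal sketch in Python; the function names are ours, the weights are as printed above (note that the classic Altman (1968) model uses 0.999 rather than 1.05 on X5), and the example inputs are hypothetical:

```python
def altman_z(x1, x2, x3, x4, x5):
    """Z-score with the weights as printed in formula 1."""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.05 * x5

def distress_dummy(z, cutoff=1.81):
    """1 = financially distressed (Z < 1.81), 0 = financially sound."""
    return 1 if z < cutoff else 0

# Hypothetical firm with weak retained earnings and thin margins:
z = altman_z(x1=0.10, x2=0.05, x3=0.02, x4=0.40, x5=0.80)
print(round(z, 2), distress_dummy(z))  # 1.34 1 -> classified as distressed
```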
Blau index = 1 − Σ_{i=1}^{N} p_i², where N is the number of categories and p_i is the proportion of each category; the categories used in this study are "male" and "female." The proportion of independent commissioners (INDKOM) is computed as the number of independent commissioners divided by the total number of commissioners; the size of the board of commissioners (BSIZE) is computed by counting the number of commissioners; and board education (EDUC) is computed by dividing the number of members with a master's degree or higher by the total number of commissioners. The proportion of independent directors (INDIR) is the ratio of independent directors to the total number of directors. Directors' education is determined by dividing the number of directors with a master's degree or higher by the total number of directors. The size of the board of directors is the number of directors.
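The Blau computation is a one-liner; as a sketch, for a two-category (male/female) board with hypothetical head counts:

```python
def blau_index(counts):
    """Blau index: 1 minus the sum of squared category proportions."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# With two gender categories the index ranges from 0 (all one gender)
# to 0.5 (an evenly split board).
print(round(blau_index([5, 0]), 2))  # 0.0  -> all-male board
print(round(blau_index([4, 1]), 2))  # 0.32 -> one woman out of five members
print(round(blau_index([3, 3]), 2))  # 0.5  -> maximum two-category diversity
```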
Following earlier research, firm-specific control variables have been employed to control for the firm's financial state when examining the effects of gender diversity and board characteristics on financial distress (Shahwan, 2015). The liquidity ratio (LIQ) is measured as the ratio of current assets to current liabilities; the debt-to-equity ratio (DER) measures the company's leverage; and the size of the public accounting firm (BIG4) is a dummy variable taking the value 1 if the company is audited by one of the Big Four public accounting firms (PricewaterhouseCoopers, Deloitte Touche Tohmatsu, Ernst & Young, and KPMG) and 0 otherwise. The logistic regression analysis approach is the foundation for this study's regression model, with data processed using EViews version 12. The researchers developed the following models to investigate the impact of board characteristics and gender diversity on financial distress. Board of Commissioners:
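The model equations themselves are not legible in the source; a plausible reconstruction of the two logit specifications, assuming the variable names defined above (an assumption, not a verbatim restoration), is:

FD = α + β1·WOMEN + β2·INDKOM + β3·EDUC + β4·BSIZE + β5·DER + β6·LIQ + β7·BIG4 + ε (Board of Commissioners)

FD = α + β1·WOMEN + β2·INDIR + β3·EDUC + β4·BSIZE + β5·DER + β6·LIQ + β7·BIG4 + ε (Board of Directors)

where FD is the financial distress dummy modeled through the logit link.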
RESULTS AND DISCUSSIONS
Table 1 shows the descriptive statistics of the chosen sample firms. The mean and median financial distress values in this research sample are 0.545 and 1, indicating that more than half of the firm-year observations of Indonesian state-owned businesses between 2016 and 2021 fall into the financially distressed category (Z-score below 1.81) and are in the risky zone. Turning to the independent variable, gender diversity, there are still several organizations with no women on the board of commissioners. The mean and median values for the presence of women on the board of commissioners are 0.121 and 0; on average, women make up 12.1% of a company's board of commissioners. The mean and median of the percentage of women on the board of directors are 0.106 and 0, respectively, indicating that some organizations still do not have any women; on average, women make up 10.6% of a company's board of directors.
The commissioner Blau index's mean and median values of 0.156 and 0, and the board of directors' index values of 0.153 and 0, both demonstrate how little gender diversity there is on the boards of commissioners and directors; in terms of gender diversity, men continue to predominate on the boards of Indonesian state-owned businesses. On average, the Blau index is 0.153 for the board of directors and 0.156 for the board of commissioners. Regarding the board characteristic data, the mean and median values for independent commissioners are 0.328 and 0.833, respectively, indicating that the average percentage of independent commissioners within an organization is 32.8%. Based on the mean and median values of 0.715 and 0.833 for the board of directors, the average proportion of independent board members is 71.5%.
The mean and median values for the board of commissioners' education level are 0.823 and 0.833, respectively, meaning that, on average, 82.3% of commissioners at Indonesia's state-owned businesses hold a master's degree or above. The mean and median values for the board of directors' education level are 0.779 and 0.800, respectively, meaning that 77.9% of directors hold a master's degree or above. In terms of board size, the company's board of commissioners has an average of 5.518 members (mean 5.518, median 5), while the average size of the company's board of directors is 5.671 members.
Regarding the control variables, the typical state-owned corporation in Indonesia is highly susceptible to financial hardship, as evidenced by the mean and median debt-to-equity ratio (DER) values of 2.260 and 1.373; the cause of this situation is rising debt levels. The mean and median liquidity ratio (LIQ) values are 7.200 and 1.544, indicating that a company's capacity to pay down debt increases with its current asset level. According to the mean and median values for public accounting firms (BIG4), which are 0.396 and 0, respectively, 39.6% of state-owned businesses in Indonesia are audited by a Big Four public accounting firm. In this study, logistic regression is used for hypothesis testing; the test measures the extent to which the independent variables predict the probability of the dependent variable occurring. Conventional assumption checks on the independent variables and a normality test are not necessary with the logistic regression approach (Ghozali, 2018).
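Although the paper runs the estimation in EViews 12, the same logit can be sketched in Python with statsmodels; the file name and column names below are hypothetical placeholders for the variables defined in the methods section:

```python
import pandas as pd
import statsmodels.api as sm

# One row per firm-year; file and column names are hypothetical.
df = pd.read_csv("bumn_panel_2016_2021.csv")

X = sm.add_constant(df[["WOMEN", "INDKOM", "EDUC", "BSIZE",
                        "DER", "LIQ", "BIG4"]])
y = df["FD"]  # 1 = distressed (Z-score < 1.81), 0 = financially sound

result = sm.Logit(y, X).fit()
print(result.summary())  # coefficient signs and p-values as in Table 2
```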
The results of the hypothesis test in Table 2 indicate that the proportion of women on the board of commissioners (WOMEN) has a substantial positive effect on financial distress, with a probability of 0.01 < 0.05. This means that a higher proportion of women on the board of commissioners is associated with a higher level of financial distress. The results of this study are in line with those of Salloum et al. (2016), who discovered that an excessively high percentage of women can also cause losses for the company. Therefore, H1 is rejected. This finding is corroborated by agency theory, which examines the agreements between agents and shareholders in managing a company, since the primary cause of agency costs is the mismatch between the interests of the principal and the agent. Agency cost includes the principal's monitoring expenditures; as the proportion of women on the board of commissioners increases, the monitoring expenses of the business will increase.
H2 is rejected because there is no clear relationship between the percentage of female directors (WOMEN) and financial distress, with a probability of 0.61 > 0.05. The findings are consistent with the research of Santen & De Bos (2015), which found no relationship between the percentage of female directors and financial distress. This is because men still control state-owned companies in Indonesia; there are at most one, two, or three female directors in a company. Using gender diversity (the Blau index), the researchers ran further tests to verify the robustness of the findings on the percentage of female board members and financial hardship. With a probability of 0.01 < 0.05, the findings indicated that the gender diversity of the board of commissioners (Blau index) remained significantly and positively correlated with financial hardship, consistent with the ratio-based results in model 1. With a probability of 0.89 > 0.05, the findings for the board of directors using the Blau index indicated that gender diversity on the board did not significantly affect financial hardship, again consistent with the ratio-based test findings in model 1.
H3 is rejected because the independence of the board of commissioners (INDKOM) has a positive and substantial impact on financial hardship, with a probability of 0.00 < 0.05. This demonstrates that the degree of financial difficulty rises as the number of independent commissioners does, suggesting that the independent commissioners' monitoring role is not operating efficiently inside the organization. This is consistent with the study by Widhiadnyana & Ratnadi (2019), which discovered a positive correlation between financial hardship and the commissioners' independence. H4 is rejected because independent directors (INDIR) have a probability of 0.98 > 0.05, indicating no significant effect on financial hardship. This shows that the company's process of selecting independent directors is still limited to meeting legal requirements to comply with good corporate governance practices. This finding is consistent with the research of Sewpersadh (2022), which found that independent directors and financial distress are not related. The education level (EDUC) of the board of commissioners has a negative but insignificant effect on financial distress, with a probability of 0.43 > 0.05, so H5 is rejected: the high or low level of education of the board of commissioners does not affect the supervision of the company's board of directors. The results are in line with research conducted by Kharis & Nugrahanti (2022). H6 is rejected since the findings on the board of directors' education level (EDUC) indicate that the board's education has a substantial positive influence on financial hardship, with a probability of 0.01 < 0.05. This implies that financial difficulties increase with the education of the board of directors, suggesting that the highly educated backgrounds of the board members may not align with the company's line of business, leaving the board unable to sustain the corporation's ongoing operations. The results of this investigation are in line with the findings of Mahardini & Framita (2022).
H7 is rejected since the findings demonstrate that the board of commissioners' size (BSIZE) positively and significantly affects financial hardship, with a probability of 0.01 < 0.05. This implies that financial difficulty increases with the size of the board of commissioners; it may be concluded that a big board of commissioners is a less effective watchdog over corporate executives. This outcome is in line with the research of Kalbuana et al. (2022). H8 is rejected because the size of the board of directors (BSIZE) does not have a significant effect on financial difficulties, with a probability of 0.23 > 0.05. These results are in accordance with Nuswantara et al. (2023): the company's financial difficulties are not always significantly influenced by the size of the board of directors.
The debt-to-equity ratio (DER) coefficient has a substantial positive correlation with financial difficulty, with a probability of 0.00 < 0.05. This implies that the likelihood of financial distress increases as the DER rises. Total assets must therefore be greater than total liabilities; in other words, the corporation has to keep its DER low in order to pay off its obligations without significantly jeopardizing the interests of its capital owners. Conversely, if the company has a high DER, there are concerns that it will find it difficult to pay its obligations, and financial hardship may result. The findings of this study are consistent with previous research by Budiningsih et al. (2022).
Liquidity (LIQ), on the other hand, has a probability of 0.00 < 0.05 and a strong negative impact on financial hardship. This suggests a strong inverse relationship between liquidity and financial hardship: when a company's liquidity rises, the likelihood of financial difficulty falls, and if liquidity decreases, the chance of financial difficulties is higher. The findings rest on the idea put forward by Brigham and Houston in Murni (2018), according to which a company's liquidity declines and problems arise if its current obligations increase more quickly than its current assets. The size of the public accounting firm (BIG4) has a negative but insignificant effect on financial distress, with a probability of 0.08 > 0.05. Most state-owned companies are audited by smaller public accounting firms, which tend to be less independent; of the 20 companies in our data experiencing financial distress, only 7 were audited by a Big Four firm, and the rest were audited by non-Big Four firms. The findings of this study are consistent with those of Pratitis (2012).
CONCLUSIONS
In order to provide empirical evidence on the characteristics of boards of commissioners and directors with regard to financial distress, the following conclusions were drawn from testing the research data: First, the proportion of female commissioners, the independence of the board of commissioners, the education level of the board of directors, and the size of the board of commissioners positively affect financial distress; Second, the size of the board of directors, the percentage of independent directors, the education of the board of commissioners, and the number of female directors do not affect financial distress; and Third, in relation to the control variables, financial distress is positively influenced by the debt-to-equity ratio, negatively influenced by liquidity, and not influenced by the public accounting firm (BIG4).
This work presents several theoretical and practical implications. Theoretically, this study can improve our knowledge of how the characteristics of boards of commissioners and directors can affect financial distress. Practically, it offers useful information for business decision makers in developing countries, helping them design boards with positions and memberships that support corporate governance principles and prevent financial distress. In addition, by outlining the characteristics of boards of commissioners and directors, these results can help regulators understand the need to take actions that improve board performance. This study has unavoidable limitations. First, it only observes Indonesian state-owned companies between 2016 and 2021. Second, it only uses the Altman Z-score as a metric of financial distress. In the future, scholars can use the Zmijewski model as an alternative measure of financial distress and expand the research sample to cover more developing countries. To further extend this study, future researchers can include board tenure, board age, and foreign board members in the list of board characteristics.
Table 2 .
The Results of Logistic Regression Analysis. Notes: This table presents the results of a logit regression that estimates the effect of gender diversity and board characteristics on financial distress. WOMEN is the proportion of women on the board. INDKOM is the proportion of independent commissioners. INDIR is the proportion of independent directors. EDUC is the board's education level. BSIZE is the board size. DER is the debt-to-equity ratio. LIQ is liquidity. BIG4 is a measure of public accounting firm size. *, **, *** denote significance at 1%, 5%, and 10%. | 6,504.4 | 2024-08-16T00:00:00.000 | [
"Business",
"Economics"
] |
Estimation and Validation of Lisinopril in Bulk and its Pharmaceutical Formulation by HPLC Method
An accurate and precise HPLC method was developed for the determination of lisinopril. Separation of the drug was achieved on a reverse phase C8 column using a mobile phase consisting of phosphate buffer and methanol in the ratio of 35:65 v/v. The flow rate was 0.8 mL/min and the detection wavelength was 215 nm. Linearity was observed in the range of 20-60 μg/mL with a correlation coefficient of 0.9992. The proposed method was validated for linearity, accuracy, precision and robustness. This method can be employed for routine quality control analysis of lisinopril in tablet dosage forms.
Introduction
Lisinopril is a potent, competitive inhibitor of angiotensin-converting enzyme (ACE), the enzyme responsible for the conversion of angiotensin I (ATI) to angiotensin II (ATII). ATII regulates blood pressure and is a key component of the renin-angiotensin-aldosterone system (RAAS). Lisinopril is indicated for the treatment of hypertension. It may be used alone as initial therapy or concomitantly with other classes of antihypertensive agents. Lisinopril is also indicated as adjunctive therapy in the management of heart failure in patients who are not responding adequately to diuretics and digitalis. Lisinopril is chemically described as (S)-1-[N2-(1-carboxy-3-phenylpropyl)-L-lysyl]-L-proline dihydrate 1 (Figure 1). A few spectroscopic 2,3, HPLC 4-6 and LC-MS 7 methods were reported earlier for the determination of lisinopril in bulk and pharmaceutical dosage forms. In the present study the authors report a rapid, sensitive, accurate and precise HPLC method for the estimation of lisinopril in bulk and in tablet dosage forms.
Experimental
The analysis of the drug was carried out on a Waters HPLC system equipped with a reverse phase XTerra C8 column (150 mm × 4.6 mm; 3.5 µm), a 2695 binary pump, a 20 µL injection loop and a 2487 dual absorbance detector, running on Waters Empower software.
Chemicals and solvents
The reference sample of lisinopril was supplied by Sun Pharmaceutical Industries Ltd., Baroda.HPLC grade water and acetonitrile were purchased from E. Merck (India) Ltd., Mumbai.Potassium dihydrogen phosphate and orthophosphoric acid of AR Grade were obtained from S.D. Fine Chemicals Ltd., Mumbai.
Preparation of phosphate buffer (pH 3.0)
Seven grams of KH2PO4 was weighed into a 1000 mL beaker, dissolved and diluted to 1000 mL with HPLC-grade water. 2 mL of triethylamine was added and the pH adjusted to 3.0 with orthophosphoric acid.
Preparation of mobile phase and diluents
350 mL of the phosphate buffer was mixed with 650 mL of methanol. The solution was degassed in an ultrasonic water bath for 5 minutes and filtered through a 0.45 µm filter under vacuum.
Procedure
A mixture of buffer and methanol in the ratio of 35:65 v/v was found to be the most suitable mobile phase for ideal separation of lisinopril. The solvent mixture was filtered through a 0.45 µm membrane filter and sonicated before use. It was pumped through the column at a flow rate of 0.8 mL/min. The column was maintained at ambient temperature. The pump pressure was set at 800 psi. The column was equilibrated by pumping the mobile phase through the column for at least 30 min prior to the injection of the drug solution. The detection of the drug was monitored at 215 nm. The run time was set at 6 min. Under these optimized chromatographic conditions, the retention time obtained for the drug was 2.298 min.
A typical chromatogram showing the separation of the drug is given in Figure 2.
Calibration plot
About 25 mg of lisinopril was weighed accurately, transferred into a 100 mL volumetric flask and dissolved in 25 mL of a 35:65 v/v mixture of phosphate buffer and methanol. The solution was sonicated for 15 min and the volume made up to the mark with a further quantity of the diluent to obtain a 100 µg/mL solution. From this, a working standard solution of the drug (40 µg/mL) was prepared by diluting 2 mL of the above solution to 10 mL in a volumetric flask. Further dilutions ranging from 20-60 µg/mL were prepared in 10 mL volumetric flasks using the above diluent. 20 µL of each dilution was injected six times into the column at a flow rate of 0.8 mL/min and the corresponding chromatograms were obtained. From these chromatograms, the average area under the peak of each dilution was computed. The calibration graph constructed by plotting concentration of the drug against peak area was found to be linear in the concentration range of 20-60 µg/mL. The relevant data are furnished in Table 1. The regression equation of this curve was computed and later used to estimate the amount of lisinopril in the tablet dosage form.
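The underlying least-squares computation can be sketched as follows; the peak areas here are generated from the regression line reported in the Results section purely for illustration, not taken from the study's raw data:

```python
import numpy as np

conc = np.array([20, 30, 40, 50, 60])  # µg/mL
# Illustrative areas generated from Y = 106604.8 + 38917.98X:
area = np.array([884964, 1274144, 1663324, 2052504, 2441684])

slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"Y = {intercept:.1f} + {slope:.2f}X, r^2 = {r2:.4f}")
```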
Validation of the proposed method
The specificity, linearity, precision, accuracy, limit of detection, limit of quantification, robustness and system suitability parameters were studied systematically to validate the proposed HPLC method for the determination of lisinopril. A solution containing 40 µg/mL of lisinopril was subjected to the proposed HPLC analysis to check intra-day and inter-day variation of the method, and the results are furnished in Table 2. The accuracy of the HPLC method was assessed by analyzing solutions of lisinopril at 50, 100 and 150% concentration levels by the proposed method. The results are furnished in Table 3. The system suitability parameters are given in Table 4.
Estimation of lisinopril in tablet dosage form
Two commercial brands of tablets were chosen for testing the suitability of the proposed method to estimate lisinopril in tablet formulations. Twenty tablets were weighed and powdered. An accurately weighed portion of this powder equivalent to 25 mg of lisinopril was transferred into a 100 mL volumetric flask and dissolved in 25 mL of a 35:65 v/v mixture of phosphate buffer and methanol. The contents of the flask were sonicated for 15 min and a further 25 mL of the diluent was added; the flask was shaken continuously for 15 min to ensure complete dissolution of the drug. The volume was made up with the diluent and the solution was filtered through a 0.45 µm membrane filter. This solution, containing 40 µg/mL of lisinopril, was injected into the column six times. The average peak area of the drug was computed from the chromatograms and the amount of the drug present in the tablet dosage form was calculated by using the regression equation obtained for the pure drug. The relevant results are furnished in Table 5.
Results and Discussion
In the proposed method, the retention time of lisinopril was found to be 2.298 min. Quantification was linear in the concentration range of 20-60 µg/mL. The regression equation of the linearity plot of concentration of lisinopril against its peak area was found to be Y = 106604.8 + 38917.98X (r² = 0.9992), where X is the concentration of lisinopril (µg/mL) and Y is the corresponding peak area. The number of theoretical plates calculated was 2348, which indicates efficient performance of the column. The limit of detection and limit of quantification were found to be 0.03 µg/mL and 0.12 µg/mL respectively, which indicates the sensitivity of the method. The use of phosphate buffer and methanol in the ratio of 35:65 v/v resulted in a peak with good shape and resolution. The high percentage of recovery indicates that the proposed method is highly accurate. No interfering peaks were found in the chromatogram of the formulation within the run time, indicating that the excipients used in the tablet formulation did not interfere with the estimation of the drug by the proposed HPLC method.
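Given the reported regression equation, the assay back-calculation is a single rearrangement; a minimal sketch, where `mean_area` stands in for the average peak area of the six sample injections (a hypothetical value):

```python
intercept, slope = 106604.8, 38917.98  # reported regression constants

mean_area = 1668000  # hypothetical average peak area of the tablet sample
conc = (mean_area - intercept) / slope  # µg/mL in the injected solution
print(f"{conc:.2f} µg/mL")  # ~40 µg/mL; scale by the dilution factor
```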
Figure 1 .
Figure 1. Chemical structure of lisinopril
Table 1 .
Calibration data of the method
Table 2 .
Precision of the proposed HPLC method
Table 3 .
Accuracy studies
Table 4 .
System suitability parameters
Table 5 .
Assay and recovery studies | 1,743 | 2012-01-01T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Systematic Review of the Short-Term versus Long-Term Duration of Antibiotic Management for Neutropenic Fever in Patients with Cancer
Simple Summary: Empirical administration of broad-spectrum antibiotics during neutropenia has been shown to reduce mortality from bacterial infections. However, prolonged antibiotic exposure, in particular, promotes the development of antimicrobial resistance and the selection of resistant microorganisms, which are often more difficult to treat and carry a higher risk of complications. Early antibiotic discontinuation has been proposed in patients with hematologic malignancy who have febrile neutropenia. Several studies have found that shorter durations of antimicrobial therapy give better clinical outcomes and lower the exposure to broad-spectrum antibiotics, but this raises concerns about their implementation in clinical practice, and their safety and efficacy have been questioned. In our study, a systematic review was conducted to compare short-term and long-term durations of antibiotics for febrile neutropenia with respect to the outcomes of clinical failure, mortality, and bacteremia. Abstract: Early antibiotic discontinuation has been proposed in patients with hematologic malignancy with fever of unknown origin during febrile neutropenia (FN). We intended to investigate the safety of early antibiotic discontinuation in FN. Two reviewers independently searched for articles in Embase, CENTRAL, and MEDLINE on 30 September 2022. The selection criteria were randomized control trials (RCTs) comparing short- and long-term durations for FN in cancer patients, evaluating mortality, clinical failure, and bacteremia. Risk ratios (RRs) with 95% confidence intervals (CIs) were calculated. We identified eleven RCTs (comprising 1128 distinct patients with FN) from 1977 to 2022. A low certainty of evidence was observed, and no significant differences in mortality (RR 1.43, 95% CI 0.81-2.53, I² = 0), clinical failure (RR 1.14, 95% CI 0.86-1.49, I² = 25), or bacteremia (RR 1.32, 95% CI 0.87-2.01, I² = 34) were identified, indicating that the efficacy of short-term treatment may not differ statistically from that of long-term treatment. Regarding patients with FN, our findings provide only weak conclusions regarding the safety and efficacy of antimicrobial discontinuation prior to neutropenia resolution.
Introduction
Fever due to chemotherapy-induced neutropenia is experienced by 10-50% of patients with solid tumors, and by more than 80% of those with hematologic malignancies [1]. Patients with hematological malignancy are at high risk of febrile neutropenia (FN) and of Gram-negative bacilli bloodstream infections. Broad-spectrum beta-lactam antibiotics should be administered, such as carbapenem, piperacillin/tazobactam, ceftazidime, or cefepime, according to several guidelines [1][2][3]. Determining the optimal duration of this empirical therapy, however, remains an open question. Our full search strategy is shown in the Supplementary Materials. This systematic review was conducted according to the guidelines of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) [14]. The review protocol was registered with PROSPERO on 2 November 2022 (CRD42022369590).
Selection of Studies
We included randomized control trials (RCTs) in any language that reported all-cause mortality, clinical failure, or bacteremia, comparing the short-term duration of antibiotics with the long-term duration in hematological FN. We excluded patients with clinically and microbiologically documented infections, as well as neonatal patients. Two investigators (K.I. and T.M.) independently assessed the full texts of the articles. Discrepancies were discussed with a third and fourth investigator (E.O. and N.M.). Regarding the selection of studies until the end of 2017, we referred to the results of a Cochrane review by Stern et al. [13].
Adults (older than 18 years) and children (younger than 18 years) with FN caused by cancer chemotherapy and treated with any antibiotic regimen were included in this study. We defined fever as a single oral temperature higher than 38.3 °C, or a temperature higher than 38.0 °C sustained for more than 1 h, according to the guidelines [1,2]. Neutropenia was defined as an absolute neutrophil count of less than 500 cells/µL. Studies that used a different although similar definition to that in the guidelines were included in the review. The interventions in the RCTs comprised protocol-guided antibiotic discontinuation prior to neutropenia resolution versus antibiotic continuation until neutropenia resolution. We recorded the criteria defined for antibiotic discontinuation, including the timing of discontinuation, definitions of defervescence, and the neutrophil count defined for neutropenia resolution.
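These eligibility definitions translate directly into a small screening helper; a sketch assuming oral temperatures in °C covering at least one hour and an absolute neutrophil count (ANC) in cells/µL:

```python
def is_febrile(temps_last_hour):
    """Single reading > 38.3 °C, or > 38.0 °C sustained for >= 1 h."""
    return max(temps_last_hour) > 38.3 or min(temps_last_hour) > 38.0

def is_neutropenic(anc):
    """Absolute neutrophil count below 500 cells/µL."""
    return anc < 500

def febrile_neutropenia(temps_last_hour, anc):
    return is_febrile(temps_last_hour) and is_neutropenic(anc)

print(febrile_neutropenia([38.1, 38.2, 38.1], anc=320))  # True
```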
Outcomes
The primary outcomes in this systematic review were any cause mortality, clinical failure, and bacteremia. Clinical failure was assessed as defined in each study.
Data Extraction
Two investigators (K.I. and T.M.) independently extracted the following data: publication country, publication year, sample size, type of cancer (solid tumor or hematological malignancy, including stem cell transplantation), type of beta-lactam antibiotics, follow-up period, mortality, clinical failure, and bacteremia in each study. Data were entered into a data extraction sheet built in Microsoft Excel and Google Sheets, so that they could easily be checked by a reviewer; the sheet covered study information (e.g., publication country, study years, single-center or multi-center design), participant baseline characteristics (type of population, inclusion and exclusion criteria, comorbidity, and type of cancer), information regarding the intervention (type of antimicrobials and planned antibiotic duration in each arm), information regarding risk of bias (e.g., randomization method, allocation concealment, blinding, discontinuation of study, and incomplete outcome reporting), and information regarding outcomes (mortality, clinical failure, and bacteremia). We extracted data preferentially using the intention-to-treat principle, which includes all individuals who were randomly assigned in the study outcome. For dichotomous outcomes, we recorded the number of participants manifesting the outcome in each group, as well as the number of evaluated participants. For continuous outcomes, we documented values, as well as the measure used to represent the data (including mean with standard deviation and median with interquartile range). Discrepancies were resolved through discussion or by the other investigators (E.O. and N.M.). We asked study authors for any missing data, enabling us to include findings from studies published after 2018. We also referred to the data from the Stern et al. study [13], which was published before 2018.
Risk of Bias Assessment
Two investigators (K.I. and T.M.) independently assessed the risk of bias. Disagreements were resolved via discussion with a third or fourth investigator (E.O. and N.M.). The risk of bias was assessed according to the scales of the Cochrane risk-of-bias tool (RoB) [15]. With the RoB, we evaluated seven domains of bias: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and others. We assessed the effect of allocation concealment on results based on the evidence of a strong association between poor allocation concealment and overestimation of effect [16], as defined below: • Low risk of bias (adequate allocation concealment); • Unclear risk of bias (uncertainty regarding allocation concealment); • High risk of bias (inadequate allocation concealment).
The two review authors independently recorded methods of allocation generation, blinding, incomplete outcome data, selective reporting, the unit of randomization (patient or febrile episode), and publication status, in addition to the adequacy of allocation concealment.
Statistical Analyses
We analyzed dichotomous data by calculating the RR for each study, with the uncertainty in each result presented as a 95% CI. We assessed the percentage of variation across studies that could not be ascribed to sampling variation using the I² statistic. A fixed-effects model was used unless significant heterogeneity was observed (p < 0.1 or I² > 50%), in which case the random-effects model was used. We also visually inspected the forest plots to judge heterogeneity. We analyzed the data using Review Manager 5.4 (freely available software released by Cochrane, London, UK).
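For a single trial, the RR and its 95% CI follow from the log-RR and its standard error; a minimal sketch with hypothetical 2×2 counts:

```python
import math

def risk_ratio(e_short, n_short, e_long, n_long):
    """RR of the short- vs long-term arm, with a 95% CI on the log scale."""
    rr = (e_short / n_short) / (e_long / n_long)
    se = math.sqrt(1 / e_short - 1 / n_short + 1 / e_long - 1 / n_long)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

print(risk_ratio(8, 55, 6, 57))  # hypothetical mortality counts
```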
Certainty of Evidence
We used the Grades of Recommendations, Assessment, Development, and Evaluations (GRADE) approach to interpret the findings and rate the certainty of evidence [17], grading the major outcomes (mortality, clinical failure, and bacteremia development). The certainty of evidence was evaluated using the GRADEpro guideline development tool (GRADEpro GDT, Evidence Prime Inc., Hamilton, ON, Canada) [18], using parameters such as study design, risk of bias, directness of outcomes, heterogeneity, precision of results, publication bias, effect estimate, and dose-response relationship and confounders. The overall GRADE obtained can thus indicate high, moderate, low, or very low certainty of evidence. We considered this analysis in our conclusions.
Sensitivity Analysis
We conducted a sensitivity analysis to assess the effect of allocation concealment on mortality, to prevent overestimation of effects in studies with inadequate or unclear allocation concealment. The studies with unclear risk were the same as those identified by Stern et al. [13]; therefore, we analyzed only the low-risk allocation studies.
Selection Bias
Funnel plot analyses were performed for the three main comparisons: mortality, clinical failure, and bacteremia. The funnel plots for three main comparisons were symmetrical ( Figure 2). An indication that small trials are missing may be present for bacteremia in Figure 2.
Risk of Bias Assessment, GRADE, and Meta-Analyses
In our systematic review, no significant differences in mortality (RR 1.43, 95% CI 0.81-2.53, I² = 0), clinical failure (RR 1.14, 95% CI 0.86-1.49, I² = 25), or bacteremia (RR 1.32, 95% CI 0.87-2.01, I² = 34) were observed (Figure 3). The risk of bias assessment data are graphically presented in Table 2. We also evaluated the GRADE for mortality, treatment failure, and bacteremia in the RCTs and found a low certainty of evidence (Table 3). We also analyzed mortality, clinical failure, and bacteremia for only hematological malignancy patients, including patients who underwent stem cell transplantation as reported by de Jonge et al. [12], Ram et al. [25], Aguilar-Guisado et al. [19], and Santolaya et al. [27], and found similar results (Supplementary Figure S1).
Figure 3. Summary of findings of short- compared with long-term duration antibiotic therapy, presented as forest plots, including the results reported by de Jonge et al. [12], Ram et al. [25], and Kumar et al. [22], and the results of the systematic review by Stern et al. [13]. Mortality (above), clinical failure (middle), and bacteremia (below). Total RR across 1 (left favors short, right favors long). Abbreviations: RCT, randomized control study; CI, confidence interval; RR, risk ratio [12,19-28].
Table 2. Summary of risk of bias in all the randomized controlled trials, including the results reported by de Jonge et al. [12], Ram et al. [25], and Kumar et al. [22], and the results of the systematic review by Stern et al. [13]. The risk of bias covered randomization sequence, concealment, blinding of participant and clinician, incomplete outcome data, selective reporting, and others. Color coding of risk of bias: green, low risk; yellow, unclear risk; red, high risk.
Table 3 notes: The risk in the intervention group (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). CI: confidence interval; RR: risk ratio. GRADE Working Group grades of evidence — High certainty: we are very confident that the true effect lies close to that of the estimate of the effect. Moderate certainty: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different. Low certainty: our confidence in the effect estimate is limited; the true effect may be substantially different from the estimate of the effect. Very low certainty: we have very little confidence in the effect estimate; the true effect is likely to be substantially different from the estimate of the effect. Explanations: a. methods of randomization and allocation concealment were unclear in most studies, and most studies were unblinded; b. effect estimate overlapping no effect with a wide confidence interval; c. variable and inconsistent definition of clinical failure across studies.
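The pooled estimates shown in Figure 3 combine trial-level log-RRs by inverse-variance weighting, with I² quantifying heterogeneity; a sketch with hypothetical per-trial inputs:

```python
import math

def pool_fixed(log_rrs, ses):
    """Fixed-effect pooling with Cochran's Q and the I^2 statistic."""
    w = [1 / s ** 2 for s in ses]
    pooled = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2

# Hypothetical per-trial log-RRs and standard errors:
rr, i2 = pool_fixed([0.36, 0.10, 0.51], [0.51, 0.60, 0.55])
print(f"pooled RR = {rr:.2f}, I^2 = {i2:.0f}%")
```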
Sensitivity Analysis
The RR for mortality was 1.07 (95% CI 0.39-2.92) in the studies with a low risk of bias for allocation concealment (five trials), compared with an RR of 1.65 (95% CI 0.82-3.29) in the studies with an unclear risk of bias for allocation concealment (p = 0.51 for subgroup differences; Supplementary Figure S2).
Discussion
In this systematic review, we examined short- and long-term durations of antibiotic management for neutropenic fever in patients with hematological malignancy. We identified eleven RCTs (comprising 1128 distinct patients with FN) and found that the efficacy of a short-term duration of treatment may not differ statistically from that of a long-term duration.
According to these studies, it may still be difficult for clinicians to implement a short duration of antibiotics for FN.
However, multidrug-resistant GNB, including carbapenem-resistant GNB and those producing extended-spectrum beta-lactamase among the Enterobacteriaceae, are increasing worldwide in cancer patients [29]. Independent risk factors for CRE bloodstream infection in that study were prior β-lactam/β-lactamase inhibitor or carbapenem use. In another study, antimicrobial resistance was associated with unfavorable outcomes, such as high mortality in patients with cancer [30]. As a result, it is necessary to reduce long-term antibiotic exposure in cancer patients.
Whether to accept the conclusion that long-term treatment is preferable remains debatable, owing to the discrepancies among the studies. The use of prophylaxis and the criteria for antibiotic discontinuation differed in each study. Regarding these differing characteristics, much longer durations of antibiotic treatment were reported in the study by Aguilar-Guisado et al. [19]. The types of hematological malignancies studied by de Jonge et al. [12] and Aguilar-Guisado et al. [19] were also disparate: in the study by Aguilar-Guisado et al. [19], 45% of patients had acute leukemia and approximately 27% were in induction or re-induction with prolonged neutropenia, whereas in the study by de Jonge et al. [12], 43% of patients had multiple myeloma and about 70% of transplants were autologous, in which the neutropenic duration is shorter.
In the former study, neutrophil recovery did not resume until the patients in the short-term antibiotic group showed improved clinical symptoms, while in the latter, neutrophil recovery was early; therefore, a comparison with the long-term group was difficult without a cut-off. Additionally, the studies by Klaassen et al. [21] and Santolaya et al. [27] mixed high-risk with low-risk patients. Thus, in the majority of the studies, the short-term group had very limited time to confirm culture negativity, and the long-term group may have had the more desirable outcome, although the two groups were non-inferior. Although the etiology of FN remains unknown with negative culture findings, recent cell-free deoxyribonucleic acid (DNA) technology has shown that viruses and Streptococcus viridans are common in blood culture-negative cases [31]. We believe that if cell-free DNA testing were incorporated into the studies, patients with infections would be excluded from the short-term treatment group, thus leading to more favorable outcomes.
In the context of antimicrobial regimens, de Jonge et al. [12] empirically used carbapenems, while Aguilar-Guisado et al. [19] used anti-pseudomonal beta-lactam antibiotics. The regimens of other studies in the review differed from the regimen in the current guideline. The incidence of resistant Gram-negative-rod (GNR) strains has increased in patients with hematologic malignancies [32], but the studies included in the systematic review did not consider the proportion of resistant strains. The efficacy of extended administration of beta-lactams [33] or beta-lactams plus aminoglycosides [34] for FN is under investigation; these regimens are also effective against resistant GNR strains. Moreover, the small sample sizes of these studies also limit the evidence. However, several RCTs on the discontinuation of antimicrobial therapy for FN are ongoing in some countries, and their results are awaited (NCT 04948463, NCT 04270786, and NCT 04637464 in the ClinicalTrials.gov registry of clinical trials).
This study had several limitations. A systematic review examines and synthesizes the information available in the literature on a subject; as a result, it may inherit bias from the publications. We compiled trials of different designs, including RCTs and prospective non-RCTs. The study by Stern et al. [13] is supplemented by new RCTs in our study; the authors of that review had already communicated via email with the corresponding authors of each of the previous studies to learn more specifics about each investigation, and we were only able to contact the corresponding authors of articles published since 2018. Although the heterogeneity of studies in our research was low, we believe that more RCTs would further improve the quality of the systematic review. Currently, several retrospective studies have been conducted on the duration of therapy for FN, which favored a short course of antibiotics [35][36][37][38]. Moreover, these studies included high-risk febrile neutropenic patients in hematology, and they support the findings of our systematic review.
Conclusions
Cancer patients should be exposed to the shortest effective duration of antimicrobial therapy, which supports antimicrobial stewardship strategies that aim to improve the use of antimicrobials, limit multidrug resistance, and shorten hospital stays. The evidence of each RCT is limited, and a short-term duration of beta-lactam antibiotics showed no statistically significant differences in mortality, clinical failure, or bacteremia compared with a long-term duration in our systematic review, possibly owing to the small number of studies, varying clinical definitions among studies, or different study designs.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15051611/s1, Figure S1: Summary of findings of short-term duration of antibiotics compared with long-term duration of antibiotics presenting Forest plot for only hematological malignancy patients including stem cell transplantation; Figure S2: Summary of findings of short-term duration of antibiotics compared with long-term duration of antibiotics presenting Forest plot for mortality in allocation concealment bias as sensitivity analysis. | 4,211.2 | 2023-03-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
On Some Approximation Theorems for Power q-Bounded Operators on Locally Convex Vector Spaces
This paper studies some operator inequalities involving power q-bounded operators, along with their best-known properties and results, in the more general framework of locally convex vector spaces.
Introduction
Let X be a Hausdorff locally convex vector space over the complex field C. By a calibration for the locally convex space X we understand a family P of seminorms generating the topology of X, in the sense that this topology is the coarsest with respect to which all the seminorms in P are continuous. Such a family of seminorms was used by the author and Wu [1] and many others in different contexts (see [2][3][4][5]).
It is well known that a calibration P is characterized by the property that the sets V(p, ε) = {x ∈ X : p(x) < ε}, ε > 0, p ∈ P, form a neighborhood subbase at 0. Denote by (X, P) the locally convex space X endowed with the calibration P.
Recall that a locally convex algebra is an algebra with a locally convex topology in which the multiplication is separately continuous. Such an algebra is said to be locally m-convex (l.m.c.) if it has a neighborhood base U at 0 such that each U ∈ U is convex and balanced (i.e., λU ⊆ U for |λ| ≤ 1) and satisfies the property U² ⊆ U.
Any algebra with identity will be called unital. It is well known that a unital locally m-convex algebra A is characterized by the existence of a calibration P such that each p ∈ P is submultiplicative (i.e., p(xy) ≤ p(x)p(y) for all x, y ∈ A) and satisfies p(e) = 1, where e is the unit element.
An element a of a locally convex algebra A is said to be bounded in A if there exists λ ∈ C such that the set {(λa)ⁿ}ₙ≥₁ is bounded in A (see [6]). The set of all bounded elements in A will be denoted by A₀.
Let C∞ := C ∪ {∞} be the Alexandroff one-point compactification of C. Following Waelbroeck [7,8], we introduce the following. Definition 1. We call the resolvent set in the Waelbroeck sense of an element x from a locally convex unital algebra (A, P) the set of all elements λ₀ ∈ C∞ for which there exists a neighborhood V ∈ V(λ₀) such that the following conditions hold: (a) the element x − λe is invertible in A, for any λ ∈ V \ {∞}; (b) the set {(x − λe)⁻¹ : λ ∈ V \ {∞}} is bounded in (A, P).
The resolvent set in the Waelbroeck sense of an element x will be denoted by ρ(x). The Waelbroeck spectrum of x is then defined as σ(x) = C∞ \ ρ(x).
Definition 2. We say that a linear operator T : X → X is q-bounded (quotient-bounded) with respect to P if for any p ∈ P there exists c_p > 0 such that p(Tx) ≤ c_p p(x) for all x ∈ X. Denote by Q_P(X) the set consisting of all quotient-bounded operators with respect to the calibration P.
The set of all bounded elements in Q_P(X) will be denoted by (Q_P(X))₀ (see [12]); this follows easily from [6, Proposition 2.14(ii)]. For T ∈ (Q_P(X))₀ we denote by ρ(T) the Waelbroeck resolvent set of T and by σ(T) the Waelbroeck spectrum of T; the function R(·, T) is called the resolvent function of T. In this paper we evaluate the behaviour of the powers of a q-bounded operator from the algebra (Q_P(X))₀ by means of certain approximations. The main results have been announced in [14].
The Main Results
We continue to employ the notations of the previous sections, and we introduce two types of operatorial approximations for operators from the algebra (Q_P(X))₀ which approximate a given operator through a convergent series of power-bounded operators. The power boundedness problem for operators acting on Banach spaces has been developed in various frameworks by many authors (see [15][16][17]).
In the following, using the functional calculus of the algebra (Q_P(X))₀ (see [7,8]), some important boundedness properties are obtained. Denote N* = N \ {0}. First we have the following.
for n ∈ N* and for all λ ∈ C with |λ| > 1.
Proof. Assume that sup over p̂ ∈ P̂ of p̂(Tⁿ) is at most the stated bound for n ∈ N*. Since the corresponding series converges for |λ| > 1, then, by using the generalized binomial formula, we obtain an estimate from which we deduce the required inequality for any n ∈ N* and any p̂ ∈ P̂. Therefore, the conclusion is verified.
Conversely, we have the following.
The last inequality was obtained by using Stirling's approximation. Now, for T ∈ (Q_P(X))₀, we introduce (see [18]) the following.
The next theorem shows how an operator from the (Q_P(X))₀ algebra is related to its Yosida approximation. Proof. By evaluating the Yosida approximation in terms of the resolvent R(λ, T), for |λ| > r_P(T), we obtain an identity from which it follows that the assertion of the theorem is true. Moreover, (1) holds as well.
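The displayed definition of the Yosida approximation did not survive extraction; under the classical convention (an assumption on our part, consistent with the resolvent manipulation in the proof), it reads

Y_λ(T) = λT(λI − T)^{-1} = λ²R(λ, T) − λI, for |λ| > r_P(T),

so that Y_λ(T) = T(I − T/λ)^{-1} converges to T as λ → ∞.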
Proof. Property (i) implies r_P(T) ≤ 1, so the argument given in the proof of Theorem 7 shows that any λ ∈ C with |λ| > 1 belongs to the resolvent set of T. Hence, using the generalized binomial formula and applying (i) again, we obtain, for any p̂ ∈ P̂, an estimate from which, by passing to the supremum, inequality (ii) follows. Conversely, (i) is a direct consequence of (ii).
Proof. From Theorem 8, for T ∈ (Q_P(X))₀, the stated property is equivalent to the corresponding bound. The conclusion follows by taking into account the relation for n ∈ N*.
Application
Following [19], we see that the resolvent of T is given explicitly, as is the Yosida approximation of T; the above implies that the Yosida approximation is a contraction for λ ≤ 1.
It is clear that, for estimating the powers of T, it seems better to use the Yosida approximation or the Möbius approximation than the resolvent approximation.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper. | 1,436.4 | 2014-08-18T00:00:00.000 | [
"Mathematics"
] |
The slippery slope of dust attenuation curves: Correlation of dust attenuation laws with star-to-dust compactness up to z = 4
Aims. We investigate dust attenuation in 122 heavily dust-obscured galaxies detected with the Atacama Large Millimeter Array (ALMA) and Herschel in the COSMOS field. We search for correlations between dust attenuation recipes and the variation of physical parameters, mainly the effective radii of galaxies, their star formation rates (SFR), and stellar masses, and aim to understand which of the commonly used laws best describes dust attenuation in dusty star-forming galaxies at high redshift. Methods. We make use of the extensive photometric coverage of the COSMOS data combined with highly resolved dust continuum maps from ALMA. We use CIGALE to estimate various physical properties of these dusty objects, mainly their SFR, their stellar masses and their attenuation. We infer galaxy effective radii (Re) using GALFIT in the Y band of HSC and in ALMA continuum maps. We use these radii to investigate the relative compactness of the dust continuum and the extension of the rest-frame UV/optical emission, Re(y)/Re(ALMA). Results. We find that the physical parameters calculated from our models strongly depend on the assumed dust attenuation curve. As expected, the most impacted parameter is the stellar mass, which leads to a change in the "starburstiness" of the objects. We find that taking into account the relative compactness of star-to-dust emission prior to SED fitting is crucial, especially when studying dust attenuation in dusty star-forming galaxies. Shallower attenuation curves did not show a clear preference of compactness with attenuation, while the Calzetti attenuation curve preferred comparable spatial extents of unattenuated stellar light and dust emission. The evolution of the Re(UV)/Re(ALMA) ratio with redshift peaks around cosmic noon in our sample of DSFGs, showing that this compactness is correlated with the cosmic SFR density of these dusty sources.
Introduction
In its earlier years, the Universe not only witnessed higher star formation rates (SFRs), but a significant fraction of this star formation was obscured by dust (e.g., Blain et al. 2002; Takeuchi et al. 2005; Chapman et al. 2005; Bouwens et al. 2012; Madau & Dickinson 2014; Magnelli et al. 2014; Bourne et al. 2017; Whitaker et al. 2017; Gruppioni et al. 2020; Khusanova et al. 2020). Dusty star-forming galaxies (DSFGs, e.g., Viero et al. 2013; Weiß et al. 2013; Casey et al. 2014; Béthermin et al. 2015; Strandet et al. 2016; Casey et al. 2017; Reuter et al. 2020), which are heavily attenuated in the ultraviolet (UV), have contributed notably to the cosmic SFR, making them crucial to the general comprehension of galaxy evolution. These dust-rich galaxies have managed to build up their stellar masses in relatively short timescales, while efficiently depleting their gas reservoirs. Therefore, they might be the direct progenitors of the ultramassive passive red galaxies that are often encountered at high redshift (e.g., Daddi et al. 2005; Whitaker et al. 2013; Nayyeri et al. 2014; Toft et al. 2014; Carnall et al. 2020).
Our understanding of the nature of DSFGs has benefited from the plethora of multiwavelength photometry gathered over the last two decades. With a decade's worth of observation programs of the Atacama Large Millimeter/submillimeter Array (ALMA), it has become possible to enrich our datasets of infrared (IR) galaxies at higher redshifts and to resolve the multiplicity of sources that were blended in lower-resolution detections with Spitzer and Herschel. This has made modeling dust attenuation at high redshifts a hot topic (e.g., Popping et al. 2017; Wang et al. 2017; McLure et al. 2018; Buat et al. 2019; Salim & Boquien 2019; Fudamoto et al. 2020).
Interstellar dust is highly efficient at absorbing short-wavelength photons, predominantly originating from young UV-bright massive stars, rendering the cold star-forming regions of DSFGs virtually inaccessible. This attenuated light can be successfully reproduced by assuming a dust attenuation law (e.g., Burgarella et al. 2005; Buat et al. 2012, 2014; Lo Faro et al. 2017; Salim et al. 2018; Salim & Narayanan 2020). In reality, however, a single attenuation law cannot mimic dust extinction in the interstellar medium (ISM) of a large and diverse sample of galaxies (e.g., Wild et al. 2011; Kriek & Conroy 2013; Buat et al. 2018; Małek et al. 2018; Salim & Narayanan 2020). Different approaches appear to work when modeling dust attenuation in reproducing the spectral energy distributions (SEDs) of galaxies. Calzetti et al. (2000) derived a universal effective attenuation law, used as a screen model, by measuring the extinction in local starburst galaxies. The attenuation curve of Calzetti et al. (2000) succeeds in modeling dust reddening even in high-redshift metal-poor galaxies with a bright cold dust component. The double-component dust attenuation law of Charlot & Fall (2000) assumes a more complex, physical mixing of dust and stars. With this approach, newly formed stars are placed in cold molecular clouds and experience double attenuation, by the dust of the molecular clouds and by that of the ISM. Older stars are attenuated by the dust grains of the ISM alone. These attenuation laws, together with their different recipes, are often used in the literature when modeling the SEDs of galaxies. Such recipes include a steeper curve than that of Calzetti et al. (2000) with a UV bump around 0.217 µm (Buat et al. 2011, 2012), and a shallower ISM attenuation than that of Charlot & Fall (2000) (e.g., Lo Faro et al. 2017). However, when it comes to DSFGs, a proper dust attenuation curve that is able to accurately mimic the absorbed photons is crucial for any SED analysis, especially since in these objects dust plays a massive role in their evolution and is an important agent in shaping their SEDs. On the other side of the SED, the long-wavelength Rayleigh-Jeans tail is extensively covered by ALMA. Therefore, its cold dust continuum provides a vital element in the far-infrared (FIR) SEDs of DSFGs. However, high-resolution ALMA-detected cold dust emission maps were often found to disagree spatially with detections at shorter wavelengths, such as the UV-emitting star-forming regions or the stellar populations of DSFGs (e.g., Dunlop et al. 2017; Elbaz et al. 2018; Buat et al. 2019). This disagreement can take the form of a different dust compactness relative to the higher-energy detections, or in some cases a complete physical dissociation of these two components. Such offsets are often found to be more significant than the systematic offsets that arise from instrumental uncertainties or large beam sizes. In most cases, radio detections with the Karl G. Jansky Very Large Array (VLA) confirmed the ALMA-detected physical dissociation (e.g., Rujopakarn et al. 2016; Dunlop et al. 2017; Elbaz et al. 2018; Hamed et al. 2021). This separation challenges a local energy balance, which could be an issue for SED modeling that assumes a global conservation of energy. This problem was highlighted lately in Buat et al.
(2019), since such an energetic balance is at the core of widely used SED fitting tools (e.g., da Cunha et al. 2008; Noll et al. 2009; Boquien et al. 2019).
Reverse engineering the spectral distribution of DSFGs does not come without obstacles, despite the advances in the current understanding of the physical and chemical processes that such galaxies undergo. The most commonly confronted obstacle in SED fitting is the degeneracy problem. Such degeneracy arises from overlapping physical contributions in one specific wavelength domain, such as the dust and stellar age degeneracy (e.g., Hirashita et al. 2017). Although this can be overcome by assuming an energetic balance and using the FIR emission as an additional constraint for dust attenuation, and by choosing an appropriate star formation history (SFH) (see Ciesla et al. 2016), a well-constrained attenuation curve will help limit such degeneracies, leading to better estimated physical parameters.
Dust attenuation curves significantly alter the stellar mass determination of galaxies in general and of DSFGs in particular. As flatter and geometrically complex attenuation curves can dim the light coming from the older stellar populations in the ISM more efficiently than steeper ones, they naturally result in a significant hidden older stellar population in the optical to near-infrared (NIR) range. This, along with other assumptions such as the initial mass function (IMF) and the SFH, leads to large differences in the resulting stellar masses (e.g., Zeimann et al. 2015; Małek et al. 2018; Buat et al. 2019).
The uncertainty in the stellar mass determination of DSFGs (see the stellar mass controversy in Hainline et al. 2011; Michałowski et al. 2012; Casey et al. 2014) remarkably influences the position of these objects along the commonly named main sequence (MS) of star-forming galaxies (e.g., Brinchmann et al. 2004; Noeske et al. 2007; Daddi et al. 2010; Rodighiero et al. 2011; Lo Faro et al. 2015; Schreiber et al. 2015; Hamed et al. 2021). Most galaxies seem to follow the tight scatter of the MS independently of redshift, and its outliers are typically referred to as starbursts. However, the strong dependence of the stellar mass estimated from SED fitting techniques on the assumed attenuation curve massively varies this scatter and affects the "starburstiness" of already active DSFGs. It is therefore crucial to choose suitable attenuation laws in order to limit biases on the stellar mass determination.
Recently, Donevski et al. (2020) investigated the dust and gas contents of a large sample of DSFGs, linking dust abundance to other physical characteristics such as their SFRs. Despite the growing understanding of these objects, we still lack a complete picture of how dust attenuates their stellar radiation. Properly quantifying dust attenuation in DSFGs is crucial for quantifying the cosmic SFR and getting a better grip on galaxy evolution. To achieve this goal, we study closely the dust attenuation curves in DSFGs and investigate the possible physical properties that they might depend on. With the ever-growing understanding of the nature of these dusty galaxies, many recent works have studied the relation between dust attenuation and other physical properties (e.g., Fudamoto et al. 2020; Lin et al. 2021; Lower et al. 2022; Boquien et al. 2022). In these studies, links between dust attenuation properties and other physical observables were examined, such as the dust grain sizes, the star-dust geometry, and the star formation activity in these galaxies. The most widely used attenuation laws are those of Calzetti et al. (2000) and Charlot & Fall (2000). Both of these laws are used interchangeably when modeling galaxy photometry at high redshifts. However, we still lack a complete knowledge of the preference for attenuation laws in DSFGs at different redshift ranges. In this paper, we aim at answering the question of which of the commonly used laws best describes dust attenuation in DSFGs at high redshift. Measuring the slopes of attenuation is beyond the scope of this paper. We make use of a large statistical sample with available dust continuum maps and their UV/optical counterparts. We study the effect of UV/optical to dust continuum compactness on the preferred attenuation law for our sample. This paper is structured as follows: in Section 2 we describe the data analyzed in this work, both the photometry and the images. In Section 3.1 we detail the method we use to estimate the circularized effective radii of our sources, and Section 3.2 provides a description of the SED fitting procedures used to derive the physical properties of our sources. The results and their respective discussions are presented in Section 4, and the summary concludes this paper in Section 5. Throughout this paper, we adopt the stellar IMF of Chabrier (2003) and ΛCDM cosmological parameters (WMAP7, Komatsu et al. 2011): H 0 = 70.4 km s −1 Mpc −1 , Ω M = 0.272, and Ω Λ = 0.728.
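The adopted cosmology is what translates all angular measurements in this paper into physical scales; a minimal sketch of setting it up with astropy (our choice of library, not stated by the authors):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# WMAP7 parameters adopted in this paper (Komatsu et al. 2011);
# Omega_Lambda = 0.728 follows from flatness.
cosmo = FlatLambdaCDM(H0=70.4 * u.km / u.s / u.Mpc, Om0=0.272)

# Example: proper scale at the cosmic noon, z = 2
z = 2.0
print(cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))  # roughly 8.5 kpc/arcsec
```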
Sample selection
The large two-square-degree COSMOS field, centered on R.A., Dec. = (10h00m27.9171s, +02d12m35.0315s) (Scoville et al. 2007; Ilbert et al. 2013; Laigle et al. 2016), has been observed with an unmatched commitment by different instruments, covering a wide wavelength range for galaxies up to redshift six. This unique survey design offers rich data sets of millions of identified galaxies, allowing deep investigation of galaxy evolution at various redshifts.
Table 1: Summary of available photometric data in each band with its central wavelength, the mean S:N, and the number of detections in our sample. The detections in the different ALMA bands (6 and 7) concern different galaxies.
The choice of COSMOS field galaxies in this work is motivated by the abundance of multiwavelength data spanning a wide range of redshifts, and by the significant number of ALMA detections that this field enjoys. This set of optical, infrared and submillimeter detections allows us to build a statistical sample of DSFGs. This sample is ideal for investigating the evolution of DSFG properties and the crucial role that dust attenuation plays in their evolution.
ALMA data
Since the main science objective in this study is to quantify the effect of the distribution of dust emission relative to the stellar continuum, at different redshift ranges, the core of our sample was built around ALMA detections. For that we used ALMA fluxes and continuum maps from the A 3 COSMOS automated ALMA data mining in the COSMOS field (Liu et al. 2019). This data set assembles hundreds of identified galaxies from the ALMA archive into a single catalogue. In our work we use the primary-beam-corrected ALMA maps.
The main advantage in our work is having access to dust continuum morphology relative to the spatial distribution of the star formation region and the stellar population in our sample. For this reason, we carefully select a sample of galaxies characterized by good quality detections.
For submillimeter images, we use the A 3 COSMOS generated maps (Liu et al. 2019). These images were deconvolved using robust cleaning with a Briggs parameter of 2, i.e., a natural weighting of visibilities. This results in a significantly better signal-to-noise ratio (S:N) in the innermost regions of a source in the uv plane, thus defining its outermost "borders" in the image plane (see Section 2.1 in Liu et al. 2019 for a more detailed description of the produced ALMA continuum images). To study dust attenuation through cosmic time, we select ALMA-detected galaxies with S:N higher than 5. Our preliminary sample is composed of 1,335 individual ALMA-detected galaxies.
Ancillary maps
For the shorter wavelength (rest-frame UV and optical) images, we used the third data release of the deep field continuum maps observed in the y band of the Hyper Suprime-Cam (HSC) of Subaru (Miyazaki et al. 2018; Aihara et al. 2022). These images have a high angular resolution (0.64″), which allows a physical comparison with their ALMA counterparts.
Fig. 1. Comparison between the photometric redshifts of 43 galaxies from the final sample, for which spectroscopic redshifts are available from the literature (Liu et al. 2019). Spearman's rank correlation coefficient is shown as ρ, with the photometric redshift accuracy given by σ MAD (Ilbert et al. 2009). Red dotted lines correspond to z p = z s ± 0.15(1 + z s ) (Ilbert et al. 2009).
Auxiliary photometric data
We used the photometric data from the Herschel Extragalactic Legacy Project (HELP) panchromatic catalogue (Shirley et al. 2019). This catalogue was built by taking visible to mid-infrared (MIR) surveys as priors, homogenizing them, and extracting fluxes from Herschel maps, whose angular resolution is significantly lower than that of their short-wavelength counterparts.
To build the HELP catalogue, Herschel fluxes were extracted using the probabilistic deblender XID+ (Hurley et al. 2017), which was run on SPIRE maps, taking into account the positions of sources observed with the high-resolution detections from Spitzer at 24 µm. This technique was shown to increase the accuracy of photometric redshift estimations (Duncan et al. 2018).
We perform a positional cross-match between the ALMA catalogue and the HELP catalogue with a rather conservative 1″ search radius. Although galaxies whose dust continuum or molecular gas emission is significantly dissociated from the shorter wavelength continua are not uncommon (e.g., Elbaz et al. 2018; Hamed et al. 2021), due to various factors such as astrometry problems, positional errors resulting from the beam size, or otherwise physical factors, this conservative search radius avoids false matches (e.g., Buat et al. 2019; Liu et al. 2019). This cross-matching procedure resulted in 383 individual galaxies ranging from z = 0.3 to z = 5.5.
To better constrain the physical properties of our statistical sample through SED modeling, we discarded galaxies that have fewer than three detections in the UV-NIR wavelength range. This requirement rejected 30% of the sources. Additionally, we required a minimum of six detections in the MIR-FIR bands (8-1000 µm), out of which at least five detections with S:N > 3. As a result of these selection criteria, the finally selected sources have at least 10 detections in the UV to NIR (0.3-8 µm) range with S:N > 5. For objects that had detections in similar bandpass filters, we used the detections coming from the deeper survey. This is of high importance in SED fitting procedures, especially because measurements at similar wavelengths may differ from each other by orders of magnitude, which would negatively affect the quality of the fitted spectrum. Moreover, dense coverage of a very short part of the SED could add too much weight during the SED fitting process and bias the final fit. Our UV-NIR photometric data as well as the FIR counterparts have overall high S:N (a mean of 60 for the former and 13.23 for the latter), while on the IR side of the spectrum, MIPS measurements at 24 µm, SPIRE detections at 250 and 350 µm, and evidently all ALMA detections have high S:N (a mean of 12.68 for all bands). All Herschel SPIRE fluxes (at 250, 350 and 500 µm) are essential since they cover the thermal part of the total SED up to z ∼ 4, which contains the Rayleigh-Jeans dust emission tail. To even better constrain the IR-submillimeter part of the SED fits, we supplemented our sources with VLA detections at 3 GHz from Smolčić et al. (2017).
Final sample
The selection described above yields a final sample of 122 galaxies with panchromatically high S:N, covering a redshift range of 1 < z < 4. Forty-three galaxies of our sample have spectroscopic redshifts from Liu et al. (2019), and for the rest of the sample we use the reliable photometric redshifts provided by the HELP catalogue. Figure 1 shows a comparison between the spectroscopic and photometric redshifts of the galaxies in our sample that possess both measurements. We calculate the photometric redshift accuracy (Ilbert et al. 2009) as σ MAD = 1.48 × median(|z p − z s |/(1 + z s )), where MAD is the median absolute deviation. This resulted in reliable photometric redshifts for our sample, with σ MAD = 0.014. Table 1 shows the photometric bands and the associated S:N for the final sample of 122 DSFGs. Almost half of the final sample, 60 galaxies, have an S:N of the Y band detection higher than 5, and all of the sources had a VLA detection. We want to stress that only 15 DSFGs from our sample (12%) do not have a u band detection, and we miss five detections in the g and V bands. With the exception of those galaxies, the rest of the sample has a full set of 23 photometric bands, assuring excellent spectral coverage, essential for detailed SED fitting.
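The accuracy statistic quoted above is simple to reproduce; a minimal sketch (NumPy assumed; array names are hypothetical):

```python
import numpy as np

def sigma_mad(z_phot, z_spec):
    """Photometric redshift accuracy as defined in the text:
    sigma_MAD = 1.48 * median(|z_p - z_s| / (1 + z_s)) (Ilbert et al. 2009)."""
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    return 1.48 * np.median(dz)

def outlier_fraction(z_phot, z_spec):
    """Fraction of catastrophic outliers, |z_p - z_s| > 0.15 * (1 + z_s),
    i.e. sources falling outside the red dotted lines of Fig. 1."""
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    return np.mean(dz > 0.15)
```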
Size measurements
To study the spatial extent of dust emission and that of the stellar populations and star-forming regions in our sample, we derived homogeneous effective radii (R e ) from the dust continuum maps and their short (UV-optical) wavelength counterparts. To achieve this, we used GALFIT (Peng et al. 2002), parametrically fitting two-dimensional Sérsic profiles to the primary-beam-corrected images of our sample. With the Sérsic index (n Sérsic ) obtained from the fitting procedure, it is possible to quantify the concentration of light in a galaxy, which can provide important information about its morphology. Moreover, GALFIT provides the value of the ratio between the minor and major axes, which allows for calculating the effective circularized radii (hereafter simply effective radii or R e ). With spectroscopic and reliable photometric redshift information, we can convert these angular sizes into the physical sizes of our objects. In this work, we analyze the evolution of IR radii between redshift four and one, as well as the simultaneous changes in the rest-frame UV-optical.
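As an illustration of the two steps just described (circularizing the GALFIT radius and converting it to a proper size), here is a minimal sketch with hypothetical variable names; the sqrt(b/a) circularization is the common convention and an assumption on our part:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.4, Om0=0.272)  # cosmology adopted in this paper

def re_circ_kpc(re_major_arcsec, axis_ratio, z):
    """Circularize the GALFIT effective radius (major axis) with
    R_e,circ = R_e,maj * sqrt(b/a), then convert arcsec to proper kpc."""
    re_circ = re_major_arcsec * np.sqrt(axis_ratio)
    scale = cosmo.kpc_proper_per_arcmin(z).to_value(u.kpc / u.arcsec)
    return re_circ * scale

# e.g., R_e,maj = 0.20", b/a = 0.6 at z = 2.15
print(re_circ_kpc(0.20, 0.6, 2.15))  # ~1.3 kpc
```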
For our analysis we used the Y band of HSC and the ALMA (bands 6 and 7) images to compute the physical sizes of our final sample of DSFGs. The mean angular resolution of the ALMA detections in our sample is 0.75″, and equivalently 0.64″ for the HSC Y band detections. This renders valid a direct comparison between sizes estimated from each of these bands, without the need to degrade their resolution.
In computing the Sérsic profiles, we adopted a similar approach to that of Elbaz et al. (2018), leaving n Sérsic free. For the sake of comparison, we also fixed n Sérsic = 1 (e.g., Hodge et al. 2016; Elbaz et al. 2018). While 5% of the galaxies in our sample did not reach convergence when fixing n Sérsic , R e was found to be in good agreement in both cases (on average 7% larger when fixing n Sérsic for the ALMA detections), and we found an agreement within 23% in the resulting n Sérsic between fits with a fixed and a free n Sérsic .
In our profile fittings, we initially used an automated computation with GALFIT, and carefully checked the resulting models and their residuals. This was performed to get a general understanding of our sample's range of effective radii in each of the two wavelength domains used. This allowed us to test different input values of n Sérsic , including a Gaussian model. This is an important step when using this tool, since it permits validating the initial parameters needed by GALFIT. We found that varying n Sérsic between an exponential disk profile (n Sérsic = 1) and a Gaussian profile (n Sérsic = 0.5) leads to a slight change in the models and their consequent effective radii, with R e (n=0.5, ALMA) being 26% smaller than R e (n=1, ALMA).
After the aforementioned tests, we individually fitted Sérsic profiles to each of our sources in the two bands (UV/optical and IR), while monitoring the resulting models, the residuals, and the profile parameters. This individual fitting was especially required for galaxies that were not well fitted by the automated computation (∼5% of the total sample), in which case a simple and slight parameter adaptation managed to fit these sources. The distribution of the computed n Sérsic was rather narrow, ranging from 0.4 to 1.6. Our technique of fitting Sérsic profiles in the two bands of our sample was applied homogeneously, with the same methodical approach. This is important in order to accurately quantify ratios of different R e at varying wavelengths. Our primary effort was to calculate our effective circularized radii in a homogenized way (with the same tool and approach), which reduces possible biases in the final physical interpretation. To test the reliability of our size measurements, we also computed the minimum possible size that can be accurately measured using the formula of Martí-Vidal et al. (2012) and Gómez-Guijarro et al. (2022), where the minimum size for each source (θ min ) is in units of the synthesized beam FWHM (θ beam ), depending on the S:N of the source. All size measurements of our sample were above that limit. Figure 2 shows an example of a galaxy, HELP-J095953.305+014250.922 at redshift z = 2.15, seen in the HSC Y band and ALMA band 6, with the original images at the two different wavelengths, their light profiles fitted with GALFIT, and the subsequent residuals.
Table 2: Summary of the derived effective radii of our sample from the available detections at two different wavelengths.
The median radii and their errors (the median absolute deviation) at different redshift ranges are presented in Table 2, and the redshift evolution of these derived sizes, along with the evolution of their star-to-dust compactness, is shown in Figure 11 (left panel). This figure shows the change in the radii (also related to the sample selection of DSFGs) of ALMA and HSC y, as well as a comparison with similar work done by Buat et al. (2019) based on galaxies at z ∼ 2. The evolution of these derived radii with the dust luminosity and stellar mass of our sample is shown in Appendix B.
SED fitting method
To derive the physical properties of our well-constrained multiwavelength sample, we use the Code Investigating GALaxy Emission (CIGALE, https://cigale.lam.fr/), an energy-balance SED fitting code (Noll et al. 2009; Boquien et al. 2019). This technique of SED fitting takes into account the balance between the energy absorbed in the rest-frame UV-NIR part of the total galaxy emission and its rest-frame IR emission. The mediating agent in this energy balance is the dust, since it absorbs a significant part of the short-wavelength photons emitted by the stars and re-emits it in the form of thermal emission in the FIR. Reverse-engineering the total spectrum of a galaxy is not an easy task. Some physical processes are completely unrelated, such as the synchrotron emission of accelerated electrons, which dominates the radio part of the SED, and the UV photons whose origin is traced directly to the young stars. However, some physical processes release photons in the same frequency range; for instance, the MIR range can have different contributors, like Active Galactic Nuclei (AGN) and polycyclic aromatic hydrocarbons (PAHs), resulting in degeneracies. Therefore, carefully choosing physically motivated templates and parameters is crucial in order to deduce the key physical properties that galaxies are experiencing, since these parameters depend on the assumptions made (e.g., Ciesla et al. 2015; Leja et al. 2018; Carnall et al. 2018).
In the following subsections, we describe the different aspects of our SED fitting strategy and motivate our choice of certain laws and parameters. The SED modules used in our work are described below, for the stellar part and the dust part.
Stellar SED
To build the SED of a galaxy, we first assume a stellar population that is behind its direct and indirect emission. This means that we should take into account a stellar population library and its spectral evolution, with stars of different ages and a certain metallicity. In this work, we use the stellar population library of Bruzual & Charlot (2003), a solar metallicity, and the initial mass function (IMF) of Chabrier (2003), which takes into account a single-star IMF as well as binary star systems.
The stellar population models are then convolved with an assumed star formation history (SFH). SFHs are sensitive to many complex factors, including galaxy interactions, merging, gas accumulation and its depletion (e.g., Elbaz et al. 2011; Ciesla et al. 2018; Schreiber et al. 2018; Pearson et al. 2019). The SFH has a significant effect on fitting the UV part of the SED, and consequently affects the derived physical parameters such as the stellar masses and the SFRs. Ciesla et al. (2016, 2017) showed that simple SFH models (such as a delayed model) are not enough to reproduce a precise fit of the UV data, especially for galaxies that are undergoing starburst or quenching activity.
To model the SEDs of our IR-bright sample, we use a delayed SFH with a recent exponential burst (e.g., Małek et al. 2018; Buat et al. 2018; Donevski et al. 2020). This recent burst is motivated by the ALMA detections, which make a numerous population of young stars, manifesting their presence through dust, very likely. In such a scenario, a galaxy builds the majority of its stellar population in its earlier evolutionary phase, then the star formation activity slowly decreases over time. This is followed by a recent burst of SFR. The SFR evolution over time is hereby modeled as the sum of two terms: the first translates into a delayed SFH slowed by a factor of τ², where τ is the e-folding time of the main stellar population, extended over a large part of the age of the galaxy; the second is the exponential decrease of the recent SFR, where t and τ are the age of the burst and the e-folding time of the burst episode, respectively. This SFH provided better fits compared to the simpler delayed SFH, especially for non-quiescent galaxies. We vary τ as shown in Table 3, to give comprehensive flexibility to the delayed formation of the main stellar population. We discuss the choice of this SFH over a truncated version in Appendix A.
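A common parametrization consistent with the two-term description above (our rendering, following the delayed-plus-burst convention of CIGALE's SFH modules; the authors' exact normalization may differ) is:

```latex
\mathrm{SFR}(t) \;\propto\; \frac{t}{\tau_{\mathrm{main}}^{2}}
\exp\!\left(-\frac{t}{\tau_{\mathrm{main}}}\right)
\;+\; k\,\exp\!\left(-\frac{t - t_{0}}{\tau_{\mathrm{burst}}}\right)\Theta(t - t_{0}),
```

where τ_main is the e-folding time of the main stellar population, t0 marks the onset of the burst (so that t − t0 is the burst age), τ_burst is the e-folding time of the burst, k sets the relative amplitude of the burst, and Θ is the Heaviside step function.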
Dust SED
The dust content of our sample of DSFGs is presumed to be the driving component of the shape of their SEDs. This makes the modeling of dust attenuation important for extracting accurate physical properties.
To model the effect of dust, we use two different attenuation laws for our SED fitting: the approach of Calzetti et al. (2000, henceforth C00) and that of Charlot & Fall (2000, henceforth CF00). While these two attenuation laws are relatively simple, they differ in how they attenuate a given stellar population. The attenuation curve of C00 was tuned to fit a sample of starbursts in the local Universe. This curve attenuates a stellar population assuming a screen model, where k(λ) is the attenuation curve at a given wavelength λ, A(λ) is the extinction curve, and E(B-V) is the color excess, which is the difference between the observed B-V color index and the intrinsic value for a given population of stars. Despite its simplicity, this attenuation curve, with its modifications, is widely used in the literature (e.g., Burgarella et al. 2005; Buat et al. 2012; Małek et al. 2014, 2017; Pearson et al. 2017; Elbaz et al. 2018; Buat et al. 2018; Ciesla et al. 2020). However, it does not always succeed in reproducing the UV extinction of galaxies at higher redshifts (Noll et al. 2009; Lo Faro et al. 2017; Buat et al. 2019).
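For reference, a standard form of the screen relation connecting the quantities defined above (our rendering, not quoted from the paper) is:

```latex
A(\lambda) = k(\lambda)\, E(B-V),
```

so that the attenuated flux follows F_obs(λ) = F_int(λ) 10^{−0.4 k(λ) E(B−V)}.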
Another approach is to also consider the dust present in birth clouds. This is the core of the attenuation curve of CF00. In this approach, dust attenuates the dense and cooler molecular clouds (hereafter MCs) differently from the ambient diffuse interstellar medium (ISM). This configuration is expressed by an analytical expression in which δ IS M and δ MC are the slopes of attenuation in the ISM and the MCs, respectively. Young stars that are in the MCs are therefore attenuated twice: by the surrounding dust and additionally by the dust in the diffuse ISM. A ratio involving A V (ISM) is also considered, to account for the attenuation of young stars residing in the birth clouds and of the older stars residing in the ISM. CF00 found that δ IS M = δ MC = -0.7 satisfied dust attenuation in nearby galaxies; nevertheless, this curve is frequently used at higher redshifts as well (e.g., Małek et al. 2018; Buat et al. 2018; Pearson et al. 2018; Salim & Narayanan 2020). By attenuating at longer wavelengths (up to the NIR) more efficiently than C00, this approach considers a more attenuated older stellar population.
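A standard power-law form matching this description (our rendering; normalization conventions vary between implementations) is:

```latex
A_{x}(\lambda) = A_{V,x} \left(\frac{\lambda}{5500\,\text{\AA}}\right)^{\delta_{x}},
\qquad x \in \{\mathrm{ISM},\, \mathrm{MC}\},
```

with young stars in the MCs seeing the sum of both components and older stars seeing only the ISM term.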
Lo Faro et al. (2017, henceforth LF17) found that a shallower attenuation curve reproduces the attenuation in ultraluminous and luminous IR galaxies (ULIRGs and LIRGs) at z ∼ 2. For their sample, LF17 found δ IS M = -0.48. This curve was used in Hamed et al. (2021) for a heavily dust-obscured ALMA-detected galaxy at z ∼ 2, and provided an overall better fit than other, steeper attenuation laws.
To model the dust attenuation of the galaxies in our sample, we use the aforementioned laws, with the parameters presented in Table 3. The mean normalized attenuation of our sample resulting from the three attenuation curves is shown in the left panel of Fig. 3.
To obtain the left panel of Fig. 3, we computed the attenuation values in the UV-NIR bands. Then we averaged the attenuation in each band for the whole sample. The curve of C00 is steeper than the double-component power-laws of CF00 and LF17, especially in the NIR domain.
Hot and cold dust components
Dust grains heated by AGN, along with the vibrational modes of polycyclic aromatic hydrocarbons (PAHs), dominate the MIR part of the SED of a galaxy. Thus, it is important to include AGN modeling in our SED fitting procedures, as well as taking into account PAH contribution to the overall dust emission. Our initial analysis of the IRAC photometry of Spitzer did not suggest AGN candidates. We also included AGN-heated dust templates of Fritz et al. (2006) in our SED fitting procedure, but found no AGN contribution in our sample.
To model the IR emission in our SED models, we use the templates of Draine et al. (2014). These templates take into consideration different sizes of carbon and silicate grains, hence allowing different dust grain temperatures. They rely on observations and are widely used in the literature to fit FIR SEDs.
SED quality, model assessment
In assessing which SED provides the best fit for modeling the galaxies of our sample, we adopt a similar methodology as in Buat et al. (2019). We compare the reduced χ² of the fits resulting from the three attenuation recipes used, and in the case of different attenuation laws providing a good reduced χ², we checked the Bayesian information criterion (BIC), defined as BIC = χ² + k × ln(N), where k is the number of free parameters and N is the number of data points. The mean reduced χ² was found to be 2.4 for our sample (we show the reduced χ² for the best fits of our sample in the left panel of the corresponding figure).
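A minimal sketch of this model-comparison step, directly implementing the BIC definition given above (function names and the illustrative numbers are ours):

```python
import numpy as np

def bic(chi2, n_free, n_points):
    """Bayesian information criterion as defined in the text:
    BIC = chi^2 + k * ln(N)."""
    return chi2 + n_free * np.log(n_points)

# Compare two attenuation recipes for one galaxy (illustrative numbers only)
bic_c00 = bic(chi2=28.0, n_free=9, n_points=23)
bic_cf00 = bic(chi2=25.5, n_free=10, n_points=23)
best = "C00" if bic_c00 < bic_cf00 else "CF00"
print(bic_c00, bic_cf00, best)
```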
To test the robustness of the method we use to select the best attenuation law for each galaxy, we perform the following test:
- We fit all galaxies of our sample with the attenuation law of CF00. We then take the best-fit values for each filter of each galaxy.
- We perturb these "best" fluxes (obtained from the fit) using the initial photometry errors to obtain a mock catalog.
- This mock catalog was then fitted with the Calzetti law, the CF00 law, and the LF17 law, in the same way as the initial real photometry was treated.
The reduced χ² values obtained from those fits were then compared for each source. We show this comparison in Fig. 6. The reduced χ² obtained for the CF00 fits of the mock sample are consistently smaller than those obtained using the two other attenuation recipes. Precisely, 93% of the mock galaxies preferred the CF00 attenuation law in this test. The other 7% had a reduced χ² lower by ∼0.1 with the other attenuation laws.
The choice of the attenuation law that best describes the observed fluxes is crucial for our study. Therefore, we used an additional method to reliably attribute the best attenuation law to each galaxy. We introduced perturbations to the fluxes by applying a Gaussian distribution with a standard deviation corresponding to the uncertainties of each band. To speed up the computation, and since the main task was to check the reliability of the best attenuation law for each galaxy, we generated mock fluxes up to the mid-IR bands of IRAC. We created ten mock catalogues with this method, and applied the same SED fitting approach used for our original sample to the mock samples. Fig. 7 shows the ratio of mock fluxes to real fluxes for our sample.
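The mock-catalogue construction amounts to Gaussian resampling of the observed photometry; a minimal sketch (NumPy assumed; names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mocks(fluxes, errors, n_mocks=10):
    """Perturb each observed flux with a Gaussian whose standard deviation
    equals its photometric uncertainty, as described in the text.
    Returns an array of shape (n_mocks, n_bands)."""
    fluxes = np.asarray(fluxes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    return rng.normal(loc=fluxes, scale=errors, size=(n_mocks, fluxes.size))
```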
The results of these tests are presented in Table 4, where the vast majority of our mock galaxies (> 94%) preferred the same attenuation law as the initial real galaxies. This shows that the use of the reduced χ² in assessing the best attenuation law is valid in our case. This is directly linked to the good S:N ratios of our photometric data, which were shown in Table 1.
Fig. 6. Comparison between the reduced χ² of the mock sample obtained with the CF00, C00, and LF17 attenuation laws.
             Mock
Real    C00   CF00   LF17
C00     94%    5%     1%
CF00     0%   97%     3%
LF17     0%    3%    97%
Table 4: Summary of the results of the best fits using the same SED method on the perturbed photometry, giving the fraction preferring each attenuation law (average of ten realizations) against the best attenuation law for the real galaxies.
An example of a computed SED from our sample is given in Fig. 5, where we attenuated the same galaxy in three different ways. In this example, the shallower attenuation of Lo Faro et al. (2017) was preferred, since it provided a significantly better fit. To test the reliability of our SED models, we generated a mock galaxy sample with CIGALE and fitted its SEDs with the same methods applied to our sample. We show the comparison between the physical properties that we derived for our sample and its mock equivalent in Fig. D.1.
Galaxy properties and dust attenuation
We applied a set of attenuation slopes in fitting our sample of DSFGs. The attenuation curve of C00 results in lower stellar masses compared to the ones obtained with the shallower double-component attenuation laws of CF00 and LF17 (with a mean stellar mass of 10^10.87 M⊙ for C00, and 10^11.22 M⊙ and 10^11.52 M⊙ for CF00 and LF17, respectively). The distribution of the obtained stellar masses is shown in the right panel of Fig. 3, with the mean M* for the whole sample portrayed for every attenuation law used. Stellar masses computed using shallower attenuation slopes are higher than those produced with steeper curves.
Star formation rates computed from the panchromatic SED fitting using the three aforementioned attenuation laws do not change, similar to the results found in Małek et al. (2018). The mean values of log10(SFR) for the sample fitted with C00, CF00, and LF17 are 2.75, 2.63, and 2.60 (in M⊙ yr−1), respectively, which is in a similar range to the galaxies studied in Buat et al. (2019). The dust masses computed with the three attenuation laws are invariant (with a mean of 1.80 × 10^9 M⊙ for the whole sample). This is mainly due to the strong constraint on the FIR part of the SED provided by the ALMA detections, as well as the good fitting of the spectrum. The significant difference in the stellar masses produced using the different attenuation law slopes results in a clear distinction in the "starburstiness" of galaxies, and also affects the quiescent systems (Lo Faro et al. 2015). In our sample, the number of starburst galaxies decreases with a shallower attenuation curve (60% for C00, 25% for CF00, and 14% for LF17). Despite its simplicity, the attenuation law of Calzetti et al. (2000) provided good fits in building the SEDs and was favored over the shallower curves of Charlot & Fall (2000) and Lo Faro et al. (2017) in 49% of the whole sample, that is, 61 sources (by comparing the resulting reduced χ²). This was mainly noticed below redshift z = 2. The attenuation curve adapted by Lo Faro et al. (2017) provided better fits for 38 galaxies in total, but 79% of these galaxies fell in the redshift range 1.5 < z < 2.5, which supports the initial tuning of the ISM attenuation slope at -0.48 (Lo Faro et al. 2017).
We show the preference of the attenuation laws for our sample based on the V band attenuation and the SFR in Figs. 8 and 9, respectively. We find no clear correlation between the attenuation in the V band and the preference of the attenuation laws in our sample. We also checked the correlation with the stellar masses, but since these masses are directly a byproduct of the attenuation law used, as was shown in Fig. 3, we cannot tell if this correlation is physical. Galaxies that prefer the CF00 double-component attenuation law and its shallower version LF17, by construction, result in a significantly larger older stellar population, therefore increasing the stellar mass (e.g., Małek et al. 2018; Buat et al. 2019; Hamed et al. 2021; Figueira et al. 2022). We also checked the preference of the attenuation laws used against the SFR of our galaxies. This is shown in Fig. 9. We found that towards higher SFRs, there is no preference in attenuation laws for our sample. However, at the lower end of SFR, a double-component attenuation was slightly preferred, but still within the error bars. In the sample we had only 18 galaxies with log(SFR) < 2.4, and 38% of them were fitted with C00 while the rest preferred CF00/LF17. The small statistics at this lower end of SFR for our sample do not allow us to make a strong statement about the correlation of the attenuation laws for low-SFR galaxies.
Fig. 8. Preference of attenuation laws of our sample according to the attenuation in the V band. To facilitate the reading of this plot, we shifted the bins of CF00/LF17 slightly to the right (+0.1).
Stellar vs. dust components
To analyze the energy balance of our sample of DSFGs, especially with the available dust and stellar emission and images, we follow the method introduced in Małek et al. (2018) and Buat et al. (2019) by dissecting the stellar continuum of our sample of galaxies without taking into account the FIR detections. Equivalently, we also fit the FIR continua of our DSFGs. To model the stellar continuum, we use the photometric bands of CFHT, Subaru, VISTA, and the available IRAC bands. This is done in order to test the attenuation curve requirements without adding the energy balance constraint to the global SED fitting method. As shown in, e.g., Buat et al. (2019) and Hamed et al. (2021), the dichotomy between the stellar SED and its dust counterpart is important in testing the validity of the energy balance concept that is the basis of most panchromatic SED fitting tools. Moreover, this method is critical in cases where the dust continuum maps are not centered on their short-wavelength counterparts. When fitting the short-wavelength part of the SEDs, we model the stellar light taking into account the delayed SFH boosted by a recent burst, the stellar population library of Bruzual & Charlot (2003), and the dust attenuation laws discussed in Sect. 3.2. We also modeled the IR emission using the Draine et al. (2014) dust emission templates, but without taking into account the IR photometry, allowing the energy balance to dictate dust luminosities and masses based on the amount of stellar light that is attenuated.
A total of 61% of our sample (75 sources) provided better fits with the simple power law of C00, while a shallower attenuation was needed to reproduce the spectra with the lowest χ² for the rest (14% with CF00 and 25% with the even grayer LF17 slope).
The steep law of C00 provided better fits for more galaxies when taking into account the stellar emission only. This result was also found in Buat et al. (2019) for a smaller sample. This tendency will be tested further in future studies based on the new generation of IR datasets from JWST and well-constrained short-wavelength counterparts from LSST.
Equivalently, we estimated dust luminosities based on the rest-frame UV to NIR photometry of our sample, assuming an energy balance between dust absorption and dust emission. We compare these IR luminosities with the ones calculated from the IR photometric points with the Draine et al. (2014) templates. The results are shown in Fig. 10. We confirm the scatter initially found in Buat et al. (2019) and Hamed & Małek (2022) for dust-rich galaxies, which significantly differs from the normal SFGs studied in Małek et al. (2018). We find that the dust emission evoked by a pure energy balance based on the short wavelengths does not always explain the one calculated from the FIR photometry. Galaxies that are fitted with the C00 attenuation law are found to exhibit more attenuation, and they produce a higher L IR from their dust content than from their stellar parts. The star-to-dust compactness did not seem to play a role in this trend. This shows that the dust luminosity inferred from the direct and attenuated UV photons based on the energy balance is not always enough to reproduce the dust luminosity observed from the actual IR photometry.
Dust attenuation and sizes
We study the effect of the star-to-dust compactness, i.e., the extent of the unobscured star-forming regions and the stellar population emission relative to the extent of the dust emission detected by ALMA. We define the ratio of the short-wavelength radii to their FIR counterparts as R e (UV)/R e (ALMA), where R e (UV) is the circularized effective radius measured from the HSC Y band images of our sources. The UV radii of our sample decrease with redshift, while the ALMA counterparts are rather constant across the redshift range studied in this work. This is shown in Fig. 11 and is explained by the bright IR ALMA-selected sources and their active star-forming/stellar population regions.
Despite the fact that the attenuation curve of Calzetti et al. (2000) managed to fit the largest sub-sample of our DSFGs around the cosmic noon and at lower redshift ranges, the distribution of the galaxies for which a steeper attenuation curve provided the best fit is relatively compact: their relative star-to-dust compactness was found to have an average of 1.7. The shallower attenuation curve recipes were found to not follow a specific preference, being rather scattered across all the studied redshift bins. We show the redshift distribution of the preferred attenuation laws in Appendix B. The range of the resulting n Sérsic was too small (0.4 to 1.6) to find, within that narrow range, any correlation with other observables. We find that the ratio of R e (UV) to R e (ALMA) changes across redshift in our sample of ALMA-detected DSFGs. This distribution is found to peak at z = 2, around the cosmic noon, as shown in the right panel of Fig. 11. This change is more prominent at higher redshift, where the decrease of the star-to-dust compactness is directly connected to the rapidly decreasing rest-frame UV sizes of these DSFGs. This might be explained by the more intense star formation around that cosmic epoch, especially since DSFGs contributed significantly to the total star formation activity in the Universe. Moreover, this peak is found to be stronger for galaxies that require a shallower attenuation curve. This might correlate with the greater need for cold star-forming regions in these galaxies to explain the higher SFRs. This result shows that dust attenuation and the star-to-dust compactness of DSFGs might be correlated with the cosmic SFR density.
These findings partly agree with the smaller sample studied in Buat et al. (2019). However, our statistically larger sample allows for an extrapolation of this correlation to different redshift ranges, and also shows that shallower attenuation curves do not favor a higher star-to-dust compactness but rather a scattered trend, unlike the steeper curves, which clearly preferred relatively smaller R e (UV)/R e (ALMA) (∼2) at different redshift ranges. One possible explanation is that galaxies with relatively small sizes of both their unobscured star-forming regions and their dust content might be well described by a screen model of attenuation, due to a non-effective mixing of stars and dust. On the other hand, a very compact dust emission requires a more complex mixing of dust and stars, which translates into a shallower attenuation curve. A correlation is visible between the fraction of our sample that is fitted with a specific attenuation law and the relative compactness. We show this in Fig. 12.
We find that the galaxies with smaller opt/UV sizes relative to dust radii largely prefer the C00 attenuation law in our sample. For a larger R e (Y)/R e (ALMA) ratio (> 3), only CF00 and its shallower modification LF17 fitted the galaxies better. This shows that taking the relative compactness into account is highly important when performing SED fitting. One can infer the most likely attenuation law for a given galaxy by taking into account its sizes in the unattenuated SF region and in the dust continuum, which in turn enables a reliable procedure for fitting its SED.
Summary
In this paper, we studied a statistical sample of 122 DSFGs not hosting AGN across a wide range of redshift (1 < z < 4). We derived their circularized effective radii in two different bands: HSC's Y band when available, and equivalently the radii of their dust components from the ALMA detections. In parallel, we carefully analyzed their SEDs, modeling them with varied dust attenuation laws, particularly that of Calzetti et al. (2000) and the shallower curves of Charlot & Fall (2000) and their recipes.
We also dissected the stellar SEDs alone and their IR counterparts, as done in, e.g., Buat et al. (2019) and Hamed et al. (2021), to investigate the validity of the energy balance when taking into account the ALMA detections of our sources. We found that even if most of our sources seem to produce the same dust emission when relying on an energetic balance from the short wavelengths as when fitting the IR photometry separately, some galaxies expressed dimmer star formation when the Calzetti et al. (2000) attenuation law was favored. This translated into an under-supply of dust emission from the stellar population alone.
We found that including the information on the sizes of DSFGs, especially of their stellar and dust contents, in the analysis constrains the attenuation curves used to fit the photometry of these galaxies. We found that the starburst curve of Calzetti et al. (2000) was favored in the reproduction of the SEDs of DSFGs with comparable star-to-dust radii ratios, precisely at 1.2 < R e (UV)/R e (ALMA) < 2.5. However, despite the seemingly irrelevant star-to-dust ratios for the attenuation curves of Charlot & Fall (2000) and the shallower version (Lo Faro et al. 2017), we found that compact dust emission and extended stellar radii needed shallow curves and double-component attenuation laws to account for the missing photons absorbed by dust. This shows that when fitting SEDs using broad-band photometry, a careful analysis of the radii of the different components should be carried out before using a unique attenuation law, which may otherwise result in wrongly estimated stellar masses.
We stress that recent ALMA studies of smaller samples of z ∼ 2 galaxies suggested that compact IR sizes could be connected to the rapid growth of supermassive black holes during the SMG star-formation phase (Ikarashi et al. 2017). Semianalytical models (i.e., Lapi et al. 2018) predict that such sources would experience forthcoming/ongoing AGN feedback, which is thought to trigger the morphological transition from star-forming discs to early-type galaxies. However, our galaxies do not show AGN activity, and are suggestive of SF galaxies caught in the compaction phase characterized by clump/gas migration toward the galaxy center, where highly intense dust production takes place and most of the stellar mass is accumulated (Pantoni et al. 2021a,b). Interestingly, the dust compaction phase is suggested to play a role in the metallicity enrichment efficiency, which further affects dust growth in the ISM (Pantoni et al. 2019; Donevski et al. 2020). We expect that our dusty galaxies have a relatively wide range of gas metallicities and dust growth efficiencies, which would be reflected in different attenuation slopes, rather than favoring a single one. This expectation is also in line with results from recent cosmological simulations that found a dependence of attenuation on dust compactness and/or geometry (e.g., Schulz et al. 2020) and on the ratio between small and large dust grains (e.g., Hou & Gao 2019).
In our sample, we have observed that the C00 attenuation law is mostly favored by galaxies with opt/UV sizes two times larger than the dust radii. However, for galaxies with R e (Y)/R e (ALMA) > 3, CF00 and its shallower modification LF17 were found to fit better. These findings suggest that considering the relative compactness when conducting SED fitting is important. By taking into account the sizes of a galaxy in the unattenuated SF region and in the dust continuum, one can deduce the most probable attenuation law for that galaxy, thus providing a dependable approach for fitting its SED.
We conducted a test to ensure that the trend observed between the preferred attenuation law and the relative compactness is not influenced by other physical properties. Specifically, we investigated the potential relationship between the relative compactness and two other properties, the attenuation in the V band and the SFR (Fig. 8 and Fig. 9). Fig. 13 presents the results of this analysis. We find no significant correlation between these properties and the relative compactness. This finding supports the conclusion that the observed trend between relative compactness and preferred attenuation law is robust and not influenced by these physical properties.
We find that the star-to-dust compactness of the unobscured star-forming regions/stellar population regions relative to the dust emission of these DSFGs peaks around the cosmic noon (z ∼ 2). This is notable at higher redshift. A possible correlation might be with the cosmic SFR density, to which the DSFGs were a major contributor in the early Universe.
These results are promising in the era of highly resolved deep-field detections with the LSST and JWST, where dust attenuation and size measurements are becoming more precise. Combining these detections with the FIR information, especially from ALMA, is unparalleled for dealing with the dust attenuation curve problem at different redshift ranges.
Histogram of the ratio of HSC's Y band radius to the ALMA radius. The red-filled histograms at different redshift ranges represent galaxies for which a shallow attenuation curve of CF00 or LF17 was critical in order to reproduce the observed stellar light and resulted in better fits. The dashed histogram shows galaxies for which the C00 attenuation curve gave a satisfying fit.
"Physics"
] |
Clay Catalyzed Reactions of Indole and its Methyl Derivatives with α , β-unsaturated Carbonyl Compounds
Electrophilic substitution reactions of indole and 1-methylindole with methyl propiolate in the presence of K-10 montmorillonite gave the corresponding methyl 3,3-bis(indolyl)propanoates. The reaction of 1,3-dimethylindole with methyl propiolate gave methyl 3,3-bis(1,3-dimethyl-1H-indol-2-yl)propanoate, methyl 1,5-dimethyl-1H-benzo[b]azepine-3-carboxylate and methyl 3,3,3-tris(1,3-dimethyl-1H-indol-2-yl)propanoate. The reaction of 1,3-dimethylindole with 2-cyclopentenone yielded a typical addition product; similarly, the reactions of indole and 1-methylindole with 2-cyclopentenone gave only the expected addition products.
INTRODUCTION
Indole and its derivatives are components of drugs found in many pharmaceutical compounds 1−4 and are crucial building blocks for biologically active compounds. 5,6 The Michael addition of indoles to α,β-unsaturated carbonyl compounds is a useful reaction for medicinal chemistry applications. 7−13 Trisindolyl amines are reported to be important intermediates for the development of new drugs with potential iron-chelating abilities. 14 Over the years, many synthetic methods for the preparation of the biologically important diindolyl 15−22 and trisindolylalkanes 23−27 have been reported, and most of these procedures either require strongly acidic conditions, 28,29 involve expensive reagents and catalysts, 30−35 or were carried out under dry conditions using microwave 36,37 and ultrasound accelerated methods. 38 Environmentally benign chemical processes using less hazardous catalysts have become a primary goal in synthetic organic chemistry. In this work, the reactions of indole, 1-methylindole and 1,3-dimethylindole with methyl propiolate and 2-cyclopentenone in dichloromethane under mild conditions using K-10 montmorillonite as catalyst are described. The reactions of indole and methyl-substituted indoles with α,β-unsaturated carbonyl compounds proceed with an initial attack at the preferred 3-position of the indoles, followed by rearrangement to the 2-position. This type of rearrangement has been previously reported by Jackson et al. 39−41 Treatment of indole (1) and 1-methylindole (2) with methyl propiolate in dichloromethane in the presence of K-10 montmorillonite occurred just at that position to give methyl 3,3-di(1H-indol-2-yl)propanoate (4) and methyl 3,3-bis(1-methyl-1H-indol-2-yl)propanoate (5) (Scheme 1). The C3 atom of the indole molecule is the most active in electrophilic substitution processes. 23,31 In the molecule of 1,3-dimethylindole (3), the 3-position is occupied by a methyl group; therefore, the addition reaction of 1,3-dimethylindole (3) to methyl propiolate in dichloromethane with the K-10 montmorillonite catalyst gave three different products in one pot: methyl 3,3-bis(1,3-dimethyl-1H-indol-2-yl)propanoate (6), methyl 1,5-dimethyl-1H-benzo[b]azepine-3-carboxylate (7) and methyl 3,3,3-tris(1,3-dimethyl-1H-indol-2-yl)propanoate (8), respectively (Scheme 2). Substituted benzoazepines possess a broad spectrum of biological activities. 42 They are of moderate size, giving rise to their potential as ligands for receptors, and offer semi-restricted conformational flexibility, allowing considerable scope for selective binding with a range of functional groups. 14 Benzoazepine-type compounds were previously synthesized from 2-methoxyindole and dimethyl pyrroles in 1966. 43 It is worth mentioning that in previous work 44 we have reported the synthesis of dimethyl 2-(2-methyl-1H-methylindol-3-yl)maleate and dimethyl 2-methyl-1H-1-benzazepine-3,4-dicarboxylate from the reaction of 2-methylindole with dimethyl acetylenedicarboxylate. From the reaction of 1,3-dimethylindole with dimethyl acetylenedicarboxylate, dimethyl 1,5-dimethyl-1H-1-benzoazepine-3,4-dicarboxylate was isolated. In the reaction of 1,3-dimethylindole with methyl propiolate, methyl 1,5-dimethyl-1H-benzo[b]azepine-3-carboxylate (7) was obtained as a successful example of ring expansion.
Indole (1), 1-methylindole (2) and 1,3-dimethylindole (3) were also reacted with 2-cyclopentenone under the same conditions, and these reactions yielded only Michael addition products (Scheme 3). We have found that montmorillonite smoothly catalyzes these reactions, leading to two C-C bonds and thus affording the desired products in one pot.
EXPERIMENTAL
Material
All chemicals were purchased from Merck, Fluka and Sigma-Aldrich, and montmorillonite K-10 clay was purchased from Fluka AG, Switzerland. TLC was carried out on aluminum sheets precoated with silica gel 60 F 254 (Merck), and the spots were visualized with UV light (λ = 254 nm). Column chromatography was conducted on silica gel 60 (40−63 μm). The melting points were determined on an Electrothermal A 9100 melting point apparatus. The NMR spectra were recorded on a Bruker DPX-400 spectrometer. Chemical shifts are reported in parts per million relative to CHCl 3 ( 1 H: δ = 7.27 ppm), CDCl 3 ( 13 C: δ = 77.0 ppm) and CCl 4 ( 13 C: δ = 96.4 ppm). The IR spectra were measured in KBr on a Jasco FTIR 300E spectrometer. The mass spectra were run on an LC/MS, AGILENT 1100 MSD system. The elemental compositions were determined using a LECO CHNS-932 analyzer.
Synthesis of 3-(1-methyl-1H-indol-3-yl)cyclopentanone (10)
Montmorillonite (4 g) was added to a mixture of 8 mmol of 1-methylindole and 4 mmol of 2-cyclopentenone in 40 mL of dichloromethane. The mixture was refluxed for 4 h, and the reaction product was flash chromatographed using ethyl acetate/petroleum ether
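As a quick sanity check on the scale of this preparation, the masses corresponding to the stated millimole quantities can be computed from the molecular formulas. This is only an illustrative sketch: the formulas and atomic masses used below are standard values, not figures taken from the paper.

```python
# Back-of-the-envelope reagent amounts for the 2:1 indole/enone Michael addition.
# Quantities follow the text: 8 mmol 1-methylindole, 4 mmol 2-cyclopentenone,
# 4 g K-10 clay, 40 mL dichloromethane.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def mol_weight(formula: dict) -> float:
    """Sum atomic masses for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

reagents = {
    "1-methylindole (C9H9N)":   ({"C": 9, "H": 9, "N": 1}, 8.0),  # mmol
    "2-cyclopentenone (C5H6O)": ({"C": 5, "H": 6, "O": 1}, 4.0),  # mmol
}

for name, (formula, mmol) in reagents.items():
    mw = mol_weight(formula)
    print(f"{name}: MW = {mw:.2f} g/mol -> {mmol} mmol = {mmol * mw / 1000:.3f} g")
# -> roughly 1.05 g of 1-methylindole and 0.33 g of 2-cyclopentenone
```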
Substituted benzoazepines possess a broad spectrum of biological activities, as the benzoazepine ring is a major fragment of a series of alkaloids. 33,47 When the resonance-stabilized intermediate was further attacked by a second molecule of 1,3-dimethylindole, the reaction again proceeded with attack at the 3-position followed by rearrangement to the 2-position, giving methyl 3,3-bis(1,3-dimethyl-1H-indol-2-yl)propanoate (6) in higher yield. Jackson et al. have previously reported a similar type of rearrangement. 41 Methyl 3,3,3-tris(1,3-dimethyl-1H-indol-2-yl)propanoate (8) was obtained in lower yield. The reaction of indole and 1-methylindole with methyl propiolate gave only diindolyl products; no cyclization or trisindolyl products were observed (Scheme 1).
CONCLUSION
The reaction is a typical Michael-type 1,4-addition, or conjugate addition, of resonance-stabilized carbanions derived from methyl propiolate. −47 The reaction of indole and 1-methylindole with methyl propiolate afforded addition products and bisindolyl products only (Scheme 1). Montmorillonite smoothly catalyzes these reactions, leading to two C−C bonds. The use of clay in these reactions was found to be very attractive because of its environmental compatibility. Many protic acids and Lewis acids used in these reactions are sometimes deactivated, and when Lewis acids are used, the excess acid can be released as a harmful mixture into the ecosystem.
"Chemistry"
] |
Purification to Apparent Homogeneity and Properties of Pig Kidney L-Fucose Kinase*
L-Fucokinase was purified to apparent homogeneity from pig kidney cytosol. The molecular mass of the enzyme on a gel filtration column was 440 kDa, whereas on SDS gels a single protein band of 110 kDa was observed. This 110-kDa protein was labeled in a concentration-dependent manner by azido-[32P]ATP, and labeling was inhibited by cold ATP. The 110-kDa protein was subjected to endo-Lys-C digestion, and several peptides were sequenced. These showed very little similarity to other known protein sequences. The enzyme phosphorylated L-fucose using ATP to form β-L-fucose-1-P. Of the many sugars tested, the only other sugar phosphorylated by the purified enzyme was D-arabinose, at about 10% the rate of L-fucose. Many of the properties of the enzyme were determined and are described in this paper. This enzyme is part of a salvage pathway for the reutilization of L-fucose and is also a valuable biochemical tool to prepare activated L-fucose derivatives for fucosylation reactions.
6-Deoxy-L-galactose (L-fucose) is an important sugar in animal cells, since it is involved in various recognition reactions of glycoproteins and glycolipids (1). Thus, oligosaccharides that have α-1,2-linked L-fucose are precursors for blood group A and B antigens (2). In the Lewis blood group antigens, Galβ1,3(Fucα1,4)GlcNAc-R and Fucα1,2Galβ1,3(Fucα1,4)GlcNAc-R are the determinants for Lewis a and Lewis b blood group antigens. In addition, fucosylated and sialylated oligosaccharides have been found to be the recognition molecules for the E- and P-selectins, two members of the selectin family of cell adhesion molecules (3). These selectins and their fucosylated (and sialylated) ligands are important in inflammation and in the recognition of leukocytes by endothelial cells (4).
The primary pathway for the formation of L-fucose in procaryotic and eucaryotic cells is from D-mannose, via an internal oxidation-reduction and then epimerization of GDP-D-mannose to produce GDP-L-fucose (5-8). However, studies in rats showed that radiolabeled L-fucose could be incorporated into glycoproteins (9, 10), suggesting an alternate route for activation of L-fucose. An L-fucokinase that synthesizes β-L-fucose-1-phosphate (11) and a GDP-L-fucose pyrophosphorylase (12) were partially purified from pig liver. However, the fucokinase preparation had rather broad substrate specificity with regard to sugar and nucleoside triphosphate, probably because of contaminating enzymes such as hexokinase in the partially purified fraction. In the present report, we describe the purification to apparent homogeneity of the pig kidney fucokinase. This enzyme preparation was very specific for L-fucose, and the only other sugar that could be phosphorylated, at about 10% of the rate with L-fucose, was D-arabinose. The fucokinase is also quite specific for ATP as the phosphate donor. This enzyme should be valuable for the synthesis of large amounts of L-fucose-1-P, as well as for the formation of radiolabeled fucose-1-P.
EXPERIMENTAL PROCEDURES
Materials
L-[3H]Fucose (52 Ci/mmol) and other radioactive sugars were purchased from American Radiolabeled Chemicals, Inc., or New England Nuclear Co. L-Fucose-1-P, non-radioactive sugars, and nucleoside diphosphate sugars were obtained from Sigma Chemical Co. Various adsorbents were obtained from the following sources: DE-52 from Whatman Chemical Ltd., hydroxylapatite from Bio-Rad, and ω-aminohexyl-agarose and Sephacryl S-300-HR from Sigma Chemical Co. The following materials were obtained from Bio-Rad: sodium dodecyl sulfate (SDS), acrylamide, bisacrylamide, Coomassie Blue, and protein assay reagent. All other chemicals were from reliable chemical sources and were of the best grade available.
Assay of Fucokinase Activity
Fucokinase activity was assayed by measuring the production of L-fucose-1-P from L-[ 3 H]fucose and ATP. The incubation mixtures contained the following components (final concentrations) in a total volume of 150 l: 0.1 mM L-fucose, 5 mM ATP, 5 mM MgSO 4 , 65 mM Tris-HCl buffer, pH 8.0, and various amounts of enzyme at the different stages of purification. Incubations were at 37°C for 10 min, and reactions were terminated by heating the reaction mixtures in a boiling water bath for 1 min. The incubation mixtures were then applied to columns of DE-52 contained in Pasteur pipettes, and the columns were washed with at least 5 column volumes of 10 mM (NH 4 )HCO 3 to remove the unbound material. The [ 3 H]fucose-1-P was then eluted with 500 mM (NH 4 )HCO 3 . Aliquots of the eluates were assayed for their radioactive content by subjecting a portion to scintillation counting.
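Since only the fucose-1-P binds to DE-52 in this assay, product formation can be quantified by simple isotope-dilution bookkeeping: the fraction of total added counts recovered in the salt eluate, multiplied by the total fucose in the tube (0.1 mM in 150 μl, i.e. 15 nmol). The sketch below illustrates this arithmetic; the cpm values are hypothetical examples, not measurements from the paper.

```python
# Illustrative sketch of the assay arithmetic (hypothetical cpm values).
# Total L-fucose per tube: 0.1 mM x 150 uL = 15 nmol (from the assay conditions).

TOTAL_FUCOSE_NMOL = 15.0

def nmol_product(cpm_eluted: float, cpm_total_added: float) -> float:
    """nmol of fucose-1-P = (fraction of label converted) x (total fucose)."""
    return (cpm_eluted / cpm_total_added) * TOTAL_FUCOSE_NMOL

# Example: 40,000 of 400,000 added cpm elute with 500 mM (NH4)HCO3
# after the 10-min incubation.
nmol = nmol_product(40_000, 400_000)
print(f"{nmol:.2f} nmol formed -> {nmol / 10:.2f} nmol/min")
# One unit = 1 nmol fucose-1-P per min (per the table footnote later in the paper).
```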
Purification of the Fucokinase
Preparation of Cytosolic Fraction-Pig kidneys were obtained from a local slaughterhouse and were transported to the laboratory on ice. The fresh kidneys were defatted and cut into pieces that were washed with cold distilled water. Each kidney piece was homogenized in a Waring blender in 2 volumes of Buffer A (30 mM Tris-HCl, pH 7.8, containing 10% glycerol, 1 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, and 1 mM β-mercaptoethanol). The homogenates were centrifuged at 12,000 × g for 30 min in a Beckman J-21 centrifuge. The supernatant liquid was removed and filtered through six layers of cheesecloth. The filtered supernatant liquid was then further centrifuged at 100,000 × g for 45 min. The clarified supernatant liquid was used as the starting crude extract. All operations were done at 0-4 °C, unless otherwise specified. In all cases, fresh kidneys were used as the starting material for purification, since the fucokinase activity was much lower after freezing the tissue.
DE-52 Column Chromatography-A 5 × 16 cm column of DE-52 was prepared, and the column was equilibrated with Buffer A. The supernatant liquid from the ultracentrifugation of 400 g of kidney (i.e. about 500 ml of supernatant liquid) was applied to the column, and the column was washed well with Buffer A and then with 800 ml of 0.1 M KCl. The fucokinase was eluted with 1000 ml of a 0.1-0.5 M gradient of KCl. Nine-ml fractions were collected, and every other fraction was assayed for activity and for protein. Active fractions were pooled and brought to 60% saturation by the addition of solid ammonium sulfate. After standing on ice for 15 min, the precipitate was isolated by centrifugation and dissolved in Buffer A containing 1 M ammonium sulfate.
Hydrophobic Chromatography on Macro-Prep Methyl HIC Support-The dissolved ammonium sulfate fraction was applied to a 2.5 × 20 cm column of Macro-Prep Methyl HIC Support (Bio-Rad) that had been equilibrated with Buffer A containing ammonium sulfate. The column was then washed with the equilibration buffer. The kinase was eluted from the column with a linear gradient of 1-0 M ammonium sulfate in Buffer A. Active fractions were pooled and concentrated to about 30 ml on an Amicon filtration apparatus using a PM 30 membrane. The concentrated enzyme was then dialyzed against Buffer B (30 mM HEPES buffer, pH 7.6, containing 1 mM β-mercaptoethanol and 10% glycerol).
Chromatography on Hydroxylapatite-The dialyzed enzyme was loaded onto a 2.5 × 10 cm column of hydroxylapatite that had been equilibrated with Buffer B. The column was washed with the same buffer and then eluted with a 0-50 mM linear gradient of KH2PO4 in Buffer B. Under these conditions, the kinase bound to the column, but not as tightly as other proteins in the preparation. Thus, about 60-70% of the enzyme emerged from the column at 15-25 mM KH2PO4. Some of the enzyme did remain on the column and could be eluted with the bulk of the protein at about 50 mM K2HPO4 in Buffer B.
Gel Filtration Chromatography-Active fractions eluted from the hydroxylapatite column were pooled and concentrated to about 2 ml on the Amicon apparatus. The concentrated enzyme preparation was applied to a 1.5 × 95 cm column of Sephacryl S-300 that had been equilibrated with Buffer C (25 mM HEPES buffer, pH 7.1, containing 1 mM β-mercaptoethanol and 10% glycerol). Four-ml fractions were collected and assayed for fucokinase activity. Active fractions were pooled and concentrated to a small volume.
Chromatography on Aminohexyl-Agarose-The concentrated enzyme fraction from Sephacryl was applied to a 1.5 × 10 cm column of aminohexyl-agarose, which had been equilibrated with Buffer C. The column was washed with 300 mM NaCl in Buffer C, and the kinase was eluted with 160 ml of a linear gradient of 300-700 mM NaCl in Buffer C. Fractions containing the active enzyme were pooled, the NaCl was removed by filtration on an Amicon apparatus, and the enzyme was stored at −80 °C until used for various experiments. The most purified enzyme preparation gave a single protein band of 110 kDa on SDS gels but was found to still be contaminated with α-mannosidase activity (see "Results"), which also had a molecular mass of 110 kDa on SDS-PAGE. Thus, fractions from the aminohexyl-agarose column were assayed for fucokinase and α-mannosidase, and fractions containing fucokinase activity were incubated with the N3-[32P]ATP probe and examined by SDS-PAGE and autoradiography to identify the fucokinase.
Polyacrylamide (Native) Gel Electrophoresis
Preparative polyacrylamide gel electrophoresis was done at 4°C in tubes containing 7% acrylamide and 10% glycerol as described by Laemmli (13) and using Tris buffer. The pH of the stacking gel was 6.7 and that of the resolving gel was 8.9. The samples of fucokinase were made up to 10% with respect to sucrose and contained bromphenol blue. During electrophoresis, the current was maintained at 3 mA/gel, and the temperature was kept at 4°C. Two samples were run in parallel. One gel was stained with Coomassie Blue to detect proteins while the other gel was cut into 0.25 cm pieces, and the enzyme was eluted by overnight diffusion at 4°C into Buffer A. The various elutions were then assayed for enzymatic activity.
On native gels, the fucokinase (molecular mass of 440 kDa, see lower band of Fig. 4A) was separated from the α-mannosidase (see upper band of Fig. 4A). To show that both of these bands were composed of 110-kDa subunits, the native gel was removed from the tube, laid on its side on top of an SDS slab gel and polymerized to that gel. Standard proteins were also combined in a native gel and added to the top of the slab gel. The proteins in the native gels were then subjected to SDS-PAGE (as seen in Fig. 4B).
Photoaffinity Labeling of the Fucokinase with 8-Azido-[32P]ATP
Enzyme, at various stages of purity, was mixed with 8-azido-[32P]ATP in buffer and allowed to incubate for 20 s in an ice bath.
8-Azido-[32P]ATP was prepared as described previously (14). After incubation, the reaction mixture was exposed to short-wave UV light for about 90 s to activate the azido group, and the protein was subjected to SDS-gel electrophoresis to separate the proteins. The gels were dried and exposed to film to locate the radioactive bands and were also stained with Coomassie Blue to locate the various proteins. The specificity of the labeling was determined by examining the effect of various concentrations of unlabeled ATP or other nucleotides on the labeling of the protein by N3-[32P]ATP. Various controls were also run, such as one in which exposure to UV was omitted.
Characterization of the Product
The radioactive sugar phosphate produced in the reaction was isolated by ion-exchange chromatography from large-scale incubations of [3H]fucose and ATP with purified enzyme. The radiolabeled peak that eluted from DE-52 with a gradient of 0-250 mM (NH4)HCO3 was lyophilized several times to remove the bicarbonate and was then subjected to hydrolysis in various concentrations of HCl to determine the location of the phosphate group. Sugar 1-phosphates are quite sensitive to mild acid hydrolysis (0.05 N), and the phosphate group is lost fairly rapidly, whereas phosphate residues on other hydroxyl groups are quite stable under these conditions. In addition, the product was analyzed by proton NMR to determine the location and anomeric configuration of the phosphate group. Three hundred MHz proton NMR and 31P-decoupled (GARP) NMR on the sample of L-fucose-1-P were performed on a Bruker ARX300 NMR spectrometer. Data were acquired in D2O at pH 6.0.
Other Methods
Protein was measured by the method of Bradford (15) using bovine serum albumin as the standard. The molecular weight of the native fucokinase was determined by gel filtration on Sephacryl S-300, and that of the subunit by SDS-gel electrophoresis. A number of molecular weight standards were run, including thyroglobulin (Mr 669,000), apoferritin (Mr 443,000), β-amylase (Mr 200,000), alcohol dehydrogenase (Mr 150,000), bovine serum albumin (Mr 66,000), and cytochrome c (Mr 12,000).
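The native-mass estimate rests on the standard gel-filtration calibration, in which log10(Mr) is approximately linear in elution volume over the fractionation range of the column. The sketch below shows that calculation. Only the standard masses come from the text; the elution volumes are invented for illustration (the paper reports only that the fucokinase co-eluted near apoferritin, giving the ~440 kDa estimate).

```python
# Sketch of the usual gel-filtration calibration: log10(Mr) is roughly linear in
# elution volume over the fractionation range. Standard masses are those listed
# in the text; the elution volumes below are hypothetical, for illustration only.
import numpy as np

standards = {          # Mr : hypothetical elution volume (ml)
    669_000: 100.0,    # thyroglobulin
    443_000: 112.0,    # apoferritin
    200_000: 130.0,    # beta-amylase
    150_000: 137.0,    # alcohol dehydrogenase
     66_000: 156.0,    # bovine serum albumin
     12_000: 195.0,    # cytochrome c
}

mr = np.array(sorted(standards, reverse=True), dtype=float)
ve = np.array([standards[m] for m in sorted(standards, reverse=True)])

slope, intercept = np.polyfit(ve, np.log10(mr), 1)   # least-squares line

def estimate_mr(elution_ml: float) -> float:
    """Interpolate an unknown's Mr from its elution volume."""
    return 10 ** (slope * elution_ml + intercept)

# An enzyme eluting near apoferritin would be called roughly 440 kDa:
print(f"Estimated Mr at 112 ml: {estimate_mr(112.0):,.0f}")
```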
RESULTS
Purification of the Fucokinase-The pig kidney fucokinase was purified about 5000-fold with a recovery of activity of about 21% using the methods described under "Experimental Procedures." Fig. 1 shows two of the key steps in the purification procedure, i.e. chromatography on hydroxylapatite (panel A) and chromatography on an aminohexyl agarose column (panel B). The hydroxylapatite step gave about a 10-fold purification, whereas chromatography on aminohexyl-agarose gave better than a 6-fold purification. Table I presents a summary of the purification procedure showing the changes in specific activity at each step and the recovery of activity. Based on gel filtration, the native enzyme emerged from the column in the same area as apoferritin and had an estimated molecular mass of about 440 kDa.
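Table I itself did not survive extraction, but its arithmetic is simple: specific activity is total units divided by total protein, fold purification is the ratio of each specific activity to that of the crude extract, and recovery is units remaining relative to the crude extract. The sketch below shows this bookkeeping; all step values are hypothetical placeholders, chosen only so that the endpoints reproduce the ~5000-fold purification and ~21% recovery stated in the text.

```python
# Bookkeeping behind a purification table (hypothetical placeholder numbers;
# only the overall ~5000-fold / ~21% recovery endpoints are from the text).

steps = [  # (step, total units [nmol fucose-1-P/min], total protein [mg])
    ("crude extract",      1000.0, 20000.0),
    ("DE-52",               800.0,  2000.0),
    ("Methyl HIC",          650.0,   400.0),
    ("hydroxylapatite",     500.0,    30.0),
    ("Sephacryl S-300",     350.0,     8.0),
    ("aminohexyl-agarose",  210.0,     0.84),
]

base_sa = steps[0][1] / steps[0][2]   # crude-extract specific activity
for name, units, protein in steps:
    sa = units / protein
    print(f"{name:20s} SA={sa:9.3f} units/mg  "
          f"fold={sa / base_sa:7.1f}  recovery={100 * units / steps[0][1]:5.1f}%")
# Final line: SA = 250 units/mg, fold = 5000, recovery = 21%.
```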
The purified enzyme was subjected to SDS-gel electrophoresis as shown in Fig. 2. The initial crude extract showed a number of protein bands (lane 2), while the most purified preparation (lane 7) gave one major protein band with a molecular mass of about 110 kDa. That this band was indeed the L-fucokinase was demonstrated by the fact that it was specifically labeled by the photoprobe, azido-[32P]ATP. Thus, as seen in Fig. 3, incubation of enzyme with N3-[32P]ATP gave a single labeled band in the 110-kDa region (lane 2), but no labeled protein band was seen in the absence of exposure to UV light (lane 1). The labeling was shown to be specific, since it was inhibited in a dose-dependent manner by the addition of increasing amounts of unlabeled ATP (i.e. 0.3, 0.6, 0.9, and 1.2 mM) to the incubation mixtures (lanes 3-6). On the other hand, GTP at 0.6 and 1.2 mM (lanes 7 and 8) or ITP at 0.6 or 1.2 mM (lanes 9 and 10) was considerably less effective in inhibiting the reaction. Lanes 11-13 of Fig. 3 show the results of another experiment designed to determine whether 8-N3-[32P]GTP could label the fucokinase. Lane 12 shows that incubation with this probe did not give rise to labeled protein, whereas incubation with N3-[32P]ATP did result in labeling of the 110-kDa protein (lane 13). These experiments indicate that the fucokinase is quite specific for ATP.
In addition, when various fractions from the aminohexyl-agarose column (Fig. 1B, fractions 48-56) were incubated with the N3-[32P]ATP probe, maximum labeling of the 110-kDa band was coincident with maximum fucokinase activity, i.e. fractions 48, 50, and 52 (data not shown). These data provide convincing evidence that the 110-kDa band is the fucokinase.
The 110-kDa protein was cut from the gel and sent to Harvard Microsystems for amino acid sequencing. One peptide, obtained by endo-Lys-C digestion, was sequenced, and a BLAST search indicated significant homology to α-mannosidase. The purified enzyme preparation was found to have strong fucokinase activity but also had readily detectable α-mannosidase activity. Although much of the α-mannosidase activity was removed on the aminohexyl-agarose column, some activity still emerged with the fucokinase in fractions 46-56 (Fig. 1B). This enzyme preparation gave a single sharp band at 110 kDa on SDS gels, as shown in Fig. 2, lane 7.
The fucokinase could be separated from the α-mannosidase by native gel electrophoresis. Thus, as seen in Fig. 4A, gel electrophoresis of the enzyme preparation from the aminohexyl-agarose column on native gels gave two protein bands, one with an estimated molecular mass of 440 kDa and a slower migrating band. The gel was sliced into 2.5-mm sections, and the proteins were eluted into buffer and assayed for activity. The fucokinase activity was associated only with the lower band, while α-mannosidase was found in the upper, slower moving band. When this native gel with the two bands was then subjected to SDS-PAGE in a second dimension (Fig. 4B), both the 440-kDa protein and the slower moving protein gave a single protein band of 110 kDa. These data indicate that both the fucokinase and the α-mannosidase are composed of 110-kDa subunits, but the native enzymes are quite different in size or charge. The 110-kDa subunit isolated from the 440-kDa protein (Fig. 4B) was subjected to endo-Lys-C digestion, peptide isolation, and amino acid sequencing of several of the well-separated peptides. The amino acid sequences of three peptides were as follows: peptide 1, VDFSGGWSDTPPLAYE; peptide 2, (T)(G)IRDWDLWDPDTP(P)(T)ER; and peptide 3, LSWEQLQPCLDR. These sequences do not show significant homology to any known sequences in the BLAST search.

FIG. 1. Steps in the purification of fucokinase. In panel A, the enzyme fraction from Macro-Prep Methyl HIC was applied to a 2.5 × 10 cm column of hydroxylapatite that had been equilibrated with Buffer B. The column was eluted with a 0-50 mM gradient of KH2PO4 in Buffer B (arrow 1) and then with a 0-150 mM linear gradient of K2HPO4 in Buffer B (arrow 2). In panel B, the concentrated enzyme from the Sephacryl S-300 was applied to a 1.5 × 10 cm column of aminohexyl-agarose. The column was washed with 300 mM NaCl in Buffer C and was then eluted with a gradient of 300-700 mM NaCl in Buffer C (arrow indicates the start of the gradient). Fractions were assayed for fucokinase, α-mannosidase, and protein. (Table footnote: units are nmol of fucose-1-P produced in 1 min.)
Properties of the Fucokinase-The purified enzyme was studied to determine its substrate specificity, as well as various other properties of the enzymatic reaction. The enzyme showed a typical pH profile from 5.5 to 8.0 using MES and HEPES buffers, with a sharp pH optimum at about 8.0 in HEPES buffer. However, the pH curve on the alkaline side, between 8.0 and 9.0 in Tris buffer, did not show a sharp optimum. The activity in Tris buffer, pH 8.0, was about 80% of that in HEPES buffer, pH 8.0 (data not shown).
The enzyme had an absolute requirement for a divalent cation for activity. As shown in Fig. 5, Mg2+ gave the best stimulation, with optimum activity being seen at 3 mM concentration. Fe2+ also stimulated the enzyme to nearly the same degree as Mg2+, but in this case, optimum activity occurred at about 10 mM. Co2+ and Mn2+ were also stimulatory, with maximum activity occurring at about 3-5 mM, but the maximum activity was only about one-fourth to one-third of that observed with Mg2+. A variety of other metal ions were tested and found to be inactive, including Ca2+, Cu2+, Fe3+, Hg2+, Mo2+, Ni2+, and Zn2+. However, when Cu2+, Zn2+, and Hg2+ were added at 1 mM concentrations to incubations containing Mg2+, they completely inhibited activity.

The specificity of the kinase for various sugar substrates was examined in two different ways. In the first set of experiments, various radiolabeled sugars were tested as phosphate acceptors for the purified enzyme using the ion-exchange chromatography method for assay of activity. Table II shows the results of this experiment. It can be seen that, of all the sugars tested, L-fucose was by far the best substrate and was readily phosphorylated. D-Arabinose, which has the same configuration at carbons 1 through 4 as L-fucose, was also a reasonable substrate for phosphorylation and was about 10% as effective as L-fucose. On the other hand, all of the other sugars were ineffective as phosphate acceptors.

FIG. 3 (caption, opening truncated in extraction): ...[32P]ATP, and the mixture was exposed to UV light for 90 s. The reaction was stopped by adding loading buffer, and the mixture was subjected to SDS-PAGE. Radioactive bands were detected by exposure to film. Lanes are as follows: lane 1, probe + enzyme but no UV; lane 2, probe + enzyme + UV; lanes 3-6, probe + enzyme + 0.3, 0.6, 0.9, and 1.2 mM ATP + exposure to UV light; lanes 7 and 8, 0.6 and 1.2 mM GTP + probe + exposure to UV light; lanes 9 and 10, 0.6 and 1.2 mM ITP + probe + exposure to UV light. In lanes 11 and 12, the enzyme was incubated with N3-[32P]GTP and then exposed to UV light (lane 12) or not exposed to UV light (lane 11). Lane 13 is a control of enzyme + N3-[32P]ATP + exposure to UV light.

FIG. 4. Native gel electrophoresis of the purified fucokinase. Enzyme was purified as indicated in Table I and gave a single protein band of 110 kDa on SDS gels (Fig. 2, lane 7). This enzyme preparation was subjected to nondenaturing gel electrophoresis as indicated under "Experimental Procedures" and gave two protein bands (panel A). The proteins were visualized with Coomassie Blue. The native gel was removed from the tube, loaded on top of a slab gel and subjected to SDS-PAGE in the second dimension to determine the subunit composition of the two native protein bands. These proteins were also stained with Coomassie Blue (panel B).
Although it was possible to get a reasonable assessment of the sugar specificity of the kinase from the above experiment, it was not possible to test sugars such as D-fucose, since these sugars are not available in radioactive form. Thus, these unlabeled sugars were tested for their ability to inhibit the phosphorylation of L-[3H]fucose. The rationale for this experiment is that a sugar that inhibits the phosphorylation of L-fucose would probably compete with L-fucose for the phosphorylation (i.e. active) site. Table III shows the results. As expected, unlabeled L-fucose was a reasonable inhibitor of the activity, and unlabeled D-arabinose also inhibited, although considerably less so than L-fucose. Interestingly, D-fucose and L-rhamnose were ineffective as inhibitors, as were 2-deoxyglucose and the other sugars. These data indicate that the sugar must have the L-galactose configuration at carbons 1 through 4 to be a substrate or inhibitor.
The specificity for the nucleoside triphosphate was also examined by testing the ability of a variety of nucleotides to serve as phosphate donors in the phosphorylation of L-[3H]fucose. Table IV demonstrates that the kinase is very specific for ATP as the phosphate donor and shows less than 2% activity with any other nucleoside triphosphate. In addition, no activity was observed with any nucleoside diphosphates or monophosphates.
The effect of the concentration of the substrates, L-fucose and ATP, on the velocity of the reaction was determined. The Km for L-fucose was determined at saturating concentrations of the other substrates, i.e. 5 mM ATP and 5 mM Mg2+. The data were plotted by the method of Lineweaver and Burk, and the Km for L-fucose was determined to be 27 μM (data not shown). A similar experiment was performed with ATP. In this case, the reactions were done in the presence of 5 mM Mg2+ and 100 μM L-fucose. The Km for ATP was estimated to be 600 μM (data not shown).
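The double-reciprocal fit behind these estimates is easy to reproduce. In a Lineweaver-Burk plot, 1/v = (Km/Vmax)(1/[S]) + 1/Vmax, so Km falls out as slope/intercept. The sketch below uses synthetic, noise-free rates generated from the reported Km of 27 μM and a hypothetical Vmax; a real determination would of course fit measured velocities.

```python
# Lineweaver-Burk treatment: 1/v = (Km/Vmax)(1/[S]) + 1/Vmax, so a straight-line
# fit of 1/v against 1/[S] gives Km = slope/intercept and Vmax = 1/intercept.
# Data here are synthetic, generated from the reported Km (27 uM) and a
# hypothetical Vmax, purely to illustrate the calculation.
import numpy as np

KM_TRUE, VMAX_TRUE = 27.0, 250.0                     # uM; Vmax is a placeholder
S = np.array([10.0, 20.0, 40.0, 80.0, 160.0])        # [L-fucose], uM
v = VMAX_TRUE * S / (KM_TRUE + S)                    # Michaelis-Menten rates

slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)   # double-reciprocal fit
print(f"Km   = {slope / intercept:.1f} uM")          # -> 27.0
print(f"Vmax = {1.0 / intercept:.1f}")               # -> 250.0
```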
The kinase was found to be inhibited by the final product of this alternate pathway, GDP-L-fucose. The data in Fig. 6 show the effect of adding increasing amounts of GDP-L-fucose to reaction mixtures containing L-fucokinase, L-fucose, and ATP. The amount of inhibition of fucokinase increased with increasing amounts of GDP-L-fucose, and the Ki for GDP-L-fucose was estimated to be about 10 μM. Other GDP-linked sugars, such as GDP-D-mannose and GDP-D-glucose, did not inhibit the fucokinase, nor did L-fucose-1-P. However, when increasing amounts of GDP-D-mannose were added to incubation mixtures of fucokinase with L-fucose, ATP, Mg2+, and 20 μM GDP-L-fucose, the presence of GDP-D-mannose effectively blocked the inhibition by GDP-L-fucose, with complete reactivation occurring at about 1-1.5 mM GDP-D-mannose (data not shown).
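The text does not state how the ~10 μM Ki was obtained, nor whether the inhibition is competitive. Purely as a hedged illustration, one common back-calculation for a competitive inhibitor is the Cheng-Prusoff relation, Ki = IC50/(1 + [S]/Km), using the standard assay concentration of L-fucose (100 μM) and its Km (27 μM); the IC50 value in the example below is hypothetical, and the competitive mechanism is an assumption, not a finding of the paper.

```python
# Hedged illustration only: IF GDP-L-fucose acted competitively (not established
# in the paper), Ki could be back-calculated from an IC50 measured at fixed
# substrate via the Cheng-Prusoff relation: Ki = IC50 / (1 + [S]/Km).
# [S] = 100 uM L-fucose (standard assay) and Km = 27 uM are from the text;
# the IC50 below is hypothetical.

S_UM, KM_UM = 100.0, 27.0

def ki_competitive(ic50_um: float) -> float:
    """Cheng-Prusoff back-calculation for a competitive inhibitor."""
    return ic50_um / (1.0 + S_UM / KM_UM)

# An IC50 of ~47 uM under assay conditions would correspond to Ki ~ 10 uM:
print(f"Ki ~ {ki_competitive(47.0):.1f} uM")
```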
Tissue Distribution-To determine whether the fucokinase was present in other tissues besides kidney, crude cytosolic extracts were prepared from various porcine tissues, and each of these extracts was incubated with [3H]fucose, Mg2+, and ATP for various times. The amount of label that bound to DE-52 and the specific activity of each extract are presented in Table V. It can be seen that L-fucokinase activity was present in many different tissues; in fact, lung and kidney were the tissues with the highest specific activity for this enzyme. Aorta and brain also had reasonably high activity, whereas pancreas, heart, and spleen were the lowest. Crude extracts prepared from cultured MDCK cells and HT-29 cells were also assayed, but no detectable fucokinase activity was found in those extracts.
Identification of the Product-The product of the reaction was isolated from large scale incubations of L-fucose with ATP and active enzyme and was purified by ion-exchange chromatography and by paper chromatography. The radioactive fucose product eluted from the DE-52 column in the same position as authentic sugar-1-phosphates, such as glucose-1-P or GlcNAc-1-P. The product was subjected to mild acid hydrolysis in 0.05 N HCl at 100°C. Aliquots of the hydrolysis mixture were withdrawn at various times after the initiation of heating, and each aliquot was passed through a column of DE-52. The wash and salt elution of the column were subjected to scintillation counting to determine the rate of hydrolysis. The phosphorylated sugar completely lost its charge (and no longer bound to DE-52) within the first 3 min of hydrolysis (data not shown). These data provide convincing evidence that the phosphate group is attached to the anomeric carbon of the sugar.
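The qualitative statement that the charge was completely lost within 3 min puts a rough bound on the hydrolysis kinetics. Treating the loss as first-order, as is usual for glycosyl phosphate hydrolysis, "complete" loss within 3 min implies a half-life under about half a minute. The arithmetic is sketched below; the 99% completeness threshold is an assumption introduced for the calculation, not a value from the paper.

```python
# Rough first-order bound on the hydrolysis rate of the sugar-1-P in 0.05 N HCl.
# Assumption (not from the paper): "completely lost its charge" means >= 99%
# hydrolyzed by t = 3 min. For first-order decay, N(t)/N0 = exp(-k t).
import math

t_min = 3.0             # time by which hydrolysis was complete (from the text)
frac_remaining = 0.01   # assumed residual fraction at t_min

k = -math.log(frac_remaining) / t_min   # lower bound on the rate constant
half_life = math.log(2) / k

print(f"k    >= {k:.2f} per min")
print(f"t1/2 <= {half_life:.2f} min")   # ~0.45 min, i.e. under 30 s
```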
The sugar-1-P produced in the reaction was subjected to 300 MHz NMR as an aid in the structural characterization, as well as to determine the anomeric configuration of the phosphate group. Panel A of Fig. 7 shows the 300 MHz proton spectrum of the anomeric proton region of L-fucose-1-phosphate, and panel B shows the 31P-decoupled spectrum. The J1,2 spin-coupling is 4 Hz for the anomeric proton, consistent with the β-L-fucose-1-phosphate configuration.

DISCUSSION

L-Fucose is an important component of many animal glycolipids and glycoproteins, and turnover of these polymers in the lysosomes must lead to the formation of free L-fucose (9). Thus, it is not surprising to find that certain tissues, especially liver and kidney, contain a specific kinase that can phosphorylate L-fucose to form L-fucose-1-phosphate. In fact, early labeling studies in rats indicated that various tissues are capable of utilizing free L-fucose as a precursor of the L-fucose in glycoproteins (9, 10), suggesting a pathway to reutilize L-fucose.
The presence of the enzyme L-fucokinase was first demonstrated by Ishihara et al. (11), who partially purified this enzyme from pig liver. However, the initial purification of this enzyme was only about 70-fold, and the enzyme preparation still had considerable activity for phosphorylating D-glucose, D-ribose, and L-rhamnose. Thus, that enzyme fraction probably still contained hexokinase, ribokinase, and other enzymatic activities. Interestingly enough, the kinase was also purified about 3500-fold from pig liver by Yurchenko and Atkinson (16). Their enzyme preparation also phosphorylated D-glucose, D-galactose, and D-mannose at about the same rate as, or better than, it phosphorylated L-fucose. These data suggest either that their kinase had a very broad specificity for sugar or that the preparation was still contaminated with hexokinase and galactokinase. Unfortunately, those authors did not examine the nature of the products formed from glucose and mannose to determine whether the phosphate group was in the 1 or 6 position, since that would readily have shown whether the reaction was catalyzed by hexokinase. Otherwise, it is difficult to envision how a single sugar kinase could catalyze a reaction with four different sugars, all having a different stereochemistry at carbons 2 through 4.

FIG. 7. High resolution NMR characterization of the biosynthetic L-fucose-1-P. 300 MHz NMR and 31P-decoupled (GARP) NMR were performed on a Bruker ARX300 NMR spectrometer. Data were acquired in D2O at pH 6.0. The anomeric signal is shown in panel A, and the GARP 31P-decoupling experiment is shown in panel B.
On the other hand, the L-fucokinase described in the present report shows very strong specificity for sugars having the Lgalactose configuration at carbons 2 through 4. Thus, only L-fucose and D-arabinose were active as phosphate acceptors although D-arabinose was only about 10% as effective as Lfucose. In addition, the enzyme only utilized ATP as the phosphate donor in contrast to the enzyme reported by Ishihara et al. (11), which could also use CTP, UTP, and GTP as phosphate donors. Thus, it is not clear whether the enzyme reported here is a new enzyme or whether it is the same protein as described earlier but has been purified much more extensively to apparent homogeneity. We sent the purified enzyme to Harvard Microsystems for amino acid sequencing and obtained the sequences of three peptides. These sequences do not show any more then 40% homology to known sequences by the BLAST search. Thus, our protein is clearly not very closely related to other reported kinases. | 6,822.2 | 1998-03-06T00:00:00.000 | [
"Biology",
"Chemistry",
"Computer Science"
] |
The cross-border movement of Nepali labour migrants amidst COVID-19: challenges for public health and reintegration
The migration of Nepali workers to India for labour is a long-standing tradition and a common pattern of life in the region. As poverty and unemployment have been the main drivers of migration, migration for work has helped maintain a standard of living, particularly for low-skilled and low-income migrants. However, the COVID-19 pandemic made visible a policy vacuum relating to the connections between the mobility of labour migrants, economic resilience and public health. In the absence of effective policy, returnees have posed challenges for the Nepali state, and a lack of adequate support and planning has created risks for individual workers, households and communities more broadly. This paper reviews what we know about the return of Nepali labour migrants during the COVID-19 pandemic and highlights the public health consequences this policy vacuum has had within the established social structure of Nepal.
Background
In a rapidly globalising world, workforce mobility acts as the artery of production in a global market. Often driven by domestic unemployment and poverty, low- and semi-skilled migrants on low incomes routinely leave their countries to seek work beyond their borders. For Nepali labour migrants, the main destinations for foreign employment are the Gulf countries, Malaysia and India. Benefiting from an open and unregulated border, the migration of Nepali workers to India has taken place for decades. India remains a viable option for improving the earning potential of workers, particularly those from rural areas and the hinterlands of Nepal, due to cheap and relatively easy travel. Migration to India for Nepali labour migrants is mostly an essential means of survival rather than a route to saving and investment (Bashyal, 2020). The compounded experiences of landlessness, unemployment and poverty are the major push factors for migration from Nepal, whereas accessibility, low cost of travel and employment opportunities are the pull factors to India (Dhungana et al., 2019; Shrestha, 2019).
Though exact numbers remain unknown in the absence of official figures, it is estimated that roughly two million Nepali labour migrants per year live in India. As the majority of migrant workers come from very poor households, rarely in possession of educational qualifications or specific technical skills, they are mostly employed in informal sectors and in low-skill jobs, such as portering, gatekeeping or various service sector roles, which yield a very low wage (O'Neill, 2021; Bashyal, 2020). With the money they can save, they attend to their familial responsibilities, including paying off debts, arranging food provision and/or investing in their children's education (Gautam, 2017). Labour migration from Nepal to India is mostly seasonal and temporary and has become interwoven into both the labour market of India and the economy of Nepal (Sapkota, 2018). However, this ongoing cross-border mobility is far from harmonious and is often marked by complexities such as workplace safety, public provisioning, discrimination and insecure work (Rao et al., 2020).
These migration trajectories, however, become more complex in times of crisis. At the onset of the COVID-19 pandemic, Nepali migrant workers present in India were severely impacted by restrictions on their mobility, a collapse of the export market, joblessness and the fear of infection (Bhandari et al., 2021). As India imposed a strict lockdown with just four hours' notice (Harris, 2020) to curtail the spread of the virus, the dilemmas facing labour migrants were further heightened (Rao et al., 2020). Consequently, thousands of Nepali migrant workers were obliged to return home. But the process of return was a major challenge in the face of ineffective transportation, logistical difficulties in the provision of food and water, and highly controlled border entry points (Shah et al., 2020).
The connections between migration and work in this context highlight how mobility is interwoven into familial obligations in Nepal: within just a few months of their return to Nepal from India, labour migrants started to go back to India despite the economic uncertainty and ongoing presence of COVID-19. From early May 2021, due to a subsequent wave of the virus, India experienced a massive impact, with over three thousand deaths and over three hundred thousand new cases each day (ABC Premium News, 2021). Again, mobility restrictions were imposed, and life became further compromised. As a result, Nepali migrant workers started to return to Nepal once more, choosing their homeland over the uncertainties and risks posed to their livelihoods and health by the labour market (Guha et al., 2021). However, against the backdrop of increasing COVID-19 transmission and growing political instability in Nepal, the return of the migrants has posed significant challenges for their reintegration and for public health. Whilst Nepal has experienced a series of economic challenges from civil war to earthquakes, the COVID-19 pandemic has highlighted the fragility of the current socio-political structure and the connections between economic stability, public health and citizenship. This paper explores these connections and attempts to contribute new insights into how the [un]official framework for the management of labour migration from, and return to, Nepal has taken shape in response to the global COVID-19 pandemic. The main purpose of this article is to illustrate how the wellbeing of return migrants is inextricably linked to the social policies of the state and how, and in what ways, the impacts on health were amplified during the COVID-19 pandemic. In doing so we draw on a range of evidence from the press and the academic and policy literature, and we hope to raise the profile of the situation of return migrants as they attempt to reintegrate into their country of origin and home.
'Existing' issues
The role of labour migrants in the remittance-supported economy of Nepal has remained significant, contributing substantially to the Gross Domestic Product (GDP) of the country (Seddon et al., 2002). Given that more than a quarter of GDP is contributed by remittances (Government of Nepal, Ministry of Finance, 2020), critics argue that the Government of Nepal is more focused on exporting the labour force to source its GDP and has not come up with a concrete policy for migrants' reintegration (Baniya et al., 2020). Equally, the challenges labour migrants face in their migration journey have often been overlooked from a public policy perspective (Sapkota, 2015). Their plight begins in their home country, where they face multiple challenges in preparing documents and obtaining work permits. Quite often they are sandwiched between prolonged bureaucratic procedures in government offices and the unclear paperwork, false contracts and high migration costs of employment agencies (Kern and Müller-Böker, 2015; Liu, 2015). They then face further challenges in the country of destination in relation to employment contracts, working hours, wages and holiday entitlements (Thapa et al., 2019). As a result, migrants routinely face precarious working conditions and are compelled to live in poor-quality accommodation (Sharma, 2020), which adversely impacts their health and human rights. Subsequently, researchers have highlighted the poor health and wellbeing of Nepali labour migrants (for example, Adhikary et al., 2020; Dhungana et al., 2019; Regmi et al., 2019; Simkhada et al., 2018; Simkhada et al., 2017), both in the country of destination and in the country of origin upon their return. The adverse health conditions experienced by returnees not only affect the individual but also have economic, psychological and social impacts on the wider family.
Whilst migration to India for Nepali migrant workers differs from migration to other destination countries, in that visas, work permits and pre-departure contracts are not required, it is likely that their lived experiences of temporary settlement remain much the same.
Reintegration has been a much-debated phenomenon in relation to return migration, and it has been interpreted in varied ways. The section below presents a short review of various understandings of reintegration.
Understanding reintegration
Objective 21 of the 2018 Global Compact for Safe, Orderly and Regular Migration makes a commitment to create a congenial environment for returnee migrants' personal safety, economic empowerment, inclusion and social cohesion in communities. The International Organization for Migration (IOM) states that reintegration of returnee migrants is a situation in which the returnees are successfully re-included in day-to-day life, the labour market and the social environment of their country of origin, without having to face the challenges that triggered their first migration (IOM, 2017). A more comprehensive definition of reintegration thus covers the economic, sociocultural and psychological dimensions, incorporating components such as employment, income sources, property ownership, networks, personal position and memberships, belonging, safety and security, access to amenities or services, and the absence of discrimination, hatred and stigmatization (Koser and Kushminder, 2015). Together, cultural, political or structural factors may be closely interlinked with an individual's socioeconomic activities and productivity, impacting their reintegration (Hunter, 2011).
The experience of migration, return migration and reintegration may vary according to the personal, familial, geographical, structural and situational contexts of the migrant. A study by Nisrane et al. (2017) on Ethiopian female returnee migrants shows that the money they had remitted was spent by their family members and that they struggled economically after their arrival. The financial support they received from governmental organizations (GOs) and non-governmental organizations (NGOs) was not enough to sustain or start a small business. A study by Parreñas et al. (2019) of Filipino and Indonesian returnee migrant workers shows that the returnees did not find income-generating opportunities in the country of origin and struggled to support their families; for the sake of their families' economic sustainability, they decided to migrate again despite their experience of being exploited in the destination country. Ullah's (2013) mixed-methods study of female returnee migrants from South Asian countries reveals that the returnees regretted the lost opportunity to marry and bear children. As the study shows, there were misunderstandings in the family; the returnees lost their skills, and they suffered from the social stigma associated with female migration. This is evidence that these returnees experienced reintegration problems in the family and in the community. David (2017), writing on forced returnee migrants from Algeria, Morocco and Tunisia, indicates problems in reintegrating returnees into the labour market of the country of origin. However, Kureková and Žilinčíková's (2018) study of young returnee migrants in Slovakia presents evidence that returnees were given more priority in the job market if they had work experience in Western Europe (the UK, Germany, Ireland) or America; these returnees could also earn more than non-migrants. Flahaux (2020), in a study of returnee migrants from Congo and Senegal, demonstrates that migrants who were most prepared to return reintegrated better after returning than those who were unprepared. Hagan et al.'s (2014) study in the Mexican context reveals that migrants returned with the skills they had gained in the country of destination and applied them in the home country; these returnees became more productive and reintegrated economically as well as socially.
How Nepali migrant workers have reintegrated into their home communities after returning from their migration tenure remains largely under-explored. However, several studies are worthy of particular attention. Liateart et al. (2014), in their longitudinal study of irregular Nepali migrants returned from Belgium, find that these migrants experienced difficulty reintegrating for economic reasons: they could not work in Belgium because of their irregular status and could not find employment opportunities after returning. These migrants considered re-migration both to recoup the costs of their previous migration and for the financial security of their families. By contrast, Bhandari and Pant (2019), on the basis of a household survey covering 31 districts of Nepal, show that returnee migrants were more likely to engage in agricultural business, indicating that reintegration is closely associated with their financial status. One of the themes of Korzenevica's (2020) study, carried out in the eastern hills of Nepal, echoed this finding: returnee migrants saw more scope in agricultural business and expressed the realization that they could live a decent life in the village with their family members rather than going abroad for employment. This kind of preparedness and realization may be taken as one indicator of successful reintegration.
In the context of cross-border mobility too, reintegration is understood in terms of the returnee's financial gains, given that the primary drivers of mobility are unemployment and poverty. However, due to work-related obligations, peer pressure and individual unawareness, Nepali labour migrants tend not to follow healthy lifestyles. Issues of earnings and family finance may carry severe familial and psychosocial costs, bringing health-related problems to the individual migrant as well as to their family back home, and often extending to the community. Poudel et al. (2004) and Regmi et al. (2019) further highlight these issues in connection with migrants' unhealthy lifestyles, low or no access to health facilities, unsafe sexual relationships in India, and the consequences after returning home. In connection with these studies, Vaidya and Wu's (2011) study shows that seasonal migrants were not only affected by HIV but also transmitted the disease within the community. This could have affected the returnee migrants' reintegration due to the perceived stigma attached to the disease. Labour migrants' lifestyles and risky, unsafe work in destination countries thus appear to have an adverse effect on their reintegration after returning home, in addition to financial factors.
Cross-border mobility appears much more transitory and complex amidst the COVID-19 pandemic. Even though seasonal migration mostly occurs during the agricultural off-seasons in Nepal, the weakened and imbalanced family finances caused by prolonged immobility and the absence of economic support and employment opportunities during the first wave of the pandemic forced returnee migrants to move to India again as soon as the restrictions were lifted.
Impacts of the COVID-19 Pandemic
At the time of developing this paper (June 2022), COVID-19 had caused 6,947,192 deaths and affected 767,518,723 people worldwide (WHO, 2022). In addition to the impacts on health and daily lives, the pandemic will have a lasting effect on the world economy (International Labour Organisation, 2021). Within Asia, India was worst affected by the pandemic, and the second wave impacted India with far greater intensity than the first. As of 9 June 2021, 22:14 GMT, India had recorded 6,138 new COVID-related deaths, 359,695 total deaths and 29,182,072 total positive cases (Worldometer, 2021). A huge number of Nepali labour migrants staying and working in India were immensely affected by the spread of the virus, a lack of access to health facilities and job losses. Staying in India posed a growing threat from both a financial and a health point of view. As a result, they were obliged to return home as the only option amidst the uncertainty of any immediate recovery. However, their return was replete with challenges.
The financial resilience, health and day-to-day lives of people tend to be adversely impacted when a major emergency, such as a natural disaster, conflict or public health crisis, occurs in the broader societal context. Labour migrants and their dependants, in particular, experience social and economic vulnerability in times of crisis (Karim and Talukder, 2020). For example, the recession that hit the world economy in 2008 severely affected millions of migrant workers across the world. Hundreds of thousands of these workers were obliged to return home to avoid further pressure on their cost of living; they then faced further challenges in supporting their families in the absence of income-generating opportunities in their home countries. The recent COVID-19 pandemic has again posed a significant challenge, impacting the broader economy, health and other aspects of life. Fasani and Mazza (2021) report that labour migrants are likely to have been the most vulnerable group of workers during the pandemic, due to the direct and indirect impacts it had on jobs, quality of life, housing and issues brought about by displacement. Sharma (2020) argues that COVID-19 has particularly worsened the lives of labour migrants worldwide, especially those moving from low- and middle-income countries (LMICs). In a policy brief for the Migration Policy Institute, Le Coz and Newland (2021) highlight the impacts of COVID-19 on various countries, especially LMICs such as Armenia, Bangladesh, Sri Lanka, Myanmar, Nepal and Uganda. The authors observe that labour migrants faced numerous challenges in destination countries as well as in their countries of origin upon arrival. Because of the impact of COVID-19, millions of migrant workers worldwide lost their jobs and returned home. Regarding the return from India, by September 15, 2020, an estimated 76,048 Nepali labour migrants had returned via the Nepalgunj border alone (Himalayan Times, 2020b). According to Asian News International (2020), more than 200,000 labour migrants had returned from India due to COVID-19. This is likely an underestimate, as other migrants are thought to have entered through 'inactive' border points to avoid the authorities at the regular entry points (Republica, 2020a).
Return pathways and challenges
The return of large numbers of the Nepali diaspora as a result of the COVID-19 pandemic has posed particular dilemmas, and the shortcomings of planning for return migration have been amplified. For example, migrant workers stranded in destination countries such as the Gulf countries, Malaysia and India were hit hard by COVID-19 as they lost their jobs and were compelled to live without food, shelter and money (Bhattarai, 2020). They faced a range of challenges in the process of returning (Mandal, 2020a), with stories emerging of migrants stranded in the streets, at airports and at the border between Nepal and India (Hashim, 2020; Mandal, 2020b; Shrestha, 2020). Furthermore, many migrant workers faced further challenges after entering their home country (Hashim, 2020; Shrestha, 2020). This reflects a lack of coordinated effort by employers, employment agencies, the diplomatic missions of Nepal and the Government of Nepal, all of whom failed to take proactive steps to assist migrant workers. The issues of returnees from India pose an added challenge, as these returnees have remained absent from existing government policy, with neither data on them nor any framework for their financial and social security.
In April 2020, as cases of COVID-19 started to rise in India, the Government of Nepal decided to seal the border as a preventive measure to curtail the spread of the virus (Basnet et al., 2020). Thousands of returning labour migrants were stranded at the border (Shrestha, 2020), as the authorities at the entry points prevented them from crossing, and they had to wait for weeks to receive entry clearance. Issues of violation of human dignity emerged (Dhungana, 2020), as the vulnerability of those returning was further exacerbated (Prerna et al., 2020). The returnees were reported to have been so desperate and frustrated by the situation that some risked their lives by jumping into the Mahakali River to swim across the border, away from the surveillance of the authorities (Baniya et al., 2020; Badu, 2020). After entering Nepal, the returnees faced further challenges when they were required to stay in quarantine. The quarantine facilities were reported to be unhygienic, unsafe, poorly managed and overcrowded (Shah et al., 2020; ILO, 2020). Women were obliged to share quarantine accommodation and facilities with men, and sexual violence against girls and women was reported (Dahal et al., 2020). Incidents of suicide and attempted suicide emerged, and some people ran away from quarantine due to the perceived stigma surrounding the virus (Baniya et al., 2020). Those who managed to reach home after this period of quarantine faced further challenges of integration in the community, because community members shared this perceived stigma (Keetie et al., 2020), fearing that returnees were carrying and transmitting the virus. Together with the fear for their lives imposed by the pandemic, the returnees struggled to support their families in the absence of income-generating opportunities. Even though the Government of Nepal had formed the COVID-19 Crisis Management Committee as a special body with authority to address pandemic-related issues, the efficiency of the committee in taking immediate steps was questioned. The literature shows that other LMICs experienced similar problems in managing returnees during the crisis. For example, people ran away from quarantine centres in Zimbabwe in the absence of basic provisions, while in Uganda, Sri Lanka and Myanmar, governments held returnees back because the reception centres were full, waiting for places to become available (Le Coz and Newland, 2021). As the report adds, these countries faced budget crises in arranging the required facilities in quarantine centres and in mitigating the escalating health crisis. By contrast, according to the report, the management of returnees in Kerala, India, was exemplary, because the state had maintained regular data on migrants and returnees, which helped it project the potential number of returnees and prepare for their management, including setting up hospital beds and quarantine facilities and taking account of migrants from other regions.
Post-return crisis
Even though returning home and reuniting with family is the primary concern for returnee migrants in times of crisis, post-return life is replete with challenges due to low household incomes (Bastia, 2011). Economic reintegration thus emerges as a key problem for returnee labour migrants. This is acute for low-income households in rural areas, which rarely benefit from state interventions and social assistance (Ojha, 2021). Households in other LMICs have faced similar challenges: for example, 87 percent of returnee migrants in Bangladesh faced significant challenges in the absence of a source of income (Dhaka Tribune, 2020), and re-engaging with the domestic labour market was a major challenge for returnee migrants in Cambodia (IOM, 2020). So, even in the case of Nepal, re-migration amidst the ongoing pandemic became a difficult but rational choice for labour migrants who could not sustain their families. This may explain why, during four weeks of September 2020, 22,000 Nepali labour migrants left for India via the Nepalgunj border point alone (Himalayan Times, 2020b). Republica (2020c) reported a number of testimonies, including the following:

'I came here four months ago. However, I couldn't withstand scarcity and starvation… I have no other option but to leave the country again merely to fulfil the basic needs.'

'You can't cheat your stomach. You have to feed yourself to survive.'

'We couldn't find any job to earn a living here. So, ultimately, we are returning to India.'
But the second wave of the virus further damaged both human health and the job market, impacting more workers. Once again, because of the likelihood of destitution, a return to Nepal became necessary (Jesline et al., 2021; Online Khabar, 2021), and once again it was a significant challenge (Shah et al., 2020). Thousands of Nepali migrants returned home as the second wave of the virus caused a devastating impact in India; records show that 30,000 Nepali migrants returned via the Gauriphanta entry point alone between mid-March and mid-April 2021 (Deuba, 2021). Whilst returning further impacted individuals and households financially, there were reports that this was preferable to succumbing to the virus, as one returnee stated: 'I am lucky to be alive… I managed to escape, and I am alive and safe' (Sapkota and Khada, 2021). However, the economic reintegration and health of the returnee migrants remained a major challenge while Nepal was itself undergoing the crisis of the pandemic and struggling even to ensure the supply of oxygen to patients (The Guardian, 2021a).
The second return was marked by leniency, in the absence of standard checks and mandatory quarantine requirements. In the event of positive cases, there was no provision for contact tracing (Aryal and KC, 2021). Karnali Province stopped carrying out tests, citing the high number of entrants and low capacity to administer the programme. Similar withdrawals were in place in Rautahat and in most of the bordering districts, which were interpreted as mismanagement of the returnees by the authorities (Jha, 2021) and indicated the possibility that the virus had already spread to the wider community. There were then what were described as chaotic scenes in most of the hospitals of the country, underlined by the government's announcement that hospital beds were scarce (Poudel, 2021). This could be read as a failure of institutional capacity resulting from the ineffectiveness of the Crisis Management Committee (Aryal and KC, 2021). With some 4,500 migrant workers returning from India daily (Online Khabar, 2021), the plight of the returnee migrants during the pandemic can hardly be overstated, and the lockdown might have further complicated their return and reintegration.
The mobility of people, like that of goods and money, has always posed problems for authorities (Jordan and Brown, 2007), and the COVID-19 pandemic has once again exacerbated the particular vulnerabilities faced by labour migrants, whose agency, rights and voices appear missing from the public policy space (Rao et al., 2020). The pandemic has highlighted the fragility of internal migrants in India and how the policy vacuum leaves citizens adrift from their basic rights of citizenship. Despite the existing problems, migration to India, in the current global market, stands as an unavoidable phenomenon for most of the people of rural Nepal. The people who are obliged to cross the Indo-Nepal border can be seen as amongst the ''poorest of the poor'', and they are highly vulnerable not only in the country of destination but also in their own country after return (The Guardian, 2021b). The COVID-19 pandemic has illustrated how fragile the status quo is and how the global market in labour and a weak welfare system can exacerbate a global public health crisis. Migrants who have no means of income in their own regions and countries take the rational decision to move to improve their situation. This movement, though, presents risks and challenges to these individuals and to the global health system as the virus moves with them. For individual labour migrants, the pandemic has created multiple impacts from both the actual infection and the fear of it, as well as the stress of unemployment, often termed psychological morbidity (Dhungana et al., 2019). Discrimination, psychological stress, family obligations and financial hardship are common challenges faced by labour migrants in times of crisis (Bhandari et al., 2021). In addition to the experience of discrimination in India (Adhikari et al., 2022), migrants face challenges at cross-border points and experience being "betwixt and between" citizenship and exclusion. The vulnerability of Nepali labour migrants in India could be far worse because many of them do not possess the mandatory document, the Aadhaar card, needed to come within the framework of the state welfare system, in a situation where even the internal migrants of India who are already within that framework have faced severe challenges in finding permanent employment and accessing state facilities such as health care during the crisis.
In Nepal, despite acting proactively in the initial phase of the pandemic, the government struggled to cope with the high volume of returnee migrants from both air and land routes, which illustrated a lack of depth in its emergency planning policies. Reintegrating former migrants in the community, delivering health services, supporting labour market engagement or social assistance, and ensuring psycho-social support were major challenges for the government in the absence of a well-coordinated political and administrative structure. The public perception was that the political parties were more focused on political games than on public health and the economic resilience of the most vulnerable (The Kathmandu Post, 2020; People's Review, 2020). Insufficient planning, inadequate relief and assistance, inefficient bureaucracy and a lack of trust summed up the returnees' assessment of the governmental response (Republica, 2020b; Seddon, 2021).
The accounts of returnee labour migrants, as well as the responses of the authorities and the experiences of stakeholders, draw our attention to policy. The section below reviews existing plans, policies and gaps on reintegration.
Planning, policy and gaps
A great deal of attention has been afforded to ensuring that migrants from Nepal are best equipped to maximise their journey and that safeguards are implemented. This is largely because the remittances from migrant workers make a major contribution to the GDP of Nepal. The Foreign Employment Policy of 2012 asserts that would-be labour migrants be given adequate training and counselling before their departure for foreign employment (Department of Foreign Employment, Government of Nepal, 2012). Equally, it commits to liaise with the governments of destination countries to guarantee the basic rights of migrant workers, including their safety, security, access to public health, a safe working environment and holiday entitlements, together with provision for their safe return. The policy also envisions specific strategies for the reintegration of returnee migrants: for example, it stresses incorporating the skills and technical knowledge of the returnees in the local context, with a view to using their skills for the development of the nation by providing employment locally. To this end, the government has stated the need to develop packages for social and economic reintegration. Likewise, the policy emphasises the need to set up psychological counselling centres and rehabilitation centres through the Foreign Employment Welfare Fund created by the government under the Foreign Employment Board.
The Government has also set up a contribution-based insurance scheme for returnee migrants. Plans and policies on migration and return migration exist in general, but issues with the rights, safety and security of labour migrants in relation to other countries remain unresolved. Very recently, there has been a call for amendments to the Foreign Employment Policy, 2012, stating that diplomacy should be the major instrument of foreign employment in addressing the existing challenges (Himalayan News Service, 2022). However, given that Nepali migrant workers in India are not officially acknowledged or documented in Nepal, they do not have access to any of the privileges to which migrant workers in other destinations are entitled. In fact, there is to date no policy on the issues of migrant workers in India.
As an attempt to start decreasing unemployment, in February 2019 the government of Nepal launched the Prime Minister's Employment Programme (PMEP), with a view to guaranteeing 100 days of paid work to jobless Nepalis, including returnee migrants, and called for registration with supporting documents (Sapkota and Khadka, 2021). A total of 752,976 people registered as unemployed, of whom only 78,678 received employment, for a combined 843,042 days, about 11 days per person. Likewise, the Federal Government's budget of 2020 announced an ambitious mission of creating 700,000 jobs, of which 200,000 would be provided through PMEP (Thapa et al., 2020). The Provincial Government of Karnali launched the Chief Minister's Employment Programme (CMEP), modelled on PMEP, with the aim of ending poverty in the region (Katuwal, 2020). It planned to create 1,260,000 days of employment and pay 500 Nepali Rupees per person per day. The Government recruited Employment Coordinators (ECs) at the local level to identify unemployed people and provide employment. However, the ECs could not extend their services proactively to cover the large number of people. The residents of rural villages are reported to have experienced the greatest information barrier about how they could benefit from the programmes (Sapkota and Khadka, 2021). Equally, a complex and unfriendly bureaucratic procedure often demotivated rural people from accessing the assistance provided by the government (Baniya et al., 2020). To mitigate the economic problems caused by the COVID-19 pandemic and support people struggling with urgent financial need, the Government of Nepal created the COVID-19 Resilient Fund. However, a lack of coordination between the Provincial and Local Governments (Ojha, 2021) posed an added challenge to supporting returnees with relief funds and creating self-employment opportunities (Republica, 2020c). Consequently, the COVID-19 Resilient Fund and the Prime Minister's Employment Programme hardly reached the returnees in greatest need (Seddon, 2021). This indicates a gap at the structural level of government in extending services to the target population. Thus, in the absence of viable financial support and given the challenges of securing paid work, returnee labour migrants struggled to support their families and considered re-migration, particularly to India, despite the risks involved (Asian News International, 2020; Himalayan Times, 2020a).
Within Nepal, migration and return migration have been subjects of great concern for communities and stakeholders (Ministry of Labour, Employment and Social Security, 2020), but the measures needed to address the problems are often ignored by policy (Thieme and Ghimire, 2014; Ghimire, 2019). In many cases, the programmes that have been launched appear 'populist' in nature and are not sustainable; many are transitory, discontinuing along with the political regime that introduced them. There is also a general perception that only those with access to political power can benefit from these programmes. The International Organisation for Migration (IOM) Country Profile of Nepal (2019) shows that there are limitations in data and information on migration in Nepal, adding that these gaps may negatively affect the design of laws and policies and their effective implementation. The report further states that Nepal has not been benefiting from the skills of returnee migrants and recommends that congenial business, investment and employment opportunities be created for both male and female returnees. Equally, the report highlights the need to collect information and data on migration to India and on irregular migration, and to update them periodically. A policy brief by Governance Monitoring Centre Nepal (GMC) (2022), a Nepal-based organisation, points out that there was a lack of preparation on the part of the Government for emergency and crisis management in the wake of the COVID-19 pandemic. It also shows that there was a lack of cohesion among the departments of ministries in preparing, planning and implementing the programmes. Recent research by IOM (IOM, 2022) has highlighted the Government's achievements so far and the limitations and challenges of the migration policies of Nepal. Based on its findings, IOM has offered ways forward for the effective reintegration of returnee migrants at the structural, community and individual levels.
Discussion
For low-income Nepali people, labour migration to foreign countries has been a viable means of survival and a resourceful channel for economic activity, given that Nepal has not been able to generate enough employment opportunities within the country. But issues of health and wellbeing, social security, human rights and wages are often highlighted. The government of Nepal has framed plans and policies (e.g., the Migration Act 2007, Foreign Employment Rules 2008 and Reintegration Policy 2012), designed programmes and partnered with various agencies for safe migration and sustainable reintegration of migrant workers after their return (United Nations Nepal, 2021; IOM, 2022). However, it is recognised that there is a paucity of policy on how LMICs like Nepal address reintegration (Liu, 2015). In the context of Nepal, the policy gaps lie mainly at the execution level: information collection, documentation, implementation and intervention. In many respects the challenges of reintegrating returnee labour migrants in Nepal appear identical to those of ASEAN countries as documented in an ILO report (see Wickramasekara, 2019). The model of the Philippines, however, is often held up as successful among the countries of the region. The policy stakeholders of Nepal could take insight from this model (Himalayan News Service, 2022), in which the government bureaucracy contributes significantly from the pre-migration period through to reintegration upon return (Rocer, 2021).
Another crucial issue that poses tension for the authorities and migration stakeholders concerns the cross-border mobility of labour migrants from Nepal to India. Even though India has remained a viable labour market for a large number of low-skilled Nepali migrant workers, the issues related to these workers appear to be missing from policy. Work migration to India has not yet been acknowledged as foreign employment, and there is no official record of the migrants and their employers. A further policy gap is that there has been no bilateral agreement or policy amendment since the free mobility agreement of 1950 between the Governments of Nepal and India (Sharma and Thapa, 2013). It appears that neither country maintains an official record of how many Nepalis live and work in India. Hence the government of Nepal may need to redefine these mobilities and make them a policy priority to guarantee basic rights such as safety, security, access to health and other public provisioning for labour migrants living and working in India (ILO, 2020). The government may consider bringing these migrant workers under an official framework and under insurance schemes to ensure their social security, since they lack the provisions available to their fellow citizens in overseas employment. After moving to India, they are unlikely to enjoy economic and social security because the majority work in informal sectors where contracts, guaranteed wages and entitlements are often compromised (Mandal, 2020b). One of the main reasons migrants in India are deprived of basic provisions such as health care and social security is the absence of an Aadhaar card, the fundamental document required to qualify for such privileges (Sharma and Thapa, 2013). Consequently, a poor lifestyle, poor health and hygiene and the precarious nature of their work pose serious physical and mental health problems for migrants (Saraswati et al., 2015). The temporary settlements of Nepali migrants in many respects share common characteristics with those of migrants from other neighbouring countries in terms of lifestyle, work, health and hygiene, social integration and economic activity, as well as the reasons for return (ibid.). This indicates that the issues of cross-border mobility may be of a far more complex nature. Evidence shows a prevalence of psychological morbidity, distress and poor health among returnee migrants (Dhungana et al., 2019). All these factors are likely to affect their reintegration after returning home.
The intermittent cross-border (im)mobility of people amidst COVID-19 may offer policy stakeholders a retrospective lens through which to review plans and policies. The untimely lockdown, the lack of strategic measures and the toughening of border controls without prior notice were recurrent issues experienced by the people. A lack of both human and logistical resources also contributed to the spread of the virus. Most importantly, the psychological factors associated with COVID-19, and the stigma attached to it, had a major impact on the community and the wider society; there was a clear shortage of human resources for counselling during the pandemic. Another vital issue associated with risky mobility was the absence of opportunities for economic activity within the country (Le Coz and Newland, 2021). The Federal, Provincial and Local governments lacked coordination in disseminating relief funds, services and packages to those most affected (Thapa et al., 2020). The effectiveness of the COVID Resilient Fund, the Prime Minister's Employment Programme and the Chief Minister's Employment Programme was called into question, as these programmes could not reach the needy in time (Adhikari et al., 2022). Equally, the COVID Crisis Management Committee did not appear proactive enough in mobilising resources to mitigate the situation. Despite the involvement of various non-government agencies alongside government agencies in delivering assistance and services to the vulnerable population, the situation could not be brought under control in time owing to a lack of coordination within the government bureaucracy. The accounts above thus help us understand how the mobilities of people took place during COVID-19 under an (un)official framework.
In order to address these long-standing issues, it is high time the government took the initiative at the diplomatic level to facilitate employment in formal sectors and guarantee financial and social security. Even though this may appear challenging with respect to low-skilled migrant workers, it could benefit the migrants as well as the governments of both nations in the long run. It is also suggested that the government of India consider effective management of migrant workers to ensure that their lives are not further compromised and that they are not discriminated against or exploited (Weeraratne, 2020). The long-standing problems of these mobilities demand that the government of Nepal design a reintegration policy based on the returnee population, their skills and their potential enrolment in the local job market. In addition, maintaining records of migrants would help the government make contingency plans in times of crisis like COVID-19, mitigating the pressure on health services, financial support and employment (IOM, 2022). Together, the government could take insights from effective reintegration models such as those of Kerala, India and the Philippines, as mentioned in the section above. Return and reintegration thus demand strong policy attention, especially in the case of short-term and temporary migration, to keep returnee migrants from facing multiple challenges in the absence of a specific framework of services (Wickramasekara, 2019).
Effective implementation of the reintegration tasks envisioned in the existing policies of Nepal will demand strong commitment and willpower from the authorities and efficient coordination between and among the departments of the respective ministries. Equally, it is pivotal that there be mutual understanding and agreement on the shared responsibilities of the Federal, Provincial and Local governments. To reiterate, migration to India may need to be redefined and brought within the framework of foreign employment for the increased security of the migrant workers. Regarding reintegration, the Foreign Employment Policy document of 2012 (Department of Foreign Employment, Government of Nepal, 2012) itself admits that many issues remain unaddressed and that challenges of reintegration persist; these issues have now remained unaddressed for over a decade. Consequently, the health and wellbeing, financial stability and social security of the returning population have long been jeopardised, and this has been heightened in the context of COVID-19. In the absence of policy intervention towards a welfare state with a well-coordinated state mechanism, the destitution of these people is most likely to continue. | 9,942.8 | 2023-07-14T00:00:00.000 | [
"Sociology",
"Political Science",
"Economics",
"Medicine"
] |
Three-dimensional mapping of mechanical activation patterns, contractile dyssynchrony and dyscoordination by two-dimensional strain echocardiography: Rationale and design of a novel software toolbox
Background Dyssynchrony of myocardial deformation is usually described in terms of variability only (e.g. standard deviations, SDs). A description in terms of the spatio-temporal distribution pattern (vector analysis) of dyssynchrony, or by indices estimating its impact by expressing dyscoordination of shortening in relation to the global ventricular shortening, may be preferential. Strain echocardiography by speckle tracking is a new non-invasive, albeit 2-D, imaging modality to study myocardial deformation. Methods A post-processing toolbox was designed to incorporate local, speckle tracking-derived deformation data into a 36-segment 3-D model of the left ventricle. Global left ventricular shortening, standard deviations and vectors of timing of shortening were calculated. The impact of dyssynchrony was estimated by comparing the end-systolic values with either early peak values only (early shortening reserve, ESR) or with all peak values (virtual shortening reserve, VSR), and by the internal strain fraction (ISF), expressing dyscoordination as the fraction of deformation lost internally due to simultaneous shortening and stretching. These dyssynchrony parameters were compared in 8 volunteers (NL), 8 patients with Wolff-Parkinson-White syndrome (WPW), and 7 patients before (LBBB) and after cardiac resynchronization therapy (CRT). Results Dyssynchrony indices merely based on variability failed to detect differences between WPW and NL and failed to demonstrate the effect of CRT. Only the 3-D vector of onset of shortening could distinguish WPW from NL, while at peak shortening and by VSR, ESR and ISF no differences were found. All tested dyssynchrony parameters yielded higher values in LBBB compared to both NL and WPW. CRT reduced the spatial divergence of shortening (both vector magnitude and direction), and improved global ventricular shortening along with reductions in ESR and in dyscoordination of shortening expressed by ISF. Conclusion Incorporation of local 2-D echocardiographic deformation data into a 3-D model by dedicated software allows a comprehensive analysis of spatio-temporal distribution patterns of myocardial dyssynchrony, of the global left ventricular deformation and of newer indices that may better reflect myocardial dyscoordination and/or impaired ventricular contractile efficiency. The potential value of such an analysis is highlighted in two dyssynchronous pathologies that impose particular challenges to deformation imaging.
Background
The deleterious effects of an altered electrical activation on ventricular mechanical function were first recognized some 40 years ago but have gained important scientific interest only in more recent years [1]. Since then, it has become clear that important disparities exist between electrical dyssynchrony and its mechanical consequences. The physiology behind these disparities is complex; it encompasses non-linear relationships between electrical and mechanical activation times [2][3][4] and involves an intricate interplay between loco-regional differences in wall stress, workload and contractility [5][6][7][8]. Electrical dyssynchrony can thereby induce a variable degree of unbalanced myocardial forces. Spatial differences in forces provoke spatial heterogeneities in timing and amplitude of myocardial deformation and also give way to segmental interactions within the heart (back-and-forth shortening and stretching between different regions) [7,[9][10][11]. By this mechanism, part of the total deformation work is dissipated into internal interaction work instead of being externalized into stroke work. Multiple well-controlled studies have indicated that it is this heterogeneity of wall stress and deformation that determines both the functional impairment and the remodelling observed in the dyssynchronous ventricle [2,[6][7][8]10,[12][13][14]. Moreover, the benefits of cardiac resynchronization therapy (CRT) have been shown to be directly proportional to the reduction in deformation heterogeneity and dyscoordination [3,14,15]. Finally, the spatial organization of dyssynchrony, random versus organized, has been suggested to determine the chances of successful resynchronization, and its pattern is considered important in choosing the most appropriate pacing site [3,15]. Therefore, myocardial deformation plays a pivotal role in the physiology of dyssynchrony and resynchronization. Nevertheless, a considerable gap persists between the experimental knowledge obtained from animal experiments and the complex physiology of dyssynchrony and response to therapy in human pathologies. Hence, for a proper evaluation of dyssynchrony in human subjects, both regional and global deformation have to be assessed by accurate techniques and with appropriate analysis methods. In the present work, we describe a novel software toolbox designed to improve the echocardiographic assessment of the above-mentioned physiological aspects of dyssynchrony of deformation, we illustrate how this can provide new data in two challenging patient groups, and we discuss potential advantages and limitations of different approaches to quantify dyssynchrony.
Patients and volunteers
Healthy controls (NL; n = 8) were included, after providing written informed consent, if they had a normal resting electrocardiogram and no cardiovascular disease or medication. Eight patients with Wolff-Parkinson-White syndrome (WPW), admitted for radio-frequency ablation of the accessory pathway, underwent echocardiography the day before the procedure to rule out underlying structural abnormalities. All provided written informed consent. Seven patients with drug-refractory NYHA class III heart failure (EF 18.8 ± 4.8%), widened QRS (179 ± 28 ms) and left bundle branch block (LBBB), of whom 3 with an ischemic aetiology, underwent an extensive echocardiographic examination as part of the routine clinical workup, an average of 45 ± 49 days before CRT. The exam was repeated before discharge, 2.5 ± 2 days after device implantation. The execution of the study conformed to the local Medical Ethics Committee policy and to the principles outlined in the Declaration of Helsinki on research in human subjects.
Echocardiography acquisition
Echocardiography was performed on a GE Vingmed Vivid 7 scanner (GE Vingmed Ultrasound, Horten, Norway). Small-angle, single-wall, B-mode recordings of the septal, anteroseptal, anterior, lateral, posterior and inferior walls were performed from 3 standard apical imaging planes at 51 to 109 frames per second [16]. From the Doppler recordings of mitral inflow and left ventricular outflow, the duration of the RR-interval, the timing of mitral valve opening (MVO) and closure (MVC), the onset of the atrial flow wave (AWO), and aortic valve opening (AVO) and closure (AVC) were measured with respect to the onset of the QRS to serve as "reference timing events". Two-dimensional longitudinal and transverse strain and strain-rate curves were processed off-line using commercially available speckle-tracking software (GE, EchoPAC version 6.0.1). For each wall, six samples were evenly distributed from base to apex, providing a 36-segment model of the left ventricle. Spatial smoothing was set at half of the software default value and the onset of the ECG was taken as the zero reference point. The obtained traces were transferred as text files to a personal computer for post-processing in custom-made software (STOUT: Speckle tracking Toolbox Utrecht) programmed in Matlab (The MathWorks Inc., Natick, USA). The exported text files contained information on the wall under investigation within the filename, and the EchoPac software automatically generated a header to the numeric data encoding the type of parameter (velocity, strain, strain-rate, etc.), its direction (longitudinal, transverse, etc.), the time of the zero reference point at onset and end of the cycle (defining the R-R interval), and a colour coding for the six levels.
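For downstream processing outside the clinical software, traces like these can be read with a few lines of code. The Python sketch below is an illustration only: the exact header layout of the EchoPAC export is not reproduced in this paper, so the assumed conventions (comment lines starting with '#', first column time, remaining columns the six base-to-apex levels of one wall, wall name encoded in the filename) are placeholders.

import numpy as np
from pathlib import Path

def read_wall_trace(path):
    # Read one exported single-wall strain file (illustrative format).
    # Header lines ('#key = value') carry parameter type and timing info;
    # data rows are whitespace-separated: time (ms) + six level traces.
    header, rows = {}, []
    for line in Path(path).read_text().splitlines():
        if line.startswith("#"):
            key, _, val = line[1:].partition("=")
            header[key.strip()] = val.strip()
        elif line.strip():
            rows.append([float(x) for x in line.split()])
    data = np.asarray(rows)
    wall = Path(path).stem  # wall name assumed to be encoded in the filename
    return wall, header, data[:, 0], data[:, 1:]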
Data post-processing in STOUT
In STOUT, the information embedded in the imported files regarding type of parameter, wall segment, beginning of the QRS and duration of the cardiac cycle is automatically decoded file by file. The "reference timing events" are imported manually once, after which adjustment for unequal frame rates and RR-intervals is automatically performed within each imported file by interpolating the data to 1 ms and by re-sampling based on the empirical observation that the systolic period lengthens by about 33% when the total RR duration doubles [17]; see Additional file 1: Algorithm for RR-normalization. These RR-normalized and interpolated data are subsequently fitted to a simple 36-segment 3-D model assuming all walls to have similar length and a rotational orientation of 60° between the imaging planes [18]. The integration of spatial and continuous temporal information permits displaying the data as a series of bulls-eyes (figure 1) and two-dimensional M-mode maps (figure 2), as well as a 4-dimensional projection of the data on a conical cast. By normalizing the individual curves to the reference RR, all data can be summed and averaged to yield a "global" or "netto" curve, representing the externalized motion or deformation of the ventricle in as far as the dataset is complete (figure 3).
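Additional file 1 gives the actual RR-normalization algorithm; the following Python sketch only illustrates the idea. It interpolates one trace to 1 ms and applies a simple two-segment warp in which systole and diastole are stretched independently; the linear interpolation of the "33% when RR doubles" rule and the use of AVC as the systole/diastole boundary are assumptions made for this example.

import numpy as np

def systole_duration(sys_meas, rr_meas, rr_ref):
    # empirical rule [17]: systole lengthens by ~33% when RR doubles,
    # here interpolated linearly in RR (an assumption)
    return sys_meas * (1.0 + 0.33 * (rr_ref - rr_meas) / rr_meas)

def rr_normalize(t, y, rr_meas, rr_ref, avc_meas):
    # 1 ms interpolation of the raw trace
    t_ms = np.arange(0.0, rr_meas, 1.0)
    y_ms = np.interp(t_ms, t, y)
    avc_ref = systole_duration(avc_meas, rr_meas, rr_ref)
    # piecewise-linear mapping of measured time onto reference time:
    # systolic part (0..AVC) and diastolic part (AVC..RR) scale separately
    warp = np.where(
        t_ms <= avc_meas,
        t_ms * (avc_ref / avc_meas),
        avc_ref + (t_ms - avc_meas) * ((rr_ref - avc_ref) / (rr_meas - avc_meas)),
    )
    t_ref = np.arange(0.0, rr_ref, 1.0)
    return t_ref, np.interp(t_ref, warp, y_ms)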
To make the analysis more time-efficient, STOUT has an automated search algorithm for the identification of onsets, peaks and end-systolic values of motion and deformation. All data can be manually edited if needed. The vector algorithm proposed by Zwanenburg et al., allowing estimation of 3-D vectors also in case of missing values, is implemented in the software and is automatically calculated for the operator-approved onsets and peaks [19]. At the end of the analysis, all results are automatically exported to an Excel spreadsheet.
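The vector algorithm of Zwanenburg et al. [19] is implemented in full in STOUT; the short Python sketch below conveys only the underlying idea for the horizontal (short-axis) plane, assuming the six walls are assigned directions at 60° increments. The handling of missing values here (simply skipping NaN walls) is a simplification of the published method.

import numpy as np

def timing_vector(wall_times):
    # wall_times: mean onset (or peak) times of the 6 walls in a fixed
    # order, e.g. sept, antsept, ant, lat, post, inf (assumed ordering)
    angles = np.deg2rad(np.arange(0, 360, 60))
    t = np.asarray(wall_times, dtype=float)
    ok = ~np.isnan(t)
    vx = np.sum(t[ok] * np.cos(angles[ok])) / ok.sum()
    vy = np.sum(t[ok] * np.sin(angles[ok])) / ok.sum()
    # magnitude (ms) quantifies spatial divergence of timing;
    # direction (degrees) points towards the latest-activated region
    return np.hypot(vx, vy), np.degrees(np.arctan2(vy, vx))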
A new index estimating the impact of segmental interaction and expressing dyssynergy (i.e. dyscoordination/opposing strain work) is implemented in the software. The calculation of the internal strain fraction (ISF) is based on the directional changes of the strain, i.e. the slopes of all strain curves, which for each time span are ranked into a group of shortening and a group of lengthening/thickening strain slopes [20]. The absolute values of all these slopes for that particular time span (the actual values of the strain-rate) are summed within the two groups and plotted over time as a positive strain-rate group and a negative strain-rate group, which are integrated over time to yield total positive and total negative strain (figure 4). ISF represents their relative fraction for the desired period within the cardiac cycle; see Additional file 2: Algorithm for ISF and vector of paradoxical strain-rate behavior (PSrV).
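A minimal Python sketch of this computation is given below. The sign convention (shortening = negative strain-rate) and the normalization of ISF as the opposing fraction of the total are our reading of the description above; the exact algorithm is given in Additional file 2.

import numpy as np

def isf_and_gejs(strain, t, t_avo, t_avc):
    # strain: (n_segments, n_samples) in %; t: sample times in ms
    sr = np.gradient(strain, t, axis=1)                   # strain-rate per segment
    win = (t >= t_avo) & (t <= t_avc)                     # ejection period AVO..AVC
    dt = np.gradient(t)[win]
    pos = np.where(sr > 0, sr, 0.0)[:, win].sum(axis=0)   # lengthening group
    neg = np.where(sr < 0, sr, 0.0)[:, win].sum(axis=0)   # shortening group
    total_pos = np.sum(pos * dt)                          # total positive strain
    total_neg = -np.sum(neg * dt)                         # |total negative strain|
    isf = total_pos / (total_pos + total_neg)             # assumed normalization
    gejs = total_pos - total_neg                          # |pos| - |neg|; negative = net shortening
    return isf, gejs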
Definitions and data analysis
Mechanical activation time was defined as the time of onset of shortening, and throughout the article we will use mechanical activation for the onset of shortening [10]. The temporal variability of mechanical activation was then expressed by the standard deviation of shortening onset times (SDot) and its spatio-temporal distribution width by the vector magnitude of onset times in the horizontal plane (VMot). In analogy, the temporal variability (= standard deviation) and the spatio-temporal distribution width (= vector magnitude) were also calculated for the time to the (first) peak of shortening (SDpt and VMpt, respectively). These indices express dyssynchrony based on timing issues only. The coefficient of variation of end-systolic strains (CVeS) was calculated to express the effects of dyssynchrony in terms of heterogeneity in strain amplitudes at end-ejection [13]. The inefficiency caused by deviation of the peak shortening from its ideal timing at AVC was estimated by comparing the peak deformation values with the end-systolic values in two ways. According to the first approach, the impact of dyssynchrony on global ventricular function (= degree of inefficiency) was expressed by estimating the improvement in global end-systolic shortening if all peak shortening were to occur at end-systole. As this represents a virtual resynchronization towards AVC, it was denominated the virtual shortening reserve (VSR):

VSR = [(mean of peak strains − mean of end-systolic strains) / mean of peak strains] × 100%

[Figure 1 caption: Shortening and stretching patterns in a patient with LBBB (left) and a normal individual (right), represented by a series of colour-coded bulls-eyes of deformation-rate at 25 time points throughout the entire cardiac cycle for each of the 36 segments.]

[Figure 2 caption: Shortening and stretching patterns in a patient with LBBB (left) and a normal individual (right), represented by a two-dimensional M-mode map of the same data as in figure 1: the temporal information can now be continuously plotted over time (left to right within each plot). Vertical lines represent event timing markers. Spatial representation is less optimal: each of the 6 walls is plotted separately (from top to bottom), with each level plotted from base (top) to apex (bottom) within the separate plots. As in figure 1, shortening and stretching are markedly inhomogeneous in LBBB (left) compared to NL (right).]
[Figure 3 caption: Global strain plot in a patient with LBBB (left) and in a normal individual.]
For the second approach, a distinction was made between premature (peak shortening before AVC) and post-systolic shortening (peak at or after AVC). Only the inefficiency caused by premature shortening was considered amenable to resynchronization, while post-systolic shortening was not considered to represent recruitable shortening (figure 5). Hence the early shortening reserve (ESR) was used to estimate the amount of potentially amenable dyssynchrony by performing a virtual resynchronization of early shortening only:

ESR = [((mean of premature peaks + mean of end-systolic strains of post-systolic peaks) − mean end-systolic value of all peaks) / (mean of premature peaks + mean of end-systolic strains of post-systolic peaks)] × 100%

The internal strain fraction (ISF) was used to express the impact of segmental interaction on ventricular function. ISF was calculated for the ejection period, defined in this study as the time between AVO and AVC. To describe the global left ventricular shortening during ejection, the global ejecting strain (GejS) was determined automatically from the internal strain-rate plot:

GejS = |total positive strain| − |total negative strain| between AVO and AVC
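Translated directly from the two formulas above, a compact Python sketch might read as follows. Strains are passed as positive shortening magnitudes, and the ESR numerator is read as the mean over the combined set of recruitable values; both are interpretation choices, since the printed formula leaves this ambiguous.

import numpy as np

def vsr(peak, endsys):
    # virtual shortening reserve: virtual resynchronization of all peaks to AVC
    return (np.mean(peak) - np.mean(endsys)) / np.mean(peak) * 100.0

def esr(peak, endsys, t_peak, t_avc):
    # early shortening reserve: only premature peaks (before AVC) are treated
    # as recruitable; post-systolic peaks enter with their end-systolic value
    early = np.asarray(t_peak) < t_avc
    recruitable = np.where(early, peak, endsys)
    return (np.mean(recruitable) - np.mean(endsys)) / np.mean(recruitable) * 100.0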
The site of earliest electrical activation was searched for by identifying the site of earliest mechanical activation within the ventricle, as well as the origin of the vector of mechanical activation time, by an observer blinded to the electrophysiological procedure (figure 4). In patients with WPW syndrome, we compared the results with the localization of the bundle defined by electrophysiological mapping [21], and in patients following CRT with the left ventricular lead position (all V-V ≤ 0 ms).
Statistical analysis
Data between NL, WPW and LBBB at baseline were compared by ANOVA with Bonferroni correction for multiple comparisons. Dyssynchrony of mechanical activation (SDot, VMot) was compared to the corresponding dyssynchrony of peak deformation (SDpt, VMpt) by paired t-test in each patient group. The effect of CRT was also studied by comparison of pre-CRT values (LBBB) with post-CRT values (CRT) by paired-samples t-test. A p-value of < 0.05 was considered statistically significant. Agreement between the first mechanically activated site and the localization of the extra bundle (WPW) or left ventricular lead (BiV) was described per circumferential segment.

[Figure caption: ISF-plot with timing markers in a patient before (A) and after (B) CRT.]
Results
Post-processing of the longitudinal deformation data imported in STOUT required 5 minutes on average for calculation and checking/editing of SDot, SDpt, VMot, VMpt and CVeS and for calculation and plotting of ISF.
Indices of (the impact of) deformation heterogeneity and/or dyssynergy
Table 2 shows the baseline differences in CVeS, VSR, ESR and ISF. All parameters of deformation heterogeneity (CVeS, VSR, ESR) and dyssynergy (ISF) were highly abnormal in LBBB, compared to NL as well as to WPW. None of the parameters reached statistical difference when comparing NL and WPW. Observation of the ISF-plots identified pre-excitation-induced dyssynergy of deformation shortly after the QRS, and mostly before AVO rather than during ventricular ejection in WPW, compatible with local abnormal early shortening during late passive filling and isovolumic contraction (figure 7 and Additional file 3: figure showing PSrV plot in NL and WPW).
[Figure 5 caption: Rationale for the use of ESR. Representative deformation traces of the septal (green) and lateral wall (red) and global ventricular deformation (light grey), obtained by MR-tagging before (A) and 8 weeks after the induction of left bundle branch block (B) in a dog, with shortening in % on the Y-axis (data from reference 14). AVC = time of aortic valve closure; # and the dashed line from AVC to the Y-axis denote the end-systolic value; * denotes the peak value of deformation. From A to B: LBBB induces a marked reduction in septal strain peak amplitude (green *) and in particular in the end-systolic value (green #). The peak deformation amplitude of the lateral wall (red *) occurs after AVC and has increased (-11.0% to -13%), but the end-systolic value has changed less. This means that a hypothetically perfect resynchronization (backwards from B to A) would consist of a relative increase in septal deformation with little change in the lateral contribution in this period. Hence, to estimate how much function can improve by resynchronization, ESR only takes differences between peak and end-systolic values of early shortening into account (i.e. green * and green #).]
Effects of resynchronization
In this small sample of patients, a resynchronization effect of biventricular pacing was not demonstrable when dyssynchrony was expressed merely in terms of variability of deformation timing (SDot, SDpt) or end-systolic amplitudes (CVeS). However, biventricular pacing markedly diminished VMot and VMpt, indicating a decreased spatial divergence of deformation timing (Table 3). A more detailed analysis of the vector data also indicated that biventricular pacing, with a mean V-V interval of -43 ± 37 ms, had inverted the septal-to-lateral mechanical activation delay vector to a small lateral-to-septal delay (121.8 ± 50.0 ms to -30 ± 44.8 ms, p < 0.0001). Analysis by ISF suggested that the overall coordination of shortening had improved, and this improved coordination during ejection was paralleled by an improvement in ejection performance (GejS) (Table 3).
Site of ventricular pre-excitation in WPW and LV-first pacing
The presence of a bypass with electrical pre-excitation somewhere at the ventricular base, instead of around the apical anterior and septal breakthrough site of the normal ventricle, inverted the mechanical activation vector from an apex-to-base pattern (apex-base vector component NL: 38 ± 30 ms) to a base-to-apex mechanical activation gradient (apex-base vector component WPW: -19 ± 21 ms, p = 0.015). The origin of the mechanical activation vector in the horizontal plane was variable and correctly indicated the site of the bundle in 7 out of 8 patients (figure 6). The site of earliest mechanical activation matched the site of the bypass in 6 of the 8 patients. The same methodology identified the site of left ventricular pacing in 5 out of 7 cases (both for the vector and for earliest mechanical activation). In all cases of incorrect echocardiographic diagnosis, the site of electrical pre-excitation was located in the adjacent segment.
Discussion
In the current work, we illustrate how two-dimensional deformation data obtained from echocardiography can be reconstructed into a 3-D model of the left ventricle in order to enable a more comprehensive description of mechanical activation and deformation dyssynchrony. Such an approach enables mapping of the spatio-temporal distribution characteristics of dyssynchrony and allows the implementation of newer indices aimed at estimating the impact of dyssynchrony on global ventricular performance. Using a model of mild dyssynchrony (WPW), severe dyssynchrony (LBBB) and an intervention on the dyssynchronous substrate (CRT), differences between and potential advantages of certain approaches are further discussed.
Differences between approaches to express dyssynchrony and dyscoordination
The most widely used method to describe dyssynchrony consists of measuring differences in timing of onsets and/or peaks of deformation throughout the ventricle [10,22,23]. However, multiple shortening waves are very common in the dyssynchronous ventricle, making this method vulnerable to noise and rendering uniform definitions of "onsets" and "peaks" more cumbersome [10,22,23]. When spatial information is encoded in the data set, delays within the ventricle can also be expressed in terms of their spatio-temporal distribution patterns, e.g. by vector analysis [10,19]. This approach may be preferable as it offers additional data on the organizational pattern of deformation and makes the analysis less vulnerable to accidental outliers or random noise in the measurements. In the present study, the additional value of vector analysis became apparent in the WPW ventricle: a distinctly different spatio-temporal pattern of mechanical activation compared to the normal ventricle could be demonstrated, while in this small group no differences were detectable in the variability of mechanical activation. Moreover, in the patients with left bundle branch block, CRT had a stronger effect on the vector magnitude than on the standard deviation of peak shortening timing [10].
Myocardial dyssynchrony induces heterogeneity not only of deformation timing but also of deformation amplitudes. Because global ventricular function relates to global deformation [24,25], and global deformation in turn depends on the deformation magnitude in the individual wall segments as well as on the coordination (synergy) between them, timing alone does not necessarily reflect the impact of dyssynchrony [13]. However, not only dyssynchrony but also regional ischemia or scarring can affect end-systolic strain variance [26]. Accordingly, in a recent study involving ischemic and non-ischemic patients, this parameter seemed less valuable [27].
VSR represents a novel way to estimate the impact of dyssynchrony on global function. By "weighing" the observed end-systolic strain against the peak strain, it may somewhat compensate for the aforementioned shortcoming of CVeS: the VSR value is insensitive to contractile behaviour that peaks on time but is hypokinetic. Nevertheless, in the present study neither CVeS nor VSR significantly changed upon resynchronization (see next paragraph: rationale for ESR).
ISF is another new approach to estimate the impact of dyssynchrony, reflecting the part of the total deformation that is lost internally due to simultaneous shortening and stretching [20]. ISF is a dyssynergy (= dyscoordination) rather than a dyssynchrony marker, since it regards "synchrony of contraction" as "simultaneous shortening or lengthening in all parts of the ventricle". When different wall segments are deforming in phase with each other, there is synergy and ISF will be zero, regardless of differences in velocity and extent of deformation. However, when some wall segments are deforming out of phase, the velocity and extent of their abnormal deformation do determine ISF. Hence, ISF is independent of the choice of peaks while remaining sensitive to strain amplitude differences of dyssynchronous segments. Preliminary results with ISF of circumferential shortening obtained by MR-T suggest this index of segmental interaction to be better related to long-term remodelling than timing parameters alone [20]. Of interest, the present study indicates that CRT improves global ventricular function (GejS, ejection fraction) by a reduction of ISF, i.e. by a conversion of internal into external shortening. The facts that spatial distribution patterns cannot be deduced from ISF and that its value can be affected by random noise may represent limitations; with a small adaptation of the algorithm, however, 3-D vectors of out-of-phase or paradoxical strain behaviour can be calculated throughout the cardiac cycle (see Additional file 2: Algorithm for ISF and vector of paradoxical strain-rate behavior (PSrV), and Additional file 3: additional figure showing PSrV plot in NL and WPW). This fell beyond the scope of the present work.
The advantages of the proposed method in comparison with other techniques are summarized in the table attached in the appendix of this document (Additional file 4: Table 4: Comparison of commonly used echocardiographic techniques/indices to evaluate mechanical dyssynchrony with STOUT indices).
Dyssynchrony analysis by myocardial deformation: unmet challenges
A key issue in the treatment of mechanical dyssynchrony is that electrical therapies, like CRT, can only amend electrical dyssynchrony [28]. Unfortunately, heterogeneity of deformation and mechanical dyssynchrony are not always caused by electrical dyssynchrony [29]. An imbalance in active and passive forces, causing deformation heterogeneity, can also occur in the absence of electrical activation delays [26]. One of the true challenges for deformation imaging therefore lies in the distinction between mechanical dyssynchrony based on electrical dyssynchrony and that based on other local conditions [26,30,31]. In (local) pathologies such as ischemia, for example, delayed and post-systolic shortening is a passive phenomenon of recoil rather than an expression of amendable dyssynchrony, and premature shortening may represent a more specific marker. In addition, the relative amplitude changes in premature and delayed segments seen in animal experiments of acutely induced left bundle branch block suggest that post-systolic shortening in general may not represent shortening that can be recruited towards the end of the ejection period (see figure 5). Excluding post-systolic shortening from the analysis by confining measurements to the ejection period (e.g. by ISF), or by disregarding post-systolic peaks as in the calculation of ESR, may therefore improve the estimation of truly recoverable dyssynchrony. In accordance with the latter hypothesis, VSR was not significantly changed by CRT in the present study, while ESR was significantly reduced.
Modelling of deformation: comparison with previous work
Myocardial deformation or strain can reliably be measured in vivo by magnetic resonance tagging (MR-T) imaging [24,32]. This technique has also been applied in animal models of dyssynchrony and resynchronization, instigating the development of highly effective therapies like CRT [3,7,10,14,15]. However, in humans MR-T has practical constraints, and not all human pathology can be accurately represented in animal models. Strain echocardiography by speckle tracking is a valuable alternative [16,33]. However, particularly in spherically dilated, thin-walled and hypokinetic ventricles, the temporal and spatial resolution of echocardiography and the signal-to-noise ratio of speckle tracking are challenged. Frame rate, focus position and sector width can be adapted to optimize the ultrasound beam density and image quality in order to improve the reliability of speckle tracking [16]. We therefore designed the current software in such a way that segmental data from single-wall recordings can be imported separately if needed.
It has been recognized previously that the assumptions and algorithms used for data interpolation and incorporation into a 3-D model can alleviate but also introduce sources of error [18]. However, all current deformation imaging modalities depend on reconstruction techniques and all are particularly vulnerable to grossly irregular heart rates. Because the exact spatial location, orientation and geometry are known when MR-T is used, true 3-dimensional MR-T data sets can be reconstructed. This is possible neither with the current nor with a previously proposed echocardiographic methodology [18]. With 3-D based speckle tracking software soon becoming available, the latter problem might be solved in the near future. Nevertheless, and in spite of using longitudinal instead of circumferential deformation, our ISF, dispersion and vector data on mechanical activation and dyssynchrony closely resemble the published MR-T data in normal individuals and in patients with LBBB [13,19,20,34].
Echocardiographic strain analysis can also be applied in humans with contraindications to MR-T, such as following CRT. This has offered unique data on the effects of CRT in humans in the current and in previous studies [35]. Finally, 2-DSE can measure deformation throughout the entire cardiac cycle and is thus independent of QRS triggering or fading of the taglines in diastole. This offers new opportunities; in the current work this is illustrated by providing the first preliminary data on mechanical activation vectors and dyssynchrony in WPW patients.
Limitations
The presented echocardiographic approach remains time-consuming and laborious, in particular because of the care taken to obtain high-quality single-wall recordings, the need for meticulous registration of the timing events and the subsequent off-line calculation of deformation by the EchoPac software for each of the wall segments individually. Once all files are transferred to STOUT, however, little extra time is spent on the actual analysis of the traces and on making the dyssynchrony results available for statistical analysis. Another drawback of the methodology is that many of the presented indices will offer valuable information only when image quality is sufficient to provide robust deformation results covering most of the ventricle.
In clinical practice, this can be problematic even when attempting to optimize quality by a single-wall approach. The presented image acquisition and post-processing approach might therefore better serve research purposes than clinical practice, but we expect newly gained insight to generate simpler methods for routine practice. One such clinically more feasible method to predict response to CRT, for example, might be the calculation of ESR derived from the septum only, as previous work and the present study indicate that in LBBB the septal segments generally are the earliest (vector of peak time), display most stretching towards end-systole [3,7,10,14,15] and thereby likely contribute most to the ESR value.
Myocardial deformation is a complex three-dimensional event, and differences in synchrony and synergy between the main axes of deformation have been suggested [36]. In the present study we reported only on longitudinal deformation parameters. Transverse data from the same long-axis images and at the same locations can be processed in STOUT, as can circumferential and radial data. Although this allows a direct comparison, such a study fell beyond the scope of the present work.
In the present study we primarily intended to highlight the differences (in strength) between the individual dyssynchrony indices and to point out some physiological aspects that have to be taken into consideration when expressing dyssynchrony and dyscoordination. Only a limited number of patients were therefore included. It is important to recognize that in recent literature many new technologies and dyssynchrony indices have been put forward, in some cases without providing either the pathophysiological rationale for their use or a standardized methodology. In particular in the field of cardiac resynchronization therapy, many of them have entered the clinical arena long before being properly evaluated in multi-centre trials against simpler and more user-friendly methods. Each new method should therefore be scrutinized regarding its rationale and tested for its feasibility and reliability in the real world. This is no different for the currently proposed indices; whether the higher sensitivity of a vector-, ISF- and/or ESR-based approach found in this study translates into a superior clinical yield remains to be established in larger, prospective studies.
Conclusion
Ample experimental data and sound physiologic principles support the use of deformation imaging in the study of the nature and the impact of mechanical dyssynchrony. A dedicated software toolbox was designed to reconstruct myocardial deformation data obtained by 2-D speckle tracking echocardiography into a simple 3-D model of global ventricular deformation. This allowed the calculation of 3-D vectors of mechanical activation and of global left ventricular deformation. The software was also designed to allow the implementation of newer indices better reflecting important pathophysiological aspects of myocardial dyscoordination and impaired ventricular contractile efficiency. A comprehensive description of the spatio-temporal characteristics and of the impact of dyssynchrony of myocardial deformation by echocardiography might prove helpful in particular in pathologies in which magnetic resonance imaging has practical constraints, such as WPW and following CRT. | 6,924.4 | 2008-05-30T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Optimization Calculation and Analysis of Moving Load of the Railgun by Newton Method
This paper explores the influence of the drive current on the projectile launch speed, and of rail spacing and width on the maximum value of the current, in an electromagnetic railgun. The equation of motion of the plasma armature is analysed by the Newton method. Considering an armature subject to plasma viscous drag and inertial drag, an optimization model relating armature speed to drive current is built. The results show that the projectile launch speed reaches its maximum when the drive current takes a specific value. Moreover, the influences of rail spacing and width on the maximum current and the projectile launch speed are demonstrated. The optimization model built with the Newton method is therefore of great significance for research on electromagnetic railguns.
Introduction
The electromagnetic launcher uses the Ampère force to accelerate a driving projectile and by now has a history of about a century. With electromagnetic forces pushing the projectile, very high accelerations can be reached. For example, Dr Rashleigh and Dr Barber at the Australian National University used a generator and a plasma armature to accelerate a 3 g polycarbonate projectile to 5.9 km/s in 1978. The Lawrence Livermore National Laboratory and the Los Alamos National Laboratory once cooperated to accelerate a 2.2 g projectile to a hypervelocity of 10 km/s. The Institute of Fluid Physics of the China Academy of Engineering Physics built China's first electromagnetic rail launcher, which can accelerate a 0.34 g projectile to 16.8 km/s. By contrast, the muzzle velocity of a conventional cannon is only about 2 km/s, so close to the physical limit that the range cannot be extended much further, whereas the thrust of the electromagnetic railgun is ten times that of a traditional launcher. The projectile can be accelerated to several kilometres per second, or even tens of kilometres per second, and the huge kinetic energy it acquires greatly enhances the range and power of the weapon (Liu Wen and Li Min, 2010). Studies on the electromagnetic gun continue in different areas. Regarding models of projectile velocity, Parker considers that the speed of an electromagnetic railgun is related to chamber-wall ablation and the resulting increase in plasma mass, while Ray introduced a speed-dependent resisting force from the standpoint of motion resistance. In this paper, combining the methods of Parker and Ray, a model of the forces on and speed of the armature is put forward and optimized by the Newton method.
Constructing Model
The armature pushes the projectile to high speed under the electromagnetic force, so the choice of armature is an important link in the electromagnetic gun system. Currently, three forms of armature are used in electromagnetic railguns: the solid armature, the plasma armature and the composite armature, as shown in Fig. 1. The solid armature is simple in design and has very small ablation, viscous force and low resistance; its heat is mainly dissipated inside the armature, but its mass is relatively large and the achievable projectile velocity is low. The plasma armature has a smaller mass and maintains good contact with the rails at higher speed, so it can reach higher velocities; its defects are a higher electrical resistance and the possibility of a secondary arc forming under plasma instability and impedance-wave effects, which erodes the rails seriously, limits projectile performance, and makes it susceptible to viscous drag and many other factors. The composite armature combines the two: a plasma layer between the solid armature and the rails improves their contact, at the cost of a more complex structure and limited speed. In this paper we mainly discuss the motion of the plasma armature along the rails. When a large current passes, the armature melts and vaporizes within a very short time, forming a plasma. Under the electromagnetic force, as the armature current and the elapsed time increase, the plasma temperature also rises, causing local melting and evaporation of the inner rail surfaces and of the projectile material; this material mixes into the plasma region, forming the so-called viscous drag F_v, while material migrating into the armature increases the plasma mass and forms the inertial drag F_d (Yang Yudong and Wang Jianxin, 2008). 1) According to electromagnetic theory, the electromagnetic driving force can be expressed as F_L = dE/dx = (1/2)L′I², where F_L is the driving force, E is the conservative energy of the system, mainly the magnetic energy stored in the rail inductance (Jiang Zhongqiu and Tang Chenglin, 2010), L′ is the inductance gradient, i.e., the inductance per unit rail length, I is the armature current, and x is the displacement of the armature.
2) The viscous drag F_v depends on a viscous factor μ, which is related to the armature and to the machining precision and smoothness of the rails; under conditions of high precision and smoothness, μ ≈ 0.0125. Here v is the armature speed, m_a is the armature mass, d is the rail spacing and w is the rail width.
3) The inertial drag F_d is associated with the growth of the armature mass; the armature mass m_a can be obtained by solving Eq. (4), where α is the ablation coefficient, which depends on the speed and on the armature and rail materials and is here taken as a constant, and R is the armature resistance. For a particular railgun structure, the workers of the Canberra laboratory found experimentally that the muzzle voltage is basically independent of the armature current: in their experiments (Lehmann and Reck, 2007), as the armature current varied from 300 kA down to tens of kA, the muzzle voltage stayed around 200 V. The muzzle voltage is therefore taken as constant in the numerical calculation. The resultant force on the plasma armature can then be expressed as F_L − F_v − F_d = (m_a + m_p)a, where m_a is the armature mass, m_p is the projectile mass and a is the armature acceleration.
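As a quick numerical check of the driving-force relation and the force balance above, the sketch below evaluates F_L = (1/2)L′I² and the drag-free acceleration. The inductance gradient and the masses are illustrative assumptions, not values taken from the paper; only the current is the optimum quoted later in the Conclusions.

```python
# Minimal sketch of the railgun driving force F_L = 0.5 * L' * I**2.
# All parameter values below are illustrative assumptions, not the paper's data.

L_prime = 0.45e-6   # inductance gradient L' [H/m], typical order for rail launchers
I = 1.6384e5        # drive current [A] (the optimum reported in the Conclusions)
m_a = 0.5e-3        # plasma armature mass [kg], assumed
m_p = 3.0e-3        # projectile mass [kg], assumed

F_L = 0.5 * L_prime * I**2          # electromagnetic driving force [N]
a0 = F_L / (m_a + m_p)              # initial acceleration, ignoring drag [m/s^2]

print(f"F_L = {F_L:.1f} N, a0 = {a0:.3e} m/s^2")
```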
From Newton's second law and the kinematic formulas, the instantaneous speed and displacement of the armature can be derived; combining Eqs. (1), (2), (7) and (8) then yields the velocity equation of the plasma armature.
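Because the printed expressions for F_v and F_d do not survive in the text above, the sketch below integrates the force balance F_L − F_v − F_d = (m_a + m_p)a with forward Euler steps, using placeholder drag forms (a quadratic viscous term scaled by the viscous factor, and an inertial term from the ablation-driven mass growth). The drag forms, the ablation coefficient and the masses are assumptions for illustration only.

```python
# Forward-Euler integration of the armature equation of motion,
# F_L - F_v - F_d = (m_a + m_p) * a.  The drag forms F_v, F_d and all
# numbers are placeholder assumptions; only the force balance itself
# follows the text.

L_prime = 0.45e-6      # inductance gradient [H/m], assumed
I = 1.6384e5           # drive current [A]
mu = 0.0125            # viscous factor from the text
alpha = 1e-9           # ablation coefficient [kg/C], assumed
m_p = 3.0e-3           # projectile mass [kg], assumed
m_a = 0.5e-3           # initial armature mass [kg], assumed

dt, t_end = 1e-6, 5e-3
v = x = t = 0.0
while t < t_end:
    F_L = 0.5 * L_prime * I**2
    F_v = mu * v**2              # placeholder viscous drag
    dm = alpha * I * dt          # ablation-driven armature mass growth
    F_d = (dm / dt) * v          # inertial drag from accreting mass
    a = (F_L - F_v - F_d) / (m_a + m_p)
    v += a * dt
    x += v * dt
    m_a += dm
    t += dt

print(f"v(t_end) = {v:.1f} m/s, x = {x:.2f} m")
```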
Model design
The motion equation of the plasma armature relates the rail spacing and width, and the drive current, to the armature velocity. The optimization design of the electromagnetic railgun uses Newton's method, taking different rail widths and spacings as design variables; we then calculate the drive current and projectile speed at which the projectile velocity is maximized.
Based on the motion equations (8) and (9), the optimization model of the railgun is formulated with an objective function min f(X) subject to constraint conditions on X, where X is the vector of independent design variables, v is the projectile velocity and I is the drive current.
The constraint conditions and calculation parameters are 0.05 m ≤ d ≤ 0.1 m for the rail spacing and 0.05 m ≤ w ≤ 0.1 m for the rail width.

Algorithm design: ① given an initial point x⁽⁰⁾ and a precision ε > 0, Newton iterations are applied until the step size falls below ε.
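A minimal sketch of the Newton iteration in step ① is given below, applied to a stand-in speed-current curve v(I) so that the stationarity condition v′(I) = 0 is solved with finite-difference derivatives. The stand-in curve is an assumption shaped only to peak at the optimum reported in the Conclusions; the paper's actual v(I) would come from integrating the motion equations (8) and (9).

```python
import math

I0 = 1.6384e5   # current scale chosen so the stand-in peaks at the paper's optimum

def v_model(I):
    # Assumed rise-and-fall speed-current curve (peak 1716.7 m/s at I = I0),
    # standing in for the velocity equation of the plasma armature.
    return 1716.7 * (I / I0) * math.exp(1.0 - I / I0)

def d1(f, x, h=1.0):   # first derivative, central difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1.0):   # second derivative, central difference
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

I = 1.0e5                      # initial point x(0), assumed
for _ in range(50):
    step = d1(v_model, I) / d2(v_model, I)
    I -= step                  # Newton step toward v'(I) = 0
    if abs(step) < 1e-2:       # precision epsilon, assumed
        break

print(f"optimal current ~ {I:.4e} A, v_max ~ {v_model(I):.1f} m/s")
```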
Optimization results and analysis
Tab. 1 analyzes the influence of rail spacing and width on the drive current and projectile launch speed, and of the drive current on the launch speed, for d = w = 0.05 m. The calculation parameters chosen in this paper are basically the same as Marshall's experimental parameters, with the rail spacing and width increased (Asghar Keshtkar and Toraj Maleki, 2009). Tab. 1 shows that the rail spacing and width have an important effect on the launch speed: as they increase, the launch speed tends to increase. We therefore conclude that increasing the rail spacing and width is one way to improve projectile speed. However, when both are increased, the material needed to construct an actual railgun also increases; considering practical applications, the rail spacing and width are set to 0.1 m. Another observation is that the current also tends to increase with the rail spacing and width, and Fig. 2 shows this effect. The results optimized with Newton's method show that the launch speed first increases with the drive current (J. Mankowski, 2007; Asghar Keshtkar, 2009; Wang Zijian, 2009 and Thomas G. Engel, 2006) and then levels off: once the drive current increases beyond a certain value the launch speed approaches a constant, and the launch speed is maximal at the corresponding maximum drive current (Yang Yudong, 2010). Selecting the current sensibly is therefore an effective way to improve projectile speed.
Conclusions
(1) Considering that the plasma armature is resisted by viscous drag and inertial drag, a mathematical model was built that includes the key variables: armature speed, drive current, and the width and spacing of the rails. (2) The optimization calculation was carried out by Newton's method: when the drive current is I = 1.6384 × 10⁵ A, the launch speed of the projectile reaches its maximum value of v = 1716.7 m/s. To improve the projectile velocity, the width, spacing and length of the rails should be selected appropriately.
(3) Comparing the optimization results with other simulation and experimental results verifies the rationality and effectiveness of the model, providing a theoretical basis for the design and manufacture of railguns.
Figure 1. The conventional types of armature. Here U is the arcing voltage; if the plasma is in a smooth, stable state, the arcing voltage can be approximated by measuring the voltage at the rail outlet (muzzle), and the arcing voltage and armature resistance satisfy Ohm's law, U = I·R_a. | 2,165.8 | 2011-08-03T00:00:00.000 | [
"Engineering",
"Physics"
] |
Temperature Dependence of Deformation Behaviors in High Manganese Austenitic Steel for Cryogenic Applications
The deformation structure and its contribution to the strain hardening of a high manganese austenitic steel were investigated after tensile deformation at 298 K, 77 K and 4 K by means of electron backscatter diffraction and transmission electron microscopy, revealing a strong dependence of strain hardening and deformation structure on deformation temperature. It was demonstrated that sufficient twinning indeed provides a high and stable strain hardening capacity, leading to a simultaneous increase in strength and ductility at 77 K compared with tensile deformation at 298 K. Moreover, although the SFE of the steel is ~34.4 mJ/m² at 4 K, sufficient twinning was not observed, indicating that mechanical twinning is hard to activate at 4 K. However, numerous planar dislocation arrays and microbands can be observed, and these substructures may be a reason for the multi-peak strain hardening behavior at 4 K; they also provide a certain strain hardening capacity, and a relatively high total elongation of ~48% is obtained at 4 K. In addition, it was found that the yield strength (YS) and ultimate tensile strength (UTS) increase linearly as the deformation temperature is lowered from 298 K to 4 K, with increments in YS and UTS estimated at 2.13 and 2.43 MPa per 1 K reduction, respectively.
Introduction
For a long time, high manganese austenitic steels, such as high manganese twinning-induced plasticity (TWIP), transformation-induced plasticity (TRIP) and low-density steels, have attracted much interest due to their potential applications in the automotive industry [1][2][3][4][5]. A large number of studies focus on deformation mechanisms, strain hardening, yield strength, texture, fracture and fatigue, etc. Up to now, these steels have not been widely used in the automotive industry because of their relatively high cost compared with conventional automobile steels, hydrogen-induced cracking and other problems. Recently, high manganese austenitic steels were shown to act as candidate cryogenic materials for liquefied natural gas (LNG) transportation by ship and truck due to their extraordinary cryogenic mechanical properties and relatively low cost compared with conventional cryogenic materials [6][7][8]. Moreover, tensile and impact properties at room temperature and 77 K, as well as the corresponding deformation mechanisms, were investigated in detail [6][7][8][9][10][11][12], indicating that these steels have potential applications in the LNG field, and they have been used in LNG tank building. However, there are few data on the plastic properties of high manganese austenitic steels at temperatures as low as 4 K. These extremely cryogenic plastic properties determine whether they can be used in extremely cryogenic fields, such as liquid hydrogen and liquid helium.
In addition, mechanical twinning, as one of the secondary plastic deformation mechanisms [1,2,13], plays a significant role in enhancing strain hardening and improving ductility. Recently, Luo et al. [14] reported that high manganese TWIP steels can also achieve high strain-hardening rates without mechanical twinning at 373 K and 473 K. However, it is commonly accepted that the combination of strength and ductility in high manganese TWIP steel results from the high strain-hardening rates caused by mechanical twinning [1,2,15,16]. The stacking fault energy (SFE) increases with increasing temperature [17], leading to the suppression of mechanical twinning at high temperatures. Conversely, at low temperatures of 77 K and 4 K, the SFE may fall below 15~20 mJ/m², the TRIP effect can occur, and the mechanical properties at extremely low temperature deteriorate. Hence, we designed a Fe-0.6C-0.5Si-24.2Mn-4.9Al (in wt%) steel with a relatively high SFE at room temperature to suppress martensite transformation at 77 K and 4 K. In addition, the deformation mechanisms at 4 K remain somewhat unclear, and it is also unknown whether sufficient twinning can occur for a twinning-favorable SFE at 4 K.
Hence, we investigate the temperature dependence of the deformation behaviors, mainly focusing on mechanical twinning and planar dislocation slipping. The present study provides a better understanding of the role of mechanical twinning in enhancing the strain hardening rate and of the deformation mechanism at temperatures as low as 4 K.
Material Preparation
The high manganese steel was alloyed with ~4.9 wt% Al to increase the SFE, with the aim of suppressing martensite transformation at 4 K. The chemical composition and SFEs, estimated via a thermodynamic model [17], are given in Table 1. The steel was melted in a high-frequency vacuum induction furnace under an argon protective atmosphere and cast into an iron mold. The ingot was air cooled to room temperature at a cooling rate of ~20 °C/min, hot-rolled to a thickness of ~12 mm in the temperature range of 1150-1050 °C, and subsequently water-quenched to room temperature. After that, the hot-rolled plate was isothermally treated at 1200 °C for 2 h to eliminate segregation.
Tensile Test
Standard cylindrical tensile samples with a gauge diameter of 6 mm and a gauge length of 50 mm were machined along the rolling direction. Uniaxial tensile testing was performed on an AG-X plus PC-controlled tensile machine (Shimadzu Co. Ltd., Kyoto, Japan) at 298 and 77 K at a crosshead speed of 5 mm/min. The 4 K tensile testing was performed on an MTS-SANS CMT5000 PC-controlled tensile machine (MTS Systems (China) Co. Ltd., Shanghai, China) equipped with a CryoLab cryogenic system (4.2~300 K; self-developed by the Technical Institute of Physics and Chemistry, CAS, Beijing, China) at the same crosshead speed. The steels tensile-deformed at 298, 77 and 4 K are designated the 298 K, 77 K and 4 K steels, respectively.
Electron Backscatter Diffraction
Metallographic specimens were electro-discharge machined from the hot-rolled steel and from tensile-fractured samples at maximum uniform strain. These specimens were mechanically polished using silicon carbide papers and a polishing machine with polishing paste, and then further polished using a three-ion-beam polishing instrument (Leica EM TIC 3X, Leica Microsystems Ltd., Wetzlar, Germany) to remove the surface strain layers. The microstructure before and after tensile deformation was analyzed using a Zeiss Ultra 55 (Carl Zeiss AG, Jena, Germany) field-emission scanning electron microscope (FE-SEM) equipped with an electron backscatter diffraction (EBSD) attachment. A scanning step of 5 µm, small enough to obtain a clear morphology, was used during EBSD data collection, and the analyzed area of ~2.3 × ~1.7 mm was sufficiently large. The EBSD data were post-processed with the Tango procedure of the HKL CHANNEL 5 software (version 5.0.9.0, OXIG Co. Ltd., Oxford, UK).
Transmission Electron Microscopy
Square slices with a thickness of ~600 µm were electro-discharge machined from the 298 K, 77 K and 4 K steels. The slices were mechanically thinned to ~50 µm from both sides to avoid sample bending, which commonly causes curved extinction fringes, and then punched into thin foils with a diameter of 3 mm using a Gatan 659 punch. These foils were further thinned using a twin-jet electrolytic polisher (Struers TenuPol-5, Struers Inc., Copenhagen, Denmark) to obtain electron-transparent regions surrounding a hole in the foil. Mechanical twins, dislocation configurations and deformation bands were observed using an FEI Tecnai G2 F20 (FEI Co. Ltd., Hillsboro, OR, USA) field-emission transmission electron microscope (FE-TEM) operated at 200 kV.
Tensile Properties at Different Temperatures
The uniaxial tensile engineering stress-strain and corresponding strain hardening rate (SHR) curves are shown in Figure 1. The yield strength (YS), ultimate tensile strength (UTS) and total elongation (TEL) are provided in the table inserted in Figure 1a, showing that both YS and UTS are greatly enhanced by lowering the deformation temperature from 298 to 4 K, as a result of the high shear stress needed for dislocation gliding at low temperature. The YS of the 77 K steel is ~2.4 times that of the 298 K steel, and this factor becomes ~3.0 as the deformation temperature is further decreased to 4 K. Moreover, the correlation between YS and deformation temperature obeys a linear relationship, as shown in Figure 2, and the UTS also increases linearly with decreasing deformation temperature. The TEL increases from ~53 to ~69% as the deformation temperature decreases from 298 to 77 K; thus, strength and ductility are simultaneously enhanced at 77 K. However, when the deformation temperature is further lowered to 4 K, there is a small decrease in TEL compared with the 298 K steel and a large decrease compared with the 77 K steel. Note that the shape of the stress-strain curve of the 4 K steel differs markedly from those of the 298 K and 77 K steels: the latter are relatively smooth, whereas numerous serrations are clearly seen in the stress-strain curve of the 4 K steel. This phenomenon is commonly observed in 316LN, 304L, medium-entropy alloys, etc. [18][19][20], and has been interpreted in terms of twinning, martensite transformation, the formation of dislocation bursts or adiabatic deformation [19][20][21][22]. In the present work, however, martensite transformation did not occur at 4 K, indicating that it is not the reason for the formation of serrations.
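The linear YS(T) and UTS(T) relations above invite a quick estimate; the sketch below uses the slopes reported in the abstract (2.13 and 2.43 MPa per 1 K of cooling) with assumed room-temperature values, and checks the 77 K/298 K yield-strength ratio against the reported factor of ~2.4.

```python
# Linear YS/UTS vs. temperature estimate from the reported increments.
# YS_298 and UTS_298 are assumed room-temperature values for illustration;
# only the slopes (2.13 and 2.43 MPa per 1 K reduction) come from the text.

K_YS, K_UTS = 2.13, 2.43      # MPa per 1 K of cooling, from the abstract
YS_298 = 335.0                # assumed YS at 298 K [MPa]
UTS_298 = 900.0               # assumed UTS at 298 K [MPa]

def ys(T):  return YS_298 + K_YS * (298.0 - T)
def uts(T): return UTS_298 + K_UTS * (298.0 - T)

for T in (298.0, 77.0, 4.0):
    print(f"T = {T:5.1f} K: YS ~ {ys(T):6.1f} MPa, UTS ~ {uts(T):6.1f} MPa")
print(f"YS(77 K)/YS(298 K) ~ {ys(77.0)/YS_298:.2f}")  # text reports ~2.4
```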
In addition, the strain hardening rate was derived from the true stress-strain curves, as shown in Figure 1b, exhibiting a strong temperature dependence of the SHR. In the early deformation stage, the SHRs sharply decrease at all three deformation temperatures as a result of dynamic recovery caused by cross slipping and annihilation of dislocations, as well as the formation of low-energy dislocation structures [13]. In the final deformation stage, the SHRs also sharply decrease because of the saturation of dislocation multiplication and the formation of denser and thicker twins [1]. In the main deformation stage, however, the SHR of the 298 K steel is nearly the same as that of typical high manganese TWIP steels, showing a slight increase owing to mechanical twinning and a further decrease as mechanical twinning becomes less active [1,23]. The 77 K steel possesses a higher SHR than the 298 K steel, and its SHR becomes nearly constant, as if mechanical twinning were sufficiently and continuously activated. Moreover, dynamic recovery is effectively suppressed at low temperature, which also promotes dislocation storage [24]. When the deformation temperature is lowered to 4 K, numerous peaks appear in the SHR curve. This behavior should not be caused by dynamic strain aging, given the very low diffusivity of carbon at 4 K and the addition of aluminum. In fact, localized plastic instabilities lead to serrations, which are frequently observed in 316LN, 304L, medium-entropy alloys, Ti alloys, etc., at low temperatures because of the combination of low thermal heat capacity and high flow stress [25].
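A minimal sketch of the SHR derivation described above follows, converting an engineering stress-strain curve to true stress and strain and differentiating numerically; the input curve is a synthetic Hollomon-type stand-in, not the measured data.

```python
# Strain hardening rate (SHR = d(true stress)/d(true strain)) from an
# engineering stress-strain curve.  The input curve is synthetic; only
# the conversion and differentiation steps mirror the text.

import numpy as np

eng_strain = np.linspace(0.002, 0.45, 300)          # engineering strain
eng_stress = 335.0 + 1100.0 * eng_strain**0.45      # synthetic curve [MPa]

true_strain = np.log(1.0 + eng_strain)              # valid up to necking
true_stress = eng_stress * (1.0 + eng_strain)

shr = np.gradient(true_stress, true_strain)         # d(sigma)/d(epsilon) [MPa]

i = len(shr) // 2
print(f"SHR at eps_true = {true_strain[i]:.3f}: {shr[i]:.0f} MPa")
```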
Figure 3 provides an EBSD inverse pole figure (IPF) map as well as an image quality (IQ) map with general grain boundaries and twin boundaries. There is nearly no color contrast in the grain interior, implying very low local orientation gradients there. Numerous annealing twins can be observed, indicating that annealing twins were not suppressed by the addition of ~5.0 wt% Al. Moreover, many annealing twins run through the whole grain, whereas some Σ3 {112} incoherent twin boundaries terminate in the grain interior. It is commonly accepted that twinning in face-centered-cubic crystals proceeds through the layer-by-layer displacement of a/6<112> Shockley partial dislocations on consecutive {111} planes [26][27][28]. Thus, these partial dislocations can be blocked by pre-existing dislocation tangles [29], resulting in the termination of annealing twins in the grain interior. In addition, the average grain size, including annealing twins, was estimated to be 122 µm using a linear intercept method.
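The 122 µm figure above comes from a linear intercept measurement; the sketch below shows the idea on a labeled grain map (each pixel carrying a grain ID, as EBSD software would export), counting boundary crossings along horizontal test lines. The toy map and the use of the 5 µm step size as the pixel size are assumptions.

```python
# Mean linear intercept grain size from a labeled grain map: the mean
# intercept length is (total test-line length) / (number of boundary
# crossings).  The toy label map is an assumption for illustration.

import numpy as np

rng = np.random.default_rng(0)
# Toy "grain map": 200x200 pixels, labels from a coarse random seed grid.
seeds = rng.integers(0, 50, size=(10, 10))
labels = np.kron(seeds, np.ones((20, 20), dtype=int))  # blocky grains

pixel_um = 5.0                                         # EBSD step size [um]
crossings = (np.diff(labels, axis=1) != 0).sum()       # label changes per row
total_line_um = labels.shape[0] * (labels.shape[1] - 1) * pixel_um

mean_intercept_um = total_line_um / crossings
print(f"mean linear intercept ~ {mean_intercept_um:.1f} um")
```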
EBSD Observations
The EBSD IPF and kernel average misorientation (KAM) maps of the steels, subjected to maximum uniform tensile deformation at 298, 77 and 4 K, are shown in Figure 4. Regardless of the deformation temperature, almost all grains were elongated along the rolling direction//loading direction, nearly all grains tended to align along the <001>//tensile direction or the <111>//tensile direction, and there was an obvious color contrast in the grain interior. Figure 3 shows that there is nearly no preferred orientation of the austenite grains before deformation; the observation of the <001> or <111>//tensile direction therefore indicates that grains with orientations deviating from <001> or <111> reoriented during deformation. In addition, pronounced color contrast in the grain interior always implies severe lattice distortion and plastic deformation, leading to a large local misorientation, which is supported by the KAM maps. Figure 4d,f show most regions with KAM values higher than 2°. A high KAM value always represents a high dislocation density, a wide distribution of misorientation and a high accumulation of plastic deformation [7]. Hence, relatively large plastic deformation occurred in the 298 K, 77 K and 4 K steels. Note that some regions show KAM values close to or greater than 4°; these regions are commonly observed in the vicinity of grain boundaries, implying the piling up of more dislocations at the grain boundaries. Moreover, the area of the regions with KAM values close to or greater than 4° in the 77 K steel is larger than in the 298 K and 4 K steels. In addition, there are some dark regions in the 77 K steel, indicating low confidence index values because of strong lattice distortion [30]. Furthermore, the areas of the regions with KAM values less than 1° in the 298 K and 4 K steels are greater than in the 77 K steel. These results indicate that the 77 K steel possesses the best tensile plastic deformation capacity. Although the EBSD results clearly exhibit grain orientations, the degree of plastic deformation as well as some twin segments and substructures cannot be clearly resolved due to the relatively low resolution of SEM; hence, these substructures were observed by TEM.
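KAM values like those quoted above are computed per pixel as the average misorientation to neighboring pixels, excluding neighbors above a boundary threshold; the sketch below does this for a toy orientation field of scalar rotation angles about a single axis. Ignoring crystal symmetry and using scalar angles is a strong simplification of what HKL CHANNEL 5 actually does, so this is for illustration only.

```python
# Toy kernel average misorientation (KAM): per pixel, average the
# misorientation to its 4 neighbors, excluding values above a threshold
# (treated as grain boundaries).  Orientations here are scalar rotation
# angles about one fixed axis -- no crystal symmetry is considered.

import numpy as np

rng = np.random.default_rng(1)
theta = np.cumsum(rng.normal(0, 0.4, size=(100, 100)), axis=1)  # deg, drifting field

kam = np.zeros_like(theta)
counts = np.zeros_like(theta)
threshold = 5.0  # deg; larger misorientations count as boundaries

for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    neigh = np.roll(np.roll(theta, dy, axis=0), dx, axis=1)
    mis = np.abs(theta - neigh)
    keep = mis < threshold
    kam += np.where(keep, mis, 0.0)
    counts += keep

kam = np.divide(kam, counts, out=np.zeros_like(kam), where=counts > 0)
print(f"mean KAM ~ {kam.mean():.2f} deg; fraction > 2 deg: {(kam > 2).mean():.2%}")
```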
TEM Observations
The representative microstructure characteristics of the steel subjected to maximum uniform tensile deformation at 298 K are shown in Figure 5, exhibiting deformation mechanisms consisting of mechanical twinning, planar dislocation gliding, and cross slipping and climbing of dislocations. Figure 5a,b show that two twinning-system variants of the {111}<112> type were activated during tensile deformation at 298 K, providing evidence that the slight increase in the SHR curve results from mechanical twinning. To clearly show the morphology of the two twin variants, they are highlighted in Figure 5d,e. The T1 twin clearly becomes bent after twin-twin intersection, indicating strong twin-twin interactions, which can accommodate plastic deformation and enhance the SHR [31]. Additionally, there are numerous T2 twins with fine spacing, leading to a sharp decrease in the mean free path of dislocations. Moreover, dislocation contrast is visible between the mechanical twins, indicating sufficient dislocation storage; thus, the SHR can be increased or kept at a high value. Although the SFE of the steel is as high as ~63.6 mJ/m² at 298 K, numerous mechanical twins are observed in the 298 K steel, and the SHR curve also shows that mechanical twinning has occurred. In general, mechanical twinning can be activated for SFEs ranging from 15~20 to 40~50 mJ/m² [11,[32][33][34], implying that an SFE of ~63.6 mJ/m² exceeds the upper limit of this range. The reason for the activation of mechanical twinning in the 298 K steel is the large grain size, which can sufficiently lower the twinning stress [35]. Besides mechanical twinning, Figure 5f exhibits highly dense dislocation walls (HDDWs) along one variant {111} slip plane; this dislocation configuration is always formed by dense dislocation sheets lying on the primary slip system [36], implying that planar dislocation glide is another deformation mechanism. In addition, deformation bands, dislocation tangles and dislocation cells can also be observed. These substructures also contribute to the SHR.
When the deformation temperature is decreased to 77 K, the deformed microstructure is very different from that of the 298 K steel, as shown in Figure 6. Although the SFE of the steel is as high as ~63.6 mJ/m² at 298 K, this value decreases to ~38.6 mJ/m² at 77 K; thus, mechanical twinning can be sufficiently activated according to the SFE range for twinning, and numerous mechanical twins are indeed observed in the 77 K steel. Most mechanical twins show a relatively small thickness of 10~20 nm or a larger thickness of 50~100 nm, and there are some thickened mechanical twins with a thickness greater than 300 nm, as shown in Figure 6e. Compared with the 298 K steel, there are two major differences in the morphology of the mechanical twins: their volume fraction is far higher, and their distribution is very dense, leading to small twin spacing. The large volume fraction of mechanical twins and the very small twin spacing should provide the major contribution to the high SHR. Thus, the SHR of the 77 K steel is higher than that of the 298 K steel and remains nearly constant during the main deformation stage, supporting the conclusion that mechanical twinning indeed enhances SHR and ductility; hence, the TEL of the 77 K steel is higher than that of the 298 K steel. Some deformation bands can also be observed, as shown in Figure 6f: the regions with bright contrast have almost the same orientation, the regions with dark contrast also have similar orientations, and there is a small orientation deviation between the two contrasted regions. These deformation bands contribute to the SHR to varying degrees [36]. Figure 7a shows numerous dense bands with dark contrast along the {111} planes, with numerous dislocations between the bands. To show the morphology of these bands clearly, some regions were magnified in Figure 7b,c, where numerous slip traces along the {111} planes can also be observed. The formation of these dislocation configurations is due to dislocations piling up on intersecting {111} planes [37,38], leading to dense dislocation networks of planar dislocation arrays forming the so-called Taylor lattices [36]. Figure 7e shows that some regions contain HDDWs along the primary slip planes. In addition to the above dislocation configurations, dislocation tangles can also be observed, indicating high dislocation accumulation and strong dislocation interactions. Moreover, the dislocation density between ill-defined boundaries is always high; such bands are also termed microbands [37,38]. With the further lowering of the deformation temperature to 4 K, few mechanical twins can be observed; instead, the dislocation configurations are characterized by planar dislocation arrays. Although the SFE of the steel is ~34.4 mJ/m² at 4 K, only slightly less than ~38.6 mJ/m², abundant mechanical twins such as those in the 77 K steel are not observed, seeming to indicate that mechanical twinning is hard to activate at 4 K. Compared with the 77 K steel, the pronounced decrease in TEL may be due to the scarcity of mechanical twins; compared with the 298 K steel, however, there is only a small decrease in TEL, indicating that the formation of planar dislocation arrays and microbands can also enhance the SHR, so that a relatively high TEL can still be obtained.
In addition, serrations in the stress-strain curves of metals tested at 4 K are commonly observed, and several mechanisms have been proposed to explain this phenomenon; we consider adiabatic deformation to be the main reason. Furthermore, the two-variant planar dislocation arrays in the 4 K steel may also contribute to the formation of serrations: the pre-formed planar dislocation arrays can act as strong barriers to dislocation slip along the other variant slip plane, so that dislocation pinning and depinning occur, generating bursts of increased and decreased stress.
Conclusions
In the present work, the tensile deformation behaviors of a high manganese austenitic steel were investigated at 298, 77 and 4 K. The deformed microstructures were characterized by means of EBSD and TEM. Some conclusions can be drawn.
1.
As expected, both YS and UTS increased with the lowering of the deformation temperature, whereas the TEL exhibited an obvious increase at 77 K and only a slight decrease at 4 K compared with 298 K, thus overcoming the long-standing strength-ductility trade-off.
2.
In the main deformation stage, the SHR of the 298 K steel was typical for high manganese TWIP steels. A high and constant SHR led to a high ductility in the 77 K steel. However, the 4 K steel exhibited multi-peak strain hardening behaviors.
3.
With the lowering of deformation temperature, the major change in the deformation mechanism was as follows: certain twinning → sufficient twinning → little twinning. This change is consistent with the change in TEL, indicating that the mechanical twinning indeed provided a high and stable strain hardening capacity. Thus, a high TEL could be obtained.
4.
Interestingly, it was found that although the SFE of the steel was ~34.4 mJ/m² at 4 K, sufficient twinning was not observed; instead, numerous planar dislocation arrays and microbands were observed, which may also be a reason for the formation of serrations.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of another ongoing study.
Conflicts of Interest:
No conflict of interest exists in the submission of this manuscript, and the manuscript is approved by all authors for publication. I would like to declare on behalf of my co-authors that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part. All the listed authors have approved the enclosed manuscript. | 6,042.4 | 2021-09-01T00:00:00.000 | [
"Materials Science"
] |
Construction of Microporous Zincophilic Interface for Stable Zn Anode
Aqueous zinc ion batteries (AZIBs) are promising electrochemical energy storage devices due to their high theoretical specific capacity, low cost, and environmental friendliness. However, uncontrolled dendrite growth poses a serious threat to the reversibility of Zn plating/stripping, which impacts the stability of batteries. Therefore, controlling the disordered dendrite growth remains a considerable challenge in the development of AZIBs. Herein, a ZIF-8-derived ZnO/C/N composite (ZOCC) interface layer was constructed on the surface of the Zn anode. The homogeneous distribution of zincophilic ZnO and the N element in the ZOCC facilitates directional Zn deposition on the (002) crystal plane. Moreover, the conductive skeleton with a microporous structure accelerates Zn2+ transport kinetics, resulting in a reduction in polarization. As a result, the stability and electrochemical properties of AZIBs are improved. Specifically, the ZOCC@Zn symmetric cell sustains over 1150 h at 0.5 mA cm−2 with 0.25 mA h cm−2, while the ZOCC@Zn half-cell achieves an outstanding Coulombic efficiency of 99.79% over 2000 cycles. This work provides a simple and effective strategy for improving the lifespan of AZIBs.
Introduction
The excessive consumption of non-renewable energy sources has caused global warming, frequent disasters and energy shortages, all of which seriously affect the environment. To mitigate these issues, renewable energy sources have gradually come into view [1]. Although lithium-ion batteries (LIBs) have been widely utilized in electronic products and new energy vehicles, they still face problems such as limited resources, high costs and safety hazards [2][3][4][5]. Aqueous zinc ion batteries (AZIBs) have captured the attention of researchers due to their inherent advantages, including low cost, high safety, abundant resources, high theoretical capacity (820 mA h g⁻¹) and low redox potential (−0.76 V) [6][7][8][9]. However, a series of issues such as dendrite growth, dead Zn and by-product generation severely constrains the practical application of AZIBs.
Results and Discussion
Figure 1a presents a schematic diagram of the preparation process of ZOCC@Zn. Porous ZOCC was obtained by carbonizing the synthesized cubic ZIF-8 precursor, and ZOCC@Zn was further obtained by uniformly coating ZOCC on bare Zn. The X-ray diffraction (XRD) pattern of the synthesized ZIF-8 (Figure S1) matches well with the typical diffraction peaks of the simulated card [30], indicating a successful synthesis with excellent crystallinity. After calcination at 800 °C in a nitrogen atmosphere, the diffraction peaks of ZIF-8 completely disappeared. The XRD pattern of the ZOCC (Figure 1b) shows two distinct, broad diffraction peaks at 24° and 43°, corresponding to the (002) and (100) crystal planes, respectively; the weak (100) peak belongs to amorphous carbon, and the (002) signal to graphitized carbon. No Zn-related diffraction peak is observed, probably because most of the Zn evaporated under the high-temperature condition; the form in which the Zn element exists is discussed in the subsequent XPS analysis. Additionally, the specific surface area and pore size distribution of the ZOCC were investigated using N₂ adsorption-desorption isotherms. Figure 1c displays a sharp uptake at relatively low pressure (P/P₀ ≤ 0.01), evidencing a microporous structure [31], while the appearance of a hysteresis loop indicates the presence of mesopores. The Brunauer-Emmett-Teller (BET) surface area is 675.72 m² g⁻¹ with a total pore volume of 0.591 cm³ g⁻¹. The hierarchical pore structure of ZOCC with a high specific surface area facilitates kinetic mass transfer and regulates an even Zn²⁺ ion flux [32]. Furthermore, the pore size distribution curve (Figure 1d) was obtained using the non-local density functional theory (NLDFT) method; the pore sizes are mainly in the range of 0.6-4 nm, revealing that ZOCC is predominantly microporous (<2 nm) with a slight mesoporous (2-50 nm) component. X-ray photoelectron spectroscopy (XPS) was employed to analyze the composition and chemical bonding states of ZOCC. As shown in Figure S2, the wide-scan XPS spectrum clearly demonstrates the presence of C, N and Zn elements. The high-resolution C 1s spectrum (Figure 1e) can be deconvoluted into three peaks, ascribed to C sp³-C sp³ (284.8 eV), C sp²-C sp² (286.6 eV) and C=N bonds (289.7 eV) [33,34]. The presence of C=N bonds indicates that the carbon skeleton of ZOCC is doped with N, consistent with the high-resolution N 1s spectrum: as displayed in Figure 1f, the peaks at 398.4, 399.7 and 401.0 eV are assigned to pyridinic-N, pyrrolic-N and graphitic-N, respectively. This demonstrates that most N atoms are integrated into the carbon lattice; N doping is essential for enhancing the electrical conductivity and the electrochemical properties of the carbon material. Furthermore, the presence of pyridinic-N and pyrrolic-N enhances the wettability of the composite surface, thereby facilitating the adsorption of Zn²⁺ and promoting electron transport and zinc ion diffusion [32,35]. The Zn 2p spectrum (Figure 1g) shows binding energies of 1021.6 eV and 1044.6 eV for Zn 2p3/2 and Zn 2p1/2, respectively; the 23.0 eV gap between the two peaks confirms the presence of Zn²⁺ originating from ZnO [36,37].
The morphology and structure of both ZIF-8 and ZOCC powder were characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The obtained ZIF-8 precursor (Figure 2a,b) exhibits a cubic shape with a smooth surface and an average size of about 200 nm, which is consistent with the TEM image displayed in Figure S3. The uniform distribution of C, N, O, and Zn elements in ZIF-8 was confirmed by the corresponding elemental mapping. After calcination, as shown in Figure 2c,d, ZOCC generally inherits the original shape of the precursor, but with a surface decorated with granular substances, which are identified as ZnO by the elemental analysis of HAADF-STEM in Figure 3. This observation is consistent with the XPS results. Although there is some degree of shrinkage and diameter reduction in ZOCC compared to the precursor, the structure remains intact without any visible collapse. This shrinkage may be attributed to the volatilization of Zn or the depletion of C and N under a high calcination temperature.
We fabricated the ZOCC interface layer on the bare Zn surface by mixing ZOCC powder with a small amount of organic binder (PVDF) to prepare a slurry. Figure S4a displays the overall flatness of the bare Zn surface, which is 80 µm thick (Figure S4c); at higher magnification, however, Figure S4b reveals some defects on the surface. Under the effect of the organic binder, ZOCC is evenly distributed on the electrode surface (Figure S4d) without altering its original morphology (Figure S4e). Furthermore, the cross-sectional SEM image confirms that the ZOCC layer is densely spread on the Zn metal surface with a thickness of about 20 µm (Figure S4f). To assess the electrochemical stability of ZOCC@Zn, symmetric cells were assembled using 2 M ZnSO₄ solution as the electrolyte, and the ZOCC@Zn and bare Zn symmetric cells underwent cyclic plating/stripping tests at different current densities and capacities. As depicted in Figure 4a, the cycling life of ZOCC@Zn stably reaches 1152 h at a current density of 0.5 mA cm⁻² and a capacity of 0.25 mA h cm⁻², while maintaining a continuously ultralow polarization (18 mV). Conversely, the bare Zn cell displays clear potential fluctuations, a high polarization of approximately 48 mV and a short cycling life of 165 h. As the current density is increased to 1 mA cm⁻² and the capacity to 1 mA h cm⁻² (Figure 4b), the ZOCC@Zn cell still achieves a relatively long cycling time of 519 h with a lower polarization (19 mV); in sharp contrast, the bare Zn cell exhibits a higher polarization of about 37 mV. What is more, at a current density of 2 mA cm⁻² (Figure 4c), the polarization potential of bare Zn gradually declines, with a sudden drop at 228 h, presumably because the uncontrolled growth of dendrites caused by inhomogeneous deposition punctures the separator and short-circuits the cell; the ZOCC@Zn cell, however, can be plated/stripped for 720 h. Finally, further galvanostatic charge/discharge testing was performed at 5 mA cm⁻², as shown in Figure 4d, demonstrating that the ZOCC@Zn cell has a lower polarization and a longer cycle life than bare Zn. These findings suggest that ZOCC@Zn is better suited for long-term cycling at various current densities and is adaptable to more diverse environments, while bare Zn is prone to short-circuiting; they also demonstrate that the ZOCC interface layer effectively inhibits dendrite growth and prolongs battery life. ZOCC@Zn and bare Zn symmetric cells were then serially measured at current densities of 1, 2, 5, 8 and 10 mA cm⁻² with a fixed areal capacity of 1 mA h cm⁻² to evaluate their stability and rate capability (Figure 4e).
The polarization of the ZOCC@Zn cell is consistently lower than that of the bare Zn cell at every current density, demonstrating that the ZOCC@Zn cell has excellent Zn²⁺ transport kinetics together with plating/stripping stability. In addition, we examined the effect of the ZOCC layer on the reaction kinetics through the relationship between the exchange current density and the Zn deposition kinetics, based on the following equation [38,39]:

i = i₀ · Fη_total / (2RT)

where i, i₀ and η_total represent the cycling current density, the exchange current density and the total overpotential, respectively, and F, R and T refer to the Faraday constant, the ideal gas constant and the Kelvin temperature. The calculations in Figure S5 reveal that the exchange current density of the ZOCC@Zn anode (9.12 mA cm⁻²) is superior to that of the bare Zn cell (7.05 mA cm⁻²), which means that the kinetics of the ZOCC@Zn cell are promoted.

The morphological characterization of the electrode surface after 15 cycles was performed to further validate the modulation effect of the ZOCC interface layer. Figure S6 shows that Zn²⁺ deposits more uniformly on the ZOCC@Zn surface than on bare Zn. In addition, the distinctive peak at 8° of ZnSO₄(OH)₆·5H₂O (PDF#39-0688) is clearly observed in the XRD pattern of the bare Zn electrode surface after cycling, but not on the ZOCC@Zn surface (Figure 4f). When the current density and capacity are increased to 1 mA cm⁻² and 1 mA h cm⁻², the surface of the bare Zn electrode shows an irregular and disordered morphology with a large number of protrusions (Figure S7a,b), whereas the ZOCC@Zn surface exhibits a relatively flat and dendrite-free morphology, with the uniformly deposited Zn forming a hexagonal lamellar structure (Figure S7c,d).
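The low-overpotential relation quoted above can be inverted to estimate i₀ from a measured (i, η) pair; the sketch below does this, with the overpotential value assumed purely for illustration (the paper extracts i₀ = 9.12 mA cm⁻² from the data in Figure S5).

```python
# Exchange current density from the linearized kinetics i = i0 * F*eta/(2*R*T):
# invert to i0 = 2*R*T*i / (F*eta).  The overpotential below is an assumed
# illustrative value, not read from Figure S5.

F = 96485.0        # Faraday constant [C/mol]
R = 8.314          # gas constant [J/(mol K)]
T = 298.15         # temperature [K]

i = 1.0            # cycling current density [mA cm^-2]
eta = 5.6e-3       # total overpotential [V], assumed

i0 = 2.0 * R * T * i / (F * eta)
print(f"i0 ~ {i0:.2f} mA cm^-2")   # paper reports 9.12 (ZOCC@Zn) vs 7.05 (bare Zn)
```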
Furthermore, at 2 mA cm⁻², 1 mA h cm⁻², uneven deposition and severe dendrite growth appeared on bare Zn, resulting in clear regional overgrowth and continuous accumulation on the electrode, which is extremely detrimental to cell reversibility (Figure 5a,b). In contrast, the surface of the ZOCC@Zn electrode is apparently flat and the metallic Zn is uniformly distributed on the (002) crystal plane, as shown in Figure 5c,d. Finally, ZOCC@Zn at 5 mA cm⁻², 1 mA h cm⁻² still had a remarkable effect in restraining dendrite growth (Figure S8). Coulombic efficiency (CE) is another critical parameter for evaluating electrochemical performance. A half-cell was assembled with ZOCC@Zn or bare Zn as the anode and copper foil as the cathode for charge/discharge tests. For bare Zn//Cu, severe fluctuations after 81 h were observed at 2 mA cm⁻², 1 mA h cm⁻²; encouragingly, ZOCC@Zn//Cu presents an exceptional average CE of 99.60% for 808 h (Figure 6a). The charge/discharge voltage profiles of ZOCC@Zn//Cu and bare Zn//Cu are exhibited in Figure 6b and Figure 6c, respectively, showing clearly that ZOCC@Zn//Cu has great reversibility in the Zn plating/stripping process. Notably, when the current density is increased to 5 mA cm⁻² (Figure 6d), ZOCC@Zn//Cu displays an outstanding average CE (99.79%) for 2016 cycles, remarkably superior to that of bare Zn//Cu (252 cycles). Moreover, the voltage stability of ZOCC@Zn//Cu is significantly better than that of bare Zn//Cu (Figure 6e,f), reflecting the fact that the reversibility of the electrode is remarkably improved by constructing the ZOCC interfacial layer.
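Average Coulombic efficiencies like the 99.60% and 99.79% quoted above are simply the mean per-cycle ratio of stripping (discharge) to plating (charge) capacity; a minimal sketch with synthetic capacity data follows.

```python
# Average Coulombic efficiency over N cycles: CE_n = Q_strip / Q_plate,
# averaged.  The capacity arrays are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(2)
q_plate = np.full(2016, 1.0)                              # mA h cm^-2 per cycle
q_strip = q_plate * (0.9979 + rng.normal(0, 5e-4, 2016))  # synthetic losses

ce = q_strip / q_plate
print(f"average CE over {len(ce)} cycles: {100 * ce.mean():.2f}%")
```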
To explore the practical feasibility of the ZOCC@Zn anode, bare Zn//MnO₂ and ZOCC@Zn//MnO₂ full cells were assembled to evaluate the contribution of the ZOCC layer to the performance of AZIBs. α-MnO₂ was synthesized following previous literature [15]; its excellent crystallinity was confirmed by XRD (Figure S9), and SEM (Figure S10) and TEM (Figure S11) confirmed its nanorod-like structure. Cyclic voltammetry (CV) curves were first measured to understand the electrochemical behavior of the cathode and anode, as shown in Figure 6g. Typical MnO₂ redox peaks are clearly observed in both the ZOCC@Zn and bare Zn full cells, corresponding to the reversible redox reaction between MnO₂ and MnOOH [40]; both full cells thus show similar electrochemical behavior, and the ZOCC coating does not impair the redox process of MnO₂. Interestingly, the gap between the redox peaks of ZOCC@Zn//MnO₂ is reduced compared with bare Zn//MnO₂, which represents a decrease in voltage hysteresis. Moreover, ZOCC@Zn//MnO₂ shows a higher current density, indicating a faster plating/stripping rate and a higher capacity. The electrochemical impedance spectroscopy (EIS) studies also support these results: Figure 6h was obtained by fitting the Nyquist plot to the equivalent circuit of Figure S12. Figure 6i confirms the capacity reversibility of the ZOCC@Zn cell, which varies smoothly with current density and shows a higher specific capacity at every current density than bare Zn//MnO₂. In addition, the cycling performances of the full cells at 1 A g⁻¹ are shown in Figure 6j: the specific capacity of ZOCC@Zn//MnO₂ is 128.8 mA h g⁻¹ with a capacity retention of 67.51% after 400 cycles, while maintaining an admirable average CE (99.72%). For bare Zn//MnO₂, however, the capacity drops extremely fast, leaving only 34.71% of its initial discharge specific capacity after 400 cycles. The corresponding voltage profiles are shown in Figure 6k,l: the capacity of the bare Zn full cell drops rapidly from the initial 222.8 mA h g⁻¹ to 86 mA h g⁻¹ after 200 cycles, whereas that of the ZOCC@Zn cell only varies from the initial 229 mA h g⁻¹ to 200.6 mA h g⁻¹.
This phenomenon is mainly attributed to the flat deposition morphology of the ZOCC@Zn cell, which limits the generation of dead Zn during cycling. In light of the above results, ZOCC interfacial modification is a very promising approach for promoting the practical application of AZIBs.
Preparation of ZIF-8 Precursor and ZOCC
A modified method was used to synthesize ZIF-8 at room temperature [41]. Solution A was prepared by completely dissolving 0.713 g of Zn(NO₃)₂·6H₂O in 25 mL of deionized water; Solution B was prepared by dissolving 12 mg of CTAB and 10.9 g of 2-MI in 160 mL of deionized water with ultrasonic stirring for 30 min. Subsequently, Solution A was gradually added to Solution B and stirred for 15 min at room temperature. The white product was collected by centrifugation, washed with deionized water and ethanol, and dried under vacuum at 70 °C for 12 h to obtain the ZIF-8 precursor.
The obtained ZIF-8 powder was placed in a tubular furnace, heated to 800 °C at a rate of 5 °C min⁻¹ and held for 2 h under a N₂ atmosphere. The calcined powder was collected and denoted ZOCC.
Preparation of ZOCC@Zn
To fabricate ZOCC@Zn, a homogeneous slurry was prepared by mixing the as-prepared ZOCC powder and PVDF in a mass ratio of 9:1 and grinding with NMP solvent. The slurry was then uniformly spread on Zn foil (80 µm) and dried at 60 °C for 12 h to obtain ZOCC@Zn.
Preparation of MnO₂ Electrode
MnO₂ powder was prepared following previous reports [15,42]. Solutions C and D were obtained by dissolving 4.74 g of KMnO₄ and 11.03 g of Mn(CH₃COO)₂·4H₂O, respectively, each in 40 mL of deionized water. Solution D was gradually added to Solution C under vigorous stirring, and the mixture was heated for 4 h at 80 °C in a water bath. The precipitate was separated by centrifugation, washed with deionized water, and dried at 80 °C for 12 h. Subsequently, the precipitate was heated to 200 °C in a tube furnace in air to obtain the desired MnO₂ powder.
The MnO₂ electrode was fabricated by grinding MnO₂ powder, acetylene black and PVDF into a slurry in a weight ratio of 7:2:1 with NMP solvent. The slurry was coated on Ti foil (20 µm) and dried under vacuum at 70 °C for 12 h. The loading of the active MnO₂ was about 1.2 mg.
Characterization
The microscopic morphology and particle size of the samples were characterized using scanning electron microscopy (SEM, Helios 5 CX, Thermo Scientific, Waltham, MA, USA) and transmission electron microscopy (TEM, Talos F200S G2, Thermo Scientific, Waltham, MA, USA). The crystal structure and phase composition were analyzed using X-ray diffraction (XRD, Empyrean, Panalytical B.V., Almelo, The Netherlands) with Cu Kα radiation between 5° and 80° (40 kV; 40 mA; 5° min⁻¹). The specific surface area and pore size distribution were obtained from nitrogen adsorption-desorption isotherms at −196 °C (Belsorp Max, MicrotracBEL, Osaka, Japan); samples were degassed at 200 °C for 12 h before testing. Additionally, X-ray photoelectron spectroscopy was performed on a Thermo Scientific K-Alpha spectrometer (Waltham, MA, USA) with a monochromatic Al Kα X-ray source.
The electrochemical properties were tested mainly with symmetric cells, half-cells and full cells. For the symmetric cells, ZOCC@Zn served as both working and counter electrode. The half-cell was assembled with ZOCC@Zn as the negative electrode and Cu foil (20 µm) as the positive electrode, and the full cell was assembled with the MnO₂ electrode as the positive electrode. For comparison, a parallel series of electrochemical tests was performed with bare Zn in place of the ZOCC@Zn electrodes while keeping all other test conditions the same. The electrodes were cut into 12 mm diameter discs and assembled into CR-2032-type coin cells in an air atmosphere with glass fiber (0.68 mm, Whatman GF/D) as the separator; the electrolyte was 90 µL of 2 M ZnSO₄ for the symmetric and half-cells and 90 µL of 2 M ZnSO₄ + 0.2 M MnSO₄ for the full cells. A Land CT3002A test system (Wuhan LAND Electronic Co. Ltd., Wuhan, China) was employed for galvanostatic measurements of the coin cells at current densities of 0.5-5 mA cm⁻² and capacities of 0.25-1 mA h cm⁻². Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) tests were carried out on a CHI660E electrochemical workstation.
Conclusions
In summary, ZOCC with a microporous framework structure was applied to the surface modification of a Zn anode as a way to optimize the electrochemical properties of AZIBs. First, the conductive, microporous ZOCC interface layer homogenizes the electric field distribution on the electrode surface, thereby improving the reversibility of the plating/stripping process, accelerating Zn2+ transport kinetics, and reducing the polarization potential. Furthermore, ZnO and N atoms with zincophilic character are uniformly distributed in the ZOCC framework, attracting directional Zn deposition and thus effectively suppressing the formation of dendrites during charging/discharging. Notably, the unique structure and composition of the ZOCC layer address dendrite growth and yield excellent electrochemical performance. In detail, the ZOCC@Zn symmetric cell at a current density of 0.5 mA cm⁻² and a capacity of 0.25 mA h cm⁻² cycled stably for 1152 h with an ultralow polarization of 18 mV. Meanwhile, at 2 mA cm⁻² and 1 mA h cm⁻², an outstanding average CE (99.79%) over more than 2016 cycles was observed in the ZOCC@Zn//Cu cell. The capacity retention of the ZOCC@Zn//MnO2 cell was greater than that of the bare Zn cell after 400 cycles. This work offers new insight into suppressing dendrite growth and accelerating the development of AZIBs toward practical applications.
| 5,678.4 | 2023-06-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Chemistry"
] |
A Historical Perspective on Crystallography in Croatia and the Career of Biserka Kojić-Prodić from the Viewpoint of the CSD
There is a long history of chemical crystallography and use of the Cambridge Structural Database (CSD) in Croatia, which dates back to 1985 when the CSD first became accessible there thanks to the efforts of Biserka Kojić-Prodić as head of the National Affiliated Centre. On the occasion of Dr Kojić-Prodić’s 80th birthday we take the opportunity to look back at the history of crystallography in Croatia, and particularly at Biserka’s career, from the point of view of the CSD. Biserka has been a prolific author and contributor of crystal structures over the years, sharing over 400 structures and collaborating with over 270 different co-authors on a wide range of structure types.
INTRODUCTION
The relationship between Croatian crystallography and the Cambridge Structural Database (CSD) [1] began in 1985 with the establishment of a local National Affiliated Centre (NAC). This development was significantly driven by the efforts of Biserka, and she was the head of this NAC from its inception in 1985 until 2007. The role of an NAC was initially to support the dissemination of the CSD to academic users within a region, and over time it has evolved more towards acting as a local advocate for the CSD and its applications. It is safe to say that Biserka has had a huge impact in both respects over the years, initially bringing access to the CSD to Croatia for the first time, then encouraging the use of the database through many publications and local workshops.
Croatian chemical crystallographers, and Biserka in particular, have made a huge contribution to the CSD since its inception. Dr Kojić-Prodić, along with her co-authors, has shared over 400 crystal structures with the international research community through the CSD, and here we look back over her crystal structures in the context of the CSD and Croatian crystallography in general.
CROATIAN CHEMICAL CRYSTALLOGRAPHY
One of the first chemical crystallography groups in Croatia was started by Drago Grdenić in Zagreb in 1948. [2] Professor Grdenić mainly studied organo-mercury compounds in the 1950s and has some of the earliest structures containing mercury in the CSD, like the ring structure mercury diethylene oxide, with two bridged mercury centres (Figure 1, CSD refcode DOHGCD). [3] From these early structures, the volume of crystallographic publications has continued to grow and there are now thousands of structures from Croatia in the CSD. These structures cover a broad range of chemistry, from structures with a small molecular volume, such as the organic zwitterion 3-carboxypyridinium-2-olate (molecular volume of just 142 Å³, CSD refcode PIMBAP01), [4] to, at the other end of the volume scale, organic salts of N,N'-3-azoniapentane-1,5-diylbis(3-(1-aminoethylidene)-6-methyl-2H-pyran-2,4(3H)-dione) with an extremely large unit cell volume (72,951 Å³, Figure 2, CSD refcode YOHFUX). [5] This huge unit cell contains 96 individual formula units (Z = 96), but only half of the formula unit is symmetry unique (Z' = 0.5).
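These cell statistics are internally consistent, as a line of plain arithmetic (using only the numbers quoted above) shows:

```python
# Volume bookkeeping for the very large YOHFUX cell, using the figures above.
V_cell = 72951.0   # unit cell volume, cubic angstroms
Z = 96             # formula units per cell
Z_prime = 0.5      # symmetry-unique fraction of a formula unit

print(f"volume per formula unit = {V_cell / Z:.0f} A^3")   # ~760 A^3
print(f"asymmetric units per cell = {Z / Z_prime:.0f}")    # 192, matching the
# general-position multiplicity of space group Fd-3c
```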
The studies carried out by Croatian crystallographers include many metal-organic structures, which have been explored in diverse areas from magnetic properties (CSD refcode BAWFOX) [6] to the limits of halogen-bonding interactions (CSD refcode WEKNOS). [7] They are also not limited to single crystal studies, and the CSD contains, amongst others, the powder structure of a metal-organic framework generated by mechanochemistry (CSD refcode OFERUN08). [8] In recent years, Croatian crystallographers have contributed to the international community on the editorial boards of journals, and several high-profile international crystallographic meetings have been held in Croatia. These highly successful meetings include the European Crystallographic Meeting (ECM29) in 2015, the European Crystallography School (ECS3) in 2016 and the Hot Topics in Contemporary Crystallography meetings in 2014, 2017 and 2018.
BISERKA AND THE CSD
In a chemical crystallography career spanning five decades and dating back almost to the inception of the CSD itself, Dr Kojić-Prodić has so far contributed 439 crystal structures to the CSD (Figure 3). The crystal structures she has shared span an impressive range of chemistries, from her simplest structure with only 12 atomic coordinates (the organic compound tetrabromo-semiquinone, CSD refcode TBBENQ02), [9] to her most complex with 544 atomic coordinates (a Z' = 4 structure of an adamantane bisurea salt, CSD refcode LETLUT). [10] Some of Biserka's most high-profile structures in the CSD include those of supramolecular organic gelators based on amino acid and amino alcohol oxalamides (Figure 4); the publications associated with these structures have each been cited over 100 times. [11,12] Other key studies include the structural analysis of interaction behaviour in calixarene amino acid derivatives [13] and the structures of palladium(II) quinolinylaminophosphonate complexes. [14] Roughly two thirds of Biserka's CSD structures are organic, the remaining third containing at least one metal atom, with an increasing fraction of polymeric coordination compounds since 2009.
Experimental crystal structures can be highly precise, and they can be extremely challenging. The lowest R-factor structure of Dr Kojić-Prodić is a low-temperature charge density structure from 2015 of a potassium salt (R-factor of 1.3%, CSD refcode UHOTAN01). [15] In contrast, the highest R-factor structure is that of a muramic acid derivative from 1998 which could not be easily crystallised, has Z' = 2, and gave only very tiny plates (R-factor of 17%, CSD refcode BEFQUY). [16] This pair of structures really captures the challenges associated with experimental crystallography and the need to obtain structures of appropriate precision for the research in question. If you want to study charge density, you need incredibly precise, high resolution data; if you are simply aiming to prove the 3D structure and conformation of the compounds studied, then a different level of precision is required.
Figure 1. One of the first Croatian structures in the CSD, mercury diethylene oxide (CSD refcode DOHGCD). [3]
Figure 2. One of the largest Croatian structures in the CSD by unit cell volume, an organic salt in space group Fd-3c (CSD refcode YOHFUX). [5]
Figure 3. Chart of the number of crystal structures contributed by Dr Kojić-Prodić to the CSD over the years, with breakdown into organic (green line), metal-organic (blue line) and polymeric systems (purple line).
Through Biserka's 439 crystal structures, there are a wide range of space groups represented including the most common space groups in the CSD, namely P21/c, P-1 and C2/c, as well as some of the least common (Figure 5).There are a number of extremely rare space groups observed within these structures including Ccc2, [17] P4212 [13] and P31m [18] which have just 114, 77 and 19 examples in the CSD respectively.This again highlights the breadth of crystallography encountered by Dr Kojić-Prodić over her career.
A wide range of experimental conditions have been sampled across Biserka's CSD structures; her coolest structures were determined at just 90 K, the polymorphs of the p-benzosemiquinone radical (CSD refcodes JEMROJ and JEMROJ02), [19] as part of a variable temperature study to assess dynamical disorder of a proton. The hottest structure was determined at 340 K, the structure of di-potassium chloranilate chloranilic acid, [20] again as part of a variable temperature study to investigate proton dynamics.
Crystallography is also a very collaborative science, and Biserka's career in chemical crystallography is no exception to this; she has worked with 270 different co-authors across her 439 different structures in the CSD, from many different countries around the world. There isn't space here to adequately reflect all the many collaborations she has had on these structures, but the list of high-profile names includes Nobel Prize winner Jean-Marie Lehn, with whom Biserka published the structure of two cyclo-bis-intercaland receptor molecules (CSD refcodes TIVTOH and TIVTUN). [21] She has also worked with many well-known crystallographers from across the globe like Bill Duax, [22] George Sheldrick [23] and Ton Spek. [24] It is important of course to recognise not just the big names, but also the very frequent collaborators; there are a few scientists that have shared many crystal structures in collaboration with Dr Kojić-Prodić over the years. Aleksandar Višnjevac has published 19 structures in the CSD with Biserka, Živa Ružić-Toroš has shared 35 and Krešimir Molčanov a mighty 38 structures! To paraphrase the well-known saying, "no research scientist is an island", and Biserka has shown the clear benefits of active collaboration throughout her career.
SUMMARY
The history of both chemical crystallography and the use of the CSD in Croatia is rich and Biserka's contribution has been particularly impactful.With over 400 crystal structures across a wide range of structural types, her footprint in the CSD is significant.Biserka's efforts in setting up the National Affiliated Centre and running it for over 20 years have also made an enormous difference to structural chemistry research in Croatia.There is now also a countrywide agreement for the CSD licence, so all Croatian academics can get easy access to the database.We look forward to seeing the very strong relationship between the CSD and Croatian crystallography continue well into the future.
Acknowledgements. The authors would like to thank all the many Croatian crystallographers that have contributed structures to the Cambridge Structural Database over the years.
Figure 4. … and UNEGOJ (right). [12]
Figure 5. Pie chart of the frequency of space groups observed within Dr Kojić-Prodić's CSD entries.
| 2,093 | 2018-06-04T00:00:00.000 | [
"Materials Science"
] |
“Unknown Genome” Proteomics
We present here a new approach that enabled the identification of a new protein from a bacterial strain with unknown genomic background using a combination of inverted PCR with degenerate primers derived from N-terminal protein sequences and high resolution peptide mass determination of proteolytic digests from two-dimensional electrophoretic separation. Proteins of the sulfate-reducing bacterium Desulfotignum phosphitoxidans specifically induced in the presence of phosphite were separated by two-dimensional gel electrophoresis as a series of apparent soluble and membrane-bound isoforms with molecular masses of ∼35 kDa. Inverted PCR based on N-terminal sequences and high resolution peptide mass fingerprinting by Fourier transform-ion cyclotron resonance mass spectrometry provided the identification of a new NAD(P) epimerase/dehydratase by specific assignment of peptide masses to a single ORF, excluding other possible ORF candidates. The protein identification was ascertained by chromatographic separation and sequencing of internal proteolytic peptides. Metal ion affinity isolation of tryptic peptides and high resolution mass spectrometry provided the identification of five phosphorylations in the domains 23-47 and 91-118 of the protein. In agreement with the phosphorylations identified, direct molecular weight determination of the soluble protein eluted from the two-dimensional gels by mass spectrometry provided a molecular mass of 35,400 Da, which is consistent with an average degree of three phosphorylations.
The sulfate-reducing bacterium Desulfotignum phosphitoxidans can utilize phosphite as electron donor for growth and has been shown to induce a specific protein band of ~40 kDa in one-dimensional SDS-PAGE (1,2). The genome of this bacterium is unknown, and no genetic information has been available concerning the genes and proteins involved in the process of phosphite oxidation. It is commonly appreciated that in the case of unknown genetic background, there is no direct approach applicable for proper identification of a protein of interest, and protein identification by proteome analysis is normally based on availability of genomic data. Using bottom-up proteomics (3,4) with amenable databases, identification of proteins is often straightforward but is highly complex or unfeasible in the absence of genomic data (5,6). In such cases, suitable derivatization approaches (5) and/or "de novo" identification (6) is typically required.
The combination of two-dimensional gel electrophoresis (2-DE) and mass spectrometry has become a powerful tool for protein identification if the genetic background is known. A standard procedure is to excise protein spots from the gel followed by in-gel digestion with a specific protease, extraction, and mass spectrometric analysis of the proteolytic peptides (7,8). One possibility in this "bottom-up" approach involves the application of tandem mass spectrometry to a set of peptides from a protein digest (9), resulting in a series of sequence-specific fragment ions that can be used for protein identification. In the present study we investigated proteins specifically induced by the sulfate-reducing bacterium D. phosphitoxidans in the presence of phosphite. The particular features of bacterial genetics, such as the lack of gene splicing, enabled the combination of proteomics and genetics methods comprising (i) initial N-terminal Edman protein sequencing followed by inverted PCR with degenerate primers derived from the N-terminal sequences and (ii) proteome analysis by high resolution mass spectrometry for protein identification from ORF candidates obtained in the first step. FTICR mass spectrometry (10) was shown in this study to be a powerful tool for unequivocal peptide identification. The analytical scheme of this "unknown genome" proteomics approach is shown in Fig. 1. Using 2-DE-isolated soluble and membrane-bound proteins expressed in the presence of phosphite, this combined approach enabled the identification of a new NAD(P)-dependent epimerase/dehydratase from D. phosphitoxidans. In addition, direct mass spectrometric molecular weight determinations and identification of affinity-isolated peptides provided the detection and localization of multiple phosphorylation sites.
Preparation of Protein Samples-Cells grown in 1-liter cultures of D. phosphitoxidans in the presence of 10 mM sodium phosphite and/or 10 mM sodium fumarate as electron donors and 10 mM sodium sulfate as electron acceptor were harvested in the late exponential growth phase. Phosphite-induced and non-induced cells were harvested under an anoxic atmosphere (95:5 (v/v) N2/H2 (Coy chamber, Ann Arbor, MI)). Cells were washed with anoxic 10 mM Tris-HCl buffer, pH 7.2, containing 0.342 M NaCl and suspended in 3 ml of soluble cytoplasmic extraction reagent containing 50 µl/ml protease inhibitor mixture for bacterial cell extracts (Sigma). Cell-free extracts were prepared anoxically by passing the cells four to five times dropwise through a chilled French pressure cell at 138 megapascals. Unopened cells and cell debris were removed by centrifugation at 27,000 × g for 20 min at 4 °C (Optima TL ultracentrifuge, Beckman). Soluble and membrane fractions of proteins were obtained by ultracentrifugation at 57,000 × g for 1 h at 4 °C. Both protein fractions were then treated according to the ProteoPrep Universal Extraction kit (Sigma) and the manufacturer's instructions. Protein content in the preparations was determined spectrophotometrically by the bicinchoninic acid method (BCA protein assay kit, Pierce) with bovine serum albumin as a standard. The soluble and the membrane protein fractions were stored in 200-µl aliquots and separated on 2-DE gels.
Acetone Precipitation-For the 2-DE preparation of samples from the soluble fractions we used acetone precipitation for removal of salts and contaminants. The protein content was precipitated at −28 °C for 5 h by adding 6 volumes of ice-cold acetone to the sample. After 20 min of centrifugation at 14,926 × g the residual acetone was removed, and the obtained pellet was allowed to dry.
Protein Separation by Two-dimensional Gel Electrophoresis-The samples were applied overnight on 17-cm IPG strips (pH range 5-8) using a passive in-gel rehydration method. Approximately 0.4-0.8 mg of total protein was loaded on one gel. The rehydration solution contained 7 M urea, 2 M thiourea, 4% (w/v) CHAPS, 40 mM Tris base, 2% (v/v) Servalyt 5-8, 0.3% DTT, and a trace of bromphenol blue. IEF was carried out using a Multiphor horizontal electrophoresis system (Amersham Biosciences). Rehydrated strips were run in the first dimension for about 23 kV·h at 20 °C. The proteins were focused for 30 min at 150 V, 30 min at 300 V, and 5 h at 3500 V. For the second dimension the IPG strips were equilibrated in 50 mM Tris-HCl, pH 8.8, 6 M urea, 30% (v/v) glycerol, 2% (w/v) SDS, a trace of bromphenol blue, 1% (w/v) DTT for 40 min. The second equilibration step used 4.5% (w/v) iodoacetamide instead of DTT for 20 min. In the second separation step (SDS-PAGE) the system used was the Bio-Rad Protean II xi vertical electrophoresis system. 10% SDS gels (1.5 mm thick) were used. Strips were placed on the vertical gels and overlaid with 0.5% agarose in SDS running buffer (25 mM Tris base, 192 mM glycine, 0.1% (w/v) SDS). Electrophoresis was performed in two steps: 25 mA/gel for ~30 min and 40 mA/gel until the dye front reached the anodic end of the gels. After this separation step the proteins were visualized with sensitive colloidal Coomassie staining according to Neuhoff et al. (11). The gels were scanned using a GS-710 calibrated imaging densitometer (Bio-Rad). For 2-DE gel comparison PDQuest analysis software (version 6) from Bio-Rad was used.
N-terminal Sequence Determination-For N-terminal sequence determinations 2-DE gels were electroblotted for 2 h onto PVDF membranes (Applied Biosystems) at 50 V using a WEB-M tank blotter (PEQLAB) with a buffer containing 25 mM Tris, 192 mM glycine, 20% methanol, pH 8.3. Gel-blotting papers (200 × 200 mm; Whatman) were used for the blotting sandwich preparation. After transfer, the PVDF membranes were washed with water for 15 min and then with methanol for 5-10 s and incubated in the staining solution (0.1% Coomassie Brilliant Blue R-250 in 40% methanol in water) until protein spots became visible. The membranes were then destained in destaining solution (50% methanol in water, freshly prepared before use) until the background disappeared and the spots were clearly visible. The membranes were air-dried. The spots of interest were excised and fully destained, dried, and kept at 4 °C in Eppendorf tubes until sequencing. Prior to sequencing, the destained protein spots were wetted with 100% methanol and applied into the sequencing cartridge. Sequence determinations were performed on an Applied Biosystems Model 494 Procise Sequencer attached to a Model 140C Microgradient System, a 785A Programmable Absorbance Detector, and a 610A Data Analysis System. All solvents and reagents used were of highest analytical grade purity (Applied Biosystems). For sequencing of both blotted and lyophilized samples, the corresponding standard pulsed liquid methods were used.
Design of Degenerate Primers and PCR-Primer sets (shown in Table II, part A) were developed from the N-terminal protein sequences using the following codon usage tables: Desulfobacter vibrioformis and Desulfobacula toluolica as closest relatives of D. phosphitoxidans. Additionally the Escherichia coli reversed translation codon table and best reverse translate, worst reverse translate, and degenerate codon tables of E. coli (DNAstar software, version 5.01) were used. For amplification of the EcoRI self-ligated fragment harboring the gene coding for a putative NAD(P)-dependent epimerase/dehydratase, primers shown in Table II, part B were used. For the localization and identification of additional loci coding for similar proteins or isoforms of the putative NAD(P)-dependent epimerase/dehydratase, the degenerate oligonucleotide pairs shown in Table III were used. 1 µg of chromosomal DNA of D. phosphitoxidans was digested completely with 10 units of each of the following restriction endonucleases: EcoRI, BamHI, HindIII (Fermentas International Inc., Burlington, Canada), and MaeIII (Roche Applied Science) in separate reactions in a final volume of 20 µl. The digestion reactions were terminated by heat inactivation of the enzymes where appropriate, and the obtained fragments were self-circularized with T4 DNA ligase (Fermentas GmbH, St. Leon-Rot, Germany). 10 units of T4 ligase were used in a 100-µl total reaction volume. Self-circularization reactions were carried out at 16 °C overnight. IPCRs were performed with self-circularized fragments and primers shown in Table II, part A. The TA cloning kit (Invitrogen) was used in the first round of IPCR with degenerate primers.
Total RNAs were isolated from cultures of D. phosphitoxidans grown to late logarithmic phase (A578 ≈ 0.28-0.30) in minimal medium containing 10 mM sulfate plus either 10 mM fumarate or 10 mM phosphite. Total RNA isolations were carried out with the RNeasy minikit (Qiagen, Valencia, CA) according to the manufacturer's instructions. For the removal of contaminating genomic DNA, the RNA preparations were on-column digested with DNase I (DNase I, RNase-free set, Qiagen). The DNase I-treated RNA was used as a template in one-step reverse transcription assays using SuperScript II reverse transcriptase and Platinum Taq DNA polymerase (Invitrogen) according to the manufacturer's protocol. The RNA concentration in each preparation was assessed spectrophotometrically. The positive control with only genomic DNA and the negative control containing only RNA without reverse transcriptase were run under identical PCR amplification conditions. Gene-specific oligonucleotide probes for amplification of the junction region ORF3-ORF4 were O34F (5′-TTTCTCGGCCAATTAATACTCTCC-3′) and O34R (5′-AGCTTTTGGGTTTCTTCATACAT-3′), used in phosphite-induced and non-induced cells.
Proteolytic Digestion-Spots were excised manually from the gel and subjected to in-gel digestion with trypsin according to Mortz et al. (12). The excised gel pieces were washed with deionized water for 15 min, dehydrated by addition of 3:2 ACN/deionized water for 30 min at 25 °C, and dried in a SpeedVac centrifuge (30 min). They were destained by addition of 50 mM NH4HCO3 (15 min), dehydrated with 3:2 ACN/deionized water (15 min), and dried in a SpeedVac centrifuge (30 min). Freshly prepared trypsin solution (12.5 ng/µl trypsin in 50 mM NH4HCO3) was added and incubated at 4 °C (on ice) for 45 min and then for 12 h at 37 °C in 50 mM NH4HCO3. After removal of the supernatant fraction, peptide extraction was performed with a solution of 3:2 ACN, 0.1% TFA in deionized water at room temperature (three steps of 1 h each). For Lys-C digestion the excised gel pieces were washed first with deionized water for 10 min and then with a solution of 25 mM Tris-HCl, pH 8.5, with 1 mM EDTA for 30 min. The gel spots were destained by addition of 25 mM Tris-HCl, pH 8.5, with 1 mM EDTA, 50% ACN (10 min). This last step was repeated until the Coomassie dye was completely removed. The gel pieces were dehydrated for 10 min by the addition of 100% ACN and dried in the SpeedVac centrifuge (1 h). The freshly prepared Lys-C solution (10 ng/µl Lys-C in 25 mM Tris-HCl, pH 8.5, with 1 mM EDTA) was added and incubated at 4 °C (on ice) for 30 min followed by overnight incubation (16-20 h) at 37 °C in 25 mM Tris-HCl, pH 8.5, with 1 mM EDTA. After transfer of the supernatant fraction into tubes containing 100 µl of deionized water and 5 µl of a solution of 50% acetonitrile, 5% TFA, peptide extraction was performed at room temperature (three steps of 10 min each). The eluates (supernatant and elution fractions that were collected in the same tube) were lyophilized to dryness.
TABLE I. N-terminal sequences determined by Edman analysis of phosphite-induced proteins of D. phosphitoxidans. Amino acids not determined unequivocally are shown in parentheses.
TABLE II. Oligonucleotides used in this study. Oligonucleotides in A were developed based on the N-terminal amino acid sequences of phosphite-induced proteins for amplification of self-circularized DNA fragments of D. phosphitoxidans genomic DNA. Oligonucleotides in B were used for amplification of EcoRI-digested and self-ligated DNA. *, these are not degenerate primers as compared to RPD2 and FPD2.
HPLC Isolation of Peptides-Lys-C peptides obtained by in-gel digestion of protein spots were separated by analytical HPLC on a Bio-Rad system using a Vydac C4 column (250 × 4.6-mm inner diameter, 5-µm silica, 300-Å pore size) (Vydac, Hesperia, CA). The samples were dissolved in 200 µl of 0.1% TFA (aqueous solution), and the peptides were separated using a linear gradient elution (0 min, 0% B; 5 min, 0% B; 105 min, 100% B; 110 min, 100% B; 115 min, 0% B; 120 min, 0% B) with eluent A (0.1% TFA in water) and eluent B (0.1% TFA in acetonitrile/water (80:20, v/v)). The flow rate was 1 ml/min, and the peaks were detected at 220-nm wavelength.
ZipTip Cleanup Procedure-C18 OMIX pipette tips were used for purification of the protein digests. The ZipTip procedure was carried out in five steps: wetting (50% ACN in deionized water), equilibration (1% TFA) of the ZipTip pipette tip, binding of peptides and proteins to the pipette tip, washing (0.1% TFA), and elution (50% ACN in 0.1% TFA).
Mass Spectrometry-MALDI-FTICR mass spectrometric analysis of the in-gel digested proteins was performed with a Bruker APEX II FTICR instrument equipped with an actively shielded 7-tesla superconducting magnet, a cylindrical infinity ICR analyzer cell, and an external Scout 100 fully automated X-Y target stage MALDI source with pulsed collision gas (Bruker Daltonics). The pulsed nitrogen laser was operated at 337 nm, and ions were directly desorbed into a hexapole ion guide situated 1 mm from the laser target (13). Ions generated by 20 laser shots were accumulated in the hexapole for 0.5-1 s at 10 V and extracted at −10 V into the analyzer cell. A 100 mg/ml solution of 2,5-dihydroxybenzoic acid in ACN, 0.1% TFA in water (2:1) was used as matrix. 0.5 µl of matrix solution and 1 µl of sample solution were mixed on the stainless steel MALDI target and allowed to dry. Mass spectra were obtained by acquisition of 64 scans. External calibration was carried out using the monoisotopic masses of singly protonated ion signals of bovine insulin (5730.609 Da), bovine insulin B-chain oxidized (3494.651 Da), human neurotensin (1672.917 Da), human angiotensin I (1296.685 Da), human bradykinin (1060.569 Da), and human angiotensin II (1046.542 Da). Acquisition and processing of spectra were performed with XMASS software version 6.1.2 (Bruker Daltonics). For peptide mass fingerprinting, monoisotopic masses of all singly charged ions from the MALDI-FTICR mass spectra (generated by XMASS, version 6.1.2) were directly used for database search using the MASCOT peptide mass fingerprinting search engine (Matrix Science) in combination with the ProFound engine. Search and acceptance criteria were as follows: 10-50 ppm mass error tolerance, one missed cleavage site permitted, methionine oxidation as variable modification, other proteobacteria as taxonomy (277,231 sequences), and 3 as the minimum number of matched peptides for protein identification. The database used was National Center for Biotechnology Information non-redundant (NCBInr) (July 12, 2006). MALDI-TOF mass spectrometric analysis of the Lys-C digest was carried out with a Bruker Biflex linear TOF mass spectrometer (Bruker Daltonics) equipped with a nitrogen UV laser (337 nm), a dual channel plate detector, a 26-sample Scout source, a video system, and an XMASS data system for spectra acquisition and instrument control. A saturated solution of α-cyano-4-hydroxycinnamic acid in ACN, 0.1% trifluoroacetic acid in water (2:1, v/v) was used as the matrix. Aliquots of 0.8 µl of the sample solution and the saturated matrix solution were mixed on the stainless steel MALDI target and allowed to dry. Acquisition of spectra was carried out at an acceleration voltage of 20 kV and a detector voltage of 1.5 kV. Molecular weight analysis of intact proteins was performed by MALDI-TOF-MS by excision of spots from 2-DE gels, destaining as described under "Proteolytic Digestion," and SpeedVac drying. Each gel spot was crushed in an Eppendorf cup and incubated with 20-50 µl of an organic solvent mixture consisting of 50% formic acid, 25% acetonitrile, 15% isopropanol, 10% water (v/v/v/v) (14). After a 20-min incubation in an ultrasonic bath at room temperature followed by centrifugation, 1 µl of the supernatant was placed on the MALDI target.
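The ppm acceptance window used in these searches is a simple relative-error criterion; a minimal sketch follows, in which the observed mass in the example is a made-up illustration rather than a value from the paper:

```python
# Peptide-mass-fingerprinting acceptance in ppm, as used for the MASCOT and
# ProFound searches described above. Generic arithmetic; only the 50 ppm
# threshold is taken from the text.
def ppm_error(m_obs: float, m_calc: float) -> float:
    """Relative mass error in parts per million."""
    return (m_obs - m_calc) / m_calc * 1e6

# Hypothetical FTICR peptide ion vs. the neurotensin calibrant mass (1672.917 Da)
err = ppm_error(1672.931, 1672.917)   # ~8.4 ppm
print(f"{err:.1f} ppm -> accepted" if abs(err) <= 50 else f"{err:.1f} ppm -> rejected")
```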
Affinity Isolation of Peptides-For analysis of phosphorylations, in-gel tryptic protein digest mixtures were purified by IMAC as described previously (15). The digestion mixtures were applied to ZipTip MC tips (Millipore) with Ga(III) IMAC (200 mM gallium nitrate) in a solution of 0.1% acetic acid with 10% ACN under conditions suggested by the manufacturer. Affinity-bound peptides were eluted with 0.3 M ammonium hydroxide solution (2 µl) and spotted directly onto the MALDI target. Monoisotopic masses obtained by MALDI-FTICR-MS for IMAC affinity-isolated peptides were compared with the corresponding mixtures from the in-gel tryptic digestion experiments.
N-terminal Protein Sequencing and Design of Degenerate Primers-After two-dimensional gel electrophoretic separation of proteins of D. phosphitoxidans, four protein spots with a molecular mass around 40 kDa were detected that were expressed only in the presence of phosphite. To compare and assign the specific phosphite-dependent protein expression pattern, cells were grown in minimal medium supplemented with fumarate or phosphite as electron donor and sulfate or CO2 as terminal electron acceptor in the following combinations: (a) system I, fumarate/sulfate; (b) system II, phosphite/CO2; (c) system III, phosphite/sulfate; and (d) system IV, fumarate/phosphite/sulfate (Fig. 2). For all growth conditions, proteins were separated into two fractions, a membrane protein fraction and a soluble protein fraction (SF). Gels were scanned and analyzed using the PDQuest software (Bio-Rad). The comparison of the 2-DE maps of the four different growth systems for the soluble fractions is shown in Fig. 2. Spots 2, 3, 4, and 5 were clearly assigned to be expressed only in the presence of phosphite.
N-terminal Edman sequence determinations were performed for the spots expressed in the presence of phosphite. Protein spots were electroblotted onto PVDF membranes using a wet transfer procedure. Membranes were stained with 0.1% Coomassie Brilliant Blue R-250 in 40% aqueous methanol, and protein spots were excised from the membranes and, after destaining, were subjected to automated Edman sequencing. N-terminal sequences of spots 2, 3, 4, and 5 from the soluble fraction, system II (Table I), provided definite sequence determinations for ~40 cycles, except for spot 2, which yielded only 17 residues because of its low concentration in 2-DE. All protein spots provided identical N-terminal sequences.
Identification of the Gene Coding for a Putative NAD(P)-dependent Epimerase/Dehydratase-Degenerate primers were designed on the basis of the first 35 residues obtained for proteins from the soluble fraction of system II that were specifically expressed in the presence of phosphite (see Table I), using the codon usage tables described under "Experimental Procedures." The amplification of digested and self-circularized DNA fragments of D. phosphitoxidans genomic DNA with primer pair AFP/RPD2* (*, not a degenerate primer) gave a single amplification product. The amplicon of 851 nucleotides was obtained after IPCR with BamHI-digested and self-ligated fragments. The PCR product was cloned into a pCR2.1 vector, transformed in E. coli INVαF′ cells, and sequenced. On its right end, the fragment contained 24 nucleotides coding for the sequence MKEGKVVG. Further digestion of the genomic DNA with endonuclease EcoRI, self-circularization, and IPCR with primer pair fex2/rex2.1 resulted in an amplicon of 3179 nucleotides in length. This product was amplified and completely sequenced with primer pairs fex3.2/rex3.1 and fex4/rex4. All nucleotide sequences formed one contig with 98% match of consensus sequence, in which the locus coding for a putative NAD(P)-dependent epimerase/dehydratase was identified. The gene is 951 bp long, coding for a protein of 317 amino acids with a calculated molecular mass of 35,212 Da and a pI of 5.7 (Figs. 3 and 4). The genomic DNA regions upstream and downstream of the gene coding for the putative NAD(P)-dependent epimerase/dehydratase were amplified by a combination of IPCR and nested PCR (data not shown). The translated sequences of the ORFs coded in these regions were compared with the MALDI-FTICR mass spectrometric determination of peptides and provided the unambiguous identification of a single protein (ORF4; see Figs. 4 and 5).
Fig. 2 (caption). The encircled spots (2, 3, 4, and 5) were found to be expressed only in the presence of phosphite. The following pI values were assigned to the proteins: spot 2, 6.0; spot 3, 6.2; spot 4, 6.3; spot 5, 6.5.
The protein sequence was checked for similarity and conserved domains with BlastP and revealed highest similarity to the UDP-glucose 4-epimerase gb|AAM0406.1 of Methanosarcina acetivorans C2A with 31% identity of residues and 53% similarity. Verification of the sequence for putative signal peptides with the SignalP software (version 3.0) did not yield a result. The nucleotide sequence of the new gene was assigned as a putative epimerase/dehydratase and deposited in the GenBank database under accession number ABU54327.
Results from total RNA RT-PCR obtained from cells grown with and without phosphite are shown in Fig. 3. The oligonucleotides used were specific for the junction between the gene coding for the putative NAD(P)-dependent epimerase/dehydratase and the previous ORF. Primers were designed to yield an amplification product of ~420 bp, of which ~250 bp were in the intergenic region between the prior ORF and the gene of interest. This result is in agreement with the calculated length of the amplification product of 399 bp, suggesting that the mRNA spans the junction between the two genes.
To detect similar loci in the genome of D. phosphitoxidans that might be responsible for the synthesis of putative isoforms (similar proteins) to the one coding for an NAD(P)-dependent epimerase/dehydratase, two approaches were used. (i) Degenerate oligonucleotides (Table III) based on the N-terminal sequences were used as forward primers together with primers based on internal peptide sequences obtained from Edman degradation. (ii) In addition, degenerate primers were developed from the internal peptide sequences obtained from protein spot 3 of the soluble fraction. In all cases a single amplicon per reaction was obtained that after sequencing resulted in nucleotide sequences of different lengths but complete identity with the nucleotide sequence of the gene coding for an NAD(P)-dependent epimerase/dehydratase. An exception was found for the amplified products of oligonucleotides 3152F2 (fraction 15, SF) and 38R1 (fraction 8, SF). With this pair one amplification product of 626 bp was obtained that was sequenced and yielded 47 of 115 amino acids identity (40%) with the C terminus of gb|EAX56319.1 (anthranilate/para-aminobenzoate synthase component I-like (Candidatus Desulfococcus oleovorans Hxd3)). This locus in Candidatus Desulfococcus oleovorans Hxd3 codes for a product of 741 amino acids. A further ClustalW alignment on the nucleotide and amino acid levels of the partial product obtained with 3152F2 and 38R1 and the product of the NAD(P)-epimerase/dehydratase did not show positive results. We assume that this product was formed because of mispriming of degenerate oligonucleotides.
High Resolution Mass Spectrometric Identification of a Single Protein, a New NAD(P)-dependent Epimerase/Dehydratase-The monoisotopic masses of all singly charged peptide ions from the MALDI-FTICR mass spectra of the tryptic in-gel digests of spots 2, 3, 4, and 5 (Fig. 2) from the soluble protein fraction were used directly for database search. The NCBInr database (October 1, 2007) was used as protein database, and MASCOT peptide mass fingerprinting and ProFound were used as search engines with the following search parameters: 10-50 ppm mass tolerance, one missed cleavage site permitted, Met oxidation as variable modification, other proteobacteria as taxonomy, and a minimum number of 3 matched peptides required for protein identification. Database searches did not provide a conclusive protein identification. This result was expected because there are no genomic data available for D. phosphitoxidans. However, unequivocal protein identification was obtained by high resolution peptide mass analyses using MALDI-FTICR-MS and by comparison of monoisotopic masses of peptide ions for each protein spot (spots 2-5 from the soluble fraction of system II) with the calculated masses for fragment ions of the ORFs obtained from genomic DNA amplification (Figs. 4 and 5). Fig. 4 illustrates examples of the protein identification for spot 3 SFII by comparison of fragment ions for ORFs 4, 5, and 8, which provided peptide assignments only for ORF4 and thus unequivocal identification of this protein, an NAD(P)-dependent epimerase/dehydratase. No peptide mass could be assigned for ORF8, and one peptide mass was assigned for ORF5; in contrast, peptides for ORF4 covered 75% of the complete amino acid sequence. These results ascertained that the protein spots were products of the gene coding for an NAD(P)-dependent epimerase/dehydratase. The MALDI-FTICR mass spectrum acquired for the tryptic digestion mixture of spot 3 from SFII is shown in Fig. 5. In the mass range 1300-3100 Da, 14 tryptic peptides (13 displayed in the mass spectrum) were identified with high mass accuracies (Δm, 1.2-11.5 ppm; considered mass threshold, 20 ppm) as tryptic peptide fragments of the new protein (Table IV).
Protein Structure Confirmation by Sequence Determination of Internal Peptides-The identification of a specific NAD(P)-dependent epimerase/dehydratase was confirmed by isolation and sequence determination of internal Lys-C peptide fragments. As an example, the HPLC analysis of Lys-C proteolytic peptides for protein spot 3 from the 2-DE gel of SFII (Fig. 2) is shown in Fig. 6. Twenty fractions eluted from the HPLC column were subjected to Edman sequence determination (see Table V). The internal peptide sequences obtained by Edman sequencing were aligned with the translated amino acid sequence of the ORF4 gene (Table V). For the analysis of phosphorylation sites, affinity enrichment (IMAC) of tryptic peptides was performed. Analyses of tryptic digestion mixtures from protein spots 3 and 4 of SFII with and without IMAC enrichment revealed two phosphorylated peptides in spots 3 and 4 from SFII. In both spots, four phosphorylation sites were identified in the peptide fragment Phe91-Lys118, corresponding to phosphorylations at the residues Thr97, Thr105, Ser112, and Thr115. In addition, a single phosphorylation was identified for spot 3 in the peptide Leu23-Lys47; this sequence contains three possible phosphorylation sites (Thr31, Tyr42, and Thr45). The increasing number of phosphorylations identified corresponded with the more acidic pI values for the spots separated by 2-DE (see Fig. 2). Thus, in protein spot 3 (pI 6.2) of the soluble fraction of system II, five phosphorylations were identified, whereas four phosphorylations were found in spot 4 (pI 6.3) (Table VI). The pI values for the phosphorylated proteins were significantly higher compared with the pI calculated for the unphosphorylated protein (5.8); the structural basis for this difference is unclear at present.
The identification of phosphorylations was further ascertained by molecular weight determinations of the intact proteins using MALDI-TOF-MS. MALDI-MS was performed by elution of proteins from 2-DE gels and placing 1-µl aliquots on the target, as illustrated in Fig. 7 by the spectrum of protein spot 4 from SFII. A molecular mass of 35,400 Da (average mass of 1+ to 5+ charged ions) was determined, which by comparison with the unmodified molecular mass of 35,212 Da is consistent with an approximate (average) degree of three phosphorylations.
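As a rough cross-check of this statement (plain arithmetic, not an analysis from the paper), the 188 Da shift divided by the ~80 Da added per phosphate gives an average occupancy between two and three of the identified sites, compatible with the stated estimate given the accuracy of linear-TOF intact-mass measurements at 35 kDa:

```python
# Consistency check of the intact-mass measurement against the
# phosphorylation count (generic arithmetic, not from the paper).
M_UNMOD = 35212.0      # calculated mass of the unmodified protein, Da
M_OBS = 35400.0        # MALDI-TOF average mass of the eluted protein, Da
DM_PHOS = 79.966       # mass added per phosphorylation (HPO3), Da

n_phos = (M_OBS - M_UNMOD) / DM_PHOS
print(f"mass shift = {M_OBS - M_UNMOD:.0f} Da -> ~{n_phos:.1f} phosphate groups")
# ~2.4; within linear-TOF accuracy at 35 kDa, this is compatible with an
# average occupancy of two to three of the five identified sites.
```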
DISCUSSION
Anaerobic phosphite oxidation by bacteria has been discovered only recently, and D. phosphitoxidans is the only bacterium known thus far to oxidize phosphite to phosphate and gain energy from the oxidation process. We present here the first protein and its gene involved in this process. The NAD(P)-dependent epimerase/dehydratase identified is a new protein that, on the basis of its conserved domains, amino acid sequence, and conservation degree of catalytic residues, is assigned as a member of the short chain dehydrogenase/reductase (SDR) family of proteins. Peptide mass fingerprinting using known databases provided no identification, consistent with the presence of a new protein. In the "unknown genome proteomics" approach used (Fig. 1), proteomics and genetics techniques were successfully combined using high resolution peptide mass determinations by FTICR-MS to discriminate between different ORFs from putative genes found upstream and downstream of the gene coding for an NAD(P)-dependent epimerase/dehydratase. The MALDI-FTICR-MS data identified the only correct protein out of a series of possible products and were ascertained by Edman sequence determination of internal peptides, in complete agreement with the translated gene product. In addition, direct analysis of the new protein by MALDI-MS upon elution from 2-DE identified a molecular mass of 35,400 Da, which is in agreement with the calculated mass of the amino acid sequence and a degree of approximately three phosphorylations; a total of five phosphorylations were identified by mass spectrometric analysis of IMAC-isolated peptides. The first epimerase studied, the UDP-glucose 4-epimerase of E. coli (GALE), belongs to the superfamily of SDRs (16). This protein is a homodimer with a molecular mass of 37.3 kDa and an N-terminal nucleotide cofactor binding domain. The SDRs are a diverse group of enzymes with highly divergent sequences (15-30% sequence identity) (16), most of which are dimers or tetramers (17). The SDR superfamily members contain two characteristic signature sequences, a YXXXK motif (where X can be any residue) and a GXGXXG motif (Rossmann fold) that is usually found near the cofactor binding pocket. The proteins that adopt Rossmann fold binding of nucleotide cofactors such as NAD(P) or FAD are known to function as oxidoreductases (18). Both signature sequences were found in the new NAD(P)-dependent epimerase/dehydratase identified. A Rossmann fold motif, GTGFIL, carrying one non-conserved substitution at the last residue, is located at the N-terminal part of the protein (residues 11-16). The second characteristic sequence found, 135YIISK139, fully corresponds to the YXXXK motif.
A multiple alignment (ClustalW 1.83) at the amino acid level of the translated product and 20 randomly selected UDP-glucose 4-epimerases from NCBInr (December 6, 2007), including the sequence of galE (P09147) from the Swiss-Prot/TrEMBL database, revealed that two of the three amino acids assigned to be important in the catalytic mechanism of the enzyme are conserved in the new protein. The Lys153, Tyr149, and Ser124/Thr124 residues were identified as catalytic residues of UDP-glucose 4-epimerase by site-directed mutagenesis (19,20) (the numbering refers to the GALE monomer from E. coli). The residues Tyr149 and Lys153 were conserved in all sequences examined, including the new protein identified. Tyr149 functions as a proton acceptor in GALE. Both residues Tyr149 and Lys153 were also found to be conserved among members of the SDR superfamily. The third residue, Ser124/Thr124, involved in the catalytic function of UDP-glucose 4-epimerases, is, in contrast to all other aligned sequences, not conserved in the new protein, which contains an Arg residue at this position. Functionally, Ser124/Thr124 is involved in substrate binding in all known UDP-glucose 4-epimerases. In summary, the new protein can be assigned to the SDR family of proteins based on its specific pattern sequences and the functions in which such sequences are involved. This protein is assigned as an NAD(P)-dependent epimerase/dehydratase according to its amino acid sequence. Further studies at the biochemical and protein structure level will elucidate the functional mode and the catalytic mechanism of this new enzyme and enable its detailed classification.
Acknowledgment-We gratefully acknowledge the expert assistance of Dr. Marilena Manea with the HPLC separation of the Lys-C digests.
* This work was supported by a grant from the Deutsche Forschungsgemeinschaft (DFG), Bonn, Germany (Grant SI 1300/1-1, Bacterial Anaerobic Phosphite Oxidation) and in part by DFG Grants PR175-12/1 and PR175-13/1. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank/EBI Data Bank with accession number(s) ABU54327.
| 7,708.4 | 2009-01-01T00:00:00.000 | [
"Biology",
"Chemistry",
"Environmental Science"
] |
Building semantic grid maps for domestic robot navigation
This article proposes a semantic grid mapping method for domestic robot navigation. Occupancy grid maps are sufficient for mobile robots to complete point-to-point navigation tasks in 2-D small-scale environments. However, when used in real domestic scenes, grid maps lack the semantic information end users need to specify navigation tasks conveniently. Semantic grid maps, which enhance the occupancy grid map with the semantics of objects and rooms and endow robots with robust navigation skills and human-friendly operation modes, are thus proposed to overcome this limitation. In our method, an object semantic grid map is built with low-cost sonar and binocular stereovision sensors by correctly fusing the occupancy grid map and object point clouds. Topological spaces of each object are defined so that robots can autonomously select navigation destinations. Based on domestic common sense about the relationship between rooms and objects, topological segmentation is used to obtain room semantics. Our method is evaluated in a real homelike environment, and the results show that the generated map has satisfactory precision and enables a domestic mobile robot to complete navigation tasks commanded in natural language with a high success rate.
Introduction
Occupancy grid maps are one of the most important environment representation methods in mobile robotics. Unlike topological maps, which model the environment as a graph, 1,2 the grid map discretizes the space into many grids with unknown, free, and occupied attributes. 3,4 Recently, the methods used to create occupancy grid maps for 2-D small-scale environments have matured. 5 Based on these maps, robots can successfully complete point-to-point navigation tasks. 6 However, when it comes to using grid maps in real domestic scenes, they lack the semantic information end users need to order robots to complete navigation tasks conveniently. For example, if an end user asks a robot to navigate to the position of a bed, he/she must know where the bed is in the grid map and manually specify a free grid as the goal point. Unfortunately, this approach is not feasible for two reasons: (1) most end users have little knowledge of robotics and cannot operate the map easily; (2) end users cannot be expected to repeat this operation every time. For that reason, semantic maps, 7,8 which describe not only the geometric arrangement of the environment but also semantic concepts, have been proposed to solve these problems.
In this article, we focus on creating semantic grid maps for robot navigation in domestic environments. Our method enhances the occupancy grid map with semantic concepts of objects and rooms. In this way, robots can not only utilize traditional robust navigation algorithms but also interact naturally with end users.
To build this map, we first create an occupancy grid map with sonar sensors. Point clouds of objects are then built by using object detection with a stereovision sensor. A filtering approach based on the information of the grid map and the size of objects is used to remove outliers, and density-based spatial clustering is adopted to obtain the final point clouds. We then employ minimum bounding rectangles (MBRs) to approximately represent the objects and create their topological spaces. The concepts of rooms are acquired by spectral clustering and region growing with domestic common sense. The experimental results show that the semantic grid mapping approach represents the environment at low cost and endows domestic robots with robust, efficient, and human-friendly navigation skills.
The proposed method makes the following contributions: (1) a novel method for building domestic semantic maps with low-cost sonar and binocular vision sensors; (2) maps that contain both object and room semantics; (3) a convenient way for end users to command robot navigation tasks; (4) use of information from the occupancy grid map and domestic common sense to remove outliers; (5) topological spaces of objects from which robots can autonomously select navigation goals; and (6) use of domestic common sense to merge segmentation results.
The remainder of this article is organized as follows. The "Related work" section presents related work about semantic mapping. Then, the "Grid map with object semantics" section introduces our semantic grid mapping method. Based on the generated object grid map, the "Semantic concepts of rooms" section subsequently describes our method for acquiring room concepts. The suitability of our approach is demonstrated in the "Experimental results" section. Finally, the "Conclusion" section concludes this article.
Related work
In recent years, many authors have focused on creating semantic maps for robot navigation. According to their goals, semantic mapping methods can be divided into two classes. The first class aims to augment metric/topological models with semantic labels such as types of objects or rooms. For example, on the basis of the grid map, Mozos et al. 9 added semantic labels of the scene through supervised learning. Using a laser sensor to construct a grid map and a visual sensor to identify objects, Meger et al. 10 constructed a map containing both spatial arrangements and object semantics. Pillai and Leonard 11 combined monocular simultaneous localization and mapping (SLAM) and object recognition to create a point cloud map with object semantics.
The other class aims to study semantic map representation and reasoning. These methods use multilayer maps to represent the environment, providing the robot with the capability of complex task planning and implicit knowledge reasoning. For instance, Ruiz-Sarmiento et al. 12 used RGB-D sensors with a conditional random field to construct a multiversal semantic map containing spatial relationships and symbolically grounded uncertainty. Pronobis and Jensfelt 13 fused multiple sensor information and used a probabilistic method to create a multilayer map. Zender et al. 14 conceptualized the environment at different levels of abstraction.
The key part of these methods is to get the semantic information of the environments. For that, object recognition, 15,16 scene recognition, 17,18 database of geometric model, 19,20 human-computer interaction, 21,22 artificial landmark 23,24 are used to get semantic concepts. In recent years, with the advancement of deep learning, the methods based on neural networks occupy a leading position. The interested reader can refer to the work by Kostavelis and Gasteratos 25 for a comprehensive review of semantic mapping approaches for robotic tasks and the work by Landsiedel et al. 26 for spatial reasoning and interacting.
Although these methods successfully created semantic maps, they only describe the occupied grids in occupancy grid maps and cannot be used directly for robot navigation, as path planning algorithms need goal points located on free grids. Moreover, these methods do not consider computational resource consumption or the price of robot sensors, both of which are of great importance for domestic robots.
As for robot navigation, classical methods such as A-Star 27 and the dynamic window approach (DWA) 28 can make robots complete point-to-point navigation tasks successfully but lack semantic information. Inspired by the fact that human beings and higher animals need only significant landmarks and spatial relationships to navigate, some authors try to endow robots with the capacity for cognitive navigation. For example, Ko et al. 29 first built a topological-semantic-metric map and then proposed a human-like semantic navigation method for mobile robots. Crespo et al. 30 proposed a semantic relational model for robot path planning. However, as these methods do not yet achieve high robustness and real-time performance, they cannot be applied in real scenes at the current stage. Unlike the above, this article describes a novel semantic grid mapping method with sonar and stereovision sensors for domestic robots. Our method directly adds semantics to the grid map and builds topological spaces of objects, taking advantage of classical navigation algorithms and human concepts for robust and human-friendly robot navigation.
Grid map with object semantics
This section introduces our object semantic grid mapping method. An overview of our approach is depicted in Figure 1. Based on the sonar grid mapping method, we first use sonar and odometry readings to build a grid map, which has two functions. On one hand, it is the basis for building an object grid map. On the other hand, it provides a priori knowledge for building the object point clouds. Based on the odometry and stereovision information, we then employ the object detection and triangulation method to create the original point clouds with object semantics. To remove the outliers and solve the problem that object detection cannot distinguish different instances of the same concept, the original point clouds are filtered and clustered. Next, we adopt MBRs to describe the topological spaces of the objects. Finally, the topological spaces are merged into the grid map to form an object grid map. For readability, Table 1 lists the abbreviations and notation used in this article.
Occupancy grid map
Sonar sensors are widely used in domestic robots. Compared with laser scanners, they are low-cost, lightweight, and consume little computational resource. We therefore adopt sonar sensors to build the occupancy grid map. However, since sonar measurements are often erroneous and provide no angular information, the map quality may be severely degraded.
In this article, we use the off-line grid mapping method proposed by Lee et al., 31 which is based on approximate maximum likelihood estimation, to solve these problems. In this method, a sound pressure map is used to distinguish correct from incorrect readings. The sub-map M1 is constructed from the correct data, and the erroneous observations are reprocessed based on M1. The sub-map M2 is then created from the reprocessed data. Finally, the grid map M is the union of sub-maps M1 and M2.
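As an illustration of the final fusion step, the sketch below merges two sub-maps cell by cell. It assumes occupancy probabilities in [0, 1] with 0.5 marking unknown cells, which is our simplification of the method in ref. 31 rather than its actual implementation.

```python
import numpy as np

def fuse_submaps(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Union of two sub-maps: each cell is taken from whichever sub-map
    is more informative, i.e. farther from the unknown value 0.5."""
    confidence1 = np.abs(m1 - 0.5)   # how decided sub-map M1 is per cell
    confidence2 = np.abs(m2 - 0.5)   # how decided sub-map M2 is per cell
    return np.where(confidence1 >= confidence2, m1, m2)
```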
Object detection
Our method uses object detection to provide semantic information. In domestic environments there are a large number of object classes, so we first have to determine the categories of detected objects. Since we extract the semantic information of rooms from common-sense relationships between rooms and objects, and since the objects are only used for robot navigation, we decided to detect 10 classes of objects: sofa, coffee table, TV cabinet, TV, bookcase, bed, bedside table, desk, dining table, and wardrobe. Since region-based fully convolutional networks 32 maintain or even improve the accuracy of object localization and classification while offering high computational efficiency, we use them for object detection. Specifically, we detect objects in the left image to obtain bounding boxes and then triangulate the feature points in these regions. In addition, since open-source data sets cannot meet our requirements for domestic robot navigation, we trained and tested the network on our own data set; the confusion matrix of our object detection is shown in Figure 2.
Original point clouds with object semantics
Figure 3 shows the coordinate frames of our system. Since the domestic environment is usually a small-scale scene and the resolution of the grid map is more than 5 cm, we directly employ a triangulation method to acquire the original point clouds instead of a state estimation method such as ORB-SLAM 33 or LSD-SLAM. 34 In addition, to simplify the system and speed up calculations, the robot does not need to visit the complete environment and record all images; instead it navigates to specified points to take images. Specifically, based on the occupancy grid map, the robot moves to the points specified by users and turns around to record images and the corresponding poses. Object detection is then used to acquire the regions of interest (ROI) of objects. For the points in an ROI, ORB 35 is exploited to find corresponding points $(u_L, v_L)$ and $(u_R, v_R)$ in the left and right images. The coordinates of the feature point in the camera frame are then
$z_i^{CL} = \dfrac{f_x b}{(u_L - u_{L0}) - (u_R - u_{R0})}, \qquad x_i^{CL} = \dfrac{(u_L - u_{L0})\, z_i^{CL}}{f_x},$
where $u_{L0}$ and $u_{R0}$ are the $u$ coordinates of the principal points in pixels, $f_x$ is the focal length in horizontal pixels, and $b$ is the length of the baseline. Note that the point clouds are projected onto the grid map, so $y_i^{CL}$ is simply set to zero. The point in the world frame is thus $P_i^W = {}^W T_R \, {}^R T_{CL} \, P_i^{CL}$, where ${}^W T_R \in SE(3)$ is the transform from the world to the robot frame and ${}^R T_{CL} \in SE(3)$ is the transform from the robot to the camera frame.
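The triangulation just described can be condensed into a few lines. The sketch below assumes a rectified stereo pair and uses placeholder names for the SE(3) transforms; it is our illustration, not the authors' code.

```python
import numpy as np

def triangulate_to_world(uL, uR, uL0, uR0, fx, b, T_WR, T_RCL):
    """Recover a map point in the world frame from one ORB correspondence.

    Depth comes from the stereo disparity; the camera-frame y coordinate
    is set to zero because the point cloud is projected onto the 2-D map.
    """
    disparity = (uL - uL0) - (uR - uR0)
    z = fx * b / disparity                  # depth along the optical axis
    x = (uL - uL0) * z / fx                 # lateral offset
    p_cam = np.array([x, 0.0, z, 1.0])      # homogeneous camera-frame point
    p_world = T_WR @ T_RCL @ p_cam          # chain of 4x4 SE(3) transforms
    return p_world[:3]
```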
Filtering and clustering for original points
As shown in Figure 4, the original points include many outliers, and object detection cannot distinguish different instances of the same concept. The information in the grid map, the sizes of objects, and density-based clustering are used to address these problems. As objects are obstacles and can only be located on occupied grids, we first exploit the free grids to remove some outliers. The occupied grids are then collected into a set $O$, and the shortest distance between a point $P_i^W$ and the occupied grids is given as $d_i = \min_{O_j^W \in O} \lVert P_i^W - O_j^W \rVert$. Next, the minimum length $m_{o_i}$ of the object class of $P_i^W$ is used as a prior to define outliers: a point with $d_i > m_{o_i}$ is discarded. Finally, density-based clustering 36 is employed to further filter out outliers and to distinguish different instances with the same concept.
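A minimal sketch of this two-stage filtering, using scikit-learn's DBSCAN for the density-based step, is given below; the eps and min_samples values are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def filter_and_cluster(points, occupied, min_obj_len):
    """points, occupied: (n, 2) and (m, 2) arrays of map coordinates."""
    # Rule 1: a point whose distance to the nearest occupied grid exceeds
    # the minimum length of its object class is treated as an outlier.
    d = np.min(np.linalg.norm(points[:, None, :] - occupied[None, :, :],
                              axis=2), axis=1)
    inliers = points[d <= min_obj_len]
    # Rule 2: density-based clustering removes residual outliers (label -1)
    # and separates different instances sharing one semantic concept.
    labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(inliers)
    return inliers[labels >= 0], labels[labels >= 0]
```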
Object representation
While point clouds use feature points to represent objects, we can adopt a more compact representation. Considering that the detected objects can be approximated as rectangles and are always parallel to walls, MBRs are used to represent objects. In addition, as the MBRs of objects usually include occupied grids that cannot serve as goal points for navigation tasks, we need to define the free spaces near the objects so that the robot can autonomously select goal points. We therefore expand the MBRs to build the topological spaces of objects. As Figure 5 shows, our expanding approach uses a triple $(e, d, l)$ to determine the topological space, where $e$ is the expanding edge, $d$ is the expanding direction, and $l$ is the expanding length. As many objects are located on unknown grids, it is hard to expand the MBRs of objects directly. We therefore use the robot positions from which the objects were observed as additional information to define the topological spaces. Suppose the center of the MBR of object $i$ is $P_{c_i}^W$, the vertices of the MBR that determine an edge $r_j$ are $P_{a_j}^W$ and $P_{b_j}^W$, and the robot pose that observed the object is $P_{r_i}^W$. The line segment $s_i$ connects $P_{r_i}^W$ and $P_{c_i}^W$, and $s_{ij}$ connects $P_{a_j}^W$ and $P_{b_j}^W$. The expanding edge $e_{o_i}$ is the edge whose segment $s_{ij}$ intersects $s_i$, and null otherwise. Based on the expanding edge $e_{o_i}$, the expanding direction $d_{o_i}$ points outward from the object center toward the observing robot pose. Finally, the expanding length $l_{o_i}$ is computed from $l_{g_i}$, the distance between the expanding edge $e_{o_i}$ and the first line segment that is parallel to the edge along the expanding direction $d_{o_i}$ and contains no occupied grids, together with a safety factor $s_r$ that inflates the robot radius $l_r$, and $n_{o_i}$, the length of the topological space containing no occupied grids. According to the triple $(e_{o_i}, d_{o_i}, l_{o_i})$, the generated rectangle defines the topological space of object $i$, and the free grids inside it are assigned the semantics of object $i$.
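To illustrate, the sketch below computes an axis-aligned MBR and pushes one edge outward, which is a simplified stand-in for the full $(e, d, l)$ construction; the names and the axis-aligned assumption are ours.

```python
import numpy as np

def mbr(points):
    """Axis-aligned minimum bounding rectangle of an (n, 2) point cluster."""
    (xmin, ymin), (xmax, ymax) = points.min(axis=0), points.max(axis=0)
    return xmin, ymin, xmax, ymax

def expand_mbr(box, direction, length):
    """Push the edge matching `direction` outward by `length`.

    `direction` is a unit step such as (0, 1) for +y, mirroring the
    expanding direction d in the (e, d, l) triple.
    """
    xmin, ymin, xmax, ymax = box
    dx, dy = direction
    return (xmin + min(dx, 0) * length, ymin + min(dy, 0) * length,
            xmax + max(dx, 0) * length, ymax + max(dy, 0) * length)
```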
Semantic concepts of rooms
In order to acquire room concepts, we first use spectral clustering to coarsely segment the grid map and then use the topological spaces of objects and domestic common sense to merge the segmentation results.
Normalized graph cut and spectral clustering
Suppose an undirected weighted graph $G(V, E)$ is composed of nodes $V = \{V_1, V_2, \dots, V_n\}$ and edges $E = \{E_1, E_2, \dots, E_m\}$, with weight $W(i, j)$ between $V_i$ and $V_j$. We want to divide $G$ into two subgraphs $G_1$ and $G_2$ such that $G_1 \cup G_2 = G$ and $G_1 \cap G_2 = \emptyset$. To obtain the best division, the normalized cut criterion 37 is
$Ncut(G_1, G_2) = \dfrac{cut(G_1, G_2)}{assoc(G_1, G)} + \dfrac{cut(G_1, G_2)}{assoc(G_2, G)},$
where $Ncut(G_1, G_2)$ is the objective function, $cut(G_1, G_2)$ is the cost of the division (the weighted sum of edges removed by the cut), and $assoc(G_1, G)$ is the weighted sum of edges connecting $G_1$ to the whole graph. However, finding the minimum of $Ncut$ is an NP-hard problem, so spectral clustering is used to obtain an approximate solution with the following steps: 1. Down-sample the grid map into a graph $G(V, E)$. 2. Compute the affinity matrix $W$ by $W(i, j) = \exp\!\big(-\lVert V_i - V_j \rVert^2 / (\sigma_i \sigma_j)\big)$, where $\sigma_i$ is defined as the median distance to the $n$ nearest neighbors of $V_i$.
3. Define the diagonal degree matrix $D$ with $D_{ii} = \sum_j W(i, j)$ and construct $L = D^{-1/2} W D^{-1/2}$.
4. Select a desired number of subgraphs $k$. 5. Find $x_1, \dots, x_k$, the $k$ largest eigenvectors of $L$, and form the matrix $X = [x_1, \dots, x_k] \in \mathbb{R}^{n \times k}$. 6. Set $Y \in \mathbb{R}^{n \times k}$ to be $X$ with rows renormalized to unit length, such that $Y_{ij} = X_{ij} / \big(\sum_j X_{ij}^2\big)^{1/2}$.
7. Run k-means clustering on the rows of $Y$. 8. Assign $V_i$ to cluster $j$ if and only if row $i$ of $Y$ is assigned to cluster $j$. 9. Assign the remaining grids to clusters by $n$-nearest-neighbor voting: a grid joins cluster $k$ if and only if most of its neighbors belong to cluster $k$.
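A compact realization of steps 1 to 8 is sketched below in Python, assuming the down-sampled free-grid coordinates are given as an n × 2 array; this is our illustration of the procedure, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_segment(V, k, n_neighbors=7):
    """V: (n, 2) down-sampled grid coordinates; returns n cluster labels."""
    dist = np.linalg.norm(V[:, None] - V[None, :], axis=2)   # pairwise distances
    # Self-tuning scale: median distance to the n nearest neighbors (step 2).
    sigma = np.median(np.sort(dist, axis=1)[:, 1:n_neighbors + 1], axis=1)
    W = np.exp(-dist**2 / (sigma[:, None] * sigma[None, :]))  # affinity matrix
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = W / np.sqrt(d[:, None] * d[None, :])        # D^{-1/2} W D^{-1/2} (step 3)
    _, vecs = np.linalg.eigh(L)
    X = vecs[:, -k:]                                 # k largest eigenvectors (step 5)
    Y = X / np.linalg.norm(X, axis=1, keepdims=True)  # row renormalization (step 6)
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)   # steps 7-8
```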
Using common sense to improve the segmentation results
Based on the results of spectral clustering, the grid map can be segmented into k clusters. Figure 6 shows that some of them contain object semantics and some do not. (Figure 6: example of the graph generated by topological segmentation; some nodes have object semantics, and some do not.) As spectral clustering is an unsupervised learning method, the number of clusters k is a predefined parameter that needs to be tuned for different domestic environments. In addition, even when k equals the number of rooms, spectral clustering is not always able to produce the correct result, so many parameters would have to be tuned manually. To overcome this limitation, we choose a larger k to coarsely segment the grid map and use the object semantic map and domestic common sense to merge the segmentation results. From the k clusters, a region adjacency graph $G'(V', E')$ is formed with $V' = \{V'_1, \dots, V'_k\}$ and $E' = \{E'_1, \dots, E'_q\}$, and a weight $W'(i, j)$ is assigned between nodes $V'_i$ and $V'_j$. For each node containing object semantics, region growing with a threshold $l_d$, based on the length of doors in domestic environments, is used to merge the regions without semantics.
Experimental results
This section presents the experimental results of the proposed semantic grid mapping method and the navigation performance using the generated map. The experiments were conducted with a FABO robot in a home-like environment containing three rooms and some domestic objects (Figure 7). The robot is a differential-drive robot equipped with 13 sonar sensors and a stereovision camera. It also has speech recognition capability, so end users can command navigation tasks in natural language to evaluate the navigation performance.
The robot was first manually controlled at an average speed of about 0.2 m/s while acquiring sensor data at 4 Hz and recording the coordinates of the designated positions used to take images. An occupancy grid map was then built with a grid size of 12 cm × 12 cm. Based on the grid map, the robot autonomously went to each recording position to collect images and built a semantic grid map. Finally, the robot was asked by an end user, in natural language, to go to six different places.
Semantic grid map
The performance of sonar grid mapping was compared with an open-source laser-based SLAM method, Cartographer. 38 Since the FABO does not have a laser sensor, a TurtleBot equipped with a SLAMTEC RPLIDAR A2 was used to build the laser-based grid map. Figure 8 shows the resulting occupancy grid maps. Although the laser-based grid map captures the overall shape of the environment more faithfully and provides more detailed environment information, the sonar-based grid map is sufficient for successful robot navigation. In addition, a 2-D laser sensor scans in a two-dimensional plane, while the sonar projects a three-dimensional cone. Thus, as shown in the dashed red areas in Figure 8(a) and (b), the laser sensor cannot detect obstacles above or below the laser plane that are still detectable by the sonar. These results further demonstrate the advantages of sonar sensors in domestic environments. We also used the sonar maximum sound pressure map 31 to evaluate the sonar sensor errors and the resulting improvement: of 6,975 sonar readings, 22.29% were correct, meaning the method filtered out the 77.71% of readings that were incorrect. Based on the sonar grid map, Figure 9 shows the process of the proposed object semantic mapping method. Using object detection and triangulation, the original object point clouds are generated as shown in Figure 9(a). Most of the object points are located in unknown areas because the objects are obstacles in the occupancy grid map.
Based on our filtering and clustering method, the outliers are easily filtered out and the different instances of the same concepts can be distinguished. Note that the size of the robot must be considered in the grid map, so the two free grids nearest the occupied grids are treated as occupied, as shown in Figure 9(b). Figure 9(c) shows the MBRs and expanded rectangles used to represent the objects compactly. Although some objects such as bed1 and sofa2 deviate slightly from their corresponding positions in the grid map, this has no significant impact on their generated topological spaces. The experimental results of object semantic mapping therefore show that the proposed method can successfully add object semantics to the grid map. Figure 10 shows the results of topological segmentation of rooms. Even knowing that the correct result is an environment with three rooms, Figure 10(a) and (b) indicates that it is hard for topological segmentation alone to divide the map into the correct rooms; many parameters would have to be tuned manually to get the correct result. However, based on the object grid map and domestic common sense, we only need to tune the cluster number k. The domestic common sense used in our experiments includes: the living room contains a TV, TV cabinet, sofa, and tea table; the bedroom contains a bed, desk, and wardrobe. One example of our segmentation and merging results is shown in Figure 10(c) and (d). To prove the effectiveness of our method, we tested two different parameter settings and varied the cluster number k from 3 to 20. The first test used a kth-nearest-neighbor number $k_{nn} = 3$ and scaling parameter $k_s = 7$; the second used $k_{nn} = 3$ and $k_s = 3$. Figure 11 shows that topological segmentation based on the object semantic grid map can obtain the correct room concepts by changing only the cluster number k.
Navigation
To evaluate the effectiveness of the proposed semantic grid map, a user commanded the robot, in turn, to go to the TV cabinet, sofa1, the desk, the wardrobe, bed1, and sofa2. As shown in Figure 12(a), the user first asked the robot to go to the TV cabinet. According to the generated semantic grid map, and considering the radius of the FABO, we inflated the occupied grids by two free grids. To increase the diversity of navigation behavior, the goal points were selected at random within the topological spaces of the objects, excluding the inflated grids, as sketched below. AStar was then used to generate the navigation paths. Figure 12(b) shows the final pose for the first navigation task. All goal points and paths for the navigation tasks are shown in Figure 12(c).
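The goal-selection step can be expressed in a few lines; the sketch assumes each object's topological space and the inflated cells are stored as collections of grid coordinates, with names of our choosing.

```python
import random

def sample_goal(topological_space, inflated_cells):
    """Pick a random admissible goal cell inside an object's topological
    space, excluding cells inflated for the robot's radius."""
    candidates = [cell for cell in topological_space
                  if cell not in inflated_cells]
    return random.choice(candidates) if candidates else None
```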
Since we used only odometry to build the semantic grid map and to navigate in it, we could not obtain ground truth. We therefore adopted the maximum pose error to evaluate the navigation performance. We used manual loop closing, making the robot start from the charging pile and finally return to it, to obtain an optimized trajectory. Specifically, we recorded the odometry readings and used g2o 39 to optimize the initial trajectory. Note that since the pose graph has only one loop closure, the error propagates the farther the robot is from the charging pile. However, because the maximum error is mapped onto the trajectory poses and the moving distance is not long, this method gives a relatively accurate result.
As can be seen from Figure 13, the maximum error of the robot is 9.6 cm. The resolution of the grid map is 12 cm × 12 cm, so the error is less than one grid. From the perspective of indoor applications, as long as the robot stops near the specified object and there is no collision during navigation, the run can be regarded as successful; by this criterion, the FABO successfully completed the navigation tasks. However, the uncertainty of the robot pose grows as the moving distance increases, which may prevent the robot from completing further navigation tasks. To solve this problem, we will use the charging pile as a global landmark to reduce the pose uncertainty in future work. Table 2 presents the time evaluation of the navigation tasks, including planning time and execution time. In terms of planning time, the third path was the most time consuming at 0.5121 s; the sixth path took 0.3677 s, and the other paths took around 0.1 s. As the domestic environment is small scale, this meets the requirements of actual applications. The execution time is much longer, partly because we set the average moving speed of the robot to 0.2 m/s. The execution time could be reduced by increasing the robot's speed, but for indoor service robots, safety comes first.
In general, the results indicate that the robot can successfully complete navigation tasks in a small-scale 2-D environment, and users can conveniently command the robot in natural language.
Conclusion
As more and more robots enter domestic environments, they need semantic knowledge so that human beings can operate them conveniently. In this work, we presented a novel semantic mapping method for domestic navigation applications. Our method enhances the occupancy grid map with the semantics of objects and rooms, endowing robots with robust navigation skills and semantic knowledge about their environments. Concretely, the proposed method first builds an occupancy grid map and object point clouds separately and then fuses them to obtain an object grid map. Based on the generated map and domestic common sense, topological segmentation is used to obtain room concepts. Experiments on semantic mapping and navigation were conducted with a FABO robot in a home-like environment. The results show that the mapping method represents the environment with satisfactory precision and enables the robot to complete domestic navigation tasks in a human-friendly way.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work is supported by the National Natural Science Foundation of China (no. 91748101).
Performance Analysis of NOMA for Ultra-Reliable and Low-Latency Communications
Grant-free non-orthogonal multiple access (NOMA) has been regarded as a key enabling technology for ultra-reliable and low-latency communications (URLLC). In this paper, we analyse the performance of NOMA with short-packet communications for URLLC. In this setting, the overall packet loss probability consists of the transmission error probability and the queueing-delay violation probability. The queueing delay is modelled using the effective bandwidth. Due to the short transmission time, the infinite-blocklength assumption is replaced with the finite blocklength of the channel codes, which rules out the application of Shannon's formula. The achievable effective bandwidth of the system is derived, and the transmission error probability is then analysed. The derivations are validated through extensive simulations, which show how the signal-to-noise ratio (SNR) requirements of the system vary with the transmission error probability, the QoS exponent, and the transmission packet size.
I. INTRODUCTION
Compared with previous cellular generations, the fifth generation (5G) is considered more innovative, promising seamless connectivity for massive machine-type communications (mMTC) and ultra-reliable and low-latency communications (URLLC) [1]. For URLLC, achieving low latency in conjunction with very high reliability is a challenging task. However, with latency as low as 1 ms and 99.999% reliability, mission-critical communications can be ensured. This can support the ballooning demands of critical applications such as the Tactile Internet, autonomous vehicles, and factory automation [2].
Non-orthogonal multiple access (NOMA) has been introduced to address the need for high data rates while supporting multiple users within the same time/frequency resource block [3]. Compared with orthogonal multiple access (OMA), NOMA can provide higher spectral efficiency and energy efficiency. The potential benefits of NOMA can be further exploited by integrating it with other techniques such as short-packet communications. Moreover, grant-free NOMA has been proposed as a potential enabler for URLLC to support time-critical applications [4].
Researchers have come up with different design paradigms to achieve URLLC; grant-free NOMA, in fact, is envisioned as a potential solution for achieving the low latency required by URLLC. With short-packet communications of finite blocklength, the conventional Shannon capacity has been replaced with the formula proposed in [5]. (Part of this work has been supported by the H2020-MSCA-RISE-2018 project "RECENT".)
In most existing studies, only the transmission delay and the transmission error probability have been taken into account to model the delay and reliability requirements of URLLC. However, the queueing delay can constitute the largest part of the latency in the system. In this regard, the concept of effective bandwidth has been used to model the queueing delay of short-packet communications in [2], which also considers the transmission error probability, the queueing-delay violation probability, and the proactive packet dropping probability; the proposed scheme proactively drops some packets to ensure high reliability and low latency. Other design paradigms have also been adopted to improve the transmission error probability in [6]-[9]. In particular, polar codes are introduced in [6] to minimize the delays caused by signal processing, coding, and transmission. Various diversity schemes have also been exploited to minimize transmission and other delays while supporting URLLC: spatial diversity is used to improve reliability in [7], while frequency diversity is investigated in [8]. Further, [9] shows that the delay and reliability requirements of URLLC can be met by increasing the number of antennas at the BS.
On the other hand, the performance of NOMA with short-packet communications is investigated in [10], where the transmission error probability is characterized through the block error rate under finite blocklength and short transmission time. Simulation results demonstrate the effectiveness of NOMA compared with OMA in reducing the transmission latency. However, the queueing delay and the queueing-delay violation probability are not taken into consideration when modelling the reliability and latency requirements in [10].
Against this background, in this paper we aim to answer the following question: given the low-latency and high-reliability requirements of URLLC, how much latency can NOMA reduce while satisfying the given reliability constraints?
Employing NOMA with short-packet communications offers the following advantages:
• Compared with NOMA, conventional OMA schemes require the base station (BS) to employ a contention-based access scheme to mitigate collisions among multiple users. This results in increased delay, as collisions grow with the number of users; hence, for ultra-low latency, OMA cannot be employed.
• NOMA uses grant-free access and thus avoids grant acquisition. This grant-free access helps satisfy the required delay and reliability requirements as the number of users increases.
• NOMA can also be easily integrated with other coding schemes, such as polar codes, to achieve low latency for ultra-delay-sensitive applications.
This paper is a first attempt to analyse the performance limitations of NOMA with short-packet communications for URLLC. The major contributions of our work can be summarized as follows:
• A performance analysis of NOMA for URLLC is provided, considering the queueing delay, error rate, and packet size.
• The transmission error probability and the queueing-delay violation probability with finite-blocklength codes are taken into consideration to ensure the overall reliability.
• An analysis of the effective bandwidth of NOMA in URLLC is provided. This analysis is then used to find the required SNR for a given latency and error rate.
• Extensive simulations are carried out to investigate the performance of NOMA for URLLC, taking into consideration the required SNR, the transmission error probability, the delay exponent, and different packet sizes.
The rest of the paper is organized as follows. Section II provides the system model and problem formulation. Section III shows the performance evaluation of the proposed analysis, and Section IV concludes the paper with future research directions.
II. SYSTEM MODEL AND PROBLEM FORMULATION
We consider a two-user power-domain NOMA operation. Of the two users, one experiences a better channel condition, i.e. $|h_1|^2 \ge |h_2|^2$, where $h_i$ is the channel coefficient between user $k_i$ and the BS, $i = \{1, 2\}$. According to the NOMA operation, the broadcast signal from the BS to the users can be written as
$x = \sqrt{\alpha_1 P}\, m_1 + \sqrt{\alpha_2 P}\, m_2,$
where $\alpha_i$ is the power allocation coefficient, $P$ is the transmitted power, and $m_i$ is the message of user $k_i$. The received signal at user $i$ is hence
$y_i = h_i x + n_i,$
where $n_i$ is the additive white Gaussian noise at user $i$. According to the conventional NOMA operation, $k_2$ decodes $m_2$ directly. The resulting signal-to-interference-plus-noise ratio (SINR) at $k_2$ to decode $m_2$ can hence be approximated as
$\gamma_{22} = \dfrac{\alpha_2 \rho |h_2|^2}{\alpha_1 \rho |h_2|^2 + 1},$
where $\rho$ is the transmit signal-to-noise ratio (SNR). Similarly, $k_1$ first decodes $m_2$; the received SINR at $k_1$ for decoding $m_2$ is
$\gamma_{12} = \dfrac{\alpha_2 \rho |h_1|^2}{\alpha_1 \rho |h_1|^2 + 1}.$
After successfully decoding and cancelling $m_2$, the received SNR at $k_1$ to decode $m_1$ is $\gamma_{11} = \alpha_1 \rho |h_1|^2$. The overall reliability is captured by the overall packet loss probability $\varepsilon_D$ of a single user, which combines the transmission error probability and the queueing-delay violation probability:
$\varepsilon_D = \varepsilon_C + \varepsilon_Q,$
where $\varepsilon_C$ is the transmission error probability and $\varepsilon_Q$ is the queueing-delay violation probability. The normal operation of NOMA revolves around superposition coding (SC) at the transmitter and interference cancellation at the receiver. After accounting for the queueing delays and successive interference cancellation at the two users, the overall reliability can be summarized in terms of $\varepsilon_{C_{ij}}$ and $\varepsilon_{Q_{ij}}$, the transmission error probability and queueing-delay violation probability of user $k_i$ when decoding message $m_j$, $i, j = \{1, 2\}$. In this paper, we use the concept of effective bandwidth, which models the performance of the system under a queueing-delay violation constraint. The effective bandwidth can be defined as the minimal constant service rate required to satisfy a given queueing-delay constraint for a given arrival process; it can be derived through a large-deviation approximation. For a Poisson arrival process $\{A_p\}$, the effective bandwidth can be written as [11]
$E^B_{ij}(\theta_{ij}) = \lim_{N \to \infty} \dfrac{1}{N T_f \theta_{ij}} \ln \mathbb{E}\!\left[ e^{\theta_{ij} \sum_{n=1}^{N} A_p(n)} \right],$
where $\theta_{ij}$ is the quality-of-service (QoS) exponent for user $k_i$ to decode message $m_j$ (a smaller $\theta_{ij}$ indicates a larger queueing-delay bound), $T_f$ is the frame duration, and $\mathbb{E}[\cdot]$ is the expectation operator. If the arrival rate is constant, the queueing-delay violation probability can be derived as [12]
$\Pr\{D > D_q^{\max}\} \approx \eta_{ij} \exp\!\left(-\theta_{ij} E^B_{ij}(\theta_{ij}) D_q^{\max}\right),$
where $\Pr\{a > b\}$ denotes the probability that $a$ exceeds $b$, $D_q^{\max}$ is the delay bound, $\approx$ denotes approximation, and $\eta_{ij}$ is the probability of a non-empty buffer, which is approximated accurately as the queue length tends to infinity. Since $\eta_{ij} \le 1$, the queueing-delay violation probability for user $k_i$ to decode message $m_j$ can be approximated as
$\varepsilon_{Q_{ij}} \approx \exp\!\left(-\theta_{ij} E^B_{ij}(\theta_{ij}) D_q^{\max}\right).$
The maximum number of packets that can be transmitted to any single user in frame $n$ can be approximated as [13]
$s_{ij}(n) = \dfrac{\phi B}{u \ln 2} \left[ \ln\!\left(1 + \gamma_{ij}\right) - \sqrt{\dfrac{V_{ij}}{\phi B}}\; Q^{-1}\!\left(\varepsilon_{C_{ij}}\right) \right],$
where $\phi$ is the duration of the data transmission phase, $B$ is the bandwidth, $\phi B$ is the blocklength available for short-packet communications, $u$ is the packet size in bits, $N_0$ is the spectral density of the noise, and $Q^{-1}(\cdot)$
is the inverse of the Gaussian Q-function. $V_{ij}$ is the channel dispersion for user $k_i$ decoding message $m_j$, which can be approximated by [14]
$V_{ij} = 1 - \left(1 + \gamma_{ij}\right)^{-2}.$
If the $k_i$th user is served at the constant rate $E^B_{ij}(\theta_{ij})$, we can approximate the number of packets transmitted in frame $n$ by setting $s_{ij}(n) = E^B_{ij}(\theta_{ij})$, following [2]. From (4), (5), (6), and (15), the required SINR at user $k_i$ to decode message $m_j$ can then be obtained by solving this equality for $\gamma_{ij}$. The performance of the proposed NOMA scheme for achieving URLLC can be compared with OMA. For any single user $k_i$, the achievable rate of orthogonal frequency-division multiplexing (OFDM) for URLLC takes the same finite-blocklength form, where $V_i$ and $\varepsilon_{C_i}$ are the channel dispersion and transmission error probability of the $k_i$th user under the OMA operation. From (15), the required SNR used to evaluate the transmission error probability ($\varepsilon_{C_i}$) and the queueing-delay violation probability ($\varepsilon_{Q_i}$) of the $k_i$th user under OFDMA follows accordingly, where $\gamma_i$ is the SNR and $E^B_i(\theta_i)$ is the effective bandwidth with $\theta_i$ as the QoS exponent of the $k_i$th user.
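To make the link between the effective bandwidth and the required SNR concrete, the sketch below numerically inverts the finite-blocklength rate reconstructed above; the function names and search interval are illustrative, and the normal approximation is assumed throughout.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcinv

def qfunc_inv(eps):
    """Inverse Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return np.sqrt(2.0) * erfcinv(2.0 * eps)

def packets_per_frame(gamma, phiB, u, eps_c):
    """Finite-blocklength packet rate for SINR gamma, blocklength phiB,
    packet size u (bits), and transmission error probability eps_c."""
    V = 1.0 - 1.0 / (1.0 + gamma) ** 2          # channel dispersion
    rate = np.log(1.0 + gamma) - np.sqrt(V / phiB) * qfunc_inv(eps_c)
    return phiB / (u * np.log(2.0)) * rate

def required_snr(target_rate, phiB, u, eps_c):
    """Smallest gamma whose packet rate meets the effective-bandwidth
    target E_B(theta); solved by root finding on a wide bracket."""
    f = lambda g: packets_per_frame(g, phiB, u, eps_c) - target_rate
    return brentq(f, 1e-6, 1e6)
```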
III. PERFORMANCE EVALUATION
In this section, we analyse the effective bandwidth of NOMA for URLLC. Simulations were performed to find the required SNR/SINR for given error-rate and latency requirements. The simulation parameters are listed in Table I, unless otherwise specified for individual results.
For given transmission error probability and queueing-delay violation probability, the required SNR for a NOMA user is analysed in detail. The queueing delay with an extremely short delay bound is derived from the effective bandwidth of the Poisson arrival process. Fig. 1 shows the SNR requirements for transmission error probabilities from $\varepsilon_D = 10^{-5}$ to $10^{-3}$ and delay QoS exponent $\theta_{ij} = 0.1$ with different packet sizes. The required SNR is small for small packet sizes and increases with the packet size. However, as the allowed channel errors (transmission error probability) increase, the required SNR decreases; conversely, tighter reliability requirements demand a higher SNR. The duration of data transmission within one frame is analysed in Fig. 2: increasing the data transmission duration within the given frame duration of $T_f = 0.5$ ms also lowers the required SNR. From Figs. 1 and 2, it is clear that short packets and longer data transmission durations reduce the required SNR.
The queueing delay for NOMA in URLLC with an extremely short delay bound ($D_{\max} = 0.8$ ms) was also investigated through simulations. Fig. 3 shows the required SNR for different values of the delay QoS exponent. A larger $\theta_{ij}$ corresponds to a more stringent delay requirement, and as the delay requirement tightens, the required SNR increases: from $\theta_{ij} = 0.1$ to 1, the delay requirements become more stringent, so the SNR requirements grow accordingly. This trend was also investigated for packet sizes of 5, 15, and 25 bytes with a fixed data transmission duration; here too, short packets show smaller SNR requirements than large packets for the same delay requirements. The delay analysis for different data transmission durations with a packet size of 15 bytes is shown in Fig. 4. Short data transmission durations appear more susceptible to stringent delay requirements than slightly longer ones. For this analysis the frame duration was fixed at $T_f = 0.5$ ms.
In Figs. 1 to 4, the SNR requirements of NOMA systems for ultra-reliable and low-latency communications are analysed for different transmission error probabilities and queueing-delay violation probabilities. The simulation results make clear that employing short packets to ensure ultra-reliability and low latency has a significant impact on the SNR required to achieve a given latency and error rate for NOMA in URLLC.
As future work, we will perform an in-depth performance analysis of NOMA for URLLC with different arrival rates and by optimizing the different resources in the system. We will also integrate other state-of-the-art technologies, such as cognitive radio networks [15], [16] and wireless sensor networks [17], with NOMA-URLLC to investigate delay-sensitive applications.
IV. CONCLUSION
In this paper, the performance of power-domain non-orthogonal multiple access (NOMA) for ultra-reliable and low-latency communications (URLLC) has been analysed with the help of the effective bandwidth. The queueing delay and transmission delay are considered to model the latency, while the reliability is modelled using the transmission error probability and the queueing-delay violation probability. NOMA significantly reduces the physical-layer latency and improves the reliability of URLLC to support time-critical applications. Extensive simulations have been carried out to show the SNR requirements for different transmission error probabilities, delay QoS exponents, and packet sizes.
Low-rate smartphone videoscopy for microsecond luminescence lifetime imaging with machine learning
Abstract Time-resolved techniques have been widely used in time-gated and luminescence lifetime imaging. However, traditional time-resolved systems require expensive lab equipment such as high-speed excitation sources and detectors, or complicated mechanical choppers, to achieve high repetition rates. Here, we present a cost-effective and miniaturized smartphone lifetime imaging system integrated with a pulsed ultraviolet (UV) light-emitting diode (LED) for 2D luminescence lifetime imaging using a videoscopy-based virtual chopper (V-chopper) mechanism combined with machine learning. The V-chopper method generates a series of time-delayed images between excitation pulses and smartphone gating so that the luminescence lifetime can be measured at each pixel using a relatively low acquisition frame rate (e.g. 30 frames per second [fps]) without the need for excitation synchronization. Europium (Eu) complex dyes with luminescence lifetimes ranging from microseconds to seconds were used to demonstrate and evaluate the principle of the V-chopper on a 3D-printed smartphone microscopy platform. A convolutional neural network (CNN) model was developed to automatically distinguish the gated images in different decay cycles with an accuracy of >99.5%. The current smartphone V-chopper system can detect lifetimes down to ∼75 µs utilizing the default phase shift between the smartphone video rate and excitation pulses, and in principle can detect much shorter lifetimes by accurately programming the time delay. This V-chopper methodology eliminates the need for the expensive and complicated instruments used in traditional time-resolved detection and can greatly expand the applications of time-resolved lifetime technologies.
Introduction
Time-resolved techniques, including time-gated autofluorescence-free imaging and fluorescence lifetime detection, have drawn significant attention in the past decades (1,2). By taking advantage of a long-lived luminescence probe (3)(4)(5)(6), the high background scattering and autofluorescence in biological samples can be effectively removed using time-resolved luminescence detection. Moreover, coded luminescence lifetimes can be exploited for temporally multiplexed detection assays, enabling multichannel detection while minimizing crosstalk between detection channels, which is a common limitation of the spectral multiplexing method (7,8).
Therefore, time-resolved detection and luminescence lifetime imaging have found a wide variety of applications, such as high-contrast, in vivo imaging of cells and tissues (9)(10)(11), detection of rare diseased cells and pathogenic microorganisms (12)(13)(14), ultrasensitive bioassays (15)(16)(17), and physiological sensing (e.g. pH and temperature) (18)(19)(20). A range of analytical instruments such as spectrometers, microscopes, and flow cytometers have been adapted to enable time-resolved luminescence measurements (21)(22)(23)(24)(25)(26). However, most current systems for time-resolved and lifetime detection require complicated mechanical choppers to achieve high repetition rates, or expensive equipment such as high-speed excitation sources and detectors such as photomultiplier tubes (PMT), streak cameras, and intensified charge-coupled device (CCD) cameras, to provide the temporal resolution (26)(27)(28)(29). The bulkiness and complexity of current time-resolved instruments have therefore posed significant challenges to broad access to this technology outside of well-equipped laboratories.
Portable and cost-effective time-resolved devices are promising platforms for point-of-care (POC) monitoring in medical, agricultural, and environmental applications. In particular, modern smartphones equipped with advanced processing units and camera modules are an emerging platform for field-portable time-gated or time-resolved detection (30). For instance, time-gated imaging has been adopted on the smartphone by capturing persistent post-excitation luminescence (31,32). On the other hand, lifetime resolving and imaging can be achieved by exponentially fitting the pixel intensities in consecutive time-gated frames over time on the smartphone (33)(34)(35)(36)(37)(38). However, the temporal resolution of the smartphone camera is very limited due to its low frame rate (typically 30 frames per second [fps]). Even though the frame rate can reach 60 fps or higher on some smartphone models, it is still inadequate to detect lifetimes in the submillisecond range. As such, for demonstration purposes, many previous smartphone-based time-gated platforms (31)(32)(33)(34)(38) selected persistent luminescent phosphors with ultralong lifetimes of hundreds of milliseconds to seconds (s). Additional mechanical apparatus such as choppers and motorized turntables had to be included in the system to break the limitation of the temporal resolution of the smartphone for the measurement of shorter lifetimes in the microsecond (µs) range (35)(36)(37). Recently, B. Xiong and Q. Fang reported a portable lifetime imaging system that can detect lifetimes in the subhundred-microsecond range using a smartphone camera with an electronic rolling shutter (ERS). The lifetime was extracted by measuring the phase shift of fringe profiles captured by the smartphone camera with an ERS at 30 fps. However, the lifetime image is sensitive to fringe distortion and cannot provide high pixel-by-pixel resolution due to the frequency-domain method of lifetime calculation (39).
Here, a cost-effective and miniaturized smartphone-based lifetime imager was developed for luminescence lifetime quantification on the microsecond time scale using a virtual chopper (V-chopper) concept combined with machine learning. The smartphone V-chopper system was integrated with a pulsed UV LED, a UV reflection mirror, and a 615 nm band-pass filter in a 3D-printed enclosure. The V-chopper mechanism used the video rate (30 fps) of the smartphone to record repeated luminescence decay cycles and a convolutional neural network (CNN) model to extract correct gated images with >99.5% accuracy from different modulation cycles for lifetime image reconstruction. The effect of light pulse frequency, duty cycle, smartphone frame rate, and exposure time was systematically studied both experimentally and theoretically. Under the optimal setting, the smartphone V-chopper system can resolve luminescence lifetimes from Europium (Eu) complex dyes as short as 75 µs. This portable smartphone V-chopper system decoupled traditional time-resolved detection from expensive and complicated instruments such as mechanical choppers and high-speed detectors, making lifetime measurements practical in resource-limited settings.
Design of the smartphone V-chopper lifetime imaging device
The prototype for the smartphone V-chopper lifetime imaging device utilized a 3D-printed enclosure to integrate a UV LED (365 nm, M365L3, Thorlabs), a condenser lens (ACL2520U-A, Thorlabs), and a UV-enhanced reflection mirror (PFSQ10-03-F01, Thorlabs) with the smartphone camera (Fig. 1a and b). The UV LED was controlled by an LED driver (LEDD1B, Thorlabs) in the trigger or modulation mode and pulsed by a square-wave voltage source (DG1062Z, Rigol). A 615 nm band-pass filter (87-739, Edmund Optics) was inserted in the enclosure in front of the phone camera when detecting luminescent signals from Eu complex dyes. The dyes were deposited on a glass slide, which was inserted at the bottom of the 3D-printed enclosure. A smartphone (Samsung Galaxy S9) with manual video control (e.g. ISO, focal length, shutter speed, video frame rate, and image resolution) was placed on top of the enclosure as the detector. A 60 fps video frame rate was used for lifetime detection of ultralong luminescent materials (seconds). For the measurement of microsecond-lifetime targets, a normal video rate (30 fps) was used combined with the V-chopper principle, as detailed below.
To deliver a uniform illumination to the sample slide, a 1" × 1" UV-enhanced mirror was used to reflect the excitation instead of direct illumination. The highly divergent emission from the UV LED was first collimated using the aspheric condenser lens (f = 20.1 mm, NA = 0.60) and then evenly projected onto the glass slide by a tilted mirror (Fig. 1a). The tilt angle of the mirror is designed to be ∼67.5 degrees relative to the slide surface, so that the illumination center is aligned with the field of view (35 × 63 mm²) of the smartphone. Figure 1c shows an autofluorescence image of printing paper under UV excitation. The fluorescence intensity across the slide was extracted with Matlab for the R, G, and B channels, respectively, showing the uniformity of the signals for subsequent measurements and experiments.
Resolving ultralong lifetime in single decay cycle
We first tested the smartphone device for lifetime imaging of persistent luminescent probes by acquiring multiple gated images per decay cycle (Fig. 2a). Ultralong or persistent luminescent phosphors can glow for a relatively long time, from seconds to even days, after the excitation source is switched off. This long luminescence is also called "afterglow" (40). Four afterglow composite powders of calcium sulfide (red) and strontium aluminate europium dysprosium (malachite, jade green, and cyan) were used as testing samples and evenly dispensed on adhesive tape, forming the four letters "N," "C," "S," and "U," respectively (Fig. 2b). The glowing letters were sandwiched between glass slides and then inserted into the smartphone enclosure for lifetime imaging. The letters were excited for 10 s (0.005 Hz and 5% duty cycle) with an irradiance of 1 mW/cm². No band-pass filter was used in this application, in order to capture all four colors in the visible wavelength range. The smartphone recorded a video covering both the UV-on (10 s) and UV-off (∼190 s) periods at a frame rate of 60 fps and an exposure time of 1/60 s (Video S1). The image frames were then extracted from the recorded video by Matlab, and the time-gated frames were identified based on the background autofluorescence level. The last frame with the UV on was assigned as the #0 frame (Fig. 2b, top), and the first frame after UV off was labeled as the #1 frame, which is also the first time-gated image in the gating cycle (Fig. 2b, middle). The delay time equals an integer multiple of the frame interval, which is the reciprocal of the frame rate (e.g. 1/60 s for frame #1, 2/60 s for frame #2, and 3/60 s for frame #3). In total, 8,000 gated frames (from #1 to #8,000) after UV off were identified per decay cycle to resolve the luminescence lifetime. The lifetime images were then reconstructed based on the lifetime determined at each pixel, which was calculated by exponentially fitting the intensities of each pixel over the delay time using the gated frames. The bottom panel in Fig. 2b shows a representative smartphone lifetime image generated by Matlab, where different colors represent different lifetime values (0 to 40 s). The image clearly shows different lifetime values for the different letters. For instance, letter "C" has the shortest lifetime of around 1 s and letter "U" has the longest lifetime of about 30 s. The average pixel intensities for each of the four letters were plotted (solid lines) in Fig. 2d as the luminescence decay curves, confirming the different lifetimes of the different phosphors. The duration and irradiance of the UV excitation both had a strong effect on the lifetime of the luminescent materials used to pattern the letters, and the lifetimes measured under different excitation conditions are summarized in Table S5.
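To make the per-pixel fitting step concrete, the following is a minimal sketch, assuming the gated frames are stacked in a (T, H, W) array and the delay times are known; the function names and initial guesses are ours, not the authors' Matlab code.

```python
import numpy as np
from scipy.optimize import curve_fit

def lifetime_image(frames, t):
    """Fit I(t) = A * exp(-t / tau) + c at every pixel of a gated stack."""
    decay = lambda t, A, tau, c: A * np.exp(-t / tau) + c
    H, W = frames.shape[1:]
    tau_map = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            y = frames[:, i, j].astype(float)
            try:
                popt, _ = curve_fit(decay, t, y,
                                    p0=(y[0], t[-1] / 3, y[-1]),
                                    maxfev=2000)
                tau_map[i, j] = popt[1]
            except RuntimeError:   # fit failed: leave the lifetime at zero
                pass
    return tau_map
```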
To validate the lifetimes of the glowing letters measured on the smartphone, we used a conventional benchtop microscope (Olympus BX43) as a comparison. Two regions of interest (ROI) were selected to be imaged with a 4× objective on the benchtop system (Fig. 2b, top, yellow and red boxes). The same pulsed LED from the smartphone device was used to excite the letters at the same irradiance of 1 mW/cm² for 10 s. The benchtop microscope recorded videos at 15 fps with an exposure time of 1/60 s. The #0 frame before UV off and the #1 frame after UV off are shown in Fig. 2c (top and middle panels). The lifetime images from the benchtop microscope were calculated in the same way as mentioned above and are shown in Fig. 2c (bottom panels). Comparing them with the lifetime image obtained from the smartphone device, the lifetime values are quite consistent for all four letters. Moreover, the normalized intensities of the luminescence decay curves generated by the two acquisition systems matched each other well (Fig. 2d).
The V-chopper concept
While it is easy to implement on the smartphone, the previous gating method (fast, multiple time gating per decay cycle) (Fig. 2) is limited by the intrinsic low frame rate of the smartphone and therefore lacks the temporal resolution required to probe faster luminescence decay events in the microsecond range. To overcome this limitation, a V-chopper method is introduced, allowing a video-rate smartphone device to detect microsecond lifetime signals without the need for precision excitation synchronization. The basic principle of the V-chopper concept is illustrated in Fig. 3. The method consists of three simple steps: (1) smartphone videoscopy, to capture multiple cycles of luminescence decay driven by pulsed excitation; (2) frame extraction by machine learning, to isolate time-gated images (UV-off images) from different decay cycles and rearrange those frames to form a new virtual gated image sequence that represents the luminescence decay properties of the dye; and (3) lifetime image reconstruction, to calculate the lifetime value for each pixel and recover the 2D lifetime image. The V-chopper method extracts time-gated images from multiple cycles of excitation-decay events instead of a single decay cycle as in a conventional time-gated method. The extracted gated images from different decay cycles are then assembled to reconstruct a virtual luminescence decay curve, based on which the luminescence lifetime value is calculated. As such, the fundamental difference between the V-chopper method and the conventional gating method is that the V-chopper requires only one image gating per each or every few decay cycles, while most conventional methods rely on very fast, multiple time gating within a single decay cycle. This key difference allows us to loosen the requirement on acquisition speed (frame rate) so that a fast fluorescent event (microsecond fluorescence decay) can be captured by a low-imaging-rate (e.g. 30 fps) smartphone device.
Machine learning-assisted gated image extraction
To streamline the selection and rearrangement of the gated images in the V-chopper method, a CNN model was designed to automatically discriminate the images of interest with high accuracy (Fig. 4). For accurate lifetime determination, the raw smartphone video frames need to be classified into two different groups: UV-on images (class 1) and UV-off images (class 0). After classification, all class 1 images are discarded, and class 0 images are rearranged based on their intensity level to form the virtual time-gated image sequence. The CNN model developed here was composed of 3 convolution layers followed by a flattened, fully connected layer with 100 hidden nodes (Fig. 4 and Table S1). Each convolution layer was followed by batch normalization, ReLU activation, and a max pooling layer. The fully connected layer was followed by batch normalization, ReLU activation, and dropout. Lastly, Softmax activation was utilized in the output layer to generate class probabilities, resulting in predicted labels. Additional strategies were applied to combat overfitting (see Materials and methods).
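For concreteness, the sketch below assembles a network with exactly this layer pattern in Keras; the filter counts (16, 32, 64) and the dropout rate are illustrative placeholders, since the exact hyperparameters live in Table S1, which is not reproduced here.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_vchopper_cnn():
    inputs = keras.Input(shape=(128, 128, 1))           # grayscale gated frames
    x = inputs
    for filters in (16, 32, 64):                        # 3 convolution blocks
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100)(x)                            # 100 hidden nodes
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Dropout(0.5)(x)                          # combats overfitting
    outputs = layers.Dense(2, activation="softmax")(x)  # class 0 / class 1
    return keras.Model(inputs, outputs)
```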
The CNN model was trained and tested with a balanced number of class 1 and class 0 smartphone images, each composed of 3,200 images. Briefly, the smartphone video frames were converted into grayscale and resized to 128 × 128 pixels. The data set was then divided into three subsets (training, validation, and test) with a split ratio of 60/20/20. The learning process was performed in three steps: first, the CNN model was trained on the training set and applied on the validation set; second, the model was trained on both training and validation sets and applied on the test set; finally, the model was trained using the whole data set. The final trained model (after all three steps) was exported for future classification use. The trained CNN model was then applied to unknown images and separated them into two categories (class 1 or class 0). The images with predicted label "0" were the time-gated frames (UV off), which were then used to calculate the luminescence lifetime.
Microsecond lifetime imaging by V-chopper and machine learning
For a proof-of-concept demonstration of the V-chopper method, two Eu probes with distinct lifetimes in the microsecond range were patterned on a paper substrate in different shapes (Fig. 5 and Materials and methods). One Eu chelate, 4,4′-bis(1″,1″,1″,2″,2″,3″,3″-heptafluoro-4″,6″-hexanedion-6″-yl)-chlorosulfo-o-terphenyl-Eu 3+ (BHHCT-Eu 3+ ) with a shorter lifetime (∼250 µs), was patterned in a round shape, and Eu microbeads with a longer lifetime (∼500 µs) were dispersed in a triangle shape. The sample slide was illuminated using the UV LED at a pulse frequency of 50 Hz and 40% duty cycle, and the phosphorescence was recorded at 30 fps and 1/350 s exposure time on the smartphone (Fig. 5a and Videos S2 and S3). A series of time-gated images from different modulation cycles were generated (Fig. S1b). The frame with UV on is assigned the #0 frame, and the first frame after the UV is off is referred to as the #1 frame, as usual. The time delay of the #1 frame is close to 0 (Δt1 ≈ 0). Subsequently, frames #4, #7, #10, … are time-gated images identified by the CNN model with different decay times Δtn. Although the preset frame rate was 30 fps, we noticed the actual frame rate was around 29.98 fps. This slight drift of the smartphone video rate is actually critical for generating small time delays without the need for expensive control devices. The interplay between video rate and excitation frequency was explored in more detail by a modeling method as described in the Discussion section. Finally, according to Eq. 6, the time interval Δt between two successive frames is 22.2 µs for an actual frame rate of 29.98 fps. For a UV pulse of 50 Hz, one gated image was identified for every three consecutive image frames, and therefore the gated intervals are Δt2 = 66.6 µs, Δt3 = 133.2 µs, and so on for the rearranged virtual image sequence.
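The delay arithmetic above can be checked directly; the snippet below reproduces the ∼22.2 µs per-frame slip from the mismatch between the nominal 30 fps and the actual 29.98 fps, using only values stated in the text.

```python
# Per-frame slip between the actual and nominal frame periods.
nominal_fps, actual_fps = 30.0, 29.98
dt = 1.0 / actual_fps - 1.0 / nominal_fps
print(f"per-frame delay: {dt * 1e6:.1f} us")                  # ~22.2 us
# With one gated image every three frames (50 Hz pulses), the
# virtual-sequence interval is three slips.
print(f"gated interval: {3 * dt * 1e6:.1f} us")               # ~66.7 us
```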
The developed CNN model classifies time-gated images with high accuracy. Figure 5b and c shows the confusion matrix as well as the performance measures of the CNN model. The CNN model trained on the combined training/validation sets performed well, with training and test accuracies of 99.90 and 99.84%, respectively (Fig. 5c). The CNN model trained on the whole data set was subsequently applied to classify 5,317 unseen images. The smartphone images were automatically separated into two categories by their predicted labels. The representative #0 frame and CNN-classified time-gated images are shown in Fig. 5d. The lifetimes of the two dyes can be easily distinguished in the gated frames: the circle is barely visible after frame #37, while the triangle still glows slightly in frame #73, indicating the much longer lifetime of the triangle (Eu microbeads) than the circle (BHHCT-Eu). The average pixel intensities of the circle (blue) and triangle (red) were plotted as a function of time (Fig. 5e). When fitted with monoexponential decay curves, the lifetime of the circle was calculated to be 168.4 µs and the lifetime of the triangle 512.3 µs. A reconstructed lifetime image was generated by fitting the intensity of each pixel over time and calculating the lifetime per pixel (Fig. 5f). A clear difference between the lifetimes of the circle and the triangle is visible in the color bar. The corresponding lifetime histogram of each pattern is shown in Fig. 5g, with narrow distributions of ±9.3 µs (coefficient of variation 5.5%) and ±14.5 µs (coefficient of variation 2.8%) for the circle and the triangle, respectively. Both lifetimes determined on the smartphone V-chopper device were highly consistent with those measured on a commercial time-resolved spectrometer, namely 167.2 ± 1.4 µs and 516.4 ± 7.0 µs for the circle and the triangle, respectively (Fig. S2 and Table S2). The error is within 0.8% when comparing the smartphone V-chopper and benchtop spectrometer results.
To demonstrate the detection of subhundred-microsecond lifetimes on the smartphone V-chopper system, three different luminescent dyes were microprinted in a picture of a howling wolf (Fig. 6). The wolf, ground, and moon consisted of Eu microbeads (lifetime ∼500 µs), BHHCT-Eu chelate (lifetime ∼250 µs), and tetracycline hydrochloride (Tc) Eu dye (Tc-Eu) (lifetime <100 µs), respectively. The smartphone took videos at 30 fps (29.98 fps in the real data set) with 1/500 s exposure time, and the UV LED was pulsed at a frequency of 30 Hz with a 40% duty cycle. That we were able to perform this experiment using the same rate for both video recording (30 fps) and excitation pulses (30 Hz) is again due to the fact that the actual video rate (∼29.98 fps) always deviates slightly from the nominal value. Due to this mismatch, when a long video was recorded, the image frames gradually shifted away from the UV pulses, generating gated UV-off images (Fig. 6a and Video S4). The gated frames from successive cycles were then gathered to form the virtual decay image sequence. The delay times Δtn were measured as 0 µs, 22.2 µs, 44.4 µs, and 66.6 µs for gated frames #1, #2, #3, and #4, respectively, making it possible to resolve lifetimes between 50 and 100 µs. This setting requires a long video to be recorded so that the frames can scan over the whole decay curve, especially when the lifetime is over hundreds of microseconds. The minimum video duration (Eq. 8) for efficient lifetime resolving is considered further in the Discussion section.
Figure 6b displays frame #0 (UV on), frame #1 (the first gated frame after UV off), frame #60, and so on. Comparing the gated frames #1-160, it is obvious that the phosphor in the wolf is the longest-lived and the phosphor in the moon is the shortest-lived; it is barely visible after frame #3. The decay curves of the three dyes extracted from all gated frames are plotted in Fig. 6c (green dots) and fitted by exponential functions (solid lines) to resolve the lifetimes. The recovered lifetime image (Fig. 6d) shows three distinct lifetime values (510.4 µs, 169.3 µs, and ∼78.3 µs) for the wolf, ground, and moon, respectively, corresponding to the advertised lifetime of each luminescent material (Fig. S2 and Table S2). The data suggest that, by applying the V-chopper concept, short lifetimes below 100 µs can be resolved on a portable smartphone reader device with a relatively low frame rate of 30 fps.
Discussion
A conventional time-resolved luminescence detection system usually consists of a light source capable of pulsed excitation, a gated optical detector to capture time-dependent luminescence, and a synchronized control unit to provide the phase difference (or time delay) between the excitation and detection windows. Nowadays, laser diodes and LEDs can achieve excellent temporal resolution, with repetition rates up to 100 MHz and pulse widths of ns to µs, while maintaining cost-effectiveness and portability. Therefore, the gated optical detector becomes the key challenge in the implementation of time-gated detection, especially for a portable time-resolving system, which needs a very high acquisition rate for multiple samplings in each cycle to fit the luminescence decay exponentially.
On the other hand, over the past decades, the smartphone has been widely explored as a capable analytical sensing and imaging platform in many POC applications, such as disease diagnostics, environmental monitoring, and screening for food contamination (41)(42)(43)(44)(45)(46). In most of these previous applications, smartphone cameras are used to take individual photos for data analysis. Recently, with the rapid technological advancement of the complementary metal oxide semiconductor (CMOS) cameras equipped on smartphones, users have gained more control over the video capture mode, such as the exposure time, frame rate, and focusing distance. In the past half decade, many smartphone models on the market have achieved high-frame-rate video (above 100 fps) at high resolution, opening up an emerging analytical method termed smartphone videoscopy (30).
However, previously reported smartphone-based lifetime quantification methods still require complicated mechanical chopper systems to achieve high temporal resolution, limiting time-resolved technologies for POC use (Table S4) (35)(36)(37). Here, instead of using expensive high-speed image sensors or complicated mechanical choppers, we demonstrated a virtual modulation method, the V-chopper, which generates a controlled time shift between video frames and light pulses to reconstruct luminescence decay curves from multiple modulation cycles. The gated image extraction and rearrangement are fully automated by a CNN deep learning model. By applying the V-chopper method, lifetimes as short as a few tens of microseconds can be resolved on a consumer smartphone device using a low video frame rate (e.g. 30 fps). Compared with previous mechanical chopper-based systems, the smartphone V-chopper platform is much more cost-effective and easier to implement. Meanwhile, the V-chopper concept can be broadly applied not only to smartphone detectors but also to conventional digital cameras or image sensors. The latter can provide more precise control of frame rate and exposure time and have the potential to resolve even nanosecond lifetimes.
To perform smartphone V-chopper measurements, it is important to select a good combination of excitation and video recording settings. A Matlab-based simulation program (Fig. S3; code available at https://github.com/VictoriaYanWang/Smartphone-Lifetime-Imaging) was developed to study the interplay of the light pulse frequency, duty cycle, video recording rate, and shutter speed. The following general rules should be followed for a successful V-chopper implementation on the smartphone.
Excitation pulse
To detect the luminescence lifetime (τ) precisely, the requirement for the LED pulses is that the LED off time (defined by (1 − D)/F, where D is the duty cycle and F the pulse frequency) should be longer than the whole decay curve, or at least 3 times the lifetime τ. In other words, the next excitation should not be turned on until the previous decay curve is complete.
For instance, if the LED duty cycle is set at 40%, then the LED pulse frequency (F) should meet the condition (1 − D)/F ≥ 3τ, i.e., F ≤ (1 − D)/(3τ). Therefore, based on the estimated lifetime of the dye to be detected, the repetition frequency of the LED can also be selected accordingly. For example, for dyes with a 10 ms lifetime, excitation at 20 Hz or lower frequencies (40% duty) should be used (Table S3). In contrast, for ultralong lifetimes, excitation at 1 Hz or lower should be applied to match the long decay curve.
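This constraint is easy to encode. A minimal sketch, using the condition F ≤ (1 − D)/(3τ) reconstructed above:

```python
def max_led_frequency(tau_s, duty=0.4):
    """Highest LED pulse frequency (Hz) whose off time still covers ~3 lifetimes."""
    return (1.0 - duty) / (3.0 * tau_s)

print(max_led_frequency(10e-3))   # ~20 Hz for a 10 ms lifetime at 40% duty
print(max_led_frequency(500e-6))  # ~400 Hz for a 500 us lifetime
```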
Shutter speed
Conditions for a successful time-resolved detection also involve the setting of the shutter speed (s), which should be shorter than the LED off time; otherwise, the camera will always capture LED-on images. The condition can be expressed as s < (1 − D)/F. For example, for an LED frequency of 20 Hz and a 40% duty cycle, the shutter speed should be shorter than 0.03 s, i.e., 1/33 s (Table S3).
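The same check for the shutter speed, using the condition s < (1 − D)/F reconstructed above:

```python
def max_shutter_s(f_led_hz, duty=0.4):
    """Longest exposure (s) that still fits inside the LED off period."""
    return (1.0 - duty) / f_led_hz

print(max_shutter_s(20.0))  # 0.03 s (about 1/33 s) for a 20 Hz LED at 40% duty
```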
Frame rate
The choice of frame rate on most smartphones is very limited. The values are discrete instead of continuously adjustable. However, the few preset frame rate options do not limit the capability of the smartphone V-chopper device to resolve a broad range of lifetimes, including short lifetimes in the microsecond range. It is worth mentioning that there is sometimes a small drift from the preset frame rate when a video is taken on the smartphone. Based on the information stored in the video properties, the actual frame rate f real may differ slightly from the preset values (e.g. f = 24, 30, or 60 fps) by a tiny drift σ (0.01 to 0.30 Hz). For the current phone model we use, f real is often 30.02 or 29.98 fps for a preset rate of 30 fps.
This small drift of the smartphone frame rate provides a simple phase-shifting mechanism between video frames and LED pulses, which sets the shortest delay time of gated frames achievable in the current setup for measuring short luminescence lifetimes.
If the smartphone videoscopy is set at exactly 30.00 fps with 1/350 s exposure time and the LED runs at 50 Hz with a 40% duty cycle, the intensities of the gated frames will be the same across the whole video, since the frame window sits at the same delay time in each luminescence decay (Δt n is a constant; Fig. S4a). However, for a frame rate of 29.98 fps (Fig. S4b) or 30.02 fps (Fig. S4c), the gated frames start to show modulated intensities across frames. That is because the small drift imposes a phase shift on the video frames, which modulates Δt n over the decay curves. The varied Δt n allows video frames to sample different parts of the luminescence decay curves over multiple time-gated cycles. More specifically, if f real < f, the intensity of the gated frames decreases as a function of time, meaning that the gated frames scan the luminescence decay curve from high to low intensity (shifting away from the LED pulses). In contrast, if f real > f, the gated frames shift toward the LED pulses, so the intensity of the gated frames increases accordingly.
The delay time between successive frames, Δt, equals the drift between the actual and preset frame periods, which can be defined by Δt = |1/f real − 1/f| ≈ σ/f² (Eq. 5). Therefore, the delay time between two successive gated frames, Δt n , depends on the number of frames m between them, which means Δt n = m(n − 1) · Δt. And m can be calculated by m = f/gcd(f, F), where gcd represents the greatest common divisor. For example, for a frame rate of 30 fps and a UV pulse of 50 Hz (40% duty), m = 3; therefore, Δt n = 3(n − 1) · Δt, and Δt 1 is always close to 0. According to Eq. 5, Δt is 11 µs for f = 30 fps and σ = 0.01 fps, and Δt will be 2.78 µs when f = 60 fps with the same σ. Clearly, the smaller the frame drift and the higher the frame rate that the smartphone camera can provide, the smaller the time delay Δt. The minimum delay time predicts a limit of detection (LOD) for lifetime on the smartphone V-chopper device on the order of tens of microseconds, which is equivalent to the results obtained by previous smartphone systems equipped with mechanical choppers and motors for lifetime detection (35)(36)(37).
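Putting the reconstructed relations together, the gated-frame timing for any combination of preset frame rate, drift, and LED frequency can be computed as follows (a sketch; the names are illustrative, and integer rates are assumed for the gcd step):

```python
from math import gcd

def gated_frame_delays(f_preset, sigma, F_led, n_frames=4):
    """Delay time of the first few gated frames for given camera/LED settings.

    f_preset : nominal frame rate (fps), assumed integer
    sigma    : frame-rate drift (fps)
    F_led    : LED pulse frequency (Hz), assumed integer
    """
    dt = sigma / f_preset**2                           # per-frame shift (Eq. 5), s
    m = f_preset // gcd(int(f_preset), int(F_led))     # frames between gated frames
    return [m * (n - 1) * dt for n in range(1, n_frames + 1)]

# 30 fps preset, 0.01 fps drift, 50 Hz LED -> m = 3, dt = 11 us, steps of 33 us
print([f"{t * 1e6:.1f} us" for t in gated_frame_delays(30, 0.01, 50)])
```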
Based on the above general rules and Eqs. 1-6, the recommended settings for successful smartphone V-chopper implementation for the different lifetime ranges can be found in Table S3.
Video duration
The minimum duration of a video clip (T min ) to capture in order to scan over a whole decay curve for lifetime detection follows the equation T min = (1 − D)f/(σF). According to Eq. 8, the minimum necessary length of a video recorded to resolve the lifetime is proportional to the video frame rate (f) and inversely proportional to the LED pulse frequency (F). As such, a lower video frame rate combined with higher-frequency LED pulses is more time-efficient for lifetime detection on the smartphone. Many smartphone CMOS sensors are controlled by the ERS, which can lead to different readout times for each line of the gated image and therefore affects the accuracy of the lifetime calculation. We compared three different smartphone models (iPhone 13 Pro, LG V10, and Samsung Galaxy S9) and evaluated the effect of ERS on V-chopper applications (Figs. S8-S11). The results showed that the Samsung S9 had the minimum ERS effect, probably due to its ERS-cancellation technology based on a unique three-layer stacked image sensor construction. In addition, several ERS compensation algorithms (47, 48) can be applied to further reduce the ERS effect if needed.
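Returning to the duration bound above, a small helper can encode it (a sketch; the form T min = (1 − D)f/(σF) is the reconstruction inferred from the stated proportionalities, and the names are illustrative):

```python
def min_video_duration(f_preset, sigma, F_led, duty=0.4):
    """Shortest video (s) whose gated frames scan the whole LED-off decay window.

    Assumes T_min = (1 - D) * f / (sigma * F), consistent with T_min being
    proportional to f and inversely proportional to F.
    """
    return (1.0 - duty) * f_preset / (sigma * F_led)

# 30 fps with 0.02 fps drift and a 30 Hz, 40%-duty LED -> 30 s of video
print(min_video_duration(30.0, 0.02, 30.0))
```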
In this work, we focused on the methodology and the proof of concept of the V-chopper system. Follow-up studies to demonstrate real applications for sensitive biomarker detection are underway. Moreover, the concentrations of Eu dye in the current experiments are relatively high (0.01 to 0.1 M) compared with those used in real applications, which are normally below 1 mM. It will be challenging to image luminescent molecules at low concentrations with the current smartphone setup without an external objective lens or signal amplification technologies. In the future, a more powerful excitation source, such as a pulsed laser, together with an objective lens could be introduced to further improve the optical sensitivity of the system and unlock its full potential for biosensing.
In summary, a low-cost smartphone-based lifetime imaging platform has been developed for time-gated detection and 2D lifetime imaging over a broad lifetime range from microseconds to seconds. To probe low-microsecond lifetime events, a V-chopper method was demonstrated by modulating the LED pulses and the smartphone video frame rate accordingly. Coupled with machine learning for gated image extraction, the V-chopper method offers opportunities to resolve fast luminescence decay events on a low-frame-rate image sensor. The minimum lifetime that can be detected by the smartphone V-chopper system is about 75 µs, which is comparable with or even lower than that obtained with previous mechanical chopper-based smartphone systems. The V-chopper method decouples traditional time-resolved detection from expensive and complicated instruments. The miniaturized smartphone V-chopper system shows great potential for lifetime imaging in various applications, such as POC biosensing. The methodology is general and can also be applied to benchtop sensors to resolve even faster fluorescence events in the future.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information, details of author contributions and competing interests, and statements of data and code availability are available at https://doi.org/.
Preparation of the smartphone V-chopper device
The smartphone V-chopper lifetime imaging prototype device consists of a 3D-printed enclosure, a UV LED (365 nm, M365L3, Thorlabs), a condenser lens (ACL2520U-A, Thorlabs), a UV-enhanced reflective mirror (PFSQ10-03-F01, Thorlabs), and a smartphone (Samsung Galaxy S9). The sample glass slides can be placed inside the enclosure on the bottom. The UV LED is controlled by an LED driver (LEDD1B, Thorlabs) and pulsed via a square-wave voltage source (DG1062Z, Rigol). The highly divergent emission from the UV LED was first collimated by the aspheric condenser lens (f = 20.1 mm, NA = 0.60) and then evenly projected onto the glass slide by a tilted reflective mirror (Fig. 1a). The tilt angle of the 1" × 1" UV-enhanced mirror is designed to be ∼67.5 degrees relative to the slide surface, so the LED delivers uniform illumination to the sample slide while the illumination center is aligned with the field of view (35 × 63 mm 2 ) of the smartphone. When detecting luminescent signals from Eu complex dyes, a 615 nm band-pass filter (87-739, Edmund Optics) can be mounted in front of the phone camera to eliminate excitation interference. The Galaxy S9 smartphone has manual control of video settings, e.g. ISO, focal length, shutter speed, video frame rate, and image resolution. A 60 fps video frame rate was used for lifetime detection of ultralong luminescent materials (seconds). For the measurement of microsecond-lifetime targets, a normal video rate (30 fps) combined with the V-chopper principle was used instead.
Preparation of Eu dyes
Long-lifetime luminescence probes are often lanthanide-based complexes and nanoparticles, with luminescence lifetimes typically in the range of 1 µs to 10 ms. Ultralong or persistent lifetimes can last for a few seconds or even up to minutes. Here, Eu probes with different labeled lifetimes in the microsecond, millisecond, and second ranges were used to demonstrate lifetime imaging with the V-chopper on the smartphone videoscopy. To demonstrate the concept of using the V-chopper for time-resolved detection and lifetime imaging, the dyes described above were coated on paper substrates, whose strong autofluorescence serves as background noise. Ultralong-lifetime powders (micron-sized particles) with different colors and glow durations (shown in Table S2, Techno Glow) were selected to demonstrate resolving lifetimes of hundreds of milliseconds and up to seconds. The red glowing powder contains calcium sulfide, and the other three powders have the composition strontium aluminate europium dysprosium (SrAl 2 O 4 :Eu 2+ , Dy 3+ ). To prepare the different patterns, sticker labels were cut into "N," "C," "S," and "U" shapes to trap glow powders evenly on the adhesive side. Then, the four letters with different colored powders were placed on an autofluorescent paper substrate, which was then sandwiched between two glass slides. Excess Tc powder (T2525, TCI) was dissolved in 0.1 M Na 2 CO 3 buffer (pH = 8). Then, EuCl 3 solution was added to generate the Tc-Eu chelate. Filter paper (09801B, Fisherbrand) was then soaked in the Tc-Eu dye suspension and measured immediately while wet. BHHCT-Eu (59752, Sigma-Aldrich), with bright emission under UV, was dissolved in DMSO at a concentration of 0.01 M to coat the filter paper. The original suspension of 0.2 µm Eu chelate polystyrene beads (S9347, Thermo Fisher) was diluted 100 times with MilliQ water and then spiked onto the filter paper.
The Eu chelate dyes Tc-Eu and BHHCT-Eu and the Eu microbeads in Table S2 were calibrated with a commercial time-resolved spectrometer (LP920, Edinburgh Instruments). The emission peak of the Eu chelates is at ∼615 nm when excited at UV 365 nm (Fig. S5). The emission spectra of the paper substrate and of the substrate with Eu dye are shown in Fig. S6. The red spectra (solid and dashed) were measured while the UV was on, and the blue ones with a 10 µs delay time after UV exposure. The paper substrate shows strong autofluorescence under UV (red dashed), which can be eliminated with a 10 µs gate time (blue dashed). However, the long-lived Eu dye is still luminescent, peaking at ∼615 nm (blue solid). The lifetimes of the Eu dyes were measured by the time-resolved spectrometer, as shown in Table S2 and Fig. S2.
Simulation of the V-chopper mechanism
To visualize the time delay (Δt) in the V-chopper method and study the interplay of the light pulse frequency, duty cycle, video recording rate, and shutter speed, a simulation program with a user interface was designed in Matlab. The program is available for download on GitHub (https://github.com/VictoriaYanWang/Smartphone-Lifetime-Imaging) and helps users find an optimal excitation and data acquisition setting for a given lifetime target. When the LED pulse frequency (Hz), duty cycle (%), smartphone frame rate (fps), shutter speed (s), and estimated lifetime of the target dye are input in the interface window, two pulse tracks are generated, namely the waveform of the LED (Fig. S3b, blue solid line) and the waveform of the smartphone frames (Fig. S3b, green solid line). Each smartphone frame is assigned a frame number (e.g. #1, #2, and #3), shown on top of the video frames. In addition, the decay curves of the luminescence (Fig. S3b, red dashed line) are simulated following each LED excitation pulse. The X-axis represents time (s), and the Y-axis, in arbitrary units, represents luminescence intensity. The example simulation result shown in Fig. S3b corresponds to an LED repetition rate of 50 Hz with a duty cycle of 40% and a smartphone video frame rate of 30.02 fps with a 1/350 s shutter speed. The gated frames are found only where the decay curve (Fig. S3b, red dashed line) and the smartphone frames (Fig. S3b, green solid line) overlap, marked in magenta (Fig. S3b).
The size of the magenta area corresponds to the amount of light collected in that frame, which is proportional to the luminescence intensity of the gated frame. Therefore, the luminescent intensity of each frame can be calculated by accumulating the area of the magenta segments and plotting it over time (Fig. S7a). Several exponential curves (decay n, n + 1, n + 2…) can be synthesized in a 30 s observation window. Among these virtual decay curves, decay n + 2 is a complete curve that can be used for accurate lifetime calculation. Figure S7b shows the actual frames extracted from a video taken with the settings used in Fig. S3b.
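The core of this simulation, accumulating the overlap between each exposure window and the decay curve, can be sketched in a few lines of Python (an illustrative reimplementation, not the authors' Matlab code):

```python
import numpy as np

def frame_intensities(F_led, duty, f_real, shutter, tau, T, dt=1e-6):
    """Integrate a pulsed-excitation decay over each camera exposure window."""
    t = np.arange(0.0, T, dt)
    phase = (t * F_led) % 1.0                 # position within each LED period
    uv_on = phase < duty
    # Luminescence: saturated while UV is on, decaying from the last falling edge
    decay = np.where(uv_on, 1.0, np.exp(-((phase - duty) / F_led) / tau))
    intensities = []
    for k in range(int(T * f_real)):
        start = k / f_real
        sel = (t >= start) & (t < start + shutter)
        intensities.append(decay[sel].sum() * dt)  # "magenta area" of frame k
    return np.array(intensities)

# Settings from Fig. S3b: 50 Hz LED, 40% duty, 30.02 fps, 1/350 s shutter
I = frame_intensities(50.0, 0.4, 30.02, 1 / 350, tau=500e-6, T=2.0)
```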
The CNN classification model
The basic structure of the CNN model is illustrated in Fig. 4. To combat overfitting, several techniques were applied simultaneously: (i) batch normalization was used after each convolution and the fully connected layers, (ii) dropout was employed after the fully connected layer, and (iii) early stopping was used to stop the training process when the validation loss reached its minimum. The RMSprop optimizer with a learning rate of 0.0001 was adopted to compute the model weights and biases.
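A minimal Keras sketch of such a classifier is shown below. The convolutional backbone (layer counts and sizes) is a placeholder, since the exact architecture is given only in Fig. 4; only the regularization choices named above (batch normalization, dropout, early stopping, RMSprop at learning rate 1e-4) are taken from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(128, 128, 3)):  # input size is hypothetical
    m = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.BatchNormalization(),           # BN after each convolution
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.BatchNormalization(),           # BN after the fully connected layer
        layers.Dropout(0.5),                   # dropout after the dense layer
        layers.Dense(1, activation="sigmoid")  # class 0 (UV off) vs class 1 (UV on)
    ])
    m.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
    return m

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[early_stop])
```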
For training, the smartphone image data set was prepared with balanced classes (0 or 1), each composed of 3,200 images. A 60/20/20 split was utilized to initiate the training, validation, and test sets, respectively. The performance of the CNN models was assessed using the accuracy, sensitivity, and specificity, calculated based on the following equations:

Accuracy = (TP + TN)/(TP + TN + FP + FN) (9)

Sensitivity = TP/(TP + FN) (10)

Specificity = TN/(TN + FP) (11)

where TP is the true positives (the number of class 0 images classified correctly), TN is the true negatives (the number of class 1 images classified correctly), FP is the false positives (the number of class 1 images the model classifies incorrectly as class 0), and FN is the false negatives (the number of class 0 images the model classifies incorrectly as class 1).
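Eqs. 9-11 translate directly into code. A small sketch, with hypothetical confusion-matrix counts chosen to be consistent with the reported 99.84% test accuracy on a 1,280-image test set (20% of 6,400):

```python
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Eq. 9
    sensitivity = tp / (tp + fn)                 # Eq. 10
    specificity = tn / (tn + fp)                 # Eq. 11
    return accuracy, sensitivity, specificity

# Hypothetical counts for a balanced 1,280-image test set
print(classification_metrics(tp=639, tn=639, fp=1, fn=1))  # accuracy ~0.9984
```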
After training, a user-friendly code was written to apply the trained CNN model to classify unseen smartphone images. The Python code for training the CNN model, the model application code, and the smartphone images used as the data set to develop and evaluate the model are freely available on GitHub (https://github.com/VictoriaYanWang/Smartphone-Lifetime-Imaging).
Fig. 1 .
Fig. 1. Design of the smartphone V-chopper lifetime system. a) Schematic and cross-section of the smartphone enclosure. b) Photograph of the actual smartphone V-chopper system with LED and pulse control. c) Illumination distribution of the LED light on the sample slide.
Fig. 2 .
Fig. 2. Smartphone-based lifetime imaging of ultralong (seconds) luminescent targets. a) Schematic of smartphone videoscopy for long lifetime measurement. Multiple gating can be applied to a single luminescence decay cycle. b) From top to bottom: smartphone fluorescence image (frame #0, UV on), smartphone time-gated image (frame #1, UV off), and smartphone lifetime image, respectively. c) From top to bottom: benchtop fluorescence images, benchtop time-gated images (frame #1, UV off), and benchtop lifetime images, respectively. Two different ROIs were selected from b (top) for comparison. d) Luminescence decay curves extracted from the time-gated frames on the smartphone (solid lines) and benchtop (dashed lines) microscopes, respectively.
Fig. 3 .
Fig. 3. Concept of the V-chopper. The method consists of three steps: (1) smartphone videoscopy, (2) frame extraction by machine learning, and (3) lifetime image reconstruction. In step 1, the large clear bars (black edges) indicate the timing when the UV excitation was on, while the translucent bars indicate the timing of the detection measurements. The red curves illustrate the time-dependent emission intensity. In step 2, image frames (translucent bars) without the excitation on were collected to measure the emission lifetimes.
Fig. 4 .
Fig. 4. Workflow of the CNN model for automatically classifying smartphone video frames. Class 0: UV-off images, or gated frames with no autofluorescent background; class 1: UV-on images, or frames with autofluorescent background.
Fig. 5 .
Fig. 5. Demonstration of the smartphone V-chopper method for microsecond lifetime imaging. a) Schematic of the smartphone video sequence and pulsed excitation timelines before and after frame extraction and gated image rearrangement. b) Confusion matrix of the CNN model. c) Performance of the CNN model trained on the combined training/validation data sets and applied to the test set. d) Representative #0 frame and gated frames extracted from the smartphone video by the CNN model. e) Decay curves of the circle (blue) and triangle (red) over gated time, f) the reconstructed lifetime image of the microsecond probes, and g) the corresponding lifetime histograms from the two pattern spots.
Fig. 6 .
Fig. 6. Detection of subhundred-microsecond lifetimes on the smartphone V-chopper device. a) Schematic of the smartphone video sequence and pulsed excitation settings. b) Representative gated frames extracted from the smartphone video by the CNN algorithm. c) Decay curves of the luminescent intensities over gated time. Green dots are actual experimental data, and solid lines are exponential fitting curves. d) The reconstructed lifetime image of the time-encoded pattern. The shortest lifetime pattern (moon) is clearly visualized.
| 9,786 | 2023-04-20T00:00:00.000 | [ "Physics" ] |
Manganese Oxide on Carbon Fabric for Flexible Supercapacitors
We report the fabrication of uniform large-area manganese oxide (MnO 2 ) nanosheets, via an electrodeposition process, on carbon fabric that was oxidized using an O 2 plasma treatment (MnO 2 /O 2 -carbon fabric), and their implementation as supercapacitor electrodes. Electrochemical measurements demonstrated that the MnO 2 /O 2 -carbon fabric exhibited a capacitance as high as 275 F/g at a scan rate of 5 mV/s; in addition, it showed excellent cycling performance (less than 20% capacitance loss after 10,000 cycles). All the results suggest that MnO 2 /O 2 -carbon fabric is a promising electrode material with great potential for application in flexible supercapacitors.
Introduction
With the rapid development of the economy, global energy consumption has been increasing for decades. As a result, traditional fossil energy faces serious shortages. Green renewable energy, such as solar cells and wind power generation, is desired. However, most new energy sources are intermittent and unsustainable, which greatly hinders their application [1,2]. The energy supply gap deriving from the discontinuous character of the renewable sources can be filled by coupling them with energy storage devices, such as supercapacitors (SCs) and batteries, which are able to store energy and deliver it to power electronics [3][4][5].
SCs have drawn great attention in addressing the emerging energy demands due to their advantages of high power density, fast charge/discharge rates, and long cycle life [6][7][8]. Generally, SCs can be categorized into two types according to the charge storage mechanism: electrochemical double layer capacitors (EDLCs) [9,10] and pseudocapacitors (PSCs) [11][12][13]. EDLCs attract charges electrostatically at the electrode-electrolyte interface of the electrode materials, whereas PSCs store energy via fast redox reactions on or near the electrode surface [5,14,15]. Each of the two types of SCs has its own advantages and disadvantages. EDLCs use carbon materials such as carbon nanotubes (CNTs), graphene, carbon nanofibers (CFs), and carbon onions as electrode materials, while PSCs employ transition metal oxides or conducting polymers such as manganese oxide (MnO 2 ), molybdenum trioxide (MoO 3 ), and polyaniline as electrode materials. Carbon materials usually have higher physical and chemical stability, better electrical conductivity, and higher specific surface area than the materials used in PSCs, resulting in higher rate capability and longer durability. However, the theoretical capacitance of carbon materials is much lower than that of transition metal oxides, so the specific capacitance of most carbon-based EDLCs is less than 150 F/g [3,[16][17][18]. On the contrary, PSCs exhibit higher capacitance and energy density through Faradaic reactions but suffer from poor electrical conductivity [12,19]. In this regard, if the advantages of the two types of SCs can be combined and their shortcomings overcome, SCs with enhanced electrochemical properties can be expected.
MnO 2 is one of the most attractive pseudocapacitive materials because of its superior theoretical capacitance (1370 F/g), low cost, and abundance. Nevertheless, it suffers from poor electrical conductivity (10 −5 -10 −6 S/cm), so the practical capacitance is much lower than the theoretical value [20,21]. Growth of pseudocapacitive materials on well-conducting carbon substrates not only facilitates the diffusion of electrolyte ions but also improves the transport of electrons, thus enhancing the electrochemical properties [22,23]. Furthermore, such hybrid structures may broaden their applications in energy storage devices [24].
Herein, different surface treatments were applied to carbon fabric to assess their influence on the surface chemical states. We chose carbon fabric as the substrate for its low cost, good electrical conductivity, excellent chemical stability, and flexible nature. Characterizations showed that the oxidized carbon fabric substrate is more suitable for the electrodeposition of MnO 2 , because it has more oxygen-containing functional groups, which can act as nucleation sites for MnO 2 . As a result, it exhibits a high specific capacitance (275 F/g) at a scan rate of 5 mV/s. In addition, the oxidized carbon fabric-MnO 2 showed excellent long-term cycle stability.
Experimental
2.1. Synthesis of MnO 2 /Carbon Fabric. The carbon fabric was oxidized or reduced using plasma technology. First, the carbon fabric was cut into pieces of the same size (0.9 × 1.8 cm 2 ). It was then cleaned ultrasonically for 15 min in acetone, ethanol, and deionized water, successively. After drying at 70 ∘ C, the carbon fabric was placed in the plasma sample chamber for the oxidation or reduction treatment. The treated carbon fabric was then ready for the subsequent experiments.
A template-free electrodeposition method was used to fabricate MnO 2 /carbon fabric in a three-electrode cell. The carbon fabric, a graphite rod, and a saturated Ag/AgCl electrode were used as the working electrode, counter electrode, and reference electrode, respectively. A solution containing 0.01 M manganese acetate (MnAc 2 ) and 0.02 M ammonium acetate (NH 4 Ac) was used as the electrolyte. The constant current density and deposition time were 0.1 mA/cm 2 and 30 min, respectively.
Fabrication of SCs Electrodes.
One piece of MnO 2 /carbon fabric was used as the working electrode. A saturated Ag/AgCl electrode, a piece of pure Pt foil, and 1 M sodium sulfate (Na 2 SO 4 ) aqueous solution were employed as the reference electrode, counter electrode, and electrolyte, respectively.
Characterization.
The morphology of MnO 2 /carbon fabric was analyzed using scanning electron microscopy (SEM, JSM-7100F). Transmission electron microscopy (TEM) and high-resolution transmission electron microscopy (HR-TEM) were performed on a JEOL-2010 HR transmission electron microscope to further investigate the internal structures and lattice fringes. The crystal structure of MnO 2 /carbon fabric was characterized by X-ray diffraction using Cu Kα radiation (λ = 1.5418 Å) (XRD, D8-Advanced Bruker-AXS).
An electrochemical workstation (Chenhua, CHI 660D) was used to perform the cyclic voltammetry (CV) and chronopotentiometry measurements. An Autolab (PGSTAT302N) was used to measure the electrochemical impedance spectroscopy (EIS) with a potential amplitude of 10 mV over the frequency range from 10 kHz to 100 mHz.
Results and Discussion
The fabrication procedure for the MnO 2 -carbon fabric composites is illustrated in Figure 1. First, the carbon fabric, cut into 0.9 × 1.8 cm 2 pieces, was washed with deionized water, acetone, and ethanol, respectively. After drying, the carbon fabric was reduced or oxidized by plasma to add redox-active functional groups to its surface. Finally, the efficient electrodeposition method was adopted to prepare the MnO 2 -carbon fabric composites.
The morphology of the original carbon fabric and as-prepared MnO 2 /carbon fabric samples was characterized by SEM, as shown in Figure 2. Figure 2(a) clearly shows that there is no other substance on the carbon fiber surface except a very small amount of impurities. After electrodeposition, all the carbon fibers were covered by a multitude of MnO 2 on their surface (Figures 2(b)-2(d)). However, from Figures 2(b) and 2(c), it can be observed clearly that both the electrodeposited samples based on the pristine carbon fabric (MnO 2 /carbon fabric) and on the carbon fabric reduced under an Ar atmosphere (MnO 2 /Ar-carbon fabric) are covered by many flower-like MnO 2 structures, which exhibit uneven surface characteristics, suggesting lower pseudocapacitive electrochemical performance. TEM was introduced to further investigate the morphology of MnO 2 /O 2 -carbon fabric, as shown in Figure 3. According to the low-resolution TEM image (Figure 3(a)), the electrodeposited MnO 2 has a sheet shape with nanoscale thickness. The HRTEM image (Figure 3(b)) shows the interplanar spacings for the two perpendicular directions to be ∼0.48 nm. This value corresponds to the (101) plane of the tetragonal MnO 2 phase (JCPDS reference card number 18-0802).
An XRD pattern was collected from the electrodeposited products to investigate the crystal phase, as shown in Figure 4. In the spectrum, there are seven peaks, located at 2θ = 11.4°, 21.4°, 36.5°, 37.7°, 41.3°, 54.9°, and 65.7°. Among them, the broad peak located at 21.4° not only corresponds to the amorphous carbon but can also be assigned to the (101) reflection of tetragonal MnO 2 . Meanwhile, the other six peaks can be well assigned to tetragonal MnO 2 (JCPDS reference card number 18-0802), which is consistent with the TEM observations. Hence, the products synthesized by the electrodeposition procedure are tetragonal MnO 2 .
To study the electrochemical performance of MnO 2 /O 2 -carbon fabric, cyclic voltammetry (CV) and galvanostatic charge-discharge (GCD) measurements were conducted using a three-electrode configuration with Ag/AgCl as the reference electrode and 1 M Na 2 SO 4 as the electrolyte. The typical CV curves of MnO 2 /O 2 -carbon fabric are displayed in Figure 5(a) for scan rates from 5 mV/s to 100 mV/s. Even when the scan rate is increased to 100 mV/s, the CV curves retain a symmetrical rectangular shape, which demonstrates that the MnO 2 /O 2 -carbon fabric not only has a rapid capacitive response but also good electronic conductivity. In addition, GCD curves of MnO 2 /O 2 -carbon fabric were collected at various current densities (Figure 5(b)). The specific capacitance derived from the discharge curves measured at different current densities can be calculated according to the following equation [1]: C = Q/(ΔV × m), where C is the mass specific capacitance of the MnO 2 /O 2 -carbon fabric, Q is the average electric quantity (charge), ΔV is the working voltage window of the active material, and m is the mass of the active material. The specific capacitance of the MnO 2 /O 2 -carbon fabric calculated from the CV curves at different scan rates is summarized in Figure 5(c). The specific capacitance decreases as the scan rate increases. The highest specific capacitance for MnO 2 /O 2 -carbon fabric reaches 275 F/g at the scan rate of 5 mV/s. This value is higher than those recently reported for other MnO 2 electrodes [25][26][27]. The specific capacitance of MnO 2 /O 2 -carbon fabric still remains at more than 45% (120 F/g) of that obtained at 5 mV/s when the scan rate is increased to 200 mV/s. It is important to note that the specific capacitance contribution of the carbon fabric itself is rather small [28,29]. Thus, the MnO 2 /O 2 -carbon fabric has a high rate capability, which is beneficial for potential applications. The high rate capability can be attributed to the unique free-standing composite structure, comprising carbon fibers with good electrical conductivity and disordered nanosheets, which not only makes electron transport and ion diffusion convenient but also facilitates the reaction of active species, so that a good rate capability is obtained.
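Using the reconstructed relation C = Q/(ΔV × m), the specific capacitance can be evaluated from a single CV sweep by integrating the current. A minimal sketch, with illustrative variable names and synthetic data:

```python
import numpy as np

def specific_capacitance_cv(voltage, current, scan_rate, mass):
    """Specific capacitance (F/g) from one monotonic CV sweep.

    voltage   : potentials (V) sampled during the sweep
    current   : corresponding currents (A)
    scan_rate : scan rate (V/s)
    mass      : mass of active material (g)
    """
    dV = voltage.max() - voltage.min()                 # working voltage window
    q = np.trapz(np.abs(current), voltage) / scan_rate # charge passed (C)
    return q / (mass * dV)

# Example: an ideal 275 F/g response at 5 mV/s with a hypothetical 1 mg loading
U = np.linspace(0.0, 0.8, 200)                  # assumed 0-0.8 V window
I = np.full_like(U, 275 * 1e-3 * 5e-3)          # i = C * m * scan_rate
print(specific_capacitance_cv(U, I, 5e-3, 1e-3))  # ~275 F/g
```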
Besides high specific capacitance, good cycling performance is also one of the most important characteristics of high-performance supercapacitors [24]. In the present work, GCD cycling at a current density of 5 A/g was employed to evaluate the long-term stability of the MnO 2 /O 2 -carbon fabric electrode. From Figure 5(d), it is clearly observed that the specific capacitance of MnO 2 /O 2 -carbon fabric remains above 70% of the initial capacitance over the first 5000 cycles. The capacitance then even slightly increases to about 80% of the initial capacitance after 10,000 cycles. This slight increase in the specific capacitance of the MnO 2 /O 2 -carbon fabric can be explained as follows: after the initial cycles, the intercalation and deintercalation of the active species proceed completely, increasing the number of active sites and hence enhancing the specific capacitance. The outstanding long-term stability can be attributed to the good contact between the MnO 2 nanosheets and the carbon fibers. Furthermore, this cycling performance is better than recently reported results for MnO 2 nanotube arrays [30], MnO 2 nanowires [31], and hierarchical tubular MnO 2 structures [32].
Conclusions
In summary, uniform large-area MnO 2 nanosheets were successfully fabricated on flexible carbon fabric through a simple electrodeposition method. The as-electrodeposited MnO 2 /O 2 -carbon fabric was implemented as supercapacitor electrodes and shows outstanding electrochemical performance, such as high specific capacitance and good cyclic stability. These results suggest that MnO 2 /O 2 -carbon fabric is a promising electrode material which has great potential for application in flexible supercapacitors.
Figure 1 :
Figure 1: Schematic of the fabrication procedure for MnO 2 -carbon composites.
Meanwhile, there is lots of sheet | 2,734.2 | 2016-06-01T00:00:00.000 | [ "Materials Science", "Engineering" ] |
Dimple Generators of Longitudinal Vortex Structures
Visual studies of the characteristic features of a vortex flow inside and near a pair of oval dimples on a hydraulically smooth flat plate, together with measurements of its velocity and pressure fields, are presented. It is established that, depending on the flow regime, potential and vortex flows with ejection of vortex structures out of the dimples into the boundary layer are formed inside the oval dimples. Under laminar flow conditions, no vortex motion is observed inside the dimples. With increasing flow velocity, boundary layer separation, a shear layer, and potential and circulating flows are formed inside the oval dimples. Under turbulent flow conditions, the potential motion disappears, and intensive vortex motion is formed. The profiles of the longitudinal velocity and of the dynamic and wall-pressure fluctuations are studied inside and on the streamlined surface of the pair of oval dimples. The maximum wall-pressure fluctuation levels are found on the aft walls of the dimples. Tonal components corresponding to the oscillation frequencies of the vortical flow inside the dimples and to the ejection frequencies of large-scale vortical structures out of the dimples are observed in the velocity and pressure fluctuation spectra.
Introduction
Various inhomogeneities of a streamlined surface in the form of cavities or dimples are present in many hydraulic structures and constructions. Under appropriate flow conditions, large-scale coherent vortex systems and small-scale vortices are formed inside the dimples, generating intense fluctuations of velocity, pressure, temperature, vorticity, and other turbulence parameters [1][2][3]. Boundary layer control uses these artificial vortex structures for drag reduction, enhanced mixing, and noise minimization. Vortex structures of various scales, directions, rotational frequencies, and oscillations are generated in space and time depending on the flow regime and on the geometric parameters and shape of the cavities. Experimental and numerical results of aerodynamic and thermophysical studies have shown a rather high efficiency of dimpled reliefs, which increase heat and mass transfer at the cost of only a slight increase in hydrodynamic losses [4][5][6].
The boundary layer separation from the frontal edge of the cavity and the instability of the shear layer generate vortex structures inside the cavity. With increasing flow velocity, one of the edges of the vortex structures circulating in the cavity separates from the streamlined surface of the cavity and is carried away by the flow. These inclined structures have a longitudinal dimension that substantially exceeds their lateral scale. They intensively promote exchange between the medium in the cavity and the surrounding area [2,3,7,8].
The experience gained by scientists and engineers in using dimpled surfaces indicates that the creation of vortex systems that are stable in time and space inside the cavities holds promise for boundary layer control. The creation of large-scale coherent vortex structures with predefined qualities makes it possible to change the structure of the boundary layer or the separation flow. It improves heat and mass transfer, reduces the drag of streamlined structures, or changes the spectral composition of aerohydrodynamic noise in order to reduce it [3,9,10].
In Refs. [11,12], it was noted that spherical cavities are not optimal in terms of heat-transfer and hydraulic efficiency for the turbulent regime of heat-carrier flow, and for the laminar regime their use is practically unjustified. The presence of a switching mechanism of generation and ejection of vortex structures inside spherical cavities on a streamlined surface [13][14][15] does not allow the formation of longitudinal vortex structures that are stable in space and time, which are necessary for boundary layer control. This defect is absent in oval dimples oriented at an angle to the flow direction. Asymmetry of the dimple shape due to its lateral deformation makes it possible to transform the vortex structure and intensify the transverse flow of liquid within its boundaries. Adding a shallow dimple of asymmetric shape leads to a reorganization of its flow. The two-dimensional vortical structure generated in a symmetric dimple during laminar flow is changed to an inclined monovortex. The high stability of the inclined structure should be noted, which ensures the stability of vortex intensification of heat transfer [16][17][18].
In this connection, the purpose of this experimental work is to study the characteristic features of the flow over a system of oval dimples on a flat plate and to study the fields of dynamic and wall-pressure fluctuations inside and on the streamlined surface of the inclined oval dimples and in their vicinity.
Experimental setup
Experimental research was carried out in a hydrodynamic flume with an open water surface, 16 m long, 1 m wide, and 0.4 m deep. The scheme of the experimental stand and the location of the measuring plate with dimples are given in works [19,20]. At a distance of about 8 m from the input part of the flume, there was a measuring section equipped with control equipment and means of visually recording the flow characteristics, coordinate devices, lighting equipment, and other auxiliary tools necessary for conducting the experimental research. The design and equipment of the hydrodynamic flume allowed the flow velocity and water depth to be controlled over wide ranges.
The transparent walls of the hydrodynamic flume, made of thick shockproof glass, ensured high-quality visual observations.
A hydraulically smooth flat plate made of polished organic glass, 0.01 m thick, 0.5 m wide, and 2 m long, was sharpened on one (front) side and on the other (aft) side. End washers were fixed to the lateral sides of the plate. At a distance of X = 0.8 m from the front edge of the plate, there was a hole in which the system of two oval dimples was installed, oriented at an angle of 30 degrees to the flow direction (Figure 1). The diameter of the spherical part of each dimple (d) was 0.025 m. The width and length of the cylindrical part of the dimple were also 0.025 m. Thus, the oval dimples, located at a distance of 0.005 m from each other, had a width of 0.025 m, a length of 0.05 m, and a depth-to-width ratio of h/d = 0.22. According to the developed program and experimental research methodology, visual studies were carried out first. Then, at the characteristic points of vortex generation and the places where the vortices interact with the streamlined surface, measurements of the velocity and pressure fields were carried out. Visualization was performed by applying contrast coatings to the streamlined surface and by introducing coloring agents into the stream. Paints and labeled particles were introduced through a small-diameter tube into the boundary layer in front of the dimple and/or inside the dimple.
The study of the pressure fluctuation fields on the streamlined surface of the oval dimples and the plate, as well as the velocity fields of the vortex flow over the investigated surfaces, was carried out using miniature piezoceramic and piezoresistive pressure fluctuation sensors and differential electronic manometers (Figure 2a). Specially designed and manufactured pressure sensors were installed flush with the streamlined surface and measured the absolute pressure and the wall-pressure fluctuations [9,21,22]. Inside the system of oval dimples and in their near wake, 12 pressure fluctuation sensors were used (Figure 2b). The field of velocity fluctuations inside the pair of oval dimples and over the streamlined plate surface was measured by sensors of the dynamic pressure fluctuations, or dynamic velocity head, based on piezoceramic sensing elements.
The degree of flow turbulence in the hydrodynamic flume did not exceed 10% for the velocity range from 0.03 to 0.5 m/s. The levels of acoustic radiation in the area of the dimples were no more than 90 dB relative to 2 × 10 −5 Pa in the frequency range from 20 Hz to 20 kHz, and the vibration levels of the test plate with the pair of dimples and the sensor holder did not exceed −55 dB relative to g (gravitational acceleration) in the frequency range from 2 Hz to 12.5 kHz. The measurement error of the averaged parameters of the velocity and pressure fields did not exceed 10% (reliability 0.95). The measurement error of the spectral components of the velocity fluctuations did not exceed 1 dB, and that of the pressure and acceleration fluctuations no more than 2 dB, in the frequency range from 2 Hz to 12.5 kHz.
Research results
No vortical motion inside the dimples was observed (Figure 3a) for a laminar flow regime over the pair of oval dimples ( U = (0.03…0.06) m/s, Re X = UX/ν = (24,000…48,000), and Re d = Ud/ν = (750…1500), where ν is the kinematic viscosity of water). The contrast dye was transferred inside the dimple along its front spherical and cylindrical parts and gradually filled the entire volume of the oval dimples. No separation flow was observed inside the dimples, and the colored dye, which moved from the front of the dimple to its aft part, performed a non-intensive oscillatory motion.
When the flow velocity was increased to (0.08…0.12) m/s, a separation zone of the boundary layer appeared inside the front parts of the oval dimples. A shear layer began to form over the dimple opening, generating a circulating flow and a slow vortex motion inside the dimples (Figure 3b). This fluid motion took the form of longitudinal spirals and was slow and almost symmetrical in each of the dimples. The liquid in the dimples fluctuated in three mutually perpendicular planes. The oscillation frequencies in each of the dimples were practically equal, but the destruction of the vortex sheet did not occur simultaneously. Contrast material entered the dimples along their front semispherical and cylindrical parts. The separation and circulation areas behind the front edge of the dimple occupied almost half the volume of the dimple. There was a very slow rotation of the fluid inside the dimples, whose direction coincided with the direction of the flow, together with fluctuations along the longitudinal and transverse axes of the dimples. The disturbance package was transferred in the direction of the flow at a transfer velocity of approximately (0.4…0.5) U . In this case, the contrast material was ejected into the plate boundary layer over the region where the aft cylindrical and spherical parts of the dimple join (Figure 3b). The ejection of a large-scale vortex or spiral-like vortices from the oval dimple was observed at a frequency close to f = (0.16…0.2) Hz, corresponding to a Strouhal number St = fd/U = (0.04…0.05). A wake of the contrast material in the boundary layer outside the dimples was traced over a distance of about 8-10 dimple diameters. The vortex motion became more intense when the flow velocity over the dimple system was increased to (0.2…0.3) m/s ( Re X = (160,000…240,000) and Re d = (5000…7500)). The zone of potential flow near the separation wall of the oval dimple almost vanished (Figure 4a). All the fluid filling the front spherical part of the dimple turned into a circulating flow and formed a coherent large-scale spindle-shaped vortex. This vortex had its source near the center of the spherical part of the dimple and oscillated intensively. During ejection, the spindle-shaped vortex structures began to lift above the front hemispheres of the oval dimples and to stretch along the axes of the dimples. Then they were ejected out of the dimples over their aft parts. These large-scale vortex structures rotated in the XOZ plane in opposite directions in the two dimples. For example, in the left dimple in Figure 4a, the vortex rotated counterclockwise, and in the right dimple, clockwise. Ejection of vortex structures from the two dimples was sometimes observed at the same time, but in most cases the ejections occurred at different time intervals. At the same time, there was no interaction of these vortex structures in the near wake of the dimples. The frequency of ejections of large-scale vortex structures from each of the dimples was estimated as (0.4…0.6) Hz, or St = (0.04…0.06). In addition, ejections of small-scale eddy structures were also observed. These vortices broke off from the upper part of the large-scale spindle-shaped vortex during its formation, when its transverse scale exceeded the depth of the dimple. The vortex structures retained their identity over a distance of (7…9) dimple diameters.
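The nondimensional numbers quoted throughout ( Re X = UX/ν, Re d = Ud/ν, St = fd/U ) are simple to tabulate. A small sketch, taking ν = 10⁻⁶ m²/s for water (an assumed value, consistent with the Reynolds numbers quoted in the text):

```python
NU = 1e-6    # kinematic viscosity of water, m^2/s (assumed)
X = 0.8      # distance of the dimples from the plate's leading edge, m
D = 0.025    # diameter of the spherical part of the dimple, m

def regime(U, f_eject):
    """Re_X, Re_d, and St for a given flow velocity and ejection frequency."""
    re_x = U * X / NU
    re_d = U * D / NU
    st = f_eject * D / U
    return re_x, re_d, st

# U = 0.25 m/s, 0.5 Hz ejections -> Re_X = 200,000, Re_d = 6,250, St = 0.05
print(regime(0.25, 0.5))
```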
For developed turbulent flow at a flow velocity of (0.4…0.5) m/s ( Re X = (320,000…400,000) and Re d = (10,000…12,500)), the contrast dye was concentrated inside the front spherical parts of the dimples (Figure 4b). Here, spindle-shaped vortex structures were generated and were ejected from the dimples at a frequency close to 1 Hz ( St ~0.05). The colored dyes, swirling in the spindle-shaped vortex, oscillated intensively in three mutually perpendicular planes. When the transverse scale of the spindle-shaped vortex exceeded the depth of the dimple, an intensive ejection of small-scale structures was observed from its upper part. These vortices were flushed out over the region where the front spherical part of the dimple joins its aft cylindrical part. The ejection frequency of the small-scale vortices was estimated as (4…5) Hz, or St = (0.2…0.25). As the dye visualization showed (see Figure 4b), the flow in the gap between the dimples did not undergo significant perturbations, as can be seen from the dye on the axis of the plate, which was not washed away.
The intensity of the longitudinal velocity fluctuations over the streamlined surface of the oval dimple ( u′/U ), calculated from the dynamic pressure fluctuations measured by the piezoceramic sensors as u rms ′ = √(2( p rms ′ ) dyn /ρ), is presented in Figure 5a as a function of the distance from the streamlined surface ( y/δ , where δ is the boundary layer thickness in front of the oval dimple). These results were obtained for two flow velocities, namely, U = 0.25 m/s ( Re X = 200,000) and U = 0.45 m/s ( Re X = 360,000). They were measured over the streamlined surface of one of the oval dimples, above wall-pressure fluctuation sensor No. 2 (see Figure 1b). The longitudinal velocity fluctuations increase on approaching the streamlined plate surface. They reach a maximum and then decrease at the level of the plate surface above the opening of the oval dimple. When the dynamic pressure fluctuation sensors are lowered into the opening of the oval dimple, the velocity fluctuations increase again (at the boundary of the shear layer) and then decrease (in the core of the circulating flow inside the dimple).
The change of the rms values of the wall-pressure fluctuations measured on the streamlined surface of the oval dimple and in its vicinity is presented in Figure 5b as a function of the Reynolds number. The root mean square values of the wall-pressure fluctuations were normalized by the dynamic pressure ( q = ρ U 2 /2 ). In this figure, the curve numbers correspond to the numbers of the wall-pressure fluctuation sensors, which are mounted flush with the streamlined surface in accordance with Figure 1b.
Thus, the wall-pressure fluctuations on the flat surface before the dimples follow a quadratic dependence on the flow velocity. It should be noted that the wall-pressure fluctuations normalized by the dynamic pressure in the undisturbed boundary layer before the oval dimples are approximately 0.01 over practically the entire range of studied Reynolds numbers.
Consequently, the smallest levels of the wall-pressure fluctuations are observed at the bottom of the oval dimples, in their front parts, especially for low flow velocities and Reynolds numbers (curve 2, Figure 5b). Inside the oval dimples, the levels of the wall-pressure fluctuations are greatest in the aft spherical parts of the dimples and in the near wake immediately after the dimples (see curves 4, 7, and 9 in Figure 5b).
A spectral analysis of the wall-pressure fluctuations on the streamlined surfaces of the oval dimples and the plate was performed. To do this, we used the fast Fourier transform algorithm and the Hanning weighting function, as recommended in [23][24][25]. The power spectral densities of the wall-pressure fluctuations on the streamlined surface of the oval dimples and on the flat plate near the system of these dimples have clearly visible discrete peaks, which correspond to the nature of the vortex and jet motion over the investigated surfaces. In this case, the oscillations of the vortex motion and, respectively, of the wall-pressure fluctuation field inside the dimple correspond to the subharmonics and higher-order harmonics of these frequencies, as clearly illustrated in Figure 6a.
The results of the measurements of the power spectral densities of the wall-pressure fluctuations along the middle section of the oval dimple system, as well as inside the dimples, are shown in Figure 6b. It should be noted that under the boundary layer on the flat surface of the hydraulically smooth plate, the spectral levels of the wall-pressure fluctuations (curve 1) are minimal and do not show the tonal or discrete peaks observed inside and near the dimples. Behind the oval dimples, these discrete peaks are clearly visible in the spectra, but the tone frequencies near the system of oval dimples and at a distance of 2 d differ from them (see curves 8 and 11 in Figure 6b). In the middle section of the oval dimple system, where sensor No. 8 is located, the character of the pressure fluctuation spectrum differs from that on the aft wall of the dimple. Here, the maximum of the spectral levels is observed at a frequency of 0.2 Hz ( St = 0.02), and in the frequency range of the order of 0.03 Hz ( St = 0.003), the intensity of the wall-pressure fluctuations is negligible. At a distance of 2 d from the dimples, tonal peaks appear in the spectra corresponding to the ejection frequencies of large-scale vortex structures out of the dimples. Thus, the traces of the vortex flows ejected from the oval dimples intersect at the location of sensor No. 11.
Experiments have shown that all sensors located at a distance of 2 d from the system of oval dimples record a wall-pressure fluctuation field with tonal peaks in the spectra corresponding to the ejection frequencies of large-scale vortex structures from the dimples, the frequencies of the oscillatory motion inside the dimples, and their subharmonics and higher-order harmonics. At such a distance from the dimples, the spectral levels are lower than in the near wake of the dimples. Thus, with increasing distance from the system of oval dimples, the boundary layer is gradually restored, as was observed during the flow visualization.
Under conditions of developed turbulent flow ( Re d > 11,000 and Re X > 350,000), the spectral characteristics of the wall-pressure fluctuation field are similar to those observed for Reynolds numbers Re X = 200,000 and Re d = 6250, but the spectral levels become higher (Figure 7a). In the spectra of the wall-pressure fluctuations inside the oval dimple, there are discrete peaks that correspond to the subharmonics and higher-order harmonics of the dominant frequencies of the vortex motion.
The features of the vortex motion, as well as the wall-pressure fluctuation field it generates, in the near wake of the oval dimple system, in its middle section, and at a distance of 2 d from the system of oval dimples are shown in Figure 7b. The spectral levels of the wall-pressure fluctuations in the wake behind the aft spherical part of the dimple are similar to those obtained inside the oval dimple, as for the lower flow velocity. At the same time, the spectra in the middle section of the system of oval dimples (in their near wake) have a specific character, with a maximum at 0.13 Hz (curve 8 in Figure 7b). Behind the aft spherical part of the oval dimple, the spectral dependences of the wall-pressure fluctuation field show tonal peaks characteristic of the vortical motion inside the oval dimples. In the middle section of the oval dimple system at a distance of 2 d from the dimples, discrete peaks appear in the spectral levels of the wall-pressure fluctuations. They are characteristic of the low-frequency oscillations of the vortex motion inside the dimples, as well as of the ejection of large-scale vortex structures from the dimples. The intensity of the wall-pressure fluctuations at the ejection frequency, for example, is much lower for this flow regime than that observed for the flow velocity of 0.25 m/s (Figures 6b and 7b). This is because, for a higher flow velocity, the interaction between the vortices of the two dimples takes place farther from the dimples, and the distance of 2 d from the pair of oval dimples corresponds, for this regime, to the initial stage of this interaction.
Conclusions
1. The visual images of the vortex flow formed inside the oval dimple system are obtained, and the characteristic features of vortex formation for different flow regimes are determined. It has been experimentally established that no separation flow was observed inside the dimples for the laminar regime. For the transitional flow regime and small flow velocities, very intense longitudinal spirals form within the oval dimples; these rotate and slowly fluctuate along the longitudinal and transverse axes of the dimples. For the turbulent flow regime, spindle-shaped vortices are formed inside the oval dimples and, with increasing velocity, are pressed against the front spherical parts of the dimples. These spindle-shaped vortices, reaching the scales of the dimples, are ejected from the oval dimples, disturbing the structure of the boundary layer. Inside the oval dimples, there is a low-frequency oscillatory motion in mutually perpendicular planes relative to the axes of the dimples, whose frequency increases with increasing flow velocity.
2. It is shown that the intensity of the field of the velocity fluctuations has maximum values near the streamlined surface and also on the boundary of the shear layer in the opening of the oval dimple. The intensity of the wall-pressure fluctuation field is greatest in the region where the vortex structures of the shear layer and the large-scale vortex systems ejected from the dimples interact with the aft wall of the oval dimple. The smallest intensity of the wall-pressure fluctuations occurs at the bottom of the oval dimple, in its forward spherical part.
3. It has been established that, depending on the flow regime, characteristic features appear in the spectral characteristics of the wall-pressure fluctuation field measured on the streamlined surface, in the form of discrete peaks corresponding to the frequencies of the low-frequency oscillations of the vortex flow inside the oval dimples and to the ejection frequencies of large-scale vortex systems from the dimples. In the middle section of the system of oval dimples (in their near wake), there is no interaction of the vortex structures ejected from the dimples. At distances of more than two dimple diameters, intense tonal peaks are observed in the spectral dependences; they correspond to the ejection frequencies of large-scale vortices and to the frequency of oscillations of the vortex motion inside the dimples, both in the middle section of the system of the dimples and behind their aft spherical parts. With increasing distance from the system of oval dimples, the intensity of the tonal oscillations characteristic of the vortical motion inside the dimples decreases, and the boundary layer is restored.
"Engineering",
"Physics"
] |
A System to Determine the Optimal Work-in-Progress Inventory Stored in Interoperation Manufacturing Buffers
Continuous cost reduction is a subject of interest for almost every production company. The cost reflects the competitiveness and sustainability of the business. Many company costs are linked to the effectiveness of production. One such cost is the work-in-progress (WIP) inventory cost. The present article deals with the design of a system for calculating the optimal WIP inventory stored in a manufacturing buffer, which, in the long term, provides the lowest costs. The main goal of the article is to design a new system that allows for the calculation of the optimal capacity of interoperation manufacturing buffers and thus the calculation of the optimal WIP inventory, which influences the lead time and cost. The newly designed system consists of algorithms that describe various steps, many of which use mathematical models. The individual blocks of algorithms are described, and the proposed system is verified and validated by simulation of the production line in the automotive production company.
Introduction
It is known that industry is at the threshold of a transformation, which will have a major impact on the production of goods, the provision of additional services, the labour market, the working environment, and the behaviour of customers. Society is constantly evolving, as are its needs, and research has always responded to the needs of society [1]. These needs have been transformed into scientific inventions that have been adopted by companies to meet them. There have been four main scientific advances since the Industrial Revolution that have provided companies with a huge competitive advantage. The first advancement that changed the method of production was the use of water and steam power. This enabled the first real mechanisation and served as a springboard for further inventions.
The further development of these needs resulted in the Second Industrial Revolution, which mainly involved electrification and process thinking. The first person to make this change in processing was Henry Ford. The first process created was a movable assembly line that allowed products to move from one workstation to another, in contrast to the common practice of the time. This allowed production costs to be reduced and production to be increased to satisfy mass demand. At the time, the markets were not oversaturated and therefore did not demand quality or variability in the product spectrum. Transfer lines were used; their advantage was the production of large quantities at low cost, given that the fixtures and tools were fixed. The transfer lines, however, also created fixed bonds between the devices. Introducing interoperation manufacturing buffers makes it possible to reduce the WIP and, from subjective observations, the energy consumption as well. The Industrial Engineering Department workplace has significant experience in the modelling, simulation and optimisation of material flows in production, and the field of interoperation manufacturing buffers is an advanced industrial engineering direction aimed at research towards the development of smart factories. The goal of the article is the design of a system to determine the optimal WIP inventory stored in manufacturing buffers, using the calculation of an objective function and simulation software, for the purpose of reducing the WIP inventory as well as the WIP costs, with cost being the main criterion for optimal performance.
Simulations
Simulations can be understood as imitations of the behaviour of dynamic systems. The basic purpose of a simulation is to determine the system's response for known parameters (inputs); their sequence and values determine the final behaviour of the system. A simulation abstracts the real system to its most essential parts, which reduces the computational difficulty [12]. Based on this abstraction, the simulation model is created, verified and validated. Simulation models are run in simulation experiments, usually according to an experiment plan. After the simulation runs, formal results are obtained and interpreted in the form of consequences for the real system. Based on these, a decision is made as to whether the system should be modified or whether the variation of another system parameter should be investigated [13,14]. An illustration of the simulation cycle is given in Figure 1. The importance of simulation grows with the complexity of systems. In simple systems, it is possible, using certain knowledge, to estimate possible system behaviours by varying the input parameters. In complex systems, it is necessary to use a simulation. In general, it is possible to define three types of system complexity [15]:
• simple systems,
• complicated systems,
• complex systems.
For the first two cases, the tools commonly used for event-oriented simulations can be utilized; for complex systems, emergent behaviours can appear, and agent-based simulations are therefore suitable. Simulations are applicable in areas such as air transport, logistics, supply chains, production, and telecommunications.
Simulation in a PC environment imitates the operation of the system and its internal processes over time, in sufficient detail to draw conclusions about the system's behaviour. Simulation models are created using software designed to represent common system components and to record the system's behaviour over time. Depending on the nature of the process, simulations can be stochastic or deterministic.
The basic approaches for creating a simulation model in a PC environment are:
• Creating the simulation model in a higher programming language (for example, Fortran, Pascal, C, Basic, etc.). This approach is used for special applications for which no supporting software exists in the area;
• Using a simulation language and programming the model in this language (GASP II, SIMAN, SLAM, GPSS, etc.). Simulation languages (systems) simplify the process of creating simulation models by providing ready-made structures for repetitive activities (generators, collection and processing of statistics, animation, initialization, etc.);
• Applying a generally usable model, called a simulator (Arena, Witness, Plant Simulation, Simfactory II.5, Simio, GEMS, etc.). A simulator is a generalized simulation model of a particular type of system (e.g., an automated warehouse, production system, transfer line, etc.). The use of a simulator usually does not require knowledge of programming;
• Commissioning a functional model, or an entire simulation project, from a specialized company.
For the planning of simulation experiments, a simulation target needs to be defined. It is important that the degree of reduction and abstraction of the real system is appropriate for the desired output. The chosen factors and parameters depend on the simulation target. Factors are the input values of the investigated system that determine its behaviour [16]. Based on [17], in general, the following factors can be distinguished:
• Managed factors (x j ), j = 1, 2, . . . , m, which have a constant value or whose value is changed during the experiment in a predetermined manner;
• Controlled (managed) factors (u k ), k = 1, 2, . . . , m, whose values remain constant during experimentation, as they are not the subject of the experiment;
• Disturbing (uncontrolled and unmanaged) factors (w l ), l = 1, 2, . . . These are factors of unknown origin, which occur randomly and may follow some theoretical distributions (e.g., the incidence and duration of faults in system elements, etc.).
Parameters are the output values of the modelled system, which express the changes in the system caused by changing the values of the individual factors. These include, for example, the minimum lead time, maximum resource utilization, minimum cost of production, etc.
Software for Realisation of Simulation Runs
The simulation software Tecnomatix Plant Simulation (version 14.0.0.1177, academic version) from Siemens (Berlin, Germany) was chosen for the verification and validation of the simulation model.
Tecnomatix Plant Simulation allows the realisation of simulations, from which statistics can be obtained for the purpose of optimisation. The software also allows the realisation of multi-level experiments from parameters defined in advance, without tedious changes of parameters within single runs [17,18]. The functions of Tecnomatix Plant Simulation enable the creation of a digital model of real logistic systems (for example, production flow, material flow in supplying, etc.), thanks to which experiments and control of individual courses and system characteristics can be done. The advantage of simulation software like Tecnomatix Plant Simulation is that the testing is executed in a digital model without corrupting or touching the real system, so the possibility of failure is eliminated. Outputs gained from the simulation in the form of information enable analysts to execute fast and credible decision-making during production. Indications of user settings are given in Figures 2 and 3.
Manufacturing Buffers and their Functions
Manufacturing buffers form an important part of production lines and systems. Their role is to create a stock of spatially oriented semi-finished products. The movement of these semi-finished products may be caused by their own weight or by an external driving force. Based on [19], the most common types currently used in production include groove manufacturing buffers, vibrating manufacturing buffers, pipe manufacturing buffers, chain manufacturing buffers, cassette manufacturing buffers, and conveyors functioning as manufacturing buffers.
The operation of automatic lines is adversely affected by the fixed bonds between the machines. The cycle times of the automatic machines, in other words, the times taken to manufacture one product, should be the same; in practice, however, there is a discrepancy between the requirements of the production technology and the cycle times. From a fault point of view, on an automatic line with fixed bonds, the failure of a single machine causes the entire manufacturing line to be suspended for repair. The benefits of fixed bonds include, in particular, fewer halts in the transport of products between operations, which leads to a shortened production time and less work in progress. The assessment of technological and manufacturing factors, together with differing cycle times, leads to the inclusion of manufacturing buffers between certain machines. Based on [11], these interoperation manufacturing buffers can be divided into the following:
• Automatic: The products are stored in the manufacturing buffers after the previous operation has been performed. When the manufacturing buffers are filled, the preceding automatic machine is automatically stopped. Conversely, when a manufacturing buffer is empty, the next machine is stopped.
• With a manual operator: The products are stored in boxes, pallets, etc. at the automatic machine and are handled manually in a secure, defined space between the machines. The entrance to the next slot is usually provided by automatic feeders, which are supplemented by the operator.
If the system uses a central warehouse located outside its own production area, this cannot be understood as the work of an interoperation manufacturing buffer but must be seen as a disruption of the production flow; in this case, the machines are assessed separately.
If the manufacturing buffers are automatic, then the entire system of machines can be called an automatic line with a flexible bond or a flexible, automatic line. The conveyor is used to ensure the distribution of the products to all branches.
Each interoperation manufacturing buffer can be characterised by its capacity, that is, the maximum number of semi-finished products that it can hold. Depending on the faults, in a system of automatic machines with interoperation manufacturing buffers, the machines influence each other while running; this is triggered by insufficient supply or removal of semi-finished products to or from the manufacturing buffers. In an automatic line with N machines, each device can be characterised by its failure intensity, its maintenance intensity, its production cycle and, assuming an exponential distribution, its running time and repair time. The running times and repair periods of the machines are independent random variables.
The run of the i-th machine is assessed according to its use factor or its downtime. The downtime of the i-th machine is made up of the machine's own downtime (η iv ), given by the failure rate of the i-th machine, and the downtime (η is ) triggered by adjacent machines. The mean running time of the i-th machine (T i ) and the mean maintenance period (ϕ i ) can then be used to express the total downtime (η i ) as the sum of these two contributions [11]. The compensation of the downtime (η is ) caused by the influence of adjacent machines is achieved thanks to the interoperation manufacturing buffers. The function of the interoperation manufacturing buffers is largely influenced by the cycle times of the individual machines (ϑ i ) (i = 1, 2, . . . , N).
For automatic lines with a fixed bond, the line cycle time, or takt time of a line with zero interoperation manufacturing buffer capacity, is given by the slowest automatic machine (ϑ j ). The inclusion of the interoperation manufacturing buffers enables the gradation of the cycle times [11]. The downtime (η i ) throughout the line is then assessed by the downtime of the slowest automatic machine [11]. The optimization of the line design is therefore achieved by the appropriate choice of the capacities of the interoperation manufacturing buffers and the timing of the cycles, so as to give the shortest downtime (η js ). The aim is to find the optimal size of the manufacturing buffers in terms of profit and cost with respect to the reliability of the machines.
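To make the interplay between failures, repairs and buffer capacity concrete, the following sketch simulates a short line of automatic machines separated by finite buffers. It is a minimal illustration, not the authors' model: the `Machine` parameters (cycle time, exponential running and repair times) and the blocking/starving rules are simplified assumptions stated in the comments, and all numerical values are made up.

```python
import random

random.seed(1)

class Machine:
    def __init__(self, cycle, mtbf, mttr):
        self.cycle = cycle          # time to produce one part, min
        self.mtbf = mtbf            # mean running time between failures, min
        self.mttr = mttr            # mean repair time, min
        self.next_fail = random.expovariate(1 / mtbf)
        self.down_until = 0.0

def simulate(machines, caps, horizon=10_000.0, dt=0.1):
    """Time-stepped simulation of a serial line with finite buffers.

    Simplifications: failure clocks run even while a machine waits; a part
    occupies the upstream buffer until the machine finishes it.
    Returns per-machine downtime coefficients (fraction of time not producing).
    """
    buf = [0] * len(caps)            # current buffer contents
    progress = [0.0] * len(machines) # work done on the current part
    idle = [0.0] * len(machines)
    t = 0.0
    while t < horizon:
        for i, m in enumerate(machines):
            if t < m.down_until:                 # under repair
                idle[i] += dt
                continue
            if t >= m.next_fail:                 # failure occurs now
                m.down_until = t + random.expovariate(1 / m.mttr)
                m.next_fail = m.down_until + random.expovariate(1 / m.mtbf)
                idle[i] += dt
                continue
            starved = i > 0 and buf[i - 1] == 0
            blocked = i < len(caps) and buf[i] >= caps[i]
            if starved or blocked:               # waiting on neighbours
                idle[i] += dt
                continue
            progress[i] += dt
            if progress[i] >= m.cycle:           # part finished
                progress[i] = 0.0
                if i > 0:
                    buf[i - 1] -= 1              # release the upstream part
                if i < len(caps):
                    buf[i] += 1                  # push to the downstream buffer
        t += dt
    return [x / horizon for x in idle]

# Three machines, two buffers with capacities 3 and 2 (illustrative values).
line = [Machine(cycle=1.0, mtbf=200.0, mttr=20.0) for _ in range(3)]
print(simulate(line, caps=[3, 2]))
```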
The Optimum Criterion in Simulations
Simulations are used to obtain statistics, but an appropriate criterion of optimality determines the chosen optimal variant. Based on [2], the following optimisation criteria are used:
• A chosen objective function: the principle lies in looking for the alternative in which the value of the dedicated function reaches the desired extreme. The objective function is usually compiled based on the direct dependency of the types of costs considered at the time (costs of the operation of the production system and costs associated with the objects served).
• The values of some numerical characteristics, which usually relate to the performance of the production system, intermediate periods of parts, etc.
The results of optimisation in simulations are probability values (the inputs are estimates). In our solution, a dedicated objective function is selected as the optimisation criterion.
Calculation of the Objective Function
Currently, no closed formula exists for calculating the optimal WIP inventory. However, each capacity of a production manufacturing buffer creates costs or benefits for the system, and the expression of their interaction is an objective function, whose value can be negative or positive. The optimum capacity of the manufacturing buffers may be determined with the aid of the maximum profit or of the objective function. The methods of calculation include Schor's unconstrained profit-maximization problem and the Maixner calculation of the objective function.
Schor's Unconstrained Profit Maximization Problem
The optimum capacity of the manufacturing buffers is determined by finding the maximum profit. The maximum profit balances the proceeds of production against the cost of inventories and of the buffer space itself, and the optimal capacity can also be found through a defined required production volume [20], where P(N) is the production rate (parts/time unit); P̂ is the required production rate (parts/time unit); A is the profit coefficient (€/part); n̄ i (N) is the average inventory of buffer i, i = 1, . . . , k − 1 (parts); b i is the buffer cost coefficient (€/part/time unit); c i is the inventory cost coefficient (€/part/time unit); and the constraint is N i ≥ N min , ∀i = 1, . . . , k − 1.
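The profit formula itself did not survive extraction. A standard form of this unconstrained buffer-allocation problem, consistent with the variable definitions above (a reconstruction that should be verified against [20]), is

$$\max_{N_1,\dots,N_{k-1}} \; J(N) = A\,P(N) \;-\; \sum_{i=1}^{k-1} \bigl( b_i\,N_i + c_i\,\bar{n}_i(N) \bigr), \qquad \text{s.t. } N_i \ge N_{\min},\; i = 1,\dots,k-1,$$

with the required production rate entering, where specified, as the additional condition $P(N) \ge \hat{P}$.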
Maixner Calculation of the Objective Function
The optimum capacity of the manufacturing buffers is determined by finding the highest value of the objective function. The purpose of the function is to weigh the proceeds from the increased output, relative to the current solution or a transfer production line, against the cost of inventories and of the manufacturing buffer itself [11], where G is the objective function (€); v is the profit from one product (€); D is the increase in the performance of the line over the current production or a fixed-bond line (pcs/2 shifts); B i is the capacity of the i-th buffer (pcs); b i is the operating and maintenance cost of the i-th buffer, referenced to one piece and two shifts (€/pc/2 shifts); s is the number of working days in a year; w is the number of years for which the product will be made on the line; a i is the production cost per piece of capacity of the i-th buffer (€/pc); c i is the cost of medium and general repair, referenced per piece (€/pc); and r is the coefficient of growth of the acquisition and repair costs (r ≥ 1).
In the following calculations, the Maixner calculation of the objective function is selected. The objective function expresses the value provided by maintaining a particular inventory in the system. It applies to a set of three machines and two manufacturing buffers and is a simple solution that can be applied gradually along the line. The line is divided into parts, and the objective function for three automatic machines and two manufacturing buffers is calculated by means of a decomposition method. From this split line, models can subsequently be created to obtain the downtime. The difference between Schor's unconstrained profit-maximization problem and the Maixner calculation of the objective function is that, in the Schor problem, the benefits are calculated from production, while Maixner calculates the benefits relative to the current solution or to a transfer production line [21].
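Since the Maixner formula itself did not survive extraction, the sketch below only illustrates how such an objective function could be evaluated over candidate buffer capacities. The functional form used here (benefit of the extra output over the planning horizon, minus acquisition, operating and repair costs of the buffers, with acquisition and repair scaled by r) is an assumption pieced together from the variable definitions above, not the published formula; [11] should be consulted for the exact expression.

```python
def objective_function(v, D, s, w, B, a, b, c, r):
    """Hypothetical Maixner-style objective function G.

    v : profit from one product (EUR)         D : performance increase (pcs/2 shifts)
    s : working days per year                 w : years of planned production
    B : buffer capacities (pcs)               a : production cost per piece of capacity
    b : operating cost per piece and 2 shifts c : medium/general repair cost per piece
    r : growth coefficient for acquisition and repair costs (r >= 1)

    ASSUMED form: benefit of the extra output over the whole horizon, minus
    buffer costs, with acquisition and repair costs scaled by r.
    """
    benefit = v * D * s * w
    cost = sum(Bi * (r * (ai + ci) + bi * s * w)
               for Bi, ai, bi, ci in zip(B, a, b, c))
    return benefit - cost

# D normally comes from the simulated downtime and grows with capacity with
# diminishing returns; the values below are made up for illustration.
D_by_cap = {1: 6.0, 2: 9.0, 3: 10.0, 4: 10.4, 5: 10.45}
for cap, D in D_by_cap.items():
    G = objective_function(v=2.5, D=D, s=250, w=3, B=[cap],
                           a=[40.0], b=[0.05], c=[15.0], r=1.5)
    print(cap, round(G, 2))   # the maximum marks the optimal capacity
```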
System for Determining the Optimal Work-in-Progress Inventory
The current situation in the field of interoperation manufacturing buffers is a combination of conveyors performing the functions of manufacturing buffers and common manufacturing buffers. To make manufacturing competitive and sustainable, it is necessary to actively avoid waste in production; such waste also includes a high level of buffer inventory. The cost of inventory is mainly manifested in the use of capital that could have served another purpose. WIP inventory is, therefore, a cost item and is required to be as low as possible. To reduce the WIP inventory, the optimum capacity of the manufacturing buffer must be calculated for each group of products before each device is added to the manufacturing line. For the process of determining the optimal capacity of manufacturing buffers, the system described below was created. Mutual interactions and relationships are defined for each block of the algorithm. The phases of this system are shown in Figure 4. The system is suitable for production systems that are already operating, from whose devices statistics can be obtained.
Definitions of Targets and Inputs
The main goal of the first phase, the definition of targets and inputs, is to determine the extent of the solvable manufacturing line parts and the objectives of the optimisation. This phase also includes the collection and processing of all data needed to determine the optimum capacity of the manufacturing buffer and, thus, the optimal WIP inventory. The data obtained here serve as input for the realisation of the calculations in the later phases. The contents of the algorithm for defining targets and inputs are illustrated in Figure 5. The definition of targets can be understood as the selection of a manufacturing line range with defined limitations, whose negative impacts are exceeded by the benefits. Inputs are understood as the data related to the devices on the manufacturing line that ensure the validity of the simulation model in representing the real situation.
Block P1.0: at the beginning of the optimisation process, it is necessary to define the objectives according to which the WIP inventory optimisation of the selected range of the production line will be carried out.
Block P1.1 represents the determination of the number of production days per year. This parameter includes only the production days on which the product for which the optimal WIP inventory is calculated is being produced. It can be estimated from the number of planned manufactured products drawn from a marketing survey and orders [21].
Block P1.2 represents the determination of the production volume for the entire shift (V), given in pieces for all shifts per day (the best results can be achieved by considering two shifts [11]). This is the maximum throughput that the manufacturing line is able to achieve without downtime; it can be determined from the cycle times of the automatic machines. Block P1.3 represents the identification of the cycle time of the automatic machine (ϑ), given in pieces per minute and converted into pieces per hour; it should be the cycle time of the automatic machine that is the bottleneck of the system. Block P1.4 represents the determination of the profit from one product (v), given in monetary units per piece; it is calculated as the selling price minus the unit cost of production.
Block P1.5 represents the number of years (w) for which the production of the product is planned. It is a dynamic indicator and depends mainly on the market and its requirements. The planned production time can be determined based on marketing forecasts and orders.
Block P1.6 represents the determination of the unit cost of production of the manufacturing buffer capacity with the following characteristics: (a) It is given in monetary units per piece. They are understood as storage costs. Storage costs create an effect where capital is tied to materials and cannot be used elsewhere. The capital-binding materials generate costs calculated from their price with the aid of any of the evaluation methods.
(b) It represents the running and maintenance costs of the manufacturing buffer, given in monetary units per manufacturing buffer for all shifts (the best results can be achieved by considering two shifts [11]). Its value can be calculated from the cost of lubricating the manufacturing buffer mechanisms; in the case of sensors, it is the cost of energy, the work done to control the sensors and, where appropriate, the cleaning of the manufacturing buffer of hardened emulsions mixed with impurities. These are activities carried out every day. Maintenance costs are determined by recording the repetitive acts performed each day, measuring their durations and multiplying the total duration by the hourly added value of the manufacturing line.
(c) It represents the determination of the costs of medium and general repair of the manufacturing buffer. It is given in monetary units for one manufacturing buffer. This is the cost of planned manufacturing buffer repairs that do not occur every day; they are planned ahead using predictive maintenance. They may be repaired outside the total productive maintenance (TPM) [20] of machines, which suspends the manufacturing line and creates costs. The cost depends on different factors like the needs of the professional and the cost of performing the repairs, e.g., the need for the replacement of mechanisms or sensors, the duration of the repair, etc. All such factors need to be considered, and they approximately determine the duration of the intervention and the costs of medium and general repair. The costs of medium and general repair can be found by multiplying the duration of the activity by the hourly added value of the manufacturing line.
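Blocks P1.6(b) and P1.6(c) both reduce to the same arithmetic: observed durations of maintenance or repair acts multiplied by the hourly added value of the manufacturing line. A small sketch of that conversion follows; the hourly added value and the durations are made-up illustrative numbers.

```python
# Hourly added value of the manufacturing line (EUR/h); illustrative value.
HOURLY_ADDED_VALUE = 180.0

def activity_cost(durations_min):
    """Cost of maintenance/repair acts: total duration times hourly added value."""
    return sum(durations_min) / 60.0 * HOURLY_ADDED_VALUE

daily_maintenance = [4.0, 2.5, 3.0]      # minutes per repetitive act, observed
planned_repair = [45.0]                  # minutes, predictive-maintenance plan
print(activity_cost(daily_maintenance))  # daily operating/maintenance cost (b)
print(activity_cost(planned_repair))     # medium/general repair cost (c)
```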
Block P1.7 represents the determination of the factor that determines the growth of the acquisition and repair costs (r). Its value should satisfy r ≥ 1; based on [11], the ideal value of this coefficient is 1.5.
Block P1.8 represents the acquisition or outline of a layout with the deployment and naming of devices (automatic machines) on the manufacturing line.
Block P1.9 represents the detection of all restrictions that occur on the line. At the same time, the cycles of individual conveyors must be measured.
The decision block represents the assessment of the form of the failure documentation. If it is in paper form, all failure data must be entered into Excel for further work; if the failure data are stored in database software, all downtime data must be pulled from the database.
Block P1.12 is the calculation of the devices' running time (T i ) [20], given in hours, where PT i is the planned working time of the i-th device and η i is the overall length of downtime of the i-th device during the planned working time. Block P1.13 represents the calculation of the failure intensity (λ i ) of every device on the manufacturing line [20], where COD i is the overall downtime that occurs during the running time of the i-th device. Block P1.14 represents the calculation of the mean time between failures (MTBF) for the i-th device (To i ), counted for every device on the manufacturing line [20] and given in hours. Block P1.15 represents the calculation of the mean maintenance time for the i-th device (Φ i ), counted for every device on the manufacturing line [20] and given in hours. Block P1.16 represents the calculation of the coefficient of downtime (CDM) for every device located on the manufacturing line [20], given as a percentage; the value of availability is then calculated for each device using the (CDM) [20]. If the manufacturing line is a transfer line, the coefficient of downtime of the manufacturing line (CDS) can also be calculated, using the (CDM) values.
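The formulas for blocks P1.12 to P1.16 did not survive extraction. The sketch below computes these quantities from a downtime log using the standard reliability relations that are consistent with the definitions above (running time as planned time minus downtime, MTBF and mean repair time as per-failure averages, downtime coefficient as the repair share of the failure cycle); these relations are assumptions to be checked against [20].

```python
def device_metrics(planned_hours, downtimes_hours):
    """Reliability metrics of one device from its downtime log.

    planned_hours   : planned working time PT_i, h
    downtimes_hours : list of individual downtime durations during PT_i, h
                      (assumes at least one recorded failure)
    """
    eta = sum(downtimes_hours)            # overall downtime during PT_i
    n = len(downtimes_hours)              # number of failures
    T = planned_hours - eta               # running time T_i = PT_i - eta_i
    To = T / n                            # MTBF To_i (running time per failure)
    lam = 1.0 / To                        # failure intensity, 1/h (exponential)
    phi = eta / n                         # mean maintenance time Phi_i
    cdm = phi / (To + phi) * 100.0        # coefficient of downtime, %
    availability = 100.0 - cdm            # availability, %
    return {"T": T, "lambda": lam, "To": To, "Phi": phi,
            "CDM": cdm, "availability": availability}

# Example: one device, 480 h planned working time, five recorded failures.
print(device_metrics(480.0, [2.0, 0.5, 1.5, 3.0, 1.0]))
```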
Defining Layouts for Simulation Run Realisation
The manufacturing line decomposition module is used to create the schemas needed for the simulation model, in accordance with the submodule of the optimal manufacturing buffer capacity calculation and the objective function value. The input data come from block P1.8. The result of this phase is the layout of the solvable manufacturing line, according to which the simulation models are built. A diagram of the defining of layouts for simulation run realisation is shown in Figure 6.
The purpose of block P2.1 is to determine the number of solvable layout parts of the manufacturing line based on the decomposition strategy. To calculate the number of layout parts, the number of machines is divided by three. The number three is used because it corresponds to three automatic machines with two manufacturing buffers. The optimal capacity is calculated using three automatic machines and two manufacturing buffers because this is the simplest example of the interactions between manufacturing buffers: with two manufacturing buffers, the impact of their capacities on the objective function can be observed. At the same time, understanding a line of three automatic machines and two manufacturing buffers helps us to deal with longer and much more complex manufacturing lines [11]. A graphical expression of the decomposition process for three machines and two manufacturing buffers is shown in Figure 7.
The principle of decomposition lies in the gradual solution of the manufacturing line, starting from the first automatic machine; each calculated solution becomes a constant for the next solved part. In the case of a branch, the part must be solved as a problem of two automatic machines and one manufacturing buffer [11]. A graphical expression of the decomposition process for two machines and one manufacturing buffer is shown in Figure 8.
Each part (A or B) creates a separate part of the manufacturing line for the realisation of simulation runs, and the objective function is calculated separately for B1 and B2. This decomposition of the line is possible because the throughput of a part of the system is given by the parameters of its last machine [22].
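A sketch of how the described decomposition could be expressed in code is given below. The partitioning rule (consecutive windows of three machines with two buffers, with a leftover pair solved as two machines and one buffer) follows the text above; the list-of-names data representation and the handling of the remainder are assumptions, and branches would need the two-machine treatment described for Figure 8.

```python
def decompose(machines):
    """Split a serial line into solvable parts: windows of three machines
    (two buffers); a leftover pair is solved as two machines and one buffer."""
    parts, i = [], 0
    while i < len(machines):
        size = 3 if len(machines) - i >= 3 else len(machines) - i
        parts.append(tuple(machines[i:i + size]))
        i += size
    return parts

print(decompose(["M1", "M2", "M3", "M4", "M5"]))
# -> [('M1', 'M2', 'M3'), ('M4', 'M5')]
```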
Realisation of Simulation Runs
The module serves as an algorithm to obtain the results of the downtime η at different manufacturing buffer capacities. The input data come from blocks P1.9, P1.15 and P1.16, which contain the parameters of the machines and conveyors. Other input data include the simulation schematics obtained from block P2.1. A schema of the phase of realisation of simulation runs is shown in Figure 9.
Block P3.1 represents the creation of a simulation model for the first part of the manufacturing line based on the data from the input module and the layout obtained from the decomposition module.
Block P3.2 represents the realisation of simulation experiments with the created model. First, it is run with the maximum capacity of the manufacturing buffer obtained based on the volume or weight calculation.
The volume calculation is CB i = WCB i / VS [20], where WCB i is the total volume of the manufacturing buffer after the subtraction of mechanical restraints, and VS is the volume of one product placed in the manufacturing buffer, calculated according to the widest dimension of the product. The weight calculation is CB i = MWB i / WS [20], where MWB i is the maximum weight load of the manufacturing buffer, and WS is the weight of a single product. The smaller of the two values is selected. The data about the average state of the manufacturing buffer and the CDS of the manufacturing lines are the starting points for comparing the results obtained in block P3.4. Block P3.3 represents the number of experiments that will be carried out to obtain the maximum of the objective function, which represents the optimum and can be found as an extreme of the objective function. The objective function is chosen as the optimisation criterion. Based on the selected calculation according to Maixner [11], if the current average capacity of the manufacturing buffer does not interfere with the current MTTR or the availability, the maximum limit given by weight or volume represents the upper limit. In the case of the problem of three machines and two manufacturing buffers, a high upper limit rounded up to a multiple of 10 is a good choice; the number is then divided by 10 to obtain the incrementation step. After the analysis of the results, the highest objective function value is chosen, and the search continues near that area with an incrementation of one piece. For the problem of two machines and one manufacturing buffer, there is no need to limit the experiments, as it is a one-way matrix without multiplicative combinatorial options.
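The two-phase search that block P3.3 describes (a coarse grid over the capacity range, then a fine grid of ±5 around the best coarse point) can be sketched as follows. The function `G_of` stands in for a simulation run plus the objective-function calculation and is a placeholder, as are the capacity limits and the toy objective; only the search logic follows the text.

```python
import itertools

def two_phase_search(G_of, max_b1, max_b2, coarse_step=5, fine_radius=5):
    """Coarse-then-fine grid search for the buffer capacities maximising G.

    G_of : callable (b1, b2) -> objective function value; in the real system
           this wraps a simulation run and the objective-function calculation.
    """
    # Phase 1: coarse grid (step 5 for the first buffer, step 1 for the second,
    # as in the Rl3 example later in the text).
    coarse = itertools.product(range(coarse_step, max_b1 + 1, coarse_step),
                               range(1, max_b2 + 1))
    b1, b2 = max(coarse, key=lambda bb: G_of(*bb))
    # Phase 2: fine grid of +/- fine_radius around the coarse optimum, step 1.
    fine = itertools.product(
        range(max(1, b1 - fine_radius), min(max_b1, b1 + fine_radius) + 1),
        range(max(1, b2 - fine_radius), min(max_b2, b2 + fine_radius) + 1))
    return max(fine, key=lambda bb: G_of(*bb))

# Toy objective with an interior maximum at (7, 3), for illustration only.
toy_G = lambda b1, b2: -((b1 - 7) ** 2 + (b2 - 3) ** 2)
print(two_phase_search(toy_G, max_b1=60, max_b2=10))  # -> (7, 3)
```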
Block P3.4 aims to realise the simulation run, which gradually changes the capacity of the manufacturing buffer, thereby obtaining the downtime.
Block P3.5 represents the enrolment of data from the simulation results to the matrix of the downtime η. These data are used for the calculation of the objective function.
The purpose of block P3.6 is to create an algorithm that calculates the value of the objective function (G). After determining (Gmax), it is possible to determine the optimum capacity of the manufacturing buffer. A block schema, called the block of optimal manufacturing buffer capacity and objective function calculation, is shown in Figure 10.
The decision block decides whether the result is to be compared with the coefficient of downtime of a line with a fixed bond or with a manufacturing line with a specific coefficient of downtime. The coefficient of downtime can be found by simulation or by processing the documentation of downtime on the manufacturing line.
Blocks P3.5.2 to P3.5.7 are used to calculate the coefficient of line downtime with a fixed bond. After the determination of the coefficient of downtime, the calculation of objective function (G) can start. Blocks P3.5.8 to P3.5.11 are used to calculate the parameters that are put in the final formula (G) in block P3.5.12. The value (G) is different for each manufacturing buffer capacity. For calculation (G), all input data must be defined or calculated. The objective function (G) is the total benefit of the application of the selected manufacturing buffer capacity over the entire period. Thanks to (G), it is possible to determine the optimum manufacturing buffer capacity [11].
In block P3.5.13, after determining all the values (G) at different manufacturing buffer capacities, (Gmax) can be found. (Gmax) is the maximum objective function value. After the selection of (Gmax), the capacity of the manufacturing buffer that corresponds to (Gmax) is determined.
Block P3.7 involves the determination of the optimal manufacturing buffer capacity for the solved part of the manufacturing line.
The decision-making block chooses whether to continue with simulating the next part of the manufacturing line or whether all parts of the manufacturing line are resolved and can be processed to assess the calculated manufacturing buffer capacity according to the production and weight limits and the determination of the optimal WIP inventory stored in the manufacturing buffer.
If all parts of the manufacturing line are not yet solved, block P3.8 creates a new simulation model based on the input data parameters and the schematic of the next part of the manufacturing line.
Assessment of Calculated Manufacturing Buffer Capacities According to Production and Weight Limits and Determination of the Optimal Work-in-Progress Stored in the Manufacturing Buffer
The module is used to evaluate the calculated optimum manufacturing buffer capacity according to the production restrictions of the automatic machines. A schema of the assessment of calculated manufacturing buffer capacities according to production and weight limits and determination of the optimal work-in-progress stored in the manufacturing buffer algorithm stages is shown in Figure 11.
The entry decision block determines whether the average annual state of the manufacturing buffer is obtained by simulation (P4.2) or by observation of the manufacturing line (P4.1). Block P4.3 represents the determination of the average manufacturing buffer capacities (Bp i ), (Bp j ). These represent the quantities that the production machines would build up in the manufacturing buffers if production ran without restrictions.
The decision blocks behind block P4.3 represent a comparison of (Bp i ) and (Bp j ) with the calculated optimum capacities (Bopt i ) and (Bopt j ). Based on the comparison, (Bopt i ) or (Bp i ) is selected. In these blocks, the manufacturing buffer capacity is first assessed against the production limit, and the result is determined as the quantity that is to be maintained in the manufacturing buffer. The sum of the capacities to be maintained in the manufacturing buffers represents the optimal WIP inventory for that part of the production line.
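The text does not state explicitly which of the two values wins the comparison; a natural reading, sketched below, is that the quantity kept in each buffer is capped by what the machines can actually accumulate, i.e., the smaller of the calculated optimum and the production-limited average. This interpretation is an assumption, as are the production limits in the example (the optimum capacities are those found for grinding line A later in the text).

```python
def optimal_wip(b_opt, b_prod):
    """WIP to maintain per buffer and in total for one line part.

    b_opt  : calculated optimum capacities Bopt_i (from the objective function)
    b_prod : production-limited average capacities Bp_i
    Assumed rule: keep the smaller of the two values for each buffer.
    """
    kept = [min(o, p) for o, p in zip(b_opt, b_prod)]
    return kept, sum(kept)

# Optimum capacities found for B2..B6 of grinding line A, with made-up
# production limits.
print(optimal_wip([3, 2, 2, 1, 7], [5, 1, 4, 2, 6]))
# -> ([3, 1, 2, 1, 6], 13)
```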
Defining the Targets and Inputs
Verification and validation of the proposed algorithm were carried out on a production line in a manufacturing company in the automotive industry. The manufacturing process consists of grinding (MOL3VER, GR1065), honing (KM85A1, KM85A2, KM85B1, KM85B2), washing with a nozzle (ST1 washing, ST2 turn, ST3 washing, ST1B washing, ST2B turn, ST3B washing) and a series of assembly operations (weight measure 1, dimension measure 1, weight measure 2, dimension measure 2, 13 stations and visual check) connected with conveyors, as shown in Figure 9. The main criterion for assessment was whether the achieved solution was better than the current state; in other words, the benefit from the introduction of the optimal manufacturing buffer capacities should be greater than the benefit of maintaining the current state. Another criterion is that the loss of throughput should not exceed 2%. To solve and optimise the WIP inventory, grinding line A was selected. The layout used to solve this production line is illustrated in Figure 12. The red dashed boundary represents grinding line A, the yellow line is grinding line B, and the final assembly zone has no boundary.
The conveyors on the chosen production line are used as the manufacturing buffers. To determine the optimal WIP of grinding line A, it was necessary to determine the optimum capacities of manufacturing buffers B2, B3, B4, B5 and B6. For the creation of the manufacturing line simulation model, the MTTR, availability and cycle time parameters were identified, as summarised in Table 1; the data were analysed over three months. The parameters of the conveyors functioning as manufacturing buffers on the solved line are shown in Table 2. An availability of 100% and an MTTR of 0 were assumed for the manufacturing buffers, and the supply of the production process inputs has no downtime. The value of (CDS) for the solved line is determined by simulation from the last device in the line; in our case, for grinding line A, this is dimension measure 1, with CDS = 52.61%.
The input data required for the calculation of the objective function are summarised in Table 3. The values of the cost constants of the manufacturing buffer (a, b, c) that are noted above were modelled using the characteristics of production. Each production has different values, and they are influenced by their processes [11].
Defining Layouts for Simulation Run Realisation
For validation and verification, the selected production line was divided into parts, as shown in Figure 13. The optimisation of the line was divided into four solvable parts. Layouts Rl1, Rl2 and Rl4 are solved as problems of two machines and one manufacturing buffer; layout Rl3 is solved as a problem of three machines and two manufacturing buffers. Layout Rl1 in Figure 13 (red area) contains machines MOL3VER and KM85A2, with manufacturing buffer B2. Layout Rl2 in Figure 13 (yellow area) contains machines MOL3VER and KM85A1, with manufacturing buffer B3. Layout Rl3 in Figure 13 (purple area) contains machines KM85A1 and KM85A2, the series of stations ST1 washing, ST2 turn and ST3 washing, and manufacturing buffers B4 and B5. Layout Rl4 in Figure 13 (blue area) contains the series of stations ST1 washing, ST2 turn and ST3 washing and the series of stations weight measure 1 and dimension measure 1, with manufacturing buffer B6.
Realisation of Simulation Runs
By carrying out a simulation on a simulation model in which the capacity of the manufacturing buffer was limited to the maximum capacity found through the volume or weight calculation, the average inventories of the manufacturing buffers were found; these average manufacturing buffer states are listed in Table 4. The inventory states of the manufacturing buffers represent, at the same time, the production restriction values of the manufacturing line (Bp). The values of (CDS) for the solvable manufacturing line parts are given in Table 5. The task is to ensure that the CDS does not increase beyond the original value outside the tolerance threshold of a 2% decrease in throughput, which is defined in the optimisation objectives. At the same time, the condition that the proposed solution exceeds the benefits of the current solution must be fulfilled. For calculation purposes, because the line is not compared with a fixed-bond line, the CDS values in Table 5 were chosen.
The experimental results of the downtime η for solving layout Rl1 are shown in Table 6 and Figure 14. Based on the input data and the downtime data, the objective function G was calculated for solving layout Rl1, as shown in Table 7 and Figure 15. The behaviour of the curve in Figure 14 shows that the downtime decreases with increasing manufacturing buffer capacity until point 12, after which no further decrease occurs. The curve in Figure 15 shows that, as the manufacturing buffer capacity increases, the assets increase until point 3, which is marked as the local extreme; after this point, a decrease occurs. Table 7 and Figure 15 show the local optimum, which appears as an extreme towards the positive values. In our case, this value is 35,384.24, which corresponds to a value of three pieces in manufacturing buffer B2. The value of three pieces in manufacturing buffer B2 stays constant in the subsequent calculations.
The experimental results of the downtime η for solving layout Rl2 and the objective function calculated based on the downtime and input information are shown in Table 8. A graphical expression of the downtime η and the objective function G for solving layout Rl2 is shown in Figure 16 (a: downtime according to the manufacturing buffer capacity; b: objective function according to the manufacturing buffer capacity).
The behaviour of the curve in Figure 16a shows that, with an increase in the manufacturing buffer capacity, the downtime decreases most steeply until point 3; after that point, the decrease is less steep. The curve in Figure 16b shows that, as the manufacturing buffer capacity increases, the assets increase until point 2, which is marked as the local extreme; after this point, a decrease occurs. From Table 8 and Figure 16b, the local optimum can be found, which appears as an extreme towards the positive values. In our case, this value is 32,556.02, which corresponds to a value of two pieces in manufacturing buffer B3. This value of two pieces in manufacturing buffer B3 stays constant in the subsequent calculations.
In the case of solution layout Rl3, it is necessary to validate capacities of up to 60 in manufacturing buffer B4 and up to 10 in manufacturing buffer B5; together, this involves 600 experiments. To reduce the computational difficulty, the experiments were carried out in two phases. In the first phase, the capacity of manufacturing buffer B4 was increased in increments of five and that of manufacturing buffer B5 in increments of one, so 120 experiments were required. In the next phase, after the calculation of the objective function, the range of ±5 capacity around the identified optimal objective function was selected for testing, so 100 experiments were required. The experimental results of the downtime η for solving layout Rl3, phase 1, are shown in Table 9 and Figure 17a. The objective function is illustrated in Table 10 and Figure 17b.
The shape of the graph in Figure 17a shows that, as the manufacturing buffer capacity increases, the downtime decreases most steeply until point 30; after that, the decrease is not so steep. The surface of the graph in Figure 17b shows that the most benefit to the assets occurs at around point 5 for B4 and in the range of 1-10 for B5. Otherwise, the surface of the objective function decreases over the whole range of B4. Table 10 and Figure 17b show that the local optimum appears as an extreme towards the positive values. In our case, this value is 22,749.94, which corresponds to a value of five pieces in manufacturing buffer B4 and one piece in manufacturing buffer B5. Based on these numbers, B4(5) and B5(1), the optimum values in phase 2 are a B4 manufacturing buffer capacity ranging from one to 10 pieces and a B5 manufacturing buffer ranging from one to 10 pieces. The experimental results of the downtime η for solving layout Rl3 phase 2 are shown in Table 11 and Figure 18a. The objective function is illustrated in Table 12 and Figure 18b. Figure 17. Development of (a) downtime according to the manufacturing buffer capacity for solving layout Rl3 phase 1; (b) objective function according to the manufacturing buffer capacity for solving layout Rl3 phase 1.
The shape of the graph in Figure 18a shows that, with an increase in the manufacturing buffer capacity, the downtime decreases. The surface of the graph in Figure 18b shows that the greatest benefit occurs inside the area bounded by B5(3) and B4(4). From Table 12 and Figure 18b, the local optimum can be found, which appears as an extreme towards the positive values. In our case, this value is 18,888.79, which corresponds to two pieces in manufacturing buffer B4 and one piece in manufacturing buffer B5. Layout Rl3, therefore, gives the optimum manufacturing buffer capacities of B4(2) and B5(1).
The experimental results of the downtime η for solving layout Rl4 are shown in Table 13 and the graphical expression is shown in Figure 19a. The objective function is calculated in Table 13, and its graphical expression is shown in Figure 19b. Figure 19. Development of (a) downtime according to the manufacturing buffer capacity for solving layout Rl4; (b) objective function according to the manufacturing buffer capacity for solving layout Rl4.
The behaviour of the curve in Figure 19a shows that, as the manufacturing buffer capacity increases, the downtime decreases until point 11; after that point, no further decrease occurs. The curve in Figure 19b shows that, as the manufacturing buffer capacity increases, the assets increase until point 7, which is marked as the local extreme; after this point, a decrease occurs. From Table 13 and Figure 19b, the local optimum can be found, which appears as an extreme towards the positive values. In our case, this value is 17,523.70, which corresponds to seven pieces in manufacturing buffer B6.
Assessment of Calculated Manufacturing Buffer Capacities According to the Production and Weight Limits and Determination of the Optimal Work-in-Progress Stored in the Manufacturing Buffer
A comparison of the calculated optimum manufacturing buffer capacity (B opt) and the production-limited manufacturing buffer capacity (B p) is shown in Table 14. It can be seen that no value of B opt exceeds the production limit (B opt ≤ B p), which means that the manufacturing line is able to create the required state in the manufacturing buffers.
Achieved Results
After dimensioning the manufacturing buffers' capacities according to the calculated and assessed values, the number of products produced during the production days, i.e., 20 days, was determined through a simulation of the downtime. The positive and negative statistical results were obtained from the model, and the inputs used for the calculations are summarized in Table 15. The beneficial results are summarized in Table 16. The results show that, after a decrease in the WIP inventory, CDS may rise and throughput may decrease. However, as part of the main assessment criteria, further research was focused on a decrease in costs and an increase in overall profit. This condition was met, and a positive overall profit value was achieved by maintaining the optimal manufacturing buffer according to the present value.
• Maintaining the optimal capacity of the manufacturing buffers reduces the WIP inventory and lead time: Based on the results obtained from the simulation, it can be argued that maintaining the optimum capacity of the manufacturing buffers leads to a decrease in the WIP inventory. It is therefore advisable to concentrate on the manufacturing buffer when reducing the WIP inventory. A positive effect of reducing the WIP inventory is a decrease in the lead time. The conveyors with low cycle times used in most mechanical manufacturing factories function as manufacturing buffers, as in our case study; the dimensioning of the manufacturing buffers is therefore linked to the determination of the conveyor's capacity. The optimum manufacturing buffer capacity can be calculated by determining the benefits and costs generated by a given capacity, so it is appropriate to use the objective function for this calculation.
• The negative effects of maintaining the optimal capacity may be reflected in an increase in downtime and a reduction in throughput: Downtime may increase when the optimal manufacturing buffer capacities are enforced. This is because, in some cases, a high buffer level can cover the downtime, but it generates a much higher WIP inventory cost than the optimum capacity of the manufacturing buffers. The optimum capacity of the manufacturing buffers is the state at which the cost is lowest and the total benefit is highest. These negative effects can, however, be eliminated.
• In the future, reconfigurability of the manufacturing buffers will be used to reduce the negative consequences of the optimum manufacturing buffer capacity: Removing the higher downtime, which leads to a decrease in production, requires further research. This research will focus on the reconfigurability of manufacturing buffers, which should not only prevent the increase in downtime but also contribute to its reduction. The approach combines reconfigurability with the optimum manufacturing buffer capacity through the principle of the digital twin. The reconfigurability of manufacturing buffers consists of monitoring the most frequently worn and periodically exchanged components of the production equipment. Based on fault statistics, the system determines the time required to create an inventory that covers the downtime resulting from specific failures. The optimum manufacturing buffer capacity is maintained for the workplace in the event that no fault is indicated but a failure still occurs.
• The future introduction of the reconfigurability system is linked to the use of the digital twin in practice: The digital twin, which is part of a smart factory, uses a database to gather real-time information on the state of the system and to create a digital copy of production, from which predictions of future downtime can be made. Consequently, the manufacturing buffer can react with an immediate change in capacity. The involvement of the digital twin in the buffer area is shown in Figure 20, which shows that the future concept will use three databases, all connected through a digital twin. Based on calculations and prognoses, control commands for the machine and the manufacturing buffer will be realised.
Conclusions
Currently, a lot of pressure is placed on companies to minimise production waste. Various types of waste arise from maintaining high interoperation stocks. A high level of interoperation stocks between operations, also known as the WIP inventory, creates costs that influence the competitiveness and sustainability of the business. These costs can be reduced by decreasing the WIP inventory to the optimum level. Several calculations can be used to determine the optimal level of WIP; one method is to calculate the objective function, with which the efficiency of the solution can be assessed. This article dealt with the design of a system for determining the optimal WIP inventory stored in the interoperation manufacturing buffer based on a simulation and an objective function. As part of the article, an example of solving the optimal WIP inventory on a manufacturing line in a production company was presented, worked through in detailed steps to show how the designed system operates. In the example, a reduction in the WIP inventory of 81.25% was achieved if the workpieces stored in the automatic machine were not accounted for, and a reduction of 73.86% was achieved if they were considered. Reducing the WIP inventory also reduces the lead time; in the example, the lead time decreased by 16.33%. The side effect of maintaining an optimal manufacturing buffer capacity was an increase in downtime and a decrease in throughput. This negative effect was, however, within our defined limit for a decrease in production of 2%. At the same time, the condition that the overall benefit of the proposed solution must be higher than the current one was met: in the example, the cost of the WIP inventory decreased by 1563.11. However, this represents only one part of the line and one type of product. Nevertheless, if a whole year's utilization of the machine is considered, the optimal WIP inventory can be calculated for the whole spectrum of products, and the savings could be significantly higher. The negative effects of the optimal capacity will be removed in the future by designing a system for manufacturing buffer reconfigurability. This system will be based on dimensioning the manufacturing buffers according to maintenance and the prediction of failures. The optimum manufacturing buffer capacity will be maintained to cover unpredicted failures. | 15,351 | 2019-07-19T00:00:00.000 | [
"Engineering",
"Business"
] |
MOTIVE AND OBSTACLES IN MAKING A DECISION AS EARLY ADOPTERS OF PSAK NO. 71 FOR IMPAIRMENT PROVISION OF LOANS (STUDY CASE IN INDONESIA BANKING INDUSTRY)
IFRS 9 has been converged into PSAK No. 71, which becomes effective on January 1, 2020, with early implementation permitted. Changes in accounting standards can cause controversy. Nevertheless, some banks in Indonesia implemented PSAK No. 71 before it became effective (early adopters). This study aims to determine the motives of the early adopters of PSAK No. 71, the obstacles they faced and the impact, especially through loan impairment. A case study method is used in the Indonesian banking industry, with data collected through semi-structured interviews and content analysis. DiMaggio and Powell (1983) explained that institutional theory emphasizes institutional patterns formed by the influence of policy from inside and outside the company (symbolic carriers). Therefore, the motives behind the decision of institutions and actors (material carriers) to become early adopters can be revealed. We found that the early adopters are mostly foreign banks and mixed banks (subsidiaries). The primary motive for deciding to adopt early was to follow their holding companies, for which IFRS 9 implementation is mandatory. This creates good external reporting because the accounting standard of the holding company and its branch or subsidiary will be the same.
INTRODUCTION
Indonesia, as a member of the G20, has an obligation to implement International Financial Reporting Standard (IFRS) No. 9, which became effective on January 1, 2018. In order to adopt the international accounting standard, the Financial Accounting Standards Board of the Indonesian Accountants Association (DSAK-IAI) ratified PSAK No. 71, the convergence of IFRS No. 9, on July 26, 2017; its implementation becomes effective on January 1, 2020, and early adoption is allowed. PSAK No. 71 changes the requirements for the classification and measurement of financial instruments that previously followed PSAK No. 55, the convergence of International Accounting Standard (IAS) No. 39.
In line with this, the industry that will be most significantly impacted is the banking industry. Article 4 of Law No. 10 of 1998 explains that the purpose of banking is to support the implementation of national development in order to improve equity, economic growth and national stability towards increasing the welfare of the people. Banks function as financial intermediaries that collect public funds in the form of deposits and then distribute them in the form of loans. Therefore, banking plays a very important role in a country's economy. This role is evident from Indonesia's financial structure, which is dominated by the banking industry, with total assets of 73.69%, or Rp7,354.7 million, of the total assets of the financial industry of Rp9,980 million in November 2017 (Santoso, 2018).
However, when banks distribute funds in the form of loans, they are exposed to the risk of default on these loans. Therefore, a bank has to form a provision for loan impairment losses (CKPN). According to the Indonesian Banking Accounting Guidelines (PAPI, 2008), impairment is objective evidence of a loss, arising after the loan is recognized, that affects the estimate of the financial asset's future cash flows. When forming a provision for the impairment of non-performing loans, a bank uses debtor data as the basis for analysis. Therefore, the policies and standard operating procedures of each bank differ from those of other banks, depending on the size and ownership of the bank and the condition of its debtors. However, the policies and standard operating procedures authorized by management must refer to the applicable financial accounting standards so that the financial statements faithfully represent the actual situation. This is important because financial statements are used by internal and external stakeholders to make decisions.
The enforcement of PSAK No. 71 in Indonesia will certainly change banking accounting practices, especially the loan impairment method. Management decisions will be influenced by this alteration when setting the company's accounting policies and standard operating procedures. PSAK No. 71 will replace PSAK No. 55 as the accounting standard currently applied in the banking industry. PSAK No. 71 changes the accounting treatment of classification and measurement, loan impairment, and hedge accounting. However, the most significant impact of this change is the method for determining the provision for loan impairment. According to IAI (2016), PSAK No. 71 was issued with the objective of remedying the shortcomings of PSAK No. 55. PSAK No. 55 uses the incurred loss method: a provision must be formed only if there is evidence that the value of a financial asset has decreased. PSAK No. 71, on the other hand, uses the expected loss method, under which a provision is established if there is a possible change in the credit risk estimate due to conditions that may deteriorate in the future (IAI, 2016). PSAK No. 55 thus recognizes a loss when it occurs, whereas the method used in PSAK No. 71 recognizes the impact of changes in expected credit losses earlier, from the initial recognition of the asset. This means the provision that must be formed under PSAK No. 71 will be larger than under PSAK No. 55. Forming the provision for loan impairment will affect the capital and profit of the bank, according to a statement quoted by the authors: "not only the company's profit and loss will be significantly impacted by the implementation of PSAK No. 71, but capital will also be significantly reduced" (Triana, 2018, http://infobanknews.com/penerapan-psak-71-berdampakpada-penurunan-modal-bank/, accessed 15 May 2018). Moreover, the results of the PwC Indonesia survey (2018) show that the larger commercial banks (BUKU 3 and 4) are far ahead in their progress in implementing PSAK No. 71: 48% of these banks were already at the impact assessment stage, compared to only 10% of the smaller commercial banks (BUKU 1 and 2). This shows that most banks have not yet carried out the impact assessment stage because of the complexity of the standard.
Furthermore, the adoption of new financial reporting standards has caused controversy. This is reflected in rejections such as that in France, which opposed the application of IFRS because of pressure from political elements concerned about balance sheet volatility, and because the financial statements produced under this standard were considered to have a negative impact on stakeholders' interests, such as decreasing equity (Ball, 2006). There was also a complaint that violations were committed by the rule makers, who did not follow the proper due process in making IFRS 9 (Bouvier, 2017). Considering that IFRS 9 was adopted as PSAK No. 71, these problems were suspected to arise in Indonesia as well. In practice, however, some commercial banks in Indonesia implemented PSAK No. 71 before the standard became effective, despite the problems surrounding the change in the accounting standard. Based on the explanation above, the authors conducted research titled: "Motive and Obstacles in Making a Decision as Early Adopters of PSAK No. 71 for Impairment Provision of Loans (Study Case in Indonesia Banking Industry)".
LITERATURE STUDY/HYPOTHESES DEVELOPMENT
The literature study of this paper discusses the motives and obstacles of banks in implementing PSAK No. 71 early. In connection with the research problems, there are differences between this study and previous studies, for example Stent (2011). Based on the results of previous studies and published journals, it can be concluded that research on PSAK No. 71 is still limited. Since there is still a gap and a need for research, the author conducted a study to find out the motivations behind the institutions and actors inside banks when deciding to implement PSAK No. 71 early, especially with respect to forming the provision for loan impairment, and the constraints faced.
Statement of Financial Accounting Standards (PSAK) No. 71
According to Prihadi (2011), national financial accounting standards are in the process of full convergence with the International Financial Reporting Standards (IFRS) issued by the International Accounting Standards Board (IASB). This must be applied by Indonesia as a member of the G20. Following up on changes to the accounting standards, DSAK-IAI compiled a PSAK to be used as the financial accounting standard for preparing financial statements in Indonesia. On January 1, 2020, PSAK No. 71 will replace PSAK No. 55. PSAK No. 71 is a statement of financial accounting standards governing the classification and measurement of financial instruments. This statement is converged from IFRS 9 regarding financial instruments: classification and measurement. PSAK No. 71 replaces PSAK No. 55 (revised 2011) regarding financial instruments: recognition and measurement, which adopted IAS 39.
According to IAI (2016), a financial instrument is any contract that gives rise to a financial asset of one entity and a financial liability or equity instrument of another entity. An entity must recognize a financial asset or financial liability in the statement of financial position when, and only when, the entity becomes a party to the contractual provisions of the instrument. The most critical impact of this change will be on the financial asset side, especially credit or loans. The word credit derives from the Latin credere, meaning to believe (compare the Dutch vertrouwen and the English believe, trust or confidence, which carry the same meaning, namely trust) (Suparmo, 2009). Meanwhile, according to Law No. 10 of 1998 concerning banking, credit is the provision of money or equivalent claims, based on a loan agreement between the bank and another party, that requires the borrowing party to repay its debt after a certain period of time with interest.
From this, a common thread can be drawn: credit is given on the basis of an agreement between the bank and the debtor, in which each party has rights and obligations regarding the term, the interest and the sanctions that apply if the agreement is breached.
PSAK No. 71 will replace PSAK No. 55 as the accounting standard currently applied in the banking industry. PSAK No. 71 changes the accounting treatment of classification and measurement, loan impairment, and hedge accounting. First, for classification and measurement, PSAK No. 55 divides financial assets into four categories: Fair Value through Profit/Loss, Available for Sale (AFS), Held to Maturity (HTM) and Loans and Receivables; the classification of each instrument under this standard is determined by management intention. PSAK No. 71 changes the classification into three categories: Fair Value through Profit/Loss (FVPL), Fair Value through Other Comprehensive Income (FVOCI), and Amortized Cost (AC); the classification of each instrument is determined not only by management intention but also by the SPPI (Solely Payments of Principal and Interest) test and the business model test. Second, the accounting treatment for loan impairment has changed as well: PSAK No. 71 introduces a new approach, the expected credit loss. According to Witjaksono (2017), the approaches to forming the provision for the impairment of non-performing loans differ between PSAK No. 55, which uses the incurred loss methodology (evidence/information of impairment of financial assets, namely historical events and current conditions, i.e., objective evidence), and PSAK No. 71, which uses the expected credit loss methodology (evidence of historical events and current conditions, supplemented with forward-looking information). This means the provision that must be formed under PSAK No. 71 will be larger than under PSAK No. 55. Third, the accounting treatment for hedge accounting has also changed and is simplified under PSAK No. 71: whereas PSAK No. 55 relied on quantitative effectiveness thresholds, under PSAK No. 71 hedge effectiveness is assessed qualitatively.
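To make the difference between the two impairment methodologies concrete, the following is a minimal, hedged Python sketch. It is a simplification for illustration only: real ECL models discount expected cash flows, weight multiple macroeconomic scenarios and estimate PD/LGD/EAD per facility, and the function names and parameter values here are assumptions, not part of the standard's text.

```python
def incurred_loss_provision(ead: float, lgd: float, impaired: bool) -> float:
    """PSAK No. 55 style: recognise a provision only once objective evidence
    of impairment exists (the loss event has already been incurred)."""
    return ead * lgd if impaired else 0.0

def expected_credit_loss(ead: float, lgd: float,
                         pd_12m: float, pd_lifetime: float, stage: int) -> float:
    """PSAK No. 71 / IFRS 9 style: Stage 1 uses a 12-month PD; Stages 2 and 3
    (significant increase in credit risk / credit-impaired) use a lifetime PD.
    The PDs are assumed to already embed forward-looking information."""
    pd = pd_12m if stage == 1 else pd_lifetime
    return pd * lgd * ead

# Illustrative performing loan: Rp1 billion exposure, 45% loss given default.
print(incurred_loss_provision(1e9, 0.45, impaired=False))    # 0.0 under PSAK 55
print(expected_credit_loss(1e9, 0.45, 0.02, 0.10, stage=1))  # 9,000,000.0 (Stage 1)
print(expected_credit_loss(1e9, 0.45, 0.02, 0.10, stage=2))  # 45,000,000.0 (Stage 2)
```

This illustrates why the provision under PSAK No. 71 is generally larger: even a fully performing loan carries a non-zero Stage 1 allowance, whereas the incurred loss model records nothing until a loss event occurs.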
As explained above, the most critical impact of the change in the accounting standard will be on loan impairment, because forming the provision for loan impairment affects the capital and profit of the bank. Therefore, this study focuses on the impact of the early implementation of PSAK No. 71, especially on loan impairment.
Bank
The definition of a bank according to Law No. 10 of 1998 (amending Law No. 7 of 1992) concerning banking is a business entity that collects funds from the public in the form of deposits and distributes them to the public in the form of loans and/or other forms in order to improve the standard of living of the people. Banks are grouped by business activities (BUKU), which are tied to their core capital. Based on Financial Services Authority Regulation (POJK) No. 6/POJK.03/2016 on business activities and office networks based on bank core capital, banks are divided into four categories, as follows:
a. BUKU 1 is a bank with core capital of less than Rp1 trillion;
b. BUKU 2 is a bank with core capital of Rp1 trillion up to less than Rp5 trillion;
c. BUKU 3 is a bank with core capital of Rp5 trillion up to less than Rp30 trillion; and
d. BUKU 4 is a bank with core capital of at least Rp30 trillion.
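A minimal Python sketch of this categorization follows (the function name and the use of Rp trillions as the unit are illustrative assumptions):

```python
def buku_category(core_capital_rp_trillion: float) -> int:
    """Map a bank's core capital to its BUKU category per POJK
    No. 6/POJK.03/2016, as described above."""
    if core_capital_rp_trillion < 1:
        return 1   # BUKU 1
    if core_capital_rp_trillion < 5:
        return 2   # BUKU 2
    if core_capital_rp_trillion < 30:
        return 3   # BUKU 3
    return 4       # BUKU 4

print(buku_category(0.8), buku_category(7.5), buku_category(42.0))  # 1 3 4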
Currently, there are 115 banks in Indonesia and, based on data from the Indonesian Banking Statistics as of December 31, 2017 issued by the Financial Services Authority, the distribution of banks by BUKU category can be seen in the table below.
[Table: distribution of banks by BUKU category; columns: No, Group of Bank based on Business Activities, Total.]
Meanwhile, based on ownership, banks are divided into five categories: state-owned banks, national private banks, union banks, foreign banks, and mixed banks (Thamrin, 2012). In general, each ownership type, according to Thamrin (2012), is as follows:
a. A state-owned bank is a bank whose shares are entirely owned by the government, so all profits the bank receives belong to the government.
b. A national private bank is a bank whose shares are entirely owned by national private parties.
c. A union bank is a bank owned by a company with the legal form of a union company.
d. A foreign bank is a bank in the form of a branch office whose shares are entirely owned by a foreign party.
e. A mixed bank is a bank in the form of a subsidiary whose shares are owned by national private parties and foreign parties.
However, one of the five ownership categories above, the bank owned by a union company, is no longer present in the Indonesian banking industry. Based on the explanation above, banks have an important role as intermediaries for the public in financial services. In addition, banks are grouped by business activities and ownership.
Theoretical Foundation -Institutional Theory
This theory was first introduced by DiMaggio and Powell in 1983 and has developed further over time: institutional theory not only considers the influence of external pressure (isomorphism) but also looks at the behavior of institutions in adopting certain policies (institutional logics). The theory basically emphasizes that institutional patterns are formed by the influence of pressure from inside and outside through processes of obedience, imitation and professional demands (DiMaggio and Powell, 1983). According to Lammers and Barbour (2006), institutional theory is described as a series of practices, directed by rational beliefs, that are formalized beyond particular organizations and situations. In addition, according to Giddens (in Scott, 2001), institutions are multidimensional social structures built from symbolic elements, social activities, and material resources. One view of this theory plays a very important role in management and organization theory (Greenwood et al., 2008; and Zucker, 1987): pressure and dynamics in an environment can shape an organization.
According to Hawley (1968), isomorphism is a constraining process that forces one unit in a population to resemble other units that face the same set of environmental conditions. Institutional isomorphic change occurs through three mechanisms: coercive isomorphism (external pressure, such as political influence), mimetic isomorphism (following other institutions because of uncertainty) and normative isomorphism (transformation resulting from professionalization) (DiMaggio and Powell, 1983). On the other hand, according to Thornton and Ocasio (2008), institutional logics were designed to understand organizational and individual behavior in its social and institutional context, so that these contexts can be standardized while still giving opportunities for change and agency. Following these theories, the material carriers in this study are the institutions or actors that influenced the decision to implement early, and the symbolic carriers are the standards or rules that the adopters follow, namely the requirements of PSAK No. 71 and other relevant regulations, standards or policies. The application of institutional theory in this case study is that the motivation underlying the institutional logic to implement early can be influenced by pressure from inside and outside the company (symbolic carriers), whether due to regulatory compliance or shareholder direction, following other companies or the holding company, or professional demands to become the first company to apply the newly applicable accounting standards (what is referred to as neo-institutional theory). In this way, the motivations of the institutions and implementing actors (material carriers) behind the early implementation of PSAK No. 71 can be identified.
RESEARCH METHODOLOGY
This study uses the case study method, a series of scientific activities carried out intensively, in detail and in depth about a program, event or activity at the level of individuals, groups, institutions or organizations, in order to gain in-depth knowledge of these events (Raharjo, 2017). The author chose a case study approach because it can answer the research questions of this study, which concern two groups of commercial banks in Indonesia: early adopters and non-adopters of PSAK No. 71. This relates to the function of the case study, which analyzes a phenomenon in detail so that the research questions of why the phenomenon happened and how it was executed can be answered. According to Ellett (2007, p. 20), the case study method yields several outputs: decision-making processes, rule determination, evaluation and problem solving.
Furthermore, this study uses a mixed-method approach. According to Shauki (2014), mixed research occurs when researchers and methodologists believe that both qualitative and quantitative perspectives and methods are useful in addressing their research questions, usually producing the most informative, complete, balanced and useful research results. It can therefore be concluded that this research design aims to compensate for the deficiencies of purely quantitative or qualitative research methods and to improve the quality of this case study. Based on this explanation, the author uses this method to answer the research questions.
Moreover, the author used a between-method triangulation approach involving both quantitative and qualitative data collection. The data used for this paper are primary and secondary data. The primary data were obtained from semi-structured interviews with actors at the two groups of commercial banks in Indonesia, the early adopters and the non-adopters of PSAK No. 71. For the secondary data, the author used data documented at the early adopter banks, namely their quarterly financial statements.
To collect the primary and secondary data, the author used a sequential exploratory design: collecting and analyzing qualitative data in the first stage, followed by collecting quantitative data in the second stage to strengthen the results of the qualitative research conducted in the first stage (Sugiyono, 2011: 409). In the initial stage, the author investigated using a qualitative method, namely semi-structured interviews. The purpose of this method is to gain a deeper understanding, so the interviews used open-ended questions that do not limit the answers, unlike closed or multiple-choice questions (Shauki, 2018). The interviews were conducted with the two groups of commercial banks in Indonesia, the early adopters and the non-adopters of PSAK No. 71, and their results became the source of primary data. At this stage, the author prepared a list of open-ended questions. According to Stake (2005), the interview steps are as follows:
a. Compile a list of open-ended questions that address the research questions to be used during the interviews;
b. Before the list of questions is submitted to the respondents, conduct a trial of the questions;
c. Select appropriate respondents to help answer the research questions;
d. Conduct interviews with the respondents using the prepared questions;
e. Create guidelines for answering the research questions by taking a qualitative approach to gathering information and data from the interview results.
After these steps, the qualitative, non-financial data and information were grouped through a coding process and then processed using Microsoft Office Excel. Next, the author conducted a content analysis of the information obtained from the initial stage and from the banks' quarterly financial reports. This was necessary to obtain information that could support the data from the qualitative research carried out in the first stage. The units of analysis for this study are the two groups of commercial banks in Indonesia, the early adopters and the non-adopters of PSAK No. 71, evaluated in terms of the motivations of the institutions or actors in deciding to adopt early and the obstacles faced by the banks.
RESULTS
The complexity of PSAK No. 71 means that most banks in the industry have not yet carried out the impact assessment stage for its implementation, because the impact of forming the provision for loan impairment can affect the capital and profit of the bank. In addition, changes in accounting standards have caused controversy, reflected in the rejection in France, which opposed the validity of IFRS because of pressure from political elements concerned about balance sheet volatility, and because the financial statements produced under this standard were considered to have a negative impact on stakeholders' interests, such as decreasing equity (Ball, 2006). There was also a complaint that violations were committed by the rule makers, who did not follow the proper due process in making IFRS 9 (Bouvier, 2017). Considering that IFRS 9 was adopted as PSAK No. 71, these problems were suspected to arise in Indonesia as well. In practice, however, some commercial banks in Indonesia implemented PSAK No. 71 before the standard became effective. Therefore, the findings are discussed per research question, as outlined below:
a. What were the motivations and obstacles of managers in deciding whether or not to implement PSAK No. 71 before the effective date of the accounting standard? This research specifically addresses the following phenomena:
• Are there similarities and differences in the decisions made between one bank and another?
• Is there a dominant logic behind the managers in making these decisions?
• What institutional factors can trigger or prevent a material carrier from practicing decoupling from the existing symbolic carriers?
b. What is the impact of the implementation of PSAK No. 71 on the banking industry, especially on the accounting treatment for determining the formation of CKPN on non-performing loans?
Figure 1. Research Framework Related to Institutional Theory
As per Figure 1 above, the institutional field consists of the group of early adopters and the group of non-adopters, which represent the banking industry in Indonesia. The similarity between the early adopter group and the non-adopter group in this paper lies in the ownership of the banks in both groups. The institutional logics consist of the rules and standards that the adopters follow, as symbolic carriers, and the institutions or actors, as material carriers, that influence the bank's decision whether or not to adopt the new accounting standard early.
Furthermore, the author found that the early adopters are mostly foreign banks and mixed banks (subsidiaries). The author also believes that the motive behind the decision to implement early was to follow the holding company. The holding company of each bank comes from a country where the implementation of IFRS 9, converged into PSAK No. 71 in Indonesia, is already effective. They therefore have an obligation to follow their holding company; otherwise there would be a gap due to the different accounting standards applied between them. This evidence was found in the interviews with the banks' actors: the primary motive of the institutions or actors inside the banks when deciding to adopt PSAK No. 71 early is to follow their holding company, which must implement IFRS 9 under the regulations of its country. The accounting standard of the holding company and the bank will therefore be the same, and the banks can produce good external reporting for stakeholders because the accounting standard they apply will be the same as that of their holding company. In addition, the author found that the obstacles to implementing PSAK No. 71 before the effective date were incomplete data for loan impairment, a lack of guidance, and company infrastructure that could not support the change. Overall, implementing this standard will increase loan impairment, which affects the company's equity, and will create additional implementation costs.
DISCUSSION
This study aims to determine the motives of the early adopters of PSAK No. 71 in the Indonesian banking industry, especially in forming the loan impairment provision, the obstacles they faced when applying this standard, and the impact of implementing PSAK No. 71, especially on the provision for loan impairment. Using a sample of two groups, early adopters and non-adopters of PSAK No. 71, representing the banking industry in Indonesia, the author found that the early adopters are mostly foreign banks and mixed banks (subsidiaries).
PSAK No. 71 will replace PSAK No. 55 as the accounting standard currently applied in the banking industry and changes the accounting treatment of classification and measurement, loan impairment, and hedge accounting. In this case study, the author limits the scope to the impact on loan impairment. PSAK No. 71 introduces a new approach, the expected credit loss. According to Witjaksono (2017), the approaches to forming the provision for the impairment of non-performing loans differ: PSAK No. 55 uses the incurred loss methodology (evidence/information of impairment of financial assets, namely historical events and current conditions, i.e., objective evidence), while PSAK No. 71 uses the expected credit loss methodology (evidence of historical events and current conditions, supplemented with forward-looking information). This means the provision that must be formed under PSAK No. 71 will be larger than under PSAK No. 55, which is why many banks did not want to implement it early. Nevertheless, some commercial banks in Indonesia implemented PSAK No. 71 before the standard became effective, despite the problems surrounding the change in the accounting standard.
The author believes that the primary motive behind the decision to implement early was to follow the holding company. The holding company of each early adopter comes from a country where the implementation of IFRS 9, converged into PSAK No. 71 in Indonesia, is mandatory. The banks therefore have an obligation to follow their holding companies; otherwise there would be a gap due to the different accounting standards applied between them. This evidence was found in the interviews with the banks' actors: the primary motive of the institutions or actors inside the banks when deciding to adopt PSAK No. 71 early is to follow their holding company, which must implement IFRS 9 under the regulations of its country. The accounting standard of the holding company and the bank will therefore be the same, and the banks can produce good external reporting for stakeholders because the accounting standard they apply will be the same as that of their holding company.
The obstacles to implementing PSAK No. 71 before the effective date were incomplete data for loan impairment, a lack of guidance for understanding the new standard, and company infrastructure that could not support the change. Many banks currently do not want to adopt this standard early because the data required for loan impairment are incomplete: the data used for PSAK No. 71 comprise not only evidence of historical events and current conditions but also forward-looking information, which means the bank must assess each debtor's data for risk exposure. Another obstacle is the lack of guidance. PSAK No. 71 is more complex than PSAK No. 55 because it depends on modelling to form the loan impairment. Although the early adopters received training from their holding companies, many issues regarding PSAK No. 71 are still being debated between DSAK-IAI, the Financial Services Authority and the banks. Therefore, when implementing the standard early, the early adopters applied it from the IFRS 9 perspective; however, they encountered obstacles when making judgements on country-specific matters, such as building models using Indonesian macroeconomic data or deciding whether to form an impairment for products related to the government. A further obstacle is company infrastructure that cannot support the change in accounting standard; this refers to the information technology systems, databases and employees, which all need improvement. Furthermore, implementing this standard will increase loan impairment, which affects the company's equity, and will create additional implementation costs. Loan impairment will increase because the method used in PSAK No. 71 relies on forward-looking data to determine expected credit losses, meaning that loans for which no impairment was previously formed may now require one, depending on their credit risk. Moreover, the additional costs relate to improving human resources and information technology systems, hiring consultants and upgrading other related infrastructure. Such improvements create costs for early adopters even though the company is obliged to achieve a certain profit as its commitment to shareholders.
Studies on the motivation for the early implementation of PSAK No. 71 in the Indonesian banking industry have not been conducted before, so this paper contributes to accounting research in Indonesia. Furthermore, when linked to the relevant literature, this paper shows a different result: the motivation to adopt the new accounting standard here was based on compliance, whereas in previous papers it was based on the actors' proactiveness.
CONCLUSION
Using a sample of early adopter and non-adopter groups of PSAK No. 71 representing the banking industry in Indonesia, the author found that the early adopters are mostly foreign banks and mixed banks (subsidiaries). The author also aimed to determine the motives of the early adopters of PSAK No. 71 in the Indonesian banking industry, especially when forming the provision for loan impairment, and the obstacles they faced when applying this standard. The author found that the motive of the institutions or actors in the company when deciding whether or not to implement early is to follow their holding company, which must implement IFRS 9 under the regulations of its country, so that the symbolic carrier of the holding company and its branch or subsidiary will be the same.
Furthermore, this paper has limitations: its object is companies in the banking industry operating in Indonesia. The results may differ if the object were a different industry or country, because different symbolic carriers would need to be followed. The author therefore recommends further research into industries or countries with rules different from those of the current unit of analysis.
"Business",
"Economics"
] |
Seeing Structural Mechanisms of Optimized Piezoelectric and Thermoelectric Bulk Materials through Structural Defect Engineering
Aberration-corrected scanning transmission electron microscopy (AC-STEM) has evolved into the most powerful characterization and manufacturing platform for all materials, especially functional materials with complex structural characteristics that respond dynamically to external fields. It has become possible to directly observe and tune all kinds of defects, including those at the crucial atomic scale. In-depth understanding and technically tailoring structural defects will be of great significance for revealing the structure-performance relation of existing high-property materials, as well as for foreseeing paths to the design of high-performance materials. Insights would be gained from piezoelectrics and thermoelectrics, two representative functional materials. A general strategy is highlighted for optimizing these functional materials’ properties, namely defect engineering at the atomic scale.
Introduction
For the majority of crystalline materials, the arrangement of the lattice is interrupted by various crystal defects, and such imperfections are important to the properties of materials. The properties of perfect crystalline materials would depend only on their crystal structure and composition, which would make them hard to adjust. The possibility of making defects beneficial allows us to customize functional attributes to the different combinations required by modern devices, effectively turning defects into advantages [1,2].
Crystal defects occur as points, lines, surfaces or distributions in the bulk, referred to as point, line, planar or bulk defects, respectively. Here, atomic-scale defects refer to those with at least one dimension at the atomic scale, including all point defects, dislocations, grain/phase boundaries and the interfaces of nanostructures, etc. Atomic-scale defects always induce static lattice distortion and influence thermal vibrations [3][4][5][6][7], especially under the action of an external thermal/stress/electric field.
Convergent-beam electron diffraction (CBED) has been applied to detect symmetry information in nanoscale domains, such as the coexistence of T and R nanotwins [114]. The CBED probe is a few nanometers in size and can identify symmetry information within such nanodomains [115], which is hard to achieve with X-ray and neutron diffraction because of their limited spatial resolution. The distortions of the low-symmetry T and R phases of KNN can be considered elongations of their parent cubic unit cell along an edge ({001} for T) or along a body diagonal ({111} for R). The spontaneous polarization (Ps) directions, which carry characteristic symmetry elements, can be identified via CBED. The T (P4mm) lattice shows a 4-fold rotation axis along <001> as well as mirror planes along {100}/{110}, while the R (R3c) lattice presents a 3-fold rotation axis as well as a glide plane along <111>. The CBED patterns shown in Figure 1(e1,e2) reflect the local symmetry of the T and R nanophases coexisting in the nanodomain, where they intersect and finally form the hierarchical domain structure. Moreover, the electron diffraction patterns shown in Figure 1(f1,f2) disclose weak superlattice spots that should not exist in the well-known KNN structure. The superlattice spots reflect local ordering inside the nanodomains.
Although local symmetry identification via the CBED technique was successful, CBED only gives reciprocal-space information, so it is difficult to show atomic displacements or polarization in real space. Aberration-corrected STEM was therefore employed to directly observe the atomic displacements, which quantitatively reflect the polarization state and symmetry in each unit cell [132]. STEM high-angle annular dark field (HAADF) imaging generates Z-contrast images, making it a useful structure-imaging mode, especially at the atomic scale [108][109][110][111].
STEM HAADF was employed to precisely identify the coordinates of the perovskite ABO3 lattice (A for K/Na and its substitutions, B for Nb and its substitutions) and to determine the local symmetry via the displacement vector of atom B. Figure 2b is a STEM HAADF lattice image obtained at a domain boundary. Thanks to the Z-contrast difference, the A and B sites of the perovskite ABO3 lattice can be clearly recognized. A peak-finder strategy was applied to identify the coordinates of the atom columns precisely [43,44,133-135], as shown in Figure 2b. Figure 2(c1,c2) show enlarged regions from Figure 2b, indicating that the displacement vectors of the centers relative to the corners are variable, some arranged along <100> and others along <110>, in keeping with the schematics of the T and R unit cells in Figure 2(a1,a2). The local symmetry inside the nanodomains reflects the coexisting T and R phases.
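As a rough illustration of this analysis step, the sketch below computes the B-site displacement from peak-finder coordinates and bins its projected direction. The tolerance and function names are assumptions, and real workflows fit 2D Gaussians to the atom columns rather than using raw peak positions.

```python
import numpy as np

def b_site_displacement(a_corners: np.ndarray, b_center: np.ndarray) -> np.ndarray:
    """Displacement of a B column from the centroid of its four surrounding
    A columns (2D projected peak-finder coordinates, e.g. in nm)."""
    return b_center - a_corners.mean(axis=0)

def classify_projection(delta: np.ndarray, tol_deg: float = 15.0) -> str:
    """Bin the projected displacement direction: near <100> suggests a T-like
    cell, near <110> (the in-plane projection of R <111>) suggests R-like."""
    ang = np.degrees(np.arctan2(delta[1], delta[0])) % 90.0  # fold by 4-fold symmetry
    if min(ang, 90.0 - ang) < tol_deg:
        return "T-like (<100>)"
    if abs(ang - 45.0) < tol_deg:
        return "R-like (<110> projection)"
    return "intermediate"

corners = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.4], [0.4, 0.4]])  # A sites, nm
print(classify_projection(b_site_displacement(corners, np.array([0.21, 0.20]))))
```

Applying this classification cell by cell across the image is what produces vector maps such as those in Figure 2(c1,c2).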
Quantitative measurement of the atomic displacements (i.e., polarization) was then performed through HAADF imaging, which mainly characterizes heavy elements, while annular bright field (ABF) imaging was performed to identify light elements. Using these methods, the local symmetries inside the domains, their relative concentrations, and the polarization rotation between domains can be characterized. Figure 2(d1,d2) present STEM HAADF and ABF images in which light elements, e.g., oxygen, can be observed. After locating all types of atom positions, the relative displacements between the Nb atoms and the corner O atoms can be mapped, as shown in Figure 2e, which indicates that the polarization rotates continuously between the R and T phases.
BaTiO3-Based Piezoelectrics with Constructed Wide R-O-T Phase Boundary Region
When it comes to BaTiO3-based piezoelectrics, a special engineering method based on phase boundaries was employed, utilizing a quadruple critical point (QCP) to achieve, first, the highest piezoelectric effect ever reported in any lead-free piezoelectric material, even higher than that of commercialized PZT, and, second, a record-broad temperature/composition plateau with high d33 in lead-free BaTiO3 piezoceramics, as shown in Figure 3a,b. The successful employment of such phase boundary engineering is actually guided by an informed understanding and expectation of the necessary structural imperfection.
Figure 3c shows STEM ABF as a quantitative analysis of the atomic displacements (i.e., polarization) in a BaTiO3-based system. STEM ABF images are useful for observing light elements because they have a weaker Z-dependence than HAADF [79,88], so they can be used to identify the light oxygen positions. For ferroelectric BaTiO3, the spontaneous polarization (Ps) shown in the inset of Figure 3c comes from the electric dipoles formed by the relative displacements between the negative (O2−) and positive (Ba2+ and Ti4+) ion centers, and the relative displacement of the central Ti4+ cation with respect to the center of its two nearest O2− neighbors (δTi-O) reflects the local polarization state, and therefore the symmetry, as shown in Figure 3c. The local Ps can be roughly calculated through a linear relation to δTi-O (Ps = kδTi-O, where k is a constant, ~1894 (µC cm−2) nm−1 for BaTiO3) [136]. The visualization of the 2D δTi-O (polarization) vectors, marked with the polarization vectors for the T, O and R phases, is shown in Figure 3c. The coexistence of T, O and R nanoregions and the continuous polarization rotation between these nanoregions, which is not homogeneous, can be clearly observed, as shown in Figure 3d. It is necessary to interpret the origin of the high piezoelectricity from a theoretical view. According to density functional theory (DFT) calculations, the addition of Sn or Ca can change the order of stability of the different phases, and the three ferroelectric phases (R, T and O) possess an almost zero energy difference. The QCP composition (C+T+O+R) has an almost isotropic free energy, independent of the direction of polarization, as shown in Figure 3g. The T+O+R three-phase coexistence compositions show stable O <110> and R <111> states as well as metastable T <100> states; compared to pure BaTiO3, the polarization anisotropy is greatly reduced. In addition, phase-field modeling was employed to simulate the T+O+R multiphase coexistence state. As shown in Figure 3e, the T, O and R phase states coexist in the nanodomain and permeate each other in a random distribution. Therefore, there are various paths of polarization conversion between the phases. Figure 3f shows the polarization projection of the coexisting phases on the {110} plane. The prevalent polarization rotation between the T, O and R phases is predicted, in good agreement with the STEM results in Figure 3c. In summary, the coexistence of the T+O+R multiphase state with low free-energy/polarization anisotropy, weak energy barriers, and multiple polarization rotation possibilities leads to a high piezoelectric coefficient.
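The polarization maps in Figure 3c,d follow directly from this linear relation. A minimal sketch of the per-cell calculation (positions and helper names are illustrative assumptions) is:

```python
import numpy as np

K_BTO = 1894.0  # k for BaTiO3 in (µC cm^-2) per nm, per the relation cited above

def local_polarization(ti_pos: np.ndarray, o_pair: np.ndarray) -> np.ndarray:
    """Local Ps vector from the off-centring of a Ti column relative to the
    midpoint of its two nearest O columns (positions in nm): Ps = k * δTi-O."""
    delta = ti_pos - o_pair.mean(axis=0)  # δTi-O in nm
    return K_BTO * delta                  # µC cm^-2

# A Ti column off-centred by 0.02 nm along +y between two O columns:
ps = local_polarization(np.array([0.0, 0.02]), np.array([[0.0, -0.2], [0.0, 0.2]]))
print(ps, np.linalg.norm(ps))  # ~[0, 37.9] µC cm^-2
```

Repeating this for every unit cell in the image yields the 2D vector field whose direction (toward <100>, <110> or <111>) distinguishes the T, O and R nanoregions.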
According to the atomic polarization mappings and theoretical calculations, the physical origin of the good piezoelectric properties in the phase transition region is the phase coexistence (T+O+R) inside hierarchical nanodomains, with the gradual polarization rotation acting as a bridge between the different phases. This static polarization state likely mirrors the dynamic polarization changes under external stimuli (heat or electric field). This kind of origin is common in piezoelectric materials with phase boundary compositions [44,114,115,128,129]. It is extremely important to understand the roles of such atomic-scale coexisting phases and the gradual polarization rotation between them in the high piezoelectricity at phase boundaries, which is the basis for further design of new materials with higher performance.
BiFeO 3 -Based Piezoelectrics with Strain-Driven R-T Phase Boundary
The other promising lead-free piezoelectric materials are BiFeO3 (BFO)-based, presenting a much higher TC (~500 °C) than KNN and BaTiO3 [137]. However, it is challenging to achieve good ferroelectric/piezoelectric properties in BFO because of its high leakage current. High-quality BFO thin films have recently been achieved through strain engineering [138–142], and they are promising for high-density memory and spintronic devices. One of the characteristic achievements in BFO thin films was the construction of a strain-driven phase boundary [141], which completely differs from the traditional chemical approaches. Zeches et al. demonstrated how to employ epitaxial strain to drive the formation of a phase-coexistence boundary and thus produce a giant piezoelectric response in lead-free ferroelectric thin films. The coexistence of a tetragonal (T) nanoscale phase within the parent R-phase matrix is beneficial for the ferroelectric/piezoelectric performance. Both atomic force microscopy (AFM) and TEM images show stripe nanodomains that can be attributed to the coexisting phases, as shown in Figure 4a,b. The phase coexistence was directly seen via aberration-corrected STEM: the atomically resolved STEM HAADF images in Figure 4c–e clearly differentiate the local R and T phases. Figure 4f gives the relative fractions of the two coexisting phases with respect to the film thickness; phase coexistence occurs when the film thickness exceeds about 50 nm. Moreover, substrates with different lattice mismatches produce different local structures in the BFO films. The aberration-corrected STEM observations thus support the picture of a strain-driven phase boundary in thin films.
Perovskite Thermoelectric Oxides: The Bridge between Piezoelectrics/Ferroelectrics and Thermoelectrics
As discussed above, good piezoelectrics are insulators, since minimal electrical conduction is desired, whereas a good thermoelectric material requires high electrical conductivity, like a metal. When ferroelectrics become highly conductive, the long-range dipolar ordering necessary for ferroelectricity is destabilized. Classic textbooks offered few clues that telluride materials had crystal symmetries consistent with ferroelectricity, and because of their high electrical conductivity they could not switch polarization, a requirement of true ferroelectricity. However, weakly piezoelectric/ferroelectric or even paraelectric oxides, in the unusual condition where the concentration of electronic carriers is close to a metal–insulator transition, have properties of interest for oxide-based thermoelectric applications. The typical example, doped SrTiO3, is paraelectric in bulk, while it can be ferroelectric in films under certain strain conditions [60,143–145]. The heavily reduced, nonstoichiometric n-type perovskite SrTiO3−δ shows metallic-like conductivity [146–149].
SrTiO3-based thermoelectric oxides have attracted considerable attention due to their thermal stability compared with conventional semiconductor-based thermoelectric materials. With donor doping by a higher-valence ion on the Sr site, as in La-doped SrTiO3, the overall thermoelectric performance of SrTiO3 has been improved remarkably, making this material system promising for high-temperature use [146–149]. To compensate for the extra positive charge from the substitution of Sr2+ by La3+, A-site vacancies may form according to the general formula Sr1−3x/2LaxTiO3. Lu et al. investigated the structure and thermoelectric properties of Sr1−3x/2LaxTiO3 ceramics with different La contents. It was shown that the thermoelectric properties, especially the electrical transport, are highly sensitive to the La content, as shown in Figure 5a,b. Advanced electron microscopies, including aberration-corrected STEM, were employed to reveal the structural origin of this phenomenon. The samples with 0.10 ≤ x < 0.30 presented an overall cubic structure with a superstructure (Figure 5(c1)); the samples with 0.30 ≤ x < 0.50 exhibited additional short-range A-site vacancy ordering (Figure 5(c2)); and the samples with x ≥ 0.50 were orthorhombic with a tilt system and long-range vacancy ordering (Figure 5(c3)). TEM images of Sr1−3x/2LaxTiO3 with x = 0.50 along <110> revealed antiphase boundaries associated with antiphase rotations of the O octahedra. The sample with x = 0.63 showed ferroelastic domains with orthorhombic distortion. The key feature of vacancy ordering can be seen directly via aberration-corrected STEM. As shown in Figure 5(e1,e2,f1,f2), two types of domains, with normal perovskite and layered structures, coexist; the layered structure forms due to cation vacancy ordering.
Thermoelectrics: Structural Defect Engineering for Carrier and Phonon Transport
Thermoelectricity, which enables direct conversion between electrical and thermal energy, promises to harvest electric energy from waste heat sources and to provide solid-state refrigeration for overheating electronics. The sizes of the various structural defects strongly influence the electrical and thermal transport properties [17,20,89,150–158]. The electrical transport characteristics (Seebeck coefficient α and electrical conductivity σ) of thermoelectrics are affected by the electronic band structure, the interactions of carriers with the natural lattice vibrations, nanostructures, and point defects, and they can be expressed through the Boltzmann transport equations within the relaxation-time approximation [14,18,89,152,159].
where $m^*_I = (1/m^*_L + 2/m^*_T)^{-1}$ is the inertial effective mass ($m^*_T$ and $m^*_L$ are the transverse and longitudinal masses), $m^*_d = N_V^{2/3} m^*_v$ is the density-of-states (DOS) effective mass ($N_V$ is the band degeneracy and $m^*_v$ the effective mass of a single valley), and $k_B$, $\hbar$, and $e$ are the Boltzmann constant, the reduced Planck constant, and the electron charge. $\varepsilon = E/k_B T$ and $\varepsilon_F = E_F/k_B T$ are the reduced energy and reduced Fermi energy, and $\tau_{AC}$, $\tau_{PD}$, and $\tau_P$ are the relaxation times due to acoustic phonon scattering, point-defect scattering, and precipitate scattering, respectively [14,18].
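As a quick numerical check of these definitions, the sketch below evaluates the inertial and DOS effective masses; the band parameters are hypothetical illustration values, not taken from the cited references.

```python
# Minimal sketch of the effective-mass definitions above; the input masses
# and band degeneracy are hypothetical illustration values.
def inertial_mass(m_L, m_T):
    """m*_I = (1/m*_L + 2/m*_T)^-1, in units of the electron mass m_e."""
    return 1.0 / (1.0 / m_L + 2.0 / m_T)

def dos_mass(N_V, m_v):
    """m*_d = N_V^(2/3) * m*_v, in units of m_e."""
    return N_V ** (2.0 / 3.0) * m_v

m_I = inertial_mass(m_L=1.2, m_T=0.3)   # anisotropic valley
m_d = dos_mass(N_V=4, m_v=0.35)         # 4-fold degenerate band
print(f"m*_I = {m_I:.3f} m_e, m*_d = {m_d:.3f} m_e")
```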
When it comes to thermal transport, the lattice thermal conductivity can be expressed using Callaway's model [160]:

$\kappa_L = \frac{k_B}{2\pi^2 \upsilon}\left(\frac{k_B T}{\hbar}\right)^3 \int_0^{\theta_D/T} \tau_C(x)\,\frac{x^4 e^x}{(e^x - 1)^2}\,dx$,

where $\upsilon$ is the average sound velocity, $\theta_D$ is the Debye temperature, $\tau_C$ is the combined phonon relaxation time, and $x$ is defined as $x = \hbar\omega/k_B T$.

Submicron grains result from the spark plasma sintering method [151] or the hot-pressing sintering method [163], indispensable synthesis strategies in thermoelectrics because the mechanical properties of a pristine ingot are usually too weak for use [151,163]. The grain size can be lowered even into the nanoscale by a solvothermal method combined with spark plasma sintering [164]. Submicron and nanoscale grains produce dense low-angle grain boundaries with arrays of atomic-scale dislocations that serve as significant phonon scattering centers [14], in particular for long-wavelength phonons [151,163].

Under some circumstances, segregation of a precipitated phase occurs at grain boundaries. As shown in Figure 6d–f, high-density nanometer-scale precipitates segregate at triple junctions along the grain boundaries. The HRSTEM HAADF image in Figure 6f focuses on a Bi precipitate at a grain boundary as well as a Bi-rich precipitate inside a grain. Strain analysis through geometric phase analysis (GPA) indicates that high-strain centers are arranged between the Bi precipitates and the matrix, corresponding to the interface dislocation cores. In addition, Bi nanoprecipitates not only release the strain between the mismatched grains but can also promote charge redistribution by providing additional carriers, parallel to the effect of modulation doping [21,22,165,166].

In addition to the structural defects at grain boundaries, other types of planar defects exist, such as stacking faults. Figure 6g shows a large number of stacking faults, and the fine structure of a stacking fault emerges clearly in the HRSTEM HAADF image in Figure 6h. Such atomic-scale 2D planes of crystal mismatch densely pack together to form a 3D strain network, which is an effective scattering source for phonons with short to medium wavelengths.
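The Debye–Callaway integral above is straightforward to evaluate numerically. The sketch below does so with a deliberately crude constant relaxation time; all material parameters are hypothetical, chosen only to illustrate the calculation.

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch of the Debye-Callaway lattice thermal conductivity above,
# kappa_L = (k_B/2 pi^2 v)(k_B T/hbar)^3 * Int tau_C x^4 e^x/(e^x-1)^2 dx.
# tau_C is crudely modeled as a constant; all parameters are hypothetical.
kB, hbar = 1.380649e-23, 1.054571817e-34

def kappa_lattice(T, v=3000.0, theta_D=180.0, tau0=2e-11):
    """T in K, v = average sound velocity (m/s), theta_D = Debye temp (K),
    tau0 = constant combined relaxation time (s). Returns W m^-1 K^-1."""
    def integrand(x):
        return tau0 * x**4 * np.exp(x) / np.expm1(x)**2
    integral, _ = quad(integrand, 1e-6, theta_D / T)
    prefactor = kB / (2 * np.pi**2 * v) * (kB * T / hbar)**3
    return prefactor * integral

print(kappa_lattice(300.0))  # ~ a few tens of W/mK for these toy inputs
```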
Furthermore, another type of planar defect, platelet-like precipitates of one- to two-atom-layer thickness, is a characteristic nanostructure in lead chalcogenide (PbQ, Q = Te, Se and S) thermoelectrics [13,14,164,168,169]. These nanostructures are thought to be inherent, originating from the inevitable evaporation of lead during the formation process. As shown in Figure 7a,b, the platelet-like nanostructures are perpendicular or parallel to each other, lying along two of the three possible {100} orientations. The GPA strain analysis in the inset of Figure 7b indicates an anisotropic strain distribution, in contrast to that of normal spherical or ellipsoidal precipitates [152].
Nanocrystalline precipitation is a characteristic feature of nanostructured thermoelectric materials. For example, Okhay et al. contributed a thorough review of the impact of graphene and reduced graphene oxide on the performance of thermoelectric composites, including chalcogenides, skutterudites, and metal oxides [170,171]. Nanostructures have been the main structural defects for scattering phonons with short to middle wavelengths, with an effect that depends on the size and morphology of the precipitates. Regularly shaped precipitates have been found, such as the layered Pb/Bi-poor phase in a SnTe system [172] and the rod-like Bi-rich phase in a Mg3Sb2 system [167], as shown in Figure 7c. The lattice of such a precipitate is very similar to that of the matrix, with essentially no lattice mismatch, so the GPA strain analysis indicates a homogeneous strain distribution. Nanoprecipitates with atomic-scale coherent interfaces can scatter phonons efficiently without disturbing carrier transport too much. As shown in Figure 7e,f,(g1,g2), the Cu2Te precipitate, identified by EDS, was well faceted. The electron diffraction patterns show centrosymmetric peak splitting, reflecting the epitaxial orientation relation between the layered Cu2Te (space group: P6/mmm) and cubic PbTe structures, as shown in Figure 7e. These phase boundaries can effectively scatter phonons without influencing carrier transport because of the small lattice mismatch between the two phases. The strain distribution around the Cu2Te precipitated phase was obtained by GPA [89]. According to the strain analysis in Figure 7f, the phase boundary contains dislocation cores, and the layered Cu2Te precipitated phase presents a periodic strain distribution. Figure 7h shows STEM HAADF/ABF lattice images of the Cu2Te precipitated phase with a four-layer structure; the inset shows an enlarged image around the superimposed fault, and high strain can be observed in Figure 7f. These structural defects may provide additional phonon scattering sources.
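Geometric phase analysis, used repeatedly above to map strain around precipitates, can be sketched in a few lines: mask one Bragg reflection in the FFT of the lattice image, take the phase of the filtered image, and differentiate it to obtain the local strain. The sketch below is a simplified single-reflection version, not the full two-reflection GPA of [89].

```python
import numpy as np

# Simplified single-reflection GPA sketch; real GPA [89] uses two
# non-collinear reflections and a carefully refined reference lattice.
def gpa_strain_1d(image, g_pixels, mask_radius=6):
    """image: 2D lattice image (H, W); g_pixels: (gy, gx) position of one
    Bragg peak relative to the FFT center, in FFT pixels.
    Returns the strain component along g."""
    H, W = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))
    ky, kx = np.indices((H, W))
    cy, cx = H // 2, W // 2
    mask = (ky - cy - g_pixels[0])**2 + (kx - cx - g_pixels[1])**2 < mask_radius**2
    filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))
    # Remove the carrier 2*pi*(g . r) to obtain the geometric phase:
    y, x = np.indices((H, W))
    gy, gx = g_pixels[0] / H, g_pixels[1] / W   # cycles per pixel
    phase = np.angle(filtered * np.exp(-2j * np.pi * (gy * y + gx * x)))
    # Strain along g ~ -(1 / 2*pi*|g|) * d(phase)/dx:
    dphase = np.gradient(np.unwrap(phase, axis=1), axis=1)
    return -dphase / (2 * np.pi * np.hypot(gy, gx))
```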
The nanostructuring approach has been widely recognized as the most common method for improving thermoelectric performance, but defects at the atomic scale may play even more significant roles in carrier and phonon transport. With the new generation of AC-STEM, direct visualization of atomic-scale defects has become possible. One of the most recent results was the direct observation of inherent Pb vacancies and extrinsic Cu interstitials, revealing the remarkable role of Cu in the synergistic optimization of phonon and carrier transport in traditional PbTe, as shown in Figure 8.
The doped Cu atoms in intrinsic Pb vacancies could enhance the carrier mobility, as shown in Figure 8a,b, while reducing the lattice thermal conductivity, as shown in Figure 8c, by scattering phonons of all wavelengths through the formation of precipitates, clusters and interstitials. As shown in Figure 8a, the carrier mobility first increases and then decreases. A substituted Cu atom on a Pb site has a +1 valence state, providing one less charge to the matrix than Pb2+ and thereby reducing the carrier concentration. On the other hand, interstitial copper atoms act as impurity dopants in the matrix, providing additional charge that raises the carrier concentration, so the trend of the carrier concentration with increasing Cu2Te content is the result of these two competing effects. The large enhancement of carrier mobility through the occupancy of Pb vacancies by external dopants is remarkable and had not been reported in any nanostructured thermoelectric bulk material. To comprehend the anomalous behavior of Cu in PbTe, it is necessary to examine the formation energies of all possible defects (vacancies, antisites, interstitials, and Cu-filled inherent vacancies) in PbTe-Cu2Te. The formation energies of the Cu-related defects reveal the influence of copper on the electrical transport performance. After Cu addition, Cu interstitials (Cu_i^1+) donate electrons and raise the Fermi energy, which benefits n-type conductivity. The formation energy of Cu interstitials is higher than that of Cu-filled Pb vacancies, so Cu fills the Pb vacancies as an acceptor until no Pb vacancies remain available. Hence, the carrier concentration first decreases and then increases with increasing Cu fraction [4]. Based on this anomalous variation in the electrical properties, Figure 8d gives a schematic of how Cu atoms are incorporated into the matrix with increasing Cu fraction. At first, a small number of Cu atoms fill the inherent Pb vacancies in PbTe, which eliminates the Pb vacancies, diminishes carrier scattering, and effectively improves the carrier mobility.
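A toy picture of the two competing effects on the carrier concentration (vacancy filling as an acceptor first, interstitial donation afterwards) can be written down directly; the numbers below are illustrative, not fitted values from Ref. [4].

```python
import numpy as np

# Toy model of the competing Cu defects described above: Cu first fills Pb
# vacancies (removing one electron each relative to Pb2+), and only the
# excess Cu enters interstitial sites (donating one electron each).
def carrier_change(n_Cu, n_Pb_vacancy=0.5):
    """n_Cu, n_Pb_vacancy in arbitrary concentration units.
    Returns the net change in electron concentration."""
    filled = np.minimum(n_Cu, n_Pb_vacancy)  # Cu_Pb acts as an acceptor
    interstitial = n_Cu - filled             # Cu_i acts as a donor
    return -filled + interstitial

for x in [0.0, 0.25, 0.5, 0.75, 1.0, 1.5]:
    print(x, carrier_change(x))  # first decreases, then increases, as in Fig. 8
```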
With further increase of the Cu content, the excess Cu atoms are forced into interstitial sites, forming isolated Cu interstitial arrays first, then Cu interstitial clusters, and finally Cu-rich precipitates and even Cu2Te precipitates. These layered structures can effectively scatter phonons across various length scales and lead to extremely low lattice thermal conductivity, as shown in Figure 8c [4,173]. To confirm the existence of these point defects, AC-STEM was utilized to observe atomic-scale Pb vacancies and Cu interstitials; interstitial arrays and clusters of copper can be seen in the magnified images in Figure 8e–g. In addition, Cu interstitials cause local lattice distortion as well as local strain; the GPA strain analysis in Figure 8h exhibits the strain network caused by the Cu interstitials [4]. It is worth noting that ab initio molecular dynamics (AIMD) calculations reveal a further synergy: the copper atom vibrates near the lead vacancy with a maximum displacement of about 3.4 Å. The Cu atoms in the region shown in Figure 8i exhibit highly anisotropic vibrations along the {110} directions and obviously interfere with the movement of the surrounding Pb atoms. Cu atoms thus cause local lattice disorder, which plays an important role in scattering high-frequency phonons at high temperatures [4,43,174].
In addition to vacancies and interstitials, another important type of point defect, substitutions, can also be utilized to optimize the electrical and thermal transport. In thermoelectrics, the well-established band structure engineering strategies for boosting the Seebeck coefficient, e.g., band alignment and band gap enlargement, are closely tied to substitutions, such as Sr doped in PbTe [151,152], Mg doped in PbTe [13], and Se doped in PbTe [10]. For SnSe, Te alloying as a substitution can increase the crystal symmetry, optimize the bonding, change the band shape, and thus enhance the electrical transport properties; meanwhile, the substitutions act as phonon scattering centers and contribute to lowering the thermal conductivity [33]. AC-STEM revealed the substitution of Te dopants at Se sites and its influence on the atomic bonds. Figure 9a is an atomic-resolution STEM HAADF image along the [100] zone axis (a axis), showing a dumbbell-shaped arrangement of atoms. Each atom column is not round but slightly elongated, because half of the Sn and Se atoms overlap; the two columns of the dumbbell are actually of equal intensity. To better view the substitution of Te on Se or Na on Sn sites, one turns to the b or c axis, because the Sn and Se columns are nicely separated along these axes. Figure 9d is an atomically resolved STEM HAADF image along the [001] zone axis (c axis), in which Sn and Se atoms are clearly distinguished by their intensity differences. The corresponding electron diffraction patterns in Figure 9e show superlattice reflections due to the multi-period arrangement of the atoms, in contrast to the case along the a axis (Figure 9b).
To assess the Te substitutions and bonds, a quantitative analysis of Figure 9d was performed on atom positions and intensities with a peak-finding methodology. Here, the atom columns were divided into four sets: Sn1, Sn2, Se3 and Se4. Figure 9g shows the intensity mapping of the Sn1 atom columns; the uneven intensity distribution indicates the presence of Te substitution (abnormally bright columns) at the Sn sites. The bond lengths of Sn-Sn and Sn-Se can be acquired from the determined atomic positions. Figure 9h,i map the lattice parameters of the Sn1 atom columns, associated with the Sn-Sn bond lengths along the X (020) and Y (400) directions. Once the positions of all the atomic columns were determined, the Sn-Se bond lengths could be represented by their X and Y projections, as shown in Figure 9h–j. All of these maps show the same features: uneven contrast and slight deviations. The intensity and bond-length differences resulting from Te replacing Se are tiny, because the few substituted atoms are buried in much thicker matrix atom columns (a few dozen atoms each). The substitution of Te on Se causes changes in the local bond lengths and local strain fields, which greatly influence the electrical and thermal transport [33].
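The column-by-column quantification described above (peak finding, intensity mapping, bond-length extraction) can be prototyped compactly; the sketch below shows the principle on a generic HAADF image and is not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import center_of_mass, label, maximum_filter

# Minimal sketch of the quantification above: locate atom-column peaks in a
# normalized HAADF image, then map integrated intensity (a dopant fingerprint)
# and nearest-neighbor projected bond lengths.
def find_columns(img, min_sep=5, threshold=0.5):
    """Return (N, 2) peak positions (row, col) from a normalized 2D image."""
    peaks = (img == maximum_filter(img, size=min_sep)) & (img > threshold)
    labels, n = label(peaks)
    return np.array(center_of_mass(img, labels, range(1, n + 1)))

def column_intensity(img, pos, r=3):
    """Integrated intensity in a small box around each column position;
    abnormally bright columns suggest a heavier dopant (e.g., Te)."""
    out = []
    for y, x in pos.astype(int):
        out.append(img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].sum())
    return np.array(out)

def bond_lengths(pos):
    """Nearest-neighbor projected distances between columns (pixels)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)
```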
Conclusions and Prospects
AC-STEM/TEM can realize various functions for accurate atom imaging, chemical mapping, electronic configuration, etc., because multiple images and spectra can be obtained simultaneously. These structural features are closely related to the properties of materials and are of great value to materials research.
For piezoelectric materials, the quantitative atom displacement calculation from STEM Z-contrast images has developed into a common method for characterizing the local polarization configuration, which is significant for unveiling the structural and physical issues behind high performance at phase boundaries. The increasing resolution of AC-STEM will enable an effective application in ferroelectricity: 3D polarization mapping through fine control of optical depth slices [175,176]. It is important to fully understand local polarization responses in practical situations, and not just in two-dimensional projections.
For thermoelectric materials, structural defects at various scales have been recognized as the main parameters for optimizing carrier and phonon transport characteristics. The quantification of atomic defects has most often been ignored in conventional methods because of its difficulty. Using AC-STEM, we can see that, in contrast to the widely accepted nanoscale structures, intrinsic and extrinsic defects at the atomic scale may become dominant. With the exploitation of new-generation thermoelectric materials, for example SnSe [8,9,12], the intrinsic atomic-scale defects of these materials have attracted widespread attention; atomic-scale defects are always present in thermodynamic equilibrium.
Local disorder and related anomalies of the local lattice thermal vibrations are widespread in thermoelectric materials, particularly at extreme temperatures. The key to increasing thermoelectric efficiency is to manipulate the dynamics of atoms and their defects in the lattice. Atomic-scale point defect engineering will become a new strategy to improve both the electrical and thermal properties of thermoelectric materials. Despite the significant progress achieved, the field has been severely hampered by the lack of direct microscopic information about these atomic-scale defects, particularly their dynamic vibrational properties. With the advent of a new generation of AC-STEM, this has become possible, and it will further lead to improved performance.
"Materials Science",
"Engineering",
"Physics"
] |
Aero-YOLO: An Efficient Vehicle and Pedestrian Detection Algorithm Based on Unmanned Aerial Imagery
Abstract: The cost-effectiveness, compact size, and inherent flexibility of UAV technology have garnered significant attention. Utilizing sensors, UAVs capture ground-based targets, offering a novel perspective for aerial target detection and data collection. However, traditional UAV aerial image recognition techniques suffer from various drawbacks, including limited payload capacity, resulting in insufficient computing power, low recognition accuracy due to small target sizes in images, and missed detections caused by dense target arrangements. To address these challenges, this study proposes a lightweight UAV image target detection method based on YOLOv8, named Aero-YOLO. The specific approach involves replacing the original Conv module with GSConv and substituting the C2f module with C3 to reduce model parameters, extend the receptive field, and enhance computational efficiency. Furthermore, the introduction of the CoordAtt and shuffle attention mechanisms enhances feature extraction, which is particularly beneficial for detecting small vehicles from a UAV perspective. Lastly, three new parameter specifications for YOLOv8 are proposed to meet the requirements of different application scenarios. Experimental evaluations were conducted on the UAV-ROD and VisDrone2019 datasets. The results demonstrate that the algorithm proposed in this study improves the accuracy and speed of vehicle and pedestrian detection, exhibiting robust performance across various angles, heights, and imaging conditions.
Introduction
In recent years, unmanned aerial vehicles (UAVs) have emerged as a burgeoning technology owing to their advantages of low cost, compact size, and operational flexibility (the abbreviations corresponding to all phrases can be found in Appendix A) [1,2]. Serving as ideal tools for low-altitude aerial photography, these UAVs utilize sensors to effortlessly capture ground targets, thereby acquiring images with enhanced maneuverability. This technological advancement has provided novel solutions across various domains, significantly improving the efficiency of aerial target detection and the precision of data collection.
The rapid advancement of UAV technology is spurred by the concerted efforts of remote sensing departments and agricultural sectors across several nations. UAVs play a pivotal role in multiple domains, including security monitoring [3], aerial photography [4], high-speed deliveries [5], wildlife conservation [6], agriculture [7], and transportation systems [8]. Nevertheless, owing to the flexibility of UAVs, captured vehicle exteriors and dimensions present substantial variations (e.g., as depicted in Figure 1), since images may be captured from diverse perspectives and heights, leading to intricate and diverse backgrounds.
Traditional algorithms encounter challenges in target detection due to insufficiently prominent target features, resulting in slow detection speeds, low accuracy, and susceptibility to false positives and negatives. In contrast, the You Only Look Once (YOLO) model has garnered significant attention for its outstanding accuracy and real-time performance, markedly enhancing both detection precision and speed, thus playing a pivotal role in target detection. However, the size and weight limitations of UAVs restrict the performance of onboard computing devices, necessitating the reduction of computational and storage expenses while maintaining superior detection performance. Previous UAV visual recognition has often relied on larger models to improve recognition rates due to imaging issues with UAV images [9–11]. Simultaneously, the lack of information points in images often requires lowering predictive confidence to enhance model generalization; however, reducing confidence levels may lead to issues like erroneous fitting of image data. To address this challenge, this paper introduces a lightweight UAV vehicle recognition algorithm based on the YOLOv8 model, termed Aero-YOLO. The key contributions of this research can be summarized as follows:
• The replacement of the original Conv module [12] with Grouped Separable Convolution (GSConv) led to a reduction in model parameters, an expanded receptive field, and improved computational efficiency.
• The incorporation of the CoordAtt and shuffle attention [13] mechanisms bolstered feature extraction, particularly benefiting the detection of small or obstructed vehicles from the perspective of unmanned aerial vehicles.
• After comparative analysis with Adaptive Moment Estimation (Adam) [14], the selection of Stochastic Gradient Descent (SGD) as the optimizer resulted in superior performance in model convergence and overall efficiency.
• Substituting the original CSPDarknet53 to Two-Stage FPN (C2f) module with C3 resulted in a lightweight structure for the model.
We conducted comparative experiments using the UAV-ROD [15] and VisDrone2019 [16] datasets. Our comparative analysis demonstrated that our proposed method significantly outperforms existing detection models and current mainstream parameter models. Furthermore, we conducted targeted ablation experiments on the VisDrone2019 dataset to validate the feasibility and effectiveness of our proposed network optimization methods. The results indicated that Aero-YOLO significantly enhances the performance of unmanned aerial vehicle visual recognition models, even when utilizing the same or fewer network model parameters.
The remainder of this paper is organized as follows. Section 2 reviews the related works, Section 3 elaborates on our proposed methodology, Section 4 presents the experimental findings, and the conclusions are provided in Section 5.
Related Works
Target detection has long been a focal point in the field of computer vision [17], aiming to accurately identify and locate objects, discern their categorical attributes, and precisely determine their positions within images. With the advent of deep learning and the widespread deployment of surveillance cameras [18], object detection has garnered heightened importance. Broadly, object detection algorithms are typically categorized into two-stage and one-stage approaches, differing fundamentally in their processing stages. Two-stage algorithms use separate networks for region proposal and classification/regression tasks; a classic example is Faster R-CNN [19], which relies on region-based convolutional neural networks. In contrast, single-stage methods like YOLO [20], SSD [21], and RetinaNet [22] utilize a single network to directly classify bounding boxes and perform adjustments using anchor points.
One of the most representative one-stage detectors is the YOLO series. YOLO employs convolutional neural networks to extract image features and directly predicts bounding boxes and categories by generating anchored boxes, thereby enabling real-time object detection. YOLOv2 [23] replaced the original YOLO's Google Inception Net (GoogleNet) with Darknet-19, while YOLOv3 [24] upgraded Darknet-19 to Darknet-53 and adopted a multi-scale framework with residual connections from ResNet [25]. YOLOv4 [26] combined CSPNet [27], the Darknet-53 framework, CIoU loss [28], and the Mish activation function [29] to enhance performance. YOLOv5 integrated various architectures mentioned earlier and offered multiple choices in terms of inference speed, accuracy, and computational cost. YOLOv8 [30], released in January 2023, incorporated updates from YOLOv5 [31], and is the version discussed in this paper.
The rapid development of deep learning-based object detection models has led some scholars to apply enhanced versions to object detection in drone imagery. Traditional UAV aerial image recognition techniques suffer from limitations in computing power due to the restricted payload of UAVs, resulting in low recognition accuracy for small target sizes and missed detections in densely populated areas, so maintaining a balance between detection accuracy and inference efficiency remains crucial. Ruiqian Zhang et al. [32] proposed a multiscale adversarial network to address the diversity challenges in UAV imagery, integrating deep convolutional feature extractors, multiscale discriminators, and a vehicle detection network, significantly enhancing vehicle detection performance. Seongkyun Han et al. [33] designed DRFBNet300, incorporating deeper receptive field block (DRFB) modules to improve feature map expressiveness for detecting small objects in UAV images. Mohamed Lamine Mekhalfi et al. [34] introduced CapsNets to tackle complex object detection issues in UAV images, accurately extracting hierarchical positional information compared to traditional convolutional neural networks, thereby improving object detection accuracy and computational efficiency. Z. Fang et al. [35] proposed a dual-source detection model, DViTDet, based on the Vision Transformer Detector (ViTDet), leveraging Transformer networks to extract features from various sources and employing feature fusion to utilize cross-source information; they demonstrated that combining CNNs and Transformer networks can extract richer features.
These previous models still suffer, to some extent, from issues such as low detection accuracy, inefficient computational performance, and inadequate detection capabilities for small objects. Our research aims to address these challenges by proposing Aero-YOLO, a lightweight UAV vehicle detection model based on YOLOv8. By incorporating advanced modules like the CoordAtt attention mechanism, the shuffle attention mechanism, and GSConv, we enhance YOLOv8. It is anticipated that this optimized object detection framework will exhibit significant advantages in UAV vehicle and pedestrian recognition.
Aero-YOLO Model Architecture
Aero-YOLO represents an enhanced version of the YOLOv8 model tailored for UAV target detection tasks. Its architectural design is illustrated in Figure 2. Aero-YOLO integrates GSConv and C3 in its backbone to reduce network computational overhead. Moreover, it capitalizes on two attention mechanisms, CoordAtt and shuffle attention, significantly reinforcing the feature extraction capability, which is particularly advantageous for detecting small or obstructed vehicles from a UAV perspective.
The overall framework of Aero-YOLO comprises four parts: input, backbone, neck, and head. The input section of the Aero-YOLO network primarily manages image scaling, data augmentation, adaptive anchor computation, and adaptive image scaling; the default input image size is set at 640 × 640 × 3. The backbone consists of GSConv modules, C3 modules, CoordAtt attention mechanisms, and Spatial Pyramid Pooling Fusion (SPPF) modules. In the network's head, the original Conv module is replaced with GSConv, and shuffle attention mechanisms are added before two GSConv modules. In comparison with related models like YOLOv8, Aero-YOLO achieves an optimal balance between detection accuracy and computational cost.
Object Detection Framework
The YOLO model architecture stands as one of the prominent object detection algorithms currently in use, and YOLOv8 has exhibited commendable results in terms of speed and accuracy. Considering the vehicle recognition performance and resource constraints in UAVs [36], we opted for YOLOv8 as our foundational model. As the latest state-of-the-art (SOTA) model, it offers both object detection and instance segmentation capabilities, presenting models at various scales to adapt to diverse scene requirements. Compared to its predecessors, YOLOv8 introduces structural changes, including adjustments in certain bottleneck structures, an anchor-free approach with decoupled heads, and modifications in top-layer activation functions. It leverages multiple loss functions, such as binary cross-entropy for the classification loss and CIoU and distribution focal loss for the localization loss [37], while optimizing data augmentation strategies to enhance accuracy. For the output bounding boxes, it employs post-processing techniques like non-maximum suppression (NMS) to filter out detections in regions lacking significance, reducing redundant and overlapping boxes for more precise results.
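As a reminder of what the NMS step does, here is a generic sketch using torchvision's built-in operator; it illustrates the filtering principle rather than YOLOv8's internal implementation.

```python
import torch
from torchvision.ops import nms

# Generic NMS post-processing sketch (not YOLOv8's exact code): keep the
# highest-scoring boxes and suppress overlapping ones above an IoU threshold.
boxes = torch.tensor([[0., 0., 100., 100.],
                      [5., 5., 105., 105.],      # heavy overlap with box 0
                      [200., 200., 300., 300.]])
scores = torch.tensor([0.90, 0.75, 0.80])
keep = nms(boxes, scores, iou_threshold=0.5)     # indices of surviving boxes
print(keep)  # tensor([0, 2]); the redundant overlapping box is removed
```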
Despite YOLOv8's commendable performance, deploying it on lightweight, agile UAVs presents challenges due to its computational requirements, larger model size, and the significant variations in captured vehicle appearances and sizes. To enhance detection performance with respect to scale variations and computational costs, Aero-YOLO modifies the network structure in two aspects.
Lightweight Network Optimization
In the original YOLOv8 backbone, the intermediate feature maps from conventional convolutions exhibit significant redundancy, contributing to increased computational costs. The challenge lies in reducing algorithmic overhead while preserving algorithm performance. This section proposes modifications to two modules to minimize algorithmic costs.
GSConv emerges as the preferred choice for optimizing lightweight networks by reducing model parameters, broadening receptive fields, and enhancing computational efficiency. Replacing standard convolutional layers, GSConv bolsters the network's feature extraction capabilities. Research indicates that integrating GSConv throughout the network notably increases depth while reducing inference speed, so it is applied selectively. The module comprises Conv, DWConv, Concat, and shuffle operations [38]; its structure is shown in Figure 3. The input feature map first passes through a standard convolution that outputs half the channel count; a depth-wise separable convolution then retains that channel count, the two halves are concatenated to restore the original count, and the result is output via the shuffle module. By combining group convolutions with depth-wise separable convolutions, GSConv strengthens feature extraction and fusion, facilitating better capture of crucial vehicle features in images, while cutting computation and parameter counts by roughly 30% to 50% and sidestepping redundant information and complex calculations. Compared with standard convolution (SC), the time complexity of GSConv can be expressed as

$Time_{SC} = W \cdot H \cdot K_1 \cdot K_2 \cdot C_1 \cdot C_2, \qquad Time_{GSConv} = W \cdot H \cdot K_1 \cdot K_2 \cdot \frac{C_2}{2} \cdot (C_1 + 1)$,

where W and H denote the output feature map's width and height, K1 and K2 refer to the convolution kernel's size, C1 signifies the kernel's channel count, and C2 stands for the output feature map's channel count. When C1 is large, GSConv's computational complexity approaches 50% of that of SC.

Simultaneously, the basic structures of the C3 and C2f modules both follow the Cross-Stage Partial Network (CSP) architecture, differing primarily in the choice of correction units. While C3 provides the feature expressiveness required for vehicle detection tasks, it maintains a lighter structure more suitable for drone deployment. Consequently, the C2f module has been replaced by the C3 module, effectively reducing the computational burden while sustaining high performance. The module's structure, depicted in Figure 4, routes the feature map into two paths after entering C3: the left path traverses a Conv and a bottleneck, while the right path undergoes a single Conv operation. The outputs of both paths are concatenated and processed through another Conv layer. Within C3, the three Conv modules, each a 1 × 1 convolution, handle dimensionality reduction or expansion. The bottleneck in the backbone employs residual connections comprising two Convs: a 1 × 1 convolution that halves the channel count, followed by a 3 × 3 convolution that doubles it. The initial reduction helps the convolutional kernel grasp the feature information, while the subsequent expansion extracts more detailed features. Finally, the residual structure, which sums the input and output, prevents gradient vanishing.
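A minimal PyTorch sketch of GSConv following the description above is given below (standard convolution to half the channels, depth-wise convolution, concatenation, channel shuffle). Layer details such as the activation function and depth-wise kernel size are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

# Minimal GSConv sketch, not the paper's exact code.
class GSConv(nn.Module):
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.conv = nn.Sequential(          # standard conv: half the channels
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.dwconv = nn.Sequential(        # depth-wise conv keeps channels
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        x1 = self.conv(x)
        x2 = self.dwconv(x1)
        y = torch.cat((x1, x2), dim=1)      # restore the full channel count
        # Channel shuffle: interleave the two halves.
        b, c, h, w = y.shape
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(1, 64, 80, 80)
print(GSConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```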
Feature Extraction Optimization
To enhance vehicle detection accuracy in drone-captured scenes, we introduce two pivotal attention mechanisms: CoordAtt and shuffle attention. These mechanisms aim to bolster the model's ability to identify smaller or occluded vehicles, thereby enhancing detection performance from the drone's perspective. Below, we detail these attention mechanisms and explore their roles and advantages within the optimized lightweight network.
Primarily, a CoordAtt module is incorporated after each C3 module with the aim of directing the model's attention to features at different locations, which is particularly significant for addressing small vehicles or local regions that may appear in the UAV perspective. The network structure of the CoordAtt module is illustrated in Figure 5. CoordAtt integrates positional data within channel attention, skillfully avoiding two-dimensional global pooling by decomposing channel attention into two one-dimensional feature encodings [39]. This approach astutely aggregates the input features into two independently direction-aware feature maps, vertical and horizontal. These maps not only embed directional information but also capture long-range spatial dependencies along the spatial axes through the two attention maps generated by the encoding. Finally, multiplying these two attention maps with the input feature map highlights the expression of the regions of interest.
While embedding coordinate information, the challenge of retaining positional data arises in global pooling; hence, pooling is decomposed into the horizontal and vertical directions. Specifically, the output of the c-th channel at height h and at width w is

$z_c^h(h) = \frac{1}{W}\sum_{0 \le i < W} x_c(h, i), \qquad z_c^w(w) = \frac{1}{H}\sum_{0 \le j < H} x_c(j, w)$,

where H and W represent the height and width of the pooling kernel. These transformations aggregate features from the two spatial directions, forming a pair of direction-aware feature maps while capturing dependencies and preserving positional information.
The generation of coordinated attention proceeds by concatenation, followed by a 1 × 1 convolution. Spatial data in both the vertical and horizontal directions are encoded through BatchNorm and nonlinear activations. The encoded data are then split and adjusted in channel size by another 1 × 1 convolution per direction to align with the input. The entire process concludes with sigmoid normalization and weighted fusion:

$y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j)$,

where $x_c(i, j)$ represents the input feature map, and $g_c^h(i)$ and $g_c^w(j)$ denote the attention weights in the two spatial directions.
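A compact PyTorch sketch of this pipeline (directional pooling, shared 1 × 1 encoding, split, per-direction sigmoid gates) is shown below; the reduction ratio and activation choice are assumptions rather than the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

# Coordinate attention sketch following the equations above; not the
# authors' exact implementation.
class CoordAtt(nn.Module):
    def __init__(self, c, reduction=16):
        super().__init__()
        mid = max(8, c // reduction)
        self.conv1 = nn.Sequential(
            nn.Conv2d(c, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU())
        self.conv_h = nn.Conv2d(mid, c, 1)
        self.conv_w = nn.Conv2d(mid, c, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        xh = x.mean(dim=3, keepdim=True)                       # pool over W
        xw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # pool over H
        y = self.conv1(torch.cat([xh, xw], dim=2))             # joint encoding
        yh, yw = torch.split(y, [h, w], dim=2)
        gh = torch.sigmoid(self.conv_h(yh))                    # (b, c, h, 1)
        gw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * gh * gw               # y_c(i,j) = x_c(i,j) * g_h * g_w

x = torch.randn(2, 64, 40, 40)
print(CoordAtt(64)(x).shape)  # torch.Size([2, 64, 40, 40])
```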
Subsequently, the channel attention mechanism shuffle attention (SA) is introduced. This mechanism improves the network's efficiency in utilizing features from different channels: by rearranging and integrating feature channels, shuffle attention directs the network's focus toward crucial channel information, improving feature distinctiveness, especially in scenarios involving occluded vehicles. The network structure of the shuffle attention module is illustrated in Figure 6. For the task of drone-based vehicle recognition, the SA module implements a lightweight and efficient attention mechanism by grouping the feature channels and processing each group with parallel channel and spatial attention branches [40], whose outputs are finally recombined and shuffled to enable cross-group information flow.
For a given input feature map $x \in R^{C\times W\times H}$, C, H, and W represent the number of channels, the height, and the width, respectively. Initially, the feature map X is segmented into G groups along the channel dimension, denoted as $X = [X_1, \ldots, X_G]$ with $X_i \in R^{(C/G)\times W\times H}$. Subsequently, each group is further divided into two branches along the channel direction, $X_{i1}, X_{i2} \in R^{(C/2G)\times W\times H}$. One branch leverages inter-channel relationships to generate a channel attention map, while the other branch employs spatial attention maps between features.
Regarding channel attention, shuffle attention employs a lightweight strategy [41], combining global average pooling, scaling, and an activation function to achieve a balance between speed and precision in the drone environment. The channel statistic is obtained by global average pooling, $s = \frac{1}{H\times W}\sum_{h=1}^{H}\sum_{w=1}^{W} X_{i1}(h, w)$, and the branch output is $X'_{i1} = \sigma(W_1 s + b_1)\cdot X_{i1}$, where $W_1, b_1 \in R^{(C/2G)\times 1\times 1}$ represent the network's trainable parameters and σ denotes the sigmoid activation function.
Concerning spatial attention, to complement the channel attention, a group normalization (GN) operation is introduced. The branch output is $X'_{i2} = \sigma(W_2 \cdot GN(X_{i2}) + b_2)\cdot X_{i2}$, where $W_2, b_2 \in R^{(C/2G)\times 1\times 1}$ represent the network's trainable parameters and σ indicates the sigmoid activation function.
Following attention learning and feature recalibration, the two branches are concatenated to obtain $X'_i = [X'_{i1}, X'_{i2}] \in R^{(C/G)\times W\times H}$. Then all sub-features are aggregated and a channel shuffle operation is performed, enabling information flow across groups.
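A minimal PyTorch sketch of shuffle attention following the equations above is given below; the parameter shapes mirror W1, b1, W2, b2 in the text, while the group count and initialization are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

# Shuffle attention sketch: grouping, channel branch (GAP + scale), spatial
# branch (GroupNorm + scale), concatenation, and channel shuffle.
class ShuffleAttention(nn.Module):
    def __init__(self, c, groups=8):
        super().__init__()
        self.g = groups
        ch = c // (2 * groups)
        self.w1 = nn.Parameter(torch.zeros(1, ch, 1, 1))  # channel-branch scale
        self.b1 = nn.Parameter(torch.ones(1, ch, 1, 1))
        self.w2 = nn.Parameter(torch.zeros(1, ch, 1, 1))  # spatial-branch scale
        self.b2 = nn.Parameter(torch.ones(1, ch, 1, 1))
        self.gn = nn.GroupNorm(ch, ch)

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b * self.g, c // self.g, h, w)
        x1, x2 = x.chunk(2, dim=1)                         # two branches
        s = x1.mean(dim=(2, 3), keepdim=True)              # global avg pooling
        x1 = x1 * torch.sigmoid(self.w1 * s + self.b1)             # X'_i1
        x2 = x2 * torch.sigmoid(self.w2 * self.gn(x2) + self.b2)   # X'_i2
        y = torch.cat([x1, x2], dim=1).view(b, c, h, w)
        # Channel shuffle across groups:
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(2, 64, 40, 40)
print(ShuffleAttention(64)(x).shape)  # torch.Size([2, 64, 40, 40])
```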
Through the introduction of these two attention mechanisms, the model aims to better capture crucial features in drone-captured scenes, enhancing the accuracy in detecting small or obscured vehicles.
The Model Parameters of Aero-YOLO
For Aero-YOLO, we introduce three new sets of model parameters: Aero-YOLO (extreme), Aero-YOLO (ultra), and Aero-YOLO (omega). The parameter models of Aero-YOLO are presented in Table 1. They strike a balance between model performance and computational complexity, offering adaptability and versatility across various application scenarios.
1. Aero-YOLO (extreme): Prioritizes performance enhancement while focusing on improving computational efficiency. It involves a moderate reduction in model size, suitable for resource-constrained scenarios with extensive datasets.
2. Aero-YOLO (ultra): Aims to achieve a comprehensive balance by adjusting the proportions of depth, width, and channel numbers. This adjustment seeks the optimal equilibrium among performance, computational complexity, and resource utilization, suitable for general-purpose application scenarios.
3. Aero-YOLO (omega): Emphasizes maintaining high performance while reducing computational complexity. It concentrates on optimizing extreme scenarios and complex environments within object detection to achieve more precise detection and localization.
The introduction of these three parameter models enriches the selection range of Aero-YOLO, better meeting diverse requirements across different tasks and environments. The subsequent versions of Aero-YOLO, namely Aero-YOLO (omega), Aero-YOLO (ultra), and Aero-YOLO (extreme), are abbreviated as Aero-YOLOo, Aero-YOLOu, and Aero-YOLOe, respectively. To provide a clearer demonstration of the model's architecture, the parameters of the backbone and head layers of the Aero-YOLO model are displayed in Tables 2 and 3.

VisDrone2019 Dataset

The VisDrone2019 dataset, collected by the AISkyEye team at Tianjin University, stands as a significant dataset for object detection. It comprises images captured from drone perspectives, along with corresponding annotation files, serving the purpose of training and evaluating computer vision algorithms. With over 10,000 images, it includes 6471 training, 548 validation, 1610 test, and 1580 competition images. The images exhibit diverse sizes ranging from 2000 × 1500 to 480 × 360 pixels, encompassing scenes spanning streets, squares, parks, schools, and residential areas, with shooting conditions varying from ample daytime lighting to inadequate nighttime lighting, cloudy skies, strong light, and glare. The detailed annotation files meticulously catalog ten object categories depicted in the images, such as pedestrians, bicycles, cars, trucks, tricycles, canopy tricycles, buses, and motorcycles.
UAV-ROD Dataset
The UAV-ROD dataset comprises 1577 images, encompassing 30,090 annotated vehicle instances delineated by oriented bounding boxes. Image resolutions are 1920 × 1080 and 2720 × 1530 pixels, with drone flight altitudes ranging from 30 to 80 meters. Encompassing diverse scenes such as urban roads, parking lots, and residential areas, the dataset provides a rich array of visual contexts. It is split into training and testing subsets of 1150 and 427 images, respectively.
Experimental Environment
In this study, experiments were conducted using PyTorch 2.0.0 on GPU. PyTorch utilizes CUDA 11.8 to support the parallel computation of the YOLOv8 deep learning model. Leveraging GPU and CUDA, we accelerated the computational processes and employed the PyTorch framework for model construction and training. For the detailed configuration of the experimental setup, refer to Table 4.

This study conducted a comprehensive assessment of the proposed method, examining its performance in terms of detection accuracy and model parameter size. Multiple metrics were utilized to evaluate the model performance, including precision (P), recall (R), average precision (AP), F1-score, and mean average precision (mAP). P gauges the accuracy of the model in predicting positive classes, whereas R measures the model's capability to identify true-positive classes. The F1-score is a comprehensive metric that balances precision and recall, representing their harmonic mean. AP signifies the area enclosed by the precision–recall curve, offering an assessment of overall model performance. Additionally, mAP measures the average precision across all object categories, providing a holistic view of the model's performance in recognizing multiple classes. These metrics follow the standard definitions

$P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}, \quad F1 = \frac{2PR}{P + R}, \quad AP = \int_0^1 P(R)\,dR, \quad mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i$.

Furthermore, the number of model parameters (Params) represents the count of parameters (i.e., weights) the model uses to learn patterns from training data; more parameters indicate increased model complexity. Giga Floating-Point Operations (GFLOPs) is a unit used to measure the total number of floating-point operations performed, where 1 GFLOP is equivalent to 10^9 Floating-Point Operations (FLOPs); GFLOPs is commonly used to assess the computational requirements of deep learning models, especially in tasks that demand substantial computing resources. The Frames per Second (FPS) metric signifies the speed at which the model analyzes images during target detection, serving as an indicator of its detection efficiency. Evaluating the real-time performance of the model in detection tasks allows for the examination of the dynamic relationship between accuracy and FPS; consequently, both FPS and accuracy play pivotal roles in determining the model's applicability in practical scenarios.
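These metric definitions translate directly into code. The sketch below computes them from TP/FP/FN counts and a precision–recall curve; it uses the trapezoidal AP variant, which differs in detail from VOC/COCO-style interpolation, and the inputs are illustrative.

```python
import numpy as np

# Minimal sketch of the evaluation metrics defined above; illustrative only.
def precision(tp, fp): return tp / (tp + fp)
def recall(tp, fn): return tp / (tp + fn)
def f1(p, r): return 2 * p * r / (p + r)

def average_precision(recalls, precisions):
    """Area under the P-R curve via the trapezoidal rule (one common
    variant; VOC/COCO-style interpolation differs in detail)."""
    order = np.argsort(recalls)
    return np.trapz(np.asarray(precisions)[order], np.asarray(recalls)[order])

p, r = precision(80, 20), recall(80, 40)
print(p, r, f1(p, r))                            # 0.8, 0.667, 0.727
print(average_precision([0.2, 0.5, 0.9], [0.95, 0.85, 0.6]))
# mAP would then be the mean of the per-class AP values.
```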
Results on the VisDrone2019 Dataset
A series of experiments was conducted on the VisDrone2019 dataset to showcase the advantages of the proposed architecture. Comparative experiments involved widely used methods like YOLOv5, YOLOv8 improved with MobileNetv3, MobileNetv2-SSD [42], the method proposed by Li et al. [43], and the original YOLOv8. Apart from the standard YOLOv8 model, we presented three novel parameter configurations, Aero-YOLOe, Aero-YOLOu, and Aero-YOLOo, with all training and testing processes employing identical default runtime settings and image processing rules.
Figure 7 illustrates the performance of the various experimental models in terms of their AP values. On the VisDrone dataset, the Aero-YOLO network consistently leads in almost all precision metrics. The assessment distinctly indicates that the Aero-YOLO series outperforms both the YOLOv5 and baseline YOLOv8 models. Notably, Aero-YOLOe, Aero-YOLOu, and Aero-YOLOo exhibit significant improvements, emphasizing their prowess in object detection. For instance, Aero-YOLOe achieves an mAP@0.5 of 0.434, marking a 9.0% increase over the baseline YOLOv5l and a 4.6% rise over the YOLOv8l-based model.
Figure 8 showcases the performance of all experimental models in terms of F1 and P values. The Aero-YOLO series demonstrates a pronounced advantage in F1 values. For example, Aero-YOLOo, Aero-YOLOu, and Aero-YOLOe all achieve an F1-score of 0.47, surpassing the performance of YOLOv5, the MobileNetv3 series, and the basic YOLOv8 model. Compared to the baseline YOLOv5l, the F1-score shows an improvement of 6.8% and a 2.1% increase relative to the YOLOv8l-based model, signifying the superior performance of the Aero-YOLO model in balancing precision and recall. In terms of R values, the Aero-YOLO series is competitive, surpassing the other models: Aero-YOLOe achieves an R value of 0.63, marking a 6.8% improvement over the baseline YOLOv5l and a 3.2% increase over the YOLOv8l-based model. The overall trend indicates a proportional increase in R values with increasing model scale.

To further explore the complexity and computational efficiency of our proposed method, Figure 9 illustrates the Params, GFLOPs, and FPS. Compared to the baseline YOLOv8 model, Aero-YOLO exhibits reductions of approximately 23% in Params and 22% in GFLOPs, and it also demonstrates a significant advantage in FPS. These improvements stem from Aero-YOLO's substitution of the original YOLOv8 Conv and C2f modules with GSConv and C3 modules, resulting in a more streamlined model structure. The adoption of Aero-YOLO significantly alleviates the computational burden onboard drones and achieves a well-balanced lightweight model, ideal for resource-constrained environments such as unmanned aerial vehicles.

The experimental outcomes of our Aero-YOLO model are visually depicted in Figures 10 and 11, with detected objects delineated by rectangles annotated with their predicted categories. Figure 10 showcases selected instances of vehicles under various conditions, encompassing both daytime and nighttime scenarios as well as diverse angles and altitudes. The majority of vehicles in these images were accurately detected. Particularly notable is our algorithm's capability to identify vehicles partially obscured at image edges or occluded, underscoring its robust ability to detect vehicle objects in UAV images. In Figure 11, images exhibiting erroneous detections or undetected elements are presented. Most false detections occurred in images captured from high altitudes and those containing vehicles with significant size disparities, indicating the potential for improvement in the proposed detector. Further inspection reveals instances of missed detections among many distant, densely packed vehicles and small targets, emphasizing the ongoing challenge of detecting occluded objects. Moreover, certain real objects were inaccurately labeled; for instance, in the first image of the last row in Figure 11, a trash bin was misidentified as a pedestrian. We acknowledge that mislabeled real objects might impact the model's training and experimental evaluations; however, rectifying all labels within this training dataset poses a challenging task.
Results on the UAV-ROD Dataset
Comparative experiments with the Aero-YOLO model were conducted on the UAV-ROD dataset, with results presented in Table 5. As the model size increased from Aero-YOLOn to Aero-YOLOo, accuracy improved, albeit accompanied by a proportional increase in model parameters and computational complexity. At the same time, the Aero-YOLO model was superior across most metrics, excelling particularly in mAP50-95 compared to the other models. Compared to the YOLOv8 series and other popular object detection models, Aero-YOLO exhibits better performance, further validating its significant advantage in unmanned aerial vehicle object detection, especially for small-object handling and high-precision detection. Figure 12 presents selected visualizations from the UAV-ROD dataset, showcasing precise vehicle detection across various backgrounds, including urban roads, residential areas, and roadsides. Even among densely packed vehicles, the approach discriminated each vehicle reliably.
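A minimal sketch of the COCO-style mAP50-95 metric referenced above, i.e., AP averaged over IoU thresholds from 0.50 to 0.95 in steps of 0.05; the per-threshold AP values below are placeholders, not results from the paper.

```python
import numpy as np

# IoU thresholds 0.50, 0.55, ..., 0.95 (ten values).
iou_thresholds = np.arange(0.50, 1.00, 0.05)

def map_50_95(ap_per_threshold: np.ndarray) -> float:
    """Mean of the AP values computed at each IoU threshold."""
    assert len(ap_per_threshold) == len(iou_thresholds)
    return float(ap_per_threshold.mean())

# Hypothetical AP values at each threshold (placeholders).
aps = np.linspace(0.80, 0.30, num=10)
print(f"mAP50-95 = {map_50_95(aps):.3f}")
```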
Ablation Experiments
A series of ablation experiments was conducted on the VisDrone dataset to investigate the impact of different network structures on the final detection outcomes. The results are summarized in Table 6. We sequentially modified the networks with the GSConv, C3, double shuffle attention, and CoordAtt modules while changing the optimizer to SGD, leading to the development of Aero-YOLO. Each model was evaluated on the VisDrone2019-Val dataset under consistent hyperparameters: an input image size of 640 × 640, a batch size of 8 for all models, and training fixed at 100 epochs. Replacing YOLOv8's Conv module with GSConv and C2f with C3 notably decreased both the GFLOPs and Params, at the cost of a marginal decline in the mAP@0.5, F1, and R metrics. In aerial imagery, striking a balance between model size and performance, where sacrificing a slight performance margin enables substantial reductions in the GFLOPs and Params, emerged as the more significant consideration. Integrating a dual-layer shuffle attention mechanism into YOLOv8's head yielded up to a 6.9% improvement in mAP@50, enhancing recognition of intricate details in specialized vehicles and thereby augmenting detection capability. The comparison between the Adam [44] and SGD optimizers indicated superior model performance with SGD.
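A minimal sketch of the Adam-vs-SGD comparison described above, using standard PyTorch optimizers; the model, learning rates, and momentum value are illustrative assumptions, not the paper's training configuration.

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the detector network

# Identical hyperparameters otherwise; only the optimizer changes.
optimizers = {
    "Adam": torch.optim.Adam(model.parameters(), lr=1e-3),
    "SGD": torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.937),
}
for name, opt in optimizers.items():
    # ... train for the same number of epochs, then evaluate mAP/F1/R ...
    print(f"would train with {name}: {type(opt).__name__}")
```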
To incorporate the CoordAtt module seamlessly into the backbone network, parameter adjustments were made without inflating the GFLOPs or Params. At the same time, improvements in the R, mAP@0.5, and F1 metrics demonstrated the efficacy of the CoordAtt modifications in enhancing detection accuracy.
Overall, the Aero-YOLO series maintains relatively high detection performance while reducing model parameters, showcasing its potential and advantages in lightweight object detection.
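A minimal sketch of how the parameter reduction discussed above can be measured in PyTorch; the two Linear layers are stand-ins for the baseline and slimmed detectors.

```python
import torch

def count_params(model: torch.nn.Module) -> int:
    """Count trainable parameters, as used when reporting Params."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

baseline = torch.nn.Linear(100, 100)   # stand-in for YOLOv8
slim = torch.nn.Linear(100, 77)        # stand-in for Aero-YOLO
reduction = 1 - count_params(slim) / count_params(baseline)
print(f"parameter reduction: {reduction:.0%}")  # -> 23%
```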
Conclusions and Future Outlook
The realm of aerial imagery poses numerous challenges, including small target sizes, low resolution, occlusions, and variations in pose and scale, all of which significantly impact the performance of many object detectors. Throughout the detection process, there is a perpetual need to balance accuracy against inference efficiency. In response to this challenge, we introduce Aero-YOLO, an unmanned aerial vehicle (UAV) object detection algorithm. We propose three novel parameter configurations aimed at strengthening feature extraction capabilities while reducing computational requirements. Specifically, we replace the C2f module in the backbone network with C3, substitute the Conv module with GSConv, and introduce the CoordAtt and shuffle attention mechanisms in both the backbone and head.
When evaluated on the VisDrone2019 dataset under the parameter specifications (n, s, m, l, x) of YOLOv8, Aero-YOLO exhibits a 23% reduction in parameters while keeping its F1, R, and mAP@50 metrics close to the baseline. Under the new parameter settings, Aero-YOLOe matches the parameter count of YOLOv8m yet demonstrates significant improvements in the F1, mAP, and R indicators. Additionally, experiments on the UAV-ROD dataset demonstrate Aero-YOLO's consistent excellence, affirming its superior performance in UAV-based vehicle recognition. Although Aero-YOLO improves detection accuracy, it does not fully address the identification of vehicles that are occluded or blurred. In forthcoming research, we plan to refine the Aero-YOLO algorithm to better handle occlusion and target blurring. Future stages of the project will also involve field-testing the proposed algorithm to validate its performance in real-world scenarios.
Figure 1. Examples of unmanned aerial vehicle images in the VisDrone dataset, including images with varied and complex backgrounds, weather conditions, and lighting, as well as varying vehicle appearances and sizes.
Figure 5. Schematic diagram of the CoordAtt module.
Figure 6. Structure of the shuffle attention module.
Figure 9. Params, GFLOPs, and FPS bar charts (MobileNetv2-SSD [42]; Li et al. [43]). Overall, Aero-YOLO performed exceptionally well in drone vehicle detection tasks, reducing the Params and GFLOPs while improving model accuracy, demonstrating the effectiveness of the model in experimental settings.
Figure 10. Samples of vehicle recognition under varying lighting and weather conditions and in crowded backgrounds.
Figure 11. Samples of erroneous or missed vehicle detections under varying lighting and weather conditions and in crowded backgrounds.
Figure 12. Visual demonstrations of precise vehicle detection across diverse backgrounds in the UAV-ROD dataset.
Table 1. Summary of Aero-YOLO models by depth, width, max. channels, and layers.
Table 5. Comparison experiment of Aero-YOLO on the UAV-ROD dataset.
Table 6. Ablation experiment of Aero-YOLO on the VisDrone2019 dataset.
"Engineering",
"Computer Science",
"Environmental Science"
] |
CONAN - COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech
Although there is an unprecedented effort to provide adequate responses, in terms of laws and policies, to hate content on social media platforms, dealing with hatred online is still a tough problem. Tackling hate speech through the standard means of content deletion or user suspension may be criticized as censorship and overblocking. One alternative strategy, which has so far received little attention from the research community, is to actually oppose hate content with counter-narratives (i.e., informed textual responses). In this paper, we describe the creation of the first large-scale, multilingual, expert-based dataset of hate-speech/counter-narrative pairs. This dataset was built with the effort of more than 100 operators from three different NGOs who applied their training and expertise to the task. Together with the collected data we also provide additional annotations covering expert demographics, hate and response type, and data augmentation through translation and paraphrasing. Finally, we provide initial experiments to assess the quality of our data.
Introduction
Together with the rapid growth of social media platforms, the amount of user-generated content is steadily increasing. At the same time, abusive and offensive language can spread quickly and is difficult to monitor. Defining hate speech is challenging because of its breadth and its nuances across cultures and languages. For instance, according to UNESCO hate speech refers to "expressions that advocate incitement to harm based upon the targets being identified with a certain social or demographic group" (Gagliardone et al., 2015).
Victims of hate speech are usually targeted because of various aspects such as gender, race, religion, sexual orientation, or physical appearance. For example, Sentence 1 shows explicit hostility towards a specific group with no reasons explained 1.
(1) I hate Muslims. They should not exist.
Online hate speech can deepen prejudice and stereotypes (Citron and Norton, 2011) and bystanders may receive false messages and consider them correct.
Although Social Media Platforms (SMPs) and governmental organizations have devoted unprecedented attention to taking adequate action against hate speech by implementing laws and policies (Gagliardone et al., 2015), these measures do not seem to achieve the desired effect, since hate content is continuously evolving and adapting, making its identification a tough problem (Davidson et al., 2017).
The standard approach used on SMPs to prevent hate from spreading is the suspension of user accounts or the deletion of hate comments, while trying to weigh the right to freedom of speech. Another strategy, which has received little attention so far, is to use counter-narratives. A counter-narrative (sometimes called a counter-comment or counter-speech) is a response that provides non-negative feedback through fact-bound arguments and is considered the most effective approach to withstanding hate speech (Benesch, 2014; Schieb and Preuss, 2016). In fact, it preserves the right to freedom of speech and counters stereotypes and misleading information with credible evidence. It can also alter the viewpoints of haters and bystanders by encouraging the exchange of opinions and mutual understanding, and can help de-escalate the conversation. A counter-narrative such as the one in Sentence 2 is a non-negative, appropriate response to Sentence 1, while the one in Sentence 3 is not, since it escalates the conversation.
(2) Muslims are human too. People can choose their own religion.
(3) You are truly one stupid backwards thinking idiot to believe negativity about Islam.
In this respect, some NGOs are tackling hatred online by training operators to monitor SMPs and to produce appropriate counter-narratives when necessary. Still, manual intervention against hate speech is a Sisyphean task, and automating the countering procedure would increase the efficacy and effectiveness of hate countering (Munger, 2017).
As a first step in the above direction, we have nichesourced the collection of a dataset of counter-narratives to 3 different NGOs. Nichesourcing is a specific form of outsourcing that harnesses the computational efforts of niche groups of experts rather than the 'faceless crowd' (De Boer et al., 2012). Nichesourcing combines the strengths of the crowd with those of professionals (De Boer et al., 2012; Oosterman et al., 2014). In our case we organized several data collection sessions with NGO operators, who are trained experts specialized in writing counter-narratives that are meant to fight hatred and de-escalate the conversation. In this way we built the first large-scale, multilingual, publicly available, expert-based dataset of hate speech/counter-narrative pairs for English, French and Italian, focusing on the hate phenomenon of Islamophobia. The construction of this dataset involved more than 100 operators and more than 500 person-hours of data collection. After the data collection phase, we hired three non-expert annotators who performed additional tasks that did not require specific domain expertise (200 person-hours of work): paraphrasing original hate content to augment the number of pairs per language, annotating hate content sub-topic and counter-narrative type, and translating content from Italian and French to English to obtain parallel data across languages. This additional annotation ensures that the dataset can be used for several NLP tasks related to hate speech.
The remainder of the paper is structured as follows. First, we briefly discuss related work on hate speech in Section 2. Then, in Section 3, we introduce our CONAN dataset and some descriptive statistics, followed by a quantitative and qualitative analysis of our dataset in Section 4. We conclude with our future work in Section 5.
Hate datasets. Several hate speech datasets are publicly available, usually including a binary annotation, i.e., whether the content is hateful or not (Reynolds et al., 2011; Rafiq et al., 2015; Hosseinmardi et al., 2015; de Gibert et al., 2018; ElSherief et al., 2018). Also, several shared tasks have released their datasets for hate speech detection in different languages. For instance, there is the German abusive language identification task on SMPs at GermEval (Bai et al., 2018), and the hate speech and misogyny identification tasks for Italian at EVALITA (Del Vigna et al., 2017; Fersini et al., 2018) and for Spanish at IberEval (Ahluwalia et al., 2018; Shushkevich and Cardiff, 2018). Bilingual hate speech datasets are also available for Spanish and English (Pamungkas et al., 2018). Waseem and Hovy (2016) released 16k annotated tweets containing 3 offense types: sexist, racist and neither. Ross et al. (2017) first released a German hate speech dataset of 541 tweets targeting the refugee crisis and then offered insights for the improvement of hate speech detection by providing multiple labels for each hate speech.
It should be noted that, due to copyright limitations, hate speech datasets are usually distributed as lists of tweet IDs, making them ephemeral and prone to data loss (Klubička and Fernández, 2018). For this reason, Sprugnoli et al. (2018) created a multi-turn annotated WhatsApp dataset for Italian on cyberbullying, using simulation sessions with teenagers to overcome the data collection/loss problem.
Hate detection. Several works have investigated online English hate speech detection and the types of hate speech. Owing to the availability of current datasets, researchers often use supervised approaches to tackle hate speech detection on SMPs including blogs (Warner and Hirschberg, 2012; Djuric et al., 2015; Gitari et al., 2015), Twitter (Xiang et al., 2012; Silva et al., 2016; Mathew et al., 2018a), Facebook (Del Vigna et al., 2017), and Instagram (Zhong et al., 2016). The predominant approaches are to build a classifier trained on various features derived from lexical resources (Gitari et al., 2015; Williams, 2015, 2016), n-grams (Sood et al., 2012; Nobata et al., 2016) and knowledge bases (Dinakar et al., 2012), or to utilize deep neural networks (Badjatiya et al., 2017). In addition, other approaches have been proposed to detect subcategories of hate speech such as anti-black (Kwok and Wang, 2013) and racist (Badjatiya et al., 2017) content. Silva et al. (2016) studied the prevalent hate categories and targets on Twitter and Whisper, but limited hate speech only to the form I <intensity> <user intent> <any word>. A comprehensive overview of recent NLP approaches to hate speech detection can be found in (Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018).
Hate countering. Lastly, we should mention that a very limited number of studies have been conducted on counter-narratives (Benesch, 2014; Schieb and Preuss, 2016; Ernst et al., 2017; Mathew et al., 2018b). Mathew et al. (2018b) collected YouTube comments that contain counter-narratives to YouTube videos of hatred. Schieb and Preuss (2016) studied the effectiveness of counter-narratives on Facebook via a simulation model. The study of Wright et al. (2017) shows that some arguments among strangers induce favorable changes in discourse and attitudes. To our knowledge, there exists only one very recent seminal work (Mathew et al., 2018a) focusing on the idea of collecting hate message/counter-narrative pairs from Twitter. They used a simple pattern of the form (I <hate> <category>) to first extract hate tweets and then manually annotate counter-narratives found in the responses. Still, there are several shortcomings to their approach: (i) the dataset already lost more than 60% of its pairs within a small time interval (content deletion), since only tweet IDs are distributed; (ii) it is in English only; (iii) the dataset was collected from a specific template, which limits the coverage of hate speech; and (iv) many of the answers come from ordinary web users and contain, for example, offensive text that does not meet the de-escalation intent of NGOs or the standards and quality of their operators' responses.
Considering the aforementioned works, we can reasonably state that no suitable corpus of counter-narratives is available for our purposes, especially because the natural 'countering' data that can be found on SMPs, such as example 3, often does not meet the required standards. For this reason we decided to build CONAN, a dataset of COunter NArratives through Nichesourcing.
CONAN Dataset
In this section, we describe the characteristics that we intend our dataset to possess, the nichesourcing methodology we employed to collect the data, and the further expansion of the dataset together with the annotation procedures. Moreover, we give some descriptive statistics and analysis of the collected data. CONAN can be downloaded at the following link: https://github.com/marcoguerini/CONAN.
Fundamentals of the Dataset
Considering the shortcomings of the existing datasets and our aim to provide a reliable resource to the research community, we want CONAN to comply with the following characteristics: Copy-free data. We want to provide a dataset that is not ephemeral, by releasing only copy-free textual data that can be directly exploited by researchers without data loss across time, as originally pointed out in (Klubička and Fernández, 2018).
Multilingual data. Our dataset is produced as a multilingual resource to allow for cross-lingual studies and approaches. In particular, it contains hate speech/counter-narrative pairs for English, French, and Italian.
Expert-based data. The hate speech/counter-narrative pairs have been collected through nichesourcing from three different NGOs from the United Kingdom, France and Italy. Therefore, both the responses and the hate speech itself are expert-based, composed by operators specifically trained to oppose online hate speech.
Protecting operators' identities. We aim to create a secure dataset that will not disclose the identities of the operators, in order to protect them against being tracked and attacked online by hate spreaders. This might be the case if we were to collect their real SMP activities, following a procedure similar to the one in Mathew et al. (2018a). Therefore our data collection was based on simulated SMP activity.
Dataset Collection
We followed the same data collection procedure for each language to grant the same conditions and comparability of the results. The data collection was conducted along the following steps:
1. Hate speech collection. For each language we asked two native-speaker experts (NGO trainers) to write around 50 prototypical Islamophobic short hate texts. This step ensured that (i) the sample covers the typical 'arguments' against Islam as uniformly as possible, and (ii) we can distribute to the NLP community the original hate speech as well as its counter-narratives.
2. Preparation of data collection forms. We prepared three online forms (one per language) with the same instructions for the operators, translated into the corresponding language. For each language, we prepared two types of forms: in the first, users respond to hate texts prepared by the NGO trainers; in the second, users write their own hate texts and counter-narratives at the same time. In each form operators were first asked to anonymously provide their demographic profile, including age, gender, and education level, and then to compose up to 5 counter-narratives for each hate text.
3. Counter-narrative instructions. The operators were already trained to follow the guidelines of the NGOs for creating proper counter-narratives. Such guidelines are highly consistent across languages and across NGOs, and are similar to those of the 'Get the Trolls Out' project 2. These guidelines emphasize using fact-bound information and non-offensive language in order to avoid escalating the discussion, as outlined in Table 1. Furthermore, for our specific data collection task, operators were asked to follow their intuitions without over-thinking and to compose reasonable responses. The motivation for this instruction was to collect as much and as diverse data as possible, since for current AI technologies (such as deep learning approaches) quantity and quality are of paramount importance, and a few perfect examples do not provide enough generalization evidence. Other than this instruction, and the fact of using a form instead of responding on an SMP, operators carried out their normal counter-messaging activities.
4. Data collection sessions. For each language, we performed three data collection sessions on different days. Each session lasted roughly three hours 3 and had a variable number of operators, usually around 20 (depending on their availability). Operators are different from NGO trainers and might change across sessions. Operators were gathered in the same room (NGO premises) with a computer, and received a brief introduction from the NGO trainer about our specific counter-narrative collection task, as described above.
A sample of the collected data for the three languages is given in Table 2.
Dataset Augmentation and Annotation
After the data collection phase, we hired three non-expert annotators, who performed additional work that did not require specific domain expertise. Their work amounted to roughly 200 hours. In particular, they were asked to (i) paraphrase original hate content to augment the number of pairs per language, (ii) annotate hate speech sub-topics and counter-narrative types, and (iii) translate content from French and Italian to English to obtain parallel data across languages. To guarantee data quality, after the annotation and augmentation phases, a validation procedure was conducted by NGO trainers on the newly generated data for their specific language.
Table 1. Counter-narrative guidelines followed by the NGO operators:
- Don't be abusive: Before submitting a response, make sure the response does not spread any hate, bigotry, prejudice or illegal content. We want to keep the conversations peaceful and not let them degenerate into a conflict. We are talking about people, not categories.
- Think about the objectives: Before writing a response, think about the effect it may create and the one you want to obtain. Paying attention to the objectives will help in using proper words.
- Call for influential users: Enlisting influential supporters (civic leaders, politicians, subject experts) will help bring attention and deepen the effect of the counter-narrative.
- Use credible evidence: The information in hate speech may be confusing and misleading. Ask for clarification when necessary. Counter it with credible evidence and use reliable sources.
- Think about the tone: We can demonstrate understanding and support to those who might be attacked. Be careful when using sarcasm, humour, parody and satire; we can use them if we are able to master them, as they run the danger of being antagonistic.

Table 2. Sample hate speech/counter-narrative pairs for the three languages:
- Hate speech (EN): "Every Muslim is a potential terrorist." Counter-narrative: "Every Muslim is also a potential peacemaker, doctor, philanthropist... What's your point?"
- Hate speech (IT): "I musulmani violentano le nostre donne e vanno castrati." ("Muslims rape our women and must be castrated.") Counter-narrative: "Good evening, if I understood correctly, you're telling me that all adult men of Islamic faith present in Italy raped, rape or are going to rape Italian women? Can you bring in data in support of your statement?"
- Hate speech (FR): "Le voile est contraire à la laïcité." ("The veil is contrary to secularism.") Counter-narrative: "Bien au contraire, la laïcité permet à tout citoyen de vivre librement sa confession." ("On the contrary, secularism allows every citizen to freely profess his faith.")

Paraphrasing for augmenting data pairs. Recent deep learning approaches are data hungry, and data augmentation is a way to mitigate the problem. For instance, to improve text classification performance for sexism, new tweets are generated by replacing words in original tweets with synonyms from ConceptNet (Sharifirad et al., 2018), while back-translation (Sennrich et al., 2016) and gold standard repetition (Chatterjee et al., 2017) have been used in sequence-to-sequence Machine Translation. In all these tasks, adding the synthetic pairs to the original data always results in significant improvements in performance.
In line with the idea of artificially augmenting pairs, and since in our dataset we have many responses for few hate speeches, we produced two manual paraphrases of each hate speech and paired them with the counter-narratives of the original one. We thereby tripled the number of pairs in each language.
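A minimal sketch of the pairing scheme just described: each manual paraphrase inherits all counter-narratives of its original hate speech, tripling the pairs. The data values are illustrative placeholders.

```python
dataset = {
    "HS-001": {
        "text": "Every Muslim is a potential terrorist.",
        "paraphrases": ["<manual paraphrase 1>", "<manual paraphrase 2>"],
        "counter_narratives": [
            "Every Muslim is also a potential peacemaker, doctor, "
            "philanthropist... What's your point?"
        ],
    },
}

pairs = []
for entry in dataset.values():
    # The original hate speech plus its two paraphrases each pair with
    # every counter-narrative, tripling the number of pairs.
    for hs in [entry["text"], *entry["paraphrases"]]:
        for cn in entry["counter_narratives"]:
            pairs.append((hs, cn))

print(len(pairs))  # 3x the original number of pairs for this hate speech
```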
Counter-narrative type annotation. In this task, we asked the annotators to label each counter-narrative with types.
Based on the counter-narrative classes proposed by (Benesch et al., 2016; Mathew et al., 2018b), we defined the following set of types: PRESENTATION OF FACTS, POINTING OUT HYPOCRISY OR CONTRADICTION, WARNING OF CONSEQUENCES, AFFILIATION, POSITIVE TONE, NEGATIVE TONE, HUMOR, COUNTER-QUESTIONS, and OTHER. With respect to the original guidelines, we added a new type of counter-narrative called COUNTER-QUESTIONS to cover expressions/replies using a question that can be thought-provoking or that asks for more evidence from the hate speaker. In fact, a preliminary analysis showed that this category is quite frequent among operator responses. Finally, each counter-narrative can be labeled with more than one type, making the annotation more fine-grained. Two annotators per language annotated all the counter-narratives independently. A reconciliation phase was then performed for the disagreement cases.
Hate speech sub-topic annotation. We labeled sub-topics of hate content to have an annotation that can be used both for fine-grained hate speech classification and for exploring the correlation between hate sub-topics and counter-narrative types. The following sub-topics were determined for the annotation based on the guidelines used by NGOs to identify hate messages (mostly consistent across languages): CULTURE, criticizing Islamic culture or particular aspects such as religious events or clothes; ECONOMICS, hate statements about Muslims taking European workplaces or not contributing economically to the society; CRIMES, hate statements about Muslims committing actions against the law; RAPISM, a very frequent topic in hate speech, for this reason it has been isolated from the previous category; TERRORISM, accusing Muslims of being terrorists, killers, preparing attacks; WOMEN OPPRESSION, criticizing Muslims for their behavior against women; HISTORY, stating that we should hate Muslims because of historical events; OTHER/GENERIC, everything that does not fall into the above categories.
As before, two annotators per language annotated all the material. Also in this annotation task, a reconciliation phase was performed for the disagreement cases.
Parallel corpus of language pairs. To allow studying cross-language approaches to counter-narratives and, more generally, to increase language portability, we also translated the French and Italian pairs (i.e., hate speech and counter-narratives) into English. Similar motivations can be found in using zero-shot learning to translate between language pairs unseen during training (Johnson et al., 2017). With parallel corpora we can exploit cross-lingual word embeddings to enable knowledge transfer between languages (Schuster et al., 2018).
Dataset Statistics
In total we had more than 500 hours of data collection with NGOs, where we collected 4078 hate speech/counter-narrative pairs; specifically, 1288 pairs for English, 1719 pairs for French, and 1071 pairs for Italian. At least 111 operators participated in the 9 data collection sessions and each counter-narrative needed about 8 minutes on average to be composed. The paraphrasing of hate messages and the translation of French and Italian pairs to English brought the total number of pairs to more than 15 thousand. Regarding the token length of counter-narratives, we observe that there is a consistency across the three languages with 14 tokens on average for French, and 21 for Italian and English. Considering counter-narrative length in terms of characters, only a small portion (2% for English, 1% for French, and 5% for Italian) contains more than 280 characters, which is the character limit per message in Twitter, one of the key SMPs for hate speech research. Further details on the dataset can be found in Table 3.
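A minimal sketch of the descriptive statistics reported above (average token length and the share of counter-narratives exceeding Twitter's 280-character limit); the list contents are placeholders for the actual counter-narratives.

```python
counter_narratives = [
    "Every Muslim is also a potential peacemaker, doctor, philanthropist... "
    "What's your point?",
]

avg_tokens = sum(len(cn.split()) for cn in counter_narratives) / len(counter_narratives)
share_over_280 = sum(len(cn) > 280 for cn in counter_narratives) / len(counter_narratives)
print(f"avg tokens: {avg_tokens:.1f}, share over 280 chars: {share_over_280:.1%}")
```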
Regarding demographics, the majority of responses were written by operators who held a bachelor's or higher degree (95% for English, 65% for French, and 69% for Italian). As shown in Table 4, there is a good balance in responses with regard to declared gender, with a slight predominance of counter-narratives written by female operators for English and Italian (53 and 55 percent, respectively) and a slight predominance of counter-narratives written by male operators for French (61%). Finally, the predominant age bin is 21-30 for English and Italian. Considering the annotation tasks, we give the distribution of counter-narrative types per language in Table 5. As can be seen in the table, there is consistency across the languages: FACTS, QUESTION, DENOUNCING, and HYPOCRISY are the most frequent counter-narrative types. Before the reconciliation phase, the agreement between the annotators was moderate (Cohen's Kappa of 0.55 over the three languages). This can be partially explained by the complexity of the messages, which often fall under more than one category (two labels were assigned in more than 50% of the cases). On the other hand, for hate speech sub-topic annotation, the agreement between the annotators was very high even before the reconciliation phase (Cohen's Kappa of 0.92 over the three languages). A possible reason is that such messages represent short and prototypical hate arguments, as explicitly requested of the NGO trainers. In fact, the vast majority have only one label. In Table 6, we give the distribution of hate speech sub-topics per language. As can be observed in the table, the labels are distributed quite evenly among sub-topics and across languages; in particular, CULTURE, ISLAMIZATION, GENERIC, and TERRORISM are the most frequent sub-topics.
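A minimal sketch of the inter-annotator agreement computation mentioned above, using scikit-learn's Cohen's kappa; the label sequences are illustrative placeholders, not the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

annotator_a = ["FACTS", "QUESTION", "HYPOCRISY", "FACTS", "DENOUNCING"]
annotator_b = ["FACTS", "QUESTION", "FACTS", "FACTS", "DENOUNCING"]

# Kappa corrects raw agreement for the agreement expected by chance.
print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```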
Evaluation
In order to assess the quality of our dataset, we ran a series of preliminary experiments that involved three annotators judging hate speech/counter-narrative pairs along a yes/no dimension.
Augmentation reliability. The first experiment was meant to assess how natural a pair is when coupling a counter-narrative with the manual paraphrase of the original hate speech it refers to. We administered 120 pairs to the subjects for evaluation: 20 were kept as they are, providing an upper bound representing ORIGINAL pairs. In 50 pairs we replaced the hate speech with a PARAPHRASE, and in the 50 remaining pairs we randomly matched a hate speech with a counter-narrative from another hate speech (UNRELATED baseline). Results show that the hate speech and counter-narrative were considered clearly tied 85% of the time in the ORIGINAL condition, 74% of the time in the PARAPHRASE condition, and only 4% of the time in the UNRELATED baseline; this difference is statistically significant (p < .001, χ² test). This indicates that the quality of augmented pairs is almost as good as that of original pairs.
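A minimal sketch of the χ² test used above; the 2×2 counts are reconstructed from the reported percentages (85% of 20 ORIGINAL pairs, 74% of 50 PARAPHRASE pairs) and are illustrative rather than the exact raw data.

```python
from scipy.stats import chi2_contingency

# Rows: ORIGINAL and PARAPHRASE conditions; columns: judged tied / not tied.
table = [[17, 3],    # ORIGINAL:   85% of 20 pairs judged as clearly tied
         [37, 13]]   # PARAPHRASE: 74% of 50 pairs judged as clearly tied

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```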
Augmentation for counter-narrative selection.
Once we had assessed the quality of the augmented pairs, we focused on the possible contribution of the paraphrases to standard information retrieval approaches that have been used as baselines in dialogue systems (Lowe et al., 2015; Mazaré et al., 2018b). We first collected a small sample of natural/real hate speech from Twitter using relevant keywords (such as "stop Islam") and manually selected those that were effectively hate speech.
We then compared two tf-idf response retrieval models by calculating the tf-idf matrix over the following document variants: (i) hate speech and counter-narrative response; (ii) hate speech, its two paraphrases, and counter-narrative response. The final response for a given sample tweet is obtained by finding the highest score among the cosine similarities between the tf-idf vector of the sample and those of all the documents in a model. For each of the 100 natural hate tweets, we then provided two answers (one per approach) selected from our English database. Annotators were asked to evaluate the responses with respect to their relevancy/relatedness to the given tweet. Results show that introducing the augmented data into the tf-idf model provides a 9% absolute increase in the percentage of agreed 'very relevant' responses, i.e., from 18% to 27%; this difference is statistically significant (p < .01, χ² test). This result is especially encouraging since it shows that the augmented data can be helpful in improving even a basic automatic counter-narrative selection model.

Impact of Demographics. The final experiment was designed to assess whether demographic information can have a beneficial effect on the task of counter-narrative selection/production. In this experiment, we selected a subsample of 230 pairs from our dataset written by 4 male and 4 female operators who were controlled for age (i.e., same age range). We then presented our subjects with each pair in isolation and asked them to state whether they would definitely use that particular counter-narrative for that hate speech or not. Note that, in this case, we did not ask whether the counter-narrative was relevant, but whether they would use that given counter-narrative text to answer the paired hate speech. The results show that in the SAMEGENDER configuration (the gender declared by the operator who wrote the message and the gender declared by the annotator are the same), appreciation was expressed 47% of the time, while it decreases to 32% in the DIFFERENTGENDER configuration (the declared genders differ). This difference is statistically significant (p < .001, χ² test), indicating that even if operators were following the same guidelines and were instructed on the same possible arguments to build counter-narratives, there is still an effect of their gender on the produced text, and this effect contributes to the counter-narrative preference in the SAMEGENDER configuration.
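A minimal sketch of the tf-idf response-retrieval baseline described earlier in this section, using scikit-learn; the documents and responses are illustrative stand-ins for the actual pairs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Variant (ii): each document is a hate speech, its paraphrases, and the
# counter-narrative; the retrieved answer is the document's counter-narrative.
documents = [
    "Every Muslim is a potential terrorist. <paraphrase 1> <paraphrase 2> "
    "Every Muslim is also a potential peacemaker, doctor, philanthropist...",
]
responses = [
    "Every Muslim is also a potential peacemaker, doctor, philanthropist... "
    "What's your point?",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(tweet: str) -> str:
    """Return the response of the document most similar to the tweet."""
    sims = cosine_similarity(vectorizer.transform([tweet]), doc_matrix)
    return responses[sims.argmax()]

print(retrieve("Every Muslim is a terrorist"))
```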
Conclusion
As online hate content rises massively, responding to it with counter-narratives as a combating strategy draws the attention of international organizations. Although a fast and effective responding mechanism can benefit from an automatic generation system, the lack of large datasets of appropriate counter-narratives hinders tackling the problem through supervised approaches such as deep learning. In this paper, we described CONAN: the first large-scale, multilingual, and expert-based hate speech/counter-narrative dataset for English, French, and Italian. The dataset consists of 4078 pairs over the 3 languages. Together with the collected data we also provided several types of metadata: expert demographics, hate speech sub-topic and counter-narrative type. Finally, we expanded the dataset through translation and paraphrasing.
As future work, we intend to continue collecting more data for Islam and to include other hate targets such as migrants or LGBT+, in order to put the dataset at the service of other organizations and further research. Moreover, as a future direction, we want to utilize the CONAN dataset to develop a counter-narrative generation tool that can support NGOs in fighting hate speech online, considering counter-narrative type as an input feature.
"Computer Science",
"Linguistics"
] |
Towards Design and Development of a Data Security and Privacy Risk Management Framework for WBAN Based Healthcare Applications
Assuring the security and privacy of data is a key challenge for organizations when developing WBAN applications. The reasons for this challenge include (i) developers have limited knowledge of market-specific regulatory requirements and security standards, and (ii) there is a vast number of security controls with insufficient implementation detail. To address these challenges, we have developed a WBAN data security and privacy risk management framework. The goal of this paper is threefold. First, we present the methodology used to develop the framework, which was developed by considering recommendations from legislation and standards. Second, we present the findings from an initial validation of the framework's usability and of the effectiveness of its security and privacy controls. Finally, we present an updated version of the framework and explain how it addresses the aforementioned challenges.
Introduction
A Wireless Body Area Network (WBAN) application is composed of intelligent, low-power sensor nodes which monitor body functions and physiological states. These sensor nodes can collect and process data, store it locally and transmit it to an actuator or a local server. WBAN based applications collect personal health record (PHR) data, which can provide real-time healthcare monitoring services. A general architecture for WBAN applications is illustrated in Figure 1. A WBAN based healthcare application can provide long-term monitoring of a patient's natural physiological states without constraining their everyday activities. It also helps in the provision of a smart, easily accessible and affordable healthcare system. Additionally, a WBAN based healthcare application can assist with diagnostic procedures and supervised recovery from a surgical procedure, and can handle emergency events [1].
The main design requirements for any WBAN application are that the body sensor nodes need to be extremely small and thin, capable of wireless communication, and use minimal power for data collection and processing [2]. User requirements such as privacy, safety, ease of use, security and compatibility are also of great importance [3]. WBAN applications operate in an environment where many people have open internet access, which leaves them vulnerable and open to many types of attacks and threats [4]. Open connectivity creates a large attack surface. Attacks can affect the performance and availability of the service, sometimes leading to life-threatening situations [4]. Therefore, security and privacy safeguards need to be considered during the development of this type of healthcare application. The goal of this research paper is to present the development of a WBAN data security and privacy risk management framework, and to demonstrate how the framework addresses the challenges faced by developers in assuring the security and privacy of WBAN based healthcare applications. This paper extends a previous study which was presented at the PerCom 2021 conference [6]. The paper is organized as follows: Section 2 presents the various regulations and risk management frameworks for healthcare applications. Section 3 presents the methodology used to develop and validate the framework. Section 4 presents the challenges faced by developers in adopting security and privacy standards, while Section 5 presents the alpha version of the WBAN data security and privacy framework, followed by its implementation within an industrial setting, which is outlined in Section 6. Section 7 presents an overview of the beta version of the framework. Section 8 presents the steps to conduct the security risk assessment at both the requirement analysis and system architecture phases. Section 9 outlines the steps to implement security risk controls, followed by Section 10, which outlines the steps to evaluate the effectiveness of the controls. Section 11 discusses how the framework addresses the challenges. Finally, Section 12 concludes the paper.
Background and Related Work
Data security means ensuring that data are protected while the data are being collected, processed, stored and transmitted. The data confidentiality, integrity and availability (CIA) triad is a common concept to ensure data security. Confidentiality ensures that data are not made available or disclosed to unauthorized individuals or entities. Integrity provides assurance that data are not modified accidentally or deliberately. Availability ensures the reliable accessibility of the system for authorized entities. Data privacy governs how data are collected, shared and used; it also ensures that only authorized persons can access the data [7]. However, data privacy cannot be achieved by securing only personally identifiable information (PII). As PHR data include both PII and patient health record data, privacy needs to be assured for both PII and health record data.
WBAN applications are vulnerable and open to many types of attacks and threats, as the sensor nodes operate in an environment where they use low-powered radio signals for communication. These attacks make the security and privacy of PHR data one of the primary challenges for WBAN systems. We have previously conducted a structured literature review and identified a total of 11 types of attacks on WBAN applications, in addition to identifying 22 security and privacy requirements for WBAN applications [8].
Regulations and Standards
Nowadays, healthcare applications and medical devices need to be compliant with various regulations. Ensuring security and privacy is a vital requirement for compliance with these regulations. This section presents the various regulations from the US and EU markets with their individual security and privacy requirements.
• FDA: The 800 series under Title 21 of the Code of Federal Regulations (CFR) outlines the regulations which govern medical devices within the United States (US). This regulation is enforced by the Food and Drug Administration (FDA). The FDA recognizes that the security and privacy of medical devices is a shared responsibility among stakeholders, including healthcare facilities, patients, healthcare providers, and manufacturers of medical devices [9]. Medical devices should be designed to protect assets and functionality, and to reduce the risk of loss of authenticity, availability, integrity and confidentiality. Title 21 CFR Part 820 (Quality System Regulation) states that the medical device manufacturer needs to employ a cybersecurity risk management program [10]. The aim of the risk management program is to reduce the likelihood of the device functionality being compromised, intentionally or unintentionally, by inadequate cybersecurity. An effective cybersecurity risk management program should address cybersecurity in both the premarket and postmarket phases of the medical device development lifecycle.
• HIPAA: The Health Insurance Portability and Accountability Act of 1996 (HIPAA) was brought forward by the Secretary of the US Department of Health and Human Services (HHS) as a law to enforce regulations governing electronically managed patient information in the healthcare industry, including privacy and security protection of electronic personal health information (e-PHI) [11]. Title II of HIPAA provides five rules: the Privacy Rule, Transactions and Code Sets Rule, Security Rule, Unique Identifiers Rule, and Enforcement Rule. The purpose of these rules is to prevent fraud and abuse within the healthcare system. The Privacy Rule requires implementing different policies and procedures to provide federal protections for personal health information held by covered entities; it also ensures patient rights concerning that information. The Security Rule specifies a series of administrative, physical, and technical safeguards to assure the confidentiality, integrity, and availability of electronically protected health information. The key security and privacy requirements outlined in the HIPAA Security and Privacy Rules are:
- Ensure the confidentiality, integrity, and availability of all PHI while it is created, received, stored and transmitted.
- Identify and protect against reasonably anticipated threats to the security or integrity of the information.
- Protect against reasonably anticipated, impermissible uses or disclosures of the information.
- Perform risk analysis as part of the security management processes.
- Implement technical policies and procedures that allow only authorized persons to access e-PHI (see the access-control sketch after this list of regulations).
- Implement policies and procedures to ensure that e-PHI is not improperly altered or destroyed.
- Implement security measures to guard against unauthorized access to e-PHI.
• EU Medical Device Regulations: The European Medical Device Regulation (EU MDR) ensures high standards of safety, security, and quality of medical devices being marketed within the EU for human use [12]. The EU MDR, comprising Regulations 2017/745 and 2017/746, was published in 2017. The cybersecurity requirements listed in Annex I of the MDR deal with the medical device's premarket and postmarket aspects. The key cybersecurity requirements from the EU MDR are:
- Manufacturers shall establish, implement, document and maintain a risk management system.
- Medical device software should be developed in accordance with state-of-the-art principles of the development life cycle and risk management, including information security, verification and validation.
- Manufacturers shall set out minimum requirements concerning hardware and IT network security measures, including protection against unauthorized access.
- Implement proper safeguards to avoid unauthorized access, disclosure, dissemination, alteration or loss of the information and personal data processed.
- Implement adequate safeguards to ensure the confidentiality of records and the personal data of subjects.
- Implement a proper incident response plan and safeguards for a data security breach, in order to mitigate the possible adverse effects.
• GDPR: The General Data Protection Regulation (GDPR) is a regulation on data protection and privacy for citizens of the European Union (EU) and the European Economic Area [13]. It was introduced in May 2018. The European Commission designed the GDPR to achieve key goals: (1) protect the rights, privacy and freedom of individuals in the EU; (2) reduce the barriers to the free movement of data inside the EU; (3) inform individuals how personal data will be processed and who will be given access; (4) enable individuals to obtain their data and reuse it for their own purposes; and (5) give individuals the right to restrict access to, and erase, their data.
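As flagged in the HIPAA list above, the following is a minimal access-control sketch for the requirement that only authorized persons access e-PHI; the roles and authorization policy are hypothetical assumptions, not prescribed by HIPAA.

```python
# Hypothetical policy: which roles may read e-PHI records.
AUTHORIZED_ROLES = {"physician", "nurse"}

def can_access_ephi(user_role: str) -> bool:
    """Technical safeguard: grant e-PHI access only to authorized roles."""
    return user_role in AUTHORIZED_ROLES

for role in ("physician", "billing_clerk"):
    decision = "granted" if can_access_ephi(role) else "denied"
    print(f"{role}: access {decision}")
```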
Adoption of the following standards can help an organization achieve regulatory compliance from a security and privacy perspective:
• FDA premarket and postmarket guidelines: The FDA provides premarket and postmarket guidelines for organizations and developers that need to be considered during the development lifecycle of a medical device or healthcare application. The premarket guidance [14] outlines the following key security and privacy recommendations for medical device manufacturers (a minimal risk-scoring sketch follows this list of standards):
- Employ a risk-based approach to the design and development of medical devices with appropriate cybersecurity protections.
- Take a holistic approach to device cybersecurity by assessing risks and mitigations throughout the product's lifecycle.
- Identify the assets, threats, and vulnerabilities.
- Perform an impact assessment of the threats and vulnerabilities on device functionality and end-users.
- Assess the likelihood of a threat and of a vulnerability being exploited.
- Determine the risk levels and suitable mitigation strategies.
As cybersecurity risks to medical devices are continuously evolving, it is impossible to mitigate the risks through premarket controls alone. Therefore, the FDA provides the following key guidance for manufacturers as part of postmarket medical device development [15]:
- Monitor cybersecurity information sources for the identification and detection of cybersecurity vulnerabilities and risks.
- Identify and assess the threats and vulnerabilities being exploited.
- Take a holistic approach to detecting and assessing threat sources.
- Establish a communication process for incident response.
- Design a verification and validation process for software updates and patches used to remediate vulnerabilities.
• IEC 62304: IEC 62304 provides guidelines for each stage of the medical device software lifecycle, with the activities and tasks required for the safe design and maintenance of medical device software [16]. This standard is recognized by the FDA, the EU and other regulatory agencies across the world. IEC 62304 recommends that organizations establish and maintain a risk management process to manage security risks. The process should provide a methodology to identify vulnerabilities, evaluate the associated threats, and implement risk controls to mitigate these threats. Finally, the process should also monitor the effectiveness of the risk controls.
• NIST 800-53: The NIST 800-53 standard provides security and privacy controls to protect applications, data, assets and organizations from a diverse set of attacks, threats and risks [17]. These controls can be employed as safeguards to assure the confidentiality, integrity, and availability of information while it is processed, stored and transmitted.
• ISO 27002: ISO 27002 is an information security standard developed by the International Organization for Standardization (ISO) which provides best practice recommendations and information security controls to assure the confidentiality, integrity, and availability of data [18]. This standard aims to guide organizations in selecting, implementing, and managing controls to minimize security risk.
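As flagged in the FDA premarket list above, a minimal sketch of combining likelihood and impact into a risk level; the 1-3 scales and thresholds are assumptions, since the guidance does not prescribe specific scoring.

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Combine likelihood and impact (each on an assumed 1-3 scale)."""
    score = likelihood * impact
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# e.g., a vulnerability that is moderately likely to be exploited but
# would severely impact device functionality and end-users.
print(risk_level(likelihood=2, impact=3))  # -> "high"
```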
Risk Management Frameworks
This section presents two risk management frameworks, IEC 80001-1:2010 and AAMI TIR57, which are widely used when developing healthcare applications. It also outlines why they are not directly applicable to WBAN applications, even though they are specific to healthcare applications.
• IEC 80001-1:2010: IEC 80001-1 (Application of risk management for IT-networks incorporating medical devices) was introduced in 2010 to address the risks associated with medical devices when connected to IT networks [19]. The framework aims to help organizations define the risk management roles, responsibilities, and activities needed to achieve medical device safety and security. IEC/TR 80001-2-2 [20] is a technical report that provides background processes to address security-related capabilities when connecting medical devices to IT networks.
• AAMI TIR57: AAMI TIR57 provides guidance for manufacturers to perform information security risk management to address security risks within medical devices [21].
Methodology
This section presents the methodology used to develop a data security and privacy risk management framework for WBAN. The methodology comprised four key stages, as illustrated in Figure 2.
Identify and Analyse the Healthcare Regulations and Standards for Security and Privacy Requirements
The goal of this step was to identify and analyze the security and privacy recommendations provided by the various healthcare-related regulations and standards. The scope was limited to regulations that apply in the US and Europe. The approach taken for the identification and analysis was as follows:
• The Regulated Software Research Centre, of which the authors are members, is widely recognized for its research in the medical device regulatory world. Its members provided advice on the applicable regulations and standards. The legislative portal websites of the respective regions were also checked to identify the regulations. This resulted in a total of four regulations: the FDA's Code of Federal Regulations for medical devices, HIPAA, the EU MDR and the GDPR.
• The resultant four regulations were analyzed to extract the security and privacy requirements for developing healthcare applications. The regulations, along with their respective security and privacy requirements, are detailed in Section 2.1 above.
• Additionally, a snowballing approach was taken when reviewing each regulation to identify the security and privacy standards. Through this snowballing approach and guidance from members of the Regulated Software Research Centre, the following standards were identified as applicable: the FDA's premarket and postmarket guidelines, IEC 62304, NIST 800-53 and ISO 27002.
• The resultant five standards were analyzed to extract the security and privacy requirements. These security and privacy requirements are detailed in Section 2.1 above.
Identify and Analyse the Healthcare Security and Privacy Risk Management Frameworks
The goal of this step was to identify and analyze the risk management process recommended by the regulations and standards identified in the previous section (Section 3.1) to manage security and privacy risks throughout the development lifecycle of medical devices and healthcare applications. The risk management frameworks were analyzed to check whether they were applicable to the development of WBAN based healthcare applications. The approach taken during the identification and analysis process was as follows:
• Review the regulations and standards identified in the previous section (Section 3.1) for references to security and privacy risk management frameworks. The review resulted in a total of four risk management frameworks: ISO/IEC 80001-1:2010, AAMI TIR57, ISO 14971 and NIST 800-30.
• Analyze the risk management frameworks to identify which of them are specific to developing healthcare-based applications. An initial analysis found that only two of these four frameworks were healthcare-specific security and privacy risk management frameworks: ISO/IEC 80001-1:2010 and AAMI TIR57. Details of these risk management frameworks are outlined in Section 2.2. ISO/IEC 80001-1:2010 and AAMI TIR57 were selected for further analysis to identify whether they are applicable to developing WBAN based healthcare applications. It was found that neither of these frameworks was suitable for developing WBAN applications. The reason for their unsuitability is presented at the end of Section 2.2.
Identify the Challenges for Assuring WBAN Data Security and Privacy
The goal of this step was to identify the challenges faced by developers in assuring data security and privacy for WBAN based healthcare applications while complying with regulations. A two-step process was utilized to identify the challenges. The first step involved a literature review, while the second step involved an interview with the Chief Technology Officer (CTO) and the tech lead of an organization that develops a WBAN based fitness tracking application. The findings from the literature review and interview have been published in [24] and are summarised in Section 4. The following steps were utilized by the lead author of this paper to conduct the literature review:
• Conduct a search on IEEEXplore, ScienceDirect and Google Scholar using the search string "healthcare AND (security OR privacy) AND (standard OR regulation OR compliance) AND (barrier OR challenges OR difficulties)".
• Set inclusion criteria as follows: (1) the paper presented the challenges for assuring security and privacy of healthcare applications that comply with regulations; (2) publication year 2010-2020; (3) language is English and full text is available.
• The initial search resulted in a total of 320 research papers.
• In the first screening, each paper was analyzed by reviewing the abstract and conclusion. If the paper addressed any challenges, it was selected for the second screening. A total of 125 papers out of 320 were selected for the second screening.
• In the second screening, each paper was analyzed by reading the full text and checking whether the paper presented any challenges for assuring security and privacy of healthcare applications that comply with regulations. The second screening resulted in a total of 19 papers out of 125.
• Finally, a list of challenges was recorded from those papers, which is presented in Section 4.
Develop the Proposed Security and Privacy Framework
The following steps were used to develop the security and privacy risk management framework:
• Identify the possible threats and vulnerabilities of a WBAN based healthcare application by conducting threat modeling.
• Review the report from the threat modeling to identify the respective control(s) for each threat and vulnerability.
• Develop the implementation details for these controls (presented in Section 5.2).
• Validate the effectiveness of the controls by implementation in an industrial setting. This is outlined in Section 6.
• Gather recommendations and suggestions for improvement of the alpha version from the organization that conducted the implementation. This is outlined in Section 6.5.
Each of the suggestions was then reviewed by the authors of this paper. All the suggestions were considered, and appropriate action was taken during development of the beta version. For example, the developer suggested identifying the threats and vulnerabilities at the requirements analysis phase in order to produce the security and privacy requirements. To address this suggestion, a security risk assessment step was designed to be conducted in both the requirements analysis and the system architecture phases (presented in Section 8.3). Sections 7-10 present the detailed steps and implementation process of the beta version of the framework.
Challenges
The list of challenges from both the literature review and interview is presented in Table 1. The second column indicates whether the challenges were identified by literature review or by interview, or indeed by both literature review and interview.
Table 1. Challenges and their sources.

Challenge | Source
Lack of comprehensive understanding of the architecture for WBAN security and privacy | Interview
Understanding the data flow around the system and what assets need to be protected | Interview
Standards outline each security control at a very high level with a limited amount of implementation details | Literature & Interview [27,33]
Identification of appropriate security controls with respective implementation details to ensure CIA and privacy of data |
Due to a vast number of controls, the challenge is prioritizing these controls in addition to planning releases without compromising security and privacy | Interview
Lack of security mechanisms for sensor device nodes connected to wireless networks, which are often limited by physical memory, computational power and storage | Literature & Interview [37,38,42,43]
Data Security and Privacy Framework (Alpha Version)
The alpha version of the data security and privacy framework consists of the following key stages:
• Identification of possible threats and vulnerabilities.
• Implement controls to protect the application against those threats and vulnerabilities.
• Evaluate the effectiveness of the controls.
The remainder of this section describes each stage (parts 1, 2 and 3), and also outlines how the framework should be used (part 4).
Identification of Possible Threats and Vulnerabilities
A structured process is required to examine how vulnerable an application is, and which types of attack can be launched to compromise the application. Threat modelling is a widely recognised process for identifying the possible threats to an application and is considered a significant step in assuring security. Threat modelling activities start with defining the scope and data flow of the application. There are several tools and methods available to conduct threat modelling, such as STRIDE, LINDDUN, the Process for Attack Simulation & Threat Analysis (PASTA), and Trike.
Implement Controls to Protect the Application against Those Threats and Vulnerabilities
One of the key stages in the development of this framework was to identify appropriate WBAN security and privacy controls, with implementation details, to mitigate the risks. The controls were identified by considering the potential security and privacy weaknesses of WBAN application ecosystems and mapping them against controls from the standards. Both ISO 62304 and AAMI TIR57 recommend considering the security capabilities outlined in ISO/IEC 80001-2-2 while developing security and privacy requirements. Therefore, the ISO/IEC 80001-2-2 standard was selected as the primary standard for developing the data security and privacy guidelines. To identify appropriate security controls and to develop the implementation details for each control, the three-step process illustrated in Figure 3 was followed.
Control Collection
The ISO/IEC 80001-2-2 technical report provides 19 security capabilities with high-level details for Health Delivery Organizations (HDOs) and Medical Device Manufacturers (MDMs), but this technical report does not provide any security control implementation details. The ISO/IEC 80001-2-8 [44] technical report guides the establishment of the security capabilities identified in ISO/IEC 80001-2-2. ISO/IEC 80001-2-8 also provides security controls from other standards such as NIST 800-53, ISO 27002 [18], and ISO 27799 [45]. These controls help HDOs and MDMs to implement each capability identified in ISO/IEC 80001-2-2. In this step, all the controls for the respective security capabilities were collected for further analysis. Appropriate controls were then selected using exclusion criteria and a review process, which is described in the next step.
Control Selection
Each control was mapped to the WBAN security and privacy requirements that the authors had previously identified through a literature review, which is presented in [8]. Controls were then selected by excluding controls that related to: (1) Business operation, (2) Organizational facilities, (3) Management operation, (4) Offices, rooms and facilities, (5) Human resource security, (6) Personal security and (7) Network cabling. The controls related to security and privacy requirements such as access control, authorization, cryptography, key management, non-repudiation and intrusion detection are included.
Development of Security Control Implementation Details
As stated earlier, ISO/IEC 80001-2-8 refers to other standards such as NIST 800-53, ISO 27002 or ISO 27799 for implementation guidelines. Each control's implementation details were extracted from the respective standards for review. A review team was set up, composed of the lead author of this paper, a tech lead and a senior developer from Company A. During the review process, each control's implementation details were checked to determine whether they contained enough detail for developers to implement the control. If the implementation details were not adequate, further details were selected from other sources. Other sources included standards or technical reports as detailed in Figure 3, OWASP guidelines, blogs, websites and scientific research papers. For example, ISO/IEC 80001-2-8 proposes the use of a key management process as a risk control to generate, distribute and revoke cryptographic keys. To achieve this, the standard refers to Section 10.1.2 of ISO 27002 for further details. Section 10.1.2 of ISO 27002 provides very high-level and generic details about a key management process, and does not provide any information about how a key will be generated or how it will be transferred from the mobile application to the sensor device. ISO 27002 in turn refers to another standard, ISO/IEC 11770 [46], for further details about key management; however, ISO/IEC 11770 only outlines the details of key generation and not of key transfer. As this example shows, a developer needs to review three different standards to find implementation details for key management. A goal of this framework is therefore to provide implementation details for each security and privacy control. As an example, implementation details for key management, which a developer can quickly adopt, are outlined in Appendix B.
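To illustrate the level of detail the framework aims for, the following is a minimal sketch of such a key management flow, assuming Python with the cryptography package: an ephemeral Diffie-Hellman (X25519) agreement between the mobile application and the sensor device, followed by HKDF key derivation and AES-GCM encryption. The names and parameter choices are illustrative assumptions, not the exact content of Appendix B.

```python
# Sketch of the key management flow discussed above: ephemeral X25519
# key agreement, HKDF key derivation, then AES-GCM protection of a
# sample sensor reading. Assumes the 'cryptography' package.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 1. Key generation: each party creates an ephemeral key pair.
app_priv = X25519PrivateKey.generate()
sensor_priv = X25519PrivateKey.generate()

# 2. Key transfer: only the public halves cross the Bluetooth link.
shared_app = app_priv.exchange(sensor_priv.public_key())
shared_sensor = sensor_priv.exchange(app_priv.public_key())
assert shared_app == shared_sensor  # both sides derive the same secret

# 3. Derive a 128-bit AES session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"wban session key").derive(shared_app)

# 4. Protect a sample sensor reading with AES-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"heart_rate=72", None)
```

Using ephemeral key pairs means a fresh session key per pairing, so the compromise of one session does not expose earlier traffic.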
Evaluate the Effectiveness of the Controls
To evaluate the effectiveness of the controls an assessment needs to be conducted on the application. This assessment will help to identify to what degree the application will assure the security and privacy of the PHR data. According to NIST 800-53, vulnerability scanning and/or penetration testing can be used as part of the assessment process. An organization can conduct an assessment by forming a team of people within the organization who have technical expertise in conducting an assessment. Additionally, an organization can also onboard external resources to conduct the assessment, for example security consultants.
Implementation Process
The implementation of the data security and privacy framework commences by defining the scope and the WBAN application use-cases. The developer then needs to convert the proposed use-cases into a data flow diagram, which will be used as input for the threat identification process. As discussed in the threat identification section, a threat modelling technique can be used as part of the threat identification process. The threat modelling will produce a list of threats and vulnerabilities for the application. After that, the developer needs to identify the controls provided by the framework to mitigate the threats and vulnerabilities. If a control is not available in the framework, the developer needs to find the control's implementation details from the standards or external sources and update the existing security and privacy guidelines. Once the control is selected, the developer needs to implement it. Finally, penetration testing needs to be conducted upon completion of the development. If the penetration test fails, the reason for the failure needs to be reviewed. A penetration test can fail either because a control was not implemented as outlined in the framework, or because a new threat is identified. If the test failed because a control was not implemented properly, the developer needs to implement the control as presented in the framework. If any new threats are identified during the penetration testing, the developer needs to find the respective security and privacy controls from the standards or external sources and implement them. Figure 4 illustrates the implementation process of the data security and privacy framework.
Validation within an Industrial Setting
The purpose of this section is to demonstrate the validation of the security control implementation details provided in the WBAN risk management framework. This validation was achieved through implementation of the framework within Company A (an Irish WBAN development company). In this section we outline the results of threat modelling, which was conducted on Company A's WBAN application, along with the security controls which were implemented as a result of vulnerabilities identified through the threat modelling. Finally, the results of a penetration test are presented. The penetration test was conducted in order to verify to what degree the controls assure security and privacy of the WBAN fitness tracking application.
Scope and Application Use-Case
The FitnessX app is the first consumer product for Company A following on from the success of the core product for professional sports teams. The product uses a physical activity monitor, known as a pod, which uses GPS and a series of sensors to track an athlete's activity during training and gameplay, and relay this information to the app running on either iOS or Android over Bluetooth. In the app, users can sign up for an account and pair their device, before tracking sessions and syncing this data to the cloud. Sessions generate statistics and analysis which can be used by the individual to track their performance and they can choose to share some of their data in a global leader-board. They can also create mini private or group leagues to use the same leader-board functionality among a closed group of individuals.
Develop Data Flow Diagram
A data flow diagram (DFD) is used to provide an overview of the application and graphically represent the flow of the data through an information system or application. A DFD can also provide insight about input and output of data, how data will flow and where it will be stored in an application. There are several levels of DFDs that can be drawn for an application. These are categorised based on the level of complexity. Increasing the level of a DFD increases the complexity. Level '0' and Level '1' are widely used levels of DFD.
Apply Threat Modelling
STRIDE is a widely recognized threat modelling technique for web-based applications. It was developed by Microsoft, which also provides an open-source tool named the Microsoft Threat Modelling Tool (TMT). This tool includes a graphical interface for conducting threat modelling. Using the graphical interface, a user can easily design the data flow diagram, configure the necessary parameters and track each threat with its respective implementation status. Conducting threat modelling using this tool is carried out in three steps:
• Design and configuration.
• Generate the threat report.
• Identify the security controls by analyzing the report.
The design and configuration step starts by drawing the Data Flow Diagram (DFD). This DFD is enhanced by adding the proper data flows, data stores, processes, interactors, and trust boundaries. Each DFD element's properties are configured based on the respective element's behaviour. For example, the device attribute properties are configured by setting "Yes" to GPS data, store log data, encrypted, write access, removable storage and backup. After that, the DFD elements are connected by defining the proper connectivity attributes. The connectivity attribute is set to "Bluetooth" from the device to the iOS and Android mobile apps, and from the mobile app to the REST API it is set to "Wi-Fi". The connection from the REST API to the non-relational database is configured as "wired", as both are deployed in cloud infrastructure. Finally, a trust boundary is configured to enable the trust level between DFD elements for data exchange. Figure 5 illustrates the application's updated DFD. One of the key features of the Microsoft TMT tool is the ability to generate a threat report based on the DFD and element attributes. The threat report consists of a list of threats, threat categories, data flow directions and respective descriptions. Table 2 illustrates some sample threats and vulnerabilities with their respective descriptions; a code-based alternative to the GUI workflow is sketched after Table 2.
Table 2. Sample threats and vulnerabilities with their descriptions.

Vulnerability | Description
The device data store could be corrupted | Data flowing across iOS_to_S_Response may be tampered with by an attacker. This may lead to corruption of the device. Ensure the integrity of the data flow to the data store.
Potential weak protections for audit data | Consider what happens when the audit mechanism comes under attack, including attempts to destroy the logs. Ensure access to the log is through channels which control read and write separately.
Potential data repudiation by REST API | REST API claims that it did not receive data from a source outside the trust boundary. Consider using logging or auditing to record the source, time, and summary of the received data.
Weak authentication scheme | Custom authentication schemes are susceptible to common weaknesses such as weak credential change management, credential equivalence, easily guessable credentials and null credentials.
Potential lack of input validation for REST API | Data flowing across Android_to_API_Request may be tampered with by an attacker. This may lead to a denial of service (DoS) attack against REST API, an elevation of privilege attack against REST API, or an information disclosure by REST API.
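For teams that prefer threat modelling as code over the TMT GUI, the same style of DFD can be sketched with OWASP's pytm library, as in the hedged example below; the element names mirror Figure 5 but are assumptions, not Company A's actual model.

```python
# Threat-modelling-as-code sketch of the DFD described above, using
# OWASP pytm. Element names are illustrative.
from pytm import TM, Actor, Server, Datastore, Dataflow, Boundary

tm = TM("FitnessX DFD")
tm.description = "WBAN fitness tracking application"

internet = Boundary("Internet")

athlete = Actor("Athlete")
mobile_app = Server("iOS/Android mobile app")
rest_api = Server("REST API")
rest_api.inBoundary = internet
database = Datastore("Non-relational database")

# Data flows mirror the connectivity attributes set in the TMT tool.
pairing = Dataflow(athlete, mobile_app, "Pair pod over Bluetooth")
sync = Dataflow(mobile_app, rest_api, "Sync session over Wi-Fi")
sync.protocol = "HTTPS"
persist = Dataflow(rest_api, database, "Store session (wired, in-cloud)")

tm.process()  # e.g. run `python model.py --report <template>` for threats
```

Like the TMT report, pytm matches the declared elements and flows against its built-in threat catalogue, so the model lives in version control alongside the code it describes.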
The description of each threat helps to identify the appropriate security controls. After exporting the threat report from the TMT tool, each threat needs to be reviewed to identify appropriate controls. During the review process, each threat's description, threat type and data flow interaction need to be considered. In some cases, if a threat entry does not contain enough description of the threat, the threat category is used to select a control as a countermeasure. Table 3 outlines a snapshot of the list of controls for mitigating the vulnerabilities; for example, one control reads "Maintain a list of commonly used, expected, or compromised passwords and update the list when passwords are compromised directly or indirectly" (a sketch of this control follows).
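As a concrete illustration of that quoted control, a minimal Python sketch might screen candidate passwords against a maintained blocklist file; the file name and format here are assumptions, not part of Table 3.

```python
# Minimal sketch of the password-screening control quoted above: reject
# passwords found on a maintained blocklist of common or compromised
# passwords. BLOCKLIST_PATH and the one-password-per-line format are
# illustrative assumptions.
BLOCKLIST_PATH = "compromised_passwords.txt"

def load_blocklist(path: str = BLOCKLIST_PATH) -> set[str]:
    """Load the blocklist into a set for O(1) membership checks."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_password_allowed(password: str, blocklist: set[str]) -> bool:
    # Compare case-insensitively; a real deployment would also refresh
    # the list whenever a breach or compromise is reported.
    return password.lower() not in blocklist
```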
Evaluate the Effectiveness of the Controls
The goal of this stage is to evaluate the effectiveness of the controls implemented to mitigate the threats and vulnerabilities. To carry out this evaluation, a penetration test was conducted with the help of a third-party penetration service provider.
Scope of the Testing
The scope of the testing defines which networks, applications, databases, accounts, people, physical security controls and assets will be attacked during the testing. Here, the sensor device, mobile application, database, and the respective communication media were set as the scope for the testing. Furthermore, a combination of manual and automated tools was used to exploit the system.
Testing Tools
As discussed in the previous section, penetration testing can be conducted using a combination of manual and automated tools. Table A3 in Appendix C illustrates some of the automated tools used during the penetration testing; the hedged sketch below gives a flavour of the kind of automated probing such tools perform.
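A minimal sketch, assuming Python with the requests library: send a malformed payload to an endpoint and flag hangs or leaked stack traces, in the spirit of the Table A3 tools. The URL and trace markers are placeholders, and this is not one of the tools actually used.

```python
# Toy automated probe: post malformed data and report endpoints that
# time out (potential DoS points) or leak stack traces in responses.
import requests

TRACE_MARKERS = ("Traceback (most recent call last)", "at java.", "stack trace")

def probe(url: str) -> dict:
    try:
        resp = requests.post(url, data=b"\x00\xff malformed", timeout=10)
    except requests.Timeout:
        # Mirrors the 10 s timeout criterion used in the test report below.
        return {"url": url, "timed_out": True, "leaks_trace": False}
    leaks = any(marker in resp.text for marker in TRACE_MARKERS)
    return {"url": url, "timed_out": False,
            "status": resp.status_code, "leaks_trace": leaks}

if __name__ == "__main__":
    print(probe("https://api.example.com/sessions"))  # placeholder URL
```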
Penetration Test Result
The penetration tests identified two different types of vulnerabilities. Along with the test result, the penetration service provider also included recommendations on how to mitigate the vulnerabilities. The vulnerabilities, along with the mitigation recommendations and actions taken, were as follows:
• Potential denial of service points: During testing, four potential DoS points were found. These are requests that time out within 10 s due to malformed data inside the payload. They can be run multiple times in multiple threads, driving up usage and putting stress and strain on the service. Recommendation: It was advised that the API endpoints' backend code should handle potentially malformed data gracefully through input validation. Additionally, a proper HTTP response is needed if an API endpoint fails to process a request, so that the user can retry the request later. Action: Input validation was added to validate the input data stream. Additionally, an error response code was added to notify the user that the API endpoint was unable to process the malformed input data.
• Security misconfiguration (stack traces enabled): During testing, it was discovered that stack traces were enabled for some API endpoints. Recommendation: It was advised to turn off stack traces for all endpoints and to use a code review process to detect this coding error during development. Action: Stack traces were disabled for all endpoints and the exceptions were instead written into a log file for auditing.
After making the necessary changes in the codebase to address the issues found during the penetration testing, the update was shared with the penetration service provider.
A retest of the updated application was conducted, and the service provider was unable to reproduce these vulnerabilities.
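The input-validation remediation described above could look like the following hedged sketch, assuming a Python REST API built with FastAPI; the endpoint, field names and error policy are illustrative, not Company A's actual code.

```python
# Sketch of the DoS remediation: validate the payload shape up front and
# return an explicit, retryable error instead of stalling on bad input.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

class TransientStoreError(Exception):
    """Stand-in for a temporary datastore failure."""

def persist(payload: "SessionUpload") -> None:
    pass  # stand-in for the real datastore write

class SessionUpload(BaseModel):
    device_id: str = Field(min_length=1, max_length=64)
    samples: list[float]

@app.post("/sessions")
def upload_session(payload: SessionUpload):
    # FastAPI/pydantic reject malformed JSON with a 422 before this runs.
    if len(payload.samples) > 10_000:
        raise HTTPException(status_code=413, detail="Payload too large")
    try:
        persist(payload)
    except TransientStoreError:
        # Tell the client the request can be retried later.
        raise HTTPException(status_code=503, detail="Try again later",
                            headers={"Retry-After": "30"})
    return {"status": "ok"}
```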
Suggestions
Suggestions for improvement to the framework, received from the developer and the penetration test service provider, are described below.
• Identify threats and vulnerabilities at the requirements analysis phase to produce security and privacy requirements.
• A guideline for system architecture review would be useful to check whether the minimum security and privacy requirements are taken into consideration.
• A risk evaluation process would be helpful to identify the severity level of the identified threats and vulnerabilities.
• A risk treatment process would be useful to identify the risks which require controls to mitigate.
• A code review process during the controls' implementation would help to minimize coding errors.
• Conduct unit testing during the implementation phase to identify whether each control is implemented properly.
By considering the above suggestions, the beta version of the framework was developed which is presented in Section 7.
Overview of the Data Security and Privacy Risk Management Framework (Beta Version)
ISO 62304 is a widely known standard which provides guidelines for developing healthcare applications [16]. This standard states that organizations need to implement a risk management process while developing healthcare software in order to assure security and privacy. ISO 62304 refers to AAMI TIR57 for managing security and privacy risks during development. The framework proposed in this paper is based on the guidelines provided by AAMI TIR57. Furthermore, the guidance on security and privacy activities in the healthcare application lifecycle provided by IEC 80001-5-1 was also taken into consideration. The framework consists of three different stages: (1) Security and privacy risk assessment, (2) Security and privacy risk controls and (3) Evaluation of overall residual security and privacy risk acceptability. These stages are similar to AAMI TIR57, but are differentiated as follows:
• AAMI TIR57 does not clearly define how to conduct the security and privacy risk assessment at both the requirements analysis and the system architecture phases. This framework provides the steps to conduct a security and privacy risk assessment at both phases. Additionally, the framework provides a list of assets, threats and vulnerabilities which are specific to WBAN applications, which can be used as a starting point for conducting risk analysis.
• AAMI TIR57 does not provide any design review guidelines at the system architecture phase. The proposed framework adds the design review guidelines recommended by IEC 80001-5-1.
• AAMI TIR57 does not include risk treatment to identify unacceptable risks which require controls to mitigate. This framework provides risk treatment steps as part of the risk assessment.
• This framework also includes a mapping of possible threats and vulnerabilities to respective controls, along with implementation details for the controls.
• This framework provides steps and tools to conduct in-house vulnerability scans and penetration testing.
The framework takes the initial product requirements as an input but does not perform any validation or verification of the quality of the product requirements. To develop quality product requirements, the guidelines provided by ISO/IEC 62304 can be utilized. To implement this framework, an organization needs to assemble a team. Table 4 outlines the respective tasks of each role related to the implementation of the framework. In the case of limited resources in an organization, a single person can carry out multiple roles and conduct more than one task. The three stages of the beta version of the framework are outlined as follows: Section 8 presents the steps to conduct the security and privacy risk assessment at both the requirements analysis and system architecture phases; Section 9 outlines the steps to implement risk controls; finally, Section 10 outlines the steps to evaluate the effectiveness of the controls.
Security and Privacy Risk Assessment
The security and privacy risk assessment helps to identify, analyze and evaluate potential security risks. This assessment helps an organization to make decisions about which risks require controls. Based on the recommendations of ISO 62304 Clauses 5.2 and 5.3, this framework conducts risk assessment at the requirements analysis and system architecture phases of the development lifecycle.
The security and privacy risk assessment is divided into two key stages: (1) risk analysis and (2) risk evaluation and treatment. The risk analysis stage aims to identify the assets, threats, vulnerabilities and adverse impacts on an application. To assist with the security risk analysis, an organization may use relevant information obtained from a previous risk analysis of a similar type of product as a starting point. The degree to which data from previous analyses can be reused depends on how much the applications differ from a security perspective. The risk evaluation and treatment stage identifies the acceptable risks and the unacceptable risks which require controls to mitigate.
Define Scope and Purpose
Before conducting the security and privacy risk assessment, organizations need to define and document the purpose and scope of the assessment. The scope will include:
• The intended use.
• Initial product requirements.
• Operating environment of the application.
• List of team members presented in Table 4 who will conduct the risk assessment.
• Timeline for the security and privacy risk assessment.
Risk Assessment Approach
There are three different risk assessment approaches: qualitative, quantitative and semi-quantitative. A qualitative assessment approach uses subjective values with a scale of qualifying attributes (e.g., Very Low, Low, Medium, High, Very High) to describe the impact and likelihood of the potential consequences of threats and vulnerabilities. The values of the impact and likelihood depend on the experience, expertise and competence of the person conducting the risk assessment. The qualitative assessment approach is easy and less time consuming to perform compared to the quantitative and semi-quantitative approaches, as it does not require any special tools or methods.
Quantitative risk assessments use a scale with numerical values based on a set of mathematical methods, rules and historical incident data. This approach is usually expressed in monetary terms, reflecting the amount of money an organization may lose over a time period if a threat event occurs or a vulnerability is exploited. The quality of the analysis depends on the accuracy of the numerical values, the historical incident data and the validity of the methods used. A semi-quantitative risk assessment provides an intermediate level between the qualitative and quantitative approaches. To evaluate a security risk using a semi-quantitative approach, bins (e.g., 0-4, 5-20, 21-79, 80-95, 96-100) and scales (e.g., 1-10) are used, combining the textual evaluation of a qualitative risk assessment with the numerical evaluation of a quantitative one. The values of the bins and scales help to communicate the risk to decision-makers as well as to perform a relative comparison of risks. This approach does not require the same level of skill, tools, mathematical methods and historical incident data as a quantitative risk assessment.
All three approaches have advantages and disadvantages. A quantitative risk assessment requires historical data to determine the likelihood of a threat event occurring or a vulnerability being exploited. Historical data that has not been recently updated may add additional error to the risk assessment. Furthermore, it is difficult to calculate the cost of organizational reputational damage, loss of competitive advantage and harm to user health if a threat event occurs or a vulnerability is exploited. For these reasons, the quantitative approach is not appropriate for information security and privacy risk assessment. This framework therefore uses the qualitative and semi-quantitative assessment approaches for evaluating risk.
Security and Privacy Risk Assessment at the Requirements Analysis Phase
The objective of conducting a security and privacy risk assessment at the requirement analysis phase is to identify the risks, evaluate the identified risks, apply risk treatment to identify the risks which will require controls to mitigate and develop the security and privacy requirements. The initial product requirements and risk assessment approach will be taken as an input to conduct the security and privacy risk assessment at this phase. Figure 6 illustrates the steps to conduct a risk assessment at the requirements analysis phase.
Below is the list of key tasks to be conducted during the risk assessment at the requirements analysis phase:
• Apply risk analysis to identify the risks.
• Evaluate each risk to identify the acceptable and unacceptable risks.
• Update the list of security and privacy requirements for unacceptable risks.
Risk Analysis
As part of the risk analysis, the following four tasks need to be conducted. Of these, the tasks "identify and document threats" and "identify and document vulnerabilities" can be performed in either order.
Identify and Document the Assets
Assets of a WBAN application include sensor devices, information collected by the sensor devices, and server instances which are used to process and store the data. If the application interfaces with any external services such as third-party libraries or third-party application services, these also need to be taken into consideration. The assets will be documented in the security and privacy risk assessment report, along with the date that the assets were identified, and the name of the persons with their role as presented in Table 4. Figure 7 illustrates the list of assets for general WBAN applications which can be used as a starting point.
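A minimal sketch of such an asset register, assuming Python dataclasses; the categories follow the general WBAN asset list referenced in Figure 7, while the entries and field names are illustrative.

```python
# Toy asset register capturing what the report requires: the asset, the
# date it was identified, and who identified it (name with role).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Asset:
    name: str
    category: str                 # e.g. sensor device, information, server
    identified_on: date
    identified_by: list[str] = field(default_factory=list)  # "name (role)"

register = [
    Asset("GPS pod", "sensor device", date(2021, 7, 30),
          ["T. Lead (technical lead)"]),
    Asset("Session data", "information", date(2021, 7, 30),
          ["P. Owner (product owner)"]),
    Asset("REST API server instance", "server", date(2021, 7, 30),
          ["S. Architect (software architect)"]),
]
```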
Identify and Document Threats
To identify threats, the assessor team, comprised of the technical lead, software architect, product owner, and senior software engineer, needs to perform the following steps:
• Using Table A1 in Appendix A, select the threats related to the assets identified in the previous section.
• Check up-to-date external sources for newly published threats, as the threat landscape is constantly changing (the sources cited here were accessed in July 2021). Each newly discovered threat needs to be analyzed by studying the threat description, threat agents and possible attack scenarios, and by checking whether the same attack scenario can occur within the WBAN application. If a threat is applicable to WBAN applications, the assessor team needs to identify the assets which will be affected if the threat occurs.
• Document the following in the security and privacy risk assessment report: the list of threats and the respective affected assets; the date when the threat identification was conducted; and the name and role of the person who conducted the threat identification.
Identify and Document the Vulnerabilities
To identify vulnerabilities, the assessor team needs to perform the following steps:
• Review the list of vulnerabilities presented in Table A1 in Appendix A and select those which are related to the identified assets.
• As the vulnerability landscape is constantly changing, the team needs to check various sources such as the OWASP IoT Top 10 (https://wiki.owasp.org/index.php/OWASP_Internet_of_Things_Project, accessed on 30 July 2021) and the OWASP Mobile Top 10 (https://owasp.org/www-project-mobile-top-10/, accessed on 30 July 2021). During the review of a newly discovered vulnerability, the team needs to review its common security weaknesses and possible threat scenario sections, in order to check whether the vulnerability can be exploited by any threat and affect any assets.
• Finally, the assessor team documents all the vulnerability details, the name and role of the person involved, and the date when the vulnerability identification process was conducted in the security and privacy risk assessment report.
Identify and Document the Adverse Impacts
An adverse impact of a security breach can be described in terms of loss or degradation of confidentiality, integrity, availability and privacy of data. TIR57 outlines a set of questions to identify the adverse impact. This framework has extended those questions by the addition of point 4 below:
1. What is the impact if that asset's confidentiality is compromised, and the information it contained is made available to an attacker?
2. What is the impact if that asset's integrity is compromised?
3. What is the impact if that asset is made unavailable?
4. What is the impact if that asset's privacy is compromised?
5. Can the immediate impact of a compromised asset lead to another type of attack or vulnerability?
The members of the assessor team will review each threat and vulnerability and ask the above questions to identify the adverse impacts. For example, if the attacker launches a DoS attack on the webserver and makes the service unavailable, it will have an impact on the service operation and business mission. Finally, document the adverse impact of each threat and vulnerability in the security risk assessment report.
Risk Evaluation and Treatment
The risk evaluation process helps to determine whether the threats and vulnerabilities are acceptable or not by calculating the impact and likelihood level. Furthermore, risk treatment will help to decide how each unacceptable risk will be addressed. Figure 8 illustrates the steps to conduct risk evaluation and risk treatment.
Determine Impact
Impact refers to the extent to which a threat event might affect the application. Impact assessment criteria may include:
• Harm to user health and organization reputation.
• Loss of assets.
The assessor team also needs to consider the asset's valuation while calculating the impact score of a threat. An asset's valuation includes the importance of that asset in fulfilling the business objectives, the replacement value of the asset and the business consequences of the asset being lost or compromised. For example, a physical attack on a sensor device and a physical attack on a database will have different impacts on business operations. A physical attack on a sensor will only compromise that particular sensor device. If the database is compromised and data are lost, there will be a much larger impact on finances, reputation, regulatory consequences and the operation of the application. Table 5 outlines the assessment scale for calculating impact scores. Table 6 illustrates an example of identifying the impact level of a physical attack on a sensor node. During the calculation, an impact level value is assigned to each impact factor and then the average is calculated.

Table 6. Impact analysis for physical attack on a sensor node.

Impact Factor | Impact Description | Impact Level | Bins | Scale
Harm to user health | Only the person who is using the device will be at risk | Very High | 100 | 10
Operational impacts | Only that device will be out of operation; it will not severely affect the overall application operation | |
Determine Likelihood
The likelihood represents the probability that a threat event will occur by exploiting one or more vulnerabilities. To estimate the likelihood, the assessor team needs to consider factors such as:
• Adversary intent and skill level.
• The affected asset.
• Historical evidence about the threat.
The same threat can have a different likelihood score based on the source of the threat and the assets affected. For example, a DoS attack can compromise the availability of both the web server and the sensor devices. Initiating a DoS attack on a web server is easier than on a sensor device, as an attack on a sensor device requires advanced skills and tools. In this scenario, the likelihood level will be different for the two assets. So, during the assessment, the assessor team needs to assign the likelihood level based on the available evidence, experience and expert judgement. Table 7 outlines the assessment scale for calculating the likelihood level. Table 8 illustrates an example of identifying the likelihood level for a DoS attack on a web server. During the calculation, a likelihood level value is assigned to each likelihood factor and then the average of all the factors is calculated.
The next stage is to calculate the risk score based on the impact and likelihood of the threats and vulnerabilities. Appendix I of NIST 800-30 details calculating the risk score by multiplying impact by likelihood [23]. Alternatively, the team can use the CVSS risk score calculator to calculate the risk score [47]. A sample risk score matrix using a qualitative assessment approach is presented in Table 9.
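The NIST 800-30 style calculation referenced above can be made concrete with a small helper, sketched here in Python; the bins follow the semi-quantitative scales quoted in Section 8.2, and the example figures are illustrative.

```python
# Risk score = impact x likelihood on the 0-10 semi-quantitative scale,
# mapped back onto the document's qualitative bins.
QUALITATIVE_BINS = [(0, 4, "Very Low"), (5, 20, "Low"), (21, 79, "Moderate"),
                    (80, 95, "High"), (96, 100, "Very High")]

def risk_score(impact: int, likelihood: int) -> tuple[int, str]:
    """impact and likelihood use the 0-10 scale; the score lands in 0-100."""
    score = impact * likelihood
    for low, high, label in QUALITATIVE_BINS:
        if low <= score <= high:
            return score, label
    raise ValueError("score out of range 0-100")

# Illustrative DoS-on-web-server figures: impact 8, likelihood 8.
print(risk_score(8, 8))  # -> (64, 'Moderate')
```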
Risk Acceptability Criteria
Risk acceptability criteria help to identify whether the threats and vulnerabilities are acceptable or unacceptable based on a set of criteria defined by the security and privacy evaluation team. There are no standard guidelines available for defining the set of criteria. However, the team can consider various factors while defining the criteria, such as:
• The organization's goals and objectives.
• Business operations.
• The application use case and the technology stack used for developing the application.
• Legal and regulatory aspects.
• Budget and time for developing the application.
Table 10 outlines the risk acceptability criteria based on the risk score calculated using the qualitative approach. The proposed criteria treat the risks with a low or very low score as acceptable risks, and the rest as unacceptable risks. If required, the evaluation team can make adjustments to the selection criteria. Finally, all unacceptable and acceptable risks need to be documented, with the rationale, in the security and privacy risk assessment report.

Table 10. Risk acceptability criteria.

Risk Level | Bins | Scale | Acceptability
Low | 5-20 | 2 | The risk may be acceptable over the short term. Plans to mitigate the risk should be included in future plans and budgets.
Moderate | 21-79 | 5 | The risk is unacceptable. Measures to reduce and mitigate the risk should be implemented as soon as possible.
High | 80-95 | 8 | The risk is unacceptable. Immediate measures to reduce and mitigate the risk should be implemented as soon as possible.
Very High | 96-100 | 10 | The risk is totally unacceptable. Immediate measures must be taken to mitigate the risk.
Risk Treatment
Risk treatment is the process of selecting and implementing measures to address the risk. There are three options available for risk treatment:
• Risk modification: a risk which requires the implementation of controls to reduce the impact and/or likelihood to an acceptable level.
• Risk avoidance: a risk can be avoided by eliminating the source of the risk or the asset exposed to the risk. This is usually applied when the severity of the risk impact and/or likelihood outweighs the benefits gained from implementing the countermeasure. For example, physically moving an on-premises server to an alternative location to mitigate a risk caused by natural events might be outweighed by the cost of moving the server.
• Risk sharing: a risk can be fully or partially shared with, or transferred to, another party. If the application uses any third-party libraries or public cloud services, the risks related to these can be shared with or transferred to the owner of the service.
The risk evaluation team will evaluate each unacceptable risk taking the above possible risk treatment options into account. Finally, the team will also record the list of risks that require controls, shared risks and avoided risks with rationale in the risk assessment report.
Update Security and Privacy Requirements
The goal of this stage is to update the security and privacy requirements with the list of security and privacy risks which require controls to mitigate. As the risk analysis at the requirements analysis stage uses the initial product requirements, the updated security and privacy requirements will feed into the final product requirements. The following security and privacy requirements can be used as a starting point:
• Assure data confidentiality by protecting sensor nodes and the database server from unauthorized access.
• Assure data integrity by protecting data from external modification during transmission or while in storage.
• Assure that data will always be available to an authorized entity of the application.
• Assure privacy of the data during collection, processing and transmission. Allow access to the data only to authorized entities.
• Use a lightweight, memory- and energy-efficient cryptographic algorithm for encryption.
• Facilitate a key management service for key generation, key refreshing, key agreement, key distribution and key revocation.
• Include a firewall and intrusion detection system to identify and block suspicious activity on a network.
• Include logging for auditing and accountability.
• Include a data backup strategy to assure high availability of the application.
After identifying the security and privacy requirements, the following two tasks need to be conducted:
• Update the initial product requirements with the security and privacy requirements.
• Document the security and privacy requirements in the security assessment report.
Security and Privacy Risk Assessment at the System Architecture Phase
To conduct the security and privacy risk assessment at the system architecture phase, the updated product requirements and the system architecture are taken as inputs. Figure 9 illustrates the steps to conduct a risk assessment at the system architecture phase. Below is the list of key tasks to be conducted during the security and privacy risk assessment at this phase:
• Review the system architecture according to the security and privacy principles and the requirements identified in Section 8.3.2.6.
• Apply risk analysis to identify the security and privacy risks.
• Identify acceptable and unacceptable risks.
• Identify the list of unacceptable risks which will require controls to mitigate.
• Update the security and privacy requirements and the product requirements with the unacceptable risks.
• Check whether any update to the current system architecture is required due to newly identified security and privacy requirements. If yes, make the necessary changes to the system architecture and conduct risk analysis followed by risk evaluation and treatment.
Review System Architecture
To review the system architecture, an organization needs to consider the following steps:
• Review the system architecture for compliance with security and privacy design principles. Organizations should take the following security and privacy design principles into consideration:
- Identify whether each component of the application will interface externally, internally or both.
- Identify how the user will access each component of the application and define the trust boundary.
- Use the least privilege principle when accessing and interfacing with any component.
- Take the threats and vulnerabilities identified in the requirements analysis phase into consideration while designing the security and privacy requirements.
- Identify the use of any third-party components and their security and privacy capabilities.
- Keep the system architecture as simple as possible.
• Ensure that all security and privacy requirements identified in Section 8.3.2.6 are implemented.
• If any security and privacy requirements or design principles are not implemented, implement the missing ones and iterate the review process.
Risk Analysis
To conduct risk analysis at the system architecture phase, the following four tasks need to be performed. Among these, identifying the threats and identifying the vulnerabilities can be performed in either order.
Identify and Document the Assets
To identify and document the assets in the system architecture, conduct the following steps:
• Check whether any new asset has been discovered compared to the list of assets identified during the requirements analysis phase in Section 8.3.1.1.
• Document the complete list of assets in the risk assessment report.
Identify and Document Threats
To identify and document the threats at the system architecture phase, the assessor team should conduct the following steps:
• Follow the steps outlined in Section 8.3.1.2.
• Document the complete list of threats in the risk assessment report.
Identify and Document the Vulnerabilities
To identify vulnerabilities at the system architecture phase, the assessor team should conduct the following steps:
• Apply threat modelling to identify vulnerabilities in the WBAN application. Section 6.3 outlines guidance on how to conduct threat modelling.
• Check whether there are any vulnerabilities additional to those identified during the requirements analysis phase in Section 8.3.1.3.
• If so, record the newly discovered vulnerabilities, with possible countermeasures (if available), in the security assessment report.
Identify and Document the Adverse Impacts
To identify the adverse impact of newly discovered threats and vulnerabilities, the assessor team can reuse the questionnaire and process outlined in Section 8.3.1.4.
Risk Evaluation and Treatment
To evaluate and treat the risks identified at the system architecture phase, conduct the following steps:
• Follow the steps outlined in Sections 8.3.2.1 to 8.3.2.5.
• Identify the list of acceptable risks, followed by the unacceptable risks which require controls to mitigate.
• Finally, document the updated product requirements and list the acceptable and unacceptable risks in the security and privacy risk assessment report.
Update Security and Privacy Requirements
Follow the steps outlined in Section 8.3.2.6 to develop the security and privacy requirements for the unacceptable risks which require security controls to mitigate. Update the product requirements with the updated security and privacy requirements. If the updated requirements require modifications to the system architecture, conduct the following steps:
• Make the necessary modifications to the system architecture.
• Iterate the security risk analysis and the security evaluation and treatment process until the security requirements are addressed in the system architecture.
Security and Privacy Risk Assessment Report
The result of the security and privacy risk assessment needs to be documented in a report which includes the following:
• Scope of the security and privacy risk assessment.
• Team members who conducted the risk analysis and the risk evaluation and treatment, with dates.
• Initial product requirements.
• Selected risk assessment approach, with rationale.
• List of assets identified in both phases.
• List of threats and vulnerabilities identified in both phases, along with their impact and likelihood scores.
• Risk acceptability criteria, with rationale, for both the requirements analysis and system architecture phases.
• List of acceptable and unacceptable risks, with rationale.
• List of unacceptable risks to be shared, avoided, or mitigated with controls.
• List of security and privacy requirements identified at both the requirements analysis and the system architecture phases.
Security and Privacy Risk Controls
Security and privacy risk controls are safeguards or countermeasures whose purpose is to mitigate the threats and vulnerabilities. This stage will take a list of unacceptable risks which require controls to mitigate as the input and produce an application that has all the necessary risk controls implemented and verified. Figure 10 presents the steps for the selection and implementation of security and privacy risk controls.
Review and Prioritise the Security and Privacy Risk Controls
After completing the security and privacy risk control selection process, the next task is to review the implementation details and prioritize the controls. The review and prioritization of the security and privacy risk controls should be conducted as follows:
• A team comprised of a technical lead, a developer and a QA person reviews the implementation details presented in Appendix B for each control.
• Prioritize the controls based on the following: the risk score; the product delivery plan and timeline of the project; the priority of each use case; and the complexity of, and time required to implement, the control. A toy sketch of this prioritisation step follows this list.
• Document the list of controls, along with their implementation details and prioritization, in the security and privacy risk control report.
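As a toy illustration of the prioritisation step, controls can be ordered by risk score and tie-broken by implementation effort; the control names, scores and effort values below are assumptions, not entries from Appendix B.

```python
# Order controls highest-risk first; within the same risk level,
# cheaper controls come first so early releases are not blocked.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    risk_score: int   # from the risk evaluation stage (0-100)
    effort: int       # relative implementation complexity, 1 (low) - 5 (high)

controls = [
    Control("Input validation on REST API", 64, 2),
    Control("Key management service", 80, 4),
    Control("Audit log write protection", 40, 3),
]

plan = sorted(controls, key=lambda c: (-c.risk_score, c.effort))
for c in plan:
    print(f"risk={c.risk_score:>3}  effort={c.effort}  {c.name}")
```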
Implementation and Verification of Security and Privacy Risk Controls
In the development phase, the developer implements and verifies each of the selected controls. During the implementation, developers should consider secure coding practices. The developer should use organization-defined secure coding practices if available; otherwise, the developer can follow the secure coding guidelines provided below. Finally, to verify whether the controls have been implemented properly, code review and unit testing should be conducted.
Secure coding guidelines:
• Validate input from all data sources.
• Compile code using the highest warning level and take the necessary action to resolve the warnings.
• Use version control to track code changes.
• Sanitize the input to SQL statements. Use parameterized SQL statements; do not use string concatenation or string replacement to build SQL statements.
• Use the latest version of compilers, which often include defences against coding errors; for example, GCC protects code from buffer overflows.
• Include proper error/exception handling. Check the return values of every function, especially security- and privacy-related functions.
• Encode HTML input field data. Do not store sensitive data in cookies.
• Use code review tools to find security and privacy issues early.
Code Review: Code review is an effective technique to examine the source code to minimize coding errors and reduce the risk of introducing vulnerabilities during the implementation phase. Secure coding guidelines also need to be considered during the code review process. Code review can be performed manually and/or by using an automated tool. To conduct a manual code review, organizations need to assign an experienced person from the development team. To conduct a code review using an automated tool, an organization needs to select the tool based on the technology stack. There are various automated code review tools available such as: SonarQube, IBM Security AppScan, Code Dx or Veracode which support a wider range of technology stacks.
Unit Testing: Unit testing is a testing method which helps to test an individual unit or component of an application. The goal of unit testing, from a security and privacy perspective, is to verify that each implemented control effectively mitigates its respective risk. Sample acceptance criteria for unit tests are presented in Table 11; the example there details the test to verify that the countermeasure for the "Weak authentication scheme" vulnerability is properly implemented (sample use case: user login with username and password; test objective: verify that the user authentication is aligned with the business and security requirements). A pytest sketch of this acceptance test follows the list below. If the code review or unit test identifies any control failures, the developer needs to conduct the following steps in order:
• Review the reason for the failure and take the necessary action based on the scenarios presented in Section 9.3.
• Conduct the code review and/or unit test again to check whether the failure case has been addressed.
• Finally, document the result of the code review and the unit testing in the security risk control report, with the updated list of controls (if any new controls were added).
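A hedged pytest sketch of the Table 11 acceptance test for the weak-authentication countermeasure: authenticate() and its credential backend are stand-ins for the application's real login code, and the blocklist entries are illustrative.

```python
# Unit tests verifying that the login routine rejects null, guessable
# and blocklisted credentials, per the "Weak authentication scheme"
# countermeasure. authenticate() is a stand-in implementation.
import pytest

BLOCKLIST = {"password", "123456", "qwerty"}

def check_credentials(username: str, password: str) -> bool:
    # Stand-in for the real credential store lookup.
    return (username, password) == ("alice", "S7rong&Unique!")

def authenticate(username: str, password: str) -> bool:
    if not username or not password:          # null credentials
        return False
    if password.lower() in BLOCKLIST:         # guessable credentials
        return False
    return check_credentials(username, password)

@pytest.mark.parametrize("pwd", ["", "password", "123456"])
def test_weak_credentials_rejected(pwd):
    assert authenticate("alice", pwd) is False

def test_valid_login_accepted():
    assert authenticate("alice", "S7rong&Unique!") is True
```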
Review of Security and Privacy Controls
The aim of this stage is to present a list of reasons which can cause a control to fail. During the review, the following considerations need to be taken into account in order to identify the cause of the failure:
• The control was not properly implemented according to the implementation guidelines outlined in Appendix B. In that case, the developer needs to implement the control again according to the implementation guidelines.
• An appropriate control was not selected for addressing the threats and/or vulnerabilities. If the appropriate control is not available in Appendix B, analyze external sources such as NIST 800-53, ISO 27005, OWASP and blogs for an appropriate control and its implementation details.
• The developer did not follow appropriate secure coding practices during implementation.
Software Integration Testing
Software integration testing is a level of software testing where individual units are combined and tested as a group. Integration tests help to identify whether independently developed units of software work correctly when they are connected together. Integration testing can adopt different approaches, such as black box, white box and gray box testing methods. During software integration testing, the developer needs to conduct two key tests:
• Security and privacy requirements testing: to validate that the security and privacy requirements identified during the risk assessment are implemented properly, by conducting functional, performance and scalability testing.
• Threat and vulnerability mitigation testing: to validate the effectiveness of the implemented controls against the identified threats and vulnerabilities.
The following steps should be conducted at the software integration testing stage (a black-box sketch follows this list):
• Perform integration testing by conducting functional, unit, black box, white box and gray box testing. Organizations can use one or a combination of multiple testing approaches to conduct the integration testing, based on QA resource expertise and availability.
• If an integration test fails, check whether it failed due to a security risk control. If not, take appropriate measures to fix the failure case and conduct the software integration test again. If so, review the considerations presented in Section 9.3 in order to identify the reason for the failure, take appropriate measures to address the failure case, and conduct the software integration test again.
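The black-box portion of such integration testing might be sketched as follows, assuming Python with requests and pytest; the base URL and endpoints are placeholders, not Company A's deployment.

```python
# Black-box integration checks against a running instance: protected
# endpoints must refuse unauthenticated calls, and transport must be TLS.
import requests

BASE = "https://api.example.com"  # placeholder base URL

def test_protected_endpoint_requires_auth():
    resp = requests.get(f"{BASE}/sessions", timeout=10)
    assert resp.status_code in (401, 403)

def test_transport_is_encrypted():
    # requests verifies TLS certificates by default; a certificate
    # failure would raise, and a downgrade would show in the final URL.
    resp = requests.get(f"{BASE}/health", timeout=10)
    assert resp.url.startswith("https://")
```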
Evaluation of Overall Residual Security and Privacy Risk Acceptability
Evaluating an application's overall residual security and privacy risk is a complex process, as it is difficult to assess how an attacker will exploit the application and what the severity level of the exploit will be. According to the TIR57 standard, an organization can employ security testing techniques such as vulnerability scans and/or penetration testing to assess the overall residual security and privacy risk of an application. This stage takes the application, with controls implemented and verified, as input. Figure 11 presents the steps for evaluating the overall residual security and privacy risk of the application.
Conduct Vulnerability Scanning/Penetration Testing
Vulnerability scans and penetration testing are very different from each other, but both serve important functions for evaluating the implemented controls. A vulnerability scan only discovers known vulnerabilities; it does not attempt to exploit a vulnerability but instead only confirms its possible existence. An organization can conduct vulnerability scanning using an automated tool with some manual support. Table A2 in Appendix C lists popular tools for vulnerability scanning. Penetration testing is a security testing approach which identifies exploitable vulnerabilities of a system, or of individual components of a system. Penetration testing requires specialized skills, higher budgets and more time than vulnerability scanning. An organization can conduct penetration testing by forming a team of people within the organization who have the required technical expertise, and/or by onboarding external resources with the required expertise. Table A3 in Appendix C lists some penetration testing tools. To conduct vulnerability scanning and/or penetration testing, an organization should conduct the following steps:
• Define the scope of the vulnerability scanning and/or penetration testing. The scope will include: the list of application use-cases; the list of assets; and the list of threats and vulnerabilities for which countermeasures are implemented.
• Select the tools to be used to conduct the testing.
• Collect the results for review.
• Document the overall residual security and privacy risk acceptability in a report, including: the date of the testing; the name of the people/organization who performed the scanning and/or testing; the scope of the testing; and the list of tools used for conducting the testing.
Review Test Result
Passing a penetration test/vulnerability scan does not guarantee that the application is invulnerable; it does, however, mean that no exploitable vulnerabilities were found within the scope of the testing. If the testing is successful (i.e., did not record a fail), the organization can mark the product for launch. If it fails, the reason for the failure needs to be analyzed using the following steps:
• Check whether the threat is a new threat or an existing threat which was identified during the security and privacy risk assessment steps.
• If the threat is an existing threat, perform a review of the control based on the considerations presented in Section 9.3 to identify the reason for the failure, and take appropriate measures to address the failure case and mitigate the threat.
• Upon completion of the implementation of the controls, the testing needs to be conducted again to verify that the control successfully mitigates the threat.
• Document the action taken to address each threat in the overall residual security and privacy risk acceptability report.
Discussion
The goal of this section is to present how the beta version of the framework addresses the challenges presented in Section 4, followed by a discussion on the threats to the validity of this study.
How the Proposed Framework Addresses the Challenges
Section 4 outlines the challenges faced by developers and organizations in adopting a risk management framework and standards for assuring security and privacy of WBAN applications.
• Lack of trained staff, responsibilities, budget, and management support: this framework consists of a list of assets, threats, vulnerabilities, and controls with implementation details that are specific to WBAN applications. Implementing this framework requires minimal security expertise and will help to reduce development time, and thereby development cost.
• The existing standards are too complex and complicated to implement: this framework provides detailed guidance on how to conduct each step of the risk management process. This guidance should greatly assist developers with limited experience in implementing a risk management process.
• Limited knowledge about healthcare regulatory requirements and standards: the framework is based on recommendations and best practice guidelines provided by regulations such as HIPAA and GDPR, and by standards such as ISO/IEC 80001-2-2, TIR 57, NIST 800-53 and ISO 27002.
• Understanding the data flow around the system and what assets need to be protected: the framework provides guidance on conducting security risk assessment at both the requirements analysis and the system architecture phases. This guidance will help the organization understand how data flows around the system and identify the assets that need protection.
• Comprehensive understanding of the architecture for WBAN security and privacy: the framework outlines the possible assets, threats and vulnerabilities, and provides guidelines on how to conduct the architecture review. Additionally, the framework identifies the security requirements that need to be considered during the development of the architecture of a WBAN application. This will help organizations obtain a comprehensive understanding of WBAN architecture.
• Identifying appropriate security controls with respective implementation details: the framework provides appropriate security controls, along with their implementation details, for a WBAN application. The implementation details will assist a developer in implementing the security controls.
• Due to the vast number of security controls, the challenge is prioritizing these controls in addition to planning releases without compromising security and privacy: the security risk score identified during the security risk evaluation and treatment stage can be used to prioritize each risk and its respective security control.
• Security mechanisms for sensor device nodes: the framework suggests using very lightweight encryption and decryption processes. This framework recommends use of the AES symmetric cryptographic algorithm and the Diffie-Hellman process for key exchanges between mobile applications and sensor devices; a sketch of this pattern follows.
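The sketch below illustrates the recommended pattern using the Python cryptography package. Note the substitutions: it uses X25519, an elliptic-curve variant of Diffie-Hellman, and AES-GCM, an authenticated AES mode; key length, HKDF parameters and the example payload are all assumptions for illustration, not values mandated by the framework:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair and exchanges public keys.
sensor_priv = X25519PrivateKey.generate()
phone_priv = X25519PrivateKey.generate()

# Both sides derive the same shared secret from the exchange.
shared = phone_priv.exchange(sensor_priv.public_key())

# Derive a 128-bit AES key (lightweight enough for constrained devices).
aes_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
               info=b"wban-session").derive(shared)

# Encrypt a sensor reading with AES-GCM (confidentiality plus integrity).
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b'{"hr": 72}', None)
plaintext = AESGCM(aes_key).decrypt(nonce, ciphertext, None)
assert plaintext == b'{"hr": 72}'
```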
Threats to Validity
A threat to validity arises because the alpha version of the framework has only been validated through implementation within one industrial setting. This also raises concerns about the generalisability of the framework to all WBAN development organizations. To address these concerns, we intend to have the framework undergo expert review, and to further trial it within other WBAN development organizations.
Conclusions and Future Work
Assuring the security and privacy of PHR data is a key concern and a challenging task faced by developers of WBAN applications. Developers have difficulties in assuring the security and privacy of WBAN-based healthcare applications for a number of reasons, which include: lack of knowledge of, and the complexity of, the security and privacy standards; lack of understanding of what assets need to be protected in WBAN ecosystems; and difficulty with the identification of appropriate controls, together with a lack of implementation details.
In this paper, we identified a number of healthcare-related risk management frameworks. However, these frameworks were not directly applicable to WBAN applications, because their primary objective is to manage the risk of applications that operate within an HDO's IT network, whereas WBAN applications may operate in a public, open network using short-range communication media. Furthermore, these frameworks lack a process for selecting controls, lack implementation details for controls, and do not provide any guidance for assuring the security and privacy of resource-constrained sensor devices. This paper presents a risk management framework specifically for WBAN applications which addresses the challenges detailed above. The framework was developed in two stages, an alpha version and a beta version; the beta version was developed by incorporating the suggestions and recommendations received after implementing the alpha version in an industrial setting. We have detailed how the framework addresses the difficulties developers face in assuring the security and privacy of WBAN applications, and through implementing the framework within a WBAN development organization we have demonstrated the effectiveness of the security control implementation details provided within the framework.
Future work will focus on validating this framework through expert review.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
A non-exhaustive list of assets, possible threats, vulnerabilities and respective security controls for WBAN applications is presented in Table A1. Audit records should capture, at a minimum:
• files accessed and the kind of access;
• network addresses and protocols;
• alarms raised by the access control system;
• activation and de-activation of protection systems, such as anti-virus systems and intrusion detection systems;
• records of transactions executed by users in applications.
• Limit the capturing of PHI and/or PHR data in audit records to minimize the privacy risk. If required, anonymize the PHI and/or PHR data records before capturing them in the audit log (AU-3, 12.4.1).
• Provide a warning to the responsible roles or owner within an organization when the allocated audit record storage volume reaches the maximum audit record storage capacity (AU-5, 12.4.2).
• Provide a real-time alert if the system fails to capture an audit record within a defined time period (AU-5).
• Implement an automated process to review and analyze the audit log and generate reports; use these reports to investigate and respond to suspicious activities (AU-6).
• Implement the capability to sort and search audit records for an event based on the content fields of audit records (AU-7).
• Use internal system clocks to generate the timestamps for audit records (AU-8).
• Implement cryptographic mechanisms to protect the integrity of audit records and ensure only authorized users obtain access to them; if required, create an authorized user with read-only access to audit records (AU-9). A sketch of this pattern follows this list.
• Initiate session audits at system start-up, including automatic capture of file transfers and user requests/responses (AU-14).
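The following sketch combines three of the controls above: pseudonymizing PHI before logging (AU-3), timestamping from the internal system clock (AU-8), and protecting record integrity with a MAC (AU-9). It uses only the Python standard library; the key handling is deliberately simplified and in practice the key must come from a secure store:

```python
import hashlib, hmac, json
from datetime import datetime, timezone

AUDIT_HMAC_KEY = b"replace-with-key-from-a-secure-store"  # illustrative only

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash before logging (AU-3)."""
    return hmac.new(AUDIT_HMAC_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def make_audit_record(user: str, action: str, patient_id: str) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # internal clock (AU-8)
        "user": user,
        "action": action,
        "subject": pseudonymize(patient_id),
    }
    # Integrity tag over the canonical record (AU-9): tampering with a
    # stored record invalidates this MAC on verification.
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(AUDIT_HMAC_KEY, payload, hashlib.sha256).hexdigest()
    return record

print(make_audit_record("nurse01", "read:PHR", "patient-1234"))
```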
Upon the revocation of a compromised key, a new key needs to be generated and distributed using the above steps (ISO/IEC 11770). Log each activity related to key management and use this data to perform auditing (ISO 27002 10.1.2).
Appendix C
List of tools for vulnerability scanning and penetration testing. | 17,836 | 2021-10-12T00:00:00.000 | ["Medicine", "Computer Science", "Engineering"] |
A Network with Composite Loss and Parameter-free Chunking Fusion Block for Super-Resolution MR Image
MRI is often influenced by many factors, and single-image super-resolution (SISR) based on a neural network is an effective and cost-efficient technique for the high-resolution restoration of low-resolution images. However, deep neural networks can easily overfit and make the test results worse, while a shallow network is difficult to fit quickly and cannot completely learn the training samples. To solve these problems, a new end-to-end super-resolution (SR) method is proposed for magnetic resonance (MR) images. Firstly, in order to better fuse features, a parameter-free chunking fusion block (PCFB) is proposed, which divides the feature map into n branches by splitting channels to obtain parameter-free attention. Secondly, the proposed training strategy, combining perceptual loss, gradient loss, and L1 loss, significantly improves the accuracy of model fitting and prediction. Finally, the proposed model and training strategy are evaluated on the super-resolution IXISR dataset (PD, T1, and T2) against existing state-of-the-art methods and achieve advanced performance. Extensive experiments show that the proposed method outperforms the state-of-the-art methods on highly reliable metrics.
Introduction
MRI is a noninvasive in vivo imaging technology that uses the phenomenon of magnetic resonance to obtain molecular structure and thus information about the internal structure of the human body. MRI not only provides more information than many other imaging techniques in medical imaging, but it can also directly produce cross-sectional, sagittal, coronal, and various oblique images of the body; it does not produce the artifacts seen in CT detection, does not require contrast injection, does not involve ionizing radiation, and has fewer adverse effects on the body. MRI is very effective in detecting intracerebral hematomas, extracerebral hematomas, brain tumors, and other diseases. Of course, MRI has its shortcomings [1]: it is relatively slow, has lower spatial resolution than CT, suffers from motion artifacts, etc. Therefore, obtaining high-resolution MRI images has become the direction of current research.
High-resolution MRI can not only clearly show the relationship between a tumor and the surrounding tissues but can also reveal the anatomical structure of the brain. It has high application value in the early and middle stages of diagnosis [2].
However, the generation of high-resolution MRI images is often influenced by many factors, such as hardware equipment, imaging time, the motion of the human body, and the effect of environmental noise. Therefore, in order to perform effective high-resolution restoration of the low-resolution images obtained by MRI, image super-resolution is an effective and cost-efficient technique to improve the spatial resolution of MR images. This technique offers the feasibility of high signal-to-noise-ratio, high-resolution reconstruction of low-resolution MRI images [3].
The traditional SR algorithms include interpolation-based and reconstruction-based methods, which generally find it difficult to reconstruct the high-frequency detailed information of the image, are more complicated to compute, and take longer to reconstruct [4]. In order to solve these problems, scholars have applied deep learning to SR reconstruction in recent years and made many breakthroughs, and nowadays SR algorithms based on deep learning occupy the mainstream position of SR algorithm research. In the field of medical images, deep learning-based SR algorithms can obtain prior knowledge from medical image training set data and reconstruct low-resolution images into high-resolution images using neural networks based on this information.
In recent years, with the continuous development of deep learning [5][6][7][8], many advanced deep learning-based SR methods have emerged in the field of SR imaging [9,10], enabling the performance and efficiency of SR imaging to be continuously enhanced. The super-resolution convolutional neural network [11] and the fast super-resolution convolutional neural network [12] were pioneering works of deep learning in the field of super-resolution reconstruction. They used a convolutional neural network (CNN) for super-resolution image reconstruction for the first time. Subsequently, on the basis of this pioneering work, researchers proposed many new super-resolution networks to further improve model performance, such as the deeply recursive convolutional network [13], the deep recursive residual network [14] based on recurrent neural networks, and super-resolution using very deep convolutional networks [15]. FFTI [16] is an incomplete-image inpainting method based on feature fusion and two-step inpainting. However, most of these methods were aimed at natural images and are not suitable for medical images.
Recently, much of the literature in the field of medical imaging has also proposed SR methods for medical images, such as [17][18][19][20][21]. However, unlike ordinary images, high-quality medical image datasets are relatively scarce, most of the images are gray-scale, and the images are relatively uniform. Using such a dataset to train a model with deep network layers will easily lead to overfitting and make the test results worse. A model with a shallow network will be difficult to fit quickly and cannot learn the training samples completely. Therefore, SR medical images trained by a traditional network cannot meet the requirements of SR tasks.
Considering the above problems, in order to make an SR image model more suitable for medical image tasks, in this paper we introduce residual learning and a parameter-free chunking fusion method to address the above difficulties. In the feature extraction stage, residual learning is designed similarly to the residual network [22] to acquire features, borrowing LayerNorm [23] from the transformer. LayerNorm is used in residual learning to make training smoother and to avoid the impact of variance differences between different batches. Subsequently, a parameter-free chunking fusion block is used to better fuse features and perform effective feature enhancement. In this module, the feature map is chunked into n branches for different information transmission; SimAM [24] is then applied to each branch to enhance the features of the different branches, and finally the semantic information of the different branches is integrated. SimAM can effectively enhance the features on different branches and integrate them effectively at the end. Moreover, SimAM has no parameters to learn and can improve model performance without parameter training. In addition, in order to further accelerate model fitting and improve prediction accuracy, this paper proposes a composite loss that optimizes the training strategy by combining perceptual loss, gradient loss, and L1 loss.
To address the above problems, we propose corresponding solutions; the main contributions of this work are threefold: (1) A parameter-free chunking fusion block (PCFB) is proposed, which divides the feature map into n branches for parameter-free attention and then integrates the feature information of the different branches, so as to better fuse features and perform effective feature enhancement. This improves the expressive ability of the feature map without adding parameters, thereby improving accuracy. (2) A composite loss for our SR method is proposed which combines perceptual loss, gradient loss, and L1 loss. This loss further makes the model attend to losses in different dimensions, thus enhancing the model's expressiveness.
(3) A new end-to-end SR method for MR images is proposed, where the method contains the PCFB and the composite loss, which can improve SR performance more effectively and avoid overfitting.
The rest of this paper is organized as follows: Section 2 introduces some related work. The proposed methods and experimental results are described in detail in Sections 3 and 4, respectively. We conclude our paper in Section 5.
Super-Resolution in Deep Learning
With the development of deep convolutional neural networks (DCNN), research on super-resolution has made progress recently. For deep learning methods for SISR, fast response and reconstruction quality are important references for measuring super-resolution methods. The super-resolution convolutional neural network (SRCNN) [11] and the fast super-resolution convolutional neural network (FSRCNN) [12] were pioneering works of deep learning in the field of super-resolution reconstruction. These two networks first used bicubic interpolation to downscale and upscale low-resolution images to obtain comparable super-resolution images, and then a convolutional neural network was introduced for the first time to achieve image reconstruction. In addition, from the perspective of these two networks, the traditional SR method based on sparse coding can also be regarded as a deep convolutional network; compared with the traditional method, all sublayers in the two networks are optimized to give full play to the performance of each component. DRCN has a very deep recursion layer (up to 16 recursions), and recursive supervision and skip connections were further proposed to account for gradient vanishing/explosion. For deep models, the residual structure exhibits excellent performance; therefore, the residual structure is introduced into super-resolution methods to compensate for the shortcomings caused by gradient vanishing and gradient explosion. The enhanced deep super-resolution network (EDSR) [25] was inspired by the residual structure. Compared with the traditional residual structure, the residual blocks of EDSR discard unnecessary modules, thus constructing a multiscale deep super-resolution system (MDSR), which can reconstruct high-resolution images with different magnification factors in a single model. In addition, the SR robustness of images in complex scenes should also be considered. A heterogeneous group SR CNN [9] contains multiple heterogeneous group blocks; these blocks increase the internal and external relations of different channels in a parallel way to cope with SR in complex scenarios. An enhanced super-resolution group CNN (ESRGCNN) [26] can fully fuse the correlation between wide channel features and retain the long-distance context dependence in the upsampling operation to obtain more accurate low-frequency information. Further, in order to solve common problems in image super-resolution algorithms, such as image edge blurring caused by redundant network structure, inflexible selection of convolution kernel size, and slow convergence of the training process, MFFN [27] used a lightweight fusion multilevel single-image super-resolution method to achieve SISR.
Super-Resolution in Medical Imaging
The problem of super-resolution has been widely discussed in medical imaging. Due to limitations such as image acquisition time, low radiation dose, or hardware constraints, the spatial resolution of medical images is often insufficient [28]. To address this, Zhu et al. [29] proposed a method for arbitrary-scale super-resolution of medical images (MIASSR), which combines meta-learning with a GAN and can be used for super-resolution at any magnification.
To recover as many useful image details as possible, Bing et al. [20] proposed an SR method for medical imaging based on an improved generative adversarial network. This method can not only avoid the interference of high-frequency false information but also integrate low-level feature constraints to train the model. Zhang et al. [21] proposed a fast medical image super-resolution method, in which adding a subpixel convolution layer and replacing the hidden layer with a mini-network were crucial to improving the speed of image reconstruction. Inspired by the super-resolution convolutional neural network method based on three hidden layers, Deeba et al. [18] proposed a wavelet-based microgrid network super-resolution method for medical images, where image restoration was sped up by adding a subpixel layer to replace the small grid network on the hidden layer.
Attention Mechanism for Vision Tasks.
Attention has arguably become one of the most important concepts in the field of deep learning. It was inspired by human biological systems, which tend to focus on distinctive parts when processing large amounts of information [30]. Liu et al. [31] proposed a multiattention domain module to weight and reorganize features; the channel- and spatial-domain information in the super-resolution method is effectively fused, and the quality of the super-resolution image is effectively improved. Wang et al. [32] proposed two new attention mechanisms: context-weighted channel attention and persistent spatial attention. The proposed attention modulates rich features by suppressing useless features and enhancing features of interest in a channel-wise and spatial manner. Liu and Chen [33] made the following improvements on the basis of the super-resolution generative adversarial network (SRGAN): firstly, they added a channel attention (CA) module to the SRGAN network and increased network depth to better express high-frequency features; secondly, the old batch normalization layers were removed to improve network performance; finally, the loss function was modified to reduce the influence of noise on the image.
Overview.
In the image super-resolution task, our goal is to take the low-resolution (LR) image $I^{LR} \in \mathbb{R}^{H\times W\times C}$ as the input of the super-resolution model and generate the super-resolution (SR) image $I^{SR} \in \mathbb{R}^{H\times W\times C}$, while in general the low-resolution image $I^{LR}$ is obtained by downsampling the ground-truth high-resolution image $I^{HR} \in \mathbb{R}^{H\times W\times C}$. We express the super-resolution model as $G$ with parameters $\theta_G$. The super-resolution task can be expressed as the following formula:

$$I^{SR} = G(I^{LR}; \theta_G) \quad (1)$$

In order to make $I^{SR}$ as similar to $I^{HR}$ as possible, it is necessary to optimize the model $G$ with the loss function $L$, so that finally the optimal parameters $\theta_G^{*}$ are obtained. The objective formula is as follows:

$$\theta_G^{*} = \arg\min_{\theta_G} L\bigl(G(I^{LR}; \theta_G),\, I^{HR}\bigr) \quad (2)$$

The proposed super-resolution architecture is shown in Figure 1. Below, details are given about the feature extraction block, the parameter-free chunking fusion block (PCFB), and the image reconstruction block. Finally, the composite loss and the training strategy are introduced to enhance the model's expressiveness.
Network Architecture
First, if the normal ReLU activation function is used, then when a feature x is less than 0 it is suppressed to 0 and the feature information is lost. Therefore, we use PReLU [34] (parametric rectified linear unit) to replace ReLU. PReLU adds a learnable parameter on the basis of ReLU, which can adjust the activation function according to different experimental conditions. The formula is as follows:

$$f(x_i) = \begin{cases} x_i, & x_i > 0 \\ a_i x_i, & x_i \le 0 \end{cases} \quad (3)$$

where $x$ represents the feature map and $a_i \in [0, 1]$ is a learnable parameter. Second, if batch normalization (BN) is used, the difference in the mean and variance of data across mini-batches may produce unstable statistics [35]; instance normalization [36] can avoid these small-batch problems. However, the work reported in [37] shows that adding instance normalization does not always bring a performance improvement, and manual adjustment is required. Therefore, we introduce layer normalization (LN), which was used in early transformer papers [23]; many recent SOTA methods [38][39][40] also use this normalization. LayerNorm is independent of the batch size, so it is not affected by the above problems, and there are no parameters that need to be manually adjusted as in instance normalization. Therefore, LN is introduced to stabilize training and improve performance. The normalization formula is as follows:

$$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta \quad (4)$$

where $x$ represents the feature map, $\epsilon$ is a small constant, $\mathrm{E}[x]$ is the mean, $\mathrm{Var}[x]$ is the variance, and $\gamma$ and $\beta$ are the scale and shift. The same normalization form is used as in BN, but the difference is that LN normalizes each single sample rather than normalizing across the whole batch as BN does.
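The following is a minimal NumPy sketch of equations (3) and (4) above, not the authors' Paddle implementation; the sample shapes and the scalar gamma/beta are illustrative:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each sample over its own (C, H, W) features -- independent
    of batch size, unlike batch normalization."""
    mean = x.mean(axis=(1, 2, 3), keepdims=True)   # per-sample mean
    var = x.var(axis=(1, 2, 3), keepdims=True)     # per-sample variance
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def prelu(x, a):
    """PReLU: identity for positive values, learnable slope a for negatives."""
    return np.where(x > 0, x, a * x)

x = np.random.randn(2, 8, 4, 4)                    # (batch, C, H, W)
y = prelu(layer_norm(x, gamma=1.0, beta=0.0), a=0.25)
```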
Parameter-Free Chunking Fusion Block (PCFB).
In order to improve the propagation of feature information, Zhao et al. [41] designed the module CSB to help neural networks deal with hierarchical features with different attributes. Because CSB contains a large number of parameters that need to be learned and its fitting speed is slow, we propose the PCFB, which does not need to learn a large number of parameters while maintaining image quality. In the PCFB, chunking and fusing are realized as channel splitting and channel merging, respectively. The difference from CSB is that the size of each chunk is determined by the parameter n, where each input feature x is divided into n chunks, and each chunk x_i has the size H × W × (c/n). Subsequently, in order to carry out targeted feature enhancement for each chunk, SimAM is used to process the features of the different chunks; since SimAM needs no additional parameters to be learned, the number of model parameters is not increased.
(1) Chunking and Fusing. The input feature x can be divided into n chunks along the channel direction, and the dimension of each chunk is H × W × (c/n). This can be formally expressed as follows:

$$x_1, x_2, \ldots, x_n = S(x) \quad (5)$$

$$\hat{x} = M(x_1, x_2, \ldots, x_n) \quad (6)$$

where S(·) is the chunking function, which splits the feature map x into n chunks x_1, x_2, …, x_n. Conversely, M(·) is the fusing function, which merges x_1, x_2, …, x_n back to the original dimension using the concat function.
(2) Parameter-Free Attention. Normally, spatial attention is used for spatial information, while channel attention is used for channel information, to focus on feature information. However, in the human visual system, spatial attention and channel attention coexist and jointly promote information selection during visual processing. Therefore, we need a three-dimensional attention that focuses on the features at each channel and spatial position, so the parameter-free 3D attention SimAM is used to enhance the features of the different chunks in this paper. The structure of the proposed method is shown in Figure 2.
SimAM evaluates the importance of each neuron by constructing an energy function $e_t^{*}$. The lower the energy, the greater the difference between the neuron and the surrounding neurons, and the higher the importance of the feature. The (minimal) energy function, following [24], is:

$$e_t^{*} = \frac{4(\hat{\sigma}^2 + \lambda)}{(t - \hat{\mu})^2 + 2\hat{\sigma}^2 + 2\lambda} \quad (7)$$

where t is a neuron, i.e., a pixel of the feature map x, $\hat{\mu}$ and $\hat{\sigma}$ represent the mean and standard deviation of the feature map, respectively, and λ is a hyperparameter. The importance of a neuron can therefore be obtained from $e_t^{*}$. In addition, the attention mechanism can be realized by weighting the feature map through the sigmoid function. The formula is as follows:

$$\tilde{x} = \mathrm{sigmoid}\left(\frac{1}{E}\right) \otimes x \quad (8)$$

where ⊗ means element-wise multiplication and E is the energy matrix containing all $e_t^{*}$. This module does not introduce any additional training parameters, so it does not increase the original network parameters while improving performance.
(3) Parameter-Free Chunking Fusion Block. In order to better learn and enhance the features, we use equation (5) to obtain n chunks, let each chunk pass through equation (8) separately for 3D weighted attention, and then use equation (6) to fuse them back to the original size:

$$\hat{x} = M\bigl(\mathrm{SimAM}(x_1), \mathrm{SimAM}(x_2), \ldots, \mathrm{SimAM}(x_n)\bigr) \quad (9)$$

The process is shown in Figure 1. A minimal sketch of this block follows.
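The sketch below is a NumPy reading of equations (5)-(9): split the channels into n chunks, apply SimAM's parameter-free attention to each, and concatenate. The λ default of 1e-4 follows the SimAM paper's public implementation; shapes are illustrative:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free 3D attention (SimAM): weight each activation by the
    inverse of its energy, squashed through a sigmoid (equations (7)-(8))."""
    n = x.shape[1] * x.shape[2] - 1                      # H*W - 1 per channel
    d = (x - x.mean(axis=(1, 2), keepdims=True)) ** 2    # (t - mu)^2
    v = d.sum(axis=(1, 2), keepdims=True) / n            # channel variance
    inv_energy = d / (4 * (v + lam)) + 0.5               # proportional to 1/e_t*
    return x * (1.0 / (1.0 + np.exp(-inv_energy)))       # sigmoid weighting

def pcfb(x, n_chunks=2):
    """Parameter-free chunking fusion: split channels, enhance, re-fuse."""
    chunks = np.split(x, n_chunks, axis=0)                # S(.): channel split
    return np.concatenate([simam(c) for c in chunks], 0)  # M(.): channel merge

x = np.random.randn(8, 16, 16)                            # (C, H, W) feature map
y = pcfb(x, n_chunks=2)
assert y.shape == x.shape                                 # no parameters added
```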
Image Reconstruction Block.
In order to bring the image to the super-resolution size, an upsampling operation is required, and we build the image reconstruction part to realize it. As shown in Figure 1, image reconstruction includes 3 × 3 convolution, 1 × 1 convolution, PReLU, and PixelShuffle [42] layers. The main function of PixelShuffle is to obtain high-resolution feature maps through multichannel recombination of low-resolution feature maps. As shown in Figure 3, the feature maps of the r² channels are recombined into the upsampled result of size (H × r) × (W × r) on a single channel; pixel shuffle thus transforms the feature map from low-resolution space to high-resolution space.
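A minimal NumPy sketch of the pixel-shuffle rearrangement (the channel counts below are illustrative):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r)."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)          # split the channel axis into (r, r)
    x = x.transpose(0, 3, 1, 4, 2)        # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

lr_feat = np.random.randn(4, 24, 24)      # r = 2, so 4 channels -> 1 channel
hr_feat = pixel_shuffle(lr_feat, r=2)     # -> shape (1, 48, 48)
```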
Conventional Loss.
Most super-resolution methods use a pixel loss to optimize the network. Pixel loss measures the pixel-wise difference between the SR image and the HR image, and includes the L1 loss and L2 loss. Compared with the L1 loss, the L2 loss penalizes large errors more but has a higher tolerance for small errors. In actual training, the L1 loss [25,43] shows better convergence than the L2 loss and ultimately yields a higher peak signal-to-noise ratio (PSNR), so it is the most widely used loss function in the super-resolution field. The formula is as follows:

$$L_1 = \frac{1}{HWC} \sum \left| I^{HR} - I^{SR} \right| \quad (10)$$

However, such pixel losses do not consider image quality aspects such as edges, textures, and high-frequency details, so the results may be overly smooth and fail to maintain sharp edges and good visual effects.
Perceptual Loss.
In order to incorporate a high-level feature loss on top of the pixel loss, the perceptual loss [44] is introduced. The perceptual loss uses the pretrained VGG [45] network to extract high-level features of the image and is constructed from the Euclidean distance between the HR image features and the SR image features, so as to restore the perceptual quality of the image. The formula of the perceptual loss is as follows:

$$L_{per} = \sum_i \left\| \phi_i(I^{HR}) - \phi_i(I^{SR}) \right\|_2 \quad (11)$$

where $\phi_i(\cdot)$ denotes the i-th layer output of the VGG model.
Edge-Aware Loss.
In order to incorporate the loss of image edge information on top of the pixel loss, we further introduce the edge-aware loss [46]. In the edge-aware loss, the edges of the SR image and HR image are extracted with an edge extraction operator, and the difference between the output edges and the label edges is then calculated. In this paper, the Laplacian operator is used to extract edge features. The formula of the edge-aware loss is as follows:

$$L_{edge} = \left\| \psi(I^{HR}) - \psi(I^{SR}) \right\|_1 \quad (12)$$

where $\psi(\cdot)$ denotes an edge extraction method based on the Laplacian operator.
Our Composite Loss.
Our loss function uses the L1 loss as the basic loss function, adds the perceptual loss to avoid the loss of high-level features, and adds the edge-aware loss to further supervise the integrity of image edge information. The formula is as follows:

$$L = L_1 + \alpha L_{per} + \beta L_{edge} \quad (13)$$

where α and β are hyperparameters. We use our composite loss to optimize the proposed model; the algorithm for training the model is shown in Algorithm 1.
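A minimal sketch of equation (13) in NumPy/SciPy follows. The `feat_fn` argument stands in for the pretrained VGG feature extractor (the toy lambda below is not a real VGG); α = 0.3 and β = 0.1 match the settings reported later in the implementation details:

```python
import numpy as np
from scipy.ndimage import laplace

def composite_loss(sr, hr, feat_fn, alpha=0.3, beta=0.1):
    """L = L1 + alpha * perceptual + beta * edge-aware (equation (13))."""
    l1 = np.abs(hr - sr).mean()                           # pixel-wise L1
    l_per = np.linalg.norm(feat_fn(hr) - feat_fn(sr))     # Euclidean feature distance
    l_edge = np.abs(laplace(hr) - laplace(sr)).mean()     # Laplacian edge difference
    return l1 + alpha * l_per + beta * l_edge

# Toy stand-in for VGG features; a real setup would use pretrained activations.
fake_vgg = lambda img: np.stack([img.mean(), img.std()])
sr, hr = np.random.rand(64, 64), np.random.rand(64, 64)
print(composite_loss(sr, hr, fake_vgg))
```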
Dataset.
The IXISR dataset was constructed by Zhao et al. through further processing of the IXI dataset [41], which contains three types of MR images: 581 T1 volumes, 578 T2 volumes, and 578 PD volumes. In this work, we take the intersection of these three types of MR images to obtain 576 3D volumes of each type. These 3D volumes are then trimmed to 240 × 240 × 96 (H × W × D) to fit the three scaling factors. For SISR, each 3D MR volume is divided into 96 (H × W) gray-scale images. LR images are generated by bicubic downsampling and k-space truncation. For truncation degradation, HR images are first converted to k-space by the discrete Fourier transform (DFT) and then truncated along the height and width directions.
Implementation Details.
Our method is implemented using the Paddle framework. Similar to previous work on the IXISR [41] dataset, we use 70% of the images as the training set, 10% as the validation set, and 20% as the test set. The mini-batch size is set to 16, the parameter α in the loss function is set to 0.3, the parameter β to 0.1, and the parameter n to 2. We use 24 × 24 patches randomly extracted from LR slices together with the corresponding HR areas. Data augmentation is achieved simply by random horizontal flipping and 90-degree rotation [25]. Millions of training iterations are conducted on an NVIDIA GeForce RTX 3090 GPU. We use Xavier initialization [47] and the Adam optimizer for all model parameters, with an initial learning rate of 0.001 for iterative optimization. Under the optimization of Algorithm 1, a single iteration of the proposed model including all modules takes about one minute. The space complexity depends on the number of parameters involved in the calculation; specifically, the parameter counts are reported in Table 1.
Evaluation Metrics.
For quantitative comparison, highly reliable metrics are introduced: the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). The metric scores are computed by comparing the result I^SR obtained by the super-resolution method with the high-resolution image I^HR.
Root Mean Square Error (RMSE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{HW} \sum_{h=0}^{H-1}\sum_{w=0}^{W-1} \bigl(I^{HR}(h,w) - I^{SR}(h,w)\bigr)^2}$$

where $h \in [0, H-1]$ and $w \in [0, W-1]$ together represent the position of a pixel in $I^{HR}$ and $I^{SR}$.
Peak Signal-to-Noise Ratio (PSNR):

$$\mathrm{PSNR} = 10 \times \log_{10} \frac{(2^n - 1)^2}{\mathrm{RMSE}^2}$$

where n is the number of bits per pixel value, which generally takes 8.
Structural Similarity Index (SSIM):

$$\mathrm{SSIM} = \frac{\bigl(2\mu_{HR}\mu_{SR} + c_1\bigr)\bigl(2\sigma_{(HR,SR)} + c_2\bigr)}{\bigl(\mu_{HR}^2 + \mu_{SR}^2 + c_1\bigr)\bigl(\sigma_{HR}^2 + \sigma_{SR}^2 + c_2\bigr)}$$

where $I^{SR}$ is obtained by the super-resolution method and $I^{HR}$ is the high-resolution image; $\mu_{HR}$ and $\mu_{SR}$ are the means; $\sigma_{HR}$ and $\sigma_{SR}$ are the standard deviations; $\sigma_{(HR,SR)}$ is the covariance of HR and SR; and $c_1$ and $c_2$ are small constants.
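The three metrics above can be sketched in a few lines of NumPy. Note one simplification: practical SSIM averages the statistic over local windows, whereas this sketch computes it globally over the whole image; the c1/c2 constants follow common convention and are not stated by the paper:

```python
import numpy as np

def rmse(hr, sr):
    return np.sqrt(np.mean((hr - sr) ** 2))

def psnr(hr, sr, n_bits=8):
    peak = 2 ** n_bits - 1
    return 10 * np.log10(peak ** 2 / np.mean((hr - sr) ** 2))

def ssim_global(hr, sr, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM over the whole image."""
    mu_h, mu_s = hr.mean(), sr.mean()
    var_h, var_s = hr.var(), sr.var()
    cov = ((hr - mu_h) * (sr - mu_s)).mean()
    return ((2 * mu_h * mu_s + c1) * (2 * cov + c2)) / (
        (mu_h ** 2 + mu_s ** 2 + c1) * (var_h + var_s + c2))

hr = np.random.randint(0, 256, (64, 64)).astype(float)
sr = hr + np.random.randn(64, 64)          # illustrative "reconstruction"
print(rmse(hr, sr), psnr(hr, sr), ssim_global(hr, sr))
```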
Experimental Results.
In this paper, the expressiveness of different models is compared on the IXISR dataset (PD, T1, and T2) for ×2 super-resolution. PSNR, SSIM, and RMSE are used to evaluate model expressiveness. Sub-datasets under two different degradations (bicubic degradation and truncation degradation) are used. Bicubic downsampling is widely used to simulate LR image generation in SR work, where HR images are downsampled to generate LR images. Truncation degradation simulates the real image acquisition process: the LR image is obtained by k-space truncation, meaning that the HR image is truncated and sampled in frequency space. Tables 1 and 2, respectively, show the evaluation results of the different models on the PD, T1, and T2 datasets under the bicubic downsampling and truncation degradation methods. From Figures 4 and 5, we can see that our model has a higher expressive ability than the other models. Compared with the two residual-based networks SRResNet and EDSR, our model adds the PCFB, which helps to improve performance.
Ablation Studies.
The proposed method is based on an improvement of SRResNet, so the ablation experiments are also carried out around SRResNet. In Tables 3 and 4, we compare the number of parameters and the performance in PSNR, SSIM, and RMSE for all methods. Note that all results are the average values of PSNR, SSIM, and RMSE calculated from MR images on the same dataset. The experimental results show that, compared with SRResNet, the proposed method improves the PSNR on the LR images obtained from BD and TD by 0.2 dB, 0.33 dB, and 0.06 dB and by 0.17 dB, 0.15 dB, and 0.25 dB, respectively, although the parameter count differs by only 0.01 MB. This shows that the PCFB is effective.
In order to evaluate the effectiveness of the composite loss we constructed, we performed ablation experiments with different loss functions on the PD data, as shown in Table 5. Compared with the L1 and L2 loss functions, our composite loss achieves the best PSNR performance. (From Algorithm 1: back-propagation updates θ_G according to the gradient ∂L/∂θ_G.)
Conclusion and Future Work
High-resolution MR images have smaller voxel sizes, providing clinical physicians with more accurate structural and textural details. However, generating high-resolution MR images usually incurs enormous costs. Image super-resolution is an effective and cost-efficient alternative technique for the high-resolution restoration of low-resolution images. In this work, we propose a novel end-to-end MR image super-resolution method. First, we introduced a parameter-free chunking fusion block (PCFB) that can split the feature map into n branches for better feature fusion without parameters. Second, a training strategy combining perceptual loss, gradient loss, and L1 loss played an important role in accelerating model fitting and improving prediction accuracy. Finally, the proposed method is effective in the super-resolution task for MR images, improving model accuracy. Our future work will focus more on lightweight processing of the model to reduce its parameters while achieving the optimal accuracy reported in this paper.
Data Availability
The IXISR dataset used to support the findings of this study is included within the article [41].
Disclosure
Mingyang Hou and Hongyi Wang should be considered co-corresponding authors.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 6,360.2 | 2023-06-12T00:00:00.000 | ["Computer Science"] |
Monoclonal Antibodies that Inhibit the Proteolytic Activity of Botulinum Neurotoxin Serotype/B
Existing antibodies (Abs) used to treat botulism cannot enter the cytosol of neurons and bind to botulinum neurotoxin (BoNT) at its site of action, and thus cannot reverse paralysis. However, Abs targeting the proteolytic domain of the toxin could inhibit its proteolytic activity intracellularly and potentially reverse intoxication, if they could be delivered intracellularly. As such, antibodies that neutralize toxin activity could serve as potent inhibitory cargos for therapeutic antitoxins against botulism. BoNT serotype B (BoNT/B) contains a zinc endopeptidase light chain (LC) domain that cleaves synaptobrevin-2, a SNARE protein responsible for vesicle fusion and acetylcholine vesicle release. To generate monoclonal Abs (mAbs) that could reverse paralysis, we targeted the protease domain for Ab generation. Single-chain variable fragment (scFv) libraries from immunized mice or humans were displayed on yeast, and 19 unique BoNT/B LC-specific mAbs were isolated by fluorescence-activated cell sorting (FACS). The equilibrium dissociation constants (KD) of these mAbs for BoNT/B LC ranged from 0.24 nM to 14.3 nM (mean KD 3.27 nM). Eleven mAbs inhibited BoNT/B LC proteolytic activity. The fine epitopes of selected mAbs were identified by alanine-scanning mutagenesis, revealing that inhibitory mAbs bound near the active site, the substrate-binding site or the extended substrate-binding site. The results provide mAbs that could prove useful for the intracellular reversal of paralysis and identify epitopes that could be targeted by small molecule inhibitors.
Introduction
Botulinum neurotoxins (BoNTs), produced by the bacterium Clostridium botulinum are the most lethal substances known [1] and are considered to be a high risk for bioterrorism use [2]. All of the serotypes of BoNTs are composed of two polypeptide chains and three functional protein domains [3]. The 100-kDa heavy chain (HC) contains the binding domain (HC) and translocation domain (HN) and the 50-kDa light chain (LC) contains the zinc protease catalytic domain. The C-terminal domain of the HC (HC) binds receptors on the presynaptic membrane [4][5][6][7][8][9] leading to BoNT endocytosis. In the neuron, the N-terminal domain of the HC (HN) forms a channel across the endosomal membrane allowing delivery of the LC into the cytoplasm [10,11]. In the case of BoNT/B, the protease cleaves synaptobrevin-2 (Syb-2), a SNARE protein, resulting in loss of neurotransmitter release and flaccid paralysis (botulism) [12]. BoNTs have stringent specificity requirements and low turnover due to their extended substrate-binding sites [13]. In the holotoxin, the HN "belt" wraps around the catalytic domain and occludes the extended substrate-binding site. The protease is inactive until the HN and belt separate from the LC during the translocation process inside the neuron [3,14].
The only approved treatment for botulism is human or equine polyclonal antitoxin antibodies used to treat infant and adult botulism, respectively [15,16]. To replace equine antitoxin, we have generated a number of extremely high-affinity recombinant monoclonal antibodies (mAbs) to BoNTs [17][18][19] that neutralize the toxins by a variety of mechanisms, including clearing BoNT from the circulation before it can reach the neuron or preventing BoNT entry into neurons [17]. Such recombinant antitoxins for serotypes A, B, C, D and E are in clinical or pre-clinical development [20,21]. Antibodies and antitoxins, however, cannot reverse BoNT paralysis, as they do not cross the neuronal cell membrane. An alternative to antitoxins is small molecule inhibitors of the catalytic domain [22][23][24]. Small molecule inhibitors are at a very early stage of research development; none have been approved for treatment and none have advanced into pre-clinical or clinical development. Obstacles hindering advancement of antitoxin therapies include the difficulty in development of potent inhibitors with exquisite specificity and high affinity and the challenges of getting them selectively into the presynaptic neuron [22,23,25].
Alternatively, BoNT antibodies could potentially inhibit translocation or proteolysis if they could be taken up into the neuron and then also delivered into the cytosol of the neuron via attachment to the toxin. A number of platforms are currently being developed for targeted delivery of therapeutic cargos, recently reviewed in [26]. The advent of these new post-exposure strategies potentially enables the delivery of antibody-based therapies to the site of toxin action in neurons, as has been reported for the delivery of inhibitory peptides [27].
We previously reported the isolation of a single-domain camelid VHH antibody that bound the BoNT/A LC alpha exosite with a KD of 147 pM and potently inhibited SNAP25 cleavage [28]. More recently we have reported scFv and IgG mAbs that bind BoNT/A LC and inhibit SNAP25 cleavage, and like the VHH, these inhibitory mAbs bind at the alpha exosite [29]. Here, we report generation of mouse and fully human antibodies that can inhibit BoNT/B LC proteolytic activity, as well as identification of the mAb epitopes mediating this inhibition.
Libraries Used for Monoclonal Antibody Generation
To generate mAbs that bind BoNT/B LC, yeast display scFv antibody libraries were constructed from immunized humans and mice. Humans were immunized with pentavalent (ABCDE) toxoid and mice were immunized with one of the BoNT/B sub-serotypes or recombinant BoNT/B LC (Table 1), using the immunization strategy described in the methods. scFv yeast display libraries were constructed from antibody variable (V) region genes isolated from either peripheral blood lymphocytes (human libraries) or from spleens (murine libraries). Yeast display libraries were flow sorted for binding to either BoNT/B or recombinant BoNT/B LC. After three to four rounds of sorting, a total of 19 unique scFvs, as determined by DNA sequencing, were identified that bound BoNT/B LC (Table 1).
Characterization of Monoclonal Antibodies
Nineteen mAbs with unique VH CDR3 sequences binding BoNT/B LC were isolated from the human and mouse libraries, as determined by DNA sequencing (Tables 1 and 2). Equilibrium binding constants (KD) for yeast-displayed scFv binding to BoNT/B LC ranged from 0.24-14.3 nM with an average KD of 3.27 nM (Table 2). The epitopes recognized by each scFv were classified into epitope groups based on their ability to compete with each other for binding to BoNT/B LC. In the assay, BoNT/B1 LC (or holotoxins of other subserotypes) captured by yeast-displayed scFv was probed with E. coli-expressed soluble scFv. The 19 mAbs fell into three epitope clusters (I-III) based on their ability to compete for LC binding (Figure 1). The largest cluster (I) comprised 13 scFvs, each of which overlapped with at least one other cluster I mAb. Given the large number of mAbs, this epitope cluster was further divided into three groups (I-1, I-2 and I-3) based on the degree of inhibition.
Inhibition of BoNT LC Endopeptidase Activity
A Fluorescence Resonance Energy Transfer (FRET) assay was used to screen each scFv for inhibition of substrate cleavage by BoNT/B LC. mAbs in epitope cluster I, group 1 showed the greatest inhibition, with scFvs 18E5 and 1B10.1 completely inhibiting substrate cleavage after 30 min of incubation, while scFvs 16B3, 19D22 and 31A5 decreased the cleavage rate by 11-40 fold (Figure 2A). The remaining scFvs in epitope cluster I, group 1 (2B25.1 and 1B22) showed less than a 4.5-fold reduction in the cleavage rate, similar to that seen for scFvs in epitope cluster I, group 2. scFvs in epitope cluster I, group 3, as well as mAbs in epitope clusters II and III, showed minimal (at most 1.63-fold) inhibition.
The degree of inhibition of substrate cleavage did not correlate with the affinity of the scFv for BoNT/B LC. SDS-PAGE was also used to independently verify the ability of scFvs in epitope cluster I to inhibit substrate (Syb-2) cleavage. The SDS-PAGE results were generally, but not entirely, consistent with the inhibition of proteolytic activity determined using FRET (Figure 2B).
Figure 1.
Cartoon of mAb epitope clusters. mAbs were clustered based on their ability to simultaneously bind BoNT/B LC: one scFv displayed on the surface of yeast was used to capture BoNT/B LC out of solution, and the ability of a second scFv to bind the captured LC was then determined. mAb epitopes are shown as circles; overlapping circles indicate mAb pairs that cannot simultaneously bind LC. Red circles indicate mAbs that inhibited BoNT/B endopeptidase activity; green circles represent non-inhibitory mAbs. Dotted circles indicate human-derived mAbs, and solid circles mouse-derived mAbs. Epitope cluster I antibodies were sub-divided into three groups based on the degree of inhibition of BoNT/B endopeptidase activity. Extent of inhibition is denoted as "++", ≥75% inhibition at 5 min; "+", ≥50% inhibition at 5 min; "−", <50% inhibition at 5 min. (B) BoNT/B LC inhibition assayed by SDS-PAGE. Synaptobrevin-2 (Syb-2) and mAbs were mixed together in Tris buffer and BoNT/B LC (20 nM) was added to start the reaction. After 15 min, protein-loading buffer was added to stop the reaction. Cleaved Syb-2 is indicated as cSb2. Extent of inhibition is denoted as "++", ≥75% inhibition; "+", ≥50% inhibition; "−", <50% inhibition at 15 min. The experiments were performed in triplicate. The mean ± SD is presented.
IC50 values of the mAbs displaying the most potent inhibition were measured using FRET (Table 3). IC50 values ranged from 2 nM to 59 nM, with the lowest values for those mAbs (1B10.1, 16B3, 18E5 and 19D22) that showed the greatest slowing of the substrate cleavage rate. We are unable to explain how 1B10.1 had an IC50 less than its KD; however, these were two different assay types performed in different buffers. Of note, the 1B10.1 IC50 was consistent with the amount of cleavage observed in Figure 2.
Defining the mAb Epitopes
The fine epitopes of 17 of the mAbs were determined using yeast display [30], in order to define the binding sites on the BoNT/B LC that resulted in inhibition of endopeptidase activity. Mutations were randomly introduced into the BoNT/B LC gene using error-prone PCR, and the resulting mutants were displayed on the surface of yeast. Each of the 19 mAbs was incubated separately with the yeast-displayed BoNT/B LC library, and the yeast cells were sorted for loss of mAb binding. After three rounds of sorting with decreasing antigen concentration, individual yeast cells were analyzed to identify those not binding mAbs, and the BoNT/B LC gene was sequenced to identify the mutations responsible for the loss of binding (Table 4). These mutations were then mapped onto the crystal structure of BoNT/B LC [31]. While a structure of substrate-bound BoNT/B LC has not been reported, one can use the BoNT/B belt as a substrate surrogate [32]. The belt-binding site is shown in pink in Figure 3. One of the mAbs in epitope cluster I, group 2 (34E8) could not be mapped due to limited expression of soluble scFv/IgG.
Table 4. BoNT/B LC mutants that result in loss of mAb binding. mAbs are grouped (separated by a solid line) based on shared amino acids. Amino acids indicated in bold were shared by the epitopes of more than one mAb.
Epitope locations determined by the above method (Figure 3) correlated well with the classification of epitopes based on the ability of mAbs to bind simultaneously to BoNT/B LC (Figure 1). The three mAbs displaying the most potent inhibition (16B3, 18E5 and 1B10.1; epitope cluster I, group 1) bound an epitope located near the catalytic site and the putative substrate-binding site, and shared amino acids in their epitopes (Figure 3A). The mAb 19D22, displaying the next most potent inhibition, also bound near the catalytic and substrate-binding sites, but at a structurally non-overlapping epitope below the belt relative to the group 1 mAbs (Figure 3A). The remaining inhibitory mAbs in epitope cluster I, group 1 (2B25.1, 1B22 and 31A5) bound above and near the substrate-binding site but remote from the catalytic site (Figure 3B). The epitope of 31A5 includes an amino acid within the substrate-binding site, explaining why it is more inhibitory than 2B25.1 and 1B22. In epitope cluster I, group 2, the weakly inhibitory mAbs 18A7, 18D10 and 31H3 shared amino acids in their epitopes and bound below but near the substrate-binding site, more remote from the catalytic site than the mAbs in epitope cluster I, group 1 (Figure 3C). In epitope cluster I, group 3, the non-inhibitory mAbs B6.1 and 18A6 bound far from the substrate-binding site (Figure 3D). Similarly, in epitope clusters II (mAbs 4B19, 19A9) and III (mAbs 18F2, 31G2 and 31E2), the minimally inhibitory or non-inhibitory mAbs bound remote from the substrate-binding site. The antibodies binding these epitopes that displayed low levels of inhibition may be conformationally specific, stabilizing a catalytically inactive form of the enzyme [14]. Conformationally specific antibodies have been identified for other proteases [33]. Residue K285 of the 18F2 epitope is adjacent to one of the reported Ca2+-binding sites, which includes R284 [34].
The Epitope of the Inhibitory mAb 1B10.1 Includes Active Site Residues
The fine epitope of the highly inhibitory mAb 1B10.1 was further mapped by measuring the loss of free energy of binding (ΔΔG) that occurred when the side chains of amino acids in the epitope were exchanged for alanine [18,30] (Table 5). Amino acids around the 1B10.1 binding site were individually mutated to alanine and displayed on yeast, and their KD values for F(ab) 1B10.1 were measured to calculate ΔΔG values. Four amino acids (indicated in bold in Table 5) were involved in the 1B10.1 epitope: three on the LC (R121, H179, D244) and one on the HN belt (D516). As shown in Figure 4A, mAb 1B10.1 binds at the active site (shown in orange) and overlaps with the S2, S4 and S6 binding sites for Syb-2 (Figure 4B). That one of the critical amino acids for 1B10.1 binding (D516) is on the belt explains the observation that 1B10.1 binds the BoNT/B holotoxin with much higher affinity than the catalytic domain alone (Figure 4B, Table 2). We previously reported that the antibodies that most potently inhibit BoNT/A bind at the alpha-exosite, where the α-helix of the substrate SNAP-25 binds, remote from the substrate cleavage site [28,29]. In contrast, the mAbs showing the greatest inhibition of BoNT/B activity bound at the catalytic site, with the degree of inhibition decreasing with the distance of the epitope from the catalytic site. This is consistent with data indicating that truncated Syb-2 with as few as 21 residues upstream of the cleavage site retains high-affinity binding and cleavage [35,36]. In contrast, SNAP-25 binding and cleavage are significantly reduced when the SNAP-25 N-terminal helix that binds the alpha-exosite is truncated [14].
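For reference, the ΔΔG of an alanine substitution follows directly from the ratio of the mutant and wild-type KD values: ΔΔG = RT ln(KD,mut/KD,wt). The sketch below assumes room temperature (298 K) and kcal/mol units, since the exact conditions used for the published values are not restated here:

```python
import math

R = 1.987e-3   # gas constant, kcal / (mol K)
T = 298.15     # K, assumed room temperature

def ddG(kd_wt_nM: float, kd_mut_nM: float) -> float:
    """Loss of binding free energy when an epitope residue is mutated to
    alanine: ddG = RT * ln(KD_mut / KD_wt)."""
    return R * T * math.log(kd_mut_nM / kd_wt_nM)

# A 10-fold loss of affinity costs roughly 1.4 kcal/mol of binding energy.
print(f"{ddG(1.0, 10.0):.2f} kcal/mol")
```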
Ethics Section
The USAMRIID Institutional Animal Care and Use Committee approved the animal care and use protocol to conduct the animal studies reported here. The animals used in this study were euthanized using carbon dioxide gas following the AVMA Guidelines on Euthanasia prior to spleen removal.
The University of California, San Francisco (UCSF) Institutional Review Board approved the human use protocol used for the studies described here. Human donors were laboratory workers being immunized to work with BoNT who were recruited via an informational letter and who signed informed consent under a protocol approved by the UCSF IRB.
Oligonucleotides for Library Construction
The primers for site-directed mutagenesis were designed and synthesized per the QuikChange ® Site-Directed Mutagenesis Kit (Agilent Technologies, Santa Clara, CA, USA) instructions. The primers for human and mouse library construction were synthesized as described previously [29,37].
Strains, Media, Antibodies, and Toxin
YPD medium was used for growth of Saccharomyces cerevisiae strain EBY100; SD-CAA for selection of pYD4-transformed EBY100; and SG-CAA for induction of scFv expression on the surface of EBY100. E. coli strain DH5α was used for subcloning and preparation of plasmid DNA. E. coli strain BL21 was used for BoNT/B LC fragment and Syb-2 expression. E. coli strain TG1 was used for soluble scFv antibody expression. Pure holotoxin BoNT/B1 was purchased from Metabiologics (Madison, WI, USA). Other subserotypes were obtained from the U.S. Army Medical Research Institute of Infectious Diseases and from Dr. Eric Johnson, University of Wisconsin. All IgGs were expressed from Chinese hamster ovary (CHO) cells, while the mouse anti-SV5 antibody was purified from hybridoma cells and labeled with an AlexaFluor-488 or AlexaFluor-647 labeling kit (Invitrogen, Carlsbad, CA, USA). All secondary antibodies, including PE- or AlexaFluor-647-conjugated goat anti-human Fc, goat anti-mouse Fc and goat anti-human F(ab), were obtained from Jackson ImmunoResearch Laboratories (West Grove, PA, USA).
Protein Expression and Purification
The BoNT/B LC (amino acid residues 1-441) encoding cDNA was amplified from a synthetic BoNT/B1 gene. The Syb-2-encoding cDNA was purchased from ATCC (MGC-45137). The cDNAs were subcloned into the plasmid pET15b and then the protein was expressed in E. coli BL21 (DE3) cells induced by 0.5 mM IPTG at 18 °C overnight. The cells were broken with Novagen BugBuster ® Master (EMD Millipore, Billerica, MA, USA), and hexahistidine-tagged BoNT/B LC was purified by immobilized metal affinity chromatography (IMAC) using Ni-NTA agarose (Qiagen, Valencia, CA, USA) followed by cation exchange chromatography.
For scFv expression, the scFv genes of the selected antibodies that bound to BoNT/B LC were subcloned into the expression vector pSYN1 [38] and were transformed into E. coli TG1 cells. Following induction with 0.5 mM IPTG at 18 °C overnight, periplasmic proteins were extracted by osmotic shock and the hexahistidine-tagged scFvs were purified by IMAC using Ni-NTA agarose.
The plasmid for expression of the BoNT/B-specific Syb-2-derived FRET substrate, yellow fluorescent protein-synaptobrevin-2-cyan fluorescent protein-synaptobrevin-2-yellow fluorescent protein (YsCsY), with a hexahistidine affinity tag at the N-terminus and 16 repeats of Asp-Glu (DE-tag) at the C-terminus, was constructed as described previously for the BoNT/A-specific SNAP25-derived FRET substrate [39,40], except with residues 27-94 of mouse Syb-2 as the substrate peptide sequence. YsCsY was purified from cell-free extracts of E. coli BL21 cells by IMAC using Ni-NTA agarose, followed by anion-exchange chromatography.
Mouse Immunization and Spleen Harvest
Sixteen female CD-1 mice were vaccinated three times at four-week intervals with 5 µg of BoNT/B-HC to establish protection against potentially fatal active toxin challenges. Immunized mice were then challenged with 20,000-200,000 LD50 of BoNT/B1 (Okra, five mice per group) at 11, 14, and 16 weeks or at 11, 16, and 19 weeks. Four additional mice were immunized with BoNT/B LC as above but without holotoxin challenge. Mice were euthanized and spleens were removed five days after the final toxin challenge/boosts. Studies using mice were conducted in compliance with the Animal Welfare Act and other federal statutes and regulations relating to animals and experiments involving animals, and adhere to principles stated in the Guide for the Care and Use of Laboratory Animals, National Research Council, 2011. The facility where this research was conducted is fully accredited by the AAALAC/I.
Yeast Displayed scFv Library Construction and Library Sorting
Total RNA was isolated from mouse spleens or from white blood cells of 12 adult human donors immunized with BoNT toxoids subtype A, B, C, D and E. The cDNA was synthesized by RT-PCR with oligo dT as the primer using a ThermoScript RT-PCR Kit (Invitrogen, Carlsbad, CA, USA). The VH and VK gene repertoires were amplified from cDNA by PCR with primers without gap tails and then amplified with gap-tailed primers, following gel purification for library construction. For the mouse libraries, VK gene repertoires were first cloned into the BssHII/NotI sites of the plasmid pYD4 leading to a pYD4-VK library [29]; then the gap-tailed VH genes were transformed into EBY100 together with the NcoI/SalI digested pYD4-VK libraries using LiAC as described previously [41]. For the human libraries, the VH and VK genes first were coupled with a (G4S)3 linker to obtain full-length scFv genes by splicing using overlap extension PCR as previously described [37]. The scFv gene was inserted into the NcoI/Not I sites of pYD2 plasmid and transformed into EBY100. The library size was determined by plating serially diluted transformation mixture on SD-CAA plates. The scFv libraries were induced by culturing in SG-CAA medium with 10% SD-CAA for at least 24 h.
For library sorting, the libraries were incubated with 50 nM of BoNT/B LC at RT for 1 h. All subsequent washing and staining steps were performed at 4 °C using ice-cold FACS buffer (phosphate-buffered saline, 0.5% bovine serum albumin, pH 7.4). Washed yeast clones were incubated with 2 μg/mL of 1B10.1 and B6.1 mAbs for 60 min, washed, and then incubated with 1 μg/mL of PE-labeled goat anti-human Fc antibody (Jackson ImmunoResearch, West Grove, PA, USA) and 1 μg/mL Alexa-647-labelled anti-SV5 mAb. After washing, yeast clones were flow sorted on a FACSAria II, and the population with BoNT/B LC-binding yeast was gated and collected. The collected yeast clones were cultured and induced for the next round of sorting. After three rounds of sorting, the collected yeast clones were plated on SD-CAA medium and cultured at 30 °C for 48 h. Individual colonies were picked, grown, and induced in 96 deep-well plates. These colonies were then screened for binding using the same staining conditions used for sorting. Unique BoNT/B LC-binding clones were identified by DNA sequencing.
Measurement of Yeast Displayed scFv KD
The equilibrium dissociation constant (KD) of yeast-displayed scFvs was measured by flow cytometry, as previously described with modifications [28,41]. Briefly, 1 × 10^6 yeast cells displaying scFvs were incubated at room temperature for 1 h in FACS buffer with six concentrations of BoNT/B LC or holotoxin spanning from 10-fold below to 10-fold above the expected KD. Samples were washed with ice-cold FACS buffer, incubated with 2 μg/mL each of 1B10.1 and B6.1 at 4 °C for 60 min, and then with 1 μg/mL of PE-conjugated goat anti-human IgG and 1 μg/mL of Alexa-647-labeled anti-SV5 mAb at 4 °C for 30 min. Finally, the yeast clones were washed with ice-cold FACS buffer and the mean fluorescence intensity (MFI) of binding was measured by flow cytometry. The MFI was plotted against the antigen concentration and the KD was determined from the equation y = m1 + m2 × m0/(m3 + m0), where y = MFI at a given antigen concentration, m0 = antigen concentration, m1 = MFI of the no-antigen control, m2 = MFI at saturation, and m3 = KD.
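As a worked illustration of this fit, the sketch below estimates KD by nonlinear least squares. It is our own minimal example with hypothetical MFI values, not the authors' analysis script.

```python
# Minimal sketch: fit y = m1 + m2*m0/(m3 + m0) to MFI vs. antigen concentration;
# the fitted m3 is the KD estimate. All data values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def saturation(m0, m1, m2, m3):
    """m1: no-antigen MFI; m2: MFI at saturation; m3: KD."""
    return m1 + m2 * m0 / (m3 + m0)

conc_nM = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # six antigen concentrations
mfi = np.array([120, 310, 790, 1550, 2300, 2700])       # measured MFI (hypothetical)

popt, pcov = curve_fit(saturation, conc_nM, mfi, p0=[100, 3000, 3])
m1, m2, kd = popt
print(f"KD = {kd:.2f} nM")
```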
Classification of mAbs Based on Overlap of Epitopes
mAbs were classified into epitope groups based on their ability to compete with each other for binding to antigen. Briefly, yeast-displayed scFvs were incubated for 60 min with 25 nM of BoNT/B LC in solution, after which 50 nM of soluble myc-tagged scFv mAb was added and incubated at 4 °C for 60 min. Binding of the soluble scFv to BoNT/B LC was detected by incubation for 60 min with 1 μg/mL of PE-conjugated anti-myc antibody and 1 μg/mL of Alexa-647-labeled anti-SV5 antibody, and measured by flow cytometry. A soluble scFv binding an epitope overlapping that of the yeast-displayed scFv showed no PE signal, whereas one binding a non-overlapping epitope showed a positive PE signal.
MAb Inhibition of Synaptobrevin-2 Cleavage by BoNT/B LC
For FRET-based analysis of substrate cleavage, 0.8 μM of the BoNT/B-specific Syb-2-derived FRET substrate YsCsY was mixed with 200 nM of scFv or IgG in FRET buffer (20 mM HEPES, pH 7.5, containing 1.25 mM DTT, 10 μM ZnCl2, 0.2% Tween-20, 0.1 mg/mL BSA) in a black 96-well plate (Corning, New York, NY, USA). After pre-incubation for 30 min, 4 nM BoNT/B LC was added to initiate the reaction. With excitation at 425 nm and a cutoff at 435 nm, the emissions at 528 nm and 485 nm were measured at t = 0 and at 5 min intervals using a fluorescence reader (SpectraMax Gemini, Molecular Devices LLC, Sunnyvale, CA, USA). The ratio Fluo528/485 (R) was used to evaluate YsCsY cleavage. The extent of inhibition of BoNT/B activity (%) was calculated as (RmAb − RBLC)/(RYsCsY − RBLC) × 100%, where RmAb is the 528/485 nm ratio of a sample with mAb, RBLC is the ratio of the control with BoNT/B LC only, and RYsCsY is the ratio of the sample with YsCsY only. The inhibitory activity of a mAb was denoted most potent ("++") when inhibition was greater than 75% at 5 min, medium potency ("+") when between 50% and 75%, or absent ("−") when less than 50% at 5 min. The reaction rate (v, the rate of change of the 528/485 nm ratio) was calculated by fitting the ratio over the first 10 min to a simple linear regression model, Y = v0X + C, where Y = the 528/485 nm ratio, v0 = the initial rate (slope), X = time, and C = the y-intercept.
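The arithmetic of this readout is compact enough to show directly. The sketch below is our own illustration with hypothetical 528/485 nm ratios; r_mab, r_blc and r_ycsy correspond to RmAb, RBLC and RYsCsY in the text.

```python
# Minimal sketch of the FRET readout arithmetic described above (hypothetical data).
import numpy as np

def percent_inhibition(r_mab, r_blc, r_ycsy):
    # inhibition (%) = (RmAb - RBLC) / (RYsCsY - RBLC) * 100
    return (r_mab - r_blc) / (r_ycsy - r_blc) * 100.0

def initial_rate(t_min, ratio):
    # fit ratio vs. time over the first 10 min to Y = v0*X + C; return the slope v0
    mask = t_min <= 10
    v0, c = np.polyfit(t_min[mask], ratio[mask], 1)
    return v0

t = np.array([0, 5, 10, 15, 20], dtype=float)   # minutes
r = np.array([2.10, 1.95, 1.82, 1.70, 1.60])    # hypothetical 528/485 nm ratios
print(percent_inhibition(r_mab=1.95, r_blc=0.90, r_ycsy=2.10))  # 87.5 -> "++"
print(initial_rate(t, r))                                        # negative slope
```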
For SDS-PAGE analysis of substrate cleavage, 200 nM of BoNT/B LC and 8 μM of scFv were mixed in Tris buffer (50 mM Tris, pH 8.0), and 5 μg of Syb-2 was then added to initiate the reaction. After 15 min the reaction was stopped by addition of SDS-PAGE loading buffer. Samples were heated for 10 min at 95 °C in SDS-PAGE loading buffer, resolved on a 15% SDS-PAGE gel, and detected by Coomassie staining [28]. A mAb was considered to have the most potent inhibitory activity ("++") if the non-cleaved Syb-2 level was greater than 75%, medium potency ("+") if between 50% and 75%, and no inhibitory activity ("−") if less than 50%.
Determination of IC50 Values
The concentration leading to 50% inhibition (IC50) of selected scFvs or IgGs was measured using the FRET assay [39]. Briefly, 0.8 μM of YsCsY was first mixed with two-fold serial dilutions of scFv or IgG starting at 50 nM. After pre-incubation for 15 min at 30 °C, 2 nM of BoNT/B LC was added. Emissions at 528 and 485 nm with a cutoff at 435 nm were measured at t = 0 and at 40 s intervals. The 528/485 nm ratio (R) was calculated to quantify YsCsY cleavage, with its rate of change taken as the cleavage rate. The initial reaction rate (v0) was calculated by fitting R over the first 400 s to a linear regression model, Y = v0X + C. Finally, IC50 values were determined by fitting v0 against log mAb concentration with the sigmoidal dose-response (variable slope) model (GraphPad Prism, version 6.0, La Jolla, CA, USA). The use of 2 nM BoNT/B LC may have limited the ability to quantitate mAbs with IC50 values below this level.
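The same variable-slope dose-response fit can be reproduced outside GraphPad. The sketch below is our own minimal example with hypothetical initial rates; the model parametrization mirrors the standard four-parameter logistic.

```python
# Minimal sketch: fit initial rates v0 against log10(mAb) with a variable-slope
# sigmoid and report the IC50. All rate values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(logc, bottom, top, log_ic50, hill):
    # Y = bottom + (top - bottom) / (1 + 10**((log_ic50 - logc) * hill))
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ic50 - logc) * hill))

conc_nM = 50.0 / 2.0 ** np.arange(8)    # two-fold dilutions starting at 50 nM
v0 = np.array([0.02, 0.03, 0.05, 0.10, 0.22, 0.35, 0.42, 0.45])  # hypothetical

popt, _ = curve_fit(dose_response, np.log10(conc_nM), v0,
                    p0=[0.02, 0.45, np.log10(3.0), -1.0])  # negative Hill slope
print(f"IC50 ~ {10 ** popt[2]:.2f} nM")
```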
Random Mutation Library Construction and Sorting
A BoNT/B LC fragment library with random mutations was prepared using error-prone PCR with the primers pYD-For/pYD-Rev and DNA polymerase Paq5000 (Agilent Tech, Palo Alto, CA, USA) plus 12.5 μM MnCl2, as previously described [30]. The PCR product was gel purified, inserted into the NcoI/NotI sites of the plasmid pYD4, and transformed into EBY100. The library was cultured in SD-CAA medium for 48 h, after which 50 mL of the culture was induced in 500 mL SG-CAA medium at 18 °C for 48 h. To determine the amino acids critical for mAb binding, taking mAb 1B10.1 as an example, the library was sorted by incubating the clones with 1B10.1 labeled with Alexa-647 and with a mAb binding a non-overlapping epitope (e.g., mAb 2B23) labeled with Alexa-488. After labeling, the yeast population binding both mAbs was collected for the second round of sorting, in which the library was again incubated with 1B10.1-Alexa-647 and 2B23-Alexa-488. The population binding mAb 2B23 but not 1B10.1 was collected, grown, induced with SG-CAA, and subjected to a third round of sorting under the same conditions. The sort output was plated, and individual colonies displaying BoNT/B LC were analyzed for loss of binding to mAb 1B10.1. Clones with loss of binding were sequenced to identify the responsible mutations. This process was repeated to identify the residues associated with loss of binding for the other mAbs.
Site-Directed Mutagenesis
Alanine mutants of BoNT/B LC were prepared following the manufacturer's instructions for the QuikChange II-E Site-Directed Mutagenesis Kit (Agilent Tech, Palo Alto, CA, USA). Briefly, primers containing the mutation were used for 18 cycles of PCR amplification with the plasmid pYD4 containing the BoNT/B LC gene (pYD4-BLC) or the BoNT/B LC-HN gene (pYD4-B LC-HN) as template. The PCR product was digested with DpnI to remove the parental methylated and hemimethylated DNA, purified using StrataClean Resin, and transformed into E. coli XL1-Blue. The alanine mutants of BoNT/B LC or BoNT/B LC-HN in pYD4 were then individually transformed into EBY100, grown in SD-CAA medium, and induced for expression on the surface of EBY100. Each construct was verified by DNA sequencing.
Fine Epitope Mapping of mAb 1B10.1
The fine epitope map of mAb 1B10.1 was determined by calculating the change in Gibbs free energy (ΔΔG) upon alanine-scanning mutagenesis of BoNT-LC-HN as described [30]. Residues flanking those shown to knock out 1B10.1 binding were mutated to alanine by site-directed mutagenesis and displayed on the yeast surface. The KD of 1B10.1 for yeast-displayed wild-type and mutant BoNT-LC-HN was determined by titration of serially diluted 1B10.1 F(ab). All KD values were determined in triplicate. ΔΔG was calculated to evaluate the binding contribution of each amino acid in the epitope using the formula ΔΔG (kcal/mol) = RT × ln(KD,Mut/KD,Wt)/1000, where R = 1.9858775 cal/(K·mol) and T is the absolute temperature in Kelvin. Using the ΔΔG values, the epitope was modeled on the surface of the crystal structure of BoNT/B (PDB ID: 1S0F [31]). Amino acids in the epitope were colored from red (the greatest ΔΔG) to wheat (the smallest ΔΔG) to show their different contributions to the epitope.
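The ΔΔG arithmetic is a one-liner; the sketch below is our own illustration, assuming room temperature (298.15 K, not stated in the text) and a hypothetical 50-fold loss of affinity.

```python
# Minimal sketch: DDG (kcal/mol) = R*T*ln(KD_mut / KD_wt) / 1000, R in cal/(K*mol).
import math

R = 1.9858775          # gas constant, cal/(K*mol), as given in the text
T = 298.15             # assumed assay temperature in Kelvin (room temperature)

def ddg_kcal(kd_mut_nM, kd_wt_nM):
    return R * T * math.log(kd_mut_nM / kd_wt_nM) / 1000.0

# hypothetical example: an alanine mutant weakening binding 50-fold
print(ddg_kcal(kd_mut_nM=50.0, kd_wt_nM=1.0))   # ~2.32 kcal/mol
```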
Conclusions
This work identifies, for the first time, mAbs that potently inhibit substrate cleavage by BoNT/B and defines their epitopes within the LC domain. The epitopes of the mAbs showing the greatest inhibition of BoNT/B activity mapped differently from those of the mAbs inhibiting substrate cleavage by BoNT/A, consistent with differences in how the cognate substrates bind the two BoNT serotypes. The mAbs presented here could form the basis of a therapeutic inhibitory cargo delivered into neurons to reverse paralysis caused by BoNT intoxication [26]. For example, liposomes containing inhibitory peptides and delivered into cells inhibited the neuroparalytic effect of BoNT/A in vitro [27]. Similar delivery vehicles could be used to deliver either the scFv mAbs or their genes to achieve reversal of paralysis. The inhibitory epitopes defined in this study could also be targeted for the development of small-molecule inhibitors. | 7,019.6 | 2015-08-26T00:00:00.000 | [ "Biology" ] |
Improved ethanol production by a xylose-fermenting recombinant yeast strain constructed through a modified genome shuffling method
Background Xylose is the second most abundant carbohydrate in lignocellulosic biomass hydrolysate. The fermentation of xylose is essential for the bioconversion of lignocelluloses to fuels and chemicals. However, wild-type strains of Saccharomyces cerevisiae are unable to utilize xylose. Many efforts have been made over the past few decades to construct recombinant yeast strains with enhanced xylose fermentation, yet xylose fermentation remains challenging due to the complexity of lignocellulosic biomass hydrolysate. In this study, a modified genome shuffling method was developed to improve xylose fermentation by S. cerevisiae. Recombinant yeast strains were constructed by recursive DNA shuffling, recombining the entire genome of Pichia stipitis with that of S. cerevisiae. Results After two rounds of genome shuffling and screening, a promising recombinant yeast strain, ScF2, was obtained. It was able to utilize high concentrations of xylose (100-250 g/L) and produce ethanol. The recombinant yeast ScF2 produced ethanol more rapidly than the naturally occurring xylose-fermenting yeast P. stipitis, with improved ethanol titre and markedly enhanced xylose tolerance. Conclusion The modified genome shuffling method developed in this study was more effective and easier to operate than the traditional protoplast-fusion-based method. The recombinant yeast strain ScF2 obtained in this study is a promising candidate for industrial cellulosic ethanol production. To further enhance its xylose fermentation performance, ScF2 should be additionally improved by metabolic engineering and directed evolution.
Background
In recent years, there has been growing interest in the utilization of renewable resources for the production of bioethanol, which is regarded as the cleanest liquid fuel alternative to fossil fuels. Apart from starch crops and sugarcane, lignocellulosic biomass, such as wood waste and agricultural waste, is considered the most promising feedstock for bioethanol production, as it is the most abundant source of sugars and does not compete with food resources. Xylose is the second most abundant sugar in lignocellulosic biomass after glucose, and its efficient fermentation is required to develop economically viable processes for producing bioethanol from lignocellulosic biomass [1]. Saccharomyces cerevisiae is regarded as the industrial workhorse for ethanol production because it produces ethanol in high titre from hexose sugars and has high ethanol tolerance; however, it cannot ferment xylose [2]. The yeast Pichia stipitis is one of the best naturally occurring xylose-fermenting yeasts and can convert xylose to ethanol in high yield, but it has low ethanol and sugar tolerance. This has limited its use as an industrial strain for large-scale bioethanol production from lignocellulosic biomass. The primary traits desired of an industrial strain for fermenting lignocellulosic hydrolysate are efficient utilization of hexoses and pentoses, fast fermentation rates, high ethanol production, and high tolerance to ethanol, sugars and fermentation inhibitors [3].
While rational metabolic engineering has been effective in improving the xylose fermentation phenotypes of S. cerevisiae strains [4], it normally involves the constitutive expression of multiple genes followed by mutagenesis and post-evolutionary engineering, and it is therefore tedious, labour-intensive and time-consuming. Whole-genome engineering approaches such as genome shuffling, on the other hand, offer the advantage of simultaneous changes at different positions throughout the entire genome without requiring genome sequence data or network information, and can thus explore a much broader phenotypic space than rational tools [5]. In contrast to the complex pathway design required for rational metabolic engineering, genome shuffling uses recursive genetic recombination analogous to DNA shuffling [6]. This strategy has been successfully applied for rapid strain improvement of both prokaryotic and eukaryotic cells [7,8]. However, it largely depends on the efficiency of traditional protoplast fusion techniques, which suffer from fusant instability, low fusion efficiency, and time-consuming fusant regeneration [9]. The aim of this study was therefore to rapidly construct a recombinant yeast strain with enhanced xylose fermentation using a modified genome shuffling method, involving recursive recombination of the P. stipitis genome with that of S. cerevisiae through direct genome isolation and transformation. The improved method shares the advantages of protoplast-fusion-based genome shuffling for rapid improvement of complex phenotypes, while being time-saving, easier to operate, and more efficient in gene recombination.
Modified method of genome shuffling
Protoplast fusion has been regarded as a traditional and effective way to accelerate strain evolution and has been applied in many studies. However, it suffers from the low efficiency of fusion induced by polyethylene glycol (PEG), labour-intensive and time-consuming protoplast preparation and fusant regeneration, and fusant instability. The aim of this study was to develop a rapid and reliable modified genome shuffling method to construct a recombinant yeast strain with improved xylose fermentation performance. The method is based on in vivo recombination of whole genomes from different yeast strains: genomic DNA of one parent strain is extracted and transferred into the other parental strain to allow recombination of the two genomes. Potential recombinant strains with the required features are selected on properly designed screening plates, and their fermentation performance is then evaluated and compared.
Specifically, in this study, S. cerevisiae and P. stipitis were used as the parents for recombinant yeast strain construction. In the first round, the whole genome of P. stipitis was extracted and transferred into S. cerevisiae by electroporation. The recombinant strains were selected on YNBX plates containing 6.7 g/L yeast nitrogen base, 50 g/L xylose and 20 g/L agar, incubated at 30°C for 7-10 days; S. cerevisiae cannot grow under these conditions [10]. Eight hybrid yeast strains were obtained and were further evaluated for ethanol production in YNB broth containing 6.7 g/L YNB, 150 g/L xylose, and 50 mM phosphate buffer at pH 7.0 and 30°C for 72 h. The recombinant strain with the best ethanol production performance was F1-8 (Table 1). This strain was then used as the starting strain for the second round of genome shuffling.
In the second round, the whole genome of S. cerevisiae was transferred into F1-8 by electroporation and the recombinant strains were screened on YNBXE plates containing 6.7 g/L yeast nitrogen base, 50 g/L xylose, 50 g/L ethanol and 20 g/L agar. The hybrid yeast strain F1-8 showed no growth on this selective plate. Three positive colonies were obtained, and the most promising strain, based on competency in ethanol production, was ScF2. As a reference, protoplast fusion was conducted to construct a hybrid yeast from F1-8 and S. cerevisiae; no fusants survived on the same YNBXE selective plates. Afterwards, the xylose fermentation capability of the recombinant strains F1-8 and ScF2, and of the parental strain P. stipitis, was evaluated in 150 mL shaking flasks filled with 50 mL of fermentation medium containing 120 g/L xylose. The results are shown in Figure 1. As can be seen, ScF2 presented an improved ethanol production rate and ethanol titre compared to both P. stipitis and F1-8.
Random amplified polymorphic DNA (RAPD)
To obtain molecular evidence of recombination events produced by the modified genome shuffling method, we compared the amplification profiles of the parental strains and the potential recombinant strains by random amplified polymorphic DNA (RAPD) analysis. Using the OPA kit with RP1-4, RP-2, RP4-2 and SOY as primers (Table 2), a large number of DNA bands were obtained from the genomes of the recombinant yeast strains (Figure 2). Clear differences were observed between the RAPD profiles of the parents and ScF2 (Figure 2A), and consistent RAPD profiles were obtained for ScF2 sampled at different time points over a period of nine months (Figure 2B).
Sugar utilization
The hybrid nature of ScF2 was confirmed by comparing its sugar utilization pattern with those of its two parental strains (Table 3). The recombinant strain ScF2 combined the sugar utilization characteristics of S. cerevisiae and P. stipitis. It demonstrated enhanced utilization of fructose, xylose, maltose and cellobiose compared to both parental strains, lower glucose and raffinose utilization than S. cerevisiae, and lower mannose, sucrose and lactose utilization than P. stipitis. For the remaining sugars listed in Table 3, its utilization pattern was similar to that of P. stipitis.
Fermentation performance of ScF2 at high initial xylose concentrations
In this part of the study, xylose fermentation was conducted at high initial xylose concentrations (100, 150, 200, and 250 g/L) using ScF2 and P. stipitis. The results are shown in Figure 3. At an initial concentration of 100 g/L, xylose was completely utilized by day 3 by both strains, with 42 g/L of ethanol obtained by ScF2 and 38 g/L by P. stipitis. The maximum ethanol production of 51 g/L was obtained on day 5 in 150 g/L xylose by ScF2, whereas 48 g/L ethanol was obtained by P. stipitis under the same conditions. The recombinant strain ScF2 also demonstrated slightly higher rates of xylose consumption and ethanol production at both of these initial xylose concentrations. When the initial xylose concentration was increased to 200 g/L, the difference between the rates of xylose consumption and ethanol production by ScF2 and P. stipitis became more noticeable: approximately 49 g/L ethanol was obtained by ScF2 on day 5, whereas 43 g/L ethanol was obtained by P. stipitis on day 8. At an initial xylose concentration of 250 g/L, xylose consumption and ethanol production by P. stipitis were significantly inhibited by the high xylose content and only about 20 g/L of ethanol was obtained on day 7. In contrast, the high xylose content only slightly inhibited xylose consumption and ethanol production by ScF2, with a maximal ethanol concentration of 47 g/L on day 6. The highest ethanol titre of 51 g/L was obtained by the recombinant strain ScF2 at an initial xylose concentration of 150 g/L; further increases in the initial xylose concentration caused a slight decrease in the maximal ethanol titre and an increase in fermentation time. Although ScF2 demonstrated much higher xylose tolerance and improved ethanol titre compared to P. stipitis, its ethanol titre was limited to around 50 g/L owing to incomplete conversion of xylose. As for its parent P. stipitis, the main byproduct of the recombinant strain ScF2 was xylitol; along with the enhanced ethanol production, its xylitol production rate was also higher than that of P. stipitis (Figure 4).
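To put these titres in context, the sketch below computes apparent yields per gram of xylose supplied. It is our own back-of-the-envelope calculation from the figures quoted above; because conversion was incomplete at the higher loadings, these are lower bounds on the true yield per gram of consumed xylose.

```python
# Apparent ethanol yields for ScF2 from the titres reported above.
# The 0.51 g/g figure is the theoretical maximum mass yield of ethanol
# from xylose (3 xylose -> 5 ethanol + 5 CO2).
results = {          # initial xylose (g/L): ethanol titre (g/L)
    100: 42, 150: 51, 200: 49, 250: 47,
}
for xylose, ethanol in results.items():
    y = ethanol / xylose
    print(f"{xylose} g/L xylose -> {ethanol} g/L ethanol, "
          f"apparent yield {y:.2f} g/g ({y / 0.51:.0%} of theoretical maximum)")
```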
Fermentation of glucose, xylose and their mixture
In this part of the study, the fermentation of glucose, xylose and their mixture by strains P. stipitis, S. cerevisiae and ScF2 was investigated independently under batch cultivation conditions. The total sugar concentration was maintained at 100 g/L for all experiments, which were conducted in duplicate. As shown in Figure 5, P. stipitis and ScF2 could utilize both glucose and xylose, while S. cerevisiae could utilize only glucose. Glucose was completely consumed by S. cerevisiae within 24 h, by P. stipitis within 48 h, and by ScF2 within 56 h; however, ScF2 produced more ethanol from glucose (47 g/L) than P. stipitis (45 g/L). Complete utilization of xylose was observed for both ScF2 and P. stipitis, with the former being faster in both xylose consumption and ethanol production. In the fermentation of the glucose-xylose mixture, again both ScF2 and P. stipitis utilized the two sugars, with glucose being consumed at a much faster rate. The S. cerevisiae strain consumed only glucose, with a maximal ethanol concentration of 22 g/L. A slight decrease in xylose consumption rate was observed for both ScF2 and P. stipitis under this condition compared to when xylose was the sole carbon source. ScF2 again exhibited slightly higher rates of both xylose consumption and ethanol production than P. stipitis. The maximal ethanol concentration was 40 g/L for ScF2 at 144 h, and 31 g/L for P. stipitis at 96 h.
Xylose fermentation by ScF2 precultured in high-concentration glucose or xylose
It has been reported that a metabolic lag exists upon substrate transition [11], suggesting that a yeast strain precultured on glucose prior to its use as inoculum for xylose fermentation may show a longer metabolic lag phase. To further improve the xylose fermentation performance of ScF2, seed cultures of ScF2 were prepared in yeast peptone medium containing 10 g/L yeast extract, 20 g/L peptone, and 150 g/L glucose or xylose. Cells were harvested and inoculated into fresh fermentation medium containing 150 g/L xylose at an initial OD600 of 3.0. The results are displayed in Figure 6. Slight enhancement of cell growth and ethanol production was observed for ScF2 precultured in xylose. The maximal ethanol titre was reached at 96 h by xylose-precultured ScF2 and at 120 h by glucose-precultured ScF2 (Table 4). Interestingly, although preculture in glucose resulted in a slightly longer lag phase for cell growth and ethanol production, it gave a marginally higher ethanol titre, 52 g/L, than preculture in xylose (Table 4). Notably, irrespective of the preculture substrate, ScF2 presented a higher xylose consumption rate and ethanol productivity than P. stipitis.
Discussion
S. cerevisiae is the workhorse of industrial ethanol production [12]. However, hydrolysate from biomass contains both hexoses and pentoses, and wild-type strains of S. cerevisiae cannot utilize pentoses such as xylose. Utilization of xylose is very important for improving the ethanol yield from biomass hydrolysate and making the process economically viable. Numerous recombinant S. cerevisiae strains have been constructed by heterologous expression of xylose utilization pathways from P. stipitis and overexpression of the endogenous XKS gene through rational metabolic engineering in combination with evolutionary engineering [4,13,14], and promising recombinant strains have been obtained through worldwide efforts over the past few decades. Protoplast fusion is widely used to improve the fermentative properties of industrial yeasts and is a potential method to rapidly construct a hybrid strain combining the traits of both parental strains. Attempts were made to construct recombinant yeast strains through protoplast fusion of S. cerevisiae and P. stipitis in order to obtain a hybrid yeast with enhanced ethanol tolerance and xylose fermentation performance [15,16]. Although the hybrid yeast showed improved ethanol tolerance, its xylose fermentation rate and ethanol yield were lower than those of its parent strain P. stipitis [16]. In addition, it was discovered that mononucleate fusants quickly segregated into their parental-type strains [17]. More recently, protoplasts of thermotolerant S. cerevisiae VS3 and mesophilic, xylose-utilizing C. shehatae were fused by electrofusion [3]. The fusants were selected based on their growth at 42°C and ability to utilize xylose. The mutant fusant CP11 was found to be stable, with an ethanol yield of 0.459 ± 0.012 g/g, a productivity of 0.67 ± 0.15 g/L/h and a fermentation efficiency of 90%; however, the maximal ethanol titre obtained was limited to 26-32 g/L. Genome shuffling uses recursive genetic recombination through protoplast fusion and is an effective and rapid strategy to obtain strains with improved phenotypes [5]. In this study, we attempted to construct a recombinant yeast strain using a modified genome shuffling method in which recursive direct genome isolation and transformation, instead of recursive protoplast fusion, were used for gene recombination. In the first round, the whole genome of P. stipitis was extracted and transferred into S. cerevisiae, and the recombinant strains were screened on YNBX plates containing 6.7 g/L yeast nitrogen base, 50 g/L xylose, and 20 g/L agar. Eight positive colonies were obtained and evaluated for ethanol production in YNB broth containing 150 g/L xylose, and the recombinant yeast strain F1-8 was selected for its better xylose fermentation performance (Table 1). This strain was then used as the starting strain for the second round of genome shuffling, in which the whole genome of S. cerevisiae was extracted and transferred into F1-8 and the resulting recombinant strains were screened on YNBXE plates containing 6.7 g/L yeast nitrogen base, 50 g/L xylose, 50 g/L ethanol and 20 g/L agar. Three recombinant colonies were obtained and the most promising strain, ScF2, was selected for its enhanced xylose fermentation performance. The final recombinant yeast ScF2 presented an improved ethanol production rate and ethanol titre compared to both P. stipitis and the first-round recombinant strain F1-8 (Figure 1).
These results indicate that the modified genome shuffling method adopted in this study is efficient in generating a recombinant yeast strain with improved xylose fermentation capability. In combination with a proper screening strategy, this modified genome shuffling was able to rapidly construct a hybrid yeast strain with desired traits from both parental yeasts. The method is fast, straightforward, and easy to operate; to our knowledge, this is the first report of such a method.
Molecular analysis was carried out to verify the hybrid nature of ScF2. The random amplified polymorphic DNA (RAPD) technique relies on arbitrary primers annealed to genomic DNA at low temperature; it detects genetic polymorphisms and does not depend on prior knowledge of species-specific sequences [18,19]. From Figure 2A, clear differences can be observed between the RAPD profiles of ScF2 and its parental strains, indicating that ScF2 differs from its parents at the genetic level. The RAPD profile of ScF2 was closer to that of P. stipitis, suggesting that more of the genetic material in ScF2 might derive from P. stipitis. The consistent RAPD profiles of ScF2 sampled at different time points, shown in Figure 2B, demonstrate the genetic stability of ScF2 and reconfirm the high efficiency of gene recombination achieved by the modified genome shuffling method.
Sugar utilization tests showed that the recombinant yeast ScF2 was able to utilize most of the tested pentoses, hexoses and disaccharides (Table 3). The combined sugar utilization characteristics of S. cerevisiae and P. stipitis observed for ScF2 indicate the successful recombination of the genomes of both parents: ScF2 was able to assimilate more sugars than S. cerevisiae and showed enhanced utilization of several sugars compared with P. stipitis.
The xylose fermentation performance of ScF2 was tested in fermentation medium initially containing high xylose concentrations (100-250 g/L). The results displayed in Figure 3 clearly demonstrate that ScF2 exhibited faster rates of both xylose consumption and ethanol production than the naturally occurring xylose-fermenting yeast P. stipitis. In addition, it was much more tolerant of high xylose concentrations (Figure 3D) and produced more ethanol under the same cultivation conditions. This enhancement in ethanol production rates and sugar tolerance can be attributed to the parent strain S. cerevisiae, indicating the recombination of its genes in the hybrid yeast ScF2.
The maximal ethanol production of 51 g/L was obtained on day 5 in fermentation medium initially containing 150 g/L xylose by ScF2, whereas 48 g/L ethanol was obtained on day 8 by P. stipitis under the same conditions. Further increases in the initial xylose concentration did not raise ethanol production; on the contrary, they resulted in decreased ethanol titres and longer fermentation times for both ScF2 and P. stipitis. It has been reported that ethanol acts as a strong repressor preventing the induction of specific enzymes needed for xylose utilization in P. stipitis, and that at ethanol concentrations above 30 g/L the induction of xylose reductase (XR) and xylitol dehydrogenase (XDH) is greatly decreased [11]. The ethanol concentration plateaued at around 50 g/L for ScF2 across the range of initial xylose concentrations (100-250 g/L), indicating repression of the xylose utilization pathway by ethanol. This feature of ScF2 resembles that of P. stipitis because the xylose utilization pathway in both strains comes from the same source. By contrast, recombinant S. cerevisiae strains constructed by heterologous expression of the P. stipitis xylose utilization pathway have produced ethanol at titres higher than 60 g/L [4], presumably because the regulatory system in such rationally constructed strains is that of the S. cerevisiae host and the pathway genes are normally expressed from strong constitutive promoters. The limitation of the ethanol titre to around 50 g/L for ScF2 therefore indicates that the regulation of the xylose utilization pathway in this hybrid yeast was mainly inherited from P. stipitis. Although the titre of 51 g/L ethanol achieved by ScF2 is lower than those of rationally constructed recombinant S. cerevisiae strains, it is so far the highest ethanol titre obtained by hybrid yeasts. Hybrid yeasts obtained through traditional protoplast fusion normally presented lower ethanol titres [3,16] and slower ethanol production rates or lower ethanol yields than their parents [15,16]. This might be attributed to the unstable nature of such hybrid strains, owing to the different backgrounds of the parent species and the limited genetic material transferred by protoplast fusion techniques. The results above suggest that the modified genome shuffling method enables efficient gene transfer and is therefore capable of constructing stable recombinant yeast strains with enhanced fermentation performance in a short time.
It is noticeable that, besides ethanol, high xylose concentration was another repressor for P. stipitis. With increasing initial xylose concentration, the difference in the rates of xylose consumption and ethanol production between ScF2 and P. stipitis became more significant. Higher xylose concentrations had almost no effect on the maximal ethanol production of ScF2 (around 50 g/L), although a longer fermentation time was necessary; in contrast, they greatly reduced the maximal ethanol production of P. stipitis. When the initial xylose concentration was increased to 250 g/L, only around 20 g/L of ethanol was obtained by P. stipitis. Interestingly, the maximal cell biomass remained unchanged with increasing initial xylose content for both ScF2 and P. stipitis, as indicated by a constant OD600 of approximately 40, suggesting inhibition of cell growth under high xylose concentrations. High xylose content affected the rates of xylose consumption and ethanol production much more negatively for P. stipitis than for ScF2, signifying that ScF2 had much better xylose tolerance. This evidence strongly supports the recombination of S. cerevisiae genes in the hybrid yeast ScF2, as S. cerevisiae strains are normally more resistant to the osmotic pressure of high sugar concentrations [1,12].
As expected, xylitol was the main byproduct of ScF2 (Figure 4); it was produced at a faster rate and to a slightly higher concentration by ScF2 than by P. stipitis. It has been reported that a hybrid yeast constructed through traditional protoplast fusion of S. cerevisiae and P. stipitis produced considerably more xylitol [16].
These results further confirm that the modified genome shuffling method, in combination with a proper screening strategy, succeeded in constructing a recombinant yeast strain combining improved phenotypes from both parents.
The performance of ScF2 was further tested in the fermentation of glucose, xylose and their mixture. The results displayed in Figure 5 demonstrate that ScF2 utilized both glucose and xylose more rapidly than P. stipitis and produced more ethanol, although its glucose consumption rate was slower than that of S. cerevisiae. Like its parent strain P. stipitis, ScF2 consumed glucose much faster than xylose when fermenting the glucose-xylose mixture. Glucose repressed xylose consumption in both ScF2 and P. stipitis, with the effect being more pronounced for the latter. Compared with P. stipitis, ScF2 displayed faster rates of xylose consumption and ethanol production in the sugar mixture fermentation and produced more ethanol. These results are in full agreement with those of the previous sections and further demonstrate the improved performance of ScF2.
More recently, it was reported that repitched cell populations grown on xylose gave faster fermentation rates, particularly on xylose [11]: sugar transition leads to a longer lag phase, and repitching yeasts into the previously fermented sugar can eliminate this lag and thereby enhance fermentation rates. To further improve the performance of ScF2, we investigated the effect of preparing the seed culture in high-concentration glucose or xylose. The results shown in Figure 6 revealed that seed culture prepared in high-concentration xylose gave slightly faster cell growth and ethanol production, but did not improve the maximal ethanol concentration (Table 4). Interestingly, seed culture prepared in high-concentration glucose resulted in higher ethanol production (~52 g/L) for both ScF2 and P. stipitis, and correspondingly higher ethanol yields, possibly because less byproduct was formed under these conditions. Regardless of the preculture conditions, ScF2 consistently displayed faster xylose consumption and ethanol production than P. stipitis, again confirming the enhancement of its fermentation performance by the modified genome shuffling method. It is worth noting that the lag phase due to sugar transition was insignificant in our study, possibly because of the smaller inoculum size (OD600 = 3) used here compared with that reported in the literature (OD600 = 40) [11]. In industrial applications such a high inoculum size is impractical, so strain improvement plays a key role in achieving enhanced fermentation rates and higher ethanol productivity.
From the above analysis, the hybrid yeast ScF2 constructed using the modified genome shuffling method developed in this study displayed higher xylose and ethanol tolerance, faster rates of xylose consumption and ethanol production, and higher ethanol production. The combined features of both parents, S. cerevisiae (ethanol and sugar tolerance) and P. stipitis (xylose utilization), were evident in ScF2. Ethanol repression, however, limited the ethanol titre of the hybrid yeast to around 50 g/L. Even so, this titre is higher than those obtained by hybrid yeasts constructed through traditional protoplast fusion techniques, indicating that the modified genome shuffling method adopted in this study is more efficient in gene transfer and recombination. During direct genome isolation, the genomic DNA was randomly sheared into fragments (>30 kb), which were then transferred into the host strain by electroporation. This enhanced the gene transfer and recombination efficacy compared with protoplast fusion, in which gene transfer depends mostly on the efficiency of cell fusion. In addition, recursive genome transfer and screening allows further enhancement of gene recombination and sequential addition of desired traits; using this method it should be possible to add further traits, such as temperature tolerance and inhibitor resistance, to construct a robust yeast strain for the cellulosic ethanol industry. Direct fusion of isolated fungal nuclei to yeast protoplasts has been reported [20], but such a method involves protoplast preparation and fusant regeneration and is therefore tedious and time-consuming. Compared with the protoplast-fusion-based approach, our modified genome shuffling method has the advantages of high efficiency, high speed and easy operation. Although the hybrid yeast strain constructed in this study has a limited ethanol titre of around 50 g/L, it can be further improved by minimal rational metabolic engineering and directed evolution.
Conclusion
In this study, we developed a modified genome shuffling method for the rapid construction of a recombinant yeast strain from S. cerevisiae and P. stipitis. In combination with a properly designed screening strategy, a promising hybrid yeast, ScF2, was constructed. This hybrid yeast displayed improved tolerance to xylose and ethanol and enhanced rates of xylose consumption and ethanol production compared with its parents. The modified genome shuffling method, combined with a proper screening strategy, was effective and easy to operate for constructing a recombinant strain with desired phenotypes in a short time; further strain improvement should be possible if the method is integrated with rational metabolic engineering and directed evolution.
Strains and media
Pichia stipitis CBS 6054, a haploid yeast, was obtained from the Centraalbureau voor Schimmelcultures (CBS, Baarn) Culture Collection and maintained on YPX agar slants containing (g/L): xylose, 20.0; yeast extract, 10.0; peptone, 20.0; agar, 20.0, at pH 5.5 ± 0.2. Saccharomyces cerevisiae ATCC 24860, a diploid yeast, was procured from the American Type Culture Collection (ATCC) and maintained on YPD agar slants containing (g/L): glucose, 20.0; yeast extract, 10.0; peptone, 20.0; agar, 20.0, at pH 5.5 ± 0.2. Both strains were stored in YPX or YPD broth containing 20% glycerol at −80°C and subcultured on YPX and YPD plates, respectively, at regular intervals. Yeast cells from freshly streaked YPD plates were inoculated into YPD broth and incubated at 30°C and 200 rpm for 24 h. Cells were harvested and used as the source for genomic DNA extraction or direct genome transformation, or as the inoculum for fermentation experiments.
Genomic DNA extraction
Cells of Pichia stipitis CBS 6054 were cultured in 50-mL centrifuge tubes containing 10 mL YPD broth at 30°C and 200 rpm overnight. They were harvested by centrifugation at 5000 × g and 4°C for 5 min and washed three times with 20 mL sterile water. Cells were resuspended in 200 μL lysis buffer (100 mM Tris-HCl pH 8.0, 50 mM EDTA and 0.5% SDS) and transferred to a 1.5 mL microcentrifuge tube, and 0.2 g of glass beads (0.5 mm) were added. The cell suspension was thoroughly mixed at maximal speed on a high-speed vortex mixer. After centrifugation at 5000 × g for 5 min at 4°C, the supernatant was transferred to a new 1.5 mL microcentrifuge tube and 500 μL of phenol:chloroform:isoamyl alcohol (25:24:1) was added. The mixture was briefly vortexed and centrifuged again at 12000 × g and 4°C for 10 min. The upper layer was carefully withdrawn and transferred to a new 1.5 mL microcentrifuge tube, 1 mL of ice-cold 95% (v/v) ethanol was added, and the tube was briefly mixed by inversion and stored at −20°C for 2 h to precipitate the genomic DNA. The sample was then centrifuged at 12000 × g and 4°C for 10 min and the supernatant carefully discarded to retain the genomic DNA pellet. The pellet was washed three times with 1 mL of 75% (v/v) ice-cold ethanol and dried by incubation at 37°C for 1 h. The genomic DNA was resuspended in 200 μL of sterile water and stored at −20°C until use.
Electroporation
The host yeast strain S. cerevisiae was cultured in 150-mL shaking flasks containing 50 mL YPD broth at 30°C and 200 rpm overnight. Cells were harvested by centrifugation at 5000 × g and 4°C for 5 min and washed three times with 20 mL sterile water each time. Cells were resuspended in 20 mL pretreatment solution (0.1 M lithium acetate, 0.1 M dithiothreitol (DTT), 0.6 M sorbitol, 0.01 M Tris-HCl, pH 7.5) and incubated at room temperature for 30 min. The suspension was centrifuged at 5000 × g and 4°C for 5 min and the supernatant discarded. Cells were resuspended in 20 mL of 1 M sorbitol, centrifuged again under the same conditions, and the supernatant was again discarded. Cells were then resuspended in 80 μL of 1 M sorbitol solution and mixed with 20 μL of the isolated P. stipitis genomic DNA solution. The mixture was transferred into an electroporation cuvette and incubated on ice for about 5 min. Electroporation was conducted using a Gene Pulser Xcell electroporation system (Bio-Rad, USA) under the conditions prescribed in the manufacturer's instructions. After electroporation, 1 mL of 1 M sorbitol solution was added gently into the cuvette, which was then incubated at 30°C for about 2 h. The transformed cells were transferred to a 50 mL sterile centrifuge tube containing 5 mL YPD broth and incubated at 30°C and 200 rpm for 3 h. The cultivation broth was spread on the predefined screening plates, which were incubated at 30°C for 7-10 days. Positive clones were then selected, subcultured on YPD plates, and evaluated for xylose fermentation in shaking flasks. Potential recombinant strains were used as the host for the next round of whole-genome transformation.
Shaking flask fermentation
One loop of a positive clone was transferred from a 1-day YPD plate to a 150-mL Erlenmeyer flask containing 50 mL of YPD broth. Yeasts were grown for 24 h at 200 rpm on a rotary shaker at 30°C. A small volume of this seed culture was inoculated into each 150-mL Erlenmeyer flask containing 50 mL of fermentation medium (FM) containing (g/L): yeast extract, 7; peptone, 2; (NH4)2SO4, 2; KH2PO4, 2.05; Na2HPO4, 0.25, to give an initial inoculum size of OD600 = 0.5. The flasks were shaken at 100 rpm and 30°C. Samples were withdrawn periodically to determine the concentrations of sugar, ethanol, xylitol and cell biomass. Fermentation experiments were conducted in duplicate.
Analytical methods
Cell biomass was monitored spectrophotometrically by measuring absorbance at 600 nm. Samples were diluted so that the measured optical density (OD600) was below 0.70, ensuring that the Beer-Lambert law applies. Samples were filtered through 0.45 μm filters and stored at −20°C until analysed on a 1200 Series HPLC system (Agilent Technologies Inc.) equipped with a refractive index detector. Sugars, ethanol and xylitol were analysed on a Sugar-Pak I column (Waters, USA) at 75°C with a mobile phase of 0.001 mM EDTA-Ca and a flow rate of 0.4 mL/min.
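The dilution bookkeeping implied by this procedure is simple but easy to get wrong; the sketch below is our own illustration of it (the 0.70 limit is the one stated above).

```python
# Minimal sketch: recover the undiluted OD600 from a diluted reading, rejecting
# measurements outside the approximately linear Beer-Lambert range.
def effective_od600(measured_od, dilution_factor, linear_limit=0.70):
    if measured_od > linear_limit:
        raise ValueError("reading outside the linear range; dilute further")
    return measured_od * dilution_factor

# e.g. a culture diluted 1:50 reading OD 0.62 corresponds to OD600 = 31
print(effective_od600(0.62, 50))
```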
Sugar utilization tests
Sugar utilization tests were carried out in YNB broth containing 6.7 g/L yeast nitrogen base (YNB) and 2 g/L of each tested sugar individually. ScF2 and its parents (P. stipitis and S. cerevisiae) were inoculated into 50 mL centrifuge tubes containing 10 mL YNB broth with each tested sugar; YNB broth without sugar was used as the control. The tubes were incubated in an orbital shaker at 200 rpm and 30°C for 48 h, with experiments conducted in duplicate [3]. At the end of the experiments, OD600 was measured and compared. | 7,876.4 | 2012-07-18T00:00:00.000 | [ "Engineering", "Biology" ] |
Most Efficient Digital Filter Structures: The Potential of Halfband Filters in Digital Signal Processing
In this book the reader will find a collection of chapters authored or co-authored by a large number of experts from around the world, covering the broad field of digital signal processing. The book intends to provide highlights of current research in the digital signal processing area, showing recent advances in this field. It is mainly intended for researchers in digital signal processing and related areas, but it is also accessible to anyone with a scientific background desiring an up-to-date overview of this domain. Each chapter is self-contained and can be read independently of the others. These nineteen chapters present methodological advances and recent applications of digital signal processing in various domains such as communications, filtering, medicine, astronomy, and image processing.
In Section 4 of this chapter we consider the application of the two-channel DF as a building block of a multiple-channel tree-structured FDMUX filter bank according to Fig. 2, typically applied for on-board processing in satellite communications [Danesfahani et al. (1994); Göckler & Felbecker (2001); Göckler & Groth (2004); Göckler & Eyssele (1992)]. In the case of a great number of channels and/or challenging bandwidth requirements, the implementation of the front-end DF is crucial, since it must be operated at (extremely) high sampling rates. To cope with this issue, in Section 4 we present an approach to parallelise at least the front end of the FDMUX filter bank of Fig. 2.
Single halfband filters
In this Section 2 of the chapter we recall the properties of the well-known HBF with real coefficients (real HBF with centre frequencies fc ∈ {f0, f4} = {0, fn/2} according to (1)), and investigate those of the complex HBF with passbands (stopbands) centred at fc = c·fn/8, c = 1, 2, 3, 5, 6, 7, which require roughly the same amount of computation as their real HBF prototype (fc = f0 = 0). In particular, we derive the most efficient elementary SFG for sample rate alteration. These are given both for LP FIR [Göckler (1996b)] and MP IIR HBF, for real- and complex-valued input and/or output signals, respectively. The expenditure of all eight versions of HBF according to (1) is determined and thoroughly compared. The organisation of Section 2 is as follows: First, we recall the properties of both classes of the afore-mentioned real HBF, the linear-phase (LP) FIR and the minimum-phase (MP) IIR approaches. The efficient multirate implementations presented are based on the polyphase decomposition of the filter transfer functions [Bellanger (1989); Göckler & Groth (2004); Mitra (1998); Vaidyanathan (1993)]. Next, we present the corresponding results on complex HBF (CHBF), the classical Hilbert transformer (HT), obtained by shifting a real HBF to a centre frequency according to (2).
Real halfband filters (RHBF)
In this subsection we recall the essentials of LP FIR and MP IIR lowpass HBF with real-valued impulse responses h(k) = hk ←→ H(z), where H(z) represents the associated z-transform transfer function. From such a lowpass (prototype) HBF a corresponding real highpass HBF is readily derived by replacing z with −z, using the modulation property of the z-transform [Oppenheim & Schafer (1989)] in accordance with (1); this results in a frequency shift by f4 = fn/2 (Ω4 = π).
Linear-Phase (LP) FIR filters
Throughout this Section 2 we describe a real LP FIR (lowpass) filter by its non-causal impulse response, with its centre of symmetry located at the time or sample index k = 0, i.e. h(k) = h(−k), so that the associated frequency response H(e^jΩ) ∈ ℝ is zero-phase [Mitra & Kaiser (1993); Oppenheim & Schafer (1989)].
Specification and properties
A real zero-phase (LP) lowpass HBF, also called a Nyquist(2) filter [Mitra & Kaiser (1993)], is specified in the frequency domain as shown in Fig. 5, for instance for an equiripple or constrained least squares design, allowing for a don't-care transition band between passband and stopband [Mintzer (1982); Mitra & Kaiser (1993); Schüssler & Steffen (1998)]. Passband and stopband constraints are identical, δp = δs = δ, and the cut-off frequencies satisfy the symmetry relationship Ωs = π − Ωp. As a result, the zero-phase desired function D(e^jΩ) ∈ ℝ as well as the frequency response H(e^jΩ) ∈ ℝ are centrosymmetric about D(e^jπ/2) = H(e^jπ/2) = 1/2. From this frequency-domain symmetry property it immediately follows that H(e^jΩ) + H(e^j(Ω−π)) = 1, indicating that this type of halfband filter is strictly complementary [Schüssler & Steffen (1998)]. According to (5), a real zero-phase FIR HBF has a symmetric impulse response of odd length N = n + 1 (denoted as a type I filter in [Mitra & Kaiser (1993)]), where n represents the even filter order. In the case of a minimal (canonic) monorate filter implementation, n is identical to the minimum number nmc of delay elements required for realisation, where nmc is known as the McMillan degree [Vaidyanathan (1993)]. Due to the odd symmetry of the HBF zero-phase frequency response about the transition region (don't-care band according to Fig. 5), roughly every other coefficient of the impulse response is zero [Mintzer (1982); Schüssler & Steffen (1998)], resulting in the additional filter length constraint N = 4m − 1, m ∈ ℕ. Hence, the non-causal impulse response of a real zero-phase FIR HBF is characterized by h(0) = 1/2 and h(2k) = 0 for k ≠ 0 [Bellanger et al. (1974); Göckler & Groth (2004); Mintzer (1982); Schüssler & Steffen (1998)], giving rise to efficient implementations. Note that the name Nyquist(2) filter is justified by the zero coefficients of the impulse response (9). Moreover, if an HBF is used as an anti-imaging filter of an interpolator for upsampling by two, the coefficients (9) are scaled by the upsampling factor of two, replacing the central coefficient with h0 = 1 [Fliege (1993); Göckler & Groth (2004); Mitra (1998)]. As a result, independently of the application, this coefficient never contributes to the computational burden of the filter.
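The sketch below numerically verifies the two structural properties just stated, the vanishing even-indexed coefficients and strict complementarity, for a truncated ideal-lowpass (least-squares) halfband impulse response; it is our own illustration, not code from the cited references.

```python
# Minimal sketch: build the truncated least-squares halfband taps and check
# (a) h(2k) = 0 for k != 0 with h(0) = 1/2, and
# (b) H(e^jW) + H(e^j(W-pi)) = 1 for all W (strict complementarity).
import numpy as np

m = 3                                   # order n = 4m - 2 = 10, length N = 11
k = np.arange(-(2 * m - 1), 2 * m)      # non-causal indices -(2m-1)..(2m-1)
h = np.where(k == 0, 0.5, np.sin(np.pi * k / 2) / (np.pi * np.where(k == 0, 1, k)))

omega = np.linspace(0, np.pi, 512)
H = lambda w: np.real(np.exp(1j * np.outer(w, k)) @ h)    # zero-phase response
print(np.max(np.abs(H(omega) + H(omega - np.pi) - 1)))    # ~1e-16: complementary
print(h[k % 2 == 0])                                      # ~0 except h(0) = 0.5
```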
Design outline
Assuming an ideal lowpass desired function consistent with the specification of Fig. 5, with a cut-off frequency of Ωt = (Ωp + Ωs)/2 = π/2 and zero transition bandwidth, and minimizing the integral squared error, yields the coefficients hk = sin(πk/2)/(πk) for k ≠ 0 and h0 = 1/2 [Göckler & Groth (2004); Parks & Burrus (1987)], in compliance with (9). This least squares design is optimal for multirate HBF in conjunction with spectrally white input signals since, e.g. in the case of decimation, the overall residual power aliased by downsampling onto the usable signal spectrum is minimum [Göckler & Groth (2004)]. To master the Gibbs phenomenon connected with (10), a centrosymmetric smoothed desired function can be introduced in the transition region [Parks & Burrus (1987)]. Requiring, for instance, a transition band of width ΔΩ = Ωs − Ωp > 0 and using spline transition functions for D(e^jΩ), the above coefficients (10) are modified accordingly [Göckler & Groth (2004); Parks & Burrus (1987)]. Least squares design can also be subjected to constraints that confine the maximum deviation from the desired function: the Constrained Least Squares (CLS) design [Evangelista (2001); Göckler & Groth (2004)]. This approach has also been applied efficiently to the design of high-order LP FIR filters with quantized coefficients [Evangelista (2002)]. Subsequently, all comparisons are based on equiripple designs obtained by minimizing the maximum deviation of |H(e^jΩ) − D(e^jΩ)| over the region of support, according to [McClellan et al. (1973)]. To this end, we briefly recall the clever use of this minimax design procedure to obtain the exact values of the predefined (centre and zero) coefficients of (9), as proposed in [Vaidyanathan & Nguyen (1987)]. To design a two-band HBF of even order n = N − 1 = 4m − 2, as specified in Fig. 5:
i) design a single-band zero-phase FIR filter g(k) ←→ G(z) of odd order n/2 = 2m − 1 for a passband cut-off frequency of 2Ωp which, as a type II filter [Mitra & Kaiser (1993)], has a centrosymmetric zero-phase frequency response about G(e^jπ) = 0;
ii) upsample the impulse response g(k) by two by inserting an additional zero coefficient between any pair of coefficients (without actually changing the sample rate), which yields an interim filter impulse response h′(k) ←→ H′(z²) of the desired odd length N with a centrosymmetric frequency response about H′(e^jπ/2) = 0 [Göckler & Groth (2004); Vaidyanathan (1993)];
iii) lift the passband (stopband) of H′(e^jΩ) to 2 (0) by replacing the zero centre coefficient with 2h(0) = 1; and
iv) scale the coefficients of the final impulse response h(k) ←→ H(z) with 1/2.
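The four-step procedure is easily reproduced with a modern minimax design routine. The sketch below is our own rendering using scipy.signal.remez (the cited papers predate this library, so the tool choice is an assumption); the steps i)-iv) are marked in the comments.

```python
# Minimal sketch of the Vaidyanathan-Nguyen halfband design trick via remez.
import numpy as np
from scipy.signal import remez

m = 8                          # halfband order n = 4m - 2 = 30, length N = 4m - 1
wp = 0.22                      # passband edge in cycles/sample (Omega_p = 2*pi*wp)

# step i): single-band type II filter g of length 2m, passband edge 2*wp
g = remez(2 * m, [0.0, 2 * wp], [1.0], fs=1.0)

h = np.zeros(4 * m - 1)
h[::2] = g                     # step ii): upsample by two -> H'(z^2)
h[2 * m - 1] = 1.0             # step iii): centre coefficient 2h(0) = 1
h *= 0.5                       # step iv): scale by 1/2

# checks: every second coefficient is zero, and |H| = 1/2 at Omega = pi/2
print(np.allclose(h[1::2][np.arange(2 * m - 1) != m - 1], 0.0))          # True
print(abs(abs(np.exp(-1j * np.pi / 2 * np.arange(len(h))) @ h) - 0.5))   # ~0
```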
Efficient implementations
Monorate FIR filters are commonly realized using one of the direct forms [Mitra (1998)]. In our case of an LP HBF, minimum expenditure is obtained by exploiting coefficient symmetry, as is well known [Mitra & Kaiser (1993); Oppenheim & Schafer (1989)]. The count of operations or hardware required is included below in Table 1 (column MoR). Note that the "multiplication" by the central coefficient h0 does not contribute to the overall expenditure. The minimal implementation of an LP HBF decimator (interpolator) for twofold down(up)sampling is based on the decomposition of the HBF transfer function into two (type 1) polyphase components E0(z²) and E1(z²) [Bellanger (1989); Göckler & Groth (2004); Vaidyanathan (1993)]. In the case of decimation, downsampling of the output signal (cf. upper branch of Fig. 1) is shifted from the filter output to the system input by exploiting the noble identities [Göckler & Groth (2004); Vaidyanathan (1993)], as shown in Fig. 6(a). As a result, all operations (including delay and its control) can be performed at the reduced (decimated) output sample rate fd = fn/2. In Fig. 6(b), the input demultiplexer of Fig. 6(a) is replaced with a commutator where, for consistency, the shimming delay z_d^(−1/2) := z^(−1) must be introduced [Göckler & Groth (2004)]. As an example, Fig. 7(a) recalls an optimum causal real LP FIR HBF decimator of order n = 10 for twofold downsampling [Bellanger et al. (1974)]. Here, the odd-numbered coefficients of (9) are assigned to the zeroth polyphase component E0(zd) of Fig. 6(b), whereas the only non-zero even-numbered coefficient h0 belongs to E1(zd). For implementation we assume a digital signal processor as the hardware platform. Hence, the overall computational load of its arithmetic unit is given by the total number of operations NOp = NM + NA, comprising multiplications (M) and additions (A), times the operational clock frequency fOp [Göckler & Groth (2004)]. All contributions to the expenditure are listed in Table 1 as a function of the filter order n, where the McMillan degree includes the shimming delays. (Table 1 compares the monorate filter, MoR with fOp = fn, the decimator, Dec with fOp = fn/2, and the interpolator, Int with fOp = fn/2, in terms of nmc, NM, NA and NOp; e.g. NM = (n + 2)/4, and NOp = 3n/4 + 3/2 for the decimator versus 3n/4 + 1/2 for the interpolator.) Obviously, both coefficient symmetry (NM < n/2) and the minimum memory property (nmc < n) can be exploited [Bellanger (1989); Fliege (1993); Göckler & Groth (2004)]. (As shown in [Göckler & Groth (2004)], for Nyquist(M) filters with M > 2 only either coefficient symmetry or the minimum memory property can be exploited.) Applying the multirate transposition rules to the optimum decimator of Fig. 7(a), as detailed in Section 3 and [Göckler & Groth (2004)], yields the optimum LP FIR HBF interpolator, as depicted in Fig. 6(c) and Fig. 7(b), respectively. Table 1 shows that the interpolator obtained by transposition requires less memory than that published in [Bellanger (1989); Bellanger et al. (1974)].
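The polyphase decimator structure can be rendered in a few lines of code. The sketch below is our own minimal illustration (not from the cited references): the commutator splits the input into two streams at fn/2, the general taps run in one branch, and the centre tap h0 = 1/2 with its alignment delay runs in the other; direct filtering followed by downsampling serves as the reference.

```python
# Minimal sketch of a polyphase halfband decimator; all arithmetic runs at fn/2.
import numpy as np
from scipy.signal import lfilter

# causal least-squares halfband taps (see the design outline above); n = 4m - 2
m = 4
k = np.arange(-(2 * m - 1), 2 * m)
h = np.where(k == 0, 0.5, np.sin(np.pi * k / 2) / (np.pi * np.where(k == 0, 1, k)))

def hbf_decimate(x, h):
    c = (len(h) - 1) // 2              # centre index (odd); h[c] = 1/2
    e0 = h[::2]                        # the general (even-indexed) taps
    x_even, x_odd = x[::2], x[1::2]    # commutator: two streams at half rate
    y0 = lfilter(e0, [1.0], x_even)    # FIR branch at the decimated rate
    d = (c + 1) // 2                   # centre tap sees x[2n - c] = x_odd[n - d]
    y1 = 0.5 * np.concatenate([np.zeros(d), x_odd])[:len(y0)]
    return y0 + y1

x = np.random.default_rng(1).standard_normal(256)
print(np.allclose(hbf_decimate(x, h), lfilter(h, [1.0], x)[::2]))   # True
```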
Minimum-Phase (MP) IIR filters
In contrast to the FIR HBF, an MP IIR HBF is always described by its transfer function H(z) in the z-domain.
Specification and properties
The magnitude response of an MP IIR lowpass HBF is specified in the frequency domain by |D(e^{jΩ})|, as shown in Fig. 8, again for a minimax or equiripple design. The constraints of the designed magnitude response |H(e^{jΩ})| are characterized by the passband and stopband deviations, δ_p and δ_s, which, according to [Lutovac et al. (2001); Schüssler & Steffen (1998)], are related by the power-complementarity constraint (1 − δ_p)² + δ_s² = 1. The cut-off frequencies of the IIR HBF satisfy the symmetry condition (6), and the squared magnitude response |H(e^{jΩ})|² is centrosymmetric about |D(e^{jπ/2})|² = |H(e^{jπ/2})|² = 1/2. We consider real MP IIR lowpass HBF of odd order n.
H(z) has a single pole at the origin of the z-plane, (n − 1)/2 complex-conjugate pole pairs on the imaginary axis within the unit circle, and all zeros on the unit circle [Schüssler & Steffen (2001)]. Hence, the odd-order MP IIR HBF is suitably realized by a parallel connection of two allpass polyphase sections,

H(z) = (1/2) [A_0(z²) + z^{−1} A_1(z²)],

where the allpass polyphase components can be derived by alternating assignment of adjacent complex-conjugate pole pairs of the IIR HBF to the polyphase components. The polyphase components A_l(z²), l = 0, 1, consist of cascade connections of second-order allpass sections of the form (a_i + z^{−2})/(1 + a_i z^{−2}), where the coefficients a_i, i = 0, 1, ..., (n − 1)/2 − 1, with a_i < a_{i+1}, denote the squared moduli of the HBF complex-conjugate pole pairs in ascending order; the complete set of n poles is given by 0, ±j√a_0, ±j√a_1, ..., ±j√a_{(n−1)/2−1} [Mitra (1998)].
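A direct behavioural sketch of this parallel-allpass structure is given below; the coefficient values passed in are assumptions for illustration, not a design result.

```python
import numpy as np
from scipy import signal

def iir_hbf(x, a0, a1):
    """H(z) = (A_0(z^2) + z^-1 A_1(z^2)) / 2, each A_l a cascade of
    second-order allpass sections (a_i + z^-2) / (1 + a_i z^-2)."""
    def chain(sig, coeffs):
        for a in coeffs:
            sig = signal.lfilter([a, 0.0, 1.0], [1.0, 0.0, a], sig)
        return sig
    delayed = np.concatenate(([0.0], x[:-1]))        # the z^-1 branch
    return 0.5 * (chain(x, a0) + chain(delayed, a1))

y = iir_hbf(np.random.randn(64), a0=[0.11], a1=[0.54])  # assumed a_i, 0 < a_i < 1
```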
Design outline
In order to compare MP IIR and LP FIR HBF, we subsequently consider elliptic filter designs.
Since an elliptic (minimax) HBF transfer function satisfies the conditions (6) and (13), the design result is uniquely determined by specifying the passband Ω_p (stopband Ω_s) cut-off frequency and one of the three remaining parameters: the odd filter order n, the allowed minimal stopband attenuation A_s = −20 log(δ_s), or the allowed maximum passband attenuation A_p = −20 log(1 − δ_p). There are two common approaches to elliptic HBF design. The first group of methods is performed in the analogue frequency domain and is based on classical analogue filter design techniques: the desired magnitude response D(e^{jΩ}) of the elliptic HBF transfer function H(z) to be designed is mapped onto an analogue frequency domain by applying the bilinear transformation [Mitra (1998); Oppenheim & Schafer (1989)]. The magnitude response of the analogue elliptic filter is approximated by appropriate iterative procedures to satisfy the design requirements [Ansari (1985); Schüssler & Steffen (1998; 2001); Valenzuela & Constantinides (1983)]. Finally, the analogue filter transfer function is remapped to the z-domain by the bilinear transformation. The other group of algorithms starts from an elliptic HBF transfer function, as given by (17). The filter coefficients a_i, i = 0, 1, ..., (n − 1)/2 − 1, are obtained by iterative nonlinear optimization techniques minimizing the peak stopband deviation. For a given transition bandwidth, the maximum deviation is minimized e.g. by the Remez exchange algorithm or by Gauss-Newton methods [Valenzuela & Constantinides (1983); Zhang & Yoshikawa (1999)]. For the particular class of elliptic HBF with minimal Q-factor, closed-form equations for calculating the exact values of stopband and passband attenuation are known, allowing for straightforward designs if the cut-off frequencies and the filter order are given [Lutovac et al. (2001)].
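The pole-based reading of (17) can be sketched as follows. This is a hedged illustration only: the ripple values are assumptions, the half-band deviation coupling is imposed explicitly, and a production design would additionally iterate the band edges so that all poles land exactly on the imaginary axis.

```python
import numpy as np
from scipy import signal

n = 5                                     # odd filter order (assumed)
d_s = 10.0 ** (-40.0 / 20.0)              # stopband deviation (assumed)
d_p = 1.0 - np.sqrt(1.0 - d_s ** 2)       # passband deviation coupled by (13)
rp = -20.0 * np.log10(1.0 - d_p)          # passband ripple in dB
rs = -20.0 * np.log10(d_s)                # stopband attenuation in dB

# Elliptic lowpass with cut-off at half the Nyquist frequency (Omega = pi/2)
z, p, k = signal.ellip(n, rp, rs, 0.5, output='zpk')

a = np.sort(np.abs(p[p.imag > 0]) ** 2)   # squared pole-pair moduli, ascending
a0, a1 = a[0::2], a[1::2]                 # alternating assignment to A_0, A_1
```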
Efficient implementation
In the case of a monorate filter implementation, the McMillan degree n_mc is equal to the filter order n. Assuming the same hardware prerequisites as in the previous subsection on FIR HBF, the computational load of hardware operations per output sample is given in Table 2 (column MoR). Note that multiplication by a factor of 0.5 does not contribute to the overall expenditure. In the general decimating structure, as shown in Fig. 9(a), decimation is performed by an input commutator in conjunction with a shimming delay according to Fig. 6(b). By the underlying exploitation of the noble identities [Göckler & Groth (2004); Vaidyanathan (1993)], the cascaded second-order allpass sections of the transfer function (17) in Fig. 9(a) operate at the reduced output sampling rate f_d = f_n/2, and the McMillan degree n_mc is almost halved. The optimum interpolating structure is readily derived from the decimator by applying the multirate transposition rules (cf. Section 3 and [Göckler & Groth (2004)]). Computational complexity is presented in Table 2, also indicating the respective operational rates f_Op for the N_Op arithmetical operations. Elliptic filters also allow for multiplierless implementations with small quantization error, or implementations with a reduced number of shift-and-add operations in multipliers [Lutovac & Milic (1997; 2000); Milic (2009)].
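A behavioural sketch of the decimating structure of Fig. 9(a) follows, with coefficients as assumed above; by the noble identities each second-order section collapses to a first-order allpass (a + z_d^{-1}) / (1 + a z_d^{-1}) operating at the output rate.

```python
import numpy as np
from scipy import signal

def iir_hbf_decimate(x, a0, a1):
    """Polyphase IIR half-band decimation by two: the input commutator feeds
    two first-order allpass chains running at f_d = f_n / 2."""
    def chain(sig, coeffs):
        for a in coeffs:
            sig = signal.lfilter([a, 1.0], [1.0, a], sig)
        return sig
    x0 = x[0::2]                                   # even input samples
    x1 = np.concatenate(([0.0], x[1::2][:-1]))     # odd samples + shimming delay
    return 0.5 * (chain(x0, a0) + chain(x1, a1))

y_d = iir_hbf_decimate(np.random.randn(64), a0=[0.11], a1=[0.54])  # assumed a_i
```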
Comparison of real FIR and IIR HBF
A comparison of Tables 1 and 2 shows that N_Op^FIR < N_Op^IIR for the same filter order n, where all operations are performed at the operational rate f_Op given in these tables. Since, however, the filter order n_IIR < n_FIR, or even n_IIR ≪ n_FIR, for any type of approximation, the computational load of an MP IIR HBF is generally smaller than that of an LP FIR HBF, as is well known [Lutovac et al. (2001); Schüssler & Steffen (1998)]. The relative computational advantage of equiripple minimax designs of monorate IIR halfband filters and polyphase decimators [Parks & Burrus (1987)], respectively, is depicted in Fig. 10 where, in extension to [Lutovac et al. (2001)], the expenditure N_Op is indicated as a parameter along with the filter order n. Note that the IIR and FIR curves of the lowest-order filters differ by just one operation despite the LP property of the FIR HBF. A specification for a design example is deduced from Fig. 10: n_IIR = 5 and n_FIR = 14, respectively, with a passband cut-off frequency of f_p = 0.1769 f_n at the intersection point of the associated expenditure curves (Fig. 11). As a result, the stopband attenuations of both filters are the same (cf. Fig. 10). In addition, for both designs the typical pole-zero plots are shown [Schüssler & Steffen (1998; 2001)]. From the point of view of expenditure, the MP IIR HBF decimator (N_Op = 9, n_mc = 3) outperforms its LP FIR counterpart (N_Op = 12, n_mc = 8).
Linear-Phase (LP) FIR filters
In the FIR CHBF case the frequency shift operation (3) is applied directly to the impulse response h(k) in the time domain. Modulating the impulse response (9) of any real LP HBF on a carrier of frequency f_2 according to (18) yields the complex-valued CHBF impulse response. (Underlining indicates complex quantities in the time domain.) By directly equating (19) and relating the result to (9), we get (20) where, in contrast to (5), the imaginary part of the impulse response is skew-symmetric about zero, as is expected from a Hilbert transformer. Note that the centre coefficient h_0 is still real, whilst all other coefficients are purely imaginary rather than generally complex-valued.
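In code, the shift amounts to one complex modulation of the prototype taps (toy coefficients assumed, as before):

```python
import numpy as np

# Real half-band prototype (toy values); modulation on the carrier
# Omega_2 = pi/2 per (18). Only the centre tap stays real; all outer taps
# pick up factors +/-j and become purely imaginary.
h = np.zeros(7)
h[0::2] = [-0.03, 0.28, 0.28, -0.03]
h[3] = 0.5
k = np.arange(len(h)) - (len(h) - 1) // 2   # symmetric tap index of (9)
h_c = h * np.exp(1j * np.pi / 2 * k)

assert np.isclose(h_c[3].imag, 0.0)         # centre coefficient is real
assert np.allclose(h_c[0::2].real, 0.0)     # outer taps purely imaginary
```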
Specification and properties
All properties of the real HBF are basically retained except for those affected by the frequency shift operation of (18). This applies to the filter specification depicted in Fig. 5 and, hence, (6) modifies accordingly, where Ω_{p+} represents the upper passband cut-off frequency and Ω_{s−} the associated stopband cut-off frequency. Obviously, strict complementarity (7) is likewise retained, with (3) applied in the frequency domain.
Efficient implementations
The optimum implementation of an LP FIR CHBF of order n = 10 for twofold downsampling is again based on the polyphase decomposition of (20) according to (12). Its SFG is depicted in Fig. 12(a), which exploits the odd symmetry of the HT part of the system. Note that all imaginary units are included deliberately. The optimal FIR CHBF interpolator according to Fig. 12(b), which is derived from the original decimator of Fig. 12(a) by applying the multirate transposition rules [Göckler & Groth (2004)], performs the dual operation with respect to the underlying decimator. Since, however, an LP FIR CHBF is strictly complementary rather than power complementary (cf. (23)), the inverse functionality of the decimator is only approximated [Göckler & Groth (2004)].
In addition, Fig. 13 shows the optimum SFG of an LP FIR CHBF for decimation of a complex signal by a factor of two. In essence, it represents a doubling of the SFG of Fig. 12(a). Again, the dual interpolator is readily derived by transposition of multirate systems, as outlined in Section 3. The expenditure of the half-complex (R ⇋ C) and the full-complex (C → C) CHBF decimators and their transposes is listed in Table 3. A comparison of Tables 1 and 3 shows that the overall numbers of operations N_Op^CFIR of the half-complex CHBF sample rate converters (cf. Fig. 12) are almost the same as those of the real FIR HBF systems depicted in Fig. 7. Only the number of delays is, for obvious reasons, higher in the case of the CHBF.
Minimum-Phase (MP) IIR filters
In the IIR CHBF case the frequency shift operation (3) is again applied in the z-domain. Using (18), this is achieved by substituting the complex z-domain variable in the respective transfer functions H(z) and all corresponding SFG according to z^{−1} → e^{jπ/2} z^{−1} = j z^{−1} (24).
Efficient implementations
Introducing (24) into (16) performs a frequency shift of the transfer function H(z) by f_2 = f_n/4 (Ω_2 = π/2): each allpass section (a_i + z^{−2})/(1 + a_i z^{−2}) of (17) becomes (a_i − z^{−2})/(1 − a_i z^{−2}), and the branch delay z^{−1} acquires the factor j. The optimum general block structure of a decimating MP IIR HT, scaled up by 2, is shown in Fig. 14(a) along with the SFG of the first (system-theoretically second) order allpass sections (b), where the noble identities [Göckler & Groth (2004); Vaidyanathan (1993)] are exploited. By doubling this structure, as depicted in Fig. 15, the IIR CHBF for decimating a complex signal by two is obtained. Multirate transposition [Göckler & Groth (2004)] can again be applied to derive the corresponding dual structures for interpolation. The expenditure of the half-complex (R ⇋ C) and the full-complex (C → C) CHBF decimators and their transposes is listed in Table 4. A comparison of Tables 2 and 4 shows that, basically, the half-complex IIR CHBF sample rate converters (cf. Fig. 14) require almost the same expenditure as the real IIR HBF systems depicted in Fig. 9.
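Under these substitutions the allpass sections keep real coefficients up to a sign flip; the following hedged sketch (assumed a_i values, as before) makes this explicit.

```python
import numpy as np
from scipy import signal

def iir_chbf(x, a0, a1):
    """CHBF (Hilbert transformer): z^-1 -> j z^-1 turns every section
    (a + z^-2)/(1 + a z^-2) into (a - z^-2)/(1 - a z^-2), and the branch
    delay z^-1 contributes the factor j."""
    def chain(sig, coeffs):
        for a in coeffs:
            sig = signal.lfilter([a, 0.0, -1.0], [1.0, 0.0, -a], sig)
        return sig
    delayed = np.concatenate(([0.0], x[:-1]))
    return 0.5 * (chain(x, a0) + 1j * chain(delayed, a1))

y_c = iir_chbf(np.random.randn(64), a0=[0.11], a1=[0.54])   # assumed a_i
```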
Comparison of FIR and IIR CHBF
As is obvious from the similarity of the corresponding expenditure tables of the previous subsections, the expenditure chart of Fig. 10 can likewise be used for the comparison of CHBF.
Complex Offset Halfband Filters (COHBF)
A complex offset HBF, a Hilbert transformer with a frequency offset of Δf = ±f_n/8 relative to an RHBF, is readily derived from a real HBF according to Subsection 2.1 by applying the zT modulation theorem (3) with c ∈ {1, 3, 5, 7}, as introduced in (2). As a result, the real prototype HBF is shifted to a passband centre frequency of f_c ∈ {±f_n/8, ±3f_n/8}. In the sequel, we predominantly consider the case f_c = f_1 (Ω_1 = π/4).
Linear-Phase (LP) FIR filters
Again, the frequency shift operation (3) is applied in the time domain. However, in order to obtain the smallest number of fully complex COHBF coefficients, we introduce an additional complex scaling factor of unity magnitude. As a result, modulating the impulse response (9) of any real LP FIR HBF on a carrier of frequency f_c according to (28) yields the complex-valued COHBF impulse response (39), where −n/2 ≤ k ≤ n/2 and c = 1, 3, 5, 7. By directly equating (39) for c = 1 and relating the result to (9), we get (40) where, in contrast to (21), the impulse response exhibits the symmetry property (41). Note that the centre coefficient h_0 is the only truly complex-valued coefficient where, fortunately, its real and imaginary parts are identical. All other coefficients are again either purely imaginary or real-valued. Hence, the symmetry of the impulse response can still be exploited, and the implementation of an LP FIR COHBF requires just one multiplication more than that of a real or complex HBF [Göckler (1996b)].
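A minimal sketch for c = 1 (toy prototype as before, with e^{jπ/4} as one admissible unity-magnitude scaling factor):

```python
import numpy as np

h = np.zeros(7)                              # toy half-band prototype
h[0::2] = [-0.03, 0.28, 0.28, -0.03]
h[3] = 0.5
k = np.arange(len(h)) - (len(h) - 1) // 2
h_co = h * np.exp(1j * np.pi * (k + 1) / 4)  # modulation plus scaling exp(j*pi/4)

centre = h_co[3]
assert np.isclose(centre.real, centre.imag)  # identical real and imaginary parts
# the remaining taps alternate between purely real and purely imaginary values
```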
Specification and properties
All properties of the real HBF are basically retained except for those affected by the frequency shift operation according to (28). This applies to the filter specification depicted in Fig. 5 and, hence, (6) modifies accordingly, where Ω_{p+} represents the upper passband cut-off frequency and Ω_{s−} the associated stopband cut-off frequency. Obviously, strict complementarity (7) reads as follows: H(e^{j(Ω − cπ/4)}) + H(e^{j(Ω − π(1 + c/4))}) = 1.
Efficient implementations
The optimum implementation of an LP FIR COHBF of order n = 10 for twofold downsampling is again based on the polyphase decomposition of (40). Its SFG is depicted in Fig. 16(a), which exploits the coefficient symmetry as given by (41). The optimum FIR COHBF interpolator according to Fig. 16(b) is readily derived from the original decimator of Fig. 16(a) by applying the multirate transposition rules, as discussed in Section 3. As a result, the overall expenditure is again retained (cf. the invariance property of transposition [Göckler & Groth (2004)]). In addition, Fig. 17 shows the optimum SFG of an LP FIR COHBF for decimation of a complex signal by a factor of two. It represents essentially a doubling of the SFG of Fig. 16(a). The dual interpolator can be derived by transposition [Göckler & Groth (2004)].
The expenditure of the half-complex (R ⇋ C) and the full-complex (C → C) LP COHBF decimators and their transposes is listed in Table 5, calling for only one extra multiplication. The number n_mc of delays is, however, of the order of n, since a (nearly) full delay line is needed both for the real and the imaginary parts of the respective signals. Note that the shimming delays are always included in the delay count. (The number of delays required for a monorate COHBF corresponding to Fig. 17 is 2n.)
Minimum-Phase (MP) IIR filters
In the IIR COHBF case the frequency shift operation (3) is again applied in the z-domain. This is achieved by substituting the complex z-domain variable in the respective transfer functions H(z) and all corresponding SFG according to (34), z^{−1} → e^{jπ/4} z^{−1}.

Table 6. Expenditure of minimum-phase IIR COHBF; n: order, n_mc: McMillan degree, N_M (N_A): number of multipliers (adders); operational clock frequency: f_Op = f_n/2.

          Dec: R → C   Int: C → R   Dec: C → C     Int: C → C
  n_mc    n            n            2n             2n
  N_M     n            n            2n             2n
  N_A     3(n − 1)     3(n − 1)     6(n − 1) + 2   6(n − 1)
  N_Op    4n − 3       4n − 3       8n − 4         8n − 6
Efficient implementations
Introducing (34) in (16), the transfer function is frequency-shifted by f_1 = f_n/8 (Ω_1 = π/4). The optimal structure of an MP IIR COHBF decimator of order n = 5 for real input signals is shown in Fig. 18(a) along with the elementary SFG of the allpass sections in Fig. 18(b). Doubling of the structure according to Fig. 19 allows for full-complex signal processing. Multirate transposition [Göckler & Groth (2004)] is again applied to derive the corresponding dual structure for interpolation. The expenditure of the half-complex (R ⇋ C) and the full-complex (C → C) COHBF decimators and their transposes is listed in Table 6. A comparison of Tables 2 and 6 shows that the half-complex IIR COHBF sample rate converter (cf. Fig. 18(a)) requires almost twice, whereas the full-complex IIR COHBF (cf. Fig. 19) requires even four times, the expenditure of the real IIR HBF system depicted in Fig. 9.
Comparison of FIR and IIR COHBF
LP FIR COHBF structures allow for implementations that utilize the coefficient symmetry property. Hence, the required expenditure is just slightly higher than that needed for a CHBF. On the other hand, the expenditure of an MP IIR COHBF is almost twice as high as that of the corresponding CHBF, since it is not possible to exploit memory and coefficient sharing: almost the whole structure has to be doubled for a full-complex decimator (cf. Fig. 19).
Conclusion: Family of single real and complex halfband filters
We have recalled basic properties and design outlines of linear-phase FIR and minimum-phase IIR halfband filters, predominantly for the purpose of sample rate alteration by a factor of two, with a passband centre frequency out of the specific set defined by (1). It has been confirmed that, for the even-numbered centre frequencies c ∈ {0, 2, 4, 6}, MP IIR HBF outperform their LP FIR counterparts, the more so the tighter the filter specifications. However, for phase-sensitive applications (e.g. software radio employing quadrature amplitude modulation), the LP property of FIR HBF may justify the higher amount of computation to some extent.
In the case of the odd-numbered HBF centre frequencies of (2), c ∈ {1, 3, 5, 7}, there exist specification domains where the computational loads of complex FIR HBF with frequency offset range below those of their IIR counterparts. This is confirmed by the two bottom rows of Table 7, which summarizes the expenditure figures of this contribution. This sectoral computational advantage of LP FIR COHBF is, despite n_IIR < n_FIR, due to the fact that these FIR filters still allow for memory sharing in conjunction with the exploitation of coefficient symmetry [Göckler (1996b)]. However, the amount of storage n_mc required for IIR HBF is always below that of their FIR counterparts.
Halfband filter pairs
In this Section 3, we address a particular class of efficient directional filters (DF). These DF are composed of two real or complex HBF, respectively, with different centre frequencies out of the set given by (1). To this end, we conceptually introduce and investigate two-channel frequency demultiplexer filter banks (FDMUX) that extract two constituents from an incoming complex-valued frequency division multiplex (FDM) signal, composed of up to four uniformly allocated independent user signals of identical bandwidth (cf. Fig. 20), while concurrently reducing the sample rate by two [Göckler & Groth (2004)]. Moreover, the DF shall allow selection of any pair of user signals out of the four constituents of the incoming FDM signal, where the individual centre frequencies are to be selectable with minimum switching effort. At first glance, there are two optional approaches: the selectable combination of two filter functions out of a pool of i) two RHBF according to Subsection 2.1 and two CHBF (HT), as described in Subsection 2.2, where the centre frequencies of this filter quadruple are given by (1) with c ∈ {0, 2, 4, 6}, or ii) four COHBF, as described in Subsection 2.3, where the centre frequencies of this filter quadruple are given by (1) with c ∈ {1, 3, 5, 7}. Since centre frequency switching is more crucial in case one (switching between real and/or complex filters), we subsequently restrict our investigations to case two, where the FDM input spectrum must be allocated as shown in Fig. 20. These DF with easily selectable centre frequencies are frequently used in receiver front-ends to meet routing requirements [Göckler (1996c)], in tree-structured FDMUX filter banks [Göckler & Felbecker (2001); Göckler & Groth (2004); Göckler & Eyssele (1992)], and, in modified form, for frequency re-allocation to avoid hard-wired frequency shifting [Eghbali et al. (2009)]. Efficient implementation is crucial if these DF are operated at high sampling rates at the system input or output port. To cope with this high-rate challenge, we introduce a systematic approach to system parallelisation according to [Groth (2003)] in Section 4. In continuation of the investigations reported in Section 2, we combine two linear-phase (LP) FIR complex offset halfband filters (COHBF) with different centre frequencies, characterized by (1), into a DF with two output signals [Göckler (1996a)]. For convenience, we map the original odd indices c ∈ {1, 3, 5, 7} of the COHBF centre frequencies to natural numbers, o := (c − 1)/2 ∈ {0, 1, 2, 3}, as defined by (38) for subsequent use throughout Section 3. Section 3 is organized as follows: In Subsection 3.1, we detail the statement of the problem, and recall the major properties of COHBF needed for our DF investigations. In the main Subsection 3.2, we present and compare two different approaches to implement the outlined LP DF for signal separation with selectable centre frequencies: i) a four-channel uniform complex-modulated FDMUX filter bank, undercritically decimating by two, where the respective undesired two output signals are discarded, and ii) a synergetic connection of two COHBF that share common multipliers and exploit coefficient symmetry for minimum computation. In Subsection 3.3, we apply the transposition rules of [Göckler & Groth (2004)] to derive the dual DF for signal combination (FDM multiplexing). Finally, we draw some further conclusions in Subsection 3.4.

Statement of the DF problem

The four potential DF transfer functions are centred according to (38), o ∈ {0, 1, 2, 3}, with the RHBF impulse response h(k) defined by (9). According to (39), highest efficiency is obtained by additionally introducing a suitable complex scaling factor of unity magnitude, where −(N − 1)/2 ≤ k ≤ (N − 1)/2 and o ∈ {0, 1, 2, 3}. By directly equating (39), and relating the result to (9) with a suitable choice of the constant a = 2o + 1 compliant with (29), we get the coefficients (40) with the symmetry property (41). The respective COHBF centre coefficient is the only truly complex-valued coefficient, where its real and imaginary parts always possess identical moduli. All other coefficients are either purely imaginary or real-valued. Obviously, all frequency domain symmetry properties, including also those related to strict complementarity, are retained in the respective frequency-shifted versions, cf. Subsection 2.3.1 and [Göckler & Damjanovic (2006a)].
FDMUX approach
Using time-domain convolution, the I = 4 potentially required complex output signals, decimated by 2 and related to the channel indices o ∈ {0, 1, 2, 3}, are obtained as follows, where the complex impulse responses of channels o are introduced in causal (realizable) form.
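As a plain behavioural reference (not the efficient polyphase SFG derived below), the channel outputs can be sketched by modulating the prototype per channel and decimating; the prototype taps and the selected channel pair below are assumptions for illustration.

```python
import numpy as np

def df_separate(x, h, o_pair=(0, 2)):
    """Reference model of the separating DF: two of the four COHBF obtained
    from the real half-band prototype h via (39) (with a = 2o + 1), each
    followed by decimation by two."""
    k = np.arange(len(h)) - (len(h) - 1) // 2
    outputs = []
    for o in o_pair:
        h_o = h * np.exp(1j * np.pi * (2 * o + 1) * (k + 1) / 4)
        outputs.append(np.convolve(x, h_o)[::2])   # filter, then downsample by 2
    return outputs

h = np.zeros(11)                                   # toy N = 11 prototype
h[0::2] = [0.01, -0.05, 0.3, 0.3, -0.05, 0.01]
h[5] = 0.5
y_I, y_II = df_separate(np.random.randn(128) + 1j * np.random.randn(128), h)
```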
Replacing the complex impulse responses with the respective modulation forms (39), and setting the constant to a = (2o + 1)(N − 1)/2, we get (44), where h[k − (N − 1)/2] represents the real HBF prototype (9) in causal form. Next, in order to introduce an I-component polyphase decomposition for efficient decimation, we split the convolution index κ into two indices according to (45), where p = 0, 1, ..., I − 1 = 3 and r = 0, 1, ..., ⌊(N − 1)/I⌋ = ⌊(N − 1)/4⌋. As a result, it follows from (44): rearranging the exponent of the exponential term according to

(π/4)(4r + p)(2o + 1) = 2πro + πr + p·π/4 + op·π/2,

(46) can compactly be rewritten [Oppenheim & Schafer (1989)], where the quantity v_p encompasses all complex signal processing to be performed by the modified causal HBF prototype. An illustrative example with an underlying HBF prototype filter of length N = n + 1 = 11 is shown in Fig. 22 [Göckler & Groth (2004)]. Due to the polyphase decomposition (45) and (46), sample rate reduction can be performed in front of any signal processing (shimming delays: z^{−1}). Always two polyphase components of the real and the imaginary parts of the complex input signal share a delay chain in the direct form implementation of the modified causal HBF, where all coefficients are either real- or imaginary-valued except for the centre coefficient h_0 = (1/2)e^{jπ/4}. As a result, only N + 3 real multiplications must be performed to calculate a set of complex output samples at the two (i.e. all) DF output ports. Furthermore, the FDMUX DF implementation needs a total of (3N − 5)/2 delays (not counting shimming delays). The calculation of v_p(m), p = 0, 1, 2, 3, is readily understood from the signal flow graph (SFG) of Fig. 22, where for any filter length N always one of these quantities vanishes as a result of the zero coefficients of (9). Hence, the I = 4 point IDFT, depicted in Fig. 23(a,b) in detailed form, requires only 4 real additions to provide a complex output sample at any of the output ports o ∈ {0, 1, 2, 3}; cf. Fig. 23(b). Channel selection, for instance as shown in Fig. 21, is simply achieved by selection of the respective two output ports of the SFG of Figs. 22 and 23(a), respectively. Moreover, the remaining two unused output ports may be deactivated by disconnection from the power supply.

COHBF approach

For the COHBF approach, the channel impulse responses follow from (39) with, subsequently, a = 2o + 1. These impulse responses are presented in Table 8 as a function of the channel number o ∈ {0, 1, 2, 3} for the non-zero coefficients of (40), related to the respective real RHBF coefficients. Except for the centre coefficient, which exhibits identical real and imaginary parts, one half of the coefficients is real (R) and independent of the desired centre frequency represented by the channel indices o ∈ {0, 1, 2, 3}. Hence, these coefficients are common to all four transfer functions. The other half of the coefficients is purely imaginary (I: i.e., their real parts are zero) and dependent on the selected centre frequency. However, this dependency on the channel number is identical for all these coefficients and just requires a simple sign operation. Finally, the repetitive pattern of the coefficients, as a result of coefficient symmetry (41), is reflected in Table 8. A COHBF implementation of a demultiplexing DF aiming at minimum computational load must exploit the inherent coefficient symmetry (41), cf. Table 8. To this end, we consider the COHBF as depicted in Fig. 17 of Subsection 2.3.1, applying input commutators for sample rate reduction. In contrast to the FDMUX approach of Fig. 22, the SFG of Fig.
17 is based on the transposed FIR direct form [Bellanger (1989); Mitra (1998)], where the incoming signal samples are concurrently multiplied by the complete set of all coefficients, and the delay chains are directly connected to the output ports. When combining two of these COHBF into a DF, common multipliers can be shared and the coefficient symmetry exploited. Based on the outlined DF implementation strategy, an illustrative example is presented in Fig. 24 with an underlying RHBF of length N = 11. The front end for polyphase decomposition and sample rate reduction by 2 is identical to that of the FDMUX approach of Fig. 22. Contrary to the former approach, the delay chains for the odd-numbered coefficients are outbound and duplicated (rather than interlaced) to allow for simple channel selection. As a result, channel selection is performed by combining the respective sub-sequences that have passed the R-set coefficients (cf. Table 8) with those having passed the corresponding I-set coefficients, where the latter sub-sequences are pre-multiplied by b_i = (−1)^{o_i}; o_i ∈ {0, 1, 2, 3}, i ∈ {I, II}. Multipliers and delays for the centre-coefficient h_{0,o_i} signal processing are implemented similarly to Fig. 22 without need for duplication of delays. However, the post-delay inner lattice must be realized for each transfer function individually; its channel dependency follows from Table 8 and (40) as expressed by (49), where o_i ∈ {0, 1, 2, 3}, i ∈ {I, II} and h_0 = 1/2 according to (9). Rearranging (49) yields (50) with obvious abbreviations. It is easily recognized that the inner lattices of Fig. 24 implement the operations within the brackets of (50), with their results displayed at the respective inner nodes A, B, C, D. In compliance with (50), these inner node sequences must be multiplied by the respective signs d_i = (−1)^{⌊o_i/2⌋}; o_i ∈ {0, 1, 2, 3}, i ∈ {I, II}, prior to their combination with the above R/I sub-sequences.
To calculate a set of complex output samples at the two DF output ports, obviously the minimum number of (N + 5)/2 real multiplications must be carried out. Furthermore, the COHBF approach to DF implementation needs a total of (5N − 11)/2 delays (not counting shimming delays, z^{−1}, and the two superfluous delays at the input nodes of the outer delay chains, indicated in grey). Finally, we want to show and emphasise the simplicity of the channel selection procedure. There is a total of 8 summation points, the 4 inner lattice output nodes A, B, C, and D, and the 4 system output port nodes, where the signs of some input sequences of the output port nodes must be set compliant with the desired channel transfer functions: o_i ∈ {0, 1, 2, 3}, i ∈ {I, II}. The sign selection is most easily performed as shown in Fig. 25. A concise survey of the required expenditure of the two approaches to the implementation of a demultiplexing DF is given in Table 9, not counting sign manipulations for channel selection.

Table 9. Comparison of expenditure of FDMUX and COHBF DF approaches; N: prototype length.

               real multiplications   delays
  FDMUX        N + 3                  (3N − 5)/2
  ex.: N = 11  14                     14
  COHBF        (N + 5)/2              (5N − 11)/2
  ex.: N = 11  8                      22

Obviously, the COHBF approach requires the minimum number of multiplications at the expense of a higher count of delay elements. Finally, it should be noticed that the DF group delay is independent of its (FDMUX or COHBF) implementation.
Linear-phase directional combination filter
Using transposition techniques, we subsequently derive DF complementary (dual) to those presented in Subsection 3.2: they combine two complex-valued signals of identical sampling rate f_d, each likewise oversampled by at least 2, to an FDM signal, where different oversampling factors allow for different bandwidths. An example can be deduced from Fig. 28.
Transposition of complex multirate systems
The goal of transposition is to derive a system that is complementary or dual to the original one: The various filter transfer functions must be retained, demultiplexing and decimating operations must be replaced with the dual operations of multiplexing and interpolation, respectively [Göckler & Groth (2004)].
The types of systems we want to transpose, Figs. 22 and 24, represent complex-valued 4 × 2 multiple-input multiple-output (MIMO) multirate systems. Obviously, these systems are composed of complex monorate sub-systems (complex filtering of polyphase components) and real multirate sub-systems (down- and upsamplers), cf. [Göckler & Groth (2004)]. While the transposition of real MIMO monorate systems is well known and unique [Göckler & Groth (2004); Mitra (1998)], in the context of complex MIMO monorate systems the Invariant (ITr) and the Hermitian (HTr) transposition must be distinguished, where the former retains the original transfer functions, H_o^T(z) = H_o(z) ∀o, as desired in our application. As detailed in [Göckler & Groth (2004)], the ITr is performed by applying the transposition rules known for real MIMO monorate systems, provided that all imaginary units "j", both of the complex input and output signals and of the complex coefficients, are conceptually considered and treated as multipliers within the SFG (denoted as truly complex implementation), as can be seen from Figs. 22 and 24. (The imaginary units of the input signals and the coefficients must not be eliminated by simple multiplication and consideration of the correct signs in subsequent adders; this approach would transform the original complex MIMO SFG to a corresponding real SFG, where the direct transposition of the latter would perform the HTr [Göckler & Groth (2004)].) The transposition of an M-downsampler, representing a real single-input single-output (SISO) multirate system, uniquely leads to the corresponding M-upsampler, the complementary (dual) multirate system, and vice versa [Göckler & Groth (2004)].
Combining all of the above considerations, the ITr transposition of a complex-valued MIMO multirate system is performed as follows [Göckler & Groth (2004)]:
• The system SFG to be transposed must be given as a truly complex implementation.
• Reverse all arrows of the given SFG, both the arrows representing signal flows and the symbolic arrows of down- and upsamplers or rotating switches (commutators), respectively.
As a result of transposition [Göckler & Groth (2004)]:
• all input (output) nodes become output (input) nodes; a 4 × 2 MIMO system is transformed to a 2 × 4 MIMO system,
• the number of delays and multipliers is retained,
• the overall number of branching and summation nodes is retained, and
• the overall number of down- and upsamplers is retained.
Transposition of the SFG of the COHBF approach to DF
As an example, we transpose the SFG of the COHBF approach to the implementation of a separating DF, as depicted in Fig. 24. The application of the transposition rules of the preceding Subsection 3.3.1 to the SFG of Fig. 24 results in the COHBF approach to a multiplexing DF, shown in Fig. 26. The invariant properties are easily confirmed by comparing the original and the transposed SFG. Hence, the numbers of delays and multipliers required by both mutually dual DF systems are identical. As expected, the numbers of adders required are different, since only the overall number of branching and summation nodes is retained. Moreover, it should be noted that the simplicity of the channel selection procedure is also retained. To this end, we have shifted the channel-dependent sign-setting operators d_i = (−1)^{⌊o_i/2⌋}, o_i ∈ {0, 1, 2, 3}, i ∈ {I, II}, to more suitable positions in front of the summation nodes G and H. Again, there is a total of 8 summation points where the signs of the respective input sequences must be adjusted: the 4 inner lattice output nodes A, B, C, and D, the 2 input summation nodes E and F immediately fed by the imaginary parts of the input sequences, and the 2 inner post-lattice summing nodes G and H. At all these summation nodes, the signs of some or all input sequences must be set in compliance with the desired channel transfer functions H_o(z), o_i ∈ {0, 1, 2, 3}, i ∈ {I, II}, cf. Fig. 26. The sign selection is again most easily performed as shown in Fig. 27.
Conclusion: Halfband filter pair combined to directional filter
In this Section 3, we have derived and analyzed two different approaches to linear-phase directional filters that separate two complex user signals from a complex-valued FDM input signal, where the FDM signal may be composed of up to four independent user signals: the FDMUX approach (Subsection 3.2.1) needs the least number of delays, whereas the synergetic COHBF approach (Subsection 3.2.2) requires minimum computation. Signal extraction is always combined with decimation by two. While the four frequency slots of the user signals to be processed (corresponding to the four potential DF transfer functions H_o(z), o_i ∈ {0, 1, 2, 3}, i ∈ {I, II}, centred according to (38); cf. Fig. 21) are equally wide and uniformly allocated, the user signals may possess different bandwidths, as indicated in Fig. 28. However, each user signal must be completely contained in one of the four frequency slots, as exemplified in Fig. 28. Furthermore, by applying the transposition rules of [Göckler & Groth (2004)], the corresponding complementary (dual) combining directional filters have been derived, where the multiplication rates and the delay counts of the original structures are always retained. Obviously, transposing a system allows for the derivation of an optimum dual system by applying the simple transposition rules, provided that the original system is optimal. Thus, a tedious re-derivation and optimization of the complementary system is circumvented. Nevertheless, it should be noted that transposition always yields just one particular structure, rather than a variety of structures [Göckler & Groth (2004)]. Finally, to give an idea of the required filter lengths, we recall the design result reported in [Göckler & Eyssele (1992)] where, as depicted in the above Fig. 21(a,b), the passband, stopband and transition bands were assumed equally wide: with an HBF prototype filter length of N = 11 and 10-bit coefficients, a stopband attenuation of > 50 dB was achieved.
Parallelisation of tree-structured filter banks composed of directional filters
In the subsequent Section 4 of this chapter (based on the original publication Göckler et al. (2006)) we consider the combination of multiple two-channel DF investigated in Section 3 to construct tree-structured filter banks. To this end, we cascade separating DF in a hierarchical manner to demultiplex (split) a frequency division multiplex (FDM) signal into its constituting user signals; this type of filter bank (FB) is denoted by FDMUX FB (Fig. 2). Its transposed counterpart (cf. Subsection 3.3.1), the FMUX FB, is a cascade connection of combining DF considered in Subsection 3.3 to form an FDM signal from independent user signals. Finally, we call an FDMUX FB followed by an FMUX FB an FDFMUX FB, which may contain a switching unit for channel routing between the two FB. Subsequently, we consider an application of FDFMUX FB for on-board processing in satellite communications. If the number of channels and/or the bandwidth requirements are high, efficient implementation of the high-end DF is crucial, since they are operated at (extremely) high sampling rates. To cope with this issue, we propose to parallelise at least the front-end (back-end) of the FDMUX (FMUX) filter bank. For this outlined application, we give the following introduction and motivation. Digital signal processing on-board communication satellites (OBP) is an active field of research where, in conjunction with frequency division multiplex (FDMA) systems, presently two trends and challenges are observed: i) the need for an ever-increasing number of user channels makes it necessary to digitally process, i.e. to demultiplex, cross-connect and remultiplex, ultra-wideband FDM signals requiring high-end sampling rates that range considerably beyond 1 GHz [Arbesser-Rastburg et al. (2002); Maufroid et al. (2004; 2003); Rio-Herrero & Maufroid (2003); Wittig (2000)], and ii) the desire for flexibility of channel bandwidth-to-user assignment calls for simply reconfigurable OBP systems [Abdulazim & Göckler (2005); Göckler & Felbecker (2001); Johansson & Löwenborg (2005); Kopmann et al. (2003)]. Yet, overall power consumption must be minimal, demanding highly efficient FB for FDM demultiplexing (FDMUX) and remultiplexing (FMUX). Two baseline approaches to most efficient uniform digital FB, as required for OBP, are known: a) the complex-modulated (DFT) polyphase (PP) FB applying single-step sample rate alteration [Vaidyanathan (1993)], and b) the multistage tree-structured FB as depicted in Fig. 2, where its directional filters (DF) are either based on the DFT PP method [Göckler & Groth (2004); Göckler & Eyssele (1992)] according to Subsection 3.2.1, or on the COHBF approach investigated in Subsection 3.2.2. For both approaches it has been shown that bandwidth-to-user assignment is feasible within reasonable constraints [Johansson & Löwenborg (2005); Kopmann et al. (2003)]: a minimum user channel bandwidth, denoted by slot bandwidth b, can stepwise be extended by any integer number of additional slots up to a desired maximum overall bandwidth that shall be assigned to a single user. However, as to challenge i), the above two FB approaches fundamentally differ from each other: in a DFT PP FDMUX (a) the overall sample rate reduction is performed in compliance with the number of user channels in a single step: all arithmetic operations are carried out at the (lowest) output sampling rate [Vaidyanathan (1993)].
In contrast, in the multistage FDMUX (b) the sampling rate is reduced stepwise, in each stage by a factor of two [Göckler & Eyssele (1992)]. As a result, the polyphase approach (a) inherently represents a completely parallelised structure, immediately usable for extremely high front-end sampling frequencies, whereas the high-end stages of the tree-structured FDMUX (b) cannot be implemented with standard space-proven CMOS technology. Hence, the tree structure, FDMUX as well as FMUX, calls for a parallelisation of the high-rate stages. As motivated, this contribution deals with the parallelisation of multistage multirate systems.
To this end, we recall a general systematic procedure for multirate system parallelisation [Groth (2003)], which is deployed in detail in Subsection 4.1. For proper understanding, in Subsection 4.2 this procedure is applied to the high-rate front-end stages of the FDMUX part of the recently proposed tree-structured SBC-FDFMUX FB [Abdulazim & Göckler (2005)], which uniformly demultiplexes an FDM signal always down to slot level (of bandwidth b) and which, after on-board switching, recombines these independent slot signals to an FDM signal (FMUX) with different channel allocation (FDFMUX functionality). If a single user occupies a multiple-slot channel, the corresponding parts of FDMUX and FMUX are matched for (nearly) perfect reconstruction of this wideband channel signal (SBC functionality) [Vaidyanathan (1993)]. Finally, some conclusions are drawn.
Sample-by-sample approach to parallelisation
In this subsection, we introduce the novel sample-by-sample processing (SBSP) approach to parallelisation of digital multirate systems, as proposed by [Groth (2003)], where, without any additional delay, all incoming signal samples are directly fed into assigned units for immediate signal processing. Hence, in contrast to the widely used block processing (BP) approach, SBSP does not increase latency. In order to systematically parallelise a (multirate) system, we distinguish four procedural steps [Groth (2003)]:
1. Partition the original system into (elementary SISO or MIMO) subsystems E(z) with single or multiple input and/or output ports, respectively, still operating at the original high clock frequency f_n = 1/T, that are simply amenable to parallelisation. To enumerate some of these: delay, multiplier, down- and up-sampler, summation and branching, but also suitable compound subsystems such as SISO filters and FFT transform blocks.
2. Parallelise each subsystem E(z) in an SBSP manner according to the desired individual degree of parallelisation P, where P ∈ N. To this end, each subsystem is cascaded with a P-fold SBSP serial-to-parallel (SP) commutator for signal decomposition (demultiplexing) followed by a consistently connected P-fold parallel-to-serial (PS) commutator for recomposition (remultiplexing) of the original signal, as depicted in Fig. 29(a) (P-parallelisation of the SISO subsystem E(z) to the P × P MIMO system E(z_d)). Here, obviously, P = P_SP = P_PS, and p ∈ [0, P − 1] denotes the relative time offsets of connected pairs of down- and up-samplers, respectively. Evidently, the P output signals of the SP interface comprise all polyphase components of its input signal in a time-interleaved (SBSP) manner at a P-fold lower sampling rate f_d = f_n/P [Göckler & Groth (2004); Vaidyanathan (1993)].
Since the subsequent PS interface is inverse to the preceding SP interface [Göckler & Groth (2004)], the SP-PS commutator cascade has unity transfer with zero delay, in contrast to the (P − 1)-fold delay of the BP Delay-Chain Perfect-Reconstruction system [Göckler & Groth (2004); Vaidyanathan (1993)], as anticipated (cf. also Fig. 30; a minimal code sketch follows after this list). After this preparation, P-fold parallelisation is readily achieved by shifting the (SISO) subsystem E(z) between the SP and PS interfaces, exploiting the noble identities [Göckler & Groth (2004); Vaidyanathan (1993)] and some novel generalized SBSP multirate identities [Groth (2003); Groth & Göckler (2001)]. Thus, as shown in Fig. 29(b), the two interfaces are interconnected by an equivalent P × P MIMO system E(z_d), which represents the P-fold parallelisation of E(z), all operations of which are performed at the P-fold reduced operational clock frequency f_d.
3. Reconnect all parallelised subsystems exactly in the same manner as in the original system. This is always possible, since parallelisation does not change the original numbers of input and output ports of SISO or MIMO subsystems, respectively.
4. Eliminate all interfractional cascade connections of PS-SP interfaces using the obvious multirate identity depicted in Fig. 30. Note that this elimination process requires identical up- and down-sampling factors, P_PS^{out,a} = P_SP^{in,b}, of each PS-SP interface cascade, restricting the free choice of P for subsystem parallelisation. As a result of parallelisation, all input signals of the original (possibly MIMO) system are decomposed into P time-interleaved polyphase components by an SP demultiplexer for subsequent parallel processing at a P-fold lower rate, and all system output ports are provided with a PS commutator to interleave all low-rate subsignals to form the high-speed output signals. For illustration, we present the parallelisation of a unit delay z^{−1} := z_d^{−1/P}, and of an M-fold down-sampler with zero time offset [Groth (2003)], as shown in Fig. 31. The unit delay (a) is realized by P parallel time-interleaved shimming delays to be implemented by suitable system control, where a permutation is introduced for straightforward elimination of interfractional PS-SP cascades according to Fig. 30 (I: identity matrix). In the case of down-sampling, Fig. 31(b), to increase efficiency, the P parallel down-samplers of the diagonal MIMO system E(z_d) are merged with the P down-samplers of the SP interface. Hence, by using suitable multirate identities [Groth (2003)], the contiguous PM-fold down-samplers of the SP demultiplexer have a relative time offset of M.
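The SP/PS interfaces and the down-sampler identity of Fig. 31(b) can be sketched as follows (a hedged toy model; stream lengths are truncated to multiples of P for simplicity):

```python
import numpy as np

def sp(x, P):
    """SBSP serial-to-parallel commutator: P time-interleaved polyphase
    streams at the P-fold lower rate f_n / P."""
    L = (len(x) // P) * P
    return [x[p:L:P] for p in range(P)]

def ps(streams):
    """Parallel-to-serial commutator, the inverse of sp()."""
    return np.stack(streams, axis=1).ravel()

def downsampler_parallel(x, M, P):
    """Fig. 31(b): an M-fold down-sampler parallelised by P merges into
    contiguous PM-fold down-samplers with relative time offset M."""
    L = (len(x) // (P * M)) * P * M
    return [x[p * M:L:P * M] for p in range(P)]

x = np.arange(32.0)
assert np.array_equal(ps(sp(x, 4)), x)       # unity transfer with zero delay
assert all(np.array_equal(u, v) for u, v in
           zip(downsampler_parallel(x, 2, 4), sp(x[::2], 4)))
```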
Parallelisation of SBC-FDFMUX filter bank
Subsequently, we deploy the parallelisation of the high-rate FDMUX front-end section of the versatile tree-structured SBC-FDFMUX FB for flexible channel and bandwidth allocation [Abdulazim & Göckler (2005); Abdulazim et al. (2007)]. The first three hierarchically cascaded stages of the FDMUX are shown in Fig. 32 in block diagram form applying BP. In each stage, ν = 1, 2, 3, the respective input spectrum is split into two subbands of equal bandwidth in conjunction with decimation by two. For convenience of presentation, all DF have identical coefficients and, in contrast to Section 3, are assumed to be critically sampling 2-channel DFT PP FB with zero frequency offset. The branch filter transfer functions H_λ(z_ν), λ = 0, 1, represent the two PP components of the prototype filter [Göckler & Groth (2004); Vaidyanathan (1993)] where, by setting z_ν := e^{jΩ(ν)} with Ω(ν) = 2πf/f_ν and ν = 1, 2, 3, the respective frequency responses H_λ(e^{jΩ(ν)}) are obtained, which are related to the operational sampling rate f_ν of stage ν. The respective DF lowpass and highpass filter transfer functions of stage ν, related to the original sampling rate 2f_ν, are generated by the two branch filter transfer functions H_λ(z_ν), λ = 0, 1, in combination with the simple "butterfly" across the output ports of each DF: summation produces the lowpass, subtraction the complementary highpass filter transfer function [Bellanger (1989); Kammeyer & Kroschel (2002); Mitra (1998); Schüssler (2008); Vaidyanathan (1993)]. Assuming, for instance, a high-end input sampling frequency of f_n = f_0 = 2.4 GHz [Kopmann et al. (2003); Maufroid et al. (2003)], the operational clock rate of the third stage is f_3 = f_n/2³ = 300 MHz, which is deemed feasible using present-day CMOS technology. Hence, front-end parallelisation has to reduce the operational clock of all subsystems preceding the third stage down to f_d = f_3 = 300 MHz. This is achieved by 8-fold parallelisation of input branching and blocking (delay z_0^{−1}), 4-fold parallelisation of the first stage of the FDMUX tree (comprising input decimation by two, the PP branch filters H_λ(z_1), λ = 0, 1, and butterfly) and of the input branching and blocking (delay z_1^{−1}) of the second stage and, finally, corresponding 2-fold parallelisation of the two parallel 2-channel FDMUX FB of the second stage of the tree, as indicated in Fig. 32. The result of parallelisation, as required above, is shown in Fig. 33, where all interfractional interfaces have been removed by straightforward application of the identity of Fig. 30. Subsequently, the parallelisation of the elementary subsystems is explained in detail:
1. Down-sampling by M = 2: In compliance with Fig. 31(b), each 2-fold down-sampler is replaced with P_ν units in parallel for 2P_ν-fold down-sampling with even time offset 2p, where p = 0, 1, 2, 3 applies to the first tree stage (P_1 = 4), and p = 0, 1 to the second stage (P_2 = 2). The result of 4-fold parallelisation of the front-end input down-sampler of the upper branch (ν = 1, λ = 0) is readily visible in Fig. 33 preceding the filter MIMO block H_0^1(z_d): in fact, it represents an 8-to-4 parallelisation, where all odd PP components are removed according to Fig. 31(b).
2. Input branching and blocking preceding the filter H_1(z_1): To this end, as required by Fig. 32, the unit delay z_0^{−1} is parallelised by P_0 = 8, as shown in Fig. 31(a), while the subsequent down-sampler applies P_1 = 4, as described above w.r.t. Fig. 31(b).
Immediate cascading of the parallelised unit delay (P_0 = 8) and down-sampling (P_1 = 4, M = 2) (as induced by Fig. 31) shows that only those four PP components of the parallelised delay with even time offset (p = 0, 2, 4, 6) are transferred via the 4-branch SP input interface of the down-sampling (2P_1 = 8) to its PS output interface with naturally ordered time offsets p = 0, 1, 2, 3 w.r.t. P_1 = 4. Hence, only the retained 4 out of 8 PP components of odd time index p = 7, 1, 3, 5, being provided by the unit delay's SP input interface and delayed by z_0^{−1} = z_d^{−1/8}, are transferred (mapped) to the P_1 = 4 up-samplers with timing offsets p = 0, 1, 2, 3 of the 4-branch PS output interface of the down-sampler. Fig. 33 shows the correspondingly rearranged signal flow graph representation of the stage 1 input section (ν = λ = 1). As a result, the upper branch of stage 1, H_0(z_1) → H_0^1(z_d), is fed by the even-indexed PP components of the high-rate FDMUX input signal, whereas the lower branch H_1(z_1) → H_1^1(z_d) is provided with the delayed versions of the PP components of odd index, as depicted in Fig. 33. Hence, as in the original system of Fig. 32, the input sequence is completely fed into the parallelised system. This procedure is repeated with the input branching and blocking sections of the subsequent stages ν = 2, 3: the PP branch filters H_0(z_ν) → H_0^ν(z_d), parallelised by P_ν, where P_2 = 2 and P_3 = 1 (P_1 = 4), are provided with the even-numbered PP components of the respective input signals with timing offsets in natural order. In contrast, the set of PP components of odd index is always delayed by z_d^{−1/P_{ν−1}} and fed into the filter blocks H_1(z_ν) → H_1^ν(z_d) in a crossed manner (cf. input section λ = 1).
3. P_ν-fold parallelisation of the PP branch filters H_λ(z_ν) → H_λ^ν(z_d), λ = 0, 1; ν = 1, 2, is achieved by systematic application of the procedure condensed in Fig. 29 (for details cf. Göckler & Groth (2004); Groth (2003)). To this end, H_λ(z_ν) is decomposed into P_ν PP components of correspondingly reduced order, which are arranged into a MIMO system by exploiting a multitude of multirate identities [Groth (2003); Groth & Göckler (2001)]. The resulting P_ν × P_ν MIMO filter transfer matrix H_λ^ν(z_d) contains each PP component of H_λ(z_ν) P_ν times. Thus, the amount of hardware is increased P_ν times whereas, as desired for feasibility, the operational clock rate is concurrently reduced by P_ν. Hence, the overall expenditure, i.e. the number of operations times the respective operational clock rate [Göckler & Groth (2004)], is not changed.
4. Parallelisation of the butterflies combining the output signals of associated PP filter blocks is straightforward: for each (time-interleaved) PP component of the respective signals a butterfly has to be provided, as shown in Fig. 33.
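Step 3 can be illustrated by a behavioural sketch of the P-fold parallelisation of a SISO FIR branch filter: the P × P MIMO matrix consists of the polyphase components of the filter, with one low-rate delay z_d^{-1} on the entries below the diagonal (the pseudo-circulant structure). Filter taps and sizes here are assumed for illustration.

```python
import numpy as np

def parallel_fir(x, h, P):
    """P-fold SBSP parallelisation of y = h * x: returns the P output
    polyphase streams, all computed at the reduced rate f_d = f_n / P."""
    e = [h[q::P] for q in range(P)]          # polyphase components of h
    xp = [x[s::P] for s in range(P)]         # SP commutator
    yp = []
    for p_out in range(P):
        acc = np.zeros(len(x) // P + len(h) + 1)
        for s in range(P):
            branch = np.convolve(xp[s], e[(p_out - s) % P])
            if p_out < s:                    # sub-diagonal entry: extra z_d^-1
                branch = np.concatenate(([0.0], branch))
            acc[:len(branch)] += branch
        yp.append(acc)                       # a PS commutator would re-interleave
    return yp

x, h, P = np.random.randn(64), np.random.randn(11), 4
yp, y_ref = parallel_fir(x, h, P), np.convolve(x, h)
assert np.allclose([yp[p][i] for i in range(8) for p in range(P)], y_ref[:32])
```

Each output stream runs at f_d = f_n/P, so hardware grows by P while the clock drops by P and the overall expenditure stays constant, in line with the text.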
Conclusion: Parallelisation of multirate systems
In this Section 4, a general and systematic procedure for the parallelisation of multirate systems, for instance as investigated in Sections 2 and 3, has been presented. Its application to the high-rate decimating FDMUX front end of the tree-structured SBC-FDFMUX FB [Abdulazim & Göckler (2005)] has been deployed in detail. The stage-ν degree of parallelisation P_ν, ν = 0, 1, 2, 3, is diminished proportionally to the operational clock frequency f_ν of stage ν and is, thus, adapted to the actual sampling rate. As a result, after suitable decomposition of the high-rate front-end input signal by an input commutator into P_0 = P_max polyphase components (as depicted for P_max = 8 in Fig. 33), all subsequent processing units are likewise operated at the same operational clock rate f_d = f_n/P_0 = f_0/P_0. Since the inherent parallelism of the original tree-structured FDMUX (Fig. 32) has attained P_max = 8 in the third stage, and the output signals of this stage represent the desired eight demultiplexed FDM subsignals, interleaving PS output commutators are no longer required, as can be seen in Fig. 33. Finally, it should be noted that parallelisation does not change the overall expenditure; yet, by multiplying the stage-ν hardware by P_ν, the operational clock rates are reduced by a factor of P_ν to a feasible order of magnitude, as desired. Applying the rules of multirate transposition (cf. Subsection 3.3.1 or [Göckler & Groth (2004)]) to the parallelised FDMUX front end, the high-rate interpolating back end of the tree-structured SBC-FDFMUX FB is obtained likewise and exhibits the same properties as to expenditure and feasibility [Groth (2003)]. Hence, the versatile and efficient tree-structured filter bank (FDMUX, FMUX, SBC, wavelet, or any combination thereof) can be used in any (ultra) wide-band application without any restriction.
Summary and conclusion
In Section 2 we have introduced and investigated a special class of real and complex FIR and IIR halfband bandpass filters with the particular set of centre frequencies defined by (1). As a result of the constraint (1), almost all filter coefficients are either real-valued or purely imaginary-valued, as opposed to fully complex-valued coefficients. Hence, this class of halfband filters requires only a small amount of computation. In Section 3, two different options to combine two of the above FIR halfband filters with different centre frequencies to form a directional filter (DF) have been investigated. As a result, one of these DF approaches is optimum w.r.t. computation (most efficient), whereas the other requires the least number of delay elements (minimum McMillan degree). The relation, via the multirate transposition rules, between separating DF and DF that combine two independent signals to an FDM signal has been shown extensively. Finally, in Section 4, the above FIR directional filters (DF) have been combined to tree-structured multiplexing and demultiplexing filter banks. While this procedure is straightforward, the operating clock rates within the front- or back-ends may be too high for implementation. To this end, we have introduced and described to some extent the systematic, graphically oriented procedure to parallelise multirate systems according to [Groth (2003)]. It has been applied to a three-stage demultiplexing tree-structured filter bank in such a manner that all operations throughout the overall system are performed at the operational output clock. As a result, parallelisation makes the system feasible but retains the computational load.
"Engineering",
"Computer Science"
] |
1891 AD Submarine Eruptive Processes and Geochemical Studies of Floating Scoria at Foerstner Volcano, Pantelleria
On October 17, 1891 a submarine eruption occurred at Foerstner volcano in the Straits of Sicily, 4 km northwest of the island of Pantelleria, Italy. The eruption produced floating scoria bombs, or balloons, that discharged gas at the surface and eventually sank to the seafloor. Activity occurred for a period of one week from an eruptive vent located within the Pantelleria Rift at a water depth of 250 m. Remotely Operated Vehicle (ROV) video footage and high-resolution multibeam mapping of the Foerstner vent site were used to create a geologic map of the 1891 AD deposits and conduct the first detailed study of the source area associated with this unusual type of submarine volcanism. The main Foerstner vent consists of two overlapping circular mounds with a total volume of 6.3 x 10 m and relief of 60 m. It is dominantly constructed of clastic scoriaceous deposits with some interbedded effusive pillow flow deposits. Petrographic and geochemical analyses of Foerstner samples by X-ray fluorescence and inductively coupled plasma mass spectrometry reveal that the majority of the deposits are highly to extremely vesicular, hypocrystalline tephrite basanite scoria that display porphyritic, hyaloophitic, and vitrophyric textures. An intact scoria balloon recovered from the seafloor consists of an interior gas cavity surrounded by a thin lava shell comprised of two distinct layers: a thin, oxidized quenched crust surrounding the exterior of the balloon and a dark grey tachylite layer lying beneath it. Ostwald ripening, inferred from vesicle size distributions, is the dominant bubble growth mechanism in four representative Foerstner scoria samples. Characterization of the diversity of deposit facies observed at Foerstner, in conjunction with quantitative rock texture analysis, indicates that Strombolian-like activity is the most likely mechanism for the formation of buoyant scoria bombs. The deposit facies observed at the main Foerstner vent are very similar to those produced by other known submarine Strombolian eruptions (short pillow flow lobes, large scoriaceous clasts, spatter-like vent facies). Balloons were likely formed from the rapid cooling of extremely vesicular magma fragments as a result of a gas-rich frothy magma source. The exterior of these fragments hyperquenched, forming a vesicular glassy shell that acted as an insulating layer preventing magmatic gas in the interior from escaping, thus allowing flotation as densities reached less than 1000 kg/m³. We believe that lava balloon eruptions are more common than previously thought, as the eruptive conditions required to generate these products are likely to be present in a variety of submarine volcanic environments. Additionally, the facies relationships observed at Foerstner may be used as a paleoenvironmental indicator for modern and ancient basaltic shallow submarine eruptions because of the relatively narrow depth range over which they likely occur (200-400 m).
Introduction
Shallow submarine volcanism that produces floating basaltic scoria bombs is one of the most peculiar and rarely observed eruption styles on Earth (Kueppers et al., 2012). Rapid degassing of mafic magma at shallow depths (< 400 m) can lead to highly vesicular, extremely low-density volcanic products. Owing to their morphology and behavior, the floating scoria bombs have been termed "lava balloons" (Gaspar et al., 2003). Eruptive conditions that generate these products are not well understood due to their rare occurrence and paucity of direct submarine observations. Previous investigations only recovered samples while they were still floating on the sea surface and no attempts to describe the vent site and/or the distribution of the bombs after sinking have been made. This has led to debate as to the style of submarine eruptions that produces these unique deposits (e.g. Gaspar et al., 2003).
There have been only five cases where floating basaltic bombs have been observed during submarine eruptions: the recent 2011-2012 eruption of El Hierro, Canary Islands, Spain (Troll et al., 2012), a 1993 eruption near Socorro Island, Mexico (Siebe et al., 1995), an 1877 eruption west of the island of Hawai'i, USA (Moore et al., 1985), a 1998-2001 eruptive episode of the Serreta Submarine Ridge northwest of Terceira Island, Azores (Gaspar et al., 2003; Kueppers et al., 2012), and the 1891 eruption of Foerstner volcano northwest of Pantelleria island (Washington, 1909; Butler, 1892).
The Foerstner eruption was the most recent volcanic activity in the Straits of Sicily. It occurred within the Pantelleria graben, a large tectonic depression, 90 km long and 30 km wide, that has a general NW-SE orientation and is bounded by very steep normal faults (Civile et al., 2010). Like other balloon eruptions, it produced ellipsoidal, scoriaceous bombs that rose intact through the water column and floated on the surface of the sea before becoming saturated with seawater and sinking (Washington, 1909; Butler, 1892). Many were greater than 1 m in diameter and extremely vesicular, with individual vesicles reaching up to decimeters in size. In this study, we report on remotely operated vehicle (ROV) explorations of the vent area of the 1891 Foerstner eruption carried out during cruise NA-018 of the E/V Nautilus. This is the first detailed study of the vent site of a basaltic balloon eruption, and a geologic map of the volcanic products has been created using a high-resolution mapping and photographic survey of the area. Geochemical and mineralogical analyses of samples collected by the ROV provide information about magma composition, viscosity, and crystal content of the 1891 products. The MATLAB-based program FOAMS was used to investigate the degassing processes of the basaltic balloons based on imaging of bubble size distributions. We compare the Foerstner balloons to the products of other basaltic balloon events and develop an eruption model to describe the vent facies and processes that led to the formation of the balloons during the 1891 eruption. The results from this well-constrained example provide an important basis for the facies interpretation of ancient highly vesicular submarine basaltic sequences (e.g. Simpson and McPhie, 2001).
Geologic Setting
The Straits of Sicily is located in the northern part of the African continental plate called the Pelagian block (Figure 1; Burollet et al., 1978). It is bounded by the Skerki bank to the west and the Malta escarpment to the east and is very shallow (averaging 350 m depth), except in three NW-trending depressions (Civile et al., 2010). The Sicily Channel has been affected by Late Miocene-Early Pliocene continental rifting (Civile et al., 2008), which produced several geologic features. The rifting created the Pantelleria, Malta, and Linosa tectonic depressions, which reach depths of 1350, 1580, and 1720 m and are bounded by NW-SE trending normal faults (Calanchi et al., 1989; Civile et al., 2008). It also created two volcanic islands (Pantelleria and Linosa) and a series of magmatic seamounts located in the Adventure Plateau and the Graham and Nameless banks (Peccerillo, 2005; Rotolo et al., 2006). The rifting also thinned the continental crust beneath the troughs to about 17 km along the Pantelleria graben axis (Civile et al., 2008). The rifting has been interpreted as a result of mantle convection developed during the roll-back of the African lithosphere slab beneath the Tyrrhenian basin (Argnani, 1990). The tectonic depressions have also been interpreted as large pull-apart basins involving deep crustal levels, formed within a large wrench zone in front of the Africa-Europe collisional belt (Cello et al., 1985). The rifting may also be related to the NE-directed displacement of Sicily away from the African continent (Illies, 1981). These diverse interpretations show that the tectonic mechanisms responsible for the Sicily Channel rift zone are still not well understood.
Pantelleria Island has a surface area of 83 km² and is the largest emerged part of a composite volcano in the Sicily Channel, rising from a depth of 1300 m to 830 m above sea level (Civile et al., 2010; Martorelli et al., 2011). The island is composed wholly of peralkaline trachytes and rhyolites (pantellerites) (Civetta et al., 1988) produced by Late Quaternary volcanic activity. The activity was mainly explosive and characterized by caldera collapses producing large volumes of ignimbrites and pyroclastics (Civetta et al., 1984, 1988). Pantelleria is characterized by notable volcano-tectonic features such as caldera rims, emission centers, and dike swarms, the most significant of which are two large calderas that developed on the southeastern part of the island (Civile et al., 2008). High-resolution mapping of the Pantelleria submarine flanks shows numerous small volcanic cones concentrated to the NW of the island (Bosman et al., 2007). Seismic profiles and regional magnetic anomalies indicate that volcanic bodies with a clear magnetic signature are exposed at the seafloor as far as 37 km northwest and southeast of Pantelleria, with their elongation consistent with the orientation of the rift (Calanchi et al., 1988). Volcanic bodies southeast of the island are buried beneath undisturbed Upper Pliocene-Quaternary sediments, suggesting major volcanic activity in the early stage of the graben development, which partly filled the rift floor (Calanchi et al., 1988). The bodies located northwest of the island are probably related to volcanic activity associated with the development of the Pantelleria Rift (Calanchi et al., 1988).
Seismic activity of the Sicily Channel is characterized by shallow (<25 km), low magnitude (2 to 4) events (Civile et al., 2008). Seismicity is notably absent along the Pantelleria graben, while some earthquakes have been recorded north of Pantelleria.
In the Sicily Channel, volcanic activity was concentrated mainly on the islands of Pantelleria and Linosa during the Pleistocene. Minor submarine volcanism began during the early Pliocene and lasted until 110 years ago, mainly occurring in the Adventure Plateau and in the Graham and Nameless Banks (Corti et al., 2006; Rotolo et al., 2006). The oldest products have been found in a volcanic seamount located east of Nameless Bank, where dredged samples gave a K-Ar age of 9.5 Ma (Beccaluva et al., 1981). The most recent activity occurred during the nineteenth century, most notably at Graham Bank in 1831 and at Foerstner volcano in 1891, the most recent eruption (Washington, 1909). Additional eruptions occurred in 1801, 1845, 1846, and 1863 but were not observed because their short periods of activity did not give rise to a permanent subaerial island (Washington, 1909).
Historical Accounts of the 1891 Foerstner Submarine Eruption
In 1890, many premonitory signals of an eruption were detected on Pantelleria, as summarized by A. Ricco (Butler, 1892). These signals included increased fumarolic activity causing damage to vineyards, increased earthquake activity, and uplift of the island's north coast due to increased tectonic activity (Butler, 1892). On October 14-15, 1891 (three days before the eruption began), stronger earthquakes were accompanied by the drying up of hot springs and a further rise of the north coast (totaling 80 cm), which resulted in surface cracks (Washington, 1909; Butler, 1892).
Large amplitude earthquakes preceded the beginning of the eruption on the morning of October 17, after which all earthquake activity ceased (Washington, 1909).
The 1891 eruption of Foerstner volcano lasted only a week and was not described by any scientific observer. Descriptions of the eruption have been derived from the testimony of fishermen, most notably A. Ricco, whose account was translated by G.
W. Butler in 1892 and provides the basis for the description that follows (Washington, 1909). The first signs of the eruption were deep rumblings and columns of "smoke" rising from the sea surface 4 km west of the town of Pantelleria at the northwest end of the island. Black, subspherical, scoriaceous bombs up to 1 m in diameter were seen rising to the surface along a NE-SW trending line about 850-1000 m long, initially thought to have been produced by fissure activity (Figure 2). Some of the bombs were still degassing at the surface and, as a result, were propelled laterally by horizontal steam jets. Some were thrown up to 20 m in the air as a result of rapid degassing. Bombs collected at the surface were still at very high temperature inside (at least 415 °C, as indicated by fusion of Zn), and one bomb was noted as being incandescent. After the degassing episodes had ceased, the scoria balloons sank as a result of seawater saturation. Some claimed that an ephemeral island was formed (including Foerstner himself), but both Ricco and Butler explicitly deny this suggestion. The highest water temperature recorded at the eruption site was 1.5 °C above the ambient temperature. It was noted that there was a strong smell "as of gunpowder" at the site, most likely from H₂S and SO₂ gas emissions. The eruption ceased on October 25, 1891.
Methodology
The suspected vent site of the 1891 Foerstner eruption (Bosman et al., 2007) was explored using the remotely operated vehicle (ROV) Hercules during cruise NA-018 of the E/V Nautilus. Scoria and lava flow samples were collected at 42 sites at or in close proximity to the submarine vent site using the ROV. Ten samples that encompass the major clast types observed at the vent site were selected for petrographic and geochemical analyses. Bulk samples were cleaned in de-ionized (DI) water, sonicated for 30 minutes to remove foreign particles, rinsed in DI water again, and then dried for 48 hours at 100 °C. Powdered whole rocks were analyzed for major elements by X-ray fluorescence (XRF) using the standard BHVO-2 at the Ronald B. Gilmore X-ray Fluorescence Laboratory, University of Massachusetts, Amherst. Trace element compositions in the same powders were analyzed using a New Wave 213 nm Nd-YAG laser attached to a Thermo X-Series II ICP-MS at Dr. Katie Kelley's lab at URI's Graduate School of Oceanography (GSO) in Narragansett, RI, following the methods of Kelley et al. (2003). Standards used were JB-3, BHVO-1, DNC-1, W-2, and EN026 10D-3. Reproducibility of replicate analyses is <2% rsd. Petrographic descriptions of these samples were completed using a Zeiss Axioscopt petrographic microscope in Dr. Steven Carey's lab at GSO.
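The quoted <2% rsd figure for replicate analyses is straightforward to compute; a minimal helper is sketched below (the replicate concentration values are hypothetical, for illustration only):

```python
import numpy as np

def percent_rsd(replicates):
    """Relative standard deviation in percent:
    100 * sample standard deviation / mean."""
    x = np.asarray(replicates, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical replicate Zr concentrations (ppm) for one sample:
print(round(percent_rsd([311.0, 308.5, 313.2]), 2))  # ~0.76, i.e. <2% rsd
```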
Characterization of bubble textures in vesicular samples was carried out following the methods of Shea et al. (2010). Thin sections made from selected samples were imaged using a petrographic microscope and a scanning electron microscope (SEM), using 5x-100x magnifications to image a range of vesicle sizes between 10 µm and 1.58 mm. SEM imaging was done using a JEOL JSM-5900LV SEM at Mike Platek's lab at the University of Rhode Island, Kingston, RI. The nested images were used to derive vesicle volume distributions via the FOAMS program (Shea et al. 2010).
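As a rough illustration of what a vesicle volume distribution involves, the sketch below bins vesicle equivalent diameters into geometric size classes and returns the volume fraction per class. This is only a toy stand-in for FOAMS, which additionally applies stereological corrections to convert 2-D section measurements into 3-D size distributions; the synthetic diameters are placeholders.

```python
import numpy as np

def vesicle_volume_distribution(diameters_mm, n_bins=20):
    """Volume fraction of vesicles per geometric diameter class."""
    d = np.asarray(diameters_mm, dtype=float)
    volumes = (np.pi / 6.0) * d**3                    # sphere-equivalent volumes
    edges = np.geomspace(d.min(), d.max(), n_bins + 1)
    idx = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
    vvd = np.bincount(idx, weights=volumes, minlength=n_bins)
    return edges, vvd / vvd.sum()

# Synthetic diameters roughly spanning the imaged 0.01-1.58 mm range:
rng = np.random.default_rng(0)
edges, vvd = vesicle_volume_distribution(rng.lognormal(-2.0, 0.8, 500))
```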
A geologic map of Foerstner volcano was created using high definition video footage recorded by Hercules. A total of 40 hours of video footage of the Foerstner vent site and surrounding area was recorded during dives H1205, H1206, and H1207.
Preliminary viewing of the video was used to identify 17 different facies of volcaniclastic and effusive deposits. The video footage was then systematically viewed and facies type was recorded at one minute intervals and linked to the ROV navigation tracks. The map was created using Adobe Illustrator®.
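One simple way to link the minute-by-minute facies calls to the ROV navigation tracks is a time-based as-of join, pairing each observation with the most recent navigation fix. The sketch below assumes timestamped tables; the facies labels, times, and coordinates are invented placeholders, not data from the cruise.

```python
import pandas as pd

facies = pd.DataFrame({
    "time": pd.to_datetime(["2011-10-01 10:00", "2011-10-01 10:01"]),
    "facies": ["scoriaceous lapilli", "pillow flow lobe"],
})
nav = pd.DataFrame({
    "time": pd.to_datetime(["2011-10-01 09:59:50", "2011-10-01 10:00:55"]),
    "lat": [36.85, 36.8502],   # placeholder coordinates
    "lon": [11.93, 11.9303],
})
# Each facies call inherits the latest navigation fix at or before it.
mapped = pd.merge_asof(facies.sort_values("time"), nav.sort_values("time"),
                       on="time", direction="backward")
print(mapped)
```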
Mineralogy and Petrography
All samples from the vent area of Foerstner volcano are plagioclase-olivine-phyric basanite. The majority are hypocrystalline and display porphyritic, hyaloophitic, and vitrophyric textures. Plagioclase is present mostly as acicular, with some tabular and a few equant, microphenocrysts and phenocrysts (0.03-1.85 mm), as well as microlites. Olivine is mainly present as microlites, but also as some euhedral to subhedral microphenocrysts and a few phenocrysts (0.03-0.85 mm) that contain melt inclusions. Olivine also occurs within aggregates (glomerocrysts) in some samples.
Subhedral augite is the least commonly found phenocryst.
The groundmass of the samples consists of sideromelane, tachylite, or a mixture of both. Tachylite is cryptocrystalline glass and is the result of confined conditions of crystallization, controlled by local enrichment-depletion of elements adjacent to precipitating minerals (Taddeucci et al. 2004). Tachylite is nearly opaque as it contains abundant microlites leading to an optical isotropy similar to magnetite (Morris et al., 1990). Vesicularity descriptions used in this study are adopted from Houghton and Wilson (1989) and are as follows: 0-5% non-vesicular, 5-20% incipiently vesicular, 20-40% poorly vesicular, 40-60% moderately vesicular, 60-80% highly vesicular, >80% extremely vesicular. Most of the samples from the Foerstner vent area are highly to extremely vesicular.
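The Houghton and Wilson (1989) vesicularity classes quoted above translate directly into a small lookup; a minimal sketch (the handling of values exactly on a class boundary is an arbitrary choice here):

```python
def vesicularity_class(vesicle_pct):
    """Classify vesicularity (%) after Houghton and Wilson (1989)."""
    for upper, label in [(5, "non-vesicular"),
                         (20, "incipiently vesicular"),
                         (40, "poorly vesicular"),
                         (60, "moderately vesicular"),
                         (80, "highly vesicular")]:
        if vesicle_pct <= upper:
            return label
    return "extremely vesicular"

print(vesicularity_class(85))  # 'extremely vesicular'
```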
Whole-rock and Trace Element Compositions
Ten samples were chosen for chemical analyses based on the location and lithology of the submarine deposits (Table 1). Samples analyzed were from the main vent of Foerstner (NA018-019, 021, 022, 023, 026), 125 m south of Foerstner (NA018-025), a northern, deeper vent (NA018-027), the saddle region between Foerstner and a large seamount to the west (NA018-030), and the summit of the western seamount (NA018-032, 033) (Table 1). All scoria samples are classified as tephrite basanite according to the IUGS classification, occupying a narrow range in SiO₂ abundance from 43.9-44.9 wt% (Figure 5a, b; Table 2). Sample NA018-025, to the south of Foerstner, is more evolved than the rest of the Foerstner deposits, with an SiO₂ content of 47.0 wt%. Washington (1909) carried out the only other chemical analyses of Foerstner scoria deposits. Samples analyzed from the Foerstner vent site in this study are similar in composition to those of Washington (1909), although slightly less evolved (Figure 5a). Low abundances of SiO₂ (~44 wt%), MgO (~5.5 wt%), and alkalis (~4.5 wt%) are common to both sets of analyses, as is the abundance of CaO. Discrepancies arise when comparing Al₂O₃, P₂O₅, and TiO₂: the Al₂O₃ content is higher by 1.7 wt% in samples from this study, while Washington's samples contain 2.1 wt% more TiO₂ and 0.5 wt% more P₂O₅. Some of the discrepancies may reflect true compositional differences, but they more likely represent interlaboratory differences in the analytical techniques, which span more than 100 years.
Samples from Foerstner are typically less evolved than samples taken from other nearby volcanic centers in the Straits of Sicily, which are basalt and trachy-basalt (Figure 5a) (Calanchi et al., 1989). Exceptions are samples from the Nameless and Tetide Banks, which are less evolved than Foerstner and plot near the boundaries of tephrite basanite-foidite and tephrite basanite-picro-basalt (Figure 5a) (Calanchi et al., 1989).
Whole-rock chemical analyses of the Foerstner samples were compared with all other known submarine lava balloon deposits. Chemical data plotted on the IUGS TAS diagram confirm that the four lava balloon deposits are classified as four different types of volcanic rock: Foerstner, tephrite basanite (this study); Socorro, trachy-basalt (Siebe et al., 1995); Hawaii, basaltic andesite (Moore et al., 1985); Azores, basalt (Gaspar et al., 2003) (Figure 5b). There does not appear to be a strong correlation between magma composition and the ability to produce lava balloons during eruption, although it is noted that most basaltic balloon eruptions tend to be more alkalic in nature.
The Foerstner basanites show very similar enrichment in the rare earth elements relative to chondrites, with light rare earth elements (LREE) being strongly enriched relative to the heavy rare earth elements (HREE) ( Figure 6, Table 3).
Following the methods of Pearce and Norry (1979), trace elements of the basanites are plotted on the Zr/Y-Zr tectonic discrimination diagram (Figure 7, Table 4). On this diagram, the high-Zr-Y Foerstner samples plot in the within-plate (oceanic island) volcanic province domain. Mid-ocean ridge basalt-normalized (N-MORB) trace element patterns were plotted following the methods of Pearce (1983). The patterns show that most incompatible elements are enriched relative to MORB, with significant enrichment of Ta and Nb (Figure 8) (Pearce, 1983). These patterns are indicative of intra-continental plate basalts and agree with the REE results. Sample NA018-025 shows anomalous geochemical patterns relative to the rest of the Foerstner samples. It is significantly less enriched in rare earth and trace elements, suggesting that it originated from a different magmatic source.
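A crude version of the within-plate check can be scripted; the Zr and Y values below are hypothetical placeholders (not the measured Foerstner data), and the single Zr/Y cut-off is a gross simplification of the Pearce and Norry (1979) field boundaries:

```python
import numpy as np

zr = np.array([260.0, 275.0, 300.0])   # ppm, hypothetical
y = np.array([27.0, 28.5, 30.0])       # ppm, hypothetical
zr_y = zr / y
# High Zr together with high Zr/Y is characteristic of within-plate basalts.
within_plate = (zr > 100) & (zr_y > 3.0)
print(list(zip(np.round(zr_y, 2), within_plate)))
```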
Vent Structure and Deposit Facies
A bathymetric map of the suspected Foerstner vent site was created using high-resolution ROV surveys and shows that the volcanic edifice has a slightly elliptical shape formed by the overlapping of two circular mounds.
General Textural Observations
Thin section: Certain textural features are common throughout most of the Foerstner scoria deposits. Vesicles contained within sideromelane groundmass are typically more abundant but smaller in size, while those found in tachylite are fewer but larger. Small vesicles (<0.2 mm) are typically round, while larger vesicles become increasingly irregular with non-circular outlines. Vesicle walls are smooth, and bubbles are found adjacent to both groundmass and crystals. There is no evidence of shearing, as indicated by the lack of elongated vesicle trains.
Tachylite has a higher content of microlites and a lower vesicularity compared to sideromelane (Figure 17a). This is most likely a result of a slower rate of cooling allowing time for microlite nucleation and growth. The transition from sideromelane to tachylite groundmass observed in the shell of some Foerstner samples can be explained by a variation of cooling rates, as described by Kueppers et al. (2012) for basaltic balloons from the Azores. Upon discharge into seawater, lava instantly cools at a very high rate (up to 1,259 K/s) forming a thin, oxidized crust that consists of rapidly quenched glass (sideromelane) and large vesicles. This crust provides a thermal boundary layer between the incandescent interior of the scoria bomb and cold seawater. The glass beneath the crust is not in direct contact with the seawater and cools at a much slower rate (~30 K/s). This slower rate allows for an extensive nucleation and growth of microlites, leading to the development of tachylite ( Figure 17b). The VVD plot of a scoria bomb representative of those observed at Foerstner recovered from the large western vent (NA018-032) displays a negative skewness with an extended tail towards small vesicles (Figure 23). Vesicles range from 0.01 to 1 mm in diameter. The distribution shows a progressive disappearance of smaller vesicles with a major mode between 0.32 and 0.63 mm (ignoring the significant drop off in volume fraction at 0.5 mm).
Bubble formation and growth within AD 1891 magmas
A comparison of bubble texture data from samples of AD 1891 deposits allows inferences to be made about magma degassing processes. Samples were chosen to cover the major lithofacies found at the Foerstner vent site and surrounding seafloor.
All four samples exhibit generally unimodal distributions on VVD plots (Figures 20-23) suggesting a single distinct pulse of nucleation and growth (Shea et al. 2010). The negative skewness, also observed on all four plots, is interpreted to be a consequence of bubble ripening during the course of the eruption.
The bubble sizes and spatial distributions observed in the Foerstner samples suggest that growth resulted from the steady diffusive transfer of gas between bubbles through films. This transfer process, known as Ostwald ripening, is driven by the pressure excess inside bubbles, which is high for small bubbles and low for large bubbles (Mangan and Cashman 1996). Since the bubble distribution within magma is generally polydispersed, internal pressures will be irregular. As bubbles occupy a greater volume of the melt (usually near the vent surface), the nearest-neighbor distance becomes smaller and gas diffuses from regions of high to low pressure. As a result, large bubbles grow and small bubbles shrink or may disappear altogether (Mangan and Cashman 1996). In this respect, the process differs from coalescence, which is driven by mechanical and molecular interactions resulting in a more favorable thermodynamic condition, but is not actually driven by surface energy (Herd and Pinkerton 1997).
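The pressure excess that drives ripening is the Laplace overpressure, ΔP = 2σ/r. A short worked example, assuming an order-of-magnitude surface tension of ~0.3 N/m for basaltic melt (an assumed value, not one measured in this study):

```python
def laplace_overpressure(radius_m, surface_tension=0.3):
    """Excess pressure (Pa) inside a gas bubble in melt: dP = 2*sigma/r.
    surface_tension ~0.3 N/m is an assumed basaltic-melt value."""
    return 2.0 * surface_tension / radius_m

for r in (10e-6, 100e-6, 1e-3):   # 10 um, 100 um, 1 mm bubble radii
    print(f"r = {r*1e6:7.1f} um -> dP = {laplace_overpressure(r):8.0f} Pa")
# Small bubbles carry far higher internal pressure, so gas diffuses from
# small to large bubbles: the driving force of Ostwald ripening.
```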
The progressive disappearance of small vesicles and the enrichment of large vesicles are observed in all VVD plots (Figures 20-23). As in coalescence, the foam coarsens as magma residence time in the conduit increases and the surface area of the gas-melt interface decreases. However, individual bubbles shrink or grow gradually during ripening, and a normal rather than polymodal bubble size frequency distribution evolves (Mangan and Cashman 1996). Studies have shown that ripening has a significant influence on the bubble textures of basaltic fire-fountain eruptions (Mangan and Cashman 1996). Additionally, it has been observed that Strombolian activity at Heimaey, Iceland was controlled by the bursting of large, individual bubbles (Blackburn et al. 1976) such as those formed by ripening. Further discrimination between Hawaiian fire fountaining and Strombolian-type eruption mechanisms cannot be deduced from these data.
Facies Distribution and Eruption Model
Eruption mechanisms that vary from dominantly effusive to explosive have been proposed to explain the production of floating scoria and their subsequent deposition. Gaspar et al. (2003) and Kueppers et al. (2012) proposed that the magma involved in the Serreta submarine ridge eruption was fluid and gas-rich, favoring the segregation and accumulation of gas under a cooler lava crust at vent level. They proposed the development of large gas bubbles within the magma just below the crust.
At a critical point of accumulation, these large gas bubbles form blisters that, in a subaqueous setting, detach from the lava surface as swollen lava balloons and rise by flotation. In contrast, Siebe et al. (1995) proposed that intermittent lava fountaining could be responsible for the production of floating scoria bombs, as well as other pyroclastic deposits and pillow flows. Fountaining results from changes in eruption velocities due to variation in exsolved volatile content. While most of the clasts fall out close to the vent, some gas-charged magma could produce highly vesiculated scoria that rise to the surface (Smith and Batiza 1989). At the summit of Foerstner volcano there was no evidence for a solidified lava lake or extensive pillow flow lobes. Thus, the dominant clastic nature of the vent site points more strongly to a dynamic explosive mechanism for balloon formation.
The dominant ripening degassing process, determined by analyzing the bubble size distributions of the Foerstner deposits, favors either a Hawaiian fire fountaining or a Strombolian-type eruption mechanism. Each of these eruption types is characterized by fluctuating magma discharge rates due to changing exsolved volatile contents (Siebe et al. 1995; Pioli et al. 2009). Either submarine fire fountaining or Strombolian activity could explain the deposition of abundant pyroclastic deposits as well as the effusive lava flow deposits observed at the main vent site. Magma eruption velocities are unlikely to remain constant for extended periods of time; instead they would fluctuate, as observed at subaerial lava fountains (Head and Wilson 1987). Periods of rapid gas exsolution would promote explosive eruptions and their associated deposits, while waning exsolution promotes effusive eruptions (Pioli et al. 2009).
The diversity of facies observed at the Foerstner vent site is most likely a result of varying magma rise speed in the feeding conduit, enhanced volatile content of the source alkali basalt, and generally low viscosity of the magma. These factors promote differential rise speeds of melt and bubbles, allowing large, early-nucleated bubbles to migrate upward in the conduit as a slug flow and overtake smaller, later-nucleated bubbles. The emergence of these large bubbles through magma within the conduit drives Strombolian-type eruptions (Head and Wilson 2003). This erupted magma would have contained a lower fraction of small vesicles, as they were overtaken by larger, more energetically stable bubbles. The lack of small gas bubbles limits the disruption of erupted lava into smaller fragments, allowing very large magma clots to be ejected at sizes comparable to the width of the conduit (Head and Wilson 2003). This mechanism could explain the deposition of scoria bombs up to 2 meters in size observed on the flanks of Foerstner. Head and Wilson (2003) made predictions about the resulting deposits and landforms of submarine Strombolian eruptions. The initial stages of a Strombolian eruption are characterized by dike emplacement and extrusion of lavas at very low discharge rates. This leads to the deposition of short flows and pillows rather than extensive lobate sheets (Head et al. 1996; Gregg and Fink 1995), much like the well-defined pillow flow lobes forming linear ridges that emanate radially towards the northwest and west from the main Foerstner vent (Figure 10). As magma rise rates increase and stabilize, typical Strombolian activity occurs as the gas bubble rise rate exceeds that of the magma. Explosive disruption of the magma occurs at the vent-water interface, producing fragmental deposits that typically fall within 10-20 m of the vent as a result of hydrodynamic drag and the subsequent deceleration of pyroclastic fragments. The larger blocks and bombs (64 to >256 mm in diameter) that constitute the major facies observed at Foerstner most likely formed from the plug of magma ejected in front of the larger rising gas bubbles. Smaller fragments quickly settle near the vent upon ejection as a result of their low inertia and accumulate as agglutinate.
Deposits of this kind may correspond to spatter-like deposits observed at the summit of Foerstner.
In general, explosive Strombolian activity becomes localized at the cone vent, while lava flows emerge from lateral vents located at the base of the cone (Valentine et al. 2005; Pioli et al. 2009). This type of vent morphology is observed at Foerstner: the main vent site is dominated by explosive pyroclastic deposits, and the three smaller mounds observed immediately to the northwest are constructed of pillow flow lobes. Simultaneous eruption of pyroclastics from the cone and lava flows from the lateral vents requires segregation of a low-viscosity, exsolved volatile-rich magma into a gas-rich mixture that ascends through the central conduit and gas-poor lava that flows laterally. Observations of Strombolian activity at Paricutin volcano in Mexico showed that the mass eruption rate (MER) controlled the proportion of magma emitted by explosive vs. effusive activity, and that the initial formation of lateral vents increased the explosivity of eruptions occurring at the cone (Pioli et al. 2009). Dual activity of this type requires a MER of 10³ to 10⁵ kg/s; when this drops below 10³ kg/s, degassing dominates, producing either lava effusion or mild explosive activity (Pioli et al. 2009).
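The MER thresholds quoted from Pioli et al. (2009) can be expressed as a simple regime lookup; behaviour outside the quoted 10³-10⁵ kg/s range is an extrapolation and should be read loosely:

```python
def eruption_regime(mer_kg_s):
    """Regime suggested by mass eruption rate (Pioli et al. 2009)."""
    if mer_kg_s < 1e3:
        return "degassing-dominated: lava effusion or mild explosive activity"
    if mer_kg_s <= 1e5:
        return "dual explosive (cone) plus effusive (lateral vent) activity"
    return "above the quoted dual-activity range"

print(eruption_regime(2e4))  # dual explosive plus effusive activity
```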
The pyroclastic deposits at Foerstner can be distinguished from submarine fire fountain eruption products by the dominance of large scoriaceous clasts deposited near the vent, a spatter-like vent facies, the lack of extensive pyroclastic flow deposits, and short, pillow-textured flows. If Hawaiian-style fountaining had been the dominant eruption mechanism, the predicted deposits would include vesicular sheet flows, partly agglutinated distal fragments, relatively small grain sizes, and abundant pyroclastic flows surrounding the cone (Head and Wilson 2003).
Thus, we propose that Strombolian activity was the most likely eruption style. Although we use the term Strombolian style to describe the eruption mechanism, it is noted that there are fundamental differences between the behaviors of Strombolian events in the subaerial versus the submarine environment. First, the presence of water significantly dampens the dispersal of fragmented clasts in the submarine environment and will likely affect the resulting morphology of the vent area. In the subaerial environment, Strombolian activity produces scoria cones with a central crater because fragmented clasts can be dispersed over relatively large distances (>100 m) by ballistic trajectories through the atmosphere. Underwater, the drag effect restricts dispersal and likely builds a mound-like cone without a well-defined central crater. Second, in the submarine environment, gas-rich magma can attain positive buoyancy relative to seawater prior to fragmentation (Friedman et al. 2012; Rotella et al. 2013). This condition, which can never be attained in the subaerial environment, can dramatically affect the nature of submarine eruptions. Positive buoyancy flux results in a potentially complex globular discharge of highly vesicular magma that then fragments by gas expansion (Friedman et al. 2012). This style of activity was captured by ROV photography at West Mata submarine volcano in the Pacific (Rubin et al. 2012, fig. 2b).
Assuming that Foerstner exhibited Strombolian-type eruptive activity, predictions can be made about the generation of the lava balloons. We suggest that discharge of magma at the vent-seawater interface produced batches of coarse magma "blobs" that rapidly cooled and decelerated upon contact with the surrounding seawater. Since magma erupted by Strombolian activity is typically a gas-rich froth rather than a liquid, many of these blobs had the potential to be extremely vesicular and thus buoyant relative to seawater. Maximum bubble sizes for submarine Strombolian eruptions can reach 1.5-2 m as a result of higher ambient pressure (Head and Wilson 2003). Data from other studies suggest that the exterior of these fragments can hyperquench at a cooling rate of ~1,259 K/s to form a solid lava shell that acts as an insulating layer, preventing the magmatic gas in its interior from escaping (Kueppers et al. 2012). These extremely vesicular bombs attained densities less than 1000 kg/m³ and thus could rise buoyantly to the surface. Magma forming these rising balloons continued to degas as the ambient pressure decreased, leading to balloon inflation. This expansion enlarged the balloons' surface area and formed new skin, a process facilitated by the presence of still-molten lava inside the balloon, which prevented complete rupture of the outer solid crust. Some lava balloons collected off the sea surface in 1891 were noted to still have molten, red-hot interiors (Butler 1892).
All other known occurrences of floating scoria have been associated with a relatively narrow water depth range between 30 and 1000 m (Siebe et al. 1995; Kueppers et al. 2012; Rivera et al. 2013), with the best-defined source vents lying within a more limited range of only 200 to 400 m. It is likely that at depths greater than 400 m, confining pressures prevent the degree of volatile exsolution required to generate extremely vesicular magma that is buoyant relative to seawater.
In contrast, at depths shallower than 200 m, extremely low pressures promote rapid volatile exsolution and a high degree of fragmentation, preventing the discharge of coarse magma blobs. At such shallow depths, highly explosive activity is likely driven by phreatomagmatic explosions in addition to primary degassing (Sigurdsson et al. 1999). The presence of highly inflated scoria bombs in association with a dominantly clastic vent site, as observed at Foerstner, may therefore represent a paleoenvironmental indicator for modern and ancient basaltic shallow submarine eruptions at depths of several hundred meters. In our model, the eruption began with low-discharge effusive activity producing short, pillow flow lobes that emanate from the center of the vent to the northwest and west. Typical Strombolian activity soon followed as the gas bubble rise rate exceeded that of the magma. Explosive disruption of the magma occurred at the vent-water interface and produced coarse fragmental deposits that fell on the slopes, while smaller fragments quickly settled near the vent and accumulated as spatter-like deposits. We suggest that buoyant scoria bombs were formed from the rapid cooling of extremely vesicular magma blobs. The exterior of these fragments hyperquenched, forming a solid lava shell that acted as an insulating layer preventing magmatic gas in its interior from escaping. The bombs attained densities less than 1000 kg/m³ and thus rose buoyantly to the surface. Rupture of the outer solid crust was prevented by the presence of a still-molten interior that accommodated expansion by progressive thin-crust formation.
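For orientation, the 200-400 m balloon-forming window corresponds to confining pressures of roughly 2-4 MPa, from the standard hydrostatic relation P = ρgh (nominal seawater density assumed; surface atmospheric pressure neglected):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, nominal value
G = 9.81               # m/s^2

def hydrostatic_pressure_mpa(depth_m):
    """Hydrostatic pressure (MPa) at a given water depth: P = rho*g*h."""
    return RHO_SEAWATER * G * depth_m / 1e6

for depth in (200, 400):
    print(f"{depth} m -> {hydrostatic_pressure_mpa(depth):.1f} MPa")
```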
Conclusions
Lava balloon eruptions may occur more frequently than previously thought.
The eruptive conditions that characterize these products are now better documented and likely occur in a variety of submarine volcanic environments: low-silica alkalic magma (43-52% SiO₂), high volatile content (CO₂, H₂O), and low hydrostatic pressures (water depths between ~200-400 m). Most historical balloon eruptions had durations of days to weeks, providing a very small window of time for their detection by direct observation, seismology, or remote sensing. Only the Serreta Ridge eruption in the Azores lasted for a year or longer (Gaspar et al. 2003; Kueppers et al. 2012). Identifying and monitoring active submarine volcanoes that satisfy the
Fig. 5a
Average composition of Foerstner basanitic scoria in comparison with samples from other major volcanic centers in the Straits of Sicily, plotted on the TAS diagram after the International Union of Geological Sciences (Calanchi et al., 1988; Washington, 1909; Beccaluva et al., 1981). b Average composition of Foerstner basanitic scoria balloons in comparison with all other known submarine lava balloon deposits (Siebe et al., 1995; Moore et al., 1985; Kueppers et al., 2012). Red dashed line discriminates between alkaline and sub-alkaline rocks.
[Fragment of a trace-element figure caption:] patterns after Pearce (1983); sample NA018-025 shows significantly less trace element enrichment relative to the rest of the Foerstner samples, indicating it likely originated from a different magma source.
[Fragment of a scoria-balloon figure caption:] outer (OL) and inner tachylite (TL) layers labeled; arrow points to the location of the white horizon layer separating the lava shell from the hollowed interior.
Fig. 22
Vesicle volume distribution of a pillow flow lobe fragment recovered from the northwest mound (NA018-027).
Fig. 23
Vesicle volume distribution of a scoria bomb representative of those observed at Foerstner recovered from the large western vent (NA018-032). | 8,507.8 | 2013-01-01T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
New Challenge for Classics: Neutral Zinc Complexes Stabilised by 2,2'-bipyridine and 1,10-phenanthroline and Their Application in the Ring-opening Polymerisation of Lactide
The zinc acetato and triflato complexes of 2,2'-bipyridine and 1,10-phenanthroline were prepared and completely characterised. The whole series (including the already described dichlorido complexes and the ligands themselves) were screened for their catalytic activity in the solvent free ring-opening polymerisation of D,L-lactide. The acetato and triflato complexes were found to be active initiators and polylactides could be obtained in almost quantitative yields or with high molecular weights, up to 145,000 g/mol.
Introduction
Modern approaches towards green and sustainable chemistry focus on the substitution of petrochemical-based plastics with biorenewable and biodegradable materials [1-6]. Polylactide (PLA) is an aliphatic polyester produced by controlled metal-initiated ring-opening polymerisation (ROP) of lactide, the cyclic diester of lactic acid (Figure 1). Due to their favourable properties, which result in a wide range of applications (e.g., packaging materials, drug delivery systems, surgical implants), PLAs have proven to be the most attractive and useful class of biodegradable polyesters among the numerous polyesters studied to date [7]. Based on the 12 principles of green chemistry introduced by P. T. Anastas and J. C. Warner [8-10], PLA can be described as a sustainable polymer in the context of green chemistry. It is produced from inexpensive, annually renewable resources, and after its lifetime it can be recycled, or it degrades through simple hydrolysis of the ester linkages into non-toxic, harmless natural products. PLA is also a low-impact greenhouse gas polymer, because the CO₂ generated during biodegradation is balanced by an equal amount taken from the atmosphere during the growth of the plant feedstock [10]. By using environmentally desirable solvent-free reaction conditions, the waste disposal of the production process can be further improved. Thus, from the perspective of green chemistry, PLA should change from a specialty material into a large-volume commodity plastic [7,11-17].
Consequently, the development of new single-site metal catalysts for the ROP of lactide has seen tremendous growth over the past decade. Several important ligand classes have been used to stabilise catalytically active complexes, including simple alkoxides and carboxylates, β-diiminates, tris(pyrazolyl)borates, phenolates, guanidates, Schiff bases, bis(phosphinimino)methanides and salen ligands [5,11,12,14,18-21]. However, their high polymerisation activity is often combined with high sensitivity, which can be ascribed to the anionic character of these ligands. Thus, for industrial purposes, there is an exigent need for initiators that tolerate air, moisture and small impurities in the monomer [14].
Up to now, only a few systems using neutral ligands have been described. They employ strong donor systems such as guanidines [22,23], phosphinimines [24] and imidazolin-2-imines [25]. To find new neutral ligands for the development of ROP-active single-site metal catalysts, we focused our interest on 2,2'-bipyridine (bipy, 1) and 1,10-phenanthroline (phen, 2), two of the most widely used bidentate ligands in coordination chemistry (Figure 2). They are commercially available, easy to handle and can stabilise complexes with a wide range of transition metals due to their favourable donor properties. To date, several zinc complexes containing 1 and 2 have been synthesised [26-38], but to the best of our knowledge, none has been tested for its ability to initiate the ring-opening polymerisation of cyclic lactones. Herein we report on the synthesis and characterisation of four novel zinc acetate and triflate complexes stabilised by 2,2'-bipyridine and 1,10-phenanthroline. They, together with the already described chloride complexes (Figure 3) [39,40] and the ligands themselves, were screened for their catalytic activity in the solvent-free ROP of D,L-lactide.
Complex Syntheses
The zinc complexes 1a [39] and 2a [40] were prepared according to literature procedures.
The acetato and triflato complexes 1b, 2b, 1c and [Zn(phen)₂(CF₃SO₃)₂] (2c) were obtained as colourless crystalline solids in 88-98% yields by simply stirring 1 and 2 with Zn(OAc)₂ and Zn(OTf)₂ in a dry aprotic solvent (THF/MeCN) (see Figure 4). Single crystals were prepared either by slowly cooling a saturated solution to room temperature or by slow diffusion of diethyl ether into the solution. The molecular structures of 1b-2c (Figures 5-7) were determined by X-ray crystallography. The crystals obtained show a high stability towards moisture and air. They can be handled and stored in air, whereas the corresponding zinc salts (ZnCl₂, Zn(OAc)₂ and Zn(OTf)₂) are sensitive towards hydrolysis and rather hygroscopic. The chlorido complexes 1a and 2a exhibit a simple tetrahedral coordination geometry, in which the zinc atom is fourfold coordinated by the two N-donor atoms of the chelating ligand and two chlorides (Figure 3). The acetato complexes 1b and 2b possess polynuclear structures. In 1b, two Zn atoms are each surrounded by the N-donor atoms of 1 and three oxygen atoms of three bridging acetato ligands connecting each of them with a third Zn atom located between them. Consequently, the latter shows an octahedral coordination environment in which each corner is occupied by an acetato oxygen atom. Interestingly, in each case two acetato ligands bridge the Zn atoms via two oxygen functions, but the third acetato ligand connects the metal atoms via only one oxygen atom of the acetate group. 2b is a binuclear species that is also bridged by acetato ligands. Each Zn atom is coordinated in a chelating manner by the N-donors of 2 and by two oxygen atoms of the two acetato ligands connecting them both, generating an eight-membered heterocycle. To complete the trigonal bipyramidal coordination sphere of each metal atom, the fifth coordination site is occupied by a non-bridging acetato ligand. The zinc triflato complexes 1c and 2c exhibit an analogous structural motif. Each zinc atom is surrounded by the four N-donor atoms of two chelating ligands and two oxygen atoms of two triflato ligands, generating an octahedral coordination environment. Due to the rigid structure of 1 and 2, the bite angles of the described complexes differ only slightly, ranging from 75.8(1) to 79.3(1)°, with slightly higher values for the triflato complexes. Regarding the Zn-N bond lengths, it is notable that in all complexes except 1b, one of the bonds is slightly longer than the other, by 0.010 (1c), 0.024 (2c) and 0.104 Å (2b). Their absolute values range from 2.102(1) to 2.210(2) Å. The values of the Zn-O bonds in 1b and 2b depend on the coordination mode of the acetato ligands. In 1b, the Zn-O bonds between the bridging acetato ligands and the zinc atoms that are also coordinated by the pyridine ligands (av. 2.010 Å) are generally shorter than those belonging to the ZnO₆ octahedron (av. 2.111 Å). In 2b, the non-bridging acetato ligand possesses the shortest Zn-O bond (av. 1.983 Å). The bond of the bridging ligand whose oxygen atom occupies the equatorial coordination site of the trigonal bipyramid (av. 2.004 Å) is longer than the latter, but not as long as the Zn-O bond to the oxygen atom occupying the axial coordination site (av. 2.087 Å). Due to their similar structures, the Zn-O bond lengths of 1c (2.191(1) Å) and 2c (2.189(1) Å) are essentially equal. Selected bond lengths and angles are summarised in Table 1.
Polymerisation Activity
The complexes 1a to 1c and 2a to 2c, as well as the ligands 1 and 2, were tested for their ability to initiate the ring-opening polymerisation of D,L-lactide. For preliminary polymerisation tests, the monomer (used without further purification) and the initiator (I/M ratio 1:500) were heated for 24 h or 48 h at 150 °C. After the reaction time, the melt was dissolved in dichloromethane, and the PLA was precipitated in cold ethanol, isolated and dried under vacuum at 50 °C. In order to rate the catalytic activity of the complexes, the polymer yield was determined, and the molecular weights and polydispersity of the PLA were measured by gel permeation chromatography (see Table 2). The tacticity was analysed by homonuclear decoupled ¹H NMR spectroscopy [41].
We found that the pure ligands as well as the chlorido complexes show no catalytic activity even after 48 h, whereas the zinc acetato and zinc triflato complexes are able to produce PLA. Because the acetate-containing complexes provide polymers with significantly higher molecular weights and higher yields than their triflate-containing analogues, it is obvious that the catalytic activity strongly depends on the character of the anionic component of the zinc complex. We reported this strong dependence in previous studies using bis-chelated zinc guanidine complexes [22]. In addition, the complexes stabilised by 1 exhibit a higher activity than those stabilised by 2. Extending the reaction time from 24 to 48 h leads, in the case of 1c and 2c, to an increase in yield and molecular weight, whereas in polymerisations initiated with 1b and 2b a decrease in molecular weight is observed, which may be caused by side reactions such as interchain or intrachain transesterification resulting in chain transfer [42,43]. It is also remarkable that the molecular weights of the polymers obtained with 1c and 2c were significantly high relative to the obtained yields. These PLA samples also show a slight heterotactic enchainment, with Pr values of 0.59 and 0.61, whereas the values of samples obtained using 1b and 2b (0.50) imply that the complex structure has no ability to affect the tacticity of the formed polymer. In order to classify the polymerisation activity of 1a, 2a, 1b, 2b, 1c and 2c, their results are compared with those of the free salts and of guanidine-pyridine zinc complexes tested under the same conditions [22]. Since ZnCl₂ gives, after 48 h, PLAs with Mw values of 45,000 g/mol in yields of 85%, whereas the chlorido complexes 1a and 2a show no ability to catalyse the ROP of lactide, the possibility that the complexes decompose in the melt and the single components are the real initiators can be excluded. The reverse effect is observed in the case of Zn(CF₃SO₃)₂ and 1c and 2c: whereas 1c and 2c are quite active, the free zinc salt did not show any catalytic performance. Zn(CH₃COO)₂ is well known to initiate lactide polymerisation, but under the same conditions it shows less catalytic activity than 1b and 2b and a broader distribution (t = 24 h, Y = 69%, Mw = 130,000 g/mol, PD = 2.1). In comparison with the most active guanidine-pyridine zinc triflate complex (t = 24 h, Y = 93%, Mw = 155,000 g/mol, PD = 2.2) from a recently tested series of guanidine-pyridine zinc complexes, 1c and 2c show less catalytic activity, whereas 1b and 2b provide comparable activity. In addition, their Mw values, which lie close to the expected values (based on the monomer-to-initiator ratio), and the relatively narrow molecular weight distributions indicate a more controlled reaction.
These preliminary investigations using complexes stabilised by the neutral bis-chelating ligands 2,2'-bipyridine and 1,10-phenanthroline show that they provide PLAs in high yields or with high molecular weights. This, together with their commercial availability, their high stability (which makes them easy to handle) and their favourable donor properties, clearly demonstrates that they are a promising neutral ligand class for the development of single-site metal catalysts for the ROP of lactide.
Conclusions
In this contribution we reported the synthesis and complete characterisation of four novel zinc complexes stabilised by the neutral bis-chelating ligands 2,2'-bipyridine and 1,10-phenanthroline, which were proven to be active initiators in the solvent-free ring-opening polymerisation of D,L-lactide. They provide PLAs in almost quantitative yields or with high molecular weights of up to 145,000 g/mol. This, combined with their commercial availability, their high robustness (resulting in easy handling) and their favourable donor properties, demonstrates the high potential of this neutral ligand class for the development of excellent and application-oriented single-site metal catalysts for the ROP of lactide. Optimisation of the reaction conditions to control the polymerisation process, elucidation of the mechanism, which is active without the presence of alcohols and alkoxides, and the development of more active catalysts are the subjects of further studies.
Physical Measurements
Spectra were recorded with the following spectrometers. NMR: Bruker Avance 500. The NMR signals were calibrated to the residual signals of the deuterated solvents (δH(CDCl₃) = 7.26 ppm, δH(CD₃CN) = 1.94 ppm). Samples for homonuclear decoupling were prepared by dissolving 10 mg of the polymer in 1 mL of CDCl₃ (Aldrich), and the samples were left for 2 hours to ensure full dissolution [45]. The ¹H homonuclear decoupled spectra were recorded on a Bruker Avance 400 MHz spectrometer and referenced to residual solvent peaks. The parameter Pr (probability of heterotactic enchainment) was determined via analysis of the respective tetrad integrals, using Pr² = 2[sis].
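Rearranging Pr² = 2[sis] gives Pr directly from the normalised tetrad integral; a one-line helper (the input integral below is a hypothetical value chosen so the output matches the Pr ≈ 0.59 reported above):

```python
import math

def probability_heterotactic(sis_integral):
    """Pr from the normalised [sis] tetrad integral: Pr = sqrt(2*[sis])."""
    return math.sqrt(2.0 * sis_integral)

print(round(probability_heterotactic(0.174), 2))  # ~0.59
```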
For the NMR analysis of the respective tetrad integrals [sis], see the work of Coates et al. [41]. IR: Nicolet P510. MS (EI, 70 eV): Finnigan MAT 95. Elemental analyses: elementar vario MICRO cube, device CHNS-932 from Leco Instruments. Crystal structure analyses: crystal data for the structures 1b, 2b, 1c and 2c are presented in Table 3. Data were collected on a Bruker-AXS SMART APEX CCD [46], using MoKα radiation (λ = 0.71073 Å) and a graphite monochromator. Data reduction and absorption correction were performed with SAINT and SADABS [46]. The structures were solved by direct and conventional Fourier methods, and all non-hydrogen atoms were refined anisotropically by full-matrix least-squares techniques based on F² (SHELXTL [46]). Hydrogen atoms were derived from difference Fourier maps and placed at idealised positions, riding on their parent C atoms, with isotropic displacement parameters Uiso(H) = 1.2Ueq(C) and 1.5Ueq(C_methyl). All methyl groups were allowed to rotate but not to tip. Full crystallographic data (excluding structure factors) for 1b, 2b, 1c and 2c have been deposited with the Cambridge Crystallographic Data Centre as supplementary nos. CCDC-752689 (1b), CCDC-752690 (2b), CCDC-752691 (1c) and CCDC-752692 (2c). Copies of the data can be obtained free of charge on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK (fax: (+44)1223-336-033; e-mail: deposit@ccdc.cam.ac.uk). Gel permeation chromatography: the molecular weights and molecular weight distributions of the obtained polylactide samples were determined by GPC in THF as the mobile phase at a flow rate of 1 mL/min, using a combination of PSS SDV columns with porosities of 10⁵ and 10³ Å together with an HPLC pump (L6200, Merck Hitachi) and a refractive index detector (Smartline RI Detector 2300, Knauer). Universal calibration was applied to evaluate the chromatographic results. Kuhn-Mark-Houwink (KMH) parameters for the polystyrene standards (K_PS = 0.011 mL/g, α_PS = 0.725) were taken from the literature [47]. Previous GPC measurements utilizing online viscosimetry detection revealed the KMH parameters for polylactide (K_PLA = 0.053 mL/g, α_PLA = 0.610) [22].
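Universal calibration equates the hydrodynamic volume [η]M of analyte and standard at a given elution volume, i.e. K_PS·M_PS^(1+α_PS) = K_PLA·M_PLA^(1+α_PLA). With the KMH parameters quoted above, a polystyrene-equivalent molar mass converts to a polylactide molar mass as sketched below (the 100,000 g/mol input is an arbitrary example):

```python
K_PS,  A_PS  = 0.011, 0.725   # KMH parameters, polystyrene [47]
K_PLA, A_PLA = 0.053, 0.610   # KMH parameters, polylactide [22]

def m_pla_from_m_ps(m_ps):
    """Solve K_PS*M_PS**(1+a_PS) == K_PLA*M_PLA**(1+a_PLA) for M_PLA."""
    return ((K_PS / K_PLA) * m_ps ** (1.0 + A_PS)) ** (1.0 / (1.0 + A_PLA))

print(f"{m_pla_from_m_ps(1.0e5):,.0f} g/mol")  # for a PS-equivalent of 100,000 g/mol
```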
Preparation of Compounds
[Zn₃(bipy)₂(CH₃COO)₆] (1b): To a suspension of 0.5 mmol of zinc(II) acetate in dry THF, a solution of the ligand 1 (0.55 mmol) in THF was added under stirring. The resulting reaction mixture was stirred for 20 minutes. Because the corresponding complex precipitated, the reaction mixture was slowly heated under reflux, and dry MeCN was added to give a clear colourless solution. Colourless crystals suitable for X-ray diffraction were obtained by slowly cooling to room temperature. MS (EI), m/z (%), partial fragment list: 128 (18), 78 (14), 51 (12). CHN analysis: calculated: C 44.51, H 3.94, N 6.49; found: C 44.53, H 4.00, N 6.50.
Table 2.
Polymerisation of D,L-lactide in the presence of bipy, phen and their corresponding zinc complexes.
a Reaction conditions: catalyst (0.2 mol%), 150 °C; b PD = Mw/Mn, where Mn is the number-average molar mass; c from analysis of the ¹H homonuclear decoupled NMR spectrum using the equation Pr² = 2[sis] [41]; d reaction times were not necessarily optimised. | 3,633 | 2009-12-08T00:00:00.000 | [
"Chemistry"
] |
Sampling from single-cell observations to predict tumor cell growth in-vitro and in-vivo
Cancer stem-like cells (CSCs) are a topic of increasing importance in cancer research, but they are difficult to study due to their rarity and their ability to rapidly divide to produce non-self cells. We developed a simple model to describe transitions between aldehyde dehydrogenase (ALDH) positive CSCs and ALDH(-) bulk ovarian cancer cells. Microfluidic device-isolated single-cell experiments demonstrated that ALDH+ cells were more proliferative than ALDH(-) cells. Based on our model, we used ALDH+ and ALDH(-) cell division and proliferation properties to develop an empiric sampling algorithm and predict growth rate and CSC proportion for both an ovarian cancer cell line and primary ovarian cancer cells, in-vitro and in-vivo. In both the cell line and primary ovarian cancer cells, the algorithm predictions demonstrated a high correlation with observed ovarian cancer cell proliferation and CSC proportion. High correlation was maintained even in the presence of EGF-like domain multiple 6 (EGFL6), a growth factor that changes ALDH+ cell asymmetric division rates and thereby tumor growth rates. Thus, based on sampling from the heterogeneity of the in-vitro cell growth and division characteristics of a few hundred single cells, the simple algorithm described here provides a rapid and inexpensive means to generate predictions that correlate with in-vivo tumor growth.
INTRODUCTION
Recent laboratory work has identified a limited subset of ovarian cancer cells with stem cell marker expression. These cancer stem-like cells (CSCs) have been found to have unique biologic properties, including increased tumor initiation capacity and, in some cases, chemotherapy resistance [1-4]. Our group and others have reported that aldehyde dehydrogenase (ALDH) activity, alone or in combination with other stem cell markers, identifies CSCs in ovarian cancer [5-8]. These ALDH+ cells have increased chemotherapy resistance, increased tumor initiation capacity, and the ability to produce both ALDH+ and ALDH(-) cells [9]. Suggesting a role in disease chemotherapy resistance and recurrence, ALDH+ cells are enriched in both patient-derived xenografts and primary chemo-refractory tumor specimens [10,11]. Given these unique properties, CSCs are an important focus in translational research. Understanding how the small CSC fraction drives self-renewal and tumor growth will provide insights into tumorigenesis.
Despite the potential importance of CSCs, evaluating them has been a challenge. It is difficult to obtain sufficient numbers of primary CSCs for large-scale studies. In addition, primary human CSC engraftment in mice is inefficient and slow, and can take 6-12 months [5]. Similarly, in-vitro growth of primary CSCs is hampered by their poor growth in isolation in traditional cell culture media. Growth in "tumor spheres" can be used to enrich CSCs [4]; however, this assay often requires tens of thousands of cells for replicate analyses, and obtaining this number of cells from primary samples can be problematic.
Given the long-standing challenges of studying the growth of rare cell populations, mathematical modeling has been used to extrapolate and explain data from experimental studies into a broader understanding of tumor growth dynamics [12-14]. A variety of mathematical modeling approaches have been employed to describe changes in cancer cell states, but each approach has drawbacks. Markov chains have been deployed to model changes in the cell-state equilibrium and are appealing in their ability to generate a unique long-term stationary distribution independent of the starting state [15-17]. However, these models require the problematic assumption that different cell states grow at equivalent rates [18]. A number of separate stochastic processes have been used to model cancer stem cell growth and resistance [19]. Birth/death processes are one such stochastic method, useful for modeling extinction probabilities and steady-state proportions among different cancer states such as CSCs [20,21]. Multi-state branching processes are a stochastic approach that has been deployed to model hierarchical cell-state relationships such as those involving cancer stem cells [20]. However, theoretical assessment of steady-state behavior can be limited if the observed data do not conform to certain transitional requirements [22-24]; assumptions regarding feedback between states via a mathematical function are often required to account for even small inequalities in transition rates in order to achieve cell-state equilibrium in stochastic models [25-27]. Both ordinary [28-30] and partial [31,32] differential equation networks have been employed successfully to model changes between different cellular states, and while these modeling networks afford significant flexibility, they often require the estimation of numerous unobservable biological parameters. Finally, cellular automaton and agent-based models offer computational visualization of cellular subtype interactions within a multi-dimensional environment [33-35]. While generally flexible, these models can require advanced computer code and significant computational time to produce results. Furthermore, all of the methods described require the input of a skilled quantitative scientist. The development of a simple, understandable, data-driven method that does not require significant analysis expertise could expand the reach of CSC modeling.
Here we use data gathered from single-cell microfluidic culture observations over short time periods to generate an empirical mathematical model that predicts the behavior of a full ovarian cancer population over up to 28 days in-vivo. We used a single-cell microfluidic culture device to capture, grow, and analyze the division of single cells [36,37], observing primary ovarian-cancer-derived CSCs in isolation. These devices, via in situ live cell stains, also allow for the direct observation of cell divisions and an analysis of the phenotype of progeny cells. As such, the self-renewal and asymmetric division potential of live cells exposed to different environmental or treatment conditions can be assessed. Using growth rates and division patterns, we produced CSC and non-CSC simulation-based predictions for larger mixed populations in-vitro and in-vivo. We show that this simple approach accurately predicts changes in growth associated with the CSC-oriented growth factor EGF-like domain multiple 6 (EGFL6). Our results demonstrate there is a useful relationship between microfluidics events at the single-cell level and growth dynamics in larger in-vitro and in-vivo systems.
Monitoring cell growth and division of ALDH+ and ALDH(-) ovarian cancer cells
While ALDH+ cells represent a small portion of total ovarian cancer cells, they play an important role in chemotherapy resistance and tumor initiation [5,7]. We used a single-cell microfluidic culture method to evaluate the growth of isolated ALDH+ and ALDH(-) cells from the ovarian cancer cell line SKOV3 and a primary ovarian cancer debulking specimen (Figure 1A, 1B). Using passive hydrodynamic structures, an array of microchambers efficiently captures single cells (Figure 1B). While SKOV3 cells demonstrated excellent viability in both traditional and microfluidic culture (90% and >95% viability, data not shown), primary cells demonstrated significantly greater viability in microfluidic culture, surviving and proliferating (Figure 1C). Importantly, within the device the purity of initial loading, total cell numbers per chamber, and ALDH expression (via the ALDEFLUOR assay) can be directly interrogated. This essential feature allows identification of the cellular state (ALDH+/ALDH(-)) of the captured live cells at initial capture and in the progeny following cell division (Figure 1D-1F).
After confirming cell growth in the microfluidic device, we evaluated the growth rate of both ALDH+ and ALDH(-) cells. For both SKOV3 and primary cells, ALDH+ cells were more proliferative than ALDH(-) cells; compared to ALDH(-) cells, ALDH+ cells were both (i) more likely to divide and (ii) more likely to generate numerous progeny (Figure 2). Approximately 12% of SKOV3 ALDH+ cells were quiescent (live but non-dividing) while 35% of SKOV3 ALDH(-) cells were quiescent (p = 0.024). Similarly, for primary cells, 14% of ALDH+ cells were quiescent while 53% of ALDH(-) cells were quiescent (p = 0.018). For SKOV3 cells the average number of cells after 72 hours per dividing single ALDH+ cell was 4.4, whereas the average number of cells after 72 hours per dividing single ALDH(-) cell was 2.2 (p < 0.001). Similarly, for primary cells the average number of cells after 120 hours per dividing single ALDH+ cell was 2.4, whereas the average number per dividing single ALDH(-) cell was 1.7 (p = 0.008).
We also evaluated the ALDH expression of the progeny of cells captured in each chamber. For both SKOV3 and primary cancer cells, ALDH+ cells were observed to generate both ALDH+ and ALDH(-) cells (Figures 1E-1F, 2A, 2C). In contrast, ALDH(-) cells were observed to produce only ALDH(-) cells (Figures 2B, 2D).
Developing a cancer cell population growth model and empirical sampling algorithm using in-vitro microfluidic device observations
We conceptualized a simple model of cell state transitions (Figure 3A). In our model cells may undergo one of three fates: symmetric cell division (producing an offspring of the same type), asymmetric cell division (producing an offspring of the opposite cell type), or cell death. Here, parent cells die in the next time frame with probability g_λ(t) or survive to divide with probability 1 − g_λ(t). Cell division probabilities are determined using an empirical sampling algorithm that is designed to estimate cell state transitions based on data obtained from in-vitro microfluidic observations.
In order to determine if we could predict bulk cancer growth with our model using experimental observations of the growth properties of single cells, we iterate the model for cell state transitions in time, using the sampling algorithm to select the appropriate transition probabilities at each time step. This sampling algorithm is based on observed cell growth rates in single-cell culture and requires a minimum of assumptions to generate its predictions. Briefly, for cells of a specified ALDH status, the number and state of their offspring are drawn from the full spectrum of observed outcomes of ALDH+ and ALDH(-) cells reflected in Figure 2. The non-zero offspring probability distributions define the possible transitions between states (Figure 3A). They also provide a basis for estimating the size and proportion of CSCs in larger populations, by iteratively drawing potential realizations of self-renewal and asymmetric division on a cell-wise basis over many replicates. A schematic of one hypothetical run of the algorithm starting from a single ALDH+ cell is given in Figure 3B. Notably, though no ALDH(-) to ALDH+ transitions were observed, our model would automatically incorporate this transition should future experiments witness de-differentiation events.
Empirical sampling algorithm
We are interested in the temporal evolution of a population of l distinct cell subtypes, c_λ(t), where λ ∈ (1, ..., l). These cell subtypes should be observable and quantifiable as they change in time. Each cell is classified into one of these subtypes. In order to calculate the number of cells of type λ at the next time step t + 1, we first define u*_ij as a frequency-histogram-sampled number of additional offspring of cell type j produced by a cell of type i at the current time t. We can then denote the observed offspring of all l distinct cell subtypes from a single cell of type λ with the vector q*_λ = (u*_λ1, ..., u*_λl). Here q*_λ is an l-length sampled vector.
Next, we compute the l-length row-summed vector of realized sampled outcomes over all cells of type λ; summing these vectors across the l parent types then gives the cell counts c_λ(t + 1) at the next time step.
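To make the sampling step concrete, the following is a minimal Python sketch of the algorithm, assuming purely illustrative offspring tables in place of the observed frequencies of Figure 2; the function names, table values and starting numbers are our own, not the authors'.

```python
import numpy as np

# Illustrative offspring tables standing in for the observed outcomes in
# Figure 2: each row is one observed outcome for a single parent cell of
# the given type, as (ALDH+ offspring, ALDH- offspring) after one step.
# A row of (0, 0) encodes cell death; (1, 0) or (0, 1) encodes quiescence.
OUTCOMES = {
    "ALDH+": np.array([[2, 0], [1, 1], [3, 1], [1, 0], [0, 0]]),
    "ALDH-": np.array([[0, 2], [0, 1], [0, 0]]),  # no ALDH- -> ALDH+ seen
}

def step(counts, rng):
    """Advance the population one time step by empirical sampling: every
    cell draws one outcome row, uniformly, from the table for its type."""
    nxt = {"ALDH+": 0, "ALDH-": 0}
    for ctype, n in counts.items():
        table = OUTCOMES[ctype]
        draws = table[rng.integers(0, len(table), size=n)]
        nxt["ALDH+"] += int(draws[:, 0].sum())
        nxt["ALDH-"] += int(draws[:, 1].sum())
    return nxt

def simulate(initial, n_steps, reps=50, seed=0):
    """Median trajectory over `reps` replicates, mirroring the 50-iteration
    median reported under 'Empirical simulations' below."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(reps):
        counts, traj = dict(initial), [dict(initial)]
        for _ in range(n_steps):
            counts = step(counts, rng)
            traj.append(counts)
        runs.append(traj)
    return [{t: float(np.median([run[k][t] for run in runs]))
             for t in ("ALDH+", "ALDH-")} for k in range(n_steps + 1)]

# Example: 188,000 ALDH- and 12,000 ALDH+ starting cells, as in the SKOV3 run.
print(simulate({"ALDH+": 12_000, "ALDH-": 188_000}, n_steps=3)[-1])
```

Because the draws come straight from the observed outcome tables, any transition absent from the data (such as ALDH(-) to ALDH+) is automatically absent from the simulation, and would appear as soon as it is observed.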
The empirical sampling model predicts changes in cell growth in cell lines and primary patient samples
We next used our empirical sampling algorithm to predict the growth of 200,000 bulk SKOV3 cells (assuming 188,000 ALDH(-) and 12,000 ALDH+ cells at time 0, based on baseline FACS analysis indicating 6% ALDH+ cells). We based the sampling algorithm on the observations from microfluidic culture (Figure 2A) and compared the predicted outcomes of the sampling algorithm to the growth of 200,000 bulk SKOV3 cells grown in traditional cell culture for 72 hours. After 72 hours we counted total live cell number and determined the ALDH+ proportion by FACS. We observed good agreement between observed and predicted cell numbers and ALDH proportion at 72 hours for SKOV3 cells (Figure 4A).
We next assessed the ability of the sampling model to predict the growth of primary cells. We plated 300,000 primary ovarian cancer cells (20% ALDH+ based on FACS) and counted total cell number and ALDH+ percentage after 72 hours. In parallel we ran our sampling algorithm assuming 240,000 ALDH(-) and 60,000 ALDH+ cells, matching the in-vitro culture set-up. Once again, we observed good agreement between observed and model-predicted primary cell numbers and ALDH proportion at 72 hours (Figure 4B).
The empirical sampling model predicts changes in cell growth related to CSC targeting growth factors
Factors which induce small changes in CSC growth characteristics can significantly alter the growth of bulk cell populations and tumors [38][39][40][41]. We next assessed whether our microfluidic chip behavior-based modeling schema can predict population growth changes in response to treatment with growth factors. We evaluated the ability of the model to predict the growth changes observed when cells are exposed to EGFL6. EGFL6 is a tumor growth factor produced primarily by tumor endothelial cells [42,43]. EGFL6 is of particular interest as it acts primarily on ALDH+ cells [39]. We repeated the microfluidic growth assay with ALDH+ and ALDH(-) SKOV3 cells or primary ovarian cancer cells in the presence or absence of EGFL6. After 72 hours, the number and type of daughter cells (ALDH(-) or ALDH+) were scored as described above (Figure 5). EGFL6 treatment was associated with an expansion of ALDH(-) cell self-renewal, with more ALDH(-) cells produced by ALDH(-) parents in both cell line and primary cells.
In parallel, we evaluated the growth of bulk SKOV3 and primary cells grown with EGFL6. To determine if our empirical-sampling-based algorithm was able to accurately predict the effects caused by treatment with EGFL6, we ran simulation experiments for SKOV3 and primary ovarian cancer cells based on single-cell observations and compared the predictions to bulk growth. Once again, we observed good agreement between observed and predicted total cell numbers and ALDH proportion at 72 hours for both SKOV3 cells (Figure 6A) and primary cells (Figure 6B).
We also evaluated the ability of our algorithm to accurately track primary cells grown with EGFL6 over repeated time points. We ran simulation experiments for primary ovarian cancer cells grown in control media or treated with EGFL6. Here we gathered validation measurements of ALDH+ and ALDH(-) cell numbers every 24 hours for 4 days (Figure 6C, 6D). We again saw good prediction of the validation output by our model, particularly in the ALDH+ cell pool.
The empirical sampling model predicts in-vivo growth
The ability to predict in-vivo tumor growth using an in-vitro assay would be both time- and cost-effective. To investigate the potential of microfluidic growth observations coupled with the sampling model to predict tumor growth under different growth conditions, we conducted parallel in-vivo and in-silico experiments. EGFL6 is expressed primarily in the vasculature and is not expressed by SKOV3 cells, so to assess the effect of EGFL6 in-vivo, we initiated tumors using SKOV3 cells co-injected with control hemangioma stem cell (HemSC)-derived endothelial cells or HemSC-derived endothelial cells expressing EGFL6. Tumor size was measured from palpability through 21 days in total. Though the microenvironment is distinct from the ovary, we chose a non-orthotopic in-vivo model for cost and ease of serial measurements. We ran our empirical sampling algorithm, drawing our samples for SKOV3 cell behavior from the observed data in Figure 2A-2B for control cells (SKOV3), and from Figure 6A-6B for EGFL6-treated cells (SKOV3 with HemSC cells expressing EGFL6), starting with 200,000 simulated cells. To compare our simulated ovarian cancer cell numbers to xenograft tumor volume data, we assumed 100,000,000 cells per cm³ [44].
Our in-silico control SKOV3 predictions correlated well with the observed results (Figure 7A). Similarly, the predictions generated from EGFL6 treatment in single-cell devices predicted an increased proportion of ALDH(-) cells as well as an increase in total cell numbers (Figure 7B).
Predicted results from our algorithm correlated well with results from both in-vitro and in-vivo experiments. A correlation coefficient calculated for the eight mean observed and eight algorithm-predicted in-vitro values showed excellent correlation (r = 0.98, p < 0.0001, Figure 7C). Similarly, the median in-silico predictions correlated very well with the mean observed cell numbers for the xenograft tumor volume data over three observations up to 28 days (r = 0.92, p = 0.009, Figure 7D).
DISCUSSION
Translational cancer research is a costly and time-consuming endeavor. The implicit requirement for in-vivo data during anti-neoplastic drug development mandates expensive mouse (or other mammalian host) experiments. The ability to predict in-vivo tumor growth, which takes weeks to months, from small numbers of cells grown in-vitro over a period of days could expedite research and significantly reduce costs. To this goal, we have described here a relatively simple single-cell system using a few hundred purified CSC and non-CSC and an empirical sampling model that predicts population growth both in-vitro and in-vivo.
The role of CSCs in tumor biology is an important topic, yet is surrounded by controversy. In particular, controversy exists as to the plasticity of non-CSCs to attain a CSC state. This is likely to be, at least in part, due to the contamination rates (~1%) associated with standard cellular purification procedures, such as FACS, that are used in many studies [5,45,46]. Using the microfluidic single-cell culture approach, CSC marker expression can be confirmed in cells after capture and isolation, eliminating the possibility of CSC contamination in non-CSC pools and vice versa. Furthermore, the microenvironment of the microfluidic device (compared to 384-well plates) is more amenable to single-cell growth, allowing >95% viability of isolated cell line CSC and >60% growth of primary isolated CSC. Using this device, we observed that ALDH+ cells produce both ALDH+ cells and ALDH(-) cells. In contrast, ALDH(-) cells only produced ALDH(-) cells. This supports the possibility of an ovarian CSC hierarchy defined by ALDH expression [5]. Furthermore, the ALDH+ cells produced more offspring on average, suggesting that at least a subpopulation of CSCs have a higher reproductive capacity or are capable of rapidly responding to environmental cues to increase cell division. It is important to note that our studies do not rule out "de-differentiation" events, and the presence of these events remains uncertain in light of previous studies [38]. Further studies are necessary to determine if factors such as hypoxia or chemotherapy can promote de-differentiation such that ALDH+ cells are generated from ALDH(-) cells, and to better define quiescent and reproductive subpopulations of CSCs.
This device allows cell growth and CSC marker expression to be assessed in live cells over time in a controlled manner. Additionally, this approach facilitates the identification of outcome information from a multitude of cells simultaneously and under similar conditions. In order to rapidly deploy our system on primary cells, we used the ALDEFLUOR assay instead of an engineered CSC fluorescent gene reporter. With this information, we can begin to construct an understanding of cancer cell type specific events. Using the growth and differentiation information we observed for the ALDH+ and ALDH(-) cell populations in microfluidic culture, we developed an empirical sampling based algorithm to predict the growth of bulk cells in-vitro and in-vivo. Our modeling framework is appealing as it is driven by the laboratory data without the extensive mathematical assumptions or parameter estimation for predictive functioning that are inherent in other mathematical modeling techniques. This mechanistic simplicity and transparency can allow for the deployment of our approach by a wide range of researchers. Our algorithm could be applied to markers of interest other than ALDH, including engineered fluorescent reporter genes.
Our empirical sampling framework has produced results supporting a straightforward mechanism to predict changes in cancer growth based on rapid microfluidics experiments. Our results show promising agreement with both in-vitro and in-vivo results. Importantly, this model functioned both in unmodified populations and in the presence of a growth factor that altered cell states and growth rates. Furthermore, the algorithm corresponded with validation experiments using both cell line and primary cell data, where the growth rates and experimental time frames were significantly different. We postulate that predictive accuracy is improved by incorporating stochastic information on differential growth rates and cell transitions between the two cell populations. Despite the successes of the model, further simulations and modeling refinements can improve the approach. In particular, for simplicity in these proof-of-principle experiments, we used a 2-state model; however, this is clearly an over-simplification, as there are likely multiple additional cancer cell populations present [5,38]. Furthermore, important cell-cell interactions are limited in our current microfluidic device. In addition, studies of the ability of this algorithm to predict anti-neoplastic response in-vivo would be of great use. We also believe that more realistic prediction functions could be generated by using a continuous-time, rather than discrete-time, modeling framework. Opportunities for computational modeling refinement will continue as microfluidic technology improves. Single-cell co-culture devices may improve our in-vivo predictive accuracy by recreating microenvironmental effects as well.
In conclusion, we have generated a simple model that uses cell state (CSC/non-CSC) growth to predict the growth of populations of cells in-vitro and in-vivo. Using proliferative heterogeneity information from small numbers of primary cells, this model can also be used to predict the response of a population of cells to growth factors which alter cell state. This study lays the groundwork for future work potentially combining single-cell studies and mathematical models to predict response to therapeutics for translational drug discovery studies and, ultimately, personalized medicine.
Microfluidics experiments
Microfluidic single-cell devices were fabricated using PDMS soft lithography as detailed in [36]; PDMS was thermally aged and soaked in ethanol overnight to remove potentially uncured oligomers. Cells were trypsinized, isolated by fluorescence-activated cell sorting (FACS), and loaded into microfluidic devices in supplemented mammary epithelial basal medium (MEBM) as previously described [38], such that ~80% of chambers contained a single cell based on direct microscopic observation. The remainder of the chambers either contained multiple cells or were empty. Direct immunofluorescent (IF) microscopy was used to confirm the identity (ALDH+ or ALDH(-)) of the captured cells immediately after capture to prevent FACS contamination. Devices were then incubated at 37°C with 5% CO2 for the specified time period. Cells were then re-stained with ALDEFLUOR (Stem Cell Technologies) within the device after 72 (SKOV3) or 120 (primary cells) hours, based on the differential growth rates of these cell types, and IF microscopy was used to evaluate the type of daughter cells (ALDH(-) or ALDH+) produced and total cell numbers. Cell counts and ALDH-status scoring were performed uniformly by a single operator. Cellular death rate was estimated by performing flow cytometry on Annexin-V-stained and ALDEFLUOR-stained cells to give the proportion of apoptotic ALDH+ and ALDH(-) cells.
Ovarian cancer cells
SKOV3 cells were maintained in RPMI-1640 media supplemented with 10% FBS and 1% penicillin/streptomycin, and cultured in a humidified atmosphere of 5% CO2 at 37°C. For primary cells, all tissue was procured after obtaining informed consent, and procurement was approved by the Institutional Review Board of the University of Michigan. Tumors used in this study were stage III/IV high-grade serous epithelial ovarian cancers. Tumors were mechanically dissected into single-cell suspensions and isolated on a Ficoll gradient as previously described [47]. For ascites, cell pellets were collected by centrifugation; red cells were lysed using ACK buffer (Lonza, Hopkinton, MA, USA), washed, passed through a 40-μm filter, then passed 4 times through a Standard Hub pipetting needle to isolate single cells [5].
Murine studies
All animal experiments were conducted in accordance with institutional guidelines of the University of Michigan, and the studies were approved by the University Committee for Use and Care of Animals. SKOV3 cells (chosen as they are a non-EGFL6-expressing cell line) (2x10^5) were mixed with EGFL6-expressing human infantile hemangioma stem cells (HemSC, 1x10^6) or an equal number of control HemSC. The cells were mixed with Matrigel and injected into the axilla of NSG mice (n = 10/group) as previously described [5]. Tumor volumes were monitored over time and tumor weights obtained at the time of euthanasia.
EGFL6 production
HEK293 cells were transiently transfected with EGFL6 plasmid using FuGENE 6 reagent (Promega) per protocol in growth medium containing 2% FBS. Supernatant was collected at 36 hours and 72 hours after transfection. Supernatant from empty-vector-transfected cells was collected as a control. To obtain purified EGFL6, recombinant EGFL6-FLAG protein was expressed by transient transfection of HEK293 cells and purified with Anti-FLAG M2 Affinity Gel (Sigma). Briefly, cell lysate was loaded onto the FLAG M2 Affinity Gel column under gravity flow on ice, and washed with 10-20 column volumes of TBS. The bound FLAG-EGFL6 fusion protein was eluted with 0.1 M glycine HCl, pH 3.5, into vials containing 20 μL 1 M Tris, pH 8.0, to neutralize the pH. Eluted FLAG-EGFL6 fusion protein was used immediately or stored at -80°C in 10% glycine.
Empirical simulations
Numerical simulations were performed using the statistical program R 3.1.0 [48]. Simulations were performed over 50 iterations, and median values recorded.
"Biology"
] |
DEFICIENCIES IN THE THEORY OF FREE-KNOT AND VARIABLE-KNOT SPLINE GRADUATION METHODS WITH SPECIFIC REFERENCE TO THE ELT 14 MALES GRADUATION
This paper revisits the theory and practical implementation of graduation of mortality rates using spline functions, and in particular, variable-knot cubic spline graduation. The paper contrasts the actuarial literature on free-knot splines with the mathematical literature. It finds that the practical difficulties of implementing free-knot spline graduation are not recognised in the actuarial literature reviewed. The paper also revisits the results of the graduation of the English Life Tables no. 14 (ELT 14) experience for male lives using a ‘multistart’ optimisation approach for the free-knot graduation. Application of this technique results in the finding that the chi-squared values reported in the ELT 14 graduation for male lives for 10, 11 and 12 knots were not optimal values. The multistart optimisation results appear to show that McCutcheon’s t-statistic, which is used in variable-knot spline graduation to select the optimal number of knots, may not in fact result in an optimal choice. Free-knot spline graduation should be used with caution and variable-knot spline graduation, in the form that employs McCutcheon’s t-statistic, should not be used.
INTRODUCTION
This paper examines deficiencies in the theory of variable-knot spline graduation, which was the method used for the graduation of the 14th English Life Tables ('ELT 14'). The paper begins by explaining what a spline function is, and then describes graduation using fixed-knot splines. The question of how to select the knot values in such a graduation leads to a discussion of free-knot splines, which are spline functions whose knot values are also optimised with respect to fit. The paper proceeds to discuss the method of variable-knot splines, as described by McCutcheon (1984). In this paper, the term 'variable-knot splines' is intended to convey the choice of a best-fitting free-knot spline with an optimal number of knots. The results of attempting to duplicate the ELT 14 male mortality graduation are examined. The paper closes by drawing conclusions regarding variable-knot spline graduation.
2.1
Spline graduation involves fitting a spline function, usually a cubic spline, to crude mortality data, and taking the fitted spline as the graduated mortality curve.
2.2
A spline s(·) of degree k (where k is a positive integer) with knots x_0 < x_1 < ... < x_n < x_{n+1} is a function defined on the closed interval [a,b] (where x_0 = a and x_{n+1} = b) for which the following conditions hold: s(·) is a polynomial of degree not greater than k on each open interval (x_{i-1}, x_i) for i = 1, 2, ..., n+1; and s(·) has k-1 continuous derivatives in the open interval (a,b). (1)
2.3
In spline graduation, knots are allowed to coincide, but the number of equal knots at a point (the 'multiplicity' of that knot) may not be greater than k+1.
2.4
Various criteria can be used to fit spline functions. McCutcheon (1981) minimises a chi-squared criterion. London (1985) and Jupp (1978) describe an unweighted least-squares criterion, and De Boor (1978) describes this and other criteria. Forfar, McCutcheon & Wilkie (1988) describe the minimum-chi-squared and maximum-likelihood criteria, and note that similar graduations are produced by these two methods. Benjamin & Pollard (1985) have shown that, for mortality experiences involving large exposures, the maximum-likelihood, minimum-weighted-least-squares and minimum-chi-squared methods will give approximately the same results. All the results in this paper employ the chi-squared criterion to fit splines, since it is clear that this criterion is as capable as the others of producing satisfactory results, and since it was the criterion which McCutcheon used in his development of the variable-knot spline technique of graduation.
2.5
Variable-knot splines are discussed, albeit not necessarily under that name, in the statistical literature. Eubank (1988) gives a summary of various methods of selecting the number of parameters of a fitted spline. The methods discussed include a 'backfitting' algorithm, a 'projection pursuit' algorithm, and an 'alternating conditional expectations' algorithm. Breiman (1993) describes the theory and application to data of the 'delete knot/cross-validation' algorithm for fitting splines. Pittman (2000) discusses the use of genetic algorithms and model selection techniques to fit variable-knot splines. However, this paper considers variable-knot splines only as described in the actuarial literature.
3.1
We concentrate in this paper on results involving graduation of central mortality rates m_x, although one could fit minimum-chi-squared spline functions to mortality probabilities q_x (McCutcheon, 1981).
3.2
The minimum-chi-squared criterion (McCutcheon, 1984) consists of finding the spline s(·) of degree k on [a,b] with specified internal knots x_1, ..., x_n that minimises

$$\chi^2 = \sum_x \frac{\left(\theta_x - E_x^c\, s(x)\right)^2}{E_x^c\, s(x)}, \qquad (2)$$

where: θ_x is the number of deaths in the period of investigation classified age x at death; and E_x^c is the corresponding central exposure to risk for lives classified age x.
3.3
Formula (2) may also be expressed as

$$\chi^2 = \sum_x w_x \left( \frac{\theta_x}{E_x^c} - s(x) \right)^2, \qquad (3)$$

with $w_x = E_x^c / s(x)$.
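The link between (2) and (3) is a one-line rearrangement; a sketch in the notation of ¶3.2, using the reconstruction of (2) above:

```latex
\chi^{2}
  = \sum_{x} \frac{\bigl(\theta_{x}-E^{c}_{x}\,s(x)\bigr)^{2}}{E^{c}_{x}\,s(x)}
  = \sum_{x} \frac{E^{c}_{x}}{s(x)}
      \left(\frac{\theta_{x}}{E^{c}_{x}}-s(x)\right)^{2}
  = \sum_{x} w_{x}\left(\frac{\theta_{x}}{E^{c}_{x}}-s(x)\right)^{2},
\qquad w_{x}=\frac{E^{c}_{x}}{s(x)}.
```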
3.4
In formula (3), the weights w_x depend on the unknown spline function s(·). This is a key disadvantage of using the chi-squared function that would not exist if another criterion, such as the maximum-likelihood method, were used to fit the splines. However, we do not pursue this idea further in this paper.
3.5
In minimising (3), an iteratively reweighted procedure of the following form may be used (a code sketch follows the list):
1. Compute the crude central rates θ_x / E_x^c.
2. Fit an initial least-squares spline s(·) on [a,b] with internal knots x_1, ..., x_n.
3. Compute the weights w_x = E_x^c / s(x) from the currently fitted spline.
4. Fit the weighted-least-squares spline s(·) on [a,b] with internal knots x_1, ..., x_n using the weights in the previous step.
5. Repeat 3 and 4 until the chi-squared value converges.
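A minimal Python sketch of this iteratively reweighted fit, assuming a user-supplied `basis` function returning the design matrix of spline basis values at the given ages; the starting weights, tolerance and iteration cap are illustrative choices, not McCutcheon's:

```python
import numpy as np

def fit_min_chisq_spline(ages, deaths, exposure, basis, tol=1e-8, max_iter=50):
    """Minimum chi-squared spline fit by iteratively reweighted least squares.

    `basis(ages)` is assumed to return the design matrix of spline basis
    functions evaluated at the given ages (one row per age, n + k + 1
    columns).  Starting weights equal to the exposures (i.e. s(x) = 1) are
    an assumed initialisation; the fitted rates are assumed to stay
    positive throughout the iteration.
    """
    B = basis(ages)
    m_crude = deaths / exposure          # crude central rates
    w = exposure.astype(float).copy()    # initial weights
    chisq_old = np.inf
    for _ in range(max_iter):
        W = np.diag(w)
        lam = np.linalg.solve(B.T @ W @ B, B.T @ W @ m_crude)  # weighted LS
        s = B @ lam                       # graduated rates at each age
        chisq = np.sum((deaths - exposure * s) ** 2 / (exposure * s))
        if abs(chisq_old - chisq) < tol:
            break
        chisq_old = chisq
        w = exposure / s                  # re-weight from the current spline
    return lam, s, chisq
```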
3.6
Hence the minimum chi-squared criterion is equivalent to an iteratively reweighted least-squares procedure. McCutcheon develops the formulae that may be used to obtain the weighted-least-squares splines. The following is a brief outline of how to calculate the parameters of a weighted-least-squares spline with fixed internal knots.
3.7
Let (x − c)_+ = 0 if x < c, and (x − c)_+ = x − c if x ≥ c. It is easy to show that the class of splines of degree k on [a,b] with internal knots x_1, ..., x_n forms a vector space of dimension n + k + 1. A basis for this class is then {1, x, x^2, ..., x^k, (x − x_1)_+^k, ..., (x − x_n)_+^k}. (4)
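A direct transcription of this basis into code (a sketch only; as ¶3.11 notes, it can be ill-conditioned, and B-splines are preferable in practice). It can serve as the `basis` argument of the fitting sketch above:

```python
import numpy as np

def truncated_power_basis(x, knots, k=3):
    """Design matrix for {1, x, ..., x^k, (x-x_1)_+^k, ..., (x-x_n)_+^k}.

    Returns an array of shape (len(x), n + k + 1), matching the dimension
    of the spline space in paragraph 3.7.
    """
    x = np.asarray(x, dtype=float)
    cols = [x ** j for j in range(k + 1)]                  # polynomial part
    cols += [np.maximum(x - c, 0.0) ** k for c in knots]   # truncated powers
    return np.column_stack(cols)
```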
3.9
Solving ∂W/∂λ_j = 0 for j = 1, 2, ..., n+k+1 yields a system of linear normal equations in the matrix BWB′.
3.10
If BWB′ is not singular, then a unique solution exists. McCutcheon (1981) and Booysens (1992) implicitly assume that a unique solution always exists, and although it is true that in general there is a unique solution, Jupp (1978) has shown that for the unweighted-least-squares case there may be no unique solution when, loosely speaking, there are regions containing too many knots compared to the number of data points. In this case, BWB′ will be singular.
3.11
The vector-space basis in (4) may result in ill-conditioned matrices BWB′ that are numerically unstable to invert. In other words, the matrices BWB′ may be special cases where the algorithms used to find their inverses are sensitive to small changes in the BWB′ matrices, and may produce matrices that are not in fact inverses of BWB′.
McCutcheon shows how an alternative basis to (4) consisting of 'B-Splines' that are superior in this respect may be used in determining solutions. These results are not important to the subject of this paper and the reader is referred to McCutcheon (1981) for details on B-Splines.
4.1
In 1981, McCutcheon noted that knot placement could influence the fit dramatically, and suggested that fit be optimised with respect to knot position. Jupp (1978) and De Boor (1978) refer to this as the free-knot spline problem.
4.2
Formula (2) may be expanded as a function of both the knot positions and the spline coefficients (formula (7)). McCutcheon's proposal was to minimise formula (7) with respect to the knot vector x = (x_1, x_2, ..., x_n) after first having minimised it with respect to the parameter vector λ. This involves finding values for a total of 2n + k + 1 parameters, comprising n + k + 1 parameters in fitting the least-chi-squared spline and a further n parameters in finding the best knots.
5.1
In the following discussion we refer to z_k(x_1, x_2, ..., x_n)[a,b] as the chi-squared value obtained from fitting to the data a minimum-chi-squared spline of degree k on [a,b] with specified internal knots x_1, x_2, ..., x_n. We refer to zuw_k(x_1, x_2, ..., x_n)[a,b] as the least-squares error term from fitting to the data the minimum-unweighted-least-squares spline of degree k on [a,b] with specified internal knots x_1, x_2, ..., x_n. We also define Z_k(n) = min{z_k(x_1, x_2, ..., x_n)[a,b] : a ≤ x_1 ≤ ... ≤ x_n ≤ b}.
5.2
Referring to the problem of optimising fit with respect to knot position, McCutcheon states that 'theoretically it is not obvious that the criterion does in fact lead to a unique spline, but in practice this seems always to be the case' (1981: 436). However, De Boor (1978) points out that, for splines fitted using the unweighted-least-squares criterion (not criterion (2)), it is actually impossible to show that a given value of zuw_k(x_1, x_2, ..., x_n)[a,b] is a minimum for all a ≤ x_1 ≤ ... ≤ x_n ≤ b. In addition, De Boor (1996) points out that this is likely to be the case also for splines fitted using the chi-squared criterion.
5.3
Since it is possible to characterise local minima of z_k(x_1, x_2, ..., x_n)[a,b], a practical method of attempting to find a global minimum is to find all local minima and choose the smallest of these. McCutcheon (1993) and others (for example Jupp in 1978) have used a multistart approach to do this. The reason for the use of a multistart technique is that a minimisation algorithm requires as input an initial vector of knots, and will usually converge to the local minimum nearest to that starting point. Choosing a wide range of initial starting points increases the probability of finding all the local minima, but it is clear that there is no guarantee of having obtained a global minimum.
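A minimal sketch of such a multistart search, assuming a user-supplied `chisq_of_knots` function that returns z_k(x_1, ..., x_n)[a,b] for given interior knots (for example, the iteratively reweighted fit sketched earlier); the optimiser, the number of starts and the clipping of knots into [a,b] are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_knots(chisq_of_knots, n, a, b, n_starts=50, seed=0):
    """Multistart search over free-knot positions.

    Each random start is polished by a local optimiser; the smallest local
    minimum found is returned, with no guarantee that it is global.
    """
    rng = np.random.default_rng(seed)

    def objective(v):
        # keep the knots ordered and inside [a, b] before evaluating
        return chisq_of_knots(np.sort(np.clip(v, a, b)))

    best_val, best_knots = np.inf, None
    for _ in range(n_starts):
        start = np.sort(rng.uniform(a, b, size=n))   # random ordered start
        res = minimize(objective, start, method="Nelder-Mead")
        if res.fun < best_val:
            best_val = res.fun
            best_knots = np.sort(np.clip(res.x, a, b))
    return best_val, best_knots
```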
5.4
Jupp (1978) shows that for the unweighted-least-squares case, this process is complicated not only by the existence of numerous stationary points (these may be local minima, saddle points or local maxima) of zuw_k(x_1, x_2, ..., x_n)[a,b], but also by the slow convergence to a solution of the algorithms used to find local minima. The numerical results in Table 1 (¶7.5) suggest that this conclusion holds also for splines fitted using the chi-squared criterion.
5.5
It is clear from the above discussion that, while McCutcheon's suggestion regarding optimisation of fit with respect to knot position appears to be easy to implement, it is not easy to guarantee an optimal fit.
5.6
Splines allow the graduator to approximate any continuous function to any level of precision by increasing the number of knots. This characteristic is desirable for the graduation of a large mortality experience (such as ELT 14) where variances around underlying mortality rates are small and hence relatively small adjustments need to be made to the crude rates. But for the graduation of a mortality experience with relatively small exposure, where the graduator's a-priori notion of the shape of the mortality curve plays an important role in determining the graduated rates, use of free-knot splines will not necessarily give the graduator the required shape of curve. (Although the graduator could adapt the knot positions under the free-knot graduation to obtain the required curve, this would not only defeat the object of the optimisation procedure, but it would require a time-consuming trial-and-error process, and render useless chi-squared tests of the adherence of the curve to the crude rates.)
MCCUTCHEON'S VARIABLE-KNOT SPLINE GRADUATION METHOD
6.1
One of the complications with graduation is that the process often involves many attempts to fit curves and requires the graduator to exercise judgement in deciding which curve is best. McCutcheon's method of variable-knot splines appears almost to remove the need for judgement, saving time, and reducing uncertainty regarding the number of degrees of freedom to use in chi-squared tests of goodness of fit of the graduation (Benjamin & Pollard, 1985). (This objective may be less desirable than it appears. The chi-squared test may not be the most important test of goodness of fit in a graduation, since it does not make full use of the information contained in the deviations between expected and actual deaths. In testing the adherence of graduated rates to a mortality experience, the application of the chi-squared test would normally be supplemented by other tests such as the individual standardised deviations test, the cumulative deviation test, the sign test, Stevens' test, and the binomial test (Benjamin & Pollard, 1985).)
6.2
McCutcheon (1984) proposed the following procedure to select the optimal number n of knots to use in a free-knot cubic spline graduation:
1. Determine Z_3(n) for various values of n.
2. Choose the free-knot spline with n_0 knots, where n_0 is the lowest integer for which t{Z_3(n_0 + 1)} > t{Z_3(n_0)}.
McCutcheon's rationale for step 2 is that t(x) = √(2x) − √(2k − 1) is a test statistic for a χ² variable x with k degrees of freedom. (It is true that for large k, if x has a chi-squared distribution, t has approximately a normal (0,1) distribution.) The degrees of freedom corresponding to Z_3(n) equal the number of data points (which is b − a + 1) less the number of parameters fitted (which is 2n + 4). His argument is that t{Z_3(n)} has a minimum with respect to n, leading to the criterion set out in step 2 (a code sketch follows below). He states that 'this procedure for determining the number of knots to be used leads to an acceptable graduation method' (1984: 49).
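In code, the statistic and one plausible reading of McCutcheon's stopping rule look as follows (a sketch; `Z3` is assumed to map knot numbers to minimised chi-squared values, and the degrees of freedom follow ¶6.2):

```python
import numpy as np

def t_stat(chisq, dof):
    """McCutcheon's statistic t(x) = sqrt(2x) - sqrt(2k - 1) for x ~ chi^2_k."""
    return np.sqrt(2.0 * chisq) - np.sqrt(2.0 * dof - 1.0)

def choose_n_knots(Z3, n_ages):
    """First n at which t{Z_3(n)} stops decreasing (one reading of step 2).

    Z3 maps numbers of knots n to minimised chi-squared values Z_3(n); the
    degrees of freedom for a free-knot cubic spline are n_ages - (2n + 4).
    """
    ns = sorted(Z3)
    ts = [t_stat(Z3[n], n_ages - (2 * n + 4)) for n in ns]
    for i in range(len(ns) - 1):
        if ts[i + 1] >= ts[i]:
            return ns[i]
    return ns[-1]  # t still decreasing at the largest n tried
```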
COMPARISON WITH ELT 14 RESULTS
7.1 ELT 14 is based on the mortality experience of the entire population of England and Wales for the period from 1980 to 1982. Central mortality rates m_x for each gender and age were derived from the experience and graduated using variable-knot splines. Mortality probabilities q_x were then calculated from the graduated rates.
7.2
This section of the paper compares the results obtained applying a multistart algorithm to free-knot spline graduation of ELT 14 male data with those reported by the Office of Population Censuses and Surveys (1987).
7.3
Since the t-statistic used in variable-knot spline graduation tests how significant the deviation of the graduated results is from the crude rates, a crude justification is that the graduation with the smallest t-statistic might be thought to be the best from this point of view. However, the method appears to be flawed in two important respects.
7.4
Firstly, because there is no guarantee that the minimisation algorithm has produced the minimum value Z_3(n), it is impossible to tell whether the reason for an increase in the calculated value of t{Z_3(n)} is that the minimum value of t has been attained or that the minimisation algorithm has not achieved the minimum value Z_3(n).
7.5
Secondly, as Table 1 shows, it appears that the minimum in t{Z_3(n)} may be attained for a much larger value of n than McCutcheon envisaged. If the t-statistic attains a minimum value for some number of knots over 21, it is clearly not of much use, since even the 21-knot graduation, which involves fitting 46 parameters, is not a parsimonious model. (Indeed, it could be argued that a free-knot cubic-spline graduation employing more than 11 knots is not a parsimonious model.)
7.6
The results in Table 1 were obtained using FORTRAN programs calling the 'black box' numerical optimisation and spline-fitting routines of the Numerical Algorithms Group.
7.7
For an 8-knot spline, the minimum chi-squared values resulting from running the minimisation routine 20 times on groups of 50 random starting knots were collected. A similar procedure was followed for 9 to 15 knot splines. The histograms of the resulting distribution of values for each group within each number of knots are shown in the Appendix.
7.8
For 8, 9, and 10 knots, there is a bunching of chi-squared results for each group around the smallest chi-squared values in the range. Although this does not prove that the values being obtained are close to an absolute minimum, it is consistent with the behaviour of a multistart minimisation algorithm near an absolute minimum. This may be taken as evidence that the lowest chi-squared values within each group for a given number of knots may be close to the absolute minimum.
7.9
For 11 knots onwards the peak in the distribution is never near the lowest chi-squared values in the range. Hence for splines with more than 11 knots, the chi-squared values at the bottom of the range are less likely to be close to the absolute minimum. Larger groups of random starting points are apparently required to locate a likely range for the absolute minimum as the number of knots increases.
7.10
It appears that the reason for the use of a ten-knot spline in the published graduation of ELT 14 was that, as shown in Table 1, the lowest t-statistic of 6,35 is attained for ten knots. This appears not to be the optimum. Nevertheless, the graduation of ELT 14 appears to have been of a high quality.
7.11 A question arises as to the number of random starting points required to be able to state, for a given number of knots, that there is, say, a 95% probability that the result given by the multistart algorithm is in fact the absolute minimum. This might be addressed by the following Monte-Carlo procedure for each number k of knots (a code sketch follows this list):
1. Choose a block size b (say 100) for the number of random starting points.
2. Optimise for each block, recording the least chi-squared value for that block.
3. Repeat step 2 a large number of times (say 1000 times) to obtain the distribution of the least chi-squared values.
4. Determine the proportion p of blocks for which the chi-squared value equals the least achieved for the entire set of 1000b simulations.
5. Repeat steps 2 to 4 for larger or smaller b until p equals 95%.
6. Repeat steps 1 to 5 for other values of k as required.
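A minimal sketch of steps 2 to 4 of this procedure, assuming a user-supplied `run_multistart` function that returns the least chi-squared value from a block of random starting knot vectors; the tolerance used to declare two minima 'equal' is an illustrative choice:

```python
import numpy as np

def block_success_proportion(run_multistart, block_size, n_blocks=1000, seed=0):
    """Proportion p of blocks whose least chi-squared equals the least
    found over all n_blocks blocks (steps 2-4 of paragraph 7.11)."""
    rng = np.random.default_rng(seed)
    block_minima = np.array([run_multistart(block_size, rng)
                             for _ in range(n_blocks)])
    overall_min = block_minima.min()
    # values within a small tolerance are treated as equal to the overall
    # minimum (an assumed convention for floating-point comparisons)
    return np.mean(np.isclose(block_minima, overall_min, rtol=0.0, atol=1e-6))
```

Step 5 would then adjust `block_size` up or down until the returned proportion reaches the target of 0.95.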
7.12 This procedure requires a very large number of simulations. It is also difficult to automate fully. It is therefore computationally expensive and time-consuming to implement. It is not definitive, since the minimum attained in step 4 is not guaranteed to be the absolute minimum. Finally, the results are likely to depend on the nature of the numerical optimisation procedure used. Because of these difficulties, this question has not been considered as part of this study.
7.13 The practical consequences of the choice of a sub-optimal number of knots are another area that this paper leaves to further research.
8.1
The method of variable-knot splines is difficult to implement in practice. As a result, some of the decisions in the ELT 14 graduation were based on an insufficient number of random starting points. Free-knot splines may still be used for graduation, but the graduator can never know how close the fit is to the absolute-minimum-chi-squared fit for that number of knots. The larger the number of random starting points used in conjunction with the minimisation algorithm, the more likely it will be that the graduation will be close to the minimum. Further study is required to determine how the number of starting points should vary with the number of knots to achieve a particular level of confidence (for example 95%) that the absolute minimum value has been obtained.
8.2
The selection of too many knots will result in graduated rates that are not sufficiently smooth, while the selection of too few knots will result in graduated rates that do not adhere adequately to the crude rates. Because McCutcheon's criterion for selecting the number of knots appears not to identify an optimum, an alternative criterion for choosing a number of knots needs to be formulated. Further work is also required to determine how well an alternative criterion, such as the Akaike information criterion or the likelihood ratio, performs for deciding on a number of knots.
8.3
In addition, free-knot splines do not allow adequately for incorporation of the judgement of the graduator of mortality experiences with small exposures. Even for large experiences, the graduator may be required to make adjustments to the rates at the oldest ages because of the difficulty of incorporating a-priori information regarding the shape of the curve. This is unsatisfactory because it reduces the reliability of statistical tests of adherence to data, and requires time-consuming trial-and-error fitting of a range of alternative curves.
8.4
It is suggested that, for mortality graduation, the method of free-knot splines be used with caution, and that the method of variable-knot splines as described in McCutcheon's paper be avoided.
"Mathematics"
] |
Capturing temporal heterogeneity of communities: a temporal β-diversity based on Hill numbers and time series analysis
Beta-diversity is a term used to refer to the heterogeneity in the composition of species through space or time. Despite a consensus on the advantages of measuring β-diversity using data on species abundances through Hill numbers, we still lack a measure of temporal β-diversity based on this framework. In this paper, we present the mathematical basis for a temporal β-diversity measure, based on both signal processing and Hill numbers theory, through the partition of temporal γ-diversity. The proposed measure was tested in four hypothetical simulated communities with species varying in temporal concurrence and abundance, and in two empirical data sets. The values of each simulation reflected community heterogeneity and changes in abundance over time. In terms of γ-diversity, q-values are closely related to total richness (S) and show a negative exponential pattern as q increases. For α-diversity, q-value profiles were more variable than for γ-diversity, and different decaying patterns in α-diversity can be observed among simulations. Temporal β-diversity shows different patterns, which are principally related to the rate of change between γ- and α-diversity. Our framework provides a direct and objective approach for comparing the heterogeneity of temporal community patterns; this measure can be interpreted as the effective number of completely different unique communities over the sampling period, indicating either a larger variety of community structures or higher species heterogeneity through time. This method can be applied to any ecological community that has been monitored over time.
Introduction
We live in a biodiverse world where species change through space and time (1,2). The different forms of biodiversity have been studied using different mathematical, statistical and information theory approaches (…, 1949; Jost, 2006b; Chung, Miasojedow, Startek, & Gambin, 2019). There is a unique temporal α-diversity approach based on Hill numbers theory (Chao et al., 2021). This measure is the most robust, counts the effective number of equally abundant species, and can also be applied to species traits (Chao et al., 2021). However, β-diversity measures are diverse and framework development is still lacking (31). Engen proposed to calculate bivariate correlations among community assemblages (24). Later, Baselga proposed the β_TUR and β_NES measures. These indices are the most used temporal β-diversity measures because they permit the use of presence/absence data, information that can be easily obtained with low sampling effort (13). Legendre used Moran's Eigenvector Map analysis (MEMs) to calculate the positive and negative correlations among time points in community samples (32). Finally, the temporal β-diversity index was proposed to measure changes in species composition between adjacent time points (33). In general, all the proposed measures are similar to spatial diversity measures. These measures of temporal α- and β-diversity are accessible, but their outputs are not completely comparable between studies, with the exception of Chao et al. (2021). The temporal β-diversity measures are more related to correlation measures, and the species composition among time points has been calculated using various approaches. Although all measures are quite robust to different traits and approaches, they are not immune to many of the problems associated with spatial β-diversity measures. The main shortcomings associated with these measures include their tendency to generate misunderstandings of temporal traits and to overlook data independence assumptions. Generally, most temporal biodiversity measurements are based on a sequence of data points indexed along a temporal axis. The data points are literally consecutive measurements of the biodiversity of the same area over a time interval, which permits changes to be tracked over time. The assembly of biodiversity observations obtained by repeated measurements over time has certain peculiarities (34)(35)(36). Time series are normally characterized by information gaps created by sampling effort, which usually results in discrete variables. The amount of missing information will be determined by the time intervals between samples (34,37). Ecological data can present both stationary and non-stationary dynamics over time (XXX). Stationarity in a temporal context refers to consistency in the mean and variance through time; thus, a standardization of the data should be performed. Approaches for estimating diversity from a temporal perspective have been developed without consideration of the aforementioned assumptions (33,38). Temporal diversity analysis has been conducted based on time-correlation analysis and time-to-time relatedness analysis of communities; consequently, comparisons among current studies of temporal diversity are not possible (24), except for α-diversity (39). Thus, a robust temporal β-diversity analysis should consider the above-mentioned time series assumptions relating to the gaps in the data (34,40,41). The major problem with this assumption is that ecological processes are considered to occur discretely, which is not true; moreover, species within a
community vary in how their abundances change. These assumptions are effectively accounted for through wavelet transform analysis, which includes frequency estimators and provides a reliable approach for predicting gaps in the data in order to model a continuous variable (42). Finally, the standardization of the abundance time series data should be considered to account for the diversity in the temporal activity patterns of species over time (31). In light of these considerations, Hill numbers diversity theory integrated with time series analysis might provide a robust approach for overcoming the lack of comparability in current temporal measures. Hill numbers are a group of diversity metrics that have various advantages in the interpretation of diversity values, principally because their units are consistent, which makes them replicable and comparable among studies.
Hill numbers are a group of parametric measures used to quantify diversity based on the modification of the q parameter (20,27), which determines the sensitivity of the measure to the relative abundance of species and can take any value ≥ 0 (27). Hill numbers denote the effective number of species in the community and have been shown to be reliable for characterizing several community traits (e.g., taxonomic, functional and phylogenetic diversity; Liebhold, Koenig, & Bjørnstad, 2004). The advantage of the Hill numbers approach is that results derived from its usage are more robust, comparable, and interpretable compared to other diversity index approaches, because Hill numbers are direct diversity values (15,23,26,44). A measure of temporal diversity based on Hill numbers and time series analysis that captures the heterogeneity of diversity through time could provide a more promising strategy for assessing this important biodiversity component, similar to the way in which spatial diversity measures have become comparable among studies (8,27). We expect that these measures will enhance estimates of temporal γ- and α-diversity, and provide a new temporal β-diversity measure that allows interpretable comparisons among studies, considering the ecological processes as a continuous variable. In addition, the measures could be used to characterize temporal diversity patterns and thus provide insights into the temporal dynamics of communities; information on temporal compositional shifts can shed light on the status of communities and the effects of environmental variables on temporal species heterogeneity (29,31,33).
Temporal patterns as a continuous variable: temporal diversity data preparation
We developed our temporal α-, γ- and β-diversity measures based on wavelet transform analysis (45)(46)(47) and the Hill numbers diversity approach (26,27,48). Our proposed temporal diversity measures were developed specifically for the multiplicative decomposition approach. By considering the stationary behavior of species abundance data and the gaps associated with variation in sampling effort, wavelet time series analysis was used to standardize and fill gaps in abundance information through an estimated continuous abundance curve; this permits the distance-based Hill numbers diversity measure to be used (26), with the area under the curve as the basis of the pairwise distance measure. The wavelet transform analyzes the frequency spectrum of discrete time series data, fitting a smoothed modeled curve. The analysis compares (through convolutions) the raw data with a scaling function which can be shrunk, stretched and shifted in time. Finally, a matrix is constructed with all the fitted values, in which the sums over the columns yield the new modeled curve (Figure 1).
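A minimal sketch of this construction, using the PyWavelets continuous wavelet transform with a Morlet wavelet; the choice of scales is an assumption, and the per-species τ modification described below is not part of this plain transform:

```python
import numpy as np
import pywt

def modeled_abundance_curve(abundance, scales=None, wavelet="morl"):
    """Smoothed, continuous-style abundance curve from a discrete series.

    The continuous wavelet transform produces a (scales x time) coefficient
    matrix; summing down each column (over scales) yields the new modeled
    curve described in the text.  The default scale range is assumed.
    """
    abundance = np.asarray(abundance, dtype=float)
    if scales is None:
        scales = np.arange(1, max(2, len(abundance) // 2))
    coefs, _freqs = pywt.cwt(abundance, scales, wavelet)
    return coefs.sum(axis=0)  # column sums = modeled curve

# Example with a short monthly abundance series:
counts = np.array([0, 2, 5, 9, 7, 3, 1, 0, 0, 1, 4, 6], dtype=float)
curve = modeled_abundance_curve(counts)
```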
Validity of the stationarity of the time series of species abundance
The verification of stationarity is pivotal for the standardization of the species abundance time series data. Generally, ecological data exhibit variability in their stationarity characteristics, often resulting in a limited degree of comparability when working with time series. For this reason, we performed a stationarity test to characterize the nature of the species abundance data, and used it to determine whether the temporal data needed to be standardized through time series analysis.
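As one concrete possibility, such a check can be scripted with an augmented Dickey-Fuller test; the paper does not state which test was used, so the test choice and significance level here are assumptions:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def is_stationary(abundance, alpha=0.05):
    """Augmented Dickey-Fuller stationarity check for one species' series.

    Rejecting the unit-root null (p < alpha) is taken here as evidence of
    stationarity; both the test and the threshold are illustrative choices.
    """
    stat, pvalue, *_ = adfuller(np.asarray(abundance, dtype=float))
    return pvalue < alpha

# Example: flag which series need standardization, assuming
# series_by_species is a dict of species name -> 1-D numpy array.
# needs_standardizing = {s for s, x in series_by_species.items()
#                        if not is_stationary(x)}
```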
τ - the rate of change of species abundances over continuous time
Biological processes occur gradually and continuously; however, in most cases they are recorded discretely due to sampling. Nevertheless, there are differences in the rates of change in the occurrence and abundance of species. For example, groups of species that respond strongly to environmental changes often show abrupt changes in their abundance (such as amphibians, reptiles and insects). On the contrary, there are organisms in which changes in abundance occur in a more attenuated pattern (such as plant communities). More specifically, within these generalities, each species has its own particularities of temporal variation; thus, τ is a variable that accounts for this biological aspect (Figure 1). Wavelet analysis typically assigns a value of 2 to the rates of change in modeled phenomena through time. Given that rates of change in the abundance of species through time are not equivalent, owing to variation in the traits of species, unique values of this parameter should be assigned to each species to account for this variation. Wavelet analysis allows interpolation of a continuously modelled abundance curve; however, unlike the common use of wavelet analysis, this analysis for our purpose requires the specification of the attenuation threshold (τ), a parameter that controls the rate of continuous change in the occurrence of species in the community. In other words, τ determines the slope (i.e., the speed) at which species abundance or species intensity changes through time (41,49). For example, there are communities where some species display faster rates of appearance with respect to others (50); i.e., depending on its intrinsic natural history traits, each species can potentially exhibit its own rate of change in abundance through time (51). Thus, a fixed parameter value (usually 2 in most analyses) is not an appropriate assumption, and a more objective value is needed for each species (47,52,53). In this way, a correct fixed τ must consider the resolution (sampling time interval), the total sampling effort (T), and the steepness of the abundance changes in the raw data. In this regard, the rate of change in the abundance of species over time and the accuracy of the data sampled are important. If time series data have high resolution, τ will be weighted highly according to the rate of change in the abundance of each species; otherwise, when time series data have low resolution, τ-values will highly weight the new abundance curve calculated with wavelet transform analysis. Here we propose, for a given species i, a value of τ_i (eqn. 2), where τ_max corresponds to a constant giving the maximum value that τ can take in the analyzed community, and τ_min = 2 corresponds to the constant minimum threshold that τ can take. The equation represents the rescaling between 0 and 1 of the derivative that biologically refers to the rate of change, where the mean concavity of the absolute values of the species abundance is, in other words, the average steepness of the changes in species abundances between consecutive samples in time. Finally, z_i(t) represents the raw abundance of the i-th species at each time, whereas T represents the total number of sampling times. Each τ_i value integrates the rate of abundance change of each species based on the observations and the maximum number of sampling points of the study. The scaling function used for the wavelet analysis was the Morlet scaling function, which is optimal for data with unknown frequencies and scales, and data that cannot be directly interpreted (41).
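Because eqn. 2 itself is not reproduced here, the following sketch only follows the verbal description: compute the mean concavity (average steepness of between-sample abundance changes), rescale it to the unit interval, and map it onto [τ_min, τ_max]; the particular squashing function is an assumption, not the authors' formula:

```python
import numpy as np

def tau_per_species(abundance, tau_max, tau_min=2.0):
    """Illustrative species-specific attenuation value in [tau_min, tau_max].

    Mean concavity is taken as the average absolute second difference of
    the raw abundances; the squash to [0, 1) below is an assumed rescaling,
    not the paper's eqn. 2.
    """
    abundance = np.asarray(abundance, dtype=float)
    second_diff = np.abs(np.diff(abundance, n=2))
    c = second_diff.mean() if second_diff.size else 0.0
    c01 = c / (c + 1.0)  # assumed monotone rescaling into [0, 1)
    return tau_min + (tau_max - tau_min) * c01
```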
Temporal β diversity: effective number of distinct communities over time
For our temporal β-diversity measure, we replaced the discrete abundance vectors by abundance or intensity curves derived from the wavelet time series analysis. We modified the equations from Chao & Chiu (2014) (eqn. 5) so that the relative abundance values z_i represent the abundance curves of the i-th species. To estimate temporal β-diversity through the multiplicative component (γ/α), we first need to calculate temporal γ-diversity and temporal α-diversity. To calculate γ-diversity of order q (^qD_T^γ), we used the relative abundances of species in the community (z_{i+}/z_{++}; i = 1, 2, ..., S), where z_{i+} = ∫_T z_i(t) dt is the total abundance area of the i-th species, and z_{++} = Σ_{i=1}^{S} ∫_T z_i(t) dt is the total sum of the abundance areas of all species. Consequently, γ-diversity of order q is defined as

$$^{q}D_T^{\gamma} = \left[ \sum_{i=1}^{S} \left( \frac{z_{i+}}{z_{++}} \right)^{q} \right]^{1/(1-q)}, \qquad \text{eqn. 5}$$

and when q = 1 as

$$^{1}D_T^{\gamma} = \exp\left( - \sum_{i=1}^{S} \frac{z_{i+}}{z_{++}} \ln \frac{z_{i+}}{z_{++}} \right). \qquad \text{eqn. 6}$$

The temporal γ-diversity is interpreted as the "effective number of species in the entire community through time", or the species richness when q = 0. For temporal α-diversity we applied the same set of measures and definitions proposed by Chao & Chiu (2014) but on a temporal scale. In this sense, temporal α-diversity represents "the effective number of species per time unit" or the "mean effective number of species per time unit", and is defined by

$$^{q}D_T^{\alpha} = \frac{1}{T} \left[ \sum_{i=1}^{S} \int_T \left( \frac{z_i(t)}{z_{++}} \right)^{q} dt \right]^{1/(1-q)}, \qquad \text{eqn. 7}$$

and when q = 1 as

$$^{1}D_T^{\alpha} = \frac{1}{T} \exp\left( - \sum_{i=1}^{S} \int_T \frac{z_i(t)}{z_{++}} \ln \frac{z_i(t)}{z_{++}} \, dt \right). \qquad \text{eqn. 8}$$

Finally, the multiplicative temporal β-diversity can be calculated as

$$^{q}D_T^{\beta} = \frac{^{q}D_T^{\gamma}}{^{q}D_T^{\alpha}}. \qquad \text{eqn. 9}$$

This value can be interpreted as the "effective number of completely different unique communities over the sampling period". The contribution of species heterogeneity among communities is based on changes in the rate of whole-community richness and the mean community changes at each sampling point. Temporal α- and γ-diversities always range from > 0 to S and decrease as q increases; temporal β-diversity ranges from 1 (when q = 0) to infinity.
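A minimal sketch of eqns. 5-9 in code, evaluating the modeled abundance curves on a uniform time grid so that sums stand in for the integrals (the constant grid spacing cancels in every relative abundance); the α component follows the Chao & Chiu (2014) construction with time units as assemblages, which is one reading of eqns. 7-8:

```python
import numpy as np

def temporal_diversity(curves, q):
    """Temporal gamma, alpha and beta diversity of order q (eqns. 5-9).

    A sketch, not the authors' script.  `curves` is an (S, T) array of the
    modeled abundance curves z_i(t) evaluated on a uniform time grid.
    """
    z = np.asarray(curves, dtype=float)
    S, T = z.shape
    z_pp = z.sum()                        # z_++ : grand total abundance area
    p = z.sum(axis=1) / z_pp              # z_i+ / z_++ for each species
    zt = z / z_pp                         # per-species, per-time shares

    if np.isclose(q, 1.0):                # Shannon limits (eqns. 6 and 8)
        gamma = np.exp(-np.sum(p[p > 0] * np.log(p[p > 0])))
        m = zt[zt > 0]
        alpha = np.exp(-np.sum(m * np.log(m))) / T
    else:                                 # eqns. 5 and 7
        gamma = np.sum(p[p > 0] ** q) ** (1.0 / (1.0 - q))
        alpha = np.sum(zt[zt > 0] ** q) ** (1.0 / (1.0 - q)) / T
    return gamma, alpha, gamma / alpha    # eqn. 9: beta = gamma / alpha
```

At q = 0 the γ component reduces to the species richness S, matching the interpretation given above.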
To illustrate the use and utility of this measure of temporal β-diversity, we performed simulations varying the abundance and heterogeneity of species richness. In addition, we performed two extra analyses based on field data of an amphibian community from Madagascar (S = 40; time period = 360 days; frequency = daily; Heinermann et al., 2015) and a macro-benthic community from Chesapeake Bay (S = 66; time period = 24 years; frequency = yearly; Chesapeake Bay Foundation, 2020). We used RStudio with the DescTools (Asem, 2020) and wavScalogram (Benítez, Bolós & Ramírez, 2010) packages. The script for the temporal β-diversity calculation can be found in Appendix 1.
Results
Patterns of temporal α- and γ-diversity were similar to those suggested by other diversity measures based on Hill numbers, which was consistent with expectation. The values of each simulation reflected community heterogeneity and changes in abundance over time. In terms of γ-diversity, q-values are closely related to total richness (S) and show a negative exponential pattern as they increase, except when species abundances are constant over time. For α-diversity, q-value profiles are more variable than for γ-diversity, and different decaying patterns in α-diversity can be observed among simulations (Figure 1). As q increases, if there are differences in abundance, α-diversity values display a decaying pattern (Figures 1B, 1D and 1H), causing temporal β-diversity to show different patterns, as observed in the analyses based both on simulations and on field data (Figures 1 and 2). Otherwise, the absence of a decreasing pattern in Figures 1 and 3 reflects the null variation in both species richness and abundances in the simulated data. The minimum value of β-diversity is always 1, and in general values increase as q increases, except in those cases where γ- and α-diversities do not change (i.e., Figures 1F and 1H). The most variable β-diversity pattern is shown in Figures 1B and 1D, where species are equally distributed over time, which corresponds to the most heterogeneous community in terms of species changes over time. For the Malagasy amphibian community, temporal β-diversity shows high values near q = 0 and then stabilizes as q increases; by contrast, β-diversity shows a nearly constant increasing pattern as q increases in the Chesapeake Bay macro-benthic community.
Discussion
Here, we propose temporal γ-, α- and β-diversity measures based on time series analysis and Hill numbers diversity theory that provide an interpretable, efficient and comparable tool to characterize temporal community heterogeneity. The use of wavelet analysis for developing a temporal diversity framework enhances the robustness of the results and circumvents sampling effort-related problems (42,47). To perform this analysis, we approximate the abundance time series data to simulate a continuous abundance curve, as in other studies (37,49,52,54). One of the major challenges associated with temporal diversity analysis, especially β-diversity measures, is the failure of datasets to meet the assumptions of time series data, because current measures do not consider sampling resolution patterns, time frequency characteristics (33,38,55), or the stationarity of the variables; wavelet analysis can overcome this difficulty. Other studies have shown that time series analysis is an effective way to analyze data collected from ecological and forestry studies (52,56) for several purposes, such as population dynamics, disease transmission, animal migration and phenology (45,49,52,57-60). Considering the great diversity in the activity patterns of species, we tested the stationarity of the abundance of our species to justify the use of wavelet analysis, given that wavelet analysis standardizes a continuously modeled curve, which permits comparisons of our temporal β-diversity measure to be made. We suggest that the stationarity of species abundance should be tested as a best practice, as this provides key information for determining how the data could be optimally standardized. For our purpose, wavelet analysis provides an effective method for modelling a continuous abundance curve under the assumption that species abundance changes occur gradually rather than abruptly at different scales. Nevertheless, there are two assumptions to consider in the wavelet transform related to sampling effort: (1) equidistant temporal sampling and (2) long time series both lead to better modeled curves. It is possible to analyze data based on repeated sampling effort, depending on the question and the life cycle of the organisms; otherwise, other time series perspectives, such as the Hilbert-Huang approach, may be better suited, although that perspective has not been used for ecological data. Likewise, 25 is the recommended minimum number of temporal sampling points for wavelet analysis; nevertheless, here we performed an example with 24 sampling points (the Chesapeake macro-benthic community) (37,61). There is no systematic assessment of that assumption, and there is a need to test it with high-resolution data by removing and reducing random sampling points. Temporal changes in the abundance, conspicuousness or biological processes of species vary among all species in communities. Wavelet analysis through the τ parameter partially takes this variation into account. Mathematically, the τ parameter controls the slope of the modeled curve, but this parameter is not modified in conventional wavelet analysis (47,49,53). Biologically, the τ parameter reflects the rate of change in the occurrence and abundance of species; this prompted us to assign an objective τ-value for each species in the studied community. In this way, the τ parameter directly relates to the rate of change in the abundance of each species and the total sampling points. In our study, τ-values near 2 imply that the analyzed species have abundance
patterns similar to those modeled by wavelet analysis, and species with higher τ-values have simulated abundance curves that resemble the mode (in the statistical sense) of the raw data. Likewise, this idea suggests that the probability of detection or occurrence of species is a parameter that should be considered in all diversity studies, and that the recorded frequencies of species are an artifact of species' actual frequencies and their detection probabilities. The imperfect detection of species is important to consider in order to obtain high-quality estimates of the size of ecological populations (62-64). This stems from the fact that the signals of species are underestimated when they are present but not detected. Various situations can lead to underestimates in the signals of species (37,49,65), and a correction of the detection in the abundance curves in our analysis could refine the overlapping area and its calculation; nevertheless, imperfect detectability also varies with time. For temporal α-diversity, it was possible to use the perspective of Chao 2016 (26); however, it is not the same as that proposed in 2021 (39). Firstly, the already existing measure accounts for the effective number of equally abundant species, whereas our temporal α-diversity accounts for the mean effective number of species seen per sampling unit. The differences between the measures are mainly associated with the model construction; however, we adopted Chao's 2016 perspective because it was a framework built to bridge the two β-diversity approaches. On the other hand, this perspective indirectly considers the detection of organisms, since it considers the average number of species seen per sample. Thus, this perspective is more sensitive to average changes in the number of species from one temporal sampling point to the next. At the same time, temporal α-diversity values when q = 0 will be equal to those of temporal γ-diversity when q = 0. This is mainly because the calculation of the continuous abundance curve through the wavelet assigns a value close to zero to the probability of encountering the species over time. In this sense, in this measure we are assuming that species have imperfect detectability and could be registered at any time, varying along four axes (species traits, spatial variation, temporal variation and sampling characteristics) (66). This has received a lot of attention at the population and community levels (50,64,67,68); however, some studies have pointed out the need to include imperfect detectability in biodiversity surveys (68,69). Future research on the standardization of τ-values is needed to improve the comparability of results, without excluding species with low detectability or rare species. From our experience, a link function, a parameter, or even a cutoff value could be options by which the diversity values could be better adjusted; however, with changes in these variables, the diversity patterns remain unchanged. The temporal β-diversity measure proposed here is a Hill numbers decomposition approach adopted from the scheme developed by Chao and Chiu (2014), which is based on the relationship between γ- and α-diversities in a community. Given the great variation in β-diversity decomposition methods, other decomposition approximations could also be tested using the same approach; our γ- and α-diversity calculations would remain the same under the theoretical framework and can easily be linked to other perspectives (29,70). β-diversity can be analyzed through a
distance-based approach; however, the Chao and Chiu framework was mainly constructed to establish a link between both β-diversity approaches (26), and a temporal distance-based approach could complement our measure, including time series transformation of the same data (15,71). Likewise, other temporal β-diversity frameworks, such as those of Baselga and Legendre (13,33), have demonstrated the value of using multiple measures of β-diversity, but the utility of using multiple measures ultimately depends on the hypothesis tested (9). However, the interpretation of these other measures requires caution, because some refer to the concept of turnover and others to variation (as our measure does), whereas interpretability is maintained under the Hill numbers framework. Otherwise, the most commonly used measures of β-diversity (Jaccard and Sørensen) (Jaccard, 1912; Sørensen, 1948; González, n.d.; Chao, Chazdon, & Shen, 2005; Baselga, 2012) and even of α-diversity (Shannon and Simpson) (74) do not have a unifying structure that facilitates interpretation and comparison (12,22). Thus, the principal advantage of our framework is that the proposed temporal β-diversity measure provides a more direct and objective approach for comparing the heterogeneity of temporal community patterns. In this context, temporal γ-diversity is defined as the "effective number of species throughout the entire studied time period", temporal α-diversity as the "mean effective number of species at each time", and temporal β-diversity as the "effective number of completely distinct communities over the sampling period". In general, γ- and α-q-profiles are consistent with estimated spatial diversity patterns (44); however, q-profiles related to β-diversity do not show a consistent pattern. Specifically, temporal α-diversity only reflects an expected outcome rather than the reality indicated by the sampling measurements; thus, a completeness analysis could improve the robustness of the results for both temporal γ- and α-diversity, as has been shown in other studies of diversity patterns (75-77). For temporal β-diversity, overestimations were observed in the simulations, especially in cases where several species were equally distributed, as temporal β-diversity values are higher than S (the number of species) when q = 2. A high temporal β-diversity indicates a high number of distinct communities throughout the sampling period and thus temporal heterogeneity in the activity of species within the community. Nevertheless, our measure is not suitable for indicating the moments at which distinct communities occur, but other measures, such as Legendre's TBI (Temporal Beta Index) (33), can provide this information. Thus, we show here that different frameworks provide complementary information and that the use of each measure is not mutually exclusive. Despite differences in the taxonomic group, species richness, and temporal resolution among field studies, temporal β-diversity can be measured using these data. Although we expected asymptotic behaviors, we observed different temporal β-diversity patterns in the two data sets examined. In the Malagasy amphibian community, we observed that the temporal β-diversity q-profile shows high values when q is between 0 and 1, and the profile shows an asymptotic pattern. The rate of change in the α-diversity q-profile largely determines the heterogeneity of the community (temporal β-diversity) because α- and γ-diversity values are divergent. For example, few species of amphibians are commonly observed
per sampling occasion or per unit of time; in other words, few species are recorded during each sampling event, and the species observed vary continuously, as has been documented in other studies (78-80). This result has direct implications for our understanding of the heterogeneity of communities through time, as well as for conservation and monitoring actions, because some community traits exhibit divergent patterns across spatial and temporal scales (81,82). Thus, understanding the temporal relations among γ-, α- and β-diversity, as with their spatial counterparts, will require analyzing several data sets. The relation between spatial γ- and α-diversities largely determines spatial β-diversity, and we suspect the same occurs in temporal diversity. From a temporal view, there is probably a linear correlation between temporal γ- and α-diversity, resulting in constant and low temporal β-diversity values. Thus, if temporal α-diversity (the mean number of species per time) presents low values per unit of time, temporal β-diversity should be higher, indicating a temporally diverse community (3,83). Communities that show high temporal heterogeneity in composition require conservation or monitoring plans in which sampling is frequent, so that the range of environmental variation that can occur at a site is sampled; however, the need for a high sampling frequency arises not solely from temporal variation in the composition of communities but also from variation in the detectability of species, as mentioned above (84). However, low temporal α-diversity values do not prove that some species do not occur in the studied unit; rather, it is likely that features of the environment affect their conspicuousness, as several studies have shown (62,79,80), or that species undergo short-distance migrations (85-87). All of these assumptions relate directly to other ecological processes, such as interactions and phenological patterns, because the occurrence of some species depends directly on the presence of other species (63); for example, the common interactions between flowering plants and pollinator life cycles (88,89), as well as interactions between predators and prey (90-92) or parasites and hosts (93). In this way, patterns of temporal β-diversity between different functional groups require comparison to determine whether the temporal β-diversity of one group predicts that of another group; our measure permits these comparisons to be made and other hypotheses to be tested. Finally, in the case of the Chesapeake Bay macro-benthic community, we observed that the q-profile of temporal β-diversity increases without reaching an asymptote; thus, temporal β-diversity values are likely higher than the one presented (8.17), and a higher sampling resolution or a longer time window could alter these results; ultimately, it is likely that this community is more heterogeneous through time. This demonstrates the need for more studies that estimate temporal β-diversity using different levels of sampling effort, or that conduct analyses at different time scales, to understand the effect of scale on temporal β-diversity patterns, as other studies have shown that scale affects diversity patterns along other ecological axes (94-96). The implementation of our new temporal diversity measure is needed to advance our understanding of temporal species changes in communities and their heterogeneity, and of how this could become a tool for the optimization of time and resources in management plans and
community monitoring programs. Moreover, understanding temporal ecological patterns and their relationships with environmental cues could generate new questions related to temporal community changes and how communities are affected along this poorly explored axis. It would also be interesting to know whether temporal β-diversity responds to temporal α- and γ-diversities in the same way that spatial β-diversity responds to its spatial counterparts. Finally, we emphasize that the temporal diversity measure proposed here is suitable for analysis at any taxonomic level, in any community, and at any temporal scale. Therefore, given long time series data of species abundance, we are able to compare between years, seasons or any other periodical perspective, always taking into account the wavelet limitations mentioned above and our own sampling resolution. Lastly, this proposal seeks to establish a baseline, principally for β-diversity, for analyzing temporal diversity.
Conclusions
The analysis of temporal diversity is crucial for understanding the temporal distribution of species assemblages and the uniqueness and heterogeneity of species in communities. As the collection of long-term data increases, appropriate temporal analytical methods are needed to improve our understanding of temporal community patterns. Our temporal diversity framework produces intuitive, comparable and simple values for assessing species heterogeneity over time. Our measure has the same properties as other γ-, α- and β-diversity measures and can be applied to mid- and long-term community data sets available for any taxon, even in disturbed ecosystems. Temporal α-, β- and γ-diversities have important implications for the temporal design of community monitoring, conservation and restoration programs.
Figure 1. Comparison of the same data set of abundance patterns of a Malagasy amphibian | 6,865.6 | 2023-09-27T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Temporal Encoding to Reject Background Signals in a Low Complexity, Photon Counting Communication Link
Communicating information at the few photon level typically requires some complexity in the transmitter or receiver in order to operate in the presence of noise. This in turn incurs expense in the necessary spatial volume and power consumption of the system. In this work, we present a self-synchronised free-space optical communications system based on simple, compact and low power consumption semiconductor devices. A temporal encoding method, implemented using a gallium nitride micro-LED source and a silicon single photon avalanche photo-detector (SPAD), demonstrates data transmission at rates up to 100 kb/s for 8.25 pW received power, corresponding to 27 photons per bit. Furthermore, the signals can be decoded in the presence of both constant and modulated background noise at levels significantly exceeding the signal power. The system’s low power consumption and modest electronics requirements are demonstrated by employing it as a communications channel between two nano-satellite simulator systems.
Introduction
Conventional optical wireless communications (OWC) involves the modulation of the optical emission from a light source, such as a light-emitting diode (LED) or laser, and detection of the output light with a photoreceiver [1]. When transmitting over long distances, or through high loss media, received power will become greatly reduced, and eventually be lost in noise from background light or within the receiver electronics themselves. Single photon detection and counting methods are used to achieve high receiver sensitivity with intensity modulated optical signals [2][3][4][5][6]. With the use of forward error correction (FEC) codes and high order pulse position modulation (PPM) [7], photon counting systems can operate with extremely low numbers of photons per bit [8]. In combination with arrayed receivers, the high sensitivity of single photon counting techniques has potential for deep-space communication links, operating at megabit rates [9,10].
The link performance of a single photon counting link can suffer significantly under the presence of noise counts [11], which can occur due to background light in the channel, or dark counts occurring within the detector. These additional counts cause erroneous detection of bits, necessitating the use of powerful FEC codes [2,12]. Photon counting links using coincident photon pairs can overcome noise limitations [13][14][15][16], and can be used for quantum key distribution [17][18][19]. However, such systems typically require high efficiency photon pair sources, putting large form factor requirements on the internal and transceiver optics. In fact, many single photon communication links make use of complex, large, and/or costly equipment, such as cryogenic receivers [3,10], CW lasers with external modulators [2,6] and arbitrary waveform generators [4]. This makes such systems difficult to deploy in application areas where size, weight and power budgets may be limited.
Here, we demonstrate a novel optical transmission scheme, suitable for OWC with single photon detection, requiring a single, low photon flux channel. Compared with existing methods, this scheme is implemented with simple and widely available semiconductor components and electronics. A gallium nitride (GaN) micro-LED transmitter, silicon single-photon avalanche diode (SPAD) receiver and field-programmable gate array (FPGA) electronics provide a compact system with low power consumption. The transmission method operates in the presence of both constant and modulated background noise, which is enabled by the encoding of data in the timing statistics of the received photons. The following sections discuss the details of the transmission scheme, its current implementation, data transmission results and a demonstration of the system's suitability for inter-satellite communications, such as shown in Figure 1a.
Figure 1. (a) The system is resilient against constant background such as sunlight and is also insensitive to most AC background such as conventional optical wireless signals. (b) Schematic of the transmission scheme used: LED output on transmission of "0" and "1" (top), SPAD response to the LED signal (middle) and the calculated correlation histograms for each data interval (bottom).
Time Correlation Encoding Scheme
The transmission scheme presented here, inspired by time-correlated single photon counting (TCSPC) techniques often used for fluorescence lifetime imaging [20], involves the use of a single SPAD to receive time-correlated signals at the single photon level. Analysis of the SPAD response to incoming light over an interval [−t₁, t₁] shows that the correlation count density function g(τ)dt of recording two subsequent SPAD counts with temporal separation in the interval [τ, τ + dt] is given by:

g(τ) = ∫ f(t) f(t + τ) dt,    (1)

with the integral taken over [−t₁, t₁]. Here, f(t) is the temporal probability distribution of received SPAD pulses, which is determined by the optical signal from the transmitter. A full analysis is given in the Supplementary Materials of Reference [21]. If a suitable optical source transmits pulses with a time separation of T, g(τ) will show a peak at τ = T, as the probability of observing SPAD pulses separated by T is increased.
Equation (1) is the autocorrelation of f(t), so it is expected that peaks in g(τ) will have a width of 2t_pulse, where t_pulse is the width of the optical pulse. After detection of a photon, the output of a SPAD has a "dead time" (τ_d) in which it is insensitive to further photons, typically tens of ns in length [6,12]. It is important that T > τ_d, as otherwise the SPAD would not recover from the first pulse in time to see the second. This restriction can be lifted by using a SPAD array [22,23]; however, here we consider the use of only a single SPAD. The presence and/or temporal position of peaks in g(τ) directly depends on the sequence of optical pulses from the transmitter, and therefore can be used as a means of transmitting data.
In reality, the SPAD output is not a continuous probability distribution, but a series of discrete photon detection events. These events can occur due to the optical pulses from the transmitter, background photons, or dark counts. SPADs typically have dark count rates (DCRs) ranging from hundreds of Hz to several kHz [2,6,7,12]; however, the SPAD used in the following experimental sections has an active cooling system, reducing the DCR to 25 Hz. The SPAD output signal is sampled over a time period into N_s time bins t_i, i = 1, …, N_s, chosen to be smaller than τ_d, so that each bin contains a number of counts f_i ∈ {0, 1}. The correlation time is also discretised into τ_j, j = 1, …, N_τ. For a single pair of pulses a correlation either is or is not detected, so the optical signal must be repeated many times to distinguish correlation counts from noise. Instead of a single pulse pair, the pulses are continuously repeated at a rate specified by the temporal separation, R_pulse = 1/T. With the correlation time bin size chosen as an integer multiple of the sampling bin size, τ_bin = k t_bin, we can define start and stop indices for correlating across i as n_start = τ₁/t_bin and n_stop = n_start + kN_τ − 1. With this, the discrete form of Equation (1) is:

g(τ_j) = Σ_{i=1}^{N_s} f_i Σ_{n = n_start + (j−1)k}^{n_start + jk − 1} f_{i+n}.    (2)

As f_i is a binary value, and the output from the SPAD is a transistor-transistor logic (TTL) signal, the summation could be implemented with simple logic circuits.
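A direct transcription of this summation follows (a Python sketch; the paper implements the equivalent with logic circuits). It assumes f is the binary sample stream, the index conventions are as reconstructed in Equation (2) above, and all delays are shorter than the stream length:

```python
import numpy as np

def correlation_histogram(f, n_start, k, n_tau):
    """Discrete correlation histogram g(tau_j): counts pairs of detections
    whose separation (in sampling bins) falls inside correlation bin j,
    each correlation bin spanning k sampling bins."""
    f = np.asarray(f)
    g = np.zeros(n_tau, dtype=int)
    for j in range(n_tau):
        for n in range(n_start + j * k, n_start + (j + 1) * k):
            g[j] += int(np.sum(f[:-n] * f[n:]))  # coincidences at delay n
    return g
```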
Encoding data in g(τ_j) has the potential to allow data transfer at exceptionally low light levels, and in the presence of significant background illumination. To detect correlations, the receiver requires the detection of a single photon from each optical pulse. Such conditions allow the average received power to be extremely low, in the range of pW. The trade-off in this transmission scheme is that the data rate is expected to be relatively modest, as the optical signal must be repeated several times in order to generate a distinguishable signal in g(τ_j).
There are several potential ways to encode data in g(τ_j), with parallels to on-off keying (OOK), PPM or pulse amplitude modulation (PAM). Here, we consider the simplest form of encoding, OOK, where data can be encoded using a single pulse time separation. On transmission of the symbol "1", pulses are transmitted continuously with a fixed time separation T = 40 ns, corresponding to a repetition rate of 25 MHz, so g(τ_j) will show a peak at τ = T. This time separation was chosen because the dead time of the SPAD used is 35 ns. On transmission of the symbol "0" no pulses are transmitted, producing only background counts in g(τ_j). A schematic of the expected waveforms is shown in Figure 1b. With pulse width t_pulse = 5 ns, deliberately less than τ_d, only one signal photon can be detected from each pulse, indicated by the blue SPAD signals in Figure 1b. In reality, the detection rate will be less than one per pulse, and pulses can also be missed if they are received during the dead time after a noise pulse, indicated in red. Time correlation of the measured events from the SPAD is performed over a data interval, producing a histogram with peaks at 40 ns intervals for transmission of a "1", and a background correlation level for transmission of a "0", determined by ambient background light and the detector dark count rate. Applying a threshold to the histogram bin generated for each symbol at a delay of 40 ns allows decoding of the binary stream. This threshold must be sufficient to reject correlation counts arising from background and dark count correlations. A crucial feature of this method is that it is robust to temporal jitter between the transmitter and receiver. Synchronisation of the system can be easily achieved by using an embedded clock in the transmitted data, as discussed in Section 4.2.
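Decoding then reduces to thresholding the 40 ns histogram bin for each symbol interval. A sketch of that step follows; the sampling bin size, symbol length and threshold value here are illustrative, not the paper's operating parameters, and samples_per_bit is assumed to exceed the 40 ns delay in bins:

```python
import numpy as np

def decode_ook(f, samples_per_bit, threshold, t_bin=1e-9, T=40e-9):
    """Threshold the correlation count at the 40 ns delay for each data
    interval of the binary SPAD stream f; returns the decoded bit list."""
    n_T = int(T / t_bin)                 # pulse separation in sampling bins
    bits = []
    for start in range(0, len(f) - samples_per_bit + 1, samples_per_bit):
        chunk = f[start:start + samples_per_bit]
        corr = int(np.sum(chunk[:-n_T] * chunk[n_T:]))  # counts at 40 ns
        bits.append(1 if corr >= threshold else 0)
    return bits
```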
Experimental Demonstration
The scheme detailed above was realised using a GaN violet-emitting (405 nm) micro-LED device as the transmitter and a silicon single-photon avalanche diode (SPAD) as the receiver. The LED chip was bonded to a custom CMOS driver allowing short pulse operation, with durations of 5 ns. The 405 nm LED was initially used as these devices showed improved pulsed performance over their other-wavelength counterparts. Data signal modulation was applied as a slower on-off keying of the short pulse train. Figure 2a shows a measured pair of pulses from the micro-LED, of 5 ns duration and with a relative delay of 40 ns. A variable neutral density filter was placed between the emitter and detector to control the received power at the SPAD. A schematic of the measurement setup is shown in Figure 2b. Full details of the devices and electronic drivers are given in Section 4.1.
Signal-to-Noise Ratio
To set an operation threshold for the system, a figure of merit similar to the classical signal-to-noise ratio (SNR) must be defined. In this method, it is the distinguishability of the correlation peak in the g(τ) function that indicates the robustness of the classical information recovery to noise.
Conventional SNR can be defined as SNR = 10 log₁₀(N_signal/N_noise), where N_signal is the average signal correlation count and N_noise is the average noise correlation count. However, as the number of pulse repetitions increases, the correlation counting interval increases, causing both N_signal and N_noise to increase at linear rates. This results in a constant SNR, which does not reflect the observed increase in distinguishability of signal correlations with increasing pulse repetitions.
Instead, it is more useful to consider the statistical distribution of correlation counts for signal and noise. Photon counting experiments were undertaken using the experimental setup described above, with a received power at the detector of 38 pW, corresponding to a detector count rate of 1.07 × 10⁷ Hz, in a dark lab environment with an average background count rate of 619 Hz. Note that this count rate contains both dark counts and counts from the small amount of ambient light. The delay correlations of detected photons were binned with a resolution of 10 ns, with the transmitted pulse delay set at 40 ns. Figure 3a,b shows average histograms of received photon correlations for 5 and 100 pulse repetitions, respectively. Figure 3c shows the histogram for 100 pulse repetitions under high background conditions, displaying the correlation histogram due to background noise alone, and signal with noise. The background count rate for this measurement was 10⁷ Hz.
In Figure 3a-c, the signal is defined as the number of correlations in the 40 ns delay time bin, and the noise correlation count is taken from the 60 ns delay bin. Correlation counts follow a Poissonian distribution, as they are discrete independent events arising directly from shot noise limited photon counts. Figure 3d,e shows the measured Poissonian distributions for signal with noise and noise alone correlation counts at 5 and 100 repetitions of the pulses, respectively, taken from 1500 independent measurements of each case. Figure 3f shows the distributions for background noise, signal alone (identical to Figure 3e) and combined signal and noise. At five repetitions, the probability distributions for signal and noise are strongly overlapped. Thus, a correlation count peak due to signal transmission is difficult to distinguish from a correlation count peak due to random background and dark counts. At 100 repetitions, the overlap of signal and noise distributions is significantly reduced, making distinction much easier. A histogram threshold equates to a point along the x-axis of the distribution plots. Evidently, a threshold of 2 would result in many erroneous detections at five repetitions, whereas, at 100 repetitions, the majority of signal correlation peaks would be correctly identified, and noise correlation peaks rejected. Under high background noise conditions, the number of correlations from noise is increased, so the threshold must increase to distinguish between the correlations due to noise from those due to the signal, as both will be present on transmission of "1".
Distinguishability is therefore described by Equation (3), the overlap of the Poisson distributions for: (i) the total signal and noise contributions, P_T(k); and (ii) the noise alone, P_n(k):

OVL = Σ_k min[P_T(k), P_n(k)].    (3)

P_T(k) is related to the signal count distribution P_s(k) and the noise count distribution P_n(k) via Equation (4):

P_T(k) = Σ_{m=0}^{k} P_s(m) P_n(k − m).    (4)

Here, P_{n/s}(k) are the probabilities of k correlation counts occurring due to noise or signal with mean λ, given by Equation (5):

P(k) = λᵏ e^{−λ} / k!.    (5)
Figure 3g-i shows the calculated overlap for changing pulse repetitions, photon detection rate and noise correlations, respectively. The overlap reduces exponentially with pulse repetitions, faster than exponentially with photon detection rate, and increases sub-exponentially with background correlations. This is understood by noting that λ in Equation (5) follows λ = p_ph² N_rep, where p_ph is the probability of detecting a photon from a single pulse, and N_rep is the number of pulse repetitions.
Therefore, the distinguishability of binary 0 and 1 is governed by the Poissonian overlap, Equation (3), and depends on the number of sampled pulse repetitions, the received signal power, and the background intensity, with the first two parameters dominating.
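These relations are easy to evaluate numerically. Below is a sketch of the overlap calculation, assuming Equation (3) is the shared area of the two count distributions as reconstructed above; the function name and example parameter values are illustrative:

```python
import numpy as np
from scipy.stats import poisson

def distinguishability(p_ph, n_rep, lam_noise, kmax=200):
    """Overlap of the noise-only and signal-plus-noise correlation count
    distributions (eqns. 3-5), with lambda_signal = p_ph**2 * n_rep."""
    lam_signal = p_ph ** 2 * n_rep
    k = np.arange(kmax)
    p_noise = poisson.pmf(k, lam_noise)
    # Sum of independent Poisson counts is Poisson with summed means,
    # equivalent to the convolution in eqn. (4).
    p_total = poisson.pmf(k, lam_signal + lam_noise)
    return float(np.sum(np.minimum(p_total, p_noise)))

# Overlap shrinks rapidly with repetitions, as in Figure 3g:
print(distinguishability(0.2, 5, 0.5), distinguishability(0.2, 100, 0.5))
```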
Data Rates
The achievable data rate of this system is determined by the number of pulse repetitions required to distinguish the signal, and hence by the received power and the time separation between pulses. The SPAD response imposes a lower limit on this separation, due to the dead time τ_d and pulse width τ_pulse, giving an achievable data rate of:

R_data = 1 / [N_rep (τ_d + τ_pulse)],

where N_rep is the number of pulse repetitions required to see a distinguishable peak in the correlation histogram. Use of a SPAD array could lift the restrictions imposed by dead time through pulse combining techniques [23]. To demonstrate the system performance as a function of received power and data rate, bit error ratio (BER) measurements were taken. A target BER of 1 × 10⁻³ was used, as FEC codes can reduce this to effectively error-free levels at a small overhead on data throughput [24]. A pseudo-random bit sequence (PRBS) of 10⁴ bits was transmitted, limited by the data processing capabilities of the oscilloscope and PC components in the measurement setup. The ND filter wheel allowed control of the received power or, equivalently, the photon detection probability. Figure 4a shows BER curves for varying data transmission rates, taken with minimal background light. At 8.25 pW of average power, corresponding to an average of 0.34 incident photons per pulse, a data rate of 100 kb/s was possible with a BER of less than 10⁻³. Received optical power can be reduced at the expense of data rate. A data rate of 10 kb/s can be achieved at the same BER with 2 pW, corresponding to 0.08 photons per pulse. The power measurements quoted here and used in Figures 4 and 5 are the incident optical power on the active area of the SPAD, calculated through numerical methods from the average detector count rate. The detector count rate is the parameter that governs BER performance; however, the incident optical power will be influenced by the performance of the SPAD. Most importantly, the photon detection probability (PDP) at 405 nm is 18%, so the incident photon flux is significantly higher than the detector count rate. More efficient photon detection would improve BER performance in terms of required power.
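A worked example of the rate formula as reconstructed above; the value of N_rep below is inferred from T = 40 ns and the quoted 100 kb/s operating point, not stated by the paper:

```python
def max_data_rate(n_rep, tau_d=35e-9, tau_pulse=5e-9):
    """Achievable OOK rate for n_rep pulse repetitions per bit, assuming
    R_data = 1 / (n_rep * (tau_d + tau_pulse)), with T = 40 ns here."""
    return 1.0 / (n_rep * (tau_d + tau_pulse))

print(max_data_rate(250))  # 250 repetitions per bit -> 1e5 b/s = 100 kb/s
```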
The system performance can also be described in terms of the number of received photons per bit. Figure 4b shows the detected photons per bit for each data rate, at the level required for a BER of less than 10⁻³. The fitted curve is calculated from the relationship between correlation counts, received power and data rate. The number of signal correlations depends on the square of the received power and is inversely proportional to the data rate R_data. To reach a given target BER, a certain constant number of signal correlation counts must be reached, meaning (ph/s)² ∝ R_data. As photons per bit is the required photons per second divided by the data rate, the data follow a y = x^(−1/2) relationship. The 100 kb/s link transmits each bit with an average of 27 detected photons. This is relatively close to the standard quantum limit (SQL), set by Poissonian photon statistics [25]. For a BER of 10⁻³, a minimum of seven photons is required to detect a "1". Therefore, an average of 3.5 photons per bit is required, assuming the probability of transmitting "0" or "1" is equal. The implemented scheme will be unable to reach the SQL, due to the correlation approach: two photons are required for a single correlation detection, which itself has a Poissonian distribution that must be distinguished from noise.
Figure 4. (a) BER curves for varying data transmission rates. (b) Detected photons per bit required to achieve a BER of less than 10⁻³ for varying data rates, fitted with an x^(−1/2) relationship. The standard quantum limit for OOK at this BER is also shown.
After correcting for detector efficiency, 27 detected photons equate to 7.37 × 10⁻¹⁷ J incident on the detector per bit. This exceptionally low energy demonstrates the suitability of the transmission scheme for low power or high loss systems. As mentioned above, more efficient photon detection would allow further reductions in the energy received per bit. Additionally, efficiency improvements will be possible through the use of PPM-style transmission, to transmit multiple bits per correlation peak.
Robustness to Noise
A major advantage of this transmission scheme is that it is expected to be robust against background counts, as ambient light is generally uncorrelated on the time scale of 10 s of ns. To verify this, BER measurements were taken for increasing levels of background light using a secondary 450 nm light source, as shown in Figure 2b. As background counts increase, the probability of detecting noise correlations increases. The threshold applied to the correlation histogram must be increased to avoid erroneous detection of bits, requiring higher received average power to maintain the same BER performance.
The results in Figure 5a show the incident optical power required to maintain a BER of 1 × 10⁻³ at 10 and 50 kb/s with increasing levels of background optical power. The signal power requirements do increase; however, they remain very low. At high background levels, the required signal power is significantly lower than the power received from background illumination, with equal levels indicated by the solid line. Here, the interplay between the number of pulse repetitions, photons detected per pulse and background counts becomes important. As discussed in Section 2.3, higher levels of background power increase the number of bit errors, as the overlap of the Poisson distributions increases. While the target BER can be recovered by increasing the received photons per bit, equivalent to increasing the signal power, it can also be recovered by increasing the number of transmitted pulses, equivalent to reducing the data rate. The result is that, given a certain level of background optical power, a signal can always be transmitted at a power level below that of the noise, at the expense of data rate.
The system was also measured under modulated background illumination. Since the number of detected correlation counts depends on the square of the received power, a high modulation rate background should interfere in the same manner as a DC signal at the root-mean-square (RMS) of its count rate. For this reason, background signals were set to maintain similar RMS photon count rates for comparison with the DC measurements. The RMS background optical power was approximately 15 pW for all measurements. The power required to maintain a BER of 10⁻³ is shown in Figure 5b, and displays two distinct groups of results. The high background modulation rates of 1 and 10 MHz show similar required signal power to constant background conditions, while when the background modulation rate is close to the correlation link data rate, the BER performance is degraded, requiring approximately 40% more received power. This reduction in performance occurs because the background signal generates different levels of noise correlations from one bit period to the next, making it more difficult to choose a correlation threshold. At higher background modulation rates the background signal completes many cycles within a single bit period of the correlation link, and the dead time of the SPAD restricts the number of photons that can be detected per background cycle, causing the signal to interfere in the same manner as constant background. Nevertheless, all conditions still reach a BER of less than 10⁻³ for less than 14 pW of received signal power. Under all background conditions, the signal is transmitted with a lower photon count rate than the background signal, demonstrating low power performance even with high power modulated background interference.
Satellite Systems Demonstration
The communications system presented here is applicable in many scenarios, but is particularly attractive for inter-satellite links. The semiconductor devices are extremely compact, have low power consumption and are readily integrated with control electronics. LED-based visible light communications shows potential for use with cube satellites [26,27]. The robustness of the signal to background noise and operation at picowatt levels of received optical power mean the scheme could be implemented without the high-accuracy pointing requirements, telescope optics and filters of current satellite systems.
To highlight this capability, the system was tested in the nano-satellite hardware and software test-bed, NANOBED, shown in Figure 6a, which can simulate the available power systems on a cube satellite. To demonstrate that the full communication link was able to be powered by the NANOBED, a real-time decoder, incorporating embedded clock signal recovery, was implemented on an FPGA platform to replace the oscilloscope and PC components in the characterisation setup. Details of this setup, shown in Figure 6b, are given in Section 4.3. The LED transmission system was integrated with one NANOBED system, while the SPAD receiver system was integrated with a second. In this work, the solar-panel-emulating power sources are used to supply the transmitter and receiver devices via the electrical power supply (EPS) and battery units, simulating an in-orbit scenario. The transmitter side of the real-time link requires a single FPGA board, from which the CMOS micro-LED array is powered and controlled. On the receiver side, the commercial SPAD module requires a 6 V DC supply, and a second FPGA is used to process the received signals. A summary of typical power consumption is shown in Table 1. The SPAD consumes the most power in the system; however, the commercial module has not been designed with power conservation in mind, and employs significant levels of active cooling. A custom SPAD receiver may have power requirements of 10-100 mW [6,22]. While the lack of active cooling would result in higher levels of dark counts, the resilience to constant background signals demonstrated in Section 2.5 indicates this would not be problematic. Additionally, bespoke electronics in place of the FPGA boards may also permit lower power consumption; therefore, this demonstration should be thought of as an upper limit on the power requirements. For the laboratory demonstration, the transmitter and receiver were placed 4 m apart, with the micro-LED pixel projecting the light across a 4 cm wide square and received power controlled using a neutral density filter. A micro-LED emitter at 450 nm was used to improve the PDP to 25%. As shown in Figure 7, the live link requires 2.5 pW of received power to maintain a BER of 10⁻³ at 20 kb/s. On a 20 µm diameter SPAD, 3 pW corresponds to an intensity of 9.5 mW/m². To provide this over the projected 4 cm wide square, 15.3 µW must be collected from the micro-LED by the transmitter lens.
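The quoted link-budget figures can be cross-checked with a few lines of arithmetic (all input values taken from the text above):

```python
import math

spad_area = math.pi * (20e-6 / 2) ** 2   # 20 um diameter active area
intensity = 3e-12 / spad_area            # 3 pW incident on the SPAD
collected = intensity * 0.04 ** 2        # over the projected 4 cm square
print(intensity)   # ~9.5e-3 W/m^2, matching the quoted 9.5 mW/m^2
print(collected)   # ~1.53e-5 W, matching the quoted 15.3 uW
```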
Discussion
We have demonstrated a transmission scheme suitable for single photon level optical wireless communications. By transmitting temporally correlated signals, data communications can be performed at extremely low light levels, with received power on the order of picowatts. Signals can be transmitted using an LED and received with a single SPAD. A 100 kb/s link has been achieved with a BER of less than 10⁻³ at a received power of 8.25 pW. The scheme is robust to background light, with only a minor increase in required power for very high background conditions. Modulated background signals appear to have little additional influence over that of continuous background, suggesting the scheme could be used in parallel with other optical communications with minimal interference. Furthermore, multiple transmission systems using this scheme could operate without interfering with each other, simply by using different pulse time separations. The modest data rates presented in this work are dominated by two key factors: firstly, the requirement of this protocol for correlating many repetitions of a pulse pattern, and, secondly, the dead time of the SPAD detector itself.
A real-time transmission setup has been demonstrated, showing a method for clock synchronisation and determination of a threshold level. The current, unoptimised implementation allows a data rate of up to 20 kb/s, with only a minor reduction in performance when compared to offline-processed transmission. The real-time transmission link has been demonstrated in a simulated satellite environment, providing data transmission at a received optical intensity of 9.5 mW/m² under simulated nanosatellite power systems. Additionally, GaN LEDs at low current densities show higher wall-plug efficiencies than their laser diode counterparts [28], further enhancing the power consumption characteristics of the system. In future satellite-focussed experiments, an optimum transmission wavelength will be chosen, based on LED performance, detector response and solar-blind wavelengths.
Data rate and photons per bit efficiency can both be improved through relatively straightforward modifications to the system. By using a SPAD array as a receiver rather than a single device, the dead time limitation can be overcome and therefore higher data rates achieved. In addition, by implementing a form of pulse position modulation, with powerful FEC codes, the photons per bit transmission efficiency can be improved. Finally, data rates may be enhanced by using a form of pulse amplitude modulation, however the received power requirement would also increase.
This transmission protocol has clear applications in communications systems for long range or high loss environments, but is also equally applicable in microscopy or low light level imaging systems when coupled with a SPAD imaging array, and can be implemented using a wide range of pulsed optical sources dependent on the application.
Optical Transmitter and Receiver Realisation
The transmitter used for the results presented here is a complementary metal oxide semiconductor (CMOS) integrated gallium nitride micro-LED pixel. Details and fabrication of comparable devices can be found in [29]. The micro-LED pixel is a square 100 × 100 µm in size, and part of a 16 × 16 array with a 405 nm emission wavelength. The array was fabricated in flip-chip format, and bump-bonded onto CMOS control electronics which allow the LEDs to be modulated in a pulsed mode, triggered by the falling edge of an input logic signal. The shortest stable optical pulses t_pulse that could be generated with this device and control system were 5 ns. To produce pulses for the OOK transmission, a data signal was produced at the desired data rate by a field-programmable gate array (FPGA) (Xilinx Spartan-3, XEM3010, Opal Kelly, Portland, OR, USA). The FPGA clocks were derived from a 48 MHz signal from a USB microcontroller, with parts of the system running at 100 MHz. The data sequence was sent to a simple transmission circuit. Here, the data signal was combined, through an AND gate, with an oscillator producing square waves with a period of 40 ns, as shown in Figure 2b.
The SPAD receiver is a commercial module (SPCM20A, Thorlabs, Newton, NJ, USA), with a detector active area diameter of 20 µm. The dead time of the detector is 35 ns, and the typical dark count rate is 25 Hz. At 405 nm and 450 nm, the PDP is 18% and 25%, respectively. The module outputs 3 V logic signals indicating photon counts. This signal was sent to an oscilloscope and collected by the PC for offline processing of g(τ_j). In a practical system, this processing could be performed by digital logic circuits. The LED output was collected with a lens (C220TME-A, Thorlabs, Newton, NJ, USA) and transmitted through a graded neutral density (ND) wheel (NDC-50C-4M-A, Thorlabs, Newton, NJ, USA). A 450 nm shortpass filter was used in front of the SPAD to reject additional background light. This filter was removed for the experiments assessing performance under high background conditions. The pixel was imaged onto the SPAD active area. As the pixel image is approximately a 7 mm square, only a small portion of the light was imaged onto the circular SPAD active area of 20 µm diameter. In a practical system, receiver optics could be used to collect more light onto the active area of the SPAD, reducing the loss through the channel. Received optical power was calculated numerically from the average number of photon counts detected. This method accounts for detector dead time and the photon detection probability at the operational wavelength. Details of the calculation can be found in the Supplementary Materials of Reference [21].
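The full numerical details are in Reference [21]; a simplified stand-in for that calculation is sketched below, assuming the standard non-paralyzable dead-time correction (the paper's pulsed-illumination treatment will differ in detail, so the output only roughly matches the quoted powers):

```python
def received_power(count_rate, tau_d=35e-9, pdp=0.18, wavelength=405e-9):
    """Rough incident optical power from a SPAD count rate: undo dead-time
    losses (non-paralyzable model) and detection efficiency, then convert
    photons per second to watts."""
    true_rate = count_rate / (1.0 - count_rate * tau_d)
    photon_rate = true_rate / pdp
    h, c = 6.626e-34, 2.998e8            # Planck constant, speed of light
    return photon_rate * h * c / wavelength

print(received_power(1.07e7))  # tens of pW, the regime used in the experiments
```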
To assess the effects of DC background illumination, a commercial 450 nm LED (LD CQ7P, OSRAM, Munich, Germany) was placed within a few centimetres of the transmitter LED, directed towards the SPAD, as shown in the setup schematic in Figure 2b. By increasing the driving current for the commercial LED, the background counts could be controlled. The modulated background optical signal was generated using a commercial 450 nm LED (LERTDUW S2W, OSRAM, Munich, Germany) modulated with a transistor. This commercial LED had a modulation bandwidth of 15.9 MHz, and was placed within a few centimetres of the transmitting LED. Modulating this LED with a PRBS effectively simulates operation of the correlation link in an environment with conventional optical wireless communication links.
Real Time Link
To demonstrate a practical system, an FPGA-based synchronisation system was implemented, involving data transmission in frames consisting of a 6-bit clock word and 32 data bits. The clock word, "001101", allows both frame-level and symbol-level synchronisation of the data streams. Details on the choice of clock word and the synchronisation methods can be found in the Supplementary Materials of Reference [21]. A block diagram of the experimental setup for real-time transmission is shown in Figure 6b. On the transmitter side, the FPGA was used to generate a data stream in frames, with the 6-bit clock word. In contrast to the offline setup in Figure 2b, this FPGA directly supplied the falling edge trigger for the LED board, without the need for extra logic circuitry. The receiver FPGA (Xilinx Spartan-3, XEM3010, Opal Kelly, Portland, OR, USA) was connected to a separate PC, and clock synchronisation removed the need for a trigger from the transmitter. However, due to limitations of the FPGA boards, the achievable data rates with the real-time setup are limited to 20 kb/s. It should also be noted that the data rates quoted here include transmission of the clock word. This 18.75% overhead reduces the useful data transfer to 8.42 and 16.84 kb/s for the 10 and 20 kb/s links, respectively.
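A software sketch of the frame synchronisation step (the paper implements this in FPGA hardware; the scanning logic here is only illustrative):

```python
def find_frame_start(bits, clock_word=(0, 0, 1, 1, 0, 1)):
    """Return the index of the first occurrence of the 6-bit clock word;
    each frame is the clock word followed by 32 data bits (38 bits total)."""
    w = len(clock_word)
    for i in range(len(bits) - w + 1):
        if tuple(bits[i:i + w]) == clock_word:
            return i
    return None

# usage: payload of the first located frame
# start = find_frame_start(rx_bits); data = rx_bits[start + 6 : start + 38]
```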
NANOBED Satellite Simulator Experiments
The LED transmitter and SPAD receiver systems were independently powered by separate NANOBED systems, positioned approximately 4 m apart. A 450 nm micro-LED was used, focussed on the receiver plane using an 8 mm focal length lens (C240TME-A, Thorlabs, Newton, NJ, USA), giving a pixel image size of approximately 4 cm. To increase the received power on the 20 µm diameter SPAD, a 35 mm focal length collection lens (ACL4532U-A, Thorlabs, Newton, NJ, USA) was used.
The satellite simulator test bed is a FlatSat-configured CubeSat system, which includes an electrical power system, batteries, an on-board computer and communication systems. A software design tool offers mission design, simulation and analysis, including a link to the hardware for in-loop simulation and testing. A software defined radio link to NANOBED enables ground software validation and operational testing, over which command and control of the system components can be invoked. The NANOBED EPS provides a 5 V bus suitable for powering the transmitter and receiver FPGA boards directly. For the SPAD supply, the unregulated battery bus was used with a voltage regulator to fix the voltage to 6 V. | 8,005.2 | 2018-09-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Gluon PDF of the proton using twisted mass fermions
In this paper, we present lattice QCD results for the $x$-dependence of the unpolarized gluon PDF for the proton. We use one ensemble of $N_f=2+1+1$ maximally twisted mass fermions with a clover improvement, and the Iwasaki improved gluon action. The quark masses are tuned to produce a pion with a mass of 260 MeV. The ensemble has a lattice spacing of $a=0.093$ fm and a spatial extent of 3 fm. We employ the pseudo-distribution approach, which relies on matrix elements of non-local operators that couple to momentum-boosted hadrons. In this work, we use five values of the momentum boost between 0 and 1.67 GeV. The gluon field strength tensors of the non-local operator are connected with straight Wilson lines of varying length $z$. The light-cone Ioffe time distribution (ITD) is extracted utilizing data with $z$ up to 0.56 fm and a quadratic parametrization in terms of the Ioffe time at fixed values of $z$. We explore systematic effects, such as the effect of the stout smearing for the gluon operator, excited states effects, and the dependence on the maximum value of $z$ entering the fits to obtain the gluon PDF. Also, for the first time, the mixing with the quark singlet PDFs is eliminated using matrix elements with non-local quark operators that were previously analyzed within the quasi-PDF framework on the same ensemble. Here, we expand the data set for the quark singlet and reanalyze within the pseudo-PDFs method eliminating the corresponding mixing in the gluon PDF.
I. INTRODUCTION
As the mediators of the strong force, gluons play a significant role in the internal structure of hadrons. However, color confinement, a key aspect of quantum chromodynamics (QCD), prevents direct observation of quarks and gluons. Instead, both theoretical and experimental approaches to hadron structure rely on QCD factorization, which separates the perturbatively calculable hard-scattering part from the non-perturbative part described by form factors and distribution functions, including parton distribution functions (PDFs). PDFs are probability distributions quantifying the likelihood of finding partons with a particular momentum fraction. Precise and accurate calculations of the gluon PDF are necessary for J/ψ photoproduction at Jefferson Lab and for the cross sections of Higgs boson production and jet production at the Large Hadron Collider (LHC), as well as for providing theoretical input to experiments at the future Electron-Ion Collider (EIC) in the U.S. and the Electron-Ion Collider in China (EicC).
Lattice QCD is a first-principles approach to calculating strong-force quantities on a discrete four-dimensional Euclidean lattice. While lattice QCD calculations have proven successful in extracting the non-perturbative dynamics of QCD governing hadron structure, the light-like nature of PDFs prevents their direct calculation on Euclidean lattices. Several methods have been proposed over the last decade to relate lattice data to physical light-cone distributions. Two notable and widely used approaches are the quasi-distribution [1,2] and pseudo-distribution [3-7] methods. These approaches utilize the same matrix elements of momentum-boosted hadrons coupled to non-local operators containing a Wilson line, but differ in the way the Euclidean observable is factorized into its light-cone counterpart: directly in coordinate space (pseudo) or after reconstruction of the x-dependence, i.e., in momentum space (quasi). Typically, they are also renormalized differently. By construction, the renormalization for pseudo-distributions cancels the divergences by forming an appropriate ratio of matrix elements (ratio scheme). In turn, quasi-distributions are typically renormalized using a dedicated calculation of vertex functions of the operator under study, which leads to an RI/MOM type of renormalization. It should be noted that the ratio scheme is also increasingly utilized for quasi-distributions in hybrid schemes [8] that treat short and long distance scales differently. Another typical difference lies in the x-dependence reconstruction. For quasi-distributions, this step uses Euclidean matrix elements over the full range of the non-local operator length, z. Pseudo-distributions, in turn, are matched in coordinate space, which imposes limitations on the value of z: it needs to be kept relatively small so that it remains in the perturbative region. Thus, without access to the full range of z, approaches based on pseudo-distributions typically employ a physically motivated fitting ansatz for the functional form of the reconstructed function.
In this work, we present our calculation of the unpolarized gluon PDF for the proton using the pseudo-PDF approach. We calculate the Ioffe-time pseudo-distribution function (pseudo-ITD) by taking the ratio of matrix elements and evolving to a common scale. The ITD describes the interaction of the nucleon with the probe in deep inelastic scattering (DIS) interactions. We use a fitting ansatz to reconstruct the pseudo-PDF from the pseudo-ITD. This approach has proved successful for the extraction of the quark pseudo-PDF. The gluon component presents additional difficulties, including the need for an order of magnitude more statistics arising from the noise associated with the purely disconnected diagram. The gluon PDF also mixes with the quark singlet PDF. Previous lattice calculations have neglected this mixing. We present the first analysis incorporating the quark singlet mixing from lattice QCD data. We compare our pseudo-PDF results neglecting mixing with lattice results from the HadStruc collaboration [66]. We also compare our results with and without mixing to the global analysis of the JAM collaboration [71]. This paper is organized as follows. In Secs. II A and II B, we describe the theoretical and lattice setups for the calculation. In Sec. III A, we present our analysis of various smearing and source-sink time separation values of the matrix elements and reduced ITDs. Sec. III B shows the results of the pseudo-ITD and pseudo-PDF neglecting mixing with the quark singlet, and Sec. III C presents the results addressing the mixing with the quark singlet.
A. Approach
The computationally expensive component of the methodology is the evaluation of matrix elements with momentum-boosted proton states, N(P), that couple to non-local gluon operators; P indicates the proton momentum. The operator is constructed from two gluon field-strength tensors, $F_{\mu\nu}$, located at two lattice points that are spatially separated in the $\hat z$ direction by a distance z. The operator also contains two straight Wilson lines, connecting the points $0 \to z$ and $z \to 0$, to ensure gauge invariance. The matrix element is given in Eq. (1), where $F_{\mu\nu}$ is the gluon field strength tensor and g is the bare coupling constant. Potential candidates for the gluon operator arise from different choices of the indices $\mu$, $\nu$, $i$, $j$, which can be temporal or spatial. The various options for the indices lead to operators with different properties. Here, we use the operator $O_4$, which does not exhibit mixing under renormalization. This operator has a non-vanishing vacuum expectation value that must be subtracted. Since the calculation of the gluon loops is computationally very inexpensive, the vacuum expectation value subtraction does not pose any challenge in the calculation. It should be noted that, regardless of the choice of operator, the unpolarized gluon PDF mixes with the unpolarized singlet quark PDF. We take this mixing into account in our analysis, and we quantify its effects by comparing to results with the mixing neglected.
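Schematically, and suppressing the decomposition into invariant amplitudes, the non-local matrix element described above takes the form (this is our assumed reading of Eq. (1), with conventions for the gauge field fixed only up to factors of $i$):
$$M_{\mu i;\nu j}(z,P) = \langle N(P)\,|\,F_{\mu i}(z)\,\mathcal{W}(z,0)\,F_{\nu j}(0)\,|\,N(P)\rangle, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + g\,[A_\mu,A_\nu].$$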
The matrix elements of Eq. (1), $M_{\mu i;\nu j}$, are extracted from the ground-state contribution to the ratio of the three-point and two-point correlation functions. The variables $t_s$, $\tau$, and $t_0$ indicate the times of the sink, operator insertion, and source, respectively. Without loss of generality, we take the source position to be at $t_0 = 0$. The ground-state contribution, $M_O \equiv M_{\mu i;\nu j}$, is identified at large enough values of $t_s$ and at $\tau$ away from the source and the sink. In practice, we seek convergence of the ratio as $t_s$ is varied and as $\tau$ moves away from the source and the sink.
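As an illustration of this step, a minimal jackknife estimate of the ratio $R(t_s,\tau) = C^{3pt}_O(t_s,\tau)/C^{2pt}(t_s)$ could be scripted as follows; the array layouts and function names are assumptions for the sketch, not the production analysis code.

```python
import numpy as np

def ratio_3pt_2pt(c3pt, c2pt, ts_list):
    """Jackknife mean and error of R(ts, tau) = C3pt(ts, tau) / C2pt(ts).

    c3pt: array [ncfg, len(ts_list), ntau], c2pt: array [ncfg, nt]
    (hypothetical layouts). The plateau of R in tau, for ts large enough,
    gives the ground-state matrix element M_O.
    """
    ncfg = c3pt.shape[0]
    out = {}
    for i, ts in enumerate(ts_list):
        samples = []
        for k in range(ncfg):                      # single-elimination jackknife
            keep = np.arange(ncfg) != k
            samples.append(c3pt[keep, i].mean(axis=0) / c2pt[keep, ts].mean())
        samples = np.asarray(samples)
        mean = samples.mean(axis=0)
        err = np.sqrt((ncfg - 1) * samples.var(axis=0))
        out[ts] = (mean, err)
    return out
```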
For the calculation of gluonic contributions to the proton, the three-point functions $C^{3pt}_O$ correspond to so-called disconnected contributions, which are constructed from the expectation value of the product of a gluon loop with the proton two-point function. Also, for the unpolarized gluon PDF, the appropriate parity projector is $\Gamma_0 \equiv \frac{1}{4}(1+\gamma_0)$ for both the three- and two-point functions.
In our analysis, we implement the pseudo-ITD framework, which requires several nontrivial steps to extract the x-dependence of the gluon PDF. For convenience, we use $M_g$ to denote the ground-state contribution for the operator $O_4$. First, the matrix elements at different values of P and z are combined to construct the reduced Ioffe-time distribution (pseudo-ITD), which depends on the Lorentz-invariant quantities $\nu \equiv z \cdot P$ (Ioffe time) and $z^2$. For multiplicatively renormalizable operators, the reduced ITD acts as a gauge-invariant renormalization scheme that removes UV divergences, including the power divergence due to the presence of the Wilson line. The effects of the residual scale $1/z$ can be accounted for by an evolution term (see below), and data from different scales $1/z$ can be combined into ITDs defined at a common renormalization scale, $\mu^2$. Furthermore, it is anticipated that the double ratio of Eq. (9) leads to suppressed discretization and higher-twist effects, which are assumed to be similar in the two single ratios that enter it [44].
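For orientation, the ratio-scheme definition that is standard in pseudo-distribution studies, and which we take to be the content of Eq. (9), is the double ratio
$$\mathcal{M}(\nu,z^2) = \frac{M_g(\nu,z^2)\,/\,M_g(\nu,0)}{M_g(0,z^2)\,/\,M_g(0,0)}\,,$$
normalized such that $\mathcal{M}(\nu,0) = \mathcal{M}(0,z^2) = 1$.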
Another component of this work is the calculation of the unpolarized quark PDF, needed to address the mixing with the gluon case. The matrix element can be written similarly to the gluon case, where the fermionic field $\psi_f$ is taken to be the up, down, or strange quark; f indicates the flavor. For a proper flavor decomposition of the up- and down-quark contributions, we calculate the disconnected diagram in addition to the connected one. Moreover, the strange-quark contribution is purely disconnected for the nucleon. Forming the quark-disconnected contributions requires the evaluation of quark loops that are combined with the nucleon two-point correlators. The quark loop of the non-local operator involves $D^{-1}_f(x_{\rm ins}; x_{\rm ins}+z)$, the quark propagator whose endpoints are connected by a Wilson line. More details on the calculation of the disconnected contributions can be found in Ref. [35]. Here, we combine the connected and disconnected contributions to the matrix element to form the singlet $u+d+s$ combination, $M_q$. The latter is normalized by constructing the pseudo-ITD $\mathcal{M}_q$, similarly to the definition of Eq. (9).
To extract the light-cone counterpart of $\mathcal{M}_g$, denoted $Q_{gq}$, one must apply a matching procedure, known to one-loop accuracy [74,75]. In the matching equations, $\langle x \rangle^{\mu}_g$ is the gluon momentum fraction renormalized at the scale $\mu$, and the quark-singlet contribution involves $q_f(x,\mu^2)$ ($\bar q_f(x,\mu^2)$), the quark (antiquark) PDF of flavor f, with the sum running over all considered quark flavors ($f = u, d, s$). This singlet distribution is related to the imaginary part of the quark double ratio $\mathcal{M}_q$. Differentiating the corresponding relation with respect to the upper limit of the integral shows that the singlet quark Ioffe-time distribution appearing in the matching equation, $M_S(\nu,\mu^2)$, is purely real and related to the imaginary part of the quark double ratio.
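Concretely, and consistent with the statement in Sec. III C that only the $\nu$-derivative of the imaginary part of $\mathcal{M}_q$ enters the matching, the singlet ITD used below can be written as
$$M_S(\nu,\mu^2) = \frac{d}{d\nu}\,\mathrm{Im}\,\mathcal{M}_q(\nu,\mu^2)\,.$$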
The matching kernels $B_{gg}(u)$, $L(u)$, and $B_{gq}(u)$, together with the standard plus prescription, are given in Refs. [74,75]. The matching equations involve evolving the reduced gluon ITD to a common scale (the $B_{gg}(u)$ term), converting the expressions to the light-cone gluon ITD in the $\overline{\rm MS}$ scheme (the $L(u)$ term), and taking the mixing with the singlet quarks into account (the $B_{gq}$ term). It is convenient to rewrite Eq. (12) in three parts, so that one can inspect the role of the three terms separately: $\mathcal{M}'_g(\nu,z^2,\mu^2)$ is the evolved gluon ITD, which depends on $\nu$, the final scale $\mu^2$, and the initial scale $z^2$; the matching and conversion to the $\overline{\rm MS}$ scheme then yield $Q_g$; and, finally, taking the mixing with the singlet quark distribution $M_S$ into account yields the final light-cone ITD, $Q_{gq}$. The matched gluon ITD still keeps track of the initial scale $z^2$ at this stage. However, different scales $z^2$ should lead to the same light-cone ITDs up to higher-twist effects. For data points where this holds, i.e., where different initial $z^2$ lead to consistent values of $Q_g(\nu,z^2,\mu^2)$, $Q_g$ is averaged over the same values of $\nu$ extracted from different combinations of P and z. We denote such $\nu$-averaged ITDs by $Q_{g/gq}(\nu,\mu^2)$, i.e., dropping the argument indicating the initial scale z.
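Schematically, with the precise one-loop coefficients, color factors, and sign conventions deferred to Refs. [74,75], the three steps have the structure
$$\mathcal{M}'_g(\nu,z^2,\mu^2) \simeq \mathcal{M}_g(\nu,z^2) + \frac{\alpha_s}{2\pi}\,\ln\!\frac{z^2\mu^2 e^{2\gamma_E}}{4}\int_0^1 du\; B_{gg}(u)\,\mathcal{M}_g(u\nu,z^2)\,,$$
$$Q_g(\nu,z^2,\mu^2) \simeq \mathcal{M}'_g(\nu,z^2,\mu^2) - \frac{\alpha_s}{2\pi}\int_0^1 du\; L(u)\,\mathcal{M}'_g(u\nu,z^2,\mu^2)\,,$$
$$Q_{gq}(\nu,z^2,\mu^2) \simeq Q_g(\nu,z^2,\mu^2) - \frac{\alpha_s}{2\pi}\int_0^1 du\; B_{gq}(u)\,M_S(u\nu,\mu^2)\,.$$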
To extract the x-dependent gluon PDF, $xg(x)$, the light-cone ITDs must be subjected to a cosine Fourier transform. The extraction of $xg(x,\mu^2)$ poses an inverse problem [45], because one attempts to reconstruct a continuous distribution from a limited number of lattice data points covering a finite range of Ioffe times up to some $\nu_{\rm max}$. Therefore, to determine $xg(x,\mu^2)$, one requires additional information, which can be supplied in several ways. Here, we reconstruct the gluon PDF using a fitting ansatz commonly employed in the analysis of experimental data sets, of the form $xg(x,\mu^2) = N\,x^a(1-x)^b$, where the exponents a, b are fitting parameters and N is the normalization, fixed by the gluon momentum fraction $\int_0^1 dx\, xg(x) = \langle x \rangle_g$. The lattice data are thus fitted by minimizing a $\chi^2$ function. We consider the reconstruction both with ($Q_{gq}$) and without ($Q_g$) the mixing taken into account, to assess the effect of this mixing at the level of the x-dependent distributions. The data are weighted by the inverse variance of the light-cone ITDs, $\sigma^2_{Q_{g/gq}(\nu,\mu^2)}$, and $Q_f(\nu,\mu^2)$ is the cosine Fourier transform of the assumed fitting ansatz.
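A minimal sketch of this reconstruction step is given below, assuming the light-cone ITD is normalized such that Q(0) = 1, so that the fit determines $xg(x)/\langle x\rangle_g$; the starting values and names are illustrative, not those of the actual analysis.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def itd_model(nu, a, b):
    """Cosine transform of phi(x) = x^a (1-x)^b / B(a+1, b+1), i.e. xg(x)/<x>_g."""
    norm, _ = quad(lambda x: x**a * (1 - x)**b, 0.0, 1.0)       # Beta function B(a+1, b+1)
    val, _ = quad(lambda x: x**a * (1 - x)**b * np.cos(nu * x), 0.0, 1.0)
    return val / norm

def chi2(params, nu_data, q_data, q_err):
    a, b = params
    model = np.array([itd_model(nu, a, b) for nu in nu_data])
    return np.sum(((q_data - model) / q_err) ** 2)

# nu_data, q_data, q_err: light-cone ITD points and uncertainties (placeholders).
# fit = minimize(chi2, x0=[-0.5, 3.0], args=(nu_data, q_data, q_err), method="Nelder-Mead")
# With a, b = fit.x, the PDF is xg(x) = <x>_g * x**a * (1-x)**b / B(a+1, b+1),
# where <x>_g is taken from Ref. [83] as described below.
```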
B. Setup of lattice calculation
The calculation is performed using an $N_f = 2+1+1$ ensemble of twisted-mass clover-improved fermions and Iwasaki-improved gluons [76]. The quark masses are fixed such that the pion has approximately twice its physical mass ($m_\pi$ = 260 MeV). The lattice spacing is a = 0.0938(2)(3) fm, and the lattice volume is $32^3 \times 64$ in lattice units. The parameters of the ensemble are summarized in Table I. Matrix elements of gluon operators suffer from increased gauge noise, and one needs to (a) obtain high statistics and (b) use smoothing techniques. To this end, we calculate the correlation functions from different source positions on the same configuration; the cA211.30.32 ensemble has about 1,200 thermalized gauge configurations [76]. Utilizing several source positions per configuration, combined with the large speed-up achieved with the use of multigrid solvers [77-80], leads to an efficient increase in statistics. Here, we analyze a total of 200 source positions for each configuration. To further increase statistics without loss of generality, we calculate the matrix element using six kinematically equivalent setups, in which both the Wilson line and the momentum boost point along the ±x, ±y, and ±z directions. These six matrix elements can be averaged over, leading to total statistics exceeding one million measurements, as shown in Table II. Since the pseudo-ITD utilizes matrix elements at several values of the proton momentum, we use five values, namely P = 0, 0.42, 0.83, 1.25, 1.67 GeV. Each matrix element is normalized by the P = 0 case, and we found non-negligible correlations between the numerator and denominator of the reduced ITD in Eq. (9). These are eliminated by calculating all matrix elements on the same configurations and identical source positions. Regarding excited-state contamination, we use the measurements of Table II at multiple $t_s$ values; this comes at no additional computational cost since, by construction, the disconnected contributions are evaluated with an open sink time. The increased gauge noise is addressed by employing the stout smearing technique [81] on the gauge links entering the gluon field strength tensor and the Wilson line. The stout smearing parameter is ρ = 0.129 [82,83], and the number of smearing steps is chosen independently for the gluon field strength tensor ($N^F_{\rm stout}$) and the Wilson line ($N^W_{\rm stout}$). We apply 4D smearing to the field strength tensor and 3D smearing to the gauge links of the Wilson line. We have also tested 4D smearing on the Wilson line, obtaining compatible results after the double-ratio renormalization. We calculate 25 combinations of $N^F_{\rm stout}$ and $N^W_{\rm stout}$, using the values 0, 5, 10, 15, and 20 for each. Another technique to decrease the noise-to-signal ratio is to improve the overlap with the proton ground state. We apply momentum smearing [84] for the three highest momentum boosts, P = 0.83, 1.25, 1.67 GeV, which has been proven essential in suppressing the gauge noise in matrix elements with boosted hadrons and non-local operators [12]. We found the optimal value of the momentum smearing parameter to be ξ = 0.6.
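For reference, one stout-smearing step in the standard Morningstar-Peardon form (which we assume is the variant used here) replaces each link by
$$U^{(n+1)}_\mu(x) = \exp\!\big(i\,Q^{(n)}_\mu(x)\big)\,U^{(n)}_\mu(x)\,,$$
where $Q^{(n)}_\mu(x)$ is the traceless Hermitian projection of $\Omega_\mu(x) = C_\mu(x)\,U^{(n)\dagger}_\mu(x)$, and $C_\mu(x)$ is the sum of the perpendicular staples around $U_\mu(x)$ weighted by $\rho = 0.129$; the staple sum runs over all four directions for the field-strength links and only over the spatial directions for the Wilson-line links.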
The evaluation of the quark matrix elements is an extension of the previous work of Ref. [35], which obtained the quark PDFs within the quasi-distribution method using momenta P = 0.42, 0.83, 1.25 GeV. Here, we add P = 0 and 1.67 GeV, so that we can form the reduced ITDs for the quarks, $\mathcal{M}_q$, needed in Eq. (19). As in the gluon case, we implement the momentum smearing method and five stout smearing steps on the gauge links of the Wilson line entering the operator. To reduce the stochastic noise coming from the low modes [85] in the calculation of the quark loops, we compute the first $N_{\rm ev} = 200$ eigenpairs of the squared twisted-mass Dirac operator. The low-mode contribution to the all-to-all propagator can then be reconstructed exactly, and the high modes can be evaluated with stochastic techniques, such as hierarchical probing [86]. The latter reduces the contamination from off-diagonal terms in the evaluation of the trace in Eq. (11), up to a distance $2^k$, using Hadamard vectors as the basis for partitioning the lattice. Here, we use k = 3 in four dimensions, leading to 512 Hadamard vectors. In addition to hierarchical probing, we make use of the one-end trick [87,88] and fully dilute the spin and color subspaces. More information can be found in Ref. [35], as well as in Refs. [83,89-91].
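To illustrate the probing idea only, a toy stochastic-trace estimator with Hadamard vectors is sketched below; it is not the hierarchical-probing/one-end implementation used in the production runs, and all names are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def probed_trace(apply_dinv, n, k=3):
    """Toy probing estimate of Tr[D^{-1}] with 2**k Hadamard vectors.

    apply_dinv: callable returning D^{-1} v for a vector v (a solver call in
    practice). The +/-1 Hadamard pattern is tiled over the n sites, so
    off-diagonal contamination survives only at site separations that are
    multiples of the pattern period.
    """
    nvec = 2 ** k
    H = hadamard(nvec)                       # columns are +/-1 probing vectors
    est = 0.0
    for j in range(nvec):
        v = np.tile(H[:, j], n // nvec + 1)[:n].astype(float)
        est += v @ apply_dinv(v)
    return est / nvec
```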
On a large enough lattice, and given that the source positions are selected randomly, the autocorrelations become very small, and the data from multiple source positions on the same configuration can be considered statistically independent. To check for autocorrelations, we analyze different subsets of the data for the two-point functions and extract the relative error on the energy, as shown in Fig. 1 for two representative values of the momentum boost, P = 0.83 GeV (p = 2) and P = 1.67 GeV (p = 4). We find that the statistical error of various quantities scales with the inverse square root of the number of source positions, indicating uncorrelated data.
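Such a scaling check can be scripted along the following lines; the bootstrap resampling, the effective-energy estimator, and the array layout are assumptions made for illustration.

```python
import numpy as np

def error_vs_nsrc(c2pt, nsrc_list, t=5, nboot=200, seed=7):
    """Relative bootstrap error of the effective energy vs. number of sources.

    c2pt: array [ncfg, nsrc, nt] of two-point functions per source position
    (hypothetical layout). For each subset size m, sources are averaged per
    configuration and the error of aE_eff(t) = log(C(t)/C(t+1)) is estimated.
    """
    rng = np.random.default_rng(seed)
    errs = []
    for m in nsrc_list:
        sub = c2pt[:, :m, :].mean(axis=1)           # average m sources per config
        boots = []
        for _ in range(nboot):
            idx = rng.integers(0, sub.shape[0], sub.shape[0])
            c = sub[idx].mean(axis=0)
            boots.append(np.log(c[t] / c[t + 1]))   # effective energy at time t
        boots = np.asarray(boots)
        errs.append(boots.std() / abs(boots.mean()))
    return np.asarray(errs)  # uncorrelated data: errs ~ errs[0] * sqrt(nsrc_list[0] / nsrc_list)
```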
A. Gluon matrix elements and reduced ITDs
Before presenting the final bare matrix elements, it is useful to examine the effect of the stout smearing on the signal quality. Stout smearing has been used extensively in calculations of non-local operators [12,16,17,20,27-29,35,36], demonstrating its noise reduction. Also, in Ref. [20], we demonstrated the independence of the renormalized matrix elements from the level of smearing. However, these statements concern quark bilinear operators, so similar tests are imperative for gluonic operators. As mentioned in the previous section, we construct the gluon matrix elements for 25 combinations of stout steps in the gluon field strength tensor and the Wilson line, that is, $N^F_{\rm stout}, N^W_{\rm stout} \in [0, 20]$ in steps of 5. The bare matrix elements are shown in Fig. 2 for a subset of these combinations, namely $\{N^F_{\rm stout}, N^W_{\rm stout}\} = \{0, 10, 20\}$. All presented matrix elements have been evaluated at $t_s = 9a$, which, as we demonstrate below, is the value used in the final analysis. It is interesting to observe that the smearing of the field strength tensor has a bigger impact on the signal than the smearing of the Wilson line. For instance, the signal already improves significantly with $N^F_{\rm stout} = 10$ and $N^W_{\rm stout} = 0$. Comparing the effect of the stout smearing directly at each momentum offers another qualitative measure of the signal improvement. In Fig. 3 we show selected combinations, presented as $N_{\rm stout} = (N^F_{\rm stout}, N^W_{\rm stout})$. As previously discussed, the stout smearing applied to the gauge links of the field strength tensor is crucial for obtaining a signal. At all values of P, further signal improvement is found as $N^F_{\rm stout}$ and $N^W_{\rm stout}$ increase. We observe a saturation at $(N^F_{\rm stout}, N^W_{\rm stout}) = (20, 10)$, which we use for the remainder of this analysis. In Fig. 6, we will examine the effect of the smearing on the pseudo-ITDs. In this work, we also examine excited-state effects using the preferred stout-smearing setup, $N^F_{\rm stout} = 20$, $N^W_{\rm stout} = 10$. Fig. 4 shows the matrix elements at four values of the source-sink time separation, $t_s = 8a, 9a, 10a, 11a$. For P = 0, 0.42, and 0.83 GeV, there is an indication of excited-state effects at $t_s = 8a$, which differs from $t_s = 10a$ and $t_s = 11a$. The effect is visible mainly due to the high statistical accuracy of the data. For higher momenta, all matrix elements are compatible within uncertainties, which are enhanced compared to the lower momenta. Therefore, $t_s = 9a$ is favorable, as it is consistent with $t_s = 10a$ and $t_s = 11a$ in all cases, while a good signal is maintained. Below, we also consider excited-state effects in the reduced ITDs (see, e.g., Fig. 6).
To summarize the presentation of the bare matrix elements, we compare in Fig. 5 the data for all values of the momentum boost using $t_s = 9a$, $N^F_{\rm stout} = 20$, and $N^W_{\rm stout} = 10$. The P dependence of the data is as observed in the quark case, that is, the signal quality decreases with increasing momentum. We find that the relative error at z = 0 is about 6% for P = 0, while for P = 1.67 GeV it grows to about 9% despite the same statistics. In all cases, we find that the matrix elements decay to zero at about z = 8a.
The matrix elements of Fig. 5 are the core of our calculation and are used to construct the double ratio of Eq. (9). We note that systematic uncertainties might affect the matrix elements and pseudo-ITDs differently, due to possible correlations between the quantities entering the numerator and/or denominator. Thus, it is important to investigate systematic effects, such as excited states and stout smearing, directly in the ratio of Eq. (9). Since the pseudo-ITD, i.e., the double ratio, serves as a renormalization prescription, it should be independent of the number of smearing steps. We examine the validity of this argument, and a summary is shown in Fig. 6. Due to the large uncertainties of certain combinations of smearing steps, their inclusion in the plot is not meaningful, as their errors would cover the whole range of the plot. As can be seen, all combinations of $N^F_{\rm stout}$ and $N^W_{\rm stout}$ are in full agreement within errors, demonstrating that the pseudo-ITDs can be extracted from any of these combinations. We also study excited-state effects in $\mathcal{M}_g$, as shown in Fig. 6, for four values of the source-sink time separation, $t_s = 8a, 9a, 10a, 11a$. The increase of the statistical error is sizeable between $t_s = 9a$ and $t_s = 11a$, and the signal is lost at $t_s = 12a$ (not shown). Overall, we find that both $t_s = 8a$ and $t_s = 9a$ are good options for these data. Nevertheless, we choose $t_s = 9a$ for a more conservative estimate. For completeness, we show in Fig. 7 the double ratio for all values of P at $t_s = 9a$, $N^F_{\rm stout} = 20$, and $N^W_{\rm stout} = 10$. We note that each value of ν is constructed from all possible combinations of the available z and P, but we constrain z to at most $6a \sim 0.56$ fm. This leads to $\nu_{\rm max} \sim 5$. Comparing all combinations of P and z at a given ν allows one to comment on the P dependence. We find that the dependence on P is within the statistical errors for z up to 6a. Thus, $\mathcal{M}_g$ can be described by a smooth function of the Ioffe time, which allows for a controlled interpolation. The latter is needed for the scale evolution and matching procedure, as discussed below.

FIG. 6. Left: Smearing dependence of the reduced ITDs; the combinations including (15,15), (20,10), and (20,20) are shown with blue circles, green squares, red up triangles, black down triangles, and orange left-pointing triangles, respectively. Right: Excited states in the reduced ITDs at $N^F_{\rm stout} = 20$ and $N^W_{\rm stout} = 10$; $t_s = 8a, 9a, 10a, 11a$ are shown with blue circles, green squares, red up triangles, and orange down triangles, respectively.
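The bookkeeping of combining different (P, z) pairs into a function of ν alone can be sketched as follows; the container layout and the lattice extent are illustrative assumptions.

```python
import numpy as np

def itd_vs_nu(M, p_list, z_list, L=32):
    """Group reduced-ITD values by Ioffe time nu = z.P and average duplicates.

    M[(p, z)]: double-ratio value at momentum index p and separation z/a
    (hypothetical container). nu is computed in lattice units with P = 2*pi*p/L.
    """
    buckets = {}
    for p in p_list:
        for z in z_list:
            nu = round(2 * np.pi * p * z / L, 6)
            buckets.setdefault(nu, []).append(M[(p, z)])
    nus = sorted(buckets)
    avg = [np.mean(buckets[nu]) for nu in nus]
    return np.array(nus), np.array(avg)
```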
The reduced ITDs are interpolated in ν at each value of $z^2$, so that one obtains a continuous function of ν at each z, as needed for the matching procedure. Having five values of ν at fixed $z^2$, one can test different parametrizations of the ν dependence. Here, we test a linear and a second-order polynomial fit, shown in Fig. 8 for selected values of z. We find that the two fits are compatible and choose the polynomial fit to proceed.

FIG. 8. Lattice data of the reduced ITDs for z = 1a - 6a (blue points) and their interpolation at fixed $z^2$ using a first-order (green bands) and a second-order polynomial fit (red bands).
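In code, this interpolation step amounts to little more than the following; constraining the fit at ν = 0 is a design choice left out of this sketch.

```python
import numpy as np

def interpolate_itd(nu_pts, m_pts, order=2):
    """Polynomial fit of M(nu) at fixed z^2 (order=1 or 2, as tested in the text).

    Returns a callable interpolant; with the double-ratio normalization one
    could additionally constrain the constant term to 1 at nu = 0.
    """
    return np.poly1d(np.polyfit(nu_pts, m_pts, deg=order))

# e.g. M_of_nu = interpolate_itd(nu_pts, m_pts); M_of_nu(1.3)
```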
B. Reconstruction of the gluon PDF
In general, the extraction of light-cone ITDs from reduced ITDs involves combining the effects of three kernels, $B_{gg}$, $L$, and $B_{gq}$, as given in Eq. (12). That is, one must apply the evolution to a renormalization scale of choice (µ), convert the data to light-cone ITDs in the $\overline{\rm MS}$ scheme, and eliminate the mixing with the quark-singlet PDF.
Here, we chose 2 GeV for the renormalization scale, as commonly used in global analyses. In all previous calculations of the gluon PDF, the mixing with the quark singlet has been ignored due to the lack of lattice results for the latter, as it requires information from disconnected contributions, which are computationally very expensive. Here, we extend the calculation of Ref. [35] to include all values of P implemented in this work, which allows us to eliminate the mixing by including the $B_{gq}$ term. To demonstrate the effect of the mixing, we first apply $B_{gg}$ and L, but ignore $B_{gq}$. The resulting evolved and matched ITDs are shown in Fig. 9. We find that the scale evolution increases the values of the evolved ITDs ($\mathcal{M}'$) relative to those of the reduced ITDs ($\mathcal{M}$), while the matching has the opposite effect and brings the light-cone ITDs (Q) closer to the reduced ITDs, making them consistent with the latter within error bars. Such behavior is also observed in the case of quark PDFs (see, e.g., Refs. [50,55]). We note that the dependence on the individual P and z is minimal for all three functions, $\mathcal{M}$, $\mathcal{M}'$, and Q, as the values from different (P, z) pairs fall on a universal curve. In the right panel of Fig. 9, we show the matched ITDs, $Q(\nu, \mu^2)$, where we average over the (P, z) pairs for a given value of the Ioffe time.
To extract the x-dependence of the gluon PDF, we use the fitting reconstruction and follow the procedure discussed in Sec. II A. As can be seen in Eq. (21), one cannot isolate the gluon PDF directly, because it appears normalized by the gluon momentum fraction. The latter has not been extracted on the ensemble under study, so we use, instead, the lattice results of Ref. [83]. That calculation used an ensemble with the same gluon and fermion action as this work, but different lattice parameters: the lattice spacing is 0.08 fm, and the pion mass is 139 MeV. The reported value for the gluon momentum fraction is $\langle x \rangle^{\overline{\rm MS},\,2\,{\rm GeV}}_g = 0.427(92)$, which we use below. Another input of the reconstruction procedure is the value of $z_{\rm max}$. We tested $z_{\rm max} = 5a, 6a, 7a$, corresponding to $z_{\rm max} = 0.47, 0.56, 0.66$ fm, respectively. In this subsection, we show results for $z_{\rm max} = 6a$, and we demonstrate the independence of the results on $z_{\rm max}$ for the case where the mixing with the quark singlet is eliminated (see the next subsection). The conclusions of that test fully pertain also to the case with the mixing neglected.
Using the above value of $\langle x \rangle^{\overline{\rm MS},\,2\,{\rm GeV}}_g$ and $z_{\rm max} = 6a$, we obtain the gluon PDF shown in the left panel of Fig. 10. The corresponding fitted ITDs are shown in the right panel of Fig. 9. We remind the reader that we have not yet considered the mixing with the quark-singlet PDF; this is addressed in the next subsection. In the right panel of Fig. 10, we compare our final results to the lattice results of HadStruc [66], in which the gluon-quark singlet mixing has not been considered. HadStruc used an ensemble of $N_f = 2+1$ clover Wilson fermions with stout-link smearing and the Symanzik-improved gauge action. The ensemble has the same volume and lattice spacing as this work; however, their pion is heavier, namely $m_\pi = 358$ MeV. Their source-sink time separation is also 9a, the same as the value used here. In general, our results are consistent with those of HadStruc. It is worth noting that the reconstruction performed by HadStruc includes values of the Ioffe time up to $\nu_{\rm max} = 7.07$, while our reconstruction extends to a maximum Ioffe time of $\nu_{\rm max} = 4.71$ ($z_{\rm max} = 6a$). The smaller statistical error of HadStruc may be attributed to two factors: (a) the use of the distillation method [92]; (b) the higher pion mass compared to this work. In Fig. 10, we also compare the lattice data to the global analysis of JAM20 [70]. As can be seen, all results are in full agreement within errors. We note that all comparisons are qualitative, as the lattice results are obtained on single ensembles with different lattice formulations. Nevertheless, the agreement between lattice results and global analyses is very promising.

C. Elimination of mixing with quark-singlet PDF

In this section, we provide, for the first time, the quark-singlet PDF using the pseudo-distribution method. This is a continuation of the work of Ref. [35], which used a subset of the data of Table II to obtain the quark PDFs within the quasi-distribution method. The quark-singlet PDF is used to eliminate the mixing in the gluon contribution using the matching formalism of Ref. [93]. In principle, with our data for the quark and gluon PDFs, we could also obtain the quark-singlet PDF without mixing. However, Refs. [74,75] only provide the components of the mixing kernel that are relevant to the gluon PDF, that is, $B_{gg}$ and $B_{gq}$. While the complete 2 × 2 matching kernel is presented in Ref. [93], it corresponds to a different definition of the gluon operator than the one used in this work, so we are not able to apply it here.
First, let us present the bare quark matrix elements for the singlet combination u + d + s. The matrix elements contain all kinematic factors, so they can be compared directly at z = 0 for different values of P. As seen in Fig. 11, the data are consistent at z = 0. This is expected theoretically, because z = 0 is directly related to $\langle x \rangle$, which is independent of the kinematic frame. As z increases, we find that the behavior with increasing P is as expected: the real part of the matrix element falls faster, while the imaginary part is enhanced. The corresponding quark reduced ITDs are shown in Fig. 12. In the real part, we find agreement between the different P and z combinations corresponding to the same value of ν. Some difference is observed in the imaginary part for $\{p, z/a\} = \{1, 4\}$ as compared to $\{p, z/a\} = \{2, 2\}$ and $\{p, z/a\} = \{4, 1\}$ (where $P = \frac{2\pi}{L} p$). Similarly, $\{p, z/a\} = \{1, 6\}$ deviates from $\{p, z/a\} = \{2, 3\}$ and $\{p, z/a\} = \{3, 2\}$. However, the momenta with p > 1 are in agreement within errors. We use the above quark-singlet reduced ITDs to eliminate the mixing in the light-cone gluon ITDs. In particular, only $M_S$, the ν-derivative of the imaginary part of $\mathcal{M}_q$, enters the matching formalism, as explained in Sec. II A. For completeness, we show $M_S$ in Fig. 13. Finally, the resulting effect of the mixing is shown in Fig. 14, which compares the gluon ITDs before ($B_{gq} = 0$) and after ($B_{gq} \neq 0$) the elimination of the mixing with the quark singlet. The fitting bands from the x-dependence reconstruction procedure are also shown. The main finding is that the gluon ITDs move slightly towards lower values, with the mixing effect lying well within the statistical uncertainties. As hinted in the previous subsection, we also establish the robustness of the results against the choice of $z_{\rm max}$, using $z_{\rm max} = 5a, 6a, 7a$; see Fig. 15. We find a small difference between $z_{\rm max} = 5a$ and $z_{\rm max} = 6a$, but the effect is significantly smaller than the statistical uncertainties. The difference between $z_{\rm max} = 6a$ and $z_{\rm max} = 7a$ is almost negligible, and the two bands cannot be visually distinguished in the figure. That is, the addition of the z = 7a points does not influence the reconstruction, mainly due to their large statistical errors. Thus, our choice of $z_{\rm max} = 6a$ is validated at this level of data precision and, given the compatibility of results for different $z_{\rm max}$, it is not necessary to assign a reconstruction-related systematic uncertainty to our results.
For completeness, we also present the effect of the mixing on the x-dependent gluon PDF, as seen in the right panel of Fig. 16. The conclusion is consistent with Fig. 14: the effect of the mixing is smaller than the statistical uncertainties. In the left panel of Fig. 16, we show our final results together with JAM20 [70], demonstrating full compatibility. As previously mentioned, the statistical uncertainties are currently larger than those from the global analysis.
IV. SUMMARY
The main component of this work is the calculation of the unpolarized gluon PDF of the proton using numerical simulations of QCD. The calculation is performed using an $N_f = 2+1+1$ ensemble of clover-improved twisted mass fermions with the quark masses tuned to give a pion mass of 260 MeV. The lattice spacing is 0.093 fm, and the volume is $32^3 \times 64$. We employ the pseudo-distribution approach, which significantly simplifies the renormalization procedure by forming ratios of matrix elements, leading to the reduced pseudo-Ioffe-time distributions expressed in terms of the combination $\nu = z \cdot P$. In our calculation, we use nucleon momentum boosts with values up to 1.67 GeV and, in the final results, restrict the length of the Wilson line to 0.56 fm. We find that the available combinations of P and z suffice to extract a continuous dependence on ν and to reconstruct the gluon PDF. We explore systematic effects such as excited-state effects, the effect of stout smearing, and the dependence on the maximum value of z entering the fits to obtain the ITD. For the evolution and conversion to the $\overline{\rm MS}$ scheme at a scale of 2 GeV, we use a one-loop formalism. We use the fitting reconstruction method to address the inverse problem and obtain the x-dependence of the gluon PDF. A novel aspect of the calculation is the elimination of the mixing with the quark-singlet unpolarized PDF, which we extract on the same ensemble. The effect of the mixing brings the gluon ITD to smaller values, but the effect is much smaller than the statistical uncertainties. However, once such lattice calculations reach the precision stage, the mixing will inevitably become a more significant effect. Our results are compared with other lattice data obtained using a different lattice formulation, methodology, and setup [66], and we find very good agreement; in this comparison, we ignore the quark-gluon mixing for a more appropriate comparison with Ref. [66]. Furthermore, a comparison of our final data with the global analysis of the JAM collaboration [70] reveals agreement, with the global analysis being much more accurate than the lattice data at this stage. The aforementioned comparison uses our data after the elimination of the quark-gluon mixing, as done in JAM20. An extension of this work is the investigation of other sources of systematic uncertainty, such as volume and discretization effects, as well as the pion mass dependence. In the near future, we will address the continuum limit by adding two ensembles with smaller lattice spacings.
FIG. 1. The relative error of the proton energy at momentum boost $P = \frac{2\pi}{L} p$ as a function of the number of source positions analyzed. As examples, we show p = 2 (blue circles) and p = 4 (red squares). The lines correspond to the $1/\sqrt{N_{\rm src}}$ scaling.
FIG. 4. Source-sink time separation dependence of the bare matrix elements at each momentum boost. We use $N^W_{\rm stout} = 10$ and $N^F_{\rm stout} = 20$ in all cases. The results for $t_s = 8a, 9a, 10a, 11a$ are shown with blue circles, green squares, red up triangles, and orange down triangles, respectively. The data at momentum boost $P = \frac{2\pi}{L} p$ with p = 0, 1, 2, 3, 4 are shown in the top, middle-left, middle-right, bottom-left, and bottom-right panels, respectively.
FIG. 5. Matrix elements of Eq. (1) as a function of the length of the Wilson line, z/a. The data at momentum boost $P = \frac{2\pi}{L} p$ with p = 0, 1, 2, 3, 4 are shown with blue squares, red circles, green downward-pointing triangles, yellow upward-pointing triangles, and magenta rightward-pointing triangles, respectively.
FIG. 7. Final results for the reduced ITD at $t_s = 9a$, $N^F_{\rm stout} = 20$, and $N^W_{\rm stout} = 10$. We show data for all values of $P = \frac{2\pi}{L} p$ and z up to $6a \sim 0.56$ fm. Data for p = 1, 2, 3, 4 are shown with blue circles, red down triangles, green up triangles, and magenta right triangles, respectively.
FIG. 10. Left: The reconstructed gluon PDF without mixing elimination. Right: A comparison of our results from the left panel (red), the lattice results of HadStruc [66] (green), and the global analysis of JAM20 [70] (blue). Results are shown in the $\overline{\rm MS}$ scheme at a scale of 2 GeV.
FIG. 11. Bare matrix elements for the quark-singlet case as a function of the length of the Wilson line, z/a. The data at momentum boost $P = \frac{2\pi}{L} p$ with p = 0, 1, 2, 3, 4 are shown with blue squares, green circles, red downward-pointing triangles, yellow upward-pointing triangles, and magenta rightward-pointing triangles, respectively.
FIG. 14. Comparison of the light-cone ITD before ($B_{gq} = 0$, shown in red) and after ($B_{gq} \neq 0$, shown in green) the elimination of the mixing with the quark-singlet case. The bands correspond to the fits of the lattice data.
TABLE I. Parameters of the ensemble used in this work.
TABLE II. Total statistics of the calculation for each value of P. $N_{\rm confs}$ is the number of configurations, $N_{\rm src}$ the number of source positions, $N_{\rm dir}$ the number of spatial directions for the Wilson line and P, and $N_{\rm meas}$ the number of total measurements ($N_{\rm meas} = N_{\rm confs} \times N_{\rm src} \times N_{\rm dir}$). | 9,608.2 | 2023-10-02T00:00:00.000 | [
"Physics"
] |
On the Unitary Representations of the Braid Group B6
We consider a non-abelian leakage-free qudit system consisting of two qubits, each composed of three anyons. For this system, we need a non-abelian four-dimensional unitary representation of the braid group B6 to obtain totally leakage-free braiding. The obtained representation is denoted by ρ. We first prove that ρ is irreducible. Next, we find the points y ∈ C* at which the representation ρ is equivalent to the tensor product of a one-dimensional representation χ(y) and μ̂6(±i), an irreducible four-dimensional representation of the braid group B6. The representation μ̂6(±i) was constructed by E. Formanek to classify the irreducible representations of the braid group Bn of low degree. Finally, we prove that the representation χ(y) ⊗ μ̂6(±i) is unitary relative to a Hermitian positive definite matrix.
Introduction
By a result of Artin, the braid group Bn is represented in the group Aut(Fn) of automorphisms of the free group Fn generated by x1, . . . , xn. A matrix representation of Bn was published by W. Burau in 1936 and became known as the Burau representation. Since then, other matrix representations of Bn have been constructed. For more details, see [1].
Unitary representations of braid groups have been essential in topological quantum computation. Much work has been done to understand the d-dimensional systems in which anyons are exchanged. The exchange of n anyons inside a qudit system, the d-dimensional analogue of a qubit system, is governed by the braid group Bn, which has n − 1 generators τ1, . . . , τn−1. Here, τi exchanges particle i with its neighbor, particle i + 1.
When the topological charge of the qudits changes due to the braiding of anyons from different qudits, some of the information leaks out of the computational Hilbert space, the fusion space of the anyons.
The leakage-free braiding of anyons has been under investigation for a while. To perform universal quantum computation without any leakage, one would have to restrict attention to two-qubit gates. This is very restrictive, and this property can only be realized for two-qubit systems related to the Ising-like anyon model [2].
R. Ainsworth and J.K. Slingerland showed that a non-abelian, leakage-free qudit system of dimension d involving n anyons is equivalent to a non-abelian d-dimensional representation of the braid group Bn. Here, n is the sum of the number of anyons n1 inside the first qudit and the number of anyons n2 inside the second qudit. As for the dimension d of the representation of Bn, it is the product of the dimensions d1 and d2 of the Hilbert spaces of the individual qudits.
Moreover, it was proved in [2] that the number of anyons per qubit is either 3 or 4. Thus, there are essentially three different types of two-qubit systems, and a 4-dimensional representation of the corresponding braid group is constructed for each. Taking into account E. Formanek's result that there is no d-dimensional representation of Bn with d < n − 2, it was verified in [2] that the only possible type of two-qubit system consists of two qubits, each composed of three anyons.
This system is a non-abelian leakage-free qudit system of dimension 4 involving 6 anyons. It is equivalent to a non-abelian 4-dimensional representation of the braid group B6, which is denoted by ρ. Since the number of anyons is 6, there are 5 elementary exchanges τ1, . . . , τ5. The exchanges τ1, τ2, τ4, and τ5 act within a single qubit and, in the form given in [2], satisfy ρ(τi) = ρ1(τi) ⊗ I_{d2} for i = 1, 2 and ρ(τi) = I_{d1} ⊗ ρ2(τ_{i−3}) for i = 4, 5, where ρ1 and ρ2 are the d1- and d2-dimensional representations of B_{n1} and B_{n2} on the Hilbert spaces of the first and second qudits, respectively, and I_{d1} and I_{d2} are the d1- and d2-dimensional identity matrices. Here, n1 = n2 = 3 and d1 = d2 = 2.
The matrix ρ(τ3) is constructed by imposing the braid group relations. For more details, see [2]. In our work, we consider the unitary representation ρ and the irreducible representation μ̂6(±i) defined by E. Formanek in [3]. Both are 4-dimensional representations of the braid group B6.
First, we prove that the unitary representation ρ : B6 → GL4(C) is irreducible.
As the representation ρ is proved to be irreducible, it follows that it is equivalent to the tensor product of a one-dimensional representation χ(y) and the irreducible 4-dimensional representation μ̂6(±i), where y ∈ C*. For more details, see [3].
We then determine the points y ∈ C* at which the two representations ρ and χ(y) ⊗ μ̂6(±i) are equivalent.
Finally, we show that the representation χ(y) ⊗ μ̂6(±i) is unitary relative to a Hermitian positive definite matrix.
Preliminaries
Definition 1 (See [4]). The braid group on n strings, Bn, is the abstract group with presentation
$$B_n = \langle\, \sigma_1, \ldots, \sigma_{n-1} \;|\; \sigma_i\sigma_{i+1}\sigma_i = \sigma_{i+1}\sigma_i\sigma_{i+1} \text{ for } 1 \le i \le n-2,\;\; \sigma_i\sigma_j = \sigma_j\sigma_i \text{ for } |i-j| \ge 2 \,\rangle.$$
The Hecke algebra representation of B6 was constructed by V.F.R. Jones in [5]. E. Formanek obtained a low-degree representation of B6 by conjugating the representation constructed by V.F.R. Jones by a certain permutation matrix. For more details, see [3].
Here, βn(z) is the reduced Burau representation, and β̂n(z) is the composition factor of the reduced Burau representation.
Irreducibility of ρ
The construction of a two-qubit system with a minimal amount of leakage has been of great interest. The only two-qubit system that can be realized without leakage is the system of two 3-anyon qubits. This system is equivalent to a 4-dimensional representation of the braid group B6. This representation, which was constructed in [2], is denoted by ρ.
In this section, we prove that ρ : B6 → GL4(C) is irreducible. We denote τi, the exchange of the i-th and (i + 1)-th anyons, by σi, where 1 ≤ i ≤ 5.
That is, a must be a primitive eighth root of unity. Furthermore, note that since a is a primitive eighth root of unity, a⁸ = 1 and a² ≠ 1. Then a³ ≠ a. Consequently, a³ ≠ ā, f ≠ f₃, and f̄ ≠ f̄₃. This confirms that the defined matrices ρ(σi), 1 ≤ i ≤ 5, are well-defined.
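As a quick numerical sanity check of these relations (with a specific, assumed choice of the primitive eighth root):

```python
import numpy as np

# The choice a = exp(i*pi/4) is illustrative; any primitive eighth
# root of unity satisfies the same constraints.
a = np.exp(1j * np.pi / 4)
assert np.isclose(a**8, 1)                # a^8 = 1
assert not np.isclose(a**2, 1)            # a^2 != 1
assert not np.isclose(a**3, a)            # a^3 != a
assert not np.isclose(a**3, np.conj(a))   # a^3 != a-bar
```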
Now we study the irreducibility of ρ. For simplicity, we denote ρ(σi) by σi for 1 ≤ i ≤ 5. Suppose first that S is a one-dimensional invariant subspace spanned by a basis vector ei; for simplicity, take i = 1. Since S is invariant, σ2(e1) must lie in S, and a direct computation shows that this forces c = 0, a contradiction. Suppose instead that S is spanned by a vector of the form e1 + u e2; again take i = 1. Since S is invariant, σ2(e1 + u e2) must lie in S, which again forces c = 0, a contradiction. Thus, there are no nontrivial proper invariant subspaces of dimension 1.

We argue similarly for the possible forms of a two-dimensional invariant subspace S, case by case. If S contains e1 (take i = 1), then σ2(e1) ∈ S forces c = 0, a contradiction. If instead we apply σ4, then σ4(e1) ∈ S forces e = 0; but e = c, so c = 0, a contradiction. If S contains e3 (take i = 3), then σ4(e3) ∈ S forces e = 0; but e = c, so c = 0, a contradiction. The remaining cases are handled in the same way: in each, σ4(e1) ∈ S forces e = 0 and hence c = 0, a contradiction.

Thus, there are no nontrivial proper invariant subspaces of dimension 2. Now we state the theorem of irreducibility. Clearly, the representation ρ is unitary, that is, σiσi* = I4 for 1 ≤ i ≤ 5. We note that if the representation is unitary, then the orthogonal complement of a proper invariant subspace is again a proper invariant subspace. As there is no proper invariant subspace of dimension 1, there is no proper invariant subspace of dimension 3.
As a result, no nontrivial proper subspace is invariant. Consequently, ρ is irreducible.
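For completeness, the standard argument behind the dimension-3 step: if S is invariant and every ρ(σi) is unitary, then for w ∈ S⊥ and s ∈ S,
$$\langle \rho(\sigma_i)\,w,\; s\rangle = \langle w,\; \rho(\sigma_i)^{*}\,s\rangle = \langle w,\; \rho(\sigma_i)^{-1}\,s\rangle = 0,$$
since $\rho(\sigma_i)^{-1}s \in S$. Hence S⊥ is invariant, and a 3-dimensional invariant subspace would yield an invariant subspace of dimension 1, which was excluded above.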
The Representations ρ and χ(y) ⊗ μ̂6(±i) Are Equivalent

By Theorem 3, the representation ρ is irreducible. The eigenvalues of ρ(σi) for 1 ≤ i ≤ 5 are different from those of β̂4(z), the composition factor of the reduced Burau representation. Therefore, the representation ρ is not equivalent to the tensor product of a one-dimensional representation χ(y) and β̂4(z); that is, ρ is not of Burau type.
Moreover, ρ is a 4-dimensional representation. Consequently, Theorem 2 implies that the representation ρ is equivalent to the representation χ(y) ⊗ μ̂6(±i) for some y ∈ C*. Note that, by Theorem 1, the representation μ̂6(z) is irreducible for z = ±i, since the roots of the polynomial t² + 1 are clearly ±i.
In this section, we determine the points y ∈ C* at which the representations ρ and χ(y) ⊗ μ̂6(±i) are equivalent.
As a result, the two considered representations are equivalent at an explicit finite set of points y ∈ C*, expressed in terms of the complex number i satisfying i² = −1.
In this section, we find the matrix M and prove that M is Hermitian and positive definite.
Let M* be the complex conjugate transpose of M. Clearly, M* = M. This implies that the invertible matrix M is Hermitian.
By direct computation, the eigenvalues of the matrix M are 2 + √2 and 2 − √2. Clearly, both values are positive. Consequently, M is a positive definite matrix.
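A generic numerical check of these two properties, applicable to any candidate matrix M (the explicit M of this section is not reproduced here), could read:

```python
import numpy as np

def is_hermitian_positive_definite(M, tol=1e-12):
    """Check Hermiticity (M* = M) and positive definiteness via the spectrum."""
    hermitian = np.allclose(M, M.conj().T, atol=tol)
    return hermitian and np.all(np.linalg.eigvalsh(M) > tol)

# For the M of this section one expects eigenvalues 2 + sqrt(2) and 2 - sqrt(2).
```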
As a result, the representation μ̂6(±i) is unitary relative to the invertible Hermitian positive definite matrix M.
Note that the unitarity of the representation μ̂6(±i) relative to the matrix M clearly implies that the representation χ(y) ⊗ μ̂6(±i) is also unitary relative to the same matrix M. | 2,434.6 | 2019-11-09T00:00:00.000 | [
"Mathematics"
] |
Coupling traction force patterns and actomyosin wave dynamics reveals mechanics of cell motion
Abstract Motile cells can use and switch between different modes of migration. Here, we use traction force microscopy and fluorescent labeling of actin and myosin to quantify and correlate traction force patterns and cytoskeletal distributions in Dictyostelium discoideum cells that move and switch between keratocyte‐like fan‐shaped, oscillatory, and amoeboid modes. We find that the wave dynamics of the cytoskeletal components critically determine the traction force pattern, cell morphology, and migration mode. Furthermore, we find that fan‐shaped cells can exhibit two different propulsion mechanisms, each with a distinct traction force pattern. Finally, the traction force patterns can be recapitulated using a computational model, which uses the experimentally determined spatiotemporal distributions of actin and myosin forces and a viscous cytoskeletal network. Our results suggest that cell motion can be generated by friction between the flow of this network and the substrate.
Introduction
Eukaryotic cells can move using different modes of migration. For example, amoeboid cells move through the extension of randomly placed actin-filled pseudopods, fish keratocytes move with a near-constant morphology in a persistent fashion, neuronal cells use filopodia for migration, and some cells display oscillatory motion during which the basal surface undergoes periodic variations (Webb & Horwitz, 2003; Chan & Odde, 2008; Charras & Paluch, 2008; Keren et al, 2008; Bosgraaf & Van Haastert, 2009; Chan et al, 2013). These different modes and morphologies are often used to characterize cell types. However, cells of the same type can exhibit multiple modes and can easily switch between them. The ability of cells to change their migration mode, depending on external or internal cues, has been implicated in diseases, including cancer metastasis (Yilmaz & Christofori, 2010; Friedl & Alexander, 2011; Kim et al, 2021).
The different modes of migration are correlated with waves of signal transduction and cytoskeletal components that propagate along the cell cortex and are responsible for contraction and protrusion (Weiner et al, 2007; Case & Waterman, 2011; Allard & Mogilner, 2013; Inagaki & Katsuno, 2017). The waves originate from the excitable dynamics of the signaling network and can be triggered spontaneously or by a sufficiently large stimulus. The resulting wave can then continue to propagate outward and away from the initiation site or can fail to propagate further, resulting in a spatially restricted excitation and protrusion (Miao et al, 2017). In addition, the excitable system can produce oscillatory initiation of symmetric waves, leading to periodic flattening. Furthermore, oscillatory signaling dynamics can result in polarized waves that push the membrane on one side of the cell forward with a constant speed (Cao et al, 2019a, 2019b). The dramatically different migration modes displayed by the same cell type can be traced to slight shifts in the strength of feedback loops within the underlying signaling system, which controls the cell protrusions and contractions.
The distinct migration modes have in common that the various protrusions and contractions can only generate motion through the exertion of forces onto the extracellular environment. These forces can be measured using traction force microscopy (TFM), which enables real-time, spatially resolved measurements of the forces exerted onto the substrate (Plotnikov et al, 2014; Style et al, 2014; Roca-Cusachs et al, 2017). Earlier studies revealed that traction force maps differ significantly between cells. Gliding fish keratocytes, for example, exert large traction forces at two foci at the posterior end, and these foci are persistent and nearly symmetric with respect to the longitudinal axis of the keratocyte (Fournier et al, 2010; Barnhart et al, 2015; Sonoda et al, 2016). In contrast, chemotactic Dictyostelium cells and neutrophils migrating in the amoeboid mode were shown to have two traction force poles, near the front and near the back (Del Alamo et al).
Results
Here, we determined how cell migration, signaling, and traction forces are coupled in different modes of migration by quantifying the traction force maps using thin, soft silicone gel substrates with tracer particles attached to the gel surfaces (Gutierrez et al, 2011; Han et al, 2015). We use cells of the social amoeba Dictyostelium discoideum, which display a variety of migration modes when starved under low cell density conditions or when synthetically altered to have decreased phosphatidylinositol-4,5-bisphosphate levels or increased Ras/Rap-related activities (Asano et al, 2004; Miao et al, 2017; Cao et al, 2019a). These modes consist of a keratocyte-like mode, an oscillatory mode, and an amoeboid mode (Fig 1A-C). Each of these modes has its own wave dynamics, which determines their morphology and migration properties (Miao et al, 2017; Cao et al, 2019b). The fan-shaped cells contain a broad and stable traveling wave of cytoskeletal components, including actin, which moves at a constant speed in a persistent direction. Oscillatory cells display an actin wave that originates at the basal surface of the cell and reaches the entire cell perimeter simultaneously. Finally, the pseudopods of amoeboid cells result from waves that expand narrowly and originate at random locations.
Stable traveling waves result in fan-shaped cells
We first determined how the key cytoskeletal components actin and myosin were distributed near the substrate in fan-shaped cells (Fig 2). As expected, the cytoskeletal distributions were stationary in the cell's frame of reference (Fig 2A, Movies EV1 and EV2). Surprisingly, however, we observed two qualitatively different patterns, which we will call type 1 and type 2. For type 1 cells, the distribution of freshly polymerized filamentous actin (F-actin), measured using LimE-GFP (Materials and Methods), formed a ring positioned at the membrane at the front of the cell and slightly ahead of the back of the cell (Fig 2A, upper left panel). This ring propagates as a wave with constant shape and speed, resulting in a near-constant cell morphology (Fig 1A). The distribution of the contractile protein myosin II, visualized using GFP-myo, showed an elevated band parallel to the back of the cell (Fig 2A, upper right panel), consistent with earlier results (Asano et al, 2004). Double labeling with GFP-myo and LimE-RFP showed that this myosin band was positioned between the rear membrane and the actin ring and that the location where the LimE-GFP ring detached from the membrane coincided with the two ends of the myosin region (Appendix Fig S1A). Furthermore, labeling cells with lifeAct-GFP, a marker for all the F-actin in the cell (Riedl et al, 2008), revealed that F-actin is also present at the rear membrane of the cell (Appendix Fig S1B).
To further quantify the distributions of the cytoskeletal components, we computed kymographs, which represent the fluorescent intensity along the membrane as a function of time. Consistent with our experimental observations, the kymograph of the LimE-GFP distribution along the membrane showed elevated fluorescence levels everywhere except at the posterior edge of the cell (Fig 2A, lower left panel), while the kymograph of the fluorescent intensity of GFP-myo (Fig 2A, lower right panel) showed a region of high fluorescence that corresponds to the back of the cell.
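As an illustration of how such kymographs can be assembled, a minimal sketch is given below; the contour extraction, resampling density, and names are assumptions rather than the pipeline used in this study.

```python
import numpy as np

def kymograph(frames, contours, n_samples=200):
    """Membrane kymograph: fluorescence sampled along the cell outline over time.

    frames: list of 2D intensity images; contours: matching list of (N_i, 2)
    outline coordinates (row, col), e.g. from skimage.measure.find_contours.
    Each outline is resampled to n_samples points by arc length.
    """
    kymo = np.zeros((len(frames), n_samples))
    for t, (img, c) in enumerate(zip(frames, contours)):
        arc = np.r_[0, np.cumsum(np.hypot(*np.diff(c, axis=0).T))]
        s = np.linspace(0, arc[-1], n_samples)
        rows = np.clip(np.interp(s, arc, c[:, 0]).round().astype(int), 0, img.shape[0] - 1)
        cols = np.clip(np.interp(s, arc, c[:, 1]).round().astype(int), 0, img.shape[1] - 1)
        kymo[t] = img[rows, cols]
    return kymo  # rows: time points, columns: position along the membrane
```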
We next computed the traction force maps of these type 1 fan-shaped cells from the bead displacement map (Appendix Fig S2A; see Materials and Methods). The resulting stress map revealed that the stress was largest at the posterior corners (Fig 2B, Movie EV3, and Fig EV1A). Interestingly, however, the forces in the front half of these cells were in the forward direction, indicating that the force exerted onto the substrate is directed forward. In other words, the cell-substrate forces in the front half of the cell point in the direction of motion. Furthermore, as can also be seen in the more detailed map in Fig EV1A, the traction force map displayed two counter-rotating vortices, located in the left and right parts of the cell.
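For readers unfamiliar with this reconstruction step, a minimal regularized Fourier-transform traction cytometry (FTTC) sketch in the spirit of Butler et al (2002) is shown below. It assumes a semi-infinite, homogeneous substrate and a square displacement grid, which is only an approximation for the thin gels used here; all names and the regularization value are illustrative.

```python
import numpy as np

def fttc(ux, uy, E, nu, pixel, lam=1e-9):
    """Traction stresses (Tx, Ty) from displacement fields (ux, uy).

    ux, uy: n-by-n gel-surface displacements (meters); E, nu: Young's modulus
    and Poisson ratio of the substrate; pixel: grid spacing (meters);
    lam: Tikhonov regularization parameter.
    """
    n = ux.shape[0]
    kx = np.fft.fftfreq(n, d=pixel) * 2 * np.pi
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    k = np.hypot(KX, KY)
    k[0, 0] = 1.0                                  # placeholder; zero mode dropped below
    Ux, Uy = np.fft.fft2(ux), np.fft.fft2(uy)
    c = 2 * (1 + nu) / (E * k)                     # Boussinesq Green's function prefactor
    Gxx = c * (1 - nu + nu * KY**2 / k**2)
    Gyy = c * (1 - nu + nu * KX**2 / k**2)
    Gxy = -c * nu * KX * KY / k**2
    Tx, Ty = np.empty_like(Ux), np.empty_like(Uy)
    for i in range(n):                             # invert the 2x2 system per Fourier mode
        for j in range(n):
            G = np.array([[Gxx[i, j], Gxy[i, j]], [Gxy[i, j], Gyy[i, j]]])
            A = G.T @ G + lam * np.eye(2)          # Tikhonov-regularized normal equations
            Tx[i, j], Ty[i, j] = np.linalg.solve(A, G.T @ [Ux[i, j], Uy[i, j]])
    Tx[0, 0] = Ty[0, 0] = 0.0                      # enforce zero net traction
    return np.real(np.fft.ifft2(Tx)), np.real(np.fft.ifft2(Ty))
```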
The kymograph of the stress in the direction of motion, T_x, also clearly showed the forward-oriented forces at the anterior edge of the cell: T_x was positive at the middle and front of the cell and changed sign at the sides and posterior corners of the cell (Fig 2B, left-middle panel). The y-component of the stress, T_y, was largest in the two posterior corners and was directed toward the midline of the cell (Fig 2B, right-middle panel). We have verified that this traction force map remains qualitatively unaltered when using a different reconstruction method (Appendix Fig S2B) (Butler et al, 2002). Furthermore, we found that the location in the posterior corner where T_x changed sign corresponded to the location of maximum stress, as indicated by the black dots in the kymographs. This maximum stress occurred at locations of maximum gradient intensity of the fluorescent signal and remained approximately at the same location relative to the cell (Fig 2B). Therefore, both the area and the total force, calculated by integrating the absolute stress over the cell's basal plane, remained roughly constant during the movement of the cell (Appendix Fig S1C-E). The change in the direction of forces can also be seen when integrating the stress T_x in the direction of motion and plotting it as a function of y (Appendix Fig S3A). Finally, we computed the cell speed as a function of the cell area and total force, and the pressure (force per area) as a function of the cell area. Both quantities were found to be largely independent of the cell area (Appendix Figs S4A and B, and S5A).
The actin distribution of type 2 fan-shaped cells also revealed a traveling wave with constant shape and speed. There was, however, a subtle difference between type 1 and type 2 cells, as the type 2 distributions formed a ring positioned away from the membrane (Fig 2C, upper row). Consistent with these observations, the LimE-GFP kymograph did not show any distinct spatial or temporal features (Fig 2C, lower row). The distribution of GFP-myo was identical to that of type 1 cells and showed an elevated band parallel to the back of the cell (Fig 2C, upper row). Furthermore, the GFP-myo kymographs showed a region of high fluorescence at the back of the cell (Fig 2C, lower row), indicating a clear symmetry breaking and polarization in the cell.
The difference between the two types of fan-shaped cells was most striking when examining the traction force maps (Fig 2B and D). The computed stress map for a type 2 cell reveals two large force poles at the posterior corners (see Fig 2D and, for more detail, Fig EV1B; see also Movie EV4). At the back of the cell, T_x was positive, which means the forces are in the direction of motion. However, and in sharp contrast to the pattern for type 1, in the front half of the cell T_x was negative, indicating that the force exerted onto the substrate was directed backward (Fig 2D). T_y was largest in the two posterior corners and was directed toward the midline of the cell. As for the type 1 cell, we also integrated the traction forces in the direction of motion and plotted them as a function of y (Appendix Fig S3B), and we computed the cell speed as a function of the cell area and total force (Appendix Fig S4A and B). Again, the cell speed was largely independent of these parameters but was found to be smaller than the speed of type 1 cells. Furthermore, the pressure is also independent of the basal area (Appendix Fig S5A).
Figure 2. Traction force maps and distributions of signaling components in fan-shaped cells.
A Snapshots of LimE-GFP, GFP-myo, and corresponding kymographs for type 1 cells. Corresponding speeds are 10.8 µm/min and 9.4 µm/min. White dots in the fluorescent kymographs indicate the location of the two poles of force at each time point, as extracted from the corresponding stress kymographs.
B Stress maps quantifying the magnitude of the force per area using a color scale, with blue/red corresponding to small/large stresses, and the direction of forces using vectors. Fan-shaped cells were rotated so that the vertical (x) axis is the direction of motion (see Materials and Methods). Shown are the overall stress T, the stress in the direction of motion T_x, and the stress perpendicular to motion T_y for a type 1 cell with the LimE marker, and the corresponding T_x and T_y kymographs along the cell's outline.
C Snapshots of LimE-GFP, GFP-myo, and corresponding kymographs for type 2 cells. Corresponding speeds are 5.6 and 8.7 µm/min. White dots in the fluorescent kymographs indicate the location of the two poles of force at each time point.
D Stress maps of T, T_x, and T_y for a type 2 cell with the LimE marker, and corresponding T_x and T_y kymographs along the cell's outline.
E Left three panels: Major and minor axes of the cells, L_maj and L_min, and total force F as a function of the cell area A. Right panel: Force perpendicular to the motion, F_y, as a function of the force parallel to the motion, F_x. The dashed line represents a linear fit with a slope of 1.87 (r² = 0.88). The plots present averaged values for each cell based on the duration of each recording (cell type 1: blue markers, cell type 2: orange markers, and less stable cells: gray markers [see Materials and Methods]). The basal area of type 2 cells was larger, and their speed lower, than for type 1 cells: 628 (502/692) μm² and 6.0 (5.4/8.2) μm/min (N = 12 biological replicates) vs. 326 (258/461) μm² and 10.8 (9.4/12.3) μm/min (N = 161 biological replicates; P = 1.9 × 10⁻⁶ and 2.6 × 10⁻⁷), while the median ratio between the pole-pole distance and the cell's length was 0.75 (0.70/0.79, N = 161 biological replicates) for type 1 cells and 0.84 (0.77/0.90, N = 12 biological replicates) for type 2 cells (P = 2.2 × 10⁻³).
Data information: The arrows indicate the direction of motion, and black dots in the kymographs correspond to the location of maximum stress. All scale bars in the figure: 10 μm.
The kymographs of T_x and T_y showed that, as is the case for type 1 cells, the two force poles at the posterior corners remained present for the entire duration of migration (Fig 2D, lower row), resulting in a nearly constant area and total force during migration (Appendix Fig S1C-E). The kymograph of T_x clearly showed the change in direction of T_x along the cell outline, occurring at the locations of maximal stress T (black dots, Fig 2D). As for type 1 cells, simultaneous measurement of the traction force pattern and the GFP-myo distribution revealed that this maximum stress occurred at the locations of maximum gradient intensity of GFP-myo (white dots, Fig 2C). Finally, to verify that our traction force patterns are not affected by the thickness of the gel, which could potentially introduce long-range effects in bead displacements (Merkel et al, 2007), we repeated the experiments with thinner gels (3 vs. 15 μm). For these thin gels, the traction force patterns for type 1 and type 2 cells were qualitatively unchanged (Fig EV1C and D).
To determine whether the morphology of the two types differed, we fitted the basal surface morphologies to an ellipse (Materials and Methods). The major and minor axes as a function of area are shown in Fig 2E, where the different cell types and morphology stability are indicated with different colors. A more detailed graph, indicating the different cell strains and generation methods, is presented in Appendix Fig S6. For both cell types, the major and minor axes of the fitted ellipse increased with increasing basal surface area, but the basal area of type 2 cells was on average larger than that of type 1 cells. Thus, the force pattern seemed to be mostly determined by the cell size and not by the method employed to obtain fan-shaped cells. In addition, the total force was found to increase with increasing area (Fig 2E, third panel). This dependence of the force on the area has also been observed in migrating keratocytes (Sonoda et al, 2016). We also determined the total force in the direction of motion, F_x, and perpendicular to the motion, F_y (see Materials and Methods). A plot of F_y vs. F_x showed a linear dependence with a slope larger than 1, indicating that the forces perpendicular to the direction of motion were larger than those in the direction of motion (Fig 2E, right panel).
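The ellipse fit can be reproduced along the following lines; this is a sketch using scikit-image's moment-based ellipse (regionprops) rather than the authors' MATLAB routine, and the function name is illustrative.

```python
# Sketch: moment-based ellipse fit of a binary cell mask.
from skimage.measure import label, regionprops

def fit_cell_ellipse(mask):
    """mask: binary image of the basal cell footprint containing one cell."""
    props = regionprops(label(mask.astype(int)))[0]
    return (props.area,                 # basal area A (pixels)
            props.major_axis_length,    # L_maj of the fitted ellipse
            props.minor_axis_length,    # L_min of the fitted ellipse
            props.orientation)          # ellipse orientation (radians)
```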
Target waves lead to oscillatory cells
The LimE-GFP distribution corresponding to an oscillatory cell is consistent with an F-actin wave that was initiated in the basal plane at the start of the spreading phase (Miao et al, 2017) (upper row Fig 3A and Movie EV5). This target wave then traveled along the surface of the cell, and the basal plane expanded when the wave reached the periphery. As we have shown earlier, the actin wave disappeared from the basal plane by moving up the cell's side (Cao et al, 2019a). Snapshots of the GFP-myo distribution during an oscillatory cycle are presented in the middle row of Fig 3A and show that the fluorescent intensity decreased when the cell expanded and increased when the cell's area shrank (Movie EV6). The traction force map of an oscillatory cell for one complete cycle shows that, throughout the spreading and contraction cycle, the force onto the substrate pointed inward, toward the center of mass of the cell (see Fig 3A and, for more detail, Fig EV2A for the LimE-GFP cell and Fig EV2B for the GFP-myo cell). Furthermore, the force and stress were higher during the retraction phase than during the expansion phase.
The periodic nature of the cytoskeletal waves and of the basal area is illustrated in Fig 3B (upper panel), where we plot the area and the average LimE-GFP fluorescence within the cell outline as a function of time. The area changed more than fourfold during a cycle, while the difference between the maximum and minimum fluorescent intensity was almost twofold. We computed the autocorrelation of the area, which can be well fitted with a damped sinusoidal function, indicating that the area dynamics are strongly periodic (Appendix Figs S7 and S8, Appendix Table S1). Furthermore, the period of this oscillation is not strongly dependent on the time-averaged basal area (Appendix Fig S7D).
To determine how the cytoskeletal components are correlated with morphology changes, we next computed the correlation function (CF) between the cell area, as well as the area change rate (the time derivative of the area), and the intensity of the fluorescent signals. The area and the LimE-GFP fluorescence intensity were significantly correlated (blue line and symbol, inset upper panel Fig 3B), and the F-actin activity was maximal before the cell reached its maximal expansion (Fig 3E and Appendix Table S2). Furthermore, the maximum increase in area occurred before the maximum of LimE (magenta line and symbol, inset upper panel Fig 3B and F). Finally, the average GFP-myo intensity showed oscillatory behavior with the same period as the area (upper panel Fig 3C). We found a positive median shift between the area and this intensity (Fig 3E), indicating that the maximum of myosin fluorescence intensity occurred after the maximum expansion, with a considerable delay. The CF between the area change rate and the fluorescent intensity (magenta line, inset upper panel Fig 3C) revealed that the myosin activity was maximal slightly after the maximal decrease in area (Fig 3F). As expected, the total force also showed oscillations with the same period as the area and the cytoskeletal fluorescent intensities (upper panel Fig 3D). The CFs revealed that the area and the total force were correlated, with the area leading the total force (blue line and symbol, inset upper panel Fig 3D and E), while the area change rate and the total force were anticorrelated (magenta line and symbol, inset upper panel Fig 3D and F). Thus, the total force was maximal slightly before the maximal decrease in area. The temporal evolution of the force, the area, and the cell-averaged LimE and myosin is summarized schematically in Appendix Fig S9A. We also determined the kymographs of oscillatory cells (lower panels Fig 3B-D), which clearly showed the oscillatory nature of the cell area, as the length of the boundary oscillated between a maximum and a minimum value. The kymograph of the LimE-GFP-labeled cell showed that the fluorescent intensity along the membrane was elevated only during the protrusion part of the cycle (lower panel Fig 3B). Conversely, the kymograph of the GFP-myo-labeled cell revealed that myosin was present along the membrane mostly during contraction but not during expansion (lower panel Fig 3C). The kymograph of the total force along the boundary of the cell in Fig 3D showed periods of high forces, corresponding to contraction, alternating with periods of very low forces, associated with expansion (lower panel Fig 3D). In addition, we computed the time-averaged total force as a function of the cell area, which revealed that larger cells exert a larger total force (Fig 3G). A more detailed graph, where data points for different cell strains are shown by different symbols, is presented in Appendix Fig S10A.

Data information (Fig 3): (B-D) The peaks in the CF are indicated by star symbols, the 95% confidence interval is gray-shaded, and the sign of the peak in the CF defines whether the quantities were correlated (largest peak occurred for positive CF values) or anticorrelated (largest peak occurred for negative CF values; see Materials and Methods). (E, F) P-values higher than 0.05 are considered not significant, *P < 0.05, **P < 0.01, and ****P < 0.0001 as determined by the Wilcoxon-Mann-Whitney test using the rank sum function in MATLAB. All scale bars in the figure: 10 μm.
Amoeboid cells are associated with unstable waves
Consistent with a large body of work (see, e.g., Iwadate & Yumura, 2008), LimE-GFP appeared as waves close to the membrane that resulted in bright patches located at random positions (upper row Fig 4A and Movie EV7). When these waves reached the membrane, they extended it, creating pseudopods (top row Fig 4A). However, these waves are unstable, unable to propagate further, and have a limited spatial extent (Miao et al, 2017; Cao et al, 2019b). As a consequence, the fluorescent intensity of the patches decreased and the pseudopods retracted. The distribution of GFP-myo in an amoeboid cell also changed as the cell underwent a protrusion and retraction cycle (Movie EV8). During the protrusion of a pseudopod, the fluorescent intensity of GFP-myo was relatively low and non-localized (middle row Fig 4A). The retraction of a pseudopod, however, was associated with an accumulation of myosin at the location of the pseudopod (79-90 s; Fig 4A), as observed in previous studies (Iwadate & Yumura, 2008).
The traction force map corresponding to the LimE-GFP-labeled cell showed cycles of expansion due to randomly placed protruding pseudopods (0-30 s), followed by the contraction of these pseudopods (45-90 s; Fig 4A, lower row; for a more detailed map of this cell and of the GFP-myo-expressing cell, see Fig EV3). Large stresses only occurred during the contraction phase and were located mainly underneath retracting pseudopods, while the traction forces during the protrusive phase were small. In contrast to the fan-shaped and oscillatory cells, the forces were transiently associated with each protrusion rather than broadly distributed near the front of the cell. In agreement with previous studies, traction forces were directed inward at all times (Del Alamo et al, 2007; Lombardi et al, 2007).
The area of the cell presented in Fig 4A showed quasi-periodic dynamics (Appendix Fig S7C). The cell-averaged fluorescence intensity of LimE-GFP also showed quasi-periodic dynamics (green curve, Fig 4B) and was significantly correlated with the area (blue line and symbol, inset Fig 4B), with a negative median shift identical to the one found for oscillatory cells (Fig 4D). Together, this means that maximal actin polymerization occurs before the cell area has reached its maximum value but after the maximal increase in area. In contrast to LimE-GFP, GFP-myo showed less pronounced localized areas of elevated fluorescence. Therefore, the cell-averaged GFP-myo intensity as a function of time was quite noisy for some cells (Appendix Fig S11A), and the CF of the area and the myosin signal displayed a significant correlation in only ~2/3 of the cells (7/11; see Appendix Fig S11B for such an example) with a positive shift (Fig 4D, yellow symbols). In contrast, the area change rate and the fluorescent signal were anticorrelated, with a negative shift (Fig 4E, yellow symbols). Thus, on average, the peak of myosin fluorescent intensity occurs slightly after the maximum area and before the maximal decrease in area. Comparing the time shift of the CF of the area and force to the time shift of the CF of the area and myosin reveals that the peak of myosin occurs slightly before the peak of force.
The total force as a function of time showed quasi-periodic dynamics, oscillating between small values during expansion and much larger values during a decrease in the basal area (blue curve Fig 4C). The area and the total force were significantly correlated (inset Fig 4C) with a positive median time shift (Fig 4D, blue symbols), consistent with an earlier study (Delanoe-Ayari et al, 2008). In contrast, the area change rate and the total force were anticorrelated, with a negative median shift (inset Fig 4C and E, blue symbols). Thus, the maximum total force is achieved after the maximum area but before the maximal decrease in area. The temporal evolution of the force, the area, and the cell-averaged LimE and myosin are schematically summarized in Appendix Fig S9B. Just as for the other cell types, the speed and pressure are largely independent of the time-averaged cell area and total force (Appendix Figs S4E and F, and S5C).
To gain further insight into how signaling components and force generation are correlated, we next constructed kymographs of the fluorescent intensity, the traction force along the membrane, and the cell's edge velocity, defined as the normal velocity of the membrane (Machacek & Danuser, 2006) (Fig 4F, G, I and J). For the cell expressing LimE-GFP, regions of F-actin polymerization were observed in the kymograph (Fig 4F). These regions, however, were not colocalized with regions of elevated stress, which were much smaller in extent (Fig 4G). This is in contrast to the cell that expressed GFP-myo, where patches of GFP-myo were mostly correlated with regions of high stress (Fig 4I and J). Furthermore, and as expected, a comparison between the fluorescence and edge velocity kymographs revealed that negative edge velocities were associated with high myosin intensity, whereas positive edge velocities corresponded to LimE-GFP patches (Fig 4H and K).
We then computed the total force, averaged over time, as a function of the cell area (Fig 4L). As expected, since the total force is the integral of the absolute value of the stress over the area, it increased with increasing area. We also determined the time-averaged total force in the direction of motion, F_x, and perpendicular to the motion, F_y (Materials and Methods). Contrary to the fan-shaped cells, where the ratio F_y/F_x was close to 2, the ratio F_y/F_x for amoeboid cells is close to 1 (Fig 4M). Thus, the total force in the direction of motion is approximately the same as the total force perpendicular to the motion. This ratio is, however, much larger than for chemotactic amoeboid cells, where it was found to be approximately one half, indicating that in those cells the axial stresses, along the direction of motion, are larger than the lateral ones. A more detailed presentation of the data in Fig 4L and M, with data points for different cell strains shown by different symbols, is given in Appendix Fig S10.
Comparison between the 3 modes of migration
Since kymographs include both spatial and temporal information, we computed edge velocity kymographs for all migration modes and correlated them with the force and fluorescent kymographs (Fig EV4, Appendix Figs S12-S20, and Materials and Methods). We first determined the protrusion and retraction speeds (see Materials and Methods) and found that the fan-shaped cells exhibited the largest edge velocity for both protrusions and retractions, whereas the amoeboid cells displayed the lowest edge velocity (Fig 5A). Furthermore, for all modes of migration, we found that the protrusion and retraction speeds did not differ significantly and that the ratio of their absolute values was close to 1 (Fig 5B and Appendix Table S3). This result is obvious for the fan-shaped cells, since their morphology does not change, but is less intuitive in the case of the more complex amoeboid morphologies. These results suggest that the speeds of retraction and protrusion determine the overall cell speed, which was found to be highest for fan-shaped cells and lowest for amoeboid cells (Appendix Fig S21).
Next, we computed the ratio between the stress in the protruding regions and the stress in the retracting regions and found it to be smaller than 1 for all modes: the stress in the retracting regions is always larger than in the protruding regions (Fig 5B). The ratio was significantly different for all modes and was much smaller in the fan-shaped cells (Appendix Table S3). We also computed the ratio of LimE to myosin fluorescence intensity in the protruding and retracting regions (Fig 5B, Appendix Table S3). As expected, LimE-GFP was brighter in the protruding regions, resulting in a ratio that was larger than 1 for all migration modes. Furthermore, this ratio was smaller than 1 for myosin, indicating that myosin is localized in the retracting regions for all three migration modes (Appendix Table S3). This is especially true for the fan-shaped cells, where myosin is exclusively localized at the back (retracting) side. For amoeboid cells, on the contrary, a negative edge velocity can sometimes be caused by an active pseudopod at the front that pulls the cell body forward without a marked increase in myosin in the retracting region. As a result, the ratio for amoeboid cells is closer to 1.
We then computed the average edge velocity for the 20% brightest LimE-GFP and GFP-myo pixels (high fluorescence regions), as well as the edge velocity for the remaining 80% of the pixels (low fluorescence regions). The average edge velocity was found to be positive in the high LimE fluorescence regions but negative in the remaining regions (Fig 5C and Appendix Table S3). In other words, protrusions occurred predominantly in regions of high LimE-GFP fluorescence. Conversely, regions of high myosin were associated with negative edge velocities, whereas the remaining regions exhibited near-zero or positive edge velocities (Fig 5C). Thus, for all migration modes, membrane regions of high LimE-GFP fluorescence are associated with protrusions and regions of high myosin fluorescence correspond to retractions. Next, we computed the time-averaged ratio between stresses in regions of low and high fluorescence (Fig 5D, Appendix Table S3) and found that the ratio between stresses in regions of high and low myosin fluorescence was similar and larger than 1 for all three modes. Thus, the stress is higher in regions where myosin is recruited (Fig 5D). The only significant qualitative difference we found was for the stress ratio in regions of high and low LimE-GFP fluorescence (Fig 5D). For the amoeboid and the fan-shaped modes, this stress ratio was found to be close to 1. In other words, the stress at membrane locations where F-actin polymerized was similar to the average stress along the rest of the cell's membrane. For oscillatory cells, however, this ratio was significantly smaller than 1, indicating that the stress at F-actin polymerization sites was lower than at the remaining sites. This is expected since, for these cells, only the expansion, which is associated with low stresses, results in regions of high LimE-GFP fluorescence.

Figure 5.
A Protrusion and retraction speed for amoeboid, oscillatory, and fan-shaped cells, defined as the average of the pixels with the 20% lowest and highest membrane speed.
B Ratio between the edge velocity, stress, LimE-GFP, and GFP-myo intensity in membrane regions identified as protrusions and retractions.
C Average edge velocity in regions of low and high LimE-GFP and GFP-myo fluorescence. High fluorescence was defined as the 20% brightest LimE-GFP and GFP-myo pixels in the kymographs, while low fluorescence consisted of the remaining 80% of the pixels.
D Ratio between the stress in regions of high and low LimE-GFP and GFP-myo fluorescence for the three modes of migration. The ratio was significantly different for all modes and was found to be much smaller in the fan-shaped cells, which have large traction force poles at the back of the cell.
Data information: P-values higher than 0.05 are considered not significant, *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001 as determined by the Wilcoxon-Mann-Whitney test using the rank sum function in MATLAB. The box plots were created using the boxplot function in MATLAB, with the line indicating the median, the bottom and top edges of the box indicating the 25th and 75th percentiles, respectively, and the whiskers extending to the most extreme data points not considered outliers. Values and numbers of biological replicates are listed in Appendix Table S3. (B, C) The dotted line indicates a ratio equal to 1.
In the case of amoeboid cells, expansion and retraction phases are not well separated, as pseudopods are generated randomly in time and space. Consequently, regions of high fluorescence can occur contemporaneously with retracting pseudopods, resulting in a stress ratio close to 1. For the fan-shaped cells, the high LimE-GFP region encompasses not only the front but also the part of the boundary close to the two force poles (Fig 2). Thus, even though the stress is low at the front of the cell, the stress ratio is close to 1.
Computational modeling can explain force patterns
Our experiments suggest the following scenario, shown schematically in Fig 6. Actin polymerization is responsible for membrane protrusions and is controlled by the wave dynamics: stable waves propagating with the speed of the cell for fan-shaped cells (Fig 6B), target waves propagating outwardly for oscillatory cells (Fig 6C), and unstable waves in the case of amoeboid cells (Fig 6D). For all migration modes, once an actin wave reaches the cell membrane, it "pushes off" against it, generating a cytoskeletal flow that is directed inward. Due to friction with the substrate, this flow creates traction forces that are also directed inward (Fig 6A). Myosin is responsible for contraction and pulls on the membrane, and the resulting traction forces also point inward (Fig 6B). For fan-shaped cells, myosin is present along most of the nearly straight membrane at the back of the cell (Fig 6C). Since myosin contracts along this entire band, the traction forces are largest at its end points, located at the rear corners of the cell. The cytoskeletal flow created by the contractile myosin and the protrusive actin then leads to cell-wide traction force patterns that differ between the two types of fan-shaped cells. Specifically, when myosin is dominant, contractile forces generate a swirling flow pattern and push the cytoskeleton forward in the entire cell (type 1 cells). For type 2 cells, myosin creates forward-directed flow at the rear while actin polymerization results in backward-oriented flow at the front. For oscillatory cells, the contractile forces generated by myosin start after the actin ring has moved away from the basal plane, contracting the cell at the basal surface (Fig 6C). Finally, for amoeboid cells, myosin creates contractions that retract pseudopods, which results in traction forces at the base of the pseudopods (Fig 6D).
To test this scenario, we developed a mathematical model with the aim of reproducing the traction force patterns for all three migration modes, including the two fan-shaped cell types, by simply changing the wave dynamics and the spatial location of actin and myosin. In our model, detailed in Materials and Methods, the cytoskeletal interior of the two-dimensional cell is modeled as a compressible fluid, which is actively driven by actin polymerization, representing protrusion, and by myosin contraction (Keren et al, 2009; Rubinstein et al, 2009). The fluid flow interacts with the substrate through friction, resulting in traction forces. Since we are interested in modeling traction force patterns, we do not include any explicit polarization mechanisms. Instead, we specify the actin (as visualized using LimE) and myosin distributions and their wave dynamics, which allows us to compute the traction force patterns related to a given cytoskeletal organization. Our actin distribution represents freshly polymerized actin, visualized in the experiments using LimE, but we assume that actin filaments are distributed over the entire cell, providing a substrate for myosin. Finally, the cell's morphology and its motion are determined by a force balance equation, which involves membrane tension, cell-substrate friction, and forces due to fluid flow. Our model is implemented using the phase-field method, which eliminates the need for explicit boundary tracking and is therefore ideally suited for such free boundary problems (Lowengrub et al, 2009; Shao et al, 2010, 2012; Ziebert et al, 2012; Moure & Gomez, 2016; Cao et al, 2019a, 2019b, 2019c; Flemming et al, 2020; Moreno et al, 2020). We first simulated the motion of fan-shaped cells. Based on our experimental observations that actin and myosin are spatially excluded and remain fixed over time (Fig 2A and C), we implemented mutually exclusive distributions of actin and myosin that propagate as a stable traveling wave with constant speed. Moreover, since myosin is only elevated near the back of the cells, we took myosin to be restricted to a narrow band at the rear of the cell (Fig 7A and B). In our simulations of the fan-shaped cells, we kept the parameter that determines the protrusive strength, η_a, fixed, and varied the contractile strength, parameterized by η_m. The values for these and other model parameters can be found in Table EV1.
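As an illustration of the friction-based force generation at the heart of the model, the following minimal sketch computes a flow and traction field from prescribed actin and myosin distributions via an overdamped force balance. It omits the phase-field membrane, tension, and area conservation of the full model, and the grid, parameter values, field shapes, and sign conventions are illustrative assumptions, not the published implementation.

```python
# Minimal sketch: isotropic active stress sigma is contractile (positive)
# where myosin sits and extensile (negative) where freshly polymerized actin
# sits; the force balance xi * v = grad(sigma) gives the cytoskeletal flow,
# and the traction on the substrate is the frictional reaction xi * v.
import numpy as np

n, h = 128, 0.5                   # grid points and spacing (um), assumed
xi = 1.0                          # substrate friction coefficient, assumed
eta_a, eta_m = 100.0, 200.0       # protrusive / contractile strengths (pN um)

y, x = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
# illustrative fan-shaped geometry: actin crescent near the front (x ~ 40 um),
# myosin band at the rear (x ~ 24 um)
actin = np.exp(-((x - 40.0) ** 2 + (y - 32.0) ** 2) / 30.0)
myosin = np.exp(-((x - 24.0) ** 2) / 8.0) * (np.abs(y - 32.0) < 10.0)

sigma = eta_m * myosin - eta_a * actin   # isotropic active stress
gy, gx = np.gradient(sigma, h)           # grad(sigma): (d/dy, d/dx)
vx, vy = gx / xi, gy / xi                # cytoskeletal flow from force balance
Tx, Ty = xi * vx, xi * vy                # traction on substrate, parallel to flow
```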
In a first set of simulations, the contractile strength parameter η_m was chosen to be large, such that the contractile force dominates. The resulting morphology, traction force pattern, and stress pattern are shown in Fig 7A and Movie EV9. The morphology of the cell was consistent with the fan-shaped cells in the experiments, with an arched front and a near-straight back. Furthermore, the traction force pattern was qualitatively similar to the pattern of a type 1 cell: the largest forces were located in the posterior corners, while the traction forces in the front of the cell pointed in the direction of motion. Furthermore, as in the experiments (Fig 2B), the traction force pattern displayed two clear rotating vortices, located near these posterior corners, with a sink present at the center of each vortex.
In a second set of simulations, we reduced the value of η_m so that the protrusive force dominates. Decreasing the strength of the contractile force did not change the overall morphology of the computational cell but resulted in larger cells. Furthermore, this cell showed a distinctly different traction force pattern that was in qualitative agreement with a type 2 cell (Figs 2D and 7B, and Movie EV10). The largest forces were still located in the posterior corners, and the maximal forces occurred at locations where both the actin and myosin gradients were large. The traction forces in the front of the cell, however, were now directed opposite to the direction of motion. These results suggest that the difference between type 1 and type 2 cells can be explained by the balance between protrusive and contractile forces: contractile forces dominate in type 1 cells, while protrusive forces dominate in type 2 cells. Consistent with the experiments, we found that the speed of type 1 cells was larger than that of type 2 cells (Fig 7). Furthermore, the patterns of the stress in the direction of motion, T_x, and of the integrated traction forces as a function of y were similar to the corresponding experimental patterns (Fig 2B and D, and Appendix Fig S3) for both cell types (Appendix Fig S22).
Next, we addressed the traction force patterns in oscillatory cells. Following our experimental results (Fig 3A), we modeled actin as present within a thin annulus bordering the cell membrane and myosin as present within the entire computational domain. Both distributions were taken to be spatially homogeneous and oscillated out of phase. As a result, the cell membrane remained circular and the area oscillated between a minimum and a maximum value (Fig 7C and Movie EV11). We should note, however, that spatially non-homogeneous distributions that are synchronized in time can also produce oscillatory morphologies consistent with the experiments (Appendix Fig S23). The resulting traction forces are consistent with the experimental results: the forces always pointed inward, toward the center of the cell (Fig 7C), and the largest forces were present during the contraction phase (Fig 7D). Furthermore, the computed CFs for these simulations are fully consistent with the experimentally determined CFs (Fig 7D, lower panel).
Lastly, we simulated the traction force patterns arising from amoeboid motion. As in our experiments, we restricted actin polymerization to small, randomly located patches on the boundary. As in the experiments, the retraction of pseudopods resulted in larger forces than the protrusion of pseudopods (lower row Fig 7E), and both the area and the total force as a function of time showed quasi-periodic dynamics (Fig 7F, upper panel). Finally, the CFs between the area change rate and the total force, the actin, and the total myosin qualitatively agree with the experimental results (Fig 7F, lower panel).

Figure 7.
A Actin (using a green color scale) and myosin distribution (using a red color scale) in a simulated type 1 cell, obtained for contractile strength parameter η_m = 200 pN μm, with the corresponding traction force map. The speed of the cell, and thus the edge speed, was 10.2 μm/min.
B As in A, but for a type 2 cell. The contractile strength parameter in this simulation was η_m = 80 pN μm, and the cell speed was 5.9 μm/min.
Discussion
Our results show that the diverse cell migration modes in Dictyostelium cells are characterized by distinct traction force patterns and that each of these modes corresponds to specific wave dynamics and spatial distributions of the key cytoskeletal components, F-actin and myosin. The temporal correlation between the spatially cell-averaged cytoskeletal components and the traction force, however, was conserved across the different modes, suggesting that the modes employ the same migration mechanisms. Furthermore, quantifying the ratio between membrane properties in regions of high and low intensity and edge velocity also revealed qualitatively similar results for the three migration modes. The sole exception was the stress ratio in regions of high and low LimE-GFP fluorescence, which was close to 1 for amoeboid and fan-shaped cells but significantly smaller than 1 for the oscillatory cells (Fig 5D). We also show that a computational model, which uses the wave dynamics as input and computes traction forces arising from friction between the cytoskeletal fluid flow and the substrate, is able to reproduce all experimentally observed patterns.

In our TFM experiments, we used relatively thin gel substrates (3-15 µm), with fluorescent beads attached to the top surface. This approach has multiple advantages: gels are stable, do not shrink or swell, and have excellent optical properties. In addition, when all tracer particles are in the same plane, the precision and spatial resolution of TFM are maximized (Driscoll & Danuser, 2015). Furthermore, using thinner gel substrates compared with conventional substrates ensures that substrate deformations resulting from cell traction forces decay over short distances, typically of the order of the thickness of the gel. As a result, the reference (i.e., zero traction force) positions of the tracer particles, which are needed to compute their displacements, can be identified at short distances in front of and behind migrating cells, greatly facilitating the dynamic tracking of the traction force distributions along the cell migration trajectory. Most importantly, however, the short decay distance enables the distinction between nearby force foci and, thus, more accurate traction force maps.
Surprisingly, our TFM revealed two distinct traction force patterns for fan-shaped cells. While both patterns display large forces in the posterior corners of the cells, they differ in the direction of the traction force at the front of the cell. The force maps for type 2 cells are qualitatively similar to the ones found in migrating keratocytes: two large force poles at the posterior corners, with forces in the front part of the cell pointing opposite to the direction of motion (Fournier et al, 2010). In keratocytes, this pattern is believed to be due to the retrograde flow of the protrusive actin network (Fournier et al, 2010), which transmits forces to the substrate through adhesive focal adhesion complexes that are formed at the front of the cells, mature, and are released at the back of the cell (Gardel et al, 2010). Our results suggest that for the type 2 cells the observed pattern is also due to cytoskeletal flow and that the traction force map mimics the flow pattern. Contrary to keratocytes, however, Dictyostelium cells do not exhibit stable focal adhesion complexes linked to stress fibers. Like neutrophils, they display transient adhesions marked with paxillin, although a specific integrin-extracellular matrix interaction has not been identified. Dictyostelium cells can adhere to a wide variety of surfaces (Bukharova et al, 2005; Loomis et al, 2012), and it is believed that non-specific van der Waals and electrostatic interactions play a role (Loomis et al, 2012; Tarantola et al, 2014). Therefore, it is likely that these interactions, together with cytoskeletal flow, provide the required traction forces.
The keratocyte-like force maps were found in a minority of cells, distinguishable by their larger size. Most cells, however, displayed a traction force pattern that is at odds with retrograde flow generating the traction forces. Specifically, the forces at the front of these type 1 cells point in the direction of motion instead of in the retrograde direction, and two counter-rotating vortices are present. This pattern suggests that contractile forces at the back of the cell propel the cell forward. This dominance of contractile forces would also explain why these type 1 cells are smaller than the type 2 cells, where protrusive forces are mostly responsible for motion. Although we have never observed a transition between the two cell types, we cannot rule it out, since we can only follow cells for up to approximately 10 min.
For the oscillatory cells, our data suggest a sequence of events that starts with an expansion phase during which an actin wave pushes the membrane outward (Cao et al, 2019b). As a result, the actin network is dragged inward, presumably again by retrograde flow (Watanabe & Mitchison, 2002), resulting in traction forces that point toward the center of the cell. The LimE-GFP intensity reaches a maximum before the maximum area has been achieved, after which accumulation of myosin pulls the membrane inward. Again, the resulting flow of the cytoskeletal network results in inward-directed traction forces. Since the membrane forces occur along the entire membrane, the net traction force is always close to zero. Furthermore, the contraction phase was associated with a peak in traction force, whereas forces were found to be weaker during expansion.
In the case of amoeboid cells, expansion and retraction phases are not well separated or periodic, as pseudopods are generated randomly in time and space by short-lived actin waves with limited spatial extent. Correlating the observed traction force patterns with the actin and myosin distributions, however, allowed us to determine how these cytoskeletal components contribute to morphology changes and locomotion. Since the correlations between the cytoskeletal molecules, the force, and the morphology are qualitatively identical to the ones for the oscillatory cells, this migration mode may be described in a similar fashion. Specifically, F-actin polymerization moves the membrane forward while pushing off against the substrate, generating forces on the substrate that point away from the membrane. Myosin-mediated contraction occurring at a distant site will then also result in inwardly directed traction forces, which are balanced by the protrusive, actin-mediated traction forces.
Our numerical model was able to reproduce all observed traction force patterns. As critical input into the model, we used the observed wave dynamics and distributions of actin and myosin. These distributions were then used to generate protrusive and contractile forces, which, together with area conservation and membrane tension, determined the movement and morphology of the cell. Thus, our modeling approach is different from previous studies that solve reaction-diffusion equations to obtain the distributions of signaling components (Cao et al, 2019a, 2019b; Moreno et al, 2020). However, since these previous studies have demonstrated that the essential wave dynamics of these distributions can be obtained using computational models, we are able to use them as inputs (Cao et al, 2019a, 2019b; Moreno et al, 2020). Future work could include combining these models with the framework we have presented here. A further extension of the model that could potentially verify some of our results is to render cells as three-dimensional objects, as was carried out in recent studies (Cao et al, 2019a; Winkler et al, 2019). Also note that we have not incorporated the explicit dynamics of adhesion bonds, as was done in some previous studies (Shao et al, 2012; Reeves et al, 2018). Instead, the interior of the deformable computational cell consisted of a compressible viscous fluid, representing the actin cytoskeleton, and the friction of the flow of this fluid with the substrate then generated traction, as in other computational models (Barnhart et al, 2015; Allen et al, 2020). Note that in this model, just as in some similar ones (Rubinstein et al, 2009; Shao et al, 2012), the flow is derived from the cytosolic interior of the cell and not from the membrane (Fogelson & Mogilner, 2014).
The assumption of network friction-mediated traction in our model is reasonable for Dictyostelium cells. Aside from the above-mentioned non-specific cell-substrate interactions (Loomis et al, 2012), this assumption is also consistent with the flow patterns in the two different types of fan-shaped cells. As a consequence of the friction in our model, the direction of the traction force at a particular location is determined by the direction of the flow at that location, and our simulations predict retrograde actin flow at the front of type 2 cells and more complicated, vortex-like patterns in type 1 cells. Note that a mechanism in which the membrane is firmly attached to the substrate is unlikely to generate the vortex pattern of type 1 cells. Nevertheless, our results do not rule out additional mechanisms, including adhesion patterns that are dynamically regulated.
By changing the distributions of actin, responsible for protrusive forces, and myosin, responsible for contractile forces, our model was able to recapitulate all traction force patterns. Specifically, for the amoeboid and oscillatory modes, all traction forces pointed inward. Furthermore, by placing the myosin distribution spatially opposite to the actin distribution in amoeboid cells, the model was able to recapitulate the patterns observed in the experiments. Finally, by varying the relative strength of the myosin and actin forces, it generated both type 1 and type 2 cells. Taken together, our numerical results suggest that the traction force patterns in Dictyostelium cells are primarily due to friction between the cytosolic flow and the substrate and that different patterns are generated by different distributions and wave dynamics of actin and myosin.
Materials and Methods

Cells and plasmids
We used wild-type AX2 cells, amiB-null AX2 cells, and engineered AX2 cells in our experiments. Wild-type and amiB-null were transformed with the plasmid expressing LimE-delta-coil-GFP. Engineered cells were transformed with the plasmid expressing LimE-YFP. In addition, wild-type and engineered cells were transformed with the plasmid pBig-myo, expressing GFP-myoII and wild-type cells were transformed with the plasmid pEXP-4 carrying lifeAct-GFP.
Wild-type and fluorescently labeled AX2 cells were kept in an exponential growth phase in a shaker at 22°C in HL5 medium. For cells expressing LimE-GFP, HL5 was supplemented with hygromycin (50 μg/ml), while for cells expressing GFP-myosinII, it was supplemented with G418 (10 μg/ml). To obtain amoeboid cells, 10⁵ cells were plated on the soft silicone gel substrate used for traction force measurements (see below) in HL5. Recordings started 15 min after plating and lasted up to 3 h. For fan-shaped cells, cells were diluted to a low concentration (1-2 × 10⁵ cells/ml) to stop exponential growth on the day before the experiment and kept in a shaker at 22°C in HL5 medium. After 15-18 h, the cell concentration reached 2-5 × 10⁵ cells/ml, and 10⁵ cells were plated in 7 ml DB (5 mM Na2HPO4, 5 mM KH2PO4, 200 μM CaCl2, 2 mM MgCl2, pH 6.5) on the soft gel substrate. Recording started 4-5 h after plating and lasted up to 3-4 h. Up to 50% of cells prepared in this way were fan-shaped.
AmiB-null cells were grown in HL5 in petri dishes and harvested when they reached 50-70% confluency. To obtain fan-shaped cells, 10 5 cells were plated in 7 ml DB on the soft gel substrate. Recording started 4-5 h after plating for up to 3-4 h, after which 30-50% of cells were fan-shaped (Cao et al, 2019b).
Traction force microscopy
As is customary for TFM, cells were plated on a deformable substrate that contained small fluorescent tracer particles (Sabass et al, 2008; Style et al, 2014). The spatial map of the displacements of these particles (relative to their positions with no cells on the substrate) was measured (Appendix Fig S2A) and converted, using computational algorithms, to a spatial map of cell traction forces. Specific details are described in the following.
Silicone gels

The gels were coated with fluorescent beads; the exact preparation steps are described below. Young's modulus was measured to be ~1 kPa using a centrifugal rheometer (Appendix Fig S24).
Glass preparation

47-mm round coverslips from WillCo-dish® Kit glass-bottom dishes were cleaned with ethanol and plasma-treated for 15 s to activate the glass surface. The surface was functionalized by vapor deposition of (3-aminopropyl)trimethoxysilane (APTMS) and 3-(trimethoxysilyl)propyl methacrylate (Sigma-Aldrich) for 10 min at 170°C. The glass bottom was then assembled into the WillCo petri dish with the dedicated sticker. 40-nm carboxylate-modified red or yellow-green fluorescent beads (580/605, Molecular Probes F8793, or 505/515, Molecular Probes F8795) were diluted 40,000 times in a buffer at pH 8 (20 μl HEPES/ml, 10 mM NaOH in DI water) and incubated with 0.5 mg/ml 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC, Sigma-Aldrich) on the glass bottom for 2 min before washing with DI water. Dishes were dried for 1 h at 65°C and cooled down before the silicone gel deposition.
Functionalized silicone gel deposition
Soft gels were prepared using the curing agent CY52-276A and base CY52-276B (Dow Corning Toray) at a weight ratio of 1.2:1.0 (total weight 11 g) to achieve a Young's modulus of 1 kPa. The silicone gel was functionalized in bulk with (20-25% aminopropylmethylsiloxane)-dimethylsiloxane copolymer (APTES-PDMS, Gelest, Inc.). To delay the viscosity increase, we also used QSIL PLE (Quantum Silicones QSI). 2.5 μl of a stock solution containing 10 ml ethanol, 10 μl APTES-PDMS, and 25 μl QSIL PLE was added for each 1 g of gel. Ingredients were mixed for 3 min using an overhead stirrer (Heidolph RZR1) and centrifuged for 1 min to remove bubbles. 500 μl of the gel mixture was poured into a glass-bottom dish and spread with a spin coater for 30 s at 4,000 rpm (for 15-μm-thick gels) or for 300 s at 7,500 rpm (for 3-μm gels). The gel layer was baked for 8 h at 65°C.
Surface coating
Each dish was incubated with 40-nm carboxylate-modified red or yellow-green fluorescent beads diluted 1:1,000 in a HEPES buffer with pH 8 for 3 min with 0.5 mg/ml EDC. Excess beads were washed off by carefully flowing DI water over the dish, ensuring that the gel was never exposed to air. 0.3 mg type I collagen (PureCol 3 mg/ml, Advanced BioMatrix) diluted in 2 ml water with 0.5 mg/ml EDC was added to each sample. After 3 min, the solution was washed off with DI water and replaced by DB buffer. Dishes were stored at 4°C for up to 1 week. Gel thickness was measured using confocal microscopy and the two layers of beads. Results were corrected by the ratio of the glass refractive index to the gel refractive index (n = 1.4).
Imaging
DIC and fluorescent images (561-nm excitation, for the red fluorescent beads, and 488 nm for the GFP probes and yellow-green beads) were captured every 15 s with a 63× oil objective on a spinning-disk confocal Zeiss Axio Observer inverted microscope equipped with a Roper Quantum 512SC camera. Autofocus was set on the fluorescent beads at the surface of the gel so that all images were recorded in the basal plane.
Image analysis
To visualize and analyze the cell's surface area, we used the fluorescence data of mCherry-FRB-Inp54p, LimE-GFP, or GFP-myoII. Alternatively, for non-fluorescent cells we used DIC images. Pixels within the cell boundary were detected using a custom MATLAB algorithm, which created a binary image. For fluorescent images, this binarization was performed by applying a threshold automatically determined using the Ridler-Calvard method (Ridler & Calvard, 1978). Then, outlier pixels were removed (using the function bwareaopen), followed by image dilation, the filling of holes, and image erosion. For DIC images, the following steps were performed before binarization: a blurred background was created from images that did not contain the cell; this background was subtracted from the images containing the cell; shadows from DIC imaging were turned into bright spots by taking the absolute value after subtracting the background; and a Gaussian blur was then applied to the images, after which binarization was carried out as for the fluorescent images. The binary image was then used as input to the MATLAB function regionprops to determine the basal surface area, the average fluorescent intensity inside the basal plane, the cell morphology, and the fit to an ellipse. The cell outline was determined using the MATLAB function bwboundaries, from which we constructed kymographs of fluorescent intensity and forces and computed the cell's center of mass. Cell tracks and cell velocity were determined by tracking the centroid of each cell in each frame.
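A Python analogue of this segmentation pipeline could look as follows; it assumes scikit-image equivalents of the named MATLAB functions (threshold_isodata corresponds to the Ridler-Calvard method, remove_small_objects to bwareaopen) and is a sketch, not the custom code used here.

```python
# Sketch of the fluorescence-image branch of the segmentation pipeline.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.filters import threshold_isodata
from skimage.measure import label, regionprops
from skimage.morphology import (binary_dilation, binary_erosion, disk,
                                remove_small_objects)

def segment_cell(img, min_size=50):
    """img: 2D fluorescence image; returns the binary mask and region props."""
    mask = img > threshold_isodata(img)            # Ridler-Calvard threshold
    mask = remove_small_objects(mask, min_size)    # drop outlier pixels
    mask = binary_dilation(mask, disk(2))          # dilate
    mask = binary_fill_holes(mask)                 # fill holes
    mask = binary_erosion(mask, disk(2))           # erode back
    props = regionprops(label(mask), intensity_image=img)[0]
    return mask, props
```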
Assignment of migratory modes
Assignment of migratory modes followed the method described by Miao et al (2017). Briefly, oscillatory cells were defined as cells that displayed a large coefficient of variation of the area. For the remaining cells, fan-shaped cells were defined as cells that migrated perpendicular to their long axis. All other cells were defined as amoeboid cells. Less stable fan-shaped cells were defined as cells that did not keep a constant area (coefficient of variation [COV] > 0.075) or speed (COV > 0.4). For these less stable cells, the force patterns were not clear, making the assignment between types 1 and 2 problematic.
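In sketch form, this classification logic can be written as below (Python); the area-COV cutoff for the oscillatory class and the angular tolerance for "perpendicular" motion are not specified in the text and are placeholders, while the 0.075 and 0.4 stability cutoffs are taken from the text.

```python
# Sketch of the mode-assignment logic; thresholds marked as placeholders.
import numpy as np

def assign_mode(area, speed, angle_motion_vs_long_axis, osc_cov=0.3):
    """area, speed: 1D time series for one cell; the angle (degrees) is the
    median angle between the direction of motion and the cell's long axis."""
    cov_area = np.std(area) / np.mean(area)
    cov_speed = np.std(speed) / np.mean(speed)
    if cov_area > osc_cov:                        # large area variation
        return "oscillatory"
    if abs(angle_motion_vs_long_axis - 90) < 20:  # moves perpendicular to long axis
        if cov_area > 0.075 or cov_speed > 0.4:   # stability cutoffs from the text
            return "fan-shaped (less stable)"
        return "fan-shaped"
    return "amoeboid"
```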
Statistics
Experiments were performed on at least two or three different days for each type of cells and for each type of motion. For data that were not normally distributed, data are reported as median (interquartile 1/interquartile 3) and the significance was evaluated with the Wilcoxon-Mann-Whitney test using the rank sum function in MATLAB. P-values higher than 0.05 are considered not significant, * corresponds to 0.05 > P > 0.01, ** to 0.01 > P > 0.001, *** to 0.001 > P > 0.0001, and **** to P < 0.0001.
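The equivalent test in Python uses scipy.stats.ranksums, the analogue of MATLAB's ranksum; the two samples below are placeholders, not data from this study.

```python
# Sketch: Wilcoxon-Mann-Whitney rank-sum test on two placeholder samples.
from scipy.stats import ranksums

sample_a = [10.8, 9.4, 12.3, 11.1]
sample_b = [5.6, 8.7, 6.0, 5.4]
stat, p = ranksums(sample_a, sample_b)
print(f"rank-sum statistic = {stat:.2f}, P = {p:.3g}")
```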
Force computation
Bead displacements and force reconstruction were computed using an open-source MATLAB algorithm (R2018a; The MathWorks) (Han et al, 2015), which is based on the boundary element method (Dembo & Wang, 1999). Beads were tracked by subpixel correlation by image interpolation (SCII), and traction force reconstruction was accomplished using the boundary element method and L1 regularization. The typical bead density detected by our code was 1.2/μm².
The resolution of the resulting traction force was approximately 1 μm, and the noise level was about 15 Pa. The total force was defined as the sum of the absolute values of all local stresses, T(x, y), multiplied by the local area ΔA: F_tot = Σ |T(x, y)| ΔA. To capture the entire cell, the cell outline was dilated with the MATLAB function imdilate, using a disk of radius 8 pixels as a structural element, resulting in an outline that was approximately 1.3 μm larger in each direction. The local area depended on the bead density but was approximately 1 μm². Please note that this quantity is not the net force from the cell on the substrate, which can be obtained by summing the vector force field.
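A sketch of this computation (Python instead of MATLAB; the dilation radius follows the text, while array names and the grid spacing are illustrative):

```python
# Sketch: dilate the cell mask (imdilate analogue) and sum absolute stresses.
import numpy as np
from skimage.morphology import binary_dilation, disk

def total_force(Tx, Ty, cell_mask, dA=1.0, radius=8):
    """Tx, Ty: reconstructed stress maps (Pa); cell_mask: binary footprint;
    dA: local area per grid point (~1 um^2); radius: dilation in pixels."""
    footprint = binary_dilation(cell_mask, disk(radius))
    T = np.hypot(Tx[footprint], Ty[footprint])   # |T(x, y)| on the footprint
    return np.sum(T) * dA   # F_tot = sum |T| dA (pN for Pa * um^2); note this
                            # is not the net force, which sums the vector field
```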
In our force maps, cells appear to exert traction forces in areas outside their physical boundaries. This appearance of non-zero forces outside the cells is due to the finite spatial resolutions of both the tracer particle displacement map and the conversion of the displacement map into the traction force map and is inherent to force reconstruction methods that do not place any constraints on where traction forces are exerted. Thus, unlike some methods that explicitly postulate that traction forces are only applied at the adhesion complexes within the cell footprint, e.g., traction reconstruction with point forces (Sabass et al, 2008), our procedure will always result in traction force maps with non-zero forces just outside of the cell footprint. We should also point out that a different computational technique for obtaining the traction force map, the Fourier transform traction cytometry (FTTC) method (Butler et al, 2002), gives qualitatively similar results (Appendix Fig S2B).
Rotation of stress maps and fluorescent images
In order to define the stress maps along the direction of motion, T_x, and the stress perpendicular to the motion, T_y, the stress vectors were rotated for the amoeboid and fan-shaped modes. The angle of rotation is based on the cell's trajectory obtained from the center of mass coordinates. For the fan-shaped mode, the trajectory is linear, so a single angle can be extracted for the whole trajectory. For the amoeboid mode, however, the trajectory is random, and the rotation was performed for each time frame. For this, an angle φ(t) is defined for each frame (time t) between the vector connecting the center of mass positions at times t − 1 and t + 1 and the x-axis. A rotation matrix R(t) is then defined using this angle:

R(t) = [cos φ(t), −sin φ(t); sin φ(t), cos φ(t)].

The original measurement obtained from TFM provides the components of the traction stress (T_{x0,i}, T_{y0,i}) measured at position i (x_{0,i}, y_{0,i}). These components can be transformed using the rotation matrix to obtain the rotated values (T_{x,i}, T_{y,i}) = R(t) (T_{x0,i}, T_{y0,i}). The position vector (x_{0,i}, y_{0,i}) can be rotated in a similar fashion. Repeating this for each position i, we obtain a rotated stress map. Note that this procedure can be carried out efficiently by a single matrix multiplication. The total force in the direction of, and perpendicular to, the motion is then defined as F_x = Σ |T_x(x, y)| ΔA and F_y = Σ |T_y(x, y)| ΔA, where the sum is over all points of the stress map.
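The rotation can be sketched as follows (Python; the array shapes and the sign convention for φ are assumptions, since the final orientation depends on whether the direction of motion is mapped onto the vertical or the horizontal axis):

```python
# Sketch: rotate stress components and positions with the matrix R(t).
import numpy as np

def rotate_stress_map(Tx0, Ty0, x0, y0, com):
    """Tx0, Ty0, x0, y0: 1D arrays over measurement points at time t;
    com: (3, 2) array of center-of-mass positions at t-1, t, t+1."""
    dx, dy = com[2] - com[0]                  # displacement over two frames
    phi = np.arctan2(dy, dx)                  # angle phi(t) to the x-axis
    c, s = np.cos(-phi), np.sin(-phi)         # rotate motion onto the x-axis
    R = np.array([[c, -s], [s, c]])           # rotation matrix R(t)
    Tx, Ty = R @ np.vstack([Tx0, Ty0])        # rotated stress components
    x, y = R @ np.vstack([x0, y0])            # rotated positions
    return Tx, Ty, x, y
```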
To obtain kymographs of fan-shaped cells (e.g., Fig 2), the rotation was also applied to the cells' outlines, using the same rotation matrix, and the fluorescent images were rotated using the MATLAB function imrotate. The rotated stresses were interpolated on a regular grid with the same resolution as the fluorescent images (one camera pixel: 212 nm). Finally, pixels along the rotated cells' outlines could be extracted from the rotated fluorescent images and from the interpolated stress maps.
Temporal correlations

Autocorrelations
The autocorrelation function (ACF) of the area was computed in MATLAB using the function autocorr. For the oscillatory mode, the period P of the oscillations was obtained by fitting the area ACF with a damped cosine function A e^{−t/τ} cos(2πt/P) (Appendix Figs S7 and S8). For the amoeboid mode, which exhibits only weakly periodic behavior, a pseudo-period was extracted from the position of the first peak in the area ACF, as no significant result could be obtained using a damped cosine fit. As expected, for oscillatory cells, the position of the first peak of the area ACF and the period obtained from the damped cosine fit give very similar results (see Appendix Table S1). Note that the period of oscillation could also be defined using the ACF of the total force or of the strain energy. However, the ACF based on the total force or on the strain energy is less well fitted by a damped cosine than the area ACF. Therefore, the period of oscillations reported in the main text is based on the area ACF.
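A sketch of this period extraction (Python, using scipy.optimize.curve_fit instead of MATLAB; the initial guesses and the maximum lag are illustrative):

```python
# Sketch: fit a damped cosine A*exp(-t/tau)*cos(2*pi*t/P) to the area ACF.
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, A, tau, P):
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * t / P)

def oscillation_period(area, dt=15.0, max_lag=60):
    """area: 1D numpy array; dt: frame interval (15 s in the experiments)."""
    a = area - area.mean()
    acf = np.correlate(a, a, mode="full")[len(a) - 1:]
    acf = acf / acf[0]                         # normalized ACF at lags 0, 1, ...
    t = np.arange(max_lag) * dt
    (A, tau, P), _ = curve_fit(damped_cosine, t, acf[:max_lag],
                               p0=(1.0, 200.0, 300.0))
    return P                                   # oscillation period in seconds
```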
Cross-correlations
Cross-correlation functions (CFs) between the area or area change rate and the fluorescent signals or total force were computed in MATLAB using the function crosscorr. For positively correlated signals, such as the area and the fluorescent signal (LimE-GFP or GFP-myoII) or the total force, the time shift was computed as the difference between the position of the maximum value of the CF and the origin of time (see, e.g., blue line in inset of Fig 4B). The correlation between the area change rate and LimE-GFP was also positive, so we used the same definition of the delay. For signals that were anticorrelated, e.g., the area change rate and the total force for amoeboid cells (Fig 4C), this shift was defined as the time difference between the position of the minimum value of the CF and the origin of time. For both the CF and the ACF, dashed lines in the plots represent the 95% confidence interval (CI). A correlation is significant only if the CF or ACF has larger values than this interval. The limits of the CI are defined as ±√2 erf⁻¹(0.95)/√L, with L the size of the sample.
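The shift and CI computation can be sketched as follows (Python; the CI bound implements the formula quoted above, while the normalization conventions are assumptions):

```python
# Sketch: cross-correlation time shift between two time series and the
# 95% confidence-interval bound from the text.
import numpy as np
from scipy.special import erfinv

def cf_time_shift(u, v, dt=15.0):
    """u, v: 1D time series of equal length (e.g., area and total force)."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    L = len(u)
    cf = np.correlate(u, v, mode="full") / L       # CF at lags -(L-1)..(L-1)
    lags = (np.arange(2 * L - 1) - (L - 1)) * dt
    ci = np.sqrt(2.0) * erfinv(0.95) / np.sqrt(L)  # 95% CI bound
    k = np.argmax(np.abs(cf))                      # dominant (anti)correlation peak
    return lags[k], cf[k], ci                      # shift, peak value, CI bound
```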
Spatiotemporal correlation
The second kind of correlation is based on kymographs created from the values of the stress, the fluorescence, and the edge velocity on the cell's boundary. For the correlation using the edge velocity as a reference, protruding and retracting regions refer to pixels of the edge velocity kymograph with values higher than the 80th percentile and lower than the 20th percentile, respectively. Once these regions are identified (Appendix Figs S11-S20), the corresponding regions in the fluorescent and stress kymographs are determined. Average values of the fluorescence and of the stress in the protruding and retracting regions are then computed for each cell. The protruding and retracting velocities are defined as the average edge velocity in these regions. For the correlation based on the level of fluorescence, the 20% brightest pixels are selected from the fluorescent kymographs and denoted as high fluorescence regions. The remaining 80% of the pixels define the regions of low fluorescence. The corresponding regions are determined on the stress and edge velocity kymographs, so that their average values in regions of high and low fluorescence can be computed.
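In code, the region definitions and ratios can be sketched as below (Python; kymographs are assumed to be 2D arrays of boundary position by time):

```python
# Sketch: percentile-based protruding/retracting and high/low-fluorescence
# regions from kymographs, and the ratios compared in the text.
import numpy as np

def region_ratios(edge_vel, stress, fluor):
    # protruding / retracting: above 80th / below 20th percentile of edge velocity
    protruding = edge_vel > np.percentile(edge_vel, 80)
    retracting = edge_vel < np.percentile(edge_vel, 20)
    stress_ratio = stress[protruding].mean() / stress[retracting].mean()
    # high fluorescence: 20% brightest kymograph pixels
    high = fluor > np.percentile(fluor, 80)
    fluor_stress_ratio = stress[high].mean() / stress[~high].mean()
    return (stress_ratio,                 # protruding / retracting stress
            fluor_stress_ratio,           # high- / low-fluorescence stress
            edge_vel[high].mean(),        # average edge velocity, high fluor.
            edge_vel[~high].mean())       # average edge velocity, low fluor.
```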
Fan-shaped cells
We computed the cell's edge velocity for type 1 and type 2 cells expressing LimE (Appendix Figs S12 and S13) or myosinII (Appendix Figs S14 and S15), and for type 2 cells expressing lifeAct-GFP (Appendix Fig S16). Using the kymographs, we further quantified the correlation between membrane-localized cytoskeletal components, force generation, and motion for fan-shaped cells by identifying regions of large positive and negative edge velocities, corresponding to protruding and retracting regions, respectively. For fan-shaped cells, these regions obviously correspond to the front and the back of the cell. We also found, consistent with our observation of large forces in the posterior corners, that the ratio between the traction forces in the protruding (front) and retracting (back) regions was ~0.15. Furthermore, and as expected, LimE-GFP was brighter in the protruding regions, where F-actin is polymerizing, whereas myosin was significantly brighter in the retracting regions. In addition, experiments performed with cells tagged with lifeAct-GFP showed no noticeable difference in fluorescence between retracting and protruding regions, indicating the presence of F-actin everywhere along the membrane. We then used the fluorescent kymographs to detect regions of high F-actin polymerization and high myosin activity and correlated them with the stress at the boundary and the edge velocity (Appendix Fig S16E and F). As expected, regions of high LimE-GFP intensity corresponded mostly to the front of the cell and to higher edge velocities, whereas regions of high myosin activity were found mostly at the back of the cell and were correlated with negative edge velocities (Appendix Fig S16E). Comparing the average values of the stress in the regions of high fluorescence to the values in the rest of the cell revealed that for LimE-GFP this ratio was close to 1 (Appendix Fig S16F). For GFP-myo and lifeAct-GFP, this ratio was larger than 1. These results suggest that myosin was responsible for most of the total force developed by the cells during motion and that the forces created by actin polymerization were not significantly larger than the average force along the rest of the cell's outline, including regions of zero normal motion.
Oscillatory cells
Next, we quantified the edge velocity of the oscillatory cells displayed in Fig 3, which showed that the edge velocity was high when the LimE-GFP intensity was highest (Fig EV3). During these expansion phases, the stress was relatively low. When the edge velocity was small, corresponding to retractions, the myosin intensity was high (Appendix Fig S17). Thus, protrusions are associated with increased LimE activity near the membrane, while contractions, and larger stresses, occur when myosin is elevated near the membrane.
We also determined the ratio between the retracting and protruding velocity (Appendix Fig S17D). As for fan-shaped cells (Appendix Fig S16D), this ratio was close to 1, indicating that the edge velocities during retraction and protrusion were approximately identical. The ratio between the traction forces in the protruding and retracting regions was ~0.4, again illustrating that the traction forces during the retraction phase were larger (Appendix Fig S17D). We also found that LimE-GFP was brighter in the protruding regions for both engineered and wild-type cells, while myosin was brighter in the retracting regions (Appendix Fig S17D). The fluorescent kymographs revealed that regions of high LimE-GFP intensity corresponded mostly to the protruding phase of the cell and that regions of high myosin activity were found mostly during the retraction phase of the cell (Appendix Fig S17E). The ratio between the average stress in the regions of high LimE fluorescence and in the rest of the cell was slightly less than 1 for both engineered and wild-type cells. This ratio for myosinII fluorescence, on the contrary, was almost 2 (Appendix Fig S17F). Therefore, and similar to fan-shaped cells, myosin was responsible for most of the total force developed by the cells during morphology changes.
Amoeboid cells
We used the velocity kymographs to identify regions of large positive and negative edge velocities (Appendix Figs S18-S20). We found that, on average, the magnitudes of the most negative and the most positive edge velocities were the same, indicating that the protrusion and retraction speeds were similar (Appendix Fig S20D). We then computed the average fluorescent intensities and average stress in these regions. This revealed that the average stress in the retracting regions was about twofold larger than the stress in protruding regions (Appendix Fig S20D). These findings are consistent with our cell-averaged results and with previous reports demonstrating small forces underneath expanding pseudopods but larger ones in retracting areas (Del Alamo et al, 2007; Delanoe-Ayari et al, 2008; Iwadate & Yumura, 2008). Furthermore, and as expected, LimE-GFP was brighter in the protruding regions, whereas myosin was slightly brighter in the retracting regions. Interestingly, experiments performed with cells tagged with lifeAct-GFP showed no noticeable difference in fluorescence between retracting and protruding regions, indicating that F-actin is required for both retractions and protrusions (Appendix Fig S20D). This suggests that myosin and actin can form an actin-myosin complex that is responsible for contraction not only in pseudopods but also in regions distinct from pseudopods.
We also used the fluorescent kymographs to detect regions of high F-actin polymerization and high myosin activity and correlated them with the stress at the boundary and the edge velocity (Appendix Fig S20E and F). As expected, regions of high LimE-GFP intensity corresponded mostly to positive edge velocities, whereas regions of high GFP-myo and lifeAct-GFP intensity were correlated with negative edge velocities (Appendix Fig S20E). To further quantify this observation, the average values of the stress in the regions of high fluorescence were compared with the values in the rest of the cell (Appendix Fig S20F). For GFP-myo and lifeAct-GFP, this ratio was close to 2, while for LimE-GFP, it was close to 1. This suggests that myosin was responsible for most of the total force developed by the cells during motion, generated mainly during retraction, and that the forces created by actin polymerization were not significantly larger than the average force in the rest of the cell's outline, which includes regions of vanishing normal motion.
Computational model
We propose a mathematical model to explain the different force patterns observed in the experiments. Here, we consider a 2D cell that interacts with the substrate. In our model, the interior of the cell, which is assumed to be the cell cortex, is modeled as a compressible fluid. This fluid is actively driven by actin polymerization and myosin contraction, and the cell morphology and motion are determined by the force balance on the cell boundary. Furthermore, the size of the cell is constrained to lie within a certain range, and the distributions of actin and myosin are pre-set based on experimental observations. Finally, the friction of the fluid with the substrate generates the traction forces exerted by the cell onto the substrate (Barnhart et al, 2011). This is a reasonable assumption, since Dictyostelium cells exhibit non-specific cell-substrate adhesion and do not utilize focal adhesion complexes (Loomis et al, 2012).
To simulate the motion, we utilized the phase-field approach (Shao et al, 2010, 2012; Cao et al, 2019c; Moreno et al, 2020). In this approach, the shape of the cell is tracked by a phase-field variable φ, with φ = 1 indicating the interior and φ = 0 representing the exterior of the cell. The cell boundary is then implicitly tracked by the contour φ = 1/2. The cell shape evolves according to the equation:

∂φ/∂t + u·∇φ = Γ[ɛ∇²φ − G′(φ)/ɛ + cɛ|∇φ|],

where u is the velocity field of the fluid flow, ɛ is the phase-field boundary width, c = −∇⋅(∇φ/|∇φ|) is the local interface curvature, Γ is a relaxation coefficient, and G(φ) = 18φ²(1 − φ)². The dynamics of the interior fluid is modeled using the Stokes equation:

∇⋅[νφ(∇u + ∇uᵀ)] + F_mem + F_area + F_act − ξu = 0.

Here, ν is the fluid viscosity and ξ is the friction coefficient with the substrate. F_mem is the cell membrane tension, given by F_mem = δH(φ)/δφ, with H(φ) = γ(ɛ|∇φ|² + G(φ)/ɛ), where γ is the cell membrane tension per length. F_area is the area conservation force that constrains the cell size A = ∫φ dx dy between [A_min, A_max]. Specifically, we take F_area = M_s g(φ)∇φ, where g(φ) = A − A_min if A < A_min, g(φ) = A − A_max if A > A_max, and g(φ) = 0 otherwise, and where M_s is a parameter that controls the penalty for having an area outside the preferred range.
The active force in our model is provided by actin polymerization and myosin contraction. Following earlier work (Shao et al, 2012), the active force takes the form:

F_act = (η_a ρ_a − η_m ρ_m) n,

where ρ_a and ρ_m are the densities of actin and myosin, respectively. In this equation, the parameters η_a and η_m describe the strength of actin polymerization and myosin contraction, and n = −∇φ/|∇φ| is the outward normal direction at the cell membrane. To model the different cell migration modes, we implemented three different spatial distributions:
1. For fan-shaped cells, we implemented a stationary distribution for both actin and myosin. Since our experiments showed that LimE and myosin are spatially excluded and that myosin was localized in the back of the cell, we restricted myosin to a narrow band at the back of the cell: ρ_m = 1 when x − x_c < β, where x is the coordinate in the direction of motion, x_c is the center of mass of the cell, and β is a negative constant. Actin fills a ring with a width of 2 μm that surrounds the rest of the cell.

2. In the experiments, the oscillatory cells showed spatially homogeneous and temporally oscillating actin and myosin profiles near the cell periphery. In the model, we thus define an annulus with radius r_0, located at the membrane, in which actin shows oscillations. Specifically, we set ρ_a = ζ(r, r_0)[1 + sin(2πt/T)]/2 if sin(2πt/T) > 0 and ρ_m = 1 if sin(2πt/T) < 0, where T is a constant period, and ζ(r, r_0) = 0 if the distance to the center of mass r < r_0, and ζ(r, r_0) = 1 otherwise. To account for spatial heterogeneity in oscillatory cells (Appendix Fig S23), we introduced two patches of actin along the membrane, using the same method as in the amoeboid cell simulations, detailed below. These actin patches are synchronized, and both actin and myosin have the same oscillatory dynamics as described above.

3. For amoeboid cells, we implemented spatiotemporally heterogeneous distributions of actin and myosin. Specifically, and based on our experimental results, we assumed that actin polymerization and myosin are limited to two small patches (denoted by χ_a and χ_m) with radius r_2 close to the membrane, showing alternating oscillations. An additional myosin patch was generated at random positions within the cell. Furthermore, to capture the limited lifetime and spatial extent of a pseudopod, we assumed that both actin and myosin had a lifetime τ, where τ is drawn from a normal distribution with mean T and variance σ. Explicitly, for the protrusions, we used the following distributions: ρ_a = χ_a(r, r_1)[1 + sin(2πt/T)]/2 if sin(2πt/T) > 0 and ρ_m = χ_m(r, r_1)[1 − sin(2πt/T)]/2 if sin(2πt/T) < 0. Here χ_a(r, r_1) = {1 + tanh[3(r_2 − |r − r_1|)/ɛ]}/2 is a disk with a center position r_1 that is randomly drawn from the boundary points of the cell, and χ_m = χ_a(r, r_m − r_1), with r_m being the cell's center of mass. For the additional myosin patch, we simply use ρ_m = χ_m(r, r_1)[1 − sin(2πt/T)]/2.
Parameters for our simulations are given in Table EV1. The equations were solved on an n × m regular grid with size L_x × L_y. We denote the state of the system at time t = nΔt by φ^(n), u^(n). The φ-equation was solved with a forward Euler scheme:

φ^(n+1) = φ^(n) + Δt{−u^(n)·∇φ^(n) + Γ[ɛ∇²φ^(n) − G′(φ^(n))/ɛ + c^(n)ɛ|∇φ^(n)|]},

where ∇φ^(n) and ∇²φ^(n) are calculated by the Fourier transformation method, and the curvature term is calculated with a central difference scheme. The Stokes equation was solved with a semi-implicit Fourier spectral scheme to obtain u^(n+1). To do so, we first subtract the term 2ν∇²u from both sides of the Stokes equation to yield

ξu − 2ν∇²u = ∇⋅[ν(φ − 2)∇u + νφ∇uᵀ] + F_mem + F_area + F_act ≡ RHS(u, φ).

We solve this equation iteratively using the Fourier spectral method,

u_k^(n+1) = F[RHS(u^(n), φ)]_k / (ξ + 2νk²),   u^(n+1) = F⁻¹[u_k^(n+1)],

where u_k is the k-th Fourier coefficient, and F and F⁻¹ are the forward and inverse Fourier transformations, respectively. The iteration stops when max|u^(n+1) − u^(n)| < 0.01 max|u^(n)| or when the number of iteration steps exceeds 100.
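As a sketch of this semi-implicit step, the following MATLAB fragment solves ξu − 2ν∇²u = RHS for one velocity component on a periodic grid (the grid size, parameter values, and the placeholder right-hand side are assumptions, not simulation values from Table EV1):

```matlab
% One fixed-point iteration of the Fourier spectral solve described above.
N = 128; Lx = 50;                       % grid points and domain length (assumed)
k = (2*pi/Lx) * [0:N/2-1, -N/2:-1];     % FFT-ordered wavenumbers
[KX, KY] = meshgrid(k, k);
K2 = KX.^2 + KY.^2;

xi = 1; nu = 1;                         % friction and viscosity (assumed)
RHS  = randn(N);                        % placeholder for RHS(u, phi)
uHat = fft2(RHS) ./ (xi + 2*nu*K2);     % divide in Fourier space
u    = real(ifft2(uHat));               % updated velocity component
```

In a full simulation, this step would be repeated, rebuilding RHS from the latest iterate, until the stopping criterion above is met.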
Data availability
The datasets of the images in this study are available in the following database: https://doi.org/10.6084/m9.figshare.16826740. Computational code is deposited on https://github.com/Rappellab/Traction_force. Expanded View for this article is available online.
"Biology",
"Physics"
] |
Constructing a Predicting Model for JCI Return Using Adaptive Network-based Fuzzy Inference System
The high price fluctuations in the stock market make investment in this area relatively risky. However, higher risk levels are associated with the possibility of higher returns. Predictive models allow investors to reduce losses due to price fluctuations. This study uses ANFIS (Adaptive Network-based Fuzzy Inference System) to predict the Jakarta Composite Index (JCI) return. The predictors considered most influential for forecasting the JCI movement consist of the Indonesian real interest rate, the real exchange rate, the US real interest rate, and the WTI crude oil price. The results of this study point out that the best model to predict the JCI return is the ANFIS model with the pi membership function. The predicting model shows that the real exchange rate is the most influential factor in the JCI movement. This model is able to predict the trend direction of the JCI movement with an accuracy of 83.33 percent. It also has better performance than the Vector Error Correction Model (VECM) based on the RMSE value. The ANFIS performance is satisfactory enough to allow investors to forecast the market direction. Thus, investors can immediately take preventive action against any potential turmoil in the stock market.

JEL Classification: D13, I31, J22

DOI: https://doi.org/10.26905/jkdp.v23i1.2521
Introduction
As the nation with the largest emerging market in Southeast Asia, Indonesia has distinct attractions for investors. According to a survey conducted by the United Nations Conference on Trade and Development (UNCTAD), Indonesia was categorized as one of the four nations with the most promising investment destinations for 2017-2019 (UNCTAD, 2017). The influx of investment funds to Indonesia through the capital market attracts more investors than direct investment due to its better liquidity. Historical data reveal that the average growth of the Jakarta Composite Index (JCI) over the last ten years (2008-2017) was 13.1 percent. This value was beyond the average inflation rate, which was equal to 5.4 percent, positioning Indonesia as the nation with the second-fastest-growing stock exchange in the world after the Philippines.
However, the JCI, as the benchmark index of the Indonesian stock market, has a high level of volatility (Kartika, 2012), suggesting that investing in the Indonesian stock market not only provides an opportunity for large profits but also carries risks. Both internal and external factors can influence stock price movements in the capital market (Divianto, 2013; Patar, Darminto, & Saifi, 2014). Internal factors include corporate financial performance and internal company policies, which generally only affect the share price of the issuer. Meanwhile, external factors include macroeconomic conditions and energy commodity prices, which affect most of the issuers on the stock market (Krisna & Wirawati, 2013; Raraga, Chabachib, & Muharam, 2013). Therefore, these factors are taken into consideration by many investors when measuring the potential return and risk of an investment.
The high volatility in the Indonesian stock market in 2008 and 2015, which was accompanied by both domestic and foreign fluctuations in some macroeconomic variables, made some investors restless. The decline of the JCI in 2008 was due to the global financial crisis triggered by the housing credit crisis in America, as the property sector was hit by the high interest rate of the US central bank (the Fed), which forced the Fed to reduce its benchmark interest rate from 5.25 percent in June 2007 to 2.00 percent in June 2008.
In addition, the 2008 financial crisis also had an impact on world oil prices and the Indonesian Rupiah (IDR) exchange rate. The price of West Texas Intermediate (WTI) crude fell from 140 dollars per barrel in June 2008 to 44.60 dollars per barrel in December 2008. Consequently, the IDR exchange rate went down, especially in the second half of 2008, which was the culmination of the financial crisis. This poor condition was worsened by the high level of domestic inflation, which pushed real interest rates into negative territory. The decline in the JCI in 2015 also occurred along with both the depreciation of the IDR exchange rate and the issue of the Fed's interest rate increase. Understanding how macroeconomic variables and global economic conditions influence the movement of the domestic stock market allows investors and regulators to anticipate any possible turmoil in the future domestic stock market. Therefore, up-to-date research on the influence of domestic and foreign macroeconomic variables on the return of the JCI is important for setting a benchmark for the Indonesian stock market index. Several predicting models have been used to allow investors to forecast future stock price movements (Tung & Quek, 2011). These models, such as moving averages, exponential smoothing, and the Autoregressive Integrated Moving Average (ARIMA), have been quite successful in fulfilling this purpose (Anityaloka & Ambarwati, 2013; Lilipaly, Hatidja, & Kekenusa, 2014; Muslim, 2018). However, these models have some limitations, as most of their variables, i.e., the stock market index, come with non-linear relationships, making it impossible to predict the stock market with classical linear models (Yudong & Lenan, 2009). A quality predicting model helps investors increase potential profits on the stock market and avoid the risk of losses. Therefore, stock market prediction remains a fascinating topic to discuss even to this day.
Hypotheses Development
Several computational techniques have been extensively used for non-linear prediction. Artificial intelligence systems such as artificial neural networks (ANN), fuzzy systems, and various artificial algorithms have been widely used in finance (Atsalakis & Valavanis, 2009b). These models have been quite successful in solving problems regarding time-series financial prediction.
Adaptive network-based fuzzy inference system (ANFIS) is a non-linear model combining the strengths of fuzzy logic and artificial neural networks (Jang, 1993). This model uses a neural network learning method to adjust the parameters of the fuzzy control system; hence, it can improve the if-then rules of the fuzzy control system through its superior ability to describe complex system behavior (Jang, Sun, & Mizutani, 1997). By combining the strengths of both approaches, this predicting model is expected to yield better results. This assumption is supported by previous studies that report satisfactory results for models with a similar approach. One of these studies was conducted by Boyacioglu & Avci (2010), predicting Turkish stock market returns, the ISE National 100 index. This study used the ANFIS approach to establish relationships among variables, e.g., gold prices, exchange rates, interest rates, inflation, the reduction index, treasury bill interest rates, and the closing prices of the DJIA, DAX, and BOVESPA. The result of the study indicated that the ability to predict the stock market could be optimized with ANFIS. Its prediction accuracy was 98.3 percent for the ISE National 100 index.
Another study using ANFIS was conducted by Atsalakis & Valavanis (2009a) to predict stock prices on the NYSE and ASA. The result of the study indicated that the ANFIS method gave satisfactory results. The prediction was tested by the root mean square error (RMSE), the mean square error (MSE), and the mean absolute error (MAE). The input of the study was stock data, i.e., daily data on day t and the previous day (t-1). The accuracy rate of the prediction was 62.33 percent. Yunos, Shamsuddin, & Sallehuddin (2008) predicted daily movements of the Kuala Lumpur Composite Index (KLCI) using ANFIS and ANN methods. Some technical indicators were used as input variables, such as KLCI prices at times t and t-1, the 5-day MA, the 14-day RSI, and stochastic indicators. The results of this study showed that the ANFIS model was more reliable in predicting the KLCI than ANN. Fahimifar et al. (2009) compared the performance of non-linear models with linear models in predicting Iranian exchange rates against the US Dollar and Euro. The non-linear models used were ANFIS and ANN, while the linear models used were GARCH and ARIMA. The result showed that the non-linear models were better than the linear models; ANFIS was the best model for prediction, followed by ANN, GARCH, and ARIMA. Raoofi, Montazer-Hojjat, & Kiani (2016) predicted the stock market index of Tehran (Iran) using several prediction methods, i.e., ANN, ANFIS, fuzzy regression, and ARIMA. The input of the study was the TEDPIX daily price on days t to t-9. The result showed that the ANFIS method was able to provide the best predicting results compared to the other three methods. These previous studies support the ANFIS model as a reliable one for predicting the stock market. In addition, the performance of the model depends on its predictors. A majority of previous studies use technical factors as predictors; yet only a few studies use fundamental factors, even though fundamental factors trigger most of the turmoil in the Indonesian stock market.
Therefore, the ANFIS model is used in this study to predict the Indonesian stock market using macroeconomic variables such as Indonesia's real interest rate, the USD/IDR real exchange rate, the real American interest rate, and world oil prices. An increase in the interest rate in a nation directly influences its capital market. If interest rates increase, the capital market tends to decrease (Ali, 2014; Amarasinghe, 2015). This happens because investors attempt to transfer some of their funds to other types of investment. A study conducted by Harsono & Worokinasih (2018) showed a similar conclusion as well, in which the interest rate has a negative and significant effect on the JCI.
The exchange rate influences stock prices based on company type. Exchange rate depreciation negatively influences the stock prices of import-oriented companies, whereas it positively affects the stock prices of export-oriented companies. The relationship of exchange rate changes with the JCI therefore depends on which group of impacts is dominant. Historical data show that the decline of the JCI followed the weakening of the IDR exchange rate in 2013 and 2015, indicating a positive relationship between the decline of the IDR exchange rate and the decline of the JCI. Some studies also point out a negative and significant influence of the exchange rate on the JCI (Krisna & Wirawati, 2013; Harsono & Worokinasih, 2018). The interest rate of the Fed has an impact on the Indonesian capital market as well. An increase in the Fed's interest rate could lead to the withdrawal of foreign funds from the Indonesian stock market, causing turmoil in the JCI. A study by Miyanti & Wiagustini (2018) shows that the Fed's interest rate has a positive and significant effect on the JCI. Aside from being affected by the high interest rate of the Fed, the JCI is influenced by world oil prices, since Indonesia is an oil-importing and coal-exporting country whose commodity prices depend on world oil prices. For oil-importing countries, rising oil prices have a negative impact on their stock markets. Gumilang, Hidayat, & Endang (2014) and Kowanda et al. (2015) state that oil prices have a negative and significant effect on the JCI. However, the decline in world oil prices in 2008 actually led the JCI to plunge to its lowest level of the year. Hutapea, Margareth, & Tarigan (2014) identified the effect of oil prices on the JCI during the period 2007-2011, revealing that oil prices had a significant positive effect on the JCI.
ANFIS Architecture
ANFIS is an improved version of the fuzzy logic model, taken to the next level with a neural-network learning system that can transform expert knowledge into rules for non-linear relationships. Establishing the rules takes a long time, both in the selection of membership functions and in the determination of the weighting rules. The neural network makes the predicting model faster and more accurate, leading to better consistency in long-term prediction than classical econometric models, whose error rates increase with the length of the prediction period. The design of the ANFIS model is presented in Figure 1.

Figure 1. The architecture of the ANFIS Model
The model structure comprises two inputs and one output. The network consists of five layers and uses two kinds of nodes, namely circle nodes and square nodes. The circle node is also called a fixed node because it performs a fixed mathematical operation and involves no parameter adjustment. The square node is an adaptive node because its parameters are adjusted during the training process.
Input variables x₁ and x₂ enter Layer 1, where they are assigned to the membership functions, namely Aᵢ and Bᵢ. The membership degree can be computed using various membership functions, such as the generalized bell, which is defined through the following equation:

μA₁(x₁) = 1 / (1 + |(x₁ − c₁)/a₁|^(2b₁)),

where μA₁(x₁) is the degree of membership of input x₁ in membership function A₁ and is the output of Layer 1. Meanwhile, a₁, b₁, and c₁ are the parameters of membership function A₁, also called premise parameters. These parameters are optimized through the training process using the gradient descent method.

The output of Layer 1 is connected with a set of fuzzy rules whose influence propagates up to Layer 4. The node at Layer 2 is labeled Πᵢ. In this layer, a multiplication operation is carried out on the outputs of the nodes at Layer 1, resulting in a new degree of membership, the firing strength wᵢ, obtained through the following equation:

wᵢ = μAᵢ(x₁) · μBᵢ(x₂),

where w₁ is the output value of Layer 2 for node 1, μA₁(x₁) is the degree of membership of input x₁ in membership function A₁, and μB₁(x₂) is the degree of membership of input x₂ in membership function B₁. The output of Layer 2 then enters Layer 3. The node at Layer 3, labeled Nᵢ, normalizes the output of the previous layer by dividing the output wᵢ from Layer 2 by the total output of all nodes at Layer 2. The Layer 3 output, the normalized weight w̄ᵢ, is thus obtained through the following equation:

w̄ᵢ = wᵢ / (w₁ + w₂),

where w̄₁ represents the normalized weight for the 1st fuzzy rule, and w₁ and w₂ are the weights of the first and second nodes of the previous layer.

Moreover, the output from Layer 3 enters Layer 4. The node in this layer is adaptive because it contains parameters that are optimized through the training process. The Layer 3 output is multiplied by fᵢ based on the Takagi-Sugeno fuzzy inference system. The variable fᵢ in the first-order Takagi-Sugeno model is a linear combination of the input variables plus a constant term, fᵢ = pᵢx₁ + qᵢx₂ + rᵢ, whereas in the zero-order Takagi-Sugeno model, fᵢ is only a constant term. The variable fᵢ also indicates the weight of each fuzzy rule, because the outputs of the nodes in this layer jointly determine the output value of the ANFIS model: Layer 5 contains only a single node in the form of a sum operation. Therefore, the output of Layer 5 can be explained through the following equation:

f = Σᵢ w̄ᵢ fᵢ,

where f is the final output of the ANFIS system, and pᵢ, qᵢ, and rᵢ are parameters that will be optimized, known as consequent parameters. The parameter optimization uses the least-squares estimator (LSE), which is explained by the following equation (Jang, 1993):

X = (AᵀA)⁻¹AᵀB,

where X is the consequent parameter vector, A represents the Layer 3 output matrix, Aᵀ is the matrix transpose of A, and B is the actual output data vector.
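A compact MATLAB illustration of one forward pass through these five layers for two inputs and two rules (all parameter values below are arbitrary placeholders, not trained values):

```matlab
% Single ANFIS forward pass (first-order Takagi-Sugeno, 2 inputs, 2 rules).
x1 = 0.4; x2 = -0.2;                        % input variables
gbell = @(x,a,b,c) 1 ./ (1 + abs((x - c)./a).^(2*b));   % generalized bell MF

muA = [gbell(x1,1,2,0), gbell(x1,1,2,1)];   % Layer 1: memberships A1, A2
muB = [gbell(x2,1,2,0), gbell(x2,1,2,1)];   % Layer 1: memberships B1, B2

w  = muA .* muB;                            % Layer 2: firing strengths
wn = w / sum(w);                            % Layer 3: normalized weights

p = [0.5 -0.3]; q = [0.1 0.2]; r = [0 1];   % consequent parameters (assumed)
f = p*x1 + q*x2 + r;                        % Layer 4: rule outputs
y = sum(wn .* f);                           % Layer 5: overall output
```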
ANFIS Model Construction
The design procedure of the model begins with an initial stage of data collection consisting of input
data and output data in the form of monthly time series. Beforehand, the data are tested for stationarity. The stationarity test is carried out using the Augmented Dickey-Fuller (ADF) test. Data that are stationary at the 5 percent level are used as variables for the ANFIS model.
Moreover, the pair of input and output data is divided into two parts. The first part is training data for the period of January 2003 to December 2015. The second part is testing data for the period of January 2016 to December 2017.
The data used in this study are monthly time series of Indonesia's real interest rate, the real exchange rate of the IDR against the USD, the real US interest rate, world oil prices, and the JCI. The data are in the form of natural logarithms (ln), except for data already expressed in percent, such as interest rates. The real interest rate is the interest rate minus the inflation rate. Likewise, the real exchange rate is the nominal exchange rate adjusted for the inflation factors of each country. The Indonesian real interest rate is the Bank Indonesia interest rate minus Indonesian inflation.
The data on Indonesian interest rates and the inflation rate are retrieved from www.bi.go.id. The US real interest rate is calculated as the Fed's interest rate (the Federal Funds Rate, FFR) minus the American inflation rate. The FFR data were retrieved from fred.stlouisfed.org, while American inflation data were obtained from www.bls.gov. The calculation of the real exchange rate between Indonesia and the US (USDIDRR) refers to the nominal exchange rate of the US dollar against the rupiah. The nominal exchange rate data are closing prices on the spot market retrieved from www.investing.com. The oil price data refer to West Texas Intermediate (WTI) oil prices traded on the New York Mercantile Exchange (NYMEX). The WTI price data are also retrieved from www.investing.com. The JCI data, in the form of monthly closing values, are retrieved from finance.yahoo.com.
The design of the model in this study was carried out with the help of the Matlab 2018a application. The input and output data pairs are prepared in the form of a matrix, with the first four columns as input and the last column as output. The fuzzy control system is generated using the grid partition method. Ahead of the training process, the number and shape of the input membership functions must be assigned. This study uses two membership functions for each input and a constant output function. There are eight types of membership functions that can be selected, including trimf (triangle), trapmf (trapezium), gbellmf (generalized bell), gaussmf (Gaussian), gauss2mf (Gaussian combination), pimf (pi), dsigmf (difference of two sigmoids), and psigmf (product of two sigmoids). Each shape of the membership function is tested on the zero-order Takagi-Sugeno ANFIS model to identify the best model.
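A minimal sketch of this setup in Matlab 2018a syntax (the data matrices below are random placeholders for the macroeconomic inputs and the JCI return; the Fuzzy Logic Toolbox is required):

```matlab
% Grid-partition ANFIS with two pi-shaped MFs per input, constant output.
Xtrain = randn(155,4); ytrain = randn(155,1);   % placeholder training pairs
Xtest  = randn(24,4);  ytest  = randn(24,1);    % placeholder test pairs

initFis = genfis1([Xtrain ytrain], 2, 'pimf', 'constant');  % grid partition
opt = anfisOptions('InitialFIS', initFis, ...
                   'EpochNumber', 100, ...
                   'OptimizationMethod', 1);    % 1 = hybrid (LSE + gradient)
fis = anfis([Xtrain ytrain], opt);

yhat = evalfis(Xtest, fis);                     % predictions on test data
rmse = sqrt(mean((ytest - yhat).^2));           % test error measure
```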
The training process uses a hybrid learning system in which the output function parameters (consequent parameters) are optimized using the least-squares estimator (LSE) method, whereas the input function parameters (premise parameters) are determined using the gradient descent method. The training process ends when the smallest error value is obtained. The error measure is the root-mean-squared error (RMSE), obtained through the following equation:

RMSE = √[(1/n) Σₘ (ŷₘ − yₘ)²],

where n is the number of data sets in the test data, ŷₘ indicates the predicted value at data point m, and yₘ is the actual value at data point m.
Following the completion of the training process, the model is tested to see how it performs. This testing process also uses data pairs consisting of input and output, except that at this stage there is no parameter adjustment process as in the training stage. The output data of the test data are compared with the output of the model. The measure used at the testing stage is also based on the RMSE value: the smaller the RMSE value, the better the model performs. This process can be done simultaneously with the training
process. After selecting the model, the performance of the model is evaluated by comparison with a conventional econometric model, the Vector Error Correction Model (VECM). It is one of the forecasting models that fits the research design, so it can be used as a benchmark to evaluate the performance of the ANFIS forecasting model. The performance indicators are based on the RMSE value and the accuracy of the JCI trend predictions. The accuracy calculation method is explained through the following equation (Ahmad et al., 2015):

Accuracy = (number of predicted trends that match the actual trend / total number of attempts used in the test) × 100%.
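A short sketch of this accuracy measure (the return vectors below are hypothetical):

```matlab
% Trend accuracy: share of months where the predicted direction matches.
yActual = [ 0.012 -0.004  0.021 -0.013  0.008  0.017];  % actual returns
yModel  = [ 0.009 -0.002  0.015  0.004  0.011  0.020];  % model predictions

hits     = sign(yModel) == sign(yActual);   % correct direction calls
accuracy = mean(hits) * 100;                % accuracy in percent
```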
Results
The stationarity test was performed using the ADF test, comparing the ADF statistic with the MacKinnon critical value. If the statistic is smaller than the MacKinnon critical value, the data are stationary. Conversely, if the ADF statistic is greater than the MacKinnon critical value, the data are non-stationary. The results of the stationarity tests are presented in Table 1.
Based on the results, SBRID is stationary at the five percent level, while the rest are not stationary in levels. The other variables are stationary at first difference at the five percent significance level. Thus, the variables used for the ANFIS model include the JCI return as the output variable and, as input variables, the Indonesian real interest rate (SBRID), changes in the real exchange rate (DlnUSDIDRR), changes in the US real interest rate (DSBRUS), and changes in WTI oil prices (DlnWTI) (Table 2).
There are 179 pairs of data used in this modeling procedure: 155 pairs are used as training data, from February 2003 to December 2015, and the remaining 24 pairs are test data, from January 2016 to December 2017. The training data are used as the basis for model formation because the parameters of the model are obtained from them. The model obtained from the training process is further tested using new data pairs (the test data) by comparing the output of the model with the actual values. The testing stage is used to determine whether the model is reliable enough to predict the future or merely fits the data used to build it.
The construction of the predicting model in this study is presented in Figure 2. The model uses four inputs consisting of the Indonesian real interest rate (SBRID), changes in the real USD/IDR exchange rate (DlnUSDIDRR), changes in the US real interest rate (DSBRUS), and changes in WTI oil prices (DlnWTI). Each input is then arranged into two memberships, namely Lᵢ and Hᵢ, where Lᵢ indicates low and Hᵢ indicates high. The assignment of membership functions is based on the results of training and testing the data. The results of training and testing the models are presented in Table 3. Based on the results, the selected model is the one with the pi-shaped membership function. The pi-shaped membership function is explained by the following equation:

f(x; a, b, c, d) =
  0,                        x ≤ a
  2((x − a)/(b − a))²,      a ≤ x ≤ (a + b)/2
  1 − 2((x − b)/(b − a))²,  (a + b)/2 ≤ x ≤ b
  1,                        b ≤ x ≤ c
  1 − 2((x − c)/(d − c))²,  c ≤ x ≤ (c + d)/2
  2((x − d)/(d − c))²,      (c + d)/2 ≤ x ≤ d
  0,                        x ≥ d
Figure 2. ANFIS Prediction Model for JCI Return
The function f(x) gives the membership degree of input x, where x is the input value and a, b, c, and d are parameters that are optimized through the training process. The parameter values of the pi-shaped input membership functions of the ANFIS model after the training process are presented in Table 4. The grouping of input variables by membership function is illustrated in Figure 3. The construction of the model in Figure 2 shows that there are 16 fuzzy rules with fᵢ as output parameters. The constant in the output function is the weight of each fuzzy rule, also known as the consequent parameter. Information about the fuzzy rules and their consequent parameters is given in Table 5.
The weights of the fuzzy rules also describe how the JCI responds to various macroeconomic conditions, and the largest and smallest rule weights signal conditions with the potential for short-term fluctuation in the JCI. The condition that produces the most positive response is the 1st fuzzy rule, i.e., a low Indonesian real interest rate (< 3 percent), a low change in the real USDIDR exchange rate (< 0.03), a low change in US interest rates (< -0.9), and a low change in WTI oil prices (< -0.2).
Meanwhile, the condition that gives the greatest negative response is the 5th fuzzy rule, i.e., a low Indonesian real interest rate (< 3 percent), a high change in the real USDIDR exchange rate (> 0.1), a low change in US interest rates (< -0.9), and a low change in WTI oil prices (< -0.2). This shows that the real exchange rate variable (USDIDR) has the greatest influence on the JCI compared to the other three macroeconomic variables.
Testing the model on the JCI return using new data pairs shows that the ANFIS model can predict the trend direction of the stock market with an accuracy of 83.33 percent, better than the VECM model, whose accuracy is 62.50 percent. The error value of model testing based on the RMSE indicator is 0.0281 for the ANFIS model and 0.0302 for the VECM model. Compared with the range of the JCI return, these values correspond to ratios of 23.95 percent for the ANFIS model and 25.72 percent for the VECM model. The results show that the ANFIS model has better predicting performance than the VECM model. The comparison of the model's predictions with the actual values is shown in Figure 4.
Discussion
Since stock market movement involves many factors, it is difficult to forecast. However, the development of predicting models for the stock market remains an interesting topic because such models help investors earn large profits or avoid potential losses. The development of computing technology is very helpful in the development of predicting models. ANFIS is one such predicting model, and its implementation in Matlab 2018a makes its application easier and faster.
The if-then rules in the ANFIS model give different responses of the JCI return to changes in macroeconomic conditions. The fuzzy rules with the largest and smallest weights indicate the conditions most likely to produce turmoil in the JCI. This condition occurs when Indonesia's real interest rate is low and both world oil prices and American real interest rates show low changes. Under these conditions, the change in the real exchange rate of the USD against the IDR determines whether the turmoil has a positive or negative impact. If the change in the real exchange rate is low, the turmoil has a positive impact on the JCI. Conversely, if the change in the real exchange rate is high, the turmoil has a negative impact on the JCI, and on a larger scale than the impact under a low change in the real exchange rate. This indicates that the real exchange rate is the main factor with the greatest potential to trigger stock market turmoil, compared to the other three macroeconomic variables.
The results of this study indicate that the JCI movement is very sensitive to the stability of the IDR exchange rate. As the regulator, Bank Indonesia should maintain the stability of the IDR exchange rate to reduce potential turmoil in the domestic stock market. Capital market players should keep up to date with macroeconomic developments, especially those related to exchange rates, such as benchmark interest rates and world oil prices. If there is potential turmoil in the exchange rate, such as a planned increase in the Fed's interest rate, short-term investors should refrain from entering the stock market or relocate their investment portfolios to other, safer instruments.
The model evaluation showed that ANFIS has better performance than VECM, with an RMSE value of 0.0281. This result is not as good as the result of the modeling conducted by Boyacioglu & Avci (2010) on the Turkish stock exchange, which obtained an ANFIS forecasting model with an RMSE value of 0.0068. This discrepancy could be caused by several things, such as differences in market characteristics and in the input variables used. However, this result reinforces the statement that non-linear models have better forecasting performance than linear models, as found by Fahimifar et al. (2009) and Raoofi et al. (2016).
The performance of the predicting model is good enough to predict the future direction of the JCI, with an accuracy of 83.33 percent. The model can serve as an alternative model for predicting the direction of market movement. However, the use of this model as a benchmark to determine a price target is still very limited, because the normalized RMSE value is still quite high, at 23.95 percent. The smaller the normalized RMSE value, the better the predicting model.
Conclusion
The forecasting model for the JCI return obtained from this research is the ANFIS model that applies the pi-shaped membership function. The model has better predicting performance than VECM and is able to describe the direct effect of changes in macroeconomic factors on the JCI. The predicting model shows that changes in the real exchange rate have a much greater impact on the JCI return than the other three variables, indicating that the JCI movement is very sensitive to the stability of the exchange rate. Bank Indonesia, as the regulator, should maintain the stability of the Indonesian exchange rate. Investors should keep up with information on the latest macroeconomic conditions, mainly related to exchange rates, in order to anticipate stock market turmoil.
Limitations and suggestions
The ANFIS predicting model is good enough at predicting market movement: the model obtained is able to predict the JCI return trend with an accuracy of 83.33 percent. Although its performance is better than that of VECM, it has a limitation that restrains its use as a benchmark for determining price targets due to its relatively high error value. Therefore, the use of other non-linear predicting models opens possibilities for the further development of more advanced models in this field.
"Economics",
"Computer Science",
"Business"
] |
Comparative Analysis of the Life-Cycle Cost of Robot Substitution: A Case of Automobile Welding Production in China
Within the context of the large-scale application of industrial robots, methods of analyzing the life-cycle cost (LCC) of industrial robot production have shown considerable development, but there remains a lack of methods that allow for the examination of robot substitution. Taking inspiration from the symmetry philosophy in manufacturing systems engineering, this article establishes a comparative LCC analysis model to compare the LCC of industrial robot production with that of traditional production. This model introduces intangible costs (covering idle loss, efficiency loss, and defect loss) to supplement the actual costs and comprehensively uses various methods for cost allocation and variable estimation to conduct total cost and cost-efficiency analysis, together with hierarchical decomposition and dynamic comparison. To demonstrate the model, an investigation of a Chinese automobile manufacturer is provided to compare the LCC of welding robot production with that of manual welding production; methods of case analysis and simulation are combined, and a thorough comparison is made with related existing works to show the validity of this framework. In accordance with this study, a simple template is developed to support the decision-making analysis of the application and cost management of industrial robots. In addition, the case analysis and simulations can provide references for enterprises in emerging markets in relation to robot substitution.
Introduction
Since the 1970s, the industrial robot has been widely used in manufacturing processes in industrialized countries, subsequently becoming the core of modern manufacturing [1]. Since 2012, the global operational stock of industrial robots has grown at an average annual rate of 15.2%. Although the large-scale application of industrial robots in the manufacturing sector started late in China, it has sustained an average annual growth rate of 28%, much higher than that of other countries [2]. This trend is expected to continue in the next decade due to rising labor costs, the upgrading of the industrial structure, and fierce global competition [3]. However, in the context of "robot substitution", numerous problems have emerged in Chinese enterprises, such as the challenge of running industrial robots alongside other production systems, the difficulty of cultivating robot professionals, and the high expenditure after robot introduction. In some cases, compared with traditional production, the industrial robot seems unaffordable [4]. In fact, most enterprises focus on the initial investment, while the huge life-cycle costs (LCC) are ignored. In addition, due to the lack of a corresponding analysis approach, it is difficult to clearly estimate the cost of "robot substitution", resulting in decision-making risks and unmanageable costs [5]. To determine whether industrial robot substitution is economically viable, it is necessary to establish a comparative analysis framework of LCCs and compare the cost and cost-efficiency of the two manufacturing modes over their lifespans. Thus, the following questions were identified: What costs do the LCCs of the two manufacturing modes include? How do the LCC components change at different stages? How can the dynamic cost (the tendency of the total cost structure) of the two manufacturing modes over their life cycle be scientifically analyzed? Does industrial robot production have a cost advantage over traditional production in the life cycle? How does it perform over its lifespan in terms of total cost, unit cost, and cost structure? What factors mainly affect and restrict this cost advantage (if any)? Finally, what can we learn from it?
LCC is a well-known cost analysis method, first applied in the UK in the late 1950s, that considers all related expenses in support of one item, from its initial development to the final scrapping [6]. Traditionally, it is considered mostly as a support tool in investment in fixed assets, such as property, machinery, plant, and infrastructure. However, from a wider perspective, LCC is now viewed as a strategic tool for the lifecycle management of manufacturing assets [7], and it plays a larger role in decisions supporting asset procurement [8], asset configuration [9], and asset operations [5], contributing to the product-service system [10]. Scholars have developed various methods and discussed the LCCs of manufacturing systems, and the LCC of industrial robots is now becoming a hot topic in current research [11]. Some studies have discussed heterogeneous cost factors of industrial robots over their life cycle [12,13] and preliminarily established the cost structure of the LCC [5]. On this basis, a few LCC analyses are conducted with the help of simulation technology [12,13] and case studies [5,14]. In regard to "robot substitution", it can be found that some economic models have been developed to conduct a quantitative comparison of cost efficiency between industrial robot and traditional manual manufacturing systems [15][16][17]. Nevertheless, current research still lacks LCC comparison between industrial robot production and traditional manual production, representing a research gap.
It is widely considered that space-time symmetry is ubiquitous in the universe, and there are many symmetric problems waiting to be discovered and studied in every field [18]. To contribute to filling the gap above, taking inspiration from the symmetry philosophy in manufacturing systems engineering, this study further examines the LCC of robot substitution and presents a comparative analysis of the LCC of robot substitution. First, it establishes a comparative LCC model, integrating industrial robot production and traditional production into one framework. Then, it conducts a case analysis and simulations of a Chinese automobile manufacturer and compares the LCC of welding robot production with that of manual welding production. In this study, intangible costs (covering idle loss, efficiency loss, and defect loss) are introduced to supplement the actual costs, constituting a relatively complete cost breakdown structure. Meanwhile, various methods are comprehensively used for cost allocation and variable estimation in order to conduct total cost and cost-efficiency analysis, together with hierarchical decomposition and dynamic comparison. The study involves the symmetry and asymmetry of the following: industrial robot production and traditional production in terms of "robot substitution", the actual and intangible costs, the LCC structure, and the manufacturing system and its external environment.
This study makes the following contributions. (i) Previous studies mainly focus on LCC analysis of industrial robots, paying more attention to the total ownership cost over the life cycle [5,12], while the current study is the first study that conducts comparative LCC analysis of robot substitution, providing a comparative analysis framework of dynamic costs for the two manufacturing modes. (ii) Existing LCC models of industrial robots mostly highlight the actual costs. Although the cost factors related to intangible loss in production have received extensive attention, intangible costs have not been clearly defined and measured [5,13]. This study integrates the actual and intangible cost (covering idle loss, efficiency loss, and defect loss) to form a complete LCC structure, which helps to clearly reflect the dynamic cost-efficiency of robot substitution. (iii) Considering the availability and integrity of the database, most of the existing literature uses simulation technology to estimate the LCC of industrial robots [11], and empirical analysis of the cost efficiency of robotic implementation is relatively scarce [14]. Combined with case analysis and simulation, this study performs a comparative cost-benefit analysis of robot substitution with the help of operation records over the life cycle, and it obtains some valuable results. (iv) Most of the relevant research comes from industrial countries, and there is little literature regarding emerging markets [5]. This paper conducts a case analysis and simulations of a Chinese manufacturer, which enriches the relevant literature. The structure of this paper is arranged as follows. After the introduction, this paper provides a theoretical background. Then, the materials and methods are discussed. Afterwards, a case analysis and simulations are conducted. The results are then presented and discussed, and the conclusion is obtained from the perspective of management.
Cost Breakdown Structure of LCC
LCC mainly focuses on the process of obtaining and applying fixed assets [10], which requires full consideration of the costs associated with acquisition, ownership, usage, and subsequent disposal [19]. All of the disbursements during the life span, including demand assessment, concept, design, manufacturing, operation, and disposal costs, etc. [8], can be divided into four categories, namely research and development cost, production and consumption cost, operating and maintenance cost, and retirement and disposal cost [20]. The specific breakdown depends on the cost characteristics, the information heterogeneity, and the available data [21]. Previous studies show that in the LCC structure, the utilization cost is much higher than the acquisition cost, and a longer life cycle corresponds to a lower unit total cost as well as a higher average return [22]. Traditional LCC models focus on the actual cost and ignore the intangible cost [8]. However, when comparing costs among projects, the productivity loss should not be neglected [23]. It is recognized that when an asset fails, the potential output lost while the asset is out of service should be quantified [24,25]. Moreover, additional resources are utilized to maintain a smooth manufacturing process in order to ensure operational reliability [26]. In addition, substandard product quality or reduced productivity may cause a huge amount of waste [25]. Furthermore, some business uncertainties and decision risks can lead to production disruption, leaving equipment and workers waiting [27]. Based on the literature above, the potential productivity loss, or "gains from cost savings", should be considered separately and clearly classified as intangible cost [23]. Hence, we can obtain a clear outline of the cost breakdown structure, which divides the total cost into actual cost and intangible cost. The actual cost can be subdivided into several components according to the characteristics of the LCC structure, and the intangible cost mainly refers to the potential cost caused by idle loss, efficiency loss, quality loss, etc. [15,16,28].
Many studies have discussed the cost breakdown structure of the automatic manufacturing systems related to industrial robots. One representative study builds the LCC framework of heavy machinery systems, divides the total cost into ownership and operation costs, subdivides the operation costs into explicit and stealth operation costs, and categorizes the related cost factors [29]. Another study examines the LCC of computer information systems; decomposes the total cost into initial, operation, and disposal costs; and analyzes their drivers [30]. Another study establishes an LCC framework of industrial robots; divides the total cost into acquisition, operation, and disposal costs; summarizes various cost drivers; and examines the quality cost factors [5]. Some scholars investigate the LCC of robot systems; argue that the costs in the debugging, upgrading, maintenance, and disposal processes can be easily ignored; and examine the related cost factors in depth [12,13]. Some scholars have examined the LCC of electronic equipment components [31], analyzing in depth the disposal costs, including the costs associated with depreciation and the expenses related to disposal, while others have investigated the LCC of cloud computing services, with emphasis on the relevant costs of equipment procurement [32]. In addition, intangible costs are thoroughly examined in some LCC models, and their cost factors are discussed [9,14,29]. Given the similarities in the cost structures, most of these frameworks have defined and subdivided acquisition, operation, maintenance, and disposal costs, as well as intangible costs, forming a relatively clear framework. However, compared with the actual cost, the intangible cost has not been clearly defined [8]. Overall, there is still a lack of an integrated framework spanning actual and intangible costs, making comparative analysis difficult.
Cost Factors Related to the LCC of Robot Substitution
Industrial robot production refers to the manufacturing mode that uses highly integrated automation equipment, while traditional production is the manufacturing mode that mainly relies on manually operated machines [33]. Scholars emphasize that the industrial robot, as a special piece of equipment, has a high total ownership cost and investment risk over its life cycle. They argue that some related costs can be easily ignored and examine the cost factors of LCC in depth [12]. The LCC of industrial robot production needs to be adequately evaluated with respect to the life-cycle production task, which is characterized by the product properties, manufacturing process, technology steps, output scale, and lot sizes, as well as the corresponding resource consumption [11]. During investment, the industrial robot is designed according to the manufacturer's needs, and components are purchased or developed and then integrated. This life-cycle phase continues from negotiation to acceptance tests, and the typical cost-relevant expenditures should be treated as investments and allocated over the life cycle [13]. At the beginning of the operation phase, a transient decrease in output or increase in scrap might result from the running-in process, which induces additional direct costs and opportunity costs. Moreover, another important cost factor is the training of the operators. After the running-in process, the benefits of the industrial robot materialize, as the increasing output, quality, or productivity produces cost savings [34]. Maintenance is considered a relatively independent life-cycle phase, since it interrupts the operation of the industrial robot. Cost considerations for regular inspections, repair work, and reconfiguration are significantly different from those of operation, since the related expenditures concern the guarantee of sustainable production over the life cycle [12,35]. At the end of the life cycle, the industrial robot will be decommissioned for the following reasons, among others: the operation and maintenance costs continue to rise, it does not fulfil the quality standards, or it is not feasible for new products when a new robot with a better cost-benefit ratio is available. Consequently, the disposal expenditure and the residual value should be considered [14].
For traditional production, labor is the most critical factor. Drawing on the LCC structure of equipment, the costs for labor are suggested to be divided into three basic categories: employment costs, which indicate administrative costs related to employees; operating costs, which are those associated with employee compensation; and work environment costs, which refer to the expenses related to the security and welfare of employees [36]. It is generally believed that the administrative costs included in production costs can be further divided into three sub-categories, namely recruitment costs, education costs, and additional costs (other management fees spent on workers) [37]. Moreover, employee compensation, including salary, performance pay, overtime pay, subsidies, etc., should be regarded as a direct production cost [38]. Furthermore, the welfare costs, which are the expenses spent on employees, such as social security, medical insurance, retirement, vacation, sick leave, paid vacation, and housing, should be included in indirect production costs [39]. In addition, considering the low value of manually operated equipment, all production costs other than labor costs and administrative costs can be lumped into operation cost, covering direct materials and related overheads. Examples include materials, power, depreciation of buildings and equipment, maintenance, and so on [40-42].
From the literature, we summarize the LCC-related cost factors of the two manufacturing modes and clarify the key factors that the LCC framework of robot substitution must attend to, laying the groundwork for model construction.
Cost Measurement and Estimation Methods
In the field of engineering economics, the net present value (NPV) method is widely used to analyze the costs and benefits of projects [43]. It emphasizes the time value of money, reflecting future inflation and interest rates, with the present value representing the amount that would have to be reserved today to cover a future expenditure. In this approach, expenditures at different times over the life span can be converted into present values using an appropriate discount rate.
There are three representative methods for estimating costs, namely estimating by engineering procedures, estimating by analogy, and parametric estimation, which apply to heterogeneous costs [44]. Actual costs are suited to estimation by engineering procedures, such as the process cost method and the operation cost method. The former focuses on the impact of processes on the overall cost, which is reasonably distributed to each link [45], while the latter emphasizes the "cost driver" in the course of operation and accurately accumulates and allocates the resources consumed in the production process [46]. According to ISO standards, the accelerated depreciation method can be used for high-tech equipment, while the composite-life or annual-output-proportion method of cost allocation is adopted for general electromechanical equipment. Expenditure on technician training, considered an investment in human capital, can be depreciated together with the high-tech equipment, while technician compensation, energy consumption, consumables, and other expenses are regarded as current production costs. In addition, the impairment and disposal expenses incurred when scrapping assets are apportioned in proportion to each year's output [47].
Furthermore, intangible costs are suited to the methods of analogy and parametric estimation. One of the best-known tools is overall equipment efficiency (OEE), which focuses on the factors of availability, performance, and quality [48]. The value-system cost method facilitates estimation of the value-added loss [49], while the activity-based costing method establishes activity-based cost standards and can measure the opportunity loss based on time [50]. All of these approaches can be adapted to estimate the intangible costs arising from equipment idleness, work inefficiency, and substandard quality [14]. Parameters representing a probability or a cost rate over the life cycle are mainly estimated with probabilistic/stochastic methods. Most studies of industrial robots apply computer simulation to estimate the relevant parameters [12,13,15]; however, considering the complexity of the manufacturing system, the accuracy of such simulations needs improvement [16]. In contrast, it can be more convenient and reliable to estimate from statistical analysis of operation records, although this method is limited by the availability of appropriate data [51]. Drawing on this discussion, we must first convert all expenditures over the life span into NPV. As for actual costs, we collect and allocate them using methods appropriate to the nature of each item. Finally, regarding intangible costs, we set reasonable parameters and estimate them with the help of operational records over the life cycle.
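For reference, the OEE tool mentioned above combines its three factors multiplicatively; in its conventional form:

```latex
\mathrm{OEE} = \mathrm{Availability} \times \mathrm{Performance} \times \mathrm{Quality}
```

so that any intangible loss channel (idleness, inefficiency, or defects) depresses the overall score.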
Materials and Methods
Given the complexity of the comparative LCC analysis of robot substitution, the theoretical framework is first established and the cost drivers are clarified. Second, the LCC model of robot substitution is constructed in accordance with the cost breakdown structure. We define formulas for the total cost, cost efficiency, and substitution coefficient of the two manufacturing modes that clearly reflect the cost composition, comparative advantage, and dynamic cost. Third, the measurement and estimation methods for the various costs are identified. Owing to the significant differences in costing, we adopt multiple approaches for heterogeneous costs. Finally, the case analysis and simulation methods are briefly introduced.
Comparative LCC Model of Robot Substitution
Combining a literature review with field investigation, we sort out the heterogeneous costs incurred during the life cycle of each manufacturing mode and summarize them into one comparative LCC structure, which also reflects the cost characteristics of the production procedure. The comparative LCC structure and cost drivers are shown in Table 1. The LCC structure of industrial robot production comprises investment, operation, maintenance, disposal, and intangible costs; that of traditional production comprises management, compensation, welfare, operation, and intangible costs. Here, C_R is the total cost of industrial robot production in terms of actual costs; C_RI is the investment cost, including items such as equipment price, transaction costs, loan interest, taxes minus subsidies, after-sales fees, etc. [5,9,12-14]; C_RO is the operation cost, including items such as operators' remuneration, training expenditure, expenditure on space, accessories charges, energy consumption, etc. [5,9,12-14]; C_RM is the maintenance cost, including items such as service fees, replacement-part prices, consumables charges, annual inspection fees, etc. [5,9,12,14]; and C_RD is the disposal cost, including demolition expenses minus residual value, etc. [5,9,14]. C_T is the total cost of traditional production in terms of actual costs; C_TM is the management cost, including recruitment expenditure, training expenditure, labor protection expenses, office expenses, etc. [36-42]; C_TC is the compensation cost, including basic salary, performance bonuses, overtime pay, work subsidies, etc. [36-39]; C_TW is the welfare cost, including social security charges, housing fund, trade union funds, daily welfare expenses, etc. [36-39]; and C_TO is the operation cost, including equipment depreciation, equipment maintenance, material consumption, energy consumption, expenditure on space, tool amortization, etc. [40-42]. In addition, CO_R and CO_T represent the intangible costs of the two production modes, respectively, covering production idle loss, production efficiency loss, product defect loss, etc. [15-17]. The total cost of robot production is shown in Equation (1), and the total cost of traditional production is shown in Equation (2).
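Equations (1) and (2) are not reproduced in this text. Consistent with the component definitions above, a natural reconstruction writes each life-cycle total as the sum of its actual cost components plus the intangible cost:

```latex
C_R = C_{RI} + C_{RO} + C_{RM} + C_{RD} + CO_R \qquad (1)
C_T = C_{TM} + C_{TC} + C_{TW} + C_{TO} + CO_T \qquad (2)
```

Whether CO_R and CO_T are folded into C_R and C_T or reported alongside them is presentational; the unit-cost decomposition below treats them as separate components.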
Cost efficiency reflects the productivity of cost expenditure. It is calculated as the ratio of the total cost, and of each of its components, to the output of the current period. The actual total cost is decomposed into components according to Equation (1) and divided by the output quantity, yielding the unit cost of each component. This is shown in Equations (3) and (4), where Q_R is the output of industrial robot production and AC_R is C_R per unit output of industrial robot production, and similarly for AC_RI, AC_RO, AC_RM, AC_RD, and ACO_R; Q_T is the output of traditional production and AC_T is C_T per unit output of traditional production, and similarly for AC_TM, AC_TC, AC_TW, AC_TO, and ACO_T.
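Equations (3) and (4) are likewise reconstructed from the definitions: every unit cost is the corresponding cost divided by the output of its mode,

```latex
AC_R = \frac{C_R}{Q_R}, \quad AC_{RX} = \frac{C_{RX}}{Q_R}\ (X \in \{I,O,M,D\}), \quad ACO_R = \frac{CO_R}{Q_R} \qquad (3)
AC_T = \frac{C_T}{Q_T}, \quad AC_{TX} = \frac{C_{TX}}{Q_T}\ (X \in \{M,C,W,O\}), \quad ACO_T = \frac{CO_T}{Q_T} \qquad (4)
```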
The substitution coefficient reflects the comparative advantage in LCC and is calculated as the ratio of the cost efficiencies of the two manufacturing modes. This is shown in Equation (5), where SC_RT is the substitution coefficient of the two manufacturing modes. When SC_RT > 1, industrial robot production has a comparative advantage over traditional production; when SC_RT = 1, the cost efficiency of the two is equal; when SC_RT < 1, traditional production is the more economical mode.
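Given that values of SC_RT above 1 favor the robot (i.e., the robot's unit cost is the lower one), the natural reconstruction of Equation (5) is the ratio of the two unit costs:

```latex
SC_{RT} = \frac{AC_T}{AC_R} \qquad (5)
```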
Cost Allocation and Variables Estimation
Taking into account the time value of money, all expenditures incurred during the life span should be converted to NPV before being carried forward to the actual cost. This is shown in Equation (6), where NPV is the net present value, PV is the actual expenditure, R is the discount rate (benchmark interest rate), and n is the year index within the life cycle.
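With these definitions, Equation (6) is the standard discounting formula:

```latex
NPV = \frac{PV}{(1+R)^{n}} \qquad (6)
```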
In the actual cost measurement, in order to accurately reflect the dynamic cost, the capitalized expenditure must be allocated reasonably to each year. As high-tech production equipment, industrial robots suit the accelerated depreciation method, so C_RI is allocated using the sum-of-years'-digits method. C_RO is treated as a current cost, while C_RM and C_RD are apportioned based on output. Traditional production follows a conservative method, in which the value of equipment is first depreciated according to output and then C_TM, C_TC, C_TW, and C_TO are accounted for directly as current costs. Furthermore, the intangible cost is estimated with the help of relevant parameters. Based on the literature discussed above, the estimation methods for CO_R and CO_T are developed, as shown in Equations (7)-(11), where T_I is the idle production time, T_M is the maintenance time, T_S is the saturated working time, r_I is the idle loss rate, r_E is the efficiency loss rate, r_U is the unqualified product rate, r_V is the value-added rate, AP is the value of the finished product, and ACA is the actual cost per unit output.
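Equations (7)-(11) are not reproduced in this text, so the sketch below is only one plausible reading of the definitions above, not the authors' exact formulas: the idle rate is taken as T_I/T_S, idle loss is priced at the value added foregone, efficiency loss at the actual unit cost, and quality loss at the product value. All names are illustrative.

```python
def syd_allocation(capitalized_cost: float, life_years: int) -> list[float]:
    """Sum-of-years'-digits allocation of the investment cost C_RI:
    year y (0-based) receives the fraction (life_years - y) / SYD."""
    syd = life_years * (life_years + 1) // 2
    return [capitalized_cost * (life_years - y) / syd for y in range(life_years)]

def intangible_cost(T_I, T_S, r_E, r_U, r_V, AP, ACA, Q):
    """Illustrative annual intangible cost CO (one reading of Eqs. (7)-(11)):
    idle loss + efficiency loss + quality loss."""
    r_I = T_I / T_S                   # idle loss rate (assumed definition)
    idle_loss = r_I * r_V * AP * Q    # value added foregone while idle
    efficiency_loss = r_E * ACA * Q   # production cost of lost efficiency
    quality_loss = r_U * AP * Q       # product value destroyed by defects
    return idle_loss + efficiency_loss + quality_loss
```

For example, a 15-year design life gives SYD = 120, so the first year absorbs 15/120 = 12.5% of the capitalized investment, matching the front-loaded profile that makes AC_RI fall quickly once output ramps up.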
Data Resources
Industrial robots are widely used in the automotive manufacturing industry, and welding robots are the most common. Hence, a Chinese automobile enterprise is selected as the sample, and the application of welding robots in its production is investigated. In this case, all products were initially welded with hand-operated welding machines. In 2006, the welding production line introduced 15 Hyundai HX165 electric welding robots. After that, the welding operation was divided into two parts: industrial robot welding and manual welding. The welding robots are responsible for 2454 welding points on the outside of the body-in-white, while manual welding covers 1418 welding points on the inside. The two manufacturing modes are fairly similar in terms of product attributes and quality requirements. By the end of 2019, these robots had been scrapped. The main machining cells of the two manufacturing modes are shown in Figure 1.
In this case, the industrial robots were introduced by financial leasing at a price of CNY 600,000 per unit and settled through installment payments. The designed life cycle of the industrial robot is 15 years; the actual life span of the welding robots ran from 2006 to 2019, lasting 14 years. Five engineers were trained to operate the welding robots. Regular maintenance was carried out every year, while special maintenance was carried out according to the production schedule. The industrial robots were scrapped one year ahead of schedule, with a small residual value and high dismantling costs. Meanwhile, the manual welding process was divided into 5 production teams with a total of 50 employees. Twenty spot welders, valued at CNY 27,000 per unit, are used in traditional production. The actual life cycle of these tools was synchronized with the industrial robots, and their whole value was allocated to operating costs without residual value. The data on actual costs originate from accounting records, and the data on intangible costs come from historical production records. All data were verified through document collection and on-site interviews.
Case Analysis and Simulation
The case analysis was conducted first, in which a comparative cost-benefit analysis of robot substitution was performed with the help of operation records over the life cycle. Simulations were then carried out to overcome the shortcomings of the case analysis and extend the model to more scenarios. Based on the comparative LCC model of robot substitution, an LCC system dynamics model was built in Vensim, with parameters set with reference to the empirical data from the case analysis. The LCCs of welding robot production and manual welding production were then modeled and simulated separately. The simulation focused on four heterogeneous factors affecting the LCC of robot substitution: the fluctuation of the production scale, the rise in labor costs, the number of robots deployed in industrial robot production, and the number of frontline workers deployed in traditional production. Each of the four factors was divided into two conditions, each of which is defined and explained in Table 2. Applying these heterogeneous conditions to the simulation yields the LCCs of industrial robot production in 8 scenarios, the LCCs of traditional production in 8 scenarios, and the comparative LCCs of robot substitution in 16 scenarios.
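The scenario counts follow directly from the factorial design: robot-production LCC depends on conditions I, II, and III, traditional-production LCC on I, II, and IV, and the comparison on all four. A few lines of Python (the labels are illustrative, mirroring the IxIIxIIIxIVx notation used below) make the bookkeeping explicit:

```python
from itertools import product

# Condition families: I = production scale, II = labor costs,
# III = number of robots, IV = number of frontline workers.
robot = [f"I{i}II{j}III{k}" for i, j, k in product((1, 2), repeat=3)]
trad = [f"I{i}II{j}IV{l}" for i, j, l in product((1, 2), repeat=3)]
comp = [f"I{i}II{j}III{k}IV{l}" for i, j, k, l in product((1, 2), repeat=4)]

print(len(robot), len(trad), len(comp))  # 8 8 16
```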
Comparison of the Total Cost
In the case analysis, the total cost and cost efficiency over the life cycle of the two manufacturing modes are computed separately, as shown in Table 3, from which several important findings can be drawn. Figure 2 presents the total costs of the two manufacturing modes in the case analysis, and Figure 3 presents those of the simulations.
As shown in Figure 2, C_R has two main components, C_RI and C_RO, which account for 35.73% and 39.76% respectively, both tied closely to the industrial robotic equipment. The simulation results are the same in most scenarios, as shown in Figure 3, and this is in line with the findings of Landscheidt [14], Dietz et al. [12], and Zwicker et al. [13]. Meanwhile, C_TO, C_TC, and C_TW account for relatively high proportions of traditional production costs, at 55.56%, 29.41%, and 13.97% respectively, suggesting that they are deeply affected by labor costs. The simulation results are again the same in most scenarios (Figure 3), similar to the prior studies of Adegbola et al. [38] and Barosz et al. [16]. Overall, the two manufacturing modes show heterogeneous total cost structures: robot production is typically characterized by a huge initial investment in manufacturing assets, while traditional production relies heavily on a skilled labor force [16]. Thus, the industrial robot creates high fixed costs and low variable costs compared with manual work. In addition, industrial robot production carries a large proportion of intangible costs, indicating that its relatively low flexibility leads to significant potential losses [11].
Comparison of Total Cost Efficiency
Figure 4 presents the comparison of the total cost efficiency of the two manufacturing modes in the case analysis, and Figure 5 presents that of the simulations. The case analysis shows that AC_R is 2.49 cents/piece, far lower than AC_T (7.72 cents/piece), highlighting the significant competitiveness of industrial robot production in terms of cost efficiency. The simulation results are the same in most scenarios, as shown in Figure 5, consistent with the findings of Adegbola et al. [38].
As shown in Figure 4, examining the internal structure, the labor-related costs of traditional production, AC_TC (2.27 cents/piece) and AC_TW (1.08 cents/piece), are much higher than AC_RI (0.89 cents/piece), which represents the share of the robots' investment cost borne by each product. This shows that the investment in industrial robots is diluted over the life cycle and holds a strong cost advantage over labor input; the simulation results are the same in most scenarios (Figure 5). In terms of the costs supporting the operational process, AC_TO (4.29 cents/piece) plus AC_TM (0.07 cents/piece) is much higher than AC_RO (0.99 cents/piece) plus AC_RM (0.2 cents/piece), indicating that the usage cost of industrial robots over the whole life cycle is significantly lower than the consumption of traditional production. Again, the simulation results are the same in most scenarios (Figure 5), similar to the prior study of Adegbola et al. [39]. Considering the intangible costs, ACO_R (0.39 cents/piece) is much higher than ACO_T (0.02 cents/piece). Thus, the impact of intangible losses on the cost efficiency of industrial robot production cannot be ignored, while the impact on traditional production is weak; the simulation results are the same in most scenarios (Figure 5). Nevertheless, the intangible costs of industrial robot production seem acceptable given its robust actual cost advantages [11].
Comparison of Intangible Cost Factors
From the perspective of the total cost factors in the case analysis, as shown in Table 3, r_V of the industrial robots is 37.2%, 13 times that of manual operation, showing the great productivity of industrial robots. In terms of invisible losses, r_I of the industrial robots is 0.472, much higher than that of manual operation, which indicates that industrial robots carry a huge idle-cost risk compared with traditional production. This is inconsistent with the findings of Glaser [52], which indicate that the utilization of industrial robots reached up to 90% while that of manual machines was only about 40%-60%; the difference may stem from heterogeneous manufacturing scenarios, including the difficulty of manual handling, automation levels, etc. [16]. Moreover, r_E of industrial robot production is 0.041, much higher than that of traditional production, presenting a larger potential efficiency loss. This is consistent with the findings of Barosz et al. [16], who show that the ratio of failure time to scheduled time is smaller for manual production than for industrial robot production; that is, manual production has better availability. On the contrary, Kampa et al. find that the availability of industrial robot production is 0.938, higher than that of traditional production (0.893); this instability may be due to the higher risk of system failure in industrial robot production [28]. In addition, r_U of traditional production is eight times that of industrial robot production, which reflects a huge potential quality loss. This is consistent with extensive research [15-17,28]: it is generally believed that, owing to the higher standard deviation in the Six Sigma sense resulting from inhomogeneous manual labor (e.g., differing employee qualifications, worst quality in the night shift), the quality level of traditional production is typically lower than that of industrial robot production. Figure 6 presents the comparison of intangible cost factors of the two manufacturing modes in the case analysis, and Figure 7 presents that of the simulations. In the simulation system, we set r_V with reference to the actual value. The output scale is predetermined, which determines r_I. Meanwhile, we simply set r_E and r_U as logarithmic functions of the output, fitted to the empirical data, and then obtained their values under heterogeneous scenarios. On this basis, we obtained the intangible cost rates. Hence, owing to differences in production scale, the intangible cost factors in the simulation present two typical patterns. Two scenarios, I1III1IV1 and I2III1IV1, were studied as representatives, as shown in Figure 7.
As shown in Figure 6a, both manufacturing modes initially have a large idle rate due to their low output. In particular, r_I of the industrial robots is as high as 84.15%, causing a huge idle loss. As production expands, r_I drops rapidly in both cases. When the output reaches 46,000 vehicles, traditional production reaches saturation and remains there to the end of the life span. However, even when the output peaks at 66,000 vehicles, r_I of the industrial robots is still at its lowest value of 26.37%, which continues to imply a considerable productivity loss. This is similar to the simulations under the condition of a uniform and stable production scale, as shown in Figure 7e, but significantly different from those under the condition of a continuously rising production scale, as shown in Figure 7a. As shown in Figure 6b, r_E of both manufacturing modes rises with output expansion over the life cycle. The r_E of traditional production rises from an initial value of 1.60% to a maximum of 4.10%, while r_E of the industrial robots increases from 3.52% to 4.55%, indicating that industrial robots maintain better operational stability under large-scale production; however, compared with traditional production, more time and resources must be invested to keep operation smooth, resulting in larger efficiency losses. This is similar to the simulations under a uniform and stable production scale (Figure 7g) but significantly different from those under a continuously rising production scale (Figure 7c). As shown in Figure 6c, r_U of the industrial robots stays at a stable, low level over the life cycle, ranging from 0.11% to 0.20%, decreasing in the initial period and rising continuously later. By contrast, traditional production shows an obvious adaptation process: r_U decreases in the early stage and stabilizes at the level of 0.71%-0.75% later on. Industrial robots therefore have an undisputed advantage in product quality. This is similar to the simulations under a uniform and stable production scale (Figure 7f) but significantly different from those under a continuously rising production scale (Figure 7b). Considering r_V, r_I, r_E, and r_U together, as shown in Figure 6d, industrial robots always carry a considerably high intangible cost rate, which falls from 32.6% to 12.04% as output expands; in contrast, the intangible cost rate of traditional production is small. This is similar to the simulations under a uniform and stable production scale (Figure 7h) but significantly different from those under a continuously rising production scale (Figure 7d). Comparing the two conditions shows that idle loss resulting from insufficient output is the main source of the intangible cost of industrial robots, and the high value-added rate further amplifies it. Therefore, as a manufacturing asset with a huge initial investment and high productivity, the application of industrial robots carries a large potential loss risk.
Comparison of the Dynamic Cost Efficiency
Figure 8 presents the dynamic cost efficiency of industrial robot production in the case analysis, and Figure 9 presents that of the simulations. As shown in Figure 8, AC_R declines and then rebounds at the end: it starts at a peak of 8.13 cents/piece, reaches its lowest level of 1.85 cents/piece in 2015, and rises again to 2.18 cents/piece at the end. In the early stage, the rapid decline in AC_RI and ACO_R is the major driver. On the one hand, with production expansion the investment cost is rapidly diluted and its allocation to each unit of output shrinks; on the other hand, the potential capacity of the industrial robots is converted into actual output and idle losses are continuously reduced, which significantly lowers the unit cost and improves cost efficiency. In the later periods, although AC_RI continues to decrease and ACO_R stays low, AC_RO and AC_RM increase rapidly, driving the growth of AC_R. This reflects that, late in the life cycle, with the wear and aging of the industrial robots, the costs supporting their operation become increasingly prominent, which raises the unit cost and reduces cost efficiency. (Figure 9 caption, condensed: panels (b)-(h) present the simulated dynamic cost efficiency of industrial robot production in scenarios I2II2III2, I2II1III1, I2II2III1, I1II1III2, I1II2III2, I1II1III1, and I1II2III1, respectively.)
As shown in Figure 9, the simulated dynamic cost efficiency curves of industrial robot production present two overall evolutionary trends, with significant differences between the I1 and I2 situations. This indicates that the cost efficiency of industrial robots is most significantly affected by changes in production scale. Figure 9a-d show that in the four scenarios under condition I1, the shape of the dynamic cost efficiency curve is similar to that of the case analysis, reflecting the characteristics of rapidly expanding production. Figure 9e-h, in contrast, present smoother dynamic cost efficiency curves, reflecting uniform and stable production, quite different from the case analysis. In these scenarios, industrial robot production reaches its optimal state more quickly and better exploits its cost-efficiency advantages, maintaining a lower cost per unit product throughout the life cycle. Moreover, other things being equal, the simulations under conditions II1 and II2 are compared. The comparisons between Figure 9a,b, Figure 9c,d, Figure 9e,f, and Figure 9g,h show little difference in dynamic cost efficiency between the two conditions over the life cycle, indicating that the rise in labor costs has little impact on the cost efficiency of industrial robot production. Furthermore, other things being equal, the simulations under conditions III1 and III2 are compared. The comparisons between Figure 9a,c, Figure 9b,d, Figure 9e,g, and Figure 9f,h show that the unit output cost under condition III2 is significantly higher than under condition III1, indicating that cost efficiency is higher when a smaller number of industrial robots is deployed: at an unsaturated production scale, large-scale robot investment means more depreciation costs and idle costs. Figure 10 presents the dynamic cost efficiency of traditional production in the case analysis, and Figure 11 presents that of the simulations. As shown in Figure 10, AC_T traces an obvious U-shaped curve, declining continuously from 2006 to 2010 and rebounding from 2011 at a rapid growth rate. The main drivers of this trend are AC_TO, AC_TC, and AC_TW, with AC_TO accounting for the largest share. These variable costs show a running-in process at the beginning and then decline along the learning curve at an average rate of 8.7% from 2006 to 2010. From 2011, however, AC_TO starts to grow at an average annual rate of 9.2%, owing to rising raw material prices, equipment aging, increasing loss rates, job burnout, etc. Meanwhile, AC_TC and AC_TW keep increasing at an average annual rate of 12.3%, indicating that the rapid rise in labor costs is an important driver of the growth of AC_T. (Figure 11 caption, condensed: panels (d)-(h) present the simulated dynamic cost efficiency of traditional production in scenarios I2II2IV2, I1II1IV1, I1II2IV1, I1II1IV2, and I1II2IV2, respectively.)
As shown in Figure 11, the simulated dynamic cost efficiency curves of traditional production present two overall evolutionary trends, with significant differences between the II1 and II2 situations. This indicates that the cost efficiency of traditional production is most significantly affected by the rise in labor costs. Figure 11b,d,f,h show that in the four scenarios under condition II2, the shape of the dynamic cost efficiency curve is similar to that of the case analysis, reflecting the characteristics of rapidly rising labor costs. Figure 11a,c,e,g, in contrast, present smoother dynamic cost efficiency curves, reflecting relatively stationary labor costs, quite different from the case analysis. In these scenarios, traditional production can maintain a lower unit cost throughout the life cycle, since its main components are labor costs and operation costs, and the latter are relatively stable. Moreover, other things being equal, the simulations under conditions I1 and I2 are compared. The comparisons between Figure 11a,e, Figure 11b,f, Figure 11c,g, and Figure 11d,h show little difference in dynamic cost efficiency between the two conditions over the life cycle, indicating that fluctuations in production scale have little impact on the cost efficiency of traditional production. Furthermore, other things being equal, the simulations under conditions IV1 and IV2 are compared. The comparisons between Figure 11a,c, Figure 11b,d, Figure 11e,g, and Figure 11f,h show that the unit output cost under condition IV2 is significantly higher than under condition IV1, indicating that cost efficiency is higher when fewer frontline workers are deployed: since labor costs constitute the core cost of traditional production, deploying more frontline workers reduces cost efficiency. Figure 12 presents the comparison of the cost efficiency of the two manufacturing modes in the case analysis, and Figure 13 presents that of the simulations. As shown in Figure 12, in this case industrial robot production has a comparative advantage over traditional production across the life cycle, and the advantage becomes more significant with the rapid increase in output and labor costs. In the initial stage, the advantage of industrial robot production is not obvious, because its production capacity cannot reach saturation quickly, while traditional production shows strong production elasticity owing to its small fixed investment: SC_RT < 1 in the first one to two years, meaning industrial robot production demonstrates a lower cost efficiency than traditional production. However, as production expanded and experience with the industrial robots accumulated, the marginal cost of industrial robot production continued to decline and economies of scale became increasingly prominent; meanwhile, rising labor costs and workloads made traditional production increasingly difficult to manage, resulting in diseconomies of scale. Around 2006 to 2007, SC_RT = 1 and the cost efficiencies of the two modes are equal; from 2007 to 2015, SC_RT > 1 with an accelerating growth trend as output rises rapidly, and the advantage of industrial robots becomes increasingly significant.
After 2015, the growth rate of SC_RT slows down.
As can be seen from Figure 13, SC_RT in all scenarios shows continuous growth over the life cycle, but with significant differences across heterogeneous scenarios. According to its growth rate, SC_RT can be classified into four categories corresponding to four heterogeneous condition pairs, namely I1II1, I1II2, I2II1, and I2II2, which indicates that the fluctuation of the production scale and the rise in labor costs are the key factors affecting robot substitution. Figure 13e-h show that in the four scenarios under condition I1II1, SC_RT keeps a moderate growth rate over the life cycle, from an average of 2.54 at the beginning to 3.52 at the end. Overall, the cost efficiency of robot substitution is most significant under a continuously expanding production scale with rapidly rising labor costs; next under uniform and stable output with rapidly rising labor costs; next under a continuously expanding production scale with relatively stable labor costs; and last under uniform and stable output with relatively stable labor costs. Moreover, other things being equal, the simulations under conditions III1 and III2 are compared. The comparisons between Figure 13a,c, Figure 13b,d, Figure 13e,g, Figure 13f,h, Figure 13i,k, Figure 13j,l, Figure 13m,o, and Figure 13n,p show that SC_RT under condition III1 is generally higher than under condition III2. Combined with the previous analysis, this indicates that the cost efficiency of robot substitution is more significant when the scale of the robot fleet matches the actual production scale, since excessive input inevitably brings huge idle costs and investment depreciation losses. Furthermore, other things being equal, the simulations under conditions IV1 and IV2 are compared. The comparisons between Figure 13a,b, Figure 13c,d, Figure 13e,f, Figure 13g,h, Figure 13i,j, Figure 13k,l, Figure 13m,n, and Figure 13o,p show that SC_RT under condition IV2 is generally higher than under condition IV1. Combined with the previous analysis, this indicates that the cost efficiency of robot substitution is more significant when more labor is put into traditional production, since labor costs then rise inevitably relative to industrial robots. (Figure 13 caption, condensed: panels (c)-(p) present SC_RT in scenarios I2II1III1IV1, I2II1III1IV2, I2II2III2IV1, I2II2III2IV2, I2II2III1IV1, I2II2III1IV2, I1II1III2IV1, I1II1III2IV2, I1II1III1IV1, I1II1III1IV2, I1II2III2IV1, I1II2III2IV2, I1II2III1IV1, and I1II2III1IV2, respectively.)
Rising Labor Costs Promote Robot Substitution
From the results of the case analysis and simulations, we find that increasing labor costs are the direct motivation for robot substitution. In this case, compensation and welfare costs account for around 43.3% of the total cost of manual welding, as the salary of skilled welders increased from CNY 10,006 in 2006 to CNY 57,093 in 2019, an average annual increase of 14.3%. Meanwhile, owing to the high social security rate (45% of salary), the human-resource-related components rose sharply, pushing the unit labor cost from 2.22 cents/piece in 2010 to 4.6 cents/piece in 2019. Currently, the prices of industrial robots are steadily falling while labor costs grow systematically; thus, industrial robots are becoming price-competitive, especially in high-income countries [53]. However, in many developing countries, although industrial robots show high cost productivity, their LCC is not economical because of the relatively low wages [54]. A computer simulation comparing the two manufacturing modes across the EU-28 countries found that the effect of robot substitution is highly dependent on heterogeneous labor costs: industrial robots have better cost efficiency in high-income countries such as Germany than in lower-income countries such as Poland [15]. A recent empirical study of industrial robot applications in 42 countries reveals that increases in both unit labor costs and hourly compensation significantly promote industrial robot substitution [55]. In recent years, wages in China's labor market have continued to rise; manufacturing labor costs were estimated at USD 3.30 per hour in 2018, much higher than in India, Thailand, Malaysia, Indonesia, and Vietnam. With this in mind, enterprises may be motivated to pursue robot substitution in order to reduce production costs [4].
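The reported growth rate can be checked directly: compounding the 2006 salary over the 13 annual steps to 2019 gives

```latex
\left(\frac{57{,}093}{10{,}006}\right)^{1/13} - 1 \approx 0.143,
```

i.e., the stated 14.3% average annual increase.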
Increasing Demand Gives Rise to Robot Substitution
From the results of the case analysis and simulations, we can also see that the continuous expansion of the market is a decisive factor for robot substitution. In this case, with the rapid growth of China's automobile market, the output of the selected enterprise increased from 14,263 vehicles in 2006 to 62,988 vehicles in 2015, an average annual growth rate of 17.9%, with final output stabilizing at about 64,000 vehicles per year. The idle rate of welding robot production decreased from 84.2% to a low of 26.4%, greatly improving its cost efficiency. Meanwhile, although manual welding production has been working at full capacity, its cost efficiency is much lower; under these economies of scale, the ratio of the cost efficiencies of the two manufacturing modes reached as high as 3:1. This result is consistent with related research. Given large-scale economies, industrial robot production shows obvious advantages in maintaining an efficient production rhythm and ensuring low-cost organization and management, whereas the shortcomings of traditional production are fully exposed [56]. It is estimated that in some mass-production enterprises, industrial robots have replaced manual operations, resulting in a 30% increase in productivity, a 50% reduction in production costs, and an 85% increase in equipment utilization [52]. Under higher workloads and longer working hours, industrial robots are much more reliable than human operators and can achieve better cost efficiency [15]. A recent simulation comparison found that for long periods of three-shift work, thanks to better work organization and synchronization, the OEE of industrial robots is 48% higher than that of manual production, indicating that industrial robots have prominent advantages in availability, performance, and quality in mass production [16]. With China's sustained economic growth, potential high- and mid-end consumption is rapidly increasing, encouraging innovation and upgrading in the manufacturing sector and greatly stimulating robot substitution [4]. Thus, the large market capacity creates broad space for enterprises to accelerate robot substitution.
Robot Substitution Requires a Sustained and Effective Investment of Resources
The present comparative analysis of life-cycle costs and the simulations highlight that sustained and effective resources must be invested in robot substitution. In this case, the cost threshold for the initial introduction of industrial robots is high: in addition to the high price, after-sales service fees, and related taxes, there are various transaction costs, which account for 8.8% of the price, and financial expenses are another large expenditure, accounting for about 30.5%. Moreover, industrial robots require more intensive input of human capital. In the early stage, the enterprise emphasized cultivating robot operators: with an adequate budget, operators were dispatched for operation training and experts were hired to coach employees. Substantial resources were also invested to establish a comprehensive quality management system that strictly followed standards for production, technology, organization, process flow, and other aspects. In addition, competitive remuneration is paid to industrial robot operators, 1.56 times that of welding workers. This result is supported by related research. It is argued that industrial robots need to be specially designed and developed in the investment stage to fit specific manufacturing scenarios; to achieve high productivity, all elements must be effectively integrated; software systems need upgrading to facilitate reconfiguration; and emphasis should be placed on training to develop and retain skilled operators [15]. It has been found that, in typical cases, the initial investment associated with implementing an industrial robot project may equal the robot's price [53]. To maintain cost efficiency, operators should pay close attention to comprehensively controlling and supervising the industrial robots and continuously optimize the planning and scheduling of operations, and more resources need to be devoted to periodic maintenance services, inspections of the robots, and reconfiguration [28]. Therefore, compared with enterprises that retain traditional manufacturing, enterprises that prefer robot substitution should have a larger scale, a higher capital-labor ratio, and a more stable strategy [4]. This ensures that heterogeneous resources are continuously and efficiently invested over the full life cycle of robot substitution, thereby creating a competitive advantage in cost efficiency.
The Risks of Robotic Substitution Require Attention
The present comparative analysis of life-cycle costs and the simulations also show that more attention should be paid to the risks of robot substitution. One possible risk relates to the low flexibility that comes with the huge initial investment in industrial robots. In this case, as the results show, operation-related and labor-related costs are the most important components of the total cost in traditional production, while operation-related and initial investment costs are the most important in industrial robot production; this is also reflected in the internal structure of the unit cost. The comparative cost advantage of industrial robots is considered to come mainly from the reduction of manual labor per produced unit with the help of fixed-asset input [11]. However, the industrial robot is something of a double-edged sword, as it also carries higher fixed costs and lower adaptability to dynamic work environments than manual operation [16]. As this case shows, at the very beginning of the life cycle, owing to low output and the running-in process, the labor cost savings cannot offset the fixed-asset depreciation, leaving industrial robot production without a significant cost-efficiency advantage in that phase. In addition, industrial robots are generally tailored to a spectrum of tasks or even to a specific product, which creates a significant risk of obsolescence [35]. Another risk comes from the huge waste of potential production capacity caused by robot idle time. In this case, the idleness rate of the industrial robots over the life cycle is as high as 47.21%, ranging from 84.15% down to 26.37%. Given the high value-added rate of robot production (37.22% on average), the idle loss (17.57% of the actual cost) converts into a unit idle cost of 0.37 cents/piece, accounting for 14.90% of the unit product cost; the idle losses of traditional production, by contrast, are minimal. Related research suggests that this risk may be caused by unexpected changes in market demand [11] or by irrational investment decisions [4]. In addition, the system reliability of industrial robots may also pose a risk. In this case, expenditure on maintenance contributes 8.03% of actual costs, while maintenance also consumes a large amount of intangible resources, resulting in a certain efficiency loss (1.54% of the actual cost). Scholars argue that industrial robots, as complex manufacturing equipment with integrated systems, are at risk of system failure from minor problems such as broken parts, software failures, and operational errors [57]. Moreover, a failure in any process of an industrial robot manufacturing line can cause disturbances or even stoppages of the whole factory [28]. An incompetent operator can be replaced, but a fallible robot can only be sent for repair. Therefore, reliability is crucial to the cost efficiency of industrial robots compared with manual operation [16].
Conclusions
This paper examines the LCC of industrial robots and establishes a comparative LCC analysis model of robot substitution. The model integrates the LCC structures of the two manufacturing modes into one analysis framework and introduces intangible costs to supplement the actual costs, realizing a comparative LCC analysis of robot substitution. Moreover, it combines various methods of cost allocation and parameter estimation on a net-present-value basis to analyze the total cost and cost efficiency, together with hierarchical decomposition and dynamic comparison. The model integrates LCC research with mainstream approaches to economic and technical analysis, and it can be widely applied to similar manufacturing systems. Applying this model, this study investigates a Chinese automobile manufacturer and conducts a comparative LCC analysis of welding robot production and manual welding production by means of case analysis and simulation. As expected, the results show that, in the context of rapidly rising labor costs and expanding markets, robot substitution can bring enterprises cost advantages, which arise from the continuous investment of heterogeneous resources. However, the risks of robot substitution, such as low flexibility, idle waste, and system reliability, require more attention. Thus, adequate consideration of the long-term trends of external factors is recommended when making decisions about robot substitution. Moreover, more emphasis should be placed on improving productivity through resource allocation during operation. Furthermore, risk control of robot substitution should be taken more seriously, in order to discover and solve potential problems in a timely manner. Based on this study, a simple template is developed to support decision-making on the application and cost management of industrial robots. The case analysis and simulations can serve as references for enterprises in emerging markets considering robot substitution.
This study may have some limitations. (i) It is believed that the service life of a manufacturing asset determines its cost efficiency and productivity over its life cycle [58]; we should further improve the LCC model and place more emphasis on the optimum service life. (ii) It is argued that industrial robots are more cost-efficient in working scenarios with high-intensity production, precise and repetitive tasks, and strenuous, monotonous physical activities [59]. Moreover, the LCC of robot substitution may vary greatly with the heterogeneity of the situation; in the future, we intend to explore the application of this model in other working scenarios to enrich the comparative LCC analysis of robot substitution. (iii) A new generation of industrial robots is currently being developed and applied in various settings, promoting cost-effective robot manufacturing based on a variant product spectrum, small lot sizes, and more thorough automation [60]. Considering its new technical characteristics, we aim to explore the applicability of this model and continue to improve it.
Author Contributions: Methodology, writing-original draft and revisions, X.Z.; conceptualization, funding acquisition, writing-review and revisions, editing, supervision, and project administration, C.W.; review and revisions, editing, D.L. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported by the China National Social Science Fund (CNSSF)-the Major project "Research on the influence of the synergy between the material capital and the intellectual capital in economic growth" (Grant No. 13AJY004).
"Engineering",
"Economics"
] |
Secure Real-Time Chaotic Partial Encryption of Entropy-Coded Multimedia Information for Mobile Devices: Smartphones
The smartphone penetration rate continues to expand, from 44% in 2017 to an expected 59% in 2022, as reported by Strategy Analytics. At present, smartphones dominate global mobile traffic, which in turn is dominated by video communications. For mobile systems with limited power capabilities, processing real-time multimedia information with security attached represents a real challenge. In this work, we propose a high-performance encryption scheme capable of running on low-power smartphones without holding back video coding. We aim to destroy the meaning of the entropy-coded bitstream by inserting random bit errors that induce error propagation and impede the natural self-resynchronization process. The scheme consists of three main processes: 1) a new integer chaos-based Coupled Map Lattice (CML) for creating secure pseudo-random trajectories; 2) random bit flipping of the bitstream based on a Dynamic Reference Point (DRP) not exposed to attackers; and 3) random selection of CML byte-trajectories for both the DRP and bit-flipping processes, for increased security. Implementation of the scheme on smartphones with different CPU power capabilities shows excellent performance in handling high-bandwidth real-time video smoothly (one-to-one and group calls). The scheme provides highly scalable security, with the encrypted data volume fluctuating between 0.7% and 3% of the total compressed data (video and still images).
I. INTRODUCTION
The number of mobile phone users worldwide has increased almost 600% during the last decade, going from 1.06 billion in 2012 to 6.64 billion in 2022 [1]. Similarly, the global internet traffic share (dominated by desktop users until 2016) rose in such a way that, as of July 2021, 55.56% of all web traffic comes through mobile phones [2]. One of the main activities of mobile users is online video, accounting for 64% of traffic in 2014 and projected to grow to 82% in 2021, representing a million minutes of video content crossing the network every second [3]. Digital communications carry inherent risks: communication networks (wired or wireless) are vulnerable to attacks violating the user's right to privacy.
Compared to other communication media datatypes such as image, audio, graphics, and text, securing real-time video data demands addressing hard challenges on mobile devices, among the most important of which are: a) the vast amount of information involved; b) coding complexity; c) time processing constraints; and d) limited processing power, particularly on the low-to-mid-range smartphones that dominate the market worldwide according to IDC (International Data Corporation) [51]. To avoid the processing overhead of full-data encryption, Partial or Selective Encryption (SE) has been proposed as a viable alternative [4]-[9], in which only the most important information is encrypted. Relevant data involves our human perception of a particular media datatype (i.e., audio, image, video, etc.), which is transferred during compression to transform coefficients (DCT-based, wavelet-based, etc.) and to the subsequent bitstream via data transformations and variable-length entropy coding (Huffman and arithmetic codes), respectively. The main advantages of SE with respect to full encryption are: a) scalability; b) fast performance (varying with the complexity of the scheme); c) reasonable-to-high security (depending on the method and the percentage of the bitstream encrypted); and d) tunable perceptual quality for partial content exposure. SE can be applied to different data types: a) plain data (uncompressed domain, without further compression); b) transform data during the compression process, called Joint Compression-Selective Encryption (JCSE); and c) encoded data (after compression) [10]. SE over plain data has been applied in particular to plain images (without compression), wherein the relevant information (image bitplanes [11], [12] or regions of interest [13], [14]) that contributes most to the perceptual quality of the object in question is selected for encryption. Even though these schemes consider a subset of the plain data, there is no performance gain with respect to full encryption in the compressed domain (they may end up encrypting a higher percentage of data). Encrypting plain data is not recommended in practice because the compression ratio is severely affected. JCSE, on the other hand, has been successfully applied to all classes of multimedia objects, such as audio, image, and video; the main targets are transform coefficients and the encoded bitstream. In speech applications using the G.729 codec, for example, the most valuable parts of the bitstream for encryption include vector quantization indices, line spectral frequencies, quantized pitch period, and gain indices [7], [15]. MPEG4-Audio, on the other hand, offers scalable embedded coding, in which the lower-rate base layer represents the intelligible part of the speech to be encrypted [16]. The amount of encrypted data in speech coding is significantly reduced, to about 3%-45% of the total bitstream; however, not all JCSE schemes fall into the category of strong encryption [17]. The vast majority of JCSE schemes in the literature are aimed at securing image and video data. Relevant data to be encrypted may include one or more of the following blocks: transform coefficients, headers, intra-frames, motion compensation, entropy coding tables and indices, base and enhancement layers in scalable coding, etc. SE schemes over DCT-based compression standards (JPEG, MPEG-4, H.26X, etc.) are numerous.
They can be applied to practically any intermediate element of the encoding process, or even after encoding, as in our present proposal. In [18], a pseudorandom DCT-sign change and entropy-coding bit inversion to protect Regions of Interest (ROI) is proposed; the authors in [19] and [20] considered DC and AC shuffling only; a more complex scheme is presented in [52], where a subset of DC and AC coefficients is scrambled after zig-zag scanning and encrypted. Intra and inter macroblock encryption is considered in [21]; an optimized SE is applied to motion vectors and quantized coefficients in [53]; in [54], [57], the authors applied SE to the CABAC bitstream (MVD signs, non-zero TCs, etc.) in the scalable extension of the High Efficiency Video Coding (HEVC) standard; a fast video encryption for smartphones is proposed in [55], which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks (taking advantage of the error propagation of H.264); video slices are encrypted in [56] with two different encryption schemes, AES in cipher-feedback mode to maintain the exact same bit rate and a four-dimensional hyperchaotic algorithm for further protection; residual data such as runs and levels are encrypted in [22]. In the case of DCT-based scalable compression, different levels of information can be distinguished and selectively protected, such as base and enhancement layers, temporal scalability, and/or spatial/SNR scalability [23]. In wavelet-based formats (JPEG2000, SPIHT (Set Partitioning In Hierarchical Trees), etc.), selective encryption is commonly applied to the low-resolution sub-bands, wherein most of the energy is concentrated [10], [24]-[27], [58]. Significant parameters such as sign bits, refinement bits, significance of pixels, etc., can also be relevant for encryption, providing different degrees of security [27]. Within the same JCSE category, other encryption schemes secure the entropy-coded bitstream (inside the codec) in a variety of ways: i) codewords are replaced by other valid codewords of equal length in order to maintain video compliance without affecting the compression ratio [9]; ii) selected intervals of the bitstream are randomly swapped using randomized arithmetic coding (RAC), so that only a synchronized decoder is able to interpret the encoded sequence correctly [5]; iii) the bitstream is encoded using multiple, randomly selected Huffman tables [4]; and iv) the bitstream is partitioned into random blocks followed by a circular random rotation within each block [6].
The major concerns with JCSE schemes are that they are: a) codec intrusive; b) codec specific, implying strong dependencies on both the multimedia format (H.264, MPEG-4, JPEG, JPEG2000, etc.) and the multimedia object (image, audio, video, etc.); and c) detrimental to the compression ratio (except when the encryption is performed in the entropy coded bitstream without changing its length). As a response to these concerns, we propose a codec-independent partial encryption scheme with the following properties: it is non-intrusive (performed after the coding process), non-selective, and consequently format independent. We take advantage of the sensitivity of Variable-Length Coding (VLC) to bit errors in order to induce a loss of synchronization and, at the same time, impede the long-term self-synchronization capability of entropy coded bitstreams [28]. The frequency of the induced bit errors (or random bit flipping) depends on the Mean Error Propagation Length (MEPL) [29], which represents the number of affected codewords (error propagation) until self-synchronization occurs. Our new scheme, derived from our previous scheme in [17], introduces major changes that considerably simplify the encryption process, improve performance, increase security, and, more importantly, provide computational stability (see the Wobbling effect below). We aim to provide high-performance, secure, codec-independent SE for low-to-mid-power smartphones based on three major contributions: a) a new integer-based Pseudo-Random Number Generator (PRNG) for secure random bit errors in the bitstream; b) a clueless, phantom Dynamic bit-Reference Point (DRP) for the bit flipping; and c) random selection of byte trajectories for both the DRP and the bit-flipping processes (this step doubles the security of our previous scheme [17]). The justification of our new proposal is as follows. Our new PRNG (based on a Coupled Map Lattice, CML) works entirely in the integer domain. With this integer representation, we avoid both floating-point arithmetic (eliminating its high computational cost) and the so-called Wobbling Floating-Point Precision (WFP). WFP demands perfect coherence between cipher and decipher; otherwise the inversion of the encryption cannot always be guaranteed [30]. Despite IEEE 754 floating-point compliance, several aspects of floating-point operations depend on the system designer, which may yield different results for the same computation on two different computers. By using an integer-based CML, our partial encryption scheme becomes more accessible to mobile handheld devices, improving the performance of high-level languages such as Java (the main development platform on Android OS). Second, considering that the high complexity of our previous scheme served to repel known/chosen-plaintext attacks on the bit flipping and shuffling processes, we propose a new version of the bit flipping that makes use of a phantom DRP, which works as a variable reference location for the flipping of bits in the bitstream. The DRP is not directly exposed under a known/chosen attack, hence the name phantom DRP. Furthermore, we eliminate the shuffling process proposed in [17] (not robust to known/chosen-plaintext attacks) and include an optional, secure step in which chaotic trajectories are randomly selected in such a way that they may come from N different maps and from different iterations in time. Our scheme doubles the security of [17] while showing extremely fast speeds on different smartphone CPU technologies and Android OS versions.
To the best of our knowledge, our proposed scheme achieves the fastest encryption speeds reported in the literature for smartphones.
The rest of the paper is organized as follows. The next section describes the methodology, including both our previous and current schemes for comparison purposes. The security and performance of the proposed scheme are analyzed in section 3. Conclusions are presented in section 4.
II. METHODOLOGY
In this section we describe our simple but highly secure encryption scheme for compressed multimedia distribution (in particular video streaming) over low-power (quad-core CPU, 1 GB) to mid-power (octa-core CPU, ≥ 2 GB) smartphone technologies. Our scheme should be both easily implementable in high-level programming environments such as C and Java (which are available on mobile systems) and sufficiently fast for securing one-to-one and group real-time video communications. To accomplish this, we propose a new non-intrusive chaos-based encryption methodology that inserts artificial random errors in the VLC bitstream (Huffman or arithmetic codes) after the codec stage. These errors produce valid or invalid codewords of the same or different length (shorter or longer). The effects of errors that preserve the codeword length are local in the bitstream, but they propagate in the decoded data depending on the information carried by the corrupted codeword (AC, DC, motion vector, header, etc.). The effect of an error that changes the codeword length is more severe: it modifies the original codeword boundaries, making the reading process incorrect (error propagation) until self-synchronization occurs. The average number of affected codewords during error propagation is called the Mean Error Propagation Length (MEPL), an important variable for defining the bit-flipping frequency and the security of the transmitted data. MEPL has been extensively studied in [31]-[35], and in principle symbolic algebraic software is necessary for computing it. We follow Takishima et al.'s formula [35] for computing MEPL based on the crossover probability, which for a number of different VLC codes is ~3-4 codewords.
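As a small concrete illustration (ours, not a listing from the paper or from [35]), the bit-flipping block length used throughout the scheme follows directly from these quantities; the clamp reflects the 8 <= BF <= 128 range adopted later in section 2.2.2:

/* Sketch: bit-flipping block length from the MEPL-based formula. */
int bf_block_bits(double f, double Av, double MEPL)
{
    int BF = (int)(f * Av * MEPL);   /* e.g. f = 1, Av = 6, MEPL = 3 -> BF = 18 bits */
    if (BF < 8)   BF = 8;            /* bounds used in section 2.2.2: 8 <= BF <= 128 */
    if (BF > 128) BF = 128;
    return BF;
}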
For the sake of clarity, we briefly describe the scheme in [17], followed by a detailed description of our proposed scheme, which improves robustness and performance and eliminates the Wobbling Floating-Point Precision, allowing a perfect deciphering process (independently of the operating system and the system designer).
A. OUR PREVIOUS SCHEME
The scheme in [17] consists of three main processes: 1) random bit flipping, 2) packet division into S segments, and 3) packet-segment shuffling (Fig. 1). These processes depend on chaotic trajectories (or pseudo-random numbers) coming from a floating-point-based CML, as shown in Eq. 1. In particular, three chaotic trajectories are merged (XORed) to create a new trajectory of which only the 27 most significant bits are used for the encryption process (bit flipping and segment shuffling). These actions prevent the attacker from having complete knowledge of the system (we have simplified this process in our new proposal). Once the CML is defined and the entropy coded bitstream is received, the bit-flipping operation is applied. The objective is to diffuse and destroy the meaning of the compressed codeword sequence by flipping one bit every BF = f·Av·MEPL bits, where Av is the average size of the entropy codewords in bits (Huffman, arithmetic, etc.), MEPL is given in codeword units, and f ≥ (1/MEPL) is a tunable security factor. The smaller the value of BF, the higher the bit-flipping frequency and the corresponding security. Starting from the beginning of the packet, the location of the bit to be flipped is computed as follows: the bit-flipping location is referenced to the beginning of the packet (i·BF) in multiples of BF, and the flipped bit lies within the current block of BF bits (we replaced this mechanism with a more robust one, as discussed in the next section). Following the bit-flipping process, the packet payload is permuted by an S-way shuffling process. Here the payload is divided into S segments, where S is a security-control variable of the shuffling process. The value of S is randomly changed for every packet depending on the user-defined security level: low, medium, and high, representing 20 ≤ S ≤ 35, 36 ≤ S ≤ 50, and 51 ≤ S ≤ 65 segments, respectively. The higher the number of segments S in the packet, the higher the security level, since the brute-force complexity of de-shuffling the packet is S!. Once S has been computed, the segment size is Sg = L/S, where L is the packet length in bytes.
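A minimal sketch of this segment selection (our illustration; rnd() is a hypothetical stand-in for a CML-derived random value):

/* Previous scheme [17]: choose the number of shuffle segments S per packet. */
int segments_for_level(int level /* 0 = low, 1 = medium, 2 = high */,
                       unsigned (*rnd)(void))
{
    int lo = (level == 0) ? 20 : (level == 1) ? 36 : 51;
    int hi = (level == 0) ? 35 : (level == 1) ? 50 : 65;
    return lo + (int)(rnd() % (unsigned)(hi - lo + 1));
}
/* The segment size for a packet of L bytes is then Sg = L / S. */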
B. PROPOSED SCHEME
Our new partial encryption scheme is composed of three main processes: 1) a new integer-based CML, 2) the phantom DRP, and 3) a random byte-trajectory selection. With these processes, we manage to eliminate several computationally expensive steps of [17], such as the floating-point CML, the segment shuffling, the truncation of trajectories (to 27 bits), the random map access, and the 3-trajectory XOR computation, all of which affect the overall performance on low-power mobile devices. Table 1 shows the main differences between [17] and our new proposed scheme. A detailed description of each step is given next.
1) INTEGER-BASED COUPLED-MAP LATTICE (CML)
Most continuous iterative maps exhibit chaotic behavior, characterized by infinite periodicity and sensitivity to initial conditions. However, when implemented on digital computers, the state variable takes a finite number of values, leading to short limit cycles, non-ergodic trajectories, and degraded statistical properties such as the invariant probability distribution and correlation [36]. Extensive studies have been conducted to understand and improve the performance of Digital Generators of Chaos (DGC) [37]-[45] and their application to cryptography [46], [47]. Several techniques have been used to extend the periodic orbit length of DGCs; among the most important for the present work, and of primary interest in the field of chaotic cryptography, is the CML. CMLs are constructed from an N-network of dynamically evolving maps interacting through some coupling rule over a neighborhood:

X_{i,j} = (1 − ε) f(X_{i,j−1}) + ε H(X_{1,j−1}, . . ., X_{N,j−1}),

where X_{i,j} is the chaotic variable (or random trajectory) of the i-th (≤ N) map at state or iteration j > 0, f(X_{i,j−1}) represents each individual map (local term), and H is the coupling function (linear/nonlinear interaction term) with weights w_i such that Σ_{i=1}^{N} w_i = 1. By varying the strength of the control parameter ε, different phases may appear, such as localized chaos or spatio-temporal intermittent chaos [44]. That is, when the coupling weight is weak, the system can be regarded as a local map perturbed by contributions from other sites, thus maintaining its main individual properties. On the other hand, when the coupling weight is large, the system reaches an asymptotic collective behavior characterized by intermittent periodic chaotic cycles. There are several characteristics of CMLs that are attractive from the cryptographic point of view: a) the chaotic regime appears sooner than in a single map [44]; b) extended cycle length; and c) robustness to attacks. Regarding the coupling function H, two variants have been used for studying the dynamics of CMLs: local and global coupling. The former considers the effects of the nearest neighbors of a given lattice site, while the latter considers the interaction of each map with the ''mean field'' generated by all lattice sites.
Our proposed pseudo-random number generator, to be used in the bit-flipping and random byte-trajectory selection processes, is based on global coupling and integer arithmetic, and can be written in generalized form as

X_{i,j} = f(X_{i,j−1}) ⊙ H(X_{1,j−1}, . . ., X_{N,j−1}),    (3)

where N ≥ 6 is the number of maps and ⊙ represents a logical operator (XOR → ^, OR → |, AND → &, left-shift → <<, right-shift → >>, etc.), an arithmetic operator, or a combination of operators. For the purpose of this work, our final CML is defined as

X_{i,j} = f(X_{i,j−1}) + (ε_n & H), with H = X_{1,j−1} ^ · · · ^ X_{N,j−1},    (4)

where ε_n = 2^n − 1, 16 ≥ n ≥ 0, is a 16-bit random integer mask that can be fixed or dynamically changed on an iteration basis (increasing security), and H is a global coupling function based on the XOR operation over the N previous chaotic variables. ε_n adds the first n bits of the global coupling function H to the corresponding local map f(X_{i,j}), represented by the digitized Rényi map defined in [48], where b ∈ Z_{>0} is the control parameter and PR is the CPU bit precision (32 or 64 bits). It is important to point out that H can be modified to include previous plaintext/ciphertext values to induce diffusion in case of attacks, generating a completely different ciphertext with respect to the original output; plaintext/ciphertext feedback may be introduced into H for this purpose. The number of maps N (≥ 6) depends on the size of the system key K. Ten bytes are used for the initialization of each map: the first 4 bytes initialize the chaotic variable X, 4 additional bytes the parameter b, and 2 bytes the parameter m. For N = 6, the minimum K must have B = 480 bits. It is possible, though, for the system-key length to be 256 ≤ B < 480 bits; in this case we generate a CML with n < 6 maps and use its output to initialize the rest of the variables and parameters (including ε) until all 6 maps are created. The N PR-bit trajectories produced at each iteration j of Eq. 3 are used on a byte-by-byte basis for the DRP and the bit flipping (the idea is to get the most out of every trajectory). Each iteration of Eq. 4 produces Tr = N·(PR/8) byte trajectories, corresponding to Tr/2 flipped bits in the bitstream (2 bytes are used for flipping one bit). The {32,64}-bit trajectories are sequentially stored in memory and randomly retrieved at byte level, as shown in Fig. 2 for a 2-map CML example (discussed in the next section).
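A minimal C sketch of one iteration of this integer CML (our illustration: the exact digitized Rényi map of [48] is not reproduced in the text, so the b·X + (X >> m) variant below is an assumption, as is the update order):

#include <stdint.h>

#define N 6                                   /* number of maps */

/* Assumed digitized Renyi-style map: modular reduction comes for
   free from 32-bit unsigned overflow (PR = 32). */
static uint32_t renyi(uint32_t x, uint32_t b, unsigned m)
{
    return b * x + (x >> m);
}

/* One CML step: global XOR coupling masked to the low n bits (eps = 2^n - 1). */
void cml_step(uint32_t X[N], const uint32_t b[N], const unsigned m[N], uint32_t eps)
{
    uint32_t H = 0;
    for (int i = 0; i < N; i++) H ^= X[i];            /* global coupling over old state */
    for (int i = 0; i < N; i++)
        X[i] = renyi(X[i], b[i], m[i]) + (H & eps);   /* analogue of Eq. 4 */
}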
In order to increase the sensitivity of the encryption system to system-key changes, Eq. 4 is randomly iterated 20 ≤ RT ≤ 50 times, and the N-map output becomes the initial state of the encryption system. According to our experiments, RT = 20 is the minimum number of iterations required for a perturbed chaotic trajectory to diverge from its original trajectory when the magnitude of the perturbation is 1 (the minimum magnitude), guaranteeing that a one-bit change in K will affect the entire system output (ciphertext).
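In code, this warm-up is a short discard loop on top of the cml_step sketch above (drawing RT from the current state is our assumption; the text only fixes the 20-50 range):

/* Key-sensitivity warm-up: discard the first RT states. */
void cml_warmup(uint32_t X[N], const uint32_t b[N], const unsigned m[N], uint32_t eps)
{
    unsigned RT = 20u + (X[0] % 31u);     /* random 20 <= RT <= 50 */
    for (unsigned t = 0; t < RT; t++)
        cml_step(X, b, m, eps);
}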
2) BIT-FLIPPING
This step is aimed at destroying the meaning and synchrony of the entropy coded bitstream. Huffman codes tend to recover from errors after a certain number of codewords, unlike arithmetic codes, which exhibit poor resynchronization capabilities [5], [30]. Consequently, the frequency of the bit flipping depends on the selected VLC code and the corresponding MEPL value. Random errors are induced and propagated in the bitstream in such a way that it becomes impossible for an attacker to resynchronize back to the original sequence unless he finds the system key K. This is an effective method for hiding partial information. However, the scheme in [17] overexposes the locations of the flipped bits and the related CML pseudo-random numbers under a known/chosen attack. The reason is that every flipped bit is selected with respect to a fixed reference point along the packet, which happens to be a multiple of BF (knowing BF discloses the random bit-flipping number). We extend the bit-flipping process to make it more robust against known/chosen attacks by considering a phantom, or invisible, DRP. The DRP is not exposed to the attacker and does not reveal information about the random numbers involved in the bit flipping (its effect on security is discussed in section 3). The scheme consists of the following steps: 1) Define the mean error propagation length MEPL in codeword units, the security level f ≥ 1 (lower values represent a higher bit-flipping frequency), and the average entropy codeword size Av in bits, and compute the bit-flipping block length 8 ≤ BF = f·Av·MEPL ≤ 128. 2) For every CML iteration (Eq. 3), organize the N PR-bit output trajectories X_{i,j}, 1 ≤ i ≤ N, j ≥ 0, into an array of bytes 0 ≤ X[n] ≤ 255 of length LEN = N·(PR/8) (as in Fig. 2). 3) Do until the end of the input multimedia data (initialize n = DRP = rand = 0, iteration = 1): a. compute the next N PR-bit trajectories; b. draw the new DRP inside the current window and flip one random bit on its left side (if DRP is even) or right side (if DRP is odd), where the mod operator ensures the flipped bit lies between pDRP and DRP; c. update pDRP to DRP, or to the flipped position when the flip falls on the right side. Fig. 3 shows an example of the bit-flipping steps for BF = 10 bits. The first random DRP falls at bit number 6 (an even number) out of BF = 10; therefore one of the bits on the left side of the DRP will be flipped. In this case, the flipped bit is bit number 3, and pDRP is set to 6. In the second iteration, the new DRP can range over the bit locations (pDRP = 6, 2·BF = 20), randomly falling at DRP = 11 (an odd number), meaning that one of the bits on the right side of the DRP will be flipped. In this case the randomly flipped bit is bit 13, and the new reference point for the next iteration becomes pDRP = 13, the position of the flipped bit. If the flipped bit is on the right side of the DRP, then pDRP equals the position of the last flipped bit (see step c of the algorithm above). As can be seen, the range of the DRP (pDRP ≤ DRP ≤ iteration·BF) varies along the encryption process; iteration·BF is used as an upper boundary for the bit flipping, and no longer as a measure of the bit-flipping frequency. There is, however, a probabilistic relationship between the bit-flipping frequency and BF. Once a bit has been flipped, the next iteration·BF block may overlap with the previous block, so more than one bit is likely to be flipped every iteration·BF bits. The bit-flipping frequency (per packet) oscillates probabilistically between L/BF and L/2 (where L is the packet length in bits), the minimum and maximum bit-flipping frequencies respectively.
BF is inversely related, ''probabilistically speaking'', to the bit-flipping frequency: the smaller BF (or f), the higher the bit-flipping frequency.
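The following C sketch captures the mechanics just described (our reconstruction from the worked example, since the original step listing is incomplete; next_byte() is a hypothetical accessor for the randomly retrieved CML bytes):

#include <stdint.h>

/* Phantom-DRP bit flipping over a packet of nbits bits. */
void flip_bits(uint8_t *pkt, long nbits, long BF, uint8_t (*next_byte)(void))
{
    long pDRP = 0;
    for (long iter = 1; iter * BF <= nbits; iter++) {
        long hi  = iter * BF;                                /* window upper bound */
        long DRP = pDRP + 1 + next_byte() % (hi - pDRP - 1); /* pDRP < DRP < hi    */
        long pos;
        if (DRP % 2 == 0) {                    /* even DRP: flip left of DRP  */
            pos  = pDRP + next_byte() % (DRP - pDRP);
            pDRP = DRP;
        } else {                               /* odd DRP: flip right of DRP  */
            pos  = DRP + next_byte() % (hi - DRP);
            pDRP = pos;                        /* reference moves to the flip */
        }
        pkt[pos >> 3] ^= (uint8_t)(1u << (pos & 7));
    }
}

The two byte draws per flip match the statement above that 2 bytes of trajectory are consumed for each flipped bit.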
III. RESULTS
The performance of the proposed scheme is evaluated for different classes of smartphone architectures (Table 2) and programming languages (C and Java) under Android OS. The CML is set up according to Table 3. The packet (bitstream) length considered is 400 bytes, with average bit-flipping frequencies of 1/32, 1/64, and 1/128 (1 bit randomly flipped every 32, 64, or 128 bits respectively). The following tests are considered for the overall evaluation of the scheme: a) ciphertext (compressed video) and CML sensitivity to initial conditions; b) security analysis; and c) scheme performance. The deciphering process performs the same operations as the cipher; therefore both have the same time complexity.
A. CIPHERTEXT AND CML SENSITIVITY TO INITIAL CONDITIONS
In the previous section we pointed out the relevance of flipping at least one bit every MEPL codewords to destroy any possible resynchronization of the entropy coded bitstream. Is this sufficient from the security point of view? Before answering this question (see section 3.2), let us first visualize the effect of a bit inversion on compressed video data. Under random errors in the coded bitstream, data visualization is not always possible, because codewords may become invalid, terminating the decoding process. Precautions were therefore taken to insert bit errors on DC or AC coefficients that generate valid codewords, so that decoding can continue. The inversion of only one bit, along with error propagation, dramatically affects the quality of the decoded data, as shown in Fig. 4b. At higher error rates (or higher bit-flipping frequencies in our case), our partial encryption scheme completely transforms the compressed data into a non-decodable, noise-like pattern (Fig. 4c). In the case of an attack, our SE forces the attacker to guess the system key, guess the flipped bits (using original codewords to infer the error bits), or break the CML system through a known/chosen plaintext/ciphertext attack. As we will see next, the lowest-complexity attack on our scheme is through the system key. Another important property of the encrypted data at a high bit-flipping frequency is that the output (ciphertext) histogram is uniform and independent of the input histogram shape (Fig. 4d).
Important properties of chaos-based encryption systems are their sensitivity to initial conditions, randomness, and unpredictability, which are directly related to our proposed integer-based CML. We verify these properties using the statistical package developed by the National Institute of Standards and Technology (NIST) [49]. The NIST test suite consists of 15 tests formulated under the null hypothesis (H0) that a sequence (of the order of 10^3 to 10^7 elements) is random. The tests are based on a specified significance level 0.001 ≤ α ≤ 0.01, representing the probability of rejecting a sequence as non-random when it really is random, and on a P-value representing the strength of the evidence against the null hypothesis. If P-value ≥ α, then the sequence is deemed random. We performed over 200 different tests on Eqs. 3 and 4 for N = 6, with random initializations of the state variable X and the parameter b, and 5 ≤ j ≤ 10, for fixed ε = 255. All sequences individually passed each of the 15 tests with different strengths; the average output is shown in Table 4. We found that 5 ≤ j ≤ 10 is the best range for producing chaotic sequences; for j < 5 the results were not adequate. Now, is Eq. 3 sensitive to initial conditions? Fig. 5 shows the sensitivity of the CML to system-key changes, where only one of the N maps is plotted for clarity. The sequence with the original system key K is shown in black, the one with the most significant bit inverted in blue, with the least significant bit inverted in green, and with an intermediate bit inverted in red. As can be seen, all four trajectories diverge from the very beginning to the end of the process, ensuring that a system-key attack (section 3.2) will produce a totally different ciphertext. The same result is obtained when the modification (or attack) affects a map's parameters (b, j), intermediate trajectories (changing one bit of the i-th map trajectory), the coupling function H, or ε. In conclusion, the proposed CML is an excellent PRNG with extreme sensitivity to initial conditions. A more visual example is shown in Fig. 6, where the ciphertext (Fig. 6b) is deciphered with a wrong system key (least significant bit inverted), yielding Fig. 6c; the decoded image is not identifiable.
B. SECURITY ANALYSIS
We now discuss the order of magnitude required for breaking our scheme using both a bit-flipping brute-force attack and a known/chosen attack. A system-key brute-force attack is related to the key length in bits, with a user-defined minimum complexity of 2^B, B ≥ 250 (for N = 6, the system-key attack is 2^480).
1) BIT-FLIPPING BRUTE FORCE ATTACK
The attacker wants to find the flipped bits in the bitstream. As mentioned in section 2.2.2, our bit-flipping process involves a random bit reference point 0 < DRP ≤ 2^k, k = 8, selected from a variable-length window shifted along the bitstream (see Fig. 3). Once the DRP is defined, the bit-flipping position is another random number R ≤ 2^k on either the left or the right side of the DRP. Altogether, DRP and R have an average complexity of 2^(2k). The total number of bit flips (NBF) in a packet of length L bits is a random variable as well, with L/128 ≤ NBF ≤ L/8, so the complexity of a brute-force attack per packet can be expressed as 2^(2k·NBF), where BF is fixed over the entire encryption process. For P packets in total, the complexity becomes 2^(2k·NBF·P). The security is thus increased by a power of two with respect to [17], as shown in Table 5. The security provided by this scheme is tunable: it can be modified by increasing or decreasing the bit-flipping period (BF) or frequency (1/BF) through the value of f in BF = f·Av·MEPL, for f ≥ 1. This result can be used to answer our earlier question about how much security is provided by flipping at least one bit every MEPL codewords. Assuming for the moment that exactly one bit is flipped every BF bits (not true in our case, since 1/BF underestimates the real number of bits flipped in a packet), then for f = 1, Av = 6 bits, and MEPL = 3 codewords (following [35]) we get BF = 18 bits. For a packet length of 400 bytes, an average k = 6 (for representing BF), and NBF = L/BF = 178, the order of the attack becomes ~2^2136 (for just one packet), which is highly secure. It is possible to increase the security (if desired) up to our maximum allowable value of one bit flipped every BF = 8 bits, for a minimum attack complexity of ~2^4800. Security can also be decreased according to the user's needs, proportionally to NBF; in our case we consider ~2^300 as the lower security limit, corresponding to NBF = 25 flips (BF = 128 bits) per packet of 400 bytes. For an I-frame of size 2 KB (320 x 240, coded in the MPEG-4 format), the complexity of the attack at the lowest (maximum) security would be ~2^1500 (~2^24000), representing an attack on ~5 packets.
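These orders of magnitude are simple arithmetic on the quantities above; a small self-check (ours) reproduces the ~2^2136 figure:

#include <stdio.h>

int main(void)
{
    int L   = 400 * 8;               /* packet length in bits           */
    int BF  = 1 * 6 * 3;             /* f = 1, Av = 6, MEPL = 3 -> 18   */
    int k   = 6;                     /* bits to encode a position in BF */
    int NBF = (L + BF - 1) / BF;     /* ~178 flips per packet           */
    printf("NBF = %d, attack order ~ 2^%d\n", NBF, 2 * k * NBF); /* 2^2136 */
    return 0;
}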
2) KNOWN/CHOSEN PLAINTEXT ATTACK
In the case of known/chosen-plaintext attacks, the target is the CML, through vulnerabilities in the bit flipping (Eqs. 3 and 4). Even though the flipped bits are vulnerable to this attack (their positions are easily revealed), it is not easy to break the CML itself, because of the unknown DRP and the random selection of byte trajectories. Recall how the DRP is computed (section 2.2.2). Assuming the attacker knows the inverted bits in the packet, the first aim is to find the byte trajectories X[n] and BF.
The complexity of the first bit flipped in the packet is of order 2^(2k), which represents the joint complexity of BF and the byte-length trajectory (X[n]) deciding whether a bit is flipped to the right or the left side of the DRP. The attacker then needs to find 2N consecutive PR-bit trajectories (Eq. 3) and solve for the N pairs of parameters 1 ≤ b ≤ 2^PR (PR = 32 or 64) and m ∈ {1, 2, . . ., PR − 1} in the Rényi map (Eq. 4); even assuming ε and H are known, this determines the corresponding complexity. The effect of the random byte selection is an additional complexity that needs to be reflected in this count. Before that, the attacker must expose two consecutive sets of N trajectories and solve for b and m. The easiest way is to attack the system from the very first iteration, in which the attacker knows that the first NB = N·PR/8 byte trajectories may come from the first and/or second iterations of the CML (X[n] and Y[n]). After this iteration things get complicated: trajectories may now come from i ≤ N different maps and j ≤ IterationNumber different iteration times (see Fig. 2), which is a major problem since the goal is to extract bytes from two consecutive iterations. From the second iteration on, the attacker needs to figure out how many bit flips are needed until all byte trajectories in Y[n] appear again; this ensures that the attacker has at hand two consecutive sets of N trajectories for breaking the system. The aim is to bring together, in perfect order, these 2N trajectories in order to recover the original PR-bit trajectories. Following the above steps, we randomly draw NB bytes from X[n]. This is similar to the NB-face die problem: ''how many times must an NB-sided die be rolled until all sides appear at least once?''. We start by randomly calling the first byte from X[n]; then there are NB − 1 different random numbers we could call, taking on average 1/((NB − 1)/NB) = NB/(NB − 1) calls to get a different n from X[n]. The third random call requires NB/(NB − 2) calls, and the process continues until all NB calls are completed. For PR = 32 bits, the total expected number of calls (including the 24 bytes drawn from the first iteration) is E(NB) + NB = 90 + 24 = 114. So the attacker needs to track down 114 random calls (on average) to find every byte belonging to the first two consecutive iterations of the 2N CML trajectories. The good news for the attacker is that only one packet (the first) is needed to attack the system; the bad news is the complexity of the attack itself, which involves an additional permutation of 2N·(PR/8) bytes in order to find the right trajectories and determines the final complexity of the known/chosen-plaintext attack. Again, a much better option is to attack the system key. Note that the complexity of the known/chosen attack (and thus the security of the system) is not directly proportional (probabilistically speaking) to the bit-flipping frequency NBF, as it is for the brute-force attack. Therefore it is possible to use a wide range of bit-flipping frequencies without affecting the security of the system; for very low frequencies, the limit is the brute-force attack. The known/chosen-attack security can be increased by increasing the number of maps N in the CML (Eq. 3) and/or by using more bytes of X[n] and Y[n] (such as unsigned short or unsigned int) for the computation of the random numbers involved in the encryption.
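The expected number of draws is the classical coupon-collector sum; this small check (ours) reproduces the ~90 and ~114 figures for PR = 32:

#include <stdio.h>

int main(void)
{
    int NB = 6 * (32 / 8);                    /* N = 6 maps, PR = 32 -> 24 byte slots */
    double E = 0.0;
    for (int i = 1; i <= NB; i++)
        E += (double)NB / i;                  /* E(NB) = NB * H_NB */
    printf("E(NB) = %.1f, total = %.1f\n", E, E + NB);  /* ~90.6 and ~114.6 */
    return 0;
}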
C. SCHEME PERFORMANCE
We now describe the encryption speed of our proposed scheme on low- to mid-power smartphones under Android OS (see Table 2). Experimental tests show that our encryption scheme performs differently according to the arithmetic involved in the computation (integer or floating-point), the CPU type, the version of the operating system, and the programming language (Java or C).
1) FLOATING-POINT VS INTEGER ARITHMETIC
The first experiment evaluates the performance gain of our new integer-based CML (I-CML) against the floating-point-based CML (F-CML). The time span of five hundred thousand iterations was recorded in two different programming environments: Java under the Android RunTime (ART) and C under the Native Development Kit (NDK) (see Fig. 7). On low-power smartphones under Marshmallow (HTC and Samsung-GP), the C I-CML implementation came out ~9 times faster than the corresponding C F-CML, whereas the Java I-CML is only twice as fast as the Java F-CML. On mid-power smartphones under Nougat (G4+ and G5+) the same gain is maintained in C (~9), while a deep gap appears in Java, where the I-CML reaches up to ~21 times the speed of the corresponding Java F-CML implementation (a possible explanation for this behavior is discussed in the next section). Cross-comparing C and Java, the C I-CML is on average 46 and 14 times faster than the Java F-CML and the Java I-CML respectively. The migration from floating-point to integer arithmetic on smartphones provides a significant performance gain without degrading the chaotic properties of the CML, as discussed in section 3.1.
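A timing harness of the kind used for such comparisons is a few lines of portable C (our sketch; the paper does not show its measurement code, and icml_iteration/fcml_iteration are hypothetical wrappers around one CML iteration):

#include <stdio.h>
#include <time.h>

/* Time `iters` calls of `step` with the POSIX monotonic clock. */
double bench_seconds(void (*step)(void), long iters)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        step();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}
/* Usage: bench_seconds(icml_iteration, 500000) vs bench_seconds(fcml_iteration, 500000). */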
2) PERFORMANCE OF THE PROPOSED INTEGER-BASED ENCRYPTION SCHEME
The next set of experiments evaluates our integer-based encryption scheme implemented in both C and Java. The experiments consider High (HF), Medium (MF), and Low (LF) bit-flipping frequencies, which may also be regarded as ''security levels'' with respect to the brute-force attack (the higher the frequency, the higher the complexity of the attack).
In the case of known/chosen-plaintext attacks the story is different and beneficial to our proposed scheme: the complexity of the attack is independent of the bit-flipping frequency, which can therefore be reduced without sacrificing system security (see section 3.2.2). Something to keep in mind when using low bit-flipping frequencies (lower than 1 bit flipped every 128 bits) is the resynchronization capability of entropy coders, which may reveal (with very low probability) sporadic blocks of information, depending on the affected portion of the compressed video (transform coefficients, motion vectors, etc.). Table 6 indicates the recommended security levels in terms of the bit-flipping frequency; e.g., HF covers bit-flipping frequencies 1/32 ≤ HF ≤ 1/8, that is, one bit is flipped at least every 32 bits and at most every 8 bits (on average, since the variable-length DRP block is used). In our experiments we use the lowest frequency of each security range, that is, 1/32, 1/64, and 1/128 for HF, MF, and LF respectively. In particular, the bit-flipping frequency of 1/32 corresponds to Takishima et al.'s [35] general entropy-coding recommendation for avoiding natural bitstream resynchronization (MEPL ~3-4 codewords with an average codeword length of Av ~8 bits). This recommendation can be loosened according to the application's needs and/or the CPU strength; we set our lowest frequency limit to 1/128, corresponding to a minimum accepted brute-force security of 2^250. In general, high performance is observed in both C and Java for all bit-flipping frequencies and smartphone platforms; the only exception is Java on low-range smartphones (L-SM) under Marshmallow (HTC and Samsung-GP), as shown in Fig. 8. Average speeds are overwhelming, with an overall average (over all bit-flipping frequencies and smartphone platforms) of C = 544 Mb/s and Java = 115 Mb/s, for a C/Java ratio of 4.73. In particular, C offers an excellent cross-platform average (over smartphones) of 819 Mb/s, 425 Mb/s, and 218 Mb/s for LF, MF, and HF respectively. With variable CPU-power capabilities, C continues to offer excellent performance, with cross-frequency averages (over bit-flipping frequencies) of 421 Mb/s (min = 184, max = 762) on L-SM (HTC and Samsung-GP) and 554 Mb/s (min = 210, max = 1052) on M-SM (G4+ and G5+), for a ratio C_{M-SM}/C_{L-SM} = 1.31 (a 31% M-SM performance gain). The maximum encryption speed in C is ~1 Gb/s (gigabit per second), corresponding to LF on the G5+ smartphone.
Java performance is drastically different between L-SM and M-SM. On L-SM running Marshmallow, Java achieves a remarkably low performance average of 19 Mb/s (although still sufficient for real-time video communications), while on M-SM running Nougat there is a clear turnover: the average climbs to 212 Mb/s (min = 114, max = 363), a 1015% performance gain. A possible explanation for this behavior is the set of Java improvements due to the Just-In-Time (JIT) compiler introduced in Android 7.0; JIT complements ART's ahead-of-time (AOT) compiler, improves runtime performance, saves storage space, and speeds up applications (for more information see https://source.android.com/devices/tech/dalvik/jit-compiler). Comparing C vs Java, C outperforms Java by 2100% (C_{L-SM}/Java_{L-SM} = 22) on L-SM, while on M-SM the performance difference is considerably reduced, to only 161% (C_{M-SM}/Java_{M-SM} = 2.61); the performance gap between C and Java is much less apparent there. Nevertheless, on low-power smartphones with a 32-bit CPU and Android 6.0 or lower, our standard C encryption implementation is recommended.
TABLE 10. i5 platform specifications for the performance comparison between our scheme and Hamidouche et al. [54].
With the above encryption performance, secure video calls on smartphones can be handled easily, independently of the video codec technology (MPEG-4, H.264, etc.), the resolution, and the frame rate. Videoconferencing applications such as Skype have different bandwidth requirements depending on the quality and the number of participants in the video call. The download bandwidth for one-to-one up to group (7+ participants) video calls ranges from 0.5 to 8.0 Mb/s (Table 7) [50], which represents 0.2-3.7%, 0.12-1.8%, and 0.06-0.9% of the average C processing speed for HF, MF, and LF respectively, and 0.3-5.8%, 0.2-3.8%, and 0.1-2.7% for Java, considering mid-range smartphones only (G4+ and G5+). Despite the high bandwidth demands of group video calls, our implementation is able to encrypt in real time effortlessly.
Our final experiment analyzes the encrypted data volume, defined as the percentage of encrypted bits in the compressed video sequence. Our scheme is scalable, so in order to avoid security holes we endorse an optimal encryption-volume range of 0.7-3.0% (32 ≤ BF ≤ 128), with the corresponding security range shown in Table 8. Table 9 compares different selective encryption algorithms in the literature with respect to the encrypted information, encryption bitrate increase, codec independence, tunable encryption, encrypted data volume, and complexity overhead (the ratio encryption_time/encoding_time). Our scheme achieves the minimum scalable data volume securely encrypted and a better complexity overhead than any other scheme. Among all analyzed works, Hamidouche et al. [54] report the highest speeds for real-time applications in our comparison, delivering 824 Mb/s on a Core i5-4300M CPU @ 2.6 GHz. For comparison purposes, we ran our proposed cipher on a similar CPU configuration (see Table 10), obtaining speeds of [4000, 2035, 1180] Mb/s for the [HF, MF, LF] encryption modes respectively, representing 30%-80% faster encryption than [54]. It is fair to mention, though, that our scheme cannot partially reproduce (decode) entropy coded data without perfect deciphering (because headers are subject to encryption as well). However, our scheme has additional important qualities for data security: a) it is non-intrusive; b) it is codec independent (which is why we can easily report encryption performance on different encoders, such as H.264, JPEG2000, and more); and c) it does not affect the image/video compression ratio (no data overhead is added).
IV. CONCLUSION
We have proposed a new, highly effective chaos-based encryption scheme that can handle real-time video communications on a wide gamut of CPU-power capabilities, including low- to mid-power smartphone technologies. The aim is to diffuse bit errors along entropy coded bitstreams so that the decoding process is, probabilistically speaking, not possible. The scheme is entirely based on integer arithmetic, which eliminates the Wobbling effect (see section 1) and speeds up the encryption computation without compromising security. It provides the following advantages: a) excellent chaotic properties (verified with the NIST test suite); b) codec independence: it works entirely after entropy coding, so the input video (or audio, image, etc.) can be in any format (MPEG-4, H.264, etc.); c) scalable security; d) fast performance (in the C and Java programming languages); and e) a very low encrypted data volume. The scheme was implemented in C and Java, showing the highest performance reported in the literature for smartphones and capable of handling one-to-one and group video calls effortlessly.
"Computer Science",
"Engineering"
] |
On Hyperbolic 3-Manifolds Obtained by Dehn Surgery on Links
We study the algebraic and geometric structures of closed orientable 3-manifolds obtained by Dehn surgery along a family of hyperbolic links with certain surgery coefficients and, moreover, the geometric presentations of the fundamental groups of these manifolds. We prove that our surgery manifolds are 2-fold cyclic coverings of the 3-sphere branched over certain links, by applying the Montesinos theorem in Montesinos-Amilibia (1975). In particular, our result includes the topological classification of the closed 3-manifolds obtained by Dehn surgery on the Whitehead link, according to Mednykh and Vesnin (1998), and on the hyperbolic link L_{d+1} of d+1 components in Cavicchioli and Paoluzzi (2000).
Introduction
All manifolds will be assumed to be connected, orientable, and PL (piecewise linear). By theorems in [1, 2], any closed orientable 3-manifold can be obtained by Dehn surgeries on the components of an oriented link in the 3-sphere. Considering a hyperbolic link, the Thurston-Jorgensen theory of hyperbolic surgery in [3] implies that the resulting manifolds are hyperbolic for almost all surgery coefficients. Another method for describing closed 3-manifolds says that any closed 3-manifold can be represented as a branched covering of some link in the 3-sphere [2]. As above, if the link is hyperbolic, this construction yields hyperbolic manifolds for sufficiently large branching indices. According to the algorithm in [4], any manifold obtained by Dehn surgeries on a strongly invertible link can be presented as a 2-fold covering of the 3-sphere branched over some link. Thus we can construct many classes of closed orientable 3-manifolds by considering branched coverings of a link or by performing Dehn surgery along it. Moreover, branched coverings and Dehn surgery are nice methods for representing closed orientable 3-manifolds by combinatorial tools. See [5] for the many faces of cyclic branched coverings of 2-bridge links.
In this paper, we consider a family of links L_{m,d} for positive integers m and d as in Figure 1, where each L_i in a box denotes the 1/m-rational tangle. In fact, the link L_{m,d} has two components if d and m are both odd, and three components if d is even and m is odd. Moreover, L_{m,d} has d+1 components if m is even. Actually, L_{1,1} is the double link and L_{2,1} is the Whitehead link, which was considered in [6], while L_{2,d} is the hyperbolic link L_{d+1} considered in [7]. We note that L_{m,1} is a hyperbolic link for m > 1 [8] and that L_{m,d} is a hyperbolic link for m > 1, with the symmetries acting by isometries.
Lastly, for positive integers m > 1, n ≥ 3, and k ≥ 1, it was proved that a family of closed 3-manifolds M(2m+1, n, k), defined as identification spaces of certain polyhedra P(2m+1, n, k) whose finitely many boundary faces are glued together in pairs (another method of constructing 3-manifolds), gives the (n/d)-fold strongly cyclic coverings of the 3-sphere branched over the link L_{m,d}, where gcd(n, k) = d [8]. Since our link L_{m,d} is a hyperbolic link for m > 1, it is clear that M(2m+1, n, k) is a closed hyperbolic 3-manifold.
In this paper, we study the closed hyperbolic 3-manifolds obtained by Dehn surgeries on the components of these links. Moreover, we show that our surgery manifolds are 2-fold cyclic coverings of the 3-sphere branched over certain links, as in Figure 6. In particular, our result includes the topological classification of the closed 3-manifolds obtained by Dehn surgery on the Whitehead link, due to Mednykh and Vesnin [9], and on the hyperbolic link L_{d+1} of d+1 components in [7], which extends the Whitehead link in the case d = 1. See [5] for similar results obtained by Dehn surgery on the 2-bridge links.
Dehn Surgery on the Link L_{m,d}
We now consider the oriented link L_{m,d} in the 3-sphere illustrated in Figure 2, which is formed by a chain of components K_i between L_i and L_{i+1} for i = 1, . . ., d−1, together with K_d, plus a further circle Λ transversally linked with K_d. Let p_i/r_i be the surgery coefficient along the i-th component K_i of the chain for i = 1, . . ., d, and let a/b be the surgery coefficient along the transversal component Λ, where gcd(p_i, r_i) = gcd(a, b) = 1. On the other hand, we obtain the fundamental group π_1(S^3 \ L_{m,d}) for m even. Let x_i, y_i, z_{i,j}, x_d, y_d, u, and v be the generators of a Wirtinger presentation of π_1(S^3 \ L_{m,d}) according to Figure 2. Then the relations R_1, R_2, and R_3 are as follows:
(2.2)
For simplicity, we write the reduced form of R_1 where, for i ≥ 3, the relations take the corresponding form; similarly R_2 reduces, and hence we obtain the presentation ⟨G | R_1, R_2, R_3⟩, where G = {x_1, . . ., x_d, x_d, y_1, . . ., y_d, y_d, u, v} and R_1, R_2, and R_3 are as above. We denote by M(p_1/r_1, . . ., p_d/r_d; a/b) the closed connected orientable 3-manifold obtained by Dehn surgeries along the components K_1, K_2, . . ., K_d, Λ of L_{m,d} with surgery coefficients p_1/r_1, p_2/r_2, . . ., p_d/r_d, and a/b, respectively, where (p_i, r_i) = (a, b) = 1. We now obtain finite presentations of the fundamental groups of these surgery manifolds as follows.
The meridians m_i and longitudes l_i of each component K_i, and the meridian m and longitude l of Λ, are as follows:
(2.8)
A presentation of the fundamental group of M(p_1/r_1, . . ., p_d/r_d; a/b) is obtained from that of the link group of L_{m,d} by adding the relations:
(2.9)
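For orientation, the relations added in a Dehn surgery presentation are standardly of the following form (this is the generic textbook form, written in our notation rather than the authors' display (2.9)):

\[
  m_i^{\,p_i}\, l_i^{\,r_i} = 1 \quad (i = 1,\dots,d), \qquad m^{\,a}\, l^{\,b} = 1 .
\]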
Since p_i and r_i (resp. a and b) are coprime, there exist integers s_i and q_i (resp. s and q) such that r_i s_i − p_i q_i = 1 for each i = 1, . . ., d, and bs − aq = 1.
Summarizing, we obtain the following result (Theorem 2.1 below). We note that the link L_{m,d} is hyperbolic in the sense that it has a hyperbolic complement, so the Thurston-Jorgensen theory of hyperbolic surgery in [3] yields the following. Corollary 2.2. For any integer d ≥ 1 and for almost all pairs of surgery coefficients p_i/r_i and a/b, the closed connected orientable 3-manifold M(p_1/r_1, . . ., p_d/r_d; a/b) is hyperbolic.
We now describe M(p_1/r_1, . . ., p_d/r_d; a/b) as a 2-fold branched covering of the 3-sphere. We note that a link L is strongly invertible if there is an orientation-preserving involution of S^3 which induces on each component of L an involution with two fixed points. The above-mentioned involution is called a strongly invertible involution of the link. The following theorem of Montesinos relates two different approaches to describing closed orientable 3-manifolds, namely Dehn surgery and branched coverings (see [1, 10-13]), and moreover gives an effective algorithm for describing the branch set of M(p_1/r_1, . . ., p_d/r_d; a/b) as a 2-fold branched covering of the 3-sphere. Theorem 2.3 (see [4]). Let M be a closed orientable 3-manifold obtained by Dehn surgery on a strongly invertible link L of n components. Then M is a 2-fold covering of the 3-sphere branched over a link of at most n+1 components. Conversely, every 2-fold cyclic branched covering of the 3-sphere can be obtained in this fashion. The link L_{m,d} is strongly invertible, and the axis of a strongly invertible involution ρ of L_{m,d} is given by the dotted line in Figure 3.
We choose meridians μ_i and longitudes λ_i according to Figure 4. Let V be a regular neighborhood of L_{m,d} in S^3. Without loss of generality, we can choose the neighborhood V, the meridians μ_i, and the longitudes λ_i on ∂V to be invariant under the involution ρ. The image of V under the canonical projection π : S^3 → S^3/ρ onto the quotient space of S^3 under ρ consists of d 3-balls B_i. Let θ denote the axis of the involution ρ in S^3. For each 3-ball B_i, the set B_i ∩ π(θ) consists of two arcs. By an isotopy of B_i along the image π(λ_i) of the longitude λ_i for each i = 1, . . ., d, we get Figure 5. Each 3-ball B_i with the arcs B_i ∩ π(θ) is a trivial tangle. By the Montesinos algorithm, we replace these trivial tangles B_i by p_i/r_i-rational tangles for each i = 1, . . ., d.
For simplicity, we now define some series of links. We recall that any link can be obtained as the closure of some braid. Given coprime integers p and q, denote by σ_{p/q} the rational p/q-tangle whose incoming arcs are the i-th and (i+1)-th strings; here (T_1, T_2) denote a 4-string braid (σ_2^{-1} σ_1^{-1})^{2m} and a 3-string braid (σ_1 σ_2 σ_1 σ_2 σ_1)^b, respectively, and T_3 is a rational (b+1)/2-tangle.
Figure 2: L_{m,d} with surgery coefficients, and L_i with the 1/m-rational tangle.
Theorem 2.1. The fundamental group of the closed connected orientable 3-manifold M(p_1/r_1, . . ., p_d/r_d; a/b), obtained by Dehn surgery along the link L_{m,d} with surgery coefficients p_i/r_i and a/b, admits the finite presentation ⟨G | R_1, . . ., R_6⟩, where G and the R_i are as above.
Figure 3: The strongly invertible link L_{m,d} and its involution.
Figure 4: Regular neighborhood of L_{m,d} and the strongly invertible involution.
Figure 6: The branched links obtained by the Montesinos algorithm.
Summarizing, we have proved the following. Theorem 2.4. Let M(p_1/r_1, . . ., p_d/r_d; a/b), d ≥ 2, a = ±1, be the closed orientable 3-manifold obtained by Dehn surgery on the link L_{m,d} with surgery coefficients p_i/r_i and a/b. Then M(p_1/r_1, . . ., p_d/r_d; a/b) is a 2-fold covering of the 3-sphere S^3 branched over the link in Figure 6.
"Mathematics"
] |
Damage cascade in a softening interface
A model describing the damage at an interface which is coupled to an elastic homogeneous block is introduced. Resorting to a real-space renormalization analysis, we show that in the absence of heterogeneity localization proceeds through a cascade of bifurcations which progressively concentrates the damage from the global interface to a narrow region leading to a crack nucleation. The equivalent homogeneous interface behaviour is obtained through this entire cascade, allowing for the analysis of size effects. When random heterogeneities are introduced in the interface, prior to the onset of localization damage proceeds by a sequence of avalanches whose mean size diverges at the first bifurcation point of the homogeneous interface. The large scale features of the bifurcation cascade are preserved, while the details of the late stage are smeared out by the randomness.
1. Introduction
Progressive failure of quasi-brittle materials can be separated into three different phases: first, the material response is elastic; then microcracking appears; and these microcracks eventually coalesce to form a macro-crack which propagates suddenly. From the theoretical point of view, the difficulties involved in the transition between the last two phases are quite important. In the second phase, the strain field is quasi-homogeneous at a macroscopic scale. Then the strain field becomes more and more heterogeneous, and the strain grows only inside a narrow region. The subsequent apparition of a discontinuity is often called strain localization in a general sense. There are in the literature different approaches to the description of this transition. One of them is the continuum approach, e.g. with continuous damage models. It is based on the description of the average behaviour of the material (see e.g. Krajcinovic and Lemaitre, 1987; Laws and Brockenbrough, 1987; Lemaitre, 1992). For such continuum models, the transition is viewed as a bifurcation problem. When strain localization is due to strain softening, the tangent stiffness operator ceases to be positive definite. The partial differential equations of equilibrium lose their ellipticity, which authorizes discontinuous rate-of-deformation fields to develop suddenly. The inception of strain localization might for instance be depicted, under some restrictive assumptions, by Hill's criterion (Hill, 1959):

det [n · H · n] = 0    (1)

where H is the tangent stiffness operator at the continuum point level and n is the orientation of the localized band. Another one is the loss of stability at the material level in the sense of the Drucker postulate. Note that in some well-defined cases (associative constitutive laws), the loss of uniqueness coincides with the loss of stability in the rheological sense. The second approach is discrete random modelling. It is directed towards the description of the material heterogeneities (i.e. at a scale lower than the representative volume of the material) (see e.g. Delaplace et al., 1996; Fokwa, 1992). Because of heterogeneity, the localization criteria usually employed in continuum models cannot be used, mainly for two reasons: the first one is that the solution is always unique. To some extent, the situation for discrete models is the same as the situation for some rate-dependent models, where bifurcation is not possible and strain localization cannot be viewed as a loss of uniqueness problem anymore (see e.g. Dudzinski and Molinari, 1991; Leroy, 1991). The second one is that no tangent operator can be calculated, because of the discrete character of the response, and because of the fluctuations that appear all along the curve. There lies a subtle difficulty: consider the dimensionless ratio ε = a/L of the microstructure unit size a of a discrete model over the system size L, which characterizes the
discreteness of the medium. When ε tends to 0, a continuum description is expected to hold. As we will see later, the stress-strain response of the system converges towards a smooth law whose tangent operator H can be defined. The latter does provide information on the stability of the structure. However, when stability is analysed using actual responses for non-zero ε, it can be shown that the fluctuations in the stress-strain responses give rise to a non-differentiable law, and this feature brings some useful additional information on the approach to the loss of stability. Elaborating on these notions leads to the useful concept of 'avalanches'. The aim of this paper is to propose a tool to characterize the transition from a homogeneous state of microcracking to a localised one for discrete models. This tool should also be applicable to any response with fluctuations, like those met in experiments, where dispersion and fluctuations due to material heterogeneity are unavoidable. Because one cannot deal in this case with a loss of uniqueness, this tool should be based on stability considerations in the broad sense. Therefore, we study the fluctuations that are encountered all along the response of the system. More precisely, the avalanche statistics of the fluctuations are analysed. This formalism is used in many kinds of models (Bak, 1996; Paczuski et al., 1995), from biological diffusion up to earthquake response. All these models have at least one thing in common: their evolutions are structured around a critical point, as for the fibre bundle model that we used as a basis for our different proposed models. For the sake of simplicity, we will consider a model problem: the case of a band made of a strain-softening (discrete) material assembled in series with an elastic block (Fig. 1). This system is loaded in uniaxial tension, perpendicular to the direction of the band. Thus the problem of localization will be strictly one-directional, as the orientation of the localised band is fixed.

Fig. 1. The model problem.

Since the band has a finite width, which might become small with respect to the block dimensions, its response may also be regarded as being the same as that of a softening interface located between a rigid substrate and an elastic body. In the first part of the paper, we will recall the analytical results obtained for the fibre bundle model, also called the Daniels model. We will in particular present the properties of the avalanche distribution in the presence of fluctuations due to the variability of the fibre strengths, and deal with a simple derived model, that is, a Daniels model and a spring connected in series. We will look at the evolution of the avalanche properties, and apply them to the detection of the loss of stability. In order to have a realistic representation of the local mechanical redistribution of the stress field when a micro-crack appears, we will use in the second part a hierarchical model that takes redistribution into account, i.e. a non-local load sharing over the surviving fibres when a bond breaks. Again, we will look at the evolution of the avalanche properties, and we will carry out a complete study in terms of stability. In order to better understand the mechanism of rupture and the apparition of successive bifurcation points, this model will be compared to the equivalent continuum one. This model also allows direct access to the damage profile at the onset of unstable propagation of a macrocrack.
2. The Daniels model and avalanche statistics
The Daniels model (Daniels, 1945), albeit simple, displays an amazingly rich behaviour which is, at least partly, representative of the role of heterogeneity in the mechanical behaviour of some materials. It is commonly called the fibre bundle model. N parallel fibres are equally stretched between two rigid beams. The fibre behaviour is elastic up to a threshold force where the fibre breaks irreversibly. The stiffness is the same for all fibres, and thus can be chosen to be unity, but the threshold force t is a random variable characterized by its probability distribution function p(t), or its cumulative distribution P(t) = ∫_0^t p(t′) dt′. The advantage of this model is that it is completely solvable analytically. For instance, the mean force F, that is, the applied force divided by the total number of fibres N, vs the displacement u is easily obtained as

F(u) = (1 − P(u)) u    (2)

Then, for a stiffness of 1 (t = u for a single fibre at failure), the displacement u varies between 0 and 1. (Note that similar fluctuations are also encountered experimentally on quasi-brittle heterogeneous materials, like fibre-reinforced concrete, but they are usually not described by continuum models.) It is to be noted right away that the amplitude of these fluctuations vanishes as N^{−1/2}, and thus, considering the limit of an infinite system size, N → ∞, the response of the system converges to the above-given mean behaviour. It is our aim to show that these fluctuations, albeit of modest amplitude, are of interest both from an experimental and a theoretical standpoint, and that some care has to be taken when considering the infinite-size limit. Since we are interested in stability, it is important to incorporate the boundary and load conditions in the analysis. In the following study, we will consider that the bundle is loaded with a testing machine of known stiffness k. Hence the analysis will be applied to a bundle connected in series with a spring whose stiffness is that of the testing machine, as shown in Fig. 3. This simple system can also be seen as a rough model for the mechanical behaviour of an elastic body (the spring) attached to a rigid substrate through a damageable interface (the fibre bundle). The overall displacement (bundle plus spring) will be controlled during the loading sequence. If the bundle is loaded with a stiff enough testing machine, only one fibre may break for a constant loading. However, if the stiffness is reduced, one single failure may catastrophically induce a sequence of failures before a new stable position is reached. This sequence is what we call an 'avalanche'. In the numerical simulations, one can easily solve for the response of the bundle with an ideally stiff control, that is, imposing strictly a prescribed displacement. From such a response, one can also compute the response of the bundle under any boundary conditions (including a finite stiffness k, and large viscous damping to avoid inertial overshoot), as a succession of equilibrium positions. Let us first consider the limit of an infinite system size, and substitute the deterministic damageable behaviour obtained above for a uniform distribution of fibre strengths (Daniels, 1945). The stability analysis of this system is quite straightforward (see Bažant and Cedolin, 1991): under a small enough prescribed displacement U of the entire system (interface u plus elastic body v), the system has a unique solution, i.e.

u = [(1 + k) − √((1 + k)^2 − 4kU)]/2,   v = u(1 − u)/k    (4)

The second-order work of the system is

d^2 W = (1/2)(K + k) du^2    (5)

where K is the tangent stiffness of the bundle, i.e. dF/du. k is always positive, whereas
The state of the system is stable if d²W > 0, which is equivalent to K > −k. At the maximum displacement U* = (1 + k)²/4k, we find a bifurcation point, where the solution is no longer unique. At this point, the displacement is u = (1 + k)/2 and the tangent stiffness in the bundle is

K = −k   (6)

It is exactly opposite to that of the elastic body. It corresponds also to the loss of stability, because the second-order work is zero. This is a trivial example of a localization point. If one tries to increase the displacement past U*, then the entire interface fails catastrophically. Under an idealized controlled displacement, the system response follows a snap-back branch, which represents instability (d²W < 0). Note that prior to the critical equilibrium, strain softening develops in a stable fashion, as also pointed out in Bažant's analysis.

Let us now come back to the finite size fibre bundle as the interface. The response of the bundle is no longer a differentiable law such as eqn (3), but rather a sequence of linear elastic responses limited by end-points where a fibre breaks. Because of the randomness of the fibre strength, the response of the system is always unique, and no bifurcation point can be defined. Let us call (u_i, F_i) the sequence of failure displacements and forces, respectively, where i indicates the number of broken fibres. In a first approach, we can repeat the same analysis as previously. One cannot of course consider that eqn (6) holds at bifurcation as in the previous example, because the response of the bundle is no longer differentiable. Nevertheless, tracing the maximum of the function F(u) + ku yields a criterion for the onset of unstable behaviour, and if F(u) is differentiable one recovers the previous analysis. As we saw before, the response of such a system is just a succession of fluctuations. It can be analysed through 'avalanches'. With the introduced variables, we can define an avalanche in the fibre bundle as follows: an avalanche of size Δ and slope k, starting at (u_i, F_i), is a sequence of Δ consecutive fibre failures triggered without any further increase of the prescribed overall displacement. In the continuous case, prior to the point u*, damage in the bundle is controlled and we can say that the avalanche size is 0. At u = u*, the critical equilibrium state is reached and a single avalanche of size equal to the remaining number of bonds in the bundle is observed. In the thermodynamic limit, N → ∞, this avalanche has a size which diverges to infinity. We observe that avalanches do reproduce the result of a standard stability analysis, with a simple 'binary' (0 or ∞) avalanche size distribution.

A crucial point is that this analysis is not entirely correct, in the sense that we have first considered the continuum limit for the force–displacement curve, and analysed the avalanches on this mean response. Most of the information which can be derived from the concept of avalanches has been lost in this procedure. Taking into account the full random and discrete nature of the model, Hemmer and Hansen (Hemmer and Hansen, 1992; Hansen and Hemmer, 1994) succeeded in determining the analytical solution of the probability distribution n of observing an avalanche of size Δ, for any stiffness k, starting at any prescribed displacement u. They found:

n(Δ, u, k) ∝ N Δ^(−3/2) Φ(Δ/Δ*),  with Δ* ∝ (u*(k) − u)^(−2)   (8)

where Φ(x) is a scaling function which is constant for small arguments x ≪ 1, and drops to zero rapidly for x > 1. Δ* is the maximum avalanche size. Moreover, the exponents −3/2 and −2 appearing in eqn (8) are universal, in the sense that they do not depend on the chosen threshold distribution p(t).
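The avalanche statistics can be explored numerically. The sketch below (our code; the burst-counting rule is the standard one for the soft-machine limit k → 0, not a verbatim transcription of the paper's simulations) counts bursts for a bundle with uniformly distributed thresholds; the histogram can then be compared with the power-law regime of eqn (8):

```python
# Sketch (ours): burst ("avalanche") counting for a fibre bundle under slowly
# increasing load in the soft-machine limit k -> 0. An avalanche starting at
# the i-th weakest fibre lasts until a stronger configuration can again carry
# the triggering load (the classical Hemmer & Hansen, 1992, construction).
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
t = np.sort(rng.uniform(0.0, 1.0, size=N))   # sorted fibre thresholds
f = t * (N - np.arange(N)) / N               # load/fibre when fibre i is the weakest intact

sizes, i = [], 0
while i < N:
    j = i + 1
    while j < N and f[j] <= f[i]:            # failures triggered at constant load
        j += 1
    sizes.append(j - i)
    i = j

hist = np.bincount(np.array(sizes))[1:]      # avalanche-size histogram; its
print(hist[:10])                             # power-law part compares with eqn (8)
```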
This statistical distribution of avalanches can potentially be used as a precursor of the macroscopic failure point u*. According to Hemmer and Hansen's (1992) analysis, in the course of loading the bundle we would observe a series of avalanches, which can be referred to as micro-instabilities, and which progressively become larger and larger, up to the point where they diverge and become 'macroscopic'. This signals in particular that some care has to be taken when taking the thermodynamic limit. Taking first the continuous limit of the force–displacement response, and analysing its stability, erases the progressive development of the avalanches, and hence misses an important feature of the model.

To demonstrate the utility of such an analysis, we will analyse the fluctuation in the global failure displacement for a finite size system. Let us call δu the distance to u*(k) at which the final avalanche is initiated. δu can be estimated by writing that the maximum avalanche size at this displacement, Δ*, allows the displacement u to increase up to u*(k). Hence, since Δ* ∝ (δu)^(−2) while the number of fibres failing within δu is of order Nδu, the fluctuation scales as δu ∝ N^(−1/3). It would be difficult, if not impossible, to derive this result without considering the notion of avalanches. Finally, let us also note as a side-remark that in the continuous limit, the response of the system is continuous but not differentiable. In this case, one can show that locally, the fluctuating part of the force–displacement response becomes self-affine with a Hurst exponent of 1/2. Thus it belongs to the realm of C^(1/2) functions, rather than C^1 as would be needed to apply a criterion such as eqn (6).

Because the displacement is constrained to be the same in each fibre, the response tends to a well-defined behaviour, and thus finite size effects play only a marginal role in the present example. For instance, the peak stress tends to a well-defined value with fluctuations of order 1/√N. However, other systems display a much more significant size effect, which can be traced back to the cumulative effect of the avalanches. Hence, in this case, the concept of avalanches and of their statistical distribution is unavoidable.

A principal flaw of this simple model is the load redistribution when a fibre breaks: the external force is shared equally between all surviving fibres. On the other hand, when a micro-crack appears in a quasi-brittle material, the stress is redistributed mainly around it, and the local interaction decreases fast as the distance to the micro-crack increases (typically as a power law of the distance). To take this effect into account, we need to introduce this redistribution in the model. Note that a simple local redistribution, where the load of a failed fibre is redistributed equally on the nearest surviving fibres, has already been studied (see e.g. Harlow and Phoenix, 1991).
3.1. Presentation and properties
For the sake of simplicity, we are going to focus on the situation where the band of strain-softening material is small with respect to the size of the elastic block, modelled above by a spring and a rigid bar. Hence, we will deal with a softening interface embedded in between a rigid and an elastic substrate. This description is suited to adhesion, and can also be seen as a simplification of a 2-D medium, since the redistribution process will be constrained to develop in the direction of the interface. Nevertheless, this example contains the basic features involved in the transitional behaviour between diffuse and localised cracking.

At variance with the previously discussed case, we would like to incorporate an elastic coupling between the fibres as mediated directly by the elastic body, i.e. without the rigid bar which redistributed the displacement equally among the surviving fibres. In continuum mechanics, this effect could be represented by the elastic Green function of a semi-infinite plane. For the convenience of the analysis, we resort to a different choice based on a hierarchical decomposition of the elastic body. The structure is the following one: a block is split up into three sub-blocks (Fig. 9). The two lower blocks are then subdivided into three, and this recursively down to the lowest level chosen in the discretization. Each block is described using an elastic uniaxial behaviour. Thus, the elastic body can be seen as composed of springs, connected in parallel and alternately in series. At the lowest level, each finer block is connected to a damageable element of the interface (a fibre bundle). Figure 10 shows an example of a 3-generation model. With 10 fibres for each bundle, a system of generation 12 is for instance made up of 10 × 2^(12−1) = 20,480 fibres. The simplicity of the construction allows for the numerical simulation of extremely large sizes, while still preserving the long-range nature of the elastic couplings. In two dimensions, all springs have the same stiffness k at all generations, but their initial lengths are divided by two as the generation is decreased by one. At the first generation, however, the aspect ratio of the element is twice that of all other generations, and thus the first springs in contact with the interface have a stiffness k/2. This allows one to obtain a global stiffness for the entire elastic system which is independent of the discretization level, as expected (see the Appendix).

An important point which will be used later concerns the interaction between an intermediate level (say index i) and the rest of the medium. We can compute recursively the stiffness L_i of this structure deprived of one sub-block i, if a force is applied at this level. We find that L_i is exactly equal to k, i.e. just as if the subsystem was simply connected to the exterior world by a single block (see the Appendix). As the generation increases, it can be shown that the elastic coupling will be the same as for a continuous model, i.e. with an influence function scaling with the same power law as the Green function in an elastic continuum. In order to reach this result, one needs to introduce a distance suited to our discretization. Considering two points along the interface, we search for the smallest block which contains both points. If j is the block generation (which can take values between 1 and N, where N is the generation of the entire system), the distance is then defined as

d = 2^j   (13)

This distance has the special feature of being ultrametric, i.e. it has the same properties as a usual distance, except for the triangular inequality, which is strengthened to d(A, C) ≤ max(d(A, B), d(B, C)).
With this definition, one can show that under an applied force F on a point of the interface, the induced displacement v of another point is

v(j) = (N − (j − 1)) F/k + v₀   (14)

where v₀ is the initial displacement of the considered point, and N − (j − 1) is nothing but the number of springs that separate the two points. By introducing the distance d, and for large j, it can be rewritten as

v(d) ≈ A log(B/d) F/k + v₀   (15)

where A = 1/log 2 and B = 2^N are constants. It is exactly of the same form as the Green function of a semi-infinite plane. The only difference is that this influence function consists of constant plateaus whose size increases in geometric series. This is a residual effect of the two-fold splitting of each level.
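A small sketch (our own indexing conventions) of the ultrametric distance of eqn (13) and of the plateau-like influence function of eqn (14):

```python
# Sketch (our indexing): ultrametric distance d = 2**j of eqn (13) and the
# induced displacement v(j) of eqn (14) on a generation-N hierarchical block.
def block_generation(a, b):
    """Generation j of the smallest block containing interface sites a and b (0-indexed)."""
    j = 1
    while (a >> (j - 1)) != (b >> (j - 1)):   # climb levels until the sites share a block
        j += 1
    return j

def influence(a, b, N, F=1.0, k=1.0, v0=0.0):
    j = block_generation(a, b)
    return (N - (j - 1)) * F / k + v0         # eqn (14): logarithmic in d = 2**j

N = 12
print([influence(0, b, N) for b in (1, 2, 4, 8, 16)])   # plateaus double in width
```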
3.2. Continuous interface
We now proceed by considering first the case where each fibre bundle is changed into a damageable element, with a behaviour law derived from the mean fibre-bundle response, eqn (3). We use the hierarchical construction to relate the interface law to the global (interface plus elastic body) response. Let us construct a system at generation (n + 1), starting from two generation-n subsystems. These last subsystems are supposed to be described by two force–displacement relations F_1^(n)(U^(n)) and F_2^(n)(U^(n)). We wish to find the global F^(n+1)(U^(n+1)) response. The two subsystems are subjected to the same displacement, hence the force is

F^(n+1) = F_1^(n)(U^(n)) + F_2^(n)(U^(n))   (16)

The same force also stretches a spring of stiffness k in series with the blocks, and thus the global displacement is

U^(n+1) = U^(n) + [F_1^(n)(U^(n)) + F_2^(n)(U^(n))]/k   (17)

These equations provide a parametric representation of the (n + 1)th generation system as a function of the nth generation. Let us first assume that the interface is homogeneous, thus in the previous analysis F_1 = F_2. Hence

U^(n+1)(F) = U^(n)(F/2) + F/k = U^(1)(F/2^n) + 2(1 − 2^(−n))F/k   (18)

The final equation simply relates the interface displacement to the global one, u(F) = U^(1)(F). We note that as n increases, the global behaviour is nothing but that of the elastic medium, because the displacement in the interface represents a vanishing contribution. We can invert the previous relation to obtain the homogeneous interface law from the global response. This is a practical tool to compute the equivalent homogeneous interface law when some inhomogeneity exists locally. From the previous equation, and because the interface response is continuous, the tangent (subscript tg) and secant (subscript sc) stiffnesses of the entire system at generation n can be computed:

1/K_tg^(n) = 2^(1−n)/(1 − 2u) + 2(1 − 2^(1−n))/k,  1/K_sc^(n) = 2^(1−n)/(1 − u) + 2(1 − 2^(1−n))/k   (19)

where we have used the local interface displacement u to characterize the loading, F = 2^(n−1) u(1 − u), assuming in this formula a homogeneous displacement all along the interface.
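The recursion (16)–(17) is easy to implement. A short sketch (ours) builds the homogeneous global response from the bare parabolic element law and checks it against the unrolled form of eqn (18):

```python
# Sketch (ours): building the homogeneous global response of eqns (16)-(17)
# by recursion, starting from the bare interface law F = u(1 - u).
import numpy as np

k = 1.0
u = np.linspace(0.0, 1.0, 2001)    # interface displacement parametrizes the curve
F, U = u * (1.0 - u), u.copy()     # generation 1: a single damageable element

n_gen = 8
for _ in range(n_gen - 1):
    F = 2.0 * F                    # eqn (16): two identical halves in parallel
    U = U + F / k                  # eqn (17): plus the series spring of stiffness k

# eqn (18) unrolled per element: U = u + 2(2**(n-1) - 1) u(1-u)/k
U_closed = u + 2.0 * (2.0 ** (n_gen - 1) - 1.0) * u * (1.0 - u) / k
print(np.max(np.abs(U - U_closed)))   # ~1e-12: recursion matches the closed form
```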
3.3. Bifurcation analysis
For moderate displacements, and in the absence of randomness in the interface, every point of the interface undergoes the same damage. The system response may however cease to be unique at a particular displacement for which the 'global' displacement U^(n) is maximum. Let us first assume that one half of the interface is subjected to an increasing damage while the other half is elastically unloaded. Using eqn (17), we see that this bifurcation condition is reached when the tangent stiffness of the damaging half is exactly opposed by the elastic stiffness of the rest of the system (20). From the expression for the secant and tangent stiffnesses, eqn (19), we obtain an equation for the displacement, u = u_1, at the interface level for which a first bifurcation is encountered:

(3 − 2ε)(1 − 2ε)(1 − u)(1 − 2u) + 2kε(2 − 2ε)(2 − 3u) + 4k²ε² = 0   (21)

where ε = 2^(1−n). Focusing on the large system size limit, we can expand the solution in orders of ε and obtain the solution as

u_1 ≈ 1/2 + 2k/(3 l_1)   (22)

where we have introduced the size of the interface where the damage localises, l_1 = 2^(n−1) = L/2, to express the result in physical terms. L refers here to the number of damageable elements (or fibre bundles in the discrete case). At this stage, there is a bifurcation to three possible evolutions: either damage remains homogeneous (but this solution is unstable), or only one of the two subsystems continues to be damaged while the other is elastically unloaded. Due to the symmetry of the system, these latter two solutions are identical. For a large system size, ε → 0, we note that u_1 tends to 1/2, i.e. the interface displacement at peak force. Bifurcation is however delayed to a larger displacement by a quantity proportional to (k/L). The homogeneity of the latter expression can be restored if we consider that the stiffness of the interface fibres is not unity, and that the interface is in fact a band of softening material of width h. The offset of the first bifurcation point is then of order (E_bulk h)/(E_i L), up to a factor involving ν_i (23), where E_bulk is the Young modulus of the elastic block, and L its size; E_i and ν_i are the Young modulus and Poisson ratio of the band of width h. The occurrence of ν_i comes from the antiplane displacement in the layer.

In this analysis, we have postulated that the first bifurcation mode appeared at the macroscopic scale. One can perform the same computation for any intermediate level 1 ≤ i ≤ n, keeping the boundary condition on U^(n). The only variance with eqn (20) is that the stiffness k has now to incorporate all the intermediate levels from i to n. The hierarchical structure allows one to compute this stiffness, which remains simply equal to k at all levels. Therefore, the localization at generation i appears for a displacement u_(n−i) given by eqn (22), where l_(n−i) = 2^i = L/2^(n−i) is to be substituted for l_1. Thus, these modes will occur much later than the first one (l_1 = 2^(n−1)). They will however be of interest if we proceed along one of the two symmetric stable branches past the first bifurcation point. The subsystem where the damage continues to progress will encounter a bifurcation point similar to the previous one for u = u_2. Past this local displacement, the damage concentrates on one quarter of the system while the rest is elastically unloaded. The same analysis can be carried out at any stage down the cascade of bifurcations, always concentrating on a stable branch. The local displacement of the interface on the active part of the interface at the ith bifurcation is given by eqn (24), i.e. eqn (22) with l_i = L/2^i substituted for l_1. We thus obtain a simple physical picture of the post-localisation regime (localisation is understood here as bifurcation), where the damage zone progressively condenses onto a smaller and smaller ('active') region, while the rest of the structure is elastically unloaded.
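To check the expansion numerically, one can solve the reconstructed eqn (21) for the root just above u = 1/2; the sketch below (ours, using scipy) compares it with the leading-order estimate u_1 ≈ 1/2 + 2k/(3 l_1):

```python
# Sketch (ours): root of the first-bifurcation condition, eqn (21) as
# reconstructed above, versus the small-epsilon expansion of eqn (22).
from scipy.optimize import brentq

def eqn21(u, k, eps):
    return ((3 - 2*eps) * (1 - 2*eps) * (1 - u) * (1 - 2*u)
            + 2*k*eps * (2 - 2*eps) * (2 - 3*u) + 4 * k**2 * eps**2)

k = 1.0
for n in (6, 10, 14):
    eps = 2.0 ** (1 - n)                            # eps = 1/l1
    u1 = brentq(eqn21, 0.5, 0.75, args=(k, eps))    # root just above the peak u = 1/2
    print(n, u1, 0.5 + 2.0 * k * eps / 3.0)         # numerical root vs asymptote
```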
3.4. Post-bifurcation response
An important feature which deserves particular interest is the equivalent interface law which can be measured past the first bifurcation point. Indeed, as soon as the damage is no longer homogeneously distributed, the equivalent homogeneous law is no longer similar to that of any of the constituents. However, if we were to perform the experiment without any a priori knowledge of the cascade of bifurcations, the equivalent homogeneous law is the one we would extract from the load–displacement curve. The easiest way to have access to such an equivalent law for a system at generation n is to use the hierarchical nature of the decomposition of the elastic block. Let us assume that we know this equivalent law for a system of generation n − 1, and express the equivalent law at the next generation. As discussed above, the first bifurcation point occurs first at the largest scale. Past this first point, one half of the lattice is simply elastically unloaded. The other half is described by the homogeneous equivalent law. Let (1/2 + x_1^(n), 1/4 − y_1^(n)) be the displacement–force coordinates of the first bifurcation point. As observed above, we know that the x_1^(n) form a geometric sequence of ratio 1/2 (25). For the y coordinate, it suffices to observe that the first bifurcation point lies on the homogeneous characteristic, and hence

1/4 − y_1^(n) = (1/2 + x_1^(n))(1/2 − x_1^(n))   (26)

where we have used the specific parabolic form of the 'bare' interface law; thus y_1^(n) = (x_1^(n))². Any point (1/2 + x, 1/4 − y) of the (n − 1) generation equivalent homogeneous interface law is transformed into (1/2 + x′, 1/4 − y′) following eqns (16)–(17), with one half of the lattice elastically unloaded (27). We label the succession of bifurcation points by a subscript j, whereas the superscript n refers to the system generation. Solving for the asymptotic (large n) behaviour of the (x, y) variables provides the asymptotic expression (28) plotted as a curve in Fig. 11. We observe that only the latest j ≈ n points are not well described by the asymptotic behaviour. However, as n increases, most of the cascade is very accurately described. To make the above result more explicit, we note that past the first bifurcation point, the force–displacement relation becomes size-dependent, but it can be cast in a simple scaling form using the system size L = 2^n (eqn (29)), with the corresponding scaling function (eqn (30)).

Fig. 11. The sequence of the bifurcation points (x_j^n, y_j^n). The continuous curve shows the asymptotic behaviour obtained from the recurrence relation. The dotted curves are the real sequence of bifurcation points for a 7-generation (crosses), 9-generation (black points) and 11-generation (triangles) system. Note that just the first points are well described by this relation, and the snap-back part is not represented.

One important point to be noted here is the difference of the exponents of L which appear in x and y. As a consequence, the equivalent homogeneous interface law shows a sudden decay of the force at constant displacement past the peak force. It is also of interest to consider the scaling of other physical quantities. In particular, if we come back to the picture of the fibre bundle at the interface level, we may introduce another variable, which is the number N of broken fibres. The latter is simply related to the displacement as N = 2^n u. Therefore, we conclude that two consecutive bifurcations are separated by a fixed number of broken fibres. In fact the displacement in the active region for the consecutive bifurcations increases exponentially
fast, as 2^j, but simultaneously the active region also shrinks exponentially, as 2^(−j), so that the product of these two terms, which gives the number of failed fibres, remains constant. It is now a simple matter to express the variations of x or y as a function of the number of broken fibres (past the peak force, where N = N_p = L/2): x is linear in (N − N_p), whereas y grows exponentially fast. Simultaneously, the size of the active region is L/2^j, decreasing exponentially fast with j, or equivalently with (N − N_p) or x_j.
3.5. From damage localization to crack nucleation
The physical picture which arises from this analytic solution is of particular interest, since it is one of the rare situations where some insight can be obtained past the first bifurcation. We have seen that the interface degradation process consists in a progressive condensation of the damaging region from the structure scale down to the basic constitutive unit. At the end of this first cascade, exactly one of the smallest-size interface elements is totally broken. This naturally forms the initiation stage for a crack propagation regime. Unfortunately, following the crack propagation is of little interest in our model, which is then too sensitive to the details of the hierarchical decomposition to allow any meaningful comparison with reality. However, up to the crack nucleation, we believe that the hierarchical interface model is a faithful description of a continuum model, yet simple enough to be amenable to an analytic solution.

An interesting feature is to be noted at the crack nucleation stage: the progressive condensation of the damage can be read back from the damage profile along the interface. Indeed, the damage D of the interface is a simple linear function of the maximum displacement ever encountered by a homogeneous domain, and varies between 0 and 1. Tracing backward the damage in the active region, we obtain that the damage D at a distance d from the crack nucleation point decays essentially as an inverse power of d (31), where we have used the earlier defined distance [eqn (13)]. Thus, the cascade of bifurcations leads to a rather unusual damage profile ahead of the crack. If we define a 'process zone' as a damage zone ahead of a crack, we would conclude that the process zone is of infinite extent. However, it is important to note that the damage decreases very fast with the distance, i.e. as an inverse law. In practice, this process zone is therefore similar to the classical one, that is, a quasi-confined damage zone of finite length ahead of a crack.
4. Discrete interface model with redistribution
We have already underlined the importance of the notion of avalanches for a disordered fibre bundle. In the interface model, the results of Hemmer and Hansen basically still hold. The same statistics is expected in this case. The only variant comes from the boundary conditions. We have seen that an elastic coupling to the fibre bundle has the major effect of moving the interface displacement [defined in eqn (4)] at which the avalanche size diverges. In the interface case, the elastic coupling is a little more complex, and thus the point of divergence for avalanches requires some discussion.

Let us consider a block at generation n. This block is subjected to an imposed displacement through a device of stiffness k^(n). The avalanches which are meaningful at this level are those which are constructed from the global force–displacement characteristic at generation n with a slope −k^(n). We would like to relate those avalanches to the ones computed at the previous generation. We have seen above how to relate the force–displacement relations from one generation to the next. This provides a simple equivalent stiffness of the loading device, k^(n−1) = H(k^(n)) (32), to be considered at the (n − 1)th generation, where the function H is shown in Fig. 12. Iterating the previous transformation allows one to compute the elastic coupling to be considered directly at the interface level. The function H has two fixed points, k = 0 and k* = k/2. The first one is attractive, whereas the second one is repulsive. In order to better understand the practical meaning of the slope, let us consider a system consisting of a few elements. If we are looking for the first bifurcation point, which is equivalent to the divergence of the avalanche sizes, we have to consider avalanches with a stiffness equal to k* = k/2 for just one bundle response. Because we are far from the fixed points, we use the relation eqn (32) to obtain this slope. Figure 13 illustrates this point for an 11-generation system. Therefore, for most values of k^(n), the equivalent stiffness to be considered at the interface level, k^(1), tends to 0 as the system size tends to infinity. This means in practice that for most boundary conditions, the avalanches should be analysed at the interface level with an elastic coupling which tends to 0, i.e. under a constant force condition. This is precisely what has been shown in the previous analysis, where we considered k^(n) → ∞, i.e. a constant displacement imposed on the entire elastic domain, and we have retrieved that the first bifurcation occurred for a displacement at the interface level which approached the apex of the force–displacement curve (u = 1/2). The slight delay in this displacement resulted from the last iterations of the function H. Indeed, for κ → 0, H(κ) ≈ κ/2, and thus k^(i−1) ≈ k^(i)/2 for i ≪ n. We observed that the first bifurcation in a generation-n system occurred at points u^(1) = 1/2 + B·2^(−n), where B is a constant, and thus the tangent stiffness du/dF(u = u^(1)) = B·2^(1−n) is indeed a geometric series of ratio 1/2. The existence of the unstable fixed point k* = k/2 can also easily be understood: if we invert the relation eqn (32), we can relate the larger scale stiffness to the lower one, through the inverse function of H. In this case the fixed points obviously remain identical, but their attractive or repulsive character is reversed. This means that the stiffness of the entire system tends to k/2 as n increases to infinity. The k/2 is nothing but the stiffness of the elastic body computed in the preceding section. This shows that
the conditions for bifurcation become independent of the global boundary conditions as the system size diverges. It also underlines the fact that in order to observe the cascade of bifurcations, one should use an active control of the loading conditions, with the ability to decrease the loading fast compared to the typical time needed to fracture the fibres, or to redistribute the load among the fibres. This imposes some severe constraints on the monitoring of the experiment. A possible way to build this control might be to use the acoustic emission during loading.

Let us note that the notion of avalanche allows one to understand the cascading process naturally. Indeed, we have seen that for a sub-block embedded inside the entire structure the effective stiffness of the surrounding medium amounts to k (instead of k/2 if all other sub-blocks of the same generation are subjected to the same displacement). Therefore, at the bifurcation point where the damage is localized in a sub-block of generation i, the two sub-blocks at generation i − 1 are still stable, i.e. the maximum avalanche size in each of these two blocks is finite. Hence the damage will be shared between the two sub-blocks up to the next bifurcation point. This argument also allows one to estimate the validity of the cascade once the fluctuations due to the random nature of the fibre bundles are taken into account. As the size of the active region, l, decreases, the force fluctuation increases as l^(−1/2), and the proportion of broken fibres displays a fluctuation of order l^(−1/3). Comparisons of the level of fluctuations with the increment of force, displacement or number of broken bonds show that the late stages of the process (l small enough) are dominated by the fluctuations, but in contrast, the early stage is well defined. Therefore, we anticipate that the first steps of the cascade may be correctly described by the above homogeneous situation, whereas the more mature stage may be scrambled by the presence of disorder. A representation of the location of the fibres that break under the loading gives a good physical idea of the cascade phenomena (Fig. 14).

The bifurcation cascade observed during the failure is not the usual idea of the failure of a joint: such a failure generally occurs catastrophically, and the first idea is then to think that it is due to a critical flaw. It is important to note that our model follows the same catastrophic behaviour if we consider the global load–displacement response. Hence, Fig. 15 shows the global response of the model for three different generation systems. As the generation increases, the behaviour becomes more and more elastic-brittle, as expected for a joint failure. But if we consider just the interface response, we find effectively the previous behaviour with the bifurcation cascade. Finally, in spite of their different descriptions, we see that the heterogeneous discrete system (for large enough sizes) and the homogeneous one have the same post-peak behaviour:

- The relation F vs u is similar until the appearance of the first crack.
- The onset of localisation appears at the same time (Fig. 13).
- The damage cascade is observed in both cases.
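The claim that the stiffness of the entire system tends to k/2 can be checked against the parallel/series recursion derived in the Appendix; the explicit form used below is our reconstruction of that recursion, so treat it as a sketch:

```python
# Sketch: iterating the Appendix's stiffness recursion (two sub-blocks of
# stiffness K in parallel, in series with a spring k; explicit form is our
# reconstruction). K* = k/2 is the attractive fixed point, and K1 = k/2
# makes the global stiffness independent of the discretization level.
def next_K(K, k=1.0):
    return 2.0 * k * K / (k + 2.0 * K)   # series(k, parallel(K, K))

k = 1.0
for K in (0.05, 0.5, 5.0):               # k/2 = 0.5 is the stable fixed point
    for _ in range(20):
        K = next_K(K, k)
    print(K)                             # all three converge to 0.5
```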
5. Conclusion
For continuous models, localization is well defined, using criteria based on the loss of uniqueness or the study of the tangent stiffness operator. For discrete models, localization cannot be defined in such a manner. The solution is always unique, and no tangent can be calculated on the response because of the fluctuations that are superimposed. In some well-known cases, the loss of uniqueness coincides with the loss of stability, where a bifurcation point is encountered. In the first part, we show that the study of avalanche statistics allows one to detect this point. Precisely, the divergence of the avalanche sizes can be directly compared to the loss of stability in a continuous model. After defining this equivalence, we propose, as an application, to study a damageable interface coupled with an elastic block. For the sake of simplicity, the interface is chosen to be thin, so that the damage propagates only in the interface direction. We are particularly interested in the unstable path, which is very difficult to observe with continuous models. The discrete model that we use is a hierarchical model, which gives a good representation of the Green influence function in an elastic continuum. Our conclusions are the following ones:

Appendix

We first establish the recurrence relation between the stiffness of an i-generation structure, K_i, and an (i + 1)-generation one, K_(i+1). By definition of the stiffness, the external force F is

F = K_(i+1) U^(i+1)   (33)

We seek the expression of K_(i+1) as a function of K_i and k. Using the hierarchical structure of the block (two sub-blocks of stiffness K_i in parallel, in series with a spring of stiffness k), we can write the force and displacement at the upper level, and thus, by identifying with eqn (33), we obtain the recurrence relation:

K_(i+1) = 2kK_i/(k + 2K_i)   (34)

The stable fixed point of this recurrence relation is K* = k/2. Then, choosing the value K_1 = k/2 (i.e. the stiffness of the first springs in contact with the interface), the global stiffness of the hierarchical structure is independent of the discretization level, as expected. We now give the stiffness L_i of an n-generation structure deprived of one sub-block i, if a force is applied at this level. Again using the hierarchical structure of the block leads to a simple recurrence; the stable fixed point is thus L* = k.
 | 10,200.4 | 1999-04-01T00:00:00.000 | [
"Physics"
] |
A reaction norm perspective on reproducibility
Reproducibility in biomedical research, and more specifically in preclinical animal research, has been seriously questioned. Several cases of spectacular failures to replicate findings published in the primary scientific literature have led to a perceived reproducibility crisis. Diverse threats to reproducibility have been proposed, including lack of scientific rigour, low statistical power, publication bias, analytical flexibility and fraud. An important aspect that is generally overlooked is the lack of external validity caused by rigorous standardization of both the animals and the environment. Here, we argue that a reaction norm approach to phenotypic variation, acknowledging gene-by-environment interactions, can help us see the reproducibility of animal experiments in a new light. We illustrate how dominating environmental effects can affect inference and effect size estimates of studies, and how the elimination of dominant factors through standardization shifts the expected phenotype variation onto the reaction norms of small effect. Finally, we discuss the consequences of reaction norms of small effect for statistical analysis, specifically for random effect latent variable models and the random lab model.
Introduction
Since the mid-seventeenth century, reproducibility, i.e. the ability to reproduce an experimental outcome in an independent study, has been a fundamental cornerstone of the scientific method which distinguishes scientific evidence from mere anecdote. In modern research, however, such independent replication has been replaced by principles of experimental design which, in principle, should render replication by independent studies redundant. In the simplest form, the effect of a predictor (independent variable) on an outcome (dependent variable) is measured in a sample of independent replicate units (individuals). Scientific evidence generated in this way is arguably reproducible if the experimental units (i.e. individuals) are true random samples of the overall target population. Despite the general wisdom that true random samples are practically impossible to achieve when the target population is e.g. a biological species, the potential consequences of non-independence for the reproducibility of results are usually ignored. This is mirrored by the fact that no independent replication studies are generally required by funders for accepting grant proposals or by editors before accepting manuscripts for publication.
Over the last 10–15 years, however, reproducibility in biomedical research, and more specifically in preclinical animal research, has been seriously questioned (Bailoo et al. 2014). Several cases of spectacular failures to replicate findings published in the primary scientific literature have led to a perceived reproducibility crisis (Freedman et al. 2015; Ioannidis 2005). In 2011, researchers from the company Bayer reported that out of 67 in-house replication studies of published research in the areas of oncology, women's health and cardiovascular diseases, only 14 (21%) could fully replicate the original findings (Prinz et al. 2011). Similarly, researchers of the company Amgen replicated 53 original research studies deemed 'landmark' studies in haematology or oncology, recovering the original findings in only 6 cases (11%) (Begley and Ellis 2012). These reports and a surge of meta-analyses confirming low replication rates [e.g. (Sena et al. 2010; Rooke et al. 2011; Dumas-Mallet et al. 2017)] led to a heated debate within as well as outside the scientific community about the usefulness of animal models for biomedical research (Ioannidis 2005; Freedman et al. 2015; Munafò et al. 2017; Loken and Gelman 2017; Sena et al. 2007).
Several potential causes of poor reproducibility have been proposed, including lack of scientific rigour, low statistical power, publication bias, analytical flexibility and perverse incentives in research, leading in some cases to outright fraud (Loken and Gelman 2017; Freedman et al. 2015; Ioannidis 2005). While all of these aspects might contribute to replication failure, we will here focus on another aspect that is all too often ignored: biological variation. Biological variation is the sum of genetic variation, environmentally induced variation and variation due to the interaction between environment and genotype (G × E interaction). As the response of an animal to an experimental treatment (e.g. a drug) depends on the phenotypic state of the animal, the response, too, is a product of the genotype and the environmental conditions. Despite attempts to standardize animal facilities, laboratories always differ in many environmental factors that affect the animals' phenotype [e.g. noise, odours, microbiota, or personnel (Crabbe et al. 1999; Chesler et al. 2002; Wahlsten et al. 2002; Würbel 2002; Sorge et al. 2014)]. In a landmark study, Crabbe and colleagues (1999) investigated the confounding effects of the laboratory environment and G × E interactions on behavioural strain differences in mice. Despite rigorous standardization of housing conditions and study protocols across three laboratories, systematic differences were found between laboratories, as well as significant interactions between genotype and laboratory. Even temporal variation within a single laboratory can lead to relevant effects, as demonstrated in a recent study where researchers found considerable phenotypic variation between different batches of knockout mice tested successively in the same laboratory (Karp et al. 2014; von Kortzfleisch et al. 2020).
The reaction norm is a concept helping to explain the observation that individuals of the same genotype will produce different phenotypes if they experience different environmental conditions (Woltereck 1909). It is the result of a complex environmental cue response system, which buffers the functioning of the organism against environmental and genetic perturbations (Schmalhausen 1949; Waddington 1942; Forsman 2015). The consequence of such a regulatory system is that environmental influences can play an important part in shaping the phenotype. Environmental influences do not only play a role at the time of assessment of the phenotype but throughout the ontogeny of the organism (Schlichting and Pigliucci 1998). A reaction norm perspective on phenotypic traits unifies two concepts which have often been treated as opposing mechanisms: phenotype diversification due to environmental variation (plasticity) and the limitation of phenotypic variation by mechanisms that buffer development against genetic and environmental variation (canalization). Both plasticity and canalization have been considered adaptive traits that evolved as a consequence of environmental variation, though following Woltereck's (1909) arguments, it is the reaction norm itself that one should consider as the evolved trait (Stearns 1989). Its adaptive value is, however, limited to a certain range of environmental variation: environmental situations that lie far outside the range of environments a species experienced over its evolutionary past can overtax the organism's ability to respond appropriately to the situation and lead to maladaptive or pathological responses. With respect to reproducibility, it must be emphasized that 'phenotype' is not restricted to visible differences between individuals but refers equally to differences in physiological or behavioural responses to any sort of stimulation or treatment.
We have recently argued that a failure to recognize the implications of reaction norms might seriously compromise reproducibility in bioscience, specifically in in-vivo research (Voelkl and Würbel 2016; Voelkl et al. 2018). Laboratory experiments that are conducted with inbred animals under highly standardized conditions are testing only a very narrow range of one specific reaction norm. Independent replicate studies that fail to reproduce the original findings might not necessarily indicate that the original study was poorly done or reported, but rather that the replicate study was probing a different region of the norm of reaction (Voelkl et al. 2020). Therefore, the attempt to improve reproducibility through rigorous standardization of both genotype and environment has been referred to as the "standardization fallacy" (Würbel 2000). Here we will explore this proposition in more detail, first considering the case of a single dominating environmental factor, and then reaction norms of small effect. In practical terms, this will lead us to emphasize the importance of including the laboratory environment as a factor in multi-laboratory studies and meta-analyses, or to consider introducing a correction factor in the statistical model to account for predicted between-laboratory variation.
Conceptualizing the reaction norm
The reaction norm can be conceptualized as a function mapping an environmental parameter to an expected value of a phenotypic trait (Fig. 1).
If we denote the environmental parameter as X and the phenotypic trait of the organism as Y, then the norm of reaction h(⋅) gives the expected value for Y given the environmental state x as E(y|x) = h(x). In many cases, the phenotypic trait will be a continuous-valued trait. In this case, we can describe the distribution of expected values for the trait by a probability density function (PDF) f(y). The environmental parameter is assumed to be a characteristic that can be measured on a continuous scale. Environments differ in the environmental parameter, and the probability of finding the environment in a specific state regarding this parameter can be given by a probability density function g(x). Hence, with the help of the reaction norm, we can describe the relationship between the expected trait value and the distribution of the environmental states with the composite function (assuming a monotonic reaction norm)

f(y) = g(h⁻¹(y)) |d h⁻¹(y)/dy|   (1)

Originally, Woltereck (1909) referred to the relationship between a specific environmental variable and the phenotype as Phänotypenkurve (phenotype curve), while he used the term Reaktionsnorm (reaction norm) for specifying the collective influence of all environmental variables. However, later Woltereck widened the use of the term reaction norm to include also small subsets of phenotype curves or even phenotype curves of a single environmental variable. Today the term norm of reaction is usually used to describe the relationship between a single environmental parameter and the expected phenotype of the organism (Pigliucci 2005; Sarkar 1999). In evolutionary ecology, reaction norms are often the target of the study. Reaction norms are studied experimentally by systematically varying one environmental parameter. If one wants to describe the combined effect of two or more environmental parameters on the phenotype, the norm of reaction takes on the form of a surface or a hypersurface. Conceptually, there is no bound on the number of dimensions included, though the limits of human imagination set constraints, as the heuristic value of the model quickly decreases with increasing dimensionality. Furthermore, collecting empirical data becomes very cumbersome when combinations of several parameters need to be varied systematically. For these two reasons, defining high-dimensional norms of reaction is an approach rarely taken or advised.
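As a toy illustration of Eq. (1), the following sketch (hypothetical reaction norm and environmental distribution; all numbers are ours) propagates an environmental distribution g(x) through a reaction norm h(x) to obtain the induced distribution of expected trait values:

```python
# Sketch (hypothetical numbers): pushing an environmental distribution g(x)
# through a reaction norm h(x), the composite relationship of Eq. (1).
import numpy as np

rng = np.random.default_rng(42)

def h(x):                      # a hypothetical, mildly nonlinear reaction norm
    return 10.0 + 2.0 * x - 0.04 * x**2

x = rng.normal(loc=22.0, scale=1.5, size=100_000)   # environment, e.g. temperature
ey = h(x)                                           # expected trait value per environment
print(ey.mean(), ey.std())     # nonlinearity shifts and skews the trait distribution
```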
Dominating factors
In most cases of biomedical research, environmentally induced trait variation apart from the treatment effect is not of interest and is considered unwanted noise. The predominant approach to dealing with environmentally induced variation is to identify potential dominating environmental parameters and keep them constant (standardization), where we speak of a parameter as 'dominant' if it contributes much more to the total environmentally induced trait variance than most other parameters. In those cases where a dominating factor can be identified but not controlled, it might be recorded and added to the analysis as a co-variate or nuisance factor (Fig. 2). The very idea of environmental standardization is, thus, to reduce environmentally induced trait variation by reducing variation in all those environmental factors that are known to, or suspected to, cause trait variation. The list of factors standardized in most pre-clinical studies with rodent model organisms includes (but is not limited to) cage size, cage content (nesting material, shelter, enrichment devices), housing temperature, humidity, light regime, stocking density, food and water supply, handling techniques and cage maintenance routines. In fact, many more environmental factors are standardized, though some of them seem so self-evident or trivial that they are hardly ever mentioned and easily overlooked (e.g. all laboratory environments are free of catastrophic events like hailstorms or feline predators). Thus, rigorous standardization is presumed to eliminate most or all dominating factors and, hence, to lead to a substantial reduction in environmental variation, and arguably also to a reduction in environmentally induced trait variation. Study-specific standardization will mainly reduce within-study trait variation, while standardization across studies (harmonization) will reduce both within- and between-study variation.
Reaction norms of small effect
If all environmental factors with dominating contributions to trait variation have been "neutralized" in a big sweep, one might believe that the remaining environmentally induced variation is of little interest. This, however, might not necessarily be the case, because in addition to the environmental conditions, the genetic background of the laboratory animals is also highly standardized when experiments are conducted with inbred mouse strains. Mice used in a single study will be delivered from the same breeding facility and stem from the same breeding line. As a consequence, individual genetic variation is very small, with the result that environmentally induced variation and G × E interactions might still make up most of the total biological variation in the organism (Würbel 2000). Environmental effects should, therefore, still be taken into account. Yet, the nature of the combined environmental influences has changed. Originally, we were confronted with the situation of many environmental parameters having a small effect on trait variation and one or a small number of dominating parameters contributing much more to trait variability. However, after the dominating factors have been taken care of, we should be left only with a large number of factors, each having a small effect on the total variance. This situation requires a different treatment. Assuming that those factors are additive and independent of each other, and recalling the central limit theorem (Galton 1875; De Moivre 1756; Lindeberg 1922), we can expect that under those assumptions the limiting distribution for the effect of the environmental states can be described by a Gaussian random variable X ∼ N(μ, σ).

Fig. 1 (a) A reaction norm describes the relationship between the expected value of a phenotypic trait (E(Y)) and an environmental parameter (X) for a specific genotype. The observed values of the phenotypic state (indicated by the Gaussian bell curves) will vary due to test variation, measurement error, and biological variation induced by variation in other environmental parameters. (b) The reaction norm is a genotype-specific property: different genotypes (g_1, g_2) can have different reaction norms, with the effect that for the same environmental parameter value, x_3, g_1 and g_2 produce different expected trait values, k and m. For some x, both genotypes can have the same expected value for y (e.g. E(y|g_1, x_2) = E(y|g_2, x_2) = l), and different genotypes can have the same expected trait value under different environmental conditions (e.g. E(y|g_2, x_1) = E(y|g_1, x_3) = k). If the reaction norm is flat, we expect the same trait value even under different environmental conditions (e.g. E(y|g_2, x_3) = E(y|g_2, x_4) = m).
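A quick simulation (illustrative numbers only) contrasts the near-Gaussian combined effect of many small additive factors with the distinctly non-Gaussian effect of a single dominating factor:

```python
# Sketch (illustrative numbers): many small additive environmental effects
# approach a Gaussian (central limit theorem); one dominating factor does not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_env, n_animals = 200, 50_000

small = rng.uniform(-1, 1, (n_animals, n_env)).sum(axis=1) / np.sqrt(n_env)
dominant = small + 4.0 * (rng.random(n_animals) < 0.5)   # e.g. two husbandry regimes

print(stats.kurtosistest(small))      # close to Gaussian
print(stats.kurtosistest(dominant))   # strongly non-Gaussian (bimodal mixture)
```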
Reproducibility
As the reaction norm allows relating environmental variation to expected variation in trait values Y, we might ask whether this can help us in defining an acceptance region in which the effect size estimate of a replicate study has to fall in order to be considered a 'successful' replication. Traditionally, the discussion of how to find this region has focused almost exclusively on the domain of Y, the trait, by partitioning the observed variation in the trait value into variance attributed to laboratory (i.e. environmental variation) and variance attributed to individual variation and measurement error. Here, we suggest a conceptually different approach: instead of defining the acceptance region based on observed trait variation, we want to define the acceptance region based on the range of expected values given the environmental states. We can consider two different scenarios: (a) the reaction norm is known and the values x of the environmental variable for the specific studies are known, and (b) the reaction norm is known and the distribution of the environmental variable is known. Under scenario (a), we can use the reaction norm to find the expected value for y for the original study as E(y|x_1) = h(x_1), where x_1 is the value of the environmental variable of the original study. Likewise, the expected value for y of a replicate study done under environmental condition x_2 is given by E(y|x_2) = h(x_2). Different measures for reproducibility have been suggested, though for our purpose a very simple definition might suffice. We say that a replication study successfully reproduced the original finding if its parameter estimate falls within the confidence interval of the original study. The replicate study can be said to reproduce the findings of the original study if

ȳ_1 + (h(x_2) − h(x_1)) − z·SE_1 ≤ ȳ_2 ≤ ȳ_1 + (h(x_2) − h(x_1)) + z·SE_1

where ȳ_1 is the mean of the observed values of the original study, ȳ_2 is the mean of the observed values of the replicate study, SE_1 is the standard error of the mean estimate of the original study done under environmental condition x_1, and z is a parameter determining the confidence level. In words, as we know the difference in expected trait values for the environmental conditions under which the original and replicate study have been conducted, we can shift the confidence interval for the mean estimate of the original study by that amount before testing whether the mean estimate of the replicate study falls within that interval.

Fig. 2 Effect of dominating factors on effect size estimates and reproducibility. Panel a shows the hypothetical results of 25 studies, where between-study variability is relatively large in comparison to within-study variability and the confidence intervals of several studies would not include the summary effect size estimate. In panel b, however, studies are sorted by an environmental gradient (ambient temperature) on the y-axis, suggesting that this environmental factor has a linear influence on the effect size of the experimental treatment. In this case, inclusion of this factor would allow giving predicted values with respect to the environmental variable, and most studies capture the predicted value for the respective ambient temperature. In the case of a specific environmental factor that was reliably measured and reported for all studies, such a regression approach would, indeed, be the best option for both estimating the conditional effect size and estimating replication success.
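The scenario (a) criterion above translates directly into a few lines of code (the reaction norm and all numbers below are hypothetical, chosen only to exercise the rule):

```python
# Sketch of the scenario (a) criterion: shift the original confidence interval
# by the reaction-norm-predicted difference h(x2) - h(x1), then check the
# replicate mean against the shifted interval.
def replicates(y1_mean, se1, y2_mean, h, x1, x2, z=1.96):
    shift = h(x2) - h(x1)                  # expected change due to the environment
    lo = y1_mean + shift - z * se1
    hi = y1_mean + shift + z * se1
    return lo <= y2_mean <= hi

h = lambda x: 10.0 + 2.0 * x               # hypothetical known reaction norm
print(replicates(14.0, 0.5, 15.8, h, x1=2.0, x2=3.0))   # True: 15.8 in 16 +/- 0.98
```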
Practically, such cases where the reaction norm and the environmental parameters are known might be rare, because if they are known, then the expected value for y could be deduced readily from x and there would be little need to actually perform the experiment. Under scenario (b), the reaction norm is known, but the researcher is blind to the actual parameter values of the environmental variable X under which the original study or the replication study were performed. However, the researcher knows the overall distribution of X. In this case we can approach the question of reproducibility differently. If we know the distribution for X and the reaction norm h(⋅), Eq. (1) allows us to evaluate the distribution of the expected values for Y. We can use this distribution to ask for the likelihood that the mean value of an observed set of values, ȳ, could stem from Y, by calculating the probability that a randomly drawn value from Y would be more extreme than ȳ. We can do this for both the observed mean of the original study, ȳ_1, and the observed mean of the replicate study, ȳ_2. If the product of those probabilities is sufficiently large (larger than a critical value L), we have no reason to reject the idea that both estimates faithfully reflect randomly sampled realizations of the environmental parameter X. For ȳ_1 > M_1 and ȳ_2 > M_1, where M_1 is the first moment of f(⋅), we can speak of successful replication if

(∫_{ȳ_1}^{∞} f(y) dy) · (∫_{ȳ_2}^{∞} f(y) dy) > L

In case of ȳ < M_1, the respective integral is to be taken from −∞ to ȳ. Like scenario (a), scenario (b) suffers from the problem that the reaction norm must be known. If it is not known, we cannot proceed this way, but the reaction norms of small effect can at least be integrated in the statistical model. For this case we noted that the combined effect of many environmental variables should result in Y ∼ N(μ, σ). The contribution of the reaction norms of small effect to an observed difference between two study outcomes will be confounded with other sources of between-study variation; thus, we cannot isolate it and consequently also not determine its effect on reproducibility. However, the reaction norms of small effect can be subsumed in the random variable for laboratory or study in a latent variable model and, hence, statistically taken care of.
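Scenario (b) can be checked by Monte Carlo, drawing environments from the assumed X ∼ N(μ, σ) and pushing them through h to approximate f(y); everything below is an illustrative sketch, not the authors' code:

```python
# Sketch of the scenario (b) check: approximate the tail probability of each
# observed mean under the induced distribution of expected trait values, and
# compare the product of the two probabilities with the critical value L.
import numpy as np

rng = np.random.default_rng(7)

def tail_prob(y_bar, h, mu, sigma, n_mc=1_000_000):
    y = h(rng.normal(mu, sigma, size=n_mc))          # Monte Carlo draw from f(y)
    m1 = y.mean()                                    # first moment M1
    return np.mean(y > y_bar) if y_bar > m1 else np.mean(y < y_bar)

h = lambda x: 10.0 + 2.0 * x - 0.15 * x**2           # hypothetical reaction norm
p1 = tail_prob(14.6, h, mu=2.0, sigma=0.8)           # original study mean
p2 = tail_prob(13.9, h, mu=2.0, sigma=0.8)           # replicate study mean
L = 0.01                                             # critical value, user-chosen
print(p1 * p2, p1 * p2 > L)
```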
Random lab model
A statistical approach incorporating the reaction norm into estimates of individual studies has been suggested by Kafkafi et al. (2017), dubbed the random lab model (RLM). This model adds 'noise' for the presumed variation contributed by the G × E interaction term to the individual variation, generating an 'adjusted yardstick' for inference and parameter estimates. It thus raises the benchmark for finding significant results by trading statistical power for increased realism through wider confidence intervals of the effect size estimates. Technically, the adjustment is achieved by adding a penalizing G × E term to the variance. The standard error for the effect size estimate of a simple contrast of two groups (e.g. 'test' and 'control') can then be calculated as

SE = √(s²/n_1 + s²/n_2 + 2s²_{G×E})

where s² is the observed variance, n_1 and n_2 are the respective sample sizes for the treatment and control groups, and 2s²_{G×E} is the added 'G × E noise' (Kafkafi et al. 2017). The latter term cannot be estimated from data from a single experiment, but it is suggested, or hoped, that large databases or meta-analyses will allow giving rough approximate values for specific fields of research and specific types of interventions.
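The adjusted standard error above translates directly into code; the value of s2_gxe is the field-specific G × E variance guess that, as noted, cannot be estimated from a single study:

```python
# Sketch of the Random Lab Model adjustment as described above: the G x E
# variance term widens the standard error of a two-group contrast.
import math

def rlm_se(s2, n1, n2, s2_gxe):
    return math.sqrt(s2 / n1 + s2 / n2 + 2.0 * s2_gxe)

s2, n1, n2 = 4.0, 12, 12
print(rlm_se(s2, n1, n2, s2_gxe=0.0))   # classical SE of the difference
print(rlm_se(s2, n1, n2, s2_gxe=0.5))   # adjusted: wider, more conservative
```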
Discussion
We started off with the observation that the phenotype of an organism is always a product of its genotype and the environmental circumstances under which it developed. Thus, a phenotypic trait should not be considered a fixed entity but a conditional property of the organism. Experimenters have long identified environmental clustering, be it as sites, laboratories, batches, racks or cages, as a potential source of covariation. The seemingly logical solution to this problem is to add shared environment as a random effect in the statistical model. For example, if a large biomedical intervention study is carried out at several laboratories, then a joint analysis would include the identity of the laboratory as a random factor in the analysis. In single-laboratory studies, batch or cage are often added as random factors. These random factors are by default modelled as normally distributed random variables. As several authors have noted (e.g. Einbeck et al. 2007; Aitkin 1999; Neuhaus 2010, 2011), this assumption might often be made for computational convenience and not because of compelling empirical evidence. From a conceptual viewpoint it is not always justified: it might work well if the environmental influence is a sum of many different underlying processes (reaction norms of small effect), while the presence of dominating factors can lead to non-normal distributions for the expected trait value.
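For illustration, a minimal mixed-model analysis with laboratory as a normally distributed random effect (simulated data; statsmodels' MixedLM is one common implementation, not the one used by the authors):

```python
# Sketch: laboratory as a random effect in a mixed model, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
labs = np.repeat(np.arange(8), 30)                  # 8 labs, 30 animals each
lab_effect = rng.normal(0.0, 1.0, 8)[labs]          # between-lab variation
treatment = np.tile([0, 1], 120)
y = 10.0 + 0.8 * treatment + lab_effect + rng.normal(0.0, 1.0, 240)
df = pd.DataFrame({"y": y, "treatment": treatment, "lab": labs})

model = smf.mixedlm("y ~ treatment", df, groups=df["lab"]).fit()
print(model.summary())    # lab-to-lab variance is absorbed by the random effect
```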
Next, we have noted that reaction norms come in two flavours: dominating factors and factors of small effect. Given the usually continuous nature of environmental effects on trait values, this is a rather arbitrary distinction that would defy any attempt at operationalization. Dominating factors are environmental factors that contribute much more to the overall trait variation than other environmental factors, but for practical purposes we can simply define dominating factors as factors where we can see clear effects on the trait variation given realistic (reasonably small) sample sizes. If such effects exist, vigilant experimenters will either control the environmental parameter (keeping it constant) or incorporate it in the analysis by systematically varying it and adding it to the model. Furthermore, we can expect a large number of environmental parameters having a small effect on the expected phenotype value. Employing the central limit theorem, we suggested summarizing the effects of all those parameters in a single normally distributed random variable. The question arises whether those small environmental effects can have an effect on the reproducibility of a study result. We argue that this can, indeed, be the case, for two reasons. First, even if the effect of a single environmental parameter might be rather small, the combined effects of many such parameters can sometimes become substantial (though in many cases they will not, as a result of regression to the mean). Second, what we see in biomedical research is a tendency to standardize many aspects of experimental studies. Standardizing instruments and measurement protocols means reducing measurement error. Standardizing housing conditions and testing conditions means eliminating most dominating environmental factors and, hence, reducing the overall variation. At the same time, standardizing the genotype by working with highly inbred lines means that the genetic variation is also largely reduced, leading again to a reduction in the variance of the phenotype. Thus, while the overall phenotypic variation is reduced through standardization, the relative proportion of the phenotypic variation contributed by the remaining environmental factors will consequently increase (Würbel 2000). As the reduction in measurement error and genetic variation results in a larger proportion of phenotype variation that can be attributed to the reaction norms of small effect, we have to consider what consequences this has for the distribution of the expected trait value. From viewing between-study variation from a reaction norm perspective, we can learn two important things. First, as soon as the slope of the reaction norm is not flat, the environment affects the expected trait value and should be incorporated in any explanatory model as a latent variable. In analyses of multi-laboratory studies and in meta-analyses this is done by treating the laboratory, the study site, or the study as a random factor in a mixed effect model. Indeed, over the last decades several authors have emphasized and diligently advocated the use of mixed effect models for multicentre studies (Localio et al. 2001; Kahan and Morris 2013) and meta-analyses (Freeman et al. 1986). Their efforts have not been in vain, and today mixed effect models can be considered the standard approach to dealing with laboratory-to-laboratory or clinic-to-clinic variation.
However, while those recommendations for the use of mixed effect models were based on statistical arguments (non-independence, and the observation that adding a random factor for laboratory or clinic can reduce the unexplained error term), we arrived at the same suggestion from what we would call first principles of biology: the norm of reaction as a cogent product of stabilizing selection. Second, as soon as dominating factors have non-linear reaction norms, it becomes likely that the resulting distribution of expected trait values is not normal. Does this mean that multi-centre studies or meta-analyses that implicitly assume a normally distributed latent variable for the combined effects of the laboratory environment are wrong? From a conceptual viewpoint this might indeed be a questionable assumption; however, it might not matter too much for practical purposes. For most statistical models it is sufficient that normality is approximately met, as the algorithms tend to be rather robust against moderate deviations from normality (McCulloch and Neuhaus 2010; Maas and Hox 2004; Grilli and Rampichini 2014; Bell et al. 2018). That is, if the reaction norm of the dominating factor does not lead to a heavily skewed or distorted distribution of the latent variable, the effect on the model outcome might be negligible. If one has reason to believe that the assumption is substantially violated, non-parametric modelling approaches based on mixture models (Aitkin 1999; Einbeck et al. 2007) or Markov chain Monte Carlo methods (Hadfield 2010) might offer suitable alternatives.
Conclusion
When studying living organisms, we are faced with inherent biological variation that is distinct from random noise or measurement error and that is fundamental to the correct interpretation of experimental results. Fully acknowledging this requires adopting a reaction norm perspective on physiological and behavioural responses. This will lead to a re-thinking of parameter estimation and inference, it will let us see reproducibility in a new light, and it can even help to gain new insights into adaptive responses and gene-by-environment interactions. Here, we have tried to dissect its implications for the reproducibility debate and, more generally, for the interpretation of experimental results in biomedical research.

Acknowledgements We thank the anonymous reviewers for their helpful comments on an earlier version of this manuscript.
Funding Open Access funding provided by Universität Bern.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 6,679.4 | 2019-01-07T00:00:00.000 | [
"Biology",
"Philosophy",
"Medicine"
] |
Comparing Sequential with Combined Spatiotemporal Clustering of Passenger Trips in the Public Transit Network Using Smart Card Data
Smart card datasets in the public transit network provide opportunities to analyse the behaviour of passengers as individuals or as groups. Studying passenger behaviour in both spatial and temporal space is important because it helps to find the patterns of mobility in the network. Also, clustering passengers based on their trips with regard to both spatial and temporal similarity measures can improve group-based transit services such as Demand-Responsive Transit (DRT). Clustering passengers based on their trips can be carried out by different methods, which are investigated in this paper. This paper sheds light on differences between sequential and combined spatial and temporal clustering alternatives in the public transit network. Firstly, the spatial and temporal similarity measures between passengers are defined. Secondly, the passengers are clustered using a hierarchical agglomerative algorithm by three different methods including sequential two-step spatial-temporal (S-T), sequential two-step temporal-spatial (T-S), and combined one-step spatiotemporal (ST) clustering. Thirdly, the characteristics of the resultant clusters are described and compared using maps, numerical and statistical values, cross correlation techniques, and temporal density plots. Furthermore, some passengers are selected to show how differently the three methods put the passengers in groups. Four days of smart card data comprising 80,000 passengers in Brisbane, Australia, are selected to compare these methods. The analyses show that while the sequential methods (S-T and T-S) discover more diverse spatial and temporal patterns in the network, the ST method entails more robust groups (higher spatial and temporal similarity values inside the groups).
Introduction
Automated Fare Collection (AFC) systems have been implemented in public transit networks for about two decades. These systems not only expedite the process of fare collection but also produce valuable datasets. Smart card datasets create a great opportunity for both researchers and practitioners of the public transit network to improve the status quo. Smart card datasets usually include the location and time of boarding and/or alighting transactions of passenger trips. Contrary to classic surveys, which were limited in sampling and may not have reflected the ground truth, smart card datasets are comprehensive and reliable. The datasets can be used to reconstruct passenger trips, which helps to understand, improve, and evaluate the network performance [1,2]. Hence, smart card datasets attract the attention of both researchers and practitioners who desire to improve the public transit network.
Studies of travel demand in the public transit network have focused on understanding how passengers move in the network by modelling both the time and location of their trips [3]. Smart card datasets can help to discover the patterns of travel demand. Data mining techniques have been used to extract travel demand patterns from smart card datasets [4-6]. In other words, data mining techniques can discover groups of passengers who have similar travel features based on similarity measures. For instance, passenger groups with similar travel times or lengths can be determined. Clustering passengers can support various group- and customer-centric transit services and mobility applications such as DRT systems [7], friend recommendation systems [8], inferring socioeconomic attributes of passengers [9], level of access in the public transit network [10,11], and traffic flow prediction models [12-14]. Potential implications of the spatial and temporal clustering methods are discussed later in the discussion section. Consequently, ascertaining travel demand patterns using data mining techniques is a building block for many novel applications.
The spatial or temporal perspective in passenger clustering can form different groups of passengers (i.e., groups of passengers with spatially similar (same routes) or temporally similar (same time) trips). Also, a passenger moves simultaneously in both spatial and temporal space. Two passengers can be similar based on the spatial similarity measure but dissimilar according to the temporal similarity measure; for instance, two passengers may use the same routes but in different periods of the day. In addition, a passenger can have one or more trips during a day, all of which should be considered in measuring the spatial or temporal similarity with other passengers. Spatial and/or temporal similarity measures between passengers based on their trips can be used to study the closeness or relationship between the passengers [15]. Therefore, to have a comprehensive insight into travel patterns, both spatial and temporal dimensions of the trips should be considered in the clustering of passengers.
Spatial and temporal similarity measures should be defined separately because of fundamental differences. Spatial space is a two-dimensional space with units such as meters or inches, whereas temporal space is a one-dimensional space with units such as minutes. Hence, defining a single spatiotemporal similarity measure can be an ambiguous exercise because it needs to merge these two different spaces [15]. Moreover, to obtain passenger clusters with both spatial and temporal similarities, it is necessary either to cluster them in two steps (sequentially) or to combine the values of the spatial and temporal similarities (calculated separately) and then cluster them in one step. Also, different priorities in the sequential method (first spatial clustering and then temporal clustering, or vice versa) may reveal different passenger groups. The existing literature on clustering passenger trips focuses on sequential spatial-then-temporal clustering methods, which are examined in the next section. Consequently, passenger clusters with spatial and temporal similarities can be discovered by different methods, which yield different outcomes.
This paper compares the characteristics of passenger groups discovered by different methods of spatial and temporal clustering. To the best of our knowledge, this paper for the first time sheds light on the differences between the sequential and combined spatial and temporal clustering alternatives in the public transit network. Firstly, the spatial and temporal similarity measures between the passengers are defined. Secondly, the passengers are clustered using a hierarchical agglomerative algorithm with three different methods, including sequential two-step spatial-temporal (S-T), sequential two-step temporal-spatial (T-S), and combined one-step spatiotemporal (ST) clustering. Thirdly, the characteristics of the discovered groups are described and compared using maps, numerical and statistical values, the cross correlation technique, and temporal density plots. Finally, some passengers are selected to show how differently the passengers are clustered by the above-mentioned methods. Four days of smart card data including 80,000 passengers in Brisbane, Australia, are selected to compare these methods.
The remainder of this paper is structured as follows. Firstly, the existing literature is reviewed. Then, the spatial and temporal similarity measures and the clustering algorithm are explained in the methodology section. Next, the case study and results are described in the Results section. Finally, the methodology, findings, and future plans are summarized in the conclusion section.
Literature Review
Data mining techniques have recently been used to discover spatial and temporal patterns in the public transit network using smart card data. Agard et al. [4] carried out the first study using clustering algorithms to discover patterns in smart card data; since 2013, the use of clustering algorithms on these datasets has advanced considerably. Ma et al. [16] determined transit passenger regularity by clustering passengers based on the location of boarding stops and then dividing the clusters according to the time interval of the boarding transactions. They used one week of AFC transactions from Beijing and compared the efficiency of three clustering algorithms (K++, C4.5, KNN). Also, they showed that the regularity of a transit passenger would be a significant factor for transit market analysis. Nishiuchi et al. [17] studied passenger regularity based on spatial and temporal patterns using more than 500,000 transactions for 32,000 users during one month in Osaka, Japan. Tao et al. [18] utilised single-day transactions from Brisbane to detect the major travel paths of bus passengers at the stop level, using flow-comap techniques for visualising the patterns.
Kieu et al. [5] studied spatial and temporal aspects of travel patterns. Firstly, trips were clustered regarding the location of alighting stops; then the identified groups were divided based on the location of boarding stops and, next, according to the times of the boarding transactions. They used the DBSCAN algorithm for clustering the AFC data in Brisbane over 4 months. Sun and Axhausen [19] decomposed AFC data using a probabilistic tensor factorisation model to investigate the interactions between time of day, passenger type, and origin and destination zones. Manley et al. [20] analysed variation in regular and irregular travel behaviour to derive a system-wide spatial-temporal understanding of regularity in travel behaviours. They used the DBSCAN algorithm over 49 weekdays. Also, they investigated regularity over different transit modes and found that the bus mode had a higher proportion of regular travellers than the others. Yu and He [21] used a 3-step methodology to discover spatial-temporal characteristics of bus travel demand using a heat-map technique and the Gaussian Mixture Model (GMM). They used 8 weeks of data from Guangzhou. The heat-map method visually unveils the spatial-temporal travel demand patterns at a regional level. Ghaemi et al. [22] presented a new representation of the smart card dataset. This provided a visual guide to better understand temporal patterns. Seventeen clusters were identified by an agglomerative hierarchical clustering method, characterizing the temporal behaviour of users in terms of single-trip, regular-user, late-commuter, long-day, midday, and active and inactive groups. Briand et al. [23] proposed a 2-level generative model that applies the GMM to regroup passengers based on their temporal habits. They used 391,783 transactions by 2504 users over 4 years in Gatineau. Also, they modelled time in a continuous space. They found that clusters over time mostly exchange their cards with clusters having similar patterns. Table 1 summarizes the mentioned studies.

Table 1: Summary of the mentioned studies.

Temporal clustering
  Ghaemi et al. [22]: Clustering passengers based on boarding time transactions.
Spatial clustering
  Tao et al. [18]: Clustering trips based on locations of boarding and alighting stops.
Spatial-temporal clustering
  Ma et al. [16]: Clustering trips first based on location of boarding stops, then dividing clusters according to the time interval of boarding transactions.
  Nishiuchi et al. [17]: Clustering and investigating relations between the spatial and temporal patterns of trips.
  Kieu et al. [5]: Clustering trips first based on locations of alighting stops, then based on the location of boarding stops, and thirdly based on times of the boarding transactions.
  Sun and Axhausen [19]: Decomposing data to investigate the interactions between the time of day, passenger type, and origin and destination zones.
  Manley et al. [20]: Investigating spatial and temporal regularity over different transit modes.
  Yu and He [21]: Using heat maps to discover spatial and temporal demand of bus trips.
  Briand et al. [23]: Modelling time in a continuous space to investigate passenger exchanges between clusters over time.

Consequently, the existing literature has recently focused on the spatial and temporal patterns of travel behaviour in the public transit network. While initial studies focused more on the temporal patterns of trips [4], more recent studies focused on both spatial and temporal patterns [20,21,23]; the latter studies discovered the spatial and temporal patterns in the S-T sequential clustering way, in which passengers are first clustered based on the spatial similarity and then each spatial group is clustered based on the temporal similarity [24]. However, no study has considered the opposite sequential clustering of first the temporal clustering and then the spatial clustering (T-S); in other words, no study has investigated the differences between these two methods of sequential clustering. In addition, no study has tried to cluster passengers in the public transit network in a combined one-step way considering both spatial and temporal measures. These research gaps are addressed in this paper. The scientific contributions of this paper are twofold: (1) Defining a one-step method for extracting the spatiotemporal patterns in the public transit network.
(2) Comparing the different methods of spatial and temporal clustering of passengers in the public transit network using the smart card dataset.
Methodology
This section briefly explains trip reconstruction from the smart card dataset, the definition of the spatial and temporal similarities, and the clustering of the passengers by the different methods. Figure 1 presents the main steps of the methodology. The passengers are modelled by their trips during the entire day. The trips are reconstructed from the smart card dataset, which includes both boarding and alighting transactions. Each trip is made of one or more trip legs, and each trip leg comprises the space and period between a boarding and an alighting transaction. Two trip legs are linked based on the time gap between the first alighting and the next boarding transaction. Various thresholds have been examined for this time gap; based on the analyses of Alsger et al. [25], the time gap is set to 30 minutes in this study. Therefore, if the time gap between two trip legs is less than 30 minutes, they are linked as one trip. Data mining techniques aim to discover patterns in large datasets. A clustering or unsupervised learning method, as a data mining technique, assembles sets of objects into similar groups; it can assemble them in a way that increases the similarity between members of a group and/or increases the dissimilarity between members of different groups. The clustering algorithm initialises based on a similarity measure among the objects, in this case the passengers. Calculating the similarity between every pair of objects in the dataset builds a matrix that works as the input for the clustering algorithm [26].
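The 30-minute linking rule described above can be sketched as follows; this is a minimal illustration, and the (card_id, board_time, alight_time) record layout is an assumption, not the actual Translink schema.

```python
# Minimal sketch: consecutive trip legs of the same card are linked into one
# trip when the gap between alighting and the next boarding is below the
# 30-minute threshold used in the paper.
from datetime import timedelta

TRANSFER_GAP = timedelta(minutes=30)

def reconstruct_trips(legs):
    """legs: list of dicts with 'card_id', 'board_time', 'alight_time',
    sorted by card and boarding time. Returns a list of trips, each a list
    of linked trip legs."""
    trips = []
    for leg in legs:
        if (trips
                and trips[-1][-1]["card_id"] == leg["card_id"]
                and leg["board_time"] - trips[-1][-1]["alight_time"] <= TRANSFER_GAP):
            trips[-1].append(leg)    # transfer: extend the current trip
        else:
            trips.append([leg])      # new trip starts
    return trips
```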
The trip similarity is examined in two parallel steps: the spatial and the temporal similarity measures. Both measures are adopted from Faroqi et al. [15], where more details can be found. The measures are developed specifically for smart card datasets that include boarding and alighting transactions (unlike GPS trajectories, which include measurements every few meters). In brief, two trips are considered spatially similar if the distance between the origins (destinations) is less than a threshold (A in (1)) and the angle between the two trips is less than a threshold (B in (1)). Equations (1) to (3) present the spatial similarity measure and the corresponding functions between trips (T1, T2) that run between (O1, D1) and (O2, D2), where "O" stands for origin and "D" for destination; each origin or destination stop is represented by coordinates (x, y); "T" stands for trip; "A" is the maximum distance between origins or destinations; "B" is the maximum angle between trips; "d(O1, O2)" or "d(D1, D2)" is the distance function that measures the Euclidean distance between two points; "di(T1, T2)" is the direction function that measures the angle between two trips; and "SS(T1, T2)" is the spatial similarity value between the two trips. Values of the spatial similarity vary between 0 and 1 [15].
Spatial similarity measure, distance function and direction function between trips (reconstructed from the definitions above):

SS(T1, T2) = 1 if d(O1, O2) <= A, d(D1, D2) <= A and di(T1, T2) <= B, and 0 otherwise    (1)
d(O1, O2) = sqrt((xO1 - xO2)^2 + (yO1 - yO2)^2)    (2)
di(T1, T2) = the angle between the directed segments O1->D1 and O2->D2    (3)

Algorithm 1: Pseudocode for calculating the spatial similarity between passengers (not reproduced here).
Algorithm 2: Pseudocode for calculating the temporal similarity between passengers (not reproduced here).
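The following is a minimal Python sketch of the spatial similarity just described. It is a simplified reading of Eqs. (1)-(3) and Algorithm 1 (in particular, the pairing of similar trips is simplified and may over-count when one trip matches several others); the thresholds A = 600 m and B = 6 degrees are the values reported later for Brisbane, and stop coordinates are assumed to be in a projected (metric) system.

```python
# Minimal sketch of trip-level and passenger-level spatial similarity.
import math

A_MAX_DIST = 600.0   # metres, threshold A
B_MAX_ANGLE = 6.0    # degrees, threshold B

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def direction(o, d):
    return math.degrees(math.atan2(d[1] - o[1], d[0] - o[0]))

def angle_between(t1, t2):
    a = abs(direction(*t1) - direction(*t2)) % 360.0
    return min(a, 360.0 - a)

def spatially_similar(t1, t2):
    """t1, t2: ((ox, oy), (dx, dy)) origin-destination pairs."""
    o1, d1 = t1
    o2, d2 = t2
    return (dist(o1, o2) <= A_MAX_DIST
            and dist(d1, d2) <= A_MAX_DIST
            and angle_between(t1, t2) <= B_MAX_ANGLE)

def spatial_similarity(passenger1, passenger2):
    """Passenger-level similarity: summed lengths of the shorter trip in each
    similar pair, divided by the larger total trip length of the two
    passengers (a simplified reading of Algorithm 1)."""
    length = lambda t: dist(t[0], t[1])
    shared = sum(min(length(t1), length(t2))
                 for t1 in passenger1 for t2 in passenger2
                 if spatially_similar(t1, t2))
    denom = max(sum(map(length, passenger1)), sum(map(length, passenger2)))
    return min(1.0, shared / denom) if denom else 0.0
```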
Equation (1) is appropriate for a pair of passengers each of whom has just one trip. The final spatial similarity value for a pair of passengers who have more than one trip is taken as the ratio of the sum of the lengths of the shorter similar trips to the greater of the total trip lengths of the two passengers. For instance, if passenger A has two trips with lengths of 3 and 6 km and passenger B has one trip with a length of 4 km that closely overlaps with passenger A's 3 km trip, then the spatial similarity between these two passengers will be 3/(3+6) = 0.33 (i.e., 33%). Algorithm 1 presents the pseudocode for the spatial similarity between two passengers (P1, P2) who, respectively, have m and n unique trips, where "p" stands for the passenger, "a" measures the sum of the lengths of the shorter similar trips, "a12" is the sum of the lengths of the shorter similar trips between passenger 1 and passenger 2, "a21" is the sum of the lengths of the shorter similar trips between passenger 2 and passenger 1, "B" is the set of similar trips, in which the longest one is chosen to determine the shorter similar trip, "l" is the length of the trip, and the other parameters are defined previously [15]. Two trips are considered temporally similar if their trip times overlap. The temporal similarity between two passengers is taken as the ratio of the sum of the overlapped time between the trips to the greater of the total trip times. Equation (4) presents the temporal similarity measure between two trips (T1, T2) that, respectively, run between (B1, A1) and (B2, A2), where "B" stands for boarding time, "A" for alighting time, and "TS(T1, T2)" for the temporal similarity value. The temporal similarity value between two trips is the ratio of the overlapped trip time to the longer trip time. Values of the temporal similarity vary between 0 and 1 [15].
Temporal similarity measure between trips (reconstructed from the definition above):

TS(T1, T2) = OT(T1, T2) / max(TT(T1), TT(T2))    (4)
Algorithm 2 presents the pseudocode for the temporal similarity between two passengers (P1, P2) who, respectively, have m and n trips, where "TT" stands for trip time, "OT(T1, T2)" is the overlapped time calculated between the two trips, "a" measures the overlapped time, and the other parameters are defined previously [15]. Measuring the spatial similarity and the temporal similarity of the trips separately enables us to find similar trips in the same time interval and the same corridor (the same or opposite direction). For instance, assume two passengers, each of whom has two trips, in the morning and in the evening, on the same route (for example, a bus route between stops G and H) but in opposite directions: one passenger goes from stop G to H in the morning and returns from stop H to G in the evening, while the other goes from stop H to G in the morning and returns from stop G to H in the evening. These two passengers have temporal similarity because they are both in the public transit network in the same time periods, and they also have spatial similarity because each of them has two trips traversing from stop G to stop H and from stop H to stop G.
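A minimal Python sketch of the temporal similarity, again as a simplified reading of Eq. (4) and Algorithm 2; times are assumed to be expressed in seconds since midnight.

```python
# Minimal sketch of trip-level overlap and passenger-level temporal similarity.
def overlap(t1, t2):
    """t1, t2: (board_time, alight_time) intervals; returns overlapped time."""
    return max(0.0, min(t1[1], t2[1]) - max(t1[0], t2[0]))

def temporal_similarity(passenger1, passenger2):
    """Ratio of total overlapped time to the larger total trip time of the
    two passengers (a simplified reading of Algorithm 2)."""
    shared = sum(overlap(t1, t2) for t1 in passenger1 for t2 in passenger2)
    total1 = sum(t[1] - t[0] for t in passenger1)
    total2 = sum(t[1] - t[0] for t in passenger2)
    denom = max(total1, total2)
    return min(1.0, shared / denom) if denom else 0.0
```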
This paper utilises the agglomerative hierarchical clustering algorithm with the Ward method, which minimises the total within-cluster variance. While the agglomerative hierarchical clustering algorithm can be implemented with various linkage methods such as Single, Average, Complete, and Ward, the Ward method is chosen according to the results of comparing these methods by Ferreira and Hitchcock [27]. The hierarchical algorithm itself is chosen because it does not need the number of clusters to be fixed in advance and it is flexible with different similarity measures. It begins at the bottom, where each object has its own cluster, and merges clusters until all the objects form one cluster at the top. The result of the hierarchical agglomerative clustering is a dendrogram that shows how the objects are merged at each step [26]. According to the shape of the dendrograms and the Silhouette information, the dendrogram can be cut at a proper level. The Silhouette information refers to a method of interpretation and validation of consistency within clusters of data [28]. Spatial or temporal clusters of passengers are discovered after cutting the related dendrograms.
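A minimal sketch of this clustering step in Python follows; converting similarities to distances as 1 - similarity and feeding the condensed matrix to Ward linkage are implementation assumptions, not a reproduction of the authors' code (which, per the Results section, was written in R).

```python
# Minimal sketch: Ward agglomerative clustering on a precomputed similarity
# matrix, cutting the dendrogram at the cluster count that maximizes the
# mean Silhouette value.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def cluster_passengers(similarity, max_k=40):
    """similarity: symmetric (n x n) matrix with values in [0, 1]."""
    distance = 1.0 - similarity
    np.fill_diagonal(distance, 0.0)
    Z = linkage(squareform(distance, checks=False), method="ward")
    best_k, best_score, best_labels = None, -1.0, None
    for k in range(2, max_k + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        score = silhouette_score(distance, labels, metric="precomputed")
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```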
Spatial clusters include groups of passengers with similar trip routes, and temporal clusters comprise groups of passengers with similar trip times. To obtain groups of passengers similar in both trip routes and times, three methods are explained and compared: S-T, T-S, and ST. S-T reclusters each spatial group into several temporal groups. T-S reclusters each temporal group into several spatial groups. A potential flaw of both S-T and T-S is that, at the first step of clustering, they ignore the second similarity measure. For instance, if two passengers have high spatial similarity and low temporal similarity, then S-T would consider them in the same group, but T-S would not.
ST is a one-step clustering method that combines the spatial and temporal similarity matrices into one matrix (the spatiotemporal similarity matrix). To join the similarity matrices, they are multiplied element by element. For instance, if passenger A and passenger B have a spatial similarity of 0.75 and a temporal similarity of 0.5, then the spatiotemporal similarity between them will be 0.375. One of the premises of this method is that the similarities can be treated as probabilities; in simple words, the spatial and temporal similarities are indices that ultimately measure the probability of two passengers confronting each other during their trips. Multiplying the spatial and temporal similarity values amounts to calculating the probability of two independent events occurring: one is the passengers travelling at the same locations, and the other is the passengers travelling at the same time. In other words, if the spatial similarity between two trips is assumed to be an event independent of the temporal similarity, then the product of the spatial and temporal similarity values corresponds to both events occurring. Clustering passengers according to the spatiotemporal similarity matrix is the combined one-step spatiotemporal method, which identifies groups of passengers who are simultaneously spatially and temporally similar.
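The combination step itself is a one-line element-wise product; the matrices below are hypothetical two-passenger examples that reproduce the 0.75 x 0.5 = 0.375 case from the text.

```python
# Element-wise combination of the spatial (S) and temporal (T) similarity
# matrices into the spatiotemporal matrix used by the one-step ST method.
import numpy as np

S = np.array([[1.0, 0.75],
              [0.75, 1.0]])   # hypothetical spatial similarities
T = np.array([[1.0, 0.5],
              [0.5, 1.0]])    # hypothetical temporal similarities

ST = S * T                    # off-diagonal entries become 0.75 * 0.5 = 0.375
```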
Results
The explained methods are applied to the smart card dataset of Translink, the public transport authority of South East Queensland (SEQ), Australia. Data for three weekdays and one weekend day are selected: Wednesday to Saturday (20-23 March 2013) are chosen because the weather on all four days was normal and there were no special events during those days. 20,000 passengers are randomly selected for each day, who make approximately 45,000 trip legs per day. The sample size for each day is almost 15% of the whole number of transactions. Considering the analysis by Alsger et al. [29], the sample size can appropriately represent the whole dataset. The dataset includes both the time and location of boarding and alighting transactions, which is an important feature of the Translink dataset, as most AFC systems around the world record only boarding or only alighting transactions. It should be mentioned that the analysis is done in the R language (version 3.3.2) in the RStudio framework [30]. 600 metres (the value for A in (1)) and 6 degrees (the value for B in (1)) are adjusted for the Brisbane public transit network [15]. For a concise presentation, only the maps for a few groups for Wednesday are presented in the paper. Figure 2 shows the map of Brisbane, in which the Central Business District (CBD) area is highlighted with a yellow circle. Also, some of the major train and bus lines representing the directions of the main corridors in Brisbane are presented on the map. The following is an illustrative example for 12 passengers extracted from the dataset. Table 2 shows the spatial similarity values between these passengers, and Table 3 presents the temporal similarity values.
Given the similarity matrices, the hierarchical agglomerative algorithm is implemented and the outputs are presented as dendrograms in Figure 3. At the bottom of the dendrograms the passenger numbers are shown, and the diagram illustrates how they are merged into one cluster at each level. The height of the dendrogram reflects the differences between the passengers; a greater height means more difference between the passenger groups. Also, the values of the Silhouette information for the similarity matrices are presented in Table 4. Higher values of the Silhouette information indicate a better level for cutting the dendrograms.
According to the dendrogram and Silhouette information values in Figure 4, the spatial dendrograms are cut at 16 groups. Each spatial group is clustered into four S-T groups, which have similar spatial patterns but different temporal patterns. Members of each S-T group are passengers who have similar boarding and alighting times of transactions with similar routes. Figure 5 shows the routes and temporal density plots for four S-T groups. According to the dendrogram and Silhouette information values in Figure 6, the temporal dendrograms are cut at 8 groups. Each temporal group is divided into eight T-S groups, which have similar temporal patterns but different spatial patterns. Members of T-S groups are passengers who have similar boarding and alighting locations of transactions during specific periods and peaks. Figure 7 shows the temporal density plots and maps for four T-S groups. The temporal plots mostly present two peaks of transactions during the day, and some groups show one flat peak. Similar to the S-T groups, the two-peak plots can be interpreted as work-home trips, and the one-flat-peak plots as shopping-home trips that usually happen in the middle of the day. Also, most of the spatial patterns present trips between suburbs and the CBD. Therefore, T-S groups represent passenger trips with a variety of temporal and spatial features.
According to the dendrogram and Silhouette information values in Figure 8, the spatiotemporal dendrograms are cut at 32 groups. Four ST groups are represented in Figure 9, in which each route and the density plot next to it represent an ST group. Members of ST groups are passengers who simultaneously have both similar routes and similar transaction times. Obvious corridors for routes from suburbs to the CBD, with clear peaks in the temporal density plots, are observed in the ST groups. In a schematic comparison, the ST groups capture most of the spatial and temporal patterns, including trips between suburbs and the CBD with peaks in the morning and evening. Hence, ST groups can represent the passenger trips with a smaller number of groups than S-T or T-S.
Table 5 shows the mean of the spatial and temporal similarity values for the discovered groups. It should be noted that increasing the number of groups will increase the value of the similarity means; therefore, the number of groups should be considered as a factor when discussing effects on the similarities. Also, in order to compare the different alternatives, the relative values of the spatial and temporal similarities are more important than their absolute values. As expected, spatial clustering has the highest mean spatial similarity (with 16 groups), and temporal clustering leads to the highest mean temporal similarity (with 8 groups). S-T and T-S clustering have twice the number of groups as ST clustering; however, the values for mean spatial similarity in ST clustering are higher than in S-T and T-S, while the values for mean temporal similarity are close together. Furthermore, the average of the means of the spatial and temporal similarity values for ST clustering is higher than for the others; it basically means that members of ST groups are more likely to confront each other during their trips than members of S-T and T-S groups. Consequently, ST clustering leads to higher values of similarity in groups (with 32 groups) in comparison with the S-T and T-S (with 64 groups) clustering methods.
Cross correlation analyses the correlation between groups of the different clustering methods. It reveals how groups from different methods are correlated. To achieve this goal, the number of passengers who are in the same group in different clustering methods is counted and then divided by the size of the group. For instance, if an S-T group has 100 passengers, among which 40 remain in the same T-S group, then the cross correlation between the S-T and T-S groups is 40%. The numbers in Table 6 represent the average of all groups' correlations. For instance, 24% of S-T groups remain in the same groups of T-S, and 32% of T-S groups remain in the same groups of S-T. The fourth column (ST) has the highest values among the others, which means ST clustering covers higher proportions of S-T and T-S clustering. Also, S-T groups cover more T-S groups than the reverse. Considering the average of the spatial and temporal similarity values, the number of groups, and the cross correlation values, the ST clustering method is a more robust method to discover the spatial and temporal patterns than S-T and T-S. The ST method discovered 6 temporal clusters (temporal patterns 4 and 7 are missing from the ST clusters). Also, the proportions of the population in the temporal clusters are closer to those of the S-T method than the ST method. Therefore, the sequential clustering methods have a better performance in discovering diversity in the spatial and temporal patterns in the network. Table 7 ranks the sequential and combined methods considering the results of the analyses. According to Table 5, the ST clusters have the highest spatial and temporal similarity values; the average of the spatial and temporal similarity values among passengers in ST clusters is higher than among passengers in the S-T and T-S clusters. According to Figure 11, the T-S method has a better performance than the ST method in discovering the spatial diversity; the distribution of passengers in the spatial clusters is more similar to that of the T-S method than that of the ST method. Also, the S-T method has a better performance than the ST method in discovering the temporal diversity; the distribution of passengers in the temporal clusters is more similar to that of the S-T method than that of the ST method. In conclusion, while the sequential methods (S-T and T-S) discover more diverse spatial and temporal patterns in the network, the ST method entails more robust groups (higher spatial and temporal similarity values inside the groups) than the others.
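A minimal sketch of the cross-correlation measure defined at the start of this subsection; it assumes integer cluster labels and takes, for each group of one method, the largest share of its members that stay together under the other method, which is one possible operationalization of "remaining in the same group".

```python
# Minimal sketch: average share of each group of method X whose members
# stay together in a single group of method Y.
import numpy as np

def cross_correlation(labels_x, labels_y):
    labels_x, labels_y = np.asarray(labels_x), np.asarray(labels_y)
    shares = []
    for g in np.unique(labels_x):
        members = labels_y[labels_x == g]
        counts = np.unique(members, return_counts=True)[1]
        shares.append(counts.max() / len(members))   # largest co-located share
    return float(np.mean(shares))

# e.g. an S-T group of 100 passengers, 40 of whom land in the same T-S group,
# contributes 0.40 to the average.
```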
Conclusion
The paper investigates different clustering methods for discovering groups of passengers whose trips are spatially and temporally similar. First, the spatial and temporal similarity measures are defined. Then, the passengers are clustered using the hierarchical agglomerative algorithm with three different methods. The outcomes of each method are examined and compared using maps, temporal density plots, and quantitative values. Each method generates different groups with specific characteristics. The S-T method shows more diversity in the spatial dimension of the passenger trips.
The T-S method shows more specific temporal density plots for the groups. The ST method shows a moderate combination of the S-T and T-S methods, with a lower number of groups and higher values for the spatial and temporal similarities in comparison with S-T and T-S. Also, ST groups cover higher proportions of the S-T and T-S groups than S-T and T-S groups cover of the ST groups. In conclusion, while the sequential methods (S-T and T-S) discover more diverse spatial and temporal patterns in the network, the ST method entails more robust groups (higher spatial and temporal similarity values inside the groups) than the others.
The results of this paper are independent of the particular spatial and temporal similarity measures used, because the sequential and combined clustering alternatives can, without loss of generality, be implemented with other spatial and temporal similarity measures. S-T, T-S, and ST are three different methods for clustering passengers with the same spatial and temporal similarity measures; in other words, this paper investigates the effect of each method on the clustering given the same similarity measures. S-T clustering can be used in cases where the spatial diversity is the main focus, while T-S can be used for finding more specific temporal patterns. Compared with S-T and T-S, ST is a moderate method for finding diverse spatial and temporal patterns, but it can be used to discover more robust groups of passengers where passengers confronting each other during their trips matters more than the diversity of the patterns. Consequently, this paper sheds light on the differences between sequential and combined spatial and temporal clustering alternatives in the public transit network, which can open new lines of study in discovering and implementing data mining techniques in the public transit network.
The main difference between S-T and T-S (the sequential clustering methods) and ST (the combined method) originates from ignoring one of the similarity measures at each step of clustering in the sequential methods. S-T reclusters each spatial group into several temporal groups. T-S reclusters each temporal group into several spatial groups. Both S-T and T-S ignore the second similarity measure at the first step of clustering. For instance, if two passengers have high spatial similarity and low temporal similarity, then S-T would consider them in the same group, but T-S would not. The ST method, however, considers both spatial and temporal similarity measures at the same step. Therefore, obtaining more robust groups with ST is to be expected.
While choosing between S-T, T-S, and ST in practice or research might simply depend on the specific application, knowing the differences between these methods can help researchers and practitioners to decide on the proper method for their desired applications. Also, focusing on the differences between the spatial and temporal clustering methods can create new trends in the public transit research area. For instance, the clustering methods can be used in designing bus networks [31]. In simple words, designing a bus network happens in two steps (the first two of the four main steps in designing the public transit network [32]). First, the routes of the network are designed according to the spatial movement demand of the passengers, and then the schedules are designed according to the designed routes and the temporal movement demand of the passengers. In other words, designing the bus network is similar to the S-T clustering of the movement demand of passengers. It might also be possible to design the network with the two other methods (T-S and ST). Designing the bus network from the T-S perspective would be as follows: first, schedules are designed according to the temporal demand of passengers, and then routes are designed according to the designed schedules and the spatial demand of the passengers. Considering the results from this study, a more reliable temporal bus network is likely to be obtained with the T-S perspective than with the S-T perspective.
Another example of implementing these clustering methods in the real world is passenger segmentation, which discovers groups of passengers who are similar in their travel behaviour. Passenger segmentation methods are usually used in marketing applications where marketing companies want to target certain types of passengers. Another application of an improved clustering method is in policy-making analysis, where a new policy might affect passenger clusters differently. Each of the methods investigated in this study establishes different sets of passengers. The ST method generates groups of passengers with more spatial and temporal similarity than the S-T or T-S method. In other words, passengers in ST groups are more likely to confront each other during their trips than passengers in S-T or T-S groups, which basically means that passengers in ST groups are more likely to share a certain stop or bus route in the same time period during their trips. A higher chance of having similar passengers on a specific route or in a specific time slot is more desirable for marketing companies than more diversity, because marketing companies prefer to target a higher number of passengers at one location or time point. Therefore, the ST method could be the more attractive clustering method for marketing companies. The same argument also holds for policy-making analysis when the ST clustering method is applied.
Additional analyses can be performed to extend this work. First, public transit networks and schedules can be designed according to the different methods, and then the effectiveness of each method can be compared. Second, the effects of each method on real-world applications such as demand-responsive transport should be studied. Third, trip similarity measures could be defined and compared by simultaneously considering the space and time dimensions of the trip, using time-geography concepts.
Data Availability
The smart card data used in this study have restricted access because of privacy issues.
Figure 10 contains some examples to illustrate how passengers are clustered by the different methods. It shows three examples of exchanging passengers between S-T, T-S, and ST groups. The first example shows an ST group with 274 passengers who are mostly clustered in four S-T groups; all the S-T groups have similar spatial patterns, but the last one has a different temporal density plot from the ST group. The second example shows how a T-S group with 263 passengers is fit into 5 ST groups; the passengers are clustered by ST in the same spatial pattern with more diverse temporal patterns. The last example shows an S-T group with 157 passengers that is split into 4 T-S groups; T-S distinguishes most of the passengers in the same spatial patterns with more diverse temporal patterns. At the end, the proportions of passengers in the spatial clusters and temporal clusters are compared with the proportions of passengers in the S-T, T-S, and ST clusters. According to the dendrograms and Silhouette information in Figures 4 and 6, passengers are grouped in 16 spatial clusters and 8 temporal clusters. The groups discovered by T-S and ST are compared with the spatial clusters (the proportions of passengers in the S-T clusters are the same as in the spatial clusters); the spatial pattern of each T-S and ST group is assigned to one of the spatial clusters considering the direction and length of the discovered patterns. Also, the groups discovered by S-T and ST are compared with the temporal clusters (the proportions of passengers in the T-S clusters are the same as in the temporal clusters); the temporal pattern of each S-T and ST group is assigned to one of the temporal clusters.

Figure 10: Passenger groups by different methods.

Figure 11: Proportions of passengers in the clusters.
Table 5: Average of the spatial and temporal similarity values in the groups.

Table 6: Cross correlation between groups from different clustering methods.

Table 7: Ranking of the sequential and combined methods. | 8,787.8 | 2019-04-14T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Analysis of recycled poly (styrene-co-butadiene) sulfonation: a new approach in solid catalysts for biodiesel production
The disposal of solid waste is a serious problem worldwide that is made worse in developing countries due to inadequate planning and unsustainable solid waste management. In Mexico, only 2% of total urban solid waste is recycled. One non-recyclable material is poly (styrene-co-butadiene), which is commonly used in consumer products (like components of appliances and toys), in the automotive industry (in instrument panels) and in food services (e.g. hot and cold drinking cups and glasses). In this paper, a lab-scale strategy is proposed for recycling poly (styrene-co-butadiene) waste by sulfonation with fuming sulfuric acid. Tests of the sulfonation strategy were carried out at various reaction conditions. The results show that 75°C and 2.5 h are the operating conditions that maximize the sulfonation level expressed as number of acid sites. The modified resin is tested as a heterogeneous catalyst in the first step (known as esterification) of biodiesel production from a mixture containing tallow fat and canola oil with 59% of free fatty acids. The preliminary results show that esterification can reach 91% conversion in the presence of the sulfonated polymeric catalyst compared with 67% conversion when the reaction is performed without catalyst.
Introduction
The Guadalajara Metropolitan Zone (GMZ) is the second largest urban area in México. More than 4 million inhabitants in the GMZ generate approximately 0.508 kg person^-1 day^-1 of household solid waste. The major components of the household solid waste are putrescible elements (53%), different types of paper (10%) and plastics (9%). Of all this waste, only 2.2% is separated for reuse/recycling, whereas the rest is deposited in municipal landfills. Rigid plastics, including poly (styrene-co-butadiene), represent approximately 1% of the non-recyclable materials (Bernache-Pérez et al. 2001).
On the other hand, it is well known that the widespread use of fossil fuel reserves has increased air pollution levels worldwide, affecting the global climate. These reserves (including those for petroleum-based diesel, or petrodiesel) are being rapidly depleted. Biodiesel has been proposed as a renewable, biodegradable, non-toxic and non-inflammable alternative to petrodiesel. Chemically, biodiesel is a mixture of alkyl esters that is traditionally produced in a process known as transesterification, in which refined plant oils or animal fats (i.e., triglycerides) are mixed with alcohol and heated in the presence of an alkaline catalyst. The relatively high cost of oils and fats contributes 60-80% of the total biodiesel cost, making it non-competitive with petrodiesel (Wen et al. 2010). To address the cost issue, it has been proposed that biodiesel be produced from cheaper feedstocks, such as waste cooking oils (Liang 2013), grease from grease traps or animal fats (Canoira et al. 2008), which are characterized by their high (>1%) content of free fatty acids (FFAs). However, applying the alkaline transesterification technology to transform these raw materials into biodiesel is not recommended, because the reaction between the FFAs and the alkaline catalyst produces soap, thereby reducing biodiesel conversion and creating difficulties in separating and purifying the product (Marchetti et al. 2007). To avoid this problem, it has been proposed that a pretreatment step be introduced before the conventional transesterification process. This pretreatment stage is known as esterification and is usually catalyzed by sulfuric acid, with reaction yields over 95% (Canacki and Van Gerpen 2001). Solid-acid catalysts, such as Dowex Monosphere 550A, Dowex Upcore Mono A-625, Amberlyst-15, Amberlyst-16, Amberlyst-35, Dowex HCR-W2, mesoporous aluminosilicates, Amberlyst 131, Relite CFS, ZrO2-supported metal oxides and mesoporous organosilicas, have also been considered (Özbay et al. 2008; Carmo et al. 2009; Tesser et al. 2010; Morales et al. 2010; Kim et al. 2011). More recently, layered bismuth carboxylates have also been used in the esterification of fatty acids (Rosa da Silva et al. 2013). Compared with sulfuric acid, solid-acid catalysts have lower reaction rates, but they are often preferred because they are easily separated from the product, prevent corrosion (Silva and Rodrigues 2006) and can be reused (Vieira Grossi et al. 2010).
In this paper, we present a lab-scale strategy for recycling poly (styrene-co-butadiene) waste. Although the strategy is conceived to mitigate the solid-waste disposal problem in the GMZ, it could be extended to any city in a similar situation. In this strategy, the poly (styrene-co-butadiene) waste is sulfonated with fuming sulfuric acid. The sulfonation method proposed here has been studied previously (Inagaki et al. 1999; Inagaki and Kiuchi 2001), but, to the best of our knowledge, the conditions (temperature and time) that maximize the number of acid sites in the sulfonation have not yet been reported. Therefore, the behavior of poly (styrene-co-butadiene) waste sulfonation under variations in time and temperature is analyzed in the present work. This analysis is conducted with an experimental design from which it is possible to deduce a mathematical model that adequately describes the sulfonation process. The theoretical optimal conditions for the sulfonation experimental runs are obtained from this model, and the product sulfonated under these conditions is then used as a solid-acid catalyst to produce biodiesel in the esterification of feedstock with a high content of FFAs.
Materials
Poly (styrene-co-butadiene) waste was collected in the form of disposable cups. Chloroform, sulfuric acid and potassium hydroxide were provided by Analytyka (México). Methanol and phenolphthalein were obtained from Karal (México). Finally, fuming sulfuric acid and potassium bromide were provided by JT Baker (USA).
Qualitative characterization of poly (styrene-co-butadiene) waste
Poly (styrene-co-butadiene) waste cups contain residual accumulations of carbonated drinks or natural/artificial juice. These cups are crushed to a particle size of approximately 0.60-0.80 cm^2, washed in a detergent solution and then rinsed. The clean plastic pieces are dried to a constant weight and are qualitatively analyzed as follows to confirm the presence of butadiene. First, a 0.20 g aliquot of the polymer is dissolved in 2.5 mL of chloroform. The polymer is extracted from this solution with 10 mL of methanol to ensure that additives do not interfere with the characterization (Lacoste et al. 1996). The extract is dried and dissolved again in 2.5 mL of chloroform to generate two samples. A film obtained from the first sample is analyzed with a Perkin-Elmer FT-IR spectrophotometer, whereas the second sample is added to bromine water.
Sulfonation experiments
The sulfonation of the poly (styrene-co-butadiene) waste, which is already clean and dry, is carried out with fuming sulfuric acid (10 mL/g plastic) as the sulfonation agent. Different combinations of temperature (30°C, 70°C, 110°C) and time (1.0 h, 3.0 h, 5.0 h) are considered. Once the reactions are finished, the products are washed with distilled water and then dried to a constant weight.
Sulfonation level determination
The sulfonation level of the sulfonated products is commonly expressed as the number of acid sites, that is, the number of milliequivalents of ~SO3H groups (meq SO3H) per gram of sulfonated product. In this work, the sulfonation level was determined by titration with a 0.1 N alkaline solution of potassium hydroxide using phenolphthalein as indicator.
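For concreteness, the titration arithmetic is a one-liner; the volumes below are hypothetical but chosen to reproduce the 4.8 meq SO3H/g reported later for the product sulfonated under optimal conditions.

```python
# Acid sites (meq SO3H per gram) from titration: titrant volume (mL) times
# normality (0.1 N KOH) divided by sample mass (g). Numbers are hypothetical.
def acid_sites(v_koh_ml, sample_mass_g, normality=0.1):
    return v_koh_ml * normality / sample_mass_g

print(acid_sites(v_koh_ml=9.6, sample_mass_g=0.2))   # -> 4.8 meq SO3H/g
```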
Optimization of the sulfonation process
It is well known that chemical reaction yields are strongly affected by temperature and time. Nevertheless, the influence of these factors on the sulfonation of poly (styrene-co-butadiene) has not yet been reported. To cover this lack of information, we conduct an experimental design considering three levels for each factor, with the number of acid sites as the response variable. The main objective of this 3×3 experimental design is to verify whether a combination of temperature and time can maximize the number of acid sites. Previous experiments on the sulfonation process showed that a very low number of acid sites is obtained when the reaction is carried out at 30°C for 1.0 h. As a consequence, these operating conditions are set as the lowest point in the proposed design. The highest point (i.e., 110°C and 5.0 h) is selected because it has been reported previously (Inagaki and Watanabe 2003; Inagaki and Noguchi 2003).
Characterization of the product obtained under optimal sulfonation conditions
Poly (styrene-co-butadiene) waste is sulfonated at the optimal conditions that maximize the number of acid sites in order to characterize the sulfonated product both quantitatively and qualitatively and to verify whether it can act as a catalyst in esterification reactions. The results of the quantitative characterization are expressed not only in terms of the number of acid sites but also in terms of methanol and water absorption, which are calculated following the "tea bag" method (Hosseinzadeh 2011). The qualitative characterization is carried out by using a sample to form a potassium bromide pellet whose infrared spectrum is recorded on a Perkin-Elmer FT-IR spectrophotometer (Martins et al. 2003).
Esterification reactions
The polymer that is sulfonated under optimal conditions is tested as a catalyst in an esterification procedure. The raw material for this process is a synthetic mixture prepared by mixing tallow fat (supplied by Quimikao, Guadalajara, México) with commercial canola oil. This feedstock is esterified with methanol in three different experiments: without catalyst, with sulfuric acid as catalyst (5% based on the FFA weight in the mixture) and with sulfonated poly (styrene-co-butadiene) waste as catalyst. The main goal of these experiments is to explore the effect of the proposed catalyst on the conversion of FFAs for this feedstock and to compare it with the conversions obtained with the conventional catalyst and without catalyst. All the esterification reactions are carried out in batch conditions at 60°C for 1.0 h, with a methanol to FFAs molar ratio of 40:1 (Canacki and Van Gerpen 2001). At the end of the esterification, the reactants are allowed to settle in a separatory funnel. The conversion of FFAs (%C) in each reaction is computed with the following equation (Özbay et al. 2008; Carmo et al. 2009):

%C = 100 × (A0 − Af) / A0

where A0 is the initial FFA content of the feedstock and Af is the final FFA content of the fluid extracted from the bottom of the separatory funnel (biodiesel phase). The values of A0 and Af are obtained as described in the American Oil Chemists' Society Official Method Cd 3d-63.
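A minimal sketch of this conversion calculation; the acid values below are hypothetical but consistent with the roughly 91% conversion reported in the abstract for the sulfonated catalyst.

```python
# FFA conversion from initial (A0) and final (Af) free fatty acid contents,
# following %C = 100 * (A0 - Af) / A0. Numbers are hypothetical.
def ffa_conversion(a0, af):
    return 100.0 * (a0 - af) / a0

print(ffa_conversion(a0=59.0, af=5.3))   # ~91% conversion
```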
Results and discussion
The infrared spectrum of the film from the clean and dry poly (styrene-co-butadiene) waste is taken over a range of wavenumbers from 600 to 4000 cm^-1. This spectrum is compared with the corresponding spectrum of virgin poly (styrene-co-butadiene) (see Figure 1) using the OMNIC E.S.P. software (Thermo Electron Scientific Instruments Corporation, Madison, WI, USA). The match between these spectra is up to 80%.
Bromine water is added drop by drop to the chloroform-poly (styrene-co-butadiene) solution. The yellow-brown color of the bromine reagent disappears almost immediately on contact with the solution. This result confirms the presence of double bonds in the polymer (in addition to those of the aromatic ring in polystyrene) and, therefore, the presence of butadiene. Thus, the results of the infrared test and the bromine water test proposed in subsection 2.2 suggest that the collected waste is indeed poly (styrene-co-butadiene). Once these tests are carried out, the polymer waste is sulfonated as described in subsection 2.3. The sulfonation level of each sulfonated polymer is computed as explained in subsection 2.4, with the results reported in Table 1.
It is important to note that a heterogeneous mixture of SO₃ in sulfuric acid (i.e., fuming sulfuric acid or oleum) is used here as the sulfonating agent because it has a high reactivity due to the presence of SO₃ (Kucera and Jancar 1998). As a consequence of the nature of this agent, one expects the presence of two phases (gas-liquid) while the sulfonation experiments are being conducted. This phenomenon is confirmed for all the experiments except the one conducted at 5.0 h and 110°C. In this case, two phases are observed at the beginning but only the liquid phase remains at the end, which most likely reduced the reactivity of the sulfonating agent. Therefore, to apply the same statistical treatment to all data, the operating conditions corresponding to 5.0 h and 110°C are considered unavailable in Table 1. The rest of the experimental data are analyzed with Statgraphics® (Centurion XVI, release 2009), which provides a nonlinear regression model of the second-order form

N.A.S. = β₀ + β₁T + β₂t + β₃T² + β₄Tt + β₅t² (2)

where N.A.S. is the number of acid sites of the polymer, T is the temperature and t is the time (the fitted coefficients are those whose significance is tested below). Table 2 shows the results of an analysis of variance (ANOVA) carried out to test the significance of each regression coefficient in the model of equation (2). If the level of significance (α) is chosen as 0.05 (i.e., a 95% confidence level), the results in Table 2 demonstrate that all the coefficients are significant, since p-value < α. Besides, it is also possible to conduct an ANOVA to test the significance of the mathematical model itself. The test procedure involves partitioning the total sum of squares (S_ST) into a sum of squares due to the model (S_SM) and a sum of squares due to the error (S_E), say S_ST = S_SM + S_E, where

S_SM = Σᵢ (ŷᵢ − ȳ)² (3)
S_E = Σᵢ (yᵢ − ŷᵢ)² (4)

In equations (3) and (4), n is the total number of experimental data (observations), yᵢ denotes the value of the experimental data (in this case the N.A.S.) at each temperature and time, ŷᵢ is the value estimated by the proposed model at the same conditions, and ȳ is the mean of the observations. From the sums of squares it is possible to compute the statistic

F_O = (S_SM/k) / (S_E/(n − k − 1)) (5)

where k is the number of regressor variables in the model. If F_O exceeds F_{α,k,n−k−1}, then the proposed model is significant at the level of significance α (Myers et al. 2009). The results for testing the significance of the model are shown in Table 3. If we again select α = 0.05, then F_{0.05,5,18} = 2.77. Since F_O > F_{α,k,n−k−1}, the model is significant at 95% confidence. From the results depicted in Table 3 it is possible to compute the coefficient of multiple determination R² and the adjusted statistic R²_adj according to the expressions R² = 1 − S_E/S_ST and R²_adj = 1 − (S_E/(n − k − 1))/(S_ST/(n − 1)). For the proposed model, R² = 0.982 and R²_adj = 0.977. This means that the proposed model explains the observed variability in a satisfactory manner, which is confirmed when the experimental results are plotted together with the proposed model (see Figure 2). Once the model has been satisfactorily tested through these statistical tests, it is possible to derive from it the theoretical conditions that generate the maximum number of acid sites. These conditions are provided by Statgraphics and correspond to T = 75°C and t = 2.5 h.
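The model-significance computation above is easy to reproduce. The following sketch, assuming the observed and fitted N.A.S. values are available as arrays, implements equations (3) through (5) and the R² statistics:

```python
import numpy as np
from scipy.stats import f as f_dist

def model_significance(y, y_hat, k=5, alpha=0.05):
    """F-test for a fitted regression model, per equations (3)-(5).

    y     : observed responses (here, the measured N.A.S. values)
    y_hat : model predictions at the same (T, t) conditions
    k     : number of regressor variables in the model
    """
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    ss_total = np.sum((y - y.mean()) ** 2)            # S_ST
    ss_model = np.sum((y_hat - y.mean()) ** 2)        # S_SM
    ss_error = np.sum((y - y_hat) ** 2)               # S_E
    f0 = (ss_model / k) / (ss_error / (n - k - 1))    # equation (5)
    f_crit = f_dist.ppf(1 - alpha, k, n - k - 1)      # e.g. F(0.05,5,18)=2.77
    r2 = 1 - ss_error / ss_total
    r2_adj = 1 - (ss_error / (n - k - 1)) / (ss_total / (n - 1))
    return f0, f_crit, r2, r2_adj
```

The model is declared significant whenever the returned f0 exceeds f_crit, exactly the comparison made against F_{0.05,5,18} = 2.77 in the text.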
Then, as described in subsection 2.6, the sulfonation reaction is carried out at these optimal conditions, and the resulting product is quantitatively characterized in terms of the number of acid sites (4.8 meq SO₃H/g), the methanol absorption capacity (1.7 g/g) and the water absorption capacity (7.2 g/g). This product is also qualitatively characterized with an infrared spectrum recorded over wavenumbers from 600 to 4000 cm⁻¹ (see Figure 3).

[Figure 2. Surface response (N.A.S. as a function of T and t) for the proposed model in the sulfonation process. Experimental data are plotted as red circles.]

[Figure 3. Infrared spectrum of the sulfonated polymer prepared at optimal conditions.]

The peak at 830 cm⁻¹ suggests the bonding of the –SO₃H groups to the aromatic ring of polystyrene. In addition, the absorption at 1034 cm⁻¹ results from the symmetric stretching vibration of the –SO₃H groups (Martins et al. 2003). Once characterized, the polymer sulfonated under optimal conditions is tested as a catalyst for the esterification of a synthetic mixture of canola oil and tallow fat with 59% FFAs (i.e., A₀ = 59). This reaction is carried out with 0.12 g of catalyst per gram of FFAs under the operating conditions described in subsection 2.7. The average conversion, computed with equation (1), is 91%, and the catalyst is easily recovered from the reaction products. The esterification reactions carried out with sulfuric acid and without catalyst achieve average conversions of 99% and 67%, respectively, both also computed with equation (1). Finally, it is important to state that the objective of this paper is not an extensive characterization (e.g., thermal stability analysis or morphological characterization) of the product sulfonated under optimal conditions. The goal is to analyze the sulfonation level of poly(styrene-co-butadiene) waste as a function of time and temperature and to verify the catalytic activity of the polymer sulfonated under optimal conditions. The qualitative IR characterization was presented here to demonstrate that this polymer contains –SO₃H groups. These groups are probably responsible for the satisfactory activity of the proposed catalyst, since they are considered active sites in the Eley-Rideal mechanism assumed to occur in esterification reactions promoted by solid-acid catalysts (Tesser et al. 2010).
Conclusions and perspectives
Poly(styrene-co-butadiene) waste was sulfonated with fuming sulfuric acid at varying times and temperatures. The objective of these experimental runs was to apply a 3×3 experimental design and deduce a mathematical model that adequately represents the sulfonation data at 95% confidence. From this model it is possible to derive the operating conditions (75°C and 2.5 h) that maximize the number of acid sites in the sulfonated polymer. It was then verified that the polymer waste sulfonated under these conditions can act as a catalyst in the esterification of a synthetic mixture of tallow fat and canola oil with a high FFA content. Thus, it was demonstrated that this type of rigid plastic waste can be treated and applied in a biodiesel production process. However, another issue related to the proposed catalyst remains to be examined: its stability (i.e., its catalytic activity when reused). We are currently studying this issue, along with the economic feasibility of the proposed catalyst, which is crucial for scaling up the proposed strategy from laboratory to pilot-plant scale. Another future research topic is a comparison between the activity of the proposed catalyst and that of other catalysts, either reported in the literature or commercially available.
FFAs: Free Fatty Acids.
A₀, A_f: Initial and final content of FFAs in the esterification experiments, respectively.
N.A.S.: Number of Acid Sites.
T, t: Temperature and time, respectively.
S_ST, S_SM, S_E: Total sum of squares, sum of squares due to the model and sum of squares due to error, respectively.
yᵢ, ŷᵢ: Experimental data and estimated value, respectively.
F_O, α: Statistic and significance level, respectively. | 4,405.8 | 2013-09-21T00:00:00.000 | [
"Chemistry",
"Environmental Science",
"Materials Science"
] |
Exogenous Application of Alpha-Lipoic Acid Mitigates Salt-Induced Oxidative Damage in Sorghum Plants through Regulating Growth, Leaf Pigments, Ionic Homeostasis, Antioxidant Enzymes, and Expression of Salt Stress Responsive Genes
In plants, α-lipoic acid (ALA) is a dithiol short-chain fatty acid with strong antioxidative properties. To date, no conclusive data exist regarding its effects as an exogenous application on salt-stressed sorghum plants. In this study, we investigated the effect of 20 µM ALA as a foliar application on salt-stressed sorghum plants (0, 75 and 150 mM NaCl). Under saline conditions, applied ALA significantly (p ≤ 0.05) stimulated plant growth, as indicated by improved fresh and dry shoot weights. A similar trend was observed in the photosynthetic pigments, including Chl a, Chl b and carotenoids. This improvement was associated with an obvious increase in the membrane stability index (MSI). At the same time, an obvious decrease in salt-induced oxidative damage was seen, as the concentrations of H2O2 and malondialdehyde (MDA) were reduced in the salt-stressed leaf tissues. Generally, ALA-treated plants demonstrated higher antioxidant enzyme activities than ALA-untreated plants. A moderate level of salinity (75 mM) induced the highest activities of superoxide dismutase (SOD), guaiacol peroxidase (G-POX) and ascorbate peroxidase (APX), whereas the highest activity of catalase (CAT) was seen with 150 mM NaCl. Interestingly, applied ALA led to a substantial decrease in both the Na concentration and the Na/K ratio; in contrast, K and Ca exhibited a considerable increase. The role of ALA in the regulation of K+/Na+ selectivity under saline conditions was confirmed through a molecular study (RT-PCR): ALA treatment downregulated the relative gene expression of the plasma membrane (SOS1) and vacuolar (NHX1) Na+/H+ antiporters, whereas the high-affinity potassium transporter protein (HKT1) was upregulated.
Introduction
Salinity is considered one of the most compelling environmental challenges encountered worldwide by the agricultural sector [1]. Generally, salt stress can cause significant damage to biodiversity, ecosystems, human health, and natural resources [2]. Nowadays, this problem has been exacerbated in several regions of the world due to the adverse impacts of human activities, frequent climate changes, scarcity of freshwater, and a limitation of arable lands [1,3]. Recently, salt affected lands worldwide were estimated to be 1125 million hectares [4]. In the next few years, these areas are expected to increase with the exponential growth of the global population, threatening food security [1,4,5]. Therefore, achieving an increase in agricultural food production under saline conditions has become a critical area of concern.
Under salt stress conditions, both the behavior of plants and their interaction with the stressful factor have been found to be extremely complex, leading to changes at the morphological, physiological, biochemical and molecular levels [6,7]. This complexity can ultimately trigger varying degrees of stress adaptation among salt-tolerant and sensitive plant species [8,9]. In this context, plants have developed diverse defense strategies and mechanisms to reduce the detrimental effects and toxicity of salt ions affecting different developmental stages and metabolic pathways. These processes involve multiple steps. First, stimulation of the antioxidative systems (enzymatic and non-enzymatic) is necessary to keep the reactive oxygen species (ROS) under control [6,7]. In general, maintaining these cytotoxic molecules (ROS) at low levels can allow them to act as beneficial, significant signaling molecules involved in different metabolic events [6,7,10-12]. Secondly, maintaining a low Na+/K+ ratio in tissues is a common response in a wide array of plants; this response can occur when the gene expression of a number of high-affinity Na+ and/or K+ transporter proteins, such as SOS1, HKT1 and NHX1, is altered [6,13]. Furthermore, regulating the expression of these genes is usually concomitant with maintaining the cell membrane stability index and photosynthetic pigments and with enhancing plant growth and development [6-8].
Sorghum (Sorghum bicolor L. Moench; family Poaceae) is the fifth most cultivated cereal crop in arid and semiarid regions worldwide [14]. It is economically very important due to its multiple uses in human nutrition [15] and as fodder for animals [16,17]. Although sorghum is a C4 plant, generally considered tolerant to diverse stressful factors including drought, salinity and high temperatures [18,19], its growth and productivity can be significantly affected under severe adverse conditions, in particular salinity [8,17]. Thus, greater attention is needed to find an optimal solution to these challenges.
In this study, α-lipoic acid (1, 2-dithiolane-3-pentanoic acid; ALA) was investigated, as it is one of the most promising and effective solutions that can reduce the detrimental effects of salinity stress on sorghum plants. It can maintain its antioxidative power and protective impacts against diverse stresses in both its reduced and oxidised form [20]. Its antioxidant capacity depends on two sulfhydryl moieties [21] which enable it to scavenge free radicals and chelate metals [22]. Under salt stress conditions, ALA was reported to mitigate oxidative damage and enhance growth and root formation of canola seedlings [23]. Moreover, exogenous ALA has been suggested to improve photosynthesis and induce tolerance mechanisms of several plant species under diverse environmental stresses [20,24,25].
This study was conducted to determine the role of ALA as an exogenous application and its possible ameliorative effects in sorghum plants grown under saline conditions. These effects were examined through several aspects: strengthening the antioxidant capacity, modifying the ionic homeostasis, maintaining cell membrane stability, and stimulating the growth of stressed plants.
Growth Conditions and Experimental Design
A pot experiment was conducted from 16 May 2021 to 14 July 2021 at the experimental farm of the Department of Agricultural Botany, Faculty of Agriculture, Ain Shams University, Cairo, Egypt. The seeds of grain sorghum (Sorghum bicolor L. Moench; cv. Dorado) were provided by the Agriculture Research Center, Egypt. Seeds were surface-sterilized with 0.5% sodium hypochlorite for 5 min and then washed several times with distilled water. Seeds were sown in black plastic pots (30 cm diameter) filled with 16 kg pre-washed sand. After 2 weeks, the pots containing seedlings homogeneous in size and form (two seedlings/pot) were regularly irrigated with half-strength Hoagland's solution modified by adding 0, 75 or 150 mM NaCl every 2 days. Irrigation was performed three times a week with half-strength Hoagland's solution (twice with NaCl and once without NaCl, to prevent the average soil salinity from rising above the studied levels over time; leaching requirement). The total volume of solution ranged between 0.8 and 1.1 L/pot per irrigation, adjusted to the increasing growth of the plants and the rate of evapotranspiration (ET) (the volume was calculated according to the reduction in water-holding capacity by the weight method). Under each level of salinity, pots were divided into two subgroups for the ALA foliar application (0 or 20 µM). To determine the ALA concentration used in this study, a quick preliminary experiment was conducted for 25 days with different concentrations (0, 5, 10, 20, 50 and 100 µM), based on chlorophyll SPAD readings taken with a digital chlorophyll meter (Minolta SPAD-502, Marunouchi, Japan).
Each pot of ALA-untreated plants was sprayed every 10 days with 15 mL of a solution containing distilled water and 0.05% (v/v) Tween-20 (a non-ionic surfactant), whereas each pot of ALA-treated plants was sprayed with 15 mL of a solution containing 20 µM ALA plus 0.05% (v/v) Tween-20. All foliar treatments were stopped 15 days before the date of sampling (60 days after sowing), at which point leaves were collected to measure and analyze the different parameters. The experimental layout was a completely randomized design (CRD) with three replicates.
Determination of Growth Parameters
Shoot fresh weight was estimated immediately after sampling using a digital balance, whereas shoot dry weight was determined by drying the samples in an air-forced ventilated oven at 105 °C.
Membrane Stability Index (MSI), Hydrogen Peroxide and Lipid Peroxidation
Cell membrane stability was measured by the electrolyte leakage technique as described by Singh et al. [26] with some modifications [27]. Samples from each treatment were selected randomly from fully expanded leaves. Ten leaf discs (1.8 cm diameter) were cut, cleaned well and incubated in 10 mL deionized water for 24 h on a shaker. After that, the EC1 values of the contents were measured with an EC meter (DOH-SD1, TC-OMEGA, Stamford, CT, USA). Then, samples were autoclaved at 120 °C for 20 min to determine the EC2 values. The cell membrane stability index was calculated using the following equation: MSI = [1 − (EC1/EC2)] × 100.
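As a quick illustration of the MSI formula (the conductivity readings below are hypothetical):

```python
def membrane_stability_index(ec1: float, ec2: float) -> float:
    """MSI (%) from conductivity before (EC1) and after (EC2) autoclaving."""
    return (1.0 - ec1 / ec2) * 100.0

# Hypothetical readings: EC1 = 0.18 and EC2 = 1.25 dS/m
print(membrane_stability_index(0.18, 1.25))  # ~85.6 %
```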
Hydrogen peroxide (H2O2) concentration was determined according to [28] with some modifications. Leaf samples of 0.5 g were homogenized in 3 mL of 1% (w/v) trichloroacetic acid (TCA). The homogenate was centrifuged at 10,000 rpm and 4 °C for 10 min. Subsequently, 0.75 mL of the supernatant was added to 0.75 mL of 10 mM K-phosphate buffer (pH 7.0) and 1.5 mL of 1 M KI. The H2O2 concentration was evaluated from the absorbance at 390 nm using a standard calibration curve plotted in the range from 0 to 15 nmol mL−1.
The level of lipid peroxidation was measured by determining malondialdehyde (MDA) as described by Heath and Packer [29]. Frozen tissues were homogenized in 0.1% (w/v) trichloroacetic acid (TCA), with an extraction ratio of 10 mL per gram of plant tissue. The homogenate was centrifuged at 4500 rpm for 15 min. The reaction mixture contained 1 mL of the supernatant and 4 mL of 0.5% (w/v) thiobarbituric acid (TBA) dissolved in 20% (w/v) TCA. The mixture was heated in boiling water for 30 min, cooled at room temperature and centrifuged at 4500 rpm for 15 min. The absorbance of the supernatant was measured at 535 nm and corrected for non-specific turbidity at 600 nm using a spectrophotometer (UV-1601PC; Shimadzu, Tokyo, Japan). The MDA concentration (nmol g−1 FW) was calculated using ΔOD (A535 − A600) and the extinction coefficient (ε = 155 mM−1 cm−1).
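A minimal sketch of the MDA calculation, assuming a 1 cm path length and ignoring the dilution of the extract in the TBA reaction mixture (the handling of that dilution is not stated above):

```python
def mda_nmol_per_g_fw(a535: float, a600: float,
                      extract_volume_ml: float, fresh_weight_g: float) -> float:
    """MDA content in nmol g^-1 FW from the TBA assay described above.

    Beer-Lambert law with epsilon = 155 mM^-1 cm^-1 and a 1 cm cuvette;
    dilution of the extract in the reaction mixture is ignored here
    (a simplifying assumption).
    """
    epsilon_mM = 155.0                       # mM^-1 cm^-1
    c_uM = (a535 - a600) / epsilon_mM * 1e3  # micromolar = nmol per mL
    return c_uM * extract_volume_ml / fresh_weight_g

# Hypothetical example: 0.5 g tissue in 5 mL extract, A535=0.045, A600=0.014
print(mda_nmol_per_g_fw(0.045, 0.014, 5.0, 0.5))  # ~2 nmol/g FW
```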
Determination of Leaf Pigments
Chlorophyll a, b and total chlorophyll were determined as described by Costache et al. [30] with some modification: small pieces of fresh leaves (0.5 g) were submerged in 10 mL pure acetone for 24 h at 4 °C. The absorbance was measured at 645 and 663 nm, and the concentrations were calculated from the standard two-wavelength equations, in which A is the absorbance at 645 or 663 nm, V is the final volume of the chlorophyll extract in pure acetone and W is the fresh weight of the extracted tissue.
Carotenoids were quantified using the acetone and petroleum ether method described by de Carvalho et al. [31], using the formula Carotenoids = (A450 × V × 10⁴)/(A[1%,1cm] × W), where A450 is the absorbance at 450 nm, V is the total extract volume, W is the sample weight and A[1%,1cm] = 2592 (the β-carotene coefficient in petroleum ether).
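The two-wavelength chlorophyll equations themselves are not reproduced above; the sketch below assumes the widely used Arnon-type coefficients for acetone extracts, which may differ slightly from the exact form in Costache et al. [30]:

```python
def leaf_pigments(a645: float, a663: float, a450: float,
                  volume_ml: float, fw_g: float) -> dict:
    """Leaf pigments: chlorophylls in mg/g FW, carotenoids in ug/g FW.

    Chlorophyll coefficients follow the classical Arnon-type equations
    (an assumption; the paper cites Costache et al. for its exact form).
    """
    chl_a = (12.7 * a663 - 2.69 * a645) * volume_ml / (1000.0 * fw_g)
    chl_b = (22.9 * a645 - 4.68 * a663) * volume_ml / (1000.0 * fw_g)
    carotenoids = (a450 * volume_ml * 1e4) / (2592.0 * fw_g)  # A[1%,1cm]=2592
    return {"chl_a": chl_a, "chl_b": chl_b,
            "total_chl": chl_a + chl_b, "carotenoids": carotenoids}

# Hypothetical absorbances for a 0.5 g sample extracted in 10 mL acetone
print(leaf_pigments(a645=0.22, a663=0.48, a450=0.35,
                    volume_ml=10.0, fw_g=0.5))
```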
Assay of Antioxidant Enzymes
To prepare the extract for enzyme and soluble protein assays, 0.5 g of fresh leaves was homogenized in 4 mL of 0.1 M sodium phosphate buffer (pH 7.0) containing 1% (w/v) polyvinylpyrrolidone (PVP) and 0.1 mM EDTA, and centrifuged at 10,000× g for 20 min at 4 °C; the supernatant was then used for the assays. To calculate the specific activity of the enzymes, the concentration of total soluble protein was evaluated by the method of Bradford [32]. All enzyme activities were measured using a spectrophotometer (UV-1601PC; Shimadzu, Tokyo, Japan) as follows. The superoxide dismutase (SOD) (EC 1.15.1.1) assay was based on the method described by Beyer and Fridovich [33]. The reaction mixture, with a total volume of 3 mL, contained 100 µL of total soluble protein extract, 50 mM phosphate buffer (pH 7.8), 75 µM NBT, 13 mM L-methionine, 0.1 mM EDTA and 0.5 mM riboflavin. The reaction was initiated by the addition of riboflavin, and the reaction mixture was then illuminated for 20 min with a 20 W fluorescent lamp. One unit of enzyme activity was defined as the amount of enzyme required to cause a 50% inhibition in the rate of nitro blue tetrazolium (NBT) reduction at 560 nm.
Catalase (CAT) (EC 1.11.1.6) activity was measured by monitoring the decrease in absorbance at 240 nm as described by Cakmak, et al. [34]. The reaction mixture with a total volume of 3 mL contained 15 mM H 2 O 2 in 50 mM phosphate buffer (pH = 7). The reaction was initiated by adding 50 µL total soluble protein extract. The activity was calculated from extinction coefficient (ε = 40 mM −1 cm −1 ) for H 2 O 2 . One unit of enzyme activity was defined as the decomposition of 1 µmol of H 2 O 2 per minute.
Guaiacol peroxidase (G-POX) (EC 1.11.1.7) activity was quantified by the method of Dias and Costa [35] with some minor modifications. The assay mixture (100 mL) contained 10 mL of 1% (v/v) guaiacol, 10 mL of 0.3% H2O2 and 80 mL of 50 mM phosphate buffer (pH 6.6). A volume of 100 µL of the total soluble protein extract was added to 2.9 mL of the assay mixture to start the reaction. The absorbance was recorded every 30 s for 3 min at 470 nm. One unit of G-POX was defined as the amount of enzyme causing the formation of 1 µmol of the guaiacol dehydrogenation product (the final product of guaiacol oxidation by H2O2) per minute.
The activity of ascorbate peroxidase (APX) (EC 1.11.1.11) was determined according to Nakano and Asada [36]. The decrease in absorbance at 290 nm was monitored for 3 min. The reaction mixture, with a total volume of 3 mL, included 100 µL of total soluble protein extract, 50 mM phosphate buffer (pH 7), 0.1 mM EDTA, 0.5 mM ascorbic acid and 0.1 mM H2O2. The reaction was initiated by the addition of H2O2. One unit of enzyme activity was defined as the amount of enzyme required to oxidize 1 µmol of ascorbate per minute. The rate of ascorbate oxidation was calculated using the extinction coefficient (ε = 2.8 mM−1 cm−1).

Determination of Na, K and Ca

Leaf mineral concentrations of Na, K and Ca were determined using the flame photometric method (Jenway, Staffordshire, UK) as described by Havre [37].
Gene Expression
Total RNA was isolated from 0.5 g of sorghum leaves for each salinity level (0, 75 and 150 mM NaCl), with and without the 20 µM α-lipoic acid foliar treatment (untreated plants serving as controls). Total RNA was purified from leaf tissues with RNeasy Tissue Kits (Qiagen, Maryland, USA) according to the manufacturer's protocol. The quantity and quality of the purified RNA were checked with a NanoDrop spectrophotometer (BMG LABTECH, Saitama, Japan) and analyzed on a 1% agarose gel. For each sample, total RNA (5 µg) was reverse-transcribed to complementary cDNA in a reaction mixture consisting of 2.5 µL of 2.5 mM dNTPs, 2.5 µL MgCl2, 1.0 µL oligo-dT primer (10 pmol/µL), 2.5 µL 5X buffer and 0.2 µL (5 U/µL) reverse transcriptase (Promega, Baden-Württemberg, Germany); the reaction was run in a thermal cycler at 42 °C for 1.5 h and 80 °C for 20 min. Quantitative real-time PCR was carried out in triplicate on 1 µL of diluted cDNA using a Rotor-Gene 6000 system (Qiagen, Hilden, Germany); the primer sequences used in qRT-PCR are given in Table 1. Primers for the Salt Overly Sensitive (SOS), high-affinity potassium transporter 1 (HKT1) and tonoplast-localized Na+/H+ antiporter (NHX) genes, together with the GAPDH housekeeping gene (reference gene), were used for gene expression analysis with a SYBR® Green-based method. Each reaction mixture consisted of 2 µL of template, 10 µL of SYBR Green Master Mix, 2 µL of reverse primer, 2 µL of forward primer and sterile distilled water up to a total volume of 20 µL. PCR assays were performed under the following conditions: 95 °C for 15 min followed by 40 cycles of 95 °C for 30 s and 60 °C for 30 s. The CT of each sample was used to calculate ΔCT values (target gene CT minus reference gene CT), and the relative gene expression was determined using the 2−ΔΔCt method [38].
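The 2−ΔΔCt computation is compact enough to sketch; the Ct values in the usage line are illustrative only:

```python
def relative_expression(ct_target, ct_reference,
                        ct_target_control, ct_reference_control):
    """Relative gene expression by the 2^-ddCt method (Livak & Schmittgen).

    ct_*          : mean Ct of the treated sample (target / reference gene)
    ct_*_control  : mean Ct of the untreated control sample
    """
    d_ct_sample = ct_target - ct_reference        # normalize to reference
    d_ct_control = ct_target_control - ct_reference_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative numbers only: fold change of a target gene under salt stress
print(relative_expression(22.1, 18.0, 24.3, 18.1))  # ~4.3-fold upregulation
```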
Statistics
One-way ANOVA was performed using SAS software [39]. Means ± SD were calculated from three replicates, and Tukey's multiple range test (p ≤ 0.05) was used to determine significant differences between means.
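For readers without SAS, the same analysis can be sketched in Python; the data frame below is dummy data and the column names are placeholders:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA followed by Tukey's test at alpha = 0.05, assuming a tidy
# table with one row per replicate (values here are invented).
df = pd.DataFrame({
    "treatment": ["0mM", "0mM", "0mM", "75mM", "75mM", "75mM",
                  "150mM", "150mM", "150mM"],
    "shoot_dw": [4.1, 4.3, 4.0, 3.1, 3.0, 3.3, 2.2, 2.1, 2.4],
})
groups = [g["shoot_dw"].values for _, g in df.groupby("treatment")]
print(stats.f_oneway(*groups))                        # overall F-test
print(pairwise_tukeyhsd(df["shoot_dw"], df["treatment"], alpha=0.05))
```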
Effect of ALA on Growth Parameters
Exposing sorghum plants to salt stress significantly (p ≤ 0.05) reduced the growth parameters compared to the unstressed plants (Figure 1). In general, the lowest values of plant height, fresh weight, dry weight and leaf area were recorded at the higher NaCl concentration (150 mM). In contrast, ALA-treated plants showed a significant (p ≤ 0.05) improvement in all studied growth parameters under non-saline conditions. A similar trend was observed for plant height, fresh weight and dry weight in ALA-treated plants under saline conditions (75 and 150 mM NaCl). However, the effect on leaf area was not significant (p ≤ 0.05) at either examined level of salinity.
Effect of ALA on the Membranes' Stability and Leaf Oxidative Damage
Exposing sorghum plants to salt stress significantly (p ≤ 0.05) decreased the membrane stability index (MSI) in parallel with raising the NaCl concentration, up to 150 mM, compared to the unstressed plants (Figure 2A). This decrease was associated with a significant (p ≤ 0.05) increase in leaf oxidative damage and the rate of lipid peroxidation, as indicated by the elevated concentrations of H2O2 and MDA, respectively (Figure 2B,C). Applied ALA significantly (p ≤ 0.05) enhanced the MSI under both investigated levels of salinity; simultaneously, this effect was concomitant with a significant (p ≤ 0.05) decrease in the concentrations of H2O2 and MDA.

Figure 3 shows that Chl a was positively and significantly (p ≤ 0.05) affected by the ALA treatment under non-saline conditions, whereas Chl b and carotenoids did not reveal any significant changes in this respect. Under saline conditions, Chl a, Chl b and carotenoids were significantly (p ≤ 0.05) decreased compared to the non-stressed plants, and this negative effect on all studied leaf pigments became more destructive with increasing salt stress. On the other hand, the ALA treatment significantly (p ≤ 0.05) improved Chl b and carotenoids under the lower level of salinity, and at the higher level it significantly (p ≤ 0.05) maintained the contents of Chl a and carotenoids. These findings may indicate a protective effect of ALA on the photosynthetic machinery.
Effect of ALA on the Activities of Antioxidant Enzymes
Under saline conditions, the general tendency was that the activities of the antioxidant enzymes SOD, CAT, POX and APX revealed a significant (p ≤ 0.05) increase compared to the unstressed plants (Figure 4). Under non-saline conditions, applied ALA significantly (p ≤ 0.05) increased the activities of CAT and APX, whereas the effect did not reach significance for SOD and POX. On the other hand, ALA-treated plants showed a significant (p ≤ 0.05) increase in SOD and CAT compared to ALA-untreated plants under both investigated levels of salinity. This response was also explicit in POX and APX under mild salinity (75 mM), whereas POX was significantly (p ≤ 0.05) decreased in the ALA-treated plants under the higher level of salinity (150 mM).
Effect of ALA on Na, K, Ca and Na/K Ratio
Under saline conditions, the Na concentration and the Na/K ratio were significantly (p ≤ 0.05) increased in leaf tissues compared to the unstressed plants. In contrast, a significant decrease in the concentrations of K and Ca was observed with raising the level of salinity (Figure 5). On the other hand, applied ALA achieved a significant (p ≤ 0.05) decrease in the Na concentration and the Na/K ratio under both examined levels of salinity. These influences were associated with an obvious and significant (p ≤ 0.05) enhancement of the concentrations of K and Ca. These results imply that exogenous ALA may induce tolerance to salinity stress in sorghum plants by affecting the homeostasis of relevant salt stress ions.

[Figure 5. Effect of salinity stress as NaCl (0, 75 and 150 mM) and the foliar application of α-lipoic acid (ALA; 0 and 20 µM) on the leaf content of Na (A), K (B), Ca (C) and the Na/K ratio (D) of sorghum plants. For each parameter, the mean values ± SD followed by a different letter are significantly (p ≤ 0.05) different according to Tukey's range test.]
Effect of ALA on the Expression of SOS1, NHX1 and HKT1
The relative expression of the salt stress relevant genes (SOS1, NHX1 and HKT1) was investigated in this study (Figure 6). The results indicated that SOS1 and NHX1 were significantly (p ≤ 0.05) upregulated with increasing levels of salinity compared to the non-saline condition. Conversely, an obvious and significant (p ≤ 0.05) downregulation of HKT1 was observed under both investigated levels of salinity stress (75 and 150 mM). On the other hand, applied ALA led to a significant (p ≤ 0.05) inhibition of the relative expression of SOS1 and NHX1 compared to the ALA-untreated plants at the same level of salinity. On the contrary, ALA caused a significant (p ≤ 0.05) increase in the relative expression of HKT1 regardless of the presence of salinity stress.
Discussion
Several studies have shown that salt stress can affect multiple morphological, physiological, biochemical, and molecular aspects of plants [6,17,40]. In this study, we observed that exposing plants to salt stress inhibited plant growth parameters, including plant height, fresh weight, dry weight, and leaf area as compared to the unstressed plants ( Figure 1). This decrease in plant growth can be attributed to the modulation of cell cycle progression as well as inhibition of the rate of cell division [41]. Furthermore, elevating the level of salinity is a key factor in increasing osmotic stress and decreasing plant growth by affecting the ability of plants to uptake water [42,43]. In contrast, ALA-treated plants showed considerable enhancement in all of the studied growth parameters regardless of the presence of salinity stress. Alpha-lipoic acid is a potent antioxidant that is soluble in both water and lipids [44]. A previous report showed that ALA can enhance growth and root formation of the salt-stressed canola seedlings [23]. This stimulation may be due to enhanced photosynthesis and carbon fixation [24].
In this study, exposing plants to salt stress resulted in a significant decrease in their membrane stability index (MSI) and a marked increase in H2O2 and malondialdehyde (MDA) (Figure 2). Generally, under abiotic stress conditions, the excessive generation of reactive oxygen species (ROS) is a common response of many plant species leading to oxidative damage [45-48]. These harmful molecules can destroy the cell membrane structure by affecting the structure and function of the protein and lipid bilayers [49]. In contrast, ALA-treated plants showed a significant enhancement of their MSI, as well as a parallel decrease in H2O2 and MDA. It has been found that exogenous ALA is a potent dithiol antioxidant and can mitigate oxidative damage by scavenging the ROS that are produced under diverse environmental stresses, such as high salinity [23], drought [24], heavy metals [21,25] and osmotic stress [20].
Interestingly, the positive influence of ALA on maintaining the cell membrane stability index and reducing oxidative damage was positively reflected in the leaf content of the photosynthetic pigments, i.e., Chl a, Chl b and carotenoids (Figure 3). Furthermore, it was observed that the ALA treatment was more effective at improving the content of Chl a than that of Chl b at the higher level of salinity. This implies that under high levels of salinity, ALA, as a potent antioxidant, has a key protective role for Chl a (considered the major cofactor in the photochemical reactions inside the chloroplast) [50]. It is well known that ABA (the major stress hormone in higher plants under osmotic stress) is synthesized from a carotenoid intermediate [51,52]. In this study, under saline conditions, carotenoids were increased by the ALA treatment. This effect could be attributed to improved cell membrane stability and water potential, affecting the biosynthesis of ABA and consequently maintaining the carotenoid content.
Under abiotic stress conditions, increasing the activity of antioxidant enzymes is necessary for ROS elimination [9,45,53,54]. In this study, the activities of SOD, CAT, POX and APX were significantly increased under saline conditions (Figure 4). These responses occurred to restrict the excessive accumulation of superoxide radicals and H2O2 [55,56]. Under a low level of salinity (75 mM NaCl), the ALA treatment caused a significant increase in the activities of SOD, CAT, POX and APX. Several previous studies have suggested that ALA is able to induce the antioxidant systems (enzymatic and non-enzymatic) in plants under diverse abiotic stressors [20,23,25,44]. This effect may be attributed to the essential role of ALA as a part of several multi-enzyme complexes [57]. Under the highest level of salinity (150 mM NaCl), ALA-treated plants displayed a significant decrease in POX, while SOD and CAT showed an opposite trend; no significant changes were detected in APX. These results imply that under severe levels of salinity stress, SOD and CAT are the two major antioxidant enzymes for scavenging superoxide radicals and H2O2, respectively, in sorghum plants. In contrast, the decrease in POX activity in ALA-treated plants under the high level of salinity (150 mM NaCl) could be attributed to the antioxidative properties of ALA, which partly compensate for the role that POX plays in ALA-untreated plants.
In plants, Na+ exclusion and a reduced Na/K ratio in the sensitive leaf tissue are critical strategies for tolerance to saline stress. This response can be attributed to minimizing the toxic effect of Na+ on several cytosolic enzymes [58]. In the present study, NaCl-stressed plants demonstrated a different compartmentalization of Na, K and Ca in their leaf tissues (Figure 5). The increase in NaCl concentration was associated with a decrease in the concentrations of K and Ca, making the increase in the Na/K ratio very clear. These results are consistent with those obtained in several previous studies on many plant species [6,8,40,42]. On the other hand, ALA-treated plants revealed a significant decrease in the leaf Na concentration and Na/K ratio under both examined salinity levels. These influences were concomitant with a marked improvement in the concentrations of K and Ca. These effects could be attributed to the positive effect of ALA on the membrane stability index (Figure 2), which can affect plant water relations and the ability of plants to take up K and Ca with the transpiration stream. In this respect, similar results were reported in salt-stressed wheat seedlings [59].
To further understand the effect of ALA on Na and K, and consequently on the Na/K ratio, under saline conditions, the relative gene expression of some membrane transport proteins mediating Na and K transport was studied using RT-PCR. We found that the relative expression of SOS1 (the plasma membrane Na+/H+ antiporter) and NHX1 (the vacuolar Na+/H+ antiporter) was significantly downregulated in the ALA-treated plants under saline conditions (Figure 6A,B). These responses may enable plants to survive under saline conditions by excluding Na+ from the cytosol to the apoplast or the vacuole. In contrast, the high-affinity potassium transporter (HKT1) was upregulated in the ALA-treated plants compared to the untreated ones (Figure 6C), indicating that ALA enhanced K+/Na+ selectivity and thus the plant's tolerance to salinity stress. These findings are in line with the ion measurements discussed above (Figure 5).
Conclusions
From the results of this study, we can conclude that ALA can induce tolerance to salinity stress in sorghum plants. The data show that ALA has many protective effects against salt stress, enhancing plant growth and the membrane stability index (MSI) and reducing oxidative damage. These responses were associated with increased activities of the antioxidant enzymes (SOD, CAT, POX and APX). Furthermore, ALA affected ionic homeostasis by reducing the uptake of Na and increasing that of K and Ca, which led to the maintenance of a lower Na/K ratio in leaf tissues. An explanation for these influences is that exogenous ALA leads to a significant downregulation of the relative gene expression of the plasma membrane (SOS1) and vacuolar (NHX1) Na+/H+ antiporters, together with a considerable upregulation of the high-affinity potassium transporter protein (HKT1).
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
] |
Little Higgs after the little one
At the LHC, the Littlest Higgs Model with T-parity is characterised by various production channels. If the T-odd quarks are heavier than the exotic partners of the W and the Z, then associated production can be as important as the pair-production of the former. Studying both, we look for final states comprising at least one lepton, jets and missing transverse energy. We consider all the SM processes that could conspire to contribute as background to our signals, and perform a full detector level simulation of the signal and background to estimate the discovery potential at the current run as well as at the scheduled upgrade of the LHC. We also show that, for one of the channels, the reconstruction of two tagged b-jets at the Higgs mass (Mh = 125 GeV) provides us with an unambiguous hint for this model.
Introduction
The Standard Model (SM) of particle physics provides an admissible explanation for the electroweak symmetry breaking (EWSB) mechanism that seems to be in accordance with all observations to date, including the electroweak precision tests. The discovery of the long-sought Higgs boson at the Large Hadron Collider (LHC) [1,2] completes the search for its particle content, and the current level of agreement of this particle's couplings to the other SM particles is a strong argument in favour of the model. In spite of such a triumph, the SM is beset with unanswered problems whose resolution requires the introduction of physics beyond the domain of the SM. One such issue pertains to the smallness of the Higgs mass, which is unexpected, as there exists no symmetry within the SM that would protect the Higgs mass from radiative corrections. This extremely fine-tuned nature of the SM is termed the Naturalness Problem, and many scenarios beyond the SM (BSM), such as supersymmetric theories, extra-dimensional models and little Higgs models, have been proposed as solutions.
In the little Higgs models, the Higgs boson is realized as a pseudo Goldstone boson of a new global symmetry group [3][4][5]. With the Higgs mass now being proportional to the extent of the soft breaking of this symmetry, the relative lightness can, presumably, be protected. The minimal extension of the SM based on the idea of little Higgs scenario is the Littlest Higgs model [6,7], which is essentially a non-linear sigma model with a global SU (5) symmetry that breaks down to SO(5) at some scale Λ on account of a scalar field vacuum expectation value f ≈ Λ/4π. A subgroup of the SU(5), namely [SU(2) × U(1)] 2 , is gauged, and the breaking mechanism is such that the local symmetry spontaneously breaks into its diagonal subgroup which is identified with the SM gauge group SU(2) L × U(1) Y .
Unlike in supersymmetric theories, the cancellation of the leading correction to the squared Higgs mass occurs here between contributions from particles of the same spin.¹ For example, the W/Z contributions are cancelled by those accruing from the extra gauge bosons. Similarly, it is the exotic partner of the top quark that is responsible for cancelling the latter's contribution. The collective symmetry breaking mechanism ensures that no quadratic divergence enters the Higgs mass before two loops. Although, technically, the little Higgs models, unlike supersymmetry, are not natural (for the stabilization of the scale Λ is not guaranteed and has to be ensured by other means), the inescapability of this extra loop suppression ameliorates the fine-tuning to a great degree, rendering it almost acceptable.
On the other hand, the very presence of these extra particles results in additional contributions to the electroweak precision observables [8][9][10][11][12][13][14], and consistency with the same requires that the scale f should be above a few TeVs, thereby introducing the 'little hierarchy problem'. These constraints can, however, be largely avoided with the introduction of a new discrete symmetry, namely 'T -parity', under which all the SM particles are even while all the new particles are odd. This forbids the mixing between the SM gauge bosons and the heavy T -odd gauge bosons at the tree-level, thereby preserving the tree-level value of the electroweak ρ-parameter at unity [15]. The Littlest Higgs model with T -parity (LHT) [16][17][18][19][20], thus, solves the little hierarchy problem and has the additional advantage that the lightest T -odd particle (which naturally happens to be electrically neutral and color-singlet) can be a good cold Dark Matter (DM) candidate [21][22][23].
The LHT model, like other BSM scenarios, has interesting phenomenological implications with its own set of non-standard particles. In light of the Higgs discovery, a detailed analysis of the model at run I of the LHC has been presented in [24]. In this work, we investigate a few of the most likely signatures of the LHT that could be observed at the current run of the LHC with √s = 13 TeV, as well as predictions for the scheduled upgrade to √s = 14 TeV. As the discrete T symmetry forbids single production of any of the T-odd particles, they must be pair produced at the LHC. While the pair production of the T-odd gauge boson (W±h) has been studied in refs. [25-29], unless the Yukawa couplings are very large, the production rates are expected to be higher for processes involving the exotic quarks. Here, we consider the signals generated from the associated production of heavy T-odd quarks with heavy T-odd gauge bosons. We eschew the simplistic possibility that the exotic quark decays directly into its SM counterpart and the invisible A_h (relevant only for a limited part of the parameter space and considered in ref. [30]) and consider the more prevalent, and more complicated, cascade decays instead. We concentrate on final states, for LHC run II, comprising leptons and jets accompanied by large missing transverse energy, while noting that the pair production of the T-odd quarks also contributes significantly owing to their larger cross-sections. For a large part of the allowed parameter space, the Z_h boson dominantly decays to a Higgs boson and A_h. This, consequently, gives rise to two b-tagged jets, thereby proffering the interesting possibility of reconstructing the Higgs mass and validating the decay chain predicted by the model. Performing a detailed collider analysis while taking all the relevant SM backgrounds into consideration, we explore the possibility of probing the model parameter space at the current run of the LHC.
The paper is organized in the following manner: in section 2, we begin with a brief description of the LHT model. In section 3, we describe our analysis strategy and explore the discovery possibilities at the LHC run II. Finally, in section 4, we summarize our findings and conclude.
The littlest Higgs model with T-parity
Consider a non-linear sigma model with a global SU(5) symmetry of which the subgroup [SU(2) × U(1)]² is gauged. If Σ is a dimensionless scalar field transforming under the adjoint representation, its kinetic term can be parametrized as

L_Σ = (f²/8) Tr[(D_µΣ)(D^µΣ)†],

where the covariant derivative D_µ contains the gauge fields of the two gauged [SU(2) × U(1)] factors. The gauged generators Q^a_j and Y_j (j = 1, 2) can be represented in a convenient block form in terms of the Pauli matrices σ^a. The imposition of a Z₂ symmetry (T-parity) exchanging G₁ ↔ G₂ (and, naturally, the corresponding quantum numbers for all the fields in the theory) requires that

g₁ = g₂ = √2 g,  g′₁ = g′₂ = √2 g′,

where g and g′ will shortly be identified with the SM gauge couplings. The global SU(5) symmetry is spontaneously broken down to SO(5) by the vacuum expectation value (vev) Σ₀ of the scalar field Σ at the scale f; the field is then parametrized as Σ = e^{2iΠ/f} Σ₀,
where Π is the matrix containing the Goldstone degrees of freedom. The latter decompose under the SM gauge group as 1₀ ⊕ 3₀ ⊕ 2_{1/2} ⊕ 3₁ and are arranged in the matrix of eq. (2.7). Here, H = (−iπ⁺, (h + iπ⁰)/√2)ᵀ is the SU(2) Higgs doublet 2_{1/2} and Φ is the complex triplet 3₁, which forms a symmetric tensor with components φ^{±±}, φ^{±}, φ⁰, φ^P. After EWSB, π⁺ and π⁰ are eaten by the SM gauge bosons W and Z. The invariance of the Lagrangian under T-parity demands that the Goldstone matrix transform as Π → −ΩΠΩ, with Ω = diag(1, 1, −1, 1, 1). These transformation rules guarantee that the complex triplet field is odd under T-parity, while the (usual) Higgs doublet is even. As a consequence, the SM gauge bosons do not mix with the T-odd heavy gauge bosons, thereby prohibiting any further corrections to the low-energy EW observables at tree level and thus relaxing the EW constraints on the model [20]. After electroweak symmetry breaking, the masses of the T-odd partners of the photon (A_h), the Z boson (Z_h) and the W boson (W_h) are given by

M_{A_h} = (g′ f/√5) [1 − 5v²/(8f²)],  M_{Z_h} ≈ M_{W_h} = g f [1 − v²/(8f²)],

with v ≈ 246 GeV being the electroweak breaking scale. The heavy photon A_h is the lightest T-odd particle (LTP) and can serve as the DM candidate with the correct relic density [21-23].
Implementation of T-parity in the fermion sector requires a doubling of the fermion content: each fermion doublet of the SM must be replaced by a pair of SU(2) doublets (Ψ₁, Ψ₂).
Under T-parity, the doublets exchange between themselves (Ψ₁ ↔ Ψ₂); the T-even combination remains almost massless and is identified with the SM doublet. On the other hand, the T-odd combination acquires a large mass,² courtesy of a Yukawa coupling involving the large vev and an extra SU(2)-singlet fermion (necessary, in any case, for anomaly cancellation). For simplicity, we can assume a universal and flavor-diagonal Yukawa coupling κ for both up- and down-type fermions. The masses of the T-odd down-type and up-type fermions are then, respectively,

m_{d_h} = √2 κ f,  m_{u_h} = √2 κ f [1 − v²/(8f²)]. (2.10)

² A recent study of heavy top partner production at the LHC, including a global analysis of this model, has been done in [31,32].
If f ∼ O(TeV), the masses of the exotic up- and down-type fermions are comparable. Since our study concentrates on the first two generations of T-odd heavy fermions, we desist from a discussion of the top sector and point the reader to refs. [18-20]. Thus, in a nutshell, the phenomenology relevant to this paper is characterized by only two parameters: the scale f and the universal Yukawa coupling κ.
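A quick sketch of how the T-odd spectrum follows from (f, κ), assuming the standard tree-level mass formulas quoted above and approximate SM gauge couplings:

```python
import math

def lht_spectrum(f_gev: float, kappa: float, v: float = 246.0) -> dict:
    """Leading-order T-odd masses (GeV) in the LHT for given (f, kappa).

    A sketch using the standard tree-level expressions assumed above;
    the gauge couplings below are approximate electroweak-scale values.
    """
    g = 0.652       # SU(2)_L coupling (approximate)
    gp = 0.357      # U(1)_Y coupling (approximate)
    r = v**2 / (8.0 * f_gev**2)
    return {
        "M_Ah": gp * f_gev / math.sqrt(5.0) * (1.0 - 5.0 * r),
        "M_Wh": g * f_gev * (1.0 - r),              # M_Zh ~ M_Wh
        "m_dh": math.sqrt(2.0) * kappa * f_gev,
        "m_uh": math.sqrt(2.0) * kappa * f_gev * (1.0 - r),
    }

# Example point: f = 1 TeV, kappa = 1 gives A_h ~ 150 GeV, W_h ~ 650 GeV
# and T-odd quarks around 1.4 TeV, i.e. quarks well above the gauge bosons.
print(lht_spectrum(1000.0, 1.0))
```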
Numerical analysis
We now present a detailed discussion of our analysis, which pertains to the case of large κ, or, in other words, the situation where the T-odd fermions are significantly heavier than the T-odd gauge bosons. We limit ourselves to a study of the dominant processes, viz. the production of a pair of such fermions (antifermions) on the one hand, and the associated production of a heavy gauge boson along with one such fermion on the other. In other words, the processes of interest are

pp → Q_h_i Q̄_h_j (3.1a),  pp → Q_h_i W±_h (3.1b),  pp → Q_h_i Z_h (3.1c),

where Q_h_i, Q_h_j (i, j = 1, 2) denote the first two generations of heavy T-odd quarks (u_h, d_h, c_h, s_h), whereas W±_h and Z_h are the T-odd heavy partners of the SM W and Z bosons respectively. We focus mainly on the current and future runs of the LHC, keeping in mind the constraints on the parameter space ensuing from the negative results of Run I (centre-of-mass energy √s = 8 TeV) [24]. Rather than presenting a scan over the parameter space, we choose two representative benchmark points (consistent with the present constraints) that illustrate not only the sensitivity of the experiments to the two-dimensional parameter space (f, κ), but also the bearing that the spectrum has on the kinematics and, hence, the efficiencies. In table 1, we list the values of the scale f and the Yukawa coupling κ for the chosen benchmark points (BP), as also the relevant part of the T-odd spectrum. The corresponding branching ratios of the up-type heavy quarks (u_hi) and the heavy gauge bosons are listed as well; here H is the light (standard model-like) Higgs. The branching ratios for the down-type heavy quarks (d_hi) are very similar to those for u_hi. Furthermore, with the available phase space being quite large in each case, the kinematic suppression is negligible. Consequently, there is relatively little difference between the branching ratios (less than 0.5%) for the two benchmark points. And while, for more extreme points, the difference could be slightly larger, the situation does not change qualitatively.
The three sub-processes of eq. (3.1) can, thus, give rise to the following three possible final states:³,⁴ (i) a single lepton accompanied by jets and E_T/ ; and configurations where the lepton is accompanied, in addition, by either a second lepton or a pair of b-tagged jets (collectively eq. (3.3)). Here ℓ = e, µ; b corresponds to a b-tagged jet and j denotes non-b-tagged jets. The leading order (LO) production cross-sections for each of the sub-processes listed in eq. (3.1) are calculated using MadGraph5 [33] and are listed in table 2, wherein we have used the Cteq6L parton distributions. Since the K-factors are larger than unity, the use of the LO cross-sections for the signal events is a conservative choice. The larger production cross-sections for BP1 (as compared to BP2) are but a consequence of the lighter masses of the exotic particles. For our analysis, we use MadGraph5 to generate the events at parton level at LO for both the signal and the SM background contributing to the respective final states under consideration. The model files for the LHT used in MadGraph5 are generated using FeynRules [34].⁵ The unweighted parton-level events are then passed through Pythia (v6.4) [35] to simulate showering and hadronisation effects, including fragmentation. For detector simulation, we then pass these events through Delphes (v3) [36], where jets are constructed using the anti-k_T jet clustering algorithm, with a proper MLM matching scheme chosen for the background processes. Finally, we perform the cut analyses⁶ using MadAnalysis5 [37].

³ Of several possibilities, we concentrate only on final states with leptons. This not only ensures a good sensitivity, but is also, experimentally, very robust and least likely to suffer on account of the level of sophistication of our analysis. However, non-leptonic final states may also provide interesting signal topologies. For example, hadronic decays of the W/H, with their larger branching ratios, as well as the di-Higgs final state where both Higgses decay to the bb channel, can be studied by exploiting jet substructure. Such an all-encompassing analysis is, though, beyond the scope of this paper.
⁴ Note that final states with additional charged leptons are also possible, but the corresponding branching fractions are smaller. Thus, the signal size is likely to prove a bottleneck in spite of a possibly better discriminatory power.
⁵ We thank the authors of ref. [24] for sharing the UFO model files.

Several SM sub-processes constitute backgrounds to the aforementioned final states. In particular, one needs to consider:

• tt(+jets): comprising the semi-inclusive cross-section for tt production with up to two additional hard jets, this constitutes the dominant background for all three final states. For example, the orders-of-magnitude larger cross-section for top production means that a disconcertingly large number of such events would satisfy the requirement of a pair of b-jets reconstructing to the SM Higgs peak.
• W± + jets: with a significantly hard E_T/ distribution, this process serves as the dominant background for the signal configuration with a single charged lepton in the final state (and no b-jets). We consider here the semi-inclusive cross-section for W± production with up to three hard jets.
• Z +jets: while this could have been the major background for the signal configuration with two charged leptons in the final state, a large E T / requirement can effectively suppress it. Akin to the case for the W ± + jets background, this too includes the semi-inclusive cross section for the production of Z with up to three hard jets.
• Diboson + jets: with large production cross-sections, SM W W (W Z, ZZ) production with two hard jets is a significant source of background. For example, owing to mismeasurements, a bb pair from a Z decay could fake a Higgs. In addition, mistagging constitutes another source for such backgrounds.
• Single top production: this will contribute mainly to final state (i).
• tt(+W/Z/H): similar to tt(+jets), these processes may also contribute to the total SM background, but with much lower production cross-sections.
Since the final states under discussion can also result from hard subprocesses accompanied by either or both of initial- and final-state radiation, or by soft decays, we must impose some basic cuts before we attempt to simulate the events. To this end, we demand that the leptons and jets satisfy the minimal transverse-momentum and rapidity requirements of eq. (3.4). Following the ATLAS collaboration [38], we consider a p_T-dependent b-tagging efficiency. Along with this, we also incorporate a mistagging probability of 10% (1%) for charm jets (light-quark and gluon jets). Also, the absolute rapidity of the b-jets is required to be less than 2.5 (|η_b| < 2.5). We show, in figures 1, 2 and 3, the histograms for the signal and background events after imposing only the basic cuts of eq. (3.4).
To understand the transverse momentum distribution of the leading lepton (the upper panels of figure 1), recall the decay chain for the signal processes. In all three processes, the heavy W±h is produced, either directly or from the decay of heavy T-odd quarks. As already mentioned, for the parameter space of interest, the W±h decays to W± + A_h with almost 100% branching ratio and, hence, the subsequent decay of the W± generates leptons in the final state. With the mass difference between the W±h and SM W± bosons being so large, the latter would typically have a large p_T, even if the former had a small p_T. This translates to a large p_T for the charged lepton emanating from the W± decay. Thus, for most events resulting from the process of (3.1b), the leading lepton tends to have a large p_T. For the other two production channels, at least a large fraction of the events would have the Q_h decaying into W±h, thereby bestowing the latter with a large p_T to start with. It should be realized, though, that in each case the possibility exists that, in a decay, the p_T of a daughter, as defined in the mother's rest frame, is aligned against the mother's p_T. While this degradation of the p_T is not very important for the leading lepton, it certainly is so for the next-to-leading one, as is attested to by the lower panels of figure 1. It is instructive to examine the corresponding distributions for the background events. Since the W's (or Z's) now have typically lower p_T, the Jacobian peak at m_W/2 (m_Z/2) is quite visible, and particularly so for the next-to-leading lepton. For the leading one, the peak, understandably, gets smeared on account of the inherent p_T of the decaying boson. This effect, of course, is more pronounced for the signal events. The second, and more pronounced, peak in the lower panels of figure 1 results from non-resonant processes and/or configurations wherein the lepton travels against the direction of its parent. This motivates our cuts on the lepton p_T's.
In an analogous fashion, the decay of the heavy T-odd quark (almost) always yields a high-p_T jet, owing to the large difference between its mass and those of the T-odd bosons. Consequently, a requirement of p_T(j₁) ≳ 250 GeV would eliminate only a very small fraction of the signal events (for each of the three channels), while potentially removing a significant fraction of the background events (see upper panels of figure 2). For process (3.1a), the decay of the second Q_h would lead to a jet almost as hard. On the other hand, for channels (3.1b) & (3.1c), the second jet results only from the decays of the W± or the H. Intrinsically much softer, these still gain from the p_T of the mother. Consequently, the second-leading jet, very often, may have a p_T larger than 200 GeV (see the middle panels of figure 2). For the processes under discussion, a third jet can only result from the (cascade) decay of a SM boson and is hence typically softer (the lower panels of figure 2); for three-jet final states, the requirement on the third-jet p_T should therefore not be much stricter than about 45 GeV.

[Figure 3. Normalized distributions for the missing transverse energy (top panels), the effective mass (second row), the bb invariant mass (third row) and ∆R_bb (bottom panels). In each case, the left and right panels correspond to BP1 and BP2 respectively.]
Next, we turn to some derived kinematic variables. The first quantity of interest is the missing transverse energy (E_T/). For the background events, this arises from the neutrinos (courtesy of W^± or Z decays) or mismeasurement of the jet and lepton momenta. For the signal events, this receives an additional contribution from the heavy photon A_h, which is stable because of T-parity. Not only does the A_h have a large mass, it also has a large p_T owing to the large difference in the masses of the mother and the daughters (on each occasion wherein it is produced). Consequently, the E_T/ spectrum is much harder for the signal (the upper panels of figure 3), and a requirement of E_T/ > 250 GeV would significantly improve the signal to background ratio.
Another variable of interest is the effective mass, defined as the scalar sum of the jet transverse momenta with the sum running over up to four jets. As in the case of the E_T/ distribution, the high masses of the exotic T-odd particles lead to a large M_eff for the signal events, as one can see from the second row of figure 3. Hence, a significant M_eff cut also helps in reducing the background events.
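As a concrete illustration (the precise definition of M_eff did not survive extraction, so whether E_T/ enters the sum is left as an explicit assumption here):

```python
def effective_mass(jet_pts, met=0.0):
    """M_eff as the scalar pT sum over up to the four hardest jets.
    Whether E_T-slash enters the sum is an assumption left to the caller
    (pass met=0.0 to use jets only), since the exact definition is not
    reproduced in the text."""
    leading = sorted(jet_pts, reverse=True)[:4]
    return sum(leading) + met
```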
Finally, for events with two tagged b-jets, we may consider the invariant mass of the pair (the third row of figure 3). In the signal events, the Z_h decays to a Higgs boson and A_h, with the former decaying predominantly into a bb pair. Due to the large mass difference between the Z_h and A_h, the Higgs boson will be produced with a high p_T, which will be imparted to its decay products. As a result, the bb pair will be produced with a relatively small opening angle. On the other hand, the b's in the SM background arise primarily from three classes of processes: (a) the decay of different top-quarks, where the separation between them shows a much broader structure; (b) the decay of a Z boson, wherein the invariant mass would peak at m_Z and, owing to the relatively low momentum of the Z, the b's would be well-separated (in fact, close to being back-to-back); and (c) non-resonant processes, where the b's would be softer and, again, ΔR_bb would have a wider distribution. These features are well reflected by the third and fourth rows of figure 3. It is, thus, expected that a judicious upper cut on ΔR_bb would definitely improve the signal significance. Similarly, a good energy-momentum resolution for the b-jets would serve to remove much of the Z-background.
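For concreteness, ΔR_bb and the bb invariant mass can be computed from the jets' (p_T, η, φ) as in the following sketch (standard collider kinematics, assuming massless jets):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(d_eta^2 + d_phi^2),
    with d_phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def m_bb(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of a pair of (approximately massless) b-jets:
    m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi))."""
    return math.sqrt(2.0 * pt1 * pt2 *
                     (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))
```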
Cut analysis
All the processes in eq. (3.1) may contribute to a given final state of eq. (3.3); henceforth, we include all of them under 'Signal', while 'Background' receives contributions from all SM processes leading to the particular final state.
In addition to the basic cuts of eq. (3.4), further selection cuts may be imposed in order to improve the signal to background ratio. Understandably, these selection cuts would depend on the final state under consideration, both in respect of the differences in event topology for the signal and the background and in respect of the actual size of the signal. In particular, we are guided by the requirement that not too large an integrated luminosity be required to reach a 5σ significance (S = 5), with S being the statistical significance
computed from N_S and N_B, the numbers of signal and background events respectively. We now take up each final state given in eq. (3.3) and describe the kinematic cut flow followed in selecting events for the signal while suppressing the background.
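Since the explicit expression for S was lost in extraction, the sketch below assumes the common convention S = N_S/√(N_S + N_B); the required luminosity for S = 5 then follows from the post-cut cross sections.

```python
import math

def significance(n_s, n_b):
    """Signal significance; S = N_S / sqrt(N_S + N_B) is assumed here,
    as the explicit definition did not survive extraction."""
    return n_s / math.sqrt(n_s + n_b)

def lumi_for_5sigma(sigma_s_fb, sigma_b_fb):
    """Integrated luminosity (fb^-1) for S = 5, given post-cut signal and
    background cross sections in fb.  From S = L*s/sqrt(L*(s+b)) = 5:
    L = 25*(s+b)/s^2."""
    return 25.0 * (sigma_s_fb + sigma_b_fb) / sigma_s_fb**2
```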
1ℓ^± + ≥ 3j + E_T/
This final state for the signal receives contributions from both the strongly produced T-odd quark pair and the associated production modes, thus giving us the maximum signal event rate amongst the final states under consideration. Here the single charged lepton almost always comes from the decay of a W boson resulting from the cascades. The selection cuts, in the order that they are imposed, are:
1. p_T(j_3) > 45 GeV and |η(j)| < 2.5 (C1-1): in other words, we demand that our final state has at least three jets within the given pseudo-rapidity range, each with a minimum p_T of 45 GeV. This choice is motivated by the lowest panels of figure 2.
2. p_T(j_1) > 250 GeV (C1-2): given that the hardest jet is, typically, much harder for the signal events than it is for the background (see top panels of figure 2), we ask that p_T(j_1) > 250 GeV. This, understandably, helps increase the signal to noise ratio to a remarkable extent.
5. E_T/ > 250 GeV (C1-5): the top row of figure 3 bears out our (previously stated) expectation that the extent of transverse momentum imbalance (E_T/) would be far larger for the signal events than for the background. Consequently, this requirement improves the signal to background ratio considerably.
6. p_T(ℓ_1) > 20 GeV (C1-6): finally, to distinguish this final state from that considered in section 3.1.3, we require that there be only one isolated charged lepton (e or µ) with a p_T of more than 20 GeV. A sketch of this sequential selection is given below.
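A minimal sketch of the sequential selection, covering only the cuts reproduced above (C1-1, C1-2, C1-5 and C1-6; the event-record format is our assumption):

```python
def passes_c1(event):
    """Apply the single-lepton cuts reproduced above (C1-1, C1-2, C1-5,
    C1-6; C1-3/C1-4 are not quoted in the text and are omitted).
    `event` is a dict: 'jets'/'leptons' are lists of dicts with 'pt' (GeV)
    and 'eta', 'met' is E_T-slash in GeV -- a format assumed here."""
    jets = sorted((j for j in event["jets"] if abs(j["eta"]) < 2.5),
                  key=lambda j: -j["pt"])
    if len(jets) < 3 or jets[2]["pt"] <= 45.0:   # C1-1: >= 3 central jets
        return False
    if jets[0]["pt"] <= 250.0:                   # C1-2: hard leading jet
        return False
    if event["met"] <= 250.0:                    # C1-5: large E_T-slash
        return False
    leptons = [l for l in event["leptons"] if l["pt"] > 20.0]
    return len(leptons) == 1                     # C1-6: exactly one lepton
```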
In tables 3 (4), we display the effect, for √s = 13 (14) TeV, that these cuts have on the signal and background events when applied successively in the order described above. It is noteworthy that a discovery in this final state is possible at the current LHC run with an integrated luminosity as little as ∼ 8 fb^-1 and ∼ 20 fb^-1 for BP1 and BP2 respectively. The corresponding numbers at √s = 14 TeV are ∼ 5 fb^-1 and ∼ 12 fb^-1 respectively.
1ℓ^± + 2b + j + E_T/
Since the interest in this channel owes to the possibility of reconstructing the Higgs (and possibly developing an experimental handle on the very structure of the theory), the cuts now have to be reorganized keeping in mind both the origin (and, hence, the distributions) of the b-jets and the signal strength.
3. E_T/ > 250 GeV (C2-3): this, again, is similar to cut C1-5 of section 3.1.1, and particularly helps eliminate much of the dominant tt background.
4. p_T(b_2) > 40 GeV (C2-4): as far as the signal events are concerned, the b-jets arise from the decay chain Z_h → A_h + H → A_h + bb. The large mass difference between the Z_h and A_h would be manifested in a large boost for the H which, very often, would be translated to a large p_T for the b-jets. On the other hand, the SM background is dominated by the tt contribution, which typically has a smaller p_T for the b-jets. Thus, requiring that at least two b-jets have substantial p_T would discriminate against the background. It might seem that imposing a harder cut on p_T(b_1) would be beneficial. While this, per se, is indeed true, such a gain is subsumed (and, in fact, bettered) by the next two cuts. Hence, we desist from imposing one.
5. ΔR_bb < 1.5 (C2-5): the aforementioned large boost for the H in the signal events would, typically, result in the two b-jets being relatively close to each other. On the other hand, the background events from tt would have a much wider distribution, whereas b's emanating from associated H-production (which, in the SM, is dominated by low-p_T Higgs production) would preferentially be back-to-back (see third row of figure 3).
Thus, an upper limit on the angular separation between the two tagged b-jets considerably improves the signal-to-background ratio.
The effects of the aforementioned cuts are summarised in tables 5 (6). As expected, the signal strength is much weaker when compared to that discussed in section 3.1.1. While the background rate suffers a suppression too, it is not enough and the required integrated luminosity is much larger in the present case. However, the combination of cuts on ∆R bb and M bb brings discovery into the realm of possibility even for the present run of the LHC and certainly so for that at √ s = 14 TeV.
2ℓ^± + jets + E_T/
This final state receives contributions only from the primary production channels of eqs. (3.1a) & (3.1b), and not from that of eq. (3.1c). Consequently, the signal size is smaller. However, the higher charged-lepton multiplicity in the final state proves helpful in suppressing the background, provided we re-tune the kinematic selections as follows:
1. |η_j| < 2.5 (C3-1): the requirement of jet "centrality" remains the same.
2. p_T(j_1) > 300 GeV (C3-2): the requirement on the hardest jet is now strengthened. This reduces the cross-sections for most of the background subprocesses by 2-3 orders of magnitude, whereas the signal cross-section is reduced only by a few percent.
3. p_T(j_2) > 200 GeV (C3-3): the preceding cut (C3-2) also serves to harden the spectrum of the next sub-leading jet. Although this happens for both signal and background, the effect is larger for the former. This allows us to demand that the next-to-leading jet be quite hard as well (see tables 7 (8)), with the suppressions for the single-top and Z + n-jets production being even more pronounced.
Note that although the dilepton final state has a reduced signal cross-section as compared to that with a single lepton, the requirement of a second isolated lepton also significantly reduces the background. Therefore, this final state requires only moderate integrated luminosities at the 13 (14) TeV LHC, namely around ∼ 40 (26) fb^-1 and ∼ 203 (116) fb^-1 for BP1 and BP2 respectively, which would be accessible in the current run of the LHC.
At this point, we would like to mention that the benchmark points considered in our analysis can also be probed via the pair production of heavy T-odd gauge bosons (W_h/Z_h). However, the latter processes, being purely electroweak in nature, yield much lower cross-sections, which in turn require significantly higher luminosities to reach the same signal significance. This has been studied in ref. [29].
Summary and conclusions
The very lightness of the Higgs boson that was discovered at the LHC has been a cause for concern, especially in the absence of any indication for physics beyond the SM that could be responsible for keeping it light. Amongst others, Little Higgs scenarios provide an intriguing explanation for the same. While several variants have been considered in the literature, in this paper, we examine a particularly elegant version, namely the Littlest Higgs model with a Z 2 symmetry (T -parity). The latter not only alleviates the severe constraints (on such models) from the electroweak precision measurements but also provides for a viable Dark Matter candidate in the shape of A h , the exotic gauge partner of the photon.
At the LHC, the exotic particles can only be pair-produced on account of the aforementioned T -parity. Understandably, the production cross sections are, typically, the largest for the strongly-interacting particles. For example, if the exotic quarks Q ih are light enough to have a large branching fraction into their SM counterparts and the A h , we would have a very pronounced excess in a final state comprising a dijet along with large missing-p T [30].
On the other hand, if the Q ih are heavier than W h and Z h (as can happen for a wide expanse of the parameter space) then they decay into the latter instead, with these, in turn decaying into their SM counterparts (or the Higgs), resulting in a final state comprising multiple jets, possibly leptons and missing-p T [39,40], and it is this possibility that we concentrate on. The parameter space of interest is the two-dimensional one, spanned by f , the scale of breaking of the larger symmetry and κ, the universal Yukawa coupling. Although a part of it is already ruled out by the negative results from the 8 TeV run, a very large expanse is still unconstrained by these analyses. We illustrate our search strategies choosing representative benchmark points from within the latter set. We consider not only the production of a pair of exotic quarks, but also the associated production of W h /Z h with such a quark. Concentrating on the final state comprising leptons plus jets plus missing transverse energy, we consider all the SM processes that could conspire to contribute as background to our LHT signal, and perform a full detector level simulation of the signal
and background to estimate the discovery potential at the current run and subsequent upgrade of the LHC. The large mass difference between the Q_ih and W_h/Z_h results in large momenta for at least a few of the jets. Similarly, the even larger mass difference between the W_h/Z_h and the A_h typically results in large missing momentum. This encourages us to consider final states consisting of hard jets and leptons and large missing transverse momentum. We observe that final states with only one isolated charged lepton (e^±, µ^±), at least three jets and substantial missing transverse energy are the ones most amenable to discovery; for example, a 5σ discovery is possible at the 13 (14) TeV LHC with only 8 (5) fb^-1 and 20 (12) fb^-1 of integrated luminosity for our BP1 and BP2 respectively. A confirmatory test is afforded by a final state requiring one extra isolated lepton. Though this decreases the signal cross-section significantly, the LHC can still reach the discovery level, but now only with 40 (27) fb^-1 and 210 (120) fb^-1 of integrated luminosity at 13 (14) TeV.
We wish, however, to highlight through this work a more interesting signal, one with 2 tagged b-jets in the final state. As discussed earlier (eq. (3.2c)), the heavy Z_h boson decays to a Higgs boson and the A_h with almost 100% branching ratio. This presents us with a unique opportunity to reconstruct the Higgs mass from the tagged b-jets, thus providing an important insight into the LHT parameter space. As we have found, the reconstruction of the Higgs mass requires a higher integrated luminosity (a few hundred fb^-1), but it is still within the reach of LHC Run II. We thus hope that our analysis demonstrates the viability of testing the LHT model in the current run of the LHC. | 8,318.8 | 2016-06-01T00:00:00.000 | [
"Physics"
] |
Experimental Demonstration of MmWave Vehicle-to-Vehicle Communications Using IEEE 802.11ad
Millimeter wave (mmWave) vehicle-to-vehicle (V2V) communication has received significant attention as one of the key applications of 5G technology and is called Giga-V2V (GiV2V). The ultra-wide band of GiV2V allows vehicles to transfer gigabits of data within a few seconds, enabling platooning of autonomous vehicles. The platooning process requires the rich data of a 4K dash-camera and LiDAR sensors for accurate vehicle control. To achieve this, 3GPP, a global standards organization that provides specifications for 5G mobile technology, is developing a new standard for GiV2V technology by extending the existing specification for device-to-device (D2D) communication. Meanwhile, in the last decade, the mmWave spectrum has been used in wireless local area networks (WLANs) for indoor devices, such as home appliances, based on the IEEE 802.11ad (also known as Wireless Gigabit Alliance (WiGig)) technology, which supports gigabit wireless connectivity over distances of approximately 10 m in the 60-GHz frequency spectrum. The WiGig technology has been commercialized and used for various applications ranging from Internet access points to set-top boxes for televisions. In this study, we investigated the applicability of the WiGig technology to GiV2V communications through experiments on a real vehicular testbed. We built the testbed using commercial off-the-shelf WiGig devices and performed experiments to measure inter-vehicle connectivity on a campus and on city roads with different permitted vehicle speeds. The experimental results demonstrate that disconnections occurred frequently due to the short radio range and that the connectivity varied with vehicle speed. However, the instantaneous throughput was sufficient to exchange large data between moving vehicles in different road environments.
Introduction
Various 5G wireless technologies have been standardized and developed by academia and the telecommunications industry, and are now ready to be deployed with several key features such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), etc. The new radio for 5G wireless networks is expected to use various radio frequencies, above or below 6 GHz. Compared to the 28, 60, and 73 GHz bands, the frequencies near the 3 GHz band receive considerable attention from service providers because the millimeter wave (mmWave) suffers from significant pathloss and link blockage caused by obstacles (e.g., buildings, vehicles and human beings) owing to severe penetration loss and reflection. However, it is still advantageous to consider the mmWave spectrum to obtain several hundred megahertz (MHz) of bandwidth because of the scarcity of spectrum. The highlights of our contributions in this study are as follows:
• We conducted mmWave V2V communications using commercial IEEE 802.11ad modules.
• We analyzed inter-vehicle connectivity over the short-range mmWave radio.
• We compared the mmWave V2V communications in different driving environments.
According to our experimental results, the IEEE 802.11ad modules operate only over a short radio range of approximately 10-20 m at boresight due to significant pathloss, which causes frequent disconnections between vehicles, especially at high mobility. However, the average disconnection time is approximately 1 s irrespective of the vehicle speed, even though the deviation of the disconnection time differs with speed; high-speed vehicles demonstrate a larger deviation of disconnection time than low-speed vehicles. Despite the frequent disconnections, the experimental results demonstrate that the mmWave connectivity can deliver a large amount of data within a few seconds in future smart cars.
The remainder of the paper is organized as follows. In Section 2, we introduce the background and related works on mmWave and vehicular communication technology. We describe the GiV2V system in Section 3. Section 4 describes our experimental configuration, followed by the results in Section 5. Finally, we discuss and conclude our study in Section 6.
Related Works
In the last decade, the mmWave spectrum was widely explored to enlarge the bandwidth and increase the throughput in mobile communications. Rappaport et al. introduced seminal results of experiments using a wideband sliding correlator channel sounder with steerable directional horn antennas at both the transmitter and receiver in New York City [1] from 2011 to 2013, and presented more results on the 28, 38, and 73 GHz bands in [2,3].
Based on those studies, mmWave channel models are demonstrated in [4,5], where empirically-based propagation channel models are proposed for the 28, 38, 60, and 73 GHz mmWave bands. Samimi et al. presented a 3-D statistical channel impulse response model for urban LOS and non-LOS (NLOS) channels developed from the 28 and 73 GHz ultra-wideband propagation measurements presented in [6,7]. They later added small-scale fading measurements for the 28 GHz outdoor mmWave ultra-wideband channels using directional horn antennas [23]. Additionally, 28 GHz wideband propagation channel characteristics for mmWave urban cellular communication systems are presented in [24].
Sun et al. characterized a mmWave indoor propagation channel based on a wideband measurement campaign at 73 GHz in an office-type environment [25]. For outdoor access channel modeling at 60 GHz mmWave spectrum, Weiler et al. [26] established a quasi-deterministic channel model and a link level-focused channel model and, subsequently, the authors of [27] introduced a new quasi-deterministic (Q-D) approach for modeling mmWave channels, which allows natural description of scenario-specific geometric properties, reflection attenuation and scattering, ray blockage, and mobility effects.
Andrews et al. provided a comprehensive overview of mathematical models and analytical techniques for mmWave cellular systems based on stochastic geometry [28]. In addition, a system-level analysis of the success probability for cell selection and random access delay in the mmWave cellular systems is conducted in [29].
The mmWave communication is now being considered for V2V or V2I communications, which are required for autonomous or proxy driving in future smart cars [9]. We survey existing works related to mmWave V2X communications in Table 1.
Several studies on measurement campaigns and channel modeling for mmWave V2V or V2I communications have been reported. Ben-Dor et al. conducted multipath and angle-of-arrival (AOA) measurements at 60 GHz of outdoor peer-to-peer channels in an urban campus courtyard and of transmission into a vehicle [30] in 2011, demonstrating that varying the transmitter-receiver separation produces root mean square (RMS) delay spreads from 2.73 ns to a maximum of 12.3 ns for the LOS and NLOS antenna pointing scenarios. Loch et al. developed a practical mmWave vehicular testbed to evaluate performance, where a fixed beam-steering approach enables the RSUs to transmit large amounts of data in a considerably short amount of time for a wide range of speeds [10]. Park et al. investigated mmWave blockage characteristics based on measurements collected in a typical V2V environment at 28 GHz [11].
Based on these measurements and simulations, V2X channel models have been explored. Va et al. reviewed the state-of-the-art in measurements related to mmWave vehicular channels [12]. In contrast to previous studies conducted with the two-ray model on flat road surfaces, more realistic settings with road undulation, road surface curvature, and blockage by other vehicles are considered to improve the accuracy of pathloss prediction. The propagation mechanisms of reflection and diffraction of mmWave on realistic road surfaces and geometries at 60-77 GHz are demonstrated in [31,32]. Further, Antoescu et al. proposed channel propagation models for mmWave V2X communications using ray-tracing simulations [13], which include the effects of link blockage, scattering and multipath fading. Wang et al. provided research results on the propagation characteristics of V2V channels, particularly the shadowing effects induced by obstructing vehicles between a transmitter and receiver [14]. In [15,33], a geometric multiple-input multiple-output (MIMO) channel model for mmWave mobile-to-mobile (M2M) applications based on the two-ring reference model is proposed.
Several studies on V2X networks based on stochastic geometry models have been reported that investigate and optimize the connectivity and throughput with varying blockage, beam direction and vehicle density. Tassi et al. proposed a stochastic model of mmWave-based RSU infrastructure-to-vehicle communications [34], in which they investigated the blockage probability and throughput with varying vehicle densities and speeds on a multi-lane highway. Lorca et al. presented a theoretical analysis of the Doppler power spectrum in the presence of beamforming at the transmitter and/or the receiver in V2I systems [35]. Wang et al. analyzed the coverage of urban mmWave micro-cellular networks based on stochastic geometry with a LOS probability function of randomly oriented buildings for a V2I scenario [36]. Perfecto et al. analyzed the interplay between beamwidth assignment and the scheduling period in V2V communications [20] and proposed an optimization algorithm to establish a V2V link with optimal beam width, using swarm intelligence based on the channel and queue state information [21]. Va et al. proposed a swarm-intelligence method to efficiently pair vehicles of V2V links and optimize beam widths considering the channel state information and queue state information [22].
Some studies propose approaches that utilize sensors (such as radars, GPS, cameras, etc.) to reduce the beam alignment overhead among vehicles based on vehicle position, posture, or other sensed information. Gonzalez et al. [19] and Kumari et al. [18] proposed a set of algorithms to perform beam alignment in a V2I scenario by extracting information from the IEEE 802.11ad module or the LRR radar signal to configure the beams, or to create a joint waveform for automotive radars and mmWave V2V communications using the same hardware. In [16], Choi et al. proposed a high-level solution to the key challenge of mmWave beam training overhead, where information derived from sensors or DSRC is leveraged as lateral information for mmWave communication link configuration. Mavromatics et al. leveraged vehicle sensory data on position and motion, delivered via DSRC beacons, for beamforming [17].
Similarly, location-based beam alignment and training are considered in the following studies. Va et al. proposed a mmWave-beam switching approach based on position information (for example, the information available via GPS) from the train control system for efficient beam alignment [37] and presented an optimization of beam design to maximize the data rate for non-overlapping beams in LOS to the RSU [38]. Garcia et al. also proposed a location-aided beamforming strategy and analyzed the resulting performance considering the antenna gain and latency [39]. Maschietti et al. formulated the optimum beam alignment solution as a Bayesian team decision problem with novel and less complicated algorithms for optimality [40]. Va et al. leveraged the position of a vehicle along with past beam measurements to rank desirable pointing directions, which can reduce the required beam training, based on a popular machine learning method used in recommender systems [41]; moreover, they proposed utilizing the position of the vehicle to query a multipath fingerprint database that provides prior knowledge of potential pointing directions for reliable beam alignment [42]. In [43], Wang et al. also introduced machine learning with past beam training records for optimal beam pairing by exploiting the locations and sizes of the receiver and its neighboring vehicles.
Eltayeb et al. proposed a blockage detection technique for mmWave vehicular antenna arrays that jointly estimates the locations of the blocked antennas along with the attenuation and phase-shifts that result from the suspended particles [44]. For such blockages, a joint optimization problem to select a relay and link that circumvent obstacles and reduce delivery latency in 60 GHz mmWave networks was modeled by He et al. [45], together with a less complex algorithm decomposing the problem into tractable sub-problems. Furthermore, Taya et al. proposed multi-hop relaying through dynamic vehicle deployment to increase the coverage of V2X, formulated the deployment problem as an optimization problem, and obtained lower and upper bounds on its performance [46].
In [47], Petrov et al. showed that the interference from adjacent lanes can be reasonably approximated using two-dimensional stochastic models without any significant loss of accuracy. This interference may significantly affect the performance of the communication systems, as highly directional antennas are used in particular spatial configurations. Kim et al. proposed a channel assignment algorithm for mmWave beams to avoid inter-beam interference from uncoordinated beams in mmWave V2V communications [8].
Multiple connections to legacy networks (such as 3G or LTE) and mmWave base stations can provide seamless connectivity to moving vehicles. Giordani et al. introduced a method with multi-connectivity to a mmWave cell and a conventional microwave cell for robust connectivity and handover, based on sounding-signal sweeping and instantaneous measurement of the received signal strength [48]; further, they proposed a novel uplink multi-connectivity system for efficient control-plane applications, such as handover, beam tracking, and initial access [49].
For security in vehicular mmWave communication systems, Eltayeb et al. proposed physical layer security techniques by injecting artificial noise in controlled directions using multiple antennas [50].
These previous studies on vehicular communications attempt to establish a vehicular channel model of mmWave V2V or V2I communications through simulation or mathematical modeling and propose ideas to reduce beam alignment overhead and training based on location or sensor data. In this study, we demonstrated the feasibility of V2V communication using IEEE 802.11ad in the 60 GHz mmWave spectrum, specifically in a LOS environment. For the IEEE 802.11ad standard, Jacob et al. explored a channel model for system-level simulations with medium access control (MAC) protocols to investigate the influence of moving humans in the framework of the IEEE 802.11ad standard [51]. Coll et al. evaluated the IEEE 802.11ad standard for V2V communication through simulation [52], demonstrating that the MAC operation and beamforming processes result in a high overhead, and that uncoordinated transmitting stations can significantly degrade network throughput. However, to the best of our knowledge, there has been no experimental study with off-the-shelf IEEE 802.11ad devices in a driving testbed.
Table 1. Summary of related works on mmWave V2X communications.
[37-43] Location or situation-based channel estimation, beam direction steering and training are achieved when prior channel information or past measurements are given for each location or situation. Machine learning techniques can be applied.
[18,19] A mmWave link is configured using Long-Range Radar (LRR) mounted on the road infrastructure and on the vehicles for V2I and V2V communications.
[16,17] The mmWave link configuration is assisted by motion and posture information of vehicles estimated from vehicular sensors. DSRC beacons carry the sensor information periodically.
[8,47] The effect of inter-beam interference in V2V networks is analyzed, and beam alignment and multi-channel assignment are considered.
[20-22] Distributed beam alignment and beam-width decisions are achieved using channel and queue state information. The V2V association and scheduling problem is solved in a decentralized manner.
[48,49] Multi-connectivity with microwave frequencies (e.g., DSRC, LTE) is considered to increase the robustness of connectivity (e.g., handover or relay) and reduce the beam tracking overhead.
[10,11,30] Testbeds for mmWave V2I or V2V communications and experimental results are introduced.
[50] Physical-layer security techniques for mmWave and MIMO systems are proposed.
GiV2V Communication Architecture
In this section, we introduce the GiV2V architecture and the beamforming process according to the GiV2V topologies. Further, a well-known pathloss model of the IEEE 802.11ad standard is introduced so that its applicability to our measurement results can be verified in Section 4. The GiV2V network topology is shown in Figure 1. The V2V communications for autonomous driving typically occur in a vehicular platoon, as illustrated in Figure 1a, in which a single driver or the front car leads multiple trailing vehicles, similar to a train. In this convoy model, the GiV2V can form a front or rear beam to obtain directivity gain in mmWave communications. Thus, the connectivity between vehicles can be more stable compared to other traffic formations, as the beam direction is unchanged and the inter-vehicle distance can be maintained consistently by controlling the vehicle speed. In other vehicle topologies, vehicles require diagonal (i.e., front-to-side or rear-to-side) beams to communicate with vehicles in neighboring lanes, as shown in Figure 1b, rather than pure side beams, to reduce the collision risk between lanes. These diagonal transmissions are required while changing lanes in the convoy model or while communicating with vehicles of other convoys.
GiV2V Communication Antenna
For the beam alignment in the GiV2V network, the IEEE 802.11ad antenna module is required to support two types of beam patterns, as shown in Figure 1. Thus, we adopted a commercial IEEE 802.11ad product supporting such beam patterns in our testbed, as illustrated in Figure 2. The directional antenna radiates in a designated direction (θ, φ) with beam width θ_w at a given transmission power, in contrast to the omnidirectional antenna that emits power isotropically; θ and φ are the angles in the z-axis domain [0, π) and the xy-plane domain [0, 2π), respectively. Therefore, the directional antenna is useful for mmWave communications that suffer from high attenuation along the path. Furthermore, the unlicensed bands at 60 GHz for WiGig allow only low transmission power (10 dBm) to avoid mutual interference between devices; thus, the antenna gain from beamforming is important to increase the radio range.
The directional antenna gain is
g(θ, φ) = η u(θ, φ)/u_0, (1)
where η is the antenna efficiency accounting for loss (0 < η ≤ 1) and u_0 is the average power density transmitted over all directions. If we assume that the transmission power P_t is radiated nearly isotropically, then u_0 = P_t/(4π), and the directivity is D = 4π u(θ, φ)/P_t when the loss η is negligible. The beamforming gain g(θ, φ) of a uniform linear array (ULA) or uniform circular array (UCA) antenna is derived from Equation (1). The ULA antenna gain for beam directions is shown in Figure 2. The beam width is narrowest in the direction orthogonal to boresight (i.e., ±90°) and widest at 0° when the antenna array is aligned at zero. Conversely, the UCA antenna can consistently form the same beam width in each sector.
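Equation (1) links the radiated power density to directivity; as a rough illustration of how an array attains such directivity, the following sketch evaluates the normalized power pattern of a uniform linear array. This is a textbook model with assumed half-wavelength element spacing, not the Tensorcom module's measured pattern.

```python
import numpy as np

def ula_power_pattern(n_elem, theta_deg, steer_deg=0.0, d_over_lambda=0.5):
    """Normalized power pattern |AF|^2 of an N-element uniform linear array
    steered to `steer_deg`; a textbook model, not the Tensorcom module's
    measured pattern.  `theta_deg` is the observation angle in degrees."""
    theta = np.radians(np.atleast_1d(theta_deg).astype(float))
    psi = 2.0 * np.pi * d_over_lambda * (np.sin(theta) - np.sin(np.radians(steer_deg)))
    af = np.exp(1j * psi[:, None] * np.arange(n_elem)).sum(axis=1)
    return np.abs(af) ** 2 / n_elem**2  # boresight peak normalized to 1

# Example: a 4-element array is roughly 11 dB stronger at boresight than at 45 deg.
print(ula_power_pattern(4, [0.0, 45.0]))
```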
The beam patterns shown in Figure 2 can be dynamically applied to the GiV2V topologies illustrated in Figure 1. For the convoy model of vehicles, the beam pattern of Figure 2a is appropriate, while that of Figure 2b is suitable for the diagonal topology. Typically, the IEEE 802.11ad module explores all possible sectors for beam directions provided by the RF front-end and antenna arrays. This beam sweeping procedure follows the IEEE 802.11ad standard. Furthermore, the vehicles can utilize the vehicular topology information to decide a beam direction. Assuming that all vehicles exchange hello messages (e.g., cooperative awareness messages (CAMs)) with neighbor nodes and recognize their location from GPS information, vehicles can choose a beam direction without sweeping, or at least reduce the number of swept sectors.
GiV2V Communication Radio Range
In this section, we introduce a well-known pathloss model of IEEE 802.11ad to compare the theoretical outcomes with the measurement results. Based on this model, we calculated the theoretical coverage value with the given system parameters of a commercial IEEE 802.11ad module (Section 4.1) and then compared it with measurement results (Section 5). Maltsev et al. [54] presented an indoor pathloss model of 60 GHz WLAN (i.e., WiGig) for the LOS environment based on the measurement study, as presented below.
L_d(d) = A + 20 log10(f) + 10 α log10(d), (2)
where A is 32.5 dB and no shadow factor is included, d is the distance between the transmitter and receiver (km), f is the carrier frequency, and α is the pathloss exponent of LOS (e.g., 2). In outdoor GiV2V communication, additional attenuation from water vapor (L_vap), oxygen (L_O2), and rain (L_R) is considered [8], with the total atmospheric loss L_a = L_vap + L_O2 + L_R. These atmospheric parameters (dB/km) are assumed to be constant for the relatively short communication period in this study.
Accordingly, the total pathloss is PL(d) = L_d + L_a. For simplicity, we assumed that the pathloss L_a owing to the atmospheric conditions was static during the short communication period. Therefore, the pathloss is determined only by the distance d at a given operational frequency (e.g., 60 GHz).
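A minimal numerical sketch of the total pathloss, assuming the standard log-distance form for Equation (2) with d in metres and f in GHz (numerically equivalent to the km/MHz convention for A = 32.5 dB), and purely illustrative atmospheric constants:

```python
import math

A_DB = 32.5  # intercept of the LOS model quoted above
# Illustrative atmospheric attenuation constants (dB/km); placeholders,
# not the measured values used in the paper.
ATMOS_DB_PER_KM = {"water_vapor": 0.1, "oxygen": 16.0, "rain": 10.0}

def total_pathloss_db(d_m, f_ghz=60.0, alpha=2.0):
    """PL(d) = L_d + L_a, with L_d in the standard log-distance form
    assumed for Equation (2); d in metres and f in GHz."""
    l_d = A_DB + 20.0 * math.log10(f_ghz) + 10.0 * alpha * math.log10(d_m)
    l_a = sum(ATMOS_DB_PER_KM.values()) * (d_m / 1000.0)
    return l_d + l_a
```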
In the LOS environment, the radio range can be determined from the following link-budget condition (and the associated outage probability) with the sensitivity level, T, of the target modulation and coding scheme (MCS):
P_t + G_T + G_R − PL(d) − IL ≥ T, (3)
where IL is the implementation loss, such as that from cables. The maximum antenna gains of the transmitter, G_T, and receiver, G_R, are assumed to be equal when both form beams directed toward each other and use the same number of antenna-array elements. The maximum GiV2V coverage, d, can be derived as PL^-1(P_t + G_R + G_T − T − C). Therefore, the reachable probability between vehicles can be defined as P(D ≤ d), with a random variable D indicating the distance between a transmitter and receiver. In the above equation, the effective range is decided only by the antenna gains of the transmitter and receiver (i.e., the beamforming factor), while the other components are lumped into the constant loss, C. In this study, we assumed that no transmitter power control was performed between vehicles. Using Equation (3), the maximum range, d, can be expressed by inverting the log-distance model of Equation (2):
d = 10^((P_t + G_T + G_R − T − C − A − 20 log10(f)) / (10α)). (4)
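The inversion in Equation (4) is straightforward to evaluate numerically; the sketch below uses the module's quoted P_t = 10 dBm and T = −68 dBm (MCS1), while the antenna gains and constant loss are assumed values for illustration only.

```python
import math

def max_range_m(p_t_dbm, g_t_db, g_r_db, t_dbm, c_db,
                f_ghz=60.0, alpha=2.0, a_db=32.5):
    """Equation (4): d = 10**((P_t + G_T + G_R - T - C - A
    - 20 log10 f) / (10 alpha)), with C lumping the implementation and
    (static) atmospheric losses."""
    budget = p_t_dbm + g_t_db + g_r_db - t_dbm - c_db
    return 10.0 ** ((budget - a_db - 20.0 * math.log10(f_ghz)) / (10.0 * alpha))

# Example with P_t = 10 dBm and T = -68 dBm (MCS1) from the module's
# parameters; the 5 dB gains and zero constant loss are assumptions.
print(max_range_m(10.0, 5.0, 5.0, -68.0, 0.0, alpha=2.1))  # roughly 9 m
```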
IEEE 802.11ad
Previously, mmWave technologies were studied and standardized for WLANs, which are primarily used for home appliances and hand-held devices in indoor environments. The IEEE 802.15.3 task group 3c (TG3c) [55] and IEEE 802.11ad [56] standards specify physical and MAC layer protocols in the 60 GHz unlicensed bands. IEEE 802.11ad defines operations between an access point (AP) and mobile stations (STAs) using two-phase beam training (i.e., association beamforming training (A-BFT) and the beam refinement protocol (BRP)), in which both sides approximately detect a transmitting or receiving sector by sweeping all directions during the beacon header interval (BHI); subsequently, beam refinement is performed by BRP during the service period (SP) of the data transmission interval (DTI). Additionally, IEEE 802.11ad supports a relay mode for link blockage between the AP and the STA. During the SPs that are set by the AP for searching for a relay STA, the source and destination STAs exchange BRP packets with the neighboring candidate relay STAs. Subsequently, the source STA requests measurement reports from several possible relays with good channel quality (i.e., high signal-to-noise ratio (SNR)), which include the link state information toward both the source and destination STAs. The source STA finally selects the best relay, i.e., the one with the highest SNR on both links.
For the GiV2V testbed used in this study, we used off-the-shelf IEEE 802.11ad modules [56]. The IEEE 802.11ad standard supports mmWave communications in the 60 GHz unlicensed bands, and commercial devices have been developed in the considerable time since the completion of the standard. The standard defines the specification of the physical and MAC protocols for an Access Point (AP) and a directional multi-gigabit (DMG) mobile station (STA). In addition, the relay DMG STA (RDS) operation is specified for the NLOS environment. IEEE 802.11ad has six channels with 2.1 GHz bandwidth each in the 58.32 to 69.12 GHz spectrum, which allow different modulations with single or multiple carriers; the single-carrier modes can achieve data rates from 385 Mbps to 4.62 Gbps, and the multi-carrier modes (such as orthogonal frequency-division multiplexing (OFDM)) can achieve data rates from 693 Mbps to 6.75 Gbps according to the MCS.
In this study, we utilized the commercial products developed by Tensorcom [57] shown in Figure 3, which implement the physical/MAC protocol stack of IEEE 802.11ad and are used for communications between smart phones, televisions, laptops and their supplementary devices. The performance features of the Tensorcom 802.11ad module are listed in Table 2. With these parameters, the expected theoretical radio range d can be calculated for a given pathloss exponent using Equation (4) from Section 3.2. The reachable distances at the required threshold T = −68 dBm for MCS1 are presented in Table 3 for varying pathloss exponents.
On-Board Unit Installation
We created on-board units (OBUs) for GiV2V communications using the Tensorcom IEEE 802.11ad modules and laptops (Intel Core i7 7700HQ, 4 GB memory, graphics processing unit) as host PCs. For the connection between the IEEE 802.11ad module and the host laptop, the Tensorcom module supports a USB 3.0 interface; its physical data rate is 5 Gbps, but the actual data rate measured at the packet level was less than 3 Gbps due to USB driver and Linux kernel overhead. The IEEE 802.11ad module processes the physical and MAC protocol stacks for mmWave communications, and the host PC handles the upper-layer protocols from IP upward, including the user datagram protocol (UDP) and transmission control protocol (TCP). The IEEE 802.11ad modules were installed on the vehicle roofs and connected to the Linux laptops via USB cables, as shown in Figure 4. On the host PC, evaluation software was installed that generated UDP/IP packets to be sent to the IEEE 802.11ad module using the GNU USB library; the IEEE 802.11ad module then encapsulated MAC frames over those packets. Similarly, the module decapsulated the received MAC frames and forwarded the packets to the host. Additionally, the evaluation software calculated the instantaneous throughput of the received packets and allowed users to configure the beam directions, the MCS levels, and the infrastructure or ad hoc mode of the IEEE 802.11ad module. To collect information available only within the IEEE 802.11ad module, such as the SNR and received signal strength (RSS), control messages such as SNR request and response were defined and exchanged between the module and the host PC through the GNU USB library.
For our testbed, two vehicles were used to measure the inter-vehicle connectivity of a mmWave link in the convoy model, as shown in Figure 1a. Thus, the front car had the WiGig module attached to the back end of its roof, while the trailing car had another module attached to the front end of its roof, as shown in Figure 5. The front car (model: Tucsan ix) was a sport utility vehicle with a height of 1.55 m, and the trailing car (model: Elantra) was a sedan with a height of 1.4 m; their widths were similar. This installation nearly guaranteed LOS for the mmWave link while the two vehicles moved in the same lane.
Directional Antenna
In our experiment, the IEEE 802.11ad module was equipped with four end-fire antenna arrays (2 × 2 MIMO; two antenna arrays were used for reception and the other two for transmission), which consisted of multiple short antenna arrays (an array is illustrated at the front end, in the direction opposite to the USB host interface shown in Figure 3). In the end-fire antenna array, each antenna for transmission or reception had phases differing by 180 degrees. The distance between the two antennas could be changed, instead of the dynamic phase configuration performed in a phased-array antenna. The Tensorcom 802.11ad module forms beams toward the two different boresights, as shown in Figure 2, for transmission and reception using the two end-fire antenna arrays. With this, the module can sweep sectors to align the beam directions between the transmitter and receiver as described in the IEEE 802.11ad standard; accordingly, the module enables beams to be configured dynamically according to vehicle positions on the roads.
Driving Test Environment
As the radio range is limited due to the high pathloss of the mmWave channel, different driving environments on a campus and on city roads were considered to estimate the effects of vehicle speed and deceleration/acceleration. Figure 6a shows the campus map of Gachon University, South Korea, and the arrows on the map indicate the driving route for the measurement campaign, which was approximately 3 km long. The measurement duration was approximately 400 s and the average speed was 27.7 km/h (the campus speed limit is 30 km/h). A major part of the route consisted of sloping roads that could cause disconnection due to misaligned beam elevation. Further, the GiV2V connectivity was demonstrated on city roads, as shown in Figure 6b. The entire route was approximately 8 km and the speed limit was 80 km/h. There was no traffic during the measurement; however, 10 traffic signals for pedestrians and intersections existed along the route. At each stop, the deceleration and acceleration of the vehicles were repeated, which caused the vehicles to be separated by more than the reachable range of the 802.11ad mmWave link.
During the measurement time in both scenarios, the two vehicles attempted to maintain a convoy model; however, the two vehicles were positioned in the diagonal direction, as shown in Figure 1, during short periods owing to lane changes. Here, the beam pattern was selected from the two patterns shown in Figure 7, following the IEEE 802.11ad beam training procedure.
Coverage and Beam Measurement
Prior to the driving experiment, we evaluated the performance of the IEEE 802.11ad module on static vehicles in terms of beam coverage and directions. For this measurement, the transmitter (i.e., vehicle) and receiver (i.e., laptop) were separated by 2 m, and continuous data traffic was transmitted from the transmitter to the receiver. The Tensorcom module supports only two sectors for beamforming, which cover ±30° and ±60°, respectively. We selected one of these sectors and transmitted dummy packets over the air. Then, we measured the received SNR over 360° of directions around the transmitter. Figure 7 shows the measured beam strength in SNR. The beam angle spreading over ±30°, as shown in Figure 7a, was appropriate for the convoy model of vehicles, without interfering with other vehicles in adjacent lanes. The other beam sector covered ±60°, as shown in Figure 7b. The signal strength was strongest at ±45°, where the two main lobes were created, while the SNR at approximately 0° demonstrated a much lower value of −10 dB. This beam pattern was suitable for the diagonal connection model illustrated in Figure 1b. The SNRs of the main and side lobes in the ±30° beam case were approximately 12 dB and −10 dB, respectively. Similarly, the receiver SNRs of the main lobe and side lobe at ±60° were approximately 11 and −10 dB, respectively. As the antenna directivity depends on the number of antennas in the array, the gain was comparable between the two beam patterns. The main lobe width was nearly 60 degrees in both patterns. The vehicles could switch between the two sectors according to changes in the GiV2V topology.
Observation 1. A pair of nodes using the IEEE 802.11ad standard with a directional antenna could achieve the required SNR only via the main lobe and not via the side lobes.
We measured the received SNR at the boresight of the ±30° beam for varying distances between the transmitter and receiver. For distances within 5 m at the boresight, the SNR was more than 5 dB, while the SNR was approximately 0 dB at distances from 5 to 10 m, as illustrated in Figure 8. At distances beyond 10 m, the SNR varied with fading between −5 and −10 dB. Note that the SNR was less than −10 dB outside the beam angle.
Observation 2. The gain difference between the main and side lobes was more than 15 dB in IEEE 802.11ad beamforming.
In addition, we measured the required SNR for each MCS level; we recorded the SNR values at which the 802.11ad module began to receive packets, even with some errors, while changing the distance between the transmitter and receiver nodes at each MCS level. According to Table 4, MCS 3 or 4 was achievable at less than 5-6 m. Beyond approximately 10 m, only MCS 0 or MCS 1 was available. Table 5 lists the pathloss exponents calculated from the measured SNR values and approximate distances shown in Figure 8. The mean value of the pathloss exponents was 2.09, which is nearly the same as the known pathloss exponent of the LOS environment. In addition, the measured SNR of −1 dB for MCS1 was obtained at approximately 10 m, and its pathloss exponent was 2.1. Table 3 also indicates that the reachable distance would be approximately 10 m with α = 2.1. Therefore, we can conclude that the theoretical calculation and the measurement were almost identical, allowing for small measurement errors. Figure 9 shows the varying received SNR values for the moving vehicle. In contrast to the previous measurement with fixed nodes, we obtained the received SNR samples from a slowly moving vehicle against a static transmitting vehicle. In this case, we observed that the SNR decreased exponentially with the distance; the SNR decreased by approximately 18 dB when the vehicle crossed the 20 m distance. Applying the pathloss equation (Equation (2)) gives 10α log10(20/d_0) = 18; thus, the pathloss exponent was calculated as approximately 2.2 with d_0 = 3 m, the inter-vehicle distance at which the SNR is 10 dB. This measured exponent value is close to the value of one of the LOS models described in [56]. Figure 10 shows the measurement results while driving through the campus and on the city roads. The MCS level was configured statically as level 1 (maximum 385 Mbps), with which a receiver can receive approximately 2500 packets every 100 ms by MAC aggregation. Considering the vehicle length and inter-vehicle distance on the roads, we concluded that the higher MCS levels of 3 or 4, achievable only when the inter-vehicle distance was within approximately 5 m, were not attainable in the driving testbed. The throughput shown in Figure 10 was derived instantaneously in the link layer, which includes error detection and retransmission mechanisms. A receiver calculated the throughput of correctly received packets every 100 ms.
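A sketch of the 100 ms windowed throughput computation described above (the packet-record format is our assumption, not the evaluation software's actual interface):

```python
def windowed_throughput(arrivals, window_s=0.1):
    """Instantaneous throughput per 100 ms window from (timestamp_s,
    payload_bytes) records of correctly received packets; returns a list
    of (window_start_s, throughput_Mbps)."""
    if not arrivals:
        return []
    start, end = arrivals[0][0], arrivals[-1][0]
    n_win = int((end - start) / window_s) + 1
    bytes_per_win = [0] * n_win
    for t, size in arrivals:
        bytes_per_win[int((t - start) / window_s)] += size
    return [(start + i * window_s, b * 8.0 / window_s / 1e6)
            for i, b in enumerate(bytes_per_win)]
```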
Campus Experiment
In the measurement results of the drive through campus, illustrated in Figure 10a, the instantaneous throughput was widely scattered between 0 and 300 Mbps. Certain parts of the measurement period demonstrated notable throughput. We observed that the throughput improved after 300 s, while disconnections were clearly observed between 200 and 300 s. The corresponding SNR is plotted in Figure 10c. The average SNR increased marginally with time. Consequently, the required SNR for MCS 1 (i.e., the measured value of −1 dB in Table 4) was satisfied within approximately 200-300 s.
To investigate the relation between inter-vehicle distance and received signal quality during the driving test, we attempted to estimate the inter-vehicle distance using a computer vision technique (i.e., detecting a vehicle object using a marker on the front vehicle, as shown in Figure 4, and estimating the distance from a bird's-eye view) while measuring the signal quality of the received packets; however, it was difficult to couple a received signal sample from the IEEE 802.11ad module with the distance calculated from the dash-cam video, because of the processing delay in the real-time video and the distance estimation error, although the disconnectivity pattern presented in the following sections demonstrated an approximate correlation with the estimated inter-vehicle distance. However, the inter-vehicle distance during driving could be inferred from the measurement results shown in Figures 8 and 9. Figure 11 shows the disconnection interval and its probability. A disconnection interval is a period during which a receiver cannot receive packets, i.e., a maximal run of samples with zero throughput. Our sampling interval could impose an error deviation of up to 100 ms. Most of the intervals lasted less than 2-3 s, as shown in Figure 11a. The cumulative distribution function (CDF) of the intervals in Figure 11c shows that almost 90% of the disconnection periods were shorter than 5 s.
Observation 5. The SNR and connectivity in IEEE 802.11ad communications of slow-moving vehicles were more stable compared to fast-moving vehicles.
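The disconnection intervals and their CDF can be extracted from the 100 ms throughput samples as follows (a sketch consistent with the definition above):

```python
def disconnection_intervals(throughput_mbps, sample_s=0.1):
    """Lengths (s) of maximal runs of zero-throughput samples, i.e. the
    disconnection intervals; resolution equals the 100 ms sampling period."""
    intervals, run = [], 0
    for x in throughput_mbps:
        if x == 0:
            run += 1
        elif run:
            intervals.append(run * sample_s)
            run = 0
    if run:
        intervals.append(run * sample_s)
    return intervals

def empirical_cdf(values):
    """Sorted (x, F(x)) pairs of the empirical CDF."""
    xs = sorted(values)
    return [(x, (i + 1) / len(xs)) for i, x in enumerate(xs)]
```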
City Experiment
To investigate the GiV2V connectivity during high mobility, we performed experiments on normal city roads using the same configuration as in the campus case. Figure 10b,d shows the throughput and SNR results while driving through the city, respectively. Here, we observed that disconnections occurred during the entire measurement period because of mobility, while the campus measurement achieved partially stable connectivity periods. Owing to the 80 km/h speed limit and the deceleration/acceleration of vehicles on the city roads, the inter-vehicle distance varied more than in the campus experiment. The corresponding SNR plot in Figure 10d shows that the average SNR fluctuated more aggressively than in the campus results. The campus SNR results in Figure 10c show that the SNR was stable because the vehicle convoy consistently maintained a low speed. However, more samples with peak SNR (approximately 10 dB) appeared in the city measurement than in the campus measurement, because the vehicles on campus hardly stopped during the test (approximately 400 s in Figure 10c; the vehicles stopped only to end the test), while vehicles stopped at several intersections on the city roads. Therefore, the measurement times with peak SNR shown in Figure 10d indicate that the test vehicles were stopped at intersections.
Further, as shown in Figure 11, the disconnectivity on the city roads was higher than on campus owing to the variation in inter-vehicle distance. For instance, the longest disconnection period was approximately 20 s, compared to 15 s in the campus case, because the fast-moving vehicles could not move closer once they were separated far enough to lose the connection, until the vehicle in front slowed down to stop at traffic signals. Accordingly, 90% of the disconnection intervals were shorter than 10 s in Figure 11d, double the value of the campus result.
Observation 6. Ten percent of the disconnection periods were longer than 10 s in IEEE 802.11ad-based V2V communications on city roads.
Discussion
According to the measurement results, the GiV2V connectivity differs with the driving environment, and the degree of mobility affects the disconnection duration. The wireless communication parameters of the GiV2V, namely pathloss and slow/fast fading, can typically be affected by varying vehicle speed in the different driving environments, but the parameters other than pathloss can be considered negligible given the short radio range of WiGig. The probabilities of disconnectivity in the two driving environments are compared in Figure 12. This figure illustrates the probability density function (PDF) of the disconnection interval in the campus and city measurements. As the probability of disconnectivity followed an exponential distribution, we fitted the measured values to an exponential distribution using least-square error (LSE) minimization. The λ of the exponential distribution (i.e., 1/µ) was 0.86 and 0.65 for the campus and city roads, respectively; the expected intervals (µ) for the campus and city were 1.16 and 1.54 s, respectively. Although certain disconnection intervals were exceptionally longer in the city case, we can still argue that the average disconnection was comparable, at 1.1 and 1.5 s for the campus and city measurements, respectively. Based on the results on connectivity duration shown in Figure 13, the maximum duration of maintaining a connection was 19.4 s in the campus test, while it was 53.5 s in the city scenario, because vehicles stayed close at intersections for a long time owing to the traffic signals. Thus, the arithmetic mean of the connectivity time was higher in the city case (2.38 s) than in the campus case (1.45 s). However, the geometric means of the campus and city experiments were 0.7 and 0.6 s, respectively, and the median values were 0.5 and 0.4 s, respectively; the average connectivity duration in the campus scenario was higher than in the city case, except for several long connectivity durations. We list a summary of the experimental results of V2V communication using IEEE 802.11ad in Table 6. Based on the results, we concluded that the short radio range of IEEE 802.11ad caused overly frequent disconnections even at slow vehicle speeds. Moreover, the connectivity differed according to the driving environment: on campus, the vehicles demonstrated intermittent connections continuously, while on city roads the vehicles demonstrated long connections only at intersections. However, the durations of the connections on the two roads were comparable, and they allowed vehicles to deliver considerable data to each other. For instance, a vehicle could transmit approximately 300 MB during the short average connection time of 2 s. However, such intermittent connections require more efficient and intelligent transport-layer protocols and buffer management for retransmission and packet reordering. The conventional TCP congestion and flow control can suffer from retransmission of lost packets and connection management. Thus, lightweight protocols such as UDP with control signaling are probably appropriate, and redundancy by fountain or network coding is advantageous for reliable transmission.
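A sketch of the LSE fit of an exponential PDF to the measured intervals, as described above; the bin width is a choice made here, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_exponential_lse(intervals, bin_s=0.5):
    """Least-squares fit of f(x) = lam * exp(-lam * x) to the normalized
    histogram of disconnection intervals (the bin width is a choice made
    here).  Returns (lam, mean = 1/lam)."""
    edges = np.arange(0.0, max(intervals) + bin_s, bin_s)
    counts, edges = np.histogram(intervals, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (lam,), _ = curve_fit(lambda x, l: l * np.exp(-l * x),
                          centers, counts, p0=[1.0])
    return lam, 1.0 / lam
```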
Compared to our convoy experiments in the same lane, the disconnection interval would become longer, and a vehicle could not receive data continuously from the same transmitter if it moved individually without platooning. In this scenario, delay-tolerant communication and data duplication in vehicular caches are necessary, whereby a receiver vehicle that loses its transmitter due to a long disconnection can request the data from a neighboring vehicle holding duplicated data. As future work, we will investigate distributing large data in a vehicular cloud using GiV2V mmWave links and vehicular storage in delay-tolerant networking.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,528.4 | 2019-05-01T00:00:00.000 | [
"Computer Science"
] |
ICT Development, Innovation Diffusion and Sustainable Growth in Sub-Saharan Africa
This study explores the impacts of ICT and innovation on sustainable economic growth and the direction of the causal relationships among them in a trivariate framework. It employs DOLS estimation and a panel VECM causality analysis for 33 Sub-Saharan African countries, categorized by income, over the period 2000 to 2020. The study uses annual time-series data obtained from the World Development Indicators (WDI) database for the empirical analysis. Results from the DOLS show that both ICT development and innovation contribute positively to sustainable growth in all categories of countries. However, the marginal effects of innovation on sustainable growth are very small compared with those of ICT development, especially for low-income countries. The VECM results confirm significant causal relationships among the studied variables in both the short and the long run. Policies should be geared toward channeling resources to enhance ICT skills, access, and usage on the continent. This can be achieved if organizations engaged in the SSA agenda for prosperity provide the support needed to complement governments' efforts in advancing ICT penetration and innovation diffusion. Income groups should also be considered when establishing and implementing such policies.
Introduction
Information and Communication Technology (ICT) and innovation can be connected to different themes and concepts across disciplines. For instance, they are closely associated with the term sustainability (Akkemik, 2015; Kumar & Kumar, 2017). Sustainability is the ability to be maintained at a specific rate or level over time (Kumar & Kumar, 2017). A systematic approach toward sustainability encompasses economic, social, and environmental aspects (Ejemeyovwi, Osabuohien et al., 2019; Teh et al., 2021; Zhao et al., 2021). A substantial literature exists on the relationship between ICT or innovation and each of these aspects of sustainability (Agbemabiese et al., 2012; Boot & Marinc, 2010; Shehzad et al., 2021; Toader et al., 2018). The focus of the present study is on the role of ICT and innovation diffusion in economic sustainability in the Sub-Saharan African (SSA) setting. We therefore posit that innovation diffusion and ICT development are prerequisites for progress and competitiveness and, through them, for sustainable economic growth.
It is hard to deny the fact that technology and invention have played a noteworthy role in the advancement of economies across the globe (Adeleye & Eboagu, 2019; Karakara & Osabuohien, 2019;Kurniawati, 2020). In most emerging and less developed countries, innovation takes a center stage in sustainable economic growth. Developing countries, particularly those subject to climate change (Ejemeyovwi et al., 2018), and energy scarcity (Ejemeyovwi, Adiat et al., 2019) face numerous contemporary and substantial hurdles to innovation. Innovation and technological adoption accelerated at an extraordinary rate in the 21st century, compared with any time in history. The fact that economies have benefited greatly from the adoption of efficient ICT and innovation cannot be overstated (Akerkar et al., 2016).
Most SSA economies have relaxed restrictions and liberalized the ICT sector since the late 1990s, resulting in an upward trend in ICT infrastructure development on the continent (Asongu & Le Roux, 2017). Investment in ICTs in Africa has also been boosted by market forces: investors from across the globe view Africa as a financial hotspot and investment destination because of the continent's large population and the better rate of return on investment it offers relative to other developing economies (Ejemeyovwi & Osabuohien, 2020).
Due to the advancement of wireless mobile communication technologies and the trend of liberalization, the ICT sector in Sub-Saharan Africa (SSA) has experienced a significant upsurge in the past 20 years. Capital investment from both the public and private sectors has poured in as a result of this progress. In addition, drastic cost reductions and improved capacity have enabled the swift diffusion of innovation (Ejemeyovwi, Osabuohien et al., 2019). Consequently, the mobile penetration rate in the SSA region has more than doubled since the year 2000. Countries such as South Africa, Nigeria, the Democratic Republic of Congo, Uganda, and Cote d'Ivoire have more mobile phone lines than fixed lines, and this trend is expected to continue (Ejemeyovwi, Osabuohien & Ebenezer, 2021).
However, most empirical studies on the subject in the extant literature have focused on industrialized and emerging economies from both single-country and panel or cross-country perspectives. Single-country studies include, but are not limited to, those conducted for Brazil (Jung & Lopez-Bazo, 2019), Greece (Tsakanikas et al., 2021), Italy (Daniele, 2006), the USA (Whitacre et al., 2014), Japan (Ishida, 2015), Turkey (Iscan, 2012), Australia (Gretton et al., 2002), Singapore (Vu et al., 2020), India (Reddy, 2018; Reddy & Mehjabeen, 2019), and Pakistan (Rahman et al., 2021). Similarly, many empirical studies have taken a panel or cross-country perspective. Here, the first strand of the literature focuses on the relationship between innovation and economic growth (Cetin, 2013; Furman et al., 2002; Pradhan et al., 2016; Yang, 2006). Although most of these studies examined the effect of innovation on economic growth, characterizing the supply-driven approach, it is in fact the rise in economic activity that has the potential to boost the level of innovation in the process of growth and development. This indicates that innovation and economic growth can reinforce each other, that is, they can have a bidirectional relationship (Pradhan et al., 2016). In the same line of investigation, Maradana et al. (2017) studied the impact of innovation on economic growth in 19 European countries over the period 1989 to 2014. Their findings show a positive contribution of innovation to per capita income growth, and they further confirm a bidirectional causal connection between innovation and income per capita growth.
The second strand of the literature considers ICT and growth as the main variables. For instance, a study conducted for the NEXT-11 countries verified the causal connection between ICT and growth and argued that the direction of causality depended on the level of penetration of the ICT indicators used. Similarly, the connection between financial development, ICT, and growth was examined by Cheng et al. (2021) for 72 countries over the period 2000 to 2015. Among other findings, they established that ICT diffusion can boost growth in high-income economies, but that its influence is unclear in medium- and low-income countries. For the period 1991 to 2012, a panel VAR model was used by Pradhan et al. (2014) to examine the relationship between ICT development and four other economic indicators for the G-20 countries. Their findings show a positive correlation between the expansion of ICT infrastructure and economic growth, and long-term causal relationships were established between these variables.
The third strand of the literature comprises the few studies that examined the relationships among all three variables (ICT, innovation, and growth). In a 15-year study with a sample of 13 G-20 countries, Nguyen et al. (2020) examined the impact of ICT and innovation on carbon dioxide emissions and economic growth; among their findings, ICT and financial development are the key drivers of economic growth. Also, Pradhan, Arvin, Nair, et al. (2017) studied the contribution of innovation, venture capital, and ICT to sustainable growth in 25 European countries over the period 1989 to 2016. Employing the VECM approach, they found a long-run impact of the three variables on sustainable economic growth. Their short-run analysis of ICT and innovation dissemination shows that the direction of causality varies with the precise indicators employed to measure ICT and innovation. Similarly, Ejemeyovwi, Osabuohien & Ebenezer (2021) investigated the link between ICT, innovation, and financial development in Africa using a Bayesian vector autoregressive approach. They found the interaction of ICT and innovation to contribute positively to financial development; however, they did not account for how ICT and innovation can jointly contribute to growth.
It is also clear from the foregoing that studies that take into account all three factors at the same time in a trivariate framework are scarce, particularly for the countries included in this study. To fill this knowledge vacuum, the study used panel Dynamic Ordinary Least Squares (DOLS) estimation to look at the long-and short-run links between innovation diffusion, ICT development, and sustainable economic growth in SSA. Panel vector error correction model (VECM) was also utilized to capture the direction of causality in a trivariate framework.
Moreover, most studies have treated ICT and innovation diffusion as sets of disaggregated indicators, without combining the ICT and innovation proxies into composite measures, even though their components may have significant joint causal effects. Aggregating the ICT indicators (ICT access, ICT use, and ICT skills) into a single dimension, as in this study, therefore yields more informative results. For innovation diffusion, we use scientific and technical journal articles as a proxy, a choice we justify later in this study. Real per capita output in SSA is our measure of sustainable economic growth; the same measure has been used by Pradhan et al. (2020) for the European Union and by Belloumi and Alshehry (2020) for Saudi Arabia. Given the above, the study poses the following questions: Does ICT development stimulate sustainable growth in SSA? Does innovation diffusion stimulate sustainable growth in SSA? Are there any causal relationships between ICT development, innovation diffusion, and sustainable growth in SSA? These are the questions that this study seeks to answer through DOLS and panel causality approaches; this gap in the literature has gone unnoticed in previous investigations. The fundamental goal of this study is therefore to comprehensively assess the interplay of these three variables in a trivariate framework in SSA. The remaining sections of this paper cover the theoretical framework and summary of hypotheses, materials and methods, results, and the conclusion and implications for policy.
Innovation Diffusion and Economic Growth
Over the previous half-century, the rapid digitalization of the global economy has had a substantial impact on countries' inventive potential and economic growth. The interrelationships between these variables are quite complex, and numerous studies have examined the theoretical basis of their dynamic interaction. The present study examines the relationship between ICT development, innovation, and sustainable economic growth in a three-way approach. According to Schumpeter (1942), technology and innovation diffusion are vital for long-term economic progress; he further stated that the creation of new knowledge through research and development (R&D) and the use of contemporary technology are essential. According to Romer's (1994) endogenous growth model, technology and innovation are major factors in increasing productivity and thus economic growth; countries with a higher level of economic development consequently tend to invest more in innovation and technology. Below, we explain the theoretical basis of the associations among the three variables under consideration.
The connections between innovation, ICT development, and economic growth can be categorized into three distinct strands. First is the innovation-growth connection, which has attracted a lot of attention in academic circles. Known for its ability to produce new inventions and discoveries, research and development (R&D) is a key contributor to a country's economic growth, and there is evidence that the wealthiest countries invest in R&D to maintain their position at the top of the innovation value chain. Recently, some studies have examined the relationship between these two variables for the OECD countries: Sokolov-Mladenović et al. (2016) and Kacprzyk and Świeczewska (2019) studied the relationship for the EU28 countries, and Chawla (2020) studied it for all OECD countries together. Sokolov-Mladenović et al. (2016), for example, used a dynamic panel data approach to evaluate the relationship between innovation and economic growth, incorporating other macroeconomic variables, and found innovation to contribute positively to growth. The GMM approach was used by Kacprzyk and Świeczewska (2019) to examine the linkage between R&D and economic growth while controlling for other indicators; their findings confirm a positive association between R&D and growth. Similarly, using panel data modeling, Chawla (2020) found a substantial dynamic link between population, R&D, and economic growth. Thus, it is proposed that the following hypotheses be evaluated in this research:
ICT Development and Economic Growth
The second viewpoint focuses on the relationship between ICT and economic growth. There are two main channels through which ICT can contribute to economic growth. First, ICT enhances economic agents' efficiency and productivity: using ICT, agents gain access to new resources, information, market opportunities, and other advantages. Second, because of the increasing worldwide demand for ICT, the sector has grown into an important source of income for many countries. ICT services become increasingly complex as economies grow, meaning that modern services are required by both customers and enterprises, and ICT spending by governments across the globe has increased to suit the needs of a wide range of stakeholders in the economy. Several recent studies have examined the relationship between economic growth and ICT in Sub-Saharan Africa and the OECD countries. Using dynamic panel data modeling, Pradhan, Arvin, Nair, et al. (2017), for example, looked at the relationship between innovation, investment, trade openness, ICT infrastructure, and economic growth. In a similar study, Koutroumpis (2019) used a production function technique to show that capital, labor, broadband, and economic growth have a strong link. Using dynamic panel data modeling, Myovella et al. (2020) found a favorable correlation between digitalization and economic growth. Thus, it is proposed that the following hypotheses be evaluated in this research:
ICT Development and Innovation Diffusion
The third viewpoint examines the innovation-ICT nexus, which has received less attention in the academic literature. Over time, governments and corporations have been encouraged to invest in R&D in the ICT sector because of ICT's ability to boost economic growth and productivity. ICT innovation has increased, allowing the various economic actors to raise their production and efficiency. ICT infrastructure investment has also resulted in lower prices for ICT services, allowing greater use of ICT across sectors and fields, which in turn has increased funding for new ICT activities such as software and application tools. Koutroumpis et al. (2020) found a greater impact on Europe's economy from R&D investments in ICT companies than from R&D investments in non-ICT industries, which has pushed ICT companies to invest more in R&D. Edquist and Henrekson (2017) studied the link between these two variables for 50 selected industries. Similarly, Saidi and Mongi (2018) examined the dynamic link between these two variables in selected high-income countries, whereas Choi and Yi (2018) examined the relationship in 105 selected countries. Thus, it is proposed that the following hypotheses be evaluated in this research:

Hypothesis 5a (H ID→ICT): Innovation diffusion "Granger causes" ICT development.

Hypothesis 5b (H ICT→ID): ICT development "Granger causes" innovation diffusion.

The above hypotheses are summarized in Figure 1.
Model Specification
As previously mentioned, endogenous growth models have demonstrated the importance of ICT and innovation in boosting economic growth (Ejemeyovwi, Osabuohien & Bowale, 2021; Tsakanikas et al., 2021), and the preceding section discussed the interplay among these variables. However, there is a paucity of research on the impact of ICT and innovation on economic growth that simultaneously accounts for the direction of causality among them in a trivariate framework (see Pradhan, Arvin, Nair, et al., 2017). The present study extends the models of Ofori and Asongu (2021) and Rudra et al. (2018) by aggregating the different measures of ICT and innovation. The research model is specified via a Cobb-Douglas production function:

RGDP_it = A_0 · ICT_it^{β_1} · ID_it^{β_2} · e^{ε_it}   (1)

After the log transformation, equation (1) can be written as:

ln RGDP_it = β_0 + β_1 ln ICT_it + β_2 ln ID_it + ε_it   (2)

where β_0 = ln(A_0); i (= 1, 2, . . ., N) denotes a country in the sample; t (= 1, 2, . . ., T) signifies the time period for each country; and β_i (for i = 1, 2) are the parameters of the model. The task is to estimate the parameters in equation (2) and compute panel estimation tests on the causal relationships among real GDP per capita (RGDP), innovation diffusion (ID), and ICT development (ICT). The a priori theoretical expectation is that ICT and innovation should have a significant positive impact on sustainable growth in SSA.
Data and Sample
An empirical approach is adopted to investigate the impacts of ICT and innovation on sustainable growth and the direction of the causal relationships among them. The study uses annual time-series data obtained from the World Bank (2021) for a sample of 33 SSA countries selected on the basis of data availability for all the indicators used in the study.
Variables Description
The variables used in this study are innovation diffusion (ID), ICT development (ICT), and real per capita GDP growth (RGDP) as a proxy for sustainable growth (Belloumi & Alshehry, 2020; Rudra et al., 2018). Given that the sustainable development index (SDGI) is a critical measure of countries' sustainable growth and development, we also incorporate it in the robustness check (Table 4). Innovation diffusion is captured by scientific journal articles owing to the limited availability of data on R&D activities in SSA countries. The same measure has been used by Oluwatobi et al. (2015) and Ejemeyovwi, Adiat et al. (2019). They argued that, apart from data availability, innovation output is better captured by scientific journal articles than by other measures for the following reasons: (1) innovative individuals from diverse fields spontaneously convey their ideas through scientific journal papers, so beneficial innovative ideas emerging from disciplines other than engineering can readily be kept for reference; such ideas may not need patenting, making scientific and technical journal articles an accurate venue for presenting them. (2) The procedure for obtaining a patent or trademark, with its requirements and certifications, is very tedious, notably in most Sub-Saharan African countries. In countries like Nigeria, for instance, the process involves bureaucratic requirements that delay the securing and protection of innovative ideas; several innovative outputs may consequently end up unprotected and stolen, while others become outdated before they are registered.
(3) Profits are typically the driving force behind patenting; as a result, new ideas are protected by patents so that they can be licensed and sold for a profit (see Appendix Table A1). In this paper, ICT is a weighted index of the three ICT development indicators, namely ICT access, ICT use, and ICT skills. Detailed definitions of these variables are available in the WDI database and are summarized in Table 1.
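As an illustration of how such a weighted index can be assembled, the following is a minimal Python sketch. The country values are hypothetical, and because the paper's exact weights are not reproduced here, equal weights are used purely as a placeholder assumption.

```python
import pandas as pd

# Hypothetical panel with the three ICT sub-indicators (placeholders).
df = pd.DataFrame({
    "country": ["NGA", "NGA", "KEN", "KEN"],
    "year":    [2019, 2020, 2019, 2020],
    "ict_access": [45.0, 48.0, 52.0, 55.0],
    "ict_use":    [30.0, 34.0, 40.0, 43.0],
    "ict_skills": [50.0, 51.0, 57.0, 58.0],
})

cols = ["ict_access", "ict_use", "ict_skills"]

# Min-max normalization puts each sub-indicator on a common 0-1 scale.
norm = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())

# Equal weights are an assumption for illustration only.
weights = {"ict_access": 1 / 3, "ict_use": 1 / 3, "ict_skills": 1 / 3}
df["ICT"] = sum(w * norm[c] for c, w in weights.items())
print(df[["country", "year", "ICT"]])
```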
Econometric Methodology
Panel unit root test. The standard LLC, IPS, and Hadri stationarity tests become ineffective if the cross-sections in the panel are not independent. To accommodate cross-sectional dependence and deliver consistent results, cross-sectionally augmented versions of the Dickey-Fuller and Im-Pesaran-Shin tests were developed: the cross-sectionally augmented Dickey-Fuller (CADF) test and the cross-sectionally augmented IPS (CIPS) test. The CADF test entails estimating the following regression, in which the standard ADF equation is augmented with the cross-sectional averages of the lagged level and first difference of the series (with further lagged differences added as needed):

Δy_it = µ_i + b_i y_{i,t−1} + c_i ȳ_{t−1} + d_i Δȳ_t + ε_it

where y_it denotes the variable analyzed, ε_it the error term, Δ the difference operator, µ_i the country-specific constant (and, where included, trend), and ȳ_t the cross-sectional average of y_it. The null hypothesis is that all of the panel member series have a unit root; the alternative hypothesis states that at least one panel member has no unit root. A suitable lag length is chosen using the Schwarz Bayesian criterion (SBC).
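A minimal sketch of the CADF/CIPS logic in Python, assuming hypothetical panel data: each country's ADF regression is augmented with cross-sectional averages, and the CIPS statistic averages the resulting t-statistics. The critical values are Pesaran's tabulated ones, which are not computed here.

```python
import numpy as np
import statsmodels.api as sm

# y: T x N panel of one variable (rows = years, cols = countries);
# hypothetical random-walk data for illustration.
rng = np.random.default_rng(1)
T, N = 21, 33
y = np.cumsum(rng.normal(size=(T, N)), axis=0)

ybar = y.mean(axis=1)                      # cross-sectional averages
dy, dybar = np.diff(y, axis=0), np.diff(ybar)

def cadf_stat(i):
    """CADF t-statistic for country i: regress dy_it on y_{i,t-1},
    ybar_{t-1} and dybar_t (plus a constant)."""
    X = sm.add_constant(np.column_stack([y[:-1, i], ybar[:-1], dybar]))
    res = sm.OLS(dy[:, i], X).fit()
    return res.tvalues[1]                  # t-stat on the lagged level

# The CIPS statistic is the cross-sectional average of the CADF t-stats.
cips = np.mean([cadf_stat(i) for i in range(N)])
print(f"CIPS = {cips:.3f}  (compare with Pesaran's critical values)")
```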
Panel cointegration tests.
To assess if the variables have a long-run equilibrium link, a cointegration test is utilized. In other words, if two or more series are cointegrated, the variables in these series are in a long-run equilibrium relationship. In contrast, a lack of cointegration suggests that the variables have no long-run relationship, meaning that they can theoretically move arbitrarily far apart.
Assume that the variables are integrated of order one. If so, the next step is a cointegration analysis to examine whether the set of possibly "integrated" variables has a long-term relationship. To check for this, a cointegration equation of the following form is estimated:

Y_t = α + βX_t + ε_t

This equation may be re-written as:

ε_t = Y_t − α − βX_t

with the cointegrating vector defined as (1, −β) for (Y_t, X_t)′, α being the intercept. The Johansen (1988) procedure, however, is not designed to deal with a panel setting. As a result, we use the Pedroni (1999, 2000, 2004) panel cointegration test to assess whether the variables are cointegrated. The Pedroni test is applied to the time-series panel regression setup below:

Y_it = α_i + δ_i t + Σ_j β_ji X_jit + ε_it

where Y_it and X_jit are the observable variables, ε_it is the panel regression's disturbance term, α_i permits country-specific fixed effects, and the slope coefficients β_ji may vary across countries. The test is based on the residual autoregression ε_it = ρ_i ε_{i,t−1} + u_it, with the null hypothesis of no cointegration being H0: ρ_i = 1 for all i. In the within-dimension (pooled panel) estimation, the alternative hypothesis assumes a common value for ρ_i (= ρ < 1); this removes any additional source of heterogeneity among the panel member countries. In the between-dimensions (group) estimation, the alternative hypothesis does not assume a common value for ρ_i, which adds another potential source of variation across the panel's country members.
To determine whether the cointegration vector is heterogeneous, Pedroni recommends two types of test. "The first is a test that uses an approach that works within a single dimension (i.e., a panel test). The four statistics utilized in this test are the panel v-statistic, panel ρ-statistic, panel PP-statistic, and panel ADF-statistic. These statistics, which pool the autoregressive coefficients over numerous panel members, are used to perform unit root tests on the generated residuals. The second test is a group test with three statistics: a group ρ-statistic, a group PP-statistic, and a group ADF-statistic. These figures are based on estimators that average each panel member's individually estimated autoregressive coefficients" (Pedroni, 2000).
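For intuition, the sketch below implements a simplified group-ADF-style diagnostic in Python: residuals from country-by-country cointegrating regressions are ADF-tested and the t-statistics averaged. This mirrors the logic of Pedroni's between-dimension tests but omits the tabulated mean/variance corrections of the exact statistics, so it is illustrative only, on hypothetical data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def group_adf_diagnostic(Y, X):
    """For each country, regress Y on X (with constant), then ADF-test
    the residuals; average the t-statistics across countries. This is a
    simplified illustration of Pedroni's group-ADF logic, not the exact
    normalized statistic."""
    stats = []
    for i in range(Y.shape[1]):
        res = sm.OLS(Y[:, i], sm.add_constant(X[:, i])).fit()
        stats.append(adfuller(res.resid, regression="n")[0])
    return np.mean(stats)

# Hypothetical cointegrated panel (T x N), for illustration only.
rng = np.random.default_rng(4)
T, N = 21, 33
X = np.cumsum(rng.normal(size=(T, N)), axis=0)
Y = 0.2 * X + rng.normal(scale=0.3, size=(T, N))
print(f"average residual ADF t-stat: {group_adf_diagnostic(Y, X):.2f}")
```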
Long-run structural parameter estimation. It is well known that the long-run structural coefficients of the exogenous variables can be estimated once a long-run equilibrium between the variables has been established. Cointegration analysis has the added advantage that, once cointegration is established, the estimates of the effects of the exogenous variables on the endogenous variable are meaningful in both statistical and economic terms. However, as there are numerous long-run estimators, the question is which one to use. Among the regularly used estimators is Ordinary Least Squares (OLS). OLS has been superseded by Dynamic Ordinary Least Squares (DOLS) and Fully Modified Ordinary Least Squares (FMOLS) because of their superiority in addressing the potential endogeneity of explanatory variables and the autocorrelation of residuals, yielding asymptotically well-behaved estimates (Pedroni, 2004). In dealing with endogeneity and serial correlation, the FMOLS estimator uses a non-parametric approach, whereas the DOLS estimator employs a parametric one. The DOLS estimator outperforms both the OLS and FMOLS estimators in terms of performance and efficiency, particularly in small samples (Fei et al., 2011; Kao & Chiang, 2000; Narayan & Smyth, 2007). It is worth noting that the coefficients derived by DOLS are unbiased and consistent, according to Pedroni (2001). Also, according to Herrerias et al. (2013), the DOLS estimator is the most appropriate way to handle the lack of cross-sectional independence among panel series, and according to Rudra et al. (2018), DOLS is the best estimator for studying ICT-growth relationships. Given these advantages, the DOLS estimator is used in this study to account for the intrinsic variability in long-run variances.
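A minimal sketch of the DOLS idea for a single cross-section unit, assuming hypothetical data: the long-run regression is augmented with leads and lags of the differenced regressor, and the coefficient on the level of x is the long-run estimate. A panel DOLS would pool or average such estimates across countries.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def dols(y, x, p=1):
    """Dynamic OLS: regress y_t on x_t plus p leads and p lags of
    diff(x_t) to absorb endogeneity and serial correlation; returns the
    long-run coefficient on x."""
    df = pd.DataFrame({"y": y, "x": x})
    df["dx"] = df["x"].diff()
    for k in range(-p, p + 1):
        df[f"dx_{k}"] = df["dx"].shift(-k)   # leads (k > 0) and lags (k < 0)
    df = df.dropna()
    X = sm.add_constant(df[["x"] + [f"dx_{k}" for k in range(-p, p + 1)]])
    return sm.OLS(df["y"], X).fit().params["x"]

# Hypothetical cointegrated pair for one country (placeholder data).
rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=200))
y = 0.23 * x + rng.normal(scale=0.5, size=200)  # true long-run elasticity 0.23

print(f"DOLS long-run coefficient: {dols(y, x, p=2):.3f}")
```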
VECM estimation. A VECM can be used to perform a cause-effect evaluation if the variables are cointegrated (Pesaran et al., 1999). A two-step method based on the cointegrating regression can be used to obtain the error-correction terms (Granger, 1988). The F-statistic on the lagged differenced explanatory variables signifies short-run causality, while λ_ik, the coefficient of the lagged error-correction term ECT_{i,t−1}, captures long-run causality: if λ_ik is statistically significant, it suggests a long-run causal link between the variables. After establishing this, the next stage is to explore the direction of causality by utilizing the ECT obtained from the long-run VECM.
Here, we use the panel-based VECM to determine the direction of causality between the variables, namely economic growth, innovation diffusion, and ICT development, as follows:

Δln RGDP_it = θ_1i + Σ_{k=1}^{p} φ_11ik Δln RGDP_{i,t−k} + Σ_{k=1}^{p} φ_12ik Δln ID_{i,t−k} + Σ_{k=1}^{p} φ_13ik Δln ICT_{i,t−k} + λ_1i ECT_{i,t−1} + ε_1it

with analogous equations for Δln ID_it and Δln ICT_it. Lag lengths are an important consideration when estimating a VECM, as causality tests can be heavily influenced by the lag structure used: bias occurs when there are too few or too many lags. Too few lags may mean that key dynamics are omitted from the model, leading to biased regression results and incorrect conclusions; too many lags waste observations and, while this can reduce the standard errors of the estimates, it also reduces the reliability of the results. The optimal lag length cannot be determined with certainty, yet valid formal model-selection criteria exist. Allowing the maximum lag lengths of the three variables to vary between countries would significantly increase the computing load on a panel as large as ours, so this is not permitted in our VECM. We utilize the well-known Akaike Information Criterion (AIC) to find the best lag structure for our model.
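For a single country's trivariate series, VECM estimation with AIC-based lag selection can be sketched in Python with statsmodels, as below. The series are synthetic placeholders, and pooling the country-level estimates into a panel VECM is beyond this sketch.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank, select_order

# Hypothetical trivariate series for one country: ln RGDP, ln ID, ln ICT.
rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=100))     # shared stochastic trend
data = pd.DataFrame({
    "lnRGDP": common + rng.normal(scale=0.3, size=100),
    "lnID":   0.5 * common + rng.normal(scale=0.3, size=100),
    "lnICT":  0.8 * common + rng.normal(scale=0.3, size=100),
})

# Choose the lag length by AIC, as in the text.
lags = select_order(data, maxlags=4, deterministic="ci").aic

# Cointegration rank via the Johansen trace test.
rank = select_coint_rank(data, det_order=0, k_ar_diff=lags).rank

# Force at least rank 1 so the illustrative fit always runs.
res = VECM(data, k_ar_diff=lags, coint_rank=max(rank, 1),
           deterministic="ci").fit()
print("Adjustment coefficients (ECT loadings):\n", res.alpha)
```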
Discussion of Results
After grouping the countries by income based on the World Bank classification, we present the empirical findings in four steps. First, we examine the stationarity of the time-series variables, as shown in Appendix Table B1. Second, we reveal the mechanism of their cointegration, as shown in Appendix Table B2. Third, we estimate the long-run structural parameters via the DOLS regression, as shown in Table 2. Lastly, we confirm the direction of Granger causality among the cointegrated variables via the VECM, as shown in Table 3.
In the context of long-run analysis, cointegration can be used to tackle the problem of series differentiation: by performing the cointegration test, the long-run information in unit-root series can be gleaned more clearly. After determining that the variables have a panel unit root and are integrated of first order, the next step is to assess whether there is a long-run interaction between the three variables. The panel cointegration tests of Pedroni (1999, 2004) are used to determine whether the variables in the model are cointegrated. Pedroni suggests two classes of cointegration statistics. The first comprises four within-dimension (panel) statistics, namely the panel v-statistic, panel ρ-statistic, panel Phillips-Perron statistic, and panel Augmented Dickey-Fuller statistic, which pool the autoregressive coefficient across the panel countries. The second class comprises three between-dimension (group) statistics, namely the group ρ-statistic, group Phillips-Perron statistic, and group Augmented Dickey-Fuller statistic, which are based on the individually estimated autoregressive coefficients for each panel country. In all the tests, the null hypothesis is that there is no cointegration among the variables, whereas the alternative hypothesis is that there is cointegration. In contrast to homogeneous cointegration techniques such as those of Johansen (1988) and Kao and Chiang (2000), the Pedroni cointegration analysis considers the heterogeneity of the series across cross-sections. The results of the Pedroni cointegration analysis are shown in Appendix Table B2: the null hypothesis of no cointegration is rejected at the 1% significance level. Therefore, the Pedroni panel cointegration test suggests a long-run relationship between innovation diffusion, ICT development, and sustainable growth for the overall SSA sample and the sub-income groups.
DOLS Results
After validating the existence of long-run relationships, we estimated the long-run coefficients via DOLS; the results are reported in Table 2. We used the overall sample of the 33 Sub-Saharan African countries selected for the study. To capture differences in income levels, we also divide the Sub-Saharan African countries into three groups based on the World Bank classification: upper-middle-income (UMIC), lower-middle-income (LMIC), and low-income (LIC) countries.
In the estimation, we examine the effect of innovation diffusion and ICT development on sustainable growth. The long-run estimates of the DOLS model are reported in Table 2. The empirical results show that ICT development significantly increases sustainable growth in all the groups (SSA, UMIC, LMIC, and LIC): a 1% increase in ICT development in SSA, UMIC, LMIC, and LIC increases sustainable growth by approximately 0.23%, 0.24%, 0.12%, and 0.06%, respectively. These estimates support the findings of Cheng et al. (2021), Pradhan et al. (2014), and Pradhan, Arvin, Nair, et al. (2017). A possible explanation is that fixed telephone and fixed broadband subscriptions are among the main components of ICT development, suggesting that telecommunication indicators have driven much of the development of ICT as a whole in SSA and that many Sub-Saharan African countries can rely on ICT development to boost their economies.
With regard to the relationship between innovation diffusion and sustainable growth, the results follow a pattern similar to that between ICT development and sustainable growth. From the DOLS estimates, innovation diffusion has a positive and significant impact on sustainable growth in all the groups (SSA, UMIC, LMIC, and LIC): a 1% increase in innovation diffusion in SSA, UMIC, LMIC, and LIC increases sustainable growth by approximately 0.08%, 0.15%, 0.05%, and 0.04%, respectively. These estimates support the findings of Pradhan et al. (2016) and Maradana et al. (2017).
On the whole, these results indicate that ICT development and innovation diffusion, in terms of the DOLS, are capable of spurring sustainable growth. However, the magnitude of the long-run elasticity of sustainable growth with respect to ICT development and innovation diffusion is much greater in the model for UMIC than in the models for LMIC and LIC. It appears that, although the merits of both ICT development and innovation diffusion are evident, innovation has diffused at a slower rate than ICT has developed. This implies that ICT development contributes more to sustainable growth than innovation diffusion in UMIC, LMIC, and LIC alike, confirming the different roles of ICT development and innovation in the sustainable growth process. The finding is in line with the works of Pradhan et al. (2014) and Nguyen et al. (2020), who obtained the same results for the G-20 countries. Nguyen et al. (2020) observe that ICT development is more sensitive to variations in economic growth; this greater sensitivity occurs because ICT development, through the acceleration of fixed telephone and fixed broadband subscriptions, speeds up economic growth. Pradhan, Arvin, Nair, et al. (2017) report a similar result on the role of ICT development, innovation diffusion, and venture capital in speeding up economic growth in European countries, and our findings consequently agree with the theoretical underpinning.

[Notes to Tables 2 and 3. Source: authors' computation. LM = Lagrange multiplier test for serial correlation; RESET = misspecification test; WHET = heteroscedasticity test (White); RGDP = real per capita GDP; ID = innovation diffusion; ICT = information and communication technology. ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.]
Panel VECM Granger Causality Results
In Table 3, we present the output of the VECM Granger causality analysis for both the short and the long run. The short-run results in Table 3 reveal two-way causality between innovation diffusion and sustainable growth and between ICT development and sustainable growth for the overall SSA sample. Moreover, the output reveals one-way causation from ICT development to innovation diffusion in the short run for the overall SSA sample; in other words, ICT development had a substantial impact on innovation diffusion in the short run, and not the other way around. This is not surprising, because many new and innovative activities are heavily dependent on ICT services. The demand for greater ICT development appears to rise in tandem with the rate of innovation dissemination, and this relationship was shown to affect ICT development. Long-run causality is denoted by ECT(t−1), with the results in the last column of Table 3. Starting with the overall SSA sample, in the model where sustainable growth is the endogenous variable the ECT(t−1) coefficient is −0.15328, indicating that ICT development and innovation diffusion Granger-cause sustainable growth in the long run, with adjustment toward equilibrium at a pace of about 15.32% per period. Additionally, the output shows that sustainable growth and ICT development Granger-cause innovation diffusion in the long run, with an adjustment pace of around 9.26%, and that sustainable growth and innovation diffusion Granger-cause ICT development, with an adjustment pace of about 7.38%. Turning to the income groups, the long-run results show that ICT development and innovation diffusion Granger-cause sustainable economic growth, with adjustment paces of around 10.22%, 2.33%, and 7.73% for UMIC, LMIC, and LIC countries, respectively. Likewise, after a shock, the variables in the ICT development model converge to the long-run steady state at approximately 13.09%, 10.88%, and 9.14% for UMIC, LMIC, and LIC countries, respectively. Also, sustainable economic growth and ICT development Granger-cause innovation diffusion, with adjustment paces of approximately 8.26%, 6.82%, and 4.73% for UMIC, LMIC, and LIC countries, respectively.
The overall results reveal that the outcomes of the long-run DOLS analysis are consistent with empirical findings in the extant literature regarding the roles of ICT development (Asongu & Le Roux, 2017; Ejemeyovwi, Osabuohien & Bowale, 2021; Iscan, 2012; Pradhan et al., 2018; Yousefi, 2011) and innovation diffusion (Ejemeyovwi, Osabuohien & Bowale, 2021; Maradana et al., 2017) in spurring economic growth. The long-run results also confirm that innovation diffusion, ICT development, and sustainable growth reinforce each other in a trivariate framework via the panel VECM.
Robustness Check Result
It has recently become standard practice in empirical studies to perform robustness checks. Such a test verifies and validates the base regression model by modifying it and observing its behavior (Leamer, 1983). It is normally done by adding, removing, or replacing variables in the base regression model (Ejemeyovwi, Osabuohien & Bowale, 2021). The fact that the coefficients do not alter substantially is considered proof that they are "robust." If the signs and magnitudes of the estimated regression coefficients are also reasonable, it is generally assumed that they can be relied upon, with all the implications for policy analysis and economic insight that this implies.
In this study, to check the robustness of the baseline model, the dependent variable was replaced with the sustainable development index (SDGI) of Hickel (2020), which denotes the efficiency of nations in achieving human development. The index is calculated from two components, a development index (based on the human development index) and an ecological impact index, and is computed using the formula:

SDGI = Development Index / Ecological Impact Index

Din et al. (2021) applied the SDGI in empirical studies in the literature. The data used for this computation can be found at https://www.sustainabledevelopmentindex.org/timeseries; see Hickel (2020) for the detailed computation of the index.
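A minimal sketch of the computation, with hypothetical index values:

```python
def sdgi(development_index: float, ecological_impact_index: float) -> float:
    """Sustainable Development Index (Hickel, 2020): the development
    index (built from the HDI) divided by the ecological impact index."""
    return development_index / ecological_impact_index

# Hypothetical component values for illustration only.
print(round(sdgi(0.72, 1.15), 3))  # -> 0.626
```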
The robustness check in Table 4 presents the empirical results for each of the three income groups and the overall SSA sample. The main difference between Tables 2 and 4 is the measure used for sustainable growth and development: Table 2 uses real per capita GDP growth for the 33 countries, whereas Table 4 uses the SDGI for the same 33 countries; the latter indicator is of particular importance in this study, given that the sustainable development index is a critical measure of countries' sustainable growth and development. As shown in Table 4, the coefficients of all models display no significant differences from the baseline results presented in Table 2. Consequently, the study concludes that the DOLS estimates are consistent across the alternative measures of sustainable growth applied in the regression models.
Conclusion and Implications for Policy
This study contributes to the debate on how SSA countries can foster sustainable growth, approaching the question through empirical research. Inspired by the significant rise in ICT development and the anticipated rise in innovation diffusion in SSA following the drastic transformation brought about by the revolution in wireless and mobile communication systems and the liberalization process, we examine the long-run and short-run relationships among innovation diffusion, ICT development, and sustainable economic growth in SSA. Annual time-series data spanning 2000 to 2020 for a sample of 33 SSA countries, selected on the basis of data availability for all the indicators, were used. We provide evidence, robust to several specifications, from the panel DOLS estimation and the panel VECM capturing the direction of causality in a trivariate framework, showing that: (1) both ICT development and innovation diffusion foster sustainable economic growth in SSA; (2) ICT development, innovation diffusion, and sustainable growth reinforce each other; and (3) compared with innovation diffusion, ICT development is more effective in driving sustainable economic growth in SSA.
Considering the progress made by most Western and East Asian countries in recent times through ICT development and innovation diffusion, our findings offer grounds for confidence in promoting collective prosperity in SSA. First, our results show that ICT can offer policymakers concerned with the growth agenda of SSA countries convincing means of addressing problems associated with ICT infrastructural development, inducing sustainable growth through enhanced ICT access, use, and skills. Our pathway results on innovation diffusion and ICT development show that creating shared prospects in SSA may not just be about increasing infrastructural investment, but about innovative ICT infrastructure geared toward sustainable growth and transformation on the continent.

[Notes to Table 4. Source: authors' computation. LM = Lagrange multiplier test for serial correlation; RESET = misspecification test; WHET = heteroscedasticity test (White); SDGI = sustainable development goal index; ID = innovation diffusion; ICT = information and communication technology. *** and ** denote significance at the 1% and 5% levels, respectively.]
Based on the above findings, it is proposed that policymakers focus their efforts on improving the continent's ICT capabilities, accessibility, and adoption. This can be achieved if entities engaged in the SSA agenda for prosperity, such as the ADB and the World Bank, provide the support needed to complement governments' efforts in advancing ICT penetration on the continent. Additionally, legislative action is needed to help grow the continent's tech hubs, to aid the marketing of high-tech products, and to help establish patents so that the continent's young and innovative population may help build the continent.
In summary, ICT sector advances are changing the global economy at an unprecedented rate. ICT advancement and innovations are having a greater impact on countries' sustainable economic growth. Development plans should incorporate initiatives to boost ICT penetration rates and to establish national innovation systems that can have a stronger multiplier effect on the national economic gain. ICT penetration and innovation diffusion can be bolstered by the introduction of effective governmental measures to assure long-term economic growth.
Limitations and Suggestions for Further Studies
The study has limitations, as does any other research. Given the sample countries covered, the study used scientific journal articles as a proxy for innovation diffusion, which may not be fully accurate as a measure of innovation; we therefore suggest the United Nations database on Science, Technology and Innovation as a primary source of information on innovation diffusion for future research. These data can be used to test whether the study's empirical model holds up when combined with additional measures of innovation, however sparse they may be. To further explore the relationship between sustainable growth and innovation, some of these data can be used as explanatory variables and incorporated into the model.
Authors Contribution
Mugabe Roger, Shu Lin Liu, and Brima Sesay substantially contributed to the conception and design of the manuscript and to the analysis and interpretation of the data. The authors contributed equally to drafting the work and revising it critically for important intellectual content. All authors are accountable for all aspects of the work, ensuring that questions related to the accuracy and integrity of any part of it are appropriately investigated and resolved. All authors approved the final version to be published.
"Economics",
"Computer Science",
"Environmental Science"
] |
A High Proportion Reuse of RAP in Plant-Mixed Cold Recycling Technology and Its Benefits Analysis
The concept of the "no-waste city" has focused increasing attention on the recycling of solid waste. One such waste is reclaimed asphalt pavement (RAP), which is generated during road maintenance, and the potential to reuse this resource has attracted extensive attention in recent years. This paper explores this concept via a case study of the reconstruction of two sections of the Beijing-Taipei Expressway (from Bengbu to Hefei, sections K69–K69 + 500 and K69 + 500–K69 + 900). The upper base layer of one section was paved with a novel mixture of emulsified asphalt mixed with a high proportion of RAP made using plant-mixed cold recycling technology (EAPM-HP RAP). For comparison, the upper base layer of the other section was paved with a conventional large-stone porous asphalt mix (LSPM). The proportions of the components of EAPM-HP RAP were optimized via laboratory-based proportioning design followed by proportioning verification. The results showed that the high-temperature stability, water damage resistance and pavement strength of the EAPM-HP RAP met the specifications of the relevant engineering standards. Next, the economic and environmental benefits of this novel approach were estimated. The approach was estimated to save CNY (China Yuan) 1.5–1.8 million in engineering costs per km of road (roadbed width = 27.5 m) and CNY 158–189 million for the whole project (105 km in length). It was also estimated to reduce energy consumption by the equivalent of 67.41 tons of standard coal per km. Further calculations showed that every km of pavement could reduce CO2 emissions by 176.6 tons, SO2 emissions by 0.6 tons, NOx emissions by 0.5 tons, ash emissions by 17.6 tons and soot emissions by 1.0 tons compared with conventional methods. For the whole road section, this is equivalent to reducing CO2 emissions by 18,543 tons, SO2 emissions by 60.2 tons, NOx emissions by 52.5 tons, ash emissions by 1848 tons, and soot emissions by nearly 105 tons. In summary, it is feasible to use EAPM-HP RAP as the upper base layer in highway renovation projects. It reduces the need to mine new ores and to allocate land to RAP storage, which is associated with soil and water pollution due to chemical leaching from aged asphalt. This approach provides great economic and environmental benefits compared with conventional pavement technology.
Introduction
Asphalt is one of the most important products of the petroleum refining industry. Road asphalt products are obtained by the distillation, solvent extraction or oxidation of the residual oil obtained after the vacuum distillation of crude oil. According to data from the National Bureau of Statistics, China's petroleum asphalt output in 2020 was more than 60 million tons; however, the domestic output does not meet the domestic demand and China imported nearly five million tons of petroleum asphalt in 2020.
Asphalt pavement is being used increasingly with the rapid development of the national economy because of its many advantages. China's total highway mileage was 5.2 million km at the end of 2021, including 161,000 km of expressways. Due to load magnitude and load repetition, together with temperature and environmental factors such as ultraviolet radiation, oxygen and moisture, the asphalt in pavement structures ages slowly and continuously. Its light components, such as saturates and aromatics, are gradually converted into heavy components, such as resins and asphaltenes, resulting in increasing hardening and brittleness and reduced bonding performance. This makes asphalt pavement prone to rutting, surface aggregate loss, potholes, cracks and other problems, which, in turn, degrade driving comfort, safety and road usability. Hence, it is necessary to repair or re-pave degraded asphalt pavement to reduce these impacts. A huge amount of reclaimed asphalt pavement (RAP) is generated during the maintenance of old asphalt pavement. It contains a large amount of aged asphalt carrying toxic and harmful substances, such as anthracene, naphthalene and pyridine, which can seriously pollute soil and water. Moreover, discarding RAP wastes valuable non-renewable resources, such as asphalt and mineral materials. Therefore, asphalt pavement recycling technology (APRT), which enables the reuse of RAP, has attracted increasing research attention [1-3].
APRT refers to the process of excavating, recycling, crushing and screening old asphalt pavement, and then mixing it in appropriate proportions with new asphalt, new aggregate (when necessary) and recycling agents (when necessary) to produce a new asphalt mixture that is re-paved and formed into new pavement layers meeting specified performance requirements. APRT not only recycles RAP resources but also reduces the consumption of non-renewable resources such as asphalt, avoids the land use and environmental pollution caused by RAP stockpiling, and can greatly reduce engineering costs [4-6]. Wang et al. used the PaLATE method (Pavement Life-cycle Assessment Tool for Environmental and Economic Effects) to estimate the energy consumption and carbon dioxide emissions of a RAP-added mixture versus new HMA (hot-mix asphalt) and evaluated the environmental benefits of using RAP. The results showed that producing a mixture containing 30% RAP required only 84% of the energy and produced 80% of the carbon dioxide emissions of a mixture using 100% primary aggregates [7].
The plant-mixed cold recycling technology using emulsified asphalt with a high proportion of RAP (EAPM-HP RAP ) is a kind of APRT in which a series of construction operations, such as mixing, paving and rolling, can be carried out under normal temperature conditions. Therefore, in addition to the advantages of APRT described above, it can also reduce energy consumption and pollutant emissions, and protect the health of construction workers [8][9][10].
This study investigated the feasibility of using EAPM-HP RAP in order to improve RAP recycling efficiency. The optimal proportions of EAPM-HP RAP were first determined in the laboratory. Then, the EAPM-HP RAP and a large-stone porous asphalt mixture (LSPM) were used as upper pavement bases to pave two test sections, on which performance tests were conducted. The results show that the high-temperature stability, water-damage resistance and pavement strength meet the required specifications when EAPM-HP RAP is used as a base. Finally, the economic and environmental benefits of the RAP-reuse technology were estimated based on the two test sections [11].
Reclaimed Asphalt Pavement
The characteristics of RAP have very important impacts on the performance of the final mixture [12,13]. In this study, RAP was collected from materials milled during an overhaul of the Beijing-Taipei Expressway between Bengbu to Hefei. The RAP was sieved according to standard JTG E42 (Test Methods of Aggregate for Highway Engineering) [14] in order to understand the particle size distribution (gradation). The results are listed in Table 1. The parameters of the RAP were tested in accordance with the requirements of Test Methods of Aggregate for Highway Engineering and the results are listed in Table 2.
Recycling Asphalt from the RAP
The recycled asphalt was extracted from the RAP using tetrachloroethylene solvent. The penetration of the recycled asphalt was 22 (0.1 mm) and its ductility was only 36.2 cm, which indicates that the asphalt had aged seriously after more than ten years of use. Despite this, it still had a significant impact on the diffusion process of the newly added asphalt binder [15].
Mineral Aggregate
In this study, limestone with a particle size of 10-20 mm from Zhangdian Hutian, Jinan, was used as fresh aggregate and limestone mineral powder produced in Pingyin, Jinan, as ore powder. The density, hydrophilicity coefficient (H-C) and methylene blue value (MBV) of the mineral powder were tested (Table 3).
Cement
Cement not only improves the early strength of EAPM but also promotes the demulsification of emulsified asphalt and enhances the post-demulsification interfacial bonding performance between asphalt and fresh aggregate, thereby improving the high-temperature stability of EAPM [16]. Ordinary Portland cement (PO. 32.5; Shanshui Brand, Sunnsy Group, Jinan, China) was used in this study. Its properties are shown in Table 4.
Emulsified Asphalt
The emulsified asphalt (EA) used for the APRT was produced by a colloid mill in the laboratory. Its properties are shown in Table 5.
Proportioning Design
To determine the optimal material ratio for achieving the best pavement performance, a proportioning design was carried out in the laboratory [17,18].
Experimental Gradation
Based on the sieving results of RAP and mineral aggregates, and referring to the requirements of JTG F41 (Technical Specification for Highway Asphalt Pavement Recycling) [19] for EAPM with a medium granular-gradation range, the mixture ratio was determined to be RAP: 10-20 mm limestone: mineral powder: cement = 80:16:2:2. The experimental gradation data and corresponding curve are shown in Table 6 and Figure 1.
Mixing Proportion
Referring to the conventional dosages used in actual pavement works, the amount of EA selected was 4.0%, while external water dosages of 1.5%, 2.0%, 2.5%, 3.0% and 3.5% were trialled. Specimen mixtures were prepared using the experimental gradation determined above. Then, compaction tests were performed on samples and their maximum dry density was tested in accordance with JTG E40 (Test Methods of Soils for Highway Engineering) [20]. Figure 2 shows the relationship between the maximum dry density and the amount of external water. The maximum dry density first increased and then decreased with increases in the amount of external water, reaching a peak of 2.172 g/cm³ at an external water content of 2.5%, an increase of about 2.2% compared with the lowest value. Therefore, the optimal external water content was determined to be 2.5% and the optimum total liquid content was 6.5%. At a total liquid content of 6.5% and a cement content of 2.0%, specimen mixtures were prepared at EA contents of 3.0%, 3.5%, 4.0%, 4.5% and 5.0%. Then, performance tests were carried out, including bulk specific density, Marshall stability and residual Marshall stability (Table 7). Figure 3 is based on the data in Table 7.
Both the Marshall stability and residual Marshall stability peaked at an EA content of 4.0%. Hence, according to the analysis, the optimal EA content was 4.0% and the optimal external water dosage was 2.5%.
High-Temperature Stability
An asphalt mixture is a typical viscoelastic material that is prone to flow deformation under high-temperature conditions [21]. Therefore, the repeated loading of road vehicles, especially heavy-duty overloaded vehicles, can lead to irreversible deformation of the road surface, typically rutting damage [22]. In this paper, the dynamic stability at 60 °C was used as an index to evaluate the high-temperature stability of EAPM. Dynamic stability refers to the number of standard axle loads a mixture sustains for each 1 mm of deformation under high-temperature conditions (generally 60 °C). Rutting specimens (dimensions = 300 × 300 × 80 mm) were prepared in the laboratory by the wheel rolling method according to the experimental gradation and mixing proportion determined above. Rutting tests were carried out in accordance with the requirements of JTG E20 (Standard Test Methods of Bitumen and Bituminous Mixture for Highway Engineering) [23]. The test temperature was 60 °C and the wheel pressure was 0.7 MPa.
Studies have shown that rutting generally occurs on days when the average maximum temperature on the road surface is above 28 °C for seven consecutive days. The test results show that the dynamic stability of the emulsified asphalt mixture made with 80% RAP by the cold regeneration technique was 2320 passes/mm, which meets the requirements of JTG F41 (Technical Specification for Highway Asphalt Pavement Recycling) and shows that the anti-rutting performance is fully qualified.
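As a quick check on the reported value, the sketch below computes dynamic stability under the standard JTG E20 rutting-test relation DS = (t2 − t1) × N / (d2 − d1), with a wheel pass rate of 42 passes/min between the 45 min and 60 min readings; the rut depths used here are hypothetical values chosen to reproduce the reported figure, not measurements from the study.

```python
def dynamic_stability(d45_mm: float, d60_mm: float,
                      passes_per_min: float = 42.0,
                      t1_min: float = 45.0, t2_min: float = 60.0) -> float:
    """Dynamic stability (passes/mm) from rut depths at t1 and t2.

    Assumes the standard JTG E20 rutting-test relation
    DS = (t2 - t1) * N / (d2 - d1).
    """
    return (t2_min - t1_min) * passes_per_min / (d60_mm - d45_mm)

# Hypothetical rut depths: ~0.27 mm of deformation between 45 and 60 min
# reproduces the dynamic stability reported in the text.
print(round(dynamic_stability(d45_mm=2.0000, d60_mm=2.2716)))  # ~2320
```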
Water Damage Resistance
Freeze-thaw splitting tests were conducted to measure the effect of freeze-thaw cycling on the asphalt mixtures under specified conditions. The splitting strength ratios of the specimens before and after water damage were determined to evaluate the water damage resistance of the asphalt mixtures. The specimens were divided into two groups. The first set of samples was used to measure the splitting tensile strength R_T1 without freeze-thaw cycling, and the second set was used to determine the splitting strength R_T2 after freeze-thaw cycling. The freeze-thaw cycling was carried out in accordance with the requirements of standard JTG E20 (Standard Test Methods of Bitumen and Bituminous Mixture for Highway Engineering). The tensile strength ratio (TSR) is calculated by Equation (1):

TSR = R_T2 / R_T1 × 100% (1)

where R_T1 and R_T2 are the splitting strengths (MPa) of specimens without and with freeze-thaw treatment, respectively.
The test results show that the splitting strength of the specimen without freeze-thaw was 1.17 MPa, while that after freeze-thaw was 0.99 MPa. The TSR was 84.6%, which meets the requirements of standard JTG F41 (Technical Specification for Highway Asphalt Pavement Recycling).
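A minimal sketch of the Equation (1) arithmetic, using the splitting strengths reported above:

```python
def tensile_strength_ratio(r_t1_mpa: float, r_t2_mpa: float) -> float:
    """TSR (%) = R_T2 / R_T1 * 100, per Equation (1)."""
    return r_t2_mpa / r_t1_mpa * 100.0

# Splitting strengths from the text: 1.17 MPa before and 0.99 MPa after
# freeze-thaw cycling.
print(round(tensile_strength_ratio(1.17, 0.99), 1))  # 84.6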
Proportioning Verification
On the basis of the proportioning design, two test sections (from K69 to K69 + 500 and from K69 + 500 to K69 + 900) were paved as part of a reconstruction project on the Bengbu to Hefei section of the Beijing-Taipei Expressway. Firstly, the 16 cm asphalt surface layer and the 18 cm upper base layer of the old road were milled off. Then, the cold regeneration technique was used to produce an emulsified asphalt mixture with a high proportion of RAP (EAPM-HPRAP) for one section and LSPM for the other. The mixtures were used as the upper base layers of the two test sections, upon which the original asphalt surface structure was re-paved. Figure 4 shows the pavement structures of the two test sections.
Compressive Strength
Compressive strength tests were carried out in accordance with JTG E20 (Standard Test Methods of Bitumen and Bituminous Mixture for Highway Engineering). The test temperature was 20 °C and the loading rate was 2 mm/min. The compressive strength is calculated by Equation (2):

R_c = 4P / (πd²) (2)

where R_c is the compressive strength of the specimen (MPa), P is the load (N) at which the specimen fails, and d is the specimen diameter (mm). The results are shown in Table 8 and show that the compressive strength meets the specifications; hence, it is feasible to use emulsified asphalt cold recycling technology with 80% RAP as the upper base layer in the expressway renovation project.
Deflection Value
The deflection values of the two sections were tested after the construction of the upper base layers was completed (Figure 5). The deflection values of the sections with EAPM-HPRAP and LSPM base layers were low and similar, which indicates that the two layers provided comparable rigidity. Hence, it is feasible to use emulsified asphalt cold recycling technology with 80% RAP as the upper base layer in the expressway renovation project.
The results show that when the EAPM-HPRAP was used as the upper base layer in the expressway renovation project, its high-temperature stability, water damage resistance and pavement strength met the requirements of the relevant specifications.
Benefits Analysis
Emulsified asphalt cold regeneration technology allows construction operations such as the mixing, paving and rolling of mixtures to be carried out at ambient temperature, without the need to heat the aggregates and asphalt as required by traditional pavement construction technology. This not only reduces the amount of construction equipment needed but also greatly reduces the costs of manpower and material resources. More importantly, it reduces energy consumption and the emission of pollutants such as soot, SO2 and CO2 [11,24,25].
Resource Savings
The reconstruction project was used as a case study to estimate the economic benefits of the approach. The project had a total length of 105 km and a roadbed width of 27.5 m. According to the experimental gradation and mixing proportion, the EAPM-HPRAP upper base layer was 18 cm thick and 80% RAP was used in the gradation during construction; the compacted mixture had a density of 2.2 g/cm³ (2.2 t/m³).
From these data, M_Z (the total mass of mixture required per 1 km of pavement base) is 27.5 × 1.0 × 0.18 × 2.2 × 1000 = 10,890 t, while M_R (the total mass of RAP reused) is 10,890 × 80% = 8712 t. Hence, using an EAPM-HPRAP upper base layer saves 8712 tons of mineral aggregate per km of road. At an aggregate price of CNY 120-150/ton, this corresponds to a saving of CNY 1.05-1.31 million of aggregate per km, so the whole road section (105 km) would save CNY 110-138 million.
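The per-km savings arithmetic above can be reproduced with a short script; all inputs are the figures quoted in the text.

```python
# Material-savings arithmetic per km of EAPM-HPRAP upper base layer.
width_m = 27.5          # roadbed width
thickness_m = 0.18      # upper base layer thickness
density_t_m3 = 2.2      # compacted mixture density (t/m^3)
rap_fraction = 0.80     # RAP proportion in the gradation
price_cny_per_t = (120, 150)  # rough market price range for aggregate

mix_mass_t = width_m * 1000 * thickness_m * density_t_m3  # M_Z = 10,890 t/km
rap_mass_t = mix_mass_t * rap_fraction                    # M_R = 8,712 t/km
savings_m_cny = [rap_mass_t * p / 1e6 for p in price_cny_per_t]

print(mix_mass_t, rap_mass_t)   # 10890.0 8712.0
print(savings_m_cny)            # ~[1.05, 1.31] million CNY per km
```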
Moreover, the reuse of RAP reduces the need to mine new ore, avoids the occupation of land by RAP accumulation, and removes the pollution risks to soil and water caused by toxic and harmful substances, such as anthracene, naphthalene, and pyridine, leaching from aged asphalt [26,27].
Energy Savings
As RAP does not need to be heated to mix EAPM-HPRAP, an equivalent LSPM mixture consumes more energy for aggregate and asphalt heating and for evaporating the water in the aggregates. The reconstruction project used mobile plant-mixed cold regeneration equipment (German Wirtgen KMA220). The mechanical energy consumption during the mixing process was not considered; only the energy consumed in heating the mixture was calculated and analysed, because both construction schemes require mixing of the mixture. Therefore, the energy consumption per ton of LSPM is the energy that can be saved per ton of EAPM-HPRAP. The densities of the EAPM-HPRAP and LSPM were both taken as 2.2 g/cm³ (2.2 t/m³).
According to the test section construction data, the oil-to-stone ratio of the LSPM was 4.3% and the moisture content of the aggregate was 0.5%. The aggregate was heated from 25 °C to 160 °C and the asphalt from 25 °C to 175 °C. An asphalt mixing plant (Marini 4000 MAC320) was used during construction, and its fuel utilization efficiency and heat exchange efficiency were taken as 95% and 80%, respectively.
Combining the above construction parameters and taking the production of 1 ton of LSPM as the benchmark, the energy consumption was estimated (Table 9). According to the calculations in the previous section, the total mass of mixture required per 1 km of pavement base is 10,890 tons, so the heating energy consumption of LSPM per km is 67.41 tons of standard coal. Based on the market prices of other energy sources and their equivalence to standard coal, the energy and cost savings per km of EAPM-HPRAP were obtained (Table 10). (Note: the electricity price is the average of the peak and segment prices for single-system industrial and commercial electricity (≤1 kV).)
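A hedged re-derivation of the per-ton heating energy is sketched below. The oil-to-stone ratio, moisture content, heating temperatures, and plant efficiencies are the construction parameters quoted above; the specific heats, the latent heat of vaporization, and the 29.3 MJ/kg energy content of standard coal are textbook assumptions rather than values from the study, so the result is only an order-of-magnitude check against Table 9.

```python
# Assumed material constants (not from the study):
CP_AGG, CP_ASPHALT, CP_WATER = 0.92, 1.7, 4.19  # kJ/(kg*K)
L_VAP = 2257.0                                  # kJ/kg, water vaporization
COAL_MJ_PER_KG = 29.3                           # standard coal equivalent

# Construction parameters from the text:
oil_stone_ratio, moisture = 0.043, 0.005
fuel_eff, heat_eff = 0.95, 0.80

agg_kg = 1000.0 / (1 + oil_stone_ratio)   # aggregate per ton of LSPM
asp_kg = 1000.0 - agg_kg                  # asphalt per ton of LSPM
wat_kg = agg_kg * moisture                # moisture to be evaporated

q_kj = (agg_kg * CP_AGG * (160 - 25)                  # heat aggregate
        + asp_kg * CP_ASPHALT * (175 - 25)            # heat asphalt
        + wat_kg * (CP_WATER * (100 - 25) + L_VAP))   # heat + vaporize water
q_kj /= (fuel_eff * heat_eff)                         # plant losses

coal_kg_per_t = q_kj / 1000.0 / COAL_MJ_PER_KG
print(coal_kg_per_t)                 # ~6.4 kg of coal per ton of LSPM
print(coal_kg_per_t * 10890 / 1000)  # ~69 t of coal per km, near the 67.41 t
```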
From the perspective of reducing energy consumption alone, and ignoring the additional resource savings, some CNY 414,000-470,000 of energy cost can be saved per km of EAPM-HPRAP pavement. The whole reconstruction project (105 km) would save nearly CNY 50 million in energy costs. A comprehensive calculation of the saved ore and energy costs shows that the use of EAPM-HPRAP as the upper base layer saves CNY 1.5-1.8 million per km and CNY 158-189 million for the whole project (105 km). Hence, the use of EAPM-HPRAP can generate considerable economic benefits while also reusing RAP resources.
Emissions Reduction
With the intensification of the global greenhouse effect, carbon emissions have attracted increasing attention from the international community and domestic experts, scholars and governments. On 31 December 2020, the Ministry of Ecology and Environment announced the "Measures for the Administration of Carbon Emissions Trading (Trial)", which came into force on 1 February 2021.
It is reported that the combustion of 1 ton of standard coal emits 260 kg of ash, 15 kg of soot, 2.62 tons of CO2, 8.5 kg of SO2 and 7.4 kg of nitrogen oxides into the atmosphere. The pollutant emissions reduction achieved by using EAPM-HPRAP as an upper base layer instead of conventional asphalt was calculated based on the data from the previous section; the results are shown in Table 11. The national carbon emission rights trading market was launched on 16 July 2021, with the power generation industry the first to be included, and the closing price of carbon emission allowances (CEA) was CNY 60 per ton on 22 April 2022. It is foreseeable that, with the promotion, implementation and improvement of the carbon emissions trading system across the country, industries such as petrochemicals and transportation will also be included in the trading market, and the emissions-reduction advantages of EAPM-HPRAP will then generate considerable economic benefits. The application and promotion of EAPM-HPRAP will thus play a very positive role in achieving carbon reduction goals under the new development philosophy.
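The per-km emissions reduction in Table 11 follows directly from the coal saving and the per-ton emission factors quoted above; a minimal sketch:

```python
# Per-km pollutant reductions implied by the coal saved per km of
# EAPM-HPRAP; emission factors per ton of standard coal are from the text.
coal_saved_t_per_km = 67.41
factors = {"ash_kg": 260, "soot_kg": 15, "CO2_t": 2.62,
           "SO2_kg": 8.5, "NOx_kg": 7.4}

per_km = {k: round(v * coal_saved_t_per_km, 1) for k, v in factors.items()}
print(per_km)  # e.g. ~176.6 t of CO2 avoided per km
# At the quoted CEA price of CNY 60/t, the avoided CO2 alone would be
# worth roughly CNY 10,600 per km under a carbon trading scheme.
```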
Conclusions
1. Using the cold regeneration technique with emulsified asphalt mixed with a high proportion of RAP is a feasible way to pave the upper base layer in expressway renovation projects. With an experimentally optimized gradation and mixing proportion, its high-temperature stability, water damage resistance and pavement strength meet the requirements of the relevant specifications.
2. The EAPM-HPRAP mixture reduces the need to mine new ore resources and to allocate land for RAP storage. It also mitigates the soil and water pollution risks posed by the aged asphalt contained in RAP.
3. Compared with conventional pavement technology, EAPM-HPRAP generates considerable economic benefits while reusing RAP resources. This study indicates that engineering costs of CNY 1.5-1.8 million per km, corresponding to CNY 158-189 million for the whole project, can be saved.
"Engineering",
"Environmental Science"
] |
3D convolutional neural networks predict cellular metabolic pathway use from fluorescence lifetime decay data
Fluorescence lifetime imaging of the co-enzyme reduced nicotinamide adenine dinucleotide (NADH) offers a label-free approach for detecting cellular metabolic perturbations. However, the relationships between variations in NADH lifetime and metabolic pathway changes are complex, preventing robust interpretation of NADH lifetime data relative to metabolic phenotypes. Here, a three-dimensional convolutional neural network (3D CNN), trained at the cell level with 3D NAD(P)H lifetime decay images (two spatial dimensions and one time dimension), was developed to identify metabolic pathway usage by cancer cells. NADH fluorescence lifetime images of MCF7 breast cancer cells with three isolated metabolic pathways (glycolysis, oxidative phosphorylation, and glutaminolysis) were obtained with a multiphoton fluorescence lifetime microscope and then segmented into individual cells as the input data for the classification models. The 3D CNN models achieved over 90% accuracy in identifying cancer cells reliant on glycolysis, oxidative phosphorylation, or glutaminolysis. Furthermore, the model trained with human breast cancer cell data successfully predicted the differences in metabolic phenotypes of macrophages from control and POLG-mutated mice. These results suggest that the integration of autofluorescence lifetime imaging with 3D CNNs enables intracellular spatial patterns of NADH intensity and temporal dynamics of the lifetime decay to discriminate multiple metabolic phenotypes. Furthermore, the use of 3D CNNs to identify metabolic phenotypes from NADH fluorescence lifetime decay images eliminates the need for time- and expertise-demanding exponential decay fitting procedures. In summary, metabolic-prediction CNNs will enable live-cell and in vivo metabolic measurements with single-cell resolution, filling a current gap in metabolic measurement technologies.
I. INTRODUCTION
Cellular metabolism underlies cell function and behavior and, thus, is integral to normal and disease pathologies. Cancer cells often depend on glycolysis to produce energy even in the presence of oxygen, a phenomenon referred to as the Warburg effect [1]. Furthermore, the glutaminolysis pathway can be enhanced in cancer cells to create biosynthetic precursors and compensate for reduced oxidative phosphorylation (OXPHOS) when the electron transport chain is impaired [2]. The dependence of cancer cells on specific metabolic pathways enables cancer therapy by metabolism-targeting drugs [3,4]. Similarly, many immune cell functions are dependent on specific metabolic pathways. For example, pro-inflammatory macrophages are dependent on glycolysis, while anti-inflammatory macrophages undergo a metabolic shift toward oxidative phosphorylation [5]. Additionally, T cells and B cells also exhibit metabolic reprogramming to be more glycolytic in activated states [6,7]. The metabolic dependences of immune cells suggest that metabolism-modulating drugs may be effective strategies for immune therapy [8,9]. Therefore, studies of cellular metabolism and metabolic perturbation are important for advancing fundamental and translational knowledge in many fields, including cancer biology, immunology, and therapeutics.
Tumor cell metabolic heterogeneity, in particular, drives different clinical responses such as therapy resistance and recurrence, hindering metabolism-based anti-cancer treatment [13]. However, live-cell measurements of metabolism with single-cell resolution are challenging. Currently, the widely used Seahorse technology enables the detection of metabolic variations in cell populations by measuring the oxygen consumption rate (OCR) and the extracellular acidification rate (ECAR) [14]. Similarly, biochemical analyses of metabolic enzymes using Western blot analysis, mRNA analysis, and microplate-reader absorption or fluorescence assays typically require cell dissolution or fixation and cannot resolve metabolic information with single-cell resolution [15,16]. Techniques to evaluate the metabolism of a single cell include flow cytometry, single-cell RNA sequencing, and immunofluorescence or immunohistochemistry detection of metabolic enzymes, each of which requires the destruction of cells [17]. Therefore, noninvasive measurement of single-cell metabolism in live samples is a desirable capability, potentially beneficial for a wide range of scientific research and clinical applications.
Optical metabolic imaging provides a label-free modality to detect metabolic activities at a cellular level. This technique captures the fluorescence intensity and lifetime of autofluorescent metabolic coenzymes, including reduced nicotinamide adenine dinucleotide (NADH). NADH is an electron acceptor in glycolysis and an electron donor in oxidative phosphorylation [18]. Additionally, NAD+ is converted to NADH through reduction in glutaminolysis [4,18]. Furthermore, NADH is used in fatty acid synthesis [18]. NADH and its phosphate form, NADPH, have the same fluorescence excitation and emission properties, so NAD(P)H is used to represent the measured fluorescence signal of both molecules [19]. Fluorescence lifetime imaging measures the time a fluorophore remains in the excited state before returning to the ground state by emitting a photon [20]. The fluorescence lifetime of NADH is sensitive to the surrounding microenvironment and is altered by conformational changes of NADH in free and enzyme-bound states. Free NAD(P)H has a short lifetime of around 300-500 ps, while protein-bound NAD(P)H has a longer lifetime of around 1.5-2 ns [21]. Thus, fluorescence lifetime imaging (FLIM) can quantify changes in the free-to-protein-bound ratios of metabolic enzymes, and NAD(P)H FLIM metrics are often altered with metabolic perturbations in cells and tissues [18,22]. Furthermore, fluorescence lifetime images can be segmented into individual cells, allowing for metabolic measurements at a cellular level [23,24].

Fluorescence lifetime imaging in the time domain measures the fluorescence intensity decay as a function of time following an excitation pulse from a laser. A common FLIM technique uses time-correlated single-photon counting (TCSPC), which repeatedly records the arrival time of the emitted photons after excitation and sorts them into a histogram [25,26]. The raw fluorescence decay data represents a temporal point spread function (TPSF) at each pixel. Traditional analysis of FLIM decay data requires deconvolution of the TPSF from a measured instrument response function (IRF) and fitting the decay to an exponential model. Due to the difference in lifetimes of free and protein-bound NAD(P)H, the fluorescence lifetime decay of NAD(P)H is often fit to a two-exponential decay model, and a mean lifetime can be calculated as the weighted average of the short and long lifetimes [27,28]. However, deconvolution and exponential fitting analysis of the decay curve require assumptions about the data and measured signal, such as the number of lifetime components and the shift in the instrument response function, that necessitate domain expertise. Moreover, deconvolution and decay curve fitting are time-consuming due to the iterative nature of deconvolution and maximum-likelihood-estimated exponential fitting [31]. Once analyzed, the interpretation of NAD(P)H fluorescence lifetime data relative to metabolic phenotypes is difficult, as a robust relationship between autofluorescence metrics and specific metabolic pathways has yet to be established. Prior studies have used conventional machine-learning algorithms to identify metabolic phenotypes of T cells and stem cells from autofluorescence lifetime features of each cell by averaging the pixel values across cellular regions [23,32]. However, this process removes intracellular spatial patterns, which contain metabolic information since metabolic processes are distributed across mitochondria networks and the cytosol [33].
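For reference, the two-exponential model described above is conventionally written as follows; this is the standard FLIM formulation in the usual notation, not an equation reproduced from this paper:

```latex
I(t) = \mathrm{IRF}(t) \ast \left( \alpha_1 e^{-t/\tau_1}
       + \alpha_2 e^{-t/\tau_2} \right), \qquad \alpha_1 + \alpha_2 = 1,
\qquad \tau_m = \alpha_1 \tau_1 + \alpha_2 \tau_2,
```

where $\tau_1$ and $\tau_2$ are the short (free) and long (protein-bound) lifetimes, $\alpha_1$ and $\alpha_2$ their fractional contributions, and $\ast$ denotes convolution with the instrument response function.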
To retain spatial fluorescence patterns, image-based convolutional neural networks (CNNs) have been used to predict cell phenotypes from autofluorescence lifetime or intensity images extracted from traditional decay fitting of the TPSF [34,35]. Despite prior advances in using machine learning to aid the interpretation of NAD(P)H fluorescence lifetime data, a 3-class prediction of metabolic phenotypes remains unexplored. Furthermore, prior models for phenotype identification use lifetime features or images extracted from traditional decay fitting and thus lose subtle spatial and temporal information that may facilitate phenotype identification. Herein, we hypothesize that a 3D (two spatial dimensions and one time dimension) CNN will identify three metabolic phenotypes of cancer cells: glycolysis, OXPHOS, and glutaminolysis. The 3D CNN retains spatial and temporal information to increase the specificity and accuracy of metabolic pathway identification from NAD(P)H lifetime decay data. A dataset of NAD(P)H fluorescence lifetime images of MCF7 breast cancer cells with enhanced and inhibited glycolysis, OXPHOS, and glutaminolysis pathways was used to train and test the 3D CNN models. The 3D CNN models trained with NAD(P)H TPSF images discriminated three different metabolic pathways at the cellular level with more than 90% accuracy. Moreover, the 3D CNN models trained on the cancer cells were tested to predict the metabolic phenotypes of two lines of control and mitochondria-deficient macrophages. To our knowledge, this is the first study to successfully differentiate three major metabolic pathways, glycolysis, OXPHOS, and glutaminolysis, at a cellular level using NAD(P)H autofluorescence decay data. This novel approach of NADH FLIM combined with 3D CNNs for metabolic pathway identification will enable live-cell and in vivo studies of metabolic heterogeneity in cancer and immunology, metabolism-targeted therapies, and genetic mitochondrial diseases.
II. RESULTS

A. Temporal characteristics of NAD(P)H fluorescence of cells with fixed metabolic phenotypes
The differences in the TPSF data across the metabolic groups can be visualized and potentially detected by machine learning models for metabolism differentiation. Representative intensity-scaled, mean fluorescence lifetime images allow visualization of different NAD(P)H fluorescence lifetimes due to metabolic perturbations at the image level [Fig. 1(a)]. The processed TPSF image size for each cell was 21 × 21 × 256 pixels (X × Y × T), and the cytoplasm contains more NAD(P)H molecules than the nucleus, resulting in brighter NAD(P)H intensity [Figs. 1(a) and 1(b)]. Down-sampling of the TPSF images reduced the length of the NAD(P)H decay curve and shifted the peak position from mostly the 64th or 65th time frame (3.17 ns) [Fig. S2(b)] to the 30th or 31st time frame [Figs. S2(c) and S2(d)].

Averaged TPSF and data-dimension reduction techniques were used to visualize differences in the NAD(P)H TPSF of MCF7 cells dependent on glycolytic, OXPHOS, and glutaminolysis metabolism. The average decay curves of MCF7 cells dependent on specific metabolic pathways show that the cells using glycolysis have an increased fraction of NAD(P)H with a shorter lifetime, 0.79 for glycolysis vs 0.68 for OXPHOS and glutaminolysis [Fig. 1(c)]. Furthermore, the down-sampling procedure maintained the differences in the decay curves between the metabolic groups [Figs. S2(e) and S2(f)]. To visualize the importance of the temporal information in discriminating metabolic phenotypes, the average NAD(P)H intensity within each cell at each time point was calculated, resulting in 256 temporal features for each cancer cell. The t-distributed stochastic neighbor embedding (t-SNE) algorithm was applied to project these high-dimensional (256) temporal features of each cell into a two-dimensional (2D) space to visualize the time-dimension variance across the metabolic phenotypes. The t-SNE map shows overlap among cells using OXPHOS and glutaminolysis, and slight separation of cells using glycolysis [Fig. 1(d)].
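A minimal sketch of this temporal-feature t-SNE step, assuming the cropped TPSF array layout described in the Methods; the file name and array shapes are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical input: (n_cells, 21, 21, 256) cropped TPSF images.
cells = np.load("tpsf_cells.npy")

# Average NAD(P)H intensity over the cell pixels at each time point,
# giving 256 temporal features per cell (background pixels are zero).
mask = cells.sum(axis=3) > 0
temporal = np.stack([img[m].mean(axis=0) for img, m in zip(cells, mask)])

# Project the 256-dimensional temporal features into 2D.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(temporal)
print(embedding.shape)  # (n_cells, 2), ready to plot by metabolic group
```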
B. 3D CNN classifies cells using glycolysis or OXPHOS
A 3D fluorescence lifetime imaging LeNet (FLI-LeNet) model was created to predict cancer cells as either glycolytic or oxidative [Fig. 2(a)]. The structure of the 3D FLI-LeNet model was derived from the traditional 2D LeNet model and consisted of two convolutional layers and two feature mapping layers [36], and the extracted feature maps allow visualization of the features used for classification. The feature map for a representative cell from the 3D FLI-LeNet model shows that both time-domain features [Figs. S3(a) and S3(b)] and morphological features, including the cell edges and cytoplasm [Fig. S3(c)], provide information from the NAD(P)H TPSF images to discriminate glycolytic from OXPHOS-dependent MCF7 cells. The 3D FLI-LeNet model was trained to differentiate glycolytic cancer cells from oxidative cancer cells using the NAD(P)H TPSF images (21 × 21 × 256) with a 0.001 learning rate. After approximately 30 epochs, the model attained a validation loss below 0.1 [Fig. 2(b)] and a validation accuracy of around 90% [Fig. 2(c)]. Moreover, the models trained with MD and MEDD showed a similar training process [Figs. 2(b) and 2(c)]. To further visualize the training performance, 64 learned features extracted from the FLI-LeNet model were projected onto a 2D space using t-SNE. The FLI-LeNet identified features that separated the cells using glycolysis from those using OXPHOS, as evidenced by their well-separated clusters in the t-SNE plot [Fig. 2(d) and Figs. S4(b) and S4(c)]. As a result, the FLI-LeNet model trained with the original TPSF images achieved an average AUC ROC of 0.978, an average accuracy of 92.0%, an average recall of 88.3%, and an average precision of 97.4% on the fivefold test datasets in classifying MCF7 cells using glycolysis vs those using OXPHOS [Fig. 2(e), Table I, Table S3].
The FLI-LeNet model trained with the temporally down-sampled TPSF images maintained the ability to distinguish metabolic status, with comparable prediction performance and a training speed twice as fast as the FLI-LeNet model trained with the original TPSF images. The FLI-LeNet model trained with MEDD attained an AUC of 0.978, an accuracy of 92.7%, a recall of 85.7%, and a precision of 97.8% in predicting glycolytic and oxidative MCF7 cells [Fig. 2(e), Table I, Table S4]. Similarly, the FLI-LeNet model trained with MD achieved an accuracy of 91.8%, an AUC of 0.980, a precision of 97.6%, and a recall of 83.0% [Fig. 2(e), Table I, Table S5].
C. 3D CNN models allow differentiation of glycolysis, OXPHOS, and glutaminolysis metabolic pathways in cancer cells with NAD(P)H TPSF images
To further explore the ability of 3D CNN models to detect metabolic activities in cells from NAD(P)H fluorescence lifetime images, we expanded our dataset to include three groups of MCF7 cells, each dependent on a single metabolic pathway: glycolysis, OXPHOS, or glutaminolysis. A new 3D CNN model, called FLI-ResNet, was developed for the prediction of glycolysis, OXPHOS, and glutaminolysis from NAD(P)H TPSF images and compared with the three-group prediction performance of FLI-LeNet models [Fig. 3(a)]. The feature map of a representative cell showed that the FLI-ResNet model extracts both temporal and spatial patterns in the TPSF images (Fig. S5).
Both the FLI-LeNet and FLI-ResNet models were trained on the original TPSF images of cells using a learning rate of 0.001. After approximately 30 epochs, the FLI-ResNet models achieved a validation loss below 0.1, with a validation accuracy exceeding 85% [Figs. 3(b) and 3(c)]. In comparison, the FLI-LeNet models attained a validation loss below 0.1 and a validation accuracy above 80% after 30 epochs [Figs. S6(c) and S6(d)]. The FLI-ResNet exhibited more stable performance, with fewer fluctuations in validation accuracy and loss during training, when trained with the original TPSF images than when trained with MD and MEDD [Figs. 3(b) and 3(c)]. To further assess the ability of the models to identify cell dependencies on glycolysis, OXPHOS, and glutaminolysis, the t-SNE dimensionality reduction algorithm was applied to the last activation layer, enabling the visualization of cell clustering in a 2D space based on the features extracted by the models. The t-SNE maps of the last activation layer of the FLI-ResNet and FLI-LeNet models show the separation of MCF7 cancer cells dependent on glycolysis, OXPHOS, and glutaminolysis, and subgroups of cells within the glycolysis- and OXPHOS-dependent populations [Fig. 3(d), Figs. S6(e) and S7].
Both the FLI-ResNet and FLI-LeNet models discriminated MCF7 cells with differing metabolic activities with an accuracy above 85%. The FLI-ResNet model trained on the original TPSF images showed the best performance in differentiating MCF7 cells using glycolysis, OXPHOS, or glutaminolysis, with an average accuracy of 92.6%, precision of 92.6%, recall of 93.1%, and an F1-score of 92.7% over the fivefold cross-validation (Table II, Table S6). In contrast, the FLI-LeNet model trained on the original TPSF images achieved an average accuracy, precision, recall, and F1-score of 85.0%, 86.7%, 85.4%, and 85.3%, respectively (Table II, Table S9). When trained on down-sampled datasets, the FLI-ResNet model achieved recall, precision, and F1-scores of 87%-89% for distinguishing the metabolic activities of MCF7 cells from NAD(P)H TPSF images (Table II, Tables S7 and S8). The FLI-LeNet models trained on the MD and MEDD down-sampled datasets maintained similar performance, achieving accuracy rates of ~88% and precision, recall, and F1-scores of 88%-89% (Table II, Tables S10 and S11).
D. The metabolic-prediction models transfer to a FLIM dataset of murine macrophages with genetically modulated mitochondria function
The applicability of the 3D CNN models was evaluated using wild-type (WT) and POLG-mutated murine BMDMs, which carry mitochondrial DNA mutations that result in mitochondria and OXPHOS dysfunction [37]. The sequential addition of glucose followed by the electron transport chain inhibitor oligomycin stimulated the maximal glycolytic rate, resulting in an increase in ECAR in both WT and POLG BMDMs [Fig. 4(a)]. The addition of 2-DG to inhibit glycolysis decreased the ECAR of both WT and POLG BMDMs [Fig. 4(a)]. In subsequent OCR measurements, the successive addition of FCCP plus pyruvate and rotenone plus antimycin to stimulate and inhibit OXPHOS resulted in an increase and decrease, respectively, of the OCR of WT BMDMs [Fig. 4(b)]. However, the changes in OCR of POLG BMDMs exposed to FCCP plus pyruvate and rotenone plus antimycin were smaller in magnitude than in the WT BMDMs, indicating impairments in the OXPHOS capacity of POLG BMDMs [Fig. 4(b)]. In the presence of glucose, oligomycin caused a decrease in OCR in WT BMDMs [Fig. 4(b)]; this decline was less pronounced in POLG BMDMs. Moreover, the POLG BMDMs exhibited elevated basal levels of ECAR and reduced levels of OCR compared to the WT BMDMs, suggesting that the POLG BMDMs are more glycolytic than the WT BMDMs, as we previously reported [Figs. 4(a) and 4(b)].

NAD(P)H FLIM images of control and cyanide-treated WT and POLG BMDMs were input into the glycolysis vs OXPHOS FLI-LeNet models previously trained with MCF7 cancer cell images. For the control WT BMDMs, the FLI-LeNet model (original data, 256 images in the time dimension) predicted 56% as oxidative and 44% as glycolytic [Fig. 4(d)]. After treatment with cyanide to inhibit OXPHOS and stimulate glycolysis, 82% of the WT macrophages were predicted to be glycolytic and 18% oxidative [Fig. 4(d)]. In contrast, 79% of control POLG BMDMs at rest were predicted to be glycolytic and 21% oxidative [Fig. 4(d)]. After treatment with cyanide, 95% of the POLG macrophages were identified as glycolytic and 5% as oxidative [Fig. 4(d)]. The FLI-LeNet models generated from MEDD and MD data produced similar predictions for the WT macrophages, with an increase in the glycolytic fraction observed with cyanide treatment (Fig. S9). The MD FLI-LeNet model identified most POLG BMDMs as glycolytic, and cyanide treatment of POLG BMDMs resulted in a smaller proportion of glycolytic macrophages (Fig. S9). However, the FLI-LeNet model trained with MEDD data predicted a majority of POLG BMDMs to be oxidative for both the control and cyanide-treated groups (Fig. S9).
III. DISCUSSION
NAD(P)H autofluorescence lifetime imaging has been widely used to detect metabolic changes in cells and tissues [40-44]. However, the interpretation of variations in the autofluorescence lifetime metrics is challenging and requires domain expertise, since NAD(P)H lifetime measurements are multivariate and many distinct metabolic pathways contribute to NAD(P)H signals.
Recent studies have employed machine-learning algorithms to identify cellular phenotypes from autofluorescence intensity and lifetime features [23,32,33,45]; however, the separation of three metabolic states, glycolysis, OXPHOS, and glutaminolysis, from autofluorescence lifetime data is difficult due to the intricate interconnections of metabolic pathways and the limited information contained within lifetime features extracted by exponential fitting. This study explores deep learning models for a three-way prediction of cancer cell metabolism as dependent on glycolysis, OXPHOS, or glutaminolysis from NAD(P)H fluorescence lifetime decay data.
The novel 3D models presented here, FLI-LeNet and FLI-ResNet, simplify fluorescence lifetime image analysis and have the potential to be used for metabolic profiling of live cells from NAD(P)H FLIM images. Previously, pre-trained CNN models have been used to generate lifetime component images directly from the lifetime data without deconvolution and fitting; however, these models may be limited to the lifetime values and characteristics of the training dataset and report lifetime values rather than the metabolic functions of cells [29,30]. Herein, the 3D FLI-LeNet and FLI-ResNet CNN models directly output the metabolic activities of cells from the raw TPSF data, bypassing traditional FLIM analysis techniques. Compared to 2D CNNs, 3D CNNs accept an additional dimension of input data, allowing the models to capture both temporal and spatial dynamics within the X-Y-T NAD(P)H TPSF images. The t-SNE visualization of the temporal features of cells showed a slight separation of glycolytic cells from cells using OXPHOS and glutaminolysis, implying that temporal patterns within the fluorescence decay can effectively discriminate different metabolic activities [Figs. 1(c) and 1(d)]. Furthermore, spatial signals within NAD(P)H lifetime images encode metabolic information, and previous analysis of mitochondrial structure in NAD(P)H images revealed different cluster patterns in glycolytic and oxidative cells [33,46,47]. The FLI-LeNet enabled the classification of the glycolytic and oxidative phenotypes of cancer cells with over 90% accuracy, comparable to 2D CNN models that use five NAD(P)H lifetime component images (τ1, τ2, α1, τm, intensity) [35]. The recall of the FLI-LeNet was lower than the precision and accuracy values for predicting glycolysis and OXPHOS utilization by breast cancer cells, possibly indicating heterogeneity in the response of MCF7 cancer cells to glycolysis inhibition by 2-DG [48]. The best-performing model was the FLI-ResNet trained with the original NAD(P)H TPSF images, achieving 94% accuracy in differentiating cells using glycolysis, OXPHOS, and glutaminolysis (Table II).
Although more powerful than 2D CNN models, 3D CNNs are computationally more expensive, resulting in longer training and inference times. Herein, the 3D TPSF NAD(P)H images are composed of 256 2D intensity images representing different time points, demanding 30 times the storage memory (~400 MB vs 12.7 MB) and 1.2 times the computational cost (144,690 vs 119,466 parameters) for around 5000 images, as compared to the previous 2D CNN model trained with NAD(P)H lifetime images. To overcome this challenge, protocols were developed to down-sample the TPSF images in the time dimension by applying mean and median filters with a window of 3 to halve the original dataset (Fig. S2). The FLI-LeNet models trained with the down-sampled datasets performed similarly to models trained on the original TPSF images but trained three times faster. However, the FLI-LeNet model trained with MEDD did not perform well when applied to the macrophage datasets, suggesting that the quality of the training data is sensitive to median down-sampling (Fig. S9). The differences in MD and MEDD FLI-LeNet performance (Fig. S9) may be attributed to the variable effects of mean vs median averaging on the noise within the lifetime decay curves. The FLI-ResNet trained with the original TPSF images exhibited better performance compared to models trained with MD and MEDD. The FLI-ResNet model, with its deeper network and residual connections, may capture more intricate and abstract features than the FLI-LeNet model when sufficient training resources are available [49] and, thus, may be more sensitive to information lost with down-sampling.
To ensure the versatility of the metabolism-prediction models beyond MCF7 cells and broaden their applicability to various cell types and studies, we applied the FLI-LeNet model to NAD(P)H FLIM data of murine macrophages with and without the POLG mutation. Polymerase gamma (POLG) is the enzyme responsible for replicating and maintaining mitochondrial DNA (mtDNA) within the mitochondria [37]. Mutations in the POLG gene are associated with a range of disorders characterized by mtDNA instability, leading to reduced fidelity and efficiency of mtDNA replication [50,51]. These mutations result in mitochondrial dysfunction, which affects cellular metabolism. Therefore, WT and POLG BMDMs provide a model with known metabolic phenotypes to assess the efficacy of the 3D CNN models. The pre-trained FLI-LeNet correctly predicted a higher fraction of POLG BMDMs as glycolytic than of the WT macrophages [Fig. 4(d), Fig. S9, Org and MD models], a finding consistent with the ECAR and OCR data on the differences in the basal metabolic states of WT and POLG BMDMs [Figs. 4(a) and 4(b)]. Additionally, the FLI-LeNet identified an increased fraction of glycolytic WT BMDMs with cyanide treatment, consistent with the expected metabolic shift due to OXPHOS inhibition [Fig. 4(d), Fig. S9, Org and MD models]. An elevated fraction of glycolytic phenotypes in POLG BMDMs upon cyanide treatment was also predicted by the FLI-LeNet [Fig. 4(d), Fig. S9, Org and MD models], although this increase was smaller than in the WT BMDMs, indicating that cyanide is less effective at compromising OXPHOS in macrophages with the POLG mutation than in WT macrophages.

The applicability of FLI-LeNet models to NAD(P)H fluorescence lifetime images of BMDMs suggests that the features identified by the models trained with cancer cells are preserved across datasets of different species and different metabolic perturbations. The model is primarily influenced by the temporal patterns in NAD(P)H fluorescence lifetime images, and the similarity in morphological characteristics, such as cell size and the intensity difference between the cytoplasm and nucleus [Figs. 1(a) and 4(c)], between BMDMs and MCF7 cells likely also facilitates the application of cancer-cell-trained FLI-LeNet CNN models to macrophages. The accurate results of the FLI-LeNet model on the BMDMs indicate the successful transfer of a model developed with data from MCF7 cells, whose metabolism was modulated by glycolysis and OXPHOS inhibitors and by glucose and pyruvate substrate exposure, to metabolic data induced by genetic modulation.
In this study, the lifetime decay matrix was obtained using time-correlated single-photon counting (TCSPC) with a 12.5 ns measurement window sampled over 256 time frames. The lifetime decay matrix can also be measured using time-gated imaging or pulse sampling with fewer time frames [52]. It is possible that a 3D CNN trained with different formats of decay data can effectively distinguish metabolic phenotypes of cells, as the down-sampled NAD(P)H TPSF images yielded similar performance for identifying metabolic pathways (Figs. 2 and 3). Although the ResNet 3D CNN achieved high accuracy in separating MCF7 cells dependent on glycolysis, OXPHOS, and glutaminolysis, metabolic pathways are not mutually exclusive. Significant crosstalk and overlap between different metabolic pathways can exist within a cell. In this study, the training datasets contained cells with carefully controlled metabolic activities, altered by nutrients within the media and by chemical inhibition of pathways. The NAD(P)H TPSF images likely contain features that could be informative on the heterogeneity of metabolic pathway use by cells, and CNNs have the potential to handle more challenging tasks, such as mapping the percentage of energy supplied by different metabolic pathways. However, the advancement of CNN models for separating the contributions of metabolic activities within individual cells is primarily hindered by the limited availability of ground-truth single-cell data. Obtaining high-quality and comprehensive datasets that accurately represent various metabolic conditions at the cellular level would greatly facilitate the further development and refinement of CNN models for metabolism analysis.
The classification of cellular metabolic activities from NAD(P)H fluorescence decay images using 3D CNNs may be useful for diverse applications that require non-contact, live-cell detection of metabolic states at the cellular level. The models presented here were developed and validated using in vitro cancer cell experiments, with cellular metabolism manipulated chemically. Although the models demonstrated accurate transfer to NAD(P)H FLIM images of macrophages with genetic metabolic perturbations, there are likely limitations of the models for application to more complex samples, such as tissues imaged ex vivo or in vivo. For clinical cancer applications, the tumor environment is complex, including cancer cells with heterogeneous metabolic activities and drug responses and additional cell populations such as immune cells, fibroblasts, and endothelial cells. While the models described herein may lack direct applicability to images of such clinical tissues, the models could be retrained using representative in vivo and ex vivo tissue data. As the number of prediction outcomes increases to encompass additional cell types and metabolic states, the volume of data required for training also increases. To overcome the challenges of obtaining sufficient in vivo and ex vivo FLIM data, training datasets could include simulated FLIM images in which lifetime parameters, cell and nucleus size, morphology, and intercellular and intracellular heterogeneity are selected to mimic experimental data [53]. Future studies may expand the applicability of FLIM prediction models by training with various biological samples to create an ensemble of models specific to the characteristics of each FLIM dataset.
IV. CONCLUSION
In this paper, the predominance of three major metabolic pathways, glycolysis, oxidative phosphorylation, and glutaminolysis, in MCF7 cancer cells was predicted from NAD(P)H TPSF images using 3D CNN models. The FLIM-based CNN models effectively utilize both temporal and spatial information within the NAD(P)H TPSF data, achieving accuracy rates exceeding 90%. Notably, the CNN models trained with human cancer cells were successfully transferred to murine macrophages. In conclusion, the combination of autofluorescence lifetime imaging of NAD(P)H and 3D CNN models offers a label-free modality for identifying and characterizing metabolic activities in live cells that can be extended across different metabolic perturbations and cellular contexts for broad applications.
V. METHODS

A. NAD(P)H fluorescence lifetime dataset of cancer cells
The NAD(P)H fluorescence lifetime images of MCF7 cancer cells with different metabolic activities were provided by Hu and Walsh. The methods summarized here are covered in full detail in Hu et al. [35]. MCF7 breast cancer cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) with glucose (50 mM), pyruvate (2 mM), 1% antibiotic-antimycotic, and 10% fetal bovine serum (FBS). The cells were seeded at a density of 2 × 10^5 cells in 2 ml of culture media per 35 mm glass-bottom imaging dish 48 h before imaging. Three metabolic groups were created to target glycolysis, oxidative phosphorylation (OXPHOS), and glutaminolysis. To enhance glycolysis, the cells were treated with NaCN (4 mM) to inhibit OXPHOS 5 min before imaging. To enhance OXPHOS, 2-deoxy-D-glucose (2-DG, 50 mM) was added to the cells 1 h before imaging to inhibit glycolysis. Additionally, a second group of OXPHOS-enhanced cells was created by providing glucose-starved cells with glucose-free DMEM supplemented with pyruvate (50 mM) 1 h prior to imaging. To enhance glutaminolysis, cells were plated in DMEM with glutamine (2 mM) as the only nutrient and imaged at 1, 2, and 3 h. NAD(P)H fluorescence lifetime images were captured on a multiphoton fluorescence lifetime microscope (Marianas, 3i) using 750 nm excitation and a bandpass filter (447/60 nm) to isolate emission. Fluorescence lifetime decays of each cell were obtained through cell segmentation in CellProfiler [54], using an automated image segmentation pipeline previously described [55]. The number of cancer cells in each metabolic group is summarized in Table S1.
B. NAD(P)H lifetime imaging of POLG macrophages
Experimental details for the polymerase gamma (POLG) murine bone marrow-derived macrophage (BMDM) experiments, including the isolation of cells, cell culture, and Seahorse metabolic flux assay, are fully described in the supplementary material. For NAD(P)H lifetime imaging of wild-type and POLG macrophages, the macrophages were cultured in DMEM supplemented with 10% FBS and seeded at a density of 10^5 cells in 2 ml of culture media per 35 mm glass-bottom imaging dish 24 h before imaging. Experimental groups included control and cyanide-treated wild-type (WT) and POLG macrophages. Both the WT and POLG macrophages were treated with NaCN (4 mM) to inhibit OXPHOS, and autofluorescence lifetime imaging was performed after 5 min. The NAD(P)H fluorescence lifetime images were captured by a custom-built multiphoton imaging system (Marianas, 3i) using a 40× water immersion objective (1.1 NA). The NAD(P)H fluorescence was excited at 750 nm with a power of ~5 mW using a tunable (680-1080 nm) Ti:sapphire femtosecond laser (COHERENT, Chameleon Ultra II) and detected with a photomultiplier tube (PMT) detector (HAMAMATSU, H7422PA-40) behind a bandpass filter (447/60 nm). The fluorescence lifetime decay was measured in the time domain with a time-correlated single-photon counting (TCSPC) electronics module (SPC-150N, Becker & Hickl). Each fluorescence lifetime image (256 × 256 pixels, 270 × 270 µm²) was acquired with a pixel dwell time of 50 µs and 5 frame repeats. The NAD(P)H fluorescence lifetime images were analyzed in SPCImage (Becker & Hickl) to calculate the mean NAD(P)H lifetime (τm) and to export the NAD(P)H intensity image and temporal point spread function (TPSF) image (256 × 256 × 256) with a spatial binning of 9 pixels. The mean-lifetime NAD(P)H images (τm) were created in SPCImage for visualization of the lifetime, but the CNN models in this paper use only the raw lifetime decay data. The NAD(P)H fluorescence decay image of each cell was obtained by segmentation based on the cell mask generated from NAD(P)H intensity images using Cellpose [55,56]. The resulting number of macrophages imaged for each group is summarized in Table S2.
C. Pre-processing and down-sampling of TPSF images

Each cell within each NAD(P)H FLIM image was extracted based on the bounding box of its cellular mask to obtain an X-Y-T TPSF image using MATLAB. The following image processing steps were then performed in Python with the OpenCV package. The overall workflow for TPSF image processing, training preparation, and CNN model development is described in Fig. 5. First, the pixel values in the cellular regions of each time-frame image were summed to plot the photon distribution as a function of time for each TPSF image. Then, the cells were filtered using an entropy threshold at the time frame with the maximum photon number to remove incomplete or poorly segmented cells. The thresholds were defined based on the distribution of entropy using a Gaussian approximation [34]. To unify the image size, all cell images were padded with borders of 0-values to 40 × 40 spatial (X-Y) pixels for all 256 time points (T) (Fig. 5). The collective 40 × 40 × 256 TPSF images of ~7500 cells occupied ~4 gigabytes of memory. To eliminate the background pixels from the TPSF images and reduce the amount of data, a 21 × 21 × 256 region was extracted at the center of the original image (40 × 40 × 256) to preserve the key morphology of the cells in the spatial domain (Fig. 5). This cropping size (21 × 21 pixels) was selected by analysis of the cell size distribution of the dataset (Fig. S1). Two down-sampling modalities were developed to further reduce each image to 21 × 21 × 128 pixels while maintaining spatial and temporal information. First, mean and median filters with a window of three pixels in the time domain were applied to the original TPSF images to smooth the decay curve [Fig. S2(a)]. Then, the odd time frames were extracted as the down-sampled data [Fig. S2(a)]. The TPSF images down-sampled by the median filter are denoted MEDD (Median Down-sampled), and those down-sampled by the mean filter are denoted MD (Mean Down-sampled).
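A minimal sketch of the padding, cropping, and MD/MEDD down-sampling steps, assuming cells no larger than 40 × 40 pixels; the exact window alignment and frame-parity conventions are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def pad_and_crop(tpsf: np.ndarray) -> np.ndarray:
    """Pad an (h, w, 256) cell image to 40x40, then center-crop to 21x21."""
    h, w, t = tpsf.shape
    padded = np.zeros((40, 40, t), dtype=tpsf.dtype)
    y0, x0 = (40 - h) // 2, (40 - w) // 2
    padded[y0:y0 + h, x0:x0 + w] = tpsf
    return padded[10:31, 10:31]          # 21 x 21 x 256

def downsample(tpsf: np.ndarray, mode: str = "mean") -> np.ndarray:
    """MD/MEDD down-sampling: window-3 filter in time, keep odd frames."""
    if mode == "mean":                   # MD
        smoothed = uniform_filter1d(tpsf.astype(float), size=3, axis=2)
    else:                                # MEDD
        smoothed = median_filter(tpsf, size=(1, 1, 3))
    return smoothed[:, :, 1::2]          # 21 x 21 x 128
```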
D. CNN model building, training, and evaluation
Two different CNN architectures were developed to predict the metabolic phenotypes of cancer cells. All models were created using the machine-learning library Keras with a TensorFlow backend in Python, running in Jupyter Notebook on the Anaconda platform. First, a 3D LeNet architecture [36] with two convolutional layers and two pooling layers was used to build a model (FLI-LeNet) to predict cells as either glycolytic or oxidative phenotypes. Then, another 3D LeNet model (FLI-LeNet) was trained to perform three-group metabolic classification: dependence on glycolysis, OXPHOS, or glutaminolysis. The second CNN architecture was developed based on the residual neural network (ResNet) structure [49], consisting of a convolutional layer followed by two ResNet blocks. Each ResNet block comprises two convolutional layers, and the weight layers learn residual functions relative to the layer inputs. The ResNet CNN model (FLI-ResNet) was created to identify three metabolic activities of cancer cells: glycolysis, OXPHOS, and glutaminolysis. The parameters of the 3D CNN models are described in detail in the supplementary material.
For all models, the size of the input layer was 21 × 21 × 256 for models trained with the original data or 21 × 21 × 128 for models trained with down-sampled data. Cross-entropy was set as the loss function and monitored in each training epoch. The Adam optimizer was used with an initial learning rate of 10^-3 and a batch size of 8. As a preliminary test, the networks were trained for 100 epochs using an NVIDIA GeForce RTX 3080 GPU. 70% of the TPSF images were randomly selected as the training dataset, 10% were used as the validation dataset to monitor the performance of the models during training, and the remaining 20% were used as the testing dataset (Table S1). The training time varied slightly, between 30 and 60 s per epoch, depending on the batch size and on whether the original or down-sampled datasets were used.
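A minimal Keras sketch of the two-class FLI-LeNet and the training configuration described above; the filter counts and kernel sizes are illustrative assumptions, as the paper's exact hyperparameters are given in its supplementary material:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fli_lenet(t_frames: int = 256, n_classes: int = 2) -> tf.keras.Model:
    """Two Conv3D + pooling stages over X-Y-T TPSF input, then dense layers."""
    model = models.Sequential([
        layers.Input(shape=(21, 21, t_frames, 1)),
        layers.Conv3D(8, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(16, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fli_lenet()
# Training as described in the text: cross-entropy loss, batch size 8.
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, batch_size=8)
```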
E. CNN model evaluation
Once the best training parameters were determined, fivefold cross-validation of the models trained to 50 epochs was applied to evaluate the robustness of the CNN models (Fig. 5). The prediction performance on the test datasets was averaged across the five folds. For the prediction of glycolytic and oxidative cells, the results were presented in a confusion matrix, where glycolytic cells were defined as the positive group and oxidative cells as the negative group. The accuracy was calculated as the ratio of correctly classified cells to the total number of cells. The precision was calculated as the ratio of true positives to the sum of true positives and false positives, and the recall was calculated as the ratio of true positives to the sum of true positives and false negatives. Additionally, a receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) were obtained from the prediction results of the test datasets for each classifier.
For reporting the performance of the models in distinguishing among three metabolic pathways, the precision for each class was calculated as the proportion of correctly predicted cells of that class out of all cells predicted as that class. The recall for each class was calculated as the proportion of correctly predicted instances of that class out of all actual instances of that class in the dataset. The F1-score for each class was calculated as the harmonic mean of precision and recall: F1-score = 2 × (Precision × Recall) / (Precision + Recall). Overall precision, recall, and F1-scores were obtained by averaging the per-class values.
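These per-class and macro-averaged metrics correspond to what scikit-learn computes directly; a minimal sketch with illustrative labels:

```python
from sklearn.metrics import precision_recall_fscore_support

# Illustrative integer labels: 0 = glycolysis, 1 = OXPHOS, 2 = glutaminolysis.
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]

per_class = precision_recall_fscore_support(y_true, y_pred, average=None)
macro = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(per_class)  # precision, recall, F1, and support per class
print(macro)      # macro-averaged precision, recall, F1
```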
After the development and validation of the models on MCF7 cancer cell data, the two-class FLI-LeNet models were applied to the WT and POLG macrophage FLIM images to predict the major metabolic activity of each cell. From the model outputs, the percentage of cells predominantly utilizing glycolysis or OXPHOS was calculated for both the WT and POLG macrophages and their respective cyanide-treated groups, to facilitate a comparison of the metabolic activities among the different groups.
SUPPLEMENTARY MATERIAL
See the supplementary material for a detailed description of fluorescence lifetime analysis, CNN development, BMDM preparation, Seahorse assay, dataset, and CNN performance.
FIG. 1. Characteristics of NAD(P)H fluorescence lifetime decays of MCF7 cells dependent on OXPHOS, glycolysis, and glutaminolysis. (a) Representative NAD(P)H τm images of cancer cells dependent on OXPHOS, glycolysis, and glutaminolysis; scale bar = 50 µm. (b) Representative fluorescence images of an MCF7 cell montaged across time. The upper left frame corresponds to t = 0 ps and the bottom right frame to t = 12.5 ns, with a 48.8 ps time resolution per frame. The representative cell is from the glycolysis group, and the image size is 21 × 21 spatial pixels (~22 × 22 µm²) × 256 frames across time. (c) Average NAD(P)H decay curves (TPSF) of cells dependent on glycolysis, OXPHOS, and glutaminolysis. (Each curve was obtained by averaging the decays, normalized to the decay peak maximum, of all pixels within a cell and then averaging across all cells within each metabolic group.) (d) t-SNE projection, with the NAD(P)H intensity decay as input features, of MCF7 cells dependent on glycolysis (green), OXPHOS (red), and glutaminolysis (blue).
FIG. 2. A 3D FLI-LeNet CNN model for classifying glycolytic vs oxidative MCF7 cells from NAD(P)H TPSF images. (a) The structure of the FLI-LeNet CNN model for predicting cancer cells using glycolysis or OXPHOS based on the NAD(P)H TPSF images. (b) Validation loss and training loss by epoch for FLI-LeNet models trained with different datasets (Org: original TPSF images; MD: TPSF images down-sampled with the mean filter; MEDD: TPSF images down-sampled with the median filter). Solid lines represent validation loss, and dashed lines represent training loss. (c) Validation accuracy and training accuracy by epoch for FLI-LeNet models trained with different datasets. Solid lines represent validation accuracy, and dashed lines represent training accuracy. (d) t-SNE visualization, obtained from the last activation map of the FLI-LeNet model, of the test data for the model trained with the original NAD(P)H TPSF images. Each dot corresponds to one cell based on its representation in the last activation layer of the pre-trained FLI-LeNet after fine-tuning. Red data points represent cells using OXPHOS, and green data points represent cells using glycolysis. (e) Representative ROC curves of FLI-LeNet models trained with the original NAD(P)H TPSF data (Org) and down-sampled data (MD, MEDD) for predicting glycolysis or OXPHOS in MCF7 cells within the test datasets.
[Figs. 3(b) and 3(c)]. In comparison, the FLI-LeNet models attained a validation loss below 0.1 and a validation accuracy above 80% after 30 epochs [Figs. S6(c) and S6(d)]. The FLI-ResNet exhibited more stable performance, with fewer fluctuations in validation accuracy and loss during training, when trained with the original TPSF images than when trained with MD and MEDD [Figs. 3(b) and 3(c)].
FIG. 3. 3D FLI-ResNet CNN model for classifying MCF7 cells as dependent on glycolysis, OXPHOS, or glutaminolysis. (a) Illustration of the structure of the FLI-ResNet CNN model for predicting cancer cells using glycolysis, OXPHOS, and glutaminolysis from the original NAD(P)H TPSF images. (b) Validation and training loss by epoch for the FLI-ResNet models trained with a 0.001 learning rate for predicting metabolic activity in different datasets (Org: original TPSF images; MD: down-sampled TPSF images with the mean filter; MEDD: down-sampled TPSF images with the median filter); solid lines represent validation loss, and dashed lines represent training loss. (c) Validation accuracy and training accuracy over epochs for FLI-ResNet trained with a 0.001 learning rate for predicting metabolic activity in different datasets; solid lines represent validation accuracy, and dashed lines represent training accuracy. (d) 2D t-SNE visualization of the test data from the last activation map of the FLI-ResNet model created with the original NAD(P)H TPSF images. Red data points represent MCF7 cancer cells dependent on OXPHOS, green data points represent MCF7 cancer cells dependent on glycolysis, and blue data points represent MCF7 cancer cells dependent on glutaminolysis.
FIG. 4. Prediction of metabolic activity in WT and POLG-mutated murine macrophages using 3D FLI-LeNet CNN models. (a) Extracellular acidification rate (ECAR) of WT BMDMs and POLG-mutated BMDMs, measured under basal conditions and with the sequential addition of glucose, oligomycin, and 2-DG. (b) Oxygen consumption rate (OCR) of WT BMDMs and POLG-mutated BMDMs, measured under basal conditions and with the sequential addition of glucose, oligomycin, FCCP & pyruvate, and rotenone & antimycin. (c) Representative NAD(P)H τm images of control and cyanide-treated WT and POLG macrophages show alterations in mean fluorescence lifetimes due to POLG mutation and cyanide treatment; scale bar = 50 µm. (d) Prediction of the metabolism of WT and POLG BMDMs as glycolysis or OXPHOS by the FLI-LeNet models trained on MCF7 cancer cells using the original TPSF images. Data are the number of cells and the corresponding percentage.
[Fig. 4(d), Fig. S9, Org and MD models], a finding that is consistent with the ECAR and OCR data on the differences in the basal metabolic states of WT and POLG BMDMs [Figs. 4(a) and 4(b), Org and MD models]. Additionally, the FLI-LeNet identified an increased fraction of glycolytic WT BMDMs with cyanide treatment, consistent with the expected metabolic shift due to OXPHOS inhibition [Fig. 4(d), Fig. S9, Org and MD models]. Although an elevated fraction of glycolytic phenotypes in POLG BMDMs upon cyanide treatment was also predicted by the FLI-LeNet [Fig. 4(d), Fig. S9, Org and MD models], this increase was smaller than in the WT BMDMs. This observation indicates that the effectiveness of cyanide in compromising OXPHOS in macrophages with the POLG mutation is reduced compared to WT macrophages.
FIG. 5. FLIM image pre-processing steps and overview of model development (created with BioRender.com). Each cropped image, of unique spatial dimensions × 256 time frames, is padded to a uniform 40 × 40 × 256 pixel image and cropped to 21 × 21 × 256 pixels. The cropped images are retained in the original (Org) dimensions (21 × 21 × 256) and down-sampled in the time dimension, via either a median (MEDD) or mean (MD) filter, to 21 × 21 × 128 pixels. Representative TPSF curves demonstrate the effect of the down-sampling procedure. The resulting dataset of ≈7500 cells is divided into training and test groups for CNN model development to identify cellular metabolic activity from the FLIM images.
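A minimal sketch of this pre-processing pipeline follows, assuming each segmented cell is stored as an `(h, w, 256)` NumPy stack; the corner-crop placement and the function name are our assumptions, not details from the paper.

```python
# Sketch of the FLIM pre-processing in Fig. 5 (assumed details noted below).
import numpy as np

def preprocess(cell_img: np.ndarray, mode: str = "mean") -> np.ndarray:
    """cell_img: (h, w, 256) TPSF stack for one segmented cell, h, w <= 40."""
    # Pad spatially to a uniform 40 x 40 x 256 volume.
    h, w, _ = cell_img.shape
    padded = np.pad(cell_img, ((0, 40 - h), (0, 40 - w), (0, 0)))

    # Crop to 21 x 21 x 256; a corner crop is used here for simplicity
    # (the actual crop placement is an assumption on our part).
    cropped = padded[:21, :21, :]

    # Down-sample the 256 time frames to 128 with a mean (MD) or
    # median (MEDD) filter over pairs of adjacent frames.
    frames = cropped.reshape(21, 21, 128, 2)
    if mode == "mean":
        return frames.mean(axis=-1)
    return np.median(frames, axis=-1)

org = np.random.rand(30, 25, 256)   # placeholder cell image
md = preprocess(org, "mean")        # shape (21, 21, 128)
medd = preprocess(org, "median")    # shape (21, 21, 128)
```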
TABLE I. Performance of the FLI-LeNet CNN model on the prediction of glycolytic and oxidative cells, trained with different datasets. Values are mean ± standard deviation for the test datasets of the fivefold cross-validation replication.
TABLE II. Performance of the FLI-ResNet and FLI-LeNet CNN models on the prediction of cells using glycolysis, OXPHOS, and glutaminolysis, trained with different datasets. Values are mean ± standard deviation for the test datasets of the fivefold cross-validation replication. | 9,815.4 | 2024-02-27T00:00:00.000 | ["Computer Science", "Biology", "Medicine"] |
Connections, field redefinitions and heterotic supergravity
We study heterotic supergravity at $\mathcal{O}(\alpha')$, first described in detail in 1989 by Bergshoeff and de Roo. In particular, we discuss the ambiguity of a connection choice on the tangent bundle. It is well known that in order to have a consistent supergravity with supersymmetry transformations given in the usual way, this connection must be the Hull connection at $\mathcal{O}(\alpha')$. We consider deformations of this connection corresponding to field redefinitions, and the necessary corrections to the supersymmetry transformations. We also discuss possible extensions of this theory to higher orders in α′. We are interested in the moduli space of such field redefinitions which allow for supersymmetric solutions to the equations of motion. We show that for solutions on M4 × X, where M4 is Minkowski space and X is compact, this is given by $H^{(0,1)}(X, \mathrm{End}(TX))$. This space corresponds to infinitesimally close connections for which the equations of motion are satisfied. The setup suggests a symmetry between the gauge connection and the tangent bundle connection, as also employed by Bergshoeff and de Roo. We propose that this symmetry should be kept to higher orders in α′, and propose a natural choice for the corresponding tangent bundle connection used in curvature computations.
Introduction
This paper is a continuation of [1], where we studied the infinitesimal moduli space of heterotic supergravity, and in particular the Strominger system [2–4]. Moduli of the Strominger system were also studied in [5].
In this paper we will shed more light on an ambiguity which appeared in [1] concerning moduli of T X as a holomorphic bundle defined by a holomorphic connection ∇. We argued that these moduli could not be physical, and formulated a possible interpretation for their appearance, which we elaborate on in this paper.
Ambiguities concerning the connection ∇ have been discussed extensively in the literature before, both from a sigma model perspective [6–10], where a change of this connection has been shown to correspond to a field redefinition, and from the supergravity point of view [3, 11–13], where a change of connection choice has been shown to correspond to a change of regularisation scheme in the effective action. We will review some of these results and extend them to higher orders in α′. Heterotic supergravity has also been considered at higher orders in α′ before [14–18],¹ and we will review and extend some of these results. In particular, we address the ambiguity concerning a connection choice on the tangent bundle in the higher order theory.
We begin in section 2 with a short review of first order heterotic supergravity as first written down in [15, 16]. We discuss the connection choice ∇ on the tangent bundle TX needed for the supersymmetry equations and equations of motion to be compatible. This leads to an instanton condition on ∇ [1, 25, 26],
$$R_{mn}\,\Gamma^{mn}\,\eta = 0\,,$$
where $R_{mn}$ is the curvature two-form of ∇ and η is the spinor parametrising supersymmetry on X. This condition has often been applied in the literature when constructing heterotic vacua, see e.g. [27–31]. The instanton condition has an associated infinitesimal moduli space
$$T\mathcal{M}_{\nabla} \cong H^{(0,1)}(X, \mathrm{End}(TX))\,, \qquad (1.1)$$
which we considered in [1]. These moduli cannot be physical, and the main purpose of this paper is an attempt to understand their appearance. In order to have a consistent supergravity at O(α′), ∇ must be taken to be the Hull connection [3, 16]. A deformation of this connection is equivalent to a field redefinition or a change of the regularisation scheme in the effective action [7, 11]. We are interested in the space of allowed deformations, for which there are supersymmetric solutions to the supergravity equations of motion. We find that even though we need to deform the supersymmetry transformations accordingly, as was also pointed out in [13], the conditions for preservation of supersymmetry can be assumed to remain the same. Moreover, the space of connections which allow for such supersymmetric solutions to exist is again given by (1.1).
In section 3 we discuss extensions of these results to second order in α′. We find that the choice of the Hull connection, which was required at O(α′), is no longer consistent at higher orders. Indeed, as we shall see, insisting on the Hull connection can put additional constraints on the higher order geometry. In particular, the first order geometry is then Calabi-Yau if we assume the internal space to be compact and smooth. This was also noted in [25], where the first order geometry was taken as exact.
We also show that, without loss of generality, supersymmetric solutions may be assumed to satisfy the Strominger system, assuming that the internal space is compact and smooth. With this, the connection ∇ should again satisfy the instanton condition. This condition looks surprisingly like a supersymmetry condition for the connection ∇, as if it were a dynamical field. Indeed, it was precisely the fact that (∇, ψ_IJ), where ψ_IJ is the supercovariant curvature, transforms as an SO(9, 1)-Yang-Mills multiplet at O(α′) which led to the construction of the O(α′)-action in the first place [16]. As also noted in [16], this is symmetric with the gauge sector of the theory, and it is natural to assume this symmetry to higher orders in α′. This also prompts us to make a conjecture for what the connection choice should be at higher orders in α′.
We have left some technical details of the discussion of compactifications to four-dimensional Minkowski space to the appendix, leaving us free to discuss supersymmetry and solutions in the bulk of the paper.
First order heterotic supergravity
In this section we review heterotic supergravity at first order in α′, as first studied in [15, 16]. We write down the action and supersymmetry transformations, and review the supersymmetric solutions of this theory, commonly known as the Strominger system [2–4]. We show how consistency between the supersymmetry conditions and the equations of motion requires that we make a certain connection choice on TX. In fact, the connection should satisfy the instanton condition. Various proofs of this have appeared in the literature before [1, 25, 26], and we give a slightly different proof in this paper.
Moreover, the condition that the supergravity action be invariant under supersymmetry reduces this choice of connection further, to the Hull connection [3, 15, 16]. We show that deforming this connection requires a corresponding change of the supersymmetry transformations; however, we find that the conditions for supersymmetric solutions can be taken as before. Moreover, the deformed connections must again be instantons. We comment on the moduli space of this condition, and note that these moduli are unphysical, as they correspond to changes of the effective action regularisation scheme [11] or, as we shall see explicitly, field redefinitions [7].
We comment briefly on the type of geometry that results from the first order supersymmetry conditions, and on the fact that the compact space X is conformally balanced. First order deformations of the corresponding system of equations were studied recently in [1] and [5].
Action and field content
Let us begin by recalling the bosonic part of the action at this order [16], where R is the scalar curvature of the metric g, F is the curvature of the E8 × E8 gauge bundle, R is the curvature with respect to some connection ∇ on the tangent bundle, and H is the NS three-form, which is appropriately defined for the theory to be anomaly free. Here, ω^A_CS and ω^∇_CS are the Chern-Simons three-forms of the gauge connection A and the tangent bundle connection ∇, respectively. We also write |α|² = α ∧ ∗α for α ∈ Ω∗(X).
The fermionic fields of the theory are the gravitino ψ_M, the dilatino λ, and the gaugino χ. The N = 1 supersymmetry variations of these fields are [2, 16], where $H_M = H_{MNP}\Gamma^{NP}$, $H = H_{MNP}\Gamma^{MNP}$, ∇^LC denotes the Levi-Civita connection, the Γ_M are ten-dimensional gamma-matrices, and $\Gamma_{M_1\ldots M_n}$ denote antisymmetrized products of gamma-matrices as usual. Here capital roman letters denote ten-dimensional indices. Note that the transformation of the gauge field has a reduction in the order of α′. This is because the gauge field always appears with an extra factor of α′ in the action. In order to have a supersymmetric theory, we therefore only need to specify the gaugino transformation modulo O(α′)-terms. Supersymmetry for a given solution then requires that these variations vanish. The choice of connection ∇ is a subtle question. Firstly, it cannot be a dynamical field, as the corresponding string theory contains no modes associated with it. Hence, ∇ must depend on the other fields of the theory in a particular way. This dependence is forced upon us once the supergravity action and supersymmetry transformations are specified.
Indeed, if we want the supergravity action to be invariant under the supersymmetry transformations (2.3)–(2.5) at O(α′), we need a particular choice of connection in the action, namely the Hull connection ∇⁻, whose connection symbols are built from the connection symbols of the Levi-Civita connection together with the torsion H. This connection is needed in order that (∇, ψ_IJ) transforms as an SO(9, 1) Yang-Mills multiplet, as explained in [16]. Here ψ_IJ is the supercovariant curvature, defined with respect to the connection ∇⁺, whose connection symbols are related to those in (2.6). With this, the full first order heterotic action is invariant under supersymmetry.
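The display equation for the connection symbols did not survive extraction. In the sign conventions most commonly used for this construction (our reconstruction; the original equation (2.6) should be consulted), the two torsionful connections are

$$\Gamma^{\pm\,M}{}_{KL} \;=\; \Gamma^{\mathrm{LC}\,M}{}_{KL} \;\pm\; \tfrac{1}{2}\, H^{M}{}_{KL}\,,$$

with ∇⁻ the Hull connection appearing in the action and ∇⁺ the connection entering the supercovariant curvature and the gravitino variation.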
First order supersymmetry and geometry
In this section we briefly consider compactifications to four-dimensional Minkowski space; details and conventions are laid out in appendix A. Consider the set of supersymmetry equations (2.3)–(2.4) at first order in α′. We look at what conditions they impose on the internal geometry X. The resulting system is known as the Strominger system [2, 3], and in terms of the fields (Ψ, ω, H, φ) it may be written as in [32], starting with $\mathrm{d}(e^{-2\phi}\Psi) = 0$, as was first shown in [2]. The flux H is identified with the torsion T of the Bismut connection $\tilde\nabla$, which is in fact the same as ∇⁺. In the mathematics literature, $\tilde\nabla$ is the unique metric connection with totally antisymmetric torsion for which the complex structure is parallel. From the anomaly cancellation condition (2.2), we also have the Bianchi identity. Setting the gaugino variation (2.5) to zero is equivalent to requiring that the gauge bundle is holomorphic and satisfies the hermitian Yang-Mills equations on the internal space, where F is the field-strength of the E8 × E8 gauge bundle.
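Several display equations of this subsection were lost in extraction. As a hedged reconstruction from the standard literature (conventions may differ from the original by signs and factors), the Strominger system, the Bianchi identity (2.2), and the hermitian Yang-Mills conditions commonly read

$$\mathrm{d}\big(e^{-2\phi}\,\Psi\big) = 0\,, \qquad \mathrm{d}\big(e^{-2\phi}\,\omega\wedge\omega\big) = 0\,, \qquad H = \mathrm{i}\,(\bar\partial - \partial)\,\omega\,,$$

$$\mathrm{d}H = \frac{\alpha'}{4}\Big(\operatorname{tr} R\wedge R \;-\; \operatorname{tr} F\wedge F\Big)\,, \qquad F^{(0,2)} = F^{(2,0)} = 0\,, \qquad \omega^{mn}\,F_{mn} = 0\,.$$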
Instanton condition
In order for the O(α′)-action to be invariant under the supersymmetry transformations (2.3)–(2.5), one is forced to choose the Hull connection in the action. As we shall see in section 2.5, this can be relaxed upon appropriate field redefinitions. Such field redefinitions also change the supersymmetry transformations. We will however see that even though the supersymmetry transformations change, the supersymmetry conditions may be assumed to be the same. That is, we can without loss of generality assume that our solutions solve the Strominger system. Furthermore, supersymmetry should be compatible with the bosonic equations of motion derived from (2.1). This leads to a condition on ∇ known as the instanton condition, which we now discuss. It should be noted that for supersymmetric solutions, as we also show in appendix C, the Hull connection does satisfy the instanton condition to the order we are working at [26]. It has been shown that the supersymmetry equations derived from (2.3)–(2.5), together with the Bianchi identity, imply the equations of motion if and only if the connection ∇ for the curvature two-form R appearing in (2.1) is an SU(3)-instanton [25, 26]. This implies that it satisfies the conditions (2.14)
which are similar to those for the field-strength F. We present a proof of this in appendix B for completeness.³ The first condition in (2.14) implies that R^{(0,2)} = 0. Therefore there is a holomorphic structure $\bar\partial_\vartheta$ on the tangent bundle TX, where ϑ is the (0, 1)-part of the connection one-form of ∇. We denote TX with this holomorphic structure by (TX, ∇). Note that this holomorphic structure is in general different from the holomorphic structure on TX induced by the complex structure J. The second condition of (2.14) says that the connection ∇ is Yang-Mills; more precisely, ∇ is an instanton. By a theorem of Li and Yau [34], which generalizes the Donaldson–Uhlenbeck–Yau theorem [35, 36], such a connection exists if and only if the holomorphic bundle (TX, ∇) is poly-stable. Moreover, the connection is the unique hermitian connection with respect to the corresponding hermitian structure on TX.⁴ It is known that the stability condition is preserved under first order deformations of the holomorphic structure [37]. We extended the result of [37] in [1], where we found that the moduli space of infinitesimal deformations of $\bar\partial_\vartheta$, including generic deformations of the hermitian Yang-Mills conditions (2.14), is given by $T\mathcal{M}_{\bar\partial_\vartheta} \cong H^{(0,1)}(X, \mathrm{End}(TX))$. (2.15) More explicitly, in [1] we showed that for each $[\delta\vartheta] \in T\mathcal{M}_{\bar\partial_\vartheta}$,⁵ there is a corresponding element $\delta\vartheta \in [\delta\vartheta]$ so that the Yang-Mills condition is satisfied. Starting from the instanton connection, there is then an infinitesimal moduli space $T\mathcal{M}_{\bar\partial_\vartheta}$ of connections for which the equations of motion are satisfied. As mentioned, for the supergravity action to be invariant under the supersymmetry transformations (2.3)–(2.5), the choice of connection is reduced further. In particular, invariance of the first order action forces the connection to be the Hull connection ∇⁻ [16]. Under these supersymmetry transformations, we therefore cannot choose an arbitrary element in $T\mathcal{M}_{\bar\partial_\vartheta}$ when deforming the Strominger system. Rather, we have to choose the element corresponding to a deformation of the Hull connection.
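The display equation (2.14) itself was lost in extraction; in a standard form (our reconstruction), the SU(3)-instanton conditions on ∇ read

$$R^{(0,2)} = R^{(2,0)} = 0\,, \qquad \omega^{mn}\, R_{mn} = 0\,,$$

which together are equivalent to the spinorial condition $R_{mn}\Gamma^{mn}\eta = 0$ quoted in the introduction.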
Changing the connection
We could ask what happens if we deform the connection in the action. Firstly, such deformations do not correspond to physical fields. We shall see in this section that they are equivalent to field redefinitions [7]. Secondly, insisting upon changing this connection means that we need to change the supersymmetry transformations correspondingly. However, it turns out that the conditions for supersymmetric solutions can be taken as before. Moreover, the condition that the new connection allows for such supersymmetric solutions to the theory forces the new connection precisely to satisfy the instanton condition.
Let us discuss what happens when we change the connection ∇ used in the action. That is, we let ∇ → ∇ + tθ, where θ = θ(Φ) is a function of all the other fields of the theory, which we collectively denote by Φ, and t is an infinitesimal parameter. In the next section we will take t = O(α′), but for now we just assume it corresponds to an infinitesimal deformation of the connection. We are interested in what happens to the theory under such a small deformation. Under supersymmetry, the new connection one-forms $\Theta^I{}_{JK}$ together with the supercovariant curvature ψ_IJ transform as, where we have used (C.1) in the second equality of the second equation. The O(α′)-terms can be neglected to the order we are working at, but they will become important in the next section when we discuss the theory at higher orders in α′. We thus see that (Θ, ψ_IJ) transforms as an SO(9, 1)-Yang-Mills multiplet, modulo O(t)- and O(α′)-terms. As noted, the O(α′)-terms can be ignored for now, but the O(t)-terms will have to be dealt with. This is done by changing the supersymmetry transformations accordingly, as we shall see below.⁶ A lemma of Bergshoeff and de Roo [16] (see also [38]) states that, under an infinitesimal deformation of the Hull connection, the action deforms by a term proportional to the zeroth order equations of motion. Here B₀ denotes a combination of zeroth order bosonic equations of motion. As the correction to the action due to the change of connection (2.16) is proportional to the equations of motion, the change of connection tθ may equivalently be viewed as an infinitesimal field redefinition of order O(t, α′), and is therefore non-physical.⁷ We want to consider what happens to the theory under these deformations of the connection. In particular, we are interested in the allowed deformations of the connection, or equivalently field redefinitions, for which supersymmetric solutions to the Strominger system exist. We expect this to be related to the moduli space of connections (2.15) studied in [1], and we see that this is indeed the case.
From (2.17) it follows that the change to the action due to the correction
of the transformation of Θ can be absorbed in a redefinition of the bosonic supersymmetry transformations, by a procedure similar to the one used in [16] for the O(α′²)-corrections to the supersymmetry transformations. Similarly, we also have, by [16], a relation in which Ψ₀ is a combination of zeroth order fermionic equations of motion. It follows that the change in the action due to the correction may be absorbed into a redefinition of the fermionic supersymmetry transformations, which now involve a ten-dimensional vielbein frame. We have written the corrections in this way to be able to compare with the higher order α′-corrections in the next section. With the new supersymmetry transformations (2.19)–(2.21), the action with the new connection is again invariant.
⁶ That a change of the connection requires a change of the supersymmetry transformations in order to have a supersymmetry invariant action has been noted before [13].
⁷ That deformations of the connection correspond to a field redefinition has been noted in the literature before, see e.g. [6, 7, 12, 13].
As we saw above, deforming the connection ∇⁻ → ∇⁻ + tθ really just corresponds to an O(α′) field redefinition. Hence, the supersymmetry algebra above (including the bosonic transformations, which we did not write down for brevity) should just be the old algebra written in terms of the new fields. There are therefore no issues concerning closure of the algebra.
Supersymmetric solutions
Let us look for four-dimensional supersymmetric maximally symmetric compact solutions of the t-adjusted theory. This amounts to setting the transformations (2.19)–(2.21) to zero. We consider solutions in which the supersymmetry parameter involves a six-dimensional spinor η on X. Given the redefined supersymmetry transformations, this might seem like a restriction of the allowed supersymmetric solutions. However, this is not the case, at least for compact solutions. Indeed, we have the following proposition.
Proposition 1. Consider heterotic compactifications to four dimensions on a smooth compact space X at O(α′^{2n−1}) or less. If ∇⁺η = O(α′ⁿ), then without loss of generality we may assume that ∇⁺η = 0, i.e. the solutions are solutions of the Strominger system.
upon an integration by parts.⁸ Here ∆⁺ is the Laplacian of the Bismut connection.
Next, expand η in eigenmodes of ∆⁺, where $\{|\psi_i\rangle\}$ is an orthonormal basis of eigenspinors of ∆⁺ with corresponding eigenvalues $\lambda_i$, and where we have gone over to bra-ket notation for convenience. We can then compute the expectation value of ∆⁺ in the state $|\eta\rangle$. Note that $\lambda_i \geq 0$, as ∆⁺ is positive semi-definite. From this it follows that each term in the sum is of O(α′^{2n}). Moreover, we know that $|\eta\rangle = \mathcal{O}(1)$, which bounds the coefficients from below. It follows that at least one $\alpha_k = \mathcal{O}(1)$.⁹ Then, from (2.24), the corresponding eigenvalue is $\lambda_k = \mathcal{O}(\alpha'^{\,2n})$. At the given order in α′, O(α′^{2n−1}), we may without loss of generality set $\lambda_k = 0$. It follows that there is a spinor in the kernel of ∇⁺, which we may take to be η.
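The displayed steps of this computation were dropped in extraction; they can be reconstructed along the following lines (our notation):

$$|\eta\rangle = \sum_i \alpha_i\,|\psi_i\rangle\,, \qquad \Delta^{+}|\psi_i\rangle = \lambda_i\,|\psi_i\rangle\,, \quad \lambda_i \ge 0\,,$$

$$\langle\eta|\,\Delta^{+}\,|\eta\rangle \;=\; \sum_i \lambda_i\,|\alpha_i|^2 \;=\; \mathcal{O}(\alpha'^{\,2n})\,, \qquad \sum_i |\alpha_i|^2 = \mathcal{O}(1)\,,$$

so at least one $\alpha_k = \mathcal{O}(1)$, which forces the corresponding eigenvalue to satisfy $\lambda_k = \mathcal{O}(\alpha'^{\,2n})$, as claimed in the text.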
Using Proposition 1 with n = 1, we get (2.23). A further condition follows from requiring the solution to be supersymmetric. From appendix B it then follows that the corrected connection ∇ = ∇⁻ + tθ should be an instanton.
It is easy to see that (2.25) is satisfied once we know that we are working with supersymmetric solutions of the Strominger system. Plugging the connection ∇ into the instanton condition, and using that ∇⁻ is an instanton at this order, we find precisely the condition for the deformed connection to remain an instanton. From this, the desired result also follows.
Finally, we remark that, as noted in [1], there is an infinitesimal moduli space of connections satisfying this condition, where the tangent space $T\mathcal{M}_{\bar\partial_{\vartheta^-}}$ is taken at the Hull connection. Each connection in this moduli space corresponds to a field redefinition of the supergravity, with the corresponding change of the supersymmetry transformations (2.19)–(2.21). Compact supersymmetric solutions of these equations may, by Proposition 1, be assumed to be solutions of the Strominger system, and they also solve the equations of motion provided $\theta \in T\mathcal{M}_{\bar\partial_{\vartheta^-}}$. From this perspective, the moduli space (2.27) found in [1] is unphysical. That is, the moduli space (2.27) may be viewed as the space of allowed infinitesimal O(α′) field redefinitions for which the equations of motion and supersymmetry are compatible.¹⁰
Higher order heterotic supergravity
Having discussed the first order theory, we now consider heterotic supergravity at higher orders in α′. We continue our investigation from a ten-dimensional supergravity point of view, by an analysis similar to that of Bergshoeff and de Roo [16]. In [16] the Hull connection was used at higher orders as well. We wish to generalize this analysis somewhat, and allow for a more general connection choice in the action, as was done in the previous section. In order not to overcomplicate matters unnecessarily, we return to letting the TX-connection be the
Hull connection at O(α′), which is needed in order that the full action be invariant under the usual supersymmetry transformations (2.3)–(2.5) at O(α′). We will, however, allow this connection to receive corrections at O(α′²). There are two important points which we wish to emphasise in this section. Firstly, as we saw in the last section, we may deform the tangent bundle connection away from the Hull connection provided we deform the supersymmetry transformations correspondingly. We take a similar approach in this section, where we deform away from the Hull connection by an α′-correction, ∇ = ∇⁻ + θ, where now θ = O(α′) and depends on the fields of the theory. Our findings from the previous section persist here. That is, the deformation θ now corresponds to an O(α′²) field redefinition, and deforming θ is therefore non-physical in this sense. Moreover, the supersymmetry transformations also change with θ, in accordance with the deformed fields. However, not all field choices allow for supersymmetric solutions of the Strominger system. Secondly, we note that there is a symmetry between the tangent bundle connection ∇ and the gauge connection A in the first order action. As a guiding principle, as is also done in [16], we would like to keep this symmetry at higher orders. With this philosophy it seems natural to choose ∇ so that it satisfies its own equation of motion, similar to that of A, at the locus where the equations of motion are satisfied. Note that this is true for the Hull connection at O(α′), by equation (2.17).
Moreover, this also seems to be the connection choice we need in order for the supersymmetry conditions to hold at the locus of the equations of motion. Indeed, we find the following. Theorem 1. Strominger system type solutions, with ∇⁺ǫ = 0, of heterotic compactifications on a compact six-dimensional manifold X survive as solutions of heterotic supergravity at O(α′²) if and only if the connection ∇ is an instanton, satisfying its own "supersymmetry condition" $R_{mn}\Gamma^{mn}\eta = 0$. Compact O(α′²)-supersymmetric solutions can without loss of generality be assumed to be of this type. Moreover, ∇ satisfies its own equation of motion for these solutions.
Note then that our choice of connection is as if the connection ∇ were dynamical. We again stress that this is not the case, as ∇ must depend on the other fields of the theory. We only require the connection to satisfy an equation of motion (as if it were dynamical), and this then constrains how ∇ depends on the other fields.
With these observations, we make the following conjecture. Conjecture 1. At higher orders in α′, the correct connection choice/field choice is the one which preserves the symmetry between ∇ and A. That is, ∇ should be chosen as if it were dynamical, satisfying its own equation of motion. Moreover, for supersymmetric solutions, ∇ should be chosen to satisfy its own supersymmetry condition, similar to the one satisfied by A.
The second order theory
According to Bergshoeff and de Roo [16], the action at second order in α′ retains the form of the first order action, with appropriately corrected fields. The supersymmetry transformations do receive corrections. What these corrections are again depends crucially on which connection is chosen in the action, as we will discuss in the next section. Using the Hull connection ∇ = ∇⁻, they are given in [16] and read as in (3.4)–(3.6), where $P_{MAB} = -6\alpha'\, e^{2\phi}\, \nabla^{+L}\big(e^{-2\phi}\, \mathrm{d}H_{LMAB}\big)$.
Second order equations of motion
We now derive the equations of motion from the action (3.2). As the action is the same as the first order action, one might guess that the equations of motion will be the same too. This is not quite correct, and we take a moment to explain why. When deriving the first order equations of motion, one relies on the lemma of [16], equation (2.17), from which it follows that the Hull connection satisfies an equation of motion of its own whenever the other fields do. As a necessary condition for satisfying the first order equations of motion is that the zeroth order equations of motion are of O(α′), the variation of the action with respect to ∇⁻ can be ignored, as it is of O(α′²). This simplifies matters when deriving the first order equations of motion. At second order, however, such terms have to be included, potentially leading to a more complicated set of equations.
¹¹ It should be noted that the specific form of these corrections, in which there are no covariant derivatives of the spinor in the O(α′²)-correction, requires the addition of an extra term of O(α′²) to the fermionic sector of the action [16].
We note that the O(α′²)-corrections to the equations of motion come from the variation of the action with respect to ∇. What they are will crucially depend on which connection ∇ is used. Let us write the connection one-form of ∇ as the Hull connection one-forms Θ⁻ plus a correction θ = O(α′), which depends on the other fields of the theory in some unspecified way. The action then takes the form (3.8). Let us compute δ_θ S. Inserting the result back into the action, we may write the correction in the form (3.9), since the expression in brackets is proportional to a combination of zeroth order bosonic equations of motion according to (2.17). It follows from (3.9) that the change of connection θ may be thought of as an O(α′²) field redefinition, as this is precisely how the action gets corrected when we perform such a field redefinition. This is similar to the O(t, α′) field redefinitions we described in the previous section. In the same way, it follows that the change of the connection θ is unphysical. Let us next compute the variation of the action (3.8) with respect to the connection ∇, assuming that the first order equations of motion are satisfied, and using the first order equations of motion in the process. Note that any variations depending on δθ drop out of this expression. This is due to (3.9) and the fact that θ is of order α′, which implies that variations of the action with respect to θ are of O(α′³) at the locus of the first order equations of motion. We therefore only need to worry about the δΘ⁻-part when varying the action with respect to ∇. Equation (2.17) also guarantees that the expression in (3.10) is of O(α′²). The change of the O(α′²) equations of motion depends on what the expression in the brackets is, which again depends on our connection choice. It should be stressed that even though θ corresponds to a field choice, this does not mean that any field choice will do. We want to choose our fields so that supersymmetry, and in particular the Strominger system, is compatible with the equations of motion.
Recall that the β-functions of the (0, 2)-sigma model correspond to the heterotic supergravity equations of motion. In [39] it was noted that the three-loop β-function of the gauge connection equals the two-loop β-function.¹² That is, the β-function of the gauge field does not receive corrections at this order, and nor should the corresponding supergravity equation of motion. This is consistent with the supergravity point of view [16]. Motivated by this, and guided by the symmetry between ∇ and the gauge connection in the action, it seems natural to choose ∇ so that it satisfies its own equation of motion (3.11) at this order. This is exactly the equation one gets when varying the action with respect to ∇, and it is indeed satisfied by the Hull connection at first order. It is easy to see that choosing this connection is in fact equivalent to choosing θ so that the expression in brackets in (3.10) vanishes, modulo higher orders. This again implies that all the first order equations of motion remain the same at O(α′²). Of course, changing the connection also requires that we change the supersymmetry variations appropriately, in order that the full action remains invariant under supersymmetry transformations at O(α′²). This also relates to how we correct the connection away from the locus of the equations of motion. We will return to this later, when we also consider supersymmetric solutions. We shall see that supersymmetric solutions may be assumed to be solutions of the Strominger system (∇⁺η = 0) without loss of generality. Moreover, they exist if and only if ∇ is an instanton, and in particular (3.11) is satisfied. This is in complete analogy with the gauge connection, as the supersymmetry condition for A is that F remains an instanton at O(α′²) as well.
The Hull connection at O(α′³)
Before we go on to consider the more general connection choices in more detail, let us return to the Hull connection used in [16]. We shall see that choosing this connection severely restricts the allowed supersymmetric solutions. The corrections to the theory in the case of the Hull connection have been worked out in [16] to O(α′³), and we consider the theory to this order.
We look for supersymmetric solutions at O(α′³). As we shall see, insisting upon the Hull connection restricts the allowed supersymmetric solutions. This was also argued in [25], where the first order theory was taken to be exact, which led to Calabi-Yau solutions as the only consistent solutions.
We will consider the theory up to cubic order in α′. At this order, the bosonic action may be given as in [16], equation (3.12). Note again the symmetry between the gauge connection A and the Hull connection ∇⁻ in this action. The supersymmetry transformations receive corrections accordingly. Let us now consider supersymmetric solutions. From Proposition 1, setting n = 2, we may assume without loss of generality that the corresponding supersymmetric solutions are solutions of the Strominger system. It again follows that, since the action remains uncorrected at O(α′²), the connection ∇ must still be an instanton at O(α′) by appendix B. Insisting on a particular choice of connection may then over-constrain the system, as it is not guaranteed that this connection is an instanton. We see next how this plays out for the particular example of the Hull connection.
Theorem 2. For compact smooth compactifications, if we insist upon using the Hull connection at O(α′ⁿ), n ≥ 2, we can without loss of generality assume that the first order solution is Calabi-Yau. If we also assume that the O(α′)-corrections are purely geometric, i.e. non-topological, then the second order geometry can be assumed to be Calabi-Yau as well.
Proof. Let us first consider the theory at O(α′²). From Proposition 1, with n = 2, we can without loss of generality assume that the geometry solves the Strominger system. We then further need to require the instanton condition for the Hull connection. At this order in α′, this is a nontrivial condition, as follows from the identity (C.1). Using these results, we see that the cubic corrections to the supersymmetry conditions become quartic once the O(α′²) supersymmetry conditions and the equations of motion are imposed. We find that a variation of the cubic corrections to the action at the supersymmetric locus is of O(α′⁴) as well. Arguing this way order by order, it then follows that, in order for the equations of motion to be satisfied to cubic order, we need the requirement dH = O(α′³). We also have a relation for H by supersymmetry. Note that $d_\phi^2 = 0$, with corresponding elliptic Laplacian $\Delta_{d_\phi}$. It follows that any form γ has a Hodge decomposition; in particular, H may be decomposed with some three-form κ.
From (3.16) it follows that $d_\phi H = \mathcal{O}(\alpha'^3)$, which in turn constrains the α′-expansion of H, where we have excluded fractional powers of α′ in the expansion. It follows that the first order geometry is Calabi-Yau.
We can go further if we make a mild assumption about the α′-corrections. First note that $\ker(\Delta_{d_\phi}) \cong \ker(\Delta_d)$, and $\ker(\Delta_d)$ is topological. If we assume that the α′-corrections are small, and in particular do not change the topology of X, it follows that $|\ker(\Delta_{d_\phi})|$ does not change under α′-corrections. In particular, there are no new zero-modes as α′ → 0. From this it follows that for $\lambda_i \neq 0$ we have $\lambda_i = \mathcal{O}(1)$. From (3.17) it then follows that X is Calabi-Yau, both at first and second order in α′.
Choosing other connections
We now consider what happens if a connection other than the Hull connection is chosen, that is, θ ≠ 0. We work at O(α′²) for the time being, and leave the cubic and higher order corrections for future work.
As argued in [16], the higher order corrections to the supersymmetry transformations come from the failure of (Θ, ψ_IJ) to transform as an SO(9, 1) Yang-Mills multiplet. Under supersymmetry transformations, the multiplet varies as follows, where (C.1) has been used in the second equality of the expression for δψ_IJ. Note that without the α′-effects, the multiplet transforms as an SO(9, 1) Yang-Mills multiplet. This is how the symmetry in the action between the gauge connection and the tangent bundle connection arises at O(α′).
The O(α ′ ) correction to the transformation of Θ I JK depends on how the correction θ of the connection is defined in terms of the other fields of the theory. This correction is
what makes the action fail to be invariant under supersymmetry transformations. However, this failure of the action to be invariant may be absorbed into an O(α′²)-redefinition of the bosonic supersymmetry transformations due to (2.17), as is done in [16] for the case of the Hull connection. The same holds for the O(α′) correction to δψ_IJ: this can be absorbed into a redefinition of the supersymmetry transformations of the fermions due to (2.18). For the more general connection choice, it turns out that the correction we need only requires a change of the three-form P, but otherwise the transformations (3.4)–(3.6) remain the same. Note also that as the deformation of the connection can again be viewed as an O(α′²) field redefinition, the new supersymmetry algebra is again closed. We now compactify our theory on a complex three-fold X. By the argument given in Proposition 1, with n = 2, we can assume without loss of generality that the solution satisfies the Strominger system. By the rewriting of the bosonic action (B.1), which we stress holds true at O(α′²), we find that for the equations of motion to hold we need the instanton condition (3.19). Note the similarity between this condition and the supersymmetry condition for the gauge field (3.6). Supersymmetry now also requires a further condition, by (3.18) and (3.4), in which A, B denote flat indices on X. This equation is, however, trivial once we know that R is an instanton; indeed, this follows using (C.1). It should also be mentioned that the instanton connection solves the ∇-equation of motion (3.11), as shown in [17, 40]. Indeed, in dimension six, by the supersymmetry conditions, it follows that
$$e^{2\phi}\,\mathrm{d}_{\nabla}\big(e^{-2\phi}\,{*R}\big) - R \wedge {*H} \;=\; e^{2\phi}\,\mathrm{d}_{\nabla}\Big(e^{-2\phi}\big({*R} + R\wedge\omega\big)\Big)\,.$$
As R is both of type (1, 1) and primitive, we have the identity $*R = -\omega \wedge R$. It follows that the instanton connection satisfies the ∇-equation of motion, and the first order equations of motion do not receive corrections.
We have thus gone through the proof of the statements in Theorem 1. Next, we want to consider their interpretation and give a discussion of the results. In doing so we also give our reasons for proposing Conjecture 1.
Summary of results
In the first order theory, we saw that the connection ∇ = ∇⁻ + tθ, where θ depends on the fields of the theory in some way, should satisfy the instanton condition whenever the solution is supersymmetric of Strominger system type. As shown in e.g. [1], this condition has an infinitesimal moduli space of the form (4.1), $T\mathcal{M} \cong H^{(0,1)}(X, \mathrm{End}(TX))$, where the tangent space is taken at the Hull connection. At first order, the requirement that the full supergravity action be invariant under the usual supersymmetry transformations reduces the choice to the Hull connection. Hence, the t-deformed theory requires changes to the supersymmetry transformations, and we found what these were. We also saw that the allowed deformation space of connections, for which supersymmetric solutions of the Strominger system exist, was given by (4.1). Supersymmetric solutions could also be assumed to be solutions of the Strominger system by Proposition 1. Moreover, by the lemma of Bergshoeff and de Roo [16], these deformations correspond to infinitesimal O(α′) field redefinitions.
Returning to the usual form of the first order supergravity, we saw that at second order the theory can again be corrected appropriately for any O(α′)-change θ of the Hull connection ∇⁻, corresponding to O(α′²) field redefinitions. Supersymmetric solutions could again be assumed to be solutions of the Strominger system, and the equations of motion are compatible with supersymmetry if and only if ∇ = ∇⁻ + θ again satisfies the instanton condition.
Higher orders
Let us now take a moment to discuss higher orders in α′. Note that the condition we found for compatibility between supersymmetry and the equations of motion, (3.19), is exactly the supersymmetry condition we would get from this "connection sector" if ∇ were part of a dynamical superfield, in close analogy with the gauge sector. Indeed, the fact that (∇⁻, ψ_IJ) transforms as an SO(9, 1)-Yang-Mills multiplet to O(α′) is what motivated the construction of the action of [16] in the first place. From the discussion above, it appears that supersymmetric solutions behave as if this were the case, at least for compact solutions, including at O(α′²). The question then arises: what happens at O(α′³) and higher?
It should first be noted that at higher orders, the form of the supergravity action is no longer unique, and undetermined (curvature)⁴-terms appear [16]. The form of these terms may, however, be determined through other means, such as string amplitude calculations [41, 42], which were also used in [16], and these terms indeed preserve the symmetry between the Lorentz and Yang-Mills sectors.
With this, it therefore seems natural to conjecture that the above structure also survives to higher orders. That is, the natural connection ∇ used to calculate the curvatures should be chosen so that it satisfies an equation of motion similar to that of A, whenever the other equations of motion are satisfied. Moreover, for supersymmetric solutions, ∇ should satisfy a supersymmetry condition similar to that of A. We also conjecture that, as seen at O(α′), the moduli of this "supersymmetry condition" are equivalent to field redefinitions, and therefore do not correspond to physical low energy fields in any sense.
Future directions
Having reviewed our results and discussed higher orders in α′, there are a few unanswered questions which we would like to look into in the future. Firstly, it would be interesting to check the proposed conjecture at the next order in α′. This should not be very difficult, as the cubic theory was laid out in general in [16], and we only have to repeat their analysis using a more general connection. It should be noted that at this order the supersymmetry condition for the gauge field does receive corrections (3.15), and we expect this to be true for the tangent bundle connection as well. It would be interesting to check whether, at cubic or higher order in α′, supersymmetric solutions still satisfy the Strominger system. As noted in [1], these solutions can be recast in terms of a holomorphic structure D on a generalised bundle Q = T*X ⊕ End(TX) ⊕ End(V) ⊕ TX, and it would be interesting to see if this structure survives beyond second order as well. This might be expected, as the authors of [22] argue that the generalised geometric structure introduced on Q survives to higher orders. Next, it would be interesting to return to the first order theory and consider higher order deformations of the Hull connection. Indeed, in section 2 we only considered infinitesimal deformations away from the Hull connection of this theory. That is, we considered the tangent space of the moduli space of connections at the Hull connection, which we saw corresponded to infinitesimal O(α′) field redefinitions. It would be interesting to perform higher order deformations of the connection, i.e. deformations of O(t²) and above, and to see how this relates to obstructions of the corresponding deformation theory. Moreover, do such "finite" deformations still correspond to field redefinitions?
It would also be interesting to consider our findings in relation to the sigma model. In terms of the first order sigma model, it was pointed out in [7] that changing the connection ∇ corresponds to O(α ′ ) field redefinitions, consistent with the findings of the present paper. Requiring world-sheet conformal invariance, i.e. the ten-dimensional equations of motion, in addition to space-time supersymmetry, puts conditions on the connection. As we have seen, and as was first noted in [3], it is sufficient to use the Hull connection at first order.
This connection was also necessary modulo field redefinitions. We found that the allowed field redefinitions correspond to the moduli space (4.1), and it would be interesting to see if this moduli space can be retrieved from the sigma model point of view as well.
At next order, we found that the Hull connection was not a good field choice, provided one wants supersymmetric solutions of the Strominger system. Still, we found that the necessary and sufficient condition for compatibility was that ∇ satisfies the instanton condition. Moreover, ∇ is related to the Hull connection by a corresponding O(α′²) field redefinition. For supersymmetric solutions of the Strominger system, however, the Hull connection led to overly stringent constraints on the geometry, leading us back to Calabi-Yau. It would be interesting to investigate this further from a sigma-model point of view. In particular, it would be interesting to see what the more "natural field choices", i.e. connections satisfying the instanton condition, look like in this picture.
The three-form Ψ may also be used to define an almost complex structure J on X, built from an endomorphism I, with the normalization in (A.2) chosen so that J² = −1. Note also that the complex structure J is independent of rescalings of Ψ. A general SU(3)-structure is parameterised by five torsion classes, (W₀, W₁^ω, W₁^Ψ, W₂, W₃), entering the expansions of dω and dΨ [43–46]. Here W₀ is a complex function, W₂ is a primitive (1, 1)-form, and W₃ is real, primitive, and of type (1, 2) + (2, 1). Also, W₁^ω is a real one-form, while W₁^Ψ is a (1, 0)-form. These are known as the Lee-forms of ω and Ψ respectively. It should be noted that W₂ = W₀ = 0 is equivalent to the vanishing of the Nijenhuis tensor, and therefore equivalent to X being complex.
B Proof of instanton condition
In this appendix, we repeat the proof of [1] showing that supersymmetric solutions of the Strominger system, and the equations of motion are compatible if and only if ∇ is of instanton type. We consider the theory including O(α ′2 ) terms. We note that this is a special case of a more general proof, which appeared in [33].
Recall first that the second order bosonic action is the same as the first order action [16]. According to [47], the six-dimensional part of the action (2.1) may be written in terms of SU(3)-structure forms as
$$S_6 = \frac{1}{2}\int_X e^{-2\phi}\Big(-4\,|\mathrm{d}\phi - W_1^\omega|^2 + \omega\wedge\omega\wedge\tilde{R} + |H - e^{2\phi}\,{*\,\mathrm{d}}(e^{-2\phi}\omega)|^2\Big) - \frac{1}{4}\int \mathrm{d}^6y\,\sqrt{g_6}\; N_{mn}{}^{p}\, g^{mq}\, g^{nr}\, g_{ps}\, N_{qr}{}^{s}$$
$$\qquad - \frac{\alpha'}{2}\int_X e^{-2\phi}\Big(\operatorname{tr}|F^{(2,0)}|^2 + \operatorname{tr}|F^{(0,2)}|^2 + \tfrac{1}{4}\operatorname{tr}|F_{mn}\,\omega^{mn}|^2\Big) + \frac{\alpha'}{2}\int_X e^{-2\phi}\Big(\operatorname{tr}|R^{(2,0)}|^2 + \operatorname{tr}|R^{(0,2)}|^2 + \tfrac{1}{4}\operatorname{tr}|R_{mn}\,\omega^{mn}|^2\Big) + \mathcal{O}(\alpha'^3)\,, \qquad \text{(B.1)}$$
where the Bianchi identity has been applied. $\tilde{R}$ is now the Ricci-form of the unique connection $\tilde\nabla$ with totally antisymmetric torsion, for which the complex structure is parallel. For supersymmetric solutions of the Strominger system, the connection ∇⁺, see equation (2.6), coincides with $\tilde\nabla$, which is known as the Bismut connection in the mathematics literature. The Ricci-form is $\tilde{R} = \tfrac{1}{4}\,\tilde{R}_{pqmn}\,\omega^{mn}\,\mathrm{d}x^p\wedge\mathrm{d}x^q$, while $N_{mn}{}^p$ is the Nijenhuis tensor for this almost complex structure. Note that $\tilde{R} = 0$ is an integrability condition for supersymmetry. Performing a variation of the action at the supersymmetric locus, we find that most of the terms vanish; the only surviving contributions are the $\delta\tilde{R}$ term and the α′ curvature terms. In [47] it is shown that $\delta\tilde{R}$ is exact, and therefore the first term vanishes using supersymmetry by an integration by parts. If the equations of motion are to be satisfied to the order we work at, we therefore find $R_{mn}\Gamma^{mn}\eta = \mathcal{O}(\alpha'^2)$, which is equivalent to the instanton condition. Note the reduction in orders of α′ due to the factor of α′ in front of the curvature terms in the action.
C The Hull connection
For completeness, we also repeat the argument of [26] that the Hull connection does indeed satisfy the instanton condition in the O(α′)-theory whenever we have supersymmetry. We work in ten dimensions in this appendix. It is easy to show that, at the supersymmetric locus, the condition follows, as required.
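The display equations of this appendix were also lost; the key ingredient is the standard identity relating the two torsionful curvatures, which in common conventions (our reconstruction of what (C.1) should be) reads

$$R(\nabla^{+})_{MNPQ} \;=\; R(\nabla^{-})_{PQMN} \;+\; \tfrac{1}{2}\,(\mathrm{d}H)_{MNPQ}\,.$$

Since ∇⁺ǫ = 0 for supersymmetric solutions and dH = O(α′) by the Bianchi identity, one finds $R(\nabla^{-})_{mn}\Gamma^{mn}\eta = \mathcal{O}(\alpha')$, i.e. the Hull connection satisfies the instanton condition to the order considered.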
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 11,510.8 | 2014-12-01T00:00:00.000 | ["Physics"] |
In Vitro Synergistic Enhancement of Newcastle Disease Virus to 5-Fluorouracil Cytotoxicity against Tumor Cells
Background: Chemotherapy is one of the antitumor therapies used worldwide, in spite of its serious side effects and unsatisfactory results. Many attempts have been made to increase its activity and reduce its toxicity. 5-Fluorouracil (5-FU) is still a widely used chemotherapeutic agent, especially in combination with other chemotherapies. Combination therapy seems to be the best option for targeting tumor cells by different mechanisms. Virotherapy is a promising agent for fighting cancer because of its safety and selectivity. Newcastle disease virus (NDV) is safe, and it selectively targets tumor cells. We previously demonstrated that NDV could be used to augment other chemotherapeutic agents and reduce their toxicity by halving the administered dose and replacing the eliminated portion with the virus; the same antitumor activity was maintained. Methods: In the current work, we tested this hypothesis on different tumor cell lines. We used the non-virulent LaSota strain of NDV in combination with 5-FU, and we measured the cytotoxic effect. We evaluated this combination using Chou–Talalay analysis. Results: NDV was synergistic with 5-FU at low doses when used as a combination therapy on different cancer cells, and there were only very mild effects on non-cancer cells. Conclusion: The combination of the avirulent, non-pathogenic NDV–LaSota strain with a standard chemotherapeutic agent, 5-FU, has a synergistic effect on different tumor cells in vitro, suggesting this combination could be an important new adjuvant therapy for treating cancer.
Introduction
The mainstays of treatment for advanced cancers are chemotherapy and radiotherapy. However, they are limited due to the resistance of tumor cells to these agents, as well as their narrow therapeutic index. Therefore, combination therapies were invented to overcome cancer cell resistance and to increase the anti-tumor effect while considering the toxicity for normal tissue [1]. 5-Fluorouracil is an important chemotherapeutic agent for many solid tumors, particularly gastrointestinal, brain, and head and neck malignancies. 5-FU has also been actively investigated during the last 40 years for many tumors. However, the role of systemic 5-FU in cancer therapy has been limited by the fact that dose-limiting side effects (myelosuppression and stomatitis) are usually reached before evidence of antitumor response [2,3]. Antitumor chemotherapeutic agents, such as 5-fluorouracil, are toxic to the small intestine and make it dysfunctional [4]. Effective antitumor
Virus
The lentogenic (non-virulent) LaSota strain of NDV was obtained from the Al-Kindy Company for veterinary vaccines (Baghdad, Iraq). A stock of infectious virus was propagated in embryonated chicken eggs, harvested from the allantoic fluid, and purified from debris by centrifugation (3000 rpm, 30 min, 4 °C). NDV was quantified using a hemagglutination test, in which one hemagglutination unit (HAU) is defined as the smallest virus concentration leading to visible chicken erythrocyte agglutination.
Chemotherapeutic Agent
5-FU (5-fluorouracil; SP Pharmaceuticals, Albuquerque, NM, USA) was purchased from the Radiation and Atomic Medicine Hospital (Baghdad, Iraq). The agent was diluted in medium without calf bovine serum just before use in the in vitro studies.
Combination Cytotoxicity Assays
To determine the cytotoxic effect of NDV and 5-FU in combination treatment, the MTT cell viability assay was conducted on 96-well plates (Becton, Dickinson, Franklin Lakes, NJ, USA). Hep-2, RD, AMN3, and Vero cells were seeded at 1 × 10⁴ cells/well. After 24 h, or once a confluent monolayer was achieved, cells were treated with the virus alone (infected with NDV at 128 HAU with two-fold serial dilutions), the drug alone (the chemotherapeutic agent 5-FU at 5 µg/mL in two-fold serial dilutions down to 0.039 µg/mL), or a combination of the two (virus + 5-FU in two-fold serial dilutions). The procedure of adding these therapeutic agents involved addition of the virus for 2 h at room temperature to allow for viral attachment and penetration. Afterwards, cells were washed with PBS, and serial dilutions of the drug were added to the infected and non-infected cells. Cell viability was measured 72 h after infection by removing the medium, adding 28 µL of a 2 mg/mL MTT solution (Sigma-Aldrich, St. Louis, MO, USA), and incubating the cells for 1.5 h at 37 °C. After removing the MTT solution, the crystals remaining in the wells were solubilized by the addition of 130 µL of DMSO (dimethyl sulphoxide; BDH, London, UK), followed by incubation at 37 °C for 15 min with shaking [20]. The absorbance was determined on a microplate reader (Organon Teknika Reader 230S, Salzburg, Austria) at 492 nm (test wavelength); the assay was performed in triplicate. The inhibition rate of cell growth (the percentage of cytotoxicity) was calculated as (A − B)/A × 100, where A is the mean optical density of untreated wells and B is the mean optical density of treated wells. The LC50 is the lowest concentration that kills 50% of the cells [21]. Each experiment was repeated at least three times in triplicate.
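The growth-inhibition computation described above is simple enough to express directly in code. The following is a minimal sketch with made-up optical densities rather than the study's measurements:

```python
# Sketch: percent cytotoxicity from MTT optical densities (OD at 492 nm).
import numpy as np

def cytotoxicity(untreated_od: np.ndarray, treated_od: np.ndarray) -> float:
    """(A - B) / A * 100, with A and B the means of triplicate wells."""
    a = np.mean(untreated_od)
    b = np.mean(treated_od)
    return (a - b) / a * 100.0

# Placeholder triplicate readings (not the paper's data).
control = np.array([0.91, 0.88, 0.93])
treated = np.array([0.35, 0.31, 0.33])
print(f"cytotoxicity: {cytotoxicity(control, treated):.1f}%")
```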
Chou-Talalay Analysis
The median-effect doses (ED50) were calculated for the drug and for NDV in each cell line. For synergism determination, NDV and 5-FU were studied at non-constant ratios. To analyze the combination of NDV and 5-FU, Chou-Talalay combination indices (CI) were calculated using CompuSyn software (ComboSyn, Inc., Paramus, NJ, USA). Non-fixed ratios of NDV and chemotherapeutic, together with the mutually exclusive equations, were used to determine the CIs. A CI between 0.9 and 1.1 is considered additive, whereas CI < 0.9 and CI > 1.1 indicate synergism and antagonism, respectively [22,23].
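For readers who want to reproduce such an analysis outside CompuSyn, the sketch below shows how a combination index can be computed for a mutually exclusive two-agent combination under the Chou-Talalay median-effect model. All parameter values are hypothetical; the paper's actual CIs were produced by CompuSyn, not by this code:

```python
def median_effect_dose(dm, m, fa):
    """Dose needed for effect level fa under the median-effect equation:
    fa / (1 - fa) = (D / Dm) ** m, solved for D."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, dm1, m1, dm2, m2, fa):
    """Chou-Talalay CI for mutually exclusive agents: CI = d1/Dx1 + d2/Dx2,
    where Dx_i is the dose of agent i alone producing effect fa."""
    dx1 = median_effect_dose(dm1, m1, fa)
    dx2 = median_effect_dose(dm2, m2, fa)
    return d1 / dx1 + d2 / dx2

# Hypothetical parameters; CI < 0.9 synergism, 0.9-1.1 additive, > 1.1 antagonism
ci = combination_index(d1=0.312, d2=8.0, dm1=1.1, m1=1.4, dm2=20.0, m2=1.2, fa=0.5)
print(f"CI = {ci:.2f}")
```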
Combination Chemotherapy and Viral Cytotoxicity in Vitro
To study the potential interaction between NDV and chemotherapy in vitro, the effectiveness of the combined treatment of several concentrations of 5-FU with NDV at various hemagglutination units was evaluated in the Hep-2, RD, AMN3, and Vero cell lines. Cells were treated with NDV alone, with 5-FU alone, or with the combination of NDV and 5-FU. Cell viability was determined after 72 h using the MTT assay.
In RD, rhabdomyosarcoma, combination therapy had a significant 71.6% cytotoxicity (PR = 29%) (p = 0.0001) at 0.625 µg/mL 5-FU and 16 HAU NDV. NDV treatment alone showed 64.1% cytotoxicity (PR = 35.9%) (p = 0.0001) at 16 HAU, and 1.25 µg/mL 5-FU alone also showed a significant cytotoxic effect of 53.8% (PR = 46.2%) (p = 0.0001), which is less than the combination of NDV with half the dose of 5-FU. Combination therapy at 0.312 µg/mL 5-FU and 8 HAU showed 49.7% growth inhibition (p = 0.002), whereas chemotherapy alone with 5-FU at the two-fold dose (0.625 µg/mL) produced 39.7% growth inhibition (PR = 60.3%) (p = 0.1) (Figure 2a). Data were further analyzed using the Chou-Talalay equations and the dose-oriented isobologram technique; there was synergism between NDV and 5-FU at the 50% growth inhibition doses, as represented in Figure 2b. In AMN3, a mouse mammary adenocarcinoma cell line, combination therapy was effective (p = 0.006) and had an effect similar to 5-FU alone at two-fold doses. There were no significant differences between the combination with a half dose of chemotherapy and 5-FU alone. These results show that NDV could compensate for the reduction in the 5-FU dose, even though NDV treatment alone had no significant effect on these tumor cells at any of the concentrations (Figure 3a). To study the effect of the combination treatment on non-cancer cells, the Vero monkey kidney transformed cell line was used. Generally, most of the concentrations used alone or in combination showed no significant differences (Figure 4). As there was no significant cytotoxic effect, there was no need to apply the Chou-Talalay equation.
Discussion
The primary objective of this study was to determine whether cancer chemotherapy can be augmented by virotherapy. Furthermore, we sought to determine whether the toxicity of cancer chemotherapeutic agents can be reduced by lowering the administered dose of 5-FU and replacing it with Newcastle disease virus therapy, while maintaining the same or greater antitumor activity and overcoming resistance to chemotherapy.
Based on our results in four different cell lines, the lentogenic NDV strain (LaSota, used as a live vaccine against Newcastle disease) exhibited oncolytic activity on three tumor cell lines and, to a lower degree, on the transformed cell line (Vero). Previous studies have shown that virulent NDV strains are oncolytic [14,24]. The NDV LaSota strain showed anti-lymphoma activity both in vitro and in vivo [25]. Furthermore, Walter et al. [26] showed that the NDV LaSota strain kills human pancreatic cancer cells in vitro with 700-fold higher selectivity than normal cells.
Fabian et al. [27] used the attenuated NDV MTH-68/H strain, which was originally a vaccine strain; it showed antitumor activity both in vitro and in vivo in clinical trials [28,29]. Moreover, Pecora et al. [30] used a naturally attenuated strain of NDV (PV701) that exhibits a broad range of oncolytic activity against human tumors in vitro; they introduced the strain into clinical trials. Schirrmacher et al. [14] used the lentogenic (avirulent) Ulster strain and found that infection of cancer cells by this non-lytic, non-virulent NDV strain (30 HU/10^7 cells) eventually causes tumor cell death in vitro; the strain also replicates selectively in tumor cells [31].
The combination of NDV and 5-FU showed greater cytotoxic efficacy than NDV alone or a two-fold dose of 5-FU alone, and the effect appears to be synergistic according to the Chou-Talalay analysis. The mechanism(s) of the synergistic activity of 5-FU combined with NDV is unknown, but we propose a few hypotheses. First, NDV may augment the antitumor activity of 5-FU by increasing cellular sensitivity to chemotherapeutic agents; this enhanced sensitivity is partially caused by the induction of apoptosis in response to virulent NDV strains [32]. Second, the combined dose of 5-FU may augment viral replication, as suggested by many studies on oncolytic viruses [19,33]. Each agent may also work independently on different cell populations, although this is unlikely to be the case here. In addition, virotherapy with NDV may complement the antitumor activity of 5-FU by selectively targeting tumor cell populations that are resistant to chemotherapy. This may be of particular value because most human tumors consist of a mixture of cells with different genetic makeups, and heterogeneity in tumor cell populations may be the major reason most monotherapies fail to achieve complete tumor remission [34]. Moreover, one objective of this study was to reduce the toxic side effects of chemotherapy in cancer patients, which can be achieved by reducing the administered dose while maintaining the same or stronger antitumor activity. The current experimental results support this claim, but in vivo evaluation is needed.
Several characteristics of the NDV LaSota strain are favorable for human use, including the genetic stability of the vaccine strains, the absence of genetic recombination, the lack of antigenic drift, and the lack of observed human-to-human transmission [35,36]. NDV has been safely administered to humans in clinical trials; additionally, accidental exposure in farmers is reported to induce only self-limiting conjunctivitis [28,35,37]. While NDV is safe and lacks toxicity, 5-FU causes myelosuppression and stomatitis before achieving an antitumor response [2].
Conclusions
An avirulent, non-pathogenic NDV LaSota strain, in combination with a standard chemotherapeutic agent, 5-FU, has a synergistic effect in vitro on different tumor cells, suggesting this approach could be an important new adjuvant therapy for treating cancer.
"Biology",
"Medicine"
] |
Ranking Methods for Multicriteria Decision-Making: Application to Benchmarking of Solvers and Problems
Evaluating the performance of solvers (e.g., computation programs), known as the solver benchmarking problem, has become a topic of intense study, and various approaches have been discussed in the literature. Such a variety of approaches exists because a benchmark problem is essentially a multicriteria problem. In particular, an appropriate multicriteria decision-making problem corresponds naturally to each benchmark problem and vice versa. In this study, to solve the solver benchmarking problem, we apply the ranking-theory method recently proposed for solving multicriteria decision-making problems. The benchmarking problem of differential evolution algorithms was considered as a case study to illustrate the ability of the proposed method. This problem was solved using ranking methods from different areas of origin. The comparisons revealed that the proposed method is competitive and can be successfully used to solve benchmarking problems and obtain relevant engineering decisions. This study can help practitioners and researchers use multicriteria decision-making approaches for benchmarking problems in different areas, particularly software benchmarking.
Introduction
Recently, evaluating the performance of solvers (e.g., computer programs), that is, the problem of solver benchmarking, has attracted significant attention from scientists. Currently, most benchmarking tests produce tables that present the performance of each solver for each problem according to a specified evaluation metric (e.g., the central processing unit (CPU) time or the number of function evaluations) and use various statistical tests for the conclusions. Thus, the selection of the benchmarking method currently depends on the subjective tastes and preferences of individual researchers: the components of the benchmarking process, including the solver set, the problem set, the metric for performance assessment, and the statistical tools for data processing, are all chosen individually according to the researcher's preferences. For example, the performance profile method, which is currently the most popular and widely used method in practice (see [1]), is based on a comparative analysis of empirical probability distribution functions obtained in numerical experiments with different solvers.
In this study, we consider the benchmarking process from a viewpoint that emphasizes the natural relations between problems and solvers, as determined by their evaluation tables (see [2]). Specifically, we present data for benchmarking in the form of a so-called benchmarking context, that is, a triple 〈S, P, J〉, where S and P are sets of solvers and problems, respectively, and J: S × P → R is an assessment function (a performance evaluation metric). Throughout the paper, the sets of solvers and problems are assumed to be finite.
This concept is quite general and emphasizes that problems, solvers, and assessment functions must be considered closely related and not independent. The benchmarking procedure presented in this study is described as follows. The data encapsulated by the given benchmarking context 〈S, P, J〉 are used to build the corresponding multicriteria decision-making (MCDM) problem 〈A, C〉, where A = S is a set of alternatives and C = {J(·, p) | p ∈ P} is a set of criteria. Hence, we define a decision matrix as a matrix whose elements exhibit the performance of the different alternatives (i.e., solvers) with respect to the various criteria (i.e., problems) through the assessment function.
Thus, the investigation of benchmarking problems is reduced to an MCDM problem. Moreover, for each MCDM problem, a corresponding benchmarking context can be presented. The rationale for such a consideration is that a vast array of different approaches for MCDM problems can be used for benchmarking problem analysis. In particular, such a multicriteria formulation allows the consideration of Pareto-optimal alternatives (i.e., solvers) as "good" solvers. The next innovation presented in this study is that a recently proposed technique (see [3]) is used to solve the MCDM problem corresponding to a benchmarking problem. The multicriteria formulation is a typical starting point for theoretical and practical analyses of decision-making problems, which helps clarify the essence of the new technique used in this study. Correspondingly, based on the fundamental concept of Pareto optimality, several methods and computational procedures have been developed to solve MCDM problems (see, e.g., the overviews in [4-8] and, more recently, [9-11]). However, unlike single-objective optimization, a characteristic feature of Pareto optimality is that the set of Pareto-optimal alternatives is typically large, and all of these Pareto-optimal alternatives must be considered mathematically equal (equally "good"). Correspondingly, the problem of choosing a specific Pareto-optimal alternative for implementation arises because the final decision must usually be unique. Hence, additional factors must be considered to aid decision-makers in selecting specific or more favorable alternatives from the set of Pareto-optimal solutions. Therefore, we build a special score matrix for the MCDM problem, which allows us to construct the corresponding ranking for the alternatives [3]. The score matrix can be built in different ways, but we use the simplest and most natural one: a score matrix that counts how many times one alternative is better than another according to the criteria. Hence, the proposed approach may yield an "objective" ranking method and provide an "accurate" ranking of the alternatives for MCDM. Correspondingly, the best-ranked alternative from the Pareto set is declared a "true" solution to the MCDM problem. The approach presented in this study for solving MCDM problems is useful when no decision-making authority is available or when the relative importance of the various criteria has not been previously evaluated.
Finally, we demonstrate the possibilities of the proposed method in a case study based on the computational and experimental results for benchmarking differential evolution (DE) algorithms presented by Sala et al. [12]. Specifically, we benchmark nine DE algorithms on a set of 50 test problems using the random-sampling-equivalent expected run time (ERT_RSE) performance metric. Through a numerical investigation, we demonstrate that the solution results of the MCDM problem obtained using the methods proposed in this study are quite competitive.
Contributions.
This paper makes the following main contributions: (1) the concept of the benchmarking context is introduced according to [2], and it is confirmed that a one-to-one correspondence exists between the set of benchmarking contexts and the set of MCDM problems; (2) the ranking-theory approach is proposed for solving MCDM problems corresponding to a given benchmarking context [3]; (3) the approach proposed in this article is tested on a known literature dataset for benchmarking DE algorithms (see [12]), and the possibility of effectively solving benchmarking problems is fully confirmed. Without claiming to be a complete review, we present a brief overview of the literature on the benchmarking problem in the context of optimization problems. Generally, the consideration of a benchmarking problem is motivated by various reasons, such as selecting the best solver (algorithm, software, etc.) for some class of problems, testing proposed novel solvers, and evaluating solver performance for different option settings. For example, early contributions to the benchmarking of optimization algorithms are considered in [13]. The results achieved at an early stage in the development of the subject can be judged from the work of the following researchers: Nash and Nocedal [14], Billups et al. [15], Conn et al. [16], Sandu et al. [17], Mittelmann [18], Vanderbei and Shanno [19], and Bondarenko et al. [20]. The beginning of a new stage of development is associated with the research work of Dolan and Moré [21], in which the performance profile comparison technique was proposed.
This technique is now prevalent (but see, e.g., Gould and Scott [22]). Along with the performance profile comparison method, other more direct approaches have also been used in modern research. An idea of the modern research in the area under consideration can be obtained from the following examples: Moles et al. [23], Mittelmann [24], Benson et al. [25], Kämpf et al. [26], Foster et al. [27], Rios and Sahinidis [28], Weise et al. [29], Sala et al. [12], and Cheshmi et al. [30]. A critical overview of the current state of the subject area was provided by Beiranvand et al. [1].
To conclude this brief overview, we note that this study focuses on benchmarking solvers for optimization problems only. However, the concept of benchmarking has a much broader context (see, e.g., https://en.wikipedia.org/wiki/Benchmarking). The approach proposed in this article is quite general and can also be applied in other areas, but we do not consider this possibility here.
Notation.
Throughout the article, the following general notation is used: N is the set of natural numbers, and for a natural number n ∈ N, we denote the n-dimensional vector space by R^n and the l_p-norm in R^n by ‖·‖_p. If not otherwise mentioned, we identify a finite set A with the set N(n) = {1, …, n}, where n = |A| is the cardinality of set A. We also introduce notations for special vectors and sets; in particular, R^n_+ ⊂ R^n is the positive orthant. By necessity, we also identify a matrix Π ∈ R^{n×m} with the map Π: N(n) × N(m) → R. For a matrix Π ∈ R^{n×m}, we denote its transpose by Π^T ∈ R^{m×n}.
Outline.
The remainder of this paper is structured as follows. In Section 2, the necessary theoretical preliminaries regarding the MCDM problem (Section 2.1) and the ranking-theory methods for solving MCDM problems (Section 2.2) are presented. Section 3 introduces the concept of benchmarking contexts and discusses its relationship with the MCDM problem. In Section 4, the case-study problem of DE algorithm benchmarking is investigated numerically. Finally, the conclusions are presented in Section 5.
Multicriteria Decision-Making
Problems. We use the following notation from the general theory of multicriteria optimization [31]. We consider the MCDM problem 〈A, C〉, where A = {a_1, …, a_m} is a set of alternatives and C = {c_1, …, c_n} is a set of criteria, that is, c_i: A → R, i = 1, …, n. Hence, we introduce the decision matrix X = [x_ij], where x_ij = c_j(a_i) is the performance measure of alternative i ∈ N(m) on criterion j ∈ N(n). Without loss of generality, we assume that a lower value is preferable for each criterion (i.e., each criterion is nonbeneficial; see [32]), and the goal of the decision-making procedure is to minimize all criteria simultaneously. Furthermore, A is the set of admissible alternatives, and the map c⃗ = (c_1, …, c_n): A → R^n is the criterion map (correspondingly, c⃗(A) ⊂ R^n is the set of admissible values of the criteria). The point ξ^I = (ξ^I_1, …, ξ^I_n) ∈ R^n, where ξ^I_j = min_{a∈A} c_j(a), j ∈ N(n), is called the ideal point. The ideal point is considered attainable if an alternative a^I ∈ A exists such that ξ^I = c⃗(a^I). The following concepts are also associated with the criterion map and the set of alternatives. An alternative a* ∈ A is Pareto-optimal (efficient) if no a ∈ A exists such that c_j(a) ≤ c_j(a*) for all j ∈ N(n) and c_k(a) < c_k(a*) for some k ∈ N(n). The set of all efficient alternatives is denoted by A_e and is called the Pareto set. Correspondingly, c⃗(A_e) is called the efficient front.
Pareto optimality is an appropriate solution concept for MCDM problems in general. However, the set A_e of Pareto-optimal alternatives can be very large, and all alternatives from A_e must be considered "equally good solutions," whereas the final decision must be unique. Hence, additional factors must be considered to aid in selecting specific or more favorable alternatives from the set A_e. We cannot provide a detailed analysis of these methods here; interested readers can become acquainted with them through the overviews [4-8]. Furthermore, we consider only the method proposed by Gogodze [3], without diminishing the value of more classical methods.
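To make the role of the Pareto set concrete, the following Python sketch extracts the Pareto-optimal (efficient) rows of a decision matrix when all criteria are minimized; the function name and the sample matrix are illustrative, not part of [3]:

```python
import numpy as np

def pareto_set(X):
    """Return indices of Pareto-optimal rows of X (all criteria minimized).

    Row i is dominated if some other row j satisfies X[j] <= X[i] everywhere
    and X[j] < X[i] for at least one criterion."""
    m = X.shape[0]
    efficient = []
    for i in range(m):
        dominated = any(
            np.all(X[j] <= X[i]) and np.any(X[j] < X[i])
            for j in range(m) if j != i
        )
        if not dominated:
            efficient.append(i)
    return efficient

X = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0]])  # row 2 is dominated by row 1
print(pareto_set(X))  # -> [0, 1]
```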
Ranking Methods and Their Applications to MCDM Problems.
This section provides a brief overview of the basic concepts of ranking theory (e.g., see [33] for further details) and presents the necessary formal definitions. For a natural number N and a score matrix S ∈ R^{N×N}, the pair (N(N), S) is the ranking problem. We assume (conditionally) that the elements of N(N) are athletes (or sports teams) who compete in matches between themselves. Moreover, M(i, j) denotes a joint match for each pair of athletes (i, j), 1 ≤ i, j ≤ N, and we interpret entry S_ij, 1 ≤ i, j ≤ N, of matrix S as the total score of athlete i against athlete j in match M(i, j). In addition, athlete i scored against athlete j in match M(i, j) if S_ij > 0, and athlete i has beaten athlete j in match M(i, j) if S_ij > S_ji. A weak order on the set N(N) is a transitive and complete relation, and a map R_N(·) is a ranking method if, for any given ranking problem (N(N), S), R_N(S) is a weak order on the set N(N). Any vector r = (r_1, …, r_N) ∈ R^N can be considered a rating vector for the elements of N(N), in the sense that each r_i, 1 ≤ i ≤ N, can be interpreted as a measure of the performance of player i ∈ N(N); for the ranking problem (N(N), S), a rating vector induces the corresponding ranking (a higher rating means a higher rank). For illustrative purposes, we consider only a few of the many ranking methods discussed in the literature. All of these methods are induced by their corresponding rating vectors. The considered ranking methods originate from different areas, such as athlete/team ranking in sports, citation indices, and website ranking. Hence, all of them reflect some (as a rule, intuitive) human experience regarding the solution concept of the ranking problem. A brief overview of the ranking methods used in this article is provided in the Appendix.
We can now unite all the information described above and demonstrate that, for any MCDM problem, we can construct the necessary matrices (e.g., S, P(S), and A(S)) and, therefore, apply a suitable ranking method to the MCDM problem solution. To simplify the perception of the constructions described below, we use sports terminology. We assume that 〈A, C〉 is an MCDM problem (see Section 2.1) with a set of alternatives A = {a_1, …, a_m} and a set of nonbeneficial criteria C = {c_1, …, c_n}, and that the decision-making goal is to minimize the criteria simultaneously. To construct matrix S, we imagine that the number of athletes is N = m and that they compete in an n-athlon (i.e., each match M(i, j), 1 ≤ i, j ≤ m, includes competitions in n different disciplines). For illustrative purposes, we introduce the simplest method for score calculation: s^k_ij = 1 if c_k(a_i) < c_k(a_j) and s^k_ij = 0 otherwise, with S_ij = Σ_{k=1}^{n} s^k_ij. Thus, for criterion k ∈ N(n), the equality s^k_ij = 1 means that c_k(a_i) < c_k(a_j) and the alternative a_i (i.e., athlete i ∈ N(m)) receives one point (i.e., athlete i wins the competition in discipline k ∈ N(n)). Correspondingly, S_ij indicates the total number of wins of athlete i against athlete j, and S = [S_ij] is the score matrix for the set of alternatives.
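A minimal implementation of this score-matrix construction (the names are illustrative) follows directly from the definition:

```python
import numpy as np

def score_matrix(X):
    """Score matrix S for an m x n decision matrix X with nonbeneficial
    criteria: S[i, j] counts the criteria on which alternative i strictly
    beats alternative j (i.e., has the smaller value)."""
    m = X.shape[0]
    S = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                S[i, j] = np.count_nonzero(X[i, :] < X[j, :])
    return S

X = np.array([[1.0, 3.0, 2.0],
              [2.0, 1.0, 3.0],
              [3.0, 2.0, 1.0]])  # 3 alternatives, 3 criteria
print(score_matrix(X))  # each pair splits the three criteria 2-1
```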
Thus, we can define an auxiliary matrix Π(S) based on the score matrix S. Furthermore, using matrix Π(S) and a well-known transformation, we can construct a (row) stochastic matrix P = P(S). The introduced matrix Π(S) can be interpreted as an adjacency matrix for a directed graph Γ(A, C) (associated with the MCDM problem 〈A, C〉), called the adjacency matrix for the MCDM problem 〈A, C〉. Correspondingly, matrix P(S) can be interpreted as a transition probability matrix for the Markov chain determined by the graph Γ(A, C). Moreover, we can construct a reciprocal matrix of pairwise comparisons A(S) = [a_ij], i, j = 1, …, m, for the MCDM problem 〈A, C〉. Subject to the facts presented in this section, the following procedure for solving the MCDM problem under consideration, 〈A, C〉, can be formulated: (i) the score matrix S is constructed from the decision matrix; (ii) using the score matrix S, the alternatives from set A are ranked by a ranking method R (R ∈ {R_S, R_N, R_B, R_C, R_K, R_PF, R_GM, …}; see, e.g., the Appendix); (iii) the alternative from the Pareto set A_e ranked best by method R is declared the R solution of the considered MCDM problem.
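The exact construction of Π(S) and P(S) is given in [3]; as one plausible reading, the sketch below turns the score matrix into a row-stochastic matrix and rates the alternatives by the stationary distribution of the resulting Markov chain (the PageRank-style damping factor is added here only to guarantee convergence; it is not taken from the paper):

```python
import numpy as np

def markov_rating(S, damping=0.85, tol=1e-12, max_iter=10_000):
    """Rate alternatives by the stationary distribution of a Markov chain.

    Each alternative "votes" for those that beat it: the chain moves from i
    to j with probability proportional to S[j, i] (wins of j over i). An
    all-zero row is replaced by the uniform distribution."""
    m = S.shape[0]
    L = S.T  # L[i, j] = how strongly j beat i
    row_sums = L.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0, L / np.maximum(row_sums, 1e-300), 1.0 / m)
    G = damping * P + (1.0 - damping) / m  # damping guarantees convergence
    r = np.full(m, 1.0 / m)
    for _ in range(max_iter):
        r_next = r @ G
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r  # higher stationary probability = better-rated alternative
```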
Benchmarking Problem
We consider a set P of problems, a set S of solvers, and a function J: S × P → R, the assessment function (performance metric). The terms "solver," "problem," and "assessment function" are used conditionally only to simplify interpretation, although this is not generally necessary (and, as we observe below, can even lead to terminological inconsistency). Furthermore, we assume for definiteness that high and low values of J correspond to the worst and best cases, respectively, and for convenience, we interpret J(s, p) as the cost of solving the problem p ∈ P by the solver s ∈ S. Accordingly, solver s ∈ S solves problem p ∈ P better than solver s′ ∈ S if J(s, p) < J(s′, p). Thus, we can introduce the following definition, which is sufficient for many real-world applications. Definition 1. A triple 〈S, P, J〉 is a (solver) benchmarking context if and only if S and P are finite sets (called the set of solvers and the set of problems, respectively), J: S × P → R is a function (called the assessment function, or performance evaluation metric), and the following assumptions hold: (A0) |S| = m and |P| = n for some m, n ∈ N; (A1) J(s, p) ≥ 0 for all s ∈ S and p ∈ P. The presented concept is quite general and, as mentioned, emphasizes that the set of solvers, the set of problems, and the assessment function must be considered closely related objects for the benchmarking goal and not independent ones. Assumption (A0) establishes that sets S and P have sizes m, n ∈ N, respectively, and Assumption (A1) establishes the nonnegativity of the assessment function. Moreover, because sets S and P are finite, Condition (A1) does not limit the generality of our considerations. Generally, the selection of the components of a benchmarking context 〈S, P, J〉 is based on the research questions motivated by the goal of the benchmarking analysis. However, the choice of sets S and P is often a disputable issue in the practice of certain applications. In contrast, the situation is relatively straightforward in choosing the assessment function J, at least in computer science (see, e.g., [34]). For example, the following indicators are often used in this case: running time (e.g., the CPU time [35]), reliability (i.e., the solver's ability to successfully solve several problems, such as the success rate [36]), and others. Moreover, the case when the assessment J is a mapping into R^l, where l ∈ N (i.e., a multiple criterion), can also be considered, but we do not delve into this issue. Next, we consider the benchmarking context 〈S, P, J〉 as given and introduce the following definition: Definition 2. For a given (solver) benchmarking context 〈S, P, J〉, we define the function J*: P × S → R as follows: J*(p, s) = J(s, p), ∀p ∈ P, ∀s ∈ S. We call J* the adjoint (to J) assessment function, and 〈P, S, J*〉 the adjoint to the 〈S, P, J〉 benchmarking context, or the problem benchmarking context (corresponding to the solver benchmarking context 〈S, P, J〉).
Definition 2 is easily validated as correct (i.e., J* is an assessment function in the sense of Definition 1). A terminological inconsistency appears, as noted above: in the benchmarking context 〈P, S, J*〉, the set of solvers is the set P, which is the set of problems in the sense of the benchmarking context 〈S, P, J〉. We hope that this does not create any problems in understanding the text below.
We now assume that a benchmarking context 〈S, P, J〉 is given and build the corresponding MCDM problem 〈A, C〉 as follows: A = S is the set of alternatives, and C = {c_p | p ∈ P}, where c_p(·) = J(·, p): A → R for each p ∈ P, is the set of criteria. Hence, we define the decision matrix as a matrix whose elements exhibit the performance of the different alternatives (i.e., solvers) with respect to the various criteria (i.e., problems) through the assessment function. From Property (A1), c_p(s) ≥ 0 for all s ∈ S and p ∈ P. Conversely, we assume that 〈A, C〉, where A = {a_1, …, a_m} and C = {c_1, …, c_n}, is a given MCDM problem such that c_k(a) ≥ 0 for all a ∈ A = N(m) and all k ∈ N(n). Hence, for P = N(n), S = N(m), and J(i, k) = c_k(a_i) for all i ∈ N(m) and k ∈ N(n), the triplet 〈S, P, J〉 is a benchmarking context corresponding to the MCDM problem 〈A, C〉. The correspondences described above are one-to-one and reciprocal. Thus, we have proved that the following proposition holds. Proposition 1. A one-to-one mapping exists between the benchmarking contexts and the MCDM problems with nonnegative criteria.
To summarize the results of this section and achieve greater clarity of presentation, we formulate the proposed approach to solving benchmarking problems in algorithmic form. We assume that the considered benchmarking problem has already been formalized as a benchmarking context 〈S, P, J〉, where S is a set of solvers, P is a set of problems, and J is an assessment function. The flowchart of the algorithm is presented in Figure 1. All elements of the Pareto set A_e are considered equally "good" solvers (in the sense of Pareto optimality). However, the R ranking allows a more detailed classification that identifies the "best of the good," the "worst of the good," and the intermediate "good" solvers.
Thus, S = {S_1, …, S_9} is the set of solvers. The set of problems P = {P_1, …, P_50} comprises 50 problems, each defined by the dimension indicator d ∈ {30, 50} and by one of the test function types F_1, …, F_25 listed in Table 2.
The assessment function used by Sala et al. [12] is described as follows: first, the expected running time (ERT), a widely used performance metric for optimization algorithms, is defined below.
(Figure 1, final steps) Step 3: an appropriate ranking method R is chosen (see, e.g., the Appendix). Step 4: using the score matrix S, the alternatives from set A are ranked by ranking method R. Output: the alternatives from the Pareto set A_e, ranked using method R, are declared the R solutions of the benchmarking problem 〈A(S), C(P, J)〉.

(Table 2, excerpt) F7: shifted rotated Griewank's function; F8: shifted rotated Ackley's function (global optimum on the bounds); F9: shifted Rastrigin's function; F10: shifted rotated Rastrigin's function; F11: shifted rotated Weierstrass function; F12: Schwefel's problem 2.13; F13: expanded extended Griewank's plus Rosenbrock's function; F14: shifted rotated expanded Schaffer's F6; F15: hybrid composition function; F16: rotated hybrid composition function; F17: rotated hybrid composition function (with noise in the fitness function); F18: rotated hybrid composition function; F19: rotated hybrid composition function (with the global optimum on the bounds); F20: rotated hybrid composition function (with a narrow basin for the global optimum); F21: rotated hybrid composition function; F22: rotated hybrid composition function (with a high condition number matrix); F23: noncontinuous rotated hybrid composition function; F24: rotated hybrid composition function; F25: rotated hybrid composition function.

The ERT is defined as ERT = mean(M_τ) + ((1 − q)/q) · N_max, with q = N_success/N_total, where τ indicates a reference threshold value, M_τ is the number of function evaluations required to reach an objective value better than τ (i.e., in successful runs), N_max denotes the maximum number of function evaluations per optimization run, N_success represents the number of successful runs, N_total is the total number of runs, and q denotes the success rate [45]. The ERT is interpreted as the expected number of function evaluations for an algorithm to reach an objective function threshold for the first time. A threshold, or success criterion, is required for the ERT performance measure. However, unlike in conventional optimization problems (where the ERT criterion is usually related to reaching the value of the known global optimum within a specified tolerance), the probability of coming close to the global optimum is negligible for difficult optimization problems, and a more suitable success criterion is required. Moreover, all compared algorithms must meet the success criterion at least a few times for their qualitative performance to be compared using the ERT on difficult optimization problems. Correspondingly, Sala et al. [12] used as the success criterion reaching the target value corresponding to the expected value of the best objective function value from uniform random sampling (1000 samples). Next, the estimation of the expected objective value E_RSE(f) for test function f is based on 100 repetitions. Finally, the ERT with respect to this objective function value limit is referred to as ERT_RSE for test function f. The dataset of ERT_RSE estimations [12] for the above-described solvers and problems is presented in Table 3.
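Under the standard reading of the ERT given above, the metric can be computed as in the following sketch (the function name and numbers are illustrative):

```python
import numpy as np

def expected_run_time(evals_successful, n_max, n_total):
    """ERT: mean evaluations of successful runs plus the expected cost of
    failed runs, (1 - q) / q * n_max, with success rate q = n_success / n_total."""
    n_success = len(evals_successful)
    q = n_success / n_total
    return np.mean(evals_successful) + (1.0 - q) / q * n_max

# Hypothetical data: 3 successful runs out of 10, budget of 10,000 evaluations
print(expected_run_time([1200, 900, 1500], n_max=10_000, n_total=10))
```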
Thus, the benchmarking context 〈S, P, J〉, where S = {s_1, …, s_9}, P = {p_1, …, p_50}, and J is the ERT_RSE assessment, is fully defined. Hence, following Section 3, the MCDM problem associated with the benchmarking problem under consideration is fully defined, with a set of alternatives A = S = N(9), a set of (nonbeneficial) criteria C = P = N(50), and a primary decision matrix Z = [z_ij], i ∈ N(9), j ∈ N(50), obtained (for writing convenience) by transposing the matrix presented in Table 3. Hence, the MCDM problem associated with the benchmarking context 〈S, P, J〉 (i.e., the solver benchmarking problem) is fully defined. The benchmarking context 〈P, S, J*〉 is defined analogously, with the assessment function J* obtained from the decision matrix Z* (the transpose of the decision matrix Z defined above). Hence, the MCDM problem associated with this benchmarking context (i.e., the benchmarking problem for the problems) is also fully defined.
Calculation Results.
In this section, we present a brief description of the calculation results. All case-study calculations were performed in the MATLAB environment on standard equipment (a laptop with a 2.59 GHz CPU, 8 GB RAM, and a 64-bit operating system) and required only a few seconds (4.87 s for the solver benchmarking and 5.04 s for the problem benchmarking, for all considered rankings, without special code optimization measures). First, we consider the solver benchmarking problem and explain the construction of the normalized decision matrix by transforming the primary dataset (see, e.g., [32]).
For the primary decision matrix Z = [z_ij], we define the normalized decision matrix X = [x_ij] as x_ij = (z_ij − l_j)/(u_j − l_j), where u_j = max_{i∈N(9)} z_ij and l_j = min_{i∈N(9)} z_ij, j ∈ N(50). For the solver benchmarking problem, we consider all criteria to be nonbeneficial (i.e., to be minimized): a solver is better if it solves a given problem in less time (lower ERT_RSE).
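The min-max normalization above is a one-liner in practice; a sketch (illustrative names, assuming every column satisfies u_j > l_j):

```python
import numpy as np

def normalize(Z):
    """Column-wise min-max normalization: x_ij = (z_ij - l_j) / (u_j - l_j)."""
    l = Z.min(axis=0)  # per-criterion minima l_j
    u = Z.max(axis=0)  # per-criterion maxima u_j
    return (Z - l) / (u - l)
```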
To illustrate this, the score matrix for the solver benchmarking problem is presented in Table 4. Table 5 presents the obtained R_S, R_N, R_B, R_C, R_K, R_PF, and R_GM ranks for the solver benchmarking problem.
Analogously, we consider the problem benchmarking, but define the normalized decision matrix X = [x_ij] in the same min-max manner from the corresponding primary decision matrix Z* (the transpose of Z). For the problem benchmarking, we also assume that all criteria are nonbeneficial (i.e., to be minimized); again, a problem is better (i.e., easier) for a given solver if it is solved in less time (lower ERT_RSE) by this solver. Table 6 presents the R_S, R_N, R_B, R_C, R_K, R_PF, and R_GM ranks for the problem benchmarking (the score matrix for the problem benchmarking is not presented).
Discussion.
As Table 5 indicates, the results of solver ranking using the considered methods (R_S, R_N, R_B, R_C, R_K, R_PF, and R_GM) are quite similar. This observation was confirmed quantitatively by considering the Spearman correlations between the ranks (Table 7), where the correlations of the solver ranks for the R_S, R_N, R_B, R_C, R_K, R_PF, and R_GM rankings are presented. As Table 7 demonstrates, these ranks are strongly correlated with each other. Analogously, Table 8 reflects the interrelation between the ranks for the problem benchmarking.
In particular, the R_S, R_N, R_B, R_C, R_K, R_PF, and R_GM ranks are again strongly correlated with each other.
Regarding the results of the correlation analysis, the observed similarity of the ranking results for the R_S, R_N, R_B, R_C, R_K, R_PF, and R_GM ranking methods appears very intriguing, given that these methods have completely different areas of origin and underlying ideas (see the corresponding scholium in the Appendix). It is also interesting to consider the Pareto optimization results (see the solvers and problems marked in gray in Tables 5 and 6, respectively). In particular, from Table 5, all considered solvers are Pareto-optimal (i.e., they are considered "equally good" in the considered benchmarking context). We believe that this is due to the large (compared to the number of solvers) number of problems (i.e., too many criteria exist in the corresponding MCDM problem); accordingly, each solver is good in "its own way." However, the ranking methods enable the establishment of an appropriate hierarchy among the solvers. Analogously, Table 6 demonstrates that the Pareto-optimal problems are allocated to different groups or clusters, with similar problems belonging to the same clusters. The ranking methods also make it possible to establish an appropriate hierarchy among the problems. Summarizing the results of the case-study investigation, we conclude the following: (i) the results of the calculations (Table 5) confirm that the SQG-DE algorithm (solver S_9) is the best in the considered benchmarking context (for comparison, see [12]), and this conclusion holds for all rankings used in this study, despite their quite different natures. Moreover, the worst results are shown by DE2 (solver S_2) according to all considered ranking methods except Neustadt's method, and by DE (solver S_1) according to Neustadt's ranking method.
(ii) Unlike Sala et al. [12], who did not analyze the problems, our calculations also indicate (Table 6) that the best problems in the considered benchmarking context (in the sense of a lower value of the considered metric) are the shifted sphere function in dimension 50 (problem 26) and a rotated hybrid composition function (the Pareto-optimal problems are marked in gray in Table 6). We stress that these results were obtained using only the ranking-theory methods, without an analysis of any statistical indicators of the assessment function values, as is currently practiced (see, e.g., the related literature overview in the Introduction).
Conclusions
In this study, we presented a new MCDM technique for solving decision-making problems in benchmarking. Our investigation was based on the concept of a benchmarking context, presented in detail, and on the observation that a benchmarking problem is an MCDM problem. Correspondingly, to solve benchmarking problems successfully, an extensive array of MCDM methods can be used. We also presented a new approach to the MCDM problem solution based on ranking-theory methods. The corresponding ranks are obtained by constructing a special score matrix. We emphasize that this method derives the appropriate ranks directly from the decision matrix and does not use preliminary assessments conducted by external experts or other methods. Therefore, the technique presented in this study is useful when the relative importance of the various criteria has not been evaluated in advance. As a case study, the benchmarking problem of DE algorithms was considered based on the data presented by Sala et al. [12]. A detailed numerical investigation was conducted using various ranking methods, and the resulting ranks were compared for both the solvers and the problems. The results demonstrate that the method presented in this study is competitive and generates relevant solutions.
Referring to the analysis presented in this study, we conclude the following: (i) the results of applying MCDM methods to aid benchmarking problem solutions based on the proposed approach are encouraging; (ii) the proposed approach provides a constructive view of the benchmarking problem solution, identifying the "best" and "worst" cases and ordering all intermediate cases; (iii) the proposed approach is easily implementable because of its simplicity and flexibility. Moreover, the approach is sufficiently general and can be successfully used to investigate benchmarking problems in other application areas.
However, this study has limitations because we provided a tool for benchmarking only in the case in which the benchmarking context is given (i.e., when the sets of solvers (problems), problems (solvers), and the performance metric are given). Issues regarding the selection of the benchmarking context components remain unresolved: the literature does not contain clear and direct recommendations regarding the correct selection of solvers, problems, and performance metrics. Hence, further investigation in this direction will be helpful.

A.4. Colley Method. Using the score matrix S = [S_ij], 1 ≤ i, j ≤ N, we define the quantities w_i (the number of wins of athlete i), l_i (the number of losses of athlete i), n_ij (the number of games between athletes i and j), and n_i, where, obviously, n_i = w_i + l_i = Σ_{j=1}^{N} n_ij, 1 ≤ i ≤ N. The Colley rating vector r_C is obtained as the solution of the equation C r_C = v_C, where C = 2I + diag(n_1, …, n_N) − [n_ij] and the components of v_C are v_i = 1 + (w_i − l_i)/2; the ranking defined by the rating vector r_C is called the R_C rank.
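A compact implementation of the Colley system described above (assuming win/loss count matrices with zero diagonals; the names are illustrative):

```python
import numpy as np

def colley_rating(W, L):
    """Colley ratings from count matrices W and L, where W[i, j] is the
    number of wins of i over j and L[i, j] the number of losses of i to j."""
    n_ij = W + L                    # games played between each pair
    n = n_ij.sum(axis=1)            # total games per athlete
    w, l = W.sum(axis=1), L.sum(axis=1)
    C = np.diag(2 + n) - n_ij       # Colley matrix: 2I + diag(n_i) - [n_ij]
    v = 1 + (w - l) / 2
    return np.linalg.solve(C, v)
```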
A.5. Keener Method. We describe the Keener method [46] as follows: let N be the number of athletes/teams and S = [S_ij], i, j = 1, …, N, the corresponding score matrix. The Keener matrix K = [K_ij], i, j = 1, …, N, is defined by smoothing the pairwise score proportions, K_ij = (S_ij + 1)/(S_ij + S_ji + 2). Correspondingly, the rating vector for the Keener method, r_K, is obtained as a solution of the eigenvalue problem K r_K = λ r_K, and the ranking defined by the rating vector r_K is called the R_K rank.
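The Keener rating is the Perron eigenvector of K, which can be obtained by simple power iteration; a sketch using the smoothed-proportion matrix defined above:

```python
import numpy as np

def keener_rating(S, tol=1e-12, max_iter=1000):
    """Principal (Perron) eigenvector of the Keener matrix built from S."""
    K = (S + 1.0) / (S + S.T + 2.0)   # smoothed pairwise score proportions
    r = np.ones(K.shape[0]) / K.shape[0]
    for _ in range(max_iter):          # power iteration
        r_next = K @ r
        r_next /= r_next.sum()
        if np.abs(r_next - r).max() < tol:
            break
        r = r_next
    return r
```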
A.6. Analytical Hierarchy Process. The analytical hierarchy process (AHP) is a well-known decision-making method [47]. Many modifications of this method exist, but we restrict ourselves to two of them: the AHP Perron-Frobenius version (AHP_PF) and the AHP geometric mean version (AHP_GM), which are briefly described below. A main problem related to the AHP is the inconsistency problem (of a pairwise comparison matrix); we do not discuss this problem here because of its technical nature and consider the AHP only as a procedure for constructing a rating vector. Let us assume again that N is the number of athletes/teams, which should be ranked based on the score matrix S = [S_ij], i, j = 1, …, N. We also assume that the score matrix S allows the construction of a matrix A = A(S) that is a reciprocal matrix of pairwise comparisons. Recall that a matrix A = [a_ij], i, j = 1, …, N, is called a reciprocal matrix of pairwise comparisons if it has the following properties: a_ij > 0, a_ii = 1, and a_ij = a_ji^{-1} for all i, j ∈ {1, …, N}. Note also that, for a positive reciprocal matrix A, its principal eigenvalue λ_max satisfies λ_max ≥ N, and if λ_max ≠ N, we have an inconsistency problem. The AHP_PF rating vector r_PF is defined as the solution of the eigenvalue problem A r_PF = λ_max r_PF with the principal eigenvalue λ_max, and the corresponding ranking is called the R_PF rank. On the other hand, the AHP_GM rating vector r_GM = (r_GM,1, …, r_GM,N) is defined by the row geometric means of A, r_GM,i = (Π_{j=1}^{N} a_ij)^{1/N}, and the corresponding ranking is called the R_GM rank.
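Both AHP variants are short to implement; the geometric-mean version in particular needs no eigensolver. A sketch (illustrative names, with A assumed positive and reciprocal):

```python
import numpy as np

def ahp_gm_rating(A):
    """AHP geometric-mean rating: normalized row geometric means of A."""
    g = np.exp(np.log(A).mean(axis=1))  # row geometric means
    return g / g.sum()

def ahp_pf_rating(A, iters=1000):
    """AHP Perron-Frobenius rating: principal eigenvector via power iteration."""
    r = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        r = A @ r
        r /= r.sum()
    return r
```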
Data Availability. The data of Sala et al. [12] were used to support this study.
Conflicts of Interest
The authors declare no conflicts of interest regarding this article.
"Computer Science"
] |
Modern Hybrid Excited Electric Machines
The paper provides an overview of different designs of hybrid excited electrical machines, i.e., those combining conventional permanent magnet excitation with additional DC-powered electromagnetic systems in the excitation circuit. The most common topologies for this type of machine found in the literature are presented and grouped according to their electrical, mechanical, and thermal properties. Against this background, the designs of hybrid excited machines that have been the subject of the authors' own research are presented.
Controlled magnets are the basic executive part of all magnetic bearings [16,17]. Figure 2 shows two examples of magnetic bearings: a conventional bearing with an external moving part and a hybrid bearing with the control coils placed in the stationary part of the bearing. Such bearings are stable in the axial direction, and the central position of the moving parts is maintained by appropriate control of the coil currents. The mathematical description (voltage equations) of magnetic bearings is very similar to the description of hybrid-excited electric machines due to the appropriate selection of the coordinate system (rotating with the rotor). The main goal in magnetic bearings is to maintain a constant air gap, while in hybrid-excited machines, it is to obtain the required values of the magnetic flux.
Review of Hybrid Excited Machines
Hybrid excited electric machines can be divided into two groups. In the first group, the flux produced by the excitation winding passes through the permanent magnets. The second group comprises parallel excited hybrid machines, in which the permanent magnet flux and the excitation winding flux follow different paths. The magnetic permeability of PMs is similar to that of air; therefore, in machines of the first group, the magnetic reluctance seen by the excitation coil is relatively high. This is the reason magnetic bridges are introduced into the machine, to ensure a lower reluctance for the excitation circuit.
For all machines with hybrid excitation, the general mathematical model is described by Equations (1) and (2). They show that the induced voltage and the magnetic flux related to the machine axes d and q, and consequently the electromagnetic torque, depend on the flux from the permanent magnets and on the current in the excitation coil [18,19]. The transformation from the three-phase (L1-L2-L3) system to the two-phase (α-β) system, and then to the d-q axes, is described in detail, e.g., in [20]:

u_d = R_s i_d + sΨ_d − ω_e Ψ_q,
u_q = R_s i_q + sΨ_q + ω_e Ψ_d,
u_c = R_c i_c + L_c s i_c + M_sc s i_d, (1)

Ψ_d = L_d i_d + M_sc i_c + Ψ_PM,
Ψ_q = L_q i_q, (2)

where u_d is the d-axis voltage component, u_q the q-axis voltage component, u_c the voltage on the excitation coil, R_s the stator winding resistance, s the operator d/dt, L_d the d-axis inductance, ω_e the angular velocity, L_q the q-axis inductance, M_sc the mutual inductance between the stator winding and the excitation coil, R_c the excitation coil resistance, L_c the inductance of the excitation coil, i_d the d-axis stator current component, i_q the q-axis stator current component, i_c the excitation coil current, Ψ_PM the flux of the permanent magnets, Ψ_d the d-axis flux, and Ψ_q the q-axis flux. The comparison of different structures of hybrid excited electrical machines is very difficult because of their variety. They can be compared, e.g., in terms of their external characteristics (mechanical design) [1-3,21]. Hybrid excited electrical machines can also be categorized according to the path-determining design of the combined excitation flux. A huge number of design solutions for hybrid excited machines exist; this section presents the most interesting ones found in the literature.
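As background for Equations (1) and (2), the sketch below shows the amplitude-invariant three-phase to d-q transformation mentioned above (one common sign convention; the function name is illustrative):

```python
import numpy as np

def abc_to_dq(x_a, x_b, x_c, theta_e):
    """Clarke transform (L1-L2-L3 -> alpha-beta), then Park rotation
    (alpha-beta -> d-q) for electrical rotor angle theta_e in radians."""
    x_alpha = (2.0 / 3.0) * (x_a - 0.5 * x_b - 0.5 * x_c)
    x_beta = (1.0 / np.sqrt(3.0)) * (x_b - x_c)
    x_d = x_alpha * np.cos(theta_e) + x_beta * np.sin(theta_e)
    x_q = -x_alpha * np.sin(theta_e) + x_beta * np.cos(theta_e)
    return x_d, x_q

# Balanced unit-amplitude currents map to a constant (d, q) pair
t = 0.3
i_a, i_b, i_c = np.cos(t), np.cos(t - 2*np.pi/3), np.cos(t + 2*np.pi/3)
print(abc_to_dq(i_a, i_b, i_c, theta_e=t))  # ~ (1.0, 0.0)
```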
Synchronous Machines with Permanent Magnets
To regulate the excitation flux, an additional source in the form of a winding is used alongside the permanent magnets. A different approach is presented in [22]: the rotor of a synchronous generator was modified by adding permanent magnets. In this way, the machine became, to some extent, independent of failures of its most sensitive part, the brush and slip-ring assembly. The machine can operate at high rotational speed with a weakened excitation field. There is a high flux density between two adjacent PM poles, which can increase the iron losses in the stator core.
Similar structures have been presented in [23,24]. Furthermore, the author of [23] also described a direct torque control strategy dedicated to hybrid excited permanent magnet machines.
Flux-Switching Machines
A novel hybrid excitation flux-switching motor (HEFS) presented in [25] is dedicated to hybrid vehicles. A new motor topology was proposed in which the dimensions of the magnets were reduced to save space for an additional excitation winding, while the rotor and stator laminations remain unchanged. Notably, this allows the machine flux to be adjusted by selecting the radial length of the magnets. The idea of this solution is to eliminate drawbacks, such as the high torque ripple caused by cogging torque, that affect, for example, the flux-switching permanent magnet (FSPM) motor. A similar design was investigated in [26]; the paper presents numerical research as well as experimental tests on a built machine prototype.
Doubly Salient Machines
The paper [27] presents the design of a hybrid excited doubly salient machine with a parallel excitation system. The authors analyzed the possibility of regulating the air-gap flux for three types of main poles. Simulations followed by experimental tests showed very good control properties, with the output voltage adjustable from approximately 30 V to approximately 220 V. Hybrid excited doubly salient machines are also presented in [28-30].
Axial Flux Machines
The paper [31] discusses a synchronous hybrid excitation axial flux generator operating in autonomous mode, with the field winding powered from the armature winding. The proposed solution allows very precise control of the magnetic flux, which makes it possible to maintain the set value of the output voltage when the load, the speed, or both change. Very interesting designs of hybrid excited axial flux machines are also described in [32-35].
Axial-Radial Flux Machines
Structural optimization that maximizes the flux control range of a dual excitation synchronous machine is discussed in [36]. The air-gap flux in this type of machine can be regulated by controlling the field currents, so the machine can adjust the air-gap flux more flexibly than conventional PM machines. This has been achieved at the expense of greater volume and higher costs due to the presence of additional field windings. Both the electromagnetic and thermal complexity have been well addressed through the use of equivalent circuit networks. It was also found that one of the analyzed configurations can almost completely cancel the PM flux. A similar machine construction is shown in [37]. On the other hand, in [38] the authors presented simulation studies of a machine with an excitation flux in both the radial and axial directions, with a rotor similar to that of a flux-switching machine. In addition, some parts of the machine are proposed to be made of SMC material.
Dual Rotor/Stator Machines
In [39], a new toroidal-winding twin-rotor permanent magnet synchronous reluctance machine (PM-SynRM) is discussed; it is proposed for high electromagnetic torque, taking full advantage of both the permanent magnet torque and the reluctance torque thanks to the special design of the mounting angles of the two rotors. The permanent magnet torque and the reluctance torque of the proposed machine reach their maximum values near the same current phase angle due to the special configuration of the two rotors, which significantly increases the total torque. FEM analysis showed that the proposed machine gives much better torque results. Moreover, the proposed double-rotor structure has excellent properties and resistance to irreversible demagnetization. The paper [40] discusses research on a hybrid excited machine with a double rotor, in which one part is a rotor with permanent magnets and the other a classic wound rotor.
An inverted structure is presented by the authors of [41], who proposed a hybrid-excited PM machine based on the flux modulation effect. The authors state that there is no risk of irreversible demagnetization of the PMs in this machine. Moreover, the machine needs no slip rings or brushes, since the DC excitation coils are placed in the stator, which makes the structure simple and reliable. The paper shows an FEA numerical model of the machine, its structure, and the working principle. A similar design is presented in [42].
A very interesting, but at the same time very complicated, structure was presented by the authors of [43]. The paper presents a machine with two stators (inner and outer) and a two-part rotor, one part of which is composed of alternately arranged N and S magnets, while the other part is a classic claw pole rotor excited by a coil inside it.
Hybrid Excited Machines with DC Winding on Stator
The concept of a machine with hybrid excitation from permanent magnets and an AC field winding is presented in [44]. In this machine, the permanent magnets (PM) are placed on the rotor side and the AC windings on the stator side for flux control while ensuring high torque. Since the PM magnetic field rotates with the rotor, the alternating currents have the same frequency as the rotor speed. The obtained results and FEM analyses show that, in the case of the HEPM machine, flux regulation and operation in a wide speed range can be realized, and the electrical parameters can be improved compared to the original IPMSM. This verifies the theoretical analysis, extends the design and control methodology of permanent magnet machines, and provides a reference for the design of machines with hybrid permanent magnet excitation.
A similar design can be found in [45]. An example of a parallel hybrid excitation machine is the hybrid excitation flux reversal machine (HEFRM), designed for electric vehicle propulsion. It not only offers better overload capability and the possibility of flux weakening, but also reduces the risk of PM demagnetization. The air gap flux can be adjusted by controlling the excitation winding current, which also improves the overload torque at low speeds.
Axial Flux SRM Machines
The issues of construction and modeling of a machine with features favorable for wind energy conversion applications are presented in [46]. A double-stator axial flux switching permanent magnet (AFSPM) machine was adopted for consideration. The developed model of this machine was verified by comparing its results with those of a two-dimensional (2D) FEM model. The adopted modeling approach proved to be effective and gives good results compared to FEM. The open-circuit AFSPM performance was compared to a previously developed SMPMAF prototype. This comparative study showed that the EMF waveform of the AFSPM is very close to sinusoidal, which is desirable for the intended applications, whereas the EMF waveform of the SMPMAF contains more harmonics.
Consequent-Pole Permanent Magnet Machines
An innovative concept of a consequent-pole permanent magnet motor was proposed in [47]. The motor has a unique rotor configuration in which actual PM pole pairs and image pole pairs are positioned alternately. The image pole pairs are formed on parts of the iron core next to the actual magnet poles on the rotor surface. One of the most important features of the proposed motor is the ability to run at high speed with a lower field-weakening current (negative d-axis current). This enables the operating range to be effectively extended at high speeds without increasing copper losses. One disadvantage is that the amount of effective magnetic flux is low, which results in a lower torque in the low speed range; therefore, optimization of the magnetic circuit is needed.
A slightly different construction is presented in [48]. The subject of the research was a machine whose rotor had alternating permanent magnets and iron poles, so that each machine pole consisted of a magnet and an iron element. The excitation regulating coil was placed between the two parts of the stator.
Claw Pole Machines
The rapid development of hybrid vehicles has prompted the need to develop a highly efficient source of electricity for this type of vehicle. One of the ways to achieve this goal is to equip the synchronous generator with claw poles.
The article [49] proposes a new claw pole machine design and compares its performance with a conventional machine. The new design features permanent magnets in the inter-claw region to reduce leakage flux and increase the magnetic flux in the machine. It was observed that adding permanent magnets weighing only a few grams increased the machine output power significantly, by over 22%. The geometrical dimensions of the magnets were also varied to verify their influence on operation, and it was observed that, as the mass of the magnets increases, the machine torque increases non-linearly. The power-to-weight ratio of the machine was also significantly improved, which is one of the main advantages for mild hybrid applications.
A similar variant of the rotor structure, but seven-phase, was considered in [50]. This solution is characterized by an unequal number of magnetic pole pairs between the armature and the excitation circuit. In this case it is necessary to model only a quarter of the structure, rather than a third as in the case of a classical generator, where the number of rotor and stator pole pairs is the same, p = 6. The main goal of the project was to increase the output power and reduce the core losses.
The paper [51] presents a hybrid excited machine in which the sources of the excitation field are placed inside a claw rotor: toroidal permanent magnets are located inside the toroidal core, on which the excitation coil is wound.
Electric Controlled Permanent Magnet Synchronous Machine
The Electric Controlled Permanent Magnet Synchronous machine (ECPMS-machine) is one of the hybrid excited machine concepts with good application prospects. The magnetic field excited by the DC coil current I_DC makes it possible to change the machine air gap flux and consequently the stator flux linkage Ψ_s. In this way, the output voltage of the machine is electrically controlled. Figure 3 shows the ECPMS-machine concept, in which the air gap flux is controlled by a DC excitation control coil (DC control coil) fixed on the stator or on the machine rotor. The presented 12-pole ECPMS-machine design has a wound double stator, separated by a stator DC control coil placed centrally between the two stator laminations. The stator DC control coil is locked inside a toroidally-wound additional stator core. The machine rotor has two lamination stacks separated by a toroidally-wound additional rotor core. The rotor lamination structure has multiple flux barriers and embedded flat NdFeB magnets, which form six iron poles (IP) and six permanent magnet poles (PMP) for each of the two rotor stacks. The main feature of the proposed rotor structure is the proper machine air gap flux distribution combined with the required machine flux control (FC) ability. The ECPMS-machine concept has been presented in [7,52-54].
Stator DC Control Coil of the ECPMS-Machine Concept
The concept of hybrid excitation with permanent magnets and an additional DC field winding locked on the machine stator is presented in Figure 4. The DC control coil placed on the stator side (Figure 4a) is used to control the machine air gap flux. The presented machine design concept has been widely analyzed in [52], where the influence of rotor structures on the field regulation capability of the machine has been described. The main results of the study identify a rotor structure which ensures effective machine flux regulation. The results obtained during FEA carried out on a three-dimensional (3D) model of the machine (Figure 4b, where B is the flux density expressed in tesla) confirmed effective flux control of the machine; the characteristic of the magnetic flux linkage Ψ_s versus the DC coil magneto-motive force (MMF) θ_DC, shown in Figure 4c, is the proof. To validate the simulation results, a set of experimental tests has been carried out on a machine prototype.
Rotor DC Control Coil of the ECPMS-Machine Concept
The DC excitation source can also be placed in the rotor of the ECPMS-machine. Figure 6a presents a 3D-FE model and rotor prototype of the machine, where the placement of the DC control coil on the rotor is clearly shown. It should be noted that, depending on the presence of the stator DC control coil, the rotor DC control coil can be an additional or an independent source of excitation. Commonly, in order to supply windings placed on the rotor, it is necessary to use brushes and slip rings. Alternatively, the modern contactless energy transfer (CET) system shown in Figure 6b, described in [54], has been successfully used in this case. Figure 6b shows construction details of the ECPMS-machine prototype with the rotor DC control coil and the supply coils used for the wireless power transfer, which have been designed and locked on the housing of the machine prototype. The solution includes a transformer whose windings are formed by double-sided printed circuit board (PCB) plates with 70 µm copper thickness. On both the secondary and primary plates, ferrite sheets have been used as the path of the magnetic flux (Wurth Electronic WE-FSFS flexible ferrite sheet, number 344003). The air gap between the TX- and RX-coil was approx. 1 mm in this case. To validate the simulation results, a set of experimental tests has been carried out on the machine prototype. Figure 6c shows experimental phase back-EMF waveforms recorded at a constant rotor speed of 1000 rpm for three operating conditions of the rotor DC control coil MMF (0 and ±1000 AT). The results show that a field control ratio (FCR) of up to 4:1 can be effectively obtained. Additionally, this flux-weakening result has been achieved at low losses and low power consumption of the rotor DC control coil; the power consumption of the DC control coil (at 1000 AT excitation) is approx. 20 W.
ECPMS-Machine Concept-Conclusion
The presented ECPMS-machine belongs to the concept of hybrid excited machines with excellent field-control capability, which can be used in wide adjustable speed drives. Experimental validation has shown that a field control ratio of 10:1 can be effectively obtained for the presented machine. This property can be exploited in electric vehicle drives and other adjustable speed drive applications.
Advantages: wide flux control range, high starting torque, possibility of complete flux weakening, low demagnetization risk of the rotor magnets, and the possibility of locating additional DC field sources on the stator, the rotor, or both.
Drawbacks: additional components, complex machine structure and greater weight and dimensions compared to conventional machines.
Hybrid Excited Disk Type Machine
The hybrid-excited axial flux machine (HEAFM) is built on the basis of an internal double-winding stator and two external 12-pole rotors connected by a ferromagnetic bushing. The additional electromagnetic excitation is placed in the stator circuit. The structure of the machine is shown in Figure 7. The machine's stator is made of two laminated toroidal cores, each with 32 slots carrying a 3-phase winding. The DC coil used as an additional excitation source is mounted on the inside of the stator, around the rotor bushing. The coil is stationary, so no brushes or slip rings are needed. The rotor consists of two outer discs connected by a steel sleeve. On each disc, iron poles are mounted alternately with magnets polarized in one direction; there are 6 pole pairs on each disc. The operating principle of the HEAFM is shown in Figure 8. When no current flows through the DC coil, the main magnetic flux in the machine flows through the air gap between the magnets, and part of it between the magnet and the iron pole. Depending on the direction of the current in the DC coil, the iron poles are magnetized, which in turn strengthens the main flux (FS) or weakens it (FW).
The prototype of the machine (Figure 9) was made in accordance with the design assumptions and tested on an experimental stand (Figure 10) in generator mode. Waveforms induced in the machine without load at a speed of 600 rpm, for different currents in the DC coil, are shown in Figure 11 by a dotted line.
A 3D model of the HEAFM was built and simulations were carried out using the finite element method (FEM). Figure 12 shows the distribution of magnetic flux density in the magnetic circuit of the machine with the DC coil unpowered. Figure 13 shows the distribution of magnetic flux density in the air gap for different currents in the DC coil, plotted along an arc whose radius is the average of the inner and outer radii of the machine's active parts; this arc spans two adjacent poles: an iron pole and a PM pole. The results show that the magnetic flux changes only under the iron pole, which is the advantage of this design; unfortunately, the FS level is much higher than the FW level.
Subsequent simulation studies examined the effect of modifications of the machine's magnetic core on the field control ratio (FCR). The height of the magnets was examined first. The base model was the machine in which the PM height was 12 mm, and it was varied in the range from 2 to 14 mm. ΔFCR is the ratio of the FCR of the base machine to the FCR of the machine with a different PM height (Figure 14).
The results show that the amount of PM has an impact on the FCR. With increasing PM height, the induced voltage increases, but at the same time the possibility of its regulation decreases. Further research aimed to demonstrate the impact of permanent magnet materials on the FCR. For this purpose, various types of PM were compared, as listed in Table 1. FEM simulations were conducted for the no-load generator mode. Table 2 shows the distribution of flux density throughout the entire air gap and the percentage ratio of the magnetic flux flowing through the surface over the iron pole (IP) to that over the permanent magnet pole (PMP). The waveforms of induced voltages for selected PM materials, neodymium (N38H) and ferrite (F30), depending on the current in the additional DC coil at 200 rpm, are presented in Figure 15. Based on the simulation results, it can be concluded that, although the N38H magnet is the strongest and has the largest coefficient k_Φ%, it allows the smallest range of flux control, and thus of induced voltage regulation.
(Table 1 lists the compared PM materials by material code, remanent magnetic flux density in T, and coercive force in kA/m.)
The FCR coefficient of the analyzed machine is influenced, among other factors, by the magnetic circuit topology, including the shape, dimension and material of PM.
Hybrid Excited Claw Pole Machine (HECPM)
One of the innovative solutions is the use of hybrid excitation in claw pole machines by placing permanent magnets on or inside the claws. A construction of a hybrid excited claw pole machine with permanent magnets placed in milled areas on one [56] or both parts [57] of the rotor has been proposed. Figure 16 shows these structures. During the research on the proposed solutions, in order to experimentally validate the results of the numerical tests, a car alternator manufactured by Denso, with nominal current I_n = 100 A and nominal voltage U_n = 12 V, was rebuilt and used, while maintaining the standard excitation regulator. As a result of this work, a technical solution was developed that made it possible to self-excite the machine without the use of an additional DC source. In the first solution [56], self-excitation took place at a rotational speed of 1300 rpm, while in the second solution [57] it occurred at 850 rpm. This feature can be used in a home wind turbine: in the absence of wind, thanks to the use of a single diode, the generator regulator does not draw energy from the batteries, while when wind of sufficient strength appears, the generator self-excites and consequently generates energy for the storage system.
The influence of the excitation current on the cogging torque of the machines was also investigated. The results are shown in Figure 17a, which presents cogging torque waveforms of the machine with permanent magnets on one part of the rotor for five current values in the excitation coil; the parameter α is the angular (mechanical) position between rotor and stator. In the second model, the magnets from the first model (on one part of the rotor) were retained and another 6 magnets were added to the second part of the rotor. Beforehand, simulation studies of the influence of the angle γ between the permanent magnets on this part of the rotor and the machine axis were carried out using FEA. This angle was varied from 0 to 15° in steps of 1°. It turned out that the angle at which the lowest cogging torque occurs is γ = 9°. The chosen results are presented in Figure 17b, which demonstrates the maximum values of the cogging torque as a function of the angle γ with no current in the excitation coil.
The test results showed that the induced voltage decreased with increasing angle γ, while the cogging torque reached its minimum for γ = 9°. Additionally, thanks to the additional magnets set at the angle γ = 9°, the machine self-excited at speeds up to 450 rpm lower; moreover, despite the fact that twice as many sources of magnetomotive force were installed, the cogging torque with the excitation coil unloaded did not increase in relation to the machine presented in [57]. Figure 18 presents the experimental stand with the HECPM.
Very important from the technological point of view is the simplicity and ease of production of all kinds of devices, including electromechanical energy converters. For this reason, a new approach to designing a hybrid excited claw pole machine has been proposed. The paper [58] presents the concept of building a machine with a laminated rotor made of sheets of an appropriate shape (Hybrid Excited Claw Pole Machine with Laminated Rotor, HECPMLR). Figure 19 shows the FEA model of the tested HECPMLR machine, which is 1/6 of the whole machine. This type of approach allows the construction of even the most complex electromagnetic structures. The paper [58] also presents preliminary simulation results for the proposed structure and the relationship between the maximum cogging torque and the induced voltage distribution depending on the current in the excitation coil (Figure 20). Figure 20a shows the maximum value of the cogging torque T_emax depending on the current in the excitation coil I_exc, and Figure 20b the distribution of the back-EMF depending on the current in the excitation coil, where α is the mechanical angle between rotor and stator. The research shows that the cogging torque always increases with increasing current in the excitation coil, regardless of its direction (Figure 20a). The induced voltage U_imax, in turn, has an adjustment range from 189.4 V to 253.8 V (−18% to +10%).
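As a quick arithmetic cross-check of the quoted range (a sketch, not from [58]): if one assumes the −18%/+10% figures are taken relative to the back-EMF at zero excitation current (an assumption, since the baseline is not stated explicitly in the text), both bounds imply nearly the same baseline of about 231 V, and the overall regulation ratio follows directly.

```python
# Consistency check of the HECPMLR voltage regulation range quoted above.
# Assumption (not stated explicitly in the text): the -18%/+10% bounds are
# relative to the induced voltage at zero excitation current.
u_min, u_max = 189.4, 253.8          # induced voltage range, volts

u0_from_min = u_min / (1 - 0.18)     # ~231.0 V implied baseline
u0_from_max = u_max / (1 + 0.10)     # ~230.7 V implied baseline
print(f"implied baseline: {u0_from_min:.1f} V / {u0_from_max:.1f} V")

print(f"U_imax regulation ratio: {u_max / u_min:.2f}")  # ~1.34
```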
PM Electric Machine with Magnetic Barriers and Excitation Coils in the Rotor (HESMFB)
The purpose of the work on the new HESMFB machine design was to develop a construction with magnetic flux barriers, embedded PMs and additional electromagnetic excitation in the machine rotor. It should be added that, in order to achieve a wide speed control range of PM machines, a large inductance ratio L_q/L_d (L_q, inductance in the q-axis; L_d, inductance in the d-axis) of the machine is required. The magnetic flux density distribution in the FEA model is presented in Figure 21.
As can be seen in Figure 21a, large saturation is noticeable in the magnetic bridges of the rotor close to the air gap and the permanent magnets. Due to this, flux leakage is reduced, because most of the magnetic flux passes through the air gap. Figure 21b presents the novel conception of the machine rotor with barriers and hybrid excitation.
During the FEA investigations, induced voltage waveforms were plotted (Figure 22a, where α is the mechanical angle between rotor and stator). Furthermore, the influence of the additional winding current density j_DC on the electromagnetic torque characteristics of the machine was determined. These numerical tests were conducted for three stator currents, I_s max = 4, 8 and 12 A, at additional winding current densities j_DC in the range from −8 A/mm² to +8 A/mm², over the whole load angle range (from 0 to 360 electrical degrees). The maximum values of the electromagnetic torque T_e depending on j_DC and I_s max are presented in Figure 22b.
Next, experimental tests were conducted. Figure 23 presents chosen experimental results and a comparison with the simulation predictions. Figure 23 shows that in the proposed machine an induced voltage control range from 77.6 V to 129.8 V was obtained, whereas according to the FEA results a back-EMF control range from 72.2 V to 129.3 V was reached. It follows that the field control ratio (FCR) is 1.67 for the experiment and 1.79 for the FEA. This means the developed simulation model represents the real machine well. Regarding the cogging torque, the results of the experiment differ slightly from those obtained in the simulations; however, these differences are minor and may result from imperfect torque measurement (because of its very small values) and from the mesh and accuracy of the FEA model.
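As a minimal numeric cross-check of the quoted control ranges (a sketch; the field control ratio is taken here to be the ratio of the maximum to the minimum controllable back-EMF, consistent with the values stated above):

```python
# Field control ratio (FCR) from the HESMFB back-EMF control ranges above.
emf_experiment = (77.6, 129.8)   # measured range, volts
emf_fea = (72.2, 129.3)          # FEA-predicted range, volts

def fcr(rng):
    lo, hi = rng
    return hi / lo

print(f"FCR (experiment): {fcr(emf_experiment):.2f}")  # 1.67
print(f"FCR (FEA):        {fcr(emf_fea):.2f}")         # 1.79
```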
Conclusions
The paper provides an overview of various hybrid excited machine topologies. Many solutions for hybrid excited machines can be found in the literature, and the most common ones have been presented in this paper. Against this background, some new designs developed by the authors, some of them completely innovative, were presented. Table 3 summarizes the advantages, disadvantages and characteristic features of these machines. Finally, we conclude that some of the presented solutions have very good flux control properties, but their complicated structure rules them out for practical application. Hence there is good reason to continue the search for structures that are easy and inexpensive to manufacture and durable in operation, while at the same time offering a large control range.
Author Contributions: All authors worked on this manuscript together. Conceptualization, M.W. and R.P.; investigation, resources, writing-original draft preparation and writing-review and editing, M.W., R.P., P.P. (Piotr Paplicki), P.P. (Pawel Prajzendanc), and T.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Acknowledgments: This work has been supported with the grant of the National Science Centre, Poland 2018/02/X/ST8/01112.
Conflicts of Interest:
The authors declare no conflict of interest.
"Engineering",
"Physics"
] |
Perron-Frobenius theory and frequency convergence for reducible substitutions
We prove a general version of the classical Perron-Frobenius convergence property for reducible matrices. We then apply this result to reducible substitutions and use it to produce limit frequencies for factors and hence invariant measures on the associated subshift. The analogous results are well known for primitive substitutions and have found many applications, but for reducible substitutions the tools provided here were so far missing from the theory.
Introduction
Among the most investigated dynamical systems, with important applications in many areas, are the subshifts generated by substitutions. If the substitution is primitive, then a number of well known and powerful tools are available, most notably the Perron-Frobenius theorem for primitive matrices, which ensures that the subshift in question is uniquely ergodic.
On the other hand, substitutions with reducible incidence matrices have only recently received serious attention (see Remark 3.18 and Remark 7.3). One reason for this neglect is that the standard methods employed in the primitive case for analyzing the dynamics of such substitutions and their incidence matrices use tools that so far had no analogues in the reducible case. It is the purpose of this paper to provide these tools, and thus to extend the basic theory from the primitive to the reducible case.
We concentrate on substitutions $\zeta$ which are expanding, i.e. $\zeta$ neither acts periodically on nor erases any subset of the given alphabet (for our notation and terminology on substitutions see §3.1).
Every non-negative irreducible square matrix has a power which is a block diagonal matrix, where every diagonal block is primitive. The classical Perron-Frobenius theorem asserts that, for any primitive matrix $M$ and for any non-negative column vector $v \neq 0$, the sequence of vectors $M^t v$, after normalization, converges to a positive eigenvector of $M$, and that the latter is unique up to rescaling.
In analogy with the above facts, in section 2 we introduce the PB-Frobenius form for matrices, which is set up so that, up to conjugation with a permutation matrix, every non-negative integer square matrix has a positive power which is in PB-Frobenius form. We prove the following convergence result for matrices in PB-Frobenius form; its proof spans sections 4-7 and can be read independently from the rest of the paper.
Theorem 1.1. Let $M$ be a non-negative integer $(n \times n)$-matrix which is in PB-Frobenius form. Assume that none of the coordinate vectors is mapped by a positive power of $M$ to itself or to $0$.
Then for any non-negative column vector $v \neq 0$ there exists a "limit vector"
$$v_\infty = \lim_{t \to \infty} \frac{M^t v}{\| M^t v \|},$$
and $v_\infty$ is an eigenvector of $M$.
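To make the convergence statement concrete, the following minimal sketch (not from the paper; the matrix is a hypothetical example) iterates a non-negative matrix in lower triangular block form, with two primitive diagonal blocks, on a non-negative starting vector and normalizes in the $\ell_1$-norm; the normalized iterates settle on an eigenvector of $M$:

```python
import numpy as np

# Hypothetical matrix in PB-Frobenius form: lower triangular block structure
# with two primitive 2x2 diagonal blocks.
M = np.array([[2, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 2, 1]], dtype=float)

v = np.array([1.0, 0.0, 1.0, 0.0])   # any non-negative v != 0
for _ in range(200):
    v = M @ v
    v /= np.abs(v).sum()             # l1-normalization, as used in the paper

print("v_inf  ≈", np.round(v, 6))
print("ratios ≈", np.round((M @ v) / v, 6))  # near-constant: the eigenvalue
```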
In symbolic dynamics the classical Perron-Frobenius theorem plays a key role when applied to the incidence matrix $M_\zeta$ of a primitive substitution $\zeta$: any finite word $w$ in the language $L_\zeta$ associated to $\zeta : A \to A^*$ has the property that, for any letter $a_i$ of the alphabet $A$, the number $|\zeta^t(a_i)|_w$ of occurrences of $w$ as a factor in $\zeta^t(a_i)$, normalized by the word length $|\zeta^t(a_i)|$, converges to a well defined limit frequency. The latter can be used to define the unique (up to scaling) invariant measure on the subshift $\Sigma_\zeta$ defined by the primitive substitution $\zeta$.
The purpose of this paper is to establish the analogous results for expanding reducible substitutions $\zeta$. The key observation (Proposition 3.5) here is that for any $n \geq 2$ the classical level $n$ blow-up substitution $\zeta_n$ (based on a derived alphabet $A_n$ which contains all factors $w_i \in L_\zeta$ of length $|w_i| = n$ as "blow-up letters") has incidence matrix $M_{\zeta_n}$ in PB-Frobenius form, assuming that the incidence matrix $M_\zeta$ is in PB-Frobenius form.
Combining Proposition 3.5 with Theorem 1.1 gives the following (see Lemma 3.2 and Proposition 3.11):
Theorem 1.2. Let $\xi$ be an expanding substitution on a finite alphabet $A$. Then there exists a positive power $\zeta = \xi^s$ such that for any non-empty word $w \in A^*$ and any letter $a_i \in A$ the limit frequency
$$\lim_{t \to \infty} \frac{|\zeta^t(a_i)|_w}{|\zeta^t(a_i)|}$$
exists.
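A minimal empirical sketch of such limit frequencies (the substitution below is a hypothetical reducible example, not one from the paper): iterate $\zeta$ on a letter and normalize the factor count by the word length. The different starting letters give visibly different frequencies, in line with the measure depending on the chosen letter (Corollary 1.3 below).

```python
# Empirical letter frequencies |zeta^t(a_i)|_w / |zeta^t(a_i)| for a
# hypothetical reducible (non-primitive) substitution: a and b never mix,
# while c feeds both.
zeta = {"a": "aa", "b": "bb", "c": "abc"}

def iterate(word, t):
    for _ in range(t):
        word = "".join(zeta[x] for x in word)
    return word

def count_factor(word, w):
    # occurrences of w as a factor, overlaps included
    return sum(word.startswith(w, i) for i in range(len(word) - len(w) + 1))

w = "a"
for letter in "abc":
    u = iterate(letter, 14)
    print(letter, "->", round(count_factor(u, w) / len(u), 4))
# a -> 1.0, b -> 0.0, c -> ~0.5: the limit frequency depends on the letter.
```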
As a consequence of Theorem 1.2 we obtain, precisely as in the primitive case, for any $a_i \in A$ an invariant measure on the subshift $\Sigma_\zeta$ defined by the substitution $\zeta$. However, contrary to the primitive case, in general this invariant measure will heavily depend on the chosen letter $a_i$, see Question 3.15. We prove (see Remark 3.14):
Corollary 1.3. For any expanding substitution $\zeta : A \to A^*$ and any letter $a_i \in A$ there is a well defined invariant measure $\mu_{a_i}$ on the substitution subshift $\Sigma_\zeta$. For any non-empty $w \in A^*$ and the associated cylinder $\mathrm{Cyl}_w \subset \Sigma_\zeta$ (see subsection 3.6) the value of $\mu_{a_i}$ is given, after possibly raising $\zeta$ to a suitable power according to Theorem 1.2, by the limit frequency
$$\mu_{a_i}(\mathrm{Cyl}_w) = \lim_{t \to \infty} \frac{|\zeta^t(a_i)|_w}{|\zeta^t(a_i)|}.$$
Although there are various generalizations of the classical Perron-Frobenius theorem for primitive matrices in the literature, we could not find one with the convergence statement as in Theorem 1.1, which is needed for our applications. Perron-Frobenius theory and its generalizations are relevant in many more branches of mathematics than just symbolic dynamics, including applied linear algebra, and some areas of analysis and probability theory (see for instance [AGN11], [BSS12] and [Lem06]). We expect that Theorem 1.1 will find useful applications in other contexts.
Our proof of Theorem 1.1 uses only standard methods from linear algebra and is hence accessible to mathematicians from all branches. The reader interested only in Theorem 1.1 may go straight to section 4 and start reading from there. The sections 4 to 7 are organized as follows: after setting up some definitions and terminology in section 4, we state Theorem 5.1, a slight strengthening of Theorem 1.1. To stay within the realm of this paper we phrase Theorem 5.1 for integer matrices, but this assumption is not used in the proof of Theorem 5.1.
The proof of Theorem 5.1 is done by induction over the number of primitive diagonal blocks in a suitable power of the given matrix $M$, and the induction step itself (Proposition 5.4) reveals a crucial amount of information about the dynamics on the non-negative cone $\mathbb{R}^n_{\geq 0}$ induced by iterating the map defined by the matrix $M$. The proof of Proposition 5.4, which involves a careful (and hence a bit lengthy) 3-case analysis, is assembled in section 6. In section 7.1 some results about the eigenvectors of such a matrix $M$ are shown to be direct consequences of Proposition 5.4.
Non-negative matrices in PB-Frobenius form
A non-negative integer $(n \times n)$-matrix $M$ is called irreducible if for any $1 \leq i, j \leq n$ there exists an exponent $k = k(i, j)$ such that the $(i, j)$-th entry of $M^k$ is positive. The matrix $M$ is called primitive if the exponent $k$ can be chosen independently of $i$ and $j$. The matrix $M$ is called reducible if $M$ is not irreducible. Since in some places in the literature the $(1 \times 1)$-matrix with entry 0 is also accepted as "primitive", we will be explicit whenever this issue comes up.
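As an aside (not from the paper), primitivity can be tested computationally: by Wielandt's classical bound, an $n \times n$ non-negative matrix $M$ is primitive if and only if $M^{(n-1)^2+1}$ has all entries positive. A minimal sketch, working on the positivity pattern to avoid integer overflow:

```python
import numpy as np

def _bool_power(A: np.ndarray, k: int) -> np.ndarray:
    # k-th power of the positivity pattern of A (boolean matrix product);
    # for non-negative A this equals the positivity pattern of A^k.
    P = np.eye(A.shape[0], dtype=bool)
    B = A > 0
    for _ in range(k):
        P = (P[:, :, None] & B[None, :, :]).any(axis=1)
    return P

def is_primitive(M: np.ndarray) -> bool:
    # Wielandt's bound: an n x n non-negative matrix is primitive
    # iff M^((n-1)^2 + 1) has all entries positive.
    n = M.shape[0]
    return bool(_bool_power(M, (n - 1) ** 2 + 1).all())

print(is_primitive(np.array([[0, 1], [1, 1]])))  # True  (Fibonacci matrix)
print(is_primitive(np.array([[0, 1], [1, 0]])))  # False (irreducible, period 2)
```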
It is a well known fact for non-negative matrices that every irreducible matrix has a power which is, up to conjugation with a permutation matrix, a block diagonal matrix where every diagonal block is a primitive square matrix.
For the purposes of our results on reducible substitutions presented in the next section the following terminology turns out to be crucial: Let M be a non-negative integer square matrix as considered above, and assume that M is partitioned into matrix blocks which along the diagonal are square matrices.
Definition 2.2. (a) The matrix $M$ is in PB-Frobenius form if $M$ is a lower triangular block matrix where every diagonal block is either primitive or power bounded (PB).
(b) If $M$ is in PB-Frobenius form, then the special case of a diagonal block which is a $(1 \times 1)$-matrix with entry 1 or 0 will be counted as a PB block and not as a primitive block, although technically speaking such a block could also be considered as "primitive".
Lemma 2.3. Every non-negative square matrix $M$ has a positive power $M^t$ which is in PB-Frobenius form (with respect to some block decomposition of $M$).
Proof. This is an immediate consequence of the well known normal form for non-negative matrices, which says that, up to conjugation with a permutation matrix, $M$ is a lower triangular block matrix in which every diagonal block is either zero or irreducible. It now suffices to raise $M$ to a power such that every diagonal block is itself a block diagonal matrix with primitive diagonal blocks, and to refine the block structure of $M$ accordingly.
As is often done when working with non-negative matrices, we will use in this paper as norm on R^n the ℓ¹-norm, i.e.
$$\Big\|\sum_{i=1}^n a_i e_i\Big\| = \sum_{i=1}^n |a_i| \quad \text{for all } a_1, \ldots, a_n \in \mathbb{R}.$$
In section 7 we prove the convergence result for matrices in PB-Frobenius form stated in Theorem 1.1, which is crucial for our extension of the classical theory for primitive substitutions to the much more general class of expanding substitutions in the next section. It turns out (see Proposition 3.5) that the class of PB-Frobenius matrices is precisely the class of matrices for which the blow-up technique known from primitive matrices can be extended naturally.
For practical purposes we formalize the condition that is used as an assumption in Theorem 1.1:

Definition-Remark 2.4. (1) An integer square matrix M is called expanding if none of the coordinate vectors e_i is mapped by a positive power of M to itself or to 0.
(2) It is easy to see that this is equivalent to the condition that for any non-negative column vector v ≠ 0 the lengths of the iterates satisfy ‖M^t v‖ → ∞ for t → ∞ (see the sketch following this definition).
(3) Let M be in PB-Frobenius form. The statement that "M is expanding" is equivalent to the requirement that no minimal diagonal matrix block M_{i,i} of M is PB. Here minimal refers to the partial order on blocks as defined in section 4: "M_{i,i} is minimal" means that Mv has non-zero coefficients only in the coordinates corresponding to M_{i,i} whenever the same is true for v.
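Condition (1) can be checked directly on small integer matrices by following the orbits of the coordinate vectors. The sketch below is our own, and its iteration cutoff is a heuristic that is adequate for small examples rather than a proven bound:

```python
import numpy as np

def is_expanding(M, max_iter=None):
    """Definition-Remark 2.4 (1): no coordinate vector e_i is mapped by a
    positive power of M to itself or to 0.  Heuristic orbit-following check."""
    n = M.shape[0]
    max_iter = max_iter or 2 ** n + 1          # crude cutoff for small examples
    for i in range(n):
        e_i = np.zeros(n, dtype=int); e_i[i] = 1
        v = e_i.copy()
        for _ in range(max_iter):
            v = M @ v
            if not v.any() or (v == e_i).all():
                return False                   # e_i is killed or periodic
    return True

print(is_expanding(np.array([[0, 1], [1, 1]])))  # True  (Fibonacci matrix)
print(is_expanding(np.array([[0, 1], [1, 0]])))  # False (permutation matrix)
```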
3. Dynamics of expanding substitutions
3.1. Basics of substitutions. A substitution ζ on a finite set A = {a 1 , a 2 , . . . a n } (called the alphabet) of letters a i is given by associating to every a i ∈ A a finite word ζ(a i ) in the alphabet A: This defines a map from A to A * , by which we denote the free monoid over the alphabet A. The map ζ extends to a well defined monoid endomorphism ζ : A * → A * which is usually denoted by the same symbol as the substitution. The combinatorial length of ζ(a i ), denoted by |ζ(a i )|, is the number of letters in the word ζ(a i ). We call a substitution ζ expanding if there exists k ≥ 1 such that for every a i ∈ A one has |ζ k (a i )| ≥ 2.
It follows directly that this is equivalent to stating that ζ is non-erasing, i.e. none of the ζ(a i ) is equal to the empty word, and that ζ doesn't act periodically on any subset of the generators.
Let A Z be the set of all biinfinite words . . . x −1 x 0 x 1 x 2 . . . in A, endowed with the product topology. It is equipped with the shift operator, which shifts the indices of any biinfinite word by −1, and is continuous.
Any substitution ζ defines a language L ζ ⊂ A * which consists of all words w ∈ A * that appear as a factor of ζ k (a i ) for some a i ∈ A and some k ≥ 0. Here factor means any finite subword of a word in A * or A Z , referring to the multiplication in the free monoid A * .
Furthermore, ζ defines a substitution subshift, i.e. a subshift Σ ζ ⊂ A Z which is the space of all biinfinite words in A which have the property that any finite factor belongs to L ζ .
A substitution ζ on A is called irreducible if for all 1 ≤ i, j ≤ n there exists k = k(i, j) ≥ 1 such that ζ^k(a_j) contains the letter a_i. It is called primitive if k can be chosen independently of i, j. A substitution is called reducible if it is not irreducible. Note that any irreducible substitution ζ (and hence any primitive ζ) is expanding, except if A = {a_1} and ζ(a_1) = a_1.
Given a substitution ζ : A → A*, there is an associated incidence matrix M_ζ defined as follows: The (i, j)-th entry of M_ζ is the number of occurrences of the letter a_i in the word ζ(a_j). Note that the matrix M_ζ is a non-negative integer square matrix. It is easy to verify that an expanding substitution ζ is irreducible (primitive) if and only if the matrix M_ζ is irreducible (primitive), as defined in section 2.
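As a concrete sketch (our own code), the incidence matrix of the Fibonacci substitution, together with a check, for t = 2, of the identity M_{ζ^t} = (M_ζ)^t stated in the next paragraph:

```python
import numpy as np

def incidence_matrix(subst, alphabet):
    """(i, j)-th entry: number of occurrences of letter a_i in zeta(a_j)."""
    idx = {a: i for i, a in enumerate(alphabet)}
    M = np.zeros((len(alphabet),) * 2, dtype=int)
    for j, a in enumerate(alphabet):
        for x in subst[a]:
            M[idx[x], j] += 1
    return M

fib = {'a': 'ab', 'b': 'a'}                 # Fibonacci substitution
M = incidence_matrix(fib, 'ab')
print(M)                                    # [[1 1], [1 0]]

# M_{zeta^2} = (M_zeta)^2 :
fib2 = {a: ''.join(fib[x] for x in fib[a]) for a in fib}
assert (incidence_matrix(fib2, 'ab') == M @ M).all()
```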
It also follows directly that M_{ζ^t} = (M_ζ)^t for any exponent t ∈ N. Furthermore, the incidence matrix M_ζ is expanding (see Definition-Remark 2.4) if and only if the substitution ζ is expanding. In particular, we obtain directly from Lemma 2.3: Lemma 3.1. Every expanding substitution ζ has a positive power ζ^t such that the incidence matrix M_{ζ^t} is PB-Frobenius and expanding.
3.2. Frequencies of letters.
For any letter a i ∈ A and any word w ∈ A * we denote the number of occurrences of the letter a i in the word w by |w| a i .
We observe directly from the definitions that the resulting occurrence vector v(w) := (|w|_{a_i})_{a_i ∈ A} satisfies:
$$v(\zeta(w)) = M_\zeta \, v(w). \qquad (3.1)$$
The statement of the following lemma, for the special case of primitive substitutions, is a well known classical tool in symbolic dynamics (see [Que10, Proposition 5.8]).

Lemma 3.2. Let ζ : A* → A* be an expanding substitution. Then, up to replacing ζ by a positive power, for any a ∈ A and any a_i ∈ A the limit frequency
$$f_{a_i}(a) := \lim_{t\to\infty} \frac{|\zeta^t(a)|_{a_i}}{|\zeta^t(a)|}$$
exists. The resulting limit frequency vector v_∞(a) := (f_{a_i}(a))_{a_i ∈ A} is an eigenvector of the matrix M_ζ.
Proof. By Lemma 3.1 we can assume that, up to replacing ζ by a positive power, the incidence matrix M_ζ is in PB-Frobenius form and expanding. Thus, Theorem 1.1 applied to the occurrence vector v(a) gives the required result, where we note that ‖M_ζ^t v(a)‖ = ‖v(ζ^t(a))‖ = |ζ^t(a)| is a direct consequence of equality (3.1) and the definition of the norm in section 2.
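Numerically, the limit frequency vector of Lemma 3.2 can be approximated by renormalized iteration of the incidence matrix. The substitution a → ab, b → a, c → cca used below is a hypothetical example of ours: it is reducible but expanding, and the limit visibly depends on the starting letter, as predicted for the reducible case.

```python
import numpy as np

# Incidence matrix of the hypothetical substitution
# a -> ab, b -> a, c -> cca ; letters ordered (a, b, c)
M = np.array([[1., 1., 1.],
              [1., 0., 0.],
              [0., 0., 2.]])

def limit_frequencies(M, i, t=60):
    v = np.zeros(M.shape[0]); v[i] = 1.0     # occurrence vector of a letter
    for _ in range(t):
        v = M @ v
        v /= v.sum()       # renormalize; the sum is the l1-norm here
    return v

print(limit_frequencies(M, 0))  # ~ (0.618, 0.382, 0)   : Fibonacci frequencies
print(limit_frequencies(M, 2))  # ~ (0.5, 0.25, 0.25)   : a different eigenvector of M
```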
Notice that, as for primitive substitutions, it follows that the sum of the coefficients of the limit frequency vector v_∞(a) is equal to 1. However, contrary to the primitive case, for a reducible substitution ζ the limit frequency vector v_∞(a) will in general depend on the choice of a ∈ A.

3.3. Frequencies of factors via the level n blow-up substitution. Recall from section 3.1 that for any substitution ζ we denote by L_ζ the subset of A* which consists of all factors of any iterate ζ^k(a_i), for any letter a_i ∈ A. We say that w is used by a_i if w appears as a factor in some ζ^k(a_i). We see from Lemma 3.2 that the frequencies of letters are encoded in the incidence matrix M_ζ; however, this matrix doesn't give us any information about the frequencies of factors. In order to understand the asymptotic behavior of frequencies of factors one has to appeal to a classical "blow-up" technique for the substitution (see for instance [Que10]). We now give a quick introduction to this blow-up technique, which will be crucially used below.
Let n ≥ 2, and denote by A_n = A_n(ζ) the set of all words in L_ζ of length n. We consider A_n as the new alphabet, and define a substitution ζ_n on A_n as follows: For w = a_1 a_2 … a_n ∈ A_n, consider the word
$$\zeta(a_1 a_2 \ldots a_n) = x_1 x_2 \ldots x_{|\zeta(a_1)|}\, x_{|\zeta(a_1)|+1} \ldots x_{|\zeta(w)|}.$$
That is, ζ_n(w) is defined as the ordered list of the first |ζ(a_1)| factors of length n of the word ζ(w). As before, ζ_n extends to A_n* and A_n^Z by concatenation. Here a word w′ ∈ A_n* of length k is an ordered list of k words of length n in A*, namely w′ = w_1 w_2 … w_k with |w_i| = n for all i = 1, …, k. We call ζ_n the level n blow-up substitution for ζ. From this definition it follows directly that (ζ_n)^t = (ζ^t)_n, hence we will omit the parentheses. Observe that for w = a_1 a_2 … a_n ∈ A_n we have |ζ_n(w)| = |ζ(a_1)|, from which it follows that for an expanding substitution ζ the blow-up substitution ζ_n is expanding, for any n ≥ 2.
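A minimal sketch (our own helper) of the map ζ_n on a single letter of A_n; the extension to words of A_n* is by concatenation:

```python
def blowup_image(subst, w):
    """zeta_n applied to a letter w of A_n: the first |zeta(w[0])| factors
    of length n = |w| of the word zeta(w)."""
    n, img, k = len(w), ''.join(subst[x] for x in w), len(subst[w[0]])
    return [img[i:i + n] for i in range(k)]

fib = {'a': 'ab', 'b': 'a'}
print(blowup_image(fib, 'ab'))   # ['ab', 'ba']
print(blowup_image(fib, 'ba'))   # ['aa']       : |zeta_2('ba')| = |zeta('b')| = 1
```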
One of the classical tools that is used to understand irreducible substitutions and their invariant measures is the following classical fact: If ζ is primitive, then for any n ≥ 1 the incidence matrix M_{ζ_n} for the level n blow-up substitution ζ_n is again primitive.
We show that the analogue is true for expanding substitutions with possibly reducible incidence matrices:

Proposition 3.5. Let ζ be an expanding substitution whose incidence matrix M_ζ is in PB-Frobenius form. Then for any n ≥ 1, the incidence matrix M_{ζ_n} for the level n blow-up substitution ζ_n is again in PB-Frobenius form.
The proof of this proposition, which is one of the main results of this paper, requires several lemmas; we assemble all of them in the next subsection.
3.4. The proof of Proposition 3.5. Let ζ : A* → A* be a substitution as before, and let A′ ⊂ A be a ζ-invariant subalphabet, i.e. we assume that ζ(a′) ∈ A′* for any a′ ∈ A′, where we identify the free monoid A′* with the submonoid of A* that is generated by the letters from A′.
For most applications one may choose A′ to be a maximal proper ζ-invariant subalphabet of A, although formally we don't need this assumption. The terminology below comes from thinking of A ∖ A′ as representing the "top stratum" for the reducible substitution ζ.
For any n ≥ 2 and for the level n blow-up substitution ζ_n : A_n → A_n* we consider the subalphabet A′_n ⊂ A_n which is given by all words w = x_1 … x_n with x_i ∈ A′ that are used by some a_i ∈ A′. From the ζ-invariance of A′ it follows directly that A′_n is ζ_n-invariant.
We now partition the letters w of A_n ∖ A′_n, i.e. the words w = x_1 … x_n of length n which are used by some a_i ∈ A ∖ A′ but not by any a_i ∈ A′, into two classes: the top-used words, with first letter x_1 ∈ A ∖ A′, and the top-transition words, with first letter x_1 ∈ A′.

Remark 3.6. From the definition of the map ζ_n and from the ζ-invariance of A′ it follows directly that the top-transition words together with A′_n constitute a ζ_n-invariant subalphabet of A_n. Indeed, recall that for any w = x_1 … x_n ∈ A_n the image ζ_n^t(w) is a word w_1 w_2 … w_r in A_n, with r = r(t) = |ζ^t(x_1)|, such that w_k is the prefix of length n of the word obtained from ζ^t(w) by deleting the first k − 1 letters. Thus it follows that the first r − (n − 1) of the words w_k are factors of ζ^t(x_1), and that the last n − 1 of the words w_k have at least their first letter in ζ^t(x_1). Hence, if x_1 ∈ A′, then the first r − (n − 1) of the words w_k belong to A′_n, and the last n − 1 words w_k are all top-transition.
We now consider the incidence matrices M_ζ and M_{ζ_n}: From the ζ-invariance of A′ it follows that after properly reordering the letters of A the matrix M_ζ is a 2 × 2 lower triangular block matrix, with M_{ζ|A′} as lower diagonal block. Similarly, M_{ζ_n} is a 3 × 3 lower triangular block matrix, with M_{ζ_n|A′_n} as bottom diagonal block. The top-used words form the top diagonal block, and the top-transition words form the middle diagonal block.
The arguments given below also work in the special case where A′ is empty; in this case the bottom diagonal block of M_ζ and the two bottom diagonal blocks of M_{ζ_n} have size 0 × 0, so that both M_ζ and M_{ζ_n} consist de facto of a single matrix block.
Lemma 3.7. The middle diagonal block of M ζn as defined above is power bounded.
Proof. Using the same terminology as in Remark 3.6 we recall that for w = x_1 … x_n ∈ A_n and ζ_n^t(w) = w_1 w_2 … w_{|ζ^t(x_1)|} it follows from x_1 ∈ A′ that only the last n − 1 words w_k may possibly lie in A_n ∖ A′_n, and their first letter always belongs to A′. This shows that, independently of t, any coefficient in the middle diagonal block of M_{ζ_n^t} is bounded above by n, for any t ≥ 1.

Lemma 3.8. If the top diagonal block of M_ζ is power bounded, then so is the top diagonal block of M_{ζ_n}.
Proof. From the hypothesis that the top diagonal block of M_ζ is power bounded we obtain a constant K ∈ N such that for any letter a_i ∈ A ∖ A′ and any t ≥ 0 the number of letters x_i of the word ζ^t(a_i) that do not belong to A′ is bounded above by K. But then it follows directly that there can be at most K top-used letters y_1 … y_n from A_n ∖ A′_n in any of the ζ_n^t(w) with w = x_1 … x_n ∈ A_n ∖ A′_n top-used, since any such y_1 … y_n must have its initial letter y_1 in ζ^t(x_1), and y_1 must belong to A ∖ A′.

Remark 3.9. From the definition of "top-used" and from the finiteness of A_n it follows that there is an exponent t̂ ≥ 0 such that for any word u ∈ A_n ∖ A′_n (and hence in particular for any top-used u) there is a letter a_i ∈ A ∖ A′ such that u is a factor of the word ζ^{t′}(a_i) for some positive integer t′ ≤ t̂.

The analogous statement to Lemma 3.8 holds for primitivity: if the top diagonal block of M_ζ is primitive, then so is the top diagonal block of M_{ζ_n}.

Proof. It suffices to show that there is an integer t_0 ≥ 0 such that for any two top-used words w = x_1 … x_n and w′ of A_n the word w′ is a factor of the prefix of length |ζ^{t_0}(x_1)| of ζ^{t_0}(w). From the assumption that the top diagonal block of M_ζ is primitive we know that there is an exponent t_1 ≥ 0 such that for any two letters a and a′ of A ∖ A′ the word ζ^{t_1′}(a′) contains the letter a as a factor, for any integer t_1′ ≥ t_1. From the observation stated in Remark 3.9 we deduce that there is an exponent t_2 ≥ 0 such that w′ is a factor of ζ^{t_2′}(a) for some letter a of A ∖ A′ and some positive integer t_2′ ≤ t_2. Thus, setting a′ = x_1 and taking for a the letter provided by Remark 3.9, it follows that w′ is a factor of ζ^{t_1+t_2}(x_1). This shows the claim, for t_0 = t_1 + t_2.
We now obtain as a direct consequence of the above lemmas:

Proof of Proposition 3.5. The claim that the incidence matrix M_{ζ_n} is in PB-Frobenius form follows from an easy inductive argument over the number of blocks in the PB-Frobenius form of M_ζ: At each induction step the top left diagonal block of M_ζ is either primitive or power bounded, and all other blocks are assembled together in an invariant subalphabet A′ of the given alphabet A. Then M_{ζ_n} is considered as above as a 3 × 3 lower triangular block matrix. For the two upper diagonal blocks the claim follows directly from the above lemmas. The bottom diagonal block is equal to M_{ζ_n|A′_n}, which is the incidence matrix of (ζ|_{A′})_n. But for ζ|_{A′} the claim can be assumed to be true via the induction hypothesis.
3.5. Level n limit frequencies. We can now state the analogue of Lemma 3.2 for words w of length n ≥ 2 instead of letters a_i ∈ A. As done there for n = 1, we can use all words w from the alphabet A_n = A_n(ζ) as "coordinates" and consider, for any word w′ ∈ A_n*, the level n occurrence vector v_n(w′) := (|w′|_w)_{w ∈ A_n}. Again we obtain:

Proposition 3.11. Let ζ : A → A* be an expanding substitution. Then, up to replacing ζ by a power, the frequencies of factors converge: For any word w ∈ A* of length |w| ≥ 2 and any letter a ∈ A the limit frequency
$$f_w(a) := \lim_{t\to\infty} \frac{|\zeta^t(a)|_w}{|\zeta^t(a)|}$$
exists.

Proof. Set n = |w|. If w does not belong to A_n, then |ζ^t(a)|_w = 0 for all t ∈ N, so that we can assume w ∈ A_n. By Lemma 3.1 we can assume that, up to replacing ζ by a positive power, the incidence matrix M_ζ is in PB-Frobenius form. Thus we can apply Proposition 3.5 to obtain that the blow-up incidence matrix M_{ζ_n} is also in PB-Frobenius form. Furthermore, if ζ is expanding, then so is ζ_n, and hence M_{ζ_n}.
From the definition of ζ_n we have the following estimate: For any two w, w′ ∈ A_n, where w′ starts with the letter a, the number of occurrences |ζ_n^t(w′)|_w differs from |ζ^t(a)|_w by at most n − 1, while |ζ_n^t(w′)| = |ζ^t(a)|. Now, let w′ ∈ A_n be a word of length n that starts with the letter a. As in the proof of Lemma 3.2 we can thus use Theorem 1.1, which applied to the level n occurrence vector v_n(w′) gives that lim_{t→∞} |ζ_n^t(w′)|_w / |ζ_n^t(w′)| exists, and together with the above observation equals f_w(a). Similarly to the case n = 1 in Lemma 3.2 it follows that the sum of the coefficients of the limit frequency vector v_n^∞(a) is equal to 1. Again, for an expanding reducible substitution ζ the limit frequency vector v_n^∞(a) will in general depend on the choice of a ∈ A.
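For the primitive Fibonacci substitution the factor frequencies of Proposition 3.11 can be observed empirically; the printed values approximate the classical frequencies 1/φ³ and 1/φ² of the Fibonacci subshift (φ the golden ratio). A short sketch of ours:

```python
from collections import Counter

fib = {'a': 'ab', 'b': 'a'}
w = 'a'
for _ in range(25):                          # a long iterate zeta^25(a)
    w = ''.join(fib[x] for x in w)

pairs = Counter(w[i:i + 2] for i in range(len(w) - 1))
total = sum(pairs.values())
for p, c in sorted(pairs.items()):
    print(p, round(c / total, 4))
# aa 0.2361, ab 0.382, ba 0.382  (approximately)
```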
3.6. Invariant measures for expanding substitutions. Recall from section 3.1 that the subshift Σ_ζ associated to a substitution ζ is the space of all biinfinite words which have the property that any finite factor belongs to L_ζ. Any word w ∈ L_ζ determines a cylinder Cyl_w ⊂ Σ_ζ. In the classical case where ζ is primitive, it is well known that the subshift Σ_ζ defined by ζ is uniquely ergodic. In this case the limit frequency f_w(a) obtained in Proposition 3.11 is typically used to describe the value that the invariant probability measure µ_ζ takes on the cylinder Cyl_w defined by any w ∈ L_ζ ⊂ A* (see section 5.4.2 of [Que10]).
In the situation treated in this paper, where ζ is only assumed to be expanding (so that M ζ may well be reducible), there is no such hope for a similar unique ergodicity result. However, the definition of invariant measures on Σ ζ , through limit frequencies as known from the primitive case, extends naturally via the results of this paper to any expanding reducible substitution ζ. We will use the remainder of this subsection to elaborate this, and to comment on some related developments.
Every shift-invariant measure µ on Σ_ζ defines a function ω_µ : A* → R_{≥0} by setting ω_µ(w) := µ(Cyl_w) if w belongs to L_ζ, and ω_µ(w) := 0 otherwise. Conversely, it is well known (see for instance [FM10]) that a function ω : A* → R_{≥0} is defined by an invariant measure µ on the full shift A^Z if and only if ω is a weight function, i.e. ω satisfies the Kirchhoff conditions spelled out in Definition 3.12 below. In this case ω determines µ, i.e. there is a unique invariant measure µ on A^Z that satisfies ω = ω_µ. Furthermore, the support of µ is contained in Σ_ζ ⊂ A^Z if and only if ω(w) = 0 for all w ∈ A* ∖ L_ζ.

Definition 3.12. A function ω : A* → R_{≥0} is called a weight function if it satisfies the Kirchhoff conditions
$$\omega(w) = \sum_{a_i \in A} \omega(a_i w) = \sum_{a_i \in A} \omega(w a_i) \quad \text{for every } w \in A^*.$$
Proposition 3.13. Let ζ : A → A* be an expanding substitution, raised to a suitable power according to Proposition 3.11. Then for any letter a ∈ A the function ω_a : A* → R_{≥0}, given by the limit frequencies ω_a(w) := f_w(a) from Proposition 3.11, satisfies the Kirchhoff conditions.
Proof. We consider ζ^t(a) as in Proposition 3.11 and observe that any occurrence of a word w as factor in ζ^t(a), unless it is a prefix, together with its preceding letter a_i in ζ^t(a) gives an occurrence of the factor a_i w, and conversely. The analogous statement holds for factors w a_i. Hence for every w ∈ A* each of the two equalities in Definition 3.12, for ω(w) := |ζ^t(a)|_w, either holds directly, or else it holds up to an additive constant ±1. Since by the assumption that ζ is expanding we have |ζ^t(a)| → ∞, the Kirchhoff conditions must hold for the limit quotient function ω_a(w) = f_w(a) = lim_{t→∞} |ζ^t(a)|_w / |ζ^t(a)|.

Remark 3.14. Since for any a ∈ A and any w ∉ L_ζ the limit frequencies satisfy f_w(a) = 0, we obtain directly from Proposition 3.13 that the weight function ω_a defines an invariant measure µ_a on Σ_ζ. This proves Corollary 1.3 from the Introduction.
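The Kirchhoff conditions can be checked empirically on a long iterate; as in the proof above, the two sums agree with the frequency itself up to boundary terms of order 1/|ζ^t(a)|. A small sketch (our own code):

```python
fib = {'a': 'ab', 'b': 'a'}
w = 'a'
for _ in range(22):
    w = ''.join(fib[x] for x in w)

def freq(u):                                 # empirical frequency of u in w
    return sum(w.startswith(u, i) for i in range(len(w))) / len(w)

for u in ['a', 'b', 'ab']:
    left = sum(freq(x + u) for x in 'ab')    # extensions on the left ...
    right = sum(freq(u + x) for x in 'ab')   # ... and on the right
    print(u, round(freq(u), 5), round(left, 5), round(right, 5))
```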
From the definition via limit frequencies it follows immediately that each µ_a is a probability measure, i.e. µ_a(Σ_ζ) = 1. Contrary to the primitive case, for an expanding substitution ζ distinct letters a_i of A may well define distinct measures µ_{a_i} on Σ_ζ. However, as happens already in the primitive case, distinct a_i ∈ A may also define the same measure µ_{a_i}. This raises several natural questions:

Question 3.15. Let ζ be an expanding substitution as before.
(1) What is the precise condition on letters a, a′ ∈ A such that they define the same measure µ_a = µ_{a′} on Σ_ζ? (2) Are there invariant measures on Σ_ζ that are not contained in the convex cone C_ζ, by which we denote the set of all non-negative linear combinations of the µ_a? (3) Which of the measures in C_ζ have the property that in addition to being invariant under the shift operator they are also projectively invariant under application of the substitution ζ? By this we mean that there exists some scalar λ > 0 such that the image measure ζ_*(µ) on Σ_ζ satisfies ζ_*(µ)(X) = λµ(X) for any measurable subset X ⊂ Σ_ζ.
Attempting seriously to find answers to these questions with the methods laid out here goes beyond the scope of this paper. We limit ourselves to the following:

Remark 3.16. Our analysis of the eigenvectors of non-negative matrices in PB-Frobenius form in §7, when combined with the technique presented in §3.4 above to understand simultaneous eigenvectors for all blow-up level incidence matrices, seems to have the potential to show that the convex cone C_ζ is spanned by invariant measures that are determined by the principal eigenvectors (see §7) of the "level 1" incidence matrix M_ζ. In particular, regarding Question 3.15 (1), it seems feasible that µ_a = µ_{a′} if and only if a and a′ define coordinate vectors e_a and e_{a′} which converge (up to normalization) to the same eigenvector of M_ζ.
Remark 3.17. In the special case where the substitution ζ, reinterpreted as a "positive" endomorphism of the free group F(A) with basis A, is invertible with no periodic non-trivial conjugacy classes in F(A), a negative answer to Question 3.15 (2) follows from the main result of our paper [LU15], which was our original motivation for the work presented here.
Reducible substitutions as a whole, and Question 3.15 (2) in particular, have already been treated in the literature in much more generality, through the work of Bezuglyi-Kwiatkowski-Medynets-Solomyak, see [BKMS10] and the papers cited there. A more restricted class of substitutions had been treated previously by Hama-Yuasa, see [HY11]. In particular, the following should be noted:

Remark 3.18. It is shown in [BKMS10] for expanding substitutions ζ with a mild extra restriction that the ergodic invariant probability measures on the subshift Σ_ζ are in 1-1 correspondence with the normalized (extremal) distinguished eigenvectors (see Remark 7.3) of the incidence matrix M_ζ (or perhaps rather, of the incidence matrix of a conjugate substitution defined there).
However, a direct translation of the results of [BKMS10], which is based on Bratteli diagrams and Vershik maps, to the framework of the work presented here seems to be non-evident.
Also in this context, in particular with respect to Question 3.15 (3) above, we note:

Remark 3.19. In the recent preprint [BHL15] a conceptually new machinery (called "train track towers" and "weight towers") for subshifts in general has been developed, and applied as a special case to reducible substitutions ζ as considered here. As a main result a bijection has been established there between the non-negative eigenvectors of M_ζ and the "invariant" measures on Σ_ζ. Although limit frequencies are not treated in [BHL15], it can be seen via weight functions that this bijection is the same as the one indicated in Remark 3.16 above.
However, a crucial difference to the work presented here is that in [BHL15] "invariant" means not just shift-invariance but also projective invariance with respect to the map on measures induced by the substitution ζ.
Growth types and normalization functions. For functions f, g : N → R_{>0} we say that f and g are of the same growth type if there is a constant C > 0 with lim_{t→∞} f(t)/g(t) = C. We say that the growth type of g is strictly bigger than that of f if lim_{t→∞} f(t)/g(t) = 0. Given an infinite family of vectors U = (u_t)_{t∈N} in R^n, a normalization function for U is a function h : N → R_{>0} such that the vectors (1/h(t)) u_t converge to a non-zero limit vector v_U ∈ R^n. It follows directly that any two normalization functions h and h′ for U must be of the same growth type, and that, conversely, any other function h″ : N → R which is of the same growth type can be used as normalization function for U: the family of values u_t / h″(t) converges to some non-zero vector in R^n, and the latter must be a positive scalar multiple of the above limit vector v_U.
The following is a direct consequence of the definitions: Lemma 4.2. Let U = (u_t)_{t∈N} and U′ = (u′_t)_{t∈N} be two infinite families of vectors in R^n, and define U + U′ = (u_t + u′_t)_{t∈N}. Let h : N → R and h′ : N → R be normalization functions for U and U′ respectively.
(1) If the growth type of h is strictly bigger than that of h′, then h is also a normalization function for U + U′. Similarly, if the growth type of h is strictly smaller than that of h′, then h′ is a normalization function for U + U′.
(2) If h and h′ have the same growth type, then both h and h′ are normalization functions for U + U′.
Lower triangular block matrices.
Let M be a non-negative integer square matrix. Assume that the rows (and correspondingly the columns) of M are partitioned into blocks B_i so that M is a lower triangular block matrix with square diagonal matrix blocks. We now define a relation on the set of blocks as follows: We write B_i ≻ B_j if and only if B_i ≠ B_j and if there exists a non-negative vector v which has non-zero coefficients only in the block B_i, such that for some t ≥ 1 the vector M^t v has a non-zero coefficient in the block B_j. This is equivalent to stating that for some t ≥ 1, in the matrix M^t the off-diagonal matrix block in the i-th block column and the j-th block row has at least one positive entry.
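Since only the positivity pattern of M matters, the relation ≻ is computable from reachability in the support of M. A sketch with a hypothetical example matrix (function and example are ours):

```python
import numpy as np

def block_relation(M, blocks):
    """blocks: partition of {0,...,n-1} into index lists B_0, B_1, ...
    Returns all pairs (i, j), i != j, with B_i > B_j, i.e. some power M^t
    has a positive entry in block row j, block column i."""
    n = M.shape[0]
    A = (M > 0).astype(int) + np.eye(n, dtype=int)
    reach = np.linalg.matrix_power(A, n) > 0     # reach[q, p]: p reaches q
    return {(i, j)
            for i, Bi in enumerate(blocks) for j, Bj in enumerate(blocks)
            if i != j and any(reach[q, p] for p in Bi for q in Bj)}

M = np.array([[2, 0, 0],        # hypothetical lower triangular example
              [1, 3, 0],
              [0, 1, 2]])
print(block_relation(M, [[0], [1], [2]]))    # contains (0,1), (1,2), (0,2)
```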
For any block B_i we define the dependency block union C(B_i) to be the union of all blocks B_j with B_i ≻ B_j.
Observe that, if every diagonal block of M is either irreducible or a (1 × 1)-matrix, this relation defines a partial order on the blocks, denoted by ≻. Let us denote by C_n the non-negative cone in R^n with respect to the fixed "standard basis" e_1, …, e_n. For any block B_i we define the associated cone B_i as the set of all non-negative column vectors in C_n that have non-zero entries only in the block B_i, i.e. all non-negative linear combinations of those e_i that "belong" to B_i.
A block cone C is a subcone of C n which has the property that each cone B i is either "contained or disjoint", i.e. one has either B i ⊂ C or B i ∩ C = { 0}. Unless otherwise stated, we are only interested in block cones C that are invariant under the action of M , i.e. M v ∈ C for any v ∈ C. This is equivalent to stating that for any block B with B ⊂ C the block cone C(B) (called the dependency block cone) associated to the dependency block union C(B) is contained in C.
Primitive Frobenius Form.
Let M be a non-negative integer square matrix as considered above, and assume that M is partitioned into matrix blocks so that M is a lower triangular block matrix, and along the diagonal all matrix blocks are squares.

Definition 4.3. (1) The matrix M is in primitive Frobenius form if every diagonal block is either a primitive matrix or a (1 × 1)-matrix with entry 0. (2) For every block B_i we refer to the Perron-Frobenius eigenvalue λ_i of the corresponding diagonal block of M as the PF-eigenvalue of the block B_i. This includes (for the special case of a (1 × 1)-zero block B_i) the possible value λ_i = 0. (3) For any block B_i with primitive diagonal block, the extended PF-eigenvector v_i^{PF} ∈ B_i is the vector obtained from the PF-eigenvector of that diagonal block by extending it with zero coefficients in all other coordinates.

For any matrix M in primitive Frobenius form we define the growth type associated to any of its blocks B_i as follows: Among the blocks B_j with B_j = B_i or B_i ≻ B_j, we consider the maximal PF-eigenvalue λ_max(B_i) := max{λ_j | B_j = B_i or B_i ≻ B_j}, and a longest chain of blocks B_{j_1} ≻ B_{j_2} ≻ … ≻ B_{j_{d+1}} among these whose PF-eigenvalues all equal λ_max(B_i); we then take h_i : t ↦ λ_max(B_i)^t t^d as the growth type function of the block B_i. Similarly, we define the growth type function h_C : N → R of any union of blocks C (or of the associated block cone C) as the maximal growth type function h_j of any B_j which belongs to C.
Definition 4.5 (Dominant interior). Let C be the block cone associated to any union C of blocks. Define the dominant interior of C as follows: Pick some longest chain of blocks B_{i_k} ≻ B_{i_{k−1}} ≻ … ≻ B_{i_1} as above, i.e. all B_{i_j} have PF-eigenvalue λ_{i_k} = λ_{i_{k−1}} = … = λ_{i_1} = λ_max(C) (in other words: the blocks B_{i_j} are part of a "realization" of the growth type function h_C).
Let v ∈ C be a vector whose coordinates, for all vectors e_i of the standard basis that belong to one of the blocks B_{i_j}, are non-zero. The dominant interior of C consists of all such vectors v, for any longest chain of blocks as above, which may of course vary with the choice of v.

Lemma 4.6. There is an integer t_0 ≥ 1 such that for all t ≥ t_0 and any two blocks with B_i ≻ B_j the following holds: if at least one of the diagonal blocks M_{i,i}, M_{j,j} is primitive non-zero, then the off-diagonal block M^t_{j,i} is positive; otherwise M^t_{j,i} is the zero matrix.

Proof. If B_i ≻ B_j, then by definition of ≻, for some integer k = k(i, j) the power M^k has in its off-diagonal block M^k_{j,i} some positive coefficient a_{p,q}. If both M_{i,i} and M_{j,j} are primitive non-zero, there is an exponent s such that M^s_{i,i} and M^s_{j,j} are positive, and it follows that for M^{k+2s} the corresponding off-diagonal block is positive; this is also true for any exponent t ≥ k + 2s.
If B_i ≻ B_j and M_{i,i} is primitive non-zero but M_{j,j} is zero, we deduce from the above positive coefficient a_{p,q} of M^k_{j,i} that for M^{k+s} all coefficients in the p-th row of the block M^{k+s}_{j,i} must be positive. We now use the fact that the diagonal zero matrix M_{j,j} must be a (1 × 1)-matrix, so that M^{k+s}_{j,i} consists of a single row, which is thus positive throughout. The same argument holds for any t = k + s′ with s′ ≥ s.
If B_i ≻ B_j and M_{j,j} is primitive non-zero but M_{i,i} is zero, we deduce from the above positive coefficient a_{p,q} of M^k_{j,i} that for M^{k+s} all coefficients in the q-th column of the block M^{k+s}_{j,i} must be positive. We now use the fact that the diagonal zero matrix M_{i,i} must be a (1 × 1)-matrix, so that M^{k+s}_{j,i} consists of a single column, which is thus positive throughout. The same argument holds for any t = k + s′ with s′ ≥ s.
If B_i ≻ B_j and both M_{i,i} and M_{j,j} are zero matrices, then M² has the zero matrix as its (j, i)-th block, and the same is true for all powers M^t with t ≥ 2.
Finally, if B_i ≻ B_j does not hold, then by definition of ≻ the (j, i)-th block of any positive power of M is the zero matrix.
Lemma 4.7. Assume in addition that M has no zero columns. Let B_i be any block with dependency block union C_i := C(B_i), let v ∈ B_i be a non-zero vector, and write
$$M^t v = v^*_t + u^*_t$$
with v*_t ∈ B_i and u*_t ∈ C_i. Then there is a bound t_0 ∈ N depending only on M such that for every t ≥ t_0 the vector u*_t is contained in the dominant interior of C_i.
Proof. Let t_0 be as in Lemma 4.6. Then v*_t + u*_t = M^t v has positive coordinates in all blocks B_j of C_i for which M has a primitive non-zero diagonal block M_{j,j}. Since M has no zero columns, the maximal eigenvalue for the blocks in C_i must be strictly bigger than 0. Thus the dominant interior of C_i is defined through chains of blocks which are primitive non-zero. Hence u*_t is contained in the dominant interior.

4.4. An example. Before proceeding with the proof of the main theorem, we discuss an example explaining the above concepts: Let M be the following matrix: We have the following relations: Hence, with the above definitions, B_1 has growth type t(2 + √2)^t, B_2 has t((√5+3)/2)^t, B_3 has (2 + √2)^t, and B_4 has ((√5+3)/2)^t. The dependency block unions are given by C(B_1) = B_2 ∪ B_3 ∪ B_4, C(B_2) = C(B_3) = B_4, and C(B_4) = ∅.
The dominant interiors are computed accordingly (where X̊ denotes the interior of a space X). There is one more M-invariant block cone, given by C = B_2 + B_3 + B_4; its dominant interior is determined in the same way.
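With a hypothetical matrix of the same flavor, the growth types can be illustrated numerically: the sketch below chains two primitive blocks, each a copy of [[2, 2], [1, 2]] with PF-eigenvalue 2 + √2, and confirms that vectors supported in the top block grow with type t(2 + √2)^t.

```python
import numpy as np

A = np.array([[2., 2.], [1., 2.]])           # PF-eigenvalue 2 + sqrt(2)
M = np.block([[A, np.zeros((2, 2))],         # two chained diagonal blocks
              [np.ones((2, 2)), A]])         # with equal PF-eigenvalue
lam = 2 + np.sqrt(2)
v = np.array([1., 0., 0., 0.])               # supported in the top block B_1
for t in [10, 20, 40, 80]:
    w = np.linalg.matrix_power(M, t) @ v
    print(t, w.sum() / (t * lam ** t))       # ratio stabilizes: type t*lam^t
```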
5. Convergence for primitive Frobenius matrices
The goal of this and the following section is to give a complete proof of the following result. For related statements the reader is directed to the work of H. Schneider [Sch86] and the references given there.
Theorem 5.1. Let M be a non-negative integer square matrix which is in primitive Frobenius form as given in Definition 4.3. Assume that M has no zero columns. Then for any non-negative vector v ≠ 0 there exists a normalization function h_v such that the limit
$$v_\infty := \lim_{t\to\infty} \frac{1}{h_v(t)} M^t v$$
exists and is a non-zero eigenvector of M.

This result is proved by induction, and the induction step has some interesting features in itself, so we state it here as an independent statement. But first we state a property which will be used repeatedly below:

Definition 5.2. Let M be as in Theorem 5.1, and let C be a union of matrix blocks such that the associated block cone C ⊂ C_n is M-invariant, with growth type function h_C(t) = λ_*^t t^{d_*} for some value λ_* ≥ 1. We say that C satisfies the convergence condition CC(C) if for every vector u ∈ C the sequence (1/h_C(t)) M^t u converges to a vector u_∞ which is either an eigenvector u_∞ ∈ C of M, or else one has u_∞ = 0. We require furthermore that u_∞ ≠ 0 if u is contained in the dominant interior of C (as defined above in Definition 4.5).
Remark 5.3. (a) For u_∞ as in Definition 5.2 the condition u_∞ ≠ 0 implies directly that u_∞ is an eigenvector of M. (b) Its eigenvalue is always equal to λ_*, as follows directly from comparing M u_∞ = lim_{t→∞} (1/h_C(t)) M^{t+1} u with lim_{t→∞} h_C(t+1)/h_C(t) = λ_*.

Proposition 5.4. Let M be a non-negative integer square matrix which is in primitive Frobenius form, with no zero columns. Let B be any block of the associated block decomposition, and let C := C(B) be the corresponding dependency block union (see §4.1). Let B and C be the block cones associated to B and C respectively. Let λ ≥ 0 and λ_u ≥ 1 be the maximal PF-eigenvalues of B and C respectively, and let h : t ↦ λ_*^t t^d (for λ_* = max{λ, λ_u}) and h_u : t ↦ λ_u^t t^{d_u} be the growth type functions for B and C respectively (see §4.3).
Assume that C satisfies the above convergence condition CC(C). Then for every vector v_0 ≠ 0 in B the sequence (1/h(t)) M^t v_0 converges to an eigenvector w_∞ of M, which satisfies: (1) If λ > λ_u then w_∞ = λ(v_0)(v_∞ + w_0), where v_∞ is the extended PF-eigenvector (see Definition 4.3 (3)) of the primitive diagonal block of M corresponding to B, the vector w_0 ∈ C is entirely determined by v_∞, and λ(v_0) ∈ R_{>0} depends on v_0. (2) If λ = λ_u then w_∞ = λ(v_0) u_∞, where u_∞ ≠ 0 is an eigenvector in C that depends only on the above extended PF-eigenvector v_∞, and λ(v_0) ∈ R_{>0} depends on v_0. (3) If λ < λ_u then w_∞ ≠ 0 is an eigenvector in C that may well depend on the choice of v_0.
Before proving Proposition 5.4 in section 6, we first show how to derive Theorem 5.1 from Proposition 5.4. We first show that Proposition 5.4 also implies the following: Lemma 5.5. Let B and C, as well as the associated block cones B and C, be as in Proposition 5.4. Then we have: (1) The cone B + C associated to the block union B ∪ C satisfies the convergence condition CC(B + C).
(2) Assume that C is contained in a larger block cone C′ with growth type function h′, and assume that C′ satisfies the convergence condition CC(C′). Then the cone B + C′ also satisfies the convergence condition CC(B + C′).
Proof.
(1) If B belongs to the blocks of B ∪ C that determine the dominant interior of B + C, then the eigenvalue of the PF-eigenvector of B satisfies λ ≥ λ_u ≥ 1 and is maximal among all PF-eigenvalues for blocks in B ∪ C. In this situation the growth type function for B ∪ C coincides with the growth type function h of B. We note that case (3) of Proposition 5.4 is excluded by the inequalities λ ≥ λ_u, and that in cases (1) and (2) of Proposition 5.4 our claim, namely that lim_{t→∞} (1/h(t)) M^t v ≠ 0 for every v ≠ 0 in the dominant interior, follows directly.

If B does not belong to the blocks of B ∪ C that determine the dominant interior, then we have λ_u > λ, so that we are in case (3) of Proposition 5.4. In this case, however, any vector in the dominant interior of B + C must also belong to the dominant interior of C. The growth type function for B ∪ C is given by h = h_u, and hence the claim follows from our assumption CC(C).
(2) Similarly to the situation considered above in the proof of (1), if B does not belong to the blocks that determine the dominant interior of B + C′, then any vector in the dominant interior of B + C′ must also belong to the dominant interior of C′, and the growth type function for B + C′ is equal to that of C′, so that the claim follows from the assumption CC(C′).
If on the other hand B belongs to the blocks that determine the dominant interior of B + C′, then the growth type function for B + C′ is equal to that of B, so that part (1) shows that the limit vector is non-zero for any v ≠ 0 in the dominant interior of B + C. Any vector w in the dominant interior of B + C′ can be written as a sum w = v + u + w_0, where w_0 belongs to B + C′ but not to its dominant interior, while v lies in the dominant interior of B + C and u in the dominant interior of C′, and at least one of them is non-zero. Thus the claim follows directly from Lemma 4.2, applied to v and u.
We will now prove Theorem 5.1, assuming the results of Proposition 5.4. The proof of Proposition 5.4 is deferred to section 6.
Proof of Theorem 5.1. Consider the block decomposition of M according to its primitive Frobenius form, and denote by B the top matrix block. Let C = C(B) be the corresponding dependency block union.
If C is empty, then B is minimal with respect to the partial order on blocks (as defined in subsection 4.2). In this case, from the assumption that M has no zero columns, it follows that B is not a zero matrix. Hence the claim of Theorem 5.1 for any vector v ∈ B follows directly from the classical Perron-Frobenius theory.
If C is non-empty, it follows from the previously considered case that the maximal eigenvalue for C satisfies λ_u ≥ 1. Thus via induction over the number of blocks contained in C we can invoke Lemma 5.5 (2) to obtain that the convergence condition CC(C) holds.
We can hence apply Proposition 5.4 to directly obtain the claim of Theorem 5.1 for any non-negative vector v ∈ B.
We can then assume by induction that the claim of Theorem 5.1 is true for any vector u ≠ 0 that has zero coefficients in the B-coordinates. Now, an arbitrary vector w ≠ 0 in the non-negative cone C_n can be written as a sum w = v + u, with v and u as before, and at least one of them different from 0. Hence the claim of Theorem 5.1 follows from Lemma 4.2.
Remark 5.6. The last proof also shows the following slight improvement of Theorem 5.1: For every primitive block B_i of the Frobenius form of M, and for any vector v ≠ 0 in the associated non-negative cone B_i, the normalization function h_v from Theorem 5.1 for the family (M^t v)_{t∈N} is of the same growth type as the function h_i defined in section 4.3.
Recall from section 2 that ‖Σ a_i e_i‖ = Σ |a_i|.
The following elementary observation is repeatedly used in the next section.
Lemma 5.7. Let M be a non-negative integer (n × n)-matrix. Assume that there exists a function h : N → R_{>0} such that for any vector u in the non-negative cone C_n = (R_{≥0})^n the sequence (1/h(t)) M^t u converges to a limit vector u_∞ ∈ C_n which is either equal to 0 or else an eigenvector of M. Then there is a "universal constant" K = K(C) > 0 which satisfies
$$\Big\|\frac{1}{h(t)} M^t v\Big\| \leq K \|v\|$$
for any t ∈ N and for any (not necessarily non-negative) v ∈ R^n.
Proof. We first consider the finitely many coordinate vectors e_i from the canonical basis of R^n and observe that the hypothesis gives a constant K_0 > 0 with ‖(1/h(t)) M^t e_i‖ ≤ K_0 for any t ∈ N and any i = 1, …, n, since convergent sequences are bounded.
An arbitrary vector v = Σ a_i e_i ∈ R^n satisfies ‖v‖ = Σ |a_i| ≥ max_i |a_i| · ‖e_i‖, which gives
$$\Big\|\frac{1}{h(t)} M^t v\Big\| \leq \sum_i |a_i| \cdot \Big\|\frac{1}{h(t)} M^t e_i\Big\| \leq n \max_i |a_i| \cdot K_0 \leq n K_0 \|v\|,$$
thus proving the claim for K(C) := nK_0.
6. Proof of Proposition 5.4

Let us consider an arbitrary vector v_0 ≠ 0 in B, and define iteratively, for any integer t ≥ 1, vectors v_t ∈ B and u_t ∈ C through
$$M v_{t-1} = \lambda v_t + u_t.$$
Therefore, for any t ≥ 1, we compute
$$M^t v_0 = \lambda^t v_t + \sum_{m=0}^{t-1} \lambda^{t-1-m} M^m u_{t-m}.$$
Case 1: Assume that λ_u < λ.
In this case the diagonal block M_{i,i} of M corresponding to B is primitive. Let v ∈ B be the extended PF-eigenvector of M as given in section 4.3.
Let u ∈ C be the non-negative vector determined by the equation
$$M v = \lambda v + u.$$
Then we compute:
$$M^t v = \lambda^t v + \sum_{m=0}^{t-1} \lambda^{t-1-m} M^m u.$$
Recall that, since u ∈ C, by assumption there is a vector u_∞ ∈ C with lim_{t→∞} (1/(λ_u^t t^{d_u})) M^t u = u_∞. Hence we deduce that for some constant K ≥ 0 one has ‖M^m u‖ ≤ K λ_u^m m^{d_u} for all m ≥ 1. We now observe: since λ_u < λ, the sum w := Σ_{m≥0} (1/λ^m) M^m u converges, and
$$M\Big(v + \frac{1}{\lambda} w\Big) = \lambda v + u + \frac{1}{\lambda} M w = \lambda\Big(v + \frac{1}{\lambda} w\Big).$$
In other words, v + (1/λ)w is an eigenvector of M with eigenvalue λ which is contained in the non-negative cone B + C spanned by B and C.
We now consider an arbitrary vector v_0 ∈ B, as well as the vectors v_t ∈ B and u_t ∈ C as defined iteratively at the beginning of this section. For any integer s with 1 ≤ s ≤ t − 1, we have
$$\frac{1}{\lambda^t} M^t v_0 \;=\; v_t \;+\; \frac{1}{\lambda}\sum_{m=0}^{s} \frac{1}{\lambda^m} M^m u_{t-m} \;+\; \frac{1}{\lambda}\sum_{m=s+1}^{t-1} \frac{1}{\lambda^m} M^m u_{t-m}. \qquad (6)$$
We now consider the limit of this sum for t → ∞: By the classical Perron-Frobenius theorem for primitive non-negative matrices we have lim_{t→∞} v_t = λ′v for some λ′ > 0. From our definition of the v_t and u_t it follows that their lengths ‖v_t‖ and ‖u_t‖ are uniformly bounded. We can hence apply Lemma 5.7 to the subspace R^m ⊂ R^n generated by C in order to deduce that there is a uniform bound on the length of any of the vectors (1/(λ_u^m m^{d_u})) M^m u_{t−m}. Hence for any s ≥ 0 the third term of (6) is bounded in norm by a constant multiple of Σ_{m≥s+1} (λ_u/λ)^m m^{d_u}, which tends to 0 as s → ∞, since λ_u < λ. As a consequence, for any ε > 0 there is a value s = s(ε) ≥ 0 such that for any t ≥ s + 2 the third term of the above sum (6) has norm at most ε.

On the other hand, for large values of t the vectors v_{t−m−1} will be close to λ′v, and hence u_{t−m} will be close to λ′u, for u as defined above by means of the eigenvector v. That is, for any ε > 0 there is a bound t_0 = t_0(ε) ≥ 0 such that for any t ≥ t_0 there is a (not necessarily non-negative!) vector w_t of length ‖w_t‖ ≤ ε with u_t = λ′u + w_t. This gives, for any s ≤ t − t_0:
$$\frac{1}{\lambda}\sum_{m=0}^{s} \frac{1}{\lambda^m} M^m u_{t-m} \;=\; \frac{\lambda'}{\lambda}\sum_{m=0}^{s} \frac{1}{\lambda^m} M^m u \;+\; \frac{1}{\lambda}\sum_{m=0}^{s} \frac{1}{\lambda^m} M^m w_{t-m},$$
where the norm of the last sum is at most εK by Lemma 5.7 (again applied to the subspace generated by C), for some constant K which only depends on C. As a consequence, for any t ≥ s + t_0(ε) the second term in the above sum (6) will be εK-close to (λ′/λ) Σ_{m=0}^{s} (1/λ^m) M^m u, which converges (according to the above definition of w) to (λ′/λ)w as s tends to infinity.
Given ε > 0, use the first part of our considerations to find s = s(ε) which ensures that the third term in the above sum (6) is smaller than ε. We then find t_0 = t_0(ε/K), and consider any value t ≥ t_0 + s. The above derived estimates give
$$\frac{1}{\lambda^t} M^t v_0 \;=\; \lambda'\Big(v + \frac{1}{\lambda} w\Big) + w^*_t,$$
where w*_t is a (not necessarily non-negative) error term that satisfies ‖w*_t‖ ≤ Cε for a constant C independent of ε and t. Therefore we obtain
$$\lim_{t\to\infty} \frac{1}{\lambda^t} M^t v_0 \;=\; \lambda'\Big(v + \frac{1}{\lambda} w\Big),$$
which proves the claim for w_0 = (1/λ)w.
Case 2: Assume that λ_u = λ. Similarly to the previous case we first consider the extended PF-eigenvector v ∈ B corresponding to the block B. Recall that u ∈ C is the vector given by the equation Mv = λv + u. We compute:
$$\frac{1}{\lambda^t t^{d_u+1}} M^t v \;=\; \frac{1}{t^{d_u+1}}\, v \;+\; \frac{1}{\lambda\, t^{d_u+1}} \sum_{m=0}^{t-1} \frac{1}{\lambda^m} M^m u. \qquad (\dagger\dagger)$$
The first term in this sum tends to 0 when t goes to infinity. In order to understand the limit of the second term in the above sum (††) we recall from the inductive hypothesis in Proposition 5.4 that the vectors (1/(λ^s s^{d_u})) M^s u converge for s → ∞ to some vector u_∞ in C.
Since we need it later, we observe here that it follows from Lemma 4.7 that some iterate M^t u belongs to the dominant interior of C. Thus the inductive hypothesis in Proposition 5.4 states that u_∞ ≠ 0 is an eigenvector of M.
In both cases, we derive that for any ε > 0 there exists a bound s(ε) ≥ 0 such that for all s ≥ s(ε) we have ‖(1/(λ^s s^{d_u})) M^s u − u_∞‖ ≤ ε, from which we deduce that (1/λ^s) M^s u = s^{d_u}(u_∞ + w_s) with ‖w_s‖ ≤ ε. Thus we can split the second term in the above sum (††) as follows:
$$\frac{1}{\lambda\, t^{d_u+1}} \sum_{m=0}^{t-1} \frac{1}{\lambda^m} M^m u \;=\; \frac{1}{\lambda\, t^{d_u+1}} \sum_{m=0}^{s(\varepsilon)} \frac{1}{\lambda^m} M^m u \;+\; \frac{1}{\lambda\, t^{d_u+1}} \sum_{m=s(\varepsilon)+1}^{t-1} m^{d_u}(u_\infty + w_m).$$
For fixed ε > 0 and hence fixed s(ε) the first of the two sums on the right hand side converges to 0 as t tends to ∞, since it consists of boundedly many summands of bounded norm. In order to compute the remaining term we observe that the contribution of the error vectors w_m has norm at most (ε/(λ t^{d_u+1})) Σ_{m=0}^{t−1} m^{d_u}. This shows that the second term of (††) converges, up to an error of norm at most ε/λ, to (lim_{t→∞} (1/(λ t^{d_u+1})) Σ_{m=0}^{t−1} m^{d_u}) u_∞. We note here that (1/t^{d_u+1}) Σ_{k=0}^{t−1} k^{d_u} ≤ 1 for all t ≥ 1. On the other hand, this average is bounded below by a positive constant for sufficiently large t, so that, using the above observation that u_∞ ≠ 0, we conclude that the limit vector λ_0 u_∞, with λ_0 = lim_{t→∞} (1/(λ t^{d_u+1})) Σ_{k=0}^{t−1} k^{d_u} = 1/(λ(d_u+1)) > 0, is an eigenvector of M in C. This proves the claim for the extended PF-eigenvector v.
We now consider an arbitrary vector v_0 ∈ B, as well as the vectors v_t ∈ B and u_t ∈ C defined iteratively as before. We obtain:
$$\frac{1}{\lambda^t t^{d_u+1}} M^t v_0 \;=\; \frac{1}{t^{d_u+1}}\, v_t \;+\; \frac{1}{\lambda\, t^{d_u+1}} \sum_{m=0}^{t-1} \frac{1}{\lambda^m} M^m u_{t-m}.$$
The first term in this sum tends to 0 when t goes to infinity. In order to understand the limit of the second term we observe that the primitivity of the diagonal matrix block of M corresponding to B implies that the v_t converge to λ′v for some scalar λ′ > 0. We write (as in Case 1) u_{t+1} = λ′u + w_{t+1} and note that for any ε > 0 there exists an integer t_0 = t_0(ε) such that ‖w_{t+1}‖ ≤ ε for any t ≥ t_0. As in Case 1 we have ‖(1/(λ_u^t t^{d_u})) M^t w_t‖ ≤ Kε for all t ≥ t_0, where K is the constant given by Lemma 5.7.
As before, let s(ε) be an integer which ensures for all s ≥ s(ε) that ‖(1/(λ^s s^{d_u})) M^s u − u_∞‖ ≤ ε; the limit of (1/(λ^t t^{d_u+1})) M^t v_0 is then computed exactly as for the extended PF-eigenvector above, and equals a positive multiple of u_∞.

7.1. Eigenvectors.

Lemma 7.1. Let B_i be a principal block of M, i.e. a block whose growth type function is h_i(t) = λ_i^t, where λ_i is the PF-eigenvalue of B_i. Then: (1) There is an eigenvector of M of the form v(B_i) = v_i^{PF} + w_i, where v_i^{PF} is the extended PF-eigenvector (see Definition 4.3 (3)) of the primitive diagonal block of M corresponding to B_i, and w_i ∈ C(B_i).
(2) The vector v(B_i) is the only eigenvector in B_i + C(B_i) which admits such a decomposition: Any other eigenvector in B_i + C(B_i) is either contained in C(B_i), or else it is a scalar multiple of v(B_i). Hence v(B_i) will be called the "principal eigenvector" of B_i (or of B_i + C(B_i)).
Proof. Any non-zero vector v ∈ B_i + C(B_i) can be written as v = v_0 + u, with v_0 ∈ B_i and u ∈ C(B_i). From the hypothesis that B_i is principal it follows that the growth type of C(B_i), and thus that of u, is strictly smaller than that of B_i, which is given by the function h(t) = λ_i^t. Case (1) of Proposition 5.4 thus shows that, if v_0 ≠ 0, then (1/h(t)) M^t v_0 converges to a scalar multiple of the eigenvector v_i^{PF} + w_i, where w_i ∈ C(B_i) is uniquely determined by the extended eigenvector v_i^{PF}. It follows directly that either v_0 = 0 and thus v ∈ C(B_i), or else lim_{t→∞} (1/h(t)) M^t v = λ′(v_i^{PF} + w_i) for some λ′ > 0. In particular, we observe that any eigenvector in B_i + C(B_i) which is not contained in C(B_i) must (up to rescaling) agree with v_i^{PF} + w_i. The latter is indeed an eigenvector with eigenvalue λ_i, by Remark 5.3 and Lemma 5.5 (1).
We will denote by C(λ) ⊂ C_n the non-negative cone spanned by all principal eigenvectors of M with eigenvalue λ. As before, we write here C_n to denote the standard non-negative cone in R^n. We also recall that for matrices in primitive Frobenius form there is a natural partial order on the blocks (see subsection 4.2), to which we refer below when a block is called "minimal" or "maximal".

Proposition 7.2. A non-negative vector v ≠ 0 is an eigenvector of M with eigenvalue λ if and only if v ∈ C(λ).

Proof. Clearly any v ∈ C(λ) ∖ {0} is an eigenvector with eigenvalue λ. For the converse implication we consider a maximal block B of M, and assume by induction over the number of blocks in M that the claim is true for the restriction of M to the invariant block cone C spanned by all coordinate vectors not contained in B. If B is not principal, it follows directly from cases (2) and (3) of Proposition 5.4 that any eigenvector of M must have zero entries in the coordinates that belong to B, so that the claim follows from the induction hypothesis.
Similarly, if B is principal but the eigenvalue λ of v is different from the PF-eigenvalue λ 0 of B, it follows from case (1) of Proposition 5.4 that v belongs to C, so that the claim follows again from the induction hypothesis.
Finally, if B is principal with PF-eigenvalue equal to λ, and with principal eigenvector v^{PF} + w, then by the M-invariance of C we can apply Lemma 7.1 to obtain a decomposition v = λ′(v^{PF} + w) + u for some vector u ∈ C and some scalar λ′ ≥ 0. Since both v and v^{PF} + w are eigenvectors with eigenvalue λ, the same is true for u. Hence the claim follows again from our induction hypothesis.
Remark 7.3.
(1) Eigenvectors of non-negative matrices have been investigated previously by several authors, see for instance [ESS14] and [Rot75] and the references given there. Indeed, the statements of Lemma 7.1 and Proposition 7.2 are very close to results obtained there.
(2) Theorem 5.1 gives lim_{t→∞} (1/h_v(t)) M^t v = v_∞ for some normalization function h_v for the vector v. The same statement (up to replacing v_∞ by a scalar multiple) stays valid if we replace h_v by any other normalization function for v. Thus in particular for the normalization function (see Remark 4.1) h_v(t) = ‖M^t v‖ we want to consider the accumulation points of the values M^t v / ‖M^t v‖.
As is true for all sequences of type f^n(x) for which, for some fixed k, the subsequence f^{kn}(x) converges, the sequence of vectors M^t v / h_v(t) must accumulate (up to rescaling) onto the finite M-orbit of the limit of this convergent subsequence.
Tangles, generalized Reidemeister moves, and three-dimensional mirror symmetry
Three-dimensional N = 2 superconformal field theories are constructed by compactifying M5-branes on three-manifolds. In the infrared the branes recombine, and the physics is captured by a single M5-brane on a branched cover of the original ultraviolet geometry. The branch locus is a tangle, a one-dimensional knotted submanifold of the ultraviolet geometry. A choice of branch sheet for this cover yields a Lagrangian for the theory, and varying the branch sheet provides dual descriptions. Massless matter arises from vanishing size M2-branes and appears as singularities of the tangle where branch lines collide. Massive deformations of the field theory correspond to resolutions of singularities resulting in distinct smooth manifolds connected by geometric transitions. A generalization of Reidemeister moves for singular tangles captures mirror symmetries of the underlying theory, yielding a geometric framework where dualities are manifest.
The (2, 0) superconformal field theories in six dimensions, in particular the theory of N parallel M5-branes, are among the most important quantum systems, and yet they remain poorly understood. Their importance stems not only from the fact that they represent the highest possible dimension in which superconformal field theories can exist [1], but also from the observation that their compactifications to lower dimensions yield a rich class of quantum field theories whose dynamics are encoded by geometry. For example, four-dimensional N = 2 theories arise upon compactification on a Riemann surface [2][3][4][5], and provide a geometric explanation for Seiberg-Witten theory [6,7]. It is natural to expect that more general compactifications will provide more information about these mysterious six-dimensional theories. One way to do this is to increase the dimension of the compactification geometry. Thus, the next cases of interest would be compactifications with dimensions d ≥ 3 resulting at low-energies in effective quantum field theories in dimensions 6 − d. The aim of this paper is to focus on the situation where d = 3 with N = 2 supersymmetry. Examples of this type have been recently considered in [8][9][10] for the situation where 2 M5-branes wrap some ultraviolet geometry. In such constructions, as advocated in [10], the infrared dynamics of the system is described by a single recombined brane, similar to the situation studied in [11], that can be viewed as a double cover of the original compactification manifold. This infrared geometry is captured by describing the branching strands for the cover, which in general are knotted. When the branching strands collide the cover becomes singular, and on that locus an M2-brane of vanishing size can end on the M5-branes, leading to massless charged matter fields. The goal of this paper is to clarify and extend the rules discussed in [10] and find the correspondence between the knotted branch locus encoding the geometry of the double cover, and the underlying N = 2 quantum field theory.
With this background we can phrase more precisely what we wish to do: we would like to uncover the relationship between three-dimensional N = 2 supersymmetric conformal field theories and a class of mathematical objects called singular tangles. In words, a tangle is a generalization of a knot to allow for open ends, and a singular tangle is the situation where the pieces of string are permitted to merge and lose their individual identity. Examples are illustrated in figure 1.
The three-manifolds M where the infrared M5-brane resides are defined as double covers of R³ branched along a singular tangle. The reduction of the theory of a single M5-brane along M will result in the three-dimensional quantum field theories under investigation. The simplest class of examples are associated to non-singular tangles. In this situation M is a smooth manifold and a single M5-brane on M constructs a free Abelian N = 2 Chern-Simons theory in the macroscopic dimensions. Light matter, appearing in chiral multiplets in three dimensions, arises in the theory from M2-brane discs which end along M. When such matter becomes massless, the associated cycle shrinks and M develops a singularity. The collapsing of this cycle can be described by the geometry of a singular tangle. A conceptual slogan for the program described above is that we are investigating a three-dimensional analog of Seiberg-Witten theory. In the ultraviolet, one may envision an unknown non-Abelian three-dimensional field theory arising from the interacting theory of two M5-branes on R³ with suitable boundary conditions at infinity. Moving onto the moduli space of this theory is accomplished geometrically by allowing the pair of M5-branes to fuse together into a single three-manifold M. The long-distance Abelian physics can then be directly extracted from the geometry of M. The situation we have described should be compared with the case of four-dimensional N = 2 theories whose infrared moduli space physics can be extracted from a Seiberg-Witten curve. In that case, charged matter fields are described by BPS states and can be constructed in M-theory from M2-branes. The case of an interacting conformal field theory can arise when the M2-brane particles become massless and the Seiberg-Witten curve develops a singularity, directly analogous to the three-dimensional setup outlined above.
An important feature of the constructions carried out in this paper, familiar from many constructions of field theories by branes, is that non-trivial quantum properties of field theories are mapped to simpler geometric properties of the compactification manifold. In the case of N = 2 Abelian Chern-Simons matter theories the quantum features which are apparent in geometry are the following.
• Sp(2F, Z) Theory Multiplets: the set of three dimensional theories with N = 2 supersymmetry and U(1) F flavor symmetry is naturally acted on by the group Sp(2F, Z) [12,13]. This group does not act by dualities. It provides us with a simple procedure for building complicated theories out of simpler ones by a sequence of shifts in Chern-Simons levels and gauging operations.
• Anomalies: in three dimensions, charged chiral multiplets have non-trivial parity anomalies. This means that upon integrating out a massive chiral field the effective Chern-Simons levels are shifted by half-integral amounts [14].
• Dualities: three dimensional N = 2 conformal field theories enjoy mirror symmetry dualities. Thus, distinct N = 2 Abelian Chern-Simons matter theories may flow in the infrared to the same conformal field theory. In the case of three-dimensional Abelian Chern-Simons matter theories there are essentially three building block mirror symmetries which we may compose to engineer more complicated dualities.
-Equivalences amongst pure CS theories. These theories are free and characterized by a matrix of integral levels K. It may happen that two distinct classical theories given by matrices K 1 and K 2 nevertheless give rise to equivalent correlation functions and hence are quantum mechanically equivalent.
-Gauged U(1) at level 1/2 with a charge one chiral multiplet is mirror to the theory of a free chiral multiplet [13].
-Super-QED with one flavor of electron is mirror to a theory of three chiral multiplets, no gauge symmetry, and a cubic superpotential [15,16].
One way non-trivial dualities appear stems from the fact that the M5-brane theory reduced on M does not have a preferred classical Lagrangian. To obtain a Lagrangian description of the dynamics requires additional choices. In our context such a choice is a Seifert surface, which is a Riemann surface with boundary the given tangle. For any given tangle there exist infinitely many distinct choices of Seifert surfaces each of which corresponds to a distinct equivalent Lagrangian description of the physics. This fact is closely analogous to the choice of triangulation appearing in the approach of [8] for studying the same theories, as well as the choice of pants decomposition required to provide a Lagrangian description of M5-branes on Riemann surfaces [5].
Throughout the paper, our discussion of duality will be guided by a particular invariant of the infrared conformal field theory, the squashed three-sphere partition function
$$Z_b(x_1, \cdots, x_F). \qquad (1.1)$$
This is a complex-valued function of a squashing parameter b (which we frequently suppress in notation) as well as F chemical potentials x_i. It is an invariant of a field theory with prescribed couplings to U(1)^F background flavor fields. This partition function gives us a strong test for two theories to be mirror, and as such it is useful to build techniques for computing Z into the formalism.
One method of explicit computation is provided by supersymmetric localization formulas. At the classical level, an Abelian Chern-Simons matter theory coupled to background flavor fields is determined by the following data:
Given such data, the three-sphere partition function for the infrared conformal field theory can be presented as a finite dimensional integral [17, 18]
$$Z(x_i) = \int d^G y \;\exp\Big(-\pi i\,(y\ x)\,K\begin{pmatrix} y \\ x\end{pmatrix}\Big)\,\prod_a E\big(q_a \cdot (y\ x)\big). \qquad (1.2)$$
In the above, E(x) denotes a certain transcendental function, the so-called non-compact quantum dilogarithm, which will be discussed in detail in section 3. The superpotential W enters the discussion only insofar as it restricts the flavor symmetries of the theory. The real integration variables y appearing in the formula can be interpreted as parameterizing fluctuations of the real scalars in the N = 2 vector multiplets. We will be interested in the computation of Z up to multiplication by an overall phase independent of all flavor variables. Physically this means in particular that throughout this work we will ignore all framing anomalies of Chern-Simons terms. We will see that the partition function in (1.2) can be usefully viewed as a wavefunction in a certain finite dimensional quantum mechanics and develop this interpretation throughout. This connection of three-dimensional partition functions to quantum mechanics has been previously studied in [8, 19-21].
One important test of the ideas that we develop can be found in their application to a class of three-manifolds M of the form Σ_t × R_t, where the Riemann surface Σ varies in complex structure along the line parameterized by t. These examples are closely connected to four-dimensional quantum field theories. At a fixed value of t, the situation is that of an M5-brane on Σ which can be interpreted as a Seiberg-Witten curve for a four-dimensional N = 2 field theory. As t varies this field theory moves in its parameter space and hence describes a kind of domain wall in four dimensions. When equipped with suitable boundary conditions, this geometry can engineer a three-dimensional N = 2 theory.
Moreover, in such a construction the physical significance of the finite dimensional quantum mechanics governing the partition function becomes more manifest. It is the quantum mechanics whose operator algebra coincides with the algebra of line defects of the parent four-dimensional theory [8,9,22].
In the context of such examples, one may utilize the machinery of BPS state counting to determine the resulting three-dimensional physics. When the variation of Σ takes a particularly natural form, known as R-flow, the spectrum of three-dimensional chiral multiplets is in one-to-one correspondence with the BPS states of the underlying four-dimensional model in a particular chamber. As the moduli of the four-dimensional theory are varied, one may cross walls of marginal stability and hence find distinct spectra of chiral multiplets in three dimensions. Remarkably, the resulting three-dimensional theories are mirror symmetric. In this way, the geometry provides a striking confluence between two fundamental quantum phenomena: wall crossing of BPS states, and mirror symmetry.
The organization of this paper is as follows. In section 2 we explain how free Abelian Chern-Simons theories arise from tangles, and how their partition functions are encoded in a simple quantum mechanical setup. In section 3 we show how the data of massless chiral fields is encoded in terms of singular tangles where branch loci collide. Each such singularity can be geometrically resolved in one of three ways, matching the expected deformations of the field theory. Upon fixing a Seifert surface, a surface with boundary on the tangle, we are able to extract a Lagrangian description of the theory associated to the singular tangle, including superpotential couplings. In section 4 we generalize to arbitrary singular tangles, and explore physical redundancy in the geometry. As a consequence of mirror symmetries, distinct singular tangles can give rise to the same superconformal theory. These equivalences of field theories can be described geometrically by introducing a set of generalized Reidemeister moves acting on singular tangles. On deforming away from the critical point by activating relevant deformations of the field theory, we find that the generalized Reidemeister moves resolve to the ordinary Reidemeister moves familiar from elementary knot theory. The appearance of Reidemeister moves clarifies the relationship between quantum dilogarithm functions and braids first observed by [23]. In section 5 we describe how three-dimensional mirror symmetries can be understood from the perspective of four-dimensional N = 2 parent theories via R-flow. Finally, in section 6 we describe three-dimensional U(1) SQED with arbitrary N_f.
Abelian Chern-Simons theory and tangles
In this section we explore the simplest class of examples: Abelian N = 2 Chern-Simons theories without matter fields. Such theories are free and topologically invariant. Thus, in particular, they are (rather trivial) conformal field theories. We find that such models are usefully constructed via reduction of the M5-brane on a non-singular manifold which is conveniently viewed as a double cover of R^3 branched over a tangle, and we describe the necessary geometric technology for elucidating their structure. In addition, we describe a finite-dimensional quantum mechanical framework for evaluating their partition functions. Throughout, we study theories with U(1)^F flavor symmetries and couple them to F non-dynamical vector multiplets. The set of such theories is acted upon by Sp(2F, Z), and we describe this action from various points of view.
Chern-Simons actions, Sp(2F, Z), and quantum mechanics
Consider a classical N = 2 Abelian Chern-Simons theory. Let G denote the number of U(1) gauge groups, and F the number of U(1) flavor groups.^2 The Lagrangian of the theory coupled to F background vector multiplets is specified by a (G + F) × (G + F) symmetric matrix of levels

K = ( k_G    k_M
      k_M^T  k_F ).   (2.1)

Here, k_G denotes the ordinary Chern-Simons levels of the U(1)^G gauge group, k_M indicates the G × F matrix of mixed gauge-flavor levels, and k_F the F × F matrix of flavor levels.
The action for the theory is

S = (1/4π) Σ_{α,β} K_{αβ} ∫ A^α ∧ dA^β + · · · ,   (2.2)

where the terms "· · ·" indicate the supersymmetrization of the Chern-Simons Lagrangian. K_{αβ} is integrally quantized with minimal unit one. The first G vector multiplets are dynamical variables in the path integral, while the last F are non-dynamical background fields.^3

It is worthwhile to note that one might naively think that the matrix K does not completely specify an N = 2 Chern-Simons theory. Indeed, since such theories are conformal they contain a distinguished flavor symmetry, U(1)_R, whose associated conserved current appears in the same supersymmetry multiplet as the energy-momentum tensor. One might therefore contemplate Chern-Simons couplings involving background U(1)_R gauge fields. However, such terms violate superconformal invariance [25]. Thus, as our interest here is in superconformal field theories, we are justified in ignoring these couplings.^4

Already in this simple context of Abelian Chern-Simons theory, we can see the action of Sp(2F, Z), specified as operations on the level matrix K defined in equation (2.1). For later convenience, it is useful to use a slightly unconventional form of the symplectic matrix J (2.3). In this basis, the integral symplectic group is conveniently generated by 2F generators σ_n with n = 1, 2, · · · , 2F, whose matrix elements are given in (2.4). To define an action of the symplectic group Sp(2F, Z) on this class of theories, it therefore suffices to specify the action of the generators σ_n. The action of the generators with odd labels, σ_{2n−1}, preserves the number of gauge groups and shifts the levels of the n-th background field (2.5). The action of the even generators, σ_{2n}, is more complicated and performs a change of basis in the flavor symmetries while at the same time increasing the number of gauge groups by one.

^3 The normalization of the Chern-Simons levels appearing in (2.2) indicates that these are spin Chern-Simons theories [24] whose definition depends on a choice of spin structure on spacetime. Since all the models we consider are supersymmetric and hence contain dynamical fermions, this is no restriction. ^4 We expand upon this point further in the following analysis.
Explicitly, σ_{2n} can be factored as σ_{2n} = g_n ∘ c_U, where c_U is a change of basis operation (2.6) whose F × F matrix U is given in (2.7), and the gauging operation g_n is given in (2.8). A straightforward calculation using Gaussian path integrals may be used to verify that these operations satisfy the defining relations of Sp(2F, Z). Notice that, while these relations are simple to prove, they nevertheless involve quantum field theory in an essential way. If w is any word in the generators σ_i which is equal to the identity element by a relation in the symplectic group, then the action of w on a given matrix of levels K produces a new matrix w(K) which in general is not equal, as a matrix, to K. Nevertheless, the path integrals performed with the matrices K and w(K) produce identical correlation functions. Thus, the relations in Sp(2F, Z) provide us with elementary, provable examples of duality in three-dimensional conformal field theory.
Let us now turn our attention to the partition function Z for this class of models. Since Abelian Chern-Simons theory is free, an application of the localization formula (1.2) reduces the computation to a simple Gaussian integral which is a function of an F-dimensional vector x of chemical potentials for the U(1)^F flavor symmetry

Z(x) = ∫ d^G y exp[−πi (y x) K (y x)].   (2.9)
The integral is trivially done to obtain^5

Z(x) ∝ det(k_G)^{−1/2} exp[−πi x τ x].   (2.10)

From the resulting formula we see that the partition function is labeled by two invariants

τ = k_F − k_M^T k_G^{−1} k_M   and   det(k_G).   (2.11)

The possibility that the matrix τ may have infinite entries is included to allow for non-invertible k_G. In that case, the associated vector in the kernel of k_G describes a massless U(1) vector multiplet, and the flavor variable coupling to this multiplet is interpreted as a Fayet-Iliopoulos parameter. At the origin of this flavor variable the vector multiplet in question has a non-compact cylindrical Coulomb branch. This flat direction is not lifted when computing the path integral on S^3 because the R-charge assignments do not induce conformal mass terms. This implies that the partition function Z has a divergence. Meanwhile, away from the origin the non-zero FI parameter breaks supersymmetry and Z vanishes. In total, then, the partition function is proportional to a delta function in the flavor variable, and the narrow-width limit of the Gaussian as entries of τ become infinite, with infinite coefficient det(k_G) → 0, should be interpreted as such a delta function.

The partition function formula (2.10) provides another context in which to illustrate the symplectic group Sp(2F, Z) acting on conformal field theories, in this case via its action on the invariants (2.11). A general symplectic matrix can be usefully written in terms of F × F blocks as

M = R ( A  B
        C  D ) R^{−1},

where R is a certain invertible matrix which transforms the standard symplectic form to our choice (2.3) and whose precise form is not important. Then, the action of symplectic transformations on τ is simply the standard action of the symplectic group on the Siegel half-space

τ → (Aτ + B)(Cτ + D)^{−1}.

Meanwhile, det(k_G) transforms as a modular form. Thus the symplectic action on field theories reduces, at the level of partition functions, to the more familiar symplectic action on Gaussian integrals.

Before moving on to additional methods for studying these theories, let us revisit the issue of Chern-Simons couplings involving a background U(1)_R gauge field. As remarked above, such couplings are forbidden by superconformal invariance. Nevertheless, to elucidate the physical content of Z(x), as well as the partition functions of interacting field theories appearing later in this paper, it is useful to examine exactly how such spurious terms would enter the result.
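As a cross-check of the invariants in (2.10)-(2.11) as reconstructed above, the following sketch (ours, not from the paper; the gauging step is the standard SL(2, Z) action on a U(1) flavor symmetry and may differ from the σ_n conventions by signs) computes τ numerically and confirms the two expected moves for a single flavor: a unit shift of the background level sends τ → τ + 1, while gauging the flavor and coupling a new background field with unit mixed level sends τ → −1/τ.

```python
import numpy as np

def tau(k_G, k_M, k_F):
    """Invariant tau = k_F - k_M^T k_G^{-1} k_M of the Gaussian
    partition function (2.10)-(2.11), assuming invertible k_G."""
    return k_F - k_M.T @ np.linalg.inv(k_G) @ k_M

# Example: G = 1 gauge field at level 2, unit mixed level, no flavor level.
k_G = np.array([[2.0]]); k_M = np.array([[1.0]]); k_F = np.array([[0.0]])
t = tau(k_G, k_M, k_F)            # tau = 0 - 1*(1/2)*1 = -1/2

# T-move (odd generator): shift the background flavor level by one.
t_shift = tau(k_G, k_M, k_F + 1.0)
assert np.isclose(t_shift, t + 1.0)

# S-move (schematic gauging): promote the flavor to a gauge field and
# couple a new background field with unit mixed level and no flavor level.
k_G2 = np.block([[k_G, k_M], [k_M.T, k_F]])   # old flavor becomes gauge
k_M2 = np.array([[0.0], [1.0]])               # unit BF coupling to new flavor
k_F2 = np.array([[0.0]])
t_gauged = tau(k_G2, k_M2, k_F2)
assert np.isclose(t_gauged, -1.0 / t)         # tau -> -1/tau
print(t, t_shift, t_gauged)
```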
The squashed three-sphere partition functions under examination are Euclidean path integrals on the manifold

S^3_b = { (z_1, z_2) ∈ C^2 : b^2 |z_1|^2 + b^{−2} |z_2|^2 = 1 }.   (2.15)

This geometry is labelled by a parameter b, a positive real number; however, the symmetry under b → 1/b allows us to restrict our attention to the parameter

c_b = i (b + b^{−1}) / 2.   (2.16)

In this geometry, preservation of supersymmetry requires one to turn on background values for scalars in the supergravity multiplet. While these fields are normally real, like the real mass variables x_i coupling to the ordinary flavors, in this background they are imaginary and proportional to c_b. As a result, R−R Chern-Simons levels and R-flavor Chern-Simons levels appear as Gaussian prefactors in the partition function of the form

exp[−πi ( k_RR c_b^2 + 2 c_b k_RF · x )].

From the above, we note that the R−R Chern-Simons levels appear as multiplicative constants independent of the flavor variables x. Since we are interested in the computation of partition functions up to overall multiplication by phases, such terms are not relevant for this work. On the other hand, the R−F Chern-Simons terms appear as linear terms in x in the exponent. One can easily see why such terms violate superconformal invariance. The round three-sphere partition function for the conformal field theory in the absence of background fields is given by evaluating Z(x) at vanishing x and c_b = i. The first derivative with respect to x, evaluated at the round three-sphere and vanishing x, therefore computes the one-point function of the associated current. As the three-sphere is conformal to flat space, conformal invariance means that this one-point function vanishes, implying that k_RF must also vanish.

Quite generally throughout this paper we encounter examples of partition functions of interacting CFTs where the naive value of k_RF, as extracted from the first derivative of Z(x) evaluated at the conformal point, does not vanish. Superconformal invariance can always be restored in such examples by explicitly including ultraviolet counterterm values for k_RF to cancel the spurious contributions [25]. Thus, from now on we write expressions for partition functions with non-vanishing first derivatives, always keeping in mind that the true physical partition function of the conformal theory is only obtained by including suitable counterterms.
Quantum mechanics and partition functions
The partition function calculations described in the previous section can be phrased in a useful way in elementary quantum mechanics. In this context, the associated action of Sp(2F, Z) is known as the Weil representation [26]. We consider the Hilbert space of complex-valued functions of F real variables and aim to interpret Z(x) as a wavefunction.^6 First, introduce position and momentum operators x̂_i and p̂_i acting on wavefunctions, with commutation relations consistent with the symplectic matrix J introduced in (2.3). We use Dirac bra-ket notation for states, and let |y⟩ denote a normalized simultaneous eigenstate of the position operators x̂_i. For convenience we also note that the wavefunction of a momentum eigenstate takes the form

⟨y|p⟩ = exp[2πi (y_1 p_1 + y_2 (p_1 + p_2) + · · · + y_F (p_1 + p_2 + · · · + p_F))].   (2.21)

On this Hilbert space there is a natural unitary representation of Sp(2F, Z). This representation is defined using the generators (2.4), as in (2.22).^7 One important feature of this representation is that its action by conjugation on position and momentum operators produces quantized canonical transformations: explicitly, if M is any symplectic transformation, conjugation by its representative implements the linear action of M on the operators (x̂, p̂). This fact underlies the significance of this representation in all that follows. We now wish to show that we may interpret the partition function of a theory Ψ as the wavefunction of an associated state |Ψ⟩

Z_Ψ(x) = ⟨x|Ψ⟩.   (2.24)

Of course, both wavefunctions and partition functions are complex-valued functions of F real variables x_i, so we are free to make the identification appearing in (2.24). The non-trivial aspect of this identification is that the Sp(2F, Z) action on quantum field theories,
defined by the operations appearing in (2.5)-(2.6), can be achieved at the level of the partition function by the action of the operators of the same name defined by the representation given in (2.22). To see that these quantum mechanics operators behave correctly, note that given any state |Ψ⟩ we have

⟨x|σ_{2j−1}|Ψ⟩ = e^{−πi x_j^2} ⟨x|Ψ⟩.

Thus, if the state |Ψ⟩ corresponds to a quantum field theory with partition function ⟨x|Ψ⟩, then the integral definition of the partition function given in equation (2.9) implies that σ_{2j−1} shifts the background Chern-Simons level for the j-th flavor by one unit, as expected.
We can similarly see that the quantum mechanical σ_{2j} operator acts as required: its matrix elements implement an integral over the j-th flavor variable, while the S generator acts to gauge the flavor symmetry and introduces a new flavor which is dual to the original symmetry. To exhibit this in the simplest setting, consider a single U(1) flavor symmetry. The relevant quantum mechanics is now single-variable, with standard commutation relations between x̂ and p̂, and (up to phase conventions) the representation of the symplectic generators is given by multiplication by a Gaussian and by Fourier transform,

(T ψ)(x) = e^{−πi x^2} ψ(x),   (S ψ)(x) = ∫ dy e^{2πi x y} ψ(y).
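A minimal numerical illustration of this representation (our sketch, using the Gaussian/Fourier conventions assumed above, which may differ from the paper's by phases): on a discretized wavefunction, S acts by Fourier transform, and applying it twice returns the parity-reversed wavefunction, (S^2 ψ)(x) = ψ(−x), one of the defining SL(2, Z) relations up to phase.

```python
import numpy as np

# Discretize the line: x_k = (k - N/2) * dx on a symmetric grid.
N, L = 1024, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

def S(psi):
    """Fourier transform (Sψ)(x) = ∫ dy e^{2πi x y} ψ(y),
    approximated by a Riemann sum on the grid."""
    kernel = np.exp(2j * np.pi * np.outer(x, x))
    return kernel @ psi * dx

def T(psi):
    """Multiplication by the Gaussian e^{-πi x^2}."""
    return np.exp(-1j * np.pi * x**2) * psi

# A smooth, rapidly decaying test state.
psi = np.exp(-x**2) * (1 + 0.3 * x)

# S^2 should implement parity: (S^2 ψ)(x) = ψ(-x).
psi_SS = S(S(psi))
parity = np.roll(psi[::-1], 1)   # ψ(-x_k) on this grid (edge site wraps)
print("max |S^2 ψ - ψ(-x)| =", np.max(np.abs(psi_SS - parity)))  # small
```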
A simple class of theories is defined starting from the trivial theory Ω. This theory has no gauge groups and vanishing flavor Chern-Simons levels. Its partition function is unity,

Z_Ω(x) = ⟨x|Ω⟩ = 1.   (2.32)

More interesting theories can be generated by starting with the trivial theory Ω and acting with S and T. For a general SL(2, Z) element O, the partition function ⟨x|O|Ω⟩ is again a Gaussian (2.33).^9 The answer thus takes the general form (2.10), with associated invariants determined by O (2.34). As in the case of a single flavor symmetry discussed above, the resulting quantum field theory and partition function depend only on the element O in Sp(2F, Z), while a particular Lagrangian realization of the theory requires a choice of word in the generators σ_n which represents O.

This quantum mechanical setup naturally suggests additional quantities to compute. Rather than considering the wavefunction of O acting on the trivial state |Ω⟩, we may instead double the flavor variables and compute the complete matrix element of O

Z^Op_O(x_1, · · · , x_F, y_1, · · · , y_F) ≡ ⟨x_1, · · · , x_F | O | y_1, · · · , y_F⟩.   (2.36)

The construction of (2.36) is not limited to the case of symplectic operators. Indeed, in section 5 we will see that an interesting class of non-symplectic operators O have matrix elements which are identified with partition functions of interacting three-dimensional conformal field theories coupled to 2F flavor fields. In general, such matrix element partition functions have the following features.
• In the physical interpretation we have developed, the integration over the y variables is the gauging of the associated flavor symmetries at vanishing values of the associated FI parameters.
• More generally, the quantum-mechanical operation of operator multiplication can be interpreted in field theory. A product of operators can always be decomposed into a convolution by an insertion of a complete set of states

⟨x|O_1 O_2|y⟩ = ∫ d^F z ⟨x|O_1|z⟩ ⟨z|O_2|y⟩.   (2.38)

Again, the integration is physically interpreted as gauging. We consider the two theories whose partition functions are given by the matrix elements of O_i, identify flavors as indicated in (2.38), and gauge with no FI-term.
• Z^Op_O(x, y) is a partition function of a theory coupled to 2F background flavor fields. A general theory of this type is acted on by the symplectic group Sp(4F, Z); however, a matrix element is acted on only by the subgroup which does not mix the x and y variables. The geometrical and physical interpretation of this splitting will be explained in section 5.
Tangles
Our goal in this section is to give a geometric counterpart to the field theory and partition function formalism developed in the previous analysis. A natural way to develop such an interpretation is to engineer the Abelian Chern-Simons theory by compactification of the M5-brane on a three-manifold M . In six dimensions, the worldvolume of the M5-brane supports a two-form field B with self-dual three-form field strength [27]. When reduced on a three-manifold, the modes of B may engineer an Abelian Chern-Simons theory. We review aspects of this reduction and explain the three-dimensional geometry required to understand the Sp(2F, Z) action.
Reduction of the chiral two-form
Consider the free Abelian M5-brane theory reduced on a three-manifold M. To formulate the theory of a chiral two-form, M must be endowed with an orientation, which we freely use throughout our analysis. The effective theory in the three macroscopic dimensions is controlled by the integral homology group H_1(M, Z). The simplest way to understand this fact is to note that a massive probe particle in the theory arises from an M2-brane which ends on a one-cycle γ in M. In particular, the homology class of γ ∈ H_1(M, Z) labels the charge of the particle.
In the effective theory in three dimensions, massive charged probes are described by Wilson lines. Let C denote a one-cycle in the non-compact Minkowski space. A general Wilson line can be written as

W_q(C) = exp[ i Σ_α q_α ∮_C A^α ].

If the theory in question has G gauge fields and F flavor fields, then the charge vector q_α has G + F components and integral entries. However, in the presence of non-vanishing Chern-Simons levels, the charge vector q is in general torsion valued. Thus, distinct values of the integral charge vector q may be physically equivalent. The allowed distinct values of the charge vector are readily determined by examining the two-point function of Wilson loops in Abelian Chern-Simons theory coupled to background vectors. The results are summarized as follows. Let Z^G ⊂ Z^{G+F} be the subset of charges uncharged under the flavor group U(1)^F. By restricting to this subset we may view the level matrix as specifying a map K : Z^G → Z^{G+F}, and those charge vectors in the image of this map are physically equivalent to no charge at all. Since we have determined that the possible Wilson lines encode the homology of M, it follows that

H_1(M, Z) ≅ Z^{G+F} / K(Z^G).   (2.42)

Equation (2.42) encodes the appropriate generalization of Kaluza-Klein reduction to the case of torsion-valued charges. The fact that we study Chern-Simons theories up to possible framing anomalies (equivalently, overall phases in the partition function) means that the entire theory is characterized by the group (2.42). However, the homology of M, and hence the underlying physics, has no preferred description via a classical Lagrangian. Indeed, as we will illustrate in the remainder of this section, distinct classical theories with the same group of Wilson line charges can in fact arise from compactification on the same underlying manifold M. Thus, already in this elementary discussion of reduction of the two-form we see the important fact that compactification of the M5-brane theory produces a specific quantum field theory and not, as one might naively expect, a specific Lagrangian presentation of a classical theory which we subsequently quantize. It is for this reason that our geometric constructions of field theories are powerful: dualities are manifest.

Finally, before moving on to discuss explicit examples, we remark on the geometry associated to flavor symmetries. These arise when the manifold M is allowed to become non-compact. Suppose that M develops cylindrical regions near infinity which take the form of R × R_+ × S^1. Then on the asymptotic S^1 cycle we may reduce the two-form field to obtain another gauge field

A = ∮_{S^1} B.   (2.43)

However, unlike the compact cycles in the interior of M, the cycle S^1 has no compact Poincaré dual, and hence A is a non-dynamical background field; it provides the effective theory in three dimensions with a U(1) flavor symmetry. Moreover, since the boundary behavior of A must be specified to obtain a well-defined theory in three dimensions, the resulting theory is of the type we have considered in the introduction: a theory with flavor symmetries and a specified coupling to background gauge fields. As a result, the partition function Z(x) is a well-defined observable of the theory. The number of flavor variables on which the result depends is the number of homologically independent cylindrical ends of M. In the context of the examples constructed in section 2.2.2, for F flavors we will require F + 1 cylindrical ends.
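For orientation, a minimal instance of (2.42): take G = 1, F = 0 with level matrix K = (k), k ≠ 0. The map K : Z → Z is multiplication by k, so

H_1(M, Z) ≅ Z/kZ,

recovering the familiar fact that U(1)_k Chern-Simons theory supports exactly k inequivalent Wilson lines.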
Double covers from tangles
The specific class of geometries that we will study are conveniently presented as double covers of the non-compact space R^3, branched over a one-dimensional locus L. Topologically, L is simply the union of F + 1 lines; however, its embedding in R^3 is constrained. On the asymptotic two-sphere at the boundary of three-space, we mark 2F + 2 distinct points p_1, · · · , p_{2F+2}. The 2F + 2 ends of L at infinity are the points p_i. Meanwhile, in the interior of R^3 the components of L may be knotted. Such an object is known as an (F + 1)-tangle. An example in the case of F = 1 is illustrated in figure 2. Given two distinct tangles L_1 and L_2, they are considered to be equal topologically when one can be deformed to the other by an isotopy in the interior of R^3 which keeps the ends at infinity fixed. In the following we will also need to be more precise about the behavior near the asymptotes p_i. Let B_r ⊂ R^3 denote the exterior of a closed ball of radius r centered at the origin. We view B_r topologically as S^2 × I, where I is an open interval. For large r, the portion L ∩ B_r of the tangle contained in B_r consists of 2F + 2 arcs. We constrain the behavior of these arcs by requiring that the pair (B_r, L ∩ B_r) is homeomorphic to the trivial pair (S^2 × I, {p_1, p_2, · · · , p_{2F+2}} × I), where the p_i are points in S^2. This constraint implies that the knotting behavior of the tangle eventually stops as we approach infinity. In practice it means that any planar projection of the tangle L appears at sufficiently large distances as 2F + 2 disjoint semi-infinite line segments which undergo no crossings.
For most of the remainder of this section, we will argue that the class of three-manifolds obtained as double covers branched over tangles have exactly the correct properties to engineer the Abelian Chern-Simons theories coupled to background flavor gauge fields which we have discussed in the previous section.

Figure 2. An example of a tangle (F = 1). The four endpoints of L extend forever towards the points at infinity.

As a first step, observe that such geometries do indeed support F flavor symmetries. Group the asymptote points into F + 1 pairs {p_{2i−1}, p_{2i}}.
The double cover of R^3 branched over the two straight arcs emanating from {p_{2i−1}, p_{2i}} yields the anticipated cylindrical ends of M required to support flavor symmetry.
To see this more explicitly, note that the boundary of the base may be viewed as a two-sphere. The boundary of the double cover is a double cover of the two-sphere branched over 2F + 2 points, and is therefore a Riemann surface of genus F. The flavor cycles are the homology classes in this boundary Riemann surface which remain non-contractible in the three-manifold. It is easy to convince oneself that there are exactly F such flavor cycles. For instance, the simplest class of such three-manifolds consists of handlebodies, defined by choosing F non-intersecting cycles on the boundary and filling them in.
In section 2.3 we explain how to extract a Lagrangian for an Abelian Chern-Simons theory from the geometric data of a tangle. As we have previously described, the M5-brane on M does not provide a preferred Lagrangian. Consistent with this fact, we find that a Lagrangian description of the field theory associated to a particular tangle requires additional geometric choices. In this case the choice is a Seifert surface, a surface whose boundary is the given tangle. For any fixed L there are infinitely many such surfaces, each giving rise to a distinct Lagrangian presentation of the same underlying physics.
Finally, we argue that tangles, and hence the class of three-manifolds described as double covers branched over tangles, enjoy a natural action by Sp(2F, Z). To illustrate this action, we draw a generic tangle with F + 1 strands as in figure 3. Then, the action of the symplectic group is defined by the generators σ_j, where j = 1, · · · , 2F, which act on the tangles by braid moves in a neighborhood of the asymptotes p_i. Several examples are illustrated in figure 4.
Again we can understand this three-dimensional geometry by examining the boundary at infinity of M. As we described above, this is a Riemann surface of genus F . The action defined in figure 4 is a surgery on M which in general changes its topology. This surgery is induced by mapping class group transformations in a neighborhood of the boundary of M .
Figure 3. A generic tangle L in R^3. The ellipses indicate that the strands continue to infinity with no additional crossings. In the interior of the box, the strands are in general knotted in an arbitrary way.

In particular, as is clear from the illustrations, what we have defined is not, a priori, an action of the symplectic group, but rather an action of the braid group B_{2F+1} on 2F + 1 strands [28].^10 The braid group and the symplectic group are related by a well-known exact sequence

1 → T_{2F+1} → B_{2F+1} → Sp(2F, Z) → 1,

where T_{2F+1} is the Torelli group, and the last map in the above arises because the braid group B_{2F+1} acts on the boundary Riemann surface preserving its intersection form.
To make contact with our discussion of field theories, we wish to illustrate that the action of the braid group defined by figure 4 reduces to an action of the symplectic group on the associated field theories. This implies that any two elements of B_{2F+1} that differ by multiplication by a Torelli element must give rise to equivalent actions on the field theories extracted from an arbitrary tangle. More bluntly, the Torelli group generates dualities. One of the outcomes of this section is a proof of this fact.
Seifert surfaces
To understand the physics encoded by a tangle we need control over the homology of the cover manifold M. The appropriate tool for this task is a Seifert surface. In general, given any knot,^11 a Seifert surface Σ for the knot is a connected Riemann surface with boundary the given knot. An example is illustrated in figure 5. In the mathematics literature it is common to impose the additional requirement that Σ be oriented. In our context there is no natural orientation for Σ, and hence we proceed generally, allowing possibly non-orientable Seifert surfaces. For any knot there exist infinitely many distinct Seifert surfaces, and given a knot diagram a number of simple algorithms exist to construct a Σ [29]. We describe one useful algorithm in section 2.3.1. The reason that Seifert surfaces are relevant for our discussion is that if one wishes to construct a double cover branched over a knot, then a choice of Σ is equivalent to a choice of branch sheet. As such, features of the homology of the branched cover M can be extracted from a knowledge of a Seifert surface. However, the resulting three-manifold M depends only on the branch locus L, and hence the homology, and ultimately the associated physical theory, are independent of the choice of Σ. In the following we explain how any fixed choice of Seifert surface allows us to extract a set of gauge and flavor groups and a matrix of Chern-Simons levels from the geometry.

^10 The 'last strand' appearing at the bottom of the diagram in figure 3 is stationary under all braid moves. Alternatively one may work with the spherical braid group and impose additional relations. For simplicity we stick with the more familiar planar braids. ^11 In this paper the term knot will be used broadly to include both knots and multicomponent links.
To begin, for simplicity we assume that we are dealing with a knot in S^3, as opposed to the non-compact tangles in R^3 needed to support flavor symmetry. The generalization to the present non-compact situation will then be straightforward. The detailed statements that we require are as follows. Any cycle in H_1(M, Z) can be thought of as a cycle on the base S^3 which encircles Σ. This can be viewed as a direct parallel with the theory of branched covers of the two-sphere.^12 Thus, we deduce that there is a surjective map

H_1(S^3 − Σ, Z) → H_1(M, Z) → 0.   (2.46)

Meanwhile, there is a linking number pairing between cycles in H_1(S^3 − Σ, Z) and cycles in H_1(Σ, Z). This linking number pairing is perfect, and hence we may extend (2.46) to

H_1(Σ, Z) → H_1(M, Z) → 0.   (2.47)

Our task is thus reduced to determining which cycles on the Seifert surface correspond to trivial cycles in the homology of M.
To this end, we define a symmetric bilinear form, the so-called Trotter form

K : H_1(Σ, Z) × H_1(Σ, Z) → Z.   (2.48)

Our choice of notation is intentional: we will see that the Trotter form defines the Chern-Simons levels. To extract K we let α ∈ H_1(Σ, Z), and let α̃ be the cycle in S^3 obtained by locally pushing α off of Σ in both directions. The cycle α̃ is a two-to-one cover of α.
If Σ is orientable then α̃ consists of two disconnected cycles, each on a given side of Σ (as determined by the orientation); however, in general α̃ is connected. The definition of the Trotter form is

K(α, β) = lk_#(α̃, β),   (2.49)

where lk_# denotes the linking number pairing of cycles in S^3. A simple calculation illustrates that K is symmetric. A slightly less trivial argument shows that the image of K is exactly the set of cycles on Σ which are trivial in M. Thus, the completion of the sequence (2.47) is

0 → Im(K) → H_1(Σ, Z) → H_1(M, Z) → 0.   (2.50)

In particular we conclude that H_1(M, Z) ≅ H_1(Σ, Z)/Im(K). Double covers of S^3 branched over knots are exactly the geometries we expect to engineer Abelian Chern-Simons theories without flavor symmetries, and we may relate the sequence (2.50) to physics as follows:

• A choice of Seifert surface Σ and a set of generators of homology α_1, · · · , α_G determines a set of G Abelian gauge fields.
• The Trotter form pairing on cycles in H_1(Σ, Z) is equal to the Chern-Simons level matrix on the associated gauge fields.
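A minimal sanity check of this dictionary (an illustration, not an example from the paper): for the unknot with its disc Seifert surface, H_1(Σ, Z) = 0, so (2.50) gives H_1(M, Z) = 0. Indeed, the double cover of S^3 branched over the unknot is again S^3, and the associated theory has G = 0 gauge fields: the trivial theory Ω.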
Distinct choices of Seifert surface are physically related by duality transformations. This fact is easy to verify directly. For example, distinct choices of Σ which differ by gluing in handles or Möbius bands add new gauge cycles, with compensating levels, keeping the underlying physics unmodified. Finally, we generalize our discussion of Seifert surfaces and homology to the case of the non-compact geometries required to discuss flavor symmetries. Let L denote a tangle in R^3. We introduce non-compact Seifert surfaces Σ, again defined by the condition that they are connected surfaces with boundary L. However, now to compute flavor data we must fix a compactification of both L and Σ. We achieve this by identifying the points p_i in pairs and gluing in arcs near infinity as illustrated in figure 6.
Let δ indicate the union of the arcs at infinity, and Σ_c the compactified Seifert surface including δ. The surface Σ_c should be viewed as embedded inside S^3, the one-point compactification of R^3, and calculations of linking numbers etc. take place inside S^3. For simplicity, in future diagrams we often leave the compactification data of the Seifert surface implicit, by setting the convention that whenever a non-compact Seifert surface consists of strips extending to infinity in R^3, the intended compactification is the one where the strips are capped off with arcs as in figure 6.

Figure 6. The asymptotic geometry of a Seifert surface for a generic tangle. The shaded blue region indicates the interior of Σ. The arcs at infinity indicate the compactification of L and Σ. The non-compact cycles on Σ give rise to flavor symmetries.
With these preliminaries about compactifications fixed, we may now state the required generalization of the sequence (2.50)

0 → Im(K) → H_1(Σ_c, δ, Z) → H_1(M, Z) → 0.   (2.51)

Note that in addition to the boundaryless cycles in Σ_c which give rise to gauge groups, H_1(Σ_c, δ, Z) also contains F cycles with boundary in δ. In the uncompactified Seifert surface these cycles are non-compact; they are illustrated in figure 6. They correspond physically to the U(1)^F flavor symmetry. To complete the construction it thus remains to extend the definition of the Trotter form. For boundaryless cycles in Σ_c the definition is as before.
Meanwhile, to evaluate the Trotter form on cycles with boundary, we again push them off locally in both directions from Σ_c and compute the local linking number from the interior of Σ. Alternatively, one may simply think of the pair of points in the boundary of a flavor cycle in Σ_c as formally identified. In this way we obtain a closed cycle in S^3, and we compute its Trotter pairings as before. We thereby obtain a bilinear form K defined on H_1(Σ_c, δ, Z), and the image of this form restricted to the boundaryless cycles in H_1(Σ_c, δ, Z) defines the term Im(K) appearing in (2.51).
To summarize, given any tangle L in R^3, we extract a Lagrangian description of the effective Abelian Chern-Simons theory as follows:

• A choice of Seifert surface Σ and a set of generators α_1, · · · , α_{G+F} of the relative homology H_1(Σ_c, δ, Z) determines a set of Abelian vector fields. Generators corresponding to boundaryless one-cycles correspond to gauged U(1)'s, while those corresponding to one-cycles with boundary in δ are background flavor fields.
• The Trotter form pairing on cycles in H_1(Σ_c, δ, Z) is equal to the Chern-Simons level pairing on the associated vector fields. We denote by Im(K) the image of this pairing restricted to the subset of boundaryless cycles in Σ_c, and we have

H_1(M, Z) ≅ H_1(Σ_c, δ, Z) / Im(K).
Checkerboards
The previous discussion of Seifert surfaces is complete but abstract. In practice, it is useful to have a concrete method for computing linking numbers, and hence extracting a set of Chern-Simons levels, from the geometry. One such method, described in this section, is provided by so-called checkerboard Seifert surfaces.
To begin, fix a planar projection of the tangle L ⊂ R^3. In such a planar diagram, the information about the knotting behavior of L is contained in the crossings of the diagram. Each crossing locally divides the plane into four quadrants. We construct a Seifert surface for L by coloring two of the four quadrants at each crossing in checkerboard fashion, extending consistently to all crossings. The colored region then defines Σ. Note that each crossing c in the diagram is endowed with a sign ζ(c) = ±1, depending on whether the cross-product of the over-strand with the under-strand through Σ at c points into or out of the plane, as shown in figure 7.
To compute the Trotter form, we first assume that Σ_c appears compactly in the plane.^13 Then, there is a natural basis of boundaryless cycles in Σ_c associated to the compact uncolored regions of the plane. We orient these cycles counterclockwise. Similarly, in the diagram of Σ, non-compact white regions may be associated to flavor cycles. These cycles are again canonically oriented "counterclockwise," i.e. the cross-product of the tangent vector to the cycle with the outward normal pointing into the associated non-compact uncolored region must be out of the plane.^14 The Trotter pairing on these cycles is determined by summing over the crossings involving a given pair of cycles, weighted by the sign of the crossing. Explicitly, for α and β a pair of generators as defined above, we have

K(α, β) = Σ_{c ∈ α ∩ β} ζ(c),   (2.53)

where the sum runs over the crossings at which both α and β pass. Equation (2.53) provides a convenient way to read off Chern-Simons levels for a given tangle and will be utilized heavily (although often implicitly) throughout the remainder of this work.

^13 This assumption cannot in general be relaxed. Indeed, when Σ_c is non-compact in the plane one must take into account the fact that in the compactification procedure the plane becomes an embedded S^2 inside S^3, and hence may endow Σ_c with additional topology. ^14 There is one linear relation among the flavor cycles obtained in this way. So a given Σ will have F + 1 non-compact uncolored regions and F independent flavor cycles.
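The combinatorics of (2.53) is simple enough to automate. The sketch below (ours; the data format is hypothetical) represents a checkerboard diagram as a list of crossings, each recording which two cycles meet there and the sign ζ(c), and accumulates the symmetric level matrix K.

```python
import numpy as np

def trotter_form(num_cycles, crossings):
    """Accumulate the Trotter/level matrix of (2.53).

    crossings: list of (a, b, zeta) where a, b index the two cycles
    (compact or flavor) meeting at the crossing and zeta = +1 or -1
    is the crossing sign.  Diagonal entries arise when a == b.
    """
    K = np.zeros((num_cycles, num_cycles), dtype=int)
    for a, b, zeta in crossings:
        K[a, b] += zeta
        if a != b:
            K[b, a] += zeta  # the form is symmetric
    return K

# Toy example (hypothetical diagram): two cycles sharing one positive
# crossing, with cycle 0 also carrying two positive self-crossings.
K = trotter_form(2, [(0, 0, +1), (0, 0, +1), (0, 1, +1)])
print(K)   # [[2 1]
           #  [1 0]]
```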
Figure 8. The action of braid moves on linking numbers. In (a), all linking numbers are unmodified except for that of the flavor cycle α_1 (which runs from δ_{F+1} to δ_1 and is illustrated in red), whose self-linking number is increased by one. In (b), the action of σ_2 on L: we first change basis of flavor cycles to β_j, which runs from δ_j to δ_{j+1}; we then gauge β_1, shown in green, and introduce a new flavor cycle, shown in red, linked with the gauged cycle.
The Torelli group of dualities
We are now equipped to investigate the symplectic action on tangles. In particular, we wish to prove that the action of the braid group B_{2F+1} on tangles reduces to an action of the symplectic group Sp(2F, Z) when considered as an action on the corresponding physical theories.
To prove this statement, we proceed in the most direct way possible. We compute the action of the braid group generators σ_n, illustrated in figure 4, on the Chern-Simons levels extracted from any Seifert surface associated to the tangle. We show that this action matches exactly the previously defined action (2.5)-(2.6). Since the latter action is symplectic, this implies that the former is as well. In particular, this suffices to prove that the Torelli group acts trivially on the underlying quantum field theory.
To begin, we fix a Seifert surface with definite compactification data δ. As we have previously described, δ is a union of F + 1 arcs δ_i with i = 1, · · · , F + 1. We draw diagrams such that the arcs are ordered down the page, with δ_1 appearing at the top, δ_2 next, and so on. A basis of flavor cycles in H_1(Σ_c, δ, Z) is given by F cycles α_i, each of which begins at δ_{F+1} and terminates at δ_i. This geometry is shown in figure 6. With these conventions, the braid moves act as in figure 8.
Consider first the odd braid moves σ_{2j−1}, illustrated in figure 8a. According to formula (2.53), the effect of such a move is to modify the Trotter form by increasing K(α_j, α_j) by one while leaving all other entries invariant. This is exactly the expected action (2.5) of this transformation on Chern-Simons levels.
Similarly, we may consider the braid moves with even index σ_{2j}, illustrated in figure 8b. To understand this transformation we first change basis on flavor cycles to β_i, which run from δ_i to δ_{i+1}. The transformation from the basis α_i to the basis β_i is facilitated by the U matrix of equations (2.6)-(2.7). Then, the braid move σ_{2j} gauges β_j and introduces a new flavor cycle β̃_j. Finally, we update the Trotter form to account for the new linking numbers apparent in figure 8b:

δK(β_j, β_j) = −1,   δK(β̃_j, β̃_j) = −1,   δK(β_j, β̃_j) = 1.   (2.54)
This is exactly the gauging operation of equation (2.8). Thus we have completed the verification of the symplectic action. As a result of this analysis we conclude that the Torelli group T_{2F+1} acts via dualities on Abelian Chern-Simons theories. Given any tangle, one may act on it with a Torelli element to obtain a new geometry. Fixing Seifert surfaces, the two geometries will in general have distinct classical Lagrangian descriptions, yet their underlying quantum physics is identical.
Moreover, as we see in section 3 and beyond, the technology of this section generalizes immediately to the more complicated geometries required for constructing interacting field theories. In particular, the symplectic action we have described arises from braid moves near infinity and hence is enjoyed by any geometry with the same asymptotics.
Geometric origin of quantum mechanics
To conclude our discussion of Abelian Chern-Simons theories, we briefly comment on the origin of the quantum mechanical framework for partition function calculations discussed in section 2.1.1. We fix an Abelian Chern-Simons theory T(M) engineered by reduction of the M5-brane on a three-manifold M. In this section we are only interested in the modes of this theory which descend from the six-dimensional chiral two-form, and throughout we ignore scalars and fermions. The three-sphere partition function of such a theory then has an underlying six-dimensional origin as the M5-brane partition function on the product

S^3_b × M.

Thus far, we have viewed M as small and interpreted the long-distance physics as an Abelian Chern-Simons theory coupled to flavors, which we subsequently compactify on S^3. However, an alternative point of view is to consider S^3 to be small, and obtain another effective three-dimensional description which is subsequently compactified on M. As S^3 has vanishing first homology, the resulting three-dimensional description is one with no Wilson line observables, and hence, from the point of view of this paper, which studies partition functions on compact manifolds up to multiplication by overall factors, we cannot distinguish the result from the trivial theory.
However, a standing conjecture is that in fact the reduction on S^3 gives rise to a U(1) Chern-Simons theory at level one. Assuming the veracity of this statement, we arrive at a beautiful physical interpretation of the quantum mechanical calculations in section 2.1.1.
Recall that M is not a compact manifold, but rather has non-compact cylindrical ends required to support flavor symmetry. One may equivalently view M as a manifold with boundary at infinity and with specified boundary conditions supplied by the background flavor gauge fields. On general grounds, the path integral of U(1) level-one Chern-Simons theory on M produces a state in the boundary Hilbert space determined by the quantization of Chern-Simons theory on ∂M. In this case, as a consequence of the conjecture, one is quantizing a space of U(1) flat connections on a Riemann surface with 2F independent cycles. The Hilbert space thus consists of wavefunctions of F real variables x_1, · · · , x_F, which are interpreted as the holonomies of a flat connection around a maximal collection of F non-intersecting homology classes in ∂M. The symplectic action is then the standard action on this Hilbert space induced by the action on the homology of the genus-F Riemann surface ∂M.
Thus, the quantum mechanical framework which emerged abstractly from supersymmetric localization formulas in section 2.1.1 takes on a natural physical interpretation when the associated field theories are geometrically engineered. In particular, the viewpoint of the partition function Z^{T(M)}_{S^3}(x) as a wavefunction in a Hilbert space is a simple consequence of the six-dimensional origin of the computation, and leads to a correspondence of partition functions

Z^{T(M)}_{S^3}(x) = Z^{U(1)_1}_{M}(x).

This identification is reminiscent of the one studied in [30], and was obtained in the case of three-manifolds from different perspectives in [31, 32].
Particles, singularities, and superpotentials
In this section we exit the realm of free Abelian Chern-Simons theories and enter the world of interacting quantum systems. We study conformal field theories described as the terminal point of renormalization group flows from Abelian Chern-Simons matter theories.
Thus, in addition to the vector multiplets describing gauge fields, our field theories will now contain charged chiral multiplets. We will find that, in close analogy with the study of N = 2 theories in four dimensions, such theories can be geometrically encoded by studying the M5-brane on a singular manifold. In the context of three-manifolds branched over tangles, the natural class of singularities are those where strands of the tangle collide and lose their individual identity. We refer to such objects as singular tangles. Our main aim in this section is to give a precise description of these objects and explain how they encode non-trivial conformal field theories. In the process we will also describe how the geometry encodes superpotentials. A summary of results, in the form of a concise set of rules for converting singular tangles to physics, appears in section 3.4.
Singularities and special Lagrangians
We begin with a discussion of the geometric meaning of chiral multiplets and their associated wavefunctions in the three-sphere partition function. In our M-theory setting, the three-manifold M is embedded in an ambient Calabi-Yau Q, and massive particles arise from M2-branes which end along M on a one-cycle. In the simplest case of a spinless BPS chiral multiplet, supersymmetry implies that M is a special Lagrangian and the M2-brane is a holomorphic disc, as illustrated in figure 9 [33, 34]. The mass of the BPS particle is proportional to the area of the disc, and hence in the massless limit the cycle on which the M2-brane ends collapses. Thus, when a particle becomes massless the three-manifold M develops a singularity. A local model for this geometry is a special Lagrangian cone on T^2 in C^3. Such a cone is defined to be the subset L_0 in C^3 obeying [35]

L_0 = { (z_1, z_2, z_3) ∈ C^3 : |z_1|^2 = |z_2|^2 = |z_3|^2, Im(z_1 z_2 z_3) = 0, Re(z_1 z_2 z_3) ≥ 0 }.   (3.1)

When the mass of the M2-brane is restored, the singularity is resolved. This can be done in three distinct ways [35]. Let m > 0; then the resolutions L^a_m, a = 1, 2, 3, are obtained by deforming one of the moduli, e.g.

L^1_m = { (z_1, z_2, z_3) ∈ C^3 : |z_1|^2 − m = |z_2|^2 = |z_3|^2, Im(z_1 z_2 z_3) = 0, Re(z_1 z_2 z_3) ≥ 0 },   (3.2)

with L^2_m and L^3_m defined by shifting |z_2|^2 and |z_3|^2 instead. The resulting spaces are special Lagrangian three-manifolds in C^3 [34] diffeomorphic to S^1 × R^2. They differ by the orientation of a closed holomorphic disc in C^3 with area πm which represents the M2-brane. In the case of L^1_m this disc is given by

D^1_m = { (z_1, 0, 0) : |z_1|^2 ≤ m }.   (3.3)

The other cases, D^2_m and D^3_m, are analogous. We see that the boundary of the disc is an oriented S^1 in L^1_m whose homology class generates H_1(L^1_m, Z) ≅ Z. In the other cases the boundary is given by an oriented circle around the origin of the z_2 and z_3 planes respectively. One can thus see that the difference between the three ways the disc appears is determined by the orientation of its central axis in C^3.
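As a small check of the stated mass formula (using only the flat Kähler form on C^3): the disc (3.3) is the round disc of radius √m in the z_1-plane, so its area is

∫_{|z_1|^2 ≤ m} (i/2) dz_1 ∧ dz̄_1 = π (√m)^2 = πm,

consistent with a BPS particle whose mass is proportional to m and which becomes massless as the resolution parameter is switched off.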
To make contact with our discussion of tangles, we view this local model for the singularity as a double cover of R^3. The special Lagrangians L^a_m are acted on by an involution (3.4). The quotient space is parametrized by a triple of real coordinates (x̃_1, x̃_2, x̃_3). Locally the x̃_i provide coordinates on L^a_m, but the global structure of the special Lagrangian is a double cover. The branch locus is the set of fixed points of (3.4), and is composed of two strands explicitly given in (3.5), where t ∈ R provides a coordinate along the strands. One way to see that the branched cover is an equivalent description of the original topology is to slice R^3 into planes labelled by a time direction. The coordinate t on the branch lines in (3.5) provides such a foliation, and increasing time defines a notion of flow. Each slice is a Riemann surface which is a double cover of the plane branched over two points, and is thus a cylinder. Therefore, including time, we see that topologically the cover is R^2 × S^1. We pursue this perspective on local flows in M and connect them to four-dimensional physics in section 5.
Returning to our analysis of the special Lagrangian cone, we note that when viewed as a double cover it is easy to see how the three different resolutions L^a_m are realized in terms of the configurations of the branch lines (3.5). We fix a planar projection of the geometry by declaring x̂_3 to be the oriented perpendicular direction. Then, we can depict the geometry as in figure 10. Note that figure 10c only shows the overcross. The other choice, where the strand from upper left to lower right goes under the second strand, called the undercross, does not occur. This is an artifact of the planar projection which we use to visualize the configuration. Indeed, exchanging the oriented normal x̂_3 for −x̂_3 exchanges the overcross for the undercross. By contrast, changing the normal direction from x̂_3 to x̂_1 or x̂_2 permutes the resolutions appearing in figure 10 but leaves the triple, as a set, invariant.
In the limit m → 0 the branch lines collide and we recover the singularity (3.1). In R^3, this appears as four branch half-lines all emanating from the origin. These half-lines approach infinity in four distinct octants and hence specify the faces of a tetrahedron.
In this way, we see the tetrahedral geometry of [8] emerge from the structure of special Lagrangian singularities. In particular, any triangulation of the three-ball into tetrahedra gives rise to one of the tangles described here.
Having thoroughly analyzed the local model, we may now introduce a precise definition of the concept of a singular tangle. It is simply a tangle where we permit pairs of strands to touch at a finite number of points. The local structure of the cover manifold M at each such point is that of the singular special Lagrangian cone discussed above, and the global identification of strands in the tangle indicates how these local models are glued together.

Figure 11. Two different singularities. In (a) we see how an overcross singularity resolves after applying figure 10c. In (b) the corresponding resolution is shown for the undercross singularity. In both cases the two other resolutions of figure 10 are also present but not depicted.
In specifying the gluing we must keep track of additional pieces of discrete data.
• We draw singular tangles in planar projections of R^3. Hence each singularity is equipped with an oriented normal vector ±x̂_3. Varying the sign of the normal vector changes whether the overcross or undercross appears upon resolution.
• Fix a sheet labeling, 1 and 2, at each singularity. Then in the gluing we must specify whether the identified sheets are the same or distinct. Varying between these two choices alters the relative signs of the charges of the particles, as determined by the orientations of the M2-branes.
Both of the data described above have only a relative meaning: for a single singularity they are convention dependent, while for multiple singularities they may be compared. All told, then, if we draw singular tangles in a plane, each singularity is one of four possible types. We encode the four possibilities graphically with a thickened arrow on one of the strands passing through the singularity, as in figure 11. The thickened strand always resolves out of the page, while the direction of the arrow encodes the charge of the massless M2-brane residing at the origin of the singularity. In general we expect that double covers branched over singular tangles may be realized as singular special Lagrangians embedded inside non-compact Calabi-Yau three-folds. However, aside from the specific case of the geometry defined in (3.1) by [35], no examples are known. We view this as an interesting problem for future work.
Wavefunctions and Lagrangians
Our next task is to explain in general how to extract a Lagrangian description of the physics defined by a singular tangle. As in the case of the free Abelian Chern-Simons theories studied in section 2, there is no unique Lagrangian; rather, for each choice of Seifert surface we obtain a distinct dual presentation. In the case of singular tangles, we will see that these changes of Seifert surface are related by non-trivial mirror symmetries.
To begin, let us recall the data associated to a chiral multiplet in an Abelian Chern-Simons matter theory.
• A charge vector q_α ∈ Z^{G+F} indicating its transformation properties under U(1)^G × U(1)^F gauge and flavor rotations. In all of our examples the vector q_α will be primitive, meaning that the greatest common divisor of its integer entries is one.
• A parity anomaly contribution. If a chiral multiplet is given a mass m, it may be integrated out, leaving a residual contribution to the Chern-Simons levels of the fields. The shift in the levels is given by

δK_{αβ} = (1/2) sign(m) q_α q_β.   (3.6)

For primitive charge vectors the above shift has at least one non-integral entry. This implies that the ultraviolet levels are subject to a shifted, half-integral quantization law. We take the associated shift to be part of the definition of the chiral multiplet.
• An R-charge indicating the scaling dimension of the associated chiral operator in the conformal field theory. This data is fixed by a maximization principle once a superpotential is specified, and hence is not additional data in the geometry [17]. This will be addressed in section 3.3.
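For orientation, a quick instance of (3.6) as written above: a single chiral multiplet of unit charge under one U(1) contributes δK = ±1/2 according to the sign of its mass. This is precisely the half-integral ultraviolet assignment ±1/2 that will shortly be encoded in the subscript of the wavefunctions E_±.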
To encode the partition function of such chiral multiplets we must introduce a new class of wavefunctions depending on these data. Each is given by a non-compact quantum dilogarithm E_±(z) of the form (3.7), where c_b is the imaginary constant given in (2.16), and the function s_b(x), defined by the product

s_b(x) = ∏_{m,n ≥ 0} [ mb + nb^{−1} + (b + b^{−1})/2 − ix ] / [ mb + nb^{−1} + (b + b^{−1})/2 + ix ],   (3.8)

was obtained through a localization computation on the squashed three-sphere in [36], where the numerator and denominator come from vortex partition functions on the two half-spheres [37]. The physical interpretation of this function is read from the variables as follows.
• The subscript of E_± encodes the fractional ultraviolet Chern-Simons level ±1/2 assigned to the particle.
• The variable z indicates the linear combination of gauge and flavor fields under which the chiral multiplet is charged. For E_± the charge is z = ±q · (y x).
• The variable R denotes the R-charge.

Thus, we see that the physical data of a chiral multiplet is completely encoded by the wavefunctions (3.7). It follows that to assign a definite matter content to a singular tangle, as well as to extract the associated contributions to the partition function Z, it suffices to assign a quantum dilogarithm to each singularity. To proceed, we introduce a singular Seifert surface Σ for a singular tangle L. As explained in section 2.3, from the homology of Σ we extract a basis of gauge and flavor cycles under which particles may be charged. Let α be such a cycle. Utilizing the sequence (2.51), we may view α equivalently as a cycle in the cover M. An M2-brane disc D ending on M has a charge determined by its linking numbers

q_α = lk_#(α, ∂D).   (3.9)

The extension of this formula to the case of singular M is depicted in our graphical notation in figure 12. These dilogarithm assignments completely determine the matter content of a singular tangle. However, the assignments require a choice of Seifert surface. This surface is a choice of branch sheet for the double cover, and varying it does not alter the underlying geometry. As a consequence, our rules are subject to a crucial test: the underlying quantum physics must be independent of the choice of Seifert surface.
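Returning to the function s_b in (3.8) as reconstructed above: the product form makes two structural properties manifest, which hold factor by factor and hence survive any truncation: for real x each factor is a pure phase, and sending x → −x inverts each factor, so that s_b(x) s_b(−x) = 1. A small numerical sketch of ours (the truncated product is used only to illustrate these exact identities, not to approximate s_b itself, whose full definition requires regularization):

```python
import numpy as np

def s_b_truncated(x, b=1.0, cutoff=200):
    """Truncated product for s_b(x) of (3.8), as reconstructed above
    (conventions may differ from the paper's)."""
    Q = b + 1.0 / b
    val = 1.0 + 0.0j
    for m in range(cutoff):
        for n in range(cutoff):
            w = m * b + n / b + Q / 2.0
            val *= (w - 1j * x) / (w + 1j * x)
    return val

x = 0.37
print(s_b_truncated(x) * s_b_truncated(-x))  # = 1 exactly, factor by factor
print(abs(s_b_truncated(x)))                 # = 1: each factor is a phase
```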
Given the dualities between free Abelian Chern-Simons theories already described in section 2, independence of the choice of Seifert surface is ensured provided we have the equality shown in figure 13. There, we see that one and the same singularity may make different contributions to an ultraviolet Lagrangian depending on the choice of Seifert surface. At the level of partition functions, this means that a singularity which contributes as E_+(x_α) with one choice of branch sheet can contribute as E_−(x_β) with a different choice. Thus, we see that consistency of our analysis requires a mirror symmetry, implying that the same underlying conformal field theory may arise from ultraviolet theories with distinct matter content.

Figure 13. A duality results from changing the Seifert surface. In (a), a singularity contributing E_+(x_α) to the partition function. In (b), the Seifert surface is changed and the same singularity contributes E_−(x_β).

Figure 14. The duality between a free chiral multiplet and a U(1) gauge field with a charged chiral field. In (a) we see the free chiral field coupled to the flavor cycle α. In (b) we see the gauge cycle β and flavor cycle α of the dual theory.
To understand the nature of the duality implied by figure 13 we analyze its impact on the local model of the singular tangle involving a single singularity. Equality in more complicated examples follows from the locality of our constructions. The singular tangle together with its dual choices of Seifert surface and fixed compactification data δ i are shown in figure 14. The ultraviolet field content in each case is given by the following.
• Figure 14a: there is a background U(1) flavor symmetry associated to the cycle α and no propagating gauge fields. Associated to the singularity there is a chiral multiplet with charge 1 under the flavor symmetry. This particle contributes +1/2 to the Chern-Simons level. The scalar x_α in the background U(1) multiplet is the real mass of the chiral field.
• Figure 14b: there is a U(1) flavor symmetry associated to the cycle α and a U(1) gauge symmetry associated to the cycle β. Associated to the singularity is a chiral multiplet uncharged under the flavor symmetry but with charge −1 under the gauge symmetry. The level matrix, including classical contributions from the Trotter pairing as well as the fractional contributions of the particles, has an off-diagonal entry coupling α and β. This off-diagonal portion of the level matrix implies that the scalar x_α is the FI-parameter of the gauged U(1).
These two field theories are indeed known to form a mirror pair [13]. At the level of partition functions this equivalence is represented by a quantum dilogarithm identity, known as the Fourier transform identity (3.11) of [23]. The fact that our geometric description of conformal field theories provides a framework where this duality is manifest is a satisfying outcome of our analysis.
To gain further insight into this duality we now study resolutions of the singularity in both theories and interpret these from the viewpoint of three-dimensional physics. These resolutions correspond to motion onto the moduli space of the conformal field theory. From the perspective of the ultraviolet Lagrangians, the various branches of the moduli space can be described as Coulomb or Higgs branches, and the effect of the mirror symmetry is to exchange the two descriptions. The three different resolutions (3.5) have the following effect on the geometry of branch lines; see figure 15. Let us start with case (c). One can clearly see that the self-Chern-Simons level of the field α, as determined by the Trotter pairing, is one. This has a simple explanation from the point of view of field theory. Resolving the singularity means making the M2-brane massive with a mass m > 0. Thus the IR physics is obtained by integrating out this massive field, which according to (3.6) gives rise to a shift of the Chern-Simons level by (1/2) sign(m). Thus, as the ultraviolet Chern-Simons level was already one-half, the effective level is one, exactly as the geometry of resolution (c) predicts. There is yet another way to see this.
The limiting behavior of the quantum dilogarithm at large real mass encodes the same shift. For resolution (b) the sign of the mass is reversed, which results in an effective Chern-Simons level k_αα = 0. This is in complete accord with the geometry, as cycle α has no self-linking after push-off in figure 15b. Equivalently, this can again be seen in the limiting behaviour of the quantum dilogarithm in the opposite mass regime. The two resolutions we have studied thus correspond to motion onto the Coulomb branch of the theory parameterized by the real mass m. Now let us come to resolution (a), which is of a different nature. In order to understand what is happening we follow a path in the moduli space of the Joyce special Lagrangian, starting from a point which corresponds to a resolution of type (b) or (c) and ending at a point of resolution type (a). Along such a path the absolute value of the mass of the particle shrinks, as the volume of the M2-brane disc shrinks, until the field becomes massless at the singularity. As long as the field is massive it is not possible to turn on a vacuum expectation value for the scalar φ of the chiral multiplet, as this would lead to an infinite energy potential. However, when we sit at the CFT point and the field is massless, we can deform the theory onto the Higgs branch by activating an expectation value for φ. We draw the three branches of the theory schematically in figure 16. We claim that motion onto the Higgs branch corresponds to resolution (a) in the geometry. In order to see how this comes about, we flip the Seifert surface to obtain the resolutions of the dual description of the theory, as shown in figure 17. In this dual theory resolution (a) arises from choosing x_β ≫ 0, as can be seen from the

Figure 17. Resolutions of the theory dual to a free chiral field.
limiting behavior of the negative parity quantum dilogarithm in this regime. Thus in the dual channel this resolution is obtained by giving a vev to the scalar part of a vector multiplet and therefore corresponds to a point on the Coulomb branch of the dual theory. But then the D-term equation of the dual theory requires that x_α be set to zero, due to the Chern-Simons coupling of the two fields. Translating back to the original theory, we indeed see that m = x_α = 0, so that we have a propagating massless field and are thus capturing the correct effective description of the physics on the Higgs branch. For completeness we note that the dual theory is on the Higgs branch for resolution (b) and on the Coulomb branch for resolution (c). This can easily be seen by noting the limiting behavior of the negative parity quantum dilogarithm for x_β ≪ 0. The fact that resolutions of singular tangles capture motion onto the moduli space of the corresponding conformal field theories is a general feature of our constructions which will be pursued in more detail in section 4.2.
Superpotentials from geometry
There is one more ingredient in defining a three-dimensional theory with N = 2 supersymmetry that we have yet to address: the superpotential. In this section we fill this gap. As explained below the existence of a superpotential can be described in terms of the intrinsic geometry of our three-manifolds. However, a precise form of the superpotential as an explicit expression involving fields depends on a choice of Seifert surface used to construct a Lagrangian description.
In our context, the existence of interaction described by a superpotential can readily be seen in terms of M2-brane instantons, as described in [10]. Here we will briefly review that discussion. Consider some collection of massless chiral fields, X i . Our M5-brane resides on a three-manifold M, which is a double cover of R 3 branched over a singular tangle L. Meanwhile, the entire construction is embedded in an ambient Calabi-Yau Q. As studied above, each of the particles X i corresponds to a singularity of the tangle L.
Given this setup, a superpotential interaction for the chiral fields X_i may arise from an instanton configuration of an M2-brane. This is a three-manifold C in Q, whose boundary ∂C is a two-cycle in M that intersects the particle singularities X_i. Consider the projection of the instanton M2 to one sheet of the double cover, ∂C_±. This must be a polygon bounded by the tangle L, with vertices given by the singularities of the X_i. A volume-minimizing configuration of this three-cycle will correspond to an interaction generated by a supersymmetric M2 instanton. This object is precisely of the correct geometric form to generate a superpotential term of the schematic form W = ∏_i X_i. To sharpen this discussion, there are several further considerations.
• The coefficient of the interaction is controlled by the instanton action, which is proportional to e −V , where V is the volume of the supersymmetric three-manifold C.
To generate a non-zero interaction, we need the three-manifold to have finite volume. Since our framework allows a non-compact manifold M with L going off to infinity, we must restrict our superpotential polygons on ∂C ± to be compact.
• The instanton action receives a contribution exp(i ∫_{∂C} B) from the boundary of the M2 ending on the M5-brane. If [∂C] = 0, that is, if the boundary of the M2 is a trivial two-cycle, then this term is irrelevant. In general, however, ∂C is a non-trivial homology class and the boundary term contributes a factor involving the dual photon, where γ is a scalar field dual to a photon. This indicates the presence of a monopole operator M_j = exp(σ + iγ) in the superpotential. So in this situation we find a superpotential W = M ∏_i X_i. Of course, more generally ∂C is some integer linear combination of homology basis elements, and so we might find multiple monopole operators in the superpotential.
• The invariance of W under all gauge symmetries apparent in the homology of the Seifert surface implies a compatibility condition on the discrete data living at the singularities bounding the associated polygonal region. To analyze the charge, we make use of the exact quantum corrected charge formula for the monopole operator (3.19), which is determined by k_αβ, the Chern-Simons level including both the integral part from the Trotter form and the fractional contribution from particle singularities.
Given the above discussion, the next step is to analyze the explicit geometry of supersymmetric M2-brane instantons and determine which possible contributions in fact occur. This problem is important, but beyond the scope of this work. For our purposes we simply take as an ansatz that every possible gauge invariant contribution to the superpotential present in the geometry as a polygon bounded by singularities in fact occurs.
With this hypothesis, to extract the superpotential in complete generality, we analyze a candidate contribution by expressing the boundary two-cycle ∂C in a basis of two-cycles dual to the homology of the Seifert surface. We include such a term in the superpotential provided it is gauge invariant, as dictated by the charge formula (3.19). The full superpotential is then a sum over all gauge invariant terms associated to all polygonal regions present in the tangle diagram of L.
Although it may seem cumbersome to explicitly calculate which polygons yield gauge invariant contributions to W, in practice there is a simple graphical rule, sufficient but not necessary, which ensures gauge invariance. It applies to the simplest class of contributions to the superpotential, namely polygons which lie entirely in the plane of a given projection of the Seifert surface. The rule is simply that the arrows on the singularities must all circulate in one direction around the gauge cycle in question. It may be easily derived from formula (3.19) together with the charge assignments of particles dictated by figure 12. Examples of this type are shown in figure 19. We encounter more general 'non-planar' superpotential terms in our analysis of examples in section 6.1.
We shall provide non-trivial evidence for the consistency of these superpotential rules by using them to reproduce known mirror symmetries in section 6.2. We leave for future work the interesting problems of deriving our prescription from first principles in M-theory and further proving that our combinatorial rules are consistent with all possible dualities to be described in section 4.
(a) Superpotential without monopole. (b) Superpotential with monopole. Figure 19. Projections of a BPS M2-brane instanton, with the singular tangle in black. The particles X_i are indicated by the location of the black arrows, the Seifert surface is shaded in blue, and the projection of the instanton is shown in green. In (a), the M2 instanton projects to a trivial 2-cycle in M, and therefore has no monopole contribution. We find W = X_1 X_2 X_3. In (b), the M2 projects to the non-trivial 2-cycle dual to the 1-cycle y shown on the Seifert surface. This contributes a monopole operator, yielding W = M_y X_1 X_2 X_3.
Physics from singular tangles: a dictionary
To conclude our discussion of singularities, we briefly summarize the algorithm for extracting an ultraviolet Lagrangian description of the physics associated to a singular tangle L.
• Pick a Seifert surface Σ. The homology H_1(Σ_c, δ; Z) specifies a basis of gauge and flavor cycles. Boundaryless cycles are dynamical gauge variables, while cycles with boundary are background flavor fields.
• Compute the Chern-Simons levels by computing the Trotter form on the homology H_1(Σ_c, δ; Z). In this procedure the singularities make fractional contributions to linking numbers. The singularities of plus type, illustrated in figures 12a and 12b, contribute +1/2. The singularities of minus type, illustrated in figures 12c and 12d, contribute −1/2.
• Assign to each singularity a chiral field X i . The field is charged under cycles on Σ passing through the singularity. The charge is +1 (-1) if the singularity is of plus type and the cycle is oriented with (against) the arrow at the singularity. The charge is −1 (+1) if the singularity is of minus type and the cycle is oriented with (against) the arrow at the singularity.
• Compute the superpotential by summing over gauge invariant contributions from closed polygonal regions in L. Each monomial entering in W contains a product of chiral fields dictated by the vertices of the polygon, and possibly various monopole operators determined by expressing the polygon in a basis of two-cycles dual to H_1(Σ_c, δ; Z). Gauge invariance of the contribution of a given polygon is determined by application of the quantum corrected charge formula for monopole operators (3.19).
The physical theory associated to L is the infrared fixed point determined by this ultraviolet Lagrangian data. Varying the choice of Seifert surface provides mirror ultraviolet Lagrangians, but does not alter the underlying infrared dynamics. In general the resulting theory is a strongly interacting system which enjoys a U(1)^F flavor symmetry. The action of Sp(2F, Z) on this conformal field theory is determined geometrically by the braid group action studied in section 2.4. The three-sphere partition function Z is an invariant of the theory which is extracted from this ultraviolet Lagrangian by generalizing the quantum-mechanical framework of section 2.1.1 and assigning to each singularity the quantum dilogarithm wavefunctions dictated by figure 12.
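To make the bookkeeping in this dictionary concrete, here is a hypothetical Python encoding; the data layout is our own, not the paper's notation. The Trotter form supplies the integral part of the level matrix, and each singularity adds a fractional outer-product contribution of its charge vector, as in the rules above.

```python
import numpy as np

def level_matrix(trotter, singularities):
    """UV Chern-Simons levels from the dictionary.

    trotter       : square array, the integral Trotter form on H_1
    singularities : list of (sign, q) with sign = +1 or -1 for
                    plus/minus type and q the charge vector of the
                    associated chiral field
    Each singularity contributes sign/2 times the outer product of
    its charge vector, the fractional part of the levels.
    """
    K = np.array(trotter, dtype=float)
    for sign, q in singularities:
        q = np.asarray(q, dtype=float)
        K += 0.5 * sign * np.outer(q, q)
    return K

# Local model of section 3.2: one cycle, vanishing Trotter form, and a
# single plus-type singularity of charge 1 -> level one-half.
print(level_matrix([[0.0]], [(+1, [1.0])]))  # -> [[0.5]]
```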
In the remainder of this paper we apply these rules to further analyze the geometric description of mirror symmetries, and explore applications of the framework.
Dualities and generalized Reidemeister moves
In the previous sections we have developed a technique for extracting conformal field theories from singular tangles. However, there is still non-trivial redundancy in our description: as a consequence of mirror symmetry, two distinct singular tangles may give rise to equivalent quantum field theories. In this section, we determine the equivalence relation implied on singular tangles by mirror symmetries, and explore their geometric content.
In searching for such relationships, one may take inspiration from the case of nonsingular tangles. In that case, the basic relations are the Reidemeister moves shown below.
[figure: the three Reidemeister moves]
These moves are local and may be applied piecewise in any larger tangle diagram. Further, these moves are a generating set for equivalences: any two tangles which are isotopic may be related to one another by a sequence of Reidemeister moves.
In the case of singular tangles, we find similar structure. Basic mirror symmetries determine relations on singular tangles which take the form of generalized Reidemeister moves. They are related to the moves presented above by replacing some crossings by singularities. Further, each of these equivalences is local, and hence they may be applied piecewise in a larger singular tangle to engineer more complicated relations. It is natural to conjecture that these generalized Reidemeister moves, together with the Torelli dualities of section 2.4 provide a complete set of quantum equivalence relations on singular tangles.
In section 4.1 we present a detailed description of the generalized Reidemeister moves as well as the associated quantum dilogarithm identities that result from applying these moves to partition functions. In section 4.2 we show how deformations away from the conformal fixed point resolve generalized Reidemeister moves into the ordinary Reidemeister moves.
Generalized Reidemeister moves
In this section we present the list of generalized Reidemeister moves. Each takes the form of a graphical identity involving two singular tangles. The precise form of these equalities depends on the discrete data living at the singularities. There are two things to note about this dependence which follow immediately from our analysis of the local model in section 3.1.
• If we flip all arrows by 180 degrees on both sides of an identity, it still holds. Indeed, such a flip is equivalent to reflecting the sign of all U(1) gauge and flavor groups. Geometrically, this is equivalent to globally changing the labeling of sheets from 1 to 2 in the double cover.
• Given any identity, if we exchange all overcrossings and undercrossings of non-singular crossings in the diagram, while at the same time exchanging all overcross vs. undercross singularities, the identity still holds. This is true because each of our diagrams is drawn in a fixed projection with oriented normal vector x̂_3. Globally reflecting x̂_3 → −x̂_3 generates the indicated transformation on diagrams, as shown for example in figure 20.
In the following, we take these two principles into account and thereby present a reduced set of generalized Reidemeister moves. Additional dualities may be generated by changing the discrete data at the singularities as above.
Rules descending from move 1
Here, we consider a singular version of the first Reidemeister move. Populating the singular tangles with a Seifert surface generates partition function identities. We will look at two such choices of Seifert surface differing by black-white duality. The first choice does not contain a gauge group whereas the second choice does and is yet another version of the Fourier transform identity.
[figure: singular version of the first Reidemeister move]

With a choice of planar Seifert surface it has the following two interpretations.
In quantum mechanics language this is equivalent to starting with a quantum dilogarithm and applying a T -transformation. This does not involve any integrals, as the quantum dilogarithm is an eigenstate of the T -operator. Hence there is also no gauge group in the 3d gauge theory interpretation. The only effect on the gauge theory is a change in the background Chern-Simons levels: they are decreased by one unit.
This represents a duality containing a U(1) gauge field on one side but no gauge field on the other. This rule is equivalent to the Fourier transform identity discussed in section 3.1, and is another singular-tangle representation of that duality. Here, the theory of one U(1) gauge field at level one-half together with a charged chiral particle is mirror to a free chiral field.
Rules descending from move 2
The second Reidemeister move can be generalized to give rise to an identity between singular tangles where neighbouring singularities cancel pairwise, such that on the other side of the identity there is no singularity at all. Therefore, we refer to these identities as the pairwise cancellation of singularities. We will also examine a partition function identity inherited from the tangle identity for one choice of Seifert surface. The relevant singular tangle identities are the following.

[figures: pairwise cancellation of singularities]

From the perspective of the 3d gauge theory these can be understood as follows. We have a closed polygonal region bounded by two singularities. As discussed in section 3.3
this gives rise to a superpotential with the two chiral fields. Thus the particles are given mass and make no contribution to the infrared physics. The dual theory then contains no particles, but depending on the UV Chern-Simons levels it can contain background Chern-Simons levels.
Picking a Seifert surface, these rules translate into the corresponding quantum dilogarithm identities.
From this perspective, the underlying identity of pairwise cancellation of singularities is equation (16) in the appendix of reference [23].
Rules descending from move 3
The most important rule arises from singularization of the third Reidemeister move. This rule is called the 3-2 move and encodes a non-trivial three-dimensional mirror symmetry. In this section we will clarify its relation to the third Reidemeister move by singularizing all crossings on one side of the identity and only two on the other side. Apart from the 3-2 move, the third Reidemeister move can be singularized by adding only one singularity on both sides. This application follows from the previously identified Fourier transform identity and hence does not represent an independent mirror symmetry. Nevertheless, the simple application is useful when moving between Seifert surfaces in the examples of sections 5 and 6. We will turn to this simple application first and then discuss the 3-2 move.
Change of branch sheet. Applying the Fourier transform identity of figure 13 locally, we obtain a generalization of the third Reidemeister move. On one side of the duality we have a theory with a chiral particle charged under a U(1) gauge field which in turn couples to two background gauge fields. The duality relates this theory to one with no gauge group, a chiral multiplet, and two flavor fields. The partition function equality is again an application of figure 13.
The 3-2 move. The relevant singular tangle identity is depicted below.
[figure: the 3-2 move]
We clearly see that this identity relates a theory with three chiral fields to the one with just two chiral fields. Such theories are known to come in mirror pairs in three dimensions [13,15,16,38]. Examining the left-hand-side we notice the presence of a closed polygonal region bounded by three singularities and hence the existence of a superpotential. To extract the physical content we choose Seifert surfaces as shown below.
The physical theories are then read off:

• left-hand-side: a theory with three chiral fields X, Y, Z, no gauge symmetry, and a cubic superpotential W = XYZ, known as the XYZ-model.
• right-hand-side: a theory with a gauged U(1) with vanishing self Chern-Simons level and two oppositely charged chiral fields Q and Q̃, known as U(1) super-QED with N_f = 1.
These theories are known to form a mirror pair [16]. At the level of partition functions this duality is the pentagon identity for quantum dilogarithms [23].
Resolutions of dualities
In this section we make the connection between generalized Reidemeister moves and ordinary Reidemeister moves precise. We show that motion onto the moduli space of the conformal field theories appearing on both sides of a generalized Reidemeister move resolves them into ordinary Reidemeister moves. To achieve this we will choose a particular Seifert surface such that all the resolutions in question are obtained as a motion onto the Coulomb branch. In general such a deformation gives masses to all chiral fields, and in the infrared they can be integrated out. Generically, this leads to a fractional shift in the Chern-Simons levels of the form [16]

$$(K_{IJ})_{\mathrm{eff}} = K_{IJ} + \frac{1}{2} \sum_{a=1}^{N_f} (q_a)_I (q_a)_J \, \mathrm{sign}(m_a) \in \mathbb{Z}, \qquad I, J = 1, \cdots, G + F, \qquad (4.1)$$

where we have noted that the effective levels are integral in order to ensure gauge invariance. These effective levels are depicted in figure 21 as applied to a single singularity, as studied in section 3.2.
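A minimal Python sketch of (4.1), assuming all real masses are nonzero; applied to the local model of section 3.2 it reproduces the effective levels of resolutions (b) and (c).

```python
import numpy as np

def effective_levels(K, charges, masses):
    """Effective Chern-Simons levels after integrating out chirals,
    following (4.1): K_eff = K + (1/2) sum_a q_a q_a^T sign(m_a)."""
    K_eff = np.array(K, dtype=float)
    for q, m in zip(charges, masses):
        q = np.asarray(q, dtype=float)
        K_eff += 0.5 * np.sign(m) * np.outer(q, q)
    return K_eff

# UV level 1/2 with one chiral of charge 1:
print(effective_levels([[0.5]], [[1.0]], [+1.0]))  # -> [[1.]], resolution (c)
print(effective_levels([[0.5]], [[1.0]], [-1.0]))  # -> [[0.]], resolution (b)
```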
In applying this logic to study resolutions of singular tangles, one must take care to remain in a supersymmetric vacuum. In other words the F -and D-term equations have to be satisfied. This will be dealt with next.
F- and D-term equations
Let us elaborate the Coulomb branch resolutions from the viewpoint of the 3d gauge theory. The singular tangle describes the CFT at the origin of the Coulomb and Higgs branches. If we discuss only resolutions which remain at the origin of the Higgs branch then the resulting resolutions correspond to different leaves of the Coulomb branch parameterized by Fayet-Iliopoulos parameters and scalar fields in vector multiplets.
In order to determine which resolutions are possible in a complicated singular tangle, we need to solve the D- and F-term equations of the relevant 3d gauge theory. The potential V for the theory is a sum of D-term and F-term contributions, V = V_D + V_F.
In a supersymmetric vacuum this potential must vanish. As both V_D and V_F are non-negative, both must vanish separately. Let us first consider the F-term potential, which reads

$$V_F = \sum_a \left| \frac{\partial W}{\partial \phi_a} \right|^2,$$

where W is the superpotential of the theory and φ_a is the scalar component of the chiral field X_a. In our geometric examples, W arises from a sum over polygons and hence each monomial in W has degree larger than one. It follows that if we remain at the origin of the Higgs branch, φ_a = 0, the F-term potential is trivially minimized. Let us next turn to the D-term potential. In the following we will drop the subscript eff from all Chern-Simons levels and assume that the IR limit has been taken. The D-term potential is a sum of squares of D-terms, where the summation is over i, j = 1, · · · , G for the gauge indices and λ = 1, · · · , F for the Fayet-Iliopoulos parameters x_λ. The associated D-term equation then reads as in (4.5). On the Coulomb branch we have φ_a = 0, which simplifies the equation considerably. Collecting the vector multiplet scalars and the Fayet-Iliopoulos parameters into a single vector v, it is possible to write equation (4.5) in the compact form

$$\sum_J K_{iJ} v_J = 0, \qquad i = 1, \cdots, G. \qquad (4.7)$$

Equation (4.7) is our desired result. It implies that, provided we are interested only in Coulomb branch deformations, we can determine which deformations are allowed by searching for null-vectors of the effective level matrix K.
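Since (4.7) reduces the search for Coulomb-branch deformations to linear algebra, a small sketch suffices; as a simplification we treat all rows of K on the same footing and use a toy level matrix with a null direction.

```python
import numpy as np
from scipy.linalg import null_space

# Allowed Coulomb-branch deformations solve K v = 0, i.e. they are
# null vectors of the effective level matrix (equation (4.7)).
K = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
print(null_space(K))  # one null direction, proportional to (1, 1)
```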
Resolution of move descending from rule 1
Here, we examine how a particular resolution on the two sides of our first generalized Reidemeister move gives back the ordinary Reidemeister move of the first kind. In order to proceed, we need to pick a particular Seifert surface which allows us to obtain the relevant resolution as motion onto the Coulomb branch. We pick the second Seifert surface, corresponding to the dilogarithm identity (4.8). Taking the appropriate limit of this identity results in z − y = 0 (4.12). As this is consistent with the limit taken, we are indeed looking at a valid resolution satisfying the equations of motion of the gauge theory. The pictorial representation is shown in figure 22. We clearly see that the resolution reproduces the ordinary first Reidemeister move, as claimed.
Resolution of moves descending from rule 2
Next, we look at resolutions of the second generalized Reidemeister move. This rule consists of two parts and we shall examine both of them. Again we have to pick a Seifert surface, which we choose to be the same as in section 4.1.2. The relevant quantum dilogarithm identity for the first subrule admits the limit (4.14). As this limit gives the right-hand side of the identity trivially, there is nothing to be checked. Therefore, this resolution does not involve any Reidemeister moves.
Let us now move to the second subrule. Taking the limit x → ∞ in the relevant quantum dilogarithm identity, the left-hand side reduces as in (4.16). The pictorial representation of this resolution is the second Reidemeister rule, as shown in figure 23.
Resolution of move descending from rule 3
Let us now come to our last and most involved case, namely the 3-2 move. In the relevant identity we consider the limit c_i ≫ 0 for i = 1, 2, 3. Setting w ≡ c_3 ensures that the effective Chern-Simons levels are integral. The D-term equation (4.7) then gives the constraint (4.22) and hence confirms that we are on the Coulomb branch. The pictorial representation of the limit discussed is the third Reidemeister move, as shown in figure 24.
R-flow
We have seen how singular tangles capture the content of a 3d conformal field theory with four supercharges, and that resolutions of such objects describe dynamics on the moduli space of the same theory. This is very similar to how Seiberg-Witten theory describes the Coulomb branch of 4d gauge theories with eight supercharges. In fact, the similarity goes even further. In the Seiberg-Witten case the multi-cover of a complex curve with punctures captures all the information about the BPS states of the four-dimensional gauge theory [2-5]. In our case a multi-cover (more specifically, a double cover) of R³ with specified boundary conditions captures the content of a three-dimensional theory. The connection between these two descriptions can be made precise by looking at a specific class of examples where the three-manifolds in question arise from flows of a Seiberg-Witten curve of a 4d theory. By this we mean that there exists a slicing of the three-manifold along a time direction such that each slice represents a SW-curve. It turns out that such a flow indeed exists and is known as R-flow [10,39]. This section is devoted to the definition and properties of R-flow. It is defined on the space of central charges of certain 4d N = 2 theories and describes a domain wall solution which has the interpretation of a 3d N = 2 theory [40-42].

Figure 25. R-flow for an example with three central charges.
Definition of the flow
R-flow is a motion in the space of central charges of four-dimensional theories with eight supercharges. In theories which are known to be complete [43], deformations in the space of central charges are locally equivalent to deformations of branch points of the Seiberg-Witten curve. We define the flow to be of the form

$$\frac{dZ_i}{dt} = i \, \mathrm{Re}(Z_i), \qquad (5.1)$$

where Z_i is the central charge of the i-th charge in the 4d N = 2 theory. This tells us that the central charges flow along straight lines, preserving their real parts, while their imaginary parts move at a rate which is proportional to their real parts. As a consequence of this flow equation, the phase ordering of central charges is preserved and hence the entire evolution takes place in a fixed BPS chamber. In summary, we can say that phase ordering is time ordering, and we depict this in the graph shown in figure 25. This describes a three-dimensional theory as a domain-wall solution of the four-dimensional parent theory, where each 4d BPS state gives rise to a 3d BPS state whose mass is given by the real part of Z_i.
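Since the real parts are constant along the flow, the reconstructed equation (5.1) integrates in closed form. The short sketch below is our own illustration, with made-up initial central charges; it checks that the phase ordering is indeed preserved along the flow.

```python
import numpy as np

def r_flow(z0, t):
    """Solve dZ/dt = i Re(Z) with Z(0) = z0: since Re(Z) is constant,
    Z(t) = Re(z0) + i*(Im(z0) + t*Re(z0))."""
    z0 = np.asarray(z0, dtype=complex)
    return z0.real + 1j * (z0.imag + t * z0.real)

# Three central charges; their phase ordering stays fixed along the flow.
z0 = np.array([3.0 - 2.0j, 2.0 + 1.0j, 1.0 + 2.0j])
for t in (0.0, 1.0, 5.0):
    print(t, np.argsort(np.angle(r_flow(z0, t))))  # same order each time
```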
A_n flow and the KS-operator
In this paper we are in particular interested in flows of 4d gauge theories which arise from wrapping an M5-brane on a Riemann surface of type A_n, describing Argyres-Douglas CFTs [5,44]. These are Riemann surfaces which are double covers of the C-plane, of the form

$$y^2 = \prod_{i=1}^{n+1} (x - a_i), \qquad a_{n+1} = -\sum_{i=1}^{n} a_i.$$

The Seiberg-Witten differential is given by the square root of the quadratic differential φ, i.e. λ_SW = √φ. Having established the above definitions, it is straightforward to write down the central charges of the theory as periods,

$$Z_i = \oint_{\gamma_i} \lambda_{SW}. \qquad (5.4)$$

Now, choosing a specific ordering of the phases of the central charges, one arrives in a particular chamber of the moduli space where a specific number of BPS particles is stable. For the choice

$$\arg Z_1 < \arg Z_2 < \cdots < \arg Z_n, \qquad (5.5)$$

we obtain the so-called minimal chamber with exactly n stable particles. On the other hand, the maximal chamber is defined by the configuration

$$\arg Z_n < \arg Z_{n-1} < \cdots < \arg Z_1.$$
Here the number of stable BPS particles is n(n + 1)/2 [45]. There will also be intermediate chambers with fewer particles, and we shall refer to the number of states in a given chamber by N. Note that for each of these states there is a corresponding central charge, which in general is a linear combination of those given in (5.4). We next assign to each central charge ordering a Kontsevich-Soibelman operator of the form [46-48]

$$K = \prod_{i=1}^{N} E_+(\hat\gamma_i),$$

with the product ordered by the phases of the central charges, where E_+ is the non-compact quantum dilogarithm, while the γ̂_i label the stable BPS states and can be interpreted as phase space variables of the quantum Hilbert space which differ by actions of Sp(n, Z) if n is even and Sp(n − 1, Z) if n is odd. From the point of view of the A_n curve, the γ̂_i represent cycles determined by two branch points a_k and a_l. In particular, from the point of view of the quantum mechanics description of section 2.1.1, they are linear combinations of the x̂_i and p̂_i and are mapped to each other by actions of the generators.

Figure 26. For each KS-operator there is an associated singular braid B_K.
We can assign to each KS-operator a quantum mechanical matrix element of the form

$$Z_K = \langle x | K | y \rangle, \qquad (5.9)$$

which has an interpretation as a partition function of a 3d theory, as discussed in section 2.1.1. These partition functions enjoy a Sp(n, Z) × Sp(n, Z) action which has the interpretation of the braid group action on the two ends of a braid with n + 1 strands. In our case we can thus assign a singular braid B_K to the matrix element Z_K. This is depicted schematically in figure 26. As also indicated there, the braid naturally defines a time direction, which we can understand as follows. Each line of the braid describes the flow of a branch point of the A_n-curve along the time direction, and at the singularities these branch points come close to each other and actually touch, thereby losing their individual identities. Let us zoom into the braid B_K to see how the strands approach each other for an isolated singularity. To this end, we rewrite the partition function as a gluing of three braids according to the formalism developed in section 2.1.1,

$$Z_K = \int dx \, dy \, \langle x' | \cdots | y \rangle \langle y | E_+(\hat\gamma_{kl}) | x \rangle \langle x | \cdots | y' \rangle, \qquad (5.10)$$

where γ̂_kl represents the contribution of the 4d BPS state whose central charge is the period of λ_SW between the branch points a_k and a_l. Zooming into the braid, we then have the local representation for an isolated singularity shown in table 1. Resolving the singularity means turning the points at which the branch points touch into near misses. As we have seen, for each singularity there are exactly three ways to do this. R-flow, as a flow of branch points of the Seiberg-Witten curve, is equivalent to choosing the resolution of figure 10 (b) for all singularities. Said differently, the singular braid B_K is obtained from the flow defined by equation (5.1) in the limit in which all near misses are replaced by singularities.

Table 1. Braid realization of a local singularity. The relevant branch points come close to each other until they collide in the singularity and lose their individual identities. After that they depart again until they reach their original positions in the braid.
Let us now come to the justification of this picture. The initial condition of R-flow is determined by the chamber in which the flow starts. Furthermore, as the flow continues one stays in the initial chamber due to the phase-preserving property of the flow. As central charges cross the real axis something special happens. Recall that a 4d BPS hypermultiplet has an interpretation as a geodesic on the complex plane between branch points of the Riemann surface [49,50]. These geodesics obey the flow equation (5.12), in which θ_m, m = 1, · · · , N, is the phase of the m-th BPS state, i.e.
$$\theta_m = \arg Z_m. \qquad (5.13)$$

There are two remarks in order here. First, R-flow describes a motion on the Coulomb branch (including mass parameters) of the four-dimensional gauge theory. On the other hand, the flow equation (5.12) is a flow on the C-plane at a fixed point in the moduli space. The Seiberg-Witten curve, being a branched double cover of the C-plane, is not subject to change under the flow (5.12). Therefore, in order to relate the two motions, we have to choose a fixed angle θ_m corresponding to a line in the complex plane of central charges. Secondly, the geometry of R-flow singles out exactly such a line, namely the real axis, which defines a mirror axis for the flow. Thus we see that each time a central charge crosses the real axis there is a geodesic solution with minimal length, and at such points the pair of branch points corresponding to the BPS bound state whose central charge crosses the real axis are closest.
Examples
In this section we present some examples of R-flow. We start with the simplest case and proceed to increasing complexity. Already in the very first example, the A_1 flow, we will find that R-flow gives insight into the behavior of branch lines near local singularities.
As a first example we consider the simplest case of R-flow. This is the theory corresponding to the curve

$$y^2 = x^2 + \epsilon, \qquad (5.14)$$

with a single central charge, denoted by Z_1, given by the period of λ_SW between the two branch points (5.15). We will find that this theory has significant importance for the resolution of arbitrary singular tangles, as it predicts the possible local resolutions of an isolated singularity by turning on different values of Fayet-Iliopoulos parameters. Let us describe how this comes about. First of all, note that we can parametrize

$$\epsilon = -\frac{2}{\pi}(-im + t), \qquad (5.16)$$

with m real and positive, so that (5.16) obeys the flow equation (5.1). The motion of the branch points of the curve is then given by the law

$$x_\pm = \pm \alpha \sqrt{t - im}, \qquad (5.17)$$

where α is a proportionality constant. We can now view this motion from two perspectives. The first is as a motion on the C-plane which forms the base of the double cover. The second perspective is obtained by looking at the motion of the two branch points as giving rise to branch lines in C × R, where R is the time direction parametrized by t. As the square root behaviour of (5.17) is fairly simple, we can depict the two perspectives easily, as shown in figure 27. We see that this exactly mirrors two of the three possible resolutions described in section 3.1, namely resolutions (b) and (c) of figure 10. Note that resolution (a) cannot be obtained in this formalism, as it breaks time-flow; equivalently, it keeps the mass parameter m at zero but deforms the theory onto the Higgs branch.
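The square-root law (5.17) is easy to visualize numerically; in the sketch below the normalization α = √(2/π) and the sample masses are our own choices.

```python
import numpy as np

alpha = np.sqrt(2.0 / np.pi)  # assumed normalization

def branch_points(t, m):
    """Branch points x_pm = ±alpha*sqrt(t - i m) of the A_1 flow."""
    root = np.sqrt(t - 1j * m + 0j)   # +0j forces the complex branch
    return alpha * root, -alpha * root

# m > 0 gives a near miss at t = 0; m = 0 makes the points collide.
for m in (0.5, 0.0):
    x_plus, x_minus = branch_points(0.0, m)
    print(f"m = {m}: separation at t = 0 is {abs(x_plus - x_minus):.3f}")
```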
We now turn to our next example, the A_2 curve. It is, apart from the A_1 case, the most important flow example, as it provides insight into three-dimensional mirror symmetry in terms of flows of four-dimensional theories. In order to illustrate this we consider the two central charge orderings of this theory, which provide two BPS chambers with different particle content. More precisely, we have a two-particle chamber,

$$\arg Z_1 < \arg Z_2 < 0, \qquad (5.18)$$

and a three-particle chamber,

$$\arg Z_2 < \arg Z_1 < 0, \qquad (5.19)$$

where the third state is the one with charge Z_1 + Z_2. Looking at the Kontsevich-Soibelman operator, we see that in the first case it is given by (5.20), while in the second case one has (5.21). The crucial point here is that these two operators are actually equal if we impose the commutator

$$[\hat x, \hat p] = \frac{i}{2\pi}, \qquad (5.22)$$

as was first proven in [23]. This is the underlying equality leading to the 3-2 move discussed in section 4.1.3. Therefore, the 3-2 move can actually be thought of as arising from R-flow of the A_2 curve. However, note that the 3-2 move is obtained by looking at matrix elements ⟨x|K|p⟩, that is, position/momentum matrix elements, whereas R-flow is equivalent to matrix elements of the form ⟨x|K|y⟩, namely position/position matrix elements. Furthermore, there are many braid realizations of these matrix elements, differing by the various other dualities discussed in section 4.1. In this section we will look at representations which are obtained from the prescription described in table 1. That is, we will now look at the above KS-operators and their braid realizations from the perspective of branch-point flow.
Let us start with the minimal particle chamber. Using the identity (5.23) we obtain the representation (5.24). In this way we have rewritten the partition function in terms of the σ_i, which describe actions of the braid group. The braid representation of the right-hand side of the identity is shown in figure 29 (we have suppressed the R-charges of the singularities, as these are not relevant for the present discussion). The single integration variable in (5.24) corresponds to a U(1) gauge group, manifest as a compact white region in figure 29. Furthermore, we have used that σ_1 and σ_1^{−1} commute with E_+(x̂) and therefore cancel each other. Note that the theory described by the braid of figure 29 is related to U(1) SQED by changing the branch sheet, as discussed in section 4.1.3 (one also needs to apply an S-transformation to the boundary condition in order to switch from a position boundary to a momentum boundary). We will not discuss this here and rather turn our attention to a particular resolution of the singular braid. Applying resolution rule (b) of figure 10 to all singularities, we obtain figure 31. It is also possible to explicitly solve equation (5.1) and compute the flow of branch points in the minimal chamber. The result is shown in the second part of figure 31. We see that the resolved braid and the flow of branch points are topologically equivalent and differ just by a change of projection plane. That is, the location of particles is represented in both pictures by cusps at which the same strands come closest.
Next, we turn to the maximal chamber. Here we need in addition the identity (5.25), which allows us to rewrite the partition function as in (5.26). We depict the corresponding braid representation in figure 30.

As our final example we consider the A_3 curve. The KS-operator corresponding to the minimal particle chamber is given by (5.28).
A partition function can be formed from this operator by considering the wave-function

$$Z_K = \langle x | E_+(\hat x + c) E_+(\hat p) E_+(\hat x) | y \rangle. \qquad (5.29)$$

(We have chosen here a different commutator between x̂ and p̂ compared to the A_2 case; this is merely a convention, and we could also have worked with the former commutator.) This partition function now represents a singular braid. In order to extract the braid, we have to rewrite it as a gluing of simple partition functions containing no gauge groups. This is done by using the identity

$$E_+(\hat p) = e^{-i\pi \hat x^2} e^{-i\pi \hat p^2} e^{-i\pi \hat x^2} E_+(\hat x) e^{i\pi \hat x^2} e^{i\pi \hat p^2} e^{i\pi \hat x^2}, \qquad (5.30)$$

which allows us to rewrite Z_K in the form

$$Z_K = \int dx' \, \langle x | E_+(\hat x + c) e^{-i\pi \hat x^2} e^{-i\pi \hat p^2} e^{-i\pi \hat x^2} | x' \rangle \langle x' | E_+(\hat x) e^{i\pi \hat x^2} e^{i\pi \hat p^2} e^{i\pi \hat x^2} E_+(y) | y \rangle. \qquad (5.31)$$

This partition function can be represented by the singularized braid shown in figure 33. We see again a U(1) gauge group corresponding to the one compact white region. Furthermore, one chiral field is charged under this gauge group while the two other chiral fields are gauge neutral. Applying duality rules we can transform this picture into different ones with more or fewer gauge groups. Applying resolution rule (b) of figure 10 to all singularities, we obtain figure 34. This resolved braid can again be reproduced by letting the central charges of the A_3 curve evolve under R-flow as depicted in figure 25. One can carry out the flow procedure by inverting the central charges as functions of the branch points locally along the flow. The resulting flow of branch points for the minimal chamber is depicted in figure 35. Comparing figure 34 with figure 35, we find that the two are topologically identical, in the sense that the strands which come closest at the location of particles are the same in both pictures: first γ_3 contracts, then γ_2, and at last γ_1. They merely differ by a change of the projection plane.

We find that this behavior generalizes. That is, associated to the KS-operator corresponding to the A_n theory in a particular chamber, there exists a resolution which arises as R-flow of the branch points. The prescription for finding the resolution corresponding to R-flow is as follows. Start with the partition function

$$Z_{A_n} = \langle x | K(q) | x \rangle. \qquad (5.32)$$

Associate to this matrix element the particular braid representation which contains all particles as black dots within the Seifert surface, where by "within" we mean that the Seifert surface goes horizontally through the dot, as depicted in table 1. Then apply resolution rule (b) of figure 10. Note that it is not possible to obtain other resolutions for singular braids such as the one of figure 33 from R-flow. The reason is that a local flip of the corresponding central charge, as described in the case of A_1, changes the KS-operator and will thus lead to a completely different picture.
Applications
In this section we study some further applications of the developed rules. As a first example we examine a more complicated geometry arising from the R-flow prescription. The particular geometry contains a closed non-planar polygon, i.e. a superpotential, which is only partly shaded and thus gives rise to a monopole operator. We will establish that this monopole operator appears in the superpotential. As a second example for the application of the methods developed in this paper we will look at U(1) SQED with N f > 1. This example does not arise from R-flow. However, we will find that the rules presented in section 4.1 are powerful enough to establish mirror symmetry even for these more complicated models geometrically.
Superpotentials from R-flow
In this section we look at an example of a 3d gauge theory which arises from R-flow of an intermediate chamber of the A_4 theory. This example was already analyzed to some extent in [10]. The relevant KS-operator gives rise to the 3d partition function

$$Z_K = \langle x | E_+(\hat x_1) E_+(\hat x_2) E_+(\hat p_1 + \hat x_2) E_+(\hat x_2) E_+(\hat p_2) | x \rangle. \qquad (6.3)$$

Its representation in terms of a singular braid is depicted in figure 36. We can clearly see four U(1) gauge groups, represented by the four white regions in the braid. Applying the Fourier transform identity twice and the T-transform rule of section 4.1, we obtain the simpler braid depicted in figure 37. This braid represents a dual description of the same quantum field theory. In this description, there is a U(1) gauge group under which two chiral multiplets, denoted by X_3 and X_2, are charged oppositely. Furthermore, one can clearly see a compact polygonal region bounded by three chiral singularities. This corresponds to a superpotential in the effective 3d gauge theory to which all three chiral multiplets contribute. This theory contains a monopole operator which also participates in the superpotential term. One way to see this is through the white region contained within the bounded polygonal region. One can check, using the formula (3.19) for the charge of the monopole operator discussed in section 3.3, that the monopole operator M is invariant under the U(1) gauge group. This immediately tells us that we can write down a gauge invariant superpotential of the form

$$W = M X_2 X_3 X_4. \qquad (6.4)$$

Furthermore, this superpotential breaks exactly one U(1) flavor symmetry, which is consistent, as there are five chiral fields but only four non-compact white regions in the geometry.
U(1) SQED with N_f > 1

Here, we will demonstrate that our rules for singular tangles provide a convenient geometric way of encoding general mirror symmetries of 3d N = 2 gauge theories. The example we will use to demonstrate this is the generalization of the U(1) SQED/XYZ mirror symmetry. Start with a 3d N = 2 gauge theory with U(1) gauge group and N_f > 1 charged hypermultiplets. This theory has an RG fixed point with a mirror dual description as a (U(1)^{N_f})/U(1) gauge theory with N_f charged hypermultiplets (consisting of chiral multiplets q_i and q̃_i) and N_f neutral chiral multiplets S_i, together with a superpotential [16]

$$W = \sum_{i=1}^{N_f} S_i q_i \tilde q_i,$$

with the charge assignments given in [16]. The aim will now be to translate both theories into geometric tangles and transform them into each other by using ordinary as well as singularized Reidemeister moves, thereby proving that they are mirror pairs.

U(1) SQED with N_f = 2

We will start with the geometry corresponding to U(1) SQED and specialize to the case N_f = 2. The relevant diagram describing this gauge theory is depicted in figure 38. The interior white region represents the U(1) gauge group and each pair of singularities corresponds to a hypermultiplet whose constituents have opposite charges under the U(1). Let us next apply the second Reidemeister move to this diagram. The result is depicted in figure 39. Here we see that there are two extra U(1)'s, and that two singularities are charged under the first one whereas the second pair is charged under the second. We are now in a position to apply the generalized Reidemeister move known as the 3-2 move. This move can be applied twice, once to the upper white triangle and once to the lower white triangle, resulting in figure 40. This diagram shows a U(1) gauge theory with two chiral fields charged positively under it and two fields charged negatively. Moreover, we observe two superpotential terms, each combining a neutral field with two oppositely charged fields. These data exactly match those of the mirror dual, which confirms the duality.
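As a sanity check of the gauge invariance of each term S_i q_i q̃_i, one can tabulate charges and sum them. The charge vectors below are an illustrative assumption for N_f = 2, not the charge table of [16].

```python
import numpy as np

# Columns: the two U(1) factors of the (U(1)^2)/U(1) mirror (illustrative).
charges = {
    "S1": np.array([0, 0]), "q1": np.array([+1, 0]), "qbar1": np.array([-1, 0]),
    "S2": np.array([0, 0]), "q2": np.array([0, +1]), "qbar2": np.array([0, -1]),
}

for i in ("1", "2"):
    total = charges["S" + i] + charges["q" + i] + charges["qbar" + i]
    print(f"term S{i} q{i} qbar{i}: gauge charge {total}")  # -> [0 0]
```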
U(1) SQED with N_f = 3
As a second and last example we consider the more complicated case of U(1) SQED with N_f = 3. In the relevant diagram we can see six chiral multiplets charged under a U(1) gauge group, with the charges of the particles adding up to zero pairwise. The overcross and undercross singularities are arranged such that the net self-Chern-Simons level of the U(1) is zero. We can add a T-transform to turn one type of singularity into another, as shown in figure 42. Next, we perform a second Reidemeister move to create a white region.
Performing the 3-2 move we end up with a superpotential and an extra U(1), shown in figure 44.
We now perform the Reidemeister move a second time to create a third white region with two charged fields.
Application of the 3-2 move for a second time leads to the second superpotential term. As should by now be obvious, we again perform the Reidemeister move, with the result shown in figure 47.
The last step is again a 3-2 move leading to the final result depicted in figure 48.
As one can clearly see, the above picture is the diagram describing the mirror dual of our original theory. We have three superpotentials, each containing one neutral field, and we have three U(1)'s under each of which two chiral fields are charged. Note that the white region in the interior, under which no particle is charged, ensures that the overall sum of the U(1)'s decouples, as required by the (U(1)^{N_f})/U(1) structure of the mirror theory.
"Mathematics"
] |
A Flow in a Thin Plastic Layer: Generalizations of the Prandtl’s Problem
The analytical solutions of various generalizations of the classical L. Prandtl's problem are of great interest in studying the problem of straightening plates using uniaxial stretching beyond the elastic limit. Straightening by stretching allows obtaining a high degree of flatness of thin wide strips and sheets of high-strength steels and special alloys, while straightening by other methods does not provide satisfactory results. Based on Ilyushin's theory of flow in a thin plastic layer, generalizations of the classical Prandtl's problem on plastic strip compression were studied and their solutions were obtained. The planar problem of compression of a plastic strip between two parallel rough planes, accounting for the asymmetry of the conditions on the spreading ends, has been solved, and the upper estimate of the total compression force of the face-end areas of the plastically stretched strip has been obtained.
Introduction
The classical problem of compression of a plastic strip between rough planes of solids [1-3] keeps drawing the attention of the research community. This problem is closely related to a number of physical processes and phenomena, including the phenomenon of slippage along the contact surface, the dominance of the spherical segment of the stress tensor over the deviator components in the plastic strip, the commensurability of normal elastic displacements of contacting bodies with the thickness of the strip [3], the effect of "cold welding", and others. On the other hand, this problem is a key to a better understanding of the mechanism of contact interaction between solid and plastically deformable bodies. L. Prandtl first constructed a limiting stress field, which Nadai supplemented with the corresponding flow velocities.
In refs. [1,2] the planar problem of compression of a preheated strip by means of cooler external bodies has been investigated. As a result of intensive heat exchange, contact layers of solidification are formed. In ref. [4], the solution of a planar problem of the spreading of a plastic layer with properties inhomogeneous across its thickness has been derived in an isothermal approximation. In ref. [5] the planar problem of plastic compression of a three-layer strip with piecewise homogeneous and symmetric thickness properties has been solved. In ref. [6] the solution of Prandtl's problem for a plastic layer with a weakly inhomogeneous yield strength has been presented.
Based on the analysis of the Prandtl-Nadai solution, A. A. Ilyushin proposed a hypothesis, using which he developed a theory of flow in a thin plastic layer [1]. This theory is based on an approximate, two-dimensional mathematical model [1] of the spreading process of plastic layers, averaged over the thickness of the current layer and described by nonlinear partial differential equations of the second order. In more recent studies [7,8], this theory was generalized to include the case of the flow of a plastic layer on surfaces with anisotropy of properties with respect to the forces of contact friction. In the present work, we develop a newer formulation of the aforementioned models and obtain new results regarding the phenomena being studied.
The plane problem of compression of a plastic strip and its solution
Below, on the basis of the averaged (over the thickness of the current layer) theory of flow in a thin plastic layer [1], we present the solutions to various generalizations of Prandtl's problem of free spreading of a strip enclosed between two parallel planes that converge along the normal and belong to external bodies. The differential equations of the problem, the quasistatic equilibrium and incompressibility equations (1), have the form given in [1], where h = h(t) is the known law of change of the thickness of the plastic layer, the degree of deformation is defined following A. A. Ilyushin, and u = u(x, t) and p = p(x, t) are the rate of flow along the layer and the contact pressure in the strip, respectively. Boundary conditions (2) are imposed at the free ends. As is known [1], there is an unknown ramification line of the flow. Integrating the equation of quasistatic equilibrium (1) with consideration of the boundary conditions (2), we determine the contact pressure in the layer (Fig. 1). Integrating the incompressibility equation (1) with consideration of the condition on the ramification line, we find the flow velocity; the flow ramification line is determined from the condition of continuity of the contact pressure. From these, the total force required for the plastic compression of the strip (the area marked in Fig. 1) is obtained.
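For orientation, the structure of this solution can be sketched numerically for the classical case. The sketch below assumes full sticking friction (τ = τ_s, so dp/dx = −2τ_s/h away from the center), which is the textbook Prandtl setting and not the generalized equations of [1]; all material and geometric values are illustrative.

```python
import numpy as np

tau_s = 100.0e6   # shear yield stress, Pa (illustrative)
h = 2.0e-3        # current strip thickness, m
l = 50.0e-3       # half-length of the contact zone, m

x = np.linspace(-l, l, 1001)
p_edge = 2.0 * tau_s                                # pressure at the free ends
p = p_edge + (2.0 * tau_s / h) * (l - np.abs(x))    # linear "friction hill"

F = np.trapz(p, x)   # total compression force per unit strip width
print(f"peak pressure {p.max()/1e6:.0f} MPa, force {F/1e6:.1f} MN per metre width")
```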
Results. Discussion
Below we consider the various generalizations of the problem of the free spreading of a plastic layer.
1) The strip spreads freely in both directions, so that the contact area of the tool with the plastic strip, expanding in both directions, forms a segment with movable ends (Fig. 2). In this case the freely flowing ends of the strip are found from the solution of a Cauchy problem for a system of linear homogeneous differential equations.

2) The plastic strip settles and occupies the contact area with a fixed left end, so that it can flow freely in both directions, and represents a segment with one movable end (Fig. 3). At the same time we make the assumption that plastic material can flow out from the left end of the contact area. The value of the total force required for compressing the plastic strip in this case certainly depends on the conditions at the left end.

4) A tensile force is applied at the left (fixed) end of the contact area.
Conclusion
The analytical solutions of various generalizations of the classical L. Prandtl problem are of considerable interest in studying the problem of straightening plates using uniaxial stretching beyond the elastic limit. As shown here, straightening by stretching allows one to obtain a high degree of flatness of thin wide strips and sheets of high-strength steels and special alloys, while straightening by other methods does not provide such good results.
The above-mentioned solutions of the L. Prandtl planar problem can be extended to spatial contact problems of flow in a thin plastic layer between the converging surfaces of solid (elastically deformable) external bodies.
This work was carried out using equipment provided by the Center of Collective Use of MSUT "STANKIN".
Evaluation of Repellent Effectiveness of Polyvinyl Alcohol/Eucalyptus globules Nanofibrous Membranes against Forcipomyia taiwana
This study aims to develop nanofibrous membranes in which Eucalyptus globules oil (EGO) is wrapped in polyvinyl alcohol (PVA). The EGO-based nanofibrous membranes are then evaluated for protection against Forcipomyia taiwana (F. taiwana). In the first stage, PVA solutions are formulated with different concentrations and are measured for viscosity and electrical conductivity. In the next stage, the PVA solution and EGO are blended at different ratios and electrospun into PVA/EGO nanofibrous membranes (i.e., the EGO-based repellent). In this study, a PVA concentration of 14 wt% has a positive influence on fiber formation. Furthermore, the finest nanofibers, of 291 nm, are obtained when the voltage is 15 kV. The repellent efficacy can reach 80% in a 60-min release when the repellent is composed of a PVA/oil ratio of 90/10. To sum up, the nanofibrous membranes of essential oil exhibit good repellent efficacy against F. taiwana and a significant slow-release effect, without adversely affecting cell viability.
Introduction
Mosquito repellents on the market are required to provide long-lasting, effective relief, and they are thus commonly composed of N,N-diethyl-meta-toluamide (DEET), the most effective substance. Although DEET is used within the dosage limit, it may still infiltrate the human body and get into the blood as a result of frequent or extensive use of DEET-based repellents. The residual DEET [...]

MTT, dimethyl sulfoxide (DMSO), and phosphate buffer solution (PBS) were purchased from Quantum Biotechnology, Taichung, Taiwan. Mouse fibroblast (L929) was purchased from the Bioresource Collection and Research Center, Hsinchu, Taiwan. Cell culture solution containing 90% Dulbecco's Modified Eagle Medium (DMEM), 9% horse serum, and 1% antibiotics was purchased from Quantum Biotechnology, Taiwan.
PVA Nanofibrous Membranes
PVA powders and deionized water were added to a sealed Erlenmeyer flask and mixed for 2 h using a magnetic stirrer, and then cooled for 1 h, thereby forming 12, 14, and 16 wt% PVA solutions. Then, 25 mL of PVA solutions were filled into a #18 stainless steel syringe. The anode and cathode of the spinning voltage were connected to the syringe and the collector respectively. The voltage was between 10 and 20 kV, the flow rate of the PVA solution was 1 mL/h, and the collection distance was 10 cm. The collector was covered with a layer of aluminum foil. The morphology and diameter distribution were observed using an SEM and it was obtained that the optimal concentration of PVA solution was 14 wt%, which was used for the production of PVA/EGO nanofibrous membranes.
PVA/EGO Nanofibrous Membranes
Eucalyptus globules essential oil (EGO) and 1 wt% Tween 80 were evenly mixed for 30 min, after which a PVA solution was added and stirred for another 30 min to form PVA/EGO blends at ratios of 95/5, 90/10, and 85/15. The blends were electrospun into PVA/EGO nanofibrous membranes at a voltage of 15 kV.
Mechanical Properties
The tests were conducted based on the test standard referred to in a previous study [19]. Nanofibers produced with the different manufacturing parameters were collected on a collector plate for a specified length of time. After being removed from the collectors, samples were tested for tensile properties at a tensile rate of 10 mm/min using a universal testing machine (HT-2402, Hung Ta Instrument, Taichung, Taiwan). The gauge distance was 10 mm and the sample size was 8 cm × 1 cm. Three samples of each specification were tested and the average was recorded.
Infrared Moisture Determination Balance
This test was conducted at a temperature of 50 °C for 99 min, after which the release capacity of the essential oil was computed using the following equation:

Release capacity (%) = ((W_m − W_t) / W_m) × 100%

where W_m is the weight of the nanofibrous membrane before the test and W_t is the weight of the nanofibrous membrane after the test.
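For concreteness, the release computation can be written as a one-line function. The (W_m − W_t)/W_m form follows the weight definitions above (the equation itself was lost in extraction and is reconstructed here), and the example weights are hypothetical.

```python
def release_capacity(w_before_mg: float, w_after_mg: float) -> float:
    """Essential-oil release capacity in percent, from membrane weights
    measured before (W_m) and after (W_t) the 99-min test at 50 degrees C.
    The (W_m - W_t)/W_m form is a reconstruction of the missing equation."""
    return (w_before_mg - w_after_mg) / w_before_mg * 100.0

# Hypothetical example: a 120 mg membrane that weighs 104 mg after the test.
print(f"release capacity: {release_capacity(120.0, 104.0):.1f}%")  # 13.3%
```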
Scanning Electron Microscopy (SEM)
Electrospun nanofibrous membranes were observed and photographed using the SEM (Phenom Pure Desktop SEM, Thermo Fisher Scientific, Waltham, MA, USA). Based on the SEM images, 100 nanofibers were observed using Image-Pro Plus version 6.2 (Media Cybernetics, Rockville, MD, USA). The mean of the nanofiber diameter was computed to plot the normal diameter distribution.
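A minimal sketch of this measurement-reduction step follows, with randomly generated stand-in diameters in place of the Image-Pro Plus measurements; the 291 nm center is taken from the abstract, while the spread is an assumed value.

```python
import numpy as np

# Sketch of the diameter-distribution step: 100 diameters measured on the
# SEM image (here random stand-ins) are reduced to a mean and a fitted
# normal curve, mirroring the workflow described above.
rng = np.random.default_rng(0)
diameters_nm = rng.normal(loc=291.0, scale=60.0, size=100)  # placeholder data

mean_d = diameters_nm.mean()
std_d = diameters_nm.std(ddof=1)

# Normal probability density used to plot the diameter distribution.
d_grid = np.linspace(diameters_nm.min(), diameters_nm.max(), 200)
pdf = np.exp(-0.5 * ((d_grid - mean_d) / std_d) ** 2) / (std_d * np.sqrt(2 * np.pi))

print(f"mean diameter {mean_d:.0f} nm, s.d. {std_d:.0f} nm")
```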
MTT Assay
MTT assay was used to measure the cell viability of the PVA/EGO nanofibrous membranes as specified in ISO 10993-5. Fibroblasts at 5 × 10³ cells/well in a 96-well culture plate were cultured in a CO₂ incubator for 24 h and processed in a sterilized laminar flow hood. The culture medium was removed using a Pasteur pipette, and the sample extract was then added to the plate for the 1- and 3-day cultures. Then, the sample extract was removed and an MTT agent was added, after which the plate was kept in the dark for 4 h. The MTT agent was removed, and 70 µL of DMSO was added to serve as the solvent for the crystals. An ELISA reader (Thermo Fisher Scientific, Waltham, MA, USA) was used to measure the absorbance (i.e., optical density, OD). The ODs were used to compute cell viability, indicating whether the materials have cytotoxicity. The control group was cultured without the sample extract. Cell viability was computed using the equation as follows.
Cell Viability = (OD of the experimental group / OD of the control group) × 100%
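As a worked illustration of this equation, the OD values below are hypothetical, chosen to reproduce a viability near the reported 93% for the 90/10 one-day culture.

```python
def cell_viability(od_experimental: float, od_control: float) -> float:
    """Cell viability in percent from ELISA optical densities (ISO 10993-5)."""
    return od_experimental / od_control * 100.0

# Hypothetical ODs for a 1-day culture of a membrane extract.
print(f"viability: {cell_viability(0.465, 0.500):.0f}%")  # 93%
```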
Repellent Timeliness Measurement
A total of 20 ± 3 female F. taiwana were placed at the insect end (Figure 1A). An EGO nanofibrous membrane was positioned at the odor end (Figure 1B) of the Y-tube olfactometer, and simultaneously a PVA fibrous membrane was placed at the control end (Figure 1C). After 3 min, the EGO membrane was removed for 12 min and then repositioned at the odor end. This procedure prevented F. taiwana from becoming olfactorily habituated after staying in an odor-saturated space. A test cycle lasted 15 min, and the number of F. taiwana was recorded for 8 cycles; the length of a full test was thus 2 h. A new batch of F. taiwana was used for each test, and 3 samples of each specification were used. The repellency rate against F. taiwana, in percent, was computed using the following equation.
Repellency rate = (1 − (the average count of F. taiwana on end (B) across the three samples / the total number of F. taiwana)) × 100%
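A small helper makes the cycle-by-cycle computation concrete; the counts below are hypothetical, not data from the study.

```python
def repellency_rate(counts_on_odor_end: list[float], total_insects: int) -> float:
    """Repellency in percent for one 15-min cycle: counts_on_odor_end holds
    the number of F. taiwana found at the odor end (B) for each of the three
    samples; total_insects is the number released (about 20)."""
    avg_on_b = sum(counts_on_odor_end) / len(counts_on_odor_end)
    return (1.0 - avg_on_b / total_insects) * 100.0

# Hypothetical counts at end (B) for the three samples in one cycle.
print(f"repellency: {repellency_rate([3, 2, 4], 20):.0f}%")  # 85%
```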
Effects of Viscosity and Electrical Conductivity of PVA Solutions on PVA Nanofibrous Membranes
The morphology of PVA nanofibers depends on the electrospinning parameters and on the PVA solution. An excessive viscosity or a low electrical conductivity has a negative influence on the morphology of the nanofibers; namely, the nanofibers have a greater diameter [20]. Table 1 shows the viscosity and electrical conductivity of the PVA solutions at 12, 14, and 16 wt%. The viscosity and electrical conductivity of the PVA solutions are in proportion to the concentration of the PVA solution. Moreover, regardless of whether the voltage is 10, 15, or 20 kV, the PVA concentration of 16 wt% yields the greatest fiber thickness, which is ascribed to its high viscosity. A concentration of 16 wt% has the highest viscosity, which bonds the nanofibers and is unfavorable to the expansion and dissociation of the jet. By contrast, a low concentration of PVA solution cannot keep the PVA jet continuous and stabilized, which results in spraying with droplets [21,22] that transform into crimped fibers on the collector. A suitable viscosity that exceeds a critical value prevents the deformation of the jet, successfully producing uniform nanofibers and, in turn, nanofibrous membranes. Figure 2a–i displays the morphology of nanofibers based on different concentrations of PVA solutions. In particular, a concentration of 14 wt% outperforms 12 wt% and 16 wt% in obtaining evenly formed nanofibers: a PVA solution at 12 wt% creates crimped nanofibers, and a PVA solution at 16 wt% causes discontinuous or bead-shaped nanofibers. A high concentration promotes reactions between polymers, making the molecular chains entangled and thus forming a jelly-like substance [22]. Moreover, such a PVA solution easily forms a jelly that clogs the needle and destabilizes the fluid, hindering the electrospinning process. The majority of previous studies employed cross-linking in order to enhance the mechanical strength of PVA nanofibrous membranes. By contrast, cross-linking is absent in this study because it may jeopardize the release of the essential oil enwrapped in the PVA/EGO nanofibrous membranes. In addition, when practically used, the PVA/EGO nanofibrous membranes are placed in a nonwoven bag, which relaxes the requirement for mechanical properties. The tensile strength is between 0.23 N and 0.3 N regardless of whether the membrane is made from a 12 wt%, 14 wt%, or 16 wt% PVA solution. The effects of enwrapping essential oil on the mechanical properties are discussed in Section 3.3.
Effect of Electrospinning Voltage on Morphology of PVA Nanofibrous Membranes
The morphology of nanofibrous membranes correlates closely with the electrospinning parameters. Increasing the collection distance allots a longer evaporation time to the solvent, but an excessive collection distance causes an uneven diameter distribution of the nanofibers. It also causes the accumulation of spinning solution over the needle, which has an adverse effect on the formation of the Taylor cone as well as on the evaporation time of the solvent. Based on the observations in our previous study, a collection distance of 10 cm and a flow rate of 1 mL/h contributed to an optimal morphology of nanofibers [23]. These parameters are thus used in this study, and only the voltage is changed to 10, 15, and 20 kV. The voltage is a crucial parameter for the electrospinning process, as it influences the shape of the droplets, the surface electrical load, the withdrawal time of the jet, and the expansion of the jet [24]. The test results show that the PVA solution can be electrospun into PVA nanofibrous membranes regardless of whether the electrospinning voltage is 10, 15, or 20 kV. When the voltage increases to 15 kV, the evaporation time of the solvent is limited, allowing the jet to expand efficiently; the nanofibers are therefore evenly formed without beads. When the voltage is 20 kV, a more powerful electric field and a shorter evaporation time make the jet erupt quickly and land on the collection board, resulting in a certain amount of fine nanofibers accompanied by some nanofibers with uneven diameters. Hence, the diameter has a greater range, as shown in Figure 3 [25], and is less ideal. The optimal voltage proves to be 15 kV, in conformity with the finding of the study by Ojha et al. [26].
Table 2 shows the physical properties and diameter distribution of the PVA/EGO nanofibrous membranes. The viscosity of the PVA/EGO blends is proportional to the content of EGO. An excessive viscosity inhibits the full expansion of the jet during the electrospinning process, and the average diameter is thus higher. The increased viscosity has a positive influence on the stability of the jet dynamics, preventing the jet from dissociating into beads; although the nanofibers are successfully formed, the average diameter is increased. The PVA/EGO ratio of 80/20 has a viscosity of 2108 cP and an electrical conductivity of 196 µS/cm; the viscosity is excessive and the electrical conductivity is low. This PVA/EGO blend solidifies in the needle of the syringe and cannot be electrospun into nanofibers, and the 80/20 ratio is thus eliminated from the experiment. Figures 4 and 5 show that with a PVA/oil ratio of 95/5, the nanofiber diameter has a normal distribution between 100 and 400 nm, which demonstrates a relatively great comparable diameter. Moreover, increasing the oil ratio thickens the nanofiber diameter, but the increase in viscosity is also beneficial for the jet to form the Taylor cone. Regardless of whether the PVA/EGO ratio is 95/5, 90/10, or 85/15, the electrical conductivity of the experimental groups is lower than that of the essential-oil-free nanofibrous membranes (i.e., the control group made of 14 wt% PVA solution as indicated in Figure 2). The electric field of the experimental groups is comparatively lower, which renders the nanofibers with a greater diameter.
The average nanofiber diameter increases from 349 nm to 893 nm as a result of increasing the EGO content from 5 to 15 wt%. Regardless of the PVA/EGO blending ratio, the tensile strength is between 0.20 N and 0.26 N, which is comparable to the tensile strength of the membranes discussed in Section 3.1. This result suggests that enwrapping essential oil does not affect the structure and tensile properties. Moreover, the nanofibrous membranes have enough tensile strength to withstand the force applied when being trimmed.
Effect of Culture Time on Cell Viability of PVA/EGO Nanofibrous Membranes
Figure 6 shows that regardless of whether the PVA/EGO ratio is 95/5, 90/10, or 85/15, the cell viability of the fibroblasts reaches the standard. For the one-day-culture group, more EGO has a negative influence on the cell viability of the fibroblasts: the cell viability is 95%, 93%, and 90% when the PVA/EGO ratio is 95/5, 90/10, and 85/15, respectively, suggesting that all of the PVA/EGO nanofibrous membranes support good cell viability. For the three-day-culture group, the cell viability is 91%, 89%, and 87% when the PVA/EGO ratio is 95/5, 90/10, and 85/15, respectively. Cell viability thus decreases by 3%–4% when the culture time is extended to three days. This result is due to the fact that the cells have grown and occupied all the space in the culture well within one day; when the culture is extended to 72 h, there is no space for newly grown cells, which leads to a comparatively lower cell viability.
Effect of PVA/EGO Ratios on Repellent Effectiveness of PVA/EGO Nanofibrous Membranes
In Figure 7, the infrared moisture determination balance test is conducted in order to verify that the essential oil is released; because release alone cannot establish repellent efficacy against F. taiwana, the Y-tube olfactometer test is conducted to complement it. With a PVA/EGO ratio of 95/5 or 85/15, the resulting nanofibrous membranes release all of the essential oil within 90 min. By contrast, the essential oil is not completely released within 99 min from the nanofibrous membranes made with a 90/10 ratio. Figure 8 compares the repellent effectiveness of the PVA/EGO nanofibrous membranes in terms of the PVA/EGO ratio and evaporation time. Comparing the repellency of the 95/5, 90/10, and 85/15 groups in the first 15 min, the 95/5 nanofibrous membranes have a lower effectiveness of 73%. Compared to the other two groups, the group made with a PVA/EGO ratio of 95/5 yields the finest diameter of 349 nm, as shown in Figure 5. The evaporation of essential oil is more efficient when the fiber diameter is large; therefore, for the 95/5 group, the evaporation of essential oil does not generate a saturated gas concentration in the first 15 min, which in turn leaves this group with a lower repellent efficacy. The repellent efficacy is 86.6% at 30 min and 85% at 45 min because the concentration reaches saturation. Beyond 60 min, the repellent efficacy is reduced from 73% to 61% as a result of a decrease in the released amount of essential oil. When the PVA/EGO ratio is 90/10, the EGO is well wrapped, and the repellent effectiveness first increases and then decreases. The nanofibrous membranes give an 85% repellency for the first 15 min and over 90% repellency for the first 30 min. Afterward, the EGO decreases with evaporation time, and the repellency thus eventually decreases; even so, the 90/10 group retains a repellent efficacy of 75% at 120 min. By contrast, the repellent effectiveness of the 85/15 nanofibrous membranes decreases with time, which is ascribed to the poor wrapping of EGO. The EGO evaporates quickly, the 85/15 nanofibrous membranes are short-lived in effectiveness, and the repellent efficacy is reduced from 88% to 46%. In light of the service life, the 90/10 nanofibrous membranes outperform the other two groups and prove to be the most effective EGO-based repellent, giving a 75% repellency lasting 120 min.
Conclusions
The mixtures composed of PVA solution and essential oil are successfully made into nanofibrous membranes that can repel F. taiwana via the electrospinning technique in this study. Raising the PVA concentration from 12 to 14 wt% helps improve the viscosity and electrical conductivity, which in turn improves the nanofiber formation. The nanofibers made with a PVA/oil ratio of 95/5 attain an average diameter of 349 nm and a bead-free morphology. In addition, the results of the cell viability measurement indicate that the nanofibrous membranes composed of PVA/oil at ratios of 95/5, 90/10, and 85/15 do not interfere with the growth of fibroblasts and have a cell viability exceeding 85%, which suggests they do not harm human cells. In the evaluations of repellent effectiveness and effective time, the membranes composed of a PVA/oil ratio of 85/15 exhibit a repellent effectiveness of 45% over two hours, whereas those composed of a PVA/oil ratio of 90/10 exhibit a repellent effectiveness of 80%. The test results indicate that the proposed PVA/oil nanofibrous membranes have high repellent efficacy against F. taiwana and a remarkable slow-release effect, which makes a contribution to the mosquito repellent market.
Author Contributions: In this study, the concepts and designs for the experiment, all required materials, as well as processing and assessment instruments were provided by C. | 6,668.2 | 2020-04-01T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
Not Just a Theory—The Utility of Mathematical Models in Evolutionary Biology
Models have made numerous contributions to evolutionary biology, but misunderstandings persist regarding their purpose. By formally testing the logic of verbal hypotheses, proof-of-concept models clarify thinking, uncover hidden assumptions, and spur new directions of study.
A Conceptual Gap: Models and Misconceptions
Recent advances in many fields of biology have been driven by a synergistic approach involving observation, experiment, and mathematical modeling (see, e.g., [1]). Evolutionary biology has long required this approach, due in part to the complexity of population-level processes and to the long time scales over which evolutionary processes occur. Indeed, the ''modern evolutionary synthesis'' of the 1930s and 40s, a pivotal moment of intellectual convergence that first reconciled Mendelian genetics and gene frequency change with natural selection, hinged on elegant mathematical work by RA Fisher, Sewall Wright, and JBS Haldane. Formal (i.e., mathematical) evolutionary theory has continued to mature; models can now describe how evolutionary change is shaped by genome-scale properties such as linkage and epistasis [2,3], complex demographic variability [4], environmental variability [5], and individual and social behavior [6,7] within and between species.
Despite their integral role in evolutionary biology, the purpose of certain types of mathematical models is often questioned [8]. Some view models as useful only insofar as they generate immediately testable quantitative predictions [9], and others see them as tools to elaborate empirically derived biological patterns but not to independently make substantial new advances [10]. Doubts about the utility of mathematical models are not limited to present-day studies of evolution; indeed, this is a topic of discussion in many fields including ecology [11,12], physics [13], and economics [14], and has been debated in evolution previously [15]. We believe that skepticism about the value of mathematical models in the field of evolution stems from a common misunderstanding regarding the goals of particular types of models. While the connection between empiricism and some forms of theory (e.g., the construction of likelihood functions for parameter inference and model choice) is straightforward, the importance of highly abstract models, which might not make immediately testable predictions, can be less evident to empiricists. The lack of a shared understanding of the purpose of these ''proof-of-concept'' models represents a roadblock for progress and hinders dialogue between scientists studying the same topics but using disparate approaches. This conceptual gap obstructs the stated goals of evolutionary biologists; a recent survey of evolutionary biologists and ecologists reveals that the community wants more interaction between theoretical and empirical research than is currently perceived to occur [16].
To promote this interaction, we clarify the role of mathematical models in evolutionary biology. First, we briefly describe how models fall along a continuum from those designed for quantitative prediction to abstract models of biological processes. Then, we highlight the unique utility of proof-of-concept models, at the far end of this continuum of abstraction, presenting several examples. We stress that the development of rigorous analytical theory with proof-of-concept models is itself a test of verbal hypotheses [11,17], and can in fact be as strong a test as an elegant experiment.

Formal evolutionary theory frames biological assumptions mathematically and relates its conclusions back to biological questions. Building such theory requires different degrees of biological abstraction depending on the specific question. Some questions are best addressed by building models to interface directly with data. For example, DNA substitution models in molecular evolution can be built to take into account the biochemistry of DNA, including variation in guanine and cytosine (GC) content [18] and the structure of the genetic code [19]. These substitution models form the basis of the likelihood functions used to infer phylogenetic relationships from sequence data. Models can also provide baseline expectations against which to compare empirical observations (e.g., coalescent genealogies under simple demographic histories [20] or levels of genetic diversity around selective sweeps [21]).
In contrast, higher degrees of abstraction are required when models are built to qualitatively, as opposed to quantitatively, describe a set of processes and their expected outcomes. Though not mathematical, verbal or pictorial models have long been used in evolutionary biology to form abstract hypotheses about processes that operate among diverse species and across vast time scales. Darwin's [22] theory of natural selection represents one such model, and many others have followed since; for example, Muller proposed that genetic recombination might evolve to prevent the buildup of deleterious mutations (''Muller's ratchet'') [23], and the ''Red Queen hypothesis'' proposes that coevolution between antagonistically interacting species can proceed without either species achieving a longterm increase in fitness [24]. A clear verbal model lays out explicitly which biological factors and processes it is (and is not) considering and follows a chain of logic from these initial assumptions to conclusions about how these factors interact to produce biological patterns.
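To make concrete how a verbal model of this kind becomes a formal one, the following minimal simulation sketches Muller's ratchet, mentioned above: in an asexual population, drift can lose the least-loaded mutation class, and without recombination it is never rebuilt. The population size, mutation rate, selection coefficient, and generation count are arbitrary illustrative choices, not estimates from any study.

```python
import numpy as np

# Minimal proof-of-concept sketch of Muller's ratchet. All parameter values
# are illustrative only.
rng = np.random.default_rng(1)
N, U, s, generations = 500, 0.2, 0.02, 2000  # pop size, mutation rate, cost

loads = np.zeros(N, dtype=int)  # deleterious mutations carried by each lineage
min_load = []
for t in range(generations):
    fitness = (1.0 - s) ** loads
    # Wright-Fisher reproduction: offspring sample parents in proportion
    # to fitness.
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())
    # New mutations each generation; no recombination to shed them.
    loads = loads[parents] + rng.poisson(U, size=N)
    min_load.append(loads.min())  # the ratchet "clicks" whenever this rises

print(f"minimum mutation load after {generations} generations: {min_load[-1]}")
```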
However, evolutionary processes and the resulting patterns are often complex, and there is much room for error and oversight in verbal chains of logic. In fact, verbal models often derive their influence by functioning as lightning rods for debate about exactly which biological factors and processes are (or should be) under consideration and how they will interact over time. At this stage, a mathematical framing of the verbal model becomes invaluable. It is this proof-of-concept modeling on which we focus below.
Proof-of-Concept Models: Testing Verbal Logic in Evolutionary Biology
Proof-of-concept models, used in many fields, test the validity of verbal chains of logic by laying out the specific assumptions mathematically. The results that follow from these assumptions emerge through the principles of mathematics, which reduces the possibility of logical errors at this step of the process. The appropriateness of the assumptions is critical, but once they are established, the mathematical analysis provides a precise mapping to their consequences.
A clear analogy exists between proof-of-concept models and other forms of hypothesis testing. In general, the hypotheses generated by verbal models must ultimately be tested as part of the scientific process (Figure 1A). Empirical research tests a hypothesis by gathering data in order to determine whether those data match predicted outcomes (Figure 1B). Proof-of-concept models function very similarly (Figure 1C): to test the validity of a verbal model, precise predictions from a mathematical analysis of the assumptions are compared against verbal predictions. This important function of mathematical modeling is commonly misunderstood, as theoreticians are often asked how they might test their proof-of-concept models empirically. The models themselves are tests of whether verbal models are sound; if their predictions do not match, the verbal model is flawed, and that form of the hypothesis is disproved.

Figure 1. Parallels between empirical experimental techniques and proof-of-concept modeling in the scientific process. This flowchart shows the steps in the scientific process, emphasizing the relationship between experimental empirical techniques and proof-of-concept modeling. Other approaches, including ones that combine empirical and mathematical techniques, are not shown. We note that some questions are best addressed by one or the other of these techniques, while others might benefit from both approaches. Proof-of-concept models, for example, are best suited to testing the logical correctness of verbal hypotheses (i.e., whether certain assumptions actually lead to certain predictions), while only empirical approaches can address hypotheses about which assumptions are most commonly met in nature. (A) A general description of the scientific process. (B) Steps in the scientific process as approached by experimental empirical techniques. In this case, statistical techniques are often used to analyze the gathered data. (C) Steps in the scientific process as approached by proof-of-concept modeling. Here, techniques such as invasion and stability analyses, stochastic simulations, and numerical analyses are employed to analyze the expected outcomes of a model. In both cases, the hypothesis can be evaluated by comparing the results of the analyses to the original predictions. doi:10.1371/journal.pbio.1002017.g001
That is not to say, however, that proof-of-concept models do not need to interact with natural systems or with empirical work; in fact, quite the contrary is true. There are vital links between theory and natural systems at the assumption stage (Box 1), and there can also be important connections at the predictions stage (Box 2); connections also occur at the discussion stage, where empirical results are synthesized into a broader conceptual framework. Additionally, theoretical models often point to promising new directions for empirical research, even if these models do not provide immediately testable predictions (see below). When empirical results run counter to theoretical expectations, theorists and empiricists have an opportunity to discover unknown or underappreciated phenomena with potentially important consequences.
Proof-of-concept models can both bring to light hidden assumptions present in verbal models and generate counterintuitive predictions. When a verbal model is converted into a mathematical one, casual or implicit assumptions must be made explicit; in doing so, any unintended assumptions are revealed. Once these hidden assumptions are altered or removed, the predicted outcomes and resulting inferences of the formal model may differ from, or even contradict, those of the verbal model (Box 3). This benefit of mathematical models has brought clarity and transparency to virtually all fields of evolutionary biology. Additionally, in spite of their abstract simplicity, proof-of-concept models, much like simple, elegant experiments, have the capacity to surprise. Even formalizations of seemingly straightforward verbal models can yield outcomes that are unanticipated using a verbal chain of logic (Box 4). Proof-of-concept models thus have the ability both to reinforce the foundations of evolutionary explanations and to advance the field by introducing new predictions.
Investigating Evolutionary Puzzles through Proof-of-Concept Modeling
Proof-of-concept models have proven to be an essential tool for investigating some of the classic and most enduring puzzles in the study of evolutionary biology, such as ''why is there sex?'' and ''how do new species originate?'' These areas of research remain highly active in part because the relevant time scales are long and the processes are intricate. They represent excellent examples of topics in which mathematical approaches allow investigators to explore the effects of biologically complex factors that are difficult or impossible to manipulate experimentally.
Box 1. A Critical Connection—Assumptions
Although the steps between assumptions and predictions in proof-of-concept models do not need to be empirically tested, empirical support is essential to ensure that key assumptions of mathematical models are biologically realistic. The process of matching assumptions to data is a two-way street; if a model demonstrates that a certain assumption is very important, it should motivate empirical work to see if it is met. Importantly, however, not all assumptions must be fully realistic for a model to inform our understanding of the natural world.
We can group assumptions into three general categories (with some overlap between them): we name these 1) critical, 2) exploratory, and 3) logistical. Critical assumptions are those that are integral to the hypothesis, analogous to the factors that an empirical scientist varies in an experiment (they would be part of the purple ''hypothesis'' box of Figure 1). These assumptions are crucial in order to properly test the verbal model; if they do not match the intent of the verbal hypothesis, then the mathematical model is not a true test of the verbal one. To illustrate this category of assumptions (and those below), consider the mathematical model by Rice [35], which tests the verbal model that ''antagonistic selection between the sexes can maintain sexual dimorphism.'' In this model, assumptions that fall into the critical category are that (i) antagonistic selection at a locus results in higher fitness for alternate alleles in each sex, and (ii) sexual dimorphism results from a polymorphism between these alleles. If critical assumptions cannot be supported by underlying data or observation, and are therefore biologically unrealistic, then the entire modeling exercise is devoid of biological meaning [36].
The second category, exploratory assumptions, may be important to vary and test, but are not at the core of the verbal hypothesis. These assumptions are analogous to factors that an empiricist wishes to control for, but that are not the primary variables. Examining the effects of these assumptions may give new insights and breadth to our understanding of a biological phenomenon. (These assumptions, and those below, might best fit in the blue ''assumptions'' box of Figure 1C.) Returning to Rice's [35] model of sexual dimorphism, two exploratory assumptions are the dominance relationship between the alleles under antagonistic selection and whether the locus is autosomal or sex linked. Analysis of the model shows that dominance does not affect the conditions for sexual dimorphism when the locus is autosomal, but it does when the locus is sex linked.
Finally, every mathematical modeling exercise requires that logistical assumptions be made. These assumptions are partly necessary for tractability. Additionally, proof-of-concept models in evolutionary biology, as in other fields, are not meant to replicate the real world; their purpose instead is to identify the effects of certain assumptions (critical and exploratory ones) by isolating them and placing them in a simplified and abstract context. A key to creating a meaningful model is to be certain that logistical assumptions made to reduce complexity do not qualitatively alter the model's results. In many cases, theoreticians know enough about the effects of an assumption to be able to make it safely. In Rice's [35] sexual dimorphism example, the logistical assumptions include random mating, infinitely large population size, and nonoverlapping generations. These are common and well-understood assumptions in many population genetic models. In other cases, the robustness of logistical assumptions must be tested in a specific model to understand their effects in that context. Because assumptions in mathematical models are explicit, potential limitations in applicability caused by the remaining assumptions can be identified; it is important that modelers acknowledge the potential effects of relaxing these assumptions to make these issues more transparent. As with the other categories of assumptions above, logistical assumptions have an analogy in empirical work; many experiments are conducted in lab environments, or under altered field conditions, with the same purpose of reducing biological complexity to pinpoint specific effects.
Much of the doubt about the applicability of models may stem from a mistrust of the effects of logistical assumptions. It is the responsibility of the theoretician to make his or her knowledge of the robustness of these assumptions transparent to the reader; it may not always be obvious which assumptions are critical versus logistical, and whether the effects of the latter are known. It is likewise the responsibility of the empirically-minded reader to approach models with the same open mind that he or she would an experiment in an artificial setting, rather than immediately dismiss them because of the presence of logistical assumptions.
Why Is There Sex?
A century after Darwin [25] published his comprehensive treatment of sexual reproduction, John Maynard Smith [26] used a simple mathematical formalization to identify a biological paradox: why is sexual reproduction ubiquitous, given that asexual organisms can reproduce at a higher rate than sexual ones by not producing males (the ''2-fold cost of sex'')? Increased genetic variation resulting from sexual reproduction is widely thought to counteract this cost, but simple proof-of-concept models quickly revealed both a flaw in this verbal logic and an unexpected outcome: sex need not increase variation, and even when it does, the increased variation need not increase fitness [27]. Subsequent theoretical work has illuminated many factors that facilitate the evolution and maintenance of sex. Otto and Nuismer [28], for example, used a population genetic model to examine the effects of antagonistic interactions between species on the evolution of sex. Such interactions were long thought to facilitate the evolution of sex [29,30]. They found, however, that these interactions only select for sex under particular circumstances that are probably relatively rare. Although these predictions might be difficult to test empirically, their implications are important for our conceptual understanding of the evolution of sex.
How Do New Species Originate?
Speciation is another research area that has benefitted from extensive proof-of-concept modeling. Even under the conditions most unfavorable to speciation (e.g., continuous contact between individuals from diverging types), one can weave plausible-sounding verbal speciation scenarios [22]. Verbal models, however, can easily underestimate the strength of biological factors that maintain species cohesion (e.g., gene flow and genetic constraints). Mathematical models have allowed scientists to explicitly outline the parameter space in which speciation can and cannot occur, highlighting many critical determinants of the speciation process that were previously unrecognized [31]. Felsenstein [32], for example, revolutionized our understanding of the difficulties of speciation with gene flow by using a proof-of-concept model to identify hitherto unconsidered genetic constraints. Speciation models in general have made it clear that the devil is in the details; there are many important biological conditions that combine to determine whether speciation is more or less likely to occur. Because speciation is exceedingly difficult to replicate experimentally, theoretical developments such as these have been particularly valuable.
Pitfalls and Promise
Although mathematical models are potentially enlightening, they share with experimental tests the danger of possible overinterpretation. Mathematical models can clearly outline the parameter space in which an evolutionary phenomenon such as speciation or the evolution of sex can occur under certain assumptions, but is this space ''big'' or ''little''? As with any scientific study, the impression that a model leaves can be misleading, either through faults in the presentation or improper citation in subsequent literature.
Box 2. The Complete Picture—Testing Predictions

The predictions of some proof-of-concept models can be evaluated empirically. These tests are not ''tests of the model''; the model is correct in that its predictions follow mathematically from its assumptions. They are, though, tests of the relevance or applicability of the model to empirical systems, and in that sense another way of testing whether the assumptions of the model are met in nature (i.e., an indirect test of the assumptions).
A well-known example of an empirical test of theoretically-derived predictions arises in local mate competition theory, which makes predictions about the sex ratio females should produce in their offspring in order to maximize fitness in structured populations, based on the intensity of local competition for mates [37]. These predictions have been assessed, for example, using experimental evolution in spider mites (Tetranychus urticae) [38]. The predictions of other evolutionary models might be best suited to comparative tests rather than tests in a single system. For example, inclusive fitness models suggest that, all else being equal, cooperation will be most likely to evolve within groups of close kin [6]. In support of this idea, comparative analyses suggest that mating with a single male (monandry), rather than polyandry, was the ancestral state for eusocial hymenoptera, meaning that this extreme form of cooperation arose within groups of full siblings [39].
In other cases, comparative data might be very difficult to collect. Theoretical models, for example, have demonstrated that speciation is greatly facilitated if isolating mechanisms that occur before and after mating are controlled by the same genes (e.g., are pleiotropic) [40]. While this condition is found in an increasing number of case studies [41], each case requires manipulative tests of selection and/or identification of specific genes, so that a rigorous comparative test of how often such pleiotropy is involved in speciation remains far in the future.
Box 3. Uncovering Hidden Assumptions
A striking example of the utility of mathematical models comes from the literature on the evolution of indiscriminate altruism (the provision of benefits to others, at a cost to oneself, without discriminating between partners who cooperate and partners who do not). Hamilton [6] proposed that indiscriminate altruism can evolve in a population if individuals are more likely to interact with kin. He also suggested that population viscosity, the limited dispersal of individuals from their birthplace, can increase the probability of interacting with kin. For a long time after Hamilton's original work, it was assumed, often without any explicit justification, that limited dispersal alone could facilitate the evolution of altruism [42]. A simple mathematical model by Taylor [43], however, showed that population viscosity alone cannot facilitate the evolution of altruism, because the benefits of close proximity to kin are exactly balanced by the costs of competition with those kin. Taylor's model revealed the importance of kin competition and clarified that additional assumptions about life history, such as age structure and the timing of dispersal relative to reproduction, are required for population viscosity to promote (or even inhibit) the evolution of altruism.
Overgeneralization from what a model actually investigates, and claims to investigate, is strikingly common in this age when time for reading is short [33], and this problem is exacerbated when the presentation is not accessible to readers with a more limited background in theoretical analysis [34]. Indeed, these problems, universal to many fields of science, introduce the greatest potential for error in the conclusions that the research community draws from evolutionary theory.
We follow this word of caution with a final positive thought: in addition to the roles of mathematical models in testing verbal logic, the ability of theory to circumvent practical obstructions of experimental tractability in order to tackle virtually any problem is a benefit that should not be underestimated. Science is a quest for knowledge, and if a problem is, at least currently, empirically intractable, it is very unsatisfactory to collectively throw up our hands and accept ignorance. Surely it is far better, in such cases, to use mathematical models to explore how evolution might have proceeded, illuminating the conditions under which certain evolutionary paths are possible.
Box 4. A Proof-of-Concept Model Finds a Flaw and Introduces a New Twist
In stalk-eyed flies, males' exaggerated eyestalks play two roles in sexual selection: they are used in male-male competition and are the object of female choice. Researchers noticed that generations of experimental selection for less exaggerated eyestalks resulted in males that fathered proportionally fewer sons than expected [44]. Both verbal intuition and preliminary evidence led the research group to propose that females preferred males with long eyestalks because this exaggerated trait resided on a Y chromosome that was resistant to an X chromosome driver with biased transmission [45]. However, a proof-ofconcept model highlighted the flawed logic of this verbal model; the mathematical model showed that females choosing to mate with males bearing a drive-resistant Y chromosome (as putatively indicated by long eyestalks) would have lower fitness than nonchoosy females, and therefore this preference would not evolve [46]. In contrast, female choice for long eyestalks could be favored if long eyestalks were genetically associated with a nondriving allele at the (Xlinked) drive locus [46], so long as the eyestalk-length and drive loci were tightly linked [47]. These proof-of-concept models provided a new direction for empirical work, leading to the collection of new evidence demonstrating that the X-driver is linked to the eyestalk-length locus by an inversion [48], with the nondriver and long eyestalk in coupling phase (i.e., on the same haplotype). | 5,268.2 | 2014-12-01T00:00:00.000 | [
"Biology",
"Mathematics"
] |
Multiregion segmentation of microcalcification in mammogram images by using Parametric Kernel Graph Cut algorithm
ABSTRACT – Breast cancer can be detected early through screening mammography. However, a potential abnormality such as microcalcification can hardly be differentiated by radiologists due to its tiny size, and it may be hidden behind dense breast tissue. Therefore, an image segmentation technique is required. This paper proposes the potential use of the Parametric Kernel Graph Cut algorithm for segmenting microcalcification. The performance of this method was measured by accuracy, sensitivity, and the Dice and Jaccard coefficients. The experiments generated satisfying results, whereby all images produced an average of 91.67% for the Dice coefficient and 84.72% for the Jaccard coefficient, while accuracy and sensitivity reached 97.84% and 96%, respectively. The Parametric Kernel Graph Cut algorithm thus proved its ability to segment microcalcification robustly and efficiently.
INTRODUCTION
Breast cancer is a highly prevalent cancer, ranking second worldwide, and has become a leading cause of mortality among women [1]. The survival rate is enormously enhanced if breast abnormalities are detected at an early stage. Among the screening modalities, mammography is widely used as the gold standard in early detection of breast cancer [2]. Mammography images make it possible to detect abnormalities in the breast such as microcalcification.
Microcalcification is a tiny deposit of calcium that has accumulated in the breast tissue; its small size tends to make the suspected region invisible in complementary views of the mammograms. Moreover, radiologists diagnose them only visually, which may lead to human and detection errors, and thus to late detection. A mammogram image containing microcalcification must therefore undergo a segmentation process in which the image is transformed into a more meaningful form for evaluation purposes.
Segmentation is the process of dividing an image into meaningful regions that correspond to the same criteria [3]. More precisely, image segmentation can be defined as the process of assigning each pixel in an image with the same features, such as color, intensity, or texture, to the same visual class. Generally, image segmentation approaches can be divided into two categories: edge-based and region-based. The edge-based approach relies on intensity discontinuities that are linked to form the boundaries of regions [4], while the region-based approach segments the image into regions having similar sets of pixels [5].
However, the drawbacks of edge-based approaches are sensitivity to noise, lack of robustness in practice, and occasional failure to extract the boundaries of the region of interest [6]. Thus, this study uses a region-based method for segmenting microcalcification. The Graph Cut algorithm is a region-based method capable of solving a wide range of computer vision problems, including image segmentation, by finding a global minimum of an energy function on the basis of graph theory [7]. With these properties, the Graph Cut algorithm is robust in practice, efficient, and able to handle both 2D and 3D problems [8].
The unsupervised Graph Cut algorithm, which does not require user interaction, needs a piecewise model or a Gaussian generalization to define the data term. Unfortunately, these models are not flexible, as different images may require different models [9]; even regions within the same image may require different models. Therefore, an improved Graph Cut method, the Parametric Kernel Graph Cut, was proposed by [10] for multiregion segmentation. Their study investigated kernel mapping to make the unsupervised Graph Cut algorithm general enough to segment various images without any assumption regarding the image model.
Hence, this paper focuses on applying the Parametric Kernel Graph Cut algorithm to microcalcification segmentation. The performance of the proposed method is evaluated using two quantitative overlap measures, the Dice and Jaccard coefficients, as well as accuracy and sensitivity computed from the percentage relative error of area between the method and the experts.
The remainder of this paper is organized as follows: Section 2 briefly presents an overview of the Parametric Kernel Graph Cut algorithm. Section 3 explains the implementation of the proposed method, which comprises four main phases. Section 4 provides the experimental results and a parameter analysis of the method. Section 5 concludes the paper.
PARAMETRIC KERNEL GRAPH CUT ALGORITHM
The Graph Cut method was originally introduced by [10]. Graph Cut uses a weighted directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges connecting neighboring vertices. There are two specially designated nodes, called terminal nodes, that represent the foreground (object) and the background: the source s and the sink b. Therefore, V = P ∪ {s, b}, where P is the set of pixel nodes. Besides that, the other components in Graph Cut are two types of links, defined as t-links and n-links. The t-links are edges between the pixels and the terminal nodes, while n-links are edges between neighboring pixels. For better illustration, Figure 1 shows a simple 2D segmentation example of the graph G for a 3 x 3 image [10].
The green line in Figure 1 illustrates an s-b cut on the graph G. Any feasible cut C on the graph G must satisfy the following:
• C severs exactly one t-link for each pixel;
• {p, q} ∈ C if p and q are linked to different terminals.
The Graph Cut theorem relates two quantities through the max-flow/min-cut theorem. Note that a cut is minimal in the sense that none of its subsets separates the terminals into the same two subgraphs. The maximum flow from the source s to the sink b is equal to the net capacity of the edges in the minimum cut. Therefore, the min-cut problem can be solved directly by solving the max-flow problem [11].
The Graph Cut algorithm achieves an optimum solution when the energy optimization is equivalent to the minimum cut. A minimum cut is a cut whose capacity is the least over all the s-b cuts of the network. Since a cut separates the two terminals, no pixel can have both of its t-links severed. For any feasible cut C there exists a unique corresponding segmentation A(C), defined as

A = (A_1, A_2, ..., A_|P|),   (1)

where A is the segmentation vector and each component A_p is labeled either "object" or "background". The segmentation functional for the energy optimization of Graph Cut can be written as

E(A) = R(A) + λ·B(A),   (2)

where R(A) is the regional term, which comprises the region properties; in other words, R(A) acts as a penalty for assigning a pixel to the background or the object. B(A) is the boundary term, which encodes the smoothness of the boundary properties and determines the discontinuity penalty between pixels. The coefficient λ is a positive factor that controls the weight of the smoothness term in the labeling. Specifically, Equation (2) is written as

E(A) = Σ_{p∈P} R_p(A_p) + λ · Σ_{{p,q}∈N} B_{p,q} · δ(A_p ≠ A_q),   (3)

where each label A_p belongs to L, the set of region labels (background or object), and N is the set of neighboring pixel pairs. Equation (3) is the segmentation functional of the original Graph Cut. For the Parametric Kernel Graph Cut algorithm (PKGC), a kernel function is substituted into the regional term R(A); the kernel trick turns a nonlinear problem into a simple linear problem, so that the image data gain better separability. The substitution of the kernel function gives

E(A) = Σ_{l∈L} Σ_{p∈P_l} J_K(I_p, μ_l) + λ · Σ_{{p,q}∈N} B_{p,q} · δ(A_p ≠ A_q),   (4)

with the kernel-induced regional term

R(A) = Σ_{l∈L} Σ_{p∈P_l} J_K(I_p, μ_l),  J_K(I_p, μ_l) = K(I_p, I_p) - 2K(I_p, μ_l) + K(μ_l, μ_l),   (5)

where μ_l is the parameter (center) of the region labeled l, so the value of each term depends on which label pixel p belongs to. For the kernel K, this paper chose the RBF kernel for clustering the data, since [12] state that the RBF kernel generates more accurate results than other kernel functions:

K(x, y) = exp(-||x - y||² / σ²).   (6)

The boundary term uses a Gaussian probability distribution multiplied by the inverse Euclidean distance:

B_{p,q} = exp(-(I_p - I_q)² / (2σ²)) · 1 / dist(p, q).   (7)

The larger B_{p,q} is, the higher the similarity between the intensities of pixels p and q.
Here dist(p, q) is the Euclidean distance between p and q, while σ is a constant whose value should be less than dist(p, q). I_p and I_q are the intensities of pixels p and q, respectively. The value returned by the boundary term lies between 0 and 1, where a higher value means greater similarity between the intensity values of pixels p and q.
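To make Equations (4)-(7) concrete, the following is a minimal NumPy sketch of the PKGC energy pieces (the paper's own implementation was in MATLAB). The function names, the default σ, and the use of raw intensities are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf_kernel(x, y, sigma=0.5):
    """RBF kernel of Eq. (6): K(x, y) = exp(-||x - y||^2 / sigma^2)."""
    return np.exp(-((x - y) ** 2) / sigma ** 2)

def regional_term(img, labels, centers, sigma=0.5):
    """Kernel-induced data term of Eq. (5): sum over regions l of
    J_K(I_p, mu_l) = K(I_p, I_p) - 2 K(I_p, mu_l) + K(mu_l, mu_l)."""
    cost = 0.0
    for l, mu in enumerate(centers):
        pix = img[labels == l]
        cost += (rbf_kernel(pix, pix, sigma) - 2 * rbf_kernel(pix, mu, sigma)
                 + rbf_kernel(mu, mu, sigma)).sum()
    return cost

def boundary_term(img, labels, sigma=0.5):
    """Smoothness term of Eqs. (3)/(4): sum of B_{p,q} (Eq. (7), with unit
    pixel distance) over 4-connected neighbors carrying different labels."""
    cost = 0.0
    for axis in (0, 1):
        a = np.moveaxis(img, axis, 0)
        la = np.moveaxis(labels, axis, 0)
        w = np.exp(-((a[:-1] - a[1:]) ** 2) / (2 * sigma ** 2))  # B_{p,q}, dist = 1
        cost += w[la[:-1] != la[1:]].sum()
    return cost

def pkgc_energy(img, labels, centers, lam=0.1, sigma=0.5):
    """Total functional of Eq. (4): data term plus lambda-weighted smoothness."""
    return regional_term(img, labels, centers, sigma) + lam * boundary_term(img, labels, sigma)
```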
For every graph, each edge e ∈ E is assigned a non-negative weight (cost) w(e), and the total weight of a cut C is the sum of the costs of its edges:

|C| = Σ_{e∈C} w(e).   (8)

The Parametric Kernel Graph Cut algorithm generates the optimal segmentation in terms of properties built into the edge weights; the segmentation result is optimal because the cost of the cut is computed from the properties of each edge. Table 1 gives the exact weights of all these edges [10]. In Table 1, the weights of both n-link and t-link edges are assigned based on each partition, and the minimum cut is sought over these weights. The boundary is then drawn based on the weights specified in Table 1. Therefore, a graph is constructed, and the optimized solution of Equation (4) is obtained via the max-flow/min-cut algorithm.
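As a complementary sketch, a binary min-cut segmentation can be computed with an off-the-shelf max-flow routine; below, networkx is used on a 4-connected pixel grid. The seed-based t-link capacities are a simplified stand-in for the exact Table 1 weights (which are not reproduced in the text), so this illustrates the max-flow/min-cut mechanics rather than the paper's exact algorithm, and it is only practical for small images.

```python
import networkx as nx
import numpy as np

def mincut_segment(img, fg_seeds, bg_seeds, lam=0.1, sigma=0.5):
    """Binary s-b cut on a 2D image; returns a boolean object mask.

    fg_seeds / bg_seeds are lists of (row, col) pixels assumed to be object
    and background; t-link capacities are squared distances to seed means."""
    h, w = img.shape
    G = nx.DiGraph()
    mu_fg = img[tuple(zip(*fg_seeds))].mean()
    mu_bg = img[tuple(zip(*bg_seeds))].mean()
    for p in np.ndindex(h, w):
        G.add_edge('s', p, capacity=(img[p] - mu_bg) ** 2)  # t-link to source
        G.add_edge(p, 'b', capacity=(img[p] - mu_fg) ** 2)  # t-link to sink
        for q in ((p[0] + 1, p[1]), (p[0], p[1] + 1)):      # n-links, 4-connectivity
            if q[0] < h and q[1] < w:
                b_pq = lam * np.exp(-((img[p] - img[q]) ** 2) / (2 * sigma ** 2))
                G.add_edge(p, q, capacity=b_pq)
                G.add_edge(q, p, capacity=b_pq)
    _, (reachable, _) = nx.minimum_cut(G, 's', 'b')
    mask = np.zeros((h, w), dtype=bool)
    for p in reachable - {'s'}:
        mask[p] = True
    return mask
```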
IMPLEMENTATION
The implementation begins with data acquisition: 25 mammogram images were provided by the National Cancer Society Malaysia (NCSM), and all the microcalcifications had already been confirmed by a radiologist. In the second phase, the image data are clustered into k regions by using K-means clustering. Since two clusters are set up, the image is segmented according to these regions in the next phase, where this paper focuses on multiregion segmentation of mammogram images by using the Parametric Kernel Graph Cut algorithm. The outcome of the segmentation then proceeds to performance evaluation. This paper focuses on accuracy and sensitivity, so the capability of the method is tested by applying two evaluation techniques: the Dice and Jaccard coefficients, and the accuracy and sensitivity based on the percentage relative error of area between the proposed method and the experts. The flow of the implementation phases is shown in Figure 2.

Phase 1: Data Acquisition

The images of the microcalcification are set to a standard size of 200 x 200 pixels in Portable Network Graphics (PNG) format. The software used for this study is MATLAB R2014a. Each mammogram image is stored as an 8-bit integer (2D) image with a value range from 0 to 255, where 0 indicates black and 255 indicates white. These values differentiate the intensity level of each pixel.
Phase 2: K-Means Clustering
In the second phase, the image data are clustered into k regions using the K-means clustering algorithm. The main objective of cluster analysis is to divide the image data set into several disjoint groups or clusters [13]. The 'K' in K-means denotes the number of clusters to be created. Clustering amounts to finding the similarity of objects such that objects are homogeneous within a group and heterogeneous between groups, where the similarity of the data is measured based on the Euclidean distance. The following steps explain the processes involved in the K-means clustering algorithm.
Step 1: Set the number of clusters to k. The data need to be clustered into k regions. In this study, the number of clusters is set to 2 (let k = 2), because the two clusters represent the background and the object, i.e., the microcalcification to be extracted.
Step 2: Initialize the centroids. Centroids, or centers, are chosen based on the number of clusters; they can either be selected randomly or obtained by sorting the data into k groups and choosing one data point from each. Since this study has 2 clusters, two points were randomly selected as cluster centroids.
Step 3: Calculate the Euclidean distance. Once the centroids have been assigned, the Euclidean distances between the centroids and all the objects are computed using the following formula:

d(x_i, c_j) = ||x_i - c_j||, for j = 1, ..., k and i = 1, ..., n,   (9)

where k is the number of clusters and n is the number of objects.
Step 4: Assign each object to the closest cluster. Since there are two clusters, each object is assigned to one of them.
Step 5: Update the cluster centroids. After all the objects have been assigned, the new centroid of each cluster is calculated as the average of all the data points in that cluster.
Step 6: Repeat steps 3, 4, and 5 until the same points are assigned to each cluster in consecutive rounds. Since two clusters have been set up, the image is segmented according to these regions, where each pixel is assigned to its cluster; the Parametric Kernel Graph Cut algorithm then imposes the smoothness constraint in the next phase. A minimal sketch of this clustering step is given below.
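The following is the sketch referenced above, implementing Steps 1-6 in NumPy on flattened pixel intensities; the random seed and iteration cap are illustrative choices.

```python
import numpy as np

def kmeans_intensity(img, k=2, max_iter=100, seed=0):
    """Steps 1-6: k clusters, random centroids, Euclidean assignment,
    centroid update, stop when assignments no longer change."""
    rng = np.random.default_rng(seed)
    data = img.reshape(-1, 1).astype(float)                     # flatten pixels
    centroids = data[rng.choice(len(data), k, replace=False)]  # Step 2
    labels = None
    for _ in range(max_iter):
        dist = np.abs(data - centroids.T)                       # Step 3: Eq. (9) in 1-D
        new_labels = dist.argmin(axis=1)                        # Step 4: closest cluster
        if labels is not None and np.array_equal(new_labels, labels):
            break                                               # Step 6: converged
        labels = new_labels
        for j in range(k):                                      # Step 5: update centroids
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean()
    return labels.reshape(img.shape), centroids.ravel()
```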
Phase 3: Segmentation using Parametric Kernel Graph Cut Algorithm
In this phase, the Parametric Kernel Graph Cut is used to segment the microcalcification in mammogram images. The steps are described as follows:

Step 1: Find the regional term. The regional term refers to the region properties of the graph; in other words, it acts as a penalty for assigning each pixel to the object or the background [14]. The regional term is computed as in Equation (5).
Step 2: Calculate the boundary term. The boundary term B(A) comprises the boundary properties of the graph. In Graph Cut, it is also known as the boundary penalty for the discontinuity between a pair of pixels (Wang et al., 2013). The boundary term is computed as in Equation (7).
Step 3: Find the energy term. The energy term refers to the energy segmentation functional of Graph Cut which can be written as in Equation (2).
Step 4: Find the minimum cut. The goal of the Parametric Kernel Graph Cut algorithm is to compute the best cut such that the cut provides the optimal segmentation. Optimal segmentation is achieved by finding the minimum-cost cut, whose cost is the sum of its edge weights as in Equation (8). The minimum cost of a cut is obtained from the three edge types listed in Table 1; the first edge type refers to the n-links, i.e., the edges between neighboring pixels. Note that any feasible cut C on the graph G must satisfy the following:
• C severs exactly one t-link for each pixel;
• {p, q} ∈ C if p and q are linked to different terminals;
• since a cut separates the object and background terminals, a minimum cut severs the n-link between pixels p and q when they are connected to opposite terminals, which is why such neighboring pixels end up in different groups.
Phase 4: Performance Evaluation of Segmentation Results
The results were then evaluated by applying two quantitative measures: the Dice and Jaccard coefficients, and the accuracy and sensitivity based on the percentage relative error of area between the method and the experts.
Dice and Jaccard Coefficient
The Dice and Jaccard coefficients are overlap measures often utilized to quantify the similarity between two sample sets [15]. In this paper, performance was determined by the overlap ratio between the segmented image generated by PKGC and the ground-truth image, i.e., the image marked by the radiologist. Both images need to be converted to binary images. The Dice coefficient is calculated using the following equation [16]:
Dice = 2·TP / (2·TP + FP + FN)
where TP is the number of true positives, i.e., pixels with value 1 in both images; FP is the number of false positives, i.e., pixels with value 1 only in the PKGC result; and FN is the number of false negatives, i.e., pixels with value 1 only in the ground-truth image. The Jaccard coefficient is then calculated using the following formula:
Jaccard = Dice / (2 - Dice)

The result ranges from 0 to 1, where 0 indicates no overlap and 1 indicates complete congruence; a higher ratio therefore depicts a better result.
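A small sketch of both overlap measures on binary masks, directly following the two formulas above:

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice = 2TP / (2TP + FP + FN); Jaccard = Dice / (2 - Dice).
    pred: binary PKGC mask; truth: binary radiologist ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    dice = 2 * tp / (2 * tp + fp + fn)
    return dice, dice / (2 - dice)
```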
Accuracy
Accuracy is defined as a measure of how close a result is to the true value. In this case, accuracy was determined by the closeness of the PKGC segmentation results to the radiologist's results. When the PKGC segmentation results were obtained, the average area of the segmented images was calculated, as was the average area of the expert findings.
Sensitivity
Sensitivity is the proportion of true positives that are correctly identified by the test [17]; higher sensitivity values indicate better results. Sensitivity is evaluated based on the recognition statistics from [18], as in Table 2. Based on Table 2, the percentage relative error obtained for each image is categorized into 5 categories, ranging from very good to poor. Images from the very good to average categories are counted as true positives (TP), and sensitivity is measured as the total number of TP images divided by the total number of images, as sketched below.
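The sketch referenced above computes the percentage relative error of area and the resulting sensitivity. The category names and error bounds below are hypothetical placeholders, since Table 2's actual ranges are not reproduced in the text.

```python
# Hypothetical stand-ins for Table 2; replace with the paper's actual ranges.
CATEGORY_BOUNDS = [(0, 5, 'very good'), (5, 10, 'good'), (10, 15, 'satisfactory'),
                   (15, 20, 'average'), (20, float('inf'), 'poor')]
TP_CATEGORIES = {'very good', 'good', 'satisfactory', 'average'}

def relative_error(area_method, area_expert):
    """Percentage relative error of area between PKGC and the expert."""
    return abs(area_method - area_expert) / area_expert * 100

def sensitivity(errors):
    """Share of images whose relative error falls in a true-positive category."""
    def category(e):
        return next(name for low, high, name in CATEGORY_BOUNDS if low <= e < high)
    tp = sum(category(e) in TP_CATEGORIES for e in errors)
    return tp / len(errors)
```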
EXPERIMENTAL RESULTS
The Parametric Kernel Graph Cut algorithm successfully segmented all 25 mammogram images. Figure 3 shows samples of the successful segmentation results.
Figure 3. Segmentation results using PKGC: (a) original images; (b) regions of the segmentation results; (c) boundaries of the segmentation results.

Figure 3 shows three samples of mammogram images that were successfully segmented using PKGC. PKGC successfully produced both the region and the boundary of each segmented image. The result partitions the image into two regions, object and background, each assigned a different color, as depicted in Figure 3(b). Figure 3(c) depicts the boundary of the region, which shows the actual shape of the microcalcification; this shape can help the radiologist analyze whether the tumor is benign or malignant. Such a result enables the radiologist to calculate the microcalcification area from the boundary circumference. Moreover, the different curvatures of the boundary play a crucial role in determining whether the microcalcification is cancerous or not.
One of the objectives of the kernel functions in PKGC is to ensure that the Graph Cut formulation is able to segment a multiregion image [10]. The third image in Figure 3 demonstrates this ability: PKGC was capable of segmenting clusters of microcalcification. The intensity gray level of the second image was quite blurred compared with the other images; since region-based segmentation is determined by the similarity, color, and texture of the image, detecting the microcalcification can be quite challenging in such cases. However, using PKGC, a precise boundary was properly segmented. The robustness of PKGC improves the flexibility of the method in segmenting various kinds of images.
Parameter Analysis of Parametric Kernel Graph Cut Algorithm
Based on Equation (2), λ is a variable that refers to the weight of the image smoothness term. It is also known as the degree of opacity of the pixel, and its value can be specified from 0 to 255, since this value represents the transparency information of the pixel, with 0 and 255 representing full transparency and full opacity, respectively. Obviously, the boundaries of the images vary in size and shape, and there is intensity inhomogeneity across pixels. Therefore, this parameter needs to be customized beforehand to achieve precise and successful image segmentation.
During the segmentation process, several values of λ were tested until the optimum segmentation was achieved. The range set for all the mammogram images was between 0 and 4.5, and most of the mammogram images in this paper were set up with 0.1 as the value of λ. The value depends on the intensity level of the image; for example, a blurred image is set up with a low value of λ, since low transparency is needed to segment this kind of image successfully, whereas a solid, clear image requires a higher λ. As Figure 4 shows, a λ that is too low or too high may lead to a wrong segmentation of the region. In this experiment, both too-low and too-high values of the parameter were set up to see the differences between the results, and none of them gave an accurate segmentation. The parameter therefore needs to be customized beforehand, based on the suitability and characteristics of the mammogram image. Figure 4(c) presents the segmentation result for which the parameters were set such that the segmentation was most accurate, with 0.5 being the optimum value of λ for that image.
Based on Equation (4), there are three parameters, σ, x, and y, which are all related to each other in enabling the function to segment the images; σ (sigma) plays an important role as an amplifier of the distance between x and y. In this study, all these parameters were set by default. If the distance between x and y is much larger than sigma, the kernel function tends to zero. Figure 5 illustrates the segmentation results obtained using different σ values. A σ that is too large or too small leads to a wrong segmentation of the region: the larger the sigma, the higher the possibility of misclassification, since a larger sigma tends to produce a much more general classifier (Figure 5(b)), while a smaller sigma tends to produce a local classifier whose decision boundary is strict and sharp, as shown in Figure 5(a). This parameter was set by default to 0.5 for all images, since that is its optimum value.

Figure 6. The results of the Dice and Jaccard coefficients.

Figure 6 illustrates the results for all mammogram images obtained from the Dice and Jaccard coefficients. All the ratios can be considered satisfying for the overlap comparison, with the images producing averages of 91.67% for the Dice coefficient and 84.72% for the Jaccard coefficient. Apart from that, the Dice coefficients were higher than the Jaccard coefficients, due to the fact that the Jaccard coefficient is numerically more sensitive to mismatch, especially when there is no reasonably strong overlap in the segmentation [19].
Performance Evaluation
The percentage relative errors of area for all 25 images are shown in Figure 7, where the comparison of the average areas is illustrated as a bar chart for better visualization. There was only a small difference between the areas obtained by PKGC and by the radiologists: the resulting accuracy was 97.84%, i.e., less than 3% of the result was inaccurate. It can be concluded that the areas obtained using PKGC are accurately segmented.
Next, in terms of sensitivity, the percentage relative error of each image was categorized into the 5 categories of Table 3, ranging from very good to poor; images from the very good to average categories were counted as true positives (TP). In Table 3, all images were categorized based on their relative errors. PKGC obtained 96% sensitivity: out of 25 images, 24 fell into the true-positive categories, while only one image was categorized as a poor case. For better illustration, the result is visualized as a pie chart in Figure 8, which shows the percentage of images in each of the five relative-error categories. The very good category obtained 64%, the highest among all the categories, while only 4% (one image) fell into the poor category. Therefore, it can be concluded that the PKGC segmentation gave an optimum result, with 96% of the images in the true-positive categories.
CONCLUSION
In a nutshell, the main purpose of this paper was to segment the microcalcification in mammogram images by using the Parametric Kernel Graph Cut algorithm. The proposed method is an extension of the Graph Cut method that uses kernel mapping in the segmentation functional instead of a piecewise model or a Gaussian generalization, since such models are not flexible enough: different images may require different models. Hence, this paper segments the mammogram images directly, without assuming any image model.
The objective of the extension is to use kernel mapping to make the unsupervised Graph Cut formulation more general, especially for multiregion segmentation. The flexibility and robustness of the Parametric Kernel Graph Cut were demonstrated: the method segmented the microcalcification in the mammogram images and achieved outstanding results in the performance evaluations. Therefore, the Parametric Kernel Graph Cut algorithm proved its ability to segment microcalcification robustly and efficiently.
Future work aims to apply this method to other types of images as well as other types of abnormality. Also, since the parameters in the proposed method were set manually, other methods will be explored so that all the parameters can be generated automatically.
"Computer Science"
] |
Cytotoxic Activity of CuO NPs Prepared by PLAL Against Liver Cancer (Hep-G2) Cell Line and HdFn Cell Lines
Abstract
Introduction
There has been an increased demand for nanoparticles, which has resulted in large-scale manufacturers employing high-energy processes and solvents. Nanoparticles exhibit unique electrical, optical, chemical, and biological capabilities [1]. Nanoparticles differ from bulk particles in terms of electrical resistance, chemical reactivity, electrical conductivity, strength and hardness, diffusivity, and biological activity, and they find use in photovoltaics, heterogeneous catalysis, gas-sensor technologies, nonlinear optics, medicine, and microelectronics [2][3][4][5][6]. For synthesizing the nanoparticles, a simple top-down method, Pulsed Laser Ablation in Liquid (PLAL), was utilized. Among the many advantages of this method are the capacity to manage the size and quality of the generated nanoparticles and the guarantee that they are contamination-free [7][8][9]. A range of ablation variables, including laser fluence, pulse width, repetition rate, wavelength, temperature, ablation duration, and the concentration of the stabilizing agent, influence nanoparticle shape, size, and morphology [10]. Researchers are particularly interested in metal oxide nanoparticles since they are used in various industrial operations and in medical and pharmaceutical applications; they can also be used in the manufacture of cosmetics, microelectronic devices, and semiconductors [11][12][13][14][15][16]. Metal oxide nanoparticles such as CuO have generated interest due to their antibacterial and biocidal properties and their potential use in a wide range of biomedical applications [17,18].
Cancer of the liver is the world's third leading cause of cancer death. Liver cancer is a major public health concern because it has such a dramatic impact on our lives. Fundamental research into the molecular mechanisms of liver cancer is required for long-term and dependable prevention and treatment methods. Cell lines are treated as in vitro equivalents of tumor tissues, making them indispensable for basic cancer research. Certified cell lines retain most of the original tumor's genetic properties and mimic its microenvironment. Hep-G2 is a well-known hepatic cell line; it is used in various scientific research applications, from oncogenesis to the cytotoxicity of substances on the liver [19]. The aim of this study is to prepare copper oxide particles in an economical and inexpensive way and to employ these particles in measuring the cytotoxicity toward normal and cancerous cells.
CuO Nanoparticles Synthesis
CuO nanoparticles were created in deionized water using a pulsed Q-switched Nd:YAG nanosecond laser with the following parameters: energy = 400 mJ, frequency f = 4, 6, and 8 Hz, wavelength λ = 1064 nm, and number of pulses = 100 shots. The ablation was achieved by placing a (1 x 1) cm copper plate of 1 mm thickness at the bottom of a quartz container filled with 3 ml of deionized water, as illustrated in Fig. 1.
Nanoparticles Sample Preparation
The drop method was used to prepare thin films: the solution containing the copper nanoparticles was deposited on a glass slide at room temperature; this was done several times, and the slide was left to dry completely, forming a thin layer on its surface.
Atomic Absorption Spectroscopy (AAS) measures the amount of UV/visible light energy absorbed by an element. The wavelength of light absorbed corresponds to the energy needed to excite electrons from the ground state to a higher energy level, and the amount of energy absorbed in this excitation process is proportional to the concentration of the element in the sample, as shown in Table 1. Field Emission Scanning Electron Microscopy (FESEM), Energy-Dispersive X-ray Spectroscopy (EDX), and X-Ray Diffraction (XRD) were employed to study the characteristics of the prepared thin films.
MTT Assay
10,000 cells from each cell line were cultured in 96-well plates with different concentrations of CuO NPs and incubated for 24 hours at 37 °C in an incubator with 5% CO2. Cells without added CuO NPs were also cultivated to serve as the positive control group, and wells without cells served as the negative control. After 24 hours, 10 μl of MTT solution was added with a sampler to 100 μl of cell culture supernatant, with gentle shaking by hand or on a shaker until smooth. The plates were incubated for 4 hours in an incubator (5% CO2) at 37 °C. Then the whole medium was removed, 100 μl of dimethyl sulfoxide (DMSO) was added to each well, and the formazan crystals were left to dissolve into a pink-purple solution. The light absorbance of the samples was then read at 570 nm using a Dana 3200 microplate reader, and the values were calculated with Prism software version 8.2.
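The text does not spell out how the plate readings were converted (the values were calculated in Prism); the following is a sketch of the common viability calculation for such MTT data, using the untreated positive-control and cell-free negative-control wells described above. The formula is a standard convention, not taken from the paper.

```python
def mtt_viability(od_treated, od_negative_control, od_positive_control):
    """Percent viability from 570 nm absorbances:
    (treated - blank) / (untreated control - blank) * 100.
    Returns (viability %, cytotoxicity %), cytotoxicity being the complement."""
    viability = ((od_treated - od_negative_control)
                 / (od_positive_control - od_negative_control) * 100)
    return viability, 100 - viability
```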
Results and Discussion
The X-ray diffraction analyses were performed on the synthesized CuO NPs to confirm their crystalline nature. The XRD results were compared with the Joint Committee on Powder Diffraction Standards (JCPDS), which confirmed the crystalline nature of the CuO NPs (JCPDS 96-901-5925 and JCPDS 48-1548). Fig. 2 shows the XRD patterns of the synthesized CuO NPs at the different frequencies of 4, 6, and 8 Hz.
The crystallite size was determined for the different frequencies using the Scherrer formula, which is given by [8]:

D = K·λ / (β·cos θ),

where D is the crystallite size, K is the dimensionless shape factor (typically about 0.9), λ is the wavelength of the X-ray, β is the FWHM, and θ is the Bragg angle. The structural property parameters are displayed in Table 2. For the Cu films, the XRD results indicated that the crystallite size increase was frequency dependent.
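A small numeric sketch of the Scherrer computation follows; the Cu Kα wavelength (0.15406 nm) and the example FWHM are illustrative assumptions, as the text does not state the X-ray source or the measured peak widths.

```python
import numpy as np

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta));
    beta (FWHM) converted to radians, theta is half of 2-theta."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative call for the (002) peak at 2-theta = 38.9 deg, with an assumed
# FWHM of 0.4 deg and Cu K-alpha radiation:
# scherrer_size_nm(0.15406, fwhm_deg=0.4, two_theta_deg=38.9)  # ~21 nm
```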
The main characteristic diffraction peaks of the three samples were consistent, as shown in Fig. 2, and the corresponding 2θ values were also consistent. The peaks of the copper oxide were (111), (002), and (020), corresponding to 2θ values of 35.63°, 38.9°, and 52.65°, respectively. The morphology of the nanocrystalline CuO thin film was examined using FESEM, as shown in Fig. 3. The sample formed at f = 4 Hz was observed to have a surface with uniform grains, but of small numbers. The increase in laser frequency (6 and 8 Hz) resulted in an increase in grain density, as is evident from Table 2. The crystal size of the copper nanoparticles ranged between 18 and 50 nm. The Energy-Dispersive X-ray (EDX) spectrum confirmed the presence of elemental gold (specific to the test device), copper, and oxygen at the different laser frequencies, as shown in Fig. 4. The energy is displayed on the horizontal axis in keV, while the X-ray count rate is shown on the vertical axis.
Effects of CuO NPs on the Liver Cancer (Hep-G2) Cell Line
Maximum cytotoxicity of 37.81 percent was reached after 24 hours of incubation of the Hep-G2 cells, corresponding to a viability of 62.19 percent, at a CuO concentration of 500 µg mL-1, as shown in Fig. 5.
Effects of CuO NPs on the Normal (HdFn) Cell Line
Maximum cytotoxicity of 2.89 percent was reached after 24 hours of incubation of the HdFn cells, corresponding to a viability of 97.11 percent, at a CuO concentration of 500 µg mL-1, as illustrated in Fig. 6.
This in vitro study validated the selective effects of nanoparticles on cells. Nanotechnology has opened up a whole new area in cancer care by controlling the release of the medication and lowering its side effects, i.e., cancer cells have been inhibited without harming normal cells. Another aspect is that nanoparticles with a high surface-to-volume ratio help distinct functional groups attach to the nanoparticle and thereby bind tumor cells together. Because of the small nanoparticle size (under 100 nm) and the lack of a good tumor lymphatic drainage system, tumor cells can act as an active center for collecting nanoparticles [20]. Cytotoxicity is an essential factor in studying the activity of prepared nanomaterials on normal and cancer cells. Biologically, cytotoxicity depends on the production of ROS. Aside from oxidative stress, other factors include dose, autophagy activation, exposure time, cell uptake, and the effect of substance concentration on cytotoxicity [21,22].
Conclusions
In terms of cost and speed, this approach has several advantages. The copper nanoparticles' crystal sizes ranged from 18 to 50 nm. The presence of copper and oxygen was confirmed in the prepared materials, which were free of impurities. The cytotoxic effects of CuO NPs on a cancer cell line (human Hep-G2 liver cancer) and a normal cell line (HdFn) were studied. The toxicity of CuO NPs toward the cancerous Hep-G2 cell line was 37.81%, while that toward the normal HdFn cell line was 27%. Although the toxicity toward the cancer cells is higher, it is not effective to the extent required to kill the cancer cells.
"Medicine",
"Materials Science"
] |
Potent Anti‐HIV Activity of Alkyl‐Modified DiPPro‐Nucleotides
Two convergent approaches for synthesizing a new class of nucleoside diphosphate prodrugs bearing different nucleoside analogs are reported herein. The DiPPro‐nucleotides comprise an acyloxybenzyl group in combination with a lipophilic alkyl residue at the β‐phosphate or β‐phosphonate group, respectively. They are selectively cleaved to form their corresponding β‐alkylated nucleoside diphosphate derivatives in chemical and biological hydrolysis studies. In contrast, a selective but slow cleavage is observed in the hydrolysis of the DiPPro‐compounds bearing two different, nonbioreversible alkyl moieties in human CD4+ T‐lymphocyte CEM/0 cell extracts. In these studies, the delivery of nucleoside monophosphates rather than nucleoside diphosphates is observed, most likely due to a purely chemical phosphoranhydride cleavage of the β‐phosph(on)ate moiety. The antiviral evaluation of these two types of prodrugs reveals that these compounds exhibit marked anti‐HIV efficacy in HIV‐2‐infected thymidine kinase‐deficient CD4+ CEM T‐cells (CEM/TK−), with significantly better activities (up to 6700‐fold) against HIV‐2 replication than the parent nucleosides. Primer extension assays demonstrate that the β‐dialkylphosphate‐modified nucleoside derivatives, β‐monoalkylated diphosphates, and nucleoside diphosphates serve as substrates for HIV reverse transcriptase in viral DNA elongation.
Introduction
For many decades, nucleoside analogs such as 1 (Figure 1) have been an important part of the treatment of several viral infections caused by the human immunodeficiency virus (HIV), herpes viruses (herpes simplex virus, varicella-zoster virus, and cytomegalovirus), hepatitis B virus and hepatitis C virus (HCV), influenza, and recently the severe acute respiratory syndrome coronavirus 2. [1][2][3][4][5][6][7][8][9] Nucleoside analogs still serve as cornerstones of current antiviral and antitumoral chemotherapies; [2] to become active, they must be phosphorylated intracellularly to their triphosphates (NTPs). [16] These NTPs are substrates for the viral RNA-dependent DNA polymerase of HIV (HIV-RT), [2,3] involved in the early phase of the infection, acting either by blocking RT's enzymatic function or by incorporation of the nucleotide analogs into the viral DNA followed by chain termination, which results in the inhibition of HIV replication. Previous studies have shown that one or more steps in the phosphorylation pathways can be rate-limiting due to the specificity of the metabolizing cellular kinases, which results in low or no biological activity [15,17,18] or in adverse effects. [19,20] As examples, for the nucleoside analog d4T 1a [14,21] the limiting phosphorylation step is the conversion of d4T 1a to d4TMP 2a, whereas for AZT 1b [15,22] it is the conversion of AZTMP 2b to AZTDP 3b, mediated by thymidine kinase (TK) and thymidylate kinase (TMP-K), respectively.
Chemical and Enzymatic Stabilities of DiPPro-Compounds 8 and 9
All DiPPro-compounds 8 and 9 were studied in different media with respect to their stabilities and potential hydrolysis products. The studies of DiPPro-compounds 8 and 9 were conducted by means of reversed-phase RP18-HPLC. The half-lives of DiPPro-compounds 8 and 9 in PBS, PLE, and CEM cell extracts are summarized in Table 1. The assumed chemical and enzymatic hydrolysis pathways of DiPPro-compounds 8 are summarized in Scheme 2 (lower section).
Hydrolysis in Human Plasma
Based on the phosphorylation profiles, nucleoside analogs 1b,c,d,h are phosphorylated to the corresponding mono-, di-, and finally triphosphates. However, the antivirally active abacavir (ABC, 1e) is transformed into carbovir-triphosphate (CBVTP). [10] According to a report, [65] heparin-stabilized plasma mimics serum (the gold standard) most closely, whereas citrate-stabilized plasma differs markedly from serum, largely due to the added complexing chelator citrate that "fishes out" divalent ions such as Mg2+ or Ca2+. In order to compare effects caused by the different plasma preparations, in the work presented here seven (AZT or ABC)-DiPPro-compounds 8b,e and 9b,e were incubated in both heparin-stabilized and citrate-stabilized human plasma. The half-lives of DiPPro-compounds 8bw, 8bx, and 8ex, which comprise a biodegradable moiety, were found to be lower in heparin-stabilized than in citrate-stabilized human plasma. This may point to a contribution to the compounds' stability from the divalent ions still available in the heparin-stabilized plasma samples, either directly through interaction with the charged diphosphate moieties, or because plasma enzymes depending on these ions lose activity in the citrate-stabilized samples. In contrast, the stabilities of DiPPro-compounds 8by, 8bz, 9b, and 9e, bearing only noncleavable alkyl moieties, were found to be very high and not influenced by the plasma preparation (Table 2).
Dialkyl-Modified NDP Compounds 8by,8ez
The half-lives determined for the dialkylated diphosphate derivatives 8by and 8ez (t1/2 > 8 h, Table 2), bearing two alkyl groups, were very high. As hydrolysis products of the DiPPro-nucleotides 8by and 8ez, a small amount of the nucleoside analogs was detected (Figure 5B and S37-S39, Supporting Information). Additionally, in contrast to the studies in PBS, almost no formation of NMPs was detected for these dialkylated diphosphate derivatives.
Antiviral Evaluation
DiPPro-nucleotides 8 and 9 and the corresponding parent nucleosides were investigated for their in vitro anti-HIV activity in HIV-1- and HIV-2-infected wild-type CEM/0 cells and in HIV-2-infected mutant TK-deficient (CEM/TK−) cells. The antiviral and cytostatic data are displayed in Table 3. As anticipated, nucleoside analogs 1b and 1d-f were devoid of any antiretroviral activity (EC50 > 100 μM, Table 3) in CEM/TK− cells because of the lack of their intracellular kinase-catalyzed activation. [15,22] Their corresponding NDP derivatives 3 (e.g., AZTDP (EC50 = 40.9 μM)) also exhibited no antiviral activity in CEM/TK− cell cultures because of their high polarity and dephosphorylation. Assay conditions are summarized in the Experimental Section (Supporting Information).
It was concluded from these studies that the DiPPro-nucleotides 8 studied here successfully delivered nucleotide metabolites (NMPs and the alkylated diphosphate analogs 9, 10) inside cells, with demonstrated potent antiviral activity. The promising antiviral data of the DiPPro-compounds 8 bearing different nucleoside analogs demonstrate the general applicability and high potential of the DiPPro-approach described here. Therefore, these advanced DiPPro-compounds 8 offer high potential for the development of future antiviral chemotherapies.
As anticipated, most of the C18-NDPs 9 exhibited lower activities against HIV-2 replication in cultures of infected wild-type CEM/0 cells compared to the parent nucleoside analogs 1, which might be due to their high polarity hampering cell membrane penetration. Nevertheless, the antiviral activity determined for C18-ABCDP 9e (EC50: 0.044 μM against HIV-1; EC50: 0.026 μM against HIV-2) in wild-type CEM/0 cells was 210-fold and 130-fold better, respectively, than that of the parent nucleoside ABC 1e, for unknown reasons. Interestingly and somewhat surprisingly, all mono-alkylated DiPPro-compounds 9 bearing a C18 aliphatic chain also showed moderate to marked antiviral activity against HIV-2 in CEM/TK− cells. With C18-ABCDP 9e (EC50 < 0.001 μM against HIV-2), the antiviral potency in CEM/TK− cells was also considerably improved, by >2000-fold compared to ABC 1e, indicating successful cellular uptake of the mono-alkylated nucleoside diphosphates 9. In addition to their anti-HIV activity in virus-infected cell cultures, DiPPro-compounds 8 exhibited only slightly higher cellular cytotoxicity than the parent nucleosides 1, whereas most mono-alkylated DiPPro-compounds 9 (CC50 > 100 μM) were endowed with minimal cellular cytotoxicity.
Primer Extension Assays
As disclosed above, DiPPro-compounds 8 showed significant anti-HIV activity in HIV-1/2-infected wild-type CEM/0 cells and in HIV-2-infected CEM/TK− cells. Taking the results from the hydrolysis studies, the formation of monoalkylated nucleoside diphosphates 9, 10 (for the DiPPro-compounds 8bw-fw and 8bx-fx) and NMPs 2 (for the DiPPro-compounds 8by-fy and 8bz-fz) was detected in PBS as well as in CEM/0 cell extracts, which is a striking difference from the doubly, bioreversibly modified DiPPro-compounds 5 (Scheme 2). Previously, we have shown that (alkyl)-d4TDPs 9a were substrates for HIV-RT. [60] To shed further light on these results, primer extension assays with HIV-RT and two different human DNA polymerases, α and γ, were performed here as well. As controls in these primer extension assays, the four canonical dNTPs were added to the polymerases (positive control, + lane), and a further experiment without the polymerases (negative control, − lane) was performed. Additionally, TTP was used as the reference compound because TTP is accepted as a substrate not only by HIV-RT but also by DNA polymerases α and γ (Figure 6).
As expected, with HIV-RT, full extension of the primer to the 30mer proceeded (+ lane), and no extension was observed without HIV-RT (− lane). Interestingly, HIV-RT recognized (C4;C18)-AZTDP 8by and C18-AZTDP 9b as substrates, which was concluded from the appearance (though weak) of the corresponding n + 1 bands. Furthermore, it appears that the ABC derivatives 8ey and 9e may also serve as substrates for HIV-RT, albeit with low affinity. As can also be seen in Figure 6A,B, not only the triphosphates of the antivirally active nucleoside analogs such as AZT, ABC, FddU, and FLT were well incorporated into the primer, but also their diphosphate derivatives.
The analogous experiments using DNA polymerase α showed no incorporation of the alkylated NDP compounds studied here (Figure 7A,B).In contrast to AZTTP, interestingly two triphosphate analogs proved to be substrates: FddUTP and FLTTP (Figure 7A).
None of the new alkylated compounds proved to be substrates for DNA polymerase γ (Figure 8).
Conclusion
Here, a class of lipophilic nucleoside diphosphate compounds 8 comprising different nucleoside analogs is disclosed. The synthesis of DiPPro-nucleotides 8, 9 was performed by using H-phosphonate and/or H-phosphinate chemistry. The stability of DiPPro-compounds 8 depended on the nucleoside analog present and the different types of masking groups. It was shown that the AB-moiety in DiPPro-compounds 8bw-fw and 8bx-fx was selectively cleaved to form the monoalkylated NDP derivatives 9, 10 by chemical hydrolysis in PBS and particularly in CEM/0 cell extracts, with PLE, as well as in human plasma. The formation of nucleoside monophosphates 2 from DiPPro-compounds 8by-fy and 8bz-fz in PBS containing PLE was purely chemically driven. Compared to the PLE hydrolysis, small amounts of nucleoside monophosphates were observed for DiPPro-compounds 8by-fy and 8bz-fz in CEM/0 cell extracts, most probably due to the presence of phosphatases. From these studies, it was concluded that the cleavage of DiPPro-compounds 8 proceeds similarly to the hydrolysis pathways of the DiPPro-AZTDPs 8bw-bz in different chemical and biological media.
Hence, it was convincingly shown that the DiPPro-technology offers high potential for use in antiviral chemotherapies compared to the β-(AB;AB)-nucleoside diphosphates [45] and γ-(AB;ACB or AB)-nucleoside triphosphates. [49,52] Highly active prodrugs 8 may be further studied in terms of their PK/PD properties as well as their in vivo activity in the future.
a) Antiviral activity in CD4+ T-lymphocytes: 50% effective concentration; values are the mean ± SD of n = 2-3 independent experiments; b) Cytotoxicity: 50% cytostatic concentration, or compound concentration required to inhibit CD4+ T-cell (CEM) proliferation by 50%; values are the mean ± SD of n = 2-3 independent experiments.
Table 1. Hydrolysis half-lives of DiPPro-NDPs 8 and alkyl-NDPs 9 in PBS, PLE, and CEM/0 cell extracts, as well as retention times.
Table 3. Antiviral activity and cytotoxicity profile of DiPPro-compounds 8, 9 and NDPs in comparison with the parent nucleoside analogs 1.
"Chemistry",
"Medicine"
] |
A Novel Two-Tier Cooperative Caching Mechanism for the Optimization of Multi-Attribute Periodic Queries in Wireless Sensor Networks
Wireless sensor networks, serving as an important interface between physical environments and computational systems, have been used extensively for supporting domain applications, where multi-attribute sensory data are queried from the network continuously and periodically. Usually, certain sensory data may not vary significantly within a certain time duration for certain applications. In this setting, sensory data gathered at a certain time slot can be used for answering concurrent queries and may be reused for answering forthcoming queries when the variation of these data is within a certain threshold. To address this challenge, a popularity-based cooperative caching mechanism is proposed in this article, where the popularity of sensory data is calculated according to the queries issued in recent time slots. This popularity reflects the possibility that sensory data will be of interest to forthcoming queries. Generally, sensory data with the highest popularity are cached at the sink node, while sensory data that may not be of interest to forthcoming queries are cached in the head nodes of the divided grid cells. Leveraging these cooperatively cached sensory data, queries are answered by composing the two tiers of cached data. Experimental evaluation shows that this approach can reduce the network communication cost significantly and increase the network capability.
Introduction
With the rapid development of microelectronics, wireless communication, and new and renewable energy technologies, smart sensor nodes have become smaller in physical size, stronger in storage and computational capabilities, more powerful in battery capacity, and less expensive. Sensor nodes form wireless sensor networks (WSNs), which have been adopted in widespread domain applications, including ambient assisted living [1], target tracking [2], bridge and traffic monitoring [3], etc. Sensor nodes are mostly battery-powered and are difficult to recharge or replace, especially when deployed in harsh environments. Although energy harvesting from natural sources [4], energy replenishment [5,6], and radio optimization and charging [7,8] technologies have been developed to recharge the batteries of sensor nodes, maximizing and prolonging the network lifetime is still essential, and energy efficiency remains one of the most important research challenges in WSNs [7]. Note that the network lifetime can be defined with various semantics depending on the application domain; a widely accepted definition is the time at which the first sensor node depletes its energy [9]. Therefore, techniques that facilitate gathering sensory data efficiently for answering queries, while prolonging the network lifetime as much as possible, are fundamental.
As presented by Xu et al. [10], queries in WSNs are typically conducted in a periodic, rather than one-shot, fashion for supporting one or multiple applications. Note that queries in WSNs are different from those in query-based WSNs [11], where sensor nodes are producers (sources) and consumers (sinks) of resources simultaneously. In this article, sensor nodes gather sensory data, which are aggregated and routed to the sink node according to the requirements of certain applications. Usually, WSNs can be shared by multiple applications to improve network utilization efficiency [12]. Consequently, multiple queries are performed in a certain time period, and multi-attribute sensory data are often of interest [13,14]. These queries may have overlapping sub-regions of interest. Besides, the points of interest of certain applications may lie within a certain sub-region for a certain time duration, while evolving moderately to a neighboring sub-region [15,16]. In this setting, when the number of queries is relatively large and each query is processed independently, the capability required for processing these queries may exceed the capability that the network can provide, and consequently the delay of query answering may grow towards infinity [10,17]. Therefore, a mechanism for optimizing multi-attribute query processing while prolonging the network lifetime is an important research challenge.
Traditional techniques have studied query processing in WSNs from various aspects, including in-network query processing [18], aggregated query processing [19], compressed data aggregation [20], spatial-correlation data aggregation [21], range query processing [22], opportunistic sampling-based query processing [23], snapshot and continuous data aggregation [17,24], real-time query processing [25], multi-dimensional or multi-attribute query optimization [26][27][28], cooperative caching-based query processing [29,30], etc. Generally, these techniques mostly explore one-shot query scheduling where a single attribute is of interest, whereas few efforts study periodic, aggregated, multi-attribute query processing [31]. Note that queries to be conducted in a certain time duration usually have overlapping sub-regions, where sensory data gathered from these sub-regions can be shared for answering the concurrent queries. Besides, sensory data gathered at the current time slot may be reused for answering forthcoming queries when the variation of these sensory data is within an allowed threshold. In fact, sensory data may not vary dramatically in certain applications (like health or environmental monitoring), and many applications work well when the bias between the sensory data (i) being used and (ii) being sensed in real time is within a certain threshold [32]. Without loss of generality, a time duration is divided into and represented by discrete time slots. Queries issued at a certain time slot are rewritten into one query; hence, these concurrent queries are processed in batches. Besides, certain sensory data are cached in the network for answering forthcoming queries and are retrieved from the network when the bias between the cached value and the currently sensed value is above a certain threshold. Generally, this threshold is pre-specified to an appropriate value considering the characteristics of the attributes and the requirements of the applications. This strategy should reduce the network communication cost, improve the network capability, shorten the response time of query answering and, most importantly, prolong the network lifetime to some extent. Consequently, caching and refreshing multi-attribute sensory data in the network, while diminishing the cost of answering forthcoming queries, is an important research problem to be explored further.
To remedy this issue, a two-tier popularity-based cooperative caching (PCC) mechanism is developed to support periodic query processing, where multi-attribute sensory data are cached in the sink node and the leaf head nodes of an index tree. Our main contributions are presented as follows:
• Given a network represented as square grid cells with inverted files, an index tree is constructed, where grid cells correspond to the leaf nodes in this tree. A query expects to return sensory data for certain attributes in a set of neighboring grid cells. For simplicity, the sensory data of one attribute in one grid cell are considered an atomic unit for query answering and caching manipulation. A query is answered by composing sensory data that are (i) cached in the sink node and (ii) gathered from the network in real time.
• The sink node is usually limited in storage and computational capabilities and can hardly cache the sensory data of all sensor nodes in the whole network. Besides, a sub-region, rather than the whole network, is usually of interest to applications within a certain time duration. Therefore, a two-tier cooperative caching mechanism is proposed, such that the most popular sensory data are cached at the sink node, where they can be reused for answering forthcoming queries. This strategy can reduce the energy consumption of answering concurrent queries to a large extent. Specifically, sensory data that are of interest to most queries in the most recent time slots have the highest popularity; these data are assumed to be of most interest to forthcoming queries and are cached in the sink node (a sketch of this popularity bookkeeping is given after this list). On the other hand, for a grid cell that may not be of interest to queries at the moment, sensory data in this grid cell are cached locally in the memory of the corresponding head node, together with a flag indicating whether the sensory data have varied significantly. When this grid cell becomes of interest to queries, the sensory data of those sensor nodes whose flags indicate a dramatic variation are retrieved from the network in real time.
• Extensive simulations are conducted to evaluate the effectiveness and efficiency of the proposed algorithms. The experimental results show that the technique proposed in this article outperforms the technique proposed by Zhou et al. [33], where multi-attribute query processing is also explored. Generally, our technique is more efficient than [33] in reducing the communication energy consumption and increasing the network capability, especially when the number of queries and the number of attributes of interest are relatively large.
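The following is the minimal Python sketch referenced in the second contribution. The window length, cache size, and the treatment of a (grid cell, attribute) pair as the cached unit follow the text, while the class and method names and the absence of any recency weighting are our own simplifications.

```python
from collections import Counter, deque

class PopularityTracker:
    """Popularity of (cell_id, attribute) units over the last `window`
    time slots; the `cache_size` most popular units are cached at the sink."""

    def __init__(self, window=5, cache_size=100):
        self.cache_size = cache_size
        self.slots = deque(maxlen=window)   # per-slot Counters of queried units

    def record_slot(self, queried_units):
        """queried_units: iterable of (cell_id, attribute) pairs appearing
        in the (rewritten) query of this time slot."""
        self.slots.append(Counter(queried_units))

    def sink_cache_candidates(self):
        """Units of interest to most queries in recent slots, i.e., the
        candidates for the sink-node tier of the cache."""
        total = Counter()
        for slot in self.slots:
            total.update(slot)
        return [unit for unit, _ in total.most_common(self.cache_size)]
```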
The rest of this article is organized as follows. Section 2 introduces the energy model used in the following. Section 3 presents the index tree construction algorithm and the network caching model. Section 4 proposes our cooperative caching mechanism for facilitating query answering. Section 5 evaluates the technique developed in the previous sections. Section 6 reviews and compares traditional techniques, and Section 7 concludes this work.
Preliminary: Energy Model
Several protocols for wireless sensor networks have been proposed leveraging assumptions made about the radio characteristics in the transmission and receiving modes. Without loss of generality, we apply the well-adopted first-order radio model [34] in this article for the computation of the energy consumption, where the parameters are presented in Table 1. In this model, the energy consumed for running the transmitter or receiver circuitry, E_elec, is set to 50 nJ/bit, while that for the transmit amplifier, ε_amp, is set to 100 pJ/bit/m². The energy consumption E_Tx(k, d) for transmitting (or E_Rx(k) for receiving) a packet of k bits over a distance d is specified by the following equations:

E_Tx(k, d) = E_elec · k + ε_amp · k · d^n
E_Rx(k) = E_elec · k

Table 1. Parameters of the energy model.
k: the number of bits in one packet
d: the distance of transmission
n: the attenuation index of transmission
r: the communication radius of sensor nodes
E_Tx(k, d): the energy consumed to transmit a k-bit packet over a distance d
E_Rx(k): the energy consumed to receive a k-bit packet
E_ij(k): the energy consumed to transmit a k-bit packet from a node i to a neighboring node j

Consequently, the energy consumption E_ij(k) for transmitting a packet of k bits from a sensor node i to a neighboring sensor node j is computed as follows:

E_ij(k) = E_Tx(k, d) + E_Rx(k) = 2 · E_elec · k + ε_amp · k · d^n

Note that the energy consumption of transmitting a packet to a sensor node is different from that of transmitting to the sink node (SN, also called the base station), since the SN is assumed to have no energy constraint and the cost of receiving a packet is ignored in this model [34]. The parameter d refers to the distance between sensor node i and node j (or the SN). The energy of transmitting a packet from a sensor node i to another node j is assumed to be the same as that of transmitting a packet from j to i, i.e., E_ij(k) = E_ji(k). The parameter n, the attenuation index of transmission in Table 1, is determined by the surrounding environment: if the sensor nodes are barrier-free when forwarding packets, n is set to two; otherwise, n is set to a value between three and five, as when sensor nodes for long-distance transmission are distributed in an area of buildings and vegetation cover. Without loss of generality, the network is assumed to be deployed in a barrier-free area, and n is set to two in our experiments in Section 5.
As an example, Figure 1 shows part of our sample network region from Figure 2, where 13 sensor nodes are deployed in this sub-region. The lines with arrows reflect the fact that sensor nodes in neighboring grid cells are within their communication radius r. Consequently, the energy consumed for forwarding a packet of k bits from a sensor node (e.g., 47) to a neighboring one (e.g., 49) is

E_{47,49}(k) = 2 · E_elec · k + ε_amp · k · d²,

where the parameter d represents the geographical distance between sensor nodes 47 and 49 (here n = 2, as the region is barrier-free). The network region is divided into 25 square grid cells of the same geographical size. The region of a query (for instance, q1) is rewritten into a set of grid cells; for instance, q1.qr can be rewritten into the set of grid cells {gc_0, gc_1, gc_2, gc_5, gc_6, gc_7}.
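To make the cost model concrete, here is a minimal Python sketch of the first-order radio model above; the constant and function names and the example packet size are our own, and the node-to-sink variant reflects the stated assumption that the sink's receiving cost is ignored.

```python
E_ELEC = 50e-9      # J/bit, transmitter/receiver circuitry (50 nJ/bit)
EPS_AMP = 100e-12   # J/bit/m^2, transmit amplifier (100 pJ/bit/m^2)

def e_tx(k_bits, d_m, n=2):
    """E_Tx(k, d) = E_elec * k + eps_amp * k * d^n."""
    return E_ELEC * k_bits + EPS_AMP * k_bits * d_m ** n

def e_rx(k_bits):
    """E_Rx(k) = E_elec * k."""
    return E_ELEC * k_bits

def e_node_to_node(k_bits, d_m, n=2):
    """E_ij(k): node-to-node forwarding pays transmit plus receive cost."""
    return e_tx(k_bits, d_m, n) + e_rx(k_bits)

def e_node_to_sink(k_bits, d_m, n=2):
    """Transfer to the sink: the sink's receiving cost is ignored."""
    return e_tx(k_bits, d_m, n)

# Example: forwarding a 1024-bit packet over 30 m between barrier-free nodes.
# e_node_to_node(1024, 30.0)  # = 2*50e-9*1024 + 100e-12*1024*900 ~ 1.95e-4 J
```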
Index Tree Construction and Network Caching Model
This section proposes an index tree construction algorithm leveraging the method developed by Zhou et al. [33] and introduces the network caching model, both of which are used to facilitate the query processing in Section 4.
Index Tree Construction
A tree-based routing structure has been widely used in WSNs to support data gathering, aggregation and transmission in a multi-hop manner [35,36]. Leveraging the index tree construction algorithm developed in our previous work [33], we develop a novel index tree that organizes sensor nodes in a balanced manner, which better facilitates query processing when multiple kinds of attributes are of interest according to the requirements of domain applications. As in our previous work [33], sensor nodes are assumed to follow a skewed distribution: they are dense in some sub-regions of the network and sparse in the others. In fact, sensor nodes are distributed unevenly in real-world applications, including bridge or traffic monitoring scenarios [3]. An example of sensor nodes in a skewed distribution is shown in Figure 2, where sensor nodes located in the upper center and the bottom right of the network region are dense, while the others are sparse. A sensor node is assumed to carry one piece of sensing equipment for sensing a certain attribute, such as humidity, temperature or gas flow; sensor nodes with different sensing equipment are thus able to sense diverse attributes. The index tree construction algorithm proposed as Algorithm 1 in our previous work [33] works as follows:

• The network region is divided into square grid cells whose side-length is set to √2r, and an inverted file [37] is attached to each grid cell, specifying the attributes sensed by the sensor nodes therein. An example of grid cell division is shown in Figure 2, where the network is divided into 25 grid cells, and a two-dimensional matrix is adopted to represent the coordinates of these grid cells. These grid cells correspond to the leaf nodes of the index tree to be constructed.
• The weight between two neighboring grid cells or sub-regions is calculated according to the mechanism proposed as Algorithm 2 in our previous work [33]. This weight specifies the energy consumption of forwarding a message of the same size between neighboring grid cells. Intuitively, sensor nodes in neighboring grid cells contribute to this weight calculation when the Euclidean distance between them is no more than the communication radius r of sensor nodes (a minimal sketch of this calculation is given after this list).
• Neighboring grid cells or sub-regions are merged in a pair-wise fashion when the weight between the candidates is the greatest, and the inverted file is processed accordingly. This merging procedure iterates until the root node of the index tree, which is binary in shape, is constructed. Note that this merging strategy places adjacent sub-regions, which may induce relatively large energy consumption when forwarding messages between them, under different children. Consequently, the energy consumption for routing data packets along the paths specified by this index tree is balanced to some extent.
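The following Java sketch illustrates the weight computation between two neighboring sub-regions under the stated rule: every pair of sensor nodes, one from each sub-region, whose Euclidean distance is within the communication radius r contributes the corresponding hop energy. The Node record and the fixed packet size are illustrative assumptions, not details prescribed by [33].

import java.util.List;

final class WeightCalc {
    static final double R = 50.0; // communication radius r in meters (assumed)
    static final int K = 1024;    // reference packet size in bits (assumed)

    record Node(double x, double y) {}

    static double distance(Node a, Node b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }

    // Weight between two neighboring sub-regions: the sum of hop energies
    // over all cross-region node pairs within communication range.
    static double weight(List<Node> regionA, List<Node> regionB) {
        double w = 0.0;
        for (Node a : regionA) {
            for (Node b : regionB) {
                double d = distance(a, b);
                if (d <= R) {
                    w += RadioModel.eIj(K, d); // from the energy-model sketch above
                }
            }
        }
        return w;
    }
}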
Generally, Algorithm 1 in our previous work [33] constructs a tree that may be unbalanced, especially when sensor nodes are distributed unevenly in the network. Since [33] does not cache sensory data in leaf head nodes, leaf head nodes simply gather sensory data from sensor nodes and route these data to the SN; in that setting, an unbalanced tree under a skewed distribution of sensor nodes prolongs the network lifetime, as evidenced by the experimental evaluation conducted in [33]. In this article, by contrast, leaf head nodes are required to cache sensory data locally, as presented in Section 4, and sensory data whose variation stays within an allowed threshold are not routed to the SN. Therefore, when the sensory data of most sensor nodes do not vary dramatically, the effort of routing sensory data to the SN is lighter than in [33], while the effort of leaf head nodes is heavier due to the local caching of sensory data. Intuitively, an unbalanced tree has a much greater number of head nodes whose children include both sensor nodes and head nodes, whereas in a balanced tree most head nodes are leaf head nodes [38]; more hops are thus involved when routing sensory data to the sink node in a hop-by-hop manner, especially when the path is long in hops. To facilitate the caching mechanism upon leaf head nodes as presented in Section 4, a relatively balanced tree is constructed for organizing sensor nodes in this article.
Algorithm 1 Index tree construction.
Require: LN_set: the set of leaf nodes with inverted files
Ensure: rt: the root of the index tree
1: TN_set ← set of tree nodes, initially set to LN_set
2: nTN_set ← set of tree nodes recording newly-merged nodes, initially ∅
3: while |TN_set| > 1 do
4:   wgt(nd_1, nd_2) ← calNgbNdWgt(nd_1, nd_2) as presented by Algorithm 2 in our previous work [33], where nd_1 and nd_2 are neighboring nodes in TN_set
5:   while ∃ nd_1, nd_2 ∈ TN_set: nd_1 and nd_2 are neighboring nodes and wgt(nd_1, nd_2) is the biggest do
6:     tn ← merge tree nodes nd_1 and nd_2
7:     set nd_1 and nd_2 as the children of the newly-generated node tn
8:     tn.IvtF ← merge the inverted files of nd_1 and nd_2
9:     remove nd_1 and nd_2 from TN_set
10:    insert tn into nTN_set
11:  end while
12:  TN_set ← TN_set ∪ nTN_set
13:  nTN_set ← ∅
14: end while
15: return rt, the only node left in TN_set

As previously mentioned, the index tree to be constructed in this article should be a relatively balanced tree for a skewed distribution, rather than an unbalanced tree as developed in our previous work [33]; the difference lies mainly in the tree node merging strategy. As presented in Algorithm 1, after dividing the network region into square grid cells whose side-length is √2r, we first get the set of leaf nodes (denoted TN_set) with inverted files, as presented by Algorithm 1 (Lines 1-13) in our previous work [33] (Line 1). Note that these leaf nodes correspond to grid cells, rather than sensor nodes in the network. The weight (denoted wgt(nd_1, nd_2)) between neighboring nodes nd_1 and nd_2, which reflects the energy consumption when routing sensory data from one node to another, is calculated using the function calNgbNdWgt(nd_1, nd_2).
As presented by Algorithm 2 in our previous work [33] (Line 4), this function returns the sum of weights among all pairs of neighboring grid cells in the corresponding neighboring sub-regions. If there are neighboring nodes nd_1 and nd_2 in TN_set that can be merged, in other words, if wgt(nd_1, nd_2) is the biggest (Line 5), nd_1 and nd_2 are merged into a newly-generated node tn (Lines 6-7), and their inverted files (denoted tn.IvtF for the merged node) are processed accordingly (Line 8).
Note that tn corresponds to a sub-region in the network, namely the merger of the sub-regions of nd_1 and nd_2. After this merging procedure, nd_1 and nd_2 are removed from TN_set (Line 9), while tn is inserted into nTN_set as a candidate for the next merging procedure (Line 10). This procedure iterates until all neighboring nodes in TN_set have been examined and processed. Thereafter, TN_set and nTN_set are updated accordingly (Lines 12-13); the symbol ∅ in Line 13 denotes an empty set. The tree construction procedure terminates once there is only one node left in TN_set (Line 3), which corresponds to the root node of the index tree (Line 15). An example of an index tree constructed through this algorithm is shown in Figure 3; it is a binary tree and more balanced than the tree shown in Figure 5 of our previous work [33]. Specifically, the 25 grid cells shown in Figure 2 correspond to the leaf nodes, and sub-regions composed of multiple grid cells correspond to the non-leaf nodes of the index tree. This index tree construction facilitates the popularity-based cooperative caching mechanism presented in the following sections. The time complexity of Algorithm 1 is O(n × log_2 n), where n is the number of leaf nodes, and log_2 n is the height of the index tree.

Generally, a node in the index tree corresponds to a sub-region containing a subset of tree nodes and/or sensor nodes, and these tree nodes are responsible for query propagation and for routing sensory data to the SN. The low-energy adaptive clustering hierarchy protocol (LEACH) [39] is adopted for head node selection in these sub-regions or grid cells, which correspond to the clusters presented by Heinzelman et al. [39]. LEACH incorporates randomized rotation of high-energy sensor nodes as head nodes to avoid draining the energy of particular head nodes; consequently, the energy consumption of serving as a cluster head is distributed and balanced among sensor nodes, which helps prolong the network lifetime to some extent. Optimal head nodes should be located at the center of a cluster [40,41]. Therefore, a higher priority is given to sensor nodes that are closer to the center of their sub-regions or grid cells when voting for head nodes.
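As a small illustration of this head-node preference, the following sketch picks the candidate closest to a sub-region's center; the scoring rule is an assumption layered on top of LEACH's randomized rotation [39], not part of the protocol itself. It reuses the Node record from the weight-calculation sketch above.

import java.util.Comparator;
import java.util.List;

final class HeadNodeVote {
    // Prefer the node closest to the sub-region's center when voting for heads;
    // ties among comparable candidates are otherwise resolved by LEACH's rotation.
    static WeightCalc.Node pickHead(List<WeightCalc.Node> nodes,
                                    double centerX, double centerY) {
        return nodes.stream()
                .min(Comparator.comparingDouble(
                        n -> Math.hypot(n.x() - centerX, n.y() - centerY)))
                .orElseThrow();
    }
}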
Two-Tier Cooperative Caching Model
Applications leveraging WSNs may require gathering sensory data with short latency while minimizing the energy consumption of the network. Queries issued periodically and continuously for gathering sensory data of multiple attributes induce a high communication cost, which may be beyond the network capability. Without loss of generality, we divide the time duration into time slots as in our previous work [42], and queries are assumed to be conducted in batches in each time slot. Note that sensory data in some applications, such as health or wildlife monitoring, may not change dramatically within a certain time duration. Besides, some applications may tolerate a bias in sensory data when the variation of these data satisfies a certain constraint. This suggests that sensory data may remain valid for the applications for some time slots after the sensing time point, and that these sensory data may be reused for answering forthcoming queries, rather than fetched from the network in a real-time fashion [42]. To facilitate this reusability strategy, this article proposes a cooperative data caching mechanism where sensory data are cached in the memory of: (i) the SN; and (ii) the head nodes of grid cells, which correspond to the leaf nodes of the index tree. We use the notion of intermediate nodes (INs) to represent these leaf head nodes in the following. Generally, the SN, which has a larger memory space and more computational capability than sensor nodes, is responsible for caching the bulk (though possibly not all) of the sensory data from the network. An IN is required to cache the sensory data of the sensor nodes in the corresponding grid cell. Since there are usually a limited number of sensor nodes in each grid cell, an IN is assumed to have enough memory space and computational capability for processing the sensory data of all sensor nodes in the corresponding grid cell. To facilitate the query processing mechanism in the following sections, we define a wireless sensor network as a tuple (SN, V_IN, V_sn, ATR), where:

• SN is the sink node of this network.
• V_IN is a set of intermediate nodes, which are responsible for handling sensory data in grid cells.
• V_sn is a set of sensor nodes in the network.
• ATR is a set of attributes to be sensed by V_sn.
Given a sensor node v_sn ∈ V_sn, v_sn is defined in terms of the vector

vec_vsn = <sd_id, sd_atr, sd_stt, val_vsn>,

where sd_id is the identifier of this sensor node v_sn, sd_atr ∈ ATR represents the single attribute to be sensed by v_sn, sd_stt specifies the status of v_sn, which can be active or inactive, and val_vsn is the sensory datum that may be cached in the corresponding intermediate node v_IN ∈ V_IN and the SN. Note that val_vsn is not necessarily the sensory datum val^cur_vsn at this moment; val_vsn is replaced by val^cur_vsn only when the bias between val_vsn and val^cur_vsn is above an allowed threshold thrd. In this case, v_sn should:

• send the fresh sensory datum to v_IN when v_sn is in the status of active, so that v_IN updates the sensory data in its cache; or
• otherwise, report a sensory data change notification message to v_IN when v_sn is in the status of inactive, and thereafter, v_IN will invalidate the sensory data in the cache.
Note that thrd is determined according to: (i) the kind of attribute sensed by v_sn; and (ii) the specific requirements of certain applications. As for the sensory data synchronization procedure, the reader is referred to Section 4.1 for details.
An intermediate node v_IN caches the sensory data of all sensor nodes in the corresponding grid cell; for a sensor node v_sn, the cached entry is the vector

vec^vsn_IN = <sd_id, sd_atr, sd_stt, val_vsn, ts_fmS, ts_toSN>,

where sd_id, sd_atr, sd_stt and val_vsn are the same as those of v_sn, respectively. ts_fmS records the time slot when val_vsn was reported by the sensor node, while ts_toSN records the time slot when val_vsn was retrieved by the SN. When ts_fmS is the same as ts_toSN, the latest sensory datum of v_sn has been cached in the SN, provided that the SN has enough memory space; otherwise, the sensory datum cached in the SN is not synchronized with that in the corresponding IN. Note that vec^vsn_IN.val_vsn is set to null when v_sn reports a sensory data change notification message to the IN.
The SN caches the sensory data of sensor nodes in terms of the vector

vec_SN = <sd_id, sd_atr, val_vsn, gc_id>,

where sd_id, sd_atr and val_vsn are the same as those of the sensor node v_sn, respectively, and gc_id refers to the ID of the grid cell where v_sn lies. Since the SN has a limited storage capability, the sensory data of the sensor nodes that are the most popular according to the recent query history are cached in the SN. The popularity computation of grid cells is presented in Section 4.2.
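The three cache vectors above can be mirrored directly as Java records; a minimal sketch follows, where the names are illustrative and the sensory value is simplified to a single Double.

// Sensory-data vectors, mirroring the notation above (illustrative sketch).
enum Status { ACTIVE, INACTIVE }

// vec_vsn: the vector describing a sensor node v_sn.
record SensorVec(int sdId, String sdAtr, Status sdStt, Double valVsn) {}

// vec^vsn_IN: the entry an IN caches for a sensor node, extended with the
// time slots of the last report (tsFmS) and the last retrieval by the SN (tsToSN).
record InCacheEntry(int sdId, String sdAtr, Status sdStt,
                    Double valVsn, int tsFmS, int tsToSN) {
    // The SN's copy is synchronized with the IN's exactly when the two
    // time slots coincide.
    boolean syncedWithSink() { return tsFmS == tsToSN; }
}

// vec_SN: the entry the SN caches, indexed additionally by the grid cell ID.
record SnCacheEntry(int sdId, String sdAtr, Double valVsn, int gcId) {}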
Query Processing with Cache Mechanism
Given a network deployed in a certain region where multiple kinds of attributes are of interest, queries are issued periodically and continuously according to the requirements of certain applications. Generally, a multiple-attribute query can be described in terms of a three-dimensional vector, including: (i) a query region; (ii) a set of interested attributes; and (iii) a certain time slot. A query region is typically represented by a rectangle. As mentioned in Section 3.1, the network region is divided into grid cells; consequently, a query region is rewritten into a minimum set of grid cells that covers the rectangle prescribed by the query. Formally, a query is defined as follows:

Definition 2 (Query). A query is a tuple q = (tm, qr, ATR), where:

• tm is the time slot when the query q is issued.
• qr is the query region, which is represented by a set of corresponding square grid cells.
• ATR is the set of attributes of interest in q (a minimal code rendering of this definition is given below).
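A query can be represented directly in code; the following Java sketch mirrors Definition 2, with the query region already rewritten as a set of grid cell IDs (the names are illustrative).

import java.util.Set;

// Definition 2 in code: a query is a tuple q = (tm, qr, ATR).
record Query(int tm,             // time slot when the query is issued
             Set<Integer> qr,    // query region, as a set of grid cell IDs
             Set<String> atr) {} // attributes of interest

// Example: a query issued at time slot 12 over grid cells gc_0..gc_2 and
// gc_5..gc_7, interested in attributes atr_1 and atr_3 (cf. Figure 2, Table 2):
// Query q1 = new Query(12, Set.of(0, 1, 2, 5, 6, 7), Set.of("atr1", "atr3"));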
Given a query q = <tm, qr, ATR_q> where q.qr is rewritten as a set of grid cells GC_q (i.e., q.qr = GC_q), q returns the sensory data at q.tm of the sensor nodes

{v_sn ∈ V_sn | v_sn lies in a grid cell gc_q ∈ GC_q and v_sn.sd_atr ∈ ATR_q}.

It is worth mentioning that in certain domain applications, including health monitoring, queries are typically issued continuously and periodically. The points of interest remain within certain sub-regions for certain contiguous time slots, while evolving moderately to neighboring sub-regions [15]. In this context, when the regions of queries in contiguous time slots overlap, query results from the previous time slots remain valid and (partially) reusable for answering the forthcoming queries. On the other hand, queries are answered independently by the SN. Generally, a certain query q is answered by the SN through the cooperative caching mechanism in the following steps:

• Step 1: The SN determines a set of grid cells GC_q of minimum number that covers the query region q.qr.
• Step 2: The SN synchronizes with the intermediate nodes (INs), which are the head nodes of the grid cells gc_q ∈ GC_q in the index tree, to retrieve the latest sensory data. It is worth mentioning that the SN may cache the sensory data of some, but not all, INs; intuitively, a flag is adopted to specify whether the sensory data of a certain IN have been cached in the memory space of the SN or not.
• Step 3: An IN examines the freshness of the cached sensory data for each sensor node (denoted v_sn) in the corresponding grid cell gc_q.
- When v_sn.ts_fmS = v_sn.ts_toSN, the sensory data cached in the IN are up-to-date and synchronized with those cached in the SN; a flag indicating this scenario is sent to the SN, and the sensory data of v_sn need not be forwarded to the SN.
- Otherwise, the sensory data cached in the IN are not up-to-date. Consequently, the IN, as the head node of the grid cell, requests the latest sensory datum from v_sn, which is routed to the SN afterwards.
Note that the status of v_sn is set to active when v_sn is covered by queries in the current and/or recent time slots. The reader is referred to Section 4.1 for the details of the sensory data synchronization mechanism between an IN and the sensor nodes contained therein.
• Step 4: When the SN has obtained the sensory data of all sensor nodes in each grid cell gc_q ∈ GC_q, the answer to the query q is aggregated. Sensory data that were cached in the memory space of the SN but are not consistent with the latest ones retrieved from the INs are updated accordingly. Note that the SN usually has constraints on its storage and computational capabilities. As presented in Section 4.2, when the sensory data of some grid cells are retrieved from INs, sensory data cached in the SN that may not be reused for answering the forthcoming queries are removed, and the released storage capacity is adopted for caching more popular sensory data.
As discussed in Steps 2-4, the technical details of our two-tier cooperative caching mechanism on the SN and the INs for query answering are presented in Section 4.3.
Sensory Data Synchronization for INs and Corresponding Sensor Nodes
As discussed, in certain applications of WSNs, the points of interest may remain within certain regions for certain time durations, while evolving to neighboring sub-regions moderately and continuously. This suggests that a certain grid cell may be of interest to applications during some time durations but not others. To reduce the energy consumption, sensor nodes that have not contributed to answering queries in the recent time slots are set to the status of inactive. Therefore, these sensor nodes are not required to send sensory data to the IN at each subsequent time slot. Instead, when their sensory data have changed dramatically and the variation is not tolerable for applications, a flag indicating this situation is sent to the IN. It is worth mentioning that the active/inactive strategy is different from adaptive sleeping [43,44] and duty-cycle [45,46] strategies, in which sensor nodes neither sense environmental variables nor respond to queries. For simplicity, we assume that sensor nodes decide to send either sensory data or a flag (usually a single bit) to the IN according to their status of active or inactive. This strategy decreases the amount of data to be sent and thus reduces the energy consumption. In fact, adaptive sleeping and duty-cycle strategies complement our technique, and when applied, energy consumption should be further reduced.
The sensory data synchronization strategy between an IN and its contained sensor nodes is presented in Algorithm 2. For each sensor node v_sn under each IN, v_sn examines whether the sensory datum at this moment (denoted v_sn.val^cur_vsn) has changed dramatically with respect to the sensory datum reported to the IN (denoted v_sn.val_vsn) in previous time slots (Line 2). If the variation v_sn.val^df_vsn, which is represented by the absolute value of the difference between v_sn.val^cur_vsn and v_sn.val_vsn, is above an allowed and pre-specified threshold (denoted thrh_atr), v_sn reports this data change to the IN (Line 3). In this case, v_sn sets its reported sensory datum v_sn.val_vsn to the current value v_sn.val^cur_vsn.
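The per-node decision in Algorithm 2 can be sketched as follows; the threshold value and the IN-side calls are illustrative assumptions, since the source only specifies the |val^cur − val| > thrh_atr test and the active/inactive distinction.

// Sketch of the per-node synchronization check in Algorithm 2 (assumed details).
final class SyncCheck {

    // Allowed variation threshold for this node's attribute (application-specific).
    static final double THRH_ATR = 0.5;

    // Returns true if the node had to contact its IN in this time slot.
    static boolean synchronize(SensorVec cached, double valCur) {
        double valDf = Math.abs(valCur - cached.valVsn()); // variation val^df
        if (valDf <= THRH_ATR) {
            return false; // variation tolerable: nothing is sent, saving energy
        }
        if (cached.sdStt() == Status.ACTIVE) {
            // Active node: report the fresh value; the IN updates its cache.
            // in.updateCache(cached.sdId(), valCur);  // hypothetical IN call
        } else {
            // Inactive node: send a one-bit change flag; the IN invalidates
            // its cached value for this node.
            // in.invalidate(cached.sdId());           // hypothetical IN call
        }
        return true;
    }
}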
Popularity-Based Sensory Data Replacement Mechanism in the SN
As mentioned in Section 3.2, the sensory data of certain sensor nodes may not change dramatically during certain time durations, and domain applications may work well when the bias of the adopted sensory data with respect to the real-time data is within a certain threshold. In this setting, sensory data cached in the memory space of the SN may be (partially) reused for answering the forthcoming queries, rather than being retrieved from the network in real time. This mechanism reduces the effort of sensory data sensing, gathering and routing and, thus, prolongs the network lifetime to some extent. Note that the SN usually has limited storage and computational capabilities and may not be capable of caching the sensory data of all sensor nodes in the network. Therefore, sensory data that have a high possibility of being reused for answering the forthcoming queries should be cached. Generally, queries are issued at each time slot. When fresh sensory data of sensor nodes are retrieved from the network, these fresh sensory data are used to update their obsolete counterparts if the latter were cached in the SN previously. On the other hand, sensory data cached in the SN that may not contribute to the forthcoming queries are removed, and the released memory space is used for caching the fresh sensory data. This section proposes a mechanism for sensory data replacement in the cache of the SN depending on the popularity of sensory data. Note that the network is divided into square grid cells in this article, and a grid cell (denoted gc) with an interested attribute (denoted atr) is considered an atomic unit for query processing. Intuitively, the vector vc = <gc, atr> is applied to represent the index of the set of sensory data to be cached in the SN.
Given a set of sensory data represented by the vector vc_i = <gc_i, atr_i>, where these data (i) are cached in the SN or (ii) were not cached in the SN and are retrieved from the network at this moment, we calculate the popularity pop^i_vc of vc_i according to: (i) the history of the queries conducted in the previous k time slots; and (ii) the size of the sensory data contained in vc_i. A large value of pop^i_vc means that vc_i has a higher possibility of being cached in the SN and of being of interest to the queries in the subsequent time slots. Specifically, pop^i_vc is calculated using the following formula:

pop^i_vc = (1 / sz_vci) × Σ_{j=1}^{k} α^(j−1) × (f^j_i / f^j_all)     (7)

where j = 1 denotes the most recent time slot, and sz_vci means the size of the sensory data in vc_i, which is proportional to the number of sensor nodes in gc_i carrying the attribute atr_i and to the size of a sensory datum for the attribute atr_i. This indicates that the larger the size of vc_i is (i.e., the smaller 1/sz_vci is), the higher the possibility that vc_i is removed from the SN. f^j_i represents the number of queries that are interested in vc_i at the time slot j, while f^j_all represents the number of all vectors vc = <gc, atr> of interest over all queries issued at the time slot j. Generally, the more frequently vc_i is of interest to queries, the larger the popularity pop^i_vc is. The parameter α is an attenuation coefficient set to a value between zero and one; in fact, α reflects the importance of queries conducted in recent, yet different, time slots. Intuitively, the more recent the queries in which vc_i is of interest, the larger the popularity pop^i_vc is. Given a set V_C = {vc_1, vc_2, ...} gathered at a certain time slot, the popularity pop^i_vc of each vc_i ∈ V_C is calculated using Formula (7), and the vc_i are ranked according to these values. The sensory data of the vc_i are cached in the SN in rank order until not enough storage capacity remains in the SN for the next one. These cached sensory data are used for facilitating the cooperative caching mechanism, as detailed in the following.
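Under the reconstruction of Formula (7) above, the popularity computation can be sketched in Java as follows; the decay exponent α^(j−1) with j = 1 as the most recent slot is an assumption consistent with, but not explicitly stated by, the surrounding description.

final class Popularity {

    // Popularity of a <grid cell, attribute> vector vc_i per Formula (7).
    //   szVc  : size of the sensory data in vc_i
    //   fI    : fI[j] = number of queries interested in vc_i in slot j,
    //           with j = 0 the most recent of the k preceding slots
    //   fAll  : fAll[j] = number of all <gc, atr> vectors of interest over
    //           the queries issued in slot j
    //   alpha : attenuation coefficient in (0, 1)
    static double popularity(double szVc, int[] fI, int[] fAll, double alpha) {
        double sum = 0.0;
        for (int j = 0; j < fI.length; j++) {
            if (fAll[j] > 0) {
                sum += Math.pow(alpha, j) * ((double) fI[j] / fAll[j]);
            }
        }
        return sum / szVc; // a larger data size lowers the popularity
    }
}

// Example with k = 4 slots and alpha = 0.6 (the settings used in Section 5):
// double pop = Popularity.popularity(10.0,
//         new int[]{3, 2, 0, 1}, new int[]{20, 18, 15, 22}, 0.6);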
Query Processing with Two-Tier Cooperative Caching Mechanism
Leveraging the sensory data cached in the memory space of the INs and the SN, we propose a two-tier cooperative caching mechanism for answering periodic queries. As presented in our previous work [33], the network region is divided into square grid cells, and a two-dimensional matrix is used to represent them, where each grid cell is accompanied by an identifier (denoted gc_id). gc_id is computed based on: (i) the row (denoted row) and column (denoted col) coordinates of the grid cell; and (ii) the number of columns of the matrix (denoted cols). Specifically, gc_id = row × cols + col. An example is shown in Figure 2, where p_i (i = 1, 2, ...) represents sensor nodes sensing various kinds of attributes. As for the grid cell containing sensor node p17, its grid cell ID is computed as 7 = 1 × 5 + 2, and this grid cell is denoted gc_7. The first grid cell, which contains p1, as shown in Figure 2, is denoted gc_0.
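A minimal helper for this row-major cell numbering (illustrative naming):

final class GridCells {
    // Row-major grid cell ID: gc_id = row * cols + col.
    static int gcId(int row, int col, int cols) {
        return row * cols + col;
    }
}

// Example from Figure 2: p17 lies in row 1, column 2 of a 5-column matrix,
// so its grid cell is gcId(1, 2, 5) == 7, i.e., gc_7; the first cell is gc_0.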
Given a query q to be answered, q is rewritten to determine a minimum set of grid cells that covers q.qr; a grid cell is counted whenever it intersects q.qr. An example is shown in Figure 2, where the query region is represented by a rectangle with dotted lines, and the grid cells for q.qr are marked in gray. The attributes included in q.ATR are represented using a bitmap. An example is shown in Table 2, where one means that the query is interested in the corresponding attribute, while zero means it is not. The parameter k represents the number of attributes to be sensed in a certain network.

Table 2. An example of a bitmap for the attributes of interest in a certain query q, where atr_i specifies the i-th attribute. The value 1 means that a certain attribute (e.g., atr_3) is of interest to this query q, while 0 means it is not.

When answering a query q, the sensory data of multiple attributes in a certain grid cell should be gathered and aggregated at once. An example is the attributes atr_1 and atr_3 with respect to the grid cells gc_0, gc_1, gc_2, gc_5, gc_6 and gc_7, as shown in Figure 2 and Table 2. To facilitate the query processing, a GC-attribute table is adopted to represent this relation between grid cells and interested attributes. For instance, Table 3 shows the relation between the grid cells (i.e., gc_0, gc_1, gc_2, gc_5, gc_6 and gc_7) and the interested attributes (i.e., atr_1 and atr_3). It is worth mentioning that a grid cell with a single attribute of interest is considered the atomic unit for query processing and for the sensory data caching mechanism presented in Section 4.2. Generally, Table 3 is an example of the relation between grid cells and attributes for a single query, where one means that the corresponding attribute in a certain grid cell is of interest to the query, whereas zero means it is not. A table in this format can also specify the relation between grid cells and attributes aggregated over multiple queries that are processed concurrently at a certain time slot. For simplicity, as presented by Algorithm 3, a single query is considered in the query processing procedure leveraging the cooperative caching mechanism, whereas multiple queries can be handled in a similar fashion. Besides, a table named the GC-SNCache table, an example of which is shown in Table 4, is used to represent the sensory data of grid cells that are currently cached in the SN, where one means that the corresponding attribute in a certain grid cell has been cached in the memory space of the SN, whereas zero means it is not. Note that the SN is usually limited in storage and computational capabilities and should cache the sensory data of grid cells that have a high possibility of being covered by the forthcoming queries.

Table 3. An example of the GC-attribute table, where gc_i specifies the i-th grid cell and atr_j specifies the j-th attribute. The value 1 means that the sensor nodes in a certain grid cell (e.g., gc_1) are of interest for a certain attribute (e.g., atr_3), while 0 means they are not.

Table 4. An example of the GC-SNCache table, where gc_i specifies the i-th grid cell and atr_j specifies the j-th attribute. The value 1 means that the sensory data for a certain grid cell (e.g., gc_1) with a certain attribute (e.g., atr_1) are cached in the memory of the SN, while 0 means they are not.
As presented by Algorithm 3, when a query q is issued at a certain time slot, q is rewritten, and a set of grid cells (denoted GC^q_set) covering q.qr is retrieved, as illustrated by Figure 2 (Line 1). If the attributes of interest to the query q (denoted q.ATR) are not sensed by the sensor nodes in the sub-region corresponding to the tree node tn, or the query region (i.e., GC^q_set) and the sub-region covered by the tree node (denoted tn.GC) have no overlapping area (Line 2), no sensory data are returned, and the tree node tn does not contribute to the query q (Line 3). The symbol ∅ in Line 2 specifies an empty set. Otherwise, tn contains sensor nodes that can contribute to answering the query q.
Generally, two scenarios are considered, depending on whether tn corresponds to a leaf node in the index tree or not. When tn is not a leaf node of the index tree (Line 5), the left and right children of tn are processed independently, and their results are represented as q^lf_rt and q^rt_rt, respectively (Lines 6-7). The result of q (denoted q_rt) is assembled as the aggregation of q^lf_rt and q^rt_rt (Line 8). On the other hand, when tn is a leaf node of the index tree, tn actually corresponds to a grid cell. In this setting, the intermediate node (IN) with respect to tn is identified (Line 10), and the set of sensor nodes (denoted V^all_sn) contained in tn that contribute to answering q is retrieved (Line 11). The status of the sensor nodes in V^all_sn is changed to active if it is currently inactive (Line 12).
Algorithm 3 QueryProcCoopCaching
Require: q: a query issued at a certain time slot; tn: a node in the index tree
Ensure: q_rt: the result for the query q
1: GC^q_set ← set of grid cells that are covered by q.qr
2: if q.ATR ∩ tn.IvtF = ∅ or tn.GC ∩ GC^q_set = ∅ then

For a sensor node v_sn whose cached entry is not up-to-date (for instance, when v_sn.ts_toSN is null, which suggests that the sensory data for v_sn have never been reported to the SN) (Line 13), the latest sensory datum for v_sn is retrieved from the network, and the time slots recording when (i) the sensor node reported its sensory datum to the IN (denoted ts_fmS) and (ii) the IN reported the sensory datum to the SN (denoted ts_toSN) are updated accordingly (Line 16). It is worth mentioning that, when the status of v_sn is inactive, the IN may not cache the sensory data for v_sn, according to our data synchronization mechanism presented in Algorithm 2. As for the sensor nodes whose cached sensory data are consistent between the SN and the corresponding IN according to the GC-SNCache table, these sensory data are neither retrieved from the network nor routed to the SN; instead, the sensory data cached in the SN are adopted for answering the query q. This strategy avoids energy consumption that would otherwise be unnecessary. Otherwise, the GC-SNCache table is updated to reflect the sensory data consistency between the SN and the corresponding IN at this moment (Lines 17 and 21). Note that when the SN does not cache the sensory data for the sensor nodes in V^all_sn (Line 17), the sensory data of all sensor nodes in V^all_sn are forwarded to the SN for answering the query q (Lines 18-20). Consequently, the sensory data of all sensor nodes in V^all_sn are synchronized between the SN and the corresponding IN (Lines 23-25). The result of the query q, represented by q_rt, is assembled leveraging the sensory data cached in the SN accordingly (Lines 26-28).
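The recursive structure of Algorithm 3 can be sketched in Java as follows; the tree node interface is an illustrative assumption, and only the pruning test and the leaf/non-leaf split are taken directly from the description above.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of Algorithm 3's recursion (assumed interfaces; see lead-in).
final class QueryProcCoopCaching {

    interface TreeNode {
        boolean isLeaf();
        TreeNode left();
        TreeNode right();
        Set<String> invertedFile();     // tn.IvtF: attributes sensed below tn
        Set<Integer> gridCells();       // tn.GC: grid cells covered by tn
        Map<Integer, Double> answerLeaf(Query q); // leaf: resolve via IN/SN caches
    }

    static Map<Integer, Double> process(Query q, TreeNode tn) {
        Map<Integer, Double> result = new HashMap<>();
        // Line 2: prune when tn senses no queried attribute or covers no queried cell.
        boolean noAttr = q.atr().stream().noneMatch(tn.invertedFile()::contains);
        boolean noCell = q.qr().stream().noneMatch(tn.gridCells()::contains);
        if (noAttr || noCell) {
            return result; // Line 3: tn does not contribute to q
        }
        if (!tn.isLeaf()) {
            // Lines 5-8: recurse into both children and aggregate their results.
            result.putAll(process(q, tn.left()));
            result.putAll(process(q, tn.right()));
        } else {
            // Lines 10-28: the leaf's IN answers from its cache and the SN's,
            // retrieving fresh data from sensor nodes only where stale.
            result.putAll(tn.answerLeaf(q));
        }
        return result;
    }
}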
The time complexity of Algorithm 3 is O(n × m), where n is the number of tree nodes to be examined in the index tree leveraging the inverted files, and m is the maximum number of sensor nodes in a grid cell. The worst case occurs when all sensor nodes in the whole network are traversed; in this setting, all sensor nodes are required to report their sensory data to the SN.
Implementation and Evaluation
A prototype has been implemented as a Java program, and experiments have been conducted for evaluating our technique. In the following, we introduce the environmental settings and discuss the results of our experiments.
Environmental Settings
Experiments are designed to evaluate our technique, and the factors considered include various skewness degrees for the network setting and various cache sizes in the SN. In the network setting, 1000 sensor nodes are generated and distributed unevenly, with skewness degrees varying from 20% to 80%. Intuitively, a skewness degree (denoted sd) is computed using the formula sd = (dn − sn) / N, where dn and sn refer to the numbers of sensor nodes in dense and sparse sub-regions, respectively, while N is the number of sensor nodes deployed in the network (i.e., N = dn + sn) [33]. For instance, assume a network contains N (e.g., N = 1000) sensor nodes, the network region is divided into four sub-regions that are rectangular in shape and the same in geographical size, and the skewness degree is set to sd (e.g., sd = 60%). Consequently, N × (1 − sd) (i.e., 1000 × (1 − 60%) = 400) sensor nodes are deployed evenly in the whole network, while the remaining N × sd (i.e., 1000 × 60% = 600) sensor nodes are distributed over two randomly chosen dense sub-regions. Therefore, a network region with N sensor nodes whose distribution follows the skewness degree sd is constructed.
Experiments are performed on a desktop with an Intel(R) Core(TM) i5-2400 CPU at 3.10 GHz, a 4-GB memory and a 32-bit Windows system. The parameter settings for our experiments are presented in Table 5. Note that several, but typically not too many, kinds of attributes are of interest in domain applications. Without loss of generality, 10 kinds of attributes are assumed in the experiments.
A sensor node is randomly assigned an attribute to sense, so each kind of attribute is sensed by around 1000/10 = 100 sensor nodes. The network region is set to 350 m × 350 m and is divided into square grid cells of the same geographical size. The side-length of the grid cells is set to √2r [40], where r = 50 m is the communication radius of sensor nodes. The cache size of the SN is set to a number between 500 and 900, which means that the SN can accommodate the sensory data of 500-900 sensor nodes, respectively. The attenuation coefficient α and the number of preceding time slots k used in Formula (7) are set to 0.6 and four, respectively. Queries are issued every 2 min, and sensory data between the INs and the corresponding sensor nodes are synchronized every 5 min when the status of these sensor nodes is inactive. These parameters can be set to other values if appropriate.
Experimental Evaluation
An index tree is constructed by recursively merging the neighboring sub-regions between which forwarding messages of the same size consumes the least energy. An example is shown in Figure 3, where the skewness degree is set to 60%. Leaf nodes are denoted by the IDs of the corresponding grid cells, as mentioned in Section 3.2, and R_i (i = 1, 2, ...) represents non-leaf nodes, which are (sub-)regions composed of several neighboring grid cells. For instance, R_5 represents a sub-region whose grid cells are gc_22 and gc_23, while R_2 is composed of R_3 and grid cell gc_24. Query answering is performed leveraging this index tree for data gathering, aggregation and routing to the SN.
Our technique is evaluated with respect to four types of queries [33], distinguished by the sub-region and attributes of interest; a query is denoted as q = <tm, qr, ATR_q>:

• Single-attribute query in the whole region (SAQWR): q retrieves the sensory data of all sensor nodes in the whole network region (i.e., q.qr = nr) for a certain attribute (i.e., |q.ATR| = 1), where nr specifies the whole network region. The sensory data of the attribute q.ATR over nr are gathered and routed to the SN at a certain time slot q.tm.
• Single-attribute query in a sub-region (SAQSR): The difference between SAQSR and SAQWR lies in the query region. SAQSR retrieves the sensory data of sensor nodes in a sub-region of the network (i.e., q.qr ⊂ nr) for a certain attribute q.ATR (i.e., |q.ATR| = 1).
• Multi-attribute query in the whole region (MAQWR): q.qr is the whole network region (i.e., q.qr = nr), and q.ATR contains some, but not all, attributes. Generally, the sensory data of the attributes in q.ATR over nr are gathered and routed to the SN at q.tm.
• Multi-attribute query in a sub-region (MAQSR): The difference between MAQWR and MAQSR lies in the query region: MAQSR retrieves the sensory data of sensor nodes in a sub-region of the network (i.e., q.qr ⊂ nr) for multiple attributes.

Figure 4. Comparison of the accumulated energy consumption for MAQWR, where the number of attributes is set to 1, 3, 5, 7 and 9, respectively. The gradient of the curves represents the ratio of energy consumption for query answering. This figure shows that when the cache size of the SN is relatively small and not capable of caching all sensory data requested by a certain query, more sensory data have to be replaced in the SN according to our data replacement mechanism, as presented in Section 4.2, and more energy is required for answering the query. For instance, in comparison with the case when the number of attributes is three, 198% (or 355%) more energy is consumed when the number of attributes is seven (or nine).
Experiments are conducted to evaluate the performance of these four kinds of queries leveraging our two-tier cooperative caching mechanism. Figure 4 compares the energy consumption of MAQWR, where the number of attributes varies over 1, 3, 5, 7 and 9, respectively; the cache size of the SN is set to 600, and the skewness degree is set to 60%. It is worth mentioning that Figure 4 shows the total energy consumed from scratch, rather than that at a certain time point; the gradient of a curve corresponds to the energy consumed at a certain time point. The same principle holds for the energy consumption shown in Figures 5-7. Intuitively, more energy is consumed when more attributes are of interest, since more sensor nodes are involved in sensory data gathering and aggregation. The energy consumption for the scenarios where the number of attributes is 1, 3 or 5 is relatively small and stable. In contrast, the energy consumption for the scenarios where the number of attributes is seven or nine increases to a certain extent. Note that around 700 (or 900) sensor nodes are involved in the query answering when the number of interested attributes is seven (or nine). Since the cache size is 600, the sensory data of around 100 (or 300) sensor nodes cannot be cached in the SN. Hence, some sensory data have to be gathered from the network at each time slot, and the sensory data replacement mechanism is constantly enacted to cache the sensory data with the highest popularity. These are the main causes of the increase in energy consumption. For instance, in comparison with the case when the number of attributes is three, 198% (or 355%) more energy is consumed when the number of attributes is seven (or nine). Generally, the larger the cache size of the SN, the less energy is consumed in the network. Caching in the SN reduces the energy consumption to a certain extent, especially when the query region is relatively large and the number of interested attributes is relatively big.

Figure 5. Comparison of the accumulated energy consumption for MAQWR, where the cache size of the SN is set to 500, 600, 700, 800 and 900, respectively. The gradient of the curves represents the ratio of energy consumption for answering the query. This figure shows that when the cache size of the SN is large enough to cache all sensory data requested by the query, the energy consumption is relatively small and steady; otherwise, the energy consumption is much higher and increases significantly. For instance, in comparison with the case when the cache size is set to 800, 66% (or 40%) more energy is consumed when the cache size is set to 500 (or 600).

Figure 6. Comparison of the accumulated energy consumption for single-attribute query in the whole region (SAQWR), where the skewness degree of the sensor node distribution in the network is set to 20%, 40%, 60% and 80%, respectively. The gradient of the curves represents the ratio of energy consumption for answering the query. This figure shows that the bigger the skewness degree is, the less energy is consumed for answering the query. For instance, in comparison with the case when the skewness degree is set to 80%, 73% (or 91%) more energy is consumed when the skewness degree is set to 40% (or 20%).

Figure 8 shows the cache hit rates hrt_cah for MAQWR.
hrt_cah is calculated as the ratio |SD^q_cah| / |SD^q|, where: (i) SD^q_cah is the set of sensory data cached in the SN that contribute to answering the query q; and (ii) SD^q is the set of sensory data that q inquires. Without loss of generality, the value of a sensory datum is assumed to vary according to the formula val_vsn = log(k × t_cur + 1) + C, where: (i) t_cur is the current time; and (ii) k and C are constants that are initially set to random values and vary according to a normal distribution. Therefore, the sensory data of sensor nodes are mostly different and change moderately. Figure 8 shows that the cache hit rates for the scenarios where the number of attributes is 1, 3 or 5 are quite high (roughly 95%), since the SN has enough storage capacity to cache almost all sensory data gathered in recent time slots. As for the scenarios where the number of attributes is seven or nine, the cache hit rates are relatively lower (roughly 70%). Similar to the situation for energy consumption, a certain amount of sensory data has to be gathered from the network in real time for query answering. In addition, sensory data that might be reused for answering the forthcoming queries have to be removed from the cache by the data replacement mechanism, due to the limited storage capacity of the SN. Figure 8 also shows that the cache hit rates drop every 5 min: since the INs synchronize with the sensor nodes in the corresponding grid cells every 5 min, sensory data that have changed remarkably are retrieved from the network; these changed data are routed to the SN but are not counted in SD^q_cah, which induces the drop of hrt_cah.

Figure 7. Comparison of the accumulated energy consumption for single-attribute query in a sub-region (SAQSR) with various query configurations, where S1-sparse means sparse sub-regions with our cache mechanism, S2-sparse means sparse sub-regions without our cache mechanism, S1-dense means dense sub-regions with our cache mechanism and S2-dense means dense sub-regions without our cache mechanism. The gradient of the curves represents the ratio of energy consumption for answering the query. This figure shows that the energy consumption decreases dramatically when our cooperative caching mechanism is adopted, especially when the sensor nodes are densely deployed in the network. For instance, 348% (or 599%) more energy is consumed in the case of S2-sparse (or S2-dense) than in the case of S1-sparse (or S1-dense).
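The synthetic signal and the hit-rate metric defined above can be sketched as follows; the drift magnitude of k and C is an assumed detail beyond what the text specifies.

import java.util.Random;

final class SyntheticData {
    private final Random rnd = new Random();
    private double k = rnd.nextDouble(); // constants initialized randomly
    private double c = rnd.nextDouble();

    // val_vsn = log(k * t_cur + 1) + C, with k and C drifting normally.
    double value(double tCur) {
        k += 0.01 * rnd.nextGaussian(); // assumed drift magnitude
        c += 0.01 * rnd.nextGaussian();
        return Math.log(Math.max(k, 0) * tCur + 1) + c;
    }

    // Cache hit rate hrt_cah = |SD^q_cah| / |SD^q|.
    static double hitRate(int hits, int requested) {
        return requested == 0 ? 0.0 : (double) hits / requested;
    }
}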
Figure 8. Comparison of cache hit rates for MAQWR, where the number of attributes is set to 1, 3, 5, 7 and 9, respectively. Similar to Figure 4, when the cache size of the SN is relatively small and not capable of caching all sensory data requested by a certain query, the cache hit rates for sensory data cached in the SN decrease significantly (roughly from 95% down to 70%).

Figures 5 and 9 show the energy consumption and cache hit rates for MAQWR, where: (i) the cache size of the SN is set to 500, 600, 700, 800 and 900, respectively; and (ii) the number of attributes of interest to queries is seven, so the sensory data of around 700 sensor nodes need to be cached in the SN. When the cache size is more than 700, much less energy is consumed (roughly a 40%-66% decrease), as shown in Figure 5, and the cache hit rates are much higher (roughly a 25% increase), as shown in Figure 9. As discussed, the sensory data replacement mechanism and real-time data gathering from the network are the main causes of higher energy consumption and dropping cache hit rates.

Figure 9. Comparison of cache hit rates for MAQWR, where the cache size of the SN is set to 500, 600, 700, 800 and 900, respectively. Similar to Figure 8, when the cache size of the SN is relatively small and not capable of caching all sensory data requested by a certain query, the cache hit rates for sensory data cached in the SN decrease significantly (roughly from 95% down to 70%).

Figure 6 shows the energy consumption for SAQWR, where: (i) the skewness degree is set to 20%, 40%, 60% and 80%, respectively; and (ii) the cache size of the SN is set to 600. Note that around 100 sensor nodes are responsible for the query answering, and the cache of the SN can accommodate all of their sensory data. This figure shows that the bigger the skewness degree, the less energy is consumed for the query answering in the whole network. For instance, in comparison with the case when the skewness degree is set to 80%, 73% (or 91%) more energy is consumed when the skewness degree is set to 40% (or 20%). As presented in Section 3.1, our index tree is constructed by merging the two neighboring sub-regions between which forwarding messages of the same size consumes the least energy. Besides, our index tree is relatively balanced, which reduces the path lengths from dense sub-regions for routing sensory data to the SN. Generally, our technique is more efficient when sensor nodes are distributed in a skewed fashion.
As mentioned, the points of interest (POI) of domain applications are usually within a certain sub-region for a certain time duration, while evolving moderately to neighboring sub-regions. Queries are often issued periodically and continuously; a query region is typically part of a POI region, and concurrent queries often have overlapping sub-regions. Experiments are conducted for SAQSR, where the skewness degree is set to 60% and the cache size is set to 600. A POI region is assumed to be rectangular in shape and is set within dense or sparse sub-regions of the network. A query region is encoded as a set of grid cells contained in the POI region. These grid cells may not be neighboring, to simulate the scenario where multiple queries are issued concurrently; the grid cells at a certain time slot are randomly determined, and the number of grid cells may differ between contiguous time slots. The settings of the experiments include: (i) S1-sparse: sparse sub-regions with our cache mechanism; (ii) S2-sparse: sparse sub-regions without our cache mechanism; (iii) S1-dense: dense sub-regions with our cache mechanism; and (iv) S2-dense: dense sub-regions without our cache mechanism. Figure 7 shows the energy consumption for these four experimental configurations. Intuitively, our cache mechanism reduces the energy consumption to a large extent when the query region is within sparse or dense sub-regions: S2-sparse consumes 348% more energy than S1-sparse, and S2-dense consumes 599% more energy than S1-dense. Note that the difference in energy consumption between S1-sparse and S1-dense is quite small. This indicates that our cache mechanism is efficient in reducing energy consumption and increasing the network capability, especially when the cache of the SN can accommodate almost all sensory data of interest to queries.

Figure 10. Comparison of cache hit rates for S1-sparse and S1-dense, where S1-sparse means sparse sub-regions with our cache mechanism, and S1-dense means dense sub-regions with our cache mechanism. Similar to Figure 6, this figure shows that our cooperative caching mechanism benefits the cache hit rates for the sensory data cached in the SN (roughly from 30% to 65%), especially when sensor nodes are densely deployed in the network.

Figure 10 shows the cache hit rates for the scenarios S1-sparse and S1-dense. Generally, the cache hit rate of S1-sparse (roughly 30%) is lower than that of S1-dense (roughly 65%), although the energy consumption of S1-sparse and S1-dense is almost the same, as shown in Figure 7. Since grid cells are chosen randomly to represent a query region, the grid cells shared by consecutive queries are relatively fewer in number than in the setting of Figure 6, which induces smaller cache hit rates. Note that relatively few sensor nodes are involved in S1-sparse, so a minor change in the number of sensor nodes may have a relatively big impact on the cache hit rate, which results in a relatively smaller cache hit rate for S1-sparse. Consequently, our cache mechanism is more efficient especially when periodic and continuous queries have more overlapping sub-regions.
Comparison with Relevant Techniques
This section presents the results of our experiments comparing the efficiency and performance of our popularity-based cooperative caching mechanism (denoted PCC) with those of our multiple-attribute query processing (MQP) mechanism, as presented by Zhou et al. [33]. Different from PCC, MQP does not adopt a cooperative caching mechanism for facilitating the query processing; both are evaluated under the same environmental settings.
Experiments have been conducted to compare the energy consumption of PCC and MQP [33] with respect to the query types MAQWR and SAQWR. The number of attributes is set to 1, 3, 5, 7 and 9, respectively; the cache size of the SN is set to 900, and the skewness degree is set to 60%. Figure 11 shows the experimental results, where the energy consumption at certain time points (e.g., TP2, TP4, etc.) is illustrated; the symbol TPi (e.g., i = 2) means the i-th (second) time point. It is worth mentioning that the energy consumption of MQP is almost the same at all time points, and its values are illustrated at the left side of Figure 11. As for our PCC, the energy consumption is quite large at the first time point (denoted INIT in Figure 11), since no sensory data have yet been cached in the SN to reduce real-time data gathering from the network, and all intermediate nodes (INs) are required to gather sensory data from the corresponding sensor nodes and to cache them locally. The energy consumption at the succeeding time points decreases to a large extent due to the reuse of sensory data cached in the SN for answering the forthcoming queries. The figure shows that the energy consumption reaches a steady state after around 18 time slots when the number of attributes is one and around 40 time slots when the number of attributes is nine, since the sensory data cached in the SN and the INs can hardly reduce the energy consumed for query processing any further. Generally, our PCC outperforms MQP in energy consumption, especially when the query attributes are relatively large in number. Figure 12 illustrates the energy consumption of our PCC and MQP [33] at different time points. As previously mentioned, the energy consumption of MQP is the same at all time points. Figure 12 shows that more energy is consumed by PCC at the first time point (denoted INIT), while the energy consumption of our PCC is much less than that of MQP in the subsequent time points. It is evident from this figure that our PCC is more energy efficient than MQP, especially when the query attributes are relatively large in number.

Figure 11. Comparison of the energy consumption of our popularity-based cooperative caching (PCC) and multiple-attribute query processing (MQP) [33] for the query types MAQWR and SAQWR, where the number of attributes is set to 1, 3, 5, 7 and 9, respectively. The energy consumption at various time points (TP2, TP4, etc.) is illustrated. Generally, the energy consumption of our PCC is much smaller than that of MQP (roughly 30% less on average), especially when the number of query attributes is relatively large. Note that the energy consumption of PCC is quite large at the first time point, since no sensory data have been cached in the SN to reduce real-time data gathering from the network for facilitating the query processing.

Figure 12. Comparison of the energy consumption of our PCC and MQP [33] for the query types MAQWR and SAQWR, where the number of attributes is set to 1, 3, 5, 7 and 9, respectively. This figure shows that more energy is consumed by our PCC than MQP at the first time point (denoted INIT), while much less energy is consumed afterwards.
Related Work and Comparison
Traditional techniques have been developed for facilitating query processing in wireless sensor networks (WSNs) leveraging cooperative caching mechanisms. We have previously proposed a popularity-based caching mechanism for optimizing periodic queries in WSNs [42]. In that work, one kind of attribute is sensed by the sensor nodes in the network, sensory data are cached only in the memory space of the sink node, and these data are assumed to remain valid for answering the forthcoming queries within a certain number of time slots. Usually, the point of interest evolves moderately to neighboring sub-regions, whose sensory data may have become stale already and may not be cached at the sink node at that moment. To facilitate this query processing procedure, the grid cells that may be covered by the forthcoming queries are derived from the previous queries according to the popularity of the interested grid cells, and sensory data are pre-fetched from the network for the grid cells whose popularity is among the highest. Generally, the technique developed in this article is inspired by our previous work [42]. However, in this technique, multiple kinds of attributes are considered to be sensed by sensor nodes, and the staleness of sensory data is determined according to whether the sensory data have changed significantly. Besides, this technique removes the assumption made by Zhou et al. [42] that the sink node has enough storage capability for caching the sensory data of the sensor nodes in the whole network; instead, only sensory data that have a high possibility of being reused for answering the forthcoming queries for certain attributes are cached in the sink node. Grid cells that are not covered by recent queries cache the sensory data of certain attributes, or just a flag indicating a dramatic change at their sensor nodes. This two-tiered cooperative mechanism, which caches sensory data at the sink node and at the head nodes of grid cells, is efficient for facilitating query answering, as evidenced by the experimental evaluation in Section 5.3.
A cluster-based cooperative caching mechanism is developed by Chauhan et al. [47] for supporting query processing. The network is divided into non-overlapping clusters, and each sensor node is assumed to have some cache space. Sensory data are stored in the cache space of the sensor nodes near the sink node. When a query request is to be responded to, sensory data are retrieved through a cache discovery process: generally, a sensor node responsible for this query is examined to determine whether the required sensory data are saved in its cache space; if not, the cluster of the source sensor node is examined, and the source sensor node is visited for routing the required sensory data to the sink node. This method claims to reduce the bandwidth, energy and storage requirements of the network. However, the sink node is not responsible for caching sensory data; instead, the sensor nodes near the sink node cache sensory data and are responsible for routing data to the sink node. These sensor nodes consume much more energy and deplete their energy quickly. In our technique, sensory data are cached in the sink node and in the head nodes of grid cells, and the popularity of sensor nodes is considered when determining which sensory data of certain attributes should be cached.

A cooperative caching mechanism is developed by Sharma et al. [48], where sensory data are cached in the sink node and in sensor nodes. A cache zone is formed as a region around a sensor node, where the storage of the surrounding sensor nodes is used to build a larger cumulative cache. A cache discovery mechanism is proposed for identifying required data items, and a cache replacement policy is developed for evicting data items of less importance. This is an interesting work, and it inspired us to develop our technique. Generally, this work tries to increase data availability near the sink and to reduce unnecessary energy consumption. However, a query processing mechanism for multiple queries issued periodically and continuously is not specifically discussed.
To better support cooperative caching in WSNs, sensor nodes that can take the role of coordinating, packet caching and forwarding are essential. A metric of energy betweenness centrality is proposed by Dimokas et al. [49] to evaluate the significance of sensor nodes and to examine whether these sensor nodes can take a special cooperative role concerning caching decisions; consequently, a new energy-efficient cooperative caching protocol is developed. An in-network distributed query processor called Corona is developed by Khoury et al. [32], which clusters sensor readings into a local storage buffer in sensor nodes. When the freshness of these sensory data is within a certain threshold, they can be used for answering concurrent queries directly, rather than being fetched from the network in real time; therefore, sensor activation can be minimized. A survey of cache-based policies in WSNs for reducing network traffic and bandwidth usage is presented by Kumar et al. [30]. Besides, cooperative caching has been used in other domains, such as mobile ad hoc networks, to increase data availability and reduce data access delays [50], and cooperative caching policies have been applied in social wireless networks to minimize electronic content provisioning cost [51]. Generally, these techniques explore the routing of data packets in the network, and mostly the processing of a single query is of interest; the strategy of caching sensory data in intermediate nodes is their main focus. In this article, by contrast, we propose a two-tiered cooperative caching mechanism for reducing the energy consumption of query answering in the forthcoming time slots.
To establish efficient paths for routing sensory data to the sink node, a cache-based routing metric is proposed by Grilo et al. [29], where intermediate nodes in WSNs are used for caching packets and transmitting them to the sink node. Note that in a heterogeneous Internet of Things, sensor nodes may differ to some extent in their storage and computational capabilities; sensor nodes with larger capabilities are therefore good candidates for caching packets. Consequently, cache utilization is introduced as a novel routing metric in this technique, and routing paths are selected in favor of cache-rich intermediate nodes. Generally, this work is interesting and inspires caching packets at intermediate nodes while determining appropriate routing paths, whereas we explore a cooperative caching mechanism at the sink node and the head nodes of grid cells, according to the popularity of sensory data derived from the recent query history. The two techniques can complement each other to better facilitate query processing and, thus, improve energy efficiency. A caching platform for reducing the network communication cost is presented by Léone et al. [52]. A gateway based on a constrained application protocol (CoAP)-HTTP proxy is proposed, where cross-layer data are cached in the proxy [53]. When a query is to be answered, the gateway acts as a delegate for query answering; only when the requested sensory data are not fresh enough, or are missing from the cache, is the query forwarded to the network to fetch the data. This work is similar to ours; however, mainly a caching model is presented, and the technical details of the caching strategy remain unclear.
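The general idea of cache-aware path selection can be illustrated with a toy shortest-path computation. The sketch below is our own simplification, not the metric of [29]: it merely makes cache-rich nodes cheaper to traverse, so a standard Dijkstra search prefers them.

```python
# Toy sketch (our own, hypothetical) of routing that favors cache-rich nodes.
import heapq

def cache_aware_route(graph, cache, src, dst):
    """graph: node -> list of neighbors; cache: node -> free cache slots.
    Edge cost 1/(1 + cache[v]) makes cache-rich nodes cheaper to visit."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in graph[u]:
            nd = d + 1.0 / (1.0 + cache[v])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

g = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
print(cache_aware_route(g, {"s": 0, "a": 5, "b": 0, "t": 2}, "s", "t"))
# ['s', 'a', 't']  -- the route goes through the cache-rich node a
```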
Besides, some methods study periodic query processing. In [17,24], the authors study the achievable network capacity of snapshot and continuous data collection for a probabilistic WSN model. Cell-based path scheduling and zone-based pipeline scheduling algorithms are proposed for improving the concurrency of snapshot and continuous data collection, respectively. This work inspired us to partition the network region into square grid cells and to use grid cells as elementary units for caching sensory data. Node localization is another important research problem, especially for large-scale WSNs [54,55]: the network is divided into overlapping local networks, the corresponding local maps are constructed using a local semidefinite programming method, and these local maps are merged into a global map that contains the exact positions of the nodes of interest. It is argued that in certain WSN applications, query requests arrive periodically with stringent delay constraints [10]; therefore, periodic aggregation query scheduling is performed by the designed routing strategy along with packet scheduling protocols. The quality of queries in WSNs is discussed by Brayner et al. [56], aiming to deliver the expected level of data quality while ensuring the intelligent consumption of limited network resources. Generally, these methods mainly explore strategies for supporting (periodic) query processing in WSNs, in order to consume fewer resources while prolonging the network lifetime. The sharing of sensory data among concurrent queries within a time slot, and the reuse of sensory data gathered by recent queries for answering forthcoming ones, are not discussed extensively.
Conclusions
Wireless sensor networks, which act as important interfaces between physical environments and computational systems, have been used extensively to support widespread application domains. Usually, multiple attributes are sensed in a network, and multiple-attribute sensory data are queried from the network continuously and periodically to facilitate domain applications. Note that sensory data may not change significantly within a certain time duration, and applications may tolerate a certain deviation of the adopted sensory data from the accurate values. Consequently, sensory data gathered at a given moment can be shared for answering concurrent queries and may be reused for answering forthcoming queries. To exploit this opportunity, a two-tier cooperative caching mechanism is proposed in this article. Specifically, the popularity of sensory data, which reflects the likelihood that these data will be reused by forthcoming queries, is calculated from the queries issued in recent time slots. Sensory data of higher popularity are cached in the sink node, where they can be used for query answering directly; sensory data of lower popularity are cached in the head nodes of grid cells. This two-tier cooperative caching strategy significantly promotes the reuse of sensory data for answering forthcoming queries. The results of the experimental evaluation show that our technique is efficient in reducing the energy consumption of query answering, especially when the number of queries is relatively large. Specifically, the energy consumption when the sink node lacks caching space is around 40%-66% higher (Figure 5) than when the sink node can cache all sensory data requested by the queries, while the cache hit rate also increases significantly between these two cases (roughly from 70% to 95%; Figures 8 and 9). Compared with our previous technique [33], this two-tier cooperative caching strategy can reduce the energy consumption of query answering by around 30%, as shown in Figure 11.
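As a concrete illustration of the caching decision summarized above, the following minimal sketch ranks attributes by their popularity in recent time slots and assigns them to the two cache tiers. All names, capacities, and the simple ranking rule are our own illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of popularity-based two-tier cache placement.
from collections import Counter
from typing import List, Tuple

Query = List[str]  # a query is modeled as the list of attributes it requests

def attribute_popularity(recent_queries: List[Query]) -> Counter:
    """Count how often each attribute was requested in recent time slots."""
    pop = Counter()
    for q in recent_queries:
        pop.update(q)
    return pop

def assign_tiers(recent_queries: List[Query],
                 sink_capacity: int,
                 head_capacity: int) -> Tuple[set, set]:
    """Most popular attributes go to the sink cache; the next most
    popular go to the caches of the grid-cell head nodes."""
    ranked = [a for a, _ in attribute_popularity(recent_queries).most_common()]
    sink_cache = set(ranked[:sink_capacity])
    head_cache = set(ranked[sink_capacity:sink_capacity + head_capacity])
    return sink_cache, head_cache

# Example: three recent time slots of queries
history = [["temp", "humidity"], ["temp", "light"], ["temp", "humidity"]]
sink, heads = assign_tiers(history, sink_capacity=1, head_capacity=2)
print(sink)   # {'temp'}: answered directly at the sink
print(heads)  # {'humidity', 'light'}: answered at grid-cell head nodes
```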
As to future directions, we are adapting this cooperative caching mechanism to the scenario where a wireless sensor network is shared by multiple applications. The challenge includes the sharing and reuse of query results across these applications for answering forthcoming queries, in order to reduce the energy consumption. Besides, pre-fetching sensory data from the network should help decrease energy consumption further, especially when the forthcoming queries can be (partially) predicted from the queries issued in the past. A prediction model is under construction.
"Computer Science",
"Engineering"
] |
Calcium imaging at kHz frame rates resolves millisecond timing in neuronal circuits and varicosities
We have configured a widefield fast imaging system that allows imaging at 1000 frames per second (512x512 pixels). The system was extended with custom processing tools, including a time correlation method that facilitates the analysis of static subcellular compartments (e.g., neuronal varicosities) with enhanced contrast, as well as a dynamic intensity processing (DIP) algorithm that aids in data size reduction and in fast visualization and interpretation of timing and directionality in neuronal circuits. This system, together with our custom-developed processing tools, enables efficient detection of fast physiological events, such as action potential dependent calcium steps. We show, using a specific blocker of nerve communication, that with this setup it is possible to discriminate between pre- and postsynaptic events in an all-optical way.
Introduction
To capture all facets of cellular communication in an optical way, it is of utmost importance to record with high temporal resolution, as many biological interactions occur faster than the human eye can see or a classic camera can capture. Action potential firing and the subsequent intracellular calcium signaling events in neurons, which occur at the millisecond timescale, are a typical example of such fast cellular signaling. To fully understand neuronal function it is important to resolve timing at this scale, because it allows assessing the chronology of events taking place in multicellular neuronal networks. Electrophysiological techniques such as patch clamping are so far the only approaches that deliver sufficient temporal resolution [1]. However, they are quite limited in terms of spatial resolution, as only one or a few [2-4] cells can be monitored simultaneously. Furthermore, these recordings can compromise the physiological environment because of their invasive nature. Optical readouts have a clear advantage as they are minimally invasive and allow recordings from many different cells in a network at the same time.
To overcome these spatiotemporal limitations, different types of 'fast imaging' have been implemented. Confocal approaches have been optimized for speed based on resonant scanning technology; however, they still trade off temporal against spatial resolution [5]. A point scanning confocal microscope remains relatively slow because the sample is scanned point by point, which classically takes seconds to minutes for a full field of view. Spinning disk confocal microscopy, on the other hand, is able to acquire whole frames at once, resulting in an increased recording speed of up to a few hundred Hz [6], again partially trading spatial (mainly axial) resolution for speed [7].
However, when maximum temporal resolution is required, widefield fluorescence microscopy is still the optical method of choice [8-11]. Using high speed cameras that are optimally designed for fast acquisition, enough photons can be collected in very short time intervals. Especially EMCCDs and sCMOS sensors now allow full-frame recordings at a few hundred Hz [12]. Some setups even reach up to 10 kHz, albeit at the expense of the number of pixels on the sensor chip (e.g., 80x80) [8,9] and therefore of spatial resolution. Also, signal-to-noise issues arise at these kHz frame rates, as only a limited number of photons, if any, can be collected per time point. As an alternative to these camera approaches, photodiode arrays can be used, but they are also limited in the number of spatial data points [13-15]. Because of the restriction in spatial resolution of the currently available systems, it remains extremely challenging to accurately record fast events in smaller cellular components such as neuronal processes.
In this study we report the successful development of a microscopy configuration based on a widefield approach with a dual camera system that allows recording images both at fast (up to 2000 Hz) and standard (2-10 Hz) frame rates with sufficient spatial resolution. With this setup it is possible to resolve individual neurotransmitter release sites and record their activity as they operate in a neuronal network.
In combination with our custom image processing tools we are able to detect the chronology of firing using the calcium indicator Fluo4 AM in neuronal circuits both in cell culture and tissue preparations. Furthermore, the approach yields high resolution feedback from sub-millisecond stimuli and allows calculating the propagation speed of the calcium wave front.
Microscopy setup
A Zeiss Examiner Z1 upright microscope was used as the core of our experimental physiology setup. A cooled CCD camera (PCO; Kelheim, Germany) mounted on the top port was used to record at up to 2 Hz, and a CMOS camera (Focuscope SV200-i, Photron; Tokyo, Japan) attached to the side port [Fig. 1(a)] allowed us to record images at a maximum speed of 2 kHz. This camera has a CMOS sensor of 512 by 512 pixels equipped with a GaAsP photocathode surface. It incorporates a manually adjustable image intensifier allowing full-frame high-speed recordings up to 2000 Hz, even for low-light fluorescence. A custom-made automated mirror switches mechanically between both cameras in about 220 ms. Images were acquired through a water dipping lens (20x, 1.0 NA), and excitation light for fluorescence imaging was generated by a Poly V monochromator (Till Vision; Gräfelfing, Germany). Furthermore, a Grass S88 electrical stimulator (Grass Technologies; Warwick, USA), a picospritzer (General Valve Corporation; Fairfield, USA) and a gravity-fed perfusion system (1 ml/min) were used to provide physiological stimuli and buffered solutions to the studied cells and tissues. All components (switching mirror included) were automatically controlled via a control unit and protocol editor available in the Till Vision software (Gräfelfing, Germany). The electrical stimuli (300 µs) were delivered via a local Pt/Ir electrode (Ø 50 µm) to the interganglionic connectives of the nerve network, either as single pulses or as 20 Hz trains.
kHz camera performance
To test the performance of the Photron CMOS camera, 50 bias frames were recorded with a minimal exposure time of 3.7 µs for each intensifier voltage, increasing the image intensifier voltage in steps of 50 up to 850 (Δ = 50). All these frames had zero counts as a consequence of the preset threshold of the camera. This also shows that read noise, at least up to this threshold, is not influenced by the image intensifier. Secondly, 50 dark frames were recorded with an exposure time of about 1 ms for each intensifier voltage, again increasing the image intensifier voltage in steps of 50 up to 850. For most pixels the counts were again zero, except for some noise that is only present at higher intensifier voltages [Fig. 1(b)]. This noise is single-pixel limited, as 98% of the spot noise population over all 850 dark frames appears in one single pixel only [Fig. 1(c)]. Possibly this is caused by cosmic radiation that penetrates the camera shielding during dark frame acquisition and is then enhanced by the image intensifier. In order to obtain a good estimate of the overall noise levels, we plotted the standard deviation against the size of the region of interest (spatial signal integration) and the intensifier voltage, measured on a fixed sample of fluorescence intensity comparable to our live tissue samples [Fig. 1(d)].
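In outline, this noise characterization can be reproduced as follows. The sketch is hypothetical: synthetic Poisson-distributed dark frames stand in for the recorded ones, and the ROI sizes are arbitrary.

```python
# Minimal sketch: std of the spatially integrated signal vs. ROI size,
# computed on a stack of (here synthetic) dark frames.
import numpy as np

rng = np.random.default_rng(0)
dark = rng.poisson(0.02, size=(50, 512, 512)).astype(float)  # 50 dark frames

def roi_noise(frames: np.ndarray, roi_sizes=(1, 2, 4, 8, 16)) -> dict:
    """Std of the summed intensity in a centered ROI, per ROI size."""
    t, h, w = frames.shape
    out = {}
    for s in roi_sizes:
        y0, x0 = (h - s) // 2, (w - s) // 2
        integrated = frames[:, y0:y0 + s, x0:x0 + s].sum(axis=(1, 2))
        out[s] = integrated.std()
    return out

print(roi_noise(dark))  # noise grows with the integrated area
```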
We recorded at 1000 Hz for all experiments, except for a specific set of calcium wave speed measurements that were recorded at the maximum of 2000 Hz. Thus, we benefit from the increased exposure time and the reduced (shot) noise due to a lower intensifier voltage, while still retaining two extra orders of magnitude in speed compared to standard (~10 Hz) acquisition rates. The A/D converter's 10-bit image stacks were down-sampled to 8 bit to reduce file size and to facilitate software handling. Recordings in live tissues proved that our system has sufficient resolution and sensitivity to resolve individual varicosities and their responses with a high signal-to-noise ratio [Fig. 1(e)] at kHz recording rates.
Tissue and cell culture preparation
For live experiments we used enteric neurons, which reside in two specific layers within the intestinal wall: the submucosal and the myenteric plexus [16]. These plexuses consist of a complex network of neurons and glia; the myenteric plexus in particular is organized in a cellular layer, which is ideally suited for widefield imaging [17]. These nerves communicate via classic synaptic contacts, but many processes also have en route varicose release sites, referred to as boutons or varicosities in the remainder of the manuscript. For our recordings, we used both enteric nerve cell cultures (small intestine) and whole-mount myenteric plexus preparations taken from the mouse large intestine.
Mice (C57/Bl6) were killed by cervical dislocation, and the large intestine was removed and placed in Krebs solution (in mM: 120.9 NaCl, 5.9 KCl, 1.2 MgCl2, 2.5 CaCl2, 1.2 NaH2PO4, 14.4 NaHCO3, 11.5 glucose). All animal procedures were approved by the ethical committee of the University of Leuven (Leuven, Belgium). The myenteric plexus from the large intestine was loaded for 20 minutes with the fluorescent calcium indicator Fluo4 AM (1E-6 M). Cremophor (0.01% v/v) was added to enhance uniform loading. After washing (10 minutes in Krebs solution), the tissue was mounted on the microscope stage while continuously perfused with carbogenated (95% O2 / 5% CO2) Krebs (22 °C). Apart from tissue, primary neuronal cell cultures were also used for some tests. These were made from mouse (C57/Bl6) intestine and grown for 3 days at 37 °C and 5% CO2. They were then loaded with Fluo4 AM (5E-6 M), washed for 10 minutes, and kept under constant perfusion with HEPES-buffered solution during the experiment (22 °C) [17,18]. After live experiments, some tissues were fixed at room temperature using 4% PFA (30 min) and processed for immunohistochemistry to identify the cellular components that were recorded from: rabbit anti-synaptotagmin (gift from Dr. R. Jahn, Göttingen, Germany) was used to identify varicosities, mouse anti-HuC/D (Invitrogen, Merelbeke, Belgium) for neuronal cell somata, and goat anti-peripherin (Santa Cruz, Heidelberg, Germany) to label neuronal fibers. Secondary antibodies were coupled to AMCA (Jackson ImmunoResearch, West Grove, USA), Alexa594 and Alexa488 (Invitrogen, Merelbeke, Belgium) [Fig. 2(c)]. These tissues were then imaged with a scanning 2-photon/confocal microscope (LSM510 META, Zeiss, Germany) in the Cell Imaging Core (CIC, University of Leuven).
Autocorrelation of high frame rate images allows accurate localization and visualization of individual varicosities
Even though the spatial resolution of our setup (512x512) is significantly higher than that of fast CCD-based systems (80x80), not all varicosities can easily be discerned from a single (1 ms exposure) frame. Therefore we designed a software tool based on an autocorrelation approach, which, owing to the high temporal resolution, performs well in assisting and facilitating varicosity identification. The algorithm calculates the arithmetic mean (V) of the autocorrelation [Eq. (1)] of each pixel's intensity trace within a chosen time window.

[Figure caption fragment: output of the autocorrelation tool in a time window of 500 ms before (d) and after (e) the stimulation; (f) the image resulting from subtracting (d) from (e), which selectively shows those parts of the network that respond within 500 ms after stimulation, clearly highlighting responding varicosities.]
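In outline, the time-correlation computation can be sketched as follows. This is our own minimal Python illustration of per-pixel autocorrelation averaging, assuming the recording is an in-memory (time, y, x) array; it does not reproduce the published Igor Pro tool.

```python
import numpy as np

def autocorr_map(stack: np.ndarray, max_lag: int = 5) -> np.ndarray:
    """Mean normalized autocorrelation over lags 1..max_lag, pixel by pixel.
    Pixels with temporally correlated signal (static structures such as
    varicosities) score high; pure shot-noise pixels score near zero."""
    x = stack - stack.mean(axis=0)            # remove each pixel's mean
    var = (x * x).mean(axis=0) + 1e-12        # per-pixel variance
    acc = np.zeros(stack.shape[1:])
    for lag in range(1, max_lag + 1):
        acc += (x[:-lag] * x[lag:]).mean(axis=0) / var
    return acc / max_lag

# Synthetic check: one temporally correlated pixel against white noise
rng = np.random.default_rng(0)
movie = rng.normal(0.0, 1.0, (500, 8, 8))
slow = np.convolve(rng.normal(0.0, 1.0, 500), np.ones(20) / 20, "same")
movie[:, 4, 4] += 5.0 * slow
V = autocorr_map(movie)
print(V[4, 4] > V[0, 0])  # True: the correlated pixel stands out
# Contrast map as in the figure: autocorr_map(after) - autocorr_map(before)
```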
Detection of fast calcium events
Calcium responses could be measured in individual varicosities from Fluo4-loaded tissue using three different types of stimuli. The response to a single electric pulse was typically characterized by a sharp upstroke reaching its maximum in less than 10 ms (3 mice, 4 ganglia, max F/F0 = 2.2 ± 0.45) [Fig. 3(b)]. Using a 20 Hz pulse train stimulus, we could show that our detection is fast enough to resolve individual calcium steps at a frequency that matches the stimulation frequency (5 mice, 8 ganglia, max F/F0 = 3.2 ± 0.59) [Fig. 3(c)]. This response frequency was confirmed using fast Fourier transformations that reveal a ~20 Hz dominant peak in the spectra [Fig. 3(e) and 3(f)]. A stepwise calcium response was also observed when the neurotransmitter serotonin (5-HT) was locally applied to the tissue using a pressure spritzer (5 mice, 5 ganglia, max F/F0 = 1.7 ± 0.19) [Fig. 3(a)]. To improve signal-to-noise, a low-pass filter can be applied either directly or after taking the derivative of the signal [Fig. 3(d)], which facilitates automatic step detection as the derivative signal clearly reveals the location of the individual calcium steps. Next, we tested whether the improved temporal and spatial resolution could be used to assess calcium signal propagation in neuronal fibers. To address this question, interganglionic fiber tracts were stimulated with a single electrical pulse, classically used to elicit fast excitatory postsynaptic potentials in the enteric nervous system [8]. By recording at 2 kHz, we were able to continuously monitor the calcium signal as it travelled through ganglia and connecting fibers at a calculated propagation speed of about 80 mm/s (7 ganglia, 4 mice).
Remarkably this speed is in the same order of magnitude as the one extrapolated from recordings of calcium waves in the intestinal muscles layers [19].
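A minimal sketch of the step-detection procedure described above (low-pass filtering, differentiation, and thresholding of the derivative) is given below. The filter window and threshold factor are illustrative assumptions, and Python stands in for the Igor Pro routines used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def detect_calcium_steps(trace: np.ndarray, fs: float = 1000.0,
                         win_ms: float = 5.0, k: float = 4.0) -> np.ndarray:
    """Low-pass filter a fluorescence trace, differentiate it, and return
    the sample indices where the derivative exceeds k robust SDs, i.e.,
    the candidate locations of action-potential-driven calcium steps."""
    n = max(1, int(win_ms * fs / 1000))
    smooth = uniform_filter1d(trace, n)          # moving-average low-pass
    dtrace = np.diff(smooth)
    sigma = 1.4826 * np.median(np.abs(dtrace - np.median(dtrace)))
    return np.flatnonzero(dtrace > k * sigma)

# Synthetic 20 Hz train: five unit steps, 50 ms apart, on a noisy baseline
rng = np.random.default_rng(1)
t = np.zeros(1000)
for onset in range(100, 350, 50):
    t[onset:] += 1.0
trace = t + rng.normal(0, 0.05, t.size)
print(detect_calcium_steps(trace))  # indices clustered near 100, 150, ..., 300
```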
Computation and visualization of neuronal activity spread using dynamic intensity processing (DIP)
To highlight the importance of timing in the sequence of biological events such as communication in neuronal networks, we sought to further improve the temporal contrast of our recordings by designing an algorithm that comprises image thresholding, peak detection and color mapping, as schematized in Fig. 4. The initial thresholding [Fig. 4(d)] is introduced to discard background pixels and reduce computation time. In a following step [Fig. 4(b)] the signal is differentiated to transform responses into peaks. An increase in signal-to-noise results from low-pass filtering [Fig. 3(d), blue; Fig. 4(e), lower trace] and squaring [Fig. 4(f)]. Next, Igor Pro's peak detection algorithm, based on smoothed derivatives, is called to define the peak location. Once the new image is calculated, the rainbow color scale can be adjusted at any time to color code a specific time window [Fig. 4(f)]. This allows discriminating both very fast calcium rises appearing within milliseconds after the stimulus and cells or fibers that respond within a few tens of milliseconds after the stimulus. By using this millisecond time gating, specific structures and responses can be isolated, even for samples with ample background fluorescence, as is the case in Fluo4-loaded myenteric plexus tissues. This image processing tool, optimized for fast recordings, is similar in readout to the time-to-maximum images described earlier by our group for standard recordings [17]. DIP generates MB-sized images from GB-sized time lapse stacks, clearly summarizing the temporal aspects of activity spread in individual varicosities and neuronal networks in both cell culture (g) and tissue (h). This reveals that fibers, even though they are physically close to the stimulated cell, are not necessarily the first responders. Figure 4(g) shows that fibers are responding in the direction towards the electrode (bottom right), rather than away from it (Media 1). This suggests that these processes belong to neurons that are among the last to respond in a cascade of responders that starts at the electrode, with the connecting circuitry mainly present outside the field of view. Alternatively, some signals may have remained below the detection limit. Specific color map selections [Fig. 5] give better contrast to the human eye and can be used to highlight different aspects of the recording, such as varicosities. These color maps allow fast interpretation of large data volumes and introduce time with millisecond resolution as a practical scientific research parameter.
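The DIP pipeline can be condensed into a few array operations. The sketch below is our own simplification, with an arbitrary smoothing window; it computes the per-pixel timing map that the color scale then encodes.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def dip_timing_map(stack: np.ndarray, threshold: float) -> np.ndarray:
    """Per-pixel frame index of the largest upstroke: discard background
    by thresholding, low-pass filter along time, differentiate, square,
    and take the argmax along time. NaN marks background pixels."""
    tmap = np.full(stack.shape[1:], np.nan)
    bright = stack.max(axis=0) >= threshold
    smooth = uniform_filter1d(stack.astype(float), size=5, axis=0)
    power = np.diff(smooth, axis=0) ** 2       # responses become peaks
    peak_t = power.argmax(axis=0).astype(float)
    tmap[bright] = peak_t[bright]
    return tmap   # color-code, e.g., plt.imshow(tmap, cmap="rainbow")

# Demo: two pixels stepping 30 frames apart
demo = np.zeros((100, 2, 2))
demo[40:, 0, 0] = 1.0
demo[70:, 1, 1] = 1.0
print(dip_timing_map(demo, threshold=0.5))    # upstrokes near frames 40 and 70
```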
Pre- and postsynaptic intracellular calcium responses can be temporally resolved
To further address the synchronicity in fiber and varicosity responses in a network, we used a cell culture model of enteric neurons and took advantage of our newly developed DIP to distinguish between timing of responsive elements. Even though some network elements are spatially close together, it is still possible to detect a ~20 ms delay between some of their responses [ Fig. 6]. We used the nicotinic receptor blocker hexamethonium to silence cholinergic transmission, the dominant component of fast neurotransmission in the ENS, and found that the slower fibers [ Fig. 6(a)] were delayed even further [ Fig. 6(b)]. Upon washout of the drug, this effect was reverted [ Fig. 6(c)]. This indicates that cholinergic transmission is implicated in the complex wiring most likely via autaptic communication (as present in the artificial conditions of the cell culture system). We found that the effect of hexamethonium can be predicted from the maximum response delay in the first recording, as only in those recordings with substantial delays (17 ± 1.4 ms) blocking of nicotinic receptors was able to amplify the delay (27.5 ± 3.5 ms), while in other recordings with synchronous responses (10.2 ± 2.5 ms) hexamethonium did not have any effect (10.6 ± 2.4 ms). Therefore, we expect that the observed distinct delays reflect the length of the network element before it reappears in the field of view. Instead of focusing on different response elements in a network, we also analyzed individual varicosities to examine whether circuitry information could be deduced from a single spot [Fig. 7]. For this set of experiments we used isolated tissues, which have an advantage in that they contain the native wiring of the network. The responses evoked by a single electrical stimulus were highly repeatable over sequential trials (3 repeats, 258 varicosities from 2 mice). The majority of varicosities displayed one single sharp upstroke in response to the single stimulus [ Fig. 3(b)], however in a select number of cases (~10%) a biphasic response could be detected [ Fig. 7(a)] suggestive of a recurrent or delayed secondary activation. These secondary responses seemed to be hard wired as they were also consistent in repeated recordings. To investigate the nature of this second peak, we again used hexamethonium and imaged the varicosities (330 varicosities from 6 mice) first in control and secondly in the presence of hexamethonium (1E-4 M, 2.5 min). In over 40 varicosities, secondary responses were observed. Hexamethonium completely abolished the secondary response in 14 varicosities [ Fig. 7(b)] while the double response in 26 varicosities was not affected by hexamethonium. After washout (2.5 min) all secondary responses reappeared [ Fig. 7(c)], clearly proving the involvement of an intermediate cholinergic connection.
We conclude that with our microscopy setup it is possible to simultaneously record and discriminate between pre- (direct/antidromic) and postsynaptic activity in an all-optical way.
Discussion
Live fluorescence microscopy is a powerful and heavily used tool in biomedical research, mainly because it yields an optical readout from many cells simultaneously, which creates the possibility to visualize cellular interactions and circuits. The main disadvantage of these techniques is the lack of speed at which recordings can be made in comparison to electrophysiological approaches [20]. To overcome this, strategies have been designed to improve camera capture speed or, in the case of scanning microscopy, to enhance the (resonant) speed of the scanner.
Confocal measurements on the millisecond time scale are an option, but only when the recorded area is limited to single lines or single points. In this case, the increase in temporal resolution comes at the expense of a general overview. As an alternative approach to increasing temporal resolution, resonant scanners have been implemented [21,22], which sweep the excitation beam over the field of view much faster than classic linear galvanometric scanning mirrors. Here the disadvantage is that pixel dwell times are often too short to collect a sufficient amount of photons. Alternatively, acousto-optic deflectors can be used to change the beam direction in a non-mechanical way [23].
In contrast to point or line scanning systems, fast (kHz) cameras have also been used elegantly in cardiac [24], peripheral [8,9] and central nervous system [10] research to improve recording speed. However, to maximize readout speed, sensitivity has to be raised, very often by increasing pixel size and therefore lowering spatial resolution. Here we report a microscopy configuration with an additional high spatial resolution fast camera (512 by 512 readout), which makes it possible to record both fast and slow events in small (close to diffraction-limited) objects, such as varicosities in a neuronal network. A custom-made mechanical sliding mirror allows switching (220 ms delay) back and forth between fast and standard recordings automatically. Our hardware setup, combined with custom-made analysis tools, closes an important gap in the available microscopy techniques because of the obvious speed advantage while still allowing simultaneous recordings from different components in an integrated network. Another advantage relates to the flexibility of the technique, as it can easily be integrated in existing microscopy setups.
Our system (hard- and software) is unique in terms of temporal and spatial resolution, as it reveals events that could not be detected before in an optical way. The distinct intracellular calcium steps that we recorded in individual varicosities during serotonin application are a good example. Most likely each step is the consequence of a unitary event like a single action potential or the opening of a 5-HT3 receptor (cluster). This is in line with the stepwise intracellular calcium rise that was measured for nicotinic receptor stimulation [8] of enteric neurons, which was indeed associated with action potential firing. We further show that we can resolve distinct intracellular calcium steps in varicosities using action potential pulse trains (20 Hz), thus yielding much more detail about the response upstroke compared to classic CCD cameras.
Our image capture speed proved fast enough to deduce the calcium propagation speed (~80 mm/s) in unmyelinated neuronal fibers of the enteric nervous system, which is remarkably similar to what was extrapolated from muscle recordings [19]. It remains to be elucidated why such high speeds are necessary to control physiological behaviour (intestinal motility, mixing) that is far slower. It could be argued that the conduction speed is so high in order to allow signal integration over larger areas, generating concerted actions of smooth muscle units in the gut wall. Muscle twitch indeed happens simultaneously for groups of cells spread over 100-200 µm, a distance that is bridged in ~2 ms at the measured calcium propagation speed. Compared to action potential propagation in unmyelinated fibers, the calcium propagation speed is still low. This may be due to the fact that the experiments were performed at room temperature, or it may reflect a true mismatch between the Ca2+ signal and action potential propagation. The latter could be explained if Na+ and Ca2+ channels are not evenly distributed along the fiber or if the volume of the process is not constant. We do not expect that delays are due to the properties of the indicator, as Fluo4 has a sufficiently high affinity (Kd = 345 nM), S/N ratio and Ca2+ binding kinetics [25,26] to generate enough photons during the short (ms) integration times. The temperature dependency of Fluo4 [27] is most likely negligible in light of the effect temperature will have on the physiology of the network. Higher conduction velocities, as present in myelinated (motor) neurons, cannot be measured at the moment, since this would require imaging larger fields of view with lower magnification lenses and therefore less detail.
Our processing tools summarize GB-sized time lapse stacks into MB-sized images that are easy and fast to interpret; they enhance either spatial contrast (time correlation) or temporal contrast (DIP). Despite these obvious advantages, it is important to realize that this type of computation takes time: for instance, a time lapse stack reduced to 350 frames around the event of interest typically requires around 10 min to be processed, because of the pixel-by-pixel multi-step analysis. This process can be sped up by adjusting the threshold value, which is especially useful in recordings from cell cultures, as more background pixels can be discarded from the calculations.
A second tool, DIP, also allows time gating at ms resolution, which can be used to identify varicosities that fire in synchrony. This technique can be employed to separate single active fibers within complex neuronal circuits from background information, even using widefield microscopy. Furthermore, it allows determining signaling order within neuronal networks as well as distinguishing between direct and synaptic or long-distance communication. Indeed, using hexamethonium, a drug that blocks the most important fast component of neurotransmission in the enteric nervous system [28], we were able to delay responses by forcing the response propagation over longer (non-cholinergic) connections.
In tissue experiments, where neuronal wiring is still intact, single fast responses to fiber tract stimulation dominate the response spectrum of varicosities in the entire field of view. Recurring post-synaptic activity within the circuitry is limited to a stochastic minimum.
Although altered intracellular calcium buffering, summation of inputs or paracrine factors may influence the response in a single varicosity at any given moment, most varicosities in which a secondary intracellular calcium step occurred were shown to faithfully repeat that response in up to 3 consecutive recordings. Only in the presence of hexamethonium was the secondary response completely abolished in a subgroup of varicosities. These data indicate that at least one cholinergic release site is involved in the generation of the secondary response, which proves the potential of fast imaging to distinguish between direct and circuit-related activation. Synaptic contact redundancy or the involvement of a neurotransmitter other than acetylcholine may explain the presence of hexamethonium-resistant responses.
In summary, we show that high-resolution full-frame kHz imaging further expands the application spectrum of calcium imaging, since calcium propagation speed and stepwise calcium increases associated with single action potentials in individual varicosities can be detected. Our image processing tools assist in the discrimination of neuronal network activity on the millisecond timescale.
"Biology"
] |
Comprehensive Profiling of the Native and Modified Peptidomes of Raw Bovine Milk and Processed Milk Products
Bovine milk contains a variety of endogenous peptides, partially formed by milk proteases that may exert diverse bioactive functions. Milk storage allows further protease activities altering the milk peptidome, while processing, e.g., heat treatment can trigger diverse chemical reactions, such as Maillard reactions and oxidations, leading to different posttranslational modifications (PTMs). The influence of processing on the native and modified peptidome was studied by analyzing peptides extracted from raw milk (RM), ultra-high temperature (UHT) milk, and powdered infant formula (IF) by nano reversed-phase liquid chromatography coupled online to electrospray ionization (ESI) tandem mass spectrometry. Only unmodified peptides proposed by two independent software tools were considered as identified. Thus, 801 identified peptides mainly originated from αS- and β-caseins, but also from milk fat globular membrane proteins, such as glycosylation-dependent cell adhesion molecule 1. RM and UHT milk showed comparable unmodified peptide profiles, whereas IF differed mainly due to a higher number of β-casein peptides. When 26 non-enzymatic posttranslational modifications (PTMs) were targeted in the milk peptidomes, 175 modified peptides were identified, i.e., mostly lactosylated and a few hexosylated or oxidized peptides. Most modified peptides originated from αS-caseins. The numbers of lactosylated peptides increased with harsher processing.
Introduction
Raw bovine milk contains a variety of endogenous peptides. Many of them exert bioactive functions, such as immunomodulatory effects and antimicrobial or mineral-binding activities [1,2]. Native peptides are cleaved from the proteins by proteases naturally present in milk [1]. Plasmin, the dominant protease in bovine milk, shows a high specificity for β-, αS1-, and αS2-casein, with only low or no activity towards κ-casein, β-lactoglobulin, and α-lactalbumin [3-5]. Cathepsin D digests mostly β-casein and α-lactalbumin at two specific sites, whereas native β-lactoglobulin is resistant to cleavage [4]. Other important proteases in bovine milk are elastase and cathepsin B [6].
Studies on the peptidome of raw milk identified high numbers of α- and β-casein derived peptides, mostly explained by activities of plasmin, cathepsin B and D, and elastase [7,8]. Peptidome analyses of milk indicated higher activities of cathepsin D and elastase in cows suffering from mastitis than in healthy cows and thus increased numbers and abundances of endogenous peptides in milk from infected cows [9,10]. Moreover, the peptide profile of colostrum sweet whey permeate, a by-product from cheese production, contained mainly β-casein derived peptides [11]. All these studies showed that the bovine milk peptidome is dominated by α- and β-casein derived peptides, whereas peptides from whey proteins and κ-casein are present at low contents or were not even detected [7-11].

[...] stored at −80 °C. IF (nutritional values: 36 g/L fat, 71 g/L lactose, and 14 g/L proteins originating from skimmed milk and sweet whey) was bought at a local supermarket and prepared according to the manufacturer's instructions. Peptides were extracted from three aliquots of each sample (50 µL) using a Folch extraction protocol [19]. Briefly, methanol and chloroform were added and the samples incubated (1 h, 4 °C). After the addition of water and a second incubation (10 min, 4 °C), the samples were centrifuged (10 min, 4 °C, 10,000× g), the organic phase removed and centrifuged again under the same conditions. The aqueous phase was dried under vacuum, reconstituted in aqueous acetonitrile (3%, v/v) containing formic acid (0.1%, v/v), and desalted by solid-phase extraction (SPE, Oasis HLB 1cc, 30 mg, Waters GmbH, Eschborn, Germany) [19]. The dried eluates were dissolved in aqueous acetonitrile (3%, v/v) containing formic acid (0.1%, v/v), and peptide concentrations were determined on a NanoPhotometer NP80 (IMPLEN, Munich, Germany, λ = 280 nm).
Tandem Mass Spectrometry
Peptides were analyzed on a nanoAcquity UPLC (Waters GmbH, Eschborn, Germany) coupled on-line to an LTQ Orbitrap XL ETD mass spectrometer equipped with a nano-ESI source (Thermo Fisher Scientific, Bremen, Germany). After trapping (nanoAcquity Symmetry C18 column) at a flow rate of 5 µL/min (1% eluent B), peptides were separated on a BEH 130 column (30 °C) using a flow rate of 0.4 µL/min. Eluents A and B were water containing formic acid (0.1%, v/v) and acetonitrile containing formic acid (0.1%, v/v), respectively. Peptides were eluted by a two-step linear gradient increasing eluent B from 1% to 40% within 89 min and further to 85% within 5 min. The transfer capillary temperature was set to 200 °C, and an ion spray voltage of 1.4 kV was applied to a PicoTip™ on-line nano-ESI emitter (New Objective, Berlin, Germany). Mass spectra were recorded in the Orbitrap mass analyzer (m/z range 400 to 2000) at a resolution of 60,000 at m/z 400. Tandem mass spectra were acquired in data-dependent acquisition (DDA) mode for the six most intense signals in collision-induced dissociation (CID) mode as described before (isolation width of 2 m/z units, normalized collision energy of 35%, activation time of 30 ms, default charge state of 2, intensity threshold of 500 counts, and dynamic exclusion window of 60 s) [22]. The samples were reanalyzed (after pooling the replicates for each sample) using a retention time based (±1.5 min) exclusion list of proposed unmodified peptides and the conditions described above. For modified peptides, tandem mass spectra were acquired for individual samples using electron transfer dissociation (ETD; isolation width of 2 m/z units, activation time of 100 ms, default charge state of 2, intensity threshold of 500 counts, and dynamic exclusion window of 60 s) in DDA mode for the six most intense signals [22]. Modified peptides proposed in previous measurements and preliminary experiments were targeted as well.
Unmodified Peptides
Acquired data were processed with Sequest using Proteome Discoverer 2.2 (Version 2.2.0.388, Thermo Fisher Scientific, Bremen, Germany) and PEAKS Studio 10.5 (Bioinformatics Solutions Inc., Waterloo, ON, Canada) using the following database and search parameters: bovine milk database (release 2016_11) [31], no enzyme, precursor mass tolerance 10 ppm, fragment mass tolerance 0.8 Da, false discovery rate 1%, and dynamic modifications including oxidation of methionine (+15.99 Da, Ox) and phosphorylation of serine (+79.96 Da, Phospho). As data processing relied on native peptides, i.e., searches were performed with no enzyme specificity, the processing times were very long and thus it was necessary to use a smaller, in-house milk-specific database.
Peptides identified in at least two of the three replicates by both software tools were considered for further processing (Figure S1). Their presence in all measured samples was confirmed with Skyline (20.1.0.155, MacCoss Lab, Department of Genome Sciences, University of Washington) by generating a spectral library [33] and adjusting the parameters for the instrument used. Peptides were considered present if the precursor was detected at the same retention time and the isotope dot-product value (idotp) was above 0.95 in at least two individual replicates.
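The two consensus rules just described (proposed by both tools in at least two of three replicates; "present" when idotp > 0.95 in at least two replicates) can be sketched on a hypothetical results table. Column names and example values below are our own illustration, not the export format of the actual software.

```python
import pandas as pd

# Hypothetical identifications: (peptide, tool, replicate)
ids = pd.DataFrame({
    "peptide":   ["A", "A", "A", "A", "B", "B"],
    "tool":      ["Sequest", "Sequest", "PEAKS", "PEAKS", "Sequest", "PEAKS"],
    "replicate": [1, 2, 1, 3, 1, 1],
})

# Rule 1: keep peptides proposed by BOTH tools in >= 2 of 3 replicates
counts = ids.groupby(["peptide", "tool"])["replicate"].nunique().unstack(fill_value=0)
confident = counts[(counts.get("Sequest", 0) >= 2) & (counts.get("PEAKS", 0) >= 2)].index
print(list(confident))  # ['A']

# Rule 2: a peptide is "present" if idotp > 0.95 in >= 2 replicates (Skyline)
skyline = pd.DataFrame({"peptide": ["A", "A", "A"],
                        "replicate": [1, 2, 3],
                        "idotp": [0.97, 0.96, 0.80]})
present = skyline[skyline.idotp > 0.95].groupby("peptide")["replicate"].nunique() >= 2
print(present)  # A: True
```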
Modified Peptides
Acquired data were processed with Sequest within Proteome Discoverer 2.2 (as described above) additionally targeting 26 modifications (lactosylation, hexosylation, 12 AGEs, and 12 oxidation/carbonylation types) listed in Table S1. As Proteome Discoverer allows only six dynamic modifications per template at the same time, modifications were split into six templates. Proposed modified peptides were combined into an inclusion list that was used to analyze the individual milk samples again by DDA in ETD mode ( Figure S1). Identification of modified peptides relied on peptides proposed by Proteome Discoverer 2.2 and manual confirmation of proposed modification sites. The presence of the confirmed modified peptides within all three sample types was confirmed within a spectral library generated in Skyline as described above for unmodified peptides.
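Splitting the 26 targeted modifications across search templates, given Proteome Discoverer's limit of six dynamic modifications per template, amounts to simple chunking. A minimal sketch with placeholder modification names follows; the actual grouping used in the study is not specified here.

```python
# Placeholder names; the real 26 modifications are listed in Table S1.
mods = [f"mod_{i}" for i in range(1, 27)]

def split_into_templates(items, n_templates=6, cap=6):
    """Stripe the modifications over a fixed number of templates,
    keeping each template at or below the six-modification limit."""
    templates = [items[i::n_templates] for i in range(n_templates)]
    assert all(len(t) <= cap for t in templates)
    return templates

print([len(t) for t in split_into_templates(mods)])  # [5, 5, 4, 4, 4, 4]
```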
Native Peptidome
The data sets acquired for peptides in RM, UHT milk, and IF were processed by two different software packages relying on different strategies (Proteome Discoverer 2.2 and PEAKS Studio 10.5) to obtain a confident identification of unmodified peptides in at least two replicates of each milk sample. Their presence among all samples was confirmed after integrating the data into a spectral library within Skyline. This strategy identified 801 unmodified peptides originating from 36 different milk proteins (Table S2). The peptide length ranged from seven to 64 residues, corresponding to peptides from 801.44 to 6783.34 Da, with an average peptide length of 15.4 residues and an average peptide mass of 1766.86 Da. Although 502 peptides were present in all sample types, more peptides were detected in the processed milk products than in raw milk (Figure 1), i.e., 683 peptides from 25 proteins in IF, 635 peptides from 31 proteins in UHT milk, and 612 peptides from 30 proteins in RM. Interestingly, the peptidomes of RM and UHT milk overlapped by more than 95%, whereas 149 peptides present in IF were not detected in RM and UHT milk (Figure 1).
About 70% of the identified peptides originated from α- and β-caseins (Figure 1, Figure S2, and Table S2), including 246 peptides from αS1-casein. Around 77% of the αS1-casein-derived peptides were present in all sample types (Figure 1 and Figure S2). UHT milk contained slightly more peptides (226) than RM (214) and IF (214). Independent of the sample type, most peptides originated from three protein regions, i.e., Gly10 to Val37, His80 to Met123, and Ser180 to Trp199 (Figure S3). Interestingly, one peptide corresponding to Glu69 to Lys79 was present only in IF (Figure S3). Regions Gln59 to Ser68 and Gln155 to Tyr165 were not represented by any detected peptide (Figure S3).

With 202 β-casein-derived peptides, this protein was the second most dominant source of free peptides (Figure 1, Figure S2, and Table S2), whereof 142 were present in RM and UHT milk and 184 in IF (Figure 1). The peptides from IF covered the full protein sequence except for the signal peptide (Figure 2), whereas sequences Leu58 to Asn68 and His134 to Val162 were missing in RM and UHT milk (Figure 2). However, most β-casein peptides originated from three parts of the sequence, i.e., Lys29 to Ala53, Glu108 to Ser124, and Leu171 to Ile207 (Figure 2).
Figure 2. Protein sequence of β-casein without the signal peptide. The numbers below the sequence indicate how many peptides containing this specific residue were detected over all samples (total) and in RM (sequence coverage 80.9%), UHT milk (sequence coverage 80.9%), and IF (sequence coverage 100%). AA denotes amino acid.
Around two-thirds of the 111 peptides derived from αS2-casein were present in all three milk types (Figure S2 and Table S2). Although these numbers are lower than for the other caseins, αS2-casein also showed four regions where most peptides originated from, i.e., Ser13 to Lys21, Gly102 to Lys113, Val139 to Lys149, and Leu153 to Phe163 (Figure S4). A relatively long sequence from Asn25 to Lys70 was not covered by an unmodified peptide (Figure S4). Peptides from Ala81 to Gln97 and Lys166 to Arg170 were found solely in IF (Figure S4).

Peptides related to the glycosylation-dependent cell adhesion molecule 1 (GlyCAM-1) were the most common among the group of non-casein-protein-derived peptides. In total, 56 peptides corresponding mostly to regions Ile1 to Phe22 and Ser54 to Lys73 were identified, with 60.7% present in RM (39 peptides), UHT milk (42 peptides), and IF (51 peptides) (Figure 1, Figures S2 and S5, Table S2). Sequence Arg76 to Met108 was not represented by any peptide (Figure S5). The increase in IF peptides was mainly due to a higher number of peptides originating from the C-terminal part of GlyCAM-1 (Figure S5).

Thirty-four peptides derived from κ-casein were detected in total (Figure 1, Figure S2, and Table S2), with 32 present in IF, ten in UHT milk, and seven in RM. Only 12% of the identified peptides were common to all samples (Figure 1 and Figure S2). Most peptides originated from the C-terminal sequence Val152 to Val169, whereas regions Val31 to Val48 and Met106 to Glu147 were only represented in IF (Figure S6). No peptides corresponding to regions Gln1 to Lys13, Tyr25 to Tyr30, and Ser80 to Phe105 of κ-casein were identified (Figure S6).
Non-Enzymatic Modifications in the Bovine Milk Peptidome
Twenty-six glycation-, AGE-, and oxidation/carbonyl-related modifications (Table S1) were targeted in peptides present in RM, UHT milk, and IF. By data processing of the tandem mass spectra and manual confirmation of all proposed sequences, 175 peptides corresponding to seven milk proteins, i.e., αS1-, αS2-, and β-casein, GlyCAM-1, FGF-BP, LPO, and BT, containing 30 unique modification sites were confidently identified (Table S3). More than three-quarters of the peptides (137) carried a lactosylated lysine, representing 26 unique modification sites (Figure 3a). Furthermore, 23 hexosylated peptides (nine modification sites), as well as one oxidized threonine and one oxidized proline residue (Figure 3a), were identified. The numbers of modified peptides increased from RM (38) to UHT milk (83) and further to IF (169), similar to the number of modification sites increasing from RM (14) to UHT milk (24) and IF (30) (Figure 3b). Most modified peptides were derived from αS1- (64) and αS2-casein (73), corresponding to eight and nine modification sites, respectively (Figure 4a, Table S3). Interestingly, 26 modified peptides originated from GlyCAM-1, representing seven modification sites (Figure 4a, Table S3), whereas only a few modified peptides derived from β-casein (5) and FGF-BP (4), corresponding to two unique modification sites in each protein (Figure 4a, Table S3). For LPO and BT, only one modification site per protein was identified in the processed milk samples (Figure 4a, Table S3).

In particular, the total number of modified peptides originating from αS1-, αS2-casein, and GlyCAM-1 increased from RM to UHT milk and further to IF (Figure 4b). For example, the numbers of peptides detected for αS1-casein increased from ten (six modification sites) in RM to 28 (eight sites) in UHT milk and 64 (eight sites) in IF (Figure 4b). All modification sites corresponded to lysine residues, with half of them (Lys34, Lys36, Lys83, and Lys105) being located in regions where most unmodified peptides originated from (Figure S7). Additionally, modified peptides were derived from the regions Arg1 to Lys42 and His80 to Lys124 (Figure S7). Interestingly, RM was lacking peptides from Phe24 to Lys42.

Similarly, the numbers of modified αS2-casein-derived peptides increased from RM (24) to UHT milk (42) and IF (67), and the number of modification sites from five in RM to seven in UHT milk and nine in IF (Figure 4b). In contrast to αS1-casein, only two modification sites were located in regions with a high density of unmodified peptides (Lys21 and Lys158), whereas most modified peptides corresponded to four modification sites from Lys150 to Lys165 (Figure 5). Besides eight glycated Lys residues (Lys21, 24, 150, 152, 165, 173, and 188), Thr38 was identified as oxidized and, interestingly, only modified peptides from this part of the protein sequence (Figure 5, Asn24 to Arg45) were identified. Lys173 and Lys188 were modified only in IF (Figure 5, Table S3), whereas Lys3, Lys7, Lys32, and Lys34 were not modified in RM (Figure 5).

Although only 26 modified peptides originated from GlyCAM-1, they followed a similar trend (Figure 4b), i.e., the numbers increased from one peptide modified at one Lys residue in RM, to three unique modification sites in UHT milk, and 26 peptides identified in IF (Figure S8 and Table S3). Most of these peptides corresponded to regions dominantly represented by unmodified peptides, i.e., Ile1 to His10 and Ser54 to Lys73 (Figure S8).
Discussion
Milk peptidomics typically relies on milk skimming by centrifugation, protein precipitation (e.g., with trichloroacetic acid), and SPE [7,8], whereas milk proteomics often utilizes the Folch procedure prior to digestion and SPE [19,21,31]. In our hands, both protocols showed similar results for the extraction of endogenous peptides from a UHT milk (without skimming), but slightly more peptides were detected after the Folch procedure. Therefore, we applied this procedure to extract endogenous peptides from RM, UHT milk, and IF. Confident peptide identification was ensured by utilizing two fragmentation techniques (i.e., CID and ETD) in DDA mode, processing the data with two software tools (i.e., Proteome Discoverer 2.2 and PEAKS Studio 10.5), and validating against a spectral library generated within Skyline as previously described by Dallas and Nielsen [33], which allowed confirming the presence of the proposed peptides among all analyzed samples (Figure S1). To identify more modified peptides detected at low intensities, previously identified unmodified peptides were excluded from the second analysis of each milk sample. All modified peptides were confirmed by ETD, which is more suitable for the identification of glycated peptides due to its dominant cleavage of the peptide backbone [19]. Finally, the tandem mass spectra of proposed modification sites were confirmed by manual interpretation.
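The two-pass strategy, excluding already-identified unmodified peptides from the second run, amounts to building an m/z and retention-time exclusion list from the first-pass results. A minimal sketch, with assumed tolerances and hypothetical input values, not the study's actual acquisition settings:

```python
# Minimal sketch of building an exclusion list for a second DDA run from
# first-pass identifications. Tolerances and the input format are assumptions.
first_pass = [
    # (precursor m/z, retention time in minutes) -- hypothetical values
    (652.3412, 23.4),
    (913.4101, 31.8),
]

MZ_TOL = 0.01      # m/z window (assumed)
RT_WINDOW = 1.0    # +/- minutes around the observed retention time (assumed)

exclusion_list = [
    {"mz_low": mz - MZ_TOL, "mz_high": mz + MZ_TOL,
     "rt_start": rt - RT_WINDOW, "rt_end": rt + RT_WINDOW}
    for mz, rt in first_pass
]

for entry in exclusion_list:
    print(entry)
```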
Native Peptidome
Most peptidomic studies have focused on endogenous peptides in raw milk of healthy cows, cows with mastitis, or different species [7][8][9][10]34]. The 801 peptides identified here correspond well to the reported sequences while at the same time expanding the bovine milk peptidome (Table S2). However, some previously identified peptides were missed here, as modifications such as pyroglutamate formation of N-terminal glutamine [8,10] were not considered, or peptides of different lengths were identified. Peptides identified by our approach ranged from seven to 64 residues, with an average length of 15.4. Shorter or longer peptides might also be present in bovine milk but might have been missed due to their low abundances, poor ionization properties, ionization suppression effects, or the low efficacy of the applied LC-MS techniques in confidently identifying peptides shorter than five or longer than 64 residues. Nevertheless, the 612 peptides derived from 30 proteins identified in RM (Figure 1, Table S2) far exceed the 159 peptides reported in bovine milk of six individual healthy cows [8] and the 248 peptides in a raw milk pool [7]. In comparison to the endogenous peptides reported in raw milk from healthy and diseased (mastitis) cows, a slightly lower number of peptides was observed, possibly attributable to a higher release of peptides in diseased cows [10]. Independent of the sample type, peptides were mostly derived from αS1-casein, β-casein, αS2-casein, and GlyCAM-1, in good agreement with the literature [7,8,10]. Similarly, peptides originating from κ-casein, PIgR, BT, β-lactoglobulin, LPO, osteopontin, and other minor milk proteins were previously identified in raw milk [8,10]. Interestingly, many of the identified proteins, such as GlyCAM-1, BT, mucin-1, mucin-15, and xanthine dehydrogenase/oxidase, belong to the group of milk fat globule membrane (MFGM) proteins.
As most studies have focused on unprocessed milk, little is known about changes in the peptide profile along the processing chain of milk products. The current study analyzed samples of RM and the corresponding UHT milk collected after industrial processing (first pasteurized at min. 72.5 °C for at least 15 s and subsequently UHT treated at 140 °C for 3 s) to judge the changes between RM and UHT milk. For most proteins, the same peptides were identified (Figure 1). However, the numbers of αS1-casein-, κ-casein-, GlyCAM-1-, and β-lactoglobulin-derived peptides slightly increased in UHT milk. The higher number of κ-casein-derived peptides might be attributed to the higher levels of κ-casein present in the serum phase due to its depletion from the casein micelle at temperatures above 70 °C [35]. It should be noted that no peptides from α-lactalbumin and only 16 from β-lactoglobulin were identified in total, which may indicate low activity of plasmin and cathepsin D towards these proteins [3,4,36]. Alternatively, they might have been missed due to low quantities or the Folch protocol. A recent study reported that the contents of specific αS1- and β-casein-derived peptides increased during the storage of UHT milk [15]. Here, several of these marker peptides were also identified in RM and UHT milk (Table S2); however, quantitative analysis was beyond the scope of this study.
Most peptides were detected in IF (Figure 1), i.e., 683 peptides from 25 proteins. In particular, the numbers of β-casein, β-lactoglobulin, GlyCAM-1, and κ-casein peptides were higher compared to RM and UHT milk. The information about the protein sources provided on the original package indicates that this IF was produced from skimmed milk and sweet whey, the liquid remaining in cheese production after casein precipitation by rennet coagulation. A peptidomic study of whey permeate from colostrum found predominantly peptides from β-casein, αS1-, and κ-casein, besides peptides corresponding to GlyCAM-1, PIgR, αS2-casein, and serum amyloid A [11]. As whey permeate is part of sweet whey, the increased numbers of IF peptides might originate from the dried sweet whey powder added during IF production. Moreover, the peptidome determined from a whey protein isolate (WPI) revealed peptides originating mainly from β- and αS1-casein, followed by β-lactoglobulin [37]. Whey proteins are added to IF to increase the ratio of whey proteins to caseins from 20:80 in bovine milk to better resemble the human milk composition, with a ratio of approximately 60:40 [38]. However, these are only assumptions from the presented data, as no further details about the added sweet whey were available. Alternatively, the increase in κ-casein-derived peptides can be explained by the cleavage of κ-casein between Phe105 and Met106 during rennet coagulation of caseins in the course of cheese making, leading to the diffusion of glycomacropeptide (GMP, also called caseinomacropeptide, CMP), i.e., the C-terminal fragment starting at Met106, into the whey phase [35,39]. It is worth mentioning that GMP was identified with its full sequence in IF (Table S2, Peptide 697). Hence, mapping of κ-casein peptides underlined that peptides present only in IF originate mainly from the GMP region, especially between Met106 and Glu147, whereas peptides present in all samples are mainly derived from the subsequent C-terminal part (Figure S6, Ser149 to Val169). The low numbers of κ-casein-derived peptides, especially in RM and UHT milk, are in good agreement with previous studies [7,8,10], whereas the higher numbers detected in sweet whey permeate match the trend seen for IF [11]. Many peptides reported for whey permeate overlap with peptides identified here in IF. However, some parts of the protein were not represented by peptides, for example, the region from Ser80 to Phe105, which might be less prone to proteolysis. The majority of the peptides corresponded to the C-terminal part of the protein. Similarly, the regions Leu35 to Tyr52 and Asn25 to Lys70 of αS2-casein were not covered by any peptide in the current study (Figure S4), although a few peptides corresponding to a part of the second region were reported for raw milk [10]. αS1-casein (Figure S3) was mostly covered by peptides identified here, and a few missing parts of the sequence were identified in an earlier study [10].
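The chymosin cleavage that releases GMP can be pictured as a split of the mature κ-casein chain after residue 105. The sketch below uses a placeholder sequence of the correct length (169 residues for the mature bovine chain) rather than the real sequence, which is not reproduced here.

```python
def cleave_kappa_casein(seq: str, site: int = 105):
    """Split mature kappa-casein at the chymosin site Phe105|Met106.

    Returns (para-kappa-casein, glycomacropeptide). `seq` is the mature
    chain numbered from residue 1; a placeholder is used below instead of
    the real 169-residue bovine sequence.
    """
    para_kappa = seq[:site]      # residues 1..105, stays in the curd
    gmp = seq[site:]             # residues 106..169, diffuses into the whey
    return para_kappa, gmp

# Placeholder sequence of length 169 purely for illustration.
fake_seq = "X" * 105 + "M" + "X" * 63
para, gmp = cleave_kappa_casein(fake_seq)
print(len(para), len(gmp))  # 105 64
```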
Interestingly, the β-casein peptides identified in IF covered the complete protein sequence, although this protein is longer than the other caseins (Figure 2). Peptides from a few sequence regions were absent in RM and UHT milk, but peptides covering these regions were previously identified in raw and UHT milk [10,11,15]. Similar to our observations in IF, a recent study focusing on the analysis of in vitro digests of human milk and IF identified peptides from the caseins, β-lactoglobulin, and some minor milk proteins, with most peptides originating from the N- and C-termini of β-casein [40].
This study focused on processing-related changes in the milk peptidome without considering peptide bioactivities. Nevertheless, several well-known bioactive peptides were identified. For example, the β-casein peptides Ala177 to Arg183 and Tyr193 to Arg202, which belong to the class of β-casokinins with known ACE-inhibitory properties [1], are located in regions represented by many unmodified peptides. A similar trend was observed for a β-casein phosphopeptide (Lys29 to Thr41) with mineral-binding properties [1]. Moreover, αS1-casokinin (Pep 40) with ACE-inhibitory properties, as well as the sequences of the antimicrobial peptides caseicins B (Pep 32), C (Pep 205), and A (Pep 79), were present within slightly longer sequences (Table S2) [1,41].
Non-Enzymatic Modifications
Many proteomic studies have targeted a variety of non-enzymatic PTMs, such as glycation, AGEs, and oxidations, in diverse milk samples including raw, pasteurized, and UHT milk and IF [18][19][20]23,29,31,42,43]. Generally, the modification degrees and the number of modified residues increase with harsher processing conditions, depending on the modification type. Although endogenous peptides are far less abundant than proteins, they might also be a target of the same chemical reactions during milk processing. Additionally, modified peptides might be released from proteins by proteases. However, studies on the modified milk peptidome are still lacking despite their possible effect on bioactive peptides. Therefore, this study aimed to comprehensively characterize 26 diverse PTMs in endogenous milk peptides. However, only four different modifications, i.e., two glycation products (lactosylation and hexosylation) and two oxidation products (T(Ox) and P(Ox)), were identified in 175 peptides originating from seven proteins that were also represented by unmodified peptides, i.e., αS1-casein, αS2-casein, β-casein, BT, FGF-BP, GlyCAM-1, and LPO. Although unmodified peptides from αS1-casein were most common, the highest number of modified peptides originated from αS2-casein, followed by αS1-casein and GlyCAM-1. Notably, only five modified compared to 202 unmodified β-casein peptides were identified, particularly in IF. The increasing numbers of modified peptides from RM to UHT milk and further to IF resulted from lactosylation, which is in agreement with bottom-up proteomic studies of diverse milk samples showing increasing numbers and quantities of lactosylated peptides with harsher processing conditions [19,22,23]. Modifications of intact proteins likely reduce protease activity, as plasmin digests lactosylated α- and κ-casein less efficiently, while β-casein is still cleaved [24]. However, the presented data are not conclusive on this aspect, as the numbers of unmodified α-casein peptides were similar among all samples despite increasing numbers of lactosylated peptides, and peptides may also be modified after proteolysis.
The numbers of hexose-derived peptides increased in the same order as observed for lactosylation, being highest in IF, which corresponds well to a previous proteomic study on hexosylation in milk and IF samples [20]. Additionally, most lactosylation and hexosylation sites were identical to those reported previously [18][19][20][21]23], with only three novel sites reported here for the first time, i.e., Lys21 of αS2-casein, Lys4 of GlyCAM-1, and Lys74 of LPO. Noteworthy, only two oxidation products were detected, i.e., reactive carbonyls at Pro160 of BT and at Thr38 of αS2-casein, which has not been reported previously. While oxidized Pro160 was detected in one peptide present in all milk types, oxidized Thr38 was present in several peptides, but their number was lowest in IF. Interestingly, no AGEs were identified in the peptidome, although formylated and carboxymethylated lysine residues were identified as major AGE modifications in milk proteomic studies [22,31,32]. It remains open whether AGE and carbonylation sites were missed due to their low abundances. This could be evaluated, at least for carbonylated peptides, by enriching them after derivatization using biotin-avidin affinity chromatography [29].
Interestingly, many αS1-casein- and GlyCAM-1-derived modified peptides originated from areas already covered by high numbers of unmodified peptides, e.g., Val25 to Val37 in αS1-casein and Ser54 to Lys73 in GlyCAM-1. However, αS2-casein peptides oxidized at Thr38 were not represented by any of the identified unmodified peptides (Figure 5). Noteworthy, this modification site was identified at the protein level in flavored milk drinks [32]. Most modified peptides corresponded to the region from Lys150 to Lys165, containing four of the eight modified Lys residues. This region was most affected during processing, as the number of modified peptides increased markedly from RM to UHT milk and further to IF.
In general, the identified peptides followed the same trends as reported at the protein level, with increasing numbers of glycated peptides along the processing chain. Furthermore, the proteins followed different trends regarding the location of modified peptides within their sequences. It remains open whether peptides become modified before or after proteolytic release, but most likely modifications occur at both levels.
Conclusions
This study analyzed changes in the native peptidomes of raw bovine milk, UHT milk, and IF, as well as the non-enzymatic modifications present therein. Independent of the milk type, casein-derived peptides were most common. The native peptidomes of RM and its UHT milk appeared to be very similar, whereas IF contained considerably more β-casein-derived peptides, probably due to the addition of sweet whey during its production. To study the effects of thermal processing on the peptidome, in total 26 PTMs were targeted at the peptide level, as increasing degrees of non-enzymatic modification are well known from proteome studies. To the best of our knowledge, this is the first study targeting so many different PTMs related to glycation, AGE formation, and oxidation/carbonylation in the peptidomes of RM, UHT milk, and IF. Although many of these modifications have been reported in milk proteins, only four types were identified, i.e., lactosylation as the most dominant, followed by hexosylation as well as proline and threonine oxidation. The numbers of lactosylated peptides increased from RM to UHT milk and further to IF, as reported before for protein-bound modifications. Thus, the native milk peptidome is affected by diverse chemical reactions, including Maillard reactions, occurring during processing and storage. Future studies should evaluate quantitative changes in the peptidome induced along the processing chain of bovine milk, especially focusing on known bioactive peptides as well as on the effects of the identified modifications on their functional properties.
| 9,084.6 | 2020-12-01T00:00:00.000 | ["Agricultural and Food Sciences", "Chemistry"] |
Agent-Based Data Analysis Towards the Dynamic Adaptation of Industrial Automation Processes
Industrial complex systems demand the dynamic adaptation and optimization of their operation to cope with operational and business changes. In order to address such requirements and challenges, cyber-physical systems promote the development of intelligent production units and products. The realization of such concepts requires, amongst others, advanced data analysis approaches capable of taking advantage of the increased availability of data in order to overcome the inherent dynamics of industrial environments by providing more modular, adaptable, and responsive systems. In this context, this work introduces an agent-based data analysis approach to support the supervisory and control levels of industrial processes. It proposes to endow agents with data analysis capabilities and cooperation strategies, enabling them to perform distributed data analysis and to dynamically improve their analysis capabilities based on the aggregation of shared knowledge. Experiments have been performed in the context of an electric micro grid to validate this approach.
Introduction
Companies are subject to market dynamics and competitiveness, which demand highly customized, high-quality products and services at reduced prices. Additionally, they operate in complex environments, characterized by distributed and heterogeneous systems, which require the dynamic adaptation and optimization of their processes to cope with operational changes caused by technical problems (e.g., equipment damage and resource availability) or changes in business rules (e.g., new product demands or designs). In order to address such requirements, the Industrie 4.0 vision [1] and Cyber-Physical Systems (CPS) principles [2] promote the use of smart machines, systems, and products. To support the realization of such concepts, advanced data analysis approaches should be considered, taking advantage of the great amounts of data produced in such environments. Continuous data analysis makes it possible to identify and predict the system's operational conditions, providing valuable information to support process supervision and control.
Existing approaches handle these issues without addressing the inherent dynamics and complexity of these environments (e.g., the ability to adapt data analysis capabilities in the face of condition changes), which require modularity, adaptability, and responsiveness. In industrial environments, approaches also need to support different data analysis scopes: 1) at the operational level, distributed streaming data analysis for rapid response, and 2) at the supervisory level, centralized and more robust data analysis for decision-making.
Multi-agent systems (MAS) [3][4] have been identified as a suitable approach to support the design and development of distributed, flexible, and dynamic systems. Thus, the goal of the ongoing work is to design an advanced distributed data analysis approach, based on MAS, to support intelligent and adaptive supervisory control applications, towards the dynamic adaptation of industrial automation processes. This approach proposes to endow agents with data analysis capabilities and cooperation strategies, enabling them to perform distributed data analysis and to continuously improve and dynamically adapt their local capabilities based on the aggregation of knowledge. Experiments in the context of an electrical micro grid have been performed to consolidate and validate the proposed approach. The preliminary results showed that agents are able to perform distributed predictive data analysis of energy production.
Although this approach presents great potential to address the recent challenges faced by industry, there are open questions regarding how to properly endow agents with data analysis capabilities and how the extracted information can enhance agents' behaviors. The conceptual and technical answers to such questions will enable the design and development of innovative and powerful approaches.
This paper is organized as follows. Section 2 describes the contributions of the work to the realization of CPS. Section 3 presents the literature review and Section 4 presents the proposed approach. Section 5 overviews the critical analysis of this proposal and discusses the preliminary results. Finally, Section 6 wraps up the paper with the conclusions and states the research directions.
Contribution to Cyber-Physical Systems
CPS promote the integration of the physical and virtual worlds; the first is characterized by a large network of interacting heterogeneous hardware devices, while the second provides robust computing infrastructures, replete with software platforms, applications, and information technologies. Such integration aims at a more effective management of the physical environment and its processes, by embedding computational elements in physical entities and connecting such entities in a cloud-based infrastructure.
CPS have been deployed in several fields related to smart production, grids, and buildings, where large numbers of devices must be efficiently sensed and controlled in a reliable, secure, real-time, and distributed manner. Additionally, such devices produce large volumes of data, requiring advanced data analysis approaches in order to enable the capabilities and features envisioned by CPS, namely self-adaptation, fault tolerance, automated diagnosis, and proactive maintenance [5][6]. While most existing works focus on the design of control approaches for CPS, this work intends to contribute to the issues and challenges related to supervisory aspects, considering Big Data features. The main objective is to provide algorithms and mechanisms to support more intelligent and adaptive monitoring and supervisory CPS.
Literature Review
Industrial environments are characterized by a large network of heterogeneous devices (endowed with sensors and actuators) that monitor and control the related processes [7]. Industrial management systems need to integrate and coordinate such devices, automating the overall process in order to optimize and ensure the quality of outcomes and to keep the plant available. MAS have been proposed to address the issues of industrial systems, which need to be flexible and adaptive to cope with the inherent complexity of managing dynamic, heterogeneous, and distributed components [3], [4]. In MAS, several autonomous, collaborative, and self-organizing decision-making entities, called agents, interact and exchange knowledge to achieve their goals [3]. The application of agent-based technology in the industrial domain, to solve problems related to production automation and control, supervision and diagnosis, production planning, and supply chain and logistics, is surveyed in [4], [8], [9], and is covered by the Industrial Agents [8] research field.
Technological advances in sensor devices have leveraged their use in industrial environments and consequently increased the amount of collected data [10], further increasing the complexity of such environments. In many cases, the produced data is underused, mainly because great expertise and specialized knowledge are necessary for its integration and analysis. However, the recent popularization of the Big Data concept and its potential has caught the attention of industry. In this context, data analysis has been widely applied in the industrial domain, e.g., at the operational level for process monitoring, diagnosis, optimization, and control, and at the business level for customer relationship management, supply chain, sales, and others [11], [12], [13], [14]. However, to use data analysis effectively and extract its full potential, several challenges found in industrial scenarios need to be overcome, such as mechanisms to integrate distributed, heterogeneous, dynamic, and streaming data sources [15].
In general, MAS and data analysis have been used successfully, but separately, to address several issues in the industrial domain. In particular, MAS are used to develop adaptive and intelligent control systems, while data analysis provides effective data-driven decision-making algorithms. In this sense, several works leverage and discuss the potential of integrating these technologies to provide better solutions in various domains [16], [17], [18].
Research Contribution and Innovation
Considering the assumptions discussed in the previous section, this work intends to design and develop an agent-based data analysis approach towards a flexible and adaptive industrial supervisory control system, capable of coping with the dynamics and the large numbers of distributed and heterogeneous industrial devices. This approach is more concerned with supervisory and monitoring aspects than with the control of processes. Therefore, the general objective of this project encompasses mechanisms and algorithms to derive information and knowledge from data of different industrial levels, and then properly provide them for decision-making and process management.
Agent-based Data Analysis Features and Requirements
The design of the proposed approach requires the consideration of some essential requirements and features, as illustrated in Figure 1. They are directly related to ongoing and upcoming industry challenges and issues raised by the Industrie 4.0 vision. As already discussed in the previous section, MAS and Data Analysis are the base technologies that support this approach. The first provides the infrastructure to achieve the required flexibility and adaptability, while the second provides the proper tools to take advantage of the increased data availability.
On the other hand, to cover the different industrial automation levels, such as the monitoring of the operational process and the supervision of the whole plant, the proposed approach needs to support different data analysis scopes: 1) at the operational level, distributed data streaming analysis for rapid response; and 2) at the supervisory level, more robust big data analysis for decision-making. Big Data considers the volume, variety, and velocity of data, requiring dedicated and usually distributed computing infrastructures to extract valuable information from raw data, while Data Streaming considers the analysis of data in real or near-real time, providing simpler information that addresses rapid-response requirements. Some works in the literature already discuss approaches that address these two kinds of data analysis scopes [18]. Other requirements consider (Figure 1): 1) MAS infrastructures for distributed DA (Data Analysis); and 2) multi-algorithm, plug&play, and continuous model improvement features. The first focuses on providing a modular and scalable data analysis infrastructure by taking advantage of the MAS approach to support and enhance the various data analysis phases. For example, agents can be employed for data retrieval, preprocessing, integration, and analysis, in a distributed and cooperative way. The second comprises three related features where the focus is the use of MAS to provide a dynamic and adaptive infrastructure for data analysis. Multi-algorithm comprises the deployment of different data analysis algorithms and models, e.g., one per agent, which can perform the same task over the data, with the results combined at the end to obtain more accurate information. Plug&play comprises the use of MAS to provide an open and dynamic infrastructure that enables the seamless addition of new algorithms and data sources to the system. Continuous model improvement comprises mechanisms and algorithms that enable data analysis models to be updated to fit the environment's dynamics. In this case, specialized agents could be in charge of analyzing the performance of the current data analysis models, updating them to enhance their accuracy. A minimal sketch of how these three features might fit together is given below.
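The following sketch is one possible, purely illustrative realization: each agent wraps a different model (multi-algorithm), agents can be registered at runtime (plug&play), and an agent retrains its model when its running error degrades (continuous model improvement). All class and method names are hypothetical, not part of any existing framework.

```python
from statistics import mean

class AnalysisAgent:
    """An agent wrapping one data-analysis model (multi-algorithm feature)."""

    def __init__(self, name, model):
        self.name = name
        self.model = model
        self.errors = []                 # running record of prediction errors

    def predict(self, features):
        return self.model.predict(features)

    def evaluate(self, prediction, actual, threshold=0.2):
        """Track accuracy and retrain when it degrades (continuous improvement)."""
        self.errors.append(abs(prediction - actual))
        if mean(self.errors[-20:]) > threshold:
            self.model.retrain()         # delegate the model update
            self.errors.clear()

class Coordinator:
    """Plug&play registry: agents can be added at runtime and results fused."""

    def __init__(self):
        self.agents = []

    def register(self, agent):           # seamless addition of new algorithms
        self.agents.append(agent)

    def combined_prediction(self, features):
        return mean(a.predict(features) for a in self.agents)

class ConstantModel:
    """Trivial stand-in model so the sketch runs end to end."""
    def __init__(self, value): self.value = value
    def predict(self, features): return self.value
    def retrain(self): pass

coord = Coordinator()
coord.register(AnalysisAgent("wind", ConstantModel(5.0)))
coord.register(AnalysisAgent("solar", ConstantModel(3.0)))
print(coord.combined_prediction({"hour": 12}))  # 4.0
```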
While the previous features are more related to infrastructural aspects, there are also others related to industrial supervisory and control aspects. In Figure 1, the Distributed decision-making and support element comprises coordination and negotiation mechanisms for agents monitoring and diagnosing the system's conditions. The Pattern recognition, anomaly detection and prediction element represents the common application of data analysis to solve industrial problems, while the Dynamic control of complex environments element comprises the support of dynamic adaptation and optimization of operations and processes in the face of changes in the environment or in operating process conditions.
Agent-based Model
Considering the analyzed features and requirements, an agent-based model is proposed (Figure 2), comprising two layers of agents. On the left side of Figure 2, at the lower layer, agents are in charge of streaming data analysis, providing simple information about the processes (e.g., operation status, triggers, and events) while meeting rapid-response constraints. In this layer, each agent is responsible for retrieving and analyzing the data from process devices in order to support control actions. These agents could be embedded into devices to perform distributed data analysis and intelligent monitoring, cooperating to identify problems or provide information about the system. At the upper layer, agents are responsible for processing and analyzing great amounts of historical and incoming data from plant operations, business, and also external data, in order to provide information for high-level decision-making (e.g., performance, quality, or degradation indicators, event diagnosis, trends, and forecasts). These agents could be deployed in a cloud-based computing environment, taking advantage of such infrastructure and other tools to perform their tasks and also to manage the lower-level agents.
In this approach, the agents of each layer comprise three modules (illustrated on the right side of Figure 2) that group a set of specific components, which define the agent's behaviors and capabilities. The Data Analysis module defines the components that perform data analysis tasks, the Decision module defines the components that process, organize, and consolidate the analysis results, and the Execution module defines the components that use the consolidated information to act on the environment. Agents from both layers have two common components, Raw/Operational data and Inter-agent communication, responsible for retrieving external data from the environment and managing the agent interaction, respectively.
The components of the lower-layer agents' Data Analysis module comprise:
• Preprocess & Integrate, which prepares the raw data to be analyzed;
• Monitoring, which performs several types of data analysis;
• Analysis models, which comprises all the data analysis models used by the agent;
• Evaluate results, which assesses the accuracy of the analysis models (e.g., by comparing their output with a system feedback).
The Decision module comprises Interpret, which contextualizes and makes assumptions over the analysis results; Collaborative monitoring, which determines whether the agent needs any kind of information that could be provided by other agents; and Context aware, which provides the local knowledge used by the other components. The upper-layer agents are also defined by several components:
• Supervision, which receives monitoring information from lower-layer agents and uses it to obtain the status of production stations, plants, or the whole process;
• Improve models, which retrains or rebuilds the data analysis models used by lower-layer agents, based on the feedback provided by these agents;
• Set up monitoring, which builds new data analysis models and sets up and deploys lower-layer monitoring agents;
• Big Data analysis, which considers data from different sources, including external and historical data, in order to extract information for a broader context;
• Analysis models, which, as in the lower-layer agents, comprises all the data analysis models used by the agent;
• Data sets, which represents the access interfaces for historical data, since external data is provided by the Raw/Operational data component.
The Decision module of the upper-layer agents comprises the following components:
• Discovery, which monitors the system components (e.g., agents or devices) to support the dynamic adaptation of the system;
• Report & Prescribe, which compiles and provides information about the conditions of parts of the system or the whole system, and suggests actions and their possible consequences (what-if information), considering the information provided by the Supervision, Big Data analysis, or Knowledge components;
• Knowledge, which relates to the operational and technical characteristics and constraints associated with parts of the system or the whole system;
• Distributed diagnose, which interacts with other upper-layer agents to collaboratively identify and diagnose the whole system's conditions.
A minimal code sketch of this module composition for a lower-layer agent follows.
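This is a purely illustrative composition of the three modules for a lower-layer agent; the limit check stands in for a streaming-analysis model, and the threshold and readings are hypothetical.

```python
# Minimal composition sketch of the three agent modules described above.
# All names are illustrative; thresholds and data are hypothetical.

class DataAnalysisModule:
    def monitor(self, reading):
        # a simple limit check standing in for a streaming-analysis model
        return {"value": reading, "abnormal": reading > 90.0}

class DecisionModule:
    def interpret(self, analysis):
        return "raise_event" if analysis["abnormal"] else "ok"

class ExecutionModule:
    def act(self, decision, upper_layer):
        if decision == "raise_event":
            upper_layer.append("abnormal reading reported")  # notify upper layer

class LowerLayerAgent:
    def __init__(self):
        self.analysis = DataAnalysisModule()
        self.decision = DecisionModule()
        self.execution = ExecutionModule()

    def step(self, reading, upper_layer):
        a = self.analysis.monitor(reading)
        d = self.decision.interpret(a)
        self.execution.act(d, upper_layer)

upper_inbox = []                 # stands in for an upper-layer supervision agent
agent = LowerLayerAgent()
for r in (42.0, 95.5):
    agent.step(r, upper_inbox)
print(upper_inbox)               # ['abnormal reading reported']
```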
Discussion of Results and Critical View
The described approach is being designed and verified based on a case study in the context of an electric micro grid comprising wind turbines and photovoltaic panels. The preliminary results showed that producer agents (PAs) are capable of performing distributed analysis of the energy production and weather data from sensors installed in the energy production units. PAs were able to monitor the operational conditions of the production units in order to identify abnormalities in energy production, by performing short-term prediction of energy production using different data analysis models built from historical data. Through a mid-term prediction of energy production, performed by integrating external weather forecasting data, PAs were able to provide information about the amount of energy expected to be produced in the near future, which could be used by engineers, grid operators, and other systems to enhance and optimize energy distribution and balance. During energy predictions, the agents were able to continuously evaluate and improve their analysis models [19].
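The evaluation loop described above can be pictured with the following hypothetical sketch of a producer agent: a naive persistence forecast stands in for the actual prediction models of [19], and the error threshold is an illustrative choice.

```python
# Hypothetical sketch of a producer agent (PA) performing short-term energy
# prediction and continuously evaluating its model, as described above.

class ProducerAgent:
    def __init__(self, threshold=0.15):
        self.history = []            # past production measurements (kWh)
        self.threshold = threshold   # relative-error level triggering an alert

    def predict_next(self):
        # naive persistence forecast: next value = mean of recent history
        recent = self.history[-3:]
        return sum(recent) / len(recent)

    def observe(self, produced_kwh):
        if self.history:
            forecast = self.predict_next()
            rel_error = abs(forecast - produced_kwh) / max(produced_kwh, 1e-9)
            if rel_error > self.threshold:
                # would trigger model retraining or an abnormality report
                print(f"abnormality or model drift: error={rel_error:.2%}")
        self.history.append(produced_kwh)

pa = ProducerAgent()
for kwh in (10.2, 10.5, 10.1, 4.0):   # a sudden drop flags an abnormality
    pa.observe(kwh)
```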
The experiments performed so far covered only some of the lower-layer aspects of the proposed approach. The preliminary experiments showed promising results, but many features and requirements still need to be addressed in order to verify the expected potentials and benefits. Moreover, these features need to be evaluated in terms of their performance, robustness, and scalability regarding the data analysis aspects. Other aspects that should be explored in this case study comprise the development of predictive capabilities for consumer and storage agents in order to manage the energy consumption and power storage of micro grid nodes.
Conclusions and Future Work
In the industrial domain, MAS have been used as a suitable approach to design and develop flexible and adaptable industrial control systems, while data analysis is used to provide effective algorithms to support data-driven decision-making. In this context, the proposed approach intends to combine the features of these two technologies to contribute to the realization of CPS principles, in order to meet the requirements imposed by the Industrie 4.0 vision. This work describes an agent-based data analysis approach for intelligent and adaptive industrial supervisory control systems. Moreover, the approach covers the requirements of the process monitoring and supervision automation levels. Despite the promising perspectives, it is clear that to achieve the desired objectives some aspects and issues need to be addressed, namely the dynamic, openness, and rapid-response requirements of industrial environments, and mechanisms for distributed, cooperative, and self-improving data analysis.
Future work encompasses the detailed specification and definition of the mechanisms and strategies required to cover the more advanced aspects and features of the proposed approach. Thereafter, the current case study should be further explored, extending the preliminary experiments in order to validate and assess other aspects. Moreover, it is intended to explore another case study scenario in the manufacturing domain.
Fig. 1. Essential requirements and features of the proposed approach.
Fig. 2. Agent-based data analysis approach for adaptive industrial supervisory control systems.
| 4,025 | 2016-04-11T00:00:00.000 | ["Computer Science"] |
Constitutional Spheres and Forms of Interaction among Chambers in Modern Parliaments
The article is devoted to the identification of the spheres and associated forms of interaction between the chambers of modern bicameral parliaments. It is noted that the constitutional forms of inter-chamber cooperation display considerable variety and depend on the legal status of the chambers. On the basis of an analysis of constitutional legislation, the article proposes a systematization of the spheres and forms of interaction between the chambers of parliaments.
Introduction
Modern parliaments are studied in terms of their essence and functioning (Rogers & Walters, 2015; Palmer, 2015; Beetham, 2006), in national (Kriesi, 2001; Remington, 2008; Norton, 2013; Bach, 2003; Ziegenhain, 2008) and European (Dann, 2003; Judge & Earnshaw, 2008) contexts, and in political (Easton, 1953) and party (Kreppel, 2002; Hix et al., 2003) dimensions, among others. The problems of interaction between chambers concern only those countries that have bicameralism (Tsebelis, 1997) and are usually treated contextually. We propose to examine the interaction of bicameral parliament chambers in order to identify the possible spheres and forms of their cooperation. The relevance of this direction is conditioned by the constitutional design of a parliament as a single body that implements a general state (legislative, control, personnel, etc.) policy.
Methodology
The study was based on a dialectical approach to the study of processes and phenomena, using general scientific (systemic and logical analysis and synthesis) and specific scientific methods. The latter included the formal-legal, historical-legal, and comparative-legal approaches, which presuppose the exchange of information at the level of world legal science and the search for new parameters to compare the phenomena of legal reality in different countries.
Discussion and Results
A vivid example of the organic unity of chambers within the legislative body is Article 24 of the French Constitution of 1958, which states: "Parliament adopts laws. It controls the activities of the Government. Parliament assesses state policy. Parliament consists of the National Assembly and the Senate." Section 1 of Article I of the US Constitution of 1787 stipulates that "legislative power belongs to the Congress of the United States, which consists of the Senate and the House of Representatives". According to Art. 44 of the Constitution of the Federative Republic of Brazil of 1988, "Legislative power is exercised by the National Congress, which consists of the Chamber of Deputies and the Federal Senate". Similar wordings are contained in the constitutions of Thailand, 2007 (Article 88), of the Czech Republic, 1992 (Article 15), of Japan, 1947 (Article 42), etc.
The conditionality of the mutual relations between the chambers was precisely and succinctly formulated by the well-known Russian constitutionalist V.E. Chirkin: "the bicameral nature of chambers makes them interact" (Chirkin, 2011, p. 137). Accordingly, the constitutions and the regulations of the chambers in countries with such a structure of the legislative body provide for various organizational, functional, procedural, and other measures to ensure the cooperation of the chambers.
One of the constitutional forms of interaction between chambers is joint meetings. Thus, Art. 136 of the Constitution of the Kingdom of Thailand, 2007 contains a detailed list of 16 grounds for holding joint meetings of the House of Representatives and the Senate, among which are the confirmation or approval of succession to the throne, the adoption of the National Assembly's rules of procedure, the approval of a declaration of war, the consideration of a draft organic law, etc.
Some constitutions leave the list of issues for joint meetings open. For example, according to Art. 37 of the Constitution of the Czech Republic, 1992, joint sessions of the chambers are convened by the Chairman of the Chamber of Deputies, but there is no list of grounds for holding such meetings. The determination of issues for a joint meeting of the Chamber of Deputies and the Senate is purely a "discretionary power of the lower chamber's President". A joint meeting can also be used as a means of resolving inter-chamber conflicts. Thus, in Australia, in the case of a legislative conflict between the House of Representatives and the Senate, the Governor-General has the right to dissolve both chambers simultaneously. If, after such dissolution, the chambers of the Federal Parliament again fail to reach agreement on the issue, a joint meeting is convened at which the issue is decided by a majority vote of the total number of members of the Senate and the House of Representatives.
Proceeding from its functional nature, bicameralism determines the interaction of the chambers in the legislative process. World practice presents a significant variety of forms of interaction between chambers in the legislative sphere. They include, for example, the approval of a draft law by both houses in an identical edition (Part 2, Article 107 of the Constitution of the Republic of India, 1949; Article 59 of the Constitution of Japan, 1947); "silent consent" (Part 2, Article 121 of the Constitution of Poland, 1997: "... if the Senate does not adopt a relevant resolution within 30 days from the date of the law's transfer, the law shall be considered adopted in the edition adopted by the Sejm"); and the resolution of disagreements on a draft law.
We think it necessary to comment on the latter. In some countries, in order to overcome a negative position of one of the chambers, it is possible to vote again on the draft law in the chamber with greater powers. Thus, if the House of Councillors of Japan does not approve a draft law of the House of Representatives within 60 days after its receipt, the draft law is considered rejected. To override the House of Councillors' veto, a second vote is necessary, with the approval of at least two-thirds of the deputies of the lower house (Article 59 of the Constitution of Japan, 1947). An absolute majority may also suffice. For example, to overcome the objections of the House of Lords in the UK, a repeat vote in the House of Commons may be sufficient, but only at least one year after the first vote in the lower house (Parliament Act, 1949). Such a procedure is also provided by Part 1, Art. 47 of the Constitution of the Czech Republic, 1992, according to which, if the Senate rejects a draft law, a second vote is held in the Chamber of Deputies. The draft law is considered adopted if more than half of the total number of deputies vote for it.
In this case, the upper house's possibilities to influence the content of an adopted law are very limited, and in fact its role is reduced to the approval or disapproval of a ready-made legislative decision. In a number of countries this is possible within the framework of the same session of parliament, but in the UK, for example, it is possible only at another session. Since a session is convened once a year in this country, it means theoretically that the House of Lords can delay the adoption of a law for a year.
With regard to financial bills in such countries, the upper house does not have the right of veto. It can only delay the entry of financial laws into force, for example, for 30 days in Japan (Article 60 of the Constitution of Japan, 1947).
The analysis of constitutional acts made it possible to identify conciliation procedures as a form of interaction and as a way of achieving compromise between chambers with equal rights. In this case, the creation of conciliation commissions is provided for. Thus, in France, Presidents have the right to "convene a meeting of a mixed parity commission authorized to propose an act concerning the problematic provisions" as a result of disagreements between the chambers (Part 2, Article 45 of the French Constitution, 1958).
Discrepancies between chambers in the legislative process are also overcome through so-called quasi-consensus approaches. For example, in Spain, according to Art. 74 of the Constitution of 1978, when no agreement is reached between the Congress of Deputies and the Senate on the text submitted by a joint commission (consisting of equal numbers of deputies and senators), the issue is decided by the Congress by an absolute majority of votes.
Joint meetings are also a form of reaching an agreed position between the chambers on the draft law under consideration, with decisions taken by a majority of the votes of the members of both chambers (Part 4, Article 108 of the Indian Constitution, 1949). In V.E. Chirkin's opinion, the separation of the chambers provides a versatile and balanced approach to a draft law and enables the chamber representing the interests of the territories to express its will. Moreover, the House of Representatives is often more numerous, which in theory always allows it to override the opinion of the other chamber (Chirkin, 2011, p. 74).
Along with the situational forms of interaction between the chambers of parliament, there are also permanent ones, provided for constitutionally. In particular, the chambers create common bodies in the form of commissions and committees on a parity basis to prepare for the discussion of laws during a plenary session, as well as to resolve other issues. They can work on a permanent or temporary basis. Examples of the first are the Joint Committee on Human Rights in the UK Parliament and the Joint Committee of Bundestag and Bundesrat deputies (Article 53a of the Basic Law of the FRG, 1949). The second can be illustrated by the example of a conference of chamber representatives in the US Congress, convened to consider relevant proposals to improve a branch of legislation or to work on a specific draft law. The decisions taken by such bodies can have different force: in some cases, the decision of the commission is recommendatory and must be approved (or rejected) by the chambers; in other cases, the commission can take a final decision on a law, after which the act is sent to the head of state for signature and promulgation.
In addition to the legislative cooperation of the parliamentary chambers in the above-mentioned forms, they also cooperate on other issues.
A sphere of conjoined chamber powers typical of most modern states is the organizational and personnel one, which provides for the joint formation of higher bodies of state power or the appointment of officials. The analysis of foreign constitutions made it possible to identify four forms of interaction between the parliament chambers here.
The first assumes the parity participation of the chambers. In Germany, for example, the members of the Federal Constitutional Court are elected in equal numbers (eight each) by the Bundestag and the Bundesrat (Article 94 of the Basic Law of the Federal Republic of Germany, 1949); in Italy, a joint session of the parliament chambers appoints five judges of the Constitutional Court of Italy (Part 1, Art. 135 of the Constitution of the Italian Republic, 1947).
The second is parity-attracted, when the chambers of parliament involve representatives of the subjects of the federation or of administrative-territorial units in the election of the head of state (Article 83 of the Constitution of Italy, 1947; Article 52 of the Indian Constitution, 1949).
The third form is parity-conciliation, as, for example, in Poland: the Chairman of the Supreme Chamber of Control (Part 1 of Article 205 of the Constitution of Poland, 1997) and the Commissioner for Citizens' Rights (Part 1, Article 209) are appointed by the Sejm, but with the Senate's consent. In Japan, if agreement on the candidacy of the Prime Minister is not reached through a joint session of the House of Councillors and the House of Representatives, the decision of the lower house becomes the decision of the entire Parliament (Article 67 of the Constitution of Japan, 1947).
The fourth form is parity-vacation, when representative commissions (parliamentary deputations) formed by both chambers are constitutionally provided to work during parliamentary recesses. Their size is sometimes precisely fixed in the constitutions (according to Article 78 of the Constitution of Mexico, 1917, "The Permanent Committee consists of 29 members, of which 15 are Deputies and 14 are Senators, appointed by the respective Chambers during the last session before adjournment") or may be left undefined (Par. 4, Article 58 of the Brazilian Constitution of 1988: "the composition of the commission should be proportional to the representation of political parties as much as possible").
Another recognized sphere of cooperation among the chambers of foreign parliaments is control. There are grounds to distinguish three varieties within it: personnel control, organizational control, and financial control.
The first is realized in connection with the removal from office (impeachment) or the resignation of the highest state officials. There are significant differences in the regulation of who may initiate proceedings and of the procedural rules for applying constitutional responsibility. The most complete powers in this sphere belong to the US Congress, since the House of Representatives initiates charges against federal officials (the President, Vice President, ministers, governors, federal judges, ambassadors), and the Senate (with the Chief Justice of the Supreme Court presiding when the President is tried) decides on their guilt or innocence by a qualified majority (2/3) of votes.
In France, "the dismissal of the President is exercised by the Parliament, constituting the Supreme Court".The initiative should proceed from 10% of the senators and 10% of the deputies of the National Assembly at least, and two thirds of each of the chamber members must support it.The President of the National Assembly presides over the Supreme Court, established on equal origins by the National Assembly and the Senate.If the Supreme Court decides to impeach a President, then he acts as the Supreme Court of general competence.In Italy, the President of the Republic is tried by the Parliament at a joint session of the Chambers by an absolute majority of its members (Article 90 of the Constitution, 1947), and the decision on the charge put forward under the Constitution is referred to the Constitutional Court powers (art.134).
A variation of personnel-control interaction between the parliament chambers is the expression of no confidence in ministers. In constitutional form, this option is fixed in some Latin American states. Thus, in Colombia and Paraguay, deputies can express no confidence in ministers "in connection with their official duties" during joint sessions of the chambers (Chirkin, 2011, p. 108).
The joint activity of the chambers is also expressed in control over delegated legislation (with the granting of the relevant powers to the government). The delegation of legislative powers by both chambers of parliament is provided for by Art. 76 of the Italian Constitution of 1947, according to which "the exercise of the legislative function can be delegated to the Government, provided that the principles and the guiding criteria of such a delegation are determined and it is provided only for a limited time and on a certain range of issues".
The second, organizational-control variety is realized in connection with the right of the chambers to create joint permanent and temporary commissions. Permanent commissions mediate the reports of ministers on the activities of the ministries they head. In the United States and in the presidential republics of Latin America, information from the heads of federal agencies is regularly discussed at joint meetings of the chambers; although no decisions binding on the executive bodies are taken on such reports, these materials are taken into account in legislative activity (Chirkin, 2011, p. 119).
In Italy, permanent bicameral commissions with control powers are established constitutionally. Their establishment helps to eliminate unnecessary duplication of functions between the Italian parliament chambers, which have equal status. Thus, control over the activities of the regional councils is carried out by the permanent Committee on regional issues, which consists of 20 deputies and 20 senators, taking into account the representation of all regions of the country and of the members of the various parliamentary groups.
Temporary joint commissions may have an "investigative" character. According to Art. 76 of the Spanish Constitution of 1978, an investigation commission can be created in connection with the consideration of any matter of public interest, and its conclusions can be transferred to the prosecutor's office. In accordance with § 3, Section VII, Ch. I of the Constitution of Brazil (1988), such commissions have the authority to conduct their own investigations like judicial bodies. They are created by the Chamber of Deputies and the Federal Senate at the request of one-third of the members of these chambers, and the conclusions of the commissions can be sent to the prosecutor's office to bring perpetrators to civil or criminal responsibility.
The third, financial-control variety of the joint activities of the parliament chambers concerns the state budget, the establishment of taxes, levies, and tax benefits, decision-making on the state's external and internal loans, monetary emission, the creation of diverse extrabudgetary funds, etc. Moreover, it can also be referred to the legislative sphere, since it is implemented in the form of the adoption of laws. Thus, in accordance with the Constitution of Italy (Article 81), both chambers annually approve the budget submitted by the Government and the law on its execution.
Let us also note the cooperation of the parliament chambers in the international sphere. Constitutional practice shows that parliament is traditionally entitled to ratify and denounce international treaties. The constitutions of a number of countries stipulate that the consent of both chambers of parliament is traditionally required for the conclusion of a treaty. For example, under the Spanish Constitution of 1978 (Article 94), the prior consent of the Cortes Generales is required for those treaties and agreements that contain political, military, or financial obligations, affect the territorial integrity of the state, or imply the adoption, modification, or abrogation of laws.
The subsequent approval of a concluded agreement is also possible. Part 1, Art. 49 of the Constitution of Brazil (1988) assigns to the competence of the National Congress the adoption of final decisions on international treaties, agreements, or acts that entail obligations or encumbrances on national property. The same procedure is traditionally provided for the denunciation of treaties as for their ratification.
Conclusions
Taking into account the above, it should be noted that the spheres and forms of interaction between the chambers of parliaments presented in the constitutional legislation of foreign countries make it possible to judge the approaches of states to the development of their status.
Typical constitutional spheres of interaction between the chambers of bicameral parliaments are the legislative, organizational-personnel, and control ones; typical constitutional forms are joint meetings on specific issues or on a variety of issues and occasions, as well as the resolution of inter-chamber conflicts.
The most varied forms of interaction between the chambers of parliaments are stipulated in the legislative process, which is related to the functional nature of parliament. Among such forms are the approval of a draft law by the two houses in an identical form, tacit agreement, the use of the veto, conciliation procedures, and the creation of joint situational or permanent bodies.
We have determined four forms of interaction between the chambers of parliaments in the organizational and personnel sphere: parity-staff, parity-attracted, parity-conciliation and parity-vacation.
We determined three forms of interaction between the Chambers of Parliaments in the control sphere: personnel control, organizational control and financial control.
The author's approach to the systematization of the spheres and forms of interaction between the chambers of parliament proposed in this article is aimed at the search for an effective system of their cooperation and at the further development of scientifically based recommendations on the optimal use of the resources of modern parliamentarianism.
The reasons for joint meetings can be different: ceremonial (taking the oath of office by a head of state, Article 91 of the Constitution of Italy, 1947), listening to information provided by the President about the state of affairs in the country (Section 3, Article II of the United States Constitution, 1787), listening to a monarch's throne speech containing the government program for the coming year (Great Britain), overcoming the President's veto (paragraph 4, Article 66 of the Brazilian Constitution of 1988) or the King's veto (Article 151 of the Thailand Constitution, 2007), the decision to declare war (par. 11, Section 8, Article I of the United States Constitution, 1787; Article 78 of the Italian Constitution of 1947), etc.
"Economics"
] |
ADD-Net: An Effective Deep Learning Model for Early Detection of Alzheimer Disease in MRI Scans
Alzheimer’s Disease (AD) is a neurological brain disorder marked by dementia and neurological dysfunction that affects memory, behavioral patterns, and reasoning. AD is an incurable disease that primarily affects people over 40, and it is conventionally diagnosed through a manual evaluation of a patient’s MRI scan and neuro-psychological examinations. Deep Learning (DL), a type of Artificial Intelligence (AI), has pioneered new approaches to automate medical image diagnosis. This study aims to create a reliable and efficient system for classifying AD from MRI by applying a deep Convolutional Neural Network (CNN). In this paper, we propose a new CNN architecture for detecting AD with relatively few parameters, making the proposed solution well suited to training on a smaller dataset. The proposed model successfully distinguishes the early stages of Alzheimer’s disease and shows class activation maps as a heat map on the brain. The proposed Alzheimer’s Disease Detection Network (ADD-Net) is built from scratch to precisely classify the stages of AD while decreasing the parameter count and computational cost. The Kaggle MRI image dataset has a significant class imbalance problem, and we exploited a synthetic oversampling technique to evenly distribute the images among the classes. The proposed ADD-Net is extensively evaluated against DenseNet169, VGG19, and InceptionResNet V2 using precision, recall, F1-score, Area Under the Curve (AUC), and loss. ADD-Net achieved 98.63% accuracy, 99.76% AUC, 98.61% F1-score, 98.63% precision, 98.58% recall, and a loss of 0.0549. The simulation results show that the proposed ADD-Net outperforms the other state-of-the-art models in all evaluation metrics.
The proposed CNN model uses a series of convolutional blocks consisting of different deep layers to accomplish outstanding classification results. The proposed ADD-Net aims to obtain an accurate classification result for detecting AD in its earlier stages with better accuracy. The main contributions of the research study are:
• We propose a new convolutional neural network architecture for detecting AD with relatively few parameters, and the proposed solution is ideal for training on a smaller dataset.
• The accuracy of previous methods [22], [23], [25] was compromised on the Alzheimer's data-set due to an imbalanced number of classes. To handle the imbalance problem of the Alzheimer's data set, we exploited the SMOTETOMEK oversampling algorithm, which interpolates new images to balance the class samples.
• In our proposed model, we used Grad-CAM to show and highlight the infected part of the brain for the different stages of Alzheimer's disease, and the generated heat-map intensities highlight each stage's severity (a minimal Grad-CAM sketch is given after this list).
• The proposed model is extensively compared with several other approaches using various evaluation parameters: accuracy, AUC, precision, recall, F1-score, and the number of trainable parameters. It is observed that our approach outperforms other state-of-the-art models.
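The paper does not list its Grad-CAM implementation; the following is a minimal sketch of the standard Grad-CAM computation in TensorFlow/Keras, assuming a trained ADD-Net-style functional model. The layer name and function signature are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Compute a Grad-CAM heat map for one preprocessed image.

    `conv_layer_name` should be the last convolutional layer of the
    network; the name is a hypothetical placeholder.
    """
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])    # predicted AD stage
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)   # d(score)/d(feature map)
    pooled = tf.reduce_mean(grads, axis=(0, 1, 2)) # channel importance weights
    heatmap = tf.reduce_sum(conv_out[0] * pooled, axis=-1)
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()  # upsample and overlay on the MRI slice
```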
The rest of the paper is arranged in the following way: the related studies of the proposed model are briefed in Section II. The methodology and the proposed ADD-Net model for AD classification are presented, with the description of the dataset and the model components, in Section III. The visualization process and the evaluation of the ADD-Net model against the state-of-the-art models are presented in Section IV. The ADD-Net's limitations and the conclusion with future goals are described in Section V and Section VI, respectively.
Precise classification of medical images is a strenuous task because of the complicated procedure of obtaining medical data sets [25]. Unlike other data sets, medical data sets are prepared by expert specialists and contain sensitive and private information about patients, which cannot be publicly disclosed to anyone. That is why organizations and institutions providing medical data-sets, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI) [26] and the Open Access Series of Imaging Studies (OASIS) [27], have a screening process for accessing their data-sets, which requires an application to be filled in and terms to be agreed to by the researcher, constraining them to use the data for research purposes only [28], [29], [30], [31]. Medical data sets are inherently highly imbalanced because it is impossible to compile a data set with an equal number of healthy and ailment samples. The techniques to tackle this problem are pretty challenging themselves [32], [33], [34], [35]. An OASIS data-set containing 416 3D samples is used by Islam
These two models are selected due to the ability of VGG19 to train on many classes with remarkable accuracy, while DenseNet169 can handle vanishing-gradient issues and reduce the number of training parameters. The data set from Kaggle was fed to both models via an Image Data Generator (IDG) with different augmentation parameters. Through this augmentation, the pre-trained VGG19 and DenseNet169 models achieved accuracies of 88% and 87%, respectively. Battineni et al. [33] employed an OASIS-3 data set and created a five-layer CNN model to classify three different early stages of Alzheimer's disease [45].
Not all the features extracted by a deep model are helpful in accurately predicting the correct class of a sample, and some hinder a model from reaching the desired results [46], [47]. This issue of deep models was tackled by El-Aal et al. [29], who presented a novel approach to selecting specific features from the feature map of deep models, which ultimately improves the classification results and reduces the training time of the model.

A few researchers have used data augmentation techniques to improve their results. In contrast, none of the reviewed research papers on the classification of Alzheimer's disease has recognized the imbalanced data-set issue. Some researchers failed to obtain notable results because they did not train their models enough. It is observed that research papers focus on discovering new approaches to classification for biomedical diagnoses. In this proposed model, the input data set is pre-processed using normalization. The categorical data variables are converted via the one-hot encoder before being provided to the ADD-Net. Then, the Synthetic Minority Oversampling Technique (SMOTETOMEK) algorithm is utilized to solve the imbalanced data-set issue by over-sampling the classes to balance the data-set. Afterward, the data set is split into train, test, and validation sets at 60%, 20%, and 20%, respectively. Furthermore, the features are extracted using a standard CNN for effectively training the ADD-Net, as shown in Fig. 1. The number of training parameters is smaller in comparison with [29], [31], and [33], for the robustness of the model in AD classification. The Grad-CAM heat-map algorithm is utilized to visualize the class activation map, highlighting the features that lead to the classification of an image sample.

According to the description of the data set, each sample in the data set available on Kaggle was personally verified by the uploader himself. Also, the data set size is reasonable, and the images are already cleaned up, i.e., resized and organized.
Based on these factors, this data set is used in our research.
The data set has 6400 samples in total. The only downside of this data set is that it is imbalanced, as discussed in Table 2.
Typically, oversampling and under-sampling are the two techniques for re-sampling. However, another type of re-sampling approach exists, which is a hybrid of both methods. For this research study, we have employed the hybrid SMOTETOMEK algorithm. It combines SMOTE, the up-sampling algorithm, and TOMEK, the down-sampling method. SMOTE generates new samples relying on a class's nearest neighbors, while TOMEK is an implementation of condensed nearest neighbors. Both algorithms work in sequence: SMOTE chooses a random instance from a minority class and increases its proportion by interpolating new samples; TOMEK then selects a random sample and discards it if its nearest neighbors belong to the minority class. In this way, SMOTETOMEK evens out the examples of each type and effectively solves the dataset imbalance problem, as depicted in Table 3. To balance out the data set, SMOTETOMEK utilizes the nearest-neighbor technique to interpolate new imitation samples for the minority classes, as shown in Fig. 3.

The architecture is summarized in Table 4, with a description of the hyper-parameters that play a vital role in the practical training of the ADD-Net model in Table 5. Each block consists of a convolutional 2D layer, a ReLU, and an average-pooling 2D layer. The kernel initializer is used to choose the weights for the convolutional 2D layer. The ReLU activation function is used to overcome the gradient-vanishing problem and allow the network to learn and perform faster. At the same time, the convolutional 2D layer down-samples the image and its spatial
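As a concrete illustration, the hybrid re-sampling step can be reproduced with the imbalanced-learn library; this is a minimal sketch assuming the MRI slices have been flattened into feature vectors, with synthetic stand-in data (the class counts and feature dimension are illustrative, not from the paper):

```python
import numpy as np
from imblearn.combine import SMOTETomek

rng = np.random.default_rng(0)
# Stand-in for flattened 2D MRI slices: 4 classes with imbalanced counts.
counts = [3000, 2200, 900, 300]
X = np.vstack([rng.normal(loc=k, size=(n, 64)) for k, n in enumerate(counts)])
y = np.repeat(np.arange(4), counts)

smt = SMOTETomek(random_state=42)   # SMOTE over-sampling + Tomek-link cleaning
X_res, y_res = smt.fit_resample(X, y)

print(np.bincount(y))      # [3000 2200  900  300]
print(np.bincount(y_res))  # roughly equal per-class counts after re-sampling
```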
There are two dense blocks in the proposed architecture, and each ADD-Net block has a few layers; the details of each layer are discussed in the next subsection. Activation functions are mathematical operations that decide whether the output from a perceptron is to be forwarded to the next layer. In short, they activate and deactivate nodes in a deep model. The activation function in the output layer produces the label that is then assigned to the image processed through the model. There are several activation functions. We used ReLU in the hidden layers because of its simple and time-saving calculation. SoftMax, a probability-based activation function, is used for the output layer because our model performs multi-class classification.
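A minimal sketch of one such block and the output head in Keras is given below; the filter counts, kernel sizes, and input shape are illustrative assumptions, since the paper's exact hyper-parameters are given in its Tables 4 and 5:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def add_block(x, filters):
    """One ADD-Net-style block: Conv2D -> ReLU -> AveragePooling2D."""
    x = layers.Conv2D(filters, 3, padding="same",
                      kernel_initializer="he_normal")(x)
    x = layers.Activation("relu")(x)      # mitigates vanishing gradients
    return layers.AveragePooling2D(2)(x)  # spatial down-sampling

inputs = tf.keras.Input(shape=(176, 176, 1))  # illustrative MRI slice size
x = add_block(inputs, 32)
x = add_block(x, 64)
x = layers.Flatten()(x)
outputs = layers.Dense(4, activation="softmax")(x)  # 4 AD stages
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
```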
where L is the calculated loss of each class, and P is the probability calculated by the SoftMax function.

ROC curves are commonly used in binary classification to investigate a classifier's output. Binarizing the output is required to extend the ROC curve and ROC area to multi-class or multi-label classification. One ROC curve can be generated for each label; however, each element of the label-indicator matrix can also be treated as a binary prediction (micro-averaging). The proposed ADD-Net is compared, using this extension of the ROC curve, with DenseNet169, InceptionResNet V2, and VGG19 on the balanced and imbalanced AD datasets, as depicted in Fig. 9. We note that after balancing the AD data-set using the SMOTETOMEK algorithm, the AUC improves significantly for all the approaches, as shown in Fig. 10. A similar effect on the AUC is noted for all the classes of the proposed ADD-Net.

Several deep models were created to classify the early stages of AD. Some were conventional CNN models, while others were based on pre-trained deep architectures. Our proposed model is a deep CNN-based ADD-Net consisting of different ADD blocks and is very effective in classifying the different AD classes, as discussed earlier in this paper. We also created a few hybrid models using the state-of-the-art classification models InceptionResNet V2, VGG19, and DenseNet169. The first model is a hybrid framework of DenseNet169 and MobileNet V2, reaching an AUC of 98% and 99% before and after balancing the AD data-set through SMOTETOMEK, as depicted in Fig. 13.

Several deep models were developed to classify Alzheimer's disease in its early stages. Some algorithms were traditional CNNs, while others were pre-trained deep architectures. As mentioned earlier in this paper, our proposed model is a deep CNN-based ADD-Net comprising distinct ADD blocks. We compared our model with the InceptionResNet V2, VGG19, and DenseNet169 classification models, as shown in Fig. 19.

In this proposed ADD-Net model, the input data set is pre-processed using normalization. The categorical data variables are converted via the one-hot encoder before being provided to the model.
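The micro-averaged multi-class ROC extension described above can be sketched with scikit-learn as follows; the labels and SoftMax scores here are random stand-ins for the outputs of any of the compared models:

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
classes = [0, 1, 2, 3]                         # the four AD stages
y_true = rng.integers(0, 4, size=500)          # stand-in labels
y_score = rng.dirichlet(np.ones(4), size=500)  # stand-in SoftMax outputs

y_bin = label_binarize(y_true, classes=classes)  # one column per label

# One ROC curve (and AUC) per label.
per_class_auc = {c: auc(*roc_curve(y_bin[:, c], y_score[:, c])[:2])
                 for c in classes}

# Micro-average: treat every element of the label-indicator matrix as a
# binary prediction, as described in the text.
fpr, tpr, _ = roc_curve(y_bin.ravel(), y_score.ravel())
micro_auc = auc(fpr, tpr)
```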
Then, the SMOTETOMEK algorithm is applied to resolve the imbalanced data-set issue by over-sampling the classes (Table 6).
A solution to a real-world problem is rarely perfect in every aspect; the ideal case, a solution that is mature in its early versions and needs no upgrades, is seldom achieved for critical real-world problems. Solutions are prepared after studying the base requirements necessary to fix a problem and are then gradually improved by analyzing real-time feedback about the system. In this proposed study, although the model outperforms the other models, it still has shortcomings: the efficiency of the proposed model suffers on an imbalanced dataset. As discussed above, due to an imbalanced dataset, the accuracy of deep learning models is compromised.
"Computer Science"
] |
Data on the corrosion inhibition effect of 2-mercaptobenzothiazole on 316 austenitic stainless steel, low carbon steel and 1060 aluminium in dilute acid media
2-Mercaptobenzothiazole was evaluated for its corrosion inhibition effect on 316 stainless steel, low carbon steel and 1060 aluminium alloy in 3 M HCl, 1 M HCl and 2 M H2SO4 solution, respectively, by coupon measurement. Results showed the organic compound performed effectively at all concentrations studied on 316 steel and 1060 aluminium, with highest inhibition values of 94.07% and 79.7%. Generally, the inhibition performance of 2-mercaptobenzothiazole was above 90% at all concentrations for 316 steel and above 70% for 1060 aluminium. The inhibition performance was observed to be independent of inhibitor concentration, though performance was significantly time dependent on 1060 aluminium. 2-Mercaptobenzothiazole performed very poorly on low carbon steel at inhibitor concentrations above 1.25%, with inhibition efficiency values below 0%. At concentrations below 1.25%, the inhibition performance was marginal, at an average value of 57%. The inhibition performance of 2-mercaptobenzothiazole on low carbon steel was observed to be time and concentration dependent.
Subject area Chemistry
The data is with this article
Rationale
Corrosion cost worldwide is estimated to be between €1.3 and €1.4 trillion, equivalent to 3.5% of developed nations' GDP annually [1]. Corrosion is the deterioration of metallic alloys by chemical interaction with their environments [2]. The metals are extracted from their ores at the expense of huge energy resources; thus they are thermodynamically unstable in the refined state and tend to gradually lose their energy by reverting to their original energy states [3]. Metallic alloys are used in the fabrication of machinery and devices, buildings and structures due to their excellent physical and mechanical properties [4]. Stainless steels are extensively used metallic alloys due to their exceptional properties compared to other alloys, durability and corrosion resistance. 316 austenitic stainless steel is extensively applied in heat exchangers, food production, pharmaceuticals, marine structures and vessels, petrochemicals and chemical processing industries [5]. The steel is second only to 304 in importance within the austenitic grade of stainless steels. Mo enhances the corrosion resistance of 316 steel, especially against pitting and crevice corrosion in Cl− and SO4^2− anion-containing environments [6][7][8]. However, the steel is prone to localized corrosion within the environments mentioned earlier under certain conditions. Even in seawater the steel is not fully resistant to corrosion. Carbon steel is the most widely used ferrous alloy worldwide due to its low cost, recyclability and ease of fabrication, in petrochemical operations, chemical processing units, energy generating plants, pipelines, automobiles, etc. [9][10][11]. The steel has weak resistance to corrosion due to its inability to passivate in the presence of corrosive anions. Aluminium is an important structural engineering alloy whose application is second only to ferrous alloys as a result of its light weight, relatively high strength and excellent corrosion resistance. Aluminium alloys are, however, highly reactive metals and vulnerable to corrosion due to their amphoteric nature, whereby they can sometimes undergo accelerated degradation in the presence of threshold concentrations of salts, acids or bases. Conventional corrosion prevention and control methods, such as electroplating, galvanizing, the use of sacrificial anodes, proper material selection, etc., have their disadvantages in terms of cost, versatility and application. The most appropriate method of corrosion control of metallic alloys in Cl− anion-containing environments is the use of chemical compounds known as corrosion inhibitors [12][13][14][15]. Corrosion inhibitors play an important role in oil extraction and processing industries, heavy industrial manufacturing, water treatment facilities, cooling systems, refinery units, pipelines, oil and gas production units, boilers and water processing, paints, pigments, lubricants, etc., to minimize localized corrosion and unexpected failures [16][17][18]. Inhibitors reduce the rate of metal wastage and can function as anodic, cathodic, passivating or mixed-type inhibitors depending on their performance. This article discusses the effect of 2-mercaptobenzothiazole on the corrosion inhibition of 316 stainless steel, low carbon steel and 1060 aluminium alloy in dilute HCl and H2SO4 media simulating industrial operating conditions. Mixed-type inhibitors are preferable for inhibition of stainless steels due to their vulnerability to, and the prevailing occurrence of, localized corrosion deterioration.
Experimental design, materials and methods
316 austenitic stainless steel (316ST), low carbon steel (LCS) and 1060 aluminium (AL1060) rods were cut and prepared into 7 experimental samples each. The surface ends of the samples were ground with emery papers of 80, 120, 220, 800 and 1000 grit for weight-loss measurement. 2-Mercaptobenzothiazole (MBT) was prepared in volumetric concentrations of 0%, 0.19%, 0.25%, 0.31%, 0.38%, 0.44% and 0.50% per 200 ml of 3 M HCl solution for 316ST. The compound was prepared in volumetric concentrations of 0%, 0.75%, 1%, 1.25%, 1.5%, 1.75% and 2% per 1 M HCl for LCS, while for AL1060 the compound was prepared in volumetric concentrations of 0%, 0.19%, 0.25%, 0.31%, 0.38%, 0.44% and 0.50% per 2 M H2SO4. Weight-measured 316ST, LCS and AL1060 samples were separately immersed in 200 ml of the acid electrolytes for 384 h. The samples were weighed every 48 h with an Ohaus analytical weighing balance. The weighing balance was checked for possible causes of systematic errors. The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument; as a result, calibration of the instrument and a hardware test were performed. A pre-experimental test confirmed the reproducibility of results, and the experiment was performed once. Tabulated results of metal sample corrosion rates and the inhibition efficiencies of MBT on them in the electrolytes at specific MBT concentrations are shown in Tables 1–6. The weight loss is the difference between the initial weight of the metal sample (kept constant for 384 h) and the final weight taken every 48 h. Tables 4–6 show the data of inhibition efficiency (IE) calculated from the weight-loss relation IE (%) = [(M1 − M2)/M1] × 100, where M1 and M2 are the weight loss of the control and inhibited metal samples in the acid media with respect to exposure time.
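A minimal sketch of this bookkeeping in Python is given below; the weight-loss values are hypothetical stand-ins (the actual data are in Tables 1–6):

```python
def inhibition_efficiency(m1, m2):
    """IE (%) from the weight loss of the control (m1) and inhibited (m2) samples."""
    return (m1 - m2) / m1 * 100.0

# Hypothetical cumulative weight losses (g) every 48 h up to 384 h.
control   = [0.42, 0.85, 1.30, 1.71, 2.10, 2.52, 2.95, 3.38]
inhibited = [0.04, 0.07, 0.10, 0.12, 0.14, 0.17, 0.19, 0.21]

for n, (m1, m2) in enumerate(zip(control, inhibited), start=1):
    print(f"{48 * n:3d} h: IE = {inhibition_efficiency(m1, m2):5.1f} %")
```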
Data, value and validation
Tables 1–3 show the corrosion rate values of 316ST, LCS and AL1060 in 3 M HCl, 1 M HCl and 2 M H2SO4 solution at specific MBT concentrations. The corrosion rate values of the control (0% MBT) 316ST and AL1060 samples differ significantly from those of the inhibited samples, as shown in Tables 1 and 3, due to the inhibiting action of MBT. The control 316ST corroded at significantly higher corrosion rate values at the onset of the exposure hours, after which the rate progressively decreased due to weakening of the acid solution, while the control AL1060 corrosion rate values varied with exposure time, alternating between high and low values before attaining stability at 288 h. The corrosion rate of inhibited 316ST and AL1060 varies with respect to MBT concentration. It is observed that the inhibiting action of MBT on 316ST depends to a slight degree on its concentration: the higher the concentration of MBT in the acid solution, the lower the corrosion rate of 316ST, while MBT performance on AL1060 is completely independent of its concentration and time of exposure. The corrosion rate of the control LCS (Table 2) was generally higher than the values obtained for the inhibited alloy at 0.75% and 1% MBT concentration. Beyond 0.75% MBT and up to 1.75% MBT, the corrosion rate values of LCS increased significantly beyond the value obtained at 0% MBT. Tables 4–6 show the data on MBT inhibition performance on 316ST, LCS and AL1060. Observation has shown that MBT inhibition performance on 316ST (Table 4) is concentration dependent. The performance is also observed to be time dependent: the inhibition performance of MBT on 316ST improves with exposure time. This is due to the slow rate of adsorption of MBT cations on the 316ST surface, stifling the electrochemical processes responsible for surface degradation. The inhibition performance of MBT on AL1060, as shown in Table 6, is strongly time dependent, progressively increasing with exposure time and attaining inhibition performance values above 70% at all concentrations. MBT performed extremely poorly on LCS at concentrations above 0.75% MBT, while at lower MBT concentrations the inhibition performance was marginal, at an average value of 57%. MBT performed best on 316ST, with inhibition efficiency values above 90% at all concentrations.
Declaration of Competing Interest
None.
Table 1
Data on the corrosion rate of 316ST in 3 M HCl solution at 0%–1% MBT concentration (n = 1).
Table 2
Data on the corrosion rate of LCS in 1 M HCl solution at 0%–2% MBT concentrations (n = 1).
Table 4
Data on the inhibition efficiency of the MBT compound on 316ST in 3 M HCl solution at 0%–1% concentration (n = 1).
Table 6
Data on the inhibition efficiency of the MBT compound on AL1060 in 2 M H2SO4 solution at 0%–0.5% MBT concentration.
"Materials Science"
] |
Optical sampling to enhance Nyquist-shaped signal detection under limited receiver bandwidth
ZIHAN GENG,1,2 DEMING KONG,1,2,3 VALERY ROZENTAL,1,4 ARTHUR JAMES LOWERY,1,2 AND BILL CORCORAN1,2,* 1Electro-Photonics Laboratory, Department of Electrical and Computer Systems Engineering, Monash University, VIC 300, Australia 2Centre for Ultrahigh-bandwidth Devices for Optical Systems (CUDOS), Australia 3Now with: DTU Fotonik, Department of Photonics Engineering, Technical University of Denmark, 2800 Kgs., Denmark 4Now with: Toga Networks Huawei, 4 HaHarash St. Neve Ne‘eman, Hod Hasharon, Israel<EMAIL_ADDRESS>
Introduction
Orthogonal multiplexing allows optical communication systems to efficiently occupy their available fiber bandwidth [1][2][3]. Nyquist-shaped signals, composed of overlapping pulses that satisfy the Nyquist zero inter-symbol interference (ISI) criterion, such as sinc pulses, have a nearly rectangular spectrum, minimizing their spectral footprint [4][5][6][7]. Orthogonal pulse shaping has received significant attention in systems employing coherent detection (e.g., [2,5]). Direct-detection systems have recently been of interest for short-reach optical communication, and orthogonal multiplexing can also be used in such systems (e.g., [3]). Ideally, these signals are received using direct detection without any ISI, if sampled at the ISI-free point, due to their time-domain orthogonality [3]. However, this orthogonality can be significantly degraded by electrical bandwidth limitations in the receiver.
Although it is well known that receiver sensitivity is optimal when the receiver filter is matched to the transmitted signal [8][9][10][11][12], we are interested in the non-optimal case where the ratio (R) of the receiver's electrical bandwidth to the signal's baud rate is between 25% and 50%. This range is particularly interesting to study, as a 50% bandwidth represents the minimum necessary bandwidth for coherent detection of optical Nyquist-shaped signals, and we expect ISI to have a major impact on any detection system below that bandwidth. In intensity-modulated, direct-detection (IM/DD) systems, bandwidth limitations have been extensively studied, and are again becoming relevant in a modern context for use in short-range and data-center links. Winzer and Kalmar [13] developed a general theory for the receiver sensitivity under a range of bandwidths subject to return-to-zero (RZ) and non-return-to-zero (NRZ) coding. They show that, if the dominant noise source is signal-dependent, in the range of R between 25% and 50%, RZ coding provides at least a 3-dB benefit to receiver sensitivity compared with NRZ coding. This was of particular interest for optical time-division multiplexing (OTDM) systems, with optical demultiplexing via optical samplers intrinsically producing RZ pulses before the receiver. Additionally, OTDM demultiplexing showed further sensitivity improvement by gating amplified spontaneous emission (ASE) noise [14]. Further practical considerations in OTDM demultiplexing, such as the gating window width and extinction ratio [15], have also been analyzed. These studies show that receiving signals on short optical pulses, either through optical sampling or RZ encoding, reduces signal degradation from ISI and ASE–ASE beating. However, while carrying data on short optical pulses is advantageous for receivers, such signals have a wide bandwidth, which can limit the achievable signal throughput in wavelength-division multiplexing. This suggests that a system which transports Nyquist-shaped signals, but receives RZ-coded signals, may simultaneously provide efficient spectral utilization and reduced impairments from ISI.
In this work, we focus on bandwidth-limited reception of Nyquist-shaped IM/DD signals. Nyquist-shaped signals are transmitted with high spectral efficiency and, at the receiver, are converted to RZ-shaped signals for better sensitivity. A single photo-detector and optical sampling at one sample-per-symbol, with a 33% duty-cycle sampling pulse, are used in this work. Our assumption is that the signal's baud rate is 2 to 4 times the receiver's electrical bandwidth; in other words, the receiver's electrical bandwidth is 25% to 50% of the signal's baud rate. This paper extends the work we presented at the Optical Fiber Communication Conference (OFC) [16], in which we discussed optical sampling of an electrically-shaped Nyquist on-off-keying (N-OOK) signal. Here, we focus on optical sampling of an optically shaped N-OOK signal and further analyze factors such as the optical sampling pulse width and the sharpness of the magnitude response of the receiver. Simulations show that optical pre-sampling can improve receiver sensitivity for optically shaped N-OOK signals under a range of receiver bandwidths. In our proof-of-concept experiment, we use a 4 × 10-Gbaud orthogonal time-division multiplexing (OrthTDM) signal to emulate an optically shaped 40-Gbaud N-OOK signal and show that optical sampling can improve the sensitivity of a band-limited 18-GHz receiver by 4 dB.
Principle of operation
Figure 1(a) shows three detection systems for optically shaped N-OOK with their eye diagrams and electrical spectra. With sufficient receiver bandwidth (upper system), there is a clear and short optimal sampling window for each symbol period, as observed in the eye diagram. If the receiver is bandwidth-limited (middle system), such that the receiver's electrical bandwidth is 25% to 50% of the signal's baud rate, there is significant degradation due to the attenuation of the high-frequency components, causing ISI. In our proposed system (bottom system), the degradation due to insufficient bandwidth is reduced by the optical pre-sampler.
The optical sampling at one sample-per-symbol can be viewed in the frequency domain as a convolution of the signal spectrum and a frequency comb, leading to repeated frequency components and spectral "whitening" [17]. As indicated in the bottom row of Fig. 1(a), the repeated frequency components from the sampling operation allow for reshaping through filtering. In our previous works [18,19], the repeated frequency components have been shown to increase tolerance to the band-limiting effects of optical filters in transmission links. The effects observed in those investigations are conceptually similar to the combined effect of optical sampling and receiver electrical bandwidth-limiting filters in this investigation.
Alternatively, this approach can be explained in the time domain. Figure 1(b) shows the envelope of the optical field of an optically shaped 40-Gbaud N-OOK signal [3,20] before and after optical sampling. The envelope has negative excursions, which are π phase shifts of the optical carrier. As each '1' data bit is a sinc pulse with long tails that change sign from bit-slot to bit-slot, the tails of adjacent 1-bits partially destructively interfere (e.g., the blue and red traces), whereas 1-bits separated by two bit slots have tails that constructively interfere (e.g., the blue and green traces in Fig. 1(b)). Because the sincs have long tails, many 1-bits will contribute to the total field at a given instant; but zero bits will not contribute, as they are encoded as zero power. Thus the total field depends on the pattern of data sent. Fortunately, if there is no bandwidth limitation, the zero-crossing points of all of the tails align with the peak of the wanted pulse; this is the sampling time that gives zero ISI. However, if the receiver has insufficient bandwidth, the low-pass filter (LPF) distorts the symbols, causing a loss of orthogonality between adjacent symbols, and there will be no sampling time that gives zero ISI. Optical sampling (that is, sampling before the photodiode (PD)-induced bandwidth limitation) selects the peak of the desired data-bit's sinc and suppresses the tails of the preceding and following sincs away from the crossing points. Thus, there is very little energy left in the tails of the preceding and following sincs, energy that would otherwise have caused ISI after band-limiting. This means that the electrical waveform becomes much less dependent on the data pattern, reducing the variances of the traces corresponding to the 1 and 0 bits.
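To make this time-domain argument concrete, the following NumPy sketch (our own illustrative construction, not the paper's VPI model) builds a sinc-pulse OOK field, applies a 33% duty-cycle Gaussian gate at one sample per symbol, square-law detects, band-limits to 18 GHz with an ideal zero-phase LPF, and compares the resulting eye openings:

```python
import numpy as np

rng = np.random.default_rng(1)
baud, sps = 40e9, 16            # 40 Gbaud, 16 samples per symbol
fs, nsym = baud * sps, 256
bits = rng.integers(0, 2, nsym)

t = (np.arange(nsym * sps) - nsym * sps // 2) / fs
field = np.zeros_like(t)
for k, b in enumerate(bits):
    if b:                       # each '1' is a sinc pulse centered on its slot
        field += np.sinc(baud * (t - (k - nsym // 2) / baud))

# 33% duty-cycle Gaussian gate, one gate per symbol, aligned to the peaks.
sigma = (0.33 / baud) / 2.355   # FWHM -> standard deviation
tmod = (t + 0.5 / baud) % (1 / baud) - 0.5 / baud
gate = np.exp(-tmod**2 / (2 * sigma**2))

def detect(x, bw=18e9):
    """Square-law detection followed by an ideal zero-phase LPF."""
    i = np.abs(x) ** 2
    f = np.fft.rfftfreq(len(i), 1 / fs)
    return np.fft.irfft(np.fft.rfft(i) * (f < bw), len(i))

for name, x in [("N-OOK", field), ("optically sampled", field * gate)]:
    centers = detect(x)[::sps]  # decision instants at the pulse peaks
    ones, zeros = centers[bits == 1], centers[bits == 0]
    print(f"{name:17s} eye opening: {ones.min() - zeros.max():+.3f}")
```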
In this work, we used Gaussian pulses to sample optically shaped N-OOK signals. When the optical sampling pulses are significantly narrower than the bit period of the N-OOK signal, the optically shaped N-OOK symbols are transformed into return-to-zero (RZ) symbols through optical sampling. If the sampling pulse width is equal to or less than 0.44× the symbol duration, the sampled signal is roughly RZ-shaped, with some inter-symbol interference from other symbols. In Fig. 2, the optical spectra, electrical spectra and eye diagrams of optically shaped 40-Gbaud N-OOK and 40-Gbaud RZ signals are compared. The bandwidth limiting is modelled by 4th-order Bessel electrical low-pass filters. In particular, the spectra shown in Fig. 2 qualitatively explain ISI in direct-detection systems with limited receiver bandwidth. While in a coherent detection system the electrical spectrum of the received signal is directly mapped from the optical signal, the spectra in Fig. 2(a) show that the optically shaped 40-Gbaud double-sideband (DSB) N-OOK signal has a 40-GHz optical bandwidth centred around a carrier. If the electrical signal were solely due to the mixing of the carrier and the sidebands (as in coherent detection systems), a 20-GHz-wide electrical spectrum would be expected. However, additional frequency components are generated due to square-law photo-detection, causing mixing of all of the tones within both sidebands, to give a 40-GHz-wide electrical spectrum. As shown in the eye diagrams of Fig. 2(a), limiting the electrical bandwidth causes substantial ISI.
There are trade-offs in using N-OOK in DD systems. N-OOK signals have a well-confined optical spectrum, which is suitable for multiplexing, all-optical routing and elastic networks [6,7]. However, as shown in Fig. 2(a), optically shaped N-OOK can be very sensitive to receiver bandwidth limitations, with the required electrical bandwidth exceeding the symbol rate for penalty-free reception. On the other hand, an RZ signal is less sensitive to limited receiver bandwidths, but occupies a much wider optical bandwidth during transmission, limiting the total achievable data rate by limiting the number of channels that can be multiplexed onto a given optical bandwidth. In this way, Nyquist-shaped signals are preferred for transmission, and RZ signals are more desirable for detection. Optical pre-sampling before photodetection combines the merits of both.

Figure 3 shows the simulation setup for the optical pre-sampler. A 15th-order pseudorandom binary sequence (PRBS) of 2^18 bits is simulated for each BER calculation. The optically shaped N-OOK signal is generated by modulating a Gaussian-shaped ultra-short optical pulse train, whose repetition rate is 40 GHz, with a 40-Gbaud electrical non-return-to-zero (NRZ) signal, followed by a 40-GHz rectangular optical band-pass filter (BPF) [20]. A zero roll-off, optically shaped 40-Gbaud N-OOK signal ("signal") centered at 1550.9 nm is optically sampled by 3.67-ps-wide (120-GHz bandwidth), 40-GHz repetition rate short Gaussian pulses ("pump") centered at 1547 nm, in a HNLF, to generate the optically sampled signal ("idler") at a new frequency through the FWM process. We simulate the sampling process using a HNLF to gain insight into the experiments presented in Sections 5 and 6. The HNLF is 1 km in length with a 1547-nm zero-dispersion wavelength and a 9.3 W⁻¹·km⁻¹ nonlinear coefficient. The conversion efficiency of the FWM is about −10 dB. The idler is extracted by a 120-GHz optical band-pass filter, with the idler power set to 0 dBm average power by a noiseless EDFA to simplify the simulation. The signal is then sent to an amplified receiver, where an attenuator varies the received optical power, and noise is added through amplification with an EDFA, which has 25 dB gain and a 5.5 dB noise figure. This "back-to-back" set-up provides a general benchmark of receiver sensitivity, independent of the link placed before the receiver. We are assuming that our proposed system is primarily of interest for short-reach systems where OSNR is high. We further assume that, in order to employ high-bandwidth signals in a direct-detection system, optical dispersion compensation is to be used. Under these assumptions, our results should provide a good indication of the sensitivity improvements enabled by our proposed optical sampling front-end. The 3-dB bandwidth of the electrical low-pass filter (LPF) after the photodiode was switched between 11 and 62 GHz to emulate receiver bandwidth limitations, as shown in Fig. 4. The performance of systems with a sharp brick-wall rectangular filter (implemented as a 30th-order Gaussian filter, red traces in Fig. 4) and a slow roll-off filter (4th-order Bessel filter, blue traces in Fig. 4) are compared. The brick-wall filter resembles a digitally-defined filter used in some real-time digital oscilloscopes, and more generally the effect of anti-aliasing filters in receiver sampling front-ends. The Bessel filter is more typical of analog bandwidth limits (e.g., photodiode response). Note that the time-bandwidth product of the sampling pulses is 0.44, which is the transform limit for Gaussian pulses.
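The two receiver filter models can be sketched with SciPy as follows; the 18-GHz cut-off, the simulation rate, and the stand-in waveform are illustrative:

```python
import numpy as np
from scipy import signal

fs = 640e9                                               # simulation rate
waveform = np.random.default_rng(0).normal(size=2**14)   # stand-in photocurrent

# Slow roll-off receiver: 4th-order Bessel LPF with 18-GHz 3-dB bandwidth.
b, a = signal.bessel(4, 18e9, btype="low", fs=fs, norm="mag")
bessel_out = signal.lfilter(b, a, waveform)

# Brick-wall receiver: 30th-order Gaussian magnitude, applied zero-phase
# in the frequency domain (|H| = -3 dB at 18 GHz).
f = np.fft.rfftfreq(len(waveform), 1 / fs)
H = np.exp(-0.5 * np.log(2) * (f / 18e9) ** (2 * 30))
brick_out = np.fft.irfft(np.fft.rfft(waveform) * H, len(waveform))
```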
Simulation results
Figure 4 shows the simulated BER versus received optical power for signals and idlers under various electrical filter bandwidths. The Bessel filter (blue traces) and the rectangular filter (red traces) mentioned above are simulated. The solid lines represent signals without optical pre-sampling, and the dashed lines indicate idlers, i.e., optically sampled signals. We define the receiver sensitivity here as the received power (average power into the pre-amplified receiver shown in Fig. 3) required to reach a BER of 3.8 × 10⁻³, assuming a commonly used second-generation, 7% overhead, concatenated hard-decision forward error correction (HD-FEC) code. When the receiver bandwidth is set by an 11-GHz Bessel filter, error-free operation below the FEC threshold can be achieved with optical sampling, providing a 7.1-dB sensitivity improvement. When brick-wall filtering is applied, optical sampling provides no improvement, and error rates are well above the FEC threshold. Figures 4(b)–(e) show the performance with filter bandwidths between 18 and 22 GHz. For the brick-wall filter, optical sampling improves sensitivity by 9, 5.5, 4.1 and 2.2 dB for 18, 19, 20 and 22 GHz bandwidth filters, respectively. For the Bessel filter, the improvements are 3.8, 3.3, 3.1 and 2.8 dB for the same filter bandwidths, respectively. In Fig. 4, the traces for 62-GHz filters follow the same tendency, with a 3.3-dB sensitivity improvement from optical sampling. We attribute this baseline sensitivity improvement to the RZ shaping [10][11][12][13][14][15]. Given the same average power, compared with the N-OOK signal, the low duty-cycle RZ signal has 3.3 times higher power for its "1" level. Figure 5(a) summarizes the sensitivity versus electrical filter bandwidth, when the receiver bandwidth is modelled as a 4th-order Bessel filter and the sampling pulse width is set to 3.67 ps, corresponding to a 120-GHz pump bandwidth. Comparing the two lines in Fig. 5(a), the optically shaped N-OOK signal requires a higher power for all bandwidths investigated. As the bandwidth is increased, the difference between the sensitivities of the sampled idler and the optically shaped N-OOK signal is reduced. This shows that, predictably, the optically shaped N-OOK signal has a larger penalty than the sampled signal (idler) as the receiver bandwidth is reduced. The optical pre-sampler improves the receiver sensitivity by 3.1 dB for a 20-GHz electrical filter bandwidth, and by 7.1 dB at 11-GHz bandwidth.
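The sensitivity read-off used above amounts to interpolating the simulated BER curve at the FEC threshold; a small sketch with illustrative values:

```python
import numpy as np

FEC_BER = 3.8e-3   # 7% overhead HD-FEC threshold

def sensitivity_dbm(power_dbm, ber, threshold=FEC_BER):
    """Received power at which the BER curve crosses the FEC threshold.

    Interpolates log10(BER) versus power, assuming the BER falls
    monotonically as the received power increases.
    """
    logber = np.log10(ber)
    return float(np.interp(np.log10(threshold), logber[::-1], power_dbm[::-1]))

# Illustrative BER curve (power in dBm).
p = np.array([-40, -38, -36, -34, -32])
ber = np.array([8e-2, 2e-2, 4e-3, 6e-4, 5e-5])
print(f"sensitivity = {sensitivity_dbm(p, ber):.1f} dBm")
```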
Figure 5(b) plots BER versus pump bandwidth. The measurement is taken with an 18-GHz low-pass 4th-order Bessel filter at the receiver. According to Fig. 5(b), optical sampling with an 80-GHz pump bandwidth outperforms sampling with a 120-GHz pump bandwidth, and this conclusion is consistent with our experimental results in the following section. When the pump bandwidth is less than 40 GHz, the sampling pulses with a repetition rate of 40 GHz start to overlap; thus the optical sampling gate will not be completely closed. When the pump bandwidth is larger than 120 GHz, the received signal power before the PD is reduced due to the optical and electrical filters in the system. In addition, as the pump bandwidth increases, the spectra of the pump and idler may partially overlap. Among the parameters swept in the simulation, 40-GHz and 60-GHz pump bandwidths give the maximum performance.
Experimental setup
We now validate the conclusions drawn from our simulations through a proof-of-concept experiment. Figure 6 shows the experimental setup of an optically shaped 40-Gbaud N-OOK transmitter and a receiver with an optical pre-sampler, which is conceptually similar to the simulation described in Section 3. The optically shaped 40-Gbaud N-OOK signal was sampled by short optical pulses with a 40-GHz repetition rate. The signal is emulated by time-division multiplexing of four optically shaped, 25% duty-cycle, 10-Gbaud N-OOK signals. An ERGO mode-locked laser, centered at 1555.5 nm, generates frequency comb lines with 10-GHz spacing and 1.4-ps optical pulses. The frequency comb is amplified by an erbium-doped fiber amplifier (EDFA) to 21.8 dBm and spectrally broadened through self-phase modulation (SPM) in a normally-dispersive highly nonlinear fiber (HNLF) (508 m length, −0.5 ps/nm/km dispersion at 1550 nm, 0.016 ps/nm²/km dispersion slope at 1550 nm, 11 W⁻¹·km⁻¹ nonlinear coefficient). Two outputs of a wavelength selective switch (WSS) are used for the signal and sampling pulse generation.
The sampling pulse train is generated by filtering the 10-GHz spacing comb lines with the WSS and two delay-line interferometers, and it is centered at 1547 nm with a 40-GHz repetition rate.
Nyquist signals can be generated optically either by filtering modulated signals [6,7,20–23] or by modulating sinc-shaped optical pulses [3,24–26]. These approaches reduce the requirements on DSP and DACs, while being able to operate over very high bandwidths [22]. In this experiment, the optically shaped 40-Gbaud N-OOK signal is generated by modulation of the frequency comb and time-division multiplexing. The WSS-filtered frequency comb, with 40-GHz bandwidth, 10-GHz comb spacing and 1550.9-nm center frequency, is intensity modulated by a 10-Gbaud PRBS-15 signal, and then de-correlated and time-division multiplexed four times with a Pritel optical clock multiplier. The intensity modulator is biased at the quadrature point and driven to maximize the extinction ratio. The time alignment between the signal and gating pulses is achieved by tuning a variable optical delay line. Note that since the Nyquist-shaping WSS has a 10-GHz granularity, there is a deviation in the Nyquist-shaping filter responses between the experiment and the simulation; the influence on the experimental results is discussed in Section 7.
The optical sampler is a 1-km HNLF with a 1547-nm zero-dispersion wavelength (ZDW) and a 0.074 ps/nm²/km dispersion slope at 1550 nm. The signal and pump powers at the input of the WDM coupler are 2.2 dBm and 15.5 dBm, respectively. Polarization controllers are used to align the signal and pump polarizations for maximum FWM efficiency. A WSS extracts the idler by applying a Gaussian-shaped band-pass filter at 1543.1 nm with 120-GHz bandwidth.
In the pre-amplified receiver, a variable optical attenuator (VOA) sets the average power at the input, and 10% of the power is monitored by a power meter. Before photodetection, a 1-nm optical BPF reduces the out-of-band noise produced by the receiver EDFA pre-amplifier. The photodiode is a Finisar XPDV3120 70-GHz photodetector. The sampling rate of the digital sampling oscilloscope (DSO) is 160 GSa/s. The anti-alias filter, a 9-bit FIR digital low-pass filter built into the oscilloscope, is switched between 18 and 62 GHz (Fig. 8).
Experimental results
Figure 7(a) shows the FWM spectrum at the output of HNLF2 in blue. In the spectral domain, the idler carries the information of the signal with about −8 dB FWM conversion efficiency. The green and red traces are the pump and signal before HNLF2. Comparing the three traces indicates that the spectrum of the pump is broadened, and the signal's spectrum is shaped. We attribute the pump broadening to SPM in the HNLF, and the signal shaping to a combination of SPM and cross-phase modulation (XPM) from the pump. We expect these processes to introduce some distortion on the idler. Figure 7(b) shows the electrical spectrum of the received idler limited by the 62-GHz LPF in the oscilloscope. This digital LPF has a sharp edge and a high extinction of 57 dB. The electrical spectrum fits well to a 0.02 roll-off raised-cosine filter response, represented by a red dashed line. From the prior simulations, this brick-wall-like filter will have a strong effect on the received signal quality, dominating over the 70-GHz 3-dB bandwidth limit of the photodiode. Figure 7(c) shows the eye diagram of an optically shaped N-OOK signal at −31 dBm average power detected by a 62-GHz electrical bandwidth receiver, which has sufficient bandwidth to detect the optically shaped 40-Gbaud N-OOK signal.
Figure 8 shows BER versus received optical power, as well as eye diagrams for the received optically sampled idler at −36 dBm average power. We plot two separate curves for the idler, corresponding to Gaussian-shaped pumps with either 80- or 120-GHz bandwidths, which should ideally result in sampling pulses with 5.5-ps or 3.6-ps durations, respectively. We tested the performance with 18, 19, 20 and 62-GHz filters applied in the oscilloscope. For the 18-GHz and 19-GHz filters, the 120-GHz bandwidth sampling pulse produces worse sensitivity than the 80-GHz-wide pulse (Figs. 8(a) and (b)), owing to the greater signal loss caused by the electrical and optical filters in the receiver.
For the 18- and 19-GHz bandwidth receiver filters, we measured a sensitivity improvement of 4 dB from pre-sampling. With the 20-GHz filter, this improvement drops to 2 dB. With a 62-GHz filter applied, the improvement from optical sampling increases to 3.8 dB. We attribute this to the full conversion of the idler to an RZ signal (as illustrated by the eye diagram), alongside a large susceptibility of the signal to timing jitter in the receiver due to the signal's pulse shape.
Discussion
Both the simulation and the experiment show that, for receiver bandwidths less than half the signal symbol rate, optical pre-sampling can improve sensitivity in optically shaped N-OOK systems. The main contributor to the improvement is the N-OOK-to-RZ conversion, as indicated in the simulation, and a minor contributor could be the reduced noise–noise beating due to the noise being time-gated by optical sampling [27]. Comparing the simulation and the experiment, we note that there are other factors besides the receiver bandwidth and the sampling pulses that define the receiver sensitivity. Most factors affect the results of the signal and the idler similarly, except for the optical Nyquist-shaping filter response. In the experiment, the WSS for optical Nyquist-shaping is set to a 40-GHz rectangular BPF, but it has a basic optical transfer function of a 10-GHz Gaussian. The resultant optical spectrum has a roll-off softer than the simulated ideal rectangular spectra. According to [3], the quasi-orthogonal property can be maintained in Nyquist pulses even when the roll-off is not zero, and the softer roll-off results in better tolerance to bandwidth limitation. Therefore, compared with the simulation, the optically shaped N-OOK signal in the experiment has better sensitivities, leading to reduced sensitivity improvements in the experiment. Note that we do not use any equalization stages in either the simulations or the experiments, with our receiver acting only as a simple threshold comparator. It may be possible to improve system performance through the use of an equalizer, at the cost of some computational complexity and processing latency. Moreover, the combination of optical sampling and equalization may enable Nyquist signal reception where equalization alone is not sufficient for signal recovery.
There are several further challenges to be met in order to make our system of immediate use in installed communication systems. To use the proposed receiver in a densely packed Nyquist wavelength-division-multiplexed (NWDM) system requires sharp optical filtering, such as a ring-assisted Mach-Zehnder interferometer (RAMZI), which we have previously demonstrated [22,28]. For future mass production, the mode-locked laser and the HNLF in this work should be replaced with on-chip frequency combs [29] and on-chip optical sampling [30,31]. Moreover, the sampler used for this function must be synchronized to the clock of the incoming signal. This can be achieved in synchronous systems, or through a clock recovery/synchronization stage [32], at the expense of some electronic control. These improvements also relate to reducing receiver complexity through integration onto a chip-based platform, which may help to reduce the extra costs expected in systems with increased complexity. Fundamentally, our proposal targets systems where the use of a higher-bandwidth photodiode and electronic front end is either impractical or not possible, and a cost-benefit analysis should be done on that basis.
For further studies, it would be interesting to investigate the effect of optical sampling at one sample-per-symbol for coherent detection of Nyquist-shaped signals, considering that parallel optical sampling has been shown to be advantageous for coherent detection [33,34]. Moreover, the number of sampling stages might be reduced in a WDM receiver measuring multiple channels simultaneously. Ensuring that the sampled channels are sufficiently optically demultiplexed that they do not spectrally overlap upon sampling (causing inter-channel interference) would be key, as would the time-domain synchronization of such channels. This is less likely to be of benefit in a system employing wavelength routing, where a per-channel front end is likely to be necessary.
Conclusion
We have experimentally demonstrated that the quality of a direct-detected, optically shaped 40-Gbaud Nyquist OOK signal can be improved by optical pre-sampling in a band-limited system, where the receiver bandwidth is below the single-sided bandwidth of the optical signal. Proof-of-concept experiments using FWM in an HNLF show a receiver sensitivity improvement of 4 dB with an 18-GHz electrical bandwidth receiver, where the bandwidth limiting is performed by a brick-wall anti-aliasing filter. Simulations show that with a 4th-order Bessel filter, emulating analog component responses, optical pre-sampling can improve the receiver sensitivity of an 11-GHz bandwidth receiver by 7.1 dB. This work shows that when the receiver's electrical bandwidth is about 25% to 50% of the signal's baud rate, the sensitivity measured at the receiver can be improved by optical sampling, especially for Nyquist-shaped signals.
Fig. 1. (a) Principle of optical pre-sampling before bandwidth-limited receivers. (b) Optical field of optically shaped 40-Gbaud N-OOK before and after optical sampling.
Fig. 2. Optical and electrical spectra and eye diagrams of 40-Gbaud (a) optically shaped N-OOK and (b) RZ direct detection with no electrical band limitation, or with a 20-GHz or 15-GHz 4th-order Bessel electrical low-pass filter. The results are from a VPItransmissionMaker simulation without additional noise.
Fig. 4. Simulated BER versus received optical power for signals and idlers filtered by a 4th-order Bessel electrical filter (circles) and a rectangular electrical filter (squares) after photo-detection. Filter bandwidths change with each sub-figure, as indicated.
Fig. 7. (a) Red: spectrum of the signal before the FWM process. Green: spectrum of the sampling frequency comb (pump) before the FWM process. Blue: spectrum at the output of HNLF2. (b) Electrical spectrum of the idler limited by a 62-GHz electrical bandwidth. (c) Eye diagram of the 40-Gbaud N-OOK signal detected with a 62-GHz electrical bandwidth receiver.
"Physics",
"Engineering"
] |
Power Ripple Control Method for Modular Multilevel Converter under Grid Imbalances
Modular multilevel converters (MMCs) are primarily adopted for high-voltage applications and are highly desired to operate even under fault conditions. Researchers have focused on improving current controllers to reduce the adverse effects of faults. Vector control in the DQ reference frame is generally adopted to control MMC applications. Under unbalanced grid conditions, it is challenging to control double-line-frequency oscillations in the DQ reference frame; fluctuations are therefore observed in the active power due to the uncontrolled double-line-frequency AC component. This paper proposes removing the double-line frequency from the active power under unbalanced grid conditions during DQ transformation. Feedforward and feedback control methods are proposed to eliminate the ripple in active power under fault conditions. An extraction method for the AC components is also proposed for the power-ripple control, to eliminate the phase error occurring with conventional high-pass filters. The system’s stability with the proposed controller is tested and compared with a traditional MMC controller using the Nyquist stability criterion. A real-time digital simulator (RTDS) and a Xilinx Virtex 7-based FPGA were used to verify the proposed control methods under single-line-to-ground (SLG) faults.
Introduction
Demand for electrical energy gradually increases with the development of modern technology, so the global need for sustainable energy is greater than ever. To integrate bulk power from renewable energy sources and reduce carbon emissions, conventional AC power grid infrastructures should be strengthened. However, upgrading existing aged power grids is quite challenging and costly. High-voltage direct-current (HVDC) transmission is a highly economical and versatile alternative for bringing more renewable power from remote locations. Moreover, HVDC transmission allows for bidirectional power-flow control, the interconnection of asynchronous or multiple weak AC sources, and more power transfer over long distances [1]. Hence, HVDC transmission accommodates more clean energy sources while carrying more power than traditional high-voltage AC (HVAC) transmission.
Until 2003, HVDC transmission applications were designed on the basis of voltage-source-based two- or three-level converter (VSC) topologies. In 2003, the modular multilevel converter (MMC) topology was adopted for the Trans Bay Cable project [2]. Since then, the MMC has dominated the market for high-voltage applications due to its simple, modular, and scalable structure [3]. The MMC topology has many advantages over other converter topologies at high voltage, but its performance highly depends on the control structure and the modulation technique [4]. A modulation technique such as nearest-level modulation (NLM) or pulse-width modulation (PWM) generates the switching signals for the submodules (SMs). An SM is the smallest power module of an MMC [5][6][7]. There are several SM types available for MMC applications. A half-bridge SM (HBSM) is the most preferred SM type due to its simple structure: it consists of only two switching elements, such as IGBTs, and a storage element, such as a capacitor. The adopted modulation technique determines which SMs are inserted into an MMC arm [8,9]. Unlike in other converter topologies, the capacitors are distributed among the SMs in the MMC arms. Thus, keeping all capacitor voltages around the reference value is critical for proper and stable operation.
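NLM essentially reduces to rounding the normalized arm reference to the nearest integer number of inserted SMs; a minimal sketch follows, with illustrative parameters (SM count, DC-link voltage, and the upper-arm reference expression are assumptions, not values from the paper):

```python
import numpy as np

N, Vdc = 10, 10e3              # SMs per arm, DC-link voltage (illustrative)
Vc = Vdc / N                   # nominal SM capacitor voltage

t = np.linspace(0, 0.02, 2000)  # one 50-Hz fundamental cycle
v_ref_upper = Vdc / 2 - (Vdc / 2) * np.sin(2 * np.pi * 50 * t)  # upper-arm reference

# NLM: insert the number of SMs whose summed capacitor voltage is nearest
# to the arm reference; a sorting/balancing algorithm then chooses which
# particular SMs to insert, based on measured capacitor voltages.
n_inserted = np.clip(np.round(v_ref_upper / Vc), 0, N).astype(int)
```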
For this reason, capacitor voltage balancing and sorting algorithms are utilized to keep the SM capacitor voltages around the reference value [10][11][12]. If the capacitor voltages are not well balanced, the magnitude of the circulating current inside the MMC rises. The circulating current is a negative-sequence current at twice the fundamental frequency; thus, circulating current control methods target the elimination of this double-fundamental-frequency negative-sequence component in steady state. Under unbalanced conditions, however, positive- and zero-sequence components also occur [13], and the DC current may be unevenly distributed among the phases [14].
The double-grid-frequency component increases the ripple in the active power, DC voltage, and DC current under grid imbalances [15][16][17]. Harmonic instability may occur in an MMC unless its internal dynamics are properly controlled, which may eventually result in an inevitable converter shutdown. Thus, various current control techniques have been proposed to damp the internal dynamics of an MMC and prevent the undesired effects of the ripple and a possible system shutdown.
In [18], a control strategy based on proportional-integral (PI) and resonant (R) controllers is proposed. Even though this method can eliminate the power ripple, the controllers may not achieve a 0 dB response at the resonant frequency in closed-loop operation. The authors of [19] proposed a control strategy to eliminate the active power ripple of the AC grid based on a proportional-resonant (PR) controller; although no DQ transformation is required, an asymmetric current may still cause improper operation of the PR controller and of the protection devices. The authors in [20] proposed to control the arm currents with a PI and a vector-PI controller; although a DQ transformation is required, the method achieved better performance under unbalanced grid conditions than the PR controller, which needs no DQ transformation. The authors of [21] targeted the circulating current to eliminate the double-grid-frequency component during grid imbalance and thus reduce grid-side disturbances; however, suppressing the circulating current under grid imbalances is challenging, as positive- and zero-sequence components also occur, and the steady-state error increases with the number of PI controllers. The authors in [22] proposed a predictive closed-loop averaging control algorithm to eliminate the uneven loss distribution due to unbalanced DC and double-frequency components, indirectly eliminating the power ripple. The authors in [23] suggested a dual-vector current control algorithm that separately controls the positive- and negative-sequence components with several PI controllers in the DQ reference frame; however, multiple PI controllers complicate the system and increase the computation time, so this method may require a larger look-up table and higher complexity for larger MMC applications.
This paper proposes removing the power ripple under unbalanced grid-side conditions. The AC component of the grid current is controlled in the DQ domain to eliminate the power ripple that increases under grid imbalances, so that the power fluctuation is eliminated under unbalanced grid voltages. A feedforward and a feedback closed-loop control method are proposed for suppressing the power ripple under grid imbalances. Furthermore, an AC-component extraction method is proposed for the power ripple control, to eliminate the phase error occurring with conventional high-pass filters. A hardware-in-the-loop (HIL) setup with a real-time digital simulator (RTDS) and Xilinx Virtex FPGAs was used to verify the effectiveness of the proposed control methods under SLG faults. The validation results show that the power ripple was significantly reduced with the proposed feedforward and feedback control strategies.
MMC Mathematical Model
An MMC has N_t series-connected submodules (SMs) in each phase arm, as seen in Figure 1. Various SM types are available for MMC applications; the HBSM, seen in Figure 1, was preferred in this paper. An MMC consists of three phase legs, and each leg contains an upper (positive) and a lower (negative) arm. Each arm has an arm inductor to limit the circulating current amplitude and the arm current ripple. The two arms of a leg operate complementarily to generate the requested voltage. Figure 2 shows the single-line equivalent circuit of an MMC. Kirchhoff's voltage law can be applied to the positive and negative arms to determine the arm voltages:

v_u,x = V_DC/2 − v_m,x − L_o di_u,x/dt − R_o i_u,x   (1)
v_l,x = V_DC/2 + v_m,x − L_o di_l,x/dt − R_o i_l,x   (2)

where v_u,x and v_l,x are the output voltages of the positive and negative arms of phase x (x ∈ {a, b, c}), respectively; v_m,x is the converter AC-side output voltage; i_u,x and i_l,x denote the currents flowing in the respective arms; V_DC represents the DC bus voltage; and L_o and R_o are the MMC arm inductance and resistance, respectively. The voltage induced across the arm inductors and resistors is relatively small compared with the arm valve voltage and can thus be ignored. Accordingly, the converter AC-side output voltage v_m,x is expressed as follows:

v_m,x = (v_l,x − v_u,x)/2   (3)

Similarly, the currents of the upper arm i_u,x and lower arm i_l,x are expressed as follows:

i_u,x = i_z,x + i_x/2   (4)
i_l,x = i_z,x − i_x/2   (5)

where i_z,x is the differential current and i_x is the AC-side current. The differential current comprises a DC current i_dc,x and a circulating current i_circ,x. Adding Equations (1) and (2) and substituting Equations (4) and (5) gives:

V_DC − (v_u,x + v_l,x) = 2 L_o di_z,x/dt + 2 R_o i_z,x   (6)

Thus, the dynamics of the differential voltage v_z,x can be expressed as follows:

v_z,x = (V_DC − v_u,x − v_l,x)/2 = L_o di_z,x/dt + R_o i_z,x   (7)
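A minimal numeric sketch of Equations (4) and (5) is given below: the arm currents are composed of the differential current i_z,x and half the AC-side current, and both internal quantities can be recovered from the arm currents. The waveforms are illustrative assumptions, not measured MMC data.

```python
import numpy as np

# Sketch of Equations (4)-(5): arm currents from the differential current
# i_z,x (DC + circulating at 2f) and the AC-side current i_x.
f = 50.0
t = np.linspace(0.0, 0.04, 2000)                 # two fundamental periods
i_x = 1.0 * np.sin(2 * np.pi * f * t)            # AC-side current of phase x [pu]
i_z = 0.3 + 0.1 * np.sin(2 * np.pi * 2 * f * t)  # i_dc,x + i_circ,x at 2f [pu]

i_u = i_z + i_x / 2.0                            # upper arm, Eq. (4)
i_l = i_z - i_x / 2.0                            # lower arm, Eq. (5)

# The internal quantities are recovered from the two arm currents:
print("i_z recovered:", np.allclose((i_u + i_l) / 2.0, i_z))
print("i_x recovered:", np.allclose(i_u - i_l, i_x))
```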
MMC Control System
The control system of an MMC mainly consists of SM-level and system-level control. Figure 3 shows a typical MMC control structure. Unlike conventional two- and three-level converters, the MMC needs additional controllers to balance the SM capacitor voltages and to control the circulating current (CC) to ensure stable operation. In this paper, the circulating current control strategy proposed in [16] and the sorting-algorithm-based SM voltage balancing method [24] were implemented.
The most common approach for developing current control systems is the vector transformation, which converts the three-phase voltages and currents into two equivalent vectors. The vector transformation technique offers VSC control systems benefits such as independent active and reactive power control, current limiting without waveform distortion, and a relatively low computational burden. The DQ vectors rotate with a specified angular speed (e.g., ω), synchronized with the AC grid voltages; the rotating speed is obtained from the phase-locked loop (PLL) circuit. Thus, under balanced grid conditions, the grid voltages and currents are transformed into DC components in the DQ reference frame [25]. The dynamic behavior of the AC-side output voltage of the MMC is derived from Figure 2 as follows:

v_m,x = L_e di_x/dt + R_e i_x + v_x

where v_x and i_x are the three-phase grid voltages and currents, respectively; L and R represent the transformer inductance and resistance, respectively; L_o and R_o are the MMC arm inductance and resistance, respectively; and L_e and R_e are the equivalent inductance and resistance of the MMC AC system (with L_e = L + L_o/2 and R_e = R + R_o/2). The output voltage of the MMC can be represented in the DQ reference frame as:

v_m,d = L_e di_d/dt + R_e i_d − ω L_e i_q + v_d   (12)
v_m,q = L_e di_q/dt + R_e i_q + ω L_e i_d + v_q   (13)

where v_m,d and v_m,q are the MMC output voltages in the DQ reference frame, v_d and v_q are the DQ grid voltages, and i_d and i_q represent the DQ grid currents. The inner current control of the MMC is established on the basis of Equations (12) and (13) to generate the reference voltages for the gate signals. Reference commands i*_d and i*_q are typically used to control voltage and power under normal grid conditions. Conventional PI controllers are sufficient for MMC voltage and power control under balanced conditions.
Analysis of DQ Vectors
Detailed mathematical derivation and validation of the DQ transformation under unbalanced conditions are essential to ensure proper converter operation. Ideal AC grid voltages and currents are assumed as follows:

v_x = v̂_x cos(θ_x)   (14)
i_x = Î_x cos(θ_x − α)   (15)

where v̂_a, v̂_b and v̂_c are the amplitudes of the grid voltages; Î_a, Î_b and Î_c are the amplitudes of the three-phase currents; α is the phase shift between voltage and current; θ_x is the phase angle of phase x; and ω is the angular frequency. The three-phase voltages and currents in Equations (14) and (15) are transformed into the synchronous DQ reference frame, as explained in Appendix A, yielding the DQ vectors of Equations (17) and (18). The DQ reference frame is selected such that the Q-axis grid voltage is adjusted to zero (i.e., v_q = 0). From Equations (17) and (18), the DQ vectors of the grid voltages and currents comprise DC and AC components. The D axis is 90° out of phase with the Q axis, and the AC components of the D and Q axes always have the same amplitude. The AC components oscillate at the double-line frequency (i.e., 2ωt) and exist only under unbalanced grid conditions, when the amplitudes of the grid voltages (v̂_a, v̂_b, v̂_c) or of the currents (Î_a, Î_b, Î_c) are unbalanced. Thus, the DQ grid voltages and currents can be simplified in terms of DC and AC components as follows:

Y_d(t) = Y_d,dc + Ŷ_ac cos(2ωt + δ_Y)   (19)
Y_q(t) = Y_q,dc + Ŷ_ac sin(2ωt + δ_Y)   (20)

where Y ∈ {v, i}, Ŷ_ac is the amplitude of the AC components of the DQ vectors, and δ_Y represents the initial phase angle. Under unbalanced grid conditions, conventional PI controllers cannot precisely control the AC components and therefore cannot satisfy the zero steady-state error condition. Resonant controllers are required to control the double-line-frequency oscillation under unbalanced grid voltages. Thus, the inner current control system is developed from Equations (12) and (13) considering the AC and DC components, as shown in Figure 4. Reference commands i*_d,dc and i*_q,dc are used to control the DC voltage, active power, and reactive power, and should always be DC quantities. The references i*_d,ac and i*_q,ac can be set to zero to suppress the AC components (i.e., i_d,ac = i_q,ac = 0) under unbalanced grid voltages and thereby balance the three-phase grid currents. Figure 5 shows the block diagram of the inner current control loop used for tuning the controller parameters.
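The following sketch illustrates the claim behind Equations (14)-(20) numerically: an unbalanced three-phase set maps to DC plus double-line-frequency terms of equal amplitude on the D and Q axes. An amplitude-invariant Park transform with the D axis aligned to phase a is assumed here; the paper's exact transform is defined in its Appendix A.

```python
import numpy as np

# Unbalanced three-phase voltages -> DQ frame: DC component plus a 2w
# ripple of equal amplitude on both axes.
w = 2 * np.pi * 50.0
t = np.linspace(0.0, 0.04, 4000)               # two fundamental periods
amp = [1.0, 0.7, 1.0]                          # unbalanced amplitudes v^_a, v^_b, v^_c
th = [w*t, w*t - 2*np.pi/3, w*t + 2*np.pi/3]   # phase angles theta_x
v = [amp[k] * np.cos(th[k]) for k in range(3)]

# Amplitude-invariant Park transform (assumed convention)
v_d = (2.0/3.0) * sum(v[k] * np.cos(th[k]) for k in range(3))
v_q = -(2.0/3.0) * sum(v[k] * np.sin(th[k]) for k in range(3))

for name, y in (("v_d", v_d), ("v_q", v_q)):
    ac = y - y.mean()
    print(f"{name}: DC = {y.mean():+.3f}, 2w ripple amplitude = {np.max(np.abs(ac)):.3f}")
```

With balanced amplitudes the ripple terms vanish and only the DC components remain, matching the balanced-grid case described above.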
Proposed Power Control Method
Under unbalanced grid voltages, controlling power with conventional methods produces power ripple because of the AC components present in the DQ grid voltages and currents. The active power P and reactive power Q are given as follows:

P = (3/2)(v_d i_d + v_q i_q),  Q = (3/2)(v_q i_d − v_d i_q)   (21)

From Equations (17) and (18), the DQ vectors of the grid voltages and currents comprise DC and AC components. Thus, the power expressions in Equation (21) can be rewritten in terms of the DC and AC components as:

P = P_dc + P_ac,  Q = Q_dc + Q_ac   (22)

where P_dc and P_ac are the DC and AC (ripple) components of the active power, respectively, and Q_dc and Q_ac are the DC and AC (ripple) components of the reactive power, respectively. The DC and AC components of the grid voltages (v_d,dc, v_q,dc, v_d,ac, v_q,ac) and grid currents (i_d,dc, i_q,dc, i_d,ac, i_q,ac) are defined in Equations (17) and (18). Expanding the power terms in Equation (22) shows that the product term (v_d,ac i_d,ac + v_q,ac i_q,ac) contributes to the DC component of the active power; the other cross terms contribute to the power ripple. Figure 6 shows the active power analysis under grid imbalances. Initially, the three-phase voltages and currents are balanced, with amplitudes of 1 per unit and a phase shift of 30° between voltage and current (i.e., α = 30°). At Time = 10 ms, the three-phase system becomes unbalanced: the amplitudes of the grid voltage v̂_a and grid current Î_a become 0.05 and 1.5 per unit, respectively. After 40 ms, the three-phase system becomes balanced again. The active and reactive power can thus be expressed in terms of DC and AC parts as follows:

P_dc = (3/2) v_d,dc i_d,dc,  Q_dc = −(3/2) v_d,dc i_q,dc
P_ac = (3/2)(v_d,dc i_d,ac + v_d,ac i_d,dc + v_q,ac i_q,dc)
Q_ac = (3/2)(v_q,ac i_d,dc − v_d,ac i_q,dc − v_d,dc i_q,ac)   (23)

Equation (23) indicates that the active and reactive power can be controlled independently using the currents i_d,dc and i_q,dc, respectively. The active power fluctuation P_ac and the reactive power fluctuation Q_ac can likewise be controlled using the currents i_d,ac and i_q,ac, respectively. However, the AC components i_d,ac and i_q,ac cannot be controlled independently, as explained by Equations (19) and (20): the AC component amplitudes Ŷ_ac must be the same for the D and Q axes, and a 90° phase shift between the AC components of the two axes is required. In this paper, the control of the active power fluctuation is considered and analyzed.
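A short numeric sketch of the decomposition in Equations (21)-(23) follows: with DC plus 2ω components in the DQ vectors, the active power splits into a DC part and a double-line-frequency ripple. The DQ waveforms and the 3/2 factor (amplitude-invariant transform) are illustrative assumptions.

```python
import numpy as np

# DQ vectors per Eqs. (19)-(20) with assumed DC operating point and
# imbalance amplitudes; active power per Eq. (21).
w = 2 * np.pi * 50.0
t = np.linspace(0.0, 0.04, 4000)
d_v, d_i = 0.2, 0.5                        # assumed initial phases delta_v, delta_i
v_d = 1.0 + 0.10 * np.cos(2*w*t + d_v)     # v_d,dc + v_d,ac
v_q = 0.0 + 0.10 * np.sin(2*w*t + d_v)     # v_q,dc + v_q,ac
i_d = 0.8 + 0.15 * np.cos(2*w*t + d_i)
i_q = -0.2 + 0.15 * np.sin(2*w*t + d_i)

P = 1.5 * (v_d * i_d + v_q * i_q)          # Eq. (21)
print(f"P_dc = {P.mean():.4f} pu, peak |P_ac| = {np.max(np.abs(P - P.mean())):.4f} pu")
# DC contribution of the AC x AC products (v_d,ac*i_d,ac + v_q,ac*i_q,ac):
print(f"AC x AC DC part = {1.5 * 0.10 * 0.15 * np.cos(d_v - d_i):.4f} pu")
```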
Feedforward Control Method for the Active Power Ripple
Equation (23) shows that the active power ripple can be suppressed (i.e., P_ac = 0) by controlling the AC component i_d,ac under unbalanced grid conditions. Reference commands for the inner current control system shown in Figure 4 are derived from Equation (23) to eliminate the power ripple P_ac as follows:

i*_d,dc = 2P*_dc/(3 v_d,dc)   (24)
i*_q,dc = −2Q*_dc/(3 v_d,dc)   (25)
i*_d,ac = −(v_d,ac i*_d,dc + v_q,ac i*_q,dc)/v_d,dc   (26)
i*_q,ac(t) = i*_d,ac(t − T)   (27)

where P*_dc and Q*_dc are the active and reactive power references, respectively, and T is the time-delay constant for the 90° phase shift between the DQ vectors.
Equation (27) shows that the reference command i*_d,ac is shifted by 90° to obtain the reference command i*_q,ac and satisfy the DQ vector characteristics explained in Equations (19) and (20). A simple time-delay function could be used to obtain i*_q,ac; however, the time-delay function introduces excessive delay during transient states (e.g., at the beginning and end of events). Therefore, the reference command i*_q,ac is calculated directly to eliminate the impact of the time-delay function. The voltages v_d,ac and v_q,ac can be stated as follows:

v_d,ac(t) = v̂_ac cos(2ωt + δ_v)   (28)
v_q,ac(t) = v̂_ac sin(2ωt + δ_v) = v_d,ac(t − T)   (29)

Applying the time delay T to Equation (26) and substituting Equations (28) and (29) yields:

i*_q,ac(t) = −(v_q,ac i*_d,dc − v_d,ac i*_q,dc)/v_d,dc   (30)

Thus, the reference command i*_q,ac is calculated from the DC and AC components without the need for the time-delay function.
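The sketch below exercises the feedforward reference generation as reconstructed in Equations (26) and (30) above: i*_d,ac cancels the first-order ripple terms and i*_q,ac is computed without an explicit time delay. The operating point and imbalance amplitudes are illustrative assumptions.

```python
import numpy as np

def feedforward_refs(vd_dc, vd_ac, vq_ac, id_dc, iq_dc):
    """AC current references per the reconstructed Eqs. (26) and (30)."""
    id_ac = -(vd_ac * id_dc + vq_ac * iq_dc) / vd_dc        # Eq. (26)
    iq_ac = -(vq_ac * id_dc - vd_ac * iq_dc) / vd_dc        # Eq. (30)
    return id_ac, iq_ac

w = 2 * np.pi * 50.0
t = np.linspace(0.0, 0.04, 4000)
vd_dc, id_dc, iq_dc = 1.0, 0.8, -0.2       # assumed operating point [pu]
vd_ac = 0.1 * np.cos(2*w*t + 0.2)
vq_ac = 0.1 * np.sin(2*w*t + 0.2)          # Eq. (29): delayed D-axis AC
id_ac, iq_ac = feedforward_refs(vd_dc, vd_ac, vq_ac, id_dc, iq_dc)

# With the injected references, the first-order ripple terms cancel and the
# AC x AC product is constant here, so only a small DC offset remains.
P = 1.5 * ((vd_dc + vd_ac) * (id_dc + id_ac) + vq_ac * (iq_dc + iq_ac))
print(f"residual ripple = {np.max(np.abs(P - P.mean())):.2e} pu")
```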
Feedback Control Loop Method for Active Power Ripple
Feedback control loops are widely utilized in industrial practice due to their ability to enhance dynamic performance and reject disturbances. From Equation (23), the power ripple P_ac can be manipulated by injecting the current i_d,ac, while the other power terms (e.g., v_d,ac i_d,dc and v_q,ac i_q,dc) are viewed as disturbances to the control system. Hence, the power ripple control shown in Figure 7 was developed with a proportional-resonant (PR) controller. Figure 8 shows the block diagram of the power ripple control loop used for tuning the controller parameters.
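A minimal discrete PR controller sketch is given below, resonant at the double-line frequency as in the loop of Figure 7. The gains, sample time, and forward-Euler discretization are illustrative assumptions, not the paper's tuned design; a bilinear (Tustin) discretization would normally be preferred in practice.

```python
import numpy as np

class PRController:
    """Proportional-resonant controller G(s) = kp + kr*s / (s^2 + w_res^2)."""
    def __init__(self, kp, kr, w_res, ts):
        self.kp, self.kr, self.w_res, self.ts = kp, kr, w_res, ts
        self.x1 = 0.0   # resonator states (forward-Euler integration,
        self.x2 = 0.0   # chosen for readability only)

    def step(self, error):
        self.x1 += self.ts * (error - self.w_res**2 * self.x2)
        self.x2 += self.ts * self.x1
        return self.kp * error + self.kr * self.x1

ctrl = PRController(kp=0.5, kr=200.0, w_res=2 * 2*np.pi*50, ts=1e-4)
t = np.arange(0.0, 0.1, 1e-4)
ripple = 0.1 * np.sin(2 * 2*np.pi*50 * t)         # measured P_ac ripple [pu]
out = np.array([ctrl.step(-r) for r in ripple])   # drives the i*_d,ac reference
print(f"controller output range: [{out.min():.3f}, {out.max():.3f}]")
```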
Extraction of DC and AC Components
The magnitude and phase angle of the AC components are critical and must be determined in order to control the power ripple under unbalanced grid voltages. DC and AC components are typically extracted using low-pass (LPF) and high-pass (HPF) filters. However, conventional HPFs introduce a phase shift at the output, and this phase error may cause instability in the power ripple control system. The open-loop frequency analysis of conventional first- and higher-order HPFs is shown in Figure 9; the cut-off frequency was 10 Hz. The phase error was about 4.77° at the double-line frequency with the first-order HPF, and using higher-order HPFs increased the phase error. Therefore, the DC and AC components of the DQ grid voltages and currents were extracted using series-cascaded first-order LPFs as shown in Figure 10, where Y is the input to the filter, Y_dc and Y_ac are the DC and AC outputs of the filter, ω_c is the cut-off frequency of the LPF, and i represents the number of series-connected LPFs. As shown in Figure 11, the phase error is almost zero when four series-cascaded LPFs are used.
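The extraction scheme of Figure 10 can be sketched as below: the DC part is the output of the cascaded first-order LPFs and the AC part is the input minus the DC part, which avoids the HPF phase error at the double-line frequency. The 10 Hz cut-off matches the value used for Figure 9, and four stages are assumed per Figure 11.

```python
import numpy as np

def extract_dc_ac(y, ts, wc=2*np.pi*10.0, stages=4):
    """DC/AC extraction with series-cascaded first-order LPFs (Fig. 10)."""
    states = np.zeros(stages)
    y_dc = np.empty_like(y)
    for k, sample in enumerate(y):
        x = sample
        for j in range(stages):              # cascade of first-order LPFs
            states[j] += ts * wc * (x - states[j])
            x = states[j]
        y_dc[k] = x
    return y_dc, y - y_dc                    # (DC component, AC component)

ts = 1e-4
t = np.arange(0.0, 0.4, ts)
y = 1.0 + 0.1 * np.cos(2*np.pi*100*t)        # DC + double-line-frequency AC
y_dc, y_ac = extract_dc_ac(y, ts)

# After settling, the AC output tracks the 2w ripple with ~zero phase error
err = y_ac[-2000:] - 0.1 * np.cos(2*np.pi*100*t[-2000:])
print(f"max AC tracking error after settling: {np.max(np.abs(err)):.2e}")
```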
RTDS Results
Controller validation was performed on the MMC-based HVDC system shown in Figure 12. A real-time digital simulator (RTDS) and Xilinx Virtex 7 FPGAs, seen in Figure 13, were used to verify the power control method under normal and abnormal operating conditions. A Y-∆ converter transformer prevents the zero-sequence component from flowing into the MMC. Table 1 shows the system parameters. The MMC system utilizes 400 SMs per arm with an SM voltage rating of 1.6 kV.
Feedforward Power Ripple Control Method Validation
The performance of the MMC system was evaluated under single-line-to-ground (SLG) faults to validate the proposed feedforward control method. The SLG fault occurred at the high-voltage side of the converter transformer on phase a. The MMC active power performance under the SLG fault with the conventional control and with the proposed feedforward power ripple control is compared in Figure 14. The active power ripple under the SLG fault is eliminated using the proposed feedforward power control technique. Figure 15 shows the AC grid-side voltages, active power, reactive power, DQ current vectors i_d and i_q, and AC grid current performance under the SLG fault. The MMC system operating events are as follows:
1. At time = 0, the MMC system operates in inverter mode with a unity power factor (P = 1 pu and Q = 0 pu).
2. At time = 0.1 s, the SLG fault is initiated.
3. At time = 0.3 s, the fault is cleared.
4. At time = 0.8 s, the fault is cleared.
As shown in Figure 15, the feedforward control method effectively suppresses the active power ripple under the SLG fault in both inverter and rectifier modes. The active power precisely tracks its reference command, and the AC grid voltages and currents remain sinusoidal. The overall dynamic performance of the MMC is acceptable under SLG faults in inverter and rectifier modes.
Feedback Power Ripple Control Method Validation
To validate the power control method proposed in Section 4.2, the dynamic performance of the MMC system was studied under an SLG fault. The MMC active power performance with the proposed feedforward and feedback control techniques under the SLG fault is shown in Figure 16. The feedback control method shows a higher suppression capability and a lower transient oscillation than the feedforward control method. Figure 17 shows the AC grid-side voltages, active power, reactive power, DQ current vectors i_d and i_q, and AC grid current performance under the SLG fault. The MMC system operating events in this section are similar to those in Section 5.1.
Stability Effect of Controller on MMC Systems
The stability and dynamic performance of a switching power electronic converter are critical for safe operation; therefore, the stability analysis of feedback-controlled converters is a significant design consideration. Power-electronics-based systems are sensitive to instabilities arising from impedance interactions between several power-electronics-based subsystems. Therefore, the system is divided into source and load subsystems at an arbitrary DC interface point. Numerous stability criteria for DC systems have been proposed, such as the Middlebrook [26,27], gain and phase margin [28], energy source analysis consortium [22], three-step impedance [29], and passivity-based stability [30] criteria. Impedance-based stability analysis treats the power-electronics-based system as a black box and predicts its stability. The authors in [31,32] showed that the impedance ratio of the load to the source can be measured at the point of common coupling (PCC); this impedance ratio can be considered the open-loop gain of the entire system, and stability can be assessed using the Nyquist stability criterion (NSC). In this paper, the frequency-scanning technique is applied to the system shown in Figure 12 to investigate its stability, by measuring the MMC impedance and the AC grid admittance in the DQ domain at different frequencies under a balanced grid condition. The frequency scan is first applied to a conventional MMC controller under a balanced grid condition: as seen in Figure 18, the Nyquist plot reveals an unstable response with an encirclement of (−1, j0). The same test was applied to the same MMC setup with the proposed feedback controller: as seen in Figure 19, the Nyquist plot does not encircle (−1, j0), indicating that the system is stable at the scanned frequencies with the proposed controller.
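The sketch below shows the encirclement test conceptually: the "open-loop gain" is the product of the converter impedance and the grid admittance obtained by frequency scanning. The first-order models are placeholders standing in for scanned data; only the counting around (−1, j0) is the point here.

```python
import numpy as np

# Impedance-ratio Nyquist test: winding of L(jw) + 1 around the origin
# along the positive-frequency branch (the negative-frequency branch is
# its mirror image for a real system).
f = np.logspace(0, 3, 2000)                  # scan 1 Hz .. 1 kHz
s = 1j * 2 * np.pi * f
Z_mmc = 0.05 + s * 5e-3                      # placeholder converter impedance
Y_grid = 1.0 / (0.02 + s * 1e-3)             # placeholder grid admittance
L = Z_mmc * Y_grid                           # impedance ratio = open-loop gain

phase = np.unwrap(np.angle(L + 1.0))
print(f"net winding on the scanned branch: {(phase[-1] - phase[0]) / (2*np.pi):.2f} turns")
```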
Conclusions
This paper implemented active power control using vector current control in the DQ synchronous reference frame for an MMC system under SLG faults. A simple feedforward control method was proposed to eliminate the active power ripple under unbalanced grid voltages; the power fluctuation under the SLG fault is significantly reduced with this approach. A feedback loop control method was also proposed to eliminate the power ripple with more robustness than the feedforward control method: with the feedback control, the power ripple is significantly reduced with lower oscillation during the fault transient. An AC-component extraction technique was proposed for the power ripple controls to eliminate the phase error that occurs with conventional high-pass filters. The dynamic performance of the MMC-based HVDC system was examined in the RTDS system to verify the effectiveness of the proposed control methods under SLG faults. The results show that the power ripple was significantly reduced with the feedforward and feedback control strategies under unbalanced grid voltages.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,527.2 | 2022-05-12T00:00:00.000 | [
"Engineering"
] |
Identification of distribution features of the instantaneous power components of the electric energy of the circuit with polyharmonic current
Many researchers pay attention to the problem of distortion electric power. The known theoretical results reflected in the current standards are reasonably criticized with respect to the generation of distortion power by current and voltage of different frequencies. The paper develops an approach based on the order of generation of the instantaneous electric power components depending on the combination (sum or difference) of the current and voltage frequencies. Analytical expressions are obtained for the instantaneous electric power components of a harmonic current circuit consisting of a capacitor, an inductance coil and a resistor connected in series. Two components, active and reactive, are singled out from the four obtained components of the instantaneous power, as well as the orthogonal power components oscillating with double frequency. It is demonstrated that the sum of the squares of the active and reactive components and the sum of the squares of the mentioned orthogonal components coincide and are equal to the square of the total (apparent) power. The method is then developed for the general case of polyharmonic current and voltage. In this case, the active and reactive power components can be singled out from the instantaneous power components as orthogonal quadrature components of zero frequency; however, it is impossible to single out the apparent power. Two numerical experiments with periodic current and voltage, each containing three harmonics, were carried out. The RMS values of the currents and voltages in the two experiments were assumed equal, while the amplitudes of the second and third current harmonics exchanged their positions. This example demonstrates that the integral indices of the apparent power and the distortion power, calculated by conventional methods, prove to be the same in both cases, which is incorrect. Under the same conditions, among the proposed instantaneous power components there are components whose values differ between the experiments. As a result, it is proposed to use the root-mean-square value of the mentioned components, in relation to the root-mean-square value of the instantaneous power, for the assessment of the distortion degree.
Introduction
In electric power, electrotechnical and electromechanical systems and complexes, energy or power balance equations are used to solve problems related to the conversion of electric energy into other types of energy. Balance equations are used to verify the results of a problem solution or to assess the distribution of power fluxes. In most cases, the balance equation is formed using the corresponding values averaged over a certain time period.
In most problems of electrical engineering and electromechanics, power values provide a generalizing numerical characteristic and are used to compare the power indices of electric system elements, i.e., to verify the balance of active and reactive powers. During the operation of networks supplying electric energy to consumers, independently of the current character, problems of electric energy metering arise. The average (over a certain period of time) value of power is used as the metering index for direct current networks, while the active and reactive powers are used for alternating current networks [1,2]. Electric energy distortions caused by the energy source or the load are neglected in electric energy metering. Therefore, the problem of determining the power components that reflect the nature and level of electric energy distortion is topical.
Literature review and problem statement
Electric power systems are used for the generation and transmission of energy, which for an electric circuit is primarily characterized by the active power:

P = (1/T) ∫ from t_0 to t_0+T of u·i dt   (1)

where u is the instantaneous voltage; i is the instantaneous current; t is time; t_0 is the start of the averaging interval; and T is the averaging period.
If the circuit contains elements capable of accumulating energy, the result of the voltage and current interaction is analyzed with the use of the Cauchy–Schwarz–Bunyakovsky inequality [3], which yields

S = U_rms I_rms ≥ P.

In the case of harmonic currents and voltages, the following equation is obtained:

S² = P² + Q²,

where S is the total (apparent) power and Q is the reactive power (the power of the energy-accumulating elements). The presence of elements with a nonlinear volt-ampere characteristic in the electric circuit makes it impossible to use the notion of «reactive power» for expression (1). In paper [4], based on the Budeanu theory, this notion is generalized as the inactive power N, which combines the reactive power Q of the energy-accumulating elements and the distortion power D caused by the elements with a nonlinear volt-ampere characteristic, i.e.:

N² = Q² + D².

A number of papers deal with the research of methods for determining the distortion power D. In particular, paper [1] reveals the cause of distortion power depending on current and voltage distortion, but the dependence of the distortion power on the relation of harmonic orders is ignored. In paper [5], the author compares methods for the determination of reactive power and concludes that the results of the calculation differ in circuits with polyharmonic current. At present, the power components are most clearly systematized in the IEEE standard [6]. The standard declares a certain number of electric energy power components, each of which reflects characteristic indices. Using the known vector forms and notions (total, active, inactive, reactive power, distortion power), additionally divided into the fundamental and higher harmonics of current or voltage, the authors multilaterally characterize the power flux. With this purpose in view, the representation of currents, voltages and instantaneous power in the trigonometric form of the Fourier series is used. The mentioned power components are based on the Budeanu concept, but in some papers, e.g. [7], the determination of the power harmonic components based on the harmonic components of current and voltage is criticized.
At the same time, the determination of power components with the use of the harmonic components of current and voltage, as stated in papers [8,9], creates the ground for the assessment of the energy process. The authors of paper [10] apply the analysis of polyharmonic functions of current, voltage and power to the analysis of electric machine operating modes, and in paper [11] it is used to solve the problems of identifying the parameters and characteristics of equivalent circuit elements. In this case, the energy conservation law and Tellegen's theorem are observed, which makes this approach more acceptable for the assessment of electric power. The author of paper [12] emphasizes this and substantiates the relation between power distortion and the formation of the electric energy cost.
All this allows us to state that it is reasonable to research the order of determining the electric power components in a circuit with polyharmonic current.
The aim and objectives of the study
The aim of the paper consists in the analytical determination of the components of instantaneous electric power in a polyharmonic current circuit with the separation of active and reactive power.
The following objectives were set to achieve the aim of the paper: -to determine the instantaneous electric power for a harmonic current electric circuit; -to find the reactive and apparent powers of the alternating harmonic current circuit; -to take into account the correlation of the polyharmonic current and voltage harmonic frequencies in the calculation of the instantaneous electric power and its components; -to assess experimentally the level of the proposed components in the instantaneous electric power.
Determination of the instantaneous electric power for a harmonic current electric circuit
Let us consider an elementary circuit with a resistive element R, an inductive element L and a capacitive element C connected in series. The harmonic current in the linear circuit (Fig. 1, a) is driven by the voltage source. The mode parameters are considered harmonic and are written as follows:

u = U sin(ωt + ψ_u),  i = I sin(ωt + ψ_i),

where U and I are the amplitudes of voltage and current, respectively; ψ_u and ψ_i are the initial phases of voltage and current, respectively; and ω is the angular frequency.
For the circuit in Fig. 1, a, the voltage and current of the circuit are related by Kirchhoff's voltage law, u = u_R + u_L + u_C, where u_R, u_L, u_C; i_R, i_L, i_C; I_R, I_L, I_C; ψ_iR, ψ_iL, ψ_iC are the voltages, currents, current amplitudes and current phases of the resistor, inductance and capacitor, respectively. The instantaneous power p of the circuit with the given current and voltage is

p = u·i = U I sin(ωt + ψ_u) sin(ωt + ψ_i).

We introduce the following designations:

P_a.1-1 = (UI/2) cos(ψ_u − ψ_i),  P_b.1-1 = (UI/2) sin(ψ_u − ψ_i),
P_a.1+1 = −(UI/2) cos(ψ_u + ψ_i),  P_b.1+1 = (UI/2) sin(ψ_u + ψ_i);

then the expression for the instantaneous power takes the form

p = P_a.1-1 cos((1−1)ωt) + P_b.1-1 sin((1−1)ωt) + P_a.1+1 cos((1+1)ωt) + P_b.1+1 sin((1+1)ωt),   (2)

where P_a.1-1, P_b.1-1 are the amplitudes of the cosine and sine components of the power harmonic produced by the current and voltage harmonics whose frequencies are subtracted, and P_a.1+1, P_b.1+1 are the amplitudes of the cosine and sine components of the power harmonic produced by the current and voltage harmonics whose frequencies are summed.
Determination of reactive and total power of the alternating monoharmonic current circuit
Deliberately not excluding from the analysis the sin(0) component, which does not participate in the formation of the instantaneous power, we note that P_a.1+1 ≠ P_a.1-1 and P_b.1+1 ≠ P_b.1-1. We introduce the following designations: the active power of the circuit P = P_a.1-1 = (UI/2)cos(ψ_u − ψ_i) and the reactive power Q = P_b.1-1 = (UI/2)sin(ψ_u − ψ_i). To preserve a uniform written form, we rewrite equation (2) as

p = P cos((1−1)ωt) + Q sin((1−1)ωt) + P_a.1+1 cos((1+1)ωt) + P_b.1+1 sin((1+1)ωt),

with the total power S = √(P_a.1+1² + P_b.1+1²) = √(P² + Q²) = UI/2. Such an approach clearly indicates the place of the active P, reactive Q and total S powers in the instantaneous power. It is a well-known fact that the balance in the electric circuit is not reproduced by the total power. It should also be noted that the instantaneous power in the circuit cannot be reproduced from the active, reactive and total powers alone, because the initial phase of the oscillating power cannot be determined from them. The power components with amplitudes P_a.1-1, P_b.1-1, P_a.1+1, P_b.1+1 reproduce the instantaneous power completely.
It is rational to additionally use the indices characterizing the power as a signal [1,3]: the maximal P_max, the average P_av and the root-mean-square P_rms values; the latter is the RMS value of the instantaneous power signal,

P_rms = √((1/T) ∫ from t_0 to t_0+T of p² dt).

With the use of the initial written form of the current and voltage and their initial phases, the expression for the power takes the form

p = P − S cos(2ωt + ψ_s),   (3)

where ψ_s = ψ_u + ψ_i. Comparing expressions (2) and (3), we obtain P_a.1+1 = −S cos(ψ_s) as the cosine component of the total power and P_b.1+1 = S sin(ψ_s) as the sine component of the total power. Thus, the total power components pulsating with double frequency are not the active and reactive powers; they are the orthogonal components of the oscillating total power S with initial phase ψ_s. These components are artificially reduced to the active and reactive powers, which some authors achieve by a corresponding phase shift of the cosine and sine components of the total power.
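A numeric check of the single-harmonic decomposition (2)-(3) is sketched below: the four amplitudes fully reproduce p(t), and the sum of squares of the active and reactive components equals the sum of squares of the double-frequency orthogonal components, both equal to S². Amplitudes and phases are illustrative.

```python
import numpy as np

U, I, w = 100.0, 5.0, 314.0
psi_u, psi_i = 0.4, -0.3
t = np.linspace(0.0, 2*np.pi/w, 5000)          # one fundamental period

p = (U * np.sin(w*t + psi_u)) * (I * np.sin(w*t + psi_i))

P  =  U*I/2 * np.cos(psi_u - psi_i)   # P_a.1-1, active power
Q  =  U*I/2 * np.sin(psi_u - psi_i)   # P_b.1-1, reactive (multiplies sin(0) in p)
Pa = -U*I/2 * np.cos(psi_u + psi_i)   # P_a.1+1
Pb =  U*I/2 * np.sin(psi_u + psi_i)   # P_b.1+1

p_rec = P + Pa*np.cos(2*w*t) + Pb*np.sin(2*w*t)
print("components reproduce p(t):", np.allclose(p, p_rec))
print(f"P^2+Q^2 = {P**2 + Q**2:.1f}, Pa^2+Pb^2 = {Pa**2 + Pb**2:.1f}, "
      f"S^2 = {(U*I/2)**2:.1f}")
```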
Taking into account the correlation of the polyharmonic current and voltage harmonic frequencies in the calculation of the instantaneous electric power and its components
Let us consider the polyharmonic current and voltage:

u = Σ_k U_k sin(kωt + ψ_u.k),  i = Σ_n I_n sin(nωt + ψ_i.n).

As stated in [6], the function of the instantaneous power contains harmonics whose order s is determined by both the difference (k − n) and the sum (k + n) of the orders of the voltage and current harmonics, i.e., s = k ± n. Thus, the instantaneous power is

p = Σ_s [P_a.s cos(sωt) + P_b.s sin(sωt)],   (4)

where, in particular, P_b.nc.s is the amplitude of the sine component caused by the action of current and voltage harmonics of different frequencies whose sum or difference gives an odd result (the non-canonic components). Consequently, in this representation the active and reactive powers are unambiguously separated in the instantaneous power; at the same time, it is impossible to separate the total power S among the components of the instantaneous power [3].
The representation of the instantaneous power in the form

p = Σ_s [(P_a.c.s + P_a.pc.s) cos(sωt) + (P_b.c.s + P_b.pc.s) sin(sωt)],   (5)

where the canonic parts (index «c») are generated by current and voltage harmonics of the same frequency (n = k) and the pseudo-canonic parts (index «pc») by harmonics of different frequencies (n ≠ k), complies with the energy conservation law and Tellegen's theorem [3]. In this case, the power signal is generalized by the RMS value P_rms.
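The component bookkeeping behind expression (4) is sketched below: each pair (k, n) of voltage and current harmonics contributes to the power harmonics of orders |k − n| and k + n. Amplitudes and phases are illustrative; the self-check confirms the accumulated components reproduce p(t).

```python
import numpy as np

def power_components(U, psi_u, I, psi_i):
    """Accumulate P_a.s, P_b.s of p(t) = sum_s [P_a.s cos(swt) + P_b.s sin(swt)]."""
    s_max = len(U) + len(I)
    Pa, Pb = np.zeros(s_max + 1), np.zeros(s_max + 1)
    for k in range(1, len(U) + 1):
        for n in range(1, len(I) + 1):
            c = U[k-1] * I[n-1] / 2.0
            dm = (psi_u[k-1] - psi_i[n-1]) * (1 if k >= n else -1)
            dp = psi_u[k-1] + psi_i[n-1]
            Pa[abs(k-n)] += c * np.cos(dm)    # difference term, order |k-n|
            Pb[abs(k-n)] -= c * np.sin(dm)    # (sin(0) components kept at s = 0)
            Pa[k+n] -= c * np.cos(dp)         # sum term, order k+n
            Pb[k+n] += c * np.sin(dp)
    return Pa, Pb

w = 314.0
U, psi_u = [100.0, 20.0, 10.0], [0.1, -0.5, 0.3]
I, psi_i = [5.0, 1.0, 0.5], [-0.2, 0.4, 0.0]
Pa, Pb = power_components(U, psi_u, I, psi_i)

# Self-check: the components reproduce p(t) = u(t) * i(t)
t = np.linspace(0.0, 2*np.pi/w, 20000)
u = sum(a * np.sin((k+1)*w*t + p) for k, (a, p) in enumerate(zip(U, psi_u)))
i = sum(a * np.sin((n+1)*w*t + p) for n, (a, p) in enumerate(zip(I, psi_i)))
p_rec = sum(Pa[s]*np.cos(s*w*t) + Pb[s]*np.sin(s*w*t) for s in range(len(Pa)))
print("decomposition reproduces p(t):", np.allclose(u*i, p_rec))
```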
Experimental assessment of the level of the proposed components in the instantaneous electric power
The rationality of this representation of the instantaneous power can be illustrated by the following example. Let the current and voltage be set by three harmonics with the corresponding amplitudes and initial phases (Table 1), with the fundamental frequency ω = 314 s⁻¹. Two numerical experiments performed in the Mathcad package are considered, which differ in the values of the amplitudes of the second and third current harmonics (they interchange their positions). The diagrams of current and voltage for the first and second experiments are given in Fig. 2, a, c, respectively, and the resulting diagrams of power in Fig. 2, b, d. Obviously, different values of the current harmonics produce different power, but the integral indices [5] (active P, reactive Q, total S and even distortion power D) remain unchanged. As seen in Fig. 2, b, d, the character of the power distortion differs, which results in different values of the maximum power. Table 2 contains the values of the orthogonal components of power for the first and second experiments obtained according to expression (4). The components with zero frequency, P_a.0 = P and P_b.0 = Q, whose values coincide in Tables 2 and 3, are determined unambiguously. It is impossible to relate the components of other frequencies to the mentioned powers or to the total power S, but this representation of the instantaneous power makes it possible to distinguish the instantaneous powers p obtained in the two experiments.
It is necessary to point out the different values of the power quadratic norm and of its maximal value in the two experiments. One should also note the coincidence of the values of the orthogonal components of the fifth power harmonic (s = 5), caused by the interaction of the second and third harmonics of voltage and current. The values of the orthogonal components of the sixth power harmonic (s = 6) differ by an order of magnitude between the experiments, as they are caused by the interaction of exclusively the third harmonics of current and voltage. Separate attention should be paid to the values of the orthogonal components of the second power harmonic (s = 2), which reflect the interaction of the current and voltage harmonics for which n ± k = s = 2. In this case, apart from the first harmonics of current and voltage, all other harmonics of current and voltage satisfying the condition n ± k = s = 2 take part in the generation of this power component. Table 3 contains the values of the orthogonal components of power for the first and second experiments according to the components obtained by formula (5). The values of the orthogonal components of the zero harmonic and of the harmonics s = 5 and s = 6 coincide with the values given in Table 2.
According to formula (5), the orthogonal components of the second power harmonic (s = 2) are divided into two parts. Table 3 uses the following designations: the canonic components of power (index «c»), produced by the components of current and voltage of the same frequency, n = k = 1; and the pseudo-canonic components of power (index «pc»), produced by the harmonics of current and voltage of different frequencies, n ≠ k. That is why the values of the canonic parts of the orthogonal components of the power harmonic s = 2 coincide for both experiments, while the pseudo-canonic parts differ.
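The two-experiment comparison can be sketched as below: swapping the amplitudes of the second and third current harmonics leaves P, Q, S and D unchanged but changes P_rms and P_max. The amplitudes and phases are illustrative stand-ins for Table 1, chosen so that the integral indices are swap-invariant (equal voltage amplitudes and equal phase differences for the second and third harmonics).

```python
import numpy as np

w = 314.0
t = np.linspace(0.0, 2*np.pi/w, 100000)
U, psi_u = [100.0, 10.0, 10.0], [0.0, 0.3, 0.5]
psi_i = [-0.4, 0.1, 0.3]           # psi_u - psi_i is equal for k = 2, 3

def indices(I):
    u = sum(a*np.sin((k+1)*w*t + p) for k, (a, p) in enumerate(zip(U, psi_u)))
    i = sum(a*np.sin((n+1)*w*t + p) for n, (a, p) in enumerate(zip(I, psi_i)))
    p = u * i
    P = sum(U[k]*I[k]/2*np.cos(psi_u[k]-psi_i[k]) for k in range(3))
    Q = sum(U[k]*I[k]/2*np.sin(psi_u[k]-psi_i[k]) for k in range(3))
    S = np.sqrt(np.mean(u**2) * np.mean(i**2))          # apparent power
    D = np.sqrt(S**2 - P**2 - Q**2)                     # Budeanu distortion power
    return P, Q, S, D, np.sqrt(np.mean(p**2)), p.max()

for I in ([5.0, 1.2, 0.6], [5.0, 0.6, 1.2]):            # harmonics 2 and 3 swapped
    P, Q, S, D, P_rms, P_max = indices(I)
    print(f"I={I}: P={P:.1f} Q={Q:.1f} S={S:.1f} D={D:.1f} "
          f"P_rms={P_rms:.1f} P_max={P_max:.1f}")
```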
Discussion of the results of the research of the order of generating the electric power instantaneous components in a polyharmonic current circuit
The amplitudes of the sine and cosine orthogonal components of a power harmonic depend on the current and voltage harmonics whose frequency difference or sum equals the frequency of that power harmonic. The method used in this research separates the power components according to the combination of the current and voltage harmonic frequencies. The calculation of the power components becomes more laborious as the number of current and voltage harmonics grows, because it is necessary to verify which combinations of current and voltage harmonic frequencies produce a given power harmonic frequency. If the number of current and voltage harmonics is limited, a certain error in the calculation of the power components is possible; given the computational capabilities of modern signal-processing devices, such errors can be reduced to a minimal admissible value.
For a harmonic current electric circuit, the sine and cosine orthogonal components of the power harmonics are determined from the orthogonal components of current and voltage (2), and the components corresponding to the active, reactive and oscillating (apparent) power are singled out from them. For the polyharmonic current circuit, the power components are separated according to the combinations of the current and voltage harmonic frequencies (4). The advantage of the proposed solution over the existing ones is illustrated by numerical experiments in which the instantaneous powers differ while the known integral indices are equal. The further development of this research consists in substantiating the indices of the contribution of the power components to the instantaneous electric power of electric power system units; difficulties may arise in separating the power fluxes in the nodes of electric power systems.
Conclusions
1. An alternative to the conventional analytical expression of the instantaneous power of an alternating harmonic current circuit has been proposed. It includes components corresponding to the active, reactive and total power. The sine orthogonal components of power with zero function argument have not been excluded from the analysis.
2. The low efficiency of the indices of total power and distortion power in a circuit with polyharmonic current and voltage, calculated according to the known methods, has been demonstrated. These indices remained unchanged in two numerical experiments in which the RMS values of current and voltage and their initial phases were set equal, while the current amplitudes of the second and third harmonics interchanged their positions. This resulted in a change of the power root-mean-square value, P_rms1 = 1.11 P_rms2, and of its maximal value, P_max1 = 1.03 P_max2.
3. Dividing the power harmonics into orthogonal components and observing the condition of evenness or oddness of the combinations of the current and voltage harmonic frequencies, analytical expressions have been obtained for the determination of the power components: zero-frequency, canonic, pseudo-canonic and non-canonic.
4. For the numerical experiments, using the proposed methods for the determination of the power components, we have found that the canonic parts of the power second-harmonic components coincide in both experiments, while the pseudo-canonic parts differ. This can be used in the formulation of instantaneous power indices.
"Engineering",
"Physics"
] |
Nonlinear Magneto-Quasistatic Simulation of Superconducting Tapes With a-ψ Algebraic Formulation
The study of superconducting tapes requires specialized formulations to account for the high aspect ratio of the tapes and the extremely high conductivity of the thin superconducting layer. This article presents a new formulation coupling the line integral of the magnetic vector potential $a$, defined in the whole domain, to the stream function $\psi$, defined on the nodes of the superconducting layer. Results on two benchmarks show that the proposed $a-\psi$ formulation is effective in solving problems where the currents are limited to thin layers, also in the case of strong material nonlinearity.
I. INTRODUCTION
HIGH-temperature superconducting (HTS) materials represent a new frontier in high-energy applications: operating at higher temperatures than low-temperature superconductors, they are able to withstand stronger magnetic fields and to carry high transport currents with minimal energy loss [1]. In this context, ReBCO coated conductors are a promising choice that could satisfy both technical and economic requirements. ReBCO refers to composite materials based on rare-earth elements, available in thin tape form with a layered structure, enveloped by a copper support layer. In this work, Yttrium-based tapes, namely YBCO tapes, are considered (Fig. 1). The high tolerance of HTS tapes to tensile and compressive strain, due to the presence of the Hastelloy substrate, enables the conductor-on-round-core (CORC) technology: the tape is wound around a central copper core, guaranteeing mechanical and electrical isotropy [2], which makes these conductors suitable for high-magnetic-field applications. Due to the high aspect ratio between the extremely small thickness of the tape (from 40 to 100 µm) and its width, tapes can suitably be represented as infinitely thin sheets [3]. Instead, when 3-D models of the tapes are considered, the objects are discretized with volume elements (usually tetrahedra or hexahedra), with two major drawbacks.
1) A large number of elements are required for the discretization, increasing the number of unknowns.
2) The elements are characterized by a large aspect ratio, affecting the efficiency of the numerical solution [4].
This article presents a new formulation based on the cell method that couples a 2-D discretization of the superconducting layer with a 3-D discretization of the surrounding environment. Particular attention is devoted to the treatment of the electrical resistivity, to avoid singularities in the final system when the resistivity reaches the infinitesimal values typical of superconductors.
II. MATERIAL MODELING
The YBCO tape under study is a second-generation coated conductor made of different layers, as shown in Fig. 1. The tape width is 4 mm, whereas the thicknesses of the different layers are reported in Table I along with the electrical conductivities at 77 K [5]. The tape conductivity is homogenized considering that all the layers are connected in parallel. The linear conductivities, due to the copper (Cu) stabilizer, the silver (Ag) layer, and the Hastelloy (Ha) substrate, are averaged using their thicknesses δ as weights:

σ_lin = (σ_Cu δ_Cu + σ_Ag δ_Ag + σ_Ha δ_Ha) / (δ_Cu + δ_Ag + δ_Ha)   (1)

This value is then scaled to refer the conductivity to the thickness δ_sc of the superconducting (sc) YBCO layer:

σ'_lin = σ_lin (δ_Cu + δ_Ag + δ_Ha) / δ_sc   (2)

Finally, the linear contribution is combined with the nonlinear conductivity of YBCO, assuming the conventional power law with constant critical current density J_c [6]:

E = E_0 (J_sc/J_c)^n,  σ_sc(J_sc) = J_sc/E   (3)

where E_0 = 0.1 mV/m, J_c = 2.85 × 10¹⁰ A/m², and n = 30.5 [6]. By exploiting the electrical parallel between the superconducting YBCO layer and the equivalent linear material, it is possible to apply the current divider rule. Due to the scaling (2), the equivalent cross sections of the two layers are identical, so the current divider rule holds in terms of current densities:

J = J_sc + J'_lin = [σ_sc(J_sc) + σ'_lin] E   (4)

where J is the total equivalent current density in the tape. Equation (4) can be solved for different values of J, obtaining the homogenized characteristic σ = σ_sc(J) + σ'_lin of the tape.
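The homogenization of Equations (1)-(4) can be sketched as below. The layer thicknesses and linear conductivities are assumed placeholders (Table I is not reproduced here); only E_0, J_c and n are the published YBCO parameters.

```python
# Sketch of the tape homogenization: thickness-weighted averaging of the
# linear layers, scaling to the YBCO thickness, and parallel combination
# with the power-law conductivity.
E0, Jc, n = 0.1e-3, 2.85e10, 30.5                   # power law E = E0*(J/Jc)^n
delta = {"Cu": 40e-6, "Ag": 2e-6, "Ha": 50e-6}      # assumed thicknesses [m]
sigma = {"Cu": 2.4e8, "Ag": 3.5e8, "Ha": 8.0e5}     # assumed values at 77 K [S/m]
d_sc = 1e-6                                         # assumed YBCO thickness [m]

d_lin = sum(delta.values())
sigma_lin = sum(sigma[k] * delta[k] for k in delta) / d_lin   # Eq. (1)
sigma_lin_p = sigma_lin * d_lin / d_sc                        # Eq. (2)

def homogenized_sigma(J_sc):
    """Total conductivity at the electric field set by the YBCO current density."""
    E = E0 * (J_sc / Jc) ** n            # Eq. (3): same E across both paths
    return J_sc / E + sigma_lin_p        # parallel conduction, Eq. (4)

for J in (0.5 * Jc, 1.0 * Jc, 1.2 * Jc):
    print(f"J_sc/Jc = {J/Jc:.1f}: sigma = {homogenized_sigma(J):.3e} S/m")
```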
III. DOMAIN DISCRETIZATION AND INTEGRAL VARIABLES
Following the principles of the cell method [7], the domain under study is subdivided into two complementary regions: the conductive domain, whose thickness is negligible compared with the other dimensions and which is represented by 2-D layers, and the surrounding 3-D nonconductive domain. The latter is discretized with a tetrahedral mesh G, while the former is discretized with a surface triangular mesh G_s, created with the constraint that the faces of G_s are also faces of G. From the tetrahedral mesh, a barycentric dual mesh G̃ is derived [7], where the dual nodes are located at the barycenters of the tetrahedra, the dual edges connect adjacent dual nodes and pass through the primal face midpoints, and the dual faces hinge around the primal edges and are formed by closed loops of dual edges, Fig. 2(a). Similarly, a surface dual mesh G̃_s is built starting from G_s, as shown in Fig. 2(b).
A. a Formulation in the 3-D Domain
In the 3-D subdomain, the electromagnetic variables associated with the spatial entities of G and G̃ are [see also Fig. 2(a)]: the line integral of the magnetic vector potential a along primal edges, the magnetic flux b through primal faces, the magnetomotive force h along the dual edges, and the electric current ĩ through the dual faces (bold notation is used for the vectors that collect these quantities). According to [8], the following equations hold:

b = C a   (5)
C̃ h = ĩ   (6)
h = M_ν b   (7)

where C and C̃ are the face-to-edge incidence matrices defined on G and G̃, respectively, and M_ν is the constitutive reluctance matrix. Using the identity C̃ = C^T, the previous equations can be combined into

C^T M_ν C a = ĩ.   (8)

Standard tree gauging can be applied to guarantee the uniqueness of the solution [9].
B. ψ Formulation in the 2-D Domain
On the spatial entities of the surface meshes G_s and G̃_s, the following quantities are defined [see also Fig. 2(b)]: the stream function ψ on primal nodes, the electric current i through primal edges, the electric voltage ẽ along dual edges, and the magnetic flux b̃ through dual faces. As detailed in [10], after defining the classical constitutive resistance matrix M_ρ and using the edge-to-node incidence matrix G_s and the dual face-to-edge incidence matrix C̃_s, the current field equations can be written as

i = G_s ψ   (9)
ẽ = M_ρ0 i + R   (10)
C̃_s ẽ = −d b̃/dt   (11)

In (10), the nonlinear Ohm's law is linearized using the fixed-point scheme with residual R and fixed-point resistivity ρ_0. Using the identity C̃_s = G_s^T, the previous equations are combined into

G_s^T (M_ρ0 G_s ψ + R) = −d b̃/dt.   (12)
C. Coupling Terms
It is worth noting that the assignment of physical variables to the space elements defined in Section III-B does not follow the conventional association rules [7]. For example, the stream function is associated with primal nodes instead of the conventional assignment to dual nodes. In this way, the total current in a conductor can be defined as the difference of stream function values using (9). Coupling (8) and (12) requires expressing the current through the dual faces ĩ in terms of ψ, and the magnetic flux through the dual faces b̃ in terms of a.
1) ψ − ĩ Relation: To map the vector of stream functions ψ defined on the primal nodes to the corresponding vector ψ̃ defined on the dual nodes, two steps are necessary. First, the dual grid is augmented by adding the mid-edge nodes to the list of dual nodes [11]. In this way, ψ̃ can be easily interpolated using the face-to-node incidence matrix T and the edge-to-node incidence matrix G_s^+, where all the entries are taken as positive, as shown in Fig. 3(a):

ψ̃ = Λ ψ,  with Λ = [(1/3) T ; (1/2) G_s^+].   (13)

The current vector through the dual edges of the surface mesh is then obtained by applying the discrete gradient on the augmented dual grid G̃_s,aug:

ĩ = P G̃_s,aug ψ̃,   (14)

where the matrix P projects the currents defined on the 2-D surface mesh into the corresponding currents defined on the 3-D mesh. From a geometrical point of view, P is the matrix that maps the edges of the 2-D mesh into the edges of the 3-D mesh.
2) a − b̃ Relation: The mapping from the magnetic vector potential circulations a to the dual-face magnetic fluxes b̃ is obtained in a similar fashion. Initially, the flux densities through the faces of the 3-D mesh that also belong to the 2-D mesh are selected using the selection matrix S. Then, the flux through each triangular face is divided into three contributions associated with the dual faces, as shown in Fig. 3(b), again using the face-to-node incidence matrix:

b̃ = (1/3) T^T S C a.   (15)
D. Final Equations
Combining the equations of the 3-D and 2-D domains, (8) and (12), with the couplings (14) and (15), the final system becomes

C^T M_ν C a − P G̃_s,aug Λ ψ = 0,
G_s^T (M_ρ0 G_s ψ + R) = −(1/3) T^T S C (da/dt),   (16)

where Λ is the interpolation operator of (13). The source term is set by suitable Dirichlet boundary conditions on a (external field) or on ψ (transport current). The nonlinear transient problem is solved using the θ-method (here θ = 0.5) with a fixed time step. At each time step, a fixed-point nonlinear problem is solved. This coupled scheme exploits the initial factorization of the solution matrix, which is then reused both for the nonlinear iterations and for the time-stepping scheme.
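The structure of this solution loop is sketched below: θ-method time stepping with an inner fixed-point iteration that reuses a single factorization of the system matrix. A 2x2 toy system stands in for the a-ψ matrices of (16), and the toy residual() mimics the fixed-point term R of (10), updated with the under-relaxation discussed next.

```python
import numpy as np

theta, dt, alpha = 0.5, 1e-4, 1e-4
A = np.array([[2.0, -1.0], [-1.0, 2.0]])     # toy "stiffness" matrix
M = np.eye(2)                                # toy "mass" matrix
K = M / dt + theta * A                       # assembled and factorized once
x = np.zeros(2)
R = np.zeros(2)                              # fixed-point residual term

def residual(x):
    return 0.1 * x**3                        # toy nonlinearity

for step in range(200):                      # time stepping
    rhs = (M / dt - (1 - theta) * A) @ x + np.array([1.0, 0.0])
    x_it = x
    for it in range(100):                    # fixed-point iterations
        x_new = np.linalg.solve(K, rhs - R)  # same matrix every iteration
        R = (1 - alpha) * R + alpha * residual(x_new)   # damped residual update
        if np.linalg.norm(x_new - x_it) < 1e-12:
            break
        x_it = x_new
    x = x_it
print("state after 200 steps:", np.round(x, 4))
```

In a real implementation the dense solve would be replaced by the reuse of a sparse factorization, which is precisely what makes the repeated inner iterations cheap.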
The choice of the initial conductivity σ_0 = 1/ρ_0 is crucial for the convergence of the nonlinear iterations. A preliminary study, not reported here for brevity, showed that using J = 1.2 J_c in (4) provides good convergence in all the benchmarks. Another key strategy for convergence is the under-relaxation of the update of the residual term R_k. Due to the large variation of the conductivity of YBCO, the residual term is subject to large variations, especially when the solution is far from convergence at the initial iterations. For this reason, a damping factor α = 10⁻⁴ is used to update the residual:

R_{k+1} = (1 − α) R_k + α R̂_{k+1},

where R̂_{k+1} is the residual evaluated from the latest solution.

IV. RESULTS
A. Verification With a Nonlinear Static Benchmark
The first benchmark is used to test the behavior of the proposed formulation with the highly nonlinear conductivity (3). It consists of a circular ring made of YBCO tape with inner and outer radii equal to R_i = 50 mm and R_o = 54 mm, respectively. A transport current I_0 ranging from 0.1 to 1.1 times the critical current I_c is imposed. The mesh consists of ∼207k tetrahedra (surrounding air) with an average quality index of 0.7430, and ∼24k triangles (superconducting tape) with an average quality index of 0.9963. The average number of nonlinear iterations needed to reach a relative error on the solution vector below 10⁻⁶ is 129, for a total simulation time of 28.1 s (factorization: 5.6 s, nonlinear iterations: 22.5 s). Timings refer to a pure MATLAB implementation of the algorithm, running on a workstation equipped with two Intel Xeon Gold 6154 18-core CPUs at 3.0 GHz and 512 GB of RAM.
The reference solution of this benchmark can be obtained by subdividing the ring into N_t flux tubes (see Fig. 4), where the nonlinear conductance of each tube k, of radius r_k and cross section A_k, follows from the power law:

g_k(i_k) = i_k / [2π r_k E_0 (i_k/(A_k J_c))^n].   (18)

The current i_k in each flux tube is calculated by solving the system of nonlinear equations enforcing the same voltage drop across all parallel tubes and the total transport current:

i_1/g_1(i_1) = i_2/g_2(i_2) = ... = i_Nt/g_Nt(i_Nt),  Σ_k i_k = I_0.   (19)

Fig. 5 shows the comparison of the power losses calculated with the a − ψ formulation against those calculated with (19) using N_t = 20. A perfect agreement can be observed, showing the good convergence of the nonlinear scheme also when the transport current is of the order of the critical one.
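The flux-tube reference solution of (18)-(19) can be sketched as below: all tubes see the same voltage drop V and their currents sum to I_0, so the problem reduces to a scalar equation in V. The YBCO layer thickness is an assumed placeholder; E_0, J_c and n are as in (3).

```python
import numpy as np
from scipy.optimize import brentq

E0, Jc, n = 0.1e-3, 2.85e10, 30.5
Ri, Ro, Nt = 50e-3, 54e-3, 20
d_sc = 1e-6                                   # assumed layer thickness [m]
r_edges = np.linspace(Ri, Ro, Nt + 1)
r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])    # tube radii
A = d_sc * np.diff(r_edges)                   # tube cross sections
I0 = 0.8 * Jc * d_sc * (Ro - Ri)              # transport current, 0.8 * Ic

def total_current(V):
    # invert the power law tube by tube: J_k = Jc*(E_k/E0)^(1/n), E_k = V/(2*pi*r_k)
    E = V / (2 * np.pi * r_mid)
    return float(np.sum(A * Jc * (E / E0) ** (1.0 / n)))

V = brentq(lambda V: total_current(V) - I0, 1e-15, 1e-2)   # solves Eq. (19)
print(f"voltage per turn V = {V:.3e} V, loss P = V*I0 = {V*I0:.3e} W")
```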
B. Validation With a Nonlinear Transient Benchmark
The second benchmark is a single-layer, three-tape CORC cable. The three tapes are wound around a former (not included in the simulation) of 4.76 mm diameter with a pitch of 40 mm [12]. The mesh, shown in Fig. 6, consists of ∼67k tetrahedra with an average quality index of 0.7631, and ∼7k triangles with an average quality index of 0.9976. The external excitation is a 50 Hz uniform magnetic flux density, orthogonal to the cable axis, with intensity ranging from 10 to 90 mT. The transient simulation is run for three periods, with a fixed step size of 0.1 ms. The numerical results are compared, in terms of AC losses per unit length, with the experimental measurements reported in [12] and with the simulation results obtained in [13] using an H-formulation. Losses are computed over the last of the three simulated periods (ending at 60 ms). The average simulation time for each solution is 1.13 h. The results of Fig. 7 show a good agreement for all values of the applied magnetic field up to 60 mT. Beyond this value, heating effects that reduce the value of J_c are reported in [12], with a consequent reduction in the measured losses.
V. CONCLUSION
In this article, the a − ψ formulation in terms of algebraic quantities has been presented. The coupling between the 3-D and 2-D equations makes it possible to preserve the mesh quality also in cases where the aspect ratio is critical. The formulation is verified against a semi-analytical problem and validated against measurements available in the literature. The results show that the method is effective and efficient in solving problems with superconducting tapes, also in the case of strong nonlinearities.
Fig. 4. Geometry of the static benchmark with the current flux tubes' subdivision (not to scale).
Fig. 5. Power losses for the circular ring static benchmark.
TABLE I: THICKNESS AND CONDUCTIVITY OF THE DIFFERENT LAYERS OF THE YBCO TAPE UNDER STUDY | 3,077.2 | 2024-03-01T00:00:00.000 | [
"Physics",
"Engineering"
] |