id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
42,841,555 | https://en.wikipedia.org/wiki/De%20materia%20medica | (Latin name for the Greek work , , both meaning "On Medical Material") is a pharmacopoeia of medicinal plants and the medicines that can be obtained from them. The five-volume work was written between 50 and 70 CE by Pedanius Dioscorides, a Greek physician in the Roman army. It was widely read for more than 1,500 years until supplanted by revised herbals in the Renaissance, making it one of the longest-lasting of all natural history and pharmacology books.
The work describes many drugs known to be effective, including aconite, aloes, colocynth, colchicum, henbane, opium and squill. In total, about 600 plants are covered, along with some animals and mineral substances, and around 1000 medicines made from them.
De materia medica was circulated as illustrated manuscripts, copied by hand, in Greek, Latin, and Arabic throughout the medieval period. From the 16th century onwards, Dioscorides' text was translated into Italian, German, Spanish, French, and into English in 1655. It served as the foundation for herbals in these languages by figures such as Leonhart Fuchs, Valerius Cordus, Lobelius, Rembert Dodoens, Carolus Clusius, John Gerard, and William Turner. Over time, these herbals incorporated increasing numbers of direct observations, gradually supplementing and eventually supplanting the classical text.
Several manuscripts and early printed versions of De materia medica survive, including the illustrated Vienna Dioscurides manuscript written in the original Greek in 6th-century Constantinople; it was used there by the Byzantines as a hospital text for just over a thousand years. Sir Arthur Hill saw a monk on Mount Athos still using a copy of Dioscorides to identify plants in 1934.
Book
Between 50 and 70 AD, a Greek physician in the Roman army, Dioscorides, wrote a five-volume book in his native Greek, Περὶ ὕλης ἰατρικῆς (Peri hylēs iatrikēs, "On Medical Material"), known more widely in Western Europe by its Latin title De materia medica. He had studied pharmacology at Tarsus in Roman Anatolia (now Turkey). The book became the principal reference work on pharmacology across Europe and the Middle East for over 1,500 years, and was thus the precursor of all modern pharmacopoeias.
In contrast to many classical authors, De materia medica was not "rediscovered" in the Renaissance, because it never left circulation; indeed, Dioscorides' text eclipsed the Hippocratic Corpus. In the medieval period it was circulated in Latin, Greek, and Arabic. In the Renaissance from 1478 onwards, it was printed in Italian, German, Spanish, and French as well. In 1655, John Goodyer made an English translation from a printed version, probably not corrected from the Greek.
While being reproduced in manuscript form through the centuries, the text was often supplemented with commentary and minor additions from Arabic and Indian sources. Several illustrated manuscripts of De materia medica survive. The most famous is the lavishly illustrated Vienna Dioscurides (the Juliana Anicia Codex), written in the original Greek in Byzantine Constantinople in 512/513 AD; its illustrations are sufficiently accurate to permit identification, something not possible with later medieval drawings of plants; some of them may be copied from a lost volume owned by Juliana Anicia's great-grandfather, Theodosius II, in the early 5th century. The Naples Dioscurides and Morgan Dioscurides are somewhat later Byzantine manuscripts in Greek, while other Greek manuscripts survive today in the monasteries of Mount Athos. Densely illustrated Arabic copies survive from the 12th and 13th centuries. The result is a complex set of relationships between manuscripts, involving translation, copying errors, additions of text and illustrations, deletions, reworkings, and a combination of copying from one manuscript and correction from another.
De materia medica is the prime historical source of information about the medicines used by the Greeks, Romans, and other cultures of antiquity. The work also records the Dacian names for some plants, which otherwise would have been lost. The work presents about 600 medicinal plants in all, along with some animals and mineral substances, and around 1,000 medicines made from these sources. Botanists have not always found Dioscorides' plants easy to identify from his short descriptions, partly because he had naturally described plants and animals from southeastern Europe, whereas by the 16th century his book was in use all over Europe and across the Islamic world. This meant that people attempted to force a match between the plants they knew and those described by Dioscorides, sometimes with catastrophic results.
Approach
Each entry gives a substantial amount of detail on the plant or substance in question, concentrating on medicinal uses but also mentioning other uses (such as culinary) and giving such help with recognition as is considered necessary. For example, on the "Mekon Agrios and Mekon Emeros", the opium poppy and related species, Dioscorides states that the seed of one is made into bread: it has "a somewhat long little head and white seed", while another "has a head bending down" and a third is "more wild, more medicinal and longer than these, with a head somewhat long—and they are all cooling." After this brief description, he moves at once into pharmacology, saying that they cause sleep; other uses are to treat inflammation and erysipelas, and, if boiled with honey, to make a cough mixture. The account thus combines recognition, pharmacological effect, and guidance on drug preparation. Its effects are summarized, accompanied by a caution.
Dioscorides then describes how to tell a good from a counterfeit preparation. He mentions the recommendations of other physicians, Diagoras (according to Erasistratus), Andreas, and Mnesidemus, only to dismiss them as false and not borne out by experience. He ends with a description of how the liquid is gathered from poppy plants, and lists names used for it: chamaesyce, mecon rhoeas, oxytonon; papaver to the Romans, and wanti to the Egyptians.
As late as in the Tudor and Stuart periods in Britain, herbals often still classified plants in the same way as Dioscorides and other classical authors, not by their structure or apparent relatedness but by how they smelt and tasted, whether they were edible, and what medicinal uses they had. Only when European botanists like Matthias de l'Obel, Andrea Cesalpino and Augustus Quirinus Rivinus (Bachmann) had done their best to match plants they knew to those listed in Dioscorides did they go further and create new classification systems based on similarity of parts, whether leaves, fruits, or flowers.
Contents
The book is divided into five volumes. Dioscorides organized the substances by certain similarities, such as their being aromatic, or vines; these divisions do not correspond to any modern classification. In David Sutton's view the grouping is by the type of effect on the human body.
Volume I: Aromatics
Volume I covers aromatic oils, the plants that provide them, and ointments made from them. They include what are probably cardamom, nard, valerian, cassia or senna, cinnamon, balm of Gilead, hops, mastic, turpentine, pine resin, bitumen, heather, quince, apple, peach, apricot, lemon, pear, medlar, plum and many others.
Volume II: Animals to herbs
Volume II covers an assortment of topics: animals including sea creatures such as sea urchin, seahorse, whelk, mussel, crab, scorpion, electric ray, viper, cuttlefish and many others; dairy produce; cereals; vegetables such as sea kale, beetroot, asparagus; and sharp herbs such as garlic, leek, onion, caper and mustard.
Volume III: Roots, seeds and herbs
Volume III covers roots, seeds and herbs. These include plants that may be rhubarb, gentian, liquorice, caraway, cumin, parsley, lovage, fennel and many others.
Volume IV: Roots and herbs, continued
Volume IV describes further roots and herbs not covered in Volume III. These include herbs that may be betony, Solomon's seal, clematis, horsetail, daffodil and many others.
Volume V: Vines, wines and minerals
Volume V covers the grapevine, wine made from it, grapes and raisins; but also strong medicinal potions made by boiling many other plants including mandrake, hellebore, and various metal compounds, such as what may be zinc oxide, verdigris and iron oxide.
Influence and effectiveness
In Europe
Writing in The Great Naturalists, the historian of science David Sutton describes De materia medica as "one of the most enduring works of natural history ever written", adding that "it formed the basis for Western knowledge of medicines for the next 1,500 years."
The historian of science Marie Boas writes that herbalists depended entirely on Dioscorides and Theophrastus until the 16th century, when they finally realized they could work on their own. She notes also that herbals by different authors, such as Leonhart Fuchs, Valerius Cordus, Lobelius, Rembert Dodoens, Carolus Clusius, John Gerard and William Turner, were dominated by Dioscorides, his influence only gradually weakening as the 16th-century herbalists "learned to add and substitute their own observations".
Early science and medicine historian Paula Findlen, writing in the Cambridge History of Science: Early Modern Science, calls De materia medica "one of the most successful and enduring herbals of antiquity, [which] emphasized the importance of understanding the natural world in light of its medicinal efficiency", in contrast to Pliny's Natural History (which emphasized the wonders of nature) or the natural history studies of Aristotle and Theophrastus (which emphasized the causes of natural phenomena). Medicine historian Vivian Nutton, in Ancient Medicine, writes that Dioscorides's "five books in Greek On Materia medica attained canonical status in Late Antiquity." Science historian Brian Ogilvie calls Dioscorides "the greatest ancient herbalist" and his book "the summa of ancient descriptive botany", observing that its success was such that few other books in his domain have survived from classical times. Further, his approach matched the Renaissance liking for detailed description, unlike the philosophical search for essential nature (as in the works of Theophrastus). A critical moment was the decision by Niccolò Leoniceno and others to use Dioscorides "as the model of the careful naturalist—and his book as the model for natural history."
The Dioscorides translator and editor Tess Anne Osbaldeston notes that "For almost two millennia Dioscorides was regarded as the ultimate authority on plants and medicine", and that he "achieved overwhelming commendation and approval because his writings addressed the many ills of mankind most usefully." To illustrate this, she states that "Dioscorides describes many valuable drugs including aconite, aloes, bitter apple, colchicum, henbane, and squill". The work mentions the painkillers willow (leading ultimately to aspirin, she writes), autumn crocus and opium, which however is also narcotic. Many other substances that Dioscorides describes remain in modern pharmacopoeias as "minor drugs, diluents, flavouring agents, and emollients ... [such as] ammoniacum, anise, cardamoms, catechu, cinnamon, colocynth, coriander, crocus, dill, fennel, galbanum, gentian, hemlock, hyoscyamus, lavender, linseed, mastic, male fern, marjoram, marshmallow, mezereon, mustard, myrrh, orris (iris), oak galls, olive oil, pennyroyal, pepper, peppermint, poppy, psyllium, rhubarb, rosemary, rue, saffron, sesame, squirting cucumber (elaterium), starch, stavesacre (delphinium), storax, stramonium, sugar, terebinth, thyme, white hellebore, white horehound, and couch grass—the last still used as a demulcent diuretic." She notes that medicines such as wormwood, juniper, ginger, and calamine also remain in use, while "Chinese and Indian physicians continue to use liquorice". She observes that the many drugs listed to reduce the spleen may be explained by the frequency of malaria in his time. Dioscorides lists drugs for women to cause abortion and to treat urinary tract infection; palliatives for toothache, such as colocynth, and others for intestinal pains; and treatments for skin and eye diseases. 
As well as these useful substances, she observes that "A few superstitious practices are recorded in De materia medica," such as using Echium as an amulet to ward off snakes, or Polemonia (Jacob's ladder) for scorpion stings.
In the view of the historian Paula De Vos, De materia medica formed the core of the European pharmacopoeia until the end of the 19th century; she suggests that "the timelessness of Dioscorides' work resulted from an empirical tradition based on trial and error; that it worked for generation after generation despite social and cultural changes and changes in medical theory".
At Mount Athos in northern Greece, Dioscorides's text was still in use in its original Greek into the 20th century, as observed in 1934 by Sir Arthur Hill, Director of the Royal Botanic Gardens, Kew.
Arabic medicine
Along with his fellow physicians of Ancient Rome, Aulus Cornelius Celsus, Galen, Hippocrates and Soranus of Ephesus, Dioscorides had a major and long-lasting effect on Arabic medicine as well as medical practice across Europe. De materia medica was one of the first scientific works to be translated from Greek into Arabic (Arabic: Hayūlā ʿilāj al-ṭibb). It was translated first into Syriac and then into Arabic in 9th-century Baghdad. The translators were most often Syriac Christians, such as Hunayn ibn Ishaq, and their work is known to have been sponsored by local rulers, such as the Artuqids.
Manuscripts
Leiden Dioscurides (1083)
Manuscript Or. 289, dated 1083, is an illustrated Arabic translation of Dioscurides' De materia medica. The work was originally translated from Greek into Arabic via Syriac by Hunayn ibn Ishaq (810–873) with the collaboration of Stephanus b. Bāsīl between 847 and 861. This translation was slightly revised by Ḥusayn b. Ibrāhīm al-Nātilī in 990–991; the current copy is based on an exemplar in al-Nātilī's hand. The work was offered to the amīr of Samarqand, Abū ʿAlī al-Simǧūrī. It was acquired by Levinus Warner (1619–1665) and bequeathed to Leiden University Library on his death.
A digitized version is available via Leiden's Digital Collections.
1224 manuscript
One manuscript is dated to 1224, but its provenance is uncertain. It is generally cautiously attributed to "Iraq or Northern Jazira, possibly Baghdad". Its folios have been dispersed among multiple institutions and collectors.
Istanbul, Topkapı Palace, Ahmet II 2127 (1229)
This copy was created by Abd Al-Jabbar ibn Ali in 1229.
References
Cited sources
Further reading
Manuscripts and editions
Note: Editions may vary by both text and numbering of chapters
Arabic
Digitized version of Kitāb al-Ḥašāʾiš fī hāyūlā al-ʿilāǧ al-ṭibbī (Or. 289), the illustrated Arabic De Materia Medica of Dioscorides, from Digital Collections at Leiden University Libraries
English
The Greek Herbal of Dioscorides ... Englished by John Goodyer A. D. 1655, edited by R.T. Gunther (1933).
De materia medica, translated by Lily Y. Beck (2005). Hildesheim: Olms-Weidman.
(from the Latin, after John Goodyer, 1655)
French
Edition of Martin Mathee, Lyon (1559) in six books
German
Edition of J Berendes, Stuttgart 1902
Greek
Naples Dioscurides: Codex ex Vindobonensis Graecus 1 ca 500 AD, at Biblioteca Nazionale di Napoli site
English description, World Digital Library
Edition of Karl Gottlob Kühn, being Volume XXV of his Medicorum Graecorum Opera, Leipzig 1829, together with annotation and parallel text in Latin
Book I – Book II – Book III – Book IV – Book V – Indices
Edition of Max Wellmann, Berlin
Books I, II – Books III, IV – Book V
Greek and Latin
(Index in frontispiece)
Latin
Edition of Jean Ruel 1552
Index – Preface – Book I – Book II – Book III – Book IV – Book V
De Medica Materia : libri sex, Ioanne Ruellio Suesseionensi interprete, translated by Jean Ruel (1546).
De Materia medica : libri V Eiusdem de Venenis Libri duo. Interprete Iano Antonio Saraceno Lugdunaeo, Medico, translated by Janus Antonius Saracenus (1598).
Spanish
Edition of Andres de Laguna, 1570
Andres de Laguna, published at Antwerp, 1555, at Biblioteca Nacional de España site
Dioscórides Interactivo Ediciones Universidad Salamanca. Spanish and Greek.
Ancient Roman medicine
Medical manuals
Herbals
History of pharmacy
Natural history books
Pharmacology literature
Pharmacopoeias
1st-century books in Latin | De materia medica | [
"Chemistry"
] | 3,747 | [
"Pharmacology",
"Pharmacology literature"
] |
71,507,939 | https://en.wikipedia.org/wiki/Hafnium%20carbonitride | Hafnium carbonitride (HfCN) is an ultra-high temperature ceramic (UHTC) mixed anion compound composed of hafnium (Hf), carbon (C) and nitrogen (N).
Ab initio molecular dynamics calculations have predicted HfCN (specifically the HfC0.75N0.22 phase) to have a melting point of 4,110 ± 62 °C, the highest known for any material. Another approach, based on artificial neural network machine learning, pointed towards a similar composition, HfC0.76N0.24. Experimental testing conducted in 2020 confirmed the exceptionally high melting point, substantiating the earlier predictions made with atomistic simulations in 2015.
Properties
HfCxN1−x ceramics have been assessed to possess the following properties:
Thermal conductivity:
19–24 W·m⁻¹·K⁻¹ at room temperature,
32–39 W·m⁻¹·K⁻¹ at high temperature and with increased nitrogen content.
Electrical conductivity: 149×10⁴–213×10⁴ Ω⁻¹·m⁻¹
Plasticity limit:
Fusion enthalpy:
Flexural strength:
638 ± 28 MPa at room temperature,
324 MPa at ,
139 MPa at ,
100 MPa at .
Fracture toughness: 6.73 ± 0.07 MPa·m1/2, 4.7 ± 0.3 MPa·m1/2
Vickers hardness: 21.3 ± 0.55 GPa
References
Hafnium compounds
Carbides
Nitrides
Mixed anion compounds
Refractory materials | Hafnium carbonitride | [
"Physics",
"Chemistry"
] | 327 | [
"Matter",
"Mixed anion compounds",
"Refractory materials",
"Materials",
"Ions"
] |
71,515,739 | https://en.wikipedia.org/wiki/Friedrich%20Wilhelm%20Heinrich%20von%20Trebra | Friedrich Wilhelm Heinrich von Trebra (5 April 1740 – 16 July 1819) was a mining officer in Saxony. He took an interest in geology and was a friend of Johann Wolfgang von Goethe who worked in Ilmenau. He was involved in the recovery of Saxon mining following the Seven Years' War.
Trebra came from a noble family of Thuringia and was born in Allstedt, the son of Christoph Heinrich von Trebra (1694–1745) and Amalia Carolina Liberta née von Werder. He was educated at Roßleben and studied law at the University of Jena before joining the newly created mining school in Freiberg after meeting Friedrich Anton von Heynitz in 1766. He then became a superintendent (Bergmeister) of the mines in Marienberg in 1767 and rose to mining captain in 1773. He was in charge of mines in Saxony, for which he attracted Dutch investors and introduced a number of innovations, including improved conditions for the miners. He became a friend of Goethe, who was in charge of the mines at Ilmenau. He moved to Clausthal in 1779 to replace Claus Friedrich von Reden. Trebra founded a society for mining science, the Societät der Bergbaukunde, in 1786. He attempted to introduce new subjects at the Bergakademie but came into conflict with Abraham Gottlob Werner. He resigned in 1795 and moved to his estate in Bretleben, where he worked on agriculture. He took a position as inspector of mines for Saxony in 1801 and remained there until his death.
References
External links
Erfahrungen vom Innern der Gebirge (1785)
Mineraliencabinett (1795)
Dutch investors for Saxon mines
1740 births
1819 deaths
Mining engineers
University of Jena alumni
18th-century German engineers | Friedrich Wilhelm Heinrich von Trebra | [
"Engineering"
] | 354 | [
"Mining engineering",
"Mining engineers"
] |
49,134,790 | https://en.wikipedia.org/wiki/Frenkel%E2%80%93Kontorova%20model | The Frenkel–Kontorova (FK) model is a fundamental model of low-dimensional nonlinear physics.
The generalized FK model describes a chain of classical particles with nearest neighbor interactions and subjected to a periodic on-site substrate potential. In its original and simplest form the interactions are taken to be harmonic and the potential to be sinusoidal with a periodicity commensurate with the equilibrium distance of the particles. Different choices for the interaction and substrate potentials and inclusion of a driving force may describe a wide range of different physical situations.
Originally introduced by Yakov Frenkel and Tatiana Kontorova in 1938 to describe the structure and dynamics of a crystal lattice near a dislocation core, the FK model has become one of the standard models in condensed matter physics due to its applicability to describe many physical phenomena. Physical phenomena that can be modeled by the FK model include dislocations, the dynamics of adsorbate layers on surfaces, crowdions, domain walls in magnetically ordered structures, long Josephson junctions, hydrogen-bonded chains, and DNA-type chains. A modification of the FK model, the Tomlinson model, plays an important role in the field of tribology.
The equations for stationary configurations of the FK model reduce to those of the standard map or Chirikov–Taylor map of stochastic theory.
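This reduction can be sketched numerically. The snippet below is a minimal illustration, assuming the dimensionless stationary condition g(x_{n+1} + x_{n−1} − 2x_n) = sin(x_n), which maps onto the standard map with parameter K = 1/g; the function name and starting values are illustrative:

```python
import math

def standard_map(x, p, K):
    """One step of the Chirikov standard map.  Stationary FK
    configurations obey the same recurrence with p_n = x_n - x_{n-1}
    and K = 1/g (g = dimensionless elastic coupling)."""
    p_new = p + K * math.sin(x)
    x_new = x + p_new
    return x_new, p_new

# Generate a stationary configuration for strong coupling (small K):
K = 0.5
x, p = 0.1, 2 * math.pi          # start near the commensurate ground state
config = [x]
for _ in range(10):
    x, p = standard_map(x, p, K)
    config.append(x)

# Each interior atom satisfies the stationarity condition
# (x_{n+1} + x_{n-1} - 2 x_n) / K = sin(x_n) by construction.
```

Iterating the map generates the atom positions of a stationary configuration one by one; chaotic orbits of the map correspond to spatially disordered configurations of the chain.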
In the continuum-limit approximation the FK model reduces to the exactly integrable sine-Gordon (SG) equation, which allows for soliton solutions. For this reason the FK model is also known as the "discrete sine-Gordon" or "periodic Klein–Gordon equation".
History
A simple model of a harmonic chain in a periodic substrate potential was proposed by Ulrich Dehlinger in 1928. Dehlinger derived an approximate analytical expression for the stable solutions of this model, which he termed Verhakungen, corresponding to what are today called kink pairs. An essentially similar model was developed by Ludwig Prandtl in 1912/13 but did not see publication until 1928.
The model was independently proposed by Yakov Frenkel and Tatiana Kontorova in their 1938 article On the theory of plastic deformation and twinning to describe the dynamics of a crystal lattice near a dislocation and to describe crystal twinning. In the standard linear harmonic chain any displacement of the atoms will result in waves, and the only stable configuration will be the trivial one.
For the nonlinear chain of Frenkel and Kontorova, there exist stable configurations beside the trivial one. For small atomic displacements the situation resembles the linear chain; however, for large enough displacements, it is possible to create a moving single dislocation, for which an analytical solution was derived by Frenkel and Kontorova. The shape of these dislocations is defined only by the parameters of the system such as the mass and the elastic constant of the springs.
Dislocations, also called solitons, are distributed non-local defects and mathematically are a type of topological defect. The defining characteristic of solitons/dislocations is that they behave much like stable particles: they can move while maintaining their overall shape. Two solitons of equal and opposite orientation may cancel upon collision, but a single soliton cannot annihilate spontaneously.
Generalized model
The generalized FK model treats a one-dimensional chain of atoms with nearest-neighbor interaction in a periodic on-site potential; the Hamiltonian for this system is H = K + U,
where the first term K = Σ_n (m/2)(dx_n/dt)² is the kinetic energy of the atoms of mass m, and the potential energy U is a sum of the potential energy due to the nearest-neighbor interaction and that of the substrate potential: U = Σ_n [V_int(x_{n+1} − x_n) + V_sub(x_n)].
The substrate potential is periodic, i.e. V_sub(x + a_s) = V_sub(x) for some period a_s.
For non-harmonic interactions and/or non-sinusoidal potential, the FK model will give rise to a commensurate–incommensurate phase transition.
The FK model can be applied to any system that can be treated as two coupled sub-systems where one subsystem can be approximated as a linear chain and the second subsystem as a motionless substrate potential.
An example would be the adsorption of a layer onto a crystal surface, here the adsorption layer can be approximated as the chain, and the crystal surface as an on-site potential.
Classical model
In this section we examine in detail the simplest form of the FK model. A detailed version of this derivation can be found in the literature. The model describes a one-dimensional chain of atoms with a harmonic nearest neighbor interaction and subject to a sinusoidal potential. Transverse motion of the atoms is ignored, i.e. the atoms can only move along the chain.
The Hamiltonian for this situation is given by H = K + U as above, where we specify the interaction potential to be

V_int(x) = (g/2)(x − a_0)²,

where g is the elastic constant and a_0 is the inter-atomic equilibrium distance. The substrate potential is

V_sub(x) = (ε_s/2)[1 − cos(2πx/a_s)],

with ε_s being the amplitude and a_s the period.
The following dimensionless variables are introduced in order to rewrite the Hamiltonian: coordinates are rescaled so that the substrate period becomes 2π, energies are measured in units of half the substrate amplitude, and time is rescaled so that the atomic mass becomes unity.
In dimensionless form the Hamiltonian is

H = Σ_n [ (1/2)(dx_n/dt)² + (g/2)(x_{n+1} − x_n − a_0)² + (1 − cos x_n) ],

which describes a harmonic chain of atoms of unit mass in a sinusoidal potential of period 2π with amplitude 2. The equation of motion for this Hamiltonian is

d²x_n/dt² = g(x_{n+1} + x_{n−1} − 2x_n) − sin x_n.
We consider only the case where a_0 and the substrate period 2π are commensurate; for simplicity we take a_0 = 2π. Thus in the ground state of the chain each minimum of the substrate potential is occupied by one atom.
We introduce the variable u_n for atomic displacements, which is defined by

u_n = x_n − 2πn.

For small displacements the equation of motion may be linearized and takes the following form:

d²u_n/dt² = g(u_{n+1} + u_{n−1} − 2u_n) − u_n.
This equation of motion describes phonons u_n ∝ exp[i(κn − ωt)] with the phonon dispersion relation ω²(κ) = 1 + 4g sin²(κ/2), where κ is the dimensionless wavenumber. This shows that the frequency spectrum of the chain has a band gap, with a minimum (cut-off) frequency ω_min = 1 below which no phonons propagate, and a maximum frequency ω_max = √(1 + 4g).
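As a quick consistency check, the sketch below verifies that a plane wave solves the linearized equation of motion exactly when its frequency lies on the dispersion curve. It assumes the standard dimensionless linearized form u''_n = g(u_{n+1} + u_{n−1} − 2u_n) − u_n; the function names are illustrative:

```python
import math

def dispersion(kappa, g):
    """Phonon dispersion of the linearized FK chain:
    omega(kappa) = sqrt(1 + 4 g sin^2(kappa/2))."""
    return math.sqrt(1 + 4 * g * math.sin(kappa / 2) ** 2)

def eom_residual(kappa, g, n=3, t=0.7):
    """Residual of u''_n = g(u_{n+1} + u_{n-1} - 2 u_n) - u_n for the
    plane wave u_m(t) = cos(kappa*m - omega*t); zero when omega lies
    on the dispersion curve."""
    w = dispersion(kappa, g)
    u = lambda m: math.cos(kappa * m - w * t)
    lhs = -w ** 2 * u(n)          # analytic second time derivative of u_n
    rhs = g * (u(n + 1) + u(n - 1) - 2 * u(n)) - u(n)
    return lhs - rhs

g = 1.0
# Band edges: omega_min = dispersion(0, g) = 1 (the gap), and the
# cut-off omega_max = dispersion(pi, g) = sqrt(1 + 4g).
```

The residual vanishes identically for any wavenumber, while frequencies below ω_min = 1 admit no propagating solution, which is the band gap referred to in the text.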
The linearised equation of motion is not valid when the atomic displacements are not small, and one must use the nonlinear equation of motion.
The nonlinear equations can support new types of localized excitations, which are best illuminated by considering the continuum limit of the FK model. Applying the standard procedure of Rosenau to derive continuum-limit equations from a discrete lattice results in a perturbed sine-Gordon equation, in which a correction function describes in first order the effects due to the discreteness of the chain.
Neglecting the discreteness effects reduces the equation of motion to the sine-Gordon (SG) equation in its standard form

∂²u/∂t² − ∂²u/∂x² + sin u = 0.
The SG equation gives rise to three elementary excitations/solutions: kinks, breathers and phonons.
Kinks, or topological solitons, can be understood as the solution connecting two nearest identical minima of the periodic substrate potential; thus they are a result of the degeneracy of the ground state. These solutions are

u(x, t) = 4 arctan{ exp[ −σ(x − vt)/d(v) ] },

where σ = ±1 is the topological charge. For σ = +1 the solution is called a kink, and for σ = −1 it is an antikink. The kink width d(v) = √(1 − v²) is determined by the kink velocity v, where v is measured in units of the sound velocity. For kink motion with v ≪ 1, the width approximates 1.
The energy of the kink in dimensionless units is

E_k = 8/√(1 − v²),

from which the rest mass of the kink follows as m = 8, and the kink's rest energy as E_0 = 8.
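The rest energy can be checked by direct quadrature. This sketch assumes the standard SG normalization, in which the static kink is u(x) = 4 arctan(exp x) and the energy density is (1/2)(u′)² + (1 − cos u); the function names are illustrative:

```python
import math

def kink(x):
    """Static sine-Gordon kink, u(x) = 4*arctan(exp(x))."""
    return 4 * math.atan(math.exp(x))

def kink_energy(L=20.0, h=1e-3):
    """Numerically integrate the SG energy density
    (1/2) u'(x)^2 + (1 - cos u(x)) over [-L, L]."""
    E, x = 0.0, -L
    while x < L:
        up = (kink(x + h) - kink(x - h)) / (2 * h)   # central difference
        E += (0.5 * up ** 2 + 1 - math.cos(kink(x))) * h
        x += h
    return E

print(kink_energy())   # close to the analytic rest energy of 8
```

Analytically the elastic and substrate contributions each integrate to 4 (the density is 4 sech² x), so the quadrature recovers a total of 8.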
Two neighboring static kinks a distance R apart repel with an energy that decays exponentially with R, whereas a kink and an antikink attract with an interaction of the same exponential form but opposite sign.
A breather is

u(x, t) = 4 arctan[ (√(1 − ω_b²)/ω_b) · sin(ω_b t)/cosh(√(1 − ω_b²) x) ],

which describes nonlinear oscillation with internal frequency ω_b, with 0 < ω_b < 1. The breather rest energy is

E_b = 16√(1 − ω_b²).
For low frequencies the breather can be seen as a coupled kink–antikink pair. Kinks and breathers can move along the chain without any dissipative energy loss. Furthermore, any collision between excitations of the SG equation results only in a phase shift. Thus kinks and breathers may be considered nonlinear quasi-particles of the SG model. For nearly integrable modifications of the SG equation, such as the continuum approximation of the FK model, kinks can be considered deformable quasi-particles, provided that discreteness effects are small.
The Peierls–Nabarro potential
In the preceding section the excitations of the FK model were derived by considering the model in a continuum-limit approximation. Since the properties of kinks are only modified slightly by the discreteness of the primary model, the SG equation can adequately describe most features and dynamics of the system.
The discrete lattice does, however, influence the kink motion in a unique way through the existence of the Peierls–Nabarro (PN) potential E_PN(X), where X is the position of the kink's center. The existence of the PN potential is due to the lack of translational invariance in a discrete chain. In the continuum limit the system is invariant under any translation of the kink along the chain. For a discrete chain, only those translations that are an integer multiple of the lattice spacing leave the system invariant. The PN barrier, ΔE_PN, is the smallest energy barrier that a kink has to overcome in order to move through the lattice. Its value is the difference between the kink's potential energy in a stable and an unstable stationary configuration. The stationary configurations are shown schematically in the figure.
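A rough numerical estimate of the PN barrier can be made by evaluating the discrete chain energy for a continuum-kink ansatz centered on a lattice site versus midway between two sites. This is only a sketch: the ansatz is not relaxed to a true stationary configuration, the dimensionless energy Σ [(g/2)(u_{n+1} − u_n)² + (1 − cos u_n)] with kink width d = √g is an assumed standard form, and the function name is illustrative:

```python
import math

def kink_ansatz_energy(X, g, N=200):
    """Dimensionless energy of a discrete FK chain holding the
    continuum-kink ansatz u_n = 4*atan(exp((n - X)/d)), d = sqrt(g),
    centered at (possibly non-integer) position X."""
    d = math.sqrt(g)
    # Clamp the exponent to avoid overflow far from the kink center.
    u = [4 * math.atan(math.exp(min((n - X) / d, 40.0)))
         for n in range(-N, N + 1)]
    spring = sum(0.5 * g * (u[i + 1] - u[i]) ** 2 for i in range(len(u) - 1))
    onsite = sum(1 - math.cos(ui) for ui in u)
    return spring + onsite

g = 0.25                                   # fairly discrete chain, d = 0.5
e_on_site = kink_ansatz_energy(0.0, g)     # kink centered on a lattice site
e_mid_bond = kink_ansatz_energy(0.5, g)    # kink centered between two sites
pn_barrier = abs(e_on_site - e_mid_bond)   # estimate of the PN barrier
```

In the continuum limit (large g, wide kink) the two energies coincide and the barrier vanishes; for a strongly discrete chain the difference is finite, reflecting the loss of translational invariance described above.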
References
Classical mechanics
Lattice models
Solitons | Frenkel–Kontorova model | [
"Physics",
"Materials_science"
] | 1,867 | [
"Classical mechanics",
"Lattice models",
"Computational physics",
"Mechanics",
"Condensed matter physics",
"Statistical mechanics"
] |
49,143,377 | https://en.wikipedia.org/wiki/Rhizopus%20oryzae | Rhizopus oryzae is a filamentous heterothallic microfungus that occurs as a saprotroph in soil, dung, and rotting vegetation. This species is very similar to Rhizopus stolonifer, but it can be distinguished by its smaller sporangia and air-dispersed sporangiospores. It differs from R. oligosporus and R. microsporus by its larger columellae and sporangiospores. The many strains of R. oryzae produce a wide range of enzymes such as carbohydrate digesting enzymes and polymers along with a number of organic acids, ethanol and esters giving it useful properties within the food industries, bio-diesel production, and pharmaceutical industries. It is also an opportunistic pathogen of humans causing mucormycosis.
History and taxonomy
Rhizopus oryzae was discovered by Frits Went and Hendrik Coenraad Prinsen Geerligs in 1895. The genus Rhizopus (family Mucoraceae) was erected in 1821 by the German mycologist Christian Gottfried Ehrenberg to accommodate Mucor stolonifer and Rhizopus nigricans as distinct from the genus Mucor. The genus Rhizopus is characterized by stolons, rhizoids, sporangiophores sprouting from the points where the rhizoids are attached, globose sporangia with columellae, and striated sporangiospores. In the mid-1960s, researchers divided the genus based on temperature tolerance. Numerical methods were later used in the early 1970s, where researchers arrived at similar conclusions. R. oryzae was relegated to a distinct section because it grew well at 37 °C but failed to grow at 45 °C. In the past, strains were identified by isolating active components of the species that were commonly found in food and alcoholic drinks in Indonesia, China, and Japan. There are approximately 30 synonyms, the most common being R. arrhizus. Scholer popularized the name R. oryzae because he thought R. arrhizus represented an extreme form of R. oryzae.
Growth and morphology
Rhizopus oryzae grows quickly at optimal temperatures, at 1.6 mm per hour (nearly 0.5 μm per second, enough to directly visualize hyphal elongation in real time under the microscope). R. oryzae can grow at temperatures from 7 °C to 44 °C, and the optimum growth temperature is 37 °C. There is very poor growth from 10 °C to 15 °C and negligible growth at 45 °C. There is substantial growth in media containing 1% NaCl, very poor growth at 3% NaCl, and none at 5% NaCl. R. oryzae favors slightly acidic media: good growth is observed at a pH of 6.8, while in the range 7.7–8.1 there is very poor growth. Most amino acids, with the exception of L-valine, promote R. oryzae growth, with L-tryptophan and L-tyrosine being the most effective. It also grows well on mineral nitrogen sources, except nitrate, and can utilize urea.
Rhizopus oryzae has variable sporangiophores. They can be straight or curved, swollen or branched, and their walls can be smooth or slightly rough. The colour of the sporangiophores ranges from pale brown to brown. Sporangiophores grow to 210-2500 μm in length and 5-18 μm in diameter. The sporangia of R. oryzae are globose or subglobose, with spinous walls, black when mature, and 60-180 μm in diameter. They can be distinguished from those of Rhizopus stolonifer by their smaller sporangia and spores. The optimal conditions for sporangium production are temperatures between 30 °C and 35 °C and low water levels. Sporulation is stimulated by amino acids (except L-valine) when grown in light, while in darkness only L-tryptophan and L-methionine stimulate growth. The columellae are globose, subglobose, or oval in shape; the wall is usually smooth and pale brown, and the diameter ranges from 30-110 μm. Sporangiospores are elliptical, globose, or polygonal; they are striated and grow 5-8 μm in length. Dormant and germinated sporangiospores show deep furrows and prominent ridges in a pattern that distinguishes them from those of R. stolonifer. The germination of sporangiospores can be induced by the combined action of L-proline and phosphate ions; L-ornithine, L-arginine, D-glucose and D-mannose are also effective. Optimal germination occurs on media containing D-glucose and mineral salts.
R. oryzae has abundant, root-shaped rhizoids. Zygospores are produced by diploid cells when sexual reproduction occurs under nutrient-poor conditions. They range in colour from red to brown, are spherical or laterally flattened, and range from 60-140 μm in size. At high nutrient levels, R. oryzae reproduces asexually, producing azygospores. The stolons of R. oryzae are smooth or slightly rough, almost colorless or pale brown, and 5-18 μm in diameter.
The chlamydospores are abundant and globose (10-24 μm in diameter), elliptical, or cylindrical. Colonies of R. oryzae are white initially, becoming brownish with age, and can grow to about 1 cm thick.
Habitat and ecology
Rhizopus oryzae can be found in various soils across the world. For example, it has been found in India, Pakistan, New Guinea, Taiwan, Central America, Peru, Argentina, Namibia, South Africa, Iraq, Somalia, Egypt, Libya, Tunisia, Israel, Turkey, Spain, Italy, Hungary, the Czech Republic, Slovakia, Germany, Ukraine, the British Isles, and the USA. The soils from which R. oryzae has been isolated are varied, ranging from grassland, cultivated soils under lupin, corn, wheat, groundnuts, other legumes, sugar cane, rice, and citrus plantations to steppe-type vegetation, alkaline soils, salt marshes, farm-manure soils, and sewage-filled soils. The pH of the soils where the species has been isolated typically ranges from 6.3 to 7.2.
Rhizopus oryzae is often identified as R. arrhizus when isolated from foods. It is found in rotting fruits and vegetables, where it is often called R. stolonifer. Unlike species such as R. stolonifer, R. oryzae is common in tropical conditions. In East Asia, it is common in peanuts; for instance, it was isolated from 21% of peanut kernels from Indonesia. It is present in maize, beans, sorghum, cowpeas, pecans, hazelnuts, pistachios, wheat, barley, potatoes, sapodillas, and various other tropical foods. Maize meal on which isolates of R. oryzae had been grown was found to be toxic to ducklings and rats, causing growth depression.
Pathogenicity
Rhizopus oryzae is one of the most common causes of mucormycosis, a disease characterized by hyphae growing within and around blood vessels. The causal agents of mucormycosis may also produce toxins such as agroclavine, which is toxic to humans, sheep, and cattle. The infection is rare and usually occurs in immunocompromised individuals. Common risk factors associated with primary cutaneous mucormycosis include ketoacidosis, neutropenia, acute lymphoblastic leukemia, lymphomas, systemic steroids, chemotherapy, and dialysis. Treatment includes amphotericin B, posaconazole, itraconazole, and fluconazole. The majority of infections are rhinocerebral. R. oryzae has also been reported in the literature to show antibiotic activity against some bacteria.
Its pathogenicity towards plants is attributed to its large number of carbohydrate-digesting enzymes.
Physiology and industrial uses
Rhizopus oryzae is involved in steroid transformations and produces 4-desmethyl steroids, which has been useful in the fermentation industry. The carbon source influences the ratio of polar to neutral lipids. The mycelium of R. oryzae contains lipids, and the highest lipid content occurs when it is grown on fructose. The highest unsaturated fatty acid content is observed at 30 °C and the lowest at 15 °C. Proteolytic activity is observed under conditions of pH 7 at 35 °C, and pyridoxine and thiamine favor proteinase production. R. oryzae can degrade aflatoxin A1 to isomeric hydroxy compounds and aflatoxin G1 to the fluorescent metabolite aflatoxin A1. Various factors influence the production of dextro-lactic acid and fumaric acid and the metabolism of R. oryzae. For example, at 40 °C growth favors glucose consumption, but this negatively affects d-lactic acid production. A glucose concentration of 15% is needed for optimal production of d-lactic acid. Fumaric acid production is suppressed in media containing more than 6 grams of NH4NO3 per liter, which favors d-lactic acid production.
Rhizopus oryzae is considered GRAS by the FDA and thus recognized as safe to use industrially, as it can consume a range of carbon sources. During fermentation, R. oryzae produces amylase, lipase, and protease activity, increasing its ability to use many compounds as energy and carbon sources. Historically, it has been used in fermentation, specifically to ferment soybeans and create tempeh in Malaysia and Indonesia. Using the same methods used to create traditional tempeh, R. oryzae can be inoculated into other cooked legumes such as peas, beans, and fava beans. As in tempeh making, there is an initial bacterial fermentation in the legumes when they are soaked for a while before being cooked. Fermentation incubation lasts for 48 hours at 33 °C. After incubation, mycelium can be observed between the legumes, creating a larger, uniform product. Overall, mold fermentation of fruits, grains, nuts, and legumes with R. oryzae produces sensory changes in foods, creating acidity, sweetness, and bitterness. R. oryzae can produce lactate from glucose at high levels, which is used as a food additive, and it can also degrade plastics. In enzyme-modified cheese products, R. oryzae provides microbial enzymes that break down milk fat and proteins to create powder and paste forms of cheese; specifically, it breaks down cheese curds and acid casein.
In addition to cellulases and hemicellulases, other enzymes such as protease, urease, ribonuclease, pectate lyase, and polygalacturonase are found in culture media of R. oryzae. Besides producing a number of enzymes, it can also produce a number of organic acids, alcohols, and esters. Cellulases from R. oryzae can be applied in biotechnology: in the food, brewery and wine, animal feed, textile and laundry, and pulp and paper industries, and in agriculture. R. oryzae can convert both glucose and xylose under aerobic conditions into pure L(+)-lactic acid, with by-products such as xylitol, glycerol, ethanol, carbon dioxide, and fungal biomass. Endo-xylanase, a key enzyme for xylan depolymerization, has been produced by R. oryzae fermentation from different xylan-containing agricultural by-products such as wheat straw, wheat stems, cotton bagasse, hazelnut shells, corn cobs, and oat sawdust. Pectinases are required for the extraction and clarification of fruit juices and wines; the extraction of oils, flavors, and pigments from plant material; the preparation of cellulose fibers for linen, jute, and hemp manufacture; and coffee and tea fermentations.
R. oryzae can break down the starch content of rice plants and therefore shows amylolytic activity. It has also been reported to produce extracellular isoamylase, which is used in the food industry. Isoamylase has been found to saccharify potato starch, arrowroot, tamarind kernel, tapioca, and oat. The saccharifying ability of the enzyme is highly applicable in the sugar-production industry. Proteases, which can be found in R. oryzae, are highly useful in commercial industries; for instance, they have increasing application in the food, pharmaceutical, detergent, leather, and tanning industries, and are also involved in silver recovery and peptide synthesis. One strain of R. oryzae was found to secrete an alkaline serine protease showing high pH stability in the range 3 to 6 but poor thermostability. Lipases extracted from R. oryzae have been consumed as digestive aids without adverse reactions. Lipases hydrolyze fats and oils with subsequent release of free fatty acids, diacylglycerols, monoacylglycerols, and glycerol. Lipases have found biotechnology applications because of their ability to catalyze synthetic reactions in non-aqueous solutions. One study reported the expression of a fungal 11-alpha-steroid hydroxylase from R. oryzae that can perform 11-alpha-hydroxylation of the steroid skeleton, which has simplified steroid drug production. R. oryzae can produce intracellular ribonuclease in a metal-ion-regulated liquid medium, with the addition of calcium and molybdenum stimulating ribonuclease production. R. oryzae strain ENHE, isolated from contaminated soil, was found to be capable of tolerating and removing pentachlorophenol.
R. oryzae is favored for producing L(+)-lactic acid because the fungal cells have better resistance to high concentrations of accumulated lactic acid and lower nutrient requirements than the commonly used bacterial processes. R. oryzae thus offers an efficient way to improve the lactic acid production process, facilitating multiple reuses of fungal cells for long-term lactic acid production. Ethanol is the main by-product of the fermentation process of R. oryzae during the production of L-lactic acid. R. oryzae can also be used as a biocatalyst for ester production in organic solvents. Dry mycelium of four R. oryzae strains proved effective for catalysing the synthesis of different flavor esters. For example, the pineapple-flavour ester butyl acetate was produced by the esterification reaction between acetic acid and butanol by R. oryzae. This flavor compound can be used in the food, cosmetic, and pharmaceutical industries. Within the biodiesel industry, biodiesel fuel in the form of fatty acid methyl esters is produced by the esterification of plant oil or animal fat with methanol; this is a renewable fuel resource compared to traditional petroleum-based fuels. Production of biodiesel fuel from plant oils using cells of R. oryzae immobilized within biomass support particles was investigated for the methanolysis of soybean oil. Olive oil or oleic acid was found to be effective for enhancing methanolysis activity, a promising result for the biodiesel industry.
R. oryzae has been investigated as a bioremediation agent and fluoride sequestrant.
References
Fungal fruit diseases
Carrot diseases
Mango tree diseases
Mucoraceae
Fungi described in 1895
Fungal pathogens of humans
Fungus species | Rhizopus oryzae | [
"Biology"
] | 3,507 | [
"Fungi",
"Fungus species"
] |
54,328,391 | https://en.wikipedia.org/wiki/Jacobi%20transform | In mathematics, the Jacobi transform is an integral transform named after the mathematician Carl Gustav Jacob Jacobi, which uses Jacobi polynomials as kernels of the transform.
The Jacobi transform of a function is
The inverse Jacobi transform is given by
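The defining formulas dropped out of the text above during extraction. As a hedged reconstruction, a commonly cited form of the pair, assuming Jacobi polynomials P_n^(α,β) orthogonal on (-1,1) with weight (1-x)^α(1+x)^β and α, β > -1 (normalizations vary by author), is:

```latex
% Jacobi transform (one common normalization)
J\{f\}(n) = \int_{-1}^{1} (1-x)^{\alpha} (1+x)^{\beta} \, P_n^{(\alpha,\beta)}(x)\, f(x)\, dx
% Inverse via the orthogonality relation, with squared norm \delta_n
f(x) = \sum_{n=0}^{\infty} \frac{1}{\delta_n}\, J\{f\}(n)\, P_n^{(\alpha,\beta)}(x),
\qquad
\delta_n = \int_{-1}^{1} (1-x)^{\alpha} (1+x)^{\beta} \left[ P_n^{(\alpha,\beta)}(x) \right]^{2} dx
```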
Some Jacobi transform pairs
References
Integral transforms
Mathematical physics | Jacobi transform | [
"Physics",
"Mathematics"
] | 60 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
54,328,417 | https://en.wikipedia.org/wiki/Dicarbonyl%28acetylacetonato%29rhodium%28I%29 | Dicarbonyl(acetylacetonato)rhodium(I) is an organorhodium compound with the formula Rh(O2C5H7)(CO)2. The compound consists of two CO ligands and an acetylacetonate. It is a dark green solid that dissolves in acetone and benzene, giving yellow solutions. The compound is used as a precursor to homogeneous catalysts.
It is prepared by treating rhodium carbonyl chloride with sodium acetylacetonate in the presence of base:
[(CO)2RhCl]2 + 2 NaO2C5H7 → 2 Rh(O2C5H7)(CO)2 + 2 NaCl
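As a worked check of the preparation equation above, the atom balance can be verified by writing each species as an explicit atom count (a minimal sketch; the variable names are ours and no formula parser is assumed):

```python
from collections import Counter

# Atom inventories for each species in the equation above
rh_carbonyl_chloride_dimer = Counter({"Rh": 2, "C": 4, "O": 4, "Cl": 2})  # [(CO)2RhCl]2
na_acac = Counter({"Na": 1, "O": 2, "C": 5, "H": 7})                      # NaO2C5H7
product = Counter({"Rh": 1, "O": 4, "C": 7, "H": 7})                      # Rh(O2C5H7)(CO)2
nacl = Counter({"Na": 1, "Cl": 1})

def scale(counter, n):
    """Multiply every atom count by the stoichiometric coefficient n."""
    return Counter({el: n * k for el, k in counter.items()})

lhs = rh_carbonyl_chloride_dimer + scale(na_acac, 2)
rhs = scale(product, 2) + scale(nacl, 2)
print(lhs == rhs)  # True: the equation is atom-balanced
```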
The complex adopts square planar molecular geometry. The molecules stack with Rh---Rh distances of about 326 pm. As such, it is representative of a linear chain compound.
References
Organorhodium compounds
Homogeneous catalysis
Carbonyl complexes
Acetylacetonate complexes
Rhodium(I) compounds | Dicarbonyl(acetylacetonato)rhodium(I) | [
"Chemistry"
] | 218 | [
"Catalysis",
"Homogeneous catalysis"
] |
54,328,809 | https://en.wikipedia.org/wiki/Laguerre%20transform | In mathematics, the Laguerre transform is an integral transform named after the mathematician Edmond Laguerre, which uses generalized Laguerre polynomials as kernels of the transform.
The Laguerre transform of a function is
The inverse Laguerre transform is given by
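The defining formulas were lost in extraction. As a hedged reconstruction, a commonly cited form of the pair, assuming generalized Laguerre polynomials L_n^(α) orthogonal on (0, ∞) with weight e^(-x) x^α and α > -1 (normalizations vary by author), is:

```latex
% Laguerre transform (one common normalization)
L\{f\}(n) = \int_{0}^{\infty} e^{-x}\, x^{\alpha}\, L_n^{(\alpha)}(x)\, f(x)\, dx
% Inverse via the orthogonality relation, whose squared norm is \Gamma(n+\alpha+1)/n!
f(x) = \sum_{n=0}^{\infty} \frac{n!}{\Gamma(n+\alpha+1)}\, L\{f\}(n)\, L_n^{(\alpha)}(x)
```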
Some Laguerre transform pairs
References
Integral transforms
Mathematical physics | Laguerre transform | [
"Physics",
"Mathematics"
] | 65 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
54,329,238 | https://en.wikipedia.org/wiki/Lawbot | Lawbots are a broad class of customer-facing legal AI applications that are used to automate specific legal tasks, such as document automation and legal research. The terms robot lawyer and lawyer bot are used as synonyms to lawbot. A robot lawyer or a robo-lawyer refers to a legal AI application that can perform tasks that are typically done by paralegals or young associates at law firms. However, there is some debate on the correctness of the term. Some commentators say that legal AI is technically speaking neither a lawyer nor a robot and should not be referred to as such. Other commentators believe that the term can be misleading and note that the robot lawyer of the future won't be one all-encompassing application but a collection of specialized bots for various tasks.
Lawbots use various artificial intelligence techniques or other intelligent systems to limit humans' direct ongoing involvement in certain steps of a legal matter. The user interfaces on lawbots vary from smart searches and step-by-step forms to chatbots. Consumer and enterprise-facing lawbot solutions often do not require direct supervision from a legal professional. Depending on the task, some client-facing solutions used at law firms operate under an attorney supervision.
Levels of autonomy
The following levels of autonomy (LoA) are suggested for automated AI legal reasoning:
Level 0 (LoA0): No automation for AI legal reasoning
Level 1 (LoA1): Simple assistance automation
Level 2 (LoA2): Advanced assistance automation
Level 3 (LoA3): Semi-autonomous automation
Level 4 (LoA4): Domain automation
Level 5 (LoA5): Fully-autonomous automation
Level 6 (LoA6): Superhuman automation
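The levels above form an ordered classification, which can be modeled as an integer enumeration. This is an illustrative sketch only: the class, member names, and the supervision rule of thumb are our own, not part of any cited framework.

```python
from enum import IntEnum

class LoA(IntEnum):
    """Levels of autonomy for AI legal reasoning, mirroring the list above."""
    NO_AUTOMATION = 0        # LoA0: no automation
    SIMPLE_ASSISTANCE = 1    # LoA1: simple assistance automation
    ADVANCED_ASSISTANCE = 2  # LoA2: advanced assistance automation
    SEMI_AUTONOMOUS = 3      # LoA3: semi-autonomous automation
    DOMAIN = 4               # LoA4: domain automation
    FULLY_AUTONOMOUS = 5     # LoA5: fully-autonomous automation
    SUPERHUMAN = 6           # LoA6: superhuman automation

def needs_attorney_supervision(level: LoA) -> bool:
    """Hypothetical rule of thumb: below domain automation, keep a human in the loop."""
    return level < LoA.DOMAIN

print(needs_attorney_supervision(LoA.SEMI_AUTONOMOUS))   # True
print(needs_attorney_supervision(LoA.FULLY_AUTONOMOUS))  # False
```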
Examples
Some legal AI solutions are developed and marketed directly to the customers or consumers, whereas other applications are tools for the attorneys at law firms. There are already hundreds of legal AI solutions that operate in multitude of ways varying in sophistication and dependence on scripted algorithms.
One notable legal technology chatbot application is DoNotPay. It had started off as an app for contesting parking tickets, but has since expanded to include features that help users with many different types of legal issues, ranging from consumer protection to immigration rights and other social issues.
Impact on the legal industry
In a 2016 report, Deloitte estimated that more than 110,000 law jobs in the United Kingdom alone could disappear within the next twenty years due to automation. This change could result in the creation of more highly skilled jobs and in the reduction of paralegal and temporary positions. Deloitte's report asserts that "there is significant potential for high-skilled roles that involve repetitive processes to be automated by smart and self-learning algorithms". According to Lawyers to Engage, between 22% of a lawyer's work and 35% of a legal assistant's work can be automated in the US. Top law schools like Harvard have already begun to integrate artificial intelligence into the curriculum.
Legal tech start-up companies have begun developing applications that assist law firms with completing low-risk legal processes. These applications can enable lawyers to focus on more work that requires their specific expertise.
The automation of processes like contract reviewing, enforcement of negotiations (smart contracts) and client intake (expert systems) allows law firms to streamline their procedures and improve efficiency. In addition, automation benefits small-to-medium law firms that do not have the resources to utilize junior talent on such routine tasks.
The increasing use of automated applications by law firms could result in legal tech becoming a necessity in the industry. Digital Reason CEO Tim Estes stated that those who refuse the opportunity to integrate AI into their workflow are "most at risk."
In 2018, Forbes reported a 713% increase in investments in legal tech. This rapid growth is reflective of law firms beginning to “cede business to… new model legal providers… that meld technological, business and legal expertise.”
Access to law and justice
It has been widely estimated for at least the last generation that all the programs and resources devoted to ensuring access to justice address only 20% of the civil legal needs of low-income people in the United States. Drawing on this experience, in late 2011, the U.S. government-funded Legal Services Corporation decided to convene a summit of leaders to explore how best to use technology in the access-to-justice community. The group adopted a mission for The Summit on the Use of Technology to Expand Access to Justice (Summit) consistent with the magnitude of the challenge: "to explore the potential of technology to move the United States toward providing some form of effective assistance to 100% of persons otherwise unable to afford an attorney for dealing with essential civil legal needs".
In April 2017, joined by Microsoft and Pro Bono Net, the Legal Services Corporation (LSC) announced a pilot program to develop online, statewide legal portals to direct individuals with civil legal needs to the most appropriate forms of assistance.
Technological limitations
Current research in subjects such as computational privacy, explainable machine learning, Bayesian deep learning, knowledge-intensive machine learning, and transfer learning reveals that we do not yet have the technology to enable Level 4 to Level 6 AI lawbots.
In 2023, OpenLaw began developing a model called Law Bot, which interacts in a conversational way as an attorney. The dialogue format makes it possible for Law Bot to answer follow-up questions, challenge incorrect premises, and reject inappropriate requests. Currently, they try to ensure it is in full compliance with all laws and regulations while conducting further beta testing before releasing it to the general public.
See also
Automation
Artificial intelligence and law
Computational law
Document automation
DoNotPay
Government by algorithm
Legal expert systems
Legal informatics
Legal technology
Robo-advisor
References
External links
CodeX Techindex, Stanford Law School Legal Tech List
LawSites List of Legal Tech Startups
Argument technology
Automation
Practice of law
American inventions
Parallel computing | Lawbot | [
"Engineering"
] | 1,192 | [
"Control engineering",
"Automation"
] |
54,334,250 | https://en.wikipedia.org/wiki/Proper%20reference%20frame%20%28flat%20spacetime%29 | A proper reference frame in the theory of relativity is a particular form of accelerated reference frame, that is, a reference frame in which an accelerated observer can be considered as being at rest. It can describe phenomena in curved spacetime, as well as in "flat" Minkowski spacetime in which the spacetime curvature caused by the energy–momentum tensor can be disregarded. Since this article considers only flat spacetime—and uses the definition that special relativity is the theory of flat spacetime while general relativity is a theory of gravitation in terms of curved spacetime—it is consequently concerned with accelerated frames in special relativity. (For the representation of accelerations in inertial frames, see the article Acceleration (special relativity), where concepts such as three-acceleration, four-acceleration, proper acceleration, hyperbolic motion etc. are defined and related to each other.)
A fundamental property of such a frame is the employment of the proper time of the accelerated observer as the time of the frame itself. This is connected with the clock hypothesis (which is experimentally confirmed), according to which the proper time of an accelerated clock is unaffected by acceleration, thus the measured time dilation of the clock only depends on its momentary relative velocity. The related proper reference frames are constructed using concepts like comoving orthonormal tetrads, which can be formulated in terms of spacetime Frenet–Serret formulas, or alternatively using Fermi–Walker transport as a standard of non-rotation. If the coordinates are related to Fermi–Walker transport, the term Fermi coordinates is sometimes used, or proper coordinates in the general case when rotations are also involved. A special class of accelerated observers follow worldlines whose three curvatures are constant. These motions belong to the class of Born rigid motions, i.e., the motions at which the mutual distance of constituents of an accelerated body or congruence remains unchanged in its proper frame. Two examples are Rindler coordinates or Kottler-Møller coordinates for the proper reference frame of hyperbolic motion, and Born or Langevin coordinates in the case of uniform circular motion.
In the following, Greek indices run over 0,1,2,3, Latin indices over 1,2,3, and bracketed indices are related to tetrad vector fields. The signature of the metric tensor is (-1,1,1,1).
History
Some properties of Kottler-Møller or Rindler coordinates were anticipated by Albert Einstein (1907) when he discussed the uniformly accelerated reference frame. While introducing the concept of Born rigidity, Max Born (1909) recognized that the formulas for the worldline of hyperbolic motion can be reinterpreted as transformations into a "hyperbolically accelerated reference system". Born himself, as well as Arnold Sommerfeld (1910) and Max von Laue (1911), used this frame to compute the properties of charged particles and their fields (see Acceleration (special relativity)#History and Rindler coordinates#History). In addition, Gustav Herglotz (1909) gave a classification of all Born rigid motions, including uniform rotation and the worldlines of constant curvatures. Friedrich Kottler (1912, 1914) introduced the "generalized Lorentz transformation" for proper reference frames or proper coordinates () by using comoving Frenet–Serret tetrads, and applied this formalism to Herglotz' worldlines of constant curvatures, particularly to hyperbolic motion and uniform circular motion. Herglotz' formulas were also simplified and extended by Georges Lemaître (1924). The worldlines of constant curvatures were rediscovered by several authors, for instance by Vladimír Petrův (1964), as "timelike helices" by John Lighton Synge (1967), or as "stationary worldlines" by Letaw (1981). The concept of proper reference frame was later reintroduced and further developed in connection with Fermi–Walker transport in the textbooks by Christian Møller (1952) or Synge (1960). An overview of proper time transformations and alternatives was given by Romain (1963), who cited the contributions of Kottler. In particular, Misner & Thorne & Wheeler (1973) combined Fermi–Walker transport with rotation, which influenced many subsequent authors. Bahram Mashhoon (1990, 2003) analyzed the hypothesis of locality and accelerated motion.
The relations between the spacetime Frenet–Serret formulas and Fermi–Walker transport was discussed by Iyer & C. V. Vishveshwara (1993), Johns (2005) or Bini et al. (2008) and others. A detailed representation of "special relativity in general frames" was given by Gourgoulhon (2013).
Comoving tetrads
Spacetime Frenet–Serret equations
For the investigation of accelerated motions and curved worldlines, some results of differential geometry can be used. For instance, the Frenet–Serret formulas for curves in Euclidean space have already been extended to arbitrary dimensions in the 19th century, and can be adapted to Minkowski spacetime as well. They describe the transport of an orthonormal basis attached to a curved worldline, so in four dimensions this basis can be called a comoving tetrad or vierbein (also called vielbein, moving frame, frame field, local frame, repère mobile in arbitrary dimensions):
Here, is the proper time along the worldline, the timelike field is called the tangent that corresponds to the four-velocity, the three spacelike fields are orthogonal to and are called the principal normal , the binormal and the trinormal . The first curvature corresponds to the magnitude of four-acceleration (i.e., proper acceleration), the other curvatures and are also called torsion and hypertorsion.
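The equations themselves were lost in extraction. In the stated (-1,1,1,1) signature, the spacetime Frenet–Serret formulas take the standard form below, written here as a hedged reconstruction (the tetrad fields are denoted e_(a) and the curvatures κ, τ₁, τ₂, matching the description above):

```latex
\frac{d e_{(0)}}{d\tau} = \kappa\, e_{(1)}, \qquad
\frac{d e_{(1)}}{d\tau} = \kappa\, e_{(0)} + \tau_{1}\, e_{(2)},
\frac{d e_{(2)}}{d\tau} = -\tau_{1}\, e_{(1)} + \tau_{2}\, e_{(3)}, \qquad
\frac{d e_{(3)}}{d\tau} = -\tau_{2}\, e_{(2)}
```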
Fermi–Walker transport and proper transport
While the Frenet–Serret tetrad can be rotating or not, it is useful to introduce another formalism in which non-rotational and rotational parts are separated. This can be done using the following equation for proper transport or generalized Fermi transport of tetrad , namely
where
or together in simplified form:
with as four-velocity and as four-acceleration, and "" indicates the dot product and "" the wedge product. The first part represents Fermi–Walker transport, which is physically realized when the three spacelike tetrad fields do not change their orientation with respect to the motion of a system of three gyroscopes. Thus Fermi–Walker transport can be seen as a standard of non-rotation. The second part consists of an antisymmetric second rank tensor with as the angular velocity four-vector and as the Levi-Civita symbol. It turns out that this rotation matrix only affects the three spacelike tetrad fields, thus it can be interpreted as the spatial rotation of the spacelike fields of a rotating tetrad (such as a Frenet–Serret tetrad) with respect to the non-rotating spacelike fields of a Fermi–Walker tetrad along the same world line.
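For reference, the proper-transport law described above is commonly written as follows (a hedged reconstruction; sign conventions for the rotation term differ between textbooks):

```latex
% Generalized Fermi (proper) transport of a tetrad vector F along a worldline
\frac{dF^{\mu}}{d\tau}
  = \underbrace{\left( u^{\mu} a_{\nu} - a^{\mu} u_{\nu} \right) F^{\nu}}_{\text{Fermi--Walker part}}
  \;-\; \Omega^{\mu}{}_{\nu}\, F^{\nu},
\qquad
\Omega^{\mu\nu} = \epsilon^{\mu\nu\rho\sigma}\, u_{\rho}\, \omega_{\sigma}
```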
Deriving Fermi–Walker tetrads from Frenet–Serret tetrads
Since and on the same worldline are connected by a rotation matrix, it is possible to construct non-rotating Fermi–Walker tetrads using rotating Frenet–Serret tetrads, which not only works in flat spacetime but for arbitrary spacetimes as well, even though the practical realization can be hard to achieve. For instance, the angular velocity vector between the respective spacelike tetrad fields and can be given in terms of torsions and :
Assuming that the curvatures are constant (which is the case in helical motion in flat spacetime, or in the case of stationary axisymmetric spacetimes), one then proceeds by aligning the spacelike Frenet–Serret vectors in the plane by constant counter-clockwise rotation, then the resulting intermediary spatial frame is constantly rotated around the axis by the angle , which finally gives the spatial Fermi–Walker frame (note that the timelike field remains the same):
For the special case and , it follows and and , therefore () is reduced to a single constant rotation around the -axis:
Proper coordinates or Fermi coordinates
In flat spacetime, an accelerated object is at any moment at rest in a momentary inertial frame , and the sequence of such momentary frames which it traverses corresponds to a successive application of Lorentz transformations , where is an external inertial frame and the Lorentz transformation matrix. This matrix can be replaced by the proper time dependent tetrads defined above, and if is the time track of the particle indicating its position, the transformation reads:
Then one has to put , by which is replaced by and the timelike field vanishes, so that only the spacelike fields remain. Subsequently, the time in the accelerated frame is identified with the proper time of the accelerated observer by . The final transformation has the form
These are sometimes called proper coordinates, and the corresponding frame is the proper reference frame. They are also called Fermi coordinates in the case of Fermi–Walker transport (even though some authors use this term also in the rotational case). The corresponding metric has the form in Minkowski spacetime (without Riemannian terms):
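The line element itself was lost in extraction. A hedged reconstruction of the commonly cited form (as in Misner, Thorne & Wheeler), for proper acceleration a and angular velocity ω, is:

```latex
ds^2 = -\left[ (1+\mathbf{a}\cdot\mathbf{x})^{2} - (\boldsymbol{\omega}\times\mathbf{x})^{2} \right] d\tau^{2}
       + 2\,(\boldsymbol{\omega}\times\mathbf{x})\cdot d\mathbf{x}\; d\tau
       + \delta_{ij}\, dx^{i}\, dx^{j}
% Without rotation this reduces to
% ds^2 = -(1+\mathbf{a}\cdot\mathbf{x})^{2}\, d\tau^{2} + d\mathbf{x}^{2}
```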
However, these coordinates are not globally valid, but are restricted to
Proper reference frames for timelike helices
In case all three Frenet–Serret curvatures are constant, the corresponding worldlines are identical to those that follow from the Killing motions in flat spacetime. They are of particular interest since the corresponding proper frames and congruences satisfy the condition of Born rigidity, that is, the spacetime distance of two neighbouring worldlines is constant. These motions correspond to "timelike helices" or "stationary worldlines", and can be classified into six principal types: two with zero torsions (uniform translation, hyperbolic motion) and four with non-zero torsions (uniform rotation, catenary, semicubical parabola, general case):
Case produces uniform translation without acceleration. The corresponding proper reference frame is therefore given by ordinary Lorentz transformations. The other five types are:
Hyperbolic motion
The curvatures , where is the constant proper acceleration in the direction of motion, produce hyperbolic motion because the worldline in the Minkowski diagram is a hyperbola:
The corresponding orthonormal tetrad is identical to an inverted Lorentz transformation matrix with hyperbolic functions as Lorentz factor and as proper velocity and as rapidity (since the torsions and are zero, the Frenet–Serret formulas and Fermi–Walker formulas produce the same tetrad):
Inserted into the transformations () and using the worldline () for , the accelerated observer is always located at the origin, so the Kottler-Møller coordinates follow
which are valid within , with the metric
.
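The explicit Kottler-Møller transformation and metric dropped out of the text above. A hedged reconstruction of the commonly cited form, for constant proper acceleration α (with c = 1), is:

```latex
T = \left( x + \tfrac{1}{\alpha} \right) \sinh(\alpha\tau), \qquad
X = \left( x + \tfrac{1}{\alpha} \right) \cosh(\alpha\tau) - \tfrac{1}{\alpha}, \qquad
Y = y, \quad Z = z
% valid for x > -1/\alpha, with the metric
ds^{2} = -(1+\alpha x)^{2}\, d\tau^{2} + dx^{2} + dy^{2} + dz^{2}
```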
Alternatively, by setting the accelerated observer is located at at time , thus the Rindler coordinates follow from () and (, ):
which are valid within , with the metric
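The Rindler transformation and metric were likewise lost in extraction; the commonly cited form (with c = 1 and the same α as above) reads:

```latex
T = x\, \sinh(\alpha\tau), \qquad X = x\, \cosh(\alpha\tau), \qquad Y = y, \quad Z = z
% valid for x > 0, with the metric
ds^{2} = -(\alpha x)^{2}\, d\tau^{2} + dx^{2} + dy^{2} + dz^{2}
```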
Uniform circular motion
The curvatures , produce uniform circular motion, with the worldline
where
with as orbital radius, as coordinate angular velocity, as proper angular velocity, as tangential velocity, as proper velocity, as Lorentz factor, and as angle of rotation. The tetrad can be derived from the Frenet–Serret equations (), or more simply be obtained by a Lorentz transformation of the tetrad of ordinary rotating coordinates:
The corresponding non-rotating Fermi–Walker tetrad on the same worldline can be obtained by solving the Fermi–Walker part of equation (). Alternatively, one can use () together with (), which gives
The resulting angle of rotation together with () can now be inserted into (), by which the Fermi–Walker tetrad follows
In the following, the Frenet–Serret tetrad is used to formulate the transformation. Inserting () into the transformations () and using the worldline () for gives the coordinates
which are valid within , with the metric
If an observer resting in the center of the rotating frame is chosen with , the equations reduce to the ordinary rotational transformation
which are valid within , and the metric
.
The last equations can also be written in rotating cylindrical coordinates (Born coordinates):
which are valid within , and the metric
Frames (, , ) can be used to describe the geometry of rotating platforms, including the Ehrenfest paradox and the Sagnac effect.
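The kinematics of the rotating worldline can also be verified directly. This Python sketch uses units with c = 1 and arbitrary assumed values for the orbital radius and coordinate angular velocity; it builds the circular worldline, differentiates it with respect to proper time using the time-dilation relation dt = γ dτ, and confirms the four-velocity normalization u·u = −1 and u⁰ = γ:

```python
import math

# Assumed parameters (c = 1): orbital radius r and coordinate angular velocity Omega.
r, Omega = 2.0, 0.3
v = r * Omega                          # tangential velocity (must satisfy v < 1)
gamma = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor

def worldline(t):
    """Minkowski event (t, x, y) of the circulating observer at coordinate time t."""
    return (t, r * math.cos(Omega * t), r * math.sin(Omega * t))

# Four-velocity by central finite difference with respect to proper time tau,
# using dt = gamma * dtau (time dilation on the circular orbit).
t0, dtau = 1.7, 1e-6
dt = gamma * dtau
e_plus = worldline(t0 + dt / 2)
e_minus = worldline(t0 - dt / 2)
u = [(a - b) / dtau for a, b in zip(e_plus, e_minus)]

# Normalization u.u = -1 in signature (-, +, +):
norm = -u[0] ** 2 + u[1] ** 2 + u[2] ** 2
assert abs(norm + 1.0) < 1e-6
# The time component equals the Lorentz factor, u^0 = gamma:
assert abs(u[0] - gamma) < 1e-8
print("circular-orbit four-velocity checks passed")
```

With these numbers v = 0.6 and γ = 1.25, so the proper clock of the orbiting observer runs slow by exactly that factor relative to coordinate time.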
Catenary
The curvatures , produce a catenary, i.e., hyperbolic motion combined with a spacelike translation
where
where is the velocity, the proper velocity, the rapidity, and the Lorentz factor. The corresponding Frenet–Serret tetrad is:
The corresponding non-rotating Fermi–Walker tetrad on the same worldline can be obtained by solving the Fermi–Walker part of equation (). The same result follows from (), which gives
which together with () can now be inserted into (), resulting in the Fermi–Walker tetrad
The proper coordinates or Fermi coordinates follow by inserting or into ().
Semicubical parabola
The curvatures , produce a semicubical parabola or cusped motion
The corresponding Frenet–Serret tetrad with is:
The corresponding non-rotating Fermi–Walker tetrad on the same worldline can be obtained by solving the Fermi–Walker part of equation (). The same result follows from (), which gives
which together with () can now be inserted into (), resulting in the Fermi–Walker tetrad (note that in this case):
The proper coordinates or Fermi coordinates follow by inserting or into ().
General case
The curvatures , , produce hyperbolic motion combined with uniform circular motion. The worldline is given by
where
with as tangential velocity, as proper tangential velocity, as rapidity, as orbital radius, as coordinate angular velocity, as proper angular velocity, as angle of rotation, and as Lorentz factor. The Frenet–Serret tetrad is
The corresponding non-rotating Fermi–Walker tetrad on the same worldline is as follows: First inserting () into () gives the angular velocity, which together with () can now be inserted into (, left), and finally inserted into (, right) produces the Fermi–Walker tetrad. The proper coordinates or Fermi coordinates follow by inserting or into () (the resulting expressions are not indicated here because of their length).
Overview of historical formulas
In addition to the work described in the History section above, the contributions of Herglotz, Kottler, and Møller are described here in more detail, since these authors gave extensive classifications of accelerated motion in flat spacetime.
Herglotz
Herglotz (1909) argued that the metric
where
satisfies the condition of Born rigidity when . He pointed out that the motion of a Born rigid body is in general determined by the motion of one of its points (class A), with the exception of those worldlines whose three curvatures are constant, thus representing a helix (class B). For the latter, Herglotz gave the following coordinate transformation corresponding to the trajectories of a family of motions:
(H1) ,
where and are functions of proper time . By differentiation with respect to , and assuming as constant, he obtained
(H2)
Here, represents the four-velocity of the origin of , and is a six-vector (i.e., an antisymmetric four-tensor of second order, or bivector, having six independent components) representing the angular velocity of around . As any six-vector, it has two invariants:
When is constant and is variable, any family of motions described by (H1) forms a group and is equivalent to an equidistant family of curves, thus satisfying Born rigidity because they are rigidly connected with . To derive such a group of motions, (H2) can be integrated with arbitrary constant values of and . For rotational motions, this results in four groups depending on whether the invariants or are zero or not. These groups correspond to four one-parameter groups of Lorentz transformations, which were already derived by Herglotz in a previous section on the assumption that Lorentz transformations (being rotations in ) correspond to hyperbolic motions in . The latter had been studied in the 19th century, and were categorized by Felix Klein into loxodromic, elliptic, hyperbolic, and parabolic motions (see also Möbius group).
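The claim that a six-vector has six independent components and two invariants can be illustrated numerically. In the sketch below the components are arbitrary made-up numbers: the first invariant is the full contraction F^{μν}F_{μν}, the second is the Pfaffian of the tensor (the ε-contraction equals 8·Pf), and both are checked to be unchanged under a Lorentz boost:

```python
import math

def matmul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # signature (-,+,+,+)

def invariants(F):
    """Two invariants of an antisymmetric tensor F^{mu nu}:
    I1 = F^{mu nu} F_{mu nu}, and I2 = Pf(F) (the epsilon contraction is 8*Pf)."""
    FeFe = matmul(F, matmul(eta, matmul(F, eta)))
    i1 = -sum(FeFe[i][i] for i in range(4))
    i2 = F[0][1] * F[2][3] - F[0][2] * F[1][3] + F[0][3] * F[1][2]
    return i1, i2

# An arbitrary six-vector: six independent components above the diagonal.
c = [0.3, -1.1, 0.7, 2.0, -0.4, 0.9]
F = [[0, c[0], c[1], c[2]],
     [-c[0], 0, c[3], c[4]],
     [-c[1], -c[3], 0, c[5]],
     [-c[2], -c[4], -c[5], 0]]

# A boost along x with rapidity 0.8; it satisfies Lambda^T eta Lambda = eta.
ch, sh = math.cosh(0.8), math.sinh(0.8)
L = [[ch, sh, 0, 0], [sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

Fp = matmul(L, matmul(F, transpose(L)))  # transformed tensor F' = Lambda F Lambda^T
i1, i2 = invariants(F)
i1p, i2p = invariants(Fp)
assert abs(i1 - i1p) < 1e-9 and abs(i2 - i2p) < 1e-9
print("six-vector invariants preserved:", round(i1, 6), round(i2, 6))
```

The Pfaffian invariance follows from Pf(BAB^T) = det(B)·Pf(A) with det(Λ) = 1 for a proper Lorentz transformation.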
Kottler
Friedrich Kottler (1912) followed Herglotz, and derived the same worldlines of constant curvatures using the following Frenet–Serret formulas in four dimensions, with as comoving tetrad of the worldline, and as the three curvatures
corresponding to (). Kottler pointed out that the tetrad can be seen as a reference frame for such worldlines. Then he gave the transformation for the trajectories
(with )
in agreement with (). Kottler also defined a tetrad whose basis vectors are fixed in normal space and therefore do not share any rotation. He distinguished two cases: if the tangent (i.e., the timelike) tetrad field is constant, then the spacelike tetrad fields can be replaced by ones that are "rigidly" connected with the tangent, thus
The second case is a vector "fixed" in normal space by setting . Kottler pointed out that this corresponds to class B given by Herglotz (which Kottler calls "Born's body of second kind")
,
and class (A) of Herglotz (which Kottler calls "Born's body of first kind") is given by
which both correspond to formula ().
In (1914a), Kottler showed that the transformation
,
describes the non-simultaneous coordinates of the points of a body, while the transformation with
,
describes the simultaneous coordinates of the points of a body. These formulas become "generalized Lorentz transformations" by inserting
thus
in agreement with (). He introduced the terms "proper coordinates" and "proper frame" () for a system whose time axis coincides with the respective tangent of the worldline. He also showed that the Born rigid body of second kind, whose worldlines are defined by
,
is particularly suitable for defining a proper frame. Using this formula, he defined the proper frames for hyperbolic motion (free fall) and for uniform circular motion:
In (1916a) Kottler gave the general metric for acceleration-relative motions based on the three curvatures
In (1916b) he gave it the form:
where are free from , and , and , and linear in .
Møller
Møller (1952) defined the following transport equation
in agreement with Fermi–Walker transport by (, without rotation). The Lorentz transformation into a momentary inertial frame was given by him as
in agreement with (). By setting , and , he obtained the transformation into the "relativistic analogue of a rigid reference frame"
in agreement with the Fermi coordinates (), and the metric
in agreement with the Fermi metric () without rotation. He obtained the Fermi–Walker tetrads and Fermi frames of hyperbolic motion and uniform circular motion (some formulas for hyperbolic motion were already derived by him in 1943):
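Møller's transport equation can also be integrated numerically. The sketch below makes several stated assumptions: units with c = 1, proper acceleration α = 1, a two-dimensional (t, x) Minkowski space with signature (−, +), and the rotation-free Fermi–Walker equation dS/dτ = u(a·S) − a(u·S). It transports a spatial basis vector along the hyperbolic worldline with a small RK4 integrator and checks the closed form of the boosted tetrad, S(τ) = (sinh ατ, cosh ατ):

```python
import math

ALPHA = 1.0  # assumed proper acceleration, c = 1

def u(tau):   # four-velocity of hyperbolic motion, (t, x) components
    return (math.cosh(ALPHA * tau), math.sinh(ALPHA * tau))

def a(tau):   # four-acceleration
    return (ALPHA * math.sinh(ALPHA * tau), ALPHA * math.cosh(ALPHA * tau))

def dot(p, q):  # Minkowski product, signature (-, +)
    return -p[0] * q[0] + p[1] * q[1]

def deriv(tau, S):
    """Fermi-Walker transport without rotation: dS/dtau = u (a.S) - a (u.S)."""
    uu, aa = u(tau), a(tau)
    aS, uS = dot(aa, S), dot(uu, S)
    return (uu[0] * aS - aa[0] * uS, uu[1] * aS - aa[1] * uS)

def rk4_step(tau, S, h):
    k1 = deriv(tau, S)
    k2 = deriv(tau + h / 2, (S[0] + h / 2 * k1[0], S[1] + h / 2 * k1[1]))
    k3 = deriv(tau + h / 2, (S[0] + h / 2 * k2[0], S[1] + h / 2 * k2[1]))
    k4 = deriv(tau + h, (S[0] + h * k3[0], S[1] + h * k3[1]))
    return (S[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            S[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Transport S(0) = (0, 1) from tau = 0 to tau = 1 in 1000 RK4 steps.
S, h = (0.0, 1.0), 1e-3
for i in range(1000):
    S = rk4_step(i * h, S, h)

# Closed form for this worldline: S(tau) = (sinh(tau), cosh(tau)).
assert abs(S[0] - math.sinh(1.0)) < 1e-8
assert abs(S[1] - math.cosh(1.0)) < 1e-8
# Transport preserves the Minkowski norm S.S = +1:
assert abs(dot(S, S) - 1.0) < 1e-8
print("Fermi-Walker transport matches the boosted tetrad")
```

As expected for this worldline, the transported vector simply stays aligned with the comoving boosted frame, and its norm is preserved along the way.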
Worldlines of constant curvatures by Herglotz and Kottler
References
Bibliography
Textbooks
; First edition 1911, second expanded edition 1913, third expanded edition 1919.
New edition 2013: Editor: Domenico Giulini, Springer, 2013 .
Journal articles
Historical sources
External links
Physics FAQ: Acceleration in Special Relativity
Eric Gourgoulhon (2010): Special relativity from an accelerated observer perspective
Special relativity
Acceleration
Frames of reference | Proper reference frame (flat spacetime) | [
"Physics",
"Mathematics"
] | 4,162 | [
"Physical quantities",
"Acceleration",
"Coordinate systems",
"Frames of reference",
"Quantity",
"Classical mechanics",
"Special relativity",
"Theory of relativity",
"Wikipedia categories named after physical quantities"
] |
57,641,853 | https://en.wikipedia.org/wiki/Lisocabtagene%20maraleucel | Lisocabtagene maraleucel, sold under the brand name Breyanzi, is a cell-based gene therapy used to treat B-cell lymphomas, including follicular lymphoma.
Side effects include hypersensitivity reactions, serious infections, low blood cell counts, and a weakened immune system. The most common side effects include decreases in neutrophils (a type of white blood cell that fights infections), in red blood cells or in blood platelets (components that help the blood to clot), as well as cytokine release syndrome (a potentially life-threatening condition that can cause fever, vomiting, shortness of breath, pain and low blood pressure) and tiredness. The most common adverse reactions for treating follicular lymphoma include cytokine release syndrome, headache, musculoskeletal pain, fatigue, constipation, and fever.
Lisocabtagene maraleucel, a chimeric antigen receptor (CAR) T cell (CAR-T) therapy, is the third gene therapy approved by the US Food and Drug Administration (FDA) for certain types of non-Hodgkin lymphoma, including diffuse large B-cell lymphoma. Lisocabtagene maraleucel was approved for medical use in the United States in February 2021.
Medical uses
In the US, lisocabtagene maraleucel is indicated for the treatment of adults with large B-cell lymphoma, including diffuse large B-cell lymphoma not otherwise specified (including diffuse large B-cell lymphoma arising from indolent lymphoma), high-grade B cell lymphoma, primary mediastinal large B-cell lymphoma, and follicular lymphoma grade 3B, who have refractory disease to first-line chemoimmunotherapy or relapse within 12 months of first-line chemoimmunotherapy; or refractory disease to first-line chemoimmunotherapy or relapse after first-line chemoimmunotherapy and are not eligible for hematopoietic stem cell transplantation (HSCT) due to comorbidities or age; or relapsed or refractory disease after two or more lines of systemic therapy. It is also indicated for adults with relapsed or refractory chronic lymphocytic leukemia or small lymphocytic lymphoma who have received at least two prior lines of therapy, including a Bruton tyrosine kinase inhibitor and a B-cell lymphoma 2 inhibitor.
In the EU, lisocabtagene maraleucel is indicated for the treatment of adults with diffuse large B-cell lymphoma, high grade B-cell lymphoma, primary mediastinal large B-cell lymphoma and follicular lymphoma grade 3B, who relapsed within 12 months from completion of, or are refractory to, first-line chemoimmunotherapy.
Lisocabtagene maraleucel is not indicated for the treatment of people with primary central nervous system lymphoma.
In May 2024, the US Food and Drug Administration (FDA) expanded the indication for lisocabtagene maraleucel to include adults with relapsed or refractory follicular lymphoma who have received two or more prior lines of systemic therapy; and the treatment of adults with relapsed or refractory mantle cell lymphoma who have received at least two prior lines of systemic therapy, including a Bruton tyrosine kinase inhibitor.
Adverse effects
The US Food and Drug Administration (FDA) prescription label carries a boxed warning for cytokine release syndrome (CRS), a systemic response to the activation and proliferation of CAR-T cells that causes high fever and flu-like symptoms, and for neurologic toxicities.
In April 2024, the FDA prescription label boxed warning was expanded to include T cell malignancies.
History
Lisocabtagene maraleucel's safety and efficacy were established in a multicenter clinical trial of more than 250 adults with refractory or relapsed large B-cell lymphoma. The complete remission rate after treatment was 54%.
The US Food and Drug Administration (FDA) granted lisocabtagene maraleucel priority review, orphan drug, regenerative medicine advanced therapy (RMAT), and breakthrough therapy designations. Lisocabtagene maraleucel is the first regenerative medicine therapy with RMAT designation to be licensed by the FDA. The FDA granted approval of Breyanzi to Juno Therapeutics Inc., a Bristol-Myers Squibb Company.
Efficacy was evaluated in TRANSFORM (NCT03575351), a randomized, open-label, multicenter trial in adults with primary refractory large B-cell lymphoma or relapse within twelve months of achieving complete response (CR) to first-line therapy. Participants had not yet received treatment for relapsed or refractory lymphoma and were potential candidates for autologous HSCT. A total of 184 participants were randomized 1:1 to receive a single infusion of lisocabtagene maraleucel following fludarabine and cyclophosphamide lymphodepleting chemotherapy or to receive second-line standard therapy, consisting of three cycles of chemoimmunotherapy followed by high-dose therapy and autologous HSCT in participants who attained CR or partial response (PR).
Efficacy was also evaluated in PILOT (NCT03483103), a single-arm, open-label, multicenter trial in transplant-ineligible patients with relapsed or refractory large B-cell lymphoma after one line of chemoimmunotherapy. The study enrolled participants who were ineligible for high-dose therapy and HSCT due to organ function or age, but who had adequate organ function for CAR-T cell therapy. Efficacy was based on CR rate and duration of response (DOR) as determined by an IRC. Of 74 participants who underwent leukapheresis (median age, 73 years), 61 (82%) received lisocabtagene maraleucel of whom 54% (95% CI: 41, 67) achieved CR. The median DOR was not reached (95% CI: 11.2 months, not reached) in participants who achieved CR and 2.1 months (95% CI: 1.4, 2.3) in participants with a best response of PR. Among all leukapheresed participants, the CR rate was 46% (95% CI: 34, 58).
For the treatment of follicular lymphoma, efficacy was evaluated in TRANSCEND-FL (NCT04245839), a phase II, open-label, multicenter, single-arm trial in adults with relapsed or refractory follicular lymphoma after two or more lines of systemic therapy (including an anti-CD20 antibody and an alkylating agent). Participants were eligible to enroll in the study if they had adequate bone marrow function to receive lymphodepleting chemotherapy and an ECOG performance status of 1 or less.
Society and culture
Legal status
In January 2022, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Breyanzi, intended for the treatment of adults with relapsed or refractory diffuse large B cell lymphoma, primary mediastinal large B-cell lymphoma and follicular lymphoma grade 3B, after at least two previous lines of treatments. The applicant for this medicinal product is Bristol-Myers Squibb Pharma EEIG. Lisocabtagene maraleucel was approved for medical use in the European Union in April 2022.
Names
Lisocabtagene maraleucel is the international nonproprietary name (INN).
References
External links
Drugs developed by Bristol Myers Squibb
Cancer treatments
Drugs that are a gene therapy
Approved gene therapies
CAR T-cell therapy
Orphan drugs
Antineoplastic drugs | Lisocabtagene maraleucel | [
"Biology"
] | 1,786 | [
"Cell therapies",
"CAR T-cell therapy"
] |
57,643,049 | https://en.wikipedia.org/wiki/Fancy%20Nancy%20%28TV%20series%29 | Fancy Nancy, titled Fancy Nancy Clancy internationally, is an American animated family comedy children's television series developed by Jamie Mitchell and Krista Tucker and produced by Disney Television Animation for Disney Junior based on the eponymous children's picture book series by Jane O'Connor with illustrations by Robin Preiss Glasser. The show follows the adventures of Nancy Clancy, a 6 (and then later 7) year-old girl who loves everything fancy and French, while living with her family and friends in a fictional version of Plainfield, Ohio.
The series premiered on July 13, 2018, in the United States and Canada the following day. Disney Junior renewed the series for a second season, which premiered on October 4, 2019, in the United States. On September 18, 2019, a third season was commissioned, and Krista Tucker confirmed that it would be the last for the entire series. The third season began simulcasting on Disney Junior, DisneyNOW and Disney+ on November 12, 2021. The series finale aired on February 18, 2022. Fancy Nancy received generally positive reviews from critics.
Premise
Six-year-old Nancy Clancy enjoys fancy and French things, ranging from her vocabulary to her elaborate attire, and plans to teach some fanciness to her ordinary family and her friends.
Characters
Nancy Margaret Clancy (voiced by Mia Sinclair Jenness) is a little girl who enjoys fancy things and is a bit of a Francophile; she loves France and can speak French. She was 6 years old until the episode "Nancy's Parfait Birthday!", where she turned 7. Her favorite color is fuchsia, and she adores butterflies; she has an adorable Golden Doodle named Frenchy, her own secret delivery mailbox for sharing all her party invitations with her best friend Bree, and her own playhouse. She also has a doll named Marabelle, whom she sometimes carries everywhere. Her mother sometimes uses her full name when she is in trouble. Her middle name is Margaret, after her late maternal grandmother. In "Paris, Adieu!", Nancy starts saving money in hopes of eventually affording a trip to Paris; she finally reaches her goal in the series finale.
Josephine Jane "JoJo" Clancy (voiced by Spencer Moss) is Nancy's little sister. She has an imaginary friend named Dudley, is a PIT (Pirate in Training), loves to help, laughs a lot, and loves her stuffed animal "Mr. Monkey". She was 3 years old until she turned 4 in "Big Top Nancy".
Douglas "Doug" Clancy (voiced by Rob Riggle) and Claire Clancy (voiced by Alyson Hannigan) are Nancy and Jojo's parents.
Mrs. Dolores Devine (voiced by Christine Baranski) is Nancy's elderly widowed neighbor. Her name is a play on the word "divine". Her late husband's name was Ronnie.
Franklin "Frank" Anderson (voiced by George Wendt) is Nancy's widowed maternal grandfather. His late wife was named "Margaret."
Frenchy (voiced by Fabio Tassone) is Nancy's Golden Doodle.
Poppy (Sid Clancy) (voiced by John Ratzenberger) is Nancy's paternal grandfather. He is a professor of geology. Grammy and Poppy live in Chicago.
Grammy (Fay Clancy) (voiced by Miriam Flynn) is Nancy's paternal grandmother. Nancy often thinks she's a spy when she's actually a librarian who works for the government.
Briana Rose "Bree" James (voiced by Dana Heath) is Nancy's best friend. She's also Nancy's next-door neighbor, loves nature, is a fantastique ice skater; she has a dog named Waffles and has a doll named Chiffon.
Frederick "Freddy" James (voiced by Blake Moore) is Bree's younger brother and JoJo's best friend. He is 3 years old.
Calvin and Gloria James (voiced by Geno Henderson and Tatyana Ali respectively) are Bree's parents.
Mrs. Priya Singh (voiced by Aparna Nancherla) is Doug's accountant agency boss.
Mr. Ravi Singh (voiced by Kal Penn) is Priya's husband.
Jonathan (voiced by Ian Chen) is Nancy's cousin who also enjoys fancy stuff. Prior to the series, he went by "Johnny", but now he goes by "Jonathan". He's a magician and loves clothes.
Gus (voiced by Chi McBride) is a local courier who makes deliveries. He can be quite goofy at times and a bit clumsy. He also hates lying and being dishonest. In the episode "Parcel Pursuit" he loses his kitten Parcel while on his mail route.
Lionel (voiced by Malachi Barton) is a boy who is a bit of a comedian. Lionel has curly blond hair and blue eyes. He also has a dog named Flash and a rubber chicken named Bok-Bok, which he carries everywhere he goes; he got it to get over a chicken incident from when he was younger. He has an autistic cousin named Sean whom he cares about; in the episode "Nancy's New Friend", he teaches Nancy how to stay calm around Sean. In the episode "Love, Lionel", he has a huge crush on Wanda and has trouble revealing his feelings to her, since she and her friends already know him for cracking jokes, so Nancy tries to help him get over his fear and come clean.
Sean (voiced by George Yionoulis) is Lionel's cousin who's autistic. He loves trains and is really educated about them.
Brigitte (voiced by Madison Pettis) is Nancy's favorite waitress, who always serves her and her friends and family their favorite pizza at the pizza parlor. She was also Nancy and JoJo's babysitter. In the third season, Brigitte moves to Chicago to go to college.
Grace White (voiced by Hannah Nordberg) is Nancy's frenemy. She is from a wealthy family. Grace often brags about what she has, which can sometimes annoy the other kids. In the Season 3 episode, "Grace Gets Real", however, Grace finally learns how to be humbler and not brag.
Rhonda and Wanda (both voiced by Ruby Jay) are identical twin sisters who are Nancy's friends; Lionel has trouble telling them apart. They both wear bows on their heads, each with its own pattern (Rhonda's bow has stripes and Wanda's has polka dots), and the first letter of their names appears at the top of their shirts to tell them apart. They are both tomboys who love to play sports, both in their backyard and at practice.
Roberto (voiced by Nathan Arenas) is Nancy's friend and Lionel's new buddy who just moved to their neighborhood from Paris, Texas in the episode "Le Boy Next Door."
Daisy (voiced by Darci Lynne) is Nancy's friend that she met at a Food Drive. In Season 3, she moves closer to Nancy's neighborhood.
Lucille (voiced by Rachael MacFarlane) is Nancy's dance teacher.
Mr. Chen (voiced by James Sie) is the shoe store owner and judge for the Plainfield ballroom dance competition.
Flash is Lionel's dog.
Waffles is Bree's dog.
Serena and Venus are Rhonda and Wanda's pet hamsters. They are named after the famous tennis-playing sisters, Serena and Venus Williams.
Fritters is Grace's pet bunny.
Pepper is Grace's pet pony.
Dumpling is Grace's pet Silkie chicken.
Jean Claude is JoJo's fish that died and had a funeral in the episode "Au Revoir Jean Claude".
Jean Claude Jr. is Nancy and JoJo's fish that they got in the episode "Au Revoir Jean Claude" after Jean Claude died.
Marabelle is Nancy's favorite doll.
Chiffon is Bree's favorite doll.
Bok Bok is Lionel's favorite toy chicken, which he uses to be funny and make jokes. He carries him everywhere and got him when he was younger to get over the chicken incident.
Penelope is Grace's favorite doll who looks identical to her.
Flower Shop Owner is a kindly middle-aged woman who owns the Flower Shop and invites Nancy and Bree to participate in the poetry contest.
Episodes
Release
Fancy Nancy premiered in the United States and in Canada on July 13, 2018. The series was later made available to stream on Disney+.
Home media
Reception
Critical response
Alex Reif of LaughingPlace.com called Fancy Nancy "full of bright colors, fun characters, and musical numbers," writing, "Like the beloved book series, kids are going to think Disney’s Fancy Nancy is très magnifique. Nancy will inspire viewers to be themselves and to make every day extra special. Her interests gravitate towards things that are pink and sparkly, but her personality is optimistic and friendly. Not only will kids expand their vocabulary, but they will also see a great role model for overcoming personal struggles and being a better friend." Emily Ashby of Common Sense Media gave Fancy Nancy a grade of four out of five stars, complimented the educational value, citing self-expression and individuality, and praised the depiction of positive messages and role models, stating that the show promotes respect and positivity across its characters.
Dave Trumbore of Collider included Fancy Nancy in their "2018's Best New Animated Series for Kids" list, saying that the series celebrates "uniqueness, diversity, and individuality." Azure Hall and Casey Suglia of Romper included Fancy Nancy in their "Great Shows Your Kids Will Love To Stream On Disney+" list, stating, "Although Nancy likes everything fancy, the TV series promotes individuality and self expression, rather than materialism. Throughout the show, Nancy learns about the beauty of people’s differences and learns to appreciate all that makes her friends unique. This mirrors itself in the show’s positive messages about family members, as Nancy loves on her younger sister and her supportive parents. While Nancy might not necessarily be your cup of tea, you have to give credit to a show that teaches kids to embrace their truest selves."
Nuray Bulbul of WalesOnline included Fancy Nancy in their "6 Best Disney Plus animated films and TV shows in 2021" list, asserting, "From her vast vocabulary to her creative attire, six-year-old Nancy is one to look out for. A high-spirited young girl whose imagination and enthusiasm transforms the ordinary into the extraordinary - showcases it’s important to make the most of each day and encourage others to do the same." Charles Curtis of USA Today ranked Fancy Nancy 6th in their "20 Best Shows For Kids Right Now (March 2020)" list.
Accolades
References
External links
Disney Jr. original programming
2010s American animated television series
2020s American animated television series
2010s preschool education television series
2020s preschool education television series
2018 American television series debuts
2018 animated television series debuts
2022 American television series endings
American English-language television shows
American preschool education television series
Animated preschool education television series
Television shows set in Columbus, Ohio
Television series by Disney Television Animation
American television shows based on children's books
American children's animated adventure television series
American children's animated fantasy television series
American computer-animated television series
Animated television series about children
Computers | Fancy Nancy (TV series) | [
"Technology"
] | 2,352 | [] |
61,900,168 | https://en.wikipedia.org/wiki/ASASSN-19bt | ASASSN-19bt was a tidal disruption event (TDE) discovered by the All Sky Automated Survey for SuperNovae (ASAS-SN) project, with early-time, detailed observations by the TESS satellite. It was first detected on January 21, 2019, and reached peak brightness on March 4. The black hole which caused the TDE is in the 16th magnitude galaxy 2MASX J07001137-6602251 in the constellation Volans at a redshift of 0.0262, around 375 million light years away.
Observations in UV light made with NASA's Neil Gehrels Swift Observatory showed a drop in the temperature of the tidal disruption from around 71,500 to 35,500 degrees Fahrenheit (40,000 to 20,000 degrees Celsius) over a few days. This is the first time such an early temperature drop has been seen in a tidal disruption event. The transient resulting from the tidal disruption event has been cataloged as AT 2019ahk.
References
Black holes
Volans
2019 in outer space
Tidal disruption events | ASASSN-19bt | [
"Physics",
"Astronomy"
] | 224 | [
"Black holes",
"Tidal disruption events",
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Astronomical events",
"Unsolved problems in physics",
"Astronomy stubs",
"Astrophysics",
"Constellations",
"Volans",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Density",
... |
61,900,989 | https://en.wikipedia.org/wiki/Magic%20wavelength | The magic wavelength (also known as a related quantity, magic frequency) is the wavelength of an optical lattice where the polarizabilities of two atomic clock states have the same value, such that the AC Stark shift caused by the laser intensity fluctuation has no effect on the transition frequency between the two clock states.
AC Stark shift by optical lattice
The laser field in an optical lattice induces an electric dipole moment in the atoms to exert forces on them and hence confine them. However, the difference in polarizabilities of the two atomic states leads to an AC Stark shift in the transition frequency between them, a shift that depends on the laser intensity at the atom's location in the lattice. In precise measurements of the transition frequency, such as in atomic clocks, temporal fluctuations of the laser intensity therefore degrade the clock accuracy. Furthermore, because of the spatial variation of the laser intensity in the lattice, the atom's motion within the lattice also couples into the uncertainty of the atom's internal transition frequency.
Polarizability depends on wavelength
Despite having different functional forms, the polarizabilities of the two atomic states both depend on the wavelength of the laser field. In some cases it is then possible to find a particular wavelength at which the two atomic states happen to have exactly the same polarizability. This particular wavelength, at which the AC Stark shift of the transition frequency vanishes, is called the magic wavelength, and the corresponding frequency is called the magic frequency. The idea was first introduced in a 2003 calculation by Hidetoshi Katori, and then realized experimentally by Katori's group in 2005.
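The crossing can be illustrated with a deliberately simple toy model, not real atomic data: give each clock state a single-resonance polarizability α(ω) = A/(ω₀² − ω²) with made-up parameters, and bisect for the frequency where the two curves meet. At that "magic frequency" the differential light shift of the transition vanishes:

```python
# Toy single-resonance model of dynamic polarizability (arbitrary units).
# All parameters below are made up for illustration, not real atomic data.
def alpha(omega, A, omega0):
    return A / (omega0**2 - omega**2)

alpha_g = lambda w: alpha(w, A=1.0, omega0=1.0)   # "ground" clock state
alpha_e = lambda w: alpha(w, A=3.0, omega0=1.2)   # "excited" clock state

def magic_frequency(lo=0.1, hi=0.95, tol=1e-12):
    """Bisect for the frequency where both polarizabilities are equal,
    so the differential AC Stark shift of the clock transition vanishes."""
    f = lambda w: alpha_e(w) - alpha_g(w)
    assert f(lo) * f(hi) < 0            # a sign change brackets the crossing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

w_magic = magic_frequency()
# Analytic crossing for this model: omega^2 = (A_e*w_g^2 - A_g*w_e^2)/(A_e - A_g) = 0.78
assert abs(w_magic - 0.78 ** 0.5) < 1e-9
# Equal polarizabilities: intensity fluctuations shift both states alike.
assert abs(alpha_g(w_magic) - alpha_e(w_magic)) < 1e-9
print("magic frequency (toy model):", round(w_magic, 6))
```

Real magic-wavelength calculations sum over many transitions of the actual atom, but the principle is the same: locate the root of the differential polarizability.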
References
Physical quantities
Atomic clocks
Atomic physics | Magic wavelength | [
"Physics",
"Chemistry",
"Mathematics"
] | 350 | [
"Physical phenomena",
"Physical quantities",
"Time",
"Time stubs",
"Quantity",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spacetime",
"Physical properties",
" and optical physics"
] |
61,903,668 | https://en.wikipedia.org/wiki/Alice%20Christine%20Stickland | Alice Christine Stickland (16 March 1906 – 16 April 1987) was an applied mathematician and astrophysics engineer with interests in radar and radiowave propagation.
Early life
Alice Christine Stickland was born in Camberwell, London, on 16 March 1906. Her father was a publisher's clerk.
Education
Stickland studied mathematics at King's College, London, and graduated with a BSc in 1927. She then studied privately while working at the Radio Research Station, Ditton Park, first receiving an MSc in mathematical physics in 1929 and then being awarded a PhD in mathematical physics from the University of London in 1943. Her dissertation was titled ‘The Propagation of the Magnetic Field of the Electromagnetic Wave along the Ground and in the Lower Atmosphere’.
Career
Stickland worked as a scientific civil servant at the Radio Research Station between 1928 and 1947. She worked with radar pioneer, Robert Watson-Watt, on long-wave propagation, Reginald Smith-Rose on short-wave propagation, and Edward Appleton on the properties of the ionosphere.
Stickland, along with Smith-Rose, read a paper entitled 'Ultra-Short Wave Propagation - Comparison Between Theory and Experimental data' at the Institution of Electrical Engineers. The paper described the results of field intensity measurements obtained between 1937 and 1939 using the Post Office radio-telephone link between Guernsey and Chaldon.
She officially retired in 1968 but continued to work as General Editor of the Annals of the International Years of the Quiet Sun (1964-65), and with the International Council for Science’s Committee on Space Research (COSPAR). She was heavily involved in the Girl Guides’ Association.
Selected publications
Ultra-Short Wave Propagation - Comparison Between Theory and Experimental data - Dr. R. L. Smith-Rose, Miss A. C. Stickland
References
1906 births
1987 deaths
British mathematicians
Applied mathematicians
British electrical engineers
Alumni of King's College London
British women engineers
People from Camberwell
British women mathematicians | Alice Christine Stickland | [
"Mathematics"
] | 391 | [
"Applied mathematics",
"Applied mathematicians"
] |
70,063,207 | https://en.wikipedia.org/wiki/Bernoulli%20quadrisection%20problem | In triangle geometry, the Bernoulli quadrisection problem asks how to divide a given triangle into four equal-area pieces by two perpendicular lines. Its solution by Jacob Bernoulli was published in 1687. Leonhard Euler formulated a complete solution in 1779.
As Euler proved, in a scalene triangle, it is possible to find a subdivision of this form so that two of the four crossings of the lines and the triangle lie on the middle edge of the triangle, cutting off a triangular area from that edge and leaving the other three areas as quadrilaterals. It is also possible for some triangles to be subdivided differently, with two crossings on the shortest of the three edges; however, it is never possible for two crossings to lie on the longest edge. Among isosceles triangles, the one whose height at its apex is 8/9 of its base length is the only one with exactly two perpendicular quadrisections. One of the two uses the symmetry axis as one of the two perpendicular lines, while the other has two lines of slope , each crossing the base and one side.
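The symmetric quadrisection mentioned above is easy to verify numerically. The sketch below (an illustrative check, not Euler's construction) quadrisects an isosceles triangle with apex (0, H) using its symmetry axis together with the horizontal line y = H(1 - 1/sqrt(2)), the height at which the area above the line is half the total; a seeded Monte Carlo estimate confirms the four pieces have equal area.

```python
import random

def quadrant_fractions(b=2.0, H=1.0, n=400_000, seed=1):
    """Monte Carlo check that the symmetry axis plus a horizontal line
    at y = H*(1 - 1/sqrt(2)) quadrisect an isosceles triangle.

    Triangle vertices: (-b/2, 0), (b/2, 0), (0, H)."""
    y_cut = H * (1 - 2 ** -0.5)   # area above this line is half the total
    rng = random.Random(seed)
    counts = [0, 0, 0, 0]
    hits = 0
    while hits < n:
        x = rng.uniform(-b / 2, b / 2)
        y = rng.uniform(0, H)
        # inside test: at height y the triangle spans |x| <= (b/2)*(1 - y/H)
        if abs(x) <= (b / 2) * (1 - y / H):
            hits += 1
            counts[(x >= 0) + 2 * (y >= y_cut)] += 1
    return [c / n for c in counts]

fracs = quadrant_fractions()
print(fracs)   # each fraction is close to 0.25
```

By symmetry the vertical axis halves the triangle, and the chosen horizontal line halves the area, so each of the four pieces has a quarter of the total.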
This subdivision of a triangle is a special case of a theorem of Richard Courant and Herbert Robbins that any plane area can be subdivided into four equal parts by two perpendicular lines, a result that is related to the ham sandwich theorem. Although the triangle quadrisection has a solution involving the roots of low-degree polynomials, the more general quadrisection of Courant and Robbins can be significantly more difficult: for any computable number there exist convex shapes whose boundaries can be accurately approximated to within any desired error in polynomial time, with a unique perpendicular quadrisection whose construction computes .
In 2022, the first place in an Irish secondary school science competition, the Young Scientist and Technology Exhibition, went to a project by Aditya Joshi and Aditya Kumar using metaheuristic methods to find numerical solutions to the Bernoulli quadrisection problem.
Notes and references
Area
Triangle geometry | Bernoulli quadrisection problem | [
"Physics",
"Mathematics"
] | 405 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Geometry",
"Geometry stubs",
"Wikipedia categories named after physical quantities",
"Area"
] |
70,064,140 | https://en.wikipedia.org/wiki/Cilofexor | Cilofexor (also known as GS-9674) is a nonsteroidal farnesoid X receptor (FXR) agonist in clinical trials for the treatment of non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH), and primary sclerosing cholangitis (PSC). It is being investigated for use alone or in combination with firsocostat, selonsertib, or semaglutide. In rat models and human clinical trials of NASH it has been shown to reduce fibrosis and steatosis, and in human clinical trials of PSC it improved cholestasis and reduced markers of liver injury.
It is being developed by the pharmaceutical company Gilead Sciences.
References
Pyridines
Chlorobenzene derivatives
Cyclopropyl compounds
Oxazoles
Azetidines
Carboxylic acids
Farnesoid X receptor agonists | Cilofexor | [
"Chemistry"
] | 200 | [
"Carboxylic acids",
"Functional groups"
] |
70,065,399 | https://en.wikipedia.org/wiki/Materials%20Science%20and%20Engineering%20B | Materials Science and Engineering: B — Advanced Functional Solid-State Materials is a peer-reviewed scientific journal. It is the section of Materials Science and Engineering dedicated to "calculation, synthesis, processing, characterization, and understanding of advanced quantum materials" and is published monthly by Elsevier. It aims at providing a leading international forum for material researchers across the disciplines of theory, experiment, and device applications. The current editor-in-chief is Jing Xia (University of California Irvine).
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.407.
References
External links
Physics review journals
Materials science journals
Elsevier academic journals
Academic journals established in 1993
English-language journals
Monthly journals | Materials Science and Engineering B | [
"Materials_science",
"Engineering"
] | 142 | [
"Materials science journals",
"Materials science"
] |
70,076,585 | https://en.wikipedia.org/wiki/High%20integrity%20software | High-integrity software is software whose failure may cause serious damage with possible "life-threatening consequences." "Integrity is important as it demonstrates the safety, security, and maintainability of... code." Examples of high-integrity software are nuclear reactor control, avionics software, automotive safety-critical software and process control software.
A number of standards are applicable to high-integrity software, including:
DO-178C, Software Considerations in Airborne Systems and Equipment Certification
CENELEC EN 50128, Railway applications - Communication, signalling and processing systems - Software for railway control and protection systems
IEC 61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES)
ISO 26262, Road Vehicles - Functional Safety (especially 'part 6' of the standard, which is titled "Product development at the software level")
See also
Safety-critical system
High availability software
Formal methods
Software of unknown pedigree
References
External links
Software by type
Software quality
Safety engineering | High integrity software | [
"Technology",
"Engineering"
] | 214 | [
"Systems engineering",
"Safety engineering",
"Software by type",
"Software engineering stubs",
"Software engineering"
] |
60,745,023 | https://en.wikipedia.org/wiki/Symbol%20Nomenclature%20For%20Glycans | The Symbol Nomenclature For Glycans (SNFG) is a community-curated standard for the depiction of simple monosaccharides and complex carbohydrates (glycans) using various colored-coded, geometric shapes, along with defined text additions. It is hosted by the National Center for Biotechnology Information at the NCBI-Glycans Page. It is curated by an international groups of researchers in the field that are collectively called the SNFG Discussion Group. The overall goal of the SNFG is to:
Facilitate communications and presentations of monosaccharides and glycans for researchers in the Glycosciences, and for scientists and students less familiar with the field.
Ensure uniform usage of the nomenclature in the literature, thus helping to ensure scientific accuracy in journal and online publications.
Continue to develop the SNFG and its applications to aid wider use by the scientific community.
Description and examples
The SNFG consists of a table that provides color coded symbols for various monosaccharides that are commonly found in nature. It also includes a set of footnotes that describe rules for rendering glycans, including guidelines on how to modify the base set of symbols depicted in the table. These footnotes are organized into 10 themes that provide streamlined recommendations for: i. general usage of the SNFG; ii. CMYK / RGB color codes; iii. symbol colors and shapes; iv. ring configurations; v. bond linkage presentation; vi. sialic acids; vii. glycan modifications; viii. amino substitutions; ix. handling ambiguous or partially defined glycans; and x. depicting non-glycan entities using SNFG renderings. More details are available at the main SNFG webpage, which is periodically updated with additional directions.
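As a small illustration of the shape/colour pairing described above, the lookup below covers a handful of common monosaccharides (an excerpt only; the NCBI SNFG page is the authoritative source for the full table and its CMYK/RGB codes):

```python
# A small excerpt of the SNFG symbol table: one shape per
# monosaccharide class, with colour distinguishing stereoisomers.
SNFG_SYMBOLS = {
    "Glc":    ("circle",   "blue"),    # glucose
    "Man":    ("circle",   "green"),   # mannose
    "Gal":    ("circle",   "yellow"),  # galactose
    "GlcNAc": ("square",   "blue"),    # N-acetylglucosamine
    "GalNAc": ("square",   "yellow"),  # N-acetylgalactosamine
    "Fuc":    ("triangle", "red"),     # fucose
    "Neu5Ac": ("diamond",  "purple"),  # N-acetylneuraminic acid (a sialic acid)
}

def describe(monosaccharide: str) -> str:
    shape, color = SNFG_SYMBOLS[monosaccharide]
    return f"{monosaccharide}: {color} {shape}"

print(describe("GlcNAc"))   # GlcNAc: blue square
```

Note how Glc, Man and Gal share the circle shape (hexoses) while differing in colour, which is exactly the stereoisomer convention the table encodes.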
The monosaccharides can be linked together to describe complex carbohydrate structures or glycans. More exhaustive cases for mammalian species, other eukaryotes, plants and microbes are considered at the main SNFG page.
Software
Several software tools have been developed to support SNFG implementation by the community including:
GlycoGlyph: An open source glycan drawing and naming tool which enables drawing glycans in SNFG format using either a graphical user interface or from names in CFG linear nomenclature format. When structures are drawn, the application produces both the CFG linear name and the GlycoCT which in turn can be used to get the GlyTouCan ID numbers for the glycan.
3D-SNFG: For the cartoon representation of the SNFG in atomic models of carbohydrates. Here, the monosaccharides are depicted as large shapes, or icons centered within the rings.
DrawGlycan-SNFG: For the conversion of IUPAC-condensed string inputs to generate glycan and glycopeptide drawings. Bond fragmentation, glycan descriptors and other carbohydrate modifications can be included using string inputs.
GlycanBuilder2: A standalone version of GlycanBuilder that supports the expanded SNFG nomenclature.
Sugarsketcher: An intuitive web-based drag and drop tool for rendering SNFG images.
The SNFG nomenclature has also been adopted as a standard by major databases and journals in the Biomedical Sciences.
History
In 1978, Stuart Kornfeld and colleagues at the Washington University School of Medicine presented a system for symbolic representation of vertebrate glycans. This system gained popularity when it was implemented as a core method for glycan representation in the NCBI text book Essentials of Glycobiology edited by Ajit Varki (University of California, San Diego) and colleagues. While the first edition of this text published in 1999 used black-and-white symbols similar to the Kornfeld system, color was introduced in the second edition of the text (2009). The advantage of color is that different monosaccharide stereoisomers could now be depicted using the same shape, only with different colors. The system of carbohydrate representation was adopted and widely disseminated by many, including the NIGMS-funded Consortium for Functional Glycomics, and thus was often referred to as "CFG Nomenclature". This color representation was vastly expanded in the third edition of the text to include 49 new monosaccharides that appear mostly in non-vertebrates, microbes and plants. Inputs and recommendations from a number of scientists beyond the editors of the Essentials textbook were included in this implementation, and the release of the expanded glycan symbol system was coordinated with the IUPAC Carbohydrate Nomenclature committee. For long-term development of this symbol nomenclature and standardization of glycan representation in the Glycosciences, in 2015, the Essentials editors suggested that the representation be formally called SNFG ('Symbol Nomenclature For Glycans'), and that future development be entrusted to a global community of scientists. To aid this development, each of the SNFG monosaccharide symbols was linked to PubChem entries at NCBI/NLM and a dedicated website at NCBI was established for future SNFG updates. Thus, the development of the SNFG is currently undertaken by an international community of scientists called the SNFG Discussion Group.
References
External links
Symbol Nomenclature For Glycans (SNFG) site at NCBI-Glycans
Chemical nomenclature
Carbohydrates | Symbol Nomenclature For Glycans | [
"Chemistry"
] | 1,166 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organic compounds",
"Carbohydrate chemistry",
"nan"
] |
60,748,891 | https://en.wikipedia.org/wiki/Philosophy%20of%20ecology | Philosophy of ecology is a concept under the philosophy of science, which is a subfield of philosophy. Its main concerns centre on the practice and application of ecology, its moral issues, and the intersectionality between the position of humans and other entities. This topic also overlaps with metaphysics, ontology, and epistemology, for example, as it attempts to answer metaphysical, epistemic and moral issues surrounding environmental ethics and public policy.
The aim of the philosophy of ecology is to clarify and critique the 'first principles', the fundamental assumptions present in science and the natural sciences. Although there is not yet a consensus about what the philosophy of ecology presupposes, and the definition of ecology is itself up for debate, there are some central issues that philosophers of ecology consider when examining the role and purpose of what ecologists practice. For example, this field considers the 'nature of nature', the methodological and conceptual issues surrounding ecological research, and the problems associated with these studies within their contextual environment.
Philosophy addresses the questions that make up ecological studies, and presents a different perspective into the history of ecology, environmental ethics in ecological science, and the application of mathematical models.
Background
History
Ecology is considered a relatively new scientific discipline, having been acknowledged as a formal scientific field in the late nineteenth and early twentieth century. Although an established definition of ecology has yet to be presented, there are some commonalities in the questions proposed by ecologists.
Ecology was considered “the science of the economy [and] habits,” according to Stauffer, and was concerned with understanding the external interrelations between organisms. It was recognised formally as a field of science in 1866 by the German zoologist Ernst Haeckel (1834-1919). Haeckel coined the term ‘ecology’ in his book Generelle Morphologie der Organismen (1866), in an attempt to present a synthesis of morphology, taxonomy, and the evolution of animals.
Haeckel aimed to refine the notion of ecology and to propose a new area of study investigating population growth and stability, influenced by Charles Darwin and his work in On the Origin of Species (1859). He first presented ecology as a term interchangeable with an area of biology, an aspect of the ‘physiology of relationships’. In the English translation by Stauffer, Haeckel defined ecology as “the whole science of the relationship of organism to environment including, in the broad sense, all the ‘conditions for existence.’” This neologism was used to distinguish studies conducted in the field, as opposed to those conducted within the laboratory. He expanded upon this definition of ecology after considering the Darwinian theory of evolution and natural selection.
Defining ecology
There is yet to be an established consensus amongst philosophers about the exact definition of ecology; however, there are commonalities in the research agendas that help differentiate this discipline from other natural sciences.
Ecology underlies an ecological worldview, wherein interaction and connectedness are emphasized and developed through several themes:
The idea that living and non-living beings are related and interconnected components in the biospherical web.
Living entities possess an identity that expresses their relatedness.
It is essential to understand the system of the biosphere and the components as a whole, rather than as their parts (also known as holism).
Occurrence of naturalism, whereby all living organisms are governed by the same natural laws.
Non-anthropocentrism, the rejection of anthropocentrism, which views humans as the central entity and holds that value in the non-human world lies in serving human interests. Non-anthropocentrism dictates that the non-human world retains value of its own and does not exist merely to benefit human interests.
Anthropogenic degradation of the environment dictates a necessity for environmental ethics.
There are three main disciplinary categories of ecology: Romantic ecology, political ecology, and scientific ecology. Romantic ecology, also called aesthetic or literary ecology, was a counter-movement to the increasingly anthropocentric and mechanistic ideology of nineteenth-century Europe and America, especially during the Industrial Revolution. Some notable figures of this period include William Wordsworth (1770-1850), John Muir (1838-1914), and Ralph Waldo Emerson (1803-1882). The influence of romantic ecology also extends into politics, where its interrelation with ethics underlies political ecology.
Political ecology, also known as axiological or values-based ecology, considers the socio-political implications surrounding the ecological landscape. Some fundamental questions political ecologists ask generally focus on the ethics between nature and society. The American environmentalist Aldo Leopold (1887-1948) affirmed that ethics should be extended to encompass the land and biotic communities as well, rather than pertaining exclusively to individuals. In this sense, political ecology can be denoted as a form of environmental ethics.
Finally, scientific ecology, or commonly known as ecology, addresses central concerns, such as understanding the role of the ecologists and what they study, and the types of methodology and conceptual issues that surround the development of these studies and what type of problem this may present.
Contemporary ecology
Defining contemporary ecology requires looking at certain fundamental principles, namely the principles of system and evolution. System entails understanding processes in which interconnected sections establish a holistic identity, neither separable from nor predictable from their components. Evolution results from the ‘generation of variety’ as a means to produce change. Certain entities that interact with their environments create evolution through survival, and it is the production of changes that shapes ecological systems. This evolutionary process is central to ecology and biology.
There are three main concerns that ecologists generally concur with: naturalism, scientific realism, and the comprehensive scope of ecology.
The philosopher Frederick Ferre defines two primary meanings of nature in Being and Value: Toward a Constructive Postmodern Metaphysics (1996). The first definition excludes 'artifacts of human manipulation': nature, in this sense, comprises only things not of artificial origin. The second definition takes nature to be everything not of supernatural origin, which in this case includes artefacts of human manipulation. Confusion arises, however, because the two connotations are used interchangeably in different contexts by different ecologists.
Naturalism
There is yet to be a defined explanation of naturalism within the philosophy of ecology; however, its current usage connotes a system whose reality is subsumed by nature, independent of any ‘supernatural’ world or existence. Naturalism asserts that scientific methodology is sufficient to obtain knowledge about reality. Naturalists who support this perspective view mental, biological, and social operations as physical entities. Consider, for example, a pebble and a human being: both existences occur concurrently within the same space and time. Applications of scientific methods remain relevant and sufficient because they explain the spatiotemporal processes that physical entities, as spatiotemporal beings, undergo.
Methodology
Holism vs reductionism
The holism-reductionism debate encompasses ontological, methodological and epistemic concerns. Common questions involve examining whether the means to understanding an object is through critical analysis of its constituents (reductionism) or ‘contextualisation’ of its components (holism) to retain phenomenological value. Holists maintain that certain unique properties are attributed to the abiotic or biotic entity, such as an ecosystem, and that these characteristics are not intrinsically applicable to its separate components; analysis of the parts alone is insufficient for obtaining knowledge of the entire unit. At the other end of the spectrum, reductionists argue that these parts are independent of each other, and that knowledge of the components provides understanding of the composite entity. This approach, however, has been criticised, as the entity denotes not just the unity of its aggregates but rather a synthesis between the whole and its parts.
Rationalism vs empiricism
Rationalism within scientific ecology holds that such methodologies remain necessary and relevant in their role of establishing ecological theory as a guide. Rationalist methodology became pronounced in the 1920s with the logistic models of Alfred Lotka (1956) and Vito Volterra (1926), known as the Lotka-Volterra equations. Empiricism, by contrast, establishes the need for observational and empirical testing. An obvious consequence of this paradigm is the presence and usage of a pluralistic methodology, although there has yet to be a unifying model adequate for application in ecology, nor has a pluralistic theory been established.
Environmental ethics
Environmental ethics emerged in the 1970s in response to traditional anthropocentrism. It studies the moral implications of interactions between society and the environment, prompted by concerns over environmental degradation, and it challenged the ethical positionality of humans. A common belief among environmental philosophers is the view that biological entities are morally valuable independently of human standards. Within this field, there is the shared assumption that environmental issues are prominently anthropogenic, and that this stems from an anthropocentric outlook. The basis for rejecting anthropocentrism is to refute the belief that non-human entities are not worthy of value.
A main concern in environmental ethics is anthropogenically induced mass extinction within the biosphere. The attempt to interpret it non-anthropocentrically is vital to the foundations of environmental ethics. Paleontology, for example, details mass extinctions as pivotal precursors to major radiations. Those with non-anthropocentric views interpret the death of the dinosaurs as a preservation of biodiversity, in contrast to anthropocentric values. As ecology is closely entwined with ethics, understanding environmental approaches requires understanding the world, which is the role of ecology and environmental ethics. The main issue is also to incorporate natural entities in its ethical concern, which involves conscious, sentient, living and existing beings.
Mathematical models
Mathematical models play a role in questioning the issues presented in ecology and conservation biology. There are mainly two types of models used to explore the relationship between applications of mathematics and practice within ecology. The first are descriptive models, which detail, for example, single-species population growth, and multi-species models such as the Lotka-Volterra predator-prey model or the Nicholson-Bailey host-parasitoid model. These models explain behavioural activity through the idealisation of the intended target. The second type are normative models, which describe not only the current state of variables but how certain variables should behave.
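The Lotka-Volterra predator-prey model mentioned above couples prey x and predators y via dx/dt = αx − βxy and dy/dt = δxy − γy. A minimal forward-Euler integration (with parameter values invented purely for illustration) reproduces the characteristic coupled oscillation:

```python
def lotka_volterra(x0=10.0, y0=5.0, alpha=1.0, beta=0.1,
                   delta=0.075, gamma=1.5, dt=0.001, steps=20000):
    """Integrate dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y with Euler steps.
    Returns the prey and predator trajectories."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        xs.append(x)
        ys.append(y)
    return xs, ys

prey, predators = lotka_volterra()
# both populations stay positive and cycle around the
# equilibrium (gamma/delta, alpha/beta) = (20, 10)
print(min(prey) > 0, min(predators) > 0)
```

With these parameters the prey population repeatedly overshoots its equilibrium value of 20 and is then pulled back down by the lagging predator peak, the pattern the criticisms quoted later in the article take issue with.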
In ecology, complicated biological interactions require explanation, which is where models are used to investigate hypotheses. For example, the identification and explanation of certain organisms and of population abundance are essential for understanding the role of ecology and biodiversity. Applying equations yields a prediction, or a model suggesting an answer to the questions that come up. Mathematical models also provide contextual supporting information regarding factors on a wider, more global scale.
The distinction between normative models and scientific models matters because their differing standards entail different applications. Such models aid in illustrating decision-making outcomes and in tackling group decisions. For example, a mathematical model can incorporate the environmental decisions of the people within a group holistically: the model represents the values of each member, with the respective weightings held in a matrix, and then delivers the final result. In cases of conflict about proceedings, or about how to represent certain quantities, the model may be limited to the point of being of no use. The number of idealisations present in the model is a further limitation.
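A weighted group-decision matrix of the kind sketched above can be written in a few lines. The members, weights, options and scores below are all hypothetical, chosen only to show how weighted aggregation delivers a final result:

```python
# Hypothetical example: three group members score two management
# options; the member weights express how much say each member has.
weights = {"ecologist": 0.5, "economist": 0.3, "resident": 0.2}
scores = {                      # scores[member][option], on a 0-10 scale
    "ecologist": {"restore wetland": 9, "build levee": 3},
    "economist": {"restore wetland": 4, "build levee": 8},
    "resident":  {"restore wetland": 7, "build levee": 6},
}

def aggregate(weights, scores):
    """Weighted sum of each option's scores across all members."""
    options = next(iter(scores.values())).keys()
    return {opt: sum(weights[m] * scores[m][opt] for m in weights)
            for opt in options}

totals = aggregate(weights, scores)
best = max(totals, key=totals.get)
print(totals, "->", best)
```

The limitation noted in the text shows up directly: the result depends entirely on the chosen weights and scales, so disagreement about those inputs leaves the model unable to arbitrate.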
Criticisms
The process of mathematical modelling presents a distinction between reality and theory, or more specifically, between the application of models and the genuine phenomena those models aim to represent. Critics of the employment of mathematical models within ecology question their use and the extent of their relevance, prompted by an imbalance between investigative procedure and theoretical propositions. According to Weiner (1995), deterministic models have been ineffectual within ecology. The Lotka-Volterra models, Weiner argues, have not yielded testable predictions, and in cases where theoretical models within ecology have produced testable predictions, those predictions have been refuted.
The purpose of the Lotka-Volterra models is to track predator and prey interaction and their population cycles. The usual pattern maintains that the predator population follows the prey population's fluctuations: as the prey population increases, so does the predator population, and likewise as the prey population decreases, the predator population decreases. However, Weiner argues that, in reality, prey populations maintain their oscillating cycles even if the predator is removed, so the model is an inaccurate representation of natural phenomena. Critics also argue that the idealisation inherent in modelling makes its application methodologically deficient, and they maintain that mathematical modelling within ecology is an oversimplification of reality and a misrepresentation, or insufficient representation, of the biological system.
The application of simple versus complex models is also up for debate. There is concern regarding model results, in that the complexities of a system may not be replicated or adequately captured even with a complicated model.
See also
Chemical ecology
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Political ecology
Sensory ecology
Spiritual ecology
Sustainable development
References
ecology
Ecology | Philosophy of ecology | [
"Biology"
] | 2,738 | [
"Ecology"
] |
41,419,738 | https://en.wikipedia.org/wiki/Cobordism%20hypothesis | In mathematics, the cobordism hypothesis, due to John C. Baez and James Dolan, concerns the classification of extended topological quantum field theories (TQFTs). In 2008, Jacob Lurie outlined a proof of the cobordism hypothesis, though the details of his approach have yet to appear in the literature as of 2022. In 2021, Daniel Grady and Dmitri Pavlov claimed a complete proof of the cobordism hypothesis, as well as a generalization to bordisms with arbitrary geometric structures.
Formulation
For a symmetric monoidal -category which is fully dualizable and every -morphism of which is adjointable, for , there is a bijection between the -valued symmetric monoidal functors of the cobordism category and the objects of .
Motivation
Symmetric monoidal functors from the cobordism category correspond to topological quantum field theories. The cobordism hypothesis for topological quantum field theories is the analogue of the Eilenberg–Steenrod axioms for homology theories. The Eilenberg–Steenrod axioms state that a homology theory is uniquely determined by its value for the point, so analogously what the cobordism hypothesis states is that a topological quantum field theory is uniquely determined by its value for the point. In other words, the bijection between -valued symmetric monoidal functors and the objects of is uniquely defined by its value for the point.
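In symbols, one common way of writing Lurie's formulation of this statement (notation varies across the literature) is that evaluation at the point gives an equivalence between framed extended TQFTs valued in C and the underlying ∞-groupoid of fully dualizable objects of C:

```latex
% Evaluation at the point, Z -> Z(pt), induces an equivalence
\[
  \mathrm{Fun}^{\otimes}\!\left(\mathrm{Bord}_n^{\mathrm{fr}},\,
  \mathcal{C}\right) \;\simeq\; \left(\mathcal{C}^{\mathrm{fd}}\right)^{\sim}
\]
```

Here Bord_n^fr denotes the framed bordism category, C^fd the fully dualizable objects, and (−)^~ the underlying ∞-groupoid.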
See also
Cobordism
References
Further reading
Seminar on the Cobordism Hypothesis and (Infinity,n)-Categories, 2013-04-22
Jacob Lurie (4 May 2009). On the Classification of Topological Field Theories
External links
Quantum field theory | Cobordism hypothesis | [
"Physics",
"Mathematics"
] | 354 | [
"Quantum field theory",
"Quantum mechanics",
"Topology stubs",
"Topology",
"Quantum physics stubs"
] |
41,419,857 | https://en.wikipedia.org/wiki/Simplicial%20space | In mathematics, a simplicial space is a simplicial object in the category of topological spaces. In other words, it is a contravariant functor from the simplex category Δ to the category of topological spaces.
References
Homotopy theory
Topological spaces | Simplicial space | [
"Mathematics"
] | 58 | [
"Topological spaces",
"Mathematical structures",
"Topology",
"Space (mathematics)"
] |
41,420,172 | https://en.wikipedia.org/wiki/Beta%20attenuation%20monitoring | Beta attenuation monitoring (BAM) is an air monitoring technique employing the absorption of beta radiation by solid particles extracted from air flow. The technique allows for the detection of PM10 and PM2.5, which are monitored by most air pollution regulatory agencies. The main principle is based on a kind of Bouguer (Lambert–Beer) law: the amount by which the flow of beta radiation (electrons) is attenuated by a solid matter is exponentially dependent on its mass and not on any other feature (such as density, chemical composition or some optical or electrical properties) of this matter. So, the air is drawn from outside of the detector through an "infinite" (cycling) ribbon made from some filtering material so that the particles are collected on it. There are two sources of beta radiation placed one before and one after the region where air flow passes through the ribbon leaving particles on it; and there are also two detectors on the opposite side of the ribbon, facing the detectors. The sources' intensity and detectors' sensitivity being the same (or corrected with appropriate calibration lookup table), the intensity of beta rays detected by one of detectors is compared to that of the other. Thus one can deduce how much mass has the ribbon acquired upon being exposed to air flow; knowing the drain velocity, actual particle mass concentration in air could be assessed.
The radiation source can be a gas chamber filled with 85Kr gas, or pieces of 14C-rich polymer plastic, such as PMMA. The detector is simply a Geiger–Mueller counter. Unfortunately, the particulate matter content measured is affected by the moisture content of the air.
To discriminate between particles of different sizes (e.g., between PM10 and PM2.5), some preliminary separation can be accomplished, for example, with a cyclone battery.
A similar method exists in which, instead of a beta particle flow, X-ray fluorescence spectroscopic monitoring is applied on either side of the air flow's contact with the ribbon. This makes it possible to obtain not only a cumulative measurement of particle mass, but also the particles' average chemical composition (the technique works for potassium and elements heavier than it).
References
Literature
List of Designated Reference and Equivalent Methods. EPA: Research Triangle Park, 2013. Online: http://www.epa.gov/ttn/amtic/criteria.html.
Air pollution
Aerosols
Detectors
Measuring instruments
Radioactivity
Meteorological instrumentation and equipment | Beta attenuation monitoring | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 500 | [
"Meteorological instrumentation and equipment",
"Measuring instruments",
"Colloids",
"Aerosols",
"Nuclear physics",
"Radioactivity"
] |
41,422,644 | https://en.wikipedia.org/wiki/SDSS%20J1106%2B1939 | SDSS J1106+1939 (SDSS J110644.95+193930.6) is a quasar, notable for its energetic matter outflow. It is the record holder for the most powerful matter outflow by a quasar. The engine is a supermassive black hole, pulling in matter at the rate of 400 solar masses per year and ejecting it at the speed of 8000 km/s. The outflow produces a luminosity of 1046 ergs. This makes the quasar more than two trillion times brighter than the Sun, one of the most luminous quasars on record. The quasar has the visual magnitude of about ~19, despite its extreme distance of 11 billion light years. The outflow of matter from the quasar produces about 1/20 of its luminosity.
References
Quasars
Supermassive black holes
SDSS objects
Leo (constellation) | SDSS J1106+1939 | [
"Physics",
"Astronomy"
] | 196 | [
"Black holes",
"Galaxy stubs",
"Unsolved problems in physics",
"Supermassive black holes",
"Astronomy stubs",
"Constellations",
"Leo (constellation)"
] |
41,423,071 | https://en.wikipedia.org/wiki/Bolshoi%20cosmological%20simulation | The Bolshoi simulation, a computer model of the universe run in 2010 on the Pleiades supercomputer at the NASA Ames Research Center, was the most accurate cosmological simulation to that date of the evolution of the large-scale structure of the universe.
The Bolshoi simulation used the now-standard ΛCDM (Lambda-CDM) model of the universe and the WMAP five-year and seven-year cosmological parameters from NASA's Wilkinson Microwave Anisotropy Probe team. "The principal purpose of the Bolshoi simulation is to compute and model the evolution of dark matter halos, thereby rendering the invisible visible for astronomers to study, and to predict visible structure that astronomers can seek to observe." “Bolshoi” is a Russian word meaning “big.”
The first two of a series of research papers describing Bolshoi and its implications were published in 2011 in the Astrophysical Journal. The first data release of Bolshoi outputs has been made publicly available to the world's astronomers and astrophysicists. The data include output from the Bolshoi simulation and from the BigBolshoi, or MultiDark, simulation of a volume 64 times that of Bolshoi. The Bolshoi-Planck simulation, with the same resolution as Bolshoi, was run in 2013 on the Pleiades supercomputer using the Planck satellite team's cosmological parameters released in March 2013. The Bolshoi-Planck simulation is currently being analyzed in preparation for publication and distribution of its results in 2014.
Bolshoi simulations continue to be developed as of 2018.
Contributors
Joel R. Primack's team at the University of California, Santa Cruz, partnered with Anatoly Klypin's group at New Mexico State University, in Las Cruces to run and analyze the Bolshoi simulations. Further analysis and comparison with observations by Risa Wechsler's group at Stanford University and others are reflected in the papers based on the Bolshoi simulations.
Rationale
A successful large-scale simulation of the evolution of galaxies, with results consistent with what astronomers actually see in the night sky, provides evidence that the theoretical underpinnings of the models employed, i.e., the supercomputer implementations of ΛCDM, are a sound basis for understanding galactic dynamics and the history of the universe, and it opens avenues to further research. The Bolshoi simulation is not the first large-scale simulation of the universe, but it is the first to rival the extraordinary precision of modern astrophysical observations.
The previous largest and most successful simulation of galactic evolution was the Millennium Simulation Project, led by Volker Springel. Although the success of that project stimulated more than 400 research papers, the Millennium simulations used early WMAP cosmological parameters that have since become obsolete. As a result, they led to some predictions, for example about the distribution of galaxies, that do not match very well with observations. The Bolshoi simulations use the latest cosmological parameters, are higher in resolution, and have been analyzed in greater detail.
Methods
The Bolshoi simulation follows the evolving distribution of a statistical ensemble of 8.6 billion particles of dark matter, each of which represents about 100 million solar masses, in a cube of 3-dimensional space about 1 billion light years on edge. Dark matter and dark energy dominate the evolution of the cosmos in this model. The dynamics are modeled with the ΛCDM theory and Albert Einstein's general theory of relativity, with the model including cold dark matter (CDM) and the Λ cosmological constant term simulating the cosmic acceleration referred to as dark energy.
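The particle count, particle mass, and box size quoted above admit a rough consistency check (round values from the text are assumed, so only order-of-magnitude agreement with the cosmological matter density Ω_m ≈ 0.27 should be expected):

```python
M_SUN = 1.989e30   # kg, solar mass
LY = 9.461e15      # m, one light year

n_particles = 8.6e9
m_particle = 1e8 * M_SUN        # "about 100 million solar masses" each
box_side = 1e9 * LY             # "about 1 billion light years" on a side

rho = n_particles * m_particle / box_side**3   # mean simulated density, kg/m^3
rho_crit = 8.6e-27                             # kg/m^3, rough present-day critical density
omega_m_implied = rho / rho_crit               # comes out a little over 0.2
```

With these rounded inputs the implied matter fraction is ~0.23, in the right ballpark for the ΛCDM matter density the simulation was designed to reproduce.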
The first 100 million years (Myr) or so of the evolution of the universe after the Big Bang can be derived analytically. The Bolshoi simulation was started at redshift z=80, corresponding to about 20 Myr after the Big Bang. Initial parameters were calculated with linear theory as implemented by the CAMB tools, part of the WMAP website. The tools provide the initial conditions, including a statistical distribution of positions and velocities of the particles in the ensemble, for the much more demanding Bolshoi simulation of the next approximately 13.8 billion years. The experimental volume thus represents a random region of the universe, so comparisons with observations must be statistical.
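The quoted starting time can be checked with the matter-dominated approximation t ≈ (2/3)·H0⁻¹·Ω_m^(−1/2)·(1+z)^(−3/2) (round parameter values assumed below; this neglects radiation, so it only gives the right ballpark):

```python
H0 = 70.0              # km/s/Mpc, assumed round Hubble constant
OMEGA_M = 0.27         # assumed matter density parameter

MPC_KM = 3.0857e19     # km in one megaparsec
SEC_PER_MYR = 3.156e13 # seconds in one million years

t_hubble = MPC_KM / H0 / SEC_PER_MYR               # Hubble time, ~1.4e4 Myr

z = 80
t = (2 / 3) * t_hubble / OMEGA_M**0.5 * (1 + z)**-1.5   # age at z=80, in Myr
```

The result is a few tens of Myr, consistent with the "about 20 Myr after the Big Bang" figure in the text.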
The Bolshoi simulation employs a version of an adaptive mesh refinement (AMR) algorithm called an adaptive refinement tree (ART), in which a cube in space with more than a predefined density of matter is recursively divided into a mesh of smaller cubes. The subdivision continues to a limiting level, chosen to avoid using too much supercomputer time. Neighboring cubes are not permitted to vary by too many levels, in the case of Bolshoi by more than one level of subdivision, to avoid large discontinuities. The AMR/ART method is well suited to model the increasingly inhomogeneous distribution of matter that evolves as the simulation proceeds. “Once constructed, the mesh, rather than being destroyed at each time step, is promptly adjusted to the evolving particle distribution.” As the Bolshoi simulation ran, the position and velocity of each of the 8.6 billion particles representing dark matter was recorded in 180 snapshots roughly evenly spaced over the simulated 13.8-billion-year run on the Pleiades supercomputer. Each snapshot was then analyzed to find all the dark matter halos and the properties of each (particle membership, location, density distribution, rotation, shape, etc.). All this data was then used to determine the entire growth and merging history of every halo. These results are used in turn to predict where galaxies will form and how they will evolve. How well these predictions correspond to observations provides a measure of the success of the simulation. Other checks were also made.
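The refinement idea can be sketched in a few lines (a one-dimensional toy illustration, not the actual ART code; it also omits the neighbour-level constraint described above):

```python
import random

def refine(cell_particles, threshold, depth=0, max_depth=8):
    """Recursively subdivide a (1-D, for illustration) cell whose particle
    count exceeds `threshold`: dense regions get a finer mesh, sparse
    regions stay coarse. `cell_particles` is a list of positions in [0, 1),
    rescaled to the child cell's own [0, 1) coordinates at each split."""
    if depth == max_depth or len(cell_particles) <= threshold:
        return {"n": len(cell_particles), "depth": depth}
    left = [2 * x for x in cell_particles if x < 0.5]
    right = [2 * x - 1 for x in cell_particles if x >= 0.5]
    return {
        "n": len(cell_particles),
        "depth": depth,
        "children": [refine(left, threshold, depth + 1, max_depth),
                     refine(right, threshold, depth + 1, max_depth)],
    }

# A clustered particle distribution refines deeply only near the cluster.
random.seed(0)
particles = [random.gauss(0.3, 0.01) % 1.0 for _ in range(1000)]
tree = refine(particles, threshold=32)
```

Only the branch containing the cluster keeps subdividing; the empty half of the domain terminates immediately, which is the whole point of adaptive refinement.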
Results
The Bolshoi simulation is considered to have produced the best approximation to reality so far obtained for so large a volume of space, about 1 billion light years across. “Bolshoi produces a model universe that bears a striking and uncanny resemblance to the real thing. Starting with initial conditions based on the known distribution of matter shortly after the Big Bang, and using Einstein’s general theory of relativity as the ‘rules’ of the simulation, Bolshoi predicts a modern-day universe with galaxies lining up into hundred-million-light-year-long filaments that surround immense voids, forming a cosmic foam-like structure that precisely matches the cosmic web as revealed by deep galaxy studies such as the Sloan Digital Sky Survey. To achieve such a close match, Bolshoi is clearly giving cosmologists a fairly accurate picture of how the universe actually evolved.” The Bolshoi simulation found that the Sheth–Tormen approximation overpredicts the abundance of halos by a factor of for redshifts .
Support
This research was supported by grants from NASA and the National Science Foundation (U.S.) to Joel Primack and Anatoly Klypin, including massive grants of supercomputer time on the NASA Advanced Supercomputing (NAS) supercomputer Pleiades at NASA Ames Research Center. Hosting of the Bolshoi outputs and analyses at Leibniz Institute for Astrophysics Potsdam (AIP) is partially supported by the MultiDark grant from the Spanish MICINN Programme.
In popular culture
A visualization from the Bolshoi simulation was narrated in the National Geographic TV special Inside the Milky Way. The Icelandic singer-songwriter Björk used footage from the Bolshoi cosmological simulation in the performance of her musical number “Dark Matter” in her Biophilia concert.
References
References for figure
Mantz, A., Allen, S. W., Ebeling, H., & Rapetti, D. 2008, MNRAS, 387, 1179
Henry, J. P., Evrard, A. E., Hoekstra, H., Babul, A., & Mahdavi, A. 2009, ApJ,691, 1307
Vikhlinin, A., Kravtsov, A. V., Burenin, R. A., et al. 2009, ApJ, 692, 1060
Rozo, E., Rykoff, E. S., Evrard, A., et al. 2009, ApJ, 699, 768
External links
A. Klypin’s (NMSU) Bolshoi Cosmological Simulation Website
Bolshoi Movies
Physical cosmology
Cosmological simulation | Bolshoi cosmological simulation | [
"Physics",
"Astronomy"
] | 1,814 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Computational physics",
"Cosmological simulation",
"Physical cosmology"
] |
65,720,315 | https://en.wikipedia.org/wiki/Death%20regulator%20Nedd2-like%20caspase | Death regulator Nedd2-like caspase (Nc, Nedd2-like caspase or Dronc) was first identified and characterised in Drosophila in 1999 as a cysteine protease containing an amino-terminal caspase recruitment domain. At first, it was thought to be an effector caspase involved in apoptosis, but subsequent findings have shown that it is, in fact, an initiator caspase with a crucial role in this type of programmed cell death.
Structure
Caspase Dronc is a Drosophila melanogaster protein encoded by the Dronc gene. It belongs to the cysteine-aspartic protease family, as it is a protease enzyme that takes part in programmed cell death processes. It is composed of beta-sheet structures surrounded by alpha-helices, which is a common feature in the protein family. Caspases are formed by assembled or single active heterodimers that come from the intra-chain proteolytic cleavage of inactive zymogens. This cleavage is coordinated by a specific initiator caspase (which undergoes adaptor-assisted self-activation) for each effector caspase. A common feature among initiator caspases is the presence of a long amino-terminal domain with intermolecular interaction motifs such as the caspase recruitment domain (CARD).
Five caspases have been identified in Drosophila. Two of these present long prodomains (structural domains of the caspases) and are considered to be initiator caspases (Dronc and DCP-2 / DREDD). The other three (DCP-1, DrICE and DECAY), which have short prodomains, act as effector caspases activated by initiator caspases. The uncleaved Dronc zymogen is present only as a monomer, while the autocleaved protein forms a homodimer. Its autocatalytic cleavage is presumed to activate the protein by generating a dimer in which the two monomers mutually stabilise each other's active sites. Caspase Dronc's active site is located between positions 271 and 318 of the protein sequence.
Genetics
The Dronc gene is located on chromosome 3L of the fly, between positions 9,968,479 bp and 9,971,002 bp. It encodes a single polypeptide, and 66 alleles of the gene have been reported. Two mutations have been found at positions 150 and 184, respectively. The Dronc protein has a length of 450 amino acids and a mass of 51,141 Da.
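A small arithmetic check on these figures: the gene span comfortably exceeds the minimal coding length for a 450-residue protein, the difference being attributable to untranslated regions and introns (the +3 for a stop codon is an assumption, not a figure from the source):

```python
gene_start, gene_end = 9_968_479, 9_971_002   # bp, chromosome 3L coordinates
protein_len = 450                              # amino acids

gene_span = gene_end - gene_start              # 2,523 bp
coding_len = protein_len * 3 + 3               # minimal CDS incl. stop codon = 1,353 bp
noncoding = gene_span - coding_len             # bp left over for UTRs/introns
```
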
Human orthologs
Although most human caspases are considered orthologs of caspase Dronc, the one that resembles it the most is Caspase-2. However, Nedd2-like caspase is the functional homolog of mammalian Caspase-9.
Location
Due to its function and its mostly hydrophilic character, Dronc can be located in the apoptosome, the plasma membrane and the nucleus of the cell. Cells containing this protein have been found in the following structures of the Drosophila melanogaster: extended germ band embryo; eye disc; foregut primordium; germline cyst; and gut section.
Activation
The activation of caspases is a fundamental step required to execute apoptosis. Initiator caspases, such as Dronc, the main initiator caspase in Drosophila melanogaster, are activated through different mechanisms, the principal one being autocleavage. Understanding how caspases are activated is a crucial element for devising therapeutic treatments that trigger specific caspases. For instance, it has been shown that mice missing or having inactivated caspase-9 present several neurological impairments and have a faulty response to cell damage. Just like caspase-9 in mammals, caspase Dronc is a protein that has a caspase activation and recruitment domain (CARD). It is the only Drosophila caspase presenting this domain, which makes it possible to draw a valuable comparison between their activation and inhibition mechanisms.
Autocleavage
To activate the zymogen, caspase Dronc first autocleaves at residue E352 (Glu352, a glutamic acid residue). Autocleavage at this residue induces dimerization and stabilization of the active site. Autocleavage at E352 results in Pr1, which weighs 40 kDa. One study showed that, in Drosophila melanogaster S2 cells, Dronc autocleavage is essential for zymogen activation. Even though caspase Dronc goes through other processes, which will be discussed later, Dronc autocleavage was found to be absolutely necessary to induce apoptosis, whereas other cell cofactors seemed not to be enough to induce activation of Dronc if it had not previously been autocleaved at residue E352. After autocleavage, the catalytic activity of Dronc is drastically higher than that of the zymogen. However, these results seem to contradict those of another study conducted in BG2 cells. That study concluded that autocleavage at residue E352, just like autocleavage at other sites such as E143, is not needed for Dronc to activate and, consequently, to induce apoptosis.
Processing by DrICE
Dronc can also be cleaved by the effector caspase DrICE, which is activated by caspase Dronc itself after autocleavage. However, it has also been suggested that caspases other than Dronc could activate DrICE. Even though uncleavable mutants of Dronc may have the ability to process caspase DrICE, it is autocleaved Dronc that processes caspase DrICE in Drosophila melanogaster cells. Cleavage of Dronc by DrICE occurs at residue D135 (Asp135, an aspartic acid residue). At the beginning of apoptosis, full-length Dronc and autocleaved Dronc, Pr1, are present and can be used as a substrate for DrICE processing. Cleavage at residue D135 can therefore result in two products: cleavage of Dronc Pr1 results in fully processed Dronc, whereas cleavage of full-length Dronc results in Pr2, which weighs 36 kDa. However, since Dronc Pr1 seems to be the most abundant during early stages of apoptosis, most caspase Dronc ends up being fully processed.
Interaction with apoptosome
In humans, initiator caspases such as Caspase-2 and Caspase-9 have a prodomain that recruits them into a holoenzyme complex in order to activate the protein. These caspases have a caspase recruitment domain (CARD) that allows them to interact with Apaf-1. The Drosophila melanogaster Apaf-1 homolog, Dark, forms an oligomer with eight Dark chains when dATP is present. This complex is called the apoptosome. Dark, just as Apaf-1, has an N-terminal CARD domain that interacts with caspase Dronc, which is incorporated into this protein complex that will induce cell death.
Recruitment of Dronc by Dark seems to facilitate Dronc autocleavage at residue E352. Dark might not be enough on its own to activate caspase Dronc; it has been suggested that other factors could increase the activation of Dronc (and DrICE) through interaction with the apoptosome. However, in Drosophila melanogaster cells, Dark activity is necessary for normal execution of apoptosis.
Inhibition
Ubiquitylation
Activation of Dronc is also monitored by ubiquitylation (see figure 2). Ubiquitylation is a post-translational modification in which the protein ubiquitin is conjugated to a lysine residue. However, the exact mechanism of Dronc ubiquitylation remains partially unknown to this day. In Drosophila melanogaster cells, caspase Dronc is mono-ubiquitylated on residue K78 (located in the CARD domain). Ubiquitylation of this residue of the CARD domain, which interacts with the apoptosome during apoptosis, keeps Dronc from activating in the apoptosome and prevents apoptosis of the cell. Ubiquitylation by the E3 ubiquitin ligase activity of Diap-1 negatively regulates Dronc activity. Even a partial decrease in ubiquitylation is enough to markedly raise Dronc activity. Moreover, it has also been shown that K78 mono-ubiquitylation plays an inhibitory role in Dronc's non-apoptotic functions, which may not require its catalytic activity but are still significant for the survival of the fruit fly. In Drosophila melanogaster cells, caspase Dronc is ubiquitylated by Diap-1. Similarly, effector caspases Caspase-3 and Caspase-7 are monoubiquitylated by cIAP2 in vitro.
Diap-1
Inhibitor of Apoptosis Proteins (IAPs) are a family of proteins that act as endogenous inhibitors of apoptosis: they inhibit caspases. Diap-1 is the Drosophila melanogaster IAP that interacts with both Dronc and DrICE through IAP domains. Some studies have found that Diap-1 inhibits and degrades caspase Dronc, and impairment of Diap-1 interaction with Dronc would prevent the caspase from degrading.
Diap-1 regulates caspase activity in Drosophila melanogaster, thus making Dronc activation dependent on the removal of Diap-1. Diap-1 is the protein that inhibits both Dronc and DrICE and prevents apoptosis from being executed. Removal of Diap-1 by RNAi triggers caspase activation and, thus, apoptosis. Moreover, during apoptosis, removal of Diap-1 facilitates interaction between Dronc and Dark, which supports the view that Diap-1 is in charge of regulating and inhibiting caspase Dronc. In fact, during apoptosis, proteasomal degradation of Diap-1 takes place (and is necessary) right before cleavage and activation of caspases Dronc and DrICE. Finally, binding of Diap-1 seems not to be sufficient for Dronc inhibition; ubiquitylation by the RING domain of Diap-1 appears necessary for complete inhibition of Dronc.
Functions
Caspase Dronc has an essential catalytic activity. It is defined as a cysteine protease (or thiol protease), which means that a nucleophilic cysteine thiol forms a catalytic triad (Cys–His–Asn) at the active site of the enzyme that mediates the catalysis. The main function of this kind of endopeptidase is to catalyze the hydrolysis of peptide bonds in order to cleave proteins into smaller fragments (see figure 3). For that reason, Nedd2-like caspase is responsible for the activation of effector caspases. On the other hand, as a caspase, Dronc is fully involved in programmed cell death (PCD) processes, which affect regulatory and reproductive functions of Drosophila melanogaster.
Apoptotic functions
Apoptosis is an essential process for the development of multicellular organisms. Its role is to remove excess cells during development (e.g. sculpting the digits of vertebrates), as well as being responsible for detaching damaged, potentially dangerous, cells. Because of its extreme importance, this pathway has been shown to be vastly conserved throughout evolution. One event that is capable of triggering this type of programmed cell death is the activation of the JNK (c-Jun N-terminal kinase) signalling pathway, due to stress caused by chromosomal instability (CIN). The apoptosis pathway is regulated by caspases, a family of proteases that lead to the disassembling of the cell by cleaving protein targets following an aspartate residue. In response to certain apoptotic stimuli, the inactive caspase zymogens turn into active enzymes that start a cascade of caspase-induced proteolytic cleavage processes which culminate with DNA decomposition and cell death. Next, neighbouring cells or macrophages engulf what is left of the dwindled cell (the apoptotic bodies) in order to minimize its effect on the surrounding cells, and at the same time to avoid inducing an immune response in the body. Studies link failure of apoptosis activation with the development of some types of cancer.
Specifically, in the context of the apoptotic signalling cascade, Dronc's role as an initiator caspase consists in the activation of effector caspases such as DrICE or Dcp-1. Nonetheless, Dronc has been found to be a surprisingly inefficient catalyst, with a catalytic efficiency roughly 180-fold lower than that of caspase-9. Hence, as with caspase-9, adequate enzymatic activity may demand the formation of a holoenzyme complex involving close association with Dark, Apaf-1's Drosophila ortholog.
Comparison of the main apoptotic machinery
The diagram (figure 4) shows functional homologues of apoptotic proteins (colour-coded correspondence) in Drosophila melanogaster and mammals.
A) Drosophila: In individuals of the Drosophila genus, numerous signalling pathways are in charge of regulating the anti-IAPs Reaper, Hid, and Grim (RHG), Dark scaffold proteins, and the Dronc initiator caspase. On the one hand, RHG expression causes the degradation of Diap-1, an inhibitor of apoptosis protein (IAP), and the release of the Dronc initiator caspase. On the other hand, it allows Dronc to associate with Dark scaffold proteins to form a functional apoptosome and activate the DrICE and Dcp-1 effector caspases. Both pathways are necessary to properly activate caspases and are coordinately regulated. Of note is the baculovirus protein p35, which has the ability to suppress the effector caspases, thus blocking the pathway.
B) Mammals: Among members of the class Mammalia, the balance between pro- and anti-apoptotic members of the Bcl-2 family is a key factor in the commitment to apoptosis, regulating the release of cytochrome c and IAP antagonists from mitochondria. The binding of cytochrome c to Apaf-1 promotes apoptosome assembly, which in turn clusters and activates caspase-9. Finally, the IAP antagonists relieve inhibition by the IAP proteins (mainly XIAP).
Development (Metamorphosis)
Apoptosis can be triggered by extrinsic or intrinsic signals. Both occur in Drosophila during its development. For example, a higher concentration of ecdysone (common during metamorphosis) leads to cuticle deposition and larval moults, as well as puparium formation and histolysis of the larval midgut. This hormone causes head eversion as well as cell death of the larval salivary gland. It is thought that ecdysone-induced apoptosis is regulated by caspase Dronc. This protein conducts Rpr- and Grim-induced apoptosis, but not Hid-induced apoptosis. Regarding other developmental processes, apoptosis controls the ratio of stem cell-like neuroblasts in the central nervous system.
Additionally, research employing the RNA interference (RNAi) mechanism has shown that developing Drosophila embryos display a striking reduction of cell death processes when Dronc is silenced, demonstrating that Dronc is important for programmed cell death during embryogenesis. The results of these experiments suggest that D. melanogaster's initiator caspase plays an essential role in mediating PCD.
Accidental Cell Death and Compensatory Proliferation
Cell death during the development of animal tissues is rapidly compensated by cell divisions in a process called compensatory proliferation. The developing Drosophila imaginal disk has a very high regenerative capacity that is independent of the size-control mechanism that governs the disk. It is not completely known how these phenomena are regulated, but it is thought that dying cells secrete mitogens that stimulate the proliferation of neighbouring cells, a process that would be regulated by the apoptotic signalling pathway (in which caspase Dronc is involved). This means that if cells were stimulated to undergo apoptosis, and at the same time artificially kept alive (e.g. by overexpressing p35, the inhibitor of effector caspases), neighbouring cells would be led to conduct uncontrollable compensatory proliferation. The fact that Dronc is insensitive to p35 inhibition suggested that it could be required for compensatory proliferation, a hypothesis that was demonstrated in 2006.
Non-apoptotic functions
Promoting DNA damage signalling
In addition to its role in apoptosis and compensatory proliferation, caspase Dronc promotes DNA damage signalling by facilitating the activity of γH2Av, a variant of histone H2Ax whose phosphorylation marks DNA damage and promotes its repair by recruiting the DNA repair machinery. Thus, Dronc would be involved in both the DNA damage response (DDR) and apoptosis, depending on its availability and the needs of the organism.
Involvement in genetic pathologies
Cancer
Tissue homeostasis can be defined as the maintenance of a balance between cell division and PCD, resulting in the tissue in question maintaining a relatively constant number of cells. If it gets disturbed, two things could happen. The first would be for the cells to die faster than they can divide, which would result in tissue atrophy. Alternatively, if programmed cell death were blocked in some of the cells, not enough of them would die, and this could end up triggering carcinogenesis in the tissue. For this reason, apoptosis evasion has been identified as a key hallmark of cancer.
As previously mentioned, caspases have a decisive implication in the initiation and execution of apoptosis. Therefore, it is reasonable to think that at low levels they can cause decreased apoptosis and carcinogenesis. Somatic mutations in caspase-3 were detected at fairly low frequencies in certain human cancers, like colon and stomach cancer and non-Hodgkin's lymphoma (NHL). Furthermore, decreased caspase 9 (a human Dronc ortholog) expression has also been linked to 46% of colon cancers.
Alzheimer's disease
Caspase-2 could have an implication in neurodegenerative disorders such as Alzheimer's disease. Recent evidence has shown that Caspase-2 cleavage of Tau results in the decline of memory function in Alzheimer's disease. Notably, Dronc-dependent Tau cleavage was also shown in an experiment that connected circadian rhythm and Alzheimer's disease. In addition, it was discovered that Tau expression mediated by this initiator caspase could induce a rough eye phenotype due to degeneration of photoreceptor neurons of Drosophila melanogaster.
Oxidative stress and ageing
An experiment questioning the impact of apoptosis on ageing found that caspase-2-deficient mice displayed a shortened life-span and elevated characteristics associated with ageing. Further research has established that caspase-2 plays a role in oxidative stress response, mitochondrial function regulation and metabolic pathways, which may be potential mechanisms through which this caspase regulates ageing. Concurrently, Dronc null mutant flies also perished within a few days after hatching. In addition, another review has shown that ageing flies have elevated levels of activated Dronc.
References
Further reading
Programmed cell death
Caspases | Death regulator Nedd2-like caspase | [
"Chemistry",
"Biology"
] | 4,356 | [
"Senescence",
"Programmed cell death",
"Signal transduction"
] |
65,728,131 | https://en.wikipedia.org/wiki/Transition%20metal%20carboxylate%20complex | Transition metal carboxylate complexes are coordination complexes with carboxylate (RCO2−) ligands. Reflecting the diversity of carboxylic acids, the inventory of metal carboxylates is large. Many are useful commercially, and many have attracted intense scholarly scrutiny. Carboxylates exhibit a variety of coordination modes; the most common are κ1- (O-monodentate), κ2- (O,O-bidentate), and bridging.
Acetate and related monocarboxylates
Structure and bonding
Carboxylates bind to single metals through one or both oxygen atoms, the respective notation being κ1- and κ2-. In terms of electron counting, κ1-carboxylates are "X"-type ligands, i.e., pseudohalide-like. κ2-carboxylates are "L-X" ligands, i.e. resembling the combination of a Lewis base (L) and a pseudohalide (X). Carboxylates are classified as hard ligands in HSAB theory.
For simple carboxylates, the acetate complexes are illustrative. Most transition metal acetates are mixed-ligand complexes. One common example is hydrated nickel acetate, Ni(O2CCH3)2(H2O)4, which features intramolecular hydrogen bonding between the uncoordinated oxygens and the protons of the aquo ligands. Stoichiometrically simple complexes are often multimetallic. One family is the basic metal acetates, of the stoichiometry [M3O(OAc)6(H2O)3]n+.
Homoleptic complexes
Homoleptic carboxylate complexes are usually coordination polymers. But exceptions exist.
A molecular monocarboxylate is silver acetate, Ag2(OAc)2.
Molecular diacetates are more common. Several diacetates adopt the Chinese lantern structure. Well studied examples include the dimetal tetraacetates (M2(OAc)4) including rhodium(II) acetate, copper(II) acetate, molybdenum(II) acetate, and chromium(II) acetate. Platinum diacetate and palladium diacetate feature Pt4 and Pd3 cores, further illustrating the tendency of acetate ligands to stabilize multimetallic structures.
Mononuclear tricarboxylates include derivatives of 1-adamantanecarboxylic acid, which have the formula [M(O2CC10H11)3]− (M = Co, Ni, Zn). The carboxylates are bidentate.
Synthesis
Many methods allow the synthesis of metal carboxylates. From preformed carboxylic acid, the following routes have been demonstrated:
acid-base reactions:
protonolysis:
oxidative addition:
From preformed carboxylate, salt metathesis reactions are common:
Metal carboxylates can be prepared by carbonation of highly basic metal alkyls:
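The routes listed above can be summarised with generic, schematic equations (an illustration only; R, R', L and the stoichiometries are placeholders, not taken from specific literature examples):

```latex
\begin{align*}
\text{acid--base:} &\quad \mathrm{M(OH)_n + n\,RCO_2H \;\longrightarrow\; M(O_2CR)_n + n\,H_2O} \\
\text{protonolysis:} &\quad \mathrm{L_mM\!-\!CH_3 + RCO_2H \;\longrightarrow\; L_mM\!-\!O_2CR + CH_4} \\
\text{oxidative addition:} &\quad \mathrm{L_mM + RCO_2H \;\longrightarrow\; L_mM(H)(O_2CR)} \\
\text{salt metathesis:} &\quad \mathrm{L_mM\!-\!Cl + NaO_2CR \;\longrightarrow\; L_mM\!-\!O_2CR + NaCl} \\
\text{carbonation:} &\quad \mathrm{L_mM\!-\!R' + CO_2 \;\longrightarrow\; L_mM\!-\!O_2CR'}
\end{align*}
```

In each case the carboxylate ends up bound through oxygen, whether it arrives as the free acid, a carboxylate salt, or is assembled in situ by CO2 insertion into a metal–carbon bond.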
Reactions
A common reaction of metal carboxylates is their displacement by more basic ligands. Acetate is a common leaving group. They are especially prone to protonolysis, which is widely used to introduce ligands, displacing the carboxylic acid. In this way octachlorodimolybdate is produced from dimolybdenum tetraacetate:
Acetates of electrophilic metals are proposed to function as bases in concerted metalation deprotonation reactions.
Attempts to prepare some carboxylate complexes, especially for electrophilic metals, often gives oxo derivatives. Examples include the oxo-acetates of Fe(III), Mn(III), and Cr(III). Pyrolysis of metal carboxylates affords acid anhydrides and the metal oxide. This reaction explains the formation of basic zinc acetate from anhydrous zinc diacetate.
In some cases, monodentate carboxylates undergo O-alkylation to give esters. Strong alkylating agents are required.
Other carboxylates
Many carboxylates form complexes with transition metals. Alkyl and simple aryl carboxylates behave similarly to the acetates. Trifluoroacetate differs in mononuclear complexes because it is usually monodentate, e.g. [Zn(κ2-O2CCH3)2(OH2)2] vs [Zn(κ1-O2CCF3)2(OH2)4].
Applications
Metal naphthenates and ethylhexanoates
Naphthenic acids, mixtures of long chain and cyclic carboxylic acids extracted from petroleum, form lipophilic complexes (often called salts) with transition metals. These metal naphthenates, with the formula M(naphthenate)2 or M3O(naphthenate)6, have diverse applications including synthetic detergents, lubricants, corrosion inhibitors, fuel and lubricating oil additives, wood preservatives, insecticides, fungicides, acaricides, wetting agents, thickening agents, and oil drying agents. Industrially useful naphthenates include those of aluminium, magnesium, calcium, barium, cobalt, copper, lead, manganese, nickel, vanadium, and zinc. Illustrative is the use of cobalt naphthenate for the oxidation of tetrahydronaphthalene to the hydroperoxide.
Like naphthenic acid, 2-ethylhexanoic acid forms lipophilic complexes that are used in organic and industrial chemical synthesis. They function as catalysts in polymerizations as well as for oxidation reactions as oil drying agents.
Metal ethylhexanoates are referred to as metallic soaps.
Aminopolycarboxylates
A commercially important family of metal carboxylates are derived from aminopolycarboxylates, e.g., EDTA4-. Related to these synthetic chelating agents are the amino acids, which form large families of amino acid complexes. Two amino acids, glutamate and aspartate, have carboxylate side chains, which function as ligands for iron in nonheme iron proteins, such as hemerythrin.
Metal organic frameworks (MOFs)
Metal organic frameworks, porous, three-dimensional coordination polymers, are often derived from metal carboxylate clusters. These clusters, called secondary bonding units (SBU's), are often linked by the conjugate bases of benzenedi- and tri- carboxylic acids.
Reagents for organic synthesis
It has been claimed that "cobalt carboxylates are the most widely used homogeneous catalysts in industry" as they are used in the oxidation of p-xylene to terephthalic acid.
Palladium(II) acetate has been described as being "among the most extensively used transition metal complexes in metal-mediated organic synthesis". Many coupling reactions utilize this reagent, which is soluble in organic solvents and contains a built-in Brønsted base (acetate).
Dirhodium tetrakis(trifluoroacetate) is a widely used catalyst for reactions involving diazo compounds.
Related topics
transition metal oxalate complex
References
Ligands
Coordination chemistry
Salts of carboxylic acids | Transition metal carboxylate complex | [
"Chemistry"
] | 1,580 | [
"Salts of carboxylic acids",
"Ligands",
"Coordination chemistry",
"Salts"
] |
44,287,236 | https://en.wikipedia.org/wiki/Continued%20process%20verification | Continued process verification (CPV) is the collection and analysis of end-to-end production component and process data to ensure product outputs are within predetermined quality limits. In 2011 the Food and Drug Administration published a report outlining best practices regarding process validation in the pharmaceutical industry. Continued process verification is outlined in this report as the third stage in Process Validation.
Its central purpose is to ensure that processes are in a constant state of control, thus ensuring final product quality. Central to effective CPV is a method with which to identify unwanted process inconsistencies in order to execute corrective or preventive measures. Once quality standards are set in place, they must be monitored with regular frequency to confirm those parameters are being met. Continued process verification not only helps protect consumers from production faults, but businesses also see benefits in implementing a CPV program. Should product outputs not match target standards, it can be very costly to investigate the problem source without existing CPV data.
Vital components of continued process verification
An alert system to identify process malfunctions that lead to deviations from quality standards.
A framework for gathering and analyzing data of final product quality and process consistency. Analysis should include source materials consistency and manufacturing equipment condition; and data should be collected in a format that allows for long-term trend analysis as well as intra-production quality analysis.
Continued review of quality qualification standards and process reliability. Departures from any predetermined standards should be flagged for review by trained personnel and appropriate measures undertaken to restore end-to-end quality standards.
Data collection and analysis
Crucial to effective CPV implementation is an appropriate data collection procedure. Data must allow for statistical analytics and trend analysis of process consistency and capability. A correctly implemented procedure will minimize overreactions to individual production outlier events and ensure that genuine process inconsistencies are detected. While production variability can sometimes be obvious and even casually identified, the FDA recommends using statistical tools to quantitatively detect problems and identify root causes. Initially, continued process verification should be based on quality standards established in the design phase. After a period of time, variations can be detected by identifying deviations from historical data using statistical tools. Furthermore, these same tools can also be used to identify opportunities to optimize processes that may pre-emptively increase quality reliability.
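The statistical monitoring described above can be sketched with a simple Shewhart-style control check — a minimal illustration, not the FDA's prescribed method; the batch values and the 3-sigma limits are assumptions chosen for the example:

```python
# Illustrative CPV trend check: control limits are the historical mean
# +/- 3 standard deviations; batches outside the limits are flagged.
from statistics import mean, stdev

def control_limits(historical):
    """Compute mean +/- 3-sigma limits from historical in-control data."""
    m = mean(historical)
    s = stdev(historical)
    return m - 3 * s, m + 3 * s

def flag_deviations(batches, lower, upper):
    """Return (index, value) pairs for batches outside the control limits."""
    return [(i, x) for i, x in enumerate(batches) if not (lower <= x <= upper)]

# Hypothetical assay values (% of label claim) from past validated batches:
history = [99.8, 100.1, 99.9, 100.2, 100.0, 99.7, 100.3, 100.0]
lo, hi = control_limits(history)
new_batches = [100.1, 99.9, 101.5]   # third batch drifts out of control
print(flag_deviations(new_batches, lo, hi))  # → [(2, 101.5)]
```

Flagged batches would then be routed to trained personnel for review, as the section describes.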
References
Further reading
BPOG, 2014, Continued Process Verification: An Industry Position Paper with Example Plan © 2014, BPOG - Biophorum Operations Group. Retrieved 26 August 2015.
Software testing
Formal methods
Software quality
Enterprise modelling
Quality management
Business process | Continued process verification | [
"Engineering"
] | 512 | [
"Systems engineering",
"Software testing",
"Enterprise modelling",
"Software engineering",
"Formal methods"
] |
44,293,086 | https://en.wikipedia.org/wiki/Galvanic%20corrosion | Galvanic corrosion (also called bimetallic corrosion or dissimilar metal corrosion) is an electrochemical process in which one metal corrodes preferentially when it is in electrical contact with another, in the presence of an electrolyte. A similar galvanic reaction is exploited in primary cells to generate a useful electrical voltage to power portable devices. This phenomenon is named after Italian physician Luigi Galvani (1737–1798).
A similar type of corrosion caused by the presence of an external electric current is called electrolytic corrosion.
Overview
Dissimilar metals and alloys have different electrode potentials, and when two or more come into contact in an electrolyte, one metal (that is more reactive) acts as anode and the other (that is less reactive) as cathode. The electropotential difference between the reactions at the two electrodes is the driving force for an accelerated attack on the anode metal, which dissolves into the electrolyte. This leads to the metal at the anode corroding more quickly than it otherwise would and corrosion at the cathode being inhibited. The presence of an electrolyte and an electrical conducting path between the metals is essential for galvanic corrosion to occur. The electrolyte provides a means for ion migration whereby ions move to prevent charge build-up that would otherwise stop the reaction. If the electrolyte contains only metal ions that are not easily reduced (such as Na+, Ca2+, K+, Mg2+, or Zn2+), the cathode reaction is the reduction of dissolved H+ to H2 or O2 to OH−.
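As a rough illustration of the driving force described above: the metal with the lower electrode potential in a couple becomes the anode, and the difference between the two potentials sets the driving voltage. The sketch below uses textbook standard reduction potentials versus the standard hydrogen electrode; potentials measured in a real electrolyte such as seawater differ (see the galvanic series section):

```python
# Back-of-envelope estimate of which metal in a couple corrodes and how
# large the driving voltage is. Values are standard reduction potentials
# (V vs SHE) for the common M2+/M or M+/M couples.
STD_POTENTIAL = {"Zn": -0.76, "Fe": -0.44, "Ni": -0.26, "Cu": 0.34, "Ag": 0.80}

def galvanic_couple(metal_a, metal_b):
    """Return (anode, cathode, driving_voltage) for two coupled metals."""
    anode, cathode = sorted((metal_a, metal_b), key=STD_POTENTIAL.get)
    return anode, cathode, round(STD_POTENTIAL[cathode] - STD_POTENTIAL[anode], 2)

print(galvanic_couple("Fe", "Cu"))   # iron corrodes when coupled to copper
print(galvanic_couple("Zn", "Fe"))   # zinc sacrificially protects iron
```

This is the same logic behind galvanized steel and sacrificial anodes discussed later: zinc sits below iron, so it corrodes preferentially.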
In some cases, this type of reaction is intentionally encouraged. For example, low-cost household batteries typically contain carbon-zinc cells. As part of a closed circuit (the electron pathway), the zinc within the cell will corrode preferentially (the ion pathway) as an essential part of the battery producing electricity. Another example is the cathodic protection of buried or submerged structures as well as hot water storage tanks. In this case, sacrificial anodes work as part of a galvanic couple, promoting corrosion of the anode, while protecting the cathode metal.
In other cases, such as mixed metals in piping (for example, copper, cast iron and other cast metals), galvanic corrosion will contribute to accelerated corrosion of parts of the system. Corrosion inhibitors such as sodium nitrite or sodium molybdate can be injected into these systems to reduce the galvanic potential. However, the application of these corrosion inhibitors must be monitored closely. If the application of corrosion inhibitors increases the conductivity of the water within the system, the galvanic corrosion potential can be greatly increased.
Acidity or alkalinity (pH) is also a major consideration with regard to closed loop bimetallic circulating systems. Should the pH and corrosion inhibition doses be incorrect, galvanic corrosion will be accelerated. In most HVAC systems, the use of sacrificial anodes and cathodes is not an option, as they would need to be applied within the plumbing of the system and, over time, would corrode and release particles that could cause potential mechanical damage to circulating pumps, heat exchangers, etc.
Examples
A common example of galvanic corrosion occurs in galvanized iron, a sheet of iron or steel covered with a zinc coating. Even when the protective zinc coating is broken, the underlying steel is not attacked. Instead, the zinc is corroded because it is less "noble". Only after it has been consumed can rusting of the base metal occur. By contrast, with a conventional tin can, the opposite of a protective effect occurs: because the tin is more noble than the underlying steel, when the tin coating is broken, the steel beneath is immediately attacked preferentially.
Statue of Liberty
A spectacular example of galvanic corrosion occurred in the Statue of Liberty when regular maintenance checks in the 1980s revealed that corrosion had taken place between the outer copper skin and the wrought iron support structure. Although the problem had been anticipated when the structure was built by Gustave Eiffel to Frédéric Bartholdi's design in the 1880s, the insulation layer of shellac between the two metals had failed over time and resulted in rusting of the iron supports. An extensive renovation was carried out with replacement of the original insulation with PTFE. The structure was far from unsafe owing to the large number of unaffected connections, but it was regarded as a precautionary measure to preserve a national symbol of the United States.
Royal Navy and HMS Alarm
In 1681, Samuel Pepys (then serving as Admiralty Secretary) agreed to the removal of lead sheathing from English Royal Navy vessels to prevent the mysterious disintegration of their rudder-irons and bolt-heads, though he confessed himself baffled as to the reason the lead caused the corrosion.
The problem recurred when vessels were sheathed in copper to reduce marine weed accumulation and protect against shipworm. In an experiment, the Royal Navy in 1761 had tried fitting the hull of the frigate HMS Alarm with 12-ounce copper plating. Upon her return from a voyage to the West Indies, it was found that although the copper remained in fine condition and had indeed deterred shipworm, it had also become detached from the wooden hull in many places because the iron nails used during its installation "were found dissolved into a kind of rusty Paste". To the surprise of the inspection teams, however, some of the iron nails were virtually undamaged. Closer inspection revealed that water-resistant brown paper trapped under the nail head had inadvertently protected some of the nails: "Where this covering was perfect, the Iron was preserved from Injury". The copper sheathing had been delivered to the dockyard wrapped in the paper which was not always removed before the sheets were nailed to the hull. The conclusion therefore reported to the Admiralty in 1763 was that iron should not be allowed direct contact with copper in sea water.
US Navy littoral combat ship Independence
Serious galvanic corrosion has been reported on the US Navy littoral combat ship USS Independence, caused by steel water jet propulsion systems attached to an aluminium hull. Without electrical isolation between the steel and aluminium, the aluminium hull acts as an anode to the stainless steel, resulting in aggressive galvanic corrosion.
Corroding lighting fixtures
The unexpected fall in 2011 of a heavy light fixture from the ceiling of the Big Dig vehicular tunnel in Boston revealed that corrosion had weakened its support. Improper use of aluminium in contact with stainless steel had caused rapid corrosion in the presence of salt water. The electrochemical potential difference between stainless steel and aluminium is in the range of 0.5 to 1.0V, depending on the exact alloys involved, and can cause considerable corrosion within months under unfavorable conditions. Thousands of failing lights would have to be replaced, at an estimated cost of $54 million.
Lasagna cell
A "lasagna cell" is accidentally produced when salty moist food such as lasagna is stored in a steel baking pan and covered with aluminium foil. After a few hours the foil develops small holes where it touches the lasagna, and the food surface becomes covered with small spots of corroded aluminium. In this example, the salty food (lasagna) is the electrolyte, the aluminium foil is the anode, and the steel pan is the cathode. If the aluminium foil touches the electrolyte only in small areas, the galvanic corrosion is concentrated and can proceed fairly rapidly. If the foil is used without a dissimilar metal container, any reaction is probably a purely chemical one: heavy concentrations of salt, vinegar, or other acidic compounds can cause the foil to disintegrate. The product of either reaction is an aluminium salt, which does not harm the food, though any deposit may impart an undesired flavor and color.
Electrolytic cleaning
The common technique of cleaning silverware by immersing the silver, sterling silver, or silver-plated object together with a piece of aluminium in a hot electrolytic bath (usually composed of water and sodium bicarbonate, i.e., household baking soda) is an example of galvanic corrosion. Aluminium foil is preferred over ingots because of its much greater surface area, although if the foil has a "non-stick" face, this must first be removed with steel wool. Silver darkens and corrodes in the presence of airborne sulfur molecules, and the copper in sterling silver corrodes under a variety of conditions. These layers of corrosion can be largely removed through the electrochemical reduction of silver sulfide: the presence of aluminium (which is less noble than either silver or copper) in the bath of sodium bicarbonate strips the sulfur atoms off the silver sulfide and transfers them onto the piece of aluminium foil, corroding it and leaving elemental silver behind. No silver is lost in the process.
Prevention
There are several ways of reducing and preventing this form of corrosion:
Electrically insulate the two metals from each other. If they are not in electrical contact, no galvanic coupling will occur. This can be achieved by using non-conductive materials between metals of different electropotential. Piping can be isolated with a spool of pipe made of plastic materials, or made of metal material internally coated or lined. It is important that the spool be a sufficient length to be effective. For reasons of safety, this should not be attempted where an electrical earthing system uses the pipework for its ground or has equipotential bonding.
Metal boats connected to a shore line electrical power feed will normally have to have the hull connected to earth for safety reasons. However the end of that earth connection is likely to be a copper rod buried within the marina, resulting in a steel-copper "battery" of about 0.5 V. Additionally, the hull of each boat is connected to the hull of all other boats, resulting in further "batteries" between propellers (which may be made of bronze) and steel hulls, which may cause corrosion of the expensive propellers. For such cases, the use of a galvanic isolator is essential, typically two semiconductor diodes in series, in parallel with two diodes conducting in the opposite direction (antiparallel). This device is inserted in the protective earth connection between the hull and the shoreline protective conductor. This prevents any current in the protective conductor while the applied voltage is less than 1.4 V (i.e. 0.7 V per diode), but allows a full current in the case of an electrical fault. There will still be a very minor leakage of current through the diodes, which may result in slightly faster corrosion than normal.
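The isolator's behavior can be modeled crudely: with two idealized diodes in series each way (about 0.7 V forward drop apiece), current passes only when the applied voltage exceeds roughly 1.4 V in either direction. The following is a toy sketch under those idealized assumptions, not a circuit simulation:

```python
# Toy model of the antiparallel-diode galvanic isolator described above,
# assuming an idealized 0.7 V forward drop per diode and ignoring leakage.
def isolator_conducts(voltage, drop_per_diode=0.7, diodes_in_series=2):
    """True if the isolator passes current at this hull-to-shore voltage."""
    return abs(voltage) > drop_per_diode * diodes_in_series

print(isolator_conducts(0.5))    # typical steel-copper galvanic offset: blocked
print(isolator_conducts(120.0))  # fault voltage: conducts to earth
```

The 0.5 V galvanic "battery" mentioned in the text thus stays blocked, while a dangerous fault voltage still finds a path to ground.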
Ensure there is no contact with an electrolyte. This can be done by using water-repellent compounds such as greases, or by coating the metals with an impermeable protective layer, such as a suitable paint, varnish, or plastic. If it is not possible to coat both, the coating should be applied to the more noble, the material with higher potential. This is advisable because if the coating is applied only on the more active material, in case of damage to the coating there will be a large cathode area and a very small anode area, and for the exposed anodic area the corrosion rate will be correspondingly high.
Using antioxidant paste is beneficial for preventing corrosion between copper and aluminium electrical connections. The paste contains a metal of lower nobility than aluminium or copper.
Choose metals that have similar electropotentials. The more closely matched the individual potentials, the smaller the potential difference and hence the smaller the galvanic current. Using the same metal for all construction is the easiest way of matching potentials.
Electroplating or other plating can also help. This tends to use more noble metals that resist corrosion better. Chrome, nickel, silver and gold can all be used. Galvanizing with zinc protects the steel base metal by sacrificial anodic action.
Cathodic protection uses one or more sacrificial anodes made of a metal which is more active than the protected metal. Alloys of metals commonly used for sacrificial anodes include zinc, magnesium, and aluminium. This approach is commonplace in water heaters and many buried or immersed metallic structures.
Cathodic protection can also be applied by connecting a direct current (DC) electrical power supply to oppose the corrosive galvanic current. (See .)
Galvanic series
All metals can be classified into a galvanic series representing the electrical potential they develop in a given electrolyte against a standard reference electrode. The relative position of two metals on such a series gives a good indication of which metal is more likely to corrode more quickly. However, other factors such as water aeration and flow rate can influence the rate of the process markedly.
Anodic index
The compatibility of two different metals may be predicted by consideration of their anodic index. This parameter is a measure of the electrochemical voltage that will be developed between the metal and gold. To find the relative voltage of a pair of metals it is only required to subtract their anodic indices.
To reduce galvanic corrosion for metals stored in normal environments, such as warehouses or other non-temperature- and humidity-controlled settings, there should not be more than 0.25 V difference in the anodic index of the two metals in contact. For controlled environments, in which temperature and humidity are regulated, 0.50 V can be tolerated. For harsh environments, such as outdoors, high humidity, and salty environments, there should be no more than 0.15 V difference in the anodic index. For example, gold and silver have a difference of 0.15 V, so the two metals will not experience significant corrosion even in a harsh environment.
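The rule of thumb above amounts to subtracting the two anodic indices and comparing the difference against an environment-dependent threshold. A minimal sketch follows; the index values used here are illustrative assumptions, not an authoritative table:

```python
# Anodic-index compatibility check, following the thresholds in the text:
# 0.15 V (harsh), 0.25 V (normal), 0.50 V (controlled) environments.
# Index values below are illustrative placeholders.
ANODIC_INDEX = {"gold": 0.00, "silver": 0.15, "copper": 0.35,
                "steel": 0.85, "aluminium": 0.95, "zinc": 1.25}

THRESHOLD = {"harsh": 0.15, "normal": 0.25, "controlled": 0.50}

def compatible(metal_a, metal_b, environment="normal"):
    """True if the anodic-index difference is within the allowed threshold."""
    diff = abs(ANODIC_INDEX[metal_a] - ANODIC_INDEX[metal_b])
    return diff <= THRESHOLD[environment]

print(compatible("gold", "silver", "harsh"))       # 0.15 V difference: OK
print(compatible("copper", "aluminium", "normal")) # 0.60 V: incompatible
```

Where such a check fails, the text's remedies apply: insulation, plating, or finishing to bring the contacting surfaces closer in potential.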
When design considerations require that dissimilar metals come in contact, the difference in anodic index is often managed by finishes and plating. The finishing and plating selected allow the dissimilar materials to be in contact, while protecting the more base materials from corrosion by the more noble. It will always be the metal with the most negative anodic index which will ultimately suffer from corrosion when galvanic incompatibility is in play. This is why sterling silver and stainless steel tableware should never be placed together in a dishwasher at the same time, as the steel items will likely experience corrosion by the end of the cycle (soap and water having served as the chemical electrolyte, and heat having accelerated the process).
Electrolytic corrosion
The term "electrolytic corrosion" is most frequently used for corrosion caused by electric current applied to the metal from external sources. The mechanism of this corrosion is essentially the same as that of galvanic corrosion.
See also
Corrosion
Galvanic anode
Galvanic isolation in electrical/electronic circuits
Galvanic series
Galvanization
References
Sources
External links
Corrosion | Galvanic corrosion | [
"Chemistry",
"Materials_science"
] | 3,121 | [
"Metallurgy",
"Electrochemistry",
"Materials degradation",
"Corrosion"
] |
44,294,824 | https://en.wikipedia.org/wiki/Proterra%20%28earthen%20architecture%20project%29 | Proterra is an Ibero American organization promoting earthen architecture. It was initially founded as a four-year project of CYTED in 2001, but continued to become a UNESCO WHEAP partner in 2012.
References
Architecture organizations
Organizations established in 2001
UNESCO | Proterra (earthen architecture project) | [
"Engineering"
] | 53 | [
"Architecture organizations",
"Architecture"
] |
44,294,882 | https://en.wikipedia.org/wiki/Joshua%20Dobbs | Robert Joshua Dobbs (born January 26, 1995) is an American professional football quarterback for the San Francisco 49ers of the National Football League (NFL). He played college football for the Tennessee Volunteers, and was selected by the Pittsburgh Steelers in the fourth round of the 2017 NFL draft. Dobbs has been a member of eight NFL teams during his career, including as the starter for the Tennessee Titans, Arizona Cardinals, and Minnesota Vikings.
Early life
Dobbs was born and raised in Alpharetta, Georgia, the son of Stephanie and Robert Dobbs. His mother retired from United Parcel Service (UPS) as a region manager in corporate human resources, and his father retired as a senior vice president for Wells Fargo. Dobbs has alopecia areata, an autoimmune disease causing hair loss, which began when he was transitioning from elementary to junior high school.
Dobbs started playing football when he was five years old. He attended Wesleyan School and then Alpharetta High School. As a senior with the Alpharetta Raiders football team, he threw for 3,625 yards with 29 touchdowns. Dobbs was rated a three-star recruit by Rivals.com and a four-star by Scout.com. He originally committed to Arizona State University to play college football, but in February 2013, he changed his commitment to the University of Tennessee.
College career
2013 season
As a true freshman at the University of Tennessee in 2013, Dobbs played in five games with four starts after starter Justin Worley suffered an injury in a loss against the #1 Alabama Crimson Tide at Bryant–Denny Stadium. Dobbs came into the game and completed 5 of 12 passes for 75 yards. He made his first career start against the #10 Missouri Tigers at Faurot Field. He completed 26 of 42 passes for 240 yards in the 31–3 loss, which was the most passing yards in a freshman debut since 2004 (Erik Ainge (118) and Brent Schaefer (123) against the UNLV Rebels). After a 55–23 loss to the #7 Auburn Tigers and a 14–10 loss to the Vanderbilt Commodores, Dobbs put together a solid performance against the Kentucky Wildcats at Commonwealth Stadium. In the 27–14 victory, Dobbs threw his first two career touchdown passes and had a 40-yard rushing touchdown. Overall, he completed 72 of 121 passes for 695 yards with two touchdowns and six interceptions and also rushed for 189 yards and a touchdown in his true freshman season.
2014 season
Dobbs competed with Worley (a senior) and Nathan Peterman (a sophomore), to be Tennessee's starter for the 2014 season. Worley was announced the starter, but Dobbs took over as the starter in November after Worley was injured in a 34–3 loss to the #3 Ole Miss Rebels at Vaught–Hemingway Stadium. Although Dobbs was not pushed into action immediately after the injury, in the following week against #4 Alabama Crimson Tide, Peterman was named the starter, but he was relieved quickly by Dobbs. Dobbs performed well in the 34–20 defeat by recording 192 passing yards and two rushing touchdowns against the Crimson Tide. Against South Carolina, Dobbs had a breakout performance against the Gamecocks at Williams–Brice Stadium. In the 45–42 comeback win in overtime, Dobbs had 301 passing yards, two passing touchdowns, 166 rushing yards, and three rushing touchdowns. Against the Kentucky Wildcats at Neyland Stadium, Dobbs had 297 passing yards, three passing touchdowns, 48 rushing yards, and one rushing touchdown in the 50–16 victory. Dobbs and team helped Tennessee reach their first bowl game since the 2010 season. Dobbs was named the 2015 TaxSlayer Bowl MVP in Tennessee's 45–28 victory over Iowa. In the game, Dobbs passed for 129 yards and one touchdown and rushed for 76 yards and two touchdowns. Dobbs threw for 1,206 yards with nine touchdowns and six interceptions during his sophomore season. He finished the 2014 season with 469 yards rushing and eight rushing touchdowns in just six games. Dobbs received two Offensive Player of the Week honors from the Southeastern Conference (SEC), both of which came from his combined passing and rushing performances for over 400 yards in each game.
2015 season
Dobbs entered the 2015 season as Tennessee's starting quarterback. He started and appeared in all 12 regular season games and the bowl game. To open Tennessee's season on September 5, Dobbs recorded 205 passing yards, two passing touchdowns, 89 rushing yards, and one rushing touchdown against the Bowling Green Falcons in a 59–30 win at Nissan Stadium in Nashville, Tennessee. In a double overtime 31–24 loss to the #19 Oklahoma Sooners in the Tennessee 2015 home opener, Dobbs had 125 passing yards, one passing touchdown, 12 rushing yards, and one rushing touchdown. In a 28–27 loss to SEC East rival Florida at Ben Hill Griffin Stadium, Dobbs had a season-high 136 rushing yards and had a 58-yard receiving touchdown thrown by teammate wide receiver Jauan Jennings on a trick play. Dobbs's touchdown reception against Florida was the first reception by a Tennessee quarterback since Peyton Manning caught a 10-yard pass from running back Jamal Lewis in 1997 against Arkansas. Against the rival #19 Georgia Bulldogs, Dobbs had a season-high 312 yards passing and three touchdowns to go along with 118 rushing yards and two touchdowns. His efforts in the game led Tennessee to their first win over the Bulldogs since 2009. Against the #8 Alabama Crimson Tide in their annual rivalry game, Dobbs had 171 yards passing and one passing touchdown in the narrow 19–14 loss at Bryant–Denny Stadium. Against rival South Carolina, Dobbs passed for 255 yards and two touchdowns in the 27–24 home victory. Dobbs led Tennessee to a 9–4 record, which was the most wins for the Tennessee program since 2007. The 2015 season culminated with a 45–6 victory over the #12 Northwestern Wildcats in the 2016 Outback Bowl. In the bowl game, Dobbs had two rushing touchdowns.
2016 season
Dobbs entered the 2016 season as Tennessee's starting quarterback in his final season of collegiate eligibility. He started and appeared in all 12 regular season games and the bowl game. Dobbs started the season with a solid performance in a home game against Appalachian State. In the 20–13 overtime win, Dobbs had 192 yards passing but fumbled on the goal line; the ball was recovered by teammate and running back Jalen Hurd to give Tennessee the go-ahead score. In the 2016 Pilot Flying J Battle at Bristol, Dobbs threw three passing touchdowns to go along with two rushing touchdowns. In a 38–28 comeback victory over the #19 Florida Gators, Dobbs had 319 yards passing, four passing touchdowns, 80 rushing yards, and a rushing touchdown to lead the Volunteers to their first win over the Gators since 2004. Against #25 Georgia, Dobbs had 230 yards passing, three passing touchdowns, and a rushing touchdown to win 34–31. Dobbs's last touchdown was a Hail Mary throw to wide receiver Jauan Jennings as time expired. The winning play is referenced by many as the "Dobbs-Nail Boot". With the victory, Tennessee was 5–0 with Dobbs as quarterback and ranked as high as top 10 in some polls. In a double overtime 45–38 loss to the #8 Texas A&M Aggies at Kyle Field, Dobbs had a season-high 398 passing yards and one passing touchdown. In addition, he caught a receiving touchdown from Jauan Jennings, his second career receiving touchdown. Dobbs continued solid performances over the rest of the season: he had five touchdowns, 223 passing yards and 190 rushing yards in a 63–37 win over Missouri and 340 passing yards in a 45–34 loss against Vanderbilt at Vanderbilt Stadium. Despite his play, Tennessee faded from their 5–0 start to finish 8–4.
In the final game of his Tennessee career, Dobbs led the Volunteers past the #24 Nebraska Cornhuskers by a score of 38–28 in the 2016 Music City Bowl at Nissan Stadium in Nashville. He had 291 passing yards, one passing touchdown, 11 rushes for 118 yards, and three rushing touchdowns. Dobbs was named the MVP of the game.
Dobbs led Tennessee to a second consecutive 9–4 record. Tennessee's 18 wins with Dobbs at the helm were the most for the school over a two-year span since 2006–2007.
Dobbs majored in aerospace engineering during his time at the University of Tennessee. The university presented him with the 2017 Torchbearer Award, the highest honor for an undergraduate student, which recognizes accomplishments in the community and academics. Dobbs was heralded as possessing a perfect 4.0 grade point average and being named to the SEC Academic Honor Roll.
Dobbs was inducted into Omicron Delta Kappa at Tennessee in 2016.
College statistics
Professional career
Pre-draft
Dobbs received an invitation to the Senior Bowl and was named the starting quarterback for the South. He finished the game completing 12-of-15 pass attempts for 102 passing yards and an interception, as the South defeated the North 16–15. The majority of NFL draft experts and analysts projected him to be a fourth to fifth round pick. NFL analyst Mike Mayock projected him to be selected in the second round and NFL.com projected him to be drafted in the third round. After attending the NFL Scouting Combine, he was ranked the seventh best quarterback in the draft by ESPN, the ninth best quarterback by Sports Illustrated, and NFLDraftScout.com ranked him the eighth best quarterback in the draft. He attended Tennessee's Pro Day and scripted his own set of plays; 19 other teammates also participated in Tennessee's Pro Day. He held workouts for six teams: the Kansas City Chiefs, Tennessee Titans, Carolina Panthers, San Diego Chargers, Pittsburgh Steelers, and New Orleans Saints.
Pittsburgh Steelers (first stint)
The Steelers selected Dobbs in the fourth round (135th overall) of the 2017 NFL draft. He was the seventh quarterback selected, and the Steelers also drafted his former Tennessee and Senior Bowl teammate, cornerback Cameron Sutton. He replaced Zach Mettenberger following the draft. On May 22, 2017, the Steelers signed Dobbs to a four-year, $2.95 million contract with a signing bonus of $554,295. He was named the starter for the Steelers' pre-season opener against the New York Giants. After two starts and four appearances during the pre-season, Dobbs spent his entire rookie season behind incumbent starter Ben Roethlisberger and long-term backup Landry Jones.
Dobbs made his NFL regular season debut on October 7, 2018, in a 41–17 Steelers win against the Atlanta Falcons, as on the final play of the game, he kneeled down. On November 4, 2018, in a 23–16 Steelers Week 9 victory against the Baltimore Ravens, Dobbs completed a 22-yard pass to JuJu Smith-Schuster, after stepping in for Ben Roethlisberger, who got injured on the previous play. In Week 14, against the Oakland Raiders, Dobbs once again stepped in for Roethlisberger, who suffered a rib injury. He finished 4-of-9 for 24 yards and one interception in the 24–21 loss. Overall, in the 2018 season, he appeared in five games and went 6-of-12 for 43 yards and one interception.
Jacksonville Jaguars
On September 9, 2019, Dobbs was traded to the Jacksonville Jaguars for a fifth-round pick in the 2020 NFL draft. Dobbs was traded after Mason Rudolph won the backup job for the Steelers, and Jaguars' quarterback Nick Foles sustained a broken clavicle during the season opener and was placed on injured reserve.
While in Jacksonville, Dobbs participated in an internship at NASA's Kennedy Space Center.
On September 5, 2020, Dobbs was waived by the Jaguars.
Pittsburgh Steelers (second stint)
On September 6, 2020, Dobbs was claimed off waivers by the Steelers, his former team. He re-signed with the Steelers on a one-year contract on April 19, 2021.
On August 31, 2021, Dobbs was placed on injured reserve.
Cleveland Browns (first stint)
On April 9, 2022, Dobbs signed a one-year, $1 million deal with the Cleveland Browns. He was waived on November 28, 2022, after Deshaun Watson returned from suspension.
Detroit Lions
On December 5, 2022, Dobbs was signed to the practice squad of the Detroit Lions.
Tennessee Titans
On December 21, 2022, Dobbs was signed by the Titans off the Lions practice squad.
On December 29, with Ryan Tannehill out for the season with an injury and rookie Malik Willis underperforming, Dobbs was named the starter for the Titans Week 17 matchup against the Dallas Cowboys. In his first NFL start, Dobbs completed 20-of-39 passes for 232 yards, his first career touchdown pass, and an interception in the 27–13 loss.
On January 2, head coach Mike Vrabel announced that Dobbs would start the Week 18 matchup against the Jaguars. Needing a win to clinch the division, Dobbs completed 20-of-29 passes for 179 yards to go with a touchdown and an interception. Despite leading for most of the game, Dobbs was sacked from behind by Jaguars safety Rayshawn Jenkins and fumbled the ball, with the Jaguars returning it 37 yards for the go-ahead touchdown with under three minutes to go. The Titans lost 20–16, ultimately costing them a playoff spot.
Cleveland Browns (second stint)
On March 23, 2023, Dobbs signed with the Browns.
Arizona Cardinals
During the 2023 preseason, on August 24, Dobbs was traded to the Arizona Cardinals along with a seventh-round pick in the 2024 NFL draft, in exchange for a fifth-round pick in the same draft. Dobbs entered the 2023 NFL season as the starting quarterback for the Cardinals, as Kyler Murray started the season on injured reserve.
On September 24, Dobbs led the Cardinals to their first win of the season, a 28–16 upset over the Cowboys, who entered the game undefeated at 2–0. It would be the Cardinals' only win with Dobbs as starter; the team went 1–7 through eight games before trading him to the Vikings. He finished his tenure as the Cardinals' quarterback with 1,569 passing yards, eight passing touchdowns, and five interceptions, as well as 258 rushing yards and three rushing touchdowns.
Minnesota Vikings
After the Minnesota Vikings' starting quarterback Kirk Cousins suffered a season-ending Achilles injury, the Cardinals traded Dobbs, along with a conditional seventh-round pick, to the Vikings in exchange for a sixth-round pick on October 31.
On November 5 against the Falcons, Dobbs entered the game after new starting quarterback Jaren Hall left with a concussion. Dobbs had not taken a single practice repetition for the Vikings, and had to rehearse his cadence with the team's offensive linemen during the game. He threw for 158 yards and scored three total touchdowns in a 31–28 come-from-behind victory. He became the first quarterback in NFL history to record three touchdowns in consecutive games while playing for different teams, and was named NFC player of the week for his performance. During this time, Dobbs received increased media attention for his journeyman career and sudden, unexpected success. His nickname "The Passtronaut", referring to his background in aerospace engineering, first appeared in October 2023 and gained prominence during this period.
After losses to the Denver Broncos and the Chicago Bears, Dobbs was benched in the second half of the Vikings' Week 14 game against the Las Vegas Raiders after completing 10-of-23 passes for 63 yards and scoring no points through three quarters. After backup quarterback Nick Mullens led the Vikings on a game-winning drive in the fourth quarter of the 3–0 victory, the team announced that Mullens would be the starter moving forward, relegating Dobbs to backup. He was later demoted to third-string quarterback after head coach Kevin O'Connell announced that Jaren Hall would back up Mullens.
San Francisco 49ers
On March 19, 2024, Dobbs signed a one-year contract with the San Francisco 49ers.
Dobbs spent the majority of the season as the third-string quarterback behind Brock Purdy and Brandon Allen. He saw his first snaps of the season in Week 17 against the Detroit Lions after Purdy departed late in the fourth quarter with an injury. In his absence, Dobbs threw for 35 yards and ran for a seven-yard touchdown as the 49ers lost 40–34. With Purdy injured, the 49ers named Dobbs their starter for their final game of the season against the Cardinals.
NFL career statistics
Personal life
Dobbs is a Christian. He has spoken about the most important day in his life by saying, "That’s the day I got baptized and went public with the decision of shedding my old life and putting on the new and starting a relationship with Jesus Christ. The day will come that I won't be a part of any football team. But the decision I made during my sophomore year in high school — to be a part of Team Jesus — I'll be a part of that team for the rest of my life, and for all eternity."
Dobbs is the founder of the ASTROrdinary Dobbs Foundation.
Dobbs is an advocate for awareness about alopecia, a hair loss condition that personally affects him.
Dobbs' cousin, Parker Washington, is a wide receiver for the Jaguars.
References
External links
San Francisco 49ers bio
Tennessee Volunteers bio
1995 births
Living people
Aerospace engineers
American football quarterbacks
Arizona Cardinals players
Cleveland Browns players
Detroit Lions players
Jacksonville Jaguars players
Minnesota Vikings players
People with autoimmune disease
Pittsburgh Steelers players
Players of American football from Fulton County, Georgia
San Francisco 49ers players
Sportspeople from Alpharetta, Georgia
Tennessee Titans players
Tennessee Volunteers football players
"Engineering"
] | 3,696 | [
"Aerospace engineers",
"Aerospace engineering"
] |
Rolapitant (INN, trade name Varubi in the US and Varuby in the European Union) is a drug originally developed by Schering-Plough and licensed for clinical development by Tesaro, which acts as a selective antagonist of the NK1 receptor. It has been approved as a medication for the treatment of chemotherapy-induced nausea and vomiting (CINV) after clinical trials showed it to have similar or improved efficacy and some improvement in safety over existing drugs for this application.
Medical uses
Rolapitant is used in combination with other antiemetic (anti-vomiting) agents in adults for the prevention of delayed nausea and vomiting associated with initial and repeat courses of emetogenic cancer chemotherapy, including, but not limited to, highly emetogenic chemotherapy. The approved antiemetic combination consists of rolapitant plus dexamethasone and a 5-HT3 antagonist.
Contraindications
Under the US approval, rolapitant is contraindicated in combination with thioridazine, whose inactivation could be inhibited by rolapitant. Under the European approval, it is contraindicated in combination with St. John's Wort, which is expected to accelerate inactivation of rolapitant.
Side effects
In studies comparing chemotherapy plus rolapitant, dexamethasone and a 5-HT3 antagonist to chemotherapy plus placebo, dexamethasone and a 5-HT3 antagonist, most side effects had comparable frequencies in both groups, and differed more between chemotherapy regimens than between rolapitant and placebo groups. Common side effects included decreased appetite (9% under rolapitant vs. 7% under placebo), neutropenia (9% vs. 8% or 7% vs. 6%, depending on the kind of chemotherapy), dizziness (6% vs. 4%), indigestion and stomatitis (both 4% vs. 2%).
Overdose
Doses of up to eight times the therapeutic dose have been given in studies without causing problems.
Interactions
Rolapitant moderately inhibits the liver enzyme CYP2D6. Blood plasma concentrations of the CYP2D6 substrate dextromethorphan increased threefold when combined with rolapitant, and increased concentrations of other substrates are expected. The drug also inhibits the transporter proteins ABCG2 (breast cancer resistance protein, BCRP) and P-glycoprotein (P-gp); this inhibition has been shown to increase plasma concentrations of the ABCG2 substrate sulfasalazine twofold and of the P-gp substrate digoxin by 70%.
Strong inducers of the liver enzyme CYP3A4 decrease the area under the curve of rolapitant and its active metabolite (called M19); for rifampicin, this effect was almost 90% in a study. Inhibitors of CYP3A4 have no relevant effect on rolapitant concentrations.
Pharmacology
Pharmacodynamics
Both rolapitant and its active metabolite M19 block the NK1 receptor with high affinity and selectivity: blocking the closely related NK2 receptor, or any other of 115 tested receptors and enzymes, requires more than 1000-fold therapeutic concentrations.
Pharmacokinetics
Rolapitant is practically completely absorbed from the gut, independently of food intake. It undergoes no measurable first-pass effect in the liver. The highest blood plasma concentrations are reached after about four hours. In the bloodstream, 99.8% of the substance is bound to plasma proteins.
It is metabolized by the liver enzyme CYP3A4, resulting in the major active metabolite M19 (C4-pyrrolidine-hydroxylated rolapitant) and a number of inactive metabolites. Rolapitant is mainly excreted via the feces (52–89%) in unchanged form, and to a lesser extent via the urine (9–20%) in form of its inactive metabolites. Elimination half-life is about seven days (169 to 183 hours) over a wide dosing range.
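Given the roughly seven-day half-life quoted above, the time course of elimination can be sketched with a simple first-order decay calculation. This is an illustrative one-compartment assumption made here for the arithmetic only, not a clinical model:

```python
# Illustrative sketch only: single-compartment, first-order elimination,
# using the 169-183 hour half-life range quoted above; not a clinical tool.
def fraction_remaining(t_hours, half_life_hours=176.0):
    """Fraction of a dose still present after t_hours."""
    return 0.5 ** (t_hours / half_life_hours)

# After one week (168 h), just over half of a dose remains:
print(round(fraction_remaining(168), 3))  # 0.516
```

With the midpoint half-life of 176 hours, about 52% of a dose is still present a full week after administration, which illustrates why elimination takes several weeks.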
Chemistry
The drug is used in form of rolapitant hydrochloride monohydrate, a white to off-white, slightly hygroscopic crystalline powder. Its maximum solubility in aqueous solutions is at pH 2–4.
References
External links
Antiemetics
CYP2D6 inhibitors
NK1 receptor antagonists
Trifluoromethyl compounds
Spiro compounds
Drugs developed by GSK plc
Gamma-lactams
Pyrrolidones
Piperidines
"Chemistry"
] | 978 | [
"Organic compounds",
"Spiro compounds"
] |
In mathematics, a Menger space is a topological space that satisfies a certain basic selection principle that generalizes σ-compactness. A Menger space is a space in which for every sequence U1, U2, … of open covers of the space there are finite sets F1 ⊆ U1, F2 ⊆ U2, … such that the family F1 ∪ F2 ∪ ⋯ covers the space.
History
In 1924, Karl Menger introduced the following basis property for metric spaces: every basis of the topology contains a countable family of sets with vanishing diameters that covers the space. Soon thereafter, Witold Hurewicz observed that Menger's basis property can be reformulated to the above form using sequences of open covers.
Menger's conjecture
Menger conjectured that in ZFC every Menger metric space is σ-compact. A. W. Miller and D. H. Fremlin proved that Menger's conjecture is false, by showing that there is, in ZFC, a set of real numbers that is Menger but not σ-compact. The Fremlin–Miller proof was dichotomic, and the set witnessing the failure of the conjecture heavily depends on whether a certain (undecidable) axiom holds or not. Bartoszyński and Tsaban gave a uniform ZFC example of a Menger subset of the real line that is not σ-compact.
Combinatorial characterization
For subsets of the real line, the Menger property can be characterized using continuous functions into the Baire space ℕ^ℕ.
For functions f, g ∈ ℕ^ℕ, write f ≤* g if f(n) ≤ g(n) for all but finitely many natural numbers n. A subset D of ℕ^ℕ is dominating if for each function f ∈ ℕ^ℕ there is a function g ∈ D such that f ≤* g. Hurewicz proved that a subset of the real line is Menger if and only if every continuous image of that space into the Baire space is not dominating. In particular, every subset of the real line of cardinality less than the dominating number is Menger.
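The eventual-dominance order ≤* can be illustrated concretely. The helper below (our own naming, not standard notation) checks the relation on a finite prefix only; a genuine check of ≤* quantifies over all natural numbers, so a long prefix merely suggests the relation:

```python
# Finite-prefix illustration of the eventual dominance order <=*:
# f <=* g means f(n) <= g(n) for all but finitely many n.
def eventually_dominated(f, g, prefix_len=1000):
    """Heuristic check of f <=* g on n < prefix_len: any violations
    f(n) > g(n) must all occur early (here, before prefix_len // 10)."""
    violations = [n for n in range(prefix_len) if f(n) > g(n)]
    return all(n < prefix_len // 10 for n in violations)

print(eventually_dominated(lambda n: n, lambda n: n * n + 5))  # True
print(eventually_dominated(lambda n: n * n + 5, lambda n: n))  # False
```

The constant function n ↦ 2 is also eventually dominated by the identity, since it exceeds it only at n = 0 and n = 1, showing why finitely many exceptions are allowed.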
The cardinality of Bartoszyński and Tsaban's counter-example to Menger's conjecture is 𝔡, the dominating number.
Properties
Every compact, and even every σ-compact, space is Menger.
Every Menger space is a Lindelöf space.
A continuous image of a Menger space is Menger.
The Menger property is closed under taking subsets.
Menger's property characterizes filters whose Mathias forcing notion does not add dominating functions.
References
Properties of topological spaces
Topology
"Physics",
"Mathematics"
] | 472 | [
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
In mathematics, an approximate group is a subset of a group which behaves like a subgroup "up to a constant error", in a precise quantitative sense (so the term approximate subgroup may be more correct). For example, it is required that the set of products of elements in the subset be not much bigger than the subset itself (while for a subgroup it is required that they be equal). The notion was introduced in the 2010s but can be traced to older sources in additive combinatorics.
Formal definition
Let G be a group and K ≥ 1; for two subsets X, Y ⊆ G we denote by XY the set of all products xy with x ∈ X and y ∈ Y. A non-empty subset A ⊆ G is a K-approximate subgroup of G if:
It is symmetric, that is, if a ∈ A then a⁻¹ ∈ A;
There exists a subset X ⊆ G of cardinality |X| ≤ K such that AA ⊆ XA.
It is immediately verified that a finite 1-approximate subgroup is the same thing as a genuine subgroup. Of course this definition is only interesting when K is small compared to |A| (in particular, any subset A ⊆ G is a |A|-approximate subgroup). In applications it is often used with K fixed and |A| going to infinity.
Examples of approximate subgroups which are not groups are given by symmetric intervals and more generally by arithmetic progressions in the integers. Indeed, for all N ≥ 1 the subset A = [−N, N] ⊆ ℤ is a 2-approximate subgroup: the set A + A = [−2N, 2N] is contained in the union of the two translates A + N and A − N of A. A generalised arithmetic progression in ℤ is a subset of the form {ℓ₁x₁ + ⋯ + ℓ_d x_d : |ℓ_i| ≤ N_i}, and it is a 2^d-approximate subgroup.
A more general example is given by balls in the word metric in finitely generated nilpotent groups.
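The interval example can be checked numerically. The sketch below is our own helper, specialized to the additive group of integers: it verifies symmetry and that the sumset A + A is covered by the translates x + A for x in a chosen covering set X:

```python
# Verify the K-approximate subgroup conditions for subsets of (Z, +):
# A must be symmetric, and A + A must be covered by {x + A : x in X},
# so that A is a len(X)-approximate subgroup.
def is_k_approximate(A, X):
    A = set(A)
    if any(-a not in A for a in A):          # symmetry: a in A => -a in A
        return False
    sumset = {a + b for a in A for b in A}   # A + A
    cover = {x + a for x in X for a in A}    # union of translates x + A
    return sumset <= cover

N = 10
A = range(-N, N + 1)
print(is_k_approximate(A, X=[-N, N]))  # True: A + A = [-2N, 2N]
```

With X = {−N, N} the check succeeds, confirming K = 2 as claimed above, while a single translate (K = 1) fails to cover A + A.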
Classification of approximate subgroups
Approximate subgroups of the integer group ℤ were completely classified by Imre Z. Ruzsa and Freiman. The result is stated as follows:
For any K ≥ 1 there are constants C, c > 0 depending only on K such that for any K-approximate subgroup A ⊆ ℤ there exists a generalised arithmetic progression P generated by at most C integers and containing at least c·|A| elements, such that P ⊆ 4A (the fourfold sumset of A).
The constants can be estimated sharply. In particular A is contained in at most boundedly many (in terms of K) translates of P: this means that approximate subgroups of ℤ are "almost" generalised arithmetic progressions.
The work of Breuillard–Green–Tao (the culmination of an effort started a few years earlier by various other people) is a vast generalisation of this result. In a very general form its statement is the following:
Let K ≥ 1; there exists C = C(K) such that the following holds. Let G be a group and A a K-approximate subgroup in G. There exist subgroups H ≤ G0 ≤ G with H finite and G0/H nilpotent such that H ⊆ A^4 (the fourfold product set of A), the subgroup generated by A contains G0, and A ⊆ XG0 with |X| ≤ C.
The statement also gives some information on the characteristics (rank and step) of the nilpotent group G0/H.
In the case where is a finite matrix group the results can be made more precise, for instance:
Let d ∈ ℕ. For any K ≥ 1 there is a constant C = C(d) such that for any finite field F_q, any simple subgroup G ⊆ GL_d(F_q) and any K-approximate subgroup A ⊆ G, either A is contained in a proper subgroup of G, or |A| ≤ K^C, or |A| ≥ |G|/K^C.
The theorem applies for example to SL_d(F_q); the point is that the constant C does not depend on the cardinality q of the field. In some sense this says that there are no interesting approximate subgroups (besides genuine subgroups) in finite simple linear groups (they are either "trivial", that is very small, or "not proper", that is almost equal to the whole group).
Applications
The Breuillard–Green–Tao theorem on the classification of approximate groups can be used to give a new proof of Gromov's theorem on groups of polynomial growth. The result obtained is actually a bit stronger, since it establishes a "growth gap" between virtually nilpotent groups (of polynomial growth) and other groups; that is, there exists a (superpolynomial) function f such that any group with growth function bounded by a multiple of f is virtually nilpotent.
Other applications are to the construction of expander graphs from the Cayley graphs of finite simple groups, and to the related topic of superstrong approximation.
Notes
References
Group theory
Geometric group theory
Additive combinatorics
"Physics",
"Mathematics"
] | 809 | [
"Geometric group theory",
"Group actions",
"Additive combinatorics",
"Combinatorics",
"Group theory",
"Fields of abstract algebra",
"Symmetry"
] |
Increased limit factors or ILFs are multiplicative factors that are applied to premiums for "basic" limits of coverage to determine premiums for higher limits of coverage. They are commonly used in casualty insurance pricing.
Overview
Often, limited data is available to determine appropriate charges for high limits of insurance. In order to price policies with high limits of insurance adequately, actuaries may first determine a "basic limit" premium and then apply increased limits factors. The basic limit is a lower limit of liability under which there is a more credible amount of data.
For example, basic limit loss costs or rates may be calculated for many territories and classes of business. At a relatively low limit of liability, such as $100,000, there may be a high volume of data that can be used to derive those rates. For higher limits, there may be a credible volume of data at the countrywide level but not much data available for individual territories or classes. Increased limit factors can be derived at the countrywide level (or some other broad grouping) and then applied to the basic limit rates to arrive at rates for higher limits of liability.
Formula
An increased limit factor (ILF) at limit L relative to basic limit B can be defined as
ILF(L) = (expected losses at limit L + ALAE + ULAE + RL) / (expected losses at limit B + ALAE + ULAE + RL)
where ALAE is the allocated loss adjustment expense provision, ULAE is the unallocated loss adjustment expense provision, and RL is the risk load provision, each evaluated at the corresponding limit.
An indemnity-only ILF can be expressed as
ILF(L) = (expected frequency at limit L × expected severity at limit L) / (expected frequency at limit B × expected severity at limit B)
Often, frequency is assumed to be independent of the policy limit, in which case the formula can be simplified to
ILF(L) = expected severity at limit L / expected severity at limit B
The expected severity at each limit is often referred to as "limited average severity," or LAS.
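Under the frequency-independence assumption, the simplified ratio can be computed directly from a sample of ground-up losses. The sketch below uses an invented loss sample purely for illustration:

```python
# Sketch of the simplified ILF computation: with frequency assumed
# independent of limit, ILF(L) = LAS(L) / LAS(B). Loss values are
# hypothetical, for illustration only.
def limited_average_severity(losses, limit):
    """Average of ground-up losses, each capped at `limit` (the LAS)."""
    capped = [min(x, limit) for x in losses]
    return sum(capped) / len(capped)

def increased_limit_factor(losses, limit, basic_limit):
    return (limited_average_severity(losses, limit)
            / limited_average_severity(losses, basic_limit))

losses = [50_000, 80_000, 120_000, 250_000, 900_000]
print(round(increased_limit_factor(losses, 500_000, 100_000), 3))  # 2.326
```

Here LAS(100,000) = 86,000 and LAS(500,000) = 200,000, giving an ILF of about 2.33: a $500,000 limit would be priced at roughly 2.33 times the basic-limit loss cost.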
Examples
In the United States, many insurers use ILFs published by the Insurance Services Office, a division of Verisk.
References
Further reading
External links
Actuarial science
"Mathematics"
] | 371 | [
"Applied mathematics",
"Actuarial science"
] |
A living building material (LBM) is a material used in construction or industrial design that behaves in a way resembling a living organism. Examples include: self-mending biocement, self-replicating concrete replacement, and mycelium-based composites for construction and packaging. Artistic projects include building components and household items.
History
The development of living building materials began with research of methods for mineralizing concrete, that were inspired by coral mineralization. The use of microbiologically induced calcite precipitation (MICP) in concrete was pioneered by Adolphe et al. in 1990, as a method of applying a protective coating to building façades.
In 2007, "Greensulate", a mycelium-based building insulation material was introduced by Ecovative Design, a spin off of research conducted at the Rensselaer Polytechnic Institute. Mycelium composites were later developed for packaging, sound absorption, and structural building materials such as bricks.
In the United Kingdom, the Materials for Life (M4L) project was founded at Cardiff University in 2013 to "create a built environment and infrastructure which is a sustainable and resilient system comprising materials and structures that continually monitor, regulate, adapt and repair themselves without the need for external intervention." M4L led to the UK's first self-healing concrete trials. In 2017 the project expanded into a consortium led by the universities of Cardiff, Cambridge, Bath and Bradford, changing its name to Resilient Materials 4 Life (RM4L) and receiving funding from the Engineering and Physical Sciences Research Council. This consortium focuses on four aspects of material engineering: self-healing of cracks at multiple scales; self-healing of time-dependent and cycling loading damage; self-diagnosis and healing of chemical damage; and self-diagnosis and immunization against physical damage.
In 2016 the United States Department of Defense's Defense Advanced Research Projects Agency (DARPA) launched the Engineered Living Materials (ELM) program. The goal of this program is to "develop design tools and methods that enable the engineering of structural features into cellular systems that function as living materials, thereby opening up a new design space for building technology... [and] to validate these new methods through the production of living materials that can reproduce, self-organize, and self-heal." In 2017 the ELM program contracted Ecovative Design to produce "a living hybrid composite building material... [to] genetically re-program that living material with responsive functionality [such as] wound repair... [and to] rapidly reuse and redeploy [the] material into new shapes, forms, and applications." In 2020 a research group at the University of Colorado, funded by an ELM grant, published a paper after successfully creating exponentially regenerating concrete.
Self-replicating concrete
Self-replicating concrete is produced using a mixture of sand and hydrogel, which are used as a growth medium for synechococcus bacteria to grow on.
Synthesis and fabrication
The sand-hydrogel mixture from which self-replicating concrete is made has a lower pH, lower ionic strength, and lower curing temperatures than a typical concrete mix, allowing it to serve as a growth medium for the bacteria. As the bacteria reproduce they spread through the medium, and biomineralize it with calcium carbonate, which is the main contributor to the overall strength and durability of the material. After mineralization the sand-hydrogel compound is strong enough to be used in construction, as concrete or mortar.
The bacteria in self-replicating concrete react to humidity changes: they are most active, and reproduce fastest, in an environment with 100% humidity, though a drop to 50% does not have a large impact on cellular activity. Lower humidity results in a stronger material than high humidity.
As the bacteria reproduce, their biomineralization activity increases; this allows production capacity to scale exponentially.
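The exponential scaling can be sketched with a toy doubling model; the split factor and generation counts below are illustrative assumptions, not measured values from the research:

```python
# Toy model of exponential production scaling: each cured batch is split
# and used to seed `split_factor` new growth beds per generation.
# Parameters are illustrative assumptions, not data from the source.
def batches_after(generations, initial_batches=1, split_factor=2):
    return initial_batches * split_factor ** generations

print(batches_after(5))  # one batch, split in two each generation -> 32
```

Five generations of splitting turn one parent batch into 32, which is the sense in which production capacity scales exponentially rather than linearly.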
Properties
The structural properties of this material are similar to those of Portland cement-based mortars: it has an elastic modulus of 293.9 MPa and a tensile strength of 3.6 MPa (the minimum required value for Portland cement-based concrete is approximately 3.5 MPa). However, its fracture energy is 170 N, much less than that of most standard concrete formulations, which can reach up to several kN.
Uses
Self-replicating concrete can be used in a variety of applications and environments, but the effect of humidity on the properties of the end material (see above) means that the application of the material must be tailored to its environment. In humid environments the material can be used to fill cracks in roads, walls and sidewalks, seeping into cavities and growing into a solid mass as it sets; in drier environments it can be used structurally, due to its increased strength at low humidity.
Unlike traditional concrete, the production of which releases massive amounts of carbon dioxide to the atmosphere, the bacteria used in self-replicating concrete absorb carbon dioxide, resulting in a lower carbon footprint.
This self-replicating concrete is not meant to replace standard concrete, but to create a new class of materials, with a mixture of strength, ecological benefits, and biological functionality.
Calcium carbonate biocement
Biocement is a sand aggregate material produced through the process of microbiologically induced calcite precipitation (MICP). It is an environmentally friendly material which can be produced using a variety of stocks, from agricultural waste to mine tailings.
Synthesis and fabrication
Microscopic organisms are the key component in the formation of bioconcrete, as they provide the nucleation site for CaCO3 to precipitate on the surface. Microorganisms such as Sporosarcina pasteurii are useful in this process, as they create highly alkaline environments where dissolved inorganic carbon (DIC) is present in high amounts. These factors are essential for microbiologically induced calcite precipitation (MICP), which is the main mechanism by which bioconcrete is formed. Other organisms that can be used to induce this process include photosynthesizing microorganisms such as microalgae and cyanobacteria, and sulphate-reducing bacteria (SRB) such as Desulfovibrio desulfuricans.
Calcium carbonate nucleation depends on four major factors:
Calcium concentration
DIC concentration
pH levels
Availability of nucleation sites
As long as calcium ion concentrations are high enough, microorganisms can create such an environment through processes such as ureolysis.
Advancements in optimizing methods to use microorganisms to facilitate carbonate precipitation are rapidly developing.
Properties
Biocement is able to "self-heal" due to bacteria, calcium lactate, nitrogen, and phosphorus components that are mixed into the material. These components can remain active in biocement for up to 200 years. Biocement, like any other concrete, can crack under external forces and stresses; unlike normal concrete, however, the microorganisms in biocement can germinate when introduced to water. Rain can supply this water in the environments where biocement is typically placed. Once exposed to water, the bacteria activate and feed on the calcium lactate in the mixture. This feeding process consumes oxygen and converts the originally water-soluble calcium lactate into insoluble limestone. The limestone then solidifies on the surface it lies on, in this case the cracked area, thereby sealing the crack.
Oxygen is one of the main elements that cause corrosion in materials such as metals. When biocement is used in steel-reinforced concrete structures, the microorganisms consume oxygen, thereby increasing corrosion resistance. The same process also makes the material water-resistant, since it induces healing and reduces overall corrosion. The concrete aggregates used in this way can also be recycled; they can be reclaimed by methods such as crushing or grinding the biocement.
The permeability of biocement is also higher than that of normal cement, due to its higher porosity; higher porosity can lead to larger crack propagation when the material is exposed to strong enough forces. Biocement is now roughly 20% self-healing agent by composition, which decreases its mechanical strength: the compressive strength of bioconcrete is about 25% lower than that of normal concrete. Organisms such as Pseudomonas aeruginosa are effective in creating biocement, but they are unsafe for humans to be near and so must be avoided.
Uses
Biocement is currently used in applications such as sidewalks and building pavements, and biological building construction has also been proposed. Its use is not yet widespread because there is currently no feasible method of mass-producing biocement, and much more definitive testing is needed before it can confidently be used in large-scale applications where mechanical strength cannot be compromised. Biocement also costs about twice as much as normal concrete. Smaller current applications include spray bars, hoses, drop lines, and bee nesting. While biocement is still in its developmental stages, its potential for future use is promising.
Mycelium composites
Mycelium composites are materials based on mycelium, the mass of branching, thread-like hyphae produced by fungi. There are several ways to synthesize and fabricate mycelium composites, leading to different properties and use cases of the finished product. Mycelium composites are economical and sustainable.
Synthesis and fabrication
Mycelium-based composites are usually synthesised using different kinds of fungi, especially mushrooms. Fungal inoculum is introduced to an organic substrate to form a composite. The selection of fungal species is important for creating a product with specific properties; species used to make composites include G. lucidum, Ganoderma sp., P. ostreatus, Pleurotus sp., T. versicolor, and Trametes sp. A dense network is formed as the fungal mycelium degrades and colonises the organic substrate. Plant waste is a common substrate: fungal mycelium is incubated with a plant waste product to produce sustainable alternatives, mostly to petroleum-based materials. The mycelium and substrate need time to incubate properly; this period is crucial, as it is when the particles interact and bind to form a dense network, and hence a composite. During incubation, the mycelium draws essential nutrients such as carbon, minerals, and water from the waste plant product. Substrate components include cotton, wheat grains, rice husks, sorghum fibres, agricultural waste, sawdust, bread particles, banana peel, and coffee residue. The composites are synthesised and fabricated to have specific properties using techniques such as adding carbohydrates, altering fermentation conditions, using different fabrication technology, altering post-processing stages, and modifying genetics or biochemistry. Most mycelium composites are fabricated using plastic molds, so the mycelium can be grown directly into the desired shape; other fabrication methods use laminate skin, vacuum skin, glass, plywood, wooden, petri dish, or tile molds.
During the fabrication process, it is essential to have a sterilised environment and controlled light, temperature (25–35 °C) and humidity (around 60–65%) for the best results. One way to synthesise a mycelium-based composite is to mix different ratios of fibres, water and mycelium, place the mixture in a PVC mold in compressed layers, and let it incubate for a couple of days. Mycelium-based composites can be processed into foam, laminate and mycelium sheets using techniques such as laser cutting and cold and heat compression. Newly fabricated mycelium composites tend to absorb water, so this property can be changed by drying the product.
Properties
One of the advantages of mycelium-based composites is that their properties can be altered depending on the fabrication process and the fungus used. Properties depend on the type of fungus and where it is grown; additionally, fungi can degrade the cellulose component of the plant to form composites in a preferable manner. Important mechanical properties such as compressive strength, morphology, tensile strength, hydrophobicity, and flexural strength can be modified for different uses of the composite. To increase tensile strength, the composite can be heat-pressed. The properties of a mycelium composite are affected by its substrate; for example, a composite made from 75 wt% rice hulls has a density of 193 kg/m3, while one made from 75 wt% wheat grains has a density of 359 kg/m3. Another method of increasing the density of the composite is deleting a hydrophobin gene. These composites also have the ability to self-fuse, which increases their strength. Mycelium-based composites are usually compact, porous, lightweight and good insulators. Their main property is that they are entirely natural, and therefore sustainable. They also act as insulation, are fireproof, nontoxic, water-resistant, rapidly growing, and able to bond with neighboring mycelium products. Mycelium-based foams (MBFs) and sandwich components are two common types of composite. MBFs are the most efficient type because of their low density, high quality, and sustainability; their density can be decreased by using substrates smaller than 2 mm in diameter. These composites also have higher thermal conductivity.
Uses
One of the most common uses of mycelium-based composites is as an alternative to petroleum- and polystyrene-based materials. Such synthetic foams are usually used in sustainable design and architecture products. Applications of mycelium-based composites follow from their properties, and several bio-sustainable companies produce them.
Further applications
Beyond the use of living building materials, the application of microbially induced calcium carbonate precipitation (MICP) has the possibility of helping remove pollutants from wastewater, soil, and the air. Currently, heavy metals and radionuclides are a challenge to remove from water sources and soil. Radionuclides in groundwater do not respond to the traditional method of pumping and treating the water, and while removal methods for heavy metals contaminating soil, such as phytoremediation and chemical leaching, do work, these treatments are expensive, lack longevity in effectiveness, and can destroy the productivity of the soil for future uses. By using ureolytic bacteria capable of CaCO3 precipitation, the pollutants can be incorporated into the calcite structure, thereby removing them from the soil or water. This works through substitution of calcium ions by the pollutants, which then form solid particles that can be removed. It is reported that 95% of these solid particles can be removed by using ureolytic bacteria. However, when calcium scaling occurs in pipelines, MICP cannot be used as it is calcium-based. Instead of calcium, it is possible to add a low concentration of urea to remove up to 90% of the calcium ions.
Another further application involves a self-constructed foundation that forms in response to pressure through the use of engineered bacteria. The engineered bacteria could be used to detect increased pressure in soil and then cement the soil particles in place, effectively solidifying the soil. Within soil, pore pressure consists of two factors: the amount of applied stress, and how quickly water in the soil is able to drain. Through analyzing the biological behavior of the bacteria in response to a load and the mechanical behavior of the soil, a computational model can be created. With this model, certain genes within the bacteria can be identified and modified to respond in a certain way to a given pressure. However, the bacteria analyzed in this study were grown in a highly controlled lab, so real soil environments may not be as ideal. This is a limitation of the model and the study it originated from, but it still remains a possible application of living building materials.
References
Construction
Building materials
Zoe Pikramenou
Zoe Pikramenou is Professor of Inorganic Chemistry and Photophysics at the University of Birmingham, where she is the first female professor in the chemistry department.
Education and career
Pikramenou graduated in 1987 with a B.Sc. in Chemistry from the University of Athens in Greece. She then moved to Michigan State University where she worked in the lab of Daniel G. Nocera, graduating with a Ph.D. in Chemistry in 1993. She then conducted post-doctoral studies at University of Strasbourg in France as a Marie Curie and Collège de France fellow working with Nobel prize-winner Jean-Marie Lehn. She became a lecturer at the University of Edinburgh in 1995, then was appointed to the University of Birmingham in 2000.
Research
Pikramenou is an inorganic chemist with experience in nanotechnology and photophysics, who has researched lanthanide luminescent complexes. Recent research has investigated how gold nanorods could be applied to treat cancerous cells in the body. This research is in partnership with the Canadian company Sona Nanotech Inc. Pikramenou has researched other applications of gold nanoparticles, including their use in tracking blood flow in capillary networks. She was part of a team that developed iridium-coated gold nanoparticles, significant because they have a longer lifetime of use. She has co-investigated platelet nodules using microscopy.
Another medical application of Pikramenou's nanoparticle research is the use of coated silica particles to treat sensitive teeth. As part of her doctoral research at Michigan State University, Pikramenou invented a nanoparticle bucket, which lights up when it contains a particular compound. This kind of microscopic bucket is described as a supramolecule.
Coated nanoparticles patent
In 2017, Pikramenou and her co-researcher Nicola J Rogers were granted a patent to protect their invention of a new process for combining at least one metal complex and a surfactant.
Awards
2012 - Leverhulme Trust Research Fellowship
2007 - EPSRC Discipline Hopping Award with Chemical Engineering
2000 - The Aventis Scientia Europea Prize, Aventis Foundation and French Academy of Sciences, for collaborative work with a physicist and a biologist
References
External links
The Zoe Pikramenou Photophysics Group
Professor Pikramenou's Inaugural Lecture
Michigan State University alumni
British nanotechnologists
Academics of the University of Birmingham
Year of birth missing (living people)
Living people
British women engineers
Women chemical engineers
21st-century women engineers
Women bioengineers
Place of birth missing (living people)
Nationality missing
Regular numerical predicate
In computer science and mathematics, more precisely in automata theory, model theory and formal language theory, a regular numerical predicate is a kind of relation over the integers. Regular numerical predicates can also be considered as a subset of for some arity . One of the main interests of this class of predicates is that it can be defined in many different ways, using different logical formalisms. Furthermore, most of the definitions use only basic notions, which makes it possible to relate the foundations of various fields of fundamental computer science such as automata theory, syntactic semigroups, model theory and semigroup theory.
The class of regular numerical predicates is denoted , and REG.
Definitions
The class of regular numerical predicates admits many equivalent definitions, which are now given. In all of these definitions, we fix and a (numerical) predicate of arity .
Automata with variables
The first definition encodes predicate as a formal language. A predicate is said to be regular if the formal language is regular.
Let the alphabet be the set of subsets of . Given a vector of integers , it is represented by the word of length whose -th letter is . For example, the vector is represented by the word .
We then define as .
The numerical predicate is said to be regular if is a regular language over the alphabet . This is the reason for the use of the word "regular" to describe this kind of numerical predicate.
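Since the concrete formulas and examples of this definition are elided above, the encoding can be sketched in Python under an assumed convention (the letter at position j is the set of coordinates whose value equals j), consistent with the surrounding description:

```python
def encode(vec):
    # Word over the alphabet of subsets of coordinate indices:
    # the letter at position j is {i : vec[i] == j} (assumed convention).
    length = max(vec) + 1  # long enough to mention every component value
    return [frozenset(i for i, a in enumerate(vec) if a == j)
            for j in range(length)]

# The vector (1, 3) is encoded as the word {}, {0}, {}, {1}.
print(encode((1, 3)))
```

A predicate is then regular exactly when the set of encodings of its satisfying vectors forms a regular language over this alphabet.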
Automata reading unary numbers
This second definition is similar to the previous one. Predicates are encoded into languages in a different way, and the predicate is said to be regular if and only if the language is regular.
Our alphabet is the set of vectors of binary digits. That is: . Before explaining how to encode a vector of numbers, we explain how to encode a single number.
Given a length and a number , the unary representation of of length is the word over the binary alphabet , beginning by a sequence of "1"'s, followed by "0"'s. For example, the unary representation of 1 of length 4 is .
Given a vector of integers , let . The vector is represented by the word such that, the projection of over its -th component is . For example, the representation of is . This is a word whose letters are the vectors , and and whose projection over each components are , and .
As in the previous definition, the numerical predicate is said to be regular if is a regular language over the alphabet .
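Because the examples above are elided, the unary encoding can be sketched as follows; the convention (n ones followed by zeros, with the word's letters read column-wise) is an assumption consistent with the surrounding text:

```python
def unary(n, length):
    # Fixed-length unary representation: n ones followed by zeros.
    return "1" * n + "0" * (length - n)

def encode_vector(vec):
    # Write each component in unary with a common length; the j-th
    # letter of the resulting word is the column of j-th bits.
    length = max(vec)
    rows = [unary(a, length) for a in vec]
    return [tuple(int(r[j]) for r in rows) for j in range(length)]

print(unary(1, 4))            # "1000"
print(encode_vector((2, 3)))  # [(1, 1), (1, 1), (0, 1)]
```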
A predicate is regular if and only if it can be defined by a monadic second order formula , or equivalently by an existential monadic second order formula, where the only atomic predicate is the successor function .
A predicate is regular if and only if it can be defined by a first order logic formula , where the atomic predicates are:
the order relation ,
the predicate stating that a number is a multiple of a constant , that is .
Congruence arithmetic
The language of congruence arithmetic is defined as the set of Boolean combinations, where the atomic predicates are:
the addition of a constant , with an integral constant,
the order relation ,
the modular relations, with a fixed modular value. That is, predicates of the form where and are fixed constants and is a variable.
A predicate is regular if and only if it can be defined in the language of congruence arithmetic. The equivalence with previous definition is due to quantifier elimination.
Using recursion and patterns
This definition requires a fixed parameter . A set is said to be regular if it is -regular for some .
In order to introduce the definition of -regular, the trivial case where should be considered separately. When , then the predicate is either the constant true or the constant false. Those two predicates are said to be -regular (for every ). Let us now assume that . In order to introduce the definition of regular predicate in this case, we need to introduce the notion of section of a predicate.
The section of is the predicate of arity where the -th component is fixed to . Formally, it is defined as . For example, let us consider the sum predicate . Then is the predicate which adds the constant , and is the predicate which states that the sum of its two elements is .
The last equivalent definition of regular predicate can now be given. A predicate of arity is -regular if it satisfies the two following conditions:
all of its sections are -regular,
there exists a threshold such that, for all vectors with each , .
The second property intuitively means that, when the numbers are big enough, their exact values do not matter. The properties which matter are the order relation between the numbers and their values modulo the period .
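This intuition can be checked empirically on a small example. The predicate, threshold, and period below are illustrative choices, not values from the text:

```python
def P(x, y):
    # Sample predicate definable in congruence arithmetic:
    # an order constraint combined with a modular constraint.
    return x < y and x % 3 == 0

period, threshold = 3, 10  # assumed period and threshold for this P
for x in range(threshold, threshold + 20):
    for y in range(threshold, threshold + 20):
        # Uniformly shifting all large components by the period
        # preserves the truth value of P.
        assert P(x, y) == P(x + period, y + period)
print("threshold/period property holds on the sampled range")
```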
Using recognizable semigroups
Given a subset , let be the characteristic vector of . That is, the vector in whose -th component is 1 if , and 0 otherwise. Given a sequence of sets, let .
The predicate is regular if and only if, for each increasing sequence of sets , is a recognizable submonoid of .
Definition of non regular language
The predicate is regular if and only if all languages which can be defined in first order logic with atomic predicates for letters and the atomic predicate are regular. The same property would hold for the monadic second order logic, and with modular quantifiers.
Reducing arity
The following property allows an arbitrarily complex non-regular predicate to be reduced to a simpler binary predicate which is also non-regular.
Let us assume that is definable in Presburger arithmetic. The predicate is non-regular if and only if there exists a formula in which defines multiplication by a rational . More precisely, it allows one to define the non-regular predicate for some .
Properties
The class of regular numerical predicates satisfies many properties.
Satisfiability
As in the previous case, let us assume that is definable in Presburger arithmetic. The satisfiability of is decidable if and only if is regular.
This theorem is due to the previous property and the fact that the satisfiability of is undecidable when and .
Closure property
The class of regular predicates is closed under union, intersection, complement, taking a section, projection and Cartesian product. All of these properties follow directly from the definition of this class as the class of predicates definable in .
Decidability
It is decidable whether a predicate defined in Presburger arithmetic is regular.
Quantifier elimination
The logics considered above admit quantifier elimination. More precisely, Cooper's algorithm for quantifier elimination introduces neither multiplication by constants nor sums of variables. Therefore, when applied to a formula it returns a quantifier-free formula in .
References
Automata (computation)
Theoretical computer science
Humanium Metal
Humanium Metal is a brand of metal made by melting down illegal firearms seized in conflict zones. The creation and distribution of this metal is done through a marketing campaign called "The Humanium Metal Initiative", started in 2016 by Swedish nonprofit organization IM Swedish Development Partner. The stated objective of the program is to draw attention to issues of gun violence and contribute toward the ending of illegal firearms trade. Humanium Metal is used for the creation of non-lethal commodities, such as wristwatches, buttons, and spinning tops, with proceeds returning to violence prevention efforts and support for gun-violence survivors in the areas from which the firearms were seized.
History
The Humanium Metal Initiative was developed by Peter Brune of IM Swedish Development Partner in partnership with designer Johan Pihl. The objective of Humanium Metal is "to spread awareness of the devastating impact of illegal firearms and armed violence, as well as generate funds urgently needed to empower people living in conflict-torn societies." The campaign is implemented in conjunction with Swedish advertising agencies Great Works and Akestam Holst.
Humanium Metal was first produced in November 2016 in El Salvador, where firearms seized by the Salvadoran government were converted into one ton of metal. The project has since expanded to Guatemala, and, as of 2018, it plans to expand to Honduras and Colombia.
The program has received endorsements from the Dalai Lama, former director general of the International Atomic Energy Agency Hans Blix, and Nobel Peace Prize winner Desmond Tutu. The program has also partnered with the Swedish Ministry for Foreign Affairs.
As of the end of 2022, the program had destroyed more than 12,000 firearms in El Salvador, Zambia and the United States. More than US$1.2 million has already been channeled to civil society interventions in violence-affected areas.
Production and use
The most common method for producing Humanium Metal is for governments to seize illegal firearms and melt down their metal, turning it into ingots, wire, or pellets. The metal is 95% iron and is sent to Sweden, where it is reduced to a powder that can be used in the production of metal objects. As of 2018, Humanium Metal was priced at about $6.60 per ounce.
In 2018, Stockholm-based watchmaker TRIWA began to market wristwatches 3D-printed with Humanium Metal. In 2019, the Humanium Metal Initiative partnered with The Non-Violence Project Foundation to produce small-scale replicas of Swedish artist Carl Fredrik Reuterswärd's 1985 sculpture Non-Violence. Other companies have produced spinning tops, buttons, and bracelets made from Humanium Metal. A Good Company has made a limited-edition A Good Humanium Metal pen, 25% of the sales of which goes to support projects tackling violent crime and rebuilding conflict-afflicted communities in El Salvador.
In 2020, Scottish artist Frank To created paintings using powdered Humanium Metal mixed with paint.
In December 2020, IM partnered with the Zambia Police Service to destroy more than 6,000 firearms and turn them into Humanium Metal.
In 2021, the police department of Falmouth, Maine publicly destroyed a set of illegal weapons and announced their intention to turn them into Humanium Metal.
Awards
In 2017, the Humanium Metal Initiative won the Grand Prix for Innovation at the Cannes Lions Festival for Creativity. In 2018, the program won the advertising category of Fast Company's 2018 World Changing Ideas Awards.
References
External links
Humanium Metal website
Interview with Hans Blix about humanium
Metals
Gun violence
Fusome
The fusome is a membranous structure found in the developing germ cell cysts of many insect orders. Initial description of the fusome occurred in the 19th century and since then the fusome has been extensively studied in Drosophila melanogaster male and female germline development. This structure has roles in maintaining germline cysts, coordinating the number of mitotic divisions prior to meiosis, and oocyte determination by serving as a structure for intercellular communication.
Structure
In D. melanogaster, germline cysts form from one germline stem cell through four mitotic divisions with incomplete cytokinesis. Incomplete cytokinesis results in intercellular bridges, called ring canals, connecting every cell in the cyst. The four mitotic divisions result in cysts of 16 cells connected by 15 ring canals. The fusome is composed of membrane vesicles and originates from the endoplasmic reticulum. Fusome material is found inside the ring canals, and the fusome can range in size from 1 to 10 μm depending on the stage of development.
Fusome development
The spectrosome is a round structure in germline stem cells that develops into the fusome in cyst cells. In females, the fusome divides asymmetrically into daughter cells by attaching to one spindle pole during mitosis, resulting in one cell receiving all of the existing fusome material. New fusome material is generated de novo in the ring canal connecting the two cells. The two fusome parts then fuse together to connect the cells. Asymmetric fusome partitioning and new formation followed by fusion occur at each mitotic division. In spermatogenesis, fusome partitioning is symmetric and the fusome is still present during the meiotic divisions.
Fusome components
Many proteins and organelles associate with the fusome throughout germ cell development. Cytoskeleton components, such as alpha and beta spectrins, hu-li tai shao (hts), and ankyrin, were the first proteins identified in the fusome. Centrosomes travel along the fusome, and the fusome is involved in microtubule organization. The interactions between the fusome and microtubules result in cyst polarity in oogenesis. Associations between the fusome and microtubules change throughout the cell cycle. Mitochondria associate with the fusome and travel through ring canals to the oocyte. Microtubules travel through ring canals and form the tracks for transport of materials between cells.
Function
There are numerous functions of the fusome as a structure necessary for cell-cell communication in developing germ cell cysts. The fusome connects cells, allowing for transport of proteins and RNAs between cells and synchronous activities. Mutations in essential fusome components can result in infertility.
Role in cell cycle synchrony
Developing cells in germline cysts undergo mitotic divisions synchronously, and in males all cells in a cyst also undergo meiosis synchronously. The fusome acts as a track along which feedback mechanisms quickly communicate an event to each cell, ensuring that a specific outcome occurs simultaneously in every cell. Cells in a cyst fail to divide synchronously if the fusome is disrupted. The rosette formation of germline cyst cells places the cells in the closest configuration for communication.
Throughout the cell cycle, different cyclins associate with the fusome to induce synchronous cell divisions. Cyclin A and Cyclin E localize to the fusome in female germline cysts and are required for the correct number of mitotic divisions to occur. Abnormal cyclin levels result in too few or too many divisions. Cyclin E at the fusome is phosphorylated for degradation by the SCF complex and if not degraded, an extra division occurs. The fusome may be the degradation site for other cell cycle proteins. Myt1 kinase inhibits CycA/Cdk1 in males during G2. Without Myt1 regulation, fusome and centrosome behavior is abnormal, resulting in cells with irregular spindles.
Differences in male vs female fusomes
In females, the fusome plays a role in cell fate and differentiation. Asymmetric fusome distribution and centriole orientation determines which cell in the developing female germline cyst becomes the oocyte. One of the two cells from the first division within the cyst becomes the oocyte and contains the most fusome material. The fusome degrades after the 16-cell cyst forms. In females, the connections are the channels through which nurse cells send proteins and RNAs to the oocyte along polarized microtubules.
In males, the fusome is necessary for ensuring quality control in individual cysts. DNA damage in one cell leads to all cells in a cyst dying by communication through the fusome, either by disseminating a death signal or additive DNA damage inducing apoptosis. This ensures mature sperm cells have intact genomes before fertilizing an egg. In addition, the fusome connections ensure haploid spermatids have proteins and RNA made by the other chromosome for “gamete equivalency”.
Similar structures in other animals
Fusomes were previously thought to be specific to insect gametogenesis. Fusome-like structures have been identified in Xenopus laevis oogenesis by electron microscopy and immunostaining for fusome components such as spectrin and hts. Intercellular bridges also connect developing germ cells in mammals, contributing to cell cycle synchrony and gamete quality control by sharing substances between cells. Future studies are required to elucidate all of the functions that arise from cell-cell communication through intercellular bridges. In addition, a future area of research is to determine why some organisms lack fusomes. Do these organisms have another structure that carries out the role of the fusome or are these roles not necessary in germline cyst development of these other organisms?
See also
Intercellular junctions
Gametogenesis
Spectrin
Cyclin
References
Wilson PG. Centrosome inheritance in the male germ line of Drosophila requires hu-li tai-shao function. Cell Biol Int. 2005 May;29(5):360-9.
External links
Huynh JR. (2006) Fusome as a Cell-Cell Communication Channel of Drosophila Ovarian Cyst. In: Cell-Cell Channels. Springer, New York, NY. https://www.ncbi.nlm.nih.gov/books/NBK6300/
http://www.oxfordreference.com/view/10.1093/acref/9780195307610.001.0001/acref-9780195307610-e-2383?rskey=LqAWUj&result=2381
Lighthouse, D. V., M. Buszczak, and A. C. Spradling. (2008). New components of the Drosophila fusome suggest it plays novel roles in signaling and transport. Dev Biol 317: 59–71. doi:10.1016/j.ydbio.2008.02.009
de Cuevas, M., M. A. Lilly, and A. C. Spradling. (1997). Germline cyst formation in Drosophila. Annu. Rev. Genet. 31: 405–428. DOI: 10.1146/annurev.genet.31.1.405
Yamashita, Y. M., H. Yuan, J. Cheng, and A. J. Hunt. (2010). Polarity in stem cell division: asymmetric stem cell division in tissue homeostasis. Cold Spring Harb Perspect Biol 2:a001313 doi: 10.1101/cshperspect.a001313
Rieger R. Michaelis A., Green M. M. (1976). Glossary of genetics and cytogenetics: Classical and molecular. Heidelberg - New York: Springer-Verlag. .
King R. C., Stransfield W. D. (1998): Dictionary of genetics. Oxford University Press, New York, Oxford, ; .
Cell biology
Rayleigh's quotient in vibrations analysis
The Rayleigh's quotient represents a quick method to estimate the natural frequency of a multi-degree-of-freedom vibration system, in which the mass and the stiffness matrices are known.
The eigenvalue problem for a general system of the form
in absence of damping and external forces reduces to
The previous equation can be written also as the following:
where , in which represents the natural frequency, M and K are the real positive symmetric mass and stiffness matrices respectively.
For an n-degree-of-freedom system the equation has n solutions , that satisfy the equation
By multiplying both sides of the equation by and dividing by the scalar , it is possible to express the eigenvalue problem as follows:
for .
In the previous equation it is also possible to observe that the numerator is proportional to the potential energy while the denominator depicts a measure of the kinetic energy. Moreover, the equation allows us to calculate the natural frequency only if the eigenvector (as well as any other displacement vector) is known. If the modal vectors are not known, we can repeat the foregoing process but with and taking the place of and , respectively. By doing so we obtain the scalar , also known as Rayleigh's quotient:
Therefore, the Rayleigh's quotient is a scalar whose value depends on the vector , and it can be calculated to a good approximation for any arbitrary vector, provided that the vector lies reasonably close to one of the modal vectors , i = 1,2,3,...,n.
Since the vector differs from the modal vector only by a small quantity of first order, the computed Rayleigh's quotient will not differ sensibly from the exact result, and that is what makes this method very useful. A good way to estimate the lowest modal vector , which generally works well for most structures (even though it is not guaranteed), is to assume equal to the static displacement under an applied force that has the same relative distribution as the diagonal mass matrix terms. The latter can be elucidated by the following 3-DOF example.
Example – 3DOF
As an example, we can consider a 3-degree-of-freedom system in which the mass and the stiffness matrices of them are known as follows:
To get an estimation of the lowest natural frequency we choose a trial vector of static displacement obtained by loading the system with a force proportional to the masses:
Thus, the trial vector will become
that allow us to calculate the Rayleigh's quotient:
Thus, the lowest natural frequency, calculated by means of Rayleigh's quotient is:
Using a calculation tool, it is quick to verify how much this estimate differs from the exact value. In this case, using MATLAB, it has been calculated that the lowest natural frequency is: , which means that using the Rayleigh's approximation led to an error of , a remarkable result.
The example shows how the Rayleigh's quotient is capable of giving an accurate estimate of the lowest natural frequency. The practice of using the static displacement vector as a trial vector is valid because the static displacement vector tends to resemble the lowest vibration mode.
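Since the matrices and numbers of the worked example are elided above, the same procedure can be sketched with an assumed 3-DOF spring-mass chain (unit masses and stiffnesses are illustrative choices, not the article's values):

```python
import numpy as np

# Assumed 3-DOF chain: unit masses, unit springs, fixed at one end.
M = np.diag([1.0, 1.0, 1.0])
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

# Trial vector: static displacement under a load proportional to the
# masses, x = K^-1 (M @ 1), which tends to resemble the lowest mode.
x = np.linalg.solve(K, M @ np.ones(3))

# Rayleigh's quotient R(x) = (x^T K x) / (x^T M x), an estimate of omega_1^2.
R = (x @ K @ x) / (x @ M @ x)
omega_est = np.sqrt(R)

# Exact lowest natural frequency from the eigenvalues of M^-1 K.
omega_exact = np.sqrt(min(np.linalg.eigvals(np.linalg.inv(M) @ K).real))

print(omega_est, omega_exact)  # the estimate is a tight upper bound
```

For this chain the estimate is about 0.447 rad/s against an exact value of about 0.445 rad/s, an error under 1%, mirroring the accuracy reported in the text.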
References
Abstract algebra
Linear algebra
Mathematical physics
Mechanical vibrations
Solar observation
Solar observation is the scientific endeavor of studying the Sun and its behavior and relation to the Earth and the remainder of the Solar System. Deliberate solar observation began thousands of years ago. That initial era of direct observation gave way to telescopes in the 1600s, followed by satellites in the twentieth century.
Prehistory
Stratigraphic data suggest that solar cycles have occurred for hundreds of millions of years, if not longer; measuring varves in precambrian sedimentary rock has revealed repeating peaks in layer thickness corresponding to the cycle. It is possible that the early atmosphere on Earth was more sensitive to solar irradiation than today, so that greater glacial melting (and thicker sediment deposits) could have occurred during years with greater sunspot activity.
This would presume annual layering; however, alternative explanations (diurnal) have also been proposed.
Analysis of tree rings revealed a detailed picture of past solar cycles: Dendrochronologically dated radiocarbon concentrations have allowed for a reconstruction of sunspot activity covering 11,400 years.
Early observations
Solar activity and related events have been regularly recorded since the time of the Babylonians. In the 8th century BC, they described solar eclipses and possibly predicted them from numerological rules. The earliest extant report of sunspots dates back to the Chinese Book of Changes. The phrases used in the book translate to "A dou is seen in the Sun" and "A mei is seen in the Sun", where dou and mei would be a darkening or obscuration (based on the context). Observations were regularly noted by Chinese and Korean astronomers at the behest of the emperors, rather than independently.
The first clear mention of a sunspot in Western literature, around 300 BC, was by the ancient Greek scholar Theophrastus, student of Plato and Aristotle and successor to the latter. On 17 March AD 807 Benedictine monk Adelmus observed a large sunspot that was visible for eight days; however, Adelmus incorrectly concluded he was observing a transit of Mercury.
The earliest surviving record of deliberate sunspot observation dates from 364 BC, based on comments by Chinese astronomer Gan De in a star catalogue. By 28 BC, Chinese astronomers were regularly recording sunspot observations in official imperial records.
A large sunspot was observed at the time of Charlemagne's death in AD 813. Sunspot activity in 1129 was described by John of Worcester and Averroes provided a description of sunspots later in the 12th century; however, these observations were also misinterpreted as planetary transits.
The first unambiguous mention of the solar corona was by Leo Diaconus, a Byzantine historian. He wrote of the 22 December 968 total eclipse, which he experienced in Constantinople (modern-day Istanbul, Turkey):
The earliest known record of a sunspot drawing was in 1128, by John of Worcester.
Another early observation was of solar prominences, described in 1185 in the Russian Chronicle of Novgorod.
17th and 18th centuries
Giordano Bruno and Johannes Kepler suggested the idea that the Sun rotated on its axis. Sunspots were first observed telescopically on 18 December 1610 (Gregorian calendar, not yet adopted in England) by English astronomer Thomas Harriot, as recorded in his notebooks. On 9 March 1611 (Gregorian calendar, also not yet adopted in East Frisia) they were observed by Frisian medical student Johann Goldsmid (latinised name Johannes Fabricius), who subsequently teamed up with his father David Fabricius, a pastor and astronomer, to make further observations and to publish a description in a pamphlet in June 1611. The Fabriciuses used camera obscura telescopy to get a better view of the solar disk, and like Harriot made observations shortly after sunrise and shortly before sunset. Johann was the first to realize that sunspots revealed solar rotation, but he died on 19 March 1616, aged 26, and his father a year later. Several scientists such as Johannes Kepler, Simon Marius, and Michael Maestlin were aware of the Fabriciuses' early sunspot work, and indeed Kepler repeatedly referred to it in his writings. However, like that of Harriot, their work was otherwise not well known. Galileo Galilei almost certainly began telescopic sunspot observations around the same time as Harriot, given that he made his first telescope in 1609 on hearing of the Dutch patent of the device, and that he had previously managed to make naked-eye observations of sunspots. He is also reported to have shown sunspots to astronomers in Rome, but we do not have records of the dates. The records of telescopic observations of sunspots that we do have from Galileo do not start until 1612, by which time they are of unprecedented quality and detail, as he had by then developed the telescope design and greatly increased its magnification. Likewise, Christoph Scheiner had probably been observing the spots using an improved helioscope of his own design.
Galileo and Scheiner, neither of whom knew of the work of Harriot or Fabricius, vied for credit for the discovery. In 1613, in Letters on Sunspots, Galileo refuted Scheiner's 1612 claim that sunspots were planets inside Mercury's orbit, showing instead that they were surface features.
Although the physical aspects of sunspots were not identified until the 20th century, observations continued. Study was hampered during the 17th century by the low number of sunspots during what is now recognized as an extended period of low solar activity, known as the Maunder Minimum. By the 19th century, sufficient sunspot records allowed researchers to infer periodic cycles in sunspot activity. In 1845, Henry and Alexander observed the Sun with a thermopile and determined that sunspots emitted less radiation than surrounding areas. Higher-than-average emission was later observed from solar faculae.
Sunspots had some importance in the debate over the nature of the Solar System. They showed that the Sun rotated, and their comings and goings showed that the Sun changed, contrary to Aristotle, who had taught that all celestial bodies were perfect, unchanging spheres.
Sunspots were rarely recorded between 1650 and 1699. Later analysis revealed the problem to be a reduced number of sunspots, rather than observational lapses. Building upon Gustav Spörer's work, the wife-and-husband team of Annie Maunder and Edward Maunder suggested that the Sun had changed from a period in which sunspots all but disappeared to a renewal of sunspot cycles starting in about 1700. Adding to this understanding of the absence of solar cycles were observations of aurorae, which were absent at the same time, except at the very highest magnetic latitudes.
The lack of a solar corona during solar eclipses was also noted prior to 1715.
The period of low sunspot activity from 1645 to 1717 later became known as the "Maunder Minimum". Observers such as Johannes Hevelius, Jean Picard and Jean Dominique Cassini confirmed this change.
19th century
Solar spectroscopy
After William Herschel's detection of infrared radiation in 1800 and Johann Wilhelm Ritter's detection of ultraviolet radiation in 1801, solar spectrometry began when William Hyde Wollaston noticed that dark lines appeared in the solar spectrum when viewed through a glass prism. Joseph von Fraunhofer later independently discovered the lines, which were named Fraunhofer lines after him. Other physicists discerned that properties of the solar atmosphere could be determined from them. Notable scientists who advanced spectroscopy included David Brewster, Gustav Kirchhoff, Robert Wilhelm Bunsen and Anders Jonas Ångström.
Solar cycle
The cyclic variation of the number of sunspots was first observed by Samuel Heinrich Schwabe between 1826 and 1843.
Rudolf Wolf studied the historical record in an attempt to establish a history of solar variations. His data extended only to 1755. He also established in 1848 a relative sunspot number formulation to compare the work of different astronomers using varying equipment and methodologies, now known as the Wolf (or Zürich) sunspot number.
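Wolf's relative sunspot number combines the count of sunspot groups and individual spots as R = k(10g + s), with k an observer-dependent scaling factor. A minimal sketch (the example counts below are arbitrary):

```python
def wolf_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Relative (Wolf/Zurich) sunspot number: R = k * (10*g + s).

    k is an observer/instrument scaling factor (1.0 for the
    reference observer), allowing counts from different telescopes
    and methodologies to be compared."""
    if groups < 0 or spots < 0:
        raise ValueError("counts must be non-negative")
    return k * (10 * groups + spots)

# Example: 3 groups containing 15 spots in total, reference observer.
print(wolf_number(3, 15))  # -> 45.0
```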
Gustav Spörer later suggested a 70-year period before 1716 in which sunspots were rarely observed as the reason for Wolf's inability to extend the cycles into the 17th century.
Also in 1848, Joseph Henry projected an image of the Sun onto a screen and determined that sunspots were cooler than the surrounding surface.
Around 1852, Edward Sabine, Wolf, Jean-Alfred Gautier and Johann von Lamont independently found a link between the solar cycle and geomagnetic activity, sparking the first research into interactions between the Sun and the Earth.
In the second half of the nineteenth century Richard Carrington and Spörer independently noted the migration of sunspot activity towards the solar equator as the cycle progresses. This pattern is best visualized in the form of the so-called butterfly diagram, first constructed by Edward Walter Maunder and Annie Scott Dill Maunder in the early twentieth century (see graph). Images of the Sun are divided into latitudinal strips, and the monthly-averaged fractional surface of sunspots calculated. This is plotted vertically as a color-coded bar, and the process is repeated month after month to produce a time-series diagram.
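The binning procedure described above can be sketched as follows; the spot latitudes here are synthetic placeholders, and the bin width and cycle parameters are illustrative choices, not observational values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_months, n_lat_bins = 120, 18          # 10 years, 10-degree strips
lat_edges = np.linspace(-90, 90, n_lat_bins + 1)

butterfly = np.zeros((n_lat_bins, n_months))
for month in range(n_months):
    # Synthetic spot latitudes drifting equatorward over the cycle,
    # mirrored in both hemispheres (placeholder data, not real records).
    centre = 35 - 30 * (month % 132) / 132
    lats = np.concatenate([rng.normal(centre, 5, 20),
                           rng.normal(-centre, 5, 20)])
    counts, _ = np.histogram(lats, bins=lat_edges)
    # Monthly-averaged fractional spotted area per latitude strip.
    butterfly[:, month] = counts / counts.sum()

# Each column is one month; plotting `butterfly` as a color-coded image
# gives the characteristic wings of the butterfly diagram.
print(butterfly.shape)  # -> (18, 120)
```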
Half a century later, the father-and-son team of Harold and Horace Babcock showed that the solar surface is magnetized even outside of sunspots; that this weaker magnetic field is to first order a dipole; and that this dipole undergoes polarity reversals with the same period as the sunspot cycle (see graph below). These observations established that the solar cycle is a spatiotemporal magnetic process unfolding over the Sun as a whole.
Photography
The Sun was photographed for the first time, on 2 April 1845, by French physicists Louis Fizeau and Léon Foucault. Sunspots, as well as the limb darkening effect, are visible in their daguerreotypes. Photography assisted in the study of solar prominences, granulation and spectroscopy. Charles A. Young first captured a prominence in 1870. Solar eclipses were also photographed, with the most useful early images taken in 1851 by Berkowski and in 1860 by De la Rue's team in Spain.
Rotation
Early estimates of the Sun's rotation period varied between 25 and 28 days. The cause was determined independently in 1858 by Richard C. Carrington and Spörer. They discovered that the latitude with the most sunspots decreases from 40° to 5° during each cycle, and that at higher latitudes sunspots rotate more slowly. The Sun's rotation was thus shown to vary by latitude, implying that its outer layer must be fluid. In 1871 Hermann Vogel, and shortly thereafter Charles Young, confirmed this spectroscopically. Nils Dunér's spectroscopic observations in the 1880s showed a 30% difference between the Sun's faster equatorial regions and its slower polar regions.
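The latitude dependence of the rotation rate can be illustrated with the common three-term fit ω(φ) = A + B·sin²φ + C·sin⁴φ. The default coefficients below are from one modern photospheric measurement and are illustrative only, not the 19th-century values discussed above:

```python
import math

def rotation_deg_per_day(lat_deg, a=14.713, b=-2.396, c=-1.787):
    """Sidereal surface rotation rate (degrees/day) at a given
    latitude, using the common fit w = a + b*sin^2(phi) + c*sin^4(phi).
    Default coefficients are one published photospheric fit; they are
    assumptions for illustration, not values from this article."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return a + b * s2 + c * s2 * s2

equator = 360 / rotation_deg_per_day(0)    # rotation period in days
high_lat = 360 / rotation_deg_per_day(75)
print(f"{equator:.1f} d at the equator, {high_lat:.1f} d at 75 deg")
```

The equatorial period comes out near 24.5 days, noticeably shorter than at high latitudes, consistent with the faster-equator behavior described above.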
Space weather
The first modern, and clearly described, accounts of a solar flare and coronal mass ejection occurred in 1859 and 1860 respectively. On 1 September 1859, Richard C. Carrington, while observing sunspots, saw patches of increasingly bright light within a group of sunspots, which then dimmed and moved across that area within a few minutes. This event, also reported by R. Hodgson, is a description of a solar flare. The widely viewed total solar eclipse on 18 July 1860 resulted in many drawings, depicting an anomalous feature that corresponds with modern CME observations.
For many centuries, the earthly effects of solar variation were noticed but not understood. For example, displays of auroral light have long been observed at high latitudes, but were not linked to the Sun.
In 1724, George Graham reported that the needle of a magnetic compass was regularly deflected from magnetic north over the course of each day. This effect was eventually attributed to overhead electric currents flowing in the ionosphere and magnetosphere by Balfour Stewart in 1882, and confirmed by Arthur Schuster in 1889 from analysis of magnetic observatory data.
In 1852, astronomer and British major general Edward Sabine showed that the probability of the occurrence of magnetic storms on Earth was correlated with the number of sunspots, thus demonstrating a novel solar-terrestrial interaction. In 1859, a great magnetic storm caused brilliant auroral displays and disrupted global telegraph operations. Richard Carrington correctly connected the storm with a solar flare that he had observed the day before in the vicinity of a large sunspot group—thus demonstrating that specific solar events could affect the Earth.
Kristian Birkeland explained the physics of aurora by creating artificial aurora in his laboratory and predicted the solar wind.
20th century
Observatories
Early in the 20th century, interest in astrophysics grew in America, and multiple observatories were built. Solar telescopes (and thus solar observatories) were installed at Mount Wilson Observatory in California in 1904, and in the 1930s at McMath–Hulbert Observatory. Interest also grew in other parts of the world, with the establishment of the Kodaikanal Solar Observatory in India at the turn of the century, the Einsteinturm in Germany in 1924, and the Solar Tower Telescope at the National Observatory of Japan in 1930.
Around 1900, researchers began to explore connections between solar variations and Earth's weather. Smithsonian Astrophysical Observatory (SAO) assigned Abbot and his team to detect changes in the radiation of the Sun. They began by inventing instruments to measure solar radiation. Later, when Abbot was SAO head, they established a solar station at Calama, Chile to complement its data from Mount Wilson Observatory. He detected 27 harmonic periods within the 273-month Hale cycles, including 7, 13, and 39-month patterns. He looked for connections to weather by means such as matching opposing solar trends during a month to opposing urban temperature and precipitation trends. With the advent of dendrochronology, scientists such as Glock attempted to connect variation in tree growth to periodic solar variations and infer long-term secular variability in the solar constant from similar variations in millennial-scale chronologies.
Coronagraph
Until the 1930s, little progress was made on understanding the Sun's corona, as it could only be viewed during infrequent total solar eclipses. Bernard Lyot's 1931 invention of the coronagraph – a telescope with an attachment to block out the direct light of the solar disk – allowed the corona to be studied in full daylight.
Spectroheliograph
American astronomer George Ellery Hale, as an MIT undergraduate, invented the spectroheliograph, with which he made the discovery of solar vortices. In 1908, Hale used a modified spectroheliograph to show that the spectra of hydrogen exhibited the Zeeman effect whenever the area of view passed over a sunspot on the solar disc. This was the first indication that sunspots were basically magnetic phenomena, which appeared in opposite polarity pairs. Hale's subsequent work demonstrated a strong tendency for east-west alignment of magnetic polarities in sunspots, with mirror symmetry across the solar equator; and that the magnetic polarity for sunspots in each hemisphere switched orientation from one solar cycle to the next. This systematic property of sunspot magnetic fields is now commonly referred to as the Hale–Nicholson law, or in many cases simply Hale's laws.
Solar radio bursts
The introduction of radio revealed periods of extreme static or noise. Severe radar jamming during a large solar event in 1942 led to the discovery of solar radio bursts.
Satellites
Many satellites in Earth orbit or in the heliosphere have deployed solar telescopes and instruments of various kinds for in situ measurements of particles and fields.
Skylab, a notable large solar observational facility, grew out of the impetus of the International Geophysical Year campaign and the facilities of NASA.
Other spacecraft, in an incomplete list, have included the OSO series, the Solar Maximum Mission, Yohkoh, SOHO, ACE, TRACE, and SDO among many others; still other spacecraft (such as MESSENGER, Fermi, and NuSTAR) have contributed solar measurements by individual instruments.
Modulation of solar bolometric radiation by magnetically active regions, and more subtle effects, was confirmed by satellite measurements of the total solar irradiance (TSI) by the ACRIM1 experiment on the Solar Maximum Mission (launched in 1980). The modulations were later confirmed in the results of the ERB experiment launched on the Nimbus 7 satellite in 1978. Satellite observation was continued by ACRIM-3 and other satellites.
Measurement proxies
Direct irradiance measurements have been available during the last three cycles and are a composite of multiple observing satellites. However, the correlation between irradiance measurements and other proxies of solar activity makes it reasonable to estimate solar activity for earlier cycles. Most important among these proxies is the record of sunspot observations, which has been kept since about 1610. Solar radio emission at the 10.7 cm wavelength provides another proxy that can be measured from the ground, since the atmosphere is transparent to such radiation.
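How such a proxy calibration works can be sketched with a simple linear fit; the data below are synthetic and the recovered coefficients are placeholders, not a published calibration between the 10.7 cm flux and the sunspot number:

```python
import numpy as np

# Synthetic illustration: fit a linear proxy relation between the
# 10.7 cm radio flux (in solar flux units) and sunspot number.
# Both the data and the coefficients are placeholders.
rng = np.random.default_rng(1)
f107 = rng.uniform(70, 250, 200)                    # "observed" flux
sunspots = 1.1 * (f107 - 70) + rng.normal(0, 8, 200)

slope, intercept = np.polyfit(f107, sunspots, 1)
estimate = slope * 150 + intercept   # inferred activity at F10.7 = 150
print(round(slope, 2), round(estimate))
```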
Other proxy data – such as the abundance of cosmogenic isotopes – have been used to infer solar magnetic activity, and thus likely brightness, over several millennia.
Total solar irradiance has been claimed to vary in ways that are not predicted by sunspot changes or radio emissions. These shifts may be the result of inaccurate satellite calibration. A long-term trend may exist in solar irradiance.
Other developments
The Sun was, until the 1990s, the only star whose surface had been resolved. Other major achievements included understanding of:
X-ray-emitting loops
Corona and solar wind
Variance of solar brightness with level of activity and verification of this effect in other solar-type stars
The intense fibril state of the magnetic fields at the visible surface of a star like the Sun
The presence of magnetic fields of 0.5×10⁵ to 1×10⁵ gauss at the base of the convective zone, presumably in some fibril form, inferred from the dynamics of rising azimuthal flux bundles
Low-level electron neutrino emission from the Sun's core
21st century
The most powerful flare observed by satellite instrumentation began on 4 November 2003 at 19:29 UTC, and saturated instruments for 11 minutes. Region 486 has been estimated to have produced an X-ray flux of X28. Holographic and visual observations indicate significant activity continued on the far side of the Sun.
Sunspot and infrared spectral line measurements made in the latter part of the first decade of the 2000s suggested that sunspot activity may again be disappearing, possibly leading to a new minimum. From 2007 to 2009, sunspot levels were far below average. In 2008, the Sun was spot-free 73 percent of the time, extreme even for a solar minimum. Only 1913 was more pronounced, with no sunspots for 85 percent of that year. The Sun continued to languish through mid-December 2009, when the largest group of sunspots to emerge for several years appeared. Even then, sunspot levels remained well below those of recent cycles.
In 2006, NASA predicted that the next sunspot maximum would reach between 150 and 200 around the year 2011 (30–50% stronger than cycle 23), followed by a weak maximum at around 2022. Instead, the sunspot cycle in 2010 was still at its minimum, when it should have been near its maximum, demonstrating its unusual weakness.
Cycle 24's minimum occurred around December 2008 and the next maximum was predicted to reach a sunspot number of 90 around May 2013. The monthly mean sunspot number in the northern solar hemisphere peaked in November 2011, while the southern hemisphere appears to have peaked in February 2014, reaching a peak monthly mean of 102. Subsequent months declined to around 70 (June 2014). In October 2014, sunspot AR 12192 became the largest observed since 1990. The flare that erupted from this sunspot was classified as an X3.1-class solar storm.
Independent scientists of the National Solar Observatory (NSO) and the Air Force Research Laboratory (AFRL) predicted in 2011 that Cycle 25 would be greatly reduced or might not happen at all.
References
External links
NOAA / NESDIS / NGDC (2002) Solar Variability Affecting Earth NOAA CD-ROM NGDC-05/01. This CD-ROM contains over 100 solar-terrestrial and related global data bases covering the period through April 1990.
Recent Total Solar Irradiance data updated every Monday
Solar phenomena
Vortices
Space physics
Solar System
Hamiltonian field theory
In theoretical physics, Hamiltonian field theory is the field-theoretic analogue to classical Hamiltonian mechanics. It is a formalism in classical field theory alongside Lagrangian field theory. It also has applications in quantum field theory.
Definition
The Hamiltonian for a system of discrete particles is a function of their generalized coordinates and conjugate momenta, and possibly, time. For continua and fields, Hamiltonian mechanics is unsuitable but can be extended by considering a large number of point masses, and taking the continuous limit, that is, infinitely many particles forming a continuum or field. Since each point mass has one or more degrees of freedom, the field formulation has infinitely many degrees of freedom.
One scalar field
The Hamiltonian density is the continuous analogue for fields; it is a function of the fields, the conjugate "momentum" fields, and possibly the space and time coordinates themselves. For one scalar field $\phi(\mathbf{x}, t)$, the Hamiltonian density is defined from the Lagrangian density by

$$\mathcal{H}(\phi, \pi, \mathbf{x}, t) = \pi\dot{\phi} - \mathcal{L}(\phi, \nabla\phi, \dot{\phi}, \mathbf{x}, t)$$

with $\nabla$ the "del" or "nabla" operator, $\mathbf{x}$ the position vector of some point in space, and $t$ time. The Lagrangian density is a function of the fields in the system, their space and time derivatives, and possibly the space and time coordinates themselves. It is the field analogue to the Lagrangian function for a system of discrete particles described by generalized coordinates.
As in Hamiltonian mechanics where every generalized coordinate has a corresponding generalized momentum, the field $\phi(\mathbf{x}, t)$ has a conjugate momentum field $\pi(\mathbf{x}, t)$, defined as the partial derivative of the Lagrangian density with respect to the time derivative of the field,

$$\pi = \frac{\partial\mathcal{L}}{\partial\dot{\phi}}$$

in which the overdot denotes a partial time derivative $\partial/\partial t$, not a total time derivative $d/dt$.
Many scalar fields
For many fields $\phi_i(\mathbf{x}, t)$ and their conjugates $\pi_i(\mathbf{x}, t)$ the Hamiltonian density is a function of them all:

$$\mathcal{H} = \sum_i \pi_i\dot{\phi}_i - \mathcal{L}$$

where each conjugate field is defined with respect to its field,

$$\pi_i(\mathbf{x}, t) = \frac{\partial\mathcal{L}}{\partial\dot{\phi}_i}$$
In general, for any number of fields, the volume integral of the Hamiltonian density gives the Hamiltonian, in three spatial dimensions:

$$H = \int \mathcal{H}\, d^3 x$$

The Hamiltonian density is the Hamiltonian per unit spatial volume. The corresponding dimension is [energy][length]⁻³, in SI units joules per cubic metre, J m⁻³.
Tensor and spinor fields
The above equations and definitions can be extended to vector fields and more generally tensor fields and spinor fields. In physics, tensor fields describe bosons and spinor fields describe fermions.
Equations of motion
The equations of motion for the fields are similar to the Hamiltonian equations for discrete particles. For any number of fields:

$$\dot{\phi}_i = +\frac{\delta\mathcal{H}}{\delta\pi_i}, \qquad \dot{\pi}_i = -\frac{\delta\mathcal{H}}{\delta\phi_i}$$

where again the overdots are partial time derivatives. The variational derivative with respect to the fields,

$$\frac{\delta}{\delta\phi_i} = \frac{\partial}{\partial\phi_i} - \nabla\cdot\frac{\partial}{\partial(\nabla\phi_i)}$$

with · the dot product, must be used instead of simply partial derivatives.
Phase space
The fields and conjugates form an infinite dimensional phase space, because fields have an infinite number of degrees of freedom.
Poisson bracket
For two functions $A$ and $B$ which depend on the fields $\phi_i$ and $\pi_i$, their spatial derivatives, and the space and time coordinates, and provided the fields are zero on the boundary of the volume the integrals are taken over, the field theoretic Poisson bracket is defined as (not to be confused with the commutator from quantum mechanics)

$$\{A, B\} = \int \sum_i \left( \frac{\delta A}{\delta\phi_i}\frac{\delta B}{\delta\pi_i} - \frac{\delta A}{\delta\pi_i}\frac{\delta B}{\delta\phi_i} \right) d^3 x,$$

where $\delta/\delta\phi_i$ is the variational derivative.
Under the same conditions of vanishing fields on the surface, the following result holds for the time evolution of $A$ (similarly for $B$):

$$\frac{dA}{dt} = \{A, H\} + \frac{\partial A}{\partial t},$$

which can be found from the total time derivative of $A$, integration by parts, and using the above Poisson bracket.
Explicit time-independence
The following results are true if the Lagrangian and Hamiltonian densities are explicitly time-independent (they can still have implicit time-dependence via the fields and their derivatives),
Kinetic and potential energy densities
The Hamiltonian density is the total energy density, the sum of the kinetic energy density ($\mathcal{T}$) and the potential energy density ($\mathcal{V}$):

$$\mathcal{H} = \mathcal{T} + \mathcal{V}$$
Continuity equation
Taking the partial time derivative of the definition of the Hamiltonian density above, and using the chain rule for implicit differentiation and the definition of the conjugate momentum field, gives the continuity equation

$$\frac{\partial\mathcal{H}}{\partial t} + \nabla\cdot\mathbf{S} = 0,$$

in which the Hamiltonian density can be interpreted as the energy density, and

$$\mathbf{S} = \dot{\phi}\,\frac{\partial\mathcal{L}}{\partial(\nabla\phi)}$$

the energy flux, or flow of energy per unit time per unit surface area.
Relativistic field theory
Covariant Hamiltonian field theory is the relativistic formulation of Hamiltonian field theory.
Hamiltonian field theory usually means the symplectic Hamiltonian formalism when applied to classical field theory, which takes the form of the instantaneous Hamiltonian formalism on an infinite-dimensional phase space, and where canonical coordinates are field functions at some instant of time. This Hamiltonian formalism is applied to the quantization of fields, e.g., in quantum gauge theory. In covariant Hamiltonian field theory, canonical momenta $p^{\mu}_{i}$ correspond to derivatives of fields with respect to all world coordinates $x^{\mu}$. Covariant Hamilton equations are equivalent to the Euler–Lagrange equations in the case of hyperregular Lagrangians. Covariant Hamiltonian field theory is developed in the Hamilton–De Donder, polysymplectic, multisymplectic and k-symplectic variants. A phase space of covariant Hamiltonian field theory is a finite-dimensional polysymplectic or multisymplectic manifold.
Hamiltonian non-autonomous mechanics is formulated as covariant Hamiltonian field theory on fiber bundles over the time axis, i.e. the real line $\mathbb{R}$.
See also
Analytical mechanics
De Donder–Weyl theory
Four-vector
Canonical quantization
Hamiltonian fluid mechanics
Covariant classical field theory
Polysymplectic manifold
Non-autonomous mechanics
Notes
Citations
References
Mathematical physics
Classical mechanics
Classical field theory
Quantum field theory
Differential geometry
Exohedral fullerene
Exohedral fullerenes, also called exofullerenes, are fullerenes that have additional atoms, ions, or clusters attached to their outer spheres, such as C50Cl10 and C60H8, or fullerene ligands.
See also
Fullerene ligands
Endohedral fullerene
References
Fullerenes
Supramolecular chemistry
Weyl expansion
In physics, the Weyl expansion, also known as the Weyl identity or angular spectrum expansion, expresses an outgoing spherical wave as a linear combination of plane waves. In a Cartesian coordinate system, it can be denoted as

$$\frac{e^{i k_0 r}}{r} = \frac{i}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{e^{i(k_x x + k_y y + k_z |z|)}}{k_z}\, dk_x\, dk_y,$$

where $r = \sqrt{x^2 + y^2 + z^2}$, and $k_x$, $k_y$ and $k_z$ are the wavenumbers in their respective coordinate axes:

$$k_z = \sqrt{k_0^2 - k_x^2 - k_y^2}, \qquad \operatorname{Im} k_z \geq 0.$$
The expansion is named after Hermann Weyl, who published it in 1919. The Weyl identity is largely used to characterize the reflection and transmission of spherical waves at planar interfaces; it is often used to derive the Green's functions for the Helmholtz equation in layered media. The expansion also covers evanescent wave components. It is often preferred to the Sommerfeld identity when the field representation is needed in Cartesian coordinates.
The resulting Weyl integral is commonly encountered in microwave integrated circuit analysis and electromagnetic radiation over a stratified medium; as in the case of the Sommerfeld integral, it is numerically evaluated. As a result, it is used in the calculation of Green's functions for the method of moments for such geometries. Other uses include the description of dipolar emissions near surfaces in nanophotonics, holographic inverse scattering problems, Green's functions in quantum electrodynamics, and acoustic or seismic waves.
See also
Angular spectrum method
Fourier optics
Green's function
Plane wave expansion
Sommerfeld identity
References
Sources
Mathematical identities
Mathematical physics
Electrodynamics
Wave mechanics
Fundamental theorem of Hilbert spaces
In mathematics, specifically in functional analysis and Hilbert space theory, the fundamental theorem of Hilbert spaces gives a necessary and sufficient condition for a Hausdorff pre-Hilbert space to be a Hilbert space in terms of the canonical isometry of a pre-Hilbert space into its anti-dual.
Preliminaries
Antilinear functionals and the anti-dual
Suppose that $H$ is a topological vector space (TVS).
A function $f : H \to \mathbb{C}$ is called semilinear or antilinear if for all $x, y \in H$ and all scalars $c$,
Additive: $f(x + y) = f(x) + f(y)$;
Conjugate homogeneous: $f(c x) = \overline{c}\, f(x)$.
The vector space of all continuous antilinear functions on $H$ is called the anti-dual space or complex conjugate dual space of $H$ and is denoted by $\overline{H}^{\prime}$ (in contrast, the continuous dual space of $H$ is denoted by $H^{\prime}$), which we make into a normed space by endowing it with the canonical norm (defined in the same way as the canonical norm on the continuous dual space of $H$).
Pre-Hilbert spaces and sesquilinear forms
A sesquilinear form is a map $B : H \times H \to \mathbb{C}$ such that for all $y \in H$, the map $x \mapsto B(x, y)$ is linear, and for all $x \in H$, the map $y \mapsto B(x, y)$ is antilinear.
Note that in physics, the convention is that a sesquilinear form is linear in its second coordinate and antilinear in its first coordinate.
A sesquilinear form $\langle\cdot, \cdot\rangle$ on $H$ is called positive definite if $\langle x, x \rangle > 0$ for all nonzero $x$; it is called non-negative if $\langle x, x \rangle \geq 0$ for all $x$.
A sesquilinear form $\langle\cdot, \cdot\rangle$ on $H$ is called a Hermitian form if in addition it has the property that $\langle x, y \rangle = \overline{\langle y, x \rangle}$ for all $x, y \in H$.
Pre-Hilbert and Hilbert spaces
A pre-Hilbert space is a pair $(H, \langle\cdot, \cdot\rangle)$ consisting of a vector space $H$ and a non-negative sesquilinear form $\langle\cdot, \cdot\rangle$ on $H$;
if in addition this sesquilinear form is positive definite then $(H, \langle\cdot, \cdot\rangle)$ is called a Hausdorff pre-Hilbert space.
If $\langle\cdot, \cdot\rangle$ is non-negative then it induces a canonical seminorm on $H$, denoted by $\|\cdot\|$, defined by $\|x\| := \sqrt{\langle x, x \rangle}$; if $\langle\cdot, \cdot\rangle$ is also positive definite then this map is a norm.
This canonical semi-norm makes every pre-Hilbert space into a seminormed space and every Hausdorff pre-Hilbert space into a normed space.
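A minimal numeric illustration of these definitions, assuming the standard Hermitian form on ℂⁿ with the article's convention (linear in the first argument, antilinear in the second):

```python
import numpy as np

def herm(x, y):
    """Standard Hermitian form on C^n, linear in the first argument
    and antilinear (conjugate-homogeneous) in the second."""
    return np.dot(x, np.conj(y))

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2j, 1 + 1j])
c = 2 - 3j

# Antilinearity in the second slot: <x, c*y> = conj(c) * <x, y>.
assert np.isclose(herm(x, c * y), np.conj(c) * herm(x, y))

# The form is positive definite, so it induces the canonical norm.
norm = np.sqrt(herm(x, x).real)
print(norm)  # -> sqrt(15) ≈ 3.873
```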
The sesquilinear form $\langle\cdot, \cdot\rangle$ is separately uniformly continuous in each of its two arguments and hence can be extended to a separately continuous sesquilinear form on the completion of $H$; if $H$ is Hausdorff then this completion is a Hilbert space.
A Hausdorff pre-Hilbert space that is complete is called a Hilbert space.
Canonical map into the anti-dual
Suppose $H$ is a pre-Hilbert space with sesquilinear form $\langle\cdot, \cdot\rangle$. For each $y \in H$, the map $x \mapsto \langle x, y \rangle$ is a continuous linear functional on $H$, and the map $x \mapsto \langle y, x \rangle$ is a continuous antilinear functional on $H$.
The canonical map from $H$ into its anti-dual $\overline{H}^{\prime}$ is the map
defined by $y \mapsto \langle y, \cdot \rangle$.
If $H$ is a pre-Hilbert space then this canonical map is linear and continuous;
this map is an isometry onto a vector subspace of the anti-dual if and only if $H$ is a Hausdorff pre-Hilbert space.
There is of course a canonical antilinear surjective isometry $H^{\prime} \to \overline{H}^{\prime}$ that sends a continuous linear functional $f$ on $H$ to the continuous antilinear functional denoted by $\overline{f}$ and defined by $x \mapsto \overline{f(x)}$.
Fundamental theorem
Fundamental theorem of Hilbert spaces: Suppose that $(H, \langle\cdot, \cdot\rangle)$ is a Hausdorff pre-Hilbert space where $\langle\cdot, \cdot\rangle$ is a sesquilinear form that is linear in its first coordinate and antilinear in its second coordinate. Then the canonical linear mapping from $H$ into the anti-dual space of $H$ is surjective if and only if $(H, \langle\cdot, \cdot\rangle)$ is a Hilbert space, in which case the canonical map is a surjective isometry of $H$ onto its anti-dual.
See also
Complex conjugate vector space
Dual system
Hilbert space
Pre-Hilbert space
Linear map
Riesz representation theorem
Sesquilinear form
References
Topological vector spaces
Linear functionals
MIPOL1
MIPOL1 (Mirror Image Polydactyly 1), also known as CCDC193 (Coiled-coil domain containing 193), is a protein that in humans is encoded by the MIPOL1 gene. Mutation of this gene is associated with mirror-image polydactyly (also known as Laurin–Sandrow syndrome) in humans, which is a rare genetic condition characterized by mirror-image duplication of digits.
Gene
MIPOL1 is also known as CCDC193 (Coiled-coil domain containing 193).
Locus
The MIPOL1 gene is located at 14q13.3-q21.1 on the plus strand, spanning base pairs 37,197,888 to 37,579,207 (in the human GRCh38 primary assembly, length: 381,320 base pairs), consisting of 15 exons and 11 introns. Some notable genes in its neighborhood include SLC25A21 (mutation of this gene causes synpolydactyly) and FOXA1.
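The quoted span length follows directly from the coordinates, counting both endpoints:

```python
# GRCh38 coordinates of the MIPOL1 gene, as quoted above.
start, end = 37_197_888, 37_579_207

length = end - start + 1   # both endpoints are included in the span
print(f"{length:,} bp")    # -> 381,320 bp
```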
mRNA
MIPOL1 has at least 15 known splice isoforms produced by alternative splicing.
Protein
Properties
The unmodified MIPOL1 protein isoform 1 in humans has an isoelectric point of 5.6 and a molecular weight of 51.5 kDa. Relative to other human proteins, MIPOL1 contains unusually low amounts of proline and glycine and higher amounts of glutamic acid and glutamine.
Isoforms
There are at least three known isoforms of this protein in humans produced by alternative splicing: isoform 1, of length 442 amino acids, isoform 2 of length 261 amino acids and isoform 3 of length 169 amino acids.
Domains and motifs
MIPOL1 contains two coiled-coil domains in its C-terminus at positions 107 – 212 and 253 – 435 (shown in Fig.1). A bipartite nuclear localization signal is predicted at position 128 – 143.
Post-translational modifications
The following post-translational modifications of MIPOL1 are predicted using bioinformatics tools. Multiple phosphorylation sites that are conserved in close orthologs are predicted for this protein, including a Casein kinase 1 (CK1) site, three Casein kinase 2 (CK2) sites, and three NEK2 sites.
Structure
The exact structure of the MIPOL1 has not yet been characterized. Homology-based and de novo predictions of its tertiary structure suggest that it may consist of inter-twined alpha helices, forming coiled-coil domains (see Fig.4.).
Sub-cellular localization
Immunofluorescence imaging in the human U2OS cell line (bone osteosarcoma epithelial cells) shows localization in the cytosol. Immunohistochemistry imaging of human prostate tissue also suggests cytosolic localization. A bipartite nuclear localization signal is predicted at position 128 – 143, which is highly conserved in mammalian orthologs (see Fig.2.), indicating possible localization in the nucleus.
Gene regulation
The predicted promoter sequence for this gene spans from base pair 37196852 to 37198126 (1,275 bp) and has multiple predicted binding sites for transcription factors such as GATA binding factors, SMAD3, TP63 and NRF1.
Gene Expression
MIPOL1 is ubiquitously expressed at low levels in humans, with highest expression in the prostate.
Transcript regulation
The RNA secondary structure is stabilized by multiple stem-loops that have been predicted using bioinformatics tools and are conserved across closely related species. Multiple binding targets are found for microRNAs such as MIR3163 and MIR190a, which could silence these regions on the mRNA and inhibit translation.
Clinical significance
The MIPOL1 gene is an autosomal dominant gene. It is one of six genes in humans causing non-syndromic polydactyly (i.e. polydactyly occurring as a separate event with no other associated anomalies). Mutation of this gene is associated with mirror-image polydactyly (also known as Laurin-Sandrow syndrome) in humans, which is a rare genetic condition characterized by mirror-image duplication of digits in hands and feet.
This gene has also been associated with central nervous system development, and the loss of this gene can cause craniofacial defects and agenesis of the corpus callosum.
The gene has been shown to function as a tumor suppressor in nasopharyngeal carcinoma (NPC) through up-regulation of p21 (WAF1/CIP1) and p27, cyclin-dependent kinase inhibitors linked with tumor suppression via cell cycle arrest. Another study investigating the role of MIPOL1 in cancer progression reported that MIPOL1 was downregulated in NPC tumor tissues, and that artificially re-expressing the gene caused tumor suppression by down-regulating angiogenic factors and reducing the phosphorylation of metastasis-associated proteins such as AKT, p65 and FAK. MIPOL1 interacts with another well-known tumor suppressor, RhoB, and this interaction was confirmed to enhance RhoB activity.
In a study of pediatric high-grade glioma (pHGG), the MIPOL1 gene was found to be down-regulated 2.4-fold in high-vascularity tumors.
The protein is known to interact with Replicase polyprotein 1ab of SARS-CoV-2, a protein involved in the transcription and replication of viral RNAs.
Interacting proteins
This protein is known to interact with multiple human proteins, verified via two-hybrid screening. A few notable examples include:
LATS2: Negatively regulates YAP1 in the Hippo signaling pathway that plays a pivotal role in organ size control and tumor suppression by restricting cell proliferation and promoting apoptosis.
ZGPAT (Zinc finger CCCH-type with G patch domain-containing protein): A transcription repressor that negatively regulates expression of EGFR, a gene involved in cell proliferation, survival and migration, suggesting that it may act as a tumor suppressor.
RCOR3 (REST Corepressor 3): A protein that may act as a component of a co-repressor complex that represses transcription.
It also interacts with viral proteins such as:
Replicase polyprotein 1ab (SARS-CoV-2): A multifunctional protein involved in the transcription and replication of viral RNAs.
Protein E7 (Human Papillomavirus): Plays a role in viral genome replication by driving entry of quiescent cells into the cell cycle.
Origin and evolution
The earliest known ortholog of this protein appeared around 948 million years ago in Trichoplax adhaerens in phylum Placozoa in kingdom Animalia. The next most distant orthologs appear in phylum Cnidaria, around 824 million years ago.
Sequence Homology
The MIPOL1 protein has no known paralogs in humans or in the other species for which orthologs have been found; it is therefore the only member of its gene family.
There are more than 300 known orthologs of the MIPOL1 protein in Animalia, ranging from primates to corals and sea anemones in phylum Cnidaria. Orthologs of the protein were found in species as distant as Trichoplax adhaerens, a simple primitive invertebrate species. Table 2 shows a sample of the ortholog space.
Closely related orthologs are found in chordates such as mammals, reptiles, birds and amphibians, with sequence similarities greater than 70%. Sequence lengths of orthologs were similar to the human MIPOL1 protein, with no significant gene duplication observed.
Organisms with sequence similarities in the 55–70% range (moderately related orthologs) were found among bony fish, cartilaginous fish and coelacanths. Sequence lengths are generally longer in these species, with an extended N-terminus (alignment with the human protein begins around amino acid 100).
Distantly related orthologs with similarities less than 50% (around 30 – 40%) are found in hemichordates, echinoderms, arthropods, molluscs, cnidaria and placozoa. Multiple sequence alignment with distant orthologs indicates poor alignment in the N-terminus of the protein.
Two COG (Clusters of Orthologous Groups of proteins) domains were found in this protein (see Fig.3): COG1196 at positions 106–340 (chromosome segregation ATPase) and COG4372 at positions 259–431 (an uncharacterized conserved protein containing a DUF3084 domain).
Phylogenetics
Using a linear regression analysis on a plot of corrected percent divergence (amino acid changes per 100 amino acids) as a function of date of divergence from humans for different MIPOL1 orthologs (see Fig.5), it is estimated that a 1% change in the amino acid sequence of MIPOL1 takes 5.68 million years. The MIPOL1 protein is therefore evolving at a moderate rate relative to fast-evolving proteins such as fibrinogen alpha and slow-evolving proteins such as cytochrome c.
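A regression of this kind can be reproduced with a short script. The (divergence date, percent divergence) pairs below are hypothetical points placed near a line of the reported slope, so the recovered rate only approximates the 5.68 million years per 1% change stated above.

```python
# Illustrative least-squares fit of corrected percent divergence vs.
# divergence date from humans; the (Myr, % divergence) pairs are hypothetical.
points = [(96, 18), (160, 29), (312, 55), (352, 62), (473, 83)]

n = len(points)
sum_x = sum(x for x, _ in points)
sum_y = sum(y for _, y in points)
sum_xy = sum(x * y for x, y in points)
sum_xx = sum(x * x for x, _ in points)

# Slope of the regression line, in (% divergence) per Myr.
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)

# Invert the slope: million years needed for a 1% amino acid change.
myr_per_percent = 1 / slope

print(f"{myr_per_percent:.2f} Myr per 1% divergence")  # -> 5.81 Myr per 1% divergence
```

With real ortholog data in place of the hypothetical points, the same calculation yields the published 5.68 Myr figure.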
References
Proteins | MIPOL1 | [
"Chemistry"
] | 1,971 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
64,242,188 | https://en.wikipedia.org/wiki/Transcriptional%20adaptation | Transcriptional adaptation is a recently described type of genetic compensation by which a mutation in one gene leads to the transcriptional modulation of related genes, termed adapting genes or modifiers.
References
Gene expression | Transcriptional adaptation | [
"Chemistry",
"Biology"
] | 40 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
67,200,157 | https://en.wikipedia.org/wiki/Translation%20regulation%20by%205%E2%80%B2%20transcript%20leader%20cis-elements | Translation regulation by 5′ transcript leader cis-elements is a process in cellular translation.
Background
Gene expression is tightly controlled at many different stages. Alterations in the translation of mRNA into proteins rapidly modulate the proteome without changing upstream steps such as transcription, pre-mRNA splicing, and nuclear export. The strict regulation of translation in both space and time is in part governed by cis-regulatory elements located in 5′ mRNA transcript leaders (TLs) and 3′ untranslated regions (UTRs).
Due to their role in translation initiation, mRNA 5′ transcript leaders (TLs) strongly influence protein expression. Eukaryotic translation consists of three stages: initiation, elongation, and termination. Translation is primarily regulated at the initiation stage, where the small ribosomal subunit and initiation factors are recruited to the mRNA and directionally scan along the 5′ TL to select the first "best" start codon to begin protein synthesis. Cap-dependent ribosomal scanning accounts for 95–97% of all translation in eukaryotes under normal conditions. Therefore, the cis-regulatory elements in TLs greatly influence translation initiation and ultimately protein expression.
Kozak consensus sequence
The first step in initiation is formation of the pre-initiation complex, the 48S PIC. The small ribosomal subunit and various eukaryotic initiation factors are recruited to the mRNA 5′ TL to form the 48S PIC, which scans 5′ to 3′ along the mRNA transcript, inspecting each successive triplet for a functional start codon. Translation initiation is most successful at an AUG codon surrounded upstream and downstream by a favorable sequence known as the "Kozak consensus sequence" or "Kozak context". (See A) A weak or absent Kozak context surrounding the AUG leads to "leaky" scanning, where the start codon is skipped, whereas a strong Kozak context leads to start codon recognition by the 48S PIC and binding of Met-tRNAi in the "closed" state. Recent studies suggest that initiation occurs surprisingly often in eukaryotes at near-cognate codons (NCCs), which differ from AUG by one nucleotide. Eukaryotic initiation factors rearrange the 48S PIC and permit the large subunit to join, forming the complete, translation-competent 80S ribosome.
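The scanning logic described above can be sketched as a toy script. This is an illustrative simplification, not a real prediction tool: it reduces the Kozak context to its two most important positions (a purine at −3 and a G at +4) and ignores near-cognate codons; the example sequence is invented.

```python
# Toy model of 48S PIC scanning: move 5' to 3' along the transcript leader,
# and initiate at the first AUG in a strong Kozak context, taken here as a
# purine (A/G) at position -3 and a G at position +4. Weak-context AUGs are
# skipped, modeling "leaky" scanning.
def find_start(mrna):
    for i in range(len(mrna) - 2):
        if mrna[i:i + 3] != "AUG":
            continue
        minus3 = mrna[i - 3] if i >= 3 else ""
        plus4 = mrna[i + 3] if i + 3 < len(mrna) else ""
        if minus3 in ("A", "G") and plus4 == "G":
            return i  # strong context: initiation here
        # weak context: the 48S PIC scans past this AUG
    return None

# The first AUG (index 4) has C at -3 and C at +4, so it is leaked past;
# the second AUG (index 13) has A at -3 and G at +4 and is selected.
print(find_start("CCUCAUGCUUAGCAUGGCAUAA"))  # -> 13
```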
uORFs
Upstream open reading frames (uORFs) in the 5′ TLs typically inhibit translation of the downstream main protein coding region (CDS). (See B) Translation suppression of the CDS is attributable to the 5′ to 3′ directional nature of 48S PIC scanning. After successfully translating the uORF, the ribosome dissociates from the mRNA as part of termination before it can reach and translate the CDS. This destabilization of the translational machinery can trigger nonsense-mediated decay of the mRNA transcript. However, in some cases uORFs will actually enhance the translation of the downstream CDS. For example, in S. cerevisiae, the gene GCN4 has a 5′ TL with multiple uORFs. The uORFs closest to the 5′ cap protect the CDS from the inhibitory activities of the downstream uORFs located closer to the CDS. In summary, uORFs generally decrease translation of the main ORF, but they are also capable of increasing protein synthesis under certain circumstances.
Secondary structure
The 3-dimensional structure of the 5′ TL may also impact translation. (See C) Stem-loops have been demonstrated both to inhibit and to enhance translation. Stem-loops can prevent cap binding and efficient 48S PIC scanning. Conversely, downstream stem-loops may increase the probability of translation initiation at start codons with a weak Kozak context, possibly by blocking scanning. Besides stem-loops, other higher-order structures such as G-quadruplexes and pseudoknots also impede eukaryotic translation. To overcome translation suppression by structures, DEAD-box RNA helicases unwind RNA structures, promoting scanning through the 5′ TL.
Alternative transcript leaders
Multiple transcription start sites may be used for the same gene, generating alternative 5′ TLs with varied length and regulatory features. (See D) This is especially common in organisms with relatively compact genomes such as yeasts. In S. cerevisiae, alternative transcription start sites generate long alternative mRNA TLs with substantially lower translation efficiencies. Counterintuitively, upstream transcriptional induction of these genes actually silences their expression during meiosis by blocking translation. Furthermore, alternative transcription initiation within the CDS may generate protein isoforms with varied functions in S. cerevisiae. These examples from the model organism S. cerevisiae suggest that mRNA transcripts with alternative 5′ TLs may have a regulatory function in eukaryotes, especially during events requiring proteome remodeling such as meiosis and stress responses.
References
Eukaryote genetics
Cell biology
Protein biosynthesis
Cellular processes | Translation regulation by 5′ transcript leader cis-elements | [
"Chemistry",
"Biology"
] | 1,020 | [
"Protein biosynthesis",
"Eukaryote genetics",
"Cell biology",
"Gene expression",
"Biosynthesis",
"Cellular processes",
"Genetics by type of organism"
] |
67,209,354 | https://en.wikipedia.org/wiki/H3R8me2 | H3R8me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 8th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H3R8me2 is associated with transcriptional repression, and modified by PRMT5, but not CARM1.
As of March 2021, there are no disease associations known for H3R8me2.
Nomenclature
The name of this modification indicates dimethylation of arginine 8 on the histone H3 protein subunit: H3 denotes the histone H3 family, R8 the arginine residue at position 8, and me2 the addition of two methyl groups.
Arginine
Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases.
Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation.
Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (histone H4R3me2, H3R2me2, H3R17me2, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks.
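Histone-mark names such as those listed above follow a regular pattern that a small script can decompose. The parser below is a simplified illustration: the regular expression and the set of modifications it accepts (mono/di/tri-methylation, acetylation, phosphorylation) are assumptions, not a complete treatment of the nomenclature.

```python
import re

# Minimal parser for histone-mark names of the form
# <histone><residue><position><modification>, e.g. "H3R8me2".
MARK = re.compile(r"^(H[0-9A-Z.]+?)([KRST])(\d+)(me[123]|ac|ph)$")
RESIDUES = {"K": "lysine", "R": "arginine", "S": "serine", "T": "threonine"}

def parse_mark(name):
    m = MARK.match(name)
    if not m:
        raise ValueError(f"unrecognized mark: {name}")
    histone, residue, position, mod = m.groups()
    return {
        "histone": histone,
        "residue": RESIDUES[residue],
        "position": int(position),
        "modification": mod,
    }

print(parse_mark("H3R8me2"))
# -> {'histone': 'H3', 'residue': 'arginine', 'position': 8, 'modification': 'me2'}
```

The same parser decodes the activating and repressive marks named above, e.g. `parse_mark("H4R3me2")` gives arginine 3 of histone H4.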
Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin.
Mechanism and function of modification
The H3R8 site is methylated by PRMT5 and linked to transcriptional repression. PRMT5 is recruited by several transcriptional repressors, such as Snail, ZNF224 and Ski. A prior acetylation of H3K9 or H3K14 prevents methylation of H3R8 by PRMT5.
Epigenetic implications
The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions.
The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Clinical significance
PRMT2 was shown to mediate the dorsal developmental program through deposition of the H3R8me2a mark.
Methods
The histone mark H3R8me2 can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation.
See also
Histone methylation
Histone methyltransferase
References
Epigenetics
Post-translational modification | H3R8me2 | [
"Chemistry"
] | 1,098 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
59,311,209 | https://en.wikipedia.org/wiki/Direct%20air%20capture | Direct air capture (DAC) is the use of chemical or physical processes to extract carbon dioxide () directly from the ambient air. If the extracted is then sequestered in safe long-term storage, the overall process is called direct air carbon capture and sequestration (DACCS), achieving carbon dioxide removal and be a "negative emissions technology" (NET).
DAC is in contrast to carbon capture and storage (CCS), which captures CO2 from point sources, such as a cement factory or a bioenergy plant. After the capture, DAC generates a concentrated stream of CO2 for sequestration or utilization. Carbon dioxide removal is achieved when ambient air makes contact with chemical media, typically an aqueous alkaline solvent or sorbents. These chemical media are subsequently stripped of CO2 through the application of energy (namely heat), resulting in a CO2 stream that can undergo dehydration and compression, while simultaneously regenerating the chemical media for reuse.
As of 2023, DACCS has yet to be integrated into emissions trading because, at over US$1000, the cost per ton of carbon dioxide is many times the carbon price on those markets. For the end-to-end process to remain net carbon negative, DAC machines must be powered by renewable energy sources, since the process can be quite energy expensive. Future innovations may reduce the energy intensity of this process.
DAC was suggested in 1999 and is still in development. Several commercial plants are planned or in operation in Europe and the US. Large-scale DAC deployment may be accelerated when connected with economical applications or policy incentives.
In contrast to carbon capture and storage (CCS) which captures emissions from a point source such as a factory, DAC reduces the carbon dioxide concentration in the atmosphere as a whole. Thus, DAC can be used to capture emissions that originated in non-stationary sources such as airplanes.
Methods of capture
There are three stages of capture in DAC: the contacting stage, the capture stage, and the separation stage. In the contacting stage, the DAC system transports atmospheric air containing CO2 to the equipment using large-scale fans. Subsequently, in the capture stage, CO2 binds rapidly and effectively with liquid solvents in chemical reactors or solid sorbents in filters, which must have sufficient binding energy for CO2. Later, in the separation stage, external energy sources facilitate the separation of CO2 from the solvents or sorbents, yielding pure CO2 and regenerated solvents or sorbents. Following the completion of these three stages, the separated pure CO2 is either utilized or stored, while the recovered solvents or sorbents are recycled for reuse in the capture process.
The low temperature DAC process uses solid sorbents (S-DAC) and the high temperature process utilizes liquid solvents (L-DAC) that feature different properties in terms of kinetics and heat transfers. Currently, liquid DAC (L-DAC) and solid DAC (S-DAC) represent two mature technologies for industrial deployment. Additionally, several emerging DAC technologies, including electro-swing adsorption (ESA), moisture-swing adsorption (MSA), and membrane-based DAC (m-DAC), are in different stages of development, testing, or limited practical application.
More recently, Ireland-based company Carbon Collect Limited has developed the MechanicalTree™, which simply stands in the wind to capture CO2. The company claims this 'passive capture' of CO2 significantly reduces the energy cost of direct air capture, and that its geometry lends itself to scaling for gigaton capture.
Most commercial techniques use a liquid solvent—usually amine-based or caustic—to absorb CO2 from a gas. For example, a common caustic solvent, sodium hydroxide, reacts with CO2 and precipitates a stable sodium carbonate. This carbonate is heated to produce a highly pure gaseous CO2 stream. Sodium hydroxide can be recycled from sodium carbonate in a process of causticizing. Alternatively, the CO2 binds to a solid sorbent in a process of chemisorption. Through heat and vacuum, the CO2 is then desorbed from the solid.
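The caustic capture step follows the stoichiometry 2 NaOH + CO2 → Na2CO3 + H2O, which fixes how much hydroxide must cycle through the process per tonne of CO2. The short calculation below is illustrative; in a real plant the NaOH is continuously regenerated by causticizing rather than consumed.

```python
# Stoichiometry of caustic capture: 2 NaOH + CO2 -> Na2CO3 + H2O.
# Two moles of NaOH bind one mole of CO2.
M_NAOH = 39.997   # molar mass of NaOH, g/mol
M_CO2 = 44.009    # molar mass of CO2, g/mol

# Tonnes of NaOH cycling through the process per tonne of CO2 captured:
naoh_per_co2 = 2 * M_NAOH / M_CO2
print(f"{naoh_per_co2:.2f} t NaOH per t CO2")  # -> 1.82 t NaOH per t CO2
```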
Among the specific chemical processes that are being explored, three stand out: causticization with alkali and alkali-earth hydroxides, carbonation, and organic−inorganic hybrid sorbents consisting of amines supported in porous adsorbents.
Other explored methods
The idea of using many small dispersed DAC scrubbers—analogous to live plants—to create an environmentally significant reduction in CO2 levels has earned the technology the name of artificial trees in popular media.
Moisture swing sorbent
In a cyclical process designed in 2012 by professor Klaus Lackner, the director of the Center for Negative Carbon Emissions (CNCE), dilute CO2 can be efficiently separated using an anionic exchange polymer resin called Marathon MSA, which absorbs CO2 when dry and releases it when exposed to moisture. A large part of the energy for the process is supplied by the latent heat of the phase change of water. The technology requires further research to determine its cost-effectiveness.
Metal-organic frameworks
Other substances which can be used are metal–organic frameworks (MOFs).
Membranes
Membrane-based separation (m-DAC) employs semi-permeable membranes. This method requires little water and has a smaller footprint. Typically, polymeric membranes, either glassy or rubbery, are used for direct air capture. Glassy membranes typically exhibit high selectivity with respect to carbon dioxide; however, they also have low permeabilities. Membrane capture of carbon dioxide is still in development and needs further research before it can be implemented on a larger scale.
Electro-Swing Adsorption
Electro-swing adsorption (ESA) has also been proposed.
Rock flour
Rock flour, soil ground into nanoparticles by glacier ice, has potential both as a soil conditioner and for carbon capture. Glacier melting deposits one billion tons of rock flour annually, and one ton of Greenlandic rock flour can capture of carbon.
Environmental impact
Proponents of DAC argue that it is an essential component of climate change mitigation. Researchers posit that DAC could help contribute to the goals of the Paris Agreement (namely limiting the increase in global average temperature to well below 2 °C above pre-industrial levels). However, others claim that relying on this technology is risky and might postpone emission reduction under the notion that it will be possible to fix the problem later, and suggest that reducing emissions may be a better solution.
Opponents of DAC argue that the resources required to operate DAC technologies are an immense burden that may outweigh the goal of the technology itself. A 2020 analysis revealed that DAC 2 technology may be an unsuitable option for capturing the projected 30 Gt-CO2 per year, as it requires an enormous amount of materials (16.3–27.8 Gt of NH3 and 3.3–5.6 Gt of EO). The same study found that DAC 1 technology requires at least 8.4–13.1 TW-yr (46–71% TGES), an estimate calculated with the exclusion of the associated energy costs for carbon storage.
Energy cost concerns were explored in 2021; it was found that, in order for DAC technology to maintain a carbon removal of 73–86% per ton of CO2 captured, DAC would demand land occupation and renewable energy equivalent to what is needed for a global switch from gasoline to electric vehicles, with approximately five times higher material consumption.
Some DAC technologies, especially liquid systems, require both high-temperature heat and electricity. In these systems, the electrical demand is met using natural gas, imported electricity from the grid, and oxyfuel combustion of natural gas. This means that many DAC technologies are powered by fossil fuels, the very thing the technology is meant to eliminate reliance on. The physical scale of the air contactor in any DAC system is a formidable challenge and may also have an impact on the environment. A DAC system meant to capture six million metric tons of CO2 per year may be sized at about 30 kilometers in length and 10 meters in height.
Though using fossil fuel to generate electricity would release more CO2 than is captured, the minimum energy required for DAC technologies is estimated to be 250 kWh per tonne of CO2, whereas capturing from natural gas and coal power plants requires about 100 and 65 kWh per tonne of CO2, respectively. This could lead to a new set of environmental impacts in the future.
DAC relying on amine-based absorption demands significant water input. It was estimated that capturing 3.3 gigatonnes of CO2 a year would require 300 km3 of water, or 4% of the water used for irrigation. On the other hand, using sodium hydroxide needs far less water, but the substance itself is highly caustic and dangerous.
DAC also requires much greater energy input than traditional capture from point sources, like flue gas, due to the low concentration of CO2. The theoretical minimum energy required to extract CO2 from ambient air is about 250 kWh per tonne of CO2, while capture from natural gas and coal power plants requires, respectively, about 100 and 65 kWh per tonne of CO2. Because of this implied demand for energy, some have proposed using "small nuclear power plants" connected to DAC installations.
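The physical floor underlying such estimates is the ideal mixing-entropy work of separating a dilute gas. The sketch below computes that bound under stated idealizations (ideal-gas behavior, 420 ppm CO2, complete capture into a pure stream at 25 °C); quoted minima such as 250 kWh per tonne are higher because they account for incomplete capture and process irreversibilities.

```python
import math

# Ideal-gas thermodynamic minimum work to separate CO2 from ambient air into
# a pure stream, assuming complete capture at 25 degC and 420 ppm CO2.
R = 8.314      # gas constant, J/(mol K)
T = 298.15     # temperature, K
x = 0.000420   # CO2 mole fraction in ambient air

# Work per mole of CO2 captured (mixing-entropy terms for both product streams):
w_mol = R * T * (-math.log(x) - (1 - x) / x * math.log(1 - x))  # J/mol

# Convert to kWh per tonne of CO2 (molar mass 44.01 g/mol, 3.6 MJ per kWh).
w_tonne = w_mol / 44.01 * 1e6 / 3.6e6
print(f"{w_tonne:.0f} kWh per tonne of CO2")  # -> 137 kWh per tonne of CO2
```

The roughly 140 kWh/tonne bound explains why point-source capture, which starts from far higher CO2 concentrations, needs much less energy than DAC.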
When DAC is combined with a carbon capture and storage (CCS) system, it can produce a negative emissions plant, but it would require a carbon-free electricity source. The use of any fossil-fuel-generated electricity would end up releasing more CO2 to the atmosphere than it would capture. Moreover, using DAC for enhanced oil recovery would cancel any supposed climate mitigation benefits.
Applications
Practical applications of DAC include
enhanced oil recovery,
production of carbon-neutral synthetic fuel and plastics,
beverage carbonation,
carbon sequestration,
improving concrete strength,
creating carbon-neutral concrete alternative,
enhancing productivity of algae farms,
CO2 enrichment of air in greenhouses
liquid fuels
enhanced coal bed methane
These applications require different concentrations of CO2 product formed from the captured gas. Forms of carbon sequestration such as geological storage require pure CO2 products (concentration > 99%), while other applications such as agriculture can function with more dilute products (~5%). Since the air processed through DAC originally contains only 0.04% CO2 (400 ppm), creating a pure product requires more energy than a dilute product and is thus typically more expensive. Captured carbon that is used for food typically requires CO2 of higher purity, ranging from 50% upward, followed by additional chemical processing.
DAC is not an alternative to traditional, point-source carbon capture and storage (CCS); rather, it is a complementary technology that could be utilized to manage carbon emissions from distributed sources, fugitive emissions from the CCS network, and leakage from geological formations. Because DAC can be deployed far from the source of pollution, synthetic fuel produced with this method can use already existing fuel transport infrastructure.
Typical discourse surrounding DAC focuses on its effectiveness at mitigating climate change. However, the majority of existing DAC facilities are small scale and operate primarily to sell the captured CO2 for use in other products rather than permanently sequestering it. DAC facilities that sell CO2 for beverage production operate with low recovery rates of around 4.7% and produce 58 tCO2 per day. The use of DAC facilities for commercial purposes reinforces critics' view that DAC is a ploy used by corporations to protect and promote financial interests.
Given the myriad of DAC applications, proponents of DAC argue that the political utility of the technology lies in its ability to create new employment opportunities.
Operational/developing DAC facilities
DAC Projects and their respective processes for Carbon removal and/or storage
International DAC development
53 DAC plants are expected to be operational by the end of 2024
93 DAC plants to be operating in 2030 with a combined capacity of 6.4-11.4 MtCO2/yr
By the end of 2024, 18 plants are scheduled to be operational in North America and 24 in Europe
The leading countries in DAC include the US, Canada and European nations
China
DAC technologies have been proposed to help China in its pursuit of carbon neutrality by 2060. Following the 2021 Glasgow Climate Conference, as the leading GHG emitter, China has begun the development of various low-emission strategies. With China's commitment to DAC alone, global warming could decrease by approximately 0.2–0.3 °C. Recent studies on deep decarbonization in China suggest that carbon neutrality can be attained with a contribution from carbon capture and storage to dispose of multiple GtCO2 per year of point-source emissions. China has developed its own direct air capture (DAC) technology, called "CarbonBox", developed by Shanghai Jiao Tong University and China Energy Engineering Corporation. Each module can extract over 100 tonnes of carbon dioxide (CO2) annually, resulting in a 99% pure CO2 product. CarbonBox DAC facilities are the size of a shipping container, can be installed on site, and utilize low-carbon energy sources to remove CO2 from the atmosphere.
Iceland
The Orca, pioneered by Zurich-based Climeworks with support from Microsoft in 2021, was the first large-scale DAC plant, removing 4,000 tons of CO2 annually; this amount corresponds to approximately 1.75 million liters of gasoline. The DAC facility is located in Hellisheidi, Iceland, and is powered by the Hellisheidi Geothermal Power Plant. Orca consists of 12 amine-holding containers that collect a total of around 600 kg of CO2 per hour. This facility operates in conjunction with CarbFix, an Icelandic technology firm. CarbFix takes the captured CO2 from the DAC facility and injects it into the Earth's crust, where it mineralizes. The mineralization process circumvents the risks of fire and leaks that are associated with alternative DAC technologies.
Kenya
Octavia Carbon, founded by Martin Freimüller in 2022, is the first direct air capture company in the Global South. The company plans to develop DAC technology in alignment with the country's renewable grid and rich geology, both of which are suitable for CO2 storage. The project is still in its development phase; however, following support from the Kenyan government and international DAC companies, the team has grown to more than 53 employees. In collaboration with Carbonfuture, Octavia Carbon now seeks to implement a breakthrough digital Monitoring, Reporting, and Verification (dMRV) system for DAC. dMRV systems allow real-time data tracking across the entire carbon removal process. The current DAC pilot facility, Project Hummingbird, is located in Kenya's Rift Valley in Naivasha and is projected to capture and securely store 1,000 tons of CO2 annually (1,000 tCO2/yr). Project Hummingbird will utilize the mineralization process by injecting the stored CO2 into the basalt rock formations native to the Rift Valley.
Cost
One of the largest hurdles to implementing DAC is the cost of separating CO2 from air. Although DAC implementation was initially and optimistically estimated to cost around $100–$300 per tonne, as of 2023 the total system cost is estimated at over $1,000 per tonne of CO2. Large-scale DAC deployment can be accelerated by policy incentives. There is discourse surrounding the actual cost of globalized usage of DAC technology, as cost values reported by private companies tend to be lower than academic estimates: the Department of Energy estimated costs per tonne to be under $100, while other sources have estimated the cost to be much larger.
Under the Bipartisan Infrastructure Law, the U.S. Department of Energy will invest $3.5 billion in four direct air capture hubs. According to the agency, the hubs have the potential to capture at least 1 million metric tonnes of carbon dioxide (CO2) annually from the atmosphere. Once captured, the CO2 will be permanently stored in a geologic formation.
The Department of Energy invested $1.2 billion to further the development of direct air capture facilities in Texas and Louisiana. These projects are the result of initial selections under President Biden's Bipartisan Infrastructure Law.
Development
Carbon engineering
Carbon Engineering is a commercial DAC company founded in 2009 and backed, among others, by Bill Gates and Murray Edwards. It runs a pilot plant in British Columbia, Canada, that has been in use since 2015 and is able to extract about a tonne of CO2 a day. An economic study of its pilot plant conducted from 2015 to 2018 estimated the cost at $94–232 per tonne of atmospheric CO2 removed.
Partnering with California energy company Greyrock, Carbon Engineering converts a portion of its concentrated into synthetic fuel, including gasoline, diesel, and jet fuel.
The company uses a potassium hydroxide solution, which reacts with CO2 to form potassium carbonate, removing CO2 from the air.
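The hydroxide chemistry can be sanity-checked with basic stoichiometry. The sketch below is a simplified back-of-the-envelope calculation (real plants regenerate the hydroxide in a loop rather than consuming it once) using the capture reaction CO2 + 2 KOH → K2CO3 + H2O.

```python
# Molar masses (g/mol)
M_CO2 = 44.01
M_KOH = 56.11

def koh_per_tonne_co2():
    """Tonnes of KOH tied up per tonne of CO2 captured,
    via CO2 + 2 KOH -> K2CO3 + H2O (2 mol KOH per mol CO2)."""
    return 2 * M_KOH / M_CO2

print(f"{koh_per_tonne_co2():.2f} t KOH per t CO2")  # ~2.55
```

The roughly 2.5:1 mass ratio is why regeneration of the capture solution, rather than its consumption, is central to the process economics.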
Climeworks
Climeworks's first industrial-scale DAC plant, which started operation in May 2017 in Hinwil, in the canton of Zurich, Switzerland, can capture 900 tonnes of CO2 per year. To lower its energy requirements, the plant uses heat from a local waste incineration plant. The captured CO2 is used to increase vegetable yields in a nearby greenhouse.
The company stated that it costs around $600 to capture one tonne of CO2 from the air.
Climeworks partnered with Reykjavik Energy in Carbfix, a project launched in 2007. In 2017, the CarbFix2 project was started and received funding from the European Union's Horizon 2020 research program. The CarbFix2 pilot plant project runs alongside a geothermal power plant in Hellisheidi, Iceland. In this approach, CO2 is injected 700 meters underground and mineralizes into basaltic bedrock, forming carbonate minerals. The DAC plant uses low-grade waste heat from the geothermal plant, effectively eliminating more CO2 than both plants produce.
On May 8, 2024, Climeworks activated the world's largest DAC plant, named Mammoth, in Iceland. At full capacity it will be able to pull 36,000 tons of carbon dioxide from the atmosphere per year, according to Climeworks, equivalent to taking around 7,800 gas-powered cars off the road for a year.
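The cars-off-the-road comparison can be reproduced with one line of arithmetic. The sketch below assumes the EPA's commonly cited figure of roughly 4.6 metric tons of CO2 emitted per typical passenger vehicle per year; that per-car figure is an assumption of mine, not stated in this section.

```python
TONNES_PER_CAR = 4.6  # assumed average annual CO2 per passenger vehicle (EPA figure)

def cars_equivalent(tonnes_captured_per_year):
    """Number of typical passenger cars whose annual emissions
    equal the given amount of captured CO2."""
    return tonnes_captured_per_year / TONNES_PER_CAR

print(round(cars_equivalent(36_000)))  # ~7,800, matching the Climeworks claim
```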
Global Thermostat
Global Thermostat is a private company founded in 2010, located in Manhattan, New York, with a plant in Huntsville, Alabama. Global Thermostat uses amine-based sorbents bound to carbon sponges to remove CO2 from the atmosphere. The company has projects ranging from 40 to 50,000 tonnes of CO2 per year.
The company claims to remove CO2 for $120 per tonne at its facility in Huntsville.
Global Thermostat has closed deals with Coca-Cola (which aims to use DAC to source CO2 for its carbonated beverages) and ExxonMobil (which intends to start a DAC-to-fuel business using Global Thermostat's technology).
Soletair Power
Soletair Power is a startup founded in 2016, located in Lappeenranta, Finland, operating in the fields of direct air capture and Power-to-X. The startup is primarily backed by the Finnish technology group Wärtsilä. According to Soletair Power, its technology is the first to combine direct air capture with buildings' HVAC systems. The technology captures CO2 from the air passing through a building's existing ventilation units, removing atmospheric CO2 while reducing the building's net emissions. The captured CO2 is mineralized into concrete, stored, or utilized to create synthetic products such as food, textiles, or renewable fuels. In 2020, Wärtsilä, together with Soletair Power and Q Power, created their first demonstration unit of Power-to-X for Dubai Expo 2020, which can produce synthetic methane from CO2 captured from buildings.
Prometheus Fuels
Prometheus Fuels is a start-up company based in Santa Cruz which launched out of Y Combinator in 2019 to remove CO2 from the air and turn it into zero-net-carbon gasoline and jet fuel. The company uses a DAC technology, adsorbing CO2 from the air directly into process electrolytes, where it is converted into alcohols by electrocatalysis. The alcohols are then separated from the electrolytes using carbon nanotube membranes, and upgraded to gasoline and jet fuels. Since the process uses only electricity from renewable sources, the fuels are carbon neutral when used, emitting no net CO2 to the atmosphere.
Heirloom Carbon Technologies
Heirloom's first direct air capture facility opened in Tracy, California, in November 2023. The facility can remove up to 1,000 U.S. tons of CO2 annually, which is then mixed into concrete using technologies from CarbonCure. Heirloom also has a contract with Microsoft under which the latter will purchase 315,000 metric tons of CO2 removal.
Other companies
Center for Negative Carbon Emissions of Arizona State University
Carbfix – a subsidiary of Reykjavik Energy, Iceland
Energy Impact Center – a research institute that advocates for the use of nuclear energy to power direct air capture technologies
Mission Zero Technologies – a startup in London, UK
Carbyon – a startup in Eindhoven, the Netherlands
Innovations in research
Within the research domain, the ETH Zurich team's development of a photoacid solution for direct air capture marks a significant innovation. This technology, still under refinement, stands out for its minimal energy requirements and its novel chemical process that enables efficient CO2 capture and release. This method's potential for scalability and its environmental benefits align it with ongoing efforts by other companies listed in this section, contributing to the global pursuit of effective and sustainable carbon capture solutions.
Political discourse
Environmentalist opposition
In the United States there is conflict between politicians and politically unaffiliated environmental advocates over direct air capture as it relates to economic benefit and its efficiency in reducing climate-change-associated risks.
One of the main grievances of climate campaigners is that DAC is perceived to be, at best, a costly irrelevance to the more pressing need to cut emissions and, at worst, a ploy to maintain the fossil fuel industry's status quo and perpetuate pollution. The Stratos project is owned by Occidental Petroleum, an American oil company that bought Carbon Engineering on November 3, 2023 for $1.1 billion and views carbon removal as a sort of future-proofing for its industry; the investment is regarded by some as an attempt to extend the longevity of the fossil fuel industry. Jonathan Foley, executive director of Project Drawdown (a research-based plan to reverse global warming and stop climate change), regards DAC technology as a greenwashing exercise that mitigates climate change issues but does not seek to solve them. The Consumers' Association of Penang perceives DAC as something that exacerbates the climate crisis and is fundamentally against the principle of climate justice.
A study conducted in 2024 analyzed the conditional support of DAC technologies in the United States. The study revealed that most of the participants who were familiar with DAC technology and had concerns about climate change had questions regarding the moral hazards of DAC technology. Concern that DAC might allow companies to continue pollutive practices while greenwashing their public image was raised across all focus groups. Other participants worried that DAC technology would be used as a front by fossil fuel corporations to create the illusion that something was being done to combat climate change without contributing real benefit to the environment.
Environmentalist opposition to DAC often concerns the ecological impacts of the associated energy infrastructure. Complications associated with the impact DAC may have on air quality in specific communities are called into question as well. Some critics oppose the technology because of the locations where facilities tend to be placed: feeling that these projects are routinely developed in poor areas, objectors expressed that they feel "experimented on."
Another study, focusing on perceptions of DAC technology among climate-concerned persons from the United States and the United Kingdom, found similar results. A theme across all groups was the perception of DAC as a technology that is incongruous with the vision for a sustainable society: participants reported DAC to be "reactionary" to climate change as opposed to a viable solution to it, and "very few people believed that CDR deals with the root cause of emissions." The overall perception was that DAC is merely an intervention that fails to address the root cause of climate change and instead sustains the contributors to the crisis itself.
Political opposition to DAC technology has also been related to doubts about the feasibility of DAC development and deployment at scale. Technologies analogous to DAC, such as CCS and BECCS, have been subject to immense public opposition. These technologies have also been characterized by multiple failures and aborted projects, contributing to the already persistent doubt regarding the credibility of DAC projects.
Biden's Bipartisan Infrastructure Law
Some environmentalists believe that the $3.5 billion investment in DAC is a "dangerous gamble" that puts the lives of frontline communities at risk. The Institute for Policy Studies regards this decision as risky because "the promise of DAC may never materialize", and should the deployment of this technology fail, the result will be harm to frontline communities in "new and unacceptable ways". Surveys revealed that, among those against DAC, trust in local government was generally low, in addition to mistrust of the fossil fuel companies that sponsor DAC development. Environmentalists' lack of faith in the Bipartisan Infrastructure Law grew after a 2020 Treasury Department Inspector General investigation revealed that 90% of the tax credits claimed for carbon capture operations were claimed without verifying that any carbon was being captured. Additionally, the IRS decision not to release information about which companies are benefiting from these new investments in DAC increases uncertainty among people who are concerned about how their taxes are paying for DAC development.
Partisan perception in the United States
A poll taken in 2023 assessing opinions on direct air capture by political party affiliation found that 42% of Democrats were strongly in favor of DAC, 34% of independent voters were in favor, while only 28% of Republicans indicated fervent support for DAC technology. However, despite the negative response from the climate-conscious community, DAC technology has received bipartisan support in government.
Bipartisan support for DAC seems to rest on two merits: the environmental benefit of DAC and its potential economic advantages. Republicans argue that DAC can provide economic advantages to the countries and local areas hosting these facilities through job creation, increased tax revenue, and economic diversification. DAC also offers economic protection for fossil fuel industries, many of which, including ExxonMobil, have donated generously to DAC research and development. Bipartisan support stems from the perception of DAC as a solution that satisfies both economic and environmental concerns. However, despite bipartisan support for DAC in Congress, a survey conducted in 2024 revealed that "Republicans and Independents were significantly less likely than Democrats to support the development of DAC in and near their communities and in the U.S."
Much of the discourse surrounding DAC comes from environmental activists, and though there are discrepancies in how Republicans and Democrats view DAC, these differences are generally relegated to the perception of the benefits DAC offers. Some, primarily Democrats, view DAC as a feasible solution to combat global warming, whereas Republicans' support for DAC lies in the way the technology will not interfere with the economic interests of fossil fuel companies.
Previous direct air capture shortcomings
BECCS project
Bioenergy with carbon capture and storage (BECCS) has come under scrutiny for a variety of reasons, primarily because the technology is energy intensive, requires large land-use changes, and has the potential to leak carbon dioxide back into the atmosphere. Environmentalists argued that BECCS was an infeasible option because of the emissions that the project would produce. BECCS is proposed as a solution based on the assumption that bioenergy would be carbon neutral. Many believe this assumption to be incorrect, because the deforestation, logging, and land required to accommodate the technology would offset the amount of carbon it removes. Individuals concerned with protecting animal life also argue that increasing demand for land for BECCS would be an additional threat to biodiversity. Opponents further argue that the risk of a carbon dioxide leak outweighs the potential benefits even if the technology functions properly: carbon dioxide stored underground has a high risk of leakage, and the consequences of a major leak could be catastrophic, as "atmospheric CO2 levels could spike significantly, especially if a leak were to occur from a major storage site." Anxiety surrounding the possibility of a CO2 leak is a common worry among those who doubt DAC.
See also
Artificial photosynthesis
Carbon dioxide removal
CityTrees
Smog tower
References
Carbon dioxide removal
Carbon dioxide
Climate engineering
Sustainability
Direct air capture | Direct air capture | [
"Chemistry",
"Engineering"
] | 6,051 | [
"Greenhouse gases",
"Geoengineering",
"Carbon dioxide",
"Planetary engineering"
] |
59,325,810 | https://en.wikipedia.org/wiki/Building%20information%20modeling%20in%20green%20building | Building information modeling (BIM) in green buildings aims at enabling sustainable designs and in turn allows architects and engineers to integrate and analyze building performance. It quantifies the environmental impacts of systems and materials to support the decisions needed to produce sustainable buildings, using information about sustainable materials that are stored in the database and interoperability between design and analysis tools. Such data can be useful for building life cycle assessments.
Services
BIM services, including conceptual modeling and topographic modeling, offer an approach to green building.
Conceptual energy analysis
Conceptual energy analysis allows designers and BIM service providers to transfer conceptual modeling into analytical energy models through exporting mass to gbXML. Possible information that can be transferred includes climate data, graphical energy analysis results, and design contrast options.
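A gbXML export is, at bottom, a nested XML document describing the building. The sketch below builds only a schematic skeleton of that nesting (gbXML documents use a Campus/Building/Space hierarchy under the gbXML root); the attribute set is deliberately simplified and is my assumption, not a schema-valid export such as a BIM tool would produce.

```python
import xml.etree.ElementTree as ET

def minimal_gbxml(building_id="bldg-1", space_id="sp-1"):
    """Build a schematic gbXML skeleton (element nesting only; a real
    export adds geometry, constructions, schedules, and climate data)."""
    root = ET.Element("gbXML", xmlns="http://www.gbxml.org/schema")
    campus = ET.SubElement(root, "Campus", id="campus-1")
    building = ET.SubElement(campus, "Building", id=building_id)
    ET.SubElement(building, "Space", id=space_id)
    return root

doc = minimal_gbxml()
print(ET.tostring(doc, encoding="unicode"))
```

The point of the exchange format is exactly this tree structure: the analysis tool walks the spaces and surfaces rather than re-interpreting the native BIM model.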
Solar and shadow analysis
Software tools can aid designers and BIM service providers in envisaging or quantifying solar and shadow effects.
Sustainability analysis
BIM tools and workflow have two phases: inherent BIM features and BIM-based analysis tools.
Inherent BIM features include functions such as 3D modeling, visualization, and clash detection, which help integrated project delivery and design optimization.
BIM-based analysis tools are used to analyze energy, solar, thermal, etc. The benefits of those tools are to enable better communication and cooperation, as well as higher accuracy and efficiency.
The following tabulation compares BIM-based software used for green analyses.
Industry Foundation Classes data model
Industry Foundation Classes (IFC) or COBie is a standard exchange protocol to be used in data exchange between BIM software and rating systems.
Construction
BIM aids in four main areas: land, water, energy, and materials.
Land
BIM and GIS are integrated for site planning. BIM simulations can estimate the progress of construction.
Water
BIM is utilized in large scale schemes as well as across the industry. It helps decrease unnecessary loss and effectively saves water. BIM improves the design process of building water supply and drainage.
Energy
BIM can be used to simulate energy consumption. It integrates and analyzes information at the construction stage to calculate the thermal environment that could shorten the construction period.
Material
BIM tracks material consumption, calculates material requirements, and manages material information uniformly.
Rating systems
Sustainable rating systems are used to evaluate the environmental performance of buildings. These systems have common criteria and are similar in their evaluation of energy consumption, indoor environmental quality, water efficiency, and material. Three rating systems that can integrate with BIM are LEED, BREEAM, and Green Star.
The framework for integrating BIM with sustainable rating systems includes "design assistance" and "certification management" modules. The design assistance module supports designers with efficient and sustainable knowledge built into the BIM tool, accessed through the BIM tool's application programming interface (API). The certification management module is a web-based application used to manage project information, sustainable documentation, and submissions for certification purposes.
See also
List of BIM software
References
Sustainable building
Building information modeling | Building information modeling in green building | [
"Engineering"
] | 609 | [
"Building engineering",
"Sustainable building",
"Building information modeling",
"Construction"
] |
49,147,063 | https://en.wikipedia.org/wiki/Isolation%20%28microbiology%29 | In microbiology, the term isolation refers to the separation of a strain from a natural, mixed population of living microbes, as present in the environment, for example in water or soil, or from living beings with skin flora, oral flora or gut flora, in order to identify the microbe(s) of interest. Historically, the laboratory techniques of isolation first developed in the field of bacteriology and parasitology (during the 19th century), before those in virology during the 20th century.
History
The laboratory techniques of isolating microbes first developed during the 19th century in the fields of bacteriology and parasitology using light microscopy. 1860 marked the successful introduction of liquid medium by Louis Pasteur. The liquid culture Pasteur developed made it possible to promote or inhibit the growth of specific bacteria and observe the result. This same principle is utilized today through various media, such as Mannitol salt agar, a solid medium. Solid cultures were developed in 1881, when Robert Koch solidified the liquid media through the addition of agar.
Proper isolation techniques for virology did not exist prior to the 20th century. The methods of microbial isolation have drastically changed over the past 50 years, from a labor perspective with increasing mechanization, in regard to the technologies involved, and with them speed and accuracy.
General techniques
In order to isolate a microbe from a natural, mixed population of living microbes, as present in the environment, for example in water or soil flora, or from living beings with skin flora, oral flora or gut flora, one has to separate it from the mix.
Traditionally microbes have been cultured in order to identify the microbe(s) of interest based on its growth characteristics.
Depending on the expected density and viability of microbes present in a liquid sample, physical methods to increase the gradient as for example serial dilution or centrifugation may be chosen.
In order to isolate organisms in materials with high microbial content, such as sewage, soil or stool, serial dilutions will increase the chance of separating a mixture.
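The arithmetic behind serial dilution is worth making explicit. The sketch below is the standard plate-count calculation (not specific to any protocol in this article): it recovers the cell density of the original sample from a plate in the countable range.

```python
def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """Original sample density (CFU/mL) from a plate count.

    colonies: colonies counted on the plate (ideally 30-300)
    dilution_factor: total dilution of the plated tube, e.g. 1e-6
    volume_plated_ml: volume spread on the plate, e.g. 0.1 mL
    """
    return colonies / (dilution_factor * volume_plated_ml)

# 150 colonies on the 10^-6 plate with 0.1 mL plated
print(f"{cfu_per_ml(150, 1e-6, 0.1):.2e} CFU/mL")  # 1.50e+09
```

Dense samples like sewage or stool need several ten-fold steps before any plate lands in the countable range, which is exactly why serial dilution increases the chance of separating a mixture.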
In a liquid medium with few or no expected organisms, from an area that is normally sterile (such as CSF, blood inside the circulatory system) centrifugation, decanting the supernatant and using only the sediment will increase the chance to grow and isolate bacteria or the usually cell-associated viruses.
If one expects or looks for a particularly fastidious organism, the microbiological culture and isolation techniques will have to be geared towards that microbe. For example, a bacterium that dies when exposed to air, can only be isolated if the sample is carried and processed under airless or anaerobic conditions. A bacterium that dies when exposed to room temperature (thermophilic) requires a pre-warmed transport container, and a microbe that dries and dies when carried on a cotton swab will need a viral transport medium before it can be cultured successfully.
Bacterial and fungal culture
Inoculation
Laboratory technicians inoculate the sample onto certain solid agar plates with the streak plate method or into liquid culture medium, depending what the objective of the isolation is:
If one wants to isolate only a particular group of bacteria, such as Group A Streptococcus from a throat swab, one can use a selective medium that will suppress the growth of concomitant bacteria expected in the mix (by antibiotics present in the agar), so that only streptococci are "selected", i.e. visibly stand out. To isolate fungi, Sabouraud agar can be used. Alternatively, lethal conditions for streptococci and gram-negative bacteria, like the high salt concentrations in Mannitol salt agar, favor survival of any staphylococci present in a sample of gut bacteria, and phenol red in the agar acts as a pH indicator, showing whether the bacteria are able to ferment mannitol by excreting acid into the medium. In other agars, substances are added to exploit an organism's ability to produce a visible pigment (e.g. granada medium for Group B Streptococcus), which changes the bacterial colony's color, or to dissolve blood agar by hemolysis so that the colonies can be more easily spotted. Some bacteria, like Legionella species, require particular nutrients or toxin binding (as in charcoal) to grow, and therefore media such as Buffered charcoal yeast extract agar must be used.
If one wants to isolate as many or all strains possible, different nutrient media as well as enriched media, such as blood agar and chocolate agar and anaerobic culture media such as thioglycolate broth need to be inoculated. To enumerate the growth, bacteria can be suspended in molten agar before it becomes solid, and then poured into petri dishes, the so-called 'pour plate method' which is used in environmental microbiology and food microbiology (e.g. dairy testing) to establish the so-called 'aerobic plate count'.
Incubation
After the sample is inoculated into or onto the chosen media, they are incubated under the appropriate atmospheric settings, such as aerobic, anaerobic or microaerophilic conditions or with added carbon dioxide (5%); at different temperature settings, for example 37 °C in an incubator or in a refrigerator for cold enrichment; under appropriate light, for example strictly without light (wrapped in paper or in a dark bottle) for scotochromogenic mycobacteria; and for different lengths of time, because different bacteria grow at different speeds, varying from hours (Escherichia coli) to weeks (e.g. mycobacteria).
At regular, serial intervals laboratory technicians and microbiologists inspect the media for signs of visible growth and record it. The inspection again has to occur under conditions favoring the isolate's survival, i.e. in an 'anaerobic chamber' for anaerobe bacteria for example, and under conditions that do not threaten the person looking at the plates from being infected by a particularly infectious microbe, i.e. under a biological safety cabinet for Yersinia pestis (plague) or Bacillus anthracis (anthrax) for example.
Identification
When bacteria have visibly grown, they are often still mixed. The identification of a microbe depends upon the isolation of an individual colony, as biochemical testing of a microbe to determine its different physiological features depends on a pure culture.
To make a subculture, one again works with aseptic technique, lifting a single colony off the agar surface with a loop and streaking the material into the 4 quadrants of an agar plate, or all over if the colony was singular and did not look mixed.
Gram staining allows for visualization of the bacterial cell wall composition based on the color the bacteria retain after a series of staining and decolorization steps. This staining process allows for the differentiation of gram-negative and gram-positive bacteria. Gram-negative bacteria stain pink due to their thin layer of peptidoglycan; bacteria that stain purple, due to a thick layer of peptidoglycan, are gram-positive.
In clinical microbiology numerous other staining techniques for particular organisms are used (acid fast bacterial stain for mycobacteria). Immunological staining techniques, such as direct immunofluorescence have been developed for medically important pathogens that are slow growing (Auramine-rhodamine stain for mycobacteria) or difficult to grow (such as Legionella pneumophila species) and where the test result would alter standard management and empirical therapy.
Biochemical testing of bacteria involves a set of agars in vials to separate motile from non-motile bacteria.
In 1970 a miniaturized version was developed, called the analytical profile index.
Successful identification via e.g. genome sequencing and genomics depends on pure cultures.
Bacteria, culture-independent
While the most rapid method to identify bacteria is by sequencing their 16S rRNA gene, which has been PCR-amplified beforehand, this method does not require isolation. Since most bacteria cannot be grown with conventional methods (particularly environmental or soil bacteria) metagenomics or metatranscriptomics are used, shotgun sequencing or PCR directed sequencing of the genome. Sequencing with mass spectrometry as in Matrix-assisted laser desorption/ionization (MALDI-TOF MS) is used in the analysis of clinical specimens to look for pathogens. Whole genome sequencing is an option for a singular organism that cannot be sufficiently characterized for identification. Small DNA microarrays can also be used for identification.
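Sequence-based identification ultimately reduces to comparing a read against reference sequences. The toy sketch below computes simple percent identity between two equal-length fragments; the sequences are invented placeholders, and real pipelines align reads against curated 16S databases rather than comparing position by position.

```python
def percent_identity(a, b):
    """Position-by-position identity of two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length (align first)")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Invented 16S-like fragments, for illustration only.
query = "AGAGTTTGATCCTGGCTCAG"
ref   = "AGAGTTTGATCATGGCTCAG"
print(f"{percent_identity(query, ref):.1f}% identity")  # 95.0% identity
```

In practice, identification thresholds (e.g. identity to the closest database hit) are applied after a proper alignment handles insertions and deletions, which this position-wise comparison ignores.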
References
Microbiology | Isolation (microbiology) | [
"Chemistry",
"Biology"
] | 1,808 | [
"Microbiology",
"Microscopy"
] |
49,153,903 | https://en.wikipedia.org/wiki/Philippinite | Philippinites, or rizalites are tektites found in the Philippines. They are considered to be about 710,000 years old on the average and generally ranging in size from millimeters to centimeters. Their age corresponds with the age of other tektites in the Australian strewn tektite field. In 1964, a very large philippinite, weighing with dimensions 6.5 x 6.2 x 5.2 cm, was purchased by the University of California, Los Angeles Department of Astronomy. The heaviest philippinite ever found weighs in its splash-form, which is also the heaviest tektite of this kind.
Etymology
The term rizalite was named after the Philippine province of Rizal, where the first black tektites were rediscovered in October 1926 at Novaliches (which was then part of Rizal). However, it was only in 1928 that the term was proposed by American anthropologist H. Otley Beyer, dubbed the father of Philippine tektite studies, to refer to tektites found in the Philippines. Philippinite has become the more favored term because tektites were also found in other areas of the Philippines, such as the Bicol region and the town of Anda in the province of Pangasinan. Some early authors referred to Philippine tektites as "obsidianites", but that too has fallen out of use due to the introduction of the term philippinite by succeeding authors.
Uses
In the ancient Philippines, tektites were used by early settlers as arrowheads and other tools, as well as for decorative purposes. During the Philippine Iron Age, the polish features of philippinites found in graves evidenced that they were used as amulets or charms. In modern times, they are generally collector's items.
Composition
The following table details the chemical composition of philippinite:
Notes
Splash-form tektites are tektites that are shaped like spheres, ellipsoids, teardrops, dumbbells, and other forms. They have shaped this way due to the ejecta or splash of silicate liquid following a meteorite impact, scattering them to a distance up to thousands of kilometers.
References
Glass in nature
Impact event minerals
Amorphous solids
Geology of the Philippines | Philippinite | [
"Physics"
] | 467 | [
"Amorphous solids",
"Unsolved problems in physics"
] |
49,155,297 | https://en.wikipedia.org/wiki/Calder%C3%B3n%20projector | In applied mathematics, the Calderón projector is a pseudo-differential operator used widely in boundary element methods. It is named after Alberto Calderón.
Definition
The interior Calderón projector is defined to be:

    C_int = [ (1 - σ)Id - K        V      ]
            [       W          σId + K'   ]

where σ = 1/2 almost everywhere, Id is the identity boundary operator, K is the double layer boundary operator, V is the single layer boundary operator, K' is the adjoint double layer boundary operator, and W is the hypersingular boundary operator.
The exterior Calderón projector is defined to be:

    C_ext = [ σId + K             -V       ]
            [   -W         (1 - σ)Id - K'  ]
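The name "projector" reflects two identities satisfied by these operators. The statements below are standard results from the boundary element literature, not derived in this article:

```latex
% Both Calderón operators are projections, and they are complementary:
C_{\mathrm{int}}^{2} = C_{\mathrm{int}}, \qquad
C_{\mathrm{ext}}^{2} = C_{\mathrm{ext}}, \qquad
C_{\mathrm{int}} + C_{\mathrm{ext}} = \mathrm{Id}.
```

In boundary element methods, these identities underlie the relations between the Dirichlet and Neumann boundary data of a solution.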
References
Potential theory
Partial differential equations
Complex analysis
Operator theory
Numerical analysis | Calderón projector | [
"Mathematics"
] | 106 | [
"Functions and mappings",
"Computational mathematics",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
49,155,519 | https://en.wikipedia.org/wiki/Itruvone | Itruvone (; developmental code name PH10), also known as pregn-4-en-20-yn-3-one, is a vomeropherine which is under development by VistaGen Therapeutics as a nasal spray for the treatment of major depressive disorder.
See also
List of investigational antidepressants
List of neurosteroids § Pheromones and pherines
References
Ethynyl compounds
Experimental antidepressants
Ketones
Pregnanes | Itruvone | [
"Chemistry"
] | 104 | [
"Ketones",
"Functional groups"
] |
49,155,874 | https://en.wikipedia.org/wiki/LY-2459989 | LY-2459989 is a silent antagonist of the κ-opioid receptor (KOR) that has been developed by Eli Lilly as a radiotracer of that receptor, labeled either with carbon-11 or fluorine-18. It possesses high affinity for the KOR (Ki = 0.18 nM) and is highly selective for it over the μ-opioid receptor (Ki = 7.68 nM) and the δ-opioid receptor (Ki = 91.3 nM) (over 43-fold selectivity for the KOR over the other opioid receptors). LY-2459989 is a fluorine-containing analogue and follow-up compound of LY-2795050, the first KOR-selective antagonist radiotracer. Relative to LY-2795050, LY-2459989 displays 4-fold higher affinity for the KOR and similar selectivity and also possesses greatly improved central nervous system permeation (brain levels were found to be 6-fold higher than those of LY-2795050). The drug appears to possess a short duration of action, with only 25% remaining in serum at 30 minutes post-injection in rhesus monkeys, making it an ideal agent for application in biomedical imaging, for instance in positron emission tomography (PET).
Earlier analogues of LY-2459989 besides LY-2795050 with similar actions and potential uses have also been described.
See also
κ-Opioid receptor § Antagonists
List of investigational antidepressants
References
Benzamides
Kappa-opioid receptor antagonists
Fluoroarenes
3-Pyridyl compounds
Pyrrolidines
Radiopharmaceuticals
Synthetic opioids
Experimental antidepressants | LY-2459989 | [
"Chemistry"
] | 380 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
49,156,968 | https://en.wikipedia.org/wiki/Endocannabinoid%20enhancer | An endocannabinoid enhancer (eCBE) is a type of cannabinoidergic drug that enhances the activity of the endocannabinoid system by increasing extracellular concentrations of endocannabinoids. Examples of different types of eCBEs include fatty acid amide hydrolase (FAAH) inhibitors, monoacylglycerol lipase (MAGL) inhibitors, and endocannabinoid transporter (eCBT) inhibitors (or "endocannabinoid reuptake inhibitors" ("eCBRIs")). An example of an actual eCBE is AM404, the active metabolite of the analgesic paracetamol (acetaminophen; Tylenol) and a dual FAAH inhibitor and eCBRI.
See also
Cannabinoid receptor
Synthetic cannabinoid
Cannabinoid receptor antagonist
References
Endocannabinoids | Endocannabinoid enhancer | [
"Chemistry"
] | 189 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
49,158,055 | https://en.wikipedia.org/wiki/Sulfide%20intrusion | In ecology, sulfide intrusion refers to an excess of sulfide molecules (S2-) in the soil that interfere with plant growth, often seagrass.
Seagrass bed sediment (soil) is typically anoxic, containing a reduced form of sulfur: hydrogen sulfide (H2S). H2S is a phytotoxin that results from anaerobic digestion, the decomposition of organic matter in the absence of oxygen. However, seagrass can persist in this environment because of physiological adaptations, as well as functional adaptations of other organisms in the ecosystem. For example, bivalves (clams) in the family Lucinidae host symbiotic bacteria that oxidize sulfides. Lucinid bivalves' gills house the bacteria, and the siphon supplies the bacteria and surrounding pore water with oxygenated water from above the sediment. Bacterial oxidation of the sulfides results in sulfates, reducing toxicity.
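The detoxifying step performed by the symbiotic bacteria can be summarized by the overall oxidation reaction below. This is a standard textbook summary of chemolithotrophic sulfide oxidation, not taken from the article itself; intermediates such as elemental sulfur are omitted:

```latex
% Overall bacterial oxidation of hydrogen sulfide to sulfate:
\mathrm{H_2S} + 2\,\mathrm{O_2} \longrightarrow \mathrm{SO_4^{2-}} + 2\,\mathrm{H^+}
```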
See also
Nutrient cycle
Redox
Sulfur cycle
Soil chemistry
Soil biology
Environmental microbiology
Microbial biodegradation
References
Ecology
Sulfur
Soil chemistry | Sulfide intrusion | [
"Chemistry",
"Biology"
] | 222 | [
"Soil chemistry",
"Ecology"
] |
49,158,860 | https://en.wikipedia.org/wiki/Discontinuous%20electrophoresis | Discontinuous electrophoresis (colloquially disc electrophoresis) is a type of polyacrylamide gel electrophoresis. It was developed by Ornstein and Davis. This method produces high resolution and good band definition. It is widely used technique for separating proteins according to size and charge.
Method
In this method, the gel is divided into two discontinuous parts, the resolving gel and the stacking gel, which have different concentrations of polyacrylamide; the lower-concentration stacking gel is cast on top of the higher-concentration resolving gel. The discontinuity is based on four parameters: gel structure, buffer pH, buffer ionic strength, and the nature of the ions in the gel and electrode buffers. The electrode buffer contains glycine, which carries very little net charge at the stacking-gel pH of 6.8 and therefore has low mobility. In the stacking gel the proteins are concentrated according to the principle of isotachophoresis, forming stacks in order of mobility (the stacking effect); here mobility depends on net charge, not on molecular size. The proteins migrate slowly toward the anode at constant speed until they reach the boundary of the resolving gel, where frictional resistance increases abruptly. Glycine is unaffected by this resistance, and at the resolving-gel pH of 9.5 it becomes highly charged and overtakes the proteins. The proteins, now in a homogeneous buffer, separate according to the principles of zone electrophoresis, with mobility depending on size as well as charge.
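The pH dependence of glycine's charge, which drives the stacking effect, can be sketched with the Henderson–Hasselbalch equation. The pKa values below are typical literature figures for glycine, not taken from the text, so this is an illustrative estimate only:

```python
# Net charge of glycine vs pH, illustrating why glycine trails the proteins
# at pH 6.8 (stacking gel) but becomes strongly charged at pH 9.5 (resolving gel).
# pKa values are typical literature figures (assumption, not from the article).
PKA_COOH, PKA_NH3 = 2.34, 9.60

def glycine_net_charge(ph: float) -> float:
    frac_nh3_plus = 1.0 / (1.0 + 10 ** (ph - PKA_NH3))    # protonated amine (+1)
    frac_coo_minus = 1.0 / (1.0 + 10 ** (PKA_COOH - ph))  # deprotonated acid (-1)
    return frac_nh3_plus - frac_coo_minus

print(f"pH 6.8: {glycine_net_charge(6.8):+.3f}")  # close to zero -> low mobility
print(f"pH 9.5: {glycine_net_charge(9.5):+.3f}")  # strongly negative -> overtakes proteins
```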
See also
Affinity electrophoresis
SDS-PAGE
Isotachophoresis
References
External links
Analysis of C14-labeled proteins by disc electrophoresis
Electrophoresis | Discontinuous electrophoresis | [
"Chemistry",
"Biology"
] | 350 | [
"Instrumental analysis",
"Molecular biology techniques",
"Electrophoresis",
"Biochemical separation processes"
] |
71,519,050 | https://en.wikipedia.org/wiki/Le%20Potier%27s%20vanishing%20theorem | In algebraic geometry, Le Potier's vanishing theorem is an extension of the Kodaira vanishing theorem, on vector bundles. The theorem states the following
In the case r = 1, with E an ample (or positive) line bundle on X, the theorem is equivalent to the Nakano vanishing theorem; an alternative proof has also been given.
Le Potier's vanishing theorem has been generalized to k-ample vector bundles, with the statement as follows:
A counterexample has also been given.
See also
vanishing theorem
Barth–Lefschetz theorem
Note
References
Further reading
External links
(OpenContent book)
Theorems in algebraic geometry
Theorems in complex geometry | Le Potier's vanishing theorem | [
"Mathematics"
] | 138 | [
"Theorems in algebraic geometry",
"Theorems in complex geometry",
"Theorems in geometry"
] |
71,522,116 | https://en.wikipedia.org/wiki/Lutetium%20vanadate | Lutetium vanadate is inorganic compound with ferromagnetic and semiconducting properties, with the chemical formula of Lu2V2O7 with the same structure as pyrochlore.
Preparation
Lutetium vanadate can be obtained by the reaction between lutetium oxide, vanadium trioxide and vanadium pentoxide at a high temperature (1400 °C) in an argon atmosphere with oxygen pressure of 2.0×10−5 bar.
2 Lu2O3 + V2O3 + V2O5 → 2 Lu2V2O7
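The stoichiometry above can be verified with a quick element count; a minimal sketch, with the compositions written out by hand:

```python
# Verify the element balance of  2 Lu2O3 + V2O3 + V2O5 -> 2 Lu2V2O7.
from collections import Counter

def total(side):
    """Sum element counts over (coefficient, composition) pairs."""
    out = Counter()
    for coeff, comp in side:
        for elem, n in comp.items():
            out[elem] += coeff * n
    return out

reactants = [(2, {"Lu": 2, "O": 3}), (1, {"V": 2, "O": 3}), (1, {"V": 2, "O": 5})]
products = [(2, {"Lu": 2, "V": 2, "O": 7})]

print(total(reactants) == total(products))  # True: Lu 4, V 4, O 14 on each side
```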
See also
Ytterbium-doped lutetium orthovanadate
References
Lutetium compounds
Vanadates
Vanadium(IV) compounds | Lutetium vanadate | [
"Chemistry"
] | 157 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
71,522,734 | https://en.wikipedia.org/wiki/Fluorohydride%20salt | Fluorohydride salts are ionic compounds containing a mixture of fluoride and hydride anions, generally with strongly electropositive metal cations. Unlike other types of mixed hydrides such as oxyhydrides, fluorohydride salts are typically solid solutions because of the similar sizes and identical charges of fluoride and hydride ions.
Examples
Fluorohydride salts typically contain one or more alkali or alkaline earth metals whose parent fluorides and hydrides are predominantly ionic. Examples with a single metal counterion include Li(H,F), Na(H,F), Mg(H,F)2, and Ca(H,F)2. More complex fluorohydride salts include the perovskite-structured NaMg(H,F)3 and MCa(H,F)3 (M = Rb or Cs), and the high-pressure pyrochlore compound NaCaMg2(H,F)7.
Applications
Fluorohydride salts have drawn interest as thermochemical energy-storage materials because their continuous solid-solution range enables the hydrogen-storage capacity to be balanced against thermal stability. Replacing hydride with fluoride ions in solid solution reduces the hydrogen-storage capacity of the compound but also reduces the hydrogen-dissociation pressure, enabling higher-temperature operation. With sodium fluorohydride, the maximum rate of hydrogen release (and thus pressure buildup) occurs at 443 °C for the solid solution with a 1:1 H:F ratio, versus 408 °C for pure sodium hydride. A cost comparison reveals that a fluorohydride salt with the composition NaMgH2F can be operated at lower cost than the parent hydride NaMgH3 and other magnesium-hydride-based materials, despite its lower hydrogen-storage capacity, because of the improved stability of the fluorohydride salt at high temperature.
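The capacity side of this trade-off is simple arithmetic on formula masses. A minimal sketch for the series NaMgH(3−x)Fx, using standard atomic masses (the compositions are illustrative, not taken from the text):

```python
# Gravimetric hydrogen capacity of NaMgH(3-x)Fx as fluoride replaces hydride,
# illustrating the capacity loss described in the text. Standard atomic masses.
M = {"Na": 22.99, "Mg": 24.305, "H": 1.008, "F": 19.00}

def h_capacity_wt_percent(x: float) -> float:
    mass = M["Na"] + M["Mg"] + (3 - x) * M["H"] + x * M["F"]
    return 100.0 * (3 - x) * M["H"] / mass

print(f"NaMgH3  : {h_capacity_wt_percent(0):.2f} wt% H")  # ~6.01 wt%
print(f"NaMgH2F : {h_capacity_wt_percent(1):.2f} wt% H")  # ~2.95 wt%
```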
References
Fluorides
Mixed anion compounds
Hydrides | Fluorohydride salt | [
"Physics",
"Chemistry"
] | 422 | [
"Matter",
"Mixed anion compounds",
"Salts",
"Fluorides",
"Ions"
] |
71,527,563 | https://en.wikipedia.org/wiki/Gadolinium%20diiodide | Gadolinium diiodide is an inorganic compound, with the chemical formula of GdI2. It is an electride, with the ionic formula of Gd3+(I−)2e−, and therefore not a true gadolinium(II) compound. It is ferromagnetic at 276 K with a saturation magnetization of 7.3 B; it exhibits a large negative magnetoresistance (~70%) at 7 T near room temperature. It can be obtained by reacting gadolinium and gadolinium(III) iodide at a high temperature:
Gd + 2 GdI3 → 3 GdI2
It can react with hydrogen at high temperature (800 °C) to give the gadolinium hydride iodide GdI2H0.97.
References
Gadolinium compounds
Iodides
Electrides
Ferromagnetic materials
Lanthanide halides | Gadolinium diiodide | [
"Physics",
"Chemistry"
] | 197 | [
"Electron",
"Inorganic compounds",
"Electrides",
"Ferromagnetic materials",
"Inorganic compound stubs",
"Salts",
"Materials",
"Matter"
] |
71,528,348 | https://en.wikipedia.org/wiki/ISO%2025119 | ISO 25119, titled "Tractors and machinery for agriculture and forestry – Safety-related parts of control systems", is an international standard for functional safety of electrical and/or electronic systems that are installed in tractors and machines used in agriculture and forestry, defined by the International Organization for Standardization (ISO).
Parts of ISO 25119
ISO 25119 consists of following parts:
Part 1: General principles for design and development
Part 2: Concept phase
Part 3: Series development, hardware and software
Part 4: Production, operation, modification and supporting processes
See also
IEC 61508
References
25119
Safety engineering | ISO 25119 | [
"Engineering"
] | 109 | [
"Safety engineering",
"Systems engineering"
] |
57,654,263 | https://en.wikipedia.org/wiki/Museum%20of%20Concrete | The Museum of Concrete is the first museum in Ukraine dedicated to concrete. The museum is in Odesa on the site of an enterprise that produces building materials. The opening of the museum was held on December 5, 2017. At the time of the opening, two thematic halls were organized. Since January 2018, two rooms have been added. As of May 2018, the museum collection numbered about 2500 exhibits, which are associated with concrete and the technology of its production.
Decor
The first hall is occupied by an exposition on the history of concrete and its development as a building material, from Ancient Rome to fiberglass. A separate area presents the wide range of reagents used in modern concrete production, such as hydrophobic additives and plasticizers.
The second hall recreates the atmosphere of the Soviet era and demonstrates the working conditions of the engineer of an industrial enterprise for the production of reinforced concrete. In the hall there is also a technical library on construction topics and reinforced concrete products.
Interactive hall
The hall contains interactive stands devoted, for example, to the production and storage of the reinforced concrete panels used in the construction of Khrushchev-era apartment blocks. A separate exposition presents laboratory measuring equipment and metal formwork, and visitors can directly examine concrete products and the materials used to make them.
Notes
External links
Museums in Odesa
Concrete | Museum of Concrete | [
"Engineering"
] | 279 | [
"Structural engineering",
"Concrete"
] |
57,657,930 | https://en.wikipedia.org/wiki/Ocean%20Nuclear | Ocean Nuclear () is a financial services provider for the nuclear energy industry. It provides capital market services for energy projects worldwide and has negotiated nuclear infrastructure projects in more than 20 countries.
Capital
Ocean Nuclear is currently raising capital to fund infrastructure projects in nuclear energy.
Global Nuclear Investment Summit
Ocean Nuclear co-organises the Global Nuclear Investment Summit (GNIS). The first was held in Beijing in January 2018. The second took place in London in June 2018, in partnership with the Financial Times.
References
Companies based in Shenzhen
Financial services companies of China
Nuclear industry organizations
Nuclear technology | Ocean Nuclear | [
"Physics",
"Engineering"
] | 114 | [
"Nuclear technology",
"Nuclear industry organizations",
"Nuclear organizations",
"Nuclear physics"
] |
57,663,296 | https://en.wikipedia.org/wiki/Jewell%20water%20filter | A Jewell water filter was a system of sand filters for filtering and treating water for drinking purposes that made use of gravity to allow water to percolate through a column of sand inside cylindrical cisterns that was widely used in the early twentieth century. They are named after Omar Hestrian Jewell (1 July 1842 - 19 June 1931) established Jewell Pure Water Company in Chicago in 1890 and managed later by two of his sons. Jewell water filters were used in many city water supply systems across the world and modified versions continue to be in use.
History
Slow sand filters were introduced at a point when the nature of the disease-causing organisms in typhoid and cholera had been established. Omar Jewell was a mechanical engineer who designed farm equipment. He took an interest in solving some of the problems involved in the filtration of water and established the O.H. Jewell Filter Company, financed by the Chicago-based waterworks dealers James B. Clow and Sons. Omar's son William H. Jewell graduated in 1887 from the College of Pharmacy, University of Illinois, and served as a chemist in the company. Another son, Ira, worked for a while with the company but sold his stock in 1900 to start a breakaway company, the I.H. Jewell Filter Company.
The first Jewell filters were built for use at Rock Island, Illinois in 1891. Jewell filters evolved over time to replace open sand-bed filters, which had two problems in the United States: freezing in winter, and algal growth in summer that gave the water an odour. Over time Omar and his sons held nearly 50 patents in water filtration between 1888 and 1900, including novel systems for combining filtering and chlorination. By 1896 nearly 21 plants in the United States of America used Jewell filters. In 1898 the O.H. Jewell Filter Company settled a patent-infringement claim over a coagulation process patented in 1884 by Isaiah Smith Hyatt, brother of John Wesley Hyatt, and owned by the New York Filter Manufacturing Company. Other filter companies emerged during the period, and there were numerous patent litigations and company mergers, with Jewell merging with the New York Filter Manufacturing Company in 1900 to become the single major New York Continental Jewell Filtration Company. The resulting company owned the licenses to most of the valuable patents of the day, and by 1909 it had nearly 360 plants in operation. Several were built in faraway places such as India, the largest being at Bethamangala near Kolar with a capacity of 2,000,000 gallons per day. The one in Warsaw was the largest in Europe in its time.
An outbreak of typhoid during the 1890s in the city of Pittsburgh led to calls for improved sanitation and improvements in the quality of the drinking-water supply. The Pittsburgh Filtration Commission was established in June 1896, and in 1899 it recommended a slow-sand filtration system. Once this became operational, cases of typhoid were greatly reduced. The commission wrote to several companies, but only two agreed to enter the tests: the Cumberland Manufacturing Company and the Morison-Jewell Filtration Company. The committee experimented with a Warren filter and a Jewell filter. Jewell filters underwent further bacteriological tests in Alexandria and Berlin, and their approval led to their wider adoption in numerous town water supplies in the early 1900s. The British troops at Alexandria brought typhoid deaths down to zero by 1905 with water treatments that included the use of Jewell filters. Jewell filters became commonplace in British Indian military towns in the plains after around 1910, and their construction had been standardized in engineering manuals.
Construction and working
Jewell filters, unlike their predecessors the open sand filters, were housed indoors and included mechanical action to turn and wash the sand; their key advantages were their ability to work in winter and to reduce bacterial counts. Water from a river or lake is first passed through sedimentation beds, where a coagulant such as alum is added. The water then enters the sand bed within the cylinders of the Jewell filters, and the coagulant forms a film on top of it. The sand beds are cleaned by stirring them with rotary arms and washing with water pumped under pressure from below. The Warren filter had a similar system for washing sand, though the design for carrying away the wash effluent differed between the Warren and Jewell filters. The system also included automatic control of the water inflow and devices to control the addition of chemicals such as lime and iron. The company later produced variations that used water under pressure rather than relying on gravity alone.
References
External links
Catalog from 1897
New York Continental Jewell Filtration Company 1913
Patents related to the company and the filters
Water treatment | Jewell water filter | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 960 | [
"Water treatment",
"Environmental engineering",
"Water technology",
"Water pollution"
] |
44,301,990 | https://en.wikipedia.org/wiki/MicroSolutions%20Backpack | MicroSolutions Backpack was a line of peripheral devices introduced in 1990 allowing users to attach a peripheral drive, namely hard drives, CD-ROM drives, and DVD±RW drives, to their system. When the original model was released, USB ports did not yet exist, so the drive plugged into a system's printer port. Backpacks could be daisy-chained and still allow for printer usage. Some models also offered audio capability via expansion. Later models introduced faster connectivity to the host system by means of a proprietary PC Card, and later USB. MicroSolutions was located in DeKalb, Illinois, USA.
References
Computer peripherals | MicroSolutions Backpack | [
"Technology"
] | 134 | [
"Computer peripherals",
"Components"
] |
42,855,872 | https://en.wikipedia.org/wiki/Primitive%20decorating | Primitive decorating is a style of decorating using primitive folk art style that is characteristic of a historic or early Americana time period, typically using elements with muted colors and a rough and simple look to them. Decorating in the primitive style can incorporate either true antiques or contemporary folk art. Contemporary primitive folk art is designed to have an old or antique look but created using new materials.
Examples
Examples of antiquing techniques used by primitive folk artists include tea or coffee staining and sanding down paint to create a worn, aged look. The style is sometimes referred to as country style.
Primitive decorating often features a number of recurring themes and characters including primitive angels, barnstars, primitive crows, primitive dolls & rag dolls, saltbox houses, sheep, willow trees, primitive wooden signs, and pottery. Primitive design focuses on furniture made between the mid-18th century and the early 19th century by farmers.
A number of magazines specialize in primitive decorating.
Gallery
See also
Country Living
Interior design
Willow Tree (figurines)
References
Interior design
Architectural design
American folk art | Primitive decorating | [
"Engineering"
] | 217 | [
"Design",
"Architectural design",
"Architecture"
] |
42,856,562 | https://en.wikipedia.org/wiki/Marble%20Canyon%20Dam | The Marble Canyon Dam, also known as the Redwall Dam, was a proposed dam on the Colorado River in Arizona, United States. The dam was intended to impound a relatively small reservoir in the central portion of Marble Canyon to develop hydroelectric power. Plans centered on two sites between miles 30 and 40 in the canyon. At one point a tunnel was proposed to a site just outside Grand Canyon National Park to develop the site's full power generation potential, reducing the Colorado River to a trickle through the park.
Although first proposed in the 1920s to generate hydroelectricity, work did not begin until the dam was incorporated as part of the U.S. Bureau of Reclamation (USBR)'s Pacific Southwest Water Plan in the 1940s for its Central Arizona component. Together with Bridge Canyon Dam, located at the lower end of the Grand Canyon, it would have provided the hydroelectric power necessary to lift Colorado River water from Lake Havasu to central Arizona's farms and cities, including Phoenix and Tucson. The two dams would have operated as "cash register" facilities to provide funds for future reclamation projects through the sale of cheap hydropower.
After a series of studies and site investigations, the dam was abandoned as a project in order to facilitate legislation creating the Central Arizona Project. The dam sites were incorporated into Marble Canyon National Monument in 1968, which was absorbed into Grand Canyon National Park in 1975.
Project description
First proposed in the 1940s by the United States Bureau of Reclamation, the location in Marble Canyon was regarded as one of the most remote and difficult-to-access dam sites in the United States, about below the rim of the canyon. The proposed dam was of moderate size, about high, using a thin-arch concrete design and retaining about 363,000 acre-feet. To keep the small reservoir free of silt, silt retention dams were needed on the tributary canyons. The retention dam in Paria Canyon was planned to be tall, and only wide at the base and wide at the crest, retaining 100 years of silt deposits. Despite these measures, the Marble Canyon reservoir was projected to have a lifespan of 104 years before filling with silt.
One appealing aspect of a Marble Canyon damsite was the potential to develop the Grand Canyon's hydroelectric potential without building a dam or reservoir in Grand Canyon National Park, by diverting about 90% of the Colorado River's waters at Marble Canyon into a tunnel, in diameter and capable of carrying about , to a site in the western part of the canyon in what was then Kaibab National Forest. The plan would provide about of hydraulic head for power generation. The power plant would produce about 6.5 billion kilowatt hours (KWh) per year – almost twice that of Hoover Dam. The tunnel would allow the hydroelectric system to effectively bypass Grand Canyon National Park, avoiding the construction of more dams within the park between Marble and Bridge. A flow of would be released from Marble Canyon at all times – a "scenic trickle" – for wildlife and recreational purposes in the Grand Canyon. This water would be released through a 22 megawatt power station at the base of the dam, with an annual energy production of 164 million kilowatt hours. Together, the two dams would utilize most of the elevation drop between Lee's Ferry and Lake Mead, which represents one of the biggest untapped hydroelectric potentials in the United States.
A number of accessory lakes or pools were projected in the adjacent Deer Creek drainage, or possibly at a dam on Kanab Creek. The power station at Deer Creek would have been at the head of the reservoir backed up by the never-built Bridge Canyon Dam. By the 1960s the tunnel project had been dropped in favor of a 600 MW power station at the dam's base, reducing the project's costs considerably.
The two dams would be operated as "cash register" power plants, meaning that sale of hydroelectricity would pay for their construction cost as well as provide funds for future reclamation projects. The power thus produced would be vital for pumping along the Central Arizona Project (CAP), which would lift water from the Colorado River near Lake Havasu to central and southern Arizona. The pumped water would reach as far as Tucson, which sits about higher than the Colorado River. The Bureau had even grander visions, too: eventually, this generated revenue would be necessary to fund water import projects from the Columbia River system to the Colorado River, which continues to face shortages due to over-appropriation of its flow. Thus, "hydropower at Bridge and Marble Canyon Dams was viewed as an instrument and not as a major goal in itself".
Exploration
The feasibility of Marble Canyon Dam was first explored in the 1920s by the United States Geological Survey, which designated sites for "an unbroken string of [hydroelectric] dams from the mountains of Colorado and Wyoming to the last canyons of the Colorado River along the California/Arizona border". Although the Bureau of Reclamation expressed interest at this time in developing hydroelectric generation at the Marble site, these plans were postponed in favor of building the Hoover Dam, which would provide a much greater storage capacity for irrigation and flood control. However, once Hoover was completed the Bureau once again set its sights on the Grand Canyon, which has the greatest hydroelectric potential of any canyon in the American Southwest. In the late 1940s, crews led by Bureau engineer Bert Lucas began to survey Marble Canyon and identified at least two suitable dam sites, one below Lee's Ferry and the second at . A road had to be built to connect the nearest highway to "one of the most inaccessible damsites ever explored by Bureau of Reclamation engineers".
In 1949 the Bureau opened bids for the construction of a cableway to transport workers and materials down into the remote canyon at Mile 32.8. Once the cableway was completed, a temporary camp was set up on the bottom of the gorge, and exploratory drilling work commenced in 1951. The work at the lower site at river mile 39.5 left two drifts measuring about by and about deep in the cliffs on either side of the river, together with 32 drill holes in the river bed. At the upper site, about upstream at river mile 32.8, there were a total of 35 drill holes and two drifts and deep. The work sites were accessed by an aerial tram from the rim. Although the lower site was preferred due to an additional of hydraulic head and potentially 45% larger reservoir than was possible at the upper site, it required more work to remediate weak rock joints. The project was not pursued by the Bureau of Reclamation in the 1950s. In 1960 the Arizona Power Authority became interested in the site, whose reservoir would have been entirely in Arizona, giving the state control over the waters and power generation. In 1963 the effort was endorsed by Bureau of Reclamation director Floyd Dominy. Investigations for the project's feasibility report extended into 1964. In 1965 a Bureau of Reclamation report concluded that solution cavities in the upstream limestone would prevent the reservoir from holding water. By 1968, as part of a bargain between the Arizona delegation to Congress, which supported the Central Arizona Project, and the California delegation, which opposed the project and the dam, the Marble Canyon project was dropped from the plan.
Abandonment
In 1969 President Lyndon B. Johnson proclaimed the establishment of Marble Canyon National Monument, effectively forestalling the possibility of a dam in Marble Canyon. In 1975 the monument was added to Grand Canyon National Park by the Grand Canyon Enlargement Act. The lower dam would have flooded a number of natural features, including Redwall Cavern and Vasey's Paradise. The upper dam was located just above the cavern, and was sometimes referred to as Redwall Dam.
References
External links
The Marble Canyon Damsite: Are We Safe from the Threat?, republished from a 1951 article in Arizona Highways on preliminary damsite exploration
Dams on the Colorado River
Dams in Arizona
Buildings and structures in Coconino County, Arizona
Colorado River Storage Project
United States Bureau of Reclamation proposed dams
Grand Canyon National Park | Marble Canyon Dam | [
"Engineering"
] | 1,618 | [
"Colorado River Storage Project"
] |
42,857,925 | https://en.wikipedia.org/wiki/Vicianose | Vicianose is a disaccharide.
Vicianin is a cyanogenic glycoside containing vicianose. The enzyme vicianin beta-glucosidase uses (R)-vicianin and water to produce mandelonitrile and vicianose.
The fruits of Viburnum dentatum appear blue. One of the major pigments is cyanidin 3-vicianoside, but the total mixture is very complex.
References
Disaccharides | Vicianose | [
"Chemistry"
] | 106 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
42,859,210 | https://en.wikipedia.org/wiki/Larssen%20sheet%20piling | Larssen sheet piling is a kind of sheet piling retaining wall. Segments with indented profiles (troughs) interlock to form a wall with alternating indents and outdents. The troughs increase resistance to bending. The segments are typically made of steel or another metal.
Larssen sheet piling was developed in 1906 by Tryggve Larssen, an engineer from Bremen, Germany. Its applications include piers, oil terminals, waste-storage facilities, shoreline protection, bridges, houses, buildings, dry docks and other construction sites, as well as strengthening pond banks, preventing collapse into excavation pits, and flood protection.
Construction
Lengths can reach 36 meters.
Each segment is flipped 180° versus the preceding segment. The segments lock together using a variety of interconnections.
The fully assembled structure is formed in a linear, circular, or other shape.
To reduce seepage through the interlocks, sealant is injected. Additionally, the piling may be combined with dowels, metal beams and pipes.
Metal dowels are made from hot-rolled or cold-rolled steel.
Design
Tongue Larssen - Tongue Larssens are up to 34 meters long and 80 centimeters wide. They have locks that allow one profile to be connected to another vertically to create a sealed metal diaphragm wall. Transverse profiles can be in the shape of the letters S, Z, L or Ω (omega), where the trough can be of varying depth.
Special Profile - Special profiles are long and narrow without locks. They usually have a wavy or trough shape to increase the resistance to bending.
Cantilever - Bending moments and shears are calculated under the assumption that the wall is a cantilever beam fixed at the bottom of the wall.
Anchored Wall Design - Bending moments, shears, and anchor force are calculated under the assumption that the wall is a beam with simple supports at the anchor elevation and at the bottom of the wall (the point where the wall passes beneath the surface of the ground). With the bottom of the wall at the penetration consistent with a factor of safety of 1, the lateral reaction at the bottom support will be zero and the lateral reaction at the upper support will be the horizontal component of the anchor force.
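The cantilever idealization above can be sketched numerically. A minimal example assuming a triangular Rankine active earth pressure on the retained side; the soil parameters are illustrative assumptions, not design values, and a real design would also account for passive resistance, water pressure, and safety factors:

```python
# Cantilever sheet-pile idealization: wall fixed at its base, loaded by a
# triangular (Rankine) active earth pressure over the retained height h.
import math

def rankine_ka(phi_deg: float) -> float:
    """Rankine active earth-pressure coefficient Ka = tan^2(45 - phi/2)."""
    return math.tan(math.radians(45.0 - phi_deg / 2.0)) ** 2

def cantilever_base_moment(gamma: float, phi_deg: float, h: float) -> float:
    """Max bending moment (kN*m per metre of wall) at the fixed base: the
    resultant Ka*gamma*h^2/2 acts at h/3 above the base -> M = Ka*gamma*h^3/6."""
    return rankine_ka(phi_deg) * gamma * h ** 3 / 6.0

# Example: retained height 4 m, soil unit weight 18 kN/m^3, friction angle 30 deg.
print(f"Ka = {rankine_ka(30):.3f}")                               # 0.333
print(f"M_max = {cantilever_base_moment(18, 30, 4):.1f} kN*m/m")  # 64.0
```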
Applications
Larssens are used in foundation pits, coastline strengthening, bridge construction, piers, tide control, flood protection, agriculture irrigation, water reservoir and other work requiring extremely strong support in a narrow geometry.
References
Building materials | Larssen sheet piling | [
"Physics",
"Engineering"
] | 492 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
42,860,543 | https://en.wikipedia.org/wiki/Sauropod%20neck%20posture | Sauropod neck posture is a subject occasionally debated among scientists, with some favoring postures closer to horizontal whilst others a more upright posture. Research has looked at various avenues of evidence and analysis including: attempting to reconstruct the neutral posture of their necks and estimating range of motion by studying the bones; attempting to reconstruct sauropod metabolism and the energy requirements of sustaining incredibly long necks in various postures; and comparing sauropod neck anatomy to those of living animals.
Biomechanics
The biomechanics of sauropod skeletons and necks can help determine at what angle the neck was positioned.
Flexibility
In 2013, a study led by Matthew J. Cobley and published in PLOS ONE focused on the flexibility of the necks of sauropods. They compared the necks of ostriches with sauropod genera to find out how flexible the necks really were. The study noted that previous biomechanics studies found the necks to have been positioned between the extremes of a vertical, and a downward slanted neck. In conclusion, the study found that sauropod neck flexibility should not be based on osteology alone, and if it is, the results should be used with caution. Even though there is a lack of preserved muscle tissue that would determine flexibility, sauropod necks were probably less flexible than previously thought.
In 2014, Mike P. Taylor analysed the flexibility of the necks of Apatosaurus and Diplodocus. He found that Cobley et al. had underestimated neck flexibility: once the cartilage between the joints was accounted for, the neck could have flexed far past 90°. However, Taylor noted that while the neck could flex above the vertical, the osteological neutral pose would have been around horizontal, with the habitual pose having the head held upwards in an alert posture.
Muscling
Sauropod necks were probably highly muscled to suit their feeding level. Brachiosaurus brancai (now Giraffatitan) was probably a high browser, so it would have been more muscled along the neck than other sauropods such as Diplodocus and Dicraeosaurus, which are interpreted as low browsers. The tail and limb length of B. brancai would also need to be greater, to balance out the inclined neck. However, the question of whether sauropods were endothermic or ectothermic plays a major part in how sauropods were muscled, as endotherms have considerably more intestines and stomach than ectotherms. The amount of gut needed could determine how much food was eaten by sauropods, and therefore at what elevation their heads were held.
Heart and metabolic stress
The upright posture of sauropod necks is seen by some as requiring implausibly high blood pressure and heart strength. A 2000 study conducted by Roger Seymour and Harvey Lillywhite found that the blood pressure needed to reach the head with an upright neck would have been fatal to an endotherm, and highly dangerous to an ectotherm, even with adequate heart musculature. A later study by Seymour concluded that it would have required half the animal's energy intake to pump the blood to the head. This would disfavor sauropods being high browsers; instead they would have held their necks lower while feeding than commonly portrayed.
The above work summarily dismisses the hypothesis of secondary hearts in the neck as evolutionarily implausible, assuming arterial valves could have no role without associated musculature.
Hypotheses
A few hypotheses have been generated to solve the dispute over how sauropods held their necks.
Horizontal pose
Kent Stevens and Michael Parrish have been the two main supporters of a horizontal neck posture. In 1999, they studied the genera Apatosaurus and Diplodocus, finding the habitual pose of the genera to be slightly declined. They claimed that both sauropods had necks much less flexible than previously thought, with the neck vertebrae of Diplodocus being more inflexible than Apatosaurus. Those two poses would suggest that the sauropods were ground feeders, instead of browsing off taller flora.
Later, in 2005, Stevens and Parrish studied the biomechanics of sauropod necks on a wider variety of sauropods, from the Jurassic: Apatosaurus, Diplodocus, Camarasaurus, Brachiosaurus, Dicraeosaurus, Cetiosaurus, and Euhelopus. All were stated to have a horizontal, or even declining neck.
However, in 2009 multiple flaws were found with this argument. Michael P. Taylor et al. compared the neck posture of sauropods to that of extant reptiles and other tetrapods, finding these animals' habitual poses to be entirely different from the assumptions of Stevens and Parrish. Stevens and Parrish's errors came mainly from their preconceptions about animals' habitual pose in life, which they simply assumed would naturally match the Osteological Neutral Pose (or ONP). Taylor et al. found the ONP to be, not the actual habitual pose of any examined animal, but an arbitrarily chosen midpoint between the two structural extremes of bone placement. ONP, then, is merely one place in the range of physically possible motion.
Incline pose
Another, more widely supported hypothesis about sauropod neck posture is that the necks were held at an incline, but not as upright as commonly shown. Daniela Schwartz et al. in 2006 published a study of the scapula and coracoids, sometimes fused into scapulocoracoids, of sauropod genera. Previously, sauropod shoulder girdles were thought to have been positioned horizontally along the torso, but Schwartz et al. found that the girdles should not have been positioned horizontally, and instead, they would have been angled at an average of 55° to 65°. The study reconstructed the skeletons of Diplodocus, Camarasaurus, and the titanosaur Opisthocoelicaudia, all known from a complete shoulder girdle, with the correct orientation of the scapulocoracoids. For Diplodocus, a 60° shoulder blade would have meant that the neck was more-or-less horizontal, not too much different from the horizontal pose. A juvenile Camarasaurus found by Gilmore was originally described as having the scapulocoracoid in "just the right place", but because it was oriented at an angle of 45°, Schwartz et al. criticized the stance. The skeleton reconstructed by Schwartz et al. with the corrected angle of the scapulocoracoid is similar to previous reconstructions of the genus by Osborn and Mook, and by Jensen. Opisthocoelicaudia was found to have had two possible poses, both with the scapulocoracoid angled at about 60°. Unlike with Camarasaurus, no previous reconstructions have restored Opisthocoelicaudia similarly.
Upright pose for some sauropods
Despite skepticism, Euhelopus and Brachiosaurus have been found on anatomical evidence to have held their necks at a vertical angle, something that had been treated as impossible for sauropods. Studies have concluded that the blood pressure and energy spent holding necks erect would have been too great to survive; yet Euhelopus and Brachiosaurus, at least, did so anyhow. The energy spent pumping blood to the head is interpreted as too great for most sauropods, but for genera that travelled often, as has been suggested for these two, an upright neck would actually have saved energy. The biomechanical evidence favours an upright neck when travelling between widely spaced resources. The study reaching this conclusion also tested how much energy would have been expended when walking and when standing, both with an upright neck, and found that roughly equal amounts would have been used. Elongated cervical ribs are skeletal evidence for a strong core to support the neck and limit its movement when walking. The study supports the idea that during times of drought and famine, an upright neck was crucial for these sauropods to survive.
References
Anatomy
Sauropods
Biomechanics | Sauropod neck posture | [
"Physics",
"Biology"
] | 1,713 | [
"Biomechanics",
"Mechanics",
"Anatomy"
] |
41,433,885 | https://en.wikipedia.org/wiki/Co-fired%20ceramic | Co-fired ceramic devices are monolithic, ceramic microelectronic devices where the entire ceramic support structure and any conductive, resistive, and dielectric materials are fired in a kiln at the same time. Typical devices include capacitors, inductors, resistors, transformers, and hybrid circuits. The technology is also used for robust assembly and packaging of electronic components in multi-layer packaging in the electronics industry, such as military electronics, MEMS, microprocessor and RF applications.
Co-fired ceramic devices are fabricated using a multilayer approach. The starting material is composite green tapes, consisting of ceramic particles mixed with polymer binders. The tapes are flexible and can be machined, for example, using cutting, milling, punching and embossing. Metal structures can be added to the layers, commonly using filling and screen printing. Individual tapes are then bonded together in a lamination procedure before the devices are fired in a kiln, where the polymer part of the tape is combusted and the ceramic particles sinter together, forming a hard and dense ceramic component.
Co-firing can be divided into low-temperature (LTCC) and high-temperature (HTCC) applications: low temperature means that the sintering temperature is below 1,000 °C, while high temperature is around 1,600 °C. The lower sintering temperature for LTCC materials is made possible through the addition of a glassy phase to the ceramic, which lowers its melting temperature.
Due to a multilayer approach based on glass-ceramics sheets, this technology offers the possibility to integrate into the LTCC body passive electrical components and conductor lines typically manufactured in thick-film technology. This differs from semiconductor device fabrication, where layers are processed serially, and each new layer is fabricated on top of previous layers.
History
Co-fired ceramics were first developed in the late 1950s and early 1960s to make more robust capacitors. The technology was later expanded in the 1960s to include multilayer structures similar to printed circuit boards.
Components
Hybrid circuits
LTCC technology is especially beneficial for RF and high-frequency applications. In RF and wireless applications, LTCC technology is also used to produce multilayer hybrid integrated circuits, which can include resistors, inductors, capacitors, and active components in the same package. In detail, these applications comprise mobile telecommunication devices (0.8–2 GHz), wireless local networks such as Bluetooth (2.4 GHz), and in-car radars (50–140 GHz, and 76 GHz). LTCC hybrids have a smaller initial ("non-recurring") cost as compared with ICs, making them an attractive alternative to ASICs for small-scale integration devices.
Inductors
Inductors are formed by printing conductor windings on ferrite ceramic tape. Depending on the desired inductance and current-carrying capabilities, anything from a partial winding to several windings may be printed on each layer. Under certain circumstances, a non-ferrite ceramic may be used. This is most common for hybrid circuits where capacitors, inductors, and resistors will all be present, and for high operating frequency applications where the hysteresis loop of the ferrite becomes an issue.
Resistors
Resistors may be embedded components or added to the top layer post-firing. Using screen printing, a resistor paste is printed onto the LTCC surface, from which resistances needed in the circuit are generated. When fired, these resistors deviate from their design value (±25%) and therefore require adjustment to meet the final tolerance. With Laser trimming one can achieve these resistances with different cut forms to the exact resistance value (±1%) desired. With this procedure, the need for additional discrete resistors can be reduced, thereby allowing a further miniaturization of the printed circuit boards.
Transformers
LTCC transformers are similar to LTCC inductors except that transformers contain two or more windings. To improve coupling between windings, transformers include a low-permeability dielectric material printed over the windings on each layer. The monolithic nature of LTCC transformers leads to a lower height than traditional wire-wound transformers. Also, the integrated core and windings mean these transformers are not prone to wire-break failures in high mechanical stress environments.
Sensors
Integration of thick-film passive components and 3D mechanical structures inside one module has permitted the fabrication of sophisticated 3D LTCC sensors, e.g. accelerometers.
Microsystems
The possibility of the fabrication of many various passive thick-film components, sensors and 3D mechanical structures enabled the fabrication of multilayer LTCC microsystems.
Using HTCC technology, microsystems for harsh environments, such as working temperatures of 1000 °C, have been realized.
Applications
LTCC substrates can be most beneficially used for the realization of miniaturized devices and robust substrates. LTCC technology allows the combination of individual layers with different functionalities such as high permittivity and low dielectric loss into a single multilayer laminated package and thereby to achieve multi-functionality in combination with a high integration and interconnection level. It also provides the possibility to fabricate three-dimensional, robust structures enabling in combination with thick film technology the integration of passive, electronic components, such as capacitors, resistors, and inductors into a single device.
Comparison
Low-temperature co-firing technology presents advantages compared to other packaging technologies including high-temperature co-firing: the ceramic is generally fired below 1,000 °C due to a special composition of the material. This permits the co-firing with highly conductive materials (silver, copper, and gold). LTCC also features the ability to embed passive elements, such as resistors, capacitors and inductors into the ceramic package, minimising the size of the completed module.
HTCC components generally consist of multilayers of alumina or zirconia with platinum, tungsten and moly-manganese metalization. The advantages of HTCC in packaging technology include mechanical rigidity and hermeticity, both of which are important in high-reliability and environmentally stressful applications. Another advantage is HTCC's thermal dissipation capability, which makes this a microprocessor packaging choice, especially for higher-performance processors.
Compared to LTCC, HTCC has higher-resistance conductive layers.
See also
Tape casting
Laser trimming
References
External links
Animation of LTCC production process
Packaging (microfabrication)
Electronics manufacturing | Co-fired ceramic | [
"Materials_science",
"Engineering"
] | 1,322 | [
"Electronic engineering",
"Packaging (microfabrication)",
"Electronics manufacturing",
"Microtechnology"
] |
41,436,516 | https://en.wikipedia.org/wiki/Fuss%E2%80%93Catalan%20number | In combinatorial mathematics and statistics, the Fuss–Catalan numbers are numbers of the form $A_m(p, r) = \frac{r}{mp + r}\binom{mp + r}{m}$.
They are named after N. I. Fuss and Eugène Charles Catalan.
In some publications this equation is sometimes referred to as Two-parameter Fuss–Catalan numbers or Raney numbers. The implication is that the single-parameter Fuss–Catalan numbers are obtained when r = 1.
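As a quick numerical check, the numbers $A_m(p, r) = \frac{r}{mp + r}\binom{mp + r}{m}$ can be computed directly with exact integer arithmetic. This is an illustrative sketch (the function name is mine, not part of the original article):

```python
from math import comb

def fuss_catalan(m: int, p: int, r: int) -> int:
    """Raney number A_m(p, r) = r / (m*p + r) * C(m*p + r, m); always an integer."""
    return r * comb(m * p + r, m) // (m * p + r)

# p = 2, r = 1 recovers the ordinary Catalan numbers
catalan = [fuss_catalan(m, 2, 1) for m in range(6)]
print(catalan)  # [1, 1, 2, 5, 14, 42]
```

The floor division is safe here because the Raney numbers are known to be integers.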
Uses
The Fuss–Catalan number represents the number of legal permutations or allowed ways of arranging a number of articles, restricted in some way. This means that they are related to the binomial coefficient. The key difference between Fuss–Catalan and the binomial coefficient is that there are no "illegal" arrangement permutations within the binomial coefficient, but there are within Fuss–Catalan. An example of legal and illegal permutations can be better demonstrated by a specific problem such as balanced brackets (see Dyck language).
A general problem is to count the number of balanced brackets (or legal permutations) that a string of m open and m closed brackets forms (total of 2m brackets). By legally arranged, the following rules apply:
For the sequence as a whole, the number of open brackets must equal the number of closed brackets
Working along the sequence, the number of open brackets must be greater than or equal to the number of closed brackets
As a numeric example, how many combinations can 3 pairs of brackets be legally arranged? From the binomial interpretation there are $\binom{6}{3} = 20$ ways of arranging 3 open and 3 closed brackets. However, there are fewer legal combinations than this when all of the above restrictions apply. Evaluating these by hand, there are 5 legal combinations, namely: ()()(); (())(); ()(()); (()()); ((())). This corresponds to the Fuss–Catalan formula when p=2, r=1, which is the Catalan number formula $\frac{1}{2m+1}\binom{2m+1}{m} = \frac{1}{7}\binom{7}{3} = 5$. By simple subtraction, there are $20 - 5 = 15$ illegal combinations. To further illustrate the subtlety of the problem, if one were to persist with solving the problem just using the binomial formula, it would be realised that the 2 rules imply that the sequence must start with an open bracket and finish with a closed bracket. This implies that there are $\binom{4}{2} = 6$ such combinations. This is inconsistent with the above answer of 5, and the missing combination is ())((), which is illegal and would complete the binomial interpretation.
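The hand count of 5 legal arrangements can be verified by brute force. This sketch (not from the original article) enumerates every distinct arrangement of 3 open and 3 closed brackets and keeps only those satisfying the two rules above:

```python
from itertools import permutations

def is_balanced(s: str) -> bool:
    """Check the bracket rules: every prefix has open >= closed, and totals are equal."""
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:        # a ')' appeared before its matching '('
            return False
    return depth == 0        # equal numbers of '(' and ')' overall

# all distinct arrangements of 3 open and 3 closed brackets
arrangements = {''.join(p) for p in permutations('((()))')}
legal = sorted(s for s in arrangements if is_balanced(s))
print(len(arrangements), len(legal))  # 20 5
```

Using a set collapses duplicate orderings of identical characters, leaving exactly the $\binom{6}{3} = 20$ distinct strings.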
Whilst the above is a concrete example of Catalan numbers, similar problems can be evaluated using the Fuss–Catalan formula:
Computer Stack: ways of arranging and completing a computer stack of instructions, where at each time step 1 instruction is processed and p new instructions arrive randomly, given that at the beginning of the sequence there are r instructions outstanding.
Betting: ways of losing all money when betting. A player has a total stake pot that allows them to make r bets, and plays a game of chance that pays p times the bet stake.
Tries: Calculating the number of order m tries on n nodes.
Special Cases
Below are listed a few formulae, along with a few notable special cases:
If p = 0, we recover the binomial coefficients: $A_m(0, r) = \binom{r}{m}$.
If p = 1, Pascal's Triangle appears, read along diagonals: $A_m(1, r) = \binom{m + r - 1}{m}$.
Examples
For subindex m = 0, 1, 2, 3, ... the numbers are:
Examples with p = 2, r = 1: 1, 1, 2, 5, 14, 42, 132, ..., known as the Catalan Numbers
Examples with p = 3, r = 1: 1, 1, 3, 12, 55, 273, 1428, ...
Examples with p = 4, r = 1: 1, 1, 4, 22, 140, 969, 7084, ...
Algebra
Recurrence
equation (1)
This means in particular that from
equation (2)
and
equation (3)
one can generate all other Fuss–Catalan numbers if is an integer.
Riordan (see references) obtains a convolution type of recurrence:
equation(4)
Generating Function
Paraphrasing the Densities of the Raney distributions paper, let the ordinary generating function with respect to the index m be defined as follows:
$B_p(z; r) = \sum_{m=0}^{\infty} A_m(p, r) z^m$, equation (5).
Looking at equations (1) and (2), when r = 1 it follows that
$B_p(z; 1) = 1 + z \left[ B_p(z; 1) \right]^p$, equation (6).
Also note this result can be derived by similar substitutions into the other formula representations, such as the Gamma ratio representation at the top of this article. Using (6) and substituting into (5), an equivalent representation expressed as a generating function can be formulated as
$B_p(z; r) = \left[ B_p(z; 1) \right]^r$.
Finally, extending this result by using Lambert's equivalence
.
The following result can be derived for the ordinary generating function for all the Fuss-Catalan sequences.
.
Recursion Representation
Recursion forms of this are as follows:
The most obvious form is:
Also, a less obvious form is
Alternate Representations
In some problems it is easier to use different formula configurations or variations. Below are two examples using just the binomial function:
These variants can be converted into a product, Gamma or Factorial representations too.
See also
Combinatorics
Statistics
Binomial coefficient
Binomial Distribution
Catalan number
Dyck language
Pascal's triangle
References
Factorial and binomial topics
Enumerative combinatorics | Fuss–Catalan number | [
"Mathematics"
] | 1,007 | [
"Factorial and binomial topics",
"Enumerative combinatorics",
"Combinatorics"
] |
41,437,742 | https://en.wikipedia.org/wiki/Heap%27s%20algorithm | Heap's algorithm generates all possible permutations of objects. It was first proposed by B. R. Heap in 1963. The algorithm minimizes movement: it generates each permutation from the previous one by interchanging a single pair of elements; the other elements are not disturbed. In a 1977 review of permutation-generating algorithms, Robert Sedgewick concluded that it was at that time the most effective algorithm for generating permutations by computer.
The sequence of permutations of n objects generated by Heap's algorithm is the beginning of the sequence of permutations of n+1 objects. So there is one infinite sequence of permutations generated by Heap's algorithm.
Details of the algorithm
For a collection containing n different elements, Heap found a systematic method for choosing at each step a pair of elements to switch in order to produce every possible permutation of these elements exactly once.
Described recursively as a decrease and conquer method, Heap's algorithm operates at each step on the initial k elements of the collection. Initially k = n and thereafter k < n. Each step generates the k! permutations that end with the same n - k final elements. It does this by calling itself once with the kth element unaltered and then k - 1 times with the kth element exchanged for each of the initial k - 1 elements. The recursive calls modify the initial k - 1 elements and a rule is needed at each iteration to select which will be exchanged with the last. Heap's method says that this choice can be made by the parity of the number of elements operated on at this step. If k is even, then the final element is iteratively exchanged with the element at each index i. If k is odd, the final element is always exchanged with the first.
// Output the k! permutations of A in which the first k elements are permuted in all ways.
// To get all permutations of A, use k := length of A.
//
// If k > length of A, this will try to access A out of bounds.
// If k <= 0 there will be no output (empty array has no permutations)
procedure permutations(k : integer, A : array of any):
if k = 1 then
output(A)
else
// permutations with last element fixed
permutations(k - 1, A)
// permutations with last element swapped out
for i := 0; i < k-1; i += 1 do
if k is even then
swap(A[i], A[k-1])
else
swap(A[0], A[k-1])
end if
permutations(k - 1, A)
end for
end if
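The recursive pseudocode above can be transcribed directly into Python; this is an illustrative sketch (the generator wrapper and names are mine, not part of the original) whose output can be checked for completeness:

```python
def heap_permutations(a):
    """Yield every permutation of list a in Heap's order, mutating a in place."""
    def permute(k):
        if k == 1:
            yield tuple(a)
        else:
            yield from permute(k - 1)            # permutations with last element fixed
            for i in range(k - 1):               # swap another element into the last slot
                if k % 2 == 0:
                    a[i], a[k - 1] = a[k - 1], a[i]
                else:
                    a[0], a[k - 1] = a[k - 1], a[0]
                yield from permute(k - 1)
    if a:
        yield from permute(len(a))

print(list(heap_permutations([1, 2, 3])))
# [(1, 2, 3), (2, 1, 3), (3, 1, 2), (1, 3, 2), (2, 3, 1), (3, 2, 1)]
```

Note that each output differs from the previous one by a single interchange of two elements, as the algorithm promises.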
One can also write the algorithm in a non-recursive format.
procedure permutations(n : integer, A : array of any):
// c is an encoding of the stack state.
// c[k] encodes the for-loop counter for when permutations(k - 1, A) is called
c : array of int
for i := 0; i < n; i += 1 do
c[i] := 0
end for
output(A)
// i acts similarly to a stack pointer
i := 1;
while i < n do
if c[i] < i then
if i is even then
swap(A[0], A[i])
else
swap(A[c[i]], A[i])
end if
output(A)
// Swap has occurred ending the while-loop. Simulate the increment of the while-loop counter
c[i] += 1
// Simulate recursive call reaching the base case by bringing the pointer to the base case analog in the array
i := 1
else
// Calling permutations(i+1, A) has ended as the while-loop terminated. Reset the state and simulate popping the stack by incrementing the pointer.
c[i] := 0
i += 1
end if
end while
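The non-recursive control flow can likewise be transcribed into Python (again, the names are my own); for the small cases checked here it produces exactly the same sequence as the recursive form:

```python
def heap_permutations_iterative(a):
    """Yield permutations of list a using the stack-encoding array c."""
    n = len(a)
    c = [0] * n                  # c[i] encodes the loop counter for "level" i
    yield tuple(a)
    i = 1                        # acts like a stack pointer
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                a[0], a[i] = a[i], a[0]
            else:
                a[c[i]], a[i] = a[i], a[c[i]]
            yield tuple(a)
            c[i] += 1
            i = 1                # "recurse" back down to the base case
        else:
            c[i] = 0             # reset this level and "pop" the stack
            i += 1

print(list(heap_permutations_iterative([1, 2, 3])))
# [(1, 2, 3), (2, 1, 3), (3, 1, 2), (1, 3, 2), (2, 3, 1), (3, 2, 1)]
```

The parity test flips relative to the recursive version because the loop index i here corresponds to k - 1 there.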
Proof
In this proof, we'll use the implementation below as Heap's algorithm, as it makes the analysis easier and certain patterns can be easily illustrated. While it is not optimal (it does not minimize moves, which is described in the section below), the implementation is correct and will produce all permutations.
// Output the k! permutations of A in which the first k elements are permuted in all ways.
// To get all permutations of A, use k := length of A.
//
// If k > length of A, this will try to access A out of bounds.
// If k <= 0 there will be no output (empty array has no permutations)
procedure permutations(k : integer, A : array of any):
if k = 1 then
output(A)
else
for i := 0; i < k; i += 1 do
permutations(k - 1, A)
if k is even then
swap(A[i], A[k-1])
else
swap(A[0], A[k-1])
end if
end for
end if
Claim: If array A has length n, then permutations(n, A) will result in either A being unchanged, if n is odd, or, if n is even, A being rotated to the right by 1 (last element shifted in front of the other elements).
Base: If array A has length 1, then permutations(1, A) will output A and stop, so A is unchanged. Since 1 is odd, this is what was claimed, so the claim is true for arrays of length 1.
Induction: If the claim is true for arrays of length l ≥ 1, then we show that the claim is true for arrays of length l+1 (together with the base case this proves that the claim is true for arrays of all lengths). Since the claim depends on whether l is odd or even, we prove each case separately.
If l is odd, then, by the induction hypothesis, for an array A of length l, permutations(l, A) will not change A, and for the claim to hold for arrays of length l+1 (which is even), we need to show that permutations(l+1, A) rotates A to the right by 1 position. Doing permutations(l+1, A) will first do permutations(l, A) (leaving A unchanged since l is odd) and then in each iteration i of the for-loop, swap the elements in positions i and l (the last position) in A. The first swap puts element l (the last element) in position 0, and element 0 in position l. The next swap puts the element in position l (where the previous iteration put original element 0) in position 1 and element 1 in position l. In the final iteration, the swap puts element l-1 in position l, and the element in position l (where the previous iteration put original element l-2) in position l-1. To illustrate the above, look below for the case l+1 = 4.
1,2,3,4 ... original array
1,2,3,4 ... 1st iteration (permute subarray)
4,2,3,1 ... 1st iteration (swap 1st element into last position)
4,2,3,1 ... 2nd iteration (permute subarray)
4,1,3,2 ... 2nd iteration (swap 2nd element into last position)
4,1,3,2 ... 3rd iteration (permute subarray)
4,1,2,3 ... 3rd iteration (swap 3rd element into last position)
4,1,2,3 ... 4th iteration (permute subarray)
4,1,2,3 ... 4th iteration (swap 4th element into last position)
The altered array is a rotated version of the original
If l is even, then, by the induction hypothesis, for an array A of length l, permutations(l, A) rotates A to the right by 1 position, and for the claim to hold for arrays of length l+1 (which is odd), we need to show that permutations(l+1, A) leaves A unchanged. Doing permutations(l+1, A) will in each iteration of the for-loop, first do permutations(l, A) (rotating the first l elements of A by 1 position since l is even) and then, swap the elements in positions 0 and l (the last position) in A. Rotating the first l elements and then swapping the first and last elements is equivalent to rotating the entire array. Since there are as many iterations of the loop as there are elements in the array, the entire array is rotated until each element returns to where it started. To illustrate the above, look below for the case l+1 = 5.
1,2,3,4,5 ... original array
4,1,2,3,5 ... 1st iteration (permute subarray, which rotates it)
5,1,2,3,4 ... 1st iteration (swap)
3,5,1,2,4 ... 2nd iteration (permute subarray, which rotates it)
4,5,1,2,3 ... 2nd iteration (swap)
2,4,5,1,3 ... 3rd iteration (permute subarray, which rotates it)
3,4,5,1,2 ... 3rd iteration (swap)
1,3,4,5,2 ... 4th iteration (permute subarray, which rotates it)
2,3,4,5,1 ... 4th iteration (swap)
5,2,3,4,1 ... 5th iteration (permute subarray, which rotates it)
1,2,3,4,5 ... 5th iteration (swap)
The final state of the array is in the same order as the original
The induction proof for the claim is now complete, which will now lead to why Heap's Algorithm creates all permutations of array A. Once again we will prove by induction the correctness of Heap's Algorithm.
Basis: Heap's Algorithm trivially permutes an array of size 1, as outputting A is the one and only permutation of A.
Induction: Assume Heap's Algorithm permutes an array of size n. Using the results from the previous proof, every element of A will be in the "buffer" once when the first n elements are permuted. Because permutations of an array can be made by altering some array A through the removal of an element x from A then tacking on x to each permutation of the altered array, it follows that Heap's Algorithm permutes an array of size n+1, for the "buffer" in essence holds the removed element, being tacked onto the permutations of the subarray of size n. Because each iteration of Heap's Algorithm has a different element of A occupying the buffer when the subarray is permuted, every permutation is generated, as each element of A has a chance to be tacked onto the permutations of the array without the buffer element.
Frequent mis-implementations
It is tempting to simplify the recursive version given above by reducing the instances of recursive calls. For example, as:
procedure permutations(k : integer, A : array of any):
if k = 1 then
output(A)
else
// Recursively call once for each k
for i := 0; i < k; i += 1 do
permutations(k - 1, A)
// swap choice dependent on parity of k (even or odd)
if k is even then
// no-op when i == k-1
swap(A[i], A[k-1])
else
// XXX incorrect additional swap when i==k-1
swap(A[0], A[k-1])
end if
end for
end if
This implementation will succeed in producing all permutations but does not minimize movement. As the recursive call-stacks unwind, it results in additional swaps at each level. Half of these will be no-op swaps of A[i] and A[k-1] where i == k-1, but when k is odd, it results in an additional swap of A[0] with A[k-1].
These additional swaps significantly alter the order of the prefix elements.
The additional swaps can be avoided by either adding an additional recursive call before the loop and looping k-1 times (as above) or looping k times and checking that i is less than k-1, as in:
procedure permutations(k : integer, A : array of any):
if k = 1 then
output(A)
else
// Recursively call once for each k
for i := 0; i < k; i += 1 do
permutations(k - 1, A)
// avoid swap when i==k-1
if (i < k - 1)
// swap choice dependent on parity of k
if k is even then
swap(A[i], A[k-1])
else
swap(A[0], A[k-1])
end if
end if
end for
end if
The choice is primarily aesthetic, but the latter results in checking the value of i twice as often.
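The cost of the superfluous swaps can be made concrete by instrumenting both variants: a faithful implementation performs exactly n! - 1 swaps (one per new permutation), while the simplified loop performs noticeably more. This sketch (all names mine, not part of the original) counts swap operations, including the no-op self-swaps, for the naive and corrected k-iteration variants:

```python
def count_swaps(n: int, fixed: bool) -> int:
    """Count swap operations executed by the naive (fixed=False) or
    corrected (fixed=True) k-iteration variant of Heap's algorithm."""
    a = list(range(n))
    swaps = 0
    def permute(k):
        nonlocal swaps
        if k == 1:
            return
        for i in range(k):
            permute(k - 1)
            if fixed and i == k - 1:   # corrected variant skips the final swap
                continue
            j = i if k % 2 == 0 else 0
            a[j], a[k - 1] = a[k - 1], a[j]
            swaps += 1
    permute(n)
    return swaps

print(count_swaps(4, fixed=True), count_swaps(4, fixed=False))  # 23 40
```

For n = 4 the corrected variant performs 23 = 4! - 1 swaps, the minimum possible, while the naive variant performs 40.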
See also
Steinhaus–Johnson–Trotter algorithm
References
Combinatorial algorithms
Permutations | Heap's algorithm | [
"Mathematics"
] | 2,797 | [
"Combinatorial algorithms",
"Functions and mappings",
"Permutations",
"Mathematical objects",
"Computational mathematics",
"Combinatorics",
"Mathematical relations"
] |
41,439,747 | https://en.wikipedia.org/wiki/Pseudohypoxia | Pseudohypoxia refers to a condition that mimics hypoxia, by having sufficient oxygen yet impaired mitochondrial respiration due to a deficiency of necessary co-enzymes, such as NAD+ and TPP. The increased cytosolic ratio of free NADH/NAD+ in cells (more NADH than NAD+) can be caused by diabetic hyperglycemia and by excessive alcohol consumption. Low levels of TPP results from thiamine deficiency.
The insufficiency of available NAD+ or TPP produces symptoms similar to hypoxia (lack of oxygen), because they are needed primarily by the Krebs cycle for oxidative phosphorylation, and NAD+ to a lesser extent in anaerobic glycolysis. Oxidative phosphorylation and glycolysis are vital as these metabolic pathways produce ATP, which is the molecule that releases energy necessary for cells to function.
As there is not enough NAD+ or TPP for aerobic glycolysis nor fatty acid oxidation, anaerobic glycolysis is excessively used which turns glycogen and glucose into pyruvate, and then the pyruvate into lactate (fermentation). Fermentation also generates a small amount of NAD+ from NADH, but only enough to keep anaerobic glycolysis going. The excessive use of anaerobic glycolysis disrupts the lactate/pyruvate ratio causing lactic acidosis. The decreased pyruvate inhibits gluconeogenesis and increases release of fatty acids from adipose tissue. In the liver, the increase of plasma free fatty acids results in increased ketone production (which in excess causes ketoacidosis). The increased plasma free fatty acids, increased acetyl-CoA (accumulating from reduced Krebs cycle function), and increased NADH all contribute to increased fatty acid synthesis within the liver (which in excess causes fatty liver disease).
Pseudohypoxia also leads to hyperuricemia, as elevated lactic acid inhibits uric acid secretion by the kidney, while the energy shortage from inhibited oxidative phosphorylation leads to increased turnover of adenosine nucleotides by the myokinase reaction and purine nucleotide cycle.
Research has shown that declining levels of NAD+ during aging cause pseudohypoxia, and that raising nuclear NAD+ in old mice reverses pseudohypoxia and metabolic dysfunction, thus reversing the aging process. It is expected that human NAD trials will begin in 2014.
Pseudohypoxia is a feature commonly noted in poorly-controlled diabetes.
Reactions
In poorly controlled diabetes, as insulin is insufficient, glucose cannot enter the cell and remains high in the blood (hyperglycemia). The polyol pathway converts glucose into fructose, which can then enter the cell without requiring insulin. The oxidative damage done to cells in diabetes damages DNA and causes poly (ADP ribose) polymerases or PARPs to be activated, such as PARP1. Both processes reduce the available NAD+.
In ethanol catabolism, ethanol is converted into acetate, consuming NAD+. When alcohol is consumed in small quantities, the NADH/NAD+ ratio remains in balance enough for the acetyl-CoA (converted from acetate) to be used for oxidative phosphorylation. However, even moderate amounts of alcohol (1-2 drinks) results in more NADH than NAD+, which inhibits oxidative phosphorylation. In chronic excessive alcohol consumption, the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase.
Diabetes
Polyol pathway
D-glucose + NADPH → Sorbitol + NADP+ (catalyzed by aldose reductase)
Sorbitol + NAD+ → D-fructose + NADH (catalyzed by sorbitol dehydrogenase)
Poly (ADP-ribose) polymerase-1
Protein + NAD+ → Protein + ADP-ribose + nicotinamide (catalyzed by PARP1)
Ethanol catabolism
Alcohol dehydrogenase
Ethanol + NAD+ → Acetaldehyde + NADH + H+ (catalyzed by alcohol dehydrogenase)
Acetaldehyde + NAD+ → Acetate + NADH + H+ (catalyzed by aldehyde dehydrogenase)
MEOS
Ethanol + NADPH + H+ + O2 → Acetaldehyde + NADP+ + 2H2O (catalyzed by CYP2E1)
Acetaldehyde + NAD+ → Acetate + NADH + H+ (catalyzed by aldehyde dehydrogenase)
See also
Hypoxia (medical)
Hypoxia (disambiguation) - list under Hypoxia (medical) e.g. Intrauterine hypoxia
Bioenergetic systems - metabolic pathways of producing ATP
Metabolic acidosis
References
Cell biology
Medical signs
Geriatrics
Senescence | Pseudohypoxia | [
"Chemistry",
"Biology"
] | 1,075 | [
"Senescence",
"Cell biology",
"Metabolism",
"Cellular processes"
] |
53,031,776 | https://en.wikipedia.org/wiki/Stochastic%20thermodynamics |
Overview
When a microscopic machine (e.g. a MEMS device) performs useful work it generates heat and entropy as a byproduct of the process; however, it is also predicted that such a machine will operate in "reverse" or "backwards" over appreciably short periods. That is, heat energy from the surroundings will be converted into useful work. For larger engines, this would be described as a violation of the second law of thermodynamics, as entropy is consumed rather than generated. Loschmidt's paradox states that in a time-reversible system, for every trajectory there exists a time-reversed anti-trajectory. As the entropy production of a trajectory and its anti-trajectory are of identical magnitude but opposite sign, then, so the argument goes, one cannot prove that entropy production is positive.
For a long time, exact results in thermodynamics were only possible in linear systems capable of reaching equilibrium, leaving other questions like the Loschmidt paradox unsolved. During the last few decades, fresh approaches have revealed general laws applicable to non-equilibrium systems described by nonlinear equations, pushing the range of exact thermodynamic statements beyond the realm of traditional linear solutions. These exact results are particularly relevant for small systems in which appreciable (typically non-Gaussian) fluctuations occur. Thanks to stochastic thermodynamics, it is now possible to accurately predict distribution functions of thermodynamic quantities relating to exchanged heat, applied work or entropy production for these systems.
Fluctuation theorem
The mathematical resolution to Loschmidt's paradox is called the (steady state) fluctuation theorem (FT), which is a generalisation of the second law of thermodynamics. The FT shows that as a system gets larger or the trajectory duration becomes longer, entropy-consuming trajectories become more unlikely, and the expected second law behaviour is recovered.
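This recovery of second-law behaviour can be illustrated numerically. Assuming Gaussian entropy-production statistics consistent with the detailed fluctuation theorem (variance equal to twice the mean, in units of the Boltzmann constant), the fraction of entropy-consuming trajectories shrinks as the mean grows with system size or trajectory duration. This is a hedged toy model, not a simulation of any particular system:

```python
import math
import random

random.seed(1)

def entropy_consuming_fraction(mean_sigma, n=100_000):
    # Gaussian entropy production with variance = 2 * mean satisfies the
    # detailed fluctuation theorem P(s)/P(-s) = exp(s), with s in k_B units.
    sd = math.sqrt(2.0 * mean_sigma)
    return sum(random.gauss(mean_sigma, sd) < 0.0 for _ in range(n)) / n

short_frac = entropy_consuming_fraction(0.5)  # ~31% of trajectories consume entropy
long_frac = entropy_consuming_fraction(8.0)   # ~2%: second-law behaviour recovered
```

As the mean entropy production increases, negative-entropy trajectories become exponentially rare, which is exactly the statement of the FT in the large-system or long-time limit.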
The FT was first put forward by Evans, Cohen and Morriss, and much of the work done in developing and extending the theorem was accomplished by theoreticians and mathematicians interested in nonequilibrium statistical mechanics.
The first observation and experimental proof of Evans's fluctuation theorem (FT) was performed in 2002, using a colloidal particle in an optical trap.
Jarzynski equality
Seifert writes:
Jarzynski proved a remarkable relation which allows to express the free energy difference between two equilibrium systems by a nonlinear average over the work required to drive the system in a non-equilibrium process from one state to the other. By comparing probability distributions for the work spent in the original process with the time-reversed one, Crooks found a “refinement” of the Jarzynski relation (JR), now called the Crooks fluctuation theorem. Both this relation and another refinement of the JR, the Hummer-Szabo relation, became particularly useful for determining free energy differences and landscapes of biomolecules. These relations are the most prominent ones within a class of exact results (some of which found even earlier and then rediscovered) valid for non-equilibrium systems driven by time-dependent forces. A close analogy to the JR, which relates different equilibrium states, is the Hatano-Sasa relation that applies to transitions between two different non-equilibrium steady states.
This is shown to be a special case of a more general relation.
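A quick Monte Carlo illustration of the Jarzynski relation, exp(-ΔF/kT) = ⟨exp(-W/kT)⟩: for Gaussian work statistics with variance equal to twice the mean (in units of kT), the free energy difference is exactly zero even though the average work is positive. The Gaussian assumption is a standard near-equilibrium idealization for this sketch, not part of the general theorem:

```python
import math
import random

random.seed(0)
mean_w, var_w = 2.0, 4.0   # work in kT units; var = 2 * mean implies dF = 0
n = 200_000
work = [random.gauss(mean_w, math.sqrt(var_w)) for _ in range(n)]

# Jarzynski estimator: exp(-dF) = <exp(-W)>; rare low-work trajectories
# dominate the exponential average.
delta_f = -math.log(sum(math.exp(-w) for w in work) / n)

avg_w = sum(work) / n   # the plain average, ~2 kT, overestimates dF = 0
```

The gap between ⟨W⟩ and ΔF is the dissipated work; the exponential average recovers ΔF only because rare trajectories with W below ΔF contribute heavily, which is also why the estimator converges slowly far from equilibrium.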
Stochastic energetics
History
Seifert writes:
Classical thermodynamics, at its heart, deals with general laws governing the transformations of a system, in particular, those involving the exchange of heat, work and matter with an environment. As a central result, total entropy production is identified that in any such process can never decrease, leading, inter alia, to fundamental limits on the efficiency of heat engines and refrigerators.
The thermodynamic characterisation of systems in equilibrium got its microscopic justification from equilibrium statistical mechanics which states that for a system in contact with a heat bath the probability to find it in any specific microstate is given by the Boltzmann factor. For small deviations from equilibrium, linear response theory allows to express transport properties caused by small external fields through equilibrium correlation functions. On a more phenomenological level, linear irreversible thermodynamics provides a relation between such transport coefficients and entropy production in terms of forces and fluxes. Beyond this linear response regime, for a long time, no universal exact results were available.
During the last 20 years fresh approaches have revealed general laws applicable to non-equilibrium systems, thus pushing the range of validity of exact thermodynamic statements beyond the realm of linear response deep into the genuine non-equilibrium region. These exact results, which become particularly relevant for small systems with appreciable (typically non-Gaussian) fluctuations, generically refer to distribution functions of thermodynamic quantities like exchanged heat, applied work or entropy production.
Stochastic thermodynamics combines the stochastic energetics introduced by Sekimoto with the idea that entropy can consistently be assigned to a single fluctuating trajectory.
Open research
Quantum stochastic thermodynamics
Stochastic thermodynamics can be applied to driven (i.e. open) quantum systems whenever the effects of quantum coherence can be ignored. The dynamics of an open quantum system is then equivalent to a classical stochastic one. However, this is sometimes at the cost of requiring unrealistic measurements at the beginning and end of a process.
Understanding non-equilibrium quantum thermodynamics more broadly is an important and active area of research. The efficiency of some computing and information theory tasks can be greatly enhanced when using quantum correlated states; quantum correlations can be used not only as a valuable resource in quantum computation, but also in the realm of quantum thermodynamics. New types of quantum devices in non-equilibrium states function very differently from their classical counterparts. For example, it has been theoretically shown that non-equilibrium quantum ratchet systems function far more efficiently than predicted by classical thermodynamics. It has also been shown that quantum coherence can be used to enhance the efficiency of systems beyond the classical Carnot limit. This is because it could be possible to extract work, in the form of photons, from a single heat bath. Quantum coherence can be used in effect to play the role of Maxwell's demon, though the broader information-theoretic interpretation of the second law of thermodynamics is not violated.
Quantum versions of stochastic thermodynamics have been studied for some time, and the past few years have seen a surge of interest in this topic. Quantum mechanics involves profound issues around the interpretation of reality (e.g. the Copenhagen interpretation, many-worlds and de Broglie–Bohm theory are all competing interpretations that try to explain the unintuitive results of quantum theory). It is hoped that by trying to specify the quantum-mechanical definition of work, dealing with open quantum systems, analyzing exactly solvable models, or proposing and performing experiments to test non-equilibrium predictions, important insights into the interpretation of quantum mechanics and the true nature of reality will be gained.
Applications of non-equilibrium work relations, like the Jarzynski equality, have recently been proposed for detecting quantum entanglement and for improving optimization problems (minimizing or maximizing a multivariable cost function) via quantum annealing.
Active baths
Until recently, thermodynamics only considered systems coupled to a thermal bath and, therefore, satisfying Boltzmann statistics. However, some systems, such as living matter, do not satisfy these conditions and are far from equilibrium; for such systems, fluctuations are expected to be non-Gaussian.
Active particle systems are able to take energy from their environment and drive themselves far from equilibrium. An important example of active matter is constituted by objects capable of self propulsion. Thanks to this property, they feature a series of novel behaviours that are not attainable by matter at thermal equilibrium, including, for example, swarming and the emergence of other collective properties. A passive particle is considered in an active bath when it is in an environment where a wealth of active particles are present. These particles will exert nonthermal forces on the passive object, so that it will experience non-thermal fluctuations and will behave very differently from a passive Brownian particle in a thermal bath. The presence of an active bath can significantly influence the microscopic thermodynamics of a particle. Experiments have suggested that the Jarzynski equality does not hold in some cases due to the presence of non-Boltzmann statistics in active baths. This observation points towards a new direction in the study of non-equilibrium statistical physics and stochastic thermodynamics, in which the environment itself is also far from equilibrium.
Active baths are a question of particular importance in biochemistry. For example, biomolecules within cells are coupled with an active bath due to the presence of molecular motors within the cytoplasm, which leads to striking and largely not yet understood phenomena such as the emergence of anomalous diffusion (Barkai et al., 2012). Also, protein folding might be facilitated by the presence of active fluctuations (Harder et al., 2014b) and active matter dynamics could play a central role in several biological functions (Mallory et al., 2015; Shin et al., 2015; Suzuki et al., 2015). It is an open question to what degree stochastic thermodynamics can be applied to systems coupled to active baths.
References
Notes
Citations
Academic references
Press
Statistical mechanics
Thermodynamics
Non-equilibrium thermodynamics
Branches of thermodynamics
Stochastic models | Stochastic thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,980 | [
"Non-equilibrium thermodynamics",
"Statistical mechanics",
"Thermodynamics",
"Branches of thermodynamics",
"Dynamical systems"
] |
70,079,766 | https://en.wikipedia.org/wiki/Structural%20composite%20supercapacitor | Structural composite supercapacitors are multifunctional materials that can both bear mechanical load and store electrical energy. When combined with structural batteries, they could potentially enable an overall weight reduction of electric vehicles.
Typically, structural composite supercapacitors are based on the design of carbon fiber reinforced polymers. Carbon fibers act as mechanical reinforcement, as current collectors, and as electrodes.
The matrix of a structural composite supercapacitor is a polymer electrolyte that transfers load via shear mechanisms between the carbon fibers and has a reasonable ionic conductivity.
In a supercapacitor, the specific capacitance is proportional to the specific surface area of the electrodes. Structural carbon fibers usually have a low specific surface area, and it is therefore necessary to modify their surface to enable sufficient energy-storage ability. To increase the surface area of the structural electrodes, several methods have been employed, mainly consisting of modifying the surface of the carbon fiber itself, or of coating the carbon fiber with a material that covers its entire surface area.
Physical and chemical activation of the carbon fibers has increased their specific surface area by two orders of magnitude without damaging their mechanical properties, but such fibers have limited energy-storage ability when combined with a structural polymer electrolyte. Coating carbon fibers with carbon nanotubes, carbon aerogel, or graphene nanoplatelets has allowed higher energy densities.
References
Capacitors
Energy conversion
Electric vehicles
Composite materials | Structural composite supercapacitor | [
"Physics"
] | 285 | [
"Physical quantities",
"Composite materials",
"Capacitors",
"Materials",
"Capacitance",
"Matter"
] |
70,080,199 | https://en.wikipedia.org/wiki/Polaris%20program | The Polaris program is a private spaceflight program organized by entrepreneur Jared Isaacman. Building on his experience as commander of the Inspiration4 mission—the first all-civilian spaceflight—Isaacman contracted with SpaceX to establish Polaris. The program involves two missions using SpaceX's Crew Dragon spacecraft and is planned to culminate in the first crewed launch on Starship.
Flights
Polaris Dawn
On 10 September 2024, the Polaris Dawn mission propelled Isaacman and his crew of three—Scott Poteet, Sarah Gillis and Anna Menon—to an elliptical orbit with an apogee of 1,400 kilometres (870 mi). This was the farthest anyone had been from Earth since NASA's Apollo program. They passed through parts of the Van Allen radiation belt to study the health effects of space radiation and spaceflight on the human body. Later in the mission, with a lower apogee, Isaacman and Gillis successfully completed the first commercial spacewalk and tested the mobility and functionality of SpaceX's EVA spacesuit.
Mission II
The second mission in the Polaris Program will launch via a Falcon 9 Block 5 vehicle with a Crew Dragon capsule. SpaceX and Polaris had studied a crewed mission to lift the Hubble Space Telescope into a higher orbit to prevent it from burning up in the atmosphere, but this option was rejected by NASA in June 2024. Data obtained through Polaris Dawn will inform the objectives and timing of Mission II.
Mission III
The third Polaris mission was set to be the first crewed launch on Starship, SpaceX's next-generation launch system. Starship was in early flight testing as of December 2024 and was expected to carry crew after making at least 100 successful cargo flights, though this was not a firm requirement. This is the final listed flight of the Polaris Program.
See also
Timeline of private spaceflight
Notes
References
External links
Polaris Program photos on Flickr
2020s in spaceflight
2022 establishments in the United States
Human spaceflight programs
Private spaceflight
SpaceX
SpaceX Starship | Polaris program | [
"Engineering"
] | 417 | [
"Space programs",
"Human spaceflight programs"
] |
70,084,973 | https://en.wikipedia.org/wiki/James%20Kay%20Graham%20Watson | Jim Watson, FRS (20 April 1936 – 18 December 2020), who published under the name J.K.G. Watson, was a molecular spectroscopist best known for the development of the widely used molecular Hamiltonians named after him (sometimes called "Watsonians" or "Watson Hamiltonians"). These Hamiltonians are used to describe the quantum dynamics of molecules.
Education and career
Watson did his Ph.D. at the University of Glasgow, and worked in the UK, United States and Canada. He was a postdoctoral fellow under Jon Hougen in the Molecular Spectroscopy Group of Gerhard Herzberg at the National Research Council of Canada in Ottawa, Ontario from 1963 to 1965. He eventually joined the staff in the group in 1982 where he remained until retirement.
Watson published a number of papers in which he developed and applied molecular Hamiltonians to problems in spectroscopy. In 1968 Watson published Simplification of the molecular vibration-rotation Hamiltonian, in which he presented a practical framework for the quantum-mechanical description of molecular ro-vibrational dynamics within the Born–Oppenheimer approximation.
Honors and awards
He was a Fellow of the Royal Society, the Royal Society of Canada and the American Physical Society. He received the 1986 Earle K. Plyer Prize for Molecular Spectroscopy and Dynamics from the American Physical Society. The citation reads:
"For his numerous fundamental contributions to the theory of rovibronic interactions in molecules, especially the development of the universally used Watson Hamiltonian for vibration-rotation energy levels, the unified treatment of centrifugal distortion in molecules, the elucidation of forbidden rotational transitions in spherical tops, the application of advanced symmetry arguments to perturbations in external fields, and investigations of the Jahn-Teller effect in H_3 and NH_4."
Personal life
Watson was married to Carolyn Kerr. He died in his home in New Edinburgh after a brief illness on 17 December 2020 at the age of 84.
References
1930s births
2020 deaths
Fellows of the Royal Society
Fellows of the Royal Society of Canada
Fellows of the American Physical Society
Spectroscopists
Academics of the University of Glasgow | James Kay Graham Watson | [
"Physics",
"Chemistry"
] | 432 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
70,087,334 | https://en.wikipedia.org/wiki/Alcyoneus%20%28galaxy%29 | Alcyoneus is a low-excitation, Fanaroff–Riley class II radio galaxy located from Earth, with host galaxy SDSS J081421.68+522410.0. It is located in the constellation Lynx and it was discovered in Low-Frequency Array (LOFAR) data by a team of astronomers led by Martijn Oei. As of 2024, it has the second-largest extent of radio structure of any radio galaxy identified, with lobed structures spanning across, described by its discoverers at the time as the "largest known structure of galactic origin." It has since been superseded by another radio galaxy, Porphyrion, with lobed structures of .
Aside from the size of its radio emissions, the central galaxy is otherwise of ordinary radio luminosity, stellar mass, and supermassive black hole mass. It is a standalone galaxy with an isophotal diameter at 25.0 r-mag/arcsec2 of about , with the nearest cluster located 11 million light-years away from it. The galaxy was named after the giant Alcyoneus from Greek mythology.
Discovery
Alcyoneus was first reported in a paper published in February 2022 by Martijn Oei and colleagues after obtaining results from the Low-Frequency Array (LOFAR) Two-metre Sky Survey (LoTSS), an interferometric radio survey of the Northern Sky, as part of a search that resulted in the discovery of over 8,000 new giant radio galaxies. The object was first observed as a bright, three-component radio structure visible on at least four spatial resolutions of the LoTSS (6-, 20-, 60- and 90-arcsecond resolutions). The two outer components of the radio structure are separated from the smaller, elongated central structure by similar distances, signifying their nature as possible radio lobes. Further confirmation using radio-optical overlays dismissed the possibility that the two are separate radio lobes from different galaxies, and confirmed that they were produced by the same source.
Characteristics
Alcyoneus has been described as a giant radio galaxy, a special class of objects characterized by the presence of radio lobes generated by relativistic jets powered by the central galaxy's supermassive black hole. Giant radio galaxies are different from ordinary radio galaxies in that they can extend to much larger scales, reaching upwards of several megaparsecs across, far larger than the diameters of their host galaxies. In the case of Alcyoneus, the host galaxy does not host a quasar and is relatively quiescent, with spectral imaging from the Sloan Digital Sky Survey's 12th data release (SDSS DR12) suggesting a star formation rate of only 0.016 solar masses per year. This classifies it as a low-excitation radio source, with Alcyoneus obtaining most of its energy from the relativistic processes of the central galaxy's jet rather than radiation from its active galactic nucleus.
The central host galaxy of Alcyoneus has a stellar mass of 240 billion solar masses, with its central supermassive black hole estimated to have a mass of million solar masses; both typical for elliptical galaxies, but substantially lower than for similar galaxies generating giant radio sources.
It is currently unknown how Alcyoneus's radio emissions grew so large. One explanation proposes that the radio galaxy's cosmic web environment might be less dense than that of other giant radio galaxies, leading to a lower resistance to growth. In comparison to other known giant radio galaxies, Alcyoneus does not appear to have a particularly massive stellar population or central black hole, or particularly powerful jet streams.
See also
List of largest galaxies
Hercules A
Cygnus A
List of galaxies with notable features
Notes
References
Radio galaxies
Active galaxies
Astronomical radio sources
Lynx (constellation) | Alcyoneus (galaxy) | [
"Astronomy"
] | 792 | [
"Astronomical radio sources",
"Lynx (constellation)",
"Astronomical events",
"Constellations",
"Astronomical objects"
] |
61,920,776 | https://en.wikipedia.org/wiki/Catalytic%20resonance%20theory | In chemistry, catalytic resonance theory was developed to describe the kinetics of reaction acceleration using dynamic catalyst surfaces. Catalytic reactions occur on surfaces that undergo variation in surface binding energy and/or entropy, exhibiting overall increase in reaction rate when the surface binding energy frequencies are comparable to the natural frequencies of the surface reaction, adsorption, and desorption.
History
Catalytic resonance theory is constructed on the Sabatier principle of catalysis developed by French chemist Paul Sabatier. In the limit of maximum catalytic performance, the surface of a catalyst is neither too strong nor too weak. Strong binding results in an overall catalytic reaction rate limitation due to product desorption, while weak-binding catalysts are limited in the rate of surface chemistry. Optimal catalyst performance is depicted as a 'volcano' peak using a descriptor of the chemical reaction that distinguishes different catalytic materials. Experimental evidence of the Sabatier principle was first demonstrated by Balandin in 1960.
The concept of catalytic resonance was proposed based on a dynamic interpretation of the Sabatier volcano reaction plot. As described, extension of either side of the volcano plot above the peak defines the timescales of the two rate-limiting phenomena, such as surface reaction(s) or desorption. For binding energy oscillation amplitudes that extend across the volcano peak, the amplitude endpoints intersect the transiently accessible faster timescales of independent reaction phenomena. At conditions of sufficiently fast binding energy oscillation, the transient binding energy variation frequency matches the natural frequencies of the reaction, and the rate of overall reaction achieves turnover frequencies greatly in excess of the volcano plot peak. The single resonance frequency (1/s) of the reaction and catalyst at the selected temperature and oscillation amplitude is identified as the purple tie line; all other applied frequencies are either slower or less efficient.
Theory
The basis of catalytic resonance theory utilizes the transient behavior of adsorption, surface reactions, and desorption as surface binding energy and surface transition states oscillate with time. The binding energy of a single species, i, is described via a temporal functional including square or sinusoidal waves of frequency, fi, and amplitude, dUi:
Other surface chemical species, j, are related to the oscillating species, i, by the constant linear parameter, gamma γi-j:
The two surface species also share the common enthalpy of adsorption, delta δi-j. Specification of the oscillation frequency and amplitude of species i and relating γi-j and δi-j for all other surface species j permits determination of all chemical surface species adsorption enthalpy with time.
The transition state energy of a surface reaction between any two species i and j is predicted by the linear scaling relationship of the Bell–Evans–Polanyi principle, which relates the surface reaction enthalpy, ΔHi-j, to the transition state energy, Ea, by parameters α and β with the following relationship:
Ea = α ΔHi-j + β
The oscillating surface and transition state energies of chemical species alter the kinetic rate constants associated with surface reaction, adsorption, and desorption. The surface reaction rate constant of species i converting to surface species j includes the dynamic activation energy:
The resulting surface chemistry kinetics are then described via a surface reaction rate expression containing dynamic kinetic parameters responding to the oscillation in surface binding energy:
with each of the k reactions having a dynamic activation energy. The desorption rate constant also varies with the oscillating surface binding energy.
Implementation of dynamic surface binding energy of a reversible A-to-B reaction on a heterogeneous catalyst in a continuous flow stirred tank reactor operating at 1% conversion of A produces a sinusoidal binding energy in species B as shown. In the transition between surface binding energy amplitude endpoints, the instantaneous reaction rate (i.e., turnover frequency) oscillates over an order of magnitude as a limit cycle solution.
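The limit-cycle behaviour described above can be reproduced with a minimal forward-Euler simulation of a single reversible surface step whose rate constants switch with a square wave. The rate constants and frequencies below are toy values chosen only to expose the transient response, not parameters of any real catalyst:

```python
# Square-wave switching between a strong-binding state (favoring A* -> B*)
# and a weak-binding state (favoring B* -> A*); toy rate constants in 1/s.
K_STRONG = (50.0, 0.5)   # (forward, reverse)
K_WEAK = (0.5, 50.0)

def coverage_envelope(freq_hz, cycles=50, steps_per_half=1000):
    """Return (min, max) of the B* coverage after transients decay."""
    theta = 0.5
    dt = 1.0 / (freq_hz * 2 * steps_per_half)
    lo, hi = 1.0, 0.0
    for cycle in range(cycles):
        for k_f, k_r in (K_STRONG, K_WEAK):
            for _ in range(steps_per_half):
                theta += (k_f * (1.0 - theta) - k_r * theta) * dt
            if cycle >= cycles // 2:          # record only the settled cycles
                lo, hi = min(lo, theta), max(hi, theta)
    return lo, hi

# Slow switching (1 Hz): the coverage swings over nearly the full range, so
# the instantaneous rate spikes after every switch. Fast switching (1 kHz)
# outruns the surface response and the swing collapses toward the midpoint.
slow = coverage_envelope(1.0)
fast = coverage_envelope(1000.0)
```

The half-cycle time relative to the surface relaxation time (roughly the inverse of the summed rate constants) controls whether the coverage fully relaxes each half-cycle, which is the transient-response picture the resonance argument builds on.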
Implications for Chemistry
Oscillating binding energies of all surface chemical species introduces periodic instances of transient behavior to the catalytic surface. For slow oscillation frequencies, the transient period is only a small fraction of the oscillation time scale, and the surface reaction achieves a new steady state. However, as the oscillation frequency increases, the surface transient period approaches the timescale of the oscillation, and the catalytic surface remains in a constant transient condition. A plot of the effective catalytic rate of a reaction with respect to applied oscillation frequency identifies the 'resonant' frequency for which the transient conditions of the catalyst surface match the applied frequencies. The catalytic rate at the resonance frequency exists above the Sabatier volcano plot maximum of a static system, with average reaction rates as high as five orders of magnitude faster than those achievable by conventional catalysis.
Surface binding energy oscillation also occurs to different extent with the various chemical surface species as defined by the γi-j parameter. For any non-unity γi-j system, the asymmetry in the surface energy profile results in conducting work to bias the reaction to a steady state away from equilibrium. Similar to the controlled directionality of molecular machines, the resulting ratchet (device) energy mechanism selectively moves molecules through a catalytic reaction against a free energy gradient.
Application of dynamic binding energy to a surface with multiple catalytic reactions exhibits complex behavior derived from the differences in the natural frequencies of each chemistry; these frequencies are identified by the inverse of the adsorption, desorption, and surface kinetic rate parameters. Considering a system of two parallel elementary reactions of A-to-B and A-to-C that only occur on a surface, the performance of the catalyst under dynamic conditions will result in varying capability for selecting either reaction product (B or C). For the depicted system, both reactions have the same overall thermodynamics and will produce B and C in equal amounts (50% selectivity) at chemical equilibrium. Under normal static catalyst operation, only product B can be produced at selectivities greater than 50%, and product C is never favored. However, as shown, the application of surface binding dynamics in the form of a square wave at varying frequency and fixed oscillation amplitude but varying endpoints exhibits the full range of possible product selectivity. In the range of 1–10 Hertz, there exists a small island of parameters for which product C is highly selective; this condition is only accessible via dynamics.
Mechanisms of Efficient Dynamic Catalysts
Dynamic catalysts that undergo forced variation of free energy surface during a catalytic reaction are called 'programmable catalysts.' Perturbation of the active site with strain, light, or condensed charge modulates the binding energy of adsorbates and transition states to alter the rate, selectivity, and conversion of a catalytic reaction. New opportunities exist with dynamic catalysts and oscillating free energy surfaces not possible with conventional static active sites. However, these opportunities require energy input to modulate the catalyst, raising the issue of efficiency of a programmable catalyst.
A programmable catalyst oscillating between strong and weak binding energies exhibits positive scaling between reaction intermediates; B* and A* both weaken and strengthen in binding energy simultaneously. Under strong binding conditions, A* readily reacts over the transition state to form B*. For weak catalyst binding conditions, B* readily desorbs to form B(g), as A(g) immediately adsorbs as A* to restart the catalytic cycle. An efficient programmable catalyst converts molecules from reactants to products with every oscillation of binding energy of the active site, such that most active sites on the catalyst surface produce a product molecule for every catalytic oscillation cycle. Of key importance is the height of the transition state barrier in the weak-binding catalyst state; a high barrier creates a ratchet mechanism, whereby B* is prohibited from reacting backwards to A*.
The efficiency of a programmable catalyst can be determined by the metric of the turnover efficiency (ηTOE). The turnover efficiency compares the difference between the time-averaged dynamic turnover frequency of the reaction (TOFdyn) and the steady state turnover frequency (TOFss) to the applied catalytic oscillation frequency, fapp.
Highly efficient programmable catalysts will exhibit turnover efficiencies close to unity (ηTOE ~ 1), indicating that there is parity between the applied frequency and the catalytic turnover frequency.
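The efficiency comparison can be written compactly; this is a reconstruction of the definition from the prose, and the published form may differ:

```latex
\eta_{\mathrm{TOE}} \;=\; \frac{\overline{\mathrm{TOF}}_{\mathrm{dyn}} - \mathrm{TOF}_{\mathrm{ss}}}{f_{\mathrm{app}}}
```

When every applied oscillation converts one molecule per active site, the dynamic rate gain matches f_app and η_TOE approaches 1; leaky or low-participation mechanisms push it toward 0.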
Of fundamental importance are mechanisms that lead to a reduction in the turnover efficiency. One such catalytic mechanism is the leaky programmable catalyst mechanism. On these dynamic free energy surfaces, the adsorbed surface reactant A* readily reacts to form surface product B* in the strong-binding catalyst state. However, in the weak-binding catalyst state, B* readily reacts backwards to reform A* rather than desorbing to form B(g) in the gas phase. This yields an inefficient programmable catalyst, whereby most input energy to modulate the catalyst between states results in heat generation as molecules interconvert between A* and B*. Programmable catalysts exhibiting the leaky ratchet phenomenon exhibit time-averaged turnover frequencies far from parity with the applied catalytic oscillation frequency, resulting in low turnover efficiency.
An alternative programmable catalytic mechanism leading to reduced turnover efficiency derives from the extent of the surface that participates in the overall reaction. When the programmable catalyst switches to the strong-binding state, A* reacts to B* and equilibrates. However, for systems with comparable free energy of A* and B* in the strong binding state, only a fraction of reactant A* is converted to B*. In this case, only a fraction of surface active sites desorb B* to form B(g) when the catalyst switches to the weak-binding state. Adsorbates of A* that change in binding energy between modulating catalyst states consume energy and release heat without completing the catalytic cycle, yielding an inefficient programmable catalyst.
High turnover efficiency is critical for efficient use of a programmable catalyst. The two key mechanisms leading to lower efficiency, the leaky ratchet and low participating surface mechanisms, can significantly reduce the time-averaged catalytic rate, even orders of magnitude lower than the applied catalyst oscillation frequency.
Characteristics of Dynamic Surface Reactions
Catalytic reactions on surfaces exhibit an energy ratchet that biases the reaction away from equilibrium. In the simplest form, the catalyst oscillates between two states of stronger or weaker binding, referred to in this example as 'green' and 'blue,' respectively. For a single elementary reaction on a catalyst oscillating between two states (green and blue), there exist four rate coefficients in total: one forward (k1) and one reverse (k−1) in each catalyst state. The catalyst switches between catalyst states (j = blue or green) with a frequency, f, spending a time τj in each state, such that the duty cycle, Dj, is defined for catalyst state j as the fraction of the time the catalyst exists in state j. For the catalyst in the 'blue' state:
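The duty cycle for the 'blue' state follows directly from the definitions above (a restatement of the text rather than a quoted equation):

```latex
D_{\mathrm{blue}} = \frac{\tau_{\mathrm{blue}}}{\tau_{\mathrm{blue}} + \tau_{\mathrm{green}}}
                  = \tau_{\mathrm{blue}}\, f,
\qquad
f = \frac{1}{\tau_{\mathrm{blue}} + \tau_{\mathrm{green}}}
```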
The bias of a catalytic ratchet under dynamic conditions can be predicted via a ratchet directionality metric, λ, that can be calculated from the rate coefficients, ki, and the time constants of the oscillation, τi (or the duty cycle). For a catalyst oscillating between two catalyst states (blue and green), the ratchet directionality metric can be calculated:
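One plausible form of the two-state directionality metric, weighting each rate coefficient by the lifetime of its catalyst state, is sketched below; the precise expression should be checked against the primary catalytic-ratchet literature:

```latex
\lambda = \frac{k_{1,\mathrm{blue}}\,\tau_{\mathrm{blue}} + k_{1,\mathrm{green}}\,\tau_{\mathrm{green}}}
               {k_{-1,\mathrm{blue}}\,\tau_{\mathrm{blue}} + k_{-1,\mathrm{green}}\,\tau_{\mathrm{green}}}
```

With this form, λ > 1 corresponds to a net forward bias, consistent with the interpretation given in the text.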
For directionality metrics greater than 1, the reaction exhibits forward bias to conversion higher than equilibrium. Directionality metrics less than 1 indicate negative reaction bias to conversion less than equilibrium. For more complicated reactions oscillating between multiple catalyst states, j, the ratchet directionality metric can be calculated based on the rate constants and time scales of all states.
The kinetic bias of an independent catalytic ratchet exists for sufficiently high catalyst oscillation frequencies, f, above the ratchet cutoff frequency, fc, calculated as:
For a single independent catalytic elementary step of a reaction on a surface (e.g., A* ↔ B*), the A* surface coverage, θA, can be predicted from the ratchet directionality metric,
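As an illustration, the sketch below computes λ for hypothetical rate coefficients and, under the additional assumption that λ acts as an effective B*/A* coverage ratio (so that θA = 1/(1 + λ) when θA + θB = 1), the resulting A* coverage. All numbers and the closure for θA are assumptions for illustration, not values from the text:

```python
# Sketch: ratchet directionality metric and time-averaged A* coverage
# for a catalyst oscillating between two states ("blue" and "green").
# All rate constants (1/s) and state durations (s) are hypothetical.

def directionality_metric(k1_b, km1_b, tau_b, k1_g, km1_g, tau_g):
    """Time-weighted ratio of forward to reverse rate coefficients."""
    forward = k1_b * tau_b + k1_g * tau_g
    reverse = km1_b * tau_b + km1_g * tau_g
    return forward / reverse

def coverage_A(lam):
    """Assumed steady A* coverage if theta_B / theta_A -> lambda."""
    return 1.0 / (1.0 + lam)

# Strong-binding ("blue") state favors A* -> B*; weak-binding ("green")
# state favors desorption rather than the reverse surface reaction.
lam = directionality_metric(k1_b=100.0, km1_b=1.0, tau_b=0.01,
                            k1_g=0.1, km1_g=10.0, tau_g=0.001)
print(lam, coverage_A(lam))
```

A λ well above 1 drives the surface toward B*, leaving only a small steady A* coverage.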
Experiments and Evidence
Catalytic rate enhancement via dynamic perturbation of surface active sites has been demonstrated experimentally with dynamic electrocatalysis and dynamic photocatalysis. These results may be explained in the framework of catalytic resonance theory, but conclusive evidence is still lacking:
In 1978, the electro-oxidation of formic acid on a platinum electrode was studied under the application of constant potentials and square-wave pulsed potentials. The latter was found to enhance the current density (and thus catalytic activity) by up to 20 times compared to the potentiostatic conditions, with the optimal wave amplitude and frequency of 600 mV and 2000 Hz, respectively. In 1988, the oxidation of methanol on a platinum electrode was conducted under pulsed potentials between 0.4 and 1.18 V, resulting in an average current almost 100 times higher than the steady-state current at 0.4 V.
Using the formic acid electro-oxidation reaction, oscillation of the applied electrodynamic potential between 0 and 0.8 V accelerated the formation rate of carbon dioxide to more than an order of magnitude (20×) above the best steady-state rate achievable on platinum, the best existing catalyst. The maximum catalytic rate was experimentally observed at a frequency of 100 Hz; slower catalytic rates were observed at both higher and lower electrodynamic frequencies. The resonant frequency was interpreted as the oscillation between conditions favorable to formic acid decomposition (0 V) and conditions favorable to CO2 formation (0.8 V).
The concept of implementing periodic illumination to improve the quantum yield of a typical photocatalytic reaction was first introduced in 1964 by Miller et al. In this work, they showed enhanced photosynthetic efficiency in the conversion of CO2 to O2 when an algal culture was exposed to periodic illumination in a Taylor vortex reactor. Sczechowski et al. later implemented the same approach for heterogeneous photocatalysis in 1993, where they demonstrated a five-fold increase in the photoefficiency of formate decomposition by cycling between light and dark conditions with periods of 72 ms and 1.45 s, respectively. They hypothesized that upon illumination of the catalyst, there is a critical illumination time during which absorbed photons generate oxidizing species (hvb+) on the surface of the catalyst. The generated species or their intermediates then react with substrates on the surface or in the bulk. During the dark period, adsorption, desorption, and diffusion occur in the absence of photons. After a critical recovery period in the dark, the photocatalyst can again use photons efficiently once they are reintroduced. A summary of work involving "dynamic" photocatalysis was provided by Tokode et al. in 2016.
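The light/dark cycle quoted above implies a quite small illumination duty cycle; a quick check using only the two periods given in the text:

```python
# Illumination duty cycle for the pulsed photocatalysis experiment
# described above: 72 ms light and 1.45 s dark per cycle.
t_light = 0.072   # s, illuminated period
t_dark = 1.45     # s, dark recovery period

duty_cycle = t_light / (t_light + t_dark)   # fraction of time illuminated
frequency = 1.0 / (t_light + t_dark)        # cycles per second

print(f"duty cycle = {duty_cycle:.1%}, frequency = {frequency:.2f} Hz")
```

The catalyst is illuminated for under 5% of each cycle, consistent with the idea that most of the cycle is spent in dark recovery.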
Dynamic promotion of methanol decomposition was demonstrated on 2 nm Pt nanoparticles using pulsed light. The rate acceleration to form H2 relative to static illumination was attributed to the selective weakening of adsorbed carbon monoxide, thereby also increasing the quantum efficiency of applied light.
In 2021, Sordello et al. experimentally demonstrated a 50% increase of the quantum yield for the Hydrogen Evolution Reaction (HER) over Pt/TiO2 nanoparticles via formic acid photoreforming under Controlled Period Illumination (CPI).
Additional methods of implementing catalyst dynamics have been proposed, using oscillating light, electric potential, and physical perturbation.
References
Catalysis
Chemical kinetics
Chemical reactions
Chemical processes
Chemical reaction engineering
Reaction mechanisms
Physical organic chemistry | Catalytic resonance theory | [
"Chemistry",
"Engineering"
] | 3,248 | [
"Catalysis",
"Reaction mechanisms",
"Chemical reaction engineering",
"Chemical engineering",
"Chemical processes",
"Physical organic chemistry",
"Chemical process engineering",
"Chemical kinetics"
] |
54,342,619 | https://en.wikipedia.org/wiki/Central%20Basin%20Spreading%20Center | Central Basin Spreading Center (CBSC), formerly Central Basin Fault, is a seafloor spreading center of the West Philippine Basin. It is a long, NW-SE-trending structure that is considered to have been the spreading center of the West Philippine Basin (WPB) from the Eocene to the middle Oligocene. It is a remnant spreading center, meaning that it is no longer active. However, it still displays many of the features that are characteristic of spreading centers, such as a rift valley, axial ridges, and abyssal hills. The CBSC is divided into two segments: the eastern segment and the western segment. The eastern segment is characterized by slow-spreading features, such as a deep rift valley and nodal basins. The western segment is characterized by fast-spreading features, such as overlapping spreading centers and volcanic axial ridges.
The CBSC is also associated with a number of other features, including oceanic plateaus and seamount chains. These features suggest that the CBSC formed in a complex tectonic environment, possibly involving a mantle plume. The study of the CBSC provides important insights into the formation and evolution of marginal basins. Marginal basins are small ocean basins that are formed on the margins of continents or island arcs. The CBSC is a good example of a marginal basin that formed through seafloor spreading.
References
Oceanography | Central Basin Spreading Center | [
"Physics",
"Environmental_science"
] | 278 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
54,345,393 | https://en.wikipedia.org/wiki/Monosodium%20xenate | Monosodium xenate is the sodium salt of xenic acid with formula NaHXeO4. It is a powerful oxidizer, owing to being a highly reactive compound of xenon.
Synthesis
Monosodium xenate can be made by mixing solutions of xenon trioxide and sodium hydroxide, followed by freezing to liquid nitrogen temperatures, and dehydrating in a vacuum.
Properties
Monosodium xenate usually exists in the sesquihydrate form, with 1.5 waters of hydration per formula unit. It is stable up to 160 °C when heated in the pure state. However, it can explode when subjected to mechanical shock, or at lower temperatures when mixed with XeO3.
Sodium xenate is slightly toxic with a median lethal dose between 15 and 30 mg/kg of body weight in mice. Xenate leaves the body very quickly. In mice, the level in blood drops by half in twenty seconds due to it being decomposed and exhaled. In the peritoneum the half-life extends to six minutes.
The dialkali xenates (XeO42−) have not been discovered, as xenate disproportionates under more alkaline conditions; hence the dialkali salt Na2XeO4 is rarely found.
References
Xenon(VI) compounds
Oxidizing agents
Sodium compounds | Monosodium xenate | [
"Chemistry"
] | 285 | [
"Redox",
"Oxidizing agents"
] |
55,918,466 | https://en.wikipedia.org/wiki/Bismuth%20polycations | Bismuth polycations are polyatomic ions of the formula . They were originally observed in solutions of bismuth metal in molten bismuth chloride. It has since been found that these clusters are present in the solid state, particularly in salts where germanium tetrachloride or tetrachloroaluminate serve as the counteranions, but also in amorphous phases such as glasses and gels. Bismuth endows materials with a variety of interesting optical properties that can be tuned by changing the supporting material. Commonly-reported structures include the trigonal bipyramidal cluster, the octahedral cluster, the square antiprismatic cluster, and the tricapped trigonal prismatic cluster.
Known materials
Crystalline
Bi5(AlCl4)3
Bi8(AlCl4)2
Bi5(GaCl4)3
Bi8(GaCl4)2
Metal complexes
[CuBi8][AlCl4]3
[Ru(Bi8)2]6+
[Ru2Bi14Br4][AlCl4]4
Structure and bonding
Bismuth polycations form despite the fact that they possess fewer total valence electrons than would seem necessary for the number of sigma bonds. The shapes of these clusters are generally dictated by Wade's rules, which are based on the treatment of the electronic structure as delocalized molecular orbitals. The bonding can also be described with three-center two-electron bonds in some cases, such as the cluster.
Bismuth clusters have been observed to act as ligands for copper and ruthenium ions. This behavior is possible due to the otherwise fairly inert lone pairs on each of the bismuth that arise primarily from the s-orbitals left out of Bi–Bi bonding.
Optical properties
The variety of electron-deficient sigma aromatic clusters formed by bismuth gives rise to a wide range of spectroscopic behaviors. Of particular interest are the systems capable of low-energy electronic transitions, as these have demonstrated potential as near-infrared light emitters. It is the tendency of electron-deficient bismuth to form sigma-delocalized clusters with small HOMO/LUMO gaps that gives rise to the near-infrared emissions. This property makes these species potentially valuable to the field of near-infrared optical tomography, which exploits the near-infrared window in biological tissue.
References
Bismuth compounds
Cations
Cluster chemistry | Bismuth polycations | [
"Physics",
"Chemistry"
] | 500 | [
"Matter",
"Cluster chemistry",
"Cations",
"Organometallic chemistry",
"Ions"
] |
55,926,948 | https://en.wikipedia.org/wiki/Pesticide%20standard%20value | Pesticide standard values are applied worldwide to control pesticide pollution, since pesticides are largely applied in numerous agricultural, commercial, residential, and industrial applications. Usually, pesticide standard value is regulated in residential surface soil (i.e., pesticide soil regulatory guidance value, or RGV), drinking water (i.e., pesticide drinking water maximum concentration level, or MCL), foods (i.e., pesticide food maximum residue level, or MRL), and other ecological sections (e.g., air, surface water, groundwater, bed sediment, or aquatic organisms).
Definition
Pesticide standard values specify the maximum amount of a pollutant that may be present without prompting some form of regulatory response such as human health and ecological effects. Pesticide standard values are often derived from laboratory toxicology data (i.e., animal tests), human or ecological parameters (i.e., body weight, intake rate, lifetime, etc.), and human health risk models such as USEPA and RIVM models. On the other hand, the European Union took a precautionary approach (in accordance with the principles of its environmental policy) before toxicological data was available and provided very strict and protective standards for all pesticides in drinking water.
Worldwide pesticide standard values
As of November 2017, fewer than 30% of the world's nations have regulated pesticide standard values in surface residential soil, and about 50% have provided pesticide standard values in drinking water and agricultural foods. Many nations in Africa, Asia, and South America lack pesticide standard values for the major human and ecological exposure pathways such as soil, sediment, and water.
Pesticide standard values for many current and historical largely used pesticides such as DDT, aldrin, lindane, glyphosate, MCPA, chlorpyrifos, and 2,4-D often vary over seven, eight, or nine orders of magnitude and are log-normally distributed, which indicates that there is little agreement on the regulation of pesticide standard values among worldwide jurisdictions. Additionally, many worldwide pesticide standard values are not sufficiently low to protect public health based on human health risk uncertainty bounds calculations and maximum legal contribution estimations.
See also
Persistent organic pollutant
Aquatic toxicology
Regulation of pesticides in the European Union
Pesticide regulation in the United States
References
Pollution control technologies
Pesticides
Environmental effects of pesticides
Environmental standards | Pesticide standard value | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 507 | [
"Pesticides",
"Toxicology",
"Pollution control technologies",
"Environmental engineering",
"Biocides"
] |
55,928,290 | https://en.wikipedia.org/wiki/Glass%20bead%20road%20surface%20marking | Glass beads composed of soda lime glass are essential for providing retroreflectivity in many kinds of road surface markings. Retroreflectivity occurs when incident light from vehicles is refracted within glass beads that are imbedded in road surface markings and then reflected back into the driver's field of view. In North America, approximately 227 million kilograms of glass beads are used for road surface markings annually. Roughly 520 kilograms of glass beads are used per mile during remarking of a five lane highway system, and road remarking can occur every two to five years. In the United States, the massive demand for glass beads has led to importing from countries using outdated manufacturing regulations and techniques.
These techniques include the use of heavy metals such as arsenic, antimony, and lead during the manufacturing process as decolorizers and fining agents. It has been found that the heavy metals become incorporated into the bead's glass matrix and may leach under the environmental conditions that roads experience.
Composition and manufacturing
The synthesis of these beads begins when calcium carbonate is heated to between 800 and 1300 °C. This heating causes a decomposition reaction that forms solid calcium oxide and releases carbon dioxide gas.
CaCO3(s) → CaO(s) + CO2(g)    (800–1300 °C)
Similarly, sodium carbonate decomposes to sodium oxide and releases carbon dioxide gas.
Na2CO3(s) → Na2O(s) + CO2(g)    (800–1300 °C)
Sodium oxide is then reacted with silica to produce sodium silicate liquid glass.
Na2O(s) + SiO2(s) → Na2SiO3(l)
Lastly, to complete the general structure of the soda-lime glass, calcium oxide is dissolved in solution with sodium silicate glass, which ultimately reduces the softening temperature of the glass. Additional metals and ions are added to this melted glass to improve its properties, and the compound is then sprayed and formed into beads using either the direct or indirect method.
Na2SiO3(l) + CaO(s) → Na2O·CaO·SiO2
Overall, the percent composition of major compounds found in the final glass bead product is shown below.
In addition to these primary components of soda-lime glass, manufacturers include the heavy metals arsenic, antimony, and lead to refine and improve the properties of the glass bead. Lead in the form of PbO is added to increase the durability of the glass so that it can withstand harsh road conditions. Arsenic and antimony are used as fining agents that facilitate the removal of gas bubbles from the molten mixture. Carbon dioxide produced by the decomposition of calcium carbonate and sodium carbonate is removed to obtain the required retroreflective properties of the glass. In addition, both arsenic and antimony are used as decolorizers. Having a colorless glass is crucial to maximizing retroreflectivity. Arsenic in its inorganic form assists in the decolorization of the glass by controlling iron's oxidation state, oxidizing ferrous oxide to its less colorful counterpart, ferric oxide:

As2O5 + 4 Fe3O4 → As2O3 + 6 Fe2O3
Antimony in the form of Sb2O5 performs a similar reaction as arsenic, oxidizing ferrous oxide to ferric oxide.
Sb2O5 + 4 Fe3O4 → Sb2O3 + 6 Fe2O3
While these three heavy metals can typically be found in both domestic and imported glass beads, they vary in concentration. According to the US Environmental Protection Agency, the Resource Conservation and Recovery Act limits the levels of heavy metal content in accordance with their toxicity. Due to increasing demands for marked roads, however, the majority of glass beads used in the U.S. are imported from countries with little to no regulation on heavy metal content. For example, beads obtained from North America contain approximately 15 mg of arsenic per kg of beads, while some from China have concentrations of up to 1000 mg/kg. Imported bead concentrations of each of these metals are listed in the table below.
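To put the concentration gap above in perspective, here is a back-of-the-envelope calculation using only figures quoted in this article (520 kg of beads per remarked mile; 15 vs up to 1000 mg of arsenic per kg of beads):

```python
# Back-of-the-envelope arsenic mass per remarked highway mile,
# using only figures quoted in this article.
beads_per_mile_kg = 520.0          # kg of beads per mile (five-lane remarking)
as_domestic_mg_per_kg = 15.0       # North American beads
as_imported_mg_per_kg = 1000.0     # worst-case imported beads

def arsenic_grams_per_mile(conc_mg_per_kg):
    """Total arsenic (g) in the beads applied to one mile of road."""
    return beads_per_mile_kg * conc_mg_per_kg / 1000.0

print(arsenic_grams_per_mile(as_domestic_mg_per_kg))   # domestic beads
print(arsenic_grams_per_mile(as_imported_mg_per_kg))   # imported beads
```

Under these figures, one remarked mile carries roughly 7.8 g of arsenic with domestic beads but up to about 520 g with the most contaminated imported beads.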
Degradation of glass beads
Environmental conditions can cause degradation of glass beads, leading to release of incorporated heavy metals into the environment. While abrasion may dislodge these beads from the road marking itself, the reaction of these beads with an aqueous environment vastly accelerates their decomposition and heavy metal release.
There are three reactions involved in the corrosion of silicon dioxide. The first is an ion-exchange reaction, in which mobile ions of a solution are exchanged for those of similar charge on the solid. In particular, this reaction involves a cation-exchange material, where a negatively charged structural backbone allows the replacement of positively charged cations. In the degradation of soda-lime beads, this reaction replaces the various ions interacting with the silicon–oxygen network (e.g., Na+, Ca2+, K+, Mg2+) with a hydrogen ion.
Si–O− Na+ + (H+ + OH−) → Si–OH + Na+ + OH−
In addition to this reaction, a hydroxyl ion can attack the Si-O bond causing dissolution of the SiO2 matrix and creating silanol and non-bridging oxygen groups.
Si–O–Si + OH− → Si–OH + −O–Si
As dissolution occurs, the non-bridging oxygen groups can abstract hydrogen ions from solution.
Si–O− + (H+ + OH−) → Si–OH + OH−
An increase in the concentration of hydroxyl ions comes with increased alkalinity of the aqueous solution. This increase in pH has shown, in varying column leaching studies, to increase the reduction potential and DOC (dissolved organic carbon) concentration of the solution. This ultimately leads to an increase in mobility of many metals including arsenic, copper, and nickel.
The mobility of these heavy metals is therefore affected by the presence of alkali oxides. The Na+, Ca2+, Mg2+, and K+ ions can associate with the tetrahedral networks of silicon and oxygen, forming a trigonal antiprism network. In trigonal antiprism formation, the ions coordinate with three oxygen atoms at a distance of 2.3 angstroms and with another three oxygen atoms at a nonbonding distance of 3 angstroms. As the concentration of alkali oxides in the glass beads increases, the probability of chemical attack increases because the glass network becomes more open and accessible.
Heavy metal speciation and leaching
During both routine road marking removal and harsh environmental conditions, these glass beads can degrade and leach incorporated heavy metals. Although the exact mechanism of heavy metal incorporation into the glass beads is unknown, current literature hypothesizes that the heavy metals are associated with alkali and alkali earth metals on the surface of glass beads. Environmental conditions relevant to road surfaces such as pH, different salts, and ionic strength strongly influence the leaching process. In particular, pH determines the speciation of the heavy metal which is critical for solubility in the aqueous phase. The following graphs show the speciation of heavy metals as a function of pH.
Few states have regulations on leached concentrations of heavy metals. For example, New Jersey limits arsenic to 3 μg/L, lead to 65 μg/L, and antimony to 78 μg/L. In studies that subjected batches of glass beads to environmental conditions in a lab setting, 96% of the leached concentrations of arsenic exceeded 3 μg/L, 75% of leached lead exceeded 65 μg/L, and 27% of the leached concentrations of antimony exceeded the criterion of 78 μg/L. The following graphs show the total concentrations of heavy metals leached from glass beads after 160 days as a function of pH, salt type, and ionic strength.
Interaction with roadside soil
Once the arsenic is mobilized in aqueous form, humic substances interact with arsenic. It has been shown that particularly under acidic environments, humic acids contribute immensely to the retention of arsenic in the soil matrix. While an exact mechanism for this has not been confirmed, it has been hypothesized that humic acids are acting as anion exchange moieties, potentially through amine interaction within the humic material with arsenic. This is only likely if the amine is quaternary, thus justifying the low pH claim, as similar resins are used to separate As(III) and As(V). Another possible mechanism of arsenic's interaction with humic substances is through metal complexes. Potentially, arsenic adsorption could occur as a humic-acid-metal-As bridging ligand, or possibly adsorbed to the clay that is bound to the humic acid itself as well.
Lead, on the other hand, has been shown to increase binding to humic substances with increasing pH and decreasing ionic strength. Research has indicated that monodentate lead binds at a relatively high measure to carboxylic type groups present in humic materials. There is also evidence of the bidentate form of lead binding to phenolic-type groups in the ortho position in humic material when concentrations of lead are high, as is the case for soils nearby marked roads.
In the case of antimony, qualitative studies on its association with humic substances are scarce and rarely conclusive. It has been shown in many cases, however, that pH has little influence on these interactions. One study indicated that organic ligands possessing carboxylic or hydroxyl groups form stable bidentate chelates with Sb(III) and Sb(V). Another indicated that Sb(III), when bound to humic material, is easily oxidized and can be released back into aqueous solution as Sb(OH)6−, showing that Sb(V) is more commonly bound to humic material. The details of how this binding occurs mechanistically remain relatively unresolved, but knowledge of the primary form of its binding is important to furthering this research.
Alternative to heavy metal usage
Retroreflectivity is essential to safe driving conditions. While metals are necessary to achieve these goals, there are other, non-toxic metals that can achieve the same results. These may include zirconium, tungsten, titanium, and barium. The amount of these metals that could be incorporated into the glass varies based on its country of origins and the regulations placed on those countries, but further research on alternatives to heavy metal usage in road markings would assist in reducing heavy metal leachate near roadside soils.
See also
Road surface marking
Toxic heavy metal
References
External links
U.S. Federal Highway Administration—Learn About Pavement Markings
Road surface markings | Glass bead road surface marking | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Environmental_science"
] | 2,280 | [
"Glass engineering and science",
"Environmental chemistry",
"Transport and the environment",
"Materials science",
"Physical systems",
"Transport",
"Soil contamination"
] |
51,496,520 | https://en.wikipedia.org/wiki/Strongback%20%28girder%29 | A strongback is a beam or girder which acts as a secondary support member to an existing structure. A strongback in a staircase is usually ordinary two-by dimensional lumber attached to the staircase stringers to stiffen the assembly. In shipbuilding, a strongback, known as a waler is oriented lengthwise along a ship to brace across several frames to keep the frames square and plumb. In formwork strongbacks (typically vertical) reinforce typically horizontal walers to provide additional support against hydrostatic pressure during concrete pours.
Some rockets like the Antares, the Falcon 9 and the Falcon Heavy use a strongback to restrain the rocket prior to launch. This structure tilts several degrees away from the rocket to clear the launch, either at the moment of launch or a few minutes before.
References
Building materials
Construction | Strongback (girder) | [
"Physics",
"Engineering"
] | 166 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Architecture stubs",
"Matter",
"Architecture"
] |
67,221,648 | https://en.wikipedia.org/wiki/Introduction%20to%20Lattices%20and%20Order | Introduction to Lattices and Order is a mathematical textbook on order theory by Brian A. Davey and Hilary Priestley. It was published by the Cambridge University Press in their Cambridge Mathematical Textbooks series in 1990, with a second edition in 2002. The second edition is significantly different in its topics and organization, and was revised to incorporate recent developments in the area, especially in its applications to computer science. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
Both editions of the book have 11 chapters; in the second book they are organized with the first four providing a general reference for mathematicians and computer scientists, and the remaining seven focusing on more specialized material for logicians, topologists, and lattice theorists.
The first chapter concerns partially ordered sets, with a fundamental example given by the partial functions ordered by the subset relation on their graphs, and covers fundamental concepts including top and bottom elements and upper and lower sets. These ideas lead to the second chapter, on lattices, in which every two elements (or in complete lattices, every set) have a greatest lower bound and a least upper bound. This chapter includes the construction of a lattice from the lower sets of any partial order, and the Knaster–Tarski theorem constructing a lattice from the fixed points of an order-preserving function on a complete lattice. Chapter three concerns formal concept analysis, its construction of "concept lattices" from collections of objects and their properties, with each lattice element representing both a set of objects and a set of properties held by those objects, and the universality of this construction in forming complete lattices. The fourth of the introductory chapters concerns special classes of lattices, including modular lattices, distributive lattices, and Boolean lattices.
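The Knaster–Tarski construction mentioned above can be made concrete on a finite powerset lattice: any monotone self-map of a complete lattice has a fixed point, and on a finite lattice the least one is reached by iterating from the bottom element. The sketch below uses an illustrative monotone map, not an example from the book:

```python
# Least fixed point of a monotone function on the powerset lattice of
# {0, 1, 2, 3}, found by iterating from the bottom element (empty set).
# Knaster-Tarski guarantees a fixed point for any monotone self-map of
# a complete lattice; on a finite lattice, iteration from bottom finds
# the least one.

UNIVERSE = frozenset(range(4))

def f(s):
    """A monotone map: keep s, add 0, and add each element's successor."""
    return frozenset(s | {x + 1 for x in s if x + 1 in UNIVERSE} | {0})

def least_fixed_point(func, bottom=frozenset()):
    current = bottom
    while True:
        nxt = func(current)
        if nxt == current:
            return current
        current = nxt

print(sorted(least_fixed_point(f)))  # [0, 1, 2, 3]
```

Monotonicity (s ⊆ t implies f(s) ⊆ f(t)) is what guarantees the iteration climbs the lattice until it stabilizes.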
In the second part of the book, chapter 5 concerns the theorem that every finite Boolean lattice is isomorphic to the lattice of subsets of a finite set, and (less trivially) Birkhoff's representation theorem according to which every finite distributive lattice is isomorphic to the lattice of lower sets of a finite partial order. Chapter 6 covers congruence relations on lattices. The topics in chapter 7 include closure operations and Galois connections on partial orders, and the Dedekind–MacNeille completion of a partial order into the smallest complete lattice containing it. The next two chapters concern complete partial orders, their fixed-point theorems, information systems, and their applications to denotational semantics. Chapter 10 discusses order-theoretic equivalents of the axiom of choice, including extensions of the representation theorems from chapter 5 to infinite lattices, and the final chapter discusses the representation of lattices with topological spaces, including Stone's representation theorem for Boolean algebras and the duality theory for distributive lattices.
Two appendices provide background in topology needed for the final chapter, and an annotated bibliography.
Audience and reception
This book is aimed at beginning graduate students, although it could also be used by advanced undergraduates. Its many exercises make it suitable as a course textbook, and serve both to fill in details from the exposition in the book, and to provide pointers to additional topics. Although some mathematical sophistication is required of its readers, the main prerequisites are discrete mathematics, abstract algebra, and group theory.
Writing of the first edition, reviewer Josef Niederle calls it "an excellent textbook", "up-to-date and clear". Similarly, Thomas S. Blyth praises the first edition as "a well-written, satisfying, informative, and stimulating account of applications that are of great interest", and in an updated review writes that the second edition is as good as the first. Likewise, although Jon Cohen has some quibbles with the ordering and selection of topics (particularly the inclusion of congruences at the expense of a category-theoretic view of the subject), he concludes that the book is "a wonderful and accessible introduction to lattice theory, of equal interest to both computer scientists and mathematicians".
Both Blyth and Cohen note the book's skilled use of LaTeX to create its diagrams, and its helpful descriptions of how the diagrams were made.
References
Mathematics textbooks
Order theory
1990 non-fiction books
2002 non-fiction books | Introduction to Lattices and Order | [
"Mathematics"
] | 888 | [
"Order theory"
] |
67,222,715 | https://en.wikipedia.org/wiki/Cerium%28III%29%20sulfide | Cerium(III) sulfide, also known as cerium sesquisulfide, is an inorganic compound with the formula Ce2S3. It is the sulfide salt of cerium(III) and exists as three polymorphs with different crystal structures.
Its high melting point (comparable to silica or alumina) and chemically inert nature have led to occasional examination of potential use as a refractory material for crucibles, but it has never been widely adopted for this application.
The distinctive red colour of two of the polymorphs (α- and β-Ce2S3) and aforementioned chemical stability up to high temperatures have led to some limited commercial use as a red pigment (known as cerium sulfide red).
Synthesis
The oldest syntheses reported for cerium(III) sulfide follow a typical rare earth sesquisulfide formation route, which involves heating the corresponding cerium sesquioxide to 900–1100 °C in an atmosphere of hydrogen sulfide:
Ce2O3 + 3 H2S → Ce2S3 + 3 H2O
Newer synthetic procedures utilise less toxic carbon disulfide gas for sulfurisation, starting from cerium dioxide which is reduced by the CS2 gas at temperatures of 800–1000 °C:
6 CeO2 + 5 CS2 → 3 Ce2S3 + 5 CO2 + SO2
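As a sanity check on the carbon disulfide route above, the element balance of the equation can be verified programmatically (a simple tally, with compositions hard-coded from the equation as written):

```python
# Element balance check for: 6 CeO2 + 5 CS2 -> 3 Ce2S3 + 5 CO2 + SO2
from collections import Counter

def side(species):
    """Sum element counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in species:
        for element, n in comp.items():
            total[element] += coeff * n
    return total

reactants = [(6, {"Ce": 1, "O": 2}), (5, {"C": 1, "S": 2})]
products = [(3, {"Ce": 2, "S": 3}), (5, {"C": 1, "O": 2}), (1, {"S": 1, "O": 2})]

print(side(reactants) == side(products))  # True: the equation balances
```

Both sides tally to 6 Ce, 12 O, 5 C, and 10 S, confirming the stoichiometry given in the text.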
Polymorphs
Ce2S3 exists in three polymorphic forms: α-Ce2S3 (orthorhombic, burgundy colour), β-Ce2S3 (tetragonal, red colour), γ-Ce2S3 (cubic, black colour). They are analogous to the crystal structures of the likewise trimorphic Pr2S3 and Nd2S3.
Following the synthetic procedures given above will yield mostly the α- and β- polymorphs, with the proportion of α-Ce2S3 increasing at lower temperatures (~700–900 °C) and with longer reaction times. The α- form can be irreversibly transformed into β-Ce2S3 by vacuum heating at 1200 °C for 7 hours. Then γ-Ce2S3 is obtained from sintering of β-Ce2S3 powder via hot pressing at an even higher temperature (1700 °C).
α polymorph
The α polymorph of cerium(III) sulfide has the same structure as α-. It contains both 7-coordinate and 8-coordinate cerium ions, , with monocapped and bicapped trigonal prismatic coordination geometry, respectively. The sulfide ions, , are 5-coordinate. Two thirds of them adopt a square pyramidal geometry and one third adopt a trigonal bipyramidal geometry.
γ polymorph
The γ polymorph of cerium(III) sulfide adopts a cation-deficient form of the structure. 8 out of the 9 metal positions in the structure are occupied by cerium in γ-, with the remainder as vacancies. This composition can be represented by the formula . The cerium ions are 8-coordinate while the sulfide ions are 6-coordinate (distorted octahedral).
Reactions
Reported reactions of cerium(III) sulfide include those with bismuth compounds to form superconducting crystalline materials of the M(O,F)BiS2 family (with M = Ce).
The reaction of Ce2S3 with Bi2S3 and Bi2O3 in a sealed tube at 950 °C gives the parent compound CeOBiS2:
3 Ce2S3 + Bi2S3 + 2 Bi2O3 → 6 CeOBiS2
This material is superconducting on its own, but the properties can be enhanced if it is doped with fluoride by including BiF3 in the reaction mixture.
Applications
Refractory material
Cerium(III) and cerium(IV) sulfides were first investigated in the 1940s as part of the Manhattan Project, where they were considered (but ultimately not adopted) as advanced refractory materials. Their suggested application was as the crucible material for the casting of uranium and plutonium metal.
Although the sulfide's properties (high melting point, large negative ΔfG°, and chemical inertness) are suitable and cerium is a relatively common element (66 ppm, about as abundant as copper), the danger of the traditional H2S-based production route and the difficulty in controlling the formation of the resulting Ce2S3/CeS solid mixture meant that the compound was ultimately not developed further for such applications.
Pigment and other uses
The main non-research use of cerium(III) sulfide is as a specialty inorganic pigment. The strong red hues of α- and β-Ce2S3, non-prohibitive cost of cerium, and chemically inert behaviour up to high temperature are the factors which make the compound desirable as a pigment.
Regarding other applications, the γ-Ce2S3 polymorph has a band gap of 2.06 eV and high Seebeck coefficient, thus it has been proposed as a high-temperature semiconductor for thermoelectric generators. A practical implementation thereof has not been demonstrated so far.
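As a quick unit-conversion aside, the reported 2.06 eV band gap corresponds to an absorption edge in the visible range, consistent with the coloured, semiconducting character of the compound. The sketch below is illustrative only; the hc constant in eV·nm is standard, while the interpretation of the gap as an optical absorption edge is this sketch's assumption.

```python
# Convert the reported band gap of γ-Ce2S3 to the corresponding photon wavelength.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

band_gap_ev = 2.06  # reported band gap of γ-Ce2S3
wavelength_nm = HC_EV_NM / band_gap_ev
print(f"absorption edge ~ {wavelength_nm:.0f} nm")  # -> absorption edge ~ 602 nm
```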
References
Sesquisulfides
Cerium(III) compounds
Refractory materials
Inorganic pigments | Cerium(III) sulfide | [
"Physics",
"Chemistry"
] | 1,113 | [
"Inorganic compounds",
"Refractory materials",
"Inorganic pigments",
"Materials",
"Matter"
] |
60,763,706 | https://en.wikipedia.org/wiki/Wine/water%20paradox | The wine/water paradox is an apparent paradox in probability theory. It is stated by Michael Deakin as follows:
The core of the paradox lies in finding consistent and justifiable simultaneous prior distributions for the wine/water ratio and its inverse.
Calculation
This calculation is the demonstration of the paradoxical conclusion when making use of the principle of indifference.
To recapitulate, we do not know x, the wine to water ratio.
When considering the numbers above, it is only known that it lies in an interval between the minimum of one quarter wine over three quarters water on one end (i.e. 25% wine), to the maximum of three quarters wine over one quarter water on the other (i.e. 75% wine). In terms of ratios, x ≥ 1/3 resp. x ≤ 3.
Now, making use of the principle of indifference, we may assume that the ratio x is uniformly distributed. Then the chance of finding the ratio below any given fixed threshold t, with 1/3 ≤ t ≤ 3, should linearly depend on the value t. So the probability value is the number

P(x ≤ t) = (t − 1/3) / (3 − 1/3) = (3t − 1) / 8.

As a function of the threshold value t, this is the linearly growing function that is 0 resp. 1 at the end points t = 1/3 resp. the larger t = 3.
Consider the threshold t = 2, as in the example of the original formulation above. This is two parts wine vs. one part water, i.e. 66% wine. With this we conclude that

P(x ≤ 2) = (3 · 2 − 1) / 8 = 5/8.
Now consider y = 1/x, the inverted ratio of water to wine, with the equivalent wine/water mixture threshold expressed in terms of y. It lies between the inverted bounds, which are again 1/3 ≤ y ≤ 3.
Again using the principle of indifference, we get

P(y ≥ t') = (3 − t') / (3 − 1/3) = 3(3 − t') / 8.

This is the function which is 0 resp. 1 at the end points t' = 3 resp. the smaller t' = 1/3.
Now taking the corresponding threshold t' = 1/2 (also half as much water as wine), we conclude that

P(y ≥ 1/2) = 3(3 − 1/2) / 8 = 15/16.
The second probability always exceeds the first by a factor of 3/t. For our example the number is (15/16) / (5/8) = 3/2.
Paradoxical conclusion
Since x ≤ 2 and y ≥ 1/2 describe the same event, we get

P(x ≤ 2) = 5/8 ≠ 15/16 = P(y ≥ 1/2),
a contradiction.
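The mismatch between the two priors can also be checked numerically. The following Monte Carlo sketch (an illustration added here, not part of the original argument) samples the wine/water ratio uniformly, then samples the water/wine ratio uniformly, and estimates the probability of the same event — at most two parts wine to one part water — under each prior:

```python
import random

random.seed(0)
N = 200_000
lo, hi = 1 / 3, 3.0

# Prior 1: wine/water ratio x uniform on [1/3, 3]; the event is x <= 2.
p1 = sum(random.uniform(lo, hi) <= 2 for _ in range(N)) / N

# Prior 2: water/wine ratio y = 1/x uniform on [1/3, 3]; same event is y >= 1/2.
p2 = sum(random.uniform(lo, hi) >= 0.5 for _ in range(N)) / N

print(round(p1, 3), round(p2, 3))  # close to 5/8 = 0.625 and 15/16 = 0.9375
```

The two procedures describe the same physical event yet disagree, which is exactly the paradox.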
References
Probability theory paradoxes | Wine/water paradox | [
"Mathematics"
] | 380 | [
"Probability theory paradoxes",
"Mathematical problems",
"Mathematical paradoxes"
] |
62,878,887 | https://en.wikipedia.org/wiki/Curvature%20renormalization%20group%20method | In theoretical physics, the curvature renormalization group (CRG) method is an analytical approach to determine the phase boundaries and the critical behavior of topological systems. Topological phases are phases of matter that appear in certain quantum mechanical systems at zero temperature because of a robust degeneracy in the ground-state wave function. They are called topological because they can be described by different (discrete) values of a nonlocal topological invariant. This is to contrast with non-topological phases of matter (e.g. ferromagnetism) that can be described by different values of a local order parameter. States with different values of the topological invariant cannot change into each other without a phase transition. The topological invariant is constructed from a curvature function that can be calculated from the bulk Hamiltonian of the system. At the phase transition, the curvature function diverges, and the topological invariant correspondingly jumps abruptly from one value to another. The CRG method works by detecting the divergence in the curvature function, and thus determining the boundaries between different topological phases. Furthermore, from the divergence of the curvature function, it extracts scaling laws that describe the critical behavior, i.e. how different quantities (such as susceptibility or correlation length) behave as the topological phase transition is approached. The CRG method has been successfully applied to a variety of static, periodically driven, weakly and strongly interacting systems to classify the nature of the corresponding topological phase transitions.
Background
Topological phases are quantum phases of matter that are characterized by robust ground state degeneracy and quantized geometric phases. Transitions between different topological phases are usually called topological phase transitions, which are characterized by discrete jumps of the topological invariant . Upon tuning one or multiple system parameters , jumps abruptly from one integer to another at the critical point . Typically, the topological invariant takes the form of an integration of a curvature function in momentum space. Depending on the dimensionality and symmetries of the system, the curvature function can be a Berry connection, a Berry curvature, or a more complicated object.
In the vicinity of high symmetry points in a -dimensional momentum space, where is a reciprocal lattice vector, the curvature function typically displays a Lorentzian shape
where defines the width of the multidimensional peak. Approaching the critical point the peak gradually diverges, flipping sign across the transition. This behavior is displayed in the video on the side for the case .
Scaling laws, critical exponents, and universality
The divergence of the curvature function permits the definition of critical exponents. The conservation of the topological invariant , as the transition is approached from one side or the other, yields a scaling law that constrains the exponents, where is the dimensionality of the problem. These exponents serve to classify topological phase transitions into different universality classes.
To experimentally measure the critical exponents, one needs to have access to the curvature function with a certain level of accuracy. Good candidates at present are quantum engineered photonics and ultracold atomic systems. In the first case, the curvature function can be extracted from the anomalous displacement of wave packets under optical pulse pumping in coupled fibre loops. For ultracold atoms in optical lattices, the Berry curvature can be achieved through quantum interference or force-induced wave-packet velocity measurements.
Correlation function
The Fourier transform of the curvature function typically measures the overlap of certain quantum mechanical wave functions or more complicated objects, and therefore it is interpreted as a correlation function. For instance, if the curvature function is the noninteracting or many-body Berry connection or Berry curvature, the correlation function is a measure of the overlap of Wannier functions centered at two home cells that are distance apart. Because of the Lorentzian shape of the curvature function mentioned above, the Fourier transform of the curvature function decays with the length scale . Hence, is interpreted as the correlation length, and its critical exponent is assigned to be like in Landau theory. Furthermore, the correlation length is related to the localization length of topological edge states, such as Majorana modes.
Scaling equation
The scaling procedure that identifies the topological phase transitions is based on the divergence of the curvature function. It is an iterative procedure that, for a given parameter set that controls the topology, searches for a new parameter set that satisfies where is a high-symmetry point and is a small deviation away from it. This procedure searches for the path in the parameter space of along which the divergence of the curvature function reduces, yielding a renormalization group flow that flows away from the topological phase transitions. The name "curvature renormalization group" is derived precisely from this procedure that renormalizes the profile of the curvature function. Writing and , and expanding the scaling equation above to leading order yields the generic renormalization group equation
The renormalization group flow can be obtained directly as a stream plot of the right hand side of this differential equation. Numerically, this differential equation only requires the evaluation of the curvature function at few momenta. Hence, the method is a very efficient way to identify topological phase transitions, especially in periodically driven systems (aka Floquet systems) and interacting systems.
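As a concrete illustration of one discrete CRG step, the sketch below uses a hypothetical curvature function of an SSH-type two-band chain, F(k, M) = (M² + M cos k)/(1 + M² + 2M cos k), whose Brillouin-zone integral gives the winding number; the model choice and parameter names are this sketch's assumptions, not part of the text above. The step solves F(k₀, M + dM) = F(k₀ + δk, M) to leading order, so dM only requires the curvature function at a few momenta, as noted above.

```python
import math

def F(k, M):
    """Hypothetical curvature function (winding-number density of an
    SSH-like two-band model with hopping ratio M)."""
    return (M * M + M * math.cos(k)) / (1 + M * M + 2 * M * math.cos(k))

def crg_step(M, k0=math.pi, dk=0.1, eps=1e-6):
    """One discrete CRG iteration: solve F(k0, M + dM) = F(k0 + dk, M)
    to leading order, i.e. dM = [F(k0 + dk, M) - F(k0, M)] / (dF/dM at k0)."""
    dF_dM = (F(k0, M + eps) - F(k0, M - eps)) / (2 * eps)  # central difference
    return (F(k0 + dk, M) - F(k0, M)) / dF_dM

# The flow runs *away* from the critical point M = 1 on either side,
# which is how the method locates the topological phase transition.
print(crg_step(1.2) > 0, crg_step(0.8) < 0)  # -> True True
```

Iterating this step and plotting dM against M reproduces the stream-plot picture of the renormalization group flow described above.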
See also
Topological quantum number
Berry connection and curvature
Topological insulator
Periodic table of topological invariants
Dirac matter
Landau theory
Critical exponent
Scaling law
Correlation function (statistical mechanics)
Universality (dynamical systems)
Renormalization group
Floquet theory
Majorana fermion
Surface states
References
Theoretical physics | Curvature renormalization group method | [
"Physics"
] | 1,106 | [
"Theoretical physics"
] |
62,881,422 | https://en.wikipedia.org/wiki/Transverse-field%20Ising%20model | The transverse field Ising model is a quantum version of the classical Ising model. It features a lattice with nearest neighbour interactions determined by the alignment or anti-alignment of spin projections along the axis, as well as an external magnetic field perpendicular to the axis (without loss of generality, along the axis) which creates an energetic bias for one x-axis spin direction over the other.
An important feature of this setup is that, in a quantum sense, the spin projection along the axis and the spin projection along the axis are not commuting observable quantities. That is, they cannot both be observed simultaneously. This means classical statistical mechanics cannot describe this model, and a quantum treatment is needed.
Specifically, the model has the following quantum Hamiltonian:
Here, the subscripts refer to lattice sites, and the sum is done over pairs of nearest neighbour sites and . and are representations of elements of the spin algebra (Pauli matrices, in the case of spin 1/2) acting on the spin variables of the corresponding sites. They anti-commute with each other if on the same site and commute with each other if on different sites. is a prefactor with dimensions of energy, and is another coupling coefficient that determines the relative strength of the external field compared to the nearest neighbour interaction.
Phases of the 1D transverse field Ising model
Below the discussion is restricted to the one dimensional case where each lattice site is a two-dimensional complex Hilbert space (i.e., it represents a spin 1/2 particle). For simplicity here and are normalised to each have determinant -1. The Hamiltonian possesses a symmetry group, as it is invariant under the unitary operation of flipping all of the spins in the direction. More precisely, the symmetry transformation is given by the unitary .
The 1D model admits two phases, depending on whether the ground state (specifically, in the case of degeneracy, a ground state which is not a macroscopically entangled state) breaks or preserves the aforementioned spin-flip symmetry. The sign of does not impact the dynamics, as the system with positive can be mapped into the system with negative by performing a rotation around for every second site .
The model can be exactly solved for all coupling constants. However, in terms of on-site spins the solution is generally very inconvenient to write down explicitly in terms of the spin variables. It is more convenient to write the solution explicitly in terms of fermionic variables defined by Jordan-Wigner transformation, in which case the excited states have a simple quasiparticle or quasihole description.
Ordered phase
When , the system is said to be in the ordered phase. In this phase the ground state breaks the spin-flip symmetry. Thus, the ground state is in fact two-fold degenerate. For this phase exhibits ferromagnetic ordering, while for antiferromagnetic ordering exists.
Precisely, if is a ground state of the Hamiltonian, then is also a ground state, and together and span the degenerate ground state space. As a simple example, when and , the ground states are and , that is, with all the spins aligned along the axis.
This is a gapped phase, meaning that the lowest energy excited state(s) have an energy higher than the ground state energy by a nonzero amount (nonvanishing in the thermodynamic limit). In particular, this energy gap is .
Disordered phase
In contrast, when , the system is said to be in the disordered phase. The ground state preserves the spin-flip symmetry, and is nondegenerate. As a simple example, when is infinity, the ground state is , that is with the spin in the direction on each site.
This is also a gapped phase. The energy gap is .
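The two gapped phases can be seen directly by brute-force diagonalisation of a short chain. The following sketch is an illustrative finite-size check, not part of the original analysis: it builds H = −J Σ ZᵢZᵢ₊₁ − Jg Σ Xᵢ on an open chain of 8 sites (the chain length and parameter values are assumptions of this sketch) and compares the splitting of the two lowest levels in each phase.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, site, n):
    """Embed a single-site operator at position `site` in an n-site chain."""
    mats = [op if j == site else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def tfim_hamiltonian(n, g, J=1.0):
    """H = -J sum_i Z_i Z_{i+1} - J g sum_i X_i (open boundary conditions)."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_at(Z, i, n) @ op_at(Z, i + 1, n)
    for i in range(n):
        H -= J * g * op_at(X, i, n)
    return H

n = 8
for g in (0.2, 2.0):
    E = np.linalg.eigvalsh(tfim_hamiltonian(n, g))
    print(f"g={g}: splitting of two lowest levels = {E[1] - E[0]:.6f}")
# Ordered phase (g=0.2): the splitting is exponentially small in n,
# reflecting the near-degenerate symmetry-broken ground state pair.
# Disordered phase (g=2.0): a finite gap of order 2J(g-1) remains.
```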
Gapless phase
When , the system undergoes a quantum phase transition. At this value of , the system has gapless excitations and its low-energy behaviour is described by the two-dimensional Ising conformal field theory. This conformal theory has central charge , and is the simplest of the unitary minimal models with central charge less than 1. Besides the identity operator, the theory has two primary fields, one with conformal weights and another one with conformal weights .
Jordan-Wigner transformation
It is possible to rewrite the spin variables as fermionic variables, using a highly nonlocal transformation known as the Jordan-Wigner Transformation.
A fermion creation operator on site can be defined as . Then the transverse field Ising Hamiltonian (assuming an infinite chain and ignoring boundary effects) can be expressed entirely as a sum of local quadratic terms containing creation and annihilation operators. This Hamiltonian fails to conserve total fermion number and does not have the associated global continuous symmetry, due to the presence of the term. However, it does conserve fermion parity. That is, the Hamiltonian commutes with the quantum operator that indicates whether the total number of fermions is even or odd, and this parity does not change under time evolution of the system. The Hamiltonian is mathematically identical to that of a superconductor in the mean field Bogoliubov-de Gennes formalism and can be completely understood in the same standard way. The exact excitation spectrum and eigenvalues can be determined by Fourier transforming into momentum space and diagonalising the Hamiltonian.
In terms of Majorana fermions and , the Hamiltonian takes on an even simpler form (up to an additive constant): .
Kramers-Wannier duality
A nonlocal mapping of Pauli matrices known as the Kramers–Wannier duality transformation can be done as follows:
Then, in terms of the newly defined Pauli matrices with tildes, which obey the same algebraic relations as the original Pauli matrices, the Hamiltonian is simply . This indicates that the model with coupling parameter is dual to the model with coupling parameter , and establishes a duality between the ordered phase and the disordered phase. In terms of the Majorana fermions mentioned above, this duality is more obviously manifested in the trivial relabeling .
Note that there are some subtle considerations at the boundaries of the Ising chain; as a result of these, the degeneracy and symmetry properties of the ordered and disordered phases are changed under the Kramers-Wannier duality.
Generalisations
The q-state quantum Potts model and the quantum clock model are generalisations of the transverse field Ising model to lattice systems with states per site. The transverse field Ising model represents the case where .
Classical Ising Model
The quantum transverse field Ising model in dimensions is dual to an anisotropic classical Ising model in dimensions.
References
Lattice models
Spin models
Quantum models | Transverse-field Ising model | [
"Physics",
"Materials_science"
] | 1,400 | [
"Spin models",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Quantum models",
"Condensed matter physics",
"Statistical mechanics"
] |
62,884,556 | https://en.wikipedia.org/wiki/Combined%20diesel-electric%20and%20diesel | Combined diesel-electric and diesel (CODLAD) is a naval propulsion system in which an electric motor and a diesel engine act on a single propeller. The transmission system takes care of making one or both motors act on the propeller shaft.
Description
The CODLAD propulsion system is based on the use of electric motors directly connected to the shafts (generally two) of the propellers. The electric motors are powered by diesel generators. To reach higher speeds, as in CODAD propulsion systems, a higher-power diesel engine is engaged; it is then disconnected from the transmission system to return to cruising speed.
This arrangement, which uses diesel engines both for propulsion and for producing electricity for on-board services, significantly reduces costs: fewer diesel engines are needed for the ship's various services, and the electric motors require less maintenance. Furthermore, since electric motors work effectively over a wider range of shaft speeds and are connected directly to the propeller shafts, the transmission systems for coupling and decoupling the diesel engines used to reach higher speeds can be simplified.
The CODLAD system has been adopted in the new Vulcano-class logistic support ship under construction for the Italian Navy.
References
Marine propulsion
Diesel engine technology | Combined diesel-electric and diesel | [
"Engineering"
] | 242 | [
"Marine propulsion",
"Marine engineering"
] |
62,886,564 | https://en.wikipedia.org/wiki/List%20of%20mechanical%20engineering%20awards | This list of mechanical engineering awards is an index to articles about notable awards for mechanical engineering.
Awards
See also
Lists of awards
Lists of science and technology awards
List of engineering awards
References
Mechanical engineering | List of mechanical engineering awards | [
"Technology",
"Engineering"
] | 39 | [
"Science and technology awards",
"Mechanical engineering awards",
"Mechanical engineering",
"Lists of science and technology awards"
] |
44,307,291 | https://en.wikipedia.org/wiki/Chemical%20reaction%20model | Chemical reaction models transform physical knowledge into a mathematical formulation that can be utilized in computational simulation of practical problems in chemical engineering. Computer simulation provides the flexibility to study chemical processes under a wide range of conditions. Modeling of a chemical reaction involves solving conservation equations describing convection, diffusion, and reaction source for each component species.
Species transport equation
For each species i with local mass fraction Yi, the conservation equation takes the convection-diffusion form

∂(ρYi)/∂t + ∇·(ρv Yi) = −∇·Ji + Ri + Si

where Ri is the net rate of production of species i by chemical reaction and Si is the rate of creation by addition from the dispersed phase plus any user-defined sources. Ji is the diffusion flux of species i, which arises due to concentration gradients and differs between laminar and turbulent flows; in turbulent flows, computational fluid dynamics also considers the effects of turbulent diffusivity. The net source of chemical species i due to reaction, Ri, which appears as the source term in the species transport equation, is computed as the sum of the reaction sources over the NR reactions among the species.
Reaction models
These reaction rates R can be calculated by following models:
Laminar finite rate model
Eddy dissipation model
Eddy dissipation concept
Laminar finite rate model
The laminar finite rate model computes the chemical source terms using Arrhenius expressions and ignores turbulence fluctuations. It provides the exact solution for laminar flames, but gives inaccurate solutions for turbulent flames, in which turbulence strongly affects the chemical reaction rates through the highly non-linear Arrhenius kinetics. The model may nevertheless be acceptable for combustion with small turbulence fluctuations, for example in supersonic flames.
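The Arrhenius temperature dependence at the heart of the laminar finite rate model can be sketched as follows; the pre-exponential factor, temperature exponent, and activation energy below are illustrative values, not taken from any particular reaction mechanism.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate_constant(T, A, beta, Ea):
    """Modified Arrhenius form k = A * T**beta * exp(-Ea / (R T)),
    as used for elementary reaction rates in finite-rate chemistry."""
    return A * T**beta * math.exp(-Ea / (R_GAS * T))

# Illustrative parameters for a hypothetical reaction. Note the steep,
# highly non-linear growth of k with temperature -- the reason turbulent
# temperature fluctuations cannot simply be averaged away.
A, beta, Ea = 1.0e9, 0.0, 1.5e5  # 1/s, dimensionless, J/mol
for T in (800.0, 1200.0, 1600.0):
    print(f"T = {T:6.0f} K  ->  k = {arrhenius_rate_constant(T, A, beta, Ea):.3e} 1/s")
```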
Eddy dissipation model
The eddy dissipation model, or Magnussen model, based on the work of Magnussen and Hjertager, is a turbulence-chemistry reaction model. Most fuels are fast burning, and the overall rate of reaction is controlled by turbulent mixing. In non-premixed flames, turbulence slowly mixes the fuel and oxidizer into the reaction zones, where they burn quickly. In premixed flames, turbulence slowly mixes cold reactants and hot products into the reaction zones, where reaction occurs rapidly. In such cases the combustion is said to be mixing-limited, and the complex and often unknown chemical kinetics can be safely neglected. In this model the chemical reaction is governed by the large-eddy mixing time scale, and combustion initiates whenever turbulence is present in the flow, without needing an ignition source. This type of model is valid for non-premixed combustion, but for premixed flames the reactant is assumed to burn the moment it enters the computational domain. This is a shortcoming, as in practice the reactant needs some time to reach the ignition temperature before combustion begins.
Eddy dissipation concept
The eddy dissipation concept (EDC) model is an extension of the eddy dissipation model to include detailed chemical mechanism in turbulent flows. The EDC model attempts to incorporate the significance of fine structures in a turbulent reacting flow in which combustion is important. EDC has been proven efficient without the need for changing the constants for a great variety of premixed and diffusion controlled combustion problems, both where the chemical kinetics is faster than the overall fine structure mixing as well as in cases where the chemical kinetics has a dominating influence.
References
Ansys Fluent Help, Chapters 7, 8.
Henk Kaarle Versteeg, Weeratunge Malalasekera. An Introduction to Computational Fluid Dynamics: The Finite Volume Method.
Magnussen, B. F. & B. H. Hjertager (1977). "On Mathematical Models of Turbulent Combustion with Special Emphasis on Soot Formation and Combustion". Symposium (International) on Combustion. 16 (1): 719–729. doi:10.1016/S0082-0784(77)80366-4.
Bjørn F. Magnussen. Norwegian University of Science and Technology Trondheim (Norway), Computational Industry Technologies AS (ComputIT), The Eddy Dissipation Concept: A Bridge Between Science and Technology.
Schlögl, Friedrich. "Chemical reaction models for non-equilibrium phase transitions." Zeitschrift für Physik 253.2 (1972): 147–161.
Levenspiel, Octave. Chemical reaction engineering. Vol. 2. New York etc.: Wiley, 1972.
Chemical reaction engineering
Mathematical modeling | Chemical reaction model | [
"Chemistry",
"Mathematics",
"Engineering"
] | 902 | [
"Chemical engineering",
"Applied mathematics",
"Chemical reaction engineering",
"Mathematical modeling"
] |
44,308,640 | https://en.wikipedia.org/wiki/Bipedal%20gait%20cycle | A (bipedal) gait cycle is the time period or sequence of events or movements during locomotion in which one foot contacts the ground to when that same foot again contacts the ground, and involves propulsion of the centre of gravity in the direction of motion. A gait cycle usually involves co-operative movements of both the left and right legs and feet. A single gait cycle is also known as a stride.
Each gait cycle or stride has two major phases:
Stance Phase, the phase during which the foot remains in contact with the ground, and the
Swing Phase, the phase during which the foot is not in contact with the ground.
Components of gait cycle
A gait cycle consists of stance phase and swing phase. Considering the number of limb supports, the stance phase spans from initial double-limb stance to single-limb stance and terminal double-limb stance. The swing phase corresponds to the single-limb stance of the opposite leg. The stance and swing phases can be further divided by seven events into seven smaller phases in which the body postures are specific. For analyzing the gait cycle, one foot is taken as the reference, and the movements of the reference foot are studied.
Phases and events
Stance Phase: Stance phase is that part of a gait cycle during which the foot remains in contact with the ground. It constitutes 60% of the gait cycle (10% for initial double-limb stance, 40% for single-limb stance and 10% for terminal double-limb stance). Stance phase consists of four events and four phases:
Initial Contact (Heel Strike): The heel of the reference foot touches the ground in front of the body. The respective knee is extended while the hip is extending from flexed position, bringing the torso to the lowest vertical position. This event marks the initiation of stance phase.
Loading Response (Foot Flat) Phase: Loading response phase begins immediately after the heel strikes the ground. In loading response phase, the weight is transferred onto the referenced leg. It is important for weight-bearing, shock-absorption and forward progression.
Opposite Toe-off: The toes of the opposite foot are raised above the ground as the foot begins to hover forward. This event terminates the period of double-limb support.
Mid-stance Phase: It involves alignment and balancing of body weight on the reference foot regarding single-limb support. The respective knee flexes while the hip is extending, bringing the torso to the highest vertical position. The center of gravity moves laterally to the supporting-limb side. During mid-stance phase the reference foot contacts the ground flat-footed.
Heel Rise: The heel of the reference foot rises while the toes are still in contact with the ground. This event marks the end of mid-stance phase and the beginning of terminal stance phase.
Terminal Stance Phase: In this phase the heel of the reference foot continues to rise while its toes are still in contact with the ground. The center of gravity is in front of the foot.
Opposite Initial Contact: The heel of the opposite foot makes contact with the ground while the toes of the reference foot still touch the ground, providing double support. The knee and hip on the reference side are extended. The torso moves to the lowest vertical position.
Pre-swing Phase: This phase corresponds to the loading response phase of the opposite foot. The center of gravity moves to the opposite side.
Swing Phase: Swing phase is that part of the gait cycle during which the reference foot is not in contact with the ground and swings in the air. It constitutes about 40% of gait cycle. It can be separated by three events into three phases:
Toe-off: The toes of reference foot rise above the ground. Flexion of the respective knee and hip is initiated as the foot prepares to swing in air. This event is the beginning of the swing phase of the gait cycle. The body weight is single-supported by the opposite foot.
Initial Swing Phase: The reference foot moves forward towards the opposite foot, while the knee and the hip are flexing. The body trunk moves laterally to the supporting side.
Feet adjacent: The reference foot hovers above the ground adjacent to the opposite foot. The knee is most flexed while the torso moves to the highest vertical position.
Mid-swing Phase: This phase is marked by feet adjacent event. The reference foot moves forward and eventually surpasses the supporting foot while the respective hip continues flexion.
Tibial Vertical: The hip on the reference side is at its most flexed position in the gait. The orientation of respective tibia is approximately perpendicular to the ground. The event is regarded as the end of mid-swing phase.
Terminal Swing Phase: During terminal swing phase, the reference foot begins landing to the ground as the respective knee and hip begin extension. The torso surpasses the supporting foot and moves downward.
Support
Single support: In single support only one foot is in contact with the ground.
Double support: In double support both feet are in contact with the ground. Double support occurs from heel strike, continues during loading response phase, until the toes of the opposite foot rise off the ground.
Terminology
Step Length: It is defined as the distance between corresponding successive points of heel contact of the opposite feet. In a normal gait, the right step length is equal to left step length.
Stride Length: It is defined as the distance between any two successive points of heel contact of the same foot. In a normal gait, the stride length is double the step length.
Walking Base or Stride Width: It is defined as the side-to-side distance between the line of step of the two feet.
Cadence: It is defined as the number of steps per unit time. In normal gait, cadence is about 100–115 steps per minute. Cadence of a person is subject to various factors.
Comfortable Walking Speed: It is a characteristic speed at which there is least energy consumption per unit distance. It is about 80 meters per minute in a normal gait.
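These definitions are related by simple arithmetic. The sketch below uses illustrative values (the step length and cadence are assumed for the example, not taken from the text) to derive stride length and walking speed:

```python
# Illustrative gait parameters for a hypothetical adult walker.
step_length_m = 0.7          # metres per step (assumed value)
cadence_steps_per_min = 110  # steps per minute, within the normal 100-115 range

# In a normal gait the stride (one full gait cycle) is two steps.
stride_length_m = 2 * step_length_m

# Walking speed = step length x cadence.
speed_m_per_min = step_length_m * cadence_steps_per_min
print(f"stride length: {stride_length_m:.1f} m")
print(f"walking speed: {speed_m_per_min:.0f} m/min")  # near the ~80 m/min comfortable speed
```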
References
External links
Medschool.lsuhsc.edu
Kau.edu.sa
Walking
Biomechanics | Bipedal gait cycle | [
"Physics"
] | 1,237 | [
"Biomechanics",
"Mechanics"
] |