| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
55,498,066 | https://en.wikipedia.org/wiki/Developmental%20bioelectricity | Developmental bioelectricity is the regulation of cell-, tissue-, and organ-level patterning and behavior by electrical signals during the development of embryonic animals and plants. The charge carrier in developmental bioelectricity is the ion (a charged atom) rather than the electron, and electric currents and fields are generated whenever a net ion flux occurs. Cells and tissues of all types use flows of ions to communicate electrically. Endogenous electric currents and fields, ion fluxes, and differences in resting potential across tissues comprise a signalling system. It functions along with biochemical factors, transcriptional networks, and other physical forces to regulate cell behavior and large-scale patterning in processes such as embryogenesis, regeneration, and cancer suppression.
Overview
Developmental bioelectricity is a sub-discipline of biology, related to, but distinct from, neurophysiology and bioelectromagnetics. Developmental bioelectricity refers to the endogenous ion fluxes, transmembrane and transepithelial voltage gradients, and electric currents and fields produced and sustained in living cells and tissues. This electrical activity is often used during embryogenesis, regeneration, and cancer suppression—it is one layer of the complex field of signals that impinge upon all cells in vivo and regulate their interactions during pattern formation and maintenance. This is distinct from neural bioelectricity (classically termed electrophysiology), which refers to the rapid and transient spiking in well-recognized excitable cells like neurons and myocytes (muscle cells); and from bioelectromagnetics, which refers to the effects of applied electromagnetic radiation, and endogenous electromagnetics such as biophoton emission and magnetite.
The inside/outside discontinuity at the cell surface enabled by a lipid bilayer membrane (capacitor) is at the core of bioelectricity. The plasma membrane was an indispensable structure for the origin and evolution of life itself. It provided compartmentalization permitting the setting of a differential voltage/potential gradient (battery or voltage source) across the membrane, probably allowing early and rudimentary bioenergetics that fueled cell mechanisms. During evolution, the initially purely passive diffusion of ions (charge carriers) became gradually controlled by the acquisition of ion channels, pumps, exchangers, and transporters. These energetically free (resistors or conductors, passive transport) or expensive (current sources, active transport) translocators set and fine-tune voltage gradients – resting potentials – that are ubiquitous and essential to life's physiology, from bioenergetics and motion to sensing, nutrient transport, toxin clearance, and signaling in homeostatic and disease/injury conditions. Upon stimulus or barrier breaking (short-circuiting) of the membrane, ions powered by the voltage gradient (electromotive force) diffuse or leak, respectively, through the cytoplasm and interstitial fluids (conductors), generating measurable electric currents – net ion fluxes – and fields. Some ions (such as calcium) and molecules (such as hydrogen peroxide) modulate targeted translocators to produce a current, or to enhance, mitigate, or even reverse an initial current, acting as switches.
Endogenous bioelectric signals are produced in cells by the cumulative action of ion channels, pumps, and transporters. In non-excitable cells, the resting potential across the plasma membrane (Vmem) of individual cells propagates across distances via electrical synapses known as gap junctions (conductors), which allow cells to share their resting potential with neighbors. Aligned and stacked cells (such as in epithelia) generate transepithelial potentials (batteries in series) and electric fields, which likewise propagate across tissues. Tight junctions (resistors) efficiently mitigate paracellular ion diffusion and leakage, precluding a voltage short-circuit. Together, these voltages and electric fields form rich and dynamic patterns inside living bodies that demarcate anatomical features, thus acting like blueprints for gene expression and morphogenesis in some instances. More than correlations, these bioelectrical distributions are dynamic, evolving with time, with the microenvironment, and even with long-distance conditions to serve as instructive influences over cell behavior and large-scale patterning during embryogenesis, regeneration, and cancer suppression. Bioelectric control mechanisms are an important emerging target for advances in regenerative medicine, birth defects, cancer, and synthetic bioengineering.
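The resting potential set by these channels and pumps can be estimated from ion concentrations and relative membrane permeabilities using the Goldman–Hodgkin–Katz voltage equation. The sketch below is a minimal illustration, not taken from this article; the concentrations and permeability ratios are generic textbook-style assumptions.

```python
import math

# Goldman-Hodgkin-Katz voltage equation for the resting potential (Vmem).
# All concentrations (mM) and permeability ratios are illustrative assumptions.
R, T, F = 8.314, 310.0, 96485.0  # J/(mol K), body temperature in K, C/mol

def ghk_vmem(P_K, P_Na, P_Cl,
             K_o=5.0, K_i=140.0, Na_o=145.0, Na_i=12.0, Cl_o=110.0, Cl_i=10.0):
    """Resting potential in volts; note the Cl- terms are swapped (anion)."""
    num = P_K * K_o + P_Na * Na_o + P_Cl * Cl_i
    den = P_K * K_i + P_Na * Na_i + P_Cl * Cl_o
    return (R * T / F) * math.log(num / den)

# A K+-dominated membrane yields a hyperpolarized (more negative) Vmem:
print(f"Vmem ≈ {ghk_vmem(1.0, 0.05, 0.45) * 1000:.1f} mV")  # about -65 mV
```

Shifting the relative permeabilities – which is what channel expression and gating do – moves Vmem between depolarized and hyperpolarized states, the axis along which many of the patterning effects described below operate.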
History
18th century
Developmental bioelectricity began in the 18th century. Several seminal works stimulating muscle contractions using Leyden jars culminated in the publication of classical studies by Luigi Galvani in 1791 (De viribus electricitatis in motu musculari) and 1794. In these, Galvani believed he had uncovered an intrinsic electricity-producing ability in living tissues, or "animal electricity". Alessandro Volta countered that the frog's leg muscle twitching was due to external electricity – from a static electricity generator and from dissimilar metals undergoing or catalyzing electrochemical reactions. In a 1794 study, Galvani showed twitching without metallic electricity by touching the leg muscle with the cut end of a deflected sciatic nerve, definitively demonstrating "animal electricity". Unknowingly, with this and related experiments, Galvani discovered the injury current (ion leakage driven by the intact membrane/epithelial potential) and the injury potential (the potential difference between injured and intact membrane/epithelium). The injury potential was, in fact, the electrical source behind the leg contraction, as was realized in the next century. Subsequent work ultimately extended this field broadly beyond nerve and muscle to all cells, from bacteria to non-excitable mammalian cells.
19th century
Building on earlier studies, further glimpses of developmental bioelectricity came with the discovery of wound-related electric currents and fields in the 1840s, when the electrophysiologist Emil du Bois-Reymond reported macroscopic-level electrical activities in frog, fish, and human bodies. He recorded minute electric currents in live tissues and organisms with a then state-of-the-art galvanometer made of insulated copper wire coils. He unveiled the fast-changing electricity associated with muscle contraction and nerve excitation – the action potentials. Du Bois-Reymond also reported, in detail, the less fluctuating electricity at wounds – the injury current and potential – that he inflicted on himself.
Early 20th century
Developmental bioelectricity work began in earnest at the beginning of the 20th century. Ida H. Hyde studied the role of electricity in the development of eggs; T. H. Morgan and others studied the electrophysiology of the earthworm; Oren E. Frazee studied the effects of electricity on limb regeneration in amphibians; E. J. Lund explored morphogenesis in flowering plants; and Libbie Hyman studied vertebrate and invertebrate animals.
In the 1920s and 1930s, Elmer J. Lund and Harold Saxton Burr wrote multiple papers about the role of electricity in embryonic development. Lund measured currents in a large number of living model systems, correlating them to changes in patterning. In contrast, Burr used a voltmeter to measure voltage gradients, examining developing embryonic tissues and tumors in a range of animals and plants. In the 1940s and 1950s, Marsh and Beams demonstrated that applied electric fields alter the regeneration of planarians, inducing the formation of heads or tails at cut sites and reversing the primary body polarity.
Late 20th century
In the 1970s, Lionel Jaffe and Richard Nuccittelli's introduction and development of the vibrating probe, the first device for quantitative non-invasive characterization of the extracellular minute ion currents, revitalized the field.
Researchers such as Joseph Vanable, Richard Borgens, Ken Robinson, and Colin McCaig explored the roles of endogenous bioelectric signaling in limb development and regeneration, embryogenesis, organ polarity, and wound healing.
C.D. Cone studied the role of resting potential in regulating cell differentiation and proliferation.
Subsequent work has identified specific regions of the resting potential spectrum that correspond to distinct cell states such as quiescent, stem, cancer, and terminally differentiated.
Although this body of work generated a significant amount of high-quality physiological data, this large-scale biophysics approach has historically come second to the study of biochemical gradients and genetic networks in biology education, funding, and overall popularity among biologists. A key factor that contributed to this field lagging behind molecular genetics and biochemistry is that bioelectricity is inherently a living phenomenon – it cannot be studied in fixed specimens. Working with bioelectricity is more complex than traditional approaches to developmental biology, both methodologically and conceptually, as it typically requires a highly interdisciplinary approach.
Study techniques
Electrodes
The gold standard techniques for quantitatively extracting electric measurements from living specimens, from the cell to the organism level, are the glass microelectrode (or micropipette), the vibrating (or self-referencing) voltage probe, and the vibrating ion-selective microelectrode. The former is inherently invasive and the latter two are non-invasive, but all are ultra-sensitive and fast-responding sensors extensively used in a plethora of physiological conditions in widespread biological models.
The glass microelectrode was developed in the 1940s to study the action potential of excitable cells, deriving from the seminal work by Hodgkin and Huxley on the squid giant axon. It is simply a liquid salt bridge connecting the biological specimen with the electrode, protecting tissues from leachable toxins and redox reactions of the bare electrode. Owing to their low impedance, low junction potential, and weak polarization, silver electrodes are the standard transducers of ionic into electric current, via a reversible redox reaction at the electrode surface.
The vibrating probe was introduced in biological studies in the 1970s. The voltage-sensitive probe is electroplated with platinum to form a capacitive black-tip ball with a large surface area. When vibrating in an artificial or natural DC voltage gradient, the capacitive ball produces a sinusoidal AC output. The amplitude of the wave is proportional to the measured potential difference at the frequency of the vibration, efficiently filtered by a lock-in amplifier that boosts the probe's sensitivity.
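The demodulation step performed by the lock-in amplifier can be illustrated with a short sketch. This is not instrument code; the sampling rate, vibration frequency, and noise level are arbitrary assumptions chosen to show that in-phase/quadrature averaging recovers a signal buried well below the noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_vib = 50_000, 300       # sampling rate and vibration frequency, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)

true_amp = 5.0                # µV; proportional to the local DC field
noisy = true_amp * np.sin(2 * np.pi * f_vib * t) + 50.0 * rng.standard_normal(t.size)

# Multiply by in-phase and quadrature references, then average (low-pass filter).
i_comp = np.mean(noisy * np.sin(2 * np.pi * f_vib * t))
q_comp = np.mean(noisy * np.cos(2 * np.pi * f_vib * t))
print(f"recovered amplitude ≈ {2 * np.hypot(i_comp, q_comp):.2f} µV")  # ≈ 5 µV
```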
The vibrating ion-selective microelectrode was first used in 1990 to measure calcium fluxes in various cells and tissues. The ion-selective microelectrode is an adaptation of the glass microelectrode, in which an ion-specific liquid ion exchanger (ionophore) is tip-filled into a previously silanized (to prevent leakage) microelectrode. The microelectrode vibrates at low frequencies to operate in the accurate self-referencing mode. Only the specific ion permeates the ionophore, so the voltage readout tracks the ion concentration (logarithmically, following the Nernst relation) under the measuring conditions. Flux is then calculated using Fick's first law.
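A hedged worked example of the final two steps – converting the self-referenced voltage difference into a concentration difference via the electrode's Nernstian calibration slope, then applying Fick's first law. Every number below is an illustrative assumption, not a measurement.

```python
import math

slope_mV = 29.0   # Nernstian slope per decade for a divalent ion (Ca2+), ~25 °C
C0 = 1.0e-3       # background Ca2+ concentration, mol/L (assumed)
dV_uV = 10.0      # voltage difference between the two probe positions, µV
dx_cm = 10e-4     # vibration excursion: 10 µm expressed in cm
D = 7.9e-6        # free-solution diffusion coefficient of Ca2+, cm^2/s

# V = slope * log10(C), so for a small dV: dC ≈ C0 * ln(10) * dV / slope.
dC = C0 * math.log(10) * (dV_uV * 1e-3) / slope_mV   # mol/L
J = -D * (dC / 1000.0) / dx_cm                       # Fick: J = -D dC/dx, mol cm^-2 s^-1
print(f"flux ≈ {J:.2e} mol cm^-2 s^-1")
```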
Emerging optic-based techniques, such as the pH optrode (or optode), which can be integrated into a self-referencing system, may become an alternative or additional technique in bioelectricity laboratories. The optrode does not require referencing and is insensitive to electromagnetic interference, simplifying system setup and making it a suitable option for recordings where electric stimulation is applied simultaneously.
Much work to functionally study bioelectric signaling has made use of applied (exogenous) electric currents and fields via DC and AC voltage-delivering apparatus integrated with agarose salt bridges. These devices can generate countless combinations of voltage magnitude and direction, pulses, and frequencies. Currently, lab-on-a-chip mediated application of electric fields is gaining ground in the field with the possibility to allow high-throughput screening assays of the large combinatory outputs.
Fluorescence
Progress in molecular biology over the last six decades has produced powerful tools that facilitate the dissection of biochemical and genetic signals; yet they tend not to be well-suited for bioelectric studies in vivo. Prior work relied extensively on current applied directly by electrodes, an approach reinvigorated by significant recent advances in materials science and extracellular current measurement, facilitated by sophisticated self-referencing electrode systems. While electrode applications for manipulating neurally controlled body processes have recently attracted much attention, there are other opportunities for controlling somatic processes, as most cell types are electrically active and respond to ionic signals from themselves and their neighbors.
In the early part of the 21st century, a number of new molecular techniques were developed that allowed bioelectric pathways to be investigated with a high degree of mechanistic resolution, and to be linked to canonical molecular cascades. These include:
Pharmacological screens to identify endogenous channels and pumps responsible for specific patterning events;
Voltage-sensitive fluorescent reporter dyes and genetically encoded fluorescent voltage indicators for the characterization of the bioelectric state in vivo;
Panels of well-characterized dominant ion channels that can be misexpressed in cells of interest to alter the bioelectric state in desired ways; and
Computational platforms that are coming on-line to assist in building predictive models of bioelectric dynamics in tissues.
Compared with electrode-based techniques, molecular probes provide wider spatial coverage and facilitate dynamic analysis over time. Although calibration or titration can be possible, molecular probes are typically semi-quantitative, whereas electrodes provide absolute bioelectric values. Another advantage of fluorescence and other probes is their less-invasive nature and capacity for spatial multiplexing, enabling the simultaneous monitoring of large areas of embryonic or other tissues in vivo during normal or pathological patterning processes.
Roles in organisms
Early development
Work in model systems such as Xenopus laevis and zebrafish has revealed a role for bioelectric signaling in the development of the heart, face, eye, brain, and other organs. Screens have identified roles for ion channels in the size control of structures such as the zebrafish fin, while focused gain-of-function studies have shown that body parts can be re-specified at the organ level – for example, creating entire eyes in gut endoderm. As in the brain, developmental bioelectrics can integrate information across significant distances in the embryo, for example in the control of brain size by the bioelectric state of ventral tissue and the control of tumorigenesis at the site of oncogene expression by the bioelectric state of remote cells.
Human disorders, as well as numerous mouse mutants, show that bioelectric signaling is important for human development (Tables 1 and 2). These effects are pervasively linked to channelopathies, human disorders that result from mutations that disrupt ion channels.
Several channelopathies result in morphological abnormalities or congenital birth defects in addition to symptoms that affect muscle and/or neurons. For example, mutations that disrupt the inwardly rectifying potassium channel Kir2.1 cause dominantly inherited Andersen-Tawil syndrome (ATS). ATS patients experience periodic paralysis, cardiac arrhythmias, and multiple morphological abnormalities that can include cleft or high-arched palate, cleft or thin upper lip, flattened philtrum, micrognathia, dental oligodontia, enamel hypoplasia, delayed dentition eruption, malocclusion, broad forehead, wide-set eyes, low-set ears, syndactyly, clinodactyly, brachydactyly, and dysplastic kidneys. Mutations that disrupt another inwardly rectifying K+ channel, Girk2, encoded by KCNJ6, cause Keppen-Lubinsky syndrome, which includes microcephaly, a narrow nasal bridge, a high-arched palate, and severe generalized lipodystrophy (failure to generate adipose tissue). KCNJ6 lies in the Down syndrome critical region: duplications that include this region lead to craniofacial and limb abnormalities, while duplications that exclude it do not produce morphological symptoms of Down syndrome. Mutations in KCNH1, a voltage-gated potassium channel, lead to Temple-Baraitser (also known as Zimmermann-Laband) syndrome. Common features of Temple-Baraitser syndrome include absent or hypoplastic finger and toe nails and phalanges, and joint instability. Craniofacial defects associated with mutations in KCNH1 include cleft or high-arched palate, hypertelorism, dysmorphic ears, dysmorphic nose, gingival hypertrophy, and an abnormal number of teeth.
Mutations in CaV1.2, a voltage-gated Ca2+ channel, lead to Timothy syndrome, which causes severe cardiac arrhythmia (long-QT) along with syndactyly and craniofacial defects similar to those of Andersen-Tawil syndrome, including cleft or high-arched palate, micrognathia, low-set ears, syndactyly, and brachydactyly. While these channelopathies are rare, they show that functional ion channels are important for development. Furthermore, in utero exposure to anti-epileptic medications that target some ion channels also causes an increased incidence of birth defects such as oral clefts. The effects of both genetic and exogenous disruption of ion channels lend insight into the importance of bioelectric signaling in development.
Wound healing and cell guidance
One of the best-understood roles for bioelectric gradients is the tissue-level endogenous electric fields utilized during wound healing. Wound-associated electric fields are challenging to study because they are weak and slowly fluctuating, and produce no immediate biological response, in contrast with nerve impulses and muscle contraction. The development of the vibrating and glass microelectrodes demonstrated that wounds indeed produce and, importantly, sustain measurable electric currents and fields. These techniques allowed further characterization of the wound electric fields/currents at cornea and skin wounds, which show active spatial and temporal features, suggesting active regulation of these electrical phenomena. For example, wound electric currents are always strongest at the wound edge and gradually increase to reach a peak about one hour after injury. At wounds in diabetic animals, the wound electric fields are significantly compromised. Understanding the mechanisms of generation and regulation of wound electric currents/fields is expected to reveal new approaches to manipulate the electrical aspect for better wound healing.
How are the electric fields at a wound produced? Epithelia actively pump and differentially segregate ions. In the cornea epithelium, for example, Na+ and K+ are transported inwards from the tear fluid to the extracellular fluid, and Cl− is transported out of the extracellular fluid into the tear fluid. The epithelial cells are connected by tight junctions, forming the major electrically resistive barrier and thus establishing an electrical gradient across the epithelium – the transepithelial potential (TEP). Breaking the epithelial barrier, as occurs in any wound, creates a hole that breaches the high electrical resistance established by the tight junctions in the epithelial sheet, short-circuiting the epithelium locally. The TEP therefore drops to zero at the wound. However, normal ion transport continues in unwounded epithelial cells beyond the wound edge (typically <1 mm away), driving positive charge flow out of the wound and establishing a steady, laterally oriented electric field (EF) with the cathode at the wound. Skin also generates a TEP, and when a skin wound is made, similar wound electric currents and fields arise until the epithelial barrier function recovers and terminates the short-circuit at the wound. When wound electric fields are manipulated with pharmacological agents that either stimulate or inhibit the transport of ions, the wound electric fields increase or decrease accordingly, and wound healing in cornea wounds is correspondingly sped up or slowed down.
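As a rough order-of-magnitude illustration (the TEP value here is an assumption typical of published corneal measurements, not a figure from this article): if a TEP of about 40 mV collapses to zero at the wound and the drop occurs over the roughly 1 mm of epithelium flanking the wound edge, the lateral field is

$$E \approx \frac{V_{\mathrm{TEP}}}{d} = \frac{40\ \mathrm{mV}}{1\ \mathrm{mm}} = 40\ \mathrm{mV\,mm^{-1}},$$

well above the few-mV/mm field strengths to which migrating cells respond, as described below.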
How do electric fields affect wound healing? To heal wounds, cells surrounding the wound must migrate and grow directionally into the wound to cover the defect and restore the barrier. Cells important for wound healing respond remarkably well to applied electric fields of the same strength as those measured at wounds. The whole gamut of cell types, and their responses following injury, are affected by physiological electric fields. These include migration and division of epithelial cells, sprouting and extension of nerves, and migration of leukocytes and endothelial cells. The best-studied cellular behavior is the directional migration of epithelial cells in electric fields – electrotaxis. Epithelial cells migrate directionally towards the negative pole (cathode), which at a wound matches the polarity of the endogenous vectorial electric fields in the epithelium, pointing (positive to negative) towards the wound center. Epithelial cells of the cornea, keratinocytes from the skin, and many other cell types show directional migration at electric field strengths as low as a few mV mm−1. Large sheets of monolayer epithelial cells, and sheets of stratified multilayered epithelial cells, also migrate directionally. Such collective movement closely resembles what happens during wound healing in vivo, where cell sheets move collectively into the wound bed to cover the wound and restore the barrier function of the skin or cornea.
How cells sense such minute extracellular electric fields remains largely elusive. Recent research has started to identify some genetic, signaling and structural elements underlying how cells sense and respond to small physiological electric fields. These include ion channels, intracellular signaling pathways, membrane lipid rafts, and electrophoresis of cellular membrane components.
Limb regeneration in animals
In the early 20th century, Albert Mathews seminally correlated regeneration of a cnidarian polyp with the potential difference between polyp and stolon surfaces, and affected regeneration by imposing countercurrents. Amedeo Herlitzka, following in the footsteps of his mentor du Bois-Reymond's work on wound electric currents, theorized that electric currents play an early role in regeneration, perhaps initiating cell proliferation. Using electric fields overriding endogenous ones, Marsh and Beams astoundingly generated double-headed planarians and even reversed the primary body polarity entirely, with tails growing where a head had previously existed. After these seed studies, variations of the idea that bioelectricity could sense injury and trigger – or at least be a major player in – regeneration recurred over the decades up to the present day. A potential explanation lies in resting potentials (primarily Vmem and TEP), which can be, at least in part, dormant sensors (alarms) ready to detect local damage and effectors (triggers) ready to react to it.
Following the relative success of electric stimulation of non-permissive frog leg regeneration using an implanted bimetallic rod in the late 1960s, the bioelectric extracellular aspect of amphibian limb regeneration was extensively dissected over the following decades. Definitive descriptive and functional physiological data were made possible by the development of the ultra-sensitive vibrating probe and improved application devices. Amputation invariably leads to a skin-driven outward current and a consequent lateral electric field setting the cathode at the wound site. Although initially pure ion leakage, an active component eventually takes over, and blocking ion translocators typically impairs regeneration. Using biomimetic exogenous electric currents and fields, partial regeneration was achieved, which typically included tissue growth and increased neuronal tissue. Conversely, precluding or reversing the endogenous electric currents and fields impairs regeneration. These studies in amphibian limb regeneration, related studies in lampreys and mammals, and studies of bone fracture healing and in vitro systems led to the general rule that migrating cells (such as keratinocytes, leucocytes, and endothelial cells) and outgrowing cells (such as axons) contributing to regeneration undergo electrotaxis towards the cathode (the original injury site). Congruently, an anode is associated with tissue resorption or degeneration, as occurs in impaired regeneration and in osteoclastic resorption in bone. Despite these efforts, the promise of significant epimorphic regeneration in mammals remains a major frontier for future efforts, including the use of wearable bioreactors to provide an environment within which pro-regenerative bioelectric states can be driven, and continued efforts at electrical stimulation.
Recent molecular work has identified proton and sodium flux as being important for tail regeneration in Xenopus tadpoles, and shown that regeneration of the entire tail (with spinal cord, muscle, etc.) could be triggered in a range of normally non-regenerative conditions by either molecular-genetic, pharmacological, or optogenetic methods. In planaria, work on bioelectric mechanisms has revealed control of stem cell behavior, size control during remodeling, anterior-posterior polarity, and head shape. Gap junction-mediated alteration of physiological signaling produces two-headed worms in Dugesia japonica; remarkably, these animals continue to regenerate as two-headed in future rounds of regeneration months after the gap junction-blocking reagent has left the tissue. This stable, long-term alteration of the anatomical layout to which animals regenerate, without genomic editing, is an example of epigenetic inheritance of body pattern, and is also the only available "strain" of planarian species exhibiting an inherited anatomical change that is different from the wild-type.
Cancer
Defection of cells from the normally tight coordination of activity towards an anatomical structure results in cancer; it is thus no surprise that bioelectricity – a key mechanism for coordinating cell growth and patterning – is often implicated in cancer and metastasis. Indeed, it has long been known that gap junctions have a key role in carcinogenesis and progression. Channels can behave as oncogenes and are thus suitable as novel drug targets. Recent work in amphibian models has shown that depolarization of resting potential can trigger metastatic behavior in normal cells, while hyperpolarization (induced by ion channel misexpression, drugs, or light) can suppress tumorigenesis induced by expression of human oncogenes. Depolarization of resting potential appears to be a bioelectric signature by which incipient tumor sites can be detected non-invasively. Refinement of the bioelectric signature of cancer in biomedical contexts, as a diagnostic modality, is one of the possible applications of this field. Excitingly, the ambivalence of polarity – depolarization as marker and hyperpolarization as treatment – makes it conceptually possible to derive theragnostic (a portmanteau of therapeutics and diagnostics) approaches, designed to simultaneously detect and treat early tumors, in this case based on the normalization of membrane polarization.
Pattern regulation
Recent experiments using ion channel opener/blocker drugs, as well as dominant ion channel misexpression, in a range of model species have shown that bioelectricity – specifically, voltage gradients – instructs not only stem cell behavior but also large-scale patterning. Patterning cues are often mediated by spatial gradients of cell resting potentials, or Vmem, which can be transduced into second-messenger cascades and transcriptional changes by a handful of known mechanisms. These potentials are set by the function of ion channels and pumps, and shaped by gap junctional connections which establish developmental compartments (isopotential cell fields). Because both gap junctions and ion channels are themselves voltage-sensitive, cell groups implement electric circuits with rich feedback capabilities. The outputs of developmental bioelectric dynamics in vivo represent large-scale patterning decisions such as the number of heads in planarians, the shape of the face in frog development, and the size of tails in zebrafish. Experimental modulation of endogenous bioelectric prepatterns has enabled converting body regions (such as the gut) to a complete eye, inducing regeneration of appendages such as tadpole tails in non-regenerative contexts, and converting flatworm head shapes and contents to patterns appropriate to other species of flatworms, despite a normal genome. Recent work has shown the use of physiological modeling environments for identifying predictive interventions to target bioelectric states for repair of embryonic brain defects under a range of genetic and pharmacologically induced teratologies.
Future research
Life is ultimately an electrochemical enterprise; research in this field is progressing along several frontiers. First is the reductive program of understanding how bioelectric signals are produced, how voltage changes in the cell membrane are able to regulate cell behavior, and what the genetic and epigenetic downstream targets of bioelectric signals are. A few mechanisms that transduce bioelectric change into alterations of gene expression are already known, including the bioelectric control of the movement of small second-messenger molecules such as serotonin and butyrate through cells, and voltage-sensitive phosphatases, among others. Also known are numerous gene targets of voltage signaling, such as Notch, BMP, FGF, and HIF-1α. Thus, the proximal mechanisms of bioelectric signaling within single cells are becoming well understood, and advances in optogenetics and magnetogenetics continue to facilitate this research program. More challenging, however, is the integrative program of understanding how specific patterns of bioelectric dynamics help control the algorithms that accomplish large-scale pattern regulation (regeneration and development of complex anatomy). The incorporation of bioelectrics with chemical signaling in the emerging field of probing cell sensory perception and decision-making is an important frontier for future work.
Bioelectric modulation has shown control over complex morphogenesis and remodeling, not merely setting individual cell identity. Moreover, a number of the key results in this field have shown that bioelectric circuits are non-local – regions of the body make decisions based on bioelectric events at a considerable distance. Such non-cell-autonomous events suggest distributed network models of bioelectric control; new computational and conceptual paradigms may need to be developed to understand spatial information processing in bioelectrically active tissues. It has been suggested that results from the fields of primitive cognition and unconventional computation are relevant to the program of cracking the bioelectric code. Finally, efforts in biomedicine and bioengineering are developing applications such as wearable bioreactors for delivering voltage-modifying reagents to wound sites, and ion channel-modifying drugs (a kind of electroceutical) for repair of birth defects and regenerative repair. Synthetic biologists are likewise starting to incorporate bioelectric circuits into hybrid constructs.
Table 1: Ion Channels and Pumps Implicated in Patterning
Table 2: Gap Junctions Implicated in Patterning
Table 3: Ion Channel Oncogenes
References
External links
Biophysics
Electricity | Developmental bioelectricity | [
"Physics",
"Biology"
] | 6,404 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
55,507,154 | https://en.wikipedia.org/wiki/Solder%20fatigue | Solder fatigue is the mechanical degradation of solder due to deformation under cyclic loading. This can often occur at stress levels below the yield stress of solder as a result of repeated temperature fluctuations, mechanical vibrations, or mechanical loads. Techniques to evaluate solder fatigue behavior include finite element analysis and semi-analytical closed-form equations.
Overview
Solder is a metal alloy used to form electrical, thermal, and mechanical interconnections between the component and printed circuit board (PCB) substrate in an electronic assembly. Although other forms of cyclic loading are known to cause solder fatigue, it has been estimated that the largest portion of electronic failures are thermomechanically driven due to temperature cycling. Under thermal cycling, stresses are generated in the solder due to coefficient of thermal expansion (CTE) mismatches. This causes the solder joints to experience non-recoverable deformation via creep and plasticity that accumulates and leads to degradation and eventual fracture.
Historically, tin-lead solders were common alloys used in the electronics industry. Although they are still used in select industries and applications, lead-free solders have become significantly more popular due to RoHS regulatory requirements. This new trend increased the need to understand the behavior of lead-free solders.
Much work has been done to characterize the creep-fatigue behavior of various solder alloys and develop predictive life damage models using a Physics of Failure approach. These models are often used when trying to assess solder joint reliability. The fatigue life of a solder joint depends on several factors including: the alloy type and resulting microstructure, the joint geometry, the component material properties, the PCB substrate material properties, the loading conditions, and the boundary conditions of the assembly.
Thermomechanical solder fatigue
During a product's operational lifetime it undergoes temperature fluctuations from application specific temperature excursions and self-heating due to component power dissipation. Global and local mismatches of coefficient of thermal expansion (CTE) between the component, component leads, PCB substrate, and system level effects drive stresses in the interconnects (i.e. solder joints). Repeated temperature cycling eventually leads to thermomechanical fatigue.
The deformation characteristics of various solder alloys can be described at the microscale due to the differences in composition and resulting microstructure. Compositional differences lead to variations in phase(s), grain size, and intermetallics. This affects susceptibility to deformation mechanisms such as dislocation motion, diffusion, and grain boundary sliding. During thermal cycling, the solder's microstructure (grains/phases) will tend to coarsen as energy is dissipated from the joint. This eventually leads to crack initiation and propagation which can be described as accumulated fatigue damage.
The resulting bulk behavior of solder is described as viscoplastic (i.e. rate dependent inelastic deformation) with sensitivity to elevated temperatures. Most solders experience temperature exposures near their melting temperature (high homologous temperature) throughout their operational lifetime which makes them susceptible to significant creep. Several constitutive models have been developed to capture the creep characteristics of lead and lead-free solders. Creep behavior can be described in three stages: primary, secondary, and tertiary creep. When modeling solder, secondary creep, also called steady state creep (constant strain rate), is often the region of interest for describing solder behavior in electronics. Some models also incorporate primary creep. Two of the most popular models are hyperbolic sine models developed by Garofalo and Anand to characterize the steady state creep of solder. These model parameters are often incorporated as inputs in FEA simulations to properly characterize the solder response to loading.
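A minimal sketch of a Garofalo-type hyperbolic-sine law of the kind used as an FEA material-model input. The constants are placeholders, not fitted values for any real solder alloy.

```python
import math

R_GAS = 8.314  # J mol^-1 K^-1

def garofalo_creep_rate(sigma_MPa, T_K, A=1e5, alpha=0.05, n=5.0, Q=60e3):
    """Steady-state (secondary) creep strain rate, 1/s:
    d(eps)/dt = A * sinh(alpha * sigma)^n * exp(-Q / (R * T)).
    All constants are illustrative placeholders."""
    return A * math.sinh(alpha * sigma_MPa) ** n * math.exp(-Q / (R_GAS * T_K))

# Example: creep rate at 20 MPa and 100 degC (373 K); the strong temperature
# sensitivity enters through the Arrhenius term exp(-Q/RT).
print(f"{garofalo_creep_rate(20.0, 373.0):.3e} 1/s")
```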
Fatigue models
Solder damage models take a physics-of-failure based approach by relating a physical parameter that is a critical measure of the damage mechanism process (i.e. inelastic strain range or dissipated strain energy density) to cycles to failure. The relationship between the physical parameter and cycles to failure typically takes on a power law or modified power law relationship with material dependent model constants. These model constants are fit from experimental testing and simulation for different solder alloys. For complex loading schemes, Miner's linear superposition damage law is employed to calculate accumulated damage.
Coffin–Manson model
The generalized Coffin–Manson model considers the elastic and plastic strain range by incorporating Basquin's equation and takes the form:
$$\frac{\Delta\varepsilon}{2} \;=\; \frac{\sigma'_f - \sigma_m}{E}\,(2N_f)^{b} \;+\; \varepsilon'_f\,(2N_f)^{c}$$

Here Δε/2 represents the elastic-plastic cyclic strain amplitude, E the elastic modulus, σm the mean stress, and Nf the cycles to failure. The remaining variables – σ'f, ε'f, b, and c – are fatigue coefficients and exponents representing material model constants. The generalized Coffin–Manson model accounts for the effects of high cycle fatigue (HCF), driven primarily by elastic deformation, and low cycle fatigue (LCF), driven primarily by plastic deformation.
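Because the model gives strain amplitude as a monotonically decreasing function of Nf, life prediction reduces to a one-dimensional root find. A sketch with placeholder constants (not fitted solder data):

```python
import math

def strain_amplitude(N_f, E=30e3, sigma_f=100.0, sigma_m=0.0,
                     eps_f=0.3, b=-0.1, c=-0.6):
    """Elastic-plastic strain amplitude predicted at N_f cycles (stresses in MPa).
    Constants are illustrative placeholders, not fitted values."""
    return (sigma_f - sigma_m) / E * (2 * N_f) ** b + eps_f * (2 * N_f) ** c

def cycles_to_failure(target, lo=1.0, hi=1e12, iters=200):
    # Amplitude decreases monotonically with N_f, so bisect in log space.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if strain_amplitude(mid) > target else (lo, mid)
    return math.sqrt(lo * hi)

print(f"N_f ≈ {cycles_to_failure(0.005):.3e} cycles")
```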
Engelmaier model
In the 1980s Engelmaier proposed a model, in conjunction with the work of Wild, that accounted for some of the limitations of the Coffin–Manson model, such as the effects of the frequency and temperature. His model takes a similar power law form:
$$N_f \;=\; \frac{1}{2}\left(\frac{\Delta\gamma}{2\,\varepsilon'_f}\right)^{1/c}$$

Engelmaier relates the total shear strain range (Δγ) to cycles to failure (Nf). ε'f and c are model constants, where c is a function of the mean temperature during thermal cycling (Ts) and the thermal cycling frequency (f).
$$\Delta\gamma \;=\; C\,\frac{L_D}{h_s}\,\Delta\alpha\,\Delta T$$

Δγ is calculated as a function of the distance from the neutral point (LD), the solder joint height (hs), the coefficient of thermal expansion mismatch (Δα), and the change in temperature (ΔT). Here C is an empirical model constant.
This model was initially proposed for leadless devices with tin-lead solder. The model has since been modified by Engelmaier and others to account for other phenomena such as leaded components, thermal cycling dwell times, and lead-free solders. While initially a substantial improvement over other techniques to predict solder fatigue, such as testing and simple acceleration transforms, it is now generally acknowledged that Engelmaier and other models that are based on strain range do not provide a sufficient degree of accuracy.
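A sketch combining the two equations above into a life estimate for a leadless chip component. The form of c(Ts, f) and ε'f = 0.325 are the commonly quoted tin-lead parameters from the original model; the geometry, CTE mismatch, and correction constant C are illustrative assumptions, and per the caveats above the result should be treated as a rough screen rather than an accurate prediction.

```python
import math

def engelmaier_cycles(L_D_mm, h_s_mm, d_alpha_ppm, dT, T_s_C, f_per_day,
                      C=0.5, eps_f=0.325):
    """Engelmaier leadless-joint life estimate (tin-lead constants assumed)."""
    d_gamma = C * (L_D_mm / h_s_mm) * (d_alpha_ppm * 1e-6) * dT
    # Commonly quoted fatigue exponent, with f in cycles/day and Ts in degC:
    c = -0.442 - 6e-4 * T_s_C + 1.74e-2 * math.log(1 + f_per_day)
    return 0.5 * (d_gamma / (2 * eps_f)) ** (1 / c)

# Chip-resistor-like geometry, 0-100 degC cycle once per day (all assumed):
print(f"{engelmaier_cycles(3.2, 0.1, 14.0, 100.0, 50.0, 1.0):.0f} cycles")
```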
Darveaux model
Darveaux proposed a model relating the quantity of volume weighted average inelastic work density, the number of cycles to crack initiation, and the crack propagation rate to the characteristic cycles to failure.
$$N_0 = K_1\,\Delta W^{K_2}, \qquad \frac{da}{dN} = K_3\,\Delta W^{K_4}, \qquad N_f = N_0 + \frac{a}{\,da/dN\,}$$

In the first equation, N0 represents the number of cycles to crack initiation, ΔW the inelastic work density, and K1 and K2 material model constants. In the second equation, da/dN represents the crack propagation rate, with K3 and K4 material model constants; the crack propagation rate is approximated as constant. In the third equation, Nf represents the characteristic cycles to failure and a the characteristic crack length. Model constants can be fit for different solder alloys using a combination of experimental testing and finite element analysis (FEA) simulation.
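Once ΔW has been extracted from an FEA solution, the model's bookkeeping is straightforward. The constants below are hypothetical stand-ins; real K1–K4 values are fitted per alloy, joint geometry, and meshing procedure.

```python
def darveaux_life(dW, a_char, K1=1000.0, K2=-1.6, K3=1e-6, K4=1.0):
    """Characteristic cycles to failure from inelastic work density dW.
    Constants and units are hypothetical placeholders."""
    N0 = K1 * dW ** K2        # cycles to crack initiation
    dadN = K3 * dW ** K4      # crack growth per cycle, approximated constant
    return N0 + a_char / dadN

# dW: volume-weighted average inelastic work density per cycle;
# a_char: characteristic crack length (e.g. the joint diameter).
print(f"{darveaux_life(dW=0.1, a_char=0.0005):.0f} cycles")
```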
The Darveaux model has been found to be relatively accurate by several authors. However, due to the expertise, complexity, and simulation resources required, its use has been primarily limited to component manufacturers evaluating component packaging. The model has not received acceptance in regards to modeling solder fatigue across an entire printed circuit assembly and has been found to be inaccurate in predicting system-level effects (triaxiality) on solder fatigue.
Blattau model
The current solder joint fatigue model preferred by the majority of electronic OEMs worldwide is the Blattau model, which is available in the Sherlock Automated Design Analysis software. The Blattau model is an evolution of the previous models discussed above. Blattau incorporates the use of strain energy proposed by Darveaux, while using closed-form equations based on classical mechanics to calculate the stress and strain applied to the solder interconnect. For a simple leadless chip component, the force transmitted to the solder joint is computed from the following parameters:
Here α is the CTE, T is temperature, LD is the distance to the neutral point, E is the elastic modulus, A is the area, h is the thickness, G is the shear modulus, ν is Poisson's ratio, and a is the edge length of the copper bond pad. The subscript 1 refers to the component, 2 and b refer to the board, and s refers to the solder joint. The shear stress (Δτ) is then calculated by dividing the resulting force by the effective solder joint area. Strain energy is computed from the shear strain range and shear stress using the following relationship:
$$\Delta W \;=\; \tfrac{1}{2}\,\Delta\gamma\,\Delta\tau$$

This approximates the hysteresis loop as roughly equilateral in shape. Blattau uses this strain energy value in conjunction with models developed by Syed to relate dissipated strain energy to cycles to failure.
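A sketch of these last two steps. The half-product loop-area approximation is the relationship above; the energy-to-life mapping is written as a generic power law with hypothetical constants standing in for the fitted Syed coefficients.

```python
def strain_energy_density(d_gamma, d_tau_MPa):
    """Hysteresis-loop area per the approximation above, MPa (= MJ/m^3)."""
    return 0.5 * d_gamma * d_tau_MPa

def syed_style_life(dW, W0=0.0015, m=1.0):
    """Generic energy-based life law N_f = (W0 * dW)^-m; constants hypothetical."""
    return (W0 * dW) ** (-m)

dW = strain_energy_density(d_gamma=0.01, d_tau_MPa=20.0)  # -> 0.1 MPa
print(f"{syed_style_life(dW):.0f} cycles")                # ~6667 with these inputs
```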
Other fatigue models
The Norris–Landzberg model is a modified Coffin–Manson model.
Additional strain range and strain energy based models have been proposed by several others.
Vibration and cyclic mechanical fatigue
While not as prevalent as thermomechanical solder fatigue, vibration fatigue and cyclic mechanical fatigue are also known to cause solder failures. Vibration fatigue is typically considered to be high cycle fatigue (HCF) with damage driven by elastic deformation and sometimes plastic deformation. This can depend on the input excitation for both harmonic and random vibration. Steinberg developed a vibration model to predict time to failure based on the calculated board displacement. This model takes into account the input vibration profile such as the power spectral density or acceleration time history, the natural frequency of the circuit card, and the transmissibility. Blattau developed a modified Steinberg model that uses board level strains rather than displacement and has sensitivity to individual package types.
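For reference, a commonly reproduced form of Steinberg's allowable-displacement criterion (inch units). The 0.00022 constant and the packaging factors are as usually quoted from his text; verify against the original before any design use.

```python
import math

def steinberg_allowable_z(B_in, c, h_in, r, L_in):
    """Allowable single-amplitude board deflection (inches) for the target
    fatigue life in Steinberg's criterion.
    B: board edge length parallel to the component; c: packaging constant;
    h: board thickness; r: relative position factor; L: component length."""
    return 0.00022 * B_in / (c * h_in * r * math.sqrt(L_in))

# Example: a DIP (c = 1.0) at the board center (r = 1.0) on a 4 x 0.062 in board:
print(f"{steinberg_allowable_z(4.0, 1.0, 0.062, 1.0, 0.75):.4f} in")
```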
Additionally, low-temperature isothermal mechanical cycling is typically modeled with a combination of LCF and HCF strain range or strain energy models. The solder alloy, assembly geometry and materials, boundary conditions, and loading conditions determine whether fatigue damage is dominated by elastic (HCF) or plastic (LCF) deformation. At lower temperatures and faster strain rates, creep can be approximated as minimal and any inelastic damage is dominated by plasticity. Several strain range and strain energy models, such as the generalized Coffin–Manson model, have been employed in such cases, and much work has been done to characterize the model constants of various damage models for different alloys.
See also
Cold solder joint
Creep (deformation)
Fatigue (material)
Plasticity (physics)
Poor metal
Potting (electronics)
Vibration fatigue
References
Further reading
External links
Solder joint fatigue calculators
Soldering defects
Fracture mechanics | Solder fatigue | [
"Materials_science",
"Technology",
"Engineering"
] | 2,208 | [
"Structural engineering",
"Fracture mechanics",
"Technological failures",
"Materials science",
"Soldering defects",
"Materials degradation"
] |
56,879,891 | https://en.wikipedia.org/wiki/Discovery%20Seamounts | The Discovery Seamounts are a chain of seamounts in the Southern Atlantic Ocean, including Discovery Seamount. The seamounts are east of Gough Island and once formed islands. Various volcanic rocks as well as glacial dropstones and sediments have been dredged from the Discovery Seamounts.
The Discovery Seamounts appear to be a volcanic seamount chain produced by the Discovery hotspot, whose earliest eruptions may have occurred in the ocean, in Cretaceous kimberlite fields in southern Namibia, or in the Karoo-Ferrar large igneous province. The seamounts formed between 41 and 35 million years ago; presently the hotspot is thought to lie southwest of the seamounts, where there are geological anomalies in rocks from the Mid-Atlantic Ridge that may reflect the presence of a neighbouring hotspot.
Name and discovery
Discovery Seamount was discovered in 1936 by the research ship RRS Discovery II. It was named Discovery Bank by the crew of a German research ship, RV Schwabenland. Another name, Discovery Tablemount, was coined in 1963. In 1993 the name "Discovery Bank" was transferred by the General Bathymetric Chart of the Oceans to another seamount at Kerguelen, leaving the name "Discovery Seamounts" for the seamount group.
Geography and geomorphology
The Discovery Seamounts are a group of 12 seamounts east of Gough Island and southwest of Cape Town. The seamounts rise high above the surrounding seafloor, and their summits reach varying minimum depths below the sea surface. They are guyots – former islands that were eroded to a flat plateau and submerged through thermal subsidence of the lithosphere. These seamounts are also referred to as the Discovery Rise and are subdivided into a northwestern and a southeastern trend. The group extends over a broad east-west region.
The largest of these seamounts is named Discovery Seamount. It is covered with ice-rafted debris and fossil-bearing sediments, which have been used to infer paleoclimate conditions in the region during the Pleistocene. Other evidence has been used to postulate that the seamount subsided during the late Pleistocene. Other named seamounts are Shannon Seamount to the southeast and Heardman Seamount due south of Discovery. The seafloor is covered by ponded sediments, sand waves, rocks, rubble and biogenic deposits; sediment covers most of the ground.
The crust underneath Discovery Seamount is about 67 million years (late Cretaceous) old. A fracture zone (a site of crustal weakness) is located nearby.
Geology
The Southern Atlantic Ocean contains a number of volcanic systems such as the Discovery Seamounts, the Rio Grande Rise, the Shona Ridge and the Walvis Ridge. Their existence is commonly attributed to hotspots, although this interpretation has been challenged. The hotspot origin of Discovery and the Walvis–Tristan da Cunha seamount chains was proposed first in 1972. In the case of the Shona Ridge and the Discovery Seamounts, the theory postulates that they formed as the African Plate moved over the Shona hotspot and the Discovery hotspot, respectively.
The Discovery hotspot, if it exists, would be located southwest of the Discovery Seamounts, off the Mid-Atlantic Ridge. The seamounts peter out in that direction, but the Little Ridge close to the Mid-Atlantic Ridge may be their continuation after the hotspot crossed the Agulhas Fracture Zone. The Discovery Ridge close to the Mid-Atlantic Ridge may come from the hotspot as well. Low seismic velocity anomalies have been detected in the mantle southwest of the Discovery Seamounts and may constitute the Discovery hotspot. Deeper in the mantle, the Discovery hotspot appears to connect with the Shona and Tristan hotspots into a single plume, which in turn emanates from the African superplume and might form a "curtain" of hotspots at the edge of the superplume. Material from the Discovery hotspot has reached as far as Patagonia in South America, where it appears in volcanoes.
Magma may flow from the Discovery hotspot to the Mid-Atlantic Ridge, feeding the production of excess crustal material at its intersection with the Agulhas-Falklands Fracture Zone, one of the largest transform faults of Earth. There is a region on the Mid-Atlantic Ridge southwest of the seamounts where there are fewer earthquakes than elsewhere along the ridge, the central valley of the ridge is absent, and where dredged rocks share geochemical traits with the Discovery Seamount. Petrological anomalies at spreading ridges have been often attributed to the presence of mantle plumes close to the ridge, and such has been proposed for the Discovery hotspot as well. Alternatively, the Discovery hotspot may have interacted with the ridge in the past, and the present-day mantle temperature and neodymium isotope anomalies next to the ridge could be left from this past interaction.
The Agulhas-Falkland fracture zone has an unusual structure on the African Plate, where it displays the Agulhas Ridge, two high ridge segments that run parallel to each other. This unusual structure may be due to magma from the Discovery hotspot, which would have been channelled to the Agulhas Ridge.
Whether there is a link between the Discovery hotspot and Gough Island or the Tristan hotspot is unclear. An alternative hypothesis is that the Discovery Seamounts formed when magma rose along a fracture zone or other crustal weakness.
Composition
Rocks dredged from the seamounts include lavas, pillow lavas and volcaniclastic rocks. Geochemically they are classified as alkali basalt, basalt, phonolite, tephriphonolite, trachyandesite, trachybasalt and trachyte. Minerals contained in the rocks include alkali feldspar, apatite, biotite, clinopyroxene, iron and titanium oxides, olivine, plagioclase, sphene and spinel. Other rocks are continental crust rocks, probably glacial dropstones, and manganese.
The Discovery hotspot appears to have erupted two separate sets of magmas with distinct compositions in a north-south pattern, similar to the Tristan da Cunha-Gough Island hotspot. The composition of the Discovery Seamounts rocks has been compared to Gough Island. The more felsic rocks at Discovery appear to have formed in magma chambers, similar to felsic rocks at other Atlantic Ocean islands.
Biology
Seamounts tend to concentrate food sources from seawater and thus draw numerous animal species. In the Discovery Seamounts they include bamboo corals, brachiopods, cephalopods, cirripedes, sea fans, sea urchins and sea whips. There are 150 fish species at Discovery Seamount, including the pygmy flounder; the deep-sea hatchetfish Maurolicus inventionis and the codling Guttigadus nudirostre are endemic to Discovery Seamount. Fossil corals have been recovered in dredges, while no stone coral colonies were reported during a 2019 investigation.
Both Japanese and Soviet fishers trawled the seamounts during the 1970s and 1980s, but there was no commercial exploitation of the resources. Observations in 2019 detected changes in the Discovery Seamount ecosystems that may be due to fishing or sea urchin outbreaks.
Eruption history
A number of dates ranging from 41 to 35 million years ago have been obtained on dredged samples from the seamounts on the basis of argon-argon dating. The age of the seamounts decreases in southwest direction, similar to the Walvis Ridge, and at a similar rate. It is possible that Discovery Seamount split into a northern and southern part about 20 million years ago. Activity there may have continued until 7-6.5 million years ago.
Unlike the Walvis Ridge, which is connected to the Etendeka flood basalts, the Discovery Seamounts do not link with onshore volcanic features. However, it has been proposed that the 70- to 80-million-year-old Blue Hills, Gibeon, and Gross Brukkaros kimberlite fields in southern Namibia may have been formed by the Discovery hotspot, and some plate reconstructions place it underneath the Karoo-Ferrar large igneous province at the time at which it was emplaced. Kimberlites in South Africa and the Greater Cederberg-False Bay large igneous province have also been associated with the Discovery hotspot. The latter large igneous province may have formed at a triple junction around the nascent South Atlantic Ocean and, together with hotspots farther north, precipitated the rifting of the South Atlantic. Between 60 and 40 million years ago the hotspot was located close to the spreading ridge of the South Atlantic.
References
Sources
Seamounts of the Atlantic Ocean
Oceanography
Submarine volcanoes
Eocene volcanoes | Discovery Seamounts | [
"Physics",
"Environmental_science"
] | 1,855 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
56,887,061 | https://en.wikipedia.org/wiki/SAE%20J306 | SAE J306 is a standard that defines the viscometric properties of automotive gear oils. It is maintained by SAE International. Key parameters for this standard are the kinematic viscosity of the gear oil, the maximum temperature at which the oil has a viscosity of 150,000 cP, and a measure of its shear stability through the KRL test.
References
Lubrication
Gear oils
Automotive standards
Viscosity | SAE J306 | [
"Physics"
] | 91 | [
"Physical phenomena",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties"
] |
41,010,974 | https://en.wikipedia.org/wiki/LASNEX | LASNEX is a computer program that simulates the interactions between x-rays and a plasma, along with many effects associated with these interactions. The program is used to predict the performance of inertial confinement fusion (ICF) devices such as the Nova laser or proposed particle beam "drivers".
Versions of LASNEX have been used since the late 1960s or early 1970s, and the program has been constantly updated. LASNEX's existence was mentioned in John Nuckolls' seminal paper in Nature in 1972 that first widely introduced the ICF concept, saying it was "...like breaking an enemy code. It tells you how many divisions to bring to bear on a problem."
LASNEX uses a 2-dimensional finite element method (FEM) for calculations, breaking down the experimental area into a grid of arbitrary polygons. Each node on the grid records values for various parameters in the simulation. Values for thermal (low-energy) electrons and ions, super-thermal (high-energy and relativistic) electrons, x-rays from the laser, reaction products and the electric and magnetic fields were all stored for each node. The simulation engine then evolves the system forward through time, reading values from the nodes, applying formulas, and writing them back out. The process is very similar to other FEM systems, like those used in aerodynamics.
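LASNEX itself is not publicly available, but the node-based update pattern described above can be illustrated with a toy sketch: store field values at nodes, compute local interactions, and march the coupled system forward in time. The example below evolves a single diffusing scalar on a regular grid, whereas the real code couples many fields on arbitrary polygons.

```python
import numpy as np

# Toy illustration only -- not LASNEX. One scalar field (say, electron
# temperature) stored per node, updated by an explicit diffusion step.
nx = ny = 64
dt, kappa = 1e-3, 0.1
T_e = np.zeros((nx, ny))
T_e[nx // 2, ny // 2] = 1.0          # energy deposited by the "driver"

for _ in range(1000):
    lap = (np.roll(T_e, 1, 0) + np.roll(T_e, -1, 0) +
           np.roll(T_e, 1, 1) + np.roll(T_e, -1, 1) - 4 * T_e)
    T_e += dt * kappa * lap           # read node values, apply formula, write back

print(f"peak after diffusion: {T_e.max():.4f}")
```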
In spite of numerous problems in very early ICF research, LASNEX offered clear suggestions that slight increases in performance would be all that was needed to reach ignition. By the late 1970s, further work with LASNEX indicated that the issue was not energy so much as the number of laser beams, and suggested that the Shiva laser, with 10 kJ of energy in 20 beams, would reach ignition. It did not, as the implosions failed to suppress the Rayleigh–Taylor instability. A review of the progress by The New York Times the following year noted that the system "fell short of the more optimistic estimates by a factor of 10,000".
Real-world results from the Shiva project were then used to tune the LASNEX code, which now predicted that a somewhat larger machine, the Nova laser, would reach ignition. It did not; although Nova demonstrated fusion reactions on a large scale, it was far from ignition.
Nova's results were also used to tune the LASNEX system, which once again predicted that ignition could be reached, this time with a significantly larger machine. Given the past failures and rising costs, the Department of Energy decided to directly test the concept with a series of underground nuclear tests known as "Halite" and "Centurion", depending on which lab was handling the experiment. Halite/Centurion placed typical ICF targets in hohlraums, metal cylinders intended to smooth out the driver's energy so it shines on the fuel target evenly. The hohlraum/fuel assemblies were then placed at various distances from a small atomic bomb, detonation of which released significant quantities of x-rays. These x-rays heated the hohlraums until they glowed in the x-ray spectrum (having been heated "x-ray hot" as opposed to "white hot") and it was this smooth x-ray illumination that started the fusion reactions within the fuel. These results demonstrated that the amount of energy needed to cause ignition was approximately 100 MJ, about 25 times greater than any machine that was being considered.
The data from Halite/Centurion was used to further tune LASNEX, which then predicted that careful shaping of the laser pulse would reduce the energy required by a factor of about 100, to between 1 and 2 MJ, so a design with a total output of 4 MJ was chosen to provide a margin of safety. This emerged as the National Ignition Facility concept. In 2022, NIF achieved ignition, triggering a self-sustaining fusion reaction which released 3.15 MJ of energy using 2.05 MJ of laser energy.
For these reasons, LASNEX is somewhat controversial in the ICF field. More accurately, LASNEX generally predicted a device's low-energy behaviour quite closely, but became increasingly inaccurate as the energy levels were increased.
Advanced 3D versions of the same basic concept, like ICF3D and HYDRA, continue to drive modern ICF design, and likewise have failed to closely match experimental performance.
References
Citations
Bibliography
Nuclear fusion | LASNEX | [
"Physics",
"Chemistry"
] | 897 | [
"Nuclear fusion",
"Nuclear physics"
] |
41,015,495 | https://en.wikipedia.org/wiki/Comparison%20of%20EM%20simulation%20software | The following table lists software packages with their own article on Wikipedia that are nominal EM (electromagnetic) simulators.
References
Software comparisons | Comparison of EM simulation software | [
"Technology"
] | 27 | [
"Software comparisons",
"Computing comparisons"
] |
41,017,951 | https://en.wikipedia.org/wiki/Peptoid%20nanosheet | In nanobiotechnology, a peptoid nanosheet is a synthetic protein structure made from peptoids. Peptoid nanosheets have a thickness of about three nanometers and a length of up to 100 micrometers, meaning that they have a two-dimensional, flat shape that resembles paper on the nanoscale.
This makes them one of the thinnest known two-dimensional organic crystalline materials, with an area-to-thickness ratio of greater than 10⁹ nm. Peptoid nanosheets were discovered in the laboratory of Dr. Ron Zuckermann at the Lawrence Berkeley National Laboratory in 2010. Due to the ability to customize peptoids and therefore the properties of the peptoid nanosheet, it has possible applications in the areas of drug and small molecule delivery and biosensing.
Synthesis
For assembly, a purified amphiphilic polypeptoid of specific sequence is dissolved in aqueous solution. These molecules form a monolayer (Langmuir–Blodgett film) at the air-water interface with their hydrophobic side chains oriented in air and hydrophilic side chains in the water. When this monolayer is compressed, it buckles into a bilayer with the hydrophobic groups forming the interior core of the peptoid nanosheet. This method has been standardized in the Zuckermann laboratory by repetitively tilting vials of peptoid solution to 85° before returning the vials to the upright position. This repetitive vial "rocking" motion lessens the interfacial area of the air-water interface inside the vial, compressing the peptoid monolayer by a factor of four and causing the monolayer to buckle into peptoid nanosheets. Using this method, nanosheets are produced in high yield, and 95% of the peptoid polymer starting material is efficiently converted into peptoid nanosheets after rocking the vials several hundred times.
Applications
Peptoid nanosheets have a very high surface area, which can be readily functionalized to serve as a platform for sensing and templating. Also, their hydrophobic interiors can accommodate hydrophobic small molecule cargos, which have been demonstrated by the sequestration of Nile red when this dye was injected into an aqueous solution of the peptoid nanosheets. For these reasons, the hydrophobic interior of the 2D nanosheets could be an attractive platform for loading or embedding hydrophobic cargo, such as drug molecules, fluorophores, aromatic compounds, and metal nanoparticles.
See also
Nanosheet
Langmuir–Blodgett film
Nanosheets.org Images, videos and interactive molecular models of the peptoid nanosheet.
References
Protein structure
Two-dimensional nanomaterials
Crystals
Organic chemistry | Peptoid nanosheet | [
"Chemistry",
"Materials_science"
] | 587 | [
"Crystallography",
"Crystals",
"Structural biology",
"nan",
"Protein structure"
] |
48,744,418 | https://en.wikipedia.org/wiki/Spatial%20dispersion | In the physics of continuous media, spatial dispersion is usually described as a phenomenon where material parameters such as permittivity or conductivity have dependence on wavevector. Normally such a dependence is assumed to be absent for simplicity, however spatial dispersion exists to varying degrees in all materials.
The underlying physical reason for the wavevector dependence is often that the material has some spatial structure smaller than the wavelength of any signals (such as light or sound) being considered. Since these small spatial structures cannot be resolved by the waves, only indirect effects (e.g. wavevector dependence) remain detectable. An example of spatial dispersion is that of visible light propagating through a crystal such as calcite, where the refractive index depends on the direction of travel (the orientation of the wavevector) with respect to the crystal structure. In such a case, although the light cannot resolve the individual atoms, they nevertheless can as an aggregate affect how the light propagates. Another common mechanism is that the (e.g.) light is coupled to an excitation of the material, such as a plasmon.
Spatial dispersion can be compared to temporal dispersion, the latter often just called dispersion. Temporal dispersion represents memory effects in systems, commonly seen in optics and electronics. Spatial dispersion on the other hand represents spreading effects and is usually significant only at microscopic length scales. Spatial dispersion contributes relatively small perturbations to optics, providing weak effects such as optical activity. Spatial dispersion and temporal dispersion may occur in the same system.
Origin: nonlocal response
The origin of spatial dispersion can be modelled as a nonlocal response, where response to a force field appears at many locations, and can appear even in locations where the force is zero. This usually arises due to a spreading of effects by the hidden microscopic degrees of freedom.
As an example, consider the current $J(x,t)$ that is driven in response to an electric field $E(x,t)$, which is varying in space ($x$) and time ($t$). Simplified laws such as Ohm's law would say that these are directly proportional to each other, $J(x,t) = \sigma E(x,t)$, but this breaks down if the system has memory (temporal dispersion) or spreading (spatial dispersion). The most general linear response is given by:

$J(x,t) = \int dx' \int_{-\infty}^{t} dt'\, \sigma(x, x', t, t')\, E(x', t')$

where $\sigma(x, x', t, t')$ is the nonlocal conductivity function.

If the system is invariant in time (time translation symmetry) and invariant in space (space translation symmetry), then we can simplify because $\sigma(x, x', t, t') = f(x - x', t - t')$ for some convolution kernel $f$. We can also consider plane wave solutions for $E$ and $J$ like so:

$E(x,t) = E_0\, e^{i(kx - \omega t)}, \qquad J(x,t) = J_0\, e^{i(kx - \omega t)}$

which yields a remarkably simple relationship between the two plane waves' complex amplitudes:

$J_0 = \sigma(k, \omega)\, E_0$

where the function $\sigma(k, \omega)$ is given by a Fourier transform of the space-time response function:

$\sigma(k, \omega) = \int dx \int_0^{\infty} dt\, f(x, t)\, e^{-i(kx - \omega t)}$

The conductivity function $\sigma$ has spatial dispersion if it is dependent on the wavevector $k$. This occurs if the spatial function $f(x - x', t - t')$ is not a pointlike (delta function) response in $x - x'$.
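A hedged numerical check of this Fourier relationship may be useful; the separable exponential kernel and the parameter values lam (spreading length) and tau (memory time) below are illustrative assumptions, not from the article.

```python
import numpy as np

# Take a nonlocal conductivity kernel f(x, t) that spreads over a length lam
# (spatial dispersion) and remembers over a time tau (temporal dispersion),
# and evaluate sigma(k, w) = int dx int_0^inf dt f(x,t) exp(-i(k x - w t)).
sigma0, lam, tau = 1.0, 2.0, 0.5

x = np.linspace(-60, 60, 4001)
t = np.linspace(0, 40, 4001)
f_x = np.exp(-np.abs(x) / lam) / (2 * lam)   # spreading part
f_t = np.exp(-t / tau) / tau                 # memory part (causal)

def sigma(k, w):
    """Numerical Fourier transform of the separable kernel f(x, t)."""
    fx = np.trapz(f_x * np.exp(-1j * k * x), x)
    ft = np.trapz(f_t * np.exp(1j * w * t), t)
    return sigma0 * fx * ft

# Analytic result for this kernel: sigma0 / ((1 + (k*lam)**2) * (1 - 1j*w*tau)).
k, w = 0.7, 1.3
print(sigma(k, w))
print(sigma0 / ((1 + (k * lam) ** 2) * (1 - 1j * w * tau)))
```

The k-dependent factor is the spatial dispersion; the omega-dependent factor is ordinary (temporal) dispersion.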
Spatial dispersion in electromagnetism
In electromagnetism, spatial dispersion plays a role in a few material effects such as optical activity and doppler broadening. Spatial dispersion also plays an important role in the understanding of electromagnetic metamaterials. Most commonly, the spatial dispersion in permittivity ε is of interest.
Crystal optics
Inside crystals there may be a combination of spatial dispersion, temporal dispersion, and anisotropy. The constitutive relation for the polarization vector can be written as:

$P_i(k, \omega) = \varepsilon_0\, \chi_{ij}(k, \omega)\, E_j(k, \omega)$

i.e., the permittivity, $\varepsilon_{ij}(k,\omega) = \varepsilon_0\big(\delta_{ij} + \chi_{ij}(k,\omega)\big)$, is a wavevector- and frequency-dependent tensor.

Considering Maxwell's equations, one can find the plane wave normal modes inside such crystals. These occur when the following relationship is satisfied for a nonzero electric field vector $E$:

$\big(k_i k_j - k^2 \delta_{ij}\big) E_j + \frac{\omega^2}{c^2}\, \frac{\varepsilon_{ij}(k, \omega)}{\varepsilon_0}\, E_j = 0$
Spatial dispersion in can lead to strange phenomena, such as the existence of multiple modes at the same frequency and wavevector direction, but with different wavevector magnitudes.
Near crystal surfaces and boundaries, it is no longer valid to describe the system response in terms of wavevectors. For a full description it is necessary to return to a full nonlocal response function (without translational symmetry); however, the end effect can sometimes be described by "additional boundary conditions" (ABCs).
In isotropic media
In materials that have no relevant crystalline structure, spatial dispersion can be important.
Although symmetry demands that the permittivity is isotropic for zero wavevector, this restriction does not apply for nonzero wavevector. The non-isotropic permittivity for nonzero wavevector leads to effects such as optical activity in solutions of chiral molecules. In isotropic materials without optical activity, the permittivity tensor can be broken down to transverse and longitudinal components, referring to the response to electric fields either perpendicular or parallel to the wavevector.
For frequencies nearby an absorption line (e.g., an exciton), spatial dispersion can play an important role.
Landau damping
In plasma physics, a wave can be collisionlessly damped by particles in the plasma whose velocity matches the wave's phase velocity. This is typically represented as a spatially dispersive loss in the plasma's permittivity.
Permittivity–permeability ambiguity at nonzero frequency
At nonzero frequencies, it is possible to represent all magnetizations as time-varying polarizations. Moreover, since the electric and magnetic fields are directly related by $\nabla \times E = -\partial B/\partial t$, the magnetization induced by a magnetic field can be represented instead as a polarization induced by the electric field, though with a highly dispersive relationship.
What this means is that at nonzero frequency, any contribution to permeability μ can instead be alternatively represented by a spatially dispersive contribution to permittivity ε. The values of the permeability and permittivity are different in this alternative representation, however this leads to no observable differences in real quantities such as electric field, magnetic flux density, magnetic moments, and current.
As a result, it is most common at optical frequencies to set μ to the vacuum permeability μ0 and only consider a dispersive permittivity ε. There is some discussion over whether this is appropriate in metamaterials where effective medium approximations for μ are used, and debate over the reality of "negative permeability" seen in negative index metamaterials.
Spatial dispersion in acoustics
In acoustics, especially in solids, spatial dispersion can be significant for wavelengths comparable to the lattice spacing, which typically occurs at very high frequencies (gigahertz and above).
In solids, the difference in propagation for transverse acoustic modes and longitudinal acoustic modes of sound is due to a spatial dispersion in the elasticity tensor which relates stress and strain. For polar vibrations (optical phonons), the distinction between longitudinal and transverse modes can be seen as a spatial dispersion in the restoring forces, from the "hidden" non-mechanical degree of freedom that is the electromagnetic field.
Many electromagnetic wave effects from spatial dispersion find an analogue in acoustic waves. For example, there is acoustical activity — the rotation of the polarization plane of transverse sound waves — in chiral materials, analogous to optical activity.
References
Physical phenomena | Spatial dispersion | [
"Physics"
] | 1,512 | [
"Physical phenomena"
] |
48,745,491 | https://en.wikipedia.org/wiki/Exometabolomics | Exometabolomics, also known as 'metabolic footprinting', is the study of extracellular metabolites and is a sub-field of metabolomics.
While the same analytical approaches used for profiling metabolites apply to exometabolomics, including liquid chromatography–mass spectrometry (LC–MS), nuclear magnetic resonance (NMR) and gas chromatography–mass spectrometry (GC–MS), analysis of exometabolites poses specific challenges and is most commonly focused on investigation of the transformations of exogenous metabolite pools by biological systems. Typically, these experiments are performed by comparing metabolites at two or more time points, for example, spent vs. uninoculated/control culture media; this approach can differentiate physiological states of wild-type yeast and distinguish between yeast mutants. Since, in many cases, the exometabolite (extracellular) pool is less dynamic than endometabolite (intracellular) pools (which are often perturbed during sample processing), and chemically defined media can be used, it reduces some of the experimental challenges of metabolomics.
Exometabolomics is also used as a complementary tool with genomic, transcriptomic and proteomic data, to gain insight into the function of genes and pathways. Additionally, exometabolomics can be used to measure polar molecules being consumed or released by an organism, and to measure secondary metabolite production.
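A minimal sketch of the spent-versus-control comparison described above may make the workflow concrete; the metabolite names and peak intensities below are hypothetical.

```python
import numpy as np
import pandas as pd

# Compare metabolite peak intensities in spent medium against an
# uninoculated control: negative log2 fold change = consumed by the
# organism, positive = released into the medium. Data are hypothetical.
data = pd.DataFrame({
    "metabolite": ["glucose", "glutamate", "acetate", "trehalose"],
    "control":    [1.0e6, 4.0e5, 1.0e4, 2.0e4],
    "spent":      [5.0e4, 1.0e5, 3.0e5, 2.1e4],
})
data["log2_fold_change"] = np.log2(data["spent"] / data["control"])
data["call"] = np.where(
    data["log2_fold_change"] < -1, "consumed",
    np.where(data["log2_fold_change"] > 1, "released", "unchanged"),
)
print(data)
```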
History
The study of extracellular metabolites has long been prevalent in the scientific literature. However, global exometabolite profiling was only realized with advances that, by the mid-2000s, allowed for improved chromatographic separation and detection of hundreds to thousands of compounds. The first work to demonstrate the biological relevance of comparative profiling of exometabolite pools appeared in 2003, when the term "metabolite footprinting" was coined by Jess Allen and coworkers. This work attracted a great deal of interest in the community, particularly for the characterization of microbial metabolism. The idea of the "exometabolome" encompassing the components of the exometabolite pool was not introduced until 2005.
Recent advances in mass spectrometry imaging have allowed for spatial localization of released metabolites. As the field of microbiology becomes increasingly more centered on microbial community structure, exometabolomics has provided for rapid understanding of metabolic interactions between two or more species. Recently, exometabolomics has been used to design co-culture systems. Because the analysis of extracellular metabolites allows for the predictions and determinations of metabolite exchange, exometabolomics analyses can be used for understanding community ecological networks.
Analytical technologies
In principle, any technologies used for metabolomics can be used for exometabolomics. However, liquid chromatography–mass spectrometry (LC–MS) has been the most widely used. As with typical metabolomic measurements, metabolites are identified based on accurate mass, retention time, and their MS/MS fragmentation patterns, in comparison to authentic standards. Chromatographies typically used are hydrophilic interaction liquid chromatography for the measurement of polar metabolites, or reversed-phase (C18) chromatography for the measurement of non-polar compounds, lipids, and secondary metabolites. Gas chromatography–mass spectrometry can also be used to measure sugars and other carbohydrates, and to obtain complete metabolic profiles.
Because LC–MS does not give spatial data on metabolite localization, it can be complemented with mass spectrometry imaging (MSI).
Applications
Exometabolomic techniques have been used in the following fields:
Functional genomics
Metabolite utilization to annotate function of unknown genes.
Bioenergy
In lignocellulosic feedstock studies.
Agriculture and food
Characterization of plant root exometabolites to determine how exometabolites affect Plant-growth promoting rhizobacteria.
Metabolic footprinting of yeast strains for identification of yeast strains optimal for enhancing fermentation performance and positive attributes in wine.
Health
Differentiating healthy versus cancerous bladder cells with metabolic footprinting.
Footprinting, in combination with other techniques, for early recognition of outbreak and strain characterization.
Studying aging with C. elegans exometabolomics.
Extracellular metabolite analysis to evaluate pathogenic mechanism of intracellular protozoal parasite.
Analysis of carbon cycling
Global carbon fixation, phytoplankton/dinoflaggelate interactions, and exometabolomics.
Microbial communities
Interaction of E. coli exometabolites with C. elegans affects life span.
Bacteria and yeast in dairy systems.
Bioremediation
Metabolic niche partitioning
In 2010, exometabolomics analysis of the cyanobacterium Synechococcus sp. PCC 7002 by Baran et al. revealed that this photoautotroph could deplete a diverse pool of exogenous metabolites. A follow-up exometabolomics study on sympatric microbial isolates from biological soil crust, which exist in communities with cyanobacteria in the desert soils of the Colorado Plateau, suggested that metabolite niche partitioning exists in these communities, with each isolate utilizing only 13–26% of the metabolites from the soil.
Secondary metabolites
Metabolic footprinting for determination of antifungal substances' mode of action
See also
Mass spectrometry
Metabolomics
Metabolome
Metabolite fingerprinting
Mass spectrometry imaging
References
Metabolism
Systems biology
Biochemistry | Exometabolomics | [
"Chemistry",
"Biology"
] | 1,185 | [
"Cellular processes",
"nan",
"Biochemistry",
"Metabolism",
"Systems biology"
] |
48,750,239 | https://en.wikipedia.org/wiki/Clumped%20isotopes | Clumped isotopes are heavy isotopes that are bonded to other heavy isotopes. The relative abundance of clumped isotopes (and multiply-substituted isotopologues) in molecules such as methane, nitrous oxide, and carbonate is an area of active investigation. The carbonate clumped-isotope thermometer, or "13C–18O order/disorder carbonate thermometer", is a new approach for paleoclimate reconstruction, based on the temperature dependence of the clumping of 13C and 18O into bonds within the carbonate mineral lattice. This approach has the advantage that the 18O ratio in water is not necessary (different from the δ18O approach), but for precise paleotemperature estimation, it also needs very large and uncontaminated samples, long analytical runs, and extensive replication. Commonly used sample sources for paleoclimatological work include corals, otoliths, gastropods, tufa, bivalves, and foraminifera. Results are usually expressed as Δ47 (said as "cap 47"), which is the deviation of the ratio of isotopologues of CO2 with a molecular weight of 47 to those with a weight of 44 from the ratio expected if they were randomly distributed.
Background
Terminology
Molecules made up of elements with multiple isotopes can vary in their isotopic composition; these variant molecules are called isotopologues. For example, consider the isotopologues of carbon dioxide: oxygen has three stable isotopes (16O, 17O and 18O) and carbon has two (13C and 12C). A 12C16O2 molecule (composed only of the most abundant isotopes of the constituent elements) is called a monoisotopic species. When only one atom is replaced with a heavy isotope of any constituent element (i.e., 13C16O2), it is called a singly-substituted species. Likewise, when two atoms are simultaneously replaced with heavier isotopes (e.g., 13C16O18O), it is called a doubly substituted, or more generally a multiply substituted, isotopologue. The multiply-substituted isotopologue 13C16O18O contains a bond between two of these heavier isotopes (13C and 18O), which is a "clumped" isotope bond.

The abundance of masses for a given molecule (e.g. CO2) can be predicted using the relative abundances of the isotopes of its constituent atoms (13C/12C, 18O/16O and 17O/16O). The relative abundance of each isotopologue (e.g. mass-47 CO2) is proportional to the product of the relative abundances of its isotopic species.

This predicted abundance assumes a non-biased, stochastic distribution of isotopes; natural materials tend to deviate from these stochastic values, and the study of these deviations forms the basis of clumped isotope geochemistry.
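To make the stochastic reference frame concrete, the sketch below computes the stochastic mass-47/mass-44 ratio of CO2 from approximate natural isotope abundances, and expresses a hypothetical measured ratio as Δ47. The abundance values are approximate textbook figures.

```python
# Stochastic (random) isotopologue distribution of CO2, computed from
# representative natural isotope abundances. Delta47 is the per-mil
# deviation of a measured 47/44 ratio from this stochastic prediction.
C12, C13 = 0.9893, 0.0107
O16, O17, O18 = 0.99757, 0.00038, 0.00205

R44 = C12 * O16 * O16                      # 12C16O16O
R47 = (2 * C13 * O18 * O16                 # 13C18O16O (two O positions)
       + 2 * C12 * O17 * O18               # 12C17O18O
       + C13 * O17 * O17)                  # 13C17O17O
R47_stochastic = R47 / R44

def delta47(R47_measured):
    """Per-mil deviation of a measured 47/44 ratio from the stochastic one."""
    return (R47_measured / R47_stochastic - 1.0) * 1000.0

print("stochastic 47/44 ratio:", R47_stochastic)          # ~4.6e-5
print("Delta47 of a sample ratio:", delta47(R47_stochastic * 1.0005))
```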
When a heavier isotope substitutes for a lighter one (e.g., 18O for 16O), the chemical bond's vibration will be slower, lowering its zero-point energy. In other words, thermodynamic stability is related to the isotopic composition of the molecule.

12C16O32− (≈98.2%), 13C16O32− (≈1.1%), 12C18O16O22− (≈0.6%) and 12C17O16O22− (≈0.11%) are the most abundant isotopologues (≈99%) of the carbonate ion, controlling the bulk δ13C, δ18O and δ17O values in natural carbonate minerals. Each of these isotopologues has a different thermodynamic stability. For a carbonate crystal at thermodynamic equilibrium, the relative abundances of the carbonate ion isotopologues are controlled by exchange reactions such as:

13C16O32− + 12C18O16O22− ⇌ 13C18O16O22− + 12C16O32−   (Reaction 1)

The equilibrium constants for these reactions are temperature-dependent, with a tendency for heavy isotopes to "clump" with each other (increasing the proportions of multiply substituted isotopologues) as temperature decreases. Reaction 1 will be driven to the right with decreasing temperature, and to the left with increasing temperature. Therefore, the equilibrium constant for this reaction can be used as a paleotemperature indicator, as long as the temperature dependence of this reaction and the relative abundances of the carbonate ion isotopologues are known.
Differences from the conventional δ18O analysis
In conventional δ18O analysis, the δ18O values of both the carbonate and the water are needed to estimate paleotemperature. However, for many times and places the δ18O of the water can only be inferred, and the 18O/16O partitioning between carbonate and water may itself vary with temperature. Therefore, the accuracy of the thermometer may be compromised.

The carbonate clumped-isotope thermometer, by contrast, relies on an equilibrium that is independent of the isotopic composition of the water from which the carbonate grew. Therefore, the only information needed is the abundance of bonds between rare, heavy isotopes within the carbonate mineral.
Methods
Extract CO2 from carbonates by reaction with anhydrous phosphoric acid (there is no direct way to measure the abundance of 13C18O16O22− in carbonate minerals with high enough precision). The phosphoric acid temperature is often held between 25 °C and 90 °C and can be as high as 110 °C.

Purify the CO2 that has been extracted. This step removes contaminant gases like hydrocarbons and halocarbons, which can be removed by gas chromatography.

Mass spectrometric analysis of the purified CO2, to obtain δ13C, δ18O, and Δ47 (abundance of mass-47 CO2) values. (Precision needs to be as high as ≈10−5, for the isotope signals of interest are often less than ≈10−3.)
Applications
Paleoenvironment
Clumped isotopes analyses have traditionally been used in lieu of conventional δ18O analyses when the δ18O of seawater or source water is poorly constrained. While conventional δ18O analysis solves for temperature as a function of both carbonate and water δ18O, clumped isotope analyses can provide temperature estimates that are independent of the source water δ18O. Δ47-derived temperature can then be used in conjunction with carbonate δ18O to reconstruct δ18O of the source water, thus providing information on the water with which the carbonate was equilibrated.
Clumped isotope analyses thus allow for estimates of two key environmental variables: temperature and water δ18O. These variables are especially useful for reconstructing past climates, as they can provide information on a wide range of environmental properties. For example, temperature variability can imply changes in solar irradiance, greenhouse gas concentration, or albedo, while changes in water δ18O can be used to estimate changes in ice volume, sea level, or rainfall intensity and location.
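A hedged sketch of this two-step reconstruction follows; the calibration and fractionation constants below are placeholders for illustration, not published values.

```python
import numpy as np

def temperature_from_D47(D47, a=0.039e6, b=0.154):
    """Invert a Delta47 = a/T**2 + b calibration for T in kelvin.
    The constants a and b are placeholders, not a published calibration."""
    return np.sqrt(a / (D47 - b))

def water_d18O(d18O_carbonate_vsmow, T, c=18.03e3, d=-32.42):
    """Recover water d18O from carbonate d18O once T is known, using a
    placeholder 1000*ln(alpha) = c/T + d carbonate-water fractionation law."""
    thousand_ln_alpha = c / T + d
    return d18O_carbonate_vsmow - thousand_ln_alpha

T = temperature_from_D47(0.60)                # hypothetical measured Delta47
print("temperature (K):", T)                  # ~296 K for these constants
print("water d18O:", water_d18O(29.0, T))     # hypothetical carbonate d18O
```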
Studies have used temperatures derived from clumped isotopes for varied and numerous paleoclimate applications — to constrain δ18O of past seawater, pinpoint the timing of icehouse-hothouse transitions, track changes in ice volume through an ice age, and to reconstruct temperature changes in ancient lake basins.
Paleoaltimetry
Clumped isotope analyses have recently been used to constrain the paleoaltitude or uplift history of a region. Air temperature decreases systematically with altitude throughout the troposphere (see lapse rate). Due to the close coupling between lake water temperature and air temperature, there is a similar decrease in lake water temperature as altitude increases. Thus, variation in water temperature implied by Δ47 could indicate changes in lake altitude, driven by tectonic uplift or subsidence. Two recent studies derive the timing of the uplift of the Andes Mountains and the Altiplano Plateau, citing sharp decreases in Δ47-derived temperatures as evidence of rapid tectonic uplift.
Atmospheric science
Measurements of Δ47 can be used to constrain natural and synthetic sources of atmospheric CO2 (e.g. respiration and combustion), as each of these processes is associated with a different average Δ47 temperature of formation.
Paleobiology
Measurements of Δ47 can be used to better understand the physiology of extinct organisms, and to place constraints on the early development of endothermy, the process by which organisms regulate their body temperature. Prior to the development of clumped isotope analysis, there was no straightforward way to estimate either the body temperature or body water δ18O of extinct animals. Eagle et al. (2010) measured Δ47 in bioapatite from a modern Indian elephant, white rhinoceros, Nile crocodile and American alligator. These animals were chosen as they span a wide range in internal body temperatures, allowing for the creation of a mathematical framework relating Δ47 of bioapatite and internal body temperature. This relationship has been applied to analyses of fossil teeth, in order to predict the body temperatures of a woolly mammoth and a sauropod dinosaur. The latest Δ47 temperature calibration for (bio)apatite of Löffler et al. 2019 covers a wide temperature range of 1–80 °C and was applied to a fossil megalodon shark tooth for calculating seawater temperatures and δ18O values.
Petrology and metamorphic alteration
A key premise of most clumped isotope analyses is that samples have retained their primary isotopic signatures. However, isotopic resetting or alteration, resulting from elevated temperature, can provide a different type of information about past climates. For example, when carbonate is isotopically reset by high temperatures, measurements of Δ47 can provide information about the duration and extent of metamorphic alteration. In one such study, Δ47 from late Neoproterozoic Doushantou cap carbonate is used to assess the temperature evolution of the lower crust in southern China.
Cosmochemistry
Primitive meteorites have been studied using measurements of Δ47. These analyses also assume that the primary isotopic signature of the sample has been lost. In this case, measurements of Δ47 instead provide information on the high-temperature event that isotopically reset the sample. Existing Δ47 analyses on primitive meteorites have been used to infer the duration and temperature of aqueous alteration events, as well as to estimate the isotopic composition of the alteration fluid.
Ore deposits
An emerging body of work highlights the application potential for clumped isotopes to reconstruct temperature and fluid properties in hydrothermal ore deposits. In mineral exploration, delineation of the heat footprint around an ore body provides critical insight into the processes that drive transport and deposition of metals. During proof of concept studies, clumped isotopes were used to provide accurate temperature reconstructions in epithermal, sediment hosted, and Mississippi Valley Type (MVT) deposits. These case studies are supported by measurement of carbonates in active geothermal settings.
Limitations
The temperature-dependent relationship is subtle.
13C18O16O22− is a rare isotopologue (≈60 ppm [3]).
Therefore, to obtain adequate precision, this approach requires long analyses (≈2–3 hours) and very large and uncontaminated samples.
Clumped isotope analyses assume that measured Δ47 is composed of 13C18O16O, the most common isotopologue of mass 47. Corrections to account for less common isotopologues of mass 47 (e.g. 12C17O18O) are not completely standardized between labs.
See also
Paleothermometer
Isotopic signature
Isotope analysis
Isotope geochemistry
Isotopic labeling
References
Paleoclimatology
Carbon dioxide
Isotopes of carbon
Isotopes of oxygen
Isotopes | Clumped isotopes | [
"Physics",
"Chemistry"
] | 2,402 | [
"Isotopes of carbon",
"Isotopes",
"Nuclear physics",
"Greenhouse gases",
"Isotopes of oxygen",
"Carbon dioxide"
] |
48,750,901 | https://en.wikipedia.org/wiki/Rivlin%E2%80%93Ericksen%20tensor | A Rivlin–Ericksen tensor describes the temporal evolution of the strain rate tensor such that the derivative translates and rotates with the flow field. The first-order Rivlin–Ericksen tensor is given by

$A^{(1)}_{ij} = \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}$

where
$v$ is the fluid's velocity and
$A^{(n)}$ is the $n$-th order Rivlin–Ericksen tensor.
Higher-order tensors may be found iteratively by the expression

$A^{(n+1)} = \frac{\mathcal{D}}{\mathcal{D}t} A^{(n)}$
The derivative chosen for this expression depends on convention. The upper-convected time derivative, lower-convected time derivative, and Jaumann derivative are often used.
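A hedged numerical sketch of these definitions follows, using one common sign convention ($L_{ij} = \partial v_i/\partial x_j$, lower-convected recursion) and a steady, homogeneous simple shear flow, for which the material-derivative term vanishes.

```python
import numpy as np

# For a steady, homogeneous flow the material derivative vanishes and the
# lower-convected recursion reduces to
#   A[n+1] = A[n] @ L + L.T @ A[n],   with  A[1] = L + L.T,
# where L[i, j] = dv_i/dx_j is the constant velocity gradient.
gamma_dot = 2.0                         # shear rate of a simple shear flow
L = np.array([[0.0, gamma_dot],         # v = (gamma_dot * y, 0)
              [0.0, 0.0]])

A1 = L + L.T                            # first-order Rivlin-Ericksen tensor
A2 = A1 @ L + L.T @ A1                  # second-order tensor (steady flow)
print(A1)   # [[0, 2], [2, 0]]
print(A2)   # [[0, 0], [0, 8]] -> the 2*gamma_dot**2 normal-stress component
```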
References
Multivariable calculus
Fluid dynamics
Non-Newtonian fluids | Rivlin–Ericksen tensor | [
"Chemistry",
"Mathematics",
"Engineering"
] | 126 | [
"Calculus",
"Chemical engineering",
"Piping",
"Multivariable calculus",
"Fluid dynamics"
] |
48,751,238 | https://en.wikipedia.org/wiki/Campanile%20probe | In near-field scanning optical microscopy the campanile probe is a tapered optical probe with a shape of a campanile (a square pyramid). It is made of an optically transparent dielectric, typically silica, and its two facets are coated with a metal, typically gold. At the probe tip, the metal-coated facets are separated by a gap of a few tens of nanometers, which determines the spatial resolution of the probe. Such a probe design allows collecting optical signals, usually photoluminescence (PL) or Raman scattering, with a subwavelength resolution, breaking the diffraction limit.
The campanile probe is attached to an optical fiber, which both provides a laser excitation of the studied sample and collects the measured signal. The probe is rastered over the sample with a standard scanning probe microscopy scanner, keeping the distance to the sample surface at a few nanometers. Contrary to the traditional (circular) near-field probes, the campanile probe has no cut-off frequency and is insensitive to the spatial mode of the optical near field. Hence its application is not limited to thin-film samples. Another advantage of the campanile probe is a high signal collection efficiency, which exceeds 90%.
Campanile probes are typically fabricated as follows: a standard cylindrical single-mode optical fiber is etched with hydrofluoric acid to create a conical tip with a radius of ca. 100 nm. Then a square pyramid is carved on the tip using focused ion beam (FIB) milling, and its two facets are coated with a metal by shadow evaporation. A nanometer-scale gap is then opened on the tip by FIB. An alternative fabrication method uses nanoimprint lithography to replicate the campanile pyramid from a mold. This approach significantly increases fabrication speed.
References
Scanning probe microscopy | Campanile probe | [
"Chemistry",
"Materials_science"
] | 388 | [
"Nanotechnology",
"Scanning probe microscopy",
"Microscopy"
] |
39,687,960 | https://en.wikipedia.org/wiki/Reversible-deactivation%20polymerization | In polymer chemistry, reversible-deactivation polymerization (RDP) is a form of polymerization propagated by chain carriers, some of which at any instant are held in a state of dormancy through an equilibrium process involving other species.
An example of reversible-deactivation anionic polymerization (RDAP) is group transfer polymerization of alkyl methacrylates, where the initiator and the dormant state is a silyl ketene acetal.
In the case of reversible-deactivation radical polymerization (RDRP), a majority of the chains must be held in a dormant state to ensure that the concentration of active carriers is sufficiently low as to render chain termination reactions negligible.
Despite having some common features, RDP is distinct from living polymerization which requires a complete absence of termination and irreversible chain transfer.
References
Polymer chemistry | Reversible-deactivation polymerization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 192 | [
"Polymer stubs",
"Organic chemistry stubs",
"Materials science",
"Polymer chemistry"
] |
53,955,571 | https://en.wikipedia.org/wiki/Inverse%20dynamics-based%20static%20optimization | Inverse dynamics-based static optimization is a method for estimating muscle-tendon forces from the measured (e.g. through gait analysis) kinematics of a given body part. It exploits the concepts of inverse dynamics and static optimization (in opposition to dynamic programming). Joint moments are obtained by inverse dynamics and then, knowing muscular moment arms, a static optimization process is carried out to evaluate optimal single-muscle forces for the system

$R(q)\,F^{MT} = T^{MT},$

which is an underdetermined system (there are more unknown muscle forces than joint moment equations).
General concepts
We can solve the inverse dynamics of a system to obtain joint torques and nonetheless be unable to estimate the forces exerted by single muscles even knowing the exact geometry of our joints and muscles due to the redundancy of our system. Through an optimization approach we could find a way to understand how our central nervous system chooses its control strategies so as to optimize some aspects of movement production (e.g. minimizing metabolic cost).
Dynamic equations of motion
We use here the matrix form of the equations of motion,

$M(q)\,\ddot{q} + C(q,\dot{q}) + G(q) + E(q,\dot{q}) = T^{MT}$

in which we are considering a body part with $n$ joints and $m$ muscles. Then
$q$, $\dot{q}$, $\ddot{q}$ are the vectors of generalized coordinates, generalized velocities and generalized accelerations ($n \times 1$);
$M(q)$ is the mass matrix ($n \times n$);
$C(q,\dot{q})$ is the vector of centrifugal and Coriolis forces ($n \times 1$);
$G(q)$ is the vector of gravitational forces ($n \times 1$);
$E(q,\dot{q})$ is the vector of external forces ($n \times 1$);
$T^{MT}$ is the vector of muscle-tendon torques ($n \times 1$).
The vector of muscle-tendon torques can be further decomposed as follows

$T^{MT} = R(q)\,F^{MT}$

in which
$R(q)$ is the muscle-arm matrix ($n \times m$);
$F^{MT}$ is the vector of muscle-tendon forces ($m \times 1$).
The static optimization process
Once we obtain the joint torques $T^{MT}$ from inverse dynamics, and suppose $R(q)$ known from anatomo-physiological studies, we still cannot obtain the muscle-tendon forces analytically due to the redundancy of the system. We therefore hypothesize that the actual muscle forces minimize a given cost function, $J(F^{MT})$, subject to equality and inequality constraints. We have then to solve

$\min_{F^{MT}} J(F^{MT}) \quad \text{subject to} \quad R(q)\,F^{MT} = T^{MT}$

Usually this is written together with the bounds

$0 \le F_i^{MT} \le F_i^{max}, \quad i = 1, \ldots, m$

in which $F_i^{max}$ is the maximum isometric force of muscle $i$.
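A minimal numerical sketch of this constrained problem follows; all moment arms, torques and maximal forces are made-up numbers, and the sum-of-squared-activations cost is just one common choice (see the next subsection on cost functions).

```python
import numpy as np
from scipy.optimize import minimize

# One joint (n = 1), three muscles (m = 3), and the commonly used cost
# J = sum((F_i / F_i_max)**2), i.e. the sum of squared "activations".
R     = np.array([[0.05, 0.03, 0.02]])     # moment arms (n x m), metres
T_mt  = np.array([10.0])                   # joint torque from inverse dynamics, N*m
F_max = np.array([1000.0, 800.0, 400.0])   # maximum isometric forces, N

cost = lambda F: np.sum((F / F_max) ** 2)
constraints = {"type": "eq", "fun": lambda F: R @ F - T_mt}  # R F = T
bounds = [(0.0, fm) for fm in F_max]                         # 0 <= F <= Fmax

res = minimize(cost, x0=F_max / 2, bounds=bounds, constraints=constraints)
print("muscle forces (N):", res.x)         # a force set satisfying R @ F = T_mt
```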
The choice of the cost function
Our choice of the cost function is based on the supposed optimization mechanisms carried on by our CNS. It needs to be clinically validated, especially in unhealthy patients. In [Erdemir, 2007] a list of possible cost functions with a brief rationale and the suggested model validation technique is available.
Clarification on the use of the maximum isometric force
Muscle contraction can be eccentric (the active muscle lengthens), concentric (the muscle shortens) or isometric (the contraction velocity is zero). From the muscle force-velocity characteristic we notice that muscle force in an eccentric contraction is higher than the maximum isometric force; why then do we use the latter as a constraint on muscle force? Mainly for two reasons:

Rarely does muscle contraction occur with total activation ($a = 1$); the eccentric contraction force is lowered proportionally to the value of activation.

The maximum isometric force is a well-characterized and fixed value of force given the physiological cross-sectional area of a muscle (they are linked by the concept of specific tension of a muscle).
Bibliography
Dynamics (mechanics) | Inverse dynamics-based static optimization | [
"Physics"
] | 636 | [
"Physical phenomena",
"Motion (physics)",
"Classical mechanics",
"Dynamics (mechanics)"
] |
53,959,419 | https://en.wikipedia.org/wiki/Vera%20W.%20de%20Spinadel | Vera Martha Winitzky de Spinadel (August 22, 1929 – January 26, 2017) was an Argentine mathematician. She was the first woman to gain a PhD in mathematics at the University of Buenos Aires, Argentina, in 1958. Between 2010 and 2017, she was full Emeritus Professor in the Faculty of Architecture, Design and Urban Planning of the University of Buenos Aires. In 1995, she was named Director of the Centre of Mathematics and Design. In April 2005 she inaugurated the Laboratory of Mathematics & Design, University Campus in Buenos Aires. From 1998 to her death she was the President of the International Mathematics and Design Association, which organizes international congresses every 3 years and publishes a Journal of Mathematics & Design. She was the author of more than 10 books and published more than 100 research papers.
Spinadel was a leader in the study of the metallic means, a family of numbers generalizing the classical golden ratio, and her work gained wide international recognition.
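For readers unfamiliar with the concept: the n-th metallic mean is the positive root of x² = nx + 1, with n = 1, 2, 3 giving the golden, silver and bronze means. A minimal sketch:

```python
import math

# The n-th metallic mean is the positive root of x**2 = n*x + 1;
# n = 1, 2, 3 give the golden, silver and bronze means.
def metallic_mean(n: int) -> float:
    return (n + math.sqrt(n * n + 4)) / 2

for n, name in [(1, "golden"), (2, "silver"), (3, "bronze")]:
    print(f"{name}: {metallic_mean(n):.6f}")   # 1.618..., 2.414..., 3.302...
```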
Books
From the Golden Mean to Chaos, Editorial Nueva Librería, Buenos Aires, Argentina, 260 pp. , 1998
The Metallic Means and Design, Nexus II: Architecture and Mathematics. Editor: Kim Williams. Edizioni dell’Erba, , 1998
Del Número de Oro al Caos. Editorial Nobuko S. A., , 2003
Geometría Fractal, in collaboration with Jorge G. Perera and Jorge H. Perera, with CD with Images. Editorial Nobuko S. A., , 2003
From the Golden Mean to Chaos, Editorial Nobuko S. A. , 2004
Geometría Fractal, with Jorge G. Perera & Jorge H. Perera, Editorial Nueva Librería, 2nd edition, , 2007
Cálculo Superior, 1st. Editorial Nueva Librería, , 2009.
From the Golden Mean to Chaos, 3rd Edition, Editorial Nueva Librería, , Junio 2010.
Forma y matemática: La familia de Números Metálicos en Diseño, 1st. Edition. Buenos Aires: Nobuko. Ediciones FADU, Serie Difusión 22. , 2011
Forma y matemática II: Fractales y forma, 1st. Edition. Buenos Aires: Nobuko. Ediciones FADU, Serie Difusión 24. , 2012
Papers
"Sistemas Estructurados y Creatividad", Keynote Speaker Open Lecture for the International Mathematics & Design Conference MyD-95, October 23–27, 1995, FADU, Buenos Aires, Argentina. Proc. , 1996
"La familia de números metálicos en Diseño". Primer Seminario Nacional de Gráfica Digital, Sesión de Morfología y Matemática, FADU, UBA, 11-13 de junio de 1997. Volumen II,
"On Characterization of the Onset to Chaos", Chaos, Solitons and Fractals 8 (10): 1631–1643, 1997
"New Smarandache sequences", Proceedings of the First International Conference on Smarandache type Notions in Number Theory, ed. C. Dumitrescu & V. Seleacu, University of Craiova, 21–24 August 1997, American Research Press, Lupton, , 1997, pp. 81–116
"Una nueva familia de números", Anales de la Sociedad Científica Argentina 228 ( 1): 101–107, 1998
"Triangulature in Andrea Palladio", Nexus Network Journal, Architecture and Mathematics on line
"A new family of irrational numbers with curious properties", Humanistic Mathematics Network Journal 19: 33–37, , marzo 1999
"The Metallic Means family and multifractal spectra", Nonlinear Analysis 36: 721–745, 1999
"The Golden Mean and its many relatives", First Interdisciplinary Conference of The International Society of the Arts, Mathematics and Architecture ISAMA 99, San Sebastián, Spain, 7–11 June 1999. Editors: Nathaniel A. Friedman and Javier Barrallo. , pp. 453–460
"The family of Metallic Means", Visual Mathematics I ( 3) 1999
"The family of Metallic Means", Symmetry: Culture and Science. The Quarterly International Society for the Interdisciplinary Study of Symmetry (ISIS-Symmetry) 10 ( 3-4): 317–338, 1999
"The Metallic Means family and Renormalization Group Techniques", Proceedings of the Steklov Institute of Mathematics, Suppl. 1, 2000, pp. S194-S209
"Fracciones continuas y la teoría de las proporciones de Palladio", ICVA Primer Congreso Virtual de Arquitectura, December 1999 to January 2000
"Half-regular Continued Fraction Expansions and Design", Journal of Mathematics & Design 1 ( 1) marzo 2001
"Continued Fraction Expansions and Design", The Proceedings of Mathematics & Design 2001, The Third International Conference, 3 a 5 de julio de 2001, The School of Architecture & Building, The School of Computing & Mathematics, Deakin University, Geelong, Australia,
"Geometric representation of purely periodic Metallic Means", with Martín L. Benarroch, Walter L. Geler and Stella M. Sirianni, Journal of Mathematics & Design 1 ( 2) summer 2001,
"The metallic means family and forbidden symmetries", International Mathematical Journal 2 (3): 279–288, 2002
"The Set of Silver Integers", Journal of Mathematics & Design 2 ( 1) 2002
"Symmetry Groups in Mathematics, Architecture and Art", Special issue of the papers presented at the Matomium Euro-Workshop 2002. Editó Department of Architecture Sint-Lucas, Brusel, Belgic. Symmetry: Art and Science 2 (new serie, 1-4): 385–403, 2002,
"Geometría Fractal y Geometría Euclidiana", Revista de Educación y Pedagogía, Medellín, Colombia, Universidad de Antioquia, Facultad de Educación, vol. XV, Nro. 35, pp. 83–93, January–April 2003,
"Number theory and Art", ISAMA-Bridges 2003. Conference Proceedings of Meeting Alhambra, University of Granada, Granada, España. Editores: Javier Barrallo, Nathaniel Friedman, Reza Sarhangi, Carlo Séquin, José Martínez, Juan A. Maldonado. . pp. 415–423, 2003
"La familia de Números Metálicos", Cuadernos del Cimbage, Instituto de Investigaciones en Estadística y Matemática Actuarial, Facultad de Ciencias Económicas, UBA, No. 6, pp. 17–45, , Mai 2004
"Generalized Silver Means Subfamily", Journal of Mathematics & Design 6 ( 1): 53–59, 2007. Editorial Nueva Librería
"Orígenes Históricos del Número de Plata y sus Applicaciones en Arquitectura", Journal of Mathematics & Design 6 ( 1): 93–99, 2007. Editorial Nueva Librería
"Conceptos fractales aplicados al Diseño", Actas del Primer Congreso Internacional de Matemáticas en Ingeniería y Arquitectura, Universidad Politécnica de Madrid, Mai 30 to June 2007, pp. 137–146,
"Applicaciones de Geometría Fractal en el campo de la construcción", Actas del Primer Congreso Internacional de Matemáticas en Ingeniería y Arquitectura, Universidad Politécnica de Madrid, Mai 30 to June 2007, pp. 215–220,
"Espirales asociadas a los Números Metálicos", in collaboration with Antonia Redondo Buitrago, 5th Mathematics & Design International Conference, Blumenau, Brasil, July 1–4, 2007,
"Golden and Metallic Means in modern Mathematics and Physics", Proceedings of the 13th International Conference on Geometry and Graphics, August 4–8, 2008, .
"On plastic numbers in the plane", in collaboration with Antonia Redondo Buitrago, Proceedings of the 13th International Conference on Geometry and Graphics, August 4–8 de 2008, .
"Visualización y tecnología", Cuadernos del Cimbage, Instituto de Investigaciones en Estadística y Matemática Actuarial, Facultad de Ciencias Económicas, UBA, No. 10, pp. 1–16, , Mai 2008.
"Characterization of the onset to chaos in Economy", Proceedings of the Seventh All-Russian Conference on Financial and Actuarial Mathematics and Related Fields FAM´2008, Part 2, pp. 250–265, .
"Intersection of Mathematics & Arts", Proceedings of the Seventh All-Russian Conference on Financial and Actuarial Mathematics and Related Fields FAM´2008, Part 2, pp. 265– 284, .
"Herramientas matemáticas para la arquitectura y el diseño", in collaboration with Hernán S. Nottoli. 1st. Edition – Buenos Aires Nobuko October 2008, .
"Dynamic Geometrical Constructions based on the Golden Mean", in collaboration with Antonia Redondo Buitrago, Slovak Journal for Geometry and Graphics, Vol. 5, No. 10, pp. 27–39, 2008, .
"Characterization of the onset to Chaos in Economy", Cuadernos del Cimbage, Instituto de Investigaciones en Estadística y Matemática Actuarial, Facultad de Ciencias Económicas, UBA, No. 11, pp. 25–38, (print version). (on line version), 2009.
"La proporción: arte y matemáticas", in collaboration with J. Jiménez (coord.), O. J. Abdounur, E. Badillo, S. Balbás, F. Corbalán, J. M. Dos Santos, M. Edo, J. A. García Cruz and A. Masip. Editorial GRAO, Barcelona, Spain, . 1ra. Edition November 2009.
"Towards van der Laan´s Plastic Number in the Plane", in collaboration with Antonia Redondo Buitrago, Journal for Geometry and Graphics, Vol. 13, Number 2, pp. 163–175, 2009, .
"Sobre los sistemas de proporciones áureo y plástico y sus generalizaciones", in collaboration with Antonia Redondo Buitrago, Journal of Mathematics & Design, Vol. 9, Number 1, pp. 15–34, 2009, .
"Arte fractal", AREA Agenda de reflexión en Arquitectura, Diseño y Urbanismo, No. 15, pp. 89–90. Octubre 2009, .
"Nuevas propiedades de la Familia de Números Metálicos", in collaboration with Antonia Redondo Buitrago. Special Edition with the Proceedings of M&D-2007 5th International Conference of Mathematics & Design, Journal of Mathematics & Design, vol. 7, No. 1, pp. 53–65. , , , 2009.
"Paper folding constructions to the Mean Values of van der Laan and Rosenbusch", in collaboration with Gunter Weiss, International Conference on Geometry and Graphics, 2010, Kyoto, Japan. Proceedings publish in DVD, .
"Use of the powers of the members of the Metallic Means Family in artistic Design", 10th International Conference APLIMAT 2011, Faculty of Mechanical Engineering, Slovak University of Technology in Bratislava, section: Mathematics & Art. February 1–4, 2011.
"Sistemas de proporciones generalizados: aplicaciones", in collaboration with Antonia Redondo Buitrago. Edition especial con los Proceedings de M&D-2010 6th International Conference of Mathematics & Design, Journal of Mathematics & Design, vol. 10, No. 1, pp. 35–43. , , 2011.
"Remarks to classical cubic problems and the mean values of van der Laan and Rosenbusch", in collaboration with Gunter Weiss. Edition especial con los Proceedings de M&D-2010 6th International Conference of Mathematics & Design, Journal of Mathematics & Design, vol. 10, No. 1, pp. 43–51. , , 2011.
"Fractal art and coloring algorithms", Experience-centered Approach and Visuality in The Education of Mathematics and Physics, pp. 221–222, , 2012.
"Generalizing the Golden Spiral", in collaboration with Antonia Redondo Buitrago. Journal of Mathematics & Design, vol. 11, No. 1, pp. 109–117, , 2012.
"Fractal Geometry and Design", Summer School "Structure – Sculpture" – Rebuilding ULM Pavilion, FADU, UBA. Journal of Mathematics & Design, vol. 11, No. 1, pp. 141–151, , 2012.
"The Metallic Means Family ", Summer School "Structure – Sculpture" – Rebuilding ULM Pavilion, FADU, UBA. Journal of Mathematics & Design, vol. 11, No. 1, pp. 151–159, , 2012.
"Del Número de Oro al caos", 2nd. Edition. Editorial Nueva Librería, Buenos Aires. , 2013.
"Visualización y tecnología aplicados al Diseño", 8o. Encuentro de Docentes de Matemática en carreras de Arquitectura y Diseño de Universidades Nacionales del Mercosur, August 14–16, 2013, Facultad de Arquitectura, Urbanismo y Diseño, Universidad Nacional de San Juan, San Juan Argentina. Digital publication. .
"From George Odom to a new system of Metallic Means", in collaboration with Gunter Weiss, VII Conferencia Internacional de Matemática y Diseño M&D-2013 (02-06 Septiembre 2013), Facultad de Arquitectura y Urbanismo, Universidad Nacional de Tucumán, San Miguel de Tucumán, Argentina. Proceedings publish in vol. 13 Journal of Mathematics & Design, pp. 71–86, , 2014.
"Cordovan spirals", in collaboration with Antonia Redondo Buitrago, VII Conferencia Internacional de Matemática y Diseño M&D-2013 (02-06 Septiembre 2013), Facultad de Arquitectura y Urbanismo, Universidad Nacional de Tucumán, San Miguel de Tucumán, Argentina. Proceedings publish vol. 13 Journal of Mathematics & Design, pp. 124–145, , 2014.
"Bi-Arc spirals in Minkowski planes", in collaboration with Gunter Weiss. Proceedings of the 16th International Conference on Geometry and Graphics ICGG 2014, Innsbruck (04-8 August 2014), Eds. Hans-Peter Schröder and Manfred Hosty, Innsbruck University Press, pp- 115-120, 2014.
"Generalized Metallic Means Family". Proceedings of the 16th International Conference on Geometry and Graphics ICGG 2014, Innsbruck (04-8 August 2014), Eds. Hans-Peter Schröder and Manfred Hosty, Innsbruck University Press, pp- 459-465, 2014.
Awards
Gold medal for 30 years of university teaching, UBA
2010: Full Emeritus Professor UBA
References
External links
Vera W. de Spinadel. "Intersections of mathematics and arts" (in Wikiznanie English)
Stakhov A.P. "Metallic Means" by Vera Spinadel, Russian
The family of Metallic Means
Nexus Network Journal
ScienceDirect
1929 births
2017 deaths
University of Buenos Aires alumni
Academic staff of the University of Buenos Aires
Argentine women mathematicians
Applied mathematicians
21st-century Argentine mathematicians
21st-century women mathematicians
20th-century Argentine mathematicians
20th-century women mathematicians | Vera W. de Spinadel | [
"Mathematics"
] | 3,348 | [
"Applied mathematics",
"Applied mathematicians"
] |
53,961,341 | https://en.wikipedia.org/wiki/Supersymmetric%20theory%20of%20stochastic%20dynamics | Supersymmetric theory of stochastic dynamics or stochastics (STS) is an exact theory of stochastic (partial) differential equations (SDEs), the class of mathematical models with the widest applicability covering, in particular, all continuous time dynamical systems, with and without noise. The main utility of the theory from the physical point of view is a rigorous theoretical explanation of the ubiquitous spontaneous long-range dynamical behavior that manifests itself across disciplines via such phenomena as 1/f, flicker, and crackling noises and the power-law statistics, or Zipf's law, of instantonic processes like earthquakes and neuroavalanches. From the mathematical point of view, STS is interesting because it bridges the two major parts of mathematical physics – the dynamical systems theory and topological field theories. Besides these and related disciplines such as algebraic topology and supersymmetric field theories, STS is also connected with the traditional theory of stochastic differential equations and the theory of pseudo-Hermitian operators.
The theory began with the application of the BRST gauge fixing procedure to Langevin SDEs, which was later adapted to classical mechanics and its stochastic generalization, to higher-order Langevin SDEs, and, more recently, to SDEs of arbitrary form, which made it possible to link the BRST formalism to the concept of transfer operators and to recognize the spontaneous breakdown of BRST supersymmetry as a stochastic generalization of dynamical chaos.
The main idea of the theory is to study, instead of trajectories, the SDE-defined temporal evolution of differential forms. This evolution has an intrinsic BRST or topological supersymmetry representing the preservation of topology and/or the concept of proximity in the phase space by continuous time dynamics. The theory identifies a model as chaotic, in the generalized, stochastic sense, if its ground state is not supersymmetric, i.e., if the supersymmetry is broken spontaneously. Accordingly, the emergent long-range behavior that always accompanies dynamical chaos and its derivatives such as turbulence and self-organized criticality can be understood as a consequence of the Goldstone theorem.
History and relation to other theories
The first relation between supersymmetry and stochastic dynamics was established in two papers in 1979 and 1982 by Giorgio Parisi and Nicolas Sourlas, who demonstrated that the application of the BRST gauge fixing procedure to Langevin SDEs, i.e., to SDEs with linear phase spaces, gradient flow vector fields, and additive noises, results in N=2 supersymmetric models. The original goal of their work was dimensional reduction, i.e., a specific cancellation of divergences in Feynman diagrams proposed a few years earlier by Amnon Aharony, Yoseph Imry, and Shang-keng Ma. Since then, relations between the supersymmetry that emerges in Langevin SDEs and several physical concepts have been established, including the fluctuation–dissipation theorems, the Jarzynski equality, the Onsager principle of microscopic reversibility, solutions of Fokker–Planck equations, and self-organization.
A similar approach was used to establish that classical mechanics, its stochastic generalization, and higher-order Langevin SDEs also have supersymmetric representations. Real dynamical systems, however, are never purely Langevin or classical mechanical. In addition, physically meaningful Langevin SDEs never break supersymmetry spontaneously. Therefore, for the purpose of identifying spontaneous supersymmetry breaking with dynamical chaos, a generalization of the Parisi–Sourlas approach to SDEs of general form is needed. This generalization could come only after a rigorous formulation of the theory of pseudo-Hermitian operators, because the stochastic evolution operator is pseudo-Hermitian in the general case. Such a generalization showed that all SDEs possess N=1 BRST or topological supersymmetry (TS), and this finding completes the story of the relation between supersymmetry and SDEs.
In parallel to the BRST procedure approach to SDEs, mathematicians working in the dynamical systems theory introduced and studied the concept of generalized transfer operator defined for random dynamical systems. This concept underlies the most important object of the STS, the stochastic evolution operator, and provides it with a solid mathematical meaning.
STS has a close relation with algebraic topology and its topological sector belongs to the class of models known as Witten-type topological or cohomological field theory.
As a supersymmetric theory, BRST procedure approach to SDEs can be viewed as one of the realizations of the concept of Nicolai map.
Parisi–Sourlas approach to Langevin SDEs
In the context of the supersymmetric approach to stochastic dynamics, the term Langevin SDE denotes an SDE with Euclidean phase space, $X = \mathbb{R}^n$, a gradient flow vector field, and additive Gaussian white noise,

$\dot{x}(t) = -\nabla U(x(t)) + (2\Theta)^{1/2}\,\xi(t),$

where $x \in X$, $\xi \in \mathbb{R}^n$ is the noise variable, $\Theta$ is the noise intensity, and $-\nabla U(x)$, which in coordinates reads $(-\nabla U(x))^i = -\delta^{ij}\,\partial U(x)/\partial x^j$, is the gradient flow vector field with $U(x)$ being the Langevin function, often interpreted as the energy of the purely dissipative stochastic dynamical system.
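Before turning to the path integral, a minimal simulation may help fix intuition. The following is a plain Euler–Maruyama integration of the Langevin SDE above; the double-well Langevin function and all parameter values are illustrative choices, not taken from the theory.

```python
import numpy as np

# Euler-Maruyama integration of dx = -U'(x) dt + sqrt(2*Theta) dW
# for the illustrative double-well U(x) = x**4/4 - x**2/2.
rng = np.random.default_rng(0)
Theta, dt, n_steps = 0.15, 1e-3, 200_000

grad_U = lambda x: x**3 - x
x = np.empty(n_steps)
x[0] = 1.0                                # start in the right well
for k in range(n_steps - 1):
    xi = rng.standard_normal()            # Gaussian white-noise increment
    x[k + 1] = x[k] - grad_U(x[k]) * dt + np.sqrt(2 * Theta * dt) * xi

# Noise-activated hops between the wells at x = -1 and x = +1:
print("fraction of time in right well:", np.mean(x > 0))
```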
The Parisi–Sourlas method is a way of constructing the path integral representation of the Langevin SDE. It can be thought of as a BRST gauge-fixing procedure that uses the Langevin SDE as a gauge condition. Namely, one considers the following functional integral,

$W = \left\langle \int_{\text{p.b.c.}} \mathcal{D}x \; J[x,\xi] \prod_\tau \delta\big(\dot{x}(\tau) - \mathcal{F}(x(\tau),\xi(\tau))\big) \right\rangle_{\text{noise}},$

where $\mathcal{F}(x,\xi) = -\nabla U(x) + (2\Theta)^{1/2}\xi$ denotes the r.h.s. of the Langevin SDE, $\langle \cdot \rangle_{\text{noise}} = \int \mathcal{D}\xi \,(\cdot)\, P[\xi]$ is the operation of stochastic averaging with $P[\xi] \propto e^{-\frac{1}{2}\int dt\, \xi^2(t)}$ being the normalized distribution of noise configurations,

$J[x,\xi] = \det\big(\delta\big(\dot{x} - \mathcal{F}(x,\xi)\big)\big/\delta x\big)$

is the Jacobian of the corresponding functional derivative, and the path integration is over all closed paths, $x(t_0) = x(t_1)$, where $t_0$ and $t_1$ are the initial and final moments of temporal evolution.
Dimensional reduction
The Parisi–Sourlas construction originally aimed at "dimensional reduction" proposed in 1976 by Amnon Aharony, Yoseph Imry, and Shang-keng Ma who proved that to all orders in perturbation expansion, the critical exponents in a d-dimensional (4 < d < 6) system with short-range exchange and a random quenched field are the same as those of a (d–2)-dimensional pure system. Their arguments indicated that the "Feynman diagrams which give the leading singular behavior for the random case are identically equal, apart from combinatorial factors, to the corresponding Feynman diagrams for the pure case in two fewer dimensions."
Topological interpretation
Topological aspects of the Parisi–Sourlas construction can be briefly outlined in the following manner. The delta-functional, i.e., the collection of the infinite number of delta-functions, ensures that only solutions of the Langevin SDE contribute to $W$. Each solution contributes either positive or negative unity: $W = \langle I_N \rangle_{\text{noise}}$, with $I_N$ being the index of the so-called Nicolai map, $\xi(x) = (2\Theta)^{-1/2}\big(\dot{x} + \nabla U(x)\big)$, which in this case is the map from the space of closed paths in $X$ to the space of noise configurations, a map that provides a noise configuration at which a given closed path is a solution of the Langevin SDE. $I_N$ can be viewed as a realization of the Poincaré–Hopf theorem on the infinite-dimensional space of closed paths, with the Langevin SDE playing the role of the vector field and with the solutions of the Langevin SDE playing the role of the critical points with index $\operatorname{sign}\det(\delta\xi/\delta x)$. $I_N$ is independent of the noise configuration because it is of topological character. The same is true for its stochastic average, $W$, which is not the partition function of the model but, instead, its Witten index.
Path integral representation
With the help of a standard field theoretic technique that involves the introduction of an additional field called the Lagrange multiplier, $B$, and a pair of fermionic fields called Faddeev–Popov ghosts, $\chi$ and $\bar{\chi}$, the Witten index can be given the following form,

$W = \int_{\text{p.b.c.}} \mathcal{D}\Phi \; e^{\{\mathcal{Q},\,\Psi(\Phi)\}},$

where $\Phi = (x, B, \chi, \bar{\chi})$ denotes the collection of all the fields, p.b.c. stands for periodic boundary conditions, $\Psi$ is the so-called gauge fermion, constructed from $\bar{\chi}$ and the l.h.s. and r.h.s. of the Langevin SDE, with $\{\mathcal{Q}, x^i\} = \chi^i$ and $\{\mathcal{Q}, \bar{\chi}_i\} = B_i$, and the BRST symmetry defined via its action on an arbitrary functional $A(\Phi)$ as $\{\mathcal{Q}, A(\Phi)\} = \int dt \left(\chi^i(t)\,\delta/\delta x^i(t) + B_i(t)\,\delta/\delta\bar{\chi}_i(t)\right) A(\Phi)$. In the BRST formalism, the $\mathcal{Q}$-exact pieces like $\{\mathcal{Q}, \Psi\}$ serve as gauge fixing tools.
A common way to explain the BRST procedure is to say that the BRST symmetry generates the fermionic version of the gauge transformations, whereas its overall effect on the path integral is to limit the integration only to configurations that satisfy a specified gauge condition. This interpretation also applies to the Parisi–Sourlas approach, with the deformations of the trajectory and the Langevin SDE playing the roles of the gauge transformations and the gauge condition, respectively.
Operator representation
Physical fermions in high-energy physics and condensed matter models have antiperiodic boundary conditions in time. The unconventional periodic boundary conditions for fermions in the path integral expression for the Witten index are the origin of the topological character of this object. These boundary conditions reveal themselves in the operator representation of the Witten index as the alternating sign operator, $\mathcal{W} = \operatorname{Tr}\,(-1)^{\hat{n}}\hat{\mathcal{M}}_{t''t'}$, where $\hat{n}$ is the operator of the number of ghosts/fermions and $\hat{\mathcal{M}}_{t''t'} = e^{-(t''-t')\hat{H}}$ is the finite-time stochastic evolution operator (SEO), where $\hat{H} = \hat{\mathcal{L}}_F - \Theta\hat{\triangle}$ is the infinitesimal SEO, with $\hat{\mathcal{L}}$ being the Lie derivative along the subscript vector field, $\hat{\triangle}$ being the Laplacian, $\hat{d}$ being the exterior derivative, which is the operator representative of the topological supersymmetry (TS), and $\hat{\bar{d}}$ being its conjugate supercharge built from the bosonic and fermionic momenta, such that $\hat{H} = [\hat{d}, \hat{\bar{d}}]$, with square brackets denoting the bi-graded commutator, i.e., it is an anticommutator if both operators are fermionic (contain an odd total number of $\chi$'s and $\bar{\chi}$'s) and a commutator otherwise. The exterior derivative $\hat{d}$ and $\hat{\bar{d}}$ are supercharges. They are nilpotent, e.g., $\hat{d}^2 = 0$, and commute with the SEO. In other words, Langevin SDEs possess N=2 supersymmetry. The fact that $\hat{\bar{d}}$ is a supercharge is accidental. For SDEs of arbitrary form, this is not true.
Hilbert space
The wavefunctions are functions not only of the bosonic variables, $x$, but also of the Grassmann numbers or fermions, $\chi$, from the tangent space of $X$. The wavefunctions can be viewed as differential forms on $X$, with the fermions playing the role of the differentials $dx$. The concept of the infinitesimal SEO generalizes the Fokker–Planck operator, which is essentially the SEO acting on top differential forms, which have the meaning of total probability distributions. Differential forms of lesser degree can be interpreted, at least locally on $X$, as conditional probability distributions. Viewing the spaces of differential forms of all degrees as wavefunctions of the model is a mathematical necessity. Without it, the Witten index representing the most fundamental object of the model—the partition function of the noise—would not exist, and the dynamical partition function would not represent the number of fixed points of the SDE (see below). The most general understanding of the wavefunctions is as coordinate-free objects that contain information not only on trajectories but also on the evolution of the differentials and/or Lyapunov exponents.
Relation to nonlinear sigma model and algebraic topology
In Ref., a model was introduced that can be viewed as a 1D prototype of the topological nonlinear sigma models (TNSM), a subclass of the Witten-type topological field theories. The 1D TNSM is defined for Riemannian phase spaces, while for Euclidean phase spaces it reduces to the Parisi–Sourlas model. Its key difference from STS is the diffusion operator, which is the Hodge Laplacian for the 1D TNSM, whereas for STS it takes a more general form. This difference is unimportant in the context of the relation between STS and algebraic topology, the relation established by the theory of the 1D TNSM (see, e.g., Refs.).
The model is defined by the evolution operator $e^{-(t''-t')\hat{H}}$, where $\hat{H}$ is built from the Hodge Laplacian, $\hat{\triangle}_H = [\hat{d}, \hat{d}^\dagger]$, defined with the help of the metric on the phase space, and the differential forms from the exterior algebra of the phase space, $\Omega(X)$, are viewed as wavefunctions. There exists a similarity transformation that brings the evolution operator to an explicitly Hermitian form. In the Euclidean case, the transformed operator is the Hamiltonian of an N=2 supersymmetric quantum mechanics. One can introduce two Hermitian operators, $\hat{Q}_1 = \hat{d} + \hat{d}^\dagger$ and $\hat{Q}_2 = \imath(\hat{d} - \hat{d}^\dagger)$, such that $\hat{Q}_1^2 = \hat{Q}_2^2 = [\hat{d}, \hat{d}^\dagger]$. This demonstrates that the spectrum of the Hamiltonian is real and nonnegative. This is also true for the SEOs of Langevin SDEs. For SDEs of arbitrary form, however, this is no longer true, as the eigenvalues of the SEO can be negative and even complex, which actually allows for the TS to be broken spontaneously.
The following properties of the evolution operator of the 1D TNSM hold even for the SEO of SDEs of arbitrary form. The evolution operator commutes with the operator of the degree of differential forms. As a result, $\hat{H} = \bigoplus_{k=0}^{\dim X}\hat{H}^{(k)}$, where $\hat{H}^{(k)} = \hat{H}|_{\Omega^k(X)}$ and $\Omega^k(X)$ is the space of differential forms of degree $k$. Furthermore, due to the presence of TS, the eigenstates split into the supersymmetric eigenstates, $|\theta\rangle$ with $\hat{d}|\theta\rangle = 0$, non-trivial in de Rham cohomology, whereas the rest are the pairs of non-supersymmetric eigenstates of the form $|\vartheta\rangle$ and $\hat{d}|\vartheta\rangle$. All supersymmetric eigenstates have exactly zero eigenvalue and, barring accidental situations, all non-supersymmetric states have non-zero eigenvalues. Non-supersymmetric pairs of eigenstates do not contribute to the Witten index, which equals the difference in the numbers of the supersymmetric states of even and odd degrees. For compact $X$, each de Rham cohomology class provides one supersymmetric eigenstate, and the Witten index equals the Euler characteristic of the phase space.
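In symbols, and assuming a compact phase space $X$ with Betti numbers $b_k(X)$, the counting just described reads:

```latex
\mathcal{W}
= \sum_{k=0}^{\dim X} (-1)^{k}\,\#\{\text{SUSY eigenstates of degree } k\}
= \sum_{k=0}^{\dim X} (-1)^{k}\, b_{k}(X)
= \chi(X).
```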
BRST procedure for SDEs of arbitrary form
The Parisi–Sourlas method of the BRST procedure approach to Langevin SDEs has also been adapted to classical mechanics, the stochastic generalization of classical mechanics, higher order Langevin SDEs, and, more recently, to SDEs of arbitrary form. While there exist standard techniques that allow one to consider models with colored noises, higher-dimensional "base spaces" described by partial SDEs, etc., the key elements of STS can be discussed using the following basic class of SDEs,
$$\dot{x}(t) = F(x(t)) + (2\Theta)^{1/2} e_a(x(t))\xi^a(t),$$
where $x \in X$ is a point in the phase space $X$, assumed for simplicity to be a closed topological manifold, $F(x)$ is a sufficiently smooth vector field, called the flow vector field, from the tangent space of $X$, and $e_a(x),\ a = 1,\ldots$, is a set of sufficiently smooth vector fields that specify how the system is coupled to the noise, which is called additive/multiplicative depending on whether the $e_a$'s are independent of/dependent on the position on $X$.
Ambiguity of path integral representation and Ito–Stratonovich dilemma
The BRST gauge fixing procedure goes along the same lines as in the case of Langevin SDEs. The topological interpretation of the BRST procedure is just the same, and the path integral representation of the Witten index is defined by the gauge fermion, $\Psi$, given by the same expression but with the generalized version of $\mathcal{E}$. There is one important subtlety, however, that appears on the way to the operator representation of the model. Unlike for Langevin SDEs, classical mechanics, and other SDEs with additive noises, the path integral representation of the finite-time SEO is an ambiguous object. This ambiguity originates from the non-commutativity of the momenta and position operators, e.g., $[\hat{x}, \hat{p}] \neq 0$. As a result, a product like $x(\tau)p(\tau)$ in the path integral representation has a whole one-parameter family of possible interpretations in the operator representation, e.g., $\alpha\hat{x}\hat{p} + (1-\alpha)\hat{p}\hat{x}$ acting on an arbitrary wavefunction. Accordingly, there is a whole $\alpha$-family of infinitesimal SEOs, expressed in terms of the Lie derivatives, the interior multiplication $\hat{\imath}$ by the subscript vector field, and the "shifted" flow vector field $F_\alpha$. Notably, unlike in Langevin SDEs, $\hat{\bar{d}}$ is not a supercharge and STS cannot be identified as an N=2 supersymmetric theory in the general case.
The path integral representation of stochastic dynamics is equivalent to the traditional understanding of SDEs as a continuous-time limit of stochastic difference equations, where different choices of the parameter $\alpha$ are called "interpretations" of SDEs. The choice $\alpha = 1/2$, which corresponds to symmetric operator ordering and is known in quantum theory as the Weyl symmetrization rule, is known as the Stratonovich interpretation, whereas $\alpha = 0$ corresponds to the Ito interpretation. While in quantum theory the Weyl symmetrization is preferred because it guarantees hermiticity of Hamiltonians, in STS the Stratonovich–Weyl approach is preferred because it corresponds to the most natural mathematical meaning of the finite-time SEO discussed below—the stochastically averaged pullback induced by the SDE-defined diffeomorphisms.
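In discretized form, the $\alpha$-dependence can be sketched as follows; the function below (its names and the $\alpha$-shifted evaluation point are illustrative assumptions) reduces to the Ito rule at $\alpha = 0$ and to the Stratonovich rule at $\alpha = 1/2$ for multiplicative noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def sde_step(x, F, e, theta, dt, alpha):
    """One step of dx = F(x) dt + sqrt(2*theta) e(x) dW, with the noise
    coefficient e evaluated at an alpha-shifted point:
    alpha = 0 gives the Ito rule, alpha = 1/2 the Stratonovich rule."""
    dW = rng.normal(0.0, np.sqrt(dt))
    x_shift = x + alpha * np.sqrt(2.0 * theta) * e(x) * dW  # predictor point
    return x + F(x) * dt + np.sqrt(2.0 * theta) * e(x_shift) * dW
```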
Eigensystem of stochastic evolution operator
As compared to the SEO of Langevin SDEs, the SEO of a general form SDE is pseudo-Hermitian. As a result, the eigenvalues of non-supersymmetric eigenstates are not restricted to be real positive, whereas the eigenvalues of supersymmetric eigenstates are still exactly zero. Just like for Langevin SDEs and the nonlinear sigma model, the structure of the eigensystem of the SEO reestablishes the topological character of the Witten index: the contributions from the non-supersymmetric pairs of eigenstates vanish, and only the supersymmetric states contribute, yielding the Euler characteristic of (closed) $X$. Among other properties of the SEO spectra is that the SEOs acting on differential forms of the lowest and the highest degrees never break TS, i.e., their spectra remain real and nonnegative. As a result, there are three major types of the SEO spectra presented in the figure on the right. The two types that have negative (real parts of) eigenvalues correspond to spontaneously broken TS. All types of the SEO spectra are realizable, as can be established, e.g., from the exact relation between the theory of kinematic dynamo and STS.
STS without BRST procedure
The mathematical meaning of stochastic evolution operator
The finite-time SEO can be obtained in another, more mathematical way based on the idea of studying the SDE-induced actions on differential forms directly, without going through the BRST gauge fixing procedure. The so-obtained finite-time SEO is known in dynamical systems theory as the generalized transfer operator, and it has also been used in the classical theory of SDEs (see, e.g., Refs.). The contribution to this construction from STS is the exposition of the supersymmetric structure underlying it and the establishment of its relation to the BRST procedure for SDEs.
Namely, for any configuration of the noise, $\xi$, and any initial condition, $x' \in X$, the SDE defines a unique solution/trajectory, $x(t)$. Even for noise configurations that are non-differentiable with respect to time, $t$, the solution is differentiable with respect to the initial condition, $x'$. In other words, the SDE defines a family of noise-configuration-dependent diffeomorphisms of the phase space to itself, $M_{tt'}: X \to X$. This object can be understood as a collection and/or definition of all the noise-configuration-dependent trajectories, $x(t) = M_{tt'}(x')$. The diffeomorphisms induce actions or pullbacks, $M_{tt'}^*$. Unlike, say, the trajectories in $X$, the pullbacks are linear objects even for a nonlinear $X$. Linear objects can be averaged, and averaging over the noise configurations, $\xi$, results in the finite-time SEO, $\hat{\mathcal{M}}_{t''t'} = \langle M_{t''t'}^* \rangle_{\text{noise}}$, which is unique and corresponds to the Stratonovich–Weyl interpretation of the BRST procedure approach to SDEs.
Within this definition of the finite-time SEO, the Witten index can be recognized as the sharp trace of the generalized transfer operator. It also links the Witten index to the Lefschetz index, a topological constant that equals the Euler characteristic of the (closed) phase space. Namely, $\mathcal{W} = \chi(X)$.
The meaning of supersymmetry and the butterfly effect
The N=2 supersymmetry of Langevin SDEs has been linked to the Onsager principle of microscopic reversibility and the Jarzynski equality. In classical mechanics, a relation between the corresponding N=2 supersymmetry and ergodicity has been proposed. In SDEs of general form, where physical arguments may not be applicable, a lower-level explanation of the TS is available. This explanation is based on the understanding of the finite-time SEO as a stochastically averaged pullback of the SDE-defined diffeomorphisms (see the subsection above). In this picture, the question of why any SDE has TS is the same as the question of why the exterior derivative commutes with the pullback of any diffeomorphism. The answer to this question is the differentiability of the corresponding map. In other words, the presence of TS is the algebraic version of the statement that a continuous-time flow preserves the continuity of $X$. Two initially close points will remain close during evolution, which is just yet another way of saying that $M_{tt'}$ is a diffeomorphism.
In deterministic chaotic models, initially close points can part in the limit of infinitely long temporal evolution. This is the famous butterfly effect, which is equivalent to the statement that $M_{tt'}$ loses differentiability in this limit. In the algebraic representation of dynamics, the evolution in the infinitely long time limit is described by the ground state of the SEO, and the butterfly effect is equivalent to the spontaneous breakdown of TS, i.e., to the situation when the ground state is not supersymmetric. Notably, unlike the traditional understanding of deterministic chaotic dynamics, the spontaneous breakdown of TS works also for stochastic cases. This is the most important generalization because deterministic dynamics is, in fact, a mathematical idealization. Real dynamical systems cannot be isolated from their environments and thus always experience stochastic influence.
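The butterfly effect admits a simple numerical illustration: the sketch below estimates the exponential rate at which two initially close trajectories separate (the largest Lyapunov exponent), using the chaotic logistic map as an assumed stand-in for a continuous-time flow.

```python
import numpy as np

def separation_rate(f, x0, eps=1e-9, n=1000):
    """Benettin-style estimate of the largest Lyapunov exponent of a 1D
    map f: average log-growth of an initially tiny separation eps."""
    a, b, total = x0, x0 + eps, 0.0
    for _ in range(n):
        a, b = f(a), f(b)
        d = max(abs(b - a), 1e-300)        # guard against exact collapse
        total += np.log(d / eps)
        b = a + eps if b >= a else a - eps  # renormalize the separation
    return total / n

# chaotic logistic map; the estimate approaches ln 2 ~ 0.693
print(separation_rate(lambda x: 4.0 * x * (1.0 - x), x0=0.3))
```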
Spontaneous supersymmetry breaking and dynamical chaos
The BRST gauge fixing procedure applied to SDEs leads directly to the Witten index. The Witten index is of topological character and does not respond to any perturbation. In particular, all response correlators calculated using the Witten index vanish. This fact has a physical interpretation within STS: the physical meaning of the Witten index is the partition function of the noise, and since there is no backaction from the dynamical system to the noise, the Witten index carries no information on the details of the SDE. In contrast, the information on the details of the model is contained in the other trace-like object of the theory, the dynamical partition function, $\mathcal{Z}_{t''t'} = \operatorname{Tr}\,\hat{\mathcal{M}}_{t''t'}$, whose path integral representation involves a.p.b.c., i.e., antiperiodic boundary conditions for the fermionic fields and periodic boundary conditions for the bosonic fields. In the standard manner, the dynamical partition function can be promoted to a generating functional by coupling the model to external probing fields.
For a wide class of models, the dynamical partition function provides a lower bound for the stochastically averaged number of fixed points of the SDE-defined diffeomorphisms, $|\mathcal{Z}_{t''t'}| \le \langle \#\{\text{fixed points of } M_{t''t'}\} \rangle_{\text{noise}}$. Here, the "physical states" are the eigenstates that grow fastest, with the rate of the exponential growth, $\Lambda$, which can be viewed as a stochastic version of dynamical entropy such as topological entropy. Positive entropy is one of the key signatures of deterministic chaos. Therefore, the situation with positive $\Lambda$ must be identified as chaotic in the generalized, stochastic sense, as it implies positive entropy. At the same time, positive $\Lambda$ implies that TS is broken spontaneously, that is, the ground state is not supersymmetric because its eigenvalue is not zero. In other words, positive dynamical entropy is a reason to identify spontaneous TS breaking as the stochastic generalization of the concept of dynamical chaos. Notably, Langevin SDEs are never chaotic because the spectrum of their SEO is real and nonnegative.
The complete list of reasons why spontaneous TS breaking must be viewed as the stochastic generalization of the concept of dynamical chaos is as follows.
Positive dynamical entropy.
According to Goldstone's theorem, spontaneous TS breaking must entail long-range dynamical behavior, one of the manifestations of which is the butterfly effect discussed above in the context of the meaning of TS.
From the properties of the eigensystem of the SEO, TS can be spontaneously broken only if $\dim X \ge 3$. This conclusion can be viewed as the stochastic generalization of the Poincaré–Bendixson theorem for deterministic chaos.
In the deterministic case, models that are integrable in the sense of dynamical systems have well-defined global stable and unstable manifolds of the flow. The bras/kets of the global ground states of such models are the Poincaré duals of the global stable/unstable manifolds. These ground states are supersymmetric, so that TS is not broken spontaneously. On the contrary, when the model is non-integrable or chaotic, its global (un)stable manifolds are not well-defined topological manifolds, but rather have a fractal, self-recurrent structure that can be captured using the concept of branching manifolds. Wavefunctions that can represent such manifolds cannot be supersymmetric. Therefore, TS breaking is intrinsically related to the concept of non-integrability in the sense of dynamical systems, which is actually yet another widely accepted definition of deterministic chaos.
All the above features of TS breaking work for both deterministic and stochastic models. This is in contrast with traditional deterministic chaos, whose trajectory-based properties, such as topological mixing, cannot in principle be generalized to the stochastic case because, just like in quantum dynamics, all trajectories are possible in the presence of noise and, say, the topological mixing property is satisfied trivially by all models with non-zero noise intensity.
STS as a topological field theory
The topological sector of STS can be recognized as a member of the Witten-type topological field theories. In other words, some objects in STS are of topological character, with the Witten index being the most famous example. There are other classes of topological objects. One class of objects is related to instantons, i.e., transient dynamics. Crumpling paper, protein folding, and many other nonlinear dynamical processes in response to quenches, i.e., to external (sudden) changes of parameters, can be recognized as instantonic dynamics. From the mathematical point of view, instantons are families of solutions of the deterministic equations of motion, $\dot{x} = F(x)$, that lead from, say, a less stable fixed point of $F$ to a more stable fixed point. Certain matrix elements calculated on instantons are of topological nature. An example of such matrix elements can be defined for a pair of critical points, $x_1$ and $x_2$, with $x_1$ being more stable than $x_2$: $\langle x_1|\, T\prod_i \hat{O}_i(t_i)\, |x_2\rangle$. Here, $\langle x_1|$ and $|x_2\rangle$ are the bra and ket of the corresponding perturbative supersymmetric ground states, or vacua, which are the Poincaré duals of the local stable and unstable manifolds of the corresponding critical points; $T$ denotes chronological ordering; the $\hat{O}_i$'s are observables that are the Poincaré duals of some closed submanifolds in $X$; $\hat{O}_i(t_i)$ are the observables in the Heisenberg representation, with an unimportant reference time moment $t_0$. The critical points have different indexes of stability, so that the states $\langle x_1|$ and $|x_2\rangle$ are topologically inequivalent, as they represent unstable manifolds of different dimensionalities. The above matrix elements are independent of the $t_i$'s, as they actually represent the intersection number of the corresponding submanifolds on the instanton, as exemplified in the figure.
The above instantonic matrix elements are exact only in the deterministic limit. In the general stochastic case, one can consider global supersymmetric states, $|\theta_a\rangle$'s, from the de Rham cohomology classes of $X$ and observables, $\hat{O}_i$, that are Poincaré duals of closed manifolds non-trivial in the homology of $X$. The following matrix elements, $\langle\theta_a|\, T\prod_i \hat{O}_i(t_i)\, |\theta_b\rangle$, are topological invariants representative of the structure of the de Rham cohomology ring of $X$.
Applications
The supersymmetric theory of stochastic dynamics can be interesting in different ways. For example, STS offers a promising realization of the concept of supersymmetry. In general, there are two major problems in the context of supersymmetry. The first is establishing connections between this mathematical entity and the real world. Within STS, supersymmetry is the most common symmetry in nature because it is pertinent to all continuous-time dynamical systems. The second is the spontaneous breakdown of supersymmetry. This problem is particularly important for particle physics because the supersymmetry of elementary particles, if it exists at extremely short scales, must be broken spontaneously at large scales. This problem is nontrivial because supersymmetries are hard to break spontaneously, which is the very reason behind the introduction of soft or explicit supersymmetry breaking. Within STS, the spontaneous breakdown of supersymmetry is indeed a nontrivial dynamical phenomenon that has been variously known across disciplines as chaos, turbulence, self-organized criticality, etc.
A few more specific applications of STS are as follows.
Classification of stochastic dynamics
STS provides a classification for stochastic models depending on whether TS is broken and on the integrability of the flow vector field. It can be exemplified as a part of the general phase diagram at the border of chaos (see figure on the right). The phase diagram has the following properties:
For physical models, TS gets restored eventually with the increase of noise intensity.
Symmetric phase can be called thermal equilibrium or T-phase because the ground state is the supersymmetric state of steady-state total probability distribution.
In the deterministic limit, ordered phase is equivalent to deterministic chaotic dynamics with non-integrable flow.
Ordered non-integrable phase can be called chaos or C-phase because ordinary deterministic chaos belongs to it.
Ordered integrable phase can be called noise-induced chaos or N-phase because it disappears in the deterministic limit. TS is broken by the condensation of (anti-)instantons (see below).
At stronger noises, the sharp N-C boundary must smear out into a crossover because (anti-)instantons lose their individuality and it is hard for an external observer to tell one tunneling process from another.
Demystification of self-organized criticality
Many sudden (or instantonic) processes in nature, such as, e.g., crackling noise, exhibit scale-free statistics often called Zipf's law. As an explanation for this peculiar spontaneous dynamical behavior, it was proposed that some stochastic dynamical systems have a tendency to self-tune themselves into a critical point, a phenomenological approach known as self-organized criticality (SOC). STS offers an alternative perspective on this phenomenon. Within STS, SOC is nothing more than dynamics in the N-phase. Specifically, the definitive feature of the N-phase is the peculiar mechanism of TS breaking. Unlike in the C-phase, where TS is broken by the non-integrability of the flow, in the N-phase TS is spontaneously broken due to the condensation of the configurations of instantons and noise-induced anti-instantons, i.e., time-reversed instantons. These processes can be roughly interpreted as noise-induced tunneling events between, e.g., different attractors. Qualitatively, the dynamics in the N-phase appears to an external observer as a sequence of sudden jumps or "avalanches" that must exhibit scale-free behavior/statistics as a result of the Goldstone theorem. This picture of dynamics in the N-phase is exactly the dynamical behavior that the concept of SOC was designed to explain. In contrast with the original understanding of SOC, its STS interpretation has little to do with the traditional critical phenomena theory, where scale-free behavior is associated with unstable fixed points of the renormalization group flow.
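A classic toy model of such avalanche dynamics is the Bak–Tang–Wiesenfeld sandpile; the sketch below (grid size and grain count are arbitrary choices) records avalanche sizes, whose statistics are scale-free.

```python
import numpy as np

def btw_avalanche_sizes(n=50, grains=20_000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains on random sites; a site
    holding 4 or more grains topples, sending one grain to each neighbor
    (grains leaving the grid are lost). Returns the avalanche sizes."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(n, size=2)
        z[i, j] += 1
        size = 0
        while (unstable := np.argwhere(z >= 4)).size:
            for a, b in unstable:
                z[a, b] -= 4
                size += 1                       # count each toppling
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= a + da < n and 0 <= b + db < n:
                        z[a + da, b + db] += 1
        sizes.append(size)
    return sizes
```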
Kinematic dynamo theory
The magnetohydrodynamical phenomenon of kinematic dynamo can also be identified as the spontaneous breakdown of TS. This result follows from the equivalence between the evolution operator of the magnetic field and the SEO of the corresponding SDE describing the flow of the background matter. The resulting STS–kinematic dynamo correspondence proves, in particular, that both types of TS-breaking spectra are possible, with real and with complex ground state eigenvalues, because kinematic dynamos with both types of fastest-growing eigenmodes are known.
Transient dynamics
It is well known that various types of transient dynamics, such as quenches, exhibit spontaneous long-range behavior. In the case of quenches across phase transitions, this behavior is often attributed to the proximity of criticality. Quenches that do not involve a phase transition are also known to exhibit long-range characteristics, with the best known examples being the Barkhausen effect and the various realizations of the concept of crackling noise. It is intuitively appealing that theoretical explanations for the scale-free behavior in quenches should be the same for all quenches, regardless of whether or not they produce a phase transition; STS offers such an explanation. Namely, transient dynamics is essentially a composite instanton, and TS is intrinsically broken within instantons. Even though TS breaking within instantons is not exactly due to the phenomenon of the spontaneous breakdown of a symmetry by a global ground state, this effective TS breaking must also result in scale-free behavior. This understanding is supported by the fact that condensed instantons lead to the appearance of logarithms in the correlation functions. This picture of transient dynamics explains the computational efficiency of digital memcomputing machines.
See also
Stochastic quantization
References
Supersymmetry
Chaos theory
Mathematical physics
Applied and interdisciplinary physics
Complex systems theory
Self-organization
Stochastic processes | Supersymmetric theory of stochastic dynamics | [
"Physics",
"Mathematics"
] | 7,059 | [
"Self-organization",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Mathematical physics",
"Supersymmetry",
"Symmetry",
"Dynamical systems"
] |
38,244,563 | https://en.wikipedia.org/wiki/All-Party%20Parliamentary%20Carbon%20Monoxide%20Group | The All-Party Parliamentary Carbon Monoxide Group (APPCOG) is an official All-Party Parliamentary Group of the UK Parliament, co-chaired by Barry Sheerman MP and Baroness Finlay of Llandaff. The group exists to tackle carbon monoxide (CO) poisoning in the UK, improve government policy around CO safety, and raise public awareness of the threat posed by toxic CO gas.
Alongside the co-chairs, the APPCOG has 8 Parliamentary officers from across the Labour, Conservative, Liberal Democrat, Scottish National, and Democratic Unionist parties. Its official entry on the Houses of Parliament register can be found on the APPG Register.
Its secretariat services are provided by Policy Connect, an independent not-for-profit think tank based in London.
History
The APPCOG, originally named the All-Party Parliamentary Gas Safety Group, was first established to promote awareness of CO poisoning and provide a forum for Parliamentarians, civil servants, industry representatives, charities and emergency services to share information and collaborate in order to improve gas safety.
In July 2012, the group was renamed to the All-Party Parliamentary Carbon Monoxide Group, which better reflected how CO can be produced by a variety of fuels, not just conventional gas.
Events, Research and Campaigning
The APPCOG holds regular events in Parliament, designed to bring together relevant stakeholders and discuss key issues within the field of carbon monoxide safety.
The APPCOG also conducts research and produces evidence-based reports designed to advise government departments on policy making around CO safety, with a particular focus on the Ministry of Housing, Communities and Local Government, the Department for Work and Pensions, and the Department for Business, Energy and Industrial Strategy.
Preventing Carbon Monoxide Poisoning
In October 2011, the APPCOG produced Preventing Carbon Monoxide Poisoning, which compiled evidence collected across a six-month inquiry and set a national strategy to eradicate CO poisoning through preventative measures such as providing CO alarms.
Carbon Monoxide: From Awareness to Action
In April 2014, the APPCOG announced it was undertaking a follow-up to the 2011 Inquiry, including a focus on behavioural insights and nudge theory. In January 2015, the ensuing report - Carbon Monoxide: From Awareness to Action - recommended a more targeted strategy for raising awareness of CO in order to reduce deaths and injuries.
Carbon Monoxide Poisoning: Saving lives, advancing treatment
In October 2017, the APPCOG's medical subgroup COMed published Carbon Monoxide Poisoning: Saving lives, advancing treatment. This report brought together a range of medical experts and made over twenty recommendations to improve the diagnosis and treatment of CO poisoning.
Carbon Monoxide Alarms: Tenants safe and secure in their homes
In November 2017, the APPCOG released Carbon Monoxide Alarms: Tenants safe and secure in their homes. The report advocated that landlords should be required to install CO alarms in all properties with a fuel-burning appliance.
Parliamentary Members
See also
All-Party Parliamentary Group
Carbon Monoxide
Carbon monoxide poisoning
Policy Connect
External links
APPCOG official website
Policy Connect official website
References
Air pollution in the United Kingdom
Air pollution organizations
Carbon Monoxide
Carbon monoxide
Natural gas industry in the United Kingdom
Natural gas organizations
Natural gas safety | All-Party Parliamentary Carbon Monoxide Group | [
"Chemistry",
"Engineering"
] | 643 | [
"Natural gas organizations",
"Natural gas safety",
"Natural gas technology",
"Energy organizations"
] |
38,245,286 | https://en.wikipedia.org/wiki/Phenotype%20microarray | The phenotype microarray approach is a technology for high-throughput phenotyping of cells.
A phenotype microarray system enables one to monitor simultaneously the phenotypic reaction of cells to environmental challenges or exogenous compounds in a high-throughput manner.
The phenotypic reactions are recorded as either end-point measurements or respiration kinetics similar to growth curves.
Usages
High-throughput phenotypic testing is increasingly important for exploring the biology of bacteria, fungi, yeasts, and animal cell lines such as human cancer cells. Just as DNA microarrays and proteomic technologies have made it possible to assay the expression level of thousands of genes or proteins all at once, phenotype microarrays (PMs) make it possible to quantitatively measure thousands of cellular phenotypes simultaneously. The approach also offers potential for testing gene function and improving genome annotation. In contrast to many of the hitherto available molecular high-throughput technologies, phenotypic testing is performed with living cells, thus providing comprehensive information about the performance of entire cells. The major applications of the PM technology are in the fields of systems biology, microbial cell physiology, microbiology and taxonomy, and mammalian cell physiology, including clinical research such as on autism. Advantages of PMs over standard growth curves are that cellular respiration can be measured in environmental conditions where cellular replication (growth) may not be possible, and that it is more accurate than optical density, which can vary between different cellular morphologies. In addition, respiration reactions are usually detected much earlier than cellular growth.
Technology
A sole carbon source that can be transported into a cell and metabolized to produce NADH engenders a redox potential and a flow of electrons to reduce a tetrazolium dye, such as tetrazolium violet, which produces a purple color. The more rapid this metabolic flow, the more quickly the purple color forms. The formation of purple color is a positive reaction, interpreted to mean that the sole carbon source is used as an energy source. A microplate reader and incubation facility are needed to provide the appropriate incubation conditions and to automatically read the intensity of color formation during tetrazolium reduction at intervals of, e.g., 15 minutes.
The principal idea of retrieving information about the abilities of an organism and its special modes of action when making use of certain energy sources can be equivalently applied to other macro-nutrients such as nitrogen, sulfur, or phosphorus and their compounds and derivatives.
As an extension, the impact of auxotrophic supplements or antibiotics, heavy metals or other inhibitory compounds on the respiration behaviour of the cells can be determined.
Data structure
During a positive reaction, the longitudinal kinetics are expected to appear as sigmoidal curves in analogy to typical bacterial growth curves. Comparable to bacterial growth curves, the respiration kinetic curves may provide valuable information coded in the length of the lag phase λ, the respiration rate μ (corresponding to the steepness of the slope), the maximum cell respiration A (corresponding to the maximum value recorded), and the area under the curve (AUC). In contrast to bacterial growth curves, there is typically no death phase in PMs, as the reduced tetrazolium dye is insoluble.
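As an illustration, these curve parameters can be estimated by fitting a sigmoidal model to a recorded kinetic curve; the modified Gompertz form below is one common choice and an assumption here, since PM analysis packages differ in their fitting methods.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Modified Gompertz curve: maximum A, maximum slope mu, lag time lam."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

def respiration_parameters(t, y):
    """Fit a kinetic curve and return lag (lambda), rate (mu),
    maximum (A) and area under the curve (AUC)."""
    (A, mu, lam), _ = curve_fit(gompertz, t, y,
                                p0=[max(y), 1.0, 1.0], maxfev=10_000)
    return {"lambda": lam, "mu": mu, "A": A, "AUC": np.trapz(y, t)}
```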
Software
Proprietary and commercially available software is available that provides a solution for storage, retrieval, and analysis of high throughput phenotype data. A powerful free and open source software is the "opm" package based on R. "opm" contains tools for analyzing PM data including management, visualization and statistical analysis of PM data, covering curve-parameter estimation, dedicated and customizable plots, metadata management, statistical comparison with genome and pathway annotations, automatic generation of taxonomic reports, data discretization for phylogenetic software and export in the YAML markup language. In conjunction with other R packages it was used to apply boosting to re-analyse autism PM data and detect more determining factors. The "opm" package has been developed and is maintained at the Deutsche Sammlung von Mikroorganismen und Zellkulturen. Another free and open source software developed to analyze Phenotype Microarray data is "DuctApe", a Unix command-line tool that also correlates genomic data. Other software tools are PheMaDB, which provides a solution for storage, retrieval, and analysis of high throughput phenotype data, and the PMViewer software which focuses on graphical display but does not enable further statistical analysis. The latter is not publicly available.
See also
Cell Painting
References
External links
PheMaDB website
Microbiology
Physiology
Phenomics | Phenotype microarray | [
"Chemistry",
"Biology"
] | 972 | [
"Microbiology",
"Physiology",
"Microscopy"
] |
60,314,915 | https://en.wikipedia.org/wiki/Particle%20method | Particle methods is a widely used class of numerical algorithms in scientific computing. Its application ranges from computational fluid dynamics (CFD) over molecular dynamics (MD) to discrete element methods.
History
One of the earliest particle methods is smoothed particle hydrodynamics, presented in 1977. Libersky et al. were the first to apply SPH in solid mechanics. The main drawbacks of SPH are inaccurate results near boundaries and tension instability that was first investigated by Swegle.
In the 1990s a new class of particle methods emerged. The reproducing kernel particle method (RKPM) was introduced, with the approximation motivated in part by correcting the kernel estimate in SPH: to give accuracy near boundaries and in non-uniform discretizations, and higher-order accuracy in general. Notably, in a parallel development, the material point methods were developed around the same time, offering similar capabilities. During the 1990s and thereafter several other varieties were developed, including those listed below.
List of methods and acronyms
The following numerical methods are generally considered to fall within the general class of "particle" methods. Acronyms are provided in parentheses.
Smoothed particle hydrodynamics (SPH) (1977)
Dissipative particle dynamics (DPD) (1992)
Reproducing kernel particle method (RKPM) (1995)
Moving particle semi-implicit (MPS)
Particle-in-cell (PIC)
Moving particle finite element method (MPFEM)
Cracking particles method (CPM) (2004)
Immersed particle method (IPM) (2006)
Definition
The mathematical definition of particle methods captures the structural commonalities of all particle methods. It, therefore, allows for formal reasoning across application domains.
The definition is structured into three parts:
First, the particle method algorithm structure, including structural components, namely data structures, and functions.
Second, the definition of a particle method instance. A particle method instance describes a specific problem or setting, which can be solved or simulated using the particle method algorithm.
Third, the definition of the particle state transition function.
The state transition function describes how a particle method proceeds from the instance to the final state using the data structures and functions from the particle method algorithm.
A particle method algorithm is a 7-tuple $(P, G, u, f, i, e, \mathring{e})$, consisting of the two data structures
$P$, the particle space, and $G$, the global variable space,
such that
$[G \times P^*]$ is the state space of the particle method, and five functions:
the neighborhood function $u$, the stopping condition $f$, the interact function $i$, the evolve function $e$ of a particle, and the evolve function $\mathring{e}$ of the global variable.
An initial state defines a particle method instance for a given particle method algorithm $(P, G, u, f, i, e, \mathring{e})$:
The instance consists of an initial value $g^1$ for the global variable and an initial tuple of particles $\mathbf{p}^1$.
In a specific particle method, the elements of the tuple need to be specified. Given a specific starting point defined by an instance $[g^1, \mathbf{p}^1]$, the algorithm proceeds in iterations.
Each iteration corresponds to one state transition step that advances the current state of the particle method $[g^t, \mathbf{p}^t]$ to the next state $[g^{t+1}, \mathbf{p}^{t+1}]$.
The state transition uses the functions $u$, $f$, $i$, $e$ and $\mathring{e}$
to determine the next state.
The state transition function generates a series of state transition steps until the stopping function $f$ is true. The so-calculated final state $[g^T, \mathbf{p}^T]$ is the result of the state transition function. The state transition function is identical for every particle method.
The state transition function $S: [G \times P^*] \to [G \times P^*]$ is defined as the repeated application of the state transition step,
with $S([g^1, \mathbf{p}^1]) = [g^T, \mathbf{p}^T]$.
The pseudo-code illustrates the particle method state transition function:
1   t ← 1
2   while f(g^t, p^t) = false
3       for j ← 1 to |p^t|
4           k ← u(g^t, p^t, j)
5           for l ← 1 to |k|
6               (p_j, p_{k_l}) ← i(g^t, p_j, p_{k_l})
7       q ← ()
8       for j ← 1 to |p^t|
9           (g, q_j) ← e(g^t, p_j)
10          q ← q ∘ q_j
11      g^{t+1} ← e̊(g)
12      p^{t+1} ← q
13      t ← t + 1
The fat symbols are tuples: p and q are particle tuples and k is an index tuple. () is the empty tuple. The operator ∘ is the concatenation of the particle tuples, e.g. q ∘ q_j. And |p| is the number of elements in the tuple p, e.g. the number of particles.
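In executable form, the same state transition can be sketched as follows; the Python function below is an illustrative transcription under the assumptions above (all signatures are assumed, and Python lists stand in for tuples), not part of the formal definition.

```python
from typing import Any, Callable, List, Tuple

def run_particle_method(g: Any, p: List[Any],
                        f: Callable, u: Callable, i: Callable,
                        e: Callable, e_g: Callable) -> Tuple[Any, List[Any]]:
    """Generic particle method state transition: iterate interaction and
    evolution of all particles until the stopping condition f holds."""
    while not f(g, p):
        for j in range(len(p)):          # interact each particle...
            for k in u(g, p, j):         # ...with its neighborhood
                p[j], p[k] = i(g, p[j], p[k])
        q: List[Any] = []
        for j in range(len(p)):          # evolve particles; each may
            g, created = e(g, p[j])      # vanish or create new ones
            q.extend(created)
        g, p = e_g(g), q                 # evolve the global variable
    return g, p
```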
See also
Continuum mechanics
Boundary element method
Immersed boundary method
Stencil code
Meshfree methods
References
Further reading
Liu, M.B., Liu, G.R., Zong, Z., "An overview on smoothed particle hydrodynamics", International Journal of Computational Methods, Vol. 5, No. 1, pp. 135–188, 2008.
Liu, G.R., Liu, M.B. (2003). Smoothed Particle Hydrodynamics: A Meshfree Particle Method, World Scientific.
External links
Particle Methods
Numerical analysis
Numerical differential equations
Computational fluid dynamics | Particle method | [
"Physics",
"Chemistry",
"Mathematics"
] | 827 | [
"Computational fluid dynamics",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Fluid dynamics"
] |
60,316,348 | https://en.wikipedia.org/wiki/Finiteness%20properties%20of%20groups | In mathematics, finiteness properties of a group are a collection of properties that allow the use of various algebraic and topological tools, for example group cohomology, to study the group. It is mostly of interest for the study of infinite groups.
Special cases of groups with finiteness properties are finitely generated and finitely presented groups.
Topological finiteness properties
Given an integer n ≥ 1, a group $G$ is said to be of type Fn if there exists an aspherical CW-complex whose fundamental group is isomorphic to $G$ (a classifying space for $G$) and whose n-skeleton is finite. A group is said to be of type F∞ if it is of type Fn for every n. It is of type F if there exists a finite aspherical CW-complex of which it is the fundamental group.
For small values of n these conditions have more classical interpretations:
a group is of type F1 if and only if it is finitely generated (the rose with petals indexed by a finite generating family is the 1-skeleton of a classifying space, the Cayley graph of the group for this generating family is the 1-skeleton of its universal cover);
a group is of type F2 if and only if it is finitely presented (the presentation complex, i.e. the rose with petals indexed by a finite generating set and 2-cells corresponding to each relation, is the 2-skeleton of a classifying space, whose universal cover has the Cayley complex as its 2-skeleton).
It is known that for every n ≥ 1 there are groups of type Fn which are not of type Fn+1. Finite groups are of type F∞ but not of type F. Thompson's group $F$ is an example of a torsion-free group which is of type F∞ but not of type F.
A reformulation of the Fn property is that a group has it if and only if it acts properly discontinuously, freely and cocompactly on a CW-complex whose n first homotopy groups vanish. Another finiteness property can be formulated by replacing homotopy with homology: a group is said to be of type FHn if it acts as above on a CW-complex whose n first homology groups vanish.
Algebraic finiteness properties
Let $G$ be a group and $\mathbb{Z}G$ its group ring. The group $G$ is said to be of type FPn if there exists a resolution of the trivial $\mathbb{Z}G$-module $\mathbb{Z}$ such that the n first terms are finitely generated projective $\mathbb{Z}G$-modules. The types FP∞ and FP are defined in the obvious way.
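In the usual homological-algebra notation, this says that there is a projective resolution

```latex
\cdots \longrightarrow P_n \longrightarrow P_{n-1} \longrightarrow \cdots
\longrightarrow P_1 \longrightarrow P_0 \longrightarrow \mathbb{Z}
\longrightarrow 0
```

in which $P_0, \ldots, P_n$ are finitely generated projective $\mathbb{Z}G$-modules and $\mathbb{Z}$ carries the trivial $G$-action.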
The same statement with projective modules replaced by free modules defines the classes FLn for n ≥ 1, FL∞ and FL.
It is also possible to define classes FPn(R) and FLn(R) for any commutative ring R, by replacing the group ring $\mathbb{Z}G$ by $RG$ in the definitions above.
Either of the conditions Fn or FHn implies FPn and FLn (over any commutative ring). A group is of type FP1 if and only if it is finitely generated, but for any n ≥ 2 there exist groups which are of type FPn but not Fn.
Group cohomology
If a group is of type FPn then its cohomology groups $H^k(G)$ are finitely generated for $k \le n$. If it is of type FP then it is of finite cohomological dimension. Thus finiteness properties play an important role in the cohomology theory of groups.
Examples
Finite groups
A finite cyclic group acts freely on the unit sphere in $\mathbb{C}^\infty$, preserving a CW-complex structure with finitely many cells in each dimension. Since this unit sphere is contractible, every finite cyclic group is of type F∞.
The standard resolution for a group $G$ gives rise to a contractible CW-complex with a free $G$-action in which the cells of dimension $n$ correspond to $(n+1)$-tuples of elements of $G$. This shows that every finite group is of type F∞.
A non-trivial finite group is never of type F because it has infinite cohomological dimension. This also implies that a group with a non-trivial torsion subgroup is never of type F.
Nilpotent groups
If $G$ is a torsion-free, finitely generated nilpotent group then it is of type F.
Geometric conditions for finiteness properties
Negatively curved groups (hyperbolic or CAT(0) groups) are always of type F∞. Such a group is of type F if and only if it is torsion-free.
As an example, cocompact S-arithmetic groups in algebraic groups over number fields are of type F∞. The Borel–Serre compactification shows that this is also the case for non-cocompact arithmetic groups.
Arithmetic groups over function fields have very different finiteness properties: if $\Gamma$ is an arithmetic group in a simple algebraic group of rank $r$ over a global function field (such as $\mathbb{F}_p(t)$) then it is of type Fr but not of type Fr+1.
Notes
References
Group theory
Homological algebra
Geometric group theory | Finiteness properties of groups | [
"Physics",
"Mathematics"
] | 1,023 | [
"Geometric group theory",
"Mathematical structures",
"Group actions",
"Group theory",
"Fields of abstract algebra",
"Category theory",
"Symmetry",
"Homological algebra"
] |
60,323,001 | https://en.wikipedia.org/wiki/Aircraft%20engine%20performance | Aircraft engine performance refers to factors including thrust or shaft power for fuel consumed, weight, cost, outside dimensions and life. It includes meeting regulated environmental limits which apply to emissions of noise and chemical pollutants, and regulated safety aspects which require a design that can safely tolerate environmental hazards such as birds, rain, hail and icing conditions. It is the end product that an engine company sells.
Aircraft engines are part of the propulsion system of an airplane, helicopter, rocket or UAV and produce either rotary power transferred to a propeller or kinetic energy in the form of a high-velocity exhaust gas stream. Aircraft engine types include turboprop, turbojet, turbofan and turboshaft. Piston engines are used in recreational personal aircraft and older aircraft. Electric engines are used in model aircraft, small drones, small UAVs and small crewed aircraft. Aircraft engine performance has improved dramatically since the first powered flight in 1848 by John Stringfellow. Aircraft engine manufacturers have to innovate constantly to remain competitive by offering more efficient and more reliable engines. Improving the performance of aircraft engines reduces the cost of ownership for operators of commercial, military and private aircraft.
Performance criteria
The following are different measures of the engine as a black box and most are negotiated between the engine manufacturer and its customer for a particular aircraft installation. Some, like noise, exhaust pollutants and certain operability requirements, such as acceleration times, are regulated with limits that have to be met for commercial operation. Each is the result of design iterations inside the "black box" using both analytical computer modelling and development testing.
Thrust, Shaft power, Fuel consumption, Weight, Cost, Installation envelope, Overhaul life, Operability, Noise, Exhaust pollutants.
Factors affecting engine performance
Fuel
The cost of fuel is a significant part of the operating cost of an aircraft, about 56% for a wide-body airliner in 1983. Particular fuels are approved for use in a particular engine to prevent safety and reliability issues. Fuels include jet fuel and AVGAS (aviation gasoline), which differ from automotive engine fuels. Gas turbine engines will run on aviation gasoline as an alternative to jet fuel as in the case of turbojet booster engines on piston-engined aircraft. Small turboprop and business aircraft may be approved for a limited running time on avgas to allow refuelling at remote airstrips with no jet fuel supply. Different fuels are used for different applications due to their performance characteristics.
Jet fuel
Kerosene jet fuel, also known as aviation turbine fuel (ATF), is designed to be used in aircraft powered by gas turbine engines. Jet fuel used to power gas turbine engines has been the preferred propellant since the advent of this type of engine due to the fuel's favourable combustion characteristics and relatively high energy content. Jet fuel remains the most commonly used fuel in aviation due to the popularity of turbofan and turboprop engines. Turbofan engines power most large commercial passenger and cargo aircraft today. Civil jet fuel grades include A-1, A, B, TS-1. Military grades include JP-4, JP-8 and JP-5. Military varieties differ from civil jet fuels due to the addition of corrosion inhibitors and anti-icing additives. JP-8 jet fuel is the most common fuel among NATO aircraft fleets.
AVGAS
AVGAS (aviation gasoline) is widely used in reciprocating engines (piston engines). Aviation gasoline is highly volatile and very flammable, with a low flash point, which makes it unsuitable for use in gas turbine engines. Volatility is how easily a substance will change from a liquid to a gaseous state. Highly volatile fuel is required to power reciprocating engines, as the liquid gasoline pumped to the carburettor must readily vaporise in order to combust in the engine. However, a balance of volatility is needed. If AVGAS fuel is too volatile, it may cause vapour lock and early detonation in the engine cylinder. If the AVGAS is not volatile enough, there will be inconsistent engine acceleration and power throughout the revolution range. AVGAS is commonly supplemented with tetraethyl lead (TEL) to prevent engine knocking, a damaging build-up of pressure inside the engine caused by low-octane fuel which may lead to engine failure in reciprocating engines. Antiknock additives allow for greater efficiency and peak power. TEL has been banned by the European Union for automotive use due to environmental concerns, but remains approved for use in aircraft.
Rocket fuel
Rocket fuel consists of solid, liquid and gel state fuels for propulsion. In order to power rockets, a fuel and an oxidiser are mixed within the combustion chamber, producing a high-energy propulsive exhaust as thrust. The main uses for rocket fuel are for space shuttle boosters, in order to propel the craft out of the atmosphere, or for missiles. Solid rocket propellant does not degrade in long-term storage and remains reliable on combustion. This allows munitions to remain loaded and be fired when needed, which is highly regarded for military use. Once ignited, solid rocket propellants cannot be shut down. The fuel and the oxidiser are stored within a metal casing. Once ignited, the fuel burns from the centre of the solid compound towards the edges of the metal casing. Burn rates and intensity are manipulated by changing the shape of a channel between the fuel and the casing shell. Two varieties of solid rocket fuel propellant exist: homogeneous and composite solid rocket fuels. These fuels are characteristically dense, stable at ordinary temperatures and easily storable. Liquid fuels are more controllable than solid rocket fuels: they can be shut off after ignition and restarted, and they offer greater thrust control. Liquid propellants are stored in two parts in an engine, the fuel in one tank and an oxidiser in another. These liquids are mixed in the combustion chamber and ignited. Hypergolic fuel ignites spontaneously on mixing, requiring no separate ignition. Liquid fuel compounds include petroleum, hydrogen and oxygen.
Electric
Electricity may be transmitted to an aircraft's electric motors through batteries, ground power cables, solar cells, ultra-capacitors, fuel cells and power beaming. Electrically powered engines are currently only suitable for light aircraft and UAVs (unmanned aerial vehicles). Electric engines are praised for being environmentally friendly and relatively quiet. There is a multitude of personal UAVs and drones available for purchase without a licence or age restriction globally, capable of high-speed manoeuvres and agile flight characteristics. Typically, aircraft with electric engines have significantly shorter flight durations than conventional fuel-powered aircraft, although battery technology developments and solar energy conversion have created potential for use in commercial aircraft. Jeffrey Engler, CEO of Wright Electric, estimates that commercially viable electric planes will reduce energy costs by 30%.
Hydrogen
Hydrogen, burned in a jet engine or consumed in a fuel cell, is a viable fuel source for aircraft engines. Currently, pressurised tanks able to hold hydrogen fuel with sufficient volume and a low enough weight are not available for large commercial aircraft, but hydrogen has been successfully implemented on smaller personal aircraft, such as the Boeing Fuel Cell Demonstrator by Boeing Phantom Works, and, stored cryogenically, on launch rockets for space shuttles. Hydrogen can be used to power a multitude of craft, via turbine engines, piston engines and rocket engines. Hydrogen fuel cells create electrical power through an electrochemical reaction and are in various stages of research for applications in environmentally friendly engines, as they emit no toxic exhaust. Hydrogen powered engines emit only water, produced by the bonding of oxygen and hydrogen, along with any excess hydrogen, as exhaust. This means that this is a highly environmentally friendly propulsion system.
Electro-aerodynamic thrust
Researchers from MIT (Massachusetts Institute of Technology) have developed an ion drive propulsion system with no moving parts. The 'engine' is propelled by ionic wind, also known as electro-aerodynamic thrust. This new form of aircraft propulsion would be completely silent and require far less maintenance than conventional fossil-fuel powered engines. This technology has the potential to be used in conjunction with conventional aircraft combustion engines as a hybrid system with further development or even as propulsion systems on spacecraft.
Atmospheric conditions
Atmospheric conditions are an important consideration in the analysis of the factors contributing to differing aircraft engine performance. These factors include altitude, temperature and humidity. Aircraft engine performance decreases as altitude and temperature increase. In the case of high humidity, the volume of air available for combustion is reduced, causing losses in power in combustion engines. Aircraft engine performance is measured at baseline parameters of a standard atmosphere (29.92 inHg) at 15 °C.
Weather may be a physical barrier to aircraft operation, as it is in the case of forecast of hail or volcanic ash, because of the risk of serious damage to all the engines installed on the aircraft.
Altitude
When altitude increases, air density decreases. With lower air density, air molecules are further apart from each other, which leads to declines in the performance of combustion engines. Electric-powered aircraft will not see losses of power output at high altitude, but rather aerodynamic losses, as propellers work harder to propel the same amount of air as at ground level. However, cooling capacity will decline for both combustion and electric motors at high altitude due to the lower density of air. This phenomenon is why the operating ceiling of helicopters is constrained, as rotor thrust falls toward zero when the air becomes too thin at high altitude. This makes high-altitude airports significantly more dangerous than airports at sea level.
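As a rough quantitative sketch, air density and the available power of a naturally aspirated piston engine can be estimated from the standard atmosphere; the Gagg–Ferrar rule used below is a textbook approximation, not a universal law.

```python
def isa_density(alt_m):
    """Air density (kg/m^3) in the ISA troposphere (0-11 km)."""
    T0, p0, L, R, g = 288.15, 101_325.0, 0.0065, 287.053, 9.80665
    T = T0 - L * alt_m                      # linear temperature lapse
    p = p0 * (T / T0) ** (g / (R * L))      # hydrostatic pressure
    return p / (R * T)

def piston_power_fraction(alt_m):
    """Gagg-Ferrar approximation for a naturally aspirated piston engine:
    available power as a fraction of sea-level power."""
    sigma = isa_density(alt_m) / isa_density(0.0)   # density ratio
    return (sigma - 0.117) / 0.883

print(piston_power_fraction(3000.0))   # about 0.71 at 3,000 m
```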
Temperature
Temperature has significant effects on the maximum power available and the operational efficiency of an aircraft engine. This applies for combustion and electrical engines. Pilots account for the ambient temperature on the day of a flight in order to calculate the takeoff distance required. Extreme heat or cold temperatures are performance limitations for aircraft engines.
An aircraft flying at a constant altitude with an ambient air temperature of 20 °C would experience more favourable performance than flying with an ambient air temperature of 40 °C. With cold temperatures, air is denser and a larger mass of air/fuel mixture is combusted, leading to higher efficiency and greater power.
Humidity
Humidity affects the mass of oxygen in each unit of volume of air in the atmosphere, reducing the burn rate and increasing the combustion time of fuel in a combustion engine, which reduces thermal efficiency. Minimal losses of power occur where the energy of the engine's combustion heats the moisture in the engine. For electrical components found within electric motors, excess moisture is capable of damaging circuits and electrical systems. In reality, air is never fully dry, or without moisture in the atmosphere. Even when air is considered dry, it retains a moisture content of around 5%.
Weather
Weather has significant impacts on the performance of an engine, and also on its propensity for malfunction or failure. Winds are both beneficial and unfavourable depending on the direction of the wind and the heading of the aircraft. A significant weakness of many aircraft is the use of propellers or turbines in their engines, because any particulates other than air that enter the engine may cause damage. An example of this is hail, when precipitation freezes. If the hail is severe enough, engine inlet guide vanes or compressor blades can bend or break under impact. Volcanic ash ejected into the atmosphere by a volcanic eruption is another example of reduced engine performance due to weather. Particles of volcanic ash are abrasive at high speed, leading to abrasion of compressor fan blades. The glass-like silicate compound found in volcanic ash has a lower melting point than the combustion temperature of fuel and air in a jet engine. When ingested into the engine, the material melts and deposits in cooler areas of the engine, leading to compressor stall and thrust loss.
See also
Index of aviation articles
Jet engine performance
References
Engine
Aircraft performance | Aircraft engine performance | [
"Physics",
"Technology"
] | 2,402 | [
"Physical quantities",
"Engines",
"Power (physics)",
"Powered flight",
"Aircraft engines"
] |
60,323,851 | https://en.wikipedia.org/wiki/Physical%20mapping | Physical map is a technique used in molecular biology to find the order and physical distance between DNA base pairs by DNA markers. It is one of the gene mapping techniques which can determine the sequence of DNA base pairs with high accuracy. Genetic mapping, another approach of gene mapping, can provide markers needed for the physical mapping. However, as the former deduces the relative gene position by recombination frequencies, it is less accurate than the latter.
Physical mapping uses DNA fragments and DNA markers to assemble larger DNA pieces. With the overlapping regions of the fragments, researchers can deduce the positions of the DNA bases. There are different techniques to visualize the gene location, including somatic cell hybridization, radiation hybridization and in situ hybridization.
The different approaches to physical mapping are suited to analyzing different sizes of genome and achieve different levels of accuracy. Low-resolution and high-resolution mapping are the two broad classes, particularly for the investigation of chromosomes. The three basic varieties of physical mapping are fluorescent in situ hybridization (FISH), restriction site mapping and sequencing by clones.
The goal of physical mapping, as a common mechanism under genomic analysis, is to obtain a complete genome sequence in order to deduce any association between the target DNA sequence and phenotypic traits. If the actual positions of genes which control certain phenotypes are known, it is possible to resolve genetic diseases by providing advice on prevention and developing new treatments.
Low-resolution mapping
Low-resolution physical mapping is typically capable of resolving DNA ranging from one base pair to several megabases. In this category, most mapping methods involve generating a somatic cell hybrid panel, which can map any human DNA sequence of interest to specific chromosomes in animal cells, such as those of mice and hamsters. The hybrid cell panel is produced by collecting hybrid cell lines containing human chromosomes, identified by polymerase chain reaction (PCR) screening with primers specific to the human sequence of interest as the hybridization probe. The human chromosome carrying that sequence will be present in all such cell lines.
There are different approaches to producing low-resolution physical maps, including chromosome-mediated gene transfer and irradiation fusion gene transfer, both of which generate the hybrid cell panel. Chromosome-mediated gene transfer is a process that coprecipitates human chromosome fragments with calcium phosphate onto the cell line, leading to stably transformed recipients retaining human chromosome fragments ranging in size from 1 to 50 megabase pairs. Irradiation fusion gene transfer produces radiation hybrids which contain the human sequence of interest and a random set of other human chromosome fragments. Markers from human chromosome fragments in radiation hybrids give cross-reactivity patterns, which are further analyzed to generate a radiation hybrid map by ordering the markers and breakpoints. This provides evidence on whether markers are located on the same human chromosome fragment, and hence on the order of the gene sequence.
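As a simplified illustration of the reasoning behind radiation hybrid analysis (a toy proxy, not any published mapping algorithm; the panel data below are invented), markers that lie close together tend to be retained or lost together across hybrid lines, so the fraction of discordant lines gives a rough indication of distance:

```python
# Hypothetical retention panel: each row is one radiation hybrid cell line;
# each column records whether a marker was retained (1) or lost (0).
panel = [
    # A  B  C
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
    [1, 0, 0],
    [0, 0, 0],
    [1, 1, 0],
]

def discordance(panel, i, j):
    """Fraction of hybrid lines in which markers i and j disagree.
    Low discordance suggests the markers usually sit on the same
    chromosome fragment, i.e. they are likely close together."""
    differing = sum(1 for row in panel if row[i] != row[j])
    return differing / len(panel)

print(discordance(panel, 0, 1))  # A vs B: 1/6 -> probably close
print(discordance(panel, 0, 2))  # A vs C: 4/6 -> probably distant
```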
High-resolution mapping
High-resolution physical mapping can resolve from hundreds of kilobases down to a single nucleotide of DNA. A major technique for mapping such large DNA regions is high-resolution FISH mapping, which can be achieved by the hybridization of probes to extended interphase chromosomes or artificially extended chromatin. Since their hierarchic structure is less condensed compared to prometaphase and metaphase chromosomes, the standard in situ hybridization targets, a high resolution of physical mapping can be produced.
FISH mapping using interphase chromosomes is a conventional in situ method to map DNA sequences from 50 to 500 kilobases, mainly using syntenic DNA clones. However, naturally extended chromosomes might fold back and produce alternative physical map orders. As a result, statistical analysis is necessary to generate an accurate map order from interphase chromosomes.
If artificially stretched chromatin is used instead, mapping resolutions could be over 700 kilobases. In order to produce extended chromosomes on a slide, direct visual hybridization (DIRVISH) is often carried out, in which cells are lysed by detergent so that DNA released into the solution flows to the other end of the slide. An example of high-resolution FISH mapping using stretched chromatin is extended chromatin fiber (ECF) FISH. The method suggests the order of desired regions on the DNA sequence by analyzing the partial overlaps and gaps between yeast artificial chromosomes (YACs). Eventually, the linear sequence of the DNA regions of interest can be determined. Note that if metaphase chromosomes are used in FISH mapping, the resulting resolution is very poor, and the approach is classified as low-resolution rather than high-resolution mapping.
Restriction site mapping
Restriction mapping is a top-down strategy that divides a chromosome target into finer regions. Restriction enzymes are used to digest a chromosome and produce an ordered set of DNA fragments. It involves genomic fragments of the target rather than cloned fragments in a library. The fragments are hybridized to probes chosen randomly from the genomic library for detection purposes. The lengths of the fragments are measured by electrophoresis, which can be used to deduce their distances along the map according to the restriction sites, the markers of a physical map. The ordering step involves combinatorial algorithms.
During the process, a chromosome is obtained from a hybrid cell and cut at rare restriction sites to produce large fragments. The fragments are separated by size and undergo hybridization, forming the macrorestriction map and contiguous blocks (i.e. contigs). To confirm that fragments are adjacent, linking clones that span the same rare cutting sites in the large fragments can be used.
After producing the low-resolution map, the fragments can be cut into smaller sections by restriction nucleases for further analysis to produce a map with higher resolution. Pulsed-field gel (PFG) fractionation can be used for the separation and purification of the generated fragments for small genomes.
Through different digestion approaches, different types of DNA fragments are produced. The variation in types of fragments might affect the calculation result.
Double digestion
This technique uses two restriction enzymes, and the combination of the two enzymes, for separate digestions. It assumes that complete digestion occurs at each restriction site. The lengths of the DNA fragments are measured and used to order the fragments computationally. This approach is easier to handle experimentally, but the combinatorial problem required for mapping is more difficult.
Partial digestion
This technique uses one restriction enzyme to digest the desired DNA in separate experiments with different durations of exposure, so that the extent of digestion differs between fragments. DNA methylation can be used to prevent the reaction from completing at cutting sites. This method demands more experimental care, but its mathematical problem can be solved by a straightforward exponential-time algorithm, as in the sketch below.
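The reconstruction problem behind partial digestion is known in computational biology as the partial digest problem: given the multiset of all pairwise distances between restriction sites, recover the site positions. The following is a minimal sketch of the classic exponential-time backtracking approach (the example distances are invented; solutions are unique only up to reflection of the map):

```python
from collections import Counter

def partial_digest(distances):
    """Reconstruct restriction-site positions from the multiset of
    pairwise distances (classic backtracking; exponential worst case)."""
    dist = Counter(distances)
    width = max(dist)          # the largest distance spans the whole map
    dist[width] -= 1
    sites = {0, width}
    solution = []

    def deltas(candidate):
        return Counter(abs(candidate - s) for s in sites)

    def place():
        if sum(dist.values()) == 0:          # every distance accounted for
            solution.append(sorted(sites))
            return True
        y = max(d for d, n in dist.items() if n > 0)
        for cand in (y, width - y):          # y measured from 0 or from width
            need = deltas(cand)
            if all(dist[d] >= n for d, n in need.items()):
                dist.subtract(need)
                sites.add(cand)
                if place():
                    return True
                sites.remove(cand)           # backtrack
                dist.update(need)
        return False

    place()
    return solution[0] if solution else None

# Distance multiset of the site set {0, 2, 4, 7, 10}:
print(partial_digest([2, 2, 3, 3, 4, 5, 6, 7, 8, 10]))
# -> [0, 3, 6, 8, 10], the mirror image of [0, 2, 4, 7, 10]
```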
Sequencing by clones
Using clones to generate a physical map is a bottom-up approach with fairly high resolution. It uses the existing cloned fragments in genomic libraries to form contigs. By cloning the partially digested fragments generated through bacterial transformation, immortal clones with overlapping regions of the genome are produced, examined by fingerprinting methods, and stored in the libraries. During the sequencing process, the clones are selected and placed on a set of microtitre plates at random, and fingerprinted by different methods. To ensure there is a minimum set of clones that together form one contig for a genome (i.e. a tiling path; see the sketch below), the library used will have five- to ten-fold redundancy. However, such techniques might leave unknown gaps in the resulting map or eventually reach saturation in clones.
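Computationally, choosing a tiling path is an interval-covering problem: from a redundant set of overlapping clones, pick a minimal subset that still spans the region. A small greedy sketch follows; the clone names and coordinates are invented for illustration.

```python
# Hypothetical clones, each mapped to a (start, end) interval on the genome.
clones = {
    "c1": (0, 40), "c2": (25, 70), "c3": (30, 60),
    "c4": (55, 100), "c5": (80, 130), "c6": (95, 150),
}

def tiling_path(clones, region_end):
    """Greedy minimal cover: repeatedly take the clone that starts inside
    the already-covered prefix and reaches furthest to the right."""
    path, covered = [], 0
    while covered < region_end:
        best = max(
            (name for name, (start, _) in clones.items() if start <= covered),
            key=lambda name: clones[name][1],
            default=None,
        )
        if best is None or clones[best][1] <= covered:
            raise ValueError(f"gap in clone coverage at position {covered}")
        path.append(best)
        covered = clones[best][1]
    return path

print(tiling_path(clones, 150))  # -> ['c1', 'c2', 'c4', 'c6']
```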
Application
Physical mapping is a technique used to complete the sequencing of a genome. Projects that determine DNA base pair sequences, such as the Human Genome Project, give knowledge of the order of nucleotides and allow further investigation of genetic questions, particularly the association between a target sequence and the development of traits. The individual DNA sequences isolated and mapped by physical mapping can provide information on the transcription and translation processes during the development of organisms, and hence identify the specific function of a gene and the traits it produces. As a result of understanding the expression and regulation of genes, potential new treatments can be developed to alter protein expression patterns in specific tissues. Moreover, if the location and sequence of disease genes are identified, medical advice can be given to potential patients who are carriers of a disease gene, with reference to knowledge of the gene's function and products.
References
Molecular biology techniques | Physical mapping | [
"Chemistry",
"Biology"
] | 1,711 | [
"Molecular biology techniques",
"Molecular biology"
] |
43,868,891 | https://en.wikipedia.org/wiki/Social%20Bonding%20and%20Nurture%20Kinship | Social Bonding and Nurture Kinship: Compatibility between Cultural and Biological Approaches is a book on human kinship and social behavior by Maximilian Holland, published in 2012. The work synthesizes the perspectives of evolutionary biology, psychology and sociocultural anthropology towards understanding human social bonding and cooperative behavior. It presents a theoretical treatment that many consider to have resolved longstanding questions about the proper place of genetic (or 'blood') connections in human kinship and social relations, and a synthesis that "should inspire more nuanced ventures in applying Darwinian approaches to sociocultural anthropology".
The aim of the book is to show that "properly interpreted, cultural anthropology approaches (and ethnographic data) and biological approaches are perfectly compatible regarding processes of social bonding in humans." Holland's position is based on demonstrating that the dominant biological theory of social behavior (inclusive fitness theory) is typically misunderstood to predict that genetic ties are necessary for the expression of social behaviors, whereas in fact the theory only implicates genetic associations as necessary for the evolution of social behaviors. Whilst rigorous evolutionary biologists have long understood the distinction between these levels of analysis (see Tinbergen's four questions), past attempts to apply inclusive fitness theory to humans have often overlooked the distinction between evolution and expression.
Beyond its central argument, the broader philosophical implications of Holland's work are considered by commentators to be that it both "helps to untangle a long-standing disciplinary muddle" and "clarifies the relationship between biological and sociocultural approaches to human kinship." It is claimed that the book "demonstrates that an alternative non-deterministic interpretation of evolutionary biology is more compatible with actual human social behavior and with the frameworks that sociocultural anthropology employs" and as a consequence, delivers "a convincing, solid and informed blow to the residual genetic determinism that still influences the interpretation of social behaviour."
Synopsis
The book's form consists of a cumulative argument (using a wide range of supporting evidence) made over nine chapters, with each chapter ending in a brief retrospective summary, and the final chapter containing a recapitulation and summary of the whole, and drawing some wider conclusions.
Continuing debate over 'blood kinship'
Holland begins by tracing transitions in the history of anthropological theories of social behavior and kinship, noting the varying importance with which 'blood ties' have been understood to be a necessary element of human kinship and social relations. He suggests that whilst the mounting ethnographic evidence has led to a move away from the 'blood kinship' concept in recent decades, many sociocultural anthropologists still query the connection between kinship and blood, reproduction or some other apparently biological functions. Meanwhile, many biologists, biological anthropologists and evolutionary psychologists have persisted in viewing human kinship and cooperative behavior as necessarily associated with genetic relationships and 'blood ties'. The current situation has been characterized as "a clash between incommensurate paradigms, holding as they may, completely incompatible ideas about human nature." Holland argues that a clear resolution to these questions is still outstanding, and would therefore be of value. In closing the introduction, Holland writes; "The approach is not reductive. The claim is rather that a thorough investigation of the 'biological facts' can be useful mainly though allowing a change in focus... away from confusion about the place of genealogy in social ties, and onto a reformulated baseline, built around varied processual aspects of social bonding."
Evolutionary biology theory of social behavior
The book reviews the background and key elements of Hamilton's inclusive fitness theory from the 1960s onwards, setting out its significant conceptual and heuristic value. Holland notes that Hamilton acknowledged that his earliest and most widely known account (1964) contained technical inaccuracies. He also notes Hamilton's early speculations about possible proximate mechanisms of the expression of social behavior (supergenes as a possible alternative to behaviour-evoking-situations) contained errors that have nevertheless remained very influential in popular accounts. Specifically, the supergenes notion (sometimes called the Green-beard effect) - that organisms may evolve genes that are able to identify identical copies in others and preferentially direct social behaviours towards them - was theoretically clarified and withdrawn by Hamilton in 1987. However, in the intervening years, the notion that supergenes (or more often, simply individual organisms) have evolved to identify genetic relatives and preferentially cooperate with them took hold, and became the way many biologists came to understand the theory. This persisted, despite Hamilton's 1987 correction. In Holland's view it is the pervasiveness of this longstanding but erroneous perspective, and the suppression of the alternative 'behaviour-evoking-situations' perspective regarding social expression mechanisms, that is largely responsible for the ongoing clash between biological and sociocultural approaches to human kinship.
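For reference, the core criterion of inclusive fitness theory, Hamilton's rule, is usually stated in textbook form as follows (this is standard theory, not a formula quoted from Holland's book):

$$r\,b > c$$

where c is the fitness cost to the actor of a social behaviour, b the fitness benefit to the recipient, and r the genetic relatedness between them. Crucially for Holland's argument, the rule is a condition on the evolution of a behaviour across generations, not a claim that individual organisms detect r and direct behaviour accordingly.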
Sociobiology and kinship
Holland shows that, in the 1970s and 80s, the first wave of attempts (known as human sociobiology or Darwinian anthropology) to apply inclusive fitness theory to human social behavior relied on, and further reinforced, this same misinterpretation (above section) about the theory's predictions and the proximate mechanisms of social behavior. Holland also shows that this period of research was burdened with many misplaced assumptions about universal attributes of the human sexes, sexuality and gender roles, apparently projected from the specific cultural values of the researchers themselves. Holland also shows that, following the perceived failures of this early wave, and particularly its methodological agnosticism regarding proximate mechanisms of social behavior, the evolutionary psychology school grew up in its place. Although this latter school typically avoided engaging with the ethnographic data on human kinship, Holland argues that in the few cases where it did so, it repeated the misinterpretation of inclusive fitness theory that characterized the first wave. Holland also notes that Kitcher, in his 1985 critique of the sociobiological position, suggested that perhaps the expression of social behaviors in humans might quite simply be based on cues of context and familiarity, rather than genetic relatedness per se.
Proximate mechanisms and 'kin recognition'
Chapters four and five investigate further the theory and evidence surrounding the proximate mechanisms of social behavior; specifically the question of whether social behaviors are expressed by organisms via behaviour-evoking-situations or via direct detection of actual genetic relatedness. Related questions have been the domain of kin recognition theory. Holland notes that the name 'kin recognition' itself suggests some expectation that a positive identification of genetic relatedness is a prediction of inclusive fitness theory, and is thus expected. Similar points have been made by others; "many behavioural ecologists seem to implicitly assume that specialised mechanisms allowing individuals to distinguish their kin from non-kin must have evolved." Again, the possibility that behaviour-evoking-situations might be the more parsimonious mechanism of the expression of social behavior, and fully compatible with inclusive fitness theory, has often been underemphasized. However, Holland's review of the evidence notes that field studies in this area quickly established that behaviour-evoking-situations do in fact overwhelmingly mediate social behaviours in those species studied, and that, particularly in mammal species, social bonding and familiarity formed in early developmental contexts (e.g. in burrows or nesting sites) are a common mediating mechanism for social behaviors, independently of genetic relatedness per se. On the basis of the preceding theoretical analysis and review of evidence, at the end of chapter five, Holland argues that;
It is entirely erroneous, both in reference to theory and in reference to the evidence, to claim or suggest that 'the facts of biology' support the claim that organisms have evolved to cooperate with genetic relatives per se.
Primate social bonding and attachment theory
Having argued for the above position on the lack of necessity for genetic relatedness per se to mediate social bonding and behavior, Holland suggests that "The further question then is; can we uncover in any greater detail how familiarity and other context-dependent cues operate?". To discover the extent to which the variety of human kinship behaviors may nevertheless be compatible with this (less deterministic) interpretation of biological theory of social behavior, Holland suggests that a survey of primates' most fundamental social patterns may give clues, especially those of species most closely connected with humans. The variety of primate mating systems, group-membership ('philopatry') patterns, and life-cycle patterns are reviewed. Holland finds that;
Like other mammals, Catarrhini primate demographics are strongly influenced by ecological conditions, particularly density and distribution of food sources... Cohesive social groups and delayed natal dispersal mean that maternally related individuals, including maternal siblings, face a statistically reliable context of interaction in all Catarrhini primates. This reliable context of interaction with maternally related individuals is extended amongst those species with female philopatry (especially Cercopithecinae).
As with other social mammals, evidence suggest that the reliability of 'behaviour-evoking-situations' this social context provides has shaped the mechanisms of proximate expression of social bonding and behavior;
Adoption of infants by females (and sometimes males) demonstrates that care-giving and bonding to infants is not mediated by positive powers of discrimination. From the infant's perspective, it will bond with any responsive carer. If not necessarily the actual mother, in natural conditions this will often be a maternal relative (particularly an older sibling), but the context is primary, not the actual relatedness. Similarly, social bonding and social behaviours between maternal siblings (and occasionally between other maternal relatives) is context-driven in primates, and mediated via the care-giver.
Holland also notes that Bowlby and colleagues' attachment theory was strongly informed by primate bonding patterns and mechanisms, and that Bowlby's later writing explicitly linked attachment theory to the then-emerging inclusive fitness theory.
[Bowlby's] work demonstrated that social attachments form on the basis of provision of care, and responsiveness to elicitations for care. The social context of living together and the familiarity this brings, provides the circumstance within which social bonds can form...
On the basis of combining more recent primate research with the findings of attachment theory, Holland proposes that "In attempting to define more specific forms of the giving of care and nurture which may mediate social bonding we [find] that provision of food is likely to play a part, as well as the more intangible provision of warmth and comfort, and a safe base for sleeping."
Processual and nurture kinship in humans
Holland claims that, while biological theory of social behavior is not deterministic in respect of genetic relatedness vis-a-vis the formation of social bonds and expression of social behaviors, evidence does point to compatibility between a non-reductive interpretation of the theory and how such bonds and behaviors operate in social mammals, primates and in humans. In the final part of the book, Holland explores the extent to which this perspective is also compatible with sociocultural anthropology's ethnographic accounts of human kinship and social behavior, both occasional accounts from the past, as well as more contemporary accounts that have explicitly eschewed the earlier 'blood ties' assumption. Holland finds that;
Many contemporary accounts focus on social bonds formed in childhood and the importance of the performance of acts of care, including food provision, in mediating these bonds. In all cases it is this performance of care which is considered the overriding factor in mediating social bonds, notwithstanding 'blood ties'. In short, there is strong compatibility between the perspectives on social bonding that emerge from a proper account of biological theory and those documented by ethnographers.
Conclusion
Holland's concluding chapter gives a summary of his fundamental position;
A crucial implication of this argument taken as a whole is that the expression of the kinds of social behaviours treated by inclusive fitness theory does not require genetic relatedness. Sociobiology and evolutionary psychology's claims that biological science predicts that organisms will direct social behaviour towards relatives are thus both theoretically and empirically erroneous. Such claims and their supporting arguments also give a highly misleading and reductive account of basic biological theory. Properly interpreted, cultural anthropology approaches (and ethnographic data) and biological approaches are perfectly compatible regarding processes of social bonding in humans. Most of all, this requires a focus on the circumstances and processes which lead to social bonding.
The book notes that, as an outcome of the analysis, Schneider's sociocultural perspective on human kinship is vindicated;
Do the biological facts have some priority or are they but one of the conditions, like ecology, economy, demography, etc., to which kinship systems must adapt? Take note: if the latter is the case, then kinship must be as much rooted in these other conditions as in the biological facts.
The author supplies several examples of the insight that Schneider's broad approach can provide. The book closes with an example of a clash of cultural perspectives on kinship and family norms, and makes the suggestion that;
Constructing from narrow cultural particulars (Euro-American or otherwise) an essentialised model of 'human nature' does not constitute science; it is closer to cultural colonialism. In any analysis intended to shed light on proposed universals of the human condition, reflexivity is essential, and cultural and biological approaches both surely necessary.
Reception and reviews
General
Kinship theorist and member of the US National Academy of Sciences, Robin Fox wrote of the work:
An excellent and constructive discussion of matters in kinship and its cultural and biological components, handsomely reconciling what have been held to be incompatible positions. Max Holland gets to the heart of the matter concerning the contentious relationship between kinship categories, genetic relatedness and the prediction of behavior. If he had been in the debate in the 1980s then a lot of subsequent confusion could have been avoided.
Irwin Bernstein, distinguished research professor in the University of Georgia's Behavioral and Brain Sciences Program, made the following comment on Holland's book:
Max Holland has demonstrated extraordinarily thorough scholarship in his exhaustive review of the often contentious discussions of kinship. He has produced a balanced synthesis melding the two approaches exemplified in the biological and sociocultural behavioral positions. His work in reconciling opposing views clearly demonstrates the value of interdisciplinary approaches. This should be the definitive word on the subject.
Philip Kitcher, John Dewey Professor of Philosophy, and James R. Barker Professorship of Contemporary Civilization at Columbia University, past president of the American Philosophical Association and inaugural winner of the Prometheus Prize, stated of the book:
Max Holland has provided a wide-ranging and deeply-probing analysis of the influence of genetic relatedness and social context on human kinship. He argues that while genetic relatedness may play a role in the evolution of social behavior, it does not determine the forms of such behavior. His discussion is exemplary for its thoroughness, and should inspire more nuanced ventures in applying Darwinian approaches to sociocultural anthropology.
Sociocultural anthropology
Kirk Endicott, professor emeritus of anthropology at Dartmouth College, wrote that Holland's book was:
A brilliant discussion of the relationship between kinship and social bonding as understood in evolutionary biology and in sociocultural anthropology. Among other contributions, it debunks the common misconception that biological evolution involves individual organisms actively pursuing the goal of increasing the numbers of their genes in successive generations, the measure of their so-called 'individual inclusive fitness'. Holland demonstrates that an alternative non-deterministic interpretation of evolutionary biology is more compatible with actual human social behavior and with the frameworks that sociocultural anthropology employs.
Janet Carsten, kinship theorist and professor of anthropology at the University of Edinburgh, stated that:
This book is a scholarly attempt to get beyond the often sterile oppositions between evolutionary and culturalist approaches to kinship. In bringing together two sides of the debate, it constitutes a valuable contribution to kinship studies.
In a review for the journal Critique of Anthropology, Nicholas Malone concluded that:
Lucid and effective... Holland has produced a significant work of scholarship that will be of interest to a wide swath of the anthropological community.
Commenting on the book for the journal Social Analysis, Anni Kajanus found that:
Holland has done an excellent and thorough job in reviewing the disciplinary and interdisciplinary histories of approaches to kinship and social bonds in anthropology, biology, and psychology. Most importantly, he clarifies the different levels of analysis when looking at human behavior in real time and in the evolutionary time frame. This makes the book essential reading for anyone who acknowledges that human relatedness and social bonds are shaped by the evolved dispositions of our species, their development through the life-course of an individual, and our specific cultural-historical environments... Holland's book goes a long way toward clarifying and therefore advancing these theoretical debates
Biology
An in-depth review of the book by primatologist Augusto Vitale, in the journal Folia Primatologica, found that:
This is, without a doubt, a very significant and important contribution to the on-going discussion about the determinants of sociality in humans as well as in other animals... A painstaking analysis of inclusive fitness, attachment theory and non-human primate social relationships, through a fascinating journey which ends with an anthropological account of social bonds in different cultures... It is a landmark in the field of evolutionary biology, which places genetic determinism in the correct perspective.
Stuart Semple, evolutionary anthropologist, reviewing the book in the journal Acta Ethologica stated that:
As someone who teaches behavioural ecology to biologists, and primate biology to social and biological anthropologists, I will be strongly recommending this book to all of my advanced undergraduates, masters and PhD students, as well as to my colleagues. Not only does it help to resolve debates that have run for many years, but it is also an outstanding example of what can be achieved by immersing oneself in literature from different fields, while retaining an intellectual openness and exercising incisive analysis. Many of us talk enthusiastically about inter- and multi-disciplinarity, but often this is not much more than lip service. This book is a shining example of what can be achieved when excellent scholars engage fully across disciplinary boundaries. There should be more texts like this.
Published debate and criticism of the book
In addition to praise for the book's significance, the Folia Primatologica review noted that the book is at times too dense and requires close reading;
The argument here and there becomes too detailed and tortuous, but it is absolutely captivating... [Colleagues] who are less used to extremely detailed theoretical reasoning, will find it difficult at the beginning...
See also
Fictive kinship
Inclusive fitness
Kin recognition
Kin selection
Kinship
Nurture kinship
Scientific method
Social animal
References
External links
Social Bonding and Nurture Kinship
2012 non-fiction books
Anthropology books
Books about sociobiology
Kinship and descent
Behavioral ecology
Biological anthropology
Biological concepts
Evolutionary biology
Evolution of primates | Social Bonding and Nurture Kinship | [
"Biology"
] | 3,860 | [
"Evolutionary biology",
"Behavior",
"Behavioral ecology",
"Behavioural sciences",
"nan",
"Ethology",
"Human behavior",
"Kinship and descent"
] |
43,875,229 | https://en.wikipedia.org/wiki/Vantaa%20incinerator | The Vantaa incinerator is an incinerator power plant that was brought into use in Vantaa, Finland, on 17 September 2014. It is operated by Vantaan Energia. It is the largest incinerator in Finland, and it cost 300 million euros to build. It is located immediately to the northeast of the intersection of the Finnish national road 7 and the Ring III bypass road.
The construction of the plant began in the autumn of 2011, the cornerstone was laid in May 2012, and trial runs of the plant began in March 2014, when the first batches of waste were burned.
The waste burned in the incinerator is collected from the Uusimaa Province, from an area that extends from Hanko to Porvoo and from Helsinki to Nurmijärvi. The plant receives between 100 and 150 loads of waste every day. They are delivered to it by HSY (‘Helsinki area environmental services’) from the metropolitan area and by Rosk’n Roll Oy from Uusimaa.
The plant has two incinerators, which can burn up to 400 cubic metres of waste per second; that is, a volume equivalent to that of a single-family house every five seconds. There is also a storage space called a bunker that can hold the waste produced by 1.5 million people over a period of 10 days. The plant makes better use of mixed waste: 320,000 tons of mixed waste no longer end up in landfills but are instead used to produce heat and electricity for the city of Vantaa.
The waste is burned on a grate, which according to Vantaan Energia is a reliable technique and the most common incineration technique for waste in the world. In addition, natural gas is used as a fuel, which is said to contribute to the energy efficiency of the plant.
The plant produces 920 gigawatt-hours of district heat per year, which is nearly half of what the city of Vantaa needs, and 600 gigawatt-hours of electricity, which is 30% of what Vantaa needs per year. The plant allows Vantaa to use almost one third less fossil fuel and to cut its carbon dioxide emissions by 20%. The waste is burned at a temperature of nearly 1000 °C, which eliminates most of the toxic compounds. Around 700 tons of various materials are extracted from the flue gas every year, mostly heavy metals, which are disposed of by a company called Ekokem in Riihimäki. The slag is taken to the Ämmässuo landfill site, and the gravel-like bottom slag is used in earthworks. A further use for the ashes is being investigated.
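As a back-of-envelope check (an inference from the figures above, not a statement from the source, taking "nearly half" as 0.5), the city-wide annual demand implied by these percentages is roughly:

$$\text{heat demand} \approx \frac{920\ \text{GWh}}{0.5} \approx 1840\ \text{GWh}, \qquad \text{electricity demand} \approx \frac{600\ \text{GWh}}{0.3} = 2000\ \text{GWh}.$$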
References
Buildings and structures in Vantaa
Incinerators | Vantaa incinerator | [
"Chemistry"
] | 573 | [
"Incinerators",
"Incineration"
] |
58,833,146 | https://en.wikipedia.org/wiki/Alice%20%28spacecraft%20instrument%29 | Alice is either of two ultraviolet imaging spectrometers: one flown on the New Horizons spacecraft and the other on the Rosetta spacecraft. Alice is a small telescope with a spectrograph and a special detector with 32 pixels, each with 1024 spectral channels, detecting ultraviolet light. Its primary role on New Horizons is to determine the relative concentrations of various elements and isotopes in Pluto's atmosphere.
Alice has an off-axis telescope which sends light to a Rowland-circle spectrograph, and the instrument has a field of view of 6 degrees. It is designed to capture airglow and solar occultation at the same time, and has two inputs to allow this.
Overview
Alice uses an array of potassium bromide and caesium iodide photocathodes, and detects light in the extreme and far ultraviolet.
Among its capabilities, Alice is intended to detect ultraviolet signatures of noble (inert) gases, including helium, neon, argon, and krypton. Alice should also be able to detect water, carbon monoxide, and carbon dioxide in the ultraviolet. Although the instrument was designed to study Pluto's atmosphere, Alice is also tasked with studying Pluto's moon Charon, in addition to various Kuiper belt objects.
Alice was built and operated by the Southwest Research Institute for NASA's Jet Propulsion Laboratory. The instrument is powered by a radiation-hardened version of the Intel 8052 microprocessor, and uses 32 KB of programmable read-only memory (PROM), 128 KB of EEPROM, and 32 KB of SRAM. The command and data handling electronics are contained on four circuit boards which sit behind the detectors.
Alice operates in two separate data modes: pixel list mode (PLM) and histogram mode (HM). In pixel list mode, the number of photons per second is recorded. In histogram mode, the sensor array accumulates photons for a defined period of time, and the data are then read out as a 2D image. Furthermore, while the image is being read from the first memory bank, a second exposure can be started using the secondary memory bank. An advantage of having two data modes is that the method of data collection can be tailored to the scientific goals: PLM provides time resolution, whereas HM requires the same amount of memory regardless of exposure length.
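A schematic sketch of the trade-off between the two modes (illustrative only; the data structures and event format are invented and are not Alice's flight software). The 32 x 1024 array size matches the detector layout described above.

```python
# Schematic: pixel list mode keeps each photon event individually, so timing
# is preserved but memory grows with the photon rate; histogram mode
# accumulates counts into a fixed 32 x 1024 array, so memory is constant
# but arrival times are lost.

def pixel_list_mode(photon_events):
    """Return the raw event stream (time, spatial pixel, spectral channel)."""
    return list(photon_events)

def histogram_mode(photon_events, rows=32, cols=1024):
    """Accumulate events into a 2D count image."""
    image = [[0] * cols for _ in range(rows)]
    for _, row, col in photon_events:
        image[row][col] += 1
    return image

events = [(0.001, 3, 512), (0.004, 3, 512), (0.009, 17, 40)]
print(len(pixel_list_mode(events)))    # 3 events, arrival times kept
print(histogram_mode(events)[3][512])  # 2 counts, arrival times discarded
```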
Naming
Alice is not an acronym. The name was chosen by principal investigator Alan Stern for personal reasons.
Alice on New Horizons
In August 2018, NASA confirmed, based on results from Alice on the New Horizons spacecraft, the detection of a "hydrogen wall" at the outer edges of the Solar System. It was first detected in 1992 by the two Voyager spacecraft, which measured a surplus of ultraviolet light determined to be coming from hydrogen.
The New Horizons version of Alice uses an average power of 4.4 watts and weighs 4.5 kg (9.9 pounds).
Alice on Rosetta
On Rosetta, a mission to a comet, Alice performed ultraviolet spectroscopy to search for and quantify the noble gas content of the comet's nucleus.
The Rosetta version of the instrument uses 2.9 watts.
See also
UVS (Juno) (ultraviolet imaging spectrometer on Juno Jupiter orbiter)
Ultraviolet–visible spectroscopy
List of New Horizons topics
References
Spacecraft instruments
New Horizons
Rosetta mission
Spectrometers | Alice (spacecraft instrument) | [
"Physics",
"Chemistry"
] | 704 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
58,840,544 | https://en.wikipedia.org/wiki/Queer%20ecology | Queer ecology (or queer ecologies) is an endeavor to understand nature, biology, and sexuality in the light of queer theory, rejecting the presumption that heterosexuality and cisgenderedness constitute any objective standard. It draws from science studies, ecofeminism, environmental justice, and queer geography. These perspectives break apart various "dualisms" that exist within human understandings of nature and culture.
Overview
Queer ecology states that people often regard nature in terms of dualistic notions like "natural and unnatural", "alive or not alive" or "human or not human", when in reality, nature exists in a continuous state. The idea of "natural" arises from human perspectives on nature, not "nature" itself.
Queer ecology rejects ideas of human exceptionalism and anthropocentrism that propose that humans are unique and more important than the non-human. Specifically, queer ecology challenges traditional ideas regarding which organisms, individuals, memories, species, visions, objects, etc. have value.
Queer ecology also states that heteronormative ideas saturate human understanding of "nature" and human society, and calls for the inclusion of a more radically inclusive, queered perspective in environmental movements. It rejects the associations that exist between "natural" and "heterosexual" or "heteronormative", and draws attention to how both nature and marginalized social groups have been historically exploited.
Through the lens of queer ecology, all living things are considered to be connected and interrelated. "To queer" nature is to acknowledge the complexities present in nature and to rid interpretations of nature from human assumptions and their disastrous impacts.
Queer ecologies can be associated with what Tabassi calls "dirty resilience," or "the dismantling of structures of violence that target particular racialized and gendered bodies as disposable... [dirty resilience] is thus also the contextually specific creation of spaces and structures supporting self-determination and collective liberation, such as: land sovereignty; prison and apartheid regime abolition; new food systems; community accountability in place of policing and criminalization; non-proliferation and demilitarization; healthcare accessibility; free housing; collective decision-making; trauma transformation... [etc.]."
In speaking to the radically interdisciplinary nature of queer ecologies, Knox draws a thread between this and 'insurgent posthumanism', which "dissolves the dichotomy between humans and non-humans" and asks how to contribute to "the making of lively ecologies as a form of material transformation that instigates justice as an immediate, lived, worldly experience", as well as to the work of Arakawa and Gins, and of Simondon.
Definition
The term 'queer ecology' refers to a loose, interdisciplinary constellation of practices that aim, in different ways, to disrupt prevailing heteronormative discursive and institutional articulations of human and nature, and also to reimagine evolutionary processes, ecological interactions, and environmental politics in light of queer theory. Drawing from traditions as diverse as: evolutionary biology; LGBTTIQQ2SA (lesbian, gay, bisexual, transgender, transsexual, intersex, queer, questioning, two-spirited, and asexual) movements; queer geography and history; feminist science studies; ecofeminism; disability studies; and environmental justice - queer ecology highlights the complexity of contemporary biopolitics, draws important connections between the material and cultural dimensions of environmental issues, and insists on an articulatory practice in which sex and nature are understood in light of multiple trajectories of power and matter.
History
The theoretical beginnings of queer ecology are commonly traced back to what are considered foundational texts of queer theory. For example, scholar Catriona Sandilands cites queer ecology's origins back to Michel Foucault's The History of Sexuality (1976). Sandilands suggests Foucault "lays the groundwork for much contemporary queer ecological scholarship" by examining the conception of sex as "a specific object of scientific knowledge, organized through, on the one hand, a 'biology of reproduction' that considered human sexual behavior in relation to the physiologies of plant and animal reproduction, and on the other, a 'medicine of sex' that conceived of human sexuality in terms of desire and identity." Foucault explains the "medicine of sex" as a way of talking about human health separate from the "medicine of the body". Early notions of queer ecology also come from the poetry of Edward Carpenter, who addressed themes of sexuality and nature in his work.
Judith Butler's work regarding gender also laid an important foundation for queer ecology. Specifically, Butler explores gender as performativity in their 1990 book, Gender Trouble: Feminism and the Subversion of Identity. Queer ecology proposes that when Butler's notion of performativity is applied to the realm of ecology, it dismantles the "nature-culture" binary. From the perspective of queer ecology, essential differences do not exist between "nature" and "culture". Rather, humans who have categorized "nature" and "culture" as distinct from one another perform these differences. From a scientific perspective, "nature" cannot be fully understood if animals or particles are considered to be distinct, stagnant entities; rather, nature exists as a "web" of interactions.
In part, queer ecology also emerged from ecofeminist work. Although queer ecology rejects traits of essentialism found in early ecofeminism, ecofeminist texts such as Mary Daly's Gyn/Ecology (1978) laid the foundation for understanding intersections between women and the environment. Queer ecology develops these intersectional understandings, begun in the field of ecofeminism, of the ways sex and nature have historically been depicted. As a political theory that insists ecological and social problems are enmeshed, queer ecology has been compared to Murray Bookchin's concept of social ecology.
Heteronormativity and the environment
Queer ecology recognizes that people often associate heteronormativity with the idea of "natural", in contrast to, for example, homosexuality, trans, and non-binary identities, which people generally, under particular structures, associate with the "unnatural". These expectations of sexuality and nature often influence scientific studies of the non-human. The natural world often defies the heteronormative notions held by scientists, helping humans to redefine our cultural understanding of what "natural" is and therefore how we might be able to "queer" environmental spaces. For example, in 'The Feminist Plant: Changing Relations with the Water Lily,' Prudence Gibson and Monica Gagliano explain how the water lily defies heterosexist notions. They argue that because the water lily is so much more than its reputation as a "pure" or "feminine" plant, we need to reevaluate our understanding of plants and acknowledge the connections between plant biology and models for cultural practice, through a feminist lens.
In A Political Ecology of 'Unnatural Offenses,' Kath Weston points out that environmentalism and queer politics rarely seem to intersect, but that "this dislocation rests on a narrow association of ecology with visible landscapes and sexuality with visible bodies bounded by skin." In The Body as Bioregion, Deborah Slicer wrote that "[t]he environmentalists' silence about the body is all too familiar. My worry is that this silence reflects that traditional and dangerous way of thinking that the body is of no consequence, that our own corporeal nature is irrelevant to whatever environmentalists are calling "Nature"." As Nicole Seymour states, "... new models of gender and sexuality emerge not just out of shifts in areas such as politics, economics, and medicine, but out of shifts in ecological consciousness."
In the Orion Magazine article, “How to Queer Ecology: One Goose at a Time”, Alex Carr Johnson calls for a stop to the dualistic and generalizing categorization of nature and its possibilities. Two opposing interpretations are found by comparing David Quammen’s essay “The Miracle of Geese” and Bruce Bagemihl’s book, Biological Exuberance. While Quammen used evidence of monogamous and heterosexual partnerships amongst geese as an ecological mandate for such behaviors, Bagemihl observed monogamous and homosexual partnerships. These partnerships were frequent and persistent, not from a lack of potential mates of the opposite sex. Such conflicting accounts of the “natural” exemplify how interpretation, extrapolation, and communication of nature and the natural subsequently restricts and reduces the capacity to conceptualize and understand what it constitutes.
Reimagining scientific perspectives
In disciplines of the natural sciences like evolutionary biology and ecology, queer ecology allows scholars to reimagine cultural binaries that exist between "natural and unnatural" and "living and non-living".
Timothy Morton proposes that biology and ecology deconstruct notions of authenticity. Specifically, he proposes that life exists as a "mesh of interrelations" that blurs traditional scientific boundaries, like species, living and nonliving, human and nonhuman, and even between an organism and its environment. Queer ecology, according to Morton, emphasizes a perspective on life that transcends dualisms and distinctive boundaries, instead recognizing that unique relationships exist between life forms at different scales. Queer ecology nuances traditional evolutionary perspectives on sexuality, regarding heterosexuality as impractical at many scales and as a "late" evolutionary development.
Other scholars challenge the contrast that exists between "human" and "non-human" classifications, proposing that the idea of "fluidity" from queer theory should also extend to the relationship between humans and the non-human.
Queer ecology and human society
Queer ecology is also relevant when considering human geography. For example, Catriona Sandilands considers lesbian separatist communities in Oregon as a specific manifestation of queer ecology. Marginalized communities, according to Sandilands, create new cultures of nature against dominant ecological relations. Environmental issues are closely linked to social relations that include sexuality, and so a strong alliance exists between queer politics and environmental politics. "Queer geography" calls attention to the spatial organization of sexuality, which implicates issues of access to natural spaces, and the sexualization of these spaces. This implies that unique ecological relationships arise from these sexuality-based experiences. Furthermore, queer ecology disrupts the association of nature with sexuality. Matthew Gandy proposes that urban parks, for example, are heteronormative because they reflect hierarchies of property and ownership. "Queer", in the case of urban nature, refers to spatial difference and marginalization, beyond sexuality.
Queer ecology is also important within individual households. As a space influenced by society, the home is often an ecology that perpetuates heteronormativity. Will McKeithen examines queer ecology in the home by considering the implications of the label "crazy cat lady". The "crazy cat lady" often defies societal heterosexist expectations for the home: instead of having a romantic, cis-male, human partner, she treats animals as legitimate companions. This rejection of heteropatriarchal norms, and acceptance of multispecies intimacy, creates a queer ecology of the home.
Queer ecology is also connected to feminist economics, which is concerned with topics such as social reproduction, extractivism, and feminized forms of labour that remain largely unrecognized and unremunerated by dominant capitalist, neo-colonial and neo-imperialist systems. Feminist economics may be said to use queer ecology to disentangle the gender binary, including the ties between the cis-female body's reproductive potential and the responsibilities of social reproduction, childcare, and nation building.
Arts and literature
A significant shift towards an ecological aesthetic in New York can be traced back to an interdisciplinary festival in 1990 called the Sex Salon, which took place at the art space Epoché in Williamsburg, Brooklyn. Celebrating both nonbinary forms of sexuality and the rooting of culture within a neighborhood ecosystem, the three-day salon was the first large gathering of artists, writers and musicians outside the Borough of Manhattan. The ecologically engaged movement, eventually referred to as the Brooklyn Immersionists, included the ecofeminist periodical The Curse and the night space El Sensorium, which promoted a form of identity-free abandon called the "omnisensorial sweepout."
The Immersionist scene came to a climax in 1993, according to Domus, with the ecological culture experiment, Organism. The event blurred the boundaries between humans and their environment and featured numerous overlapping cultural and natural systems cultivated by 120 members of Williamsburg's creative community. The ecological "web jam" included a genderless "elvin napping system" and a participatory exercise in sexual empowerment called The Boom Boom Womb by the polyamorous rock group, Thrust. The all night event was attended by over 2000 guests and has been cited by Newsweek, the Performing Arts Journal (PAJ), Die Zeit and the New York Times. Organism's program notes invited the audience into an implicitly queer merging of the human body with its ecosystem:
"Wiffle your fingers through the mush. Invite a friend into the jello with you. This is all one strange continuum, a conflux of linkages, systems, feedback loops, waveforms... How do we extract pleasure from such an equation? Can we build a hybrid of steel, brick, plants, [bodies] and thought, absorbing pleasure from it as we ourselves become integrated into its monstrous flesh?"
In May 1994, an editorial essay in UnderCurrents: Journal of Critical Environmental Studies entitled "Queer Nature" spoke to the notion of queer ecology. The piece identified the disruptive power possible when one examines normative categories associated with nature. The piece asserted that white cis-heterosexual males hold power over the politics of nature, and that this pattern cannot continue. Queer Ecological thinking and literature was also showcased in this issue, in the form of poetry and art submissions—deconstructing heteronormativity within both human and environmental sexualities. In 2015, Undercurrents proceeded to release an update to their original issue and a podcast to celebrate 20 years of continued studies in queer ecology.
In 2013, Strange Natures, by Nicole Seymour, explored the queer ecological imagination, futurity, and empathy through culture and popular culture, including the contemporary transgender novel and different forms of cinema.
Theater is a significant setting for exploring ideas of queer ecology, because the theater-space can provide an alternative environment, from which to consider a reality independent from the socially constructed and enforced, binaries and heteronormativity of the outside world. In this way, theater has the potential to construct temporary "queer ecologies" on stage.
Writers such as Henry David Thoreau, Herman Melville, Willa Cather, and Djuna Barnes, have been said to complicate the common notion that environmental literature consists exclusively of heterosexual doctrine and each of their work sheds light on the ways that human sexuality is connected to environmental politics. Robert Azzarello, has also identified common themes of queerness and environmental studies in American Romantic and post-Romantic literature that challenge conventional ideas of the "natural".
In 2023, Knox referred to Camille Vidal-Naquet's Sauvage, as a queer ecological film, in a presentation titled Queer Ecologies through Camille Vidal-Naquet’s, Sauvage (2018).
Queer Ecologies and Crip Theory
Placing queer ecologies in intimate relation with disability studies, in Queer Ecologies; Sex, Nature, Politics, Desire, Giovanna Di Chiro quotes Eli Clare as follows: "The body as home, but only if it is understood that bodies can be stolen, fed lies and poison, torn away from us. They rise up around me - bodies stolen by hunger, war, breast cancer, AIDS, rape, the daily grind of the factory, sweatshop, cannery, sawmill; the lynching rope; the freezing streets; the nursing home and prison... disabled people cast as supercrips and tragedies; lesbian/gay/bisexual/trans people told over and over again that we are twisted and unnatural; poor people made responsible for their own poverty. Stereotypes and lies lodge in our bodies as surely as bullets. They live and fester there, stealing the body."
See also
Bruce Bagemihl
Mel Y. Chen
Eli Clare
Climate justice
Crip theory
Donna Haraway
Queer theory
Catriona Sandilands
Sauvage (film)
Sexecology
Simondon
Social ecology
References
External links
Arakawa and Madeline Gins
Institute of Queer Ecology
Queer Ecology by Catriona Sandilands in Keywords for Environmental Studies. Eds. Joni Adamson, William A. Gleason and David N. Pellow. New York: New York University Press, 2015
Queer Ecologies through Camille Vidal-Naquet’s Sauvage (2018)
Queer theory
Environmentalism
Environmental movements
Environmental humanities
Environmental studies
Environmental social science concepts
Political ecology | Queer ecology | [
"Environmental_science"
] | 3,514 | [
"Environmental social science concepts",
"Political ecology",
"Environmental social science"
] |
58,840,824 | https://en.wikipedia.org/wiki/Mechanics%20of%20gelation | Mechanics of gelation describes processes relevant to the sol-gel process.
In a static sense, the fundamental difference between a liquid and a solid is that the solid has elastic resistance against a shearing stress while a liquid does not. Thus, a simple liquid will not typically support a transverse acoustic phonon, or shear wave. Gels have been described by Born as liquids in which an elastic resistance against shearing survives, yielding both viscous and elastic properties. It has been shown theoretically that in a certain low-frequency range, polymeric gels should propagate shear waves with relatively low damping. The distinction between a sol (solution) and a gel therefore appears to be understood in a manner analogous to the practical distinction between the elastic and plastic deformation ranges of a metal. The distinction lies in the ability to respond to an applied shear force via macroscopic viscous flow.
In a dynamic sense, the response of a gel to an alternating force (oscillation or vibration) depends upon the period or frequency of vibration. Even most simple liquids will exhibit some elastic response at shear rates or frequencies exceeding 5 × 10⁶ cycles per second. Experiments on such short time scales probe the fundamental motions of the primary particles (or particle clusters) which constitute the lattice structure or aggregate. The increasing resistance of certain liquids to flow at high stirring speeds is one manifestation of this phenomenon. The ability of a condensed body to respond to a mechanical force by viscous flow is thus strongly dependent on the time scale over which the load is applied, and thus on the frequency and amplitude of the stress wave in oscillatory experiments.
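One standard way to formalize this frequency dependence is the textbook Maxwell model of a viscoelastic fluid, offered here as a generic illustration rather than as the model used by any study cited in this article. The complex shear modulus splits into storage and loss parts:

$$G^{*}(\omega) = G'(\omega) + i\,G''(\omega), \qquad G'(\omega) = \frac{G\,\omega^{2}\tau^{2}}{1+\omega^{2}\tau^{2}}, \qquad G''(\omega) = \frac{G\,\omega\tau}{1+\omega^{2}\tau^{2}},$$

where τ is the relaxation time. For ωτ ≪ 1 the loss modulus dominates and the material flows like a liquid; for ωτ ≫ 1 the storage modulus approaches G and even a simple liquid responds elastically, consistent with the high-frequency behaviour described above.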
Structural relaxation
The structural relaxation of a viscoelastic gel has been identified as the primary mechanism responsible for densification and the associated pore evolution in both colloidal and polymeric silica gels. Experiments on the viscoelastic properties of such skeletal networks on various time scales require a force varying with a period (or frequency) appropriate to the relaxation time of the phenomenon investigated, and inversely proportional to the distance over which such relaxation occurs. High frequencies associated with ultrasonic waves have been used extensively in the handling of polymer solutions, liquids and gels and in the determination of their viscoelastic properties. Static measurements of the shear modulus have been made, as well as dynamic measurements of the speed of propagation of shear waves, which yield the dynamic modulus of rigidity. Dynamic light scattering (DLS) techniques have been utilized to monitor the dynamics of density fluctuations through the behavior of the autocorrelation function near the point of gelation.
Phase transition
Tanaka et al. emphasize that the discrete and reversible volume transitions which occur in partially hydrolyzed acrylamide gels can be interpreted in terms of a phase transition of the system consisting of the charged polymer network, hydrogen (counter)ions and liquid matrix. The phase transition is a manifestation of competition among the three forces which contribute to the osmotic pressure in the gel:
The positive osmotic pressure of (+) hydrogen ions
The negative pressure due to polymer-polymer affinity
The rubber-like elasticity of the polymer network
The balance of these forces varies with changes in temperature or solvent properties. The total osmotic pressure acting on the system is the sum of these three contributions. It is further shown that the phase transition can be induced by the application of an electric field across the gel. The volume change at the transition point is either discrete (as in a first-order Ehrenfest transition) or continuous (in the second-order Ehrenfest analogy), depending on the degree of ionization of the gel and on the solvent composition.
Elastic continuum
The gel is thus interpreted as an elastic continuum, which deforms when subjected to externally applied shear forces, but is incompressible upon application of hydrostatic pressure. This combination of fluidity and rigidity is explained in terms of the gel structure: a liquid contained within a fibrous polymer network or matrix, held in place by the extremely large friction between the liquid and the polymer network. Thermal fluctuations may produce infinitesimal expansion or contraction within the network, and the evolution of such fluctuations will ultimately determine the molecular morphology and the degree of hydration of the body.
Quasi-elastic light scattering offers direct experimental access to measurement of the wavelength and lifetimes of critical fluctuations, which are governed by the viscoelastic properties of the gel. It is reasonable to expect a relationship between the amplitude of such fluctuations and the elasticity of the network. Since the elasticity measures the resistance of the network to either elastic (reversible) or plastic (irreversible) deformation, the fluctuations should grow larger as the elasticity declines. The divergence of the scattered light intensity at a finite critical temperature implies that the elasticity approaches zero, or the compressibility becomes infinite, which is the typically observed behavior of a system at the point of instability. Thus, at the critical point, the polymer network offers no resistance at all to any form of deformation.
Ultimate microstructure
The rate of relaxation of density fluctuations will be rapid if the restoring force, which depends upon the network elasticity, is large—and if the friction between the network and the interstitial fluid is small. The theory suggests that the rate is directly proportional to the elasticity and inversely proportional to the frictional force. The friction in turn depends upon both the viscosity of the fluid and the average size of the pores contained within the polymer network.
Thus, if the elasticity is inferred from the measurements of the scattering intensity, and the viscosity is determined independently (via mechanical methods such as ultrasonic attenuation) measurement of the relaxation rate yields information on the pore size distribution contained within the polymer network, e.g. large fluctuations in polymer density near the critical point yield large density differentials with a corresponding bimodal distribution of porosity. The difference in average size between the smaller pores (in the highly dense regions) and the larger pores (in regions of lower average density) will therefore depend upon the degree of phase separation which is allowed to occur before such fluctuations become thermally arrested or "frozen in" at or near the critical point of the transition.
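The proportionalities described in the two preceding paragraphs can be collected into a single scaling estimate: the relaxation rate of a density fluctuation goes as (elasticity / friction) × q², with the friction growing with the fluid viscosity and shrinking with the square of the average pore (mesh) size. The sketch below is a hedged illustration of that scaling only; the functional forms and every numerical value are assumptions for illustration, not data from the source.

```python
# Minimal sketch of the scaling just described: relaxation rate of gel
# density fluctuations ~ (network elasticity / network-fluid friction) * q^2,
# with friction f ~ eta / xi^2. All numbers are illustrative placeholders.

def friction_coefficient(viscosity_pa_s: float, pore_size_m: float) -> float:
    """Network-fluid friction per unit volume, f ~ eta / xi**2."""
    return viscosity_pa_s / pore_size_m**2

def relaxation_rate(elastic_modulus_pa: float, viscosity_pa_s: float,
                    pore_size_m: float, q_per_m: float) -> float:
    """Rate ~ (E / f) * q**2 for a density fluctuation of wavevector q."""
    diffusivity = elastic_modulus_pa / friction_coefficient(viscosity_pa_s,
                                                            pore_size_m)
    return diffusivity * q_per_m**2

# Example: a soft gel (E ~ 1 kPa) in water (eta ~ 1 mPa*s) with ~10 nm pores,
# probed at a light-scattering wavevector q ~ 2e7 m^-1:
print(f"{relaxation_rate(1e3, 1e-3, 10e-9, 2e7):.2e} s^-1")
```

The example reproduces the qualitative statement in the text: stiffening the network (larger E) speeds relaxation, while a more viscous fluid or finer pores (larger friction) slows it.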
See also
Freeze-casting
Freeze gelation
Random graph theory of gelation
References
External links
International Sol–Gel Society
The Sol–Gel Gateway
Ceramic engineering
Gels | Mechanics of gelation | [
"Chemistry",
"Engineering"
] | 1,301 | [
"Ceramic engineering",
"Gels",
"Colloids"
] |
32,637,211 | https://en.wikipedia.org/wiki/Neutron%20capture%20therapy%20of%20cancer | Neutron capture therapy (NCT) is a type of radiotherapy for treating locally invasive malignant tumors such as primary brain tumors, recurrent cancers of the head and neck region, and cutaneous and extracutaneous melanomas. It is a two-step process: first, the patient is injected with a tumor-localizing drug containing the stable isotope boron-10 (10B), which has a high propensity to capture low energy "thermal" neutrons. The thermal neutron capture cross section of 10B (3,837 barns) is roughly 1,000 times greater than that of other elements that occur in tissue, such as nitrogen, hydrogen, or oxygen. In the second step, the patient is irradiated with epithermal neutrons, the sources of which in the past have been nuclear reactors and now are accelerators that produce higher energy epithermal neutrons. After losing energy as they penetrate tissue, the resultant low energy "thermal" neutrons are captured by the 10B atoms. The resulting decay reaction yields high-energy alpha particles that kill the cancer cells that have taken up enough 10B.
All clinical experience with NCT to date is with boron-10; hence this method is known as boron neutron capture therapy (BNCT). Use of another non-radioactive isotope, such as gadolinium, has been limited to experimental animal studies and has not been done clinically. BNCT has been evaluated as an alternative to conventional radiation therapy for malignant brain tumors such as glioblastomas, which presently are incurable, and more recently, locally advanced recurrent cancers of the head and neck region and, much less often, superficial melanomas mainly involving the skin and genital region.
Boron neutron capture therapy
History
James Chadwick discovered the neutron in 1932. Shortly thereafter, H. J. Taylor reported that boron-10 nuclei had a high propensity to capture low energy "thermal" neutrons. This reaction causes nuclear decay of the boron-10 nuclei into helium-4 nuclei (alpha particles) and lithium-7 ions. In 1936, G.L. Locher, a scientist at the Franklin Institute in Philadelphia, Pennsylvania, recognized the therapeutic potential of this discovery and suggested that this specific type of neutron capture reaction could be used to treat cancer. William Sweet, a neurosurgeon at the Massachusetts General Hospital, first suggested using BNCT to treat malignant brain tumors and, in 1951, evaluated it for treatment of the most malignant of all brain tumors, glioblastoma multiforme (GBM), using borax as the boron delivery agent. A clinical trial subsequently was initiated by Lee Farr using a specially constructed nuclear reactor at the Brookhaven National Laboratory in Long Island, New York, U.S.A. Another clinical trial was initiated in 1954 by Sweet at the Massachusetts General Hospital using the Research Reactor at the Massachusetts Institute of Technology (MIT) in Boston.
A number of research groups worldwide have continued the early ground-breaking clinical studies of Sweet and Farr, and subsequently the pioneering clinical studies of Hiroshi Hatanaka (畠中洋) in the 1960s, to treat patients with brain tumors. Since then, clinical trials have been done in a number of countries including Japan, the United States, Sweden, Finland, the Czech Republic, Taiwan, and Argentina. After the nuclear accident at Fukushima (2011), the clinical program there transitioned from a reactor neutron source to accelerators that would produce high energy neutrons that become thermalized as they penetrate tissue.
Basic principles
Neutron capture therapy is a binary system that consists of two separate components to achieve its therapeutic effect. Each component in itself is non-tumoricidal, but when combined they can be highly lethal to cancer cells.
BNCT is based on the nuclear capture and decay reactions that occur when non-radioactive boron-10, which makes up approximately 20% of natural elemental boron, is irradiated with neutrons of the appropriate energy to yield excited boron-11 (11B*). This undergoes radioactive decay to produce high-energy alpha particles (4He nuclei) and high-energy lithium-7 (7Li) nuclei. The nuclear reaction is:
10B + nth → [11B]* → α + 7Li + 2.31 MeV
Both the alpha particles and the lithium nuclei produce closely spaced ionizations in the immediate vicinity of the reaction, with a range of 5–9 μm. This approximately is the diameter of the target cell, and thus the lethality of the capture reaction is limited to boron-containing cells. BNCT, therefore, can be regarded as both a biologically and a physically targeted type of radiation therapy. The success of BNCT is dependent upon the selective delivery of sufficient amounts of 10B to the tumor with only small amounts localized in the surrounding normal tissues. Thus, normal tissues, if they have not taken up sufficient amounts of boron-10, can be spared from the neutron capture and decay reactions. Normal tissue tolerance, however, is determined by the nuclear capture reactions that occur with normal tissue hydrogen and nitrogen.
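A rough way to see why the hydrogen and nitrogen capture reactions set normal-tissue tolerance, even though the 10B cross section is thousands of times larger, is to compare capture rates per gram of tissue, which scale as (atoms per gram) × (cross section). In the sketch below, the soft-tissue mass fractions and the 20 μg/g boron loading are assumed, illustrative values, not figures from the source.

```python
# Relative thermal-neutron capture per gram of tissue: each nuclide
# contributes (atoms/g) * sigma. Mass fractions for H and N are assumed
# typical soft-tissue values; 20 ug/g is a representative tumor 10B loading.
AVOGADRO = 6.022e23

def macroscopic_sigma(mass_fraction: float, atomic_mass_g_mol: float,
                      sigma_barns: float) -> float:
    """Capture cross section per gram of tissue, in cm^2/g."""
    atoms_per_gram = mass_fraction / atomic_mass_g_mol * AVOGADRO
    return atoms_per_gram * sigma_barns * 1e-24  # 1 barn = 1e-24 cm^2

nuclides = {  # name: (mass fraction, atomic mass, cross section in barns)
    "10B (20 ug/g)": (20e-6, 10.0, 3837.0),
    "1H  (10 wt%)":  (0.10,   1.0,    0.33),
    "14N (3 wt%)":   (0.03,  14.0,    1.83),
}
for name, args in nuclides.items():
    print(f"{name:14s} {macroscopic_sigma(*args):.2e} cm^2/g")
```

At this loading, bulk hydrogen capture is comparable to or larger than boron capture, which is why an unavoidable background dose exists; the therapeutic advantage of 10B comes from its reaction products depositing their energy within about one cell diameter of the boron, as noted above.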
A wide variety of boron delivery agents have been synthesized. The first, which has mainly been used in Japan, is a polyhedral borane anion, sodium borocaptate or BSH, and the second is a dihydroxyboryl derivative of phenylalanine, called boronophenylalanine or BPA. The latter has been used in many clinical trials. Following administration of either BPA or BSH by intravenous infusion, the tumor site is irradiated with neutrons, the source of which, until recently, has been specially designed nuclear reactors and now is neutron accelerators. Until 1994, low-energy (< 0.5 eV) thermal neutron beams were used in Japan and the United States, but since they have a limited depth of penetration in tissues, higher energy (> 0.5 eV, < 10 keV) epithermal neutron beams, which have a greater depth of penetration, were used in clinical trials in the United States, Europe, Japan, Argentina, Taiwan, and China until recently, when accelerators replaced the reactors. In theory BNCT is a highly selective type of radiation therapy that can target tumor cells without causing radiation damage to the adjacent normal cells and tissues. Doses up to 60–70 grays (Gy) can be delivered to the tumor cells in one or two applications, compared to 6–7 weeks for conventional fractionated external beam photon irradiation. However, the effectiveness of BNCT is dependent upon a relatively homogeneous cellular distribution of 10B within the tumor, and more specifically within the constituent tumor cells, and this is still one of the main unsolved problems that have limited its success.
Radiobiological considerations
The radiation doses to tumor and normal tissues in BNCT are due to energy deposition from three types of directly ionizing radiation that differ in their linear energy transfer (LET), which is the rate of energy loss along the path of an ionizing particle:
1. Low-LET gamma rays, resulting primarily from the capture of thermal neutrons by normal tissue hydrogen atoms [1H(n,γ)2H];
2. High-LET protons, produced by the scattering of fast neutrons and from the capture of thermal neutrons by nitrogen atoms [14N(n,p)14C]; and
3. High-LET, heavier charged alpha particles (stripped-down helium [4He] nuclei) and lithium-7 ions, released as products of the thermal neutron capture and decay reactions with 10B [10B(n,α)7Li].
Since both the tumor and surrounding normal tissues are present in the radiation field, even with an ideal epithermal neutron beam, there will be an unavoidable, non-specific background dose, consisting of both high- and low-LET radiation. However, a higher concentration of 10B in the tumor will result in it getting a higher total dose than that of adjacent normal tissues, which is the basis for the therapeutic gain in BNCT. The total radiation dose in Gy delivered to any tissue can be expressed in photon-equivalent units as the sum of each of the high-LET dose components multiplied by weighting factors (Gyw), which depend on the increased radiobiological effectiveness of each of these components.
Clinical dosimetry
Biological weighting factors have been used in all of the more recent clinical trials in patients with high-grade gliomas, using boronophenylalanine (BPA) in combination with an epithermal neutron beam. The 10B(n,α)7Li component of the radiation dose to the scalp has been based on the measured boron concentration in the blood at the time of BNCT, assuming a blood:scalp boron concentration ratio of 1.5:1 and a compound biological effectiveness (CBE) factor for BPA in skin of 2.5. A relative biological effectiveness (RBE) or CBE factor of 3.2 has been used in all tissues for the high-LET components of the beam, such as alpha particles. The RBE factor is used to compare the biologic effectiveness of different types of ionizing radiation. The high-LET components include protons resulting from the capture reaction with normal tissue nitrogen, and recoil protons resulting from the collision of fast neutrons with hydrogen. It must be emphasized that the tissue distribution of the boron delivery agent in humans should be similar to that in the experimental animal model in order to use the experimentally derived values for estimation of the radiation doses for clinical irradiations. For more detailed information relating to computational dosimetry and treatment planning, interested readers are referred to a comprehensive review on this subject.
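The photon-equivalent bookkeeping just described reduces to multiplying each physical dose component by its weighting factor and summing. In the sketch below, the factors 2.5 (CBE for BPA in skin) and 3.2 (high-LET RBE) are the values quoted above, while the component doses and the gamma weight of 1.0 are invented for illustration.

```python
# Photon-equivalent dose: sum of physical dose components, each multiplied
# by its weighting factor. The 2.5 and 3.2 factors are quoted in the text;
# the component doses and the low-LET gamma weight of 1.0 are made up.
def photon_equivalent_dose(components):
    """components: iterable of (physical dose in Gy, weighting factor)."""
    return sum(dose * weight for dose, weight in components)

skin_dose = photon_equivalent_dose([
    (2.0, 2.5),  # 10B(n,a)7Li component, CBE for BPA in skin
    (0.8, 3.2),  # nitrogen-capture and recoil protons (high LET)
    (1.5, 1.0),  # gamma component (low LET)
])
print(f"{skin_dose:.1f} Gy(w)")  # -> 9.1 Gy(w)
```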
Boron delivery agents
The development of boron delivery agents for BNCT began in the early 1960s and is an ongoing and difficult task. A number of boron-10 containing delivery agents have been synthesized for potential use in BNCT. The most important requirements for a successful boron delivery agent are:
low systemic toxicity and normal tissue uptake with high tumor uptake and concomitantly high tumor-to-brain (T:Br) and tumor-to-blood (T:Bl) concentration ratios (> 3–4:1);
tumor concentrations in the range of ~20-50 μg B/g tumor;
rapid clearance from blood and normal tissues and persistence in tumor during BNCT.
However, as of 2021 no single boron delivery agent fulfills all of these criteria. With the development of new chemical synthetic techniques and increased knowledge of the biological and biochemical requirements needed for an effective agent and their modes of delivery, a wide variety of new boron agents has emerged (see examples in Table 1). However, only one of these compounds has ever been tested in large animals, and only boronophenylalanine (BPA) and sodium borocaptate (BSH) have been used clinically.
The delivery agents are not listed in any order that indicates their potential usefulness for BNCT. None of these agents have been evaluated in any animals larger than mice and rats, except for boronated porphyrin (BOPP) that also has been evaluated in dogs. However, due to the severe toxicity of BOPP in canines, no further studies were carried out.
See Barth, R.F., Mi, P., and Yang, W., "Boron delivery agents for neutron capture therapy of cancer," Cancer Communications, 38:35 (2018), for an updated review.
The abbreviations used in this table are defined as follows: BNCT, boron neutron capture therapy; DNA, deoxyribonucleic acid; EGF, epidermal growth factor; EGFR, epidermal growth factor receptor; MoAbs, monoclonal antibodies; VEGF, vascular endothelial growth factor.
The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting in order to achieve boron concentrations (20-50 μg/g tumor) sufficient to produce therapeutic doses of radiation at the site of the tumor with minimal radiation delivered to normal tissues. The selective destruction of infiltrative tumor (glioma) cells in the presence of normal brain cells represents an even greater challenge compared to malignancies at other sites in the body. Malignant gliomas are highly infiltrative of normal brain, histologically diverse, and heterogeneous in their genomic profiles, and therefore it is very difficult to kill all of the tumor cells.
Gadolinium neutron capture therapy (Gd NCT)
There also has been some interest in the possible use of gadolinium-157 (157Gd) as a capture agent for NCT for the following reasons: First, and foremost, has been its very high neutron capture cross section of 254,000 barns. Second, gadolinium compounds, such as Gd-DTPA (gadopentetate dimeglumine, Magnevist), have been used routinely as contrast agents for magnetic resonance imaging (MRI) of brain tumors and have shown high uptake by brain tumor cells in tissue culture (in vitro). Third, gamma rays and internal conversion and Auger electrons are products of the 157Gd(n,γ)158Gd capture reaction (157Gd + nth (0.025 eV) → [158Gd] → 158Gd + γ + 7.94 MeV). Although the gamma rays have pathlengths orders of magnitude greater than those of alpha particles, the other radiation products (internal conversion and Auger electrons) have pathlengths of about one cell diameter and can directly damage DNA. Therefore, it would be highly advantageous for the production of DNA damage if the 157Gd were localized within the cell nucleus. However, the possibility of incorporating gadolinium into biologically active molecules is very limited, and only a small number of potential delivery agents for Gd NCT have been evaluated. Relatively few studies with 157Gd have been carried out in experimental animals compared to the large number with boron-containing compounds (Table 1), which have been synthesized and evaluated in experimental animals (in vivo). Although in vitro activity has been demonstrated using the Gd-containing MRI contrast agent Magnevist as the Gd delivery agent, there are very few studies demonstrating the efficacy of Gd NCT in experimental animal tumor models, and, as evidenced by a lack of citations in the literature, Gd NCT has not, as of 2019, been used clinically in humans.
Neutron sources
Clinical studies using nuclear reactors as neutron sources
Until 2014, neutron sources for NCT were limited to nuclear reactors. Reactor-derived neutrons are classified according to their energies as thermal (E < 0.5 eV), epithermal (0.5 eV < E < 10 keV), or fast (E > 10 keV). Thermal neutrons are the most important for BNCT since they usually initiate the 10B(n,α)7Li capture reaction. However, because they have a limited depth of penetration, epithermal neutrons, which lose energy and fall into the thermal range as they penetrate tissues, are now preferred for clinical therapy, other than for skin tumors such as melanoma.
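The energy classification above transcribes directly into a trivial helper (a sketch, not from the source):

```python
def classify_reactor_neutron(energy_ev: float) -> str:
    """Classify a reactor-derived neutron by energy, per the boundaries
    quoted above: thermal (< 0.5 eV), epithermal (0.5 eV - 10 keV), fast."""
    if energy_ev < 0.5:
        return "thermal"
    if energy_ev < 10_000:  # 10 keV
        return "epithermal"
    return "fast"

for e in (0.025, 100.0, 1e6):  # room-temperature thermal, epithermal, fast
    print(f"{e:g} eV -> {classify_reactor_neutron(e)}")
```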
A number of nuclear reactors with very good neutron beam quality have been developed and used clinically. These include:
Kyoto University Research Reactor Institute (KURRI) in Kumatori, Japan;
the Massachusetts Institute of Technology Research Reactor (MITR);
the FiR1 (Triga Mk II) research reactor at VTT Technical Research Centre, Espoo, Finland;
the RA-6 CNEA reactor in Bariloche, Argentina;
the High Flux Reactor (HFR) at Petten in the Netherlands;
the Tsing Hua Open-pool Reactor (THOR) at the National Tsing Hua University, Hsinchu, Taiwan;
the JRR-4 reactor at the Japan Atomic Energy Agency, Tokai, Japan; and
a compact In-Hospital Neutron Irradiator (IHNI) in a free-standing facility in Beijing, China.
As of May 2021, only the reactors in Argentina, China, and Taiwan are still being used clinically. It is anticipated that, beginning some time in 2022, clinical studies in Finland will utilize an accelerator neutron source designed and fabricated in the United States by Neutron Therapeutics, Danvers, Massachusetts.
Clinical studies of BNCT for brain tumors
Early studies in the US and Japan
It was not until the 1950s that the first clinical trials were initiated by Farr at the Brookhaven National Laboratory (BNL) in New York and by Sweet and Brownell at the Massachusetts General Hospital (MGH) using the Massachusetts Institute of Technology (MIT) nuclear reactor (MITR) and several different low molecular weight boron compounds as the boron delivery agent. However, the results of these studies were disappointing, and no further clinical trials were carried out in the United States until the 1990s.
Following a two-year Fulbright fellowship in Sweet's laboratory at the MGH, clinical studies were initiated by Hiroshi Hatanaka in Japan in 1967. He used a low-energy thermal neutron beam, which had low tissue penetrating properties, and sodium borocaptate (BSH) as the boron delivery agent, which had been evaluated as a boron delivery agent by Albert Soloway at the MGH. In Hatanaka's procedure, as much as possible of the tumor was surgically resected ("debulking"), and at some time thereafter, BSH was administered by a slow infusion, usually intra-arterially, but later intravenously. Twelve to 14 hours later, BNCT was carried out at one or another of several different nuclear reactors using low-energy thermal neutron beams. The poor tissue-penetrating properties of the thermal neutron beams necessitated reflecting the skin and raising a bone flap in order to directly irradiate the exposed brain, a procedure first used by Sweet and his collaborators.
More than 200 patients were treated by Hatanaka, and subsequently by his associate, Nakagawa. Due to the heterogeneity of the patient population, in terms of the microscopic diagnosis of the tumor and its grade, size, and the ability of the patients to carry out normal daily activities (Karnofsky performance status), it was not possible to come up with definitive conclusions about therapeutic efficacy. However, the survival data were no worse than those obtained by standard therapy at the time, and there were several patients who were long-term survivors, and most probably they were cured of their brain tumors.
Further clinical studies in the United States and Japan
BNCT of patients with brain tumors was resumed in the United States in the mid-1990s by Chanana, Diaz, and Coderre and their co-workers at the Brookhaven National Laboratory using the Brookhaven Medical Research Reactor (BMRR) and at Harvard/Massachusetts Institute of Technology (MIT) using the MIT Research Reactor (MITR). For the first time, BPA was used as the boron delivery agent, and patients were irradiated with a collimated beam of higher energy epithermal neutrons, which had greater tissue-penetrating properties than thermal neutrons. A research group headed by Zamenhof at the Beth Israel Deaconess Medical Center/Harvard Medical School and MIT was the first to use an epithermal neutron beam for clinical trials. Initially patients with cutaneous melanomas were treated and this was expanded to include patients with brain tumors, specifically melanoma metastatic to the brain and primary glioblastomas (GBMs). Included in the research team were Otto Harling at MIT and the radiation oncologist Paul Busse at the Beth Israel Deaconess Medical Center in Boston. A total of 22 patients were treated by the Harvard-MIT research group. Five patients with cutaneous melanomas were also treated using an epithermal neutron beam at the MIT research reactor (MITR-II) and subsequently patients with brain tumors were treated using a redesigned beam at the MIT reactor that possessed far superior characteristics to the original MITR-II beam, with BPA as the capture agent. The clinical outcome of the cases treated at Harvard-MIT has been summarized by Busse. Although the treatment was well tolerated, there were no significant differences in the mean survival times (MSTs) of patients who had received BNCT compared to those who received conventional external beam X-irradiation.
Shin-ichi Miyatake and Shinji Kawabata at Osaka Medical College in Japan have carried out extensive clinical studies employing BPA (500 mg/kg) either alone or in combination with BSH (100 mg/kg), infused intravenously (i.v.) over 2 h, followed by neutron irradiation at Kyoto University Research Reactor Institute (KURRI). The Mean Survival Time (MST) of 10 patients with high grade gliomas in the first of their trials was 15.6 months, with one long-term survivor (>5 years). Based on experimental animal data, which showed that BNCT in combination with X-irradiation produced enhanced survival compared to BNCT alone, Miyatake and Kawabata combined BNCT, as described above, with an X-ray boost. A total dose of 20 to 30 Gy was administered, divided into 2 Gy daily fractions. The MST of this group of patients was 23.5 months and no significant toxicity was observed, other than hair loss (alopecia). However, a significant subset of these patients, a high proportion of which had small cell variant glioblastomas, developed cerebrospinal fluid dissemination of their tumors. Miyatake and his co-workers also have treated a cohort of 44 patients with recurrent high grade meningiomas (HGM) that were refractory to all other therapeutic approaches. The clinical regimen consisted of intravenous administration of boronophenylalanine two hours before neutron irradiation at the Kyoto University Research Reactor Institute in Kumatori, Japan. Effectiveness was determined using radiographic evidence of tumor shrinkage, overall survival (OS) after initial diagnosis, OS after BNCT, and radiographic patterns associated with treatment failure. The median OS after BNCT was 29.6 months and 98.4 months after diagnosis. Better responses were seen in patients with lower grade tumors. In 35 of 36 patients, there was tumor shrinkage, and the median progression-free survival (PFS) was 13.7 months. There was good local control of the patients' tumors, as evidenced by the fact that only 22.2% of them experienced local recurrence of their tumors. From these results, it was concluded that BNCT was effective in locally controlling tumor growth, shrinking tumors, and improving survival with acceptable safety in patients with therapeutically refractory HGMs.
In another Japanese trial, carried out by Yamamoto et al., BPA and BSH were infused over 1 h, followed by BNCT at the Japan Research Reactor (JRR)-4 reactor. Patients subsequently received an X-ray boost after completion of BNCT. The overall median survival time (MeST) was 27.1 months, and the 1 year and 2-year survival rates were 87.5 and 62.5%, respectively. Based on the reports of Miyatake, Kawabata, and Yamamoto, combining BNCT with an X-ray boost can produce a significant therapeutic gain. However, further studies are needed to optimize this combined therapy alone or in combination with other approaches including chemo- and immunotherapy, and to evaluate it using a larger patient population.
Clinical studies in Finland
The technological and physical aspects of the Finnish BNCT program have been described in considerable detail by Savolainen et al. A team of clinicians led by Heikki Joensuu and Leena Kankaanranta and nuclear engineers led by Iro Auterinen and Hanna Koivunoro at the Helsinki University Central Hospital and VTT Technical Research Center of Finland have treated more than 200 patients with recurrent malignant gliomas (glioblastomas) and head and neck cancer who had undergone standard therapy, recurred, and subsequently received BNCT at the time of their recurrence using BPA as the boron delivery agent. The median time to progression in patients with gliomas was 3 months, and the overall MeST was 7 months. It is difficult to compare these results with other reported results in patients with recurrent malignant gliomas, but they are a starting point for future studies using BNCT as salvage therapy in patients with recurrent tumors. For a variety of reasons, including financial ones, no further studies have been carried out at this facility, which has been decommissioned. However, a new facility for BNCT treatment has been installed using an accelerator designed and fabricated by Neutron Therapeutics. This accelerator was specifically designed to be used in a hospital, and the BNCT treatment and clinical studies will be carried out there after dosimetric studies have been completed in 2021. Both Finnish and foreign patients are expected to be treated at the facility.
Clinical studies in Sweden
To conclude this section on treating brain tumors with BNCT using reactor neutron sources, a clinical trial that was carried out by Stenstam, Sköld, Capala and their co-workers in Studsvik, Sweden, using an epithermal neutron beam produced by the Studsvik nuclear reactor, which had greater tissue penetration properties than the thermal beams originally used in the United States and Japan, will be briefly summarized. This study differed significantly from all previous clinical trials in that the total amount of BPA administered was increased (900 mg/kg), and it was infused i.v. over 6 hours. This was based on experimental animal studies in glioma bearing rats demonstrating enhanced uptake of BPA by infiltrating tumor cells following a 6-hour infusion. The longer infusion time of the BPA was well tolerated by the 30 patients who were enrolled in this study. All were treated with 2 fields, and the average whole brain dose was 3.2–6.1 Gy (weighted), and the minimum dose to the tumor ranged from 15.4 to 54.3 Gy (w). There has been some disagreement among the Swedish investigators regarding the evaluation of the results. Based on incomplete survival data, the MeST was reported as 14.2 months and the time to tumor progression was 5.8 months. However, more careful examination of the complete survival data revealed that the MeST was 17.7 months compared to 15.5 months that has been reported for patients who received standard therapy of surgery, followed by radiotherapy (RT) and the drug temozolomide (TMZ). Furthermore, the frequency of adverse events was lower after BNCT (14%) than after radiation therapy (RT) alone (21%) and both of these were lower than those seen following RT in combination with TMZ. If this improved survival data, obtained using the higher dose of BPA and a 6-hour infusion time, can be confirmed by others, preferably in a randomized clinical trial, it could represent a significant step forward in BNCT of brain tumors, especially if combined with a photon boost.
Clinical studies of BNCT for extracranial tumors
Head and neck cancers
The single most important clinical advance over the past 15 years has been the application of BNCT to treat patients with recurrent tumors of the head and neck region who had failed all other therapy. These studies were first initiated by Kato et al. in Japan and subsequently were followed by several other Japanese groups and by Kankaanranta, Joensuu, Auterinen, Koivunoro and their co-workers in Finland. All of these studies employed BPA as the boron delivery agent, usually alone but occasionally in combination with BSH. A very heterogeneous group of patients with a variety of histopathologic types of tumors have been treated, the largest number of which had recurrent squamous cell carcinomas. Kato et al. have reported on a series of 26 patients with far-advanced cancer for whom there were no further treatment options. Either BPA + BSH or BPA alone was administered by a 1 or 2 h i.v. infusion, and this was followed by BNCT using an epithermal beam. In this series, there were complete regressions in 12 cases, 10 partial regressions, and progression in 3 cases. The MST was 13.6 months, and the 6-year survival was 24%. Significant treatment-related complications ("adverse" events) included transient mucositis, alopecia and, rarely, brain necrosis and osteomyelitis.
Kankaanranta et al. have reported their results in a prospective Phase I/II study of 30 patients with inoperable, locally recurrent squamous cell carcinomas of the head and neck region. Patients received either two or, in a few instances, one BNCT treatment using BPA (400 mg/kg), administered i.v. over 2 hours, followed by neutron irradiation. Of 29 evaluated patients, there were 13 complete and 9 partial remissions, with an overall response rate of 76%. The most common adverse events were oral mucositis, oral pain, and fatigue. Based on the clinical results, it was concluded that BNCT was effective for the treatment of inoperable, previously irradiated patients with head and neck cancer. Some responses were durable, but progression was common, usually at the site of the previously recurrent tumor. As previously indicated in the section on neutron sources, all clinical studies have ended in Finland, for a variety of reasons including the economic difficulties of the two companies directly involved, VTT and Boneca. However, clinical studies using an accelerator neutron source designed and fabricated by Neutron Therapeutics and installed at the Helsinki University Hospital should be fully functional by 2022. Finally, a group in Taiwan, led by Ling-Wei Wang and his co-workers at the Taipei Veterans General Hospital, have treated 17 patients with locally recurrent head and neck cancers at the Tsing Hua Open-pool Reactor (THOR) of the National Tsing Hua University. Two-year overall survival was 47% and two-year loco-regional control was 28%. Further studies are in progress to further optimize their treatment regimen.
Other types of tumor
Melanoma and extramammary Paget's disease
Other extracranial tumors that have been treated with BNCT include malignant melanomas. The original studies were carried out in Japan by the late Yutaka Mishima and his clinical team in the Department of Dermatology at Kobe University using locally injected BPA and a thermal neutron beam. It is important to point out that it was Mishima who first used BPA as a boron delivery agent, and this approach subsequently was extended to other types of tumors based on the experimental animal studies of Coderre et al. at the Brookhaven National Laboratory. Local control was achieved in almost all patients, and some were cured of their melanomas. Patients with melanoma of the head and neck region, vulva, and extramammary Paget's disease of the genital region have been treated by Hiratsuka et al. with promising clinical results. The first clinical trial of BNCT in Argentina for the treatment of melanomas was performed in October 2003, and since then several patients with cutaneous melanomas have been treated as part of a Phase II clinical trial at the RA-6 nuclear reactor in Bariloche. The neutron beam has a mixed thermal-hyperthermal neutron spectrum that can be used to treat superficial tumors. The In-Hospital Neutron Irradiator (IHNI) in Beijing has been used to treat a small number of patients with cutaneous melanomas with a complete response of the primary lesion and no evidence of late radiation injury during a follow-up period of more than 24 months.
Colorectal cancer
Two patients with colon cancer, which had spread to the liver, have been treated by Zonta and his co-workers at the University of Pavia in Italy. The first was treated in 2001 and the second in mid-2003. The patients received an i.v. infusion of BPA, followed by removal of the liver (hepatectomy), which was irradiated outside of the body (extracorporeal BNCT) and then re-transplanted into the patient. The first patient did remarkably well and survived for over 4 years after treatment, but the second died within a month of cardiac complications. Clearly, this is a very challenging approach for the treatment of hepatic metastases, and it is unlikely that it will ever be widely used. Nevertheless, the good clinical results in the first patient established proof of principle. Finally, Yanagie and his colleagues at Meiji Pharmaceutical University in Japan have treated several patients with recurrent rectal cancer using BNCT. Although no long-term results have been reported, there was evidence of short-term clinical responses.
Accelerators as neutron sources
Accelerators now are the primary source of epithermal neutrons for clinical BNCT. The first papers relating to their possible use were published in the 1980s, and, as summarized by Blue and Yanch, this topic became an active area of research in the early 2000s. However, it was the Fukushima nuclear disaster in Japan in 2011 that gave impetus to their development for clinical use. Today several accelerator-based neutron sources (ABNS) are commercially available or under development. Most existing or planned systems use either the lithium-7 reaction, 7Li(p,n)7Be, or the beryllium-9 reaction, 9Be(p,n)9B, to generate neutrons, though other nuclear reactions also have been considered. The lithium-7 reaction requires a proton accelerator with energies between 1.9 and 3.0 MeV, while the beryllium-9 reaction typically uses accelerators with energies between 5 and 30 MeV. Aside from the lower proton energy that the lithium-7 reaction requires, its main benefit is the lower energy of the neutrons produced. This in turn allows the use of smaller moderators, "cleaner" neutron beams, and reduced neutron activation. Benefits of the beryllium-9 reaction include simplified target design and disposal, long target lifetime, and lower required proton beam current.
Since the proton beams for BNCT are quite powerful (~20-100 kW), the neutron generating target must incorporate cooling systems capable of removing the heat safely and reliably to protect the target from damage. In the case of lithium-7, this requirement is especially important due to the low melting point and chemical volatility of the target material. Liquid jets, micro-channels and rotating targets have been employed to solve this problem. Several researchers have proposed the use of liquid lithium-7 targets in which the target material doubles as the coolant. In the case of beryllium-9, "thin" targets, in which the protons come to rest and deposit much of their energy in the cooling fluid, can be employed. Target degradation due to beam exposure ("blistering") is another problem to be solved, either by using layers of materials resistant to blistering or by spreading the protons over a large target area. Since the nuclear reactions yield neutrons with energies ranging from < 100 keV to tens of MeV, a Beam Shaping Assembly (BSA) must be used to moderate, filter, reflect and collimate the neutron beam to achieve the desired epithermal energy range, neutron beam size and direction. BSAs are typically composed of a range of materials with desirable nuclear properties for each function. A well-designed BSA should maximize neutron yield per proton while minimizing fast neutron, thermal neutron and gamma contamination. It should also produce a sharply delimited and generally forward directed beam enabling flexible positioning of the patient relative to the aperture.
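The quoted beam powers follow from the fact that target heating equals proton current times beam energy (P = I × V; conveniently, 1 mA at 1 MeV deposits 1 kW). The operating points in the sketch below are illustrative assumptions, not parameters of any specific machine:

```python
# Target heating equals proton current times beam energy: P = I * V.
# Since (1e-3 A) * (1e6 V) = 1e3 W, mA * MeV gives kW directly.
def beam_power_kw(current_ma: float, energy_mev: float) -> float:
    """Beam power in kW for a proton beam of given current and energy."""
    return current_ma * energy_mev

print(beam_power_kw(30, 2.5))  # e.g. a 7Li(p,n) machine: 30 mA at 2.5 MeV -> 75 kW
print(beam_power_kw(2, 30.0))  # e.g. a 9Be(p,n) machine: 2 mA at 30 MeV  -> 60 kW
```

Both hypothetical operating points land inside the ~20-100 kW range quoted above, which is why target cooling dominates the engineering of these sources.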
One key challenge for an ABNS is the duration of treatment time: depending on the neutron beam intensity, treatments can take up to an hour or more. Therefore, it is desirable to reduce the treatment time both for patient comfort during immobilization and to increase the number of patients that could be treated in a 24-hour period. Increasing the neutron beam intensity for the same proton current by adjusting the BSA is often achieved at the cost of reduced beam quality (higher levels of unwanted fast neutrons or gamma rays in the beam or poor beam collimation). Therefore, increasing the proton current delivered by ABNS BNCT systems remains a key goal of technology development programs.
The table below summarizes the existing or planned ABNS installations for clinical use (Updated November, 2024).
Clinical studies using accelerator neutron sources
Treatment of recurrent malignant gliomas
The single greatest advance in moving BNCT forward clinically has been the introduction of cyclotron-based neutron sources (c-BNS) in Japan. Shin-ichi Miyatake and Shinji Kawabata have led the way with the treatment of patients with recurrent glioblastomas (GBMs). In their Phase II clinical trial, they used the Sumitomo Heavy Industries accelerator at the Osaka Medical College, Kansai BNCT Medical Center to treat a total of 24 patients. These patients ranged in age from 20 to 75 years, and all previously had received standard treatment consisting of surgery followed by chemotherapy with temozolomide (TMZ) and conventional radiation therapy. They were candidates for treatment with BNCT because their tumors had recurred and were progressing in size. They received an intravenous infusion of a proprietary formulation of 10B-enriched boronophenylalanine ("Borofalan," StellaPharma Corporation, Osaka, Japan) prior to neutron irradiation. The primary endpoint of this study was the 1-year survival rate after BNCT, which was 79.2%, and the median overall survival was 18.9 months. Based on these results, it was concluded that c-BNS BNCT was safe and resulted in increased survival of patients with recurrent gliomas. Although there was an increased risk of brain edema due to re-irradiation, this was easily controlled. As a result of this trial, the Sumitomo accelerator was approved by the Japanese regulatory authority having jurisdiction over medical devices, and further studies are being carried out with patients who have recurrent, high-grade (malignant) meningiomas. However, further studies for the treatment of patients with GBMs have been put on hold pending additional analysis of the results.
Treatment of recurrent or locally advanced cancers of the head and neck
Katsumi Hirose and his co-workers at the Southern Tohoku BNCT Research Center in Koriyama, Japan, recently have reported on their results after treating 21 patients with recurrent tumors of the head and neck region. All of these patients had received surgery, chemotherapy, and conventional radiation therapy. Eight of them had recurrent squamous cell carcinomas (R-SCC), and 13 had either recurrent (R) or locally advanced (LA) non-squamous cell carcinomas (nSCC). The overall response rate was 71%, and the complete response and partial response rates were 50% and 25%, respectively, for patients with R-SCC and 80% and 62%, respectively, for those with R or LA SCC. The overall 2-year survival rates for patients with R-SCC or R/LA nSCC were 58% and 100%, respectively. The treatment was well tolerated, and adverse events were those usually associated with conventional radiation treatment of these tumors. These patients had received a proprietary formulation of 10B-enriched boronophenylalanine (Borofalan), which was administered intravenously. Although the manufacturer of the accelerator was not identified, it presumably was the one manufactured by Sumitomo Heavy Industries, Ltd., which was indicated in the Acknowledgements of their report. Based on this Phase II clinical trial, the authors suggested that BNCT using Borofalan and c-BENS was a promising treatment for recurrent head and neck cancers, although further studies would be required to firmly establish this.
The future
Clinical BNCT first was used to treat highly malignant brain tumors and subsequently for melanomas of the skin that were difficult to treat by surgery. Later, it was used as a type of "salvage" therapy for patients with recurrent tumors of the head and neck region. The clinical results were sufficiently promising to lead to the development of accelerator neutron sources, which will be used almost exclusively in the future. Challenges for the future clinical success of BNCT that need to be met include the following:
Optimizing the dosing and delivery paradigms and administration of BPA and BSH.
The development of more tumor-selective boron delivery agents for BNCT and their evaluation in large animals and ultimately in humans.
Accurate, real time dosimetry to better estimate the radiation doses delivered to the tumor and normal tissues in patients with brain tumors and head and neck cancer.
Further clinical evaluation of accelerator-based neutron sources for the treatment of brain tumors, head and neck cancer, and other malignancies.
Reducing the cost.
See also
Particle therapy, Neutrons, protons, or heavy ions (e.g. carbon)
Fast neutron therapy
Proton therapy
References
External links
Boron and Gadolinium Neutron Capture Therapy for Cancer Treatment
Destroying Cancer with Boron and Neutrons - Medical Frontiers - NHK February 21, 2022
Boron
Brain tumor
Gadolinium
Head and neck cancer
Medical physics
Neurosurgery
Capture therapy of cancer
Radiation therapy procedures
Radiobiology | Neutron capture therapy of cancer | [
"Physics",
"Chemistry",
"Biology"
] | 8,648 | [
"Radiobiology",
"Radioactivity",
"Applied and interdisciplinary physics",
"Medical physics"
] |
32,644,454 | https://en.wikipedia.org/wiki/EAF%20family | In molecular biology, the EAF family of proteins act as transcriptional transactivators of the ELL and ELL2 RNA polymerase II (Pol II) transcriptional elongation factors. EAF proteins form a stable heterodimer complex with ELL proteins to facilitate the binding of RNA polymerase II and activate transcription elongation. ELL and EAF1 are components of Cajal bodies, which have a role in leukemogenesis. EAF1 also has the capacity to interact with ELL1 and ELL2. The N-terminal region of EAF1, comprising approximately 120 residues, is rich in serine, aspartic acid, and glutamic acid.
References
Protein families | EAF family | [
"Biology"
] | 148 | [
"Protein families",
"Protein classification"
] |
32,646,543 | https://en.wikipedia.org/wiki/Ectatomin | Ectatomin is a protein toxin from the venom of the ant Ectatomma tuberculatum. Ectatomin can efficiently insert into the plasma membrane, where it can form channels. Ectatomin was shown to inhibit L-type calcium currents in isolated rat cardiac myocytes. In these cells, ectatomin induces a gradual, irreversible increase in ion leakage across the membrane, which can lead to cell death.
Ectatomin is composed of two subunits, A and B, which are homologous. The structure of ectatomin reveals that each subunit consists of two alpha helices with a connecting hinge region, which form a hairpin structure that is stabilized by disulfide bridges. A disulfide bridge between the hinge regions of the two subunits links the heterodimer together, forming a closed bundle of four alpha helices with a left-handed twist.
References
Protein domains
Protein toxins | Ectatomin | [
"Chemistry",
"Biology"
] | 201 | [
"Protein toxins",
"Protein domains",
"Protein classification",
"Toxins by chemical classification"
] |
52,605,712 | https://en.wikipedia.org/wiki/Linear%20Collider%20Collaboration | The Linear Collider Collaboration (LCC) is an organization designated by the International Committee for Future Accelerators (ICFA) to coordinate global research and development efforts for two next-generation particle physics colliders: the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). The mission of the LCC is to facilitate decisions that the next collider "will be built, and where". Members of the collaboration include approximately 2000 accelerator and particle physicists, engineers and other scientists.
In June 2012 ICFA named Lyn Evans, the former project manager of the CERN Large Hadron Collider, as Linear Collider Director. CERN Courier noted, "Evans is the first to hold this new position, which is to lead the Linear Collider organization, created to bring the two existing large-scale linear collider programmes under one governance." He will "lead the effort to unify these programmes and will represent this combined effort to the worldwide science community and funding agencies."
References
Experimental particle physics | Linear Collider Collaboration | [
"Physics"
] | 220 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
51,101,497 | https://en.wikipedia.org/wiki/UPd2Al3 | UPd2Al3 is a heavy-fermion superconductor with a hexagonal crystal structure and critical temperature Tc=2.0K that was discovered in 1991. Furthermore, UPd2Al3 orders antiferromagnetically at TN=14K, and UPd2Al3 thus features the unusual behavior that this material, at temperatures below 2K, is simultaneously superconducting and magnetically ordered.
Later experiments demonstrated that superconductivity in UPd2Al3 is magnetically mediated, and UPd2Al3 therefore serves as a prime example for non-phonon-mediated superconductors.
Discovery
Heavy-fermion superconductivity was discovered in the late 1970s (with CeCu2Si2 being the first example), but the number of heavy-fermion compounds known to superconduct was still very small in the early 1990s, when Christoph Geibel in the group of Frank Steglich found two closely related heavy-fermion superconductors, UNi2Al3 (Tc=1K) and UPd2Al3 (Tc=2K), which were published in 1991. At that point, the Tc=2.0K of UPd2Al3 was the highest critical temperature amongst all known heavy-fermion superconductors, and this record would stand for 10 years until CeCoIn5 was discovered in 2001.
Metallic state
The overall metallic behavior of UPd2Al3, e.g. as deduced from the dc resistivity, is typical for a heavy-fermion material and can be explained as follows: incoherent Kondo scattering above approximately 80 K and coherent heavy-fermion state (in a Kondo lattice) at lower temperatures. Upon cooling below 14 K, UPd2Al3 orders antiferromagnetically in a commensurate fashion (ordering wave vector (0,0,1/2)) and with a sizable ordered magnetic moment of approximately 0.85 μB per uranium atom, as determined from neutron scattering.
The metallic heavy-fermion state is characterized by a strongly enhanced effective mass, which is connected to a reduced Fermi velocity, which in turn brings about a strongly suppressed transport scattering rate. Indeed, for UPd2Al3 optical Drude behavior with an extremely low scattering rate was observed at microwave frequencies. This is the 'slowest Drude relaxation' observed for any three-dimensional metallic system so far.
Superconducting state
Superconductivity in UPd2Al3 has a critical temperature of 2.0K and a critical field around 3T. The critical field does not show anisotropy despite the hexagonal crystal structure.
For heavy-fermion superconductors it is generally believed that the coupling mechanism cannot be phononic in nature. In contrast to many other unconventional superconductors, for UPd2Al3 there actually exists strong experimental evidence (namely from neutron scattering and tunneling spectroscopy) that superconductivity is magnetically mediated.
In the first years after the discovery of UPd2Al3 it was actively discussed whether its superconducting state can support a Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) phase, but this suggestion was later refuted.
References
Superconductors
Correlated electrons
Intermetallics
Uranium compounds
Palladium compounds
Aluminium compounds | UPd2Al3 | [
"Physics",
"Chemistry",
"Materials_science"
] | 727 | [
"Inorganic compounds",
"Metallurgy",
"Superconductivity",
"Correlated electrons",
"Intermetallics",
"Condensed matter physics",
"Alloys",
"Superconductors"
] |
51,101,792 | https://en.wikipedia.org/wiki/Ophiocordyceps%20sphecocephala | Ophiocordyceps sphecocephala is a species of parasitic fungus. It is entomopathogenic, meaning it grows within insects, particularly wasps of the genera Polistes, Tachytes, and Vespa. It has been reported across the Americas and China.
Physically, its stromata can be 2–10 cm long and form an egg-shaped head. It is cream or yellow in color.
The fungus has possible implications in medicine; it may have anti-asthmatic or anti-cancer properties.
After the fungus takes over an insect, the insect climbs to a high point, and the fungus sprouts out of its body.
References
Ophiocordycipitaceae
Fungi described in 1843
Fungus species | Ophiocordyceps sphecocephala | [
"Biology"
] | 156 | [
"Fungi",
"Fungus species"
] |
51,105,434 | https://en.wikipedia.org/wiki/Polaris%20Flare | The Polaris Flare is a filamentous gas cloud in the Milky Way which is seen in the sky in the region of the constellation Ursa Minor and around the star Polaris. The area on the sky is estimated at 50 square degrees. Its distance is approximately 500 light-years.
See also
List of molecules in interstellar space
Interplanetary medium – interplanetary dust
Interstellar medium – interstellar dust
Intergalactic medium – Intergalactic dust
Local Interstellar Cloud
References
Ursa Minor
Molecular clouds
Milky Way | Polaris Flare | [
"Astronomy"
] | 106 | [
"Ursa Minor",
"Nebula stubs",
"Astronomy stubs",
"Constellations"
] |
51,107,923 | https://en.wikipedia.org/wiki/K2-72 | K2-72 (also designated EPIC 206209135) is a cool red dwarf star of spectral class M2.7V located about away from the Earth in the constellation of Aquarius. It is known to host four planets, all similar in size to Earth, with one of them residing within the habitable zone.
Nomenclature and history
K2-72 also has the 2MASS catalogue number J22182923-0936444. Its EPIC (Ecliptic Plane Input Catalog) number is 206209135.
The star's planetary companions were discovered by NASA's Kepler Mission, a mission tasked with discovering planets in transit around their stars. The transit method that Kepler uses involves detecting dips in brightness in stars. These dips in brightness can be interpreted as planets whose orbits move in front of their stars from the perspective of Earth. The name K2-72 derives directly from the fact that the star is the catalogued 72nd star discovered by the K2 mission to have confirmed planets.
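To first order, the fractional dip the transit method looks for is the ratio of the planet's and star's disk areas, (Rp/Rs)². The sketch below is illustrative: the 0.33 Rsun stellar radius matches one of the estimates quoted further down, while the Earth-sized planet is representative of the K2-72 planets rather than a measured value.

```python
# Transit depth to first order: delta = (Rp / Rs)**2. Radii in the example
# call are representative, not precise values for any one K2-72 planet.
R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

def transit_depth(planet_radius_rearth: float, star_radius_rsun: float) -> float:
    """Fractional drop in stellar brightness during transit."""
    ratio = (planet_radius_rearth * R_EARTH_KM) / (star_radius_rsun * R_SUN_KM)
    return ratio**2

# An Earth-sized planet crossing a 0.33-solar-radius red dwarf:
print(f"{transit_depth(1.0, 0.33):.2e}")  # ~7.7e-04, i.e. a ~0.08% dip
```

The small stellar disk of a red dwarf is what makes Earth-sized planets like these detectable: the same planet crossing a Sun-sized star would produce a dip roughly ten times shallower.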
The designation b, c, d, and e derives from the order of discovery. The designation of b is given to the first planet orbiting a given star, and e to the last. In the case of K2-72, there were four planets, so only letters b to e are used. At first the planets were all thought to be smaller than Earth. However, in 2017, new analysis by Martinez et al. and Courtney Dressing found that K2-72 was significantly larger than previously estimated and that the planets were all larger than Earth, although all are still expected to be rocky.
Stellar characteristics
K2-72 is an M-type star with approximately 27% of the mass and 33% of the radius of the Sun, according to the analysis done by Dressing et al. The results found by Martinez et al. suggest a larger star, with about 36% the radius and mass of the Sun. Both give a luminosity estimate between 0.013 and 0.015 solar luminosities. It has a surface temperature of between 3360 and 3370 K and its age is unknown. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K.
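The quoted radius, temperature, and luminosity values can be cross-checked with the Stefan-Boltzmann law, L/Lsun = (R/Rsun)^2 × (T/Tsun)^4. The sketch below (not from the source) recovers the 0.013-0.015 Lsun range from the two quoted radii and a mid-range temperature:

```python
# Stefan-Boltzmann cross-check: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
T_SUN_K = 5778.0

def luminosity_lsun(radius_rsun: float, teff_k: float) -> float:
    """Luminosity in solar units from radius (solar units) and Teff (K)."""
    return radius_rsun**2 * (teff_k / T_SUN_K)**4

for radius in (0.33, 0.36):  # the two radius estimates quoted above
    print(f"R = {radius} Rsun -> L = {luminosity_lsun(radius, 3365.0):.4f} Lsun")
# Output: ~0.0125 and ~0.0149 Lsun, in line with the quoted 0.013-0.015 range.
```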
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 15.309. Therefore, it is too dim to be seen with the naked eye and can only be observed with a telescope.
Planetary system
The star is known to host four planets, all likely to be rocky. Only one (K2-72e) is currently known to reside inside the habitable zone, although K2-72c may straddle the inner edge.
References
Planetary systems with four confirmed planets
Planetary transit variables
M-type main-sequence stars
J22182923-0936444
Aquarius (constellation) | K2-72 | [
"Astronomy"
] | 590 | [
"Constellations",
"Aquarius (constellation)"
] |
57,302,917 | https://en.wikipedia.org/wiki/Aluminium%20dihydrogenphosphate | Aluminium dihydrogenphosphate describes inorganic compounds with the formula Al(H2PO4)3·xH2O where x = 0 or 3. They are white solids. Upon heating these materials convert sequentially to a family of related polyphosphate salts including aluminium triphosphate (AlH2P3O10·2H2O), aluminium hexametaphosphate (Al2P6O18), and aluminium tetrametaphosphate (Al4(P4O12)3). Some of these materials are used for fireproofing and as ingredients in specialized glasses.
According to analysis by X-ray crystallography, the structure consists of a coordination polymer featuring octahedral Al3+ centers bridged by tetrahedral dihydrogen phosphate ligands. The dihydrogen phosphate ligands are bound to Al3+ as monodentate ligands.
References
Phosphates
Aluminium compounds | Aluminium dihydrogenphosphate | [
"Chemistry"
] | 198 | [
"Phosphates",
"Salts"
] |
42,465,099 | https://en.wikipedia.org/wiki/DEAF1 | The DEAF1 transcription factor (HGNC:14677), or "deformed epidermal autoregulatory factor 1" (named after its Drosophila homolog), is encoded by the DEAF1 gene at 11p15.5. It contains a MYND-type zinc finger domain.
Pathology
Mutations affecting the SAND domain of DEAF1 cause intellectual disability with severe speech impairment and behavioral problems.
References
Transcription factors | DEAF1 | [
"Chemistry",
"Biology"
] | 87 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
42,470,137 | https://en.wikipedia.org/wiki/High%20strain%20composite%20structure | High Strain Composite Structures (HSC Structures) are a class of composite material structures designed to perform in a high deformation setting. High strain composite structures transition from one shape to another upon the application of external forces. A single HSC Structure component is designed to transition between at least two, but often more, dramatically different shapes. At least one of the shapes is designed to function as a structure which can support external loads.
High strain composite structures usually consist of fiber-reinforced polymers (FRP), which are designed to undergo relatively high material strain levels in the course of normal operation, in comparison to most FRP structural applications. FRP materials are anisotropic and highly tailorable, which allows for unique effects upon deformation. As a result, many HSC Structures are configured to possess one or more stable states (shapes at which the structure will remain without external constraints) which are tuned for a particular application. HSC Structures with multiple stable states can also be classified as bi-stable structures.
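Because the deformation is dominated by bending of a thin laminate, a first-order estimate of the peak material strain follows from beam theory: a shell of thickness t bent to a radius R sees a surface strain of about t/(2R). The Python sketch below applies this estimate with illustrative numbers; the thickness and radius are assumptions, not values from the text.

def max_bending_strain(thickness_m, bend_radius_m):
    """First-order beam-theory estimate of peak surface strain in a
    thin shell bent to a given radius: strain = t / (2 * R)."""
    return thickness_m / (2.0 * bend_radius_m)

# Illustrative values: a 0.3 mm laminate rolled onto a 12 mm radius hub.
t = 0.3e-3   # laminate thickness in metres (assumed)
R = 12e-3    # bend radius in metres (assumed)
print(f"Peak bending strain: {max_bending_strain(t, R):.2%}")  # about 1.25%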
HSC Structures are most often used in applications where low-weight structures are desired that can also be stowed in a small volume. Flexible composite structures are used within the aerospace industry for deployable mechanisms such as antennas or solar arrays on spacecraft. Other applications focus on materials or structures in which multiple stable configurations are required.
History
Metals commonly used in springs (e.g. high strength steel, aluminum and beryllium copper alloys) have been utilized in deformable aerospace structures for several decades with considerable success. They continue to be used in the majority of high strain deployable structure applications and excel where the greatest compaction ratios and electrical conductivity are required. But metals suffer from having high densities, high coefficients of thermal expansion, and lower strain capacities when compared to composite materials. In recent decades, the increasing need for high performance deployable structures, coupled with the emergence of a robust composite materials industry, has increased the demand and utility for High Strain Composites Structures. Today HSCs are used in a variety of niche aerospace applications, mostly in areas where extreme precision and low mass are required.
In early 2014 the American Institute of Aeronautics and Astronautics Spacecraft Structures Technical Committee recognized that the level of active research and development in high strain composites warranted an independent focus group to distinguish high strain composites as a technical area with uniquely identifiable challenges, technologies, mechanics, test methods, and applications. The High Strain Composites technical subcommittee was formed to provide a forum and framework for addressing HSC technical challenges and successes, and to promote continued advances in the field.
Space-Flight Heritage
The use of high strain deployable structures dates back to the pioneering days of space exploration and has played a crucial role in enabling a robust spacefaring industry.
Milestones in Space-Based Deformable Structures
Consumer-Goods
Current Research and Development
Material Classification
Rigid Polymer
Rigidizable Polymer
Elastomeric Polymer
Technical Challenges
Creep
Thin Shell Buckling
Simulation Methods
See also
Composite material
Fiber-reinforced plastic
Bistability
References
American Institute of Aeronautics and Astronautics, Structures Technical Committee, High Strain Composite Structures Subcommittee
High Strain Composite Structures
Composite materials | High strain composite structure | [
"Physics"
] | 624 | [
"Materials",
"Composite materials",
"Matter"
] |
42,471,409 | https://en.wikipedia.org/wiki/Vaginal%20transplantation | Vaginal transplantation is a procedure whereby donated or laboratory-grown vaginal tissue is used to create a 'neovagina'. It is most often used in women who have vaginal aplasia (the congenital absence of a vagina).
Background
Vaginal aplasia is a rare medical condition in which the vagina does not form properly before birth. Those with the condition may have a partially formed vagina, or none at all. The condition is typically treated by reconstructive surgery. First a space is surgically created where the vagina would typically exist. Then tissue from another part of the body is harvested, molded into the shape of a vagina, and grafted into the vagina cavity. This technique has significant drawbacks. Typically, the implanted tissue does not function normally as a muscle, which can lead to low enjoyment of sexual intercourse. Additionally, stenosis (narrowing of the cavity) can occur over time. Most women require multiple surgeries before a satisfactory result is achieved. An alternative to traditional reconstructive surgery is transplantation.
Donor technique
In a handful of cases, a woman with vaginal aplasia has received a successful vagina transplant donated by her mother. The first such case is believed to have occurred in 1970, with no signs of rejection taking place after three years. In at least one case, a woman who received such a transplant was able to conceive and give birth. In 1981, a 12-year-old girl with vaginal aplasia received a vaginal wall implant from her mother. She became sexually active seven years later, without incident. At age 24, she conceived and carried a child to term. The child was born via cesarean section.
Laboratory-grown technique
In April 2014, a team of scientists led by Anthony Atala reported that they had successfully transplanted laboratory-grown vaginas into four teenage girls with a rare medical condition called Mayer-Rokitansky-Küster-Hauser syndrome, which causes the vagina to develop improperly or sometimes not at all. Between 1 in 1,500 and 1 in 4,000 females are born with this condition.
The four patients began treatment between May 2005 and August 2008. In each case, the medical research team began by taking a small sample of genital tissue from the teenager's vulva. The sample was used as a seed to grow additional tissue in the lab, which was then placed in a vagina-shaped, biodegradable mold. Vaginal-lining cells were placed on the inside of the tube, while muscle cells were attached to the outside. Five to six weeks later, the structure was implanted into the patients, where the tissue continued to grow and connected with the girls' circulatory and other bodily systems. After about eight years, all four patients reported normal function and pleasure levels during sexual intercourse according to the Female Sexual Function Index questionnaire, a validated self-report tool. No adverse results or complications were reported.
In two of the four women, the vagina was attached to the uterus, making pregnancy possible. No pregnancies were reported, however, during the study period. Martin Birchall, who works on tissue engineering, but was not involved in the study, said it "addressed some of the most important questions facing translation of tissue engineering technologies." Commentary published by the National Health Service (NHS) called the study "an important proof of concept" and said it showed that tissue engineering had "a great deal of potential." However, the NHS also cautioned that the sample size was very small and further research was necessary to determine the general viability of the technique.
The laboratory-grown autologous transplant technique could also be used on women who want reconstructive surgery due to cancer or other disease once the technique is perfected. However, more studies will need to be conducted and the techniques further developed before commercial production can begin.
References
Organ transplantation
Tissue engineering
Vagina | Vaginal transplantation | [
"Chemistry",
"Engineering",
"Biology"
] | 810 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
42,472,109 | https://en.wikipedia.org/wiki/Law%20of%20reciprocal%20proportions | The law of reciprocal proportions, also called law of equivalent proportions or law of permanent ratios, is one of the basic laws of stoichiometry.
It relates the proportions in which elements combine across a number of different elements. It was first formulated by Jeremias Richter in 1791. A simple statement of the law is:
If element A combines with element B and also with C, then, if B and C combine together, the proportion by weight in which they do so will be simply related to the weights of B and C which separately combine with a constant weight of A.
As an example, 1 gram of sodium (Na = A) is observed to combine with either 1.54 grams of chlorine (Cl = B) or 5.52 grams of iodine (I = C). (These ratios correspond to the modern formulas NaCl and NaI). The ratio of these two weights is 5.52/1.54 = 3.58. It is also observed that 1 gram of chlorine reacts with 1.19 g of iodine. This ratio of 1.19 obeys the law because it is a simple fraction (1/3) of 3.58. (This is because it corresponds to the formula ICl3, which is one known compound of iodine and chlorine.) Similarly, hydrogen, carbon, and oxygen follow the law of reciprocal proportions.
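The arithmetic in this example is easy to check directly; the short Python sketch below simply reproduces the ratios quoted above (the variable names are illustrative).

# Weights combining with 1 g of sodium, from the example above.
cl_per_na = 1.54   # g of chlorine per g of sodium (NaCl)
i_per_na = 5.52    # g of iodine per g of sodium (NaI)

ratio_via_na = i_per_na / cl_per_na   # about 3.58
i_per_cl = 1.19                       # g of iodine per g of chlorine (ICl3)

# The law requires i_per_cl to stand in a simple ratio to ratio_via_na.
print(ratio_via_na)                   # 3.584...
print(ratio_via_na / i_per_cl)        # about 3.01, i.e. the simple factor 3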
The acceptance of the law allowed tables of element equivalent weights to be drawn up. These equivalent weights were widely used by chemists in the 19th century.
The other laws of stoichiometry are the law of definite proportions and the law of multiple proportions.
The law of definite proportions refers to the fixed composition of any compound formed between element A and element B. The law of multiple proportions describes the stoichiometric relationship between two or more different compounds formed between element A and element B. The law of reciprocal proportions states that if two different elements combine separately with a fixed mass of a third element, the ratio of the masses in which they combine is either the same as, or a simple multiple of, the ratio of the masses in which they combine with each other.
History
The law of reciprocal proportions was proposed in essence by Richter, following his determination of neutralisation ratios of metals with acids. In the early 19th century it was investigated by Berzelius, who formulated it as follows:
When two substances, A and B have an affinity for two others, C and D, the ratio of the quantities C and D which saturate the same amount of A is the same as that between the quantities C and D which saturate the same amount of B.
Later Jean Stas showed that within experimental error the stoichiometric laws were correct.
References
Stoichiometry | Law of reciprocal proportions | [
"Chemistry"
] | 561 | [
"Stoichiometry",
"Chemical reaction engineering",
"nan"
] |
42,475,403 | https://en.wikipedia.org/wiki/Spherical%20wave%20transformation | Spherical wave transformations leave the form of spherical waves as well as the laws of optics and electrodynamics invariant in all inertial frames. They were defined between 1908 and 1909 by Harry Bateman and Ebenezer Cunningham, with Bateman giving the transformation its name. They correspond to the conformal group of "transformations by reciprocal radii" in relation to the framework of Lie sphere geometry, which were already known in the 19th century. Time is used as fourth dimension as in Minkowski space, so spherical wave transformations are connected to the Lorentz transformation of special relativity, and it turns out that the conformal group of spacetime includes the Lorentz group and the Poincaré group as subgroups. However, only the Lorentz/Poincaré groups represent symmetries of all laws of nature including mechanics, whereas the conformal group is related to certain areas such as electrodynamics. In addition, it can be shown that the conformal group of the plane (corresponding to the Möbius group of the extended complex plane) is isomorphic to the Lorentz group.
A special case of Lie sphere geometry is the transformation by reciprocal directions or Laguerre inversion, being a generator of the Laguerre group. It transforms not only spheres into spheres but also planes into planes. If time is used as fourth dimension, a close analogy to the Lorentz transformation as well as isomorphism to the Lorentz group was pointed out by several authors such as Bateman, Cartan or Poincaré.
Transformation by reciprocal radii
Development in the 19th century
Inversions preserving angles between circles were first discussed by Durrande (1820), with Quetelet (1827) and Plücker (1828) writing down the corresponding transformation formula, with $k$ being the radius of inversion:

$x'=\frac{k^{2}x}{x^{2}+y^{2}},\quad y'=\frac{k^{2}y}{x^{2}+y^{2}}.$
These inversions were later called "transformations by reciprocal radii", and became better known when Thomson (1845, 1847) applied them on spheres with coordinates $x,y,z$ in the course of developing the method of inversion in electrostatics. Joseph Liouville (1847) demonstrated its mathematical meaning by showing that it belongs to the conformal transformations producing the following quadratic form:

$\delta x'^{2}+\delta y'^{2}+\delta z'^{2}=\lambda\left(\delta x^{2}+\delta y^{2}+\delta z^{2}\right).$
Liouville himself and, more extensively, Sophus Lie (1871) showed that the related conformal group can be differentiated (Liouville's theorem): for instance, $\lambda=1$ includes the Euclidean group of ordinary motions; $\lambda=k^{2}$ gives scale or similarity transformations in which the coordinates of the previous transformations are multiplied by $k$; and $\lambda=\frac{k^{4}}{\left(x^{2}+y^{2}+z^{2}\right)^{2}}$ gives Thomson's transformation by reciprocal radii (inversions):

$x'=\frac{k^{2}x}{x^{2}+y^{2}+z^{2}},\quad y'=\frac{k^{2}y}{x^{2}+y^{2}+z^{2}},\quad z'=\frac{k^{2}z}{x^{2}+y^{2}+z^{2}}.$
Subsequently, Liouville's theorem was extended to $n$ dimensions by Lie (1871) and others such as Darboux (1878):

$\delta x_{1}'^{2}+\dots+\delta x_{n}'^{2}=\lambda\left(\delta x_{1}^{2}+\dots+\delta x_{n}^{2}\right).$
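The conformal property just stated can be checked numerically: the Jacobian $J$ of the inversion $x'=k^{2}x/\left(x^{2}+y^{2}+z^{2}\right)$ satisfies $J^{\mathsf{T}}J=\lambda I$ with $\lambda=k^{4}/\left(x^{2}+y^{2}+z^{2}\right)^{2}$. The following Python sketch is a numerical spot test only, with illustrative function names:

import numpy as np

def invert(x, k=1.0):
    """Transformation by reciprocal radii: x' = k^2 * x / |x|^2."""
    return (k ** 2) * x / np.dot(x, x)

def numerical_jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * h)
    return J

x = np.array([0.7, -1.2, 0.4])
J = numerical_jacobian(invert, x)
lam = 1.0 / np.dot(x, x) ** 2   # conformal factor k^4 / |x|^4 with k = 1
print(np.allclose(J.T @ J, lam * np.eye(3), atol=1e-6))  # True: conformal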
This group of conformal transformations by reciprocal radii preserves angles and transforms spheres into spheres or hyperspheres (see Möbius transformation, conformal symmetry, special conformal transformation). It is a 6-parameter group in the plane R2 which corresponds to the Möbius group of the extended complex plane, a 10-parameter group in space R3, and a 15-parameter group in R4. In R2 it represents only a small subset of all conformal transformations therein, whereas in R2+n it is identical to the group of all conformal transformations (corresponding to the Möbius transformations in higher dimensions) therein, in accordance with Liouville's theorem. Conformal transformations in R3 were often applied to what Darboux (1873) called "pentaspherical coordinates" by relating the points to homogeneous coordinates based on five spheres.
Oriented spheres
Another method for solving such sphere problems was to write down the coordinates together with the sphere's radius. This was employed by Lie (1871) in the context of Lie sphere geometry which represents a general framework of sphere-transformations (being a special case of contact transformations) conserving lines of curvature and transforming spheres into spheres. The previously mentioned 10-parameter group in R3 related to pentaspherical coordinates is extended to the 15-parameter group of Lie sphere transformations related to "hexaspherical coordinates" (named by Klein in 1893) by adding a sixth homogeneous coordinate related to the radius. Since the radius of a sphere can have a positive or negative sign, one sphere always corresponds to two transformed spheres. It is advantageous to remove this ambiguity by attributing a definite sign to the radius, consequently giving the spheres a definite orientation too, so that one oriented sphere corresponds to one transformed oriented sphere. This method was occasionally and implicitly employed by Lie (1871) himself and explicitly introduced by Laguerre (1880). In addition, Darboux (1887) brought the transformations by reciprocal radii into a form by which the radius $r$ of a sphere can be determined if the radius of the other one is known:

$r'=\frac{k^{2}r}{x^{2}+y^{2}+z^{2}-r^{2}}.$
Using coordinates together with the radius was often connected to a method called "minimal projection" by Klein (1893), which was later called "isotropy projection" by Blaschke (1926) emphasizing the relation to oriented circles and spheres. For instance, a circle with rectangular coordinates $(x,y)$ and radius $r$ in R2 corresponds to a point in R3 with coordinates $(x,y,r)$. This method was known for some time in circle geometry (though without using the concept of orientation) and can be further differentiated depending on whether the additional coordinate is treated as imaginary or real: $z=ir$ was used by Chasles (1852), Möbius (1857), Cayley (1867), and Darboux (1872); $z=r$ was used by Cousinery (1826), Druckenmüller (1842), and in the "cyclography" of Fiedler (1882), therefore the latter method was also called "cyclographic projection" – see E. Müller (1910) for a summary. This method was also applied to spheres by Darboux (1872), Lie (1871), or Klein (1893). Let $(x,y,z,r)$ and $(x',y',z',r')$ be the center coordinates and radii of two spheres in three-dimensional space R3. If the spheres are touching each other with same orientation, their equation is given by

$(x-x')^{2}+(y-y')^{2}+(z-z')^{2}-(r-r')^{2}=0.$

Setting $r=iz_{4}$, these coordinates correspond to rectangular coordinates in four-dimensional space R4:

$(x-x')^{2}+(y-y')^{2}+(z-z')^{2}+(z_{4}-z_{4}')^{2}=0.$
In general, Lie (1871) showed that the conformal point transformations in Rn (composed of motions, similarities, and transformations by reciprocal radii) correspond in Rn-1 to those sphere transformations which are contact transformations. Klein (1893) pointed out that by using minimal projection on hexaspherical coordinates, the 15-parameter Lie sphere transformations in R3 are simply the projections of the 15-parameter conformal point transformations in R4, whereas the points in R4 can be seen as the stereographic projection of the points of a sphere in R5.
Relation to electrodynamics
Harry Bateman and Ebenezer Cunningham (1909) showed that the electromagnetic equations are not only Lorentz invariant, but also scale and conformal invariant. They are invariant under the 15-parameter group of conformal transformations (transformations by reciprocal radii) in R4 producing the relation

$\delta x^{2}+\delta y^{2}+\delta z^{2}+\delta u^{2}=\lambda\left(\delta x'^{2}+\delta y'^{2}+\delta z'^{2}+\delta u'^{2}\right),$

where $u=ict$ includes $t$ as time component and $c$ as the speed of light. Bateman (1909) also noticed the equivalence to the previously mentioned Lie sphere transformations in R3, because the radius $r$ used in them can be interpreted as the radius of a spherical wave contracting or expanding with velocity $c$, which is why he called them "spherical wave transformations".
Depending on $\lambda$ they can be differentiated into subgroups:

(a) $\lambda=1$ corresponds to mappings which transform not only spheres into spheres but also planes into planes. These are called Laguerre transformations/inversions forming the Laguerre group, which in physics correspond to the Lorentz transformations forming the 6-parameter Lorentz group or 10-parameter Poincaré group with translations.
(b) Constant $\lambda$ represents scale or similarity transformations by multiplication of the space-time variables of the Lorentz transformations by a constant factor $l$ depending on $\lambda$. For instance, with the scale factor $l$ the transformation given by Poincaré in 1905 follows:

$x'=\gamma l(x-vt),\quad y'=ly,\quad z'=lz,\quad t'=\gamma l\left(t-\frac{vx}{c^{2}}\right).$

However, it was shown by Poincaré and Einstein that only $l=1$ produces a group that is a symmetry of all laws of nature as required by the principle of relativity (the Lorentz group), while the group of scale transformations is only a symmetry of optics and electrodynamics.
(c) Setting $\lambda=\frac{k^{4}}{\left(x^{2}+y^{2}+z^{2}+u^{2}\right)^{2}}$ particularly relates to the wide conformal group of transformations by reciprocal radii. It consists of elementary transformations that represent a generalized inversion into a four-dimensional hypersphere:

$x_{\mu}'=\frac{k^{2}x_{\mu}}{x^{2}+y^{2}+z^{2}+u^{2}},$

which become real spherical wave transformations in terms of Lie sphere geometry if the real radius $r$ is used instead of $u=ict$, so that $x^{2}+y^{2}+z^{2}-r^{2}$ is given in the denominator.
Felix Klein (1921) pointed out the similarity of these relations to Lie's and his own researches of 1871, adding that the conformal group does not have the same meaning as the Lorentz group, because the former applies to electrodynamics whereas the latter is a symmetry of all laws of nature including mechanics. The possibility was discussed for some time whether conformal transformations allow for the transformation into uniformly accelerated frames. Later, conformal invariance became important again in certain areas such as conformal field theory.
Lorentz group isomorphic to Möbius group
It turns out that the 6-parameter conformal group of R2 (i.e. the Möbius group composed of automorphisms of the Riemann sphere), which in turn is isomorphic to the 6-parameter group of hyperbolic motions (i.e. isometric automorphisms of a hyperbolic space) in R3, can also be physically interpreted: it is isomorphic to the Lorentz group.
For instance, Fricke and Klein (1897) started by defining an "absolute" Cayley metric in terms of a one-part curvilinear surface of second degree, which can be represented by a sphere whose interior represents hyperbolic space with the equation

$z_{1}^{2}+z_{2}^{2}+z_{3}^{2}-z_{4}^{2}=0,$

where $z_{1},\dots,z_{4}$ are homogeneous coordinates. They pointed out that motions of hyperbolic space into itself also transform this sphere into itself. They developed the corresponding transformation by defining a complex parameter $\zeta$ of the sphere,

$\zeta=\frac{z_{1}+iz_{2}}{z_{4}-z_{3}},$

which is connected to another parameter $\zeta'$ by the substitution

$\zeta'=\frac{\alpha\zeta+\beta}{\gamma\zeta+\delta},$

where $\alpha,\beta,\gamma,\delta$ are complex coefficients. They furthermore showed that by setting $z_{4}=1$, the above relations assume the following form in terms of the unit sphere in R3:

$\zeta=\frac{x_{1}+ix_{2}}{1-x_{3}},$

which is identical to the stereographic projection of the $\zeta$-plane on a spherical surface already given by Klein in 1884. Since the substitutions $\zeta'=\frac{\alpha\zeta+\beta}{\gamma\zeta+\delta}$ are Möbius transformations in the $\zeta$-plane or upon the $\zeta$-sphere, they concluded that by carrying out an arbitrary motion of hyperbolic space in itself, the $\zeta$-sphere undergoes a Möbius transformation, that the entire group of hyperbolic motions gives all direct Möbius transformations, and finally that any direct Möbius transformation corresponds to a motion of hyperbolic space.
Based on the work of Fricke & Klein, the isomorphism of that group of hyperbolic motions (and consequently of the Möbius group) to the Lorentz group was demonstrated by Gustav Herglotz (1909). Namely, the Minkowski metric corresponds to the above Cayley metric (based on a real conic section), if the spacetime coordinates are identified with the above homogeneous coordinates

$x=z_{1},\quad y=z_{2},\quad z=z_{3},\quad ct=z_{4},$

by which the above parameter becomes

$\zeta=\frac{x+iy}{ct-z},$

again connected by the substitution $\zeta'=\frac{\alpha\zeta+\beta}{\gamma\zeta+\delta}$.
Herglotz concluded, that any such substitution corresponds to a Lorentz transformation, establishing a one-to-one correspondence to hyperbolic motions in R3. The relation between the Lorentz group and the Cayley metric in hyperbolic space was also pointed out by Klein (1910) as well as Pauli (1921). The corresponding isomorphism of the Möbius group to the Lorentz group was employed, among others, by Roger Penrose.
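One standard way to make this isomorphism concrete (a textbook construction, not a claim about the historical papers above) packs a spacetime point $(t,x,y,z)$ into the Hermitian matrix with rows $(t+z,\ x+iy)$ and $(x-iy,\ t-z)$, whose determinant is the Minkowski interval; a matrix $A$ of SL(2, C), acting on $\zeta$ as a Möbius transformation, acts on $X$ by $X\mapsto AXA^{\dagger}$ and so induces a Lorentz transformation. A minimal numerical check in Python (units $c=1$; the matrix entries are illustrative):

import numpy as np

def to_matrix(t, x, y, z):
    """Pack (t, x, y, z) into a Hermitian 2x2 matrix with
    det = t^2 - x^2 - y^2 - z^2 (the Minkowski interval)."""
    return np.array([[t + z, x + 1j * y],
                     [x - 1j * y, t - z]])

def interval(X):
    return np.linalg.det(X).real

# An SL(2, C) matrix, i.e. a Moebius map zeta -> (a*zeta + b)/(c*zeta + d).
a, b, c = 2.0, 1.0 + 0.5j, 0.5j
d = (1 + b * c) / a              # chosen so that det(A) = a*d - b*c = 1
A = np.array([[a, b], [c, d]])

X = to_matrix(2.0, 0.3, -0.7, 1.1)
X2 = A @ X @ A.conj().T          # the induced Lorentz transformation

print(np.isclose(interval(X), interval(X2)))  # True: interval preserved

Since det(A X A†) = |det A|² det X, any unimodular A preserves the interval, and A and -A give the same Lorentz transformation, which is the two-to-one correspondence underlying the isomorphism.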
Transformation by reciprocal directions
Development in the 19th century
Above, the connection of conformal transformations with coordinates including the radius of spheres within Lie sphere geometry was mentioned. The special case $\lambda=1$ corresponds to a sphere transformation given by Edmond Laguerre (1880–1885), who called it the "transformation by reciprocal directions" and who laid down the foundation of a geometry of oriented spheres and planes. According to Darboux and Bateman, similar relations were discussed before by Albert Ribaucour (1870) and by Lie himself (1871). Stephanos (1881) pointed out that Laguerre's geometry is indeed a special case of Lie's sphere geometry. He also represented Laguerre's oriented spheres by quaternions (1883).
Lines, circles, planes, or spheres with radii of certain orientation are called by Laguerre half-lines, half-circles (cycles), half-planes, half-spheres, etc. A tangent is a half-line cutting a cycle at a point where both have the same direction. The transformation by reciprocal directions transforms oriented spheres into oriented spheres and oriented planes into oriented planes, leaving invariant the "tangential distance" of two cycles (the distance between the points of each one of their common tangents), and also conserves the lines of curvature. Laguerre (1882) applied the transformation to two cycles under the following conditions: Their radical axis is the axis of transformation, and their common tangents are parallel to two fixed directions of the half-lines that are transformed into themselves (Laguerre called this specific method the "transformation by reciprocal half-lines", which was later called "Laguerre inversion"). Setting $R$ and $R'$ as the radii of the cycles, and $D$ and $D'$ as the distances of their centers to the axis, he obtained:

$D'^{2}-R'^{2}=D^{2}-R^{2},$

with the transformation:

$D'=\frac{D\left(1+\alpha^{2}\right)-2\alpha R}{1-\alpha^{2}},\quad R'=\frac{2\alpha D-R\left(1+\alpha^{2}\right)}{1-\alpha^{2}}.$
Darboux (1887) obtained the same formulas in different notation in his treatment of the "transformation by reciprocal directions", including the remaining coordinates as well; consequently he obtained the relation

$x'^{2}+y'^{2}+z'^{2}-r'^{2}=x^{2}+y^{2}+z^{2}-r^{2}.$
As mentioned above, oriented spheres in R3 can be represented by points of four-dimensional space R4 using minimal (isotropy) projection, which became particularly important in Laguerre's geometry. For instance, E. Müller (1898) based his discussion of oriented spheres on the fact that they can be mapped upon the points of a plane manifold of four dimensions (which he likened to Fiedler's "cyclography" from 1882). He systematically compared the transformations by reciprocal radii (calling it "inversion at a sphere") with the transformations by reciprocal directions (calling it "inversion at a plane sphere complex"). Following Müller's paper, Smith (1900) discussed Laguerre's transformation and the related "group of the geometry of reciprocal directions". Alluding to Klein's (1893) treatment of minimal projection, he pointed out that this group "is simply isomorphic with the group of all displacements and symmetry transformations in space of four dimensions". Smith obtained the same transformation as Laguerre and Darboux in different notation, calling it "inversion into a spherical complex", together with the corresponding invariant relations.
Laguerre inversion and Lorentz transformation
In 1905 both Poincaré and Einstein pointed out that the Lorentz transformation of special relativity (setting $c=1$)

$x'=\frac{x-vt}{\sqrt{1-v^{2}}},\quad y'=y,\quad z'=z,\quad t'=\frac{t-vx}{\sqrt{1-v^{2}}}$

leaves the relation $x^{2}+y^{2}+z^{2}-t^{2}$ invariant. Einstein stressed the point that by this transformation a spherical light wave in one frame is transformed into a spherical light wave in another one. Poincaré showed that the Lorentz transformation can be seen as a rotation in four-dimensional space with time as fourth coordinate, with Minkowski deepening this insight much further (see History of special relativity).
As shown above, also Laguerre's transformation by reciprocal directions or half-lines – later called Laguerre inversion – in the form given by Darboux (1887) leaves the expression $x^{2}+y^{2}+z^{2}-r^{2}$ invariant. Subsequently, the relation to the Lorentz transformation was noted by several authors. For instance, Bateman (1910) argued that this transformation (which he attributed to Ribaucour) is "identical" to the Lorentz transformation. In particular, he argued (1912) that the variant given by Darboux (1887) corresponds to the Lorentz transformation in one direction, if $r=ct$, $r'=ct'$, and the remaining terms are replaced by velocities. Bateman (1910) also sketched geometric representations of relativistic light spheres using such spherical systems. However, Kubota (1925) responded to Bateman by arguing that the Laguerre inversion is involutory whereas the Lorentz transformation is not. He concluded that in order to make them equivalent, the Laguerre inversion has to be combined with a reversal of direction of the cycles.
The specific relation between the Lorentz transformation and the Laguerre inversion can also be demonstrated as follows (see H.R. Müller (1948) for analogous formulas in different notation). Laguerre's inversion formulas from 1882 (equivalent to those of Darboux in 1887) read:

$x'=\frac{x\left(1+\alpha^{2}\right)-2\alpha R}{1-\alpha^{2}},\quad R'=\frac{2\alpha x-R\left(1+\alpha^{2}\right)}{1-\alpha^{2}}.$

By setting

$\frac{2\alpha}{1+\alpha^{2}}=\frac{v}{c}$

it follows that

$\frac{1+\alpha^{2}}{1-\alpha^{2}}=\frac{1}{\sqrt{1-\frac{v^{2}}{c^{2}}}},$

and finally, by setting $R=ct$, the Laguerre inversion becomes very similar to the Lorentz transformation except that the expression $t-\frac{vx}{c^{2}}$ is reversed into $\frac{vx}{c^{2}}-t$:

$x'=\frac{x-vt}{\sqrt{1-\frac{v^{2}}{c^{2}}}},\quad t'=\frac{\frac{vx}{c^{2}}-t}{\sqrt{1-\frac{v^{2}}{c^{2}}}}.$
According to Müller, the Lorentz transformation can be seen as the product of an even number of such Laguerre inversions that change the sign. First an inversion is conducted into a plane $E_{1}$ which is inclined with respect to a plane $E$ under a certain angle, followed by another inversion back to $E$. See the section #Laguerre group isomorphic to Lorentz group for more details of the connection between the Laguerre inversion and other variants of Laguerre transformations.
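Using the inversion formulas in the form reconstructed above (the parameter name $\alpha$ is a choice of notation, not necessarily Laguerre's), one can verify numerically both the invariance of $x^{2}-R^{2}$ and the involutory character that Kubota emphasized. A minimal Python sketch under those assumptions:

def laguerre_inversion(x, R, alpha):
    """Laguerre inversion in the form used above: it preserves
    x^2 - R^2 and is involutory (applying it twice is the identity)."""
    g = 1.0 - alpha ** 2
    x2 = (x * (1 + alpha ** 2) - 2 * alpha * R) / g
    R2 = (2 * alpha * x - R * (1 + alpha ** 2)) / g
    return x2, R2

x, R, alpha = 1.7, 0.9, 0.35
x2, R2 = laguerre_inversion(x, R, alpha)

print(abs((x ** 2 - R ** 2) - (x2 ** 2 - R2 ** 2)) < 1e-12)  # invariant kept
print(laguerre_inversion(x2, R2, alpha))  # (1.7, 0.9) again: involutory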
Lorentz transformation within Laguerre geometry
Timerding (1911) used Laguerre's concept of oriented spheres in order to represent and derive the Lorentz transformation. Given a sphere of radius $r$, with $x$ as the distance between its center and the central plane, he obtained the relations to a corresponding sphere, resulting in a transformation which, by setting $r=ct$ and $r'=ct'$, becomes the Lorentz transformation.
Following Timerding and Bateman, Ogura (1913) analyzed a Laguerre transformation of the same type, which becomes the Lorentz transformation with a suitable substitution of its parameter in terms of the velocity $v/c$.
He stated that "the Laguerre transformation in sphere manifoldness is equivalent to the Lorentz transformation in spacetime manifoldness".
Laguerre group isomorphic to Lorentz group
As shown above, the group of conformal point transformations in Rn (composed of motions, similarities, and inversions) can be related by minimal projection to the group of contact transformations in Rn-1 transforming circles or spheres into other circles or spheres. In addition, Lie (1871, 1896) pointed out that in R3 there is a 7-parameter subgroup of point transformations composed of motions and similarities, which by using minimal projection corresponds to a 7-parameter subgroup of contact transformations in R2 transforming circles into circles. These relations were further studied by Smith (1900), Blaschke (1910), Coolidge (1916) and others, who pointed out the connection to Laguerre's geometry of reciprocal directions related to oriented lines, circles, planes and spheres. Therefore, Smith (1900) called it the "group of the geometry of reciprocal directions", and Blaschke (1910) used the expression "Laguerre group". The "extended Laguerre group" consists of motions and similarities, having 7 parameters in R2 transforming oriented lines and circles, or 11 parameters in R3 transforming oriented planes and spheres. If similarities are excluded, it becomes the "restricted Laguerre group" having 6 parameters in R2 and 10 parameters in R3, consisting of orientation-preserving or orientation-reversing motions, and preserving the tangential distance between oriented circles or spheres. Subsequently, it became common that the term Laguerre group only refers to the restricted Laguerre group. It was also noted that the Laguerre group is part of a wider group conserving tangential distances, called the "equilong group" by Scheffers (1905).
In R2 the Laguerre group leaves invariant the relation $dx^{2}+dy^{2}-dr^{2}$, which can be extended to arbitrary Rn as well. For instance, in R3 it leaves invariant the relation $dx^{2}+dy^{2}+dz^{2}-dr^{2}$. This is equivalent to the relation $dx^{2}+dy^{2}+dz^{2}+dz_{4}^{2}$ in R4 by using minimal (isotropy) projection with imaginary radius coordinate $r=iz_{4}$, or cyclographic projection (in descriptive geometry) with real radius coordinate. The transformations forming the Laguerre group can be further differentiated into "direct Laguerre transformations" which are related to motions preserving both the tangential distance as well as the sign; or "indirect Laguerre transformations" which are related to orientation-reversing motions, preserving the tangential distance with the sign reversed. The Laguerre inversion first given by Laguerre in 1882 is involutory, thus it belongs to the indirect Laguerre transformations. Laguerre himself did not discuss the group related to his inversion, but it turned out that every Laguerre transformation can be generated by at most four Laguerre inversions and every direct Laguerre transformation is the product of two involutory transformations, thus Laguerre inversions are of special importance because they are generating operators of the entire Laguerre group.
It was noted that the Laguerre group is indeed isomorphic to the Lorentz group (or the Poincaré group if translations are included), as both groups leave invariant the form $dx^{2}+dy^{2}+dz^{2}-dr^{2}$. After the first comparison of the Lorentz transformation and the Laguerre inversion by Bateman (1910) as mentioned above, the equivalence of both groups was pointed out by Cartan in 1912 and 1914, and he expanded upon it in 1915 (published 1955) in the French version of Klein's encyclopedia. Poincaré (1912, published 1921) also noted the correspondence.
Others who noticed this connection include Coolidge (1916), Klein & Blaschke (1926), Blaschke (1929), H.R. Müller, Kunle & Fladt (1970), and Benz (1992), and it has been reiterated in more recent literature.
See also
History of Lorentz transformations
Primary sources
Felix Klein (1884), Vorlesungen über das Ikosaeder und die Auflösung der Gleichungen vom fünften Grade, Teubner, Leipzig; English translation: Lectures on the ikosahedron and the solution of equations of the fifth degree (1888)
Reprinted in English translation by David Delphenich: On the geometric foundations of the Lorentz group.
English translation by David Delphenich: On complexes – in particular, line and sphere complexes – with applications to the theory of partial differential equations.
Written by Poincaré in 1912, printed in Acta Mathematica in 1914, though belatedly published in 1921.
Secondary sources
Textbooks, encyclopaedic entries, historical surveys:
(Only pages 1–21 were published in 1915, the entire article including pp. 39–43 concerning the groups of Laguerre and Lorentz was posthumously published in 1955 in Cartan's collected papers, and was reprinted in the Encyclopédie in 1991.)
Robert Fricke & Felix Klein (1897), Vorlesungen über die Theorie der autormorphen Functionen - Erster Band: Die gruppentheoretischen Grundlagen, Teubner, Leipzig
(Klein's lectures from 1893 updated and edited by Blaschke in 1926.)
Spheres
History of physics
Special relativity
Equations
Electromagnetism | Spherical wave transformation | [
"Physics",
"Mathematics"
] | 4,724 | [
"Electromagnetism",
"Physical phenomena",
"Mathematical objects",
"Equations",
"Special relativity",
"Fundamental interactions",
"Theory of relativity"
] |
50,175,094 | https://en.wikipedia.org/wiki/John%20Wickstr%C3%B6m | John Wickström (until 1889 Johannes Wickström; 13 December 1870 – 7 June 1959) was a Finland-Swedish engineer and entrepreneur.
Wickström was born in Kvevlax, Ostrobothnia. He emigrated to the United States at the age of 19 and settled in Chicago, where he studied engineering. Wickström specialised in combustion engines and founded a company for automobile production, but, eventually, his interest focused on boat engines.
Wickström returned to Finland in 1906 and founded a boat engine factory together with his brother Jakob. The engines became reputable and sales grew until the 1950s, when imported outboard engines started to replace the heavy middle engines.
Wickström was married twice.
Early life
Wickström was born as Johannes Wickström in the village of Vassor, Kvevlax, Ostrobothnia. His father Johan Wickström was a coppersmith, manufacturer and painter. The family had nine children, the oldest of whom died at a young age. He attended primary school in his home village, but, from a young age, he worked in his father's workshop. He was interested in testing and creating; after seeing a picture of a bicycle in a Swedish newspaper, he built one for himself.
Emigration to America
Like many other Ostrobothnians at the time, Wickström became interested in emigrating to the United States. After completing school and his confirmation, he borrowed money from his uncle for the voyage across the Atlantic and left Finland in 1889. In the United States, he changed his name to John. As was typical of Finnish immigrants, he first found work in a mine, but, owing to his metalworking skills, he soon got a job at a plumbing company, after which he moved to a Chicago engineering company that produced pumps for pumping water inside skyscrapers. The company tested combustion engines, which were still undeveloped. Wickström studied mechanical engineering at North Park College, Chicago and was awarded some patents related to engines. Wickström's two younger brothers Mickel and Jakob moved to Chicago as well.
Soon, Wickström focused solely on combustion engines, and in 1898 he founded Chicago Motor Cycle Coach Co. for automobile production. The first car, branded the Caloric, was produced in the same year. The Caloric featured a 10–12 hp petrol engine, and it gained attention on the streets of Chicago. The car was further developed, and in the early 20th century an upgraded Caloric II was introduced. Reportedly, the Caloric was displayed at the first Chicago Automobile Exhibit in 1901. However, fewer than ten Caloric cars are known to have been produced.
Wickström kept his focus on boat engine production. He ran the Chicago Caloric Engine Company with his business partners. The company built boat engines and repaired cars in its premises located on Wabash Avenue. The boat engines were used at lakes Michigan and Superior. Wickström was involved in a third company called Economy Engine Company.
Return to Finland
Wickström had difficulties finding financing for engine development; he had to focus production on existing, although still profitable, models. With the wealth he had accumulated during his stay in the United States, Wickström believed he would be able to continue developing fishing boat engines in Finland, which was still an untapped market.
Upon returning to Finland in 1906, Wickström had to create a market for his engines. Ostrobothnian fishermen, used to propelling themselves by rowing or sailing over many generations, were wary of combustion engines. In order to convince the potential clientele about the benefits of engine power, Wickström arranged an American-style show event in which he tugged two boats loaded with people by a motor boat up along the Kyrönjoki River. Despite the excessive load, the travelling took less than one third the time compared to rowing. Consequently, Wickström's engines gained a lot of attention and the demand grew high enough that Wickström could start serial production.
Wickström engine production
Wickström founded a boat engine factory in Palosaari, Vaasa together with his brother Jakob who also returned from America; the third emigrated brother Mickel stayed permanently in the United States. The factory was opened in autumn 1906, and, in addition to boat engines, the company produced stationary engines for Finnish farmers to power threshers and other machinery. Production of Wickström-engines grew over time and, in 1910, the brothers opened a new facility in Vaskiluoto, close to Vaasa harbour. The company name became Bröderna Wickströms Motorfabrik Ab in Swedish and Wickström-Veljesten Moottoritehdas Oy in Finnish ("Wickström Brothers' Engine Works Ltd."). At the beginning, the brothers produced Caloric engines based on drawings which they had brought with them from America. As the engines turned out to be too complex and unreliable, the company developed modern four-stroke paraffin engines, which replaced the older portfolio.
Wickström engines changed the fishermen's way of living drastically. Due to their higher speed, the fishermen could travel home every evening, instead of bunking in fishing lodges. In 1912, 150 Ostrobothnian fishing boats featured engines. Among Finnish fishermen, Wickström's products reached a reputation of ultimately reliable power units.
After the modest start, demand grew especially after World War I, and the facilities were enlarged several times. In the 1930s, the annual production was about 1,000 units, and the portfolio consisted of a number of marine and stationary engine models. The company bought production licences for British Lister diesel engines. At the turn of the 1950s, the headcount reached almost 200, and the company produced thousands of engines per year. By that time, a few other engine producers had also started production in Vaasa, and the city became the centre of Finnish small-engine production. Wickström was the leading producer of small engines, and other boat engine brands were an exception in the area. As demand for stationary engines decreased after World War II, the company focused exclusively on boat engines. Wickström engines were exported to Sweden, Norway, Iceland, Japan and Canada.
The Wickström Brothers' Engine Works had a good reputation as an employer, and workers typically made long careers in the factory. Wickström led the company until the end of his life, and he was an appreciated manager. He believed that the robust middle engines which the company produced would stay competitive on the market; he considered outboard engines to be rubbish and did not believe in their future. However, as light glass fibre boats pushed aside the heavy wooden boats, American and Swedish outboard engines replaced the middle engines over time. After Wickström's death in 1959, the company survived for another couple of decades. By then, the Wickström company had lost its spirit of innovation, although the main reason for its decline was the spread of outboard engines and the subsequent disappearance of its market.
Personal life
Wickström married Ida Maria née Östman (1873–1952) in 1892. Their children were Sigrid Ingeborg (born 1893), Werner Theodor (born 1897), Roy John Elias (born 1901) and Alma Ellida (born 1902). The couple was divorced in 1919; as divorcing was rare and generally not socially accepted at the time, a possible reason could have been rumours about Wickström's illegitimate son in the United States. In the same year, Wickström married Julia Wilhelmina née Bergsten, with whom he had Ingrid Brita Maria (born 1920) and Johannes Carl-Gustav (born 1922).
At an early age, Wickström committed to total abstinence from alcohol, and he took part in temperance society activities. In Chicago, he was a member of the local Swedish–Finnish temperance society (Svensk-Finska Nykterhetsföreningen Topelius), and he was chairman of the Swedish–Finnish Temperance Federation (Svensk-Finska Nykterhetsförbundet). In Finland, Wickström supported prohibition.
Wickström was a founding member of the Finnish Odd Fellows fraternity in 1925. He was member of the Vaasa city council.
Wickström financed his birth municipality Kvevlax, his former school in Vassor village and craft school Kvevlax Slöjdskola. Many craft school students were inspired by Wickström's example, becoming entrepreneurs themselves.
Legacy
The Vaasa City Theatre presented a play about Wickström's life. The performance, written by Antti Tuuri and directed by Erik Kiviniemi, had its premiere in autumn 2015. Jorma Tommila starred in the main role.
Patents
Wickström's patents registered in the United States are listed below:
References
20th-century Finnish engineers
Automotive engineers
20th-century Finnish businesspeople
People from Vaasa
1870 births
1959 deaths
Emigrants from the Grand Duchy of Finland to the United States
Swedish-speaking Finns
Finnish temperance activists | John Wickström | [
"Engineering"
] | 1,849 | [
"Automotive engineering",
"Automotive engineers"
] |
50,176,148 | https://en.wikipedia.org/wiki/Gia%20%28protein%29 | Methuselah-like 5 is a protein that in Drosophila is encoded by the Mthl5 (also known as Gia) gene.
Methuselah-like 5 is a G protein-coupled receptor (GPCR) that is essential for cardiac development in Drosophila. Deletion of this gene interferes with cardioblast junction proteins, resulting in a broken hearted phenotype similar to other heterotrimeric cardiac G protein mutants. Gia is expressed at stage 13 within bilateral rows of cardioblasts, but during stages 13–15 anterior cardioblasts demonstrate increasing expression while posterior cardioblast expression decreases. By stage 16, Gia expression occurs only in aortic cardioblasts and is not present in the posterior segment cardioblasts. Gia expression thus occurs only in the aorta, and Gia is presently the only gene known in Drosophila with a strictly aortic expression pattern. This gene is also known as Mthl5 (methuselah-like G protein coupled receptor) and is part of a gene family found in insects but not vertebrates. Overexpression of Gia in a transgenic fly model did not cause any cardiac defects.
G protein-coupled receptors (GPCRs) have a characteristic arrangement of seven transmembrane segments that culminate in an extracellular N-terminus and an intracellular C-terminus. More than 200 different GPCRs can be found in Drosophila. GPCR activation is mediated by the G proteins Gα, Gβ, and Gγ. Drosophila have a relatively small number of G proteins, making them a useful model for the study of GPCR outcomes. Drosophila have a cardiac structure called the dorsal vessel, which comprises a tubular structure with a cardioaortic valve and an aortic-like outflow. Genes important for cardiac development in Drosophila include NK2, MEF2, GATA, Tbx, and Hand.
References
G protein-coupled receptors | Gia (protein) | [
"Chemistry"
] | 413 | [
"G protein-coupled receptors",
"Signal transduction"
] |
50,176,405 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20topos%20theory | In mathematics, the fundamental theorem of topos theory states that the slice $\mathbf{E}/X$ of a topos $\mathbf{E}$ over any one of its objects $X$ is itself a topos. Moreover, if there is a morphism $f:A\rightarrow B$ in $\mathbf{E}$, then there is a functor $f^{*}:\mathbf{E}/B\rightarrow\mathbf{E}/A$ which preserves exponentials and the subobject classifier.
The pullback functor
For any morphism $f$ in $\mathbf{E}$ there is an associated "pullback functor" $f^{*}$, which is key in the proof of the theorem. For any other morphism $g$ in $\mathbf{E}$ which shares the same codomain as $f$, their product $f\times g$ is the diagonal of their pullback square, and the morphism which goes from the domain of $f\times g$ to the domain of $f$ is opposite to $g$ in the pullback square, so it is the pullback of $g$ along $f$, which can be denoted as $f^{*}g$.
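The pullback square can be set out explicitly as a visual aid; the following LaTeX fragment is a minimal sketch (assuming amsmath, with $A\times_{C}B$ an assumed name for the pullback object):

% Pullback of g along f: the square commutes and is universal.
% The left-hand edge f*g is "g pulled back along f".
\[
\begin{array}{ccc}
A \times_C B & \xrightarrow{\;\pi_2\;} & B \\
{\scriptstyle f^{*}g}\,\big\downarrow & & \big\downarrow\,{\scriptstyle g} \\
A & \xrightarrow{\;\;f\;\;} & C
\end{array}
\]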
Note that a topos $\mathbf{E}$ is isomorphic to the slice over its own terminal object, i.e. $\mathbf{E}\cong\mathbf{E}/1$, so for any object $A$ in $\mathbf{E}$ there is a morphism $f:A\rightarrow 1$ and thereby a pullback functor $f^{*}:\mathbf{E}\rightarrow\mathbf{E}/A$, which is why any slice is also a topos.
For a given slice $\mathbf{E}/A$ let $\overline{X}$ denote an object of it, where $X$ is an object of the base category. Then pullback along the unique morphism $f:A\rightarrow 1$ is a functor which maps: $X\mapsto\overline{X\times A}$. Now apply this functor to an object $C$ of the base topos. This yields

$f^{*}C=\overline{C\times A},$

so this is how the pullback functor maps objects of $\mathbf{E}$ to objects of $\mathbf{E}/A$. Furthermore, note that any object $C$ of the base topos is isomorphic to the object $C\rightarrow 1$ of the slice $\mathbf{E}/1$, so that $f^{*}$ is indeed a functor from the base topos $\mathbf{E}\cong\mathbf{E}/1$ to its slice $\mathbf{E}/A$.
Logical interpretation
Consider a pair of ground formulas $\phi$ and $\psi$ whose extensions $[\![\phi]\!]$ and $[\![\psi]\!]$ (taken in the empty context) are objects of the base topos. Then $\phi$ implies $\psi$ if and only if there is a monic from $[\![\phi]\!]$ to $[\![\psi]\!]$. If this is the case then, by the theorem, the formula $\psi$ is true in the slice $\mathbf{E}/[\![\phi]\!]$, because the terminal object $[\![\phi]\!]$ of the slice factors through the extension $[\![\psi]\!]$. In logical terms, this could be expressed as

$\vdash\phi\rightarrow\psi\quad\Longleftrightarrow\quad\phi\vdash\psi,$

so that slicing by the extension of $\phi$ would correspond to assuming $\phi$ as a hypothesis. Then the theorem would say that making a logical assumption does not change the rules of topos logic.
See also
Timeline of category theory and related mathematics
Deduction Theorem
References
Topos theory | Fundamental theorem of topos theory | [
"Mathematics"
] | 440 | [
"Mathematical theorems",
"Mathematical structures",
"Category theory",
"nan",
"Mathematical problems",
"Topos theory"
] |
34,252,228 | https://en.wikipedia.org/wiki/Reed%20receiver | A reed receiver or tuned reed receiver (US) was a form of multi-channel signal decoder used for early radio control systems. It uses a simple electromechanical device or 'resonant reed' to demodulate the signal, in effect a receive-only modem. The encoding used is a simple form of frequency-shift keying.
These decoders appeared in the 1950s and were used into the early 1970s. Early transistor systems were in use in parallel to them, but they were finally displaced by the appearance of affordable digital proportional systems, based on early integrated circuits. These had the advantage of proportional control.
Operation
The decoder of the reed receiver is based on the 'resonant reed' unit. This comprises a number of vibrating metal reeds, each one having a tuned vibration frequency like a tuning fork. These reeds are manufactured from a single tapered sheet of iron or steel, giving a comb of reeds of varying length. This resembles the comb used to sound musical notes in a music box. Like a music box, the length of each reed affects its resonant frequency. The reeds are powered magnetically, by a single solenoid coil and an iron core wrapped between the ends of the reeds.
A reed's resonant frequency is a mid-range audible frequency of perhaps 300 Hz. The solenoid is driven by the output of the radio control receiver, which is an audio tone or tones. If the receiver output contains the appropriate tone for the resonant frequency of a reed, that reed would be made to vibrate. As the reed vibrates, it touches a contact screw above its free end. These contacts form the output of the decoder. Decoder outputs are generally fed to small relays. These allow a high current load to be controlled, such as the model's propulsion motor. Using a relay also adds a damping time constant to the output, so that the intermittent contact with the reed contact (which is vibrating at the transmitter audible tone frequency) becomes a continuous output signal.
Each reed forms an independent channel and they may be activated individually or in combination, depending on the signal from the transmitter.
Reed system channels are an on/off output, not a proportional (i.e. analogue) signal. These could be used to drive an escapement, or rapidly switching a channel on and off could be used as pulse-width modulation to provide a proportional signal to drive a servo.
Number of channels
To avoid potential problems with harmonic frequencies simultaneously activating multiple reeds, the reed frequencies were kept within an octave of each other. The number of distinct frequencies usable within this range depends on the selectivity or Q factor of each reed. Typical radio control reed units used six reeds, sometimes four or eight on simpler or more sophisticated systems.
The sensitivity of each reed is controlled by mechanically adjusting the contact screw above each reed. This adjustment is critical and temperamental, so a system where reed resonance is pronounced and separate from the other reeds is easiest to adjust. If adjacent reeds also vibrate (at a lesser amplitude) for the same tone, the contact adjustment must not be too sensitive, or else it could be false-triggered by an adjacent channel. This problem becomes worse, the more closely the channels are spaced.
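The interplay of the octave limit and reed selectivity can be made concrete with a rough counting argument: if each reed passes a band of width f/Q and adjacent channels are kept a few bandwidths apart, the channel frequencies form a geometric sequence within the octave. The Python sketch below is a first-order estimate with illustrative numbers, not a record of any production system.

import math

def max_channels(q, octave=2.0, guard=3.0):
    """Estimate how many reed channels fit within one octave.
    Each reed passes a band of roughly f/Q, so keeping adjacent
    channels 'guard' bandwidths apart means stepping the channel
    frequency geometrically by a factor of (1 + guard/Q)."""
    step = 1.0 + guard / q
    return int(math.log(octave) / math.log(step)) + 1

# Illustrative numbers only: Q around 100 with a three-bandwidth guard.
print(max_channels(q=100.0))  # about 24 in theory; real sets used 6 to 12

The gap between this idealized count and the six or so channels of practical units reflects exactly the temperamental contact adjustment described above.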
Twelve reed systems were known, but were only required for large ship models, typically warships, with many channels for triggering "working features" such as turrets and cannon firing. In practice these were unreliable and so these models used a sequential drum sequencer instead. One channel, probably from a reed, would be used to step the sequencer through each step of a pre-planned demonstration sequence.
Hedy Lamarr
It is sometimes incorrectly claimed that the origin of the resonant reed decoder was in the wartime torpedo-control patent granted to the actress Hedy Lamarr. This patent did precede spread spectrum radio technology, but the frequency-hopping it describes is primarily applied to the radio carrier wave, not the signal coding. A minor aspect of the radio control system described does use a similar frequency-keying mechanism to select left and right rudder, although this is done by separate filters, presumably electronic rather than reed, at 50 and 100 Hz. As these two frequencies are exactly an octave apart, they could also suffer from the harmonic interference problem described above.
Transmitters
A suitable transmitter need only generate a number of audio tones. Most had a single oscillator, that generated different tones as control buttons were pressed one-by-one. As the control actuators on the model were usually escapements at this time, this limitation was relatively minor. To keep the channels fully independent and simultaneously triggerable, would have required a separate oscillator for each channel, not merely a single tunable oscillator. In the valve era before transistors, that would have been unusually expensive. Many period transmitters merely used a number of push-button switches on their case, although some combined these into joystick or wheel controls.
Similar devices
Aircraft navigation
Resonant reeds, used as mechanical filters in a radio tone decoder, appear in the early 1930s as part of radio navigation systems. Multiple courses were signalled by use of radio beam transmitters. Tones of 65 Hz, 86.7 Hz, and 108.3 Hz were modulated onto these beam transmissions, the position of the beam and its audio modulation being space modulated onto the ideal position of the course and the guard beam areas to either side of it. By visually monitoring the vibrating reeds, the pilot could determine their position within the radio beams, and thus over the ground.
Radio paging
Early radio paging systems such as the Bell Telephone BELLBOY system used a shared carrier frequency and audio tone coding to identify the correct recipient of a message. These selectors used a tuning fork resonator rather than a simple single reed. This gives a more selective mechanical filter, allowing more frequencies to be spaced closely together. Even more importantly, the false-triggering harmonic for a tuning fork is more than six times its natural frequency, rather than merely twice its frequency, as for a reed. This means that the useful frequency range is over two octaves, rather than less than one octave. Multiple reeds could also be used together, either to identify separate frequencies to give multiple indications, or logically ANDed together to require more subscriber selections with a 2-code identifier rather than a single code.
Frequency measurement
Vibrating reed indicators have been used for a low-cost display of frequency. This was typically used for a small generator set, where maintaining an output frequency of 50 Hz or 60 Hz was needed. A comb of reeds centered on this frequency would be mounted edge-on to the control panel and the vibrations of the reed with the greatest amplitude could be seen directly. The reeds used in such an indicator have their ends bent perpendicular to the rest of the reed to give a larger area to view, instead of the small cross-section of the thin metal they are made of.
See also
Vibration galvanometer
Notes
References
Radio control
"Mathematics"
] | 1,449 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
34,252,996 | https://en.wikipedia.org/wiki/Repiping | Repiping means replacing the pipes in a building, oil or gas well, or centrifuge.
References
See also
Plumbing
Piping | Repiping | [
"Chemistry",
"Engineering"
] | 30 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
34,253,455 | https://en.wikipedia.org/wiki/Peeling%20theorem | In general relativity, the peeling theorem describes the asymptotic behavior of the Weyl tensor as one goes to null infinity. Let $\gamma$ be a null geodesic in a spacetime $(M,g_{ab})$ from a point $p$ to null infinity, with affine parameter $\lambda$. Then the theorem states that, as $\lambda$ tends to infinity:

$C_{abcd}=\frac{C_{abcd}^{(1)}}{\lambda}+\frac{C_{abcd}^{(2)}}{\lambda^{2}}+\frac{C_{abcd}^{(3)}}{\lambda^{3}}+\frac{C_{abcd}^{(4)}}{\lambda^{4}}+O\left(\lambda^{-5}\right),$

where $C_{abcd}$ is the Weyl tensor, and abstract index notation is used. Moreover, in the Petrov classification, $C_{abcd}^{(1)}$ is type N, $C_{abcd}^{(2)}$ is type III, $C_{abcd}^{(3)}$ is type II (or II-II) and $C_{abcd}^{(4)}$ is type I.
References
External links
General relativity
Theorems in general relativity | Peeling theorem | [
"Physics",
"Mathematics"
] | 117 | [
"Equations of physics",
"Theorems in general relativity",
"General relativity",
"Theorems in mathematical physics",
"Relativity stubs",
"Theory of relativity",
"Physics theorems"
] |
34,253,886 | https://en.wikipedia.org/wiki/Sticky%20and%20blunt%20ends | DNA ends refer to the properties of the ends of linear DNA molecules, which in molecular biology are described as "sticky" or "blunt" based on the shape of the complementary strands at the terminus. In sticky ends, one strand is longer than the other (typically by at least a few nucleotides), such that the longer strand has bases which are left unpaired. In blunt ends, both strands are of equal length – i.e. they end at the same base position, leaving no unpaired bases on either strand.
The concept is used in molecular biology, in cloning, or when subcloning insert DNA into vector DNA. Such ends may be generated by restriction enzymes that break the molecule's phosphodiester backbone at specific locations; restriction enzymes are endonucleases, which together with exonucleases make up the larger class of enzymes called nucleases. A restriction enzyme that cuts the backbones of both strands at non-adjacent locations leaves a staggered cut, generating two overlapping sticky ends, while an enzyme that makes a straight cut (at locations directly across from each other on both strands) generates two blunt ends.
Single-stranded DNA molecules
A single-stranded non-circular DNA molecule has two non-identical ends, the 3' end and the 5' end (usually pronounced "three prime end" and "five prime end"). The numbers refer to the numbering of carbon atoms in the deoxyribose, which is a sugar forming an important part of the backbone of the DNA molecule. In the backbone of DNA the 5' carbon of one deoxyribose is linked to the 3' carbon of another by a phosphodiester bond linkage.
Variations in double-stranded molecules
When a molecule of DNA is double stranded, as DNA usually is, the two strands run in opposite directions. Therefore, one end of the molecule will have the 3' end of strand 1 and the 5' end of strand 2, and vice versa in the other end. However, the fact that the molecule is two stranded allows numerous different variations.
Blunt ends
The simplest DNA end of a double stranded molecule is called a blunt end. Blunt ends are also known as non-cohesive ends. In a blunt-ended molecule, both strands terminate in a base pair. Blunt ends are not always desired in biotechnology, since when a DNA ligase is used to join two molecules into one, the yield is significantly lower with blunt ends. In subcloning, blunt ends also have the disadvantage that the insert DNA can be ligated in the opposite of the desired orientation. On the other hand, blunt ends are always compatible with each other. Here is an example of a small piece of blunt-ended DNA:
5'-GATCTGACTGATGCGTATGCTAGT-3'
3'-CTAGACTGACTACGCATACGATCA-5'
Overhangs and sticky ends
Non-blunt ends are created by various overhangs. An overhang is a stretch of unpaired nucleotides in the end of a DNA molecule. These unpaired nucleotides can be in either strand, creating either 3' or 5' overhangs. These overhangs are in most cases palindromic.
The simplest case of an overhang is a single nucleotide. This is most often adenine and is created as a 3' overhang by some DNA polymerases. Most commonly this is used in cloning PCR products created by such an enzyme. The product is joined with a linear DNA molecule with a 3' thymine overhang. Since adenine and thymine form a base pair, this facilitates the joining of the two molecules by a ligase, yielding a circular molecule. Here is an example of an A-overhang:
5'-ATCTGACTA-3'
3'-TAGACTGA -5'
Longer overhangs are called cohesive ends or sticky ends. They are most often created by restriction endonucleases when they cut DNA. Very often they cut the two DNA strands four base pairs from each other, creating a four-base overhang (5' or 3', depending on the enzyme) in one molecule and a complementary overhang in the other. These ends are called cohesive since they are easily joined back together by a ligase.
For example, these two "sticky" ends (five-base 5' overhangs) are compatible:
5'-ATCTGACT GATGCGTATGCT-3'
3'-TAGACTGACTACG CATACGA-5'
Also, since different restriction endonucleases usually create different overhangs, it is possible to create a plasmid by excising a piece of DNA (using a different enzyme for each end) and then joining it to another DNA molecule with ends trimmed by the same enzymes. Since the overhangs have to be complementary in order for the ligase to work, the two molecules can only join in one orientation. This is often highly desirable in molecular biology.
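The orientation argument can be made concrete with a short Python sketch (illustrative only; the helper names are invented for this example): two sticky ends anneal exactly when one overhang is the reverse complement of the other.

```python
# A minimal sketch: two single-stranded overhangs can anneal when one is
# the reverse complement of the other.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def compatible(overhang_a: str, overhang_b: str) -> bool:
    """Two overhangs are ligatable if they base-pair along their full length."""
    return overhang_a == reverse_complement(overhang_b)

# An EcoRI end (AATT) is its own reverse complement, so any two EcoRI-cut
# ends can anneal; overhangs that are not reverse complements cannot, which
# is what enforces a single insert orientation in directional cloning.
print(compatible("AATT", "AATT"))  # True
print(compatible("AATT", "GATC"))  # False
```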
Sticky ends can be converted to blunt ends by a process known as blunting, which involves filling in the sticky end with complementary nucleotides. This yields a blunt end; however, since sticky ends are often preferable, the main use of this method is to label DNA by filling the gap with radiolabeled nucleotides. Blunt ends can also be converted to sticky ends, either by the addition of double-stranded linker sequences containing recognition sequences for restriction endonucleases that create sticky ends, followed by application of the restriction enzyme, or by homopolymer tailing, which refers to extending the molecule's 3' ends with a run of a single nucleotide, allowing for specific pairing with the matching complementary tail (e.g. poly-C with poly-G).
Frayed ends
Across from each single strand of DNA, we typically see adenine pair with thymine and cytosine pair with guanine to form an antiparallel complementary strand, as shown below. Two nucleotide sequences which correspond to each other in this manner are referred to as complementary:
5'-ATCTGACT-3'
3'-TAGACTGA-5'
A frayed end refers to a region of a double stranded (or other multi-stranded) DNA molecule near the end with a significant proportion of non-complementary sequences; that is, a sequence where nucleotides on the adjacent strands do not match up correctly:
5'-ATCTGACTAGGCA-3'
3'-TAGACTGACTACG-5'
The term "frayed" is used because the incorrectly matched nucleotides tend to avoid bonding, thus appearing similar to the strands in a fraying piece of rope.
Although non-complementary sequences are also possible in the middle of double stranded DNA, mismatched regions away from the ends are not referred to as "frayed".
Discovery
Ronald W. Davis first discovered sticky ends as the product of the action of the restriction endonuclease EcoRI.
Strength
Sticky-end links differ in their stability. The free energy of formation can be measured to estimate stability, and free-energy approximations can be made for different sequences from data on oligonucleotide UV thermal denaturation curves. Predictions from molecular dynamics simulations also show that some sticky-end links are much stronger under stretching than others.
References
Sambrook, Joseph; David Russell (2001). Molecular Cloning: A Laboratory Manual. New York: Cold Spring Harbor Laboratory Press.
DNA
Genetics techniques
Molecular biology | Sticky and blunt ends | [
"Chemistry",
"Engineering",
"Biology"
] | 1,548 | [
"Genetics techniques",
"Biochemistry",
"Genetic engineering",
"Molecular biology"
] |
36,831,006 | https://en.wikipedia.org/wiki/Hindsight%20optimization | Hindsight optimisation (HOP) is a computer science technique used in artificial intelligence for the analysis of actions whose outcomes are stochastic. HOP is used in combination with a deterministic planner: by sampling outcomes for each of the possible actions from the given state (i.e. determinising the actions) and using the deterministic planner to evaluate those sampled futures, HOP produces an estimate of the value of each action.
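A minimal sketch of the idea in Python; `sample_determinization` and `deterministic_plan_value` are hypothetical stand-ins for a domain simulator and a deterministic planner, not named in the source:

```python
# A minimal sketch of hindsight optimization (HOP).
from statistics import mean

def hop_value(state, action, sample_determinization, deterministic_plan_value,
              n_samples=30):
    """Estimate the value of taking `action` in `state` by averaging, over
    sampled deterministic futures, the value found by a deterministic planner."""
    values = []
    for _ in range(n_samples):
        # Fix all stochastic outcomes in advance (a "hindsight" scenario)...
        scenario = sample_determinization(state)
        # ...then plan optimally in the resulting deterministic problem.
        values.append(deterministic_plan_value(state, action, scenario))
    return mean(values)

def hop_policy(state, actions, sample_determinization, deterministic_plan_value):
    """Pick the action with the highest hindsight estimate."""
    return max(actions, key=lambda a: hop_value(
        state, a, sample_determinization, deterministic_plan_value))
```

Because the planner gets to see the sampled outcomes before planning, the average is an optimistic (upper-bound-style) estimate of the true stochastic value, but it often ranks actions well in practice.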
References
Artificial intelligence engineering | Hindsight optimization | [
"Engineering"
] | 95 | [
"Software engineering",
"Artificial intelligence engineering"
] |
36,833,798 | https://en.wikipedia.org/wiki/Wikispeed | Wikispeed is an automotive startup with a modular design car. Wikispeed competed in the Progressive Automotive X Prize competition in 2010 and placed tenth in the mainstream class, in which a hundred other cars competed, many of them from large companies and universities. The car debuted at the North American International Auto Show (NAIAS) in Detroit, Michigan in January 2011.
Wikispeed was founded by Joe Justice and is headquartered in Seattle, Washington. In 2011, Justice gave a TEDx talk explaining the management style implemented by the Wikispeed team.
In May 2012, Joe Justice launched an Indiegogo campaign to crowdfund further refinement of their prototype design into a market-ready kit car. Justice did not seek development funding from "traditional venture capital" in an effort to avoid forcing the Wikispeed project "into commercial short-term money-making". The campaign sought roughly $50,000 over a period of two months. The campaign failed.
Wikispeed innovates by applying scrum development techniques borrowed from the software development industry. They use open source tools and lean management methods to improve their productivity.
On January 6, 2015, Wikispeed announced that it had been unable to create a working engine module since its second model and called on the community for help. On February 15, 2015, Wikispeed announced that it had produced another working engine module.
See also
Open-source car
Electric vehicle
Open hardware
References
External links
Open hardware vehicles
Modular design
Electric vehicle manufacturers of the United States
Car manufacturers of the United States | Wikispeed | [
"Engineering"
] | 316 | [
"Systems engineering",
"Design",
"Modular design"
] |
47,046,728 | https://en.wikipedia.org/wiki/Penicillium%20paneum | Penicillium paneum is a species of fungus in the genus Penicillium which can spoil cereal grains. Penicillium paneum produces 1-octen-3-ol, the penipanoids A, B and C, patulin, and roquefortine C.
References
Further reading
paneum
Fungi described in 1996
Fungus species | Penicillium paneum | [
"Biology"
] | 79 | [
"Fungi",
"Fungus species"
] |
47,047,431 | https://en.wikipedia.org/wiki/Heisenberg%E2%80%93Langevin%20equations | The Heisenberg–Langevin equations (named after Werner Heisenberg and Paul Langevin) are equations for open quantum systems. They are a specific case of quantum Langevin equations.
In the Heisenberg picture, the time evolution of a quantum system is carried by the operators themselves rather than by the states. The solution of the Heisenberg equation of motion determines the subsequent time evolution of the operators. The Heisenberg–Langevin equation generalizes this to open quantum systems by adding damping and quantum noise (Langevin force) terms that account for the coupling to an environment.
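As an illustration, one standard textbook form (not stated in this article) is the Heisenberg–Langevin equation for a single bosonic mode $a$ damped at rate $\kappa$ by a Markovian bath:

$$\dot{a}(t) = \frac{i}{\hbar}\,[H, a(t)] - \frac{\kappa}{2}\,a(t) + \sqrt{\kappa}\;a_{\mathrm{in}}(t)$$

The first term is the ordinary Heisenberg equation of motion; the damping term and the noise (Langevin) operator $a_{\mathrm{in}}(t)$ describe the coupling to the environment, and together they preserve the canonical commutation relation $[a(t), a^{\dagger}(t)] = 1$ at all times.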
References
Quantum mechanics | Heisenberg–Langevin equations | [
"Physics"
] | 99 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
47,049,443 | https://en.wikipedia.org/wiki/Spatiotemporal%20pattern | Spatiotemporal patterns are patterns that occur in a wide range of natural phenomena and are characterized by patterning in both space and time. The general rules of pattern formation hold. In contrast to "static", purely spatial patterns, the full complexity of spatiotemporal patterns can only be recognized over time. Any kind of traveling wave is a good example of a spatiotemporal pattern: besides the shape and amplitude of the wave (the spatial part), its time-varying position (and possibly shape) in space is an essential part of the entire pattern.
The distinction between spatial and spatiotemporal patterns in nature is not clear-cut, because a static, invariable pattern never occurs in the strict sense. Even rock formations change slowly, on a time scale of tens of millions of years; the distinction therefore lies in the time scale of change relative to human experience. Even a snapshot of a dune is usually taken as an example of a purely spatial pattern, although the dune itself is clearly not static. It is thus apt to say that spatiotemporal patterns in nature are the rule rather than the exception.
Physics
Many hydrodynamic systems show spatiotemporal pattern formation:
Rayleigh–Bénard convection
Taylor–Couette flow
Liquid crystal instabilities
Chemistry
Any type of reaction–diffusion system that produces spatial patterns will also, due to the time-dependency of both reactions and diffusion, produce spatiotemporal patterns.
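For illustration (not part of the source article), a minimal sketch of the Gray-Scott reaction-diffusion model in Python; the parameter values are conventional demonstration choices, not values prescribed by the text:

```python
# A minimal Gray-Scott reaction-diffusion simulation.
import numpy as np

def laplacian(Z):
    """Five-point discrete Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4.0 * Z)

n = 128
Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.060, 1.0
U = np.ones((n, n))
V = np.zeros((n, n))
U[54:74, 54:74] = 0.50   # a local perturbation seeds the pattern
V[54:74, 54:74] = 0.25

for _ in range(5000):
    uvv = U * V * V
    U += dt * (Du * laplacian(U) - uvv + F * (1.0 - U))
    V += dt * (Dv * laplacian(V) + uvv - (F + k) * V)

# U and V now hold one snapshot of a pattern that keeps evolving in time:
# the pattern is inherently spatiotemporal, not static.
```

Iterating further, or varying F and k, produces the spots, stripes and travelling fronts typical of reaction-diffusion systems; the pattern exists only as the joint evolution of the fields in space and time.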
Biology
Neurobiology
Neural networks, both artificial and natural, produce a virtually unbounded variety of spatiotemporal patterns, in sensory perception, learning, thinking and reasoning as well as in spontaneous activity. It has, for example, been demonstrated that spiral waves, a signature of many excitable systems, can occur in neocortical preparations.
Communication
All communication, including language, relies on the spatiotemporal encoding of information: producing and transmitting sound variations or any other type of signal, i.e. single building blocks of information that are varied over time. Even though written language appears to exist only as a (2D) spatial concatenation of letters (strings), it must be decoded sequentially over time. Any kind of language that is understood by organisms is thus ultimately a transcoding of neural spatiotemporal signals and will, in successful communication, evoke patterns of neural activity in the recipient similar to those that existed in the sender. For example, the warning call of a bird that perceives a predator will produce a similar type and degree of alarm (ultimately, a certain kind of neural activity pattern) in other individuals, even though they have not yet seen or heard the potential attacker.
Even artificial languages, e.g. computer languages, are not read and interpreted in one step but sequentially; thus their meaningfully arranged vocabulary (e.g. "computer code") can be seen as a spatiotemporal pattern.
Genetics
As a particular type of language, the "static" DNA sequence (neglecting random transcription errors, recombination and mutation) and its transcription pattern over time yield biologically essential spatiotemporal patterns. Gene regulatory networks are responsible for regulating the time course of gene expression levels, which can be analyzed using expression profiling.
Crime
Criminals show spatiotemporal patterns when planning and executing their activities that may be used to predict their future behaviour. Temporal patterns may apply to preferred times when crimes are committed. Spatial patterns can be identified in potential targets and routes for criminal activities.
Spatial patterns may be used to identify the most likely locations for crimes to occur, or to identify potential escape routes. In addition, criminals often use temporal and spatial patterns to hide their activities, such as by committing crimes in areas with low population density or in areas with limited surveillance. By understanding spatiotemporal patterns in relation to crime, law enforcement and crime prevention professionals can develop strategies to better prevent and respond to criminal activities. For example, law enforcement can use spatiotemporal patterns to identify crime hot spots and to determine the most effective strategies for responding to these areas. In addition, law enforcement and crime prevention professionals can use spatiotemporal patterns to identify and monitor potential suspects or areas of criminal activity.
Literature
References
Information theory
Pattern formation
Space and time | Spatiotemporal pattern | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 872 | [
"Telecommunications engineering",
"Physical quantities",
"Time",
"Applied mathematics",
"Computer science",
"Information theory",
"Space",
"Spacetime",
"Space and time"
] |
47,053,170 | https://en.wikipedia.org/wiki/I%20Measure%20U | IMeasureU, Ltd. (IMU) is a company that specializes in wearable technology. They develop inertial measurement units that analyze body movements in sports. The company combines sensor data with computational models to model human body movement. IMeasureU has collaborated with Athletics Australia runners.
In July 2017, the company was acquired by Vicon, an English company specializing in motion capture. The acquisition aimed to integrate Vicon's camera systems with IMeasureU's sensors.
Earlier, on 23 June 2015, IMeasureU had launched an Indiegogo campaign seeking to raise US $200,000 to develop a consumer product.
See also
Running injuries
Biomechanics of sprint running
References
External links
IMeasureU
Wearable devices
Companies related to wearable computers
Activity trackers
Manufacturing companies based in Auckland
Biomechanics
Sportswear brands
Running | I Measure U | [
"Physics"
] | 174 | [
"Biomechanics",
"Mechanics"
] |
47,056,521 | https://en.wikipedia.org/wiki/Emmanuelle%20Charpentier | Emmanuelle Marie Charpentier (born 11 December 1968) is a French professor and researcher in microbiology, genetics, and biochemistry. As of 2015, she has been a director at the Max Planck Institute for Infection Biology in Berlin. In 2018, she founded an independent research institute, the Max Planck Unit for the Science of Pathogens. In 2020, Charpentier and American biochemist Jennifer Doudna of the University of California, Berkeley, were awarded the Nobel Prize in Chemistry "for the development of a method for genome editing" (through CRISPR). This was the first science Nobel Prize ever shared by two women alone.
Early life and education
Born in 1968 in Juvisy-sur-Orge in France, Charpentier studied biochemistry, microbiology, and genetics at the Pierre and Marie Curie University (which became the Faculty of Science of Sorbonne University) in Paris. She was a graduate student at the Institut Pasteur from 1992 to 1995 and was awarded a research doctorate. Charpentier's PhD work investigated molecular mechanisms involved in antibiotic resistance. Her paternal grandfather, surnamed Sinanian, was an Armenian who escaped to France during the Armenian Genocide and met his wife in Marseille.
Career and research
Charpentier worked as a university teaching assistant at Pierre and Marie Curie University from 1993 to 1995 and as a postdoctoral fellow at the Institut Pasteur from 1995 to 1996. She moved to the US and worked as a postdoctoral fellow at Rockefeller University in New York from 1996 to 1997. During this time, Charpentier worked in the lab of microbiologist Elaine Tuomanen. Tuomanen's lab investigated how the pathogen Streptococcus pneumoniae utilizes mobile genetic elements to alter its genome. Charpentier also helped to demonstrate how S. pneumoniae develops vancomycin resistance.
Charpentier was an assistant research scientist at the New York University Medical Center from 1997 to 1999. She worked in the lab of Pamela Cowin, a skin-cell biologist interested in mammalian gene manipulation. Charpentier published a paper exploring the regulation of hair growth in mice. She held the position of Research Associate at the St. Jude Children's Research Hospital and at the Skirball Institute of Biomolecular Medicine in New York from 1999 to 2002.
After five years in the United States, Charpentier returned to Europe and became the lab head and a guest professor at the Institute of Microbiology and Genetics, University of Vienna, from 2002 to 2004. In 2004, Charpentier published her discovery of an RNA molecule involved in the regulation of virulence-factor synthesis in Streptococcus pyogenes. From 2004 to 2006 she was lab head and an assistant professor at the Department of Microbiology and Immunobiology. In 2006 she became a privatdozentin (Microbiology) and received her habilitation at the Centre of Molecular Biology. From 2006 to 2009 she worked as lab head and associate professor at the Max F. Perutz Laboratories.
Charpentier moved to Sweden and became lab head and associate professor at the Laboratory for Molecular Infection Medicine Sweden (MIMS), at Umeå University. She held the position of group leader from 2008 to 2013 and was visiting professor from 2014 to 2017. She moved to Germany to act as department head and W3 Professor at the Helmholtz Centre for Infection Research in Braunschweig and the Hannover Medical School from 2013 until 2015. In 2014 she became an Alexander von Humboldt Professor.
In 2015 Charpentier accepted an offer from the German Max Planck Society to become a scientific member of the society and a director at the Max Planck Institute for Infection Biology in Berlin. Since 2016, she has been an Honorary Professor at Humboldt University in Berlin; since 2018, she is the Founding and acting director of the Max Planck Unit for the Science of Pathogens. Charpentier retained her position as visiting professor at Umeå University until the end of 2017 when a new donation from the Kempe Foundations and the Knut and Alice Wallenberg Foundation allowed her to offer more young researchers positions within research groups of the MIMS Laboratory.
CRISPR/Cas9
Charpentier is best known for her Nobel-winning work of deciphering the molecular mechanisms of a bacterial immune system, called CRISPR/Cas9, and repurposing it into a tool for genome editing. In particular, she uncovered a novel mechanism for the maturation of a non-coding RNA which is pivotal in the function of CRISPR/Cas9. Specifically, Charpentier demonstrated that a small RNA called tracrRNA is essential for the maturation of crRNA.
In 2011, Charpentier met Jennifer Doudna at a research conference in San Juan, Puerto Rico, and they began a collaboration. Working with Doudna's laboratory, Charpentier's laboratory showed that Cas9 could be used to make cuts in any DNA sequence desired. The method they developed involved the combination of Cas9 with easily created synthetic "guide RNA" molecules. Synthetic guide RNA is a chimera of crRNA and tracrRNA; therefore, this discovery demonstrated that the CRISPR-Cas9 technology could be used to edit the genome with relative ease. Researchers worldwide have employed this method successfully to edit the DNA sequences of plants, animals, and laboratory cell lines. Since its discovery, CRISPR has revolutionized genetics by allowing scientists to edit genes to probe their role in health and disease and to develop genetic therapies with the hope that it will prove safer and more effective than the first generation of gene therapies.
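The targeting rule can be illustrated in code. The sketch below (illustrative only, not the authors' software) scans a DNA string for candidate S. pyogenes Cas9 sites, i.e. a 20-nucleotide protospacer immediately followed by an NGG PAM; only the given strand is scanned, and the example sequence is invented:

```python
# Find candidate Cas9 target sites: 20-nt protospacer followed by an NGG PAM.
import re

def cas9_sites(dna):
    """Yield (position, protospacer, PAM) for each 20-nt site with an NGG PAM."""
    # The lookahead keeps overlapping candidate sites.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
        yield m.start(), m.group(1), m.group(2)

seq = "TTGATCTGACTGATGCGTATGCTAGTCGGATCAGGCATT"
for pos, spacer, pam in cas9_sites(seq):
    print(pos, spacer, pam)
```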
In 2013, Charpentier co-founded CRISPR Therapeutics and ERS Genomics along with Shaun Foy and Rodger Novak.
Awards
In 2015, Time magazine designated Charpentier one of the Time 100 most influential people in the world (together with Jennifer Doudna).
Charpentier's awards include the Nobel Prize in Chemistry, the Breakthrough Prize in Life Sciences, the Louis-Jeantet Prize for Medicine, the Gruber Foundation International Prize in Genetics, the Leibniz Prize, the Tang Prize, the Japan Prize, and the Kavli Prize in Nanoscience. She has also won the BBVA Foundation Frontiers of Knowledge Award jointly with Jennifer Doudna and Francisco Mojica.
2009 – Theodor Körner Prize for Science and Culture
2011 – The Fernström Prize for young and promising scientists
2014 – Alexander von Humboldt Professorship
2014 – The Göran Gustafsson Prize for Molecular Biology (Royal Swedish Academy of Sciences)
2014 – Dr. Paul Janssen Award for Biomedical Research (shared with Jennifer Doudna)
2014 – The Jacob Heskel Gabbay Award (shared with Feng Zhang and Jennifer Doudna)
2015 – Time 100: Pioneers (shared with Jennifer Doudna)
2015 – The Breakthrough Prize in Life Sciences (shared with Jennifer Doudna)
2015 – Louis-Jeantet Prize for Medicine
2015 – The Ernst Jung Prize in Medicine
2015 – Princess of Asturias Awards (shared with Jennifer Doudna)
2015 – Gruber Foundation International Prize in Genetics (shared with Jennifer Doudna)
2015 – , from German National Academy of Science, Leopoldina
2015 – Massry Prize
2015 – The Family Hansen Award
2016 – Otto Warburg Medal
2016 – L'Oréal-UNESCO "For Women in Science" Award
2016 – Leibniz Prize from the German Research Foundation
2016 – Canada Gairdner International Award (shared with Jennifer Doudna and Feng Zhang)
2016 – Warren Alpert Foundation Prize
2016 – Paul Ehrlich and Ludwig Darmstaedter Prize (jointly with Jennifer Doudna)
2016 – Tang Prize (shared with Jennifer Doudna and Feng Zhang)
2016 – HFSP Nakasone Award (jointly with Jennifer Doudna)
2016 – Knight (Chevalier) French National Order of the Legion of Honour
2016 – Meyenburg Prize
2016 – Wilhelm Exner Medal
2016 – John Scott Award
2017 – BBVA Foundation Frontiers of Knowledge Award (jointly with Jennifer Doudna and Francisco Mojica)
2017 – Japan Prize (jointly with Jennifer Doudna)
2017 – Albany Medical Center Prize (jointly with Jennifer Doudna, Luciano Marraffini, Francisco Mojica, and Feng Zhang)
2017 – Pour le Mérite
2018 – Kavli Prize in Nanoscience (jointly with Jennifer Doudna and Virginijus Šikšnys)
2018 – Austrian Decoration for Science and Art
2018 – Bijvoet Medal of the Bijvoet Center for Biomolecular Research of Utrecht University
2018 – Harvey Prize (jointly with Jennifer Doudna and Feng Zhang)
2019 – Scheele Award of the Swedish Pharmaceutical Society
2019 – Knight Commander's Cross of the Order of Merit of the Federal Republic of Germany
2020 – Wolf Prize in Medicine (jointly with Jennifer Doudna)
2020 – Nobel Prize in Chemistry (jointly with Jennifer Doudna)
2024 – Golden Plate Award of the American Academy of Achievement
Honorary doctorate degrees
2016 – École Polytechnique Fédérale de Lausanne
2016 – KU, (Catholic University) Leuven, Belgium
2016 – New York University (NYU)
2017 – Faculty of Medicine, Umeå University, Sweden
2017 – University of Western Ontario, London, Canada
2017 – Hong Kong University of Science and Technology
2018 – Université catholique de Louvain, Belgium
2018 – University of Cambridge
2018 – University of Manchester
2019 – McGill University, Canada
2024 – University of Perugia, Perugia, Italy
Memberships
2014 – European Molecular Biology Organisation
2015 – National Academy of Sciences Leopoldina
2016 – Berlin-Brandenburg Academy of Sciences
2016 – Austrian Academy of Sciences
2016 – Royal Swedish Academy of Sciences
2017 – U.S. National Academy of Sciences, Foreign Associate
2017 – National Academy of Technologies of France
2017 – French Académie des sciences
2018 – European Academy of Sciences and Arts
2021 – Pontifical Academy of Sciences
2024 – Foreign Member of the Royal Society
In popular culture
In 2019, Charpentier was a featured character in the play STEM FEMMES by Philadelphia theater company Applied Mechanics.
In 2021, Walter Isaacson detailed the story of Jennifer Doudna and her collaboration with Charpentier leading to the discovery of CRISPR/CAS-9, in the biography The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race.
References
External links
Extensive biography of Emmanuelle Charpentier at the Max Planck Unit for the Science of Pathogens
Umeå University Staff Directory: Emmanuelle Charpentier
Molecular Infection Medicine Sweden – Short Curriculum Vitae of Emmanuelle Charpentier
Crispr Therapeutics: Scientific Founders
Emmanuelle Charpentier to become a Director at the Max Planck Institute for Infection Biology in Berlin
Nobel laureates in Chemistry
1968 births
Living people
People from Juvisy-sur-Orge
Bijvoet Medal recipients
French immunologists
French microbiologists
French Nobel laureates
French women academics
Foreign associates of the National Academy of Sciences
Kavli Prize laureates in Nanoscience
Knights Commander of the Order of Merit of the Federal Republic of Germany
L'Oréal-UNESCO Awards for Women in Science laureates
Members of the Pontifical Academy of Sciences
Members of the German National Academy of Sciences Leopoldina
Members of the European Molecular Biology Organization
Recipients of the Pour le Mérite (civil class)
Theodor Körner Prize recipients
Academic staff of Umeå University
Wolf Prize in Medicine laureates
Women biochemists
Women microbiologists
Women Nobel laureates
Genome editing
Genetic engineering
Non-coding RNA
Scientific American people
Members of the Royal Swedish Academy of Sciences
Max Planck Institute directors
Foreign members of the Royal Society | Emmanuelle Charpentier | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 2,346 | [
"Genetics techniques",
"Biological engineering",
"Women biochemists",
"Genome editing",
"Genetic engineering",
"Women Nobel laureates",
"Molecular biology",
"Biochemists",
"Women in science and technology"
] |
47,056,807 | https://en.wikipedia.org/wiki/Stephen%20Myers%20%28engineer%29 | Stephen Myers (born 3 August 1946) is an electronic engineer who works in high-energy physics.
Life
Myers earned a bachelor's degree in electrical and electronic engineering in 1968 from Queen's University, Belfast, and completed his Ph.D. there in 1972. Thereafter he worked at CERN.
In September 2008, he was appointed CERN Director of Accelerators and Technology, and in 2014, he was appointed Head of CERN Medical Applications.
He has been awarded honorary doctorates by the University of Geneva in 2001, by Queen’s University, Belfast in 2003, and by Dublin City University in 2017. In 2013 Queen's University, Belfast named him an honorary professor.
He was elected as a fellow of the Institute of Physics in 2003, and of the Royal Academy of Engineering in 2012.
He became an honorary member of the European Physical Society in 2013, and of the Royal Irish Academy in 2015.
He was awarded the Duddell Medal and Prize of the Institute of Physics in 2003.
In 2010 he was awarded the International Particle Accelerators Lifetime Achievement Prize "for his numerous outstanding contributions to the design, construction, commissioning, performance optimization, and upgrade of energy-frontier colliders - in particular ISR, LEP, and LHC - and to the wider development of accelerator science".
With two other CERN directors he was jointly awarded the EPS Edison Volta Prize in 2012 and the Prince of Asturias Prize of Spain in 2013.
He became an Officer of the Order of the British Empire in 2013.
External links
Scientific publications of Stephen Myers on INSPIRE-HEP
References
Living people
1946 births
Alumni of Queen's University Belfast
British nuclear physicists
People associated with CERN
Experimental particle physics
Myers
Officers of the Order of the British Empire
People educated at St Malachy's College | Stephen Myers (engineer) | [
"Physics"
] | 362 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
47,056,864 | https://en.wikipedia.org/wiki/Jean-Michel%20Raimond | Jean-Michel Raimond (born in Orléans) is a French physicist working in the field of quantum mechanics.
Biography
Raimond enrolled at the École normale supérieure (rue d'Ulm) (ENS) in 1975. After graduating with a DEA in atomic and molecular physics, he began his first research work on superradiance and Rydberg atoms.
He became Research Associate and Research Fellow at the Centre national de la recherche scientifique (CNRS), working under Serge Haroche towards his 1984 thesis ("Radiative properties of Rydberg atoms in a resonant cavity").
Since 1988, he has taught at the Université Pierre-et-Marie-Curie.
From 1994 to 1999, he was a junior member of the Institut universitaire de France.
From 2001 to 2011, he was a senior member and held the chair of quantum optics.
From 2004 to 2009, he was head of the Department of Physics at the École normale supérieure (rue d'Ulm).
Raimond specialised in atomic physics and quantum optics as a member of the Kastler-Brossel Laboratory, in a group which he ran with the 2012 Nobel Prize winner Serge Haroche and Michel Brune.
He became interested in Rydberg atoms because their relatively large size and sensitivity to microwave radiation make them particularly suited to studies of matter/energy interaction. He demonstrated that these atoms, coupled to superconducting cavities containing a few photons, are ideal systems for testing the laws of quantum decoherence and for demonstrating the possibility of constructing the components of quantum logic, with promising results for their use in quantum information processing.
His most recent work, cited in the 2012 Nobel Prize-winning research, allows photons in the cavity to be counted without being destroyed, a direct illustration of an ideal quantum measurement. This ideal measurement also helps combat quantum decoherence through a quantum feedback scheme which keeps the number of photons in the cavity constant.
Raimond is the son of Michel Raimond, late professor of French literature at the Sorbonne.
Awards
Prix Fernand Holweck by the Académie des sciences (1985)
Grand prix Ampère de l'Électricité de France, given by the Académie des sciences, with M. Brune (1998)
Grand Prix Jean-Ricard by the Société française de physique (2007)
Gay-Lussac-Humboldt research award by the Fondation Alexander von Humboldt (2012)
European Physical Society Edison-Volta prize (2014)
Chevalier de la Légion d'honneur
Officer of the Ordre des Palmes Académiques
Principal publications
In this experiment, for the first time, wave function collapse was observed using quantum mechanical methods.
Peer-reviewed article describing in particular quantum logic operations.
First ideal measurement (i.e. a quantum non-demolition measurement) of the number of photons in a cavity.
First demonstration of a continuous quantum feedback scheme.
References
External links
Cavity Quantum Electro Dynamics
Conférence Ernest - Promenade dans le monde quantique
1955 births
Living people
École Normale Supérieure alumni
Academic staff of the École Normale Supérieure
French physicists
Scientists from Orléans
Quantum physicists
Fellows of the American Physical Society | Jean-Michel Raimond | [
"Physics"
] | 654 | [
"Quantum physicists",
"Quantum mechanics"
] |
46,204,126 | https://en.wikipedia.org/wiki/Metatranscriptomics | Metatranscriptomics is the set of techniques used to study gene expression of microbes within natural environments, i.e., the metatranscriptome.
While metagenomics focuses on studying the genomic content and on identifying which microbes are present within a community, metatranscriptomics can be used to study the diversity of the active genes within such community, to quantify their expression levels and to monitor how these levels change in different conditions (e.g., physiological vs. pathological conditions in an organism). The advantage of metatranscriptomics is that it can provide information about differences in the active functions of microbial communities that would otherwise appear to have similar make-up.
Introduction
The microbiome has been defined as a microbial community occupying a well-defined habitat. These communities are ubiquitous and can play a key role in maintenance of the characteristics of their environment, and an imbalance in these communities can negatively affect the activities of the setting in which they reside. To study these communities, and to then determine their impact and correlation with their niche, different omics approaches have been used. While metagenomics can help researchers generate a taxonomic profile of the sample, metatranscriptomics provides a functional profile by analysing which genes are expressed by the community. It is possible to infer what genes are expressed under specific conditions, and this can be done using functional annotations of expressed genes.
Function
Since metatranscriptomics focuses on which genes are expressed, it enables the characterization of the active functional profile of the entire microbial community. An overview of the gene expression in a given sample is obtained by capturing the total mRNA of the microbiome and performing whole-metatranscriptome shotgun sequencing.
Tools and techniques
Although microarrays can be exploited to determine the gene expression profiles of some model organisms, next-generation sequencing and third-generation sequencing are the preferred techniques in metatranscriptomics. The protocol that is used to perform a metatranscriptome analysis may vary depending on the type of sample that needs to be analysed. Indeed, many different protocols have been developed for studying the metatranscriptome of microbial samples. Generally, the steps include sample harvesting, RNA extraction (different extraction methods for different kinds of samples have been reported in the literature), mRNA enrichment, cDNA synthesis and preparation of metatranscriptomic libraries, sequencing and data processing and analysis. mRNA enrichment is one of the most technically challenging steps, for which different strategies have been proposed:
removing rRNA through Ribosomal RNA capture
using a 5'→3' exonuclease to degrade processed RNAs (mostly rRNA and tRNA)
adding poly(A) to mRNAs by using a polyA polymerase (in E. coli)
using antibodies to capture mRNAs that bind to specific proteins
The last two strategies are not recommended as they have been reported to be highly biased.
Computational analysis
A typical metatranscriptome analysis pipeline:
maps reads to a reference genome, or
performs de novo assembly of the reads into transcript contigs and supercontigs
The first strategy maps reads to reference genomes in databases, to collect information that is useful for deducing the relative expression of individual genes. Metatranscriptomic reads are mapped against databases using alignment tools, such as Bowtie2, BWA, and BLAST. The results are then annotated using resources such as GO, KEGG, COG, and Swiss-Prot. The final analysis of the results is carried out depending on the aim of the study. One of the latest metatranscriptomic techniques is stable isotope probing (SIP), which has been used to retrieve specific targeted transcriptomes of aerobic microbes in lake sediment. The limitation of this strategy is its reliance on the information of reference genomes in databases. A sketch of the mapping-and-counting step follows.
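A minimal sketch of the alignment-and-count step of this reference-based strategy, assuming Bowtie2 is installed; `reference_index` and `mrna_reads.fastq` are hypothetical placeholder names, and a real pipeline surrounds this step with quality control, rRNA removal and functional annotation:

```python
# Align reads with Bowtie2 (-x index prefix, -U unpaired reads, -S SAM output),
# then count how many reads map to each reference sequence.
import subprocess
from collections import Counter

subprocess.run(
    ["bowtie2", "-x", "reference_index",
     "-U", "mrna_reads.fastq", "-S", "aligned.sam"],
    check=True,
)

hits = Counter()
with open("aligned.sam") as sam:
    for line in sam:
        if line.startswith("@"):       # skip SAM header lines
            continue
        rname = line.split("\t")[2]    # column 3 (RNAME): reference hit
        if rname != "*":               # '*' marks an unmapped read
            hits[rname] += 1

print(hits.most_common(10))            # most heavily expressed references
```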
The second strategy retrieves the abundance in the expression of the different genes by assembling metatranscriptomic reads into longer fragments called contigs using different software. The Trinity software for RNA-seq, in comparison with other de novo transcriptome assemblers, was reported to recover more full-length transcripts over a broad range of expression levels, with a sensitivity similar to methods that rely on genome alignments. This is particularly important in the absence of a reference genome.
A quantitative pipeline for transcriptomic analysis was developed by Li and Dewey and called RSEM (RNA-Seq by Expectation Maximization). It can work as stand-alone software or as a plug-in for Trinity. RSEM starts with a reference transcriptome or assembly along with RNA-Seq reads generated from the sample and calculates normalized transcript abundance (meaning the number of RNA-Seq reads corresponding to each reference transcript or assembly).
Although both Trinity and RSEM were designed for transcriptomic datasets (i.e., obtained from a single organism), it may be possible to apply them to metatranscriptomic data (i.e., obtained from a whole microbial community).
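The expectation-maximization idea behind RSEM can be sketched in a few lines of Python. This is a toy version: the reads and transcript names are invented, and real RSEM additionally models alignment positions, quality scores and transcript-length normalization:

```python
# Toy EM for transcript abundance: (E) split each ambiguous read among its
# candidate transcripts in proportion to current abundance estimates, then
# (M) re-estimate abundances from the fractional assignments.
from collections import defaultdict

alignments = {
    "read1": ["tA"],
    "read2": ["tA"],
    "read3": ["tA", "tB"],   # ambiguous: aligns to both transcripts
    "read4": ["tB"],
}
transcripts = sorted({t for cands in alignments.values() for t in cands})
theta = {t: 1.0 / len(transcripts) for t in transcripts}  # uniform start

for _ in range(100):
    expected = defaultdict(float)
    for read, cands in alignments.items():
        z = sum(theta[t] for t in cands)
        for t in cands:                       # E-step: fractional assignment
            expected[t] += theta[t] / z
    total = sum(expected.values())
    theta = {t: expected[t] / total for t in transcripts}  # M-step

print(theta)  # converges to roughly {'tA': 0.667, 'tB': 0.333}
```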
Bioinformatics
The use of computational analysis tools has become more important as DNA sequencing capabilities have grown, particularly in metagenomic and metatranscriptomic analysis, which can generate a huge volume of data. Many different bioinformatic pipelines have been developed for these purposes, often as open source platforms such as HUMAnN and the more recent HUMAnN2, MetaTrans, SAMSA, Leimena-2013 and mOTUs2.
HUMAnN2
HUMAnN2 is a bioinformatic pipeline designed from the previous HUMAnN software, which was developed during the Human Microbiome Project (HMP), implementing a “tiered search” approach. In the first tier, HUMAnN2 screens DNA or RNA reads with MetaPhlAn2 in order to identify already-known microbes and constructing a sample-specific database by merging pangenomes of annotated species; in the second tier, the algorithm performs a mapping of the reads against the assembled pangenome database; in the third tier, non-aligned reads are used for a translated search against a protein database.
MetaTrans
MetaTrans is a pipeline that exploits multithreading to improve efficiency. Data are obtained from paired-end RNA-Seq, mainly from 16S rRNA for taxonomy and mRNA for gene expression levels. The pipeline is divided into four major steps. First, paired-end reads are filtered for quality control purposes, then sorted and filtered for taxonomic analysis (by removal of tRNA sequences) or functional analysis (by removal of both tRNA and rRNA reads). For the taxonomic analysis, sequences are mapped against the 16S rRNA Greengenes v13.5 database using SOAP2, while for functional analysis sequences are mapped against a functional database such as MetaHIT-2014, again using SOAP2. This pipeline is highly flexible, since it offers the possibility of using third-party tools and improving single modules as long as the general structure is preserved.
SAMSA
This pipeline is designed specifically for metatranscriptomics data analysis, by working in conjunction with the MG-RAST server for metagenomics. This pipeline is simple to use, requires low technical preparation and computational power and can be applied to a wide range of microbes. First, sequences from raw sequencing data are filtered for quality and then submitted to MG-RAST (which performs further steps such as quality control, gene calling, clustering of amino acid sequences and use of sBLAT on each cluster to detect the best matches). Matches are then aggregated for taxonomic and functional analysis purposes.
Leimena-2013
This pipeline does not have an official name and is usually referred to by the first author of the article in which it is described. It employs alignment tools such as BLAST and MegaBLAST. Reads are clustered into groups of identical sequences and then processed for in-silico removal of tRNA and rRNA sequences. The remaining reads are mapped to NCBI databases using BLAST and MegaBLAST and classified by their bitscore. Sequences with higher bitscores are used to predict phylogenetic origin and function, while lower-scoring reads are aligned with the more sensitive BLASTX and can eventually be aligned against protein databases so that their function can be characterized.
mOTUs2
The mOTUs2 profiler, which is based on essential housekeeping genes, is demonstrably well-suited for quantification of basal transcriptional activity of microbial community members. Depending on environmental conditions, the number of transcripts per cell varies for most genes. An exception to this are housekeeping genes that are expressed constitutively and with low variability under different conditions. Thus, the abundance of transcripts from such genes strongly correlate with the abundance of active cells in a community.
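A toy sketch of that reasoning in Python (the taxa and counts are invented, and mOTUs2 itself relies on curated marker-gene catalogues rather than this simplification): take the median transcript count over a taxon's housekeeping marker genes as a proxy for its active cells, then normalize across taxa.

```python
# Median housekeeping-gene transcript counts as a proxy for the abundance
# of transcriptionally active cells of each taxon.
from statistics import median

housekeeping_counts = {
    "Bacteroides_sp": [120, 98, 133, 110],  # per-marker-gene read counts
    "Firmicutes_sp":  [40, 35, 52, 44],
}

activity = {taxon: median(counts) for taxon, counts in housekeeping_counts.items()}
total = sum(activity.values())
profile = {taxon: value / total for taxon, value in activity.items()}
print(profile)  # relative basal transcriptional activity per taxon
```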
Microarrays
Another method that can be exploited for metatranscriptomic purposes is tiling microarrays. In particular, microarrays have been used to measure microbial transcription levels, to detect new transcripts and to obtain information about the structure of mRNAs (for instance, the UTR boundaries). Recently, it has also been used to find new regulatory ncRNA. However, microarrays are affected by some pitfalls:
requirement of probe design
low sensitivity
prior knowledge of gene targets.
RNA-Seq can overcome these limitations: it does not require any previous knowledge of the genomes to be analysed, and it provides high-throughput validation of gene prediction, structure, and expression. By combining the two approaches, it is therefore possible to obtain a more complete representation of the bacterial transcriptome.
Limitations
With its dominating abundance, ribosomal RNA strongly reduces the coverage of mRNA (usually the main focus of transcriptomic studies) in the total collected RNA.
Extraction of high-quality RNA from some biological or environmental samples (such as feces) can be difficult.
Instability of mRNA that compromises sample integrity even before sequencing.
Experimental issues can affect the quantification of differences in expression among multiple samples: they can influence the integrity and amount of input RNA, the amount of rRNA remaining in the samples, size selection, and gene models. Moreover, molecular techniques are very prone to artefacts.
Difficulties in differentiating between host and microbial RNA, although commercial kits for microbial enrichment are available. This may also be done in silico if a reference genome is available for the host.
Transcriptome reference databases are limited in their coverage.
Generally, large populations of cells are exploited in metatranscriptomic analysis, so it is difficult to resolve important variances that can exist between subpopulations. High variability in pathogen populations was demonstrated to affect disease progression and virulence.
Both for microarray and RNA-Seq, it is difficult to set a real threshold to classify genes as “expressed”, due to the high dynamic range in gene expression.
The presence of mRNA is not always associated with the actual presence of the respective protein.
Applications
Human gut microbiome
The gut microbiome has emerged in recent years as an important player in human health. Its prevalent functions are related to the fermentation of indigestible food components, competition with pathogens, strengthening of the intestinal barrier, and stimulation and regulation of the immune system.
Although much has been learnt about the microbiome community in recent years, the wide diversity of microorganisms and molecules in the gut requires new tools to enable new discoveries. By focusing on changes in the expression of genes, metatranscriptomics can generate a more dynamic picture of the state and activity of the microbiome than metagenomics. It has been observed that metatranscriptomic functional profiles are more variable than might have been expected from metagenomic information alone. This suggests that non-housekeeping genes are not stably expressed in situ.
One example of metatranscriptomic application is in the study of the gut microbiome in inflammatory bowel disease. Inflammatory bowel disease (IBD) is a group of chronic diseases of the digestive tract that affects millions of people worldwide.
Several human genetic mutations have been linked to an increased susceptibility to IBD, but additional factors are needed for the full development of the disease.
Regarding the relationship between IBD and the gut microbiome, it is known that there is a dysbiosis in patients with IBD, but microbial taxonomic profiles can differ greatly among patients, making it difficult to implicate specific microbial species or strains in disease onset and progression. In addition, the composition of the gut microbiome is highly variable over time among people, with more pronounced variations in patients with IBD.
The functional potential of an organism, meaning the genes and pathways encoded in its genome, provides only indirect information about the level or extent of activation of such functions. So, the measurement of functional activity (gene expression) is critical to understanding the mechanism of gut microbiome dysbiosis.
Alterations in transcriptional activity in IBD, established from rRNA expression, indicate that some bacterial populations are active in patients with IBD, while other groups are inactive or latent.
A metatranscriptomics analysis measuring the functional activity of the gut microbiome reveals insights only partially observable in metagenomic functional potential, including disease-linked observations for IBD. It has been reported that many IBD-specific signals are either more pronounced or only detectable on the RNA level.
These altered expression profiles are potentially the result of changes in the gut environment in patients with IBD, which include increased levels of inflammation, higher concentrations of oxygen and a diminished mucous layer.
Metatranscriptomics has the advantage of allowing researchers to skip the assaying of biochemical products in situ (such as mucus or oxygen) and enables the evaluation of the effects of environmental changes on microbial expression patterns in vivo for large human populations. In addition, it can be coupled with longitudinal sampling to associate modulation of activity with disease progression. Indeed, it has been shown that while a particular pathway may remain stable over time at the genomic level, the corresponding expression varies with disease severity. This suggests that microbial dysbiosis affects gut health through changes in the transcriptional programmes of a stable community. In this way, metatranscriptomic profiling emerges as an important tool for understanding the mechanisms of that relationship.
Some technical limitations of RNA measurements in stool are related to the fact that the extracted RNA can be degraded and, even if intact, represents only the organisms present in the stool sample.
Other
Directed culturing: has been used to understand the nutritional preferences of organisms in order to allow the preparation of a proper culture medium, resulting in the successful isolation of microbes in vitro.
Identify potential virulence factors: through comparative transcriptomics, in order to compare different transcriptional responses of related strains or species after specific stimuli.
Identify host-specific biological processes and interactions. For this purpose, it is important to develop new technologies which allow the simultaneous detection of changes in the expression levels of multiple genes.
Examples of techniques applied:
Microarrays: allow the monitoring of changes in the expression levels of many genes in parallel for both host and pathogen. The first microarray approaches provided the first global analyses of gene expression changes in pathogens such as Vibrio cholerae, Borrelia burgdorferi, Chlamydia trachomatis, Chlamydia pneumoniae and Salmonella enterica, revealing the strategies used by these microorganisms to adapt to the host.
In addition, microarrays provided the first global insights into the host innate immune response to PAMPs, as well as the effects of bacterial infection on the expression of various host factors.
However, the simultaneous detection of both organisms by microarray can be problematic.
Problems:
Probe selection (hundreds of millions of different probes)
Cross-hybridization
Need of expensive chips (with the proper design; high-density arrays)
Require the pathogen and host cells to be physically separated before gene expression analysis (eukaryotic transcriptomes are much larger than those of pathogens, so the signal from pathogen RNAs can be masked).
Loss of RNA molecules during lysis of the eukaryotic cells.
Dual RNA-Seq: this technique allows the simultaneous study of both host and pathogen transcriptomes. It is possible to monitor the expression of genes at different time points of the infection process; in this way, the changes in the cellular networks of both organisms can be studied, from initial contact through to manipulation of the host (host-pathogen interplay).
Potential advantages: no need for expensive chips
Probe-independent approach (RNA-seq provides transcript information without prior knowledge of mRNA sequences)
High sensitivity.
Possibility of studying the expression levels of even unknown genes under different conditions
Moreover, RNA-Seq is an important approach for identifying coregulated genes, enabling the organization of pathogen genomes into operons. Indeed, genome annotation has been done for some eukaryotic pathogens, such as Candida albicans, Trypanosoma brucei and Plasmodium falciparum.
Despite the increasing sensitivity and depth of sequencing now available, there are still few published RNA-Seq studies concerning the response of mammalian host cells to infection.
References
Bioinformatics
Genomics
Environmental microbiology
Microbiology techniques
Metagenomics | Metatranscriptomics | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 3,596 | [
"Bioinformatics",
"Biological engineering",
"Microbiology techniques",
"Environmental microbiology"
] |
46,205,772 | https://en.wikipedia.org/wiki/Expanding%20monomer | Expanding monomers are monomers which increase in volume (expand) during polymerization. They can be added to monomer formulations to counteract the usual volume shrinkage during polymerization, so as to manufacture products with higher quality and durability. Volume shrinkage is above all a problem for thermosets, which cannot be melted and are therefore of fixed shape once polymerization is complete.
Background
The quality of thermosets (crosslinked polymers) is determined by numerous factors, such as the purity of the monomer used, polymerization time and temperature, the stoichiometry of comonomers (when used), and the type and quantity of catalyst or initiator. Another, rarely considered, factor is the volume shrinkage (and density increase) during polymerization; in fact, all polymers shrink to some degree during polymerization. This volume shrinkage can lead (after the gel point) to mechanical stress within the polymer (internal stress), which may cause microfractures, worse mechanical properties, or detachment from the substrate. Expanding monomers occupy a greater volume after polymerization than before and were designed to counteract the volumetric shrinkage when added. For other applications, such as precision castings or dental fillings, a slight expansion during polymerization is desirable for complete filling of a given mold. Nonetheless, for some applications even a small shrinkage can be desirable, as for one-piece molds, to allow easy removal of the part. Expanding monomers are used to influence, or control, the volume change during polymerization.
Reason for shrinkage
Shrinkage is observed during both the polymerization and the crosslinking (curing) of monomers. This volume shrinkage is caused by various factors. The main reason is that the monomers move from an intermolecular van der Waals distance to a covalent distance when a covalent bond is formed during polymerization. This can be illustrated with the example of ethene polymerization.
It can be seen that the distance between two monomers changes from the van der Waals distance (3.40 Å) to the covalent distance of a single bond (1.54 Å), a net change of -1.86 Å. The change from two double bonds (1.34 Å) to single bonds (again 1.54 Å) results in a slight expansion (+0.2 Å each). Both effects added together still result in substantial shrinkage, as the following tally shows.
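A back-of-the-envelope tally of the bond-length changes quoted above, for two ethene units joining:

$$\Delta d \approx \underbrace{(1.54 - 3.40)}_{\text{new C-C bond}} + \underbrace{2\,(1.54 - 1.34)}_{\text{two C=C to C-C}} = -1.86 + 0.40 = -1.46\ \text{Å}$$

The net change remains strongly negative, which is why addition polymerization of ethene shrinks despite the slight lengthening of the former double bonds.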
Furthermore, the entropy change during polymerization and the packing density play a minor role, as the polymer is more closely packed than the monomer. In step-growth polymerization (condensation reactions), small molecules are eliminated, which also contribute to shrinkage when removed. At elevated temperatures, thermal aging plays a role as well: unreacted monomer can polymerize, and degradation products and other small molecules are released.
Conventional methods for shrinkage reduction
Considerable research has been done on reducing shrinkage during polymerization. Conventional methods include the addition of fillers, the use of prepolymers, the addition of reactive diluents, and special crosslinking agents. As a general rule, the lower the reactive portion, the lower the shrinkage of the resin during polymerization.
Fillers (silica, mica, quartz, etc.) reduce shrinkage in proportion to the amount used, since the volume-stable filler replaces part of the shrinking polymer. The viscosity increase caused by fillers is disadvantageous, since it restricts the flow of resins and mold filling. A further problem is their tendency to settle.
Prepolymers have already undergone polymerization to some extent; however, they are still viscous and not yet gelled. As prepolymers are already partially polymerized, shrinkage during the final cure is reduced. The higher the molecular weight of the monomers used, the lower the volume shrinkage.
The addition of reactive diluents can likewise reduce shrinkage in proportion to the amount added.
Concept of expanding monomers
Ring effects
The expanding monomers were developed from the observation that the shrinkage during ring-opening polymerization is lower than in any other kind of polymerization, shrinkage increasing in the order:
ring-opening polymerization < chain-growth polymerization < step-growth polymerization
This is mainly based on the fact that cyclic compounds possess higher densities than their linear counterparts, as a comparison of cyclic and linear hydrocarbons illustrates (see table). A hypothetical ring-opening of cyclobutane to n-butane would result in a volume expansion of approximately 15%. The polymerization of a cyclic compound therefore causes a smaller volume shrinkage, because cyclic compounds are already relatively dense.
It can furthermore be seen that the larger the ring, the larger the expansion; this first effect is called the ring-size effect. However, if the cyclic hydrocarbons were hypothetically polymerized (to polyethylene), an overall volume shrinkage would still occur (as polyethylene has a density of 0.92 g⋅cm-3). Nevertheless, this shrinkage decreases with increasing ring size. Since mass is conserved, the volume change can be estimated directly from the densities, as shown below.
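Because the mass $m$ is unchanged on (hypothetical) ring opening and $V = m/\rho$, the relative volume change follows directly from the densities (an elementary relation, not taken from the source):

$$\frac{\Delta V}{V} = \frac{V_{\text{open}} - V_{\text{ring}}}{V_{\text{ring}}} = \frac{\rho_{\text{ring}}}{\rho_{\text{open}}} - 1$$

The quoted ~15% expansion for cyclobutane to n-butane thus simply reflects that cyclobutane is about 15% denser than n-butane.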
However, for a real polymerization, the ring strain also has to be kept in mind. Ring strain decreases with increasing ring size and reaches nearly zero in cyclohexane. This can be illustrated by the fact that oxirane polymerizes readily while oxolane is far less reactive.
A second effect is the rings-per-unit-volume effect: the volume change during ring-opening polymerization is also influenced by the number of polymerizable rings per monomer. This can be illustrated with cyclopentene, the cyclopentene dimer, adamantane and poly(cyclopentene). The hypothetical conversion of cyclopentene to poly(cyclopentene) would result in a volume shrinkage of 15.38%, while the conversion of the cyclopentene dimer leads to an expansion of 5.21% and the conversion of adamantane even to an expansion of 14.15%.
The third effect is the ring-opening effect, which can be illustrated with the polymerization of oxirane. During the polymerization, two molecules move from van der Waals distance to covalent distance, which taken alone would result in a shrinkage of approximately 40% (as visualized above). At the same time, the ring opens and moves from covalent distance to near van der Waals distance, which would result in an expansion of 17%. The overall volume change is thus a shrinkage of about 23%.
This reduced shrinkage during ring-opening polymerization itself therefore depends on the ring-size effect, the rings-per-unit-volume effect, and the ring-opening effect.
Expanding monomer concept
Derived from these ring effects, the design of the expanding monomers is based on bicyclic compounds. A net expansion is reached when, for each bond that shifts from van der Waals to covalent distance, at least two bonds shift from covalent to near van der Waals distance, as shown in the following picture.
It can be seen that bond a) and bond b) are broken, thereby changing from covalent to near van der Waals distance. At the same time, bond c) is formed between two monomers, a change from van der Waals to covalent distance.
Three requirements follow: the rings of the bicyclic monomer must be fused (the rings share at least one atom), each ring must contain at least one non-carbon atom, and the rings must open in an asymmetrical manner (meaning, for example, that one oxygen forms a carbonyl group and the other an ether group). Compound classes which fulfil these requirements are spiro orthoesters, spiro orthocarbonates, bicyclic ketal lactones and bicyclic orthoesters.
Overview, synthesis and polymerization
Most expanding monomers are orthoesters, either spiro orthoesters, bicyclic orthoesters or orthocarbonates. Some expanding monomers are lactones. These classes are listed in the following table.
Synthesis
The literature describes three routes for the synthesis of orthoester-based expanding monomers. The first is the reaction of an epoxide with a lactone:
The epoxide cyclohexene oxide and the lactone γ-butyrolactone react to give the spiro orthoester spiro-7-9-dioxacyclo[4.3.0]nonane-8,2'1'-oxacyclo-pentane.
The reaction of an epoxide with a carbonate to form a spiro orthocarbonate is also possible and described in the literature.
The second possibility is transesterification:
2-Benzyl-1,3-propanediol and tetraethyl orthocarbonate react to give 3,9-bis(phenylmethyl)-1,5,7,11-tetraoxaspiro[5.5]undecane.
A condensation analogous to an acetalization reaction is also possible:
An ethanediol derivative and γ-butyrolactone react to give a derivative of 1,4,6-trioxa-spiro[4.4]nonane.
The third possibility is using dibutyltin oxide and carbon disulfide:
1,3-Propanediol reacts with dibutyltin oxide to give 2,2-dibutyl-1,3,2-dioxastannane, and with carbon disulfide to give the cyclic sulfite of 1,3-propanediol. Together these form 1,5,7,11-tetraoxa-spiro[5.5]undecane.
Polymerization
Most expanding monomers are polymerized cationically, some anionically and very few radically. When homopolymerized, spiro orthoesters form polyether polyesters.
The reaction mechanism is not yet clear in detail, as several side reactions take place. Expanding monomers can not only be homopolymerized, as shown here, but also copolymerized with other monomers to counteract the shrinkage of the latter.
Usually a Lewis acid such as boron trifluoride etherate is used both for the synthesis of the orthoester and for the polymerization. The same applies to spiro orthocarbonates and bicyclic orthoesters. All three classes are, depending on their structure, very sensitive to moisture.
Application
Expanding monomers are of interest as matrix resins in radically polymerized dental fillings, in high-strength composites (e.g. epoxy resins), adhesives, coatings, precision castings, and sealant materials, where they counteract shrinkage during polymerization. This is necessary in dental fillings because polymerization shrinkage and the subsequent contraction stress in the resin composite and at the bonding interface may lead to debonding, microleakage, post-operative sensitivity, compromised physical properties of the material, and even cracks in healthy tooth structure. In the other applications named, they remedy similar problems.
More recently, the UV-induced photopolymerization of spiro orthocarbonates has been a subject of investigation.
References
Monomers | Expanding monomer | [
"Chemistry",
"Materials_science"
] | 2,380 | [
"Monomers",
"Polymer chemistry"
] |
41,024,386 | https://en.wikipedia.org/wiki/Finite%20volume%20method%20for%20unsteady%20flow | Unsteady flows are characterized as flows in which the properties of the fluid are time dependent. This is reflected in the governing equations by the presence of the time derivatives of those properties.
To study the finite volume method for unsteady flow, we start from the governing equation.
Governing Equation
The conservation equation for the transport of a scalar $\varphi$ in unsteady flow has the general form

$$\frac{\partial(\rho\varphi)}{\partial t} + \operatorname{div}(\rho\varphi\mathbf{u}) = \operatorname{div}(\Gamma\,\operatorname{grad}\varphi) + S_\varphi$$

where $\rho$ is the density and $\varphi$ is the transported flow property in conservative form, $\Gamma$ is the diffusion coefficient and $S_\varphi$ is the source term.
$\operatorname{div}(\rho\varphi\mathbf{u})$ is the net rate of flow of $\varphi$ out of the fluid element (convection),
$\operatorname{div}(\Gamma\,\operatorname{grad}\varphi)$ is the rate of increase of $\varphi$ due to diffusion,
$S_\varphi$ is the rate of increase of $\varphi$ due to sources, and
$\partial(\rho\varphi)/\partial t$ is the rate of increase of $\varphi$ of the fluid element (transient).
The first term of the equation reflects the unsteadiness of the flow and is absent in the case of steady flows. The finite volume integration of the governing equation is carried out over a control volume and also over a finite time step ∆t.
The control volume integration of the steady part of the equation is similar to the integration of the steady-state governing equation; we need to focus on the integration of the unsteady component. To get a feel for the integration technique, we refer to the one-dimensional unsteady heat conduction equation

$$\rho c \frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\left(k\frac{\partial T}{\partial x}\right) + S$$
Now, holding the assumption that the temperature at the node prevails over the entire control volume, the left side of the equation can be written as

$$\int_{t}^{t+\Delta t}\int_{CV}\rho c\,\frac{\partial T}{\partial t}\,dV\,dt = \rho c\left(T_P - T_P^{0}\right)\Delta V$$

where the superscript 0 denotes values at time $t$; a first-order backward differencing scheme has been used for the time derivative. The right-hand side of the equation becomes

$$\int_{t}^{t+\Delta t}\left[\frac{k_e\left(T_E - T_P\right)}{\delta x_{PE}} - \frac{k_w\left(T_P - T_W\right)}{\delta x_{WP}} + \bar{S}\,\Delta V\right]dt$$

To evaluate this integral we use a weighting parameter $\theta$ between 0 and 1 and write the integral of $T_P$ as

$$I_T = \int_{t}^{t+\Delta t} T_P\,dt = \left[\theta T_P + (1-\theta)T_P^{0}\right]\Delta t$$

The exact form of the final discretised equation depends on the value of $\theta$. Since $\theta$ ranges over $0 \le \theta \le 1$, the scheme used to calculate $T_P$ depends on the chosen value of $\theta$. Thus the general discretised equation is

$$a_P T_P = a_W\left[\theta T_W + (1-\theta)T_W^{0}\right] + a_E\left[\theta T_E + (1-\theta)T_E^{0}\right] + \left[a_P^{0} - (1-\theta)a_W - (1-\theta)a_E\right]T_P^{0} + b$$

where $a_P = \theta(a_W + a_E) + a_P^{0}$, $a_P^{0} = \rho c\,\Delta x/\Delta t$, $a_W = k_w/\delta x_{WP}$, $a_E = k_e/\delta x_{PE}$ and $b = \bar{S}\,\Delta x$.
Different Schemes
1. Explicit scheme: in the explicit scheme the source term is linearised as $b = S_u + S_p T_P^{0}$ and $\theta = 0$ is substituted, giving the explicit discretisation

$$a_P T_P = a_W T_W^{0} + a_E T_E^{0} + \left[a_P^{0} - (a_W + a_E - S_p)\right]T_P^{0} + S_u$$

where $a_P = a_P^{0} = \rho c\,\Delta x/\Delta t$. One thing worth noting is that the right side contains only values at the old time step, and hence the left side can be calculated by forward marching in time. The scheme is based on backward differencing, and its Taylor series truncation error is first order with respect to time. All coefficients need to be positive. For constant $k$ and uniform grid spacing, this condition may be written as

$$\Delta t < \rho c\,\frac{(\Delta x)^2}{2k}$$

This inequality sets a stringent condition on the maximum time step that can be used and represents a serious limitation on the scheme. It becomes very expensive to improve the spatial accuracy, because the maximum possible time step needs to be reduced as the square of $\Delta x$.
2. Crank–Nicolson scheme: the Crank–Nicolson method results from setting $\theta = 1/2$. The discretised unsteady heat conduction equation becomes

$$a_P T_P = a_E\left[\frac{T_E + T_E^{0}}{2}\right] + a_W\left[\frac{T_W + T_W^{0}}{2}\right] + \left[a_P^{0} - \frac{a_E}{2} - \frac{a_W}{2}\right]T_P^{0} + b$$

where

$$a_P = \frac{a_E + a_W}{2} + a_P^{0}$$

and $b$ is the linearised source term. Since more than one unknown value of $T$ at the new time level is present in the equation, the method is implicit, and simultaneous equations for all node points need to be solved at each time step. Although schemes with $\theta \ge 1/2$, including the Crank–Nicolson scheme, are unconditionally stable for all values of the time step, it is more important to ensure that all coefficients are positive for physically realistic and bounded results. This is the case if the coefficient of $T_P^{0}$ satisfies

$$a_P^{0} > \frac{a_E + a_W}{2}$$

which leads to

$$\Delta t < \rho c\,\frac{(\Delta x)^2}{k}$$
The Crank–Nicolson scheme is based on central differencing and hence is second-order accurate in time. The overall accuracy of a computation also depends on the spatial differencing practice, so the Crank–Nicolson scheme is normally used in conjunction with spatial central differencing.
3. Fully implicit scheme: when the value of $\theta$ is set to 1 we get the fully implicit scheme. The discretised equation is

$$a_P T_P = a_W T_W + a_E T_E + a_P^{0} T_P^{0} + S_u$$

with $a_P = a_P^{0} + a_W + a_E - S_p$.
Both sides of the equation contain temperatures at the new time step, and a system of algebraic equations must be solved at each time level. The time-marching procedure starts with a given initial field of temperatures $T^{0}$. The system of equations is solved after selecting a time step $\Delta t$. Next, the solution $T$ is assigned to $T^{0}$ and the procedure is repeated to progress the solution by a further time step. It can be seen that all coefficients are positive, which makes the implicit scheme unconditionally stable for any size of time step. Since the accuracy of the scheme is only first order in time, small time steps are needed to ensure the accuracy of results. The implicit method is recommended for general-purpose transient calculations because of its robustness and unconditional stability.
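A minimal sketch of the three schemes above for 1D heat conduction with constant properties and a uniform grid (grid size, diffusivity, boundary treatment and run length are all illustrative assumptions, not from the source):

```python
import numpy as np

def theta_step(T, r, theta):
    """One time step of dT/dt = alpha * d2T/dx2 with the theta scheme:
    theta = 0 explicit, 0.5 Crank-Nicolson, 1 fully implicit.
    r = alpha*dt/dx**2; ends held fixed (Dirichlet). The explicit
    scheme needs r <= 0.5, matching the time-step limit above."""
    n = len(T)
    A = np.eye(n)            # tridiagonal system A @ T_new = d
    d = T.copy()
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -theta * r
        A[i, i] = 1.0 + 2.0 * theta * r
        d[i] = T[i] + (1.0 - theta) * r * (T[i - 1] - 2.0 * T[i] + T[i + 1])
    return np.linalg.solve(A, d)

# Illustrative run: bar initially at 0 with the left end held at 100
alpha, dx, dt = 1e-4, 0.01, 0.5          # gives r = 0.5
r = alpha * dt / dx**2
T = np.zeros(51)
T[0] = 100.0
for _ in range(200):
    T = theta_step(T, r, theta=0.5)       # Crank-Nicolson
print(np.round(T[:6], 2))
```

A dense solve is used only for brevity; a production code would exploit the tridiagonal structure (for example with the Thomas algorithm).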
References
Computational fluid dynamics | Finite volume method for unsteady flow | [
"Physics",
"Chemistry"
] | 894 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
41,031,824 | https://en.wikipedia.org/wiki/Numerical%20methods%20in%20fluid%20mechanics | Fluid motion is governed by the Navier–Stokes equations, a set of coupled and nonlinear
partial differential equations derived from the basic laws of conservation of mass, momentum
and energy. The unknowns are usually the flow velocity, the pressure, the density and the temperature. Analytical solutions of these equations are impossible to obtain in general, so scientists resort to laboratory experiments in such situations. The answers delivered are, however, usually qualitatively different, since dynamical and geometric similitude are difficult to enforce simultaneously between the lab experiment and the prototype. Furthermore, the design and construction of these experiments can be difficult (and costly), particularly for stratified rotating flows. Computational fluid dynamics (CFD) is an additional tool in the arsenal of scientists. In its early days CFD was often controversial, as it involved additional approximation to the governing equations and raised additional (legitimate) issues. Nowadays CFD is an established discipline alongside theoretical and experimental methods. This position is in large part due to the exponential growth of computer power, which has allowed us to tackle ever larger and more complex problems.
Discretization
The central process in CFD is the process of discretization, i.e. the process of taking differential equations with an infinite number of degrees of freedom, and reducing it to a system of finite degrees of freedom. Hence, instead of determining the solution everywhere and for all times, we will be satisfied with its calculation at a finite number of locations and at specified time intervals. The partial differential equations are then reduced to a system of algebraic equations that can be solved on a computer. Errors creep in during the discretization process. The nature and characteristics of the errors must be controlled in order to ensure that:
we are solving the correct equations (consistency property)
that the error can be decreased as we increase the number of degrees of freedom (stability and convergence).
Once these two criteria are established, the power of computing machines can be leveraged to solve the problem in a numerically reliable fashion. Various discretization schemes have been developed to cope with a variety of issues. The most notable for our purposes are: finite difference methods, finite volume methods, finite element methods, and spectral methods.
Finite difference method
Finite differences replace the infinitesimal limiting process of derivative calculation

$$f'(x) = \lim_{\Delta x \to 0}\frac{f(x+\Delta x) - f(x)}{\Delta x}$$

with a finite limiting process, i.e.

$$f'(x) \approx \frac{f(x+\Delta x) - f(x)}{\Delta x} + O(\Delta x)$$

The term $O(\Delta x)$ gives an indication of the magnitude of the error as a function of the mesh spacing. In this instance, the error is halved if the grid spacing $\Delta x$ is halved, and we say that this is a first-order method. Most FDMs used in practice are at least second-order accurate except in very special circumstances. The finite difference method is still the most popular numerical method for the solution of PDEs because of its simplicity, efficiency and low computational cost. Its major drawback is its geometric inflexibility, which complicates its application to general complex domains. This can be alleviated by the use of mapping techniques and/or masking to fit the computational mesh to the computational domain.
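As a quick numerical illustration of this first-order behaviour (a sketch; the test function and evaluation point are arbitrary choices, not from the source):

```python
import numpy as np

# First-order forward difference for f'(x); the error should roughly halve
# each time the grid spacing dx is halved.
f, x = np.sin, 1.0
exact = np.cos(x)
for dx in [0.1, 0.05, 0.025]:
    approx = (f(x + dx) - f(x)) / dx
    print(f"dx = {dx:5.3f}   error = {abs(approx - exact):.6f}")
```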
Finite element method
The finite element method was designed to deal with problems in complicated computational regions. The PDE is first recast into a variational form, which essentially forces the mean error to be small everywhere. The discretization step proceeds by dividing the computational domain into elements of triangular or rectangular shape. The solution within each element is interpolated with a polynomial of usually low order. Again, the unknowns are the solution values at the collocation points. The CFD community adopted the FEM in the 1980s, when reliable methods for dealing with advection-dominated problems were devised.
Spectral method
Both finite element and finite difference methods are low-order methods, usually of 2nd to 4th order, and have a local approximation property. By local we mean that a particular collocation point is affected by a limited number of points around it. In contrast, spectral methods have a global approximation property. The interpolation functions, either polynomials or trigonometric functions, are global in nature. Their main benefit is the rate of convergence, which depends on the smoothness of the solution (i.e. how many continuous derivatives it admits). For an infinitely smooth solution, the error decreases exponentially, i.e. faster than algebraically. Spectral methods are mostly used in computations of homogeneous turbulence, and require relatively simple geometries. Atmospheric models have also adopted spectral methods because of their convergence properties and the regular spherical shape of their computational domain.
Finite volume method
Finite volume methods are primarily used in aerodynamics applications where strong shocks and discontinuities occur in the solution. The finite volume method solves an integral form of the governing equations, so that local continuity properties do not have to hold.
Computational cost
The CPU time needed to solve the system of equations differs substantially from method to method. Finite differences are usually the cheapest on a per-grid-point basis, followed by the finite element method and the spectral method. However, a per-grid-point comparison is a little like comparing apples and oranges: spectral methods deliver more accuracy per grid point than either FEM or FDM. The comparison is more meaningful if the question is recast as "what is the computational cost to achieve a given error tolerance?". The problem then becomes one of defining the error measure, which is a complicated task in general situations.
Forward Euler approximation
The forward Euler approximation

$$u^{n+1} = u^{n} + \Delta t\, f(u^{n}, t^{n})$$

is an explicit approximation to the original differential equation $du/dt = f(u,t)$, since no information about the unknown function at the future time $(n+1)\Delta t$ has been used on the right-hand side. In order to derive the error committed in the approximation we rely again on Taylor series.
Backward difference
The backward difference approximation

$$u^{n+1} = u^{n} + \Delta t\, f(u^{n+1}, t^{n+1})$$

is an example of an implicit method, since the unknown $u^{n+1}$ has been used in evaluating the slope of the solution on the right-hand side; it is not a problem to solve for $u^{n+1}$ in this scalar and linear case. For more complicated situations, like a nonlinear right-hand side or a system of equations, a nonlinear system of equations may have to be inverted.
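A minimal sketch contrasting the two updates on the scalar linear test problem $du/dt = -\lambda u$, where the implicit step can be solved in closed form as noted above (the values of $\lambda$ and $\Delta t$ are illustrative assumptions):

```python
# Forward (explicit) vs backward (implicit) Euler for du/dt = -lam * u, u(0) = 1.
lam, dt, steps = 10.0, 0.25, 20      # lam * dt = 2.5, outside the explicit stability limit
u_fwd = u_bwd = 1.0
for _ in range(steps):
    u_fwd = u_fwd + dt * (-lam * u_fwd)   # explicit: slope evaluated at time level n
    u_bwd = u_bwd / (1.0 + lam * dt)      # implicit: u_new = u_old + dt*(-lam*u_new), solved for u_new
print(u_fwd, u_bwd)  # the explicit iterate grows in magnitude; the implicit one decays to 0
```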
References
Sources
Zalesak, S. T., 2005. The design of flux-corrected transport algorithms for structured grids. In: Kuzmin, D., Löhner, R., Turek, S. (Eds.), Flux-Corrected Transport. Springer.
Zalesak, S. T., 1979. Fully multidimensional flux-corrected transport algorithms for fluids. Journal of Computational Physics.
Leonard, B. P., MacVean, M. K., Lock, A. P., 1995. The flux integral method for multi-dimensional convection and diffusion. Applied Mathematical Modelling.
Shchepetkin, A. F., McWilliams, J. C., 1998. Quasi-monotone advection schemes based on explicit locally adaptive dissipation. Monthly Weather Review.
Jiang, G.-S., Shu, C.-W., 1996. Efficient implementation of weighted ENO schemes. Journal of Computational Physics.
Finlayson, B. A., 1972. The Method of Weighted Residuals and Variational Principles. Academic Press.
Durran, D. R., 1999. Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. Springer, New York.
Dukowicz, J. K., 1995. Mesh effects for Rossby waves. Journal of Computational Physics.
Canuto, C., Hussaini, M. Y., Quarteroni, A., Zang, T. A., 1988. Spectral Methods in Fluid Dynamics. Springer Series in Computational Physics. Springer-Verlag, New York.
Butcher, J. C., 1987. The Numerical Analysis of Ordinary Differential Equations. John Wiley and Sons Inc., NY.
Boris, J. P., Book, D. L., 1973. Flux-corrected transport. I. SHASTA, a fluid transport algorithm that works. Journal of Computational Physics.
Citations
Computational fluid dynamics
Numerical analysis
Functional analysis | Numerical methods in fluid mechanics | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,608 | [
"Functions and mappings",
"Functional analysis",
"Computational fluid dynamics",
"Mathematical objects",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Fluid dynamics"
] |
48,759,979 | https://en.wikipedia.org/wiki/Supermicelle | Supermicelle is a hierarchical micelle structure (supramolecular assembly) where individual components are also micelles. Supermicelles are formed via bottom-up chemical approaches, such as self-assembly of long cylindrical micelles into radial cross-, star- or dandelion-like patterns in a specially selected solvent; solid nanoparticles may be added to the solution to act as nucleation centers and form the central core of the supermicelle. The stems of the primary cylindrical micelles are composed of various block copolymers connected by strong covalent bonds; within the supermicelle structure they are loosely held together by hydrogen bonds, electrostatic or solvophobic interactions.
References
Supramolecular chemistry
Colloidal chemistry | Supermicelle | [
"Chemistry",
"Materials_science"
] | 156 | [
"Colloidal chemistry",
"Colloids",
"Surface science",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
48,760,331 | https://en.wikipedia.org/wiki/TopHat%20%28bioinformatics%29 | TopHat is an open-source bioinformatics tool for the high-throughput alignment of shotgun cDNA sequencing reads generated by transcriptomics technologies (e.g. RNA-Seq). It first aligns reads using Bowtie and then maps them to a reference genome to discover RNA splice sites de novo. TopHat aligns RNA-Seq reads to mammalian-sized genomes.
History
TopHat was originally developed in 2009 by Cole Trapnell, Lior Pachter and Steven Salzberg at the Center for Bioinformatics and Computational Biology at the University of Maryland, College Park and at the Mathematics Department, UC Berkeley. TopHat2 was a collaborative effort of Daehwan Kim and Steven Salzberg, initially at the University of Maryland, College Park and later at the Center for Computational Biology at Johns Hopkins University. Kim re-wrote some of Trapnell's original TopHat code in C++ to make it much faster, and added many heuristics to improve its accuracy, in a collaboration with Cole Trapnell and others. Kim and Salzberg also developed TopHat-fusion which used transcriptome data to discover gene fusions in cancer tissues.
Uses
TopHat is used to align reads from an RNA-Seq experiment. It is a read-mapping algorithm and it aligns the reads to a reference genome. It is useful because it does not need to rely on known splice sites. TopHat can be used with the Tuxedo pipeline, and is frequently used with Bowtie.
Advantages/Disadvantages
Advantages
When TopHat first came out, it was faster than previous systems, mapping more than 2.2 million reads per CPU hour. That speed allowed the user to process an entire RNA-Seq experiment in less than a day, even on a standard desktop computer. TopHat uses Bowtie at the start to analyze the reads, but then does additional work to analyze the reads that span exon-exon junctions. Using TopHat for RNA-Seq data therefore yields more reads aligned against the reference genome.
Another advantage for TopHat is that it does not need to rely on known splice sites when aligning reads to a reference genome.
Disadvantages
TopHat is in a low-maintenance, low-support stage and contains software bugs that have spawned third-party post-processing software to correct them. It has been superseded by HISAT2, which is more efficient and accurate and provides the same core functionality (spliced alignment of RNA-Seq reads).
See also
Bowtie (sequence analysis)
List of RNA-Seq bioinformatics tools
Microarray analysis techniques
next generation sequencing
RNA-Seq
References
External links
TopHat page on Center for Computational Biology at JHU
Bioinformatics algorithms
Bioinformatics software
Laboratory software
Software using the Artistic license | TopHat (bioinformatics) | [
"Biology"
] | 574 | [
"Bioinformatics",
"Bioinformatics software",
"Bioinformatics algorithms"
] |
55,524,455 | https://en.wikipedia.org/wiki/Digital%20image%20correlation%20for%20electronics | Digital image correlation (DIC) analyses have applications in material property characterization, displacement measurement, and strain mapping. As such, DIC is becoming an increasingly popular tool when evaluating the thermo-mechanical behavior of electronic components and systems.
CTE measurements and glass transition temperature identification
The most common application of DIC in the electronics industry is the measurement of coefficient of thermal expansion (CTE). Because it is a non-contact, full-field surface technique, DIC is ideal for measuring the effective CTE of printed circuit boards (PCB) and individual surfaces of electronic components. It is especially useful for characterizing the properties of complex integrated circuits, as the combined thermal expansion effects of the substrate, molding compound, and die make effective CTE difficult to estimate at the substrate surface with other experimental methods. DIC techniques can be used to calculate average in-plane strain as a function of temperature over an area of interest during a thermal profile. Linear curve-fitting and slope calculation can then be used to estimate an effective CTE for the observed area. Because the driving factor in solder fatigue is most often the CTE mismatch between a component and the PCB it is soldered to, accurate CTE measurements are vital for calculating printed circuit board assembly (PCBA) reliability metrics.
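A minimal sketch of the CTE estimate described above, via a linear fit of average strain against temperature (the strain and temperature values are made-up placeholders, not measured data):

```python
import numpy as np

# Average in-plane strain (dimensionless) from DIC at each temperature step (degC).
# Placeholder data for illustration; real values come from the DIC strain maps.
temperature = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
strain = np.array([0.0, 4.2e-4, 8.5e-4, 1.27e-3, 1.70e-3])

# Linear curve-fitting; the slope is the effective CTE over this range
slope, intercept = np.polyfit(temperature, strain, 1)
print(f"Effective CTE ~ {slope * 1e6:.1f} ppm/degC")  # ~17 ppm/degC for this data
```

In practice the fit would be restricted to a temperature range away from any glass transition, where the strain-temperature response is linear.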
DIC is also useful for characterizing the thermal properties of polymers. Polymers are often used in electronic assemblies as potting compounds, conformal coatings, adhesives, molding compounds, dielectrics, and underfills. Because the stiffness of such materials can vary widely, accurately determining their thermal characteristics with contact techniques that transfer load to the specimen, such as dynamic mechanical analysis (DMA) and thermomechanical analysis (TMA), is difficult to do with consistency. Accurate CTE measurements are important for these materials because, depending on the specific use case, expansion and contraction of these materials can drastically affect solder joint reliability. For example, if a stiff conformal coating or other polymeric encapsulation is allowed to flow under a QFN, its expansion and contraction during thermal cycling can add tensile stress to the solder joints and expedite fatigue failures.
DIC techniques also allow the detection of the glass transition temperature (Tg): at the glass transition temperature, the strain vs. temperature plot exhibits a change in slope.
Determining the Tg is very important for polymeric materials that could have glass transition temperatures within the operating temperature range of the electronics assemblies and components on which they are used. For example, some potting materials can see the elastic modulus of the material change by a factor of 100 or more over the glass transition region. Such changes can have drastic effects on an electronic assembly's reliability if they are not planned for in the design process.
Out-of-plane component warpage
When 3D DIC techniques are employed, out-of-plane motion can be tracked in addition to in-plane motion. Out-of-plane warpage is especially of interest at the component level of electronics packaging for solder joint reliability quantification. Excessive warpage during reflow can contribute to defective solder joints by lifting the edges of the component away from the board and creating head-in-pillow defects in ball grid arrays (BGA). Warpage can also shorten the fatigue life of adequate joints by adding tensile stresses to edge joints during thermal cycling.
Thermo-mechanical strain mapping
When a PCBA is over-constrained, thermo-mechanical stress brought about during thermal expansion can cause board strains that could negatively affect individual component and overall assembly reliability. The full-field monitoring capabilities of an image correlation technique allow for the measurement of strain magnitude and location on the surface of a specimen during a displacement-causing event, such as PCBA during a thermal profile. These "strain maps" allow for the comparison of strain levels over full areas of interest. Many traditional discrete methods, like extensometers and strain gauges, only allow for localized measurements of strain, inhibiting their ability to efficiently measure strain across larger areas of interest. DIC techniques have also been used to generate strain maps from purely mechanical events, such as drop impact tests, on electronic assemblies.
See also
Glass Transition
Thermal Analysis
References
Materials
Electronics | Digital image correlation for electronics | [
"Physics"
] | 872 | [
"Materials",
"Matter"
] |
60,330,183 | https://en.wikipedia.org/wiki/Structural%20reliability | Structural reliability is about applying reliability engineering theories to buildings and, more generally, structural analysis. Reliability is also used as a probabilistic measure of structural safety. The reliability of a structure is defined as the probability of the complement of failure, $1 - P_f$. Failure occurs when the total applied load is larger than the total resistance of the structure. Structural reliability has become known as a design philosophy in the twenty-first century, and it might replace traditional deterministic ways of design and maintenance.
Theory
In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated. When loads and resistances are explicit and have their own independent distribution functions, the probability of failure can be formulated as

$$P_f = \int_{0}^{\infty} F_R(s)\, f_S(s)\, ds$$

where $P_f$ is the probability of failure, $F_R$ is the cumulative distribution function of resistance (R), and $f_S$ is the probability density of load (S).

However, in most cases, the distributions of loads and resistances are not independent, and the probability of failure is defined via the following more general formula:

$$P_f = \int_{G(X)\le 0} f_X(x)\, dx$$

where $X$ is the vector of the basic variables, and $G(X)$, which is called the limit state function, defines the failure domain; the failure surface $G(X)=0$ could be a line, surface or volume over which the integral is taken.
Solution approaches
Analytical solutions
In some cases, when load and resistance are explicitly expressed (as in the first equation above) and their distributions are normal, the integral has a closed-form solution:

$$P_f = \Phi\left(-\frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}\right) = \Phi(-\beta)$$

where $\Phi$ is the standard normal cumulative distribution function, $\mu$ and $\sigma$ denote the means and standard deviations of resistance and load, and $\beta$ is known as the reliability index.
Simulation
In most cases, load and resistance are not normally distributed, so solving the integrals above analytically is impossible. Monte Carlo simulation is an approach that can be used in such cases, as sketched below.
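A minimal Monte Carlo sketch of the failure-probability estimate (the choice of a normal resistance and a lognormal load, and all parameter values, are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative distributions: resistance R ~ Normal, load S ~ Lognormal (units: kN)
R = rng.normal(loc=300.0, scale=30.0, size=n)
S = rng.lognormal(mean=np.log(180.0), sigma=0.2, size=n)

# Failure occurs when the total applied load exceeds the total resistance
pf = np.mean(R - S < 0)
print(f"P_f ~ {pf:.2e}, reliability ~ {1 - pf:.5f}")
```

The estimate converges as the number of samples grows; for very small failure probabilities, variance-reduction techniques such as importance sampling are commonly used instead of brute-force sampling.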
References
Reliability analysis
Reliability engineering
Structural engineering | Structural reliability | [
"Engineering"
] | 350 | [
"Structural engineering",
"Systems engineering",
"Reliability analysis",
"Reliability engineering",
"Construction",
"Civil engineering"
] |
60,330,584 | https://en.wikipedia.org/wiki/Gardner%20transition | In condensed matter physics, the Gardner transition refers to a temperature induced transition in which the free energy basin of a disordered system divides into many marginally stable sub-basins. It is named after Elizabeth Gardner who first described it in 1985.
See also
Glass transition
References
Condensed matter physics | Gardner transition | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 58 | [
"Materials science stubs",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
60,336,113 | https://en.wikipedia.org/wiki/Kerr%E2%80%93Schild%20perturbations | Kerr–Schild perturbations are a special type of perturbation to a spacetime metric which only appear linearly in the Einstein field equations which describe general relativity. They were found by Roy Kerr and Alfred Schild in 1965.
Form
A generalised Kerr–Schild perturbation has the form $h_{ab} = \varphi\, l_a l_b$, where $\varphi$ is a scalar and $l_a$ is a null vector with respect to the background spacetime. It can be shown that any perturbation of this form appears only quadratically in the Einstein equations, and only linearly if the condition $l^{a}\nabla_{a} l^{b} = V l^{b}$, where $V$ is a scalar, is imposed. This condition is equivalent to requiring that the orbits of $l^{a}$ are geodesics.
Applications
While the form of the perturbation may appear very restrictive, there are several black hole metrics which can be written in Kerr–Schild form, such as Schwarzschild (stationary black hole), Kerr (rotating), Reissner–Nordström (charged) and Kerr–Newman (both charged and rotating).
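For instance (a standard result, stated here for illustration), the Schwarzschild metric in Kerr–Schild Cartesian coordinates, with G = c = 1, takes exactly this form over flat spacetime:

$$g_{\mu\nu} = \eta_{\mu\nu} + \frac{2M}{r}\, l_\mu l_\nu, \qquad l_\mu = \left(1, \tfrac{x}{r}, \tfrac{y}{r}, \tfrac{z}{r}\right), \qquad r^2 = x^2 + y^2 + z^2,$$

with scalar $2M/r$ and $l_\mu$ null with respect to the flat background $\eta_{\mu\nu}$.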
References
General relativity | Kerr–Schild perturbations | [
"Physics"
] | 215 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
60,337,965 | https://en.wikipedia.org/wiki/Coupled%20substitution | Coupled substitution is the geological process by which two elements simultaneously substitute into a crystal in order to maintain overall electrical neutrality and keep the charge constant. In forming a solid solution series, ionic size is more important than ionic charge, as the latter can be compensated for elsewhere in the structure.
Ionic size
To make a geometrically stable structure in a mineral, atoms must fit together in terms of both their size and charge. The atoms have to fit together so that their electron shells can interact with one another and they also have to produce a neutral molecule. For these reasons the sizes and electron shell structure of atoms determine what element combinations are possible and the geometrical form that various minerals take. Because electrons are donated and received, it is the ionic radius of the element that controls the size and determines how atoms fit together in minerals.
Examples
Coupled substitutions are common in the silicate minerals, where Al3+ substitutes for Si4+ in tetrahedral sites.
For example, when a plagioclase feldspar solid solution series forms, albite (NaAlSi3O8) can change to anorthite (CaAl2Si2O8) by having Ca2+ replace Na+. However, this leaves an excess positive charge that has to be balanced by the (coupled) substitution of Al3+ for Si4+, as the charge bookkeeping below shows.
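Written out explicitly, the exchanged ions carry the same total charge on each side, so electrical neutrality is preserved:

$$\underbrace{\mathrm{Na^{+}} + \mathrm{Si^{4+}}}_{\text{albite},\ +5} \;\longrightarrow\; \underbrace{\mathrm{Ca^{2+}} + \mathrm{Al^{3+}}}_{\text{anorthite},\ +5}$$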
Despite being nicknamed fool's gold, pyrite is sometimes found in association with small quantities of gold. Gold and arsenic occur as a coupled substitution in the pyrite structure. In the Carlin–type gold deposits, arsenian pyrite contains up to 0.37% gold by weight.
The possible replacement of two Al3+ by Fe2+ + Ti4+ in corundum,
and of two Fe3+ by Fe2+ + Ti4+ in haematite.
Ca2+ + Mg2+ → Na+ + Al3+: diopside (MgCaSi2O6) → jadeite (NaAlSi2O6).
The replacement of two trivalent cations by one divalent and one tetravalent cation, as in the spinel group.
The site being filled to maintain charge balance does not have to host a substitution; charge balance can also be achieved by filling a site that is normally vacant. For example, in the amphibole mineral tremolite (Ca2(Mg5.0-4.5Fe2+0.0-0.5)Si8O22(OH)2), Al3+ replaces Si4+, and Na+ can then go into a site that is normally vacant to maintain charge balance. The resulting mineral would be edenite, a variety of hornblende.
Bityite's structure exhibits a coupled substitution between the sheets of polyhedra: the coupled substitution of beryllium for aluminium within the tetrahedral sites allows a single lithium substitution for a vacancy without any additional octahedral substitutions. The transfer is completed by creating a tetrahedral sheet composition of Si2BeAl. The coupled substitution of lithium for a vacancy and of beryllium for the tetrahedral aluminium keeps all charges balanced, thereby producing the trioctahedral end member of the margarite sub-group of the phyllosilicate group.
Ferrogedrite is related to anthophyllite amphibole and gedrite through coupled substitution of (Al, Fe3+) for (Mg, Fe2+, Mn) and Al for Si.
References
Chemical reactions
Minerals
Crystallography | Coupled substitution | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 651 | [
"Crystallography",
"Condensed matter physics",
"nan",
"Materials science"
] |
56,892,858 | https://en.wikipedia.org/wiki/Kagome%20metal | In solid-state physics, the kagome metal or kagome magnet is a type of ferromagnetic quantum material. The atomic lattice in a kagome magnet has layered overlapping triangles and large hexagonal voids, akin to the kagome pattern in traditional Japanese basket-weaving. This geometry induces a flat electronic band structure with Dirac crossings, in which the low-energy electron dynamics correlate strongly.
Electrons in a kagome metal experience a "three-dimensional cousin of the quantum Hall effect": magnetic effects require electrons to flow around the kagome triangles, akin to superconductivity. This phenomenon occurs in many materials at low temperatures and high external fields, but, unlike superconductivity, materials are known in which the effect persists under standard conditions.
The first room-temperature, vanishing-external-field kagome magnet discovered was the intermetallic Fe3Sn2, as shown in 2011. Many others have since been found. Kagome magnets occur in a variety of crystal and magnetic structures, generally featuring a 3d-transition-metal kagome lattice with an in-plane period of ~5.5 Å. Examples include the antiferromagnet FeSn, the paramagnet CoSn, the ferrimagnet TbMn6Sn6, the hard ferromagnet (and Weyl semimetal) Co3Sn2S2, and the soft ferromagnet Fe3Sn2. Until 2019, all known kagome materials contained the heavy element tin, which has a strong spin–orbit coupling, but potential kagome materials under study included magnetically doped Weyl semimetals and the AV3Sb5 class (A = Cs, Rb, K). Although most research on kagome magnets has been performed on Fe3Sn2, it has since been discovered that FeSn in fact exhibits a structure much closer to the ideal kagome lattice.
A kagome lattice harbors massive Dirac fermions, Berry curvature, band gaps, and spin–orbit activity, all of which are conducive to the Hall effect and zero-energy-loss electric currents. These behaviors are promising for the development of technologies in quantum computing, spin superconductors, and low-power electronics. CsV3Sb5 in particular exhibits numerous exotic properties, including superconductivity, topological states, and more. Magnetic skyrmionic bubbles have been found in kagome metals over a wide temperature range; for example, they were observed in Fe3Sn2 at ~200-600 K using LTEM, but with a high critical field of ~0.8 T.
See also
Herbertsmithite, a natural kagome magnet
References
2018 in science
Materials science
Condensed matter physics | Kagome metal | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 541 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Matter"
] |
56,897,087 | https://en.wikipedia.org/wiki/Agglomerated%20food%20powder | Agglomeration of food powder is a unit operation during which native particles are assembled to form bigger agglomerates, in which the original particles can still be distinguished. Agglomeration can be achieved through processes that use liquid as a binder (wet methods) or methods that do not involve any binder (dry methods).
Description
The liquid used in wet methods can be added directly to the product or via a humid environment. Using a fluidized bed dryer and multiple step spray drying are two examples of wet methods while roller compacting and extrusion are two examples of dry methods.
Advantages of agglomeration for food include:
Dust reduction: Dust reduction is achieved when the smallest particles (or "fines") in the product are combined into larger particles.
Improved flow: Flow improvement occurs as the larger, and sometimes more spherical, particles more easily pass over each other than the smaller or more irregularly-shaped particles in the original material.
Improved dispersion and/or solubility: Improved dispersion and solubility is sometimes achieved with instantization, in which the solubility of a product allows it to instantly dissolve upon its addition to water. For a powder to be considered instant it should go through wettability, sinkability, dispersibility, and solubility within a few seconds. Non-fat dry milk and high quality protein powders are good examples of instant powders.
Optimized bulk density: Consistent bulk density is important in accurate and consistent filling of packaging.
Improved product characteristics
Increased homogeneity of the finished product, reducing segregation of fine particles (such as powdered vitamins or spray-dried flavors) from larger particles (such as granulated sugars or acids). As a powder is agitated, smaller particles fall to the bottom and larger ones rise to the top. Agglomeration can reduce the range of particle sizes present in the product, reducing segregation.
Disadvantages of food agglomeration:
Extra cost. The benefits of handling an agglomerate often outweigh the extra cost involved in processing.
Additional processing time. Agglomeration of a finished blend is an additional step after blending.
Particle size distribution is an important parameter to monitor in agglomerated food products. In both wet and dry agglomeration, particles of undesired sizes must be removed to ensure the best possible finished product performance. High-powered cyclones are the most common way to separate undesired fine particles (or "fines") from larger agglomerates (or "overs"). Cyclones utilize the combination of wind power and the different densities of the two products to pull the fines out of the mix. The fines can then be reworked through the agglomeration process to reduce yield loss. In contrast, shaker screens are often used to separate out the overs from the rest of the product. The overs can be reworked into the process by first being broken into smaller particles.
Wet agglomeration methods
Wet agglomeration is a process that introduces a liquid binder to develop adhesion forces between the dry particles to be agglomerated. Mixing disperses the liquid over the particles evenly and promotes growth of the aggregate to the desired size. A final drying step is required to stabilize the agglomerates.
In all wet agglomeration methods, the first step is wetting the particles. This initiates adhesion forces between the particles. The next step, nucleation, is the process by which the native particles come together and are held with liquid bridges and capillary forces. Then, through coalescence or the growth phase, these small groups of particles come together to create larger particles until the particles are the desired size. Consolidation occurs as the agglomerates increase in density and strength through drying and collisions with other particles. Mixing as the powder dries also causes some particles to break and erode, creating smaller particles and fines. To achieve the correct particle size, erosion and growth must be balanced. The last step in wet agglomeration is the final stabilization through drying. The agglomerated particles are dried to less than 5% water content, and cooled to below their glass transition temperature.
Wet agglomeration falls into two categories based on method of agitation: Mechanical mixing and pneumatic mixing.
Pneumatic mixing
Steam-jet agglomeration: A continuous process wherein fine powders are exposed to steam to provide the necessary adhesion properties. Agglomeration is controlled by particle size distribution in the raw materials, gas and steam flow conditions and the adhesion forces between the particles. After the steam section the particles are exposed to warm air flowing upwards and countercurrent to the particles, which solidifies the liquid bridges formed between the particles. Advantages: used for many years in the food industry, a continuous process
Spray drying: Spray drying starts with a liquid raw material which is sprayed as fine droplets into heated air which causes the droplets to dry into fine particles. To agglomerate, fully dried particles (collected from the dry air outlet) are re-introduced at the point where the particles are partly dried and still sticky, to collide and create porous agglomerates. Spray drying agglomeration creates irregularly shaped, highly porous particles with excellent instant properties.
Fluid bed agglomeration uses an air stream both to agitate the particles and to dry the agglomerates. Fluidized bed dryers are hot-air driers in which the air is forced up through a layer of product. The air is evenly distributed along the chamber to maintain a uniform velocity and prevent areas of higher velocities. These dryers can be either batch or continuous. Batch processing methods tend to have more uniform moisture content throughout the product after drying, whereas continuous driers vary more throughout the process and may require a finishing step. When used for agglomeration, fluidized bed dryers have spray nozzles located at various heights within the chamber, allowing water or a solution of other ingredients (the binding solution) to be sprayed onto the particles as they are fluidized. This encourages the particles to stick together, and can impart other properties to the finished product. Examples of binding solutions are a water/sugar solution, or lecithin. In this method, particle wetting, growth, consolidation, erosion and drying all occur at the same time.
Mechanical mixing
Pan or disk agglomerators (rarely used for food powders): Pan or disk agglomerators use the rotation of a disk or bowl to agitate the powder as it is sprayed with water. This type of agglomeration creates agglomerates with higher density than agglomerates produced in fluidized beds.
Drum agglomerators: use a drum to agitate the powder as liquid is added via spraying along the drum. This is a continuous process, and the agglomerates are spherical due to the rotation of the drum. Advantages: can successfully agglomerate powders with a wide particle size distribution, and have lower energy needs than fluidized bed agglomerators. Drum agglomerators can handle very large capacities, but this is not an advantage in the food industry. Disadvantage: broad particle size distribution.
Mixer agglomeration: Mixer agglomerators agitate the powder with the use of a blade inside a bowl. The geometry of mixer agglomerators vary widely. The blade can be oriented vertically, horizontally or obliquely. Shear is variable, and the wetting solution is sprayed over the bulk of the powder as it is mixing. Advantages: can work with powders of large particle size distribution, and allow for good distribution of viscous liquids. Equipment is straight forward and common. This type of agglomeration results in relatively dense and spherical agglomerates.
Dry agglomeration methods
Dry agglomeration is agglomeration performed without water or binding liquids, instead using compression only.
Roller compaction
Roller compaction is a process in which powders are forced between two rolls, which compress the powders into dense sticks or sheets. These sticks or sheets are then ground into granules. Material properties will affect the mechanical properties of the resulting granules. Food particles with crystalline structures will deform plastically under pressure, and amorphous materials will deform viscoelastically. Roller compaction is more commonly used on individual ingredients of a finished powdered food product, than on a blend of ingredients producing a granulated finished product.
Some advantages of roller compaction are
No need for binding solution
Dust-free powders are possible
Consistent granule size and bulk density.
Good option for moisture and heat-sensitive materials, as no drying is required.
Disadvantages:
May not be easily soluble in cold water due to high density and low porosity
High amount of fines which need to be re-worked may be produced
Specialized equipment is required, typically with large batch sizes (> 3k pounds)
Examples of agglomerated food powders: Sucrose, sodium chloride, monosodium glutamate and fibers.
Extrusion
Extrusion is executed by mixing the powder with liquid, additives, or dispersants and then compressing the mixture and forcing it through a die. The product is then dried and broken down to the desired particle size. Extruded powders are dense. Extrusion is typically used for ingredients such as minerals and highly-hygroscopic products which benefit from reduced surface area, as well as products that are subject to oxidation. Extrusion for agglomeration should not be confused with the more common food extrusion process that involves creating a dough that is cooked and expands as it passes through the die.
See also
Granulation (process)
Particle size
Particle-size distribution
Pelletizing
References
External links
Welchdry.com Roller Compaction
www.glatt.com Spray Agglomeration
Patent US7998505B2
Food processing
Particle technology
Dried foods
Powders | Agglomerated food powder | [
"Physics",
"Chemistry",
"Engineering"
] | 2,064 | [
"Chemical engineering",
"Materials",
"Powders",
"Environmental engineering",
"Particle technology",
"Matter"
] |
39,695,061 | https://en.wikipedia.org/wiki/Zero%20liquid%20discharge | Zero Liquid Discharge (ZLD) is a classification of water treatment processes intended to reduce wastewater efficiently and produce clean water that is suitable for reuse (e.g., irrigation). ZLD systems employ wastewater treatment technologies and desalination to purify and recycle virtually all wastewater received.
ZLD technologies help industrial facilities meet discharge and water reuse requirements, enabling them to meet government discharge regulations, reach higher water recovery (%), and treat and recover valuable materials from the wastewater streams such as potassium sulfate, caustic soda, sodium sulfate, lithium, and gypsum.
Thermal technologies are the conventional means to achieve ZLD, such as evaporators (for instance multi-stage flash distillation), multi-effect distillation, mechanical vapor compression, crystallization, and condensate recovery. ZLD plants produce solid waste.
ZLD discharge system overview
ZLD processes begin with pre-treatment and evaporation of an industrial effluent until its dissolved solids precipitate. These precipitates are removed and dewatered with a filter press or a centrifuge. The water vapor from evaporation is condensed and returned to the process.
In the last few decades, there has been an effort from the water treatment industry to revolutionize high water recovery and ZLD technologies.
This has led to processes like electrodialysis, forward osmosis, and membrane distillation.
A quick overview and comparison can be seen in the following representative table:
Configuration
Despite the variable sources of a wastewater stream, a ZLD system generally comprises two steps (a simple water balance across them is sketched after this list):
Pre-Concentration: Pre-concentrating a brine is usually achieved with membrane brine concentrators or electrodialysis. These technologies concentrate a stream to a high salinity and are able to recover up to 60–80% of the water.
Evaporation/Crystallization: The next step, using thermal processes or evaporation, evaporates all the leftover water, collects it, and sends it to reuse. The waste that is left behind then goes to a crystallizer that boils all the water until all its impurities crystallize and can be filtered out as solids.
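A minimal water-balance sketch for the two steps (the feed rate and the 75% membrane recovery are illustrative assumptions within the 60-80% range quoted above):

```python
# Simple ZLD water balance; all figures are illustrative.
feed = 100.0           # m3/h of wastewater entering the system
recovery_pre = 0.75    # membrane pre-concentration recovers 60-80% of the water

permeate = feed * recovery_pre   # clean water recovered by the membrane step
brine = feed - permeate          # concentrated brine sent to evaporation/crystallization
condensate = brine               # the evaporator recovers essentially all remaining water

print(f"membrane recovery : {permeate:5.1f} m3/h")
print(f"thermal recovery  : {condensate:5.1f} m3/h")
print(f"liquid discharge  : {feed - permeate - condensate:5.1f} m3/h (solids leave as crystals)")
```

Treating the condensate stream as equal to the brine stream neglects the small mass of dissolved solids; in a real plant the crystallized solids leave the balance as dry waste.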
See also
Effluent guidelines (US wastewater regulations)
Effluent limitation
References
External links
ZLD Treatment Process Stages
Building engineering
Environmental engineering
Israeli inventions | Zero liquid discharge | [
"Chemistry",
"Engineering"
] | 485 | [
"Building engineering",
"Chemical engineering",
"Civil engineering",
"Environmental engineering",
"Architecture"
] |
39,697,327 | https://en.wikipedia.org/wiki/Fabry%20gap%20theorem | In mathematics, the Fabry gap theorem is a result about the analytic continuation of complex power series whose non-zero terms are of orders that have a certain "gap" between them. Such a power series is "badly behaved" in the sense that it cannot be extended to be an analytic function anywhere on the boundary of its disc of convergence.
The theorem may be deduced from the first main theorem of Turán's method.
Statement of the theorem
Let 0 < p1 < p2 < ... be a sequence of integers such that the sequence pn/n diverges to ∞. Let (αj)j∈N be a sequence of complex numbers such that the power series

$$f(z) = \sum_{j \in \mathbb{N}} \alpha_j z^{p_j}$$

has radius of convergence 1. Then the unit circle is a natural boundary for the series f.
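As an illustration (a standard example, not taken from the source), consider the series

$$f(z) = \sum_{n=1}^{\infty} z^{n^2}.$$

Here $p_n = n^2$, so $p_n/n = n \to \infty$, and the radius of convergence is 1; by the theorem, $f$ cannot be continued analytically across any point of the unit circle.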
Converse
A converse to the theorem was established by George Pólya. If lim inf pn/n is finite then there exists a power series with exponent sequence pn, radius of convergence equal to 1, but for which the unit circle is not a natural boundary.
See also
Gap theorem (disambiguation)
Lacunary function
Ostrowski–Hadamard gap theorem
References
Mathematical series
Theorems in complex analysis | Fabry gap theorem | [
"Mathematics"
] | 248 | [
"Sequences and series",
"Theorems in mathematical analysis",
"Mathematical structures",
"Series (mathematics)",
"Calculus",
"Theorems in complex analysis"
] |
39,697,339 | https://en.wikipedia.org/wiki/Tur%C3%A1n%27s%20method | In mathematics, Turán's method provides lower bounds for exponential sums and complex power sums. The method has been applied to problems in equidistribution.
The method applies to sums of the form

$$s_\nu = \sum_{n=1}^{N} b_n z_n^\nu$$

where the $b_n$ and $z_n$ are complex numbers and ν runs over a range of integers. There are two main results, depending on the size of the complex numbers $z_n$.
Turán's first theorem
The first result applies to sums $s_\nu$ where $|z_n| \ge 1$ for all n. For any range of ν of length N, say ν = M + 1, ..., M + N, there is some ν with $|s_\nu|$ at least $c(M,N)|s_0|$ where

$$c(M,N) = \left( \sum_{k=0}^{N-1} \binom{M+k}{k} 2^k \right)^{-1}.$$

The value $c(M,N)$ may be replaced by the weaker but simpler $\left( \frac{N}{2e(M+N)} \right)^{N-1}$.
We may deduce the Fabry gap theorem from this result.
Turán's second theorem
The second result applies to sums $s_\nu$ where $|z_n| \le 1$ for all n. Assume that the $z_n$ are ordered in decreasing absolute value and scaled so that $|z_1| = 1$. Then there is some ν with

$$|s_\nu| \ge 2\left(\frac{N}{8e(M+N)}\right)^{N} \min_{1\le j\le N} \left| \sum_{n=1}^{j} b_n \right|.$$
See also
Turán's theorem in graph theory
References
Exponentials
Analytic number theory | Turán's method | [
"Mathematics"
] | 229 | [
"E (mathematical constant)",
"Analytic number theory",
"Exponentials",
"Number theory"
] |
39,698,985 | https://en.wikipedia.org/wiki/Thermodynamic%20relations%20across%20normal%20shocks | "Normal shocks" are a fundamental type of shock wave. Waves that are perpendicular to the flow are called "normal" shocks. Normal shocks only occur when the flow is supersonic. At supersonic speeds, pressure disturbances cannot travel upstream faster than the flow, so the oncoming fluid gets no warning of an obstacle ahead; the disturbances reflected from the obstacle coalesce a short distance ahead of it into a thin layer of molecules across which the flow properties change abruptly. This thin film acts as a normal shock.
Thermodynamic relation across normal shocks
Mach number
The Mach number upstream of the shock is $M_1 = u_1/c_1$ and the Mach number downstream is $M_2 = u_2/c_2$, where $u$ is the flow speed and $c$ the local speed of sound. For a calorically perfect gas with heat-capacity ratio $\gamma$, the two are related by

$$M_2^2 = \frac{(\gamma - 1)M_1^2 + 2}{2\gamma M_1^2 - (\gamma - 1)}$$

Note that the Mach numbers are given in the reference frame of the shock.
Static pressure

$$\frac{p_2}{p_1} = \frac{2\gamma M_1^2 - (\gamma - 1)}{\gamma + 1}$$
Static temperature

$$\frac{T_2}{T_1} = \frac{\left[2\gamma M_1^2 - (\gamma - 1)\right]\left[(\gamma - 1)M_1^2 + 2\right]}{(\gamma + 1)^2 M_1^2}$$
Stagnation pressure

$$\frac{p_{02}}{p_{01}} = \left[\frac{(\gamma + 1)M_1^2}{(\gamma - 1)M_1^2 + 2}\right]^{\frac{\gamma}{\gamma - 1}} \left[\frac{\gamma + 1}{2\gamma M_1^2 - (\gamma - 1)}\right]^{\frac{1}{\gamma - 1}}$$
Entropy change

$$s_2 - s_1 = c_p \ln\frac{T_2}{T_1} - R \ln\frac{p_2}{p_1} = -R \ln\frac{p_{02}}{p_{01}}$$
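A minimal sketch collecting the relations above for a calorically perfect gas (the value γ = 1.4 for air is an assumption):

```python
import math

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and property ratios across a normal shock."""
    M2 = math.sqrt(((gamma - 1.0) * M1**2 + 2.0) / (2.0 * gamma * M1**2 - (gamma - 1.0)))
    p_ratio = (2.0 * gamma * M1**2 - (gamma - 1.0)) / (gamma + 1.0)              # p2/p1
    T_ratio = p_ratio * ((gamma - 1.0) * M1**2 + 2.0) / ((gamma + 1.0) * M1**2)  # T2/T1
    p0_ratio = (((gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)) ** (gamma / (gamma - 1.0))
                * ((gamma + 1.0) / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** (1.0 / (gamma - 1.0)))
    return M2, p_ratio, T_ratio, p0_ratio

M2, p21, T21, p0 = normal_shock(2.0)
print(f"M2 = {M2:.3f}, p2/p1 = {p21:.2f}, T2/T1 = {T21:.3f}, p02/p01 = {p0:.4f}")
# Expected for M1 = 2, gamma = 1.4: M2 ~ 0.577, p2/p1 = 4.5, T2/T1 ~ 1.688, p02/p01 ~ 0.721
```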
Reference list
Shock waves | Thermodynamic relations across normal shocks | [
"Physics"
] | 147 | [
"Waves",
"Physical phenomena",
"Shock waves"
] |
39,699,737 | https://en.wikipedia.org/wiki/Eco-socialism | Eco-socialism (also known as green socialism, socialist ecology, ecological materialism, or revolutionary ecology) is an ideology merging aspects of socialism with that of green politics, ecology and alter-globalization or anti-globalization. Eco-socialists generally believe that the expansion of the capitalist system is the cause of social exclusion, poverty, war and environmental degradation through globalization and imperialism, under the supervision of repressive states and transnational structures.
Eco-socialism asserts that the capitalist economic system is fundamentally incompatible with the ecological and social requirements of sustainability. Thus, according to this analysis, giving economic priority to the fulfillment of human needs while staying within ecological limits, as sustainable development demands, is in conflict with the structural workings of capitalism. By this logic, market-based solutions to ecological crises (such as environmental economics and green economy) are rejected as technical tweaks that do not confront capitalism's structural failures. Eco-socialists advocate for the succession of capitalism by eco-socialism—an egalitarian economic/political/social structure designed to harmonize human society with non-human ecology and to fulfill human needs—as the only sufficient solution to the present-day ecological crisis, and hence the only path towards sustainability.
Eco-socialists advocate dismantling capitalism, focusing on social ownership of the means of production by freely associated producers, and restoring the commons.
Ideology
Eco-socialists are critical of many past and existing forms of both green politics and socialism. They are often described as "Red Greens" – adherents to Green politics with clear anti-capitalist views, often inspired by Marxism (red greens are in contrast to eco-capitalists and green anarchists).
The term "watermelon" is commonly applied, often pejoratively, to Greens who seem to put "social justice" goals above ecological ones, implying they are "green on the outside but red on the inside". The term is common in Australia and New Zealand, and usually attributed to either Petr Beckmann or, more frequently, Warren T. Brookes, both critics of environmentalism. The term is also found in non-English speaking political discourse.
The Watermelon, a New Zealand website, uses the term proudly, stating that it is "green on the outside and liberal on the inside", while also citing "socialist political leanings", reflecting the use of the term "liberal" to describe the political left in many English-speaking countries. Red Greens are often considered "fundies" or "fundamentalist greens", a term usually associated with deep ecology even though the German Green Party "fundi" faction included eco-socialists, and eco-socialists in other Green Parties, like Derek Wall, have been described in the press as fundies.
Eco-socialists also criticise bureaucratic and elite theories of self-described socialism such as Maoism, Stalinism and what other critics have termed bureaucratic collectivism or state capitalism. Instead, eco-socialists focus on imbuing socialism with ecology while keeping the emancipatory goals of "first-epoch" socialism. Eco-socialists aim for communal ownership of the means of production by "freely associated producers" with all forms of domination eclipsed, especially gender inequality and racism.
This often includes the restoration of commons land in opposition to private property, in which local control of resources valorizes the Marxist concept of use value above exchange value. Practically, eco-socialists have generated various strategies to mobilise action on an internationalist basis, developing networks of grassroots individuals and groups that can radically transform society through nonviolent "prefigurative projects" for a post-capitalist, post-statist world.
History
1880s–1930s
Contrary to the depiction of Karl Marx by some environmentalists, social ecologists and fellow socialists as a productivist who favoured the domination of nature, eco-socialists have revisited Marx's writings and believe that he "was a main originator of the ecological world-view". Eco-socialist authors, like John Bellamy Foster and Paul Burkett, point to Marx's discussion of a "metabolic rift" between man and nature, his statement that "private ownership of the globe by single individuals will appear quite absurd as private ownership of one man by another" and his observation that a society must "hand it [the planet] down to succeeding generations in an improved condition". Nonetheless, other eco-socialists feel that Marx overlooked a "recognition of nature in and for itself", ignoring its "receptivity" and treating nature as "subjected to labor from the start" in an "essentially active relationship".
William Morris, the English novelist, poet and designer, is largely credited with developing key principles of what was later called eco-socialism. During the 1880s and 1890s, Morris promoted his eco-socialist ideas within the Social Democratic Federation and the Socialist League.
Following the Russian Revolution, some environmentalists and environmental scientists attempted to integrate ecological consciousness into Bolshevism, although many such people were later purged from the Communist Party of the Soviet Union. The "pre-revolutionary environmental movement", encouraged by the revolutionary scientist Aleksandr Bogdanov and the Proletkul't organisation, made efforts to "integrate production with natural laws and limits" in the first decade of Soviet rule, before Joseph Stalin attacked ecologists and the science of ecology and the Soviet Union fell into the pseudo-science of the state biologist Trofim Lysenko, who "set about to rearrange the Russian map" in ignorance of environmental limits.
1950s–1960s
Social ecology is closely related to the work and ideas of Murray Bookchin and influenced by anarchist Peter Kropotkin. Social ecologists assert that the present ecological crisis has its roots in human social problems, and that the domination of human-over-nature stems from the domination of human-over-human. In 1958, Murray Bookchin defined himself as an anarchist, seeing parallels between anarchism and ecology. His first book, Our Synthetic Environment, was published under the pseudonym Lewis Herber in 1962, a few months before Rachel Carson's Silent Spring. The book described a broad range of environmental ills but received little attention because of its political radicalism. His groundbreaking essay "Ecology and Revolutionary Thought" introduced ecology as a concept in radical politics. In 1968, he founded a group that published the influential Anarchos magazine, which printed that essay and other innovative pieces on post-scarcity, on ecological technologies such as solar and wind energy, and on decentralization and miniaturization. Lecturing throughout the United States, he helped popularize the concept of ecology to the counterculture.
Post-Scarcity Anarchism is a collection of essays written by Murray Bookchin and first published in 1971 by Ramparts Press. It outlines the possible form anarchism might take under conditions of post-scarcity. It is one of Bookchin's major works, and its radical thesis provoked controversy for being utopian and messianic in its faith in the liberatory potential of technology. Bookchin argues that post-industrial societies are also post-scarcity societies, and can thus imagine "the fulfillment of the social and cultural potentialities latent in a technology of abundance". The self-administration of society is now made possible by technological advancement and, when technology is used in an ecologically sensitive manner, the revolutionary potential of society will be much changed. In 1982, his book The Ecology of Freedom had a profound impact on the emerging ecology movement, both in the United States and abroad. He was a principal figure in the Burlington Greens in 1986–1990, an ecology group that ran candidates for city council on a program to create neighborhood democracy.
Bookchin later developed a political philosophy to complement social ecology which he called "Communalism" (spelled with a capital "C" to differentiate it from other forms of communalism). While originally conceived as a form of social anarchism, he later developed Communalism into a separate ideology which incorporates what he saw as the most beneficial elements of Anarchism, Marxism, syndicalism, and radical ecology.
Politically, Communalists advocate a network of directly democratic citizens' assemblies in individual communities/cities organized in a confederal fashion. The method used to achieve this is called libertarian municipalism, which involves the establishment of face-to-face democratic institutions that are to grow and expand confederally with the goal of eventually replacing the nation-state.
1970s–1990s
In the 1970s, Barry Commoner, suggesting a left-wing response to The Limits to Growth model that predicted catastrophic resource depletion and spurred environmentalism, postulated that capitalist technologies were chiefly responsible for environmental degradation, as opposed to population pressures. East German dissident writer and activist Rudolf Bahro published two books addressing the relationship between socialism and ecology – The Alternative in Eastern Europe and Socialism and Survival – which promoted a 'new party'; the books led to his arrest, through which he gained international notoriety.
At around the same time, Alan Roberts, an Australian Marxist, posited that people's unfulfilled needs fuelled consumerism. Fellow Australian Ted Trainer further called upon socialists to develop a system that met human needs, in contrast to the capitalist system of created wants. A key development in the 1980s was the creation of the journal Capitalism, Nature, Socialism (CNS), with James O'Connor as founding editor and the first issue in 1988. The debates that ensued led to a host of theoretical works by O'Connor, Carolyn Merchant, Paul Burkett and others.
The Australian Democratic Socialist Party launched the Green Left Weekly newspaper in 1991, following a period of working within Green Alliance and Green Party groups in formation. This ceased when the Australian Greens adopted a policy of proscription of other political groups in August 1991. The DSP also published a comprehensive policy resolution, "Socialism and Human Survival" in book form in 1990, with an expanded second edition in 1999 entitled "Environment, Capitalism & Socialism".
1990s onwards
The 1990s saw the socialist feminists Mary Mellor and Ariel Salleh address environmental issues within an eco-socialist paradigm. With the rising profile of the anti-globalization movement in the Global South, an environmentalism of the poor, combining ecological awareness and social justice, has also become prominent. David Pepper also released his important work, Ecosocialism: From Deep Ecology to Social Justice, in 1994, which critiques the current approach of many within Green politics, particularly deep ecologists.
In 2001, Joel Kovel, a social scientist, psychiatrist and former candidate for the Green Party of the United States (GPUS) presidential nomination in 2000, and Michael Löwy, an anthropologist and member of the Reunified Fourth International, released "An Ecosocialist Manifesto", which has been adopted by some organisations and suggests possible routes for the growth of eco-socialist consciousness. Kovel's 2002 work, The Enemy of Nature: The End of Capitalism or the End of the World?, is considered by many to be the most up-to-date exposition of eco-socialist thought.
In October 2007, the International Ecosocialist Network was founded in Paris.
Influence on current green and socialist movements
Currently, many Green Parties around the world, such as the Dutch Green Left party (GroenLinks), contain strong eco-socialist elements. Radical red-green alliances have been formed in many countries by eco-socialists, radical Greens and other radical left groups. In Denmark, the Red-Green Alliance was formed as a coalition of numerous radical parties. Within the European Parliament, a number of far-left parties from Northern Europe have organized themselves into the Nordic Green Left Alliance. Red Greens feature heavily in the Green Party of Saskatchewan (in Canada but not necessarily affiliated to the Green Party of Canada). In 2016, GPUS officially adopted eco-socialist ideology within the party.
The Green Party of England and Wales has an eco-socialist group, Green Left, founded in June 2006. Members of the Green Party holding a number of influential positions, including former Principal Speakers Siân Berry and Derek Wall, as well as prominent Green Party candidate and human rights activist Peter Tatchell, have been associated with the grouping. Many Marxist organisations also contain eco-socialists, as evidenced by Löwy's involvement in the reunified Fourth International and Socialist Resistance, a British Marxist newspaper that reports on eco-socialist issues and has published two collections of essays on eco-socialist thought: Ecosocialism or Barbarism?, edited by Jane Kelly and Sheila Malone, and The Global Fight for Climate Justice, edited by Ian Angus with a foreword by Derek Wall.
Influence on existing socialist regimes
Eco-socialism has had a minor influence over developments in the environmental policies of what can be called "existing socialist" regimes, notably the People's Republic of China. Pan Yue, deputy director of the PRC's State Environmental Protection Administration, has acknowledged the influence of eco-socialist theory on his championing of environmentalism within China, which has gained him international acclaim (including being nominated for the Person of the Year Award 2006 by The New Statesman, a British current affairs magazine). Pan stated in an interview that, while he often finds eco-socialist theory "too idealistic" and lacking "ways of solving actual problems", he believes that it provides "political reference for China's scientific view of development", "gives socialist ideology room to expand" and offers "a theoretical basis for the establishment of fair international rules" on the environment.
He echoes much of eco-socialist thought, attacking international "environmental inequality", refusing to focus on technological fixes and arguing for the construction of "a harmonious, resource-saving and environmentally-friendly society". He also shows a knowledge of eco-socialist history, from the convergence of radical green politics and socialism to the "red-green alliances" of the post-Soviet era. This focus on eco-socialism informed his essay On Socialist Ecological Civilisation, published in September 2006, which according to China Dialogue "sparked debate" in China.
The current Constitution of Bolivia, promulgated in 2009, is the first constitution in the world that is both ecological and pro-socialist, making the Bolivian state officially ecosocialist.
International organizations
In 2007, the biologist David Schwartzman identified the need to build a transnational ecosocialist movement as one of the critical challenges facing ecosocialists. Later in 2007, it was announced that attempts to form an Ecosocialist International Network (EIN) would be made, and an inaugural meeting of the network occurred on 7 October 2007 in Paris. The meeting attracted "more than 60 activists from Argentina, Australia, Belgium, Brazil, Canada, Cyprus, Denmark, France, Greece, Italy, Switzerland, United Kingdom, and the United States" and elected a steering committee featuring representatives from Britain, the United States, Canada, France, Greece, Argentina, Brazil and Australia, including Joel Kovel, Michael Löwy, Derek Wall, Ian Angus (editor of Climate and Capitalism in Canada) and Ariel Salleh. The committee states that it wants "to incorporate members from China, India, Africa, Oceania and Eastern Europe". The EIN held its second international conference in January 2009, in association with the World Social Forum in Belém, Brazil. The conference released The Belem Ecosocialist Declaration.
International networking by eco-socialists has already been seen in the Praxis Research and Education Center, a group of international researchers and activists. Based in Moscow and established in 1997, Praxis, as well as publishing books "by libertarian socialists, Marxist humanists, anarchists, [and] syndicalists", running the Victor Serge Library and opposing war in Chechnya, states that it believes "that capitalism has brought life on the planet near to the brink of catastrophe, and that a form of ecosocialism needs to emerge to replace capitalism before it is too late".
Critique of capitalist expansion and globalization
Merging aspects of Marxism, socialism, environmentalism and ecology, eco-socialists generally believe that the capitalist system is the cause of social exclusion, inequality and environmental degradation through globalization and imperialism under the supervision of repressive states and transnational structures.
In the "Ecosocialist Manifesto" (2001), Joel Kovel and Michael Löwy suggest that capitalist expansion causes "crises of ecology" through the "rampant industrialization" and "societal breakdown" that springs "from the form of imperialism known as globalization". They believe that capitalism's expansion "exposes ecosystems" to pollutants, habitat destruction and resource depletion, "reducing the sensuous vitality of nature to the cold exchangeability required for the accumulation of capital", while submerging "the majority of the world's people to a mere reservoir of labor power" as it penetrates communities through "consumerism and depoliticization".
Other eco-socialists like Derek Wall highlight how, in the Global South, free-market capitalism structures economies to produce export-geared crops that take water from traditional subsistence farms, increasing hunger and the likelihood of famine; furthermore, forests are increasingly cleared and enclosed to produce cash crops that separate people from their local means of production and aggravate poverty. Wall shows that many of the world's poor have access to the means of production through "non-monetised communal means of production", such as subsistence farming, but, despite providing for need and a level of prosperity, these are not included in conventional economic measures, like GNP.
Wall therefore views neo-liberal globalization as "part of the long struggle of the state and commercial interests to steal from those who subsist" by removing "access to the resources that sustain ordinary people across the globe". Furthermore, Kovel sees neoliberalism as "a return to the pure logic of capital" that "has effectively swept away measures which had inhibited capital's aggressivity, replacing them with naked exploitation of humanity and nature." For Kovel, this "tearing down of boundaries and limits to accumulation is known as globalization", which was "a deliberate response to a serious accumulation crisis (in the 1970s) that had convinced the leaders of the global economy to install what we know as neoliberalism."
Furthermore, Ramachandra Guha and Joan Martinez Alier blame globalization for creating increased levels of waste and pollution, and then dumping the waste on the most vulnerable in society, particularly those in the Global South. Others have also noted that capitalism disproportionately affects the poorest in the Global North as well, leading to examples of resistance such as the environmental justice movement in the United States, consisting of working-class people and ethnic minorities who highlight the tendency for waste dumps, major road projects and incinerators to be constructed around socially excluded areas. However, as Wall highlights, such campaigns are often ignored or persecuted precisely because they originate among the most marginalized in society: the African-American radical green religious group MOVE, campaigning for ecological revolution and animal rights from Philadelphia, had many members imprisoned or even killed by US authorities from the 1970s onwards.
Eco-socialism disagrees with the elite theories of capitalism, which tend to label a specific class or social group as conspirators who construct a system that satisfies their greed and personal desires. Instead, eco-socialists suggest that the very system itself is self-perpetuating, fuelled by "extra-human" or "impersonal" forces. Kovel uses the Bhopal industrial disaster as an example. Many anti-corporate observers would blame the avarice of those at the top of many multi-national corporations, such as the Union Carbide Corporation in Bhopal, for seemingly isolated industrial accidents. Conversely, Kovel suggests that Union Carbide were experiencing a decrease in sales that led to falling profits, which, due to stock market conditions, translated into a drop in share values. The depreciation of share value made many shareholders sell their stock, weakening the company and leading to cost-cutting measures that eroded the safety procedures and mechanisms at the Bhopal site. Though this did not, in Kovel's mind, make the Bhopal disaster inevitable, he believes that it illustrates the effect market forces can have on increasing the likelihood of ecological and social problems.
Use and exchange value
Eco-socialism focuses closely on Marx's theories about the contradiction between use values and exchange values. Kovel posits that, within a market, goods are not produced to meet needs but are produced to be exchanged for money that we then use to acquire other goods; as we have to keep selling in order to keep buying, we must persuade others to buy our goods just to ensure our survival, which leads to the production of goods with no previous use that can be sold to sustain our ability to buy other goods.
Such goods, in an eco-socialist analysis, produce exchange values but have no use value. Eco-socialists like Kovel stress that this contradiction has reached a destructive extent, where certain essential activities such as caring for relatives full-time and basic subsistence are unrewarded, while unnecessary commodities earn individuals huge fortunes and fuel consumerism and resource depletion.
"Second contradiction" of capitalism
James O'Connor argues for a "second contradiction" of underproduction, to complement Marx's "first" contradiction of capital and labor. While the second contradiction is often considered a theory of environmental degradation, O'Connor's theory in fact goes much further. Building on the work of Karl Polanyi, along with Marx, O'Connor argues that capitalism necessarily undermines the "conditions of production" necessary to sustain the endless accumulation of capital. These conditions of production include soil, water, energy, and so forth. But they also include an adequate public education system, transportation infrastructures, and other services that are not produced directly by capital, but which capital needs in order to accumulate effectively. As the conditions of production are exhausted, the costs of production for capital increase. For this reason, the second contradiction generates an underproduction crisis tendency, with the rising cost of inputs and labor, to complement the overproduction tendency of too many commodities for too few customers. Like Marx's contradiction of capital and labor, the second contradiction therefore threatens the system's existence.
In addition, O'Connor believes that, in order to remedy environmental contradictions, the capitalist system innovates new technologies that overcome existing problems but introduce new ones.
O'Connor cites nuclear power as an example, which he sees as a form of producing energy that is advertised as an alternative to carbon-intensive, non-renewable fossil fuels, but creates long-term radioactive waste and other dangers to health and security. While O'Connor believes that capitalism is capable of spreading out its economic supports so widely that it can afford to destroy one ecosystem before moving onto another, he and many other eco-socialists now fear that, with the onset of globalization, the system is running out of new ecosystems. Kovel adds that capitalist firms have to continue to extract profit through a combination of intensive or extensive exploitation and selling to new markets, meaning that capitalism must grow indefinitely to exist, which he thinks is impossible on a planet of finite resources.
Role of the state and transnational organizations
Capitalist expansion is seen by eco-socialists as being "hand in glove" with "corrupt and subservient client states" that repress dissent against the system, governed by international organisations "under the overall supervision of the western powers and the superpower United States", which subordinate peripheral nations economically and militarily. Kovel further claims that capitalism itself spurs conflict and, ultimately, war. Kovel states that the 'War on Terror', between Islamist extremists and the United States, is caused by "oil imperialism", whereby the capitalist nations require control over sources of energy, especially oil, which are necessary to continue intensive industrial growth; in the quest for control of such resources, Kovel argues, the capitalist nations, specifically the United States, have come into conflict with the predominantly Muslim nations where oil is often found.
Eco-socialists believe that state or self-regulation of markets does not solve the crisis "because to do so requires setting limits upon accumulation", which is "unacceptable" for a growth-orientated system; they believe that terrorism and revolutionary impulses cannot be tackled properly "because to do so would mean abandoning the logic of empire". Instead, eco-socialists feel that increasing repressive counter-terrorism increases alienation and causes further terrorism and believe that state counter-terrorist methods are, in Kovel and Löwy's words, "evolving into a new and malignant variation of fascism". They echo Rosa Luxemburg's "stark choice" between "socialism or barbarism", which was believed to be a prediction of the coming of fascism and further forms of destructive capitalism at the beginning of the twentieth century (Luxemburg was in fact murdered by the proto-fascist Freikorps in the revolutionary atmosphere of Germany in 1919). Some now declare that the choice is 'ecosocialism or ecofascism'.
Tensions within the eco-socialist discourse
Reflecting tensions within the environmental and socialist movements, there is some conflict of ideas. However, in practice a synthesis is emerging which calls for democratic regulation of industry in the interests of people and the environment, nationalisation of some key environmental industries, local democracy and an extension of co-ops and the library principle. For example, Scottish Green Peter McColl argues that elected governments should abolish poverty through a citizens income scheme, regulate against social and environmental malpractice and encourage environmental good practice through state procurement. At the same time, economic and political power should be devolved as far as is possible through co-operatives and increased local decision making. By putting political and economic power into the hands of the people most likely to be affected by environmental injustice, it is less likely that the injustice will take place.
Critique of other forms of green politics
Eco-socialists criticise many within the Green movement for not being overtly anti-capitalist, for working within the existing capitalist, statist system, for voluntarism, or for reliance on technological fixes. The eco-socialist ideology is based on a critique of other forms of Green politics, including various forms of green economics, localism, deep ecology, bioregionalism and even some manifestations of radical green ideologies such as eco-feminism and social ecology.
As Kovel puts it, eco-socialism differs from Green politics at the most fundamental level because the 'Four Pillars' of Green politics (and the 'Ten Key Values' of the US Green Party) do not include the demand for the emancipation of labour and the end of the separation between producers and the means of production. Many eco-socialists also oppose Malthusianism and are alarmed by the gulf between Green politics in the Global North and the Global South.
Opposition to reformism and technologism
Eco-socialists are highly critical of those Greens who favour "working within the system". While eco-socialists like Kovel recognise the ability of within-system approaches to raise awareness, and believe that "the struggle for an ecologically rational world must include a struggle for the state", Kovel holds that the mainstream Green movement is too easily co-opted by the current powerful socio-political forces as it "passes from citizen-based activism to ponderous bureaucracies scuffling for 'a seat at the table'".
For Kovel, capitalism is "happy to enlist" the Green movement for "convenience", "control over popular dissent" and "rationalization". He further attacks within-system green initiatives like carbon trading, which he sees as a "capitalist shell game" that turns pollution "into a fresh source of profit". Brian Tokar has further criticised carbon trading in this way, suggesting that it augments existing class inequality and gives the "largest 'players' ... substantial control over the whole 'game'".
In addition, Kovel criticises the "defeatism" of voluntarism in some local forms of environmentalism that do not connect to a wider struggle: he suggests that they can be "drawn off into individualism" or co-opted to the demands of capitalism, as in the case of certain recycling projects, where citizens are "induced to provide free labor" to waste management industries who are involved in the "capitalization of nature". He labels the notion of voluntarism "ecopolitics without struggle".
Technological fixes to ecological problems are also rejected by eco-socialists. Saral Sarkar has updated the thesis of 1970s 'limits to growth' to exemplify the limits of new capitalist technologies such as hydrogen fuel cells, which require large amounts of energy to split molecules to obtain hydrogen. Furthermore, Kovel notes that "events in nature are reciprocal and multi-determined" and can therefore not be predictably "fixed"; socially, technologies cannot solve social problems because they are not "mechanical". He posits an eco-socialist analysis, developed from Marx, that patterns of production and social organisation are more important than the forms of technology used within a given configuration of society.
Under capitalism, he suggests that technology "has been the sine qua non of growth"; thus he believes that even in a world with hypothetical "free energy" the effect would be to lower the cost of automobile production, leading to the massive overproduction of vehicles, "collapsing infrastructure", chronic resource depletion and the "paving over" of the "remainder of nature". In the modern world, Kovel considers the supposed efficiency of new post-industrial commodities to be a "plain illusion", as miniaturized components involve many substances and are therefore non-recyclable (and, theoretically, only simple substances could be retrieved by burning out-of-date equipment, releasing more pollutants). He is quick to warn "environmental liberals" against over-selling the virtues of renewable energies that cannot meet the mass energy consumption of the era; although he would still support renewable energy projects, he believes it is more important to restructure societies to reduce energy use before relying on renewable energy technologies alone.
Critique of green economics
Eco-socialists have based their ideas for political strategy on a critique of several different trends in green economics. At the most fundamental level, eco-socialists reject what Kovel calls "ecological economics" or the "ecological wing of mainstream economics" for being "uninterested in social transformation". He further rejects the Neo-Smithian school, who believe in Adam Smith's vision of "a capitalism of small producers, freely exchanging with each other", which is self-regulating and competitive.
The school is represented by thinkers like David Korten who believe in "regulated markets" checked by government and civil society but, for Kovel, they do not provide a critique of the expansive nature of capitalism away from localised production and ignore "questions of class, gender or any other category of domination". Kovel also criticises their "fairy-tale" view of history, which refers to the abuse of "natural capital" by the materialism of the Scientific Revolution, an assumption that, in Kovel's eyes, seems to suggest that "nature had toiled to put the gift of capital into human hands", rather than capitalism being a product of social relations in human history.
Other forms of community-based economics are also rejected by eco-socialists such as Kovel, including followers of E. F. Schumacher and some members of the cooperative movement, for advocating "no more than a very halting and isolated first step". He thinks that their principles are "only partially realizable within the institutions of cooperatives in capitalist society" because "the internal cooperation" of cooperatives is "forever hemmed in and compromised" by the need to expand value and compete within the market. Marx also believed that cooperatives within capitalism make workers into "their own capitalist ... by enabling them to use the means of production for the employment of their own labour".
For Kovel and other eco-socialists, community-based economics and Green localism are "a fantasy" because "strict localism belongs to the aboriginal stages of society" and would be an "ecological nightmare at present population levels" due to "heat losses from a multitude of dispersed sites, the squandering of scarce resources, the needless reproduction of effort, and cultural impoverishment". While he feels that small-scale production units are "an essential part of the path towards an ecological society", he sees them not as "an end in itself"; in his view, small enterprises can be either capitalist or socialist in their configuration and therefore must be "consistently anti-capitalist", through recognition and support of the emancipation of labour, and exist "in a dialectic with the whole of things", as human society will need large-scale projects, such as transport infrastructures.
He highlights the work of steady-state theorist Herman Daly, who exemplifies what eco-socialists see as the good and bad points of ecological economics — while Daly offers a critique of capitalism and a desire for "workers ownership", he only believes in workers ownership "kept firmly within a capitalist market", ignoring the eco-socialist desire for struggle in the emancipation of labour and hoping that the interests of labour and management today can be improved so that they are "in harmony".
Critique of deep ecology
Despite the inclusion of both in political factions like the fundies of the German Green Party, eco-socialists and deep ecologists hold markedly opposite views. Eco-socialists like Kovel have attacked deep ecology because, like other forms of Green politics and green economics, it features "virtuous souls" who have "no internal connection with the critique of capitalism and the emancipation of labor". Kovel is particularly scathing about deep ecology and its "fatuous pronouncement" that Green politics is "neither left nor right, but ahead", which for him ignores the notion that "that which does not confront the system becomes its instrument".
Even more scathingly, Kovel suggests that in "its effort to decentre humanity within nature", deep ecologists can "go too far" and argue for the "splitting away of unwanted people", as evidenced by their desire to preserve wilderness by removing the groups that have lived there "from time immemorial". Kovel thinks that this lends legitimacy to "capitalist elites", like the United States State Department and the World Bank, who can make preservation of wilderness a part of their projects that "have added value as sites for ecotourism" but remove people from their land. Between 1986 and 1996, Kovel notes that over three million people were displaced by "conservation projects"; in the making of the national parks of the United States, three hundred Shoshone Indians were killed in the development of Yosemite.
Kovel believes that deep ecology has affected the rest of the Green movement and led to calls for restrictions on immigration, "often allying with reactionaries in a ... cryptically racist quest". Indeed, he finds traces of deep ecology in the "biological reduction" of Nazism, an ideology many "organicist thinkers" have found appealing, including Herbert Gruhl, a founder of the German Green Party (who subsequently left when it became more left-wing) and originator of the phrase "neither left nor right, but ahead". Kovel warns that, while 'ecofascism' is confined to a narrow band of far right intellectuals and "disaffected white power skinheads" who involved themselves alongside far left groups in the anti-globalization movement, it may be "imposed as a revolution from above to install an authoritarian regime in order to preserve the main workings of the system" in times of crisis.
Critique of bioregionalism
Bioregionalism, a philosophy developed by writers like Kirkpatrick Sale who believe in the self-sufficiency of "appropriate bioregional boundaries" drawn up by inhabitants of "an area", has been thoroughly critiqued by Kovel, who fears that the "vagueness" of the area will lead to conflict and further boundaries between communities. While Sale cites the bioregional living of Native Americans, Kovel notes that such ideas are impossible to translate to populations of modern proportions, and points out that Native Americans held land in common rather than as private property. Thus, for eco-socialists, bioregionalism provides no understanding of what is needed to transform society, or of what the inevitable "response of the capitalist state" would be to people constructing bioregionalism.
Kovel also attacks the problems of self-sufficiency. Where Sale believes in self-sufficient regions "each developing the energy of its peculiar ecology", such as "wood in the northwest [US]", Kovel asks "how on earth" these can be made sufficient for regional needs, and notes the environmental damage of converting Seattle into a "forest-destroying and smoke-spewing wood-burning" city. Kovel also questions Sale's insistence on bioregions that do "not require connections with the outside, but within strict limits", and whether this precludes journeys to visit family members and other forms of travel.
Critique of variants of eco-feminism
Like many variants of socialism and Green politics, eco-socialists recognise the importance of "the gendered bifurcation of nature" and support the emancipation of gender as it "is at the root of patriarchy and class". Nevertheless, while Kovel believes that "any path out of capitalism must also be eco-feminist", he criticises types of ecofeminism that are not anti-capitalist and can "essentialize women's closeness to nature and build from there, submerging history into nature", becoming more at home in the "comforts of the New Age Growth Centre". These limitations, for Kovel, "keep ecofeminism from becoming a coherent social movement".
Critique of social ecology
While having much in common with the radical tradition of social ecology, eco-socialists still see themselves as distinct. Kovel believes this is because social ecologists see hierarchy "in-itself" as the cause of ecological destruction, whereas eco-socialists focus on the gender and class domination embodied in capitalism and recognise that forms of authority that are not "an expropriation of human power for ... self-aggrandizement", such as a student-teacher relationship that is "reciprocal and mutual", are beneficial.
In practice, Kovel describes social ecology as continuing the anarchist tradition of non-violent direct action, which is "necessary" but "not sufficient" because "it leaves unspoken the question of building an ecological society beyond capital". Furthermore, social ecologists and anarchists tend to focus on the state alone, rather than the class relations behind state domination (in the view of Marxists). Kovel fears that this is political, springing from historical hostility to Marxism among anarchists, and sectarianism, which he points out as a fault of the "brilliant" but "dogmatic" founder of social ecology, Murray Bookchin.
Opposition to Malthusianism and neo-Malthusianism
While Malthusianism and eco-socialism overlap within the Green movement because both address over-industrialism, and despite the fact that eco-socialists, like many within the Green movement, are described as neo-Malthusian because of their criticism of economic growth, eco-socialists are opposed to Malthusianism. This divergence stems from the difference between Marxist and Malthusian examinations of social injustice – whereas Marx blames inequality on class injustice, Malthus argued that the working class remained poor because of their greater fertility and birth rates.
Neo-Malthusians have slightly modified this analysis by increasing their focus on overconsumption – nonetheless, eco-socialists find this attention inadequate. They point to the fact that Malthus did not thoroughly examine ecology and that Garrett Hardin, a key neo-Malthusian, suggested that further enclosed and privatised land, as opposed to commons, would solve the chief environmental problem, which Hardin labeled the 'tragedy of the commons'.
"Two varieties of environmentalism"
Joan Martinez-Alier and Ramachandra Guha attack the gulf between what they see as the two "varieties of environmentalism" – the environmentalism of the North, an aesthetic environmentalism that is the privilege of wealthy people who no longer have basic material concerns, and the environmentalism of the South, where people's local environment is a source of communal wealth and such issues are a question of survival. Nonetheless, other eco-socialists, such as Wall, have also pointed out that capitalism disproportionately affects the poorest in the Global North as well, leading to examples of resistance such as the environmental justice movement in the US and groups like MOVE.
Critique of other forms of socialism
Eco-socialists choose to use the term "socialist", despite "the failings of its twentieth century interpretations", because it "still stands for the supersession of capital" and thus "the name, and the reality" must "become adequate for this time". Eco-socialists have nonetheless often diverged from other Marxist movements. Eco-socialism has also been partly influenced by and associated with agrarian socialism as well as some forms of Christian socialism, especially in the United States.
Critique of socialist states
While many have seen socialism as necessary for responding to the environmental challenges brought about by capitalism, and some looked to the Soviet Union and other such socialist states to provide an environmental path forward, others have critiqued the history and policies of such states for their lack of environmental planning and policy.
For Kovel and Michael Löwy, eco-socialism is "the realization of the 'first-epoch' socialisms" by resurrecting the notion of "free development of all producers", and distancing themselves from "the attenuated, reformist aims of social democracy and the productivist structures of the bureaucratic variations of socialism", such as forms of Leninism and Stalinism. They ground the failure of past socialist movements in "underdevelopment in the context of hostility by existing capitalist powers", which led to "the denial of internal democracy" and "emulation of capitalist productivism". Kovel believes that the forms of 'actually existing socialism' consisted of "public ownership of the means of production", rather than meeting "the true definition" of socialism as "a free association of producers", with the Party-State bureaucracy acting as an alienating substitute "public".
In analysing the Russian Revolution, Kovel feels that "conspiratorial" revolutionary movements "cut off from the development of society" will "find society an inert mass requiring leadership from above". From this, he notes that the anti-democratic Tsarist heritage meant that the Bolsheviks, who were aided into power by World War One, were a minority who, when faced with a counter-revolution and invading Western powers, continued to meet "the extraordinary needs" of 'war communism', which "put the seal of authoritarianism" on the revolution; thus, for Kovel, Lenin and Trotsky "resorted to terror", shut down the Soviets (workers' councils) and emulated "capitalist efficiency and productivism as a means of survival", setting the stage for Stalinism.
In Kovel's eyes, Lenin came to oppose the nascent Bolshevik environmentalism and its champion Aleksandr Bogdanov, who was later attacked for "idealism"; Kovel describes Lenin's philosophy as "a sharply dualistic materialism, rather similar to the Cartesian separation of matter and consciousness, and perfectly tooled ... to the active working over of the dead, dull matter by the human hand", which led him to want to overcome Russian backwardness through rapid industrialization. This tendency was, according to Kovel, augmented by a desire to catch-up with the West and the "severe crisis" of the revolution's first years.
Furthermore, Kovel quotes Trotsky, who believed in a Communist "superman" who would "learn how to move rivers and mountains". Kovel believes that, in Stalin's "revolution from above" and mass terror in response to the early 1930s economic crisis, Trotsky's writings "were given official imprimatur", despite the fact that Trotsky himself was eventually purged, as Stalinism attacked "the very notion of ecology... in addition to ecologies". Kovel adds that Stalin "would win the gold medal for enmity to nature", and that, in the face of massive environmental degradation, the inflexible Soviet bureaucracy became increasingly inefficient and unable to emulate capitalist accumulation, leading to a "vicious cycle" that led to its collapse.
Critique of the wider socialist movement
Beyond the forms of "actually existing socialism", Kovel criticises socialists in general as treating ecology "as an afterthought" and holding "a naive faith in the ecological capacities of a working-class defined by generations of capitalist production". He cites David McNally, who advocates increasing consumption levels under socialism, which, for Kovel, contradicts any notion of natural limits. He also criticises McNally's belief in releasing the "positive side of capital's self-expansion" after the emancipation of labor; instead, Kovel argues that a socialist society would "seek not to become larger" but would rather become "more realized", choosing sufficiency and eschewing economic growth. Kovel further adds that the socialist movement was historically conditioned by its origins in the era of industrialization so that, when modern socialists like McNally advocate a socialism that "cannot be at the expense of the range of human satisfaction", they fail "to recognize that these satisfactions can be problematic with respect to nature when they have been historically shaped by the domination of nature".
Eco-socialist strategy
Eco-socialists generally advocate the non-violent dismantling of capitalism and the state, focusing on collective ownership of the means of production by freely associated producers and restoration of the commons. To get to an eco-socialist society, eco-socialists advocate working-class anti-capitalist resistance but also believe that there is potential for agency in autonomous, grassroots individuals and groups across the world who can build "prefigurative" projects for non-violent radical social change.
These prefigurative steps go "beyond the market and the state" and base production on the enhancement of use values, leading to the internationalization of resistance communities in an 'Eco-socialist Party' or network of grassroots groups focused on non-violent, radical social transformation. An 'Eco-socialist revolution' is then carried out.
Agency
Many eco-socialists, like Alan Roberts, have encouraged working-class action and resistance, such as the 'green ban' movement in which workers refuse to participate in projects that are ecologically harmful. Similarly, Kovel and Hans A. Baer focus on working-class involvement in the formation of new eco-socialist parties or their increased involvement in existing Green Parties; however, Kovel believes that, unlike many other forms of socialist analysis, "there is no privileged agent" or revolutionary class, and that there is potential for agency in numerous autonomous, grassroots individuals and groups who can build "prefigurative" projects for non-violent radical social change. He defines "prefiguration" as "the potential for the given to contain the lineaments of what is to be", meaning that "a moment toward the future exists embedded in every point of the social organism where a need arises".
If "everything has prefigurative potential", Kovel notes that forms of potential ecological production will be "scattered", and thus suggests that "the task is to free them and connect them". While all "human ecosystems" have "ecosocialist potential", Kovel points out that ones such as the World Bank have low potential, whereas internally democratic anti-globalization "affinity groups" have a high potential through a dialectic that involves the "active bringing and holding together of negations", such as the group acting as an alternative institution ("production of an ecological/socialist alternative") and trying to shut down a G8 summit meeting ("resistance to capital"). Therefore, "practices that in the same motion enhance use-values and diminish exchange-values are the ideal" for eco-socialists.
Prefiguration
For Kovel, the main prefigurative steps "are that people ruthlessly criticize the capitalist system... and that they include in this a consistent attack on the widespread belief that there can be no alternative to it", which will then "delegitimate the system and release people into struggle". Kovel justifies this by stating that "radical criticism of the given... can be a material force", even without an alternative, "because it can seize the mind of the masses of people", leading to "dynamic" and "exponential", rather than "incremental" and "linear", victories that spread rapidly. Following this, he advocates the expansion of the dialectical eco-socialist potential of groups through sustaining the confrontation and internal cohesion of human ecosystems, leading to an "activation" of potentials in others that will "spread across the whole social field" as "a new set of orienting principles" that define an ideology or 'party-life' formation.
In the short-term, eco-socialists like Kovel advocate activities that have the "promise of breaking down the commodity form". This includes organizing labor, which is a "reconfiguring of the use-value of labor power"; forming cooperatives, allowing "a relatively free association of labor"; forming localised currencies, which he sees as "undercutting the value-basis of money"; and supporting "radical media" that, in his eyes, involve an "undoing of the fetishism of commodities". Arran Gare, Wall and Kovel have advocated economic localisation in the same vein as many in the Green movement, although they stress that it must be a prefigurative step rather than an end in itself.
Kovel also advises political parties attempting to "democratize the state" that there should be "dialogue but no compromise" with established political parties, and that there must be "a continual association of electoral work with movement work" to avoid "being sucked back into the system". Such parties, he believes, should focus on "the local rungs of the political system" first, before running national campaigns that "challenge the existing system by the elementary means of exposing its broken promises". These views on party action have been supported by other eco-socialists.
Kovel believes in building prefigurations around forms of production based on use values, which will provide a practical vision of a post-capitalist, post-statist system. Such projects include Indymedia ("a democratic rendering of the use-values of new technologies such as the Internet, and a continual involvement in wider struggle"), open-source software, Wikipedia, public libraries and many other initiatives, especially those developed within the anti-globalization movement. These strategies, in Wall's words, "go beyond the market and the state" by rejecting the supposed dichotomy between private enterprise and state-owned production, while also rejecting any combination of the two through a mixed economy. He states that these present forms of "amphibious politics", which are "half in the dirty water of the present but seeking to move on to a new, unexplored territory". Löwy also highlights acting with a post-statist view saying that eco-socialists should take inspiration from Marx's commentary on the Paris Commune.
Wall suggests that open-source software, for example, opens up "a new form of commons regime in cyberspace", which he praises as production "for the pleasure of invention" that gives "access to resources without exchange". He believes that open source has "bypassed" both the market and the state, and could provide "developing countries with free access to vital computer software". Furthermore, he suggests that an "open source economy" means that "the barrier between user and provider is eroded", allowing for "cooperative creativity". He links this to Marxism and the notion of usufruct, asserting that "Marx would have been a Firefox user".
Internationalization of prefiguration and the eco-socialist party
Many eco-socialists have noted that the potential for building such projects is easier for media workers than for those in heavy industry because of the decline in trade unionism and the globalized division of labor which divides workers. Kovel posits that class struggle is "internationalized in the face of globalization", as evidenced by a wave of strikes across the Global South in the first half of the year 2000; indeed, he says that "labor's most cherished values are already immanently ecocentric".
Kovel therefore thinks that these universalizing tendencies must lead to the formation of a consciously 'Ecosocialist' party that is neither a parliamentary nor a vanguardist party. Instead, Kovel advocates a form of political party "grounded in communities of resistance", where delegates from these communities form the core of the party's activists, and these delegates and the "open and transparent" assembly they form are subject to recall and regular rotation of members. He holds up the Zapatista Army of National Liberation (EZLN) and the Gaviotas movement as examples of such communities, which "are produced outside capitalist circuits" and show that "there can be no single way valid for all peoples".
Nonetheless, he also firmly believes in connecting these movements, stating that "ecosocialism will be international or it will be nothing" and hoping that the Ecosocialist Party can retain the autonomy of local communities while supporting them materially. With an ever-expanding party, Kovel hopes that "defections" by capitalists will occur, leading eventually to the armed forces and police who, in joining the revolution, will signify that "the turning point is reached".
Two principles
The economist Pat Devine highlights the necessity to abolish the social division of labor as one of two principles necessary for building an eco-socialist future, the other being the abolition of the metabolic rift, as detailed by John Bellamy Foster.
Revolution and transition to eco-socialism
The revolution as envisaged by eco-socialists involves an immediate socio-political transition. Internationally, eco-socialists believe in a reform of the nature of money and the formation of a World People's Trade Organisation (WPTO) that democratizes and improves world trade through the calculation of an Ecological Price (EP) for goods. This would then be followed by a transformation of socioeconomic conditions towards ecological production, commons land and notions of usufruct (that seek to improve the common property possessed by society) to end private property. Eco-socialists assert that this must be carried out with adherence to non-violence.
Immediate aftermath of the revolution
Eco-socialists like Kovel use the term "Eco-socialist revolution" to describe the transition to an eco-socialist world society. In the immediate socio-political transition, he believes that four groups will emerge from the revolution, namely revolutionaries, those "whose productive activity is directly compatible with ecological production" (such as nurses, schoolteachers, librarians, independent farmers and many other examples), those "whose pre-revolutionary practice was given over to capital" (including the bourgeoisie, advertising executives and more) and "the workers whose activity added surplus value to capitalist commodities".
In terms of political organisation, he advocates an "interim assembly" made up of the revolutionaries that can "devise incentives to make sure that vital functions are maintained" (such as short-term continuation of "differential remuneration" for labor), "handle the redistribution of social roles and assets", convene "in widespread locations", and send delegates to regional, state, national and international organisations, where every level has an "executive council" that is rotated and can be recalled. From there, he asserts that "productive communities" will "form the political as well as economic unit of society" and "organize others" to make a transition to eco-socialist production.
He adds that people will be allowed to be members of any community they choose with "associate membership" of others, such as a doctor having main membership of healthcare communities as a doctor and associate membership of child-rearing communities as a father. Each locality would, in Kovel's eyes, require one community that administered the areas of jurisdiction through an elected assembly. High-level assemblies would have additional "supervisory" roles over localities to monitor the development of ecosystemic integrity, and administer "society-wide services" like transport in "state-like functions", before the interim assembly can transfer responsibilities to "the level of the society as a whole through appropriate and democratically responsive committees".
Transnational trade and capital reform
In Kovel's eyes, part of the eco-socialist transition is the reform of money to retain its use in "enabling exchanges" while reducing its functions as "a commodity in its own right" and "repository of value". He argues for directing money to the "enhancement of use-values" through a "subsidization of use-values" that "preserves the functioning core of the economy while gaining time and space for rebuilding it". Internationally, he believes in the immediate cessation of speculation in currencies ("breaking down the function of money as commodity, and redirecting funds on use-values"), the cancellation of the debt of the Global South ("breaking the back of the value function" of money) and the redirection of the "vast reservoir of mainly phony value" to reparations and "ecologically sound development". He suggests the end of military aid and other forms of support to "comprador elites in the South" will eventually "lead to their collapse".
In terms of trade, Kovel advocates a World People's Trade Organization (WPTO), "responsible to a confederation of popular bodies", in which "the degree of control over trade is ... proportional to involvement with production", meaning that "farmers would have a special say over food trade" and so on. He posits that the WPTO should have an elected council that will oversee a reform of prices in favour of an Ecological Price (EP) "determined by the difference between actual use-values and fully realized ones", thus having low tariffs for forms of ecological production like organic agriculture; he also envisages the high tariffs on non-ecological production providing subsidies to ecological production units.
The EP would also internalize the costs of current externalities (like pollution) and "would be set as a function of the distance traded", reducing the effects of long-distance transport like carbon emissions and increased packaging of goods. He thinks that this will provide a "standard of transformation" for non-ecological industries, like the automobile industry, thus spurring changes towards ecological production.
Ecological production
Eco-socialists pursue "ecological production" that, according to Kovel, goes beyond the socialist vision of the emancipation of labor to "the realization of use-values and the appropriation of intrinsic value". He envisions a form of production in which "the making of a thing becomes part of the thing made" so that, using a high-quality meal as an analogy, "pleasure would obtain for the cooking of the meal"; thus activities "reserved as hobbies under capitalism" would "compose the fabric of everyday life" under eco-socialism.
This, for Kovel, is achieved if labor is "freely chosen and developed... with a fully realized use-value" achieved by a "negation" of exchange-value; he cites the Food Not Bombs project as an example of this approach. He believes that the notion of "mutual recognition ... for the process as well as the product" will avoid exploitation and hierarchy. With production allowing humanity to "live more directly and receptively embedded in nature", Kovel predicts that "a reorientation of human need" will occur that recognises ecological limits and sees technology as "fully participant in the life of eco-systems", thus removing it from profit-making exercises.
In the course of an eco-socialist revolution, writers like Kovel and Baer advocate a "rapid conversion to ecosocialist production" for all enterprises, followed by "restoring ecosystemic integrity to the workplace" through steps like worker ownership. He then believes that the new enterprises can build "socially developed plans" of production for societal needs, such as efficient light-rail transport components. At the same time, Kovel argues for the transformation of essential but, under capitalism, non-productive labour, such as child care, into productive labour, "thereby giving reproductive labour a status equivalent to productive labour".
During such a transition, he believes that income should be guaranteed and that money will still be used under "new conditions of value... according to use and to the degree to which ecosystem integrity is developed and advanced by any particular production". Within this structure, Kovel asserts that markets will become unnecessary – although "market phenomena" in personal exchanges and other small instances might be adopted – and communities and elected assemblies will democratically decide on the allocation of resources. Istvan Meszaros believes that such "genuinely planned and self-managed (as opposed to bureaucratically planned from above) productive activities" are essential if eco-socialism is to meet its "fundamental objectives".
Eco-socialists are quick to assert that their focus on "production" does not mean that there will be an increase in production and labor under eco-socialism. Kovel thinks that the emancipation of labor and the realization of use-value will allow "the spheres of work and culture to be reintegrated". He cites the example of Paraguayan Indian communities (organised by Jesuits) in the eighteenth century who made sure that all community members learned musical instruments, and had labourers take musical instruments to the fields and take turns playing music or harvesting.
Commons, property and usufruct
Most eco-socialists, including Alier and Guha, echo subsistence eco-feminists like Vandana Shiva when they argue for the restoration of commons land over private property. They blame ecological degradation on the inclination to short-term, profit-inspired decisions inherent within a market system. For them, privatization of land strips people of their local communal resources in the name of creating markets for neo-liberal globalization, which benefits a minority. In their view, successful commons systems have been set up around the world throughout history to manage areas cooperatively, based on long-term needs and sustainability instead of short-term profit.
Many eco-socialists focus on a modified version of the notion of 'usufruct' to replace capitalist private property arrangements. As a legal term, usufruct refers to the legal right to use and derive profit or benefit from property that belongs to another person, as long as the property is not damaged. According to eco-socialists like Kovel, a modern interpretation of the idea is "where one uses, enjoys – and through that, improves – another's property", as its Latin etymology "condenses the two meanings of use – as in use-value, and enjoyment – and as in the gratification expressed in freely associated labour". The idea, according to Kovel, has roots in the Code of Hammurabi and was first mentioned in Roman law "where it applied to ambiguities between masters and slaves with respect to property"; it also features in Islamic Sharia law, Aztec law and the Napoleonic Code.
Crucially for eco-socialists, Marx mentioned the idea when he stated that human beings are no more than the planet's "usufructuaries, and, like boni patres familias, they must hand it down to succeeding generations in an improved condition". Kovel and others have taken on this reading, asserting that, in an eco-socialist society, "everyone will have ... rights of use and ownership over those means of production necessary to express the creativity of human nature", namely "a place of one's own" to decorate to personal taste, some personal possessions, the body and its attendant sexual and reproductive rights.
However, Kovel sees property as "self-contradictory" because individuals emerge "in a tissue of social relations" and "nested circles", with the self at the centre and extended circles where "issues of sharing arise from early childhood on". He believes that "the full self is enhanced more by giving than by taking" and that eco-socialism is realized when material possessions weigh "lightly" upon the self – thus restoration of use-value allows things to be taken "concretely and sensuously" but "lightly, since things are enjoyed for themselves and not as buttresses for a shaky ego".
This, for Kovel, reverses what Marxists see as the commodity fetishism and atomization of individuals (through the "unappeasable craving" for "having and excluding others from having") under capitalism. Under eco-socialism, he therefore believes that enhancement of use-value will lead to differentiated ownership between the individual and the collective, where there are "distinct limits on the amount of property individuals control" and no-one can take control of resources that "would permit the alienation of means of production from another". He then hopes that the "hubris" of the notion of "ownership of the planet" will be replaced with usufruct.
Non-violence
Most eco-socialists are involved in peace and anti-war movements, and eco-socialist writers, like Kovel, generally believe that "violence is the rupturing of ecosystems" and is therefore "deeply contrary to ecosocialist values". Kovel believes that revolutionary movements must prepare for post-revolutionary violence from counter-revolutionary sources by "prior development of the democratic sphere" within the movement, because "to the degree that people are capable of self-government, so will they turn away from violence and retribution" for "a self-governed people cannot be pushed around by any alien government". In Kovel's view, it is essential that the revolution "takes place in" or spreads quickly to the United States, which "is capital's gendarme and will crush any serious threat", and that revolutionaries reject the death penalty and retribution against former opponents or counter-revolutionaries.
Although the movement has traditionally been non-violent, there is growing scepticism about solely using non-violent tactics as a strategy in the eco-socialist agenda and as a way of dismantling harmful systems. Although progress has been made in the climate movement with non-violent tactics (as demonstrated by XR, who pushed the UK government to declare a climate emergency), the movement is still failing to bring about radical decarbonisation. As eco-socialist activist Andreas Malm states in his book How to Blow Up a Pipeline, "If non-violence is not to be treated as a holy covenant or rite, then one must adopt the explicitly anti-Gandhian position of Mandela: 'I called for non-violent protest for as long as it was effective', as 'a tactic that should be abandoned when it no longer worked'." Malm argues there is another phase beyond peaceful protest.
Criticism
While in many ways the criticisms of eco-socialism combine the traditional criticisms of both socialism and Green politics, there are unique critiques of eco-socialism, which are largely from within the traditional socialist or Green movements themselves, along with conservative criticism.
Some socialists are critical of the term "eco-socialism". David Reilly, who questions whether his argument is improved by the use of an "exotic word", argues instead that the "real socialism" is "also a green or 'eco'" one that you get to "by dint of struggle". Other socialists, like Paul Hampton of the Alliance for Workers' Liberty (a British third camp socialist party), see eco-socialism as "classless ecology", wherein eco-socialists have "given up on the working class" as the privileged agent of struggle by "borrowing bits from Marx but missing the locus of Marxist politics".
Writing in Capitalism Nature Socialism, Doug Boucher, Peter Caplan, David Schwartzman and Jane Zara criticise eco-socialists in general and Joel Kovel in particular for a deterministic "catastrophism" that overlooks "the countervailing tendencies of both popular struggles and the efforts of capitalist governments to rationalize the system" and the "accomplishments of the labor movement" that "demonstrate that despite the interests and desires of capitalists, progress toward social justice is possible". They argue that an ecological socialism must be "built on hope, not fear".
Conservatives have criticised the perceived opportunism of left-wing groups who have increased their focus on green issues since the fall of communism. Fred L. Smith Jr., President of the Competitive Enterprise Institute think-tank, exemplifies the conservative critique of left Greens, attacking the "pantheism" of the Green movement and conflating "eco-paganism" with eco-socialism. Like many conservative critics, Smith uses the term 'eco-socialism' to attack non-socialist environmentalists for advocating restrictions on market-based solutions to ecological problems. He nevertheless wrongly claims that eco-socialists endorse "the Malthusian view of the relationship between man and nature", and states that Al Gore, a former Democratic Party Vice President of the United States and now a climate change campaigner, is an eco-socialist, despite the fact that Gore has never used this term and is not recognised as such by other followers of either Green politics or socialism.
Some environmentalists and conservationists have criticised eco-socialism from within the Green movement. In a review of Joel Kovel's The Enemy of Nature, David M. Johns criticises eco-socialism for not offering "suggestions about near term conservation policy" and focusing exclusively on long-term societal transformation. Johns believes that species extinction "started much earlier" than capitalism and suggests that eco-socialism neglects the fact that an ecological society will need to transcend the destructiveness found in "all large-scale societies", the very tendency that Kovel himself attacks among capitalists and traditional leftists who attempt to reduce nature to "linear" human models. Johns questions whether non-hierarchical social systems can provide for billions of people, and criticises eco-socialists for neglecting issues of population pressure. Furthermore, Johns describes Kovel's argument that human hierarchy is founded on raiding to steal women as "archaic".
List of eco-socialists
See also
Critique of political economy
Diggers movement
Eco-communalism
Eco-social market economy
Ecological democracy
Ecological economics
Green left
Green libertarianism
Green politics and parties
Green New Deal
Marxist philosophy of nature
Radical environmentalism
Red socialism
Social-ecology
Veganarchism
Yellow socialism
References
Bibliography
External links
Another Green World: Derek Wall's Ecosocialist Blog
Ecosocialist Horizons
The official site of "Ecosocialists Greece" Political Organization
Anti-globalization movement
Economic ideologies
Environmentalism
Green politics
History of environmentalism
Marxism
Political ideologies
Political movements
Political theories
Socialism
Political ecology | Eco-socialism | [
"Environmental_science"
] | 14,651 | [
"Political ecology",
"Environmental social science"
] |
53,971,080 | https://en.wikipedia.org/wiki/NGC%207303 | NGC 7303 is a barred spiral galaxy around 170 million light-years from Earth in the constellation Pegasus. NGC 7303 was discovered by astronomer John Herschel on September 15, 1828.
See also
NGC 4036
NGC 1300
List of NGC objects (7001–7840)
References
External links
Astronomical objects discovered in 1828
Pegasus (constellation)
7303
Barred spiral galaxies
069061
12065 | NGC 7303 | [
"Astronomy"
] | 80 | [
"Pegasus (constellation)",
"Constellations"
] |
53,975,791 | https://en.wikipedia.org/wiki/Salivary%20microbiome | The salivary microbiome consists of the nonpathogenic, commensal bacteria present in the healthy human salivary glands. It differs from the oral microbiome which is located in the oral cavity. Oral microorganisms tend to adhere to teeth. The oral microbiome possesses its own characteristic microorganisms found there. Resident microbes of the mouth adhere to the teeth and gums. "[T]here may be important interactions between the saliva microbiome and other microbiomes in the human body, in particular, that of the intestinal tract."
Characteristics
Unlike the uterine, placental and vaginal microbiomes, the types of organisms in the salivary microbiota remain relatively constant. There is no difference between populations of microbes based upon gender, age, diet, obesity, alcohol intake, race, or tobacco use. The salivary microbiome characteristically remains stable over a lifetime. One study suggests sharing an environment (e.g., living together) may influence the salivary microbiome more than genetic components. Porphyromonas, Solobacterium, Haemophilus, Corynebacterium, Cellulosimicrobium, Streptococcus and Campylobacter are some of the genera found in the saliva.
While the salivary microbiome shows stability, the broader oral microbiome can be influenced by various factors. A number of elements, including diet, dental hygiene, age, underlying medical conditions, and the use of antibiotics, as well as lifestyle choices such as smoking and alcohol consumption, and physiological changes such as pregnancy, the menstrual cycle, and menopause, can exert an influence on the composition of the oral microbiome.
Genetic markers and diagnostic testing
"There is high diversity in the salivary microbiome within and between individuals, but little geographic structure. Overall, ~13.5% of the total variance in the composition of genera is due to differences among individuals, which is remarkably similar to the fraction of the total variance in neutral genetic markers that can be attributed to differences among human populations."
"[E]nvironmental variables revealed a significant association between the genetic distances among locations and the distance of each location from the equator. Further characterization of the enormous diversity revealed here in the human salivary microbiome will aid in elucidating the role it plays in human health and disease, and in the identification of potentially informative species for studies of human population history."
Sixty new genera have been identified from the salivary glands. A total of 101 different genera were identified in the salivary glands. Out of these, 39 genera are not found in the oral microbiome. It is not known whether the resident species remain constant or change.
Though the composition of the salivary microbiome is similar to that of the oral microbiome, there also exists an association between the salivary microbiome and the gut microbiome. Saliva sampling may be a non-invasive way to detect changes in the gut microbiome and changes in systemic disease. The association between the salivary microbiome and polycystic ovarian syndrome has been characterized: "saliva microbiome profiles correlate with those in the stool, despite the fact that the bacterial communities in the two locations differ greatly. Therefore, saliva may be a useful alternative to stool as an indicator of bacterial dysbiosis in systemic disease."
The sugar concentration in salivary secretions can vary, and blood sugar levels are reflected in salivary gland secretions. High salivary glucose (HSG) levels are defined as a glucose concentration ≥ 1.0 mg/dL (n = 175), and low salivary glucose (LSG) levels as < 0.1 mg/dL (n = 2,537). Salivary gland secretions containing high levels of sugar change the oral microbiome and contribute to an environment that is conducive to the formation of dental caries and gingivitis.
Salivary glands
Organisms of the salivary microbiome reside in the three major salivary glands: parotid, submandibular, and sublingual. These glands secrete electrolytes, proteins, genetic material, polysaccharides, and other molecules. Most of these substances enter the salivary gland acinus and duct system from surrounding capillaries via the intervening tissue fluid, although some substances are produced within the glands themselves. The level of each salivary component varies considerably depending on the health status of the individual and the presence of pathogenic and commensal organisms.
References
See also
Human microbiome
Human microbiome project
Human virome
List of bacterial vaginosis microbiota
Microbiota of the lower reproductive tract of women
Vaginal microbiota in pregnancy
Microbiology
Microbiomes
Bacteriology
Mouth
Glands of mouth
Gustatory system | Salivary microbiome | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,005 | [
"Microbiomes",
"Environmental microbiology",
"Microbiology",
"Microscopy"
] |
53,977,093 | https://en.wikipedia.org/wiki/Beltrami%20flow | In fluid dynamics, Beltrami flows are flows in which the vorticity vector and the velocity vector are parallel to each other. In other words, Beltrami flow is a flow in which the Lamb vector is zero. It is named after the Italian mathematician Eugenio Beltrami due to his derivation of the Beltrami vector field, while initial developments in fluid dynamics were done by the Russian scientist Ippolit S. Gromeka in 1881.
Description
Since the vorticity vector $\boldsymbol\omega$ and the velocity vector $\mathbf{v}$ are collinear, we can write

$$\boldsymbol\omega = \nabla\times\mathbf{v} = \alpha(\mathbf{x}, t)\,\mathbf{v},$$

where $\alpha$ is some scalar function. One immediate consequence of Beltrami flow is that it can never be a planar or axisymmetric flow, because in those flows vorticity is always perpendicular to the velocity field. The other important consequence will be realized by looking at the incompressible vorticity equation

$$\frac{\partial\boldsymbol\omega}{\partial t} + (\mathbf{v}\cdot\nabla)\boldsymbol\omega - (\boldsymbol\omega\cdot\nabla)\mathbf{v} = \nabla\times\mathbf{f} + \nu\,\nabla^2\boldsymbol\omega,$$

where $\mathbf{f}$ is an external body force such as a gravitational or electric field, and $\nu$ is the kinematic viscosity. Since $\boldsymbol\omega$ and $\mathbf{v}$ are parallel, the non-linear terms in the above equation are identically zero, $(\mathbf{v}\cdot\nabla)\boldsymbol\omega - (\boldsymbol\omega\cdot\nabla)\mathbf{v} = 0$. Thus Beltrami flows satisfy the linear equation

$$\frac{\partial\boldsymbol\omega}{\partial t} = \nabla\times\mathbf{f} + \nu\,\nabla^2\boldsymbol\omega.$$

When $\mathbf{f} = 0$, the components of vorticity satisfy a simple heat equation.
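The Beltrami condition is easy to check symbolically for a concrete example. The sketch below, a Python/SymPy check (an assumed tool choice, not part of the article), verifies that the Arnold–Beltrami–Childress flow, mentioned in the see-also section as the GABC flow, satisfies $\boldsymbol\omega = \alpha\mathbf{v}$ with $\alpha = 1$:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl

# Arnold-Beltrami-Childress (ABC) flow: a classical Beltrami field with alpha = 1.
N = CoordSys3D('N')
A, B, C = sp.symbols('A B C')

v = ((A * sp.sin(N.z) + C * sp.cos(N.y)) * N.i
     + (B * sp.sin(N.x) + A * sp.cos(N.z)) * N.j
     + (C * sp.sin(N.y) + B * sp.cos(N.x)) * N.k)

# The vorticity curl(v) equals v itself, so the Lamb vector omega x v vanishes.
print(curl(v) - v)  # prints: 0
```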
Trkalian flow
Viktor Trkal considered the Beltrami flows without any external forces in 1919 for the scalar function $\alpha = c$, i.e.,

$$\frac{\partial\boldsymbol\omega}{\partial t} = \nu\,\nabla^2\boldsymbol\omega, \qquad \boldsymbol\omega = c\,\mathbf{v}.$$

Introduce the following separation of variables

$$\mathbf{v} = e^{-\nu c^2 t}\,\mathbf{g}(\mathbf{x}),$$

then the equation satisfied by $\mathbf{g}$ becomes

$$\nabla\times\mathbf{g} = c\,\mathbf{g}.$$
The Chandrasekhar–Kendall functions satisfy this equation.
Generalized Beltrami flow
The generalized Beltrami flow satisfies the condition

$$\nabla\times(\boldsymbol\omega\times\mathbf{v}) = 0,$$

which is less restrictive than the Beltrami condition $\boldsymbol\omega\times\mathbf{v} = 0$. Unlike the normal Beltrami flows, the generalized Beltrami flow can be studied for planar and axisymmetric flows.
Steady planar flows
For steady generalized Beltrami flow, we have $\partial/\partial t = 0$, and since it is also planar we have $\boldsymbol\omega = (0, 0, \omega)$. Introduce the stream function $\psi$:

$$u = \frac{\partial\psi}{\partial y}, \qquad v = -\frac{\partial\psi}{\partial x}.$$

Integration of $\nabla\times(\boldsymbol\omega\times\mathbf{v}) = 0$ gives $\omega = f(\psi)$, since the vorticity is constant along streamlines. So, a complete solution is possible if it satisfies all of the following three equations:

$$\nabla^2\psi = -\omega, \qquad \omega = f(\psi), \qquad \nabla^2\omega = 0.$$
A special case arises when the flow field has uniform vorticity. Wang (1991) gave a generalized solution assuming a linear ansatz. Substituting this into the vorticity equation and introducing a separation of variables, with a separating constant, results in a family of exact solutions.
The solution obtained for different choices of the separating constant can be interpreted differently; for example, it can represent a flow downstream of a uniform grid, a flow created by a stretching plate, a flow into a corner, an asymptotic suction profile, and so on.
Unsteady planar flows
Here the generalized Beltrami condition again removes the non-linear term, so the vorticity satisfies the heat equation together with the stream-function relation:

$$\frac{\partial\omega}{\partial t} = \nu\,\nabla^2\omega, \qquad \nabla^2\psi = -\omega.$$
Taylor's decaying vortices
G. I. Taylor gave the solution for a special case in 1923, where $\omega = c\,\psi$ with $c$ a constant. He showed that the separation of variables $\psi = f(x, y)\,e^{-\nu c t}$ satisfies the heat equation above, provided $f$ obeys $\nabla^2 f + c f = 0$.
Taylor also considered an example, a decaying system of eddies rotating alternately in opposite directions and arranged in a rectangular array,

$$\psi = A\,\cos\frac{\pi x}{d}\,\cos\frac{\pi y}{d}\;e^{-2\pi^2\nu t/d^2},$$

which satisfies the above equation with $c = 2\pi^2/d^2$, where $d$ is the length of the square formed by an eddy. Therefore, this system of eddies decays as $e^{-2\pi^2\nu t/d^2}$.
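A quick finite-difference check of the eddy array as reconstructed above (the grid resolution and the cell size d are arbitrary, assumed choices): the vorticity $\omega = -\nabla^2\psi$ should be proportional to $\psi$ itself with constant $c = 2\pi^2/d^2$, which is what makes the whole pattern decay self-similarly.

```python
import numpy as np

d = 1.0                                   # side length of one eddy cell (assumed)
x = np.linspace(0.0, 2.0 * d, 201)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
psi = np.cos(np.pi * X / d) * np.cos(np.pi * Y / d)   # stream function at t = 0

# Five-point Laplacian on interior grid points.
lap = (psi[2:, 1:-1] + psi[:-2, 1:-1] + psi[1:-1, 2:] + psi[1:-1, :-2]
       - 4.0 * psi[1:-1, 1:-1]) / h**2
omega = -lap

c = 2.0 * np.pi**2 / d**2
residual = np.max(np.abs(omega - c * psi[1:-1, 1:-1]))
print(f"c = {c:.4f}, max |omega - c*psi| = {residual:.2e}")  # small, O(h^2)
```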
O. Walsh generalized Taylor's eddy solution in 1992. Walsh's solution is of the form $\mathbf{v} = e^{-\nu\lambda^2 t}\,\mathbf{v}_0(x, y)$, where $\nabla^2\mathbf{v}_0 = -\lambda^2\mathbf{v}_0$ and $\nabla\cdot\mathbf{v}_0 = 0$.
Steady axisymmetric flows
Here we have an axisymmetric flow with purely azimuthal vorticity, described by a Stokes stream function $\psi$. Integration of the generalized Beltrami condition gives a functional relation $\omega = r f(\psi)$ between the vorticity and the stream function, and the three governing equations follow as in the planar case.
The first equation is the Hicks equation. Marris and Aswani (1977) showed that the only possible solution is $\omega = cr$ with $c$ constant, i.e., azimuthal vorticity proportional to the distance from the axis, and the remaining equations reduce to

$$\frac{\partial^2\psi}{\partial r^2} - \frac{1}{r}\frac{\partial\psi}{\partial r} + \frac{\partial^2\psi}{\partial z^2} = -c\,r^2.$$
A simple set of solutions to the above equation exists, describing, for different choices of the constants, a flow due to two opposing rotational streams on a parabolic surface, rotational flow over a plane wall, a flow in an ellipsoidal vortex (with Hill's spherical vortex as a special case), a type of toroidal vortex, and so on.
The homogeneous solution, as shown by Berker, can be written in separated form in terms of the functions $r J_1(kr)$ and $r Y_1(kr)$, where $J_1$ and $Y_1$ are the Bessel functions of the first and second kind, respectively. A special case of the above solution is Poiseuille flow for cylindrical geometry with transpiration velocities on the walls. Chia-Shun Yih found a solution in 1958 for Poiseuille flow into a sink.
Beltrami flow in fluid mechanics
Beltrami fields are a classical steady solution of the Euler equations. They play an important role in (ideal) fluid mechanics in equilibrium, as complex (chaotic) streamline behaviour is expected only for these fields.
See also
Gromeka–Arnold–Beltrami–Childress (GABC) flow
References
Fluid dynamics | Beltrami flow | [
"Chemistry",
"Engineering"
] | 892 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
50,197,442 | https://en.wikipedia.org/wiki/List%20of%20linear%20integrated%20circuits | The following is a list of linear integrated circuits. Many were among the first analog integrated circuits commercially produced; some were groundbreaking innovations, and many are still being used.
See also
Linear integrated circuit
List of LM-series integrated circuits
4000-series integrated circuits
List of 4000-series integrated circuits
7400-series integrated circuits
List of 7400-series integrated circuits
References
Electronic design
Electronics lists | List of linear integrated circuits | [
"Engineering"
] | 79 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
38,249,632 | https://en.wikipedia.org/wiki/1-planar%20graph | In topological graph theory, a 1-planar graph is a graph that can be drawn in the Euclidean plane in such a way that each edge has at most one crossing point, where it crosses a single additional edge. If a 1-planar graph, one of the most natural generalizations of planar graphs, is drawn that way, the drawing is called a 1-plane graph or 1-planar embedding of the graph.
Coloring
1-planar graphs were first studied by Ringel, who showed that they can be colored with at most seven colors. Later, the precise number of colors needed to color these graphs, in the worst case, was shown to be six. The example of the complete graph K6, which is 1-planar, shows that 1-planar graphs may sometimes require six colors. However, the proof that six colors are always enough is more complicated.
Ringel's motivation was in trying to solve a variation of total coloring for planar graphs, in which one simultaneously colors the vertices and faces of a planar graph in such a way that no two adjacent vertices have the same color, no two adjacent faces have the same color, and no vertex and face that are adjacent to each other have the same color. This can obviously be done using eight colors by applying the four color theorem to the given graph and its dual graph separately, using two disjoint sets of four colors. However, fewer colors may be obtained by forming an auxiliary graph that has a vertex for each vertex or face of the given planar graph, and in which two auxiliary graph vertices are adjacent whenever they correspond to adjacent features of the given planar graph. A vertex coloring of the auxiliary graph corresponds to a vertex-face coloring of the original planar graph. This auxiliary graph is 1-planar, from which it follows that Ringel's vertex-face coloring problem may also be solved with six colors. The graph K6 cannot be formed as an auxiliary graph in this way, but nevertheless the vertex-face coloring problem also sometimes requires six colors; for instance, if the planar graph to be colored is a triangular prism, then its eleven vertices and faces require six colors, because no three of them may be given a single color.
Edge density
Every 1-planar graph with n vertices has at most 4n − 8 edges. More strongly, each 1-planar drawing has at most n − 2 crossings; removing one edge from each crossing pair of edges leaves a planar graph, which can have at most 3n − 6 edges, from which the 4n − 8 bound on the number of edges in the original 1-planar graph immediately follows. However, unlike planar graphs (for which all maximal planar graphs on a given vertex set have the same number of edges as each other), there exist maximal 1-planar graphs (graphs to which no additional edges can be added while preserving 1-planarity) that have significantly fewer than 4n − 8 edges. The bound of 4n − 8 on the maximum possible number of edges in a 1-planar graph can be used to show that the complete graph K7 on seven vertices is not 1-planar, because this graph has 21 edges and in this case 4n − 8 = 20 < 21.
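The counting argument in the previous paragraph is easy to replay mechanically. A minimal Python sketch (function names are illustrative) compares the edge count of the complete graph K_n with the 1-planar bound 4n − 8; note the bound is necessary but not sufficient for 1-planarity.

```python
def complete_graph_edges(n: int) -> int:
    return n * (n - 1) // 2

def one_planar_edge_bound(n: int) -> int:
    # Every 1-planar graph on n vertices has at most 4n - 8 edges.
    return 4 * n - 8

for n in range(5, 9):
    edges, bound = complete_graph_edges(n), one_planar_edge_bound(n)
    verdict = "not 1-planar" if edges > bound else "bound permits 1-planarity"
    print(f"K_{n}: {edges} edges vs bound {bound} -> {verdict}")
# K_7 has 21 > 20 edges, so it cannot be 1-planar; K_6 (15 <= 16) in fact is.
```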
A 1-planar graph is said to be an optimal 1-planar graph if it has exactly 4n − 8 edges, the maximum possible. In a 1-planar embedding of an optimal 1-planar graph, the uncrossed edges necessarily form a quadrangulation (a polyhedral graph in which every face is a quadrilateral). Every quadrangulation gives rise to an optimal 1-planar graph in this way, by adding the two diagonals to each of its quadrilateral faces. It follows that every optimal 1-planar graph is Eulerian (all of its vertices have even degree), that the minimum degree in such a graph is six, and that every optimal 1-planar graph has at least eight vertices of degree exactly six. Additionally, every optimal 1-planar graph is 4-vertex-connected, and every 4-vertex cut in such a graph is a separating cycle in the underlying quadrangulation.
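The construction of an optimal 1-planar graph from a quadrangulation can be sketched directly: add both diagonals to every quadrilateral face and count edges. The snippet below uses the cube graph, a quadrangulation with 8 vertices and 6 faces, as an assumed example (the vertex labeling is arbitrary).

```python
# Faces of the cube graph, each a 4-cycle of vertex labels 0..7 (assumed labeling:
# 0-3 form the bottom square, vertex i+4 sits above vertex i).
faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
         (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]

edges = set()
for a, b, c, d in faces:
    # Quadrilateral boundary edges (the uncrossed edges)...
    edges |= {frozenset(p) for p in [(a, b), (b, c), (c, d), (d, a)]}
    # ...plus the two crossing diagonals added inside the face.
    edges |= {frozenset((a, c)), frozenset((b, d))}

n = 8
print(len(edges), 4 * n - 8)   # 24 24 -> exactly the optimal 1-planar edge count
```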
The graphs that have straight 1-planar drawings (that is, drawings in which each edge is represented by a line segment, and in which each line segment is crossed by at most one other edge) have a slightly tighter bound of 4n − 9 on the maximum number of edges, achieved by infinitely many graphs.
Complete multipartite graphs
A complete classification of the 1-planar complete graphs, complete bipartite graphs, and more generally complete multipartite graphs is known. Every complete bipartite graph of the form K2,n is 1-planar (even planar), as is every complete tripartite graph of the form K1,1,n. Other than these infinite sets of examples, the only complete multipartite 1-planar graphs are K6, K1,1,1,6, K1,1,2,3, K2,2,2,2, K1,1,1,2,2, and their subgraphs. The minimal non-1-planar complete multipartite graphs are K3,7, K4,5, K1,3,4, K2,3,3, and K1,1,1,1,3.
For instance, the complete bipartite graph K3,6 is 1-planar because it is a subgraph of K1,1,1,6, but K3,7 is not 1-planar.
Computational complexity
It is NP-complete to test whether a given graph is 1-planar, and it remains NP-complete even for the graphs formed from planar graphs by adding a single edge and for graphs of bounded bandwidth. The problem is fixed-parameter tractable when parameterized by cyclomatic number or by tree-depth, so it may be solved in polynomial time when those parameters are bounded.
In contrast to Fáry's theorem for planar graphs, not every 1-planar graph may be drawn 1-planarly with straight line segments for its edges. However, testing whether a 1-planar drawing may be straightened in this way can be done in polynomial time. Additionally, every 3-vertex-connected 1-planar graph has a 1-planar drawing in which at most one edge, on the outer face of the drawing, has a bend in it. This drawing can be constructed in linear time from a 1-planar embedding of the graph. The 1-planar graphs have bounded book thickness, but some 1-planar graphs including K2,2,2,2 have book thickness at least four.
1-planar graphs have bounded local treewidth, meaning that there is a (linear) function f such that the 1-planar graphs of diameter d have treewidth at most f(d); the same property holds more generally for the graphs that can be embedded onto a surface of bounded genus with a bounded number of crossings per edge. They also have separators, small sets of vertices the removal of which decomposes the graph into connected components whose size is a constant fraction of the size of the whole graph. Based on these properties, numerous algorithms for planar graphs, such as Baker's technique for designing approximation algorithms, can be extended to 1-planar graphs. For instance, this method leads to a polynomial-time approximation scheme for the maximum independent set of a 1-planar graph.
Generalizations and related concepts
The class of graphs analogous to outerplanar graphs for 1-planarity are called the outer-1-planar graphs. These are graphs that can be drawn in a disk, with the vertices on the boundary of the disk, and with at most one crossing per edge. These graphs can always be drawn (in an outer-1-planar way) with straight edges and right angle crossings. By using dynamic programming on the SPQR tree of a given graph, it is possible to test whether it is outer-1-planar in linear time. The triconnected components of the graph (nodes of the SPQR tree) can consist only of cycle graphs, bond graphs, and four-vertex complete graphs, from which it also follows that outer-1-planar graphs are planar and have treewidth at most three.
The 1-planar graphs include the 4-map graphs, graphs formed from the adjacencies of regions in the plane with at most four regions meeting in any point. Conversely, every optimal 1-planar graph is a 4-map graph. However, 1-planar graphs that are not optimal 1-planar may not be map graphs.
1-planar graphs have been generalized to k-planar graphs, graphs for which each edge is crossed at most k times (0-planar graphs are exactly the planar graphs). Ringel defined the local crossing number of G to be the least non-negative integer k such that G has a k-planar drawing. Because the local crossing number is the maximum degree of the intersection graph of the edges of an optimal drawing, and the thickness (minimum number of planar graphs into which the edges can be partitioned) can be seen as the chromatic number of an intersection graph of an appropriate drawing, it follows from Brooks' theorem that the thickness is at most one plus the local crossing number. The k-planar graphs with n vertices have at most $O(k^{1/2}n)$ edges, and treewidth $O((kn)^{1/2})$. A shallow minor of a k-planar graph, with depth d, is itself a (2d + 1)k-planar graph, so the shallow minors of 1-planar graphs and of k-planar graphs are also sparse graphs, implying that the 1-planar and k-planar graphs have bounded expansion.
Nonplanar graphs may also be parameterized by their crossing number, the minimum number of pairs of edges that cross in any drawing of the graph. A graph with crossing number k is necessarily k-planar, but not necessarily vice versa. For instance, the Heawood graph has crossing number 3, but it is not necessary for its three crossings to all occur on the same edge of the graph, so it is 1-planar, and can in fact be drawn in a way that simultaneously optimizes the total number of crossings and the crossings per edge.
Another related concept for nonplanar graphs is graph skewness, the minimal number of edges that must be removed to make a graph planar.
References
Further reading
Planar graphs
NP-complete problems | 1-planar graph | [
"Mathematics"
] | 2,247 | [
"Planar graphs",
"Computational problems",
"Planes (geometry)",
"Mathematical problems",
"NP-complete problems"
] |
38,251,251 | https://en.wikipedia.org/wiki/Strain-rate%20tensor | In continuum mechanics, the strain-rate tensor or rate-of-strain tensor is a physical quantity that describes the rate of change of the strain (i.e., the relative deformation) of a material in the neighborhood of a certain point, at a certain moment of time. It can be defined as the derivative of the strain tensor with respect to time, or as the symmetric component of the Jacobian matrix (derivative with respect to position) of the flow velocity. In fluid mechanics it also can be described as the velocity gradient, a measure of how the velocity of a fluid changes between different points within the fluid. Though the term can refer to a velocity profile (variation in velocity across layers of flow in a pipe), it is often used to mean the gradient of a flow's velocity with respect to its coordinates. The concept has implications in a variety of areas of physics and engineering, including magnetohydrodynamics, mining and water treatment.
The strain rate tensor is a purely kinematic concept that describes the macroscopic motion of the material. Therefore, it does not depend on the nature of the material, or on the forces and stresses that may be acting on it; and it applies to any continuous medium, whether solid, liquid or gas.
On the other hand, for any fluid except superfluids, any gradual change in its deformation (i.e. a non-zero strain rate tensor) gives rise to viscous forces in its interior, due to friction between adjacent fluid elements, that tend to oppose that change. At any point in the fluid, these stresses can be described by a viscous stress tensor that is, almost always, completely determined by the strain rate tensor and by certain intrinsic properties of the fluid at that point. Viscous stresses also occur in solids, in addition to the elastic stress observed in static deformation; when they are too large to be ignored, the material is said to be viscoelastic.
Dimensional analysis
By performing dimensional analysis, the dimensions of the velocity gradient can be determined. The dimensions of velocity are $\mathsf{L}\mathsf{T}^{-1}$, and the dimensions of distance are $\mathsf{L}$. The velocity gradient can be expressed as the ratio $\Delta v / \Delta x$. Therefore, the velocity gradient has the same dimensions as this ratio, i.e., $\mathsf{T}^{-1}$.
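The same bookkeeping can be delegated to a units library. A minimal sketch using the third-party pint package (an assumed dependency, not something the article prescribes):

```python
import pint

ureg = pint.UnitRegistry()
velocity = 1.0 * ureg.meter / ureg.second
distance = 1.0 * ureg.meter

velocity_gradient = velocity / distance   # the meters cancel
print(velocity_gradient.dimensionality)   # -> 1 / [time]
print(velocity_gradient.units)            # -> 1 / second
```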
In continuum mechanics
In 3 dimensions, the gradient $\nabla\mathbf{v}$ of the velocity is a second-order tensor which can be expressed as the matrix $L$:

$$L = \nabla\mathbf{v} = \begin{pmatrix} \partial_1 v_1 & \partial_2 v_1 & \partial_3 v_1 \\ \partial_1 v_2 & \partial_2 v_2 & \partial_3 v_2 \\ \partial_1 v_3 & \partial_2 v_3 & \partial_3 v_3 \end{pmatrix}.$$

$L$ can be decomposed into the sum of a symmetric matrix $E$ and a skew-symmetric matrix $W$ as follows:

$$E = \tfrac{1}{2}\left(L + L^{\mathsf{T}}\right), \qquad W = \tfrac{1}{2}\left(L - L^{\mathsf{T}}\right).$$

$E$ is called the strain rate tensor and describes the rate of stretching and shearing. $W$ is called the spin tensor and describes the rate of rotation.
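The decomposition is a one-liner in NumPy. The sketch below (the simple-shear velocity gradient is an illustrative, assumed choice) splits a gradient L into its strain-rate and spin parts:

```python
import numpy as np

gamma = 2.0  # shear rate of a simple shear flow u = (gamma * y, 0, 0), assumed
L = np.array([[0.0, gamma, 0.0],
              [0.0, 0.0,   0.0],
              [0.0, 0.0,   0.0]])

E = 0.5 * (L + L.T)  # symmetric part: strain-rate tensor (stretching/shearing)
W = 0.5 * (L - L.T)  # skew-symmetric part: spin tensor (rigid rotation)

assert np.allclose(L, E + W)
print("strain rate:\n", E)
print("spin:\n", W)
```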
Relationship between shear stress and the velocity field
Sir Isaac Newton proposed that shear stress is directly proportional to the velocity gradient:

$$\tau = \mu\,\frac{\partial u}{\partial y}.$$

The constant of proportionality, $\mu$, is called the dynamic viscosity.
Formal definition
Consider a material body, solid or fluid, that is flowing and/or moving in space. Let $\mathbf{v}$ be the velocity field within the body; that is, a smooth function from $\mathbb{R}^3\times\mathbb{R}$ such that $\mathbf{v}(\mathbf{p}, t)$ is the macroscopic velocity of the material that is passing through the point $\mathbf{p}$ at time $t$.

The velocity $\mathbf{v}(\mathbf{p} + \mathbf{r}, t)$ at a point displaced from $\mathbf{p}$ by a small vector $\mathbf{r}$ can be written as a Taylor series:

$$\mathbf{v}(\mathbf{p} + \mathbf{r}, t) = \mathbf{v}(\mathbf{p}, t) + (\nabla\mathbf{v})(\mathbf{p}, t)\,\mathbf{r} + \text{higher-order terms},$$

where $\nabla\mathbf{v}$ is the gradient of the velocity field, understood as a linear map that takes a displacement vector to the corresponding change in the velocity.

In an arbitrary reference frame, $\nabla\mathbf{v}$ is related to the Jacobian matrix of the field, namely in 3 dimensions it is the 3 × 3 matrix

$$J(\mathbf{p}, t) = (\nabla\mathbf{v})(\mathbf{p}, t) = \begin{pmatrix} \partial_1 v_1 & \partial_2 v_1 & \partial_3 v_1 \\ \partial_1 v_2 & \partial_2 v_2 & \partial_3 v_2 \\ \partial_1 v_3 & \partial_2 v_3 & \partial_3 v_3 \end{pmatrix},$$

where $v_i$ is the component of $\mathbf{v}$ parallel to axis $i$ and $\partial_j f$ denotes the partial derivative of a function $f$ with respect to the space coordinate $x_j$. Note that $J$ is a function of $\mathbf{p}$ and $t$.

In this coordinate system, the Taylor approximation for the velocity near $\mathbf{p}$ is

$$v_i(\mathbf{p} + \mathbf{r}, t) = v_i(\mathbf{p}, t) + \sum_j J_{ij}(\mathbf{p}, t)\,r_j,$$

or simply

$$\mathbf{v}(\mathbf{p} + \mathbf{r}, t) = \mathbf{v}(\mathbf{p}, t) + J(\mathbf{p}, t)\,\mathbf{r},$$

if $\mathbf{v}$ and $\mathbf{r}$ are viewed as 3 × 1 matrices.
Symmetric and antisymmetric parts
Any matrix can be decomposed into the sum of a symmetric matrix and an antisymmetric matrix. Applying this to the Jacobian matrix $J$, with symmetric component $E$ and antisymmetric component $R$ respectively:

$$E(\mathbf{p}, t) = \tfrac{1}{2}\left(J(\mathbf{p}, t) + J(\mathbf{p}, t)^{\mathsf{T}}\right), \qquad R(\mathbf{p}, t) = \tfrac{1}{2}\left(J(\mathbf{p}, t) - J(\mathbf{p}, t)^{\mathsf{T}}\right).$$

This decomposition is independent of coordinate system, and so has physical significance. Then the velocity field may be approximated as

$$\mathbf{v}(\mathbf{p} + \mathbf{r}, t) \approx \mathbf{v}(\mathbf{p}, t) + E(\mathbf{p}, t)\,\mathbf{r} + R(\mathbf{p}, t)\,\mathbf{r},$$

that is,

$$v_i(\mathbf{p} + \mathbf{r}, t) \approx v_i(\mathbf{p}, t) + \sum_j E_{ij}(\mathbf{p}, t)\,r_j + \sum_j R_{ij}(\mathbf{p}, t)\,r_j.$$

The antisymmetric term $R$ represents a rigid-like rotation of the fluid about the point $\mathbf{p}$. Its angular velocity is

$$\boldsymbol\omega(\mathbf{p}, t) = \tfrac{1}{2}\,\nabla\times\mathbf{v}(\mathbf{p}, t).$$

The product $2\boldsymbol\omega = \nabla\times\mathbf{v}$ is called the vorticity of the vector field. A rigid rotation does not change the relative positions of the fluid elements, so the antisymmetric term $R$ of the velocity gradient does not contribute to the rate of change of the deformation. The actual strain rate is therefore described by the symmetric term $E$, which is the strain rate tensor.
Shear rate and compression rate
The symmetric term $E$ (the rate-of-strain tensor) can be broken down further as the sum of a scalar times the unit tensor, which represents a gradual isotropic expansion or contraction, and a traceless symmetric tensor which represents a gradual shearing deformation, with no change in volume:

$$E = \tfrac{1}{3}\operatorname{tr}(E)\,I + \left(E - \tfrac{1}{3}\operatorname{tr}(E)\,I\right).$$

That is,

$$E_{ij} = \tfrac{1}{3}\Big(\sum_k E_{kk}\Big)\,\delta_{ij} + \Big(E_{ij} - \tfrac{1}{3}\Big(\sum_k E_{kk}\Big)\,\delta_{ij}\Big).$$

Here $\delta_{ij}$ is the unit tensor, such that $\delta_{ij}$ is 1 if $i = j$ and 0 if $i \neq j$. This decomposition is independent of the choice of coordinate system, and is therefore physically significant.

The trace of the expansion rate tensor is the divergence of the velocity field:

$$\operatorname{tr}(E) = \nabla\cdot\mathbf{v} = \partial_1 v_1 + \partial_2 v_2 + \partial_3 v_3,$$

which is the rate at which the volume of a fixed amount of fluid increases at that point.
The shear rate tensor is represented by a symmetric 3 × 3 matrix, and describes a flow that combines compression and expansion flows along three orthogonal axes, such that there is no change in volume. This type of flow occurs, for example, when a rubber strip is stretched by pulling at the ends, or when honey falls from a spoon as a smooth unbroken stream.
For a two-dimensional flow, the divergence of $\mathbf{v}$ has only two terms and quantifies the change in area rather than volume. The factor 1/3 in the expansion rate term should be replaced by 1/2 in that case.
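Continuing the NumPy sketch from above, the isotropic/traceless split can be computed the same way (the numerical tensor is an arbitrary, assumed symmetric example):

```python
import numpy as np

E = np.array([[0.3, 0.1, 0.0],
              [0.1, -0.1, 0.2],
              [0.0, 0.2, 0.4]])         # an arbitrary symmetric strain-rate tensor

expansion_rate = np.trace(E)            # equals the divergence of the velocity field
D = (expansion_rate / 3.0) * np.eye(3)  # isotropic expansion/contraction part
S = E - D                               # traceless shear part (volume-preserving)

assert np.isclose(np.trace(S), 0.0)     # the shear part carries no volume change
print("expansion rate:", expansion_rate)
```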
Examples
The study of velocity gradients is useful in analysing path-dependent materials and in the subsequent study of stresses and strains, e.g., plastic deformation of metals. The near-wall velocity gradient of the unburned reactants flowing from a tube is a key parameter for characterising flame stability. The velocity gradient of a plasma can define conditions for the solutions to fundamental equations in magnetohydrodynamics.
Fluid in a pipe
Consider the velocity field of a fluid flowing through a pipe. The layer of fluid in contact with the pipe tends to be at rest with respect to the pipe. This is called the no slip condition. If the velocity difference between fluid layers at the centre of the pipe and at the sides of the pipe is sufficiently small, then the fluid flow is observed in the form of continuous layers. This type of flow is called laminar flow.
The flow velocity difference between adjacent layers can be measured in terms of a velocity gradient, given by $\Delta v / \Delta z$, where $\Delta v$ is the difference in flow velocity between the two layers and $\Delta z$ is the distance between the layers.
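Putting Newton's proportionality to work on this pipe example, a minimal calculation with assumed, illustrative values:

```python
mu = 1.0e-3   # dynamic viscosity of water at about 20 C, Pa*s (assumed fluid)
dv = 0.5      # velocity difference between two laminar layers, m/s (assumed)
dz = 0.01     # distance between the layers, m (assumed)

velocity_gradient = dv / dz        # units of 1/s, as dimensional analysis predicts
tau = mu * velocity_gradient       # Newton's law of viscosity: shear stress in Pa
print(f"gradient = {velocity_gradient:.0f} 1/s, shear stress = {tau:.3f} Pa")
```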
See also
Stress tensor (disambiguation)
Finite strain theory, which covers the spatial and material velocity gradient in continuum mechanics
References
Continuum mechanics
Rates
Tensor physical quantities | Strain-rate tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 1,414 | [
"Tensors",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Tensor physical quantities",
"Classical mechanics"
] |
38,254,005 | https://en.wikipedia.org/wiki/Prepainted%20metal | According to EN 13523-0, a prepainted metal (or coil coated metal) is a ‘metal on which a coating material (e.g. paint, film…) has been applied by coil coating’. When applied onto the metallic substrate, the coating material (in liquid, in paste or powder form) forms a film possessing protective, decorative and/or other specific properties.
In 40 years, the European prepainted metal production has multiplied by 18.
Metal
The choice of metallic substrate is determined by the dimensional, mechanical and corrosion resistance properties required of the coated product in use. The most common metallic substrates that are organically coated are:
Hot dip galvanised steel (HDG) which consists of a cold reduced steel substrate onto which a layer of zinc is coated via a hot dip process to impart enhanced corrosion properties onto the base steel.
Galvanized mild steel (GMS) can be used as balustrade and handrail of staircase, pipe, etc.
Other zinc-based alloys are coated onto steel and used as a substrate for coil coating, giving different properties. They give improved corrosion resistance in particular conditions.
Electro-galvanised (EG) coated steel consists of a cold reduced substrate onto which a layer of zinc is coated by an electrolytic process.
Cold reduced steel (CR) without any zinc coating
Wrought aluminium alloys
Many other substrates are organically coated: zinc/iron, stainless steel, tinplate, brass, zinc and copper.
Coil coating
Coil coating is the continuous and highly automated industrial process for efficiently coating metal coils. Because the metal is treated before it is cut and formed, the entire surface is cleaned and treated, providing tightly-bonded finishes. (Formed parts can have many holes, recessed areas, valleys, and hidden areas that make it difficult to clean and uniformly paint.) Coil-coated metal (often called prepainted metal) is generally considered more durable and more corrosion-resistant than most post-painted metal.
Annually, 4.5 million tons of coil-coated steel and aluminum are produced and shipped in North America, and 5 million tons in Europe. In almost every five-year period since the early 1980s, the growth rate of coil-coated metal has exceeded the growth rates of steel and aluminum production.
Process
The definition of a coil coating process according to EN 10169:2010 is a ‘process in which an (organic) coating material is applied on rolled metal strip in a continuous process which includes cleaning, if necessary, and chemical pre-treatment of the metal surface and either one-side or two-side, single or multiple application of (liquid) paints or coating powders which are subsequently cured or/and laminating with permanent plastic films’.
The metal substrate (steel or aluminum) is delivered in coil form from the rolling mills. Coil weights vary from 5-6 tons for aluminum and up to about 25 tons for steel. The coil is positioned at the beginning of the line, then unwound at a constant speed, passing through the various pre-treatment and coating processes before being recoiled. Two strip accumulators at the beginning and the end of the line enable the work to be continuous, allowing new coils to be added (and finished coils removed) by a metal stitching process without slowing down or stopping the line.
The coil coating line
The continuous process of applying up to three separate coating layers onto one or both sides of a metal strip substrate occurs on a coil coating line. These lines vary greatly in strip width and line speed; however, all coil-coating lines share the same basic process steps.
A typical organic coil coating line consists of decoilers, entry strip accumulator, cleaning, chemical pretreatment, primer coat application, curing, final coat application, curing, exit accumulator and recoilers.
The following steps take place on a modern coating line:
Mechanical stitching of the strip to its predecessor
Cleaning the strip
Power brushing
Surface treatment by chemical conversion
Drying the strip
Application of primer on one or both sides
Passage through the first curing oven (between 15 and 60 seconds)
Cooling the strip
Coating the finish on one or both sides
Passage through the second curing oven (between 15 and 60 seconds)
Cooling down to room temperature
Rewinding of the coated coil
Coatings
Available coatings include polyesters, plastisols, polyurethanes, polyvinylidene fluorides (PVDF), epoxies, primers, backing coats and laminate films. For each product, the coating is built up in a number of layers.
Primer coatings form the essential link between the pretreatment and the finish coating. Essentially, a primer is required to provide inter-coat adhesion between the pretreatment and the finish coat and is also required to promote corrosion resistance in the total system. The composition of the primer will vary depending on the type of finish coat used. Primers require compatibility with various pretreatments and top coat paint systems; therefore, they usually comprise a mixture of resin systems to achieve this end.
Backing coats are applied to the underside of the strip with or without a primer. The coating is generally not as thick as the finish coating used for exterior applications. Backing coats are generally not exposed to corrosive environments and not visible in the end application.
Applications
Prepainted metal is used in a variety of products. It can be formed for many different applications, including those with T bends, without loss of coating quality. Major industries use prepainted metal in products such as building panels, metal roofs, wall panels, garage doors, office furniture (desks, cubicle divider panels, file cabinets, and modular cabinets), home appliances (refrigerators, dishwashers, freezers, range hoods, microwave ovens, and washers and dryers), heating and air-conditioning outer panels and ductwork, commercial appliances, vending machines, foodservice equipment and cooking tins, beverage cans, and automotive panels and parts (fuel tanks, body panels, bumpers). The list continues to grow, with new industries making the switch from post-painted to prepainted processes each year.
Some high-tech, complex coatings are applied with the coil coating process. Coatings for cool metal roofing materials, smog-eating building panels, antimicrobial products, anti-corrosive metal parts, and solar panels use this process. Pretreatments and coatings can be applied with the coil coating process in very precise, thin, uniform layers, and makes some complex coatings feasible and more cost-effective.
The largest market for prepainted metal is in both commercial and residential construction. It is chosen for the quality, low cost, design flexibility, and environmentally beneficial properties. Using prepainted metal can contribute to credit toward LEED certification for sustainable design. A wide arrange of color options are available with prepainted metal, including vibrant colors for modern designs, and natural weathered finishes in rustic expressions. Prepainted metal also can be formed, almost like plastic, in fluid shapes. This flexibility allows architects to achieve unique, expressive designs using metal.
The output of the coil coating industry is a prepainted metal strip. This has numerous applications in various industries, including in:
The construction industry for both indoor and outdoor applications;
The automotive and transport industries;
The production of white goods including washing machines;
Cabinets for electronic goods;
Office furniture;
Lighting envelopes;
Bakeware.
History
Before the advent of coil coating, steel and other metals arrived at factories in an untreated and unpainted state. Companies would fabricate and paint or treat the metal components of their product before assembly. This was costly, time-consuming, and environmentally harmful. The coil coating process was pioneered in the 1930s for painting, coating and pre-treating large coils of metal before they arrived at a manufacturing facility. The venetian blind industry was the first to utilize pre-painted metal.
Notes
Sources
prepaintedmetal.eu
prepaintedmetalacademy.eu
creativebuilding.eu
creativeroofing.eu
External links
coilcoating.org
What you should know about stamping coated coil
euramax.eu
Metallurgy
Coatings | Prepainted metal | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,699 | [
"Metallurgy",
"Coatings",
"Materials science",
"nan"
] |
38,256,214 | https://en.wikipedia.org/wiki/Glossary%20of%20commutative%20algebra | This is a glossary of commutative algebra.
See also list of algebraic geometry topics, glossary of classical algebraic geometry, glossary of algebraic geometry, glossary of ring theory and glossary of module theory.
In this article, all rings are assumed to be commutative with identity 1.
!$@
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
XYZ
See also
Glossary of ring theory
References
General references
Commutative algebra
Wikipedia glossaries using description lists | Glossary of commutative algebra | [
"Mathematics"
] | 113 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
38,257,383 | https://en.wikipedia.org/wiki/Tetrahexagonal%20tiling | In geometry, the tetrahexagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol r{6,4}.
Constructions
There are four uniform constructions of this tiling, three of them as constructed by mirror removal from the [6,4] kaleidoscope. Removing the last mirror, [6,4,1+], gives [6,6], (*662). Removing the first mirror, [1+,6,4], gives [(4,4,3)], (*443). Removing both mirrors, as [1+,6,4,1+], leaves [(3,∞,3,∞)] (*3232).
Symmetry
The dual tiling, called a rhombic tetrahexagonal tiling, with face configuration V4.6.4.6, represents the fundamental domains of a quadrilateral kaleidoscope, orbifold (*3232), shown here in two different centered views. Adding a 2-fold rotation point at the center of each rhombus represents a (2*32) orbifold.
Related polyhedra and tiling
See also
Square tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isotoxal tilings
Uniform tilings | Tetrahexagonal tiling | [
"Physics"
] | 377 | [
"Isotoxal tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,257,752 | https://en.wikipedia.org/wiki/Snub%20tetrahexagonal%20tiling | In geometry, the snub tetrahexagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of sr{6,4}.
Images
Drawn in chiral pairs, with edges missing between black triangles.
Related polyhedra and tiling
The snub tetrahexagonal tiling is fifth in a series of snub polyhedra and tilings with vertex figure 3.3.4.3.n.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Square tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Chiral figures
Hyperbolic tilings
Isogonal tilings
Snub tilings
Uniform tilings | Snub tetrahexagonal tiling | [
"Physics",
"Chemistry"
] | 224 | [
"Snub tilings",
"Isogonal tilings",
"Tessellation",
"Chirality",
"Hyperbolic tilings",
"Uniform tilings",
"Chiral figures",
"Symmetry"
] |
38,258,455 | https://en.wikipedia.org/wiki/Wells%20curve | The Wells curve (or Wells evaporation falling curve of droplets) is a diagram, developed by W. F. Wells in 1934, which describes what is expected to happen to small droplets once they have been exhaled into air.
Coughing, sneezing, and other violent exhalations produce high numbers of respiratory droplets derived from saliva and/or respiratory mucus, with sizes ranging from about 1 μm to 2 mm. Wells' insight was that such droplets would have two distinct fates, depending on their sizes. The interplay of gravity and evaporation means that droplets larger than a humidity-determined threshold size would fall to the ground due to gravity, while droplets smaller than this size would quickly evaporate, leaving a dry residue that drifts in the air. Since droplets from an infected person may contain infectious bacteria or viruses, these processes influence transmission of respiratory diseases.
A traditional hard size cutoff of 5 μm between airborne and respiratory droplets has been criticized as a false dichotomy not grounded in science, as exhaled particles form a continuum of sizes whose fates depend on environmental conditions in addition to their initial sizes. However, it has informed hospital transmission-based precautions for decades.
Background
Quiet breathing produces few droplets, but forced exhalations such as sneezing, coughing, shouting and singing can produce many thousands or even millions of small droplets. Droplets from healthy people consist of saliva from the mouth and/or the mucus that lines the respiratory tract. Saliva is >99% water, with small amounts of salts, proteins and other molecules. Respiratory mucus is more complex, 95% water with large amounts of mucin proteins and varying amounts of other proteins, especially antibodies, as well as lipids and nucleic acids, both secreted and derived from dead airway cells. Sizes of respiratory droplets vary widely, from greater than 1 mm to less than 1 μm, but the distribution of sizes is roughly similar across different droplet-generating activities.
The Wells curve: the effects of gravity and evaporation
In undisturbed moisture-saturated air, all respiratory droplets fall due to gravity until they reach the ground or another horizontal surface. For all but the largest droplets, Stokes' law predicts that falling speeds quickly reach a limit set by the ratio of mass to cross-sectional area, with small droplets falling much more slowly than large ones.
If the air is not saturated with water vapor, all droplets are also subject to evaporation as they fall, which gradually decreases their mass and thus slows the rate at which they are falling. Sufficiently large droplets still reach the ground or another surface, where they continue to dry, leaving potentially infectious residues called fomites. However, the high surface area to volume ratios of small droplets cause them to evaporate so rapidly that they dry out before they reach the ground. The dry residues of such droplets (called 'droplet nuclei' or 'aerosol particles') then cease falling and drift with the surrounding air. Thus, the continuous distribution of droplet sizes rapidly produces just two dichotomous outcomes, fomites on surfaces and droplet nuclei floating in the air.
Wells summarized this relationship graphically, with droplet size on the X-axis and time to evaporate or fall to the ground on the Y-axis. The result is a pair of curves intersecting at the droplet size that evaporates exactly as it hits the ground.
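The two competing time scales behind the curve can be illustrated with a back-of-the-envelope Python sketch. All constants below (release height, air viscosity, and a humidity-dependent d²-law evaporation constant) are rough assumed values for illustration, not Wells' original 1934 numbers; the script simply compares a Stokes settling time with a d²-law evaporation time.

```python
rho = 1000.0     # droplet density, kg/m^3 (water, assumed)
mu_air = 1.8e-5  # dynamic viscosity of air, Pa*s
g = 9.81         # gravitational acceleration, m/s^2
h = 2.0          # release height, m (roughly mouth height, assumed)
K = 1.0e-9       # d^2-law evaporation constant, m^2/s (humidity-dependent, assumed)

def fall_time(d):
    """Time to fall height h at the Stokes terminal velocity (evaporation ignored)."""
    v_settle = rho * g * d**2 / (18.0 * mu_air)
    return h / v_settle

def evaporation_time(d):
    """Droplet lifetime under the classical d-squared evaporation law."""
    return d**2 / K

for d_um in (1, 10, 50, 100, 200):
    d = d_um * 1e-6
    fate = ("dries to a droplet nucleus" if evaporation_time(d) < fall_time(d)
            else "settles to the ground")
    print(f"{d_um:4d} um: fall {fall_time(d):10.1f} s, "
          f"evaporate {evaporation_time(d):8.1f} s -> {fate}")
```

With these assumed constants the crossover lands between roughly 50 and 100 μm, in line with the humidity-dependent thresholds discussed below.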
Implications for epidemiology
Wells' insight was widely adopted because of its relevance for the spread of respiratory infections. The efficiency of transmission of specific viruses and bacteria depends both on the types of droplets and droplet nuclei they cause and on their ability to survive in droplets, droplet nuclei and fomites. Diseases such as measles, whose causative viruses remain highly infectious in droplet nuclei, can be spread without personal contact, across a room or through ventilation systems and are said to have airborne transmission. Although later studies demonstrated that the droplet size at which evaporation outpaces falling is smaller than that described by Wells, and the settling time is longer, his work remains important for understanding the physics of respiratory droplets.
Complicating factors
Relative humidity: The effective distinction between 'large' and 'small' droplets depends on the humidity. Exhaled air has become saturated with water vapour during its passage through the respiratory tract, but indoor or outdoor air is usually much less humid. Under 0% humidity, only droplets 125 μm or larger will reach the ground, but the threshold falls to 60 μm for 90% humidity. Since most respiratory droplets are smaller than 75 μm, even at high humidity most droplets will dry out and become airborne.
Movement of exhaled and ambient air: Air that has been violently expelled by a cough or sneeze moves as a turbulent cloud through the ambient air. Such clouds can travel up to several meters, with large droplets falling from the cloud and small ones gradually dispersing and evaporating as they mix with ambient air. The internal turbulence of such clouds may also delay the fall of large droplets, increasing the chance that they will evaporate before reaching the ground. Since exhaled air is usually warmer and thus less dense than the ambient air, such clouds usually also rise. Droplets and dry particles in exhaled air are also dispersed by movement of the ambient air, due to winds and convection currents.
Effects of face shields, masks and respirators
A face shield protects the wearer against impacts by large droplets that may be expelled horizontally by an infected person's cough or sneeze or during medical treatments. Since the shield is an impermeable barrier that air must travel around, it provides little protection against small droplets and dry particles that travel with the air. Surgical masks and home-made masks can filter out large and small droplets, but their pores are too large to block passage of small aerosol particles. They are thought to be more effective when worn by an infected person, preventing release of infectious droplets, than when worn by an uninfected person to protect against infection. Air that travels around a poorly fitting mask is not filtered, nor is violently expelled air produced by a cough or sneeze. N-95 respirator masks are designed to filter out even small dry particles, but they must be individually fitted and checked to prevent leakage of air around the sides.
References
Disease transmission
Medical physics
Epidemiology | Wells curve | [
"Physics",
"Environmental_science"
] | 1,301 | [
"Epidemiology",
"Applied and interdisciplinary physics",
"Environmental social science",
"Medical physics"
] |
38,260,038 | https://en.wikipedia.org/wiki/Bridged%20nucleic%20acid | A bridged nucleic acid (BNA) is a modified RNA nucleotide. They are sometimes also referred to as constrained or inaccessible RNA molecules. BNA monomers can contain a five-membered, six-membered or even a seven-membered bridged structure with a "fixed" C3'-endo sugar puckering. The bridge is synthetically incorporated at the 2', 4'-position of the ribose to afford a 2', 4'-BNA monomer. The monomers can be incorporated into oligonucleotide polymeric structures using standard phosphoramidite chemistry. BNAs are structurally rigid oligo-nucleotides with increased binding affinities and stability.
Chemical structures
Chemical structures of BNA monomers containing a bridge at the 2', 4'-position of the ribose to afford a 2', 4'-BNA monomer as synthesized by Takeshi Imanishi's group. The nature of the bridge can vary for different types of monomers. The 3D structures for A-RNA and B-DNA were used as a template for the design of the BNA monomers. The goal for the design was to find derivatives that possess high binding affinities with complementary RNA and/or DNA strands.
An increased conformational inflexibility of the sugar moiety in nucleosides (oligonucleotides) results in higher binding affinity with complementary single-stranded RNA and/or double-stranded DNA. The first 2',4'-BNA (LNA) monomers were synthesized by Takeshi Imanishi's group in 1997, followed independently by Jesper Wengel's group in 1998.
BNA nucleotides can be incorporated into DNA or RNA oligonucleotides at any desired position. Such oligomers are synthesized chemically and are now commercially available. The bridged ribose conformation enhances base stacking and pre-organizes the backbone of the oligonucleotide significantly increasing their hybridization properties.
The incorporation of BNAs into oligonucleotides allows the production of modified synthetic oligonucleotides with
equal or higher binding affinity against a DNA or RNA complement with excellent single-mismatch discriminating power;
better RNA selective binding;
stronger and more sequence selective triplex-forming characters;
pronounced higher nuclease resistance, even higher than Sp-phosphorothioate analogues; and
good aqueous solubility of the resulting oligonucleotides when compared to regular DNA or RNA oligonucleotides.
New BNA analogs introduced by Imanishi's group were designed by taking the length of the bridged moiety into account. A six-membered bridged structure with a unique structural feature (N-O bond) in the sugar moiety was designed to contain a nitrogen atom. This atom improves the formation of duplexes and triplexes by lowering the repulsion between the negatively charged backbone phosphates. These modifications make it possible to control the affinity towards complementary strands, to regulate resistance against nuclease degradation, and to synthesize functional molecules designed for specific applications in genomics. The properties of these analogs were investigated and compared to those of previous 2',4'-BNA (LNA) modified oligonucleotides by Imanishi's group. Imanishi's results show that "2',4'-BNANC-modified oligonucleotides with these profiles show great promise for applications in antisense and antigene technologies."
Proposed mechanism of action of AONs
Yamamoto et al. in 2012 demonstrated that BNA-based antisense therapeutics inhibited hepatic PCSK9 expression, resulting in a strong reduction of the serum LDL-C levels of mice. The findings supported the hypothesis that PCSK9 is a potential therapeutic target for hypercholesterolemia, and the researchers were able to show that BNA-based antisense oligonucleotides (AONs) induced cholesterol-lowering action in hypercholesterolemic mice. A moderate increase of aspartate aminotransferase, ALT, and blood urea nitrogen levels was observed, whereas the histopathological analysis revealed no severe hepatic toxicities. The same group, also in 2012, reported that the 2',4'-BNANC[NMe] analog, when used in antisense oligonucleotides, showed significantly stronger inhibitory activities, an effect more pronounced in shorter (13- to 16-mer) oligonucleotides. Their data led the researchers to conclude that the 2',4'-BNANC[NMe] analog may be a better alternative to conventional LNAs.
Benefits of the BNA technology
Some of the benefits of BNAs include: suitability for the detection of short RNA and DNA targets; increased thermal stability of duplexes; the capability of single-nucleotide discrimination; increased thermal stability of triplexes; resistance to exo- and endonucleases, resulting in high stability for in vivo and in vitro applications; increased target specificity; easier Tm normalization; strand invasion that enables detection of "hard to access" samples; and compatibility with standard enzymatic processes.
Application of the BNA technology
Applications of BNAs include small RNA research; design and synthesis of RNA aptamers; siRNA; antisense probes; diagnostics; isolation; microarray analysis; Northern blotting; real-time PCR; in situ hybridization; functional analysis; SNP detection; use as antigens; and many other nucleotide-based applications.
References
External links
https://web.archive.org/web/20130126055902/http://www.rockefeller.edu/labheads/tuschl/sirna.html
http://www.sanger.ac.uk/resources/software/
Molecular biology
Nucleotides | Bridged nucleic acid | [
"Chemistry",
"Biology"
] | 1,246 | [
"Biochemistry",
"Molecular biology"
] |
38,260,400 | https://en.wikipedia.org/wiki/Korea%20Carbon%20Capture%20%26%20Sequestration%20R%26D%20Center | The Korea Carbon Capture & Sequestration R&D Center (KCRC) is an institution in Daejeon, South Korea, specialized in Carbon Capture & Sequestration (CCS) R&D. The Korean government has selected CCS technology as one of its core technologies for green growth, and has established the National Comprehensive Plan for CCS to commercialize CCS technology and ensure its international competitiveness by 2020. As part of the plan, the Ministry of Science, ICT and Future Planning (MSIP) developed the 'Korea CCS 2020 Project' to secure the best original technology of CCS and established KCRC on December 22, 2011.
Vision
The vision of KCRC is to build a research basis and develop innovative original CCS technology by integrating Korea's CCS research capabilities.
Carbon Capture and Sequestration (CCS)
Carbon Capture and Sequestration (CCS) is a technology to capture the large quantities of carbon dioxide (CO2) normally released into the atmosphere from the use of fossil fuel in power generation and other industries, transport the captured and compressed CO2 to a permanent storage site, and inject it into deep underground geologic formations to securely store it or convert it into useful materials.
Korea CCS 2020 Project
Goal
To secure original CCS technology to economically capture CO2 from large final emitters
Overview
Period: November 1, 2011 – May 31, 2020 (approximately 9 years)
Budget: 172.7 billion KRW
Supported subcontract projects (as of 2013): 42 industry–university–institute teams, including Korea Institute of Energy Research, Korea Research Institute of Chemical Technology, Korea Institute of Science & Technology, Seoul National University, Korea University, Yonsei University, Korea Advanced Institute of Science and Technology, University of Texas, and University of California
Participants: 600 researchers with master's and doctoral degrees
Major Roles
Implement Korea CCS 2020 Project
Develop innovative original CCS technology
Secure more than 4 types of 3rd generation original capture technology
Demonstrate Korea's first integrated capture–transport–storage technology at the 10,000-ton CO2 scale and secure the core technology
Develop more than 2 original technologies for CO2 conversion applicable to large final emitters
Build CCS Infrastructure
Think Tank for CCS Technology Policy
Develop R&D policy and research planning
Establish R&D portfolio
Promote CCS public acceptance
Improve CCS legal system
Build a network through international cooperation in the field of CCS
R&D planning and outcome management
Plan R&D through moving targets
Promote commercialization through core technology spin-off and technology transfer
Disseminate outcomes by operating IPR Trust System
Information Exchange Platform for CCS Technology
Develop and operate professional CCS education & training programs
Provide information on the analysis of CCS R&D and policy trends of Korea and other countries
Integrate research capabilities by holding Annual Korea CCS Conference
References
Links
KCRC Webpage
Korea CCS Conference Webpage
What is CCS
Korea CCS 2020 Project
Carbon capture and storage | Korea Carbon Capture & Sequestration R&D Center | [
"Engineering"
] | 587 | [
"Geoengineering",
"Carbon capture and storage"
] |
43,887,987 | https://en.wikipedia.org/wiki/Mobility%20analogy | The mobility analogy, also called admittance analogy or Firestone analogy, is a method of representing a mechanical system by an analogous electrical system. The advantage of doing this is that there is a large body of theory and analysis techniques concerning complex electrical systems, especially in the field of filters. By converting to an electrical representation, these tools in the electrical domain can be directly applied to a mechanical system without modification. A further advantage occurs in electromechanical systems: Converting the mechanical part of such a system into the electrical domain allows the entire system to be analysed as a unified whole.
The mathematical behaviour of the simulated electrical system is identical to the mathematical behaviour of the represented mechanical system. Each element in the electrical domain has a corresponding element in the mechanical domain with an analogous constitutive equation. All laws of circuit analysis, such as Kirchhoff's laws, that apply in the electrical domain also apply to the mechanical mobility analogy.
The mobility analogy is one of the two main mechanical–electrical analogies used for representing mechanical systems in the electrical domain, the other being the impedance analogy. The roles of voltage and current are reversed in these two methods, and the electrical representations produced are the dual circuits of each other. The mobility analogy preserves the topology of the mechanical system when transferred to the electrical domain whereas the impedance analogy does not. On the other hand, the impedance analogy preserves the analogy between electrical impedance and mechanical impedance whereas the mobility analogy does not.
Applications
The mobility analogy is widely used to model the behaviour of mechanical filters. These are filters that are intended for use in an electronic circuit, but work entirely by mechanical vibrational waves. Transducers are provided at the input and output of the filter to convert between the electrical and mechanical domains.
Another very common use is in the field of audio equipment, such as loudspeakers. Loudspeakers consist of a transducer and mechanical moving parts. Acoustic waves themselves are waves of mechanical motion: of air molecules or some other fluid medium.
Elements
Before an electrical analogy can be developed for a mechanical system, it must first be described as an abstract mechanical network. The mechanical system is broken down into a number of ideal elements each of which can then be paired with an electrical analogue. The symbols used for these mechanical elements on network diagrams are shown in the following sections on each individual element.
The mechanical analogies of lumped electrical elements are also lumped elements, that is, it is assumed that the mechanical component possessing the element is small enough that the time taken by mechanical waves to propagate from one end of the component to the other can be neglected. Analogies can also be developed for distributed elements such as transmission lines but the greatest benefits are with lumped-element circuits. Mechanical analogies are required for the three passive electrical elements, namely, resistance, inductance and capacitance. What these analogies are is determined by what mechanical property is chosen to represent voltage, and what property is chosen to represent current. In the mobility analogy the analogue of voltage is velocity and the analogue of current is force. Mechanical impedance is defined as the ratio of force to velocity, thus it is not analogous to electrical impedance. Rather, it is the analogue of electrical admittance, the inverse of impedance. Mechanical admittance is more commonly called mobility, hence the name of the analogy.
Resistance
The mechanical analogy of electrical resistance is the loss of energy of a moving system through such processes as friction. A mechanical component analogous to a resistor is a shock absorber and the property analogous to inverse resistance (conductance) is damping (inverse, because electrical impedance is the analogy of the inverse of mechanical impedance). A resistor is governed by the constitutive equation of Ohm's law,

i = Gv

The analogous equation in the mechanical domain is,

F = Rm u
where,
G = 1/R is conductance
R is resistance
v is voltage
i is current
Rm is mechanical resistance, or damping
F is force
u is velocity induced by the force.
Electrical conductance represents the real part of electrical admittance. Likewise, mechanical resistance is the real part of mechanical impedance.
Inductance
The mechanical analogy of inductance in the mobility analogy is compliance. It is more common in mechanics to discuss stiffness, the inverse of compliance. A mechanical component analogous to an inductor is a spring. An inductor is governed by the constitutive equation,

v = L di/dt

The analogous equation in the mechanical domain is a form of Hooke's law,

u = Cm dF/dt
where,
L is inductance
t is time
Cm = 1/S is mechanical compliance
S is stiffness
The impedance of an inductor is purely imaginary and is given by,

Z = jωL

The analogous mechanical admittance is given by,

Ym = jωCm
where,
Z is electrical impedance
j is the imaginary unit
ω is angular frequency
Ym is mechanical admittance.
Capacitance
The mechanical analogy of capacitance in the mobility analogy is mass. A mechanical component analogous to a capacitor is a large, rigid weight or a mechanical inerter.
A capacitor is governed by the constitutive equation,

i = C dv/dt

The analogous equation in the mechanical domain is Newton's second law of motion,

F = M du/dt
where,
C is capacitance
M is mass
The impedance of a capacitor is purely imaginary and is given by,

Z = 1/(jωC)

The analogous mechanical admittance is given by,

Ym = 1/(jωM)
Inertance
A curious difficulty arises with mass as the analogy of an electrical element. It is connected with the fact that in mechanical systems the velocity of the mass (and more importantly, its acceleration) is always measured against some fixed reference frame, usually the earth. Considered as a two-terminal system element, the mass has one terminal at velocity u, analogous to electric potential. The other terminal is at zero velocity and is analogous to electric ground potential. Thus, mass cannot be used as the analogue of an ungrounded capacitor.
This led Malcolm C. Smith of the University of Cambridge in 2002 to define a new energy storing element for mechanical networks called inertance. A component that possesses inertance is called an inerter. The two terminals of an inerter, unlike a mass, are allowed to have two different, arbitrary velocities and accelerations. The constitutive equation of an inerter is given by,

F = B d(Δu)/dt
where,
F is an equal and opposite force applied to the two terminals
B is the inertance
u1 and u2 are the velocities at terminals 1 and 2 respectively
Δu = u2 − u1
Inertance has the same units as mass (kilograms in the SI system) and the name indicates its relationship to inertia. Smith did not just define a network theoretic element, he also suggested a construction for a real mechanical component and made a small prototype. Smith's inerter consists of a plunger able to slide in or out of a cylinder. The plunger is connected to a rack and pinion gear which drives a flywheel inside the cylinder. There can be two counter-rotating flywheels in order to prevent a torque developing. Energy provided in pushing the plunger in will be returned when the plunger moves in the opposite direction, hence the device stores energy rather than dissipates it just like a block of mass. However, the actual mass of the inerter can be very small, an ideal inerter has no mass. Two points on the inerter, the plunger and the cylinder case, can be independently connected to other parts of the mechanical system with neither of them necessarily connected to ground.
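In network terms the inerter's law F = B d(Δu)/dt is straightforward to evaluate for sampled velocity signals. The sketch below is a minimal numerical illustration with made-up inertance and velocity waveforms; NumPy's gradient supplies the time derivative.
<syntaxhighlight lang="python">
import numpy as np

B = 50.0                                # inertance, kg (illustrative)
t = np.linspace(0.0, 1.0, 1000)         # time samples, s
u1 = np.zeros_like(t)                   # terminal 1 held stationary
u2 = 0.2 * np.sin(2 * np.pi * 3 * t)    # terminal 2 oscillating at 3 Hz, m/s

# Constitutive law of the inerter: force proportional to the relative
# acceleration of its two terminals.
F = B * np.gradient(u2 - u1, t)         # equal and opposite force, N
print(f"peak force ~ {np.max(np.abs(F)):.1f} N")
</syntaxhighlight>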
Smith's inerter has found an application in Formula One racing where it is known as the J-damper. It is used as an alternative to the now banned tuned mass damper and forms part of the vehicle suspension. It may have been first used secretly by McLaren in 2005 following a collaboration with Smith. Other teams are now believed to be using it. The inerter is much smaller than the tuned mass damper and smoothes out contact patch load variations on the tyres. Smith also suggests using the inerter to reduce machine vibration.
The difficulty with mass in mechanical analogies is not limited to the mobility analogy. A corresponding problem also occurs in the impedance analogy, but in that case it is ungrounded inductors, rather than capacitors, that cannot be represented with the standard elements.
Resonator
A mechanical resonator consists of both a mass element and a compliance element. Mechanical resonators are analogous to electrical LC circuits consisting of inductance and capacitance. Real mechanical components unavoidably have both mass and compliance so it is a practical proposition to make resonators as a single component. In fact, it is more difficult to make a pure mass or pure compliance as a single component. A spring can be made with a certain compliance and mass minimised, or a mass can be made with compliance minimised, but neither can be eliminated altogether. Mechanical resonators are a key component of mechanical filters.
Generators
Analogues exist for the active electrical elements of the voltage source and the current source (generators). The mechanical analogue in the mobility analogy of the constant current generator is the constant force generator. The mechanical analogue of the constant voltage generator is the constant velocity generator.
An example of a constant force generator is the constant-force spring. An example of a practical constant velocity generator is a lightly loaded powerful machine, such as a motor, driving a belt. This is analogous to a real voltage source, such as a battery, which remains near constant-voltage with load provided that the load resistance is much higher than the battery internal resistance.
Transducers
Electromechanical systems require transducers to convert between the electrical and mechanical domains. They are analogous to two-port networks and like those can be described by a pair of simultaneous equations and four arbitrary parameters. There are numerous possible representations, but the form most applicable to the mobility analogy has the arbitrary parameters in units of admittance. In matrix form (with the electrical side taken as port 1) this representation is,

[ i ]   [ y11  y12 ][ v ]
[ u ] = [ y21  y22 ][ F ]

The element y22 is the short circuit mechanical admittance, that is, the admittance presented by the mechanical side of the transducer when zero voltage (short circuit) is applied to the electrical side. The element y11, conversely, is the unloaded electrical admittance, that is, the admittance presented to the electrical side when the mechanical side is not driving a load (zero force). The remaining two elements, y21 and y12, describe the transducer forward and reverse transfer functions respectively. They are both analogous to transfer admittances and are hybrid ratios of an electrical and mechanical quantity.
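As a purely numerical illustration of this two-port description (all four admittance parameters below are made-up values, not data for any real transducer), the electrical current and mechanical velocity follow from the applied voltage and load force by one matrix multiplication.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical admittance-form parameters of an electromechanical transducer:
# y11: unloaded electrical admittance (i/v at F = 0)
# y22: short-circuit mechanical admittance (u/F at v = 0)
# y21, y12: forward and reverse transfer functions (hybrid ratios)
Y = np.array([[2.0e-3, 1.0e-2],
              [1.0e-2, 5.0e-1]])

v, F = 10.0, 0.5                  # applied voltage (V) and load force (N)
i, u = Y @ np.array([v, F])       # resulting current (A) and velocity (m/s)
print(i, u)                       # 0.025 A, 0.35 m/s
</syntaxhighlight>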
Transformers
The mechanical analogy of a transformer is a simple machine such as a pulley or a lever. The force applied to the load can be greater or less than the input force depending on whether the mechanical advantage of the machine is greater or less than unity respectively. Mechanical advantage is analogous to the inverse of transformer turns ratio in the mobility analogy. A mechanical advantage less than unity is analogous to a step-up transformer and greater than unity is analogous to a step-down transformer.
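A short sketch of the lever case (the mechanical advantage and input values below are hypothetical): an ideal lever of mechanical advantage a scales force up by a and velocity down by a, conserving power, exactly as an ideal transformer of turns ratio 1/a does in the mobility analogy.
<syntaxhighlight lang="python">
a = 4.0                   # mechanical advantage of the lever (illustrative)
F_in, u_in = 10.0, 0.2    # input force (N) and input velocity (m/s)

F_out = a * F_in          # force is stepped up by the mechanical advantage
u_out = u_in / a          # velocity is stepped down by the same factor

# An ideal machine transmits power unchanged, like an ideal transformer.
assert abs(F_in * u_in - F_out * u_out) < 1e-12
</syntaxhighlight>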
Power and energy equations
Examples
Simple resonant circuit
The figure shows a mechanical arrangement of a platform of mass M that is suspended above the substrate by a spring of stiffness S and a damper of resistance Rm. The mobility analogy equivalent circuit is shown to the right of this arrangement and consists of a parallel resonant circuit. This system has a resonant frequency, and may have a natural frequency of oscillation if not too heavily damped.
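The element mapping can be checked numerically for this example. In the sketch below (M, S and Rm are illustrative values), mass maps to capacitance, compliance 1/S to inductance and damping to conductance; the mechanical natural frequency and the resonant frequency of the analogous parallel circuit then agree by construction.
<syntaxhighlight lang="python">
import math

M = 2.0       # platform mass, kg (illustrative)
S = 5.0e4     # spring stiffness, N/m (illustrative)
Rm = 20.0     # damper resistance, N*s/m (illustrative)

C = M         # mobility analogy: mass -> capacitance
L = 1.0 / S   # compliance Cm = 1/S -> inductance
G = Rm        # damping -> conductance (parallel resistive element)

f_mech = math.sqrt(S / M) / (2.0 * math.pi)        # natural frequency, Hz
f_elec = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # parallel LC resonance, Hz
print(f"{f_mech:.2f} Hz == {f_elec:.2f} Hz")       # ~25.16 Hz for these values
</syntaxhighlight>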
Advantages and disadvantages
The principal advantage of the mobility analogy over its alternative, the impedance analogy, is that it preserves the topology of the mechanical system. Elements that are in series in the mechanical system are in series in the electrical equivalent circuit and elements in parallel in the mechanical system remain in parallel in the electrical equivalent.
The principal disadvantage of the mobility analogy is that it does not maintain the analogy between electrical and mechanical impedance. Mechanical impedance is represented as an electrical admittance and a mechanical resistance is represented as an electrical conductance in the electrical equivalent circuit. Force is not analogous to voltage (generator voltages are often called electromotive force), but rather, it is analogous to current.
History
Historically, the impedance analogy was in use long before the mobility analogy. Mechanical admittance and the associated mobility analogy were introduced by F. A. Firestone in 1932 to overcome the issue of preserving topologies. W. Hähnle independently had the same idea in Germany. Horace M. Trent developed a treatment for analogies in general from a mathematical graph theory perspective and introduced a new analogy of his own.
References
Bibliography
Atkins, Tony; Escudier, Marcel, A Dictionary of Mechanical Engineering, Oxford University Press, 2013.
Beranek, Leo Leroy; Mellow, Tim J., Acoustics: Sound Fields and Transducers, Academic Press, 2012.
Busch-Vishniac, Ilene J., Electromechanical Sensors and Actuators, Springer Science & Business Media, 1999.
Carr, Joseph J., RF Components and Circuits, Newnes, 2002.
Debnath, M. C.; Roy, T., "Transfer scattering matrix of non-uniform surface acoustic wave transducers", International Journal of Mathematics and Mathematical Sciences, vol. 10, iss. 3, pp. 563–581, 1987.
De Groote, Steven, "J-dampers in Formula One", F1 Technical, 27 September 2008.
Eargle, John, Loudspeaker Handbook, Kluwer Academic Publishers, 2003.
Fahy, Frank J.; Gardonio, Paolo, Sound and Structural Vibration: Radiation, Transmission and Response, Academic Press, 2007.
Findeisen, Dietmar, System Dynamics and Mechanical Vibrations, Springer, 2000.
Firestone, Floyd A., "A new analogy between mechanical and electrical systems", Journal of the Acoustical Society of America, vol. 4, pp. 249–267 (1932–1933).
Hähnle, W., "Die Darstellung elektromechanischer Gebilde durch rein elektrische Schaltbilder", Wissenschaftliche Veröffentlichungen aus dem Siemens-Konzern, vol. 1, iss. 11, pp. 1–23, 1932.
Kleiner, Mendel, Electroacoustics, CRC Press, 2013.
Pierce, Allan D., Acoustics: an Introduction to its Physical Principles and Applications, Acoustical Society of America, 1989.
Pusey, Henry C. (ed), 50 years of shock and vibration technology, Shock and Vibration Information Analysis Center, Booz-Allen & Hamilton, Inc., 1996.
Smith, Malcolm C., "Synthesis of mechanical networks: the inerter", IEEE Transactions on Automatic Control, vol. 47, iss. 10, pp. 1648–1662, October 2002.
Talbot-Smith, Michael, Audio Engineer's Reference Book, Taylor & Francis, 2013.
Taylor, John; Huang, Qiuting, CRC Handbook of Electrical Filters, CRC Press, 1997.
Trent, Horace M., "Isomorphisms between oriented linear graphs and lumped physical systems", The Journal of the Acoustical Society of America, vol. 27, pp. 500–527, 1955.
Electrical analogies
Electromechanical engineering
Electronic design | Mobility analogy | [
"Engineering"
] | 3,180 | [
"Electronic design",
"Electronic engineering",
"Electromechanical engineering",
"Mechanical engineering by discipline",
"Electrical engineering",
"Design"
] |
43,900,962 | https://en.wikipedia.org/wiki/Lambda2%20method | The Lambda2 method, or Lambda2 vortex criterion, is a vortex core line detection algorithm that can adequately identify vortices from a three-dimensional fluid velocity field. The Lambda2 method is Galilean invariant, which means it produces the same results when a uniform velocity field is added to the existing velocity field or when the field is translated.
Description
The flow velocity of a fluid is a vector field which is used to mathematically describe the motion of a continuum; the length of the flow velocity vector is the flow speed, a scalar. The flow velocity field u(x, t) gives the velocity of an element of fluid at position x and time t.
The Lambda2 method determines for any point in the fluid whether this point is part of a vortex core. A vortex is now defined as a connected region for which every point inside this region is part of a vortex core.
Usually one will also obtain a large number of small vortices when using the above definition. In order to detect only real vortices, a threshold can be used to discard any vortices below a certain size (e.g. volume or number of points contained in the vortex).
Definition
The Lambda2 method consists of several steps. First we define the velocity gradient tensor J:

J = ∇u

where u is the velocity field.
The velocity gradient tensor is then decomposed into its symmetric and antisymmetric parts:

S = (J + J^T)/2 and Ω = (J − J^T)/2

where T is the transpose operation. Next the three eigenvalues of S^2 + Ω^2 are calculated, so that for each point in the velocity field there are three corresponding eigenvalues: λ1, λ2 and λ3. The eigenvalues are ordered in such a way that λ1 ≥ λ2 ≥ λ3.
A point in the velocity field is part of a vortex core only if at least two of its eigenvalues are negative, i.e. if λ2 < 0. This is what gave the Lambda2 method its name.
Using the Lambda2 method, a vortex can be defined as a connected region where λ2 is negative. However, in situations where several vortices exist, it can be difficult for this method to distinguish between individual vortices. The Lambda2 method has been used in practice to, for example, identify vortex rings present in the blood flow inside the human heart.
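For a velocity field sampled on a uniform grid, the criterion can be evaluated directly with standard numerical tools. The following sketch (NumPy-based; the function name, argument layout and uniform grid spacing are assumptions of this illustration) builds the velocity gradient tensor by finite differences, forms S² + Ω² and returns its middle eigenvalue at every grid point.
<syntaxhighlight lang="python">
import numpy as np

def lambda2_field(u, v, w, dx, dy, dz):
    """Return the lambda2 value at every point of a uniformly gridded
    3-D velocity field (u, v, w); vortex cores are where lambda2 < 0."""
    # Velocity gradient tensor J[..., i, j] = d(u_i)/d(x_j), central differences
    grads = [np.gradient(comp, dx, dy, dz) for comp in (u, v, w)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)

    S = 0.5 * (J + np.swapaxes(J, -1, -2))   # symmetric part (strain rate)
    O = 0.5 * (J - np.swapaxes(J, -1, -2))   # antisymmetric part (rotation)
    A = S @ S + O @ O                        # S^2 + Omega^2 is symmetric

    eigvals = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
    return eigvals[..., 1]                   # the middle one is lambda2
</syntaxhighlight>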
References
Vortices
Computational fluid dynamics | Lambda2 method | [
"Physics",
"Chemistry",
"Mathematics"
] | 458 | [
"Vortices",
"Computational fluid dynamics",
"Computational physics",
"Dynamical systems",
"Fluid dynamics"
] |
43,901,815 | https://en.wikipedia.org/wiki/Flutemetamol%20%2818F%29 |
Flutemetamol (18F) (trade name Vizamyl, by GE Healthcare) is a PET scanning radiopharmaceutical containing the radionuclide fluorine-18, used as a diagnostic tool for Alzheimer's disease.
Adverse effects
Adverse effects of flutemetamol include headache, nausea, dizziness, flushing and increased blood pressure.
Mechanism of action
After the substance is given intravenously, it accumulates in beta amyloid plaques in the patient's brain, which thus become visible via positron emission tomography (PET).
Manufacturing and distribution
Flutemetamol (18F) can be produced within five to six hours. It then undergoes a quality check and is ready to be distributed immediately after. Because fluorine-18 decays with a half-life of about 110 minutes, the product must be used within a certain time frame for maximum efficacy. Because of the limited time window, flutemetamol is not produced until an order has been placed.
Flutemetamol is typically administered intravenously in 1 to 10 mL doses. Average costs for PET scans without insurance coverage are around $3,000. Currently Medicare does not cover use of amyloid imaging agents except in clinical trials. Because of this, the market for flutemetamol is small.
History
Flutemetamol was first approved for use in the US by the Food and Drug Administration (FDA) in 2013 for intravenous use.
Clinical trials
Two clinical trials were conducted for flutemetamol (18F). The first compared PET scans of terminally ill patients with flutemetamol to post mortem standard-of-truth assessments of cerebral cortical neuritic plaque density. The second trial assessed intra-reader reproducibility of PET scans using flutemetamol.
Clinical trial 1
The 176 patients imaged in this trial had a median age of 82, with 57 of the patients being female. The initial flutemetamol PET scans resulted in 43 positive and 25 negative results for cerebral cortical amyloid status. 69 of the initial patients died within 13 months of the flutemetamol PET scan. The autopsies for 67 of those patients determined the global brain neuritic plaque density category. Of those 67 patients, 41 were positive and 26 were negative. These results correlate with the pre-mortem scans.
Clinical trial 2
The second clinical trial included 276 subjects with a median age of 72. The trial measured the effectiveness of an electronic training program for flutemetamol image interpretation using PET scans from trial 1 among other subjects with a variety of cognitive impairment. Final results met the pre-specified success rate with a Fleiss' kappa statistic of 0.83.
References
Alzheimer's disease
Radiopharmaceuticals
Organofluorides | Flutemetamol (18F) | [
"Chemistry"
] | 574 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
58,856,910 | https://en.wikipedia.org/wiki/DN%20factor | DN factor, also called DN Value, is a number that is used to determine the correct base oil viscosity for the lubrication of various types of bearings.
It can also be used to determine if a bearing is the correct choice for use in a given application. It is a product of bearing diameter (D) and speed (N).
D = diameter (in millimeters) of the bearing in question. For most types of bearings, there are actually two required measurements: the inner diameter and outer diameter. In such cases, D = (A+B)/2, where A = inner diameter and B = outer diameter. The sum of these two values is then divided by 2 to obtain the median diameter, sometimes also called pitch diameter.
N = bearing speed. This is the maximum amount of revolutions per minute (RPM) that the bearing will move.
The DN factor of a bearing is obtained by multiplying the median diameter (A + B)/2 by RPM, and sometimes by a correction factor. This correction factor may vary from manufacturer to manufacturer. No consensus exists among tribologists as to a constant correction factor across manufacturers.
Example formula
For a single or double row cylindrical bearing, the following formula would be used to obtain the DN factor; it includes a correction factor of 2, and a numerical sketch follows the definitions below:

DN = ((A + B) / 2) × RPM × 2
Where:
A and B represent inner and outer diameters, respectively
A and B are divided by 2 to find the median diameter
RPM is the maximum speed of the bearing
2 is the correction factor for this particular (hypothetical) manufacturer
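A minimal numeric sketch of the formula above (the bearing dimensions, speed and the factor of 2 are the hypothetical values of this example):
<syntaxhighlight lang="python">
def dn_factor(inner_mm, outer_mm, rpm, correction=1.0):
    """DN factor: median (pitch) diameter in mm times speed in RPM,
    optionally scaled by a manufacturer-specific correction factor."""
    median_diameter = (inner_mm + outer_mm) / 2.0
    return median_diameter * rpm * correction

# Hypothetical single-row cylindrical bearing: 40 mm bore, 80 mm outside
# diameter, run at 3000 RPM, with this manufacturer's correction factor of 2.
print(dn_factor(40.0, 80.0, 3000.0, correction=2.0))  # 360000.0
</syntaxhighlight>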
Usage
Once the DN factor of a bearing has been obtained, it can be used to consult grease selection charts in order to determine the correct lubricant. Viscosity must be matched to the needs of the bearing in order to obtain maximum efficiency, and to avoid lubricant runout due to overheating, which is a consequence of metal-on-metal contact, as well as the failure of grease to extract heat from the bearing system.
Viscosity is quantified according to the National Lubricating Grease Institute (NLGI) consistency number, which is regarded as the standard measure of grease thickness.
Knowing the DN factor of a bearing is critical to preventing lubricant starvation, which is characterized by decreasing lubricant film thickness coupled with increased bearing speed. Starvation occurs when bearing speed (N) exceeds the ability of the lubricant to flow back into the bearing track. This phenomenon can be the cause of metal-on-metal contact, which causes rapid wear and necessitates early replacement. Jauhari shows that degree of starvation is a function of relative lubricant layer thickness for given operating conditions. He also states that "the rolling fatigue life of [a] bearing depends greatly upon the viscosity and film thickness between the rolling contact-surface [sic]."
References
Online calculators exist to determine DN factor and correct grease viscosity.
Tribology
Bearings (mechanical)
Measurement | DN factor | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 616 | [
"Tribology",
"Physical quantities",
"Quantity",
"Materials science",
"Surface science",
"Measurement",
"Size",
"Mechanical engineering"
] |
51,112,803 | https://en.wikipedia.org/wiki/Turbomixer | A Turbo mixer, also known as a high speed mixer or a tank mixer, is a type of industrial mixer used to blend raw materials, typically for PVC compounds, into a free-flowing powder blend.
Design
It includes a cylindrical tank with a mixing tool assembled on the bottom that typically operates at a peripheral speed of between 20 and 50 m/s, depending on the material to be blended. The material is heated inside the mixer by the mechanical energy imparted between the mixing tools and the material, which generates mutual impacts of the particles. During the mixing phase, the Turbo-mixer creates an axial vortex. The structure and position of the blades inside the mixer guarantee homogeneous material dispersion.
To avoid thermal degradation, the mixer is usually combined with a cooler that cools the dry blend down to a temperature of around 45–55 °C. Because of the poor heat conductivity of the dry blend, the cooler is usually three times larger than the mixer, as the cooling time is proportional to the contact surface.
Applications
The typical uses of the Turbo mixer are the production of PVC (dry-blend, rigid or plasticized) and of other kinds of thermoplastic composites (such as masterbatch, wood-plastic composites, additives and thermoplastic polymers). The largest high-speed mixer known on the market has a tank volume of 2,500 litres, which corresponds to a PVC batch size of about 1,160 kg, and is combined with an 8,600 L horizontal cooler. Depending on the products being mixed, around 500 kg can also be introduced directly into the cooler mixer, and the machine can produce around 14 tons per hour. It was manufactured by the Italian company PROMIXON S.r.L. in 2014.
References
Industrial machinery | Turbomixer | [
"Engineering"
] | 356 | [
"Industrial machinery"
] |
51,114,484 | https://en.wikipedia.org/wiki/Knudsen%20paradox | The Knudsen paradox has been observed in experiments of channel flow with varying channel width or, equivalently, different pressures. If the normalized mass flux through the channel is plotted against the Knudsen number based on the channel width, a distinct minimum is observed around Kn ≈ 1. This is a paradoxical behaviour because, based on the Navier–Stokes equations, one would expect the mass flux to decrease monotonically with increasing Knudsen number. The minimum can be understood intuitively by considering the two extreme cases of very small and very large Knudsen number. For very small Kn the (dimensionless) viscosity vanishes and a fully developed steady-state channel flow shows infinite flux. On the other hand, the particles stop interacting for large Knudsen numbers. Because of the constant acceleration due to the external force, the steady state again will show infinite flux.
See also
References
Partial differential equations
Statistical mechanics
Transport phenomena
Fluid dynamics
Physical paradoxes | Knudsen paradox | [
"Physics",
"Chemistry",
"Engineering"
] | 184 | [
"Statistical mechanics stubs",
"Transport phenomena",
"Physical phenomena",
"Chemical engineering",
"Piping",
"Statistical mechanics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
51,115,674 | https://en.wikipedia.org/wiki/Anthropometric%20measurement%20of%20the%20developing%20fetus | Anthropometry is defined as the scientific study of the human body measurements and proportions. These studies are generally used by clinicians and pathologists for adequate assessments of the growth and development of the fetus at any specific point of gestational maturity. Fetal height, fetal weight, head circumference (HC), crown-rump length (CR), dermatological observations such as skin thickness, etc. are measured individually to assess the growth and development of the organs and of the fetus as a whole, and can serve as parameters for normal or abnormal development, including the adaptation of the fetus to its new environment.
Another important factor that contributes towards the anthropometric measurement of the human fetal growth is the maternal nutrition and maternal well-being. Malnutrition, as already established by WHO, is a global serious health problem not only in adults but in pregnant and lactating mothers too and is a serious problem in third world countries. In Africa and South Asia, 27%-50% of women in the reproductive age are underweight resulting in 30 million low birth weight babies.
For decades, the question of how crown-rump length (CR), crown-heel length (CH) and head circumference (HC) relate to the body weight of the human fetus at different periods of gestation has baffled many developmental researchers and biostatisticians. These biological variations are described by linear curves derived from human fetuses between 9 and 28 weeks of gestation.
Co-relation of fetal weight and fetal growth
Body weight, for example, is an important function and parameter for growth with respect to the gestational age of the fetus. There will be great variations in the body weight of a 16-week-old fetus. The weight will not be constant for every fetus and will vary from individual to individual. Therefore, rather than an appropriate or standard value, a range can be specified, such as 90 to 100 grams. This variation applies to all other anthropometric measurements. Often, the scientific world covers up its ignorance by stating that the rate of growth of a particular human fetus depends on its intrinsic growth potential and the environment provided by the normal mother. It is a visible function of the genetic potential.
The fetal growth is not an individual growth and is dependent on the composite growth of the organs. Growth of the individual organs is controlled by the genetic potential, the environment provided by the mother and by the fetus itself. Scientists have or are trying to determine such relationships through series of investigations.
Streeter, Schultz et al. all studied the fetal dimensions obtained from spontaneous abortions and pathological pregnancies, mainly on formed and fixed specimens. The growth of an organ from inception to a definitive functional stage is dependent on the integrated function of the whole organism, which depends on a number of parameters, of which the nucleic acid content of the cells is one of the most important. Functioning of an organ is important for the development of the organism.
A Nigerian study showed that the birth weight of the human fetus also depends upon the size and weight of the mother including her height and weight. Further a Polish study reported a similar report that some measurements like the ear height, muscular strength of the shoulders, skin fold thickness, mandibular breadth including the height of the upper and the lower limbs can be co-related to the mother also. Similar observations were also reported by Gueri et al.
One of the first original and unique works on the anthropometric measurement of the human fetus in the Indian context was conducted by a group of scientists in Calcutta between 1977 and 1987 under the supervision of K.L. Mukherjee, a stalwart in the field of medical biochemistry at the Institute of Post Graduate Medical Education and Research. The researchers divided the fetuses into groups A to G, with a difference of 4 weeks of gestation between consecutive groups. Group A had 90 fetuses of 9–12 weeks of gestation whose weight varied between 1 and 14 grams. Group B had 337 fetuses of 13–16 weeks of gestation with weights between 15 and 105 grams, whereas the third group had 435 fetuses of 17–20 weeks of gestation with a weight range between 106 and 310 grams. Group D consisted of 531 fetuses of 21–24 weeks with weights between 331 and 640 grams, and Group E had fetuses in the age range 25–28 weeks of gestation weighing 640–1070 grams. The last two groups, F and G, had fetuses with gestation periods of 29–32 and 33–36 weeks respectively. All aborted fetuses were collected after permission from the institute ethics committee, followed by a donor consent form, with the primary objective being the mother's health and safety.
Liver growth
Researchers observed that the liver weight is directly proportional to the body weight. At 8–12 weeks of gestation, the liver is a relatively large organ which forms 4.5–5.5% of the total body weight and protrudes through the abdominal wall. By 13 to 32 weeks of gestation, it forms 3.4% to 4.0% of the total body weight. The liver weight hence forms a more or less constant proportion of the total body weight of the fetus.
Growth of the lung
Although in adult life the lung is the only major respiratory organ, in fetal life such is not the case, though the fetal lung is known to expand and contract in the last phase of development. The weights of both the right and left lungs are normally assessed at different periods of gestation and expressed as a function of the total body weight.
Instead of the standard linear relationship, K.L. Mukherjee and his group observed an irregular graph when the weight of the lungs, expressed as g/kg of body weight, was plotted against the body weight. This irregularity was observed in fetuses weighing 350 grams to 850 grams, after which the rate of growth became uniformly proportional again.
Brain and Central nervous System
The brain and the central nervous system are the two most important components of the fetus. Further analysis by the same group involved the CNS up to the medulla at the level of the 2nd cervical vertebra. The process of analyzing the fetal brain and the CNS involved dissecting out the whole brain tissue, followed by decantation, of an 8.5-week-old fetal brain weighing 15 grams. The brain at this time had already assumed the appearance of primary divisions and flexures, and the prosencephalon, mesencephalon and rhombencephalon had already given rise to the different brain-derived constituents like the rhinencephalon, corpora striata, cerebral cortex, hypothalamus and epithalamus and, to a lesser extent of differentiation, the pons and medulla. The growth of the fetal brain from this time onwards was proportional to the body weight, although some brains from other groups showed variations at the same stage, between 20% and even 12% or 13% of the body weight by and large. Scientists are yet to find an explanation for this.
Kidney and the Adrenal Glands
In the early gestational period, the adrenal glands outweigh even the metanephric kidneys and are comparatively large organs. After the 10th week of gestation, the kidney grows at a much more rapid rate than the adrenal glands. Hence, with increasing gestational time, i.e. by the 12th week of gestation, the kidneys and adrenal glands weigh about the same; past 12 weeks the kidneys weigh more than the adrenal glands. The adrenal gland is nevertheless a larger organ in the fetus than in the adult. The same group of researchers further observed, in 90 human fetuses, that the adrenal glands weigh more as fetal age increases. However, the rate of increase is not uniform and varies throughout fetal growth, like that of other organs.
Human fetal testes
The growth of the fetal testes is not uniform, as revealed through various other studies. The right testis weighed more than the left testis, although exceptions were noticed in some of the cases reported by K.L. Mukherjee and his group. Normally, like all other organs, the testes also increase in weight as the gestational period increases. The research group's graph plots further showed that the growth of the testes was not uniform: proportional at the initial stages, it soon flattened, then increased with different spikes throughout the whole length of the gestational period. The weight of the human testes, expressed as mg/100 grams of body weight, was also investigated, and a steep decline was observed in the early gestation period, from about 200 mg/100 grams of body weight to roughly 60 mg/100 grams of body weight, as the fetal weight went from about 1.5 grams to 20 grams. In a 1.6-kilogram fetus, the testes weighed only 20 mg/100 grams of body weight. This decline was, however, not maintained uniformly.
Growth of the Human fetal ovaries
A steep decline in the ovarian weight in the early gestational period was observed, though the decline was not uniformly maintained. With increasing gestational time, progressive growth in the weight of the ovaries was found, and in most cases the weight of the ovaries tracked the weight of the fetus, although some exceptions were observed by the group.
Fetal Thymus growth
At 8 weeks of gestation, when the fetus weighed 1 gram, the thymus could not be detected. In many of the 39 fetuses weighing around 1.3–14.7 grams, the thymus tissues could not be dissected by the group, especially in the smaller fetuses, owing to non-detection; the thymus could be detected in fetuses weighing 5 grams or more. Plotting a graph, it was observed that the thymus formed 52 mg/100 grams of body weight in a 5-gram fetus. Further study on 28 fetuses weighing 15 to 100 grams revealed the thymic weight to be 77 mg per 100 grams of body weight. The relative growth of the thymus was greater in this group than in all the earlier observations. A further group of 39 fetuses weighing between 100 and 300 grams showed a fetal thymic weight between 136 mg/100 grams and 77 mg/100 grams. In fetuses up to 28 weeks, it was observed that the fetal thymic weight was the highest, in contrast to many other organs like the brain and liver, which constitute a more or less constant proportion of the body weight with very few exceptions. It was therefore inferred that the thymic weight increases with the gestational period, although exceptions were observed.
Conclusion
Growth and development throughout fetal life are the two most important factors determining the growth rate of each individual and their specific organs. This process of maturation and development of the organs is observed in postnatal life also. With increasing gestational time, the fetal organs grow in proportion to the body weight, a phenomenon which is still not clearly understood by many researchers. Some believe that the genetic potential of the different endocrine organs related to growth, along with various other unidentified processes, mediates the whole phenomenon.
See also
Anthropometry
Prenatal Development
References
External links
Human biology | Anthropometric measurement of the developing fetus | [
"Biology"
] | 2,366 | [
"Human biology"
] |
51,118,020 | https://en.wikipedia.org/wiki/Samyung%20ENC | Samyung ENC is a South Korean manufacturer of marine communication and navigation systems. The company is publicly listed and traded on the KOSDAQ.
Market share
Samyung ENC is a leading company in a highly fragmented marine electronics industry whose manufacturers include Raymarine, Humminbird, Lowrance, Simrad, B&G, Magellan, Murphy, Naviop, Northstar, Sitex, TwoNav, Furuno and Geonav.
Products
The product line includes high frequency radio, GPS floater, and very high frequency (VHF) transmitter-receiver.
References
Manufacturing companies established in 1978
Engineering companies of South Korea
Manufacturing companies based in Busan
Navigation system companies
Marine electronics
South Korean brands
South Korean companies established in 1978
Companies listed on the Korea Exchange | Samyung ENC | [
"Engineering"
] | 162 | [
"Marine electronics",
"Marine engineering"
] |
52,622,085 | https://en.wikipedia.org/wiki/Indulin%20AA-86 | Indulin AA-86 is the trade name (held by Ingevity) for a proprietary formula used for an asphalt emulsifying agent. As such, it does not have a given CAS number. Its composition is only provided subject to a nondisclosure agreement. The company reports that it is a fatty amine derivative, an amber viscous liquid, pH 9 to 11 at a 15% w/w concentration, reactive with acids and oxidizing agents, with a relative density of 0.89, a boiling point greater than 180 °C and a closed cup flash point of 126 °C. It is not volatile, but is identified as a hazard for inhalation, eye or skin contact and must be used with adequate ventilation. The compound is stable and hazardous decomposition products should not be produced during normal use, but in a fire it can produce carbon dioxide, carbon monoxide and nitrogen oxides, so firefighters are advised to wear self-contained breathing apparatus. State regulatory disclosures indicate it contains ethyl acrylate. According to the US EPA, "the hydrochloric salt of this product is only acceptable for use in the production of asphalt emulsions, and the emulsions may only be used in asphalt paving applications." Standard usage involves partial neutralization of basic indulin with hydrochloric acid to form a salt, for a 1.0:1.1 ratio of indulin to its salt.
Corpus Christi water system incident
The compound is notable for a backflow of up to 24 gallons of the material, possibly in a mixture with hydrochloric acid, into the city water supply of Corpus Christi, Texas, leading to a temporary ban (December 14, 2016) on use of tap water throughout the city of 320,000 residents. The ban remained in place in 85% of the city for more than two days, leading to school closures and emergency deliveries of bottled water, after which restrictions were tailored (December 17) to smaller portions of the city. City officials posted a warning to residents that "Boiling, freezing, filtering, adding chlorine or other disinfectants or letting the water stand will not make the water safe." The material originated from a plant leased to Ergon Asphalt and Emulsions on property adjacent to one of the two Valero refineries in the city's large refinery complex. A "white, sudsy liquid" was reported to the city at taps in the company's administration building on December 1 and then, after city workers had flushed the pipe, on December 7, and finally, after a third flush, reported again by Valero workers at the building on December 12. A Valero spokesman described the contamination as "a localized backflow issue from third party operations in the area of Valero's asphalt terminal" and said that the company did not believe the city water had been impacted. It was reported December 17 that city officials were investigating four cases of skin and intestinal issues that were consistent with possible symptoms of exposure, but these claims were dismissed by Mayor Dan McQueen as "rumors", and twelve "reports of possibly related symptoms from prohibited water use" were described as "unconfirmed" by the EPA. The ban was lifted December 18 after 28 samples of city water failed to find Indulin AA-86 contamination.
The solubility of the compound is thought to be relatively low. A blog for Hydroviv, a water filter manufacturer, suggested that the presence of hydrochloric acid might hint at the nature of the backflow: "Indulin AA-86 is prepared in a 0.3% solution to form an emulsion. Therefore, for 24 gallons of Indulin AA-86 would be diluted with water into 8,000 gallons, a volume that is a standard storage/mixing tank size in the industry." The diluted emulsion would be more capable of mixing with the city water supply during a backflow. A statement by Ergon said that it purchases its water via Valero, its landlord at the site, and that a soap solution, consisting of 98% water and 2% indulin AA-86, would have backflowed through this separate supply line.
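The dilution arithmetic quoted above is a one-line check (the 0.3% figure is the blog's, and gallons are used throughout):
<syntaxhighlight lang="python">
emulsion_fraction = 0.003   # 0.3% Indulin AA-86 by volume, per the blog quoted above
indulin_gallons = 24        # quantity reported to have backflowed
tank_gallons = indulin_gallons / emulsion_fraction
print(tank_gallons)         # 8000.0 -- the standard mixing-tank size cited
</syntaxhighlight>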
References
External links
City of Corpus Christi website
Ergon Asphalt and Emulsions
Ingevity products page
Asphalt | Indulin AA-86 | [
"Physics",
"Chemistry"
] | 874 | [
"Amorphous solids",
"Asphalt",
"Unsolved problems in physics",
"Chemical mixtures"
] |
52,624,232 | https://en.wikipedia.org/wiki/4-Hydroxyestradiol | 4-Hydroxyestradiol (4-OHE2), also known as estra-1,3,5(10)-triene-3,4,17β-triol, is an endogenous, naturally occurring catechol estrogen and a minor metabolite of estradiol. It is estrogenic, similarly to many other hydroxylated estrogen metabolites such as 2-hydroxyestradiol, 16α-hydroxyestrone, estriol (16α-hydroxyestradiol), and 4-hydroxyestrone but unlike 2-hydroxyestrone.
See also
Estrogen conjugate
Lipoidal estradiol
References
External links
Metabocard for 4-Hydroxyestradiol - Human Metabolome Database
Sterols
Hydroxyarenes
Cyclopentanols
Estranes
Estrogens
Human metabolites | 4-Hydroxyestradiol | [
"Chemistry",
"Biology"
] | 190 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
52,626,244 | https://en.wikipedia.org/wiki/E%20series%20of%20preferred%20numbers | The E series is a system of preferred numbers (also called preferred values) derived for use in electronic components. It consists of the E3, E6, E12, E24, E48, E96 and E192 series, where the number after the 'E' designates the quantity of logarithmic value "steps" per decade. Although it is theoretically possible to produce components of any value, in practice the need for inventory simplification has led the industry to settle on the E series for resistors, capacitors, inductors, and zener diodes. Other types of electrical components are either specified by the Renard series (for example fuses) or are defined in relevant product standards (for example IEC 60228 for wires).
History
During the Golden Age of Radio (1920s to 1950s), numerous companies manufactured vacuum-tube–based AM radio receivers for consumer use. In the early years, many components were not standardized between numerous AM radio manufacturers. The capacitance values of capacitors (previously called condensers) and resistance values of resistors were not standardized as they are today.
In 1924, the Radio Manufacturers Association (RMA) was formed in Chicago, Illinois by 50 radio manufacturers to license and share patents. Over time, this group created some of the earliest standards for electronics components. In 1936, the RMA adopted a preferred-number system for the resistance values of fixed-composition resistors. Over time, resistor manufacturers migrated from older values to the 1936 resistance value standard.
During World War II (1940s), American and British military production was a major influence for establishing common standards across many industries, especially in electronics, where it was essential to produce large quantities of standardized electronic parts for military devices, such as wireless communications, radar, radar jammers, LORAN, and more.
Later, the mid-20th century baby boom and the invention of the transistor kicked off demand for consumer electronics goods during the 1950s. As portable transistor radio manufacturing migrated from United States towards Japan during the late 1950s, it was critical for the electronic industry to have international standards.
Building on the RMA's earlier work, the International Electrotechnical Commission (IEC) began work on an international standard in 1948. The first version of this IEC Publication 63 (IEC 63) was released in 1952. Later, IEC 63 was revised, amended, and renamed into the current version known as IEC 60063:2015.
IEC 60063 release history:
IEC 63:1952 (aka IEC 60063:1952), first edition, published 1952-01-01.
IEC 63:1963 (aka IEC 60063:1963), second edition, published 1963-01-01.
IEC 63:1967/AMD1:1967 (aka IEC 60063:1967/AMD1:1967), first amendment of second edition, published 1967.
IEC 63:1977/AMD2:1977 (aka IEC 60063:1977/AMD2:1977), second amendment of second edition, published 1977.
IEC 60063:2015, third edition, published 2015-03-27.
Overview
The E series of preferred numbers was chosen such that manufactured components end up at roughly equally spaced values (a geometric progression) on a logarithmic scale. Each E series subdivides each decade of magnitude into 3, 6, 12, 24, 48, 96, or 192 values, termed E3, E6, and so forth up to E192, with maximum relative errors of 40%, 20%, 10%, 5%, 2%, 1%, and 0.5%, respectively. The E192 series is also used for 0.25% and 0.1% tolerance resistors.
Historically, the E series is split into two major groupings:
E3, E6, E12, E24 are subsets of E24. Values in this group are rounded to 2 significant figures.
E48, E96, E192 are subsets of E192. Values in this group are rounded to 3 significant figures.
Formula
The formula for each value is determined by powers of the m-th root of ten, although the calculated values do not match the official values of every E series:

V = 10^(n/m)

where:
V is rounded to 2 significant figures (E3, E6, E12, E24) or 3 significant figures (E48, E96, E192),
m is the E series group size (3, 6, 12, 24, 48, 96, 192),
n is an integer in the range 0 to m − 1.
Exceptions:
The official values for the E48 and E96 series match their calculated values, but all other series (E3, E6, E12, E24, E192) have one or more official values that don't match their calculated values (see the subset sections below).
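As an illustration (a minimal sketch, not part of the standard), the calculated values and their mismatches against the official values can be reproduced in a few lines of Python; the `E24_OFFICIAL` list is transcribed from the E24 list later in this article:

```python
from decimal import Decimal, ROUND_HALF_UP

# Official E24 values (from the E24 list in this article), used to
# check which calculated values the standard overrides.
E24_OFFICIAL = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4,
                2.7, 3.0, 3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2,
                6.8, 7.5, 8.2, 9.1]

def e_series_calculated(m):
    """Return 10**(n/m) for n = 0..m-1, rounded to 2 significant
    figures for E3..E24 and 3 significant figures for E48..E192."""
    step = Decimal("0.1") if m <= 24 else Decimal("0.01")
    return [float(Decimal(10 ** (n / m)).quantize(step, ROUND_HALF_UP))
            for n in range(m)]

# Print the eight positions where the official E24 values differ.
for n, (calc, official) in enumerate(zip(e_series_calculated(24), E24_OFFICIAL)):
    if calc != official:
        print(f"n = {n:2d}: calculated {calc} vs official {official}")
```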
E24 subsets
For E3, E6, E12, and E24, the values from the formula are rounded to 2 significant figures, but eight official values (shown in bold in the table below) differ from the calculated values. During the early half of the 20th century, electronic components had different sets of component values than today. In the late 1940s, standards organizations started working towards codifying a standard set of official component values, and they decided that it wasn't practical to change some of the established historical values. The first standard was accepted in Paris in 1950, then published as IEC 63 in 1952. The official values of the E3, E6, and E12 series are subsets of the following official E24 values.
{| class="wikitable" style="text-align: center;"
|+Comparison of rounded log-scaled values and official values of the E24 series (m = 24)
|-
! n || 0 || 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9 || 10 || 11 || 12 || 13 || 14 || 15 || 16 || 17 || 18 || 19 || 20 || 21 || 22 || 23
|-
| Calculated values || 1.0 || 1.1 || 1.2 || 1.3 || 1.5 || 1.6 || 1.8 || 2.0 || 2.2 || 2.4 || 2.6 || 2.9 || 3.2 || 3.5 || 3.8 || 4.2 || 4.6 || 5.1 || 5.6 || 6.2 || 6.8 || 7.5 || 8.3 || 9.1
|-
| Official E24 values || 1.0 || 1.1 || 1.2 || 1.3 || 1.5 || 1.6 || 1.8 || 2.0 || 2.2 || 2.4 || '''2.7''' || '''3.0''' || '''3.3''' || '''3.6''' || '''3.9''' || '''4.3''' || '''4.7''' || 5.1 || 5.6 || 6.2 || 6.8 || 7.5 || '''8.2''' || 9.1
|}
The E3 series is rarely used, except for some components with high variations such as electrolytic capacitors, where the given tolerance is often unbalanced between its negative and positive limits, or for components with uncritical values such as pull-up resistors. The calculated constant tangential tolerance for this series is (∛10 − 1) ÷ (∛10 + 1) ≈ 36.60% (derived in the sketch below). While the standard only specifies a tolerance greater than 20%, other sources indicate 40% or 50%. Currently, most electrolytic capacitors are manufactured with values in the E6 or E12 series, so the E3 series is mostly obsolete.
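For reference, here is one way to derive that figure (a sketch; the general formula is an assumption based on requiring the tolerance bands of adjacent values to just touch). With step ratio r = 10^(1/m), two adjacent nominal values a and ar have touching bands when a(1 + t) = ar(1 − t), giving:

$$t = \frac{10^{1/m} - 1}{10^{1/m} + 1}, \qquad t_{\mathrm{E3}} = \frac{\sqrt[3]{10} - 1}{\sqrt[3]{10} + 1} \approx \frac{1.1544}{3.1544} \approx 0.3660 = 36.60\%.$$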
E192 subsets
For E48, E96, and E192, the values from the formula are rounded to 3 significant figures, but one official value (9.20 in the E192 series) differs from its calculated value.
To calculate the E48 series: m is 48, then n is incremented from 0 to 47 through the formula. All official values of the E48 series match their calculated values.
To calculate the E96 series: m is 96, then n is incremented from 0 to 95 through the formula. All official values of the E96 series match their calculated values.
To calculate the E192 series: m is 192, then n is incremented from 0 to 191 through the formula, with one exception at n = 185, where 9.20 is the official value instead of the calculated 9.19.
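Continuing the earlier Python sketch (again illustrative, not normative), the official series can be produced by patching the calculated values with the documented exceptions and deriving E3/E6/E12 as strided subsets of E24:

```python
# Documented exceptions: eight in E24 (n = 10..16 and 22), one in E192.
E24_OVERRIDES = {10: 2.7, 11: 3.0, 12: 3.3, 13: 3.6, 14: 3.9,
                 15: 4.3, 16: 4.7, 22: 8.2}
E192_OVERRIDES = {185: 9.20}

def e_series_official(m):
    """Official E-series values for m in (3, 6, 12, 24, 48, 96, 192)."""
    if m in (3, 6, 12):                      # subsets of E24
        return e_series_official(24)[::24 // m]
    values = e_series_calculated(m)          # from the sketch above
    for n, v in {24: E24_OVERRIDES, 192: E192_OVERRIDES}.get(m, {}).items():
        values[n] = v
    return values

print(e_series_official(12))   # 1.0, 1.2, 1.5, 1.8, 2.2, 2.7, ...
```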
Since some values of the E24 series do not exist in the E48, E96, or E192 series, some resistor manufacturers have added the missing E24 values into their 1%, 0.5%, 0.25%, and 0.1% tolerance resistor families. This allows easier purchasing migration between the various tolerances; a small value-lookup sketch follows the table below. This E series merging is noted on resistor datasheets and webpages as "E96 + E24" or "E192 + E24". In the following table, the dashed E24 values don't exist in the E48, E96, or E192 series:
{| class="wikitable" style="text-align: center;"
|+E24 values that exist in E48, E96, and E192 series
|-
! E24 values || 1.0 || 1.1 || 1.2 || 1.3 || 1.5 || 1.6 || 1.8 || 2.0 || 2.2 || 2.4 || 2.7 || 3.0 || 3.3 || 3.6 || 3.9 || 4.3 || 4.7 || 5.1 || 5.6 || 6.2 || 6.8 || 7.5 || 8.2 || 9.1
|-
| E48 values || 1.00 || 1.10 || – || – || – || – || – || – || – || – || – || – || – || – || – || – || – || – || – || – || – || 7.50 || – || –
|-
| E96 values || 1.00 || 1.10 || – || 1.30 || 1.50 || – || – || 2.00 || – || – || – || – || – || – || – || – || – || – || – || – || – || 7.50 || – || –
|-
| E192 values || 1.00 || 1.10 || 1.20 || 1.30 || 1.50 || 1.60 || 1.80 || 2.00 || – || 2.40 || – || – || – || – || – || – || 4.70 || – || – || – || – || 7.50 || – || –
|}
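A common practical task implied by this merging is snapping an arbitrary computed value to the nearest available series value. Here is a minimal sketch (the function name and decade handling are illustrative choices; `e_series_official` comes from the sketches above):

```python
import math

def nearest_e_value(x, series):
    """Snap a positive value x to the nearest value of the given
    E-series base list (1.0 .. 9.9x), preserving its decade.
    Comparison is done in log space to match the geometric spacing."""
    exp = math.floor(math.log10(x))
    mant = x / 10 ** exp
    # Include 10.0 so values near the top of a decade can snap upward.
    best = min(list(series) + [10.0],
               key=lambda v: abs(math.log10(v) - math.log10(mant)))
    return best * 10 ** exp

print(nearest_e_value(4990, e_series_official(24)))   # -> 5100.0
```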
Examples
If a manufacturer sold resistors with all values in a range of 1 ohm to 10 megaohms, the available resistance values for E3 through E12 would be as follows (a short generation sketch appears after the capacitor table below):
{| class="wikitable"
|-
! scope="col" | E3 (in ohms)
! scope="col" | E6 (in ohms)
! scope="col" | E12 (in ohms)
|-
|
1.0, 2.2, 4.7,
10, 22, 47,
100, 220, 470,
1 k, 2.2 k, 4.7 k,
10 k, 22 k, 47 k,
100 k, 220 k, 470 k,
1 M, 2.2 M, 4.7 M,
10 M
|
1.0, 1.5, 2.2, 3.3, 4.7, 6.8,
10, 15, 22, 33, 47, 68,
100, 150, 220, 330, 470, 680,
1 k, 1.5 k, 2.2 k, 3.3 k, 4.7 k, 6.8 k,
10 k, 15 k, 22 k, 33 k, 47 k, 68 k,
100 k, 150 k, 220 k, 330 k, 470 k, 680 k,
1 M, 1.5 M, 2.2 M, 3.3 M, 4.7 M, 6.8 M,
10 M
|
1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2,
10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82,
100, 120, 150, 180, 220, 270, 330, 390, 470, 560, 680, 820,
1 k, 1.2 k, 1.5 k, 1.8 k, 2.2 k, 2.7 k, 3.3 k, 3.9 k, 4.7 k, 5.6 k, 6.8 k, 8.2 k,
10 k, 12 k, 15 k, 18 k, 22 k, 27 k, 33 k, 39 k, 47 k, 56 k, 68 k, 82 k,
100 k, 120 k, 150 k, 180 k, 220 k, 270 k, 330 k, 390 k, 470 k, 560 k, 680 k, 820 k,
1 M, 1.2 M, 1.5 M, 1.8 M, 2.2 M, 2.7 M, 3.3 M, 3.9 M, 4.7 M, 5.6 M, 6.8 M, 8.2 M,
10 M
|}
If a manufacturer sold capacitors with all values in a range of 1 pF to 10,000 μF, the available capacitance values for E3 and E6 would be:
{| class="wikitable"
|-
! scope="col" | E3
! scope="col" | E6
|-
|
1.0 pF, 2.2 pF, 4.7 pF,
10 pF, 22 pF, 47 pF,
100 pF, 220 pF, 470 pF,
1 nF, 2.2 nF, 4.7 nF,
10 nF, 22 nF, 47 nF,
100 nF, 220 nF, 470 nF,
1 μF, 2.2 μF, 4.7 μF,
10 μF, 22 μF, 47 μF,
100 μF, 220 μF, 470 μF,
1000 μF, 2200 μF, 4700 μF,
10000 μF
|
1.0 pF, 1.5 pF, 2.2 pF, 3.3 pF, 4.7 pF, 6.8 pF,
10 pF, 15 pF, 22 pF, 33 pF, 47 pF, 68 pF,
100 pF, 150 pF, 220 pF, 330 pF, 470 pF, 680 pF,
1 nF, 1.5 nF, 2.2 nF, 3.3 nF, 4.7 nF, 6.8 nF,
10 nF, 15 nF, 22 nF, 33 nF, 47 nF, 68 nF,
100 nF, 150 nF, 220 nF, 330 nF, 470 nF, 680 nF,
1 μF, 1.5 μF, 2.2 μF, 3.3 μF, 4.7 μF, 6.8 μF,
10 μF, 15 μF, 22 μF, 33 μF, 47 μF, 68 μF,
100 μF, 150 μF, 220 μF, 330 μF, 470 μF, 680 μF,
1000 μF, 1500 μF, 2200 μF, 3300 μF, 4700 μF, 6800 μF,
10000 μF
|}
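The tables above follow mechanically from the base values: each decade repeats the base series scaled by a power of ten, plus a single closing value one decade higher. A sketch (the helper name is an illustrative choice):

```python
def decade_values(base_values, first_exp, last_exp):
    """Expand E-series base values across decades 10**first_exp to
    10**last_exp, then append the closing value one decade higher."""
    out = [float(f"{b * 10.0 ** d:.3g}")     # format to kill float noise
           for d in range(first_exp, last_exp + 1)
           for b in base_values]
    out.append(float(10.0 ** (last_exp + 1)))
    return out

# Resistors, 1 ohm .. 10 Mohm, E3: 1.0, 2.2, 4.7, 10, 22, ..., 1e7
print(decade_values([1.0, 2.2, 4.7], 0, 6))
# Capacitors, 1 pF .. 10000 uF, E3: use first_exp=-12, last_exp=-3
# and read the results in farads.
```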
Lists
List of official values for each E series:
E3 values
(40% tolerance)
1.0, 2.2, 4.7
E6 values
(20% tolerance)
1.0, 1.5, 2.2, 3.3, 4.7, 6.8
E12 values
(10% tolerance)
1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2
E24 values
(5% tolerance)
1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0, 3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1
E48 values
(2% tolerance)
1.00, 1.05, 1.10, 1.15, 1.21, 1.27, 1.33, 1.40, 1.47, 1.54, 1.62, 1.69, 1.78, 1.87, 1.96, 2.05, 2.15, 2.26, 2.37, 2.49, 2.61, 2.74, 2.87, 3.01, 3.16, 3.32, 3.48, 3.65, 3.83, 4.02, 4.22, 4.42, 4.64, 4.87, 5.11, 5.36, 5.62, 5.90, 6.19, 6.49, 6.81, 7.15, 7.50, 7.87, 8.25, 8.66, 9.09, 9.53
E96 values
(1% tolerance)
1.00, 1.02, 1.05, 1.07, 1.10, 1.13, 1.15, 1.18, 1.21, 1.24, 1.27, 1.30, 1.33, 1.37, 1.40, 1.43, 1.47, 1.50, 1.54, 1.58, 1.62, 1.65, 1.69, 1.74, 1.78, 1.82, 1.87, 1.91, 1.96, 2.00, 2.05, 2.10, 2.15, 2.21, 2.26, 2.32, 2.37, 2.43, 2.49, 2.55, 2.61, 2.67, 2.74, 2.80, 2.87, 2.94, 3.01, 3.09, 3.16, 3.24, 3.32, 3.40, 3.48, 3.57, 3.65, 3.74, 3.83, 3.92, 4.02, 4.12, 4.22, 4.32, 4.42, 4.53, 4.64, 4.75, 4.87, 4.99, 5.11, 5.23, 5.36, 5.49, 5.62, 5.76, 5.90, 6.04, 6.19, 6.34, 6.49, 6.65, 6.81, 6.98, 7.15, 7.32, 7.50, 7.68, 7.87, 8.06, 8.25, 8.45, 8.66, 8.87, 9.09, 9.31, 9.53, 9.76
E192 values
(0.5% and lower tolerance)
1.00, 1.01, 1.02, 1.04, 1.05, 1.06, 1.07, 1.09, 1.10, 1.11, 1.13, 1.14, 1.15, 1.17, 1.18, 1.20, 1.21, 1.23, 1.24, 1.26, 1.27, 1.29, 1.30, 1.32, 1.33, 1.35, 1.37, 1.38, 1.40, 1.42, 1.43, 1.45, 1.47, 1.49, 1.50, 1.52, 1.54, 1.56, 1.58, 1.60, 1.62, 1.64, 1.65, 1.67, 1.69, 1.72, 1.74, 1.76, 1.78, 1.80, 1.82, 1.84, 1.87, 1.89, 1.91, 1.93, 1.96, 1.98, 2.00, 2.03, 2.05, 2.08, 2.10, 2.13, 2.15, 2.18, 2.21, 2.23, 2.26, 2.29, 2.32, 2.34, 2.37, 2.40, 2.43, 2.46, 2.49, 2.52, 2.55, 2.58, 2.61, 2.64, 2.67, 2.71, 2.74, 2.77, 2.80, 2.84, 2.87, 2.91, 2.94, 2.98, 3.01, 3.05, 3.09, 3.12, 3.16, 3.20, 3.24, 3.28, 3.32, 3.36, 3.40, 3.44, 3.48, 3.52, 3.57, 3.61, 3.65, 3.70, 3.74, 3.79, 3.83, 3.88, 3.92, 3.97, 4.02, 4.07, 4.12, 4.17, 4.22, 4.27, 4.32, 4.37, 4.42, 4.48, 4.53, 4.59, 4.64, 4.70, 4.75, 4.81, 4.87, 4.93, 4.99, 5.05, 5.11, 5.17, 5.23, 5.30, 5.36, 5.42, 5.49, 5.56, 5.62, 5.69, 5.76, 5.83, 5.90, 5.97, 6.04, 6.12, 6.19, 6.26, 6.34, 6.42, 6.49, 6.57, 6.65, 6.73, 6.81, 6.90, 6.98, 7.06, 7.15, 7.23, 7.32, 7.41, 7.50, 7.59, 7.68, 7.77, 7.87, 7.96, 8.06, 8.16, 8.25, 8.35, 8.45, 8.56, 8.66, 8.76, 8.87, 8.98, 9.09, 9.20, 9.31, 9.42, 9.53, 9.65, 9.76, 9.88
See also
Electronic color code – color code used to indicate the values of axial electronic components, such as resistors, capacitors, inductors, and diodes (also see IEC 60062)
Geometric progression
Preferred number
Renard series – used for current rating of electric fuses
Three-character marking code for resistors – for (E48/)E96 values (see EIA-96 and IEC 60062:2016)
Two-character marking code for capacitors – for (E3/E6/E12/)E24 values (see ANSI/EIA-198-D:1991, ANSI/EIA-198-1-E:1998, ANSI/EIA-198-1-F:2002 and IEC 60062:2016/AMD1:2019)
Notes
References
External links
Calculate the closest component value to any E-series with an Excel User Defined Function.
Calculate standard resistor values in Excel – EDN magazine
Printable E series tables
E6 to E96 Table – Servenger
E3 to E192 Table – Vishay
Electrical components
Numbers
Industrial design
Logarithmic scales of measurement | E series of preferred numbers | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 5,299 | [
"Industrial design",
"Electrical components",
"Design engineering",
"Physical quantities",
"Quantity",
"Mathematical objects",
"Logarithmic scales of measurement",
"Arithmetic",
"Electrical engineering",
"Design",
"Numbers",
"Components"
] |
34,265,726 | https://en.wikipedia.org/wiki/Synge%27s%20world%20function | In general relativity, Synge's world function is a smooth, locally defined function of pairs of points in a smooth spacetime M with smooth Lorentzian metric g. Let p, q be two points in spacetime, and suppose q belongs to a convex normal neighborhood U of p (referred to the Levi-Civita connection associated to g), so that there exists a unique geodesic γ(λ) from p to q included in U, up to the affine parameter λ. Suppose γ(λ₀) = p and γ(λ₁) = q. Then Synge's world function is defined as:

$$\sigma(p, q) = \frac{1}{2}(\lambda_1 - \lambda_0) \int_{\lambda_0}^{\lambda_1} g_{\mu\nu}(\gamma(\lambda))\, t^\mu(\lambda)\, t^\nu(\lambda)\, d\lambda$$

where t^μ = dγ^μ/dλ is the tangent vector to the affinely parametrized geodesic γ. That is, σ(p, q) is half the square of the signed geodesic length from p to q computed along the unique geodesic segment, in U, joining the two points. Synge's world function is well-defined, since the quantity above is invariant under affine reparameterization. In particular, for Minkowski spacetime, Synge's world function simplifies to half the spacetime interval between the two points: it is globally defined and takes the form

$$\sigma(x, x') = \frac{1}{2}\, \eta_{\mu\nu}\, (x^\mu - x'^\mu)(x^\nu - x'^\nu)$$
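As a consistency check (a sketch under the stated definitions, not taken from a source), the straight line is the unique geodesic in Minkowski spacetime; affinely parametrizing it on [0, 1] gives:

$$\gamma(\lambda) = x' + \lambda\,(x - x'), \qquad t^\mu = \frac{d\gamma^\mu}{d\lambda} = x^\mu - x'^\mu \quad (\text{constant in } \lambda),$$

$$\sigma(x, x') = \frac{1}{2}(1 - 0) \int_0^1 \eta_{\mu\nu}\, t^\mu t^\nu \, d\lambda = \frac{1}{2}\, \eta_{\mu\nu}\, (x^\mu - x'^\mu)(x^\nu - x'^\nu),$$

which recovers the global expression above.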
Synge's function can also be defined on Riemannian manifolds; in that case it is non-negative.
Generally speaking, Synge's function is only locally defined, and an attempt to extend it to domains larger than convex normal neighborhoods generally leads to a multivalued function, since there may be several geodesic segments joining a pair of points in the spacetime. It is, however, possible to define it in a neighborhood of the diagonal of M × M, though this definition requires some arbitrary choice.
Synge's world function (as well as its extension to a neighborhood of the diagonal of M × M) appears in a number of theoretical constructions of quantum field theory in curved spacetime. It is the crucial object used to construct a parametrix of Green's functions of Lorentzian Green-hyperbolic second-order partial differential equations on a globally hyperbolic manifold, and it enters the definition of Hadamard Gaussian states.
References
Moretti, Valter (2024). Geometric Methods in Mathematical Physics II: Tensor Analysis on Manifolds and General Relativity, Chapter 7. Lecture Notes, Trento University.
General relativity | Synge's world function | [
"Physics"
] | 438 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |