Dataset columns: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
41,351,898
https://en.wikipedia.org/wiki/Subset%20simulation
Subset simulation is a method used in reliability engineering to compute small (i.e., rare-event) failure probabilities encountered in engineering systems. The basic idea is to express a small failure probability as a product of larger conditional probabilities by introducing intermediate failure events. This conceptually converts the original rare-event problem into a series of frequent-event problems that are easier to solve. In the actual implementation, samples conditional on intermediate failure events are adaptively generated to gradually populate from the frequent-event to the rare-event region. These 'conditional samples' provide information for estimating the complementary cumulative distribution function (CCDF) of the quantity of interest (which governs failure), covering the high- as well as the low-probability regions. They can also be used for investigating the causes and consequences of failure events. The generation of conditional samples is not trivial but can be performed efficiently using Markov chain Monte Carlo (MCMC). Subset simulation treats the relationship between the (input) random variables and the (output) response quantity of interest as a 'black box'. This can be attractive for complex systems where it is difficult to use other variance reduction or rare-event sampling techniques that require prior information about the system behaviour. For problems where it is possible to incorporate prior information into the reliability algorithm, it is often more efficient to use other variance reduction techniques such as importance sampling. It has been shown that subset simulation is more efficient than traditional Monte Carlo simulation, but less efficient than line sampling, when applied to a fracture mechanics test problem. Basic idea Let X be a vector of random variables and Y = h(X) be a scalar (output) response quantity of interest, for which the failure probability P(F) = P(Y > b) is to be determined, where b is a specified threshold level. Each evaluation of h(·) is expensive, so it should be avoided where possible. Using direct Monte Carlo methods one can generate i.i.d. (independent and identically distributed) samples of X and then estimate P(F) simply as the fraction of samples with Y > b. However, this is not efficient when P(F) is small, because most samples will not fail (i.e., they have Y ≤ b) and in many cases an estimate of 0 results. As a rule of thumb, for small P(F) one requires 10 failed samples to estimate P(F) with a coefficient of variation of 30% (a moderate requirement). For example, 10,000 i.i.d. samples, and hence evaluations of h(·), would be required for such an estimate if P(F) = 0.001. Subset simulation attempts to convert a rare-event problem into more frequent ones. Let b1 < b2 < ... < bm = b be an increasing sequence of intermediate threshold levels. From the basic property of conditional probability, P(F) = P(Y > b) = P(Y > b1) P(Y > b2 | Y > b1) ··· P(Y > bm | Y > bm−1). The 'raw idea' of subset simulation is to estimate P(F) by estimating P(Y > b1) and the conditional probabilities P(Y > bi | Y > bi−1) for i = 2, ..., m, anticipating an efficiency gain when these probabilities are not small. To implement this idea there are two basic issues: Estimating the conditional probabilities by means of simulation requires the efficient generation of samples of X conditional on the intermediate failure events, i.e., the conditional samples. This is generally non-trivial. The intermediate threshold levels should be chosen so that the intermediate probabilities are not too small (otherwise one ends up with a rare-event problem again) but not too large (otherwise too many levels are required to reach the target event).
However, this requires information about the CCDF, which is the very target to be estimated. In the standard algorithm of subset simulation the first issue is resolved by using Markov chain Monte Carlo. More generic and flexible versions of the simulation algorithm not based on Markov chain Monte Carlo have recently been developed. The second issue is resolved by choosing the intermediate threshold levels {bi} adaptively using samples from the last simulation level. As a result, subset simulation in fact produces a set of estimates for b that correspond to different fixed values of p = P(Y > b), rather than estimates of probabilities for fixed threshold values. There are a number of variations of subset simulation used in different contexts in applied probability and stochastic operations research. For example, in some variations the simulation effort to estimate each conditional probability P(Y > bi | Y > bi−1) (i = 2, ..., m) may not be fixed prior to the simulation, but may be random, similar to the splitting method in rare-event probability estimation. These versions of subset simulation can also be used to approximately sample from the distribution of X given failure of the system (that is, conditional on the event {Y > b}). In that case, the relative variance of the (random) number of particles in the final level can be used to bound the sampling error as measured by the total variation distance of probability measures. See also Rare event sampling Curse of dimensionality Line sampling Notes See Au & Wang for an introductory coverage of subset simulation and its application to engineering risk analysis. Schuëller & Pradlwarter report the performance of subset simulation (and other variance-reduction techniques) in a set of stochastic mechanics benchmark problems. Chapter 4 of Phoon discusses the application of subset simulation (and other Monte Carlo methods) to geotechnical engineering problems. Zio & Pedroni discuss the application of subset simulation (and other methods) to a problem in nuclear engineering. References Reliability analysis Variance reduction
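The following Python sketch illustrates the procedure described above, with a simple random-walk Metropolis step for the conditional sampling. The limit-state function h, the standard normal input distribution, the level probability p0 = 0.1 and the proposal scale are all hypothetical choices for illustration, not prescriptions from the article.

```python
import numpy as np

# Minimal sketch of subset simulation with an MCMC conditional-sampling
# step. Assumptions (not from the article): standard normal inputs and
# a hypothetical limit state h(x) = x1 + x2 with target threshold b = 9.

rng = np.random.default_rng(0)

def h(x):
    return x.sum(axis=-1)          # scalar response Y = h(X)

def subset_simulation(n=1000, p0=0.1, b=9.0, dim=2, max_levels=20):
    x = rng.normal(size=(n, dim))  # direct Monte Carlo at level 0
    p_f = 1.0
    for _ in range(max_levels):
        y = h(x)
        # Adaptive intermediate threshold: the (1 - p0) sample quantile,
        # so each conditional probability is close to p0.
        b_i = np.quantile(y, 1.0 - p0)
        if b_i >= b:                         # target level reached
            return p_f * np.mean(y > b)
        p_f *= p0
        seeds = x[y > b_i]                   # conditional samples as seeds
        # Grow Markov chains from the seeds until the population is n again.
        chains = list(seeds)
        while len(chains) < n:
            s = chains[rng.integers(len(chains))]
            cand = s + rng.normal(scale=1.0, size=dim)  # random-walk proposal
            # Accept with the standard normal Metropolis ratio, but only
            # if the candidate stays in the intermediate failure region.
            if (rng.random() < np.exp(0.5 * (s @ s - cand @ cand))
                    and h(cand) > b_i):
                chains.append(cand)
            else:
                chains.append(s)             # repeat the current state
        x = np.array(chains)
    return p_f

# P(X1 + X2 > 9) with Xi ~ N(0,1): the exact value is about 1e-10,
# far out of reach of direct Monte Carlo with this sample size.
print(subset_simulation())
```

Each level multiplies the running estimate by p0 until the adaptive threshold reaches the target b, at which point the final conditional probability is estimated directly from the last population.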
Subset simulation
[ "Engineering" ]
1,092
[ "Reliability analysis", "Reliability engineering" ]
41,352,383
https://en.wikipedia.org/wiki/Maximiscin
Maximiscin is a polyketide-shikimate chemical compound isolated from Tolypocladium that shows tumor growth suppression in an animal model. The discovery of maximiscin was the result of a citizen scientist crowdsourcing project by the University of Oklahoma. The soil sample which yielded maximiscin was sent by a woman from Salcha, Alaska. References Methyl esters Lactams Polyketides Cyclohexenes
Maximiscin
[ "Chemistry" ]
93
[ "Biomolecules by chemical classification", "Natural products", "Polyketides" ]
41,353,264
https://en.wikipedia.org/wiki/American%20College%20of%20Neuropsychopharmacology
Founded in 1961, the American College of Neuropsychopharmacology (ACNP) is a professional organization of leading brain and behavior scientists. The principal functions of the College are research and education. Its goals in research are to offer investigators an opportunity for cross-disciplinary communication and to promote the application of various scientific disciplines to the study of the brain's effect on behavior, with a focus on mental illness of all forms. Its educational goals are to encourage young scientists to enter research careers in neuropsychopharmacology and to develop and provide accurate information about behavioral disorders and their pharmacological treatment. Organization The College is an honorific society. Members are selected primarily on the basis of their original research contributions to the broad field of neuroscience. The membership of the College is drawn from scientists in multiple fields including behavioral pharmacology, neuroimaging, chronobiology, clinical psychopharmacology, epidemiology, genetics, molecular biology, neurochemistry, neuroendocrinology, neuroimmunology, neurology, neurophysiology, psychiatry, and psychology. Annual meeting The annual meeting of the College is a closed meeting; only ACNP members and their invited guests may attend. Because of the College's intense concern with, and involvement in, the education and training of tomorrow's brain scientists, the College selects a number of young scientists to be invited to the annual meeting through a competitive process open to all early-career researchers. This meeting, a mix of foremost brain and behavior research worldwide, is designed to encourage dialogue, discussion, and synergy among those attending. Awards The ACNP offers the following awards: Julius Axelrod Mentorship Award Daniel H. Efron Research Award Joel Elkes Research Award Barbara Fish Memorial Award Paul Hoch Distinguished Service Award Eva King Killam Research Award Dolores Shockley Diversity and Inclusion Advancement Award Media Award Public Service Award Women's Advocacy Award Publication The Springer Nature journals Neuropsychopharmacology and NPP-Digital Psychiatry and Neuroscience are its official publications. Neuropsychopharmacology was first published in 1987, and NPP-Digital Psychiatry and Neuroscience is an open-access journal that started in 2023. References See also European Brain Council European College of Neuropsychopharmacology Neuropsychopharmacology (journal) Neuropharmacology Neuroscience organizations Organizations based in Tennessee Organizations established in 1961 Research organizations in the United States
American College of Neuropsychopharmacology
[ "Chemistry" ]
507
[ "Pharmacology", "Neuropharmacology" ]
41,358,691
https://en.wikipedia.org/wiki/Hydrodynamic%20quantum%20analogs
In physics, hydrodynamic quantum analogs are experimentally observed phenomena, involving fluid droplets bouncing over a vibrating fluid bath, that behave analogously to several quantum-mechanical systems. The experimental evidence for diffraction through slits has been disputed; however, although the diffraction pattern of walking droplets is not exactly the same as in quantum physics, it does appear clearly in the high-memory regime (at high forcing of the bath), where all the quantum-like effects are strongest. A droplet can be made to bounce indefinitely in a stationary position on a vibrating fluid surface. This is possible due to a pervading air layer that prevents the drop from coalescing into the bath. For certain combinations of bath surface acceleration, droplet size, and vibration frequency, a bouncing droplet will cease to stay in a stationary position and instead "walk" in a rectilinear motion on top of the fluid bath. Walking droplet systems have been found to mimic several quantum mechanical phenomena including particle diffraction, quantum tunneling, quantized orbits, the Zeeman effect, and the quantum corral. Besides being an interesting means of visualising phenomena typical of the quantum-mechanical world, floating droplets on a vibrating bath have interesting analogies with pilot wave theory, one of the many interpretations of quantum mechanics in its early stages of conception and development. The theory was initially proposed by Louis de Broglie in 1927. It suggests that all particles in motion are actually borne on a wave-like motion, similar to how an object moves on a tide. In this theory, it is the evolution of the carrier wave that is given by the Schrödinger equation. It is a deterministic theory and is entirely nonlocal. It is an example of a hidden-variable theory, and all of non-relativistic quantum mechanics can be accounted for in this theory. The theory was abandoned by de Broglie in 1932 and gave way to the Copenhagen interpretation, but was revived by David Bohm in 1952 as de Broglie–Bohm theory. The Copenhagen interpretation does not use the concept of the carrier wave or of a particle moving in definite paths until a measurement is made. Physics of bouncing and walking droplets History Floating droplets on a vibrating bath were first described in writing by Jearl Walker in a 1978 article in Scientific American. In 2005, Yves Couder and his lab were the first to systematically study the dynamics of bouncing droplets and discovered most of the quantum mechanical analogs. John Bush and his lab expanded upon Couder's work and studied the system in greater detail. In 2015 three separate groups, including John Bush's, attempted to reproduce the double-slit results and were unsuccessful. Stationary bouncing droplet A fluid droplet can float or bounce over a vibrating fluid bath because of the presence of an air layer between the droplet and the bath surface. The behavior of the droplet depends on the acceleration of the bath surface. Below a critical acceleration, the droplet will take successively smaller bounces before the intervening air layer eventually drains from underneath, causing the droplet to coalesce. Above the bouncing threshold, the intervening air layer replenishes during each bounce so the droplet never touches the bath surface. Near the bath surface, the droplet experiences an equilibrium between inertial forces, gravity, and a reaction force due to the interaction with the air layer above the bath surface.
This reaction force serves to launch the droplet back into the air like a trampoline. Moláček and Bush proposed two different models for the reaction force. Walking droplet For a small range of frequencies and drop sizes, a fluid droplet on a vibrating bath can be made to "walk" on the surface if the surface acceleration is sufficiently high (but still below the Faraday instability threshold). That is, the droplet does not simply bounce in a stationary position but instead wanders in a straight line or in a chaotic trajectory. When a droplet interacts with the surface, it creates a transient wave that propagates from the point of impact. These waves usually decay, and stabilizing forces keep the droplet from drifting. However, when the surface acceleration is high, the transient waves created upon impact do not decay as quickly, deforming the surface such that the stabilizing forces are not enough to keep the droplet stationary. Thus, the droplet begins to "walk." Quantum phenomena on a macroscopic scale A walking droplet on a vibrating fluid bath was found to behave analogously to several different quantum mechanical systems, namely particle diffraction, quantum tunneling, quantized orbits, the Zeeman effect, and the quantum corral. Single and double slit diffraction It has been known since the early 19th century that when light is shone through one or two small slits, a diffraction pattern appears on a screen far from the slits. Light has wave-like behavior and interferes with itself after passing through the slits, creating a pattern of alternating high and low intensity. Single electrons also exhibit wave-like behavior as a result of wave-particle duality: when electrons are fired through small slits, the probability of an electron striking the screen at a specific point shows an interference pattern as well. In 2006, Couder and Fort demonstrated that walking droplets passing through one or two slits exhibit similar interference behavior. They used a square vibrating fluid bath with a constant depth (aside from the walls). The "walls" were regions of much lower depth, where the droplets would be stopped or reflected away. When the droplets were placed at the same initial location, they would pass through the slits and be scattered, seemingly randomly. However, by plotting a histogram of the droplets' scattering angles, the researchers found that the scattering angle was not random: the droplets had preferred directions that followed the same pattern as light or electrons. In this way, the droplet may mimic the behavior of a quantum particle passing through a slit. Despite that research, in 2015 three teams (Bohr and Andersen's group in Denmark, Bush's team at MIT, and a team led by the quantum physicist Herman Batelaan at the University of Nebraska) set out to repeat Couder and Fort's bouncing-droplet double-slit experiment. With their experimental setups perfected, none of the teams saw the interference-like pattern reported by Couder and Fort. Droplets went through the slits in almost straight lines, and no stripes appeared. It has since been shown that droplet trajectories are sensitive to interactions with container boundaries, air currents, and other parameters. Though the diffraction pattern of walking droplets is not exactly the same as in quantum physics, and is not expected to show a Fraunhofer-like dependence of the number of peaks on the slit width, the diffraction pattern does appear clearly in the high-memory regime (at high forcing of the bath).
Quantum tunneling Quantum tunneling is the quantum mechanical phenomenon in which a quantum particle passes through a potential barrier. In classical mechanics, a particle cannot pass through a potential barrier if it does not have enough energy, so the tunneling effect is confined to the quantum realm. For example, a rolling ball will not reach the top of a steep hill without adequate energy. However, a quantum particle, acting as a wave, can undergo both reflection and transmission at a potential barrier. This can be shown as a solution to the time-dependent Schrödinger equation. There is a finite, but usually small, probability of finding the electron at a location past the barrier. This probability decreases exponentially with increasing barrier width. The macroscopic analogy using fluid droplets was first demonstrated in 2009. Researchers set up a square vibrating bath surrounded by walls on its perimeter. These "walls" were regions of lower depth, where a walking droplet may be reflected away. When the walking droplets were allowed to move around in the domain, they usually were reflected away from the barriers. However, surprisingly, sometimes the walking droplet would bounce past the barrier, similar to a quantum particle undergoing tunneling. In fact, the crossing probability was also found to decrease exponentially with increasing width of the barrier, exactly analogous to a quantum tunneling particle (a numerical illustration of this exponential dependence is sketched at the end of this article). Quantized orbits When two atomic particles interact and form a bound state, such as the hydrogen atom, the energy spectrum is discrete. That is, the energy levels of the bound state are not continuous and only exist in discrete quantities, forming "quantized orbits." In the case of a hydrogen atom, the quantized orbits are characterized by atomic orbitals, whose shapes are functions of discrete quantum numbers. On the macroscopic level, two walking fluid droplets can interact on a vibrating surface. It was found that the droplets would orbit each other in a stable configuration with a fixed distance apart. The stable distances came in discrete values. The stable orbiting droplets analogously represent a bound state in the quantum mechanical system, and the discrete values of the distance between droplets are analogous to discrete energy levels. Zeeman effect When an external magnetic field is applied to a hydrogen atom, for example, the energy levels are shifted to values slightly above or below the original level. The direction of shift depends on the sign of the z-component of the total angular momentum. This phenomenon is known as the Zeeman effect. In the context of walking droplets, an analogous Zeeman effect can be demonstrated by observing orbiting droplets in a vibrating fluid bath that is also brought to rotate at a constant angular velocity. In the rotating bath, the equilibrium distance between droplets shifts slightly farther or closer. The direction of shift depends on whether the orbiting drops rotate in the same direction as the bath or in the opposite direction. The analogy to the quantum effect is clear: the bath rotation is analogous to an externally applied magnetic field, and the distance between droplets is analogous to energy levels. The distance shifts under an applied bath rotation, just as the energy levels shift under an applied magnetic field. Quantum corral Researchers have found that a walking droplet placed in a circular bath does not wander randomly, but rather there are specific locations where the droplet is more likely to be found.
Specifically, the probability of finding the walking droplet as a function of the distance from the center is non-uniform and there are several peaks of higher probability. This probability distribution mimics that of an electron confined to a quantum corral. See also Pilot-wave models De Broglie–Bohm theory Superfluid vacuum theory Quantum hydrodynamics References External links Research on hydrodynamic quantum analogues Prof. John Bush (MIT) Wired "Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?" 2014 Quantum models
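As a numerical illustration of the exponential barrier-width dependence noted in the tunneling section above, the short Python sketch below evaluates the textbook WKB-style estimate P ≈ exp(−2κw) for a rectangular barrier. The decay constant κ and the barrier widths are hypothetical values chosen for illustration, not parameters measured in the droplet experiments.

```python
import numpy as np

# Exponential decay of crossing probability with barrier width,
# P ~ exp(-2 * kappa * w). kappa and the widths are made up.
kappa = 1.2                                # decay constant, 1/mm
widths = np.array([1.0, 2.0, 3.0, 4.0])    # barrier widths, mm

crossing_prob = np.exp(-2.0 * kappa * widths)
for w, p in zip(widths, crossing_prob):
    print(f"width {w:.1f} mm -> crossing probability {p:.2e}")
# Each extra millimetre of barrier multiplies the probability by the
# same factor exp(-2 * kappa), i.e. the decay is exponential in width.
```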
Hydrodynamic quantum analogs
[ "Physics" ]
2,174
[ "Quantum models", "Quantum mechanics" ]
41,359,517
https://en.wikipedia.org/wiki/Recoil%20%28rheology%29
Recoil is a rheological phenomenon, observed only in non-Newtonian fluids, characterized by a moving fluid's ability to snap back towards a previous position when external forces are removed. Recoil is a result of the fluid's elasticity and memory: the speed and acceleration with which the fluid moves depend on its molecular structure, and the location to which it returns depends on the conformational entropy. This effect is observed to a small degree in numerous non-Newtonian liquids, but is prominent in some materials such as molten polymers. Memory The degree to which a fluid will "remember" where it came from depends on the entropy. Viscoelastic properties cause fluids to snap back to entropically favorable conformations. Recoil is observed when a favorable conformation lies in the fluid's recent past. However, the fluid cannot fully return to its original position due to energy losses stemming from less-than-perfect elasticity. Recoiling fluids display fading memory, meaning the longer a fluid is elongated, the less it will recover. Recoil is related to the characteristic time, an estimate of the order of magnitude of the system's response time. Fluids described as recoiling generally have characteristic times on the order of a few seconds. Although recoiling fluids usually recover relatively small distances, some molten polymers can recover up to 1/10 of the total elongation. This property of polymers must be accounted for in polymer processing. Demonstrations of Recoil When a spinning rod is placed in a polymer solution, elastic forces generated by the rotational motion cause fluid to climb up the rod (a phenomenon known as the Weissenberg effect). If the applied torque is brought to a sudden stop, the fluid recoils down the rod. When a viscoelastic fluid being poured from a beaker is quickly cut with a pair of scissors, the fluid recoils back into the beaker. When fluid at rest in a circular tube is subjected to a pressure drop, a parabolic flow distribution is observed that pulls the liquid down the tube. Immediately after the pressure is alleviated, the fluid recoils backward in the tube and forms a blunter flow profile. When Silly Putty is rapidly stretched and held at an elongated position for a short period of time, it springs back. However, if it is held at an elongated position for a longer period of time, there is very little recovery and no visible recoil. References Fluid dynamics Rheology Non-Newtonian fluids
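The partial, history-dependent recovery described above can be illustrated with the simplest viscoelastic idealization, a single Maxwell element (a spring in series with a dashpot). This is a hypothetical single-mode stand-in chosen for illustration, not a model taken from the article; the modulus, viscosity, and stress values below are made up.

```python
# Creep and recovery of a single Maxwell element (spring E in series
# with dashpot eta). All parameter values are hypothetical.
E = 1.0e3      # elastic modulus, Pa
eta = 5.0e3    # viscosity, Pa*s
sigma = 100.0  # applied stress, Pa (removed at time t_off)
tau = eta / E  # characteristic (relaxation) time: 5 s here

def final_strain(t_off):
    """Strain remaining long after the stress is removed at t_off.
    Only the elastic part sigma/E snaps back (the recoil); the viscous
    part sigma*t_off/eta is permanently lost to flow."""
    return sigma * t_off / eta

for t_off in (2.0, 10.0):
    peak = sigma * (1.0 / E + t_off / eta)     # strain at unloading
    recovered = 1.0 - final_strain(t_off) / peak
    # The longer the hold, the smaller the recovered fraction: a crude
    # analogue of the "fading memory" described above.
    print(f"held {t_off:4.1f} s: recovered fraction = {recovered:.2f}")
```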
Recoil (rheology)
[ "Chemistry", "Engineering" ]
493
[ "Piping", "Chemical engineering", "Rheology", "Fluid dynamics" ]
41,362,059
https://en.wikipedia.org/wiki/Kolmogorov%27s%20two-series%20theorem
In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers. Statement of the theorem Let X1, X2, ... be independent random variables with expected values E[Xn] = μn and variances Var(Xn) = σn², such that Σn μn converges in ℝ and Σn σn² converges in ℝ. Then Σn Xn converges in ℝ almost surely. Proof Assume WLOG that μn = 0 for all n. Set SN = X1 + ... + XN, and we will see that lim sup SN − lim inf SN = 0 with probability 1. For every M ∈ ℕ, lim sup SN − lim inf SN = lim sup (SN − SM) − lim inf (SN − SM) ≤ 2 sup_k |S(M+k) − SM|. Thus, for every M ∈ ℕ and δ > 0, P(lim sup SN − lim inf SN ≥ δ) ≤ P(2 sup_k |S(M+k) − SM| ≥ δ) = P(sup_k |S(M+k) − SM| ≥ δ/2) ≤ (4/δ²) Σ over i > M of σi², where the last inequality is due to Kolmogorov's inequality. By the assumption that Σ σn² converges, the last term tends to 0 as M → ∞, for every arbitrary δ > 0. Hence lim sup SN − lim inf SN = 0 almost surely, and the series converges. References Durrett, Rick. Probability: Theory and Examples. Duxbury Advanced Series, Third Edition, Thomson Brooks/Cole, 2005, Section 1.8, pp. 60–69. M. Loève, Probability Theory, Princeton Univ. Press (1963), Sect. 16.3. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2, Wiley (1971), Sect. IX.9. Probability theorems
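As a quick numerical illustration of the theorem, the hypothetical choice Xn = ±1/n with equal probability satisfies both hypotheses (μn = 0 and Σ σn² = Σ 1/n² < ∞), so the partial sums should settle to a limit along almost every sample path. The short Python sketch below just prints partial sums at increasing cutoffs.

```python
import numpy as np

# X_n = +/- 1/n with equal probability: E[X_n] = 0 and
# sum Var(X_n) = sum 1/n^2 < infinity, so by the two-series theorem
# the series sum X_n converges almost surely.
rng = np.random.default_rng(1)

N = 100_000
signs = rng.choice([-1.0, 1.0], size=N)
terms = signs / np.arange(1, N + 1)
partial_sums = np.cumsum(terms)

# The tail of the partial-sum sequence settles down instead of wandering:
print(partial_sums[[999, 9_999, 99_999]])  # nearly identical values
```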
Kolmogorov's two-series theorem
[ "Mathematics" ]
238
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
67,106,529
https://en.wikipedia.org/wiki/The%20Zuckerberg%20Institute%20for%20Water%20Research
The Zuckerberg Institute for Water Research (ZIWR) is one of three research institutes constituting the Jacob Blaustein Institutes for Desert Research, a faculty of Ben-Gurion University of the Negev (BGU). The ZIWR is located on BGU's Sede Boqer Campus in Midreshet Ben-Gurion in Israel's Negev Desert, and hosts researchers who focus on developing new technologies to provide drinking water and water for agricultural and industrial use, and on promoting the sustainable use of water resources. The ZIWR encompasses the Department of Environmental Hydrology and Microbiology and the Department of Desalination and Water Treatment. History The Zuckerberg Institute for Water Research was founded in 2002 and was named for Roy J. Zuckerberg, Senior Director of the Goldman Sachs Group and a philanthropist based in New York City. The ZIWR is one of three institutes currently constituting the Jacob Blaustein Institutes for Desert Research, which were originally established in 1974. In 2016, the estate of Dr. Howard and Lottie Marcus made a donation of $400 million to Ben-Gurion University, believed to be the largest gift ever to a university in Israel, with a portion of it going to the Zuckerberg Institute for Water Research for research into water resources and desalination technologies. Academic programs The Institute runs two departments: the Department of Environmental Hydrology and Microbiology and the Department of Desalination and Water Treatment. It also offers an MSc degree in Hydrology and Water Quality, in collaboration with the Albert Katz International School for Desert Studies, which is located at BGU's Sede Boqer Campus. Department of Environmental Hydrology and Microbiology The Department of Environmental Hydrology and Microbiology hosts researchers who specialize in hydrology, hydrogeology, chemistry, and microbiology. Some of their particular research areas include flow and transport processes, remediation of contaminated water, and biological treatment of wastewater. Department of Desalination and Water Treatment The Department of Desalination and Water Treatment employs researchers who focus on various aspects of desalination and water treatment processes, including the improvement and development of membranes for reverse osmosis, forward osmosis, and nanofiltration processes; processes to eliminate toxic materials from industrial effluents and polluted groundwater; and brine concentrate management. MSc in Hydrology and Water Quality This master's degree program, offered through the Albert Katz International School for Desert Studies, aims to introduce students to research in the water sciences with the goals of improving human life in drylands and developing policies for the sustainable use of water resources. The program offers the following tracks of study: 1. Water Resources, 2. Desalination and Water Treatment, and 3. Microbiology and Water Quality. Research Researchers from the ZIWR have been involved in studies related to the COVID-19 pandemic. The first, led by a team of ZIWR researchers and published in Nature Sustainability, found that coronaviruses can persist in wastewater for several days, possibly leading to the spread of these viruses to humans. In another study, ZIWR researchers, in cooperation with scientists from Rice University in Houston, Texas, developed a laser-induced graphene technology that can filter airborne COVID-19 particles.
References Ben-Gurion University of the Negev Research institutes in Israel Water desalination Hydrology organizations Water treatment Research institutes established in 2002
The Zuckerberg Institute for Water Research
[ "Chemistry", "Engineering", "Environmental_science" ]
710
[ "Hydrology", "Water desalination", "Water treatment", "Water pollution", "Environmental engineering", "Water technology", "Hydrology organizations" ]
67,110,231
https://en.wikipedia.org/wiki/Optical%20Materials%20Express
Optical Materials Express is a monthly peer-reviewed scientific journal published by Optica. It covers advances in and applications of optical materials, including but not limited to nonlinear optical materials, laser media, nanomaterials, metamaterials and biomaterials. Its editor-in-chief is Andrea Alù (City University of New York). The founding editor-in-chief was David J. Hagan. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.8. References External links Optics journals Materials science journals English-language journals Monthly journals Academic journals established in 2011 Optica (society) academic journals
Optical Materials Express
[ "Materials_science", "Engineering" ]
133
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
67,112,408
https://en.wikipedia.org/wiki/Empowerment%20%28artificial%20intelligence%29
Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment-maximising policy acts to maximise its future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, and is thus a form of intrinsic motivation. The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent sensor states and actions are modelled by random variables (S and A) indexed by time (t). The choice of action depends on the current state, and the future state depends on the choice of action; thus the perception-action loop, unrolled in time, forms a causal Bayesian network. Definition Empowerment (E) is defined as the channel capacity (C) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later, i.e. the mutual information between the action at time t and the resulting sensor state at a later time, maximised over the distribution of actions. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors. In a discrete-time model, empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment. The unit of empowerment depends on the logarithm base. Base 2 is commonly used, in which case the unit is bits. Contextual Empowerment In general, the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example to construct an empowerment-maximising policy. State-specific empowerment can be found using the more general formalism for 'contextual empowerment', in which a random variable describes the context (e.g. the state). Application Empowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to the agent. Empowerment has been applied in studies of collective behaviour and in continuous domains. As is the case with Bayesian methods in general, computation of empowerment becomes computationally expensive as the number of actions and the time horizon grow, but approaches to improve efficiency have led to usage in real-time control. Empowerment has been used for intrinsically motivated reinforcement learning agents playing video games, and in the control of underwater vehicles. References Artificial intelligence Cognitive science Robotics engineering
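As a concrete illustration of the definition above: in a deterministic world, the channel capacity of the actuation channel reduces to the logarithm of the number of distinct states reachable by n-step action sequences. The Python sketch below computes n-step empowerment on a hypothetical 4x4 grid world with saturating moves; the world, actions, and sizes are made-up illustration choices, and a stochastic world would instead need a full capacity computation (e.g. the Blahut-Arimoto algorithm).

```python
import itertools
import math

# Minimal sketch of n-step empowerment in a small deterministic grid
# world. Assumptions (not from the article): a 4x4 grid and actions
# N/S/E/W that saturate at the walls.

ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
SIZE = 4

def step(state, action):
    """Apply one action; movement saturates at the grid boundary."""
    dx, dy = ACTIONS[action]
    x, y = state
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def empowerment(state, n):
    """n-step empowerment (bits) for a deterministic world: log2 of the
    number of distinct end states over all n-step action sequences."""
    reachable = set()
    for seq in itertools.product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# A corner state has fewer future options than an interior one,
# so its empowerment is lower:
print(empowerment((0, 0), 2))  # corner
print(empowerment((1, 1), 2))  # interior
```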
Empowerment (artificial intelligence)
[ "Technology", "Engineering" ]
566
[ "Computer engineering", "Robotics engineering" ]
49,067,304
https://en.wikipedia.org/wiki/Ambush%20hypothesis
The ambush hypothesis is a hypothesis in the field of molecular genetics that suggests that the prevalence of "hidden" (off-frame) stop codons in DNA selectively deters off-frame translation of mRNA, saving energy and molecular resources and reducing strain on the biosynthetic machinery by truncating the production of non-functional, potentially cytotoxic protein products. Typical coding sequences of DNA lack in-frame internal stop codons, to avoid the premature truncation of protein products when translation proceeds normally. The ambush hypothesis suggests that kinetic, cis-acting mechanisms are responsible for the productive frameshifting of translational units, so that the degeneracy of the genetic code can be used to prevent deleterious translation. Ribosomal slippage is the best-described mechanism of translational frameshifting, in which the ribosome moves one codon position either forward (+1) or backward (−1) to translate the mRNA sequence in a different reading frame and thus produce different protein products. With respect to codon usage, the ambush hypothesis theorizes that there is a positive correlation between the use of a codon and the amount that codon contributes to hidden stops. Phylogenetic analyses of both the nuclear and mitochondrial genomes of all major taxonomic kingdoms suggest ubiquitous off-frame stop codon existence and a positive correlation between the usage frequency of a codon and the number of ways a codon can contribute to hidden stop codons in different translational reading frames. Combinatorics have been used across genetic codes to determine how each in-frame codon can potentially contribute to stop codons in off-frame contexts. The standard genetic code contains only 20 codons that cannot become stop codons in a frameshifted ribosomal environment (−1 frameshift: 42, +1 frameshift: 28), and 127 out of the 400 (31.75%) possible adjacent amino acid combinations in the vertebrate mitochondrial code create an off-frame stop codon. This suggests that substitutions and synonymous codon usage are not neutral and that selective pressures might have readjusted codon assignments to increase the frequencies of those that can be used as hidden stops. Observations that the number of off-frame stop codons is positively correlated with the expression level of a gene support the ambush hypothesis, since increased translational regulation (hidden stop frequency) discourages off-frame reading in genes that are expressed at a high level. The positive correlation indicates that the off-frame translation of larger genes with higher expression levels likely costs a cell more energy, resources, and pathway efficiency than translating smaller, rarer transcripts in a shifted reading frame. Off-frame stop codon frequency is negatively correlated with gestation time in primates, and though there are many factors linking molecular translational efficiency to the rate of morphogenesis, these findings suggest that not only individual cells but entire organisms may benefit from the development of hidden stop codons to effectively halt off-frame synthesis. The ambush hypothesis is challenged by recent observations that off-frame stop codons are directly correlated with the GC content of the genome, because stop codons are GC-poor. Morgens et al. (2013) argue that previous research concerning the ambush hypothesis relied on codon usage data, which is representative of the GC content of an organism and thus not appropriate for evaluating the selective effect of off-frame stop codons.
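The central quantity of the hypothesis, the number of hidden stops, is straightforward to compute directly. The Python sketch below counts stop codons in the +1 and +2 (equivalently −1) shifted frames of a coding sequence read in frame 0; the example sequence is made up for illustration.

```python
# Minimal sketch of counting "hidden" (off-frame) stop codons, the
# quantity at the core of the ambush hypothesis.

STOPS = {"TAA", "TAG", "TGA"}

def hidden_stops(seq):
    """Count stop codons in the +1 and +2 shifted reading frames of a
    coding sequence given in frame 0 (+2 is equivalent to -1)."""
    counts = {}
    for shift in (1, 2):
        codons = [seq[i:i + 3] for i in range(shift, len(seq) - 2, 3)]
        counts[shift] = sum(c in STOPS for c in codons)
    return counts

# Hypothetical CDS with no in-frame stops (ATG TTA AGG CTG ACC TAC),
# but with TAA and TGA hidden in the +1 frame.
cds = "ATGTTAAGGCTGACCTAC"
print(hidden_stops(cds))  # {1: 2, 2: 0}
```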
References Molecular genetics Gene expression
Ambush hypothesis
[ "Chemistry", "Biology" ]
692
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
49,068,749
https://en.wikipedia.org/wiki/Margin%20at%20risk
The Margin-at-Risk (MaR) is a quantity used to manage short-term liquidity risks due to variation of margin requirements, i.e. a financial risk occurring when trading commodities. It is similar to Value-at-Risk (VaR), but instead of simulating EBIT it returns a quantile of the (expected) cash flow distribution. To do so, MaR requires (1) a currency, (2) a confidence level (e.g. 90%) and (3) a holding period (e.g. 3 days). The idea is that a given portfolio loss will be compensated by a margin call of the same amount. The MaR quantifies the "worst case" margin call and is driven only by market prices. See also Liquidity at risk Value at risk Profit at risk Earnings at risk Cash flow at risk References Mathematical finance Financial risk modeling Monte Carlo methods in finance Credit risk
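The sketch below is a minimal Monte Carlo reading of this definition: simulate the commodity price over the holding period, map losses to margin calls, and take the confidence-level quantile of the resulting cash flow distribution. The position, volatility, and price model (geometric Brownian motion) are hypothetical illustration choices, not part of the definition.

```python
import numpy as np

# Minimal sketch of a Margin-at-Risk calculation. All market and
# position parameters below are hypothetical.
rng = np.random.default_rng(42)

position_mwh = 100_000   # short forward position, MWh
price_today = 50.0       # EUR/MWh -> (1) the currency is EUR
daily_vol = 0.02         # daily price volatility
confidence = 0.90        # (2) confidence level
holding_days = 3         # (3) holding period

# Simulate prices at the end of the holding period (geometric
# Brownian motion with zero drift, as a simple stand-in model).
n_sims = 100_000
shocks = rng.normal(0.0, daily_vol * np.sqrt(holding_days), n_sims)
price_future = price_today * np.exp(shocks)

# A price rise is a mark-to-market loss on the short position, which
# is compensated by a margin call of the same amount (floored at zero).
margin_call = np.maximum((price_future - price_today) * position_mwh, 0.0)

# MaR: the margin call not exceeded with 90% probability.
mar = np.quantile(margin_call, confidence)
print(f"3-day 90% Margin-at-Risk: EUR {mar:,.0f}")
```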
Margin at risk
[ "Mathematics" ]
196
[ "Applied mathematics", "Mathematical finance" ]
49,069,031
https://en.wikipedia.org/wiki/Canadian%20Medical%20and%20Biological%20Engineering%20Society
The Canadian Medical and Biological Engineering Society (CMBES) is a technical society representing the biomedical engineering community in Canada. CMBES is supported by its membership, which consists of biomedical engineers, biomedical engineering technologists and students. CMBES also hosts an annual conference and regular webinars, and produces a number of publications including the Clinical Engineering Standards of Practice and a newsletter. The Society's aims are twofold: scientific and educational, directed toward the advancement of the theory and practice of medical device technology; and professional, directed toward the advancement of all individuals in Canada who are engaged in interdisciplinary work involving engineering, the life sciences and medicine. Conference The first Canadian Medical and Biological Engineering Conference was held September 8 and 9, 1966 in Ottawa, Ontario. CMBES continues to host an annual conference including sponsor seminars, peer-reviewed technical paper presentations, conference proceedings, affiliation with the Journal of Medical and Biological Engineering, a vendor exhibit of current medical devices, a continuing education program, workshops, symposia, and networking opportunities. The location of the conference rotates between Canadian cities every year. Occasionally the conference is held in conjunction with like organisations; partnerships in the past have included the IEEE Engineering in Medicine and Biology Society (EMBS), the Festival of International Conferences on Caregiving, Disability, Aging and Technology (FICCDAT) and the International Union for Physical and Engineering Sciences in Medicine (IUPESM). In 2015, the CMBES and IUPESM hosted the World Congress on Medical Physics & Biomedical Engineering in Toronto, Ontario, Canada. Annual conference proceedings are published online under International Standard Serial Number (ISSN) 2371-9516. International Outreach The objective of the International Outreach Committee is to pursue and develop opportunities for CMBES members to participate in volunteer programs supporting clinical engineering initiatives and health technology management in developing countries. Clinical Engineering Standards of Practice The CMBES Clinical Engineering Standards of Practice for Canada (CESOP) was first published in 1998 and revised in 2007 and 2014. These guidelines outline criteria for health care institutions on the management of medical devices, promote the professional development of its members, and outline the education and certification requirements for clinical engineers and biomedical engineering technologists and technicians. Affiliations CMBES is a member society of the Engineering Institute of Canada and the International Federation for Medical and Biological Engineering. It also has affiliation agreements with the Association des physiciens et ingénieurs biomédicaux du Québec and the Atlantic Canada Clinical Engineering Society. Awards Awards are bestowed on recipients annually, upon recommendation of the awards committee to the CMBES Executive. Award categories include: Outstanding Canadian Biomedical Engineer Outstanding Canadian BMET Early Career Achievement Special Membership Recognition/Honours Special Membership status is awarded only to distinguished individuals who have made significant contributions to the profession of biomedical engineering and to the Society in particular.
Fellow Emeritus Honorary The CMBES has bestowed the fellow award on notable Canadian individuals including James McEwen, Monique Frize, Morris Milner, and John Alexander Hopps, amongst others. Notes and references Engineering societies based in Canada Biomedical engineering
Canadian Medical and Biological Engineering Society
[ "Engineering", "Biology" ]
617
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
49,072,410
https://en.wikipedia.org/wiki/Luftwaffe%20and%20Kriegsmarine%20radar%20equipment%20of%20World%20War%20II
German Luftwaffe and Kriegsmarine radar equipment during World War II relied on an increasingly diverse array of communications, IFF and RDF equipment for its function. Most of this equipment received the generic prefix FuG (Funkgerät), meaning "radio equipment". During the war, Germany renumbered their radars, moving from a scheme based on the year of introduction to a different numbering scheme. Searchlight and fighter control No German ground radar was accurate enough for flak fire direction. The method of operation during the day was for radar to direct the flak battery's optical fire control towards the target; once this was acquired, the flak was controlled by the optical equipment to complete the engagement. During the night, the radar would be used to indicate the target to the searchlight crews, and the rest of the engagement would be carried out optically. During the day, fighters would be directed with sufficient precision for them to come into visual contact with their targets, while during the night they would use their onboard aircraft interception (AI) radar to find the target after initial direction from the ground-based radars. Early Units Würzburg The Würzburg, first operational in the summer of 1940, had a parabolic antenna with a diameter of about 3 metres which in some models could be folded in half for transport. The Würzburg was produced in the thousands, with estimates ranging between 3,000 and 4,000 sets, plus up to 1,500 sets of the Würzburg Riese. The antenna of the Würzburg Riese weighed over 9.5 tons, and its parabolic surface had a diameter of 7.5 metres and a focal length of 1.7 metres. Only one German company had the technical skill to build these radars, and that was Zeppelin. The name of the unit was chosen at random by pointing at a map of Germany, and Würzburg was chosen. FuMG 62 / FuMG 39 Würzburg: 3D fire-control radar, used to direct the flak optical directors and searchlights. Wavelength approx 50 cm. In response to jamming, various models of the Würzburg radar were developed to operate on different frequencies, called "Islands". Würzburg A First production version, introduced in 1940. 50 cm operating wavelength. Operating range was approximately 30 km. Included an IFF system that worked with the FuG 25z airborne unit. Würzburg B Integrated an IR telescope to increase accuracy. Proved unsatisfactory and was not placed into production. Würzburg C Replaced the model A in production in 1941. Had lobe switching to improve accuracy. On this unit the integral IFF system was replaced by a system based on the FuG 25a airborne unit. To support this system, which worked at approx 125–160 MHz, two antennae were placed inside the main dish. Separate interrogation and receiving units were attached to show the IFF responses. Würzburg D Replaced the model C in production in 1942. It had a usable range of approximately 40 km. Conical scanning was used for fine accuracy. The IFF antenna was now fitted in the centre of the dish rather than on the sides. Better instruments were fitted, and it was generally the best of the small Würzburgs. FuMG 65 Würzburg Riese (Giant): The electronics of the D model Würzburg combined with a 7-metre dish to improve resolution and range. Range approx 70 km. Version E was a modified unit mounted on railroad flatcars to produce a mobile flak radar system. Version G had the 2.4-metre antenna and electronics from a Freya installed. The antenna dipoles were inside the reflector.
The reason for this was that the Allies were flying very high reconnaissance flights above the maximum height coverage of the Freya, while the standard Würzburg Riese's 50 cm beam was too narrow to find them directly. By combining the two systems, the Freya could set the Würzburg Riese onto the target. Mannheim FuMG 63 Mainz The Mainz, introduced in 1941, was a development of the Würzburg with its 3-metre solid metal reflector mounted on top of the same type of control car as used by the 'Kurmark'. Its range was 25–35 km with an accuracy of ±10–20 metres, azimuth 0.1 degrees, and elevation ±0.3–0.5 degrees. Only 51 units were produced before it was superseded by the 'Mannheim'. FuMG 64 Mannheim The Mannheim was an advanced development of the 'Mainz'. It also had a 3-metre reflector, now made from a lattice framework covered in a fine mesh. This was fixed to the front of a control cabin and the whole apparatus was rotated electrically. Its range was 25–35 km, with an accuracy of ±10–15 metres and azimuth and elevation accuracy of ±0.15 degrees. Though accurate enough to control flak guns, it was not deployed in large numbers because of its cost: the time and materials to manufacture one were about three times those of a Würzburg D. FuMG 75 Mannheim Riese Just as the Würzburg's performance was greatly improved when fitted with a 7-metre reflector, so was the Mannheim's, and the result was called the Mannheim Riese (Giant Mannheim). There was an optical device for the initial visual acquisition of the target. With its narrow beam it was relatively immune to 'Window'. Its accuracy and automatic tracking enabled it to be used in anti-aircraft missile research to track and control missiles in flight. Only a handful were manufactured. FuMG 68 Ansbach There was a need for a mobile radar with the range and accuracy of the 'Mannheim'. The result, in 1944, was the Ansbach. It had a collapsible reflector 4.5 metres in diameter, operating on a wavelength of 53.6 cm with a peak power of 8 kW, giving it a normal range of 25–35 km (70 km in search mode) with an accuracy of 30–40 metres. Azimuth and elevation accuracy was around ±0.2°. The antenna and reflector were remote-controlled from a Bayern control van up to 30 metres away. The control system was based on the remote control system of the Michael microwave communication system, which in turn was based on the Ward Leonard AC/DC control system. The Ansbach was to be installed in large flak batteries with six or more guns, but only a few were produced by the end of the war, and these did not see operational service. Medium-range search Freya & similar units FuMG 450 Freya / FuMG 41G: This was a 2D early-warning radar (2D meaning unable to indicate height). It was used for fighter direction and target indication for the Würzburg. Operating wavelength approx 2.4 metres (125 MHz). In response to jamming, various models were developed to operate on different frequencies, called "Islands". Over 1,000 units were delivered in various models. FuMG 401 / FMG 42 Freya-LZ (Models A–D): An air-portable version; the model differences were due to operating frequency ranges in four discrete bands between 91 and 200 MHz. Freya-Rotschwarz and Freya-Grünschwarz: These two systems were Freyas modified to operate on the same frequency as the British radio navigation system GEE to avoid jamming. However, since the Germans were themselves jamming GEE by the time these were ready, it is not clear whether any were ever deployed.
FuMG 451 A Freya Flamme: Freyas which had been built to use the "Island D" band were modified to be able to trigger British IFF equipment. Ranges of up to 450 km were obtained. It fell from use as British IFF procedures improved. FuMG 401 Freya Fahrstuhl: A 3D version of the Freya (3D meaning it could measure height). Height measurements were made by moving the antenna up and down on a rack, giving only a very rough estimate of height. Originally intended for early warning, most of the systems produced went to help "jammed" Würzburgs. Freya EGON: EGON stood for Erstling-Gemse Offensive Navigation system, where Erstling was the codename of the FuG 25a transceiver in the aircraft and Gemse was the codename for the ground receiver. The system operated on a principle similar to the British Oboe navigation system. An IFF signal was sent to the aircraft from a Freya that had had its receiver antenna removed. The FuG 25a in the aircraft responded, and the received signal was displayed as a range offset on the Freya display. Using a second transmitter and triangulation, the position of the aircraft was resolved (a two-range position fix of this kind is sketched at the end of this article). Though the system was tested for guiding night fighters, it was found to be too limited by the number of aircraft that it could control at one time (the same limitation was found with Oboe); the "Y system" was used instead for night fighter control. The EGON system was used to control pathfinders for bombing raids over both England and Russia; however, by this time the Luftwaffe bomber force was running out of planes, pilots and fuel, so the results were minimal. Work was done using a third transmitter to improve system performance. Range with a normal Freya was up to 250 km, and work was underway to use a Wassermann system instead of a Freya to increase the range to 350 km (the Freya signal was too weak to trigger the FuG 25a at ranges beyond 250 km), but this was not completed. Long-range search For area air defence (as opposed to point defence), Freya's range was found to be insufficient. This led to attempts to use Freya technology to achieve greater range, resulting in the Wassermann and Mammut. Although the Mammut units achieved their aims, they were large installations with large arrays built on bunkers, which meant long building times and vulnerability to air attack. The Wassermann was a better solution in that, being smaller, the installations were harder to locate and quicker to build (3–4 weeks). However, sources indicate that they never achieved the desired range of 400 km; the best was approx 300 km. This may be why so many variants were deployed. FuMG 401 Mammut: First deployed in 1942, this was a long-range 2D search radar. It consisted of 8 Freya-class antennae arranged in a 4 x 2 configuration. It measured 25 metres wide and ten metres high and was mounted on four pylons fixed in concrete. Some installations had a second array mounted back-to-back. Each array could be electronically swung through about 100 degrees, so the dual-sided array could look behind itself to continue tracking bombers as they flew into Germany. Frequency was the same as the Freya (125 MHz). Range was up to 300 km with a transmit power of 200 kW. The installations, being very large, took up to four months to build. FuMG 402 Wassermann: This system was deployed in 1942. It was basically six Freya antennae mounted on a rotating cylinder. Frequencies were similar to the Freya (125 MHz) and transmit power was 100 kW, resulting in a usable range of approx 200 km. Three main versions were produced with sub-variants in each class. Wassermann L: The original light version.
Some sources indicate that it had structural problems. Wassermann S: The heavy version, first deployed late in 1942. Some sources indicate it had more than six arrays. Wassermann M: The last family were the medium-class units. Again, it is not clear exactly how many Freya arrays were attached to the mast. In 1944, this version received a modification that allowed it to electronically tilt its beams by 16 degrees, which allowed it to perform height determination, turning it into a 3D search radar. Elefant & See Elefant: These bistatic radars were an attempt to combine jamming resistance with long range. They operated in two bands, 23–28 MHz or 32–38 MHz. Range was approximately 400 km, but under certain RF conditions much greater ranges were obtained. Antennae were usually mounted on Wassermann towers (all units differed in detail from each other). Three Elefants were in operation at the end of the war, along with one See Elefant. Sources are unclear on what the difference between the two types was. Panoramic search The first type of early-warning radar set giving a panoramic display to come into operation is usually referred to as the Jagdschloss, although its official designation is Jagdschloss F, to distinguish it from later types such as the Michael B and Z. Jagdschloss F: The antenna was 24 m wide and 3 m high, consisting of sixteen pairs of double horizontal transmit and receive dipoles. Above this, an 8.5-metre-wide antenna array of eight vertical dipoles was mounted for the IFF. The first 62 Jagdschloss were of the Voll Wismar type, using wide-band antennae covering the band 1.90–2.20 metres. Another 18 used the band 1.20–1.90 metres. Range was 100 km. An optional feature known as Landbriefträger (Postman) was a remote PPI display for use with the Jagdschloss. This allowed the PPI display from the radar station to be sent simultaneously to command HQ by HF cable, or by a UHF radio link. Jagdschloss Michael B: A ponderous aerial array of two rows of eighteen Würzburg mirrors, measuring 56 metres long by 7 metres high, was used in the Würzmann experimental early-warning radar, and formed the aerial array for the Jagdschloss Michael B with the array in a horizontal position. The wavelength employed was that of a Voll Wismar, 53.0–63.8 cm. Range approx 250 km. Probably none entered service, though one source mentions one doing so. Forsthaus F: This system was a development of the Jagdschloss Michael B using the so-called Euklid 25–29 cm waveband employed by the Navy. Once more a very long aerial array, 48 metres long and about 8 metres high, was used, employing a cylindrical paraboloid. A waveguide antenna (Hohlraumstrahler) was placed along the focal line, with a second and a third waveguide parallel to it above and below respectively. Range was expected to be over 200 km. Probably none were completed. Forsthaus KF: A development of the Forsthaus F, reduced in size so that the system would fit in a railway carriage. Antenna 24 metres long. Range 120 km. Dreh Freya: This set, also known as Freya Panorama, was first introduced in June 1944. It consisted of a Freya aerial of the Breitband type working in Bereich I (1.90–2.50 metres), the frequency of which could be adjusted at will. The aerial rotated through 360° and gave a remote panoramic presentation. About 20 units were in use in January 1945. The range claimed for it was only about 100 km.
Jagdhütte: This apparatus, produced by Siemens, gave a panoramic PPI display of the German IFF responses, using 24- or 36-metre rotating aerials. The wavelength employed was 2.40 metres, and it was planned, with its aid, to trigger the FuG 25A. In this way, friendly fighters were to be controlled from the ground at ranges up to about 300 km. It was fully realised that if the FuG 25A frequency was ever jammed the Jagdhütte would be useless, but it was not considered likely that the Allies would attempt to jam it. Small numbers may have been completed at the end of the war. Jagdwagen: The Jagdwagen was designed as a mobile panoramic radar to control fighters at close ranges immediately behind the front. It was a project of the firm of Lorenz. The aerials were considerably smaller than the Jagdhütte's, the array being only 8 metres long. The aerial array was to be mounted on the Kumbach stand as used in the Egerland flak set. The frequency band used was that of the ASV set Hohentwiel, namely 53–59 cm. Range 40–60 km. Prototypes only. Jagdhaus (FuMG 404): The Jagdhaus was designed and built by Lorenz in 1944 as an early-warning radar. It was the most powerful radar built by the Germans, with a peak pulse power of 300 kW, which Lorenz planned to increase to 750 kW. The whole assembly was the size of a house, which is possibly how it got its name ('Haus' being the German for 'house'). The rotating upper part of the construction housed the separate parabolic transmit and receive antennae and reflectors, with the IFF above them as usual. It weighed 48 tons and rotated at 10 rpm. It operated on wavelengths of 1.4 to 1.8 metres, and had a range of about 300 km. It could measure altitude, azimuth and range. The control room was located below the antennae, from which its PPI image was also transmitted to command HQ at Charlottenburg by Landbriefträger, similar to the Jagdschloss system. It is believed that only one Jagdhaus was constructed, which fell into Soviet hands when captured by their troops in 1945, during which time it was damaged. The Soviets compelled the Germans to repair it and instruct them in its operation. Aircraft intercept Lichtenstein B/C - FuG 202: Low-UHF band frequency range; introduced in 1941, it was the initial AI radar. Deployed in large numbers with 32-dipole-element Matratze (mattress) antenna arrays, it operated on the 61 cm wavelength. Its range was in theory 2–3 km, but in practice was found to depend on factors such as height. Compromised to the Allies on May 9, 1943. Lichtenstein C-1 - FuG 212: Introduced in 1943, this was an improved version of the FuG 202. Lichtenstein SN2 - FuG 220: Low-mid VHF band frequency range, introduced in 1943 in response to Allied jamming, and used an eight-dipole Hirschgeweih (stag's antlers) antenna array. Transmitter power of 2 kW on 3.3 metres. Range was increased to 6 km. The minimum range of 400 m was found to be a problem, hence aircraft carried both it and the FuG 202. Later versions did away with the need for the FuG 202. Compromised to the Allies in July 1944. Lichtenstein SN3 - FuG 228: A higher-powered version of the SN2. Range increased to 8 km. Only a small number were accepted into service, perhaps only prototypes. FuG 214: This was an add-on unit to the SN2 which gave it an additional, rear-facing antenna installation. This was in response to Allied night fighters accompanying the bomber streams to hunt the German night fighters while they hunted the bombers.
The idea was to prevent Allied fighters attacking the German fighters from behind. Neptun 1 - FuG 216: A small number of experimental sets fitted to Fw 190 and Bf 109. Wavelength 1.3 to 1.8 metres. Neptun 2 - FuG 217: A small number of sets fitted to Fw 190 and Bf 109. 1.6 to 1.8 metres wavelength. Some had a rear warning component. Neptun 3 - FuG 218: A replacement for SN2, deployed late 1944 after SN2 was jammed. Wavelength 1.6 to 1.9 metres, most often using the same eight-dipole "stag's antlers" antenna array with shorter dipole elements. Range up to 5 km. Some were fitted to Me 262 to create night fighters that could catch Mosquito intruders. Neptun 4 - FuG 219: Increased-power version of the FuG 218, experimental sets only. Berlin A - FuG 224: The first centimetric (3 GHz) band radar. Based on a captured H2S radar unit, codenamed "Rotterdam". Unknown number built, but under 100. Range 5 km under ideal conditions, 10 cm wavelength. Berlin N1 - FuG 240 N: Combination of the Berlin A and the SN2. Only small numbers delivered. Berlin N2: Increased-power Berlin N; range reported to be 9 km. Berlin N3/N4: Experimental units. Bremen - FuG 244: (also known as Berlin D) Berlin A with the frequency changed to 3 cm (10 GHz) rather than 9 cm. Experimental. Bremen O - FuG 245: Another experimental 3 cm unit. Air-to-surface search Neptun: Early system - it failed its acceptance tests - the system was later reworked into an aircraft intercept set. Hohentwiel (FuG 200): UHF-band radar, operated at wavelengths between 52 and 57 cm. Range was between 10 km for a small vessel like a surfaced submarine and 70 km for a large ship. Under the best circumstances it could see the coast at approx 150 km. It had separate antennae for transmit and receive. The transmit antenna was centrally mounted, pointing forward, while the two receive antennae were mounted either side, pointing outwards by 30 degrees, giving it a search beam width of about 120 degrees. Each antenna array consisted of sixteen horizontally polarised dipoles, mounted in four groups of four in a vertical stack. A variant of the Hohentwiel, the Tiefentwiel (FuMG 407), was tried as an air surveillance radar on the coast to detect low-flying aircraft. Naval surface search - land based - Seetakt FuMO 1 - Calais A: Its 6.2 × 2.5 m antenna consisted of two rows of eight full-wave vertical dipoles. Its wavelength was 82 cm and its range depended on the height it was installed above sea level, but typically was about 15–20 km. Given the frequency, low-angle reflections from the surface, also known as clutter, would have been an issue. FuMO 2 - Calais B: Improved version of the FuMO 1 - similar clutter problems but improved transmitter and accuracy. FuMO 3 - Zerstörersäule: A version of the destroyer radar modified for land use. FuMO 4 - Dunkirchen: Improved version of the FuMO 2 - otherwise similar. FuMO 5 - Boulogne: Yet another improved version of the FuMO 2 - increased transmitter power again with an improved aerial - usable range now 40–50 km. FuMO 11 - Renner: 3 m antenna from a Würzburg combined with a 9 cm "Berlin" unit and mounted on a Seetakt base, optimised for sea search rather than air search. Sources differ on usable range. FuMO 12 & 13: Improved Renner units to attempt to compensate for poor reliability with the original unit. FuMO 15 - Sheer: Combination of a Berlin 9 cm and an antenna from a Giant Würzburg - seems to have been optimised for surface search in the same way as the Renner series. 
FuMO 51 - Mammut G: Version of the Luftwaffe FuMG 401 but with Seetakt antenna and waveforms to optimise it for surface search rather than air search. FuMO 214 - Giant Würzburg: Naval designation for the air force unit. FuMO 215 - See Riese: Improved FuMO 214. Naval air search - land based - Flugmeldung FuMO 52: Naval designation for the FuMG 401 Mammut C. FuMO 64: A version of the Hohentwiel L ASV radar modified for coastal air search - different from the unsuccessful xxx FuMO 221: Naval designation for the FuMG 64 Mannheim. FuMO 301 - 303: Versions of the FuMG 39-41 Freya. FuMO 311 - 318: Versions of the Freya working on other frequencies (around 2.2 metres) from the normal Freya. Sometimes known as the Freiburg. FuMO 321 - 328: Based on the FuMO 311 family of units but working at 1.5 metres. FuMO 331: Naval designation for the FuMG 402 Wassermann M. FuMO 371: Naval designation for the FuMG 403 Jagdschloss. Naval flak direction - land based - Flakziel FuMO 201: Flakleit - a 3D radar using Seetakt 80 cm technology, mounted in an underground armoured turret (originally an optical rangefinder); small numbers produced. Multiple antennae. Manufactured by GEMA. FuMO 211 - 213: Naval designations for the FuMG 62 family of radars - the Würzburg A, C & D. FuMO 215: See Riese. FuMO 221: Mannheim. Naval coastal battery fire control - Seeart FuMO 111: Barbara, a 9 cm fire control radar based on modifying a FuMO 15 Giant Würzburg to operate at 9 cm. Only experimental radars produced. FuMO 214: A Würzburg Riese reconfigured for use as a naval radar with a range of approximately 50–70 km against surface targets. FuMO 215: Improved-range version of the FuMO 214. Centimetre radars Although the Germans were carrying out research at centimetre wavelengths at the start of the war, the work was abandoned as it was decided that the war would be over before the research and development could be completed. In February 1943 an RAF Stirling bomber was shot down over Rotterdam and a damaged H2S system was recovered. The Germans started a crash development program to use the information deduced from the captured system. Although a range of prototypes were produced, very few reached front line troops. Due to the device being recovered near Rotterdam, the Germans used that name in several code names for the centimetre (9 cm) systems, such as "Rotterdam Device". Rotterdam: To get the quickest start with development, German industry copied, as far as possible, the H2S system. Approximately 20 systems were manufactured for R&D work. They led to the Roderich jammer and the Berlin & Korfu receivers. Jagdschloss Z: The 9 cm version of the Jagdschloss F panoramic radar system. Prototypes only. Forsthaus Z: The 9 cm version of the Forsthaus panoramic search radar. Prototypes only. FuMG 77: Rotterheim. A combination of the 9 cm receiver/transmitter of the Berlin system with the antenna and other systems from a Mannheim. Its range was about 30 km and it was found to be unaffected by Allied jamming. Its name changed to Marbach V later in the war. FuMG 76: Marbach. A combination of the Berlin transmitter/receiver with the Ansback 4.5 metre reflector and systems. Controlled by the "Michael" remote control system. Sources suggest that three systems were completed. FuMG 74: Kulmbach. A 9 cm panoramic search radar with a 6 metre antenna, remote-controlled like the FuMG 76. When combined with that radar it was known as the Egerland system. Only two were completed, with a range of approx 50 km. 
Passive search FuG 221 Freya-Halbe: This was a Freya modified to locate British airborne jammers. Development was completed but, due to lack of parts, it was never deployed. FuG 221 Rosendahl: This was a Freya modified to locate British bombers by tracking their Monica warning radar emissions. By the time development was completed the British had ceased using Monica, so it was never deployed. FuG 223: A family of passive airborne receivers tuned to various radar bands such as Freya and Würzburg. Designed to allow night fighters to home onto bombers fitted with jammers against those radars. The FuG 223 was a version built from surplus FuG 227 components that detected reflected energy from an aircraft being illuminated by a ground radar. In this way it was an example of an early semi-active radar homing system. In order to work, it seems that the radar beam had to illuminate both the target and the night fighter so that the two receivers could be synchronized. Used by one test and development squadron at the end of the war. FuG 227 Flensburg: Built using some components from the FuG 220 range of AI equipment. This was a passive device which allowed night fighters to home onto bombers which had their rear warning 'Monica' active. Monica was a short range VHF radar (200 MHz band) which was fitted to the tail of British heavy bombers, facing down and back, to give the rear turret gunner a warning display. Using this equipment the night fighters could achieve intercept with apparent ease. It was extremely effective until the British captured a Junkers Ju 88G-1 night fighter with FuG 227 installed in July 1944, and realised its mode of operation. Thereafter Monica was removed from bombers and the FuG 227 ceased to have any value. Klein Heidelberg was the code-name given to a passive radar system devised in 1941. The system was a bi-static radar system. What was unusual was that the transmitters were British rather than German! The system worked by using the reflections from Chain Home (the British coastal radar system) rather than transmitters associated with the receivers. Klein Heidelberg worked by sensing Chain Home (CH) transmission pulses directly with a small auxiliary antenna, close to the main antenna, whose receiver was tuned to a particular CH station whose exact location, bearing and range were known. The CH signal was then used to synchronise the KH with the CH transmission pulses. The CH pulse started a circular trace on a cathode ray tube (CRT) divided into forty sections. The main antenna received the reflection of these pulses from the target and displayed them on the CRT. Range was between 300 and 600 km. (A worked sketch of this bistatic timing arithmetic appears at the end of this article.) The display was 2D. Resolution was not very good, but it allowed the Germans to see bomber formations forming up over England and the general path of the bomber streams. Its big advantage was that it was not possible for the British to jam it without jamming their own radars. The system entered service in late 1943 and by late 1944 six systems were commissioned on the Dutch coast. FuG 350 Naxos & FuG 351 Korfu: This was a family of radar detectors that operated in the 8 to 12 cm band. They were primarily designed to locate Allied H2S radar transmissions. A range of antennae were used, some stationary and some rotating. There were intended to be air, land and maritime versions. However, Naxos had a resolution problem that limited its ability to distinguish individual aircraft. This allowed the night fighter to locate the bomber stream but not usually individual bombers. 
This was not usually an issue with the maritime-based systems (primarily U-boats) as there was usually only one aircraft detected at a time. To reduce this issue an improved version, the Korfu, was developed. It was intended to field Korfu as a replacement for Naxos in all three versions, but due to a shortage of components only the land-based version was fielded, where its resolution could be used to best effect. FuG 350 Naxos Z: The original system; it detected the H2S radar sets on bombers. Unable to distinguish individual bombers or the 10 GHz H2X Allied bombing radar, but could reliably guide the fighter into the bomber stream. FuG 350 Naxos ZR: Additional aerials added a tail warning system which allowed British night-fighters to be detected. FuG 350 Naxos ZX: 3 cm version for detecting Allied H2X radars. Not known to have ever been fielded. FuG 350 Naxos RX: 3 cm version of the Naxos ZR. Not known to have ever been fielded. FuG 350 Naxos ZD: Combined Z and ZX, allowing 9 cm and 3 cm detection in the same system. FuG 351 Korfu Z: Entered production late 1944; due to the shortage of components only ground-based versions were deployed, though an airborne version completed development. Better range and discrimination than Naxos. FuG 280 Kiel Z: IR-based passive receiver. 10-degree field of view - display via CRT. Problems with discriminating between aircraft and other IR sources such as fires. Falter: Based on the FuG 280 K but detected British IR recognition systems. Development not completed. References Notes Bibliography Muller, Werner. Ground Radar Systems of the Luftwaffe. Schiffer Publishing, 1998. Pritchard, David. The Radar War: Germany's Pioneering Achievement 1904–45. HarperCollins, 1989. External links Luftwaffe radio equipment World War II German electronics Radio spectrum Military terminology Avionics Communication circuits Radio electronics Aircraft radars World War II German radars
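The Klein Heidelberg entry above describes a bistatic geometry: the target lies on an ellipse whose foci are the Chain Home transmitter and the German receiver, fixed by the delay between the direct CH pulse and its echo. A minimal sketch of that arithmetic follows; the station distance, delay and bearing are illustrative assumptions, not recorded wartime values.

    # Minimal sketch of the Klein Heidelberg bistatic timing geometry.
    # Numbers are illustrative assumptions, not recorded wartime values.
    import math

    C = 299_792_458.0  # speed of light, m/s

    def bistatic_range(baseline_m, delay_s, bearing_rad):
        """Receiver-to-target distance, given the baseline to the Chain Home
        station, the measured delay between the direct pulse and the echo,
        and the echo bearing relative to the baseline. The range sum
        R = baseline + c * delay defines an ellipse around the two stations."""
        R = baseline_m + C * delay_s
        return (R**2 - baseline_m**2) / (2.0 * (R - baseline_m * math.cos(bearing_rad)))

    # Example: CH station 200 km away, echo 1.5 ms after the direct pulse,
    # bearing 40 degrees off the baseline at the receiver.
    r = bistatic_range(200e3, 1.5e-3, math.radians(40.0))
    print(f"target roughly {r / 1e3:.0f} km from the receiver")  # ~385 km

The example result falls within the 300–600 km operating range quoted above, which is the only constraint the sketch is meant to respect.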
Luftwaffe and Kriegsmarine radar equipment of World War II
[ "Physics", "Technology", "Engineering" ]
6,747
[ "Radio electronics", "Telecommunications engineering", "Radio spectrum", "Spectrum (physical sciences)", "Avionics", "Electromagnetic spectrum", "Aircraft instruments", "Communication circuits" ]
54,260,744
https://en.wikipedia.org/wiki/Pentachlorobenzenethiol
Pentachlorobenzenethiol is a chemical compound from the group of thiols and organochlorine compounds. The chemical formula is C6Cl5SH. Synthesis Pentachlorobenzenethiol can be obtained from hexachlorobenzene. Properties Pentachlorobenzenethiol is a combustible gray solid with an unpleasant odor, practically insoluble in water. It has a monoclinic crystal structure. The compound is not readily biodegradable and is presumed to be bioaccumulative and toxic to aquatic organisms. Pentachlorobenzenethiol is itself a metabolite of hexachlorobenzene and is found in the urine and the excretions of animals receiving hexachlorobenzene. Pentachlorobenzenethiol has a high potential for long-range transport via air as it is very slowly degraded in the atmosphere. Applications Pentachlorobenzenethiol is used in the rubber industry. The compound is added to rubber (both natural and synthetic) to facilitate processing (mastication). See also Chlorobenzene Dichlorobenzene Trichlorobenzene Pentachlorobenzene Hexachlorobenzene References External links Preparation of pentachlorothiophenol US 2922820 A Benzene derivatives Thiols Chlorobenzene compounds Foul-smelling chemicals
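As a quick self-check of the formula given above, the molar mass of C6Cl5SH can be computed from standard atomic masses:

    # Self-check of the formula C6Cl5SH: molar mass from standard
    # atomic masses (g/mol).
    atomic_mass = {"C": 12.011, "H": 1.008, "S": 32.06, "Cl": 35.45}
    composition = {"C": 6, "Cl": 5, "S": 1, "H": 1}

    molar_mass = sum(atomic_mass[el] * n for el, n in composition.items())
    print(f"C6Cl5SH molar mass ~ {molar_mass:.1f} g/mol")  # ~282.4 g/mol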
Pentachlorobenzenethiol
[ "Chemistry" ]
298
[ "Organic compounds", "Thiols" ]
54,263,883
https://en.wikipedia.org/wiki/NGC%207029
NGC 7029 is an elliptical galaxy located about 120 million light-years away from Earth in the constellation Indus. NGC 7029 has an estimated diameter of 129,000 light-years. It was discovered by astronomer John Herschel on October 10, 1834. It forms a galaxy pair with NGC 7022. One supernova has been observed in NGC 7029: SN 2023qov (type Ia, mag. 17.5). Group Membership NGC 7029 is part of the Indus Triplet of galaxies, which it forms together with NGC 7041 and NGC 7049. See also List of NGC objects (7001–7840) NGC 7002 References External links Elliptical galaxies Indus (constellation) 7029 66318 Astronomical objects discovered in 1834
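The two figures quoted above (a 129,000 light-year diameter at a distance of about 120 million light-years) imply an apparent size on the sky via the small-angle approximation, theta ≈ D/d. A short consistency check using only the article's own numbers:

    # Consistency check: angular size implied by the quoted diameter and
    # distance, via the small-angle approximation theta ~ D / d (radians).
    import math

    diameter_ly = 129_000.0
    distance_ly = 120e6

    theta_rad = diameter_ly / distance_ly
    theta_arcmin = math.degrees(theta_rad) * 60.0
    print(f"implied angular size ~ {theta_arcmin:.1f} arcminutes")  # ~3.7'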
NGC 7029
[ "Astronomy" ]
154
[ "Indus (constellation)", "Constellations" ]
54,270,667
https://en.wikipedia.org/wiki/Excited%20state%20intramolecular%20proton%20transfer
Excited state intramolecular proton transfer (ESIPT) is a process in which photoexcited molecules relax their energy through tautomerization by transfer of protons. Some kinds of molecules have different minimum-energy tautomers in different electronic states; if the minimum-energy tautomer in the excited state has a proton-transferred geometry between neighboring atoms, proton transfer can occur in the excited state. The tautomerization often takes the form of keto-enol tautomerism. Characteristics Since a proton-transferred geometry is usually the minimum-energy tautomer only in the excited state and is relatively unstable in the ground state, molecules with ESIPT character may show an extraordinarily large Stokes shift compared with common fluorescent molecules, or exhibit dual fluorescence in which the shorter-wavelength emission comes from the original tautomer and the longer-wavelength emission from the proton-transferred tautomer. However, there are some exceptional cases where ESIPT molecules show no dual luminescence or no significantly red-shifted emission from the proton-transferred tautomer, for various reasons. The rate of the ESIPT process may be slowed by deuterium substitution of the hydrogen that is transferred, because deuteration significantly increases the mass of the transferred atom while leaving the electrostatic potential in the molecule substantially unchanged. However, the magnitude of this rate change may lie anywhere in the range of about 1–50, depending on the shape and size of the potential energy surfaces of the molecule. Application Based on the characteristically large Stokes shift shown when ESIPT occurs, various applications have been developed using the red-shifted fluorescence. (A worked example of the Stokes-shift arithmetic appears at the end of this article.) Applications include turn-on photoluminescence sensors, photochromic materials, non-destructive optical memory, and white-light-emitting materials. Phenol does not form a ketal under normal conditions because it does not tautomerize to any useful extent; however, under ESIPT in the presence of an alcohol, e.g. ethylene glycol, it became possible to trap 1,4-dioxaspiro[4.5]deca-6,8-diene [23783-59-7]. References Enols Photochemistry
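To make the Stokes-shift comparison above concrete, the shift is usually quoted in wavenumbers, 1/lambda_abs minus 1/lambda_em. The wavelengths below are hypothetical values chosen to be typical of ESIPT dyes; they are not taken from any specific compound.

    # Stokes shift in wavenumbers (cm^-1) from absorption and emission
    # wavelengths in nm. The wavelengths are hypothetical but typical.
    def stokes_shift_cm1(lambda_abs_nm, lambda_em_nm):
        return 1e7 / lambda_abs_nm - 1e7 / lambda_em_nm

    normal = stokes_shift_cm1(350.0, 420.0)  # emission from the original tautomer
    esipt = stokes_shift_cm1(350.0, 550.0)   # emission from the proton-transferred tautomer
    print(f"normal-tautomer Stokes shift ~ {normal:.0f} cm^-1")  # ~4,760 cm^-1
    print(f"ESIPT Stokes shift ~ {esipt:.0f} cm^-1")             # ~10,390 cm^-1

The roughly doubled shift for the proton-transferred emission is what makes ESIPT emitters attractive for the sensing and white-light applications listed above.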
Excited state intramolecular proton transfer
[ "Chemistry" ]
458
[ "Enols", "Functional groups", "nan" ]
44,223,073
https://en.wikipedia.org/wiki/Japan%20Trench%20Fast%20Drilling%20Project
The Japan Trench Fast Drilling Project (JFAST) was a rapid-response scientific expedition that drilled ocean-floor boreholes through the fault zone of the 2011 Tohoku earthquake. JFAST gathered important data about the rupture mechanism and physical properties of the fault that caused the huge earthquake and tsunami which devastated much of northeast Japan. Background The 2011 Tohoku-oki earthquake, with a moment magnitude of 9.0, was the largest in Japan's history, and severely damaged regions of northeast Honshu, with over 15,000 deaths and economic losses of US$200 to 300 billion. Because of the huge societal impact, there was an urgency among scientists to respond with information and research results to explain the disastrous event. Soon after the earthquake, researchers of the Integrated Ocean Drilling Program (IODP) began planning the Japan Trench Fast Drilling Project (JFAST) to investigate the earthquake with ocean floor boreholes to the plate boundary fault. This ambitious project drilled boreholes through the fault that slipped during the earthquake in order to understand the unprecedented huge slip (40 to 60 meters) that occurred on the shallow portion of the megathrust fault and was the primary source of the large tsunami that devastated much of the coast of northeast Honshu. There was much public interest in this high-profile scientific project, with considerable Japanese and English media coverage of the operations and results. Specific science objectives included: Estimation of the stress state in the region of the shallow fault from borehole breakouts. Retrieval of core samples from the plate boundary fault zone to see geologic structures and measure physical properties of the fault zone. Before this project, no one had directly seen a fault zone that recently moved tens of meters in an earthquake. Measurement of the temperature across the fault zone to estimate the level of dynamic friction during the earthquake. These thermal observations needed to be done quickly after the earthquake, and this was the main reason for the rapid mobilization of JFAST. The site for the offshore drilling was located about 220 km east of Sendai, in the region of very large fault slip during the earthquake near the Japan Trench. Deep water drilling operations The D/V Chikyu, operated by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), sailed on IODP Expedition 343 from the port of Shimizu, Shizuoka on April 1, 2012, within 13 months of the earthquake. Chikyu is the only research vessel with the capabilities for the necessary drilling in very deep water of over 6900 meters. Two months of operations, from April 1 to May 24, 2012, were scheduled for drilling several boreholes to carry out logging while drilling (LWD), install temperature sensors and retrieve core samples. The extreme water depths caused many technical challenges that had to be considered, such as the strength of the long pipe string, onboard handling of the pipe sections, and instrument operations at very high water pressure. These aspects needed careful planning and new tools on the ship. Various equipment had not been previously used in such deep water and caused many problems and delays during the first month at sea. Eventually the difficult engineering problems were overcome, enabling retrieval of borehole core and installation of a temperature observatory across the fault zone at a depth of about 820 meters below the sea floor. 
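A rough sense of scale for these operations: the drill string had to span the full water column plus the depth of the fault zone below the sea floor. A minimal check using the figures quoted above:

    # Rough scale check using the figures quoted above: water depth plus
    # the depth of the fault zone below the sea floor.
    water_depth_m = 6_900.0     # "very deep water of over 6900 meters"
    below_seafloor_m = 820.0    # fault zone ~820 m below the sea floor

    total_m = water_depth_m + below_seafloor_m
    print(f"drill string must span roughly {total_m:,.0f} m")  # ~7,700 m

This is consistent with the record drill-string length noted below.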
New records were set for scientific drilling, including the longest drilling string from the ocean surface (7740 m) and the deepest core taken from the ocean surface (7752 m). Because of delays due to technical difficulties and bad weather, the temperature observatory could not be deployed during the main expedition. However, during the supplementary Expedition 343T from July 5 to 19 a new borehole was quickly drilled and the temperature sensors were installed. Retrieval of the temperature data was scheduled for cruise KR13-04 during February 11 to 20, 2013 using the JAMSTEC ship R/V Kairei and Remotely Operated Vehicle (ROV) Kaiko-7000II. Kaiko-7000II is one of the few vehicles that can operate at 7000 meters water depth. Because of inclement weather and navigation problems the instruments could not be retrieved at this time. However, during the subsequent cruise KR13-08 from April 21 to May 9, 2013, the temperature instruments were successfully recovered on April 26. Scientific results Borehole stress Fractures in the borehole wall (borehole breakouts) were used to estimate the stress field in the region close to the fault zone. These fractures can be observed in the wall resistivity records obtained from the LWD data. From the orientations and crack widths of the fractures, the direction and magnitude of the stress can be calculated. The results of these analyses show that the region has changed from a thrust fault regime before the earthquake to a normal fault regime after the earthquake. The horizontal stress became close to zero, indicating that almost all of the stress was released during the earthquake. This confirms previous suggestions that the earthquake had a complete stress drop, which is different from most other large earthquakes. Fault zone From the core samples, geologic structure data and measurements of physical properties, a single plate-boundary fault zone was identified with a high level of confidence at a depth of about 820 meters below the seafloor. The fault is localized in a thin layer of highly deformed pelagic clays. The entire section of the fault zone was not retrieved, but from the amount of the recovered and unrecovered sections, the total width of the fault zone is determined to be less than 5 meters. This is a considerably simpler and thinner plate-boundary fault than has been observed at other locations, such as the Nankai Trough. The actual slip surface for the 2011 earthquake may not have been recovered, but it is assumed that the structures and physical properties of the core are representative of the entire fault zone. Fault friction One of the main objectives of JFAST was to estimate the level of friction on the fault during the earthquake. To determine the frictional strength, high-speed laboratory experiments were carried out on samples from the plate boundary fault zone. The measured shear strength for permeable and impermeable conditions yielded values of 1.32 and 0.22 MPa, respectively, with the equivalent values for the coefficient of friction of 0.19 and 0.03, respectively. These results show that the fault slipped with very low levels of friction, which are lower than observed from other subduction zones, such as the Nankai Trough. The very low frictional strength for the material from the Japan Trench fault zone is much lower than typically observed for other types of rocks. The low friction properties are largely caused by the high content of the clay mineral smectite. 
Examination of the microstructures in the laboratory samples suggests that fluids are important in the faulting process and contribute to the low friction properties, possibly through thermal pressurization. The temperature measurements were also designed to estimate the frictional heat on the fault by measuring the thermal anomaly at the fault zone. A temperature signal was clearly observed in the data about 4 months after instrument installation, which was 18 months after the earthquake. At that time the temperature in the fault zone was about 0.3 °C above the geothermal gradient. This is interpreted to represent the frictional heat produced at the time of the earthquake. Analyses of these data showed that the coefficient of friction on the fault at the time of the earthquake was about 0.08 and the average shear stress on the fault was estimated to be 0.54 MPa. (A small numeric cross-check of these figures appears at the end of this article.) The temperature measurements give independent and similar results to the laboratory friction experiments, and confirm the very low frictional properties of the fault. The low friction properties likely contributed to the very large slip during the earthquake. Summary JFAST is considered to be a successful rapid scientific response to a natural hazard event that had a great societal impact. Technical challenges associated with drilling in very deep water of about 6900 meters were overcome, enabling borehole stress measurements, recovery of valuable core samples of the plate-boundary fault zone and collection of unique temperature measurements. The results of the scientific investigations show that the huge slip during the 2011 Tohoku earthquake occurred on a simple and thin fault zone composed of pelagic sediments with a high smectite content. Both laboratory experiments on the fault zone material and temperature measurements across the fault zone show that the friction level was very low during the earthquake. The localized fault zone, the low friction properties of its material and the complete stress drop during the earthquake are important characteristics that likely contributed to the huge slip during the earthquake. See also Drilling Vessel Chikyū Integrated Ocean Drilling Program References Media coverage Scientific American, Drilling Ship to Probe Fault Zone that Caused Fukushima Quake, October 31, 2011. Discovery Channel, Daily Planet feature story on March 9, 2012. NHK TV News, Japan, news story on April 14, 2012 (in Japanese). TBS TV News 23, Japan, featured story on May 3, 2012 (in Japanese). Otago Daily Times, New Zealand, Researchers drill deep into fault, May 2, 2012 NHK TV Science Zero, 30 minute TV program about JFAST, June 17, 2012 (in Japanese) phys.org, New report illuminates stress change during the 2011 Tohoku-Oki earthquake, February 7, 2013 Physics Today, Scientists dig deep in Tohoku fault to crack earthquake's secrets, August 2013. Christian Science Monitor, How slippery clay helps Japan's 2011 mega tsunami, December 6, 2013. External links JFAST website IODP 2011 Tōhoku earthquake and tsunami Earthquake engineering Science and technology in Japan
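The friction figures quoted above can be cross-checked against each other. The coefficient of friction is the ratio of shear stress to effective normal stress (mu = tau / sigma_n); the ~7 MPa effective normal stress used below is inferred from the article's own numbers rather than stated directly in the text.

    # Cross-check of the JFAST friction figures (mu = tau / sigma_n).
    # The ~7 MPa effective normal stress is inferred, not stated directly.
    lab_pairs = [(1.32, 0.19), (0.22, 0.03)]  # (shear strength MPa, friction coefficient)

    for tau, mu in lab_pairs:
        print(f"implied effective normal stress ~ {tau / mu:.1f} MPa")  # ~7 MPa both times

    sigma_n = 7.0  # MPa, inferred above
    print(f"temperature-derived mu of 0.08 implies tau ~ {0.08 * sigma_n:.2f} MPa")
    # ~0.56 MPa, close to the 0.54 MPa average shear stress quoted above

The near agreement between the laboratory and temperature-derived values is exactly the independence noted in the text.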
Japan Trench Fast Drilling Project
[ "Engineering" ]
1,929
[ "Earthquake engineering", "Civil engineering", "Structural engineering" ]
44,224,167
https://en.wikipedia.org/wiki/Process%20validation
Process validation is the analysis of data gathered throughout the design and manufacturing of a product in order to confirm that the process can reliably output products of a determined standard. Regulatory authorities like EMA and FDA have published guidelines relating to process validation. The purpose of process validation is to ensure varied inputs lead to consistent and high-quality outputs. Process validation is an ongoing process that must be frequently adapted as manufacturing feedback is gathered. End-to-end validation of production processes is essential in determining product quality because quality cannot always be determined by finished-product inspection. Process validation can be broken down into three stages: process design (Stage 1a, Stage 1b), process qualification (Stage 2a, Stage 2b), and continued process verification (Stage 3a, Stage 3b). Stage 1: Process Design In this stage, data from the development phase are gathered and analyzed to define the commercial manufacturing process. By understanding the commercial process, a framework for quality specifications can be established and used as the foundation of a control strategy. Process design is the first of three stages of process validation. Data from the development phase is gathered and analyzed to understand end-to-end system processes. These data are used to establish benchmarks for quality and production control. Design of experiment (DOE) Design of experiments is used to discover possible relationships and sources of variation as quickly as possible. A cost-benefit analysis should be conducted to determine if such an operation is necessary. Quality by design (QBD) Quality by design is an approach to pharmaceutical manufacturing that stresses quality should be built into products rather than tested in products; that product quality should be considered at the earliest possible stage rather than at the end of the manufacturing process. Input variables are isolated in order to identify the root cause of potential quality issues and the manufacturing process is adapted accordingly. Process analytical technology (PAT) Process analytical technology is used to measure critical process parameters (CPP) and critical quality attributes (CQA). PAT facilitates measurement of quantitative production variables in real time and allows access to relevant manufacturing feedback. PAT can also be used in the design process to generate a process qualification. Critical process parameters (CPP) Critical process parameters are operating parameters that are considered essential to maintaining product output within specified quality target guidelines. Critical quality attributes (CQA) Critical quality attributes (CQA) are chemical, physical, biological, and microbiological attributes that can be defined, measured, and continually monitored to ensure final product outputs remain within acceptable quality limits. CQA are an essential aspect of a manufacturing control strategy and should be identified in stage 1 of process validation: process design. During this stage, acceptable limits, baselines, and data collection and measurement protocols should be established. Data from the design process and data collected during production should be kept by the manufacturer and used to evaluate product quality and process control. Historical data can also help manufacturers better understand operational processes and input variables as well as better identify true deviations from quality standards compared to false positives. 
Should a serious product quality issue arise, historical data would be essential in identifying the sources of errors and implementing corrective measures. Stage 2: Process Performance Qualification In this stage, the process design is assessed to conclude if the process is able to meet determined manufacturing criteria. In this stage all production processes and manufacturing equipment are proven to confirm quality and output capabilities. Critical quality attributes are evaluated, and critical process parameters taken into account, to confirm product quality. Once the process qualification stage has been successfully accomplished, production can begin. Process Performance Qualification is the second phase of process validation. Stage 3: Continued Process Verification Continued process verification is the ongoing monitoring of all aspects of the production cycle. It aims to ensure that all levels of production are controlled and regulated. Deviations from prescribed output methods and final product irregularities are flagged by a process analytics database system. The FDA requires production data to be recorded (§ 211.180(e)). Continued process verification is stage 3 of process validation. The European Medicines Agency defines a similar process known as ongoing process verification. This alternative method of process validation is recommended by the EMA for validating processes on a continuous basis. Continuous process verification analyses critical process parameters and critical quality attributes in real time to confirm production remains within acceptable levels and meets standards set by ICH Q8, Pharmaceutical Quality Systems, and Good manufacturing practice. (A minimal sketch of such a real-time check appears at the end of this article.) See also Cleaning validation Process qualification Verification and validation References External links FDA – U.S. Food and Drug Administration EMA – European Medicines Agency Parenteral Drug Association AAPS Process Validation AAPS Technology Transfer Drug Product Manufacturing Process Software testing Formal methods Software quality Enterprise modelling Business process management
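Continued process verification, as described above, amounts to monitoring CQA measurements against predefined acceptance limits and flagging deviations. A minimal sketch of such a check follows, using three-sigma control limits derived from historical data; the limits and data are hypothetical, and a real programme would follow the FDA/EMA guidance cited above.

    # Minimal sketch of a continued-process-verification check: flag CQA
    # measurements outside three-sigma control limits derived from
    # historical data. The limits and data here are hypothetical.
    from statistics import mean, stdev

    historical = [5.01, 4.98, 5.03, 5.00, 4.97, 5.02, 4.99, 5.01]  # e.g. assay results
    centre, sigma = mean(historical), stdev(historical)
    lcl, ucl = centre - 3 * sigma, centre + 3 * sigma

    for batch, value in enumerate([5.00, 5.02, 5.11, 4.99], start=1):
        if not (lcl <= value <= ucl):
            print(f"batch {batch}: {value} outside [{lcl:.2f}, {ucl:.2f}] - investigate")

In this sketch only the third batch is flagged; a real system would also apply trend rules so that gradual drift is caught before a limit is breached.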
Process validation
[ "Engineering" ]
932
[ "Systems engineering", "Software testing", "Enterprise modelling", "Software engineering", "Formal methods" ]
44,224,558
https://en.wikipedia.org/wiki/Transmission%20line%20loudspeaker
A transmission line loudspeaker is a loudspeaker enclosure design which uses the topology of an acoustic transmission line within the cabinet, compared to the simpler enclosures used by sealed (closed) or ported (bass reflex) designs. Instead of reverberating in a fairly simple damped enclosure, sound from the back of the bass speaker is directed into a long (generally folded) damped pathway within the speaker enclosure, which allows far greater control and use of speaker energy and the resulting sound. Inside a transmission line (TL) loudspeaker is a (usually folded) pathway into which the sound is directed. The pathway is often covered with varying types and depths of absorbent material, and it may vary in size or taper, and may be open or closed at its far end. Used correctly, such a design ensures that undesired resonances and energies, which would otherwise cause undesirable auditory effects, are instead selectively absorbed or reduced ("damped") due to the effects of the duct, or alternatively only emerge from the open end in phase with the sound radiated from the front of the driver, enhancing the output level ("sensitivity") at low frequencies. The transmission line acts as an acoustic waveguide, and the padding both reduces reflection and resonance, and also slows the speed of sound within the cabinet to allow for better tuning. Transmission line loudspeaker designs are more complex to implement, making mass production difficult, but their advantages have led to commercial success for a number of manufacturers such as IMF, TDL, and PMC. As a rule, transmission line speakers tend to have exceptionally high fidelity low-frequency response extending far below that of a typical speaker or subwoofer, reaching into the infrasonic range (British company TDL's studio monitor range from the 1990s quoted their frequency responses as starting from as low as 17 Hz depending upon model, with a sensitivity of 87 dB for 1 W @ 1 meter), without the need for a separate enclosure or driver. Acoustically, TL speakers roll off more slowly (less steeply) at low frequencies, and they are thought to provide better driver control than standard vented-box cabinet designs, are less sensitive to positioning, and tend to create a very spacious soundstage. Modern TL speakers were described in a 2000 review as "match[ing] reflex cabinet designs in every respect, but with an extra octave of bass, lower LF distortion and a frequency balance which is more independent of listening level". Although more complex to design and tune, and not as easy to analyze and calculate as other designs, the transmission line design is valued by several smaller manufacturers, as it avoids many of the major disadvantages of other loudspeaker designs. In particular, while the basic parameters and equations describing sealed and reflex designs are fairly well understood, the range of options involved in a transmission line design means that the general design can only be approximately calculated; final transmission line tuning requires considerable attention and is less easy to automate. Purpose and design overview A transmission line is used in loudspeaker design to reduce time, phase, and resonance related distortions, and in many designs to gain exceptional bass extension to the lower end of human hearing, and in some cases the near-infrasonic (below 20 Hz). TDL's 1980s reference speaker range (now discontinued) contained models with frequency ranges starting from 20 Hz, and in some models as low as 17 Hz, without needing a separate subwoofer. Irving M. 
Fried, an advocate of TL design, stated that: "I believe that speakers should preserve the integrity of the signal waveform and the Audio Perfectionist Journal has presented a great deal of information about the importance of time domain performance in loudspeakers. I’m not the only one who appreciates time- and phase-accurate speakers but I have been virtually the only advocate to speak out in print in recent years. There’s a reason for that." "It is difficult and costly to design and manufacture a time- and phase-accurate speaker system. Few of today’s high-end loudspeakers are time- and phase-accurate designs. The audio magazines need to appeal to a broad spectrum of advertisers including many who make speaker systems which are time incoherent. The magazines, and the reviewers who write for them, have ignored or downplayed the issue of time- and phase-accuracy in order to maximize advertising revenue. I am not alone in recognizing this situation." Some proponents of TL loudspeakers consider that using a TL is the theoretically ideal manner in which to load a moving-coil drive unit. However, it is also one of the more complex of constructions. The most common and practical implementation is to fit a drive unit to the end of a long duct that is usually open at the far end. In practice, the duct is folded inside a conventionally shaped cabinet, so that the open end of the duct appears as a vent on the speaker cabinet. There are many ways in which the duct can be folded, and the line is often tapered in cross section to avoid parallel internal surfaces that encourage standing waves. Some speaker designs also use a spiral or elliptic spiral shaped duct, usually with one speaker element in the front or two speaker elements arranged one on each side of the cabinet. Depending upon the drive unit and the quantity and physical properties of the absorbent material, the amount of taper will be adjusted during the design process to tune the duct to remove irregularities in its response. The internal partitioning provides substantial bracing for the entire structure, reducing cabinet flexing and colouration. The inside faces of the duct or line are treated with an absorbent material to provide the correct termination with frequency to load the drive unit as a TL. The enclosure behaves like an infinite baffle, potentially absorbing most or all of the speaker unit's rear energies. A theoretically perfect TL would absorb all frequencies entering the line from the rear of the drive unit but remains theoretical, as it would have to be infinitely long. The physical constraints of the real world demand that the length of the line must often be less than 4 metres before the cabinet becomes too large for any practical applications, so not all the rear energy can be absorbed by the line. In a realized TL, only the upper bass is TL loaded in the true sense of the term (i.e. fully absorbed); the low bass is allowed to freely radiate from the vent in the cabinet. The line therefore effectively works as a low pass filter, another crossover point in fact, achieved acoustically by the line and its absorbent filling. Below this “crossover point” the low bass is loaded by the column of air formed by the length of the line. The length of the line is specified so as to reverse the phase of the rear output of the drive unit as it exits the vent. This acoustic energy combines with the output of the bass unit, extending its response and effectively creating a second driver. 
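A minimal sketch of the tuning arithmetic implied above (and made explicit under "Design principles" below): the line is dimensioned so that its quarter-wave resonance falls at the target low frequency, and the stuffing lowers the effective speed of sound in the line (Bradbury's figures, discussed later, suggest reductions of roughly 35–50%). All values here are illustrative.

    # Quarter-wave line length for a target low frequency, allowing for the
    # slower effective speed of sound in a stuffed line. Values illustrative.
    def line_length_m(f_target_hz, c_mps=343.0, velocity_reduction=0.0):
        """L = c_eff / (4 * f): the length whose quarter-wave resonance
        falls at f_target_hz."""
        c_eff = c_mps * (1.0 - velocity_reduction)
        return c_eff / (4.0 * f_target_hz)

    print(f"20 Hz, unstuffed line: {line_length_m(20.0):.2f} m")  # ~4.29 m
    print(f"20 Hz, 35% slower sound: {line_length_m(20.0, velocity_reduction=0.35):.2f} m")  # ~2.79 m

The stuffed figure illustrates why a practical folded line can stay within the roughly 4-metre limit mentioned above while still being tuned to 20 Hz.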
Essentially, the goal of the transmission line is to minimize acoustical or mechanical impedance at frequencies corresponding to the fundamental free-air resonance of the bass driver. This simultaneously reduces stored energy in the driver's motion, reduces distortion, and critically damps the driver by maximizing acoustic output (maximal acoustical loading or coupling) at the terminus. This also minimizes the negative effects of acoustic energy that would otherwise (as with a sealed enclosure) be reflected back to the driver in a sealed cavity. Transmission line loudspeakers employ this tube-like resonant cavity, with the length set between 1/6 and 1/2 the wavelength of the fundamental resonant frequency of the loudspeaker driver being used. The cross-sectional area of the tube is typically comparable to the cross-sectional area of the driver's radiating surface area. This cross section is typically tapered down to approximately 1/4 of the starting area at the terminus or open end of the line. While not all lines use a taper, the standard classical transmission line employs a taper from 1/3 to 1/4 area (ratio of terminus area to starting area directly behind driver). This taper serves to dampen the buildup of standing waves within the line, which can create sharp nulls in response at the terminus output at even multiples of the driver's Fs. In a transmission line speaker, the transmission line itself can be open ("vented") or closed at the far end. Closed designs typically have negligible acoustic output from the enclosure except from the driver, while open ended designs exploit the low-pass filter effect of the line, and the resultant low bass energy emerges to reinforce the output from the driver at low frequencies. Well-designed transmission line enclosures have smooth impedance curves, reflecting a lack of strong frequency-specific resonances; poorly designed ones can also have low efficiency. One key advantage of transmission lines is their ability to conduct the back wave behind the transducer more effectively away from it – reducing the chance of reflected energy permeating back through the diaphragm out of phase with the primary signal. Not all transmission line designs do this effectively. Most offset transmission line speakers place a reflective wall fairly close behind the transducer within the enclosure – posing a problem for internal reflections emanating back through the transducer diaphragm. Older descriptions explained the design in terms of "impedance mismatch", or pressure waves "reflected" back into the enclosure; these descriptions are now considered outdated and inaccurate, as technically the transmission line works through selective production of standing waves and constructive and destructive interference (see below). A second benefit is that the resulting music is time coherent (i.e., in phase). In 2002 Fried cited a listening test, performed and reported (as he believed) in the December 2000 Hi-Fi News, in which a high-quality recording was made using reputable but non-time-coherent loudspeakers and then time/phase corrected; an expert listening panel "voted unanimously for the superior realism and accuracy of the time corrected output" for high quality sound reproduction. One of the significant and common problems with a transmission line loudspeaker system is the unwanted phase-cancellation effects of higher line harmonics bleeding from the transmission line and adversely affecting the overall sound field. 
For example, in the PMC PMC6 mid-sized transmission line monitoring loudspeaker, there is a dip around 300 Hz that is caused by the fifth harmonic of the transmission line’s resonant frequency. This type of problem is quite common, and it was readily apparent in other transmission line loudspeakers. For example, the large IMF TLS80 MkII from 1977 also had an anomaly, but this time at the lower frequency of about 140 Hz, consisting of an almost one-octave-wide deleterious 2-dB dip in the on-axis response. Another problem is that the sound radiation from the exit of the line is spread over a quite broad frequency range, caused by the hump of the quarter-wave transmission line resonance, whereas the high-Q port resonance of a vented-box loudspeaker rolls off much more quickly and extends over a much narrower frequency band. These sorts of issues with transmission line loudspeakers can lead to tonal accuracy problems that cannot be resolved. A transmission line speaker employs, essentially, two distinct forms of bass loading, which historically and confusingly have been amalgamated in the TL description. Separating the upper and lower bass analysis reveals why such designs have so many potential advantages and disadvantages over reflex and infinite baffle designs. Measurements indicate that the upper bass is only partially absorbed by the line, making a clean and neutral response somewhat difficult if not impossible to achieve. The lower bass is extended and distortion is lowered by the line's control over the drive unit's excursion. One of the exclusive benefits of a TL design is its ability to produce very low frequencies even at low monitoring levels – TL speakers can routinely produce full range sound usually requiring a subwoofer, and do so to very high levels of low-frequency accuracy. The main disadvantage of the design is that it is more labor-intensive to create and tune a high quality and consistent transmission line, compared to building a simple vented-box or closed-box enclosure. One PMC employee was quoted as saying that optimising a transmission line loudspeaker is "like juggling water". A 2010 Hifi Avenue TL speaker review commented that "One thing I have noticed about transmission line designs is that they create a rather big soundstage and seem to handle crescendoes with ease". History of transmission line loudspeakers Invention and early use The concept originated within acoustic enclosure design as the "acoustical labyrinth", devised by acoustic engineer and later Director of Research Benjamin Olney, who developed the concept at the Stromberg-Carlson Telephone Co. in the early 1930s while studying the effect of enclosure shape and size on speaker output, including the effect of "extreme length in a box baffle". A patent was filed in 1934. The design was used in their console radios beginning in 1936. A loudspeaker enclosure based on the concept was proposed in October 1965 by Dr A.R. Bailey in Wireless World magazine, referencing a production version of an acoustic-line enclosure design from Radford Electronics Ltd. The article postulated that energy from the rear of a driver unit could be essentially absorbed, without damping the cone's motion or superimposing internal reflections and resonance, so Bailey and Radford reasoned that the rear wave could be channelled down a long pipe. If the acoustic energy was absorbed, it would not be available to excite resonances. 
A pipe of sufficient length could be tapered, and stuffed so that the energy loss was almost complete, minimizing output from the open end. No broad consensus on the ideal taper (expanding, uniform cross-section, or contracting) has been established. "Classic" era transmission line loudspeakers Source for much of this section: Loudspeakers: for music recording and reproduction (Newell & Holland, 2007) The birth of the modern transmission line speaker design came about in 1965 with the publication of A.R. Bailey's article in Wireless World, “A Non-resonant Loudspeaker Enclosure Design”, detailing a working Transmission Line. Bailey followed up his first article with a second one in 1972. Radford Electronics Ltd took up this innovative design and briefly manufactured the first commercial Transmission Line loudspeaker. Although acknowledged as the father of the Transmission Line, Bailey drew on earlier work on labyrinth design, dating back as early as the 1930s. His design, however, differed significantly in the way in which he filled the cabinet with absorbent materials. Bailey hit upon the idea of absorbing all the energy generated by the bass unit inside the cabinet, providing an inert platform for the drive unit to work from; unchecked, this energy produces spurious resonances in the cabinet and its structure, adding distortion to the original signal. Shortly thereafter the design entered mainstream Hi-Fi, through the works of Irving M. "Bud" Fried in the United States, and a British trio: John Hayes, John Wright, and David Brown. Dave D'Lugos describes the period that followed (approximately 35 years until the start of the 21st century) as a period when the "classical designs" were created. Fried was exposed during his time at Harvard University to high fidelity audio reproduction, and later became an importer of audiophile items. Under the trademark "IMF" (his initials), from 1961, he eventually became involved with many advancements in audiophile equipment: cartridges (IMF – London, IMF – Goldring), tonearms (SME, Gould, Audio and Design), amplifiers (Quad, Custom Series), loudspeakers (Lowther, Quad, Celestion, Bowers and Wilkins, Barker, etc.). In 1968 he met John Hayes and John Wright, who had already designed an award-winning tonearm in the UK and had brought along a transmission line speaker designed by John Wright — described by Hayes as "fanatical regarding quality" — in order to promote and demonstrate the tonearm at a New York hi-fi show. Fried unexpectedly received a number of orders for the unnamed speaker, which he dubbed the "IMF". The British pair, along with Hayes' colleague David Brown, agreed to form a UK company to design and manufacture speakers which would be sold by Fried in the United States. John Hayes later wrote that: Of course, Bud, had called it the IMF, and therefore, perhaps mistakenly we registered IMF and formed an IMF company... At no time did Bud Fried have any input on the designs. We sold him speakers and he was the US Distributor... [...] Bud Fried was never a Director or shareholder of IMF Electronics. IMF electronics were the only company manufacturing the transmission line speakers. The name IMF was adopted because Bud Fried had demonstrated the first prototype speakers at the New York hi fi show, and because of the publicity and the fact that he had used his name on the then unnamed speakers, we stuck with the name which was a mistake on our part. It was never his company. After our lawsuit he called his speakers Fried. 
The relationship broke down acrimoniously when Fried began to make his own, poorer quality speakers, also marketed as "IMF", and refused to cease until a court agreed that the UK business had the right to the trademark IMF for loudspeakers. Following the split, Fried in the USA (under the brandname "Fried") and the three founders of IMF Electronics in the UK (via a joint venture with driver manufacturer Elac under the name TDL), both became well known in audiophile circles for many years as major advocates of transmission line speaker design. TDL closed after John Wright's gradually failing health and his death from cancer in 1999. He was described in his 1999 obituary as "one of the most important figures on the British hi-fi scene since the mid-1960s... best remembered for his transmission-line loudspeaker designs". The brand was acquired by Audio Partnerships (part of retailer group Richer Sounds). Fried died six years later, in 2005. 21st century In the early 21st century, mathematical models that seemed to approximate the behavior of real-world TL speakers and cabinets began to emerge. According to the website t-linespeakers.org, this led to an understanding that what it termed the "classical" speakers, designed largely by "trial and error", were a "good job" and the best that was reasonably possible at the time, but that better designs were now achievable based on modeled responses. Design principles Phase inversion is achieved by selecting a length of line that is equal to the quarter wavelength of the target lowest frequency. The effect is illustrated in Fig. 1, which shows a hard boundary at one end (the speaker) and the open-ended line vent at the other. The phase relationship between the bass driver and vent is in phase in the pass band until the frequency approaches the quarter wavelength, when the relationship reaches 90 degrees as shown. However, by this time the vent is producing most of the output (Fig. 2). Because the line is operating over several octaves with the drive unit, cone excursion is reduced, providing higher SPLs and lower distortion levels, as compared with bass reflex and infinite baffle loudspeaker enclosure designs. The complex loading of the bass drive unit demands specific Thiele-Small driver parameters to realise the full benefits of a TL design. Most drive units in the marketplace are developed for the more common reflex and infinite baffle designs and are usually not suitable for TL loading. High efficiency bass drivers with extended low frequency ability are usually designed to be extremely light and flexible, having very compliant suspensions. Whilst performing well in a reflex design, these characteristics do not match the demands of a TL design. The drive unit is effectively coupled to a long column of air which has mass. This lowers the resonant frequency of the drive unit, negating the need for a highly compliant device. Furthermore, the column of air provides greater force on the driver itself than a driver opening onto a large volume of air (in simple terms it provides more resistance to the driver's attempt to move it), so controlling the movement of air requires an extremely rigid cone, to avoid deformation and consequent distortion. The introduction of the absorption materials reduces the velocity of sound through the line, as discovered by Bailey in his original work. 
Bradbury published his extensive tests to determine this effect in an AES journal article in 1976, and his results agreed that heavily damped lines could reduce the velocity of sound by as much as 50%, although 35% is typical in medium damped lines. The behaviour of various damping materials has also been studied by Lusztak and Bujacz. Bradbury's tests were carried out using fibrous materials, typically longhaired wool and glass fibre. However, these kinds of materials produce highly variable effects that are not consistently repeatable for production purposes. They are also liable to produce inconsistencies due to movement, climatic factors and effects over time. High specification acoustic foams, developed by manufacturers such as PMC, with similar characteristics to longhaired wool, provide repeatable results for consistent production. The density of the polymer, the diameter of the pores and the sculptured profiling are all specified to provide the correct absorption for each speaker model. The quantity and position of the foam is critical to engineer a low-pass acoustic filter that provides adequate attenuation of the upper bass frequencies, whilst allowing an unimpeded path for the low bass frequencies. Although the result may require a lot of modeling and testing, the starting point is usually based on one of three basic principles. Filling the entire tube treats the TL as a damper, aiming at completely eliminating the rear wave. Filling half the cross section throughout the line's entire length treats the TL as an infinite baffle, basically damping high frequencies and wall-to-wall resonances. Filling the tube from the driver to half the tube's length aims at a quarter-wave resonator, leaving the fundamental tone with its velocity maxima at the open end of the tube intact, while damping all the overtones. Mathematical equations, modelling, and design process The external links section of this article links to a number of resources that detail the mathematical principles, models, and DIY calculations, as well as extended practical design material, related to transmission line speakers. For most of the 20th century, transmission line design remained more of an art than a science, requiring much trial and error. Jon Risch states in an article on classic transmission line design, that the hard part was finding the best stuffing density along the line's length, because "the line stuffing affects both the total apparent line length AND the total apparent box volume simultaneously". He summarized the state of design at the time as: "The classic transmission line bass enclosure has never been completely and successfully modeled such that it can be built from a pat set of equations. Some claim to have done this, but it doesn't seem to allow a first time build without adjustments, so the models have enough wrong to require a fudge factor..."  Dave D'Lugos, founder of fan site t-linespeakers.org, comments that this reflects the "classical" designs from the 1960s until Risch's writing, during which period "TL design was seat of the pants". However, from the 21st Century, Martin King and George Augspurger (both separately and referencing each other's works), produced models which show these to be "generally less than optimal" designs which "did a good job of approaching what was possible in their day". Audio engineer Augspurger had modeled TLs using an electrical analogy, and found it to agree closely with King's existing work, based on a mechanical analogy. 
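As a point of reference for what such models must capture, the first-order behaviour of an open-ended quarter-wave line can be sketched in a few lines: the line resonates at its fundamental and at odd multiples of it, and those overtones are what the stuffing strategies described above aim to damp (the ~300 Hz dip noted earlier for the PMC PMC6 is a harmonic effect of this kind). The effective sound speed and length here are illustrative.

    # First-order resonances of an open-ended quarter-wave line:
    # fundamental f1 = c_eff / (4 * L), overtones at odd multiples of f1.
    c_eff = 225.0  # m/s, illustrative stuffed-line sound speed (~35% below 343)
    length = 2.8   # m, illustrative folded line length

    f1 = c_eff / (4.0 * length)
    resonances = [(2 * n - 1) * f1 for n in range(1, 6)]
    print("line resonances (Hz):", [round(f) for f in resonances])
    # -> [20, 60, 100, 141, 181]; everything above f1 must be absorbed

Real models such as King's and Augspurger's go much further, coupling this resonant behaviour to the driver parameters and the frequency-dependent damping of the stuffing.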
D'Lugos concluded in his overview of TL modeling and design theory: "I think that using modern drivers and tools such as King's software you can build a better TL easier today". More recently, Andrea Rubino has developed a sophisticated simulation model based on electrical circuit theory and published a series of articles in the Italian electroacoustic journal AUDIOreview. Many resources are available on his website: transmissionlinespeakers.com In addition to these more sophisticated models, a number of approximation algorithms exist. One such is to design a closed-box loudspeaker enclosure, then build a transmission line of the same volume tuned to the closed-box loudspeaker's resonance frequency. Another is to design a bass reflex loudspeaker, again building a transmission line of the same volume, tuned to the frequency of the Helmholtz resonator. Prominent individuals and companies Pioneers: Benjamin Olney – originated the idea of a duct in speaker enclosure design, which he termed an "acoustic labyrinth", while working for Stromberg-Carlson as an acoustic engineer and studying the effect of enclosure size on output sound. Bailey and Radford – worked together and developed the concept for loudspeakers (1965). Their design was a significant development from the earlier work. Bailey's name was on the article and Radford built the first commercial TL speaker. John Wright together with business partner John Hayes and (later) David Brown, and their company IMF Electronics Ltd (later: TDL) – Wright, a "fanatical" pursuer of quality, had designed an award-winning tonearm and, to demonstrate it, brought to New York a non-commercial TL speaker he had also designed. The speaker gained considerable attention and Wright, Hayes and colleague Brown formed a company that specialized in TL speakers, and won numerous awards (1968). TDL disbanded following Wright's death in 1999 and the brand, as a shell, was bought by Richer Sounds. Irving M. "Bud" Fried – American audiophile and TL advocate, who encountered Wright and Hayes in 1968, recognized the potential of Wright's unnamed speaker, and began marketing their TL speakers in the United States. Later set up a TL company of his own to design speakers. Bo Hansson – Swedish designer of HiFi equipment and founder of Opus3 Record Company, created the "Rauna Njord" concrete speaker as a transmission line design. Martin King and George Augspurger – researchers and designers who succeeded in modeling realistic TL speaker designs in the early 21st century. Other companies and individuals who have produced or researched TL speakers: Lentek Newtronics (Temperance line) Gini B+ (Bass Extenders line) Quadral T+A Electronics (Criterion line) J M Reynaud, PMC Salk Sound Rega (their Naos then RS7) Adelaide Speakers TBI Audio Systems LLC (subcontracted by Asis to research and design smaller TL speakers suitable for embedding into laptops) Marantz (Karaoke range) Merkel Acoustic Research/Jeff Merkel Albedo (Helmholine range) Transmission Audio Audio Reference (Acoustic Zen line) Radford DIY kit manufacturers: UK – IPL Acoustics (article), Falcon Acoustics "Thor" USA – GR Research "N3" USA – New York Acoustics, loosely associated with New York Audio Labs, kits and plans for 8" and 10" driver TL speaker cabinets, active in the mid 1980s. See also Acoustic suspension – a method of loudspeaker cabinet design and utilisation that uses one or more loudspeaker drivers mounted in a sealed box or cabinet.
Bass reflex – a type of loudspeaker enclosure that uses a port (hole) or vent cut into the cabinet and a section of tubing or pipe affixed to the port. Frequency response Loudspeaker acoustics Loudspeaker enclosure Loudspeaker measurement Passive radiator References External links Transmission Line Speakers Pages – TL projects, history and more. Quarter-wave.com – by Martin J King, developer of TL modeling software; also includes design calculations for professional and DIY TL speaker creation. http://www.perrymarshall.com/articles/industrial/transmission-line/ - mathematics of the TL speaker, Perry Marshall Brines Acoustics Articles (Archived 2009-10-24) – Application, tips, essays. Loudspeaker Handbook and Lexicon, Winslow Burhoe, 1978 (revised 1995/95/97) – has a sizeable section on TL speakers. Papers Papers and documents related to Olney's original "Acoustic Labyrinth": Acoustical Society of America paper, 1936 Followup paper, 1937 1934/1936 "Labyrinth" patent, filed by Stromberg-Carlson Telephone Manufacturing Co. Archive of Olney's papers at Rush Rhees Library, University of Rochester Loudspeaker technology Audio engineering
Transmission line loudspeaker
[ "Engineering" ]
6,021
[ "Electrical engineering", "Audio engineering" ]
44,227,105
https://en.wikipedia.org/wiki/List%20of%20protein%20secondary%20structure%20prediction%20programs
List of notable protein secondary structure prediction programs See also List of protein structure prediction software Protein structure prediction Structural bioinformatics software Protein structure Protein methods
List of protein secondary structure prediction programs
[ "Chemistry", "Biology" ]
31
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Structural biology", "Protein structure" ]
47,325,951
https://en.wikipedia.org/wiki/Quintessence%3A%20The%20Search%20for%20Missing%20Mass%20in%20the%20Universe
Quintessence: The Search for Missing Mass in the Universe is the fifth non-fiction book by the American theoretical physicist Lawrence M. Krauss. The book was published by Basic Books on December 21, 2000. This text is an update of his 1989 book The Fifth Essence. It was retitled Quintessence after the now widely accepted term for dark energy. Overview Krauss focuses on theoretical physics and has published research on a number of topics within that field. His primary contribution is to cosmology as one of the first physicists to suggest that most of the mass and energy of the universe resides in empty space, an idea now widely known as "dark energy". Furthermore, Krauss has formulated a model in which the universe could have potentially come from "nothing," as outlined in his later book A Universe from Nothing. Whether our universe is ever-expanding depends on the amount and properties of matter, but there is too little visible matter around us to explain the behavior we can see: over 90% of the universe consists of the missing mass or dark matter, which Krauss termed "the fifth essence." In this book Krauss demonstrates how the dark matter problem is now connected with two widely discussed areas in modern cosmology: the ultimate fate of the universe and the cosmological constant. He also discusses an antigravity force that may explain recent observations of a permanently expanding universe. See also Dark matter in fiction Exotic matter Mirror matter Negative mass Quintessence (physics) Scalar field dark matter Self-interacting dark matter Unparticle physics The 4 Percent Universe References External links Popular physics books 2000 non-fiction books Books by Lawrence M. Krauss Dark energy
Quintessence: The Search for Missing Mass in the Universe
[ "Physics", "Astronomy" ]
352
[ "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Energy (physics)", "Dark energy", "Wikipedia categories named after physical quantities" ]
47,333,806
https://en.wikipedia.org/wiki/C21H21NO6
{{DISPLAYTITLE:C21H21NO6}} The molecular formula C21H21NO6 (molar mass: 383.39 g/mol, exact mass: 383.1369 u) may refer to: Hydrastine Rhoeadine (rheadine) Molecular formulas
C21H21NO6
[ "Physics", "Chemistry" ]
66
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
57,590,109
https://en.wikipedia.org/wiki/Wireline%20QA/QC
Wireline quality assurance and quality control (wireline QA/QC) is a set of requirements and operating procedures which take place before, during, and after the wireline logging job. The main merits of wireline QA/QC include accuracy and precision of recorded data and information. Accuracy is a measure of the correctness of the result and generally depends on how well systematic errors (reproducible inaccuracies introduced by faulty design, inadequate calibration or a change in the borehole) are controlled and compensated for. Precision depends on how well random errors (errors that cannot be reproduced and are mostly related to the physics of a measurement) are analysed and overcome. Introduction Wireline logging is a part of exploration geophysics and is mainly used to detect the presence of economically useful hydrocarbons in the Earth's subsurface. Products of wireline logging are wireline logs or well logs (Fig. 1). In the oil industry the logs are "a recording against depth of any of the characteristics of the rock formations traversed by a measuring apparatus in the well-bore". The well logs are obtained by lowering the measuring tools on a wireline (cable) into the well (borehole). The integral parts of wireline logging operations are quality assurance and quality control procedures. Quality control is the "process that defines how well the solution for a specific problem is known". Well log quality control is a "set of methods that identifies and analyses data deviations from established standards and allows the design of a remedy". Unlike measurements in well-known and controlled laboratory conditions, logging is performed in-situ, can be affected by different possible failure sources and is susceptible to systematic and random errors. The objective of a wireline logging job is to obtain a permanent continuous record of the rock properties penetrated by the wellbore, resulting in wireline logs and fluid and rock samples. Of all the well data sets recorded and collected, well logs are the most valuable as they are vital for reservoir and formation evaluation. Wireline (well) logs are then combined with drilling data, mud logs, measurements while drilling (MWD) and coring information in order to choose correct testing and completion intervals to properly evaluate the production potential of the well. There are two categories of well log data: the original data (e.g. a gamma ray log resulting from gamma-ray measurements in the borehole, through a sodium iodide crystal scintillation detector, calibrated as per normal field operational procedure) or derived data (data resulting from the processing of the original data, e.g. calculating the volume of shale). Main components of the well log data record are the logs themselves (main curves for the main and repeat sections with relevant information - calibration information, parameter tables... and additional curves of the main and repeat passes including down-logs and images of quality control logs) and contextual information, such as data acquisition plans, other job reports, witness reports, and tool specification. Contextual data is of great value for exploration but can be lost or difficult to access. With the best quality wireline logging data acquired, subsequent steps in well production and completion are more precise and effective.
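As a hypothetical illustration of the accuracy/precision distinction defined above (all names and values here are invented for the sketch, not drawn from any logging standard), repeated readings of a single formation property can be simulated with a calibration bias, which degrades accuracy, and random measurement noise, which degrades precision:

```python
# Hypothetical sketch: systematic error (accuracy) vs. random error (precision).
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0        # the "real" formation property being logged
systematic_bias = 2.5     # e.g. an uncalibrated tool offset (systematic error)
random_noise_sd = 1.0     # measurement-physics noise (random error)

# 500 simulated repeat readings of the same interval.
readings = true_value + systematic_bias + rng.normal(0.0, random_noise_sd, 500)

accuracy_error = readings.mean() - true_value  # ~2.5; removed by calibration
precision = readings.std(ddof=1)               # ~1.0; handled by error analysis
print(round(accuracy_error, 2), round(precision, 2))
```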
High quality data and conscientious data management enable the prognosis of potential problems and failures, so the whole process (the whole rig or oilfield lifetime) can be adapted and configured in order to prevent possible consequences in regard to human safety, environmental preservation, and infrastructural integrity. Acquisition of geotechnical and petrophysical data is costly, but necessary. Data of dubious quality cannot yield reliably good decisions. The poorer the quality of data, the higher the risk associated with decisions based upon them. The less knowledge there is about the quality of recorded and collected data, the higher the uncertainty of the resulting decisions, which can lead to false interpretations and evaluations. A good evaluation is only possible with good quality data, so it is essential to acquire the best quality data. Factors affecting the data quality The data quality may be compromised to a greater or lesser extent by bad hole conditions, wireline equipment failures, human errors, or even extreme weather conditions. Within the bounds of rig-time costs and the preservation of personnel safety, the equipment and the well, the logging engineer, operational geologists and wireline log witnesses have to ensure that the best quality data is acquired. The objective of the wireline logging services is to provide the best possible quality data in the minimum possible rig-time. Quality of the data is directly dependent on available rig-time. Factors affecting data quality and time efficiency are operational procedures and environmental conditions, but mainly problems caused by them. During wireline job planning, some factors determine the specific types of equipment used for a given borehole geometry and conditions (temperature, pressures and mud type used), and others are the wireline crew's responsibility on site and during the wireline logging job. Data quality and time efficiency factors Environmental conditions: Drilling mud – hole diameter affects log readings, mud type and density affect the type of tools used and measurements such as conductivity and resistivity, invasion of drilling and mud fluids into reservoir rock, forming of mud-cake, wash-outs in soft sediment, formation of stress-related break-offs. Borehole geometry – e.g. affects type of logging and tools used (wireline, pipe conveyed or LWD). Borehole environment – temperature, pressures and possible hostile (H2S) environments affect tools used and their operating limits. Operational procedures: Equipment calibrations and verifications – tools are calibrated by adjusting their response to read some predetermined value in a situation for which the response is known, and the only time we know for certain that the tool is working properly is during calibration and verification. They should be checked as often as possible to ensure the correct data is recorded. Tool positioning and configuration (geometry) – tools have to be centralised or decentralised depending on the type of measurement recorded; they have different depths of recording intervals depending on their position in the tool-string and different vertical resolution (e.g. the difference in vertical resolution between short and long spacing resistivity tools), and the spacing of sensors on tools influences the volume of rock recorded. Different tools have different depths of penetration into the formation, making the need for accuracy very important.
Depth mismatch/accuracy – a fundamental requirement for consistency of data is to know at what depth the measurement is taken, otherwise it is difficult to compare the different logs needed for further data processing (interpretation). Logging speed – logging speed limits directly affect rig-time efficiency as they are not the same for all types of logs (and connected tools), and service companies provide maximum speed values for the chosen operating tool. Types of Log Data Quality control Log data quality control is a method to assure a certain level of well log data quality and accuracy in the oil company's system. The most effective way to be certain of the quality is to make checks at the time of the data acquisition, while the wireline operation is ongoing or just finished. The best practice is for all three types of log quality checks to be done on-site during a wireline job. According to Storey (2016) there are three types of log data quality control (LQC): Type 1 – Acquisition LQC Quality control applies to original data as they are recorded by the wireline service company and is performed by the wireline service companies' engineers and/or wireline log witnesses. The main objectives are to monitor operational procedures, to ensure that the logging program is followed, communication about progress and problems, recording of any notable events, verifying the precision, accuracy and completeness of acquired data (logs and contextual information), and equipment verifications and calibrations. Type 2 – Acceptance LQC Quality control during the acceptance of data by the oil/gas company (log data recorded and collected by the wireline service company and/or 3rd party). The objective is to verify the data and record for any deviations from original job plans and register them without delay. The purpose is to assure completeness and accuracy, to remedy all unacceptable deviations from the original plan and record all acceptable deviations, to clarify any significant questions or problems, to communicate progress, problems and concerns to subject matter experts, to record the log quality control summary in writing and to accept the data formally. The main concern is the completeness and accuracy of data, and consistency of different components of the data and information set. Type 3 – Pre-exploitation LQC Verifying that the Type 1 and Type 2 LQCs have been done before data exploitation. This control can happen during operations/job to help decide on the next operation, or after the operation/job to construct, constrain, and refine formation and reservoir models, appraise uncertainties and decide on follow-up action. Wireline QA/QC service providers The well site is the front line of well log quality control. Only while the equipment is on-site and the well bore is open is it possible to investigate in-situ formations, and, if necessary, influence decisions which can result in better, higher quality data. If a problem is detected it can be readily addressed by the personnel on hand. A wireline logging job executed well from start to finish makes subsequent steps in oil and gas extraction (the main goal of well site operations) more straightforward and thus ultimately less expensive. It makes it possible to better predict and adapt to unwanted surprises and problems, which usually cause delays (down-time), additional interventions and higher costs.
Service companies Wireline service companies are responsible for delivering complete, accurate and consistently recorded data and information, acquired by correctly calibrated and verified instruments that are correctly used during the logging runs (the standard division of a logging job), and for providing the relevant documentation in hard and soft copy. Wireline service companies provide the personnel and equipment needed and agreed upon, in coordination with the oil company, for any specific logging job on the specific well site (FORASERV, Schlumberger, Baker Hughes, Halliburton and Archer Well, to name a few). Wireline logging QA/QC witness To ensure that the highest quality data is recorded, oil companies engage an experienced expert to supervise the wireline logging job on site, who coordinates the entire operation (with the assistance of the logging engineer and operational geologist) from start to finish and has knowledge of every step of the operation in intimate detail. They are called wireline log witnesses and are either individual consultants or part of wireline QA/QC consulting firms. Their objective is to keep track of the performance of the wireline logging provider, their efficiency and procedures. Some of the notable wireline QA/QC consulting firms are QO Inc., Gaia Earth Sciences Limited, OGEC, one & zero and STAG. The complexity of modern wireline and LWD logs, with several pages of logging job parameters and calibrations, makes this duty very demanding, and even specialized training courses for log witnesses are available (for example from Petroknowledge and Opus Kinetic). Roles and responsibilities of wireline log witness The role of the wireline log witness is all-encompassing and includes coordination, participation, and supervision of all pre-job, during-job (on site), and post-job logging activities: They are responsible for communication with on-site personnel, for reporting all logging objectives and operations to the contractor (usually the oil company) during the job, and for a final report after the job. Their accountability includes following the predetermined logging program and objectives (agreed between the oil/gas company and wireline service company) and approving and recording all subsequent modifications within the limits of the service company's abilities and the oil company's requirements. They need to know what tools are required and what tools are available. They need to make sure the logging crew has checked all the equipment (to ensure all the logging tools are in working condition), that no equipment required for the job is missing, and all tool calibrations and verifications are valid. They are responsible for repeating the tool calibrations and verifications several times during the job to check them. They are obliged to review and discuss the job risk assessment and potential safety issues. They are in charge of implementing "Lessons learned" from previous jobs and taking notes of "Lessons learned" on the present job. Well log QA/QC procedures The foundations of good well log data quality are data consistency, tool calibration, tool reliability, and wireline service company performance. In collecting correct and good quality data, proper and valid calibrations and verifications of all the equipment, consistent checks of complementary readings (e.g. density and neutron logs, or porosity logs) and repeatability of recorded logs are all essential.
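One of the consistency checks noted above, depth matching between a main pass and a repeat pass, can be sketched as a cross-correlation of the two curves. This is a hypothetical illustration, not a service-company workflow; the function name, the half-foot (0.1524 m) sample interval, and the synthetic curves are invented for the sketch.

```python
# Hypothetical sketch: estimate the depth shift between two log passes.
import numpy as np

def estimate_depth_shift(curve_a, curve_b, sample_interval_m=0.1524):
    """Return the shift of curve_b relative to curve_a, in metres."""
    a = (curve_a - curve_a.mean()) / curve_a.std()
    b = (curve_b - curve_b.mean()) / curve_b.std()
    xcorr = np.correlate(a, b, mode="full")
    lag_samples = (len(b) - 1) - xcorr.argmax()  # positive: b recorded deeper
    return lag_samples * sample_interval_m

rng = np.random.default_rng(1)
# Synthetic "gamma-ray" trace (smoothed noise), plus a repeat pass shifted
# by 5 samples with a little extra noise.
main_pass = np.convolve(rng.normal(0.0, 1.0, 400), np.ones(10) / 10, mode="same")
repeat_pass = np.roll(main_pass, 5) + rng.normal(0.0, 0.05, 400)

print(round(estimate_depth_shift(main_pass, repeat_pass), 3))  # ~0.762 m (5 samples)
```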
Basic steps in log quality control are: Having a well-planned and detailed logging program which takes into account site and safety conditions, information about the equipment, acquisition parameters, borehole environmental conditions, and planned operational procedures. The program should be coordinated by the oil/gas company, wireline service company and wireline log witness. Paramount to the on-site job is documentation of all the fieldwork – field logbooks, data collection sheets, and field notes, as well as completely recorded instrument digital data. The highest priorities before the job and during the job are equipment calibrations and verifications. Routine checks of the equipment should be made on a periodic basis and after each problem and repair. An operational check of equipment, along with test measurements, should be carried out before the start of each job and before starting each run. Standard corrections and changes in logging programs made by field engineers or data managers for operational procedures, borehole conditions, previously unknown site conditions and main depth shifts should all be documented. The rationale for each change, and the compromises and consequences that change may represent, should also be documented. Recording the conditions which affect the survey and measurements. When recognized they should be documented to provide guidance for later projects ("Lessons learned"). All the equipment problems, and steps taken to correct these problems, should be documented with insights into how the corrections may affect the data. It is important to review electronically recorded (digital) data (well logs) to ensure that the data recorded and their values are consistent with the setting. The logs should be depth matched (final depth control), compared, and any borehole geometry effect taken into account. Overall data quality and accuracy should be documented. Log Quality Control (LQC) Digitalization Data management is more commonly done using a variety of software packages. Most equipment manufacturers have data transfer (real-time data transmission) or download software provided for their instruments which permits data editing and limited data manipulation. Digitalization and digital solutions are an important part of the oil and gas industry. Harnessing new technologies, as Devold said, "is a critical business need". Digitalization and finding new software solutions is a mandate in logging because the largest errors come from inadequate knowledge of the position and orientation of the measurement sensors and the biggest problems come from human error. This can be partly eliminated by employing smart software solutions. No matter how careful, people make mistakes, equipment fails, accidents happen. The effectiveness of a wireline logging quality management system lies in the ability of the wireline service companies, their engineers and logging witnesses to predict, eliminate, overcome, and compensate for possible problems and limitations. Major steps are taken in the field of log data interpretation and visualisation (e.g.
Interactive Petrophysics by Lloyd's Register, Log Studio by Logtek, Petra® by IHS, GeoSoftware by CGG, JewelSuite by Baker Hughes, Delphi by Schlumberger, Petrosys, Geolog® and GEOSuite7, to name a few), with some software development in the area of well operational performance management and optimisation (RIGIQ® by Trigpoint Solutions, EnergySys), but log data and contextual information management (including wireline program optimisations) is sadly underrepresented. Some of the wireline service QA/QC companies have developed their own internal software solutions to this problem (e.g. the one & zero consulting company), but almost none of the software is open-sourced or commercially available. At present, one commercial software package can be found – RIGPRO, by the same company – that offers solutions for digitally supervising all logging data, including the contextual information, in a format that enables quality QA/QC. References Quality assurance Geophysics
Wireline QA/QC
[ "Physics" ]
3,388
[ "Applied and interdisciplinary physics", "Geophysics" ]
39,962,134
https://en.wikipedia.org/wiki/San%20Rafael%20River%20train%20wreck
The San Rafael River train disaster occurred on August 10, 1989, when the Rio Bamoa Bridge collapsed under an 11-car train operated by Ferrocarriles Nacionales de México, traveling from Mazatlan to Mexicali. Several cars fell into the San Rafael River. The bridge's supports had been damaged by heavy rains, causing them to fail. Of the approximately 330 people on the train, 112 perished, most by drowning, and 205 were injured, making it Mexico's second deadliest peacetime rail disaster. References Railway accidents in 1989 Derailments in Mexico 1989 in Mexico Bridge disasters caused by scour damage Bridge disasters in Mexico
San Rafael River train wreck
[ "Technology" ]
135
[ "Railway accidents and incidents", "Rail accident stubs" ]
39,968,404
https://en.wikipedia.org/wiki/Degenerative%20chain%20transfer
In polymer chemistry, degenerative chain transfer (also called degenerate chain transfer) is a process that can occur in a radical polymerization where the active site is transferred from one site along the polymer chain to another site, without changing the active site's reactivity (hence the term "degenerate," signifying that the pre- and post-transfer active sites have the same energy or reactivity). Thus, the prevalence of degenerative chain transfer in relation to other chain transfer mechanisms has a significant influence on the molecular weight distribution of the resulting product. In chain polymerization processes it is observed that during the chemical reactions an active centre on a growing chain is transferred from a growing macromolecule - P• - or oligomer to another molecule (transfer agent XR) or to another site on the same molecule. P• + XR → PX + R• This transfer involves termination of the initially growing chain to the completed macromolecule PX, where X denotes one end-group of the macromolecule. The example shows that the growing macromolecule as well as the transfer agent are consumed during this process. However, there are also chain transfer reactions that generate new chain carriers and new chain transfer agents at the same time, with significant consequences for the (average) molecular weight distribution, the dispersity Đ and the (average) degree of polymerization of the product. These chain transfer reactions are called degenerative chain transfer reactions; schematically, Pn• + X–Pm → Pn–X + Pm•, where the dormant chain X–Pm releases a new chain carrier Pm• of the same reactivity as Pn•. They are observed, for example, in RAFT-, ITRP-, or TERP-processes. RAFT = reversible addition-fragmentation chain transfer polymerization; ITRP = iodine-transfer polymerization; TERP = telluride-mediated polymerization. These polymerization techniques belong to the class of reversible deactivation radical polymerizations (RDRP), which show some characteristics of a living polymerization; however, they must not be described as living polymerizations, because true living polymerizations are characterized by the absence of any termination reaction. In this sense, a chain-transfer agent RX is a substance that is able to react with a chain carrier by a reaction in which the original chain P• is deactivated and a new chain carrier R• is generated, as shown above. References Polymerization reactions
Degenerative chain transfer
[ "Chemistry", "Materials_science" ]
474
[ "Polymerization reactions", "Polymer chemistry" ]
55,841,611
https://en.wikipedia.org/wiki/Calcium%20signaling%20in%20cell%20division
Calcium plays a crucial role in regulating the events of cellular division. Calcium acts both to modulate intracellular signaling as a secondary messenger and to facilitate structural changes as cells progress through division. Exquisite control of intracellular calcium dynamics is required, as calcium appears to play a role at multiple cell cycle checkpoints. The major downstream calcium effectors are the calcium-binding calmodulin protein and the downstream calmodulin-dependent protein kinases I / II. Evidence points to this signaling cascade as a major mediator of calcium signaling in cell division. Meiosis Historically, one of the most well characterized roles of intracellular calcium is activation of the ovum after sperm fertilization. In deuterostome eggs (mammals, fish, amphibians, ascidians, sea urchins, etc.), successful sperm entry leads to a distinct rise in intracellular calcium ions (Ca2+), with mammals and ascidians displaying a series of intracellular calcium spikes required for completion of meiosis. Unfertilized vertebrate eggs arrest development after meiosis I. This developmental pause is attributed to the vaguely defined cytostatic factor (CSF). Current research suggests "CSF" is actually multiple pathways working together to halt division at metaphase of meiosis II. Upon sperm entry into the egg, Ca2+ is released from intracellular stores, leading to inhibition of the CSF-arrest mechanism. Calmodulin-dependent kinase II was shown to be the protein responsible for converting the Ca2+ influx signal into inhibition of CSF and activation of cyclin degradation machinery to degrade cyclin B, resulting in progression through meiosis II. In mammals, this rise in Ca2+ was shown to be driven by IP3 stimulation induced by PLCζ provided by the sperm. In general, PLC enzymes stimulate calcium release by internal stores through the breakdown of PIP2 into IP3 and DAG. Mitosis Signaling Beyond the events of meiosis, changes in Ca2+ levels are observed in a variety of organisms at different stages of division, such as nuclear membrane breakdown and the metaphase-anaphase transition. Further, recent work has shown mechanically induced rapid entry into mitosis of cells paused in G2. Further, progression through division requires the presence of calcium (G1/S, G2/M, and metaphase/anaphase), suggesting checkpoints require a calcium-dependent signaling mechanism. G1/S Entry into S-phase is calcium dependent. Depleting internal calcium stores inhibits initiation of DNA synthesis. One possible mechanism is that cyclin A synthesis is inhibited, preventing cdk2 activity which is required for initiation of DNA synthesis. G2/M Cell cycle progression is regulated by multiple pathways. It was shown using human cancer cell lines that the G2/M checkpoint is regulated by CaMKII and MAPK crosstalk. Here, CaMKII activates MEK/ERK, which degrades the cell cycle-arresting p27 protein. Disease In general, transformed cells proliferate in a calcium-independent manner, whereas non-transformed cells show high sensitivity to extra-cellular calcium concentration, suggesting oncogenic growth may include disruption of calcium signaling. Chromatin Structure Condensation of chromatin is a vital step in cell division, allowing cells to equally distribute chromosomes to the daughter cells. Recent work has suggested that Ca2+ is required for enabling chromatin condensation in prometaphase. Calcium was found to concentrate on condensed DNA at much higher levels compared to the normal cytosolic calcium concentration.
The role of calcium in condensation was independent of CaMK function, suggesting a purely structural role of Ca2+ in chromatin compaction. Further, this result was demonstrated in vitro with extracted chromatin, emphasizing that the mere presence of Ca2+ can influence the structure of chromatin. References Cell signaling Calcium Cell cycle Cellular processes
Calcium signaling in cell division
[ "Biology" ]
812
[ "Cell cycle", "Cellular processes" ]
55,843,837
https://en.wikipedia.org/wiki/Automated%20machine%20learning
Automated machine learning (AutoML) is the process of automating the tasks of applying machine learning to real-world problems. It is the combination of automation and ML. AutoML potentially includes every stage from beginning with a raw dataset to building a machine learning model ready for deployment. AutoML was proposed as an artificial intelligence-based solution to the growing challenge of applying machine learning. The high degree of automation in AutoML aims to allow non-experts to make use of machine learning models and techniques without requiring them to become experts in machine learning. Automating the process of applying machine learning end-to-end additionally offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models. Common techniques used in AutoML include hyperparameter optimization, meta-learning and neural architecture search. Comparison to the standard approach In a typical machine learning application, practitioners have a set of input data points to be used for training. The raw data may not be in a form that all algorithms can be applied to. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods. After these steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert. Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively. AutoML plays an important role within the broader approach of automating data science, which also includes challenging tasks such as data engineering, data exploration and model interpretation and prediction. Targets of automation Automated machine learning can target various stages of the machine learning process; a minimal sketch follows the list below. Steps to automate are: Data preparation and ingestion (from raw data and miscellaneous formats) Column type detection; e.g., Boolean, discrete numerical, continuous numerical, or text Column intent detection; e.g., target/label, stratification field, numerical feature, categorical text feature, or free text feature Task detection; e.g., binary classification, regression, clustering, or ranking Feature engineering Feature selection Feature extraction Meta-learning and transfer learning Detection and handling of skewed data and/or missing values Model selection - choosing which machine learning algorithm to use, often including multiple competing software implementations Ensembling - a form of consensus where using multiple models often gives better results than any single model Hyperparameter optimization of the learning algorithm and featurization Neural architecture search Pipeline selection under time, memory, and complexity constraints Selection of evaluation metrics and validation procedures Problem checking Leakage detection Misconfiguration detection Analysis of obtained results Creating user interfaces and visualizations
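As a concrete, minimal sketch of several of the targets above, combined model selection and hyperparameter optimization can be expressed as a single search over a pipeline. This example uses scikit-learn's standard RandomizedSearchCV API; the dataset, candidate models, and parameter ranges are illustrative assumptions, not a reference AutoML implementation.

```python
# Minimal sketch: joint algorithm selection + hyperparameter optimization.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# The "clf" step is itself a searchable parameter of the pipeline.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])

# Each dict is one model family with its own hyperparameter ranges.
search_space = [
    {"clf": [LogisticRegression(max_iter=5000)],
     "clf__C": [0.01, 0.1, 1.0, 10.0]},
    {"clf": [RandomForestClassifier(random_state=0)],
     "clf__n_estimators": [50, 100, 200],
     "clf__max_depth": [None, 5, 10]},
]

search = RandomizedSearchCV(pipe, search_space, n_iter=10, cv=5,
                            scoring="accuracy", random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```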
Challenges and limitations There are a number of key challenges being tackled around automated machine learning. A big issue surrounding the field is referred to as "development as a cottage industry". This phrase refers to the issue in machine learning where development relies on manual decisions and biases of experts. This contrasts with the goal of machine learning, which is to create systems that can learn and improve from their own usage and analysis of the data. In essence, it is the tension between how much experts should be involved in the learning of the systems and how much freedom the machines should be given. However, experts and developers must help create and guide these machines to prepare them for their own learning. Creating such a system requires labor-intensive work with knowledge of machine learning algorithms and system design. Additionally, some other challenges include meta-learning challenges and computational resource allocation. See also Artificial intelligence Artificial intelligence and elections Neural architecture search Neuroevolution Self-tuning Neural Network Intelligence ModelOps Hyperparameter optimization References Further reading Ferreira, Luís, et al. "A comparison of AutoML tools for machine learning, deep learning and XGBoost." 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. https://repositorium.sdum.uminho.pt/bitstream/1822/74125/1/automl_ijcnn.pdf Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., & Hutter, F. (2015). Efficient and robust automated machine learning. Advances in neural information processing systems, 28. https://proceedings.neurips.cc/paper_files/paper/2015/file/11d0e6287202fced83f79975ec59a3a6-Paper.pdf Machine learning Artificial intelligence Computational neuroscience
Automated machine learning
[ "Engineering" ]
996
[ "Artificial intelligence engineering", "Machine learning" ]
55,846,592
https://en.wikipedia.org/wiki/Miriam%20Balaban
Miriam Balaban (born in Philadelphia) is a publisher, editor and scientist, recognized for her work in science communication and desalination. She has founded international organizations (European Association of Science Editors; International Federation of Science Editors), conferences (International Conference of Scientific Editors) and journals (Desalination, editor 1966–2009; Desalination and Water Treatment, editor 2009-; Symbiosis 1985-) and has edited numerous journals, conference proceedings and books. She also publishes the Desalination Directory, an international online database. Balaban was a research associate for science communication at Boston University from 1975 to 2008. She became secretary general of the organizing committee of the International Federation of Scientific Editors' Associations (IFSEA) in 1978, and the founder and first president of its successor organization, the International Federation of Science Editors (IFSE) in 1990. She was Professor and founding Dean of the School for Science Communication at the Mario Negri Sud Institute for Biomedical and Pharmacological Research in Italy from 1988 to 1992. She has been the secretary general of the European Desalination Society (EDS) since 1993. Among other awards, Balaban received the Order of the Star of Italian Solidarity from the President of Italy in 2012. Early life Miriam Balaban was born in Philadelphia, Pennsylvania, and graduated from the Philadelphia High School for Girls in 1945. She attended the University of Pennsylvania where she graduated with a B.Sc. degree in chemistry in 1949. Science communication Soon after graduating, Balaban moved to Jerusalem where she found work editing the quarterly scientific bulletin of the Government Research Council. The first volume of the Bulletin of the Research Council of Israel was published by Balaban in both English and Hebrew. Under her direction, publication of the bulletin was greatly expanded, eventually splitting into five separate publications focusing on various areas of science, published through the Weizmann Science Press. In 1958, Balaban established the Israel Program for Scientific Translation, contracting with the United States National Science Foundation to fund the translation of foreign language publications. In the United States, the Council of Biology Editors (CBE), later the Council of Science Editors (CSE), was formed in 1957 following discussions at the Conference of Biological Editors in New Orleans. CBE had the support of the National Science Foundation and the American Institute of Biological Sciences. In 1960, the organization published its first Style Manual for Biological Journals. At the 1964 CBE conference in Ann Arbor, Michigan, a special meeting was held for both American and European editors. This inspired the formation of similar organizations in Europe. Miriam Balaban helped to start the European Association of Editors of Biological Periodicals (EAEBP). EAEBP was formed in April 1967 in Amsterdam, and renamed European Life Science Editors (ELSE) at its first General Assembly, held at the Royal Society in London in 1970. In 1982, ELSE joined with the European Association of Earth Science Editors (Editerra) to form the European Association of Science Editors (EASE). Balaban served multiple terms as treasurer of the organization. 
Concerns of the organization included the development of international standards for science journals, guidelines for authors whose first language was not English, republishing of articles in multiple languages, and the improvement of science communication generally. From 1975 to 2008, Balaban was a research associate at the Center for Philosophy and History of Science at Boston University, focusing on science communication. In 1977–1978 Balaban helped to found the International Federation of Scientific Editors' Associations (IFSEA). The idea for such an organization was put forward at the First International Conference of Scientific Editors, April 24–29, 1977, which Balaban organized. The idea was further discussed at a UNESCO-supported consultation meeting, chaired by Balaban, which was held June 5–6, 1978 in Paris, France. The formation of IFSEA was agreed upon and Balaban was named to the organizing committee as secretary-general. Balaban organized 12 international conferences for scientific editors, beginning in 1977 with the first International Conference of Scientific Editors in Jerusalem. In 1990, IFSEA opened its membership to individuals as well as associations, and became IFSE, the International Federation of Science Editors (variously the International Federation of Scientific Editors), with Balaban as its founder and first president. Balaban is president, publisher and editor of the scientific publishing house International Science Services (ISS). Based in Rehovot, Israel, and L'Aquila, Italy, ISS publishes in a variety of scientific disciplines. Balaban has supported the development of journals both within and outside her own field. In 1985, she worked with lichenologist Margalith Galun to create the journal Symbiosis. The first 48 issues were published by Balaban International Science Services. From October 2009 onwards, the journal has been published by Springer. She established the School for Science Communication at the Mario Negri Sud Institute for Biomedical and Pharmacological Research in Italy where she served as Professor and Dean from 1988 to 1992. Desalination The focus of Balaban's research career has been desalination. In 1966, Balaban founded Desalination, the first international journal for desalting and purification of water, serving as its editor-in-chief from 1966 to 2009. She was succeeded as editor by Nidal Hilal. In 2009 Balaban established and became editor-in-chief of the monthly Desalination and Water Treatment Journal, to accommodate the expanding field. She has reviewed and edited more than 20,000 papers and several books from over 100 countries. She is the editor and publisher of the Desalination Directory. The international online database connects over 30,000 individuals and 5,000 academic and government institutions and companies involved in desalination and water conservation. Balaban has been a member of the International Desalination Association since 1975 and has served as a board member and officer. She is a member of the Scientific Program Committee for the International Desalination Workshop. Since 1993, Balaban has been the secretary general of the European Desalination Society (EDS), located at the Università Campus Bio-Medico, Rome, Italy. Balaban organizes international courses, conferences and workshops in desalination, traveling and speaking internationally. She has been referred to as "the soul of the European Desalination Society".
In her position with the EDS, she served on the Committee to Review the Desalination and Water Purification Technology Roadmap, a document prepared by Sandia National Laboratories and the U.S. Department of the Interior in 2003. The committee's review was published by the National Research Council and the National Academy of Sciences in 2004. It recommended that a more critical focus be taken, examining the steps needed to reach desired long-term objectives and the environmental, economic, and social costs. Balaban is involved with the desalination program at the Center for Clean Water and Energy in the Department of Mechanical Engineering at Massachusetts Institute of Technology (MIT). She is on the Scientific Advisory Council of the Sharing Knowledge Foundation. Awards 2015, named #7 of Top 25 Water Industry Leaders by Water & Wastewater International 2014, Sidney Loeb Award, European Desalination Society 2012, Order of the Star of Italian Solidarity (Stella della solidarietà italiana) from the President of Italy 2009, Lifetime Achievement award, International Desalination Association, Dubai Honorary member of the European Membrane Society (EMS) References 21st-century American women scientists 21st-century American scientists American editors American women editors Jewish American scientists Living people Year of birth missing (living people) University of Pennsylvania School of Arts and Sciences alumni Water desalination 21st-century American women writers American science communicators
Miriam Balaban
[ "Chemistry" ]
1,550
[ "Water treatment", "Water technology", "Water desalination" ]
62,791,988
https://en.wikipedia.org/wiki/Joanna%20McKittrick
Joanna McKittrick (1954 – November 15, 2019) was an American engineer and college professor, the second woman to join the engineering faculty at the University of California, San Diego (UCSD). Early life Joanna McKittrick was born in New Jersey, the daughter of John R. McKittrick and Estella Ruth Pederson McKittrick. Her father was a doctor and her mother was a teacher. She earned a bachelor's degree in mechanical engineering from the University of Colorado, Boulder, and a master's degree from Northwestern University. She completed doctoral studies at the Massachusetts Institute of Technology, in the field of materials science and engineering. Career McKittrick was a professor in the engineering program at the University of California, San Diego, from 1988. She was the second woman to join the program (chemical engineer Jan B. Talbot was the first). McKittrick's research focused on luminescent materials for medical, automotive, and aviation applications, and biomaterials such as chitin, keratin, and collagen. She worked with fellow UCSD professor Marc A. Meyers on biomaterials. "Mother Nature gives us templates," she explained in 2013, of her work studying spiders, seahorses, boxfish, and porcupines. "We are trying to understand them better so we can implement them in new materials." McKittrick was a fellow of the American Ceramic Society. She wrote or co-wrote over a hundred academic journal articles, and was an editor of the Journal of the American Ceramic Society. McKittrick was a mentor for minority and women engineering students, including Lauren Rohwer and Olivia Graeve. She was faculty advisor of UCSD's student chapter of the National Society of Black Engineers. Personal life McKittrick died in November 2019, aged 65 years, at her home in La Jolla, California. Her colleagues held a memorial at the 8th International Conference on Mechanics of Biomaterials and Tissues a month later. References External links Website of the McKittrick Group, showing her UCSD laboratory's ongoing work on luminescent materials and biological materials. An interview with McKittrick at the blog Reigning It (January 19, 2016). 1954 births 2019 deaths American materials scientists University of Colorado Boulder alumni Northwestern University alumni MIT School of Engineering alumni University of California, San Diego faculty 20th-century American women engineers Women materials scientists and engineers 20th-century American engineers 21st-century American engineers 21st-century American women engineers Fellows of the American Ceramic Society
Joanna McKittrick
[ "Materials_science", "Technology" ]
521
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
62,793,183
https://en.wikipedia.org/wiki/Oxybismuthides
Oxybismuthides or bismuthide oxides are chemical compounds formally containing the group BiO, with one bismuth and one oxygen atom. The bismuth and oxygen are not bound together as in bismuthates; instead they are present separately, each bound to the cations (metals), and the compounds could be considered as mixed bismuthide-oxide compounds. A compound with OmBin therefore requires cations to balance a negative charge of 2m+3n. The cations will have charges of +2 or +3. The trications are often rare earth elements or actinides. They are in the category of oxypnictide compounds. Many of the bismuthide oxides have bismuth in an unusual -2 oxidation state (in Ln2BiO2, for example, the two Ln3+ cations contribute +6 against -4 from the two O2- anions, leaving bismuth at -2). The ones with formula Ln2BiO2 have the anti-ThCr2Si2 structure. They include alternating layers of LnO (anti-fluorite-type) and LnBiO. Eu4Bi2O has an anti-K2NiF4 structure, the same as for Na2Ti2As2O. Some other compounds contain calcium and a rare earth: CaRE3BiO4, and Ca2RE8Bi3O10. Some of these compounds are superconductors at very low temperatures and many are semiconductors at standard conditions. References Bismuth compounds Bismuthides Oxides Mixed anion compounds
Oxybismuthides
[ "Physics", "Chemistry" ]
291
[ "Matter", "Mixed anion compounds", "Oxides", "Salts", "Ions" ]
62,793,986
https://en.wikipedia.org/wiki/Non-fullerene%20acceptor
Non-fullerene acceptors (NFAs) are types of acceptors used in organic solar cells (OSCs). The name refers to fullerenes, another type of acceptor molecule which was long used as the main acceptor material for bulk heterojunction organic solar cells; non-fullerene acceptors are defined as acceptors that do not belong to this class. Early research on non-fullerene acceptors did not show promising results compared to fullerene-based organic solar cells. However, recent developments in this field launched a series of new opportunities for NFA-based OSCs. The most important breakthrough was the development of the small molecule acceptors (SMAs). These acceptors show promise as better alternatives to fullerene acceptors because of their properties. The property that makes these SMAs such a big research topic is their tunability. SMAs can be modified to a much greater extent than fullerene acceptors. There are, however, still many improvements to make in the design of SMAs in order for them to become profitable to use in OSCs. Recent research on designing NFA-OSCs showed an efficiency of 15% with a so-called tandem solar cell which made use of non-fullerene acceptors as well as fullerene acceptors. With a good chance that researchers will be able to boost this percentage up to 18%, it is clear that NFA-OSCs have great potential to become a profitable photovoltaic in commercial application. NFA Potential Advantages Fullerene acceptors (FAs) have been used extensively in OSCs. This is rationalized by several characteristics of fullerenes. Their three-dimensional character causes them to be suitable materials for bulk heterojunction structures. Additionally, their electronic configuration (delocalized LUMOs) allows for efficient percolation and high electron mobility. Another consequence is that they are easily coupled to compatible donor polymers. However, fullerene acceptor organic solar cells (FA-OSCs) encounter a limited efficiency. The energy levels in fullerene compounds are relatively constant and difficult to alter. Moreover, they exhibit weak absorption in the visible and near-infrared spectra and low thermal and photochemical stability. The acceptors need to be purified extensively, adding to the economic and temporal disadvantages of using FAs. The organic NFAs, in the form of small molecular acceptors (SMAs), can be used to overcome these fullerene deficiencies. They have more structural degrees of freedom, allowing higher electron affinity tunability; they absorb incidental visible-NIR radiation more strongly; they are more stable; they are compatible with donor polymers and they are (in general) easier to synthesize. NF-OSCs with power conversion efficiencies (PCE) of over 13% have been reported, reaching a higher value than their FA-based counterparts. Disadvantages One of the downsides of using SMAs is the fact that, under atmospheric conditions, they tend to settle into disordered (anisotropic) states as a result of their planar structures. They are often planar as aromaticity is required for sufficient electron mobility. The lack of order may diminish electron transport and effective extraction routes that lead to induced current. Moreover, the corresponding lack of orientation affects donor-acceptor exciton formation. This makes them less compatible with bulk heterojunction blends than FAs.
Another downside to research on SMA usage is the vast range of possible donor-acceptor pairs that scientists are challenged to evaluate. Physics The mechanism of current induction in organic solar cells involves a charge transfer. After electromagnetic absorption and exciton formation in the electron donor polymer, the excited electron is moved towards the acceptor conduction band (LUMO) as a result of its lower energy value than the donor LUMO. This process is called charge separation, and the corresponding energy value satisfies E_CS = E_LUMO(D) - E_LUMO(A) > 0, where CS denotes charge separation, A denotes the acceptor and D denotes the donor molecule. Along with the Coulombic potential that needs to be surpassed, the maximum energy obtained from the process is defined as the charge transfer energy, E_CT. The difference between the optical excitation energy (the optical band gap energy, E_g^opt) and the charge transfer energy, E_g^opt - E_CT, is the driving force of the system. An advantage of NF-OSCs over current fullerene-based OSCs is that the SMAs used are relatively compatible with donors, as a result of their electronic affinity tunability. Their compatibility originates from their LUMO-energy value similarity. The driving force is minimized to solely Coulombic contributions (<0.3 eV) with negligible charge separation loss. This results in low voltage loss, V_loss, which depends explicitly on the value of the driving force, along with radiative and non-radiative losses during the current induction process. Thus, for NF-OSCs, the energy loss qV_loss = E_g^opt - qV_OC, with q the electron's charge, is minimized, leading to a higher useful energy output. The result is a high open-circuit voltage of the solar cell compared to fullerene counterparts, with reports of values as high as 1.1 V. However, the diminished charge separation energy cost negatively influences the tendency of excited electrons in the donor conduction band to transport to the acceptor LUMO as it is less preferred energetically. This gives rise to the fact that electrons induced in the current are more energetic, but fewer electrons are induced. This means that the short-circuit current density and the fill factor (FF) are decreased. In terms of the PCE, the higher open-circuit voltage is compensated by the lower short-circuit current density and fill factor. Researchers showed that ultrafast charge separation is possible with negligible driving force. In fact, the electrical external quantum efficiency is highest for donor-acceptor blends with the lowest driving force. Types One of the main advantages of the non-fullerene acceptors is their ability to be tuned and customized by chemical modification, in contrast to fullerene acceptors. It also immediately creates a bottleneck because of the huge number of possible molecules that could be applied as an SMA. A wide variety of SMAs have been tested as acceptors, but two classes of SMAs have proven to give the best results concerning power conversion efficiency (PCE) and have made the greatest contribution to the recent development in NFA-OSCs. Rylene diimides Rylene diimides are, as said, one of the two main subclasses which form a basis for acceptor molecules in modern NFA-OSCs. Rylene diimides are industrial dyes and can be divided into, once again, two subclasses: perylene diimides (PDIs) and naphthalene diimides (NDIs). Rylene diimides consist of a planar rylene framework, and numerous constructions can be made by attaching certain subgroups and by using more PDI molecules in one acceptor.
The mono-PDI molecule is shown in the figure on the right. Rylene diimides are considered good acceptors because of their favourable properties. They usually have high electron mobility owing to intermolecular π-stacking, with values comparable to those of fullerene acceptors. Furthermore, rylene diimides show strong absorbance in the visible region, high thermal and oxidative stability, and electron affinities that can be tuned to a great extent by adding side groups and three-dimensional structure, which leads to a significantly higher open-circuit voltage ($V_{\mathrm{OC}}$). The main challenge in designing and improving rylene diimide based OSCs concerns the synthesis of PDIs: the planar structure of the molecule means that it tends to aggregate into a crystal structure. This greatly enlarges the domain size in the bulk heterojunction, beyond the preferred 20 nm, which lowers the charge transport ability. Researchers have tried to reduce this aggregation with three structural adaptations, all aimed at enhancing the mobility of rylene diimide molecules. The first approach links two PDI molecules with a single carbon bond to form a so-called twisted dimer. The second forms highly twisted 3D structures of PDI molecules, and the third forms a fused-ring structure. For each of the three approaches an example molecule is shown in the figure below. These derivatives are examples of acceptor molecules that were tested and assessed in OSCs for their performance and PCE. Future research will focus on developing better PDIs that yield higher PCE values for the OSC.
Fused-ring electron acceptors
Fused-ring electron acceptors (FREAs) are completely different from rylene diimides. They consist of a donor group flanked by two electron-withdrawing groups; the donor group is a π-bridge of fused aromatic rings. FREAs have electron mobilities similar to those of fullerene acceptors and have a wide absorption range. Electron affinities can be tuned by substituting the side chains, the core and the end groups, and current research focuses on designing the best FREA by varying all these groups. Another development issue is the expensive synthesis of these molecules; finding the most efficient synthetic route is therefore also an important research subject for these acceptors.
Future development
In current research, rylene diimides (for small band-gap donors) and FREAs (for large band-gap donors) have shown the most potential for becoming commercially viable solar cell materials for bulk heterojunction blend cells. Wide band gap donors are known to enhance voltage and diminish current density, but in combination with FREAs both values can be relatively high. There are still many improvements to be made before an NFA-OSC can be commercially profitable. First of all, the PCE should be increased to at least 15%, the minimal value for commercial application. As PCEs have already exceeded 13%, recent development is on the right track. PCEs can be increased by designing even better NFAs; for instance, the electron mobility of the best NFAs still lags behind that of the best FAs. Improvements can also be made in the following aspects: better donor matching, tandem constructions, BHJ morphology and the domain purity of the donor and acceptor.
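For context on the efficiency percentages quoted above, the power conversion efficiency ties together exactly the three device quantities discussed in the Physics section. A minimal worked example follows, with illustrative assumed values rather than data from any specific NFA-OSC:

```latex
\[
\mathrm{PCE} \;=\; \frac{J_{\mathrm{SC}} \, V_{\mathrm{OC}} \, \mathrm{FF}}{P_{\mathrm{in}}}
\]
% Illustrative (assumed) values under standard AM1.5G illumination,
% with P_in = 100 mW/cm^2:
\[
\mathrm{PCE} \;=\; \frac{(20\ \mathrm{mA/cm^2}) \times (0.90\ \mathrm{V}) \times 0.75}{100\ \mathrm{mW/cm^2}}
\;=\; 0.135 \;=\; 13.5\%
\]
```

This makes the trade-off discussed above explicit: raising $V_{\mathrm{OC}}$, as NFAs do, only raises the PCE if $J_{\mathrm{SC}}$ and FF do not fall proportionally.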
Besides these theoretical research aspects, implementation in a full-size commercial solar cell also brings many challenges, such as easy and sustainable device fabrication methods and the long-term stability of the organic compounds. Studies also show that the PCE generally drops with upscaling. In all of these areas NFA-OSCs show great potential, but it will take much more research before a solid non-fullerene acceptor organic solar cell can compete with inorganic solar cells. See also Rylene dye References Organic solar cells
Non-fullerene acceptor
[ "Chemistry", "Materials_science" ]
2,257
[ "Organic solar cells", "Polymer chemistry" ]
62,799,717
https://en.wikipedia.org/wiki/Molybdenum%28IV%29%20bromide
Molybdenum(IV) bromide, also known as molybdenum tetrabromide, is the inorganic compound with the formula MoBr4. It is a black solid. MoBr4 has been prepared by treatment of molybdenum(V) chloride with hydrogen bromide: 2 MoCl5 + 10 HBr → 2 MoBr4 + 10 HCl + Br2 The reaction proceeds via the unstable molybdenum(V) bromide, which releases bromine at room temperature. Molybdenum(IV) bromide can also be prepared by oxidation of molybdenum(III) bromide with bromine. References Bromides Molybdenum halides Molybdenum(IV) compounds
Molybdenum(IV) bromide
[ "Chemistry" ]
162
[ "Bromides", "Salts" ]
65,646,215
https://en.wikipedia.org/wiki/Hexaaza-18-crown-6
Hexaaza-18-crown-6 is the macrocyclic ligand with the formula (CH2CH2NH)6. A white solid, this compound has attracted attention as the N6-analogue of 18-crown-6. It functions as a hexadentate ligand in coordination chemistry. It is the parent hexaaza-crown ether. Its protonated derivatives bind anions via multiple hydrogen bonds. References Polyamines Ethyleneamines Chelating agents Macrocycles Hexadentate ligands
Hexaaza-18-crown-6
[ "Chemistry" ]
111
[ "Organic compounds", "Chelating agents", "Macrocycles", "Process chemicals" ]
65,647,112
https://en.wikipedia.org/wiki/Portogloboviridae
Portogloboviridae is a family of dsDNA viruses that infect archaea. It is a proposed family of the realm Varidnaviria, but the ICTV officially classifies it as an incertae sedis virus family. Viruses in the family are related to Helvetiavirae. The capsid proteins of these viruses and their characteristics are of evolutionary importance for the origin of the other Varidnaviria viruses, since they seem to retain primordial characters. Description The virions in this family have a capsid with icosahedral geometry and a viral envelope that protects the genetic material. The diameter is 83 to 87 nanometers. The genome is circular dsDNA with a length of 20,222 base pairs. The genome contains 45 open reading frames (ORFs), which are closely arranged and occupy 89.1% of the genome. ORFs are generally short, with an average length of 103 codons. Virions have 10 proteins ranging from 20 to 32 kDa. Of these proteins, eight are associated with the capsid and two with the viral envelope; among them is a vertical single jelly roll (SJR) capsid protein. Entry into the host cell is by penetration. Viral replication occurs by chronic infection without a lytic cycle. The Portogloboviridae viruses, together with the Halopanivirales, have evolutionary importance for the evolution of the other Varidnaviria viruses, since they appear to preserve the state of the earliest viruses of this realm. Portogloboviridae together with Halopanivirales may have infected the last universal common ancestor (LUCA) and originated before that organism. It has also been proposed that the family is related to the origin of Varidnaviria. Taxonomy The family has one genus which has two species: Alphaportoglobovirus Alphaportoglobovirus SPV2 Sulfolobus alphaportoglobovirus 1 References DNA viruses
Portogloboviridae
[ "Biology" ]
400
[ "Viruses", "DNA viruses" ]
65,647,556
https://en.wikipedia.org/wiki/Nomurabacteria
Nomurabacteria is a candidate phylum of bacteria belonging to the CPR group. They are ultra-small bacteria that have been found in a wide variety of environments, mainly in sediments under anaerobic conditions. Bacteria of this phylum share several characteristics with other ultra-small bacteria: nanometric size, small genomes, reduced metabolism, and a low capacity to synthesize nucleotides and amino acids. They also lack respiratory chains and the Krebs cycle. In addition, many can be endosymbionts of larger bacteria. Phylogenetic analyses have suggested that Nomurabacteria and the other ultra-small bacteria make up the most basal clade of all bacteria. The archaea of the DPANN group are ultra-small archaea that share the same characteristics with these bacteria and are the most basal group of the archaeo-eukaryotic clade, although they may also be paraphyletic with respect to eukaryotes and the other archaea, as discussed below. In some phylogenetic analyses of the proteome, ultra-small bacteria emerge outside the traditional bacterial domain and instead form a paraphyletic group relative to traditional Bacteria and the clade composed of archaea and eukaryotes. In these analyses Nomurabacteria turns out to be the most basal clade of all cellular organisms. Phylogeny Proteome analyses have shown that Nomurabacteria may be the most basal clade of cellular organisms and that the other CPR bacteria form a paraphyletic group, as shown by cladograms of the phylogenetic relationships among multiple bacteria, archaea and eukaryotes. References Bacteria Candidatus taxa
Nomurabacteria
[ "Biology" ]
356
[ "Prokaryotes", "Microorganisms", "Bacteria" ]
65,657,913
https://en.wikipedia.org/wiki/Spike%20response%20model
The spike response model (SRM) is a spiking neuron model in which spikes are generated by either a deterministic or a stochastic threshold process. In the SRM, the membrane voltage is described as a linear sum of the postsynaptic potentials (PSPs) caused by spike arrivals, to which the effects of refractoriness and adaptation are added. The threshold is either fixed or dynamic; in the latter case it increases after each spike. The SRM is flexible enough to account for a variety of neuronal firing patterns in response to step current input. The SRM has also been used in the theory of computation to quantify the capacity of spiking neural networks, and in the neurosciences to predict the subthreshold voltage and the firing times of cortical neurons during stimulation with a time-dependent current. The name Spike Response Model points to the property that the two important filters $\varepsilon$ and $\eta$ of the model can be interpreted as the response of the membrane potential to an incoming spike (response kernel $\varepsilon$, the PSP) and to an outgoing spike (response kernel $\eta$, also called refractory kernel). The SRM has been formulated in continuous time and in discrete time. The SRM can be viewed as a generalized linear model (GLM) or as an (integrated version of a) generalized integrate-and-fire model with adaptation.

Model equations for SRM in continuous time
In the SRM, at each moment in time $t$, a spike can be generated stochastically with instantaneous stochastic intensity or 'escape function' $\rho(t)$ that depends on the momentary difference between the membrane voltage $V(t)$ and the dynamic threshold $\vartheta(t)$. The membrane voltage at time $t$ is given by

$$V(t) = \sum_f \eta(t - t^f) + \int_0^\infty \kappa(s)\, I(t-s)\, \mathrm{d}s + V_{\mathrm{rest}}$$

where $t^f$ is the firing time of spike number $f$ of the neuron, $V_{\mathrm{rest}}$ is the resting voltage in the absence of input, $I(t-s)$ is the input current at time $t-s$ and $\kappa(s)$ is a linear filter (also called kernel) that describes the contribution of an input current pulse at time $t-s$ to the voltage at time $t$. The contributions to the voltage caused by a spike at time $t^f$ are described by the refractory kernel $\eta(t - t^f)$. In particular, $\eta(t - t^f)$ describes the time course of the action potential starting at time $t^f$ as well as the spike-afterpotential. The dynamic threshold is given by

$$\vartheta(t) = \vartheta_0 + \sum_f \theta_1(t - t^f)$$

where $\vartheta_0$ is the firing threshold of an inactive neuron and $\theta_1(t - t^f)$ describes the increase of the threshold after a spike at time $t^f$. In the case of a fixed threshold [i.e., $\theta_1(t - t^f) = 0$], the refractory kernel $\eta$ should include only the spike-afterpotential, but not the shape of the spike itself. A common choice for the 'escape rate' $\rho(t)$ (that is consistent with biological data) is

$$\rho(t) = \frac{1}{\tau_0} \exp\!\left[\beta\,(V(t) - \vartheta(t))\right]$$

where $\tau_0$ is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold and $\beta$ is a sharpness parameter. For $\beta \to \infty$ the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments implies that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold. The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook Neuronal Dynamics. In a network of N SRM neurons $1 \le i \le N$, the membrane voltage of neuron $i$ is given by

$$V_i(t) = \sum_f \eta_i(t - t_i^f) + \sum_{j=1}^{N} w_{ij} \sum_{f'} \varepsilon_{ij}(t - t_j^{f'}) + V_{\mathrm{rest}}$$

where $t_j^{f'}$ are the firing times of neuron $j$ (i.e., its spike train), $\eta_i(t - t_i^f)$ describes the time course of the spike and the spike after-potential for neuron $i$, and $w_{ij}$ and $\varepsilon_{ij}(t - t_j^{f'})$ describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike of the presynaptic neuron $j$.
The time course of the PSP, $\varepsilon_{ij}$, results from the convolution of the membrane filter $\kappa$ with the postsynaptic current caused by the arrival of a presynaptic spike from neuron $j$.

Model equations for SRM in discrete time
For simulations, the SRM is usually implemented in discrete time. In a time step of duration $\Delta t$, a spike is generated with probability

$$P_F(t) = F\big(V(t) - \vartheta(t)\big)$$

that depends on the momentary difference between the membrane voltage $V(t)$ and the dynamic threshold $\vartheta(t)$. The function F is often taken as a standard sigmoidal $F(x) = 1/(1 + \exp(-\beta x))$ with steepness parameter $\beta$. But the functional form of F can also be calculated from the stochastic intensity $\rho$ in continuous time as $F(x) \approx 1 - \exp[-\rho(x)\,\Delta t]$, where $x = V - \vartheta$ is the distance to threshold. The membrane voltage in discrete time is given by

$$V(t) = \sum_f \eta(t - t^f) + \sum_{s \ge 0} \kappa(s)\, I(t-s) + V_{\mathrm{rest}}$$

where $t^f$ is the discretized firing time of the neuron, $V_{\mathrm{rest}}$ is the resting voltage in the absence of input, and $I(t-s)$ is the input current at time $t-s$ (integrated over one time step). The input filter $\kappa$ and the spike-afterpotential $\eta$ are defined as in the case of the SRM in continuous time. For networks of SRM neurons in discrete time we define the spike train of neuron j as a sequence of zeros and ones, $X_j(t) \in \{0, 1\}$, and rewrite the membrane potential as

$$V_i(t) = \sum_{t'} \eta_i(t - t')\, X_i(t') + \sum_{j=1}^{N} w_{ij} \sum_{t'} \varepsilon_{ij}(t - t')\, X_j(t') + V_{\mathrm{rest}}$$

In this notation, the refractory kernel $\eta_i$ and the PSP shape $\varepsilon_{ij}$ can be interpreted as linear response filters applied to the binary spike trains $X_j$.

Main applications of the SRM
Theory of computation with pulsed neural networks
Since the formulation as SRM provides an explicit expression for the membrane voltage (without the detour via differential equations), SRMs have been the dominant mathematical model in a formal theory of computation with spiking neurons.

Prediction of voltage and spike times of cortical neurons
The SRM with dynamic threshold has been used to predict the firing times of cortical neurons with a precision of a few milliseconds. Neurons were stimulated, via current injection, with time-dependent currents of different means and variances while the membrane voltage was recorded. The reliability of predicted spikes was close to the intrinsic reliability when the same time-dependent current was repeated several times. Moreover, extracting the shape of the filters $\kappa$ and $\eta$ directly from the experimental data revealed that adaptation extends over time scales from tens of milliseconds to tens of seconds. Thanks to the convexity properties of the likelihood in generalized linear models, parameter extraction is efficient.

Associative memory in networks of spiking neurons
SRM0 neurons have been used to construct an associative memory in a network of spiking neurons. The SRM network, which stored a finite number of stationary patterns as attractors using a Hopfield-type connectivity matrix, was one of the first examples of attractor networks with spiking neurons.

Population activity equations in large networks of spiking neurons
For SRM neurons, an important variable characterizing the internal state of the neuron is the time since the last spike (or 'age' of the neuron), which enters into the refractory kernel $\eta$. The population activity equations for SRM neurons can be formulated alternatively either as integral equations, or as partial differential equations for the 'refractory density'. Because the refractory kernel may include a time scale slower than that of the membrane potential, the population equations for SRM neurons provide powerful alternatives to the more broadly used partial differential equations for the 'membrane potential density'. Reviews of the population activity equation based on refractory densities can be found in the literature as well as in Chapter 14 of the textbook Neuronal Dynamics.
Spike patterns and temporal code
SRMs are useful for understanding theories of neural coding. A network of SRM neurons can store attractors that form reliable spatio-temporal spike patterns (also known as synfire chains), an example of temporal coding for stationary inputs. Moreover, the population activity equations for SRM neurons exhibit temporally precise transients after a stimulus switch, indicating reliable spike firing.

History and relation to other models
The spike response model was introduced in a series of papers between 1991 and 2000. The name Spike Response Model probably appeared for the first time in 1993. Some papers used exclusively the deterministic limit with a hard threshold; others the soft threshold with escape noise. Precursors of the spike response model are the integrate-and-fire model introduced by Lapicque in 1907 as well as models used in auditory neuroscience.

SRM0
An important variant of the model is SRM0, which is related to time-dependent nonlinear renewal theory. The main difference from the voltage equation of the SRM introduced above is that in the term containing the refractory kernel there is no summation sign over past spikes: only the most recent spike matters. The model SRM0 is closely related to the inhomogeneous Markov interval process and to age-dependent models of refractoriness.

GLM
The equations of the SRM as introduced above are equivalent to generalized linear models (GLM) in neuroscience. In neuroscience, GLMs were introduced as an extension of the linear-nonlinear-Poisson model (LNP) by adding self-interaction of an output spike with the internal state of the neuron (therefore also called 'recursive LNP'). The self-interaction is equivalent to the kernel $\eta$ of the SRM. The GLM framework makes it possible to formulate a maximum likelihood approach applied to the likelihood of an observed spike train under the assumption that an SRM could have generated the spike train. Despite the mathematical equivalence there is a conceptual difference in interpretation: in the SRM the variable V is interpreted as membrane voltage, whereas in the recursive LNP it is a 'hidden' variable to which no meaning is assigned. The SRM interpretation is useful if measurements of subthreshold voltage are available, whereas the recursive LNP is useful in systems neuroscience where spikes (in response to sensory stimulation) are recorded extracellularly without access to the subthreshold voltage.

Adaptive leaky integrate-and-fire models
A leaky integrate-and-fire neuron with spike-triggered adaptation has a subthreshold membrane potential generated by the following differential equations

$$\tau_m \frac{\mathrm{d}V}{\mathrm{d}t} = -(V - V_{\mathrm{rest}}) - R \sum_k w_k + R\, I(t)$$

$$\tau_k \frac{\mathrm{d}w_k}{\mathrm{d}t} = a_k\,(V - V_{\mathrm{rest}}) - w_k + b_k \tau_k \sum_f \delta(t - t^f)$$

where $\tau_m$ is the membrane time constant, $w_k$ is an adaptation current with index $k$, $V_{\mathrm{rest}}$ is the resting potential and $t^f$ is the firing time of the neuron, and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value below the firing threshold. Integration of the linear differential equations gives a formula identical to the voltage equation of the SRM. However, in this case, the refractory kernel does not include the spike shape but only the spike-afterpotential. In the absence of adaptation currents, we retrieve the standard LIF model, which is equivalent to a refractory kernel that decays exponentially with the membrane time constant $\tau_m$.
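To make the discrete-time formulation above concrete, here is a minimal simulation sketch. The kernel shapes and parameter values are illustrative assumptions consistent with the equations above, not fitted values from the SRM literature:

```python
import numpy as np

# Minimal discrete-time SRM simulation sketch with exponential escape noise.
rng = np.random.default_rng(0)

dt = 1.0           # time step (ms)
T = 1000           # number of time steps
V_rest = -65.0     # resting potential (mV)
theta0 = -50.0     # baseline threshold (mV)
tau0 = 1.0         # escape-rate time constant (ms)
beta = 1.0         # soft-threshold sharpness (1/mV)

L = 200                                            # kernel length (steps)
s = np.arange(1, L) * dt                           # lags for spike-triggered kernels
kappa = np.exp(-np.arange(L) * dt / 10.0) / 10.0   # input filter kappa(s)
eta = -8.0 * np.exp(-s / 30.0)                     # refractory kernel (afterpotential)
theta1 = 5.0 * np.exp(-s / 50.0)                   # spike-triggered threshold increase

I = 20.0 * np.ones(T)                              # constant input current (arb. units)
drive = np.convolve(I, kappa)[:T] * dt             # sum over kappa(s) * I(t - s)

V = np.zeros(T)
spikes = np.zeros(T, dtype=bool)

for t in range(T):
    v, th = V_rest + drive[t], theta0
    # Sum refractory and threshold kernels over all past spikes in the window.
    lo = max(0, t - L + 1)
    for tf in np.flatnonzero(spikes[lo:t]) + lo:
        lag = t - tf                               # 1 <= lag <= L-1
        v += eta[lag - 1]
        th += theta1[lag - 1]
    V[t] = v
    # Escape noise: rho = exp(beta*(V - theta))/tau0, P(spike) = 1 - exp(-rho*dt).
    rho = np.exp(beta * (v - th)) / tau0
    if rng.random() < 1.0 - np.exp(-rho * dt):
        spikes[t] = True

print(f"{int(spikes.sum())} spikes in {T * dt:.0f} ms")
```

Because the voltage is an explicit sum of filtered inputs and past spikes, no differential equation is integrated inside the loop; this is the property that makes the SRM convenient for both simulation and GLM-style likelihood fitting.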
External links Spike Response Model, Chapter 6.4 of the textbook Neuronal Dynamics 'soft threshold' and escape noise, Chapter 9 of the textbook Neuronal Dynamics Quasi-Renewal Theory Chapter 14 of the textbook Neuronal Dynamics. Spike Response Model, from Scholarpedia References Biophysics Computational neuroscience Neuroscience
Spike response model
[ "Physics", "Biology" ]
2,172
[ "Neuroscience", "Applied and interdisciplinary physics", "Biophysics" ]
42,777,908
https://en.wikipedia.org/wiki/Prediction%20of%20crystal%20properties%20by%20numerical%20simulation
The prediction of crystal properties by numerical simulation has become commonplace in the last 20 years as computers have grown more powerful and theoretical techniques more sophisticated. High-accuracy prediction of elastic, electronic, transport and phase properties is possible with modern methods.

Ab Initio Calculations
Ab initio or first-principles calculations are performed by any of a number of software packages that make use of density functional theory to solve for the quantum mechanical state of a system. Perfect crystals are an ideal subject for such calculations because of their high periodicity. Since every simulation package varies in the details of its algorithms and implementations, this page focuses on a methodological overview.

Basic theory
Density functional theory seeks to solve for an approximate form of the electronic density of a system. In general, atoms are split into ionic cores and valence electrons. The ionic cores (nuclei plus non-bonding electrons) are assumed to be stable and are treated as a single object. Each valence electron is treated separately. Thus, for example, a lithium atom is treated as two bodies – Li+ and e− – while oxygen is treated as three bodies, namely O2+ and 2e−. The "true" ground state of a crystal system is generally unsolvable. However, the variational theorem assures us that any guess as to the electronic state function of a system will overestimate the ground state energy. Thus, by beginning with a suitably parametrized guess and minimizing the energy with respect to each of those parameters, an extremely accurate prediction may be made. The question of what one's initial guess should be is a topic of active research. In the large majority of crystal systems, electronic relaxation times are orders of magnitude shorter than ionic relaxation times, so an iterative scheme is adopted. First, the ions are considered fixed and the electronic state is relaxed by considering the ionic and electron-electron pair potentials. Next, the electronic states are considered fixed and the ions are allowed to move under the influence of the electronic and ion-ion pair potentials. When the decrease in energy between two iterative steps is sufficiently small, the structure of the crystal is considered solved.

Boundary conditions
A key choice that must be made is how many atoms to include explicitly in one's calculation. In Big-O notation, calculations generally scale as O(N³), where N is the number of combined ions and valence electrons. For structure calculations, it is generally desirable to choose the smallest number of ions that can represent the structure. For example, NaCl has the cubic rock-salt structure. At a first guess, one might construct a cell of two interlocked cubes – 8 Na and 8 Cl – as one's unit cell. This will give the correct answer but is computationally wasteful. By choosing appropriate coordinates, one might simulate it with just two atoms: 1 Na and 1 Cl. Crystal structure calculations rely on periodic boundary conditions. That is, the assumption is that the cell you have chosen is in the midst of an infinite lattice of identical cells. By taking our 1 Na 1 Cl cell and copying it many times along each of the crystal axes, we would simulate the same superstructure as our 8 Na 8 Cl cell but at much reduced computational cost.

Raw output
In general, only a few lists of information are output from a calculation. For the ions, the position, velocity and net force on each ion are recorded at each step.
For electrons, the guess as to the electronic state function may be recorded as well. Finally, the total energy of the system is recorded. From these three types of information, we may deduce a number of properties.

Calculable properties
Unit cell parameters
Unit cell parameters (a, b, c, α, β, γ) can be computed from the final relaxed positions of the ions. In a NaCl calculation, the final position of the Na ion might be (0,0,0) in picometre Cartesian coordinates and the final position of the Cl ion might be (282,282,282). From this, we see that the lattice constant would be 564 pm. For non-orthorhombic systems, the determination of cell parameters might be more complicated, but many ab initio numerical packages have utilities to make this calculation simpler. Once the lattice cell parameters are known, patterns for single crystal or powder diffraction can be readily predicted via Bragg's law.

Temperature and pressure
The temperature of the system can be estimated using the equipartition theorem, with three degrees of freedom for each ion. Since ionic velocities are generally recorded at each step in the numerical simulation, the average kinetic energy of each ion is easy to calculate. There exist schemes which attempt to control the temperature of the simulation by, e.g., forcing each ion to have exactly the kinetic energy predicted by the equipartition theorem (Berendsen thermostat) or by allowing the system to exchange energy and momentum with a (more massive) fictitious enclosing system (Nosé–Hoover thermostat). The net force on each ion is generally calculated explicitly at each numerical step. From this, the stress tensor of the system can be calculated, and usually is calculated by the numerical package. By varying the convergence criteria, one can either seek a lowest-energy structure or a structure that produces a desired stress tensor. Thus, high pressures can be simulated as easily as ambient pressures.

Elastic properties
The Young's modulus of a mineral can be predicted by varying one cell parameter at a time and observing the evolution of the stress tensor. Because the raw output of a simulation includes energy and volume, the integrated version of the Birch–Murnaghan equation of state is often used to determine the bulk modulus.

Electronic density of states
The electronic density functional is explicitly used in the calculation of the electronic ground state. Packages such as VASP have an option to calculate the electronic density of states per eV to facilitate the prediction of conduction bands and band gaps.

Thermal transport properties
The Green–Kubo relations can be used to calculate the thermal transport properties of a mineral. Since the velocities of the ions are stored at each numerical step, one can calculate the time correlation of later velocities with earlier velocities. The integral of these correlations is related to the thermal conductivity that appears in Fourier's law.

Diffusion
By recording the ionic positions at each time step, one can observe how far, on average, each ion has moved from its original position. The mean squared displacement of each ion type is related to the diffusion coefficient for a particle undergoing Brownian motion; a minimal numerical sketch is given below. References Crystallography
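As an illustration of the diffusion analysis described above, here is a minimal sketch. The trajectory is synthetic (a random walk with a known diffusion coefficient stands in for real simulation output), and the Einstein relation MSD ≈ 6Dt for 3D Brownian motion is assumed:

```python
import numpy as np

# Diffusion coefficient from ionic trajectories via the Einstein relation
# MSD(t) ~ 6*D*t. The trajectory below is synthetic; in practice it would
# come from the recorded ionic positions of an ab initio simulation.

rng = np.random.default_rng(1)
n_steps, n_ions, dt = 2000, 64, 1e-3     # dt in ps (illustrative)
D_true = 0.5                              # target diffusion coefficient (Å²/ps)

# Synthetic random-walk trajectory with the chosen D (positions in Å):
# per-step variance of 2*D*dt in each Cartesian direction.
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, n_ions, 3))
traj = np.cumsum(steps, axis=0)

# Mean squared displacement from the initial positions, averaged over ions.
disp = traj - traj[0]
msd = (disp ** 2).sum(axis=2).mean(axis=1)    # shape: (n_steps,)

# Fit MSD = 6*D*t over the linear regime (skip the first few frames).
t = np.arange(n_steps) * dt
slope = np.polyfit(t[100:], msd[100:], 1)[0]
print(f"estimated D = {slope / 6:.3f} Å²/ps (target {D_true})")
```

The same time-series bookkeeping (store positions and velocities every step, post-process at the end) underlies the equipartition temperature estimate and the Green–Kubo integrals mentioned above.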
Prediction of crystal properties by numerical simulation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,349
[ "Crystallography", "Condensed matter physics", "Materials science" ]
42,779,491
https://en.wikipedia.org/wiki/Le%20Bail%20method
Le Bail analysis is a whole diffraction pattern profile fitting technique used to characterize the properties of crystalline materials, such as their structure. It was invented by Armel Le Bail around 1988.

Background
The Le Bail method extracts intensities (Ihkl) from powder diffraction data. This is done in order to find intensities that are suitable for determining the atomic structure of a crystalline material and for refining the unit cell; it has the added advantage of checking phase purity. Generally, the intensities of powder diffraction data are complicated by overlapping diffraction peaks with similar d-spacings. For the Le Bail method, the unit cell and the approximate space group of the sample must be predetermined, because they are included as part of the fitting technique. The algorithm involves refining the unit cell, the profile parameters, and the peak intensities to match the measured powder diffraction pattern. It is not necessary to know the structure factor and associated structural parameters, since they are not considered in this type of analysis. Le Bail fitting can be used to find phase transitions in high-pressure and high-temperature experiments. It generally provides a quick method to refine the unit cell, which allows better experimental planning. Le Bail analysis provides a more reliable estimate of the intensities of allowed reflections for different crystal symmetries. Crystallographic structure determination can be accomplished in multiple ways. The Le Bail technique is relevant for diffraction studies that involve using a radiation source, which may be neutron or synchrotron, to collect a high-resolution, high-quality powder diffraction profile. Initially, peak positions are found in the data. Next, the pattern is indexed in order to determine the unit cell or lattice parameters. Then space group determination follows, based on symmetry and the presence or absence of certain reflections. Then either the Le Bail or the Pawley technique may be used to extract intensities and refine the unit cell.

Refinement
Le Bail analysis fits parameters using a steepest descent minimization process. Specifically, the method is a least squares analysis, an iterative process discussed later in this article. The parameters being fitted include the unit cell parameters, the instrumental zero error, peak width parameters, and peak shape parameters. First, the Le Bail method assigns an arbitrary starting value to each intensity. This value is ordinarily set to one, but other values may be used. While peak positions are constrained by the unit cell parameters, intensities are unconstrained. The intensity attributed to a reflection is obtained by partitioning the observed profile among the calculated peaks:

$$I_{hkl}(\mathrm{obs}) = \sum_i y_i(\mathrm{obs})\, \frac{y_i(hkl, \mathrm{calc})}{y_i(\mathrm{calc})}$$

In the equation, $I_{hkl}(\mathrm{obs})$ is the 'observed' intensity extracted for reflection hkl, $y_i(\mathrm{obs})$ is the observed profile point at step i, $y_i(hkl, \mathrm{calc})$ is the contribution of reflection hkl to the calculated profile at that step, and $y_i(\mathrm{calc})$ is the total calculated profile point. A single profile point may contain contributions from more than one peak; for two overlapping peaks, for example, the total calculated profile point is $y_i(\mathrm{calc}) = y_i(1) + y_i(2)$. The summation is carried out over all contributing profile points for a particular 2θ bin. The summation process is known as profile intensity partitioning, and it works over any number of peaks. The Le Bail technique works especially well with overlapping intensities, since the observed intensity is allotted among the overlapping peaks in proportion to their calculated contributions. The somewhat arbitrary choice of starting values produces a bias in the calculated values.
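As an illustration of the partitioning formula above, the following is a minimal numerical sketch. The observed profile, peak positions and widths are synthetic assumptions, and real programs simultaneously refine cell and profile parameters at each cycle:

```python
import numpy as np

# Minimal sketch of Le Bail profile intensity partitioning for two
# overlapping reflections with known positions and a fixed peak shape.

two_theta = np.linspace(20.0, 30.0, 1000)
dx = two_theta[1] - two_theta[0]

def peak(center, fwhm=0.4):
    """Unit-height Gaussian peak shape."""
    sigma = fwhm / 2.3548
    return np.exp(-0.5 * ((two_theta - center) / sigma) ** 2)

centers = [24.0, 24.6]                    # two overlapping reflections
true_I = [3.0, 1.5]                       # intensities to be recovered
y_obs = sum(I * peak(c) for I, c in zip(true_I, centers))

I_extracted = [1.0, 1.0]                  # Le Bail starting guess: all ones
for cycle in range(10):
    contribs = [I * peak(c) for I, c in zip(I_extracted, centers)]
    y_calc = np.sum(contribs, axis=0) + 1e-12      # total calculated profile
    # Partition each observed profile point among the peaks in proportion
    # to their calculated contributions, then integrate per reflection and
    # normalize by the unit-peak area to get the new intensity estimate.
    I_extracted = [
        (y_obs * c / y_calc).sum() * dx / (peak(ctr).sum() * dx)
        for c, ctr in zip(contribs, centers)
    ]

print("extracted:", [round(I, 3) for I in I_extracted])   # approaches [3.0, 1.5]
```

Starting from equal intensities, the partition converges toward the true values within a few cycles, which is the behaviour the biased-starting-value caveat above refers to: with poor starting values, badly overlapped peaks may converge to an equal split rather than the true ratio.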
The refinement process continues by setting the calculated structure factor to the newly extracted 'observed' structure factor value. The process is then repeated with this new structure factor estimate. At this point the unit cell, background, peak widths, peak shape, and resolution function are refined, and the parameters are improved. The structure factor is then reset to the new structure factor value, and the process begins again. Structural refinement can continue with whole-profile fitting techniques or further treatment of peak overlap. Probabilistic approaches may also be used to treat peak overlap.

Advantages
Some authors suggest that the Le Bail technique exploits prior information more efficiently than the Pawley method. This was an important consideration at the time of development, when computing power was limited. Le Bail fitting is also easily integrated into Rietveld analysis software and is part of a number of programs. Both methods improve subsequent structural refinements.

Available software
Le Bail analysis is commonly a part of Rietveld analysis software, such as GSAS/EXPGUI. It is also used in ARITVE, BGMN, EXPO, EXTRACT, FullProf, GENEFP, Jana2006, Overlap, Powder Cell, Rietan, TOPAS and Highscore. References Sources Crystallography
Le Bail method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
951
[ "Crystallography", "Condensed matter physics", "Materials science" ]
67,124,064
https://en.wikipedia.org/wiki/National%20Centre%20for%20the%20Replacement%2C%20Refinement%20and%20Reduction%20of%20Animals%20in%20Research
The National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs, pronounced "N C 3 Rs") is a British organization with the goal of reducing the number of animals used in scientific research. It is named after the three Rs principles, first described in 1959, for reducing the scale and impact of animal research. It was established in 2004 after the publication of a 2002 House of Lords select committee report on animals in scientific procedures. The chief executive of NC3Rs is Dr Vicky Robinson, who was appointed CBE in the 2015 Birthday Honours "For services to Science and Animal Welfare". See also ARRIVE guidelines References External links Animal testing Animal welfare Bioethics Bioethics research organizations
National Centre for the Replacement, Refinement and Reduction of Animals in Research
[ "Chemistry", "Technology" ]
147
[ "Bioethics", "Animal testing", "Ethics of science and technology" ]
67,126,169
https://en.wikipedia.org/wiki/Bisantrene
Bisantrene is an anthracenyl bishydrazone with anthracycline-like antineoplastic activity and antimetabolite action. Bisantrene intercalates with and disrupts the configuration of DNA, resulting in DNA single-strand breaks, DNA-protein crosslinking, and inhibition of DNA replication. This agent is similar to doxorubicin in chemotherapeutic activity, but unlike anthracyclines such as doxorubicin, it exhibits little cardiotoxicity. In addition to its anthracycline-like activity, a seminal July 2020 article by Su, R. et al. at the City of Hope Hospital in Los Angeles, California, USA first identified bisantrene as a potent (IC50 = 142 nM) inhibitor of the fat mass and obesity-associated protein (FTO), which is an m6A RNA demethylase. The same study found that bisantrene is only a weak inhibitor of ALKBH5, the only other known m6A demethylase; bisantrene is thus a selective inhibitor of FTO. In 2021, bisantrene was demonstrated preclinically to be cardioprotective when administered together with cardiotoxic anthracyclines. A bisantrene combination treatment is currently (as at early 2024) nearing the end of a Phase II clinical trial to assess its efficacy in treating AML in heavily pretreated patients and to assess any adverse side effects, including any cardiotoxicity of the combination. The December 2023 interim findings are given in the History section.

Medical uses
Clinical trials of bisantrene in the 1980s showed efficacy in a range of leukaemias (including acute myeloid leukaemia), breast cancer, and ovarian cancer.

Adverse Side Effects
High doses of bisantrene (above 200 mg/m2/day) cause adverse side effects typical of anthracycline chemotherapeutics. Common adverse side effects include hair loss, bone marrow suppression, vomiting, rash, and inflammation of the mouth. For a chemotherapy drug, it is considered to have relatively low toxicity. Unlike anthracycline chemotherapeutics, bisantrene shows low levels of cardiotoxicity. In a Phase III metastatic breast cancer clinical trial, patients were exposed to cumulative doses in excess of 5440 mg/m2 without developing cardiac damage. The same study observed significantly lower rates of hair loss and nausea compared to patients given doxorubicin.

Three Mechanisms of Action
Bisantrene has three distinct mechanisms of action. Bisantrene contains an appropriately sized planar electron-rich chromophore to act as a DNA intercalating agent, and in vitro it is a potent inhibitor of DNA and RNA synthesis. Bisantrene is also a potent and selective inhibitor of the FTO enzyme, an m6A mRNA demethylase, acting by occupying FTO's catalytic pocket; this is a relatively recent discovery (July 2020). Finally, the University of Newcastle and the Hunter Medical Research Institute found in late-2021 preclinical research that bisantrene has a cardioprotective mechanism of action when administered together with a cardiotoxic drug such as doxorubicin. As at early 2024, the molecular basis for this cardioprotective effect has not been announced by the researchers. Bisantrene's cardioprotective mechanism of action is important because "15 of the 35 commercially available anti-cancer drugs have direct cardiotoxic effects on HCM (human cardiomyocytes)." According to the Australian Cardiovascular Alliance Cardio-Oncology Working Group, a drug which is simultaneously anticancer and cardioprotective is the "Holy Grail" of cardio-oncology.
History
Bisantrene was developed during the 1970s by Lederle Laboratories, a subsidiary of American Cyanamid, as a less cardiotoxic alternative to anthracyclines. Across the 1980s and early 1990s, over 40 clinical trials were conducted using bisantrene. The National Cancer Institute (NCI) undertook a large-scale trial of bisantrene under the name "Orange Crush", including a range of preclinical trials which found bisantrene to be inactive when taken orally, though efficacious against some cancer cells when given intravenously, intraperitoneally, or subcutaneously. In the 1980s, forty-four patients with metastatic breast cancer, who had undergone extensive combination chemotherapy with doxorubicin and had failed to respond to the combination, were treated with bisantrene. Of the 40 patients who could be evaluated, 9 showed a partial response, and in 18 the cancer stopped progressing and stabilised. Bisantrene was approved for human medical use in France in 1990 to target acute myeloid leukemia (AML). It has undergone 46 Phase II trials with 1,800 patients to test its efficacy against cancer. The drug was delisted in the early 1990s due to a series of pharmaceutical mergers and acquisitions. In November 2019, researchers at the City of Hope Hospital in Los Angeles, California published that a drug with codename "CS1" is a potent and specific inhibitor of FTO, an m6A mRNA demethylase. This article did not identify that "CS1" is actually bisantrene. In 2020 at Sheba Hospital, Tel Aviv, Israel, four out of 10 heavily pretreated AML patients responded to bisantrene administered as a single agent. All four of these responding patients had extramedullary disease. In July 2020, researchers at the City of Hope Hospital in Los Angeles, California published that bisantrene (for which they mostly used the codename "CS1") is a potent and specific inhibitor of FTO, an m6A mRNA demethylase. The fact that "CS1" is actually bisantrene is mentioned near the start of the Discussion section of the article. In 2021, researchers at the University of Chicago used bisantrene (which they referred to as "CS1", the codename adopted by the City of Hope Hospital) to successfully inhibit FTO in a preclinical experiment. In a preclinical trial published in January 2022, researchers at the University of Lille used bisantrene to inhibit FTO in order to test whether an FTO inhibitor could potentially be used to treat dysregulation of glucose metabolism in Type 2 diabetes. In 2022, researchers at the University of Texas tested a bisantrene, venetoclax and decitabine combination for AML preclinically. Based on the 2020 clinical study at Sheba Hospital and the University of Texas preclinical work cited above, the most recent clinical trial is the one currently (as at early 2024) underway as a combination treatment for AML in heavily pretreated patients at Sheba Hospital, Tel Aviv, Israel. In the dose-finding stage, three out of six of the heavily pretreated patients were bridged to a bone marrow transplant. Interim results of the expansion stage were announced in December 2023 in a poster presented at the 2023 American Society of Hematology conference. The results are promising for such heavily pretreated patients: six patients recovered sufficiently to be bridged to a bone marrow transplant, and no cardiotoxicity of the bisantrene, fludarabine and clofarabine combination was observed.
In November 2023, researchers at the University of Newcastle and at Race Oncology Limited jointly published a preclinical study in the peer-reviewed journal Blood. It found that bisantrene is synergistic with the hypomethylating agent decitabine for the treatment of AML. The researchers recommended that this combination should proceed to the clinic.

Alternate Names for Bisantrene
Bisantrene's chemical name is 9,10-anthracenedicarboxaldehyde bis[(4,5-dihydro-1H-imidazol-2-yl)hydrazone] dihydrochloride. Bisantrene was given the nickname "Orange Crush" in the 1980s due to its fluorescent orange color when in solution. Bisantrene is also sometimes referred to as "CS1" in cancer research journals, starting with the seminal July 2020 article by Su, R. et al.; the fact that "CS1" is actually bisantrene is mentioned near the start of the Discussion section of that article. References Anthracenes Hydrazones Imidazoles
Bisantrene
[ "Chemistry" ]
1,844
[ "Hydrazones", "Functional groups" ]
67,127,097
https://en.wikipedia.org/wiki/%CE%94-3-Tetrahydrocannabinol
Δ-3-Tetrahydrocannabinol (often abbreviated as delta-3-THC or Δ3-THC) is a synthetic isomer of tetrahydrocannabinol (THC) developed during the original research in the 1940s into synthetic routes to the natural products Δ8-THC and Δ9-THC found in cannabis. While the normal trans ring junction of THC is in this case flattened by the double bond, the molecule still has two enantiomers, as the 9-methyl group can exist in an (R) or (S) configuration. The (S) enantiomer has similar effects to Δ9-THC though with several times lower potency, while the (R) enantiomer is many times less active or inactive, depending on the assay used. It has been identified as a component of vaping liquid products. See also 7,8-Dihydrocannabinol Cannabitriol Delta-4-Tetrahydrocannabinol Delta-7-Tetrahydrocannabinol Delta-10-Tetrahydrocannabinol Hexahydrocannabinol JWH-138 Parahexyl References Benzochromenes Cannabinoids Heterocyclic compounds with 3 rings
Δ-3-Tetrahydrocannabinol
[ "Chemistry" ]
278
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
67,127,278
https://en.wikipedia.org/wiki/%CE%94-7-Tetrahydrocannabinol
Δ-7-Tetrahydrocannabinol (Δ7-THC; alternatively numbered as Δ-5-Tetrahydrocannabinol, Δ5-THC) is a synthetic isomer of tetrahydrocannabinol. The (6aR,9S,10aR)-Δ7-THC epimer is only slightly less potent than Δ9-THC itself, while the (9R) epimer is much less potent. See also 7,8-Dihydrocannabinol Delta-3-Tetrahydrocannabinol Delta-4-Tetrahydrocannabinol Delta-8-Tetrahydrocannabinol Delta-10-Tetrahydrocannabinol Hexahydrocannabinol References Benzochromenes Cannabinoids
Δ-7-Tetrahydrocannabinol
[ "Chemistry" ]
190
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
69,994,997
https://en.wikipedia.org/wiki/CrysTBox
CrysTBox (Crystallographic Tool Box) is a suite of computer tools designed to accelerate material research based on transmission electron microscope images via highly accurate automated analysis and interactive visualization. Relying on artificial intelligence and computer vision, CrysTBox makes routine crystallographic analyses simpler, faster and more accurate compared to human evaluators. The high level of automation together with sub-pixel precision and interactive visualization makes quantitative crystallographic analysis accessible even for non-crystallographers, allowing for interdisciplinary research. Simultaneously, experienced material scientists can take advantage of advanced functionalities for comprehensive analyses. CrysTBox is being developed in the Laboratory of electron microscopy at the Institute of Physics of the Czech Academy of Sciences. For academic purposes, it is available for free. As of 2022, the suite had been deployed at research and educational facilities in more than 90 countries, supporting research at ETH Zurich, Lawrence Berkeley National Laboratory, the Max Planck Institutes, the Chinese Academy of Sciences, the Fraunhofer Institutes and Oxford University.

Suite
As a scientific tool, the CrysTBox suite is freely available for academic purposes; it supports file formats widely used in the community and offers interconnection with other scientific software.

Availability
CrysTBox is freely available on demand for non-commercial use by non-commercial subjects. The only safe way to download CrysTBox installers is via a request form on the official website. Commercial use is not allowed due to the license of MATLAB used for CrysTBox compilation.

Notable research and users
Besides education, CrysTBox is mainly used in research, with fields of application spanning from nuclear research to archaeology and paleontology. Among others, the suite was employed in the development of additive manufacturing (including 3D-printed biodegradable alloys, metallic glass and high-entropy alloys), resistant coatings, laser shock peening, water cleaning technologies and the characterization of 50-million-year-old flint. Institutions whose research has been supported by CrysTBox include educational facilities such as ETH Zurich, the University of California, Uppsala University, Oxford University, the University of Waterloo, the Indian Institute of Technology, Nanyang Technological University and the University of Tokyo, as well as research institutes like the Max Planck Institutes, the Chinese Academy of Sciences, the Fraunhofer Institutes and US national laboratories (NL) such as Oak Ridge NL, Lawrence Berkeley NL, Idaho NL and Lawrence Livermore NL.

Limitations and disadvantages
CrysTBox is compiled into stand-alone installers using MATLAB Compiler. Therefore, 1-2 GB of MATLAB libraries are installed together with the toolbox. The diffraction simulation used in cellViewer is based on kinematic diffraction theory. This allows for a real-time response to user interaction, but it does not cover advanced diffraction features like double diffraction, which require dynamical diffraction theory, even though some phenomena caused by multiple electron-matter interactions are visualized by CrysTBox - for instance Kikuchi lines. The analytical tools provide a correction for scale calibration imperfections, but do not provide adjustment for image distortions such as elliptical distortion. If high-accuracy measurement is needed or if the distortion exceeds standard levels, appropriate tools should be applied prior to the analysis.
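Since the kinematic approximation just mentioned underlies the simulated diffraction patterns described below, a minimal illustration of what such a simulation involves may be helpful: evaluating a structure factor for each candidate reflection. This sketch is not CrysTBox code, and the atomic scattering factors are crudely approximated by atomic numbers:

```python
import numpy as np

# Kinematic diffraction sketch: structure factors F(hkl) for rock-salt NaCl.
# Reflections with |F| = 0 are kinematically forbidden and absent from the
# simulated pattern; the rest appear with intensity proportional to |F|^2.

# Fractional coordinates of the conventional rock-salt cell (4 Na + 4 Cl).
na = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
cl = [(0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5), (0.5, 0.5, 0.5)]
atoms = [(11, p) for p in na] + [(17, p) for p in cl]   # (Z, position)

def structure_factor(h, k, l):
    """F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j))."""
    return sum(f * np.exp(2j * np.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    F = structure_factor(*hkl)
    status = "allowed" if abs(F) > 1e-6 else "forbidden"
    print(hkl, f"|F| = {abs(F):6.1f}", status)
# fcc selection rule: h, k, l all even or all odd; (100) and (110) vanish.
```

A real-time simulator essentially repeats this evaluation over all reflections within an excitation-error tolerance for the current crystal orientation, which is why the kinematic approximation is fast enough for interactive use.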
Crystallographic visualization tools
In order to visualize functional relations and provide a better understanding of experimental data, the graphical interface emphasizes user interactivity and functional interconnection. There are two visualization tools in the suite: one depicting a single material, the other focused on intergrowths of two different materials.

cellViewer - single crystal visualization
CellViewer allows the user to visualize the sample material in four modes widely used in material research: 3D model of the atomic structure (direct space), simulated diffraction pattern (reciprocal space), stereographic projection (projection of the 3D space of crystallographic planes and directions to 2D), and inverse pole figure (a defined part of the stereographic projection). The graphical user interface provides the user with two interactive views side by side. These views can display an arbitrary combination of the four aforementioned visualization modes, allowing their mutual relations to be perceived. For instance, rotation of the atomic structure in direct space leads (if so configured) to an instant update of the simulated diffraction pattern. If any diffraction spot is selected, the corresponding crystallographic planes are shown in the unit cell, etc. Such interconnections are implemented for each pair of the four available visualization modes. The electronic visualization helps simplify the understanding of widely used, yet less intuitive representations such as the inverse pole figure - for instance by drawing the coloured triangle of the inverse pole figure into the stereographic projection or into the more intuitive 3D atomic structure.

ifaceViewer - intergrowth visualization
IfaceViewer allows for visualization of two misoriented materials and their interface, such as crystal twins or grain boundaries. The user interface provides three views: two smaller views, each depicting one unit cell of the selected material and orientation, and a larger view depicting an appropriate interface of the two structures. The interface can be visualized in four modes: 3D model of both unit cells, wire-frame model of both unit cells, cross section of the interface, and bulk representation (up to several hundred atoms). All three views in the user interface are functionally interconnected. If the content of one view is rotated by the user, the other views follow. If a crystallographic plane or direction is selected in one view, it is shown in the other views and the corresponding crystallographic indices are stated. The tool also allows the user to highlight the coincident site lattice or to calculate a list of the planes and directions which are parallel or nearly parallel in the two misoriented materials.

Automated analysis of TEM images
CrysTBox offers tools for automated processing of diffraction patterns and high-resolution transmission electron microscope images. Since the tools employ algorithms of artificial intelligence and computer vision, they are designed to require minimal operator effort while providing higher accuracy than manual evaluation. Four analytical tools can be used to index diffraction patterns, measure lattice constants (distances and angles), estimate sample thickness etc. Despite the high level of automation, the user is able to control the whole process and perform individual steps manually if needed.

diffractGUI - HRTEM and diffraction processing
DiffractGUI allows for automated analysis of diffraction patterns and high-resolution images of a single crystal or a limited number of crystallites.
It is able to determine crystal orientation, index individual diffraction spots and measure interplanar angles and distances with picometric precision. The input image may depict: a selected area diffraction pattern, a high-resolution image, a nanodiffraction pattern or convergent beam electron diffraction. The input image is processed in the following steps: Preprocessing in accordance with the settings and image nature (resolution and noise reduction, Fourier transform for direct-space images etc.). Detection of diffraction reflections at various scales (difference of Gaussians typically used for spot detection, Hough transform for CBED disk detection). The strongest detections are selected across the scale space. A regular lattice is fit to the set of the strongest detections using the RANSAC algorithm. Lengths and angles of the lattice basis vectors are measured. Crystal lattice orientation is determined and diffraction reflections are identified using theoretical parameters of the sample material. Compared to human evaluation, diffractGUI considers tens or even hundreds of diffraction spots at once and can therefore localize the pattern with sub-pixel precision.

ringGUI - ring diffraction analysis
RingGUI allows for automated processing of ring diffraction images of polycrystalline or powder samples. It can be used to identify the diffraction rings, quantify the interplanar distances and thus characterize or identify the sample material. With a known material, it can assist in microscope calibration. The input image is processed as follows: beam-stopper detection, localization of the ring center, quantification of the diffraction profile and estimation of its background intensity, identification of the rings in the image (peaks in the profile). The results can be further processed and visualized in two interactive, functionally interconnected graphical elements: Interactive diffraction image – allows the user to improve the readability of the diffraction image by removing the beam-stopper, subtracting the background, revealing faint or spotty rings or by crystallographic identification of the depicted rings. Diffraction profile – a circular average of the image intensities depicting the peaks corresponding to the rings and their match with theoretical values known for the given sample material. Both the diffraction image and the diffraction profile can be used to select diffraction rings with a mouse click. The corresponding ring is then highlighted in both graphical representations and details are listed.

twoBeamGUI - sample thickness estimation
Sample thickness can be estimated using twoBeamGUI from a convergent beam electron diffraction (CBED) pattern in the two-beam approximation. The procedure is based on an automated extraction of the intensity profile across the diffracted disk in the following steps: the diffraction disk radius is determined using a multi-scale Hough transform; the transmitted and diffracted disks are localized and the reflection is indexed; the disks are horizontally aligned and cropped out, and profiles are measured across the disks; the profile across the diffracted disk is matched with a series of profiles automatically simulated for the given material, reflection and specified thickness range. Once the procedure is completed, the measured profile and the most similar simulated profile are displayed with the diffracted disk in the background.
This allows the user to verify the correctness of the automated estimate and easily check the similarity of other intensity profiles within the specified thickness range.

gpaGUI - geometric phase analysis
The tool called gpaGUI provides an interactive interface for geometric phase analysis. It can generate 2D maps of various crystallographic quantities from high-resolution images. Since the geometric phase analysis is performed in the frequency domain, the high-resolution image needs to be transformed into a frequential representation using the Fourier transform. Mathematically, the frequential image is a complex matrix with the same size as the original image. Crystallographically, it can be seen as an artificial diffraction pattern of the original image, depicting intensity peaks corresponding to the crystallographic planes present in the original image. After performing the desired calculations, the frequential representation can be transformed back to the original spatial domain using the inverse Fourier transform. Various crystallographic analyses can be performed using the frequential image. If it is filtered so that only the information from a region close to a particular diffraction spot is used (the rest is set to zero), a filtered direct image obtained by inverse Fourier transform then depicts only the planes corresponding to the selected diffraction spot. Moreover, due to its complex nature, the frequential image can be used to calculate amplitude and phase. Together with a vector of one crystallographic plane depicted in the image, these can be used to generate a 2D map of the interplanar distance of the given plane. If two vectors of non-parallel planes are known, the method can be used to generate maps of strain and displacement. The graphical user interface of gpaGUI is vertically divided into two halves, each of which contains: a diffractogram preview allowing the user to select one diffraction spot corresponding to a crystallographic plane; a visualization of a selected quantity (input image, filtered image or one of the maps mentioned above) allowing the user to select a point of interest or region of interest for further analysis; and results of a detailed analysis of the point or region of interest. The point analysis allows the user to select any pixel of the visualized map to see the exact values of that pixel and its closest neighbourhood. If an analysis of a broader area is needed, a polygonal region can be outlined in the map, allowing its statistical details to be enumerated: mean, standard deviation, median, minimum, maximum and total area of the polygon. Since each half of the interface allows the user to specify one crystallographic plane, gpaGUI can calculate all the aforementioned crystallographic quantities, including those which require two vectors. Precision and repeatability of the whole analysis rely on the accuracy of the diffraction peak localization. To overcome the inaccuracy of manual peak localization (with a mouse click), gpaGUI provides the possibility to process the input image with diffractGUI in order to accurately localize and index the peaks.
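To illustrate the frequency-domain filtering at the heart of geometric phase analysis, here is a minimal sketch. It is not CrysTBox code; the synthetic lattice image, fringe period and mask radius are illustrative assumptions:

```python
import numpy as np

# Minimal geometric phase analysis (GPA) sketch: mask one Bragg peak in the
# FFT of a synthetic fringe image and recover the local geometric phase,
# which maps the local lattice displacement.

N = 256
y, x = np.mgrid[0:N, 0:N].astype(float)
g = 1.0 / 8.0                             # carrier frequency: 1 fringe / 8 px
# Smooth local phase bump of ~0.5 rad at the image centre (the "distortion").
phi = 0.5 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 40.0 ** 2))
img = np.cos(2 * np.pi * g * x + phi)     # synthetic HRTEM-like fringe image

F = np.fft.fftshift(np.fft.fft2(img))

# Circular mask around the +g Bragg peak, located g*N pixels right of centre.
ky, kx = np.mgrid[0:N, 0:N]
cy, cx = N // 2, N // 2 + int(round(g * N))
mask = (ky - cy) ** 2 + (kx - cx) ** 2 < 10 ** 2
filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))

# Remove the carrier 2*pi*g*x; what remains is the geometric phase map.
geo_phase = np.angle(filtered * np.exp(-2j * np.pi * g * x))
print(f"max recovered phase: {geo_phase.max():.2f} rad (target ~0.5)")
```

The accuracy of the recovered map depends on how precisely the Bragg peak is located before masking, which is why accurate automated peak localization (as discussed above) matters for this analysis.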
See also Transmission electron microscope Selected area diffraction Convergent beam electron diffraction High-resolution transmission electron microscopy Geometric phase analysis Electron crystallography Crystal structure Computer vision Artificial intelligence Fourier transform Difference of gaussians Hough transform RANSAC Czech Academy of Sciences Notes References External links Request form to obtain CrysTBox Czech Academy of Sciences Institute of Physics of the Czech Academy of Sciences Crystallography Open Database Inorganic Crystal Structure Database Crystallography Science education software Science software Electron microscopy Computer vision software Visualization software
CrysTBox
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,620
[ "Electron", "Electron microscopy", "Materials science", "Crystallography", "Condensed matter physics", "Microscopy" ]
70,001,702
https://en.wikipedia.org/wiki/Vladimir%20Petrovich%20Mineev
Vladimir Petrovich Mineev (Владимир Петрович Минеев, surname sometimes transliterated as Mineyev; born 9 October 1945 in Moscow) is a Russian theoretical physicist, specializing in condensed matter physics. Biography Mineev graduated in 1969 from the Moscow Institute of Physics and Technology and then became a graduate student at Moscow's Landau Institute for Theoretical Physics. There in 1974 he received his Russian Candidate of Sciences degree (Ph.D.) and in 1983 his Russian Doctor of Sciences degree (habilitation). At the Landau Institute for Theoretical Physics, he was a researcher from 1972 to 1991 and a vice-director from 1992 to 1999, as well as holding a chair in theoretical physics from 1991 to 1999. In 1993 and 1994 he organized Landau Institute summer schools. In Grenoble, France, at the Institut Nanosciences et Cryogénie of the Commissariat à l'énergie atomique et aux énergies alternatives (CEA), he was in charge of the theory group, Service de physique statistique, magnétisme et supraconductivité (SPSMS), from 1999 to 2006, and since 2006 he has been a senior scientist there. He is both a Russian and a French citizen. He has served as a referee for the Proceedings of the National Academy of Sciences, Nature Physics, Physical Review Letters, and many other physics journals. He has been a visiting scientist in 8 different countries. His visiting appointments at various locations include the Aspen Center for Physics in 1977 and again in 1989, France's IHES in 1978–1979, Finland's Low Temperature Laboratory of Aalto University at various times from 1979 to 1992, Denmark's Niels Bohr Institute in 1980 and again in 1998, Gothenburg's Chalmers University of Technology in 1981, ETH Zurich in 1991 and again in 2003 and 2008, Grenoble's Institut Laue-Langevin in 1993, Florida State University's National High Magnetic Field Laboratory in 1998–1999 and again in 2005, Kyōto's Yukawa Institute for Theoretical Physics in 1999 (as a guest professor), the University of Oxford in 2003, both Tel Aviv University and the Weizmann Institute in 2004 and again in 2008, and the USA's Argonne National Laboratory in 2011. In 1992 he received the Landau Gold Medal for the topological classification of stable defects in ordered media. In 2014 he was awarded the Lars Onsager Prize. His research deals with various problems in solid state physics, especially the theory of superconductivity and its interaction with magnetism. He has been married since 1976 and has a son and two daughters. Selected publications References External links (publication list) 1945 births Living people Moscow Institute of Physics and Technology alumni Landau Institute for Theoretical Physics alumni 20th-century Russian physicists 21st-century Russian physicists 20th-century French physicists 21st-century French physicists Soviet physicists Condensed matter physicists Russian theoretical physicists Aspen Center for Physics people
Vladimir Petrovich Mineev
[ "Physics", "Materials_science" ]
613
[ "Condensed matter physicists", "Condensed matter physics" ]
61,835,523
https://en.wikipedia.org/wiki/Pounamu
Pounamu is a term for several types of hard and durable stone found in the South Island of New Zealand. They are highly valued in New Zealand, and carvings made from pounamu play an important role in Māori culture. Name The Māori word pounamu is derived from namu, an archaic word that describes blue-green (or 'grue'), cognate with Tahitian ninamu. Greenstone, a term also used in New Zealand English, itself refers to two main types of green stone valued for carving: nephrite jade, classified by Māori as kawakawa, kahurangi, īnanga, and other names depending on colour; and translucent bowenite, a type of serpentine, known as tangiwai. The collective term pounamu is preferred, as the other names in common use are misleading, such as New Zealand jade (not all pounamu is jade) and greenstone (a generic term used for unrelated stone from many countries). Pounamu is only found in New Zealand, whereas much of the carved "greenstone" sold in souvenir shops is jade sourced overseas. The Māori classification of pounamu is by colour and appearance; the shade of green is matched against a colour found in nature, and some hues contain flecks of red or brown. Īnanga pounamu takes its name from the native freshwater fish Galaxias maculatus, one of the common whitebait species in New Zealand, and is pearly-white or grey-green in colour. It varies from translucent to opaque. Īnanga was the variety most prized by Māori for ornaments and mere (short-handled clubs). Kahurangi pounamu is highly translucent and has a vivid shade of light green with no spots or flaws. Its name is the Māori word for a person of high rank, and it is the rarest variety of pounamu. It was the preferred stone for making toki poutangata (ceremonial adzes) owned by rangatira (Māori chiefs). Kawakawa pounamu comes in shades of rich dark green, often with small dark flecks or inclusions, and is named after the similarly-coloured leaves of the kawakawa tree (Piper excelsum). It is the most common variety of pounamu, and the most used in the manufacture of jewellery today. One of its main sources is the Taramakau River on the West Coast. Totoweka is a rare type of kawakawa with small reddish dots or streaks; its name means "weka blood", after the flightless bird Gallirallus australis. Kōkopu pounamu is olive green and speckled with dark spots, reminiscent of the markings of three species of native freshwater fishes in the genus Galaxias that go by that name. Flower jade or picture jade is pounamu with cream, yellow, or brown inclusions, from oxidising or weathering in the surface of the stone. Cracks or fissures in the stone can allow iron impurities to enter, and carvers can then make use of the resulting patterns. Flower jade is best known from the Marsden district near Hokitika. Tangiwai pounamu is translucent like glass, but in a wide range of shades. When viewed against the light it resembles a clear drop of water. The name means "the tears that come from great sorrow", and refers to a Māori legend of a lamenting woman whose tears turned to stone. Chemistry Jade comprises two different minerals: jadeite and nephrite. Jadeite (sodium aluminium silicate) has interlocking granular crystals, while nephrite (calcium magnesium silicate) has crystals that are interwoven and fibrous. Jadeite is mostly found in Myanmar, while nephrite is found in Europe, British Columbia, Australia, and New Zealand. New Zealand nephrite contains varying amounts of iron, which account for its range of shades, richness of green, and translucency.
Geological formation and location Pounamu is generally found in rivers in specific parts of the South Island as nondescript boulders and stones. Pounamu has been formed in New Zealand in four main locations: the West Coast, Fiordland, western Southland and the Nelson district. It is typically recovered from rivers and beaches to which it has been transported after being eroded from the mountains. The group of rocks from which pounamu comes is called ophiolites. Ophiolites are slices of the deep ocean crust and part of the mantle. When these deep mantle rocks (serpentinite) and crustal rocks (mafic igneous rocks) are heated up (metamorphosed) together, pounamu can be formed at their contact. The Dun Mountain Ophiolite Belt has been metamorphosed in western Southland, and pounamu from this belt is found along the eastern and northern edge of Fiordland. The Anita Bay Dunite near Milford Sound is a small but highly prized source of pounamu. In the Southern Alps, the Pounamu Ultramafic Belt in the Haast Schist occurs as isolated pods which are eroded and found on West Coast rivers and beaches. One source of īnanga pounamu at the head of Lake Wakatipu is possibly the only jade mining site in the world with government protection. Significance to Māori Pounamu plays a very important role in Māori culture and is a taonga (treasure). It is, and has long been, an important part of trade between the South Island iwi (tribe) Ngāi Tahu and other iwi. Adze blades made from pounamu were prized for carving wood, and pounamu tools remained in use even after the arrival of metal tools. These were often reworked into hei-tiki (stylised human figures worn as pendants) and other taonga when they were no longer useful for carving wood. After the arrival of Ngāi Tahu in the South Island in the middle of the 18th century, the production of pounamu increased. Pounamu crafting and trade were important to the economy of Ngāi Tahu. Pounamu taonga increase in mana (spiritual power or prestige) as they pass from one generation to another. Pounamu is believed to absorb the mana of its past owners, and some heirloom pieces are named after a former owner in memory of their position and authority. The most prized taonga are those with known histories going back many generations: these are believed to have their own mana and were often given as gifts to seal important agreements. Pounamu taonga include tools such as toki (adzes), whao (chisels), gouges, knives, scrapers, awls, hammer stones, and drill points. Hunting tools include matau (fishing hooks) and lures, spear points, and poria (leg rings for fastening captive birds); weapons such as mere; and ornaments such as pendants (hei-tiki, hei matau and pekapeka), ear pendants (kuru and kapeu), and cloak pins. Functional pounamu tools were widely worn for both practical and ornamental reasons, and continued to be worn as purely ornamental pendants even after they were no longer used as tools. Pounamu is found only in the South Island of New Zealand, known in Māori as Te Wai Pounamu ('The [land of] Greenstone Water') or Te Wāhi Pounamu ('The Place of Greenstone'). In 1997 the Crown handed back the ownership of all naturally occurring pounamu to the South Island iwi Ngāi Tahu (or Kāi Tahu), as part of the Ngāi Tahu Claims Settlement. Pounamu was of such value to Māori that peace was cemented by the exchange of valuable carved heirlooms, creating what was figuratively called a tatau pounamu (door of greenstone), as in a saying translated as 'Let us conclude a peace treaty that may never be broken, for ever and ever'.
Pounamu trails There were a dozen major pounamu trails used in the trading of pounamu, and many more minor routes. Parties of 6 to 12 are thought to have used the tracks in summer, particularly via Harper Pass. Modern use Jewellery and other decorative items made from gold and pounamu were particularly fashionable in New Zealand in the Victorian and Edwardian years of the late 19th and early 20th century. Such jewellery continues to be popular among New Zealanders and is often given as a gift. In 2011, the New Zealand Prime Minister John Key presented the President of the United States, Barack Obama, with a mere (a type of Māori weapon) carved from pounamu by New Zealand artist Aden Hoglund. An exhibition curated by Te Papa in 2007 showcased 200 pounamu items from their collections and linked New Zealand and China through both the geographical location of nephrite and the high level of artistry achieved in ancient China and then, thousands of years later, amongst Māori. The exhibition marked 40 years of diplomatic relations between the two countries when it toured to five venues in China in 2013. In the 2016 animated movie Moana, the central premise is the return of the stolen heart of Te Fiti, which is manifest as a pounamu stone amulet. Fossicking for pounamu is a cultural activity in New Zealand; it is allowed in designated areas of the West Coast of the South Island (Te Tai Poutini) and is limited to what can be carried unaided. Fossicking elsewhere in the tribal area is illegal, while nephrite jade can be sourced legally and freely from Marlborough and Nelson. In 2009 David Anthony Saxton and his son Morgan David Saxton were sentenced to two and a half years' imprisonment for stealing greenstone, with a helicopter, from the southern West Coast. Gallery See also Greenstone (disambiguation) Hei-tiki Lingling-o References External links Photos of 40 Pounamu varieties with accompanying information Pounamu, Te Rūnanga o Ngāi Tahu "Pounamu – jade or greenstone" in Te Ara – the Encyclopedia of New Zealand Examples of pounamu taonga (Māori treasures) from the collection of the Museum of New Zealand Te Papa Tongarewa First over the Alps: The epic of Raureka and the Greenstone by James Cowan (eText) Photo of woman wearing a greenstone neck pendant Photo of greenstone tiki Photo of greenstone mere Gemstones Geology of New Zealand Hardstone carving Māori culture Māori words and phrases Minerals Natural resources in Oceania
Pounamu
[ "Physics" ]
2,105
[ "Materials", "Gemstones", "Matter" ]
41,363,717
https://en.wikipedia.org/wiki/EuroFlow
EuroFlow consortium was founded in 2005 as an EU FP6-funded project and launched in spring 2006. At first, EuroFlow was composed of 18 diagnostic research groups and two SMEs (small/medium enterprises) from eight different European countries with complementary knowledge and skills in the field of flow cytometry and immunophenotyping. During 2012 both SMEs left the project, so it obtained full scientific independence. The goal of the EuroFlow consortium is to innovate and standardize flow cytometry, leading to global improvement and progress in the diagnostics of haematological malignancies and the individualisation of treatment. Background Since the 1990s, immunophenotyping (staining cells with antibodies conjugated with fluorochromes and detection with a flow cytometer) has been the preferred method in the diagnostics of haematological malignancies. The advantages of this method are its speed and simplicity, the possibility to measure more than 6 parameters at a time, precise focusing on the malignant population, and broad applicability in diagnostics. With the great progress in the development of antibodies, fluorochromes and multicolor digital flow cytometers, the questions arose of how to interpret cytometric data and how to achieve comparable results between facilities. Even though a consensus of recommendations and guidelines was established, standardization was only partial because it did not address differing antibody clones, fluorochromes and their optimal combinations, or sample preparation. On that account, cytometry is perceived as a method highly dependent on the level of expertise and with limited reproducibility in multicentric studies. Goals of EuroFlow These goals were set out in the journal Leukemia in 2012. Development and evaluation of new antibodies Establishment of new immunobead assay technology Development of new software tools followed by new analysing approaches for recognition of complex immunophenotype patterns. Design of new multicolor protocols and standard operating procedures (SOPs) Development and standardization of fast, accurate and highly sensitive flow cytometry Achievements During the past few years, EuroFlow has achieved most of its goals. Eight-color panels for the diagnosis, classification and follow-up of haematological malignancies were established. The panels, consisting of a screening tube and supplementary characterisation tubes, are based on experience and knowledge from the literature, but were further optimised and tested in multiple research centers on large collections of samples, informing the selection of fluorochromes and the standardization of instrument settings and SOPs. Antibody clones, fluorochromes and other reagents from different companies underwent detailed testing and comparison. Simultaneously, new software capable of multidimensional statistical comparison of normal data samples and patient samples was developed for analysing more complex and extensive data files. New antibody clones against rigorously selected epitopes of proteins involved in chromosomal translocations were also developed for the detection of the most frequent fusion proteins in acute leukemia and chronic myeloid leukemia. Detection of fusion proteins using immunobead assays was also introduced. References External links EuroFlow Flow cytometry
EuroFlow
[ "Chemistry", "Biology" ]
631
[ "Flow cytometry" ]
41,366,808
https://en.wikipedia.org/wiki/Windpost
A windpost is a structural item used in the design and construction of masonry walls to increase lateral wall stability and protect them against damage from horizontal forces imposed by wind pressure, crowd or handrail loads. They are normally constructed from mild steel channel sections, supported at the head and the foot between floor slab levels and/or the principal steelwork sections forming the structural frame of the building. In cavity walls, the windpost will typically be fixed into the inner and outer leaves of the wall by specialist fixings and fastenings at regular intervals along its length. The windposts will be spaced along the walls of the building at regular intervals as calculated by the engineer to suit the required loadings. In most cases a windpost is a large and very unwieldy element that can often weigh in excess of 400 kg. The manufacture and delivery of both steel and concrete windposts has a significant carbon footprint, and once delivered to site, their storage requires large areas to be set aside. The procurement of windposts, including the design process, often requires a lead time of four to five weeks. The length and weight of windposts makes them particularly difficult to manoeuvre into position in confined spaces. When installed as part of an internal wall, significant health and safety risks exist for the installers in lifting the windpost to its vertical position. There is no recognised mechanical lifting method to safely erect windposts. Steel windposts have inherent fire-integrity, acoustic, airtightness and thermal-movement issues, all of which require additional measures to achieve specification compliance at extra cost. Design Methods and Alternatives Windposts are designed to span vertically, floor to floor, and provide lateral support for masonry wall panels. The windposts will usually be restrained by the brickwork and designed as simply supported beams. As an alternative to steel windposts, when the primary structure is composed of reinforced concrete, secondary structures are cast in situ to provide lateral support to masonry panels. Traditional design methods are often not optimised for the design of masonry panels with openings, and therefore windposts can be over-specified on walls where the design capacity may not be utilised. Using alternative design methods such as advanced yield line analysis, the specification of windposts within masonry wall panels can often be optimised, and windposts can sometimes be omitted altogether. These calculations can typically be carried out by structural engineering software packages such as MasterSeries. Recently, an innovative technique of reinforcing blockwork walls has been developed by Wembley Innovation Ltd and used in many Crossrail projects in the UK. It consists of using uniquely designed hollow blocks to allow the construction of reinforced concrete beams (Wi Beams) and columns (Wi Columns) within the blockwork construction, which eliminate the need for traditional windposts or lintels. This new technique maximises masonry wall strength without thickening the wall or harming its appearance and allows the architects to design and create uninterrupted blockwork panels with flexible detailing options, whilst retaining the performance characteristics of traditional masonry such as fire integrity, acoustic performance and air permeability.
This modular approach also provides the adaptability for contractors to make late changes to construction without affecting the build programme and creates seamless walls which do not require any fire protection. References Structural engineering Masonry
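As a rough, hypothetical illustration of the simply-supported-beam design model described above, the following sketch computes the line load, design moment and required elastic section modulus for one windpost; all numerical values are made up for illustration and are not taken from any design code or manufacturer's data:

```python
# Hypothetical check of a windpost modelled as a simply supported beam
# spanning floor to floor, carrying wind load from a tributary width of wall.

wind_pressure = 1.0e3      # N/m^2, design wind pressure (assumed value)
spacing       = 3.0        # m, horizontal spacing between windposts (assumed)
storey_height = 4.0        # m, span of the post between slab supports
fy            = 275.0e6    # N/m^2, mild steel design strength (assumed)

w = wind_pressure * spacing            # uniform line load on the post, N/m
M_max = w * storey_height**2 / 8.0     # mid-span moment of a simply
                                       # supported beam under a UDL, N*m
W_required = M_max / fy                # required elastic section modulus, m^3

print(f"line load        w = {w:.0f} N/m")
print(f"design moment    M = {M_max:.0f} N*m")
print(f"section modulus  W = {W_required * 1e6:.1f} cm^3")
```

A real design would also check deflection, fixings into the masonry leaves, and the relevant loading and steel design standards.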
Windpost
[ "Engineering" ]
664
[ "Construction", "Civil engineering", "Structural engineering", "Masonry" ]
59,184,147
https://en.wikipedia.org/wiki/Residue-to-product%20ratio
In climate engineering, the residue-to-product ratio (RPR) is used to calculate how much unused crop residue might be left after harvesting a particular crop. Also called the residue yield or straw/grain ratio, the ratio is calculated as the mass of residue divided by the mass of crop produced, and the result is dimensionless. The RPR can be used to project the costs and benefits of bio-energy projects, and is crucial in determining financial sustainability. The RPR is particularly important for estimating the production of biochar, a beneficial farm input obtained from crop residues through pyrolysis. However, it is important to note that RPR values are rough estimates taken from broad production statistics, and can vary greatly depending on crop variety, climate, processing, and residual moisture content. See also Carbon sequestration Biomass Biochar Biofuel Pyrolysis References Climate engineering Crops Biofuels
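As a simple worked illustration of the ratio defined above (the crop mass and RPR value below are hypothetical, not measured figures):

```python
def residue_mass(crop_mass, rpr):
    """Estimate crop residue from harvested product mass and an RPR value."""
    return crop_mass * rpr

# Hypothetical example: a 2.0 t/ha grain harvest with an assumed RPR of 1.5
# leaves roughly 3.0 t/ha of straw. Actual RPRs vary with crop variety,
# climate, processing and residual moisture content, as noted above.
print(residue_mass(2.0, 1.5))   # -> 3.0
```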
Residue-to-product ratio
[ "Engineering" ]
185
[ "Planetary engineering", "Geoengineering" ]
59,184,609
https://en.wikipedia.org/wiki/Ibogaline
Ibogaline is an alkaloid found in Tabernanthe iboga along with the related chemical compounds ibogaine, ibogamine, and other minor alkaloids. It is a comparatively minor component of the Tabernanthe iboga root bark total alkaloid (TA) content. It is also present in Tabernaemontana species such as Tabernaemontana australis, which shares similar ibogan-biosynthetic pathways. Ibogaline makes up to 15% of the TA in T. iboga root bark, with ibogaine constituting 80% of the alkaloids and ibogamine up to 5%. Chemistry Derivatives Kisantine and gabonine are thought to be oxidation byproducts of ibogaline. Adverse effects In rodents, ibogaline induces more body tremor and ataxia than ibogaine and ibogamine. Among a series of iboga and harmala alkaloids evaluated in rats, the study authors found the following order of potency in causing tremors: ED50 (μmol/kg, sc): tabernanthine (4.5) > ibogaline (7.6) > harmaline (12.8) > harmine (13.7) > ibogaine (34.8) > noribogaine (176.0) A subsequent study confirmed these findings. See also Coronaridine Voacangine References Alkaloids found in Iboga Indole alkaloids
Ibogaline
[ "Chemistry" ]
312
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
59,185,557
https://en.wikipedia.org/wiki/Truncated%20triakis%20octahedron
The truncated triakis octahedron, or more precisely an order-8 truncated triakis octahedron, is a convex polyhedron with 30 faces: 8 sets of 3 pentagons arranged in an octahedral arrangement, with 6 octagons in the gaps. Triakis octahedron It is constructed from a triakis octahedron by truncating the order-8 vertices. This creates 6 regular octagon faces, and leaves 24 mirror-symmetric pentagons. Octakis truncated cube The dual of the order-8 truncated triakis octahedron is called an octakis truncated cube. It can be seen as a truncated cube with octagonal pyramids augmented onto its octagonal faces. See also Truncated triakis tetrahedron Truncated tetrakis cube Truncated triakis icosahedron External links George Hart's Polyhedron generator - "t8kO" (Conway polyhedron notation) Polyhedra Truncated tilings
Truncated triakis octahedron
[ "Physics" ]
202
[ "Tessellation", "Truncated tilings", "Symmetry" ]
59,188,974
https://en.wikipedia.org/wiki/Localization-protected%20quantum%20order
Many-body localization (MBL) is a dynamical phenomenon which leads to the breakdown of equilibrium statistical mechanics in isolated many-body systems. Such systems never reach local thermal equilibrium, and retain local memory of their initial conditions for infinite times. One can still define a notion of phase structure in these out-of-equilibrium systems. Strikingly, MBL can even enable new kinds of exotic orders that are disallowed in thermal equilibrium – a phenomenon that goes by the name of localization-protected quantum order (LPQO) or eigenstate order. Background The study of phases of matter and the transitions between them has been a central enterprise in physics for well over a century. One of the earliest paradigms for elucidating phase structure, associated most with Landau, classifies phases according to the spontaneous breaking of global symmetries present in a physical system. More recently, we have also made great strides in understanding topological phases of matter which lie outside Landau's framework: the order in topological phases cannot be characterized by local patterns of symmetry breaking, and is instead encoded in global patterns of quantum entanglement. All of this remarkable progress rests on the foundation of equilibrium statistical mechanics. Phases and phase transitions are only sharply defined for macroscopic systems in the thermodynamic limit, and statistical mechanics allows us to make useful predictions about such macroscopic systems with many (~10²³) constituent particles. A fundamental assumption of statistical mechanics is that systems generically reach a state of thermal equilibrium (such as the Gibbs state) which can be characterized by only a few parameters such as temperature or a chemical potential. Traditionally, phase structure is studied by examining the behavior of "order parameters" in equilibrium states. At zero temperature, these are evaluated in the ground state of the system, and different phases correspond to different quantum orders (topological or otherwise). Thermal equilibrium strongly constrains the allowed orders at finite temperatures. In general, thermal fluctuations at finite temperatures reduce the long-ranged quantum correlations present in ordered phases and, in lower dimensions, can destroy order altogether. As an example, the Peierls-Mermin-Wagner theorems prove that a one dimensional system cannot spontaneously break a continuous symmetry at any non-zero temperature. Recent progress on the phenomenon of many-body localization has revealed classes of generic (typically disordered) many-body systems which never reach local thermal equilibrium, and thus lie outside the framework of equilibrium statistical mechanics. MBL systems can undergo a dynamical phase transition to a thermalizing phase as parameters such as the disorder or interaction strength are tuned, and the nature of the MBL-to-thermal phase transition is an active area of research. The existence of MBL raises the interesting question of whether one can have different kinds of MBL phases, just as there are different kinds of thermalizing phases. Remarkably, the answer is affirmative, and out-of-equilibrium systems can also display a rich phase structure. What's more, the suppression of thermal fluctuations in localized systems can even allow for new kinds of order that are forbidden in equilibrium—which is the essence of localization-protected quantum order.
The recent discovery of time-crystals in periodically driven MBL systems is a notable example of this phenomenon. Phases out of equilibrium: eigenstate order Studying phase structure in localized systems requires us to first formulate a sharp notion of a phase away from thermal equilibrium. This is done via the notion of eigenstate order: one can measure order parameters and correlation functions in individual energy eigenstates of a many-body system, instead of averaging over several eigenstates as in a Gibbs state. The key point is that individual eigenstates can show patterns of order that may be invisible to thermodynamic averages over eigenstates. Indeed, a thermodynamic ensemble average isn't even appropriate in MBL systems since they never reach thermal equilibrium. What's more, while individual eigenstates aren't themselves experimentally accessible, order in eigenstates nevertheless has measurable dynamical signatures. The eigenspectrum properties change in a singular fashion as the system transitions from one type of MBL phase to another, or from an MBL phase to a thermal one, again with measurable dynamical signatures. When considering eigenstate order in MBL systems, one generally speaks of highly excited eigenstates at energy densities that would correspond to high or infinite temperatures if the system were able to thermalize. In a thermalizing system, the temperature is defined via $\frac{1}{T} = \frac{\partial S}{\partial E}$, where the entropy $S(E)$ is maximized near the middle of the many-body spectrum (corresponding to $T \to \infty$) and vanishes near the edges of the spectrum (corresponding to $T \to 0$). Thus, "infinite temperature eigenstates" are those drawn from near the middle of the spectrum, and it is more correct to refer to energy densities rather than temperatures, since temperature is only defined in equilibrium. In MBL systems, the suppression of thermal fluctuations means that the properties of highly excited eigenstates are similar, in many respects, to those of ground states of gapped local Hamiltonians. This enables various forms of ground state order to be promoted to finite energy densities. We note that in thermalizing many-body systems, the notion of eigenstate order is congruent with the usual definition of phases. This is because the eigenstate thermalization hypothesis (ETH) implies that local observables (such as order parameters) computed in individual eigenstates agree with those computed in the Gibbs state at a temperature appropriate to the energy density of the eigenstate. On the other hand, MBL systems do not obey the ETH, and nearby many-body eigenstates have very different local properties. This is what enables individual MBL eigenstates to display order even if thermodynamic averages are forbidden from doing so. Localization-protected symmetry-breaking order Localization enables symmetry breaking orders at finite energy densities, forbidden in equilibrium by the Peierls-Mermin-Wagner theorems. Let us illustrate this with the concrete example of a disordered transverse field Ising chain in one dimension: $$H = -\sum_i \left( J_i \sigma_i^z \sigma_{i+1}^z + h_i \sigma_i^x + \lambda_i \sigma_i^x \sigma_{i+1}^x \right)$$ where the $\sigma_i$ are Pauli spin-1/2 operators in a chain of length $L$, all the couplings $\{J_i, h_i, \lambda_i\}$ are positive random numbers drawn from distributions with means $\{\bar{J}, \bar{h}, \bar{\lambda}\}$, and the system has Ising symmetry $P = \prod_i \sigma_i^x$, corresponding to flipping all spins in the $z$ basis. The $\lambda$ term introduces interactions, and the system is mappable to a free fermion model (the Kitaev chain) when $\lambda_i = 0$. Non-interacting Ising chain – no disorder Let us first consider the clean, non-interacting system: $J_i = J$, $h_i = h$, $\lambda_i = 0$.
In equilibrium, the ground state is ferromagnetically ordered with spins aligned along the $z$ axis for $J > h$, but is a paramagnet for $h > J$ and at any finite temperature (Fig 1a). Deep in the ordered phase, the system has two degenerate Ising symmetric ground states which look like "Schrödinger cat" or superposition states: $|\psi_\pm\rangle = \frac{1}{\sqrt{2}} \left( |\uparrow\uparrow\cdots\uparrow\rangle \pm |\downarrow\downarrow\cdots\downarrow\rangle \right)$. These display long-range order: $\lim_{|i-j|\to\infty} \langle \psi_\pm | \sigma_i^z \sigma_j^z | \psi_\pm \rangle \neq 0$. At any finite temperature, thermal fluctuations lead to a finite density of delocalized domain walls, since the entropic gain from creating these domain walls wins over the energy cost in one dimension. These fluctuations destroy long-range order, since the presence of fluctuating domain walls destroys the correlation between distant spins. Disordered non-interacting Ising chain Upon turning on disorder, the excitations in the non-interacting model ($\lambda_i = 0$) localize due to Anderson localization. In other words, the domain walls get pinned by the disorder, so that a generic highly excited eigenstate for $\bar{h} \ll \bar{J}$ looks like $|n_\pm\rangle \sim \frac{1}{\sqrt{2}} \left( |\uparrow\downarrow\downarrow\uparrow\cdots\rangle \pm |\downarrow\uparrow\uparrow\downarrow\cdots\rangle \right)$, where $n$ labels the eigenstate and the pattern of spins is eigenstate dependent. Note that a spin-spin correlation function evaluated in this state is non-zero for arbitrarily distant spins, but has a fluctuating sign depending on whether an even/odd number of domain walls is crossed between two sites. Whence, we say that the system has long-range spin-glass (SG) order. Indeed, for $\bar{J} > \bar{h}$, localization promotes the ground state ferromagnetic order to spin-glass order in highly excited states at all energy densities (Fig 1b). If one averages over eigenstates as in the thermal Gibbs state, the fluctuating signs cause the correlation to average out, as required by the Peierls theorem forbidding symmetry breaking of discrete symmetries at finite temperatures in 1D. For $\bar{h} > \bar{J}$, the system is paramagnetic (PM), and the eigenstates deep in the PM look like product states in the $x$ basis and do not show long range Ising order: $|n\rangle \sim |\rightarrow\leftarrow\leftarrow\rightarrow\cdots\rangle$. The transition between the localized PM and the localized SG at $\bar{J} \sim \bar{h}$ belongs to the infinite randomness universality class. Disordered interacting Ising chain Upon turning on weak interactions $\lambda_i \neq 0$, the Anderson insulator remains many-body localized and order persists deep in the PM/SG phases. Strong enough interactions destroy MBL, and the system transitions to a thermalizing phase. The fate of the MBL PM to MBL SG transition in the presence of interactions is presently unsettled, and it is likely this transition proceeds via an intervening thermal phase (Fig 1c). Detecting eigenstate order – measurable signatures While the discussion above pertains to sharp diagnostics of LPQO obtained by evaluating order parameters and correlation functions in individual highly excited many-body eigenstates, such quantities are nearly impossible to measure experimentally. Nevertheless, even though individual eigenstates aren't themselves experimentally accessible, order in eigenstates has measurable dynamical signatures. In other words, measuring a local, physically accessible observable in time, starting from a physically preparable initial state, still contains sharp signatures of eigenstate order. For example, for the disordered Ising chain discussed above, one can prepare random symmetry-broken initial states which are product states in the $z$ basis: $|\psi_0\rangle = |\uparrow\downarrow\downarrow\uparrow\cdots\rangle$. These randomly chosen states are at infinite temperature. Then, one can measure the local magnetization $\langle \sigma_i^z(t) \rangle$ in time, which acts as an order parameter for symmetry breaking (a minimal numerical sketch is given below).
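The following is a minimal exact-diagonalization sketch in Python (numpy and scipy assumed) of exactly this protocol: quench a small disordered Ising chain from a random product state and track the signed site-0 magnetization. The chain length, couplings and time step are arbitrary illustrative choices, not values from the references:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op(single, site, L):
    """Embed a single-site operator at position `site` of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for i in range(L):
        out = np.kron(out, single if i == site else id2)
    return out

L = 8
rng = np.random.default_rng(0)
J = 5.0 * rng.uniform(0.5, 1.5, L - 1)   # strong random Ising couplings (SG regime)
h = rng.uniform(0.5, 1.5, L)             # weak random transverse fields
lam = 0.1                                # weak interaction strength

# Hamiltonian of the disordered transverse field Ising chain defined above
H = -sum(J[i] * op(sz, i, L) @ op(sz, i + 1, L) for i in range(L - 1))
H -= sum(h[i] * op(sx, i, L) for i in range(L))
H -= sum(lam * op(sx, i, L) @ op(sx, i + 1, L) for i in range(L - 1))

# Random symmetry-broken product state in the z basis, e.g. |up,down,down,...>
pattern = rng.integers(0, 2, L)          # 0 = up, 1 = down
state = np.array([1.0 + 0j])
for p in pattern:
    state = np.kron(state, np.array([1, 0], dtype=complex) if p == 0
                    else np.array([0, 1], dtype=complex))

sz0 = op(sz, 0, L)
sign = 1.0 if pattern[0] == 0 else -1.0  # initial orientation of site 0
dt = 0.5
U = expm(-1j * H * dt)                   # exact time-evolution step
for step in range(1, 201):
    state = U @ state
    if step % 50 == 0:
        m = sign * np.real(state.conj() @ sz0 @ state)
        print(f"t = {step * dt:6.1f}   signed <sigma^z_0(t)> = {m:+.3f}")
```

With the couplings chosen deep in the spin-glass regime, the printed value stays close to +1; reducing the J couplings (or increasing h) toward the paramagnetic regime makes it decay toward zero, illustrating the dynamical distinction described next.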
It is straightforward to show that $\langle \sigma_i^z(t) \rangle$ saturates to a non-zero value even at infinitely late times in the symmetry-broken spin-glass phase, while it decays to zero in the paramagnet. The singularity in the eigenspectrum properties at the transition between the localized SG and PM phases translates into a sharp dynamical phase transition which is measurable. Indeed, a nice example of this is furnished by recent experiments detecting time-crystals in Floquet MBL systems, where the time crystal phase spontaneously breaks both time translation symmetry and spatial Ising symmetry, showing correlated spatiotemporal eigenstate order. Localization-protected topological order Similar to the case of symmetry breaking order, thermal fluctuations at finite temperatures can reduce or destroy the quantum correlations necessary for topological order. Once again, localization can enable such orders in regimes forbidden by equilibrium. This happens both for the so-called long-range entangled topological phases and for symmetry protected or short-range entangled topological phases. The toric code ($\mathbb{Z}_2$ gauge theory) in 2D is an example of the former, and the topological order in this phase can be diagnosed by Wilson loop operators. The topological order is destroyed in equilibrium at any finite temperature due to fluctuating vortices; however, these can get localized by disorder, enabling glassy localization-protected topological order at finite energy densities. On the other hand, symmetry protected topological (SPT) phases do not have any bulk long-range order, and are distinguished from trivial paramagnets by the presence of coherent gapless edge modes as long as the protecting symmetry is present. In equilibrium, these edge modes are typically destroyed at finite temperatures as they decohere due to interactions with delocalized bulk excitations. Once again, localization protects the coherence of these modes even at finite energy densities. The presence of localization-protected topological order could potentially have far-reaching consequences for developing new quantum technologies by allowing for quantum coherent phenomena at high energies. Floquet systems It has been shown that periodically driven or Floquet systems can also be many-body localized under suitable drive conditions. This is remarkable because one generically expects a driven many-body system to simply heat up to a trivial infinite temperature state (the maximum entropy state without energy conservation). However, with MBL, this heating can be evaded and one can again get non-trivial quantum orders in the eigenstates of the Floquet unitary, which is the time-evolution operator for one period. The most striking example of this is the time-crystal, a phase with long-range spatiotemporal order and spontaneous breaking of time translation symmetry. This phase is disallowed in thermal equilibrium, but can be realized in a Floquet MBL setting. References Quantum mechanics Quantum chaos theory
Localization-protected quantum order
[ "Physics" ]
2,656
[ "Theoretical physics", "Quantum mechanics" ]
52,944,974
https://en.wikipedia.org/wiki/Darcy%27s%20law%20for%20multiphase%20flow
Morris Muskat et al. developed the governing equations for multiphase flow (one vector equation for each fluid phase) in porous media as a generalisation of Darcy's equation (or Darcy's law) for water flow in porous media. The porous media are usually sedimentary rocks such as clastic rocks (mostly sandstone) or carbonate rocks.

$$\vec{u}_a = -\frac{k_{ra}}{\mu_a} \mathbf{K} \left( \nabla P - \rho_a \vec{g} \right) \quad \text{where } a = w, o, g$$

The present fluid phases are water, oil and gas, and they are represented by the subscripts a = w, o, g respectively. The gravitational acceleration with direction is represented as $\vec{g}$ or, equivalently, $g \nabla z$. Notice that in petroleum engineering the spatial co-ordinate system is right-hand-oriented with the z-axis pointing downward. The physical property that links the flow equations of the three fluid phases is the relative permeability of each fluid phase and the pressure. This property of the fluid-rock system (i.e. the water-oil-gas-rock system) is mainly a function of the fluid saturations, and it is linked to capillary pressure and the flowing process, implying that it is subject to hysteresis effects. In 1940 M.C. Leverett pointed out that in order to include capillary pressure effects in the flow equation, the pressure must be phase dependent. The flow equation then becomes

$$\vec{u}_a = -\frac{k_{ra}}{\mu_a} \mathbf{K} \left( \nabla P_a - \rho_a \vec{g} \right) \quad \text{where } a = w, o, g$$

Leverett also pointed out that the capillary pressure shows significant hysteresis effects. This means that the capillary pressure for a drainage process is different from the capillary pressure of an imbibition process with the same fluid phases. Hysteresis does not change the shape of the governing flow equation, but it increases (usually doubles) the number of constitutive equations for the properties involved in the hysteresis. During 1951-1970 commercial computers entered the scene of scientific and engineering calculations and model simulations. Computer simulation of the dynamic behaviour of oil reservoirs soon became a target for the petroleum industry, but the computing power was very weak at that time. With weak computing power, the reservoir models were correspondingly coarse, but upscaling of the static parameters was fairly simple and partly compensated for the coarseness. The question of upscaling relative permeability curves from the rock curves derived at core plug scale (which is often denoted the micro scale) to the coarse grid cells of the reservoir models (which is often called the macro scale) is much more difficult, and it became an important research field that is still ongoing. But the progress in upscaling was slow, and it was not until 1990-2000 that directional dependency of relative permeability and the need for tensor representation were clearly demonstrated, even though at least one capable method had already been developed in 1975. One such upscaling case is a slanted reservoir where the water (and gas) will segregate vertically relative to the oil in addition to the horizontal motion. The vertical size of a grid cell is also usually much smaller than the horizontal size of a grid cell, creating small and large flux areas respectively. All this requires different relative permeability curves for the x and z directions. Geological heterogeneities in the reservoirs, like laminas or crossbedded permeability structures in the rock, also cause directional relative permeabilities. This tells us that relative permeability should, in the most general case, be represented by a tensor. The flow equations then become

$$\vec{u}_a = -\frac{1}{\mu_a} \mathbf{K}_{ra} \mathbf{K} \left( \nabla P_a - \rho_a \vec{g} \right) \quad \text{where } a = w, o, g$$

The above-mentioned case reflected downdip water injection (or updip gas injection) or production by pressure depletion.
If you inject water updip (or gas downdip) for a period of time, it will give rise to different relative permeability curves in the x+ and x- directions. This is not a hysteresis process in the traditional sense, and it cannot be represented by a traditional tensor. It can be represented by an IF-statement in the software code, and it occurs in some commercial reservoir simulators. The process (or rather sequence of processes) may be due to a backup plan for field recovery, or the injected fluid may flow to another reservoir rock formation due to an unexpected open part of a fault or non-sealing cement behind the casing of the injection well. This option for relative permeability is seldom used, and we just note that it does not change (the analytical shape of) the governing equation, but increases (usually doubles) the number of constitutive equations for the properties involved. The above equation is a vector form of the most general equation for fluid flow in porous media, and it gives the reader a good overview of the terms and quantities involved. Before you go ahead and transform the differential equation into difference equations, to be used by the computers, you must write the flow equation in component form. The flow equation in component form (using the summation convention) is

$$u_{ai} = -\frac{1}{\mu_a} K_{raij} K_{jk} \left( \frac{\partial P_a}{\partial x_k} - \rho_a g_k \right) \quad \text{where } a = w, o, g \text{ and } i, j, k = 1, 2, 3$$

The Darcy velocity is not the velocity of a fluid particle, but the volumetric flux (frequently represented by the symbol $\vec{q}$) of the fluid stream. The fluid velocity in the pores (or, short but inaccurately, the pore velocity) is related to the Darcy velocity by the relation

$$\vec{v}_a = \frac{\vec{u}_a}{\phi S_a} \quad \text{where } a = w, o, g$$

in which $\phi$ is the porosity and $S_a$ the saturation of phase a. The volumetric flux is an intensive quantity, so it is not good at describing how much fluid is coming per time. The preferred variable for understanding this is the extensive quantity called the volumetric flow rate, which tells us how much fluid is coming out of (or going into) a given area per time, and it is related to the Darcy velocity by the relation

$$Q_a = \int_{A} \vec{u}_a \cdot d\vec{A} \quad \text{where } a = w, o, g$$

We notice that the volumetric flow rate is a scalar quantity and that the direction is taken care of by the normal vector of the surface (area) and the volumetric flux (Darcy velocity). In a reservoir model the geometric volume is divided into grid cells, and the area of interest now is the intersectional area between two adjoining cells. If these are true neighboring cells, the area is the common side surface, and if a fault is dividing the two cells, the intersection area is usually less than the full side surface of both adjoining cells. A version of the multiphase flow equation, before it is discretized and used in reservoir simulators, is thus

$$Q_a = -\frac{1}{\mu_a} \left( \mathbf{K}_{ra} \mathbf{K} \left( \nabla P_a - \rho_a \vec{g} \right) \right) \cdot \vec{A} \quad \text{where } a = w, o, g$$

In expanded (component) form it becomes

$$Q_a = -\frac{1}{\mu_a} K_{raij} K_{jk} \left( \frac{\partial P_a}{\partial x_k} - \rho_a g_k \right) A_i \quad \text{where } a = w, o, g$$

The (initial) hydrostatic pressure at a depth (or level) z above (or below) a reference depth z0 is calculated by

$$P_a(z) = P_a(z_0) + \int_{z_0}^{z} \rho_a g \, dz \quad \text{where } a = w, o, g$$

When calculations of hydrostatic pressure are executed, one normally does not apply a phase subscript, but switches formula / quantity according to what phase is observed at the actual depth; we have included the phase subscript here for clarity and consistency. However, when calculations of hydrostatic pressure are executed, one may use an acceleration of gravity that varies with depth in order to increase accuracy. If such high accuracy is not needed, the acceleration of gravity is kept constant, and the calculated pressure is called overburden pressure. Such high accuracy is not needed in reservoir simulations, so the acceleration of gravity is treated as a constant in this discussion.
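To make the tensor flow equation above concrete, here is a minimal numerical sketch in Python (using numpy); all property values, tensors and gradients are hypothetical illustrative inputs, not data from the references:

```python
import numpy as np

g = 9.81                                # m/s^2, constant gravity as assumed above
grad_z = np.array([0.0, 0.0, 1.0])      # z-axis points downward (petroleum convention)
K = np.diag([2e-13, 2e-13, 2e-14])      # absolute permeability tensor, m^2 (assumed)

# Hypothetical per-phase properties: diagonal relative-permeability tensors,
# viscosities (Pa*s) and densities (kg/m^3) for water, oil and gas.
phases = {
    "w": dict(Kr=np.diag([0.3, 0.3, 0.2]), mu=1.0e-3, rho=1000.0),
    "o": dict(Kr=np.diag([0.5, 0.5, 0.4]), mu=2.0e-3, rho=850.0),
    "g": dict(Kr=np.diag([0.1, 0.1, 0.1]), mu=2.0e-5, rho=150.0),
}

# A single phase-pressure gradient is reused for all phases for brevity;
# in a real model each phase has its own gradient via capillary pressure.
grad_P = np.array([-1.0e4, 0.0, 3.0e3])   # Pa/m

for name, p in phases.items():
    driving = grad_P - p["rho"] * g * grad_z        # (grad P_a - rho_a * g * grad z)
    u = -(p["Kr"] @ K @ driving) / p["mu"]          # Darcy flux u_a, m/s
    print(name, u)
```

Multiplying such a flux by the intersection area vector of two grid cells gives the volumetric flow rate used in the discretized simulator equations.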
The initial pressure in the reservoir model is calculated using the formula for (initial) overburden pressure, which is

$$P_a(z) = P_a(z_0) + g \int_{z_0}^{z} \rho_a \, dz \quad \text{where } a = w, o, g$$

In order to simplify the terms within the parenthesis of the flow equation, we can introduce a flow potential called the $\Psi$-potential, pronounced psi-potential, which is defined by

$$\Psi_a = P_a - g \int_{z_0}^{z} \rho_a \, dz \quad \text{where } a = w, o, g$$

It consists of two terms, which are the absolute pressure and the gravity head. To save computing time the integral can be calculated initially and stored as a table to be used in the computationally cheaper table-lookup. Introduction of the $\Psi$-potential implies that

$$\nabla \Psi_a = \nabla P_a - \rho_a \vec{g} \quad \text{where } a = w, o, g$$

The psi-potential is also frequently called the "datum pressure", since the function represents the pressure at any point in the reservoir after being transferred to the datum plane / depth z0. In practical engineering work it is very useful to refer pressures measured in wells to a datum level or to map the distribution of datum pressures throughout the reservoir. In this way the direction of fluid movement in the reservoir can be seen at a glance, since the datum pressure distribution is equivalent to the potential distribution. Two simple examples will clarify this. A reservoir may consist of several flow units that are separated by tight shale layers. Fluid from one reservoir or flow unit can enter a fault at one depth and exit the fault in another reservoir or flow unit at another depth. Likewise, fluid can enter a production well in one flow unit and exit the production well in another flow unit or reservoir. The multiphase flow equation for porous media now becomes

$$\vec{u}_a = -\frac{1}{\mu_a} \mathbf{K}_{ra} \mathbf{K} \nabla \Psi_a \quad \text{where } a = w, o, g$$

This multiphase flow equation has traditionally been the starting point for the software programmer when he/she starts transforming the equation from a differential equation to a difference equation in order to write program code for a reservoir simulator to be used in the petroleum industry. The unknown dependent variables have traditionally been oil pressure (for oil fields) and volumetric quantities for the fluids involved, but one may rewrite the total set of model equations to be solved for oil pressure and mass or mole quantities for the fluid components involved. The above equations are written in SI units, and we are assuming that all material properties are also defined within the SI units. A result of this is that the above versions of the equations do not need any unit conversion constants. The petroleum industry applies a variety of units, of which at least two have some prevalence. If you want to apply units other than SI units, you must establish correct unit conversion constants for the multiphase flow equations. Conversion of units The above equations are written in SI units (short SI), suppressing that the unit D (darcy) for the absolute permeability is defined in non-SI units. That is why there are no unit-related constants. The petroleum industry does not use the SI units. Instead, it uses a special version of SI units that we will call Applied SI units, or another set of units called Field units, which has its origin in the US and UK. Temperature is not included in the equations, so we can use the factor-label method (also called the unit-factor method), which says that if we have a variable/parameter with unit H, we multiply this variable/parameter by a conversion constant C and then the variable gets the unit G that we want. This means that we apply the transformation H*C = G, and the non-SI effect of the definition of permeability is included in the conversion factor C for permeability.
The transformation H*C = G applies for every spatial dimension, so we concentrate on the main terms, neglecting the signs, and then complete the parenthesis with the gravity term. Before we start the conversion, we notice that both the original (single phase) flow equation of Darcy and the generalized (or extended) multiphase flow equations of Muskat et al. use reservoir velocity (volume flux), volume rate and densities. The units of these quantities are given a prefix r (or R) in order to distinguish them from their counterparts at standard surface conditions, which get a prefix s (or S). This is especially important when we convert the equations to Field units. The reason for going into detail on the seemingly simple topic of unit conversion is that many people make mistakes when doing unit conversions. Now we are ready to start the conversion work. First, we take the flux version of the equation and rewrite it so that the composite conversion factor appears together with the permeability parameter. Here we note that our equation is written in SI units, and that the group of variables/parameters (hereafter called parameters for short) on the right-hand side constitutes a dimensionless group. Now we convert each parameter and collect these conversions into a single conversion constant. We note that our list of conversion constants (the C's) goes from applied units to SI units, and this is very common for such conversion lists. We therefore assume that our parameters are entered in applied units and convert them (back) to SI units. Notice that we have removed relative permeability, which is a dimensionless parameter. This composite conversion factor is called Darcy's constant for the flux formulated equation, here denoted $C_{uD}$; its numerical value depends on the chosen set of applied units. Since our parameter group is dimensionless in base SI units, we do not need to include the SI units in the units for our composite conversion factor. Next, we take the rate version of the equation and rewrite it in the same way. Again we convert each parameter and collect these conversions into a single conversion constant, and again we have removed relative permeability, which is a dimensionless parameter. This composite conversion factor is called Darcy's constant for the rate formulated equation, here denoted $C_{QD}$. The pressure gradient and the gravity term are identical for the flux and the rate equations, and will, therefore, be discussed only once. The task here is to have a gravity term that is consistent with the applied units ("H-units") for the pressure gradient. We must, therefore, place our conversion factor together with the gravity parameters. We write "the parenthesis" in SI units as

$$\left( \nabla P_a - \rho_a \vec{g} \right) \quad \text{where } a = w, o, g$$

and rewrite it, with the consistency factor placed on the gravity term, as

$$\left( \nabla P_a - C_{grav} \, \rho_a \vec{g} \right) \quad \text{where } a = w, o, g$$

Now we convert each parameter and collect these conversions into a single conversion constant. First, we note that our equation is written in SI units, and that the group of parameters on the right-hand side constitutes a dimensionless group. We, therefore, assume that our parameters are entered in applied units and convert them (back) to SI units. This gives the composite conversion factor for the consistency-conversion, denoted $C_{grav}$ here. Since our parameter group is dimensionless in SI units, we do not need to include the SI units in the units for our composite conversion factor.
This is it for the analytical equations, but when the programmer transforms the flow equation into a finite difference equation and further into a numerical algorithm, they are eager to minimize the number of computational operations, and constants can often be reduced in number by fusing them into a single composite constant. Using industry units, the flux version of the flow equation in vector form becomes

$$\vec{u}_a = -C_{uD} \frac{1}{\mu_a} \mathbf{K}_{ra} \mathbf{K} \left( \nabla P_a - C_{grav} \, \rho_a \vec{g} \right) \quad \text{where } a = w, o, g$$

and in component form it becomes

$$u_{ai} = -C_{uD} \frac{1}{\mu_a} K_{raij} K_{jk} \left( \frac{\partial P_a}{\partial x_k} - C_{grav} \, \rho_a g_k \right) \quad \text{where } a = w, o, g \text{ and } i, j, k = 1, 2, 3$$

Using industry units, the rate version of the flow equation in vector form becomes

$$Q_a = -C_{QD} \frac{1}{\mu_a} \left( \mathbf{K}_{ra} \mathbf{K} \left( \nabla P_a - C_{grav} \, \rho_a \vec{g} \right) \right) \cdot \vec{A} \quad \text{where } a = w, o, g$$

and in component form it becomes

$$Q_a = -C_{QD} \frac{1}{\mu_a} K_{raij} K_{jk} \left( \frac{\partial P_a}{\partial x_k} - C_{grav} \, \rho_a g_k \right) A_i \quad \text{where } a = w, o, g$$

Conversion of units is a fairly rare activity, even for technical professionals, but that is also the reason why people forget how to do it correctly. See also Petroleum reservoir Reservoir engineering Reservoir modeling Reservoir simulation Black oil equations Conversion of units – includes tables of conversion factors Dimensionless numbers in fluid mechanics Fermi problem – used to teach dimensional analysis Rayleigh's method of dimensional analysis Similitude (model) – an application of dimensional analysis System of measurement Units of measurement List of dimensionless quantities Orders of magnitude (numbers) Dimensional analysis Normalization (statistics) and standardized moment, the analogous concepts in statistics Buckingham π theorem References Fluid dynamics
Darcy's law for multiphase flow
[ "Chemistry", "Engineering" ]
3,177
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
52,945,125
https://en.wikipedia.org/wiki/First%20and%20second%20fundamental%20theorems%20of%20invariant%20theory
In algebra, the first and second fundamental theorems of invariant theory concern the generators and the relations of the ring of invariants in the ring of polynomial functions for classical groups (roughly, the first concerns the generators and the second the relations). The theorems are among the most important results of invariant theory. Classically the theorems are proved over the complex numbers. But characteristic-free invariant theory extends the theorems to a field of arbitrary characteristic. First fundamental theorem for the general linear group The theorem states that the ring of $\operatorname{GL}(V)$-invariant polynomial functions on $V^p \oplus (V^*)^q$ is generated by the functions $\langle \alpha_i \mid v_j \rangle$, where the $v_j$ are in $V$ and the $\alpha_i$ are in $V^*$. Second fundamental theorem for general linear group Let V, W be finite dimensional vector spaces over the complex numbers. Then the only $\operatorname{GL}(V) \times \operatorname{GL}(W)$-invariant prime ideals in the polynomial ring $\mathbb{C}[\operatorname{Hom}(V, W)]$ are the determinant ideals $I_k$ generated by the determinants of all the $k \times k$ minors. Notes References Further reading Ch. II, § 4. of E. Arbarello, M. Cornalba, P.A. Griffiths, and J. Harris, Geometry of algebraic curves. Vol. I, Grundlehren der Mathematischen Wissenschaften, vol. 267, Springer-Verlag, New York, 1985. MR0770932 Hanspeter Kraft and Claudio Procesi, Classical Invariant Theory, a Primer Invariant theory
First and second fundamental theorems of invariant theory
[ "Physics", "Mathematics" ]
263
[ "Symmetry", "Algebra stubs", "Group actions", "Theorems in algebra", "Algebra", "Mathematical problems", "Mathematical theorems", "Invariant theory" ]
52,947,580
https://en.wikipedia.org/wiki/Nauruan%20navigational%20system
The Nauruan navigational system is a way of expressing direction, similar to North, South, East and West, but limitations in the system mean that it is unable to be used outside of Nauru. The system is constructed using two main points, Ganokoro and Arijeijen. Ganokoro stands for a place in Nauru that was considered a place of sunrise, and Arijeijen was a place of sunset on the island. Arijeijen was close to the place that once hosted a cemetery of the Chinese settlements of the island. The four main directions are pago, poe, Pawa (Apwewa) and Pwiju (apwijiuw). The word Apuwijiuw was generally translated as "eastwards", and stands for a direction towards Ganokoro, whereas the word Apwewa is translated as "westwards", and stands for a direction towards Arijeijen. The word pago stands for a direction towards the beach (as in the coast of the island) and poe stands for the direction towards the inland of the island, and the words are used in the form rodu apago and roga apoe. References Geography of Nauru Orientation (geometry)
Nauruan navigational system
[ "Physics", "Mathematics" ]
254
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
52,951,151
https://en.wikipedia.org/wiki/Scanning%20microscopy
Scanning microscopy may refer to: Scanning probe microscopy Atomic force microscopy Scanning tunneling microscope Scanning electron microscope Scanning capacitance microscopy Near-field scanning optical microscope Microscopes
Scanning microscopy
[ "Chemistry", "Technology", "Engineering" ]
35
[ "Microscopes", "Measuring instruments", "Microscopy" ]
52,953,124
https://en.wikipedia.org/wiki/Nucleosome%20remodeling%20factor
Nucleosome Remodeling Factor (NURF) is an ATP-dependent chromatin remodeling complex first discovered in Drosophila melanogaster (fruit fly) that catalyzes nucleosome sliding in order to regulate gene transcription. It contains an ISWI ATPase, making it part of the ISWI family of chromatin remodeling complexes. NURF is highly conserved among eukaryotes and is involved in transcriptional regulation of developmental genes. Discovery NURF was first purified from the model organism Drosophila melanogaster by Toshio Tsukiyama and Carl Wu in 1995. Tsukiyama and Wu described NURF’s chromatin remodeling activity on the hsp70 promoter. It was later discovered that NURF regulates transcription in this manner for hundreds of genes. A human ortholog of NURF, called hNURF, was isolated in 2003. Structure The NURF complex in Drosophila contains four subunits: NURF301, NURF140, NURF55, and NURF38. NURF140 is an ISWI ATPase, distinguishable by its HAND, SANT, and SLIDE domains (SANT-like but with several insertions). The NURF complex in Homo sapiens has three subunits, BPTF, SNF2L, and pRBAP46/48, homologous to NURF301, NURF140, and NURF55, respectively. There is no human homolog for NURF38. Function NURF interacts with chromatin by binding to modified histones or interacting with various transcription factors. NURF catalyzes nucleosome sliding in either direction on DNA without any apparent modifications to the histone octamer itself. NURF is essential for the expression of homeotic genes. The ISWI ATPase specifically recognizes intact N-terminal histone tails. In Drosophila, NURF interacts with the transcription factor GAGA to remodel chromatin at the hsp70 promoter, and null mutations in the Nurf301 subunit prevent larval metamorphosis. Other NURF mutants cause the development of melanotic tumors from larval blood cells. In humans, hNURF is involved in neuronal development and has been shown to enhance neurite outgrowth in vitro. References Molecular biology Nuclear organization
Nucleosome remodeling factor
[ "Chemistry", "Biology" ]
499
[ "Biochemistry", "Nuclear organization", "Cellular processes", "Molecular biology" ]
52,953,323
https://en.wikipedia.org/wiki/Peritrich%20nuclear%20code
The peritrich nuclear code (translation table 30) is a genetic code used by the nuclear genome of the peritrich ciliates Vorticella and Opisthonecta.

The code (30)
  AAs    = FFLLSSSSYYEECC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG
  Starts = --------------*--------------------M----------------------------
  Base1  = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
  Base2  = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
  Base3  = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG

Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), and Valine (Val, V). Differences from the standard code As the AAs line shows, the codons TAA and TAG encode glutamic acid (Glu, E) in this code rather than acting as stop codons. See also List of all genetic codes: translation tables 1 to 16, and 21 to 31. The genetic codes database. References Molecular genetics Gene expression Protein biosynthesis
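Because the four 64-character strings above fully specify the code, a translation table can be built directly from them. The following is a minimal Python sketch; the example sequence is made up for illustration.

# Build translation table 30 from the four 64-character strings above.
AAS   = "FFLLSSSSYYEECC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASE1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

CODON_TABLE = {b1 + b2 + b3: aa
               for b1, b2, b3, aa in zip(BASE1, BASE2, BASE3, AAS)}

def translate(dna):
    """Translate a DNA sequence codon by codon under table 30."""
    usable = len(dna) - len(dna) % 3
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, usable, 3))

# TAA and TAG give glutamic acid (E) here, unlike the standard code:
print(CODON_TABLE["TAA"], CODON_TABLE["TAG"])  # -> E E
print(translate("ATGTAAGCT"))                  # -> MEA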
Peritrich nuclear code
[ "Chemistry", "Biology" ]
532
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
52,954,530
https://en.wikipedia.org/wiki/Intermediate-Current%20Stability%20Experiment
The Intermediate-Current Stability Experiment, or ICSE, was a magnetic fusion energy reactor designed by the UKAEA ("Harwell") design team. It was intended to be the follow-on design to the ZETA, incorporating a high-speed current pulse system that was expected to improve the stability of the plasma and allow fusion reactions to take place. Construction began in 1959, starting with its building, known as "D-1", but after Harwell's new director expressed concerns about some of the theoretical assumptions being made about the design, the project was canceled in August 1960. Parts that were already built were scavenged by other teams. History ZETA and ZETA II The Harwell fusion team completed construction of the ZETA reactor in August 1957. This was the world's first truly large-scale fusion device, both in size and in terms of the power fed into the plasma. After the machine proved stable, the teams began introducing deuterium fuel into the mix, and immediately noticed neutrons being released. Neutrons are the most easily seen results of fusion reactions, but the team was highly cautious due to several warnings of non-nuclear neutrons from teams in the US and USSR. A series of diagnostic "shots" through September and October were used to characterize the plasma. Spectrographic analysis of the plasma taken through small windows in the reactor's torus suggested the plasma was at a temperature of somewhere between 1 and 5 million degrees, which then-current theory suggested would cause a fusion rate within a factor of two of what was being measured. It appeared the neutrons were indeed from fusion events. The apparent success led to plans for a much larger follow-on reactor known as ZETA II. Whereas the goal of ZETA had been to generate low levels of fusion reactions, the goal for ZETA II was to produce so many reactions that the energy they released would be greater than the energy fed into the system, a condition known as "break even" or Q=1. To reach these energy levels, the reactor would have to be much larger and more powerful than the original ZETA, and it would be difficult to find room for it at Harwell. The need for more room led John Cockcroft to suggest it be moved to the new Winfrith location, arguing that it was a prototype for a commercial machine, like the other reactors being built there. This was highly contentious; many members of the fusion team at Harwell were uninterested in moving and argued that losing the theoretical support at Harwell would be an enormous problem. The matter came to a head in a mid-January 1958 meeting which did not go smoothly. But any anger over the issue of choosing a site for ZETA II was completely overshadowed by the impending announcement of the ZETA results. This took place on Saturday 22 January 1958, with careful wording to note that the source of the neutrons had not yet been verified and it was not certain that they were from fusion. However, the assembled press reporters were not happy with these statements and continued to press Cockcroft on the issue. He eventually stated that in his own opinion they were 90% likely to be due to fusion. The reporters took this as a statement of fact, and the Sunday papers all claimed that fusion had been successfully achieved. A press release from the UKAEA to the contrary was largely ignored, and concerns expressed by researchers from other countries were dismissed as jingoism. However, further research on ZETA demonstrated the neutrons were indeed not from fusion.
Cockcroft was forced to publish a humiliating retraction in May, and the ZETA II plans were thrown into disarray. By this time the design for ZETA II had grown considerably: it now featured a torus of in diameter, the toroidal magnets that provided stability had increased in size 30 times, and the current pulse required to operate it demanded an enormous power supply, using details developed and patented at Harwell. References Fusion power
Intermediate-Current Stability Experiment
[ "Physics", "Chemistry" ]
801
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
51,407,573
https://en.wikipedia.org/wiki/Hierarchical%20equations%20of%20motion
The hierarchical equations of motion (HEOM) technique, derived by Yoshitaka Tanimura and Ryogo Kubo in 1989, is a non-perturbative approach developed to study the evolution of a density matrix of quantum dissipative systems. The method can treat system-bath interaction non-perturbatively, as well as non-Markovian noise correlation times, without the hindrance of the typical assumptions that conventional Redfield (master) equations suffer from, such as the Born, Markovian and rotating-wave approximations. HEOM is applicable even at low temperatures where quantum effects are not negligible. The hierarchical equation of motion for a system in a harmonic Markovian bath is Hierarchical equations of motion HEOMs are developed to describe the time evolution of the density matrix for an open quantum system. It is a non-perturbative, non-Markovian approach to propagating a quantum state in time. Motivated by the path integral formalism presented by Feynman and Vernon, Tanimura derived the HEOM from a combination of statistical and quantum dynamical techniques. Using a two-level spin-boson system Hamiltonian Characterising the bath phonons by the spectral density By writing the density matrix in path integral notation and making use of the Feynman–Vernon influence functional, all the bath coordinates in the interaction terms can be grouped into this influence functional, which in some specific cases can be calculated in closed form. Assuming a high temperature heat bath with the Drude spectral distribution and taking the time derivative of the path-integral-form density matrix and writing it in hierarchical form yields where destroys system excitation and hence can be referred to as the relaxation operator. The second term in is the temperature correction term with the inverse temperature and the "hyper-operator" notation is introduced. As with Kubo's stochastic Liouville equation in hierarchical form, the counter can go up to infinity, which is a problem numerically; however, Tanimura and Kubo provide a method by which the infinite hierarchy can be truncated to a finite set of differential equations where is determined by some constraint sensitive to the characteristics of the system, i.e. frequency, amplitude of fluctuations, bath coupling, etc. The "terminator" defines the depth of the hierarchy. A simple relation to eliminate the term is found. With this terminator the hierarchy is closed at the depth of the hierarchy by the final term. The statistical nature of the HEOM approach allows information about the bath noise and system response to be encoded into the equation of motion, curing the infinite energy problem of Kubo's SLE by introducing the relaxation operator that ensures a return to equilibrium. Computational cost When the open quantum system is represented by levels and baths with each bath response function represented by exponentials, a hierarchy with layers will contain: matrices, each with complex-valued (containing both real and imaginary parts) elements. Therefore, the limiting factor in HEOM calculations is the amount of RAM required, since if one copy of each matrix is stored, the total RAM required would be: bytes (assuming double-precision). Implementations The HEOM method is implemented in a number of freely available codes. A number of these are at the website of Yoshitaka Tanimura, including a version for GPUs which used improvements introduced by David Wilkins and Nike Dattani. The nanoHUB version provides a very flexible implementation.
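Returning to the computational cost discussed above, the following Python sketch estimates the RAM requirement under one common truncation scheme, in which the number of auxiliary density operators (ADOs) for M baths with K exponentials each, truncated at total depth L, is the binomial coefficient C(L + MK, MK); both this counting rule and the parameter values are illustrative assumptions, not specifics from any one HEOM code.

from math import comb

def heom_ram_bytes(n_levels, n_baths, n_exp, depth):
    """Rough RAM estimate for a HEOM calculation (double precision).

    Assumes the triangular truncation in which the number of ADOs is
    C(depth + n_baths*n_exp, n_baths*n_exp).  Each ADO is an
    n_levels x n_levels complex matrix (2 * 8 bytes per element).
    """
    n_ados = comb(depth + n_baths * n_exp, n_baths * n_exp)
    bytes_per_matrix = n_levels ** 2 * 16
    return n_ados * bytes_per_matrix

# Illustrative numbers: a 7-level system, one bath per level,
# one exponential per bath response function, hierarchy depth 4.
print(heom_ram_bytes(7, 7, 1, 4) / 1e6, "MB")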
An open source parallel CPU implementation is available from the Schulten group. See also Quantum master equation Open quantum system Fokker–Planck equation Quantum dynamical semigroup Quantum dissipation References Quantum mechanics
Hierarchical equations of motion
[ "Physics" ]
738
[ "Theoretical physics", "Quantum mechanics" ]
51,413,131
https://en.wikipedia.org/wiki/Free%20field%20%28acoustics%29
In acoustics, a free field is a situation or space in which no sound reflections occur. Characteristics The lack of reflections in a free field means that any sound received by a listener or microphone is determined entirely by the direct sound from the source. This makes the free field a direct sound field. In a free field, sound is attenuated with increased distance according to the inverse-square law. Examples and uses In nature, free field conditions occur only when sound reflections from the floor can be ignored, e.g. on new snow in a field, or approximately over good sound-absorbing ground (leaf litter, dry sand, etc.) Free field conditions can be artificially produced in anechoic chambers. In particular, free field conditions play a major role in acoustic measurements and sound perception experiments, as results are isolated from room reflections. With voice and sound recordings, one often seeks a condition free from sound reflections similar to a free field, even when a specifically desired spatial impression will be added during post-processing, because then the recording is not distorted by any sound reflections of the recording room. In the simple example shown in Figure 1, a singular sound source emits sound evenly and spherically with no obstructions. Equations The sound intensity at any point in a free field falls with the square of the distance r (in meters) from the source: I = W / (4πr²), where W is the sound power of the source. The corresponding mean-square sound pressure is p² = ρcW / (4πr²), where ρ and c are the air density and speed of sound respectively. Measuring the sound pressure level at a reference distance Rm from the source allows the level at another distance r to be obtained more easily than by other methods: Lp(r) = Lp(Rm) − 20 log10(r / Rm). This means that each time the distance from the source doubles, the level decreases by 6 dB. However, if the sound field is not truly free of reflections, a directivity factor Q will help "characterise the directional sound radiation properties of a source." References See also Acoustics Thought experiments in physics
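As an illustration of the inverse-square relation above, the following minimal Python sketch propagates a measured level to another distance; the measured values are invented for the example.

import math

def spl_at_distance(spl_ref_db, r_ref_m, r_m):
    """Free-field sound pressure level at distance r_m, given a level
    measured at a reference distance (inverse-square law: the level
    drops 6 dB for each doubling of distance)."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

# Illustrative values: 94 dB measured at 1 m -> level at 8 m.
print(spl_at_distance(94.0, 1.0, 8.0))  # 94 - 20*log10(8) = about 75.9 dB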
Free field (acoustics)
[ "Physics" ]
414
[ "Classical mechanics", "Acoustics" ]
68,531,530
https://en.wikipedia.org/wiki/Ocean%20optics
Ocean optics is the study of how light interacts with water and the materials in water. Although research often focuses on the sea, the field broadly includes rivers, lakes, inland waters, coastal waters, and large ocean basins. How light acts in water is critical to how ecosystems function underwater. Knowledge of ocean optics is needed in aquatic remote sensing research in order to understand what information can be extracted from the color of the water as it appears from satellite sensors in space. The color of the water as seen by satellites is known as ocean color. While ocean color is a key theme of ocean optics, optics is a broader term that also includes the development of underwater sensors using optical methods to study much more than just color, including ocean chemistry, particle size, imaging of microscopic plants and animals, and more. Key terminology Optically deep Where waters are “optically deep,” the bottom does not reflect incoming sunlight, and the seafloor cannot be seen by humans or satellites. The vast majority of the world's oceans by area are optically deep. Optically deep water can still be relatively shallow water in terms of total physical depth, as long as the water is very turbid, such as in estuaries. Optically shallow Where waters are “optically shallow,” the bottom reflects light and often can be seen by humans and satellites. Here, ocean optics can also be used to study what is under the water. Based on what color they appear to sensors, researchers can map habitat types, including macroalgae, corals, seagrass beds, and more. Mapping shallow-water environments requires knowledge of ocean optics because the color of the water must be accounted for when looking at the color of the seabed environment below. Inherent optical properties (IOPs) Inherent optical properties (IOPs) depend on what is in the water. These properties stay the same no matter what the incoming light is doing (daytime or nighttime, low sun angle or high sun angle). Absorption Water with large amounts of dissolved substances, such as lakes with large amounts of colored dissolved organic matter (CDOM), experiences high light absorption. Phytoplankton and other particles also absorb light. Scattering Areas with sea ice, estuaries with large amounts of suspended sediments, and lakes with large amounts of glacial flour are examples of water bodies with high light scattering. All particles scatter light to some extent, including plankton, minerals, and detritus. Particle size affects how much scattering happens at different colors; for example, very small particles scatter far more light at blue wavelengths than at longer ones, which is why the ocean and the sky are generally blue (called Rayleigh scattering). Without scattering, light would not “go” anywhere (outside of a direct beam from the sun or other source) and we would not be able to see the world around us. Attenuation Attenuation in water, also called beam attenuation or the beam attenuation coefficient, is the sum of all absorption and scattering. Attenuation of a light beam in one specific direction can be measured with an instrument called a transmissometer. Apparent optical properties (AOPs) Apparent optical properties (AOPs) depend on what is in the water (IOPs) and what is going on with the incoming light from the Sun.
AOPs depend most strongly on IOPs and only depend somewhat on incoming light aka the “light field.” Characteristics of the light field that can affect AOP measurements include the angle at which light hits the water surface (high in the sky vs. low in the sky, and from which compass direction) and the weather and sky conditions (clouds, atmospheric haze, fog, or sea state aka roughness of the surface of the water). Remote sensing reflectance (Rrs) Remote sensing reflectance (Rrs) is a measure of light radiating out from beneath the ocean surface at all colors, normalized by incoming sunlight at all colors. Because Rrs is a ratio, it is slightly less sensitive to what is going on with the light field (such as the angle of the sun or atmospheric haziness). Rrs is measured using two paired spectroradiometers that simultaneously measure light coming in from the sky and light coming up from the water below at many wavelengths. Since it is a measurement of a light-to-light ratio, the energy units cancel out, and Rrs has the units of per steradian (sr−1) due to the angular nature of the measurement (upwelling light is measured at a specific angle, and incoming light is measured on a flat plane from a half-hemispherical area above the water surface). Light attenuation coefficient (Kd) Kd is the diffuse (or downwelling) coefficient of light attenuation, also called simply light attenuation, the vertical extinction coefficient, or the extinction coefficient. Kd describes the rate of decrease of light with depth in water, in units of per meter (m−1). The “d” stands for downwelling light, which is light coming from above the sensor in a half-hemispherical shape (aka half of a basketball). Scientists sometimes use Kd to describe the decrease in the total visible light available for plants in terms of photosynthetically active radiation (PAR) – called “Kd(PAR).” In other cases, Kd can describe the decrease in light with depth over a spectrum of colors or wavelengths, usually written as “Kd(λ).” At a single wavelength, Kd describes the decrease with depth of one color of light, such as the decrease in blue light at the wavelength 490 nm, written as “Kd(490).” In general, Kd is calculated using Beer's Law and a series of light measurements collected from just under the water surface down through the water at many depths. Closure “Closure” refers to how optical oceanographers measure the consistency of models and measurements. Models refer to anything that is not explicitly measured in the water, including satellite-derived variables that are estimated using empirical relationships (for example, satellite-derived chlorophyll-a concentration is estimated from the ratios between green and blue remote sensing reflectance using an empirical relationship). Closure includes measurement closure, model closure, model-data closure, and scale closure. Where model-data closure experiments show misalignment between data and models, the cause of the misalignment may be due to measurement error, issues with the model, both, or some other external factor. Focus areas Ocean optics has been applied to study topics like primary production, phytoplankton, zooplankton, shallow-water habitats like seagrass beds and coral reefs, marine biogeochemistry, heating of the upper ocean, and carbon export to deep waters by way of the ocean biological pump. The portion of the electromagnetic spectrum usually involved in ocean optics is ultraviolet through infrared, about 300 nm to less than 2000 nm wavelengths.
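Returning to the attenuation coefficient defined above: since Beer's Law makes ln(Ed) linear in depth with slope −Kd, Kd can be estimated by a least-squares fit to an irradiance profile. The following Python sketch shows the idea; the depth and irradiance values are invented for illustration.

import numpy as np

# Depths (m) and downwelling irradiance Ed (arbitrary units) at one
# wavelength -- illustrative values, not real measurements.
z = np.array([1.0, 2.0, 5.0, 10.0, 15.0])
Ed = np.array([80.0, 64.0, 33.0, 10.8, 3.6])

# Beer's Law: Ed(z) = Ed(0) * exp(-Kd * z), so ln(Ed) is linear in z
# with slope -Kd.  Fit the slope by least squares.
slope, intercept = np.polyfit(z, np.log(Ed), 1)
Kd = -slope
print(f"Kd = {Kd:.3f} per meter")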
Common optical sensors used in oceanography The most widely used optical oceanographic sensors are PAR sensors, chlorophyll-a fluorescence sensors (fluorometers), and transmissometers. These three instruments are frequently mounted on CTD (conductivity-temperature-depth) rosette samplers. These instruments have been used for many years on CTD-rosettes in global repeat oceanographic surveys like the CLIVAR GO-SHIP campaign. Particle size in the ocean Optical instruments are often used to measure the size spectrum of particles in the ocean. For example, phytoplankton organisms can range in size from a few microns (micrometers, μm) to hundreds of microns. The size of particles is often measured to estimate how quickly particles will sink, and therefore how efficiently plants can sequester carbon in the ocean's biological pump. Imaging of ocean particles and organisms Scientists study individual tiny objects such as plankton and detritus particles using flow cytometry and in situ camera systems. Flow cytometers measure sizes and take photographs of individual particles flowing through a tube system; one such instrument is the Imaging FlowCytoBot (IFCB). In situ camera systems are deployed over the side of a research vessel, alone or attached to other equipment, and they capture photographs of the water itself to image the particles present in the water; one such instrument is the Underwater Vision Profiler (UVP). Other imaging technologies used in the ocean include holography and particle imaging velocimetry (PIV), which uses 3D video footage to track the movement of underwater particles. Research in support of satellite remote sensing Ocean optics research done “in situ” (from research vessels, small boats, or on docks and piers) supports research that uses satellite data. In situ optical measurements provide a way to: 1) calibrate satellite sensors when they are just beginning to collect data, 2) develop algorithms to derive products or variables like chlorophyll-a concentration, and 3) validate data products derived from satellites. Using satellite data, researchers estimate things like particle size, carbon, water quality, water clarity, and bottom type based on the color profile as seen by satellite; all of these estimations (aka models) must be validated by comparing them to optical measurements made in situ. In situ data are often available from publicly accessible data libraries like the SeaBASS data archive. Major contributing scientists Oceanographers, physicists, and other scientists who have made major contributions to the field of ocean optics include (incomplete list): David Antoine, Marcel Babin, Paula Bontempi, Emmanuel Boss, Annick Bricaud, Kendall Carder, Ivona Cetinic, Edward Fry, Heidi Dierssen, David Doxaran, Gene Carl Feldman, Howard Gordon, Chuanmin Hu, Nils Gunnar Jerlov, George Kattawar, John Kirk, ZhongPing Lee, Hubert Loisel, Stephane Maritorena, Michael Mishchenko, Curtis Mobley, Bruce Monger, Andre Morel, Michael Morris, Norm Nelson, Mary Jane Perry, Rudolph Preisendorfer, Louis Prieur, Chandrasekhara Raman, Collin Roesler, Rüdiger Röttgers, David Siegel, Raymond Smith, Heidi Sosik, Dariusz Stramski, Michael Twardowski, Talbot Waterman, Jeremy Werdell, Ken Voss, Charles Yentsch, and Ronald Zaneveld. Education While ocean optics is an interdisciplinary field of study that applies to a wide range of topics, it is not often taught as a course in graduate programs for marine science and oceanography.
Two summer-term courses have been developed for graduate students from many different institutions. First, there is a summer lecture series operated by the International Ocean Colour Coordinating Group (IOCCG), which usually takes place in France. Second, there is an ongoing course in the United States called the “Optical Oceanography Class” or “Ocean Optics Class”, held first in Washington State and later in Maine, which has been running continuously since 1985. For independent learning, Curt Mobley, Collin Roesler, and Emmanuel Boss wrote the Ocean Optics Web Book as an open-access online guide. See also Related fields and topics: Atmospheric optics Color of water Electromagnetic spectrum History of optics Oceanography Ocean color Optical depth Spectral color Transparency and translucency Visible spectrum Water clarity Water remote sensing Water quality Inherent and apparent optical properties and in-water methods: Absorption (electromagnetic radiation) Argo (oceanography) Attenuation coefficient Beer-Lambert Law Marine optical buoy Scattering Secchi disk Remote sensing and radiometric methods: Albedo Atmospheric correction NASA Earth Science Spectralon References Further reading Ocean Optics Web Book Oceanography Applied and interdisciplinary physics Scattering, absorption and radiative transfer (optics) Optics Marine biology Aquatic ecology Biological oceanography Water Earth sciences Earth observation in-situ sensors
Ocean optics
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
2,443
[ "Environmental instrumentation", " absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Optics", "Hydrology", "Water", "Oceanography", "Marine biology", "Scattering", " molecular", "Ecosystems", "Earth observation in-situ sensors", "Atomic", "Aquatic ecolo...
60,667,144
https://en.wikipedia.org/wiki/Bais%20%28wine%29
Bais is a traditional Filipino mead from the Mandaya and Dibabawon Manobo of northeastern Mindanao. It is made from a mixture of honey and water at varying proportions. It is fermented for at least five days to a month or more. See also Byais Kabarawan Intus Mead Sima References Fermented drinks Philippine alcoholic drinks Filipino cuisine
Bais (wine)
[ "Biology" ]
77
[ "Fermented drinks", "Biotechnology products" ]
60,672,870
https://en.wikipedia.org/wiki/Surface%20growth
In mathematics and physics, surface growth refers to models used in the dynamical study of the growth of a surface, usually by means of a stochastic differential equation of a field. Examples Popular growth models include: KPZ equation Dimer model Eden growth model SOS model Self-avoiding walk Abelian sandpile model Kuramoto–Sivashinsky equation (or the flame equation, for studying the surface of a flame front) They are studied for their fractal properties, scaling behavior, critical exponents, universality classes, and relations to chaos theory, dynamical systems, and non-equilibrium / disordered / complex systems. Popular tools include statistical mechanics, renormalization group, rough path theory, etc. Kinetic Monte Carlo surface growth model Kinetic Monte Carlo (KMC) is a form of computer simulation in which atoms and molecules are allowed to interact at given rates that can be controlled based on known physics. This simulation method is typically used in the microelectronics industry to study crystal surface growth, and it can provide accurate models of surface morphology under different growth conditions on time scales typically ranging from microseconds to hours. Experimental methods such as scanning electron microscopy (SEM), X-ray diffraction, and transmission electron microscopy (TEM), and other computer simulation methods such as molecular dynamics (MD) and Monte Carlo simulation (MC), are widely used alongside it. How KMC surface growth works 1. Absorption process First, the model tries to predict where an atom would land on a surface and its rate at particular environmental conditions, such as temperature and vapor pressure. In order to land on a surface, atoms have to overcome the so-called activation energy barrier. The frequency of passing through the activation barrier can be calculated by the Arrhenius equation: ν = A exp(−EA / kT), where A is the thermal frequency of molecular vibration, EA is the activation energy, k is the Boltzmann constant and T is the absolute temperature. 2. Desorption process When atoms land on a surface, there are two possibilities. First, they would diffuse on the surface and find other atoms to make a cluster, which will be discussed below. Second, they could come off the surface, the so-called desorption process. Desorption is described exactly as in the absorption process, with the exception of a different activation energy barrier. For example, if all positions on the surface of the crystal are energy equivalent, the rate of growth can be calculated from the Turnbull formula: where is the rate of growth, ∆G = Ein – Eout, Aout, A0 out are frequencies to go in or out of the crystal for any given molecule on the surface, h is the height of the molecule in the growth direction and C0 the concentration of the molecules in direct distance from the surface. 3. Diffusion process on surface The diffusion process can also be described by an Arrhenius equation: D = D0 exp(−Ed / kT), where D is the diffusion coefficient, D0 is a rate prefactor, and Ed is the diffusion activation energy. All three processes strongly depend on the surface morphology at a given time. For example, atoms tend to land at the edges of a group of connected atoms, the so-called island, rather than on a flat surface, because this reduces the total energy. When atoms diffuse and connect to an island, each atom tends to diffuse no further, because the activation energy to detach itself from the island is much higher. Moreover, if an atom landed on top of an island, it would not diffuse fast enough, and the atom would tend to move down the steps and enlarge the island.
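To illustrate how these Arrhenius rates drive a kinetic Monte Carlo simulation, the following is a minimal one-dimensional deposition-and-diffusion sketch in Python. The temperature, barrier, prefactor, deposition rate, and lattice size are illustrative assumptions, not values for any specific material, and the step-edge rule is a crude simplification.

import math
import random

KB = 8.617e-5          # Boltzmann constant, eV/K
T = 600.0              # temperature, K (assumed)
A = 1.0e13             # attempt frequency, 1/s (assumed)
E_DIFF = 1.2           # diffusion barrier, eV (assumed)
DEP_RATE = 1.0         # deposition events per site per second (assumed)
L = 50                 # number of lattice sites, periodic boundaries

def arrhenius(e_a):
    """Rate of passing an activation barrier: A * exp(-Ea / kT)."""
    return A * math.exp(-e_a / (KB * T))

heights = [0] * L      # column height at each site
t = 0.0

for _ in range(5000):
    # Build the event list: one aggregated deposition channel plus a
    # left/right hop for every column with an atom free to move.
    rates = [DEP_RATE * L]
    moves = [("dep", None)]
    hop = arrhenius(E_DIFF)
    for i in range(L):
        for d in (-1, 1):
            j = (i + d) % L
            # Allow only level or downhill hops -- a crude bias that
            # mimics atoms moving down step edges.
            if heights[i] > 0 and heights[j] <= heights[i]:
                rates.append(hop)
                moves.append(("hop", (i, j)))
    total = sum(rates)

    # Gillespie selection: pick an event with probability rate/total.
    r = random.random() * total
    kind, arg = moves[-1]
    acc = 0.0
    for rate, move in zip(rates, moves):
        acc += rate
        if r < acc:
            kind, arg = move
            break
    if kind == "dep":
        heights[random.randrange(L)] += 1   # atom lands at a random site
    else:
        i, j = arg
        heights[i] -= 1                     # atom hops to the neighbor
        heights[j] += 1
    # Advance time by an exponentially distributed increment.
    t += -math.log(random.random()) / total

print("simulated time:", t, "s")
print("mean height:", sum(heights) / L)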
Simulation methods Because of limited computing power, specialized simulation models have been developed for various purposes depending on the time scale: a) Electronic scale simulations (density functional theory, ab initio molecular dynamics): sub-atomic length scales on femtosecond time scales b) Atomic scale simulations (MD): nano- to micrometer length scales on nanosecond time scales c) Film scale simulations (KMC): micrometer length scales on microsecond-to-hour time scales d) Reactor scale simulations (phase field model): meter length scales on year time scales. Multiscale modeling techniques have also been developed to deal with overlapping time scales. How to use growth conditions in KMC Growing a smooth and defect-free surface requires a particular combination of physical conditions throughout the process. Such conditions are bond strength, temperature, surface-diffusion limitation and supersaturation (or impingement) rate. Using the KMC surface growth method, the following describes the final surface structure under different conditions. 1. Bond strength and temperature Bond strength and temperature certainly play important roles in the crystal growth process. For high bond strength, when atoms land on a surface, they tend to stay close to atomic surface clusters, which reduces the total energy. This behavior results in many isolated cluster formations of various sizes, yielding a rough surface. Temperature, on the other hand, controls the height of the energy barrier. Conclusion: high bond strength and low temperature are preferred for growing a smooth surface. 2. Surface and bulk diffusion effect Thermodynamically, a smooth surface is the lowest-energy configuration, which has the smallest surface area. However, it requires a kinetic process such as surface and bulk diffusion to create a perfectly flat surface. Conclusion: enhancing surface and bulk diffusion will help create a smoother surface. 3. Supersaturation level Conclusion: a low impingement rate helps create a smoother surface. 4. Morphology at different combinations of conditions With control of all growth conditions such as temperature, bond strength, diffusion, and saturation level, the desired morphology can be formed by choosing the right parameters. See also Domino tiling Diffusion-limited growth Non-Euclidean surface growth Stochastic partial differential equation References Kinetic Monte Carlo Surface science Semiconductor device fabrication Thin film deposition Coatings
Surface growth
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,185
[ "Thin film deposition", "Microtechnology", "Coatings", "Thin films", "Surface science", "Semiconductor device fabrication", "Condensed matter physics", "Planes (geometry)", "Solid state engineering" ]
60,672,947
https://en.wikipedia.org/wiki/Heterobimetallic%20catalysis
Heterobimetallic catalysis is an approach to catalysis that employs two different metals to promote a chemical reaction. Included in this definition are cases (Scheme 1) where: 1) each metal activates a different substrate (synergistic catalysis, used interchangeably with the terms "cooperative" and "dual" catalysis), 2) both metals interact with the same substrate, and 3) only one metal directly interacts with the substrate(s), while the second metal interacts with the first. In synergistic catalysis Complexes of palladium catalyze cross-coupling of electrophiles with organometallic nucleophiles, including those derived from lithium, tin, zinc, and boron. One example is Sonogashira coupling, where a catalytic amount of a copper salt (e.g. CuI) reacts with a terminal alkyne (the pronucleophile) under basic conditions to generate a copper acetylide, which transmetalates onto an arylpalladium(II) halide, regenerating the copper halide. Reductive elimination from the arylpalladium acetylide yields the cross-coupled product. Other organic pronucleophiles are cross-coupled with arylpalladium halides in the following examples (Scheme 2):
1. Gold-catalyzed cyclization of allenoates followed by cross-coupling with aryl iodides yields 4-arylbutenolides.
2. Borylcupration of styrenes followed by palladium-catalyzed cross-coupling with aryl halides generates α-aryl-β-boromethyl functionalized arenes. This reaction has been rendered diastereoselective in the case of cyclic styrenes, and an enantioselective variant has also been developed. Enantioselective hydroarylation of styrenes is accomplished similarly via a chiral copper hydride.
3. Asymmetric conjugate reduction-allylation of α,β-unsaturated ketones is achieved by Cu-H mediated reduction and subsequent allylation via a chiral PHOX-ligated palladium catalyst.
Also of note is the enantioselective allylation of activated nitriles (Scheme 3). A chiral bisphosphine-ligated rhodium catalyst activates the alpha-keto-nitrile component as its corresponding enolate, which is intercepted by a π-allylpalladium complex to yield the α-allylated nitrile in high enantiomeric excess. In the absence of the rhodium catalyst no enantioselectivity is observed, whereas the reaction does not proceed in the absence of palladium. With preformed heterobimetallic catalysts Catalyst systems in which both metal centers are contained in the same complex are also known (e.g. Shibasaki catalysts); further examples are provided below. Ion-paired combinations of early and late transition metal complexes can simultaneously interact with a substrate as both Lewis acid and Lewis base. For example, carbonylative ring expansion of epoxides (Scheme 4) is accomplished by Lewis acid activation by cationic complexes of Cr(III), Ti(III) or Al(III) with simultaneous ring opening by the [Co(CO)4]− counterion. Carbonylation of the resultant alkylcobalt followed by lactonization releases the product. A heterobimetallic bond-breaking process is also employed in the IPrCuFp-catalyzed C-H borylation system developed by Mankad (Scheme 5). Bimetallic cleavage of the B-H bond in pinacolborane generates a copper hydride (IPrCu-H) and an iron boryl [(pin)B-Fp], the latter of which borylates unactivated arenes upon UV irradiation. Bimetallic reductive elimination of H2 from the combination of H-Fp and IPrCu-H restarts the catalytic cycle. The incorporation of copper into the catalyst is essential; C-H borylation using (pin)B-Fp alone is stoichiometric in iron due to dimerization of the HFp byproduct.
Heterobimetallic catalysts containing persistent M1-M2 bonds exhibit altered reactivity due to the interaction of the two different metal centers. For example, allylic amination catalyzed by the binuclear complex [Cl2Ti(NtBuPPh2)2/Pd(η3-CH2C(CH3)CH2)]+ is exceptionally rapid. DFT studies suggest that a Pd→Ti dative interaction accelerates the typically slow reductive elimination step by withdrawing electron density from Pd in the transition state (Scheme 6). Silica-supported heterobimetallic tantalum-iridium catalysts were shown to exhibit drastically increased catalytic performance in catalytic H/D exchange reactions with respect to (i) monometallic analogues as well as (ii) homogeneous systems. The key transition state in the C-H activation pathway, computed by DFT, involves (i) donation from the C-H σ orbital to an empty d orbital on the electrophilic early metal (Ta) together with (ii) backdonation from a filled d orbital arising from the late metal (Ir) to the C-H σ* orbital for nucleophilic assistance (Scheme 7). The calculations have shown that steric effects imparted by the ancillary ligands can result in enormous differences in C-H activation energy barriers (ca. 20 kcal/mol) in this heterobimetallic cooperative mechanism, indicating that metal accessibility has a drastic impact on the catalytic performance. In photoredox catalysis The combination of photoredox catalysis with traditional transition metal catalysis enables the use of visible light to drive challenging steps in a catalytic cycle. For example, nickel-catalyzed aryl amination suffers from a difficult C-N reductive elimination step. Hence instead of nickel, expensive palladium-based precatalysts are often used in combination with sterically encumbered phosphine ligands to facilitate reductive elimination. A more recent approach employs an iridium-based photoredox catalyst to effect single-electron oxidation of the intermediate Ni(II)-amido complex. The resulting Ni(III)-amido rapidly undergoes reductive elimination, allowing the Ni-catalyzed aryl amination to proceed at room temperature without the use of phosphine ligands. Biological significance Enzymes containing two or more different metal centers are found in several important biological systems; for example, the Mo-Fe protein of nitrogenase catalyzes the conversion of N2 to NH3 in nitrogen fixation. Of more relevance to human biology, Cu-Zn superoxide dismutase protects cells from oxidative stress by converting superoxide, O2−, to O2 and hydrogen peroxide. References Catalysis
Heterobimetallic catalysis
[ "Chemistry" ]
1,489
[ "Catalysis", "Chemical kinetics" ]
60,674,000
https://en.wikipedia.org/wiki/ASME%20Y14.5
ASME Y14.5 is a standard published by the American Society of Mechanical Engineers (ASME) to establish rules, symbols, definitions, requirements, defaults, and recommended practices for stating and interpreting Geometric Dimensions and Tolerances (GD&T). ASME/ANSI issued the first version of this Y-series standard in 1973. Overview ASME Y14.5 is a complete definition of Geometric Dimensioning and Tolerancing. It contains 15 sections which cover symbols and datums as well as tolerances of form, orientation, position, profile and runout. It is complemented by ASME Y14.5.1 - Mathematical Definition of Dimensioning and Tolerancing Principles. Together these standards allow for clear and concise detailing of dimensional requirements on a product drawing or electronic drawing package as well as the verification of the requirements on manufactured parts. Effective application of GD&T allows for parts to be verified by dimensional measurements, gauging, or by CMM. History The modern standard can trace its roots to the military standard MIL-STD-8 published in 1949. It was revised by MIL-STD-8A in 1953, which introduced the concept of modern GD&T "Rule 1". Further revisions have continued to add new concepts and address new technology like Computer Aided Design and Model-based definition. A list of revisions follows:
ASME Y14.5-2018, "Dimensioning and Tolerancing": current standard; preceded by ASME Y14.5-2009
ASME Y14.5-2-2017, "Certification of Geometric Dimensioning and Tolerancing Professionals": current standard; preceded by ASME Y14.5-2-2000
ASME Y14.5-2009: succeeded by ASME Y14.5-2018; preceded by ASME Y14.5M-1994
ASME Y14.5M-1994: succeeded by ASME Y14.5-2009; reaffirmed in 2004; preceded by ANSI Y14.5M-1982
ANSI Y14.5M-1982: preceded by ANSI Y14.5-1973; reaffirmed in 1988
ANSI Y14.5-1973: succeeded by ANSI Y14.5M-1982; preceded by USASI Y14.5-1966
USASI Y14.5-1966: succeeded by ANSI Y14.5-1973; preceded by ASA Y14.5-1957
ASA Y14.5-1957: succeeded by USASI Y14.5-1966; preceded by ASA Z14.1 Series
See also Geometric dimensioning and tolerancing CAD standards References External links 2018 | Y14.5 - Dimensioning and Tolerancing Official ASME page American Society of Mechanical Engineers Technical drawing ASME standards
ASME Y14.5
[ "Engineering" ]
565
[ "Design engineering", "Mechanical engineering organizations", "Civil engineering", "American Society of Mechanical Engineers", "Technical drawing" ]
47,341,174
https://en.wikipedia.org/wiki/Collaborative%20diffusion
Collaborative Diffusion is a type of pathfinding algorithm which uses the concept of antiobjects, objects within a computer program that function opposite to what would be conventionally expected. Collaborative Diffusion is typically used in video games when multiple agents must path towards a single target agent, such as the ghosts in Pac-Man pursuing the player. In this case, the background tiles serve as antiobjects: they carry out the calculations needed to create a path, and the foreground objects simply react to the result, whereas conventionally the foreground objects would be responsible for their own pathing. Collaborative Diffusion is favored for its efficiency over other pathfinding algorithms, such as A*, when handling multiple agents. This method also allows elements of competition and teamwork between tracking agents to be incorporated easily. Notably, the time taken to calculate paths remains constant as the number of agents increases. References Algorithms
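A minimal Python sketch of the scheme, assuming a small tile grid with one target and a handful of wall tiles; the grid size, diffusion coefficient, and iteration counts are illustrative choices.

# Collaborative diffusion on a small grid: the target injects "scent",
# floor tiles (the antiobjects) repeatedly average their neighbors'
# values, and each pursuing agent simply hill-climbs the scent field.
GOAL_SCENT = 1000.0
W, H = 8, 6
walls = {(3, 1), (3, 2), (3, 3)}            # illustrative obstacles
target = (6, 3)                             # e.g. the player's position

def diffuse(field, steps=50, coeff=0.25):
    for _ in range(steps):
        nxt = [row[:] for row in field]
        for x in range(W):
            for y in range(H):
                if (x, y) in walls:
                    nxt[y][x] = 0.0          # walls absorb scent
                    continue
                nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                vals = [field[ny][nx] for nx, ny in nbrs
                        if 0 <= nx < W and 0 <= ny < H]
                nxt[y][x] = coeff * sum(vals) / max(len(vals), 1)
        nxt[target[1]][target[0]] = GOAL_SCENT   # target re-emits scent
        field = nxt
    return field

def step_agent(field, pos):
    """Move one tile toward the neighbor with the highest scent."""
    x, y = pos
    options = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    options = [(nx, ny) for nx, ny in options
               if 0 <= nx < W and 0 <= ny < H and (nx, ny) not in walls]
    return max(options, key=lambda p: field[p[1]][p[0]])

field = [[0.0] * W for _ in range(H)]
field[target[1]][target[0]] = GOAL_SCENT
field = diffuse(field)

ghost = (0, 0)
for _ in range(20):                          # agent follows the gradient
    if ghost == target:
        break
    ghost = step_agent(field, ghost)
print("agent ends at", ghost)                # reaches the target tile

Note that adding more agents costs almost nothing here: the diffusion pass over the tiles is the expensive step, and every agent reuses the same field, which is why the path-calculation time stays constant as agents are added.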
Collaborative diffusion
[ "Mathematics" ]
180
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
49,073,441
https://en.wikipedia.org/wiki/Oxidative%20dissolution%20of%20silver%20nanoparticles
Silver nanoparticles (AgNPs) act primarily through a process known as oxidative dissolution, wherein Ag+ ions are released through an oxidative mechanism. AgNPs have potentially vast applications within the fields of medicine, science, and the food and drug industries due to their antimicrobial properties, low cytotoxicity in humans, and low cost. Mechanism Silver is stable in water and needs an oxidizing agent to achieve oxidative dissolution. When oxidizing agents such as hydrogen peroxide or oxygen are present, they dissolve AgNPs to release Ag+. The release of Ag+ leads to the creation of reactive oxygen species (ROS) inside cells, which can further dissolve the nanoparticles. Some nanosilver particles develop protective Ag3OH surface groups, and it is thought that dissolution removes these groups and forms oxygen radicals, which attenuate the reactivity of the AgNPs by entering into the lattice to form a highly stable Ag6O octahedral structure. AgNP efficacy has been thought to be mainly attributable to shape, as nanoprisms and nanorods have proven more active than nanospheres because they possess more highly exposed facets, thus leading to a faster release of Ag+ ions. Environmental factors Environmental factors that play a role in particle dissolution include: pH (the rate increases with increasing pH, 6–8.5), the presence of halide ions (which cause Ag+ precipitation), the particle coating, the presence of reducing sugars, the presence of cysteine (which inhibits dissolution), and the presence of natural organic matter. Synthesis AgNPs are synthesized using microwave irradiation, gamma irradiation, UV activation, or conventional heating of the precursor silver nitrate, AgNO3, using an alginate solution as a stabilizing and reducing agent. The carboxyl or hydroxyl groups on the alginate reagent form complexes during the synthesis of the AgNPs that stabilize the reaction. Nanoparticle size and shape can be specified by changing the ratio of alginate to silver nitrate used and/or the pH. A coating such as PVP may be added to the nanoparticles by heating and subsequent slow cooling. Kinetics Stopped-flow spectrometry has been used to characterize the chemical mechanism and kinetics of AgNPs. Oxidative dissolution of AgNPs has been shown to be a first-order reaction with respect to both silver and hydrogen peroxide and is independent of particle size. Antimicrobial activity Antibacterial, antiviral and antifungal properties have been investigated in response to AgNP dissolution. The antibacterial activities of AgNPs are much stronger in oxygenic conditions than anoxic conditions. Through their oxidative dissolution in biological systems, AgNPs can target important biomolecules such as “DNA, peptides, and cofactors” as well as absorb onto nonspecific moieties and simultaneously disrupt several metabolic pathways. They have been known to act as a bridging agent between thiols and to have affinity for organic amines and phosphates. The combination of silver ions' reactions with biomolecules and oxidative stress ultimately leads to toxicity in biological environments. Inhibition of nitrification Oxidative dissolution of AgNPs, which gives rise to Ag+, potentially inhibits nitrification within ammonia-oxidizing bacteria. A key step in nitrification is the oxidation of ammonia to hydroxylamine (NH2OH) catalyzed by the enzyme ammonia monooxygenase (AMO). The enzymatic activity of AMO is highly vulnerable to interference due to its intracytoplasmic location and its abundance of copper.
It is speculated that Ag+ ions from AgNPs interfere with AMO's copper bonds by replacing copper with Ag+, causing a decrease in enzymatic activity and thus in nitrification. References Nanoparticles by composition Chemical reactions Silver
Oxidative dissolution of silver nanoparticles
[ "Chemistry" ]
826
[ "nan" ]
49,076,198
https://en.wikipedia.org/wiki/Lactarius%20xanthogalactus
Lactarius xanthogalactus, commonly known as the yellow-staining milkcap, is a species of fungus in the family Russulaceae. Several other Lactarius species bear a resemblance to L. xanthogalactus, but most can be distinguished by differences in staining reactions, macroscopic characteristics, or habitat. The species is found on the west coast of the United States and grows in the ground under trees. Taxonomy and classification The species was first described by American mycologist Charles Horton Peck in 1907. The specific epithet xanthogalactus is derived from the Greek words meaning "yellow" and "milk". Description The species produces mushrooms with pinkish-cinnamon caps measuring wide held by pinkish-white stems long and wide. When it is cut or injured, the mushroom oozes a white latex that rapidly turns bright sulfur-yellow. The spores are pale yellow, elliptical and bumpy. The spore print is pale yellow. The mushroom has an unpleasant taste. Similar species Lactarius vinaceorufescens has nearly identical microscopic features to L. xanthogalactus, but macroscopically it has reddish-vinaceous stains that develop on the cap, gills, and stem. Another lookalike is L. colorascens, but it may be distinguished from L. xanthogalactus by several features: a smaller fruit body; a whitish cap that becomes brownish-red with age and does not spot vinaceous or brown; bitter to faintly acrid latex; and slightly smaller spores. L. chrysorrheus is also similar, but it has a whitish to pale yellowish-cinnamon cap with slightly darker spots and grows under hardwoods (especially oak) on well-drained, often sandy soil, and its gills do not discolor or spot vinaceous or brown. Other superficially similar species include L. rubrilacteus, L. rufus, L. subviscidus, L. fragilis and L. rufulus, but none of these species have the yellow staining reaction characteristic of L. xanthogalactus. It could also be mistaken for L. rubidus, which is redder and sweet smelling, and L. substriatus, which has a red-orange cap and white latex that yellows. Habitat and distribution The fruit bodies grow scattered or in groups on the ground under conifers and hardwoods between November and February in Oregon and California. See also List of Lactarius species References xanthogalactus Fungi described in 1907 Fungi of the United States Taxa named by Charles Horton Peck Fungi without expected TNC conservation status Fungus species
Lactarius xanthogalactus
[ "Biology" ]
544
[ "Fungi", "Fungus species" ]
49,076,898
https://en.wikipedia.org/wiki/Liquidity%20at%20risk
Liquidity-at-Risk (LaR) is a measure of the liquidity risk exposure of a financial portfolio. It may be defined as the net liquidity drain which can occur in the portfolio in a given risk scenario. If the Liquidity-at-Risk is greater than the portfolio's current liquidity position then the portfolio may face a liquidity shortfall. Liquidity-at-Risk is different from other measures of risk based on total loss, as it is based on an estimate of cash losses, or liquidity outflows, as opposed to total loss. Definition The Liquidity-at-Risk of a financial portfolio associated with a stress scenario is the net liquidity outflow resulting from this stress scenario: The liquidity shortfall in a stress scenario is thus given by the difference between the Liquidity-at-Risk associated with the stress scenario and the amount of liquid assets available at the point where the scenario occurs. The concept of Liquidity-at-Risk is used in stress testing. It is a conditional measure, which depends on the stress scenario considered. By analogy with Value-at-Risk one may also define a statistical notion of Liquidity-at-Risk, at a given confidence level (e.g. 95%), which may be defined as the highest Liquidity-at-Risk that may occur across all scenarios considered under a probabilistic model, with probability higher than the confidence level. This statistical notion of Liquidity-at-Risk is subject to model risk as it will depend on the probability distribution over scenarios. Relation with other risk measures Liquidity-at-Risk is different from other measures of risk based on total loss, such as Value-at-Risk, as it is based on an estimate of cash losses, or liquidity outflows, as opposed to total loss. See also Margin at risk Value at risk Profit at risk Earnings at risk Cash flow at risk References Mathematical finance Financial risk modeling Monte Carlo methods in finance
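Under the statistical notion just described, LaR at a given confidence level is simply a quantile of the simulated net outflows. A minimal Python sketch, with an invented lognormal outflow distribution standing in for a real scenario model:

import numpy as np

def liquidity_at_risk(net_outflows, confidence=0.95):
    """Statistical Liquidity-at-Risk: the net liquidity outflow not
    exceeded with the given confidence across simulated scenarios."""
    return np.quantile(net_outflows, confidence)

# Illustrative: 10,000 simulated stress scenarios of net outflows.
rng = np.random.default_rng(0)
outflows = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)

lar = liquidity_at_risk(outflows, 0.95)
liquid_assets = 40.0                      # hypothetical liquidity buffer
shortfall = max(lar - liquid_assets, 0.0)
print(f"95% LaR: {lar:.1f}, shortfall vs. buffer: {shortfall:.1f}")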
Liquidity at risk
[ "Mathematics" ]
408
[ "Applied mathematics", "Mathematical finance" ]
39,975,515
https://en.wikipedia.org/wiki/Angelic%20non-determinism
In computer science, angelic non-determinism is the execution of a nondeterministic algorithm where particular choices are declared to always favor a desired result, if that result is possible. For example, in halting analysis of a Nondeterministic Turing machine, the choices would always favor termination of the program. The "angelic" terminology comes from the Christian religious conventions of angels being benevolent and acting on behalf of an omniscient God. References Theoretical computer science
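A common way to simulate angelic choice on a deterministic machine is backtracking search: a choice operator behaves as if it always picked a value that lets the rest of the computation succeed, whenever one exists. A minimal Python sketch follows; the generator-based protocol here is just one possible encoding, not a standard API.

# "Angelic" choice via backtracking: the program yields lists of
# options, we feed back one choice at a time, and we backtrack on
# failure -- so the observed behaviour matches an angelic chooser.
class Fail(Exception):
    pass

def angelic_run(program):
    """Run a generator-based nondeterministic program angelically."""
    def explore(chosen):
        gen = program()
        try:
            options = gen.send(None)         # prime the generator
            for value in chosen:             # replay earlier choices
                options = gen.send(value)
        except StopIteration as done:
            return done.value                # the whole program succeeded
        for value in options:                # try each option in turn
            try:
                return explore(chosen + [value])
            except Fail:
                continue                     # backtrack and retry
        raise Fail                           # no option could succeed

    return explore([])

def find_pair():
    x = yield [1, 2, 3]
    y = yield [1, 2, 3]
    if x + y != 5:
        raise Fail                           # this branch cannot succeed
    return (x, y)

print(angelic_run(find_pair))   # (2, 3): the choices favored success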
Angelic non-determinism
[ "Mathematics" ]
101
[ "Theoretical computer science", "Applied mathematics" ]
39,975,535
https://en.wikipedia.org/wiki/Demonic%20non-determinism
In computer science, demonic non-determinism is the execution of a non-deterministic program where all choices are made in favour of non-termination. References Theoretical computer science
Demonic non-determinism
[ "Mathematics" ]
32
[ "Theoretical computer science", "Applied mathematics" ]
39,983,227
https://en.wikipedia.org/wiki/Cellular%20stress%20response
Cellular stress response is the wide range of molecular changes that cells undergo in response to environmental stressors, including extremes of temperature, exposure to toxins, and mechanical damage. Cellular stress responses can also be caused by some viral infections. The various processes involved in cellular stress responses serve the adaptive purpose of protecting a cell against unfavorable environmental conditions, both through short-term mechanisms that minimize acute damage to the cell's overall integrity and through longer-term mechanisms which provide the cell a measure of resiliency against similar adverse conditions. General characteristics Cellular stress responses are primarily mediated through what are classified as stress proteins. Stress proteins are often further subdivided into two general categories: those that are activated only by stress, and those that are involved both in stress responses and in normal cellular functioning. The essential character of these stress proteins in promoting the survival of cells has contributed to their being remarkably well conserved across phyla, with nearly identical stress proteins being expressed in the simplest prokaryotic cells as well as the most complex eukaryotic ones. Stress proteins can exhibit widely varied functions within a cell, both during normal life processes and in response to stress. For example, studies in Drosophila have indicated that when the DNA encoding certain stress proteins exhibits mutation defects, the resulting cells have impaired or lost abilities such as normal mitotic division and proteasome-mediated protein degradation. As expected, such cells were also highly vulnerable to stress, and ceased to be viable at elevated temperature ranges. Although stress response pathways are mediated in different ways depending on the stressor involved, cell type, etc., a general characteristic of many pathways, especially ones where heat is the principal stressor, is that they are initiated by the presence and detection of denatured proteins. Because conditions such as high temperatures often cause proteins to denature, this mechanism enables cells to determine when they are subject to high temperature without the need for specialized thermosensitive proteins. Indeed, if a cell under normal (meaning unstressed) conditions has denatured proteins artificially injected into it, it will trigger a stress response. Response to heat The heat shock response involves a class of stress proteins called heat shock proteins. These can help defend a cell against damage by acting as 'chaperones' in protein folding, ensuring that proteins assume their necessary shape and do not become denatured. This role is especially crucial since elevated temperature would, on its own, increase the concentration of malformed proteins. Heat shock proteins can also participate in marking malformed proteins for degradation via ubiquitin tags. Response to toxins Many toxins end up activating stress proteins similar to those of heat- or other stress-induced pathways because it is fairly common for some types of toxins to achieve their effects, at least in part, by denaturing vital cellular proteins. For example, many heavy metals can react with the sulfhydryl groups that stabilize proteins, resulting in conformational changes. Other toxins that either directly or indirectly lead to the release of free radicals can generate misfolded proteins. Effects on cancer Cell stress can have both cancer-suppressing and cancer-promoting effects.
Increased levels of oxidant stress may kill cancer cells. Furthermore, different forms of cellular stress can cause protein misfolding and aggregation, leading to proteotoxicity. Tumor microenvironment stress leads to canonical and noncanonical endoplasmic reticulum (ER) stress responses, which trigger autophagy and are engaged during proteotoxic challenges to clear unfolded or misfolded proteins and damaged organelles to mitigate stress. There are links between unfolded protein response (UPR) signaling and autophagy, oxidative stress, and inflammatory response signals in ER stress: aggregation of unfolded/misfolded proteins in the endoplasmic reticulum lumen causes the UPR to be activated. Chronic ER stress produces endogenous or exogenous damage to cells and activates the UPR, which leads to impaired intracellular calcium and redox homeostasis. Cancer cells may become dependent on stress response mechanisms that involve lysosomal macromolecule degradation, or even autophagy that recycles entire organelles. However, tumor cells exhibit a therapeutic stress resistance-associated secretory phenotype involving extracellular vesicles (EVs) such as oncosomes and heat shock proteins. Furthermore, cancer cells with aberrant regulatory modifications in the chromatin of certain genes respond with different kinetics to cell stress, triggering expression of genes that protect them from cytotoxic conditions, and also activating expression of genes that influence surrounding tissue in a manner that facilitates tumor growth. Applications Early research has suggested that cells which are better able to synthesize stress proteins, and do so at the appropriate time, are better able to withstand damage caused by ischemia and reperfusion. In addition, many stress proteins overlap with immune proteins. These similarities have medical applications in terms of studying the structure and functions of both immune proteins and stress proteins, as well as the role each plays in combating disease. See also Integrated stress response References Cellular processes
Cellular stress response
[ "Biology" ]
1,039
[ "Cellular processes" ]
44,235,126
https://en.wikipedia.org/wiki/Megamitochondria
Megamitochondria are extremely large, abnormally shaped mitochondria seen in hepatocytes in alcoholic liver disease and in nutritional deficiencies. They can also be seen in conditions of hypertrophy and in cell death. References Robbins Basic Pathology by Kumar et al. Mitochondria Cell biology
Megamitochondria
[ "Chemistry", "Biology" ]
64
[ "Mitochondria", "Cell biology", "Metabolism" ]
44,235,963
https://en.wikipedia.org/wiki/Cobalt%20boride
Cobalt borides are inorganic compounds with the general formula CoxBy. The two main cobalt borides are CoB and Co2B. These are refractory materials. Applications Materials science Cobalt borides are known to be exceptionally resistant to oxidation, a chemical property which makes them useful in the field of materials science. For instance, studies suggest cobalt boride can increase the lifespan of metal parts when used as a coating, imparting surfaces with higher corrosion and wear resistance. These properties have been exploited in the field of biomedical sciences for the design of specialized drug delivery systems. Renewable energy Cobalt boride has also been studied as a catalyst for hydrogen storage and fuel cell technologies. Organic synthesis Cobalt boride is also an effective hydrogenation catalyst used in organic synthesis. In one study, cobalt boride was found to be the most selective transition-metal-based catalyst available for the production of primary amines via nitrile reduction, even exceeding other cobalt-containing catalysts such as Raney cobalt. Preparations Materials coating Cobalt boride is produced at high temperatures, such as 1500 °C. Coatings of cobalt boride on iron are produced by boriding, which involves first introducing a coating of FeB and Fe2B. Cobalt is then deposited onto this iron boride coating using a pack cementation process. Cobalt boride nanoparticles in the size range of 18 to 22 nm have also been produced. Catalyst When used as a catalyst, cobalt boride is prepared by reducing a cobalt salt, such as cobalt(II) nitrate, with sodium borohydride. Prior to reduction, the surface area of the catalyst is maximized by supporting the salt on another material; often this material is activated carbon. See also Nickel boride Urushibara cobalt References Further reading Borides Cobalt compounds Hydrogenation catalysts
Cobalt boride
[ "Chemistry" ]
367
[ "Hydrogenation catalysts", "Hydrogenation" ]
44,236,888
https://en.wikipedia.org/wiki/N-Succinimidyl%204-fluorobenzoate
N-Succinimidyl 4-fluorobenzoate (SFB) is an organofluorine compound. When incorporating a fluorine-18 atom, SFB is used to label proteins or peptides for positron emission tomography. References 4-Fluorophenyl compounds Benzoate esters Imides
N-Succinimidyl 4-fluorobenzoate
[ "Chemistry" ]
74
[ "Imides", "Functional groups" ]
44,238,805
https://en.wikipedia.org/wiki/Conjugate%20convective%20heat%20transfer
The contemporary conjugate convective heat transfer model was developed after computers came into wide use, in order to replace the empirical proportionality of heat flux to temperature difference via a heat transfer coefficient, which had been the only tool in theoretical heat convection since the time of Newton. This model, based on a strictly mathematically stated problem, describes the heat transfer between a body and a fluid flowing over or inside it as a result of the interaction of two objects. The physical processes and solutions of the governing equations are considered separately for each object in two subdomains. Matching conditions for these solutions at the interface provide the distributions of temperature and heat flux along the body–flow interface, eliminating the need for a heat transfer coefficient. Moreover, the heat transfer coefficient may be calculated from these data. History The problem of heat transfer in the presence of liquid flowing around the body was first formulated and solved as a coupled problem by Theodore L. Perelman in 1961, who also coined the term conjugate problem of heat transfer. Later T. L. Perelman, in collaboration with A.V. Luikov, developed this approach further. At that time, many other researchers started to solve simple problems using different approaches and joining the solutions for body and fluid on their interface. A review of early conjugate solutions may be found in the book by Dorfman. Conjugate problem formulation The conjugate convective heat transfer problem is governed by a set of equations formed, in conformity with the physical pattern, from two separate systems for the body and fluid domains, which incorporate the following equations: Body domain Unsteady or steady (Laplace or Poisson) two- or three-dimensional conduction equations, or simplified one-dimensional equations for thin bodies Fluid domain For laminar flow: Navier–Stokes and energy equations, or simplified equations: boundary layer for large and creeping flow for small Reynolds numbers, respectively. For turbulent flow: Reynolds-averaged Navier–Stokes and energy equations, or boundary layer equations for large Reynolds numbers Initial, boundary and conjugate conditions Conditions specifying the spatial distributions of variables for the dynamic and thermal equations at the initial time No-slip condition on the body and other usual conditions for the dynamic equations Conditions of the first or the second kind specifying the temperature or heat flux distribution on the domain boundaries Conjugate conditions on the body/fluid interface providing continuity of the thermal fields by specifying the equality of temperatures and heat fluxes of the body and the flow in the vicinity of the interface: T(+) = T(-), q(+) = q(-). Methods of conjugating the separate body and fluid solutions Numerical methods One simple way to realize conjugation is to apply iteration. The idea of this approach is that each solution for the body or for the fluid produces a boundary condition for the other component of the system. The process starts by assuming that one of the conjugate conditions holds on the interface. Then, one solves the problem for the body or for the fluid applying the guessed boundary condition and uses the result as a boundary condition for solving the set of equations for the other component, and so on. If this process converges, the desired accuracy may be achieved. However, the rate of convergence depends strongly on the initial guess, and there is no way to find a proper one except by trial and error.
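As a concrete illustration of the iterative conjugation just described, the following minimal sketch couples a one-dimensional steady conduction model of a slab (standing in for the body solver) to a simple convective cooling law (standing in for the fluid solver), relaxing a guessed interface temperature until the two heat fluxes match. All geometry, material constants, variable names, and the relaxation scheme are illustrative assumptions, not taken from any particular published solution.

```python
# Iterative conjugation sketch: a solid slab (thickness L, conductivity k)
# held at T_base on one face exchanges heat with a fluid (coefficient h,
# temperature T_inf) on the other face. The conjugate condition at the
# interface is q_body = q_fluid; we iterate on the guessed surface
# temperature T_s until the flux mismatch vanishes.
k, L = 15.0, 0.01        # assumed slab conductivity [W/(m K)] and thickness [m]
h, T_inf = 250.0, 300.0  # assumed fluid-side coefficient [W/(m^2 K)], fluid temperature [K]
T_base = 400.0           # assumed prescribed temperature at the slab base [K]

T_s = T_inf              # initial guess for the interface temperature
omega = 0.5              # under-relaxation factor
for it in range(200):
    q_body = k * (T_base - T_s) / L   # "body solver": conductive flux to the surface
    q_fluid = h * (T_s - T_inf)       # "fluid solver": convective flux from the surface
    resid = q_body - q_fluid          # mismatch in the conjugate condition
    if abs(resid) < 1e-8:
        break
    T_s += omega * resid / (k / L + h)  # nudge the guess toward flux balance

print(f"converged in {it} iterations: T_s = {T_s:.3f} K, q = {q_fluid:.1f} W/m^2")
```

In this toy problem the balance point could be written down in closed form; the point of the iteration only becomes apparent when each side is a full conduction or Navier–Stokes computation whose interface response is not known in advance, and when, as noted above, convergence hinges on the quality of the initial guess.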
Another numerical conjugate procedure is grounded in the simultaneous solution of a large set of governing equations for both subdomains and conjugate conditions. Patankar proposed a method and software for such solutions using one generalized expression for continuously computing the velocity and temperature fields through the whole problem domain while satisfying the conjugate boundary conditions. Analytical reduction to a conduction problem As shown, the well-known Duhamel integral for the heat flux on a plate with arbitrarily variable temperature is a sum of a series of successive temperature derivatives. This series is in fact a general boundary condition, which becomes a condition of the third kind in the first approximation. Each of those two expressions, in the form of the Duhamel integral or of a series of derivatives, reduces a conjugate problem to the solution of only the conduction equation for the body at given conjugate conditions. An early conjugate problem was solved in this way using the Duhamel integral. This approach has been applied both in integral and in series form and has been generalized for laminar and turbulent flows with pressure gradient, for flows at a wide range of Prandtl and Reynolds numbers, for compressible flow, for power-law non-Newtonian fluids, for flows with unsteady temperature variations, and for some other more specific cases. Applications Starting from simple examples in the 1960s, conjugate heat transfer methods have become a powerful tool for modeling and investigating natural phenomena and engineering systems in different areas, ranging from aerospace and nuclear reactors to thermal goods treatment and food processing, from complex procedures in medicine to atmosphere/ocean thermal interaction in meteorology, and from relatively simple units to multistage, nonlinear processes. A detailed review of more than 100 examples of conjugate modeling, selected from a list of 200 early and modern publications, shows that conjugate methods are now used extensively in a wide range of applications. This is also confirmed by numerous results published after the book's appearance (2009), which may be found, for example, in the Web of Science. The applications in specific areas of conjugate heat transfer at periodic boundary conditions and in exchanger ducts are considered in two recent books. References Heat transfer
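To make the Duhamel approach described above concrete, here is a minimal numerical sketch that evaluates the classical Duhamel superposition for the surface heat flux of a semi-infinite solid with a prescribed, time-varying surface temperature. This standard textbook kernel is used purely as an illustration; the material constants, the grid, the temperature history, and the function name are all assumptions of the example.

```python
import numpy as np

k, alpha = 50.0, 1.5e-5   # assumed conductivity [W/(m K)] and thermal diffusivity [m^2/s]

def surface_flux(t, T_wall, t_grid):
    """Duhamel superposition for a semi-infinite solid:
    q(t) = k / sqrt(pi*alpha) * integral_0^t (dT_w/dtau) / sqrt(t - tau) dtau,
    evaluated with a midpoint rule so the integrable singularity at
    tau = t is never sampled exactly."""
    dT = np.diff(T_wall)                      # wall-temperature increment per step
    t_mid = 0.5 * (t_grid[:-1] + t_grid[1:])  # midpoints of the time steps
    mask = t_mid < t
    return k / np.sqrt(np.pi * alpha) * np.sum(dT[mask] / np.sqrt(t - t_mid[mask]))

t_grid = np.linspace(0.0, 10.0, 4001)
b = 5.0                                       # assumed ramp rate [K/s]
T_wall = 300.0 + b * t_grid                   # linear surface-temperature ramp
q_num = surface_flux(10.0, T_wall, t_grid)
q_exact = 2 * k * b * np.sqrt(10.0) / np.sqrt(np.pi * alpha)  # closed form for a ramp
print(f"numerical: {q_num:.0f} W/m^2, exact: {q_exact:.0f} W/m^2")
```

The agreement with the closed-form result for a linear ramp confirms the discretization; for an arbitrary measured or computed wall-temperature history, only the array T_wall changes.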
Conjugate convective heat transfer
[ "Physics", "Chemistry" ]
1,119
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
55,851,068
https://en.wikipedia.org/wiki/Not-all-equal%203-satisfiability
In computational complexity, not-all-equal 3-satisfiability (NAE3SAT) is an NP-complete variant of the Boolean satisfiability problem, often used in proofs of NP-completeness. Definition Like 3-satisfiability, an instance of the problem consists of a collection of Boolean variables and a collection of clauses, each of which combines three variables or negations of variables. However, unlike 3-satisfiability, which requires each clause to have at least one true Boolean value, NAE3SAT requires that the three values in each clause are not all equal to each other (in other words, at least one is true, and at least one is false). Hardness The NP-completeness of NAE3SAT can be proven by a reduction from 3-satisfiability (3SAT). First the nonsymmetric 3SAT is reduced to the symmetric NAE4SAT by adding a common dummy literal to every clause, then NAE4SAT is reduced to NAE3SAT by splitting clauses as in the reduction of general k-satisfiability to 3SAT. In more detail, a 3SAT instance with clauses of the form (a ∨ b ∨ c) (where a, b and c are arbitrary literals) is reduced to the NAE4SAT instance with clauses NAE(a, b, c, s), where s is a single new variable shared by all clauses. A satisfying assignment for the 3SAT instance becomes a satisfying assignment for the NAE4SAT instance by setting s = false. Conversely, a satisfying assignment of the NAE4SAT instance with s = false must have at least one other literal true in each clause and thus be a satisfying assignment for the 3SAT instance. Finally, a satisfying assignment with s = true can, because of the symmetry of the not-all-equal condition under negating all variables, be flipped to produce a satisfying assignment with s = false. NAE3SAT remains NP-complete when all clauses are monotone (meaning that variables are never negated), by Schaefer's dichotomy theorem. Monotone NAE3SAT can also be interpreted as an instance of the set splitting problem, or as a generalization of graph bipartiteness testing to 3-uniform hypergraphs: it asks whether the vertices of a hypergraph can be colored with two colors so that no hyperedge is monochromatic. More strongly, it is NP-hard to find colorings of 3-uniform hypergraphs with any constant number of colors, even when a 2-coloring exists. Easy cases Unlike 3SAT, some variants of NAE3SAT in which graphs representing the structure of variables and clauses are planar graphs can be solved in polynomial time. In particular this is true when there exists a planar graph with one vertex per variable, one vertex per clause, an edge for each variable–clause incidence, and a cycle of edges connecting all the variable vertices. References NP-complete problems Satisfiability problems
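The chained reduction above is straightforward to implement. The sketch below is a minimal illustration, not code from the article's references: literals are encoded as DIMACS-style signed integers, the common dummy literal s is added to every clause, and each resulting 4-literal clause NAE(a, b, c, s) is split into NAE(a, b, w) and NAE(-w, c, s) with a fresh variable w per clause. The function and variable names are arbitrary.

```python
from itertools import product

def sat3_to_nae3sat(clauses, num_vars):
    """Reduce 3SAT to NAE3SAT. Literals are DIMACS-style nonzero integers:
    variable v is v, its negation is -v. Each clause (a, b, c) becomes
    NAE(a, b, c, s) with a shared new variable s, which is then split into
    NAE(a, b, w) and NAE(-w, c, s) with a fresh variable w per clause."""
    s = num_vars + 1                 # the common dummy literal
    next_var = s + 1
    nae_clauses = []
    for a, b, c in clauses:
        w = next_var                 # fresh splitting variable for this clause
        next_var += 1
        nae_clauses.append((a, b, w))
        nae_clauses.append((-w, c, s))
    return nae_clauses, next_var - 1

def nae_satisfied(clauses, assignment):
    """True if every clause has both a true and a false literal."""
    def val(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return all(len({val(l) for l in cl}) > 1 for cl in clauses)

# Example: (x1 or x2 or x3) and (not x1 or not x2 or x3), satisfiable via x3.
clauses = [(1, 2, 3), (-1, -2, 3)]
nae, n = sat3_to_nae3sat(clauses, 3)
print(nae)  # [(1, 2, 5), (-5, 3, 4), (-1, -2, 6), (-6, 3, 4)]
# Brute-force check: the NAE3SAT output is satisfiable, as the input was.
assert any(nae_satisfied(nae, dict(enumerate(bits, start=1)))
           for bits in product([False, True], repeat=n))
```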
Not-all-equal 3-satisfiability
[ "Mathematics" ]
547
[ "Automated theorem proving", "Computational problems", "Mathematical problems", "NP-complete problems", "Satisfiability problems" ]
55,852,028
https://en.wikipedia.org/wiki/Dresselhaus%20effect
The Dresselhaus effect is a phenomenon in solid-state physics in which spin–orbit interaction causes energy bands to split. It is usually present in crystal systems lacking inversion symmetry. The effect is named after Gene Dresselhaus, who discovered this splitting in 1955. Spin–orbit interaction is a relativistic coupling between the electric field produced by an ion-core and the resulting dipole moment arising from the relative motion of the electron, and its intrinsic magnetic dipole proportional to the electron spin. In an atom, the coupling weakly splits an orbital energy state into two states: one state with the spin aligned to the orbital field and one anti-aligned. In a solid crystalline material, the motion of the conduction electrons in the lattice can be altered by a complementary effect due to the coupling between the potential of the lattice and the electron spin. If the crystalline material is not centrosymmetric, the asymmetry in the potential can favour one spin orientation over the opposite and split the energy bands into spin-aligned and anti-aligned subbands. The Rashba spin–orbit coupling has a similar energy band splitting, but the asymmetry comes either from the bulk asymmetry of uniaxial crystals (e.g. of wurtzite type) or the spatial inhomogeneity of an interface or surface. Dresselhaus and Rashba effects are often of similar strength in the band splitting of GaAs nanostructures. Zincblende Hamiltonian Materials with zincblende structure are non-centrosymmetric (i.e., they lack inversion symmetry). This bulk inversion asymmetry (BIA) forces the perturbative Hamiltonian to contain only odd powers of the linear momentum. The bulk Dresselhaus Hamiltonian or BIA term is usually written in this form: H = γ[σx kx(ky² − kz²) + σy ky(kz² − kx²) + σz kz(kx² − ky²)], where σx, σy and σz are the Pauli matrices related to the spin of the electrons as S = (ħ/2)σ (here ħ is the reduced Planck constant), and kx, ky and kz are the components of the momentum in the crystallographic directions [100], [010] and [001], respectively. When treating 2D nanostructures where the width direction [001] is finite, the Dresselhaus Hamiltonian can be separated into a linear and a cubic term. The linear Dresselhaus Hamiltonian is usually written as H = β(σx kx − σy ky), where β is a coupling constant. The cubic Dresselhaus term is written as H = γ(σx kx ky² − σy ky kx²), with β = γ(π/d)², where d is the width of the material. The Hamiltonian is generally derived using a combination of the k·p perturbation theory alongside the Kane model. See also Fine electronic structure Electric dipole spin resonance Spin–orbit interaction References Semiconductors Quantum magnetism Spintronics Physical phenomena
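As a quick numerical check of the linear term above, the following sketch builds H = β(σx kx − σy ky) with numpy and diagonalizes it; the two spin subbands come out split symmetrically as ±β√(kx² + ky²), as expected for a Hamiltonian linear in the Pauli matrices. The coupling constant and wavevector values are arbitrary illustrative numbers, not material parameters.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

beta = 0.01  # assumed linear Dresselhaus coupling constant [eV nm]

def h_dresselhaus_linear(kx, ky):
    """Linear (2D) Dresselhaus term H = beta * (sigma_x kx - sigma_y ky)."""
    return beta * (sx * kx - sy * ky)

kx, ky = 0.3, 0.4                              # wavevector components [1/nm]
E = np.linalg.eigvalsh(h_dresselhaus_linear(kx, ky))
print(E)   # [-0.005, 0.005]: eigenvalues are +/- beta*|k| with |k| = 0.5
```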
Dresselhaus effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
547
[ "Matter", "Physical phenomena", "Physical quantities", "Semiconductors", "Spintronics", "Quantum mechanics", "Quantum magnetism", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Electrical resistance and conductance" ]
55,855,855
https://en.wikipedia.org/wiki/Conductive%20anodic%20filament
Conductive anodic filament, also called CAF, is a metallic filament that forms from an electrochemical migration process and is known to cause printed circuit board (PCB) failures. Mechanism CAF formation is a process involving the transport of conductive chemistries across a nonmetallic substrate under the influence of an applied electric field. CAF is influenced by electric field strength, temperature (including soldering temperatures), humidity, laminate material, and the presence of manufacturing defects. The occurrence of CAF failures has been primarily driven by the electronics industry pushing for higher density circuit boards and the use of electronics in harsher environments for high reliability applications. Failure modes and detection CAF commonly occurs between adjacent vias (i.e. plated through holes) inside a PCB, as the copper migrates along the glass/resin interface from anode to cathode. CAF failures can manifest as current leakage, intermittent electrical shorts, and even dielectric breakdown between conductors in printed circuit boards. This often makes CAF very difficult to detect, especially when it occurs as an intermittent issue. There are a few things that can be done to isolate the fault location and confirm CAF as a root cause of a failure. If the issue is intermittent, then putting the sample of interest under combined temperature-humidity-bias (THB) may help recreate the failure mode. In addition, techniques such as cross-sectioning or superconducting quantum interference device (SQUID) imaging can be used to identify the failure. Considerations and mitigation There are several design considerations and mitigation techniques that can be used to reduce susceptibility to CAF. Certain material selections (e.g., the laminate) and design rules (e.g., via spacing) can help reduce CAF risk. Poor adhesion between the resin and glass fibers in the PCB can create a path for CAF to occur. This may depend on parameters of the silane finish applied to the glass fibers, which is used to promote adhesion to the resin. There are also testing standards that can be performed to assess CAF risk. IPC TM-650 2.6.25 provides a test method to assess CAF susceptibility. Additionally, IPC TM-650 2.6.16 provides a pressure vessel test method to rapidly evaluate glass epoxy laminate integrity. These tests are helpful, but it is often better to use design rules and proper material selection to proactively mitigate the issue. See also Whisker (metallurgy) External links Material & Process Influences on CAF Conductive Anodic Filament (CAF) Formation References Electrochemistry Semiconductor device defects
Conductive anodic filament
[ "Chemistry", "Technology" ]
544
[ "Electrochemistry", "Technological failures", "Semiconductor device defects" ]
55,857,604
https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20and%20Electrification
The Ministry of Energy and Electrification (Minenergo) was a government ministry in the Soviet Union. It was the agency responsible for the Soviet Union's electricity policies. The State Committee for Power and Electrification was upgraded to ministerial status (union-republic) in 1965; it was changed to all-union status on 17 July 1987. List of ministers Source: Ignati Novikov (25.4.1962 - 24.11.1962) Pjotr Neporozhny (26.11.1962 - 23.3.1985) Anatoli Mayorets (24.3.1985 - 17.7.1989) Juri Semjonov (17.7.1989 - 24.8.1991) References Energy and Electrification Soviet Union
Ministry of Energy and Electrification
[ "Engineering" ]
153
[ "Energy organizations", "Energy ministries" ]
55,858,533
https://en.wikipedia.org/wiki/Steven%20G.%20Johnson
Steven Glenn Johnson (born 1973) is an American applied mathematician and physicist known for being a co-creator of the FFTW library for software-based fast Fourier transforms and for his work on photonic crystals. He is a professor of Applied Mathematics and Physics at MIT, where he leads a group on Nanostructures and Computation. While working on his PhD at MIT, he developed the Fastest Fourier Transform in the West (FFTW) library with funding from the DoD NDSEG Fellowship. Steven Johnson and his colleague Matteo Frigo were awarded the 1999 J. H. Wilkinson Prize for Numerical Software for this work. He is the author of the NLopt library for nonlinear optimization, as well as a co-author of the open-source electromagnetic software packages Meep and MPB. He is a frequent contributor to the Julia programming language, and he has also contributed to Python, R, and Matlab. He was a keynote speaker for the 2019 JuliaCon conference. Selected publications Articles Books References External links Steven G. Johnson, Photonic-crystal and microstructured fiber tutorials (2005). John D. Joannopoulos, Steven G. Johnson, Joshua N. Winn, and Robert D. Meade, Photonic Crystals: Molding the Flow of Light, second edition (Princeton, 2008), chapter 9. (Readable online.) Living people Massachusetts Institute of Technology School of Science faculty American computer scientists Massachusetts Institute of Technology School of Science alumni American optical engineers American optical physicists 1973 births Computational physicists 21st-century American physicists 21st-century American engineers 21st-century American mathematicians Metamaterials scientists American textbook writers People from St. Charles, Illinois Mathematicians from Illinois Physicists from Illinois
Steven G. Johnson
[ "Physics", "Materials_science" ]
350
[ "Metamaterials scientists", "Metamaterials", "Computational physicists", "Computational physics" ]
55,861,901
https://en.wikipedia.org/wiki/Ring%20main%20unit
In an electrical power distribution system, a ring main unit (RMU) is a factory-assembled, metal-enclosed set of switchgear used at the load connection points of a ring-type distribution network. It includes, in one unit, two switches that can connect the load to either or both main conductors, and a fusible switch, or a circuit breaker with switch, that feeds a distribution transformer. The metal-enclosed unit connects to the transformer either through a bus throat of standardized dimensions or through cables, and is usually installed outdoors. Ring main cables enter and leave the cabinet. This type of switchgear is used for medium-voltage power distribution, from 7200 volts to about 36000 volts. The ring main unit was introduced in the United Kingdom and is now widely used in other countries. In North American distribution practice, the equivalent of a ring main unit is often built into a pad-mounted transformer, which integrates switches and transformer into a single cabinet. Categories Ring main units can be characterized by their type of insulation: air, oil or gas. The switch used to isolate the transformer can be a fusible switch, or may be a circuit breaker using vacuum or gas-insulated interrupters. The unit may also include protective relays to operate the circuit breaker on a fault. See also Ring circuit References Medium Voltage Ring Main Unit - Lucy Electric MV RMU SafeRing catalogue – ABB Distribution Automation Handbook // Elements of power distribution systems – ABB RM6 Ring main Unit catalogue – Schneider Electric http://chiragtec.com/images/Gas-Insulated%20Ring%20Main%20Unit%20-%20SafeRing/1.1%20-%20RMU%20Catalogue.pdf Electrical systems Electric power systems components Electric power distribution
Ring main unit
[ "Physics" ]
377
[ "Physical systems", "Electrical systems" ]
65,666,096
https://en.wikipedia.org/wiki/Zinc%20diphosphide
Zinc diphosphide (ZnP2) is an inorganic chemical compound. It is a red semiconductor solid with a band gap of 2.1 eV. It is one of the two compounds in the zinc-phosphorus system, the other being zinc phosphide (Zn3P2). Synthesis and reactions Zinc diphosphide can be prepared by the reaction of zinc with phosphorus: 2 Zn + P4 → 2 ZnP2 Structure ZnP2 has a room-temperature tetragonal form that converts to a monoclinic form at around 990 °C. In both of these forms there are chains of P atoms, helical in the tetragonal form and semi-spiral in the monoclinic form. This compound is part of the Zn-Cd-P-As quaternary system and exhibits partial solid solution with other binary compounds of the system. Safety ZnP2, like Zn3P2, is highly toxic due to the release of phosphine gas when the material reacts with gastric acid. References zinc phosphide II-V semiconductors II-V compounds
Zinc diphosphide
[ "Chemistry" ]
234
[ "II-V compounds", "Semiconductor materials", "Inorganic compounds", "II-V semiconductors" ]
67,134,422
https://en.wikipedia.org/wiki/The%20Fiel%20contraste
The Fiel contraste is a sculptural group created by the Spanish sculptor Ramón Conde, located in Pontevedra, Spain. It stands in Alhóndiga street, behind the Pontevedra City Hall, and was inaugurated on 30 April 2010. History The sculptural group is located in the place where the Alhóndiga, or municipal grain market, was in the Middle Ages. The central statue recalls the medieval civil servant (hired by the town hall) who, at the entrance to the walls of Pontevedra, near the Bastida Tower, faithfully checked with his scales the weights and measures of the goods that were to be sold in the city. Until the 16th century, the Alhóndiga was located where the Pontevedra City Hall is today. At the entrance to the Alhóndiga, the Fiel Contraste was responsible for checking the weights and measures of all the goods that arrived there to be sold. The taxes on the market depended on the verification of the weight of bread or cereals or the measures of wine. The disappearance of this profession came with the unification of weights and measures brought about by the Bourbon administration, with the appearance of the metric system and, finally, with the inauguration of the current City Hall in 1880. The commercial and fishing boom in Pontevedra had boosted the holding of markets in the city, notably the Feira Franca granted to Pontevedra in 1467 by King Henry IV of Castile, when the city was the main port in Galicia. Description The sculptural group consists of five pieces. The central bronze piece is the Fiel Contraste, which represents a Herculean, timeless man (characteristic of Ramón Conde's work), with his left arm extended holding a pair of scales in his hand. The statue is high and weighs . His strong features denote power and authority in the exercise of his function of resolving conflicts about the exact weight of products in the city's fairs and markets. Around this central statue are four two-dimensional pieces of Corten steel in the form of silhouettes or shadows depicting popular characters from a medieval city market, such as shopkeepers with their baskets in front of them or merchants on any given day in a city market. The sculptural group is valued at 100,000 euros. Gallery References See also Related articles Pontevedra City Hall External links https://www.outono.net/elentir/2016/01/26/el-fiel-contraste-un-monumento-al-almotacen-de-pontevedra/ http://esculturayarte.com/022739/Fiel-Contraste-1-en-Pontevedra.html#.X8aQe86g82w Pontevedra Colossal statues Bronze sculptures in Spain Outdoor sculptures in Pontevedra Sculptures in Spain 21st-century sculptures Sculptures of men in Spain Tourist attractions in Galicia (Spain) Monuments and memorials in Pontevedra Monuments and memorials in Galicia (Spain) Sculptures in Pontevedra History of Pontevedra
The Fiel contraste
[ "Physics", "Mathematics" ]
633
[ "Quantity", "Colossal statues", "Physical quantities", "Size" ]
67,136,789
https://en.wikipedia.org/wiki/The%20Code%20Breaker
The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race is a non-fiction book authored by American historian and journalist Walter Isaacson. Published in March 2021 by Simon & Schuster, it is a biography of Jennifer Doudna, the winner of the 2020 Nobel Prize in Chemistry for her work on the CRISPR system of gene editing. Promotion On March 22, 2021, Isaacson appeared on The Late Show with Stephen Colbert to discuss the book. Reception The book debuted at number one on The New York Times nonfiction best-seller list for the week ending March 13, 2021. In its starred review, Kirkus Reviews called it a "vital book about the next big thing in science—and yet another top-notch biography from Isaacson." Publishers Weekly called it a "gripping account of a great scientific advancement and of the dedicated scientists who realized it." References External links The Code Breaker at the Simon & Schuster website CRISPR Scientist's Biography Explores Ethics Of Rewriting The Code Of Life. Author interview, audio and transcript. Fresh Air, NPR, March 8, 2021. 2021 non-fiction books English-language non-fiction books Books about scientists Jennifer Doudna Simon & Schuster books American biographies Genetics books Genome editing Books by Walter Isaacson
The Code Breaker
[ "Engineering", "Biology" ]
261
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
67,140,927
https://en.wikipedia.org/wiki/Pollinator-mediated%20selection
Pollinator-mediated selection is an evolutionary process occurring in flowering plants, in which the foraging behavior of pollinators differentially selects for certain floral traits. Flowering plants are a diverse group of plants that produce seeds. Their seeds differ from those of gymnosperms in that they are enclosed within a fruit. These plants display a wide range of diversity when it comes to the phenotypic characteristics of their flowers, which attract a variety of pollinators that participate in biotic interactions with the plant. Since many plants rely on pollen vectors, their interactions with these vectors influence floral traits and also favor efficiency, as many vectors are searching for floral rewards like pollen and nectar. Examples of pollinator-mediated selected traits include those involving the size, shape, color and odor of flowers, corolla tube length and width, size of inflorescence, floral rewards and their amount, nectar guides, and phenology. Since these types of traits are likely to be involved in attracting pollinators, they may very well be the result of selection by the pollinators themselves. Having a floral display that either attracts a variety of pollinators or is efficient in the exchanges that occur during pollination can have advantages for the reproductive success of plants. Thus, pollinator behavior is important to understand in relation to the evolution of flowering plants, and in some cases pollinator behavior is thought to lead to specialized pollination syndromes, in which floral traits have co-evolved with their pollinators as a direct response to the selection exerted by the pollen vectors. However, many flowering plants do not display morphology that excludes all pollinators except the one they co-evolved with. The most effective pollinator principle posits that floral traits reflect adaptation to the pollinator that is efficient at transferring the most pollen. Selection might actually favor some degree of generalization, while some flowers can also retain particular traits that allow them to adapt to a certain type of pollinator, but they will ultimately be molded by the pollinators that are the most effective and visit the most frequently. This leads to shifts in pollination syndromes and to some genera having a high diversity of pollination syndromes among species, suggesting that pollinators are a primary selective force driving diversity and speciation. Pollinator-mediated selection requires isolation and therefore cannot function in sympatry. Floral isolation is a consequence of pollinator behavior that reduces inter-lineage pollen transfer, which reduces gene flow and increases the possibility for a transition to different syndromes. Isolation with no gene flow between populations allows for the development of distinct species; thus speciation is a result of reproductive isolation and can be driven by pollinator-mediated selection. See also Fertilisation of Orchids (1862) Pollination Pollination syndrome Floral biology Flower constancy References Pollination Evolutionary biology Selection
Pollinator-mediated selection
[ "Biology" ]
574
[ "Evolutionary biology", "Evolutionary processes", "Selection" ]
67,143,185
https://en.wikipedia.org/wiki/Carbon%20Design%20System
Carbon Design System is a free and open-source design system and library created by IBM, which implements the IBM Design Language and is licensed under Apache License 2.0. Its public development initially started on June 10, 2015. Its components have multiple implementations, including a vanilla JS and CSS implementation and React (maintained by the Carbon Core team), while the community maintains the implementations developed in Svelte, Vue.js, and Web Components. The official typeface to be used according to the guidelines is the IBM Plex typeface; the alternative typefaces for CJK scripts are Noto Sans CJK SC, Noto Sans CJK TC, and Noto Sans JP. See also Design language Flat design Fluent Design System by Microsoft Material Design by Google References External links Design language Graphical user interfaces IBM
Carbon Design System
[ "Engineering" ]
170
[ "Design", "Design languages" ]
70,011,163
https://en.wikipedia.org/wiki/David%20A.%20Huse
David Alan Huse (born May 16, 1958) is an American theoretical physicist, specializing in statistical physics and condensed matter physics. Biography After graduating from Lincoln-Sudbury Regional High School, Huse matriculated at the University of Massachusetts Amherst, where he graduated in 1979 with a B.S. in physics. He received his Ph.D. in 1983 from Cornell University with a thesis supervised by Michael E. Fisher. From 1983 to 1996, Huse worked at Bell Laboratories in Murray Hill. In 1996, he was appointed a professor in the physics department of Princeton University. At the Institute for Advanced Study, he has been appointed to positions for the autumn of 2010, and for the academic years 2015–2016, 2019–2020, and 2021–2022. He was elected in 2010 a member of the American Academy of Arts and Sciences, in 2013 a fellow of the American Association for the Advancement of Science, and in 2017 a member of the National Academy of Sciences. In 2022 he received the Lars Onsager Prize with Boris Altshuler and Igor Aleiner for "foundational work on many-body localization, its associated phase transition, and implications for thermalization and ergodicity." In 1982 he married Julia Smith. They have two sons. Selected publications Arxiv preprint Arxiv preprint References External links David A. Huse - Publications, Academic Tree 1958 births Living people 20th-century American physicists 21st-century American physicists Condensed matter physicists American theoretical physicists Scientists at Bell Labs Fellows of the American Association for the Advancement of Science Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Sciences Lincoln-Sudbury Regional High School alumni University of Massachusetts Amherst alumni Cornell University alumni Princeton University faculty Fellows of the American Physical Society
David A. Huse
[ "Physics", "Materials_science" ]
367
[ "Condensed matter physicists", "Condensed matter physics" ]
70,013,634
https://en.wikipedia.org/wiki/Civilization%27s%20Waiting%20Room
Sivilisasjonens venterom (Norwegian for "Civilization's Waiting Room") was a research larp (live-action roleplaying game) held in Bergen in November 2021. It was designed to explore the potential of larps as a research methodology and as research dissemination, and was specifically intended to investigate ethical questions that arise when encountering new surveillance technologies. Background The project was funded by the Research Council of Norway as part of a scheme to increase the Norwegian impact of EU-funded research. The stated goal was to "create arenas where the general public can practice making ethical decisions about the use of new technologies, specifically machine vision technologies such as facial recognition, deepfakes and VR" The creative lead for the project was veteran larp developer Anita Myhre Andersen, working with Harald Misje, Jon Andreas Edland, Toril Mjelva Saatvedt, Sebastian Sjøvold and Eskil Mjelva Saatvedt. The researchers in the development team were Marianne Gunderson, Kristian A. Bjørkelo, and Jill Walker Rettberg, who had initiated the project. The larp drew upon the Nordic larp genre as well as on research on educational larping (Edu-larp) and larps as research tools. In a scholarly paper about Sivilisasjonens venterom, Malthe Stavning Erslev describes it as a research larp, which is "a method of academic knowledge development in its own right". Setting and gameplay Civilization's Waiting Room was set in a future where society has unravelled due to climate change and war. The Civilization (Sivilisasjonen) is a city state that is a rare refuge from the surrounding wilderness. It is run by a benevolent AI known as Intelligensen ("the Intelligence") that bases all of its decisions on the sum of all the opinions and interests of the citizens, as it interprets these based on the extensive data it collects and is fed by the citizens. Sivilisasjonen was therefore imagined as an AI-based democracy. The overall story arc of Sivilisasjonens Venterom unfolded over a dramatic day in the reception hall, starting in the morning with new applicants arriving, and ending in the evening with a ceremony in which those who had learned to manipulate the system were granted citizenship and access to Sivilisasjonen. During the day there were small personal dramas, planned plot twists and unplanned incidents, as well as large-scale hacking of the Intelligence undermining the foundational ideology of Sivilisasjonen. Players experienced conflicts on a personal level, as their characters had their interpersonal relationship challenged by technological mediation, as well as by their shifting interpretation of how this society worked. Participants also experienced large-scale drama as a group when the social framework of the Intelligence cracked and for a little while was replaced by a small group of more individuality-oriented hackers led by one of the organizers. Three related larps set in the same fictional world were Ettersynsing ("Opticionated"), a short form larp using a dinner table setting that was run at the NORA 2021 conference on AI, Mønsterakademiet, a short larp set in a school that trained citizens for the Civilization, and Hawa, a larp for children run by the larp development company Tidsreiser that was set in another part of the world where there are no adults, and robots bring up children in an attempt to mould them into peaceful, productive citizens. 
Reception Malthe Stavning Erslev analyses his experience of playing Trin in the larp, discussing larps as a mimetic method related to design fiction. However, he found that the focus on the aspects of surveillance that are visible, such as screens and cameras, led to less focus on data-intensive surveillance, and thus the larp could be said "not to challenge imaginaries, but to solidify them." In his MA thesis, Jon Andreas Edland argued that larp offers the "opportunity to observe a theme or situation from different sides and thus grants a larger room for reflection and understanding based on the context of the situation". References Live-action role-playing games Research Council of Norway Machine vision Government by algorithm Design of experiments November 2021 events in Norway
Civilization's Waiting Room
[ "Engineering" ]
900
[ "Machine vision", "Government by algorithm", "Automation", "Robotics engineering" ]
42,787,595
https://en.wikipedia.org/wiki/Hanani%E2%80%93Tutte%20theorem
In topological graph theory, the Hanani–Tutte theorem is a result on the parity of edge crossings in a graph drawing. It states that every drawing in the plane of a non-planar graph contains a pair of independent edges (not both sharing an endpoint) that cross each other an odd number of times. Equivalently, it can be phrased as a planarity criterion: a graph is planar if and only if it has a drawing in which every pair of independent edges crosses evenly (or not at all). History The result is named after Haim Hanani, who proved in 1934 that every drawing of the two minimal non-planar graphs K5 and K3,3 has a pair of edges with an odd number of crossings, and after W. T. Tutte, who stated the full theorem explicitly in 1970. A parallel development of similar ideas in algebraic topology has been credited to Egbert van Kampen, Arnold S. Shapiro, and Wu Wenjun. Applications One consequence of the theorem is that testing whether a graph is planar may be formulated as solving a system of linear equations over the finite field of order two. These equations may be solved in polynomial time, but the resulting algorithms are less efficient than other known planarity tests. Generalizations For other surfaces S than the plane, a graph can be drawn on S without crossings if and only if it can be drawn in such a way that all pairs of edges cross an even number of times; this is known as the weak Hanani–Tutte theorem for S. The strong Hanani–Tutte theorem states that a graph can be drawn without crossings on S if and only if it can be drawn in such a way that all independent pairs of edges cross an even number of times, without regard for the number of crossings between edges that share an endpoint; this strong version does not hold for all surfaces, but it is known to hold for the plane, the projective plane and the torus. The same approach, in which one shows that pairs of edges with an even number of crossings can be disregarded or eliminated in some type of graph drawing and uses this fact to set up a system of linear equations describing the existence of a drawing, has been applied to several other graph drawing problems, including upward planar drawings, drawings minimizing the number of uncrossed edges, and clustered planarity. References Planar graphs Graph drawing
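Algorithmically, the criterion above is used by introducing one GF(2) unknown per edge–vertex pair (whether to pull that edge over that vertex) and one equation per pair of independent edges (fixing their required crossing parity); planarity then amounts to the solvability of the system. Writing out the full Hanani–Tutte system is lengthy, so the sketch below shows only its computational core, a generic Gaussian elimination over the two-element field. The solver and the small example system are illustrative, not derived from a particular graph.

```python
import numpy as np

def solve_gf2(A, b):
    """Gaussian elimination over GF(2). Returns a 0/1 solution vector of
    A x = b (mod 2), or None if the system is inconsistent."""
    A = np.array(A, dtype=np.uint8) % 2
    b = np.array(b, dtype=np.uint8) % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        hits = np.nonzero(A[row:, col])[0]    # candidate pivot rows
        if len(hits) == 0:
            continue                          # no pivot in this column
        pr = row + hits[0]
        A[[row, pr]], b[[row, pr]] = A[[pr, row]], b[[pr, row]]  # swap into place
        for r in range(m):                    # eliminate the column elsewhere
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == m:
            break
    if any(b[row:]):                          # a zero row with b = 1: no solution
        return None
    x = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):          # free variables stay 0
        x[col] = b[r]
    return x

# Illustrative system over GF(2): x1 + x2 = 1, x2 + x3 = 0.
print(solve_gf2([[1, 1, 0], [0, 1, 1]], [1, 0]))   # e.g. [1 0 0]
```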
Hanani–Tutte theorem
[ "Mathematics" ]
497
[ "Statements about planar graphs", "Planar graphs", "Theorems in discrete mathematics", "Planes (geometry)", "Theorems in graph theory" ]
42,787,918
https://en.wikipedia.org/wiki/Acetoxycycloheximide
Acetoxycycloheximide is an organic chemical compound. It can be considered as the acetylated analogue of cycloheximide. It is a potent protein synthesis inhibitor in animal cells and can inhibit the formation of memories. See also Cycloheximide References Acetate esters Secondary alcohols Antibiotics Glutarimides Ketones
Acetoxycycloheximide
[ "Chemistry", "Biology" ]
77
[ "Biotechnology products", "Ketones", "Functional groups", "Antibiotics", "Biocides" ]
42,787,929
https://en.wikipedia.org/wiki/2%2C3-Bis%28acetylmercaptomethyl%29quinoxaline
2,3-Bis(acetylmercaptomethyl)quinoxaline (BAMMQ) is an antiviral agent which inhibits poliovirus RNA synthesis in vitro and in vivo and inhibits human herpesvirus 1 multiplication in vitro. It does not interfere with attachment, penetration or DNA synthesis, but interrupts a late stage in virus assembly and/or maturation. BAMMQ was first identified as an antiviral agent in 1974, when it was shown to inhibit poliovirus growth by 99.8% or more at concentrations as low as 10−5 M. That same year, a patent was filed for BAMMQ and related quinoxaline compounds as antiviral agents, particularly noting their effectiveness against herpes simplex virus. The compound acts by specifically inhibiting viral RNA synthesis, with inhibition occurring within 30–60 minutes after addition of the drug. Studies demonstrated that while BAMMQ inhibited viral RNA replication, it did not directly affect viral protein synthesis. The compound's antiviral activity was found to be somewhat dependent on the culture medium composition and viral infection levels. This initial research also suggested that BAMMQ may act by preventing the reinitiation of viral RNA synthesis rather than blocking ongoing RNA synthesis. BAMMQ has demonstrated broad-spectrum antiviral activity against multiple viruses. In addition to poliovirus, early studies showed it was effective against vesicular stomatitis virus, human parainfluenza virus type 3, Rous sarcoma virus, and herpes simplex virus. However, the compound's basic cytotoxicity and the fact that its antiviral efficiency could be partly reversed by increasing viral infection levels or using enriched medium limited its practical applications. The compound has also shown activity against plant viruses. It effectively inhibits both tobacco mosaic virus (TMV) and cowpea chlorotic mottle virus (CCMV) at relatively low concentrations, requiring only 0.015 millimolar (mM) for 90% inhibition of TMV and 0.03 mM for CCMV in leaf disk experiments. While the compound shows promise in leaf disk studies, research has shown that it does not reduce TMV accumulation in tobacco tissue cultures containing the compound. BAMMQ has also been studied as an inhibitor of encephalomyocarditis (EMC) virus. While it allowed plating of high virus concentrations without causing extensive cell damage, no resistant viral mutants were observed in these studies. References Quinoxalines Thioesters Antiviral drugs Acetyl compounds
2,3-Bis(acetylmercaptomethyl)quinoxaline
[ "Chemistry", "Biology" ]
535
[ "Antiviral drugs", "Thioesters", "Biocides", "Functional groups" ]
42,790,711
https://en.wikipedia.org/wiki/Plant%20secretory%20tissue
The tissues that are concerned with the secretion of gums, resins, volatile oils, nectar, latex, and other substances in plants are called secretory tissues. These tissues are classified as either laticiferous tissues or glandular tissues. Introduction Secretory tissues are cells or organizations of cells which produce a variety of secretions. The secreted substance may remain deposited within the secretory cell itself or may be excreted, that is, released from the cell. Substances may be excreted to the surface of the plant or into intercellular cavities or canals. Some of the many substances contained in the secretions are not further utilized by the plant (resins, rubber, tannins, and various crystals), while others take part in the functions of the plant (enzymes and hormones). Secretory structures range from single cells scattered among other kinds of cells to complex structures involving many cells; the latter are often called glands. Epidermal hairs of many plants are secretory or glandular. Such hairs commonly have a head composed of one or more secretory cells borne on a stalk. The hair of a stinging nettle is bulbous below and extends into a long, fine process above. If one touches the hair, its tip breaks off, the sharp edge penetrates the skin, and the poisonous secretion is released. Glands secreting a sugary liquid, the nectar, in flowers pollinated by insects are called nectaries. Nectaries may occur on the floral stalk or on any floral organ: sepal, petal, stamen, or ovary. The hydathode structures discharge water, a phenomenon called guttation, through openings in margins or tips of leaves. The water flows through the xylem to its endings in the leaf and then through the intercellular spaces of the hydathode tissue toward the openings in the epidermis. Strictly speaking, such hydathodes are not glands because they are passive with regard to the flow of water. Some carnivorous plants have glands that produce secretions capable of digesting insects and small animals. These glands occur on leaf parts modified as insect-trapping structures. In the sundews (Drosera) the traps bear stalked glands, called tentacles. When an insect lights on the leaf, the tentacles bend down and cover the victim with a mucilaginous secretion, the enzymes of which digest the insect. See insectivorous plants. Resin ducts are canals lined with secretory cells that release resins into the canal. Resin ducts are common in gymnosperms and occur in various tissues of roots, stems, leaves, and reproductive structures. Gum ducts are similar to resin ducts and may contain resins, oils, and gums. Usually, the term gum duct is used with reference to the dicotyledons, although gum ducts also may occur in the gymnosperms. Oil ducts are intercellular canals whose secretory cells produce oils or similar substances. Such ducts may be seen, for example, in various parts of plants of the carrot family (Umbelliferae). Laticifers are cells or systems of cells containing latex, a milky or clear, colored or colorless liquid. Latex occurs under pressure and exudes from the plant when the latter is cut. Laticiferous tissues These consist of thick-walled, greatly elongated and much-branched ducts containing a milky or yellowish colored juice known as latex. They contain numerous nuclei which lie embedded in the thin lining layer of protoplasm. They are irregularly distributed in the mass of parenchymatous cells.
Laticiferous ducts, in which latex is found, are of two types: latex cells (non-articulate latex ducts) and latex vessels (articulate latex ducts). Latex cells Also called "non-articulate latex ducts", these ducts are independent units which extend as branched structures for long distances in the plant body. They originate as minute structures, elongate quickly and by repeated branching ramify in all directions, but do not fuse together. Thus a network is not formed, as in latex vessels. Latex vessels Also called "articulate latex ducts", these ducts or vessels are the result of anastomosis of many cells. They grow more or less as parallel ducts which, by means of branching and frequent anastomoses, form a complex network. Latex vessels are commonly found in many angiosperm families: Papaveraceae, Compositae, Euphorbiaceae, Moraceae, etc. Function The function of laticiferous ducts is not clearly understood. They may act as food storage organs, as reservoirs of waste products, or as translocatory tissues. Glandular tissues This tissue consists of special structures: the glands. These glands contain some secretory or excretory products. A gland may consist of isolated cells or a small group of cells with or without a central cavity. They are of various kinds and may be internal or external. Internal glands are: oil glands secreting essential oils, as in the fruits and leaves of orange and lemon; mucilage-secreting glands, as in the betel leaf; glands secreting gum, resin, tannin, etc.; digestive glands secreting enzymes or digestive agents; and special water-secreting glands at the tips of veins. External glands are commonly short hairs tipped by glands. They are water-secreting hairs or glands; glandular hairs secreting gum-like substances, as in tobacco and plumbago; glandular hairs secreting irritating, poisonous substances, as in nettles; and honey glands, as in carnivorous plants. See also Vascular tissue Hydathode References Raven, Peter H., Evert, Ray F., & Eichhorn, Susan E. (1986). Biology of Plants (4th ed.). New York: Worth Publishers. External links Intro to Plant Structure Contains diagrams of the plant tissues, listed as an outline. Plant anatomy Plant physiology
Plant secretory tissue
[ "Biology" ]
1,233
[ "Plant physiology", "Plants" ]
42,791,361
https://en.wikipedia.org/wiki/Polyhexahydrotriazine
Polyhexahydrotriazines (PHTs) are polymers of hexahydro-1,3,5-triazines, a class of heterocyclic compounds with the formula (CH2NR)3. They are among the strongest known thermosetting plastics and are stable to solvents at pH > 3, but decompose to the monomers in acidic solutions. Synthesis Various PHTs were synthesized in one step at room temperature in the early 2000s; they were considered impractical due to their poor mechanical properties. It was later elucidated that, due to the temperature of synthesis, the poor mechanical properties observed were likely due to poly(hemiaminal) formation and not PHT formation. In 2014, Jeanette Garcia, Gavin Jones and a team of researchers at IBM Almaden Research Center, US, developed a new type of PHT (1.6) in what has been called a "happy accident". Garcia had left out a reagent when preparing her mix of chemicals. When low heat was applied to the beaker of paraformaldehyde and 4,4ʹ-oxydianiline, it created a hemiaminal dynamic covalent network (HDCN). Heating the HDCN to 200 °C transformed it into two new polymers: PHT 1.6 and an organogel known as polyhemiaminal (PHA). Properties PHT 1.6 has a yellowish color. It is resistant to solvents at pH > 3, but decomposes into monomers within a day in acidic solutions at pH < 2. This property is unusual for thermosetting plastics and allows easy recycling. This PHT has a Young's modulus exceeding 10 GPa, which is among the highest for a thermosetting plastic; it can be further increased by ~50% by dispersing carbon nanotubes in the polymer. The PHT is brittle, and cracks when strained to 1%. Upon heating, it softens at a glass transition temperature of ~190 °C and decomposes at ~300 °C. Possible uses A number of industries could benefit from the use of PHT in manufacturing parts and devices due to its recyclability, lightweight structure, and strength: Aerospace - PHT used in the development of wings, tails, bodies, and rudders would provide a lightweight, durable vessel for both commercial and personal aircraft applications. PHT in combination with PHA could be used to bond aircraft pieces that are exposed to harsh environmental conditions and high speeds. Automotive - When manufacturing automobiles, PHT would provide a light, high performance option for auto body parts such as panels, hoods, and exterior trim pieces. Semiconductors - Semiconductor chips and transistors made of PHT would allow for the reworking of defective electronics rather than having to discard the equipment. References Amines Triazines Organic polymers Thermosetting plastics
Polyhexahydrotriazine
[ "Chemistry" ]
602
[ "Organic polymers", "Functional groups", "Organic compounds", "Amines", "Bases (chemistry)" ]
42,791,761
https://en.wikipedia.org/wiki/Quantum%20compass
The term quantum compass often refers to an instrument which measures relative position using the technique of atom interferometry. It includes an ensemble of accelerometers and gyroscopes based on quantum technology to form an inertial navigation unit. Description Work on quantum-technology-based inertial measurement units (IMUs), the instruments containing the gyroscopes and accelerometers, follows from early demonstrations of matter-wave-based accelerometers and gyrometers. The first demonstration of onboard acceleration measurement was made on an Airbus A300 in 2011. A quantum compass contains clouds of atoms frozen using lasers. By measuring the movement of these frozen particles over precise periods of time, the motion of the device can be calculated. The device would then provide accurate position in circumstances where satellites are not available for satellite navigation, e.g. a fully submerged submarine. Various defence agencies worldwide, such as the US DARPA and the United Kingdom Ministry of Defence, have pushed the development of prototypes for future use in submarines and aircraft. In 2024, researchers from the Centre for Cold Matter of Imperial College, London, tested an experimental quantum compass on an underground train on London's District line. During the same year, scientists at the Sandia National Laboratories announced they were able to perform spatial quantum sensing using silicon photonic microchip components, a significant advancement towards the development of compact, portable and inexpensive quantum compass devices. References Measuring instruments Speed sensors Vehicle parts Vehicle technology
Quantum compass
[ "Physics", "Technology", "Engineering" ]
303
[ "Speed sensors", "Quantum mechanics", "Measuring instruments", "Vehicle technology", "Mechanical engineering by discipline", "Vehicle parts", "Components", "Quantum physics stubs" ]
42,791,912
https://en.wikipedia.org/wiki/Vitrimers
Vitrimers are a class of plastics derived from thermosetting polymers (thermosets) and very similar to them. Vitrimers consist of molecular, covalent networks which can change their topology by thermally activated bond-exchange reactions. At high temperatures they can flow like viscoelastic liquids; at low temperatures the bond-exchange reactions are immeasurably slow (frozen), and the vitrimers then behave like classical thermosets. Vitrimers are strong glass formers. Their behavior opens new possibilities for the application of thermosets, such as self-healing materials or simple processability over a wide temperature range. Besides epoxy resins based on diglycidyl ether of bisphenol A, other polymer networks have been used to produce vitrimers, such as aromatic polyesters, polylactic acid (polylactide), polyhydroxyurethanes, epoxidized soybean oil with citric acid, and polybutadiene. Vitrimers were given this name in the early 2010s by the French researcher Ludwik Leibler of the CNRS. Background and significance Thermoplastics are easy to process, but degrade easily under chemical attack and mechanical stress, while the opposite is true of thermosets. These differences arise from how the polymer chains are held together. Historically, thermoset polymer systems that were processable by virtue of topology changes within the covalent networks, as mediated by bond exchange reactions, were also developed by James Economy's group at UIUC in the 1990s, including the consolidation of thermoset composite laminae. The Economy group also conducted studies employing secondary ion mass spectrometry (SIMS) on deuterated and undeuterated fully cured vitrimer layers to discriminate the length scales (<50 nm) for physical interdiffusion between vitrimer constituent atoms, providing evidence towards eliminating physical interdiffusion of the polymer chains as the governing mechanism for bonding between vitrimer layers. Thermoplastics are made of covalently bonded molecular chains, which are held together by weak interactions (e.g., van der Waals forces). The weak intermolecular interactions lead to easy processing by melting (or in some cases also from solution), but also make the polymer susceptible to solvent degradation and to creep under constant load. Thermoplastics can be deformed reversibly above their glass-transition temperature or their crystalline melting point and be processed by extrusion, injection molding, and welding. Thermosets, on the other hand, are made of molecular chains which are interconnected by covalent bonds to form a stable network. Thus, they have outstanding mechanical properties and thermal and chemical resistance. They are an indispensable part of structural components in the automotive and aircraft industries. Due to their irreversible linking by covalent bonds, molding is not possible once the polymerization is completed. Therefore, they must be polymerized in the desired shape, which is time-consuming, restricts the shape and is responsible for their high price. Given this, if the chains can be held together by reversible, strong covalent bonds, the resultant polymer would have the advantages of both thermoplastics and thermosets, including high processability, repairability, and performance. Vitrimers combine the desirable properties of both classes: they have the mechanical and thermal properties of thermosets and can also be molded under the influence of heat. Vitrimers can be welded like silica glasses or metals.
Welding by simple heating allows the creation of complex objects. Vitrimers could thus be a new and promising class of materials with many uses. The term vitrimer was created by the French researcher Ludwik Leibler, head of a laboratory at CNRS, France's national research institute. In 2011, Leibler and co-workers developed silica-like networks using the well-established transesterification reaction of epoxy and fatty dicarboxylic or tricarboxylic acids. The synthesized networks have both hydroxyl and ester groups, which undergo exchange reactions (transesterifications) at high temperatures, resulting in the ability of the material to relax stress and be reshaped. On the other hand, the exchange reactions are suppressed to a great extent when the networks are cooled down, leading to behavior like that of a soft solid. This whole process is based only on exchange reactions, which is the main difference from thermoplastics. Functional principle Glass and glass former If the melt of an (organic) amorphous polymer is cooled down, it solidifies at the glass-transition temperature Tg. On cooling, the hardness of the polymer increases in the neighborhood of Tg by several orders of magnitude. This hardening follows the Williams–Landel–Ferry equation, not the Arrhenius equation. Organic polymers are thus called fragile glass formers. Silica glass (e.g., window glass) is, in contrast, labelled a strong glass former. Its viscosity changes only very slowly in the vicinity of the glass-transition point Tg and follows the Arrhenius law. This is what permits glassblowing. If one tried to shape an organic polymer in the same manner as glass, it would liquefy almost completely only very slightly above Tg. For a theoretical glassblowing of organic polymers, the temperature would have to be controlled very precisely. Until 2010, no organic strong glass formers were known. Strong glass formers can be shaped in the same way as glass (silicon dioxide) can be. Vitrimers are the first such materials discovered; they can behave like viscoelastic fluids at high temperatures. Unlike classical polymer melts, whose flow properties are largely dependent on friction between monomers, vitrimers become a viscoelastic fluid because of exchange reactions at high temperatures as well as monomer friction. These two processes have different activation energies, resulting in a wide range of viscosity variation. Moreover, because the exchange reactions follow the Arrhenius law, the change of viscosity of vitrimers also follows an Arrhenius relationship with increasing temperature, differing greatly from conventional organic polymers (see the numerical sketch below). Effect of transesterification and temperature influence The research group led by Ludwik Leibler demonstrated the operating principle of vitrimers with the example of epoxy thermosets. Epoxy thermosets behave as vitrimers when transesterification reactions can be introduced and controlled. In the studied system, carboxylic acids or carboxylic acid anhydrides must be used as hardeners. A topology change is possible through transesterification reactions which do not affect the number of links or the (average) functionality of the polymer, meaning that neither the decomposition of polymer linkages nor a decrease in the integrity of the polymer happens when transesterification reactions take place. Thus, the polymer can flow like a viscoelastic liquid at high temperatures.
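Returning to the glass-former comparison above, the following minimal sketch contrasts the two temperature dependences numerically: the Williams–Landel–Ferry (WLF) law typical of fragile organic glass formers near Tg, and the Arrhenius law followed by strong glass formers such as silica and, through their exchange reactions, vitrimers. The "universal" WLF constants are standard textbook values; the glass-transition temperature, activation energy, and viscosity at Tg are assumed purely for illustration.

```python
import numpy as np

R = 8.314             # gas constant [J/(mol K)]
Tg = 350.0            # assumed glass-transition temperature [K]
C1, C2 = 17.44, 51.6  # "universal" WLF constants
Ea = 150e3            # assumed activation energy of the exchange reaction [J/mol]

def log10_eta_wlf(T, log10_eta_Tg=12.0):
    """WLF law: log10(eta(T)/eta(Tg)) = -C1 (T - Tg) / (C2 + T - Tg)."""
    return log10_eta_Tg - C1 * (T - Tg) / (C2 + T - Tg)

def log10_eta_arrhenius(T, log10_eta_Tg=12.0):
    """Arrhenius law eta = eta0 exp(Ea / (R T)), pinned to the value at Tg."""
    return log10_eta_Tg + Ea / (R * np.log(10)) * (1.0 / T - 1.0 / Tg)

for T in (Tg + 10, Tg + 50, Tg + 100):
    print(f"T = {T:.0f} K: WLF log10(eta) = {log10_eta_wlf(T):6.2f}, "
          f"Arrhenius log10(eta) = {log10_eta_arrhenius(T):6.2f}")
```

The fragile (WLF) viscosity collapses by many orders of magnitude within a few tens of kelvin above Tg, while the Arrhenius curve falls off gradually; it is this gradual, controllable drop that makes vitrimers workable over a wide temperature window, like inorganic glass.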
During the cooling phase, the transesterification reactions slow down until they finally freeze (become immeasurably slow). Below this point, vitrimers behave like normal, classical thermosets. The case-study polymers exhibited elastic moduli from 1 MPa to 100 MPa, depending on the density of the bonding network. The concentration of ester groups in vitrimers has been shown to have a large influence on the rate of the transesterification reactions. In work on polylactide vitrimers, Hillmyer et al. demonstrated that the more ester groups are present in the polymer, the faster the relaxation rates, leading to better self-healing performance. Polylactide vitrimers synthesized by cross-linking reactions of hydroxyl-terminated 4-arm star-shaped poly((±)-lactide) (HTSPLA) and methylenediphenyl diisocyanate (MDI) in the presence of the cross-linking and transesterification catalyst tin(II) octoate [Sn(Oct)2] have many more ester groups than all previous vitrimers; this material therefore shows a significantly higher stress-relaxation rate than other polyester-based vitrimer systems. Applications Many uses are imaginable on this basis. A surfboard made of vitrimers could be brought into a new shape, scratches on a car body could be healed, and cross-linked plastic or synthetic rubber items could be welded. Vitrimers prepared by metathesis of dioxaborolanes with various commercially available polymers can have both good processability and outstanding performance, including mechanical, thermal, and chemical resistance. The polymers that can be used in this methodology range from poly(methyl methacrylate), polyimine, and polystyrene to high-density polyethylene with robust cross-linked structures, which makes this preparative route to vitrimers applicable to a wide range of industries. Recent NASA-funded work on reversible adhesives for in-space assembly has used a high-performance vitrimer system called aromatic thermosetting copolyester (ATSP) as the basis for coatings and composites that can be reversibly bonded in the solid state, providing new possibilities for the assembly of large, complex structures for space exploration and development. The start-up Mallinda Inc. claims applications across the composites market, from wind energy, sporting goods, automotive, aerospace, and marine uses to carbon-fiber-reinforced pressure vessels, among others. External links ESPCI ParisTech ATSP Innovations Mallinda Inc Imine-linked Vitrimers References Polymers French inventions 21st-century inventions
Vitrimers
[ "Chemistry", "Materials_science" ]
2,041
[ "Polymers", "Polymer chemistry" ]
42,793,721
https://en.wikipedia.org/wiki/Sodium%20cyanate
Sodium cyanate is the inorganic compound with the formula NaOCN. A white solid, it is the sodium salt of the cyanate anion. Structure The anion is described by two resonance structures, ⁻O−C≡N and O=C=N⁻. The salt adopts a body-centered rhombohedral crystal lattice structure (trigonal crystal system) at room temperature. Preparation Sodium cyanate is prepared industrially by the reaction of urea with sodium carbonate at elevated temperature: 2OC(NH2)2 + Na2CO3 → 2Na(NCO) + CO2 + 2NH3 + H2O Sodium allophanate is observed as an intermediate in this reaction. Sodium cyanate can also be prepared in the laboratory by oxidation of cyanide in aqueous solution with a mild oxidizing agent such as lead oxide. Uses and reactions The main use of sodium cyanate is for steel hardening. Sodium cyanate is used to produce cyanic acid, often generated in situ by treatment with an acid. This approach is exploited for condensation with amines to give unsymmetrical ureas (RNH2 + HNCO → RNHC(O)NH2). Such urea derivatives have a range of biological activity. See also Cyanate References Cyanates Sodium compounds
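As a numerical aside on the industrial preparation above, the following Python sketch (not from the article; rounded atomic masses, theoretical quantities ignoring yield losses) checks the mass balance of the urea route:

```python
# Mass balance for: 2 OC(NH2)2 + Na2CO3 -> 2 NaOCN + CO2 + 2 NH3 + H2O
M = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "Na": 22.990}

def molar_mass(counts):
    """Molar mass in g/mol from an element -> atom-count mapping."""
    return sum(M[el] * n for el, n in counts.items())

urea = molar_mass({"C": 1, "O": 1, "N": 2, "H": 4})      # ~60.06 g/mol
soda = molar_mass({"Na": 2, "C": 1, "O": 3})             # ~105.99 g/mol
naocn = molar_mass({"Na": 1, "O": 1, "C": 1, "N": 1})    # ~65.01 g/mol

extent = 1000.0 / (2 * naocn)   # mol of reaction per kg of NaOCN produced
print(f"per kg NaOCN: {2 * urea * extent:.0f} g urea, {soda * extent:.0f} g Na2CO3")
```

Under these assumptions, roughly 924 g of urea and 815 g of sodium carbonate are consumed per kilogram of sodium cyanate produced.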
Sodium cyanate
[ "Chemistry" ]
242
[ "Inorganic compounds", "Functional groups", "Inorganic compound stubs", "Cyanates" ]
42,797,019
https://en.wikipedia.org/wiki/Optical%20phenomenon
Optical phenomena are any observable events that result from the interaction of light and matter. All optical phenomena coincide with quantum phenomena. Common optical phenomena are often due to the interaction of light from the Sun or Moon with the atmosphere, clouds, water, dust, and other particulates. One common example is the rainbow, when light from the Sun is reflected and refracted by water droplets. Some phenomena, such as the green ray, are so rare they are sometimes thought to be mythical. Others, such as Fata Morganas, are commonplace in favored locations. Other phenomena are simply interesting aspects of optics, or optical effects. For instance, the colors generated by a prism are often shown in classrooms. Scope Optical phenomena encompass a broad range of events, including those caused by atmospheric optical properties, other natural occurrences, man-made effects, and interactions involving human vision (entoptic phenomena). Also listed here are unexplained phenomena that could have an optical explanation and "optical illusions" for which optical explanations have been excluded. There are many phenomena that result from either the particle or the wave nature of light. Some are quite subtle and observable only by precise measurement using scientific instruments. A famous example is the bending of starlight by the Sun during a solar eclipse, a phenomenon that serves as evidence for the curvature of spacetime as predicted by the theory of relativity. Atmospheric optics Non-atmospheric optical phenomena Dichromatism Gegenschein Iridescence Opposition effect Shadow Shade Silhouette Sylvanshine Zodiacal light Other optical effects Asterism, star gems such as star sapphire or star ruby Aura, a phenomenon in which gas or dust surrounding an object luminesces or reflects light from the object Aventurescence, also called the Schiller effect, spangled gems such as aventurine quartz and sunstone Baily's beads, beads of sunlight visible during total solar eclipses Camera obscura Cathodoluminescence Caustics Chatoyancy, cat's eye gems such as chrysoberyl cat's eye or aquamarine cat's eye Chromatic polarization Diffraction, the apparent bending and spreading of light waves when they meet an obstruction Dispersion Double refraction or birefringence of calcite and other minerals Double-slit experiment Electroluminescence Evanescent wave Fluorescence, a form of luminescence (photoluminescence) Mie scattering (why clouds are white) Metamerism, as in alexandrite Moiré pattern Newton's rings Phosphorescence Pleochroism, gems or crystals that seem "many-colored" Rayleigh scattering (why the sky is blue, sunsets are red, and associated phenomena) Reflection Refraction Sonoluminescence Shrimpoluminescence Synchrotron radiation The separation of light into colors by a prism Triboluminescence Thomson scattering Total internal reflection Twisted light Umov effect Zeeman effect The ability of light to travel through space or through a vacuum Entoptic phenomena Diffraction of light through the eyelashes Haidinger's brush Monocular diplopia (or polyplopia) from reflections at boundaries between the various ocular media Phosphenes from stimulation other than by light (e.g., mechanical, electrical) of the rod cells and cones of the eye or of other neurons of the visual system Purkinje images.
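Among the effects listed above, Rayleigh scattering admits a quick quantitative illustration: scattered intensity scales roughly as 1/wavelength⁴, so shorter wavelengths are scattered far more strongly. A minimal Python sketch (wavelengths are typical textbook values):

```python
# Rayleigh scattering intensity scales roughly as 1/wavelength^4, which is
# why blue light dominates skylight. Ratios are relative to red at 700 nm.
for name, lam in [("violet", 400), ("blue", 450), ("green", 550), ("red", 700)]:
    print(f"{name:6s} {lam} nm: scattered ~{(700 / lam) ** 4:4.1f}x red")
```

The roughly 6:1 ratio between blue and red scattering is the usual back-of-the-envelope explanation for the blue sky and, by depletion of blue light along long atmospheric paths, for red sunsets.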
Optical illusions The unusually large size of the Moon as it rises and sets, the Moon illusion The shape of the sky, the sky bowl Unexplained phenomena Some phenomena have yet to be conclusively explained and may possibly be some form of optical phenomenon. Some consider many of these "mysteries" simply to be local tourist attractions that are not worthy of thorough investigation. Hessdalen lights Min Min lights Light of Saratoga Naga fireballs See also List of optical topics Optics References Further reading Thomas D. Rossing and Christopher J. Chiaverina, Light Science: Physics and the Visual Arts, Springer, New York, 1999. Robert Greenler, Rainbows, Halos, and Glories, Elton-Wolf Publishing, 1999. G. P. Können, Polarized Light in Nature, translated by G. A. Beerling, Cambridge University Press, 1985. M. G. J. Minnaert, Light and Color in the Outdoors. John Naylor, Out of the Blue: A 24-hour Skywatcher's Guide, Cambridge University Press, 2002. Abenteuer im Erdschatten (German). The Marine Observers' Log. External links Atmospheric Optics Reference site
Optical phenomenon
[ "Physics" ]
967
[ "Optical phenomena", "Physical phenomena" ]
54,274,216
https://en.wikipedia.org/wiki/Moduli%20stack%20of%20vector%20bundles
In algebraic geometry, the moduli stack of rank-n vector bundles Vectn is the stack parametrizing vector bundles (or locally free sheaves) of rank n over some reasonable spaces. It is a smooth algebraic stack of negative dimension −n^2. Moreover, viewing a rank-n vector bundle as a principal GLn-bundle, Vectn is isomorphic to the classifying stack BGLn. Definition For the base category, let C be the category of schemes of finite type over a fixed field k. Then Vectn is the category where an object is a pair (U, E) of a scheme U in C and a rank-n vector bundle E over U, and a morphism (U, E) → (V, F) consists of a morphism f : U → V in C and a bundle isomorphism E ≅ f*F. Let p : Vectn → C, (U, E) ↦ U, be the forgetful functor. Via p, Vectn is a prestack over C. That it is a stack over C is precisely the statement "vector bundles have the descent property". Note that each fiber of p over U is the category of rank-n vector bundles over U in which every morphism is an isomorphism (i.e., each fiber of p is a groupoid). See also classifying stack moduli stack of principal bundles References Algebraic geometry
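The negative dimension quoted above can be recovered by a short calculation; the following display is a sketch assuming the standard convention that a quotient stack [X/G] has dimension dim X − dim G:

```latex
\[
\mathrm{Vect}_n \;\simeq\; B\mathrm{GL}_n \;=\; [\,\operatorname{Spec} k \,/\, \mathrm{GL}_n\,],
\qquad
\dim \mathrm{Vect}_n \;=\; \dim \operatorname{Spec} k - \dim \mathrm{GL}_n \;=\; 0 - n^2 \;=\; -n^2 .
\]
```

Here Spec k is a point (dimension 0) and GLn is an open subvariety of the n×n matrices, hence of dimension n².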
Moduli stack of vector bundles
[ "Mathematics" ]
238
[ "Fields of abstract algebra", "Algebraic geometry" ]
54,275,492
https://en.wikipedia.org/wiki/Carpenter%20v.%20United%20States
Carpenter v. United States, , is a landmark United States Supreme Court case concerning the privacy of historical cell site location information (CSLI). The Court held that government entities violate the Fourth Amendment to the United States Constitution when accessing historical CSLI records containing the physical locations of cellphones without a search warrant. Prior to Carpenter, government entities could obtain cellphone location records from service providers without a warrant simply by asserting that the information was required as part of an investigation; the ruling changed this procedure. Recognizing the influence of new consumer communications devices in the 2010s, the Court expanded its conceptions of constitutional rights toward the privacy of this type of data. However, the Court emphasized that the Carpenter ruling was narrowly restricted to the precise types of information and search procedures that were relevant to this case. Background Cell site location information (CSLI) Cellular telephone service providers are able to find the location of cell phones through either global positioning system (GPS) data or cell site location information (CSLI), in the process of connecting calls and data transmissions. CSLI is captured by nearby cell towers, and this information is used to triangulate the location of phones. Service providers capture and store this data for business purposes, such as troubleshooting, maximizing network efficiencies, and determining whether to charge customers roaming fees for particular calls. The data can also illustrate the historical movements of a cellphone. Thus, anyone with access to this data has the ability to know where the phone has been and what other cell phones were in the same area at a given time. When users travel with their cellphones, this data can theoretically illustrate every place a person has traveled and, through the corresponding data of others, possibly the locations of the people they encountered. Third-party doctrine Prior to Carpenter, the Supreme Court had consistently held that a person has no reasonable expectation of privacy in information voluntarily turned over to third parties, such as telephone companies, and that a search warrant is therefore not required when government officials seek this information. This legal theory is known as the third-party doctrine, established by the Supreme Court in Smith v. Maryland (1979), in which the Court determined that the government could obtain a list of phone numbers dialed from a suspect's phone. By the 2010s, cellphones and particularly smartphones had become important tools for nearly every person in the United States. Many applications, such as GPS navigation and location tools, require a phone to send and receive information constantly, including the exact location of the phone, often without an affirmative action on the part of its owner. As technology advanced in the 2010s, the Supreme Court began to modify its precedents on government searches of personal communications devices, given new consumer behaviors that may transcend the third-party doctrine. Case background Between December 2010 and March 2011, several individuals in the Detroit, Michigan, area conspired and participated in armed robberies at RadioShack and T-Mobile stores across the region. In April 2011, four of the robbers were captured and arrested. The petitioner, Timothy Carpenter, was not among the initial group of arrestees.
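The triangulation described in the CSLI section above can be illustrated with a minimal, purely hypothetical Python sketch. The tower coordinates and distance estimates below are invented numbers, not drawn from the case record, and real historical CSLI is typically far coarser, often recording only the serving tower and antenna sector:

```python
import numpy as np

# Hypothetical illustration: estimate a phone's position from known tower
# locations and rough distance estimates (all values invented, in km).
towers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # tower sites
ranges = np.array([2.5, 2.7, 1.6])                       # distance estimates

def trilaterate(towers, ranges):
    """Least-squares position: linearize |x - t_i|^2 = r_i^2 against tower 0."""
    t0, r0 = towers[0], ranges[0]
    A = 2.0 * (towers[1:] - t0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(towers[1:]**2, axis=1) - np.sum(t0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

print("estimated position (km):", trilaterate(towers, ranges))
```

Even coarse, tower-level location data, accumulated over weeks or months, can reconstruct a detailed pattern of a person's movements, which is what made the records in this case so revealing.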
One of those arrested confessed and turned over his phone so that FBI agents could review the calls made from it around the time of the robberies. The agents obtained a search warrant to inspect the information in the arrestee's phone in order to find additional contacts and compile more evidence about the crime ring. Using the call records from that phone, the agents identified Timothy Carpenter as part of the crime ring and obtained his historical cell site records from his wireless carriers, compiling information about the location of his phone over 127 days. In turn, this information revealed that Carpenter had been within a two-mile radius of four robberies at the times they were perpetrated. This evidence was used to support Carpenter's arrest. At trial, Carpenter was found guilty of several counts of aiding and abetting robberies that affected interstate commerce, and of another count of using a firearm during a violent crime. He was sentenced to 116 years in prison. Appeal at the Sixth Circuit Carpenter appealed his conviction and sentence to the United States Court of Appeals for the Sixth Circuit, arguing that the CSLI evidence used against him should have been suppressed because the police had not obtained a warrant for his CSLI records before searching through them. In 2016, the Circuit Court upheld Carpenter's conviction. This ruling was largely based on the Smith v. Maryland precedent: Carpenter had used cellular telephone networks voluntarily, and under the third-party doctrine he had no reasonable expectation that the data would remain private. Thus, review of that information by the police did not constitute a "search" and did not require a warrant under the Fourth Amendment. Carpenter appealed this ruling to the U.S. Supreme Court, which granted certiorari in 2017. Supreme Court Twenty amicus curiae briefs were filed by interested organizations, scholars, and corporations for Carpenter's case. Some considered the case to be the most important Fourth Amendment dispute to come before the Supreme Court in a generation. The Court issued its decision in 2018, with the majority opinion written by Chief Justice John Roberts. The Court's ruling recognized that the Carpenter case revealed a contradiction between two lines of Supreme Court rulings on the matter of police searches of personal communications information. In United States v. Jones (2012) the Court had ruled that GPS tracking could constitute a search under the Fourth Amendment as a violation of a person's reasonable expectation of privacy. Meanwhile, the Court had held in Smith v. Maryland (1979) that the third-party doctrine absolved the government from warrant requirements when searching through telephone records. Ultimately, in Carpenter the Court determined that the third-party doctrine could not be extended to historical cell site location information (CSLI). Instead, the Court compared "detailed, encyclopedic, and effortlessly compiled" CSLI records to the GPS information at issue in United States v. Jones, recognizing that both forms of data accord the government the ability to track individuals' past movements. Furthermore, the Court noted that CSLI could pose even greater privacy risks than GPS data, as the prevalence of cellphones could accord the government "near perfect surveillance" of an individual's movements. Accordingly, the Court ruled that, under the Fourth Amendment, the government must obtain a search warrant in order to access historical CSLI records.
Roberts argued that technology "has afforded law enforcement a powerful new tool to carry out its important responsibilities. At the same time, this tool risks Government encroachment of the sort the Framers [of the U.S. Constitution], after consulting the lessons of history, drafted the Fourth Amendment to prevent." As stated in the opinion, "Unlike the nosy neighbor who keeps an eye on comings and goings, they [new technologies] are ever alert, and their memory is nearly infallible. There is a world of difference between the limited types of personal information addressed in Smith [...] and the exhaustive chronicle of location information casually collected by wireless carriers today." However, Roberts stressed that the Carpenter decision was a very narrow one and did not affect other uses of the third-party doctrine, such as searches of banking records. Similarly, he noted that the decision did not prevent the collection of CSLI without a warrant in cases of emergency or for issues of national security. Dissenting opinions Justice Anthony Kennedy, in a dissenting opinion, cautioned against the limitations on law enforcement inherent in the majority opinion. According to Kennedy, the ruling "places undue restrictions on the lawful and necessary enforcement powers exercised not only by the Federal Government, but also by law enforcement in every State and locality throughout the Nation. Adherence to this Court's longstanding precedents and analytic framework would have been the proper and prudent way to resolve this case." In another dissent, Justice Samuel Alito wrote: "I fear that today's decision will do far more harm than good. The Court's reasoning fractures two fundamental pillars of Fourth Amendment law, and in doing so, it guarantees a blizzard of litigation while threatening many legitimate and valuable investigative practices upon which law enforcement has rightfully come to rely." In yet another dissent, Justice Neil Gorsuch shared many of the majority's concerns but stressed that CSLI data is personal property, and that its storage by telephone companies should be immaterial. According to Gorsuch, the Fourth Amendment "grants you the right to invoke its guarantees whenever one of your protected things (your person, your house, your papers, or your effects) is unreasonably searched or seized. Period." Gorsuch further recommended that the third-party doctrine be overturned as inconsistent with the original meaning of the Fourth Amendment. Impact and subsequent developments After the Supreme Court ruling, Carpenter's case was remanded to the Sixth Circuit to determine whether his conviction could stand without the CSLI data that, per the Supreme Court's ruling, had required a warrant. Carpenter's lawyers argued that the data should have been subject to the exclusionary rule and thrown out as material collected without a proper warrant under the Supreme Court's ruling. However, the Circuit Court judges concluded that the FBI had been acting in good faith with respect to collecting the data, based on the law at the time the crimes were committed. This type of good-faith exception is permitted under another Supreme Court precedent, Davis v. United States (2011). The evidence was allowed to stand, and the Sixth Circuit again upheld Carpenter's criminal conviction and prison sentence. His arguments concerning sentencing procedures under the recently enacted First Step Act were also rejected.
The Supreme Court's ruling in Carpenter was narrow and did not otherwise change the third-party doctrine related to other business records that might incidentally reveal location information, nor did it overrule prior decisions concerning conventional surveillance techniques and tools such as security cameras. The Court did not extend its ruling to other matters related to cellphones not presented in Carpenter, including real-time CSLI or "tower dumps" (the downloading of information about all the devices that were connected to a particular cell site during a particular interval). The opinion also did not consider other data collection goals involving foreign affairs or national security. References Further reading External links Case page at SCOTUSblog 2016 in United States case law 2018 in United States case law United States Supreme Court cases United States Fourth Amendment case law United States Supreme Court cases of the Roberts Court United States Court of Appeals for the Sixth Circuit cases Search and seizure case law Telecommunications case law Mobile phone culture 2010s in Detroit Global Positioning System
Carpenter v. United States
[ "Technology", "Engineering" ]
2,143
[ "Global Positioning System", "Aerospace engineering", "Wireless locating", "Aircraft instruments" ]