Dataset columns: id (int64, 39 to 79M), url (string, lengths 32 to 168), text (string, lengths 7 to 145k), source (string, lengths 2 to 105), categories (list, lengths 1 to 6), token_count (int64, 3 to 32.2k), subcategories (list, lengths 0 to 27)
50,797,595
https://en.wikipedia.org/wiki/Form%20factor%20%28design%29
Form factor is a hardware design aspect that defines and prescribes the size, shape, and other physical specifications of components, particularly in electronics. A form factor may represent a broad class of similarly sized components, or it may prescribe a specific standard. It may also define an entire system, as in a computer form factor. Evolution and standardization As electronic hardware has become smaller following Moore's law and related patterns, ever-smaller form factors have become feasible. Specific technological advances, such as PCI Express, have had a significant design impact, though form factors have historically evolved slower than individual components. Standardization of form factors is vital for hardware compatibility between different manufacturers. Trade-offs Smaller form factors may offer more efficient use of limited space, greater flexibility in the placement of components within a larger assembly, reduced use of material, and greater ease of transportation and use. However, smaller form factors typically incur greater costs in the design, manufacturing, and maintenance phases of the engineering lifecycle, and do not allow the same expansion options as larger form factors. In particular, the design of smaller form-factor computers and network equipment must entail careful consideration of cooling. End-user maintenance and repair of small form-factor electronic devices such as mobile phones is often not possible, and may be discouraged by warranty voiding clauses; such devices require professional servicing—or simply replacement—when they fail. Examples Computer form factors comprise a number of specific industry standards for motherboards, specifying dimensions, power supplies, placement of mounting holes and ports, and other parameters. Other types of form factors for computers include: Small form factor (SFF), a more loosely defined set of standards that may refer to both motherboards and computer cases. SFF devices include mini-towers and home theater PCs. Pizza box form factor, a wide, flat case form factor used for computers and network switches; often sized for installation in a 19-inch rack. All-in-one PC "Lunchbox" portable computer Components Hard disk drive form factors, the physical dimensions of a computer hard drive Hard disk enclosure form factor, the physical dimensions of a computer hard drive enclosure Motherboard form factor, the physical dimensions of a computer motherboard Memory module form factors Mobile form factors Laptop or notebook, a form of portable computer with a clamshell design. Subnotebook, ultra-mobile PC, netbook, and tablet computer, various form factors for devices that are smaller and often cheaper than a typical notebook. Mobile phone, including a wide range of sizes and layouts. Broad categories of form factors include bars, flip phones, and sliders, with many subtypes and variations. Also include phablets (small tablets) and industrial handheld devices. Stick PC, a single-board computer in a small elongated casing resembling a stick See also Computer hardware Electronic packaging Packaging engineering List of computer size categories List of integrated circuit package dimensions Notes References Design Electronic design Industrial design Packaging Broad-concept articles
Form factor (design)
[ "Engineering" ]
601
[ "Industrial design", "Design engineering", "Electronic design", "Electronic engineering", "Design" ]
50,797,822
https://en.wikipedia.org/wiki/Inclusive%20fitness%20in%20humans
Inclusive fitness in humans is the application of inclusive fitness theory to human social behaviour, relationships and cooperation. Inclusive fitness theory (and the related kin selection theory) are general theories in evolutionary biology that propose a method to understand the evolution of social behaviours in organisms. While various ideas related to these theories have been influential in the study of the social behaviour of non-human organisms, their application to human behaviour has been debated. Inclusive fitness theory is broadly understood to describe a statistical criterion by which social traits can evolve to become widespread in a population of organisms. However, beyond this some scientists have interpreted the theory to make predictions about how the expression of social behavior is mediated in both humans and other animals – typically that genetic relatedness determines the expression of social behaviour. Other biologists and anthropologists maintain that beyond its statistical evolutionary relevance the theory does not necessarily imply that genetic relatedness per se determines the expression of social behavior in organisms. Instead, the expression of social behavior may be mediated by correlated conditions, such as shared location, shared rearing environment, familiarity or other contextual cues which correlate with shared genetic relatedness, thus meeting the statistical evolutionary criteria without being deterministic. While the former position still attracts controversy, the latter position has a better empirical fit with anthropological data about human kinship practices, and is accepted by cultural anthropologists. History Applying evolutionary biology perspectives to humans and human society has often resulted in periods of controversy and debate, due to their apparent incompatibility with alternative perspectives about humanity. Examples of early controversies include the reactions to On the Origin of Species, and the Scopes Monkey Trial. Examples of later controversies more directly connected with inclusive fitness theory and its use in sociobiology include physical confrontations at meetings of the Sociobiology Study Group and more often intellectual arguments such as Sahlins' 1976 book The use and abuse of biology, Lewontin et al.'s 1984 Not in Our Genes, and Kitcher's 1985 Vaulting Ambition:Sociobiology and the Quest for Human Nature. Some of these later arguments were produced by other scientists, including biologists and anthropologists, against Wilson's 1975 book Sociobiology: The New Synthesis, which was influenced by (though not necessarily endorsed by) Hamilton's work on inclusive fitness theory. A key debate in applying inclusive fitness theory to humans has been between biologists and anthropologists around the extent to which human kinship relationships (considered to be a large component of human solidarity and altruistic activity and practice) are necessarily based on or influenced by genetic relationships or blood-ties ('consanguinity'). The position of most social anthropologists is summarized by Sahlins (1976), that for humans "the categories of 'near' and 'distant' [kin] vary independently of consanguinal distance and that these categories organize actual social practice" (p. 112). Biologists wishing to apply the theory to humans directly disagree, arguing that "the categories of 'near' and 'distant' do not 'vary independently of consanguinal distance', not in any society on earth." (Daly et al. 1997, p282). 
This disagreement is central because of the way the association between blood ties/genetic relationships and altruism are conceptualized by many biologists. It is frequently understood by biologists that inclusive fitness theory makes predictions about how behaviour is mediated in both humans and other animals. For example, a recent experiment conducted on humans by the evolutionary psychologist Robin Dunbar and colleagues was, as they understood it, designed "to test the prediction that altruistic behaviour is mediated by Hamilton's rule" (inclusive fitness theory) and more specifically that "If participants follow Hamilton's rule, investment (time for which the [altruistic] position was held) should increase with the recipient's relatedness to the participant. In effect, we tested whether investment flows differentially down channels of relatedness." From their results, they concluded that "human altruistic behaviour is mediated by Hamilton's rule ... humans behave in such a way as to maximize inclusive fitness: they are more willing to benefit closer relatives than more distantly related individuals." (Madsen et al. 2007). This position continues to be rejected by social anthropologists as being incompatible with the large amount of ethnographic data on kinship and altruism that their discipline has collected over many decades, that demonstrates that in many human cultures, kinship relationships (accompanied by altruism) do not necessarily map closely onto genetic relationships. Whilst the above understanding of inclusive fitness theory as necessarily making predictions about how human kinship and altruism is mediated is common amongst evolutionary psychologists, other biologists and anthropologists have argued that it is at best a limited (and at worst a mistaken) understanding of inclusive fitness theory. These scientists argue that the theory is better understood as simply describing an evolutionary criterion for the emergence of altruistic behaviour, which is explicitly statistical in character, not as predictive of proximate or mediating mechanisms of altruistic behaviour, which may not necessarily be determined by genetic relatedness (or blood ties) per se. These alternative non-deterministic and non-reductionist understandings of inclusive fitness theory and human behavior have been argued to be compatible with anthropologists' decades of data on human kinship, and compatible with anthropologists' perspectives on human kinship. This position (e.g. nurture kinship) has been largely accepted by social anthropologists, whilst the former position (still held by evolutionary psychologists, see above) remains rejected by social anthropologists. Theoretical background Theoretical overview Inclusive fitness theory, first proposed by Bill Hamilton in the early 1960s, proposes a selective criterion for the potential evolution of social traits in organisms, where social behavior that is costly to an individual organism's survival and reproduction could nevertheless emerge under certain conditions. The key condition relates to the statistical likelihood that significant benefits of a social trait or behavior accrue to (the survival and reproduction of) other organisms who also carry the social trait. Inclusive fitness theory is a general treatment of the statistical probabilities of social traits accruing to any other organisms likely to propagate a copy of the same social trait. 
Kin selection theory treats the narrower but more straightforward case of the benefits accruing to close genetic relatives (or what biologists call 'kin') who may also carry and propagate the trait. Under conditions where the social trait sufficiently correlates (or more properly, regresses) with other likely bearers, a net overall increase in reproduction of the social trait in future generations can result. The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes or heritable factors) that influence an organism's behavior in such a way that is helpful and protective of relatives and their offspring, this behavior can also increase the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. In formal terms, if such a complex of genes arises, Hamilton's rule (rb>c) specifies the selective criteria (in terms of relatedness (r), cost (c) benefit (b)) for such a trait to increase in frequency in the population (see Inclusive fitness for more details). Hamilton noted that inclusive fitness theory does not by itself predict that a given species will necessarily evolve such altruistic behaviors, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start." (Hamilton 1987, 420). In other words, whilst inclusive fitness theory specifies a set of necessary criteria for the evolution of certain altruistic traits, it does not specify a sufficient condition for their evolution in any given species, since the typical ecology, demographics and life pattern of the species must also allow for social interactions between individuals to occur before any potential elaboration of social traits can evolve in regard to those interactions. Initial presentations of the theory The initial presentation of inclusive fitness theory (in the mid-1960s, see The Genetical Evolution of Social Behaviour) focused on making the general mathematical case for the possibility of social evolution. However, since many field biologists mainly use theory as a guide to their observations and analysis of empirical phenomena, Hamilton also speculated about possible proximate behavioural mechanisms that might be observable in organisms whereby a social trait could effectively achieve this necessary statistical correlation between its likely bearers: Hamilton here was suggesting two broad proximate mechanisms by which social traits might meet the criterion of correlation specified by the theory: Kin recognition (active discrimination): If a social trait enables an organism to distinguish between different degrees of genetic relatedness when interacting in a mixed population, and to discriminate (positively) in performing social behaviours on the basis of detecting genetic relatedness, then the average relatedness of the recipients of altruism could be high enough to meet the criterion. In another section of the same paper (page 54) Hamilton considered whether 'supergenes' that identify copies of themselves in others might evolve to give more accurate information about genetic relatedness. He later (1987, see below) considered this to be wrong-headed and withdrew the suggestion. 
Viscous populations (spatial cues): Even indiscriminate altruism may achieve the correlation in 'viscous' populations where individuals have low rates of dispersal or short distances of dispersal from their home range (their location of birth). Here, social partners are typically genealogically closely related, and so altruism can flourish even in the absence of kin recognition and kin discrimination faculties – spatial proximity and circumstantial cues provide the necessary correlation. These two alternative suggestions had important effects on how field biologists understood the theory and what they looked for in the behavior of organisms. Within a few years biologists were looking for evidence that 'kin recognition' mechanisms might occur in organisms, assuming this was a necessary prediction of inclusive fitness theory, leading to a sub-field of 'kin recognition' research. Later theoretical refinements A common source of confusion around inclusive fitness theory is that Hamilton's early analysis included some inaccuracies that, although corrected by him in later publications, are often not fully understood by other researchers who attempt to apply inclusive fitness to understanding organisms' behaviour. For example, Hamilton had initially suggested that the statistical correlation in his formulation could be understood by a correlation coefficient of genetic relatedness, but quickly accepted George Price's correction that a general regression coefficient was the more relevant metric, and together they published corrections in 1970. A related confusion is the connection between inclusive fitness and multi-level selection, which are often incorrectly assumed to be mutually exclusive theories. The regression coefficient helps to clarify this connection. Hamilton also later modified his thinking about likely mediating mechanisms whereby social traits achieve the necessary correlation with genetic relatedness. Specifically, he corrected his earlier speculations that an innate ability (and 'supergenes') to recognise actual genetic relatedness was a likely mediating mechanism for kin altruism. The point about inbreeding avoidance is significant, since the whole genome of sexual organisms benefits from avoiding close inbreeding; there is a different selection pressure at play compared to the selection pressure on social traits (see Kin recognition for more information). Since Hamilton's 1964 speculations about active discrimination mechanisms (above), other theorists such as Richard Dawkins had clarified that there would be negative selection pressure against mechanisms for genes to recognize copies of themselves in other individuals and discriminate socially between them on this basis. Dawkins used his 'Green beard' thought experiment, where a gene for social behaviour is imagined also to cause a distinctive phenotype that can be recognised by other carriers of the gene. Due to conflicting genetic similarity in the rest of the genome, there would be selection pressure for green-beard altruistic sacrifices to be suppressed via meiotic drive.
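Returning to Hamilton's rule (rb > c) stated above, the following minimal Python sketch simply evaluates the inequality for hypothetical values of relatedness, benefit and cost; the numbers are illustrative assumptions, not data from the article.

```python
# Minimal illustration of Hamilton's rule (rb > c). The parameter values
# below are hypothetical, chosen only to show how the criterion works.

def satisfies_hamiltons_rule(r: float, b: float, c: float) -> bool:
    """True if the relatedness-weighted benefit to the recipient exceeds the cost to the actor."""
    return r * b > c

# A costly act (c = 1) giving a full sibling (r = 0.5) a benefit b = 3
# satisfies the rule; the same act toward a first cousin (r = 0.125) does not.
print(satisfies_hamiltons_rule(r=0.5, b=3.0, c=1.0))    # True
print(satisfies_hamiltons_rule(r=0.125, b=3.0, c=1.0))  # False
```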
Ongoing misconceptions Hamilton's later clarifications often go unnoticed, and because of the long-standing assumption that kin selection requires innate powers of kin recognition, some theorists have later tried to clarify the position: The assumption that 'kin selection requires kin discrimination' has obscured the more parsimonious possibility that spatial-cue-based mediation of social cooperation based on limited dispersal and shared developmental context are commonly found in many organisms that have been studied, including in social mammal species. As Hamilton pointed out, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start" (Hamilton 1987, see above section). Since a reliable context of interaction between social actors is always a necessary condition for social traits to emerge, a reliable context of interaction is necessarily present to be leveraged by context-dependent cues to mediate social behaviours. Focus on mediating mechanisms of limited dispersal and reliable developmental context has allowed significant progress in applying kin selection and inclusive fitness theory to a wide variety of species, including humans, on the basis of cue-based mediation of social bonding and social behaviours (see below). Mammal evidence In mammals, as well as in other species, ecological niche and demographic conditions strongly shape typical contexts of interaction between individuals, including the frequency and circumstances surrounding the interactions between genetic relatives. Although mammals exist in a wide variety of ecological conditions and varying demographic arrangements, certain contexts of interaction between genetic relatives are nevertheless reliable enough for selection to act upon. New born mammals are often immobile and always totally dependent (socially dependent if you will) on their carer(s) for nursing with nutrient rich milk and for protection. This fundamental social dependence is a fact of life for all mammals, including humans. These conditions lead to a reliable spatial context in which there is a statistical association of replica genes between a reproductive female and her infant offspring (and has been evolutionary typical) for most mammal species. Beyond this natal context, extended possibilities for frequent interaction between related individuals are more variable and depend on group living vs. solitary living, mating patterns, duration of pre-maturity development, dispersal patterns, and more. For example, in group living primates with females remaining in their natal group for their entire lives, there will be lifelong opportunities for interactions between female individuals related through their mothers and grandmothers etc. These conditions also thus provide a spatial-context for cue-based mechanisms to mediate social behaviours. In addition to the above examples, a wide variety of evidence from mammal species supports the finding that shared context and familiarity mediate social bonding, rather than genetic relatedness per se. Cross-fostering studies (placing unrelated young in a shared developmental environment) strongly demonstrate that unrelated individuals bond and cooperate just as would normal littermates. The evidence therefore demonstrates that bonding and cooperation are mediated by proximity, shared context and familiarity, not via active recognition of genetic relatedness. 
This is problematic for those biologists who wish to claim that inclusive fitness theory predicts that social cooperation is mediated via genetic relatedness, rather than understanding the theory simply to state that social traits can evolve under conditions where there is statistical association of genetically related organisms. The former position sees the expression of cooperative behaviour as more or less deterministically caused by genetic relatedness, where the latter position does not. The distinction between cooperation mediated by shared context, and cooperation mediated by genetic relatedness per se, has significant implications for whether inclusive fitness theory can be seen as compatible with the anthropological evidence on human social patterns or not. The shared context perspective is largely compatible, the genetic relatedness perspective is not (see below). Human kinship and cooperation The debate about how to interpret the implications of Inclusive fitness theory for human social cooperation has paralleled some of the key misunderstandings outlined above. Initially, evolutionary biologists interested in humans wrongly assumed that in the human case, 'kin selection requires kin discrimination' along with their colleagues studying other species (see West et al., above). In other words, many biologists assumed that strong social bonds accompanied by altruism and cooperation in human societies (long studied by the anthropological field of kinship) were necessarily built upon recognizing genetic relatedness (or 'blood ties'). This seemed to fit well with historical research in anthropology originating in the nineteenth century (see history of kinship) that often assumed that human kinship was built upon a recognition of shared blood ties. However, independently of the emergence of inclusive fitness theory, from 1960s onwards many anthropologists themselves had reexamined the balance of findings in their own ethnographic data and had begun to reject the notion that human kinship is 'caused by' blood ties (see Kinship). Anthropologists have gathered very extensive ethnographic data on human social patterns and behaviour over a century or more, from a wide spectrum of different cultural groups. The data demonstrates that many cultures do not consider 'blood ties' (in the genealogical sense) to underlie their close social relationships and kinship bonds. Instead social bonds are often considered to be based on location-based shared circumstances including living together (co-residence), sleeping close together, working together, sharing food (commensality) and other forms of shared life together. Comparative anthropologists have shown that these aspects of shared circumstances are a significant component of what influences kinship in most human cultures, notwithstanding whether or not 'blood ties' are necessarily present (see Nurture kinship, below). Although blood ties (and genetic relatedness) often correlate with kinship, just as in the case of mammals (above section), evidence from human societies suggests that it is not the genetic relatedness per se that is the mediating mechanism of social bonding and cooperation, instead it is the shared context (albeit typically consisting of genetic relatives) and the familiarity that arises from it, that mediate the social bonds. 
This implies that genetic relatedness is not the determining mechanism nor required for the formation of social bonds in kinship groups, or for the expression of altruism in humans, even if statistical correlations of genetic relatedness are an evolutionary criterion for the emergence of such social traits in biological organisms over evolutionary timescales. Understanding this distinction between the statistical role of genetic relatedness in the evolution of social traits and yet its lack of necessary determining role in mediating mechanisms of social bonding and the expression of altruism is key to inclusive fitness theory's proper application to human social behaviour (as well as to other mammals). Nurture kinship Compatible with biologists' emphasis on familiarity and shared context mediating social bonds, the concept of nurture kinship in the anthropological study of human social relationships highlights the extent to which such relationships are brought into being through the performance of various acts of sharing, acts of care, and performance of nurture between individuals who live in close proximity. Additionally the concept highlights ethnographic findings that, in a wide swath of human societies, people understand, conceptualize and symbolize their relationships predominantly in terms of giving, receiving and sharing nurture. The concept stands in contrast to the earlier anthropological concepts of human kinship relations being fundamentally based on "blood ties", some other form of shared substance, or a proxy for these (as in fictive kinship), and the accompanying notion that people universally understand their social relationships predominantly in these terms. The nurture kinship perspective on the ontology of social ties, and how people conceptualize them, has become stronger in the wake of David M. Schneider's influential Critique of the Study of Kinship[1] and Holland's subsequent Social Bonding and Nurture Kinship: Compatibility between Cultural and Biological approaches, demonstrating that as well as the ethnographic record, biological theory and evidence also more strongly support the nurture perspective than the blood perspective. Both Schneider and Holland argue that the earlier blood theory of kinship derived from an unwarranted extension of symbols and values from anthropologists' own cultures (see ethnocentrism). References Behavioral ecology Biological anthropology Evolutionary biology concepts Evolutionary biology Evolution of primates Social anthropology
Inclusive fitness in humans
[ "Biology" ]
4,109
[ "Evolutionary biology", "Behavior", "Behavioral ecology", "Evolutionary biology concepts", "Behavioural sciences", "Ethology" ]
50,798,411
https://en.wikipedia.org/wiki/Plant%20bioacoustics
Plant bioacoustics refers to the creation of sound waves by plants. Measured sound emissions by plants as well as differential germination rates, growth rates and behavioral modifications in response to sound are well documented. Plants detect neighbors by means other than well-established communicative signals including volatile chemicals, light detection, direct contact and root signaling. Because sound waves travel efficiently through soil and can be produced with minimal energy expenditure, plants may use sound as a means for interpreting their environment and surroundings. Preliminary evidence supports that plants create sound in root tips when cell walls break. Because plant roots respond only to sound waves at frequencies which match waves emitted by the plants themselves, it is likely that plants can receive and transduce sound vibrations into signals to elicit behavioral modifications as a form of below-ground communication. Sound sensors Buzz pollination, or sonication, serves as an example of a behavioral response to specific frequencies of vibrations in plants. Some 20,000 plant species, including Dodecatheon and Heliamphora, have evolved buzz pollination in which they release pollen from anthers only when vibrated at a certain frequency created exclusively by bee flight muscles. The vibrations cause pollen granules to gain kinetic energy and escape from pores in the anthers. Similar to buzz pollination, a species of evening primrose has been shown to respond to bee wing beats and sounds of similar frequencies by producing sweeter nectar. Oenothera drummondii (beach evening primrose) is a perennial subshrub native to the Southeastern United States, but has become naturalized on almost every continent. The plant grows among coastal dunes and sandy environments. It has been discovered that O. drummondii flowers produce significantly sweeter nectar within three minutes when exposed to bee wingbeats and artificial sounds containing similar frequencies. A possible explanation for this behavior is that if the plant can sense when a pollinator is nearby, there is a high probability another pollinator will be in the area shortly. In order to increase the chance of pollination, nectar with a higher sugar concentration is produced. It has been hypothesized that the flower serves as the “ear”, containing mechanoreceptors on the plasma membranes of the cells to detect mechanical vibration. A possible mechanism behind this is the activation of mechanoreceptors by sound waves, which causes a flux of Ca2+ into the plant cell, causing it to depolarize. Because of the specific frequencies produced by the pollinators’ wings, perhaps only a distinct amount of Ca2+ enters the cell, which would ultimately determine the plant hormones and expression of genes involved in the downstream effect. Research has shown that there is a calmodulin-like gene that could be a sensor of Ca2+ concentrations in cells; therefore, the amount of Ca2+ in a plant cell could have substantial effects on the response to a stimulus. Due to the hormones and genes expressed in the petals of the flower, the transport of sugar into the nectar was increased by about 20%, giving it a higher concentration than the nectar of flowers that were exposed to higher frequencies or no sound at all. An LDV (laser Doppler vibrometer) was used to determine if the recordings would result in vibration of the petals.
A petal velocity response was observed for honey bee and moth sound signals as well as for low-frequency sounds, but not for high-frequency sounds. Sugar concentrations of nectar were measured before and after the plants were exposed to sound; a significant increase in sugar concentration was observed only when the low-frequency sound (similar to bee wingbeats) and bee sounds were played. To validate that the flower was the organ sensing the vibration of the pollinator, an experiment was run in which the flowers were covered with a glass jar while the rest of the plant was exposed. The sugar concentration of the nectar showed no significant difference before and after the low-frequency sound was played. If petals act like the ears of the plant, then there must be natural selection on the mechanical parameters of the flower. Its resonance frequency depends on size, shape and density. When comparing the traits of plants based on their pollinators, there is a pattern in the shapes of flowers with "noisy" pollinators: the flowers pollinated by bees, birds and butterflies tend to be bowl-shaped or tubular. Sound generation Plants emit audible acoustic emissions between 10 and 240 Hz as well as ultrasonic acoustic emissions (UAE) between 20 and 300 kHz. Evidence for plant mechanosensory abilities is seen when roots are subjected to unidirectional 220 Hz sound and subsequently grow in the direction of the vibration source. Using electrograph vibrational detection, structured sound wave emissions were detected along the elongation zone of root tips of corn plants in the form of loud and frequent clicks. When plants are isolated from contact, chemical, and light signal exchange with neighboring plants they are still able to sense their neighbors and detect relatives through alternative mechanisms, among which sound vibrations could play an important role. Furthermore, ultrasonic acoustic emissions (UAE) have been detected in a range of different plants which result from collapsing water columns under high tension. UAE studies show different frequencies of sound emissions based on whether or not drought conditions are present. Whether or not UAE are used by plants as a communication mechanism is not known. Although the explicit mechanisms through which sound emissions are created and detected in plants are not known, there are theories which shed light on possible mechanisms. A leading hypothesis for the generation of acoustic emissions is mechanical vibration caused by charged cell membranes and walls. Myosins and other mechanochemical enzymes which use chemical energy in the form of ATP to produce mechanical vibrations in cells may also contribute to sound wave generation in plant cells. These mechanisms may lead to overall nanomechanical oscillations of cytoskeletal components, which can generate both low and high frequency vibrations. See also Rapid plant movement Plant perception (physiology) References Plant physiology Acoustics
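As a compact summary of the emission bands quoted above (audible emissions of roughly 10–240 Hz and ultrasonic acoustic emissions of roughly 20–300 kHz), the sketch below classifies a frequency against those ranges; it is purely illustrative and the band edges are taken directly from the figures in the text.

```python
# Illustrative classifier for the plant emission bands quoted in the article:
# audible emissions ~10-240 Hz, ultrasonic acoustic emissions (UAE) ~20-300 kHz.

def classify_plant_emission(freq_hz: float) -> str:
    if 10 <= freq_hz <= 240:
        return "audible acoustic emission"
    if 20_000 <= freq_hz <= 300_000:
        return "ultrasonic acoustic emission (UAE)"
    return "outside the reported emission bands"

print(classify_plant_emission(220))     # audible band (cf. the 220 Hz root experiment)
print(classify_plant_emission(50_000))  # ultrasonic acoustic emission (UAE)
```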
Plant bioacoustics
[ "Physics", "Biology" ]
1,215
[ "Plant physiology", "Plants", "Classical mechanics", "Acoustics" ]
46,753,817
https://en.wikipedia.org/wiki/Silicon-vacancy%20center%20in%20diamond
The silicon-vacancy center (Si-V) is an optically active defect in diamond (referred to as a color center) that is receiving an increasing amount of interest in the diamond research community. This interest is driven primarily by the coherent optical properties of the Si-V, especially compared to the well-known and extensively studied nitrogen-vacancy center (N-V). While the negative Si-V− center has received the majority of the silicon-vacancy center research, interest is growing in the neutral Si-V0 center as well. Properties Crystallographic The Si-V center is formed by replacing two neighboring carbon atoms in the diamond lattice with one silicon atom, which places itself between the two vacant lattice sites. This configuration has a D3d point group symmetry. Electronic The Si-V− center is a single-hole (spin-1/2) system with ground and excited electronic states located within the diamond bandgap. The ground and excited electronic states have two orbital states split by spin–orbit coupling. Each of these spin–orbit states is doubly degenerate by spin, and this splitting can be affected by lattice strain. Phonons in the diamond lattice drive transitions between these orbital states, causing rapid equilibration of the orbital population at temperatures above ca. 1 K. All four transitions between the two ground and two excited orbital states are dipole allowed with a sharp zero-phonon line (ZPL) at 738 nm (1.68 eV) and minimal phononic sideband in a roughly 20 nm window around 766 nm. The Si-V center emits a much larger fraction of its light into its ZPL, approximately 70% (Debye–Waller factor of 0.7), than most other optical centers in diamond, such as the nitrogen-vacancy center (Debye–Waller factor ~ 0.04). The Si-V− center also has higher excited states that relax quickly to the lowest excited states, allowing off-resonant excitation. The Si-V center has inversion symmetry and no static electric dipole moment (to the first order); it is therefore insensitive to the Stark shift that could result from inhomogeneous electric fields within the diamond lattice. This property, together with the weak electron-phonon coupling, results in a narrow ZPL in the Si-V center, which is mostly limited by its intrinsic lifetime. Bright photoluminescence, narrow optical lines, and ease of finding optically indistinguishable Si-V centers favor them for applications in solid-state quantum optics. Spin Although the optical transitions of the Si-V− center preserve the electron spin, the rapid phonon-induced mixing between the Si-V− orbital states causes spin decoherence. At very low temperatures below 100 millikelvin, spin coherence for the Si-V− center improves significantly. It is possible to use the 29Si nuclear spin and the electron spin of the Si-V as qubits for quantum information applications. Comparison to Nitrogen-Vacancy Center (N-V) The N-V center is a similar defect in diamond with more historical significance. Research on the N-V center dates to the 1950s, but the negative Si-V− center was discovered in 1980 and the neutral Si-V0 center was first seen in 2011. The two defects have different advantages and drawbacks. At room temperature, the N-V center has much better spin coherence, a wider ZPL, and a wider phonon sideband. The sharpness of the Si-V center's ZPL, its large Debye–Waller factor, as well as its ability to remain stable in nanophotonic structures are the main properties that have drawn research to it instead of using the more studied N-V center.
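As a quick check on the figures above, the 738 nm zero-phonon line corresponds to a photon energy of E = hc/λ ≈ 1.68 eV, and the Debye–Waller factor gives the fraction of emission falling in the ZPL; the short sketch below reproduces that arithmetic.

```python
# Reproduces the arithmetic behind the quoted figures: a 738 nm ZPL corresponds
# to ~1.68 eV, and the Debye-Waller factor is the fraction of emission in the ZPL.

h_ev_s = 4.135667696e-15   # Planck constant, eV*s
c_m_s = 2.99792458e8       # speed of light, m/s

zpl_wavelength_m = 738e-9
photon_energy_ev = h_ev_s * c_m_s / zpl_wavelength_m
print(f"ZPL photon energy: {photon_energy_ev:.2f} eV")   # ~1.68 eV

for center, dw_factor in [("Si-V", 0.7), ("N-V", 0.04)]:
    print(f"{center}: ~{dw_factor:.0%} of emission falls in the ZPL")
```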
Neutral Si-V0 Overview The Si-V0 center has one fewer electron than the Si-V−, giving it a neutral charge and different properties. The newer Si-V0 center is a spin-1 system with electron spin resonance that gives it spin coherence superior to the Si-V− center. At room temperature, its ZPL lies in the infrared spectrum at 946 nm with an excellent Debye–Waller factor of 0.9, as compared to the red ZPL peak of the Si-V− at 738 nm. However, stabilizing Si-V0 is more of a challenge, with high precision (1–3 ppm) needed in controlling its boron concentration. Synthesis Ion Implantation Ion implantation has been used to synthesize Si-V centers in nanodiamonds. Si ions are implanted into the NDs at specific depths and implantation energies before being annealed. After the ion implantation, additional thermal treatments may be applied to repair structural defects and activate impurities. Unlike the ion implantation used to produce N-V centers, Si-V complexes can withstand higher temperature thermal treatments without dissociation risk. In practice, Si-V centers have been synthesized using multiple systems, with differing optical properties such as the widths of the resultant ZPLs. Chemical Vapor Deposition Chemical vapor deposition (CVD) is used to synthesize Si-V centers via a similar process to that used to produce N-V centers. In the case of Si-V centers, two main types of CVD are used. In the first, a solid containing silicon is etched with hydrogen and the resulting SiHx radicals dope the diamond lattice. In the second, a silicon-containing plasma is created in which SiH3 functions as the dopant. Si-V centers have been synthesized using tetramethylsilane (TMS) gas as a doping source, and in the plasma approach gas mixtures such as H2/CH4/CO2 and H2/CH4/N2 have been used to grow the diamonds on Si substrates. Applications Si-V centers' special properties make them preferred to N-V centers in some specific applications. Si-V centers can be used in temperature sensing, photoelectric detectors, and biochemical visualization, amongst others. Nanoscale Thermometer Because Si-V centers have such a sharp ZPL, they are used in temperature probes that measure the change in ZPL peaks. These temperature probes have achieved sub-Kelvin precision in less than a second, and do not need individual calibration. Due to the non-invasive nature of these luminescent thermometers, they lend themselves well to biological applications. For example, Si-V nanoscale thermometers have been used for thermosensing within live cells. Photoelectric Detectors Though there is little research favoring Si-V centers over N-V centers in photoelectric detectors, Si-V centers still exhibit a greater response and faster cutoff time compared to undoped detectors. This technology is expected to have applications in many fields including quantum information processing and bio-marking. Biochemical Visualization Confocal microscopes utilizing Si-V centers have been used to visualize cells in a non-invasive way. Additionally, nanodiamond vacancy centers such as Si-V centers can detect chemical reactions within cells and function as long-term biological markers. Quantum Information Si-V− centers in dilution refrigeration systems below 100 millikelvin can be used in quantum network applications, with spin memory long enough for entanglement up to 500 kilometers. This is because Si-V− centers can be integrated in nanophotonic structures to improve their interaction with photons for quantum communication.
Nuclear spins, such as the 29Si nuclear spin of the center itself or 13C nuclear spins naturally present in the diamond around the Si-V can also be used as nuclear memory qubits. Research interest in the neutral Si-V0 center is also growing, since it has better spin coherence than the negative Si-V− center at higher temperature. Spin coherence of 1 second has been achieved for Si-V0, giving it potential as a material to be used in quantum networks. References Diamond Spintronics Spectroscopy Crystallographic defects Quantum information science
Silicon-vacancy center in diamond
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,686
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Spintronics", "Crystallographic defects", "Materials science", "Crystallography", "Condensed matter physics", "Materials degradation", "Spectroscopy" ]
46,757,344
https://en.wikipedia.org/wiki/Di%28propylene%20glycol%29%20methyl%20ether
Di(propylene glycol) methyl ether is an organic solvent with a variety of industrial and commercial uses. It finds use as a less volatile alternative to propylene glycol methyl ether and other glycol ethers. The commercial product is typically a mixture of four isomers. References Alcohol solvents Ether solvents
Di(propylene glycol) methyl ether
[ "Chemistry" ]
70
[]
46,757,985
https://en.wikipedia.org/wiki/Dwight%20Barkley
Dwight Barkley (born 7 January 1959) is a professor of mathematics at the University of Warwick. Education and career Barkley obtained his PhD in physics from the University of Texas at Austin in 1988. He then spent one year at Caltech working with Philip Saffman followed by three years at Princeton University where he worked with Yannis Kevrekidis and Steven Orszag. In 1992 he was awarded both NSF and NATO postdoctoral fellowships. In 1994 he joined the faculty at the University of Warwick. Research Barkley studies waves in excitable media such as the Belousov–Zhabotinsky reaction, heart tissue, and neurons. He is the author of the Barkley model of excitable media and discoverer of the role of Euclidean symmetry in spiral-wave dynamics. In 1997, Laurette Tuckerman and Dwight Barkley coined the term "bifurcation analysis for time steppers" for techniques involving the modification of time-stepping computer codes to perform the tasks of bifurcation analysis. He has applied this approach in several areas of fluid dynamics, in particular to stability analysis of the cylinder wake and of the backward-facing step. Barkley also works on the transition to turbulence in shear flows, including the formation of turbulent-laminar bands and the critical point for pipe flow. Exploiting an analogy with the transition between excitable and bistable media, Barkley derived a model for pipe flow which captures most features of the transition to turbulence, in particular the behavior of turbulent regions called puffs and slugs. He is also known for deriving an equation to estimate how long it will be until a child in a car asks the question "are we there yet?" Awards In 2005 he was awarded the J. D. Crawford Prize for outstanding research in nonlinear science, "for his development of high quality, robust and efficient numerical algorithms for pattern formation phenomena in spatially extended dynamical systems". In 2008 he was elected Fellow of the American Physical Society "for combining computation and dynamical systems analyses to obtain remarkable insights into hydrodynamic instabilities and patterns in diverse systems, including flow past a cylinder, channel flow, laminar-turbulent bands, and thermal convection." That same year he was also elected a fellow of the Institute of Mathematics and its Applications. In 2009–2010 he was a Royal Society–Leverhulme Trust Senior Research Fellow. In 2016 he was elected Fellow of the Society for Industrial and Applied Mathematics "for innovative combinations of analysis and computation to obtain fundamental insights into complex dynamics of spatially extended systems." In 2024, he was named a Fluid Mechanics Fellow of Euromech "for his profound contributions to transition to turbulence, nonlinear dynamics, pattern formation, hydrodynamic instabilities, and the Euler singularity through combination of large-scale computing with insightful dynamical systems analysis and modelling". Selected publications References External links Google scholar profile Living people Academics of the University of Warwick British mathematicians Dynamical systems theorists 1959 births Fluid dynamicists Fellows of the American Physical Society
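For orientation on the Barkley model mentioned above, it is commonly written as a two-variable reaction–diffusion system of the following form; this is given here as a standard textbook statement rather than a quotation from the source, so notation and parameterization should be checked against the primary literature:

$$
\frac{\partial u}{\partial t} = \frac{1}{\varepsilon}\,u\,(1-u)\!\left(u - \frac{v+b}{a}\right) + \nabla^2 u, \qquad
\frac{\partial v}{\partial t} = u - v,
$$

where u is the fast excitation variable, v is the slow recovery variable, and a, b and ε are model parameters controlling excitability.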
Dwight Barkley
[ "Chemistry", "Mathematics" ]
624
[ "Dynamical systems", "Dynamical systems theorists", "Fluid dynamicists", "Fluid dynamics" ]
52,247,710
https://en.wikipedia.org/wiki/Gosi%20Khurd%20Irrigation%20Project
The Gosi Khurd Irrigation Project, also known as the Indira Sagar Irrigation Project, is one of the major irrigation projects in the Godavari basin, located on the river Wainganga in the Bhandara district of the Indian state of Maharashtra. The project was launched under the 7th Five Year Plan by former Prime Minister Rajiv Gandhi in 1984. It has been declared a National Irrigation Project by the Government of India. The project involves the Gose Khurd Dam along with a network of canals, including a 99 km long right bank canal, a 22.93 km left bank canal, and lifting stations at Akot, Ambhora, Mokharbardi, Nerla and Tekepar. The project also included renovation of the Asolamendha dam on the river Pathari. The project is intended to irrigate 2.5 lakh hectares of land. It is designed to provide annual irrigation to an area of 89,856 ha in Bhandara district, 19,481 ha in Nagpur district and 141,463 ha in Chandrapur district. Project timelines The project was accepted by the technical advisory committee in 1988 with an estimated cost of INR 461.19 crore at the 1981–82 price level. The project was approved by the Forest and Environment department in 1988 and by the Planning Commission of India in 1995. The updated cost of the project, INR 7,777.85 crore at 2007–08 prices, was approved by the Planning Commission. The project has received funds under the Accelerated Irrigation Benefits Program (AIBP) since 1996. In 2009 it was declared a National Project. By 2009, 19,179 hectares of irrigation potential had been created, while the remainder will be created under the national project. Project-affected people started receiving the compensation promised by Chief Minister Prithviraj Chavan in 2013. Project hurdles and criticism A major criticism of the project involved environmental damage and damage to local livelihoods. Around 92 villages were affected by the project, giving a total of more than 1 lakh project-affected people (PAP). An organization of the project-affected inhabitants, named Gosikhurd Prakalpgrast Sangharsh Samiti, was formed. It demanded compensation as per the 1999 rehabilitation norms as well as the special benefits offered after the project was given national status. The Samiti demanded that, in addition to alternative agricultural land, displaced farmers should be provided basic civic amenities, INR 12 lakh in lieu of a government job, INR 1 lakh for building a cattle shed, and compensation for farm labour at the revised rates. Present Status The Gosi Khurd dam is almost complete and the Ambhora lifting station is complete. The work on the canal system is in progress. Salient Features References Irrigation projects Bhandara district
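As a quick arithmetic check on the figures above, the three district areas sum to approximately the stated 2.5 lakh hectares; the sketch below simply performs that addition.

```python
# Sum of the designed annual irrigation areas quoted in the article, checked
# against the stated overall target of 2.5 lakh (250,000) hectares.
bhandara_ha = 89_856
nagpur_ha = 19_481
chandrapur_ha = 141_463

total_ha = bhandara_ha + nagpur_ha + chandrapur_ha
print(total_ha)             # 250800
print(total_ha / 100_000)   # 2.508, i.e. roughly 2.5 lakh hectares
```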
Gosi Khurd Irrigation Project
[ "Engineering" ]
564
[ "Irrigation projects" ]
52,248,727
https://en.wikipedia.org/wiki/David%20Stuart%20%28structural%20biologist%29
Sir David Ian Stuart (born 8 December 1953) is a Medical Research Council Professor of Structural Biology at the Wellcome Trust Centre for Human Genetics at the University of Oxford where he is also a Fellow of Hertford College, Oxford. He is best known for his contributions to the X-ray crystallography of viruses, in particular for determining the structures of foot-and-mouth disease virus, bluetongue virus and the membrane-containing phages PRD1 (the first structure of an enveloped virus) and PM2. He is also director of Instruct and Life Sciences Director at Diamond Light Source. Education Stuart was born in 1953 in Lancashire. He was educated initially in Helmshore, Lancashire, and then in North Devon, at Barnstaple Grammar School. He studied Biophysics at King's College London, where he graduated with a BSc degree in 1974. He subsequently attended the University of Bristol and completed a PhD degree in the Biochemistry Department in 1979, working on the structure of the enzyme pyruvate kinase in the laboratory of Hilary Muirhead. Career and research Stuart moved to Oxford in 1979 and worked with Louise Johnson on the structure of the enzyme glycogen phosphorylase before moving in 1981 to work at the Institute of Biophysics in Beijing, China, with Liang Dong-Cai on insulin. Returning to Oxford in 1983 to work with Johnson he then in 1985 set up his own research group in the Laboratory of Molecular Biophysics, focused mainly on virus–receptor interactions and virus assembly. In 1999 Stuart led the establishment of the Division of Structural Biology, in the Nuffield Department of Medicine. Stuart has solved the atomic structures of complex biological molecules and viruses, including foot-and-mouth disease virus, bluetongue virus and the membrane-containing phages PRD1 (the first structure of an enveloped virus) and PM2. His structure of foot-and-mouth virus has assisted in the development of improved vaccines via structural vaccinology. He has also investigated the structure of the HIV reverse transcriptase protein, facilitating targeted drug design. Stuart also develops methods in structural biology and researches protein structure and evolution. Since 2008 Stuart has, as life science director, helped the development of the Diamond Light Source, the UK's synchrotron light source. His former doctoral students include Susan Lea. Honours and awards Stuart has received a number of awards and honours for his work on viral structure, including: Federation of European Biochemical Societies (FEBS) Anniversary Prize (1990) Fellow of the Royal Society (FRS, 1996) Descartes Prize (2002) Fellow of the Academy of Medical Sciences (FMedSci, 2006) Gregori Aminoff Prize with Stephen C. Harrison by the Royal Swedish Academy of Sciences (2006) European Crystallographic Association Max Perutz Prize (2007) Honorary fil.Dr.h.c. degree, University of Helsinki, Finland (2010) Honorary DSc degree, University of Leeds (2011) Honorary DSc degree, University of Bristol (2015) Premio Città di Firenze for Molecular Sciences – Award from CERM (2016) Knighted in the 2021 New Year Honours for services to medical research and the scientific community References 1953 births British biologists X-ray crystallography Fellows of the Royal Society Fellows of the Academy of Medical Sciences (United Kingdom) Alumni of King's College London Alumni of the University of Bristol Living people Knights Bachelor Alumni of the University of Oxford
David Stuart (structural biologist)
[ "Chemistry", "Materials_science" ]
701
[ "X-ray crystallography", "Crystallography" ]
52,251,139
https://en.wikipedia.org/wiki/Endoxifen
Endoxifen, also known as 4-hydroxy-N-desmethyltamoxifen, is a nonsteroidal selective estrogen receptor modulator (SERM) of the triphenylethylene group as well as a protein kinase C (PKC) inhibitor. It is under development for the treatment of estrogen receptor-positive breast cancer and for the treatment of mania in bipolar disorder. It is taken by mouth. Endoxifen is an active metabolite of tamoxifen and has been found to be effective in patients who have failed previous hormonal therapies (tamoxifen, aromatase inhibitors, and fulvestrant). The prodrug tamoxifen is metabolized by the CYP2D6 enzyme to produce endoxifen and afimoxifene (4-hydroxytamoxifen). Currently, endoxifen is approved by the Drugs Controller General of India for the acute treatment of manic episodes with or without mixed features in bipolar I disorder. It is manufactured and sold by Intas Pharmaceuticals under the brand name Zonalta. Medical uses Bipolar disorder Endoxifen is used to treat manic or mixed episodes associated with bipolar I disorder in India. It has been found that endoxifen improves manic as well as mixed-episode symptoms in patients with bipolar I disorder, and it has been considered an effective and well-tolerated treatment for this condition. Bipolar disorder is associated with overactive protein kinase C (PKC) intracellular signaling. To date, there have been three phases of clinical trials. In the phase III trial, endoxifen reduced the total Young Mania Rating Scale (YMRS) score from 33.1 to 17.8. A significant (p < 0.001) improvement in Montgomery–Åsberg Depression Rating Scale (MADRS) score was observed for endoxifen (4.8 to 2.5). Endoxifen was well tolerated by the subjects, as reflected in the changes in Clinical Global Impression–Severity of Illness scores. Side effects The most prevalent side effects of endoxifen include headache, vomiting, and insomnia. Other side effects include gastritis, epigastric discomfort, diarrhea, restlessness, and somnolence. Some of the adverse events reported with other therapies for the management of manic episodes of bipolar I disorder, such as reduction in platelet count and changes in blood thyroid-stimulating hormone levels, were not observed during the clinical development program of endoxifen. There were no deaths or serious or significant adverse events during the conduct of the trials. Overall, endoxifen was found to be well tolerated and safe in patients with bipolar I disorder experiencing acute manic episodes with or without mixed features. An important caveat is that the trial was of very short duration (only three weeks); the long-term safety of endoxifen has not been established among patients with bipolar disorder. Pharmacology Pharmacodynamics Selective estrogen receptor modulator Endoxifen is a selective estrogen receptor modulator (SERM) with estrogenic and antiestrogenic actions. In the first study to evaluate the pharmacology of endoxifen, it showed 25% of the affinity of estradiol for the estrogen receptor (ER), while afimoxifene had 35% of the affinity of estradiol for the ER. The antiestrogenic actions of endoxifen and afimoxifene in this study were very similar. In another study, the affinity of endoxifen for the ERα was 12.1% and its affinity for the ERβ was 4.75% relative to estradiol. For comparison, afimoxifene had relative binding affinities for the ERα and ERβ of 19.0% and 21.5% compared to estradiol, respectively.
In yet another investigation, both endoxifen and afimoxifene had 181% of the affinity of estradiol for the ER, whereas tamoxifen had 2.8% and N-desmethyltamoxifen had 2.4%. Protein kinase C inhibition The exact mechanism by which endoxifen exerts its therapeutic effects in bipolar I disorder has not been established. However, the efficacy of endoxifen could be mediated through protein kinase C (PKC). PKC represents a family of enzymes highly enriched in the brain, where it plays a major role in regulating both pre- and post-synaptic aspects of neurotransmission. Excessive activation of PKC results in symptoms related to bipolar disorder. The PKC signaling pathway is a target for the actions of two structurally dissimilar antimanic agents – lithium and valproate. Endoxifen exhibits 4-fold higher potency in inhibiting PKC activity compared to tamoxifen in preclinical studies and is not dependent on the isozyme cytochrome P450 2D6 (CYP2D6) for action on the target tissues. Pharmacokinetics Orally administered endoxifen is rapidly absorbed and systemically available. The time to peak (Tmax) is between 4.5 and 6 hours after oral administration. It is not metabolized by cytochrome P450 enzymes. The half-life (t½) of endoxifen is 52.1 to 58.1 hours. Research Endoxifen has been investigated as a potential drug in the treatment of breast cancer. References External links Endoxifen - AdisInsight Experimental drugs Hormonal antineoplastic drugs Human drug metabolites Phenol ethers Selective estrogen receptor modulators Triphenylethylenes Ethanolamines 4-Hydroxyphenyl compounds
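Given the half-life quoted above, the fraction of endoxifen remaining after a given interval follows from simple exponential decay; the sketch below is illustrative only, and the 24-hour interval is an assumed example rather than a dosing recommendation.

```python
# Illustrative exponential-decay calculation using the half-life quoted in the
# article (t1/2 ~ 52-58 h). The 24 h interval is an assumed example value.
def fraction_remaining(hours: float, half_life_h: float) -> float:
    return 0.5 ** (hours / half_life_h)

for t_half in (52.1, 58.1):
    frac = fraction_remaining(24, t_half)
    print(f"t1/2 = {t_half} h: ~{frac:.0%} of a dose remains after 24 h")
# With a half-life in this range, roughly 73-75% of a dose is still present a day later.
```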
Endoxifen
[ "Chemistry" ]
1,228
[ "Chemicals in medicine", "Human drug metabolites" ]
52,252,137
https://en.wikipedia.org/wiki/Ectopic%20decidua
Ectopic decidua refers to decidual cells found outside the inner lining of the uterus. This condition was first described in 1971 by Walker, and the name 'ectopic decidua' was coined by Tausig. While ectopic decidua is most commonly seen during pregnancy, it rarely occurs in non-pregnant people, in whom it may be accompanied by bleeding and pain. Generally, ectopic decidua has no clinical symptoms, but it sometimes manifests as abdominal pain in pregnancy. Ectopic decidua most commonly occurs in the ovary, cervix, and serosal lining of the uterus. It also occurs, rarely, in the peritoneum. In the peritoneum, ectopic decidua is formed by metaplasia of subserosal stromal cells under the influence of progesterone. It regresses within 4–6 weeks after childbirth, so no treatment is needed for this condition. However, it is necessary to differentiate deciduosis from metastatic cancers and mesothelioma. References Histopathology Human pregnancy
Ectopic decidua
[ "Chemistry" ]
227
[ "Histopathology", "Microscopy" ]
52,253,271
https://en.wikipedia.org/wiki/Richard%20S.%20Potember
Richard S. Potember is an American scientist and inventor. He is currently a principal systems engineer at MITRE. Prior to this he was a program manager in the Tactical Technology Office at the Defense Advanced Research Projects Agency (DARPA). He has been an instructor at the Whiting School of Engineering at Johns Hopkins University since 1987. He was a member of the principal professional staff at the Johns Hopkins Applied Physics Laboratory, Laurel, Maryland, from 1981 to 2015. He was an adjunct professor at The Paul H. Nitze School of Advanced International Studies from 1995 to 1998. Education Potember was born in Boston, Massachusetts. He completed his B.S. in chemistry at Merrimack College in 1975 and his Ph.D. in chemistry at Johns Hopkins University in 1979, where his adviser was Dwaine O. Cowan. He completed his postdoctoral fellowship at the Johns Hopkins Applied Physics Laboratory (APL) in 1980. He received an M.S. in technical management from the Whiting School of Engineering, Johns Hopkins University, in 1986. Research Potember was first known for his groundbreaking work in molecular electronics. He invented the first two-terminal molecular non-volatile memory, or memristor, as well as an optical disc technology that can store multiple bits of information at one location. He also co-invented a sol-gel processed switchable vanadium(IV) oxide thin film coating for energy conservation applications. Potember's recent achievements have focused on biotechnology and biomedical engineering. He performed pioneering work demonstrating that individual living nerve cells can be grown into controlled geometric patterns on substrates and that these neurons can form true synaptic connections. He has also worked on technology that can be used to destroy viruses, bacteria, and spores in real time in ventilated air and in heating or air-conditioning systems. Potember has also conducted research and development in the areas of time-of-flight mass spectrometry and solid propellants. Personal life Potember has two sons and lives with his wife in Maryland. References Year of birth missing (living people) Living people Scientists from Boston Whiting School of Engineering alumni Molecular electronics 20th-century American inventors 21st-century American businesspeople Mitre Corporation people Johns Hopkins University faculty Merrimack College alumni Biotechnologists American biomedical engineers
Richard S. Potember
[ "Chemistry", "Materials_science" ]
468
[ "Nanotechnology", "Molecular physics", "Molecular electronics" ]
52,256,432
https://en.wikipedia.org/wiki/A.%20P.%20B.%20Sinha
Akhoury Purnendu Bhusan Sinha (23 September 1928 – 4 July 2021) was an Indian solid state chemist who was the head of the Physical Chemistry Division of the National Chemical Laboratory, Pune. He is known for his theories on semiconductors and his studies on the synthesis of manganites. He was an elected fellow of the Indian National Science Academy and the Indian Academy of Sciences. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded Sinha the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 1972, for his contributions to chemical sciences. Biography A. P. B. Sinha, born on 27 December 1928, joined the University of London, from where he secured a PhD in 1954; his thesis was based on solid state chemistry. Later, he served at the National Chemical Laboratory, Pune, as a director's grade scientist and headed the Physical Chemistry division of the institution. Continuing his research on solid state chemistry, Sinha studied low-mobility semiconductors with respect to their electron transport, as well as crystal distortions caused by electron–lattice transitions, switching, magnetic ordering, and memory effects. He is known to have synthesized new manganites and reportedly developed a number of solid state products such as thermistors, photocells, magnets, and photovoltaic products. Based on his studies of electron–lattice interaction, Sinha proposed theories in support of the theory of ferroelectricity and developed new theories of thermoelectric power and mobility in semiconductors. His research is reported to have widened the understanding of conduction in semiconductors. The body of his published work is composed of one book, Spectroscopy in inorganic chemistry, chapters in the book A study of the growth and structure of layers of oxides, sulphides and related compounds, with special reference to the effect of temperature, edited by C. N. R. Rao, and several articles published in peer-reviewed journals. His work has been cited by several authors. Sinha was associated with journals such as the Bulletin of Materials Science and the Indian Journal of Pure and Applied Physics as a member of their editorial boards. The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 1972. Sinha was elected by the Indian Academy of Sciences as their fellow in 1974 before he became an elected fellow of the Indian National Science Academy in 1978. He was also an elected fellow of the Maharashtra Academy of Sciences and a recipient of the Meritorious Invention Award of the National Research Development Corporation, which he received in 1978. After his stint at NCL, Sinha migrated to the US and was associated with Morris Innovative Research. Sinha died in the United States on 4 July 2021, at the age of 92. Citations Selected bibliography Books Articles See also Manganite Ferroelectricity Notes References External links 1928 births 2021 deaths Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science Alumni of the University of London 20th-century Indian chemists Solid-state chemistry Indian physical chemists Fellows of the Indian Academy of Sciences Fellows of the Indian National Science Academy
A. P. B. Sinha
[ "Physics", "Chemistry", "Materials_science" ]
650
[ "Condensed matter physics", "nan", "Solid-state chemistry" ]
52,256,924
https://en.wikipedia.org/wiki/Forced%20convection%20in%20porous%20media
Forced convection is a type of heat transport in which fluid motion is generated by an external source such as a pump, fan, or suction device. Heat transfer through porous media is very effective and efficient. Forced convection heat transfer in a confined porous medium has been a subject of intensive study in recent decades because of its wide applications. The basic problem in heat convection through porous media consists of predicting the heat transfer rate between a differentially heated, solid impermeable surface and a fluid-saturated porous medium. A common starting case is a two-dimensional, steady-state system with constant wall temperature. With the flow through the medium described by Darcy's law, the quantity of interest is the thickness of the slender thermal layer, developing over a length x along the wall, across which the temperature makes the transition from the wall value to that of the surrounding saturated medium. Balancing the energy equation between enthalpy flow in the x direction and thermal diffusion in the y direction, and assuming the boundary layer is slender, yields this thickness. The Peclet number is a dimensionless number used in calculations involving convective heat transfer. It is the ratio of the thermal energy convected to the fluid to the thermal energy conducted within the fluid, that is, the advective transport rate divided by the diffusive transport rate. See also Darcy's law Nusselt Number Porous media Convective heat transfer Heat transfer coefficient References External links https://web.archive.org/web/20091211060057/http://www.me.ust.hk/~mezhao/pdf/20.PDF Convection Dimensionless numbers of fluid mechanics Fluid dynamics Heat transfer
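The dimensionless group and the boundary-layer estimate referred to above can be written out; the following is a minimal sketch assuming Darcy flow (uniform seepage velocity u) past an isothermal wall, with α_m the effective thermal diffusivity of the saturated medium and δ_T the thermal boundary-layer thickness; all of this notation is chosen here for illustration rather than taken from the text.

```latex
% Peclet number based on the position x along the wall
\mathrm{Pe}_x = \frac{u\,x}{\alpha_m}
  = \frac{\text{advective transport rate}}{\text{diffusive transport rate}}

% Scale balance of streamwise enthalpy flow against transverse diffusion,
%   u\,\Delta T / x \;\sim\; \alpha_m\,\Delta T / \delta_T^{2},
% gives a slender thermal layer and the corresponding heat-transfer scaling
\frac{\delta_T}{x} \sim \mathrm{Pe}_x^{-1/2},
\qquad
\mathrm{Nu}_x \sim \mathrm{Pe}_x^{1/2}.
```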
Forced convection in porous media
[ "Physics", "Chemistry", "Engineering" ]
307
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Chemical engineering", "Convection", "Thermodynamics", "Piping", "Fluid dynamics" ]
37,946,483
https://en.wikipedia.org/wiki/Silvia%20Braslavsky
Silvia Elsa Braslavsky (born April 5, 1942 in Buenos Aires) is an Argentine chemist. She is the daughter of educationist and biochemist Lázaro Braslavsky, and the sister of Cecilia Braslavsky, educationist and erstwhile director of the International Bureau of Education of UNESCO. She has two daughters, sociologist Paula-Irene Villa Braslavsky and Carolina Klockow. Braslavsky has worked extensively in the domain of photobiology and she is a specialist in experimental photooptoacoustics. She was senior research scientist and Professor at the Max Planck Institute for Radiation Chemistry (now renamed Bioinorganic Chemistry) until her retirement in 2007. Scientific career Braslavsky read chemistry at the University of Buenos Aires but left Argentina after the "night of the long batons". While working as a research assistant in Santiago de Chile, she defended her PhD at the University of Buenos Aires. Following temporary positions at Penn State University (1969-1972), the National University of Rio Cuarto, Argentina (1972-1975), again Penn State (1975) and the University of Alberta in Edmonton, Canada (1975), she moved to the Max Planck Institute for Radiation Chemistry in Mülheim, Germany (1976), where she stayed until her retirement in 2007. Functions Braslavsky holds numerous official positions in the scientific field of chemistry. Since 2000 she has been the chair of the IUPAC subcommittee on photochemistry. Since 2006 she has been a corresponding member of CONICET and a member of the international scientific advisory committee of INQUIMAE (Institute for Chemistry of Materials, Environment and Energy). She was chair and main organiser of the 16th International Conference on Photobiology, held in Cordoba, Argentina, in 2014. Since 2010 she has been a member of the representative panel of the RCAA (Red de Científicos Argentinos en Alemania, i.e. the Network of Argentine Scientists in Germany). Honours and awards This is a selection of her honours and awards: 1998 first woman to be awarded the Research Award of the American Society for Photobiology. 2004 Elhuyar-Goldschmidt prize of the Spanish and German chemical societies. 2008 first woman to be awarded a Doctor Honoris Causa from the Universitat Ramon Llull, Barcelona, Spain. 2011 “Raíces” Prize by the Minister of Science (MINCYT) in Argentina in recognition of her engagement in scientific cooperation between Argentina and Germany. 2013 Honorary Professor at Universidad Nacional de la Plata, Argentina, and distinguished visiting professor at Universidad Nacional de Cordoba, Argentina. 2016 Dr. honoris causa, Universidad de Buenos Aires, Argentina. 2017 European Society of Photobiology, Medal “for outstanding and sustained contributions to the science and promotion of Photobiology”. 2019 International Union on Photobiology, Finsen Medal for “Lifetime Achievement in Photochemistry and Photosensory Biology” (Barcelona). 2019 International Photoacoustic and Photothermal Association (IPPA), Senior Prize (Moscow). 2020 Corresponding Member (Académica) of the Argentine National Academy of Sciences (ANC, Córdoba, Argentina). 2020-21 European Photochemical Association (EPA), “Photochemistry Ambassador” for “Service to the Photochemical Community”, International Congress on Photochemistry, Geneva, 2021. The EPA established the award in 2018 to recognize outstanding service to the photochemistry/photophysics community. This prize is awarded every two years. 
Partial bibliography "Time-Resolved Photothermal and Photoacoustic Methods Applied to Photoinduced Processes in Solution", S.E. Braslavsky, G.E. Heibel, Chem. Rev. 92, 1381-1410 (1992). doi: 10.1021/cr00014a007 "Effect of Solvent on the Radiative Decay of Singlet Molecular Oxygen a(1Δg)", R.D. Scurlock, S. Nonell, S.E. Braslavsky, P.R. Ogilby, J. Phys. Chem. 99, 3521-3526 (1995). doi: 10.1021/j100011a019 '"Glossary of Terms Used in Photochemistry'", 3rd Version (IUPAC Recommendations 2006), S.E.Braslavsky, Pure Appl. Chem. 79, 293-461 (2007). doi:10.1351/pac200779030293 '"Glossary of Terms Used in Photocatalysis and Radiation Catalysis'" (IUPAC recommendations 2011) S.E. Braslavsky, A.M. Braun, A.E. Cassano, A.V. Emeline, M.I. Litter, L. Palmisano, V.N. Parmon, N. Serpone, Pure Appl. Chem. 83, 931-1014 (2011). doi:10.1351/PAC-REC-09-09-36 References External links Homepage at MPG Complete list of publications IUPAC Subcommittee on Photochemistry Homepage of 16th International Congress of Photobiology Argentine biochemists 1942 births Living people Women biochemists Argentine women scientists Argentine Jews Argentine people of Polish-Jewish descent Jewish chemists University of Buenos Aires alumni Scientists from Buenos Aires Argentine expatriates in Germany 20th-century Argentine women scientists 21st-century Argentine women scientists
Silvia Braslavsky
[ "Chemistry" ]
1,133
[ "Biochemists", "Women biochemists" ]
37,948,528
https://en.wikipedia.org/wiki/Uranium%20Mill%20Tailings%20Radiation%20Control%20Act
The Uranium Mill Tailings Radiation Control Act (1978) is a United States environmental law that amended the Atomic Energy Act of 1954 and authorized the Environmental Protection Agency to establish health and environmental standards for the stabilization, restoration, and disposal of uranium mill waste. Title 1 of the Act required the EPA to set environmental protection standards consistent with the Resource Conservation and Recovery Act, including groundwater protection limits; the Department of Energy to implement EPA standards and provide perpetual care for some sites; and the Nuclear Regulatory Commission to review cleanups and license sites to states or the DOE for perpetual care. Title 1 established a uranium mill remedial action program jointly funded by the federal government and the state. Title 1 of the Act also designated 22 inactive uranium mill sites for remediation, resulting in the containment of 40 million cubic yards of low-level radioactive material in UMTRCA Title 1 holding cells. Limitations The act was written in the "hectic final days" of the 95th U.S. Congress and contained multiple errors that made it "a nightmare of statutory construction," and required remedial legislation to fix. The act perpetuated the "Agreement State" program, established in 1959, in which the Atomic Energy Commission gave regulatory authority of certain nuclear materials to states. It was unclear how much regulatory power Agreement states had, and as a result these states took little regulatory action. Sites that were owned by the federal government, the NRC, or Agreement states were ineligible for remedial action under the UMTRCA, as they were instead the responsibility of the government agencies or states who owned them. See also Uranium Mill Tailings Remedial Action Uranium mining References External links Waste legislation in the United States Radioactive waste 1978 in American law United States federal energy legislation 95th United States Congress 1978 in the environment
Uranium Mill Tailings Radiation Control Act
[ "Chemistry", "Technology" ]
364
[ "Environmental impact of nuclear power", "Radioactive waste", "Hazardous waste", "Radioactivity" ]
49,686,737
https://en.wikipedia.org/wiki/Landau%E2%80%93Yang%20theorem
In quantum mechanics, the Landau–Yang theorem is a selection rule for particles that decay into two on-shell photons. The theorem states that a massive particle with spin 1 cannot decay into two photons. Assumptions A photon here is any particle with spin 1, without mass and without internal degrees of freedom. The photon is the only known particle with these properties. Consequences The theorem has several consequences in particle physics. For example: The ρ meson cannot decay into two photons, differently from the neutral pion, that almost always decays into this final state (98.8% of times). The Z boson cannot decay into two photons. The Higgs boson, whose decay into two photons was observed in 2012, cannot have spin 1 in models that assume the Landau–Yang theorem. Measurements taken in 2013 have since confirmed that the Higgs has spin 0. Original references Additional references Theorems in quantum mechanics Lev Landau
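The standard argument behind the selection rule can be sketched in a few lines; the notation below (polarization vectors ε1, ε2 and relative momentum k in the decay rest frame) is supplied here for illustration and is not taken from the article.

```latex
% Decay rest frame: photons back-to-back with momenta \pm\vec{k} and transverse
% polarizations \vec{\epsilon}_1, \vec{\epsilon}_2 \ (\vec{\epsilon}_i\cdot\vec{k}=0).
% An amplitude for a spin-1 parent must be a vector, linear in each polarization;
% transversality leaves only two candidate structures:
\vec{A}_1 = \vec{\epsilon}_1 \times \vec{\epsilon}_2 ,
\qquad
\vec{A}_2 = (\vec{\epsilon}_1 \cdot \vec{\epsilon}_2)\,\vec{k} .
% Bose symmetry requires invariance under exchanging the two photons, i.e.
% \vec{\epsilon}_1 \leftrightarrow \vec{\epsilon}_2 together with \vec{k}\to-\vec{k};
% both candidates change sign under this exchange, so no allowed amplitude exists
% and the two-photon decay of a massive spin-1 particle is forbidden.
```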
Landau–Yang theorem
[ "Physics", "Mathematics" ]
192
[ "Theorems in quantum mechanics", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
49,690,041
https://en.wikipedia.org/wiki/Wild%20problem
In the mathematical areas of linear algebra and representation theory, a problem is wild if it contains the problem of classifying pairs of square matrices up to simultaneous similarity. Examples of wild problems include classifying the indecomposable representations of any quiver that is neither a Dynkin quiver (i.e., the underlying undirected graph of the quiver is a (finite) Dynkin diagram) nor a Euclidean quiver (i.e., the underlying undirected graph of the quiver is an affine Dynkin diagram). Necessary and sufficient conditions have been proposed to check the simultaneous block triangularization and diagonalization of a finite set of matrices, under the assumption that each matrix is diagonalizable over the field of complex numbers. See also Semi-invariant of a quiver References Linear algebra Representation theory
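For concreteness, the benchmark problem named in the definition can be written out; the notation below (n × n complex matrices and an invertible matrix S) is standard but chosen here rather than quoted from the article.

```latex
% Simultaneous similarity of pairs of n x n matrices: the benchmark "wild" problem
(A, B) \sim (A', B')
\iff
\exists\, S \in \mathrm{GL}_n(\mathbb{C}) : \;
A' = S A S^{-1} \ \text{and}\ B' = S B S^{-1} .
% By contrast, classifying a single matrix up to similarity is solved
% by the Jordan normal form, so that problem is tame.
```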
Wild problem
[ "Mathematics" ]
165
[ "Linear algebra", "Representation theory", "Fields of abstract algebra", "Algebra" ]
49,694,191
https://en.wikipedia.org/wiki/Lactimidomycin
Lactimidomycin is a glutarimide antibiotic derived from the bacteria Streptomyces amphibiosporus. It has antifungal, antiviral and anti-cancer properties, acting as a direct inhibitor of protein translation in ribosomes. Antiviral activity is seen against a variety of RNA viruses including flaviviruses such as dengue fever, Kunjin virus and Modoc virus, as well as vesicular stomatitis virus and poliovirus. As lactimidomycin is a natural product containing an unusual unsaturated 12-membered lactone ring, it has been the subject of numerous total synthesis approaches. References Dengue fever Antiviral drugs Glutarimides Lactones
Lactimidomycin
[ "Biology" ]
160
[ "Antiviral drugs", "Biocides" ]
49,694,376
https://en.wikipedia.org/wiki/NT5C1A
5'-nucleotidase, cytosolic IA is a protein that in humans is encoded by the NT5C1A gene. Function Cytosolic nucleotidases, such as NT5C1A, dephosphorylate nucleoside monophosphates. References Further reading Proteins
NT5C1A
[ "Chemistry" ]
70
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
49,700,795
https://en.wikipedia.org/wiki/Constant-recursive%20sequence
In mathematics, an infinite sequence of numbers is called constant-recursive if it satisfies an equation of the form for all , where are constants. The equation is called a linear recurrence relation. The concept is also known as a linear recurrence sequence, linear-recursive sequence, linear-recurrent sequence, or a C-finite sequence. For example, the Fibonacci sequence , is constant-recursive because it satisfies the linear recurrence : each number in the sequence is the sum of the previous two. Other examples include the power of two sequence , where each number is the sum of twice the previous number, and the square number sequence . All arithmetic progressions, all geometric progressions, and all polynomials are constant-recursive. However, not all sequences are constant-recursive; for example, the factorial sequence is not constant-recursive. Constant-recursive sequences are studied in combinatorics and the theory of finite differences. They also arise in algebraic number theory, due to the relation of the sequence to polynomial roots; in the analysis of algorithms, as the running time of simple recursive functions; and in the theory of formal languages, where they count strings up to a given length in a regular language. Constant-recursive sequences are closed under important mathematical operations such as term-wise addition, term-wise multiplication, and Cauchy product. The Skolem–Mahler–Lech theorem states that the zeros of a constant-recursive sequence have a regularly repeating (eventually periodic) form. The Skolem problem, which asks for an algorithm to determine whether a linear recurrence has at least one zero, is an unsolved problem in mathematics. Definition A constant-recursive sequence is any sequence of integers, rational numbers, algebraic numbers, real numbers, or complex numbers (written as as a shorthand) satisfying a formula of the form for all for some fixed coefficients ranging over the same domain as the sequence (integers, rational numbers, algebraic numbers, real numbers, or complex numbers). The equation is called a linear recurrence with constant coefficients of order d. The order of the sequence is the smallest positive integer such that the sequence satisfies a recurrence of order d, or for the everywhere-zero sequence. The definition above allows eventually-periodic sequences such as and . Some authors require that , which excludes such sequences. Examples Fibonacci and Lucas sequences The sequence 0, 1, 1, 2, 3, 5, 8, 13, ... of Fibonacci numbers is constant-recursive of order 2 because it satisfies the recurrence with . For example, and . The sequence 2, 1, 3, 4, 7, 11, ... of Lucas numbers satisfies the same recurrence as the Fibonacci sequence but with initial conditions and . More generally, every Lucas sequence is constant-recursive of order 2. Arithmetic progressions For any and any , the arithmetic progression is constant-recursive of order 2, because it satisfies . Generalizing this, see polynomial sequences below. Geometric progressions For any and , the geometric progression is constant-recursive of order 1, because it satisfies . This includes, for example, the sequence 1, 2, 4, 8, 16, ... as well as the rational number sequence . Eventually periodic sequences A sequence that is eventually periodic with period length is constant-recursive, since it satisfies for all , where the order is the length of the initial segment including the first repeating block. Examples of such sequences are 1, 0, 0, 0, ... (order 1) and 1, 6, 6, 6, ... (order 2). 
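A standard way to write the defining recurrence and the examples just mentioned, with notation (s_n for the terms, c_1, …, c_d for the coefficients) chosen here rather than quoted from the article, is the following.

```latex
% Order-d linear recurrence with constant coefficients
s_n = c_1\, s_{n-1} + c_2\, s_{n-2} + \cdots + c_d\, s_{n-d}
\qquad \text{for all } n \ge d .

% Example: the Fibonacci numbers satisfy the order-2 recurrence
F_n = F_{n-1} + F_{n-2}, \qquad F_0 = 0, \; F_1 = 1 ,
% and the powers of two satisfy the order-1 recurrence  a_n = 2\,a_{n-1} .
```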
Polynomial sequences A sequence defined by a polynomial is constant-recursive. The sequence satisfies a recurrence of order (where is the degree of the polynomial), with coefficients given by the corresponding element of the binomial transform. The first few such equations are for a degree 0 (that is, constant) polynomial, for a degree 1 or less polynomial, for a degree 2 or less polynomial, and for a degree 3 or less polynomial. A sequence obeying the order-d equation also obeys all higher order equations. These identities may be proved in a number of ways, including via the theory of finite differences. Any sequence of integer, real, or complex values can be used as initial conditions for a constant-recursive sequence of order . If the initial conditions lie on a polynomial of degree or less, then the constant-recursive sequence also obeys a lower order equation. Enumeration of words in a regular language Let be a regular language, and let be the number of words of length in . Then is constant-recursive. For example, for the language of all binary strings, for the language of all unary strings, and for the language of all binary strings that do not have two consecutive ones. More generally, any function accepted by a weighted automaton over the unary alphabet over the semiring (which is in fact a ring, and even a field) is constant-recursive. Other examples The sequences of Jacobsthal numbers, Padovan numbers, Pell numbers, and Perrin numbers are constant-recursive. Non-examples The factorial sequence is not constant-recursive. More generally, every constant-recursive function is asymptotically bounded by an exponential function (see #Closed-form characterization) and the factorial sequence grows faster than this. The Catalan sequence is not constant-recursive. This is because the generating function of the Catalan numbers is not a rational function (see #Equivalent definitions). Equivalent definitions In terms of matrices |-style="text-align:center;" | A sequence is constant-recursive of order less than or equal to if and only if it can be written as where is a vector, is a matrix, and is a vector, where the elements come from the same domain (integers, rational numbers, algebraic numbers, real numbers, or complex numbers) as the original sequence. Specifically, can be taken to be the first values of the sequence, the linear transformation that computes from , and the vector . In terms of non-homogeneous linear recurrences |- class="wikitable" ! Non-homogeneous !! Homogeneous |- align = "center" | | |- align = "center" | | A non-homogeneous linear recurrence is an equation of the form where is an additional constant. Any sequence satisfying a non-homogeneous linear recurrence is constant-recursive. This is because subtracting the equation for from the equation for yields a homogeneous recurrence for , from which we can solve for to obtain In terms of generating functions |-style="text-align:center;" | A sequence is constant-recursive precisely when its generating function is a rational function , where and are polynomials and . Moreover, the order of the sequence is the minimum such that it has such a form with and . The denominator is the polynomial obtained from the auxiliary polynomial by reversing the order of the coefficients, and the numerator is determined by the initial values of the sequence: where It follows from the above that the denominator must be a polynomial not divisible by (and in particular nonzero). 
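The matrix characterization above can be made concrete with a short computation; in the sketch below the helper names are invented for illustration, and the coefficients encode the Fibonacci recurrence.

```python
import numpy as np

# Fibonacci as the running example: s_n = s_{n-1} + s_{n-2}, s_0 = 0, s_1 = 1.
coeffs  = [1, 1]
initial = [0, 1]

# Companion matrix A and vectors u, v realizing  s_n = u . A^n . v,
# as in the matrix characterization above (names chosen here, not from the article).
d = len(coeffs)
A = np.zeros((d, d), dtype=np.int64)
A[0, :] = coeffs                               # recurrence coefficients in the top row
A[np.arange(1, d), np.arange(0, d - 1)] = 1    # subdiagonal shifts the state vector
v = np.array(initial[::-1], dtype=np.int64)    # state (s_{d-1}, ..., s_0)
u = np.eye(d, dtype=np.int64)[-1]              # picks out the last component, s_n

matrix_terms = [u @ np.linalg.matrix_power(A, n) @ v for n in range(10)]

# Cross-check against the recurrence written out directly.
direct = list(initial)
while len(direct) < 10:
    direct.append(sum(c * direct[-1 - i] for i, c in enumerate(coeffs)))

print(matrix_terms)                              # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(direct == [int(t) for t in matrix_terms])  # True
```

One practical payoff of this form is that exponentiation by squaring of the companion matrix gives the n-th term in O(d³ log n) arithmetic operations.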
In terms of sequence spaces |-align=center | A sequence is constant-recursive if and only if the set of sequences is contained in a sequence space (vector space of sequences) whose dimension is finite. That is, is contained in a finite-dimensional subspace of closed under the left-shift operator. This characterization is because the order- linear recurrence relation can be understood as a proof of linear dependence between the sequences for . An extension of this argument shows that the order of the sequence is equal to the dimension of the sequence space generated by for all . Closed-form characterization |-align=center | Constant-recursive sequences admit the following unique closed form characterization using exponential polynomials: every constant-recursive sequence can be written in the form for all , where The term is a sequence which is zero for all (where is the order of the sequence); The terms are complex polynomials; and The terms are distinct complex constants. This characterization is exact: every sequence of complex numbers that can be written in the above form is constant-recursive. For example, the Fibonacci number is written in this form using Binet's formula: where is the golden ratio and . These are the roots of the equation . In this case, , for all , are both constant polynomials, , and . The term is only needed when ; if then it corrects for the fact that some initial values may be exceptions to the general recurrence. In particular, for all . The complex numbers are the roots of the characteristic polynomial of the recurrence: whose coefficients are the same as those of the recurrence. We call the characteristic roots of the recurrence. If the sequence consists of integers or rational numbers, the roots will be algebraic numbers. If the roots are all distinct, then the polynomials are all constants, which can be determined from the initial values of the sequence. If the roots of the characteristic polynomial are not distinct, and is a root of multiplicity , then in the formula has degree . For instance, if the characteristic polynomial factors as , with the same root r occurring three times, then the th term is of the form Closure properties Examples The sum of two constant-recursive sequences is also constant-recursive. For example, the sum of and is (), which satisfies the recurrence . The new recurrence can be found by adding the generating functions for each sequence. Similarly, the product of two constant-recursive sequences is constant-recursive. For example, the product of and is (), which satisfies the recurrence . The left-shift sequence and the right-shift sequence (with ) are constant-recursive because they satisfy the same recurrence relation. For example, because is constant-recursive, so is . List of operations In general, constant-recursive sequences are closed under the following operations, where denote constant-recursive sequences, are their generating functions, and are their orders, respectively. The closure under term-wise addition and multiplication follows from the closed-form characterization in terms of exponential polynomials. The closure under Cauchy product follows from the generating function characterization. The requirement for Cauchy inverse is necessary for the case of integer sequences, but can be replaced by if the sequence is over any field (rational, algebraic, real, or complex numbers). Behavior Zeros Despite satisfying a simple local formula, a constant-recursive sequence can exhibit complicated global behavior. 
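The zero sets discussed next can be illustrated with a short computation; the order-2 example below is chosen here for illustration and does not come from the article.

```python
# An order-2 example chosen for illustration: s_n = s_{n-1} - s_{n-2},
# with s_0 = 0, s_1 = 1.  Its characteristic polynomial is x^2 - x + 1,
# whose roots are primitive 6th roots of unity, so the sequence has period 6.
s = [0, 1]
for _ in range(40):
    s.append(s[-1] - s[-2])

zeros = [n for n, x in enumerate(s) if x == 0]
print(s[:12])   # [0, 1, 1, 0, -1, -1, 0, 1, 1, 0, -1, -1]
print(zeros)    # 0, 3, 6, 9, ... : a periodic set, consistent with the
                # Skolem-Mahler-Lech theorem's description of zero sets
```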
Define a zero of a constant-recursive sequence to be a nonnegative integer such that . The Skolem–Mahler–Lech theorem states that the zeros of the sequence are eventually repeating: there exists constants and such that for all , if and only if . This result holds for a constant-recursive sequence over the complex numbers, or more generally, over any field of characteristic zero. Decision problems The pattern of zeros in a constant-recursive sequence can also be investigated from the perspective of computability theory. To do so, the description of the sequence must be given a finite description; this can be done if the sequence is over the integers or rational numbers, or even over the algebraic numbers. Given such an encoding for sequences , the following problems can be studied: Because the square of a constant-recursive sequence is still constant-recursive (see closure properties), the existence-of-a-zero problem in the table above reduces to positivity, and infinitely-many-zeros reduces to eventual positivity. Other problems also reduce to those in the above table: for example, whether for some reduces to existence-of-a-zero for the sequence . As a second example, for sequences in the real numbers, weak positivity (is for all ?) reduces to positivity of the sequence (because the answer must be negated, this is a Turing reduction). The Skolem-Mahler-Lech theorem would provide answers to some of these questions, except that its proof is non-constructive. It states that for all , the zeros are repeating; however, the value of is not known to be computable, so this does not lead to a solution to the existence-of-a-zero problem. On the other hand, the exact pattern which repeats after is computable. This is why the infinitely-many-zeros problem is decidable: just determine if the infinitely-repeating pattern is empty. Decidability results are known when the order of a sequence is restricted to be small. For example, the Skolem problem is decidable for algebraic sequences of order up to 4. It is also known to be decidable for reversible integer sequences up to order 7, that is, sequences that may be continued backwards in the integers. Decidability results are also known under the assumption of certain unproven conjectures in number theory. For example, decidability is known for rational sequences of order up to 5 subject to the Skolem conjecture (also known as the exponential local-global principle). Decidability is also known for all simple rational sequences (those with simple characteristic polynomial) subject to the Skolem conjecture and the weak p-adic Schanuel conjecture. Degeneracy Let be the characteristic roots of a constant recursive sequence . We say that the sequence is degenerate if any ratio is a root of unity, for . It is often easier to study non-degenerate sequences, in a certain sense one can reduce to this using the following theorem: if has order and is contained in a number field of degree over , then there is a constant such that for some each subsequence is either identically zero or non-degenerate. Generalizations A D-finite or holonomic sequence is a natural generalization where the coefficients of the recurrence are allowed to be polynomial functions of rather than constants. A -regular sequence satisfies a linear recurrences with constant coefficients, but the recurrences take a different form. 
Rather than being a linear combination of for some integers that are close to , each term in a -regular sequence is a linear combination of for some integers whose base- representations are close to that of . Constant-recursive sequences can be thought of as -regular sequences, where the base-1 representation of consists of copies of the digit . Notes References External links OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients) Combinatorics Dynamical systems Integer sequences Linear algebra Recurrence relations
Constant-recursive sequence
[ "Physics", "Mathematics" ]
3,157
[ "Sequences and series", "Discrete mathematics", "Integer sequences", "Mathematical structures", "Recurrence relations", "Algebra", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Mechanics", "Mathematical relations", "Linear algebra", "Numbers", "Number theory", "Dy...
48,450,465
https://en.wikipedia.org/wiki/Human%20Enhancement
Human Enhancement (2009) is a non-fiction book edited by philosopher Nick Bostrom and philosopher and bioethicist Julian Savulescu. Savulescu and Bostrom write about the ethical implications of human enhancement and to what extent it is worth striving towards. References 2009 non-fiction books Ethics books Transhumanist books Works by Nick Bostrom Edited volumes Transhumanism
Human Enhancement
[ "Technology", "Engineering", "Biology" ]
83
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
48,451,473
https://en.wikipedia.org/wiki/Phagraphene
Phagraphene [fæ’græfiːn] is a proposed graphene allotrope composed of 5-6-7 carbon rings. Phagraphene was proposed in 2015 based on systematic evolutionary structure searching. Theoretical calculations showed that phagraphene is not only dynamically and thermally stable, but also has distorted Dirac cones. The direction-dependent cones are robust against external strain and have tuneable Fermi velocities. Higher-energy allotropes named Haeckelites contain penta-, hexa-, and hepta-carbon rings. Three types (rectangular, oblique and hexagonal) were proposed as early as 2000. These metastable allotropes show trivial intrinsic metallic behavior. Phagraphene is predicted to have a potential energy of 193.2 kcal/mol. The bond order is 1.33, the same as for graphene. PHA/graphene An unrelated material called PHA/graphene is a polyhydroxyalkanoate/graphene composite. References Graphene Allotropes of carbon
Phagraphene
[ "Chemistry" ]
228
[ "Allotropes of carbon", "Allotropes" ]
55,142,158
https://en.wikipedia.org/wiki/Ewingite
Ewingite is a mineral discovered by Jakub Plášil of the Institute of Physics at the Academy of Sciences of the Czech Republic in the Plavno mine, Czech Republic. Travis Olds of the University of Notre Dame and colleagues described ewingite, which is the most structurally complex known mineral on Earth. Ewingite is named in honor of Rodney C. Ewing, Professor of Geological Sciences at Stanford University, USA. The mineral is rare, due to its very narrow pH and compositional range required for formation, which are only known to occur in the Plavno mine. Ewingite forms through uranium oxidation occurring in the humid environment of the mine. The mineral is chemically similar to rabbittite, swartzite, and albrechtschraufite. The type specimen of ewingite has been placed in the mineralogy collections at the Natural History Museum of Los Angeles County. Localities Czech Republic : Plavno mine, Jáchymov District, Krušné Hory Mountains, Karlovy Vary Region, Bohemia References Calcium minerals Uranium(VI) minerals Carbonate minerals Uranyl compounds Tetragonal minerals
Ewingite
[ "Chemistry" ]
226
[ "Hydrate minerals", "Hydrates" ]
55,144,051
https://en.wikipedia.org/wiki/Land%20of%20Manu
In ancient Egyptian religion, the land of Manu (the West) is where the sun god Ra sets every evening. It is mentioned in the Book of the Dead. References Locations in Egyptian mythology Sun myths Ra
Land of Manu
[ "Astronomy" ]
44
[ "Astronomical myths", "Sun myths" ]
43,590,607
https://en.wikipedia.org/wiki/Bimetric%20gravity
Bimetric gravity or bigravity refers to two different classes of theories. The first class of theories relies on modified mathematical theories of gravity (or gravitation) in which two metric tensors are used instead of one. The second metric may be introduced at high energies, with the implication that the speed of light could be energy-dependent, enabling models with a variable speed of light. If the two metrics are dynamical and interact, a first possibility implies two graviton modes, one massive and one massless; such bimetric theories are then closely related to massive gravity. Several bimetric theories with massive gravitons exist, such as those attributed to Nathan Rosen (1909–1995) or Mordehai Milgrom with relativistic extensions of Modified Newtonian Dynamics (MOND). More recently, developments in massive gravity have also led to new consistent theories of bimetric gravity. Though none has been shown to account for physical observations more accurately or more consistently than the theory of general relativity, Rosen's theory has been shown to be inconsistent with observations of the Hulse–Taylor binary pulsar. Some of these theories lead to cosmic acceleration at late times and are therefore alternatives to dark energy. Bimetric gravity is also at odds with measurements of gravitational waves emitted by the neutron-star merger GW170817. On the contrary, the second class of bimetric gravity theories does not rely on massive gravitons and does not modify Newton's law, but instead describes the universe as a manifold having two coupled Riemannian metrics, where matter populating the two sectors interact through gravitation (and antigravitation if the topology and the Newtonian approximation considered introduce negative mass and negative energy states in cosmology as an alternative to dark matter and dark energy). Some of these cosmological models also use a variable speed of light in the high energy density state of the radiation-dominated era of the universe, challenging the inflation hypothesis. Rosen's bigravity (1940 to 1989) In general relativity (GR), it is assumed that the distance between two points in spacetime is given by the metric tensor. Einstein's field equation is then used to calculate the form of the metric based on the distribution of energy and momentum. In 1940, Rosen proposed that at each point of space-time, there is a Euclidean metric tensor in addition to the Riemannian metric tensor . Thus at each point of space-time there are two metrics: The first metric tensor, , describes the geometry of space-time and thus the gravitational field. The second metric tensor, , refers to the flat space-time and describes the inertial forces. The Christoffel symbols formed from and are denoted by and respectively. Since the difference of two connections is a tensor, one can define the tensor field given by: Two kinds of covariant differentiation then arise: -differentiation based on (denoted by a semicolon, e.g. ), and covariant differentiation based on (denoted by a slash, e.g. ). Ordinary partial derivatives are represented by a comma (e.g. ). Let and be the Riemann curvature tensors calculated from and , respectively. In the above approach the curvature tensor is zero, since is the flat space-time metric. A straightforward calculation yields the Riemann curvature tensor Each term on the right hand side is a tensor. 
It is seen that from GR one can go to the new formulation just by replacing {:} by and ordinary differentiation by covariant -differentiation, by , integration measure by , where , and . Having once introduced into the theory, one has a great number of new tensors and scalars at one's disposal. One can set up other field equations other than Einstein's. It is possible that some of these will be more satisfactory for the description of nature. The geodesic equation in bimetric relativity (BR) takes the form It is seen from equations () and () that can be regarded as describing the inertial field because it vanishes by a suitable coordinate transformation. Being the quantity a tensor, it is independent of any coordinate system and hence may be regarded as describing the permanent gravitational field. Rosen (1973) has found BR satisfying the covariance and equivalence principle. In 1966, Rosen showed that the introduction of the space metric into the framework of general relativity not only enables one to get the energy momentum density tensor of the gravitational field, but also enables one to obtain this tensor from a variational principle. The field equations of BR derived from the variational principle are where or with , and is the energy-momentum tensor. The variational principle also leads to the relation . Hence from () , which implies that in a BR, a test particle in a gravitational field moves on a geodesic with respect to Rosen continued improving his bimetric gravity theory with additional publications in 1978 and 1980, in which he made an attempt "to remove singularities arising in general relativity by modifying it so as to take into account the existence of a fundamental rest frame in the universe." In 1985 Rosen tried again to remove singularities and pseudo-tensors from General Relativity. Twice in 1989 with publications in March and November Rosen further developed his concept of elementary particles in a bimetric field of General Relativity. It is found that the BR and GR theories differ in the following cases: propagation of electromagnetic waves the external field of a high density star the behaviour of intense gravitational waves propagating through a strong static gravitational field. The predictions of gravitational radiation in Rosen's theory have been shown since 1992 to be in conflict with observations of the Hulse–Taylor binary pulsar. Massive bigravity Since 2010 there has been renewed interest in bigravity after the development by Claudia de Rham, Gregory Gabadadze, and Andrew Tolley (dRGT) of a healthy theory of massive gravity. Massive gravity is a bimetric theory in the sense that nontrivial interaction terms for the metric can only be written down with the help of a second metric, as the only nonderivative term that can be written using one metric is a cosmological constant. In the dRGT theory, a nondynamical "reference metric" is introduced, and the interaction terms are built out of the matrix square root of . In dRGT massive gravity, the reference metric must be specified by hand. One can give the reference metric an Einstein–Hilbert term, in which case is not chosen but instead evolves dynamically in response to and possibly matter. This massive bigravity was introduced by Fawad Hassan and Rachel Rosen as an extension of dRGT massive gravity. The dRGT theory is crucial to developing a theory with two dynamical metrics because general bimetric theories are plagued by the Boulware–Deser ghost, a possible sixth polarization for a massive graviton. 
The dRGT potential is constructed specifically to render this ghost nondynamical, and as long as the kinetic term for the second metric is of the Einstein–Hilbert form, the resulting theory remains ghost-free. The action for the ghost-free massive bigravity is given by As in standard general relativity, the metric has an Einstein–Hilbert kinetic term proportional to the Ricci scalar and a minimal coupling to the matter Lagrangian , with representing all of the matter fields, such as those of the Standard Model. An Einstein–Hilbert term is also given for . Each metric has its own Planck mass, denoted and respectively. The interaction potential is the same as in dRGT massive gravity. The are dimensionless coupling constants and (or specifically ) is related to the mass of the massive graviton. This theory propagates seven degrees of freedom, corresponding to a massless graviton and a massive graviton (although the massive and massless states do not align with either of the metrics). The interaction potential is built out of the elementary symmetric polynomials of the eigenvalues of the matrices or , parametrized by dimensionless coupling constants or , respectively. Here is the matrix square root of the matrix . Written in index notation, is defined by the relation The can be written directly in terms of as where brackets indicate a trace, . It is the particular antisymmetric combination of terms in each of the which is responsible for rendering the Boulware–Deser ghost nondynamical. See also Alternatives to general relativity DGP model Scalar–tensor theory References Theories of gravity General relativity
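The pieces of the ghost-free bimetric action described in this section have standard explicit forms in the literature; the sketch below follows one common Hassan–Rosen convention (signs and the mass prefactor differ between papers) and is supplied here for illustration rather than reproduced from this article, with [ · ] denoting the matrix trace as above.

```latex
% Schematic ghost-free bimetric action (one common convention)
S = -\tfrac{1}{2} M_g^2 \!\int\! d^4x \,\sqrt{-g}\, R(g)
    -\tfrac{1}{2} M_f^2 \!\int\! d^4x \,\sqrt{-f}\, R(f)
    + m^2 M_g^2 \!\int\! d^4x \,\sqrt{-g} \sum_{n=0}^{4} \beta_n\, e_n\!\big(\mathbb{K}\big)
    + \int\! d^4x \,\sqrt{-g}\, \mathcal{L}_m(g, \Phi),
\qquad \mathbb{K} = \sqrt{g^{-1} f}.

% Elementary symmetric polynomials of the 4x4 matrix \mathbb{K}:
e_0(\mathbb{K}) = 1, \qquad e_1(\mathbb{K}) = [\mathbb{K}], \qquad
e_2(\mathbb{K}) = \tfrac{1}{2}\big([\mathbb{K}]^2 - [\mathbb{K}^2]\big),
e_3(\mathbb{K}) = \tfrac{1}{6}\big([\mathbb{K}]^3 - 3[\mathbb{K}][\mathbb{K}^2] + 2[\mathbb{K}^3]\big),
\qquad
e_4(\mathbb{K}) = \det \mathbb{K}.
```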
Bimetric gravity
[ "Physics" ]
1,749
[ "Theories of gravity", "General relativity", "Theoretical physics", "Theory of relativity" ]
43,593,074
https://en.wikipedia.org/wiki/Lasing%20without%20inversion
Lasing without inversion (LWI), or lasing without population inversion, is a technique used for light amplification by stimulated emission without the requirement of population inversion. A laser working under this scheme exploits the quantum interference between the probability amplitudes of atomic transitions in order to eliminate absorption without disturbing the stimulated emission. This phenomenon is also the essence of electromagnetically induced transparency. The basic LWI concept was first predicted by Ali Javan in 1956. The first demonstration of LWI was carried out by Marlan Scully in an experiment in rubidium and sodium at Texas A&M University, and then at NIST in Boulder. References Quantum optics
Lasing without inversion
[ "Physics" ]
132
[ "Quantum optics", "Quantum mechanics", "Quantum physics stubs" ]
43,597,028
https://en.wikipedia.org/wiki/Head-in-pillow%20defect
In the assembly of integrated circuit packages to printed circuit boards, a head-in-pillow defect (HIP or HNP), also called ball-and-socket, is a failure of the soldering process. For example, in the case of a ball grid array (BGA) package, the pre-deposited solder ball on the package and the solder paste applied to the circuit board may both melt, but the melted solder does not join. A cross-section through the failed joint shows a distinct boundary between the solder ball on the part and the solder paste on the circuit board, rather like a section through a head resting on a pillow. The defect can be caused by surface oxidation or poor wetting of the solder, or by distortion of the integrated circuit package or circuit board by the heat of the soldering process. This is particularly a concern when using lead-free solder, which requires a higher processing temperature. The defect can be attributed to a chain of events during soldering. Initially, the ball is in contact with solder paste. During heating, the board and components undergo thermal expansion, can flex, and some of the balls can be lifted off the paste. Oxidation occurs rapidly at elevated temperature, and when the surfaces come in contact again, the residual flux activity may not be sufficient to disrupt the oxide layer. The solder paste composition, eg. flux with higher activation temperature, together with the wetting characteristics of the solder ball, are the most significant mitigation factors. Since the warping of the circuit board or integrated circuit may disappear when the board cools, an intermittent fault may be created. Diagnosis of head-in-pillow defects may require use of X-rays or EOTPR (Electro Optical Terahertz Pulse Reflectometry), since the solder joints are hidden between the integrated circuit package and the printed circuit board. References Further reading Electronics manufacturing Engineering failures Metallurgy Soldering defects Soldering
Head-in-pillow defect
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
403
[ "Systems engineering", "Reliability engineering", "Metallurgy", "Technological failures", "Materials science", "Engineering failures", "Electronic engineering", "Civil engineering", "Soldering defects", "nan", "Electronics manufacturing" ]
56,580,107
https://en.wikipedia.org/wiki/DiRAC
Distributed Research using Advanced Computing (DiRAC) is an integrated supercomputing facility used for research in particle physics, astronomy and cosmology in the United Kingdom. DiRAC makes use of multi-core processors and provides a variety of computer architectures for use by the research community. Development DiRAC: DiRAC was initially funded in 2009 with an investment of £12 million from the Government of the United Kingdom's Large Facilities Capital Fund combined with funds from the Science and Technology Facilities Council (STFC) and a consortium of universities in the UK. DiRAC II: In 2012, the DiRAC facility was upgraded with a further £15 million of UK government capital to create DiRAC II which had five installations. DiRAC-3: was launched in 2021, with three services offered at four sites: Data intensive service, jointly hosted by the universities of Cambridge (part share in the "Cumulus" HPC platform) and Leicester (Data Intensive 3 and Data Intensive 2.5x supercomputers) Memory intensive service, hosted by Durham University at the Institute for Computational Cosmology (Memory Intensive 3 "COSMA8" and Memory Intensive 2.5 "COSMA7" supercomputers) Extreme scaling service, hosted by the University of Edinburgh (Extreme Scaling "Tursa" supercomputer) Paul Dirac "DiRAC" is a backronym which honours the theoretical physicist and Nobel laureate Paul Dirac. References College and university associations and consortia in the United Kingdom Information technology organisations based in the United Kingdom Science and Technology Facilities Council Supercomputers
DiRAC
[ "Technology" ]
323
[ "Supercomputers", "Supercomputing" ]
30,054,770
https://en.wikipedia.org/wiki/Biham%E2%80%93Middleton%E2%80%93Levine%20traffic%20model
The Biham–Middleton–Levine traffic model is a self-organizing cellular automaton traffic flow model. It consists of a number of cars represented by points on a lattice with a random starting position, where each car may be one of two types: those that only move downwards (shown as blue in this article), and those that only move towards the right (shown as red in this article). The two types of cars take turns to move. During each turn, all the cars for the corresponding type advance by one step if they are not blocked by another car. It may be considered the two-dimensional analogue of the simpler Rule 184 model. It is possibly the simplest system exhibiting phase transitions and self-organization. History The Biham–Middleton–Levine traffic model was first formulated by Ofer Biham, A. Alan Middleton, and Dov Levine in 1992. Biham et al found that as the density of traffic increased, the steady-state flow of traffic suddenly went from smooth flow to a complete jam. In 2005, Raissa D'Souza found that for some traffic densities, there is an intermediate phase characterized by periodic arrangements of jams and smooth flow. In the same year, Angel, Holroyd and Martin were the first to rigorously prove that for densities close to one, the system will always jam. Later, in 2006, Tim Austin and Itai Benjamini found that for a square lattice of side N, the model will always self-organize to reach full speed if there are fewer than N/2 cars. Lattice space The cars are typically placed on a square lattice that is topologically equivalent to a torus: that is, cars that move off the right edge would reappear on the left edge; and cars that move off the bottom edge would reappear on the top edge. There has also been research in rectangular lattices instead of square ones. For rectangles with coprime dimensions, the intermediate states are self-organized bands of jams and free-flow with detailed geometric structure, that repeat periodically in time. In non-coprime rectangles, the intermediate states are typically disordered rather than periodic. Phase transitions Despite the simplicity of the model, it has two highly distinguishable phases – the jammed phase, and the free-flowing phase. For low numbers of cars, the system will usually organize itself to achieve a smooth flow of traffic. In contrast, if there is a high number of cars, the system will become jammed to the extent that no single car will move. Typically, in a square lattice, the transition density is when there are around 32% as many cars as there are possible spaces in the lattice. Intermediate phase The intermediate phase occurs close to the transition density, combining features from both the jammed and free-flowing phases. There are principally two intermediate phases – disordered (which could be meta-stable) and periodic (which are provably stable). On rectangular lattices with coprime dimensions, only periodic orbits exist. In 2008 periodic intermediate phases were also observed in square lattices. Yet, on square lattices disordered intermediate phases are more frequently observed and tend to dominate densities close to the transition region. Rigorous analysis Despite the simplicity of the model, rigorous analysis is very nontrivial. Nonetheless, there have been mathematical proofs regarding the Biham–Middleton–Levine traffic model. Proofs so far have been restricted to the extremes of traffic density. 
In 2005, Alexander Holroyd et al proved that for densities sufficiently close to one, the system will have no cars moving infinitely often. In 2006, Tim Austin and Itai Benjamini proved that the model will always reach the free-flowing phase if the number of cars is less than half the edge length for a square lattice. Non-orientable surfaces The model is typically studied on the orientable torus, but it is possible to implement the lattice on a Klein bottle. When the red cars reach the right edge, they reappear on the left edge except flipped vertically; the ones at the bottom are now at the top, and vice versa. More formally, for every , a red car exiting the site would enter the site . It is also possible to implement it on the real projective plane. In addition to flipping the red cars, the same is done for the blue cars: for every , a blue car exiting the site would enter the site . The behaviour of the system on the Klein bottle is much more similar to the one on the torus than the one on the real projective plane. For the Klein bottle setup, the mobility as a function of density starts to decrease slightly sooner than in the torus case, although the behaviour is similar for densities greater than the critical point. The mobility on the real projective plane decreases more gradually for densities from zero to the critical point. In the real projective plane, local jams may form at the corners of the lattice even though the rest of the lattice is free-flowing. Randomization A randomized variant of the BML traffic model, called BML-R, was studied in 2010. Under periodic boundaries, instead of updating all cars of the same colour at once during each step, the randomized model performs updates (where is the side length of the presumably square lattice): each time, a random cell is selected and, if it contains a car, it is moved to the next cell if possible. In this case, the intermediate state observed in the usual BML traffic model does not exist, due to the non-deterministic nature of the randomized model; instead the transition from the jammed phase to the free flowing phase is sharp. Under open boundary conditions, instead of having cars that drive off one edge wrapping around the other side, new cars are added on the left and top edges with probability and removed from the right and bottom edges respectively. In this case, the number of cars in the system can change over time, and local jams can cause the lattice to appear very different from the usual model, such as having coexistence of jams and free-flowing areas; containing large empty spaces; or containing mostly cars of one type. References External links CUDA implementation by Daniel Lu WebGL implementation by Jason Davies JavaScript implementation by Maciej Baron Cellular automaton rules Lattice models Articles containing video clips Traffic flow
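Because the update rule is so simple, the basic model is easy to simulate directly. The following Python sketch (the function names and the 0/1/2 cell encoding are chosen here, not taken from the article) implements alternating synchronous updates on a square torus.

```python
import numpy as np

def bml_step(grid, step):
    """One turn of the Biham-Middleton-Levine model on a torus.
    Eastward (red, 1) cars move on even turns, southward (blue, 2) cars on odd turns;
    a car advances only if the cell ahead of it is empty (0)."""
    if step % 2 == 0:                          # red cars move one cell to the right
        ahead = np.roll(grid, -1, axis=1)      # ahead[i, j] = grid[i, j+1] (wraps)
        movers = (grid == 1) & (ahead == 0)
        grid[movers] = 0
        grid[np.roll(movers, 1, axis=1)] = 1
    else:                                      # blue cars move one cell down
        ahead = np.roll(grid, -1, axis=0)
        movers = (grid == 2) & (ahead == 0)
        grid[movers] = 0
        grid[np.roll(movers, 1, axis=0)] = 2
    return grid

def random_grid(n, density, seed=0):
    """Square n x n lattice with the given total car density, half red and half blue."""
    rng = np.random.default_rng(seed)
    cells = rng.random((n, n))
    grid = np.zeros((n, n), dtype=np.int8)
    grid[cells < density / 2] = 1
    grid[(cells >= density / 2) & (cells < density)] = 2
    return grid

grid = random_grid(64, 0.30)        # close to, but below, the reported ~32% transition
for t in range(2000):
    grid = bml_step(grid, t)
```

Tracking the fraction of cars that actually move on each turn of such a run should reproduce the qualitative picture described above: near-complete mobility at low densities and a collapse toward zero mobility once the density exceeds the transition region.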
Biham–Middleton–Levine traffic model
[ "Physics", "Materials_science" ]
1,297
[ "Statistical mechanics", "Condensed matter physics", "Lattice models", "Computational physics" ]
30,057,479
https://en.wikipedia.org/wiki/Nanogenerator
A nanogenerator is a compact device that converts mechanical or thermal energy into electricity, serving to harvest energy for small, wireless autonomous devices. It uses ambient energy sources like solar, wind, thermal differentials, and kinetic energy. Nanogenerators can use ambient background energy in the environment, such as temperature gradients from machinery operation, electromagnetic energy, or even vibrations from motions. Energy harvesting from the environment has a very long history, dating back to early devices such as watermills, windmills and later hydroelectric plants. More recently there has been interest in smaller systems. While there was some work in the 1980s on implantable piezoelectric devices, more devices were developed in the 1990s including ones based upon the piezoelectric effect, electrostatic forces, thermoelectric effect and electromagnetic induction—see Beeby et al for a 2006 review. Very early on it was recognized that these could use energy sources such as from walking in shoes, and could have important medical applications, be used for in vivo MEMS devices or be used to power wearable computing. Many more recent systems have built onto this work, for instance triboelectric generators, bistable systems, pyroelectric materials and continuing work on piezoelectric systems as well as those described in more general overviews including applications in wireless electronic devices and other areas. There are three classes of nanogenerators: piezoelectric, triboelectric, both of which convert mechanical energy into electricity, and pyroelectric nanogenerators, which convert heat energy into electricity. Piezoelectric nanogenerator A piezoelectric nanogenerator is an energy-harvesting device capable of converting external kinetic energy into electrical energy via action by a nano-structured piezoelectric material. It is generally used to indicate kinetic energy harvesting devices utilizing nano-scaled piezoelectric material, like in thin-film bulk acoustic resonators. Mechanism The working principle of the nanogenerator will be explained in two different cases: the force exerted perpendicular to and parallel to the axis of the nanowire. When a piezoelectric structure is subjected to the external force of the moving tip, deformation occurs throughout the structure. The piezoelectric effect will create an electrical field inside the nanostructure; the stretched part with the positive strain will exhibit positive electrical potential, whereas the compressed part with negative strain will show the negative electrical potential. This is due to the relative displacement of cations with respect to anions in their crystalline structure. As a result, the tip of the nanowire will have an electrical potential distribution on its surface, while the bottom of the nanowire is neutralized since it is grounded. The maximum voltage generated in the nanowire can be calculated using the following equation: , where κ0 is the permittivity in vacuum, κ is the dielectric constant, e33, e15, and e31 are the piezoelectric coefficients, ν is the Poisson ratio, a is the radius of the nanowire, l is the length of the nanowire, and νmax is the maximum deflection of the nanowire's tip. The Schottky contact must be formed between the counter electrode and the tip of the nanowire since the ohmic contact will neutralize the electrical field generated at the tip. ZnO nanowire with an electron affinity of 4.5 eV, Pt (φ = 6.1 eV), is a metal sometimes used to construct the Schottky contact. 
By constructing the Schottky contact, the electrons will pass to the counter electrode from the surface of the tip when the counter electrode is in contact with the regions of the negative potential, whereas no current will be generated when it is in contact with the regions of the positive potential, in the case of the n-type semiconductive nanostructure (the p-type semiconductive structure will exhibit the reversed phenomenon since the hole is mobile in this case). For the second case, a model with a vertically grown nanowire stacked between the ohmic contact at its bottom and the Schottky contact at its top is considered. When the force is applied toward the tip of the nanowire, the uniaxial compressive force is generated in the nanowire. Due to the piezoelectric effect, the tip of the nanowire will have a negative piezoelectric potential, increasing the Fermi level at the tip. Since the electrons will then flow from the tip to the bottom through the external circuit, positive electrical potential will be generated at the tip. The Schottky contact will stop electrons from being transported through the interface, therefore maintaining the potential at the tip. As the force is removed, the piezoelectric effect diminishes, and the electrons will be flowing back to the top in order to neutralize the positive potential at the tip. The second case will generate an alternating-current output signal. Geometrical configuration Depending on the configuration of the piezoelectric nanostructure, the nanogenerator can be categorized into 3 types: VING, LING, and NEG. Vertical nanowire Integrated Nanogenerator (VING) VING is a 3-dimensional configuration consisting of a stack of 3 layers, which are the base electrode, the vertically grown piezoelectric nanostructure, and the counter electrode. The piezoelectric nanostructure is usually grown on the base electrode, which is then integrated with the counter electrode in full or partial mechanical contact with its tip. The first VING was developed in 2007 with a counter electrode with the periodic surface grating resembling the arrays of the AFM tip as a moving electrode. Since the counter electrode is not in full contact with the tips of the piezoelectric nanowire, its motion in-plane or out-of-plane caused by the external vibration induces the deformation of the piezoelectric nanostructure, leading to the generation of the electrical potential distribution inside each individual nanowire. The counter electrode is coated with metal, forming a Schottky contact with the tip of the nanowire. Zhong Lin Wang's group has generated counter electrodes composed of ZnO nanorods. Sang-Woo Kim's group at Sungkyunkwan University (SKKU) and Jae-Young Choi's group at Samsung Advanced Institute of Technology (SAIT) introduced a bowl-shaped transparent counter electrode by combining anodized aluminum and electroplating technology. They have also developed the other type of counter electrode by using networked single-walled carbon nanotube (SWNT). Lateral nanowire Integrated Nanogenerator (LING) LING is a 2-dimensional configuration consisting of three parts: the base electrode, the laterally grown piezoelectric nanostructure, and the metal electrode for schottky contact. In most cases, the thickness of the substrate film is thicker than the diameter of the piezoelectric nanostructure. LING is an expansion of the single wire generator (SWG). 
Nanocomposite Electrical Generators (NEG) NEG is a 3-dimensional configuration consisting of three main parts: the metal plate electrodes, the vertically grown piezoelectric nanostructure, and the polymer matrix, which fills in between the piezoelectric nanostructure. NEG was introduced by Momeni et al. A fabric-like geometrical configuration has been suggested where a piezoelectric nanowire is grown vertically on the two microfibers in their radial direction, and they are twined to form a nanogenerator. One of the microfibers is coated with the metal to form a Schottky contact, serving as the counter electrode for VINGs. Materials Among the various piezoelectric materials studied for the nanogenerator, much of the research has focused on materials with a wurtzite structure, such as ZnO, CdS and GaN. Zhong Lin Wang of the Georgia Institute of Technology introduced p-type ZnO nanowires. Unlike the n-type semiconductive nanostructure, the mobile particle in the p-type is a hole, thus, the schottky behavior is reversed from that of the n-type case; the electrical signal is generated from the portion of the nanostructure where the holes are accumulated. From the idea that the material with a perovskite structure is known to have more effective piezoelectric characteristics compared to that with a wurtzite structure, barium titanate nanowire has also been studied by Min-Feng Yu of the University of Illinois at Urbana-Champaign. The output signal was found to be more than 16 times that of a similar ZnO nanowire. Liwei Lin of the University of California, Berkeley, has suggested that PVDF can also be applied to form a nanogenerator. A comparison of the reported materials as of 2010 is given in the following table: Applications In 2010, the Zhong Lin Wang group developed a self-powered pH or UV sensor integrated with VING with an output voltage of 20–40mV on the sensor. Zhong Lin Wang's group has also generated an alternating current voltage of up to 100mV from the flexible SWG attached to a device for running hamster. Some of the piezoelectric nanostructure can be formed on various kinds of substrates, such as transparent organic substrates. The research groups in SKKU (Sang-Woo Kim's group) and SAIT (Jae-Young Choi's group) have developed a transparent and flexible nanogenerator. Their research substituted an indium-tin-oxide (ITO) electrode with a graphene layer. Triboelectric nanogenerator A triboelectric nanogenerator is an energy-harvesting device that converts mechanical energy into electricity using the triboelectric effect. They were first demonstrated by Zhong Lin Wang's group at the Georgia Institute of Technology in 2012. Ever since the first report of the TENG in January 2012, the output power density of the TENG has improved, reaching 313W/m2, the volume density reaches 490 kW/m3, and conversion efficiencies of ~60%–72% have been demonstrated. Ramakrishna Podila's group at Clemson University also demonstrated the first truly wireless triboelectric nanogenerators, which were able to charge energy storage devices (e.g., batteries and capacitors) without the need for any external amplification or boosters. Basic modes and mechanisms The triboelectric nanogenerator has three basic operation modes: vertical contact-separation mode, in-plane sliding mode, and single-electrode mode. They have different characteristics and are suitable for different applications. 
Vertical contact-separation mode The periodic change in the potential difference induced by the cycled separation and re-contact of the opposite triboelectric charges on the inner surfaces of the two sheets. When mechanical agitation is applied to the device to bend or press it, the inner surfaces will come into close contact, leaving one side of the surface with positive charges and the other with negative charges. When the deformation is released, the two surfaces with opposite charges will separate automatically, so that these opposite triboelectric charges will generate an electric field and induce a potential difference across the top and bottom electrodes. The electrons will flow from one electrode to the other through the external load. The electricity generated in this process will continue until the potentials of the two electrodes are the same. Subsequently, when the two sheets are pressed towards each other again, the triboelectric-charge-induced potential difference will begin to decrease to zero, so that the transferred charges will flow back through the external load to generate another current pulse in the opposite direction. When this periodic mechanical deformation lasts, the alternating current signals will be continuously generated. As for the pair of materials getting into contact and generating triboelectric charges, at least one of them needs to be an insulator so that the triboelectric charges cannot be conducted away but will remain on the inner surface of the sheet. Lateral sliding mode There are two basic friction processes: normal contact and lateral sliding. One TENG is designed based on the in-plane sliding between the two surfaces in a lateral direction. With triboelectrification from sliding, a periodic change in the contact area between two surfaces leads to a lateral separation of the charge centers, which creates a voltage driving the flow of electrons in the external load. The mechanism of in-plane charge separation can work in either one-directional sliding between two plates or in rotation mode. Single-electrode mode A single-electrode-based triboelectric nanogenerator is introduced as a more practical design for some applications, such as fingertip-driven triboelectric nanogenerators. According to the triboelectric series, electrons were injected from the skin into the PDMS since the PDMS is more triboelectrically negative than the skin. When negative triboelectric charges on the PDMS are fully screened from the induced positive charges on the ITO electrode by increasing the separation distance between the PDMS and skin, no output signals can be observed. Applications TENG is a physical process of converting mechanical agitation to an electric signal through triboelectrification (in the inner circuit) and electrostatic induction processes (in the outer circuit). Harvesting vibration energy might be used to power mobile electronics. TENG has been demonstrated for harvesting ambient vibration energy based on the contact-separation mode. A three-dimensional triboelectric nanogenerator (3D-TENG) has been designed based on a hybridization mode of conjunction between the vertical contact-separation mode and the in-plane sliding mode. In 2013, Zhonglin Wang's group reported a rotary triboelectric nanogenerator for harvesting wind energy. 
Subsequently, various types of triboelectric nanogenerators for harvesting ambient energy have been proposed, like 3D spiral structure triboelectric nanogenerators to collect wave energy, fully enclosed triboelectric nanogenerators applied in water and harsh environments, and multi-layered disk nanogenerators for harvesting hydropower. However, due to the limitations of the nanogenerator's working models, the friction generated between layers of the triboelectric nanogenerator will reduce the energy conversion efficiency and the durability of the device. Researchers have designed an all-weather droplet-based triboelectric nanogenerator that relies on the contact electrification effect between liquid and solid to generate electricity. Self-powered motion sensors The term "self-powered sensors" can refer to a system that powers all the electronics responsible for measuring detectable movement. For example, the self-powered triboelectric encoder, integrated into a smart belt-pulley system, converts friction into usable electrical energy by storing the harvested energy in a capacitor and fully powering the circuit, which includes a microcontroller and an LCD. Pyroelectric nanogenerator A pyroelectric nanogenerator is an energy-harvesting device that converts external thermal energy into electrical energy by using nano-structured pyroelectric materials. The pyroelectric effect is about the spontaneous polarization in certain anisotropic solids as a result of temperature fluctuation. The first pyroelectric nanogenerator was introduced by Zhong Lin Wang at the Georgia Institute of Technology in 2012. Mechanism The working principle of a pyroelectric nanogenerator can be explained by the primary pyroelectric effect and the secondary pyroelectric effect. The primary pyroelectric effect describes the charge produced in a strain-free case. The primary pyroelectric effect dominates the pyroelectric response in PZT, BTO, and some other ferroelectric materials. The mechanism is based on the thermally induced random wobbling of the electric dipole around its equilibrium axis, the magnitude of which increases with increasing temperature. Due to thermal fluctuations at room temperature, the electric dipoles will randomly oscillate within a degree from their respective aligning axes. Under a fixed temperature, the spontaneous polarization from the electric dipoles is constant. If the temperature in the nanogenerator changes from room temperature to a higher temperature, it will result in the electric dipoles oscillating within a larger degree of spread around their respective aligning axes. The quantity of induced charges in the electrodes is thus reduced, resulting in a flow of electrons. If the nanogenerator is cooled, the electric dipoles oscillate within a smaller degree of spread angle due to the lower thermal activity. In the second case, the obtained pyroelectric response is explained by the secondary pyroelectric effect, which describes the charge produced by the strain induced by thermal expansion. The secondary pyroelectric effect dominates the pyroelectric response in ZnO, CdS, and some other wurzite-type materials. The thermal deformation can induce a piezoelectric potential difference across the material, which can drive the electrons to flow in the external circuit. 
Applications In 2012, Zhong Lin Wang used a pyroelectric nanogenerator as a self-powered temperature sensor for detecting a change in temperature, where the response time and reset time of the sensor are about 0.9 and 3 s, respectively. See also Battery (electricity) Electrical generator Microelectromechanical systems Micropower Nanoelectromechanical systems References External links Professor Z. L. Wang's Nano Research Group at Georgia Institute of Technology Professor Sang-Woo Kim's Group at Yonsei University Laboratory for Nanoscale Mechanics and Physics at University of Illinois, Urbana-Champaign LINLAB at University of California, Berkeley Samsung Advanced Institute of Technology Engines Microtechnology Nanoelectronics
Nanogenerator
[ "Physics", "Materials_science", "Technology", "Engineering" ]
3,721
[ "Machines", "Microtechnology", "Engines", "Materials science", "Physical systems", "Nanoelectronics", "Nanotechnology" ]
30,068,177
https://en.wikipedia.org/wiki/Borlaug%20Global%20Rust%20Initiative
The Borlaug Global Rust Initiative (BGRI - originally named the Global Rust Initiative) was founded in response to recommendations of a committee of international experts who met to consider a response to the threat posed to the global food supply by the Ug99 strain of wheat rust. The BGRI was renamed the Borlaug Global Rust Initiative in honor of Green Revolution pioneer and Nobel Peace Prize Laureate Dr. Norman Borlaug, who worked to establish and lead the Global Rust Initiative. The BGRI has the overarching objective of systematically reducing the world’s vulnerability to stem, yellow, and leaf rusts of wheat, and of advocating and facilitating the evolution of a sustainable international system to contain the threat of wheat rusts and continue the enhancements in productivity required to withstand future global threats to wheat. Executive committee Chair: Jeanie Borlaug Laube Permanent Members Ronnie Coffman, Cornell University, Vice Chairman of BGRI Bram Govaerts, Director General, CIMMYT Himanshu Pathak, Director General, Indian Council of Agricultural Research Aly Abousabaa, Director General, ICARDA Rémi Nono Womdim, Deputy Director, Plant Production and Protection Division, FAO Rotating Members John Manners, Director, CSIRO Agriculture David Wall, Acting Director Research, Development and Technology, Agriculture and Agri-Food Canada Huajin Tang, VP for International Collaboration, China Academy of Agricultural Sciences Lene Lange, Director of Research, Aalborg University, Denmark Fentahun Mengistu, Director General, Ethiopian Institute for Agricultural Research Abd El Moneam El Banna, President, Egyptian Agricultural Research Center Eskander Zand, Deputy Minister and Head, Agricultural Research, Education and Extension Organization, Iran Eliud Kireger, Director General, Kenya Agricultural and Livestock Research Organization Masum Burak, Director General of the General Directorate of Agricultural Research, Turkey Iftikhar Ahmad, Chairman, Pakistan Agricultural Research Council Jose Costa, Deputy Administrator, Crop Production and Protection, USDA-ARS Alvaro Roel, President, INIA, Uruguay References External links Borlaug Global Rust Initiative website Wheat diseases Phytopathology Agronomy Plant breeding Genetic engineering and agriculture
Borlaug Global Rust Initiative
[ "Chemistry", "Engineering", "Biology" ]
452
[ "Plant breeding", "Genetic engineering and agriculture", "Genetic engineering", "Molecular biology" ]
33,925,900
https://en.wikipedia.org/wiki/Hydrodynamic%20seal
A hydrodynamic seal is a type of mechanical seal. It uses a dynamic rotor with grooves that act as a pump and create an air film on which the opposing sealing surface rides. A hydrodynamic seal performs better than a hydrostatic seal by providing greater film stiffness, lower leakage and lower lift-off speeds. Hydrodynamic seals have a variety of applications in multiple industries. A large number of groove designs have been proposed and tested. Some types of hydrodynamic grooves include: Spiral Groove Wave V Grooves U Grooves Double V Grooves Seals (mechanical)
Hydrodynamic seal
[ "Physics", "Engineering" ]
128
[ "Seals (mechanical)", "Materials", "Mechanical engineering", "Mechanical engineering stubs", "Matter" ]
33,926,747
https://en.wikipedia.org/wiki/Evolutionary%20aesthetics
Evolutionary aesthetics refers to evolutionary psychology theories in which the basic aesthetic preferences of Homo sapiens are argued to have evolved in order to enhance survival and reproductive success. Based on this theory, things like color preference, preferred mate body ratios, shapes, emotional ties with objects, and many other aspects of the aesthetic experience can be explained with reference to human evolution. Aesthetics and evolutionary psychology Many animal and human traits have been argued to have evolved in order to enhance survival and reproductive success. Evolutionary psychology extends this to psychological traits including aesthetical preferences. Such traits are generally seen as being adaptations to the environment during the Pleistocene era and are not necessarily adaptative in our present environment. Examples include disgust of potentially harmful spoiled foods; pleasure from sex and from eating sweet and fatty foods; and fear of spiders, snakes, and the dark. All known cultures have some form of art. This universality suggests that art is related to evolutionary adaptations. The strong emotions associated with art suggest the same. Landscape and other visual arts preferences An important choice for a mobile organism is selecting a good habitat to live in. Humans are argued to have strong aesthetical preferences for landscapes which were good habitats in the ancestral environment. When young human children from different nations are asked to select which landscape they prefer, from a selection of standardized landscape photographs, there is a strong preference for savannas with trees. The East African savanna is the ancestral environment in which much of human evolution is argued to have taken place. There is also a preference for landscapes with water, with both open and wooded areas, with trees with branches at a suitable height for climbing and taking foods, with features encouraging exploration such as a path or river curving out of view, with seen or implied game animals, and with some clouds. These are all features that are often featured in calendar art and in the design of public parks. A survey of art preferences in many different nations found that realistic painting was preferred. Favorite features were water, trees as well as other plants, humans (in particular beautiful women, children, and well-known historical figures), and animals (in particular both wild and domestic large animals). Blue, followed by green, was the favorite color. Using the survey, the study authors constructed a painting showing the preferences of each nation. Despite the many different cultures, the paintings all showed a strong similarity to landscape calendar art. The authors argued that this similarity was in fact due to the influence of the Western calendar industry. Another explanation is that these features are those evolutionary psychology predicts should be popular for evolutionary reasons. Physical attractiveness Various evolutionary concerns have been argued to influence what is perceived to be physically attractive. Such evolutionary based preferences are not necessarily static but may vary depending on environmental cues. Thus, availability of food influences which female body size is attractive which may have evolutionary reasons. Societies with food scarcities prefer larger female body size than societies having plenty of food. In Western society males who are hungry prefer a larger female body size than they do when not hungry. 
Mate selection An important adaptive function of courtship seems to be the selection of a mating partner with characteristics that would likely optimize reproductive success (selection for fitness). Such features include particular male or female characteristics that have aesthetic appeal to the opposite sex. Sexual selection tends to give rise to competition between individuals of the same gender. Darwin regarded such competition as having molded numerous aspects of animal behavior. Darwin particularly emphasized the striking evolution of aesthetic display in male birds. He also considered that a similar process had occurred in humans leading, for example, to the evolution of female beauty and sweeter voice and, in males, to the beard. Evolutionary musicology Evolutionary musicology is a subfield of biomusicology that grounds the psychological mechanisms of music perception and production in evolutionary theory. It covers vocal communication in non-human animal species, theories of the evolution of human music, and cross-cultural human universals in musical ability and processing. It also includes evolutionary explanations for what is considered aesthetically pleasing or not. Darwinian literary studies Darwinian Literary Studies (aka Literary Darwinism) is a branch of literary criticism that studies literature, including aesthetical aspects, in the context of evolution. Evolution of emotion Aesthetics are tied to emotions. There are several explanations regarding the evolution of emotion. One example is the emotion of disgust which has been argued to have evolved in order to avoid several harmful actions such as infectious diseases due to contact with spoiled foods, feces, and decaying bodies. Sexy son hypothesis, handicap principle, and arts The sexy son hypothesis suggests that a female’s optimal choice among potential mates is a male whose genes will produce male offspring with the best chance of reproductive success by having trait(s) being attractive to other females. Sometimes the trait may have no reproductive benefit in itself, apart from attracting females, because of Fisherian runaway. The peacock's tail may be one example. It has also been seen as an example of the handicap principle. It has been argued that the ability of the human brain by far exceeds what is needed for survival on the savanna. One explanation could be that the human brain and associated traits (such as artistic ability and creativity) are the equivalent of the peacock's tail for humans. According to this theory superior execution of art was important because it attracted mates. References Sociobiology Human evolution Evolutionary psychology Movements in aesthetics de:Evolutionäre Ästhetik
Evolutionary aesthetics
[ "Biology" ]
1,083
[ "Behavioural sciences", "Behavior", "Sociobiology" ]
33,937,103
https://en.wikipedia.org/wiki/Niven%27s%20theorem
In mathematics, Niven's theorem, named after Ivan Niven, states that the only rational values of θ in the interval 0° ≤ θ ≤ 90° for which the sine of θ degrees is also a rational number are θ = 0, 30, and 90, giving sin θ° = 0, 1/2, and 1 respectively. In radians, one would require that 0 ≤ x ≤ π/2, that x/π be rational, and that sin x be rational. The conclusion is then that the only such values are sin 0 = 0, sin π/6 = 1/2, and sin π/2 = 1. The theorem appears as Corollary 3.12 in Niven's book on irrational numbers. The theorem extends to the other trigonometric functions as well. For rational values of θ, the only rational values of the sine or cosine are 0, ±1/2, and ±1; the only rational values of the secant or cosecant are ±1 and ±2; and the only rational values of the tangent or cotangent are 0 and ±1. History Niven's proof of his theorem appears in his book Irrational Numbers. Earlier, the theorem had been proven by D. H. Lehmer and J. M. H. Olmstead. In his 1933 paper, Lehmer proved the theorem for the cosine by proving a more general result. Namely, Lehmer showed that for relatively prime integers k and n with 0 < k < n, the number 2cos(2πk/n) is an algebraic number of degree φ(n)/2 for n > 2, where φ denotes Euler's totient function. Because rational numbers have degree 1, we must have n ≤ 2 or φ(n)/2 = 1, and therefore the only possibilities are n = 1, 2, 3, 4, or 6. Next, he proved a corresponding result for the sine using the trigonometric identity sin θ = cos(θ − 90°). In 1956, Niven extended Lehmer's result to the other trigonometric functions. Other mathematicians have given new proofs in subsequent years. See also Pythagorean triples form right triangles where the trigonometric functions will always take rational values, though the acute angles are not rational. Trigonometric functions Trigonometric number References Further reading External links Rational numbers Trigonometry Theorems in geometry Theorems in algebra
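For reference, the statement of the theorem can be written compactly in LaTeX; this is a sketch of the standard formulation in degrees, not a quotation from Niven's book.

```latex
\[
  \theta \in \mathbb{Q},\quad 0 \le \theta \le 90,\quad \sin(\theta^{\circ}) \in \mathbb{Q}
  \;\Longrightarrow\;
  \sin(\theta^{\circ}) \in \bigl\{0,\ \tfrac12,\ 1\bigr\},
  \qquad \text{attained at } \theta = 0,\ 30,\ 90 .
\]
```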
Niven's theorem
[ "Mathematics" ]
385
[ "Mathematical theorems", "Theorems in algebra", "Geometry", "Theorems in geometry", "Mathematical problems", "Algebra" ]
28,554,158
https://en.wikipedia.org/wiki/Scripta%20Materialia
Scripta Materialia is a peer-reviewed scientific journal. It is the "letters" section of Acta Materialia and covers novel properties, or substantially improved properties of materials. Specific materials discussed are metals, ceramics and semiconductors at all length scales, and published research endeavors explore the functional or mechanical behavior of these materials. Articles tend to focus on the materials science and engineering aspects of discovery, characterization, development (including advances), structure, chemistry, theory, experiment, modeling, simulation, physics processes (thermodynamics, mechanics, etc.), synthesis, processing (production), mechanisms, and control. The journal also publishes comments on papers published in both Acta Materialia and Scripta Materialia and "Viewpoint Sets", which are sets of short articles invited by guest editors. The editor-in-chief is Gregory S. Rohrer, who also edits Acta Materialia. History The journal was established in 1967 as Scripta Metallurgica. It was renamed Scripta Metallurgica et Materialia in 1990, finally obtaining its current name in 1996. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.611. References External links http://www.journals.elsevier.com/scripta-materialia/ Biweekly journals Elsevier academic journals Engineering journals English-language journals Materials science journals Academic journals established in 1967
Scripta Materialia
[ "Materials_science", "Engineering" ]
302
[ "Materials science journals", "Materials science" ]
28,557,044
https://en.wikipedia.org/wiki/Proteins%20%28journal%29
Proteins: Structure, Function, and Bioinformatics is a monthly peer-reviewed scientific journal published by John Wiley & Sons. It was established in 1986 by Cyrus Levinthal. The journal covers research on all aspects of protein biochemistry, including computation, function, structure, design, and genetics. The editor-in-chief is Nikolay Dokholyan (Penn State College of Medicine). Publishing formats are original research reports, short communications, prediction reports, invited reviews, and topic proposals. In addition, Proteins includes a section entitled "Section Notes", describing novel protein structures. Abstracting and indexing Proteins is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.756. References External links Biochemistry journals Academic journals established in 1986 Monthly journals Wiley (publisher) academic journals Molecular and cellular biology journals English-language journals
Proteins (journal)
[ "Chemistry" ]
178
[ "Biochemistry journals", "Molecular and cellular biology journals", "Biochemistry literature", "Molecular biology" ]
28,557,957
https://en.wikipedia.org/wiki/Mechanochromism
Mechanochromism is the generic term for the change of colour which occurs when chemicals are put under stress in the solid state by mechanical grinding, crushing and milling; by friction and rubbing; or in the solid or solution state by high pressure or sonication. Specifically, colour change under pressure is known as piezochromism, and colour change under grinding or attrition as tribochromism. See also Chromism Photoelasticity (for the physical process) References 1. Bamfield, Peter and Hutchings, Michael G, Chromic Phenomena: The Technological Applications of Colour Chemistry, Royal Society of Chemistry, Cambridge UK, pages 104–5, 2010. Chromism
Mechanochromism
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
143
[ "Spectroscopy stubs", "Materials science stubs", "Spectrum (physical sciences)", "Chromism", "Astronomy stubs", "Materials science", "Smart materials", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
28,559,987
https://en.wikipedia.org/wiki/Monolayer%20doping
Monolayer doping (MLD) in semiconductor production is a well controlled, wafer-scale surface doping technique first developed at the University of California, Berkeley, in 2007. This work is aimed for attaining controlled doping of semiconductor materials with atomic accuracy, especially at nanoscale, which is not easily obtained by other existing technologies. This technique is currently used for fabricating ultrashallow junctions (USJs) as the heavily doped source/drain (S/D) contacts of metal–oxide–semiconductor field effect transistors (MOSFETs) as well as enabling dopant profiling of nanostructures. This MLD technique utilizes the crystalline nature of semiconductors and its self-limiting surface reaction properties to form highly uniform, self-assembled, covalently bonded dopant-containing monolayers followed by a subsequent annealing step for the incorporation and diffusion of dopants. The monolayer formation reaction is self-limiting, thereby resulting in the deterministic coverage of dopant atoms on the surface. MLD differs from other conventional doping techniques such as spin-on-dopants (SODs) and gas phase doping techniques in the way of dopant dose control. Such control in MLD is much more precise due to the self-limiting formation of covalently attached dopants on the surface while the SODs just rely on the thickness control of the spin-on oxide and the gas phase technique depends on the control of dopant gas flow rate; therefore, the excellent dose control in MLD can yield the exact tuning of the resulting dopant profile. Compared to ion-implantation, MLD does not involve the energetic introduction of dopant species into the semiconductor lattice where crystal damages are induced. In the case of implantation, defects such as interstitials and vacancies are inevitably generated, which interact with the dopants to further broaden the junction profile. This is known as the transient-enhanced diffusion (TED), which limits the formation of good quality of USJs. Also, stochastic variation in the dopant positioning and severe stoichiometric imbalance are thus induced for binary and tertiary compound semiconductors by the implantation techniques. In contrast, all MLD dopant atoms are thermally diffused from the crystal surface to the bulk and the dopant profile can be easily controlled by the thermal budget. Since the MLD system can be classified as a limited source model, this is desirable for controlled USJ fabrication with high uniformity and low stochastic variation. Combined with the excellent dopant dose uniformity and coverage in MLD, it is especially attractive for doping nonplanar devices such as fin-FETs and nanowires. As a result, high quality sub-5 nm ultra-shallow junction has been demonstrated in silicon via the use of this MLD technique. Compared to low-energy ion-implantation into a screening film followed by in-diffusion, the MLD technique requires a lower thermal budget and allows conformal doping on topographic features. Applications in various structures The MLD process is applicable for both p- and n-doping of various nanostructured materials, including conventional planar substrates, nanobelts and nanowires, which are fabricated by either the ‘bottom-up’ or ‘top-down’ approaches, making it highly versatile for various applications. 
In p-type doping of silicon, a covalently anchored monolayer of allylboronic acid pinacol ester is formed on the surface as the boron precursor while a monolayer of diethyl 1-propylphosphonate is used as the phosphorus precursor in n-type doping. For example, in the case of USJ formation, combining the phosphorus-MLD and conventional spike annealing, the record 5 nm junction (down to 2 nm - the SIMS resolution limit) with the noncontact Rs measurements (~5000 Ω/□) is reported and being consistent with the predicted values from the dopant profile. Notably, ~70 % of the dopants are electrically active as the MLD process utilizes an equilibrium based diffusion mechanism. In addition to silicon, MLD has also been applied to compound semiconductors such as indium arsenide (InAs) to obtain high quality ultra-shallow junctions. For the past years, controlling the post-growth dopant profiles in compound semiconductors such as III-V materials deterministically has not been well achieved due to the challenges in controlling the recovered stoichiometry after the implantation and sequential annealing. These residual damages can lead to higher junction leakage and lower dopant activation in compound semiconductors. Utilizing the MLD technique with sulfur dopants, a dopant profile abruptness of ~ 3.5 nm/decade with high electrically active sulfur concentrations of ~ 8–1018 cm−3 is observed in InAs without significant defect density. The MLD capping layer serves as i) preventing group V elements to desorb and ii) avoiding the dopant atoms to be lost to the ambient in order to result in the good quality junctions. Control of area dose and junction profile An important characteristic of the use of the substrate surface chemistry is the ability to readily control the areal dose of the dopants by forming a mixed monolayer of ‘blank’ and dopant-containing molecules. For instance, a mixture of boron precursor molecules and dodecene (all-carbon ‘blank’ precursor) in different ratios is utilized to manipulate the areal dose of boron. Besides the mixed monolayer formation, the areal dose can be readily tuned by using the molecular structure details of the dopant precursor. In specific, the molecular footprint of the precursor directly governs the surface concentration of the dopants, with larger molecules resulting in a lower dose. In this regard, using trioctylphosphine oxide (TOP) as the phosphorus precursor with an approximately six-fold larger molecular footprint than DPP, the dopant dose can be modulated in the reduction of six times accordingly. Moreover, the doping profiles can be readily tuned through optimization of the annealing conditions. In this case, the high surface doping density with sharp spatial decay can be obtained by using this MLD method with low anneal temperatures and short times for the formation of USJs. The ability to controllably tune the dopant dose through the structural design of the precursor and to control the dopant profile by the annealing conditions present a unique aspect of the MLD process for attaining the desired dopant dose and profile. This technology is currently being examined by industry for the USJ S/D contacts of future nanoscale transistors based on Si and III-V compound semiconductors. References Semiconductor device fabrication 2007 in technology American inventions
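As a rough illustration of the dose-control reasoning above, the sketch below estimates an areal dopant dose from an assumed molecular footprint and a mixing fraction of dopant-bearing to "blank" molecules. The footprint values and the simple reciprocal-area model are illustrative assumptions made here for the sketch, not measured parameters from the MLD literature.

```python
def areal_dose(footprint_nm2, dopant_fraction=1.0):
    """Estimate dopant atoms per cm^2 for a close-packed monolayer.

    footprint_nm2   -- assumed area occupied by one precursor molecule (nm^2)
    dopant_fraction -- fraction of monolayer sites carrying a dopant atom
                       (e.g. when diluted with a 'blank' molecule such as dodecene)
    Assumes one dopant atom per dopant-bearing molecule and full surface coverage.
    """
    sites_per_cm2 = 1.0 / (footprint_nm2 * 1e-14)   # 1 nm^2 = 1e-14 cm^2
    return dopant_fraction * sites_per_cm2

# Illustrative footprints (assumed values, for demonstration only):
small_precursor = areal_dose(footprint_nm2=0.25)                       # compact molecule
large_precursor = areal_dose(footprint_nm2=1.50)                       # bulky molecule, lower dose
diluted         = areal_dose(footprint_nm2=0.25, dopant_fraction=0.1)  # mixed monolayer

print(f"compact precursor  : {small_precursor:.2e} atoms/cm^2")
print(f"bulky precursor    : {large_precursor:.2e} atoms/cm^2")
print(f"10% mixed monolayer: {diluted:.2e} atoms/cm^2")
```

The six-fold dose reduction quoted for TOP versus DPP corresponds, in this simple picture, to a six-fold larger molecular footprint at the same coverage.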
Monolayer doping
[ "Materials_science" ]
1,400
[ "Semiconductor device fabrication", "Microtechnology" ]
28,562,138
https://en.wikipedia.org/wiki/Specific%20fan%20power
Specific Fan Power (SFP) is a parameter that quantifies the energy-efficiency of fan air movement systems. It is a measure of the electric power that is needed to drive a fan (or collection of fans), relative to the amount of air that is circulated through the fan(s). It is not constant for a given fan, but changes with both air flow rate and fan pressure rise. Definition SFP for a given fan system and operating point (combination of flow rate and pressure rise) is defined as SFP = P / qv, where: P is the electrical power used by the fan (or sum of all fans in the ventilation system) [kW] and qv is the gross amount of air circulated through the fan (or ventilation system) [m3/s]. There are various sub-definitions of SFP for different specific applications, including SFPe (building energy performance calculations), SFPv (for performance verification tests), SFPi (individual fan), SFPAHU (air handling unit), SFPFCU (fan coil unit), and SFPBLDG (whole building). These are explained in the references. Reference 1 also describes how to account for intermittently operated fans, e.g. kitchen hoods, and part-load performance in variable air volume (VAV) systems. SFP can be expressed in the following equivalent SI units: kW/(m3/s) = kJ/m3 = kPa. SFP and fan system efficiency As you can see above, SFP can be expressed in units of pressure, since pressure is a measure of energy per m³ air. The relationship between SFP, fan pressure rise, and fan system efficiency is simply SFP = Δpt / ηtot, where: ηtot is the overall efficiency of the driven fan system [-] and Δpt is the rise in total pressure through the fan [kPa]. In the case of an ideal lossless fan system (i.e. ηtot = 1) the SFP is exactly equal to the fan pressure rise (i.e. total pressure loss in the ventilation system). In reality the fan system efficiency is often in the range 0 to 60% (i.e. ηtot = 0 to 0.6); it is lowest for small fans or inefficient operating points (e.g. throttled flow or free-flow). The efficiency is a function of the total losses in the fan system, including aerodynamic losses in the fan, friction losses in the drive (e.g. belt), losses in the electric motor, and variable speed drive power electronics. For more insight into how to maximise energy efficiency and minimize noise in fan systems, see ref. 1 See also Fan (mechanical) Industrial fans Specific pump power Efficient energy use References and notes Bibliography and further reading Bunn, R: Let's get specific about fan power. London: Building Services Journal. 1 August 1999 External links http://www.designbuilder.co.uk/helpv1/Content/Fans.htm Ventilation fans Heating, ventilation, and air conditioning Electric motors Fluid dynamics
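A small worked example of the two relationships above (SFP as electrical power divided by volumetric flow, and SFP as pressure rise divided by overall efficiency). The numbers are illustrative and not taken from any standard.

```python
def sfp(power_kw, flow_m3_per_s):
    """Specific fan power in kW/(m^3/s), numerically equal to kJ/m^3 or kPa."""
    return power_kw / flow_m3_per_s

def sfp_from_pressure(dp_total_kpa, efficiency):
    """SFP from fan total pressure rise [kPa] and overall fan-system efficiency [-]."""
    return dp_total_kpa / efficiency

# Example: a fan drawing 3.0 kW while moving 2.0 m^3/s of air
print(sfp(3.0, 2.0))                 # 1.5 kW/(m^3/s)

# The same operating point expressed via a 0.9 kPa pressure rise at 60% efficiency
print(sfp_from_pressure(0.9, 0.60))  # 1.5 kPa, consistent with the value above
```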
Specific fan power
[ "Chemistry", "Technology", "Engineering" ]
599
[ "Engines", "Electric motors", "Chemical engineering", "Piping", "Electrical engineering", "Fluid dynamics" ]
28,563,536
https://en.wikipedia.org/wiki/C7H8N2O
The molecular formula C7H8N2O (molar mass: 136.154 g/mol, exact mass: 136.0637 u) may refer to: 3-Aminobenzamide Nicotinyl methylamide Molecular formulas
C7H8N2O
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
28,568,276
https://en.wikipedia.org/wiki/Sipuleucel-T
Sipuleucel-T, sold under the brand name Provenge, developed by Dendreon Pharmaceuticals, LLC, is a cell-based cancer immunotherapy for prostate cancer (CaP). It is an autologous cellular immunotherapy. Medical uses Sipuleucel-T is indicated for the treatment of metastatic, asymptomatic or minimally symptomatic, metastatic castrate-resistant hormone-refractory prostate cancer (HRPC). Other names for this stage are metastatic castrate-resistant (mCRPC) and androgen independent (AI) or (AIPC). This stage leads to mCRPC with lymph node involvement and distal (distant) tumors; this is the lethal stage of CaP. The prostate cancer staging designation is T4,N1,M1c. Treatment method A course of treatment consists of three basic steps: The patient's white blood cells, primarily dendritic cells, a type of antigen-presenting cells (APCs), are extracted in a leukapheresis procedure. The blood product is sent to a production facility and incubated with a fusion protein (PA2024) consisting of two parts: The antigen prostatic acid phosphatase (PAP), which is present in 95% of prostate cancer cells and An immune signaling factor granulocyte-macrophage colony stimulating factor (GM-CSF) that helps the APCs to mature. The activated blood product (APC8015) is returned from the production facility to the infusion center and reinfused into the patient. Premedication with acetaminophen and antihistamine is recommended to minimize side effects. Side effects Common side effects include: bladder pain; bloating or swelling of the face, arms, hands, lower legs, or feet; bloody or cloudy urine; body aches or pain; chest pain; chills; confusion; cough; diarrhea; difficult, burning, or painful urination; difficulty with breathing; difficulty with speaking up to inability to speak; double vision; sleeplessness; and inability to move the arms, legs, or facial muscles. Society and culture Legal status Sipuleucel-T was approved by the U.S. Food and Drug Administration (FDA) on April 29, 2010, to treat asymptomatic or minimally symptomatic metastatic HRPC. Shortly afterward, sipuleucel-T was added to the compendium of cancer treatments published by the National Comprehensive Cancer Network (NCCN) as a "category 1" (highest recommendation) treatment for HRPC. The NCCN Compendium is used by Medicare and major health care insurance providers to decide whether a treatment should be reimbursed. Research Clinical trials Completed Sipuleucel-T showed overall survival (OS) benefit to patients in three double-blind randomized phase III clinical trials, D9901, D9902a, and IMPACT. The IMPACT trial served as the basis for FDA licensing. This trial enrolled 512 patients with asymptomatic or minimally symptomatic metastatic HRPC randomized in a 2:1 ratio. The median survival time for sipuleucel-T patients was 25.8 months comparing to 21.7 months for placebo-treated patients, an increase of 4.1 months. 31.7% of treated patients survived for 36 months vs. 23.0% in the control arm. Overall survival was statistically significant (P=0.032). The longer survival without tumor shrinkage or change in progression is surprising. This may suggest the effect of an unmeasured variable. The trial was conducted pursuant to a FDA Special Protocol Assessment (SPA), a set of guidelines binding trial investigators to specific agreed-upon parameters with respect to trial design, procedures and endpoints; compliance ensured overall scientific integrity and accelerated FDA approval. 
The D9901 trial enrolled 127 patients with asymptomatic metastatic HRPC randomized in a 2:1 ratio. The median survival time for patients treated with sipuleucel-T was 25.9 months comparing to 21.4 months for placebo-treated patients. Overall survival was statistically significant (P=0.01). The D9902a trial was designed like the D9901 trial but enrolled 98 patients. The median survival time for patients treated with sipuleucel-T was 19.0 months comparing to 15.3 months for placebo-treated patients, but did not reach statistical significance. Ongoing As of August 2014, the PRO Treatment and Early Cancer Treatment (PROTECT) trial, a phase IIIB clinical trial started in 2001, was tracking subjects but no longer enrolling new subjects. Its purpose is to test efficacy for patients whose CaP is still controlled by either suppression of testosterone by hormone treatment or by surgical castration. Such patients have usually failed primary treatment of either surgical removal of the prostate, (EBRT), internal radiation, BNCT or (HIFU) for curative intent. Such failure is called biochemical failure and is defined as a PSA reading of 2.0 ng/mL above nadir (the lowest reading taken post primary treatment). As of August 2014, a clinical trial administering sipuleucel-T in conjunction with ipilimumab (Yervoy) was tracking subjects but no longer enrolling new subjects; the trial evaluates the clinical safety and anti-cancer effects (quantified in PSA, radiographic and T cell response) of the combination therapy in patients with advanced prostate cancer. References Further reading External links Cancer treatments Prostate cancer
Sipuleucel-T
[ "Biology" ]
1,172
[ "Cell therapies" ]
45,435,217
https://en.wikipedia.org/wiki/Thorium%20oxalate
Thorium oxalate is the inorganic compound with the formula Th(C2O4)2(H2O)4. It is a white insoluble solid prepared by the reaction of thorium(IV) salts with oxalic acid. The material is a coordination polymer. Each Th(IV) center is bound to 10 oxygen centers: eight provided by the bridging oxalates and two by a pair of aquo ligands. Two additional waters of hydration are observed in the lattice. The solubility product (Ksp) of thorium oxalate is 5.01 × 10−25. The density of anhydrous thorium oxalate is 4.637 g/cm3. References External links Atomistry.com: Thorium oxalate info page International Bio-Analytical Industries: Thorium Oxalate Dihydrate Thorium(IV) compounds Oxalates
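As an illustration of what a solubility product of this magnitude implies, the short sketch below estimates an ideal molar solubility from Ksp = [Th4+][C2O4 2−]², assuming simple dissolution of Th(C2O4)2 into Th4+ and two oxalate ions with no hydrolysis, complexation or activity corrections; these assumptions are quite crude for thorium(IV) chemistry, so the result is only an order-of-magnitude estimate.

```python
# Ideal-solution estimate of Th(C2O4)2 solubility from Ksp.
# Assumed dissolution: Th(C2O4)2(s) -> Th^4+ + 2 C2O4^2-  (no hydrolysis or complexation)
# Then Ksp = s * (2s)^2 = 4 s^3, so s = (Ksp / 4) ** (1/3).

KSP = 5.01e-25

s = (KSP / 4.0) ** (1.0 / 3.0)          # molar solubility, mol/L
molar_mass = 232.04 + 2 * 88.02          # g/mol for anhydrous Th(C2O4)2 (approximate)

print(f"molar solubility ~ {s:.2e} mol/L")      # roughly 5e-9 mol/L
print(f"mass solubility  ~ {s * molar_mass:.2e} g/L")
```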
Thorium oxalate
[ "Chemistry" ]
192
[ "Inorganic compounds", "Inorganic compound stubs" ]
45,437,349
https://en.wikipedia.org/wiki/Mycosubtilin
Mycosubtilin is a natural lipopeptide with antifungal and hemolytic activities, isolated from Bacillus species. It belongs to the iturin lipopeptide family. Definition Mycosubtilin is a natural lipopeptide. It is produced by strains of Bacillus spp., mainly Bacillus subtilis. It was discovered due to its antifungal activities. It belongs to the family of iturin lipopeptides. Structure Mycosubtilin is a heptapeptide, cyclized into a ring with a β-amino fatty acid. The peptide sequence is composed of L-Asn-D-Tyr-D-Asn-L-Gln-L-Pro-D-Ser-L-Asn. Biological activities Mycosubtilin has strong antifungal and hemolytic activities. It is active against fungi and yeasts such as Candida albicans, Candida tropicalis, Saccharomyces cerevisiae, Penicillium notatum, and Fusarium oxysporum. Its antibacterial activity is quite limited, extending mainly to bacteria such as Micrococcus luteus. See also Surfactin References Colloidal chemistry Antibiotics Lipopeptides Non-ionic surfactants
Mycosubtilin
[ "Chemistry", "Biology" ]
279
[ "Colloidal chemistry", "Biotechnology products", "Surface science", "Colloids", "Antibiotics", "Biocides" ]
42,158,230
https://en.wikipedia.org/wiki/Mucoserous%20acinus
Mucoserous acini (singular acinus) or mixed acini are mainly present in the submandibular and sublingual glands. They are formed by mucous cells with some serous cells interspersed among them. Both cell types pour their secretions directly into the lumen. Typically a layer of mucous cells is capped by a serous demilune of serous cells lying superficial to the mucous cells. Histology
Mucoserous acinus
[ "Chemistry" ]
81
[ "Histology", "Microscopy" ]
40,725,476
https://en.wikipedia.org/wiki/Ramsey%20class
In the area of mathematics known as Ramsey theory, a Ramsey class is one which satisfies a generalization of Ramsey's theorem. Suppose A, B and C are structures and k is a positive integer. We denote by (B over A) the set of all subobjects of B which are isomorphic to A. We further denote by C → (B)^A_k the property that for all partitions X1 ∪ ... ∪ Xk of (C over A) there exists a B' in (C over B) and an i ≤ k such that (B' over A) ⊆ Xi. Suppose K is a class of structures closed under isomorphism and substructures. We say the class K has the A-Ramsey property if for every positive integer k and for every B in K there is a C in K such that C → (B)^A_k holds. If K has the A-Ramsey property for all A in K then we say K is a Ramsey class. Ramsey's theorem is equivalent to the statement that the class of all finite sets is a Ramsey class. References Ramsey theory
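The partition property in the definition above is usually written with a partition arrow; the LaTeX below is a sketch of that standard notation, where \binom{B}{A} denotes the set written "(B over A)" above and the symbol choices (K for the class, k for the number of parts) follow the wording of the definition rather than any particular source.

```latex
% C -> (B)^A_k : for every partition of the copies of A in C into k classes,
% some copy B' of B inside C has all of its copies of A within a single class.
\[
  C \longrightarrow (B)^{A}_{k}
  \iff
  \forall\, \binom{C}{A} = X_1 \cup \dots \cup X_k \;\;
  \exists\, B' \in \binom{C}{B},\ \exists\, i \le k :\
  \binom{B'}{A} \subseteq X_i .
\]
```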
Ramsey class
[ "Mathematics" ]
161
[ "Combinatorics stubs", "Ramsey theory", "Combinatorics" ]
40,726,052
https://en.wikipedia.org/wiki/Community%20genetics
Community genetics is a recently emerged field in biology that fuses elements of community ecology, evolutionary biology, and molecular and quantitative genetics. Antonovics first articulated the vision for such a field, and Whitham et al. formalized its definition as "The study of the genetic interactions that occur between species and their abiotic environment in complex communities." The field aims to bridge the gaps in the study of evolution and ecology, within the multivariate community context in which ecological and evolutionary features are embedded. The documentary movie A Thousand Invisible Cords provides an introduction to the field and its implications. To date, the primary focus of most community genetics studies has been on the influences of genetic variation in plants on foliar arthropod communities. In a wide variety of ecosystems, different plant genotypes often support different compositions of associated foliar arthropod communities. Such community phenotypes have been observed in natural hybrid complexes, among genotypes and sibling families within a single species and among different plant populations. To understand the broader impacts of differences among plant genotypes on biodiversity as a whole, researchers have begun to examine the response of other organisms, such as foliar endophytes, mycorrhizal fungi, soil microbes, litter-dwelling arthropods, herbaceous plants and epiphytes. These effects are frequently examined with foundation species in temperate ecosystems, who structure ecosystems by modulating and stabilizing resources and ecosystem processes. The emphasis on foundation species allows researchers to focus on the likely most important players in a system without becoming overwhelmed by the complexity of all the genetically variable interactions occurring at the same time. However, unique effects of plant genotypes have also been found with non-foundation species, and can occur in tropical, boreal and alpine systems. The vision for the field of community genetics extends beyond documentation of different communities on different genotypes of a focal species. Other aspects of this field include understanding how species interactions within a community are modulated by host genotype, implications of host genotype on the fitness and evolution of community members, and selection on hosts influencing associated communities. Future progress in the field of community genetics is strongly dependent on breakthroughs in modern molecular DNA-based technology, such as genome sequencing. The application of a community genetics approach to understanding how species and communities of interacting organisms are reacting to rapid changes in climate, as well as informing restoration, are two important applied aspects of community genetics. References Community ecology Evolutionary biology Molecular genetics
Community genetics
[ "Chemistry", "Biology" ]
512
[ "Evolutionary biology", "Molecular genetics", "Molecular biology" ]
32,327,652
https://en.wikipedia.org/wiki/Isospin%20multiplet
In particle physics, isospin multiplets are families of hadrons with approximately equal masses. All particles within a multiplet have the same spin, parity, and baryon number, but differ in electric charge. Isospin formally behaves as an angular momentum operator and thus satisfies the appropriate canonical commutation relations. For a given isospin quantum number I, 2I + 1 states are allowed, as if they were the third components of an angular momentum operator Î. The set of these states is called an isospin multiplet and is used to accommodate the particles. An example of an isospin multiplet is the nucleon multiplet consisting of the proton and the neutron. In this case I = 1/2 and, by convention, the proton corresponds to I3 = +1/2, while the neutron corresponds to I3 = -1/2. Another example is given by the delta baryons, for which I = 3/2. The existence of multiplets with approximately equal masses owes to the fact that the masses of the up and down quarks are approximately equal (compared to a typical hadron mass), and the strong interaction is quark-flavour blind. This makes isospin symmetry a good approximation. References Hadrons
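A tiny sketch of the 2I + 1 counting rule mentioned above; the example labels are simply the two multiplets named in the text (nucleons with I = 1/2 and delta baryons with I = 3/2), and the function name is an illustrative choice.

```python
from fractions import Fraction

def multiplet_states(i):
    """Return the allowed I3 values -I, -I+1, ..., +I for isospin quantum number I."""
    i = Fraction(i)
    n = int(2 * i) + 1                 # multiplicity 2I + 1
    return [-i + k for k in range(n)]

examples = {
    "nucleon (p, n)": Fraction(1, 2),
    "delta baryons":  Fraction(3, 2),
}

for name, i in examples.items():
    states = multiplet_states(i)
    print(f"{name}: I = {i}, {len(states)} states, I3 = " + ", ".join(str(s) for s in states))
```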
Isospin multiplet
[ "Physics" ]
264
[ "Matter", "Hadrons", "Particle physics", "Particle physics stubs", "Subatomic particles" ]
32,328,300
https://en.wikipedia.org/wiki/Automorphic%20L-function
In mathematics, an automorphic L-function is a function L(s,π,r) of a complex variable s, associated to an automorphic representation π of a reductive group G over a global field and a finite-dimensional complex representation r of the Langlands dual group LG of G, generalizing the Dirichlet L-series of a Dirichlet character and the Mellin transform of a modular form. They were introduced by Langlands; surveys of automorphic L-functions have since been given by several authors. Properties Automorphic L-functions should have the following properties (which have been proved in some cases but are still conjectural in other cases). The L-function should be a product over the places of the global field of local L-functions. Here the automorphic representation is a tensor product of the representations of local groups. The L-function is expected to have an analytic continuation as a meromorphic function of all complex s, and to satisfy a functional equation in which the epsilon factor is a product of "local constants", almost all of which are 1. General linear groups Godement and Jacquet constructed the automorphic L-functions for general linear groups with r the standard representation (so-called standard L-functions) and verified analytic continuation and the functional equation, by using a generalization of the method in Tate's thesis. Ubiquitous in the Langlands Program are Rankin-Selberg products of representations of GL(m) and GL(n). The resulting Rankin-Selberg L-functions satisfy a number of analytic properties, their functional equation being first proved via the Langlands–Shahidi method. In general, the Langlands functoriality conjectures imply that automorphic L-functions of a connected reductive group are equal to products of automorphic L-functions of general linear groups. A proof of Langlands functoriality would also lead towards a thorough understanding of the analytic properties of automorphic L-functions. See also Grand Riemann hypothesis References Automorphic forms Zeta and L-functions Langlands program
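As a schematic reminder of the two expected properties just listed, the general shape is sketched below in LaTeX; the notation (the completed L-function Lambda, the dual representation r-tilde, the additive character psi_v) is generic and not taken from the article:

% Euler product over the places v of the global field
L(s,\pi,r) \;=\; \prod_{v} L(s,\pi_v,r_v)
% Functional equation for the completed L-function, with the epsilon factor
% a product of local constants, almost all equal to 1
\Lambda(s,\pi,r) \;=\; \varepsilon(s,\pi,r)\,\Lambda(1-s,\pi,\tilde r),
\qquad \varepsilon(s,\pi,r)=\prod_v \varepsilon_v(s,\pi_v,r_v,\psi_v)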
Automorphic L-function
[ "Mathematics" ]
405
[ "Langlands program", "Number theory" ]
32,334,455
https://en.wikipedia.org/wiki/Limit%20analysis
Limit analysis is a field of structural analysis dedicated to the development of efficient methods for directly estimating the collapse load of a given structural model without resorting to iterative or incremental analysis. For this purpose, the field is based on the limit theorems: a set of theorems, grounded in the law of conservation of energy, that state properties of the stresses and strains at collapse and provide lower-bound and upper-bound estimates of, as well as the exact value of, the collapse load. Software for limit analysis OPTUM G2 (2014-) General purpose software for geotechnical applications in 2D (also includes elastoplasticity, seepage, consolidation, staged construction, tunneling, and other relevant geotechnical analysis types). OPTUM G3 (2018-) General purpose software for geotechnical applications in 3D (also includes other relevant geotechnical analysis types). OPTUM CS (Concrete Solutions) (2019-) 3D design and analysis software for both pre-cast and in-situ concrete (also includes elastoplasticity). OPTUM MP (2019-) Free 2D concrete slab design and analysis software. LimitState:GEO (2008-) General purpose geotechnical software limit analysis application. Uses discontinuity layout optimization. LimitState:SLAB (2015-) Limit analysis software application for slabs. Uses discontinuity layout optimization. References Structural analysis
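A minimal worked illustration of the lower-bound (static) and upper-bound (kinematic) theorems, assuming a textbook case not discussed in the article: a simply supported rigid-plastic beam of span L with plastic moment capacity Mp under a central point load. For this case the two bounds coincide, giving the exact collapse load directly, with no incremental analysis.

def lower_bound_collapse_load(Mp, L):
    # Static theorem: a statically admissible moment field with peak moment
    # P*L/4 <= Mp is safe, so P = 4*Mp/L is a lower-bound estimate.
    return 4.0 * Mp / L

def upper_bound_collapse_load(Mp, L):
    # Kinematic theorem: a midspan plastic-hinge mechanism equates external
    # work P*theta*L/2 to internal work Mp*2*theta, giving P = 4*Mp/L.
    return 4.0 * Mp / L

Mp, L = 120.0, 6.0   # kN*m and m, illustrative values only
print(lower_bound_collapse_load(Mp, L), upper_bound_collapse_load(Mp, L))
# Both bounds give 80 kN, which is therefore the exact collapse load.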
Limit analysis
[ "Engineering" ]
293
[ "Structural engineering", "Structural analysis", "Civil engineering", "Civil engineering stubs", "Mechanical engineering", "Aerospace engineering" ]
53,645,258
https://en.wikipedia.org/wiki/Resource%20calendar
The resource calendar is the timetable that shows how material and labor are consumed during the course of a project. This data might be at activity or project level. Project schedule Making a schedule relies upon knowledge of every individual's availability and schedule limits, including: Time zones Work hours Vacation time See also Schedule (project management) Project planning Resource allocation References External links Creating Resource Calendars Time management Schedule (project management)
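A minimal sketch of what such a calendar might look like as a data structure, covering the three items listed above (class and field names are my own, not from any project-management tool):

from datetime import date, time
from dataclasses import dataclass, field

@dataclass
class ResourceCalendar:
    timezone: str
    work_start: time = time(9, 0)
    work_end: time = time(17, 0)
    vacation: set = field(default_factory=set)   # set of date objects

    def is_available(self, day: date, at: time) -> bool:
        # Unavailable on weekends, booked vacation days, or outside work hours.
        if day.weekday() >= 5 or day in self.vacation:
            return False
        return self.work_start <= at < self.work_end

cal = ResourceCalendar(timezone="Europe/Copenhagen", vacation={date(2024, 7, 15)})
print(cal.is_available(date(2024, 7, 16), time(10, 30)))   # True
print(cal.is_available(date(2024, 7, 15), time(10, 30)))   # False (vacation day)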
Resource calendar
[ "Physics" ]
87
[ "Physical quantities", "Time", "Time management", "Spacetime", "Schedule (project management)" ]
53,646,049
https://en.wikipedia.org/wiki/Ohmic%20plasma
An ohmic plasma is a plasma that is maintained and/or replenished by the heat produced when a plasma current flows through the plasma's own resistance, as with the current induced in a tokamak (which also produces the poloidal magnetic field). Heating efficiency decreases as the plasma temperature increases, because the plasma's resistivity falls with temperature. The mechanism is the same as the ohmic heating of a resistor carrying an electric current. Related terms and topics Spitzer resistivity Alfvén wave Tokamak References External links Heating the plasma—RESEARCH FOR TOMORROW'S ENERGY SUPPLY Poloidal Field Institute for Plasma Physics (IPP-Max Planck) - Ohmic Heating Plasma types
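A rough numerical illustration of why the efficiency falls with temperature, using a Spitzer-like resistivity scaling eta ~ Te^(-3/2); the reference resistivity, reference temperature and current density below are assumed values for illustration, not figures from the article:

def spitzer_resistivity(Te_eV, eta_ref=5e-7, Te_ref_eV=100.0):
    # Assumed reference resistivity (ohm*m) at Te_ref_eV; scales as Te^(-3/2).
    return eta_ref * (Te_eV / Te_ref_eV) ** -1.5

def ohmic_power_density(current_density, Te_eV):
    # P = eta * j^2, in W/m^3
    return spitzer_resistivity(Te_eV) * current_density ** 2

j = 1e6   # A/m^2, an assumed tokamak-scale current density
for Te in (100.0, 1000.0, 10000.0):
    print(Te, ohmic_power_density(j, Te))
# Each tenfold rise in temperature cuts the ohmic heating per unit volume by
# roughly a factor of 30, which is why ohmic heating alone cannot sustain
# very hot plasmas.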
Ohmic plasma
[ "Physics" ]
121
[ "Plasma types", "Plasma physics stubs", "Plasma physics" ]
53,646,845
https://en.wikipedia.org/wiki/Liquid%20slugging
Liquid slugging is the phenomenon of liquid entering the cylinder of a reciprocating compressor, a common cause of failure. Under normal conditions, the intake and output of a compressor cylinder are entirely vapor or gas; when liquid accumulates at the suction port, liquid slugging can occur. As more of the practically incompressible liquid enters, strain is placed on the system, leading to a variety of failures. References Physical phenomena
Liquid slugging
[ "Physics" ]
89
[ "Physical phenomena" ]
53,651,363
https://en.wikipedia.org/wiki/Neutron%20scattering%20length
A neutron may pass by a nucleus with a probability determined by the nuclear interaction distance, or be absorbed, or undergo scattering that may be either coherent or incoherent. The interference effects in coherent scattering can be computed via the coherent scattering length of neutrons, being proportional to the amplitude of the spherical scattered waves according to Huygens–Fresnel theory. This scattering length varies by isotope (and by element as the weighted arithmetic mean over the constituent isotopes) in a way that appears random, whereas the X-ray scattering length is just the product of atomic number and Thomson scattering length, thus monotonically increasing with atomic number. The scattering length may be either positive or negative. The scattering cross-section is equal to the square of the scattering length multiplied by 4π, i.e. the area of a circle with radius twice the scattering length. In some cases, as with titanium and nickel, it is possible to mix isotopes of an element whose lengths are of opposite signs to give a net scattering length of zero, in which case coherent scattering will not occur at all, while for vanadium already the opposite signs of the only naturally occurring isotope's two spin configurations give a near cancellation. However, neutrons will still undergo strong incoherent scattering in these materials. There is a large difference in scattering length between protium (-0.374) and deuterium (0.667). By using heavy water as solvent and/or selective deuteration of the probed molecule (exchanging the naturally occurring protium by deuterium) this difference can be leveraged in order to image the hydrogen configuration in organic matter, which is nearly impossible with X-rays due to their small sensitivity to hydrogen's single electron. On the other hand, neutron scattering studies of hydrogen-containing samples often suffer from the strong incoherent scattering of natural hydrogen. More comprehensive data is available from NIST and Atominstitut of Vienna. References Neutron scattering
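A small numerical sketch of two points made above, using the protium and deuterium scattering lengths quoted in the article (in units of 1e-12 cm); the function names and the mixing example are my own illustration:

import math

def coherent_cross_section(b):
    # sigma = 4*pi*b^2: the area of a circle whose radius is twice the scattering length.
    return 4.0 * math.pi * b ** 2

def null_mix_fraction(b1, b2):
    # Fraction x of isotope 1 such that x*b1 + (1-x)*b2 = 0 (needs opposite signs).
    return b2 / (b2 - b1)

b_H, b_D = -0.374, 0.667
print(coherent_cross_section(b_H))      # ~1.76 barn, since b is in 1e-12 cm
x = null_mix_fraction(b_H, b_D)
print(x)                                # ~0.64: about 64% H / 36% D gives zero net coherent length
print(x * b_H + (1 - x) * b_D)          # ~0.0, so coherent scattering cancels (incoherent remains)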
Neutron scattering length
[ "Chemistry" ]
408
[ "Scattering", "Neutron scattering" ]
53,651,947
https://en.wikipedia.org/wiki/Ion%20funnel
In mass spectrometry, an ion funnel is a device used to focus a beam of ions using a series of stacked ring electrodes with decreasing inner diameter. A combined radio frequency and fixed electrical potential is applied to the grids. In electrospray ionization-mass spectrometry (ESI-MS), ions are created at atmospheric pressure, but are analyzed at subsequently lower pressures. Ions can be lost while they are shuttled from areas of higher to lower pressure during the transmission process, owing to a phenomenon called Joule expansion or “free-jet expansion.” These ion clouds expand outward, which limits the number of ions that reach the detector, so fewer ions are analyzed. The ion funnel refocuses and transmits ions efficiently from those areas of high to low pressure. History The first ion funnel was created in 1997 in the Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory by researchers in Richard D. Smith's lab. The ion funnel was implemented to replace the ion transmission-limited skimmer for more efficient ion capture in an ESI source. Many characteristics of the ion funnel are attributed to the stacked ring ion guide; however, the disks of an ion funnel vary in diameter along its long axis. At the base of the ion funnel, a series of cylindrical ring electrodes with decreasing diameters accepts the spatially dispersed ion cloud entering the funnel. This allows for efficient transfer of the ion cloud through the conductance-limiting orifice at the exit, as the ion cloud becomes focused to a much smaller radial size. The DC electric field serves to push ions through the funnel. For positive ions, the front plate of the funnel has the most positive DC voltage, and subsequent plates have gradually decreasing DC components, providing added control. RF and DC electric fields are co-applied, with a pseudopotential created by alternating RF polarities on adjacent electrodes. This “pseudo-potential” radially confines ions and causes instability in ions with a lower m/z (mass-to-charge ratio), while ions with a higher m/z are focused to the center of the funnel. The initial ion funnel design used in the Smith research lab proved inefficient for collecting ions with low m/z. Simulations suggest that decreasing the spacing between the lenses so that they are less than the diameter of the smallest ring electrode could be a plausible solution to this problem. Another issue with the design is that the funnel is susceptible to noise from fast neutrals and charged droplets at many atmospheric interfaces during the initial vacuum phase. Modifications increase the efficiency and signal-to-noise ratio of the ion funnel. Some of the earliest ion funnels struggled to control gas flow, as the pressure in the ion vacuum chamber was not uniform due to gas dynamic effects. The pressure at the funnel's exit was estimated to be 2 to 3 times higher than the pressure from the pressure gauge. The higher pressure required greater pumping in downstream vacuum chambers to compensate for the larger injection of gas. The discrepancy between the measured pressure and the pressure at the exit of the funnel was caused by a sizable portion of the supersonic gas jet from the injector continuing beyond the Mach disk or shock diamond at the beginning of the funnel and continuing through until the end. 
The most effective resolution is the use of a jet disrupter, which consists of a 9 mm diameter brass disk suspended perpendicular to the gas flow in the center of the ion funnel. Applications Mass spectrometry Ion funnels are frequently used in mass spectrometry instruments to collect ions from an ionization source. Previous devices lacking an ion funnel often lost ions during the transition from the ionization source to the detector of the mass spectrometer. This loss was due to the increasing number of collisions undergone by ions with other gas molecules present in the atmosphere. The introduction of the ion funnel greatly reduced the number of ions lost during experiments by guiding ions towards a desired destination and, through modification of the number of inlets, can also increase the sensitivity of measurements taken by the mass spectrometer. Multiple inlets allow multiple electrospray emitters, reducing the flow through each individual emitter. This creates many highly efficient electrosprays at low flow rates. Multiple inlets also improve sensitivity, with a linear array of 19 electrospray emitters coupled to 19 inlets operating at 18 Torr giving a nine-fold increase compared to a single inlet. Proton transfer reaction chamber Proton transfer reaction mass spectrometry has traditionally used drift tubes as ion traps. However, radio frequency ion funnels offer an attractive alternative, as they improve compound-specific sensitivity significantly. This is due to increasing the effective reaction time and focusing the ions. The same pressure ranges are required for ion funnels and drift tubes, so the technology is not difficult to implement. Ion funnels have been shown to favor transmission of ions with high m/z. Breath analysis Breath analysis is a convenient and non-invasive way to detect chemicals in the body, for example to determine intoxication from alcohol content, to monitor the levels of anesthetics during surgical procedures, and to identify performance-enhancing substances in athletes. However, conventional techniques are ineffective at low concentrations. An electrospray ionization interface assisted by an ion funnel used in a linear trap quadrupole Fourier-transform ion cyclotron resonance mass spectrometer was shown to greatly increase sensitivity with high resolution. See also Electrostatic lens Reflectron References Mass spectrometry Ions
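An illustrative geometry and voltage sketch of the funnel described above; the numbers of rings, diameters and voltages are assumed values for illustration only, not specifications of any real instrument:

def funnel_electrodes(n_rings=24, d_entrance_mm=25.0, d_exit_mm=2.5,
                      v_front=200.0, v_back=10.0):
    # Ring inner diameters taper toward the exit; the DC voltage decreases
    # plate by plate (for positive ions), and RF is applied with alternating
    # polarity on adjacent electrodes to create the confining pseudopotential.
    rings = []
    for i in range(n_rings):
        frac = i / (n_rings - 1)
        rings.append({
            "inner_diameter_mm": d_entrance_mm + frac * (d_exit_mm - d_entrance_mm),
            "dc_volts": v_front + frac * (v_back - v_front),
            "rf_phase_deg": 0 if i % 2 == 0 else 180,
        })
    return rings

for ring in funnel_electrodes()[:3]:
    print(ring)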
Ion funnel
[ "Physics", "Chemistry" ]
1,114
[ "Matter", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Ions" ]
53,652,118
https://en.wikipedia.org/wiki/Karen%20Chan
Karen Chan is an associate professor at the Technical University of Denmark. She is a Canadian and French physicist most notable for her work on catalysis, electrocatalysis, and electrochemical reduction of carbon dioxide. Education Chan earned her B.Sc. in Chemical Physics in 2007 and her PhD in Chemistry in 2013 from Simon Fraser University under Michael Eikerling. Academic career Chan is known for her theoretical and computational work on the description of solid-liquid interfaces, electrocatalysis, batteries, and heterogeneous catalysis. Her work on computer simulations of the electrical double-layer and electrocatalysis has led to new ideas and understanding of, for instance, electrochemical carbon dioxide reduction, and water electrolysis. Following the completion of her PhD, she served as a postdoctoral researcher at Stanford University and in 2016 was promoted to staff scientist at SLAC National Accelerator Laboratory. In October 2018, she began serving as an associate professor at the Technical University of Denmark. References External links 1984 births Living people Canadian women physicists Catalysis Simon Fraser University alumni French women physicists Stanford University people Academic staff of the Technical University of Denmark
Karen Chan
[ "Chemistry" ]
232
[ "Catalysis", "Chemical kinetics" ]
37,956,283
https://en.wikipedia.org/wiki/WI-38
WI-38 is a diploid human cell line composed of fibroblasts derived from lung tissue of a 3-month-gestation female fetus. The fetus came from the elective abortion of a Swedish woman in 1963. The cell line was isolated by Leonard Hayflick the same year, and has been used extensively in scientific research, with applications ranging from developing important theories in molecular biology and aging to the production of most human virus vaccines. The uses of this cell line in human virus vaccine production is estimated to have saved the lives of millions of people. History The WI-38 cell line stemmed from earlier work by Hayflick growing human cell cultures. In the early 1960s, Hayflick and his colleague Paul Moorhead at the Wistar Institute in Philadelphia, Pennsylvania discovered that when normal human cells were stored in a freezer, the cells remembered the doubling level at which they were stored and, when reconstituted, began to divide from that level to roughly 50 total doublings (for cells derived from fetal tissue). Hayflick determined that normal cells gradually experience signs of senescence as they divide, first slowing before stopping division altogether. This finding is the basis for the Hayflick limit, which specifies the number of times a normal human cell population will divide before cell division stops. Hayflick's discovery later contributed to the determination of the biological roles of telomeres. Hayflick claimed that the finite capacity of normal human cells to replicate was an expression of aging or senescence at the cellular level. During this period of research, Hayflick also discovered that if cells were properly stored in a freezer, cells would remain viable and that an enormous number of cells could be produced from a single starting culture. One of the cell strains that Hayflick isolated, which he named WI-38, was found to be free of contaminating viruses, unlike the primary monkey kidney cells then in use for virus vaccine production. In addition, WI-38 cells could be frozen, then thawed and exhaustively tested. These advantages led to WI-38 quickly replacing primary monkey kidney cells for human virus vaccine production. WI-38 has also been used for research on numerous aspects of normal human cell biology. Applications WI-38 was invaluable to early researchers, especially those studying virology and immunology, since it was a readily available cell line of normal human tissue. Unlike the HeLa cell line, which were cancerous cells, WI-38 was a normal human cell population. Researchers in labs across the globe have since used WI-38 in their discoveries, most notably Hayflick in his development of human virus vaccines. Infected WI-38 cells secrete the virus, and can be cultured in large volumes suitable for commercial production. Virus vaccines produced in WI-38 have prevented disease or saved the lives of billions of people. Vaccines produced in WI-38 include those made against adenoviruses, rubella, measles, mumps, varicella zoster, poliovirus, hepatitis A and rabies. Genome sequence The WI-38 cell line was one of the first cell lines whose diploid genome was sequenced. This is critical because most human genome sequences have not been resolved to chromosome level, that is, it remained largely unclear which genetic variant is on which of the two chromatids. Besides being an important cell line for experimental studies (e.g. on aging), the WI-38 line is believed to have remained diploid since it was originally established in 1961. 
Nearly 60 years later, karyotyping by Soifer et al. (2020) showed that the WI-38 genome has not acquired major rearrangements such as translocations. More importantly, the de novo phased assembly confirms that the genome has in fact remained diploid and retained its heterozygosity throughout. It is therefore a good model for genome sequencing and serves as another reference genome. See also Use of fetal tissue in vaccine development MRC-5 References External links Cellosaurus entry for WI-38 Medical research: Cell division, by Meredith Wadman, 26. Jun 2013, Nature Human cell lines Cellular senescence 1960s in biology 1960s establishments in Pennsylvania History of medicine in the United States Vaccination Lung
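A back-of-the-envelope illustration of the doubling arithmetic behind the Hayflick limit mentioned above; only the roughly 50-doubling limit for fetal-derived cells and the idea that frozen cells resume from their stored doubling level come from the article, the specific numbers below are illustrative:

HAYFLICK_LIMIT = 50   # approximate total doublings for fetal-derived cells

def cells_after(doublings, starting_cells=1):
    return starting_cells * 2 ** doublings

print(cells_after(HAYFLICK_LIMIT))   # ~1.1e15 cells from one cell, in principle
# A culture frozen at doubling 20 resumes from that level, leaving 30 doublings:
remaining = HAYFLICK_LIMIT - 20
print(remaining, cells_after(remaining))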
WI-38
[ "Biology" ]
874
[ "Senescence", "Cellular senescence", "Vaccination", "Cellular processes" ]
37,956,397
https://en.wikipedia.org/wiki/Jacobson%E2%80%93Bourbaki%20theorem
In algebra, the Jacobson–Bourbaki theorem is a theorem used to extend Galois theory to field extensions that need not be separable. It was first established for commutative fields and later extended to non-commutative fields, with the result credited to unpublished work by Nicolas Bourbaki. The extension of Galois theory to normal extensions is called the Jacobson–Bourbaki correspondence, which replaces the correspondence between some subfields of a field and some subgroups of a Galois group by a correspondence between some sub division rings of a division ring and some subalgebras of an associative algebra. The Jacobson–Bourbaki theorem implies both the usual Galois correspondence for subfields of a Galois extension, and Jacobson's Galois correspondence for subfields of a purely inseparable extension of exponent at most 1. Statement Suppose that L is a division ring. The Jacobson–Bourbaki theorem states that there is a natural 1:1 correspondence between: Division rings K in L of finite index n (in other words L is a finite-dimensional left vector space over K). Unital K-algebras of finite dimension n (as K-vector spaces) contained in the ring of endomorphisms of the additive group of K. The sub division ring and the corresponding subalgebra are each other's commutants. The theorem has also been extended to sub division rings that might have infinite index, which correspond to closed subalgebras in the finite topology. References Field (mathematics) Theorems in algebra
Jacobson–Bourbaki theorem
[ "Mathematics" ]
332
[ "Theorems in algebra", "Mathematical theorems", "Mathematical problems", "Algebra" ]
37,957,427
https://en.wikipedia.org/wiki/Mass%20spectrometric%20immunoassay
Mass spectrometric immunoassay (MSIA) is a rapid method used to detect and/or quantify antigen and/or antibody analytes. The method uses affinity isolation of the analyte (through either antigens or antibodies) to extract targeted molecules and internal standards from biological fluid in preparation for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). This method allows for "top down" and "bottom up" analysis. This sensitive method allows for a new and improved process for detecting multiple antigens and antibodies in a single assay. The assay is also capable of distinguishing mass-shifted forms of the same molecule via a panantibody, as well as distinguishing point mutations in proteins. Each specific form is detected uniquely based on its characteristic molecular mass. MSIA has dual specificity because of the antibody-antigen reaction coupled with the power of a mass spectrometer. Various other immunoassay techniques have been used previously, such as radioimmunoassay (RIA) and enzyme immunoassay (EIA and ELISA). These techniques are extremely sensitive; however, they have many limitations. For example, quantification by ELISA and EIA requires several hours because the binding has to reach equilibrium. RIA's disadvantage is that it requires radioactive labels, which are well known to be carcinogens. The creation of MSIA fulfilled the need to determine the presence of one or more antigens in a specimen as well as to quantify those species. History This assay was patented in 2006 by Randall Nelson, Peter Williams and Jennifer Reeve Krone. The idea first came about with the development of ELISA and RIA. An earlier patented method suggested tagging antigens or antibodies with stable isotopes or long-lived radioactive elements, but limitations of both methods called for better ways of detecting a protein or proteins. The invention combines antigen-antibody binding with a mass spectrometer, which together allow analytes to be identified qualitatively and quantified. An early MSIA experiment was done on a venom-laced human blood sample for the antigen myotoxin. The experiment was successful in that the mass spectrum resulting from the analysis showed a distinct response for myotoxin at the molecular weight corresponding to 4,822 Da. The m/z ratio at 5,242 Da corresponds to the molecular weight of the modified variant H-myotoxin a, used as an internal reference species. Methodology Analytes in a biological liquid sample are collected from solution using an MSIA tip (also known as an MSIA microcolumn) that contains a derivatized affinity frit. Biological samples contain various proteins that span a wide dynamic range, so purification is needed to minimize the complex matrix and maximize mass spectrometry sensitivity. The MSIA tip serves to purify these samples by immobilizing the analyte with high selectivity and specificity. Analytes are bound to the frit based on their affinities, and all other nonspecific molecules are rinsed away. The specific targets are then eluted onto a mass spectrometer target plate with a MALDI matrix. Proteins may also be digested prior to MS analysis. MALDI-TOF-MS analysis then follows, and targeted analytes are detected based on their m/z values. 
This method is qualitative, but the addition of mass-shifted variants of the analyte for use as an internal standard makes it useful for quantitative analysis. Pipettor tips, which have been termed MSIA tips or affinity pipette tips, play a key role in the process of detecting analytes within biological samples. MSIA tips typically contain a porous solid support to which derivatized antigens or antibodies are covalently attached. Different analytes have different affinities for the tips, so it is necessary to derivatize MSIA tips based on the analyte of interest. The main use of these tips is to flow samples through them; the analyte's affinity for the bound antigen/antibody allows it to be captured. Nonspecifically bound compounds are rinsed out of the MSIA tips. The process can be simplified into six steps, which Thermo termed the "work flow": (1) Gather Sample, (2) Load Affinity Ligand, (3) Purify Target Analyte, (4) Elute Target Analyte, (5) Pre-MS Sampling Process, (6) MS Analysis. Many "work flows" are commercially available for purchase. Applications MSIA can be used as an assay for a variety of different molecules such as proteins, hormones, drugs, toxins, and various pathogens found in biological fluids (human and animal plasma, saliva, urine, tears, etc.). MSIA has also been applied to clinical samples and has been proven to be a unique assay for clinically relevant proteins. Successfully assaying toxins, drugs and other pathogens is important for the environment as well as the human body. MSIA can be used for a range of biomedical and environmental applications. An important application of mass spectrometric immunoassay is rapid, sensitive and accurate screening of apolipoproteins and their mutations. Apolipoproteins represent a group of proteins with many functions such as transport and clearance as well as enzyme activation. Recent studies have claimed that mutations in apolipoproteins result in, or assist in the progression of, various associated diseases including amyloidosis, amyloid cardiomyopathy, Alzheimer's disease, hypertriglyceridemia, lowered cholesterol, hyperlipidemia and atherosclerosis, to name a few. Nelson and colleagues did a study using MSIA to characterize and isolate apolipoprotein species. Benefits There are many benefits to using a mass spectrometric immunoassay. Most importantly, the assay is extremely fast, automated, and yields reproducible data. It is sensitive, precise and allows for absolute quantification. Analytes can be detected at low detection limits (as low as picomolar) and the assay covers a wide dynamic range. See also Immunoassay Immunoscreening SISCAPA References Biochemistry methods Immunologic tests Mass spectrometry
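A simplified sketch of internal-standard quantification as described above; it assumes equal detector response for the analyte and its mass-shifted internal standard (real assays calibrate for this), and the peak intensities and concentration below are made-up numbers:

def quantify(analyte_intensity, standard_intensity, standard_conc):
    # Concentration estimated from the ratio of the analyte peak to the
    # internal-standard peak, each read at its characteristic m/z.
    return (analyte_intensity / standard_intensity) * standard_conc

# e.g. peaks picked near the two molecular weights noted in the article
# (4,822 Da myotoxin and the 5,242 Da internal reference species):
analyte_peak, standard_peak = 1.8e4, 2.4e4   # arbitrary detector counts
print(quantify(analyte_peak, standard_peak, standard_conc=10.0))   # -> 7.5, in the standard's units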
Mass spectrometric immunoassay
[ "Physics", "Chemistry", "Biology" ]
1,337
[ "Biochemistry methods", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Immunologic tests", "Mass spectrometry", "Biochemistry", "Matter" ]
37,957,946
https://en.wikipedia.org/wiki/De%20novo%20protein%20synthesis%20theory%20of%20memory%20formation
The de novo protein synthesis theory of memory formation is a hypothesis about the formation of the physical correlates of memory in the brain. It is widely accepted that the physiological correlates for memories are stored at the synapse between various neurons. The relative strength of various synapses in a network of neurons forms the memory trace, or ‘engram,’ though the processes that support this finding are less thoroughly understood. The de novo protein synthesis theory states that the production of proteins is required to initiate and potentially maintain these plastic changes within the brain. It has much support within the neuroscience community, but some critics claim that memories can be made independent of protein synthesis. History Originally, protein synthesis inhibitors (PSI) were only used as antibiotics. Through various mechanisms unique to each PSI, they would inhibit the synthesis of proteins, generally at the translational level. They achieved renown within the biological scientific community when research on protein synthesis required PSIs to investigate certain physiological processes. Through this line of research, it was found that injection of PSIs into the hippocampus resulted in amnesia: the memories undergoing consolidation at the time of injection were lost. After the injection, the animals (generally rats) would have their memories retested, and, as a consequence of interrupted memory consolidation, they reacted to a familiar situation as though they were in a novel environment. This gave rise to the de novo protein synthesis theory: the formation of a long-term memory requires the synthesis of new proteins. Eric Kandel established many of the biochemical markers of learning and memory in the Aplysia (California sea slug) in the 1970s, and his findings suggested potential pathways involving protein synthesis. He won the Nobel Prize in 2000 for his research. In the same year, Nader published his findings about the lability of retrieved memories that had already undergone consolidation. For example, memories of past events are examples of memories that have already been consolidated. Nader discovered that, in the process of remembering, retrieved memories that became reactivated would require consolidation again. Various factors could interrupt this process; without protein synthesis, memory re-consolidation would not occur, potentially resulting in the loss of the retrieved memory. This has been known as the reconsolidation theory of memory, which states that, after reactivation, memories undergo a process similar to initial consolidation to return them to their permanent state. Since then, a wealth of research has been done to clarify the mechanisms, genes, and proteins involved in the physiological correlate of memory. Protein synthesis inhibitors Protein synthesis inhibitors are a class of antibiotics, which prevent the production of new proteins by inhibiting the cell's gene expression ("Protein synthesis inhibitors", PSI). They generally operate at the ribosomal level through various mechanisms that prevent the ribosome from completing translation. Protein synthesis inhibitors that work in prokaryotic cells are often used as clinically prescribed antibiotics, while those that act on eukaryotic cells have been adapted for research purposes. In research, commonly used PSIs include anisomycin, cycloheximide, and puromycin, although the use of puromycin has recently stopped because of its toxic qualities and numerous side effects. 
Anisomycin has relatively high effectiveness in inhibiting protein synthesis and has a large effective time window. Cycloheximide is frequently used in acute studies, because of its high level of inhibition and ease of reversibility. Physiological changes Long term potentiation A line of research investigates long term potentiation (LTP), a process that describes how a memory can be consolidated between two neurons, or brain cells, ultimately by creating a circuit within the brain that can encode a memory. To initiate a learning circuit between two neurons, one prominent study described using tetanus stimulations to depolarize one neuron by 30 mV, which, in turn, activated its NMDA glutamate receptors. The activation of these receptors resulted in Ca2+ flooding the cell, initiating a cascade of secondary messengers. The cascade of resulting reactions, brought about by secondary messengers, terminates with the activation of cAMP response element-binding protein (CREB), which acts as a transcription factor for various genes and initiates their expression. Some proponents argue that the genes stimulate changes in communication between neurons, which underlie the encoding of memory; others suggest that the genes are byproducts of the LTP signaling pathway and are not directly involved in LTP. However, following the cascade of secondary messengers, no one would dispute that more AMPA receptors appear in the postsynaptic terminal. Higher numbers of AMPA receptors, taken together with the aforementioned events, allow for increased firing potential in the postsynaptic cell, which creates an improved learning circuit between these two neurons. Because of the specific, activity-dependent nature of LTP, it is an ideal model for a neural correlate of memory, as postulated by numerous studies; together, these studies show that the abolishment of LTP prevents the formation of memory at the neuronal level. Systems consolidation Systems consolidation is the process by which memories are shifted from a vulnerable state to a fairly permanent one. It also describes roles that certain brain structures, most notably the hippocampus, play in memory consolidation and the extent to which certain types of memories can be consolidated. LTP describes cellular-level consolidation, which is the consolidation of a memory that occurs between individual neurons. Initially, cellular consolidation, or LTP, begins in the hippocampus; there, protein synthesis inhibitors, tetrodotoxin, lidocaine, lesions and other factors can interfere with hippocampal activity and cause memory deficits. The systems consolidation theory of memory is usually investigated by studying the loss of memory for past events (retrograde amnesia) that occurs as a result of damage to the hippocampus, which is involved in systems consolidation. Retrograde amnesia can be either temporally graded (older memories are affected less) or flat (all memories, regardless of age, are affected equally), depending on the type of memory encoded and the extent of hippocampal damage. Semantic memory Semantic memories (memories of facts) are one type of memory that is theorized to undergo complete systems consolidation in the hippocampus. Complete systems consolidation can eventually render semantic memories permanent, at which point they become independent from the hippocampus. 
There is evidence of semantic memories existing independently of any brain structure, especially when considering that the damage retrograde amnesia inflicts on semantic memory is temporally graded: there is a higher probability of older memories being retained even when the hippocampus is completely damaged. Newer semantic memories show a more variable likelihood of being retained, as they can be affected by minimal or complete destruction of the hippocampus. Episodic memory Episodic memories (memories of moments or events) are a type of memory that may not undergo complete systems consolidation; as a result, they remain entirely dependent on the hippocampus. Therefore, they cannot exist independently of any brain structures, unlike semantic memories. Evidence shows that complete hippocampal damage results in flat retrograde amnesia for episodic memories, including older memories. However, if the hippocampus is only partially damaged, then it is possible for the amnesia to have a temporal gradient, similar to one seen with semantic memories: older memories are more likely to be retained and newer memories less. Sleep and systems consolidation The mechanism for systems consolidation is unknown, but it has been established that protein synthesis must occur in the cortex, where the hippocampus-independent memory is stored, and that sleep is likely to play a role in systems consolidation. Many genes are upregulated during sleep, and therefore there is a possibility that protein synthesis is active in sleep-consolidation. It remains to be seen if cortical consolidation uses the same mechanisms as the hippocampus to establish the memory trace. Proposed de novo proteins Once it was established that proteins were involved in the formation of memories, and an understanding of how the processes surrounding the proteins worked was formed, the next stage was to identify candidates for plasticity-related proteins (proteins that would support the plastic changes between neurons, PRP). While many molecules, proteins and enzymes have been implicated in the associated processes of memory, identifying the particular proteins that are synthesized specifically to facilitate memory is a challenge. Listed below are the most common candidates for PRPs that support memory and learning functions. PKMzeta In 2011 Todd Sacktor proposed a model for how de novo protein synthesis modulates plasticity. Protein Kinase M zeta (PKMzeta) is a plasticity-related protein that regulates the physiological processes that underlie learning and memory in Sacktor's model. PKMzeta is an isoform of protein kinase C, which differs in that it does not have an auto-inhibitory domain, so it does not require high levels of substrate and remains perpetually active. PKMzeta mRNA is transported to the synaptic zones of the dendrites, where it is translated through the activity of multiple signaling pathways associated with LTP. After expression, PKMzeta requires an initial phosphorylation by phosphoinositide-dependent protein kinase 1 (PDK1), after which it can operate uninhibited. Protein interacting with C kinase 1 (PICK1) normally propagates the endocytic removal of AMPA receptors containing the GluR2 subunit from the postsynaptic regions. N-ethylmaleimide-sensitive factor (NSF) can disrupt the binding of PICK1 to the C-terminus of the AMPA receptors. 
This allows PKMzeta to phosphorylate the receptors, which traffics them to the synapse and enables easier excitability of the neuron. When the receptors are in the membrane, a tyrosine-dense binding site in the GluR2 AMPA receptors is used by brefeldin-resistant Arf-GEF 2 (BRAG2) to actively remove them from the synapse, where they are maintained in vesicles by PICK1. PKMzeta continuously phosphorylates the GluR2 AMPA receptors to maintain their presence within the synaptic membrane. There have been many studies to confirm the roles of each of these molecules, though there is always doubt and speculation about alternative processes. PKMzeta is a good model for the de novo protein synthesis hypothesis. The effects of LTP summate to allow PKMzeta to be expressed, which requires ribosomal activity in the dendrites. Blocking translation or transcription of proteins would prevent PKMzeta from being expressed, preventing the strengthening of neuronal networks that underlie a memory. Because of its long half-life, the maintenance of receptors at a synapse is not affected by PSIs. But the creation of a new memory would require new PKMzeta expression, which accounts for the specificity of PSI-induced amnesia. Brain-derived neurotrophic factor Brain-derived neurotrophic factor (BDNF) is a neurotrophin associated with plasticity and growth of the central nervous system. It is a PRP candidate because its expression is closely related to activity, and abnormalities in its translation and signaling result in L-LTP deficits and amnesia. BDNF has been shown to enhance the activity of early LTP, but the longer-lasting phases of LTP are thought to require protein synthesis. BDNF translation inhibition through PSIs has shown the characteristic LTP blocking and amnesia, which has been followed up with genetic knockouts of the BDNF gene. In these BDNF-deficient animals, the application of external BDNF can allow for the induction of LTP. There have been cases where BDNF need not be present for the induction of LTP, suggesting that there may in fact be many parallel PRP pathways that lead to memory formation. BDNF and PKMzeta have some interaction effects. When LTP was induced in cell cultures in BDNF-dependent ways (theta burst stimulation or an increase in cAMP concentration), it was abolished with the application of ZIP (zeta-inhibitory peptide), a protein thought to specifically inactivate PKMzeta. This suggests that PKMzeta is the end modulator of LTP and learning. As expected, PKMzeta levels dropped when PSIs were applied, but curiously this was not the case if BDNF was also applied. These findings show that BDNF modulates the LTP process to make it protein synthesis-independent, contrary to the de novo protein synthesis theory. Criticisms Electrical activity When anisomycin is applied to the hippocampus, active memories are unable to fully consolidate and are lost. When anisomycin is applied to cell cultures, electrical activity within the cultures ceases. This particular property of PSIs was not accounted for when the de novo protein synthesis theory was established, and is an alternative explanation for the amnesiac effects of PSIs. If a neuron is not electrically active, it is not transmitting information; therefore, the lack of electrical activity in the neuron by itself could be responsible for the loss of a memory. Anisomycin administered at a dose that inhibits 95% of protein synthesis and associated electrical activity is not the highest dosage used in PSI research. 
Higher doses may alter processes other than protein synthesis to cause the silencing of neural activity; given that puromycin has cytotoxic qualities, it is possible that other PSIs have similar effects that manifest in the interruption of neural firing. Additionally, anisomycin has been shown to cause a substantial catecholamine release that co-occurs with neural suppression, which has not been fully explained yet. These side effects, beyond the inhibition of protein synthesis, may account for the amnesiac effects induced by PSIs, but these findings are relatively new and are expected to receive much research attention in the near future. Memory formation and LTP independent of protein synthesis Demonstrating that memories can be formed, and that LTP can be initiated, without protein synthesis strongly reduces the strength of the de novo theory, which explicitly states that synthesis is required to form memories. As a result, many studies have shown various ways of inducing these events while specimens are under the effects of anisomycin or other protein synthesis inhibitors. Cell cultures treated with BDNF in the presence of PSIs still undergo LTP, suggesting that post-translational modifications such as phosphorylation or horizontal transport could be employed in the absence of protein synthesis. Additionally, ZIP has amnesiac effects, but its specificity for PKMzeta has been questioned, which calls the accuracy of the PKMzeta model into question. References Further reading Protein biosynthesis Neuroscience of memory
De novo protein synthesis theory of memory formation
[ "Chemistry" ]
3,063
[ "Protein biosynthesis", "Gene expression", "Biosynthesis" ]
37,959,176
https://en.wikipedia.org/wiki/Nb.BbvCI
Nb.BbvCI is a nicking endonuclease used to cut one strand of double-stranded DNA. It has been successfully used to incorporate fluorochrome-labeled nucleotides into specific spots of a DNA sequence via nick translation. References Biotechnology Molecular biology
Nb.BbvCI
[ "Chemistry", "Biology" ]
59
[ "Biotechnology", "Molecular biology stubs", "nan", "Molecular biology", "Biochemistry" ]
37,959,983
https://en.wikipedia.org/wiki/Korte%27s%20third%20law%20of%20apparent%20motion
In psychophysics, Korte's third law of apparent motion is an observation relating the phenomenon of apparent motion to the distance and duration between two successively presented stimuli. Formulation Korte's four laws were first proposed in 1915 by Adolf Korte. The third law, particularly, describes how the increase in distance between two stimuli narrows the range of interstimulus intervals (ISI) that produce apparent motion. It holds that, as the ISI increases, the frequency at which the two stimuli are activated in alternation must decrease proportionally to preserve the quality of the apparent motion. One identified violation of Korte's law occurs if the shortest path between seen arm positions is not possible anatomically. This was demonstrated by Maggie Shiffrar and Jennifer Freyd using a picture that showed a woman demonstrating two positions, which highlighted the problem of taking the shortest path to perform the alternating postures. The laws were composed of general statements (laws) describing beta movement in the sense of "optimal motion". These outlined several constraints for obtaining the percept of apparent motion between flashes: "(1) larger separations require higher intensities, (2) slower presentation rates require higher intensities, (3) larger separations require slower presentation rates, (4) longer flash durations require shorter intervals." A modern formulation of the law is that the greater the length of a path between two successively presented stimuli, the greater the stimulus onset asynchrony (SOA) must be for an observer to perceive the two stimuli as a single mobile object. Typically, the relationship between distance and minimal SOA is linear. Arguably, Korte's third law is counterintuitive. One might expect that successive stimuli are less likely to be perceived as a single object as both distance and interval increase, and therefore, a negative relationship should be observed instead. In fact, such a negative relationship can be observed alongside Korte's law; which relationship holds depends on speed. Korte's law also implies a constancy of velocity through apparent motion, which the data are said not to support. References Psychophysics
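A toy illustration of the modern linear formulation of the third law; the slope and intercept below are made-up parameters chosen only to show the direction of the relationship, not empirical values:

def min_soa_ms(separation_deg, intercept_ms=30.0, slope_ms_per_deg=8.0):
    # Minimum stimulus onset asynchrony needed to see the two flashes as one
    # moving object, assumed to grow linearly with their spatial separation.
    return intercept_ms + slope_ms_per_deg * separation_deg

for separation in (1.0, 4.0, 8.0):
    print(separation, min_soa_ms(separation))
# Larger separations require longer SOAs, as Korte's third law states.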
Korte's third law of apparent motion
[ "Physics" ]
444
[ "Psychophysics", "Applied and interdisciplinary physics" ]
39,403,556
https://en.wikipedia.org/wiki/Free%20energy%20principle
The free energy principle is a theoretical framework suggesting that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model and the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action. Overview In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled. It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience. The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems is known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system). The free energy principle is based on the Bayesian idea of the brain as an “inference engine.” Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sense and associated perception. This difference is quantified by variational free energy and is minimized by continuous correction of the world model of the system, or by making the world more like the predictions of the system. By actively changing the world to make it closer to the expected state, systems can also minimize the free energy of the system. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods. The free energy principle is a mathematical principle of information physics: much like the principle of maximum entropy or the principle of least action, it is true on mathematical grounds. To attempt to falsify the free energy principle is a category mistake, akin to trying to falsify calculus by making empirical observations. (One cannot invalidate a mathematical theory in this way; instead, one would need to derive a formal contradiction from the theory.) In a 2018 interview, Friston explained what it entails for the free energy principle to not be subject to falsification: "I think it is useful to make a fundamental distinction at this point—that we can appeal to later. 
The distinction is between a state and process theory; i.e., the difference between a normative principle that things may or may not conform to, and a process theory or hypothesis about how that principle is realized. Under this distinction, the free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle. On the other hand, hypotheses that the brain performs some form of Bayesian inference or predictive coding are what they are—hypotheses. These hypotheses may or may not be supported by empirical evidence." There are many examples of these hypotheses being supported by empirical evidence. Background The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world. However, free energy is also an upper bound on the self-information of outcomes, where the long-term average of surprise is entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples. Relationship to other theories Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action. Active inference allowing for scale invariance has also been applied to other theories and domains. For instance, it has been applied to sociology, linguistics and communication, semiotics, and epidemiology among others. Negative free energy is formally equivalent to the evidence lower bound, which is commonly used in machine learning to train generative models, such as variational autoencoders. Action and perception Active inference applies the techniques of approximate Bayesian inference to infer the causes of sensory data from a 'generative' model of how that data is caused and then uses these inferences to guide action. Bayes' rule characterizes the probabilistically optimal inversion of such a causal model, but applying it is typically computationally intractable, leading to the use of approximate methods. 
In active inference, the leading class of such approximate methods are variational methods, for both practical and theoretical reasons: practical, as they often lead to simple inference procedures; and theoretical, because they are related to fundamental physical principles, as discussed above. These variational methods proceed by minimizing an upper bound on the divergence between the Bayes-optimal inference (or 'posterior') and its approximation according to the method. This upper bound is known as the free energy, and we can accordingly characterize perception as the minimization of the free energy with respect to inbound sensory information, and action as the minimization of the same free energy with respect to outbound action information. This holistic dual optimization is characteristic of active inference, and the free energy principle is the hypothesis that all systems which perceive and act can be characterized in this way. In order to exemplify the mechanics of active inference via the free energy principle, a generative model must be specified, and this typically involves a collection of probability density functions which together characterize the causal model. One such specification is as follows. The system is modelled as inhabiting a state space X, in the sense that its states form the points of this space. The state space is then factorized according to X = Ψ × S × A × R, where Ψ is the space of 'external' states that are 'hidden' from the agent (in the sense of not being directly perceived or accessible), S is the space of sensory states that are directly perceived by the agent, A is the space of the agent's possible actions, and R is a space of 'internal' states that are private to the agent. In the following, the states ψ, s, a and μ are functions of (continuous) time t. The generative model is the specification of the following density functions: A sensory model, often written as p(s | ψ, a), characterizing the likelihood of sensory data given external states and actions; a stochastic model of the environmental dynamics, often written p(ψ | ψ', a), characterizing how the external states are expected by the agent to evolve over time, given the agent's actions; an action model, written p(a | μ, s), characterizing how the agent's actions depend upon its internal states and sensory data; and an internal model, written p(μ | s), characterizing how the agent's internal states depend upon its sensory data. These density functions determine the factors of a "joint model", which represents the complete specification of the generative model, and which can be written as p(ψ, s, a, μ). Bayes' rule then determines the "posterior density" p(ψ | s, a, μ), which expresses a probabilistically optimal belief about the external state given the preceding state and the agent's actions, sensory signals, and internal states. Since computing this posterior is computationally intractable, the free energy principle asserts the existence of a "variational density" q(ψ | μ), which is an approximation to p(ψ | s, a, μ). One then defines the free energy as the expected negative log joint density under q minus the entropy of q, F(μ, a; s) = E_q[−ln p(ψ, s, a, μ)] − H[q(ψ | μ)], and defines action and perception as the joint optimization problem of minimizing F over μ and a, where the internal states μ are typically taken to encode the parameters of the 'variational' density and hence the agent's "best guess" about the posterior belief over the external states. Note that the free energy is also an upper bound on a measure of the agent's (marginal, or average) sensory surprise, and hence free energy minimization is often motivated by the minimization of surprise. 
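A small numerical sketch of these ideas on a toy discrete model (the prior and likelihood values are my own, and the decomposition used is the standard "complexity minus accuracy" form): the variational free energy is minimised exactly when the variational density equals the Bayesian posterior, at which point it equals the negative log evidence, i.e. the surprise of the observation.

import numpy as np

prior = np.array([0.7, 0.3])        # p(psi) over two hidden states
likelihood = np.array([0.2, 0.9])   # p(s | psi) for one observed datum s

def free_energy(q):
    complexity = np.sum(q * np.log(q / prior))     # KL(q || prior)
    accuracy = np.sum(q * np.log(likelihood))      # E_q[log p(s | psi)]
    return complexity - accuracy

posterior = prior * likelihood
evidence = posterior.sum()
posterior /= evidence

print(free_energy(posterior))            # equals -log evidence ...
print(-np.log(evidence))                 # ... the surprise of the observation
print(free_energy(np.array([0.5, 0.5]))) # any other q gives a strictly larger value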
Free energy minimisation Free energy minimisation and self-organisation Free energy minimisation has been proposed as a hallmark of self-organising systems when cast as random dynamical systems. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states: This is because – under ergodic assumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with the second law of thermodynamics and the fluctuation theorem. However, formulating a unifying principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems with the risk of obscuring all features that make biological systems interesting kinds of self-organizing systems. Free energy minimisation and Bayesian inference All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference – when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., Kalman filtering). It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy: Models with minimum free energy provide an accurate explanation of data, under complexity costs (c.f., Occam's razor and more formal treatments of computational costs). Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data). Free energy minimisation and thermodynamics Variational free energy is an information-theoretic functional and is distinct from thermodynamic (Helmholtz) free energy. However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy (under the assumption the system is thermodynamically closed but not isolated). This is because if sensory perturbations are suspended (for a suitably long period of time), complexity is minimised (because accuracy can be neglected). At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by the principle of minimum energy. Free energy minimisation and information theory Free energy minimisation is equivalent to maximising the mutual information between sensory states and internal states that parameterise the variational density (for a fixed entropy variational density). This relates free energy minimization to the principle of minimum redundancy. Free energy minimisation in neuroscience Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. 
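Before turning to the neuronal process theories, the complexity–accuracy decomposition just described can be written out exactly for Gaussian densities. A minimal sketch, in which all parameter values are illustrative assumptions:

```python
import numpy as np

# Complexity/accuracy decomposition of free energy for a 1-D Gaussian model.
#   prior:       p(psi)   = N(m0, s0^2)
#   likelihood:  p(y|psi) = N(psi, sn^2)
#   variational: q(psi)   = N(mq, sq^2)
m0, s0 = 0.0, 1.0
sn = 0.5
mq, sq = 1.2, 0.4
y = 1.5                                   # observed datum

# Complexity: KL divergence between the variational density and the prior
complexity = (np.log(s0 / sq)
              + (sq**2 + (mq - m0)**2) / (2 * s0**2) - 0.5)

# Accuracy: expected log-likelihood of the data under q
accuracy = (-0.5 * np.log(2 * np.pi * sn**2)
            - ((y - mq)**2 + sq**2) / (2 * sn**2))

F = complexity - accuracy
print(f"complexity={complexity:.3f}  accuracy={accuracy:.3f}  F={F:.3f}")
# A lower F is obtained by explaining the datum accurately while keeping
# q close to the prior -- the Occam trade-off described in the text.
```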
The neuronal processes described by free energy minimisation depend on the nature of hidden states: that can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising variables, parameters, and precision correspond to inference, learning, and the encoding of uncertainty, respectively. Perceptual inference and categorisation Free energy minimisation formalises the notion of unconscious inference in perception and provides a normative (Bayesian) theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds to generalised Bayesian filtering (where ~ denotes a variable in generalised coordinates of motion and is a derivative matrix operator): Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions that is consistent with the anatomy and physiology of sensory and motor systems. Perceptual learning and memory In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain. Perceptual precision, attention and salience Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (c.f., Kalman gain). In neuronally plausible implementations of predictive coding, this corresponds to optimizing the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain. With regard to the top-down vs. bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular nature of the interplay between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down position. The model takes into account the transmission of prediction errors to the same level or a level above, in order to minimise the energy function that indicates the difference between the data and its cause, or, in other words, between the generative model and the posterior. To increase validity, they also incorporated neural competition between stimuli into their model. A notable feature of this model is the reformulation of the free energy function only in terms of prediction errors during task performance: where is the total energy function of the neural networks entail, and is the prediction error between the generative model (prior) and posterior changing over time. Comparing the two models reveals a notable similarity between their respective results while also highlighting a remarkable discrepancy, whereby – in the standard version of the SAIM – the model's focus is mainly upon the excitatory connections, whereas in the PE-SAIM, the inhibitory connections are leveraged to make an inference. 
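Setting the SAIM comparison aside for a moment, the precision-weighted gradient descent that underlies these predictive-coding schemes can be sketched for a single pair of levels. The generative mapping, precisions and learning rate below are illustrative assumptions; real implementations use hierarchies of such units with learned weights.

```python
# Minimal two-level predictive-coding sketch.
# Generative model: data u are predicted from a latent cause x via g(x),
# and x is itself predicted by a higher-level prior x_p.
g = lambda x: 2.0 * x            # assumed (fixed) generative mapping
x_p = 0.5                        # descending prediction from the level above
pi_u, pi_x = 4.0, 1.0            # precisions (inverse variances) of each level

u = 1.8                          # observed data
x = 0.0                          # current estimate of the cause
lr = 0.05

for _ in range(400):
    eps_u = pi_u * (u - g(x))    # ascending, precision-weighted sensory error
    eps_x = pi_x * (x - x_p)     # error on the descending prediction
    # The estimate moves to cancel both errors (gradient descent on free energy):
    x += lr * (2.0 * eps_u - eps_x)   # 2.0 = dg/dx for the assumed mapping

print("inferred cause:", round(x, 3))
# Raising pi_u (attention to the senses) pulls x toward u/2;
# raising pi_x pulls it toward the descending prediction x_p --
# precision acts as the gain on prediction errors described above.
```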
The model has also proved to be fit to predict the EEG and fMRI data drawn from human experiments with high precision. In the same vein, Yahya et al. also applied the free energy principle to propose a computational model for template matching in covert selective visual attention that mostly relies on SAIM. According to this study, the total free energy of the whole state-space is reached by inserting top-down signals in the original neural networks, whereby we derive a dynamical system comprising both feed-forward and backward prediction error. Active inference When gradient descent is applied to action , motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories. Active inference and optimal control Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow that are specified with scalar and vector value functions of state space (c.f., the Helmholtz decomposition). Here, is the amplitude of random fluctuations and cost is . The priors over flow induce a prior over states that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow, given a cost function, under the assumption that (i.e., the flow is curl free or has detailed balance). Usually, this entails solving backward Kolmogorov equations. Active inference and optimal decision (game) theory Optimal decision problems (usually formulated as partially observable Markov decision processes) are treated within active inference by absorbing utility functions into prior beliefs. In this setting, states that have a high utility (low cost) are states an agent expects to occupy. By equipping the generative model with hidden states that model control, policies (control sequences) that minimise variational free energy lead to high utility states. Neurobiologically, neuromodulators such as dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error. This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se and related computational accounts. Active inference and cognitive neuroscience Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including action observation, mirror neurons, saccades and visual search, eye movements, sleep, illusions, attention, action selection, consciousness, hysteria and psychosis. Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' that it cannot update, leading to actions that cause these predictions to come true. See also Constructal law - Law of design evolution in nature, animate and inanimate References External links Behavioral and Brain Sciences (by Andy Clark) Biological systems Systems theory Computational neuroscience Mathematical and theoretical biology
Free energy principle
[ "Mathematics", "Biology" ]
4,011
[ "Applied mathematics", "nan", "Mathematical and theoretical biology" ]
39,410,762
https://en.wikipedia.org/wiki/Standard%20dimension%20ratio
Standard dimension ratio (SDR) is a method of rating a pipe's durability against pressure. The standard dimension ratio is the ratio of the pipe's outside diameter to its wall thickness. Common designations are SDR11, SDR17, SDR26 and SDR35. Pipes with a lower SDR can withstand higher pressures. SDR = D / s, where D is the pipe outside diameter and s is the pipe wall thickness. References Piping
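As a sketch of how the ratio is used in practice, the snippet below computes SDR from pipe geometry and estimates an allowable pressure from the thin-wall (Barlow-type) hoop-stress relation. The stress relation, the example dimensions and the design stress are assumptions for illustration; actual ratings must come from the applicable pipe standard.

```python
def sdr(outside_diameter_mm: float, wall_thickness_mm: float) -> float:
    """SDR = outside diameter / wall thickness."""
    return outside_diameter_mm / wall_thickness_mm

def max_pressure_bar(design_stress_mpa: float, sdr_value: float) -> float:
    """Barlow-type estimate: hoop stress = P * (SDR - 1) / 2, solved for P (in bar)."""
    return 10.0 * 2.0 * design_stress_mpa / (sdr_value - 1.0)

d, t = 110.0, 10.0                      # e.g. a nominal 110 mm pipe with a 10 mm wall
r = sdr(d, t)                           # -> SDR 11
print(f"SDR = {r:.0f}")
print(f"allowable pressure ~ {max_pressure_bar(8.0, r):.1f} bar "
      f"(assuming an 8 MPa design stress)")
```

Lower SDR means a thicker wall relative to the diameter, which is why the lower-SDR classes carry the higher pressure ratings.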
Standard dimension ratio
[ "Chemistry", "Engineering" ]
82
[ "Building engineering", "Chemical engineering", "Civil engineering", "Civil engineering stubs", "Mechanical engineering", "Piping" ]
52,265,423
https://en.wikipedia.org/wiki/Dropwise%20condensation
In dropwise condensation the condensate liquid collects in the form of countless droplets of varying diameters on the condensing surface, instead of forming a continuous film, and does not wet the solid cooling surface. The droplets develop at points of surface imperfection (pits, scratches), called nucleation sites, and grow in size as more vapour condenses on their exposed surfaces. When a droplet grows large enough, it breaks away from the surface, knocking off other droplets and carrying them downstream; the moving droplet absorbs the smaller droplets in its path. Dropwise condensation is one of the most effective mechanisms of heat transfer, and extremely large heat transfer coefficients can be achieved with it. In dropwise condensation there is no liquid film to resist heat transfer, and as a result heat transfer coefficients more than 10 times larger than those associated with film condensation can be achieved, although 3–5 times is more common. Because the heat transfer coefficients are large, designers can achieve a specified heat transfer rate with a smaller surface area and thus a smaller and less expensive condenser. Dropwise condensation is achieved by adding a promoter chemical to the vapor, by roughening the surface, and/or by coating the surface with hydrophobic substances such as fatty acids and organic compounds, known as dropwise promoters. Dropwise condensation can be provoked artificially with the help of silicones, Teflon, an assortment of waxes, and fatty acids. These promoters are used to promote dropwise condensation, but most are highly unstable and lose their effectiveness with time due to oxidation, fouling and removal of the promoter from the surface. Dropwise condensation can be sustained for a long time by the combined effects of surface coating and periodic injection of the promoter into the vapor. When dropwise surfaces degrade, they revert to filmwise condensation, so most condensers are designed on the assumption that film condensation will eventually take place on the surface. Dropwise condensation is useful in power plant heat exchangers, thermal desalination, self-cleaning surfaces, and heating and air conditioning. The total amount of heat transfer through a single droplet is a function of its radius and of the drop size distribution over the condensation surface. The important factors involved in the mechanism of heat transfer through a single droplet are: Thermal conduction through a single droplet Thermal conduction in the substrate material Interphase matter transfer at the vapour-liquid interface Curvature of the vapour-liquid interface References Cengel, Y. A., Heat Transfer – A Practical Approach, International Edition, 1998, McGraw-Hill. A. Bejan, Convection Heat Transfer, pp. 445–446, Wiley, New York (1984). Rose, J. W. and Glicksman, L. R., "Dropwise Condensation – The Distribution of Drop Sizes", Int. J. Heat and Mass Transfer, vol. 11, pp. 411–425, 1973. Incropera, DeWitt, Bergman, Lavine, Fundamentals of Heat and Mass Transfer, sixth edition, p. 655. Warsinger, D. M., Swaminathan, J., Maswadeh, L. and Lienhard V, J. H., "Superhydrophobic condenser surfaces for air gap membrane distillation," Journal of Membrane Science, vol. 492, pp. 578–587, 2015. Miljkovic, N., Wang, E. N., "Condensation heat transfer on superhydrophobic surfaces," MRS Bulletin 38 (5), 397–406, 2013. Cheng, Y.T., Rodak, D.E., Wong, C.A. and Hayden, C.A., Effects of micro- and nano-structures on the self-cleaning behaviour of lotus leaves. 
Nanotechnology, 17(5), p.1359, 2006. Phase transitions Heat transfer
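Returning to the design consequence noted above — that a roughly tenfold larger heat transfer coefficient allows a correspondingly smaller condensing surface — the sizing trade-off can be sketched with the basic rate equation Q = h·A·ΔT. The coefficient values below are assumed, order-of-magnitude figures, not data from the cited sources.

```python
# Required condenser area A = Q / (h * dT) for a given duty Q and
# temperature difference dT, comparing filmwise and dropwise coefficients.

Q = 100e3        # heat duty to reject, W
dT = 5.0         # temperature difference between vapour and surface, K

h_film = 10e3    # assumed filmwise coefficient, W/(m^2 K)
h_drop = 100e3   # assumed dropwise coefficient (~10x larger), W/(m^2 K)

area_film = Q / (h_film * dT)
area_drop = Q / (h_drop * dT)

print(f"filmwise condenser area : {area_film:.2f} m^2")
print(f"dropwise condenser area : {area_drop:.2f} m^2")
# Same duty, roughly a tenth of the surface -- provided the dropwise
# promoter keeps working; once the surface degrades to filmwise
# behaviour, the larger area is needed again.
```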
Dropwise condensation
[ "Physics", "Chemistry" ]
832
[ "Transport phenomena", "Physical phenomena", "Phase transitions", "Heat transfer", "Critical phenomena", "Phases of matter", "Thermodynamics", "Statistical mechanics", "Matter" ]
52,266,363
https://en.wikipedia.org/wiki/Wave%20Energy%20Scotland
Wave Energy Scotland (WES) is a technology development body set up by the Scottish Government to facilitate the development of wave energy in Scotland. It was set up in 2015 and is a subsidiary of Highlands and Islands Enterprise (HIE) based in Inverness. WES has managed numerous projects resulting from pre-commercial procurement funding calls in six main topic areas: power take-off, novel wave energy converters, structural materials and manufacturing processes, control systems, quick connection systems, and next generation wave energy. Each of these uses a stage-gate process, with fewer successful projects passing to later stages. WES has also commissioned eight landscaping studies in two phases. In 2020, together with the Basque Energy Agency ( or EVE), WES set up the EuropeWave programme to develop and test the most promising wave energy technologies, of which three concepts will be tested at sea. This is supported by European Horizon 2020 funding. Inception The Scottish Government took positive action to support the ailing wave energy sector in Scotland, following the demise of one of the leading developers Pelamis Wave Power. The Energy Minister Fergus Ewing announced an initial budget for the body of £14.3 million over 13 months at the RenewableUK conference in February 2015. Organisation objectives The original objectives for WES were set out by the Scottish Government as: Seek to retain the intellectual property and know-how from device development in Scotland for future benefit; Enable Scotland’s indigenous technologies to reach commercial readiness in the most efficient and effective manner, and in a way that allows the public sector to exit in due course; Ensure that the learning gained from support for wave device development and deployment to date, in particular the learning from Scotland’s leading wave technologies, is retained and used to benefit the wave energy industry; Avoid duplication in funding, encourage collaboration between companies and research institutes and foster greater standardisation across the industry; Ensure value for money from public sector investment; and Promote greater confidence in the technical performance of wave energy systems in order to encourage the return of private sector investment. Promote innovation in wave energy technology and encourage collaboration between industry, academia, and government Stage gate selections The WES development programme uses a series of stage-gates to evaluate technology progress. Through collaboration with the International Energy Agency's Ocean Energy Systems programme, Wave Energy Scotland has helped to develop "An International Evaluation and Guidance Framework for Ocean Energy Technology", first published in 2021. This sets out a clear evaluation methodology for the technical development and cost-effectiveness of wave and tidal energy technologies. A second edition was published in October 2023, adding the important aspect of environmental acceptability which had been missing from the first draft. The framework consists of six sequential stages of development, which is equivalent to those used in the IEC guidelines for testing early stage WECs, and can be linked to the widely-used Technology Readiness Level (TRL) scale. Project calls The WES development programme uses a staged approach with projects progressing from concept (stage 1), through design (stage 2), to demonstration (stage 3). , WES has held funding calls to start five development programmes, listed below. 
The successful projects in each stage are tabulated in List of projects funded by Wave Energy Scotland. In 2023, a sixth area of Next Generation Wave Energy was introduced, focusing on flexible generators. Power Take-off (PTO) In March 2015, WES announced the first call of their development programme, for innovative power take-off systems. Depending on the status of the technology, projects of £100k to £4m were sought, with successful applicants eligible to claim 100% of the cost of development. A total of 42 applications were made for this £7m call, with contracts awarded to nine consortia. In July 2016, a total of 16 Power Take-Off projects were awarded, with over £7m total funding. Nine projects in Stage 1, at around £90k each. Five projects directly into Stage 2, at between £300k and £500k each. One project starting in Stage 3, the CorPower Ocean HiDrive project with £1.9m in funding. In September 2016, four of the nine PTO Stage 1 projects progressed to Stage 2, each awarded funding of around £490k. In March 2017, three of the original Stage 2 projects progressed to Stage 3, with nearly £2.5m funding each. In February 2018, one of the original Stage 1 projects also progressed to Stage 3 and was awarded £2.5m. Novel Wave Energy Converter Call (NWEC) In June 2015, the second call was announced, this time for "truly novel" wave energy converters. Eight projects were funded for the first stage of the NWEC call, out of 37 applications. In November 2015, eight projects were each awarded between £250k-£300k for 12 month NWEC Stage 1 projects, a total of £2.25m in funding. Four of these projects progressed to Stage 2 in April 2017, awarded around £700k in further funding. In January 2019, Mocean Energy and AWS Ocean Energy were awarded £7.7m between them for Stage 3 projects. Both companies planned to build half-scale devices and test them at the European Marine Energy Centre in real-sea conditions. Structural Materials and Manufacturing Processes Call A third call for Structural Materials and Manufacturing Processes was launched in July 2016, looking for materials for the WEC structure or prime mover that would facilitate a step change reduction in LCOE. In January 2017, 10 awards of around £250k each were made, for 12 month Stage 1 projects. In July 2018, three of these projects progressed to Stage 2, with a further £1.4m in funding between them. In March 2020, two projects then progressed to Stage 3: one project led by Arup investigating the use of concrete as a structural material, the second led by Tension Technology International looking into a flexible buoyant pod. Control Systems In April 2017, a call on feasibility studies for Control Systems was announced, particularly welcoming experience from other related sectors. This was for initial projects of up to £47k lasting three months. In September 2017, 13 projects were awarded at Stage 1, with a total budget of £660k. Three of these projects progressed to Stage 2 in March 2018. In May 2019, two then progressed to Stage 3, sharing a budget of almost £1m. Quick Connection System A call was launched in July 2019 for systems that facilitate rapid connection and disconnection of a WEC from the moorings/electrical system, which was expected to speed up installation and operations, both leading to reduced costs. Seven projects were awarded at Stage 1 in December 2019. Of these, four progressed to Stage 2 in July 2020. Three then progressed to Stage 3 in July 2021, with almost £1.8m in funding. 
Next Generation Wave Energy In July 2023, a call was launched for concept designs that would directly convert motion into electricity, harnessing novel flexible electrostatic polymers and elastomers. Five projects were awarded up to £50k for 12-14 week concept designs investigating dielectric elastomer generators, and dielectric fluid generators. Two projects, led by 4c Engineering and TTI Marine Renewables, were awarded a further £400k funding in August 2024 for Stage 2. Over the following nine months, they are expected to form collaborations and progress their concepts for flexible wave energy devices. EuropeWave In December 2020, together with the Basque Energy Agency ( or EVE), WES set up the EuropeWave programme. This builds on the WES programme, using the same staged approach and pre-commercial procurement model. The programme has a budget of over €22.5m, comprising national, regional, and European Horizon 2020 funding. Trade association Ocean Energy Europe is also part of the consortium. As with the WES Novel Wave Energy Converter call, the programme will consist of three stages (1–3), culminating in scaled demonstration in real sea conditions for a year, at either the European Marine Energy Centre, Orkney, Scotland, or the Biscay Marine Energy Platform (BiMEP) near Armintza, Basque Country. Seven companies, listed in the table below, were selected in December 2021 to develop their device concepts, sharing a budget of €2.4m. After completing Stage 1, the five most promising technologies progressed to Stage 2 to perform more extensive modelling and testing to optimise their design. In September 2023, it was announced that CETO Wave Energy Ireland ACHIEVE, IDOM Consulting MARMOK-Atlantic, and Mocean Energy Blue Horizon 250 had progressed to the final stage of the EuropeWave programme with a shared budget of €13.4m. In April 2024, CETO secured a berth to test at BiMEP and also passed the authorisation to proceed milestone, enabling them to award the first contracts for fabrication of the device. Mocean plan to test their 250 kW device at the EMEC Billia Croo site, aiming to launch in 2025. Intellectual property WES acquired intellectual property developed by the now defunct Scottish wave energy companies Pelamis Wave Power and Aquamarine Power. The former as part of the inception of Wave Energy Scotland, hiring 12 former Pelamis employees including CEO Richard Yemm. The latter was completed in September 2016. Knowledge Library WES maintain an online Knowledge Library as part of their website, to provide access to information and documents from their extensive technology development programmes. It also contains reports from the knowledge capture projects from Pelamis Wave Power, Aquamarine Power, and AWS Ocean Energy. Annual conference With the exception of 2020 and 2021, WES has held an annual conference since 2016 to showcase progress in the sector. The first Wave Energy Scotland annual conference was held on 2 December 2016 at Pollock Halls in Edinburgh. This provided an update of ongoing and future calls, plus quick-fire updates from participants ongoing PTO and NWEC calls. A second annual conference was held on 28 November 2017. The third annual conference was held on 6 December 2018 at the Edinburgh International Conference Centre. 
External links Wave Energy Scotland website Wave Energy Scotland Knowledge Library See also Wave power Renewable energy in the United Kingdom List of wave power projects Marine energy Renewable energy in Scotland References Renewable energy organizations Organisations supported by the Scottish Government Organisations based in Inverness Wave power
Wave Energy Scotland
[ "Engineering" ]
2,090
[ "Renewable energy organizations", "Energy organizations" ]
56,605,705
https://en.wikipedia.org/wiki/%28Butadiene%29iron%20tricarbonyl
(Butadiene)iron tricarbonyl is an organoiron compound with the formula (C4H6)Fe(CO)3. It is a well-studied metal complex of butadiene. An orange-colored viscous liquid that freezes just below room temperature, the compound adopts a piano stool structure. The complex was first prepared by heating iron pentacarbonyl with the diene. Related compounds Iron(0) complexes of conjugated dienes have been extensively studied. In the butadiene series, (η4-C4H6)Fe(CO)3 and (η:η-CH)(Fe(CO)) have been crystallized. Many related complexes are known for substituted butadienes and related species. The species (η4-isoprene)iron tricarbonyl is chiral. See also Cyclobutadieneiron tricarbonyl References External links Iron carbonyl complexes Half sandwich compounds Diene complexes Iron(0) compounds
(Butadiene)iron tricarbonyl
[ "Chemistry" ]
197
[ "Organometallic chemistry", "Half sandwich compounds" ]
56,608,122
https://en.wikipedia.org/wiki/ClickSeq
ClickSeq is a click-chemistry based method for generating next generation sequencing libraries for deep-sequencing platforms including Illumina, HiSeq, MiSeq and NextSeq. Its function is similar to most other techniques for generating RNAseq or DNAseq libraries in that it aims to generate random fragments of biological samples of RNA or DNA and append specific sequencing adaptors to either end of every fragment, as per the requirements of the particular sequencing platform to be used (e.g. HiSeq). In ClickSeq, reverse transcription (RT) reactions are supplemented with small amounts of 3’-azido-nucleotides (AzNTPs) at defined ratios to deoxyribonucleotides (dNTPs). AzNTPs are chain-terminators and therefore induce the stochastic termination of cDNA synthesis at an average length determined by the ratio of AzNTPs to dNTPs. This results in the production of single-stranded cDNA fragments that contain an azido-group at their 3' ends. These 3'-azido-blocked cDNA molecules are purified away from the components of the RT reaction, and subsequently 'click-ligated' to 5’ alkyne-modified DNA adaptors via copper-catalysed azide-alkyne cycloaddition (CuAAC). This generates ssDNA molecules with unnatural triazole-linked DNA backbones. Nevertheless, these templates are used in PCR reactions and amplified to generate a cDNA sequencing library with the appropriate 5' and 3' sequencing adapters and indices required for Next-Generation Sequencing. ClickSeq has predominantly been used to sequence viral RNA genomes such as Flock House virus, cricket paralysis virus, and Zika virus, due to its resilience to artifactual chimera formation. Poly(A)-ClickSeq Poly(A)-ClickSeq is a variant of ClickSeq designed to target the junction of the three prime untranslated region (UTRs) and poly(A)-tails of the messenger RNAs (mRNAs) of higher-order organisms and of RNA viruses infecting these cells types. The core principle is similar to ClickSeq, however, the reverse-transcription step uses an oligo-dT primer (unanchored) to initiate cDNA synthesis from within the poly(A) tail and only three 3'azido-nucleotides (AzATP, AzGTP and AzCTP, collectively referred to as AzVTPs) are supplemented. Due to the omission of AzTTP, stochastic termination of cDNA synthesis cannot occur during reverse transcription of the poly(A)-tail. Rather, termination can only occur in the 3'UTR at a distance upstream of the poly(A) tail defined by the ratio of AzVTPs to dNTPs. Applications ClickSeq and Poly(A)-ClickSeq provide specific applications over other common RNA-seq techniques. These include: Removal of RNA fragmentation steps: When the reverse-transcription step is random-primed and cDNA synthesis is terminated by the 3'-azido-nucleotides, cDNA fragments can be generated without chemical, mechanical or enzymatic fragmentation of the sample RNA Removal of RNA/DNA ligase enzymes: In ClickSeq, there are no RNA or DNA ligation steps, as are commonly required in most next generation sequencing library synthesis strategies Reduction of artifactual recombination: In the original ClickSeq publication, Routh et al. demonstrated that the artifactual generation of cDNA chimeras was substantially reduced when using ClickSeq. This allowed the authors to detect rare RNA recombination events that arise during the replication of Flock House virus. Poly(A)-ClickSeq does not require enrichment or purification of mRNA or viral RNAs from biological specimens. 
Rather, Poly(A)-ClickSeq can be performed in a simple manner directly from crude RNA or total cellular RNA extracted from biological specimens. The copper catalyst required for CuAAC may induce oxidative damage of the template DNA. References DNA sequencing methods
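Returning to the chain-termination chemistry described above: because each incorporated nucleotide has some fixed chance of being an azido-terminator, cDNA fragment lengths are approximately geometrically distributed, with a mean set by the AzNTP:dNTP ratio. The rough model below treats the termination probability as simply proportional to that ratio and ignores any polymerase bias — both assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lengths(az_to_dntp_ratio: float, n_molecules: int = 100_000):
    # Chance of terminating at each incorporated base (assumed proportional
    # to the AzNTP fraction of the nucleotide pool).
    p = az_to_dntp_ratio / (1.0 + az_to_dntp_ratio)
    return rng.geometric(p, size=n_molecules)

for ratio in (1/20, 1/35, 1/50):
    lengths = simulate_lengths(ratio)
    print(f"AzNTP:dNTP = 1:{round(1/ratio):<3d}  "
          f"mean cDNA length ~ {lengths.mean():6.1f} nt")
# Tuning the ratio therefore tunes the average insert size of the library,
# which is the role the chain terminators play in the protocol above.
```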
ClickSeq
[ "Biology" ]
871
[ "Genetics techniques", "DNA sequencing methods", "DNA sequencing" ]
55,148,719
https://en.wikipedia.org/wiki/Total%20carbon
Total carbon (TC) is an analytical parameter representing the concentration of carbon in a sample. TC includes carbon in any form, whether organic or inorganic, volatile or fixed, dissolved or suspended. In many application areas, rather than TC, a parameter representing a subset of TC is measured; examples include total organic carbon (TOC), particulate inorganic carbon (PIC), and dissolved organic carbon (DOC). Carbon Soil tests Water pollution Water quality indicators
Total carbon
[ "Chemistry", "Environmental_science" ]
93
[ "Water quality indicators", "Water pollution" ]
55,150,270
https://en.wikipedia.org/wiki/Directive%202012/18/EU
Directive 2012/18/EU or the Seveso-III Directive (full title: Directive 2012/18/EU of the European Parliament and of the Council of 4 July 2012 on the control of major-accident hazards involving dangerous substances, amending and subsequently repealing Council Directive 96/82/EC) is a European Union directive aimed at controlling major chemical accident hazards. It is implemented in national legislation and is enforced by national chemical safety authorities. The Seveso-III Directive aims at preventing such incidents and minimising their risks. All EU countries are obliged to adopt measures at national and company level to prevent major accidents and to ensure appropriate preparedness and response should such accidents happen. Industrial plants in the European Union are covered by its provisions if dangerous substances are or could be present in the "establishment" in quantities exceeding its stated thresholds. More than 12,000 establishments in the EU are covered by the requirements. Seveso-III replaces the previous Seveso-I (Directive 82/501/EC) and Seveso-II (Directive 96/82/EC), updating the laws due to changes in chemical classification regulations, for example. Seveso-III gets its name from the Seveso disaster, which occurred in 1976 in Italy. Seveso-III establishes minimum quantity thresholds for reporting and safety permits. Covered establishments Today more than 12,000 establishments in the EU are covered by the Seveso-III Directive. Establishments covered by Seveso are split into two categories: Lower-tier: Dangerous substances are present above a certain threshold set out in Annex I of the Directive. Upper-tier: Establishments with dangerous substances present in even greater quantities, requiring more stringent controls to prevent and minimise the consequences of major accidents. The main sectors covered are power generation, supply and distribution (13% of establishments); fuel storage (10%); general chemicals manufacture (9%) and wholesale and retail (8%). Major accidents The Lubrizol factory fire in Rouen in 2019 was followed by the 2020 Chemical Industries of Ethylene Oxide explosion in Tarragona. 
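The tier test described under "Covered establishments" above amounts to comparing an establishment's inventory of dangerous substances against per-substance thresholds. The sketch below is schematic only: the threshold numbers are placeholders rather than Annex I values, and the summation rule is heavily simplified relative to the directive's actual aggregation notes.

```python
THRESHOLDS_TONNES = {            # substance: (lower-tier qty, upper-tier qty) -- placeholders
    "example_flammable": (50, 200),
    "example_toxic": (5, 20),
}

def seveso_tier(inventory_tonnes: dict) -> str:
    """Return 'upper', 'lower' or 'not covered' for a simple inventory."""
    lower_hit = upper_hit = False
    lower_sum = upper_sum = 0.0
    for substance, qty in inventory_tonnes.items():
        lo, hi = THRESHOLDS_TONNES[substance]
        upper_hit |= qty >= hi            # any single substance over its upper threshold
        lower_hit |= qty >= lo            # any single substance over its lower threshold
        lower_sum += qty / lo             # simplified summation rule across substances
        upper_sum += qty / hi
    if upper_hit or upper_sum >= 1.0:
        return "upper"
    if lower_hit or lower_sum >= 1.0:
        return "lower"
    return "not covered"

# Neither substance exceeds its own threshold, but the summed ratios do:
print(seveso_tier({"example_flammable": 30, "example_toxic": 4}))  # -> lower
```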
Obligations All establishment operators Notify the competent authority about the inventory of dangerous substances, specifying the quantities, physical form and the hazardous properties of the dangerous substances present in the establishment Draw up a major accident prevention policy (MAPP) Implement a MAPP by appropriate means and a Safety Management System Provide information to the competent authorities to identify the risks for domino effects Produce a safety report for upper-tier establishments Produce internal emergency plans for upper-tier establishments Member State authorities Produce external emergency plans for upper-tier establishments (Article 12) Deploy land-use planning for the siting of establishments Make relevant information publicly available Ensure that any necessary action is taken after an accident, including emergency measures, actions to ensure that the operator takes any necessary remedial measures, and informing the persons likely to be affected Report the number of establishments (both tiers) to the Commission Report accidents to the Commission Prohibit the unlawful use or operation of establishments Conduct inspections Maintain or adopt stricter measures than those contained in the Seveso Directive Citizens' rights The public needs to be consulted and involved in the decision making for specific individual projects Member State authorities need to make available information held Access to justice needs to be granted in case the above rights have been infringed For citizens who live in an area potentially affected by a major accident involving dangerous substances, EU legislation requires that they are involved in the decision making, even if the establishment concerned is located in a neighbouring EU country. Citizens will be consulted when: new establishments are planned significant modifications are made to existing ones new developments are planned around existing establishments external emergency plans are drawn up for high risk establishments External links Text of the directive The Seveso Directive - Technological Disaster Risk Reduction The Mutual Joint Visit Programme for Seveso Inspections Seveso Plants Information Retrieval System Major Accident Reporting System 2012/18 European Union directives Regulation of chemicals in the European Union Safety codes Process safety
Directive 2012/18/EU
[ "Chemistry", "Engineering" ]
811
[ "Regulation of chemicals in the European Union", "Safety engineering", "Regulation of chemicals", "Process safety", "Chemical process engineering" ]
55,160,898
https://en.wikipedia.org/wiki/Electronic%20properties%20of%20graphene
Graphene is a semimetal whose conduction and valence bands meet at the Dirac points, which are six locations in momentum space, the vertices of its hexagonal Brillouin zone, divided into two non-equivalent sets of three points. The two sets are labeled K and K′. The sets give graphene a valley degeneracy of . By contrast, for traditional semiconductors the primary point of interest is generally Γ, where momentum is zero. Four electronic properties separate it from other condensed matter systems. Electronic spectrum Electrons propagating through graphene's honeycomb lattice effectively lose their mass, producing quasi-particles that are described by a 2D analogue of the Dirac equation rather than the Schrödinger equation for spin- particles. Dispersion relation When atoms are placed onto the graphene hexagonal lattice, the overlap between the pz(π) orbitals and the s or the px and py orbitals is zero by symmetry. The pz electrons forming the π bands in graphene can be treated independently. Within this π-band approximation, using a conventional tight-binding model, the dispersion relation (restricted to first-nearest-neighbor interactions only) that produces energy of the electrons with wave vector is with the nearest-neighbor (π orbitals) hopping energy γ0 ≈ and the lattice constant . The conduction and valence bands, respectively, correspond to the different signs. With one pz electron per atom in this model the valence band is fully occupied, while the conduction band is vacant. The two bands touch at the zone corners (the K point in the Brillouin zone), where there is a zero density of states but no band gap. The graphene sheet thus displays a semimetallic (or zero-gap semiconductor) character. Two of the six Dirac points are independent, while the rest are equivalent by symmetry. In the vicinity of the K-points the energy depends linearly on the wave vector, similar to a relativistic particle. Since an elementary cell of the lattice has a basis of two atoms, the wave function has an effective 2-spinor structure. As a consequence, at low energies, even neglecting the true spin, the electrons can be described by an equation that is formally equivalent to the massless Dirac equation. Hence, the electrons and holes are called Dirac fermions. This pseudo-relativistic description is restricted to the chiral limit, i.e., to vanishing rest mass M0, which leads to additional features: Here vF ≈ (0.003 c) is the Fermi velocity in graphene, which replaces the velocity of light in the Dirac theory; is the vector of the Pauli matrices; is the two-component wave function of the electrons and E is their energy. The equation describing the electrons' linear dispersion relation is where the wavevector is measured from the Dirac points (the zero of energy is chosen here to coincide with the Dirac points). The equation uses a pseudospin matrix formula that describes two sublattices of the honeycomb lattice. 'Massive' electrons Graphene's unit cell has two identical carbon atoms and two zero-energy states: one in which the electron resides on atom A, the other in which the electron resides on atom B. However, if the two atoms in the unit cell are not identical, the situation changes. Hunt et al. showed that placing hexagonal boron nitride (h-BN) in contact with graphene can alter the potential felt at atom A versus atom B enough that the electrons develop a mass and accompanying band gap of about . The mass can be positive or negative. 
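Before continuing with the sublattice-asymmetry ("massive electron") discussion, the nearest-neighbour π-band dispersion described above can be evaluated numerically. The sketch below uses commonly quoted values of the hopping energy and carbon–carbon distance as assumptions (the article's own numbers did not survive formatting) and confirms the linear, Dirac-like behaviour near the zone corners.

```python
import numpy as np

gamma0 = 2.8            # eV, nearest-neighbour hopping (assumed)
a_cc = 0.142e-9         # m, carbon-carbon distance (assumed)

# Vectors from an A atom to its three B neighbours in the honeycomb lattice
deltas = a_cc * np.array([[1.0, 0.0],
                          [-0.5,  np.sqrt(3) / 2],
                          [-0.5, -np.sqrt(3) / 2]])

def energy(k):
    """E_+/-(k) = +/- gamma0 * |sum_j exp(i k . delta_j)| (conduction/valence bands)."""
    f = np.exp(1j * (deltas @ k)).sum()
    return gamma0 * abs(f)

K = np.array([0.0, 4 * np.pi / (3 * np.sqrt(3) * a_cc)])   # one of the Dirac points
print("E at K:", energy(K), "eV (bands touch, no gap)")

# Linear (Dirac-like) behaviour close to K: E ~ hbar * v_F * |q|
for dq in (1e7, 2e7, 4e7):                                  # displacement from K, 1/m
    q = K + np.array([dq, 0.0])
    print(f"|q - K| = {dq:.0e} 1/m -> E = {energy(q):.4f} eV")
# Doubling the distance from K roughly doubles the energy, i.e. the
# dispersion is linear near the zone corners, as described in the text.
```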
An arrangement that slightly raises the energy of an electron on atom A relative to atom B gives it a positive mass, while an arrangement that raises the energy of atom B produces a negative electron mass. The two versions behave alike and are indistinguishable via optical spectroscopy. An electron traveling from a positive-mass region to a negative-mass region must cross an intermediate region where its mass once again becomes zero. This region is gapless and therefore metallic. Metallic modes bounding semiconducting regions of opposite-sign mass is a hallmark of a topological phase and display much the same physics as topological insulators. If the mass in graphene can be controlled, electrons can be confined to massless regions by surrounding them with massive regions, allowing the patterning of quantum dots, wires and other mesoscopic structures. It also produces one-dimensional conductors along the boundary. These wires would be protected against backscattering and could carry currents without dissipation. Single-atom wave propagation Electron waves in graphene propagate within a single-atom layer, making them sensitive to the proximity of other materials such as high-κ dielectrics, superconductors and ferromagnetics. Electron transport Graphene displays remarkable electron mobility at room temperature, with reported values in excess of . Hole and electron mobilities were expected to be nearly identical. The mobility is nearly independent of temperature between and , which implies that the dominant scattering mechanism is defect scattering. Scattering by graphene's acoustic phonons intrinsically limits room temperature mobility to at a carrier density of , times greater than copper. The corresponding resistivity of graphene sheets would be . This is less than the resistivity of silver, the lowest otherwise known at room temperature. However, on substrates, scattering of electrons by optical phonons of the substrate is a larger effect than scattering by graphene's own phonons. This limits mobility to . Charge transport is affected by adsorption of contaminants such as water and oxygen molecules. This leads to non-repetitive and large hysteresis I-V characteristics. Researchers must carry out electrical measurements in vacuum. Graphene surfaces can be protected by a coating with materials such as SiN, PMMA and h-BN. In January 2015, the first stable graphene device operation in air over several weeks was reported, for graphene whose surface was protected by aluminum oxide. In 2015 lithium-coated graphene was observed to exhibit superconductivity and in 2017 evidence for unconventional superconductivity was demonstrated in single layer graphene placed on the electron-doped (non-chiral) d-wave superconductor Pr2−xCexCuO4 (PCCO). Electrical resistance in 40-nanometer-wide nanoribbons of epitaxial graphene changes in discrete steps. The ribbons' conductance exceeds predictions by a factor of 10. The ribbons can act more like optical waveguides or quantum dots, allowing electrons to flow smoothly along the ribbon edges. In copper, resistance increases in proportion to length as electrons encounter impurities. Transport is dominated by two modes. One is ballistic and temperature independent, while the other is thermally activated. Ballistic electrons resemble those in cylindrical carbon nanotubes. At room temperature, resistance increases abruptly at a particular length—the ballistic mode at 16 micrometres and the other at 160 nanometres. 
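The connection between the mobilities discussed in the transport section above and the very low sheet resistances they imply follows from the Drude relation σ = n·e·μ. A quick sketch with assumed, representative numbers (the article's own figures were lost in formatting):

```python
e = 1.602e-19                      # elementary charge, C
n = 1e12 * 1e4                     # carrier density per m^2 (10^12 cm^-2, assumed)

for label, mu_cm2 in [("substrate-limited", 4e4),
                      ("phonon-limited (intrinsic)", 2e5)]:
    mu = mu_cm2 * 1e-4             # cm^2/(V s) -> m^2/(V s)
    sigma_sheet = n * e * mu       # sheet conductivity, S per square
    print(f"{label:28s} mobility {mu_cm2:>8,.0f} cm^2/Vs "
          f"-> sheet resistance ~ {1/sigma_sheet:6.0f} ohm/sq")
# With intrinsic, phonon-limited mobility the sheet resistance drops to a
# few tens of ohms per square, which is why intrinsic graphene is often
# compared favourably with the best metallic conductors.
```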
Graphene electrons can cover micrometer distances without scattering, even at room temperature. Despite zero carrier density near the Dirac points, graphene exhibits a minimum conductivity on the order of . The origin of this minimum conductivity is unclear. However, rippling of the graphene sheet or ionized impurities in the substrate may lead to local puddles of carriers that allow conduction. Several theories suggest that the minimum conductivity should be ; however, most measurements are of order or greater and depend on impurity concentration. Near zero carrier density graphene exhibits positive photoconductivity and negative photoconductivity at high carrier density. This is governed by the interplay between photoinduced changes of both the Drude weight and the carrier scattering rate. Graphene doped with various gaseous species (both acceptors and donors) can be returned to an undoped state by gentle heating in vacuum. Even for dopant concentrations in excess of 1012 cm−2 carrier mobility exhibits no observable change. Graphene doped with potassium in ultra-high vacuum at low temperature can reduce mobility 20-fold. The mobility reduction is reversible on removing the potassium. Due to graphene's two dimensions, charge fractionalization (where the apparent charge of individual pseudoparticles in low-dimensional systems is less than a single quantum) is thought to occur. It may therefore be a suitable material for constructing quantum computers using anyonic circuits. In 2018, superconductivity was reported in twisted bilayer graphene. Excitonic properties First-principle calculations with quasiparticle corrections and many-body effects explore the electronic and optical properties of graphene-based materials. The approach is described as three stages. With GW calculation, the properties of graphene-based materials are accurately investigated, including bulk graphene, nanoribbons, edge and surface functionalized armchair oribbons, hydrogen saturated armchair ribbons, Josephson effect in graphene SNS junctions with single localized defect and armchair ribbon scaling properties. Magnetic properties In 2014 researchers magnetized graphene by placing it on an atomically smooth layer of magnetic yttrium iron garnet. The graphene's electronic properties were unaffected. Prior approaches involved doping. The dopant's presence negatively affected its electronic properties. Strong magnetic fields In magnetic fields of ~10 tesla, additional plateaus of Hall conductivity at with are observed. The observation of a plateau at and the fractional quantum Hall effect at were reported. These observations with indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. One hypothesis is that the magnetic catalysis of symmetry breaking is responsible for lifting the degeneracy. Spin transport Graphene is claimed to be an ideal material for spintronics due to its small spin–orbit interaction and the near absence of nuclear magnetic moments in carbon (as well as a weak hyperfine interaction). Electrical spin current injection and detection has been demonstrated up to room temperature. Spin coherence length above 1 micrometre at room temperature was observed, and control of the spin current polarity with an electrical gate was observed at low temperature. Spintronic and magnetic properties can be present in graphene simultaneously. 
Low-defect graphene nanomeshes manufactured using a non-lithographic method exhibit large-amplitude ferromagnetism even at room temperature. Additionally a spin pumping effect is found for fields applied in parallel with the planes of few-layer ferromagnetic nanomeshes, while a magnetoresistance hysteresis loop is observed under perpendicular fields. Dirac fluid Charged particles in high-purity graphene behave as a strongly interacting, quasi-relativistic plasma. The particles move in a fluid-like manner, traveling along a single path and interacting with high frequency. The behavior was observed in a graphene sheet faced on both sides with a h-BN crystal sheet. Anomalous quantum Hall effect The quantum Hall effect is a quantum mechanical version of the Hall effect. The Hall effect occurs when a magnetic field causes a perpendicular (transverse) current in a material. In the quantum Hall effect, the transverse conductivity is quantized in integer multiples of a basic quantity: σxy = ν·e²/h, with ν an integer, where e is the elementary electric charge and h is the Planck constant. This phenomenon is typically observed in very clean silicon or gallium arsenide solids at low temperatures and high magnetic fields. Quantum Hall effect in graphene Graphene, a single layer of carbon atoms, exhibits an unusual form of the quantum Hall effect. In graphene, the steps of conductivity quantization are shifted by 1/2 compared to the standard sequence and have an additional factor of 4. This can be expressed as: σxy = ±4(N + 1/2)·e²/h, where N is the Landau level. The factor of 4 arises due to the double valley and double spin degeneracies of electrons in graphene. These anomalies can be observed even at room temperature (about 20 °C or 293 K). Behavior of electrons in graphene This anomalous behavior is due to graphene's massless Dirac electrons. In a magnetic field, these electrons form a Landau level at the Dirac point with an energy that is precisely zero. This is a result of the Atiyah–Singer index theorem and causes the "+1/2" term in the Hall conductivity for neutral graphene. In bilayer graphene, the quantum Hall effect is also observed but with only one of the two anomalies. The Hall conductivity in bilayer graphene is given by: σxy = ±4N·e²/h, with integer N ≥ 1. In this case, the first plateau at N = 0 is absent, meaning bilayer graphene remains metallic at the neutrality point. Additional observations in graphene Unlike normal metals, graphene's longitudinal resistance shows maxima, not minima, for integral values of the Landau filling factor in Shubnikov–de Haas oscillations. This is termed the integral quantum Hall effect. These oscillations exhibit a phase shift of π, known as Berry's phase, which is due to the zero effective mass of carriers near the Dirac points. Despite this zero effective mass, the temperature dependence of the oscillations indicates a non-zero cyclotron mass for the carriers. Experimental observations Graphene samples prepared on nickel films and on both the silicon and carbon faces of silicon carbide show the anomalous quantum Hall effect in electrical measurements. Graphitic layers on the carbon face of silicon carbide exhibit a clear Dirac spectrum in angle-resolved photoemission experiments. This effect is also observed in cyclotron resonance and tunneling experiments. Casimir effect The Casimir effect is an interaction between disjoint neutral bodies provoked by the fluctuations of the electrodynamical vacuum. 
Mathematically it can be explained by considering the normal modes of electromagnetic fields, which explicitly depend on the boundary (or matching) conditions on the interacting bodies' surfaces. Since graphene/electromagnetic field interaction is strong for a one-atom-thick material, the Casimir effect is of interest. Van der Waals force The Van der Waals force (or dispersion force) is also unusual, obeying an inverse cubic, asymptotic power law in contrast to the usual inverse quartic. Effect of substrate The electronic properties of graphene are significantly influenced by the supporting substrate. The Si(100)/H surface does not perturb graphene's electronic properties, whereas the interaction between it and the clean Si(100) surface changes its electronic states significantly. This effect results from the covalent bonding between C and surface Si atoms, modifying the π-orbital network of the graphene layer. The local density of states shows that the bonded C and Si surface states are highly disturbed near the Fermi energy. Comparison with nanoribbon If the in-plane direction is confined, in which case it is referred to as a nanoribbon, its electronic structure is different. If it is "zig-zag" (diagram), the bandgap is zero. If it is "armchair" (diagram), the bandgap is non-zero (see figure). References Works cited External links Wolfram demonstration for graphene BZ and electronic dispersion Graphene Electrical phenomena
Electronic properties of graphene
[ "Physics" ]
3,167
[ "Physical phenomena", "Electrical phenomena" ]
55,164,426
https://en.wikipedia.org/wiki/Chappuis%20absorption
Chappuis absorption () refers to the absorption of electromagnetic radiation by ozone, which is especially noticeable in the ozone layer, which absorbs a small part of sunlight in the visible portion of the electromagnetic spectrum. The Chappuis absorption bands occur at wavelengths between 400 and 650 nm. Within this range are two absorption maxima of similar height at 575 and 603 nm. Compared to the absorption of ultraviolet light by the ozone layer, known as the Hartley and Huggins absorptions, Chappuis absorption is distinctly weaker. Along with Rayleigh scattering, it contributes to the blue color of the sky, and is noticeable when the light has to travel a long path through the Earth's atmosphere. For this reason, Chappuis absorption only has a significant effect on the color of the sky at dawn and dusk, during the so-called blue hour. It is named after the French chemist James Chappuis (1854–1934), who discovered this effect. History James Chappuis was the first researcher (in 1880) to notice that light passing through ozone gas has a blue tint. He attributed this effect to absorption in the yellow, orange, and red parts of the light spectrum. The French chemist Auguste Houzeau had already shown in 1858 that the atmosphere contains traces of ozone, so Chappuis presumed that ozone could explain the blue color of the sky. He was certainly aware that this was not the only possible explanation, since the blue light that can be seen from Earth's surface is polarised. Polarization cannot be explained by light absorption by ozone, but can be explained by Rayleigh scattering, which was already known by Chappuis's time. Contemporary scientists thought that Rayleigh scattering was sufficient to explain the blue sky, and so the idea that ozone could play a role was eventually forgotten. In the early 1950s, Edward Hulburt was conducting research on the sky at dusk, to verify theoretical predictions on the temperature and density of the upper atmosphere on the basis of scattered light measured at the Earth's surface. The basic idea was that after the Sun passes under the horizon, it continues to illuminate the upper layers of the atmosphere. Hulburt wished to relate the intensity of light reaching the Earth's surface through Rayleigh scattering to the abundance of particles at each altitude, as the sunlight passes through the atmosphere at different heights over the course of sunset. In his measurements, performed in 1952 at Sacramento Peak in New Mexico, he found that the intensity of measured light was lower by a factor of 2 to 4 than the predicted value. His predictions were based on his theory, and on measurements that were made in the upper atmosphere only a few years before by rocket flights launched not far from Sacramento Peak. The magnitude of the deviation between prediction and photometric measurements made on Sacramento Peak precluded mere measurement error. Until then, theory had predicted that the sky at the zenith during sundown should appear blue-green to grey, and the color should shift to yellow during dusk. This was obviously in conflict with daily observation that the blue color of the sky in the zenith at dusk changes only imperceptibly. As Hulburt knew about the absorption by ozone, and as the spectral range of Chappuis absorption had been more precisely measured only a few years before by the French couple Arlette and Étienne Vassy, he made an attempt to account for this effect in his calculations. 
This brought the measurements completely into agreement with the theoretical predictions. The results of Hulburt were repeatedly confirmed in the following years. Indeed, not all color effects at dusk in clear sky can be explained by the deeper layers. To this end it is probably necessary to account for spectral extinction by aerosols in theoretical simulations. Independently of Hulburt, the French meteorologist Jean Dubois had proposed a few years before that Chappuis absorption had an effect on another color phenomenon of the sky at dusk. Dubois worked on the so-called "Earth's shadow" in his doctoral thesis in the 1940s, and he hypothesized that this effect could also be attributed to Chappuis absorption. However, this conjecture is not supported by more recent measurements. Physical basis Chappuis absorption is a continuum absorption in the wavelength range between 400 and 650 nm. It is caused by the photodissociation (breaking-apart) of the ozone molecule. The absorption maximum lies around 603 nm, with a cross-section of 5.23 10−21 cm2. A second, somewhat smaller maximum at ca. 575 nm has a cross-section of 4.83 10−21 cm2. The absorbance energy in the Chappuis bands lies between 1.8 and 3.1 eV. The measured values imply that absorption mechanism is barely temperature-dependent; the deviation accounts for less than three percent. Around its maxima, Chappuis absorption is about three orders of magnitude weaker than the absorption of ultraviolet light in the range of the Hartley bands. Indeed, the Chappuis absorption is one of the few noteworthy absorption processes within the visible spectrum in Earth's atmosphere. Overlaid on the absorption spectrum of the Chappuis bands at shorter wavelengths are partly irregular and diffuse bands caused by molecular vibrations. The irregularity of these bands implies that the ozone molecule is only for an extremely short time in an excited state before it dissociates. During this short excitation it is mostly undergoing symmetrical stretching vibrations, although with some contributions from bending vibrations. A consistent theoretical explanation of the vibration structure that is in line with the experimental data was for a long time an unsolved problem; even today, not all details of the Chappuis absorption can be explained by theory. Like when it absorbs ultraviolet light, the ozone molecule can decompose into an O2 molecule and an O atom during Chappuis absorption. Unlike the Hartley and Huggins absorptions, however, the decomposition products do not remain in an excited state. Dissociation in the Chappuis bands is the most important photochemical process involving ozone in the Earth's atmosphere below an altitude of 30 km. Over this altitude, it is outweighed by absorptions in the Hartley band. However, neither the Hartley nor the Chappuis absorptions cause significant loss of ozone in the stratosphere, despite the high potential photodissociation rate, because the elemental oxygen has a high probability of encountering an O2 molecule and recombining back into ozone. References External links Götz Hoeppe: Himmelslicht. Spiegelbild des Erdklimas. Auf: fu-berlin.de. Wetterlexikon: Chappuis-Absorption. Auf: deutscher-wetterdienst.de. Atmospheric optical phenomena
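Using the peak cross-section quoted above, the weakness of the Chappuis bands for an overhead sun — and their growing importance along long twilight paths — can be estimated with the Beer–Lambert law. In the sketch below the ozone column amount, the Dobson-unit conversion and the slant-path enhancement factors are assumed, typical values, not figures from the cited works.

```python
import numpy as np

sigma_603 = 5.23e-21            # cm^2 per molecule, Chappuis maximum (from the text)
column_DU = 300                 # assumed typical total ozone column, Dobson units
molecules_per_cm2 = column_DU * 2.687e16   # 1 DU ~ 2.687e16 molecules/cm^2

tau = sigma_603 * molecules_per_cm2        # vertical optical depth at 603 nm
print(f"vertical optical depth at 603 nm: {tau:.3f}")
print(f"transmitted fraction (overhead sun): {np.exp(-tau):.3f}")

# Near the horizon the slant path through the ozone layer is many times
# longer, so the same weak absorption becomes noticeable -- the reason the
# effect matters mainly during twilight, as the article explains.
for enhancement in (1, 5, 20):             # assumed slant-path factors
    print(f"path factor {enhancement:>2d}: transmission {np.exp(-tau * enhancement):.2f}")
```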
Chappuis absorption
[ "Physics" ]
1,378
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
36,531,196
https://en.wikipedia.org/wiki/Equivalent%20Concrete%20Performance%20Concept
According to the Equivalent Concrete Performance Concept, a concrete composition deviating from EN 206-1 can still be accepted, provided that certain conditions are fulfilled. Conditions A concrete composition not composed according to the standard EN 206-1 can be accepted only if the new concrete shows a performance equal to that of standardized concrete for the relevant environmental classes. Cement content and water-cement ratio are important elements in this comparison. The comparison with standardized concrete is tested according to the following properties: Compressive strength Resistance to carbonation Chloride migration Freeze-thaw resistance Other possible requirements When the new concrete scores equally or better, a certificate of utilization can be obtained from certifying organizations. Standardization The valid standards concerning concrete are: EN 206-1: determines minimum requirements of concrete composition for different environmental classes. NBN B15-100: Belgian annex CUR recommendation 48: Dutch annex These national annexes serve to elaborate the functional description of the Equivalent Concrete Performance Concept. Concrete composition Standardized concrete is a highly durable material, predominantly thanks to the increasing amount of cement required at stricter environmental classes. But cement is a costly component and has a relatively large impact on the environment. Partly because of this, alternative binders such as fly ashes and slags are applied in the concrete sector. As a result, the content of Portland cement can be reduced in many cases. Other recycled raw materials can also contribute to a more economical or less environmentally polluting concrete composition. 1. Usage of residual products from the concrete industry, for example stone dust (from crushing aggregates), concrete slurry (from washing mixers) or concrete waste 2. Usage of residual products from other industries, for example fly ash from coal plants and slags from the metallurgical industry 3. Usage of new types of cement with reduced environmental impact (mineralized cement, limestone addition, waste-derived fuels) Durability To respect the Kyoto Protocol, CO2 emissions should be reduced. Green concrete is made from recycled materials or is composed in such a manner that it is as environmentally friendly as possible. A few conditions must be met before the term green concrete may be used: CO2 emission from concrete manufacturing is reduced by 30% Concrete contains at least 20% residual products, used as aggregates New residual products, previously disposed of, are used in concrete production CO2-neutral: waste-derived fuels replace at least 10% of the fossil fuels in cement production References Equivalent Performance Concept: green concrete Betonlexicon: Duurzaamheid van beton Brancheorganisatie van de betonmortelindustrie Maatschappelijk Verantwoord Ondernemen Nederland Concrete
Equivalent Concrete Performance Concept
[ "Engineering" ]
538
[ "Structural engineering", "Concrete" ]
36,532,828
https://en.wikipedia.org/wiki/Tbf5%20protein%20domain
In molecular biology, this protein domain represents Tbf5, which stands for TTDA subunit of TFIIH basal transcription factor complex (also known as subunit 5 of RNA polymerase II transcription factor B), and Rex1, a type of nucleotide excision repair (NER) protein. Nucleotide excision repair is a major pathway for repairing UV light-induced DNA damage in most organisms. The function of this protein is to aid transcription. Structure These proteins have a structural motif consisting of a 2-layer sandwich structure with an alpha/beta plait topology. TFIIH Transcription/repair factor IIH (TFIIH) is essential for RNA polymerase II transcription and nucleotide excision repair. The TFIIH complex consists of ten subunits: ERCC2, ERCC3, GTF2H1, GTF2H2, GTF2H3, GTF2H4, GTF2H5, MNAT1, CDK7 and CCNH. TTDA is also required for the stability of the TFIIH complex and for the presence of normal levels of TFIIH in the cell. TFIIH is one of five general transcription factors (GTFs) that assemble with RNA polymerase II at a promoter site prior to the initiation of transcription. It is one of ten subunits that complete part of the 10-subunit protein complex (holoTFIIH) and part of a six-subunit complex of Rad3, Tfb1, Tfb2, Tfb4, Tfb5, and Ssl1 (referred to as core). Function In humans, the function of Tbf5 is clear, as loss of it leads to trichothiodystrophy. Defects in GTF2H5 cause the disease trichothiodystrophy (TTD); therefore GTF2H5 (general transcription factor 2H subunit 5) is also known as the TTD group A (TTDA) subunit (and as Tfb5). The TTDA subunit is responsible for the DNA repair function of the complex. TTDA is present both bound to TFIIH, and as a free fraction that shuttles between the cytoplasm and nucleus; induction of NER-type DNA lesions shifts the balance towards TTDA's more stable association with TFIIH. REX1, which is short for "required for excision 1", is required for DNA repair in the single-celled, photosynthetic alga Chlamydomonas reinhardtii, and has homologues in other eukaryotes. References Protein families Protein domains Transcription factors
Tbf5 protein domain
[ "Chemistry", "Biology" ]
542
[ "Transcription factors", "Gene expression", "Protein classification", "Signal transduction", "Protein domains", "Protein families", "Induced stem cells" ]
36,535,054
https://en.wikipedia.org/wiki/Barry%20Barish
Barry Clark Barish (born January 27, 1936) is an American experimental physicist and Nobel laureate. He is Linde Professor of Physics, emeritus, at the California Institute of Technology and a leading expert on gravitational waves. In 2017, Barish was awarded the Nobel Prize in Physics along with Rainer Weiss and Kip Thorne "for decisive contributions to the LIGO detector and the observation of gravitational waves". He said, "I didn't know if I would succeed. I was afraid I would fail, but because I tried, I had a breakthrough." In 2018, he joined the faculty at the University of California, Riverside, becoming the university's second Nobel Prize winner on the faculty. In the fall of 2023, he joined Stony Brook University as the inaugural President's Distinguished Endowed Chair in Physics. In 2023, Barish was awarded the National Medal of Science by President Biden in a White House ceremony. Birth and education Barish was born in Omaha, Nebraska, the son of Lee and Harold Barish. His parents' families were Jewish immigrants from a part of Poland that is now in Belarus. Just after World War II, the family moved to Los Feliz in Los Angeles. He attended John Marshall High School and other schools. He earned a B.A. degree in physics (1957) and a Ph.D. degree in experimental high energy physics (1962) at the University of California, Berkeley. He joined Caltech in 1963 as part of a new experimental effort in particle physics using frontier particle accelerators at the national laboratories. From 1963 to 1966, he was a research fellow, and from 1966 to 1991 an assistant professor, associate professor, and professor of physics. From 1991 to 2005, he was Linde Professor of Physics, and after that Linde Professor of Physics, emeritus. From 1984 to 1996, he was the principal investigator of the Caltech High Energy Physics Group. Research Barish's early experiments were performed at Fermilab using high-energy neutrino collisions to reveal the quark substructure of the nucleon. Among others, these experiments were the first to observe the weak neutral current, a linchpin of the electroweak unification theories of Salam, Glashow, and Weinberg. In the 1980s, he directed MACRO, an experiment in a cave in Gran Sasso, Italy, that searched for exotic particles called magnetic monopoles and also studied penetrating cosmic rays, including neutrino measurements that provided important confirmatory evidence that neutrinos have mass and oscillate. In 1991, Barish was named the Maxine and Ronald Linde Professor of Physics at Caltech. In the early 1990s, he spearheaded GEM (Gammas, Electrons, Muons), an experiment that would have run at the Superconducting Super Collider, which was approved after the earlier project L*, led by Samuel Ting (with Barish as chairman of the collaboration board), was rejected by SSC director Roy Schwitters. Barish was GEM spokesperson. Barish became the principal investigator of the Laser Interferometer Gravitational-wave Observatory (LIGO) in 1994 and director in 1997. He led the effort through the approval of funding by the NSF National Science Board in 1994, and the construction and commissioning of the LIGO interferometers in Livingston, Louisiana, and Hanford, Washington, in 1997. He created the LIGO Scientific Collaboration, which now numbers more than 1,000 collaborators worldwide, to carry out the science. The initial LIGO detectors reached design sensitivity and set many limits on astrophysical sources. 
The Advanced LIGO proposal was developed while Barish was director, and he has continued to play a leading role in LIGO and Advanced LIGO. The first detection of the merger of two 30-solar-mass black holes was made on September 14, 2015. This represented the first direct detection of gravitational waves since they were predicted by Einstein in 1916 and the first ever observation of the merger of a pair of black holes. Barish delivered the first presentation on this discovery to a scientific audience at CERN on February 11, 2016, simultaneously with the public announcement. From 2001 to 2002, Barish served as co-chair of the High Energy Physics Advisory Panel subpanel that developed a long-range plan for U.S. high energy physics. He has chaired the Commission of Particles and Fields and the U.S. Liaison committee to the International Union of Pure and Applied Physics (IUPAP). In 2002, he chaired the NRC Board of Physics and Astronomy Neutrino Facilities Assessment Committee, whose report was "Neutrinos and Beyond". From 2005 to 2013, Barish was director of the Global Design Effort for the International Linear Collider (ILC). The ILC is the highest priority future project for particle physics worldwide, as it promises to complement the Large Hadron Collider at CERN in exploring the TeV energy scale. This ambitious effort is being uniquely coordinated worldwide, representing a major step in international collaborations going from conception to design to implementation for large scale projects in physics. Honors and awards In 2002, he received the Klopsteg Memorial Award of the American Association of Physics Teachers. Barish was honored by the University of Bologna (2006) and the University of Florida (2007), where he received honorary doctorates. In 2007, he delivered the Van Vleck lectures at the University of Minnesota. The University of Glasgow honored Barish with an honorary degree of science in 2013. Barish was honored as a Titan of Physics in the On the Shoulders of Giants series at the 2016 World Science Festival. In 2016, Barish received the Enrico Fermi Prize "for his fundamental contributions to the formation of the LIGO and LIGO-Virgo scientific collaborations and for his role in addressing challenging technological and scientific aspects whose solution led to the first detection of gravitational waves". Barish was a recipient of the 2016 Smithsonian magazine's American Ingenuity Award in the Physical Science category. Barish was awarded the 2017 Henry Draper Medal from the National Academy of Sciences "for his visionary and pivotal leadership role, scientific guidance, and novel instrument design during the development of LIGO that were crucial for LIGO's discovery of gravitational waves from colliding black holes, thus directly validating Einstein's 100-year-old prediction of gravitational waves and ushering a new field of gravitational wave astronomy." Barish was a recipient of the 2017 Giuseppe and Vanna Cocconi Prize of the European Physical Society for his "pioneering and leading role in the LIGO observatory that led to the direct detection of gravitational waves, opening a new window to the Universe." Barish was a recipient of the 2017 Princess of Asturias Award for his work on gravitational waves (jointly with Kip Thorne and Rainer Weiss). 
Barish was a recipient of the 2017 Fudan-Zhongzhi Science Award "for his leadership in the construction and initial operations of LIGO, the creation of the international LIGO Scientific Collaboration, and for the successful conversion of LIGO from small science executed by a few research groups into big science that involved large collaborations and major infrastructures, which eventually enabled gravitational-wave detection" (jointly with Kip Thorne and Rainer Weiss). In 2017, he won the Nobel Prize in Physics (jointly with Rainer Weiss and Kip Thorne) "for decisive contributions to the LIGO detector and the observation of gravitational waves". In 2018, Barish was honored as the Alumnus of the Year by the University of California, Berkeley. In 2018, he received an honorary doctorate at Southern Methodist University. In 2018, he was awarded the honorary degree of Doctor Honoris Causa by Sofia University St. Kliment Ohridski. In 2023, he was awarded the inaugural Copernicus Prize, bestowed by the government of Poland on "those who made exceptional contributions to the development of world science." In 2023, he was awarded the National Medal of Science for "exemplary service to science, including groundbreaking research on sub-atomic particles. His leadership of the Laser Interferometer Gravitational-Wave Observatory led to the first detection of gravitational waves from merging black holes, confirming a key part of Einstein's Theory of Relativity. He has broadened our understanding of the universe and our Nation's sense of wonder and discovery." Barish has been elected to and held fellowship at the following organizations: the American Academy of Arts and Sciences (AAAS) the National Academy of Sciences (NAS) the National Science Board (NSB) Fellow of the American Physical Society (APS) (President 2011) Fellow of the American Association for the Advancement of Science (AAAS) Family Barry Barish is married to Samoan Barish. They have two children, Stephanie Barish and Kenneth Barish, professor and chair of Physics & Astronomy at the University of California, Riverside, and three grandchildren, Milo Barish Chamberlin, Thea Chamberlin, and Ariel Barish. 
See also List of Jewish Nobel laureates References Further reading External links Barry Barish: The Long Odyssey from Einstein to Gravitational Waves -- Popular Science Lecture, The Royal Swedish Academy of Sciences Barry Barish: From Einstein to Gravitational Waves and Beyond--2016 Tencent WE Summit Einstein, Black Holes and Cosmic Chirps - A Lecture by Barry Barish, Fermilab Barry Barish: On the Shoulders of Giants, World Science Festival Episode 10 Barry Barish discusses gravitational waves, LIGO, and the scientists who made it happen, TheIHMC Fermi Chair, Sapienza Università di Roma including the Nobel Lecture on 8 December 2017 LIGO and Gravitational Waves II profile of Barish in UC Riverside magazine Barry Barish in Hyde Park Civilization on ČT24 10.7.2021 (moderator Daniel Stach) 1936 births Living people Nobel laureates in Physics American Nobel laureates 20th-century American physicists 21st-century American physicists American people of Polish-Jewish descent Jewish American physicists California Institute of Technology faculty Fellows of the American Physical Society Gravitational-wave astronomy Members of the United States National Academy of Sciences Foreign members of the Russian Academy of Sciences Foreign members of the Royal Society United States National Science Foundation officials American particle physicists American relativity theorists Scientists from California People from Los Feliz, Los Angeles Presidents of the American Physical Society People associated with IUPAP
Barry Barish
[ "Physics", "Astronomy" ]
2,093
[ "Astronomical sub-disciplines", "Gravitational-wave astronomy", "Astrophysics" ]
36,536,284
https://en.wikipedia.org/wiki/Truncated%2016-cell%20honeycomb
In four-dimensional Euclidean geometry, the truncated 16-cell honeycomb (or cantic tesseractic honeycomb) is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed by 24-cell and truncated 16-cell facets. Alternate names Truncated hexadecachoric tetracomb / Truncated hexadecachoric honeycomb Related honeycombs See also Regular and uniform honeycombs in 4-space: Tesseractic honeycomb 16-cell honeycomb 24-cell honeycomb Rectified 24-cell honeycomb Truncated 24-cell honeycomb Snub 24-cell honeycomb 5-cell honeycomb Truncated 5-cell honeycomb Omnitruncated 5-cell honeycomb Notes References Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) (x3x3o *b3o4o), (x3x3o *b3o *b3o), x3x3o4o3o - thext - O105 5-polytopes Honeycombs (geometry) Truncated tilings
Truncated 16-cell honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
351
[ "Honeycombs (geometry)", "Truncated tilings", "Tessellation", "Crystallography", "Symmetry" ]
36,541,651
https://en.wikipedia.org/wiki/Ecovative%20Design
Ecovative Design LLC is a materials company headquartered in Green Island, New York, that provides sustainable alternatives to plastics and polystyrene foams for packaging, building materials and other applications by using mushroom technology. History Ecovative was developed from a university project of founders Eben Bayer and Gavin McIntyre. In their Inventor's Studio course at Rensselaer Polytechnic Institute taught by Burt Swersey, Eben and Gavin developed and then patented a method of growing a mushroom-based insulation, initially called Greensulate before founding Ecovative Design in 2007. In 2007 they were awarded $16,000 from the National Collegiate Inventors and Innovators Alliance. Since 2008, when they were awarded $700,000 first place in the Picnic Green Challenge the company has developed and commercialized production of a protective packaging called EcoCradle that is now used by Dell, Puma SE, and Steelcase. In 2010 they were awarded $180,000 from the National Science Foundation and in 2011 the company received investment from 3M New Ventures, The DOEN Foundation, and Rensselaer Polytechnic Institute allowing them to double their current staff of 25. In spring 2012, Ecovative Design opened a new production facility and announced a partnership with Sealed Air to expand production of the packaging materials. In 2014 their material was used in a brick form in 'Hy-Fi', a tower displayed in New York by the Museum of Modern Art and they started selling 'grow-it-yourself' kits. In November 2019, the company announced a $10M investment to support their new Mycelium Foundry. In February 2020, IKEA committed to using Ecovative technology for packaging, replacing polystyrene. In April 2021, Ecovative Design received a $60M investment to develop new applications for their technology and scale up manufacturing. Mushroom materials Mushroom materials are a novel class of renewable bio-material grown from fungal mycelium and low-value non-food agricultural materials using a patented process developed by Ecovative Design. After being left to grow in a former in a dark place for about five days during which time the fungal mycelial network binds the mixture, the resulting light robust organic compostable material can be used within many products, including building materials, thermal insulation panels and protective packaging. The process uses an agricultural waste product such as cotton hulls, cleaning the material, heating it up, inoculating it to create growth of the fungal mycelium, growing the material for period of about five days, and finally heating it to make the fungus inert. During growth, the material's shape can be molded into various products including protective packaging, building products, apparel, car bumpers, or surfboards. The environmental footprint of the products is minimized through the use of agricultural waste, reliance on natural and non-controlled growth environments, and home compostable final products. The founders' intention is that this technology should replace polystyrene and other petroleum-based products that take many years to decompose, or never do so. Protective packaging A renewable and compostable replacement for polystyrene packaging, that is also referred to as 'EcoCradle. Structural biocomposites A natural and renewable replacement for engineered wood, formed from compressed mushroom material and requiring no numerical control. 
Architect David Benjamin of The Living, working with Ecovative Design and Arup, built 'Hy-Fi', a temporary external exhibit at the Museum of Modern Art in New York City in 2014. Thermal insulation An insulation product is under development. Trials of 'Greensulate', a former product, were conducted at a Vermont school gym in May 2009. The product was later dropped when the company switched focus to the manufacture of protective packaging. Other uses Ecovative offers a 'Grow-it-yourself' kit allowing people to create mushroom materials themselves, used to create products including lamp shades. Working with the University of Aachen, Dutch designer Eric Klarenbeek used 3D printing technology to grow a chair without using plastic, metal or wood. Media Popular Science featured the composite insulation in its 2009 Invention Awards. A season six episode of CSI: New York also featured the insulation as lab technicians tested the material's flame-resistant properties after finding particles on a victim's clothing. Packaging World magazine featured Ecovative on its July 2011 cover, suggesting that the company is poised to "be a game changer in various industries." The World Economic Forum also recognized Ecovative as a Technology Pioneer in 2011. Additionally, the founders were featured on the PBS show Biz Kid$, in episode 209, "The Green Economy & You." Support The development of the material and processes has been supported by the Picnic Green Challenge, the Environmental Protection Agency, the National Collegiate Inventors and Innovators Alliance (NCIIA), ASME, the National Science Foundation, NYSERDA, 3M New Ventures, The DOEN Foundation, Rensselaer Polytechnic Institute and a license agreement with Sealed Air. In addition to an array of awards, Ecovative's materials have been extensively highlighted in Material ConneXion libraries around the world. References External links Ecovative's homepage Biomaterials Manufacturing companies based in New York (state) Companies based in Albany County, New York See also Mycobond
Ecovative Design
[ "Physics", "Biology" ]
1,090
[ "Biomaterials", "Materials", "Matter", "Medical technology" ]
40,735,644
https://en.wikipedia.org/wiki/Darienine
Darienine is an anti-cholinergic alkaloid. References Pyridine alkaloids Heterocyclic compounds with 3 rings Nitrogen heterocycles Methoxy compounds Ketones
Darienine
[ "Chemistry" ]
44
[ "Ketones", "Pyridine alkaloids", "Alkaloids by chemical classification", "Functional groups" ]
40,735,675
https://en.wikipedia.org/wiki/Li%C3%A9nard%E2%80%93Chipart%20criterion
In control theory, the Liénard–Chipart criterion is a stability criterion modified from the Routh–Hurwitz stability criterion, proposed in 1914 by French physicists A. Liénard and M. H. Chipart. This criterion has a computational advantage over the Routh–Hurwitz criterion because it involves only about half the number of determinant computations. Algorithm The Routh–Hurwitz stability criterion says that a necessary and sufficient condition for all the roots of the polynomial with real coefficients f(z) = a0 z^n + a1 z^(n−1) + ... + an (with a0 > 0) to have negative real parts (i.e. f is Hurwitz stable) is that Δ1 > 0, Δ2 > 0, ..., Δn > 0, where Δi is the i-th leading principal minor of the Hurwitz matrix associated with f. Using the same notation as above, the Liénard–Chipart criterion is that f is Hurwitz stable if and only if any one of the four conditions is satisfied: (1) an > 0, an−2 > 0, ...; Δ1 > 0, Δ3 > 0, ... (2) an > 0, an−2 > 0, ...; Δ2 > 0, Δ4 > 0, ... (3) an > 0, an−1 > 0, an−3 > 0, ...; Δ1 > 0, Δ3 > 0, ... (4) an > 0, an−1 > 0, an−3 > 0, ...; Δ2 > 0, Δ4 > 0, ... Hence one can see that by choosing one of these conditions, the number of determinants required to be evaluated is reduced. Alternatively, Fuller formulated the criterion as two lists of sign conditions (noting that Δn never needs to be checked); his formulation amounts to condition (1) above for odd n and to condition (4) above for even n. References External links Stability theory
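As a rough, hedged illustration of how one of these conditions could be checked numerically (a sketch only; the example polynomials, the helper names and the use of NumPy are assumptions for illustration, not part of the criterion itself), the following builds the Hurwitz matrix and tests condition (1), i.e. positivity of an, an−2, ... together with the odd leading principal minors Δ1, Δ3, ...:

    import numpy as np

    def hurwitz_matrix(a):
        # Hurwitz matrix of f(z) = a0*z^n + a1*z^(n-1) + ... + an, with a = [a0, ..., an].
        n = len(a) - 1
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                k = 2 * (i + 1) - (j + 1)  # 1-based rule: entry (i, j) holds a_(2i - j)
                if 0 <= k <= n:
                    H[i, j] = a[k]
        return H

    def lienard_chipart_condition_1(a):
        # Condition (1): a_n, a_(n-2), ... > 0 together with Delta_1, Delta_3, ... > 0.
        a = [float(c) for c in a]
        n = len(a) - 1
        if a[0] <= 0:
            return False
        coeffs_ok = all(a[n - j] > 0 for j in range(0, n + 1, 2))
        H = hurwitz_matrix(a)
        minors_ok = all(np.linalg.det(H[:i, :i]) > 0 for i in range(1, n + 1, 2))
        return coeffs_ok and minors_ok

    print(lienard_chipart_condition_1([1, 2, 3, 1]))   # z^3 + 2z^2 + 3z + 1: Hurwitz stable, prints True
    print(lienard_chipart_condition_1([1, -1, 1, 1]))  # z^3 - z^2 + z + 1: not stable, prints False

Note that only the odd minors are evaluated here, which is the source of the roughly halved determinant count mentioned above.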
Liénard–Chipart criterion
[ "Mathematics" ]
276
[ "Applied mathematics", "Stability theory", "Applied mathematics stubs", "Dynamical systems" ]
46,759,796
https://en.wikipedia.org/wiki/ACOT13
Acyl-CoA thioesterase 13 is a protein that in humans is encoded by the ACOT13 gene. This gene encodes a member of the thioesterase superfamily. In humans, the protein co-localizes with microtubules and is essential for sustained cell proliferation. Structure The orthologous mouse protein forms a homotetramer and is associated with mitochondria. The mouse protein functions as a medium- and long-chain acyl-CoA thioesterase. Multiple transcript variants encoding different isoforms have been found for this gene. Function The protein encoded by the ACOT13 gene is part of a family of Acyl-CoA thioesterases, which catalyze the hydrolysis of various Coenzyme A esters of various molecules to the free acid plus CoA. These enzymes have also been referred to in the literature as acyl-CoA hydrolases, acyl-CoA thioester hydrolases, and palmitoyl-CoA hydrolases. The reaction carried out by these enzymes is as follows: CoA ester + H2O → free acid + coenzyme A These enzymes use the same substrates as long-chain acyl-CoA synthetases, but have a unique purpose in that they generate the free acid and CoA, as opposed to long-chain acyl-CoA synthetases, which ligate fatty acids to CoA, to produce the CoA ester. The role of the ACOT- family of enzymes is not well understood; however, it has been suggested that they play a crucial role in regulating the intracellular levels of CoA esters, Coenzyme A, and free fatty acids. Recent studies have shown that Acyl-CoA esters have many more functions than simply an energy source. These functions include allosteric regulation of enzymes such as acetyl-CoA carboxylase, hexokinase IV, and the citrate condensing enzyme. Long-chain acyl-CoAs also regulate opening of ATP-sensitive potassium channels and activation of Calcium ATPases, thereby regulating insulin secretion. A number of other cellular events are also mediated via acyl-CoAs, for example signal transduction through protein kinase C, inhibition of retinoic acid-induced apoptosis, and involvement in budding and fusion of the endomembrane system. Acyl-CoAs also mediate protein targeting to various membranes and regulation of G Protein α subunits, because they are substrates for protein acylation. In the mitochondria, acyl-CoA esters are involved in the acylation of mitochondrial NAD+ dependent dehydrogenases; because these enzymes are responsible for amino acid catabolism, this acylation renders the whole process inactive. This mechanism may provide metabolic crosstalk and act to regulate the NADH/NAD+ ratio in order to maintain optimal mitochondrial beta oxidation of fatty acids. The role of CoA esters in lipid metabolism and numerous other intracellular processes are well defined, and thus it is hypothesized that ACOT- enzymes play a role in modulating the processes these metabolites are involved in. References External links Further reading Genes on human chromosome 6 Proteins
ACOT13
[ "Chemistry" ]
668
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
46,760,498
https://en.wikipedia.org/wiki/Fourier%20ptychography
Fourier ptychography is a computational imaging technique based on optical microscopy that consists in the synthesis of a wider numerical aperture from a set of full-field images acquired at various coherent illumination angles, resulting in increased resolution compared to a conventional microscope. Each image is acquired under the illumination of a coherent light source at various angles of incidence (typically from an array of LEDs); the acquired image set is then combined using an iterative phase retrieval algorithm into a final high-resolution image that can contain up to a billion pixels (a gigapixel) with diffraction-limited resolution, resulting in a high space-bandwidth product. Fourier ptychography reconstructs the complex image of the object (with quantitative phase information), but contrary to holography, it is a non-interferometric imaging technique and thus often easier to implement. The name "ptychography" comes from the ancient Greek word πτυχή ("to fold", also found in the word triptych), because the technique is based on multiple "views" of the object. Image reconstruction algorithms The image reconstruction algorithms are based on iterative phase retrieval, either related to the Gerchberg–Saxton algorithm or based on convex relaxation methods. Like real space ptychography, the solution of the phase problem relies on the same mathematical shift invariance constraint, except in Fourier ptychography it is the diffraction pattern in the back focal plane that is moving with respect to the back-focal plane aperture. (In traditional ptychography the illumination moves with respect to the specimen.) Many reconstruction algorithms used in real-space ptychography are therefore used in Fourier ptychography, most commonly PIE and variants such as ePIE and 3PIE. Variants of these algorithms allow for simultaneous reconstruction of the pupil function of an optical system, allowing for the correction of the aberrations of the microscope objective, and diffraction tomography which permits the 3D reconstruction of thin sample objects without requiring the angular sample scanning needed for CT scans. Advantages Fourier ptychography can be easily implemented on a conventional optical microscope by replacing the illumination source by an array of LED and improve the optical resolution by a factor 2 (with only bright-field illumination) or more (when including dark-field images to the reconstruction.) A major advantage of Fourier ptychography is the ability to use a microscope objective with a lower numerical aperture without sacrificing the resolution. The use of a lower numerical aperture allows for larger field of view, larger depth of focus, and larger working distance. Moreover, it enables effective numerical aperture larger than 1 without resorting to oil immersion. Relation to ptychography Contrary to Fourier ptychography, (conventional) ptychography swaps the role of the focus element, from an objective to become a condenser, and relies on the acquisition of diffractograms with illumination position diversity. However, the two techniques are both based on the determination of the angular spectrum of the object through a phase retrieval procedure, and inherently reconstruct the same information. Therefore, Fourier ptychography and conventional ptychography provides a bridge between coherent diffraction imaging and full-field microscopy. See also Ptychography Computational imaging Synthetic-aperture radar References Microscopy Imaging
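As a rough sketch of the kind of forward model and iterative update described above (a toy illustration only, not the algorithm of any particular paper or product; the array sizes, the idealized circular pupil, the simple magnitude-replacement update and all variable names are assumptions):

    import numpy as np

    def circular_pupil(n, radius):
        # Idealized circular pupil (the objective's numerical aperture) in the Fourier plane.
        y, x = np.indices((n, n)) - n // 2
        return ((x ** 2 + y ** 2) <= radius ** 2).astype(float)

    def low_res_intensity(obj, shift, pupil):
        # One acquisition: oblique illumination shifts the object spectrum, the pupil
        # low-pass filters it, and the camera records only the intensity.
        spectrum = np.fft.fftshift(np.fft.fft2(obj))
        passband = np.roll(spectrum, shift, axis=(0, 1)) * pupil
        field = np.fft.ifft2(np.fft.ifftshift(passband))
        return np.abs(field) ** 2

    def update_spectrum(spec_est, measured, shift, pupil):
        # Naive Gerchberg-Saxton-style step: keep the current phase, impose the measured
        # magnitude, and write the corrected values back into the passband only.
        shifted = np.roll(spec_est, shift, axis=(0, 1))
        field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
        field = np.sqrt(measured) * np.exp(1j * np.angle(field))
        corrected = np.fft.fftshift(np.fft.fft2(field))
        shifted = shifted * (1 - pupil) + corrected * pupil
        return np.roll(shifted, (-shift[0], -shift[1]), axis=(0, 1))

    # Simulate a small data set and run a few sweeps of the update.
    rng = np.random.default_rng(0)
    obj = rng.random((64, 64)) * np.exp(1j * rng.random((64, 64)))  # stand-in complex object
    pupil = circular_pupil(64, 12)
    shifts = [(dy, dx) for dy in (-16, 0, 16) for dx in (-16, 0, 16)]
    images = [low_res_intensity(obj, s, pupil) for s in shifts]
    spec = np.fft.fftshift(np.fft.fft2(np.sqrt(images[4])))  # crude initial guess from the on-axis image
    for _ in range(20):
        for s, img in zip(shifts, images):
            spec = update_spectrum(spec, img, s, pupil)
    recovered = np.fft.ifft2(np.fft.ifftshift(spec))

The point of the sketch is only the structure of the method: each illumination angle contributes one passband of the object spectrum, and repeatedly enforcing the measured magnitudes stitches those passbands into a wider synthetic aperture with both amplitude and phase.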
Fourier ptychography
[ "Chemistry" ]
669
[ "Microscopy" ]
46,767,247
https://en.wikipedia.org/wiki/Energy%20cascade
In continuum mechanics, an energy cascade involves the transfer of energy from large scales of motion to the small scales (called a direct energy cascade) or a transfer of energy from the small scales to the large scales (called an inverse energy cascade). This transfer of energy between different scales requires that the dynamics of the system is nonlinear. Strictly speaking, a cascade requires the energy transfer to be local in scale (only between fluctuations of nearly the same size), evoking a cascading waterfall from pool to pool without long-range transfers across the scale domain. This concept plays an important role in the study of well-developed turbulence. It was memorably expressed in a poem by Lewis F. Richardson in the 1920s. Energy cascades are also important for wind waves in the theory of wave turbulence. Consider for instance turbulence generated by the air flow around a tall building: the energy-containing eddies generated by flow separation have sizes of the order of tens of meters. Somewhere downstream, dissipation by viscosity takes place, for the most part, in eddies at the Kolmogorov microscales: of the order of a millimetre for the present case. At these intermediate scales, there is neither a direct forcing of the flow nor a significant amount of viscous dissipation, but there is a net nonlinear transfer of energy from the large scales to the small scales. This intermediate range of scales, if present, is called the inertial subrange. The dynamics at these scales is described by use of self-similarity, or by assumptions – for turbulence closure – on the statistical properties of the flow in the inertial subrange. A pioneering work was the deduction by Andrey Kolmogorov in the 1940s of the expected wavenumber spectrum in the turbulence inertial subrange. Spectra in the inertial subrange of turbulent flow The largest motions, or eddies, of turbulence contain most of the kinetic energy, whereas the smallest eddies are responsible for the viscous dissipation of turbulence kinetic energy. Kolmogorov hypothesized that when these scales are well separated, the intermediate range of length scales would be statistically isotropic, and that its characteristics in equilibrium would depend only on the rate at which kinetic energy is dissipated at the small scales. Dissipation is the frictional conversion of mechanical energy to thermal energy. The dissipation rate, ε, may be written down in terms of the fluctuating rates of strain in the turbulent flow and the fluid's kinematic viscosity, ν. It has dimensions of energy per unit mass per second. In equilibrium, the production of turbulence kinetic energy at the large scales of motion is equal to the dissipation of this energy at the small scales. Energy spectrum of turbulence The energy spectrum of turbulence, E(k), is related to the mean turbulence kinetic energy per unit mass as (1/2)⟨ui ui⟩ = ∫ E(k) dk (integrated over all wavenumbers k from 0 to ∞), where ui are the components of the fluctuating velocity, the angle brackets denote an ensemble average, summation over i is implied, and k is the wavenumber. The energy spectrum, E(k), thus represents the contribution to turbulence kinetic energy by wavenumbers from k to k + dk. The largest eddies have low wavenumber, and the small eddies have high wavenumbers. Since diffusion goes as the Laplacian of velocity, the dissipation rate may be written in terms of the energy spectrum as ε = 2ν ∫ k2 E(k) dk, with ν the kinematic viscosity of the fluid. 
From this equation, it may again be observed that dissipation is mainly associated with high wavenumbers (small eddies) even though kinetic energy is associated mainly with lower wavenumbers (large eddies). Energy spectrum in the inertial subrange The transfer of energy from the low wavenumbers to the high wavenumbers is the energy cascade. This transfer brings turbulence kinetic energy from the large scales to the small scales, at which viscous friction dissipates it. In the intermediate range of scales, the so-called inertial subrange, Kolmogorov's hypotheses lead to the following universal form for the energy spectrum: E(k) = C ε2/3 k−5/3. An extensive body of experimental evidence supports this result, over a vast range of conditions. Experimentally, the value C ≈ 1.5 is observed. The result was first stated independently by Alexander Obukhov in 1941. Obukhov's result is equivalent to a Fourier transform of Kolmogorov's 1941 result for the turbulent structure function. Spectrum of pressure fluctuations The pressure fluctuations in a turbulent flow may be similarly characterized. The mean-square pressure fluctuation in a turbulent flow may be represented by a pressure spectrum, Π(k): ⟨p2⟩ = ∫ Π(k) dk. For the case of turbulence with no mean velocity gradient (isotropic turbulence), the spectrum in the inertial subrange is given by Π(k) = α ρ2 ε4/3 k−7/3, where ρ is the fluid density, and α = 1.32 C2 = 2.97. A mean-flow velocity gradient (shear flow) creates an additional, additive contribution to the inertial subrange pressure spectrum which varies as k−11/3; but the k−7/3 behavior is dominant at higher wavenumbers. Spectrum of turbulence-driven disturbances at a free liquid surface Pressure fluctuations below the free surface of a liquid can drive fluctuating displacements of the liquid surface, which at small wavelengths are modulated by surface tension. This free-surface–turbulence interaction may also be characterized by a wavenumber spectrum. If δ is the instantaneous displacement of the surface from its average position, the mean squared displacement may be represented with a displacement spectrum G(k) as: ⟨δ2⟩ = ∫ G(k) dk. A three dimensional form of the pressure spectrum may be combined with the Young–Laplace equation to show that the displacement spectrum in this range varies as G(k) ∝ k−19/3. Experimental observation of this k−19/3 law has been obtained by optical measurements of the surface of turbulent free liquid jets. Notes References External links Turbulence Water waves
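A small numerical illustration of the scale separation discussed above (a sketch only; the dissipation rate is an assumed value chosen to reproduce the rough orders of magnitude quoted for the tall-building example, and the Kolmogorov constant is the approximate experimental value):

    import numpy as np

    nu = 1.5e-5    # kinematic viscosity of air, m^2/s (typical value)
    eps = 1.0e-2   # assumed dissipation rate, m^2/s^3 (illustrative only)
    C = 1.5        # Kolmogorov constant, approximate experimental value

    # Kolmogorov microscale: the size of the smallest, dissipative eddies.
    eta = (nu ** 3 / eps) ** 0.25
    print(f"Kolmogorov microscale eta = {eta * 1e3:.2f} mm")  # of the order of a millimetre

    # Inertial-subrange spectrum E(k) = C eps^(2/3) k^(-5/3), sampled between the
    # energy-containing scale (~10 m) and the dissipative scale (~eta).
    k = np.logspace(np.log10(2 * np.pi / 10.0), np.log10(2 * np.pi / eta), 5)
    E = C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)
    for ki, Ei in zip(k, E):
        print(f"k = {ki:10.2f} 1/m   E(k) = {Ei:.3e} m^3/s^2")

With these assumed numbers the microscale comes out near a millimetre while the energy-containing eddies sit at tens of metres, so the inertial subrange spans several decades of wavenumber.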
Energy cascade
[ "Physics", "Chemistry" ]
1,227
[ "Physical phenomena", "Turbulence", "Water waves", "Waves", "Fluid dynamics" ]
46,769,662
https://en.wikipedia.org/wiki/Relic%20abundance
In cosmology, the relic abundance of a given elementary particle is a measure of the present quantity of that particle remaining from the Big Bang. Uses Relic abundance is modelled for WIMPs (weakly interacting massive particles) in the study of dark matter. Calculation Assuming that an elementary particle was formerly in thermal equilibrium, its relic abundance may be calculated using a Boltzmann equation. The temperature-scaled abundance of a particle is defined by Y = n/T3, where n is the number density: that is, the number of particles per physical volume (not the comoving volume). The relic abundance of a particle is given by Y∞, the asymptotic value of the abundance of a particle species, which it will reach after its "freeze-out". References Physical cosmology
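A minimal sketch of the kind of Boltzmann-equation calculation mentioned above, written in a commonly used schematic dimensionless form (the annihilation strength, the equilibrium normalisation and the integration range are placeholders for illustration, not parameters of any real dark-matter candidate):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Schematic freeze-out equation:  dY/dx = -(lam / x^2) * (Y^2 - Yeq^2),  x = m/T,
    # with a non-relativistic equilibrium abundance Yeq(x) ~ a * x^(3/2) * exp(-x).
    lam = 1.0e9   # illustrative effective annihilation strength
    a = 0.145     # illustrative normalisation of Yeq

    def Yeq(x):
        return a * x ** 1.5 * np.exp(-x)

    def rhs(x, Y):
        return [-(lam / x ** 2) * (Y[0] ** 2 - Yeq(x) ** 2)]

    # Stiff solver: Y tracks Yeq closely until freeze-out, then flattens to its asymptote.
    sol = solve_ivp(rhs, (1.0, 1000.0), [Yeq(1.0)], method="BDF", rtol=1e-8, atol=1e-30)
    print(f"asymptotic (relic) abundance Y_infinity ~ {sol.y[0, -1]:.3e}")

The qualitative behaviour is the point here: the abundance follows its equilibrium value while interactions are fast, departs from it when they become slow compared with the expansion, and then approaches the asymptotic value identified with the relic abundance.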
Relic abundance
[ "Physics", "Astronomy" ]
151
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Particle physics", "Particle physics stubs", "Physical cosmology" ]
49,710,013
https://en.wikipedia.org/wiki/Palladium%E2%80%93NHC%20complex
In organometallic chemistry, palladium-NHC complexes are a family of organopalladium compounds in which palladium forms a coordination complex with N-heterocyclic carbenes (NHCs). They have been investigated for applications in homogeneous catalysis, particularly cross-coupling reactions. Synthesis The synthesis of Pd-NHC complexes follows the methods used for the synthesis of transition metal NHC complexes. The synthesis of Pd-NHC complexes can also be achieved through substitution of a labile ligand L in a Pd-L complex. Labile ligands typically include cyclooctadiene, dibenzylideneacetone, bridging halides, or phosphines. This process can be used in conjunction with the in situ generation of free carbenes. Pd-NHC complexes can also be synthesized through transmetalation with silver-NHC complexes. The transmetallated NHCs can either be isolated for subsequent reaction with palladium in a two-step method, or generated in the presence of palladium in a one-pot reaction. However, generation of Pd-NHC complexes by Ag transmetallation is cost-prohibitive and hampered by Ag complexes’ light sensitivity. Pd-NHC complexes in catalytic cross-coupling The utility of palladium-catalyzed cross-coupling reactions is enhanced by the use of N-heterocyclic carbene ligands. Indeed, Pd-NHC complexes have been proven effective in Suzuki-Miyaura, Negishi, Sonogashira, Kumada-Tamao-Corriu, Hiyama, and Stille cross-coupling. Compared to the corresponding Pd-phosphine catalysts, Pd-NHC catalysts can be faster, exhibit broader substrate scope, all with higher turnover numbers. Suzuki-Miyaura cross-coupling In Suzuki-Miyaura cross-couplings, the traditional coupling partners are organobromides and organoboron compounds. While Suzuki-Miyaura cross-couplings typically employ organobromides as coupling partners, organochlorides are more desirable electrophiles for cross-coupling due to their lower cost. The sluggish reactivity of the C-Cl bond is often a problem. With the advent of Pd-NHC complexes, organochlorides have emerged as viable partners in Suzuki-Miyaura cross coupling. Negishi coupling The use of NHC-Pd-PEPPSI complexes in Negishi cross-coupling has resulted in high turnover numbers and turnover frequencies. Additionally, NHC-Pd complexes can be used to couple sp3 centers to sp3 centers in higher yield than their non-NHC Pd analogs. However, studies of Pd-NHC complexes and their utility in Negishi coupling are currently lacking despite these promising results. Sonogashira coupling Pd-NHC complexes used in Sonogashira cross-coupling effect temperature stability in the complex. As in other Pd-NHC mediated cross-coupling reactions, the use of Pd-NHC complexes also allow higher turnover numbers than their NHC-free counterparts. NHC-palladacycles permit copper-free Sonogashira reactions to be carried out. Heck-Mizoroki coupling The use of Pd-NHC complexes in Heck-Mizoroki cross-coupling permits the use of cheaper, ample supplies of aryl chloride substrates. Additionally, the activity and stability of the catalyst in Heck-Mizoroki coupling can be enhanced by adjusting the 1,3 substituents on the imidazole ring. References Organopalladium compounds Organometallic chemistry Carbon-carbon bond forming reactions Catalysis
Palladium–NHC complex
[ "Chemistry" ]
771
[ "Catalysis", "Carbon-carbon bond forming reactions", "Organic reactions", "Chemical kinetics", "Organometallic chemistry" ]
49,714,181
https://en.wikipedia.org/wiki/Bis%28trifluoromethyl%29peroxide
Bis(trifluoromethyl)peroxide (BTP) is a fluorocarbon derivative first produced by Frédéric Swarts. It has some utility as a radical initiator for polymerisation reactions. BTP is unusual in the fact that, unlike many peroxides, it is a gas, is non-explosive, and has good thermal stability. History BTP was first synthesised by an electrolysis reaction using aqueous solutions containing trifluoroacetate ion but only in trace amounts. BTP was one of the by-products formed during trifluoromethylation reactions carried out by Swarts. Later it was discovered that BTP had some unusual properties and so more economically viable synthesis routes were sought. An early example was that of Porter and Cady, who were able to achieve a conversion rate of around 20-30% at atmospheric pressure and up to 90% at elevated pressure in an autoclave. Synthesis and reaction Present methods of the synthesis of BTP involves the reaction of carbonyl fluoride and chlorine trifluoride at 0-300 °C. An example of this reaction is the reaction of carbonyl fluoride and chlorine trifluoride in the presence of alkali metal fluorides or bifluorides at 100-250 °C. This example is quite insensitive to variations in temperature. Examples of the synthesis are: 2CF2O + ClF3 → CF3OOCF3 + ClF 6CF2O + 2ClF3 → 3CF3OOCF3 + Cl2 BTP can be isolated and purified by well-recognised procedures. In the mixture used to synthesize the compound chlorine monofluoride and chlorine trifluoride may still be present. These compounds are highly reactive and hazardous and are preferably deactivated as soon as possible. The deactivation is carried out by adding anhydrous calcium chloride to the mixture. The deactivated mixture is scrubbed with water and diluted caustically to remove the chlorine and residual carbonyl before drying to yield pure BTP. Metabolism In mammals, there are pathways for the metabolism of peroxides using various enzymes of the peroxidase class. For BTP, this would correspond to the following general reaction scheme: Peroxidase + C2F6O2 → 2CF3O− The peroxidase will then undergo two sequential electron transfers to return to its original state. Hepatoxicity Simulation of the toxicity of BTP has shown that organic peroxides can cause peripheral and centrilobular zonal hepatic necrosis, increased liver weight and hepatic enzymes and fatty changes in hepatocytes. This occurs in both humans and experimental animals. The toxicity of peroxides is thought to be caused by the formation of reactive oxygen species (ROS) which are involved in lipid peroxidation further oxidative cellular damage. Organic peroxides are often industrially used as oxidising agents. Exposure to such agents, for instance in the reported case of humans that were exposed to methyl ethyl ketone peroxide (MEKP), has been shown to cause peripheral zonal necrosis, increased hepatic enzyme levels and atypical pseudo-ductular proliferation at doses between 50 and 100 mL. Past animal studies have shown good correlations between organic peroxide damage in human case reports and test animals. 28-day repeat- dose studies of 1,1-bis (tert-butyldioxy)-3,3,5-trimethylcyclohexane and dicumyl peroxide in rats showed liver weight increase, periportal fatty changes and centrilobular hypertrophy of hepatocytes. A proposed mechanism for the toxicity of organic peroxides involves damage via formation of ROS, which is mediated by cytochrome P450. 
This then leads to lipid peroxidation of the hepatocyte membranes, alkylation of cellular macromolecules, reduced glutathione, and altered calcium homeostasis. Identification of carboxyl, peroxyl, hydroxyl and alkoxyl radicals in test rats gives plausibility to the involvement of an oxidative system. Oral rat repeat-dose studies with organic peroxides over 28 days have also shown alterations in rat kidneys in the form of histopathologic lesions. See also List of highly toxic gases Bis(trifluoromethyl) disulfide References Organic peroxides Trifluoromethyl compounds Gases
Bis(trifluoromethyl)peroxide
[ "Physics", "Chemistry" ]
965
[ "Matter", "Phases of matter", "Organic compounds", "Statistical mechanics", "Gases", "Organic peroxides" ]
49,716,057
https://en.wikipedia.org/wiki/SACEM%20%28railway%20system%29
The Système d'aide à la conduite, à l'exploitation et à la maintenance (SACEM) is an embedded automatic train protection and speed control system for rapid transit railways. The name means "Driver Assistance, Operation, and Maintenance System". It was developed in France by GEC-Alsthom, Matra (now part of Siemens Mobility) and CSEE (now part of Hitachi Rail STS) in the 1980s. It was first deployed on the RER A suburban railway in Paris in 1989. Afterwards it was installed: on the Santiago Metro in Santiago, Chile; on some of the MTR lines in Hong Kong (Kwun Tong line, Tsuen Wan line, Island line, Tseung Kwan O line, Airport Express and Tung Chung line), all enhanced with ATO; on Lines A, B and 8 of the Mexico City Metro; and on Shanghai Metro Line 3. In 2017 the SACEM system in Paris was enhanced with Automatic Train Operation (ATO) and was put in full operation at the end of 2018. The SACEM system in Paris is to be enhanced to a fully fledged CBTC system named NExTEO. First to be deployed on the newly extended RER E in 2024, it is proposed to replace signalling and control on all RER lines. Operation The SACEM system enables a train to receive signals from devices under the tracks. A receiver in the train cabin interprets the signal, and sends data to the console so the driver can see it. A light on the console indicates the speed control setting: an orange light means slow speed; a red light means full stop. If the driver alters the speed, a warning buzzer may sound. If the system determines that the speed might be unsafe, and the driver does not change it within a few seconds, SACEM engages the emergency brake. SACEM also allows for a reduction in potential train bunching and easier recovery from delays, thereby safely increasing operating frequencies as much as possible, especially during rush hour. References External links Operation principles and examples with pictures in Hong Kong Embedded systems Rapid transit Railway signalling
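A schematic sketch of the supervision behaviour described above (purely illustrative logic; the reaction window, the speeds and all names are assumptions, not SACEM's actual parameters or interfaces):

    from dataclasses import dataclass

    @dataclass
    class SpeedSupervisor:
        grace_seconds: float = 3.0   # illustrative reaction window, not a real SACEM value
        overspeed_timer: float = 0.0
        emergency_brake: bool = False

        def update(self, speed_kmh: float, permitted_kmh: float, dt: float) -> str:
            # Return the console indication for one control cycle of length dt seconds.
            if self.emergency_brake:
                return "EMERGENCY BRAKE"
            if speed_kmh <= permitted_kmh:
                self.overspeed_timer = 0.0
                return "OK"
            # Speed may be unsafe: warn, and brake if the driver does not react in time.
            self.overspeed_timer += dt
            if self.overspeed_timer >= self.grace_seconds:
                self.emergency_brake = True
                return "EMERGENCY BRAKE"
            return "WARNING (buzzer)"

    sup = SpeedSupervisor()
    for second in range(6):
        print(second, sup.update(speed_kmh=85.0, permitted_kmh=80.0, dt=1.0))

The sketch only captures the sequence described in the article: continuous comparison against the permitted speed, a warning phase giving the driver a few seconds to react, and an emergency brake application if the overspeed persists.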
SACEM (railway system)
[ "Technology", "Engineering" ]
436
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
49,726,991
https://en.wikipedia.org/wiki/Presidential%20Emergency%20Facility
A Presidential Emergency Facility (PEF), also called Presidential Emergency Relocation Centers and VIP Evacuation and Support Facilities, is a fortified, working residence intended for use by the president of the United States should normal presidential residences, such as the White House or Camp David, be destroyed or overrun during war or other types of national emergencies. Some Presidential Emergency Facilities are specially designated sections of existing government and military installations, while others are dedicated sites that have been purpose-built. Various sources state there are, or were, between 9 and 75 such facilities. Quantity and location In his 1984 journalistic expose “The Day After World War III”, Edward Zuckerman states there were then nine Presidential Emergency Facilities within a 25-minute helicopter trip from Washington, D.C. According to Zuckerman, sites known to him at that time were code-named Cartwheel (at Fort Reno Park), Corkscrew, Cowpuncher, and Cannonball (Cross Mountain, Pennsylvania), though all have since been decommissioned. The White House itself is known as Crown while the presidential compound at the High Point Special Facility is Crystal (sometimes referred to as Crystal Palace). In a 2004 report to the Federal Communications Commission (FCC) concerning Corkscrew, which at the time had been decommissioned as a PEF site and transferred to the FCC, historian David Rotenstein contended there were 75 PEFs “scattered throughout the United States”, a number also claimed by the Brookings Institution. Design and staffing Construction on Presidential Emergency Facilities began in the 1960s from classified, black budget government appropriations. Purpose-built Presidential Emergency Facilities are silo-like structures constructed from reinforced concrete that sit atop an underground warren of bunkers and chambers designed to withstand a nuclear explosion. One of the few descriptions of a Presidential Emergency Facility observed while still in operation was provided by U.S. Coast Guard Captain Alex R. Larzelere, a former White House military aide, who visited one such site in 1968. Larzelere went on to describe the subterranean interior of the site, noting there were quarters for the president and his staff with beds kept ready for immediate use with fresh linens, communications facilities, and stores stocked with emergency rations, medicine, and other supplies to sustain several people for a prolonged period. Bill Gulley, a former U.S. Marine assigned to the White House Military Office, reported in 1980 that PEFs were all "manned twenty-four hours a day". See also Federal Relocation Arc Ground-Mobile Command Center White House Big Dig References Nuclear warfare United States nuclear command and control Disaster preparedness in the United States Continuity of government in the United States
Presidential Emergency Facility
[ "Chemistry" ]
537
[ "Radioactivity", "Nuclear warfare" ]
49,729,992
https://en.wikipedia.org/wiki/Hybrid%20argument%20%28cryptography%29
In cryptography, the hybrid argument is a proof technique used to show that two distributions are computationally indistinguishable. History Hybrid arguments had their origin in papers by Andrew Yao in 1982 and Shafi Goldwasser and Silvio Micali in 1983. Formal description Formally, to show two distributions D1 and D2 are computationally indistinguishable, we can define a sequence of hybrid distributions D1 := H0, H1, ..., Ht =: D2 where t is polynomial in the security parameter n. Define the advantage of any probabilistic efficient (polynomial-bounded time) algorithm A as Adv_A(X, Y) = | Pr[x ←$ X : A(x) = 1] − Pr[y ←$ Y : A(y) = 1] |, where the dollar symbol ($) denotes that we sample an element from the distribution at random. By the triangle inequality, it is clear that for any probabilistic polynomial time algorithm A, Adv_A(D1, D2) ≤ Adv_A(H0, H1) + Adv_A(H1, H2) + ... + Adv_A(Ht−1, Ht). Thus there must exist some k s.t. 0 ≤ k < t(n) and Adv_A(Hk, Hk+1) ≥ Adv_A(D1, D2) / t(n). Since t is polynomial-bounded, for any such algorithm A, if we can show that it has a negligible advantage function between distributions Hi and Hi+1 for every i, that is, Adv_A(Hi, Hi+1) ≤ negl(n), then it immediately follows that its advantage to distinguish the distributions D1 = H0 and D2 = Ht must also be negligible. This fact gives rise to the hybrid argument: it suffices to find such a sequence of hybrid distributions and show each pair of them is computationally indistinguishable. Applications The hybrid argument is extensively used in cryptography. Some simple proofs using hybrid arguments are: If one cannot efficiently predict the next bit of the output of some number generator, then this generator is a pseudorandom number generator (PRG). We can securely expand a PRG with 1-bit output into a PRG with n-bit output. See also Interactive proof system Universal composability Notes References Cryptography
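A minimal numerical sketch of the telescoping step above (the Bernoulli distributions, the bias values, the number of hybrids and the trivial distinguisher are assumptions chosen only for illustration):

    # D1 = H_0 and D2 = H_t are Bernoulli distributions whose bias drifts in t small steps;
    # for the distinguisher A(b) = b, the advantage between biases p and q is exactly |p - q|.
    t = 10
    biases = [0.50 + 0.005 * i for i in range(t + 1)]

    def advantage(p, q):
        return abs(p - q)

    total = advantage(biases[0], biases[-1])
    per_step = [advantage(biases[i], biases[i + 1]) for i in range(t)]

    print(f"Adv(D1, D2)         = {total:.3f}")
    print(f"sum of hybrid steps = {sum(per_step):.3f}   (triangle-inequality bound)")
    print(f"largest single step = {max(per_step):.3f}   (at least Adv(D1, D2)/t = {total / t:.3f})")

The arithmetic mirrors the argument: the end-to-end advantage is bounded by the sum of the per-hybrid advantages, so some adjacent pair of hybrids must carry at least a 1/t fraction of it, and conversely negligible per-hybrid advantages force a negligible total.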
Hybrid argument (cryptography)
[ "Mathematics", "Engineering" ]
386
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
30,073,529
https://en.wikipedia.org/wiki/C22H26O11
The molecular formula C22H26O11 (molar mass: 466.43 g/mol, exact mass: 466.147512 u) may refer to: Agnuside Curculigoside A Molecular formulas
C22H26O11
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
30,079,285
https://en.wikipedia.org/wiki/San%20Buenaventura%20Conservancy
The San Buenaventura Conservancy for Preservation is an historic preservation organization in Ventura, California also known by its early name of San Buenaventura. It works to recognize and revitalize historic, archeological and cultural resources in the region. The Conservancy is a non-profit 501c3 organization. The group was formed in 2004 after the demolition of the Mayfair Theater, an S. Charles Lee, Streamline Moderne, movie theater in downtown Ventura, California that was razed and replaced with a condominium project. Mission The San Buenaventura Conservancy mission statement: "To work through advocacy and outreach to recognize preserve and revitalize the irreplaceable historic, architectural and cultural resources of San Buenaventura and surrounding areas. To seek to increase public awareness of and participation in local preservation issues, and disseminate information useful in the preservation of structures and neighborhoods of San Buenaventura." San Buenaventura Conservancy website Programs & Projects The organization produces annual historic architecture tours in the historic neighborhoods and districts in midtown, downtown and the west side of Ventura, California. The conservancy is an all-volunteer organization with a ten-member board. Some of the Conservancy's most successful projects outside of the Ventura architectural Weekend tours and trade shows has been the ability of the board to work closely the City of Ventura, California and developers to find preservation solutions for historic buildings. At times the Conservancy advocates for specific historic buildings like Willett Ranch link to article and the Top Hat Burger Palace in Ventura, the Frank Petit House in South Oxnard, California, the Charles McCoy house, in Port Hueneme, California, and the Bracero farm Worker Camp in Piru, California, and the Wagon Wheel Motel on the 101 Freeway in Oxnard, California. Additionally the Conservancy works to strengthen preservation policies of local municipalities. It has achieved success at integrating appropriate preservation actions and policies into Ventura's General Plan, Downtown Specific Plan, and Westside Community Plan. On March 2, 2009 the San Buenaventura Conservancy – with attorney Susan Brandt-Hawley – filed suit in Ventura County Superior Court against the City of Oxnard, California, claiming that the City’s approval of the Oxnard Village Specific Plan project violated the California Environmental Quality Act (CEQA). link to article The project, as approved, requires the demolition of the Wagon Wheel Motel and restaurant, El Ranchito and bowling alley along with everything built on the site. The Conservancy case argues that the project can be feasibly accomplished without demolition of the Wagon Wheel, and CEQA therefore does not allow the Class 1 impact. The lawsuit requests issuance of a peremptory writ ordering the City to set aside its approval of the project pending compliance with CEQA. The original Ventura County Superior Court case was presented July 10, 2009. The Judge sided with the city of Oxnard. The San Buenaventura Conservancy has appealed the ruling and received a stay of demolition until the outcome of the appeal case: San Buenaventura Conservancy v. City of Oxnard et al. (CEQA) (Case: B220512 2nd District, Division 6.) 
On Wednesday, December 15, 2010, a three-judge panel at the California 2nd District Court of Appeal in Ventura heard arguments from the attorneys in the case: Susan Brandt-Hawley for the San Buenaventura Conservancy, and Rachel Cook representing the developer (Oxnard Village Investments, LLC) and the City of Oxnard, California. The Court of Appeal sided with the City of Oxnard on March 17, 2011, agreeing with the Superior Court that the CEQA review was sufficient. The Wagon Wheel was demolished a week later.
References Shepherd, Dirk, Save the Wagon Wheel, VC Reporter, January 11, 2007 Levin, Charles, Old motel might be declared landmark, Ventura County Star, January 23, 2007 Singer, Matthew, Looking for a landmark, VC Reporter, January 25, 2007 Clerici, Kevin, Group sues Ventura to halt razing of ranch, Ventura County Star, June 28, 2007 Lascher, Bill, Hotel could occupy Chumash Village site, VC Reporter, February 7, 2008 Clerici, Kevin, Historical Willett buildings to remain on site, Ventura County Star, February 27, 2008 Lascher, Bill, Endangered Heritage and San Buenaventura Conservancy's 11 most endangered list, VC Reporter, June 12, 2008 Cason, Coleen, Changing days, landmarks in photo calendar, Ventura County Star, December 28, 2008 The San Buenaventura Conservancy hosts an architectural, archaeological tour, VC Reporter, January 15, 2009 Hadley, Scott, Oxnard Wagon Wheel Development to be taken up by council, Ventura County Star, January 27, 2009 Hadley, Scott, Wagon Wheel redevelopment approved, Ventura County Star, January 29, 2009 Sullivan, Michael, Historical homes in Oxnard meet a fiery grave this week, VC Reporter, February 26, 2009 Foster, Margaret, Lawsuit Stalls Loss of 1947 Motel, Preservation Magazine (online), March 26, 2009 Foster, Margaret, Calif. City Burns Down 1883 Farmhouses, Preservation Magazine (online), March 31, 2009 Memorial Boulder Keeps on Rollin', Ventura County Star, March 29, 2009 Chawkins, Steve, Trying to keep Oxnard's Wagon Wheel in place, Los Angeles Times, April 10, 2009 Hadley, Scott, Judge Blocks Demolition of Wagon Wheel Buildings, Ventura County Star, October 31, 2009 Sisolak, Paul, Wagon Wheel Headed Back To Court, VC Reporter, November 12, 2009 Hadley, Scott, Judge Clears Way for Wagon Wheel Demolition, Ventura County Star, November 17, 2009 Sisolak, Paul, Court Appeal Possible in Wagon Wheel Preservation Case, VC Reporter, November 19, 2009 McKinnon, Lisa, Ventura's Top Hat Burger Palace given 30 days to vacate site, Ventura County Star, January 8, 2010 Cohn, Shane, Up for debate: The future of Ventura’s Westside may rest in Rancho Cañada Larga, VC Reporter, December 9, 2010 Hadley, Scott, Final arguments presented in Wagon Wheel case, Ventura County Star, December 15, 2010 External links National Trust for Historic Preservation Wagon Wheel Photos on Flickr San Buenaventura Conservancy website Historic preservation organizations in the United States Heritage organizations Architectural history Urban planning in California
San Buenaventura Conservancy
[ "Engineering" ]
1,426
[ "Architectural history", "Architecture" ]
57,042,156
https://en.wikipedia.org/wiki/Fractionation%20of%20carbon%20isotopes%20in%20oxygenic%20photosynthesis
Photosynthesis converts carbon dioxide to carbohydrates via several metabolic pathways that provide energy to an organism and preferentially react with certain stable isotopes of carbon. The selective enrichment of one stable isotope over another creates distinct isotopic fractionations that can be measured and correlated among oxygenic phototrophs. The degree of carbon isotope fractionation is influenced by several factors, including the metabolism, anatomy, growth rate, and environmental conditions of the organism. Understanding these variations in carbon fractionation across species is useful for biogeochemical studies, including the reconstruction of paleoecology, plant evolution, and the characterization of food chains. Oxygenic photosynthesis is a metabolic pathway facilitated by autotrophs, including plants, algae, and cyanobacteria. This pathway converts inorganic carbon dioxide from the atmosphere or aquatic environment into carbohydrates, using water and energy from light, then releases molecular oxygen as a product. Organic carbon contains less of the stable isotope Carbon-13, or 13C, relative to the initial inorganic carbon from the atmosphere or water because photosynthetic carbon fixation involves several fractionating reactions with kinetic isotope effects. These reactions undergo a kinetic isotope effect because they are limited by overcoming an activation energy barrier. The lighter isotope has a higher energy state in the quantum well of a chemical bond, allowing it to be preferentially formed into products. Different organisms fix carbon through different mechanisms, which are reflected in the varying isotope compositions across photosynthetic pathways (see table below, and explanation of notation in "Carbon Isotope Measurement" section). The following sections will outline the different oxygenic photosynthetic pathways and what contributes to their associated delta values. Carbon isotope measurement Carbon on Earth naturally occurs in two stable isotopes, with 98.9% in the form of 12C and 1.1% in 13C. The ratio between these isotopes varies in biological organisms due to metabolic processes that selectively use one carbon isotope over the other, or "fractionate" carbon through kinetic or thermodynamic effects. Oxygenic photosynthesis takes place in plants and microorganisms through different chemical pathways, so various forms of organic material reflect different ratios of 13C isotopes. Understanding these variations in carbon fractionation across species is applied in isotope geochemistry and ecological isotope studies to understand biochemical processes, establish food chains, or model the carbon cycle through geological time. Carbon isotope fractionations are expressed in using delta notation of δ13C ("delta thirteen C"), which is reported in parts per thousand (per mille, ‰). δ13C is defined in relation to the Vienna Pee Dee Belemnite (VPDB, 13C/12C = 0.01118) as an established reference standard. This is called a "delta value" and can be calculated from the formula below: Photosynthesis reactions The chemical pathway of oxygenic photosynthesis fixes carbon in two stages: the light-dependent reactions and the light-independent reactions. The light-dependent reactions capture light energy to transfer electrons from water and convert NADP+, ADP, and inorganic phosphate into the energy-storage molecules NADPH and ATP. 
The overall equation for the light-dependent reactions is generally: 2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2. The light-independent reactions undergo the Calvin-Benson cycle, in which the energy from NADPH and ATP is used to convert carbon dioxide and water into organic compounds via the enzyme RuBisCO. The overall general equation for the light-independent reactions is the following: 3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O. The 3-carbon products (C3H6O3-phosphate) of the Calvin cycle are later converted to glucose or other carbohydrates such as starch, sucrose, and cellulose. Fractionation via RuBisCO The large fractionation of 13C in photosynthesis is due to the carboxylation reaction, which is carried out by the enzyme ribulose-1,5-bisphosphate carboxylase oxygenase, or RuBisCO. RuBisCO catalyzes the reaction between a five-carbon molecule, ribulose-1,5-bisphosphate (abbreviated as RuBP), and CO2 to form two molecules of 3-phosphoglyceric acid (abbreviated as PGA). PGA reacts with NADPH to produce 3-phosphoglyceraldehyde. Isotope fractionation due to RuBisCO (form I) carboxylation alone is predicted to be a 28‰ depletion, on average. However, fractionation values vary between organisms, ranging from an 11‰ depletion observed in coccolithophorid algae to a 29‰ depletion observed in spinach. RuBisCO causes a kinetic isotope effect because 12CO2 and 13CO2 compete for the same active site and 13C has an intrinsically lower reaction rate. 13C fractionation model In addition to the discriminating effects of enzymatic reactions, the diffusion of CO2 gas to the carboxylation site within a plant cell also influences isotopic fractionation. Depending on the type of plant (see sections below), external CO2 must be transported through the boundary layer and stomata and into the internal gas space of a plant cell, where it dissolves and diffuses to the chloroplast. The diffusivity of a gas is inversely proportional to the square root of its molecular reduced mass (relative to air), causing 13CO2 to be 4.4‰ less diffusive than 12CO2. A prevailing model for fractionation of atmospheric CO2 in plants combines the isotope effects of the carboxylation reaction with the isotope effects from gas diffusion into the plant in the following equation: δ13Csample = δ13Catm − a − (b − a)(ci/ca), where: δ13Csample is the delta value of the organism's 13C composition; δ13Catm is the delta value of atmospheric CO2, which is −7.8‰; a = 4.4‰ is the discrimination due to diffusion; b = 30‰ is the carboxylation discrimination; ca is the partial pressure of CO2 in the external atmosphere; and ci is the partial pressure of CO2 in the intercellular spaces. This model, derived ab initio, generally describes fractionation of carbon in the majority of plants, which facilitate C3 carbon fixation (a brief numerical illustration is sketched below). Modifications have been made to this model with empirical findings. However, several additional factors, not included in this general model, will increase or decrease 13C fractionation across species. Such factors include the competing oxygenation reaction of RuBisCO, anatomical and temporal adaptations to enzyme activity, and variations in cell growth and geometry. The isotopic fractionations of different photosynthetic pathways are uniquely characterized by these factors, as described below. In C3 plants A C3 plant uses C3 carbon fixation, one of the three metabolic photosynthesis pathways which also include C4 and CAM (described below).
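As a brief numerical illustration of the fractionation model above, the sketch below assumes the commonly used linear form δ13Csample ≈ δ13Catm − a − (b − a)(ci/ca); the parameter values are those quoted in the text, while the ci/ca ratios are hypothetical but typical leaf values rather than measurements from any particular study.

```python
# Sketch of the simplified C3 fractionation model discussed above (assumed linear form):
#   delta_plant ≈ delta_atm - a - (b - a) * (ci / ca)
# a = diffusional discrimination, b = carboxylation discrimination (values from the text).

def delta13c_plant(ci_over_ca, delta_atm=-7.8, a=4.4, b=30.0):
    """Predicted plant delta-13C (per mille) for a given ratio of intercellular
    to ambient CO2 partial pressure (ci/ca)."""
    return delta_atm - a - (b - a) * ci_over_ca

for ratio in (0.5, 0.7, 0.9):  # hypothetical low to high ci/ca (increasing stomatal opening)
    print(f"ci/ca = {ratio:.1f} -> delta13C ≈ {delta13c_plant(ratio):.1f} per mille")
# All three results fall within the -20 to -37 per mille range quoted for C3 plants below.
```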
These plants are called "C3" due to the three-carbon compound (3-Phosphoglyceric acid, or 3-PGA) produced by the CO2 fixation mechanism in these plants. This C3 mechanism is the first step of the Calvin-Benson cycle, which converts CO2 and RuBP into 3-PGA. C3 plants are the most common type of plant, and typically thrive under moderate sunlight intensity and temperatures, CO2 concentrations above 200 ppm, and abundant groundwater. C3 plants do not grow well in very hot or arid regions, in which C4 and CAM plants are better adapted. The isotope fractionations in C3 carbon fixation arise from the combined effects of CO2 gas diffusion through the stomata of the plant, and the carboxylation via RuBisCO. Stomatal conductance discriminates against the heavier 13C by 4.4‰. RuBisCO carboxylation contributes a larger discrimination of 27‰. RuBisCO enzyme catalyzes the carboxylation of CO2 and the 5-carbon sugar, RuBP, into 3-phosphoglycerate, a 3-carbon compound through the following reaction: The product 3-phosphoglycerate is depleted in 13C due to the kinetic isotope effect of the above reaction. The overall 13C fractionation for C3 photosynthesis ranges between -20 and -37‰. The wide range of variation in delta values expressed in C3 plants is modulated by the stomatal conductance, or the rate of CO2 entering, or water vapor exiting, the small pores in the epidermis of a leaf. The δ13C of C3 plants depends on the relationship between stomatal conductance and photosynthetic rate, which is a good proxy of water use efficiency in the leaf. C3 plants with high water-use efficiency tend to be less fractionated in 13C (i.e., δ13C is relatively less negative) compared to C3 plants with low water-use efficiency. In C4 plants C4 plants have developed the C4 carbon fixation pathway to conserve water loss, thus are more prevalent in hot, sunny, and dry climates. These plants differ from C3 plants because CO2 is initially converted to a four-carbon molecule, malate, which is shuttled to bundle sheath cells, released back as CO2 and only then enters the Calvin Cycle. In contrast, C3 plants directly perform the Calvin Cycle in mesophyll cells, without making use of a CO2 concentration method. Malate, the four-carbon compound is the namesake of "C4" photosynthesis. This pathway allows C4 photosynthesis to efficiently shuttle CO2 to the RuBisCO enzyme and maintain high concentrations of CO2 within bundle sheath cells. These cells are part of the characteristic kranz leaf anatomy, which spatially separates photosynthetic cell-types in a concentric arrangement to accumulate CO2 near RuBisCO. These chemical and anatomical mechanisms improve the ability of RuBisCO to fix carbon, rather than perform its wasteful oxygenase activity. The RuBisCO oxygenase activity, called photorespiration, causes the RuBP substrate to be lost to oxygenation, and consumes energy in doing so. The adaptations of C4 plants provide an advantage over the C3 pathway, which loses efficiency due to photorespiration. The ratio of photorespiration to photosynthesis in a plant varies with environmental conditions, since decreased CO2 and elevated O2 concentrations would increase the efficiency of photorespiration. Atmospheric CO2 on Earth decreased abruptly at a point between 32 and 25 million years ago. This gave a selective advantage to the evolution of the C4 pathway, which can limit photorespiration rate despite the reduced ambient CO2. 
Today, C4 plants represent roughly 5% of plant biomass on Earth, but about 23% of terrestrial carbon fixation. Types of plants which use C4 photosynthesis include grasses and economically important crops, such as maize, sugar cane, millet, and sorghum. Isotopic fractionation differs between C4 carbon fixation and C3, due to the spatial separation in C4 plants of CO2 capture (in the mesophyll cells) and the Calvin cycle (in the bundle sheath cells). In C4 plants, carbon is converted to bicarbonate, fixed into oxaloacetate via the enzyme phosphoenolpyruvate (PEP) carboxylase, and is then converted to malate. The malate is transported from the mesophyll to bundle sheath cells, which are impermeable to CO2. The internal CO2 is concentrated in these cells as malate is reoxidized then decarboxylated back into CO2 and pyruvate. This enables RuBisCO to perform catalysis while internal CO2 is sufficiently high to avoid the competing photorespiration reaction. The delta value in the C4 pathway is -12 to -16‰ depleted in 13C due to the combined effects of PEP carboxylase and RuBisCO. The isotopic discrimination in the C4 pathway varies relative to the C3 pathway due to the additional chemical conversion steps and activity of PEP carboxylase. After diffusion into the stomata, the conversion of CO2 to bicarbonate concentrates the heavier 13C. The subsequent fixation via PEP carboxylase is thereby less depleted in 13C than that from Rubisco: about 2‰ depleted in PEP carboxylase, versus 29‰ in RuBisCO. However, a portion of the isotopically heavy carbon that is fixed by PEP carboxylase leaks out of the bundle sheath cells. This limits the carbon available to RuBisCO, which in turn lowers its fractionation effect. This accounts for the overall delta value in C4 plants to be -12 to -16 ‰. In CAM plants Plants that use Crassulacean acid metabolism, also known as CAM photosynthesis, temporally separate their chemical reactions between day and night. This strategy modulates stomatal conductance to increase water-use efficiency, so is well-adapted for arid climates. During the night, CAM plants open stomata to allow CO2 to enter the cell and undergo fixation into organic acids that are stored in vacuoles. This carbon is released to the Calvin cycle during the day, when stomata are closed to prevent water loss, and the light reactions can drive the necessary ATP and NADPH production. This pathway differs from C4 photosynthesis because CAM plants separate carbon by storing fixed CO2 in vesicles at night, then transporting it for use during the day. Thus, CAM plants temporally concentrate CO2 to improve RuBisCO efficiency, whereas C4 plants spatially concentrate CO2 in bundle sheath cells. The distribution of plants which use CAM photosynthesis includes epiphytes (e.g., orchids, bromeliads) and xerophytes (e.g., succulents, cacti). In Crassulacean acid metabolism, isotopic fractionation combines the effects of the C3 pathway in the daytime and the C4 pathway in the nighttime. At night, when temperature and water loss are lower, the CO2 diffuses through the stomata and produce malate via phosphenolpyruvate carboxylase. During the following day, stomata are closed, malate is decarboxylated, and CO2 is fixed by RuBisCO. This process alone is similar to that of C4 plants and yields characteristic C4 fractionation values of approximately -11‰. However, in the afternoon, CAM plants may open their stomata and perform C3 photosynthesis. 
In daytime alone, CAM plants have approximately -28‰ fractionation, characteristic of C3 plants. These combined effects provide δ13C values for CAM plants in the range of -10 to -20‰. The 13C to 12C ratio in CAM plants can indicate the temporal separation of CO2 fixation, which is the extent of biomass derived from nocturnal CO2 fixation relative to diurnal CO2 fixation. This distinction can be made because PEP carboxylase, the enzyme responsible for net CO2 uptake at night, discriminates 13C less than RuBisCO, which is responsible to daytime CO2 uptake. CAM plants which fix CO2 primarily at night would be predicted to show δ13C values more similar to C4 plants, whereas daytime CO2 fixation would show δ13C values more similar to C3 plants. In phytoplankton In contrast to terrestrial plants, where CO2 diffusion in air is relatively fast and typically not limiting, diffusion of dissolved CO2 in water is considerably slower and can often limit carbon fixation in phytoplankton. As gaseous CO2(g) is dissolved into aqueous CO2(aq), it is fractionated by both kinetic and equilibrium effects that are temperature-dependent. Relative to plants, the dissolved CO2 source for phytoplankton can be enriched in 13C by about 8‰ from atmospheric CO2. Isotope fractionation of 13C by phytoplankton photosynthesis is affected by the diffusion of extracellular aqueous CO2 into the cell, the RuBisCO-dependent cell growth rate, and the cell geometry and surface area. The use of bicarbonate and carbon-concentrating mechanisms in phytoplankton distinguishes the isotopic fractionation from plant photosynthetic pathways. The difference between intracellular and extracellular CO2 concentrations reflects the CO2 demand of a phytoplankton cell, which is dependent on its growth rate. The ratio of carbon demand to supply governs the diffusion of CO2 into the cell, and is negatively correlated with the magnitude of the carbon fractionation by phytoplankton. Combined, these relationships allow the fractionation between CO2(aq) and phytoplankton biomass to be used to estimate the phytoplankton growth rates. However, growth rate alone does not account for observed fractionation. The flux of CO2(aq) into and out of a cell is roughly proportional to the cell surface area, and the cell carbon biomass varies as a function of cell volume. Phytoplankton geometry that maximizes surface area to volume should have larger isotopic fractionation from photosynthesis. The biochemical characteristics of phytoplankton are similar to C3 plants, whereas the gas exchange characteristics more closely resemble the C4 strategy. More specifically, phytoplankton improve the efficiency of their primary carbon-fixing enzyme, RuBisCO, with carbon concentrating mechanisms (CCM), just as C4 plants accumulate CO2 in the bundle sheath cells. Different forms of CCM in phytoplankton include the active uptake of bicarbonate and CO2 through the cell membrane, the active transport of inorganic carbon from the cellular membrane to the chloroplasts, and active, unidirectional conversion of CO2 to bicarbonate. The parameters affecting 13C fractionation in phytoplankton contribute to δ13C values between -18 and -25‰. See also RuBisCO Isotope geochemistry Intrinsic KIE of RuBisCO Paleoclimatology References Isotope separation Isotopes of carbon
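To make the delta notation defined in the "Carbon isotope measurement" section above concrete, this short sketch (not part of the original article) converts a measured 13C/12C ratio into a δ13C value against the VPDB reference ratio quoted in the text; the sample ratio used is hypothetical.

```python
# Delta notation: delta13C (per mille) = (R_sample / R_VPDB - 1) * 1000, where R = 13C/12C.
R_VPDB = 0.01118     # VPDB reference ratio quoted in the article
R_sample = 0.01090   # hypothetical measured ratio for a plant-tissue sample

delta13C = (R_sample / R_VPDB - 1) * 1000
print(f"delta13C = {delta13C:.1f} per mille")  # ≈ -25.0, within the typical C3 plant range
```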
Fractionation of carbon isotopes in oxygenic photosynthesis
[ "Chemistry" ]
3,885
[ "Isotopes of carbon", "Isotopes" ]
57,042,704
https://en.wikipedia.org/wiki/Tubular%20Exchanger%20Manufacturers%20Association
The Tubular Exchanger Manufacturers Association (also known as TEMA) is an association of fabricators of shell and tube type heat exchangers. TEMA has established and maintains a set of construction standards for heat exchangers, known as the TEMA Standard. TEMA also produces software for evaluation of flow-induced vibration and of flexible shell elements (expansion joints). TEMA was founded in 1939, and is based in Tarrytown, New York. The association meets regularly to revise and update the standards, respond to inquiries, and discuss topics related to the industry. The TEMA Standard The current edition of the TEMA Standard is the Tenth Edition, published in 2019. Worldwide, the TEMA Standard is used as the construction standard for most shell and tube heat exchangers. The standard is composed of ten sections: Nomenclature (see below) Fabrication Tolerances General Fabrication and Performance Information Installation, Operation, and Maintenance Mechanical Standards Flow Induced Vibration Thermal Relations Physical Properties of Fluids General Information Recommended Good Practice TEMA Classifications of Heat Exchangers TEMA's standard recognizes three separate classifications of exchangers. Each class has different mechanical construction requirements, based on the expected service. Those classes are: Class R - for refinery and petroleum service Class C - for general commercial service Class B - for chemical process service In general, Class C is the least restrictive class, and Class R is the most stringent, ensuring more robust designs for longer life in harsher service conditions. TEMA Exchanger Nomenclature Because heat exchangers can be configured many different ways, TEMA has standardized the nomenclature of exchanger types. A letter designation is used for the front head type, shell type, and rear head type of an exchanger. For example, a fixed tubesheet exchanger with bolted removable bonnets is designated as a 'BEM' type. A kettle type reboiler with a removable U-tube bundle is a 'BKU' type. Many different letter combinations are possible. Member Companies of TEMA The member companies of TEMA must demonstrate high quality exchanger fabrication standards, and possess in-house engineering capability for mechanical and thermal design of shell and tube type heat exchangers. Companies may fabricate other equipment in addition to heat exchangers. The current member companies of TEMA (in alphabetical order) are: Brask, Inc. Cust-O-Fab, Inc. Dunn Heat Exchangers, Inc. Energy Exchanger Corp. Fabsco Shell and Tube Graham Corporation Heat Transfer Equipment Company Hughes-Anderson Heat Exchangers, Inc. Kennedy Tank and Manufacturing Co. Krueger Engineering & Mfg. Joseph Oat Corp. Ohmstede, Inc Perry Products R.A.S. Process Equipment Southern Heat Exchanger Struthers-Wells Ward Vessel and Exchanger References External links Official website Heat exchangers American engineering organizations Manufacturing Fabrication (metal)
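To illustrate how the three-letter TEMA designations described above encode exchanger construction, the sketch below decodes a few common letters. The letter descriptions are a partial, informal summary given only for illustration; the TEMA Standard itself is the authoritative reference and defines many more types.

```python
# Illustrative decoder for TEMA three-letter designations (front head, shell, rear head).
# Only a few common letters are included; consult the TEMA Standard for the full set.
FRONT_HEADS = {"A": "channel and removable cover", "B": "bonnet (integral cover)"}
SHELLS = {"E": "one-pass shell", "F": "two-pass shell with longitudinal baffle",
          "K": "kettle-type reboiler"}
REAR_HEADS = {"M": "fixed tubesheet (bonnet-like)", "S": "floating head with backing device",
              "U": "U-tube bundle"}

def describe(designation: str) -> str:
    front, shell, rear = designation.upper()
    return (f"{designation}: front head = {FRONT_HEADS.get(front, 'unknown')}, "
            f"shell = {SHELLS.get(shell, 'unknown')}, "
            f"rear head = {REAR_HEADS.get(rear, 'unknown')}")

print(describe("BEM"))  # fixed tubesheet exchanger with bolted removable bonnets
print(describe("BKU"))  # kettle-type reboiler with a removable U-tube bundle
```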
Tubular Exchanger Manufacturers Association
[ "Chemistry", "Engineering" ]
585
[ "Chemical equipment", "Heat exchangers", "Mechanical engineering", "Manufacturing" ]
57,044,077
https://en.wikipedia.org/wiki/Medical%20Devices%20Park%2C%20Hyderabad
Medical Devices Park, Hyderabad is a medical devices industrial estate located in Hyderabad, Telangana, India. It is the largest such park in India, spread over 250 acres, and its dedicated ecosystem supports medical technology innovation and manufacturing. History The park was inaugurated on 17 June 2017 near Hyderabad, at Sultanpur in Patancheru of Sangareddy district, by the Minister for Industries, K. T. Rama Rao. References 2017 establishments in Telangana Economy of Hyderabad, India Economy of Telangana Industries in Hyderabad, India Industrial parks in India Science parks in India Research institutes in Hyderabad, India High-technology business districts in India Biotechnology
Medical Devices Park, Hyderabad
[ "Biology" ]
122
[ "nan", "Biotechnology" ]
57,047,243
https://en.wikipedia.org/wiki/Symmetric%20power
In mathematics, the n-th symmetric power of an object X is the quotient of the n-fold product X^n by the permutation action of the symmetric group S_n. More precisely, the notion exists at least in the following three areas: In linear algebra, the n-th symmetric power of a vector space V is the vector subspace of the symmetric algebra of V consisting of degree-n elements (here the product is a tensor product). In algebraic topology, the n-th symmetric power of a topological space X is the quotient space X^n/S_n, as in the beginning of this article. In algebraic geometry, a symmetric power is defined in a way similar to that in algebraic topology. For example, if X = Spec(A) is an affine variety, then the GIT quotient Spec((A ⊗ ... ⊗ A)^{S_n}) is the n-th symmetric power of X. References External links Symmetric relations
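As a concrete linear-algebra illustration (not part of the original entry), the dimension of the n-th symmetric power of a finite-dimensional vector space follows from counting monomials of degree n:

```latex
% Dimension of the n-th symmetric power of a d-dimensional vector space V:
\dim \operatorname{Sym}^{n}(V) = \binom{d + n - 1}{n}.
% Example: for V = k^{2} with basis \{x, y\}, the space \operatorname{Sym}^{2}(V)
% has basis \{x^{2}, xy, y^{2}\}, so its dimension is \binom{3}{2} = 3.
```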
Symmetric power
[ "Physics" ]
174
[ "Symmetric relations", "Symmetry" ]
43,605,100
https://en.wikipedia.org/wiki/The%20Alvin%20Weinberg%20Foundation
The Alvin Weinberg Foundation was a registered UK charity, operating under the name Weinberg Next Nuclear, that campaigned for research and development into next-generation nuclear energy. In particular, it advocated advancement of liquid fluoride thorium reactor (LFTR) and other molten salt reactor (MSR) technologies. It was named for Alvin M. Weinberg, Director of Oak Ridge National Laboratory from 1955 to 1973 and the main advocate of MSR development. History September 2011: Launched at the House of Lords. January 2014: Became a registered charity in England and Wales. May 2015: Stephen Tindale joined as Director. July 2017: The Weinberg Foundation dissolved. People Baroness Worthington was a trustee and patron. Stephen Tindale, who led Greenpeace in the UK from 2000 until 2005, was its last Director. See also Generation IV reactor Liquid fluoride thorium reactor Molten salt reactor Thorium-based nuclear power References External links the-weinberg-foundation.org Nuclear energy Thorium Energy security Oak Ridge National Laboratory Charities based in London
The Alvin Weinberg Foundation
[ "Physics", "Chemistry" ]
212
[ "Nuclear energy", "Radioactivity", "Nuclear physics" ]
43,605,222
https://en.wikipedia.org/wiki/COART
COART (Coupled Ocean-Atmosphere Radiative Transfer code) is a radiative transfer code built on the Coupled DIScrete Ordinate Radiative Transfer (Coupled DISORT, or CDISORT) code, which was developed from DISORT. It is designed to simulate radiance (including water-leaving radiance) and irradiance (flux) consistently at any level in the atmosphere and ocean. See also List of atmospheric radiative transfer codes Atmospheric radiative transfer codes References Jin, Z., T.P. Charlock, K. Rutledge, K. Stamnes, and Y. Wang, An analytical solution of radiative transfer in the coupled atmosphere-ocean system with rough surface. Appl. Opt., 45, 7443-7455, 2006. Jin, Z., and K. Stamnes, Radiative transfer in nonuniformly refracting layered media: atmosphere-ocean system, Appl. Opt., 33, 431-442, 1994. External links More information on COART COART online: https://clouds.larc.nasa.gov/jin/coart.html Ocean Albedo Look-up-table generated by COART: https://drive.google.com/drive/folders/1bVUcTBiZ1B7KhcnYeJiz-zFmpzGtrele?usp=sharing Atmospheric radiative transfer codes
COART
[ "Physics" ]
302
[ "Computational physics stubs", "Computational physics" ]
32,341,352
https://en.wikipedia.org/wiki/Quantum%20pendulum
The quantum pendulum is fundamental in understanding hindered internal rotations in chemistry, quantum features of scattering atoms, as well as numerous other quantum phenomena. Though a pendulum not subject to the small-angle approximation has an inherent nonlinearity, the Schrödinger equation for the quantized system can be solved relatively easily. Schrödinger equation Using Lagrangian mechanics from classical mechanics, one can develop a Hamiltonian for the system. A simple pendulum has one generalized coordinate (the angular displacement φ) and two constraints (the length of the string and the plane of motion). The kinetic and potential energies of the system are T = (1/2) m l² (dφ/dt)² and U = m g l (1 − cos φ). This results in the Hamiltonian H = p_φ²/(2 m l²) + m g l (1 − cos φ), where p_φ is the momentum conjugate to φ. The time-dependent Schrödinger equation for the system is iħ ∂Ψ/∂t = −(ħ²/(2 m l²)) ∂²Ψ/∂φ² + m g l (1 − cos φ) Ψ. One must solve the time-independent Schrödinger equation to find the energy levels and corresponding eigenstates. This is best accomplished by rescaling the angular variable, which converts the equation into Mathieu's differential equation, whose solutions are Mathieu functions. Solutions Energies Given the dimensionless parameter q, for countably many special values of a, called characteristic values, the Mathieu equation admits solutions that are periodic (with period π or 2π). The characteristic values of the Mathieu cosine and sine functions are written a_r and b_r respectively, where r is a natural number. The periodic special cases of the Mathieu cosine and sine functions are often written ce_r(x, q) and se_r(x, q) respectively, although they are traditionally given a different normalization (namely, that the integral of their square over the interval [0, 2π] equals π). The boundary condition that the wavefunction be single-valued in the angle selects, for a given q, the allowed characteristic values, and the energies of the system, for even and odd solutions respectively, are quantized based on the characteristic values found by solving the Mathieu equation. The effective potential depth is set by the dimensionless parameter q. A deep potential yields the dynamics of a particle in nearly independent potential wells. In contrast, in a shallow potential, Bloch waves, as well as quantum tunneling, become of importance. General solution The general solution of the above differential equation for a given value of a and q is a set of linearly independent Mathieu cosines and Mathieu sines, which are even and odd solutions respectively. In general, the Mathieu functions are aperiodic; however, for the characteristic values a_r(q) and b_r(q), the Mathieu cosine and sine become periodic with a period of π or 2π. Eigenstates For positive values of q, the periodic eigenstates can be expressed in terms of the Mathieu cosine and sine functions. The first few periodic Mathieu cosine functions resemble cosine curves, but with flatter hills and shallower valleys. See also Quantum harmonic oscillator Bibliography Muhammad Ayub, Atom Optics Quantum Pendulum, 2011, Islamabad, Pakistan, https://arxiv.org/abs/1012.6011 Quantum models Pendulums
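As a minimal numerical sketch (not part of the original article), the Mathieu characteristic values that quantize the pendulum energies can be computed with SciPy. The conversion of a characteristic value into an energy depends on the substitution and sign conventions used to reach Mathieu's equation, so only the characteristic values themselves are printed here.

```python
# Characteristic values of Mathieu's equation, which quantize the pendulum energies
# as described above. Requires SciPy; q is an assumed illustrative well-depth parameter.
from scipy.special import mathieu_a, mathieu_b

q = 5.0

even = [mathieu_a(r, q) for r in range(4)]       # a_r for even (Mathieu cosine) solutions
odd = [mathieu_b(r, q) for r in range(1, 4)]     # b_r for odd (Mathieu sine) solutions

print("even characteristic values a_r:", [f"{a:.4f}" for a in even])
print("odd characteristic values  b_r:", [f"{b:.4f}" for b in odd])
# Up to a multiplicative scale and an additive constant fixed by the chosen substitution,
# these characteristic values give the allowed energy levels of the quantum pendulum.
```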
Quantum pendulum
[ "Physics" ]
563
[ "Quantum models", "Quantum mechanics" ]
32,342,900
https://en.wikipedia.org/wiki/Ultrafast%20X-ray
Ultrafast X-rays or ultrashort X-ray pulses are femtosecond X-ray pulses with wavelengths on the order of interatomic distances. Such pulses exploit the X-ray's inherent ability to interact at the level of atomic nuclei and core electrons. Combined with pulse durations around 30 femtoseconds, this makes it possible to capture the change in position of atoms or molecules during phase transitions, chemical reactions, and other transient processes in physics, chemistry, and biology. Fundamental transitions and processes Ultrafast X-ray diffraction (time-resolved X-ray diffraction) can surpass ultrashort-pulse visible techniques, which are limited to detecting structures on the level of valence and free electrons. Ultrashort-pulse X-ray techniques are able to resolve atomic scales, where dynamic structural changes and reactions occur in the interior of a material. See also Stanford PULSE Institute for ultrafast x-ray science Ultrafast laser spectroscopy References Further reading Emma P., et al. (2010) "First lasing and operation of an angstrom-wavelength free-electron laser" Nature Photonics 4(9):641-647. Diagram of the table-top ultrafast X-ray diffractometer Figures and Tables. Nature Publishing Group. March 25, 1999. External links Photon Science: X-Rays for Discovery Ultrafast X-ray Summer Schools X-rays Diffraction Synchrotron-related techniques Scattering
Ultrafast X-ray
[ "Physics", "Chemistry", "Materials_science" ]
301
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Scattering", "Diffraction", "Crystallography", "Particle physics", "Condensed matter physics", "Nuclear physics", "Spectroscopy" ]
45,444,629
https://en.wikipedia.org/wiki/Apple%20car%20project
From 2014 until 2024, Apple undertook a research and development effort to develop an electric and self-driving car, codenamed "Project Titan". Apple never openly discussed any of its automotive research, but around 5,000 employees were reported to be working on the project In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform. In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars. In 2020, it was believed that Apple was still working on self-driving related hardware, software and service as a potential product, instead of actual Apple-branded cars. In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024, but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later. In February 2024, Apple executives canceled their plans to release the autonomous electric vehicle, instead shifting resources on the project to the company's generative artificial intelligence efforts. The project had reportedly cost the company over $1 billion per year, with other parts of Apple collaborating and costing hundreds of millions of dollars in additional spend. Additionally, over 600 employees were laid off due to the cancellation of the project. Car details The car project cycled through multiple designs over the years. Teams at Apple outside of the development project were involved in its development. People from the Apple silicon team were heavily involved in the car to design the processor used for its autonomy. At the time of cancellation, the chip was nearly finished, and had the equivalent processing power of four M2 Ultras combined. The microkernel for the car was named "safetyOS". Proposed collaborations and acquisitions During the 2008–2010 automotive industry crisis, with car companies nearing collapse, Apple SVP Tony Fadell proposed to Jobs the idea of buying General Motors at a reduced price. The idea was abandoned partly because the company felt that it would be a bad look, and partly because of its focus on the iPhone. In 2014, with renewed interest in the project, Apple's head of corporate development Adrian Perica met with Elon Musk several times with an interest in acquiring Tesla. Tim Cook, Apple's CEO, shut down these early negotiations, partly due to Apple's CFO (and former GM Europe CFO) Luca Maestri saying how difficult the car business was. Despite the failure, years later, then-hardware chief Dan Riccio and former Ford engineer and iPhone engineer Steve Zadesky returned to Musk to discuss ideas for a collaboration. A few more years later, as Tesla struggled to make its Model 3 sedan, Musk attempted to restart talks with Apple, but said Cook wouldn't meet. Attempts to partner with Mercedes-Benz advanced somewhat further than those with Tesla. The plan was for Mercedes-Benz to manufacture the car and Apple to also provide Mercedes-Benz its self-driving platform and UI for other cars. Apple pulled out partly because it had confidence that it could successfully manufacture a car themselves, and partly over disagreements over controlling the user's experience and data. The talks lasted for more than a year. The closest talks came to acquiring a car company were with McLaren. Some executives hoped that Jony Ive would be closer to Apple with that acquisition, following his reduced involvement in the company. 
BMW and Canoo, among others, were also in exploratory talks for an acquisition. Apple also met with Nissan and BYD Auto. Apple was concerned that integrating an automaker would be a disaster internally. Apple briefly partnered with Magna Steyr, a maker of low-volume vehicles for the project. In 2018, Apple signed a deal with Volkswagen to make an autonomous shuttle for Apple employees at their new headquarters, Apple Park. Volkswagen's T6 transport vans were to be modified, keeping the chassis and wheels, but with replaced dashboards, seats, and other components. The deal, an interim effort, was shut down by Doug Field, the head of the project, who saw it as a distraction. The Korea Economic Daily reported in 2021 that Hyundai was in early discussions with Apple to develop and produce self-driving electric cars jointly. Some weeks later, in late January, Apple announced some upper-level engineering changes, leading some Apple-watchers to speculate if Dan Riccio's "new chapter at Apple" might indicate leadership of the Titan project (or something altogether unrelated, such as augmented/virtual reality headset or deluxe noise-cancelling headphones). By early February, it appeared that Apple was close to a $3.59B deal with Hyundai to use its Kia Motors' West Point, Georgia manufacturing plant for the car, a fully autonomous machine without a driver's seat. However, in February 2021, Hyundai and Kia confirmed that they were not in talks with Apple to develop a car. Adding further credence to Apple's automotive aspirations, Business Insider Deutschland (Germany) reported that Apple had hired Porsche VP of Chassis Development, Dr. Manfred Harrer. After rumors coming from Financial Times about Apple talking to several Japanese car companies about the Apple Car project after the Hyundai-Kia rumor, Nissan came out to Reuters as saying it is not in any of these discussions. The next Apple Car speculation was that Apple was shopping around for Lidar navigation sensor suppliers for its project. History 2014–2015 The project was rumored to have been approved by Apple CEO Tim Cook in late 2014. For the project, Apple was rumored to have hired Johann Jungwirth, the former president and chief executive of Mercedes-Benz Research and Development North America, as well as at least one transmission engineer. In February 2015, Apple board member Mickey Drexler stated that Apple co-founder and CEO Steve Jobs had plans to design and build a car and that discussions about the concept surfaced around the time that Tesla Motors debuted its first car in 2008. In May 2015, Apple investor Carl Icahn stated that he believed rumors that Apple would enter the automobile market in 2020, and that logically Apple would view this car as "the ultimate mobile device". In August 2015, The Guardian reported that Apple were meeting with officials from GoMentum Station, a testing ground for connected and autonomous vehicles at the former Concord Naval Weapons Station in Concord, California. In September 2015, there were reports that Apple were meeting with self-driving car experts from the California Department of Motor Vehicles. According to The Wall Street Journal in September 2015, it will be a battery electric vehicle, initially lacking full autonomous driving capability, with a possible unveiling around 2019. In October 2015, Tim Cook stated about the car industry: "It would seem like there will be massive change in that industry, massive change. You may not agree with that. That's what I think... 
We'll see what we do in the future. I do think that the industry is at an inflection point for massive change." Cook enumerated ways that the modern descendants of the Ford Model T would be shaken to the very chassis—the growing importance of software in the car of the future, the rise of autonomous vehicles, and the shift from an internal combustion engine to electrification. In November 2015, various websites reported that suspected Apple front SixtyEight Research had attended an auto body conference in Europe. Also in November 2015, after unknown EV startup Faraday Future announced a $1 billion U.S. factory project, some speculated that it might be a front for Apple's secret car project. In late 2015, Apple contracted Torc Robotics to retrofit two Lexus SUVs with sensors in a project known internally as Baja. 2016 In 2016, Tesla Motors CEO Elon Musk stated that Apple will probably make a compelling electric car: "It's pretty hard to hide something if you hire over a thousand engineers to do it." In May 2016, reports were indicating Apple was interested in electric car charging stations. The Wall Street Journal reported on July 25, 2016, that Apple had convinced retired senior hardware engineering executive Bob Mansfield to return and take over the Titan project. A few days later, on July 29, Bloomberg Technology reported that Apple had hired Dan Dodge, the founder and former chief executive officer of QNX, BlackBerry Ltd.’s automotive software division. According to Bloomberg, Dodge's hiring heralded a shift in emphasis at Apple's Project Titan, in which the company will prioritize creating software for autonomous vehicles. However, the story said that Apple would continue to develop a vehicle of its own. On September 9, The New York Times reported dozens of layoffs in an effort to reboot, presumably from a team still numbering around 1,000. The following week, reports surfaced that Magna International, a contract vehicle manufacturer, had a small team working at Apple's Sunnyvale lab. 2017 After no new reports, car project news flared up again in mid-April 2017, as word spread that Apple was permitted to test autonomous vehicles on California roads. In mid-June, Tim Cook in an interview with Bloomberg TV said Apple was "focusing on autonomous systems" but not necessarily leading to an actual Apple car product, leaving speculation about Apple's role in the convergence of three disruptive "vectors of change": autonomous systems, electric vehicles and ride-sharing services. In mid-August, various sources reported that the car project was focusing on autonomous systems, now expected to test its technology in the real world using a company-operated inter-campus shuttle service between the main Infinite Loop campus in Cupertino and various Silicon Valley offices, including the new Apple Park. Then at the end of August, around 17 former Titan team members, braking and suspension engineers with Detroit experience, were hired by autonomous vehicle startup Zoox. October 2016 reports claimed the Titan project has a 2017 deadline to determine its fate - prove its practicality and viability, set a final direction. In November 2017, Apple employees Yin Zhou and Oncel Tuzel published a paper on VoxelNet, which uses a convolutional neural network to detect three-dimensional objects using lidar. 
Transportation/tech website Jalopnik reported in late November that Apple was recruiting automotive test engineering and tech talent for autonomous systems work and appeared to be secretly leasing, via third parties, a former Fiat Chrysler proving grounds site in Surprise, Arizona (originally Wittman). Also in 2017, The New York Times suggested that Apple had stopped developing its self-driving car. In response to such reports, Apple CEO Tim Cook acknowledged publicly that year that the company was working on autonomous-car technology. 2018 In January 2018, the company registered 27 self-driving vehicles with California's Department of Motor Vehicles. While Apple attempted to keep its autonomous vehicles plans secret, regulatory filings did provide evidence of their project and related activities. In September 2018, Apple held the third-highest number of California autonomous vehicle permits with 70, behind GM's Cruise (175) and Alphabet's Waymo (88). On July 7, 2018, a former Apple employee was arrested by the FBI for allegedly stealing trade secrets about Apple's self-driving car project. He was charged by federal prosecutors. The criminal complaint against the former employee revealed that at that time, Apple still had yet to openly discuss any of its self-driving research, with around 5,000 employees disclosed on the project. In August 2018, Doug Field, formerly senior vice president of engineering at Tesla, became the leader of Apple's Titan team. On August 24, 2018, it was reported that one of Apple's self-driving car had apparently been involved in a crash, when it was rear-ended during road-testing. The crash occurred while the car was at a stop, waiting to merge into traffic about 3.5 miles from Apple's headquarters in Cupertino, with no reported injuries. At the time, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars. In August 2018, there were reports about an Apple patent of a system that warns riders ahead of time about what an autonomous car would do, purportedly to alleviate the discomfort of surprise. 2019 In January 2019, Apple laid off more than 200 employees from their 'Project Titan' autonomous vehicle team. In June 2019, Apple acquired autonomous vehicle startup Drive.ai. 2020 In early December, Bloomberg reported that Apple artificial intelligence lead John Giannandrea is overseeing Apple Car development as prior lead Bob Mansfield has retired. A few weeks later, Reuters reported that Apple was working towards a possible launch date of 2024 according to two unnamed insiders. 2021 An industry source told The Korea Times that Apple was working in Korea to build its supply chain. Later in 2021, Apple was reportedly in talks with Toyota as well as Korean partners for production to commence in 2024. After Doug Field departed the project and joined Ford, Kevin Lynch, the wearables chief at Apple, was appointed to lead the project. 2022–2024 Bloomberg reported that Apple had given up on building a fully self-driving car and was instead looking to bring a car capable of self-driving only on highways. Its price would be below 100,000 dollars. TrendForce reported that microLED would be used in the car. Apple had 66 road-registered driverless cars, with 152 drivers registered to operate those cars. 
In January 2024, Bloomberg reports suggested that Apple further delayed the car's release date to 2028, significantly scaling down its plans for self-driving and instead focusing on basic driver-assistance features similar to existing electric vehicles. On February 27, 2024, Apple executives made an internal announcement that the entire car project was being cancelled, with most resources moving to work on Apple's generative AI projects. In April 2024, Apple laid off more than 600 employees in Santa Clara, California. Most of the offices impacted by the layoffs were previously linked to project Titan and one, 3250 Scott Blvd, code named Aria, was developing the microLED screens. Purported employees and affiliates Jamie Carlson, a former engineer on Tesla's Autopilot self-driving car program. After he left Tesla for Apple, he left Apple to work with Chinese automaker Nio on their NIO Pilot autonomous driving platform. Most recently he has returned to Apple special projects. Megan McClain, a former Volkswagen AG engineer with expertise in automated driving. Vinay Palakkode, a graduate researcher at Carnegie Mellon University, a hub of automated driving research. Xianqiao Tong, an engineer who developed computer vision software for driver assistance systems at microchip maker Nvidia Corp NVDA.O. Paul Furgale, former deputy director of the Autonomous Systems Lab at the Swiss Federal Institute of Technology in Zurich. Sanjai Massey, an engineer with experience in developing connected and automated vehicles at Ford and several suppliers. Stefan Weber, a former Bosch engineer with experience in video-based driver assistance systems. Lech Szumilas, a former Delphi research scientist with expertise in computer vision and object detection. Anup Vader, formerly Caterpillar autonomous systems thermal engineer, who left Apple in April 2019 to join Zoox autonomous vehicle startup. Doug Betts, former global quality leader at Fiat Chrysler. Johann Jungwirth, former CEO of Mercedes-Benz Research & Development, North America, Inc. – left for VW in Nov. 2015. Mujeeb Ijaz, a former Ford Motor Co. engineer, who founded A123 Systems's Venture Technologies division, which focused on materials research, electrical battery cell product development and advanced concepts (who helped recruited four to five staff researchers from A123, a battery technology company) Nancy Sun, formerly vice president of electrical engineering at electric motorcycle company Mission Motors in San Francisco. Mark Sherwood, formerly director of powertrain systems engineering at Mission Motors. Eyal Cohen, formerly vice president of software and electrical engineering at Mission Motors. Jonathan Cohen, former director of Nvidia's deep learning software. Nvidia uses deep learning in its Nvidia Drive PX platform, which is used in driver assistance systems. Chris Porritt – former Tesla vice president of vehicle engineering and former Aston Martin chief engineer. Luigi Taraborrelli, a former Lamborghini executive for 20 years. Alex Hitzinger is a German engineer who until March 31, 2016, was the technical director of Porsche's LMP1 project. He previously worked as Head of Advanced Technologies for the Red Bull and Toro Rosso Formula One teams. In January 2019 he left to head the technical VW commercial vehicles department. Benjamin Lyon, sensor expert, manager and founding team member, who reported directly to Doug Field, left Apple for a chief engineer position at "satellite and space startup" Astra in Feb 2021. 
Weibao Wang, software engineer indicted for theft and attempted theft of trade secrets. Doug Field, VP of special projects at Apple and de facto head of the car project; left to join Ford. See also Xiaomi SU7 References Further reading Apple Inc. hardware Electric vehicles Proposed vehicles Unreleased products Self-driving cars
Apple car project
[ "Engineering" ]
3,495
[ "Automotive engineering", "Self-driving cars" ]
45,445,059
https://en.wikipedia.org/wiki/Resilience%20%28materials%20science%29
In materials science, resilience is the ability of a material to absorb energy when it is deformed elastically, and release that energy upon unloading. Proof resilience is defined as the maximum energy that can be absorbed up to the elastic limit, without creating a permanent distortion. The modulus of resilience is defined as the maximum energy that can be absorbed per unit volume without creating a permanent distortion. It can be calculated by integrating the stress–strain curve from zero to the elastic limit. In uniaxial tension, under the assumptions of linear elasticity, Ur = σy²/(2E) = σy·εy/2, where Ur is the modulus of resilience, σy is the yield strength, εy is the yield strain, and E is Young's modulus. This analysis is not valid for non-linear elastic materials like rubber, for which the area under the stress–strain curve up to the elastic limit must be evaluated directly. Unit of resilience The modulus of resilience (Ur) is measured in units of joules per cubic meter (J·m−3) in the SI system, i.e. elastic deformation energy per unit volume of the test specimen (considering the gauge-length portion only). Like the unit of tensile toughness (UT), the unit of resilience can be obtained from the area underneath the stress–strain (σ–ε) curve, which gives the resilience value, as shown below: Ur = area underneath the stress–strain (σ–ε) curve up to yield = σ × ε Ur [=] Pa × % = (N·m−2)·(unitless) Ur [=] N·m·m−3 Ur [=] J·m−3 See also Toughness References Guha S. Quantification of inherent energy resilience of process systems for optimization of energy usage. Environ Prog Sustainable Energy. 2019;e13308. https://doi.org/10.1002/ep.13308 Guha S. Quantification of inherent energy resilience of process systems pertaining to a gas sweetening unit. International Journal of Industrial Chemistry (2020) 11:71–90 https://doi.org/10.1007/s40090-020-00203-3 Elasticity (physics)
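A short worked example of the relation above, using assumed round-number properties for a generic structural steel (values chosen for illustration, not taken from the article):

```python
# Modulus of resilience for a linear-elastic material: U_r = sigma_y**2 / (2 * E).
sigma_y = 250e6   # assumed yield strength, Pa (250 MPa)
E = 200e9         # assumed Young's modulus, Pa (200 GPa)

epsilon_y = sigma_y / E              # yield strain (dimensionless)
U_r = sigma_y**2 / (2 * E)           # J/m^3, equal to sigma_y * epsilon_y / 2

print(f"Yield strain          ≈ {epsilon_y:.4%}")
print(f"Modulus of resilience ≈ {U_r:.0f} J/m^3")  # ≈ 156,250 J/m^3
```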
Resilience (materials science)
[ "Physics", "Materials_science" ]
506
[ "Deformation (mechanics)", "Physical phenomena", "Physical properties", "Elasticity (physics)" ]
45,449,252
https://en.wikipedia.org/wiki/Spot%20height
A spot height is an exact point on a map with an elevation recorded beside it that represents its height above a given datum. In the UK this is the Ordnance Datum. Unlike a bench-mark, which is marked by a disc or plate, there is no official indication of a spot height on the ground although, in open country, spot heights may sometimes be marked by cairns. In geoscience, it can be used for showing elevations on a map, alongside contours, bench marks, etc. See also Surveying Benchmark (surveying) Triangulation station References Cartography Geodesy Surveying Vertical position
Spot height
[ "Physics", "Mathematics", "Engineering" ]
127
[ "Vertical position", "Physical quantities", "Distance", "Applied mathematics", "Surveying", "Civil engineering", "Geodesy" ]