Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 characters); text (string, 9 to 245k characters); source (string, 1 to 109 characters); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
7,927,161
https://en.wikipedia.org/wiki/Singly%20fed%20electric%20machine
Singly fed electric machine is a broad term which covers ordinary electric motors and electric generators. Such machines have only one external connection to the windings, and thus are said to be singly fed. See also Doubly fed electric machine Rotary converter References Electric motors Electrical generators
Singly fed electric machine
Physics,Technology,Engineering
58
10,921,962
https://en.wikipedia.org/wiki/Cytoscape
Cytoscape is an open source bioinformatics software platform for visualizing molecular interaction networks and integrating them with gene expression profiles and other state data. Additional features are available as plugins. Plugins are available for network and molecular profiling analyses, new layouts, additional file format support, connection with databases, and searching in large networks. Plugins may be developed by anyone using Cytoscape's open Java software architecture, and plugin community development is encouraged. Cytoscape also has a JavaScript-centric sister project named Cytoscape.js that can be used to analyse and visualise graphs in JavaScript environments such as a web browser. History Cytoscape was originally created at the Institute for Systems Biology in Seattle in 2002 and is now developed by an international consortium of open source developers. Cytoscape was initially made public in July 2002 (v0.8); the second release (v0.9) followed in November 2002, and v1.0 was released in March 2003. Version 1.1.1 was the last stable release of the 1.0 series. Version 2.0 was initially released in 2004; Cytoscape 2.8.3, the final 2.x version, was released in May 2012. Version 3.0 was released on February 1, 2013, and the latest version, 3.4.0, was released in May 2016. Development The Cytoscape core developer team continues to work on this project and released Cytoscape 3.0 in 2013. This represented a major change in the Cytoscape architecture, making the software more modular, expandable and maintainable. Usage While Cytoscape is most commonly used for biological research applications, it is agnostic in terms of usage: it can visualize and analyze network graphs of any kind involving nodes and edges (e.g., social networks). A vital aspect of the software architecture of Cytoscape is the use of plugins for specialized features. Plugins are developed by core developers and the greater user community. See also Computational genomics Graph drawing JavaScript framework JavaScript library Metabolic network modelling Protein–protein interaction prediction References External links https://cytoscape.org/screenshots.html Cytoscape wiki Cytoscape omictools webpage Bioinformatics software Systems biology Mathematical and theoretical biology Graph drawing software Cross-platform software Java platform software
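The Cytoscape.js sister project mentioned above exposes a JavaScript/TypeScript API for building and querying graphs directly in the browser. The following is a minimal sketch, assuming the cytoscape npm package and a page element with id "cy" to render into (both assumptions for illustration, not details taken from the article):

```typescript
import cytoscape from 'cytoscape';

// Build a tiny interaction network: two nodes joined by one edge.
const cy = cytoscape({
  container: document.getElementById('cy')!, // hypothetical <div id="cy"> used as the drawing surface
  elements: [
    { data: { id: 'geneA' } },
    { data: { id: 'geneB' } },
    { data: { id: 'ab', source: 'geneA', target: 'geneB' } },
  ],
  layout: { name: 'grid' }, // one of the built-in layouts
});

// Basic queries over the in-memory graph model.
console.log(cy.nodes().length); // 2
console.log(cy.edges().length); // 1
```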
Cytoscape
Mathematics,Biology
498
35,987,396
https://en.wikipedia.org/wiki/Gelfand%E2%80%93Kirillov%20dimension
In algebra, the Gelfand–Kirillov dimension (or GK dimension) of a right module M over a k-algebra A is $\operatorname{GKdim}(M) = \sup_{V,F} \limsup_{m \to \infty} \frac{\log \dim_k (F V^m)}{\log m}$, where the supremum is taken over all finite-dimensional subspaces $V \subseteq A$ and $F \subseteq M$. An algebra is said to have polynomial growth if its Gelfand–Kirillov dimension is finite. Basic facts The Gelfand–Kirillov dimension of a finitely generated commutative algebra A over a field is the Krull dimension of A (or equivalently the transcendence degree of the field of fractions of A over the base field). In particular, the GK dimension of the polynomial ring $k[x_1, \ldots, x_n]$ is n. (Warfield) For any real number r ≥ 2, there exists a finitely generated algebra whose GK dimension is r. In the theory of D-modules Given a right module M over the Weyl algebra $A_n$, the Gelfand–Kirillov dimension of M over the Weyl algebra coincides with the dimension of M, which is by definition the degree of the Hilbert polynomial of M. This makes it possible to prove additivity in short exact sequences for the Gelfand–Kirillov dimension and finally to prove Bernstein's inequality, which states that the dimension of M must be at least n. This leads to the definition of holonomic D-modules as those with the minimal dimension n, and these modules play a great role in the geometric Langlands program. Notes References Coutinho: A primer of algebraic D-modules. Cambridge, 1995 Further reading Abstract algebra Dimension
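As a check on the definition above, the following LaTeX fragment works through the standard computation showing that the polynomial ring in n variables has GK dimension n; this is a sketch of a well-known argument whose details (choice of generating subspace, binomial count) are supplied here rather than taken from the article:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $A = k[x_1,\dots,x_n]$ and take the generating subspace
$V = \operatorname{span}_k\{1, x_1, \dots, x_n\}$.
Then $V^m$ is spanned by all monomials of degree at most $m$, so
\[
  \dim_k V^m = \binom{m+n}{n} \sim \frac{m^n}{n!} \qquad (m \to \infty),
\]
and therefore
\[
  \operatorname{GKdim}(A)
  = \limsup_{m\to\infty} \frac{\log \dim_k V^m}{\log m}
  = n .
\]
\end{document}
```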
Gelfand–Kirillov dimension
Physics,Mathematics
322
22,438,135
https://en.wikipedia.org/wiki/Transgenic%20hydra
Cnidarians such as Hydra have become attractive model organisms to study the evolution of immunity. However, despite long-term efforts, stably transgenic animals could not be generated, severely limiting the functional analysis of genes. For analytical purposes, therefore, an important technical breakthrough in the field was the development of a transgenic procedure for generation of stably transgenic lines by embryo microinjection. Uses Hydra polyps are small and transparent which makes it possible to trace single cells in vivo. In addition, transgenic Hydra provide a ready system for generating gain-of-function phenotypes. With the use of transgenes producing dominant-negative versions of proteins, one should be able to obtain loss-of-function phenotypes as well. Current technology allows generation of reporter constructs using promoters of various Hydra genes fused to fluorescent proteins. Since transgenic Hydra lines have become an important tool to dissect molecular mechanisms of development, a “Hydra Transgenic Facility” has been established at the Christian-Albrechts-University of Kiel (Germany). References Wittlieb J, Khalturin K, Lohmann JU, Anton-Erxleben F and Bosch TCG (2006): Transgenic Hydra allow in vivo tracking of individual stem cells during morphogenesis. Proc. Natl. Acad. Sci. USA 103;16: 6208-6211 Khalturin K, Anton-Erxleben F, Milde S, Plötz C, Wittlieb J, Hemmrich G and Bosch TCG (2007): Transgenic stem cells in Hydra reveal an early evolutionary origin for key elements controlling self-renewal and differentiation. Developmental Biology, Volume 309, Issue 1, Pages 32–44 Siebert S, Anton-Erxleben F and Bosch TCG (2008): Cell type complexity in the basal metazoan Hydra is maintained by both stem cell based mechanisms and transdifferentiation. Dev. Biol. 313: 13-24 Milde S, G Hemmrich, F Anton-Erxleben, K Khalturin, J Wittlieb, and TCG Bosch (2009): Characterization of taxonomically-restricted genes in a phylum-restricted cell type. Genome Biol. 10(1):R8 External links Transgenic Hydra facility at the University of Kiel (Germany) Genetically modified organisms Molecular biology Hydridae
Transgenic hydra
Chemistry,Engineering,Biology
506
45,469,159
https://en.wikipedia.org/wiki/BlueKai
Oracle BlueKai Data Management Platform, formerly known as BlueKai, is a cloud-based data management platform, part of Oracle Marketing, that enables the personalization of online, offline, and mobile marketing campaigns. History BlueKai was created in 2008 by Omar Tawakol, Alexander Hooshmand, and Grant Ries as a marketing tech start-up based in Cupertino, California. It was acquired by Oracle on February 24, 2014, for approximately $400 million, and was renamed Oracle BlueKai Data Management Platform (DMP). The company offers third-party data collection services. BlueKai collects PC and smartphone users' data to enhance ad marketing for its clients, and had about 700 million actionable profiles. BlueKai works to increase the relevancy of the ads that appear for partnered companies' users. As a third-party data collection company, it gathers information on users surfing the web, though BlueKai claims not to collect sensitive financial details, adult material, or health information. Other clients and sites using BlueKai's services have included Live.com, Huffingtonpost.com, Walmart.com, Vimeo.com, Microsoft.com, and eBay.com. After the 2014 Oracle acquisition, BlueKai's data management platform became part of Oracle Marketing, which is part of the Oracle Advertising and Customer Experience (CX) application suite. Companies use Oracle Marketing to run marketing campaigns and manage related data from the web, social media, mobile, and email. In October 2014, it was integrated with Oracle's cross-channel marketing product suite. This allowed marketers to anonymize customer behavioral data, integrate it with the DMP's third-party data, and create specific audience models for retargeting. Additional acquisitions that have been integrated into Oracle Marketing include Eloqua, Responsys, DataFox, Maxymiser, Compendium, and Infinity from Webtrends. In June 2020, TechCrunch reported that security researcher Anurag Sen had found an unsecured BlueKai database accessible on the open Internet. The database held billions of records containing names, home addresses, email addresses, and web browsing activity such as purchases and newsletter unsubscribes. TechCrunch reported that under California state law, companies are required to publicly disclose data security incidents, but that Oracle had not done so at the date of the story. In 2024, Oracle announced that it would shut down its advertising business, including BlueKai. The business was supported through the end of September 2024. See also Oracle Advertising and Customer Experience (CX) Oracle Corporation References Computing platforms Data collection Technology companies established in 2008 Oracle acquisitions 2014 mergers and acquisitions Digital marketing companies Big data companies Data management software American companies established in 2008
BlueKai
Technology
570
40,632,694
https://en.wikipedia.org/wiki/Aplysamine-2
Aplysamine-2 is a bioactive compound isolated from a marine sponge. References Halogen-containing alkaloids Bromobenzene derivatives Ketoximes Amides Methoxy compounds Dimethylamino compounds
Aplysamine-2
Chemistry
47
1,956,391
https://en.wikipedia.org/wiki/Hug%20machine
A hug machine, also known as a hug box, a squeeze machine, or a squeeze box, is a therapeutic device designed to calm hypersensitive persons, usually individuals with autism spectrum disorders. The device was invented by Temple Grandin to administer deep-touch pressure, a type of physical stimulation often self-administered by autistic individuals as a means of self-soothing. Autistic people often have sensory processing disorder, which entails abnormal levels of stimulation of the senses (such as hypersensitivity). Because of difficulty with social interactions, it can be uncomfortable or impractical to turn to other human beings for comfort, including hugs. Grandin addressed this by designing the hug machine, in part to help her own anxiety and sensory sensitivity. Description The hug machine consists of two hinged side-boards, each four by three feet (120 cm by 90 cm) with thick soft padding, which form a V-shape, with a complex control box at one end and heavy-duty tubes leading to an air compressor. The user lies or squats between the side-boards for as long or short of a period as desired. Using pressure exerted by the air compressor and controlled by the user, the side-boards apply deep pressure stimulation evenly across the lateral parts of the body. The machine and its development are depicted in the biopic Temple Grandin. History The inventor of the machine, Temple Grandin, realized as a young child that she would seek out deep pressure stimulation, but she felt over-stimulated when someone hugged or held her. The idea for the hug machine came to her during a visit to her aunt's Arizona ranch, where she noted the way cattle were confined in a squeeze chute for inoculation, and how some of the cattle immediately calmed down after pressure was administered. She realized that the deep pressure from the chute had a calming effect on the cattle, and she decided that something similar might well settle down her own hypersensitivity. Initially, Grandin's device met with disapproval as psychologists at her college sought to confiscate her prototype hug machine. Her science teacher, however, encouraged her to determine the reason it helped resolve the anxiety and sensory issues. Efficacy Several therapy programs in the United States now use hug machines, effectively achieving general calming effects among autistic people across the age spectrum. A 1995 study on the efficacy of Grandin's device, conducted by the Center for the Study of Autism, working with Willamette University in Salem, Oregon, involved ten autistic children and found a reduction in tension and anxiety. Other studies, including one by Margaret Creedon, have yielded similar results. A small pilot study by Edelson et al. (1999), published in the American Journal of Occupational Therapy, reported that the machine produced a significant reduction in tension but only a small decrease in anxiety. Grandin continued to use her own hug box on a regular basis to provide the deep pressure necessary to relieve symptoms of her anxiety. "I concentrate on how gently I can do it", she has said. A paper Grandin wrote on her hug machine and the effects of deep pressure stimulation was published in the Journal of Child and Adolescent Psychopharmacology. In a February 2010 Time magazine interview, Grandin stated that she no longer uses a hug machine: "It broke two years ago, and I never got around to fixing it. I'm into hugging people now." 
Squeeze chair For several years in the 1990s, urban interventionist/artist Wendy Jacob worked with Grandin in developing furniture that squeezes or "hugs" users, inspired by Grandin's hug machine. Deep pressure Many other deep-pressure techniques have since been developed. Systematic reviews have reported positive effects, but the quality of the studies was too low to confirm them. The pressure can be controlled by the user. Focus groups and simulations will be needed to confirm acceptability compared with other approaches, and trials will be needed to confirm the efficacy of this method. Animal analogs Several compression garments are available to treat noise phobia in dogs. See also Weighted blanket References External links Dr. Temple Grandin's Webpage: Livestock Behaviour, Design of Facilities and Humane Slaughter (Grandin.com) Description and schematic details of the squeeze machine (Grandin.com) Hug Machine Building Directions American inventions Treatment of autism Medical equipment
Hug machine
Biology
894
28,652,193
https://en.wikipedia.org/wiki/Green%20leaf%20volatiles
Green leaf volatiles (GLV) are organic compounds released by plants. Some of these chemicals function as signaling compounds between plants of the same species, between plants of different species, or even between plants and other lifeforms such as insects. Green leaf volatiles are involved in patterns of attack and protection between species. They have been found to increase the attractive effect of pheromones of cohabiting insect species that protect plants from attacking insect species. For example, corn plants that are being fed on by caterpillars will release GLVs that attract wasps, which then attack the caterpillars. GLVs also have antimicrobial properties that can prevent infection at the site of injury. GLVs include C6-aldehydes [(Z)-3-hexenal, n-hexanal] and their derivatives such as (Z)-3-hexenol, (Z)-3-hexen-1-yl acetate, and the corresponding E-isomers. Functions When a plant is attacked, it emits GLVs into the environment through the air. How a plant responds depends on the type of damage involved. Plants respond differently to damage from a purely mechanical source and damage from herbivores. Mechanical damage tends to cause damage-associated molecular patterns (DAMPs) involving plant-derived substances and breakdown products. Herbivore-associated molecular patterns (HAMPs) involve characteristic molecules left by different types of herbivores when feeding. The oral secretions of herbivores appear to play an essential role in triggering the release of species-specific herbivore-induced plant volatiles. Wounds from herbivores, and mechanical wounds that have been treated with herbivore oral secretions, both trigger the release of higher quantities of plant volatiles than mechanical damage alone. Volatile blends are proposed to convey a variety of information to insects and plants. "Each plant species and even each plant genotype releases its own specific blend, and the quantities and ratios in which they are released also vary with the arthropod that is feeding on a plant and may even provide information on the time of day that feeding occurs." In addition to GLVs, herbivore-induced plant volatiles (HIPVs) include terpenes, ethylene, methyl salicylate and other VOCs. GLVs activate the expression of genes related to the plants' defense mechanisms. Different antagonists trigger different patterns of gene expression and the biosynthesis of signaling peptides which mediate systemic defense responses. Plant–plant interactions Undamaged neighboring plants have been shown, in some cases, to respond to GLV signals. Both the plant emitting the GLVs and its neighboring plants can enter a primed state in which they activate their defense systems more quickly and more strongly. The first study to clearly demonstrate anti-herbivore defense priming by GLVs focused on corn (Zea mays). Neighboring plants responded to the release of GLVs by priming against insect herbivore attack, reacting more rapidly and releasing greater levels of GLVs. Similar results have been shown in tomato plants: neighboring plants reacted more strongly to GLVs from the plants exposed to the herbivore, releasing more of the proteins related to the plants' defense mechanisms. Positive plant–insect interactions In positive plant–insect interactions, GLVs are used as a form of defense. They attract predators to plants that are being preyed upon by herbivores.
For example, female parasitoid wasps from two different families, Microplitis croceipes and Netelia heroica, can be attracted to plants that are emitting GLVs due to wounding from caterpillars. Maize plants emit volatiles to attract the parasitic wasps Cotesia marginiventris and Microplitis rufiventris to attack African cotton leafworm. In some species GLVs enhance the attractiveness of sex pheromones. For example, green leaf volatiles have been found to increase the response of tobacco budworm to sex pheromone. Budworm larvae feed on tobacco, cotton, and various flowers and weeds, and in turn can be fed on by the larvae of cohabiting species that are attracted by GLVs. In another study, a multi-plant relationship was reported. The wasps Vespula germanica and V. vulgaris prey on caterpillars (Pieris brassicae) infesting cabbage leaves that emit GLVs. The same GLVs are also emitted, as attractants, by the orchids Epipactis purpurata and E. helleborine. The orchids benefit from attracting the wasps, not because the wasps protect them from insects, but because the wasps aid in pollination. Benefits of GLV release have also been reported in soybeans grown in Iowa. When these soybean plants became heavily infested by aphids, the amount of GLV released far surpassed normal levels and, as a result, more spotted lady beetles were attracted to the volatile-releasing plants and preyed on the aphids eating the plant. The stimulus of aphid feeding is chemically transmitted through the plant to coordinate an increased release of GLVs. The particular chemical released is specific to these spotted lady beetles: when different beetle species were tested, they showed no additional inclination to move towards GLV-releasing plants. This indicates that these soybeans evolved the ability to release species-specific attractants that aid their survival. Negative plant–insect interactions GLV release is correlated with fruit ripeness. Although this may help attract pollinators, it can also cause problems if these GLVs attract predators. One example involves boll weevils: an increase in GLV release when the plants are ripe has been found to increase the rate of predation by these beetles. Another issue with GLV release and increased predation involves herbivores that alter GLV emissions from the affected plants. In one case, it was noted that secretions from certain species of caterpillars significantly decrease the effective amount of GLV emitted. To determine how GLV emissions were being decreased, a study of four caterpillar species measured their effectiveness in decreasing the GLV levels released from the attacked plant. Compounds in the gut and salivary glands, along with species-specific modifications to those compounds, were found to mute a large part of the GLV signal released into the external environment. This appears to occur by blocking the flow of volatile molecules so that they cannot interact with receptors on the leaves of other plants. Antimicrobial properties GLVs can also have antimicrobial effects. Some plants express HPL, the main enzyme of GLV synthesis. The rates of fungal spore growth in HPL-overexpressing mutants have been compared with those in HPL-silenced mutants and wild-type plants.
Results from the study showed lower rates of fungal growth and higher GLV emissions in the HPL-overexpressing mutants, while the HPL-silenced mutants showed higher rates of fungal growth and lower GLV emissions, which supports the hypothesis that GLVs have antimicrobial properties. The antimicrobial properties of GLVs have also been proposed to be part of an evolutionary arms race. During an infection, plants emit GLVs to act as antimicrobial agents, but bacteria and viruses have adapted to use these GLVs to their own benefit. The most common example of this is found in the red raspberry. When the red raspberry plant is infected, the virus influences it to produce more GLVs, which attract the red raspberry aphid. These GLVs cause more aphids to come and to feed on the plant for longer, giving the virus better chances of being ingested and spread more widely. Researchers are now trying to determine whether, under infectious conditions, plants emit GLVs for their own benefit, or whether bacteria and viruses induce the release of these compounds for theirs. Studies in this area have been inconclusive and contradictory. Study A systematic review by Schuman (2023) found that most studies on plant volatiles relate to herbivore interactions. Schuman also found that laboratory studies are overrepresented despite the wide differences in herbivore behaviour between natural and artificial settings. See also Kairomone Plant defenses against herbivory Herbivore adaptations to plant defense Secondary metabolite Volatile organic compound References Further reading Plant anatomy Leaves Chemical ecology
Green leaf volatiles
Chemistry,Biology
1,811
216,223
https://en.wikipedia.org/wiki/Biocide
A biocide is defined in the European legislation as a chemical substance or microorganism intended to destroy, deter, render harmless, or exert a controlling effect on any harmful organism. The US Environmental Protection Agency (EPA) uses a slightly different definition for biocides as "a diverse group of poisonous substances including preservatives, insecticides, disinfectants, and pesticides used for the control of organisms that are harmful to human or animal health or that cause damage to natural or manufactured products". When compared, the two definitions imply roughly the same thing, although the US EPA definition includes plant protection products and some veterinary medicines. The terms "biocides" and "pesticides" are regularly interchanged, and often confused with "plant protection products". To clarify this, pesticides include both biocides and plant protection products, where the former refers to substances for non-food and feed purposes and the latter refers to substances for food and feed purposes. When discussing biocides a distinction should be made between the biocidal active substance and the biocidal product. The biocidal active substances are mostly chemical compounds, but can also be microorganisms (e.g. bacteria). Biocidal products contain one or more biocidal active substances and may contain other non-active co-formulants that ensure the effectiveness as well as the desired pH, viscosity, colour, odour, etc. of the final product. Biocidal products are available on the market for use by professional and/or non-professional consumers. Although most biocidal active substances have a relatively high toxicity, there are also examples of active substances with low toxicity which exhibit their biocidal activity only under certain specific conditions, such as in closed systems. In such cases, the biocidal product is the combination of the active substance and the device that ensures the intended biocidal activity, e.g. suffocation of rodents in a closed-system trap. Other examples of biocidal products available to consumers are products impregnated with biocides (also called treated articles), such as clothes and wristbands impregnated with insecticides and socks impregnated with antibacterial substances. Biocides are commonly used in medicine, agriculture, forestry, and industry. Biocidal substances and products are also employed as anti-fouling agents or disinfectants under other circumstances: chlorine, for example, is used as a short-life biocide in industrial water treatment but as a disinfectant in swimming pools. Many biocides are synthetic, but there are naturally occurring biocides classified as natural biocides, derived from, e.g., bacteria and plants. A biocide can be: A pesticide: this includes fungicides, herbicides, insecticides, algicides, molluscicides, miticides, piscicides, rodenticides, and slimicides. An antimicrobial: this includes germicides, antibiotics, antibacterials, antivirals, antifungals, antiprotozoals, and antiparasitics. See also spermicide. Uses In Europe biocidal products are divided into different product types (PT), based on their intended use. These product types, 22 in total under the Biocidal Products Regulation (EU) 528/2012 (BPR), are grouped into four main groups, namely disinfectants, preservatives, pest control, and other biocidal products.
For example, disinfectants include products to be used for human hygiene (PT 1) and veterinary hygiene (PT 3), preservatives include wood preservatives (PT 8), pest control includes rodenticides (PT 14) and repellents and attractants (PT 19), while other biocidal products include antifouling products (PT 21). One active substance can be used in several product types; for example, sulfuryl fluoride is approved for use as a wood preservative (PT 8) as well as an insecticide (PT 18). Biocides can be added to other materials (typically liquids) to protect them against biological infestation and growth. For example, certain types of quaternary ammonium compounds (quats) are added to pool water or industrial water systems to act as an algicide, protecting the water from infestation and growth of algae. It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as "dichlor", and trichloro-s-triazinetrione, sometimes referred to as "trichlor". These compounds are stable as solids and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, micro-organisms, algae, and so on. Halogenated hydantoin compounds are also used as biocides. Hazards and environmental risks Because biocides are intended to kill living organisms, many biocidal products pose significant risk to human health and welfare. Great care is required when handling biocides and appropriate protective clothing and equipment should be used. The use of biocides can also have significant adverse effects on the natural environment. Anti-fouling paints, especially those utilising organic tin compounds such as TBT, have been shown to have severe and long-lasting impacts on marine ecosystems and such materials are now banned in many countries for commercial and recreational vessels (though sometimes still used for naval vessels). Disposal of used or unwanted biocides must be undertaken carefully to avoid serious and potentially long-lasting damage to the environment. Classification European classification The classification of biocides in the BPR is broken down into 22 product types (i.e.
application categories), with several comprising multiple subgroups: MAIN GROUP 1: Disinfectants and general biocidal products Product-type 1: Human hygiene biocidal products Product-type 2: Private area and public health area disinfectants and other biocidal products Product-type 3: Veterinary hygiene biocidal products Product-type 4: Food and feed area disinfectants Product-type 5: Drinking water disinfectants MAIN GROUP 2: Preservatives Product-type 6: In-can preservatives Product-type 7: Film preservatives Product-type 8: Wood preservatives Product-type 9: Fibre, leather, rubber and polymerised materials preservatives Product-type 10: Masonry preservatives Product-type 11: Preservatives for liquid-cooling and processing systems Product-type 12: Slimicides Product-type 13: Metalworking-fluid preservatives MAIN GROUP 3: Pest control Product-type 14: Rodenticides Product-type 15: Avicides Product-type 16: Molluscicides Product-type 17: Piscicides Product-type 18: Insecticides, acaricides, and products to control other arthropods Product-type 19: Repellents and attractants Product-type 20: Control of other vertebrates MAIN GROUP 4: Other biocidal products Product-type 21: Antifouling products Product-type 22: Embalming and taxidermist fluids Legislation The EU regulatory framework for biocides has for years been defined by the Directive 98/8/EC, also known as the Biocidal Products Directive (BPD). The BPD was revoked by the Biocidal Products Regulation 528/2012 (BPR), which entered into force on 17 July 2012 with the application date of 1 September 2013. Several Technical Notes for Guidance (TNsG) have been developed to facilitate the implementation of the BPR and to assure a common understanding of its obligations. According to the EU legislation, biocidal products need authorisation to be placed or to remain on the market. Competent Authorities of the EU member states are responsible for assessing and approving the active substances contained in the biocides. The BPR follows some of the principles set previously under the REACH Regulation (Registration, Evaluation, Authorisation and Restrictions of Chemicals) and the coordination of the risk assessment process for both REACH and BPR are mandated to the European Chemicals Agency (ECHA), which assures the harmonization and integration of risk characterization methodologies between the two regulations. The biocides legislation puts emphasis on making the Regulation compatible with the World Trade Organization (WTO) rules and requirements and with the Globally Harmonized System of Classification and Labelling of Chemicals (GHS), as well as with the OECD programme on testing methods. Exchange of information requires the use of the OECD harmonised templates implemented in IUCLID – the International Unified Chemical Information Data System (see ECHA and OECD websites). Many biocides in the US are regulated under the Federal Pesticide Law (FIFRA) and its subsequent amendments, although some fall under the Federal Food, Drugs and Cosmetic Act, which includes plant protection products (see websites below). In Europe, the plant protection products are placed on the market under another regulatory framework, managed by the European Food Safety Authority (EFSA). Risk assessment Due to their intrinsic properties and patterns of use, biocides, such as rodenticides or insecticides, can cause adverse effects in humans, animals and the environment and should therefore be used with the utmost care. 
For example, the anticoagulants used for rodent control have caused toxicity in non-target species, such as predatory birds, due to their long half-life after ingestion by target species (i.e. rats and mice) and high toxicity to non-target species. Pyrethroids used as insecticides have been shown to cause unwanted effects in the environment, due to their non-specific toxic action, which also causes toxic effects in non-target aquatic organisms. In light of potential adverse effects, and to ensure a harmonised risk assessment and management, the EU regulatory framework for biocides has been established with the objective of ensuring a high level of protection of human and animal health and the environment. To this end, risk assessment of biocidal products is required before they can be placed on the market. A central element in the risk assessment of biocidal products is the instructions for use, which define the dosage, application method and number of applications, and thus the exposure of humans and the environment to the biocidal substance. Humans may be exposed to biocidal products in different ways in both occupational and domestic settings. Many biocidal products are intended for industrial sectors or professional uses only, whereas other biocidal products are commonly available for private use by non-professional users. In addition, potential exposure of non-users of biocidal products (i.e. the general public) may occur indirectly via the environment, for example through drinking water, the food chain, as well as through atmospheric and residential exposure. Particular attention should be paid to the exposure of vulnerable sub-populations, such as the elderly, pregnant women, and children. Pets and other domestic animals can also be exposed indirectly following the application of biocidal products. Furthermore, exposure to biocides may vary in terms of route (inhalation, dermal contact, and ingestion) and pathway (food, drinking water, residential, occupational) of exposure, as well as level, frequency and duration. The environment can be exposed directly due to the outdoor use of biocides or as the result of indoor use followed by release to the sewage system after e.g. wet cleaning of a room in which a biocide is used. Upon this release a biocidal substance can pass through a sewage treatment plant (STP) and, based on its physical-chemical properties, partition to sewage sludge, which in turn can be used as a soil amendment, thereby releasing the substance into the soil compartment. Alternatively, the substance can remain in the water phase in the STP and subsequently end up in the water compartment, such as surface water. Risk assessment for the environment focuses on protecting the environmental compartments (air, water and soil) by performing hazard assessments on key species, which represent the food chain within the specific compartment. Of special concern is a well-functioning STP, which is essential to many removal processes. The large variety in biocidal applications leads to complicated exposure scenarios that need to reflect the intended use and possible degradation pathways, in order to perform an accurate risk assessment for the environment. Further areas of concern are endocrine disruption, PBT properties, secondary poisoning, and mixture toxicity. Biocidal products are often composed of mixtures of one or more active substances together with co-formulants such as stabilisers, preservatives and colouring agents.
Since these substances may act together to produce a combination effect, an assessment of the risk from each of these substances alone may underestimate the real risk from the product as a whole. Several concepts are available for predicting the effect of a mixture on the basis of known toxicities and concentrations of the single components. Approaches for mixture toxicity assessments for regulatory purposes typically advocate assumptions of additive effects. This means that each substance in the mixture is assumed to contribute to a mixture effect in direct proportion to its concentration and potency. In a strict sense, the assumption is thereby that all substances act by the same mode or mechanism of action. Compared to other available assumptions, this concentration addition model (or dose addition model) can be used with commonly available (eco)toxicity data and effect data together with estimates of e.g. LC50, EC50, PNEC, AEL. Furthermore, assumptions of additive effects from any given mixture are generally considered a more precautionary approach compared to other available predictive concepts. The potential occurrence of synergistic effects presents a special case, and may occur for example when one substance increases the toxicity of another, e.g. if substance A inhibits the detoxification of substance B. Currently, predictive approaches cannot account for this phenomenon. Gaps in our knowledge of the modes of action of substances as well as circumstances under which such effects may occur (e.g. mixture composition, exposure concentrations, species and endpoints) often hamper predictive approaches. Indications that synergistic effects might occur in a product will warrant either a more precautionary approach or product testing. As indicated above, the risk assessment of biocides in the EU hinges to a large extent on the development of specific emission scenario documents (ESDs) for each product type, which are essential for assessing exposure of humans and the environment. Such ESDs provide detailed scenarios to be used for an initial worst-case exposure assessment and for subsequent refinements. ESDs are developed in close collaboration with the OECD Task Force on Biocides and the OECD Exposure Assessment Task Force and are publicly available from websites managed by the Joint Research Centre and OECD (see below). Once ESDs become available they are introduced into the European Union System for the Evaluation of Substances (EUSES), an IT tool supporting the implementation of the risk assessment principles set in the Technical Guidance Document for the Risk Assessment of Biocides (TGD). EUSES enables government authorities, research institutes and chemical companies to carry out rapid and efficient assessments of the general risks posed by substances to man and the environment. Once a biocidal active substance is allowed onto the list of approved active substances, its specifications become a reference source for that active substance (the so-called 'reference active substance'). Thus, when an alternative source of that active substance appears (e.g. from a company that has not participated in the Review Programme of active substances) or when a change occurs in the manufacturing location and/or manufacturing process of a reference active substance, technical equivalence between these different sources needs to be established with regard to the chemical composition and hazard profile.
This is to check whether the level of hazard posed to health and the environment by the active substance from the secondary source is comparable to that of the initially assessed active substance. It goes without saying that biocidal products must be used in an appropriate and controlled way. The amount of active substance used should be minimized to that necessary to reach the desired effects, thereby reducing the load on the environment and the associated potential adverse effects. In order to define the conditions of use and to ensure that the product fulfils its intended uses, efficacy assessments are carried out as an essential part of the risk assessment. Within the efficacy assessment the target organisms, the effective concentrations (including any thresholds or dependence of the effects on concentrations), the likely concentrations of the active substance used in the products, the mode of action, and the possible occurrence of resistance, cross-resistance or tolerance are evaluated. A product cannot be authorized if the desired effect cannot be reached at a dose that does not pose unacceptable risks to human health or the environment. Appropriate management strategies need to be adopted to avoid the buildup of (cross-)resistance. Last but not least, other fundamental elements are the instructions for use, the risk management measures and the risk communication, which are the responsibility of the EU member states. While biocides can have severe effects on human health and/or the environment, their benefits should not be overlooked. To provide some examples, without the above-mentioned rodenticides, crops and food stocks might be seriously affected by rodent activity, or diseases like leptospirosis might spread more easily, since rodents can be vectors of disease. It is difficult to imagine hospitals or food-industry premises operating without disinfectants, or telephone poles made of untreated wood. Another example of benefit is the fuel saving from antifouling substances applied to ships to prevent the buildup of biofilm and subsequent fouling organisms on the hulls, which increase drag during navigation. See also Fungicide Integrated pest management Non-pesticide management Oligodynamic effect Virucide References Literature Wilfried Paulus: Directory of Microbicides for the Protection of Materials and Processes. Springer Netherlands, Berlin 2006. Danish EPA (2001): Inventory of Biocides used in Denmark SCHER, SCCS, SCENIHR. (2012) Opinion on the Toxicity and Assessment of Chemical Mixtures https://web.archive.org/web/20160305110234/http://ec.europa.eu/health/scientific_committees/consultations/public_consultations/scher_consultation_06_en.htm External links Biocides by [European Commission Directorate-General for the Environment] US EPA Office of Pesticide Programs European Chemicals Agency Biocides Product Regulation website EFSA European Food Safety Authority Organisation for Economic Co-operation and Development (OECD) Biocides website US Food and Drug Administration Biological pest control agents Biological pest control Pesticides Pharmaceutical microbiology
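The concentration (dose) addition model described in the risk assessment discussion above can be expressed as a toxic-unit sum: each component contributes its exposure concentration divided by a reference effect concentration (e.g. an EC50 or PNEC), and the sum is compared against 1. The sketch below illustrates this assumption only; the component names and numbers are purely illustrative and are not values from the text:

```typescript
// Concentration (dose) addition: components are assumed to act additively,
// each in proportion to its concentration and potency.
interface Component {
  name: string;
  concentration: number; // measured/predicted exposure concentration (e.g. mg/L)
  effectConc: number;    // reference effect concentration in the same units (e.g. EC50 or PNEC)
}

// Toxic-unit sum: sum_i (c_i / effectConc_i). A value >= 1 flags a potential
// mixture effect at or above the chosen reference effect level.
function toxicUnitSum(components: Component[]): number {
  return components.reduce((sum, c) => sum + c.concentration / c.effectConc, 0);
}

// Illustrative mixture of two co-formulated substances (hypothetical values).
const mixture: Component[] = [
  { name: 'active substance A', concentration: 0.2, effectConc: 1.0 },
  { name: 'co-formulant B',     concentration: 0.5, effectConc: 2.5 },
];

console.log(toxicUnitSum(mixture).toFixed(2)); // "0.40" -> below the additivity threshold of 1
```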
Biocide
Biology,Environmental_science
4,028
33,053,957
https://en.wikipedia.org/wiki/Kepler-41
Kepler-41 or KOI-196 is a star in the constellation Cygnus. It is a G-type main-sequence star, like the Sun, and is located about 3,510 light-years (1,080 parsecs) away. It is fairly similar to the Sun, with 115% of its mass, a radius 129% that of the Sun, and a surface temperature of 5,750 K. A search for stellar companions to Kepler-41 in 2013–2014 yielded inconclusive results, compatible with Kepler-41 being a single star. Planetary system In 2011, the planet Kepler-41b was discovered in orbit around the star. The planet orbits extremely close to Kepler-41, completing an orbit once every 1.86 days. Despite the high amount of radiation it receives from Kepler-41, the radius of Kepler-41b was initially believed to be less than that of Jupiter, making it unusual for a hot Jupiter; however, later observations showed an inflated radius similar to those of other hot Jupiters. Kepler-41b is also quite reflective, with a geometric albedo of 0.30. References G-type main-sequence stars Planetary systems with one confirmed planet Planetary transit variables 196 Cygnus (constellation) J19380317+4558539
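As a rough consistency check on the figures quoted above, Kepler's third law gives the orbital distance implied by the 1.86-day period and the quoted stellar mass. The short sketch below carries out that arithmetic; the solar-unit form of the law is standard physics and is supplied here, not taken from the article:

```typescript
// Kepler's third law in solar units: a^3 (AU) = M (solar masses) * P^2 (years).
const periodDays = 1.86;          // orbital period of Kepler-41b quoted above
const stellarMassSolar = 1.15;    // 115% of the Sun's mass, as quoted above

const periodYears = periodDays / 365.25;
const semiMajorAxisAu = Math.cbrt(stellarMassSolar * periodYears ** 2);

console.log(semiMajorAxisAu.toFixed(3)); // ~0.031 AU, i.e. far inside Mercury's orbit
```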
Kepler-41
Astronomy
267
47,793,123
https://en.wikipedia.org/wiki/Wallis%20Annenberg%20Wildlife%20Crossing
The Wallis Annenberg Wildlife Crossing (formerly Liberty Canyon Wildlife Crossing) is an under construction vegetated overpass spanning the Ventura Freeway and Agoura Road at Liberty Canyon in Agoura Hills, California. Once completed, the bridge will be one of the largest urban wildlife crossings in the world, connecting the Simi Hills and the Santa Monica Mountains over a busy, 10-lane freeway. Background The bridge is meant to allow animals to circulate through and thrive in habitats that are fragmented by human development. The crossing is particularly critical for the mountain lions of the Santa Monica Mountains, which have declined and become genetically isolated because the Ventura Freeway prevents them from moving between the mountains and the Simi Hills to the north. Other species expected to benefit from the crossing include bobcats, coyotes, gray foxes, birds of prey, skunks, rodents, American badgers, American Black Bears, fence lizards and Mule deer. In 2020, wildlife biologists found the first evidence of physical abnormalities in the isolated population. Newcomers would bring new genetic material into the mountains where the lack of genetic diversity is a serious threat to their long-term survival. It would allow young mountain lions born in the Santa Monica Mountains the chance to find new territory before possibly being killed by one of the dominant older males. Freeway traffic is one of the primary threats to mountain lions' survival in Southern California. Since 2002, at least a dozen have been killed by motorists on the section of freeway paralleling the Santa Monica Mountains. In 2013, a mountain lion, traveling from the north and on the verge of bringing new genetic material, died trying to cross at this location. GPS tracking collars fitted by the researchers show that most mountain lions approach this particular area and turn back without attempting the hazardous crossing of the freeway. This will be the first bridge on the California highway system designed specifically for fostering wildlife connectivity. The Ventura Freeway is a heavily travelled commuter route serving the Greater Los Angeles area and connecting Los Angeles and Ventura Counties with about 300,000 cars a day. The site is about northwest of downtown Los Angeles. Scientists identified Liberty Canyon as the best location for a wildlife crossing in a 1990 study commissioned by the Santa Monica Mountains Conservancy. Acquisition by the conservancy and other partners of privately owned land began to create one of the few areas with the lands on both sides of the freeway that are publicly owned and protected. The crossing is situated along a wildlife corridor within the Santa Monica Mountains National Recreation Area that consists of thousands of acres of local, state and federally protected lands and stretches northerly from Los Angeles into Ventura County. The county of Ventura has adopted a wildlife corridor protection ordinance that restricts activities that will impede the movement of mountains lions and other wildlife between the Santa Monica Mountains and the Los Padres National Forest. Design In 2015, the Resource Conservation District of the Santa Monica Mountains published a design for a and overpass for the wildlife crossing. To encourage use by wildlife, the bridge will have lush but drought-tolerant vegetation with matte materials to deflect bright headlights and insulation to quiet the roar of cars. Fencing at each end will help funnel them onto the crossing. 
A second phase of the project will cross a frontage road that runs parallel to the freeway. Landscaping of the site includes habitat restoration in the surrounding area. The restoration is partly needed because the 2018 Woolsey Fire burned through the wildlife corridor as it was pushed by strong Santa Ana winds in a southerly direction, and crossed the freeway in this area. The draft environmental document was released in 2017. A tunnel was considered as an alternative, but it would be less able to attract usage by wildlife and would not sustain vegetation. The California Department of Transportation (Caltrans) oversaw design and construction as it crosses a major transportation route. Funding campaign In 2014, the National Wildlife Federation, the Santa Monica Mountains Fund, and the #SaveLACougars campaign began to raise money for the project. The inspiration for the project, as well as the funding drive's "poster puma", was P-22, a mountain lion that survived crossing two freeways, the 101 and the 405, to reach Griffith Park at the eastern end of the Santa Monica Mountains. P-22 became a local celebrity; his death in 2022 would further stimulate awareness and funds for the campaign. In 2014, the California Wildlife Conservation Board gave a $650,000 grant to the Resource Conservation District of the Santa Monica Mountains for the design of the crossing. In 2015, the California Coastal Commission gave a $1 million grant to Caltrans for environmental assessment. Private donors were encouraged to contribute. The project stalled for years due to lack of funding. In May 2021, the Annenberg Foundation pledged to donate another $25 million once the project raised $35 million. As of mid-April 2022, donations totaled more than $87 million, with more than 5,000 people, foundations, agencies, and businesses contributing expertise and donations. The project costs around $90 million, with funding from private donations covering about 60% and the rest coming from public funds set aside for conservation purposes. Construction A groundbreaking ceremony was held on Earth Day in April 2022 with Governor Gavin Newsom, Wallis Annenberg, wildlife biologists and members of the public along with local, state and federal legislators. Caltrans set the beginning of construction for spring 2022, with construction to be completed within two years. Initial work included moving public utilities. As of mid-2024, the work is expected to finish by early 2026. References External links Wallis Annenberg Wildlife Crossing, Annenberg Foundation US-101 – Wallis Annenberg Wildlife Crossing at Liberty Canyon, Caltrans Wildlife Crossing at Liberty Canyon, Resource Conservation District of the Santa Monica Mountains Liberty Canyon Wildlife Crossing, Santa Monica Mountains Conservancy Liberty Canyon Wildlife Crossing, Save Open Space, Santa Monica Mountains Liberty Wildlife Corridor Partnership (savelacougars.org) The building of the world's largest animal crossing, Michelle Loxton, KCLU, April 1, 2022 (podcast) Proposed bridges in the United States 2022 establishments in California Simi Hills Natural history of the Santa Monica Mountains Natural history of Los Angeles County, California Agoura Hills, California Santa Monica Mountains National Recreation Area Wildlife conservation
Wallis Annenberg Wildlife Crossing
Biology
1,266
44,908,849
https://en.wikipedia.org/wiki/Becampanel
Becampanel (INN) (code name AMP397) is a quinoxalinedione derivative drug which acts as a competitive antagonist of the AMPA receptor (IC50 = 11 nM). It was investigated as an anticonvulsant for the treatment of epilepsy by Novartis, and was also looked at as a potential treatment for neuropathic pain and cerebral ischemia, but never completed clinical trials. References AMPA receptor antagonists Amines Anticonvulsants Lactams Secondary amines Nitroarenes Quinoxalines Abandoned drugs
Becampanel
Chemistry
122
4,363,576
https://en.wikipedia.org/wiki/Cre-Lox%20recombination
Cre-Lox recombination is a site-specific recombinase technology, used to carry out deletions, insertions, translocations and inversions at specific sites in the DNA of cells. It allows the DNA modification to be targeted to a specific cell type or be triggered by a specific external stimulus. It is implemented both in eukaryotic and prokaryotic systems. The Cre-lox recombination system has been particularly useful to help neuroscientists to study the brain in which complex cell types and neural circuits come together to generate cognition and behaviors. NIH Blueprint for Neuroscience Research has created several hundreds of Cre driver mouse lines which are currently used by the worldwide neuroscience community. An important application of the Cre-lox system is excision of selectable markers in gene replacement. Commonly used gene replacement strategies introduce selectable markers into the genome to facilitate selection of genetic mutations that may cause growth retardation. However, marker expression can have polar effects on the expression of upstream and downstream genes. Removal of selectable markers from the genome by Cre-lox recombination is an elegant and efficient way to circumvent this problem and is therefore widely used in plants, mouse cell lines, yeast, etc. The system consists of a single enzyme, Cre recombinase, that recombines a pair of short target sequences called the Lox sequences. This system can be implemented without inserting any extra supporting proteins or sequences. The Cre enzyme and the original Lox site called the LoxP sequence are derived from bacteriophage P1. Placing Lox sequences appropriately allows genes to be activated, repressed, or exchanged for other genes. At a DNA level many types of manipulations can be carried out. The activity of the Cre enzyme can be controlled so that it is expressed in a particular cell type or triggered by an external stimulus like a chemical signal or a heat shock. These targeted DNA changes are useful in cell lineage tracing and when mutants are lethal if expressed globally. The Cre-Lox system is very similar in action and in usage to the FLP-FRT recombination system. History Cre-Lox recombination is a special type of site-specific recombination developed by Dr. Brian Sauer and patented by DuPont that operated in both mitotic and non-mitotic cells, and was initially used in activating gene expression in mammalian cell lines. Subsequently, researchers in the laboratory of Dr. Jamey Marth demonstrated that Cre-Lox recombination could be used to delete loxP-flanked chromosomal DNA sequences at high efficiency in specific developing T-cells of transgenic animals, with the authors proposing that this approach could be used to define endogenous gene function in specific cell types, indelibly mark progenitors in cell fate determination studies, induce specific chromosomal rearrangements for biological and disease modeling, and determine the roles of early genetic lesions in disease (and phenotype) maintenance. Shortly thereafter, researchers in the laboratory of Prof. Klaus Rajewsky reported the production of pluripotent embryonic stem cells bearing a targeted loxP-flanked (floxed) DNA polymerase gene. Combining these advances in collaboration, the laboratories of Drs. Marth and Rajewsky reported in 1994 that Cre-lox recombination could be used for conditional gene targeting. They observed ≈50% of the DNA polymerase beta gene was deleted in T cells based on DNA blotting. 
It was unclear whether only one allele was deleted in each T cell or whether 50% of T cells had complete deletion of both alleles. More efficient Cre-Lox conditional gene mutagenesis in developing T cells was subsequently reported by the Marth laboratory in 1995. Incomplete deletion by Cre recombinase is not uncommon in cells when two copies of floxed sequences exist, and allows the formation and study of chimeric tissues. All cell types tested in mice have been shown to undergo transgenic Cre recombination. Independently, Joe Z. Tsien pioneered the use of the Cre-loxP system for cell type- and region-specific gene manipulation in the adult brain, where hundreds of distinct neuron types may exist and nearly all neurons are known to be post-mitotic. Tsien and his colleagues demonstrated that Cre-mediated recombination can occur in the post-mitotic pyramidal neurons in the adult mouse forebrain. These developments have led to widespread use of conditional mutagenesis in biomedical research, spanning many disciplines, in which it has become a powerful platform for determining gene function in specific cell types and at specific developmental times. In particular, the clear demonstration of its usefulness in precisely defining the complex relationship between specific cells/circuits and behaviors in brain research prompted the NIH to initiate the NIH Blueprint for Neuroscience Research Cre-driver mouse projects in the early 2000s. To date, NIH Blueprint for Neuroscience Research Cre projects have created several hundred Cre driver mouse lines, which are currently used by the worldwide neuroscience community. Overview Cre-Lox recombination involves the targeting of a specific sequence of DNA and splicing it with the help of an enzyme called Cre recombinase. Cre-Lox recombination is commonly used to circumvent embryonic lethality caused by systemic inactivation of many genes. As of February 2019, Cre–Lox recombination is a powerful tool and is used in transgenic animal modeling to link genotypes to phenotypes. The Cre-lox system is used as a genetic tool to control site-specific recombination events in genomic DNA. This system has allowed researchers to manipulate a variety of genetically modified organisms to control gene expression, delete undesired DNA sequences and modify chromosome architecture. The Cre protein is a site-specific DNA recombinase that can catalyse the recombination of DNA between specific sites in a DNA molecule. These sites, known as loxP sequences, contain specific binding sites for Cre that surround a directional core sequence where recombination can occur. When cells that have loxP sites in their genome express Cre, a recombination event can occur between the loxP sites. Cre recombinase proteins bind to the first and last 13 bp regions of a lox site, forming a dimer. This dimer then binds to a dimer on another lox site to form a tetramer. Lox sites are directional and the two sites joined by the tetramer are parallel in orientation. The double-stranded DNA is cut at both loxP sites by the Cre protein. The strands are then rejoined with DNA ligase in a quick and efficient process. The result of recombination depends on the orientation of the loxP sites. For two lox sites on the same chromosome arm, inverted loxP sites will cause an inversion of the intervening DNA, while a direct repeat of loxP sites will cause a deletion event. If loxP sites are on different chromosomes it is possible for translocation events to be catalysed by Cre-induced recombination.
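The outcome rules just described (same orientation on one molecule gives excision, opposite orientation gives inversion, sites on different molecules allow translocation or integration) can be captured in a short illustrative sketch. This is a toy model of the bookkeeping only, not of the Cre reaction chemistry, and the names used are hypothetical:

```typescript
type Orientation = 'forward' | 'reverse';

interface LoxSite {
  molecule: string;        // identifier of the DNA molecule carrying the site (hypothetical label)
  orientation: Orientation;
}

// Predict the qualitative outcome of Cre acting on a pair of lox sites,
// following the orientation rules described above.
function crePrediction(a: LoxSite, b: LoxSite): string {
  if (a.molecule !== b.molecule) {
    return 'intermolecular recombination (translocation / integration)';
  }
  return a.orientation === b.orientation
    ? 'excision of the floxed segment (direct repeats)'
    : 'inversion of the floxed segment (inverted repeats)';
}

// Example: two directly repeated loxP sites on the same chromosome.
console.log(crePrediction(
  { molecule: 'chr1', orientation: 'forward' },
  { molecule: 'chr1', orientation: 'forward' },
)); // -> excision of the floxed segment (direct repeats)
```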
Two plasmids can be joined using the variant lox sites 71 and 66. Cre recombinase Cre recombinase can be synthesized by the corresponding gene under the direction of cell-specific promoters, including promoters under the control of doxycycline. An additional level of control can be achieved by using CreER, a Cre recombinase engineered to be reversibly activated in the presence of the estrogen analogue 4-hydroxytamoxifen. This has the advantage that the Cre recombinase is active only for a short time, which prevents non-specific actions of Cre recombinase. The Cre recombinase can recognize cryptic sites in the host genome and induce unintended recombination, damaging host DNA. This tool is suitable for deleting antibiotic resistance genes, but above all it allows conditional knockouts that can be induced at specific times in the cell type of choice. Models thus obtained are more likely to mimic the physiological situation. The Cre protein (encoded by the locus originally named "Causes recombination", with "Cyclization recombinase" being found in some references) consists of four subunits and two domains: a larger carboxyl (C-terminal) domain and a smaller amino (N-terminal) domain. The total protein has 343 amino acids. The C domain is similar in structure to the domain in the Integrase family of enzymes isolated from lambda phage. This is also the catalytic site of the enzyme. loxP site loxP (locus of X-over P1) is a site on the bacteriophage P1 consisting of 34 bp. The site includes an asymmetric 8 bp sequence, variable except for the middle two bases, in between two sets of symmetric, 13 bp sequences. The exact sequence is given below; 'N' indicates bases which may vary, and lowercase letters indicate bases that have been mutated from the wild-type. The 13 bp sequences are palindromic but the 8 bp spacer is not, thus giving the loxP sequence a certain direction. Usually loxP sites come in pairs for genetic manipulation. If the two loxP sites are in the same orientation, the floxed sequence (sequence flanked by two loxP sites) is excised; however, if the two loxP sites are in the opposite orientation, the floxed sequence is inverted. If there exists a floxed donor sequence, the donor sequence can be swapped with the original sequence. This technique is called recombinase-mediated cassette exchange and is a very convenient and time-saving method of genetic manipulation. The caveat, however, is that the recombination reaction can happen backwards, rendering cassette exchange inefficient. In addition, sequence excision can happen in trans instead of in cis during a cassette exchange event. loxP mutants have been created to avoid these problems. Holliday junctions and homologous recombination During genetic recombination, a Holliday junction is formed between the two strands of DNA and a double-stranded break in a DNA molecule leaves a 3’OH end exposed. This reaction is aided by the endonuclease activity of an enzyme. 5’ phosphate ends are usually the substrates for this reaction, thus extended 3’ regions remain. This 3’ OH group is highly unstable, and the strand on which it is present must find its complement. Since homologous recombination occurs after DNA replication, two strands of DNA are available, and thus the 3’ OH group must pair with its complement, and it does so with an intact strand on the other duplex. At this point, one crossover has occurred, producing what is called a Holliday intermediate. The 3’OH end is elongated (that is, bases are added) with the help of DNA polymerase. 
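A lox-type site can be recognized purely from the structure described above: two 13 bp arms that are reverse complements of each other flanking an 8 bp asymmetric spacer. The Python sketch below checks that structure; the wild-type loxP sequence shown is the commonly cited one and is included only as an illustration.

def revcomp(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def is_lox_like(site):
    # True if a 34 bp site has 13 bp inverted-repeat arms around an 8 bp asymmetric spacer.
    if len(site) != 34:
        return False
    left, spacer, right = site[:13], site[13:21], site[21:]
    arms_ok = right == revcomp(left)               # the two arms form an inverted repeat
    spacer_asymmetric = spacer != revcomp(spacer)  # the spacer gives the site a direction
    return arms_ok and spacer_asymmetric

loxP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # commonly cited wild-type loxP (illustrative)
print(is_lox_like(loxP))  # True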
The pairing of opposite strands is what constitutes the crossing-over or recombination event, which is common to all living organisms, since the genetic material on one strand of one duplex has paired with one strand of another duplex, and has been elongated by DNA polymerase. Further cleavage of Holliday intermediates results in the formation of hybrid DNA. This further cleavage, or resolution, is carried out by a special group of enzymes called resolvases. RuvC is just one of these resolvases that have been isolated in bacteria and yeast. For many years, it was thought that when the Holliday junction intermediate was formed, the branch point of the junction (where the strands cross over) would be located at the first cleavage site. Migration of the branch point to the second cleavage site would then somehow trigger the second half of the pathway. This model provided a convenient explanation for the strict requirement for homology between recombining sites, since branch migration would stall at a mismatch and would not allow the second strand exchange to occur. In more recent years, however, this view has been challenged, and most of the current models for Int, Xer, and Flp recombination involve only limited branch migration (1–3 base pairs of the Holliday intermediate), coupled to an isomerisation event that is responsible for switching the strand cleavage specificity. Site-specific recombination Site-specific recombination (SSR) involves specific sites for the catalyzing action of special enzymes called recombinases. Cre, or cyclization recombinase, is one such enzyme. Site-specific recombination is, thus, the enzyme-mediated cleavage and ligation of two defined deoxynucleotide sequences. A number of conserved site-specific recombination systems have been described in both prokaryotic and eukaryotic organisms. In general, these systems use one or more proteins and act on unique asymmetric DNA sequences. The products of the recombination event depend on the relative orientation of these asymmetric sequences. Many other proteins apart from the recombinase are involved in regulating the reaction. During site-specific DNA recombination, which brings about genetic rearrangement in processes such as viral integration and excision and chromosomal segregation, these recombinase enzymes recognize specific DNA sequences and catalyse the reciprocal exchange of DNA strands between these sites. Mechanism of action Initiation of site-specific recombination begins with the binding of recombination proteins to their respective DNA targets. A separate recombinase recognizes and binds to each of two recombination sites on two different DNA molecules or within the same DNA strand. At the given specific site on the DNA, the hydroxyl group of the tyrosine in the recombinase attacks a phosphate group in the DNA backbone using a direct transesterification mechanism. This reaction links the recombinase protein to the DNA via a phospho-tyrosine linkage. This conserves the energy of the phosphodiester bond, allowing the reaction to be reversed without the involvement of a high-energy cofactor. Cleavage on the other strand also causes a phospho-tyrosine bond between DNA and the enzyme. At both of the DNA duplexes, the bonding of the phosphate group to tyrosine residues leaves a 3’ OH group free in the DNA backbone. 
In fact, the enzyme-DNA complex is an intermediate stage, which is followed by the ligation of the 3’ OH group of one DNA strand to the 5’ phosphate group of the other DNA strand, which is covalently bonded to the tyrosine residue; that is, the covalent linkage between the 5’ end and the tyrosine residue is broken. This reaction generates the Holliday junction discussed earlier. In this fashion, opposite DNA strands are joined. Subsequent cleavage and rejoining cause DNA strands to exchange their segments. Protein-protein interactions drive and direct strand exchange. No external energy source is required, since the protein-DNA linkage conserves the energy of the phosphodiester bond lost during cleavage. Site-specific recombination is also an important process that viruses, such as bacteriophages, adopt to integrate their genetic material into the infected host. The virus, called a prophage in such a state, accomplishes this via integration and excision. The points where the integration and excision reactions occur are called the attachment (att) sites. An attP site on the phage exchanges segments with an attB site on the bacterial DNA. Thus, these reactions are site-specific, occurring only at the respective att sites. The integrase class of enzymes catalyses this particular reaction. Efficiency of action Two factors have been shown to affect the efficiency of Cre's excision on the lox pair. The first is the nucleotide sequence identity in the spacer region of the lox site. Engineered lox variants which differ in the spacer region tend to have varied but generally lower recombination efficiency compared to wild-type loxP, presumably by affecting the formation and resolution of the recombination intermediate. Another factor is the length of DNA between the lox pair. Increasing the length of DNA leads to decreased efficiency of Cre/lox recombination, possibly through regulating the dynamics of the reaction. The genetic location of the floxed sequence affects recombination efficiency as well, probably by influencing the accessibility of the DNA to Cre recombinase. The choice of Cre driver is also important, as low expression of Cre recombinase tends to result in non-parallel recombination. Non-parallel recombination is especially problematic in a fate mapping scenario where one recombination event is designed to manipulate the gene under study and the other recombination event is necessary for activating a reporter gene (usually encoding a fluorescent protein) for cell lineage tracing. Failure to activate both recombination events simultaneously confounds the interpretation of cell fate mapping results. Temporal control Inducible Cre activation is achieved using the CreER (estrogen receptor) variant, which is only activated after delivery of tamoxifen. This is done through the fusion of a mutated ligand-binding domain of the estrogen receptor to the Cre recombinase, resulting in Cre becoming specifically activated by tamoxifen. In the absence of tamoxifen, the mutated recombinase is shuttled into the cytoplasm. The protein stays in this location in its inactivated state until tamoxifen is given. Once tamoxifen is introduced, it is metabolized into 4-hydroxytamoxifen, which then binds to the ER and results in the translocation of CreER into the nucleus, where it is then able to cleave the lox sites. 
Importantly, sometimes fluorescent reporters can be activated in the absence of tamoxifen, due to leakage of a few Cre recombinase molecules into the nucleus which, in combination with very sensitive reporters, results in unintended cell labelling. CreER(T2) was developed to minimize tamoxifen-independent recombination and maximize tamoxifen sensitivity. Conditional cell lineage tracing Cells alter their phenotype in response to numerous environmental stimuli and can lose the expression of genes typically used to mark their identity, making it difficult to research the contribution of certain cell types to disease. Therefore, researchers often use transgenic mice expressing CreERt2 recombinase induced by tamoxifen administration, under the control of a promoter of a gene that marks the specific cell type of interest, together with a Cre-dependent fluorescent protein reporter. The Cre recombinase is fused to a mutant form of the oestrogen receptor, which binds the synthetic oestrogen 4-hydroxytamoxifen instead of its natural ligand 17β-estradiol. CreER(T2) resides within the cytoplasm and can only translocate to the nucleus following tamoxifen administration, allowing tight temporal control of recombination. The fluorescent reporter cassette contains a promoter to permit high expression of the fluorescent transgene reporter (e.g. a CAG promoter) and a loxP-flanked stop cassette followed by the reporter sequence, ensuring that expression of the transgene is Cre-recombinase dependent. Upon Cre-driven recombination, the stop cassette is excised, allowing the reporter genes to be expressed specifically in cells in which Cre expression is driven by the cell-specific marker promoter. Since removal of the stop cassette is permanent, the reporter genes are expressed in all the progeny produced by the initial cells where Cre was once activated. Such conditional lineage tracing has proved extremely useful for efficiently and specifically identifying vascular smooth muscle cells (VSMCs) and VSMC-derived cells, and has been used to test effects on VSMC and VSMC-derived cells in vivo. Natural function of the Cre-lox system The P1 phage is a temperate phage that follows either a lysogenic or a lytic cycle when it infects a bacterium. In its lytic state, once its viral genome is injected into the host cell, viral proteins are produced, virions are assembled, and the host cell is lysed to release the phages, continuing the cycle. In the lysogenic cycle the phage genome replicates with the rest of the bacterial genome and is transmitted to daughter cells at each subsequent cell division. It can transition to the lytic cycle following a later event such as UV radiation or starvation. Phages like the lambda phage use their site-specific recombinases to integrate their DNA into the host genome during lysogeny. P1 phage DNA, on the other hand, exists as a plasmid in the host. The Cre-lox system serves several functions in the phage: it circularizes the phage DNA into a plasmid, separates interlinked plasmid rings so they are passed to both daughter bacteria equally, and may help maintain copy number through an alternative means of replication. The P1 phage DNA, when released into the host from the virion, is in the form of a linear double-stranded DNA molecule. The Cre enzyme targets loxP sites at the ends of this molecule and cyclises the genome. This can also take place in the absence of the Cre-lox system with the help of other bacterial and viral proteins. 
The P1 plasmid is relatively large (≈90 kbp) and hence exists at a low copy number, usually one per cell. If the two daughter plasmids become interlinked, one of the daughter cells of the host will lose the plasmid. The Cre-lox recombination system prevents these situations by unlinking the rings of DNA through two recombination events (linked rings -> single fused ring -> two unlinked rings). It has also been proposed that rolling-circle replication followed by recombination allows the plasmid to increase its copy number when certain regulators (repA) are limiting. Implementation of multiple loxP site pairs A classical strategy for generating gene deletion variants is based on double cross-integration of non-replicating vectors into the genome. Furthermore, recombination systems such as Cre-lox are widely used, mostly in eukaryotes. The versatile properties of Cre recombinase make it ideal for use in many genetic engineering strategies. As such, the Cre-lox system has been used in a wide variety of eukaryotes, including plants. Multiple variants of loxP, in particular lox2272 and loxN, have been used by researchers in combination with different Cre actions (transient or constitutive) to create the "Brainbow" system, which allows multi-colour labelling of the mouse brain with four fluorescent proteins. Another report used two pairs of lox variants and, by regulating the length of DNA between one pair, achieved stochastic gene activation with a controlled level of sparseness. Notes and references External links Introduction to Cre-lox technology by the "Jackson Laboratory" Molecular genetics DNA Genetic engineering Genetics techniques
Cre-Lox recombination
Chemistry,Engineering,Biology
4,928
359,684
https://en.wikipedia.org/wiki/Cumulant
In probability theory and statistics, the cumulants of a probability distribution are a set of quantities that provide an alternative to the moments of the distribution. Any two probability distributions whose moments are identical will have identical cumulants as well, and vice versa. The first cumulant is the mean, the second cumulant is the variance, and the third cumulant is the same as the third central moment. But fourth and higher-order cumulants are not equal to central moments. In some cases theoretical treatments of problems in terms of cumulants are simpler than those using moments. In particular, when two or more random variables are statistically independent, the th-order cumulant of their sum is equal to the sum of their th-order cumulants. As well, the third and higher-order cumulants of a normal distribution are zero, and it is the only distribution with this property. Just as for moments, where joint moments are used for collections of random variables, it is possible to define joint cumulants. Definition The cumulants of a random variable are defined using the cumulant-generating function , which is the natural logarithm of the moment-generating function: The cumulants are obtained from a power series expansion of the cumulant generating function: This expansion is a Maclaurin series, so the th cumulant can be obtained by differentiating the above expansion times and evaluating the result at zero: If the moment-generating function does not exist, the cumulants can be defined in terms of the relationship between cumulants and moments discussed later. Alternative definition of the cumulant generating function Some writers prefer to define the cumulant-generating function as the natural logarithm of the characteristic function, which is sometimes also called the second characteristic function, An advantage of —in some sense the function evaluated for purely imaginary arguments—is that is well defined for all real values of even when is not well defined for all real values of , such as can occur when there is "too much" probability that has a large magnitude. Although the function will be well defined, it will nonetheless mimic in terms of the length of its Maclaurin series, which may not extend beyond (or, rarely, even to) linear order in the argument , and in particular the number of cumulants that are well defined will not change. Nevertheless, even when does not have a long Maclaurin series, it can be used directly in analyzing and, particularly, adding random variables. Both the Cauchy distribution (also called the Lorentzian) and more generally, stable distributions (related to the Lévy distribution) are examples of distributions for which the power-series expansions of the generating functions have only finitely many well-defined terms. Some basic properties The th cumulant of (the distribution of) a random variable enjoys the following properties: If and is constant (i.e. not random) then i.e. the cumulant is translation invariant. (If then we have If is constant (i.e. not random) then i.e. the th cumulant is homogeneous of degree . If random variables are independent then That is, the cumulant is cumulative — hence the name. The cumulative property follows quickly by considering the cumulant-generating function: so that each cumulant of a sum of independent random variables is the sum of the corresponding cumulants of the addends. 
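In symbols, the definitions and the additivity property described above can be written compactly as follows (standard notation, restating in formulas what is stated in words above):

K_X(t) = \log \operatorname{E}\left[e^{tX}\right] = \sum_{n=1}^{\infty} \kappa_n \frac{t^n}{n!},
\qquad \kappa_n = K_X^{(n)}(0), \qquad \kappa_1 = \operatorname{E}[X], \quad \kappa_2 = \operatorname{Var}(X),

K_{X+Y}(t) = \log \operatorname{E}\left[e^{t(X+Y)}\right]
           = \log\left(\operatorname{E}\left[e^{tX}\right]\operatorname{E}\left[e^{tY}\right]\right)
           = K_X(t) + K_Y(t) \quad \text{for independent } X \text{ and } Y.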
That is, when the addends are statistically independent, the mean of the sum is the sum of the means, the variance of the sum is the sum of the variances, the third cumulant (which happens to be the third central moment) of the sum is the sum of the third cumulants, and so on for each order of cumulant. A distribution with given cumulants can be approximated through an Edgeworth series. First several cumulants as functions of the moments All of the higher cumulants are polynomial functions of the central moments, with integer coefficients, but only in degrees 2 and 3 are the cumulants actually central moments. mean the variance, or second central moment. the third central moment. the fourth central moment minus three times the square of the second central moment. Thus this is the first case in which cumulants are not simply moments or central moments. The central moments of degree more than 3 lack the cumulative property. Cumulants of some discrete probability distributions The constant random variables . The cumulant generating function is . The first cumulant is and the other cumulants are zero, . The Bernoulli distributions, (number of successes in one trial with probability of success). The cumulant generating function is . The first cumulants are and . The cumulants satisfy a recursion formula The geometric distributions, (number of failures before one success with probability of success on each trial). The cumulant generating function is . The first cumulants are , and . Substituting gives and . The Poisson distributions. The cumulant generating function is . All cumulants are equal to the parameter: . The binomial distributions, (number of successes in independent trials with probability of success on each trial). The special case is a Bernoulli distribution. Every cumulant is just times the corresponding cumulant of the corresponding Bernoulli distribution. The cumulant generating function is . The first cumulants are and . Substituting gives and . The limiting case is a Poisson distribution. The negative binomial distributions, (number of failures before successes with probability of success on each trial). The special case is a geometric distribution. Every cumulant is just times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is . The first cumulants are , and . Substituting gives and . Comparing these formulas to those of the binomial distributions explains the name 'negative binomial distribution'. The limiting case is a Poisson distribution. Introducing the variance-to-mean ratio the above probability distributions get a unified formula for the derivative of the cumulant generating function: The second derivative is confirming that the first cumulant is and the second cumulant is . The constant random variables have . The binomial distributions have so that . The Poisson distributions have . The negative binomial distributions have so that . Note the analogy to the classification of conic sections by eccentricity: circles , ellipses , parabolas , hyperbolas . Cumulants of some continuous probability distributions For the normal distribution with expected value and variance , the cumulant generating function is . The first and second derivatives of the cumulant generating function are and . The cumulants are , , and . The special case is a constant random variable . The cumulants of the uniform distribution on the interval are , where is the th Bernoulli number. 
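As a quick numerical check of the Poisson statement above (every cumulant equals the parameter), the skewness and excess kurtosis reported by SciPy can be converted back into the third and fourth cumulants using κ3 = skewness·κ2^(3/2) and κ4 = (excess kurtosis)·κ2². The rate 3.7 below is an arbitrary illustrative choice, and the snippet assumes SciPy is available.

from scipy.stats import poisson

lam = 3.7                                 # arbitrary illustrative rate
mean, var, skew, kurt = poisson.stats(lam, moments="mvsk")

k1 = float(mean)                          # first cumulant
k2 = float(var)                           # second cumulant
k3 = float(skew) * k2 ** 1.5              # third cumulant from skewness
k4 = float(kurt) * k2 ** 2                # fourth cumulant from excess kurtosis

print(k1, k2, k3, k4)                     # all approximately 3.7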
The cumulants of the exponential distribution with rate parameter are . Some properties of the cumulant generating function The cumulant generating function , if it exists, is infinitely differentiable and convex, and passes through the origin. Its first derivative ranges monotonically in the open interval from the infimum to the supremum of the support of the probability distribution, and its second derivative is strictly positive everywhere it is defined, except for the degenerate distribution of a single point mass. The cumulant-generating function exists if and only if the tails of the distribution are majorized by an exponential decay, that is, (see Big O notation) where is the cumulative distribution function. The cumulant-generating function will have vertical asymptote(s) at the negative supremum of such , if such a supremum exists, and at the supremum of such , if such a supremum exists, otherwise it will be defined for all real numbers. If the support of a random variable has finite upper or lower bounds, then its cumulant-generating function , if it exists, approaches asymptote(s) whose slope is equal to the supremum or infimum of the support, respectively, lying above both these lines everywhere. (The integrals yield the -intercepts of these asymptotes, since .) For a shift of the distribution by , For a degenerate point mass at , the cumulant generating function is the straight line , and more generally, if and only if and are independent and their cumulant generating functions exist; (subindependence and the existence of second moments sufficing to imply independence.) The natural exponential family of a distribution may be realized by shifting or translating , and adjusting it vertically so that it always passes through the origin: if is the pdf with cumulant generating function and is its natural exponential family, then and If is finite for a range then if then is analytic and infinitely differentiable for . Moreover for real and is strictly convex, and is strictly increasing. Further properties of cumulants A negative result Given the results for the cumulants of the normal distribution, it might be hoped to find families of distributions for which for some , with the lower-order cumulants (orders 3 to ) being non-zero. There are no such distributions. The underlying result here is that the cumulant generating function cannot be a finite-order polynomial of degree greater than 2. Cumulants and moments The moment generating function is given by: So the cumulant generating function is the logarithm of the moment generating function The first cumulant is the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments. The moments can be recovered in terms of cumulants by evaluating the th derivative of at , Likewise, the cumulants can be recovered in terms of moments by evaluating the th derivative of at , The explicit expression for the th moment in terms of the first cumulants, and vice versa, can be obtained by using Faà di Bruno's formula for higher derivatives of composite functions. In general, we have where are incomplete (or partial) Bell polynomials. 
In like manner, if the mean is given by , the central moment generating function is given by and the th central moment is obtained in terms of cumulants as Also, for , the th cumulant in terms of the central moments is The th moment is an th-degree polynomial in the first cumulants. The first few expressions are: The "prime" distinguishes the moments from the central moments . To express the central moments as functions of the cumulants, just drop from these polynomials all terms in which appears as a factor: Similarly, the th cumulant is an th-degree polynomial in the first non-central moments. The first few expressions are: In general, the cumulant is the determinant of a matrix: To express the cumulants for as functions of the central moments, drop from these polynomials all terms in which μ'1 appears as a factor: The cumulants can be related to the moments by differentiating the relationship with respect to , giving , which conveniently contains no exponentials or logarithms. Equating the coefficient of on the left and right sides and using gives the following formulas for : These allow either or to be computed from the other using knowledge of the lower-order cumulants and moments. The corresponding formulas for the central moments for are formed from these formulas by setting and replacing each with for : Cumulants and set-partitions These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is where runs through the list of all partitions of a set of size ; "" means is one of the "blocks" into which the set is partitioned; and is the size of the set . Thus each monomial is a constant times a product of cumulants in which the sum of the indices is (e.g., in the term , the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). A partition of the integer corresponds to each term. The coefficient in each term is the number of partitions of a set of members that collapse to that partition of the integer when the members of the set become indistinguishable. Cumulants and combinatorics Further connections between cumulants and combinatorics can be found in the work of Gian-Carlo Rota, where links to invariant theory, symmetric functions, and binomial sequences are studied via umbral calculus. Joint cumulants The joint cumulant of several random variables is defined as the coefficient in the Maclaurin series of the multivariate cumulant generating function, see Section 3.1 in, Note that and, in particular As with a single variable, the generating function and cumulant can instead be defined via in which case and Repeated random variables and relation between the coefficients κk1, ..., kn Observe that can also be written as from which we conclude that For example and In particular, the last equality shows that the cumulants of a single random variable are the joint cumulants of multiple copies of that random variable. Relation with mixed moments The joint cumulant of random variables can be expressed as an alternating sum of products of their mixed moments, see Equation (3.2.7) in, where  runs through the list of all partitions of ; where  runs through the list of all blocks of the partition ; and where  is the number of parts in the partition. 
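The single-variable recursion alluded to above, which ties moments and cumulants together through binomial coefficients without any exponentials or logarithms, can be sketched in a few lines of Python. This is a direct transcription of the standard identities m_n = sum over j of C(n-1, j-1) k_j m_{n-j} (with m_0 = 1) and their inversion, offered as an illustration rather than a library implementation.

from math import comb

def moments_from_cumulants(kappa):
    # kappa = [k1, k2, ...]; returns the raw moments [m1, m2, ...].
    m = [1.0]
    for n in range(1, len(kappa) + 1):
        m.append(sum(comb(n - 1, j - 1) * kappa[j - 1] * m[n - j] for j in range(1, n + 1)))
    return m[1:]

def cumulants_from_moments(moments):
    # Inverse recursion: k_n = m_n - sum_{j=1..n-1} C(n-1, j-1) * k_j * m_{n-j}.
    m = [1.0] + list(moments)
    k = []
    for n in range(1, len(moments) + 1):
        k.append(m[n] - sum(comb(n - 1, j - 1) * k[j - 1] * m[n - j] for j in range(1, n)))
    return k

# Poisson(1): every cumulant is 1 and the raw moments are the Bell numbers.
print(moments_from_cumulants([1, 1, 1, 1, 1]))    # [1.0, 2.0, 5.0, 15.0, 52.0]
print(cumulants_from_moments([1, 2, 5, 15, 52]))  # [1.0, 1.0, 1.0, 1.0, 1.0]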
For example, is the expected value of , is the covariance of and , and For zero-mean random variables , any mixed moment of the form vanishes if is a partition of which contains a singleton . Hence, the expression of their joint cumulant in terms of mixed moments simplifies. For example, if X, Y, Z, W are zero-mean random variables, we have More generally, any coefficient of the Maclaurin series can also be expressed in terms of mixed moments, although there are no concise formulae. Indeed, as noted above, one can write it as a joint cumulant by repeating random variables appropriately, and then apply the above formula to express it in terms of mixed moments. For example If some of the random variables are independent of all of the others, then any cumulant involving two (or more) independent random variables is zero. The combinatorial meaning of the expression of mixed moments in terms of cumulants is easier to understand than that of cumulants in terms of mixed moments, see Equation (3.2.6) in: For example: Further properties Another important property of joint cumulants is multilinearity: Just as the second cumulant is the variance, the joint cumulant of just two random variables is the covariance. The familiar identity generalizes to cumulants: Conditional cumulants and the law of total cumulance The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case , expressed in the language of (central) moments rather than that of cumulants, says In general, where the sum is over all partitions  of the set of indices, and 1, ..., b are all of the "blocks" of the partition ; the expression denotes the joint cumulant of the random variables whose indices are in that block of the partition. Conditional cumulants and the conditional expectation For certain settings, a derivative identity can be established between the conditional cumulant and the conditional expectation. For example, suppose that where is standard normal independent of , then for any it holds that The results can also be extended to the exponential family. Relation to statistical physics In statistical physics many extensive quantities – that is, quantities that are proportional to the volume or size of a given system – are related to cumulants of random variables. The deep connection is that in a large system an extensive quantity like the energy or number of particles can be thought of as the sum of (say) the energy associated with a number of nearly independent regions. The fact that the cumulants of these nearly independent random variables will (nearly) add makes it reasonable that extensive quantities should be expected to be related to cumulants. A system in equilibrium with a thermal bath at temperature has a fluctuating internal energy , which can be considered a random variable drawn from a distribution . The partition function of the system is where = and is the Boltzmann constant, and the notation has been used rather than for the expectation value to avoid confusion with the energy, . Hence the first and second cumulants of the energy give the average energy and the heat capacity. The Helmholtz free energy expressed in terms of further connects thermodynamic quantities with the cumulant generating function for the energy. Thermodynamic properties that are derivatives of the free energy, such as its internal energy, entropy, and specific heat capacity, can all be readily expressed in terms of these cumulants. 
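The relations just described can be summarized in standard notation, with β = 1/(k_B T); this is a restatement of the usual statistical-mechanics identities, not a derivation:

Z(\beta) = \sum_i e^{-\beta E_i}, \qquad
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta} = \kappa_1, \qquad
\langle E^2 \rangle - \langle E \rangle^2 = \frac{\partial^2 \ln Z}{\partial \beta^2} = \kappa_2,

C_V = \frac{\partial \langle E \rangle}{\partial T} = \frac{\kappa_2}{k_B T^2}, \qquad
F = -k_B T \ln Z.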
Other free energies can be functions of other variables, such as the magnetic field or chemical potential , e.g. where is the number of particles and is the grand potential. Again, the close relationship between the definition of the free energy and the cumulant generating function implies that various derivatives of this free energy can be written in terms of joint cumulants of and . History The history of cumulants is discussed by Anders Hald. Cumulants were first introduced by Thorvald N. Thiele, in 1889, who called them semi-invariants. They were first called cumulants in a 1932 paper by Ronald Fisher and John Wishart. Fisher was publicly reminded of Thiele's work by Neyman, who also noted earlier published citations of Thiele brought to Fisher's attention. Stephen Stigler has said that the name cumulant was suggested to Fisher in a letter from Harold Hotelling. In a paper published in 1929, Fisher had called them cumulative moment functions. The partition function in statistical physics was introduced by Josiah Willard Gibbs in 1901. The free energy is often called Gibbs free energy. In statistical mechanics, cumulants are also known as Ursell functions, relating to a publication in 1927. Cumulants in generalized settings Formal cumulants More generally, the cumulants of a sequence , not necessarily the moments of any probability distribution, are, by definition, where the values of for are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints. Bell numbers In combinatorics, the th Bell number is the number of partitions of a set of size . All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1. Cumulants of a polynomial sequence of binomial type For any sequence of scalars in a field of characteristic zero, being considered formal cumulants, there is a corresponding sequence of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial make a new polynomial in these plus one additional variable : and then generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on . Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell. This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants. Free cumulants In the above moment-cumulant formula for joint cumulants, one sums over all partitions of the set . If, instead, one sums only over the noncrossing partitions, then, by solving these formulae for the in terms of the moments, one gets free cumulants rather than the conventional cumulants treated above. These free cumulants were introduced by Roland Speicher and play a central role in free probability theory. 
In that theory, rather than considering independence of random variables, defined in terms of tensor products of algebras of random variables, one considers instead free independence of random variables, defined in terms of free products of algebras. The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The free cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory. See also Entropic value at risk Cumulant generating function from a multiset Cornish–Fisher expansion Edgeworth expansion Polykay k-statistic, a minimum-variance unbiased estimator of a cumulant Ursell function Total position spread tensor as an application of cumulants to analyse the electronic wave function in quantum chemistry. References External links cumulant on the Earliest known uses of some of the words of mathematics Moment (mathematics)
Cumulant
Physics,Mathematics
4,488
55,461,330
https://en.wikipedia.org/wiki/NGC%204753
NGC 4753 is a lenticular galaxy located about 60 million light-years away in the constellation of Virgo. NGC 4753 was discovered by astronomer William Herschel on February 22, 1784. It is notable for having distinct dust lanes that surround its nucleus. It is a member of the NGC 4753 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. Physical characteristics The distribution of dust in NGC 4753 lies in an inclined disk wrapped several times around the nucleus. The material in the disk may have been accreted from the merger of a gas-rich dwarf galaxy. Over several orbital periods, the accreted material eventually smeared out into a disk. Differential precession that occurred after the accretion event caused the disk to twist. Eventually, the disk settled into a fixed orientation with respect to the galaxy. The age of the disk is estimated to be around half a billion to a billion years. Another explanation suggests that the dust in NGC 4753 originated from red giant stars in the galaxy. Dark matter Analysis of the twisted disk in NGC 4753 by Steiman-Cameron et al. revealed that most of the mass in the galaxy lies in a slightly flattened spherical halo of dark matter. Globular clusters NGC 4753 has an estimated population of 1070 ± 120 globular clusters. Supernovae Two supernovae have been observed in NGC 4753: SN 1965I (type unknown, mag. 13.5) was discovered by Leonida Rosino on 18 June 1965. SN 1983G (type Ia, mag. 13) was co-discovered by Kiyomi Okazaki and Robert Evans on 4 April 1983. Group membership NGC 4753 is a member of its own galaxy group, known as the NGC 4753 Group. The NGC 4753 Group is located near the southern edge of the Virgo Cluster. The group, along with other groups of galaxies, forms part of a filament that extends off from the southern border of the Virgo Cluster and is called the Virgo II Groups. Image gallery See also List of NGC objects (4001–5000) References External links De Vaucouleurs Atlas entry on NGC 4753 NGC 4753 Lenticular dust in detail (NASA/ESA) Lenticular galaxies Peculiar galaxies Virgo (constellation) 4753 043671 8009 Astronomical objects discovered in 1784 Discoveries by William Herschel +00-33-016 12498-0055
NGC 4753
Astronomy
525
16,723,782
https://en.wikipedia.org/wiki/Jackshaft
A jackshaft, also called a countershaft, is a common mechanical design component used to transfer or synchronize rotational force in a machine. A jackshaft is often just a short stub with supporting bearings on the ends and two pulleys, gears, or cranks attached to it. In general, a jackshaft is any shaft that is used as an intermediary transmitting power from a driving shaft to a driven shaft. History Jackshaft The oldest uses of the term jackshaft appear to involve shafts that were intermediate between water wheels or stationary steam engines and the line shafts of 19th century mills. In these early sources from New England mills in 1872 and 1880, the term "jack shaft" always appears in quotes. Another 1872 author wrote: "Gear wheels are used in England to transmit the power of the engine to what is usually called the jack shaft." By 1892, the quotes were gone, but the use remained the same. The pulleys on the jackshafts of mills or power plants were frequently connected to the shaft with clutches. For example, in the 1890s, the generating room of the Virginia Hotel in Chicago had two Corliss engines and five dynamos, linked through a jackshaft. Clutches on the jackshaft pulleys allowed any or all of the dynamos to be driven by either or both of the engines. With the advent of chain-drive vehicles, the term jackshaft was generally applied to the final intermediate shaft in the drive train, either a chain driven shaft driving pinions that directly engaged teeth on the inside of the rims of the drive wheels, or the output shaft of the transmission/differential that is linked by chain to the drive wheels. One of the first uses of the term jackshaft in the context of railroad equipment was in an 1890 patent application by Samuel Mower. In his electric-motor driven railroad truck, the motor was geared to a jackshaft mounted between the side frames. A sliding dog clutch inside the jackshaft was used to select one of several gear ratios on the chain drive to the driven axle. Later railroad jackshafts were generally connected to the driving wheels using side rods (see jackshaft (locomotive) for details). Countershaft The term countershaft is somewhat older. In 1828, the term was used to refer to an intermediate horizontal shaft in a gristmill driven through gearing by the waterwheel and driving the millstones through bevel gears. An 1841 textbook used the term to refer to a short shaft driven by a belt from the line shaft and driving the spindle of a lathe through additional belts. The countershaft and the lathe spindle each carried cones of different-diameter pulleys for speed control. In 1872, this definition was given: "The term countershaft is applied to all shafts driven from the main line [shaft] when placed at or near the machines to be driven ..." Modern uses Modern jackshafts and countershafts are often hidden inside large machinery as components of the larger overall device. In farm equipment, a spinning output shaft at the rear of the vehicle is commonly referred to as the power take-off or PTO, and the power-transfer shaft connected to it is commonly called a PTO shaft, but is also a jackshaft. See also Drive shaft Layshaft References Industrial Revolution History of technology Shaft drives
Jackshaft
Technology
687
10,374,890
https://en.wikipedia.org/wiki/NGC%204622
NGC 4622, also known as the Backward Galaxy, is a face-on unbarred spiral galaxy with a very prominent ring structure located in the constellation Centaurus. The galaxy is a member of the Centaurus Cluster. Spiral structure The spiral galaxy NGC 4622 lies approximately 111 million light-years away from Earth in the constellation Centaurus. NGC 4622 is an example of a galaxy with leading spiral arms. Each spiral arm winds away from the center of the galaxy and ends at an outermost tip that "points" in a certain direction (away from the arm). Spiral arms were thought to always trail, meaning that the outermost tip of every spiral arm points away from the direction of the disk's orbital rotation. This is true of the inner spiral arm of NGC 4622 but not of its outer spiral arms. The outer arms of NGC 4622 are instead leading spiral arms, meaning the tips of the spiral arms point towards the direction of disk rotation. This may be the result of a gravitational interaction between NGC 4622 and another galaxy or the result of a merger between NGC 4622 and a smaller object. NGC 4622 also has a single inner trailing spiral arm. Although it was originally suspected that the inner spiral arm was a leading arm, the observations that established that the outer arms were leading also established that the inner arm was trailing. These results were met with skepticism in part because they contradicted conventional wisdom, with one quote being "so you're the backward astronomers who found the backward galaxy." Astronomical objections centered on the fact that dust reddening and cloud silhouettes were used to determine that the outer arms lead. The galaxy disk is tilted only 19 degrees from face-on, making near-to-far-side effects of dust hard to discern, and clumpy dust clouds might be concentrated on one side of the disk, creating misleading results. In response, new research determined NGC 4622's spiral arm sense with a method independent of the previous work. The Fourier component method reveals two new weak arms in the inner disk winding opposite the outer strong clockwise pair. Thus the galaxy must have a pair of arms winding in the opposite direction from most galaxies. Analysis of a color-age star formation angle sequence of the Fourier components establishes that the strong outer pair is the leading pair. A Fourier component image of the arm pairs is shown with one of the pair of arms marked for the newly discovered inner CCW pair (black dots) as well as one of the already known (CW) outer pair (white dots). Supernovae On 25 May 2001, an image taken by the Hubble Space Telescope captured a supernova in NGC 4622, designated SN 2001jx (type unknown, mag. 17.5). It was discovered by R. Buta and G. G. Byrd of the University of Alabama as well as T. Freeman of Bevill State Community College. The type of supernova was not determined. A second supernova in this galaxy was discovered on 14 January 2019, and designated SN 2019so (type Ia-91bg-like, mag. 18.5). Group and cluster According to A. M. Garcia, NGC 4622 is a member of the NGC 4709 group, which consists of at least 42 galaxies including NGC 4616, NGC 4622B (also called PGC 42852), NGC 4679 and NGC 4709. The NGC 4709 group is part of the Centaurus Cluster. See also Messier 64 – a spiral galaxy with its gas disk orbiting opposite the disk of stars References External links Hubble Heritage site: Pictures and description SKY-MAP.ORG : NGC 4622 Ring galaxies Unbarred spiral galaxies 4622 42701 Centaurus Centaurus Cluster
NGC 4622
Astronomy
757
1,661,177
https://en.wikipedia.org/wiki/Accretion%20%28astrophysics%29
In astrophysics, accretion is the accumulation of particles into a massive object by gravitationally attracting more matter, typically gaseous matter, into an accretion disk. Most astronomical objects, such as galaxies, stars, and planets, are formed by accretion processes. Overview The accretion model that Earth and the other terrestrial planets formed from meteoric material was proposed in 1944 by Otto Schmidt, followed by the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978, Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these models proved completely successful, and many of the proposed theories were descriptive. The 1944 accretion model by Otto Schmidt was further developed in a quantitative way in 1969 by Viktor Safronov. He calculated, in detail, the different stages of terrestrial planet formation. Since then, the model has been further developed using intensive numerical simulations to study planetesimal accumulation. It is now accepted that stars form by the gravitational collapse of interstellar gas. Prior to collapse, this gas is mostly in the form of molecular clouds, such as the Orion Nebula. As the cloud collapses, losing potential energy, it heats up, gaining kinetic energy, and the conservation of angular momentum ensures that the cloud forms a flattened disk—the accretion disk. Accretion of galaxies A few hundred thousand years after the Big Bang, the Universe cooled to the point where atoms could form. As the Universe continued to expand and cool, the atoms lost enough kinetic energy, and dark matter coalesced sufficiently, to form protogalaxies. As further accretion occurred, galaxies formed. Indirect evidence is widespread. Galaxies grow through mergers and smooth gas accretion. Accretion also occurs inside galaxies, forming stars. Accretion of stars Stars are thought to form inside giant clouds of cold molecular hydrogen—giant molecular clouds of roughly and in diameter. Over millions of years, giant molecular clouds are prone to collapse and fragmentation. These fragments then form small, dense cores, which in turn collapse into stars. The cores range in mass from a fraction to several times that of the Sun and are called protostellar (protosolar) nebulae. They possess diameters of and a particle number density of roughly . Compare it with the particle number density of the air at the sea level—. The initial collapse of a solar-mass protostellar nebula takes around 100,000 years. Every nebula begins with a certain amount of angular momentum. Gas in the central part of the nebula, with relatively low angular momentum, undergoes fast compression and forms a hot hydrostatic (non-contracting) core containing a small fraction of the mass of the original nebula. This core forms the seed of what will become a star. As the collapse continues, conservation of angular momentum dictates that the rotation of the infalling envelope accelerates, which eventually forms a disk. As the infall of material from the disk continues, the envelope eventually becomes thin and transparent and the young stellar object (YSO) becomes observable, initially in far-infrared light and later in the visible. Around this time the protostar begins to fuse deuterium. If the protostar is sufficiently massive (above ), hydrogen fusion follows. Otherwise, if its mass is too low, the object becomes a brown dwarf. 
This birth of a new star occurs approximately 100,000 years after the collapse begins. Objects at this stage are known as Class I protostars, which are also called young T Tauri stars, evolved protostars, or young stellar objects. By this time, the forming star has already accreted much of its mass; the total mass of the disk and remaining envelope does not exceed 10–20% of the mass of the central YSO. At the next stage, the envelope completely disappears, having been gathered up by the disk, and the protostar becomes a classical T Tauri star. Classical T Tauri stars have accretion disks and continue to accrete hot gas, which manifests itself as strong emission lines in their spectra; weakly lined T Tauri stars, into which they evolve after about 1 million years, do not possess accretion disks. The mass of the disk around a classical T Tauri star is about 1–3% of the stellar mass, and it is accreted at a rate of 10−7 to per year. A pair of bipolar jets is usually present as well. The accretion explains all the peculiar properties of classical T Tauri stars: strong flux in the emission lines (up to 100% of the intrinsic luminosity of the star), magnetic activity, photometric variability and jets. The emission lines actually form as the accreted gas hits the "surface" of the star, which happens around its magnetic poles. The jets are byproducts of accretion: they carry away excess angular momentum. The classical T Tauri stage lasts about 10 million years (there are only a few examples of so-called Peter Pan disks, where accretion continues to persist for much longer periods, sometimes lasting for more than 40 million years). The disk eventually disappears due to accretion onto the central star, planet formation, ejection by jets, and photoevaporation by ultraviolet radiation from the central star and nearby stars. As a result, the young star becomes a weakly lined T Tauri star, which, over hundreds of millions of years, evolves into an ordinary Sun-like star, depending on its initial mass. Accretion of planets Self-accretion of cosmic dust accelerates the growth of the particles into boulder-sized planetesimals. The more massive planetesimals accrete some smaller ones, while others shatter in collisions. Accretion disks are common around smaller stars, stellar remnants in a close binary, or black holes surrounded by material (such as those at the centers of galaxies). Some dynamics in the disk, such as dynamical friction, are necessary to allow orbiting gas to lose angular momentum and fall onto the central massive object. Occasionally, this can result in stellar surface fusion (see Bondi accretion). In the formation of terrestrial planets or planetary cores, several stages can be considered. First, when gas and dust grains collide, they agglomerate by microphysical processes like van der Waals forces and electromagnetic forces, forming micrometer-sized particles. During this stage, accumulation mechanisms are largely non-gravitational in nature. However, planetesimal formation in the centimeter-to-meter range is not well understood, and no convincing explanation is offered as to why such grains would accumulate rather than simply rebound. 
In particular, it is still not clear how these objects grow to become sized planetesimals; this problem is known as the "meter size barrier": As dust particles grow by coagulation, they acquire increasingly large relative velocities with respect to other particles in their vicinity, as well as a systematic inward drift velocity, that leads to destructive collisions, and thereby limit the growth of the aggregates to some maximum size. Ward (1996) suggests that when slow moving grains collide, the very low, yet non-zero, gravity of colliding grains impedes their escape. It is also thought that grain fragmentation plays an important role replenishing small grains and keeping the disk thick, but also in maintaining a relatively high abundance of solids of all sizes. A number of mechanisms have been proposed for crossing the 'meter-sized' barrier. Local concentrations of pebbles may form, which then gravitationally collapse into planetesimals the size of large asteroids. These concentrations can occur passively due to the structure of the gas disk, for example, between eddies, at pressure bumps, at the edge of a gap created by a giant planet, or at the boundaries of turbulent regions of the disk. Or, the particles may take an active role in their concentration via a feedback mechanism referred to as a streaming instability. In a streaming instability the interaction between the solids and the gas in the protoplanetary disk results in the growth of local concentrations, as new particles accumulate in the wake of small concentrations, causing them to grow into massive filaments. Alternatively, if the grains that form due to the agglomeration of dust are highly porous their growth may continue until they become large enough to collapse due to their own gravity. The low density of these objects allows them to remain strongly coupled with the gas, thereby avoiding high velocity collisions which could result in their erosion or fragmentation. Grains eventually stick together to form mountain-size (or larger) bodies called planetesimals. Collisions and gravitational interactions between planetesimals combine to produce Moon-size planetary embryos (protoplanets) over roughly 0.1–1 million years. Finally, the planetary embryos collide to form planets over 10–100 million years. The planetesimals are massive enough that mutual gravitational interactions are significant enough to be taken into account when computing their evolution. Growth is aided by orbital decay of smaller bodies due to gas drag, which prevents them from being stranded between orbits of the embryos. Further collisions and accumulation lead to terrestrial planets or the core of giant planets. If the planetesimals formed via the gravitational collapse of local concentrations of pebbles, their growth into planetary embryos and the cores of giant planets is dominated by the further accretions of pebbles. Pebble accretion is aided by the gas drag felt by objects as they accelerate toward a massive body. Gas drag slows the pebbles below the escape velocity of the massive body causing them to spiral toward and to be accreted by it. Pebble accretion may accelerate the formation of planets by a factor of 1000 compared to the accretion of planetesimals, allowing giant planets to form before the dissipation of the gas disk. However, core growth via pebble accretion appears incompatible with the final masses and compositions of Uranus and Neptune. 
Direct calculations indicate that, in a typical protoplanetary disk, the formation time of a giant planet via pebble accretion is comparable to the formation times resulting from planetesimal accretion. The formation of terrestrial planets differs from that of giant gas planets, also called Jovian planets. The particles that make up the terrestrial planets are made from metal and rock that condensed in the inner Solar System. However, Jovian planets began as large, icy planetesimals, which then captured hydrogen and helium gas from the solar nebula. Differentiation between these two classes of planetesimals arise due to the frost line of the solar nebula. Accretion of asteroids Meteorites contain a record of accretion and impacts during all stages of asteroid origin and evolution; however, the mechanism of asteroid accretion and growth is not well understood. Evidence suggests the main growth of asteroids can result from gas-assisted accretion of chondrules, which are millimeter-sized spherules that form as molten (or partially molten) droplets in space before being accreted to their parent asteroids. In the inner Solar System, chondrules appear to have been crucial for initiating accretion. The tiny mass of asteroids may be partly due to inefficient chondrule formation beyond 2 AU, or less-efficient delivery of chondrules from near the protostar. Also, impacts controlled the formation and destruction of asteroids, and are thought to be a major factor in their geological evolution. Chondrules, metal grains, and other components likely formed in the solar nebula. These accreted together to form parent asteroids. Some of these bodies subsequently melted, forming metallic cores and olivine-rich mantles; others were aqueously altered. After the asteroids had cooled, they were eroded by impacts for 4.5 billion years, or disrupted. For accretion to occur, impact velocities must be less than about twice the escape velocity, which is about for a radius asteroid. Simple models for accretion in the asteroid belt generally assume micrometer-sized dust grains sticking together and settling to the midplane of the nebula to form a dense layer of dust, which, because of gravitational forces, was converted into a disk of kilometer-sized planetesimals. But, several arguments suggest that asteroids may not have accreted this way. Accretion of comets Comets, or their precursors, formed in the outer Solar System, possibly millions of years before planet formation. How and when comets formed is debated, with distinct implications for Solar System formation, dynamics, and geology. Three-dimensional computer simulations indicate the major structural features observed on cometary nuclei can be explained by pairwise low velocity accretion of weak cometesimals. The currently favored formation mechanism is that of the nebular hypothesis, which states that comets are probably a remnant of the original planetesimal "building blocks" from which the planets grew. Astronomers think that comets originate in both the Oort cloud and the scattered disk. The scattered disk was created when Neptune migrated outward into the proto-Kuiper belt, which at the time was much closer to the Sun, and left in its wake a population of dynamically stable objects that could never be affected by its orbit (the Kuiper belt proper), and a population whose perihelia are close enough that Neptune can still disturb them as it travels around the Sun (the scattered disk). 
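The escape-velocity criterion for asteroid accretion mentioned above can be made concrete with a short calculation. In the Python sketch below the radius and density are arbitrary illustrative values, not figures taken from any particular study.

from math import pi, sqrt

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m, density_kg_m3):
    # v = sqrt(2GM/R) for a homogeneous sphere of the given radius and density.
    mass = (4.0 / 3.0) * pi * radius_m ** 3 * density_kg_m3
    return sqrt(2.0 * G * mass / radius_m)

# Illustrative asteroid: 100 km radius, density 3000 kg/m^3 (assumed values).
v_esc = escape_velocity(100e3, 3000.0)
print(f"escape velocity ~ {v_esc:.0f} m/s")                      # roughly 130 m/s
print(f"accretion favoured below ~ {2 * v_esc:.0f} m/s impact speed")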
Because the scattered disk is dynamically active and the Kuiper belt relatively dynamically stable, the scattered disk is now seen as the most likely point of origin for periodic comets. The classic Oort cloud theory states that the Oort cloud, a sphere measuring about in radius, formed at the same time as the solar nebula and occasionally releases comets into the inner Solar System as a giant planet or star passes nearby and causes gravitational disruptions. Examples of such comet clouds may already have been seen in the Helix Nebula. The Rosetta mission to comet 67P/Churyumov–Gerasimenko determined in 2015 that when the Sun's heat penetrates the surface, it triggers evaporation (sublimation) of buried ice. While some of the resulting water vapour may escape from the nucleus, 80% of it recondenses in layers beneath the surface. This observation implies that the thin ice-rich layers exposed close to the surface may be a consequence of cometary activity and evolution, and that global layering does not necessarily occur early in the comet's formation history. While most scientists had thought that all the evidence indicated that the structure of comet nuclei is that of processed rubble piles of smaller ice planetesimals of a previous generation, the Rosetta mission confirmed the idea that comets are "rubble piles" of disparate material. Comets appear to have formed as ~100-km bodies that were then overwhelmingly ground down and re-accreted into their present states. See also Quasi-star References Concepts in astrophysics Celestial mechanics Solar System dynamic theories
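The escape-velocity threshold for asteroid accretion quoted above can be made concrete with a small numerical sketch. The Python snippet below is an illustrative calculation rather than anything taken from the cited studies: it computes the escape velocity of a homogeneous spherical body from its radius and an assumed bulk density of 2 g/cm3, and the corresponding maximum impact speed of roughly twice the escape velocity below which accretion rather than disruption is expected. The density and the example radii are assumptions chosen only for illustration.

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m, density_kg_m3):
    """Escape velocity of a homogeneous sphere: v_esc = sqrt(2*G*M/R)."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return math.sqrt(2.0 * G * mass / radius_m)

DENSITY = 2000.0  # assumed rocky bulk density, kg/m^3

for radius_km in (1.0, 10.0, 100.0):
    v_esc = escape_velocity(radius_km * 1e3, DENSITY)
    # Accretion is expected only if impact speeds stay below ~2 * v_esc.
    print(f"R = {radius_km:6.1f} km: v_esc ~ {v_esc:6.1f} m/s, "
          f"accretion threshold ~ {2.0 * v_esc:6.1f} m/s")

For these assumed values the threshold ranges from roughly 2 m/s for kilometre-sized bodies to a couple of hundred metres per second for a 100 km body, which is why relative velocities between planetesimals must be strongly damped for growth to proceed.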
Accretion (astrophysics)
Physics
3,087
43,341,889
https://en.wikipedia.org/wiki/Moda%20Domani%20Institute
Established in 2014 in Paris as a subsidiary of ISG Business School, Moda Domani Institute was one of the few business schools in France specializing in luxury, fashion and design. The business school was a member of the IONIS Education Group, the largest private group in France in terms of student population and endowment. In the UK, the school had a double-degree partnership with Liverpool John Moores University. The school closed in September 2020, replaced by ISG Luxury Management. References External links Moda Domani Institute website Business schools in France Educational institutions established in 2014 Education in Paris 2014 establishments in France Design schools
Moda Domani Institute
Engineering
125
58,306,941
https://en.wikipedia.org/wiki/Technology%20readiness%20level
Technology readiness levels (TRLs) are a method for estimating the maturity of technologies during the acquisition phase of a program. TRLs enable consistent and uniform discussions of technical maturity across different types of technology. TRL is determined during a technology readiness assessment (TRA) that examines program concepts, technology requirements, and demonstrated technology capabilities. TRLs are based on a scale from 1 to 9, with 9 being the most mature technology. TRL was developed at NASA during the 1970s. The US Department of Defense has used the scale for procurement since the early 2000s. By 2008 the scale was also in use at the European Space Agency (ESA). The European Commission advised EU-funded research and innovation projects to adopt the scale in 2010. TRLs were consequently used in 2014 in the EU Horizon 2020 program. In 2013, the TRL scale was further canonized by the International Organization for Standardization (ISO) with the publication of the ISO 16290:2013 standard. A comprehensive approach and discussion of TRLs has been published by the European Association of Research and Technology Organisations (EARTO). Extensive criticism of the adoption of the TRL scale by the European Union was published in The Innovation Journal, stating that the "concreteness and sophistication of the TRL scale gradually diminished as its usage spread outside its original context (space programs)". Definitions Assessment tools A Technology Readiness Level Calculator was developed by the United States Air Force. This tool is a standard set of questions implemented in Microsoft Excel that produces a graphical display of the TRLs achieved. This tool is intended to provide a snapshot of technology maturity at a given point in time. The Defense Acquisition University (DAU) Decision Point (DP) Tool, originally named the Technology Program Management Model, was developed by the United States Army and later adopted by the DAU. The DP/TPMM is a TRL-gated high-fidelity activity model that provides a flexible management tool to assist Technology Managers in planning, managing, and assessing their technologies for successful technology transition. The model provides a core set of activities including systems engineering and program management tasks that are tailored to the technology development and management goals. This approach is comprehensive, yet it consolidates the complex activities that are relevant to the development and transition of a specific technology program into one integrated model. Uses The primary purpose of using technology readiness levels is to help management in making decisions concerning the development and transitioning of technology. It is one of several tools that are needed to manage the progress of research and development activity within an organization.
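As a rough illustration of the questionnaire-style assessment tools described above, the following Python sketch assigns the highest level for which the evidence criteria of that level and of every lower level are met. The nine short level descriptions follow the commonly cited NASA/EU wording, and the gating logic and the example evidence dictionary are simplifications invented for illustration; this is not the Air Force TRL Calculator or the DAU tool itself.

# Hypothetical, simplified TRL assessment: the achieved TRL is the highest
# level whose evidence criteria, and those of all lower levels, are satisfied.
TRL_DESCRIPTIONS = {
    1: "Basic principles observed",
    2: "Technology concept formulated",
    3: "Experimental proof of concept",
    4: "Technology validated in lab",
    5: "Technology validated in relevant environment",
    6: "Technology demonstrated in relevant environment",
    7: "System prototype demonstration in operational environment",
    8: "System complete and qualified",
    9: "Actual system proven in operational environment",
}

def assess_trl(evidence):
    """Return the highest TRL for which this and every lower level is met."""
    achieved = 0
    for level in range(1, 10):
        if evidence.get(level, False):
            achieved = level
        else:
            break  # a gap at any level caps the assessment
    return achieved

# Example: laboratory validation is complete, but there has been no
# demonstration in a relevant environment yet.
evidence = {1: True, 2: True, 3: True, 4: True, 5: False}
level = assess_trl(evidence)
print("Assessed TRL", level, "-", TRL_DESCRIPTIONS.get(level, "below TRL 1"))

The gated behaviour, in which a miss at any level caps the result, mirrors the way readiness is treated as cumulative, the same logic behind TRL-gated models such as the DP/TPMM mentioned above.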
Among the advantages of TRLs: Provides a common understanding of technology status Risk management Used to make decisions concerning technology funding Used to make decisions concerning transition of technology Some of the characteristics of TRLs that limit their utility: Readiness does not necessarily fit with appropriateness or technology maturity A mature product may possess a greater or lesser degree of readiness for use in a particular system context than one of lower maturity Numerous factors must be considered, including the relevance of the products' operational environment to the system at hand, as well as the product-system architectural mismatch TRL models tend to disregard negative and obsolescence factors. There have been suggestions made for incorporating such factors into assessments. For complex technologies that incorporate various development stages, a more detailed scheme called the Technology Readiness Pathway Matrix has been developed going from basic units to applications in society. This tool aims to show that a readiness level of a technology is based on a less linear process but on a more complex pathway through its application in society. History Technology readiness levels were conceived at NASA in 1974 and formally defined in 1989. The original definition included seven levels, but in the 1990s NASA adopted the nine-level scale that subsequently gained widespread acceptance. Original NASA TRL Definitions (1989) Level 1 – Basic Principles Observed and Reported Level 2 – Potential Application Validated Level 3 – Proof-of-Concept Demonstrated, Analytically and/or Experimentally Level 4 – Component and/or Breadboard Laboratory Validated Level 5 – Component and/or Breadboard Validated in Simulated or Realspace Environment Level 6 – System Adequacy Validated in Simulated Environment Level 7 – System Adequacy Validated in Space The TRL methodology was originated by Stan Sadin at NASA Headquarters in 1974. Ray Chase was then the JPL Propulsion Division representative on the Jupiter Orbiter design team. At the suggestion of Stan Sadin, Chase used this methodology to assess the technology readiness of the proposed JPL Jupiter Orbiter spacecraft design. Later Chase spent a year at NASA Headquarters helping Sadin institutionalize the TRL methodology. Chase joined ANSER in 1978, where he used the TRL methodology to evaluate the technology readiness of proposed Air Force development programs. He published several articles during the 1980s and 90s on reusable launch vehicles utilizing the TRL methodology. These documented an expanded version of the methodology that included design tools, test facilities, and manufacturing readiness on the Air Force Have Not program. The Have Not program manager, Greg Jenkins, and Ray Chase published the expanded version of the TRL methodology, which included design and manufacturing. Leon McKinney and Chase used the expanded version to assess the technology readiness of the ANSER team's Highly Reusable Space Transportation (HRST) concept. ANSER also created an adapted version of the TRL methodology for proposed Homeland Security Agency programs. The United States Air Force adopted the use of technology readiness levels in the 1990s. In 1995, John C. Mankins, NASA, wrote a paper that discussed NASA's use of TRL, extended the scale, and proposed expanded descriptions for each TRL. 
In 1999, the United States General Accounting Office produced an influential report that examined the differences in technology transition between the DOD and private industry. It concluded that the DOD takes greater risks and attempts to transition emerging technologies at lesser degrees of maturity than does private industry. The GAO concluded that use of immature technology increased overall program risk. The GAO recommended that the DOD make wider use of technology readiness levels as a means of assessing technology maturity prior to transition. In 2001, the Deputy Under Secretary of Defense for Science and Technology issued a memorandum that endorsed use of TRLs in new major programs. Guidance for assessing technology maturity was incorporated into the Defense Acquisition Guidebook. Subsequently, the DOD developed detailed guidance for using TRLs in the 2003 DOD Technology Readiness Assessment Deskbook. Because of their relevance to Habitation, 'Habitation Readiness Levels (HRL)' were formed by a group of NASA engineers (Jan Connolly, Kathy Daues, Robert Howard, and Larry Toups). They have been created to address habitability requirements and design aspects in correlation with already established and widely used standards by different agencies, including NASA TRLs. More recently, Dr. Ali Abbas, Professor of chemical engineering and Associate Dean of Research at the University of Sydney and Dr. Mobin Nomvar, a chemical engineer and commercialisation specialist, have developed Commercial Readiness Level (CRL), a nine-point scale to be synchronised with TRL as part of a critical innovation path to rapidly assess and refine innovation projects to ensure market adoption and avoid failure. In the European Union The European Space Agency adopted the TRL scale in the mid-2000s. Its handbook closely follows the NASA definition of TRLs. In 2022, the ESA TRL Calculator was released to the public. The universal usage of TRL in EU policy was proposed in the final report of the first High Level Expert Group on Key Enabling Technologies, and it was implemented in the subsequent EU framework program, called H2020, running from 2013 to 2020. This means not only space and weapons programs, but everything from nanotechnology to informatics and communication technology. See also References Online External links Technology Readiness Levels (TRL) NASA Technology Readiness Levels Introduction NASA archive via Wayback Machine DNV Recommended_Practices (Look for DNV-RP-A203) UK MoD Acquisition Operating Framework guide to TRL (requires registration) Technology assessment Technology transfer Management cybernetics
Technology readiness level
Technology
1,628
135,619
https://en.wikipedia.org/wiki/List%20of%20compounds
Compounds are organized into the following lists: compounds without a C–H bond See also Exotic molecule – a compound containing one or more exotic atoms External links Relevant links for chemical compounds are: Chemical Abstracts Service, a division of the American Chemical Society Chemical Abstracts Service (CAS) – substance databases CAS Common Chemistry ChemSpider PubChem Chemistry-related lists
List of compounds
Physics,Chemistry
74
29,888,101
https://en.wikipedia.org/wiki/Deployable%20structure
A deployable structure is a structure that can change shape so as to significantly change its size. Examples of deployable structures are umbrellas, some tensegrity structures, bistable structures, some Origami shapes and scissor-like structures. Deployable structures are also used on spacecraft for deploying solar panels and solar sails. Space-based deployable structures can be categorized into three primary classes: the first is the articulated structure class wherein rigid members contain sliding contact joints or are folded at hinge points and pivot to deploy, often locking into place. The second class consists of on-orbit assembly where a device is fabricated and/or mechanically joined in space to form the structure. The final class is high strain structures (often composed of High strain composites) wherein the device is dramatically flexed from one configuration to another during deployment. Gallery See also Engineering mechanics Four-bar linkage Kinematics Linkage (mechanical) Machine Outline of machines Overconstrained mechanism Parallel motion Slider-crank linkage Compliant mechanism References External links University of Cambridge Deployable structures department publications Linkages (mechanical) Structural engineering
Deployable structure
Engineering
227
67,869
https://en.wikipedia.org/wiki/Pontiac%20fever
Pontiac fever is an acute, nonfatal respiratory disease caused by various species of Gram-negative bacteria in the genus Legionella. It causes a mild upper respiratory infection that resembles acute influenza. Pontiac fever resolves spontaneously and often goes undiagnosed. Both Pontiac fever and the more severe Legionnaires' disease may be caused by the same bacterium, but Pontiac fever does not include pneumonia. Signs and symptoms Cause Species of Legionella known to cause Pontiac fever include Legionella pneumophila, Legionella longbeachae, Legionella feeleii, Legionella micdadei, and Legionella anisa. Sources of the causative agents are aquatic systems and potting soil. The first outbreak caused by inhalation of aerosolized potting soil was discovered in New Zealand in January 2007. A total of 10 workers at a nursery came down with Pontiac fever. It was the first identification of L. longbeachae. Pontiac fever does not spread from person to person. It is acquired through aerosolization of water droplets and/or potting soil containing Legionella bacteria. Diagnosis Epidemiology Pontiac fever is known to have a short incubation period of 1 to 3 days. No fatalities have been reported and cases resolve spontaneously without treatment. It is often not reported. Age, gender, and smoking do not seem to be risk factors. Pontiac fever seems to affect young people, with reported median ages of 29 to 32. The pathogenesis of Pontiac fever is poorly understood. History Pontiac fever was named after the city of Pontiac, Michigan, where the first case was recognized. In 1968, several workers at the county's department of health came down with a fever and mild flu symptoms, but not pneumonia. After the 1976 Legionnaires' outbreak in Philadelphia, the Michigan health department re-examined blood samples and discovered the workers had been infected with the newly identified Legionella pneumophila. An outbreak caused by Legionella micdadei in early 1988 in the UK became known as Lochgoilhead fever. Since that time, other species of Legionella that cause Pontiac fever have been identified, most notably in New Zealand in 2007, where Legionella longbeachae was discovered. The New Zealand outbreak also marked the first time Pontiac fever had been traced to potting soil. References External links American Legion Building biology Gram-negative bacteria Industrial hygiene Legionellosis Pathogenic bacteria Pontiac, Michigan Legionellales
Pontiac fever
Engineering
496
51,794,246
https://en.wikipedia.org/wiki/Sandra%20Pizzarello
Sandra Pizzarello (24 April 1933 – 24 October 2021) was an Italian biochemist known for her co-discovery of amino acid enantiomeric excess in carbonaceous chondrite meteorites. Her research interests concerned the characterization of meteoritic organic compounds in elucidating the evolution of planetary homochirality. Pizzarello was a project collaborator and co-investigator for the NASA Astrobiology Institute (NAI), the president of the International Society for the Study of the Origin of Life (2014-2017), and an emerita professor at Arizona State University (ASU). Early life and education Sandra Pizzarello was born in Venice, Italy, on 24 April 1933. In 1955, she graduated summa cum laude from the University of Padua, earning her Doctor of Biological Sciences degree under her adviser Professor Roncato. Pizzarello went on to work as a research associate developing tranquilizers for Farmitalia Research Laboratories in the Department of Neuropharmacology. Over the course of several years, Pizzarello transitioned from research to raising a family. Following a career opportunity for her husband, an aeronautical engineer and computer scientist, she moved her family to Phoenix, Arizona in 1970. Once Pizzarello's youngest of four children finished primary school, her focus returned to her career after a decade away from scientific research. She audited a graduate biochemistry seminar course at ASU, where she met Professor John Read Cronin, future co-discoverer of amino acid enantiomeric excess in meteorites. Due to her outstanding performance in the course, she was offered a job working with Cronin at the university as a research professor, analyzing the recently recovered Murchison meteorite. Sandra Pizzarello died on 24 October 2021, at the age of 88. Research Sandra Pizzarello's research over the last forty years involved the analysis of organic compounds in several carbonaceous chondrites, particularly molecular, chiral, and isotopic characterization of amino acids. Because the formation of these organic-rich meteorites pre-dates the origin of life, they had been under investigation as potential sites of primal organic compounds which could shed light on abiogenesis, specifically the origin of biological homochirality. Such studies, however, had been inconclusive until 1997 when Cronin and Pizzarello detected 7-9% L-enantiomeric excesses of three abiological amino acids while analyzing the Murchison meteorite. Given Earth's history of meteoric impacts and the observation that meteors contain an excess of the biologically relevant L-stereoisomer of certain amino acids, Pizzarello studied the effect of meteoritic amino acids in enantiomeric excess on the formation of other biological molecules. In one study, Pizzarello found that nonracemic solutions of abiological isovaline and proteinogenic alanine can direct the condensation of glycolaldehyde to produce nonracemic solutions of threose and erythrose via an aldol reaction, concluding that amino acids can act as asymmetric catalysts in carbohydrate synthesis. These findings support the origin of life hypothesis that homochirality originated prior to life and from extraterrestrial origins. However, Pizzarello's theoretical inquiries into cosmochemical evolution remain debated based on suspect analytical evidence of meteoritic enantiomeric excesses.
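The enantiomeric excess figures quoted above (7-9% for certain Murchison amino acids) follow from a simple definition, ee = (L - D)/(L + D). The short Python sketch below is only a minimal illustration of that arithmetic, not a description of Cronin and Pizzarello's analytical procedure, and the relative amounts are invented to reproduce the quoted range.

def enantiomeric_excess(l_amount, d_amount):
    """Enantiomeric excess as a percentage: (L - D) / (L + D) * 100."""
    total = l_amount + d_amount
    if total == 0:
        raise ValueError("no material measured")
    return (l_amount - d_amount) / total * 100.0

# Illustrative relative amounts of the two enantiomers of one amino acid.
# A 7-9% ee corresponds to roughly a 53.5:46.5 to 54.5:45.5 L:D split.
for l, d in [(53.5, 46.5), (54.5, 45.5)]:
    print(f"L={l}, D={d} -> ee = {enantiomeric_excess(l, d):.1f}% (L excess)")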
External links https://nai.nasa.gov/directory/pizzarello-sandra/ NASA.gov https://webapp4.asu.edu/directory/person/274781 References 1933 births 2021 deaths Biochemists Arizona State University faculty University of Padua alumni Scientists from Venice Italian emigrants to the United States
Sandra Pizzarello
Chemistry,Biology
774
2,225,410
https://en.wikipedia.org/wiki/Lambda%20Leonis
Lambda Leonis (λ Leonis, abbreviated Lam Leo, λ Leo), formally named Alterf, is a star in the constellation of Leo. The star is bright enough to be seen with the naked eye, having an apparent visual magnitude of 4.32. Based upon an annual parallax shift of 0.00991 arcseconds, it is located about 329 light-years from the Sun. At that distance, the visual magnitude of the star is reduced by an interstellar absorption factor of 0.06 because of extinction. Nomenclature λ Leonis (Latinised to Lambda Leonis) is the star's Bayer designation. It bore the traditional name Alterf, from the Arabic الطرف aṭ-ṭarf "the view (of the lion)". In 2016, the International Astronomical Union (IAU) organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Alterf for this star on February 1, 2017, and it is now included in the List of IAU-approved Star Names. This star, along with Xi Cancri, formed the Persian Nahn, "the Nose", and the Coptic Piautos, "the Eye", both lunar asterisms. Properties This is a K-type giant star with a stellar classification of K4.5 III. It is a suspected variable star with a reported magnitude range of 4.28−4.34. Lambda Leonis is 29% more massive than the Sun and is 3.6 billion years old. The interferometry-measured angular diameter of this star, after correcting for limb darkening, is , which, at its estimated distance, equates to a physical radius of nearly 45 times the radius of the Sun. It shines with around 540 times the luminosity of the Sun, from an outer atmosphere that has an effective temperature of 4,150 K. References K-type giants Leo (constellation) Alterf Leonis, Lambda Leonis, 04 046750 Suspected variables 082308 3773 Durchmusterung objects
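The distance and luminosity figures quoted above can be checked with a short back-of-the-envelope calculation from the parallax and the Stefan–Boltzmann law. The Python sketch below is an illustrative consistency check rather than the method of the cited sources; the solar effective temperature of 5772 K is an assumed reference value.

# Values quoted in the article.
parallax_arcsec = 0.00991   # annual parallax shift
radius_solar = 45.0         # ~45 solar radii
t_eff = 4150.0              # effective temperature, K
T_SUN = 5772.0              # assumed solar effective temperature, K

# Distance: d [pc] = 1 / parallax [arcsec]; 1 pc = 3.2616 light-years.
distance_pc = 1.0 / parallax_arcsec
distance_ly = distance_pc * 3.2616
print(f"Distance ~ {distance_pc:.0f} pc ~ {distance_ly:.0f} light-years")

# Luminosity from the Stefan-Boltzmann law: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
luminosity_solar = radius_solar ** 2 * (t_eff / T_SUN) ** 4
print(f"Luminosity ~ {luminosity_solar:.0f} times solar")

Both results, about 329 light-years and roughly 540 solar luminosities, agree with the values given above.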
Lambda Leonis
Astronomy
442
77,148,171
https://en.wikipedia.org/wiki/Lactarius%20aurantiacus
Lactarius aurantiacus is a species of mushroom in the family Russulaceae and is commonly referred to as the orange milkcap. The common English name "orange milkcap" can also refer to other similar species of fungi, such as Lactarius subflammeus. Description L. aurantiacus is a mycorrhizal mushroom that varies in colour from a vibrant orange to a light orangish brown. Its cap is convex with a slightly depressed centre and ranges from 1 to 5 centimetres in diameter. Its surface is described as smooth and glossy. The mushroom grows from 2.5 cm to 6.5 cm tall. Additionally, like all other species of milkcaps, L. aurantiacus produces a milky latex when bruised or cut. The mushroom's gills are spaced apart slightly and are a light pink or orange in colour. Its stem is approximately 5–12 mm in diameter and has no ring. Habitat and distribution This species of macrofungus is mainly found in Europe but has also been sighted in certain parts of Asia and North America. It grows either alone or in small groups. L. aurantiacus grows in acidic soils near pine, spruce, and sometimes birch trees in forests. It forms a mycorrhizal relationship with one or more of the surrounding trees. Similar species Lactarius fulvissimus Lactarius subflammeus (also commonly called "orange milkcap") References aurantiacus Fungi of Europe Fungus species
Lactarius aurantiacus
Biology
315
7,154,332
https://en.wikipedia.org/wiki/Skewness%20risk
Skewness risk in forecasting models utilized in the financial field is the risk that results when observations are not spread symmetrically around an average value, but instead have a skewed distribution. As a result, the mean and the median can be different. Skewness risk can arise in any quantitative model that assumes a symmetric distribution (such as the normal distribution) but is applied to skewed data. Ignoring skewness risk, by assuming that variables are symmetrically distributed when they are not, will cause any model to understate the risk of variables with high skewness. Skewness risk plays an important role in hypothesis testing. The analysis of variance, one of the most common tests used in hypothesis testing, assumes that the data is normally distributed. If the variables tested are not normally distributed because they are too skewed, the test cannot be used. Instead, nonparametric tests can be used, such as the Mann–Whitney test for unpaired situations or the sign test for paired situations. Skewness risk and kurtosis risk also have technical implications in the calculation of value at risk. If either is ignored, the Value at Risk calculations will be flawed. Benoît Mandelbrot, a French mathematician, extensively researched this issue. He argued that the extensive reliance on the normal distribution for much of the body of modern finance and investment theory is a serious flaw of any related models (including the Black–Scholes model and CAPM). He explained his views and alternative finance theory in a book: The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin and Reward. In options markets, the difference in implied volatility at different strike prices represents the market's view of skew, and is called volatility skew. (In pure Black–Scholes, implied volatility is constant with respect to strike and time to maturity.) Skewness for bonds Bonds have a skewed return. A bond will either pay the full amount on time (with a probability ranging from very likely to much less likely, depending on quality) or pay less than that. A normal bond never pays more than the "good" case. See also Skewness Kurtosis risk Taleb distribution Stochastic volatility References Mandelbrot, Benoit B., and Hudson, Richard L., The (mis)behaviour of markets : a fractal view of risk, ruin and reward, London : Profile, 2004, Johansson, A. (2005) "Pricing Skewness and Kurtosis Risk on the Swedish Stock Market", Masters Thesis, Department of Economics, Lund University, Sweden Premaratne, G., Bera, A. K. (2000). Modeling Asymmetry and Excess Kurtosis in Stock Return Data. Office of Research Working Paper Number 00-0123, University of Illinois Statistical deviation and dispersion Investment Risk analysis Mathematical finance Applied probability
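The central claim above, that assuming a symmetric distribution understates the risk of negatively skewed variables, can be illustrated with a small simulation. The Python sketch below uses NumPy and an arbitrary skewed return distribution chosen purely for illustration (it is not calibrated to any real asset), and compares a 1% Value-at-Risk estimate computed under a normal assumption with the empirical 1% quantile of the skewed sample.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative negatively skewed "returns": mostly small gains with an
# occasional large loss drawn from a lognormal tail.
n = 100_000
returns = 0.01 - rng.lognormal(mean=-4.5, sigma=1.2, size=n)

alpha = 0.01          # 1% Value at Risk
z_alpha = -2.326      # 1% quantile of the standard normal distribution

# VaR under a (mistaken) symmetric-normal assumption: mean + z * std.
var_normal = returns.mean() + z_alpha * returns.std()

# Empirical VaR taken directly from the skewed sample.
var_empirical = np.quantile(returns, alpha)

skew = ((returns - returns.mean()) ** 3).mean() / returns.std() ** 3
print(f"sample skewness           : {skew:.2f}")
print(f"1% VaR, normal assumption : {var_normal:.3f}")
print(f"1% VaR, empirical quantile: {var_empirical:.3f}")

With these illustrative numbers the empirical 1% loss quantile is noticeably worse (more negative) than the loss predicted under the normal assumption, which is exactly the understatement of risk described above.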
Skewness risk
Mathematics
597
40,729,744
https://en.wikipedia.org/wiki/ThinkPad%20Yoga
The ThinkPad Yoga is a 2-in-1 convertible business-oriented tablet from Lenovo unveiled in September 2013 at IFA in Berlin, Germany. It was released in the United States in November 2013. Design and performance The ThinkPad Yoga series laptops have a "backlit" keyboard that flattens when flipped into tablet mode. This is accomplished with a platform surrounding the keys which rises until level with the keyboard buttons, a locking mechanism that prevents key presses, and feet that pop out to prevent the keyboard from directly resting on flat surfaces. Lenovo implemented this design in response to complaints about its earlier IdeaPad Yoga 13 and 11 models being awkward to use in tablet mode. A reinforced hinge was required to implement this design. Other than its convertible form factor, the first ThinkPad Yoga is a rather standard ThinkPad device with a black magnesium-reinforced chassis, an island keyboard, a red TrackPoint, and a large buttonless touchpad (though the touchpad was upgraded to a mechanical three-button version in later generations of the Yoga line). Models Zero generation (2013) ThinkPad Yoga (S1 Yoga 12) The first ThinkPad Yoga has a 12.5-inch IPS touchscreen with 1080p resolution. The screen was designed for use with an optional pen-style digitizer. It is powered by Haswell processors from Intel. Buyers are able to choose standard 2.5" hard drives or SSDs, plus an additional M.2 SSD, but the battery is non-replaceable and the RAM is soldered. First generation (2014) 11e Yoga (Windows version) The Windows version has the same specs as the Chromebook, but comes with a 320GB hard drive for storage and also accepts SSDs. The memory can be upgraded up to 8GB. Unlike the Chromebook variant, the components of this version can be upgraded. The 11e fully supports the openSUSE flavor of the Linux operating system. 11e Yoga Chromebook The ThinkPad 11e is a Chromebook that has a matte black chassis with reinforced hinges and corners, a sturdy lid, and a rubber bumper protecting its display in order to help it survive accidental dropping, spills, and general rough handling. It uses a quad-core Intel Celeron CPU, has 4 GB of RAM which cannot be upgraded, an 11.6 inch screen, and 16GB of eMMC flash storage. Reviewers claim it is somewhat heavier than a typical Chromebook with a weight of 3.1 pounds. This is likely because of its ruggedized and reinforced chassis. It uses a typical ThinkPad-style keyboard with customized ChromeOS keys. It does not have a TrackPoint, only a touchpad. The screen is matte with an anti-glare coating and has a resolution of 1366×768 pixels. A 720p webcam is mounted above the screen. It has a media card reader, a USB 2.0 port, a USB 3.0 port, an HDMI 1.4 port, and a headphone jack. Connectivity is provided by 802.11ac Wi-Fi and Bluetooth 4.0. S3 Yoga 14 The Yoga 14 model reportedly "strikes the middle ground between bulky workstations and flexible hybrids." The laptop's metal hinge makes it sturdy, flexible and durable, but the machine has below-average battery life. Like other models, the display can bend a full 360 degrees and the keyboard can be folded in half to use as a stand. According to a review for Business News Daily, "The ThinkPad Yoga 14 is a balancing act of diverse features. Thankfully, Lenovo pulled them all together into a satisfying work machine. The notebook features a high-quality build and an excellent keyboard and trackpad — all must-have features for serious productivity. 
And extras like the TrackPoint pointing stick are great for legacy ThinkPad users who prefer those options." S5 Yoga 15 Second generation (2015) 11e Yoga (2nd Gen) Yoga 260 The Yoga 260 uses a lightweight carbon-fiber hybrid material on its lid and a magnesium-plastic blend on its lower portion. Lenovo claims the Yoga 260 has been subject to extensive testing of its ability to survive extreme temperatures, vibrations, altitudes, and shocks. Its keyboard is spill resistant. It includes a 12.5-inch display with a resolution of 1366×768 or 1920×1080. An active stylus, the ThinkPad Pen Pro, is included for drawing and text entry; it can be used with Lenovo's WRITEit hand-writing recognition application. A large fingerprint reader is included for logging into a user account. The design of the Yoga 260 is of the same generation as the ThinkPad X260, which features 6th generation Intel Core i processors, the same display resolution choices, and the same supported operating systems. Yoga 460 and P40 Yoga The Yoga 460 is a base model with only integrated graphics. The ThinkPad P40 Yoga, like other Yoga branded products, is a convertible device with "laptop, stand, tent, and tablet" modes. The P40 Yoga includes a touchscreen display with a resolution of 1920×1080 or 2560×1440, designed in cooperation with Wacom, using that company's Active ES technology which can sense 2,048 different pressure levels. The screen works with a stylus called the ThinkPad Pen Pro that has various pen tips designed to give varied forms of tactile feedback. The P40 uses Intel Core i7 CPUs, can accommodate up to 16 gigabytes of RAM, has SSDs up to 512 gigabytes in size, and uses an Nvidia Quadro M500M GPU. Third generation (2016) 11e Yoga (3rd Gen) 11e Yoga Chromebook (3rd Gen) X1 Yoga The ThinkPad X1 Yoga is a revamp of the ThinkPad X1 Carbon that includes the multi-mode flexibility of the Yoga line and a 14-inch display with optional OLED technology. The display has a resolution of 2560×1440 pixels. It weighs about . Fourth generation (2017) 11e Yoga (4th Gen) 11e Yoga Chromebook (4th Gen) Yoga 370 A 13.3 inch replacement for the Yoga 260, with 7th generation Intel Core processors. X1 Yoga (2nd Gen) Changes from the previous X1 Yoga include the use of 7th generation Intel Core i ("Kaby Lake") processors, the addition of Thunderbolt 3 ports, a USB-C connector for the power adapter, and a "wave"-style keyboard with a matte finish. Fifth generation (2018) 11e Yoga (5th Gen) L380 Yoga More affordable version of the X380 Yoga, with replaceable RAM, but lacking Thunderbolt. X380 Yoga A smaller 13.3" derivative of the X1 Yoga. X1 Yoga (3rd Gen) The design is derived from the 6th generation ThinkPad X1 Carbon, with the ThinkShutter privacy camera included by default (except for models with an IR camera), 15W 8th generation Core i5/i7 quad-core processors and a built-in stylus. OLED screens are no longer an option. Sixth generation (2019) L390 Yoga More affordable version of the X390 Yoga with replaceable RAM, but lacking Thunderbolt. X390 Yoga A smaller 13.3" derivative of the X1 Yoga. X1 Yoga (4th Gen) The design is derived from the 7th generation ThinkPad X1 Carbon. This is notably the first ThinkPad with an aluminum chassis. It uses 15W 8th/10th generation Core i5/i7 quad-core processors and has a built-in stylus. Seventh generation (2020) 11e Yoga (6th Gen) L13 Yoga (1st Gen) Released on 27 August 2019, the more affordable version of the X13 Yoga, but lacking Thunderbolt. Successor of the L390, which breaks the naming scheme. It no longer has replaceable RAM. 
X13 Yoga (1st Gen) Released on 24 February 2020, the smaller 13.3" derivative of the X1 Yoga. Successor of the X390, which breaks the naming scheme. X1 Yoga (5th Gen) The design is derived from the 8th generation ThinkPad X1 Carbon, with 10th generation Core i5/i7 quad-core processors and a built-in stylus. Eighth generation (2021) Reviews Dan Ackerman of CNET wrote, "In our brief hands-on time with the ThinkPad Yoga, while it's made of tough, light magnesium alloy, it didn't feel as slick and coffee shop ready as the IdeaPad version (and it lacks the extremely high-res screen of the Yoga 2), but the hidden keyboard think is so fascinating, you'll find yourself folding the lid back and forth over and over again just to watch it in action." Brittany Hillen of Slashgear wrote, "The ThinkPad Yoga is a hybrid machine with a lot to offer users as both a laptop and as a tablet, though in slate mode it is thicker than what you'd get with a traditional tablet. There is nothing ill to speak of regarding the ThinkPad Yoga -- everything about it is solid, with the exception perhaps being a lower quality stylus than what an artist would need. The construction feels solid and durable in the hands, the keyboard is comfortable for typing in long duration stints, and the hardware is capable for a variety of tasks." James Kendrick of ZDNET wrote, "The ThinkPad Yoga is a great work laptop that can be pressed into tablet duty when desired. Its heavy-duty ThinkPad construction will stand up to the rigors of a road warrior. The battery life is reasonable and the beautiful screen works well in both laptop and tablet modes." The ThinkPad X1 Yoga (the first metal ThinkPad) has been given excellent reviews, with some review sites giving the new model a score of 4.5/5 stars. See also IdeaPad Yoga ThinkPad X series ThinkPad L series References Lenovo laptops 2-in-1 PCs Yoga Computer-related introductions in 2013 Chromebook
ThinkPad Yoga
Technology
2,055
23,789,332
https://en.wikipedia.org/wiki/2%2C3%2C7%2C8-Tetrachlorodibenzodioxin
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is a polychlorinated dibenzo-p-dioxin (sometimes shortened, though inaccurately, to simply 'dioxin') with the chemical formula C12H4Cl4O2. Pure TCDD is a colorless solid with no distinguishable odor at room temperature. It is usually formed as an unwanted product in burning processes of organic materials or as a side product in organic synthesis. TCDD is the most potent compound (congener) of its series (polychlorinated dibenzodioxins, known as PCDDs or simply dioxins) and became known as a contaminant in Agent Orange, an herbicide used in the Vietnam War. TCDD was released into the environment in the Seveso disaster. It is a persistent organic pollutant. Biological activity in humans and animals TCDD and dioxin-like compounds act via a specific receptor present in all cells: the aryl hydrocarbon (AH) receptor. This receptor is a transcription factor which is involved in the expression of genes; it has been shown that high doses of TCDD either increase or decrease the expression of several hundred genes in rats. Genes of enzymes activating the breakdown of foreign and often toxic compounds are classic examples of such genes (enzyme induction). TCDD increases the activity of enzymes that break down, for example, carcinogenic polycyclic hydrocarbons such as benzo(a)pyrene. These polycyclic hydrocarbons also activate the AH receptor, but less than TCDD and only temporarily. Even many natural compounds present in vegetables cause some activation of the AH receptor. This phenomenon can be viewed as adaptive and beneficial, because it protects the organism from toxic and carcinogenic substances. Excessive and persistent stimulation of the AH receptor, however, leads to a multitude of adverse effects. The physiological function of the AH receptor has been the subject of continuous research. One obvious function is to increase the activity of enzymes breaking down foreign chemicals or normal chemicals of the body as needed. There seem to be many other functions, however, related to the development of various organs and the immune system or other regulatory functions. The AH receptor is phylogenetically highly conserved, with a history of at least 600 million years, and is found in all vertebrates. Its ancient analogs are important regulatory proteins even in more primitive species. In fact, knock-out animals with no AH receptor are prone to illness and developmental problems. Taken together, this implies the necessity of a basal degree of AH receptor activation to achieve normal physiological function. Toxicity in humans In 2000, the Expert Group of the World Health Organization considered developmental toxicity as the most pertinent risk of dioxins to human beings. Because people are usually exposed simultaneously to several dioxin-like chemicals, a more detailed account is given at dioxins and dioxin-like compounds. Developmental effects In Vietnam and the United States, teratogenic effects or birth defects were observed in children of people who were exposed to Agent Orange or 2,4,5-T that contained TCDD as an impurity from the production process. However, there has been some uncertainty about the causal link between Agent Orange/dioxin exposure and these defects. In 2006, a meta-analysis indicated a large amount of heterogeneity between studies and emphasized a lack of consensus on the issue. Stillbirths, cleft palate, and neural tube defects, with spina bifida, were the most statistically significant defects. Later, some tooth defects and borderline neurodevelopmental effects were reported. 
After the Seveso accident, tooth development defects, a changed sex ratio and decreased sperm quality have been noted. Various developmental effects have been clearly shown after high mixed exposures to dioxins and dioxin-like compounds, the most dramatic in the Yusho and Yu-cheng catastrophes, in Japan and Taiwan, respectively. Cancer It is largely agreed that TCDD is not directly mutagenic or genotoxic. Its main action is cancer promotion; it promotes the carcinogenicity initiated by other compounds. Very high doses may, in addition, cause cancer indirectly; one of the proposed mechanisms is oxidative stress and the subsequent oxygen damage to DNA. There are other explanations such as endocrine disruption or altered signal transduction. The endocrine disrupting activities seem to be dependent on life stage, being anti-estrogenic when estrogen is present (or in high concentration) in the body, and estrogenic in the absence of estrogen. TCDD was classified by the International Agency for Research on Cancer (IARC) as a carcinogen for humans (group 1). In the occupational cohort studies available for the classification, the risk was weak and borderline detectable, even at very high exposures. Therefore, the classification was, in essence, based on animal experiments and mechanistic considerations. This was criticized as a deviation from IARC's 1997 classification rules. The main problem with the IARC classification is that it only assesses qualitative hazard, i.e. carcinogenicity at any dose, and not the quantitative risk at different doses. According to a 2006 Molecular Nutrition & Food Research article, there were debates on whether TCDD was carcinogenic only at high doses that also cause toxic tissue damage. A 2011 review concluded that, after 1997, further studies did not support an association between TCDD exposure and cancer risk. One of the problems is that in all occupational studies the subjects have been exposed to a large number of chemicals, not only TCDD. By 2011, it was reported that studies, including the updated Vietnam veteran studies from Operation Ranch Hand, had concluded that after 30 years the results did not provide evidence of disease. On the other hand, the latest studies on the Seveso population support TCDD carcinogenicity at high doses. In 2004, an article in the International Journal of Cancer provided some direct epidemiological evidence that TCDD or other dioxins do not cause soft-tissue sarcoma at low doses, although this cancer has been considered typical for dioxins. In fact, there was a trend toward decreasing cancer incidence. This is called a J-shaped dose-response: low doses decrease the risk and only higher doses increase it, according to a 2005 article in the journal Dose-Response. Safety recommendations The Joint FAO/WHO Expert Committee on Food Additives (JECFA) derived in 2001 a provisional tolerable monthly intake (PTMI) of 70 pg TEQ/kg body weight. The United States Environmental Protection Agency (EPA) established an oral reference dose (RfD) of 0.7 pg/kg b.w. per day for TCDD (see discussion on the differences). According to the Aspen Institute, in 2011: The general environmental limit in most countries is 1,000 ppt TEq in soils and 100 ppt in sediment. Most industrialized countries have dioxin concentrations in soils of less than 12 ppt. The U.S. 
Agency for Toxic Substance and Disease Registry has determined that levels higher than 1,000 ppt TEq in soil require intervention, including research, surveillance, health studies, community and physician education, and exposure investigation. The EPA is considering reducing these limits to 72 ppt TEq. This change would significantly increase the potential volume of contaminated soil requiring treatment. Animal toxicology By far most information on toxicity of dioxin-like chemicals is based on animal studies utilizing TCDD. Almost all organs are affected by high doses of TCDD. In short-term toxicity studies in animals, the typical effects are anorexia and wasting, and even after a huge dose animals die only 1 to 6 weeks after the TCDD administration. Seemingly similar species have varying sensitivities to acute effects: the lethal dose for a guinea pig is about 1 μg/kg, but for a hamster it is more than 1,000 μg/kg. A similar difference can be seen even between two different rat strains. Various hyperplastic (overgrowth) or atrophic (wasting away) responses are seen in different organs; thymus atrophy is very typical in several animal species. TCDD also affects the balance of several hormones. In some species, but not in all, severe liver toxicity is seen. Taking into account the low doses of dioxins in the present human population, only two types of toxic effects have been considered to cause a relevant risk to humans: developmental effects and cancer. Developmental effects Developmental effects occur at very low doses in animals. They include frank teratogenicity such as cleft palate and hydronephrosis. Development of some organs may be even more sensitive: very low doses perturb the development of sexual organs in rodents, and the development of teeth in rats. The latter is important in that tooth deformities were also seen after the Seveso accident and possibly after prolonged breast-feeding of babies in the 1970s and 1980s, when the dioxin concentrations in Europe were about ten times higher than at present. Cancer Cancers can be induced in animals at many sites. At sufficiently high doses, TCDD has caused cancer in all animals tested. The most sensitive is liver cancer in female rats, and this has long been a basis for risk assessment. The dose-response of TCDD in causing cancer does not seem to be linear, and there is a threshold below which it seems to cause no cancer. TCDD is not mutagenic or genotoxic; in other words, it is not able to initiate cancer, and the cancer risk is based on promotion of cancer initiated by other compounds or on indirect effects such as disturbing the defense mechanisms of the body, e.g. by preventing apoptosis, the programmed death of altered cells. Carcinogenicity is associated with tissue damage, and it is often viewed now as secondary to tissue damage. TCDD may under some conditions potentiate the carcinogenic effects of other compounds. An example is benzo(a)pyrene, which is metabolized in two steps, oxidation and conjugation. Oxidation produces epoxide carcinogens that are rapidly detoxified by conjugation, but some molecules may escape to the nucleus of the cell and bind to DNA, causing a mutation and resulting in cancer initiation. When TCDD increases the activity of oxidative enzymes more than conjugation enzymes, the epoxide intermediates may increase, increasing the possibility of cancer initiation. Thus, a beneficial activation of detoxifying enzymes may lead to deleterious side effects. 
Sources TCDD has never been produced commercially except as a pure chemical for scientific research. It is, however, formed as a synthesis side product when producing certain chlorophenols or chlorophenoxy acid herbicides. It may also be formed along with other polychlorinated dibenzodioxins and dibenzofurans in any burning of hydrocarbons where chlorine is present, especially if certain metal catalysts such as copper are also present. Usually a mixture of dioxin-like compounds is produced; a more thorough treatment is given under dioxins and dioxin-like compounds. The greatest production occurs from waste incineration, metal production, and fossil-fuel and wood combustion. Dioxin production can usually be reduced by increasing the combustion temperature. Total U.S. emissions of PCDD/Fs were reduced from ca. 14 kg TEq in 1987 to 1.4 kg TEq in 2000. History TCDD was first synthesized in the laboratory in 1957 by Wilhelm Sandermann, and he also discovered the effects of the compound. Cases of exposure There have been numerous incidents where people have been exposed to high doses of TCDD. In 1953, an accident occurred at BASF during the chlorination of diphenyl oxides, as a result of which several workers developed severe chloracne. Similar cases had occurred 6 years earlier in the USA and in 1952, 1954 and 1956 at the Boehringer Ingelheim company. In 1976, thousands of inhabitants of Seveso, Italy were exposed to TCDD after an accidental release of several kilograms of TCDD from a pressure tank. Many animals died, and high concentrations of TCDD, up to 56,000 pg/g of fat, were noted especially in children playing outside and eating local food. The acute effects were limited to about 200 cases of chloracne. Long-term effects seem to include a slight excess of multiple myeloma and myeloid leukaemia, as well as some developmental effects such as disturbed development of teeth and an excess of girls born to fathers who were exposed as children. Several other long-term effects have been suspected, but the evidence is not very strong. In Times Beach, Missouri, several hundred people were poisoned by extremely high concentrations of TCDD when Russell Martin Bliss sprayed TCDD-contaminated waste oil on dusty roads to keep down large dust clouds. Bliss himself obtained the waste oil from NEPACCO, a company that produced Agent Orange. No one was ever charged in relation to the incident, and the city of Times Beach was abandoned and disincorporated following an investigation by the CDC and EPA. This is regarded as the single largest contamination of a civilian area by TCDD in United States history. In Vienna, two women were poisoned at their workplace in 1997, and the measured concentrations in one of them were the highest ever measured in a human being, 144,000 pg/g of fat. This is about 100,000 times the concentrations in most people today and about 10,000 times the sum of all dioxin-like compounds in young people today. They survived but suffered from severe chloracne for several years. The poisoning likely happened in October 1997 but was not discovered until April 1998. At the institute where the women worked as secretaries, high concentrations of TCDD were found in one of the labs, suggesting that the compound had been produced there. The police investigation failed to find clear evidence of a crime, and no one was ever prosecuted. Aside from malaise and amenorrhea, there were few other symptoms or abnormal laboratory findings. 
In 2004, presidential candidate Viktor Yushchenko of Ukraine was poisoned with a large dose of TCDD. His blood TCDD concentration was measured at 108,000 pg/g of fat, the second highest ever measured. This concentration implies a dose exceeding 2 mg, or 25 μg/kg of body weight. He suffered from chloracne for many years, but after initial malaise, other symptoms or abnormal laboratory findings were few. An area of polluted land in Italy, known as the Triangle of Death, is contaminated with TCDD from years of illegal waste disposal by organized crime. See also Dioxins and dioxin-like compounds Toxic Equivalency References External links U.S. National Library of Medicine: Hazardous Substances Databank – 2,3,7,8-Tetrachlorodibenzodioxin Dioxin synopsis Dioxins CDC – NIOSH Pocket Guide to Chemical Hazards Chloroarenes Dibenzodioxins IARC Group 1 carcinogens Blood agents
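A rough unit check, using only figures already quoted in this article (the estimated 2 mg poisoning dose, the corresponding 25 μg per kg of body weight, and the JECFA provisional tolerable monthly intake of 70 pg TEQ/kg), shows how far such a poisoning exceeds the tolerable intake. The implied body weight of about 80 kg simply follows from dividing the two quoted dose figures, and since TCDD is the reference congener its mass and its TEQ are numerically equal, so the comparison is direct. The Python sketch below is only an arithmetic illustration.

# Figures quoted in the article.
dose_total_mg = 2.0        # estimated poisoning dose, mg
dose_per_kg_ug = 25.0      # the same dose per kg of body weight, ug/kg
ptmi_pg_per_kg = 70.0      # JECFA provisional tolerable monthly intake, pg TEQ/kg

# Body weight implied by the two quoted figures (2 mg = 2000 ug).
body_weight_kg = dose_total_mg * 1000.0 / dose_per_kg_ug
print(f"implied body weight ~ {body_weight_kg:.0f} kg")

# Convert the per-kg dose to picograms and compare with the monthly limit.
dose_per_kg_pg = dose_per_kg_ug * 1e6     # 1 ug = 1,000,000 pg
months_equivalent = dose_per_kg_pg / ptmi_pg_per_kg
print(f"dose per kg ~ {dose_per_kg_pg:.0f} pg/kg")
print(f"~ {months_equivalent:,.0f} times the tolerable monthly intake")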
2,3,7,8-Tetrachlorodibenzodioxin
Chemistry
3,133
56,416,473
https://en.wikipedia.org/wiki/Sulfamoyl%20fluoride
In organic chemistry, sulfamoyl fluoride is an organic compound having the chemical formula F−SO2−N(−R1)−R2. Its derivatives are called sulfamoyl fluorides. Examples of sulfamoyl fluorides include: Sulfamoyl fluorides are contrasted with the sulfonimidoyl fluorides with structure R1-S(O)(F)=N-R2. Production Sulfamoyl fluorides can be made by treating secondary amines with sulfuryl fluoride (SO2F2) or sulfuryl chloride fluoride (SO2ClF). Cyclic secondary amines work as well, provided they are not aromatic. Sulfamoyl fluorides can also be made from sulfamoyl chlorides, by reacting with a substance that can supply the fluoride ion, such as NaF, KF, HF, or SbF3. Sulfonamides can undergo a Hofmann rearrangement when treated with a difluoro-λ3-bromane to yield a singly substituted N-sulfamoyl fluoride. See also Fluorosulfonate Sulfonyl halide Sulfuryl fluoride References Functional groups Leaving groups
Sulfamoyl fluoride
Chemistry
269
41,663,808
https://en.wikipedia.org/wiki/Field%20Deployable%20Hydrolysis%20System
The Field Deployable Hydrolysis System (FDHS) is a transportable, high-throughput neutralization system developed by the U.S. Army for converting chemical warfare material into compounds not usable as weapons. Operation Neutralization is facilitated through chemical reactions involving reagents that are mixed and heated to increase destruction efficiency, which is rated at 99.9 percent. The transportable FDHS is a self-contained system that includes power generators and a laboratory. Operational inputs include consumable materials such as water, reagents and fuel. It is designed to be set up within 10 days and is equipped with redundant critical systems. An on-site crew of 15 trained personnel, including SME support, is needed for each shift of a possible 24-hour operational cycle. Development A 20-week design and development phase was funded by the Defense Threat Reduction Agency in February 2013. The effort to develop a functional prototype was led by subject-matter experts from the Edgewood Chemical Biological Center (ECBC) in partnership with the United States Army Chemical Materials Agency. An operational model was developed over the course of six months, with the participation of 50 ECBC employees. Deployment Two of these units were deployed on the for use in the destruction of Syria's chemical weapons. They are the "centerpiece" of the disarmament effort. The United Kingdom gave the United States £2.5 million of specialist equipment and training to enable the highest-priority chemicals to be processed more quickly. References Chemical weapons demilitarization Chemical warfare United States Army vehicles
Field Deployable Hydrolysis System
Chemistry
315
20,533,760
https://en.wikipedia.org/wiki/Steppe%20belt
A steppe belt is a contiguous phytogeographic region of predominantly grassland (steppe), which has common characteristics in soil, climate, vegetation and fauna. A forest-steppe belt is a region of forest steppe. The largest steppe (and forest-steppe) belt is the Eurasian steppe belt, often called the Great Steppe, which stretches from Central Europe via Ukraine, southern Russia, northern Central Asia and southern Siberia into Mongolia and China. The term "steppe belt" may also be applied to some grassland zones in the biogeographical zoning of mountains. References Grasslands Biogeography Belt regions
Steppe belt
Biology
119
2,921,675
https://en.wikipedia.org/wiki/Michael%20D.%20Morley
Michael Darwin Morley (September 29, 1930 – October 11, 2020) was an American mathematician. At his death in 2020, Morley was professor emeritus at Cornell University. His research was in mathematical logic and model theory, and he is best known for Morley's categoricity theorem, which he proved in his PhD thesis Categoricity in Power in 1962. Early life and education Morley was born in Youngstown, Ohio, on September 29, 1930. He obtained his BS in mathematics from Case Institute of Technology in 1951 and his PhD in mathematics from the University of Chicago in 1962. Morley's formal PhD advisor at the University of Chicago was Saunders Mac Lane, but he completed his thesis under the guidance of Robert Vaught at the University of California, Berkeley. His dissertation was titled Categoricity in Power. Career Morley was an assistant professor at the University of Wisconsin–Madison from 1963 to 1967. He joined the faculty at Cornell University in 1967 as an associate professor, was promoted to professor in 1970, and became a professor emeritus in 2003. He served as president of the Association for Symbolic Logic from 1986 to 1989. Morley received the 2003 Leroy P. Steele Prize for Seminal Contribution to Research from the American Mathematical Society for his 1965 paper "Categoricity in Power". This paper, his doctoral dissertation, introduced Morley rank and proved Morley's categoricity theorem. Personal life Morley died on October 11, 2020, in Sayre, Pennsylvania. Selected publications See also Morley's problem References External links Morley's home page 1930 births 2020 deaths Cornell University faculty University of Wisconsin–Madison faculty University of Chicago alumni University of California, Berkeley alumni Model theorists Writers from Youngstown, Ohio
Michael D. Morley
Mathematics
346
15,258,439
https://en.wikipedia.org/wiki/Sapper%20army
A Sapper Army () was a multi-brigade military construction engineer formation of the Engineer Troops (Soviet Union) of the Soviet Red Army during World War II. Formed to construct large-scale defensive works, sapper armies were used from late 1941 until mid-1942 when the Red Army opted to organize smaller and more flexible construction engineer formations. Although the organization of military construction engineers into an army-level echelon was unusual, the use of dedicated troops for military construction was common to many armies of World War II. History Reeling from the German invasion of 1941, the Soviets decided to organize large military construction engineer formations to construct defensive works on a massive scale. The Soviets hoped such works would strengthen Red Army defensive operations and buy enough time to rebuild their forces for a counter-offensive. Consequently, the high command ordered the formation of the first sapper armies on October 13, 1941. Originally, six sapper armies were formed, but by December 1941 this was expanded to ten sapper armies, numbered First through Tenth. The sapper armies were not only composed of military personnel; "women, old men, schoolchildren and teenagers under the draft age" were also mobilized to serve in the construction units. The sapper armies worked to construct defensive lines that were made up of battalion and company strong points in the Moscow, Stalingrad, North Caucasus, and Volga military districts. Sapper armies also trained troops for the Red Army's engineers and consequently suffered a steady loss of qualified personnel. Dissatisfied with the relative lack of flexibility of the sapper armies, the high command disbanded five of them in February 1942 and used the released personnel for the formation of new rifle (infantry) units. Confronted with the German summer offensive of 1942, the remaining sapper armies built defensive works around Moscow and Stalingrad, and in the Caucasus. On July 26, 1942, the high command directed the reorganization of the sapper armies, and by October 1942, the remaining five sapper armies had been converted into defensive construction directorates. The troops released by this measure were used to form new rifle and smaller engineer units. Historian David Glantz assessed the effectiveness of the sapper armies as having "... contributed significantly to the Red Army's victories at Leningrad, Moscow, and Stalingrad by preparing defensive lines, providing vital engineering support to the Red Army's operating fronts, and serving as a base for the formation of other more specialized engineer forces assigned to operating fronts." Organization Sapper armies were made up of two to four sapper brigades. A sapper brigade controlled 19 sapper battalions, each with three companies of four platoons. Sapper battalions had an authorized strength of 497 men, and included woodcutting units, road- and bridge-building units, units dedicated to the construction of defensive positions, and motorized tractor units. Fully manned, each sapper army was authorized some 45,000 to 50,000 men. Deployment 1st Sapper Army. Assigned to Western Front from December 1941 until September 1942. A 19 November 1941 NKO order downsized the planned formation of a 1st Sapper Army to work on fortifications in Karelia to the 1st Separate Operational-Engineer Group. On 21 December 1941, the Chief of Engineers of the Western Front, Maj. Gen. M. P. 
Vorob'ev, requested the NKO to form another 1st Sapper Army to exercise more command and control over the 80 separate sapper battalions on defence line construction west of Moscow. The army consisted of ten sapper brigades with eight sapper battalions each. 2nd Sapper Army. Assigned to Arkhangelsk Military District (MD) from October 1941 until February 1942. 3rd Sapper Army. Assigned to Moscow MD from October 1941 until September 1942. 4th Sapper Army. Assigned to Volga MD from October 1941 until May 1942. 5th Sapper Army. Assigned to Stalingrad and North Caucasus MDs from October 1941 until March 1942. 6th Sapper Army. Assigned to Volga MD and the Bryansk Front from October 1941 until September 1942. 7th Sapper Army. Assigned to Volga and Stalingrad MDs from October 1941 until September 1942. 8th Sapper Army. Assigned to North Caucasus MD and the Southern, Caucasus, and Trans-Caucasus Fronts from October 1941 until October 1942. 9th Sapper Army. Assigned to North Caucasus MD from October 1941 until March 1942. 10th Sapper Army. Assigned to North Caucasus MD from October 1941 until March 1942. 11th Sapper Army. Assigned to Leningrad MD from September 1942 until July 1944. Footnotes Sources Army units and formations of the Soviet Union Engineering units and formations
Sapper army
Engineering
930
37,272,203
https://en.wikipedia.org/wiki/Cone%20bush
Cone bush, conebush, or cone-bush is a common name for various plants, usually dicotyledonous shrubs that bear their flowers and seeds in compact, cone-shaped inflorescences and infructescences. The plants that the name most frequently applies to are members of the Proteaceae, and in particular the Australian genus Isopogon and the African genus Leucadendron. References Isopogon Leucadendron Plant common names
Cone bush
Biology
97
56,683,091
https://en.wikipedia.org/wiki/Khankaspis
Khankaspis is a poorly preserved arthropod genus that contains one species, K. bazhanovi, recovered from the Snegurovka Formation of Siberia, Russia. Some authors have placed Khankaspis within the order Strabopida, but poorly preserved material precludes detailed comparisons with other Cambrian arthropods. References Cambrian arthropods Cambrian animals of Asia Fossils of Russia Fossil taxa described in 1969 Controversial taxa Cambrian genus extinctions
Khankaspis
Biology
97
1,919,332
https://en.wikipedia.org/wiki/Calcicole
A calcicole, calciphyte or calciphile is a plant that thrives in lime-rich soil. The word is derived from the Latin for 'to dwell on chalk'. Under acidic conditions, aluminium becomes more soluble and phosphate less so. As a consequence, calcicoles grown on acidic soils often develop the symptoms of aluminium toxicity, i.e. necrosis, and phosphate deficiency, i.e. anthocyanosis (reddening of the leaves) and stunting. A plant that thrives in acid soils is known as a calcifuge. A plant thriving on sand (which may be acidic or calcic) is termed psammophilic or arenaceous (see also arenite). Examples of calcicole plants Ash trees (Fraxinus spp.) Honeysuckle (Lonicera) Buddleja Lilac (Syringa) Beet Clematis Sanguisorba minor Some European orchids Some succulent plant genera, such as Sansevieria and Titanopsis, and the cactus genus Thelocactus. Calcicolous grasses References Plant physiology
Calcicole
Biology
230
25,879,986
https://en.wikipedia.org/wiki/Jason%20Walter%20Brown
Jason W. Brown (born April 14, 1938) is an American neurologist and writer of works in neuropsychology and philosophy of mind. He has been a reviewer and recipient of grants and fellowships from the National Institutes of Health and the Alexander von Humboldt Foundation, and is or has been on the editorial boards of leading journals in his field. He has written 14 books, edited 4 others, and published more than 200 articles. Brown is the founder and active chief neurologist of the Center for Cognition and Communication (CCC), a specialized private practice in evaluating and treating traumatic brain injury that he founded in New York City in 1985. Biography Brown completed premedical studies at the University of California in Los Angeles and graduated from Berkeley in 1959. He attended medical school at the University of Southern California in Los Angeles, receiving his M.D. in 1963, and completed an internship at St. Elizabeth's Hospital in Washington, D.C. He returned to Los Angeles for a residency in neurology at UCLA. From 1967 to 1969 he served in the Army, in Korea and San Francisco. In 1969, he took a post-doctoral fellowship at the Boston Veteran's Hospital. In 1970, he was invited to the staff of Columbia-Presbyterian Hospital in New York as assistant professor. In 1972, he published his first book, Aphasia, Apraxia, and Agnosia. In 1976, he received a fellowship from the Foundations Fund for Research in Psychiatry to spend a year at the Centre Neuropsychologique et Neurolinguistique in Paris. On his return, he joined the staff of New York University Medical Center, eventually as clinical professor in neurology. The academic year 1978–79 was spent as visiting associate professor at Rockefeller University. The Center for Cognition and Communication (CCC) was established to provide treatment for clients with head injury, stroke, and other acquired and developmental disorders of cognition. Since 2002, Brown and his wife Carine have hosted and co-organized the Psychology Nexus workshops in the South of France. Books Brown, J. W. (1972). Aphasia, apraxia and agnosia. Clinical and theoretical aspects. Springfield, IL: Thomas. Brown, J. W. (1977). Mind, brain and consciousness. New York: Academic. Brown, J. W. (1988). Life of the mind. New Jersey: Erlbaum. Brown, J. W. (1991). Self and process. New York: Springer-Verlag. Brown, J. W. (1996). Time, will and mental process. New York: Plenum Press. Brown, J. W. (2000). Mind and nature: essays on time and subjectivity. London: Whurr. Brown, J. W. (2001). The Self-Embodied Mind: Process, Brain Dynamics, and the Conscious Present. Barrytown: Station Hill Press. Brown, J. W. (2005). Process and the authentic life. Toward a psychology of value. Heusenstamm: Ontos Verlag, De Gruyter. Brown, J. W. (2010). Neuropsychological foundations of conscious experience. Louvain-la-Neuve, Belgium: Les Editions Chromatika. Brown, J. W. (2011). Gourmet's guide to the mind. Louvain-la-Neuve, Belgium: Les Editions Chromatika. Brown, J. W. (2012). Love and other emotions. London: Karnac Press. Brown, J. W. (2014). Microgenetic theory and process thought. In preparation. Brown, J. W. (2017). Metapsychology of the creative process. Continuous novelty as the ground of creative advance. Exeter: Imprint Academic. Brown, J. W. (2017). Reflections on mind and the image of reality. Eugene, Oregon: Resource Publications. Brown, J.W. (2024), Ausgewählte Aufsätze zu einer Prozesspsychologie. Herausgegeben von Paul Stenner und Denys Zhadiaiev Von Dr. Jason W. Brown. Verlag Karl Alber: Baden-Baden ISBN 978-3-495-99305-7 (Whitehead Studien, Bd. 
11), 2024 Brown, J.W., Stenner, P. (2024), The Microgenetic Theory of Mind and Brain. Selected Essays in Process Psychology. (Ed. Denys Zhadiaiev). Routledge: New York. ISBN 9781032873848, Dec 6, 2024 Edited Brown, J. W. (1973). Aphasia, tran. of A. Pick, Aphasie, Springfield: Thomas. Brown, J. W. (1981). Jargonaphasia (Ed.) New York: Academic. Brown, J. W. (1988). Agnosia and apraxia (Ed.) New Jersey: Erlbaum. Brown, J. W. (1989). Neuropsychology of perception. New Jersey: Erlbaum. Articles Brown, J.W. (2013). in: Bradford, D. (2013) Microgenesis and the Mind/Brain State: Interview with Jason Brown, Mind and Matter, 11 (2) 183-203. Brown, J.W. (2014). Feeling, Journal of mind and behavior,35. Brown, J.W. (2017). Microgenetic theory of perception, memory and the mental state. Journal of consciousness studies, 24:51-70. Brown, J.W. (2018). The nature of existence. Orpheus’ glance: selected papers on process philosophy, 2002–2017. P. Stenner and M. Weber Eds. Belgium: Les Editions Chromatika. Brown, J.W. (2018). A process theory of morality, In M. Pachalska and J. Kropotov (Eds). Psychology, neuropsychology and neurophysiology: studies in microgenetic theory. Krakow: IMPULS. Brown, J.W. (2018). Memory and thought. Proceedings of the Whitehead conference in the Azores, 2017, Nature and process. Teixeira, M-T and Pickering, J. (Eds). Cambridge Scholars Publishing, 2018–19, in press. Brown, J.W. (2018). Theoretical note on the nature of the present. Process studies, 47.1-2 (2018): 163-171. Brown, J.W. (2018). Agency and the will. Mind and matter, 16:195-212. Brown, J.W. (2020). Origins of subjective experience. The Journal of mind and behavior. Summer and autumn 2020, Volume 41, #3 and 4. Pages 270-279. Brown, J.W. (2020). Time and the dream, Neuropsychoanalysis, 22:1-2, 129-138 Brown, J.W. (2021) The mind/brain state. The Journal of mind and behavior 42(1), 1-16. Brown, J.W., Zhadiaiev, D.V. (2022). From drive to value. Process studies. 1 November 2022; 51 (2): 204–220. doi: https://doi.org/10.5406/21543682.51.2.04. Brown, J.W. (2023). Agency and freedom. Exploring consciousness - from non-duality to non-locality. (in press) References American neurologists American philosophers of mind 1938 births Living people New York University Grossman School of Medicine faculty Keck School of Medicine of USC alumni University of California, Berkeley alumni University of California, Los Angeles alumni
Jason Walter Brown
Biology
1,641
486,432
https://en.wikipedia.org/wiki/Processor%20register
A processor register is a quickly accessible location available to a computer's processor. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, e.g. the DEC PDP-10 and ICT 1900. Almost all computers, whether load/store architecture or not, load items of data from a larger memory into registers where they are used for arithmetic operations, bitwise operations, and other operations, and are manipulated or tested by machine instructions. Manipulated items are then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic random-access memory (RAM) as main memory, with the latter usually accessed via one or more cache levels. Processor registers are normally at the top of the memory hierarchy, and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of Pentium Pro, Cyrix 6x86, Nx586, and AMD K5. When a computer program accesses the same data repeatedly, this is called locality of reference. Holding frequently used values in registers can be critical to a program's performance. Register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer. Size Registers are normally measured by the number of bits they can hold, for example, an 8-bit register, 32-bit register, 64-bit register, 128-bit register, or more. In some instruction sets, the registers can operate in various modes, breaking down their storage memory into smaller parts (32-bit into four 8-bit ones, for instance) to which multiple data (vector, or one-dimensional array of data) can be loaded and operated upon at the same time; a lane-packing sketch after the list of register types below illustrates this. Typically this is implemented by adding extra registers that map their memory into a larger register. Processors that have the ability to execute single instructions on multiple data are called vector processors. Types A processor often contains several kinds of registers, which can be classified according to the types of values they can store or the instructions that operate on them: User-accessible registers can be read or written by machine instructions. The most common division of user-accessible registers is a division into data registers and address registers; control registers form a further category. Data registers can hold numeric data values such as integers and, in some architectures, floating-point numbers, as well as characters, small bit arrays and other data. In some older architectures, such as the IBM 704, the IBM 709 and successors, the PDP-1, the PDP-4/PDP-7/PDP-9/PDP-15, the PDP-5/PDP-8, and the HP 2100, a special data register known as the accumulator is used implicitly for many operations. Address registers hold addresses and are used by instructions that indirectly access primary memory. 
Some processors contain registers that may only be used to hold an address or only to hold numeric values (in some cases used as an index register whose value is added as an offset from some address); others allow registers to hold either kind of quantity. A wide variety of possible addressing modes, used to specify the effective address of an operand, exist. The stack pointer is used to manage the run-time stack. Rarely, other data stacks are addressed by dedicated address registers (see stack machine). General-purpose registers (GPRs) can store both data and addresses, i.e., they are combined data/address registers; in some architectures, the register file is unified so that the GPRs can store floating-point numbers as well. Status registers hold truth values often used to determine whether some instruction should or should not be executed. Floating-point registers (FPRs) store floating-point numbers in many architectures. Constant registers hold read-only values such as zero, one, or pi. Vector registers hold data for vector processing done by SIMD instructions (Single Instruction, Multiple Data). Special-purpose registers (SPRs) hold some elements of the program state; they usually include the program counter, also called the instruction pointer, and the status register; the program counter and status register might be combined in a program status word (PSW) register. The aforementioned stack pointer is sometimes also included in this group. Embedded microprocessors, such as microcontrollers, can also have special function registers corresponding to specialized hardware elements. Model-specific registers (also called machine-specific registers) store data and settings related to the processor itself. Because their meanings are attached to the design of a specific processor, they are not expected to remain standard between processor generations. Memory type range registers (MTRRs). Internal registers are not accessible by instructions and are used internally for processor operations. The instruction register holds the instruction currently being executed. Registers related to fetching information from RAM, a collection of storage registers located on separate chips from the CPU: Memory buffer register (MBR), also known as memory data register (MDR), and memory address register (MAR). Architectural registers are the registers visible to software and are defined by an architecture. They may not correspond to the physical hardware if register renaming is being performed by the underlying hardware. Hardware registers are similar, but occur outside CPUs. In some architectures (such as SPARC and MIPS), the first or last register in the integer register file is a pseudo-register in that it is hardwired to always return zero when read (mostly to simplify indexing modes), and it cannot be overwritten. In Alpha, this is also done for the floating-point register file. As a result of this, register files are commonly quoted as having one register more than how many of them are actually usable; for example, 32 registers are quoted when only 31 of them fit within the above definition of a register. 
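To illustrate the subdivided-register mode described in the Size section above, the following minimal sketch packs four 8-bit lanes into a single 32-bit value and adds two such values lane by lane, the way SIMD hardware treats one wide register as several small ones. It is an illustration in ordinary software, not processor code, and the function names are invented for the example.

```python
# Illustrative sketch: treating one 32-bit integer as four independent 8-bit lanes,
# the way a subdivided (SIMD-style) register holds multiple data elements at once.
# Function names are made up for this example; this is not real processor code.

def pack_lanes(lanes):
    """Pack four 8-bit values (lane 0 = least significant) into one 32-bit word."""
    assert len(lanes) == 4 and all(0 <= v <= 0xFF for v in lanes)
    word = 0
    for i, v in enumerate(lanes):
        word |= v << (8 * i)
    return word

def unpack_lanes(word):
    """Split a 32-bit word back into its four 8-bit lanes."""
    return [(word >> (8 * i)) & 0xFF for i in range(4)]

def add_lanes(a, b):
    """Lane-wise addition that wraps around within each 8-bit lane (no carry between lanes)."""
    return pack_lanes([(x + y) & 0xFF for x, y in zip(unpack_lanes(a), unpack_lanes(b))])

x = pack_lanes([10, 20, 30, 250])
y = pack_lanes([1, 2, 3, 10])
print(unpack_lanes(add_lanes(x, y)))   # [11, 22, 33, 4] -- the last lane wraps modulo 256
```

The point of the sketch is that no carry crosses a lane boundary; a real vector unit performs the per-lane additions in parallel in hardware rather than in a loop.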
Although all of the below-listed architectures are different, almost all are in a basic arrangement known as the von Neumann architecture, first proposed by the Hungarian-American mathematician John von Neumann. It is also noteworthy that the number of registers on GPUs is much higher than that on CPUs. Usage The number of registers available on a processor and the operations that can be performed using those registers has a significant impact on the efficiency of code generated by optimizing compilers. The Strahler number of an expression tree gives the minimum number of registers required to evaluate that expression tree. See also CPU cache Quantum register Register allocation Register file Shift register References Computer architecture Digital registers Central processing unit
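The Usage section above notes that the Strahler number of an expression tree gives the minimum number of registers needed to evaluate that tree (the Ershov/Sethi–Ullman observation). A minimal sketch of that computation follows; the tree representation is invented for the example.

```python
# Minimal sketch: the Strahler number of a binary expression tree equals the minimum
# number of registers needed to evaluate it without spilling (Sethi-Ullman / Ershov).
# The Node class here is an invented representation for illustration only.

class Node:
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

def strahler(node):
    if node.left is None and node.right is None:
        return 1                      # a leaf (constant or variable) needs one register
    l = strahler(node.left) if node.left else 0
    r = strahler(node.right) if node.right else 0
    return max(l, r) if l != r else l + 1

# (a + b) * (c - d): each operand subtree needs 2 registers, so the whole tree needs 3.
expr = Node('*', Node('+', Node('a'), Node('b')), Node('-', Node('c'), Node('d')))
print(strahler(expr))   # 3
```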
Processor register
Technology,Engineering
1,496
7,711,708
https://en.wikipedia.org/wiki/Mail%20coach
A mail coach is a stagecoach that is used to deliver mail. In Great Britain, Ireland, and Australia, they were built to a General Post Office-approved design operated by an independent contractor to carry long-distance mail for the Post Office. Mail was held in a box at the rear where the only Royal Mail employee, an armed guard, stood. Passengers were taken at a premium fare. There was seating for four passengers inside and more outside with the driver. The guard's seat could not be shared. This distribution system began in Britain in 1784. In Ireland the same service began in 1789, and in Australia it began in 1828. A mail coach service ran to an exact and demanding schedule. Aside from quick changes of horses the coach only stopped for collection and delivery of mail and never for the comfort of the passengers. To avoid a steep fine turnpike gates had to be open by the time the mail coach with its right of free passage passed through. The gatekeeper was warned by the sound of the posthorn. Mail coaches were slowly phased out during the 1840s and 1850s, their role eventually replaced by trains as the railway network expanded. History in Britain The postal delivery service in Britain had existed in the same form for about 150 years – from its introduction in 1635, mounted carriers had ridden between "posts" where the postmaster would remove the letters for the local area before handing the remaining letters and any additions to the next rider. The riders were frequent targets for robbers, and the system was inefficient. John Palmer, a theatre owner from Bath, believed that the coach service he had previously run for transporting actors and materials between theatres could be used for a countrywide mail delivery service, so in 1782, he suggested to the Post Office in London that they take up the idea. He met resistance from officials who believed that the existing system could not be improved, but eventually the Chancellor of the Exchequer, William Pitt, allowed him to carry out an experimental run between Bristol and London. Under the old system the journey had taken up to 38 hours. The coach, funded by Palmer, left Bristol at 4 pm on 2 August 1784 and arrived in London just 16 hours later. Impressed by the trial run, Pitt authorised the creation of new routes. By the end of 1785 there were services from London to Norwich, Liverpool, Leeds, Dover, Portsmouth, Poole, Exeter, Gloucester, Worcester, Holyhead and Carlisle. A service to Edinburgh was added the next year and Palmer was rewarded by being made Surveyor and Comptroller General of the Post Office. Initially the coach, horses and driver were all supplied by contractors. There was strong competition for the contracts as they provided a fixed regular income on top of which the companies could charge fares for the passengers. By the beginning of the 19th century the Post Office had their own fleet of coaches with black and maroon livery. The early coaches were poorly built, but in 1787 the Post Office adopted John Besant's improved and patented design, after which Besant, with his partner John Vidler, enjoyed a monopoly on the supply of coaches, and a virtual monopoly on their upkeep and servicing. The mail coaches continued unchallenged until the 1830s but the development of railways spelt the end for the service. The first rail delivery between Liverpool and Manchester took place on 11 November 1830. 
By the early 1840s other rail lines had been constructed and many London-based mail coaches were starting to be withdrawn from service; the final service from London (to Norwich) was shut down in 1846. Regional mail coaches continued into the 1850s, but these too were eventually replaced by rail services. Travel The mail coaches were originally designed for a driver, seated outside, and up to four passengers inside. The guard (the only Post Office employee on the coach) travelled on the outside at the rear next to the mail box. Later a further passenger was allowed outside, sitting at the front next to the driver, and eventually a second row of seating was added behind him to allow two further passengers to sit outside. Travel could be uncomfortable as the coaches travelled on poor roads and passengers were obliged to dismount from the carriage when going up steep hills to spare the horses (as Charles Dickens describes at the beginning of A Tale of Two Cities). The coaches averaged 7 to 8 mph (11–13 km/h) in summer and about 5 mph (8 km/h) in winter but by the time of Queen Victoria the roads had improved enough to allow speeds of up to 10 mph (16 km/h). Fresh horses were supplied every 10 to 15 miles (16–24 km). Stops to collect mail were short and sometimes there would be no stops at all with the guard throwing the mail off the coach and snatching the new deliveries from the postmaster. The cost of travelling by mail coach was about 1d. a mile more expensive than by private stage coach, but the coach was faster and, in general, less crowded and cleaner. Crowding was a common problem with private stage coaches, which led to their overturning; the limits on numbers of passengers and luggage prevented this occurring on the mail coaches. Travel on the mail coach was nearly always at night; as the roads were less busy the coach could make better speed. The guard was heavily armed with a blunderbuss and two pistols and dressed in the Post Office livery of maroon and gold. The mail coaches were thus well defended against highwaymen, and accounts of robberies often confuse them with private stage coaches, though robberies did occur. To prevent corruption and ensure good performance, the guards were paid handsomely and supplied with a generous pension. The mail was their sole charge, meaning that they had to deliver it on foot if a problem arose with the coach and, unlike the driver, they remained with the coach for the whole journey; occasionally guards froze to death from hypothermia in their exposed position outside the coach during the harsh winters (see River Thames frost fairs). The guard was supplied with a timepiece and a posthorn, the former to ensure the schedule was met, the latter to alert the post house to the imminent arrival of the coach and warn tollgate keepers to open the gate (mail coaches were exempt from stopping and paying tolls: a fine was payable if the coach was forced to stop). Since the coaches had right of way on the roads the horn was also used to advise other road users of their approach. History in Ireland A twice-weekly stage coach service operated between Dublin and Drogheda to the north, Kilkenny to the south and Athlone to the west as early as 1737 and for a short period from 1740, a Dublin to Belfast stage coach existed. In winter, this last route took three days, with overnight stops at Drogheda and Newry; in summer, travel time was reduced to two days. In 1789, mail coaches began a scheduled service from Dublin to Belfast. 
They met the mail boats coming from Portpatrick in Scotland at Donaghadee, in County Down. By the mid-19th century, most of the mail coaches in Ireland were eventually out-competed by Charles Bianconi's country-wide network of open carriages, before this system in turn succumbed to the railways. History in Australia Australia's first mail coach was established in 1828 and was crucial in connecting the remote settlements then being established with the larger centres. The first mail contracts were issued and mail was transported by coach or on horseback from Sydney to the first seven country post offices – Penrith, Parramatta, Liverpool, Windsor, Campbelltown, Newcastle and Bathurst. The Sydney to Melbourne overland packhorse mail service commenced in 1837. From 1855 the Sydney to Melbourne overland mail coach was supplanted by coastal steamship and rail. The rail network became the distributor of mail to larger regional centres, where the mail coach met the trains and carried the mail on to more remote towns and villages. In 1863 contracts were awarded to the coaching company Cobb & Co to transport Royal Mail services within New South Wales and Victoria. These contracts and later others in Queensland continued until 1924 when the last service operated in western Queensland. The lucrative mail contracts helped Cobb & Co grow and become an efficient and vast network of coach services in eastern Australia. Royal Mail coach services reached their peak in the later decades of the 19th century, operating over thousands of miles of eastern Australia. In the 1870s Cobb & Co's Royal Mail coaches were operating some 6000 horses per day, and travelling 28,000 miles weekly carrying mail, gold, and general parcels. Some Concord stagecoaches, made in New Hampshire by the Abbot-Downing Company, were imported from the United States. This design was a 'thorough-brace' or 'jack' style coach characterised by an elegant curved lightweight body suspended on two large leather straps, which helped to isolate the passengers and driver from the jolts and bumps of the rough unmade country roads. Soon Australian coach builders, using many of the Concord design features, customised the design for Australian conditions. See also The English Mail-Coach, an 1849 essay by the English author Thomas De Quincey. Horse-drawn vehicle Chapar Khaneh, in ancient Persia Note References Further reading Margetson, Stella. "The Mail Coach Revolution" History Today (Jan 1967), Vol. 17 Issue 1, p36-44. External links History of the Post Bath Postal Museum The Mail, by Anne Woodley Miscellaneous Essays – The English Mail Coach, by Thomas de Quincey. Authorama – Public Domain Books Mail Coaches British Postal Museum & Archive Mail Coach Routes – Direct from Dublin from Leigh's New Pocket Road-book of Ireland, 1835. Coaches (carriage) Postal systems Postal infrastructure in the United Kingdom
Mail coach
Technology
1,964
25,599
https://en.wikipedia.org/wiki/Rubidium
Rubidium is a chemical element; it has symbol Rb and atomic number 37. It is a very soft, whitish-grey solid in the alkali metal group, similar to potassium and caesium. Rubidium is the first alkali metal in the group to have a density higher than that of water. On Earth, natural rubidium comprises two isotopes: 72% is the stable isotope 85Rb, and 28% is the slightly radioactive 87Rb, with a half-life of 48.8 billion years – more than three times as long as the estimated age of the universe. German chemists Robert Bunsen and Gustav Kirchhoff discovered rubidium in 1861 by the newly developed technique of flame spectroscopy. The name comes from the Latin word rubidus, meaning deep red, the color of its emission spectrum. Rubidium's compounds have various chemical and electronic applications. Rubidium metal is easily vaporized and has a convenient spectral absorption range, making it a frequent target for laser manipulation of atoms. Rubidium is not a known nutrient for any living organisms. However, rubidium ions have similar properties and the same charge as potassium ions, and are actively taken up and treated by animal cells in similar ways. Characteristics Physical properties Rubidium is a very soft, ductile, silvery-white metal. It has a melting point of about 39 °C and a boiling point of about 688 °C. It forms amalgams with mercury and alloys with gold, iron, caesium, sodium, and potassium, but not lithium (despite rubidium and lithium being in the same periodic group). Rubidium and potassium show a very similar purple color in the flame test, and distinguishing the two elements requires more sophisticated analysis, such as spectroscopy. Chemical properties Rubidium is the second most electropositive of the stable alkali metals and has a very low first ionization energy of only 403 kJ/mol. It has an electron configuration of [Kr]5s1 and is photosensitive. Due to its strong electropositive nature, rubidium reacts explosively with water to produce rubidium hydroxide and hydrogen gas. As with all the alkali metals, the reaction is usually vigorous enough to ignite the metal or the hydrogen gas produced by the reaction, potentially causing an explosion. Rubidium, being denser than potassium, sinks in water, reacting violently; caesium explodes on contact with water. However, the reaction rates of all alkali metals depend upon the surface area of metal in contact with water, with small metal droplets giving explosive rates. Rubidium has also been reported to ignite spontaneously in air. Compounds Rubidium chloride (RbCl) is probably the most used rubidium compound: among several other chlorides, it is used to induce living cells to take up DNA; it is also used as a biomarker, because in nature it is found only in small quantities in living organisms and, when present, replaces potassium. Other common rubidium compounds are the corrosive rubidium hydroxide (RbOH), the starting material for most rubidium-based chemical processes; rubidium carbonate (Rb2CO3), used in some optical glasses; and rubidium copper sulfate, Rb2SO4·CuSO4·6H2O. Rubidium silver iodide (RbAg4I5) has the highest room-temperature conductivity of any known ionic crystal, a property exploited in thin film batteries and other applications. Rubidium forms a number of oxides when exposed to air, including rubidium monoxide (Rb2O), Rb6O, and Rb9O2; rubidium in excess oxygen gives the superoxide RbO2. Rubidium forms salts with halogens, producing rubidium fluoride, rubidium chloride, rubidium bromide, and rubidium iodide. 
Isotopes Although rubidium has only one stable isotope, and is therefore classed as monoisotopic, rubidium in the Earth's crust is composed of two isotopes: the stable 85Rb (72.2%) and the radioactive 87Rb (27.8%). Natural rubidium is radioactive, with a specific activity of about 670 Bq/g, enough to significantly expose a photographic film in 110 days. Thirty additional rubidium isotopes have been synthesized with half-lives of less than 3 months; most are highly radioactive and have few uses. Rubidium-87 has a half-life of 48.8 billion years, which is more than three times the estimated age of the universe of about 13.8 billion years, making it a primordial nuclide. It readily substitutes for potassium in minerals, and is therefore fairly widespread. 87Rb has been used extensively in dating rocks; 87Rb beta decays to stable 87Sr. During fractional crystallization, Sr tends to concentrate in plagioclase, leaving Rb in the liquid phase. Hence, the Rb/Sr ratio in residual magma may increase over time, and the progressing differentiation results in rocks with elevated Rb/Sr ratios. The highest ratios (10 or more) occur in pegmatites. If the initial amount of Sr is known or can be extrapolated, then the age can be determined by measurement of the Rb and Sr concentrations and of the 87Sr/86Sr ratio. The dates indicate the true age of the minerals only if the rocks have not been subsequently altered (see rubidium–strontium dating). Rubidium-82, one of the element's non-natural isotopes, is produced by electron-capture decay of strontium-82 with a half-life of 25.36 days. With a half-life of 76 seconds, rubidium-82 decays by positron emission to stable krypton-82. Occurrence Rubidium is not abundant, being one of 56 elements that combined make up 0.05% of the Earth's crust; as roughly the 23rd most abundant element in the Earth's crust, it is more abundant than zinc or copper. It occurs naturally in the minerals leucite, pollucite, carnallite, and zinnwaldite, which contain as much as 1% rubidium oxide. Lepidolite contains between 0.3% and 3.5% rubidium, and is the commercial source of the element. Some potassium minerals and potassium chlorides also contain the element in commercially significant quantities. Seawater contains an average of 125 μg/L of rubidium compared to the much higher value for potassium of 408 mg/L and the much lower value of 0.3 μg/L for caesium. Rubidium is the 18th most abundant element in seawater. Because of its large ionic radius, rubidium is one of the "incompatible elements". During magma crystallization, rubidium is concentrated together with its heavier analogue caesium in the liquid phase and crystallizes last. Therefore, the largest deposits of rubidium and caesium are zone pegmatite ore bodies formed by this enrichment process. Because rubidium substitutes for potassium in the crystallization of magma, the enrichment is far less effective than that of caesium. Zone pegmatite ore bodies containing mineable quantities of caesium as pollucite or the lithium mineral lepidolite are also a source for rubidium as a by-product. Two notable sources of rubidium are the rich deposits of pollucite at Bernic Lake, Manitoba, Canada, and the rubicline found as impurities in pollucite on the Italian island of Elba, with a rubidium content of 17.5%. Both of those deposits are also sources of caesium. Production Although rubidium is more abundant in Earth's crust than caesium, the limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. 
Several methods are available for separating potassium, rubidium, and caesium. The fractional crystallization of a rubidium and caesium alum yields, after 30 successive steps, pure rubidium alum. Two other methods are reported: the chlorostannate process and the ferrocyanide process. For several years in the 1950s and 1960s, a by-product of potassium production called Alkarb was a main source for rubidium. Alkarb contained 21% rubidium, with the rest being potassium and a small amount of caesium. Today the largest producers of caesium produce rubidium as a by-product from pollucite. History Rubidium was discovered in 1861 by Robert Bunsen and Gustav Kirchhoff, in Heidelberg, Germany, in the mineral lepidolite through flame spectroscopy. Because of the bright red lines in its emission spectrum, they chose a name derived from the Latin word rubidus, meaning "deep red". Rubidium is a minor component in lepidolite. Kirchhoff and Bunsen processed 150 kg of a lepidolite containing only 0.24% rubidium monoxide (Rb2O). Both potassium and rubidium form insoluble salts with chloroplatinic acid, but those salts show a slight difference in solubility in hot water. Therefore, the less soluble rubidium hexachloroplatinate (Rb2PtCl6) could be obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, the process yielded 0.51 grams of rubidium chloride (RbCl) for further studies. Bunsen and Kirchhoff began their first large-scale isolation of caesium and rubidium compounds with of mineral water, which yielded 7.3 grams of caesium chloride and 9.2 grams of rubidium chloride. Rubidium was the second element, shortly after caesium, to be discovered by spectroscopy, just one year after the invention of the spectroscope by Bunsen and Kirchhoff. The two scientists used the rubidium chloride to estimate that the atomic weight of the new element was 85.36 (the currently accepted value is 85.47). They tried to generate elemental rubidium by electrolysis of molten rubidium chloride, but instead of a metal, they obtained a blue homogeneous substance, which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance". They presumed that it was a subchloride (); however, the product was probably a colloidal mixture of the metal and rubidium chloride. In a second attempt to produce metallic rubidium, Bunsen was able to reduce rubidium by heating charred rubidium tartrate. Although the distilled rubidium was pyrophoric, they were able to determine the density and the melting point. The quality of this research in the 1860s can be appraised by the fact that their determined density differs by less than 0.1 g/cm3 and the melting point by less than 1 °C from the presently accepted values. The slight radioactivity of rubidium was discovered in 1908, but that was before the theory of isotopes was established in 1910, and the low level of activity (half-life greater than 10 billion years) made interpretation complicated. The now proven decay of 87Rb to stable 87Sr through beta decay was still under discussion in the late 1940s. Rubidium had minimal industrial value before the 1920s. Since then, the most important use of rubidium has been research and development, primarily in chemical and electronic applications. In 1995, rubidium-87 was used to produce a Bose–Einstein condensate, for which the discoverers, Eric Allin Cornell, Carl Edwin Wieman and Wolfgang Ketterle, won the 2001 Nobel Prize in Physics. 
Applications Rubidium compounds are sometimes used in fireworks to give them a purple color. Rubidium has also been considered for use in a thermoelectric generator using the magnetohydrodynamic principle, whereby hot rubidium ions are passed through a magnetic field. These conduct electricity and act like an armature of a generator, thereby generating an electric current. Rubidium, particularly vaporized 87Rb, is one of the most commonly used atomic species employed for laser cooling and Bose–Einstein condensation. Its desirable features for this application include the ready availability of inexpensive diode laser light at the relevant wavelength and the moderate temperatures required to obtain substantial vapor pressures. For cold-atom applications requiring tunable interactions, 85Rb is preferred for its rich Feshbach spectrum. Rubidium has been used for polarizing 3He, producing volumes of magnetized 3He gas, with the nuclear spins aligned rather than random. Rubidium vapor is optically pumped by a laser, and the polarized Rb polarizes 3He through the hyperfine interaction. Such spin-polarized 3He cells are useful for neutron polarization measurements and for producing polarized neutron beams for other purposes. The resonant element in atomic clocks utilizes the hyperfine structure of rubidium's energy levels, and rubidium is useful for high-precision timing. It is used as the main component of secondary frequency references (rubidium oscillators) in cell site transmitters and other electronic transmitting, networking, and test equipment. These rubidium standards are often used with GNSS to produce a "primary frequency standard" that has greater accuracy and is less expensive than caesium standards. Such rubidium standards are often mass-produced for the telecommunications industry. Other potential or current uses of rubidium include a working fluid in vapor turbines, as a getter in vacuum tubes, and as a photocell component. Rubidium is also used as an ingredient in special types of glass, in the production of superoxide by burning in oxygen, in the study of potassium ion channels in biology, and as the vapor in atomic magnetometers. In particular, 87Rb is used with other alkali metals in the development of spin-exchange relaxation-free (SERF) magnetometers. Rubidium-82 is used for positron emission tomography. Rubidium is very similar to potassium, and tissue with high potassium content will also accumulate the radioactive rubidium. One of the main uses is myocardial perfusion imaging. As a result of changes in the blood–brain barrier in brain tumors, rubidium collects more in brain tumors than normal brain tissue, allowing the use of radioisotope rubidium-82 in nuclear medicine to locate and image brain tumors. Rubidium-82 has a very short half-life of 76 seconds, and the production from decay of strontium-82 must be done close to the patient. Rubidium was tested for the influence on manic depression and depression. Dialysis patients suffering from depression show a depletion in rubidium, and therefore a supplementation may help during depression. In some tests the rubidium was administered as rubidium chloride with up to 720 mg per day for 60 days. Precautions and biological effects Rubidium reacts violently with water and can cause fires. To ensure safety and purity, this metal is usually kept under dry mineral oil or sealed in glass ampoules in an inert atmosphere. 
Rubidium forms peroxides on exposure even to a small amount of air diffused into the oil, and storage is subject to similar precautions as the storage of metallic potassium. Rubidium, like sodium and potassium, almost always has +1 oxidation state when dissolved in water, even in biological contexts. The human body tends to treat Rb+ ions as if they were potassium ions, and therefore concentrates rubidium in the body's intracellular fluid (i.e., inside cells). The ions are not particularly toxic; a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test persons. The biological half-life of rubidium in humans measures 31–46 days. Although a partial substitution of potassium by rubidium is possible, when more than 50% of the potassium in the muscle tissue of rats was replaced with rubidium, the rats died. References Further reading Meites, Louis (1963). Handbook of Analytical Chemistry (New York: McGraw-Hill Book Company, 1963) External links Rubidium at The Periodic Table of Videos (University of Nottingham) Chemical elements Alkali metals Reducing agents Chemical elements with body-centered cubic structure Pyrophoric materials
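The rubidium–strontium dating outlined in the Isotopes section above reduces to a single decay relation, 87Sr/86Sr(now) = 87Sr/86Sr(initial) + 87Rb/86Sr · (e^(λt) − 1). The sketch below is a worked illustration only: the isotope ratios are invented sample values, and the 87Rb half-life of 48.8 billion years is the figure quoted in the article.

```python
import math

# Worked sketch of a rubidium-strontium age calculation (illustrative numbers only).
# 87Sr/86Sr(now) = 87Sr/86Sr(initial) + 87Rb/86Sr(now) * (exp(lambda * t) - 1)

HALF_LIFE_87RB = 48.8e9                      # years, as quoted in the article
decay_const = math.log(2) / HALF_LIFE_87RB   # ~1.42e-11 per year

def rb_sr_age(sr_ratio_now, sr_ratio_initial, rb_sr_ratio_now):
    """Age in years from present-day ratios and an assumed initial 87Sr/86Sr ratio."""
    return math.log(1 + (sr_ratio_now - sr_ratio_initial) / rb_sr_ratio_now) / decay_const

# Hypothetical whole-rock measurement:
age = rb_sr_age(sr_ratio_now=0.7250, sr_ratio_initial=0.7040, rb_sr_ratio_now=1.5)
print(f"{age / 1e9:.2f} billion years")      # roughly 0.98 billion years
```

In practice the initial ratio is not assumed but obtained from an isochron fitted to several co-genetic samples, as the article notes; the sketch only shows the arithmetic of the decay relation.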
Rubidium
Physics,Chemistry,Technology
3,312
150,183
https://en.wikipedia.org/wiki/Eschede%20train%20disaster
On 3 June 1998, part of an ICE 1 train on the Hannover–Hamburg railway near Eschede in Lower Saxony, Germany derailed and crashed into an overpass that crossed the railroad, which then collapsed onto the train. 101 people were killed and at least 88 were injured, making it the second-deadliest railway disaster in German history after the 1939 Genthin rail disaster, and the world's worst ever high-speed rail disaster. The cause of the derailment was a single fatigue crack in one wheel, which caused a part of the wheel to become caught in a railroad switch (points), changing the direction of the switch as the train passed over it. This led to the train's carriages going down two separate tracks, causing the train to derail and crash into the pillars of a concrete road bridge, which then collapsed and crushed two coaches. The remaining coaches and the rear power car crashed into the wreckage. After the incident, many investigations into the wheel fracture took place. Analysis concluded that the accident was caused by poor wheel design which allowed a fatigue fracture to develop on the wheel rim. Investigators also considered other contributing factors, including the failure to stop the train, and maintenance procedures. The disaster had legal and technical consequences including trials, fines and compensation payments. The wheel design was modified and train windows were made easier to break in an emergency. A memorial place was opened at the place of the disaster. Background The InterCity Express 1, abbreviated as ICE 1, is the first German high-speed train and was introduced in 1988. Timeline Wheel fracture ICE 1 trainset 51 was travelling as ICE 884 "Wilhelm Conrad Röntgen" from Munich to Hamburg. The train was scheduled to stop at Augsburg, Nürnberg, Würzburg, Fulda, Kassel, Göttingen, and Hanover before reaching Hamburg. After stopping in Hanover at 10:30, the train continued its journey northwards. About and forty minutes away from Hamburg and south of central Eschede, near Celle, the steel tyre on a wheel on the third axle of the first car split and peeled away from the wheel, having been weakened by metal fatigue. The momentum of this caused the steel tyre to flatten and it was catapulted upwards, penetrating the floor of the train carriage where it remained stuck. The tyre embedded in the carriage was seen by Jörg Dittmann, one of the passengers in Coach 1. The tyre went through an armrest in his compartment between the seats where his wife and son were sitting. Dittmann took his wife and son out of the damaged coach and went to inform a conductor in the third coach. The conductor, who noticed vibrations in the train, told Dittmann that company policy required him to investigate the circumstances before pulling the emergency brake. The conductor took one minute to reach the site in Coach 1. According to Dittmann, the train had begun to sway from side to side by then. The conductor did not show willingness to stop the train immediately, and wished to first investigate the incident more thoroughly. Dittmann could not find an emergency brake in the corridor and had not noticed that there was an emergency brake handle in his own compartment. The train crashed just as Dittmann was about to show the armrest puncture to the conductor. Derailment As the train passed over the first of two points, the embedded tyre slammed against the guide rail of the points, pulling it from the railway ties. 
This guide rail also penetrated the floor of the car, becoming embedded in the vehicle and lifting the bogie off the rails. At 10:59 local time (08:59 UTC), one of the now-derailed wheels struck the points lever of the second switch, changing its setting. The rear axles of car number 3 were switched onto a parallel track, and the entire car was thereby thrown sideways into the piers supporting a roadway overpass, destroying them. Car number 4, likewise derailed by the violent deviation of car number 3 and still travelling at , passed intact under the bridge and rolled onto the embankment immediately behind it, striking several trees before coming to a stop. Two Deutsche Bahn railway workers who had been working near the bridge were killed instantly when the derailed car crushed them. The breaking of the car couplings caused the automatic emergency brakes to engage, and the mostly undamaged first three cars came to a stop. Bridge collapse The front power car and coaches one and two cleared the bridge. The third carriage hit the bridge, causing it to collapse, but itself passed clear. Coach four cleared the bridge, moved away from the track onto an embankment, and hit a group of trees before stopping. The bridge pieces crushed the rear half of coach five. The restaurant coach, six, was crushed to a height. With the track now obstructed completely by the collapsed bridge, the remaining cars jackknifed into the rubble in a zig-zag pattern: car 7, the service car, the restaurant car, the three first-class cars numbered 10 to 12, and the rear power car all derailed and slammed into the pile. The resulting chaos was likened to a partially collapsed folding ruler. An automobile was also found in the wreckage; it belonged to the two railway technicians killed, and was probably parked on the bridge before the accident. Separated from the rest of the carriages, the detached front power car coasted for a further three kilometers (two miles) until it came to a stop after passing Eschede railway station. The crash produced a sound that witnesses later described as "startling", "horribly loud", and "like a plane crash". People living nearby, alerted by the sound, were the first to arrive at the scene; Erika Karl, the first to arrive, photographed the site. She said that, upon hearing the noise, her husband initially believed there had been an aircraft accident. After the accident, eight of the ICE carriages occupied an area slightly longer than the length of a single carriage. At 11:02, the local police declared an emergency. At 11:07, as the magnitude of the disaster quickly became apparent, this was elevated to a "major emergency". At 12:30 the Celle district government declared a "catastrophic emergency" (civil state of emergency). More than 1,000 rescue workers from regional emergency services, fire departments, rescue services, the police and army were dispatched. Some 37 emergency physicians, who happened to be attending a professional conference in nearby Hanover, also provided assistance during the early hours of the rescue effort, as did units of the British Forces Germany. While the driver and many passengers in the front part of the train survived with minor to moderate injuries, very few passengers survived in the rear carriages, which crashed into the concrete bridge pile at a speed of . 101 were killed, including the two railway workers who had been standing under the bridge. 
ICE 787, travelling from Hamburg to Hanover, had passed under the bridge going in the opposite direction only two minutes earlier. That train had passed the bridge one minute ahead of schedule, while the accident train was one minute behind schedule. Had both been on time, ICE 787 may have also been impacted by the derailment. By 13:45 authorities had given emergency treatment to 87 people, of whom the 27 most severely injured were airlifted to hospitals. Causes The disintegrated resilient wheel was the cause of the accident, but several factors contributed to the severity of the damage, including proximity to the bridge and flipping point, and the wheel being on a car near the front of the train, causing many cars to derail. Wheel design The ICE 1 trains were originally equipped with single-cast wheelsets, known as monobloc wheels. Once in service it soon became apparent that this design could, as a result of metal fatigue and uneven wear, result in resonance and vibration at cruising speed. Passengers noticed this particularly in the restaurant car, where there were reports of loud vibrations in the dinnerware and of glasses "creeping" across tables. Managers in the railway organisation had experienced these severe vibrations on a previous trip and asked to have the problem solved. In response engineers decided that to solve the problem, the suspension of ICE cars could be improved with the use of a rubber damping ring between the rail-contacting steel tyre and the steel wheel body. A similar design (known as resilient wheels) had been employed successfully in trams around the world, at much lower speeds. This kind of wheel, dubbed a wheel–tyre design, consisted of a wheel body surrounded by a rubber damper and then a relatively thin metal tyre. The new design was not tested at high speed in Germany before it was made operational, but was successful at resolving the issue of vibration at cruising speeds. Decade-long experience at high speed gathered by train manufacturers and railway companies in Italy, France and Japan was not considered. At the time, there were no facilities in Germany that could test the actual failure limit of the wheels, and so complete prototypes were never tested physically. The design and specification relied greatly on available materials data and theory. The very few laboratory and rail tests that were performed did not measure wheel behaviour with extended wear conditions or speeds greater than normal cruising. Nevertheless, over several years the wheels had been reliable and, until the accident, had not caused any major problems. In July 1997, nearly one year before the disaster, Üstra, the company that operates Hanover's tram network, discovered fatigue cracks in dual block wheels on trams running at about . It began changing wheels before fatigue cracks could develop, much earlier than was legally required by the specification. Üstra reported its findings in a warning to all other users of wheels built with similar designs, including Deutsche Bahn, in late 1997. According to Üstra, Deutsche Bahn replied by stating that they had not noticed problems in their trains. The (Fraunhofer LBF) in Darmstadt was charged with the task of determining the cause of the accident. It was revealed later that the institute had told the DB management as early as 1992 about its concerns about possible wheel–tyre failure. 
It was soon apparent that dynamic repetitive forces had not been considered in the modelling done during the design phase, and the resulting design lacked an adequate margin of safety. The following factors, overlooked during design, were noted: The tyres were flattened into an ellipse as the wheel turned through each revolution (approximately 500,000 times during a typical day in service on an ICE train), with corresponding fatigue effects. In contrast to the monobloc wheel design, cracks could form on the inside as well as the outside of the tyre. As the tyre wore thinner, dynamic forces increased, causing crack growth. Flat spots and ridges or swells in the tyre dramatically increased the dynamic forces on the assembly and greatly accelerated wear. Failure to stop train Failing to stop the train resulted in a catastrophic series of events. Had the train been stopped immediately after the disintegration of the wheel, it is unlikely that the subsequent events would have occurred. Valuable time was lost when the train manager refused to stop the train until he had investigated the problem himself, saying this was company policy. This decision was upheld in court, absolving the train manager of all charges. Given that he was a customer service employee and not a train maintainer or engineer, he had no more authority to make an engineering judgment about whether or not to stop the train than did any passenger. Maintenance About the time of the disaster, the technicians at Deutsche Bahn's maintenance facility in Munich used only standard flashlights for visual inspection of the tyres, instead of metal fatigue detection equipment. Previously, advanced testing machines had been used; however the equipment generated many false positive error messages, so it was considered unreliable and its use was discontinued. During the week prior to the Eschede disaster, three separate automated checks indicated that a wheel was defective. Investigators discovered, from a maintenance report generated by the train's on-board computer, that two months prior to the Eschede disaster, conductors and other train staff filed eight separate complaints about the noises and vibrations generated from the bogie with the defective wheel; the company did not replace the wheel. Deutsche Bahn said that its inspections were proper at the time and that the engineers could not have predicted the wheel fracture. Other factors The design of the overbridge may have also contributed to the accident because it had two thin piers holding up the bridge on either side, instead of the spans going from solid abutments to solid abutments. The bridge that collapsed in the Granville rail disaster of 1977 had a similar weakness. The bridge built after the disaster is a cantilevered design that does not have this vulnerability. Another contributing factor to the casualty rate was the use of welds that "unzipped" during the crash in the carriage bodies. Consequences Legal Immediately after the accident, Deutsche Bahn paid 30,000 Deutsche Marks (about US$19,000) for each fatality to the applicable families. At a later time Deutsche Bahn settled with some victims. Deutsche Bahn stated that it paid the equivalent of more than 30 million U.S. dollars to survivors and the families of victims. In August 2002, two Deutsche Bahn officials and one engineer were charged with manslaughter. The trial lasted 53 days with expert witnesses from around the world testifying. The case ended in a plea bargain in April 2003. 
According to the German code of criminal procedure, if the defendant has not been found to bear substantial guilt, and if the state attorney and the defendant agree, the defendant may pay a fine and the criminal proceedings are dismissed with prejudice and without a verdict. Each engineer paid €10,000 (around US$12,000). Technical Within weeks, all wheels of similar design were replaced with monobloc wheels. The entire German railway network was checked for similar arrangements of switches close to possible obstacles. Rescue workers at the crash site experienced considerable difficulties in cutting their way through the train to gain access to the victims. Both the aluminium framework and the pressure-proof windows offered unexpected resistance to rescue equipment. As a result, all trains were refitted with windows that have breaking seams. Memorial Udo Bauch, a survivor who was left disabled by the accident, built his own memorial with his own money. Bauch said that the chapel received 5,000 to 6,000 visitors per year. One year after Bauch's memorial was built, an official memorial, funded partly by Deutsche Bahn, was established. The official memorial was opened on 11 May 2001 in the presence of 400 relatives as well as many dignitaries, rescuers and residents of Eschede. The memorial consists of 101 wild cherry trees, with each representing one fatality. The trees have been planted along the rails near the bridge and with the switch in front. From the field, a staircase leads up to the street and a gate; on the other side of the street a number of stairs lead further up to nowhere. There is an inscription on the side of the stone gate and an inscription on a memorial wall that also lists the names of the fatalities placed at the centre of the trees. Dramatization The Eschede derailment, as well as the investigation into the incident, was covered as the fifth episode of the first season of the National Geographic TV documentary series Seconds from Disaster, entitled "Derailment at Eschede" which was filmed on the Ecclesbourne Valley Railway in Derbyshire, UK. See also National Geographic Seconds from Disaster episodes Lathen train collision – 2006 maglev train crash in Germany Lists of rail accidents List of structural failures and collapses List of accidents and disasters by death toll References Citations General references The Eschede Reports ICE Train Accident in Eschede – Recent News Summary Official Eschede Website showing memorial Further reading O'Connor, Bryan, (NASA), "Eschede Train Disaster", Leadership ViTS Meeting, 7 May 2007 External links The ICE/ICT pages Eschede – Zug 884("Eschede – Train 884"), a German documentary film about the disaster by Raymond Ley (2008, 90 minutes). "Das ICE-Unglück von Eschede" ("The ICE accident in Eschede") Derailments in Germany Railway accidents in 1998 Intercity Express 1998 in Germany June 1998 events in Germany Transport in Lower Saxony Bridge disasters in Germany Bridge disasters caused by collision 20th century in Lower Saxony Accidents and incidents involving Deutsche Bahn Engineering failures
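The Causes section above quotes roughly 500,000 wheel revolutions per day for a typical ICE duty cycle. A back-of-the-envelope check, assuming a wheel diameter of about 0.92 m (an assumed value for illustration, not a figure from the article), suggests this corresponds to a plausible daily distance:

```python
import math

# Rough plausibility check of the ~500,000 revolutions/day figure quoted above.
# The 0.92 m wheel diameter is an assumed value for illustration.
wheel_diameter_m = 0.92
circumference_m = math.pi * wheel_diameter_m       # ~2.89 m travelled per revolution

revolutions_per_day = 500_000
distance_km = revolutions_per_day * circumference_m / 1000
print(f"{distance_km:.0f} km per day")             # ~1445 km, i.e. a long daily ICE diagram

# Equivalently, at a 200 km/h cruising speed that distance takes about:
print(f"{distance_km / 200:.1f} hours of running")  # ~7.2 hours
```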
Eschede train disaster
Technology,Engineering
3,340
5,119,324
https://en.wikipedia.org/wiki/Chi%20Centauri
Chi Centauri (χ Cen, χ Centauri) is a star in the constellation Centaurus. χ Centauri is a blue-white B-type main sequence dwarf with a mean apparent magnitude of +4.36. It is approximately 510 light years from Earth. It is classified as a Beta Cephei type variable star and its brightness varies by 0.02 magnitudes with a period of 50.40 minutes. This star is a proper motion member of the Upper Centaurus–Lupus sub-group in the Scorpius–Centaurus OB association, the nearest such co-moving association of massive stars to the Sun. References Centauri, Chi Beta Cephei variables B-type main-sequence stars Centaurus Upper Centaurus Lupus 5285 068862 122980 CD-40 8405
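The apparent magnitude (+4.36) and distance (about 510 light years) quoted above can be related through the standard distance-modulus formula, M = m - 5 log10(d / 10 pc). The short Python sketch below is illustrative only: it uses the article's figures plus the usual light-year-to-parsec conversion and ignores interstellar extinction, so the derived absolute magnitude is approximate.

import math

apparent_mag = 4.36        # mean apparent magnitude quoted in the article
distance_ly = 510.0        # distance quoted in the article
LY_PER_PARSEC = 3.2616     # light years per parsec

distance_pc = distance_ly / LY_PER_PARSEC
absolute_mag = apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

print(f"Distance: {distance_pc:.0f} pc")
print(f"Absolute magnitude (extinction ignored): {absolute_mag:+.2f}")
# Gives roughly M = -1.6, within the range expected for a B-type main-sequence star.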
Chi Centauri
Astronomy
174
6,864,576
https://en.wikipedia.org/wiki/NGC%202812
NGC 2812 is a lenticular galaxy in the constellation Cancer. It was discovered by Albert Marth on February 17, 1865. References External links Lenticular galaxies Unbarred lenticular galaxies Cancer (constellation) 2812 26242
NGC 2812
Astronomy
48
37,329,132
https://en.wikipedia.org/wiki/Exosporium%20livistonicola
Distocercospora livistonae was a fungus in the family Mycosphaerellaceae; in 2017 it was moved to the genus Exosporium as Exosporium livistonicola. References Fungi described in 2006 Fungal plant pathogens and diseases Fungus species
Exosporium livistonicola
Biology
61
3,525,665
https://en.wikipedia.org/wiki/Lithotroph
Lithotrophs are a diverse group of organisms using an inorganic substrate (usually of mineral origin) to obtain reducing equivalents for use in biosynthesis (e.g., carbon dioxide fixation) or energy conservation (i.e., ATP production) via aerobic or anaerobic respiration. While lithotrophs in the broader sense include photolithotrophs like plants, chemolithotrophs are exclusively microorganisms; no known macrofauna possesses the ability to use inorganic compounds as electron sources. Macrofauna and lithotrophs can form symbiotic relationships, in which case the lithotrophs are called "prokaryotic symbionts". An example of this is chemolithotrophic bacteria in giant tube worms or plastids, which are organelles within plant cells that may have evolved from photolithotrophic cyanobacteria-like organisms. Chemolithotrophs belong to the domains Bacteria and Archaea. The term "lithotroph" was created from the Greek terms 'lithos' (rock) and 'troph' (consumer), meaning "eaters of rock". Many but not all lithoautotrophs are extremophiles. The last universal common ancestor of life is thought to have been a chemolithotroph (owing to the presence of chemolithotrophy among the prokaryotes). Different from a lithotroph is an organotroph, an organism which obtains its reducing agents from the catabolism of organic compounds. History The term was suggested in 1946 by Lwoff and collaborators. Biochemistry Lithotrophs consume reduced inorganic compounds (electron donors). Chemolithotrophs A chemolithotroph is able to use reduced inorganic compounds in its energy-producing reactions. This process involves the oxidation of inorganic compounds coupled to ATP synthesis. The majority of chemolithotrophs are chemolithoautotrophs, able to fix carbon dioxide (CO2) through the Calvin cycle, a metabolic pathway in which CO2 is converted to glucose. This group of organisms includes sulfur oxidizers, nitrifying bacteria, iron oxidizers, and hydrogen oxidizers. The term "chemolithotrophy" refers to a cell's acquisition of energy from the oxidation of inorganic compounds, also known as electron donors. This form of metabolism is believed to occur only in prokaryotes and was first characterized by Ukrainian microbiologist Sergei Winogradsky. Habitat of chemolithotrophs The survival of these bacteria is dependent on the physicochemical conditions of their environment. Although they are sensitive to certain factors such as the quality of the inorganic substrate, they are able to thrive under some of the most inhospitable conditions in the world, such as temperatures above 110 degrees Celsius and pH values below 2. The most important requirement for chemolithotrophic life is an abundant source of inorganic compounds, which provide a suitable electron donor in order to fix CO2 and produce the energy the microorganism needs to survive. Since chemosynthesis can take place in the absence of sunlight, these organisms are found mostly around hydrothermal vents and other locations rich in inorganic substrate. The energy obtained from inorganic oxidation varies depending on the substrate and the reaction. For example, the oxidation of hydrogen sulfide to elemental sulfur by ½ O2 produces far less energy (50 kcal/mol or 210 kJ/mol) than the oxidation of elemental sulfur to sulfate (150 kcal/mol or 627 kJ/mol) by 3/2 O2. The majority of lithotrophs fix carbon dioxide through the Calvin cycle, an energetically expensive process. 
For some low-energy substrates, such as ferrous iron, the cells must process large amounts of inorganic substrate to secure just a small amount of energy. This makes their metabolic process inefficient in many environments and hinders them from thriving. Overview of the metabolic process There is a fairly large variation in the types of inorganic substrates that these microorganisms can use to produce energy. Sulfur is one of many inorganic substrates that can be used in different reduced forms depending on the specific biochemical process that a lithotroph uses. The chemolithotrophs that are best documented are aerobic respirers, meaning that they use oxygen in their metabolic process. However, the list of such microorganisms that employ anaerobic respiration is growing. At the heart of this metabolic process is an electron transport system that is similar to that of chemoorganotrophs. The major difference between these two types of microorganism is that chemolithotrophs directly provide electrons to the electron transport chain, while chemoorganotrophs must generate their own cellular reducing power by oxidizing reduced organic compounds. Chemolithotrophs bypass this by obtaining their reducing power directly from the inorganic substrate or by the reverse electron transport reaction. Certain specialized chemolithotrophic bacteria use different derivatives of the Sox system, a central pathway specific to sulfur oxidation. This ancient and unique pathway illustrates the capacity that chemolithotrophs have evolved for extracting energy from inorganic substrates such as sulfur. In chemolithotrophs, the compounds – the electron donors – are oxidized in the cell, and the electrons are channeled into respiratory chains, ultimately producing ATP. The electron acceptor can be oxygen (in aerobic bacteria), but a variety of other electron acceptors, organic and inorganic, are also used by various species. Aerobic bacteria such as the nitrifying bacterium Nitrobacter use oxygen to oxidize nitrite to nitrate. Some lithotrophs produce organic compounds from carbon dioxide in a process called chemosynthesis, much as plants do in photosynthesis. Plants use energy from sunlight to drive carbon dioxide fixation, but chemosynthesis can take place in the absence of sunlight (e.g., around a hydrothermal vent). Ecosystems establish themselves in and around hydrothermal vents as an abundance of inorganic substances, notably hydrogen, is constantly supplied via magma in pockets below the sea floor. Other lithotrophs are able to directly use inorganic substances, e.g., ferrous iron, hydrogen sulfide, elemental sulfur, thiosulfate, or ammonia, for some or all of their energy needs. Examples of chemolithotrophic pathways include the oxidation of hydrogen, reduced sulfur compounds, ammonia, and ferrous iron, any of which may be coupled to oxygen or nitrate as the electron acceptor. Photolithotrophs Photolithotrophs such as plants obtain energy from light and therefore use inorganic electron donors such as water only to fuel biosynthetic reactions (e.g., carbon dioxide fixation in lithoautotrophs). Lithoheterotrophs versus lithoautotrophs Lithotrophic bacteria cannot, of course, use their inorganic energy source as a carbon source for the synthesis of their cells. They choose one of three options: Lithoheterotrophs do not have the ability to fix carbon dioxide and must consume additional organic compounds in order to break them apart and use their carbon. Only a few bacteria are fully lithoheterotrophic. Lithoautotrophs are able to use carbon dioxide from the air as a carbon source, the same way plants do. 
Mixotrophs will take up and use organic material to complement their carbon dioxide fixation source (a mix of autotrophy and heterotrophy). Many lithotrophs are recognized as mixotrophic with regard to their carbon metabolism. Chemolithotrophs versus photolithotrophs In addition to this division, lithotrophs differ in the initial energy source which initiates ATP production: Chemolithotrophs use the above-mentioned inorganic compounds for aerobic or anaerobic respiration. The energy produced by the oxidation of these compounds is enough for ATP production. Some of the electrons derived from the inorganic donors also need to be channeled into biosynthesis. In most cases, additional energy has to be invested to transform these reducing equivalents into the forms and redox potentials needed (mostly NADH or NADPH), which occurs by reverse electron transfer reactions. Photolithotrophs use light as their energy source. These organisms are photosynthetic; examples of photolithotrophic bacteria are purple bacteria (e.g., Chromatiaceae), green bacteria (Chlorobiaceae and Chloroflexota), and cyanobacteria. Purple and green bacteria oxidize sulfide, sulfur, sulfite, iron or hydrogen. Cyanobacteria and plants extract reducing equivalents from water, i.e., they oxidize water to oxygen. The electrons obtained from the electron donors are not used for ATP production (as long as there is light); they are used in biosynthetic reactions. Some photolithotrophs switch to chemolithotrophic metabolism in the dark. Geological significance Lithotrophs participate in many geological processes, such as the formation of soil and the biogeochemical cycling of carbon, nitrogen, and other elements. Lithotrophs are also associated with the modern-day problem of acid mine drainage. Lithotrophs may be present in a variety of environments, including deep terrestrial subsurfaces, soils, mines, and endolith communities. Soil formation Cyanobacteria are a primary example of lithotrophs that contribute to soil formation. This group of bacteria comprises nitrogen-fixing photolithotrophs capable of using energy from sunlight and inorganic nutrients from rocks as reductants. This capability allows for their growth and development on native, oligotrophic rocks and aids in the subsequent deposition of their organic matter (nutrients) for other organisms to colonize. Colonization can initiate the process of organic compound decomposition, a primary factor in soil genesis. Such a mechanism is thought to have been part of the early evolutionary processes that helped shape the biological Earth. Biogeochemical cycling The biogeochemical cycling of elements is an essential role of lithotrophs within microbial environments. For example, in the carbon cycle, there are certain bacteria classified as photolithoautotrophs that generate organic carbon from atmospheric carbon dioxide. Certain chemolithoautotrophic bacteria can also produce organic carbon, some even in the absence of light. Similar to plants, these microbes provide a usable form of energy for organisms to consume. Conversely, there are lithotrophs that have the ability to ferment, meaning they can convert organic carbon into another usable form. Lithotrophs play an important role in the biological aspect of the iron cycle. These organisms can use iron as either an electron donor, Fe(II) → Fe(III), or as an electron acceptor, Fe(III) → Fe(II). Another example is the cycling of nitrogen. 
Many lithotrophic bacteria play a role in reducing inorganic nitrogen gas to ammonium in a process called nitrogen fixation. Likewise, there are many lithotrophic bacteria that also convert ammonium into nitrogen gas in a process called denitrification. Carbon and nitrogen are important nutrients, essential for metabolic processes, and can sometimes be the limiting factor that affects organismal growth and development. Thus, lithotrophs are key players in both providing and removing these important resources. Acid mine drainage Lithotrophic microbes are responsible for the phenomenon known as acid mine drainage. Typically occurring in mining areas, this process concerns the active metabolism of pyrite and other reduced sulfur compounds to sulfate. One example is the acidophilic bacterium A. ferrooxidans, which uses iron(II) sulfide (FeS2) to generate sulfuric acid. The acidic product of these specific lithotrophs has the potential to drain from the mining area via water run-off and enter the environment. Acid mine drainage drastically alters the acidity (pH values of 2–3) and chemistry of groundwater and streams, and may endanger plant and animal populations downstream of mining areas. Activities similar to acid mine drainage, but on a much smaller scale, are also found in natural conditions such as the rocky beds of glaciers, in soil and talus, on stone monuments and buildings and in the deep subsurface. Astrobiology It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions. On January 24, 2014, NASA reported that its ongoing studies with the Curiosity and Opportunity rovers on Mars would search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective. See also Autotroph Electrolithoautotroph Endolith Heterotroph Microbial metabolism Organotroph Dissimilatory metal-reducing microorganisms Zetaproteobacteria References External links Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014). Lithotrophs Metabolism Microbiology Soil biology
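The habitat section of this article quotes 50 kcal/mol (210 kJ/mol) for the oxidation of hydrogen sulfide to elemental sulfur and 150 kcal/mol (627 kJ/mol) for the oxidation of elemental sulfur to sulfate. A minimal Python sketch cross-checking those unit conversions with the standard thermochemical factor of 4.184 kJ per kcal is shown below; the reaction labels are shorthand added here for illustration.

KJ_PER_KCAL = 4.184  # kJ per thermochemical kilocalorie

steps = {
    "H2S -> S(0) (with 1/2 O2)": 50.0,       # kcal/mol, figure from the text
    "S(0) -> sulfate (with 3/2 O2)": 150.0,  # kcal/mol, figure from the text
}

for name, kcal in steps.items():
    print(f"{name}: {kcal:.0f} kcal/mol = {kcal * KJ_PER_KCAL:.0f} kJ/mol")
# Prints about 209 and 628 kJ/mol, matching the rounded 210 and 627 kJ/mol quoted
# in the text; the sulfate-forming step yields roughly three times more energy per mole.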
Lithotroph
Chemistry,Biology
2,867
19,630,428
https://en.wikipedia.org/wiki/Geotraces
GEOTRACES is an international research programme for improving understanding of marine biogeochemical cycles. GEOTRACES is organised internationally under the auspices of the Scientific Committee on Oceanic Research (originally under the International Council for Science). Its management is overseen by a Scientific Steering Committee (SSC), with representatives of 15 nations from across the globe, and the programme involves the active participation of more than 30 nations. Genesis The concept of a cycle describes the pathway by which a chemical element moves through the three major compartments of the Earth (continents, atmosphere, and ocean). Because these cycles are directly related to climate dynamics and are heavily impacted by global change, it is essential to quantify them. GEOTRACES focuses on the oceanic part of the cycles, with the ambition to map the distribution of trace elements and isotopes in the ocean and to understand the processes controlling this distribution. Some of these trace elements are directly linked to climate via, for example, their role as essential nutrients for life; others allow quantification of ocean processes (origin and dynamics of matter, age of water masses, etc.); some of them are pollutants (for example, lead or mercury). Modelling based on the data collected will thus advance understanding of the present and past state of the ocean and improve projections of the ocean’s response to global change. After some years in the planning and enabling phase, the GEOTRACES Science Plan was published in 2006 and the GEOTRACES programme formally launched its seagoing effort in January 2010. This phase is expected to last a decade. Challenges and benefits Trace elements serve as regulators of biological processes in the ocean, influencing marine ecosystem dynamics and the carbon cycle. Despite this significance, knowledge of the marine biogeochemical cycles of these essential micronutrients is surprisingly incomplete. GEOTRACES is quantifying the supply, removal, internal cycling, chemical form and distribution of essential micronutrients and other trace elements. Understanding the sensitivity of these biogeochemical cycles to changing environmental conditions will improve projections of the ocean’s response to global change. The cycles of many trace elements and isotopes have been impacted significantly by human activity, which has increased the discharge of harmful elements into the ocean. GEOTRACES’ emphasis on understanding the processes regulating the marine biogeochemical cycles of trace elements will improve prediction of the transport and fate of contaminants in the ocean and thereby help to protect the ocean environment. Much of what is known about ocean conditions in the past and, therefore, about the ocean’s role in climate variability is derived from trace element and isotope patterns recorded in marine archives (sediments, corals, etc.). Greater knowledge of the processes governing these tracers in the modern ocean will improve interpretation of ocean conditions in the past, from which more reliable prediction of future changes can be made. Goals Benefits will be realised by pursuing two overarching goals: To determine global ocean distributions of selected trace elements and isotopes, and to evaluate the sources, sinks, and internal cycling of these species in order to characterise more completely the physical, chemical and biological processes regulating their distributions. 
To understand the response of trace element and isotope cycles to global change, to help predict the future and to improve chemical proxies for past changes in the ocean environment. Activities GEOTRACES cruises The central component of GEOTRACES is a series of cruises spanning the global ocean and sampling the full water column. These dedicated GEOTRACES cruises collect seawater for analysis of a wide range of trace elements and isotopes. This strategy is guided by the principle that more will be learned through complementary investigation of multiple trace elements than can be achieved in an exhaustive study of one element in isolation. The first GEOTRACES cruise was cruise GPc06 in August 2005 in the North Pacific, though the programme was officially launched in January/February 2010. The first U.S. GEOTRACES cruise was in fall 2010 on the R/V Knorr in the North Atlantic. Intercalibration Ensuring accuracy of the results is essential if GEOTRACES is to build a meaningful global dataset. To this end, the Standards and Intercalibration (S&I) Committee is charged with ensuring that accurate and precise data are generated in the GEOTRACES programme through the use of appropriate sampling protocols, analytical standards and certified reference materials, and the active sharing of methods and results. Since the concentration, activity, or chemical speciation of a trace element or isotope can be affected by sampling methods, sample handling, and analytical determinations, GEOTRACES follows the strategy of having cruises occupy a common station along their transects. At the same time, two U.S.-led cruises (2008 and 2009) provided samples for intercalibration to laboratories from many countries. Seawater samples are available for use by other labs that wish to join this effort. Simple data comparisons such as depth profiles show whether there are disagreements and, if so, the investigators can examine their methods and even their data work-ups to identify and remedy the problems. Data management Compilation of data into secure and readily searchable databases ensures ease of use and is fundamental to the success of the programme. The GEOTRACES Data Assembly Centre (GDAC) is responsible for the compilation, quality control and secure archiving of data received from national data centres and from core international GEOTRACES cruises. Its main aims are the integration of core GEOTRACES data into global data sets and making these data accessible to participating scientists and the larger science community according to the GEOTRACES data policy. The GDAC is hosted at the British Oceanographic Data Centre and is overseen by a dedicated committee with international representation. GEOTRACES Data Products GEOTRACES Data Products are freely available on-line. The third Intermediate Data Product (IDP2021) was released in November 2021. It contains hydrographical and marine geochemical data acquired during the first 10 years of the programme. The main motivation for distributing the product at this time is to strengthen and intensify collaboration with the broader ocean research community. At the same time, GEOTRACES is seeking feedback to improve future data products. The GEOTRACES Intermediate Data Product consists of two parts: the digital data package and the eGEOTRACES Electronic Atlas. The digital data package (available at http://www.bodc.ac.uk/geotraces/data/dp) contains data from 77 cruises and more than 800 hydrographic and geochemical parameters. 
The data cover the global ocean, with data density highest in the Atlantic. The eGEOTRACES Electronic Atlas (available at www.egeotraces.org) is based on the digital data package and provides 2D and 3D images of the ocean distribution of many of the parameters. The 3D figures provide geographical context crucial for correctly assessing the extent and origin of tracer plumes as well as for inferring processes acting on the tracers and shaping their distribution. The numerous links to other tracers, sections and basins found on section plots and 3D animations allow quick switching between parameters and domains and facilitate comparative studies. In addition, eGEOTRACES can help in teaching and outreach activities and can also help convey societally relevant scientific results to interested lay audiences and decision makers. Organizational structure The GEOTRACES SSC was initially led by co-chairs Prof. Robert F. Anderson of the Lamont–Doherty Earth Observatory (Columbia University) and Prof. Gideon M. Henderson of the University of Oxford. The co-chair positions later passed to Dr. Maeve Lohan of the University of Southampton and Dr. Karen Casciotti of Stanford University. In 2018, the SSC was co-chaired by Phoebe Lam (University of California, Santa Cruz) and Andy Bowie (University of Tasmania). References External links GEOTRACES program website GEOTRACES International Data Assembly Centre (GDAC) GEOTRACES Peer-reviewed Papers Database Biogeochemical cycle Chemical oceanography Research projects
Geotraces
Chemistry
1,634
26,428
https://en.wikipedia.org/wiki/Rosetta%20Stone
The Rosetta Stone is a stele of granodiorite inscribed with three versions of a decree issued in 196 BC during the Ptolemaic dynasty of Egypt, on behalf of King Ptolemy V Epiphanes. The top and middle texts are in Ancient Egyptian using hieroglyphic and Demotic scripts, respectively, while the bottom is in Ancient Greek. The decree has only minor differences across the three versions, making the Rosetta Stone key to deciphering the Egyptian scripts. The stone was carved during the Hellenistic period and is believed to have originally been displayed within a temple, possibly at Sais. It was probably moved in late antiquity or during the Mamluk period, and was eventually used as building material in the construction of Fort Julien near the town of Rashid (Rosetta) in the Nile Delta. It was found there in July 1799 by French officer Pierre-François Bouchard during the Napoleonic campaign in Egypt. It was the first Ancient Egyptian bilingual text recovered in modern times, and it aroused widespread public interest with its potential to decipher this previously untranslated hieroglyphic script. Lithographic copies and plaster casts soon began circulating among European museums and scholars. When the British defeated the French, they took the stone to London under the terms of the Capitulation of Alexandria in 1801. Since 1802, it has been on public display at the British Museum almost continuously and it is the most visited object there. Study of the decree was already underway when the first complete translation of the Greek text was published in 1803. Jean-François Champollion announced the transliteration of the Egyptian scripts in Paris in 1822; it took longer still before scholars were able to read Ancient Egyptian inscriptions and literature confidently. Major advances in the decoding were recognition that the stone offered three versions of the same text (1799); that the Demotic text used phonetic characters to spell foreign names (1802); that the hieroglyphic text did so as well, and had pervasive similarities to the Demotic (1814); and that phonetic characters were also used to spell native Egyptian words (1822–1824). Three other fragmentary copies of the same decree were discovered later, and several similar Egyptian bilingual or trilingual inscriptions are now known, including three slightly earlier Ptolemaic decrees: the Decree of Alexandria in 243 BC, the Decree of Canopus in 238 BC, and the Memphis decree of Ptolemy IV, c. 218 BC. Though the Rosetta Stone is now known to not be unique, it was the essential key to the modern understanding of ancient Egyptian literature and civilisation. The term "Rosetta Stone" is now used to refer to the essential clue to a new field of knowledge. Description The Rosetta Stone is listed as "a stone of black granodiorite, bearing three inscriptions ... found at Rosetta" in a contemporary catalogue of the artefacts discovered by the French expedition and surrendered to British troops in 1801. At some period after its arrival in London, the inscriptions were coloured in white chalk to make them more legible, and the remaining surface was covered with a layer of carnauba wax designed to protect it from visitors' fingers. This gave a dark colour to the stone that led to its mistaken identification as black basalt. These additions were removed when the stone was cleaned in 1999, revealing the original dark grey tint of the rock, the sparkle of its crystalline structure, and a pink vein running across the top left corner. 
Comparisons with the Klemm collection of Egyptian rock samples showed a close resemblance to rock from a small granodiorite quarry at Gebel Tingar on the west bank of the Nile, west of Elephantine in the region of Aswan; the pink vein is typical of granodiorite from this region. The Rosetta Stone is approximately 112 cm high at its highest point, 76 cm wide, and 28 cm thick, and it weighs approximately 760 kg. It bears three inscriptions: the top register in Ancient Egyptian hieroglyphs, the second in the Egyptian Demotic script, and the third in Ancient Greek. Contrary to a common misunderstanding, these are three scripts but only two languages: Egyptian, written in both the hieroglyphic and Demotic scripts, and Greek. The front surface is polished and the inscriptions lightly incised on it; the sides of the stone are smoothed, but the back is only roughly worked, presumably because it would not have been visible when the stele was erected. Original stele The Rosetta Stone is a fragment of a larger stele. No additional fragments were found in later searches of the Rosetta site. Owing to its damaged state, none of the three texts is complete. The top register, composed of Egyptian hieroglyphs, suffered the most damage. Only the last 14 lines of the hieroglyphic text can be seen; all of them are broken on the right side, and 12 of them on the left. Below it, the middle register of demotic text has survived best; it has 32 lines, of which the first 14 are slightly damaged on the right side. The bottom register of Greek text contains 54 lines, of which the first 27 survive in full; the rest are increasingly fragmentary due to a diagonal break at the bottom right of the stone. The full length of the hieroglyphic text and the total size of the original stele, of which the Rosetta Stone is a fragment, can be estimated based on comparable steles that have survived, including other copies of the same decree. The slightly earlier Decree of Canopus, erected in 238 BC during the reign of Ptolemy III, contains 36 lines of hieroglyphic text, 73 of demotic text, and 74 of Greek; the texts are of similar length. From such comparisons, it can be estimated that an additional 14 or 15 lines of hieroglyphic inscription are missing from the top register of the Rosetta Stone. In addition to the inscriptions, there would probably have been a scene depicting the king being presented to the gods, topped with a winged disc, as on the Canopus Stele. These parallels, and a hieroglyphic sign for "stela" on the stone itself (Gardiner's sign list O26), suggest that it originally had a rounded top (Parkinson et al. 1999, p. 26). The original stele would therefore have stood considerably taller than the surviving fragment. Memphis decree and its context The stele was erected after the coronation of King Ptolemy V and was inscribed with a decree that established the divine cult of the new ruler. The decree was issued by a congress of priests who gathered at Memphis. The date is given as "4 Xandikos" in the Macedonian calendar and "18 Mekhir" in the Egyptian calendar, corresponding to a date in March 196 BC. The year is stated as the ninth year of Ptolemy V's reign (equated with 197/196 BC), which is confirmed by naming four priests who officiated in that year: Aetos son of Aetos was priest of the divine cults of Alexander the Great and the five Ptolemies down to Ptolemy V himself; the other three priests named in turn in the inscription are those who led the worship of Berenice Euergetis (wife of Ptolemy III), Arsinoe Philadelphos (wife and sister of Ptolemy II), and Arsinoe Philopator, mother of Ptolemy V. 
However, a second date is also given in the Greek and hieroglyphic texts, corresponding to the official anniversary of Ptolemy's coronation. The demotic text conflicts with this, listing consecutive days in March for the decree and the anniversary. It is uncertain why this discrepancy exists, but it is clear that the decree was issued in 196 BC and that it was designed to re-establish the rule of the Ptolemaic kings over Egypt. The decree was issued during a turbulent period in Egyptian history. Ptolemy V Epiphanes, the son of Ptolemy IV Philopator and his wife and sister Arsinoe, reigned from 204 to 181 BC. He had become ruler at the age of five after the sudden death of both of his parents, who were murdered in a conspiracy that involved Ptolemy IV's mistress Agathoclea, according to contemporary sources. The conspirators effectively ruled Egypt as Ptolemy V's guardians until a revolt broke out two years later under general Tlepolemus, when Agathoclea and her family were lynched by a mob in Alexandria. Tlepolemus, in turn, was replaced as guardian in 201 BC by Aristomenes of Alyzia, who was chief minister at the time of the Memphis decree. Political forces beyond the borders of Egypt exacerbated the internal problems of the Ptolemaic kingdom. Antiochus III the Great and Philip V of Macedon had made a pact to divide Egypt's overseas possessions. Philip had seized several islands and cities in Caria and Thrace, while the Battle of Panium (198 BC) had resulted in the transfer of Coele-Syria, including Judaea, from the Ptolemies to the Seleucids. Meanwhile, in the south of Egypt, there was a long-standing revolt that had begun during the reign of Ptolemy IV, led by Horwennefer and by his successor Ankhwennefer. Both the war and the internal revolt were still ongoing when the young Ptolemy V was officially crowned at Memphis at the age of 12 (seven years after the start of his reign) and when, just over a year later, the Memphis decree was issued. Stelae of this kind, which were established on the initiative of the temples rather than that of the king, are unique to Ptolemaic Egypt. In the preceding Pharaonic period it would have been unheard of for anyone but the divine rulers themselves to make national decisions: by contrast, this way of honouring a king was a feature of Greek cities. Rather than making his eulogy himself, the king had himself glorified and deified by his subjects or representative groups of his subjects. The decree records that Ptolemy V gave a gift of silver and grain to the temples. It also records that there was particularly high flooding of the Nile in the eighth year of his reign, and he had the excess waters dammed for the benefit of the farmers. In return the priesthood pledged that the king's birthday and coronation days would be celebrated annually and that all the priests of Egypt would serve him alongside the other gods. The decree concludes with the instruction that a copy was to be placed in every temple, inscribed in the "language of the gods" (Egyptian hieroglyphs), the "language of documents" (Demotic), and the "language of the Greeks" as used by the Ptolemaic government. Securing the favour of the priesthood was essential for the Ptolemaic kings to retain effective rule over the populace. The High Priests of Memphis—where the king was crowned—were particularly important, as they were the highest religious authorities of the time and had influence throughout the kingdom. 
Given that the decree was issued at Memphis, the ancient capital of Egypt, rather than Alexandria, the centre of government of the ruling Ptolemies, it is evident that the young king was anxious to gain their active support. Thus, although the government of Egypt had been Greek-speaking ever since the conquests of Alexander the Great, the Memphis decree, like the three similar earlier decrees, included texts in Egyptian to show its connection to the general populace by way of the literate Egyptian priesthood. There can be no one definitive English translation of the decree, not only because modern understanding of the ancient languages continues to develop, but also because of the minor differences between the three original texts. Older translations by E. A. Wallis Budge (1904, 1913) and Edwyn R. Bevan (1927) are easily available but are now outdated, as can be seen by comparing them with the recent translation by R. S. Simpson, which is based on the demotic text and can be found online, or with the modern translations of all three texts, with introduction and facsimile drawing, that were published by Quirke and Andrews in 1989. The stele was almost certainly not originally placed at Rashid (Rosetta) where it was found, but more likely came from a temple site farther inland, possibly the royal town of Sais. The temple from which it originally came was probably closed around AD 392 when Roman emperor Theodosius I ordered the closing of all non-Christian temples of worship. The original stele broke at some point, its largest piece becoming what we now know as the Rosetta Stone. Ancient Egyptian temples were later used as quarries for new construction, and the Rosetta Stone probably was re-used in this manner. Later it was incorporated in the foundations of a fortress constructed by the Mameluke Sultan Qaitbay (died 1496) to defend the Bolbitine branch of the Nile at Rashid. There it lay for at least another three centuries until its rediscovery. Three other inscriptions relevant to the same Memphis decree have been found since the discovery of the Rosetta Stone: the Nubayrah Stele, a stele found in Elephantine and Noub Taha, and an inscription found at the Temple of Philae (on the Philae obelisk). Unlike those of the Rosetta Stone, the hieroglyphic texts of these inscriptions were relatively intact. The Rosetta Stone had been deciphered long before they were found, but later Egyptologists have used them to refine the reconstruction of the hieroglyphs that must have been used in the lost portions of the hieroglyphic text on the Rosetta Stone. Rediscovery French forces under Napoleon Bonaparte invaded Egypt in 1798, accompanied by a corps of 151 technical experts (savants), known as the Commission des Sciences et des Arts. In July 1799, French soldiers under the command of Colonel d'Hautpoul were strengthening the defences of Fort Julien, a couple of miles north-east of the Egyptian port city of Rosetta (modern-day Rashid). Lieutenant Pierre-François Bouchard spotted a slab with inscriptions on one side that the soldiers had uncovered when demolishing a wall within the fort. He and d'Hautpoul saw at once that it might be important and informed General Jacques-François Menou, who happened to be at Rosetta. 
The find was announced to Napoleon's newly founded scientific association in Cairo, the Institut d'Égypte, in a report by Commission member Michel Ange Lancret noting that it contained three inscriptions, the first in hieroglyphs and the third in Greek, and rightly suggesting that the three inscriptions were versions of the same text. Lancret's report, dated 1799, was read to a meeting of the Institute soon afterwards. Bouchard, meanwhile, transported the stone to Cairo for examination by scholars. The discovery was reported in September in Courrier de l'Égypte, the official newspaper of the French expedition. The anonymous reporter expressed a hope that the stone might one day be the key to deciphering hieroglyphs. In 1800 three of the commission's technical experts devised ways to make copies of the texts on the stone. One of these experts was Jean-Joseph Marcel, a printer and gifted linguist, who is credited as the first to recognise that the middle text was written in the Egyptian demotic script, rarely used for stone inscriptions and seldom seen by scholars at that time, rather than Syriac as had originally been thought. It was artist and inventor Nicolas-Jacques Conté who found a way to use the stone itself as a printing block to reproduce the inscription. A slightly different method was adopted by Antoine Galland. The prints that resulted were taken to Paris by General Charles Dugua. Scholars in Europe were now able to see the inscriptions and attempt to read them. After Napoleon's departure, French troops held off British and Ottoman attacks for another 18 months. In March 1801, the British landed at Aboukir Bay. Menou was now in command of the French expedition. His troops, including the commission, marched north towards the Mediterranean coast to meet the enemy, transporting the stone along with many other antiquities. He was defeated in battle, and the remnant of his army retreated to Alexandria where they were surrounded and besieged, with the stone now inside the city. Menou surrendered on 30 August. From French to British possession After the surrender, a dispute arose over the fate of the French archaeological and scientific discoveries in Egypt, including the artefacts, biological specimens, notes, plans, and drawings collected by the members of the commission. Menou refused to hand them over, claiming that they belonged to the institute. British General John Hely-Hutchinson refused to end the siege until Menou gave in. Scholars Edward Daniel Clarke and William Richard Hamilton, newly arrived from England, agreed to examine the collections in Alexandria and said they had found many artefacts that the French had not revealed. In a letter home, Clarke said that "we found much more in their possession than was represented or imagined". Hutchinson claimed that all materials were property of the British Crown, but French scholar Étienne Geoffroy Saint-Hilaire told Clarke and Hamilton that the French would rather burn all their discoveries than turn them over, referring ominously to the destruction of the Library of Alexandria. Clarke and Hamilton pleaded the French scholars' case to Hutchinson, who finally agreed that items such as natural history specimens would be considered the scholars' private property. Menou quickly claimed the stone, too, as his private property. Hutchinson was equally aware of the stone's unique value and rejected Menou's claim. 
Eventually an agreement was reached, and the transfer of the objects was incorporated into the Capitulation of Alexandria signed by representatives of the British, French, and Ottoman forces. It is not clear exactly how the stone was transferred into British hands, as contemporary accounts differ. Colonel Tomkyns Hilgrove Turner, who was to escort it to England, claimed later that he had personally seized it from Menou and carried it away on a gun-carriage. In a much more detailed account, Edward Daniel Clarke stated that a French "officer and member of the Institute" had taken him, his student John Cripps, and Hamilton secretly into the back streets behind Menou's residence and revealed the stone hidden under protective carpets among Menou's baggage. According to Clarke, their informant feared that the stone might be stolen if French soldiers saw it. Hutchinson was informed at once and the stone was taken away—possibly by Turner and his gun-carriage. Turner brought the stone to England aboard the captured French frigate HMS Égyptienne, landing in Portsmouth in February 1802. His orders were to present it and the other antiquities to King George III. The King, represented by War Secretary Lord Hobart, directed that it should be placed in the British Museum. According to Turner's narrative, he and Hobart agreed that the stone should be presented to scholars at the Society of Antiquaries of London, of which Turner was a member, before its final deposit in the museum. It was first seen and discussed there at a meeting in 1802. In 1802, the Society created four plaster casts of the inscriptions, which were given to the universities of Oxford, Cambridge and Edinburgh and to Trinity College Dublin. Soon afterwards, prints of the inscriptions were made and circulated to European scholars. Before the end of 1802, the stone was transferred to the British Museum, where it is located today. New inscriptions painted in white on the left and right edges of the slab stated that it was "Captured in Egypt by the British Army in 1801" and "Presented by King George III". The stone has been exhibited almost continuously in the British Museum since June 1802. During the middle of the 19th century, it was given the inventory number "EA 24", "EA" standing for "Egyptian Antiquities". It was part of a collection of ancient Egyptian monuments captured from the French expedition, including a sarcophagus of Nectanebo II (EA 10), the statue of a high priest of Amun (EA 81), and a large granite fist (EA 9). The objects were soon discovered to be too heavy for the floors of Montagu House (the original building of The British Museum), and they were transferred to a new extension that was added to the mansion. The Rosetta Stone was transferred to the sculpture gallery in 1834 shortly after Montagu House was demolished and replaced by the building that now houses the British Museum. According to the museum's records, the Rosetta Stone is its most-visited single object, a simple image of it was the museum's best-selling postcard for several decades, and a wide variety of merchandise bearing the text from the Rosetta Stone (or replicating its distinctive shape) is sold in the museum shops. The Rosetta Stone was originally displayed at a slight angle from the horizontal, and rested within a metal cradle that was made for it, which involved shaving off very small portions of its sides to ensure that the cradle fitted securely. 
It originally had no protective covering, and it was found necessary by 1847 to place it in a protective frame, despite the presence of attendants to ensure that it was not touched by visitors. Since 2004 the conserved stone has been on display in a specially built case in the centre of the Egyptian Sculpture Gallery. A replica of the Rosetta Stone is now available in the King's Library of the British Museum, without a case and free to touch, as it would have appeared to early 19th-century visitors. The museum was concerned about heavy bombing in London towards the end of the First World War in 1917, and the Rosetta Stone was moved to safety, along with other portable objects of value. The stone spent the next two years below ground level in a station of the Postal Tube Railway at Mount Pleasant near Holborn. Other than during wartime, the Rosetta Stone has left the British Museum only once: for one month in October 1972, to be displayed alongside Champollion's Lettre at the Louvre in Paris on the 150th anniversary of the letter's publication. Even when the Rosetta Stone was undergoing conservation measures in 1999, the work was done in the gallery so that it could remain visible to the public. Reading the Rosetta Stone Prior to the discovery of the Rosetta Stone and its eventual decipherment, the ancient Egyptian language and script had not been understood since shortly before the fall of the Roman Empire. The usage of the hieroglyphic script had become increasingly specialised even in the later Pharaonic period; by the 4th century AD, few Egyptians were capable of reading them. Monumental use of hieroglyphs ceased as temple priesthoods died out and Egypt was converted to Christianity; the last known inscription, found at Philae and known as the Graffito of Esmet-Akhom, is dated to AD 394. The last demotic text, also from Philae, was written in 452. Hieroglyphs retained their pictorial appearance, and classical authors emphasised this aspect, in sharp contrast to the Greek and Roman alphabets. In the 5th century, the priest Horapollo wrote Hieroglyphica, an explanation of almost 200 glyphs. His work was believed to be authoritative, yet it was misleading in many ways, and this and other works were a lasting impediment to the understanding of Egyptian writing. Later attempts at decipherment were made by Arab historians in medieval Egypt during the 9th and 10th centuries. Dhul-Nun al-Misri and Ibn Wahshiyya were the first historians to study hieroglyphs, by comparing them to the contemporary Coptic language used by Coptic priests in their time. The study of hieroglyphs continued with fruitless attempts at decipherment by European scholars, notably Pierius Valerianus in the 16th century and Athanasius Kircher in the 17th. The discovery of the Rosetta Stone in 1799 provided critical missing information, gradually revealed by a succession of scholars, that eventually allowed Jean-François Champollion to solve the puzzle that Kircher had called the riddle of the Sphinx. Greek text The Greek text on the Rosetta Stone provided the starting point. Ancient Greek was widely known to scholars, but they were not familiar with details of its use in the Hellenistic period as a government language in Ptolemaic Egypt; large-scale discoveries of Greek papyri were a long way in the future. Thus, the earliest translations of the Greek text of the stone show the translators still struggling with the historical context and with administrative and religious jargon. 
Stephen Weston verbally presented an English translation of the Greek text at a Society of Antiquaries meeting in April 1802. Meanwhile, two of the lithographic copies made in Egypt had reached the Institut de France in Paris in 1801. There, librarian and antiquarian Gabriel de La Porte du Theil set to work on a translation of the Greek, but he was dispatched elsewhere on Napoleon's orders almost immediately, and he left his unfinished work in the hands of colleague Hubert-Pascal Ameilhon. Ameilhon produced the first published translations of the Greek text in 1803, in both Latin and French to ensure that they would circulate widely. At Cambridge, Richard Porson worked on the missing lower right corner of the Greek text. He produced a skilful suggested reconstruction, which was soon being circulated by the Society of Antiquaries alongside its prints of the inscription. At almost the same moment, Christian Gottlob Heyne in Göttingen was making a new Latin translation of the Greek text that was more reliable than Ameilhon's and was first published in 1803. It was reprinted by the Society of Antiquaries in a special issue of its journal Archaeologia in 1811, alongside Weston's previously unpublished English translation, Colonel Turner's narrative, and other documents. Demotic text At the time of the stone's discovery, Swedish diplomat and scholar Johan David Åkerblad was working on a little-known script of which some examples had recently been found in Egypt, which came to be known as Demotic. He called it "cursive Coptic" because he was convinced that it was used to record some form of the Coptic language (the direct descendant of Ancient Egyptian), although it had few similarities with the later Coptic script. French Orientalist Antoine-Isaac Silvestre de Sacy had been discussing this work with Åkerblad when, in 1801, he received one of the early lithographic prints of the Rosetta Stone, from Jean-Antoine Chaptal, French minister of the interior. He realised that the middle text was in this same script. He and Åkerblad set to work, both focusing on the middle text and assuming that the script was alphabetical. They attempted to identify the points where Greek names ought to occur within this unknown text, by comparing it with the Greek. In 1802, Silvestre de Sacy reported to Chaptal that he had successfully identified five names ("Alexandros", "Alexandreia", "Ptolemaios", "Arsinoe", and Ptolemy's title "Epiphanes"), while Åkerblad published an alphabet of 29 letters (more than half of which were correct) that he had identified from the Greek names in the Demotic text. They could not, however, identify the remaining characters in the Demotic text, which, as is now known, included ideographic and other symbols alongside the phonetic ones. Hieroglyphic text Silvestre de Sacy eventually gave up work on the stone, but he was to make another contribution. In 1811, prompted by discussions with a Chinese student about Chinese script, Silvestre de Sacy considered a suggestion made by Georg Zoëga in 1797 that the foreign names in Egyptian hieroglyphic inscriptions might be written phonetically; he also recalled that as early as 1761, Jean-Jacques Barthélemy had suggested that the characters enclosed in cartouches in hieroglyphic inscriptions were proper names. 
Thus, when Thomas Young, foreign secretary of the Royal Society of London, wrote to him about the stone in 1814, Silvestre de Sacy suggested in reply that in attempting to read the hieroglyphic text, Young might look for cartouches that ought to contain Greek names and try to identify phonetic characters in them. Young did so, with two results that together paved the way for the final decipherment. In the hieroglyphic text, he discovered the phonetic characters used to write the Greek name "Ptolemaios". He also noticed that these characters resembled the equivalent ones in the demotic script, and went on to note as many as 80 similarities between the hieroglyphic and demotic texts on the stone, an important discovery because the two scripts were previously thought to be entirely different from one another. This led him to deduce correctly that the demotic script was only partly phonetic, also consisting of ideographic characters derived from hieroglyphs. Young's new insights were prominent in the long article "Egypt" that he contributed to the Encyclopædia Britannica in 1819. He could make no further progress, however. In 1814, Young first exchanged correspondence about the stone with Jean-François Champollion, a teacher at Grenoble who had produced a scholarly work on ancient Egypt. Champollion saw copies of the brief hieroglyphic and Greek inscriptions of the Philae obelisk in 1822, on which William John Bankes had tentatively noted the names "Ptolemaios" and "Kleopatra" in both languages. From this, Champollion identified the phonetic characters spelling the name Kleopatra. On the basis of this and the foreign names on the Rosetta Stone, he quickly constructed an alphabet of phonetic hieroglyphic characters, completing his work on 14 September and announcing it publicly on 27 September in a lecture to the Académie royale des Inscriptions et Belles-Lettres. On the same day he wrote the famous Lettre à M. Dacier to Bon-Joseph Dacier, secretary of the Académie, detailing his discovery. In the postscript Champollion notes that similar phonetic characters seemed to occur in both Greek and Egyptian names, a hypothesis confirmed in 1823, when he identified the names of pharaohs Ramesses and Thutmose written in cartouches at Abu Simbel. These far older hieroglyphic inscriptions had been copied by Bankes and sent to Champollion by Jean-Nicolas Huyot. From this point, the stories of the Rosetta Stone and the decipherment of Egyptian hieroglyphs diverge, as Champollion drew on many other texts to develop an Ancient Egyptian grammar and a hieroglyphic dictionary which were published after his death in 1832. Later work Work on the stone now focused on fuller understanding of the texts and their contexts by comparing the three versions with one another. In 1824 Classical scholar Antoine-Jean Letronne promised to prepare a new literal translation of the Greek text for Champollion's use. Champollion in return promised an analysis of all the points at which the three texts seemed to differ. Following Champollion's sudden death in 1832, his draft of this analysis could not be found, and Letronne's work stalled. François Salvolini, Champollion's former student and assistant, died in 1838, and this analysis and other missing drafts were found among his papers. This discovery incidentally demonstrated that Salvolini's own publication on the stone, published in 1837, was plagiarism. Letronne was at last able to complete his commentary on the Greek text and his new French translation of it, which appeared in 1841. 
During the early 1850s, German Egyptologists Heinrich Brugsch and Max Uhlemann produced revised Latin translations based on the demotic and hieroglyphic texts. The first English translation followed in 1858, the work of three members of the Philomathean Society at the University of Pennsylvania. Whether one of the three texts was the standard version, from which the other two were originally translated, is a question that has remained controversial. Letronne attempted to show in 1841 that the Greek version, the product of the Egyptian government under the Macedonian Ptolemies, was the original. Among recent authors, John Ray has stated that "the hieroglyphs were the most important of the scripts on the stone: they were there for the gods to read, and the more learned of their priesthood". Philippe Derchain and Heinz Josef Thissen have argued that all three versions were composed simultaneously, while Stephen Quirke sees in the decree "an intricate coalescence of three vital textual traditions". Richard Parkinson points out that the hieroglyphic version strays from archaic formalism and occasionally lapses into language closer to that of the demotic register that the priests more commonly used in everyday life. The fact that the three versions cannot be matched word for word helps to explain why the decipherment has been more difficult than originally expected, especially for those original scholars who were expecting an exact bilingual key to Egyptian hieroglyphs. Rivalries Even before the Salvolini affair, disputes over precedence and plagiarism punctuated the decipherment story. Thomas Young's work is acknowledged in Champollion's 1822 Lettre à M. Dacier, but incompletely, according to early British critics: for example, James Browne, a sub-editor on the Encyclopædia Britannica (which had published Young's 1819 article), anonymously contributed a series of review articles to the Edinburgh Review in 1823, praising Young's work highly and alleging that the "unscrupulous" Champollion plagiarised it. These articles were translated into French by Julius Klaproth and published in book form in 1827. Young's own 1823 publication reasserted the contribution that he had made. The early deaths of Young (1829) and Champollion (1832) did not put an end to these disputes. In his work on the stone in 1904 E. A. Wallis Budge gave special emphasis to Young's contribution compared with Champollion's. In the early 1970s, French visitors complained that the portrait of Champollion was smaller than one of Young on an adjacent information panel; English visitors complained that the opposite was true. The portraits were in fact the same size. Requests for repatriation to Egypt Calls for the Rosetta Stone to be returned to Egypt were made in July 2003 by Zahi Hawass, then Secretary-General of Egypt's Supreme Council of Antiquities. These calls, expressed in the Egyptian and international media, asked that the stele be repatriated to Egypt, commenting that it was the "icon of our Egyptian identity". He repeated the proposal two years later in Paris, listing the stone as one of several key items belonging to Egypt's cultural heritage, a list which also included: the iconic bust of Nefertiti in the Egyptian Museum of Berlin; a statue of the Great Pyramid architect Hemiunu in the Roemer-und-Pelizaeus-Museum in Hildesheim, Germany; the Dendera Temple Zodiac in the Louvre in Paris; and the bust of Ankhhaf in the Museum of Fine Arts in Boston. In August 2022, Zahi Hawass reiterated his previous demands. 
In 2005, the British Museum presented Egypt with a full-sized fibreglass colour-matched replica of the stele. This was initially displayed in the renovated Rashid National Museum, an Ottoman house in the town of Rashid (Rosetta), the closest city to the site where the stone was found. In November 2005, Hawass suggested a three-month loan of the Rosetta Stone, while reiterating the eventual goal of a permanent return. In December 2009, he proposed to drop his claim for the permanent return of the Rosetta Stone if the British Museum lent the stone to Egypt for three months for the opening of the Grand Egyptian Museum at Giza in 2013. As John Ray has observed: "The day may come when the stone has spent longer in the British Museum than it ever did in Rosetta." National museums typically express strong opposition to the repatriation of objects of international cultural significance such as the Rosetta Stone. In response to repeated Greek requests for return of the Elgin Marbles from the Parthenon and similar requests to other museums around the world, in 2002, over 30 of the world's leading museums—including the British Museum, the Louvre, the Pergamon Museum in Berlin, and the Metropolitan Museum in New York City—issued a joint statement: Idiomatic use Various ancient bilingual or even trilingual epigraphical documents have sometimes been described as "Rosetta stones", as they permitted the decipherment of ancient written scripts. For example, the bilingual Greek-Brahmi coins of the Greco-Bactrian king Agathocles have been described as "little Rosetta stones", allowing Christian Lassen's initial progress towards deciphering the Brahmi script, thus unlocking ancient Indian epigraphy. The Behistun inscription has also been compared to the Rosetta stone, as it links the translations of three ancient Middle-Eastern languages: Old Persian, Elamite, and Akkadian. The term Rosetta stone has been also used idiomatically to denote the first crucial key in the process of decryption of encoded information, especially when a small but representative sample is recognised as the clue to understanding a larger whole. According to the Oxford English Dictionary, the first figurative use of the term appeared in the 1902 edition of the Encyclopædia Britannica relating to an entry on the chemical analysis of glucose. Another use of the phrase is found in H. G. Wells's 1933 novel The Shape of Things to Come, where the protagonist finds a manuscript written in shorthand that provides a key to understanding additional scattered material that is sketched out in both longhand and on typewriter. Since then, the term has been widely used in other contexts. For example, Nobel laureate Theodor W. Hänsch in a 1979 Scientific American article on spectroscopy wrote that "the spectrum of the hydrogen atoms has proven to be the Rosetta Stone of modern physics: once this pattern of lines had been deciphered much else could also be understood". Fully understanding the key set of genes to the human leucocyte antigen has been described as "the Rosetta Stone of immunology". The flowering plant Arabidopsis thaliana has been called the "Rosetta Stone of flowering time". A gamma-ray burst (GRB) found in conjunction with a supernova has been called a Rosetta Stone for understanding the origin of GRBs. The technique of Doppler echocardiography has been called a Rosetta Stone for clinicians trying to understand the complex process by which the left ventricle of the human heart can be filled during various forms of diastolic dysfunction. 
The European Space Agency's Rosetta spacecraft was launched to study the comet 67P/Churyumov–Gerasimenko in the hope that determining its composition will advance understanding of the origins of the Solar System. The name is used for various forms of translation software and services. "Rosetta Stone" is a brand of language-learning software published by Rosetta Stone Inc., which is headquartered in Arlington County, US. Additionally, "Rosetta", developed and maintained by Canonical (the Ubuntu Linux company) as part of the Launchpad project, is an online language translation tool to help with localisation of software. One program, billed as a "lightweight dynamic translator" that enables applications compiled for PowerPC processors to run on x86 processor Apple Inc. systems, is named "Rosetta". The Rosetta@home endeavour is a distributed computing project for predicting protein structures from amino acid sequences (i.e. translating sequence into structure). Rosetta Code is a wiki-based chrestomathy website with algorithm implementations in several programming languages. The Rosetta Project brings language specialists and native speakers together to develop a meaningful survey and near-permanent archive of 1,500 languages, in physical and digital form, with the intent of it remaining useful from AD 2000 to 12,000. See also Egypt–United Kingdom relations Garshunography, the use of the script of one language to write utterances of another language which already has a script associated with it. List of individual rocks Transliteration of Ancient Egyptian Rosetta (spacecraft) References Timeline of early publications about the Rosetta Stone Notes Bibliography External links Stones 196 BC 2nd-century BC steles 1799 archaeological discoveries Ancient Egyptian stelas Egyptology Multilingual texts Ancient Egyptian objects in the British Museum French invasion of Egypt and Syria Antiquities acquired by Napoleon Ptolemaic Greek inscriptions Metaphors referring to objects 1799 in Egypt History of translation Buildings and structures in Beheira Governorate
Rosetta Stone
Physics
8,389
40,548,511
https://en.wikipedia.org/wiki/River%20bank%20failure
River bank failure occurs when the gravitational forces acting on a bank exceed the forces which hold the sediment together. Failure depends on sediment type, layering, and moisture content. All river banks experience erosion, but failure is dependent on the location and the rate at which erosion is occurring. River bank failure may be caused by house placement, water saturation, weight on the river bank, vegetation, and/or tectonic activity. When structures are built too close to the bank of the river, their weight may exceed the load the bank can support and cause slumping, or accelerate slumping that may already be active. Irrigation and septic systems can add to these stresses by increasing saturation, which reduces the soil's strength. While deep rooted vegetation can increase the strength of river banks, replacement with grass and shallower rooted vegetation can actually weaken the soil. Presence of lawns and concrete driveways concentrates runoff onto the riverbank, weakening it further. Foundations and structures further increase stress. Although each mode of failure is clearly defined, establishing which mode is acting requires investigation of soil types, bank composition, and environment, and multiple modes may be present on the same area at different times. Once failure has been classified, steps may be taken in order to prevent further erosion. If tectonic failure is at fault, research into its effects may aid in the understanding of alluvial systems and their responses to different stresses. Description A river bank can be divided into three zones: toe zone, bank zone, and overbank area. The toe zone is the area which is most susceptible to erosion. Because it is located in between the ordinary water level and the low water level, it is strongly affected by currents and erosional events. The bank zone is above the ordinary high water level, but can still be affected periodically by currents, and gets the most human and animal traffic. The overbank area is inland of both the toe and bank zones, and can be classified as either a floodplain or a bluff, depending on its slope. A river bank will respond to erosional activity based on the characteristics of the bank material. The most common type of bank is a stratified or interstratified bank, which consists of cohesionless layers interbedded with cohesive layers. If the cohesive soil is at the toe of the bank, it will control the retreat rate of the overlying layer. If the cohesionless soil is at the toe of the bank, these layers are not protected by the layers of cohesive soil. A bedrock bank is usually very stable and will experience gradual erosion. A cohesive bank is highly susceptible to erosion in times of lowering water levels due to its low permeability. Failures in cohesive soils will be in rotational or planar failure surfaces, while in non-cohesive soils failures will be in an avalanche fashion. Modes of failure Hydraulically induced failure Hydraulic processes at or below the surface of the water may entrain sediment and directly cause erosion. Non-cohesive banks are particularly vulnerable to this type of failure, due to bank undercutting, bed degradation, and basal clean-out. Hydraulic toe erosion occurs when flow is in the direction of a bank at the bend of the river and the highest velocity is at the outer edge and in the center depth of the water.
Centrifugal forces raise the water elevation so that it is highest on the outside bend, and as gravity pulls the water downward, a rolling, helical spiral develops, with downward velocities against the bank (erosive force). It will be highest in tight bends. The worst erosion will be immediately downstream from the point of maximum curvature. In cases with noncohesive layers, currents remove the material and create a cantilever overhang of cohesive material. Shear exceeds the critical shear at the toe of the bank, and particles are eroded. This then causes an overhang eventually resulting in bank retreat and failure. Geotechnical failure Geotechnical failure usually occurs due to stresses on the bank exceeding the forces the bank can accommodate. One example is oversaturation of the bank following a lowering of the water level from the floodplain to normal bank levels. Pore water pressure in the saturated bank reduces the frictional shear strength of the soil and increases sliding forces. This type of failure is most common in fine grained soils because they cannot drain as rapidly as coarse grained soils. This can be accentuated if the banks had already been destabilized due to erosion of cohesionless sands, which undermines the bank material and leads to bank collapse. If the bank has been exposed to freeze thaw, tension cracks may lead to bank failure. Subsurface moisture weakens internal shear. Capillary action can also decrease the angle of repose of the bank to less than the existing bank slope. This oversteepens the slope and can lead to collapse when the soil dries. Piping failure may occur when groundwater seepage pressure and the rate of flow increase. This causes collapse of part of the bank. Failure is usually due to selective groundwater flow along interbedded saturated layers within stratified river banks, with lenses of sand and coarser material in between layers of finer cohesive material. Tectonic failure Changes in the valley floor slope can influence alluvial rivers, which can happen due to tectonics. This may cause river bank failure, resulting in hazards to people living near to the river and to structures such as bridges, pipelines, and powerline crossings. While large and fast flowing rivers should maintain their original flow paths, low gradients make the effects of slope changes larger. Bank failure as the result of tectonics may also lead to avulsion, in which a river abandons its own river channel in favor of forming a new one. Avulsion due to tectonics is most common in rivers experiencing a high stand, in which bank failure has led to a loss of natural levees due to liquefaction and fractures from an earthquake. Gravitational failure Gravitational failure includes shallow and rotational slides, slab and cantilever failures, and earthflows and dry granular flows. It is the process of detaching sediment primarily from a cohesive bank and transporting it fluvially. Shallow failure occurs where a layer of material moves along planes parallel to bank surfaces. Failure is typical of soils with low cohesion, and occurs when the angle of the bank exceeds the angle of internal friction.
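The threshold just described, in which a cohesionless bank fails once its angle exceeds the angle of internal friction and pore water pressure makes matters worse, can be illustrated with a simple infinite-slope stability check. The sketch below is a generic geotechnical idealization rather than a method taken from this article, and all parameter values in it are invented for illustration.

```python
import math

# Minimal infinite-slope factor-of-safety sketch (an idealization; real bank
# failures are normally analyzed with dedicated slope-stability methods).
# beta = bank angle, phi = angle of internal friction, c = effective cohesion (Pa),
# gamma = unit weight of soil (N/m^3), z = depth of the slip plane (m),
# u = pore water pressure on that plane (Pa). All numbers are illustrative.

def factor_of_safety(beta_deg, phi_deg, c=0.0, gamma=18000.0, z=2.0, u=0.0):
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * z * math.cos(beta) ** 2       # stress normal to the slip plane
    driving_stress = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c + (normal_stress - u) * math.tan(phi)   # Mohr-Coulomb shear strength
    return resisting / driving_stress

print(factor_of_safety(beta_deg=25, phi_deg=32))              # ~1.3: bank angle below phi, stable
print(factor_of_safety(beta_deg=35, phi_deg=32))              # ~0.9: bank angle above phi, failure expected
print(factor_of_safety(beta_deg=25, phi_deg=32, u=15000.0))   # ~0.7: pore pressure alone destabilizes the bank
```

With zero cohesion and zero pore pressure the expression reduces to tan(phi)/tan(beta), which is why the friction angle acts as a limiting bank angle; raising the pore water pressure lowers the factor of safety without any change in geometry, matching the saturation-driven failures described above.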
Popout failure is when small to medium-sized blocks are forced out at or near the base of the river bank due to excessive pore water pressure and overburden. The slab of material in the lower half of the bank will fall out, leaving an alcove shaped cavity. Failure is usually associated with steep banks and saturated finer grained cohesive bank materials that allow buildup of positive pore water pressure and strong seepage within the structure. Slab failure is the sliding and forward toppling of a deep-seated mass into the river channel. Failures are associated with steep, low height, fine grained cohesive banks and occur during low flow conditions. They are the result of a combination of scour at the bank toe, high pore water pressure in the bank material, and a tension crack at the top of the bank. Cantilever failures occur when an overhanging block collapses into the channel. Failure often occurs after the bank has experienced undercutting. Failure is usually in a composite of fine and coarse grained material, and is active during low flow conditions. Failure caused by dry granular flow typically occurs on non-cohesive banks at or near the angle of repose that are undercut. This increases the local bank angle above the friction angle, and individual grains roll, slide, and bounce down the bank in a layer. Accumulation usually occurs at the toe. A wet earthflow occurs where saturation increases the weight of a section of bank and decreases the strength of the bank material, so that the soil flows as a viscous liquid. This type of failure usually occurs on low angle banks and the affected material flows down the bank to form lobes of material at the toe. Beam failure happens as the result of tension cracks in the overhang, and occurs only when the lower part of an overhang block fails along an almost horizontal failure surface. Examples 1811–1812 New Madrid earthquake The 1811–12 New Madrid earthquakes struck along the Mississippi River and represent bank failure caused by tectonic activity in the New Madrid Seismic Zone (NMSZ). The NMSZ is the result of a failed rift system which remains weak today, and thus is prone to faulting and earthquake activity. The earthquakes caused immediate bank failure, in which the surface banks fell above and below the water surface, causing swells large enough to sink a boat. Some swells were caused by the sediment falling into the river, but at other times the swells themselves hitting the banks caused large areas of the Mississippi banks to fall at one time. The waters of the Mississippi were seen to flow backwards, due to the shocks caused by the earthquake. Large amounts of sediment were introduced into the river. Bank caving was seen as far downriver as Memphis, Tennessee. Vertical offsets may have been the primary source of turbulence, though short lived. Northwestern Minnesota bank failure This bank failure occurred on the Red River and its tributaries. It was caused by erosion and represents slumping. Failure occurs in this area because river banks are composed of clay, due to glacial and lake deposition, as opposed to more resistant sediments such as sand or gravel.
Most commonly, slumping exists in the Sherack Formation, which sits on less competent formations called the Huot and Brenna Formations. The Sherack Formation is composed of silt and clay laminations, while the Brenna is a clay deposit. These less competent formations become exposed when the overlying Sherack Formation is eroded by the river valley. Cracks can also form in the Sherack Formation, causing weakness in the underlying clay, and slumping. The exposed contact between the formations (commonly in the Red River area), and thus the inherent weakness at this contact, causes mass wasting of the river bank. Human activity near the banks of the river then increases failure risks. Due to this human interference, the best mode of defense is to avoid unnecessary loading near the river and to enhance awareness of the issues leading to failure. When failure does occur, an understanding of the geotechnical parameters of the slope is necessary, and these parameters are the most heavily relied upon in order to understand the underlying causes. This can be accomplished by obtaining values for the plastic limit and liquid limit of the soils. Also of interest are the interactions between streamflows and sediment contribution. The Red River in Minnesota receives contributions from the Pembina River of northeastern North Dakota. Erosion rates are very high for this river, and lead to extensive and steep erosion of the banks of the river. This increased runoff then produces increased streamflow and thus higher erosion events downstream, such as in the Red River. Solutions Many solutions to river bank failure exist, the most common of which are lime stabilization and retaining walls, riprap and sheet piling, maintaining deep vegetation, windrows and trenches, sacks and blocks, gabions and mattresses, soil-cement, and avoiding the construction of structures near the banks of the river. Riprap Riprap is made of rocks and other materials arranged in such a way as to inhibit erosional processes on a river bank. This method is expensive and can experience failure, but has the ability to be used for large areas. Failure is seen when the bank undergoes particle erosion, due to the stones being too small to resist shear stress, removal of individual stones weakening the overall riprap, the side slope of the bank being too steep for the riprap to resist the displacing forces, or gradation of riprap being too uniform (nothing to fill small spaces). Failure can also occur by slump, translational slide, or modified slump. Windrows and trenches Windrows are the piling of erosion-resistant material on a river's bank; if buried, they become known as trenches. When erosion persists at a particular location, these windrows and trenches are made to slide down with the bank in order to protect it from further occurrences of erosion. This requires minimal design work, in that installation is simple on high banks where other methods could lead to failure. Disadvantages include the bank continuing to erode until it intersects the erosion-resistant material. Results of this method have been seen to be inconsistent, as the steep slope of the bank leads to increased velocity of the river. Sacks/Blocks Sacks and blocks may be used during flooding, where sacks are filled with material, allowing for blocks to encourage drainage and vegetation growth. This method requires increased manual labor and larger amounts of filler material, as all sacks and blocks should be of the same size.
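As a rough illustration of the riprap failure mode described above, in which stones are too small to resist the flow's shear stress, the following screening calculation compares a simple wide-channel estimate of bed shear stress with a Shields-type threshold for stone movement. This is a generic hydraulics approximation, not a design method endorsed by this article; practical riprap sizing follows dedicated design guidance and adds safety factors, and every number here is an assumed illustrative value.

```python
# Screening check: does an assumed flow move stones of a given size?
RHO_W = 1000.0      # water density, kg/m^3
RHO_S = 2650.0      # stone density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
SHIELDS = 0.047     # approximate critical Shields parameter for coarse, rough beds

def bed_shear_stress(depth_m, slope):
    """Wide-channel bed shear stress tau = rho * g * h * S, in Pa."""
    return RHO_W * G * depth_m * slope

def min_stable_stone_diameter(tau_pa):
    """Stone diameter (m) at which the Shields threshold is just reached."""
    return tau_pa / (SHIELDS * (RHO_S - RHO_W) * G)

tau = bed_shear_stress(depth_m=3.0, slope=0.002)
print(round(tau, 1), "Pa")                             # ~58.9 Pa
print(round(min_stable_stone_diameter(tau), 3), "m")   # ~0.077 m: much smaller stones would be entrained
```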
Gabions and mattresses Gabions are stacked, rectangular wire boxes filled with stones. They are useful on steep slopes when the water is too fast for the use of a riprap technique. They are expensive and labor-intensive, and require periodic inspection for damage and subsequent maintenance, though they have been seen to demonstrate positive performance. Mattress gabions are broad shallow baskets, useful on smooth riverbanks for the growth of vegetation. Tied side by side and layered next to each other on shallow surfaces, they create a blanket of protection against erosion. Articulated concrete mattresses are used in large rivers such as the Mississippi, and consist of concrete blocks held by steel rods. Quick to use with a good reputation, they allow for complete coverage of the riverbank when properly placed. This in turn leads to a good service record. However, open spaces (8%) allow for fine material to pass through, and the spaces between the blocks may cause removal of the bank. Unfortunately, the mattresses themselves do not fit well in sharp curves, and it may be costly to remove the vegetation on the bank, which is required for placement. Soil-cement The exact placement of soil cement may be different depending on the slope of the bank. In rivers with high wave action, a stairstep pattern may be needed to dissipate the energy coming from the waves. In conditions with lower wave energy, cement may be 'plated' in sheets parallel to the slope. This technique cannot, however, be used on a steep slope. Soil cement may have negative effects in freeze/thaw conditions, but positive effects in banks with sand and vegetation, as little strength and impermeability can cause failure. Vegetation Three main types of vegetation exist to prevent bank failure: trees, shrubs, and grasses. Trees provide deep and dense root systems, increasing the stresses a river bank can accommodate. Shrubs are staked into the river bank in order to provide a protective covering against erosion, creating good plant coverage and soil stability. Cuttings may be tied together into fascines, and placed into shallow trenches parallel to the bank of the river. Typically, willow and cottonwood poles are the most useful materials; however, fiber products may also be used. These are then partially buried and staked in place. These bundles of cuttings create log-like structures which will root, grow, and create good plant coverage. The structures hold the soil in place and protect the stream bank from erosion. The use of vegetation to counteract erosional processes is the most labor-intensive method to employ, while also the least expensive. It also improves the habitat and is aesthetically pleasing. On steep banks, however, trees may not be able to stabilize the toe of the bank, and the weight of the tree itself may lead to failure. It is also difficult to grow vegetation in conditions such as freeze thaw. If not properly protected, wildlife and livestock may damage the vegetation. References Erosion Soil erosion Hydrology Rivers
River bank failure
Chemistry,Engineering,Environmental_science
3,404
326,821
https://en.wikipedia.org/wiki/Appliance%20plug
An appliance plug is a three-conductor power connector originally developed for kettles, toasters and similar small appliances. It was common in the United Kingdom, New Zealand, Australia, Germany, the Netherlands and Sweden. It has largely been made obsolete and replaced by IEC 60320 C15 and C16 connectors, or proprietary connectors to base plates for cordless kettles. It still occurs on some traditional ceramic electric jugs. It is also used for some laboratory water stills. On some models of the classical ceramic electric jug, the appliance plug prevents the lid from being raised while the connector is inserted. This is important as during operation of these jugs, the water it contains is connected to the electric mains and is an electric shock risk. Appliance plugs were also used to supply power to electric toasters, electric coffee percolators, electric frypans, and many other appliances. An appliance plug is to some degree heat resistant, but the maximum working temperature varied from manufacturer to manufacturer and even from batch to batch. The mains connectors of the appliance plug are two rounded sockets that accept two rounded pins from the appliance. They are unpolarised. The third connection, earth, is a large metal contact on each side of the plug body which makes contact with the sides of the plug receptacle, grounding the appliance body. Some appliances using these connectors incorporate a spring and plunger mechanism with a temperature-sensitive release system; if the temperature rises significantly above a preset limit - for example, if a kettle boils dry - the spring is released and (if all goes well) the plunger pushes the plug and socket apart. It must then be allowed to cool and then reset manually by forcing the connector back into the appliance. A plug of same design but probably different dimensions was in use in former USSR for powering electric kettles and electric samovars. References Mains power connectors Home appliances
Appliance plug
Physics,Technology
411
26,969,485
https://en.wikipedia.org/wiki/Table%20of%20Gaussian%20integer%20factorizations
A Gaussian integer is either zero, one of the four units (±1, ±i), a Gaussian prime, or composite. The article is a table of Gaussian integers followed either by an explicit factorization or by the label (p) if the integer is a Gaussian prime. The factorizations take the form of an optional unit multiplied by integer powers of Gaussian primes. Note that there are rational primes which are not Gaussian primes. A simple example is the rational prime 5, which factors as (2 + i)(2 − i), and is therefore not a Gaussian prime. Conventions The second column of the table contains only integers in the first quadrant, which means the real part x is positive and the imaginary part y is non-negative. The table might have been further reduced to the integers in the first octant of the complex plane using the symmetry between x + yi and y + xi. The factorizations are often not unique in the sense that the unit could be absorbed into any other factor with exponent equal to one. The entry , for example, could also be written as . The entries in the table resolve this ambiguity by the following convention: the factors are primes in the right complex half plane with absolute value of the real part larger than or equal to the absolute value of the imaginary part. The entries are sorted according to increasing norm x² + y². The table is complete up to the maximum norm at the end of the table in the sense that each composite or prime in the first quadrant appears in the second column. Gaussian primes occur only for a subset of norms, detailed in sequence . This is a composition of sequences and . Factorizations See also Gaussian integer Table of divisors Integer factorization References External links OEIS: Gaussian Primes Complex numbers Gaussian integer factorizations
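The conventions above can be checked mechanically. The short script below is an illustrative sketch, not part of the article or of any standard library; it tests whether a Gaussian integer a + bi is a Gaussian prime using the standard criterion based on the norm a² + b², and verifies that the rational prime 5 splits as (2 + i)(2 − i).

```python
def is_rational_prime(n: int) -> bool:
    """Trial-division primality test for ordinary (rational) integers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a: int, b: int) -> bool:
    """True if a + bi is a Gaussian prime."""
    if a == 0:   # purely imaginary: prime iff |b| is a rational prime congruent to 3 mod 4
        return is_rational_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:   # purely real: same criterion applied to |a|
        return is_rational_prime(abs(a)) and abs(a) % 4 == 3
    return is_rational_prime(a * a + b * b)   # otherwise: prime iff the norm is a rational prime

assert is_gaussian_prime(3, 0)                    # 3 stays prime in the Gaussian integers
assert not is_gaussian_prime(5, 0)                # 5 is composite there
assert is_gaussian_prime(2, 1) and is_gaussian_prime(2, -1)
assert (2 + 1j) * (2 - 1j) == 5 + 0j              # 5 = (2 + i)(2 - i)
```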
Table of Gaussian integer factorizations
Mathematics
374
233,403
https://en.wikipedia.org/wiki/Siege%20engine
A siege engine is a device that is designed to break or circumvent heavy castle doors, thick city walls and other fortifications in siege warfare. Some are immobile, constructed in place to attack enemy fortifications from a distance, while others have wheels to enable advancing up to the enemy fortification. There are many distinct types, such as siege towers that allow foot soldiers to scale walls and attack the defenders, battering rams that damage walls or gates, and large ranged weapons (such as ballistas, catapults/trebuchets and other similar constructions) that attack from a distance by launching projectiles. Some complex siege engines were combinations of these types. Siege engines are fairly large constructions – from the size of a small house to a large building. From antiquity up to the development of gunpowder, they were made largely of wood, using rope or leather to help bind them, possibly with a few pieces of metal at key stress points. They could launch simple projectiles using natural materials to build up force by tension, torsion, or, in the case of trebuchets, human power or counterweights coupled with mechanical advantage. With the development of gunpowder and improved metallurgy, bombards and later heavy artillery became the primary siege engines. Collectively, siege engines or artillery together with the necessary soldiers, sappers, ammunition, and transport vehicles to conduct a siege are referred to as a siege train. Antiquity Ancient Assyria through the Roman Empire The earliest siege engines appear to be simple movable roofed towers used for cover to advance to the defenders' walls in conjunction with scaling ladders, depicted during the Middle Kingdom of Egypt. Advanced siege engines including battering rams were used by Assyrians, followed by the catapult in ancient Greece. In Kush siege towers as well as battering rams were built from the 8th century BC and employed in Kushite siege warfare, such as the siege of Ashmunein in 715 BC. The Spartans used battering rams in the siege of Plataea in 429 BC, but it seems that the Greeks limited their use of siege engines to assault ladders, though Peloponnesian forces used something resembling flamethrowers. The first Mediterranean people to use advanced siege machinery were the Carthaginians, who used siege towers and battering rams against the Greek colonies of Sicily. These engines influenced the ruler of Syracuse, Dionysius I, who developed a catapult in 399 BC. The first two rulers to make use of siege engines to a large extent were Philip II of Macedonia and Alexander the Great. Their large engines spurred an evolution that led to impressive machines, like the Demetrius Poliorcetes' Helepolis (or "Taker of Cities") of 304 BC: nine stories high and plated with iron, it stood tall and wide, weighing . The most used engines were simple battering rams, or tortoises, propelled in several ingenious ways that allowed the attackers to reach the walls or ditches with a certain degree of safety. For sea sieges or battles, seesaw-like machines (sambykē or sambuca) were used. These were giant ladders, hinged and mounted on a base mechanism and used for transferring marines onto the sea walls of coastal towns. They were normally mounted on two or more ships tied together and some sambuca included shields at the top to protect the climbers from arrows. Other hinged engines were used to catch enemy equipment or even opposing soldiers with opposable appendices which are probably ancestors to the Roman corvus. 
Other weapons dropped heavy weights on opposing soldiers. The Romans preferred to assault enemy walls by building earthen ramps (agger) or simply scaling the walls, as in the early siege of the Samnite city of Silvium (306 BC). Soldiers working at the ramps were protected by shelters called vineae, that were arranged to form a long corridor. Convex wicker shields were used to form a screen (plutei or plute in English) to protect the front of the corridor during construction of the ramp. Another Roman siege engine sometimes used resembled the Greek ditch-filling tortoise of Diades, this galley (unlike the ram-tortoise of Hegetor the Byzantium) called a musculus ("muscle") was simply used as cover for sappers to engineer an offensive ditch or earthworks. Battering rams were also widespread. The Roman Legions first used siege towers ; in the first century BC, Julius Caesar accomplished a siege at Uxellodunum in Gaul using a ten-story siege tower. Romans were nearly always successful in besieging a city or fort, due to their persistence, the strength of their forces, their tactics, and their siege engines. The first documented occurrence of ancient siege engine pieces in Europe was the gastraphetes ("belly-bow"), a kind of large crossbow. These were mounted on wooden frames. Greater machines forced the introduction of pulley system for loading the projectiles, which had extended to include stones also. Later torsion siege engines appeared, based on sinew springs. The onager was the main Roman invention in the field. Ancient China The earliest documented occurrence of ancient siege-artillery pieces in China was the levered principled traction catapult and an high siege crossbow from the Mozi (Mo Jing), a Mohist text written at about the 4th – 3rd century BC by followers of Mozi who founded the Mohist school of thought during the late Spring and Autumn period and the early Warring States period. Much of what we now know of the siege technology of the time comes from Books 14 and 15 (Chapters 52 to 71) on Siege Warfare from the Mo Jing. Recorded and preserved on bamboo strips, much of the text is now extremely corrupted. However, despite the heavy fragmentation, Mohist diligence and attention to details which set Mo Jing apart from other works ensured that the highly descriptive details of the workings of mechanical devices like Cloud Ladders, Rotating Arcuballistas and Levered Catapults, records of siege techniques and usage of siege weaponry can still be found today. Elephant Indian, Sri Lankan, Chinese and Southeast Asian kingdoms and empires used war elephants as battering rams. Middle Ages Medieval designs include a large number of catapults such as the mangonel, onager, the ballista, the traction trebuchet (first designed in China in the 3rd century BC and brought over to Europe in the 4th century AD), and the counterweight trebuchet (first described by Mardi bin Ali al-Tarsusi in the 12th century, though of unknown origin). These machines used mechanical energy to fling large projectiles to batter down stone walls. Also used were the battering ram and the siege tower, a wooden tower on wheels that allowed attackers to climb up and over castle walls, while protected somewhat from enemy arrows. A typical military confrontation in medieval times was for one side to lay siege to an opponent's castle. 
When properly defended, they had the choice whether to assault the castle directly or to starve the people out by blocking food deliveries, or to employ war machines specifically designed to destroy or circumvent castle defenses. Defending soldiers also used trebuchets and catapults as a defensive advantage. Other tactics included setting fires against castle walls in an effort to decompose the cement that held together the individual stones so they could be readily knocked over. Another indirect means was the practice of mining, whereby tunnels were dug under the walls to weaken the foundations and destroy them. A third tactic was the catapulting of diseased animals or human corpses over the walls in order to promote disease which would force the defenders to surrender, an early form of biological warfare. Modern era With the advent of gunpowder, firearms such as the arquebus and cannon—eventually the petard, mortar and artillery—were developed. These weapons proved so effective that fortifications, such as city walls, had to be low and thick, as exemplified by the designs of Vauban. The development of specialized siege artillery, as distinct from field artillery, culminated during World War I and World War II. During the First World War, huge siege guns such as Big Bertha were designed to see use against the modern fortresses of the day. The apex of siege artillery was reached with the German Schwerer Gustav gun, a huge caliber railway gun, built during early World War II. Schwerer Gustav was initially intended to be used for breaching the French Maginot Line of fortifications, but was not finished in time and (as a sign of the times) the Maginot Line was circumvented by rapid mechanized forces instead of breached in a head-on assault. The long time it took to deploy and move the modern siege guns made them vulnerable to air attack and it also made them unsuited to the rapid troop movements of modern warfare. See also References Sources External links Paolo Santini De Machinis or De machinis bellicis de Mariano Taccola, Paris, BnF, Département des manuscrits, Latin 7239 Scenes of Siege Warfare
Siege engine
Engineering
1,877
75,337,057
https://en.wikipedia.org/wiki/Fitzpatrickella%20operculata
Fitzpatrickella operculata is an ascomycete species of fungus from the order Coryneliales. It grows exclusively on the fruits of the Drimys genus of flowering plants on the Juan Fernández Islands. This fungus was named after Harry Morton Fitzpatrick, a mycologist involved in the study of Coryneliales. It is the only species in the genus Fitzpatrickella. Description Fitzpatrickella operculata is characterised by the presence of doliiform, black ascocarps with an obvious area for dehiscence to occur. The ascocarps of Fitzpatrickella operculata tend to almost cover the fruits they infect, being very tightly packed, and have a well defined operculum that, when ruptured to release the ascospores, leaves a central cavity in the ascocarp. The central cavity is lined with a zone of textura prismatica (tissue composed of relatively short cylindrical cells), and an outer layer of textura angularis (tissue composed of tightly packed polyhedral cells). Fitzpatrickella operculata generally produces eight-spored asci, containing brown to dark brown pitted, unicellular ascospores of varying shapes. References Eurotiomycetes Fungi described in 1985 Fungus species
Fitzpatrickella operculata
Biology
268
608,002
https://en.wikipedia.org/wiki/Tyramine
Tyramine (also spelled tyramin), also known under several other names, is a naturally occurring trace amine derived from the amino acid tyrosine. Tyramine acts as a catecholamine releasing agent. Notably, it is unable to cross the blood-brain barrier, resulting in only non-psychoactive peripheral sympathomimetic effects following ingestion. A hypertensive crisis can result, however, from ingestion of tyramine-rich foods in conjunction with the use of monoamine oxidase inhibitors (MAOIs). Occurrence Tyramine occurs widely in plants and animals, and is metabolized by various enzymes, including monoamine oxidases. In foods, it is often produced by the decarboxylation of tyrosine during fermentation or decay. Foods that are fermented, cured, pickled, aged, or spoiled have high amounts of tyramine. Tyramine levels go up when foods are at room temperature or go past their freshness date. Specific foods containing considerable amounts of tyramine include: Strong or aged cheeses: cheddar, Swiss, Parmesan, Stilton, Gorgonzola or blue cheeses, Camembert, feta, Muenster Meats that are cured, smoked, or processed: such as salami, pepperoni, dry sausages, hot dogs, bologna, bacon, corned beef, pickled or smoked fish, caviar, aged chicken livers, soups or gravies made from meat extract Pickled or fermented foods: sauerkraut, kimchi, tofu (especially stinky tofu), pickles, miso soup, bean curd, tempeh, sourdough breads Condiments: soy, shrimp, fish, miso, teriyaki, and bouillon-based sauces Drinks: beer (especially tap or home-brewed), vermouth, red wine, sherry, liqueurs Beans, vegetables, and fruits: fermented or pickled vegetables, overripe fruits Chocolate Tyramine in food is increasingly treated as a food-safety concern. Proposed regulations aim to control biogenic amines in food through various strategies, including the use of proper fermentation starters or the suppression of decarboxylase activity. Some authors report that this has already produced positive results, and that tyramine content in food is now lower than it was in the past. In plants Mistletoe (toxic and not used by humans as a food, but historically used as a medicine). In animals Tyramine also plays a role in animals including: In behavioral and motor functions in Caenorhabditis elegans; Locusta migratoria swarming behaviour; and various nervous roles in Rhipicephalus, Apis, Locusta, Periplaneta, Drosophila, Phormia, Papilio, Bombyx, Chilo, Heliothis, Mamestra, Agrotis, and Anopheles. Biological activity Tyramine is a norepinephrine and dopamine releasing agent (NDRA) and indirectly acting sympathomimetic. Evidence for the presence of tyramine in the human brain has been confirmed by postmortem analysis. Additionally, the possibility that tyramine acts directly as a neuromodulator was revealed by the discovery of a G protein-coupled receptor with high affinity for tyramine, called the trace amine-associated receptor (TAAR1). The TAAR1 receptor is found in the brain, as well as peripheral tissues, including the kidneys. Tyramine is a full agonist of TAAR1 in rodents and humans. Tyramine is physiologically metabolized by monoamine oxidases (primarily MAO-A), FMO3, PNMT, DBH, and CYP2D6. Human monoamine oxidase enzymes metabolize tyramine into 4-hydroxyphenylacetaldehyde.
If monoamine metabolism is compromised by the use of monoamine oxidase inhibitors (MAOIs) and foods high in tyramine are ingested, a hypertensive crisis can result, as tyramine also can displace stored monoamines, such as dopamine, norepinephrine, and epinephrine, from pre-synaptic vesicles. Tyramine is considered a "false neurotransmitter", as it enters noradrenergic nerve terminals and displaces large amounts of norepinephrine, which enters the blood stream and causes vasoconstriction. Additionally, cocaine has been found to block blood pressure rise that is originally attributed to tyramine, which is explained by the blocking of adrenaline by cocaine from reabsorption to the brain. The first signs of this effect were discovered by a British pharmacist who noticed that his wife, who at the time was on MAOI medication, had severe headaches when eating cheese. For this reason, it is still called the "cheese reaction" or "cheese crisis", although other foods can cause the same problem. Most processed cheeses do not contain enough tyramine to cause hypertensive effects, although some aged cheeses (such as Stilton) do. A large dietary intake of tyramine (or a dietary intake of tyramine while taking MAO inhibitors) can cause the tyramine pressor response, which is defined as an increase in systolic blood pressure of 30 mmHg or more. The increased release of norepinephrine (noradrenaline) from neuronal cytosol or storage vesicles is thought to cause the vasoconstriction and increased heart rate and blood pressure of the pressor response. In severe cases, adrenergic crisis can occur. Although the mechanism is unclear, tyramine ingestion also triggers migraine attacks in sensitive individuals and can even lead to stroke. Vasodilation, dopamine, and circulatory factors are all implicated in the migraines. Double-blind trials suggest that the effects of tyramine on migraine may be adrenergic. Research reveals a possible link between migraines and elevated levels of tyramine. A 2007 review published in Neurological Sciences presented data showing migraine and cluster diseases are characterized by an increase of circulating neurotransmitters and neuromodulators (including tyramine, octopamine, and synephrine) in the hypothalamus, amygdala, and dopaminergic system. People with migraine are over-represented among those with inadequate natural monoamine oxidase, resulting in similar problems to individuals taking MAO inhibitors. Many migraine attack triggers are high in tyramine. If one has had repeated exposure to tyramine, however, there is a decreased pressor response; tyramine is degraded to octopamine, which is subsequently packaged in synaptic vesicles with norepinephrine (noradrenaline). Therefore, after repeated tyramine exposure, these vesicles contain an increased amount of octopamine and a relatively reduced amount of norepinephrine. When these vesicles are secreted upon tyramine ingestion, there is a decreased pressor response, as less norepinephrine is secreted into the synapse, and octopamine does not activate alpha or beta adrenergic receptors. When using a MAO inhibitor (MAOI), an intake of approximately 10 to 25 mg of tyramine is required for a severe reaction, compared to 6 to 10 mg for a mild reaction. Tyramine, like phenethylamine, is a monoaminergic activity enhancer (MAE) of serotonin, norepinephrine, and dopamine in addition to its catecholamine-releasing activity. That is, it enhances the action potential-mediated release of these monoamine neurotransmitters. 
The compound is active as an MAE at much lower concentrations than the concentrations at which it induces the release of catecholamines. The MAE actions of tyramine and other MAEs may be mediated by TAAR1 agonism. Biosynthesis Biochemically, tyramine is produced by the decarboxylation of tyrosine via the action of the enzyme tyrosine decarboxylase. Tyramine can, in turn, be converted to methylated alkaloid derivatives N-methyltyramine, N,N-dimethyltyramine (hordenine), and N,N,N-trimethyltyramine (candicine). In humans, tyramine is likewise produced from tyrosine by decarboxylation. Chemistry In the laboratory, tyramine can be synthesized in various ways, in particular by the decarboxylation of tyrosine. Society and culture Legal status United States Tyramine is a Schedule I controlled substance, categorized as a hallucinogen, making it illegal to buy, sell, or possess in the state of Florida without a license at any purity level or in any form whatsoever. The language in the Florida statute says tyramine is illegal in "any material, compound, mixture, or preparation that contains any quantity of [tyramine] or that contains any of [its] salts, isomers, including optical, positional, or geometric isomers, and salts of isomers, if the existence of such salts, isomers, and salts of isomers is possible within the specific chemical designation." This ban is likely the product of lawmakers overly eager to ban substituted phenethylamines, which tyramine is, in the mistaken belief that ring-substituted phenethylamines are hallucinogenic drugs like the 2C series of psychedelic substituted phenethylamines. The further banning of tyramine's optical isomers, positional isomers, or geometric isomers, and salts of isomers where they exist, means that meta-tyramine and phenylethanolamine, a substance found in every living human body, and other common, non-hallucinogenic substances are also illegal to buy, sell, or possess in Florida. Given that tyramine occurs naturally in many foods and drinks (most commonly as a by-product of bacterial fermentation), e.g. wine, cheese, and chocolate, Florida's total ban on the substance may prove difficult to enforce. Notes References Antihypotensive agents Migraine Monoamine oxidase inhibitors Monoaminergic activity enhancers Norepinephrine-dopamine releasing agents Peripherally selective drugs Phenethylamine alkaloids Phenethylamines TAAR1 agonists Trace amines 4-Hydroxyphenyl compounds
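The biosynthesis described above, decarboxylation of tyrosine to tyramine, can be summarized as an overall mass balance. The equation below is a standard stoichiometric check using rounded molar masses; it is added for illustration and is not drawn from the article's references.

```latex
% Decarboxylation of tyrosine to tyramine: overall mass balance (rounded molar masses).
\mathrm{C_9H_{11}NO_3}\ (\text{tyrosine},\ \approx 181.19\ \mathrm{g/mol})
  \;\xrightarrow{\text{tyrosine decarboxylase}}\;
\mathrm{C_8H_{11}NO}\ (\text{tyramine},\ \approx 137.18\ \mathrm{g/mol})
  \;+\; \mathrm{CO_2}\ (\approx 44.01\ \mathrm{g/mol})
```

The atoms and masses balance: roughly 137.18 g/mol of tyramine plus 44.01 g/mol of carbon dioxide account for the 181.19 g/mol of tyrosine consumed.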
Tyramine
Chemistry
2,296
23,483,761
https://en.wikipedia.org/wiki/Molecular%20Cell
Molecular Cell is a peer-reviewed scientific journal that covers research on cell biology at the molecular level, with an emphasis on new mechanistic insights. It was established in 1997 and is published twice per month. The journal is published by Cell Press and is a companion to Cell. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal had an impact factor of 14.5 in 2023. References External links Academic journals established in 1997 Molecular and cellular biology journals Biweekly journals English-language journals Cell Press academic journals
Molecular Cell
Chemistry
123
1,391,016
https://en.wikipedia.org/wiki/Cellulosic%20ethanol
Cellulosic ethanol is ethanol (ethyl alcohol) produced from cellulose (the stringy fiber of a plant) rather than from the plant's seeds or fruit. It can be produced from grasses, wood, algae, or other plants. It is generally discussed for use as a biofuel. The carbon dioxide that plants absorb as they grow offsets some of the carbon dioxide emitted when ethanol made from them is burned, so cellulosic ethanol fuel has the potential to have a lower carbon footprint than fossil fuels. Interest in cellulosic ethanol is driven by its potential to replace ethanol made from corn or sugarcane. Since these plants are also used for food products, diverting them for ethanol production can cause food prices to rise; cellulose-based sources, on the other hand, generally do not compete with food, since the fibrous parts of plants are mostly inedible to humans. Another potential advantage is the high diversity and abundance of cellulose sources; grasses, trees and algae are found in almost every environment on Earth. Even municipal solid waste components like paper could conceivably be made into ethanol. The main current disadvantage of cellulosic ethanol is its high cost of production: the process is more complex and requires more steps than production of corn-based or sugarcane-based ethanol. Cellulosic ethanol received significant attention in the 2000s and early 2010s. The United States government in particular funded research into its commercialization and set targets for the proportion of cellulosic ethanol added to vehicle fuel. A large number of new companies specializing in cellulosic ethanol, in addition to many existing companies, invested in pilot-scale production plants. However, the much cheaper manufacturing of grain-based ethanol, along with the low price of oil in the 2010s, meant that cellulosic ethanol was not competitive with these established fuels. As a result, most of the new refineries were closed by the mid-2010s and many of the newly founded companies became insolvent. A few still exist, but are mainly used for demonstration or research purposes; as of 2021, none produces cellulosic ethanol at scale. Overview Cellulosic ethanol is a type of biofuel produced from lignocellulose, a structural material that comprises much of the mass of plants and is composed mainly of cellulose, hemicellulose and lignin. Popular sources of lignocellulose include both agricultural waste products (e.g. corn stover or wood chips) and grasses like switchgrass and miscanthus species. These raw materials for ethanol production have the advantage of being abundant and diverse and would not compete with food production, unlike the more commonly used corn and cane sugars. However, they also require more processing to make the sugar monomers available to the microorganisms typically used to produce ethanol by fermentation, which drives up the price of cellulose-derived ethanol. Cellulosic ethanol can reduce greenhouse gas emissions by 85% over reformulated gasoline. By contrast, starch ethanol (e.g., from corn), which most frequently uses natural gas to provide energy for the process, may not reduce greenhouse gas emissions at all depending on how the starch-based feedstock is produced. According to the National Academy of Sciences in 2011, there is no commercially viable bio-refinery in existence to convert lignocellulosic biomass to fuel.
Absence of production of cellulosic ethanol in the quantities required by the regulation was the basis of a United States Court of Appeals for the District of Columbia decision announced January 25, 2013, voiding a requirement imposed on car and truck fuel producers in the United States by the Environmental Protection Agency requiring addition of cellulosic biofuels to their products. These issues, along with many other difficult production challenges, led George Washington University policy researchers to state that "in the short term, [cellulosic] ethanol cannot meet the energy security and environmental goals of a gasoline alternative." History The French chemist, Henri Braconnot, was the first to discover that cellulose could be hydrolyzed into sugars by treatment with sulfuric acid in 1819. The hydrolyzed sugar could then be processed to form ethanol through fermentation. The first commercialized ethanol production began in Germany in 1898, where acid was used to hydrolyze cellulose. In the United States, the Standard Alcohol Company opened the first cellulosic ethanol production plant in South Carolina in 1910. Later, a second plant was opened in Louisiana. However, both plants were closed after World War I due to economic reasons. The first attempt at commercializing a process for ethanol from wood was done in Germany in 1898. It involved the use of dilute acid to hydrolyze the cellulose to glucose, and was able to produce 7.6 liters of ethanol per 100 kg of wood waste ( per ton). The Germans soon developed an industrial process optimized for yields of around per ton of biomass. This process soon found its way to the US, culminating in two commercial plants operating in the southeast during World War I. These plants used what was called "the American Process" — a one-stage dilute sulfuric acid hydrolysis. Though the yields were half that of the original German process ( of ethanol per ton versus 50), the throughput of the American process was much higher. A drop in lumber production forced the plants to close shortly after the end of World War I. In the meantime, a small but steady amount of research on dilute acid hydrolysis continued at the USFS's Forest Products Laboratory. During World War II, the US again turned to cellulosic ethanol, this time for conversion to butadiene to produce synthetic rubber. The Vulcan Copper and Supply Company was contracted to construct and operate a plant to convert sawdust into ethanol. The plant was based on modifications to the original German Scholler process as developed by the Forest Products Laboratory. This plant achieved an ethanol yield of per dry ton, but was still not profitable and was closed after the war. With the rapid development of enzyme technologies in the last two decades, the acid hydrolysis process has gradually been replaced by enzymatic hydrolysis. Chemical pretreatment of the feedstock is required to hydrolyze (separate) hemicellulose, so it can be more effectively converted into sugars. The dilute acid pretreatment is developed based on the early work on acid hydrolysis of wood at the USFS's Forest Products Laboratory. In 2009, the Forest Products Laboratory together with the University of Wisconsin–Madison developed a sulfite pretreatment to overcome the recalcitrance of lignocellulose for robust enzymatic hydrolysis of wood cellulose. In his 2007 State of the Union Address on January 23, 2007, US President George W. Bush announced a proposed mandate for of ethanol by 2017. 
Later that year, the US Department of Energy awarded $385 million in grants aimed at jump-starting ethanol production from nontraditional sources like wood chips, switchgrass, and citrus peels. Production methods The stages to produce ethanol using a biological approach are: A "pretreatment" phase to make the lignocellulosic material such as wood or straw amenable to hydrolysis Cellulose hydrolysis (cellulolysis) to break down the molecules into sugars Microbial fermentation of the sugar solution Distillation and dehydration to produce pure alcohol In 2010, a genetically engineered yeast strain was developed to produce its own cellulose-digesting enzymes. Assuming this technology can be scaled to industrial levels, it would eliminate one or more steps of cellulolysis, reducing both the time required and costs of production. Although lignocellulose is the most abundant plant material resource, its usability is curtailed by its rigid structure. As a result, an effective pretreatment is needed to liberate the cellulose from the lignin seal and its crystalline structure so as to render it accessible for a subsequent hydrolysis step. By far, most pretreatments are done through physical or chemical means. To achieve higher efficiency, both physical and chemical pretreatments are required. Physical pretreatment involves reducing biomass particle size by mechanical processing methods such as milling or extrusion. Chemical pretreatment partially depolymerizes the lignocellulose so enzymes can access the cellulose for microbial reactions. Chemical pretreatment techniques include acid hydrolysis, steam explosion, ammonia fiber expansion, organosolv, sulfite pretreatment, SO2-ethanol-water fractionation, alkaline wet oxidation and ozone pretreatment. Besides effective cellulose liberation, an ideal pretreatment has to minimize the formation of degradation products because they can inhibit the subsequent hydrolysis and fermentation steps. The presence of inhibitors further complicates and increases the cost of ethanol production due to required detoxification steps. For instance, even though acid hydrolysis is probably the oldest and most-studied pretreatment technique, it produces several potent inhibitors including furfural and hydroxymethylfurfural. Ammonia Fiber Expansion (AFEX) is an example of a promising pretreatment that produces no inhibitors. Most pretreatment processes are not effective when applied to feedstocks with high lignin content, such as forest biomass. These require alternative or specialized approaches. Organosolv, SPORL ('sulfite pretreatment to overcome recalcitrance of lignocellulose') and SO2-ethanol-water (AVAP®) processes are the three processes that can achieve over 90% cellulose conversion for forest biomass, especially those of softwood species. SPORL is the most energy efficient (sugar production per unit energy consumption in pretreatment) and robust process for pretreatment of forest biomass with very low production of fermentation inhibitors. Organosolv pulping is particularly effective for hardwoods and offers easy recovery of a hydrophobic lignin product by dilution and precipitation. AVAP® process effectively fractionates all types of lignocellulosics into clean highly digestible cellulose, undegraded hemicellulose sugars, reactive lignin and lignosulfonates, and is characterized by efficient recovery of chemicals. Cellulolytic processes The hydrolysis of cellulose (cellulolysis) produces simple sugars that can be fermented into alcohol. 
There are two major cellulolysis processes: chemical processes using acids, or enzymatic reactions using cellulases. Chemical hydrolysis In the traditional methods developed in the 19th century and at the beginning of the 20th century, hydrolysis is performed by attacking the cellulose with an acid. Dilute acid may be used under high heat and high pressure, or more concentrated acid can be used at lower temperatures and atmospheric pressure. A decrystallized cellulosic mixture of acid and sugars reacts in the presence of water to complete individual sugar molecules (hydrolysis). The product from this hydrolysis is then neutralized and yeast fermentation is used to produce ethanol. As mentioned, a significant obstacle to the dilute acid process is that the hydrolysis is so harsh that toxic degradation products are produced that can interfere with fermentation. BlueFire Renewables uses concentrated acid because it does not produce nearly as many fermentation inhibitors, but must be separated from the sugar stream for recycle [simulated moving bed chromatographic separation, for example] to be commercially attractive. Agricultural Research Service scientists found they can access and ferment almost all of the remaining sugars in wheat straw. The sugars are located in the plant's cell walls, which are notoriously difficult to break down. To access these sugars, scientists pretreated the wheat straw with alkaline peroxide, and then used specialized enzymes to break down the cell walls. This method produced of ethanol per ton of wheat straw. Enzymatic hydrolysis Cellulose chains can be broken into glucose molecules by cellulase enzymes. This reaction occurs at body temperature in the stomachs of ruminants such as cattle and sheep, where the enzymes are produced by microbes. This process uses several enzymes at various stages of this conversion. Using a similar enzymatic system, lignocellulosic materials can be enzymatically hydrolyzed at a relatively mild condition (50 °C and pH 5), thus enabling effective cellulose breakdown without the formation of byproducts that would otherwise inhibit enzyme activity. All major pretreatment methods, including dilute acid, require an enzymatic hydrolysis step to achieve high sugar yield for ethanol fermentation. Fungal enzymes can be used to hydrolyze cellulose. The raw material (often wood or straw) still has to be pre-treated to make it amenable to hydrolysis. In 2005, Iogen Corporation announced it was developing a process using the fungus Trichoderma reesei to secrete "specially engineered enzymes" for an enzymatic hydrolysis process. Another Canadian company, SunOpta, uses steam explosion pretreatment, providing its technology to Verenium (formerly Celunol Corporation)'s facility in Jennings, Louisiana, Abengoa's facility in Salamanca, Spain, and a China Resources Alcohol Corporation in Zhaodong. The CRAC production facility uses corn stover as raw material. Microbial fermentation Traditionally, baker's yeast (Saccharomyces cerevisiae), has long been used in the brewery industry to produce ethanol from hexoses (six-carbon sugars). Due to the complex nature of the carbohydrates present in lignocellulosic biomass, a significant amount of xylose and arabinose (five-carbon sugars derived from the hemicellulose portion of the lignocellulose) is also present in the hydrolysate. For example, in the hydrolysate of corn stover, approximately 30% of the total fermentable sugars is xylose. 
As a result, the ability of the fermenting microorganisms to use the whole range of sugars available from the hydrolysate is vital to increase the economic competitiveness of cellulosic ethanol and potentially biobased proteins. At the turn of the millennium, metabolic engineering for microorganisms used in fuel ethanol production showed significant progress. Besides Saccharomyces cerevisiae, microorganisms such as Zymomonas mobilis and Escherichia coli have been targeted through metabolic engineering for cellulosic ethanol production. An attraction towards alternative fermentation organism is its ability to ferment five carbon sugars improving the yield of the feed stock. This ability is often found in bacteria based organisms. In the first decade of the 21st century, engineered yeasts have been described efficiently fermenting xylose, and arabinose, and even both together. Yeast cells are especially attractive for cellulosic ethanol processes because they have been used in biotechnology for hundreds of years, are tolerant to high ethanol and inhibitor concentrations and can grow at low pH values to reduce bacterial contamination. Combined hydrolysis and fermentation Some species of bacteria have been found capable of direct conversion of a cellulose substrate into ethanol. One example is Clostridium thermocellum, which uses a complex cellulosome to break down cellulose and synthesize ethanol. However, C. thermocellum also produces other products during cellulose metabolism, including acetate and lactate, in addition to ethanol, lowering the efficiency of the process. Some research efforts are directed to optimizing ethanol production by genetically engineering bacteria that focus on the ethanol-producing pathway. Gasification process (thermochemical approach) The gasification process does not rely on chemical decomposition of the cellulose chain (cellulolysis). Instead of breaking the cellulose into sugar molecules, the carbon in the raw material is converted into synthesis gas, using what amounts to partial combustion. The carbon monoxide, carbon dioxide and hydrogen may then be fed into a special kind of fermenter. Instead of sugar fermentation with yeast, this process uses Clostridium ljungdahlii bacteria. This microorganism will ingest carbon monoxide, carbon dioxide and hydrogen and produce ethanol and water. The process can thus be broken into three steps: Gasification — Complex carbon-based molecules are broken apart to access the carbon as carbon monoxide, carbon dioxide and hydrogen Fermentation — Convert the carbon monoxide, carbon dioxide and hydrogen into ethanol using the Clostridium ljungdahlii organism Distillation — Ethanol is separated from water A 2002 study has found another Clostridium bacterium that seems to be twice as efficient in making ethanol from carbon monoxide as the one mentioned above. Alternatively, the synthesis gas from gasification may be fed to a catalytic reactor where it is used to produce ethanol and other higher alcohols through a thermochemical process. This process can also generate other types of liquid fuels, an alternative concept successfully demonstrated by the Montreal-based company Enerkem at their facility in Westbury, Quebec. Hemicellulose to ethanol Studies are intensively conducted to develop economic methods to convert both cellulose and hemicellulose to ethanol. Fermentation of glucose, the main product of cellulose hydrolyzate, to ethanol is an already established and efficient technique. 
However, conversion of xylose, the pentose sugar of hemicellulose hydrolyzate, is a limiting factor, especially in the presence of glucose. Moreover, hemicellulose conversion cannot be disregarded, as using it will increase the efficiency and cost-effectiveness of cellulosic ethanol production. Sakamoto et al. (2012) showed the potential of genetically engineering microbes to express hemicellulase enzymes. The researchers created a recombinant Saccharomyces cerevisiae strain that was able to hydrolyze hemicellulose by codisplaying endoxylanase on its cell surface and to assimilate xylose by expressing xylose reductase and xylitol dehydrogenase. The strain was able to convert rice straw hydrolyzate, which contains hemicellulosic components, to ethanol. Moreover, it produced 2.5 times more ethanol than the control strain, showing that cell surface engineering is a highly effective way to produce ethanol. Advantages General advantages of ethanol fuel Ethanol burns more cleanly and more efficiently than gasoline. Because plants consume carbon dioxide as they grow, bioethanol has an overall lower carbon footprint than fossil fuels. Substituting ethanol for oil can also reduce a country's dependence on oil imports. Advantages of cellulosic ethanol over corn or sugar-based ethanol Commercial production of cellulosic ethanol, which unlike corn and sugarcane would not compete with food production, would be highly attractive since it would alleviate pressure on these food crops. Although its processing costs are higher, the price of cellulose biomass is much lower than that of grains or fruits. Moreover, since cellulose is the main component of plants, the whole plant can be harvested, rather than just the fruit or seeds. This results in much better yields; for instance, switchgrass yields twice as much ethanol per acre as corn. Biomass crops for cellulose production require fewer inputs, such as fertilizer and herbicides, and their extensive roots improve soil quality, reduce erosion, and increase nutrient capture. The overall carbon footprint and global warming potential of cellulosic ethanol are considerably lower (see chart) and the net energy output is several times higher than that of corn-based ethanol. The potential raw material is also plentiful. Around 44% of household waste generated worldwide consists of food and greens. An estimated 323 million tons of cellulose-containing raw materials which could be used to create ethanol are thrown away each year in the US alone. This includes 36.8 million dry tons of urban wood wastes, 90.5 million dry tons of primary mill residues, 45 million dry tons of forest residues, and 150.7 million dry tons of corn stover and wheat straw. Moreover, even land marginal for agriculture could be planted with cellulose-producing crops, such as switchgrass, resulting in enough production to substitute for all the current oil imports into the United States. Paper, cardboard, and packaging comprise around 17% of global household waste, although some of this is recycled. As these products contain cellulose, they are transformable into cellulosic ethanol, which would avoid the production of methane, a potent greenhouse gas, during their decomposition. Disadvantages General disadvantages The main overall drawback of ethanol fuel is its lower fuel economy compared to gasoline when it is used in an engine designed for gasoline, which has a lower compression ratio than is optimal for ethanol.
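The fuel-economy penalty noted above comes mostly from ethanol's lower volumetric energy content. The following back-of-the-envelope check uses approximate lower heating values; the round numbers are assumptions for illustration, not figures quoted in this article.

```python
# Approximate lower heating values (assumed round figures).
ethanol_mj_per_litre = 21.2
gasoline_mj_per_litre = 32.0

ratio = ethanol_mj_per_litre / gasoline_mj_per_litre
print(f"ethanol carries ~{ratio:.0%} of gasoline's energy per litre")
print(f"so ~{1 / ratio:.2f} litres of ethanol are needed to replace "
      "1 litre of gasoline if the engine cannot exploit ethanol's higher octane")
```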
Disadvantages of cellulosic ethanol over corn or sugar-based ethanol The main disadvantage of cellulosic ethanol is its high cost and complexity of production, which has been the main impediment to its commercialization. Economics Although the global bioethanol market is sizable (around 110 billion liters in 2019), the vast majority is made from corn or sugarcane, not cellulose. In 2007, the cost of producing ethanol from cellulosic sources was estimated at ca. USD 2.65 per gallon (€0.58 per liter), which is around 2–3 times more expensive than ethanol made from corn. However, the cellulosic ethanol market remains relatively small and reliant on government subsidies. The US government originally set cellulosic ethanol targets gradually ramping up from 1 billion liters in 2011 to 60 billion liters in 2022. However, these annual goals have almost always been waived after it became clear there was no chance of meeting them. Most of the plants to produce cellulosic ethanol were canceled or abandoned in the early 2010s. Plants built or financed by DuPont, General Motors and BP, among many others, were closed or sold. As of 2018, only one major plant remains in the US. In order to be grown at large scale, cellulose biomass must compete with existing uses of agricultural land, mainly for the production of crop commodities. Of the United States' 2.26 billion acres (9.1 million km2) of unsubmerged land, 33% is forestland, 26% pastureland and grassland, and 20% crop land. A study by the U.S. Departments of Energy and Agriculture in 2005 suggested that 1.3 billion dry tons of biomass is theoretically available for ethanol use while maintaining an acceptable impact on forestry and agriculture. Comparison with corn-based ethanol Currently, cellulose is more difficult and more expensive to process into ethanol than corn or sugarcane. The US Department of Energy estimated in 2007 that it costs about $2.20 per gallon to produce cellulosic ethanol, which is 2–3 times as much as ethanol from corn. Enzymes that destroy plant cell wall tissue cost US$0.40 per gallon of ethanol compared to US$0.03 for corn. However, cellulosic biomass is cheaper to produce than corn, because it requires fewer inputs, such as energy, fertilizer and herbicide, and is accompanied by less soil erosion and improved soil fertility. Additionally, nonfermentable and unconverted solids left after making ethanol can be burned to provide the fuel needed to operate the conversion plant and produce electricity. Energy used to run corn-based ethanol plants is derived from coal and natural gas. The Institute for Local Self-Reliance estimates the cost of cellulosic ethanol from the first generation of commercial plants will be in the $1.90–$2.25 per gallon range, excluding incentives. This compares to the current cost of $1.20–$1.50 per gallon for ethanol from corn and the current retail price of over $4.00 per gallon for regular gasoline (which is subsidized and taxed). Enzyme-cost barrier Cellulases and hemicellulases used in the production of cellulosic ethanol are more expensive than their first-generation counterparts. Enzymes required for maize grain ethanol production cost 2.64–5.28 US dollars per cubic meter of ethanol produced. Enzymes for cellulosic ethanol production are projected to cost 79.25 US dollars per cubic meter of ethanol, meaning they are 20–40 times more expensive. The cost difference is attributed to the quantity required. The cellulase family of enzymes is one to two orders of magnitude less efficient.
Therefore, 40 to 100 times more enzyme must be used in production. Each ton of biomass requires 15–25 kilograms of enzyme. More recent estimates are lower, suggesting 1 kg of enzyme per dry tonne of biomass feedstock. There are also relatively high capital costs associated with the long incubation times of the vessels that perform enzymatic hydrolysis. Altogether, enzymes account for a significant 20–40% of the cost of cellulosic ethanol production. A 2016 paper estimates the range at 13–36% of cash costs, with a key factor being how the cellulase enzyme is produced. For cellulase produced offsite, enzyme production amounts to 36% of cash cost. For enzyme produced onsite in a separate plant, the fraction is 29%; for integrated enzyme production, the fraction is 13%. One of the key benefits of integrated production is that biomass instead of glucose is the enzyme growth medium. Biomass costs less, and it makes the resulting cellulosic ethanol a 100% second-generation biofuel, i.e., it uses no ‘food for fuel’. Feedstocks In general there are two types of feedstocks: forest (woody) biomass and agricultural biomass. In the US, about 1.4 billion dry tons of biomass can be sustainably produced annually. About 370 million tons, or 30%, are forest biomass. Forest biomass has higher cellulose and lignin content and lower hemicellulose and ash content than agricultural biomass. Because of the difficulty and low ethanol yield of fermenting pretreatment hydrolysates, especially those very high in five-carbon hemicellulose sugars such as xylose, forest biomass has significant advantages over agricultural biomass. Forest biomass also has a high density, which significantly reduces transportation cost. It can be harvested year-round, which eliminates long-term storage. The near-zero ash content of forest biomass significantly reduces dead load in transportation and processing. To meet the need for biodiversity, forest biomass will be an important part of the biomass feedstock supply mix in the future biobased economy. However, forest biomass is much more recalcitrant than agricultural biomass. In 2009, the USDA Forest Products Laboratory together with the University of Wisconsin–Madison developed efficient technologies that can overcome the strong recalcitrance of forest (woody) biomass, including that of softwood species with low xylan content. Short-rotation intensive culture or tree farming can offer an almost unlimited opportunity for forest biomass production. Wood chips from slash and treetops, sawdust from sawmills, and waste paper pulp are forest biomass feedstocks for cellulosic ethanol production. Switchgrass (Panicum virgatum) is a native tallgrass prairie grass. Known for its hardiness and rapid growth, this perennial grows during the warm months to heights of 2–6 feet. Switchgrass can be grown in most parts of the United States, including swamplands and plains, along streams, and along shorelines and interstate highways. It is self-seeding (no tractor for sowing, only for mowing), resistant to many diseases and pests, and can produce high yields with low applications of fertilizer and other chemicals. It is also tolerant of poor soils, flooding, and drought; it improves soil quality and prevents erosion due to its type of root system. Switchgrass is an approved cover crop for land protected under the federal Conservation Reserve Program (CRP). CRP is a government program that pays producers a fee for not growing crops on land on which crops recently grew.
This program reduces soil erosion, enhances water quality, and increases wildlife habitat. CRP land serves as a habitat for upland game, such as pheasants and ducks, and a number of insects. Switchgrass for biofuel production has been considered for use on Conservation Reserve Program (CRP) land, which could increase ecological sustainability and lower the cost of the CRP program. However, CRP rules would have to be modified to allow this economic use of the CRP land. Miscanthus × giganteus is another viable feedstock for cellulosic ethanol production. This species of grass is native to Asia and is a sterile hybrid of Miscanthus sinensis and Miscanthus sacchariflorus. It has high crop yields, is cheap to grow, and thrives in a variety of climates. However, because it is sterile, it also requires vegetative propagation, making it more expensive. It has been suggested that kudzu may become a valuable source of biomass. Cellulosic ethanol commercialization Fueled by subsidies and grants, a boom in cellulosic ethanol research and pilot plants occurred in the early 2000s. Companies such as Iogen, POET, and Abengoa built refineries that can process biomass and turn it into ethanol, while companies such as DuPont, Diversa, Novozymes, and Dyadic invested in enzyme research. However, most of these plants were canceled or closed in the early 2010s as technical obstacles proved too difficult to overcome. As of 2018, only one cellulosic ethanol plant remained operational. In the later 2010s, various companies occasionally attempted smaller-scale efforts at commercializing cellulosic ethanol, although such ventures generally remained at experimental scale and often depended on subsidies. The companies Granbio, Raízen and the Centro de Tecnologia Canavieira each operate a pilot-scale facility in Brazil; together these produced around 30 million liters in 2019. Iogen, which started as an enzyme maker in 1991 and re-oriented itself to focus primarily on cellulosic ethanol in 2013, owns many patents for cellulosic ethanol production and provided the technology for the Raízen plant. Other companies developing cellulosic ethanol technology as of 2021 include Inbicon (Denmark); companies operating or planning pilot production plants include New Energy Blue (US), Sekab (Sweden) and Clariant (in Romania). Abengoa, a Spanish company with cellulosic ethanol assets, became insolvent in 2021. The Australian Renewable Energy Agency, along with state and local governments, partially funded a pilot plant in 2017 and 2020 in New South Wales as part of efforts to diversify the regional economy away from coal mining. US Government support From 2006, the US Federal government began promoting the development of ethanol from cellulosic feedstocks. In May 2008, Congress passed a new farm bill that contained funding for the commercialization of second-generation biofuels, including cellulosic ethanol. The Food, Conservation, and Energy Act of 2008 provided for grants covering up to 30% of the cost of developing and building demonstration-scale biorefineries for producing "advanced biofuels," which effectively included all fuels not produced from corn kernel starch. It also allowed for loan guarantees of up to $250 million for building commercial-scale biorefineries. In January 2011, the USDA approved $405 million in loan guarantees through the 2008 Farm Bill to support the commercialization of cellulosic ethanol at three facilities owned by Coskata, Enerkem and INEOS New Planet BioEnergy.
The projects represent a combined per year production capacity and will begin producing cellulosic ethanol in 2012. The USDA also released a list of advanced biofuel producers who will receive payments to expand the production of advanced biofuels. In July 2011, the US Department of Energy gave $105 million in loan guarantees to POET for a commercial-scale plant to be built in Emmetsburg, Iowa. See also Second generation biofuels Food vs. fuel References External links List of U.S. Ethanol Plants Cellulosic Ethanol Path is Paved With Various Technologies The Transition to Second Generation Ethanol USDA & DOE Release National Biofuels Action Plan Commercializing Cellulosic Ethanol Cellulosic ethanol output could "explode" Poet Producing Cellulosic Ethanol on Pilot Scale More U.S. backing seen possible for ethanol plants Shell fuels cellulosic ethanol push with new Codexis deal Enerkem to build cellulosic ethanol plant in U.S. Ethanol Production Could Reach 90 Billion Gallons by 2030 backed by Sandia National Laboratories and GM Corp. Sandia National Laboratories & GM study: PDF format from hitectransportation.org. Ethanol From Cellulose: A General Review — P.C. Badger, 2002 US DOE Office of Biological and Environmental Research (OBER). National Renewable Energy Laboratory, Research Advances – Cellulosic Ethanol. USDA Forest Products Laboratory reuters.com, New biofuels to come from many sources: conference, Fri Feb 13, 2009 2:50pm EST reuters.com, U.S. weekly ethanol margins rise to above break even, Fri Feb 13, 2009 4:01pm EST wired.com, One Molecule Could Cure Our Addiction to Oil, 09.24.07 Further reading Ethanol fuel Ethanol Wood products Forestry Renewable fuels Biofuels technology Energy development Renewable energy commercialization
Cellulosic ethanol
Biology
6,940
36,100,100
https://en.wikipedia.org/wiki/FOMP
The magnetocrystalline anisotropy energy of a ferromagnetic crystal can be expressed as a power series of the direction cosines of the magnetic moment with respect to the crystal axes. The coefficients of those terms are the anisotropy constants. In general, the expansion is limited to a few terms. Normally the magnetization curve is continuous with respect to the applied field up to saturation but, in certain intervals of the anisotropy constant values, irreversible field-induced rotations of the magnetization are possible, implying a first-order magnetization transition between equivalent magnetization minima, the so-called first-order magnetization process (FOMP). Theory The total energy of a uniaxial magnetic crystal in an applied magnetic field can be written as the sum of the anisotropy term up to sixth order, neglecting the sixfold planar contribution, and the field-dependent Zeeman energy term, where K1, K2 and K3 are the anisotropy constants up to sixth order, H is the applied magnetic field, Ms is the saturation magnetization, θ is the angle between the magnetization and the easy c-axis, and θH is the angle between the field and the easy c-axis. The total energy is then the sum of these two terms (a sketch of the standard form is given below). Phase diagram of easy and hard directions In order to determine the preferred directions of the magnetization vector in the absence of the external magnetic field, we first analyze the case of a uniaxial crystal. The maxima and minima of the energy with respect to θ must satisfy the condition that the first derivative of the energy vanishes, while for the existence of minima the second derivative must be positive. For symmetry reasons the c-axis and the basal plane are always points of extrema and can be easy or hard directions depending on the anisotropy constant values. We can have two additional extrema along conical directions, at angles given by the roots of a quadratic equation in sin²θ (see below). The two conical solutions are associated with the + and − signs of the root. It can be verified that only one of the two cones is always a minimum and can be an easy direction, while the other is always a hard direction. A useful representation of the diagram of the easy directions and other extrema is the representation in terms of reduced anisotropy constants. The following figure shows the phase diagram for the two cases. All the information concerning the easy directions and the other extrema is contained in a special symbol that marks every different region. It simulates a polar type of energy representation, indicating existing extrema by concave (minimum) and convex (maximum) tips. Vertical and horizontal stems refer to the symmetry axis and the basal plane respectively. The left-hand and right-hand oblique stems indicate the two cones respectively. The absolute minimum (easy direction) is indicated by filling of the tip. Transformation of Anisotropy Constants into Conjugate Quantities Before going into the details of the calculation of the various types of FOMP, we call the reader's attention to a convenient transformation of the anisotropy constants into conjugate quantities. This transformation can be found in such a way that all the results obtained for the case of the field parallel to the c-axis can be immediately transferred to the case of the field perpendicular to the c-axis, and vice versa, according to the following symmetrical dual correspondence: The use of the table is very simple. If we have a magnetization curve obtained with the field perpendicular to the c-axis for a given set of anisotropy constants, we can obtain exactly the same magnetization curve by using the conjugate constants from the table and applying the field parallel to the c-axis, and vice versa.
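The energy expression and the conical-extremum condition referred to above are not reproduced in this text, so the LaTeX sketch below restates them in the standard form used for uniaxial crystals. The symbols follow the definitions given above; the SI factor \mu_0 in the Zeeman term and the exact normalization are assumptions, since the original notation is not shown here.

```latex
% Assumed standard uniaxial energy density (anisotropy plus Zeeman term):
E(\theta) = K_1 \sin^2\theta + K_2 \sin^4\theta + K_3 \sin^6\theta
            - \mu_0 M_s H \cos(\theta - \theta_H)
% At zero field the stationary condition
% \sin\theta\cos\theta\,(2K_1 + 4K_2\sin^2\theta + 6K_3\sin^4\theta) = 0
% gives, besides the c-axis and the basal plane, possible conical extrema at
\sin^2\theta_{\pm} = \frac{-K_2 \pm \sqrt{K_2^2 - 3 K_1 K_3}}{3 K_3}
```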
FOMP examples The determination of the conditions for the existence of FOMP requires the analysis of the dependence of the magnetization curve on the anisotropy constant values, for different directions of the magnetic field. We limit the analysis to the cases of the field parallel or perpendicular to the c-axis, hereafter indicated as the A-case and the P-case, where A denotes axial while P stands for planar. The analysis of the equilibrium conditions shows that two types of FOMP are possible, depending on the final state after the transition: if the transition ends in saturation we have a type-1 FOMP, otherwise a type-2 FOMP. When an easy cone is present we add the suffix C to the description of the FOMP type. So all possible cases of FOMP type are: A1, A2, P1, P2, P1C, A1C. In the following figure some examples of FOMP types are represented, i.e. P1, A1C and P2 for different anisotropy constants; reduced variables are given on the axes. FOMP diagram Tedious calculations now allow the complete determination of the regions of existence of type-1 or type-2 FOMP. As in the case of the diagram of the easy directions and other extrema, the representation in terms of reduced anisotropy constants is convenient. In the following figure we summarize all the FOMP types, distinguished by the labels A1, A2, P1, P2, P1C, A1C, which specify the magnetic field direction (A axial; P planar), the type of FOMP (1 and 2), and the easy cone regions with type-1 FOMP (A1C, P1C). Polycrystalline system Since the FOMP transition represents a singular point in the magnetization curve of a single crystal, we analyze how this singularity is transformed when we magnetize a polycrystalline sample. The result of the mathematical analysis shows that the critical field at which the FOMP transition takes place can still be measured in the case of polycrystalline samples. For determining the characteristics of the FOMP when the magnetic field is applied at a variable angle with respect to the c-axis, we have to examine the evolution of the total energy of the crystal with increasing field for different values of the field angle between 0° and 90°. The calculations are complicated and we report only the conclusions. The sharp FOMP transition, evident in a single crystal, moves to higher fields in polycrystalline samples as the field direction departs from the hard direction, and then becomes smeared out. For larger deviations the magnetization curve becomes smooth, as is evident from computed magnetization curves obtained by summation of the curves corresponding to all angles between 0° and 90°. Origin of high order anisotropy constants The origin of high-order anisotropy constants can be found in the interaction of two sublattices, each of them having a competing high anisotropy energy, i.e. having different easy directions. In particular, we can no longer consider the system as a rigid collinear magnetic structure, but we must allow for substantial deviations from the equilibrium configuration present at zero field.
Limiting the expansion to fourth order and neglecting the in-plane contribution, the anisotropy energy becomes a sum of terms involving the exchange integral (positive in the case of ferromagnetism), the anisotropy constants of the two sublattices, the applied field, the saturation magnetizations of the two sublattices, and the angles between the magnetization of each sublattice and the easy c-axis. The equilibrium equation of the anisotropy energy has no complete analytical solution, so computer analysis is useful. The interesting aspect concerns the simulation of the resulting magnetization curves, which may be continuous or discontinuous with a FOMP. By computer it is possible to fit the obtained results with an equivalent anisotropy energy expression containing equivalent anisotropy constants up to sixth order and the angle between the magnetization and the easy c-axis. So, starting from a fourth-order anisotropy energy expression, we obtain an equivalent expression of sixth order; that is, higher-order anisotropy constants can be derived from the competing anisotropies of different sublattices. FOMP in other symmetries The problem for the cubic crystal system has been approached by Bozorth, and partial results have been obtained by different authors, but exact complete phase diagrams with anisotropy contributions up to sixth and eighth order have only been determined more recently. The FOMP in the trigonal crystal system has been analyzed for the case of the anisotropy energy expression up to fourth order, written in terms of the two polar angles of the magnetization vector with respect to the c-axis. The study of the energy derivatives allows the determination of the magnetic phases and the FOMP phases as in the hexagonal case; see the reference for the diagrams. References Ferromagnetism
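As a numerical illustration of how a FOMP shows up in a computed magnetization curve, the sketch below minimizes the uniaxial energy restated earlier over the magnetization angle at each field value and reports the largest step between successive points; a first-order process appears as a discontinuous jump. The SI form of the energy and the anisotropy constants used here are illustrative assumptions, not parameters from this article.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI units)

def magnetization_curve(K1, K2, K3, Ms, theta_H, fields):
    """Field-direction component of M found by brute-force minimization
    of the assumed uniaxial energy density at each field value."""
    thetas = np.linspace(0.0, np.pi, 20001)
    m_parallel = []
    for H in fields:
        energy = (K1 * np.sin(thetas) ** 2
                  + K2 * np.sin(thetas) ** 4
                  + K3 * np.sin(thetas) ** 6
                  - MU0 * Ms * H * np.cos(thetas - theta_H))
        theta_min = thetas[np.argmin(energy)]
        m_parallel.append(Ms * np.cos(theta_min - theta_H))
    return np.array(m_parallel)

# Illustrative constants only: a negative K1 with positive K2, K3 gives a
# non-axial easy direction at zero field; if the global minimum then switches
# discontinuously as an axial field grows, the curve shows a FOMP-like jump.
Ms = 1.0e6                            # A/m
K1, K2, K3 = -3.0e6, 1.0e6, 1.5e6     # J/m^3
H = np.linspace(0.0, 8.0e6, 400)      # A/m, applied along the c-axis
M = magnetization_curve(K1, K2, K3, Ms, theta_H=0.0, fields=H)
print(f"largest step between successive points: {np.max(np.abs(np.diff(M))):.3e} A/m")
```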
FOMP
Chemistry,Materials_science
1,753
50,336,055
https://en.wikipedia.org/wiki/Glossary%20of%20artificial%20intelligence
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision. A B C D E F G H I J K L M N O P Q R S T U V W X References Works cited Notes Artificial intelligence Machine learning Artificial intelligence Wikipedia glossaries using description lists
Glossary of artificial intelligence
Engineering
101
43,946,298
https://en.wikipedia.org/wiki/Graeme%20Moad
Graeme Moad (born 25 June 1952) is an Australian polymer chemist. Education and career Moad received a Bachelor of Science in 1974 and a Ph.D. in 1977, both from the University of Adelaide. He followed this with postdoctoral research at Pennsylvania State University. In 1979 he joined the CSIRO in Melbourne; CSIRO is Australia's largest scientific research organisation. He has made substantial contributions to the theory of free radical polymerization, and he was co-author with David Solomon of the definitive reference book: The Chemistry of Radical Polymerization (Moad & Solomon, 2006). With fellow CSIRO polymer chemists Ezio Rizzardo and San Thang he is a co-developer of the RAFT process. Honours and awards In 2012 Moad received the Battaerd-Jordan Polymer Medal, and was also elected a Fellow of the Australian Academy of Science. In 2020 he was awarded their David Craig Medal and Lecture. In 2014, he shared the ATSE Clunies-Ross Award with San Thang and Ezio Rizzardo. He was elected a Fellow of the Australian Academy of Technology and Engineering in 2021. Moad was appointed Companion of the Order of Australia (AC) in the 2022 Australia Day Honours for "eminent service to science, particularly polymer design and synthesis and radical polymerization, education through mentoring, and to professional scientific organisations". He was elected a Fellow of the Royal Society in 2023. References 1952 births Living people 21st-century Australian chemists Australian chemists Polymer scientists and engineers CSIRO people University of Adelaide alumni Academic staff of the University of Adelaide Academic staff of Monash University Companions of the Order of Australia Fellows of the Australian Academy of Science Fellows of the Australian Academy of Technological Sciences and Engineering Fellows of the Royal Society People from Orange, New South Wales
Graeme Moad
Chemistry,Materials_science
364
58,650,219
https://en.wikipedia.org/wiki/Shuili%20Snake%20Kiln%20Ceramics%20Cultural%20Park
The Shuili Snake Kiln Ceramics Cultural Park () is a ceramic kiln in Dingkan Village, Shuili Township, Nantou County, Taiwan. Name The name Snake came from the long and narrow shape of the kiln, resembling a snake. History The kiln was built in 1927 where it used to produce large jars and other ceramics. It was founded by ceramic artist master Jiang Song Lin. Architecture The building consists of the culture museum, ceramic classroom and multimedia room. See also List of tourist attractions in Taiwan References External links 1927 establishments in Taiwan Buildings and structures in Nantou County Kilns in Taiwan Tourist attractions in Nantou County Ceramics museums in Taiwan
Shuili Snake Kiln Ceramics Cultural Park
Chemistry,Engineering
135
3,721,430
https://en.wikipedia.org/wiki/A.%20O.%20L.%20Atkin
Arthur Oliver Lonsdale Atkin (31 July 1925 – 28 December 2008), who published under the name A. O. L. Atkin, was a British mathematician. As an undergraduate during World War II, Atkin worked at Bletchley Park cracking German codes. He received his Ph.D. in 1952 from the University of Cambridge, where he was one of John Littlewood's research students. In 1952 he moved to Durham University as a lecturer in mathematics. During 1964–1970, he worked at the Atlas Computer Laboratory at Chilton, computing modular functions. Toward the end of his life, he was Professor Emeritus of mathematics at the University of Illinois at Chicago. Atkin, along with Noam Elkies, extended Schoof's algorithm to create the Schoof–Elkies–Atkin algorithm. Together with Daniel J. Bernstein, he developed the sieve of Atkin. Atkin is also known for his work on properties of the integer partition function and the monster module. He was a vocal fan of using computers in mathematics, so long as the end goal was theoretical advance: "Each new generation of machines makes feasible a whole new range of computations; provided mathematicians pursue these rather than merely break old records for old sports, computation will have a significant part to play in the development of mathematics." Atkin died of nosocomial pneumonia on 28 December 2008, in Maywood, Illinois. Selected publications Atkin, A. O. L. and Morain, F. "Elliptic Curves and Primality Proving." Math. Comput. 61, 29–68, 1993. Atkin, A. O. L. and Bernstein, D. J. Prime sieves using binary quadratic forms, Math. Comp. 73 (2004), 1023–1030.. See also Atkin–Goldwasser–Kilian–Morain certificates Atkin–Lehner theory Elliptic curve primality proving References External links Atkin's university webpage Atkin's info at The Prime Pages 1925 births 2008 deaths 20th-century American mathematicians 21st-century American mathematicians 20th-century British mathematicians 21st-century British mathematicians Number theorists University of Illinois Chicago faculty Alumni of the University of Cambridge Deaths from pneumonia in Illinois Bletchley Park people Academics of Durham University
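For readers who have not met the sieve of Atkin mentioned above, the Python sketch below shows the commonly published simplified form of the sieve; it is only an illustration of the idea, not Atkin and Bernstein's optimized binary-quadratic-form implementation.

```python
def atkin_sieve(limit):
    """Return all primes up to limit using the simplified sieve of Atkin."""
    sieve = [False] * (limit + 1)
    root = int(limit ** 0.5) + 1
    for x in range(1, root):
        for y in range(1, root):
            n = 4 * x * x + y * y
            if n <= limit and n % 12 in (1, 5):
                sieve[n] = not sieve[n]
            n = 3 * x * x + y * y
            if n <= limit and n % 12 == 7:
                sieve[n] = not sieve[n]
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 12 == 11:
                sieve[n] = not sieve[n]
    # Eliminate composites that are multiples of squares of primes.
    for r in range(5, root):
        if sieve[r]:
            for multiple in range(r * r, limit + 1, r * r):
                sieve[multiple] = False
    return [p for p in (2, 3) if p <= limit] + \
           [i for i in range(5, limit + 1) if sieve[i]]

print(atkin_sieve(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```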
A. O. L. Atkin
Mathematics
472
38,292,740
https://en.wikipedia.org/wiki/30%20Piscium
30 Piscium (HIP 154) is a solitary variable star in the zodiac constellation of Pisces. It is visible to the naked eye with an apparent visual magnitude of 4.37. Its annual parallax shift as the Earth moves around the Sun places it around 410 light years away. It is currently moving closer to the Solar System, with a radial velocity of −12 km/s. This is an aging red giant star with a stellar classification of M3 III, indicating it has exhausted the hydrogen at its core and evolved off the main sequence. It is a candidate long-period variable star and has been given the designation YY Psc. It varies in brightness between magnitudes 4.31 and 4.41 with no clear period. Possible periods of 23.1, 32.0, 53.6, and 167.8 days have been identified. The star has 109 times the Sun's radius and is radiating 1,600 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 3,490 K. References M-type giants Slow irregular variables Pisces (constellation) Durchmusterung objects Piscium, 030 224935 000154 9089 Piscium, YY
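The quoted radius, temperature and luminosity can be cross-checked against the Stefan–Boltzmann relation L/L☉ = (R/R☉)² (T/T☉)⁴. A short Python check follows; the solar effective temperature of 5772 K is an assumed reference value, not a figure from this article.

```python
# Stefan-Boltzmann consistency check for the quoted luminosity.
R_ratio = 109        # radius in solar radii (from the article)
T_star = 3490.0      # effective temperature in kelvin (from the article)
T_sun = 5772.0       # assumed solar effective temperature

L_ratio = R_ratio ** 2 * (T_star / T_sun) ** 4
print(f"L/Lsun ~ {L_ratio:.0f}")  # ~1,600, consistent with the value quoted above
```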
30 Piscium
Astronomy
277
68,132,262
https://en.wikipedia.org/wiki/Steitzviridae
Steitzviridae is a family of RNA viruses, which infect prokaryotes. Taxonomy Steitzviridae contains 117 genera: Abakapovirus Achlievirus Adahmuvirus Alehxovirus Aphenovirus Arawsmovirus Arctuvirus Arpirivirus Ashcevirus Bahnicevirus Belbovirus Berdovirus Bicehmovirus Bidhavirus Brikhyavirus Cahrlavirus Cahtavirus Catindovirus Cebevirus Chlurivirus Chorovirus Clitovirus Cohrdavirus Controvirus Cunarovirus Dohnjavirus Endehruvirus Eregrovirus Erimutivirus Fagihovirus Fejonovirus Ferahgovirus Fluruvirus Frobavirus Fudhoevirus Gahmegovirus Garnievirus Gehrmavirus Gernuduvirus Gihfavirus Gredihovirus Gulmivirus Hahkesevirus Henifovirus Hohltdevirus Hohrdovirus Huhbevirus Huohcivirus Huylevirus Hyjrovirus Hylipavirus Iwahcevirus Jiforsuvirus Kecijavirus Kecuhnavirus Kehruavirus Kihsiravirus Kinglevirus Kyanivirus Laimuvirus Lazuovirus Lehptavirus Lihvevirus Limaivirus Lomnativirus Loptevirus Luloavirus Lygehevirus Lyndovirus Mahdsavirus Mahjnavirus Metsavirus Milihnovirus Minusuvirus Mocruvirus Molucevirus Nehumivirus Nihlwovirus Ociwvivirus Pahspavirus Patimovirus Pepusduvirus Phulihavirus Pirifovirus Podtsbuvirus Pohlodivirus Psiaduvirus Psouhdivirus Puduphavirus Pujohnavirus Rodtovirus Rohsdrivirus Sdenfavirus Setohruvirus Sidiruavirus Snuwdevirus Sperdavirus Stehnavirus Suhnsivirus Surghavirus Tamanovirus Tehmuvirus Tehnicivirus Thehlovirus Thyrsuvirus Tikiyavirus Timirovirus Tsuhreavirus Tuskovirus Tuwendivirus Vernevirus Vesehyavirus Vindevirus Weheuvirus Widsokivirus Yeziwivirus Zuysuivirus References Virus families Riboviria
Steitzviridae
Biology
496
46,629,097
https://en.wikipedia.org/wiki/See-through%20graphics
See-through graphics can be added to glass or other transparent panels to provide advertising, branding, architectural expression, one-way privacy and solar control. Perforated self-adhesive window films are often used to create see-through graphics. A graphic is printed on the front side of the film which contains circular holes (perforations) covering up to fifty percent of the surface area. The eye focuses on light reflecting from the printed colors of the graphic rather than light passing through the perforations. The other side of the film is usually black to create a one-way effect (the graphic on the front side is not visible). From this side the view through the film is dominant because the black layer absorbs more light than is reflected. See-through graphics can also be printed onto transparent non-perforated surfaces. This is achieved by printing layers of dot or line patterns (maintaining areas of transparency between the printed areas) on top of each other in exact registration or with minimum overlap. For example, the first printed layer might be black dots followed by a layer of colored dots for the image or graphic. History There is evidence that one-way window blinds were used in 18th century London. The blinds were made from canvas, silk or wire mesh which was painted on one side. The product was advertised by a tradesman called John Brown in 1726. Painted window screens were also found in the US city of Baltimore from 1913. The screens were made from a woven wire mesh. They provided decoration to house fronts, privacy to those inside while maintaining the view out. Perforated films were sold in the 1970s and 1980s as sun blinds for vehicles and buildings. This was cited in the first patent application for see-through graphics. The blinds had brightly colored and reflective front sides. The reverse side was a non-reflective black color which allowed good visibility through the blind. See-through graphics were used in 1982 on the Safe Screen Squash Court, the world's first squash court with four unobstructed one-way vision walls. The one-way effect was created by printing transparent walls with two or more layers of dots in exact registration(without any overlap). Spectators could see into the illuminated court (past a layer of black dots visible from the outside) but the players cannot see past the white or colored dot patterns which are visible from inside. Roland Hill, the designer of the Safe Screen Squash Court, filed the first patent for See-through graphics in 1984, which covered the use of perforated window films for see-through graphics. The company Contra Vision Ltd was launched in 1985 by Hill to commercialize the various see-through graphics technologies and patents. A second patent for see-through graphics was filed by Hill to cover translucent print patterns, which can be illuminated from either side. The world's first "full wrap" bus was completed in New Zealand in 1991. A number of competing manufacturers of Perforated Window Film entered the market in the 1990s, including licensees of Contra Vision Ltd such as 3M, Avery Dennison and Continental Graphics. See-through graphics as a product category was recognized on the Travelling Museum of British invention. The windows of the bus were wrapped in see-through graphics and the product was one of a hundred British innovations which were showcased inside. See-through graphics was also recognized by the Design Council as a best of British Millennium product. 
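The "up to fifty percent" open area mentioned above is fixed purely by the hole geometry, so it can be estimated for any perforation pattern. The sketch below is an illustrative geometry calculation only; the hole diameter, pitch and pattern in the example are assumptions, not the specification of any particular film product.

```python
import math

def open_area_fraction(hole_diameter, pitch, pattern="square"):
    """Fraction of a perforated film's surface occupied by holes.

    'square' assumes one circular hole per pitch-by-pitch cell; 'hex'
    assumes a staggered 60-degree pattern whose unit cell area per hole
    is (sqrt(3)/2) * pitch**2.  Units only need to be consistent.
    """
    hole_area = math.pi * (hole_diameter / 2.0) ** 2
    if pattern == "square":
        cell_area = pitch ** 2
    elif pattern == "hex":
        cell_area = (math.sqrt(3) / 2.0) * pitch ** 2
    else:
        raise ValueError("pattern must be 'square' or 'hex'")
    return hole_area / cell_area

# Example: 1.5 mm holes on an assumed 2.1 mm staggered pitch give roughly
# 46% open area, i.e. close to a 50:50 film.
print(f"{open_area_fraction(1.5, 2.1, 'hex'):.0%}")
```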
Production See-through graphics are most commonly produced on large format printers using solvent inkjet or uv print technology. Material comes on rolls typically up to 1.53m wide and 50m long. The material has an adhesive back which allows it to be fitted to the exterior of windows. Several manufacturers produce different grades of one-way-vision film. An 80:20 has 80% material and 20% hole and is best for retail windows where the light inside the building may well be brighter than the outside at certain times. At the other end of the scale Transport For London (TFL) specify that a 50:50 One Way Vision Film with a 1.5mm hole must be used on taxi and minicab windows. Applications See-through graphics are used for Out of Home (OOH) advertising campaigns as part of vehicle wraps on buses, trams and the back window of taxis. It is also used for advertising on static sites such as telephone kiosks, bus shelters and on glass windows and partitions in airports and other transport hubs. The main benefit is that advertisers can install larger and more impactful graphics which cover windows as well as standard walls. There are a number of tips and tricks to ensure a successful "window wrap". Point of Purchase (PoP)/Point of Sale (PoS) advertisements, ranging from poster size to total windows and door advertisements, are used at retail locations to promote the sale of goods. Building wraps are another application of see-through graphics, converting buildings into advertising billboards. Architectural glass applications cover both exterior window glazing and interior design on doors and partitions. See-through graphics add unique character to the outside of buildings, while providing privacy and solar control benefits to those inside. A novel application is to apply see-through graphics to sunglasses for promotional purposes. Lighting considerations The level of illumination on either side on the window is important to ensure the best effect. Typically a higher level of illumination is needed on the side with the printed graphics to ensure they are seen prominently and the view through to the other side is obscured. Depending upon the nature of the design itself and the level of illumination to either side of the panel, the design either obscures through vision or the observer can choose to focus upon the design or objects on the other side of the panel. References Graphics Window coverings Transparent materials Advertising tools
See-through graphics
Physics
1,169
33,734,529
https://en.wikipedia.org/wiki/Visual%20arts
The visual arts are art forms such as painting, drawing, printmaking, sculpture, ceramics, photography, video, image, filmmaking, design, crafts, and architecture. Many artistic disciplines, such as performing arts, conceptual art, and textile arts, also involve aspects of the visual arts, as well as arts of other types. Also included within the visual arts are the applied arts, such as industrial design, graphic design, fashion design, interior design, and decorative art. Current usage of the term "visual arts" includes fine art as well as applied or decorative arts and crafts, but this was not always the case. Before the Arts and Crafts Movement in Britain and elsewhere at the turn of the 20th century, the term 'artist' had for some centuries often been restricted to a person working in the fine arts (such as painting, sculpture, or printmaking) and not the decorative arts, crafts, or applied visual arts media. The distinction was emphasized by artists of the Arts and Crafts Movement, who valued vernacular art forms as much as high forms. Art schools made a distinction between the fine arts and the crafts, maintaining that a craftsperson could not be considered a practitioner of the arts. The increasing tendency to privilege painting, and to a lesser degree sculpture, above other arts has been a feature of Western art as well as East Asian art. In both regions, painting has been seen as relying to the highest degree on the imagination of the artist and being the furthest removed from manual labour – in Chinese painting, the most highly valued styles were those of "scholar-painting", at least in theory practiced by gentleman amateurs. The Western hierarchy of genres reflected similar attitudes. Education and training Training in the visual arts has generally been through variations of the apprentice and workshop systems. In Europe, the Renaissance movement to increase the prestige of the artist led to the academy system for training artists, and today most of the people who are pursuing a career in the arts train in art schools at tertiary levels. Visual arts have now become an elective subject in most education systems. In East Asia, arts education for nonprofessional artists typically focused on brushwork; calligraphy was numbered among the Six Arts of gentlemen in the Chinese Zhou dynasty, and calligraphy and Chinese painting were numbered among the four arts of scholar-officials in imperial China. Leading country in the development of the arts in Latin America, in 1875 created the National Society of the Stimulus of the Arts, founded by painters Eduardo Schiaffino, Eduardo Sívori, and other artists. Their guild was rechartered as the National Academy of Fine Arts in 1905 and, in 1923, on the initiative of painter and academic Ernesto de la Cárcova, as a department in the University of Buenos Aires, the Superior Art School of the Nation. Currently, the leading educational organization for the arts in the country is the UNA Universidad Nacional de las Artes. Drawing Drawing is a means of making an image, illustration or graphic using any of a wide variety of tools and techniques available online and offline. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface using dry media such as graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoals, pastels, and markers. Digital tools, including pens, stylus, that simulate the effects of these are also used. 
The main techniques used in drawing are: line drawing, hatching, crosshatching, random hatching, shading, scribbling, stippling, and blending. An artist who excels at drawing is referred to as a draftsman or draughtsman. Drawing and painting go back tens of thousands of years. Art of the Upper Paleolithic includes figurative art beginning between about 40,000 to 35,000 years ago. Non-figurative cave paintings consisting of hand stencils and simple geometric shapes are even older. Paleolithic cave representations of animals are found in areas such as Lascaux, France and Altamira, Spain in Europe, Maros, Sulawesi in Asia, and Gabarnmung, Australia. In ancient Egypt, ink drawings on papyrus, often depicting people, were used as models for painting or sculpture. Drawings on Greek vases, initially geometric, later developed into the human form with black-figure pottery during the 7th century BC. With paper becoming common in Europe by the 15th century, drawing was adopted by masters such as Sandro Botticelli, Raphael, Michelangelo, and Leonardo da Vinci, who sometimes treated drawing as an art in its own right rather than a preparatory stage for painting or sculpture. Painting Painting taken literally is the practice of applying pigment suspended in a carrier (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas or a wall. However, when used in an artistic sense it means the use of this activity in combination with drawing, composition, or other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting is also used to express spiritual motifs and ideas; sites of this kind of painting range from artwork depicting mythological figures on pottery to The Sistine Chapel, to the human body itself. History Origins and early history Like drawing, painting has its documented origins in caves and on rock faces. The finest examples, believed by some to be 32,000 years old, are in the Chauvet and Lascaux caves in southern France. In shades of red, brown, yellow and black, the paintings on the walls and ceilings are of bison, cattle, horses and deer. Paintings of human figures can be found in the tombs of ancient Egypt. In the great temple of Ramses II, Nefertari, his queen, is depicted being led by Isis. The Greeks contributed to painting but much of their work has been lost. One of the best remaining representations are the Hellenistic Fayum mummy portraits. Another example is mosaic of the Battle of Issus at Pompeii, which was probably based on a Greek painting. Greek and Roman art contributed to Byzantine art in the 4th century BC, which initiated a tradition in icon painting. The Renaissance Apart from the illuminated manuscripts produced by monks during the Middle Ages, the next significant contribution to European art was from Italy's renaissance painters. From Giotto in the 13th century to Leonardo da Vinci and Raphael at the beginning of the 16th century, this was the richest period in Italian art as the chiaroscuro techniques were used to create the illusion of 3-D space. Painters in northern Europe too were influenced by the Italian school. Jan van Eyck from Belgium, Pieter Bruegel the Elder from the Netherlands and Hans Holbein the Younger from Germany are among the most successful painters of the times. They used the glazing technique with oils to achieve depth and luminosity. 
Dutch masters The 17th century witnessed the emergence of the great Dutch masters such as the versatile Rembrandt who was especially remembered for his portraits and Bible scenes, and Vermeer who specialized in interior scenes of Dutch life. Baroque The Baroque started after the Renaissance, from the late 16th century to the late 17th century. Main artists of the Baroque included Caravaggio, who made heavy use of tenebrism. Peter Paul Rubens, a Flemish painter who studied in Italy, worked for local churches in Antwerp and also painted a series for Marie de' Medici. Annibale Carracci took influences from the Sistine Chapel and created the genre of illusionistic ceiling painting. Much of the development that happened in the Baroque was because of the Protestant Reformation and the resulting Counter Reformation. Much of what defines the Baroque is dramatic lighting and overall visuals. Impressionism Impressionism began in France in the 19th century with a loose association of artists including Claude Monet, Pierre-Auguste Renoir and Paul Cézanne who brought a new freely brushed style to painting, often choosing to paint realistic scenes of modern life outside rather than in the studio. This was achieved through a new expression of aesthetic features demonstrated by brush strokes and the impression of reality. They achieved intense color vibration by using pure, unmixed colors and short brush strokes. The movement influenced art as a dynamic, moving through time and adjusting to newfound techniques and perception of art. Attention to detail became less of a priority in achieving, whilst exploring a biased view of landscapes and nature to the artist's eye. Post-impressionism Towards the end of the 19th century, several young painters took impressionism a stage further, using geometric forms and unnatural color to depict emotions while striving for deeper symbolism. Of particular note are Paul Gauguin, who was strongly influenced by Asian, African and Japanese art, Vincent van Gogh, a Dutchman who moved to France where he drew on the strong sunlight of the south, and Toulouse-Lautrec, remembered for his vivid paintings of night life in the Paris district of Montmartre. Symbolism, expressionism and cubism Edvard Munch, a Norwegian artist, developed his symbolistic approach at the end of the 19th century, inspired by the French impressionist Manet. The Scream (1893), his most famous work, is widely interpreted as representing the universal anxiety of modern man. Partly as a result of Munch's influence, the German expressionist movement originated in Germany at the beginning of the 20th century as artists such as Ernst Kirschner and Erich Heckel began to distort reality for an emotional effect. In parallel, the style known as cubism developed in France as artists focused on the volume and space of sharp structures within a composition. Pablo Picasso and Georges Braque were the leading proponents of the movement. Objects are broken up, analyzed, and re-assembled in an abstracted form. By the 1920s, the style had developed into surrealism with Dali and Magritte. Printmaking Printmaking is creating, for artistic purposes, an image on a matrix that is then transferred to a two-dimensional (flat) surface by means of ink (or another form of pigmentation). Except in the case of a monotype, the same matrix can be used to produce many examples of the print. 
Historically, the major techniques (also called media) involved are woodcut, line engraving, etching, lithography, and screen printing (serigraphy, silk screening) but there are many others, including modern digital techniques. Normally, the print is printed on paper, but other mediums range from cloth and vellum to more modern materials. European history Prints in the Western tradition produced before about 1830 are known as old master prints. In Europe, from around 1400 AD woodcut, was used for master prints on paper by using printing techniques developed in the Byzantine and Islamic worlds. Michael Wolgemut improved German woodcut from about 1475, and Erhard Reuwich, a Dutchman, was the first to use cross-hatching. At the end of the century Albrecht Dürer brought the Western woodcut to a stage that has never been surpassed, increasing the status of the single-leaf woodcut. Chinese origin and practice In China, the art of printmaking developed some 1,100 years ago as illustrations alongside text cut in woodblocks for printing on paper. Initially images were mainly religious but in the Song dynasty, artists began to cut landscapes. During the Ming (1368–1644) and Qing (1616–1911) dynasties, the technique was perfected for both religious and artistic engravings. Development in Japan 1603–1867 Woodblock printing in Japan (Japanese: 木版画, moku hanga) is a technique best known for its use in the ukiyo-e artistic genre; however, it was also used very widely for printing illustrated books in the same period. Woodblock printing had been used in China for centuries to print books, long before the advent of movable type, but was only widely adopted in Japan during the Edo period (1603–1867). Although similar to woodcut in western printmaking in some regards, moku hanga differs greatly in that water-based inks are used (as opposed to western woodcut, which uses oil-based inks), allowing for a wide range of vivid color, glazes and color transparency. After the decline of ukiyo-e and introduction of modern printing technologies, woodblock printing continued as a method for printing texts as well as for producing art, both within traditional modes such as ukiyo-e and in a variety of more radical or Western forms that might be construed as modern art. In the early 20th century, shin-hanga that fused the tradition of ukiyo-e with the techniques of Western paintings became popular, and the works of Hasui Kawase and Hiroshi Yoshida gained international popularity. Institutes such as the "Adachi Institute of Woodblock Prints" and "Takezasado" continue to produce ukiyo-e prints with the same materials and methods as used in the past. Photography Photography is the process of making pictures by means of the action of light. The light patterns reflected or emitted from objects are recorded onto a sensitive medium or storage chip through a timed exposure. The process is done through mechanical shutters or electronically timed exposure of photons into chemical processing or digitizing devices known as cameras. The word comes from the Greek φως phos ("light"), and γραφις graphis ("stylus", "paintbrush") or γραφη graphê, together meaning "drawing with light" or "representation by means of lines" or "drawing." Traditionally, the product of photography has been called a photograph. The term photo is an abbreviation; many people also call them pictures. In digital photography, the term image has begun to replace photograph. (The term image is traditional in geometric optics.) 
Architecture Architecture is the process and the product of planning, designing, and constructing buildings or any other structures. Architectural works, in the material form of buildings, are often perceived as cultural symbols and as works of art. Historical civilizations are often identified with their surviving architectural achievements. The earliest surviving written work on the subject of architecture is De architectura, by the Roman architect Vitruvius in the early 1st century AD. According to Vitruvius, a good building should satisfy the three principles of firmitas, utilitas, venustas, commonly known by the original translation – firmness, commodity and delight. An equivalent in modern English would be: Durability – a building should stand up robustly and remain in good condition. Utility – it should be suitable for the purposes for which it is used. Beauty – it should be aesthetically pleasing. Building first evolved out of the dynamics between needs (shelter, security, worship, etc.) and means (available building materials and attendant skills). As human cultures developed and knowledge began to be formalized through oral traditions and practices, building became a craft, and "architecture" is the name given to the most highly formalized and respected versions of that craft. Filmmaking Filmmaking is the process of making a motion picture, from an initial conception and research, through scriptwriting, shooting and recording, animation or other special effects, editing, sound and music work and finally distribution to an audience; it refers broadly to the creation of all types of films, embracing documentary, strains of theatre and literature in film, and poetic or experimental practices, and is often used to refer to video-based processes as well. Computer art Visual artists are no longer limited to traditional visual arts media. Computers have been used as an ever more common tool in the visual arts since the 1960s. Uses include the capturing or creating of images and forms, the editing of those images (including exploring multiple compositions) and the final rendering or printing (including 3D printing). Computer art is any art in which computers play a role in production or display. Such art can be an image, sound, animation, video, CD-ROM, DVD, video game, website, algorithm, performance or gallery installation. Many traditional disciplines now integrate digital technologies, so the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithmic art and other digital techniques. As a result, defining computer art by its end product can be difficult. Nevertheless, this type of art is beginning to appear in art museum exhibits, though it has yet to prove its legitimacy as a form unto itself, and this technology is widely seen in contemporary art more as a tool rather than a form, as with painting. On the other hand, there are computer-based artworks which belong to a new conceptual and postdigital strand, assuming the same technologies, and their social impact, as an object of inquiry. Computer usage has blurred the distinctions between illustrators, photographers, photo editors, 3-D modelers, and handicraft artists. Sophisticated rendering and editing software has led to multi-skilled image developers. Photographers may become digital artists. Illustrators may become animators. Handicraft may be computer-aided or use computer-generated imagery as a template.
Computer clip art usage has also made the clear distinction between visual arts and page layout less obvious due to the easy access and editing of clip art in the process of paginating a document, especially to the unskilled observer. Plastic arts Plastic arts is a term for art forms that involve physical manipulation of a plastic medium by moulding or modeling such as sculpture or ceramics. The term has also been applied to all the visual (non-literary, non-musical) arts. Materials that can be carved or shaped, such as stone or wood, concrete or steel, have also been included in the narrower definition, since, with appropriate tools, such materials are also capable of modulation. This use of the term "plastic" in the arts should not be confused with Piet Mondrian's use, nor with the movement he termed, in French and English, "Neoplasticism." Sculpture Sculpture is three-dimensional artwork created by shaping or combining hard or plastic material, sound, or text and or light, commonly stone (either rock or marble), clay, metal, glass, or wood. Some sculptures are created directly by finding or carving; others are assembled, built together and fired, welded, molded, or cast. Sculptures are often painted. A person who creates sculptures is called a sculptor. The earliest undisputed examples of sculpture belong to the Aurignacian culture, which was located in Europe and southwest Asia and active at the beginning of the Upper Paleolithic. As well as producing some of the earliest known cave art, the people of this culture developed finely-crafted stone tools, manufacturing pendants, bracelets, ivory beads, and bone-flutes, as well as three-dimensional figurines. Because sculpture involves the use of materials that can be moulded or modulated, it is considered one of the plastic arts. The majority of public art is sculpture. Many sculptures together in a garden setting may be referred to as a sculpture garden. Sculptors do not always make sculptures by hand. With increasing technology in the 20th century and the popularity of conceptual art over technical mastery, more sculptors turned to art fabricators to produce their artworks. With fabrication, the artist creates a design and pays a fabricator to produce it. This allows sculptors to create larger and more complex sculptures out of materials like cement, metal and plastic, that they would not be able to create by hand. Sculptures can also be made with 3-d printing technology. US copyright definition of visual art In the United States, the law protecting the copyright over a piece of visual art gives a more restrictive definition of "visual art". A "work of visual art" is — (1) a painting, drawing, print or sculpture, existing in a single copy, in a limited edition of 200 copies or fewer that are signed and consecutively numbered by the author, or, in the case of a sculpture, in multiple cast, carved, or fabricated sculptures of 200 or fewer that are consecutively numbered by the author and bear the signature or other identifying mark of the author; or (2) a still photographic image produced for exhibition purposes only, existing in a single copy that is signed by the author, or in a limited edition of 200 copies or fewer that are signed and consecutively numbered by the author. 
A work of visual art does not include — (A)(i) any poster, map, globe, chart, technical drawing, diagram, model, applied art, motion picture or other audiovisual work, book, magazine, newspaper, periodical, data base, electronic information service, electronic publication, or similar publication;   (ii) any merchandising item or advertising, promotional, descriptive, covering, or packaging material or container;   (iii) any portion or part of any item described in clause (i) or (ii); (B) any work made for hire; or (C) any work not subject to copyright protection under this title. See also Art materials Asemic writing Collage Conservation and restoration of cultural property Crowdsourcing creative work Décollage Environmental art Found object Graffiti History of art Illustration Installation art Interactive art Landscape art Mathematics and art Mixed media Portraiture Process art Recording medium Sketch (drawing) Sound art Vexillography Video art Visual arts and Theosophy Visual impairment in art Visual poetry References External links ArtLex – online dictionary of visual art terms (archived 24 April 2005) Calendar for Artists – calendar listing of visual art festivals. Art History Timeline by the Metropolitan Museum of Art. Communication design Visual arts media
Visual arts
Engineering
4,419
14,811,220
https://en.wikipedia.org/wiki/PPAN
Suppressor of SWI4 1 homolog is a protein that in humans is encoded by the PPAN gene. The protein encoded by this gene is an evolutionarily conserved protein similar to yeast SSF1 as well as to the gene product of the Drosophila gene peter pan (PPAN). SSF1 is known to be involved in the second step of mRNA splicing. Both SSF1 and PPAN are essential for cell growth and proliferation. This gene was found to be co-transcribed with P2RY11/P2Y(11), an immediate downstream gene on the chromosome that encodes an ATP receptor. The chimeric transcripts of this gene and P2RY11 were found to be ubiquitously present and regulated during granulocytic differentiation. Exogenous expression of this gene was reported to reduce the anchorage-independent growth of some tumor cells. Although involved in ribosome biogenesis, human PPAN is localized not only in nucleoli but also in mitochondria. Depletion of PPAN provokes apoptosis, as shown by increased amounts of p53 and its target gene p21, BAX-driven depolarisation of mitochondria, cytochrome c release as well as caspase-dependent cleavage of PARP. Recent studies revealed that PPAN participates in the regulation of mitochondrial homeostasis, presumably via modulation of autophagy. Furthermore, PPAN is required for proper cycling of cells, since downregulation of PPAN in cancer cells results in a p53-independent cell cycle arrest. One of the introns of PPAN encodes the small nucleolar RNA SNORD105. References Further reading
PPAN
Chemistry
350
73,133,245
https://en.wikipedia.org/wiki/HD%20142990
HD 142990, also known as HR 5942 and V913 Scorpii, is a star about 470 light years from the Earth, in the constellation Scorpius. It is a 5th magnitude star, so it is faintly visible to the naked eye of an observer far from city lights. It is a variable star, whose brightness varies slightly from 5.40 to 5.47 during its 23.5 hour rotation period. It is a member of the Upper Scorpius Region of the Scorpius–Centaurus association. HD 142990 is a helium-weak star. In 1983, Ermanno Borra et al. detected the star's strong (~kilogauss) magnetic field from the Zeeman splitting of the Hβ spectral line. Later estimates put the field strength at several kilogauss. The variability of HD 142990 was discovered in 1977 by Holger Pedersen and Bjarne Thomsen, during a spectroscopic and photometric study of helium-weak and helium-strong stars. In 1978 the star was given the variable star designation V913 Scorpii. Far more extensive photometric data were provided by the Kepler K2 program, which sampled the light curve well and allowed Dominic Bowman et al. to measure the star's rotation period. The rotation period of HD 142990 appears to be decreasing at a rate of about 0.6 seconds per year. This might mean the star is still contracting towards the zero-age main sequence, though other explanations involving magnetohydrodynamics have been proposed. In 1989, Jeffrey Linsky et al. reported the detection of 6 cm radio emission from HD 142990, which appeared to be variable on a time scale of 5 minutes. In 2018, Emil Lenc et al. found that the radio emission from the star is circularly polarized. In 2019, Barnali Das et al. reported that HD 142990 exhibits coherent electron cyclotron maser emission at 200 MHz, making it, at that time, only the fourth hot magnetic star known to emit by this mechanism. References Scorpius 078246 142990 Scorpii, V913 SX Arietis variables B-type main-sequence stars Helium-weak stars 5942
HD 142990
Astronomy
477
23,892,932
https://en.wikipedia.org/wiki/GoldSim
GoldSim is dynamic, probabilistic simulation software developed by GoldSim Technology Group. This general-purpose simulator is a hybrid of several simulation approaches, combining an extension of system dynamics with some aspects of discrete event simulation, and embedding the dynamic simulation engine within a Monte Carlo simulation framework. While it is a general-purpose simulator, GoldSim has been most extensively used for environmental and engineering risk analysis, with applications in the areas of water resource management, mining, radioactive waste management, geological carbon sequestration, aerospace mission risk analysis and energy. History In 1990, Golder Associates, an international engineering consulting firm, was asked by the United States Department of Energy (DOE) to develop probabilistic simulation software that could be used to help with decision support and management within the Office of Civilian Radioactive Waste Management. The results of this effort were two DOS-based programs (RIP and STRIP), which were used to support radioactive waste management projects within the DOE. In 1996, in an effort funded by Golder Associates, the US DOE, the Japan Nuclear Cycle Development Institute (currently the Japan Atomic Energy Agency) and the Spanish National Radioactive Waste Company (ENRESA), the capabilities of RIP and STRIP were incorporated into a general purpose Windows-based simulator called GoldSim. Subsequent funding was also provided by NASA. Initially only offered to the original funding organizations, GoldSim was released to the public in 2002. In 2004, GoldSim Technology Group LLC was spun off from Golder Associates and is now a wholly independent company. Notable applications include providing the simulation framework for: 1) the Yucca Mountain Repository Performance Assessment model developed by Sandia National Laboratories; 2) a comprehensive system-level computational model for performance assessment of geological sequestration of CO2 developed by Los Alamos National Laboratory; 3) a flood operations model to help better understand and fine-tune operations of a large dam used for water supply and flood control in Queensland, Australia; and 4) models for simulating risks associated with future crewed space missions by NASA Ames Research Center. Modeling Environment GoldSim provides a visual and hierarchical modeling environment, which allows users to construct models by adding “elements” (model objects) that represent data, equations, processes or events, and linking them together into graphical representations that resemble influence diagrams. Influence arrows are automatically drawn as elements are referenced by other elements. Complex systems can be translated into hierarchical GoldSim models by creating layers of “containers” (or sub-models). Visual representations and hierarchical structures help users to build very large, complex models that can still be explained to interested stakeholders (e.g., government regulators, elected officials, and the public). Though it is primarily a continuous simulator, GoldSim has a number of features typically associated with discrete simulators. By combining these two simulation methods, systems that are best represented using both continuous and discrete dynamics can often be more accurately simulated.
Examples include tracking the quantity of water in a reservoir that is subject to both continuous inflows and outflows, as well as sudden storm events; and tracking the quantity of fuel in a space vehicle as it is subjected to random perturbations (e.g., component failures, extreme environmental conditions). Because the software was originally developed for complex environmental applications, in which many inputs are uncertain and/or stochastic, in addition to being a dynamic simulator, GoldSim is a Monte Carlo simulator, such that inputs can be defined as distributions and the entire system simulated a large number of times to provide probabilistic outputs. As such, the software incorporates a number of computational features to facilitate probabilistic simulation of complex systems, including tools for generating and correlating stochastic time series, advanced sampling capabilities (including latin hypercube sampling, nested Monte Carlo analysis, and importance sampling), and support for distributed processing. See also Computer Simulation Monte Carlo Simulation List of computer simulation software References External links Simulation software Risk management software Scientific simulation software Mathematical software Environmental science software Numerical software Simulation programming languages Probabilistic software Science software for Windows
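The hybrid continuous/discrete, Monte Carlo approach described above can be illustrated with a short, self-contained sketch. This is not GoldSim code or its interface (GoldSim is a graphical tool); it is a generic Python illustration of the same idea: a reservoir with continuous inflow and outflow, random discrete storm events, and many repeated realizations used to estimate an overflow probability. All parameter values are invented purely for illustration.

```python
import random

def simulate_reservoir(days=365, seed=None):
    """One realization: a daily water balance with continuous flows plus random storms.

    Illustrative values only -- not a calibrated model of any real system.
    """
    rng = random.Random(seed)
    volume = 500.0                 # initial storage, arbitrary units
    capacity = 1000.0              # spill threshold
    overflow_days = 0
    for _ in range(days):
        inflow = max(0.0, rng.gauss(10.0, 3.0))   # stochastic baseline inflow
        outflow = 8.0                              # steady demand/release
        if rng.random() < 0.02:                    # discrete event: roughly 7 storms per year
            inflow += rng.uniform(50.0, 200.0)
        volume = max(0.0, volume + inflow - outflow)
        if volume > capacity:                      # anything above capacity spills
            overflow_days += 1
            volume = capacity
    return overflow_days

# Monte Carlo framework: repeat the dynamic simulation to obtain a probability.
n_runs = 2000
results = [simulate_reservoir(seed=i) for i in range(n_runs)]
p_any_overflow = sum(1 for r in results if r > 0) / n_runs
print(f"Probability of at least one overflow day in a year: {p_any_overflow:.2f}")
```

The outer loop is the probabilistic layer and the inner loop is the dynamic layer, mirroring the embedding of a dynamic simulation engine within a Monte Carlo framework described above.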
GoldSim
Mathematics,Environmental_science
835
42,911,578
https://en.wikipedia.org/wiki/Acyldepsipeptide%20antibiotics
Acyldepsipeptide or cyclic acyldepsipeptide (ADEP) is a class of potential antibiotics first isolated from bacteria and act by deregulating the ClpP protease. Natural ADEPs were originally found as products of aerobic fermentation in Streptomyces hawaiiensis, A54556A and B, and in the culture broth of Streptomyces species, enopeptin A and B. ADEPs are of great interest in drug development due to their antibiotic properties and thus are being modified in attempt to achieve greater antimicrobial activity. The potential role of ADEPs in combating antibiotic drug resistance is postulated due to their novel mode of action that other antibiotics are not known to use, activation of casein lytic protease (ClpP) which is an important bacterial protease. Most antibiotics work through inhibitory processes to establish cell death, while ADEPs actually work through activation of the protease to cause uncontrolled protein degradation, inhibition of cell division, and subsequent cell death. They largely affect Gram-positive bacteria and could be of great use to target antibiotic resistant microbes such as methicillin-resistant Staphylococcus aureus (MRSA), penicillin-resistant Streptococcus pneumoniae (PRSP), Mycobacterium tuberculosis, and others. Despite the potential use of ADEP, possible resistance has been examined in certain species. Mechanism ADEP antibiotics can be used to defeat resistant bacterial infections. They bind to ClpP and allow the protease to degrade proteins without the help of an ATPase. ADEP4/ClpP complexes target primarily newly formed proteins, and FtsZ which allows cell division. ClpP active form is a tetradecamer composed of two heptamers to which 14 ADEPs bind to. ADEPs bind in the cavities formed by two ClpP monomers. Their binding site is composed of hydrophobic residues and corresponds to the binding sites of ClpP ATPases. Upon binding, a series of secondary structures shifts occur from the outer region to the center of ClpP. This puts the flexible N-terminal β-loop, into a disordered state. The β-loops normally form a gate above the proteolytic channel and prevent proteins from randomly passing through. They are critical for ClpP interaction with its substrate and ATPases. When ADEP binds, the β-loops shift outward and this is accompanied by the shifts of two α-helices (α1 and α2), four β-strands (β1, β2, β3 and β5) and other loops which lead to the opening of the ClpP pore. In summary, ADEP4 deregulates ClpP function and changes it from a closed state to an open one. At this point its specific proteolytic activity becomes a less controlled process, with the destruction of proteins that are around in the targeted cell. The peptidase ClpP is highly conserved throughout organisms and is tightly regulated. Without activation, ClpP in normal conditions can degrade short peptides that freely diffuse into its inner degradation chamber. Clp-family proteins are ATP-dependent proteases which play a crucial role in the cell function by degrading misfolded proteins. ClpP is a monomer on its own but oligomerizes into tetradecamers when bound to ATPases. It needs an ATPase to identify, unfold, and transfer targeted big proteins into its proteolytic channel. In fact, ClpP on its own can only degrade peptides that are up to six amino acids long. ADEP binding induces ClpP proteolytic activation that leads to the proteins degradation in the cell, especially nascent proteins and the Ftsz protein which is an important protein in cell division. 
This potentially leads to cell death and is the reason why ADEP is a promising candidate for drug development. For folded proteins, unfolded proteins, and long peptides, ClpP must be activated by a protein in the family of ATPases associated with diverse cellular activities (AAA proteins), such as ClpA, ClpX, or ClpC. These chaperone proteins are responsible for hydrolyzing ATP to ADP, harnessing the energy, and then taking folded proteins and unfolding them. Next, Clp-ATPases slip the unfolded proteins into the degradation chamber within ClpP, allowing for processive degradation of the substrate. This process is tightly regulated with the hydrolysis of ATP to prevent uncontrolled protein or peptide degradation that would be harmful to the cell. In contrast, ADEP activates ClpP without the need for ATP hydrolysis, causing degradation of unfolded proteins and peptides within the cell at uncontrolled rates. ADEPs are thought to bind slightly cooperatively on the surface of each ClpP ring in its hydrophobic pockets and have allosteric effects in the activation of ClpP. This binding causes ClpP to undergo a conformational change such that its N-terminal region opens up its axial pore to allow for partial degradation of products, as compared to processive degradation with ClpA. ADEP activation of ClpP does not allow for folded protein degradation, but even with unfolded protein and peptide degradation, ADEP still causes bacterial cell death. Research has shown that ADEP-activated ClpP targets cell division rather than metabolic processes. ADEP appears to cause ClpP to preferentially degrade FtsZ, an important bacterial protein involved in septum formation that is necessary for bacterial cell division. As a result, Gram-positive bacteria treated with ADEPs form long filaments before cell death. Advantages When bacteria are exposed to antibiotics they can become resistant or tolerant to the antibiotic. ADEPs have a great potential for clinical application due to their high antibacterial activity against Gram-positive pathogens such as Staphylococcus aureus, and other pathogens that are found in biofilms and chronic infections. Their effectiveness increases when combined with different antibiotics such as ciprofloxacin, linezolid, vancomycin or rifampicin. Additional studies should focus more on the toxicity of ADEPs and their implementation for clinical use. Applications After the dysregulation of bacterial proteolytic machinery by a new class of antibiotics was published in the journal Nature, many scientists started to study this class of antibiotics. Most of the experiments are focused on how the ADEP/ClpP complex works, and on the functional differences between ADEP and its synthetic congeners. In 2011, P. Sass and co-workers performed research focusing on the interaction and function of ADEPs and ClpP. They exposed Bacillus subtilis, Staphylococcus aureus and Streptococcus pneumoniae to ADEP to identify how ADEP leads to the death of bacteria. The results demonstrated that ADEP perturbs bacterial cell division. To identify the reason why ADEP inhibited cell division, researchers monitored septum formation and nucleoid segregation in ADEP-treated B. subtilis and ADEP-treated S. aureus. The S. aureus and B. subtilis samples gave equivalent results. This part of the study showed the importance of wild-type ClpP and that the inhibition of septum formation occurs by direct interference of ADEP with the cell division components.
Localization studies using GFP-labeled cell division proteins demonstrated that ADEP causes delocalization of FtsZ and inhibition of Z-ring assembly in both species. The impact of ADEP in a ∆clpX mutant indicated that ADEP affects cell division and also inhibits Z-ring assembly. Finally, the researchers repeated the experiment with a ∆clpP mutant to confirm that the presence of ADEP decreases the abundance of FtsZ through ClpP-mediated degradation. In 2013, scientists at Northeastern University performed an experiment focused on how ADEP 4/ClpP works. The experimental results showed the efficiency of ADEP4 when it is combined with other antibiotics. Researchers monitored the amount of tryptic peptides and found that ADEP4/ClpP induces peptide degradation in a biofilm system. By using Mueller-Hinton broth, they demonstrated that ADEP 4 was more effective than other antibiotics such as rifampicin or vancomycin. However, they observed the same trends, where ADEP4 combined with rifampicin is more effective and actually eradicates all stationary-phase cells. The in vivo results showed the efficacy of ADEP 4 in mice infected with four different S. aureus strains: the laboratory strain SA113, and the clinical isolates USA300, UAMS-1 and strain 37. Chemistry ADEPs are naturally occurring antibiotics. Certain bacteria produce them as a defense mechanism in antagonistic bacterial interactions. For instance, Streptomyces species produce them as secondary metabolites. There are six forms of acyldepsipeptides that are distinguishable by their chemical structure and function. ADEPs generally differ by one or two functional groups that give some of them more flexibility and stability. Their chemical structures are derived from ADEP 1 and are slightly different from one another. For instance, the only difference between ADEP 2 and ADEP 3 is the conformation of the difluorophenylalanine side chain. ADEP 2 has an S configuration while ADEP 3 has an R configuration. Molecular modification In order to develop a useful antibiotic, ADEP continues to be modified for greater antimicrobial activity and stability. By restricting components of ADEP to decrease the molecule's flexibility, binding was enhanced and antimicrobial activity significantly increased. Specific amino acids essential to the peptidolactone core of ADEP were altered and restricted, causing stabilization of ADEP in a bioactive conformation. In fact, the conformational restrictions of ADEP resulted in its ability to activate ClpP increasing seven-fold and its antimicrobial activity 1200-fold. Research on altering ADEP molecules continues in an attempt to construct a new antibiotic for public use. References Further reading Molecular description of ADEP1 Antibiotics
Acyldepsipeptide antibiotics
Biology
2,112
62,692,253
https://en.wikipedia.org/wiki/International%20Economic%20History%20Association
The International Economic History Association (IEHA) is an association of national, regional, and international organizations dedicated to the field of economic history, broadly defined. The IEHA includes 45 member organizations in 40 countries around the world. Headquartered in Utrecht, Netherlands, the IEHA promotes the study of and facilitates collaboration on a variety of projects, publications, and initiatives. While the IEHA has origins in European historiographies (especially those of France and the United Kingdom), it has since expanded its scope and membership to include economies and scholars outside of traditional areas of research. The IEHA is best known for its triennial congress, the World Economic History Congress, an international and interdisciplinary event where over 1,000 economic historians convene each meeting to discuss trends in the field. Attendees of the conference include economists, historians, policymakers, heads of state, government ministers, and scholars of economic history. History Founding At the height of the Cold War in 1960, the IEHA was founded to unite scholars in Western Europe, the United States, and the Soviet Union. Among economists there were concerns about spurring and sustaining economic growth in many economic history departments in the United Kingdom and the United States. Similarly, Alexander Gerschenkron sought to build on Rostow's stages of economic growth with his research on economic backwardness. At the same time, the founding of the IEHA originally stemmed from the work of Fernand Braudel and Immanuel Wallerstein on economic growth in early modern Europe. Throughout the twentieth century, the IEHA gradually grew in size and in the number of papers presented. In 1968, the member organizations of the IEHA convened for the first meeting outside of Europe. The fourth meeting was held in Bloomington, Indiana, United States. By 2012, the organization had expanded its global approach to the discipline by hosting its first conference outside of Europe. Around 750 attendees from 55 countries attended the World Economic History Congress in Stellenbosch, South Africa. European scholars at the conference were more interested in the North–South divide, thus facilitating the development of African economic history as a whole. The conference was, in part, organized by the African Agenda, and boosted tourism to the local community. Academics have noted that the hosting of the Congress in Stellenbosch positioned the country to become one of the leading centres of economic history on the African continent. The opening address, delivered by Minister of Finance Pravin Gordhan, recognized the economic and political potential that the conference had for the South African economy. The first Congress to convene in Asia took place in Kyoto, Japan in August 2015. Presentations focused less on European economies and more on Latin American and Asian economies. The meeting thus marked an important moment, not just for economic history, but also for global history. The conference led to the publication of Jörg Baten's A History of the Global Economy: 1500 to the Present (2016) that, according to one reviewer, "was commissioned by the International Economic History Association and the editor states that his aim is to organize a 'non-Eurocentric history' that presents 'economic history in a balanced way.'" In recent years, the organization has returned its focus to present-day questions.
In 2018, President Anne McCants spoke of the importance of understanding globalization: its origins, its effects on inequality, and the importance of big data. At the congress, held in Boston, Massachusetts, the French economist Thomas Piketty (École des hautes études en sciences sociales), author of Capital in the Twenty-First Century, described the World Economic History Congress as "one of the few places in the world where economists and historians talk to each other, and we truly need this interdisciplinary approach." The IEHA produces an annual bulletin of conferences and meetings for economic historians. Leadership Former presidents of the IEHA include: 1965–1968: Frederic Chapin Lane 1968–1974: Kristof Glamann 1974–1978: Peter Mathias 1978–1982: Zsigmond Pál Pach 1982–1986: Jean-François Bergier 1986–1990: Herman Van der Wee 1994–1998: Gabriel Tortella 1998–2002: Roberto Cortés Conde 2002–2006: Richard Sutch 2006–2009: Riitta Hjerppe 2012–2015: Grietjie Verhoef 2015–2018: Tetsuji Okazaki 2018–2021: Anne McCants Former secretaries general of the IEHA include: 1998–2006: Jan Luiten van Zanden 2006–2012: Jörg Baten 2012–2015: Debin Ma 2018–2021: Jari Eloranta Organization The IEHA comprises three bodies. The General Assembly includes one representative from each member organization. The executive committee oversees the execution of decisions made by the General Assembly, and the Local Organizing Committees are responsible for running the World Economic History Congress. Membership The IEHA comprises 45 member organizations, including the following. World Economic History Congress Every four years (and every three years since 2006), the IEHA hosts a World Economic History Congress (WEHC) on a particular topic in economic history. The meetings aim to bring together scholars to discuss present-day debates in the discipline. Past meetings include: The initial meetings were titled the International Conference and held in western European countries. The fifth meeting was held in Leningrad, Russia and, by the eighth meeting in Budapest, Hungary, the name was changed to the International Economic History Congress. The latter had over 850 economic historians from 88 countries participate. Its goal was to promote regular debates in the international community of scholars. In 1994, the Eleventh International Economic History Congress in Milan, Italy had over 1,100 participants from more than 50 countries. The Congress has also been vital for the development of quantitative economic history, also known as cliometrics. See also Economic History Association Economic History Society Further reading Paul Bairoch, Economics and World History: Myths and Paradoxes (Chicago, USA: University of Chicago Press, 1995) Maxine Berg, "East-West Dialogues: Economic Historians, the Cold War, and Détente." The Journal of Modern History 87, no. 1 (2015): 36–71. doi:10.1086/680261. Rondo E. Cameron, A Concise Economic History of the World: From Paleolithic Times to the Present (Oxford, UK: Oxford University Press, 1997) Donald N. McCloskey, If You're So Smart: The Narrative of Economic Expertise (Chicago, USA: University of Chicago Press, 1992) S.A.J. Parsons and G. Chandler, How to Find Out About Economics: The Commonwealth and International Library: Libraries and Technical Information Division (Elsevier Science, 2014) References Professional associations based in the Netherlands Economic history societies History of business History of technology Economic history journals Organizations established in 1960
International Economic History Association
Technology
1,392
411,007
https://en.wikipedia.org/wiki/Geometric%20standard%20deviation
In probability theory and statistics, the geometric standard deviation (GSD) describes how spread out a set of numbers is when their preferred average is the geometric mean. For such data, it may be preferred to the more usual standard deviation. Note that unlike the usual arithmetic standard deviation, the geometric standard deviation is a multiplicative factor, and thus is dimensionless, rather than having the same dimension as the input values. Thus, the geometric standard deviation may be more appropriately called the geometric SD factor. When using the geometric SD factor in conjunction with the geometric mean, it should be described as "the range from (the geometric mean divided by the geometric SD factor) to (the geometric mean multiplied by the geometric SD factor)", and one cannot add/subtract the "geometric SD factor" to/from the geometric mean. Definition If the geometric mean of a set of numbers $\{A_1, A_2, \dots, A_n\}$ is denoted as $\mu_g$, then the geometric standard deviation is $\sigma_g = \exp\left(\sqrt{\frac{\sum_{i=1}^{n} \left(\ln \frac{A_i}{\mu_g}\right)^2}{n}}\right)$. Derivation If the geometric mean is $\mu_g = \sqrt[n]{A_1 A_2 \cdots A_n}$, then taking the natural logarithm of both sides results in $\ln \mu_g = \frac{1}{n}\ln(A_1 A_2 \cdots A_n)$. The logarithm of a product is a sum of logarithms (assuming $A_i$ is positive for all $i$), so $\ln \mu_g = \frac{1}{n}\left(\ln A_1 + \ln A_2 + \cdots + \ln A_n\right)$. It can now be seen that $\ln \mu_g$ is the arithmetic mean of the set $\{\ln A_1, \ln A_2, \dots, \ln A_n\}$, therefore the arithmetic standard deviation of this same set should be $\ln \sigma_g = \sqrt{\frac{\sum_{i=1}^{n} \left(\ln A_i - \ln \mu_g\right)^2}{n}}$. This simplifies to $\sigma_g = \exp\left(\sqrt{\frac{\sum_{i=1}^{n} \left(\ln \frac{A_i}{\mu_g}\right)^2}{n}}\right)$. Geometric standard score The geometric version of the standard score is $z = \frac{\ln x - \ln \mu_g}{\ln \sigma_g}$. If the geometric mean, standard deviation, and z-score of a datum are known, then the raw score can be reconstructed by $x = \mu_g \, \sigma_g^{\,z}$. Relationship to log-normal distribution The geometric standard deviation is used as a measure of log-normal dispersion analogously to the geometric mean. As the log-transform of a log-normal distribution results in a normal distribution, we see that the geometric standard deviation is the exponentiated value of the standard deviation of the log-transformed values, i.e. $\sigma_g = \exp(\operatorname{stddev}(\ln A))$. As such, the geometric mean and the geometric standard deviation of a sample of data from a log-normally distributed population may be used to find the bounds of confidence intervals analogously to the way the arithmetic mean and standard deviation are used to bound confidence intervals for a normal distribution. See discussion in log-normal distribution for details. References External links Non-Newtonian calculus website Scale statistics Non-Newtonian calculus
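As a minimal illustration of the definition above, the following Python sketch computes the geometric mean and the geometric SD factor by taking logarithms, computing the ordinary mean and standard deviation of the logs, and exponentiating the results. The sample values are arbitrary; the sketch assumes all data are positive and uses the population (divide-by-n) standard deviation, matching the derivation.

```python
import math

def geometric_stats(data):
    """Geometric mean and geometric SD factor of positive values.

    Follows the log-transform relationship described above: take logs,
    compute the ordinary mean and (population) standard deviation,
    then exponentiate both.
    """
    logs = [math.log(x) for x in data]          # all values must be > 0
    mu_log = sum(logs) / len(logs)
    sd_log = math.sqrt(sum((l - mu_log) ** 2 for l in logs) / len(logs))
    return math.exp(mu_log), math.exp(sd_log)   # the SD factor is dimensionless

data = [1.0, 2.0, 4.0, 8.0]                     # illustrative sample
gm, gsd = geometric_stats(data)
print(f"geometric mean = {gm:.3f}, geometric SD factor = {gsd:.3f}")
# The one-sigma range is multiplicative, not additive:
print(f"range: {gm / gsd:.3f} to {gm * gsd:.3f}")
```

For this sample the geometric mean is about 2.83 and the geometric SD factor about 2.17, so the quoted range runs from roughly 1.30 to 6.14, illustrating the divide/multiply convention described above.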
Geometric standard deviation
Mathematics
446
1,164,759
https://en.wikipedia.org/wiki/Water-tube%20boiler
A high pressure watertube boiler (also spelled water-tube and water tube) is a type of boiler in which water circulates in tubes heated externally by fire. Fuel is burned inside the furnace, creating hot gas which boils water in the steam-generating tubes. In smaller boilers, additional generating tubes are separate in the furnace, while larger utility boilers rely on the water-filled tubes that make up the walls of the furnace to generate steam. The heated water/steam mixture then rises into the steam drum. Here, saturated steam is drawn off the top of the drum. In some services, the steam passes through tubes in the hot gas path, (a superheater) to become superheated. Superheated steam is a dry gas and therefore is typically used to drive turbines, since water droplets can severely damage turbine blades. Saturated water at the bottom of the steam drum returns to the lower drum via large-bore 'downcomer tubes', where it pre-heats the feedwater supply. (In large utility boilers, the feedwater is supplied to the steam drum and the downcomers supply water to the bottom of the waterwalls). To increase economy of the boiler, exhaust gases are also used to pre-heat combustion air blown into the burners, and to warm the feedwater supply in an economizer. Such watertube boilers in thermal power stations are also called steam generating units. The older fire-tube boiler design, in which the water surrounds the heat source and gases from combustion pass through tubes within the water space, is typically a much weaker structure and is rarely used for pressures above . A significant advantage of the watertube boiler is that there is less chance of a catastrophic failure: there is not a large volume of water in the boiler nor are there large mechanical elements subject to failure. A water-tube boiler was patented by Blakey of England in 1766 and was made by Dallery of France in 1780. Applications "The ability of watertube boilers to be designed without the use of excessively large and thick-walled pressure vessels makes these boilers particularly attractive in applications that require dry, high-pressure, high-energy steam, including steam turbine power generation". Owing to their superb working properties, the use of watertube boilers is highly preferred in the following major areas: Variety of process applications in industries Chemical processing divisions Pulp and Paper manufacturing plants Refining units Besides, they are frequently employed in power generation plants where large quantities of steam (ranging up to 500 kg/s) having high pressures i.e. approximately and high temperatures reaching up to 550 °C are generally required. For example, the Ivanpah solar-power station uses two Rentech Type-D watertube boilers for plant warmup, and when operating as a fossil-fueled power station. Stationary Modern boilers for power generation are almost entirely water-tube designs, owing to their ability to operate at higher pressures. Where process steam is required for heating or as a chemical component, then there is still a small niche for fire-tube boilers. One notable exception is in typical nuclear-power stations (Pressurized Water Reactors), where the steam generators are generally configured similar to firetube boiler designs. In these applications the hot gas path through the "Firetubes" actually carries the very hot/high pressure primary coolant from the reactor, and steam is generated on the external surface of the tubes. 
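As a rough illustration of the economizer benefit mentioned above (recovering exhaust-gas heat to warm the feedwater), the following back-of-the-envelope Python sketch estimates what fraction of the firing duty a feedwater preheat could supply. All temperatures and the per-kilogram steam heat figure are illustrative assumptions, not design data for any particular boiler.

```python
# Rough energy-balance sketch of the economizer benefit described above.
# All figures are illustrative round numbers, not design data.

cp_water = 4.2              # kJ/(kg*K), specific heat of liquid water (approximate)
feedwater_in = 105.0        # degC, feedwater temperature without an economizer (assumed)
feedwater_out = 180.0       # degC, feedwater temperature after the economizer (assumed)
heat_per_kg_steam = 2800.0  # kJ/kg, rough total heat to raise 1 kg of high-pressure steam

# Heat recovered from the exhaust gas per kg of feedwater:
recovered = cp_water * (feedwater_out - feedwater_in)

# Fraction of the boiler's firing duty that the recovered heat supplies:
saving = recovered / heat_per_kg_steam
print(f"Recovered heat: {recovered:.0f} kJ per kg of steam")
print(f"Approximate fuel saving: {saving:.1%}")
```

Under these assumed numbers the recovered heat is on the order of 300 kJ per kilogram of steam, roughly a tenth of the firing duty, which is why economizers are a standard feature of utility watertube boilers.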
Marine Their ability to work at higher pressures has led to marine boilers being almost entirely watertube. This change began around 1900, and traced the adoption of turbines for propulsion rather than reciprocating (i.e. piston) engines – although watertube boilers were also used with reciprocating engines, and firetube boilers were also used in many marine turbine applications. Railway There has been no significant adoption of water-tube boilers for railway locomotives. A handful of experimental designs were produced, but none of them were successful or led to their widespread use. Most water-tube railway locomotives, especially in Europe, used the Schmidt system. Most were compounds, and a few uniflows. The Norfolk and Western Railway's Jawn Henry was an exception, because it used a steam turbine combined with an electric transmission. LMS 6399 Fury Rebuilt completely after a fatal accident LNER 10000 "Hush hush" Using a Yarrow boiler, rather than Schmidt. Not successful and re-boilered with a conventional boiler. Hybrids A slightly more successful adoption was the use of hybrid water-tube / fire-tube systems. As the hottest part of a locomotive boiler is the firebox, it was an effective design to use a water-tube design here and a conventional fire-tube boiler as an economiser (i.e. pre-heater) in the usual position. One famous example of this was the USA Baldwin 4-10-2 No. 60000, built in 1926. Operating as a compound at a boiler pressure of it covered over successfully. After a year though, it became clear that any economies were overwhelmed by the extra costs, and it was retired to a museum display at the Franklin Institute in Philadelphia, Pennsylvania. A series of twelve experimental locomotives were constructed at the Baltimore and Ohio Railroad's Mt. Clare shops under the supervision of George H. Emerson, but none of them was replicated in any numbers. The only railway use of water-tube boilers in any numbers was the Brotan boiler, invented by Johann Brotan in Austria in 1902, and found in rare examples throughout Europe, although Hungary was a keen user and had around 1,000 of them. Like the Baldwin, it combined a water-tube firebox with a fire-tube barrel. The original characteristic of the Brotan was a long steam drum running above the main barrel, making it resemble a Flaman boiler in appearance. Road While the traction engine was usually built using its locomotive boiler as its frame, other types of steam road vehicles such as lorries and cars have used a wide range of different boiler types. Road transport pioneers Goldsworthy Gurney and Walter Hancock both used water-tube boilers in their steam carriages around 1830. Most undertype wagons used water-tube boilers. Many manufacturers used variants of the vertical cross-tube boiler, including Atkinson, Clayton, Garrett and Sentinel. Other types include the Clarkson 'thimble tube' and the Foden O-type wagon's pistol-shaped boiler. Steam fire-engine makers such as Merryweather usually used water-tube boilers for their rapid steam-raising capacity. Many steam cars used water-tube boilers, and the Bolsover Express company even made a water-tube replacement for the Stanley Steamer fire-tube boiler. Design variations D-type boiler The 'D-type' is the most common type of small- to medium-sized boilers, similar to the one shown in the schematic diagram. It is used in both stationary and marine applications. It consists of a large steam drum vertically connected to a smaller water drum (a.k.a. 
"mud drum") via multiple steam-generating tubes. These drums and tubes as well as the oil-fired burner are enclosed by water-walls - additional water-filled tubes spaced close together so as to prevent gas flow between them. These water wall tubes are connected to both the steam and water drums, so that they act as a combination of preheaters and downcomers as well as decreasing heat loss to the boiler shell. M-type boilers The M-type boilers were used in many US World War II warships including hundreds of Fletcher-class destroyers. Three sets of tubes form the shape of an M, and create a separately fired superheater that allows better superheat temperature control. In addition to the mud drum shown on a D-type boiler, an M-type has a water-screen header and a waterwall header at the bottom of the two additional rows of vertical tubes and downcomers. Low water content The low water content boiler has a lower and upper header connected by watertubes that are directly impinged upon from the burner. This is a "furnace-less" boiler that can generate steam and react quickly to changes in load. Babcock & Wilcox boiler Designed by the American firm of Babcock & Wilcox, this type has a single drum, with feedwater drawn from the bottom of the drum into a header that supplies inclined water-tubes. The watertubes supply steam back into the top of the drum. Furnaces are located below the tubes and drum. This type of boiler was used by the Royal Navy's Leander-class frigates and in United States Navy New Orleans-class cruisers. Stirling boiler The Stirling boiler has near-vertical, almost-straight watertubes that zig-zag between a number of steam and water drums. Usually there are three banks of tubes in a "four drum" layout, but certain applications use variations designed with a different number of drums and banks. They are mainly used as stationary boilers, owing to their large size, although the large grate area does also encourage their ability to burn a wide range of fuels. Originally coal-fired in power stations, they also became widespread in industries that produced combustible waste and required process steam. Paper pulp mills could burn waste bark, sugar refineries their bagasse waste. It is a horizontal drum type of boiler. Yarrow Named after its designers, the then Poplar-based Yarrow Shipbuilders, this type of three-drum boiler has three drums in a delta formation connected by watertubes. The drums are linked by straight watertubes, allowing easy tube-cleaning. This does, however, mean that the tubes enter the drums at varying angles, a more difficult joint to caulk. Outside the firebox, a pair of cold-leg pipes between each drum act as downcomers. Due to its three drums, the Yarrow boiler has a greater water capacity. Hence, this type is usually used in older marine boiler applications. Its compact size made it attractive for use in transportable power generation units during World War II. In order to make it transportable, the boiler and its auxiliary equipment (fuel oil heating, pumping units, fans etc.), turbines, and condensers were mounted on wagons to be transported by rail. White-Forster The White-Forster type is similar to the Yarrow, but with tubes that are gradually curved. This makes their entry into the drums perpendicular, thus simpler to make a reliable seal. Thornycroft Designed by the shipbuilder John I. Thornycroft & Company, the Thornycroft type features a single steam drum with two sets of watertubes either side of the furnace. 
These tubes, especially the central set, have sharp curves. Apart from obvious difficulties in cleaning them, this may also give rise to bending forces as the tubes warm up, tending to pull them loose from the tubeplate and creating a leak. There are two furnaces, venting into a common exhaust, giving the boiler a wide base tapering profile. Forced circulation boiler In a forced circulation boiler, a pump is added to speed up the flow of water through the tubes. Other types O-type boiler A-type boiler Flex-tube boiler M-type control superheater See also Clarkson thimble tube boiler Corner tube boiler Internally rifled boiler tubes (also known as serve tubes) Three-drum boiler References External links Boilers
Water-tube boiler
Chemistry
2,338
12,642,142
https://en.wikipedia.org/wiki/Superose
Superose is a trade name for a collection of FPLC columns which are used in the automated separation of biological molecules. The different columns provided can separate a variety of macromolecules, ranging from small peptides and polysaccharides to DNA strands and entire viruses. The material inside the column is agarose based, meaning that it consists of sugars that are crosslinked to form a gel-like mass. The pores in this material have different sizes, and if a molecule is too big, it does not fit into the pores, meaning that it follows a shorter way to the end of the column. The columns are placed in a holder, and a computerized pumping system pumps a watery solution, often a buffer through the column. A special injection loop allows the injection of the desired sample. See also Size exclusion Sepharose Sephadex References Chromatography Biochemistry
Superose
Chemistry,Biology
186
11,546,489
https://en.wikipedia.org/wiki/List%20of%20photovoltaics%20companies
This is a list of notable photovoltaics (PV) companies. Grid-connected solar photovoltaics (PV) is the fastest growing energy technology in the world, growing from a cumulative installed capacity of 7.7 GW in 2007, to 320 GW in 2016. In 2016, 93% of the global PV cell manufacturing capacity utilized crystalline silicon (cSi) technology, representing a commanding lead over rival forms of PV technology, such as cadmium telluride (CdTe), amorphous silicon (aSi), and copper indium gallium selenide (CIGS). In 2016, manufacturers in China and Taiwan met the majority of global PV module demand, accounting for 68% of all modules, followed by the rest of Asia at 14%. The United States and Canada manufactured 6%, and Europe manufactured a mere 4%. In 2021 China produced about 80% of the polysilicon, 95% of wafers, 80% of cells and 70% of modules. Module production capacity reached 460 GW with crystalline silicon technology assembly accounting for 98%. Photovoltaics companies include PV capital equipment producers, cell manufacturers, panel manufacturers and installers. The list does not include silicon manufacturing companies. Photovoltaic manufacturers Top 10 by year Summary According to EnergyTrend, the 2011 global top ten polysilicon, solar cell and solar module manufacturers by capacity were found in countries including People's Republic of China, United States, Taiwan, Germany, Japan, and Korea. In 2011, the global top ten polysilicon makers by capacity were GCL, Hemlock, OCI, Wacker, LDK, REC, MEMC/SunEdison, Tokuyama, LCY and Woongjin, represented by People's Republic of China, United States, Taiwan, Germany, Japan and South Korea. Historical rankings In 2015, GCL System Integration Technology Company made an increase of 500%, topping 2.5-2.7 GW, which puts it at seventh rank, overtaking Yingli Green, compared to 0.5 GW in 2014. Their solar PV module production appears to have reached a 3.7 GW capacity at the end of 2015. Solar modules, as the final products to be installed to generate electricity, are regarded as the major components to be selected by customers willing to choose solar PV energy. Solar module manufacturers must be sure that their products can be sustainable for application periods of more than 25 years. As a result, major solar module producers have their products tested by publicly recognized testing organizations and guarantee their durable efficiency rate for a certain number of years. The solar PV market has been growing for the past few years. According to solar PV research company PVinsights, worldwide shipments of solar modules in 2011 was around 25 GW, and the shipment year-over-year growth was around 40%. The top five solar module producers in 2011 were: Suntech, First Solar, Yingli, Trina, and Canadian. The top five solar module companies possessed 51.3% market share of solar modules, according to PVinsights' market intelligence report. Top 10 solar cell producers According to an annual market survey by the photovoltaics trade publication Photon International, global production of photovoltaic cells and modules in 2009 was 12.3 GW. The top ten manufacturers accounted for 45% of this total. In 2010, a tremendous growth of solar PV cell shipments doubled the solar PV cell market size. According to the solar PV market research company PVinsights, Suntech topped the ranking of solar cell production. Most of the top ten solar PV producers doubled their shipment in 2010 and five of them were over one gigawatt shipments. 
The top ten solar cell producers dominated the market with an even higher market share, say 50~60%, with respect to an assumed twenty gigawatt cell shipments in 2010. Quarterly ranking Although yearly ranking is as listed above, quarterly ranking can indicate which company can sustain particular conditions such as price adjustment, government feed-in tariff change, and weather conditions. In 2Q11, First Solar regained the top spot in solar module shipments from Suntech. From the 2Q11 results, four phenomena should be noticed: thin film leader First Solar still dominates; more centralization in the solar module market; Chinese companies soared; and the giga-watt game is prevailing (according to the latest solar model shipment report by PVinsigts). Thin film ranking Thin film solar cells are commercially used in several technologies, including cadmium telluride (CdTe), copper indium gallium diselenide (CIGS), and amorphous and other thin-film silicon (a-Si, TF-Si). In 2013, thin-film declined to 9% of worldwide PV production. In 2009, thin films represented 16.8% of total global production, up from 12.5% in 2008. The top ten thin-film producers were: 1100.0 MW First Solar 123.4 MW Suntech solar 94.0 MW Sharp 60.0 MW HELIOSPHERA 50.0 MW Sungen Solar 50.0 MW Trony 50.0 MW Moser Baer 43.0 MW Solar Frontier 42.0 MW Mitsubishi 40.0 MW Kaneka Corporation 40.0 MW Vtech Solar 30.0 MW Würth Solar 30.0 MW Bosch (formerly Ersol) 30.0 MW EPV 1 Estimated 2011 global top 10 polysilicon manufacturers by capacity On the other hand, the 2011 global top ten solar cell makers by capacity are dominated by both Chinese and Taiwanese companies, including Suntech, JA Solar, Trina, Yingli, Motech, Gintech, Canadian Solar, NeoSolarPower, Hanwha Solar One and JinkoSolar. 2011 global top 10 solar cell manufacturers by capacity In terms of solar module by capacity, the 2011 global top ten are Suntech, LDK, Canadian Solar, Trina, Yingli, Hanwha Solar One, Solar World, Jinko Solar, Sunneeg and Sunpower, represented by makers in People's Republic of China and Germany. 2011 global top 10 solar module manufacturers by capacity In terms of wafer and cell capacities, both makers from Taiwan and China have demonstrated significant year over year growth from 2010 to 2011. China and Taiwan production capacity Solar photovoltaic production by country China now manufactures more than half of the world's solar photovoltaics. Its production has been rapidly escalating. In 2001 it had less than 1% of the world market. In contrast, in 2001 Japan and the United States combined had over 70% of world production. By 2011 they produced around 15%. 
Other companies Other notable companies include: Anwell Solar, Hong Kong, China Ascent Solar, Tucson, Arizona, US Cool Earth Solar, California, US Dyesol, Canberra, Australia Eurosolar, Germany Global Solar, Tucson, Arizona, US GreenSun Energy, Jerusalem, Israel Hanwha, Seoul, South Korea HelioVolt, Austin, Texas, US Hitachi, Japan IBC SOLAR, Germany International Solar Electric Technology, Chatsworth, California, US Isofotón, Malaga, Spain Konarka Technologies, Inc., Lowell, Massachusetts, US LDK Solar, Xinyu, China Meyer Burger, Thun, Switzerland Miasolé, California, US Mitsubishi Electric, Tokyo, Japan Nanosolar, San Jose, California, US Odersun, Frankfurt Oder, Germany Panasonic Corporation Osaka, Japan PowerFilm, Inc., Ames, Iowa, US Renewable Energy Corporation, Norway Schott Solar, Germany Signet Solar, California, US Skyline Solar, Mountain View, California, US SolarEdge, Grass Valley, California, US SolarPark Korea, Wanju, South Korea SolarWorld, Bonn, Germany Solimpeks, Munich, Germany SoloPower, San Jose, California, US Spectrolab, Inc., Sylmar, California, US Sulfurcell, company has changed name to Soltecture in 2011, Germany SunEdison Suniva, Norcross, Georgia, US Sun Power Corporation, San Jose, California, US Targray Technology International, Kirkland, Quebec, Canada Tenksolar, Minneapolis, Minnesota, US Topray Solar, China Toshiba, Tokyo, Japan Unirac, Albuquerque, New Mexico, US Wagner & Co., Germany Wirsol, Waghäusel, Germany Xinyi Solar, Wuhu, China List of solar panel factories Below is a list of solar panel factories. It lists actual factories only, former plants are below this first table. Closed solar panel factories See also List of CIGS companies List of concentrating solar thermal power companies List of energy storage projects List of silicon producers Renewable energy industry Silicon Module Super League Solar cell Dye-sensitized solar cell Solar inverter Power optimizer Applied Materials, a solar cell capital equipment producer Notes References External links "Solar Home System" Electrical-engineering-related lists Photovoltaics companies Photovoltaics companies
List of photovoltaics companies
Engineering
1,852
4,915,128
https://en.wikipedia.org/wiki/IDRO%20Group
The Industrial Development & Renovation Organization of Iran (IDRO) known as IDRO Group was established in 1967 in Iran. IDRO Group is one of the largest companies in Iran. It is also one of the largest conglomerates in Asia. IDRO's objective is to develop Iran's industry sector and to accelerate the industrialization process of the country and to export Iranian products worldwide. Today, IDRO owns 117 subsidiaries and affiliated companies both domestically as well as internationally. Businesses In the course of its 40 years of activity, IDRO has gradually become a major shareholder of some key industries in Iran. In recent years and in accordance with the country's privatization policy, IDRO has made great efforts to privatize its affiliated companies. While carrying on its privatization policies and lessening its role as a holding company, IDRO intends to concentrate on its prime missions and to turn into an industrial development agency. IDRO has focused its activities on the following areas in order to materialize such strategy and to expedite the industrial development of Iran: Promotion of local and foreign investments with minority holdings owned by IDRO (less than 50% of the shares) with particular emphasis on new, hi-tech and export-oriented industries. Restructuring the existing industries through participation of reputable foreign companies in order to transfer new technologies and to enhance the non-oil exports of Iran. Development of general contracting activities with the participation of the Iranian private sector and credible foreign companies. Rendering consultancy and support services to foreign investors. Privatization of the existing subsidiaries. Industrial Investment Management Development Automotive Industry Industrial Equipment Machinery Marine Industry Railway Industry Hi-Tech Industries Development General Contracting Health care Banking Privatization IDRO had privatized 140 of its companies worth about 2,000 billion rials ($200 million) in the past. The organization will offer shares of 150 industrial units to private investors by March 2010. In 2009, 290 companies were under the control of the IDRO. Subsidiaries This is a list of IDRO's main subsidiaries (as of 2008): See also Economy of Iran IMIDRO Industry of Iran International rankings of Iran Iranian automobile industry Iranian railway industry List of Iranian companies National Iranian Oil Company National Iranian Petrochemical Company Privatization in Iran Science and technology in Iran Geological Survey and Mineral Exploration of Iran References External links IDRO's official website IDRO annual report (2008/09) IDRO annual report (2003) Government-owned companies of Iran Companies Economy of Iran Industrial development agencies Conglomerate companies of Iran Life sciences industry Manufacturing companies based in Tehran Ministry of Industry, Mine and Trade (Iran) Iranian entities subject to U.S. Department of the Treasury sanctions
IDRO Group
Biology
533
16,717,893
https://en.wikipedia.org/wiki/Single-access%20key
In phylogenetics, a single-access key (also called dichotomous key, sequential key, analytical key, or pathway key) is an identification key where the sequence and structure of identification steps is fixed by the author of the key. At each point in the decision process, multiple alternatives are offered, each leading to a result or a further choice. The alternatives are commonly called "leads", and the set of leads at a given point a "couplet". Single access keys are closely related to decision trees and binary search trees. However, to improve the usability and reliability of keys, many single-access keys incorporate reticulation, changing the tree structure into a directed acyclic graph. Single-access keys have been in use for several hundred years. They may be printed in various styles (e. g., linked, nested, indented, graphically branching) or used as interactive, computer-aided keys. In the latter case, either a longer part of the key may be displayed (optionally hyperlinked), or only a single question may be displayed at a time. If the key has several choices it is described as polychotomous or polytomous. If the entire key consists of exactly two choices at each branching point, the key is called dichotomous. The majority of single-access keys are dichotomous. Diagnostic ('artificial') versus synoptic ('natural') keys Any single-access key organizes a large set of items into a structure that breaks them down into smaller, more accessible subsets, with many keys leading to the smallest available classification unit (a species or infraspecific taxon typically in the form of binomial nomenclature). However, a trade-off exists between keys that concentrate on making identification most convenient and reliable (diagnostic keys), and keys which aim to reflect the scientific classification of organisms (synoptic keys). The first type of keys limits the choice of characteristics to those most reliable, convenient, and available under certain conditions. Multiple diagnostic keys may be offered for the same group of organisms: Diagnostic keys may be designed for field (field guides) or laboratory use, for summer or winter use, and they may use geographic distribution or habitat preference of organisms as accessory characteristics. They do so at the expense of creating artificial groups in the key. An example of a diagnostic key is shown below. It is not based on the taxonomic classification of the included species — compare with the botanical classification of oaks. In contrast, synoptic keys follow the taxonomic classification as close as possible. Where the classification is already based on phylogenetic studies, the key represents the evolutionary relationships within the group. To achieve this, these keys often have to use more difficult characteristics, which may not always be available in the field, and which may require instruments like a hand lens or microscope. Because of convergent evolution, superficially similar species may be separated early in the key, with superficially different, but genetically closely related species being separated much later in the key. Synoptic keys are typically found in scientific treatments of a taxonomic group ("monographs"). An example of a synoptic key (corresponding to the diagnostic key shown below) is shown further below. 
In plants, flower and fruit characteristics often are important for primary taxonomic classification: Structural variants of single-access keys The distinction between dichotomous (bifurcating) and polytomous (multifurcating) keys is a structural one, and identification key software may or may not support polytomous keys. This distinction is less arbitrary than it may appear. Allowing a variable number of choices is disadvantageous in the nested display style, where for each couplet in a polytomous key the entire key must be scanned to the end to determine whether more than a second lead may exist or not. Furthermore, if the alternative lead statements are complex (involving more than one characteristic and possibly "and", "or", or "not"), two alternative statements are significantly easier to understand than couplets with more alternatives. However, the latter consideration can easily be accommodated in a polytomous key where couplets based on a single characteristic may have more than two choices, and complex statements may be limited to two alternative leads. Another structural distinction is whether only lead statements or question-answer pairs are supported. Most traditional single-access keys use the "lead-style", where each option consists of a statement, only one of which is correct. Especially computer-aided keys occasionally use the "question-answer-style" instead, where a question is presented with a choice of answers. The second style is well known from multiple choice testing and therefore more intuitive for beginners. However, it creates problems when multiple characteristics need to be combined in a single step (as in "Flower red and spines present" versus "Flowers yellow to reddish-orange, spines absent"). Presentation styles Single-access keys may be presented in different styles. The two most frequently encountered styles are the Nested style in which all couplets immediately follow their lead, at the expense of separating the leads within a couplet. The most frequent subtype of nested keys are called "indented key", where indentation increases with each level. With a large key this can lead to much whitespace in print, and consequently little remaining room for lead text and illustrations. Although "indented key" is sometimes used as a synonym for nested key, the indentation itself is not an essential feature of a nested key. (Examples of non-hyperlinked, indented nested keys may be found at www.env.gov.bc.ca) Linked style: The leads within a couplet immediately follow each other, making polytomous keys easy to achieve. At the end of each lead some form of pointer (a numbering system, hyperlinks, etc.) create the connection to the couplets that follow this lead. The nested style gives an excellent overview over the structure of the key. With a short key and moderate indentation it can be easy to follow and even backtrace an erroneous identification path. The nested style is problematic with polytomous keys, where each key must be scanned to the end to verify that no further leads exist within a couplet. It also does not easily support reticulation (which requires a link method similar to the one used in the linked style). Advantages and disadvantages A large amount of knowledge about reliable and efficient identification procedures may be incorporated in good single-access keys. 
Characteristics that are reliable and convenient to observe most of the time and for most species (or taxa), and which further provide a well-balanced key (the leads splitting the number of species evenly), will be preferred at the start of the key. However, in practice it is difficult to achieve this goal for all taxa in all conditions. If the information for a given identification step is not available, several potential leads must be followed and identification becomes increasingly difficult. Although software exists that helps in skipping questions in a single-access key, the more general solution to this problem is the construction and use of multi-access keys, which allow a free choice of identification steps and are easily adaptable to different taxa (e.g., very small or very large) as well as different circumstances of identification (e.g., in the field or laboratory). See also Multi-access key References External links www.identificationkey.fr Phylogenetics
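Computationally, a single-access key of the kind described above is a small decision tree that is walked one couplet at a time. The Python sketch below is an illustration only; the couplet questions and taxon names are hypothetical and do not come from the article.

```python
# Minimal sketch of a dichotomous (single-access) key as a decision tree.
# Each internal node is one couplet: a question with two leads ("yes"/"no");
# each leaf is a terminal taxon. All names here are invented examples.
key = {
    "question": "Leaves needle-like?",
    "yes": {
        "question": "Needles borne in bundles?",
        "yes": "Pine (example taxon A)",
        "no": "Spruce (example taxon B)",
    },
    "no": {
        "question": "Leaf margin lobed?",
        "yes": "Oak (example taxon C)",
        "no": "Beech (example taxon D)",
    },
}

def identify(node, answer):
    """Walk the key; `answer` maps a question string to True (yes) or False (no)."""
    while isinstance(node, dict):
        node = node["yes"] if answer(node["question"]) else node["no"]
    return node

# Answering "yes" to every couplet follows the first lead at each step.
print(identify(key, lambda q: True))   # -> "Pine (example taxon A)"
```

A reticulated key, as mentioned above, would simply let several couplets point to the same sub-tree, turning the tree into a directed acyclic graph.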
Single-access key
Biology
1,534
7,984,037
https://en.wikipedia.org/wiki/Correlation%20%28projective%20geometry%29
In projective geometry, a correlation is a transformation of a d-dimensional projective space that maps subspaces of dimension k to subspaces of dimension d − k − 1, reversing inclusion and preserving incidence. Correlations are also called reciprocities or reciprocal transformations. In two dimensions In the real projective plane, points and lines are dual to each other. As expressed by Coxeter, A correlation is a point-to-line and a line-to-point transformation that preserves the relation of incidence in accordance with the principle of duality. Thus it transforms ranges into pencils, pencils into ranges, [complete] quadrangles into [complete] quadrilaterals, and so on. Given a line m and P a point not on m, an elementary correlation is obtained as follows: for every Q on m form the line PQ. The inverse correlation starts with the pencil on P: for any line q in this pencil take the point q ∩ m. The composition of two correlations that share the same pencil is a perspectivity. In three dimensions In a 3-dimensional projective space a correlation maps a point to a plane. As stated in one textbook: If κ is such a correlation, every point P is transformed by it into a plane π′, and conversely, every point P arises from a unique plane π′ by the inverse transformation κ−1. Three-dimensional correlations also transform lines into lines, so they may be considered to be collineations of the two spaces. In higher dimensions In general n-dimensional projective space, a correlation takes a point to a hyperplane. This context was described by Paul Yale: A correlation of the projective space P(V) is an inclusion-reversing permutation of the proper subspaces of P(V). He proves a theorem stating that a correlation φ interchanges joins and intersections, and for any projective subspace W of P(V), the dimension of the image of W under φ is n − 2 − dim W, where n is the dimension of the vector space V used to produce the projective space P(V). Existence of correlations Correlations can exist only if the space is self-dual. For dimensions 3 and higher, self-duality is easy to test: A coordinatizing skewfield exists and self-duality fails if and only if the skewfield is not isomorphic to its opposite. Special types of correlations Polarity If a correlation φ is an involution (that is, two applications of the correlation equal the identity: φ(φ(P)) = P for all points P) then it is called a polarity. Polarities of projective spaces lead to polar spaces, which are defined by taking the collection of all subspaces which are contained in their image under the polarity. Natural correlation There is a natural correlation induced between a projective space P(V) and its dual P(V∗) by the natural pairing between the underlying vector spaces V and its dual V∗, where every subspace W of V∗ is mapped to its orthogonal complement W⊥ in V, defined as W⊥ = {v ∈ V : w(v) = 0 for all w ∈ W}. Composing this natural correlation with an isomorphism of projective spaces induced by a semilinear map produces a correlation of P(V) to itself. In this way, every nondegenerate semilinear map induces a correlation of a projective space to itself. References Projective geometry Functions and mappings
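The natural correlation described above can be made concrete numerically. The sketch below is an illustration, not part of the article: it identifies V∗ with R^n via the standard dot product and computes the annihilator W⊥ of a subspace W as a null space, checking that dimensions behave as an inclusion-reversing correspondence should.

```python
import numpy as np

def annihilator(W_rows):
    """Basis of the annihilator (orthogonal complement) of span(W_rows) in R^n."""
    A = np.atleast_2d(np.asarray(W_rows, dtype=float))
    _, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:]          # rows span W-perp; there are n - dim(W) of them

W = [[1, 0, 0, 0],
     [0, 1, 0, 0]]            # a 2-dimensional subspace of R^4
W_perp = annihilator(W)
print(W_perp.shape[0])        # -> 2, i.e. dim(W-perp) = 4 - dim(W)
```

In projective terms this example sends a line (projective dimension 1) in a 3-dimensional projective space to another subspace of dimension 3 − 1 − 1 = 1, matching the d − k − 1 rule in the lead.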
Correlation (projective geometry)
Mathematics
682
38,417,785
https://en.wikipedia.org/wiki/Superelement
In scientific computing and computational engineering, a superelement is a finite element method technique in which a new type of finite element is defined by grouping and processing a set of finite elements. A superelement describes a part of a problem and can be solved locally before being incorporated into the global problem. Substructuring a problem by means of superelements may facilitate the division of labor and overcome computer memory limitations. History Superelements were invented in the aerospace industry, where the complexity and size of problems exceeded the solving capabilities of the computational hardware. The development of superelements made it possible to solve larger problems by breaking down complex systems such as complete airplanes. References Finite element method
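The core numerical operation behind a superelement is static condensation: the degrees of freedom interior to the substructure are eliminated so that only the boundary (interface) degrees of freedom remain in the global problem. The Python/NumPy sketch below shows that reduction under the usual linear-static assumptions; the partitioning and names are illustrative and not taken from any particular FEM package.

```python
import numpy as np

def condense(K, f, boundary, interior):
    """Static condensation of the interior DOFs of a substructure.

    K, f               : stiffness matrix and load vector of the substructure
    boundary, interior : index lists of interface and internal DOFs
    Returns the condensed stiffness and load acting on the boundary DOFs only.
    """
    Kbb = K[np.ix_(boundary, boundary)]
    Kbi = K[np.ix_(boundary, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Kii = K[np.ix_(interior, interior)]

    Kii_inv = np.linalg.inv(Kii)                          # fine for a small sketch
    K_super = Kbb - Kbi @ Kii_inv @ Kib                   # superelement stiffness
    f_super = f[boundary] - Kbi @ Kii_inv @ f[interior]   # superelement load
    return K_super, f_super
```

The condensed pair (K_super, f_super) is what the superelement contributes to the assembled global system; interior displacements can be recovered afterwards from u_i = inv(K_ii) (f_i − K_ib u_b).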
Superelement
Mathematics
142
351,036
https://en.wikipedia.org/wiki/Staple%20%28fastener%29
A staple is a type of two-pronged fastener, usually metal, used for joining, gathering, or binding materials together. Large staples might be used with a hammer or staple gun for masonry, roofing, corrugated boxes and other heavy-duty uses. Smaller staples are used with a stapler to attach pieces of paper together; such staples are a more permanent and durable fastener for paper documents than the paper clip. Etymology The word "staple" originated in the late thirteenth Century, from Old English stapol, meaning "post, pillar". The word's first usage in the paper-fastening sense is attested from 1895. History In ancient times, the staple had several different functions. Large metal staples dating from the 6th century BC have been found in the masonry works of the Persian empire (ancient Iran). For the construction of the Pasargadae and later Ka'ba-ye Zartosht, these staples, which are known as "dovetail" or "swallowtail" staples, were used for tightening stones together. The home stapling machine was developed by Henry Heyl in 1877 and registered under US Patent No. 195,603. Heyl's companies, American Paper-Box Machine Company, Novelty Paper Box Company, and Standard Box Company, all of Philadelphia, manufactured machinery using staples in paper packaging and for saddle stitching. Advantages Most kinds of staples are easier to produce than nails or screws. The crown of the staple can be used to bridge materials butted together. The crown can bridge a piece and fasten it without puncturing, with a leg on either side, e.g. fastening electrical cables to wood framing. The crown provides greater surface area than other comparable fasteners. This is generally more helpful with thinner materials. Disadvantages Staples generally have lower holding power compared to nails or screws. This can make them unsuitable for heavy-duty applications where strong connections are required. Once a staple has been driven, it is difficult to remove without causing damage to the surrounding material. This contrasts with screws, which can often be removed and reused. When used to hold paper together, staples create a more or less permanent attachment. Removing them without damaging the paper can be challenging, whereas paperclips can be easily added and removed without harming the paper. While it's possible to remove and reuse staples, doing so can be difficult and often renders the staple unusable for future use. Paperclips, in contrast, are designed to be reusable. Paper staples The term "stapling" is used for both fastening sheets of paper together with bent legs or fastening sheets of paper to something solid with straight legs; however, when differentiating between the two, the term "tacking" is used for straight-leg stapling, while the term "stapling" is used for bent-leg stapling. Specifications Modern staples for paper staplers are made from zinc-plated steel wires glued together and bent to form a long strip of staples. Staple strips are commonly available as "full strips" with 210 staples per strip. Both copper plated and more expensive stainless steel staples which do not rust are also available, but uncommon. Some staple sizes are used more commonly than others, depending on the application required. Some companies have unique staples just for their products. Staples from one manufacturer may or may not fit another manufacturer's unit even if they look similar and serve the same purpose. Staples are often described as X/Y (e.g. 
24/6 or 26/6), where the first number X is the gauge of the wire (AWG), and the second number Y is the length of the shank (leg) in millimeters. Some exceptions to this rule include staple sizes like No. 10. Common sizes for the home and office include: 26/6, 24/6, 24/8, 13/6, 13/8 and No. 10 for mini staplers. Common sizes for heavy duty staplers include: 23/8, 23/12, 23/15, 23/20, 23/24, 13/10, and 13/14. Stapleless staplers cut and bend paper without using metal fasteners. Standards There are few standards for staple size, length and thickness. This has led to many different incompatible staples and staplers systems, all serving the same purpose or applications. 24/6 staples are described by the German DIN 7405 standard. In the United States, the specifications for non-medical industrial staples are described in ASTM F1667-15, Standard Specification for Driven Fasteners: Nails, Spikes, and Staples. A heavy duty office staple might be designated as F1667 STFCC-04: ST indicates staple, FC indicates flat top crown, C indicates cohered (joined into a strip), and 04 is the dash number for a staple with a length of 0.250 inch (6 mm), a leg thickness of 0.020 inch (500 μm), a leg width of 0.030 inch (800 μm), and a crown width of 0.500 inch (13 mm). In the home Staples are most commonly used to bind a stack of individual paper pages. A mechanical or electrical stapler may apply them by passing them through the paper pages and then clinching the staple legs that protrude from the bottom of the page stack. When using a stapler, the papers to be fastened are placed between the main body and the anvil. The papers are pinched between the body and the anvil, then a drive blade pushes on the crown of the staple on the end of the staple strip. The staple breaks from the end of the strip and the legs of the staple are forced through the paper. As the legs hit the grooves in the anvil they are bent to hold the pages together. Many staplers have an anvil in the form of a "pinning" or "stapling" switch. This allows a choice between bending in or out. The outward bent staples are easier to remove and are for temporary fastening or "pinning". Most staplers are capable of stapling without the anvil to drive straight leg staples for tacking. There are various types of staples for paper, including heavy-duty staples, designed for use on documents 20, 50, or over 100 pages thick. There are also speedpoint staples, which have slightly sharper teeth so they can go through paper more easily. In business Staples are commonly considered a neat and efficient method of binding paperwork because they are relatively unobtrusive, low cost, and readily available. Large staples found on corrugated cardboard boxes have folded legs. They are applied from the outside and do not use an anvil; jaw-like appendages push through the cardboard alongside the legs and bend them from the outside. Saddle stitch staplers, also known as "booklet staplers," feature a longer reach from the pivot point than general-purpose staplers and bind pages into a booklet or "signature". Some can use "loop-staples" that enable the user to integrate folded matter into ring books and binders. Outward clinch staples are blind staples. There is no anvil, and they are applied with a staple gun. When applied, each staple leg forms a curve bending outwards. This is in part caused by the shape of the crown, which is like an inverted "V", and not flat as in ordinary staples. 
Also, the legs are sharpened with an inside bevel point, causing them to tend to go outwards when forced into the base material. These staples are used for upholstery work, especially in vehicles, where they are used for fastening fabric or leather to a foam base. These staples are also used when installing fiberglass insulation batts around air ducts- the FSK paper sheathing is overlapped, and the two layers are stapled together before sealing with tape. In packaging Staples are used in various types of packaging. Staples can attach items to paperboard for carded packaging Staples of stitches can be used to attach the manufacturer's joint of corrugated boxes Staples are used to close corrugated boxes. Small (nominally -inch crown) staples can be applied to a box with a post stapler. Wider crown (nominally -inch) staples can be applied with a blind clincher Staples can help fabricate and attach paperwork to wooden boxes and crates. In construction Construction staples are commonly larger, have a more varied use, and are delivered by a staple gun or hammer tacker. Staple guns do not have backing anvils and are exclusively used for tacking (with the exception of outward-clinch staplers used for fastening duct insulation). They typically have staples made from thicker metal. Some staple guns use arched staples for fastening small cables, e.g. phone or cable TV, without damaging the cable. Devices known as hammer tackers or staple hammers operate without complex mechanics as a simple head loaded with a strip of staples drives them directly; this method requires a measure of skill. Powered electric staplers or pneumatic staplers drive staples easily and accurately; they are the simplest manner of applying staples, but are hindered by a cord or hose. Cordless electric staplers use a battery, typically rechargeable and sometimes replaceable. In medicine Surgical staples are used for the closing of incisions and wounds, a function also performed by sutures. See also Stapler Staple gun Staple remover Hammer tacker Paper clip References External links —discusses many uses of the word . Fasteners Stationery Woodworking Packaging Metallic objects Office equipment
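The X/Y size designation described under Specifications above is easy to decode programmatically. The small sketch below assumes, as the article states, that X is a plain round-wire AWG gauge and Y is the leg length in millimetres; the AWG-to-diameter conversion is the standard formula and the resulting diameters are only indicative.

```python
def parse_staple_size(code):
    """Decode an X/Y staple designation, e.g. "24/6" or "23/12"."""
    gauge, leg_mm = (int(part) for part in code.split("/"))
    # Standard AWG relation: diameter in mm = 0.127 * 92 ** ((36 - gauge) / 39)
    wire_mm = 0.127 * 92 ** ((36 - gauge) / 39)
    return {"wire_gauge": gauge, "leg_length_mm": leg_mm,
            "approx_wire_diameter_mm": round(wire_mm, 2)}

print(parse_staple_size("24/6"))    # a common office size
print(parse_staple_size("23/12"))   # a heavy-duty size
```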
Staple (fastener)
Physics,Engineering
1,980
40,485,987
https://en.wikipedia.org/wiki/Philip%20Maini
Philip Kumar Maini (born 16 October 1959 in Magherafelt, Northern Ireland) is a Northern Irish mathematician. Since 1998, he has been the Professor of Mathematical Biology at the University of Oxford and is the director of the Wolfson Centre for Mathematical Biology in the Mathematical Institute. Personal life Philip Maini is the son of Panna Lal Maini and Satya Wati Bhandari. Panna Lal and Satya Wati were from Punjab in North West India. Panna Lal traveled to Northern Ireland in 1954. He had sailed to London on the ship Maloja of the Peninsula and Orient Steam Navigation Company arriving there on 18 February 1954. Satya Wati and Philip's elder brother Arvind did not arrive in Northern Ireland until 1957. Education Maini was educated at Rainey Endowed School in County Londonderry and Balliol College, Oxford where he was awarded a BA in 1982 and a DPhil in 1985, the latter for a thesis modelling morphogenetic pattern formation supervised by James D. Murray Research and career After a postdoctoral research position at Oxford and an associate professorship at the University of Utah, he returned to Oxford in 1990 as university lecturer in mathematical biology with a tutorial fellowship at Brasenose College, Oxford. He became director of the Wolfson Centre for Mathematical Biology in 1998, then Statutory Professor in Mathematical Biology and professorial fellow of St John's College, Oxford in 2005. Maini's research includes mathematical modelling of tumours, wound healing and embryonic pattern formation, and the theoretical analysis of these models. His research has been funded by the Engineering and Physical Sciences Research Council (EPSRC) and Biotechnology and Biological Sciences Research Council (BBSRC). He has supervised 53 PhD students. From 2002 to 2015 Maini was the editor-in-chief of the Bulletin of Mathematical Biology and has served on the editorial boards of many other journals. Maini gave an invited talk at ICM 2010 in Hyderabad, speaking on "Modelling Aspects of Tumour Metabolism." Awards and honours Maini was elected a Fellow of the Royal Society (FRS) in 2015. His certificate of election reads: Maini was an elected member of the boards of the Society for Mathematical Biology and the European Society for Mathematical and Theoretical Biology. He is a Fellow of the Institute of Mathematics and its Applications (IMA), the Society for Industrial and Applied Mathematics (SIAM), and the Royal Society of Biology, and is a corresponding member of the Mexican Academy of Sciences. He has held visiting positions at universities worldwide. In 2017, he was elected to a fellowship of the Academy of Medical Sciences and the next year elected a Foreign Fellow by the Indian National Science Academy. In 2021, he was elected Fellow of the European Academy of Sciences and a Fellow of the American Association for the Advancement of Science. Maini co-authored a 1997 Bellman Prize-winning paper and received a Royal Society Leverhulme Trust Senior Research Fellowship and Wolfson Research Merit Award, and the London Mathematical Society Naylor Prize. In 2024 he was awarded the Sylvester Medal by the Royal Society "for his contributions to mathematical biology, especially the interdisciplinary modelling of biomedical phenomena and systems". 
References External links Official web site 1959 births Living people People from Magherafelt People educated at Rainey Endowed School Alumni of Balliol College, Oxford University of Utah faculty Fellows of St John's College, Oxford Mathematicians from Northern Ireland Theoretical biologists Fellows of the Society for Industrial and Applied Mathematics Fellows of the Royal Society Foreign fellows of the Indian National Science Academy
Philip Maini
Biology
718
13,612,232
https://en.wikipedia.org/wiki/Amelioration%20pattern
In software engineering, an amelioration pattern is an anti-pattern formed when an existing software design pattern was edited (i.e. rearranged, added or deleted) to better suit a particular problem so as to achieve some further effect or behavior. In this sense, an amelioration pattern is transformational in character. References External links Amelioration Pattern at the Portland Pattern Repository Software design patterns
Amelioration pattern
Technology,Engineering
84
178,870
https://en.wikipedia.org/wiki/Atavism
In biology, an atavism is a modification of a biological structure whereby an ancestral genetic trait reappears after having been lost through evolutionary change in previous generations. Atavisms can occur in several ways, one of which is when genes for previously existing phenotypic features are preserved in DNA, and these become expressed through a mutation that either knocks out the dominant genes for the new traits or makes the old traits dominate the new one. A number of traits can vary as a result of shortening of the fetal development of a trait (neoteny) or by prolongation of the same. In such a case, a shift in the time a trait is allowed to develop before it is fixed can bring forth an ancestral phenotype. Atavisms are often seen as evidence of evolution. In social sciences, atavism is the tendency of reversion: for example, people in the modern era reverting to the ways of thinking and acting of a former time. The word atavism is derived from the Latin atavus—a great-great-great-grandfather or, more generally, an ancestor. Biology Evolutionarily traits that have disappeared phenotypically do not necessarily disappear from an organism's DNA. The gene sequence often remains, but is inactive. Such an unused gene may remain in the genome for many generations. As long as the gene remains intact, a fault in the genetic control suppressing the gene can lead to it being expressed again. Sometimes, the expression of dormant genes can be induced by artificial stimulation. Atavisms have been observed in humans, such as with infants born with vestigial tails (called a "coccygeal process", "coccygeal projection", or "caudal appendage"). Atavism can also be seen in humans who possess large teeth, like those of other primates. In addition, a case of "snake heart", the presence of "coronary circulation and myocardial architecture [that closely] resemble those of the reptilian heart", has also been reported in medical literature. Atavism has also recently been induced in avian dinosaur (bird) fetuses to express dormant ancestral non-avian dinosaur (non-bird) features, including teeth. Other examples of observed atavisms include: Hind limbs in cetaceans and sirenians. Extra toes of the modern horse. Reappearance of limbs in limbless vertebrates. Re-evolution of sexuality from parthenogenesis in oribatid mites. Teeth in avian dinosaurs (birds). Dewclaws in dogs. Reappearance of prothoracic wings in insects. Reappearance of wings on wingless stick insects and leaf insects and earwigs. Atavistic muscles in several birds and mammals such as the beagle and the jerboa. Extra toes in guinea pigs. Reemergence of sexual reproduction in the flowering plant Hieracium pilosella and the Crotoniidae family of mites. Webbed feet in adult axolotls. Human tails (not pseudo-tails) and supernumerary nipples in humans (and other primates). Color blindness in humans. Culture Atavism is a term in Joseph Schumpeter's explanation of World War I in twentieth-century liberal Europe. He defends the liberal international relations theory that an international society built on commerce will avoid war because of war's destructiveness and comparative cost. His reason for World War I is termed "atavism", in which he asserts that senescent governments in Europe (those of the German Empire, Russian Empire, Ottoman Empire, and Austro-Hungarian Empire) pulled the liberal Europe into war, and that the liberal regimes of the other continental powers did not cause it. 
He used this idea to say that liberalism and commerce would continue to have a soothing effect in international relations, and that war would not arise between nations which are connected by commercial ties. This latter idea is very similar to the later Golden Arches theory. University of London professor Guy Standing has identified three distinct sub-groups of the precariat, one of which he refers to as "atavists", who long for what they see as a lost past. Social Darwinism During the interval between the acceptance of evolution in the mid-1800s and the rise of the modern understanding of genetics in the early 1900s, atavism was used to account for the reappearance in an individual of a trait after several generations of absence—often called a "throw-back". The idea that atavisms could be made to accumulate by selective breeding, or breeding back, led to breeds such as Heck cattle. This had been bred from ancient landraces with selected primitive traits, in an attempt of "reviving" the aurochs, an extinct species of wild cattle. The same notions of atavisms were used by social Darwinists, who claimed that "inferior" races displayed atavistic traits, and represented more primitive traits than other races. Both atavism's and Ernst Haeckel's recapitulation theory are related to evolutionary progress, as development towards a greater complexity and a superior ability. In addition, the concept of atavism as part of an individualistic explanation of the causes of criminal deviance was popularised by the Italian criminologist Cesare Lombroso in the 1870s. He attempted to identify physical characteristics common to criminals and labeled those he found as atavistic, 'throw-back' traits that determined 'primitive' criminal behavior. His statistical evidence and the closely related idea of eugenics have long since been abandoned by the scientific community, but the concept that physical traits may affect the likelihood of criminal or unethical behavior in a person still has some scientific support. See also Atavistic regression Exaptation Spandrel (biology) Torna atrás References External links Photograph of an additional (third) hoof of cows Evolutionary biology Genetics
Atavism
Biology
1,237
30,178,320
https://en.wikipedia.org/wiki/Vertex%20of%20a%20representation
In mathematical finite group theory, the vertex of a representation of a finite group is a subgroup associated to it, that has a special representation called a source. Vertices and sources were introduced by . References Representation theory Finite groups
Vertex of a representation
Mathematics
45
36,855,584
https://en.wikipedia.org/wiki/29%20Cygni
29 Cygni is a single star in the northern constellation of Cygnus. It is dimly visible to the naked eye as a white-hued star with an apparent visual magnitude of 4.93. The distance to 29 Cyg, as estimated from an annual parallax shift of , is 133 light years. The star is moving closer to the Earth with a heliocentric radial velocity of −17 km/s. It is a member of the 30–50 million year old Argus Association of co-moving stars. This is an A-type main-sequence star with a stellar classification of A2 V. Rodríguez et al. (2000) classify it as a Delta Scuti variable with a frequency of 0.0267 cycles per day. It is a Lambda Boötis class chemically peculiar star and the first such star to be classified as a pulsating variable. 29 Cyg is multi-periodic, small-amplitude variable with a magnitude change of about 0.02 and a dominant period of 39 minutes. A magnetic field has been detected with an averaged quadratic field of . The star has a moderate rate of rotation, showing a projected rotational velocity of 65 km/s. It has double the mass of the Sun and is radiating 25 times the Sun's luminosity from its photosphere at an effective temperature of roughly 8,790 K. 29 Cygni is listed in multiple star catalogs as having several companions within , including the yellow 7th magnitude HD 192661. All are background objects not physically associated with 29 Cygni itself. The naked-eye stars b1 Cygni and b2 Cygni, respectively about one and two degrees away, also lie at different distances to 29 Cygni. Planetary system In 2022, a superjovian extrasolar planet HIP 99770 b was discovered by direct imaging and astrometry. Its spectral class is between L7 and L9.5, corresponding to a surface temperature of 1400 K. Notes References A-type main-sequence stars Delta Scuti variables Lambda Boötis stars Cygnus (constellation) BD+36 3955 J20143203+3648225 Cygni, 29 192640 099770 7736 Cygni, V1644 Planetary systems with one confirmed planet
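The distance quoted above follows from the usual parallax relation d[pc] = 1/p[arcsec]. The parallax value itself is not preserved in this text, so the figure of roughly 24.5 milliarcseconds used below is only an assumption chosen because it reproduces the stated 133 light-year distance.

```python
# Parallax-to-distance conversion (illustrative; the parallax value is assumed).
parallax_mas = 24.5                    # milliarcseconds, assumed for illustration
distance_pc = 1000.0 / parallax_mas    # d [parsec] = 1 / p [arcsec]
distance_ly = distance_pc * 3.26156    # 1 parsec = 3.26156 light years
print(round(distance_ly))              # -> ~133, matching the value in the text
```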
29 Cygni
Astronomy
464
77,269,960
https://en.wikipedia.org/wiki/Manuel%20Bibes
Manuel Bibes, born on July 15, 1976, in Sainte-Foy-la-Grande, is a French physicist specializing in functional oxides, multiferroic materials, and spintronics. He is currently a Research Director at the National Center for Scientific Research (CNRS). Biography After earning an engineering degree from the Institut National des Sciences Appliquées de Toulouse in 1998, Bibes completed his Ph.D. under the supervision of Josep Fontcuberta at the ICMAB, at the Autonomous University of Barcelona, in 2001, focusing on thin manganite films and their application in spintronics. His PhD was followed by a postdoctoral fellowship at the Joint Physics Unit CNRS/Thales (currently known as Laboratory Albert Fert) under the guidance of Prof. Albert Fert. Bibes joined the CNRS in 2003 at the Institute of Fundamental Electronics, now known as the Center for Nanoscience and Nanotechnology (C2N). Afterwards he completed research stays at MIT and the University of Cambridge as a visiting researcher and joined the Laboratory Albert Fert in 2007. All his research publications are listed in Google Scholar. Throughout his career, Bibes has been a leader in research on multiferroic materials (which simultaneously exhibit magnetic and ferroelectric properties) and their use for the electrical control of magnetism. In 2009, his team discovered the phenomenon of giant tunnel electroresistance in ferroelectric tunnel junctions (results published in Nature), demonstrating their potential as artificial synapses. In 2016, in collaboration with the Spintec laboratory, he demonstrated that non-magnetic oxide interfaces can be used as ultrasensitive spin detectors. These findings led to a collaboration with Intel for the development of a new type of energy-efficient transistor (MESO) aimed at replacing the current transistors based on CMOS technology. Since 2018, Manuel Bibes has been recognized as a Highly Cited Researcher by Clarivate Analytics. In June 2022, along with Agnès Barthélémy, Ramamoorthy Ramesh and Nicola Spaldin, he received the Europhysics Prize from the European Physical Society for their significant contributions to the fundamental and applied physics of multiferroic and magnetoelectric materials. In October 2024, he co-founded the start-up company Nellow, together with Laurent Vila and Jean-Philippe Attané from Spintec. Nellow aims to develop and commercialize chips with ultralow power consumption for logic and artificial intelligence. Awards and honors Europhysics Prize, European Physical Society (2022) ERC Advanced Grant, European Research Council (2019) Friedrich Wilhelm Bessel Research Award, Alexander von Humboldt Foundation (2018) Descartes-Huygens Prize, French Academy of Sciences and Royal Netherlands Academy of Arts and Sciences (2017) Fellow of American Physical Society, APS (2015) ERC Consolidator Grant, European Research Council, ERC (2014) EU-40 Materials Prize, European Materials Research Society, EMRS (2013) Extraordinary Doctorate Award, Autonomous University of Barcelona (2001) Selected lectures and talks Electric-field control of magnetism in oxide heterostructures (Seminar at Collège de France, May 30, 2017) A journey through the oxide world (a talk at French Academy of Sciences, February 20, 2018) References External links Official Website Materials science Condensed matter physicists Oxides Thin film deposition 21st-century French physicists 1976 births Living people
Manuel Bibes
Physics,Chemistry,Materials_science,Mathematics,Engineering
708
51,190,957
https://en.wikipedia.org/wiki/Difficulty%20of%20engagement
Difficulty of engagement is a notion in the Campbell paradigm, a model of behavior change with person-independent difficulty. Motivation Difficulty is considered a key predictor of behavior in psychology and is included in most recognized models of behavior change, such as the theory of planned behavior. Most of these models rely on people's perceptions and estimates of behavioral difficulty. That is, difficulty is considered to be subjective and person-dependent. Obviously, perceived difficulty varies by individual. A more objective measure of difficulty is desirable, e.g., for environmental or energy policy, because people may misperceive the difficulty of behaviors, possibly because they are affected by mood or current circumstances. The Campbell Paradigm The Campbell paradigm was proposed by Kaiser et al. as a model of behavior change with person-independent difficulty. The model treats the likelihood of individual behavior as a function of attitude and of the difficulty of engaging in this behavior. The more demanding the behavioral barriers are, the more favorable an attitude towards a general goal, such as environmental protection, is required for a person to engage in the behavior. The relation between the difficulty of behaviors, attitudes, and behavior can be computed using a one-parameter logistic Rasch model, which yields the proportion of persons that engage in a given behavior. See also Attitude-behavior consistency References Behavioral concepts
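The one-parameter logistic (Rasch) model mentioned above can be written as P(engagement) = exp(attitude − difficulty) / (1 + exp(attitude − difficulty)). A minimal sketch, with attitude and difficulty values chosen purely for illustration:

```python
import math

def engagement_probability(attitude, difficulty):
    """Rasch-style probability that a person engages in a given behavior."""
    return 1.0 / (1.0 + math.exp(-(attitude - difficulty)))

# The same person (attitude = 0.5) facing an easy and a demanding behavior:
print(round(engagement_probability(0.5, -1.0), 2))  # easy behavior      -> ~0.82
print(round(engagement_probability(0.5,  2.0), 2))  # demanding behavior -> ~0.18
```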
Difficulty of engagement
Biology
248
6,158,574
https://en.wikipedia.org/wiki/Aircraft%20Meteorological%20Data%20Relay
Aircraft Meteorological Data Relay (AMDAR) is a program initiated by the World Meteorological Organization. AMDAR is used to collect meteorological data worldwide by using commercial aircraft. Data is collected by the aircraft navigation systems and the onboard standard temperature and static pressure probes. The data is then preprocessed before linking them down to the ground either via VHF communication (ACARS) or via satellite link ASDAR. A detailed description is given in the AMDAR Reference Manual (WMO-No 958) available from the World Meteorological Organization, Geneva, Switzerland Usage AMDAR transmissions are most commonly used in forecast models as a supplement to radiosonde data, to aid in the plotting of upper-air data between the standard radiosonde soundings at 00Z and 12Z. See also Solar-powered aircraft References External links (Information no longer available) WMO AMDAR Observing System site (Dead link) NOAA AMDAR site Meteorological data and networks Meteorological instrumentation and equipment
Aircraft Meteorological Data Relay
Technology,Engineering
196
69,088,613
https://en.wikipedia.org/wiki/Hinged%20arch%20bridge
A hinged arch bridge is one with hinges incorporated into its structure to allow movement. In structural engineering, a hinge is essentially a "cut in the structure" that can withstand compressive forces. In a steel arch the hinge allows free rotation, somewhat resembling a common hinge. The most common hinged arch bridge varieties are the two-hinged bridge with hinges at the springing points and the three-hinged bridge with an additional hinge at the crown of the arch; though single-hinged versions exist with a hinge only at the crown of the arch. Hinges at the springing point prevent bending moments from being transferred to the bridge abutments. A triple-hinged bridge is statically determinate, while the other versions are not. Description A fixed arch bridge, that is one without hinges, exerts a bending moment at the abutments and stresses caused by change of temperature or shrinkage of concrete have to be taken up by the arch. A two-hinged arch has a hinge at the base of each arch (the springing point), while a three-hinged arch has a third hinge at the crown of the arch. The advantage of the fixed arches is in their lower construction and maintenance costs. In a two-hinged arch bridge no bending moments are transferred to the abutments, due to the presence of the hinge. A change in the relative position of the abutments may cause a change in the thrust load exerted by the arch on the abutments. The addition of a third hinge at the crown, which allows rotation of the arch members, means that the thrust and shear forces exerted on the abutments are not affected by small movements in either abutment. Three-hinged arch bridges are, therefore, used when there is the possibility of unequal settlement of the abutments. Single-hinged arch bridges, with a hinge only at the crown, were also built though in relatively small numbers compared to the other types. A three hinged bridge is isostatic, that is it is statically determinate; a two-hinged bridge is statically indeterminate in one degree of freedom, while a fixed arch bridge is indeterminate in three degrees of freedom. The statically determinate three-hinged arches were popular until the Second World War. Post-war, the advances in calculation methods allowed broad use of statically indeterminate schemes. In the end of the 20th century three-hinged arches made a comeback associated with the uses of engineered wood ("glulam") in bridge construction: the glulam construction have to be pre-fabricated, using three-hinged design naturally divides the arch into two halves that are easier to transport. While in steel arches hinges typically allow free rotation of connected parts, in reinforced concrete bridges typical implementation of a hinge involves thinning of the concrete structure while adding more reinforcement locally. History Early arch bridges were fixed arches. The two-hinged bridge was developed by the engineers Couche and Salle in 1858 for a wrought iron bridge carrying the Paris-Creil railway line across the Canal Saint-Denis. They had attempted to introduce a third hinge at the crown but were unsuccessful because the thickness of the arch was insufficient. The first three-hinged bridge was the Unterspree Bridge in Berlin (Johann Wilhelm Schwedler, 1863), built two years after the pioneering theoretical work by . Hradecky Bridge (1866) is probably the oldest three-hinged bridge still used. Hinged bridges were popular with railway companies, who often had the need to construct large bridges. 
The Arch Bridge at Bellows Falls in New England, built in 1905, is a particularly large example of a three-hinged arch bridge. At in length it was the longest in America when built. The 1888 Hennepin Avenue Bridge in Minneapolis was unusual in that it was both a two- and three-hinged bridge. The bridge was split longitudinally with the two halves being built by different companies. The north arch ribs are three-hinged, while the south arch ribs are two-hinged. Three-hinged arch bridges remain popular in modern civil engineering. References Sources Bridge design Arch bridges
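The degrees of static indeterminacy quoted above (three for a fixed arch, one for a two-hinged arch, zero for a three-hinged arch) follow from a simple count: support reactions minus the three planar equilibrium equations, minus one condition equation for each internal hinge. A small illustrative check:

```python
def static_indeterminacy(reactions, internal_hinges):
    """Degree of static indeterminacy of a planar arch (simple count)."""
    return reactions - 3 - internal_hinges

print(static_indeterminacy(6, 0))  # fixed arch (two fixed supports)       -> 3
print(static_indeterminacy(4, 0))  # two-hinged arch (two pinned supports) -> 1
print(static_indeterminacy(4, 1))  # three-hinged arch (plus crown hinge)  -> 0
```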
Hinged arch bridge
Engineering
873
1,628,483
https://en.wikipedia.org/wiki/Dodecahedrane
Dodecahedrane is a chemical compound, a hydrocarbon with formula C20H20, whose carbon atoms are arranged as the vertices (corners) of a regular dodecahedron. Each carbon is bound to three neighbouring carbon atoms and to a hydrogen atom. This compound is one of the three possible Platonic hydrocarbons, the other two being cubane and tetrahedrane. Dodecahedrane does not occur in nature and has no significant uses. It was synthesized by Leo Paquette in 1982, primarily for the "aesthetically pleasing symmetry of the dodecahedral framework". For many years, dodecahedrane was the simplest real carbon-based molecule with full icosahedral symmetry. Buckminsterfullerene (C60), discovered in 1985, also has the same symmetry, but has three times as many carbons and 50% more atoms overall. The synthesis of the C20 fullerene in 2000, from brominated dodecahedrane, may have demoted it to second place. Structure The angle between the C-C bonds in each carbon atom is 108°, which is the angle between adjacent sides of a regular pentagon. That value is quite close to the 109.5° central angle of a regular tetrahedron, the ideal angle between the bonds on an atom that has sp3 hybridisation. As a result, there is minimal angle strain. However, the molecule has significant levels of torsional strain as a result of the eclipsed conformation along each edge of the structure. The molecule has perfect icosahedral (Ih) symmetry, as evidenced by its proton NMR spectrum in which all hydrogen atoms appear at a single chemical shift of 3.38 ppm. Unlike buckminsterfullerene, dodecahedrane has no delocalized electrons and hence has no aromaticity. History For over 30 years, several research groups actively pursued the total synthesis of dodecahedrane. A review article published in 1978 described the different strategies that existed up to then. The first attempt was initiated in 1964 by R.B. Woodward with the synthesis of the compound triquinacene, which was thought to be able to simply dimerize to dodecahedrane. Other groups were also in the race, for example that of Philip Eaton and Paul von Ragué Schleyer. Leo Paquette's group at Ohio State University was the first to succeed, by a complex 29-step route that mostly builds the dodecahedral skeleton one ring at a time, and finally closes the last hole. In 1987, a more versatile alternative synthesis route was found by Horst Prinzbach's group. Their approach was based on the isomerization of pagodane, obtained from isodrin (an isomer of aldrin) as starting material, i.a. through [6+6]photocycloaddition. Schleyer had followed a similar approach in his synthesis of adamantane. Following that idea, joint efforts of the Prinzbach team and the Schleyer group succeeded but obtained only 8% yield for the conversion at best. In the following decade the group greatly optimized that route, so that dodecahedrane could be obtained in multi-gram quantities. The new route also made it easier to obtain derivatives with selected substitutions and unsaturated carbon-carbon bonds. Two significant developments were the discovery of σ-bishomoaromaticity and the formation of C20 fullerene from highly brominated dodecahedrane species. Synthesis Original route Paquette's 1982 organic synthesis takes about 29 steps with raw materials cyclopentadiene (2 equivalents, 10 carbon atoms), dimethyl acetylenedicarboxylate (4 carbon atoms) and allyltrimethylsilane (2 equivalents, 6 carbon atoms).
In the first leg of the procedure two molecules of cyclopentadiene 1 are coupled together by reaction with elemental sodium (forming the cyclopentadienyl complex) and iodine to dihydrofulvalene 2. Next up is a tandem Diels–Alder reaction with dimethyl acetylenedicarboxylate 3 with desired sequence pentadiene-acetylene-pentadiene as in symmetrical adduct 4. An equal amount of asymmetric pentadiene-pentadiene-acetylene compound (4b) is formed and discarded.
[Scheme: Dodecahedrane synthesis, parts I and II]
In the next step of the sequence iodine is temporarily introduced via an iodolactonization of the diacid of 4 to dilactone 5. The ester group is cleaved next by methanol to the halohydrin 6, the alcohol groups converted to ketone groups in 7 by Jones oxidation and the iodine groups reduced by a zinc-copper couple in 8.
[Scheme: Dodecahedrane synthesis, parts III and IV]
The final 6 carbon atoms are inserted in a nucleophilic addition to the ketone groups of the carbanion 10 generated from allyltrimethylsilane 9 and n-butyllithium. In the next step the vinyl silane 11 reacts with peracetic acid in acetic acid in a radical substitution to the dilactone 12 followed by an intramolecular Friedel-Crafts alkylation with phosphorus pentoxide to diketone 13. This molecule contains all required 20 carbon atoms and is also symmetrical which facilitates the construction of the remaining 5 carbon-carbon bonds. Reduction of the double bonds in 13 to 14 is accomplished with hydrogenation with palladium on carbon and that of the ketone groups to alcohol groups in 15 by sodium borohydride. Replacement of hydroxyl by chlorine in 17 via nucleophilic aliphatic substitution takes place through the dilactone 16 (tosyl chloride). The first C–C bond forming reaction is a kind of Birch alkylation (lithium, ammonia) with the immediate reaction product trapped with chloromethyl phenyl ether; the other chlorine atom in 17 is simply reduced. This temporary appendix will in a later stage prevent unwanted enolization. The newly formed ketone group then forms another C–C bond by photochemical Norrish reaction to 19 whose alcohol group is induced to eliminate with TsOH to alkene 20.
[Scheme: Dodecahedrane synthesis, parts V and VI]
The double bond is reduced with hydrazine and sequential diisobutylaluminum hydride reduction and pyridinium chlorochromate oxidation of 21 forms the aldehyde 22. A second Norrish reaction then adds another C–C bond to alcohol 23 and having served its purpose the phenoxy tail is removed in several steps: a Birch reduction to diol 24, oxidation with pyridinium chlorochromate to ketoaldehyde 25 and a reverse Claisen condensation to ketone 26. A third Norrish reaction produces alcohol 27 and a second dehydration 28 and another reduction 29, at which point the synthesis is left completely without functional groups. The missing C-C bond is put in place by hydrogen pressurized dehydrogenation with palladium on carbon at 250 °C to dodecahedrane 30.
Pagodane route In Prinzbach's optimized route from pagodane to dodecahedrane, the original low-yielding isomerization of parent pagodane to dodecahedrane is replaced by a longer but higher-yielding sequence, which nevertheless still relies heavily on pagodane derivatives. In the scheme below, the divergence from the original happens after compound 16. Derivatives A variety of dodecahedrane derivatives have been synthesized and reported in the literature. Hydrogen substitution Substitution of all 20 hydrogens by fluorine atoms yields the relatively unstable perfluorododecahedrane C20F20, which was obtained in milligram quantities. Trace amounts of the analogous perchlorododecahedrane C20Cl20 were obtained, among other partially chlorinated derivatives, by reacting dodecahedrane dissolved in liquid chlorine under pressure at about 140 °C and under intense light for five days. Complete replacement by heavier halogens seems increasingly difficult due to their larger size. Half or more of the hydrogen atoms can be substituted by hydroxyl groups to yield polyols, but the extreme compound C20(OH)20 remained elusive as of 2006. Amino-dodecahedranes comparable to amantadine have been prepared, but were more toxic and had weaker antiviral effects. Annulated dodecahedrane structures have been proposed. Encapsulation Molecules whose framework forms a closed cage, like dodecahedrane and buckminsterfullerene, can encapsulate atoms and small molecules in the hollow space within. Those insertions are not chemically bonded to the caging compound, but merely mechanically trapped in it. Cross, Saunders and Prinzbach succeeded in encapsulating helium atoms in dodecahedrane by shooting He+ ions at a film of the compound. They obtained microgram quantities of He@C20H20 (the "@" being the standard notation for encapsulation), which they described as a quite stable substance. The molecule has been described as "the world's smallest helium balloon". References External links Paquette's dodecahedrane synthesis at SynArchive.com 2D and 3D models of dodecahedrane and cuneane assemblies Full text of Paquette's paper Polycyclic nonaromatic hydrocarbons Total synthesis Cyclopentanes Substances discovered in the 1980s
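The angle-strain argument in the Structure section above is easy to verify numerically: the C-C-C angle is the interior angle of a regular pentagon, which sits just below the ideal tetrahedral angle arccos(−1/3). A quick check:

```python
import math

pentagon_angle = 180 * (5 - 2) / 5                    # interior angle of a regular pentagon
tetrahedral_angle = math.degrees(math.acos(-1 / 3))   # ideal sp3 bond angle
print(pentagon_angle)                                 # -> 108.0
print(round(tetrahedral_angle, 2))                    # -> 109.47
print(round(tetrahedral_angle - pentagon_angle, 2))   # -> ~1.47 degrees of deviation
```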
Dodecahedrane
Chemistry
2,087
12,998,165
https://en.wikipedia.org/wiki/Alkyl%20sulfonate
Alkyl sulfonates are esters of alkane sulfonic acids with the general formula R-SO2-O-R'. They act as alkylating agents, some of them are used as alkylating antineoplastic agents in the treatment of cancer, e.g. Busulfan. References Sulfonate esters
Alkyl sulfonate
Chemistry
76
14,241,792
https://en.wikipedia.org/wiki/TRACE%20%28psycholinguistics%29
TRACE is a connectionist model of speech perception, proposed by James McClelland and Jeffrey Elman in 1986. It is based on a structure called "the TRACE", a dynamic processing structure made up of a network of units, which performs as the system's working memory as well as the perceptual processing mechanism. TRACE was made into a working computer program for running perceptual simulations. These simulations are predictions about how a human mind/brain processes speech sounds and words as they are heard in real time. Inspiration TRACE was created during the formative period of connectionism, and was included as a chapter in Parallel Distributed Processing: Explorations in the Microstructures of Cognition. The researchers found that certain problems regarding speech perception could be conceptualized in terms of a connectionist interactive activation model. The problems were that (1) speech is extended in time, (2) the sounds of speech (phonemes) overlap with each other, (3) the articulation of a speech sound is affected by the sounds that come before and after it, and (4) there is natural variability in speech (e.g. foreign accent) as well as noise in the environment (e.g. busy restaurant). Each of these causes the speech signal to be complex and often ambiguous, making it difficult for the human mind/brain to decide what words it is really hearing. In very simple terms, an interactive activation model solves this problem by placing different kinds of processing units (phonemes, words) in isolated layers, allowing activated units to pass information between layers, and having units within layers compete with one another, until the "winner" is considered "recognized" by the model. Key findings "TRACE was the first model that instantiated the activation of multiple word candidates that match any part of the speech input." A simulation of speech perception involves presenting the TRACE computer program with mock speech input, running the program, and generating a result. A successful simulation indicates that the result is found to be meaningfully similar to how people process speech. Time-course of word recognition It is generally accepted in psycholinguistics that (1) when the beginning of a word is heard, a set of words that share the same initial sound become activated in memory, (2) the words that are activated compete with each other while more and more of the word is heard, (3) at some point, due to both the auditory input and the lexical competition, one word is recognized. For example, a listener hears the beginning of bald, and the words bald, ball, bad, bill become active in memory. Then, soon after, only bald and ball remain in competition (bad and bill have been eliminated because the vowel sound doesn't match the input). Soon after, bald is recognized. TRACE simulates this process by representing the temporal dimension of speech, allowing words in the lexicon to vary in activation strength, and by having words compete during processing. Figure 1 shows a line graph of word activation in a simple TRACE simulation. Lexical effect on phoneme perception If an ambiguous speech sound is spoken that is exactly in between /d/ and /t/, the hearer may have difficulty deciding what it is. But, if that same ambiguous sound is heard at the end of a word like woo/?/ (where ? is the ambiguous sound), then the hearer will more likely perceive the sound as a /d/. This probably occurs because "wood" is a word but "woot" is not.
An ambiguous phoneme presented in a lexical context will be perceived as consistent with the surrounding lexical context. This perceptual effect is known as the Ganong effect. TRACE reliably simulates this, and can explain it in relatively simple terms. Essentially, the lexical unit which has become activated by the input (i.e. wood) feeds back activation to the phoneme layer, boosting the activation of its constituent phonemes (i.e. /d/), thus resolving the ambiguity. Lexical basis of segmentation Speakers usually don't leave pauses in between words when speaking, yet listeners seem to have no difficulty hearing speech as a sequence of words. This is known as the segmentation problem, and is one of the oldest problems in the psychology of language. TRACE proposed the following solution, backed up by simulations. When words become activated and recognized, this reveals the location of word boundaries. Stronger word activation leads to greater confidence about word boundaries, which informs the hearer of where to expect the next word to begin. Process The TRACE model is a connectionist network with an input layer and three processing layers: pseudo-spectra (feature), phoneme and word. Figure 2 shows a schematic diagram of TRACE. There are three types of connectivity: (1) feedforward excitatory connections from input to features, features to phonemes, and phonemes to words; (2) lateral (i.e., within layer) inhibitory connections at the feature, phoneme and word layers; and (3) top-down feedback excitatory connections from words to phonemes. The input to TRACE works as follows. The user provides a phoneme sequence that is converted into a multi-dimensional feature vector. This is an approximation of acoustic spectra extended in time. The input vector is revealed a little at a time to simulate the temporal nature of speech. As each new chunk of input is presented, this sends activity along the network connections, changing the activation values in the processing layers. Features activate phoneme units, and phonemes activate word units. Parameters govern the strength of the excitatory and inhibitory connections, as well as many other processing details. There is no specific mechanism that determines when a word or a phoneme has been recognized. If simulations are being compared to reaction time data from a perceptual experiment (e.g. lexical decision), then typically an activation threshold is used. This allows the model's behavior to be interpreted as recognition, and a recognition time to be recorded as the number of processing cycles that have elapsed. For a deeper understanding of TRACE processing dynamics, readers are referred to the original publication and to a TRACE software tool that runs simulations with a graphical user interface; a toy sketch of the interactive-activation dynamics is also given at the end of this entry. Criticism Modularity of mind debate TRACE's relevance to the modularity debate has recently been brought to the fore by Norris, Cutler and McQueen's (2001) report on the Merge model of speech perception. While it shares a number of features with TRACE, a key difference is the following. While TRACE permits word units to feed back activation to the phoneme level, Merge restricts its processing to feed-forward connections. In terms of this debate, TRACE is considered to violate the principle of information encapsulation, central to modularity, when it permits a later stage of processing (words) to send information to an earlier stage (phonemes). 
Merge advocates for modularity by arguing that the same class of perceptual phenomena that is accounted for in TRACE can be explained in a connectionist architecture that does not include feedback connections. Norris et al. point out that when two theories can explain the same phenomenon, parsimony dictates that the simpler theory is preferable. Applications Speech and language therapy Models of language processing can be used to conceptualize the nature of impairment in persons with speech and language disorder. For example, it has been suggested that language deficits in expressive aphasia may be caused by excessive competition between lexical units, thus preventing any word from becoming sufficiently activated. Arguments for this hypothesis consider that mental dysfunction can be explained by slight perturbation of the network model's processing. This emerging line of research incorporates a wide range of theories and models, and TRACE represents just one piece of a growing puzzle. Distinction from speech recognition software Psycholinguistic models of speech perception, e.g. TRACE, must be distinguished from computer speech recognition tools. The former are psychological theories about how the human mind/brain processes information. The latter are engineered solutions for converting an acoustic signal into text. Historically, the two fields have had little contact, but this is beginning to change. Influence TRACE’s influence in the psychology literature can be assessed by the number of articles that cite it. There are 345 citations of McClelland and Elman (1986) in the PsycINFO database. Figure 3 shows the distribution of those citations over the years since publication. The figure suggests that interest in TRACE grew significantly in 2001, and has remained strong, with about 30 citations per year. See also Motor theory of speech perception (rival theory) Cohort model (rival theory) References External links jTRACE - A Java reimplementation of the TRACE model. Open-source platform-independent software. Page also includes download of an earlier c language implementation of TRACE. Cognitive architecture Psycholinguistics Phonetics Speech
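As referenced in the Process section above, the interactive-activation dynamics that TRACE relies on (bottom-up excitation, lateral inhibition among competing words, and an activation threshold standing in for recognition) can be illustrated with a toy simulation. The following Python sketch is purely illustrative and is not the TRACE or jTRACE implementation: the two-word lexicon, parameter values, and all names are invented for the example, and top-down word-to-phoneme feedback is omitted for brevity.

```python
# Toy interactive-activation sketch (NOT the actual TRACE/jTRACE code).
# Assumptions: a two-word lexicon, hand-picked parameters, phoneme input
# presented one segment per processing cycle, and no top-down feedback.

LEXICON = {"bald": ["b", "A", "l", "d"], "ball": ["b", "A", "l"]}
EXCITATION = 0.10   # bottom-up phoneme -> word excitation per matching segment
INHIBITION = 0.05   # lateral inhibition from competing word units
DECAY = 0.02        # passive decay toward the resting level
THRESHOLD = 0.50    # activation level treated as "recognition"

def run(input_phonemes):
    activation = {word: 0.0 for word in LEXICON}
    heard = []
    for cycle, phoneme in enumerate(input_phonemes, start=1):
        heard.append(phoneme)
        for word, phons in LEXICON.items():
            # Bottom-up support: how much of the input heard so far matches this word.
            match = sum(1 for i, p in enumerate(heard)
                        if i < len(phons) and phons[i] == p)
            # Lateral inhibition from the other (competing) word units.
            competition = sum(a for w, a in activation.items() if w != word)
            activation[word] = max(0.0, activation[word]
                                   + EXCITATION * match
                                   - INHIBITION * competition
                                   - DECAY * activation[word])
        print(cycle, {w: round(a, 2) for w, a in activation.items()})
    winner = max(activation, key=activation.get)
    return winner if activation[winner] >= THRESHOLD else None

if __name__ == "__main__":
    # "bald" and "ball" both gain activation early; the final /d/ segment
    # leaves "bald" the most active unit, so it is the one "recognized".
    print("recognized:", run(["b", "A", "l", "d"]))
```

Running the sketch shows "bald" and "ball" rising together and "bald" ending up most active once the final segment arrives, mirroring the time-course of word recognition described above.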
TRACE (psycholinguistics)
Engineering
1,795
26,339,669
https://en.wikipedia.org/wiki/Schur%20algebra
In mathematics, Schur algebras, named after Issai Schur, are certain finite-dimensional algebras closely associated with Schur–Weyl duality between general linear and symmetric groups. They are used to relate the representation theories of those two groups. Their use was promoted by the influential monograph of J. A. Green first published in 1980. The name "Schur algebra" is due to Green. In the modular case (over infinite fields of positive characteristic) Schur algebras were used by Gordon James and Karin Erdmann to show that the (still open) problems of computing decomposition numbers for general linear groups and symmetric groups are actually equivalent. Schur algebras were used by Friedlander and Suslin to prove finite generation of cohomology of finite group schemes. Construction The Schur algebra S_k(n, r) can be defined for any commutative ring k and integers n, r ≥ 0. Consider the algebra k[x_{ij}] of polynomials (with coefficients in k) in the n^2 commuting variables x_{ij}, 1 ≤ i, j ≤ n. Denote by A_k(n, r) the homogeneous polynomials of degree r. Elements of A_k(n, r) are k-linear combinations of monomials formed by multiplying together r of the generators x_{ij} (allowing repetition). Thus k[x_{ij}] = ⊕_{r ≥ 0} A_k(n, r). Now, k[x_{ij}] has a natural coalgebra structure with comultiplication Δ and counit ε, the algebra homomorphisms given on generators by Δ(x_{ij}) = Σ_l x_{il} ⊗ x_{lj} and ε(x_{ij}) = δ_{ij} (Kronecker's delta). Since comultiplication is an algebra homomorphism, k[x_{ij}] is a bialgebra. One easily checks that A_k(n, r) is a subcoalgebra of the bialgebra k[x_{ij}], for every r ≥ 0. Definition. The Schur algebra (in degree r) is the algebra S_k(n, r) := Hom_k(A_k(n, r), k). That is, S_k(n, r) is the linear dual of A_k(n, r). It is a general fact that the linear dual of a coalgebra is an algebra in a natural way, where the multiplication in the algebra is induced by dualizing the comultiplication in the coalgebra. To see this, let Δ(a) = Σ_i a_i ⊗ b_i and, given linear functionals ξ, η on A_k(n, r), define their product to be the linear functional given by a ↦ Σ_i ξ(a_i) η(b_i). The identity element for this multiplication of functionals is the counit in A_k(n, r). Main properties One of the most basic properties expresses S_k(n, r) as a centralizer algebra. Let V = k^n be the space of rank n column vectors over k, and form the tensor power V^{⊗r}. Then the symmetric group Σ_r on r letters acts naturally on the tensor space by place permutation, and one has an isomorphism S_k(n, r) ≅ End_{kΣ_r}(V^{⊗r}). In other words, S_k(n, r) may be viewed as the algebra of endomorphisms of tensor space commuting with the action of the symmetric group. S_k(n, r) is free over k of rank given by the binomial coefficient C(n^2 + r − 1, r); a small worked example of this formula is given at the end of this entry. Various bases of S_k(n, r) are known, many of which are indexed by pairs of semistandard Young tableaux of shape λ, as λ varies over the set of partitions of r into no more than n parts. In case k is an infinite field, S_k(n, r) may also be identified with the enveloping algebra (in the sense of H. Weyl) for the action of the general linear group GL_n(k) acting on V^{⊗r} (via the diagonal action on tensors, induced from the natural action of GL_n(k) on V = k^n given by matrix multiplication). Schur algebras are "defined over the integers". This means that they satisfy the following change of scalars property: S_k(n, r) ≅ S_ℤ(n, r) ⊗_ℤ k for any commutative ring k. Schur algebras provide natural examples of quasihereditary algebras (as defined by Cline, Parshall, and Scott), and thus have nice homological properties. In particular, Schur algebras have finite global dimension. Generalizations Generalized Schur algebras (associated to any reductive algebraic group) were introduced by Donkin in the 1980s. These are also quasihereditary. 
Around the same time, Dipper and James introduced the quantized Schur algebras (or q-Schur algebras for short), which are a type of q-deformation of the classical Schur algebras described above, in which the symmetric group is replaced by the corresponding Hecke algebra and the general linear group by an appropriate quantum group. There are also generalized q-Schur algebras, which are obtained by generalizing the work of Dipper and James in the same way that Donkin generalized the classical Schur algebras. There are further generalizations, such as the affine q-Schur algebras related to affine Kac–Moody Lie algebras and other generalizations, such as the cyclotomic q-Schur algebras related to Ariki-Koike algebras (which are q-deformations of certain complex reflection groups). The study of these various classes of generalizations forms an active area of contemporary research. References Further reading Stuart Martin, Schur Algebras and Representation Theory, Cambridge University Press 1993. , Andrew Mathas, Iwahori-Hecke algebras and Schur algebras of the symmetric group, University Lecture Series, vol.15, American Mathematical Society, 1999. , Hermann Weyl, The Classical Groups. Their Invariants and Representations. Princeton University Press, Princeton, N.J., 1939. , Abstract algebra Representation theory Issai Schur
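As a small worked illustration of the rank formula from the Main properties section above (this example is added for exposition and is not taken from the article or the cited references), consider the smallest nontrivial case n = r = 2:

```latex
% Worked example (illustrative): the rank of the Schur algebra S_k(n, r) for n = r = 2.
\[
  \operatorname{rank}\, S_k(n, r) = \binom{n^2 + r - 1}{r}
  \quad\Longrightarrow\quad
  \operatorname{rank}\, S_k(2, 2) = \binom{2^2 + 2 - 1}{2} = \binom{5}{2} = 10.
\]
% Consistency check via the centralizer description (over a field of characteristic zero):
% with V = k^2, the Sigma_2-module V \otimes V decomposes as \operatorname{Sym}^2(V) \oplus \Lambda^2(V),
% of dimensions 3 and 1, so \operatorname{End}_{k\Sigma_2}(V^{\otimes 2}) has dimension 3^2 + 1^2 = 10.
```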
Schur algebra
Mathematics
1,022
62,931,555
https://en.wikipedia.org/wiki/Dodo%20Sue%20Ware%20Kiln%20ruins
The is an archaeological site containing late Heian to early Kamakura period kilns located in the Mutsure neighborhood of the city of Tahara, Aichi in the Tōkai region of Japan. It was designated as a National Historic Site in 1922. Overview The Dodo Sue Ware Kiln was a Sue ware pottery production site approximately four kilometers southeast of the modern city center of Tahara, in a hilly forest. After World War II, many ruins of kilns have been discovered in the Atsumi Peninsula dating from the late Heian period to the early Kamakura period, thus shedding light on the origins of several styles of pottery which until that time were uncertain. The Dodo site contains two nobori-gama kilns built side-by-side on a hill, utilizing a south-facing slope. These kilns were used to produce everyday items, such as small bowls, plates, tea cups, etc. The site is located approximately 18 minutes by car from Toyohashi Railroad Atsumi Line Mikawa-Tahara Station. See also List of Historic Sites of Japan (Aichi) References External links Tahara Museum home page History of Aichi Prefecture Tahara, Aichi Historic Sites of Japan Japanese pottery kiln sites Mikawa Province
Dodo Sue Ware Kiln ruins
Chemistry,Engineering
264
9,157,736
https://en.wikipedia.org/wiki/Andr%C3%A9%20Joyal
André Joyal (; born 1943) is a professor of mathematics at the Université du Québec à Montréal who works on category theory. He was a member of the School of Mathematics at the Institute for Advanced Study in 2013, where he was invited to join the Special Year on Univalent Foundations of Mathematics. Research He discovered Kripke–Joyal semantics, the theory of combinatorial species and with Myles Tierney a generalization of the Galois theory of Alexander Grothendieck in the setup of locales. Most of his research is in some way related to category theory, higher category theory and their applications. He did some work on quasi-categories, after their invention by Michael Boardman and Rainer Vogt, in particular conjecturing and proving the existence of a Quillen model structure on the category of simplicial sets whose weak equivalences generalize both equivalence of categories and Kan equivalence of spaces. He co-authored the book "Algebraic Set Theory" with Ieke Moerdijk and recently started a web-based expositional project Joyal's CatLab on categorical mathematics. Personal life Joyal was born in Drummondville (formerly Saint-Majorique). He has three children and lives in Montreal. Bibliography ; ; André Joyal, Ieke Moerdijk, Algebraic set theory. London Mathematical Society Lecture Note Series 220. Cambridge Univ. Press 1995. viii+123 pp.  André Joyal, Myles Tierney, Notes on simplicial homotopy theory, CRM Barcelona, Jan 2008 pdf André Joyal, Disks, duality and theta-categories, preprint (1997) (contains an original definition of a weak n-category: for a short account see Leinster's , 10.2). References External links Interview with André Joyal (in French) Official Web page at UQAM Living people 1943 births Category theorists 20th-century Canadian mathematicians 21st-century Canadian mathematicians Academic staff of the Université du Québec à Montréal People from Drummondville
André Joyal
Mathematics
407
3,353,456
https://en.wikipedia.org/wiki/Richard%20Jozsa
Richard Jozsa is an Australian mathematician who holds the Leigh Trapnell Chair in Quantum Physics at the University of Cambridge. He is a fellow of King's College, Cambridge, where his research investigates quantum information science. A pioneer of his field, he is the co-author of the Deutsch–Jozsa algorithm and one of the co-inventors of quantum teleportation. Education Jozsa received his Doctor of Philosophy degree from the University of Oxford for work on twistor theory, under the supervision of Roger Penrose. Career and research Jozsa has held previous positions at the University of Bristol, the University of Plymouth and the Université de Montréal. Awards and honours His work was recognised in 2004 by the London Mathematical Society with the award of the Naylor Prize for 'his fundamental contributions to the new field of quantum information science'. Jozsa has been a member of the Academia Europaea since 2016. References Living people Fellows of King's College, Cambridge Members of Academia Europaea Cambridge mathematicians Academics of the University of Bristol Academics of the University of Plymouth Australian mathematicians Australian physicists 1953 births Quantum information scientists Alumni of the University of Oxford
Richard Jozsa
Technology
232
57,687,371
https://en.wikipedia.org/wiki/Multimodal%20sentiment%20analysis
Multimodal sentiment analysis extends traditional text-based sentiment analysis by incorporating additional modalities such as audio and visual data. It can be bimodal, which includes different combinations of two modalities, or trimodal, which incorporates three modalities. With the extensive amount of social media data available online in different forms such as videos and images, conventional text-based sentiment analysis has evolved into more complex models of multimodal sentiment analysis, which can be applied in the development of virtual assistants, analysis of YouTube movie reviews, analysis of news videos, and emotion recognition (sometimes known as emotion detection) such as depression monitoring, among others. As in traditional sentiment analysis, one of the most basic tasks in multimodal sentiment analysis is sentiment classification, which classifies different sentiments into categories such as positive, negative, or neutral. The complexity of analyzing text, audio, and visual features to perform such a task requires the application of different fusion techniques, such as feature-level, decision-level, and hybrid fusion. The performance of these fusion techniques and of the classification algorithms applied is influenced by the type of textual, audio, and visual features employed in the analysis. Features Feature engineering, which involves the selection of features that are fed into machine learning algorithms, plays a key role in sentiment classification performance. In multimodal sentiment analysis, a combination of different textual, audio, and visual features is employed. Textual features As in conventional text-based sentiment analysis, some of the most commonly used textual features in multimodal sentiment analysis are unigrams and n-grams, which are sequences of words in a given textual document. These features are applied using bag-of-words or bag-of-concepts feature representations, in which words or concepts are represented as vectors in a suitable space. Audio features Sentiment and emotion characteristics are prominent in the phonetic and prosodic properties captured by audio features. Some of the most important audio features employed in multimodal sentiment analysis are mel-frequency cepstral coefficients (MFCC), spectral centroid, spectral flux, beat histogram, beat sum, strongest beat, pause duration, and pitch. OpenSMILE and Praat are popular open-source toolkits for extracting such audio features. Visual features One of the main advantages of analyzing videos rather than texts alone is the presence of rich sentiment cues in visual data. Visual features include facial expressions, which are of paramount importance in capturing sentiments and emotions, as they are a main channel for conveying a person's present state of mind. Specifically, smiling is considered to be one of the most predictive visual cues in multimodal sentiment analysis. OpenFace is an open-source facial analysis toolkit available for extracting and understanding such visual features. Fusion techniques Unlike traditional text-based sentiment analysis, multimodal sentiment analysis involves a fusion process in which data from different modalities (text, audio, or visual) are fused and analyzed together (a toy sketch contrasting the main fusion strategies appears at the end of this entry). 
The existing approaches in multimodal sentiment analysis data fusion can be grouped into three main categories: feature-level, decision-level, and hybrid fusion, and the performance of the sentiment classification depends on which type of fusion technique is employed. Feature-level fusion Feature-level fusion (sometimes known as early fusion) gathers all the features from each modality (text, audio, or visual) and joins them together into a single feature vector, which is eventually fed into a classification algorithm. One of the difficulties in implementing this technique is the integration of the heterogeneous features. Decision-level fusion Decision-level fusion (sometimes known as late fusion), feeds data from each modality (text, audio, or visual) independently into its own classification algorithm, and obtains the final sentiment classification results by fusing each result into a single decision vector. One of the advantages of this fusion technique is that it eliminates the need to fuse heterogeneous data, and each modality can utilize its most appropriate classification algorithm. Hybrid fusion Hybrid fusion is a combination of feature-level and decision-level fusion techniques, which exploits complementary information from both methods during the classification process. It usually involves a two-step procedure wherein feature-level fusion is initially performed between two modalities, and decision-level fusion is then applied as a second step, to fuse the initial results from the feature-level fusion, with the remaining modality. Applications Similar to text-based sentiment analysis, multimodal sentiment analysis can be applied in the development of different forms of recommender systems such as in the analysis of user-generated videos of movie reviews and general product reviews, to predict the sentiments of customers, and subsequently create product or service recommendations. Multimodal sentiment analysis also plays an important role in the advancement of virtual assistants through the application of natural language processing (NLP) and machine learning techniques. In the healthcare domain, multimodal sentiment analysis can be utilized to detect certain medical conditions such as stress, anxiety, or depression. Multimodal sentiment analysis can also be applied in understanding the sentiments contained in video news programs, which is considered as a complicated and challenging domain, as sentiments expressed by reporters tend to be less obvious or neutral. References Natural language processing Affective computing Social media Machine learning Multimodal interaction
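To make the contrast between the feature-level and decision-level fusion strategies described above concrete, here is a minimal Python sketch. It is purely illustrative: the feature arrays are random placeholders standing in for extracted textual, audio, and visual features, and scikit-learn's LogisticRegression is used as a stand-in classifier; none of this is taken from a published multimodal system.

```python
# Illustrative sketch of feature-level vs. decision-level fusion.
# Assumptions: features for each modality are already extracted as fixed-size
# vectors (here random placeholders), and scikit-learn is available.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples = 200
labels = rng.integers(0, 2, n_samples)             # 0 = negative, 1 = positive
text_feats = rng.normal(size=(n_samples, 50))      # e.g. bag-of-words vectors
audio_feats = rng.normal(size=(n_samples, 13))     # e.g. MFCC summaries
visual_feats = rng.normal(size=(n_samples, 10))    # e.g. facial-expression features

# Feature-level (early) fusion: concatenate all modalities into one vector,
# then train a single classifier on the joint representation.
early = np.hstack([text_feats, audio_feats, visual_feats])
early_clf = LogisticRegression(max_iter=1000).fit(early, labels)

# Decision-level (late) fusion: train one classifier per modality,
# then fuse their individual decisions (here by averaging class probabilities).
per_modality = [LogisticRegression(max_iter=1000).fit(x, labels)
                for x in (text_feats, audio_feats, visual_feats)]

def late_fusion_predict(text_x, audio_x, visual_x):
    probs = [clf.predict_proba(x) for clf, x in
             zip(per_modality, (text_x, audio_x, visual_x))]
    return np.mean(probs, axis=0).argmax(axis=1)

print("early-fusion prediction:", early_clf.predict(early[:3]))
print("late-fusion prediction: ", late_fusion_predict(text_feats[:3],
                                                      audio_feats[:3],
                                                      visual_feats[:3]))
```

A hybrid scheme, as described above, would first fuse two of the modalities at the feature level and then fuse that classifier's decision with the decision obtained from the remaining modality.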
Multimodal sentiment analysis
Technology,Engineering
1,084
54,520,002
https://en.wikipedia.org/wiki/Magellanic%20moorland
The Magellanic moorland or Magellanic tundra is an ecoregion on the Patagonian archipelagos south of latitude 48° S. It is characterized by high rainfall with a vegetation of scrub, bogs and patches of forest in more protected areas. Cushion plants, grass-like plants and bryophytes are common. At present there are outliers of Magellanic moorland as far north as the highlands of Cordillera del Piuchén (latitude 42° 22' S) on Chiloé Island. During the Llanquihue glaciation Magellanic moorland extended to the non-glaciated lowlands of Chiloé Island and further north to the lowlands of the Chilean lake district (latitude 41° S). The classification of Magellanic moorland has proven problematic, as substrate, low temperatures and exposure to the ocean all influence its development. It thus may qualify either as polar tundra or heathland. Flora and plant communities Edmundo Pisano identifies the following plant communities for the Magellanic moorland: Bogs Sphagnum bogs Magellanic sphagnum tundra Juncus bogs Non-sphagniferous bryophytic tundra Non-sphagnum moss bog Hepatica bogs Pulvinar mires Hygrophytic mire tundra Montane pulvinar tundra Bryophyte and dwarf shrub tundra Gramineous mires Tufty sedge tundra Subantarctic gramineous mire Woody synusia tundras Tundras with Pilgerodendron uvifera Association Pilgerodendretum uviferae Sub-association Pilgerodendro-Nothofagetum betuloidis Sub-association Nano-Pilgerodendretum uviferae Interior nanophanerophytic tundras Interior heath of low to medium elevation Montane nanophaneritic tundra Where forests occur they are made up of the following trees: Nothofagus betuloides (coigüe de Magallanes), Drimys winteri (canelo), Pseudopanax laetevirens (sauco del diablo), Embothrium coccineum (notro), Maytenus magellanica (maitén), Pilgerodendron uviferum (ciprés de las Guaitecas) and Tepualia stipularis (tepú). Soils and climate Soils are usually rich in peat and organic matter and poor in bases. Often they are also water-saturated. Granitoids, schists and ancient volcanic rocks make up the basement on which soils develop. Any previously existing regolith has been eroded by the Quaternary glaciations. It is not rare for bare rock surfaces to be exposed in the interior of islands. The climate where Magellanic moorland grows can be defined as oceanic, snowy and isothermal with cool and windy summers. In the Köppen climate classification it has a tundra climate (ET). References Bibliography Shrublands Ecology of Patagonia Temperate broadleaf and mixed forests Temperate rainforests Ecoregions of Chile Andean forests Ecoregions of South America Neotropical ecoregions Magellanic subpolar forests
Magellanic moorland
Biology
679
62,263,664
https://en.wikipedia.org/wiki/Thomas%20Nail
Thomas Nail is a professor of Philosophy at The University of Denver. Biography Nail received a B.A in philosophy from the University of North Texas, and a Ph.D. from the University of Oregon. His dissertation was on the theme of political revolution in the work of French philosophers Gilles Deleuze and Félix Guattari and the Zapatista uprising in Chiapas, Mexico. This research was the foundation of his first book, Returning to Revolution: Deleuze, Guattari, and Zapatismo, published in 2012. Philosophy Nail has written on the philosophy of movement, which he defines as “the analysis of diverse phenomena across social, aesthetic, scientific, and ontological domains from the primary perspective of motion.” He argues that the philosophy of motion is a unique kind of philosophical methodology. It is related to process philosophy but is distinct from Whitehead's discontinuous "occasions" and from Bergson's vitalism. “The difference between simply describing the motion of things, which almost every philosopher and even layperson has done, and the philosophy of movement is the degree to which movement plays an analytically primary role in the description.” From the perspective of movement, according to Nail, all seemingly discrete bodies are the result of moving flows of matter that continually fold themselves up in various patterns or what he calls “fields of motion.” Nail's philosophy of movement provides a conceptual framework for the study of these patterns of motion through history. Nail, however, also claims his philosophy of movement is not a metaphysical theory of reality in itself. Instead, he describes it as a practical and historical methodology oriented by the unprecedented scale and scope of global mobility in the early 21st century. In particular, he names four major historical conditions that situate his thought: mass migration, digital media, quantum physics, and climate change. He therefore describes his philosophy as a “history of the present.” Nail also describes his work as loosely part of the recent philosophical tradition of new materialism. The term “new materialism” has been applied to numerous and divergent philosophies including speculative realists, object-oriented ontologists, and neo-vitalists who all share in common some version of non-anthropocentric realism. However, Nail's work does not fit into any of these camps. His philosophy of movement instead offers a different kind of new materialism insofar as it focuses on the pedetic/indeterministic motion of matter and its various kinetic patterns. His philosophy is also unique among new materialists, excluding those within archaeology, because of its strongly historical methodology. Works Nail's published work is divided into two primary books series. The first series is composed of six “core” books, each written with a similar organization on five major areas of philosophy: ontology, politics, aesthetics, science, and nature. Each book provides a theory, history, and contemporary case study of the kinetic method. The purpose of each book is to redefine its subject area from a kinetic or process materialist perspective. The Figure of the Migrant (2015) and Theory of the Border (2016) develop a theory and history of what he terms “kinopolitics” based on the study of patterns of social motion. Theory of the Image (2019) develops a “kinesthetics” of moving images in the arts. Theory of the Object (2021) develops a “kinemetrics” of moving objects in the sciences. 
Theory of the Earth (2021) develops a “geokinetics” of nature in motion, and Being and Motion (2018) develops an original historical ontology of motion. The second series is composed of several books, each written on a major historical precursor to the philosophy motion. This includes Lucretius, Karl Marx, and Virginia Woolf. Each book offers a kinetic interpretation and close reading of one of these figures as philosophers who made motion their fundamental starting point. They include Lucretius I: An Ontology of Motion, 2018; Lucretius II: An Ethics of Motion, 2020; Lucretius III: A History of Motion, 2022; Marx in Motion: A New Materialist Marxism, 2020. Criticism On The Figure of the Migrant, Adriana Novoa has written that 'in regards to Mexico, Thomas writes under the assumption that all the migrants originating from this country have the same relationship with movement, but by failing to consider the existence of human diversity in movement, the book simplifies motivations and imposes a mechanistic social meaning. Thomas's theoretical effort does not help us to understand the inequality of humans and its connection with kinetic power. In modern Latin American nations, the dynamics of human movement were shaped, and continue to be shaped, by racial divisions, for example’. Andrew Dilts has written that the book 'gives us both a framework for understanding the movements of peoples...and yet at the same time by not prioritizing the action and self-understandings of those very people, it risks freezing them into the same stasis which the book seeks to resist'. On Theory of the Border, Alex Sager has written that 'Nail does not offer a theory of the border, at least insofar as we understand theories as offering explanations or predictions. Rather, what he provides is a taxonomy of different types of border technologies that he derives from his understanding of different (mostly) European historical periods. His book gives little guidance for determining when these technologies will emerge, what will motivate them, who they will target and how they will combine. The book's neglect of agents that construct and contest borders is striking.' Avery Kolers has written that 'unfortunately, Nail's writing is less transparent than it could be; the sheer buildup of neologisms is only the beginning of it. More importantly, there are places where it is not fully clear that the analysis hangs together.' 
Bibliography Returning to Revolution: Deleuze, Guattari and Zapatismo (Edinburgh University Press, 2012, 2015), (hardcover), (paperback) The Figure of the Migrant (Stanford University Press, 2015), (paperback), (hardcover) Theory of the Border (Oxford University Press, 2016), (paperback), (hardcover) Lucretius I: An Ontology of Motion (Edinburgh University Press, 2018), (paperback), (hardcover) Being and Motion (Oxford University Press, 2018), (paperback), (hardcover) Theory of the Image (Oxford University Press, 2019), (paperback), (hardcover) Lucretius II: An Ethics of Motion (Edinburgh University Press, 2020), (paperback), (hardcover) Marx in Motion: A New Materialist Marxism (Oxford University Press, 2020), (paperback), (hardcover) Theory of the Earth (Stanford University Press, 2021), (paperback), (hardcover) Theory of the Object (Edinburgh University Press, 2021), (paperback), (hardcover) Lucretius III: A History of Motion (Edinburgh University Press, 2022), (paperback), (hardcover) Matter and Motion: A Brief History of Kinetic Materialism (Edinburgh University Press, 2023) See also New materialisms References External links personal blog Thomas Nail at Academia.edu 1979 births Living people Continental philosophers American political philosophers Materialists Ontologists 21st-century American philosophers American philosophers of technology
Thomas Nail
Physics
1,510
19,920,343
https://en.wikipedia.org/wiki/Simultaneous%20action%20selection
Simultaneous action selection, or SAS, is a game mechanic that occurs when players of a game take action (such as moving their pieces) at the same time. Examples of games that use this type of movement include rock–paper–scissors and Diplomacy. Typically, a "secret yet binding" method of committing to one's move is necessary, so that as players' moves are revealed and implemented, others do not change their moves in light of the new information. Thus, in Diplomacy, players write down their moves and then reveal them simultaneously. Because no player gets the first move, this potentially arbitrary source of advantage is not present. It is also possible for simultaneous movement games to proceed relatively quickly, because players are acting at the same time, rather than waiting for their turn. Simultaneous action selection is easily implemented in card games such as Apples to Apples in which players simply select cards and throw them face-down into the center. Limitations Some games do not lend themselves to simultaneous movement, because one player's move may be prevented by the other player's. For instance, in chess, a move of a bishop takes queen would be incompatible with a simultaneous opposing move of queen takes bishop. By contrast, the simultaneous movement is possible in Junta because each coup phase has a movement stage and a separate combat stage; no units are removed until all have had a chance to move. It has been noted that "a certain amount of reverse psychology and reverse-reverse psychology ensues" as players attempt to calculate the implications of others' potential actions. Junta also has simultaneous action selection in that players secretly choose their locations at the same time. This is important in that, for instance, a player plotting an assassination may choose the bank for his or her own location (hoping to quickly deposit the ill-gotten gains) before finding out whether the location of his or her assassination was on the mark. Real world applications Simultaneous action selection is used in many real-world applications such as first-price sealed-bid auctions. The fact that no bidder knows what others are planning to bid may provide an incentive to bid high if there is a strong desire to win the auction, which can result in much higher winning bids than if better information were available. The prisoner's dilemma is another classic example of simultaneous action selection. SAS can also be used to introduce an element of chance, as when rock–paper–scissors is used to decide a matter. See also References External links BoardGameGeek: Games using simultaneous action selection BoardGameGeek wiki: Simultaneous Action Selection Game design Game theory
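In a digital setting, the "secret yet binding" commitment that simultaneous action selection requires is often implemented with a commit–reveal scheme: each player first publishes a hash of their move plus a random salt, and reveals the move only after all commitments are in. The following Python sketch is purely illustrative (the rock–paper–scissors resolution and all names are invented for the example) and does not describe any particular game's implementation.

```python
# Illustrative commit-reveal scheme for "secret yet binding" simultaneous moves.
# Each player commits to a hash of (move + salt); moves are revealed and checked
# only after every commitment has been received, so no one can react to others.
import hashlib
import secrets

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def commit(move: str) -> tuple[str, str]:
    """Return (commitment, salt). The commitment is published; the salt stays secret."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((move + salt).encode()).hexdigest()
    return digest, salt

def verify(commitment: str, move: str, salt: str) -> bool:
    """Check that a revealed move matches the earlier commitment."""
    return hashlib.sha256((move + salt).encode()).hexdigest() == commitment

def resolve(move_a: str, move_b: str) -> str:
    if move_a == move_b:
        return "draw"
    return "player A wins" if BEATS[move_a] == move_b else "player B wins"

if __name__ == "__main__":
    # Commit phase: both players publish hashes before seeing each other's move.
    a_move, b_move = "rock", "scissors"
    a_commit, a_salt = commit(a_move)
    b_commit, b_salt = commit(b_move)

    # Reveal phase: moves and salts are disclosed and checked against the commitments.
    assert verify(a_commit, a_move, a_salt) and verify(b_commit, b_move, b_salt)
    print(resolve(a_move, b_move))  # -> "player A wins"
```

The same pattern generalizes to the real-world applications mentioned above, such as sealed-bid auctions or prisoner's-dilemma style choices: commitments are collected first, and only then are all choices revealed and resolved.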
Simultaneous action selection
Mathematics,Engineering
521
66,393,505
https://en.wikipedia.org/wiki/St%20Peter%27s%20Medal
The St Peter's Medal is awarded annually by the British Association of Urological Surgeons (BAUS) for contributions to the surgical field of urology. The medal was designed and produced by sculptor William Bloye of the Birmingham School of Art and presented to the BAUS in 1948 by Bernard Joseph Ward, the BAUS's first vice-president. The first medal was awarded in 1949 to J. B. Macalpine who was the first to report bladder cancers associated with the dye industry. St Peter on the medal is identified by a key engraved on the bible that he holds. On the reverse is a laurel wreath within which the recipient's name is engraved, and around the circumference are the names of Edwin Hurry Fenwick, Peter Freyer and John Thomson-Walker. Origin and history The St Peter's Medal was designed and produced by sculptor William Bloye of the Birmingham School of Art, for the purpose of being awarded to a person who has made significant contributions to the field of urology and is a member of the British Isles or Commonwealth. The stamping die for the medal was presented to the British Association of Urological Surgeons (BAUS) in 1948 by Bernard Joseph Ward, the BAUS's first vice-president and urologist at Queen Elizabeth Hospital. The first medal was awarded in 1949 to J. B. Macalpine who first reported bladder cancers associated with the dye industry. It has subsequently been awarded annually by the BAUS, usually to one recipient, apart from 1951, 1999, 2005, 2006, 2007 and 2014, when there were two recipients. The medal is engraved with the names of the three teachers who influenced Bernard Ward: Edwin Hurry Fenwick, Peter Freyer and John Thomson-Walker. On presenting the medal in 1948, Ward stated in his speech that "although they were individually attached to other hospitals, they all came together in one hospital, St. Peter's; and the suggestion therefore was that in order to honour all three of them, we should call it the St. Peter's Medal. The hospital, the first urological hospital in Britain, was named after Saint Peter, whose name derives from the Latin for rock, petrus, and who was said by Christ to be the foundation upon which the Christian church was to be constructed. St Peter on the medal is identified by the iconography of a key engraved on the bible that he holds. On the reverse of the medal is a laurel wreath, within which the recipient's name is engraved, and around the wreath are the names of Fenwick, Freyer and Thomson-Walker. Recipients In 1951, the medal was presented for the second time, and for the first time to two recipients, when Ronald Ogier Ward and Terence J. Millin were given the award. In 1959 the medal was awarded to Harold H. Hopkins, a physicist, and in 2006 to Alison Brading, a physiologist. Other recipients have included Sir Michael Woodruff, Richard Turner-Warwick, John Wickham, Howard Kynaston, Geoffrey Chisholm, John M. Fitzpatrick, Roger Kirby and Prokar Dasgupta. Influence In 1975 the International Medical Society of Paraplegia proposed to offer a similar award based on the BAUS's St Peter's Medal. See also List of recipients of the St Peter's Medal References Awards established in 1948 Urology Medicine awards
St Peter's Medal
Technology
692