id
int64
39
79M
url
stringlengths
31
227
text
stringlengths
6
334k
source
stringlengths
1
150
categories
listlengths
1
6
token_count
int64
3
71.8k
subcategories
listlengths
0
30
42,966,879
https://en.wikipedia.org/wiki/Rewrite%20order
In theoretical computer science, in particular in automated reasoning about formal equations, reduction orderings are used to prevent endless loops. Rewrite orders, and, in turn, rewrite relations, are generalizations of this concept that have turned out to be useful in theoretical investigations. Motivation Intuitively, a reduction order R relates two terms s and t if t is properly "simpler" than s in some sense. For example, simplification of terms may be a part of a computer algebra program, and may be using the rule set { x+0 → x , 0+x → x , x*0 → 0, 0*x → 0, x*1 → x , 1*x → x }. In order to prove impossibility of endless loops when simplifying a term using these rules, the reduction order defined by "sRt if term t is properly shorter than term s" can be used; applying any rule from the set will always properly shorten the term. In contrast, to establish termination of "distributing-out" using the rule x*(y+z) → x*y+x*z, a more elaborate reduction order will be needed, since this rule may blow up the term size due to duplication of x. The theory of rewrite orders aims at helping to provide an appropriate order in such cases. Formal definitions Formally, a binary relation (→) on the set of terms is called a rewrite relation if it is closed under contextual embedding and under instantiation; formally: if l→r implies u[lσ]p→u[rσ]p for all terms l, r, u, each path p of u, and each substitution σ. If (→) is also irreflexive and transitive, then it is called a rewrite ordering, or rewrite preorder. If the latter (→) is moreover well-founded, it is called a reduction ordering, or a reduction preorder. Given a binary relation R, its rewrite closure is the smallest rewrite relation containing R. A transitive and reflexive rewrite relation that contains the subterm ordering is called a simplification ordering. Properties The converse, the symmetric closure, the reflexive closure, and the transitive closure of a rewrite relation is again a rewrite relation, as are the union and the intersection of two rewrite relations. The converse of a rewrite order is again a rewrite order. While rewrite orders exist that are total on the set of ground terms ("ground-total" for short), no rewrite order can be total on the set of all terms. A term rewriting system is terminating if its rules are a subset of a reduction ordering. Conversely, for every terminating term rewriting system, the transitive closure of (::=) is a reduction ordering, which need not be extendable to a ground-total one, however. For example, the ground term rewriting system { f(a)::=f(b), g(b)::=g(a) } is terminating, but can be shown so using a reduction ordering only if the constants a and b are incomparable. A ground-total and well-founded rewrite ordering necessarily contains the proper subterm relation on ground terms. Conversely, a rewrite ordering that contains the subterm relation is necessarily well-founded, when the set of function symbols is finite. A finite term rewriting system is terminating if its rules are subset of the strict part of a simplification ordering. Notes References Rewriting systems Order theory
Rewrite order
[ "Mathematics" ]
746
[ "Order theory" ]
42,967,409
https://en.wikipedia.org/wiki/Eigenstrain
In continuum mechanics an eigenstrain is any mechanical deformation in a material that is not caused by an external mechanical stress, with thermal expansion often given as a familiar example. The term was coined in the 1970s by Toshio Mura, who worked extensively on generalizing their mathematical treatment. A non-uniform distribution of eigenstrains in a material (e.g., in a composite material) leads to corresponding eigenstresses, which affect the mechanical properties of the material. Overview Many distinct physical causes for eigenstrains exist, such as crystallographic defects, thermal expansion, the inclusion of additional phases in a material, and previous plastic strains. All of these result from internal material characteristics, not from the application of an external mechanical load. As such, eigenstrains have also been referred to as “stress-free strains” and “inherent strains”. When one region of material experiences a different eigenstrain than its surroundings, the restraining effect of the surroundings leads to a stress state on both regions. Analyzing the distribution of this residual stress for a known eigenstrain distribution or inferring the total eigenstrain distribution from a partial data set are both two broad goals of eigenstrain theory. Analysis of eigenstrains and eigenstresses Eigenstrain analysis usually relies on the assumption of linear elasticity, such that different contributions to the total strain are additive. In this case, the total strain of a material is divided into the elastic strain e and the inelastic eigenstrain : where and indicate the directional components in 3 dimensions in Einstein notation. Another assumption of linear elasticity is that the stress can be linearly related to the elastic strain and the stiffness by Hooke’s Law: In this form, the eigenstrain is not in the equation for stress, hence the term "stress-free strain". However, a non-uniform distribution of eigenstrain alone will cause elastic strains to form in response, and therefore a corresponding elastic stress. When performing these calculations, closed-form expressions for (and thus, the total stress and strain fields) can only be found for specific geometries of the distribution of . Ellipsoidal inclusion in an infinite medium One of the earliest examples providing such a closed-form solution analyzed a ellipsoidal inclusion of material with a uniform eigenstrain, constrained by an infinite medium with the same elastic properties. This can be imagined with the figure on the right. The inner ellipse represents the region . The outer region represents the extent of if it fully expanded to the eigenstrain without being constrained by the surrounding . Because the total strain, shown by the solid outlined ellipse, is the sum of the elastic and eigenstrains, it follows that in this example the elastic strain in the region is negative, corresponding to a compression by on the region . The solutions for the total stress and strain within are given by: Where is the Eshelby Tensor, whose value for each component is determined only by the geometry of the ellipsoid. The solution demonstrates that the total strain and stress state within the inclusion are uniform. Outside of , the stress decays towards zero with increasing distance away from the inclusion. In the general case, the resulting stresses and strains may be asymmetric, and due to the asymmetry of , the eigenstrain may not be coaxial with the total strain. 
Inverse problem Eigenstrains and the residual stresses that accompany them are difficult to measure (see:Residual stress). Engineers can usually only acquire partial information about the eigenstrain distribution in a material. Methods to fully map out the eigenstrain, called the inverse problem of eigenstrain, are an active area of research. Understanding the total residual stress state, based on knowledge of the eigenstrains, informs the design process in many fields. Applications Structural engineering Residual stresses, e.g. introduced by manufacturing processes or by welding of structural members, reflect the eigenstrain state of the material. This can be unintentional or by design, e.g. shot peening. In either case, the final stress state can affect the fatigue, wear, and corrosion behavior of structural components. Eigenstrain analysis is one way to model these residual stresses. Composite materials Since composite materials have large variations in the thermal and mechanical properties of their components, eigenstrains are particularly relevant to their study. Local stresses and strains can cause decohesion between composite phases or cracking in the matrix. These may be driven by changes in temperature, moisture content, piezoelectric effects, or phase transformations. Particular solutions and approximations to the stress fields taking into account the periodic or statistical character of the composite material's eigenstrain have been developed. Strain engineering Lattice misfit strains are also a class of eigenstrains, caused by growing a crystal of one lattice parameter on top of a crystal with a different lattice parameter. Controlling these strains can improve the electronic properties of an epitaxially grown semiconductor. See: strain engineering. See also Residual stress Strain (mechanics) References Continuum mechanics
Eigenstrain
[ "Physics" ]
1,061
[ "Classical mechanics", "Continuum mechanics" ]
42,968,292
https://en.wikipedia.org/wiki/Dropcam
Dropcam, Inc. was an American technology company headquartered in San Francisco, California. The company is known for its Wi-Fi video streaming cameras, Dropcam and Dropcam Pro, that allow people to view live feeds through Dropcam's cloud-based service. On June 20, 2014, it was announced that Google's Nest Labs bought Dropcam for $555 million, a decision Dropcam co-founder Greg Duffy later described as a "mistake". In June 2015, Nest introduced the Nest Cam, a successor to the Dropcam Pro. Support for Dropcam services ended on April 8, 2024. History Software engineers Greg Duffy and Aamir Virani founded Dropcam in 2009. Duffy served as Dropcam's CEO and Virani served as COO. They originally developed software for cameras made by Swedish company AXIS. Wanting to develop a less expensive camera, the two companies parted ways and Dropcam started producing its own cameras that primarily provided video monitoring for homes and small businesses. Duffy and Virani credit Duffy's dad with at least part of the inspiration for Dropcam. He wanted to identify the neighbor who was letting their dog poop on his lawn but they were having trouble finding a security camera that made it easy to record, stream and monitor large amounts of data. Dropcam received early funding from technology investor Mitch Kapor, and in June 2012, Dropcam secured $12 million in venture capital funding led by Menlo Ventures and previous investors, Accel Partners and Bay Partners. Dropcam has also received funding from Felicis Ventures and Kleiner Perkins Caufield & Byers. The following year, it received $30 million more in funding led by Institutional Venture Partners, bringing the total raised to $47.8 million. Duffy said Dropcam's revenue grew 500 percent year over year. Dropcam hosts cloud data through Amazon Web Services and Duffy said in 2014 that Dropcam presently records more video than YouTube. Dropcam has become popular in families watching their children, through monitoring pets at home, at pet stores and in adoption centers. Users have also reportedly caught home-burglaries in progress. Duffy has said, “Moms are using it to catch their babies' first steps when they're not around, checking that older kids have arrived home safely; contacting children who are ignoring their cell phones; and sharing footage from birthday parties.” Due to the success of Dropcam, several companies launched similar products and services in 2014 and 2015, such as SpotCam and simplicam. In June 2015, the parent company Nest has introduced Nest Cam as a successor to Dropcam Pro. On April 7, 2023, Google announced that it would end support for both Dropcam and Nest Secure on April 8, 2024. Cloud Recording Dropcam provides optional encrypted digital video recording through the cloud. The Cloud Recording service automatically saves video on a rolling basis, so users can review the past week or month of footage, depending on their plan. All users, with or without the service, can still view the live feed. Dropcam allows users to download the video and create video clips while also allowing for the creation of a public stream. About 40% of Dropcam users sign up for the cloud service. As part of Dropcam's Cloud Recording service, markers are placed on a user's video timeline when motion or audio is detected, so a user may go back and view those specific events rather than watch the whole feed to search for notable activities. 
Dropcam introduced a beta version of its Activity Recognition feature for Cloud Recording, which learns typical motion patterns in a user's video stream, allowing for customized motion alerts. References Home automation companies Defunct electronics companies of the United States Google acquisitions 2014 mergers and acquisitions 2009 establishments in California 2024 disestablishments in California
Dropcam
[ "Technology" ]
779
[ "Home automation", "Home automation companies" ]
42,968,488
https://en.wikipedia.org/wiki/BMVA%20Summer%20School
BMVA Summer School is an annual summer school on computer vision, organised by the British Machine Vision Association and Society for Pattern Recognition (BMVA). The course is residential, usually held over five days, and consists of lectures and practicals in topics in image processing, computer vision, pattern recognition. It is intended that the course will complement and extend the material in existing technical courses that many students/researchers will encounter in their early stage of postgraduate training or careers. It aims to broaden awareness of knowledge and techniques in Vision, Image Computing and Pattern Recognition, and to develop appropriate research skills, and for students to interact with their peers, and to make contacts among those who will be the active researchers of their own generation. It is open to students from both UK and non-UK universities. The registration fees vary based on time of registration and are in general slightly higher for non-UK students. The summer school has been hosted locally by various universities in UK that carry out Computer Vision research, e.g., Kingston University, the University of Manchester, Swansea University and University of Lincoln. It has run since the mid-1990s, and content is updated every year. Speakers at the Summer School are active academic researchers or experienced practitioners from industry, mainly in the UK. It has received financial support from EPSRC from 2009 to 2012. Delegates of the summer school are usually encouraged to bring posters to summer school to present their work to peers and lecturers. A best poster is selected by the summer school lecturers. References External links 26th BMVA Summer School 2023, University of East Anglia 25th BMVA Summer School 2022, University of East Anglia 2017 BMVA summer school, University of Lincoln 2015-16 BMVA summer school, Swansea University 2014 BMVA summer school, Swansea University 2013 BMVA summer school, Manchester University 2012 BMVA summer school, Manchester University 2011 BMVA summer school, Manchester University Annual events in the United Kingdom Computer science education in the United Kingdom Computer vision research infrastructure Engineering and Physical Sciences Research Council Information technology organisations based in the United Kingdom Machine vision Summer schools
BMVA Summer School
[ "Engineering" ]
429
[ "Machine vision", "Robotics engineering" ]
42,970,651
https://en.wikipedia.org/wiki/Key%20Performance%20Parameters
Key Performance Parameters (KPPs) specify what the critical performance goals are in a United States Department of Defense (DoD) acquisition under the JCIDS process. The JCIDS intent for KPPs is to have a few measures stated where the acquisition product either meets the stated performance measure or else the program will be considered a failure per instructions CJCSI 3170.01H – Joint Capabilities Integration and Development System. The mandates require 3 to 8 KPPs be specified for a United States Department of Defense major acquisition, known as Acquisition Category 1 or ACAT-I. The term is defined as "Performance attributes of a system considered critical to the development of an effective military capability. A KPP normally has a threshold representing the minimum acceptable value achievable at low-to-moderate risk, and an objective, representing the desired operational goal but at higher risk in cost, schedule, and performance. KPPs are contained in the Capability Development Document (CDD) and the Capability Production Document (CPD) and are included verbatim in the Acquisition Program Baseline (APB). KPPs are considered Measures of Performance (MOPs) by the operational test community." Commentary notes that metrics must be chosen carefully, and that they are hard to define and apply throughout a projects life cycle. It is also desired that KPPs of a program avoid repetition, and to be something applicable among different programs such as fuel efficiency. Higher numbers of KPPs are associated to program and schedule instability. See also Analysis of Alternatives Requirement (example mention of Net-Ready KPP, a mandated KPP) References External links CJCSI 3170.01H at DAU collection CJCSM 3170.01C at everyspec.com United States Department of Defense United States defense procurement Systems engineering
Key Performance Parameters
[ "Engineering" ]
366
[ "Systems engineering" ]
42,970,995
https://en.wikipedia.org/wiki/Yesterday%20%28time%29
Yesterday is a temporal construct of the relative past; literally of the day before the current day (today), or figuratively of earlier periods or times, often but not always within living memory. Learning and language The concepts of "yesterday", "today" and "tomorrow" are among the first relative time concepts acquired by infants. In language a distinctive noun or adverb for "yesterday" is present in most but not all languages, though languages with ambiguity in vocabulary also have other ways to distinguish the immediate past and immediate future. "Yesterday" is also a relative term and concept in grammar and syntax. Yesterday is an abstract concept in the sense that events that occurred in the past do not exist in the present reality, though their consequences persist. Some languages have a hesternal tense: a dedicated grammatical form for events of the previous day. References Past Days
Yesterday (time)
[ "Physics" ]
176
[ "Spacetime", "Past", "Physical quantities", "Time" ]
42,971,132
https://en.wikipedia.org/wiki/Seoul%20Accord
The Seoul Accord is an international accreditation agreement for professional computing and information technology academic degrees, between the bodies responsible for accreditation in its signatory countries. Established in 2008, the signatories as of 2016 are Australia, Canada, Taiwan, Hong Kong, Japan, Korea, the United Kingdom and the United States. Provisional signatories include Ireland, New Zealand, Mexico, Philippines, Sri Lanka and Malaysia. In 2021, Mexico officially became a signatory. In 2024, Saudi Arabia, Ireland, Indonesian, Malaysia add in one of signatories, while Sri Lanka, Peru, Philippines are provisional status. On the other hand, New Zealand quit at same year. This agreement mutually recognizes tertiary level computing and IT qualifications between the signatory agencies. Graduates of accredited programs in any of the signatory countries are recognized by the other signatory countries as having met the academic requirements as IT professionals. Scope The Seoul Accord covers tertiary undergraduate computing degrees. Engineering and Engineering Technology programs are not covered by the Seoul accord, although some Software engineering programs have dual accreditation with the Washington Accord. Signatories See also Sydney Accord - engineering technologists Dublin Accord - engineering technicians EQANIE - European accreditation Chartered Engineer Outcome-based education Professional Engineer References External links Seoul Accord Professional titles and certifications Professional certification in computing Information technology education Computer science education Computer science organizations
Seoul Accord
[ "Technology" ]
270
[ "Computer science organizations", "Information technology education", "Computer science education", "Computer science", "Information technology" ]
42,971,169
https://en.wikipedia.org/wiki/Fire%20Research%20and%20Safety%20Act%20of%201968
Fire Research and Safety Act of 1968 was a declaration for a panoptic fire research and safety program advocated by President Lyndon Johnson on February 16, 1967. The Act of Congress established a National Commission on Fire Prevention and Control while encompassing more effective measures for fire hazards protection with the potentiality of death, injury, and damage to property. The U.S. statute petitioned a nationwide collection of comprehensive fire data with emphasis on a United States fire research program, fire safety education and training programs, demonstrations of new approaches and improvements in fire control and prevention resulting in the reduction of death, personal injury, and property damage. The S. 1124 legislation was passed by the 90th Congressional session and enacted by the 36th President of the United States Lyndon B. Johnson on March 1, 1968. Provisions Public Law 90-259 was penned as two titles: Fire Research and Safety Program and National Commission on Fire Prevention and Control. Title I - Fire Research and Safety Program (A) The purpose of Title I is to authorize directly or through contracts or grants (1) fire investigations to determine their causes, frequency of occurrence, severity, and other pertinent factors (2) research into the causes and nature of fires, and the development of improved methods and techniques for fire prevention, fire control, and reduction of death, personal injury, and property damage (3) educational programs to — (A) inform the public of fire hazards and fire safety techniques (B) encourage avoidance of such hazards and use of such techniques (4) fire information reference services, including the collection, analysis, and dissemination of data, research results, and other information, derived from this program or from other sources and related to fire protection, fire control, and reduction of death, personal injury, and property damage (5) educational and training programs to improve, among other things — (A) the efficiency, operation, and organization of fire services (B) the capability of controlling unusual fire-related hazards and fire disasters (6) projects demonstrating — (A) improved or experimental programs of fire prevention, fire control, and reduction of death, personal injury, and property damage (B) application of fire safety principles in construction (C) improvement of the efficiency, operation, or organization of the fire services (B) The purpose of Title I is to support by contracts or grants the development, for use by educational and other nonprofit institutions (1) fire safety and fire protection engineering or science curriculums (2) fire safety courses, seminars, or other instructional materials and aids for the above curriculums or other appropriate curriculums or courses of instruction Grants may be made only to States and local governments other non-Federal public agencies, and nonprofit institutions. Such a grant may be up to 100 percentum of the total cost of the project for which such grant is made. Title II - National Commission on Fire Prevention and Control The U.S. Congress found the United States to have an increasing proportation of the population dispersed in suburban and urban vicinities. The population geographics created a complex and frequently obscured approach for controlling and preventing destructive fires beyond local municipalities. 
The purpose of Title II is to establish a commission to perform a thorough investigation and study of the demography and population dynamics in the United States. The commission is to develop a formulation for recommendations whereby the United States can reduce the destruction of life and property caused by fire in U.S. cities, communities, suburbs, and elsewhere. Establishment of Commission The National Commission on Fire Prevention and Control is to be composed of twenty members. Secretary of Commerce, Secretary of Housing and Urban Development, and eighteen members are to be appointed by the President of the United States. The individuals appointed as members shall be eminently well qualified by training or experience to carry out the functions of the Commission, and shall be selected so as to provide representation of the views of individuals and organizations of all areas of the United States concerned with fire research, safety, control, or prevention, including representatives drawn from Federal, State, and local governments, industry, labor, universities, laboratories, trade associations, and other interested institutions or organizations. Not more than six members of the Commission shall be appointed from the Federal Government. The President shall designate the Chairman and Vice Chairman of the Commission. See also Federal Fire Prevention and Control Act of 1974 Fire Research Laboratory Firefighting National Bureau of Standards Organic Act U.S. Flammable Fabrics Act References External links 1968 in American law 90th United States Congress Fire investigation Fire protection Fire prevention law Firefighting in the United States United States federal housing legislation
Fire Research and Safety Act of 1968
[ "Engineering" ]
920
[ "Building engineering", "Fire protection" ]
42,971,530
https://en.wikipedia.org/wiki/Armillaria%20altimontana
Armillaria altimontana is a species of agaric fungus in the family Physalacriaceae. The species, found in the Pacific Northwest region of North America, was officially described as new to science in 2012. It was previously known as North American biological species (NABS) X. It grows in high-elevation mesic habitats in dry coniferous forests. This species has been found on hardwoods and conifers and is associated most commonly with fir-dominated forest types in southern British Columbia, Washington, Oregon, Idaho and northern California. A. Altimontana competes directly with A. solidipes, and evidence suggests it's beneficial and can increase tree survival. See also List of Armillaria species References External links altimontana Fungi of North America Fungal tree pathogens and diseases Pacific Northwest Fungi described in 2012 Fungus species
Armillaria altimontana
[ "Biology" ]
176
[ "Fungi", "Fungus species" ]
42,972,744
https://en.wikipedia.org/wiki/Anhembi%20orthobunyavirus
Anhembi orthobunyavirus, also called Anhembi virus (AMBV), is a species of virus. It was initially considered a strain of Wyeomyia virus, belonging serologically to the Bunyamwera serogroup of bunyaviruses. In 2018 it was made its own species. It was isolated from the rodent - Proechimys iheringi - and a mosquito - Phoniomyia pilicauda - in São Paulo, Brazil. Until 2001 this virus has not been reported to cause disease in humans. References Orthobunyaviruses
Anhembi orthobunyavirus
[ "Biology" ]
127
[ "Virus stubs", "Viruses" ]
42,972,900
https://en.wikipedia.org/wiki/Agua%20Preta%20virus
Agua Preta virus is an unaccepted species of virus, suggested to belong to the order Herpesvirales and family Orthoherpesviridae, as determined by thin-section electron microscopy. It was isolated from the gray short-tailed bat, Carollia subrufa, in the Utinga Forest near Belém, Brazil. References Herpesviridae Unaccepted virus taxa
Agua Preta virus
[ "Biology" ]
85
[ "Viruses", "Controversial taxa", "Virus stubs", "Unaccepted virus taxa", "Biological hypotheses" ]
42,972,915
https://en.wikipedia.org/wiki/Regulation%20of%20fracking
Countries using or considering to use fracking have implemented different regulations, including developing federal and regional legislation, and local zoning limitations. In 2011, after public pressure France became the first nation to ban hydraulic fracturing, based on the precautionary principle as well as the principal of preventive and corrective action of environmental hazards. The ban was upheld by an October 2013 ruling of the Constitutional Council. Some other countries have placed a temporary moratorium on the practice. Countries like the United Kingdom and South Africa, have lifted their bans, choosing to focus on regulation instead of outright prohibition. Germany has announced draft regulations that would allow using hydraulic fracturing for the exploitation of shale gas deposits with the exception of wetland areas. The European Union has adopted a recommendation for minimum principles for using high-volume hydraulic fracturing. Its regulatory regime requires full disclosure of all additives. In the United States, the Ground Water Protection Council launched FracFocus.org, an online voluntary disclosure database for hydraulic fracturing fluids funded by oil and gas trade groups and the U.S. Department of Energy. Hydraulic fracturing is excluded from the Safe Drinking Water Act's underground injection control's regulation, except when diesel fuel is used. The EPA assures surveillance of the issuance of drilling permits when diesel fuel is employed. On 17 December 2014, New York state issued a statewide ban on hydraulic fracturing, becoming the second state in the United States to issue such a ban after Vermont. Approaches Risk-based approach The main tool used by this approach is risk assessment. A risk assessment method, based on experimenting and assessing risk ex-post, once the technology is in place. In the context of hydraulic fracturing, it means that drilling permits are issued and exploitation conducted before the potential risks on the environment and human health are known. The risk-based approach mainly relies on a discourse that sacralizes technological innovations as an intrinsic good, and the analysis of such innovations, such as hydraulic fracturing, is made on a sole cost-benefit framework, which does not allow prevention or ex-ante debates on the use of the technology. This is also referred to as "learning-by-doing". A risk assessment method has for instance led to regulations that exist in the hydraulic fracturing in the United States (EPA will release its study on the effect of hydraulic fracturing on groundwater in 2014, though hydraulic fracturing has been used for more than 60 years. Commissions that have been implemented in the US to regulate the use of hydraulic fracturing have been created after hydraulic fracturing had started in their area of regulation. This is for instance the case in the Marcellus shale area where three regulatory committees were implemented ex-post. Academic scholars who have studied the perception of hydraulic fracturing in the North of England have raised two main critiques of this approach. Firstly, it takes scientific issues out of the public debate since there is no debate on the use of a technology but on its effects. Secondly, it does not prevent environmental harm from happening since risks are taken then assessed instead of evaluated then taken as it would be the case with a precautionary approach to scientific debates. 
The relevance and reliability of risk assessments in hydraulic fracturing communities has also been debated amongst environmental groups, health scientists, and industry leaders. A study has epitomized this point: the participants to regulatory committees of the Marcellus shale have, for a majority, raised concerns about public health although nobody in these regulatory committees had expertise in public health. That highlights a possible underestimation of public health risks due to hydraulic fracturing. Moreover, more than a quarter of the participants raised concerns about the neutrality of the regulatory committees given the important weigh of the hydraulic fracturing industry. The risks, to some like the participants of the Marcellus Shale regulatory committees, are overplayed and the current research is insufficient in showing the link between hydraulic fracturing and adverse health effects, while to others like local environmental groups the risks are obvious and risk assessment is underfunded. Precaution-based approach The second approach relies on the precautionary principle and the principal of preventive and corrective action of environmental hazards, using the best available techniques with an acceptable economic cost to insure the protection, the valuation, the restoration, management of spaces, resources and natural environments, of animal and vegetal species, of ecological diversity and equilibriums. The precautionary approach has led to regulations as implemented in France and in Vermont, banning hydraulic fracturing. Such an approach is called upon by social sciences and the public as studies have shown in the North of England and Australia. Indeed, in Australia, the anthropologist who studied the use of hydraulic fracturing concluded that the risk-based approach was closing down the debate on the ethics of such a practice, therefore avoiding questions on broader concerns that merely the risks implied by hydraulic fracturing. In the North of England, levels of concerns registered in the deliberative focus groups studied were higher regarding the framing of the debate, meaning the fact that people did not have a voice in the energetic choices that were made, including the use of hydraulic fracturing. Concerns relative to risks of seismicity and health issues were also important to the public, but less than this. A reason for that is that being withdrawn the right to participate in the decision-making triggered opposition of both supporters and opponents of hydraulic fracturing. The points made to defend such an approach often relate to climate change and the impact on the direct environment; related to public concerns on the rural landscape for instance in the UK. Energetic choices indeed affect climate change since greenhouse gas emissions from fossil fuels extraction such as shale gas and oil contribute to climate change. Therefore, people have in the UK raised concerns about the exploitation of these resources, not just hydraulic fracturing as a method. They would hence prefer a precaution-based approach to decide whether or not, regarding the issue of climate change, they want to exploit shale gas and oil. Framing of the debate There are two main areas of interest regarding how debates on hydraulic fracturing for the exploitation of unconventional oil and gas have been conducted. "Learning-by-doing" and the displacement of ethics A risk-based approach is often referred to as "learning-by-doing" by social sciences. 
Social sciences have raised two main critiques of this approach. Firstly, it takes scientific issues out of the public debate since there is no debate on the use of a technology but on its impacts. Secondly, it does not prevent environmental harm from happening since risks are taken then assessed instead of evaluated then taken. Public concerns are shown to be really linked to these issues of scientific approach. Indeed, the public in the North of England for instance fears "the denial of the deliberation of the values embedded in the development and application of that technology, as well as the future it is working towards" more than risks themselves. The legitimacy of the method is only questioned after its implementation, not before. This vision separates risks and effects from the values entitled by a technology. For instance, hydraulic fracturing entitles a transitional fuel for its supporters whereas for its opponents it represents a fossil fuel exacerbating the greenhouse effect and global warming. Not asking these questions leads to seeing only the mere economic cost-benefit analysis. This is linked to a pattern of preventing non-experts from taking part in scientific-technological debates, including their ethical issues. An answer to that problem is seen to be increased public participation so as to have the public deciding which issues to address and what political and ethical norms to adopt as a society. Another public concern with the "learning-by-doing" approach is that the speed of innovation may exceed the speed of regulation and since innovation is seen as serving private interests, potentially at the expense of social good, it is a matter of public concern. Science and Technology Studies have theorized "slowing-down" and the precautionary principle as answers. The claim is that the possibility of an issue is legitimate and should be taken into account before any action is taken. Variations in risk-assessment of environmental effects of hydraulic fracturing Issues also exist regarding the way risk assessment is conducted and whether it reflects some interests more than others. Firstly, an issue exists about whether risk assessment authorities are able to judge the impact of hydraulic fracturing in public health. A study conducted on the advisory committees of the Marcellus Shale gas area has shown that not a single member of these committees had public health expertise and that some concern existed about whether the commissions were not biased in their composition. Indeed, among 51 members of the committees, there is no evidence that a single one has any expertise in environmental public health, even after enlarging the category of experts to "include medical and health professionals who could be presumed to have some health background related to environmental health, however minimal". This cannot be explained by the purpose of the committee since all three executive orders of the different committees mentioned environmental public health related issues. Another finding of the authors is that a quarter of the opposed comments mentioned the possibility of bias in favor of gas industries in the composition of committees. The authors conclude saying that political leaders may not want to raise public health concerns not to handicap further economic development due to hydraulic fracturing. 
Secondly, the conditions to allow hydraulic fracturing are being increasingly strengthened due to the move from governmental agencies' authority over the issue to elected officials' authority over it. The Shale Gas Drilling Safety Review Act of 2014 issued in Maryland forbids the issuance of drilling permits until a high standard "risk assessment of public health and environmental hazards relating to hydraulic fracturing activities" is conducted for at least 18 months based on the Governor's executive order. Institutional discourse and the public A qualitative study using deliberative focus groups has been conducted in the North of England, where the Bowland-Hodder shale, a big shale gas reservoir, is exploited by hydraulic fracturing. These group discussions reflect many concerns on the issue of the use of unconventional oil and unconventional gas. There is a concern about trust linked with a doubt on the ability or will of public authorities to work for the greater social good since private interests and profits of industrial companies are seen as corruptive powers. Alienation is also a concern since the feeling of a game rigged against the public rises due to "decision making being made on your behalf without being given the possibility to voice an opinion". Exploitation also arises since economic rationality that is seen as favoring short-termism is accused of seducing policy-makers and industry. Risk is accentuated by what is hydraulic fracturing as well as what is at stake, and "blind spots" of current knowledge as well as risk assessment analysis are accused of increasing the potentiality of negative outcomes. Uncertainty and ignorance are seen as too important in the issue of hydraulic fracturing and decisions are therefore perceived as rushed, which is why participants favored some form of precautionary approach. There is a major fear on the possible disconnection between the public's and the authorities' visions of what is a good choice for the good reasons. It also appears that media coverage and institutional responses are widely inaccurate to answer public concerns. Indeed, institutional responses to public concerns are mostly inadequate since they focus on risk assessment and giving information to the public that is considered anxious because ignorant. But public concerns are much wider and it appears that public knowledge on hydraulic fracturing is rather good. The hydraulic fracturing industry has lobbied for permissive regulation in Europe, the US federal government, and US states. On 20 March 2015 the rules for disclosing the chemicals used were tightened by the Obama administration. The new rules give companies involved 30 days from the beginning of an operation on federal land to disclose those chemicals. See also Regulation of hydraulic fracturing in the United States Hydraulic fracturing by country References External links The British Columbia (Canada) Oil and Gas Commission mandatory disclosure of hydraulic fracturing fluids Hydraulic Fracturing: Selected Legal Issues Congressional Research Service Environmental law Hydraulic fracturing Energy law Mining law and governance Regulation of technologies
Regulation of fracking
[ "Chemistry" ]
2,440
[ "Petroleum technology", "Natural gas technology", "Hydraulic fracturing" ]
42,973,032
https://en.wikipedia.org/wiki/Phaeoacremonium%20aleophilum
Phaeoacremonium aleophilum is a fungus species in the genus Phaeoacremonium. It is associated with Phaeomoniella chlamydospora in esca in mature grapevines and decline in young vines (Petri disease), two types of grapevine trunk disease. Togninia minima is the teleomorph (the sexual reproductive stage) of P. aleophilum. References External links mycobank.org Grapevine trunk diseases Diaporthales Fungi described in 1996 Fungus species
Phaeoacremonium aleophilum
[ "Biology" ]
114
[ "Fungi", "Fungus species" ]
68,694,975
https://en.wikipedia.org/wiki/Yttrium%28II%29%20oxide
Yttrium(II) oxide or yttrium monoxide is a chemical compound with the formula YO. This chemical compound was first created in its solid form by pulsed laser deposition, using yttrium(III) oxide as the target at 350 °C. The film was deposited on calcium fluoride using a krypton monofluoride laser. This resulted in a 200 nm flim of yttrium monoxide. References Yttrium compounds Oxides
Yttrium(II) oxide
[ "Chemistry" ]
96
[ "Oxides", "Salts" ]
68,695,051
https://en.wikipedia.org/wiki/IPhone%2013
The iPhone 13 and iPhone 13 Mini (stylized as iPhone 13 mini) are smartphones developed and marketed by Apple. They are the fifteenth generation of iPhones, succeeding the iPhone 12 and 12 Mini. They were unveiled at an Apple Event in Apple Park in Cupertino, California, on September 14, 2021, alongside the higher-priced iPhone 13 Pro and iPhone 13 Pro Max flagships. They were released on September 24, 2021. The iPhone 13 Mini was discontinued in September 2023, and the iPhone 13 was discontinued in September 2024 with the announcement of the iPhone 16. History The iPhone 13 and iPhone 13 Mini were officially announced alongside the ninth-generation iPad, 6th generation iPad Mini, Apple Watch Series 7, iPhone 13 Pro, and iPhone 13 Pro Max by a virtual press event filmed and recorded at Apple Park in Cupertino, California, on September 14, 2021. Pre-orders began on September 17, 2021, at 5:00 am PDT. Pricing starts at US$799 for the iPhone 13 and US$699 for the iPhone 13 Mini. Together with iPhone 12, the iPhone 13 Mini was discontinued on September 12, 2023, with the announcement of the iPhone 15 and 15 Pro. The iPhone 13 continued to be sold until the announcement of the iPhone 16 and 16 Pro on September 9, 2024, when it was discontinued alongside the iPhone 15 Pro. Design The iPhone 13 has a flat chassis analogous to that of contemporaneous Apple products with some differences such as the rear cameras being larger and arranged diagonally. The Face ID True Depth sensor housing on the iPhone is 20% smaller yet taller than its predecessors. The iPhone 13 and iPhone 13 Mini are available in six colors: Midnight, Starlight, Product Red, Blue, Pink, and Green. On March 8, 2022, at Apple's Special Event "Peek Performance", Apple revealed a new Green color option, which became available and released on March 18, 2022. Specifications Hardware The iPhone 13 and iPhone 13 Mini use an Apple-designed A15 Bionic system on a chip. The iPhone 13 and 13 Mini feature a 6-core CPU, 4-core GPU, and 16-core Neural Engine, while the iPhone 13 Pro and 13 Pro Max feature a 5-core GPU. Display The iPhone 13 features a display with Super Retina XDR OLED technology at a resolution of 2532×1170 pixels and a pixel density of about 460 PPI with a refresh rate of 60 Hz. The iPhone 13 Mini features a display with the same technology at a resolution of 2340×1080 pixels and a pixel density of about 476 PPI. Both models have the Super Retina XDR OLED display with improved typical brightness up to 800 nits, and max brightness up to 1200 nits. Cameras The iPhone 13 and 13 Mini feature the same camera system with three cameras: one front-facing camera (12MP f/2.2) and two back-facing cameras: a wide (12MP f/1.6) and ultra-wide (12MP f/2.4) camera. The back-facing cameras both contain larger sensors for more light-gathering with new sensor shift optical image stabilization (OIS) on the main camera. The camera module on the back is arranged diagonally instead of vertically to engineer the larger sensors. The cameras use Apple's latest computational photography engine, called Smart HDR 4. Users can also choose from a range of photographic styles during capture, including rich contrast, vibrant, warm, and cool. Apple clarifies this is different from a filter because it works intelligently with the image processing algorithm during capture to apply local adjustments to an image and the effects will be baked into the photos, unlike filters which can be removed after applying. 
The camera app contains a new mode called Cinematic Mode, which allows users to rack focus between subjects and create (simulate) shallow depth of field using software algorithms. It is supported on wide and front-facing cameras in 1080p at 30 fps. Charging The iPhone 13 and 13 Mini have Lightning fast charging at 20 Watts, and wireless charging via MagSafe at 15 W (iPhone 13) or 12 W (13 Mini), or via the Qi protocol at 7.5 W. Software iPhone 13 and iPhone 13 Mini were originally shipped with iOS 15 at launch. They are compatible with iOS 16, which was released on September 12, 2022. The latest software iOS 17, which was revealed at Apple's WWDC 2023 event, is compatible with the iPhone 13 and 13 Mini. The next-generation Qi2 wireless charging standard has been added to the iPhone 13 and iPhone 13 Mini with the update to iOS 17.2. It is compatible with iOS 18, which was released on September 16, 2024. The 13 and 13 Mini are able to make calls via: FaceTime Audio / Video (not available in some regions); Voice over LTE (VoLTE); Wi-Fi Calling (not available in some regions models). Wi-Fi hotspotting is also possible Manufacturing The iPhone 13 and 13 Mini were manufactured on contract by Pegatron and Foxconn for Apple. Release Availability by country September 24, 2021 October 1, 2021 October 8, 2021 October 22, 2021 October 29, 2021 November 5, 2021 November 19, 2021 Repairability Some iPhone 13 parts are paired to the motherboard. The user is warned if a paired component (e.g. screen, battery) is replaced by independent repair shops, and the Face ID sensors may completely cease to function if replaced. On later iOS versions Apple removed this limitation. Apple technicians have a proprietary software tool to pair components. See also Comparison of smartphones History of the iPhone List of iPhone models References External links – official site Mobile phones introduced in 2021 Mobile phones with 4K video recording Mobile phones with multiple rear cameras Products and services discontinued in 2024 Discontinued flagship smartphones
IPhone 13
[ "Technology" ]
1,214
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
68,697,715
https://en.wikipedia.org/wiki/Martin%20curve
The Martin curve is a power law used by oceanographers to describe the export to the ocean floor of particulate organic carbon (POC). The curve is controlled with two parameters: the reference depth in the water column, and a remineralisation parameter which is a measure of the rate at which the vertical flux of POC attenuates. It is named after the American oceanographer John Martin. The Martin Curve has been used in the study of ocean carbon cycling and has contributed to understanding the role of the ocean in regulating atmospheric levels. Background The dynamics of the particulate organic carbon (POC) pool in the ocean are central to the marine carbon cycle. POC is the link between surface primary production, the deep ocean, and marine sediments. The rate at which POC is degraded in the dark ocean can impact atmospheric CO2 concentration. The biological carbon pump (BCP) is a crucial mechanism by which atmospheric CO2 is taken up by the ocean and transported to the ocean interior. Without the BCP, the pre-industrial atmospheric CO2 concentration (~280 ppm) would have risen to ~460 ppm. At present, the particulate organic carbon (POC) flux from the surface layer of the ocean to the ocean interior has been estimated to be 4–13 Pg-C year−1. To evaluate the efficiency of the BCP, it is necessary to quantify the vertical attenuation of the POC flux with depth because the deeper that POC is transported, the longer the CO2 will be isolated from the atmosphere. Thus, an increase in the efficiency of the BCP has the potential to cause an increase of ocean carbon sequestration of atmospheric CO2 that would result in a negative feedback on global warming. Different researchers have investigated the vertical attenuation of the POC flux since the 1980s. In 1987, Martin et al. proposed the following power law function to describe the POC flux attenuation: (1) where z is water depth (m), and Fz and F100 are the POC fluxes at depths of z metres and 100 metres respectively. Although other functions, such as an exponential curve, have also been proposed and validated, this power law function, commonly known as the "Martin curve", has been used very frequently in discussions of the BCP. The exponent b in this equation has been used as an index of BCP efficiency: the larger the exponent b, the higher the vertical attenuation rate of the POC flux and the lower the BCP efficiency. Moreover, numerical simulations have shown that a change in the value of b would significantly change the atmospheric CO2 concentration. Subsequently, other researchers have derived alternative remineralization profiles from assumptions about particle degradability and sinking speed. However, the Martin curve has become ubiquitous as the model that assumes slower-sinking and/or labile organic matter is preferentially depleted near the surface causing increasing sinking speed and/or remineralization timescale with depth. The Martin curve can be expressed in a slightly more general way as: where fp(z) is the fraction of the flux of particulate organic matter from a productive layer near the surface sinking through the depth horizon z [m], Cp [mb] is a scaling coefficient, and b is a nondimensional exponent controlling how fp decreases with depth. The equation is often normalised to a reference depth zo but this parameter can be readily absorbed into Cp. Vertical attenuation rate The vertical attenuation rate of the POC flux is very dependent on the sinking velocity and decomposition rate of POC in the water column. 
Because POC is labile and has little negative buoyancy, it must be aggregated with relatively heavy materials called ballast to settle gravitationally in the ocean. Materials that may serve as ballast include biogenic opal (hereinafter "opal"), CaCO3, and aluminosilicates. In 1993, Ittekkot hypothesized that the drastic decrease from ~280 to ~200 ppm of atmospheric CO2 that occurred during the last glacial maximum was caused by an increase of the input of aeolian dust (aluminosilicate ballast) to the ocean, which strengthened the BCP. In 2002, Klaas and Archer , as well as Francois et al. who compiled and analyzed global sediment trap data, suggested that CaCO3, which has the largest density among possible ballast minerals, is globally the most important and effective facilitator of vertical POC transport, because the transfer efficiency (the ratio of the POC flux in the deep sea to that at the bottom of the surface mixed layer) is higher in subtropical and tropical areas where CaCO3 is a major component of marine snow. Reported sinking velocities of CaCO3-rich particles are high. Numerical simulations that take into account these findings have indicated that future ocean acidification will reduce the efficiency of the BCP by decreasing ocean calcification. In addition, the POC export ratio (the ratio of the POC flux from an upper layer (a fixed depth such as 100 metres, or the euphotic zone or mixed layer) to net primary productivity) in subtropical and tropical areas is low because high temperatures in the upper layer increase POC decomposition rates. The result might be a higher transfer efficiency and a strong positive correlation between POC and CaCO3 in these low-latitude areas: labile POC, which is fresher and easier for microbes to break down, decomposes in the upper layer, and relatively refractory POC is transported to the ocean interior in low-latitude areas. On the basis of observations that revealed a large increase of POC fluxes in high-latitude areas during diatom blooms and on the fact that diatoms are much bigger than coccolithophores, Honda and Watanabe proposed in 2010 that opal, rather than CaCO3, is crucial as ballast for effective POC vertical transport in subarctic regions. Weber et al. reported in 2016 a strong negative correlation between transfer efficiency and the picoplankton fraction of plankton as well as higher transfer efficiencies in high-latitude areas, where large phytoplankton such as diatoms predominate. They also calculated that the fraction of vertically transported CO2 that has been sequestered in the ocean interior for at least 100 years is higher in high-latitude (polar and subpolar) regions than in low-latitude regions. In contrast, Bach et al.conducted in 2019 a mesocosm experiment to study how the plankton community structure affected sinking velocities and reported that during more productive periods the sinking velocity of aggregated particles was not necessarily higher, because the aggregated particles produced then were very fluffy; rather, the settling velocity was higher when the phytoplankton were dominated by small cells. In 2012, Henson et al. revisited the global sediment trap data and reported the POC flux is negatively correlated with the opal export flux and uncorrelated with the CaCO3 export flux. 
Key factors affecting the rate of biological decomposition of sinking POC in the water column are water temperature and the dissolved oxygen (DO) concentration: the lower the water temperature and the DO concentration, the slower the biological respiration rate and, consequently, the POC flux decomposition rate. For example, in 2015 Marsay with other analysed POC flux data from neutrally buoyant sediment traps in the upper 500 m of the water column and found a significant positive correlation between the exponent b in equation (1) above and water temperature (i.e., the POC flux was attenuated more rapidly when the water was warmer). In addition, Bach et al. found POC decomposition rates are high (low) when diatoms and Synechococcus (harmful algae) are the dominant phytoplankton because of increased (decreased) zooplankton abundance and the consequent increase (decrease) in grazing pressure. Using radiochemical observations (234Th-based POC flux observations), Pavia et al. found in 2019 that the exponent b of the Martin curve was significantly smaller in the low-oxygen (hypoxic) eastern Pacific equatorial zone than in other areas; that is, vertical attenuation of the POC flux was smaller in the hypoxic area. They pointed out that a more hypoxic ocean in the future would lead to a lower attenuation of the POC flux and therefore increased BCP efficiency and could thereby be a negative feedback on global warming. McDonnell et al. reported in 2015 that vertical transport of POC is more effective in the Antarctic, where the sinking velocity is higher and the biological respiration rate is lower than in the subtropical Atlantic. Henson et al. also reported in 2019 a high export ratio during the early bloom period, when primary productivity is low, and a low export ratio during the late bloom period, when primary productivity is high. They attributed the low export ratio during the late bloom to grazing pressure by microzooplankton and bacteria. Despite these many investigations of the BCP, the factors governing the vertical attenuation of POC flux are still under debate. Observations in subarctic regions have shown that the transfer efficiency between depths of 1000 and 2000 m is relatively low and that between the bottom of the euphotic zone and a depth of 1000 m it is relatively high. Marsay et al. therefore proposed in 2015 that the Martin curve does not appropriately express the vertical attenuation of POC flux in all regions and that a different equation should instead be developed for each region. Gloege et al. discussed in 2017 parameterization of the vertical attenuation of POC flux, and reported that vertical attenuation of the POC flux in the twilight zone (from the base of the euphotic zone to 1000 m) can be parameterised well not only by a power law model (Martin curve) but also by an exponential model and a ballast model. However, the exponential model tends to underestimate the POC flux in the midnight zone (depths greater than 1000 metres). Cael and Bisson reported in 2018 that the exponential model (power law model) tends to underestimate the POC flux in the upper layer, and overestimate it in the deep layer. However, the abilities of both models to describe POC fluxes were comparable statistically when they were applied to the POC flux dataset from the eastern Pacific that was used to propose the "Martin curve". In a long-term study in the northeastern Pacific, Smith et al. 
observed in 2018 a sudden increase of the POC flux accompanied by an unusually high transfer efficiency; they suggested that, because the Martin curve cannot express such a sudden increase, it may sometimes underestimate the strength of the BCP. In addition, contrary to previous findings, some studies have reported a significantly higher transfer efficiency, especially to the deep sea, in subtropical regions than in subarctic regions. This pattern may be attributable to small temperature and DO concentration differences in the deep sea between high-latitude and low-latitude regions, as well as to a higher sinking velocity in subtropical regions, where CaCO3 is a major component of deep-sea marine snow. Moreover, it is also possible that POC is more refractory in low-latitude areas than in high-latitude areas. Uncertainty in the biological pump The ocean's biological pump regulates atmospheric carbon dioxide levels and climate by transferring organic carbon produced at the surface by phytoplankton to the ocean interior via marine snow, where the organic carbon is consumed and respired by marine microorganisms. This surface-to-deep transport is usually described by a power law relationship of sinking particle concentration with depth. Uncertainty in biological pump strength can be related to different variable values (parametric uncertainty) or the underlying equations (structural uncertainty) that describe organic matter export. In 2021, Lauderdale evaluated structural uncertainty using an ocean biogeochemistry model by systematically substituting six alternative remineralisation profiles fitted to a reference power-law curve. Structural uncertainty makes a substantial contribution, about one-third in atmospheric pCO2 terms, to the total uncertainty of the biological pump, highlighting the importance of improving the characterisation of the biological pump from observations and of its mechanistic inclusion in climate models. Carbon and nutrients are consumed by phytoplankton in the surface ocean during primary production, leading to a downward flux of organic matter. This "marine snow" is transformed, respired, and degraded by heterotrophic organisms in deeper waters, ultimately releasing those constituents back into dissolved inorganic form. Oceanic overturning and turbulent mixing return resource-rich deep waters to the sunlit surface layer, sustaining global ocean productivity. The biological pump maintains this vertical gradient in nutrients through uptake, vertical transport, and remineralisation of organic matter, storing carbon in the deep ocean that is isolated from the atmosphere on centennial and millennial timescales and lowering atmospheric CO2 levels by several hundred microatmospheres. The biological pump resists simple mechanistic characterisation due to the complex suite of biological, chemical, and physical processes involved, so the fate of exported organic carbon is typically described using a depth-dependent profile to evaluate the degradation of sinking particulate matter. See also Particulate inorganic carbon References Oceanography Carbon
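As a numerical illustration of the model comparison discussed above, the minimal sketch below evaluates a power-law (Martin) profile against an exponential profile constrained to match it at 1000 m. All parameter values are illustrative assumptions (only the exponent b = 0.858 comes from Martin et al. 1987), not values from the studies cited here:

```python
import math

# Illustrative parameters; only the exponent b is from Martin et al. (1987).
F0 = 100.0   # POC flux at the reference depth z0 (arbitrary units)
z0 = 100.0   # reference depth in metres (base of the euphotic zone)
b = 0.858    # Martin curve exponent

def martin(z):
    """Power-law (Martin curve) POC flux at depth z."""
    return F0 * (z / z0) ** -b

# Choose the e-folding length L so the exponential profile matches the
# power-law profile at 1000 m, mimicking a fit over the twilight zone.
L = (1000.0 - z0) / math.log(F0 / martin(1000.0))

def exponential(z):
    """Exponential POC flux profile with remineralisation length scale L."""
    return F0 * math.exp(-(z - z0) / L)

for z in (100, 500, 1000, 2000, 4000):
    print(f"{z:5d} m   power law {martin(z):7.2f}   exponential {exponential(z):7.2f}")
```

With these illustrative numbers the exponential profile falls well below the power-law profile by 2000 m, echoing the tendency noted above for exponential fits to underestimate midnight-zone fluxes.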
Martin curve
[ "Physics", "Environmental_science" ]
2,769
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
68,698,264
https://en.wikipedia.org/wiki/ARM-657%20Mamboret%C3%A1
The ARM-657 Mamboretá is a rocket pod of Argentine origin that carries six 57 mm Áspid rockets. It is used on aircraft of the Argentine Air Force, including the IA 58 Pucará, Embraer EMB 312 Tucano, IA 63 Pampa, A-4AR Fightinghawk, and Hughes 500 helicopter, and it was also expected to be used by the UH-1H Huey II and the CH-14 Aguilucho of the Argentine Army. It was designed by Fabricaciones Militares and CITEDEF; a production line at Fabricaciones Militares is planned, together with improvements to the pod's performance. The pod fires the Áspid rocket, whose warheads can be exchanged for different military purposes. Its main function is to carry out close air support and counterinsurgency missions. It has two variants: the A, with an intervalometer, and the B, without one. References Rockets and missiles Argentine inventions
ARM-657 Mamboretá
[ "Astronomy" ]
189
[ "Rocketry stubs", "Astronomy stubs" ]
68,698,578
https://en.wikipedia.org/wiki/Ability
Abilities are powers an agent has to perform various actions. They include common abilities, like walking, and rare abilities, like performing a double backflip. Abilities are intelligent powers: they are guided by the person's intention and executing them successfully results in an action, which is not true for all types of powers. They are closely related to but not identical with various other concepts, such as disposition, know-how, aptitude, talent, potential, and skill. Theories of ability aim to articulate the nature of abilities. Traditionally, the conditional analysis has been the most popular approach. According to it, having an ability means one would perform the action in question if one tried to do so. On this view, Michael Phelps has the ability to swim 200 meters in under 2 minutes because he would do so if he tried to. This approach has been criticized in various ways. Some counterexamples involve cases in which the agent is physically able to do something but unable to try, due to a strong aversion. In order to avoid these and other counterexamples, various alternative approaches have been suggested. Modal theories of ability, for example, focus on what is possible for the agent to do. Other suggestions include defining abilities in terms of dispositions and potentials. An important distinction among abilities is between general abilities and specific abilities. General abilities are abilities possessed by an agent independent of their situation while specific abilities concern what an agent can do in a specific situation. So while an expert piano player always has the general ability to play various piano pieces, they lack the corresponding specific ability in a situation where no piano is present. Another distinction concerns the question of whether successfully performing an action by accident counts as having the corresponding ability. In this sense, an amateur hacker may have the effective ability to hack their boss's email account, because they may be lucky and guess the password correctly, but not the corresponding transparent ability, since they are unable to do so reliably. The concept of abilities and how they are to be understood is relevant for various related fields. Free will, for example, is often understood as the ability to do otherwise. The debate between compatibilism and incompatibilism concerns the question of whether this ability can exist in a world governed by deterministic laws of nature. Autonomy is a closely related concept, which can be defined as the ability of individual or collective agents to govern themselves. Whether an agent has the ability to perform a certain action is important for whether they have a moral obligation to perform this action. If they possess it, they may be morally responsible for performing it or for failing to do so. As in the free will debate, it is also relevant whether they had the ability to do otherwise. A prominent theory of concepts and concept possession understands these terms in relation to abilities. According to it, it is required that the agent possess both the ability to discriminate between positive and negative cases and the ability to draw inferences to related concepts. Definition and semantic field Abilities are powers an agent has to perform various actions. Some abilities are very common among human agents, like the ability to walk or to speak. Other abilities are only possessed by a few, such as the ability to perform a double backflip or to prove Gödel's incompleteness theorem.
While all abilities are powers, the converse is not true, i.e. there are some powers that are not abilities. This is the case, for example, for powers that are not possessed by agents, like the power of salt to dissolve in water. But some powers possessed by agents do not constitute abilities either. For example, the power to understand French is not an ability in this sense since it does not involve an action, in contrast to the ability to speak French. This distinction depends on the difference between actions and non-actions. Actions are usually defined as events that an agent performs for a purpose and that are guided by the person's intention, in contrast to mere behavior, like involuntary reflexes. In this sense, abilities can be seen as intelligent powers. Various terms within the semantic field of the term "ability" are sometimes used as synonyms but have slightly different connotations. Dispositions, for example, are often equated with powers and differ from abilities in the sense that they are not necessarily linked to agents and actions. Abilities are closely related to know-how, as a form of practical knowledge on how to accomplish something. But it has been argued that these two terms may not be identical since know-how belongs more to the side of knowledge of how to do something and less to the power to actually do it. The terms "aptitude" and "talent" usually refer to outstanding inborn abilities. They are often used to express that a certain set of abilities can be acquired when properly used or trained. Abilities acquired through learning are frequently referred to as skills. The term "disability" is usually used for a long-term absence of a general human ability that significantly impairs what activities one can engage in and how one can interact with the world. In this sense, not any lack of an ability constitutes a disability. The more direct antonym of "ability" is "inability" instead. Theories of ability Various theories of the essential features of abilities have been proposed. The conditional analysis is the traditionally dominant approach. It defines abilities in terms of what one would do if one had the volition to do so. For modal theories of ability, by contrast, having an ability means that the agent has the possibility to execute the corresponding action. Other approaches include defining abilities in terms of dispositions and potentials. While all the concepts used in these different approaches are closely related, they have slightly different connotations, which often become relevant for avoiding various counterexamples. Conditional analysis The conditional analysis of ability is the traditionally dominant approach. It is often traced back to David Hume and defines abilities in terms of what one would do if one wanted to, tried to or had the volition to do so. It is articulated in the form of a conditional expression, for example, as "S has the ability to A iff S would A if S tried to A". On this view, Michael Phelps has the ability to swim 200 meters in under 2 minutes because he would do so if he tried to. The average person, on the other hand, lacks this ability because they would fail if they tried. Similar versions talk of having a volition instead of trying. This view can distinguish between the ability to do something and the possibility that one does something: only having the ability implies that the agent can make something happen according to their will. 
This definition of ability is closely related to Hume's definition of liberty as "a power of acting or not acting, according to the determinations of the will". But it is often argued that this is different from having a free will in the sense of the capacity of choosing between different courses of action. This approach has been criticized in various ways, often by citing alleged counterexamples. Some of these counterexamples focus on cases where an ability is actually absent even though it would be present according to the conditional analysis. This is the case, for example, if someone is physically able to perform a certain action but, maybe due to a strong aversion, cannot form the volition to perform this action. So according to the conditional analysis, a person with arachnophobia has the ability to touch a trapped spider because they would do so if they tried. But all things considered, they do not have this ability since their arachnophobia makes it impossible for them to try. Another example involves a woman attacked on a dark street who would have screamed if she had tried to but was too paralyzed by fear to try it. One way to avoid this objection is to distinguish between psychological and non-psychological requirements of abilities. The conditional analysis can then be used as a partial analysis applied only to the non-psychological requirements. Another form of criticism involves cases where the ability is present even though it would be absent according to the conditional analysis. This argument can be centered on the idea that having an ability does not ensure that each and every execution of it is successful. For example, even a good golfer may miss an easy putt on one occasion. That does not mean that they lack the ability to make this putt but this is what the conditional analysis suggests since they tried it and failed. One reply to this problem is to ascribe to the golfer the general ability, as discussed below, but deny them the specific ability in this particular instance. Modal approach Modal theories of ability focus not on what the agent would do under certain circumstances but on what is possible for the agent to do. This possibility is often understood in terms of possible worlds. On this view, an agent has the ability to perform a certain action if there is a complete and consistent way how the world could have been, in which the agent performs the corresponding action. This approach easily captures the idea that an agent can possess an ability without executing it. In this case, the agent does not perform the corresponding action in the actual world but there is a possible world where they perform it. The problem with the approach described so far is that when the term "possible" is understood in the widest sense, many actions are possible even though the agent actually lacks the ability to perform them. For example, not knowing the combination of the safe, the agent lacks the ability to open the safe. But dialing the right combination is possible, i.e. there is a possible world in which, through a lucky guess, the agent succeeds at opening the safe. Because of such cases, it is necessary to add further conditions to the analysis above. These conditions play the role of restricting which possible worlds are relevant for evaluating ability-claims. Closely related to this is the converse problem concerning lucky performances in the actual world. 
This problem concerns the fact that an agent may successfully perform an action without possessing the corresponding ability. So a beginner at golf may hit the ball in an uncontrolled manner and through sheer luck achieve a hole-in-one. But the modal approach seems to suggest that such a beginner still has the corresponding ability since what is actual is also possible. A series of arguments against this approach is due to Anthony Kenny, who holds that various inferences drawn in modal logic are invalid for ability ascriptions. These failures indicate that the modal approach fails to capture the logic of ability ascriptions. It has also been argued that, strictly speaking, the conditional analysis is not different from the modal approach since it is just one special case of it. This is true if conditional expressions themselves are understood in terms of possible worlds, as suggested, for example, by David Kellogg Lewis and Robert Stalnaker. In this case, many of the arguments directed against the modal approach may equally apply to the conditional analysis. Other approaches The dispositional approach defines abilities in terms of dispositions. According to one version, "S has the ability to A in circumstances C iff she has the disposition to A when, in circumstances C, she tries to A". This view is closely related to the conditional analysis but differs from it because the manifestation of dispositions can be prevented through the presence of so-called masks and finks. In these cases, the disposition is still present even though the corresponding conditional is false. Another approach sees abilities as a form of potential to do something. This is different from a disposition since a disposition concerns the relation between a stimulus and a manifestation that follows when the stimulus is present. A potential, on the other hand, is characterized only by its manifestation. In the case of abilities, the manifestation concerns an action. Types Whether it is correct to ascribe a certain ability to an agent often depends on which type of ability is meant. General abilities concern what agents can do independent of their current situation, in contrast to specific abilities. To possess an effective ability, it is sufficient if the agent can succeed through a lucky accident, which is not the case for transparent abilities. General and specific An important distinction among abilities is between general and specific abilities, sometimes also referred to as global and local abilities. General abilities concern what agents can do generally, i.e. independent of the situation they find themselves in. But abilities often depend for their execution on various conditions that have to be fulfilled in the given circumstances. In this sense, the term "specific ability" is used to describe whether an agent has an ability in a specific situation. So while an expert piano player always has the general ability to play various piano pieces, they lack the corresponding specific ability if they are chained to a wall, if no piano is present or if they are heavily drugged. In such cases, some of the necessary conditions for using the ability are not met. While this example illustrates a case of a general ability without a specific ability, the converse is also possible. Even though most people lack the general ability to jump 2 meters high, they may possess the specific ability to do so when they find themselves on a trampoline. 
The reason that they lack this general ability is that they would fail to execute it in most circumstances. To have the general ability as well, it would be necessary to succeed in a suitable proportion of the relevant cases, as a high jump athlete would in this example. It seems that the two terms are interdefinable but there is disagreement as to which one is the more basic term. So a specific ability may be defined as a general ability together with an opportunity. Having a general ability, on the other hand, can be seen as having a specific ability in various relevant situations. A similar distinction can be drawn not just for the term "ability" but also for the wider term "disposition". The distinction between general and specific abilities is not always drawn explicitly in the academic literature. While discussions often focus more on the general sense, sometimes the specific sense is intended. This distinction is relevant for various philosophical issues, specifically for the ability to do otherwise in the free will debate. If this ability is understood as a general ability, it seems to be compatible with determinism. But this seems not to be the case if a specific ability is meant. Effective and transparent Another distinction sometimes found in the literature concerns the question of whether successfully performing an action by accident counts as having the corresponding ability. For example, a student in the first grade is able, in a weaker sense, to recite the first 10 digits of Pi insofar as they are able to utter any permutation of the numerals from 0 to 9. But they are not able to do so in a stronger sense since they have not memorized the exact order. The weaker sense is sometimes termed effective abilities, in contrast to transparent abilities corresponding to the stronger sense. Usually, ability ascriptions have the stronger sense in mind, but this is not always the case. For example, the sentence "Usain Bolt can run 100 meters in 9.58 seconds" is usually not taken to mean that Bolt can, at will, finish in exactly 9.58 seconds, no more and no less. Instead, he can do something that amounts to this in a weaker sense. Relation to other concepts The concept of abilities is relevant for various other concepts and debates. Disagreements in these fields often depend on how abilities are to be understood. In the free will debate, for example, a central question is whether free will, when understood as the ability to do otherwise, can exist in a world governed by deterministic laws of nature. Free will is closely related to autonomy, which concerns the agent's ability to govern themselves. Another issue concerns whether someone has the moral obligation to perform a certain action and is responsible for succeeding or failing to do so. This issue depends, among other things, on whether the agent has the ability to perform the action in question and on whether they could have done otherwise. The ability-theory of concepts and concept possession defines them in terms of two abilities: the ability to discriminate between positive and negative cases and the ability to draw inferences to related concepts. Free will The topic of abilities plays an important role in the free will debate. The free will debate often centers around the question of whether the existence of free will is compatible with determinism, so-called compatibilism, or not, so-called incompatibilism.
Free will is frequently defined as the ability to do otherwise while determinism can be defined as the view that the past together with the laws of nature determine everything happening in the present and the future. The conflict arises since, if everything is already fixed by the past, there seems to be no sense in which anyone could act differently than they do, i.e. that there is no place for free will. Such a result might have serious consequences since, according to some theories, people would not be morally responsible for what they do in such a case. Having an explicit theory of what constitutes an ability is central for deciding whether determinism and free will are compatible. Different theories of ability may lead to different answers to this question. It has been argued that, according to a dispositionalist theory of ability, compatibilism is true since determinism does not exclude unmanifested dispositions. Another argument for compatibilism is due to Susan Wolf, who argues that having the type of ability relevant for moral responsibility is compatible with physical determinism since the ability to perform an action does not imply that this action is physically possible. Peter van Inwagen and others have presented arguments for incompatibilism based on the fact that the laws of nature impose limits on our abilities. These limits are so strict in the case of determinism that the only abilities possessed by anyone are the ones that are actually executed, i.e. there are no abilities to do otherwise than one actually does. Autonomy Autonomy is usually defined as the ability to govern oneself. It can be ascribed both to individual agents, like human persons, and to collective agents, like nations. Autonomy is absent when there is no intelligent force governing the entity's behavior at all, as in the case of a simple rock, or when this force does not belong to the governed entity, as when one nation has been invaded by another and now lacks the ability to govern itself. Autonomy is often understood in combination with a rational component, e.g. as the agent's ability to appreciate what reasons they have and to follow the strongest reason. Robert Audi, for example, characterizes autonomy as the self-governing power to bring reasons to bear in directing one's conduct and influencing one's propositional attitudes. Autonomy may also encompass the ability to question one's beliefs and desires and to change them if necessary. Some authors include the condition that decisions involved in self-governing are not determined by forces outside oneself in any way, i.e. that they are a pure expression of one's own will that is not controlled by someone else. In the Kantian tradition, autonomy is often equated with self-legislation, which may be interpreted as laying down laws or principles that are to be followed. This involves the idea that one's ability of self-governance is not just exercised on a case-by-case basis but that one takes up long-term commitments to more general principles governing many different situations. Obligation and responsibility The issue of abilities is closely related to the concepts of responsibility and obligation. On the side of obligation, the principle that "ought implies can" is often cited in the ethical literature. Its original formulation is attributed to Immanuel Kant. It states that an agent is only morally obligated to perform a certain action if they are able to perform this action. 
As a consequence of this principle, one is not justified in blaming an agent for something that was out of their control. According to this principle, for example, a person sitting on the shore has no moral obligation to jump into the water to save a child drowning nearby, and should not be blamed for failing to do so, if they are unable to do so due to paraplegia. The problem of moral responsibility is closely related to obligation. One difference is that "obligation" tends to be understood more in a forward-looking sense in contrast to backward-looking responsibility. But these are not the only connotations of these terms. A common view concerning moral responsibility is that the ability to control one's behavior is necessary if one is to be responsible for it. This is often connected to the thesis that alternative courses of action were available to the agent, i.e. that the agent had the ability to do otherwise. But some authors, often from the incompatibilist tradition, contend that what matters for responsibility is to act as one chooses, even if no ability to do otherwise was present. One difficulty for these principles is that our ability to do something at a certain time often depends on having done something else earlier. So a person is usually able to attend a meeting 5 minutes from now if they are currently only a few meters away from the planned location but not if they are hundreds of kilometers away. This seems to lead to the counter-intuitive consequence that people who failed to take their flight due to negligence are not morally responsible for their failure because they currently lack the corresponding ability. One way to respond to this type of example is to allow that the person is not to be blamed for their behavior 5 minutes before the meeting but hold instead that they are to be blamed for their earlier behavior that caused them to miss the flight. Concepts and concept possession Concepts are the basic constituents of thoughts, beliefs and propositions. As such, they play a central role for most forms of cognition. A person can only entertain a proposition if they possess the concepts involved in this proposition. For example, the proposition "wombats are animals" involves the concepts "wombat" and "animal". Someone who does not possess the concept "wombat" may still be able to read the sentence but cannot entertain the corresponding proposition. There are various theories concerning how concepts and concept possession are to be understood. One prominent suggestion sees concepts as cognitive abilities of agents. Proponents of this view often identify two central aspects that characterize concept possession: the ability to discriminate between positive and negative cases and the ability to draw inferences from this concept to related concepts. So, on the one hand, a person possessing the concept "wombat" should be able to distinguish wombats from non-wombats (like trees, DVD-players or cats). On the other hand, this person should be able to point out what follows from the fact that something is a wombat, e.g. that it is an animal, that it has short legs or that it has a slow metabolism. It is usually held that these abilities have to be possessed to a significant degree but that perfection is not necessary. So even some people who are not aware of wombats' slow metabolism may count as possessing the concept "wombat".
Opponents of the ability-theory of concepts have argued that the abilities to discriminate and to infer are circular since they already presuppose concept possession instead of explaining it. They tend to defend alternative accounts of concepts, for example, as mental representations or as abstract objects. References Accountability Causality Concepts in ethics Intelligence Action (philosophy) Power (social and political) concepts
Ability
[ "Physics" ]
4,785
[]
68,699,646
https://en.wikipedia.org/wiki/Qualcomm%20EDL%20mode
The Qualcomm Emergency Download mode, commonly known as Qualcomm EDL mode and officially known as Qualcomm HS-USB QD-Loader 9008, is a feature implemented in the boot ROM of a Qualcomm system on a chip that can be used to recover bricked smartphones. On Google's Pixel 3, the feature was accidentally shown to users after the phone was bricked. Device support For a device to support EDL, it must use Qualcomm hardware; the Snapdragon family is very widely used. Access ADB The Android Debug Bridge can be used to enter EDL mode with the command adb reboot edl. Windows The Qualcomm Product Support Tool (QPST) is normally used internally by service center technicians for low-level firmware flashing, to revive Android devices from a hard brick or to fix persistent software issues. To flash the firmware, the tool communicates with supported devices via EDL. QPST has not been officially released by Qualcomm. Linux Qualcomm Download (QDL) is a tool that communicates with Qualcomm system-on-a-chip boot ROMs to install or execute code. Its source code is maintained by Bjorn Andersson (andersson). Test points Motherboards with Qualcomm chipsets include test points, whose location varies between phone models. Generally, a test point is a pair of contacts, which may be some distance apart. EDL can be accessed by opening the back of the phone, locating the test points for that model, and shorting them with a pair of metal tweezers while booting the phone; further software tools are then needed to act on the device in EDL mode. EDL Deep Flash Cable Devices with EDL support can also be booted into EDL mode via an EDL Deep Flash Cable: a USB cable with a button that acts as a switch, shorting the D+ line to GND. Holding the button while connecting the phone boots it into EDL mode, so in most devices and cases the test points are not needed. The cable also works on hard-bricked devices. References Qualcomm Booting
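A minimal sketch of triggering and then detecting EDL mode from a host computer, assuming the adb command described above and the third-party pyusb Python package (the package choice is an assumption, not from the article; the 05c6:9008 USB ID corresponds to the "Qualcomm HS-USB QD-Loader 9008" name given above):

```python
import subprocess
import usb.core  # third-party "pyusb" package; its use here is an assumption

# Ask a connected Android device to reboot into EDL over ADB, as described
# above. Requires adb on the PATH and USB debugging enabled on the device.
subprocess.run(["adb", "reboot", "edl"], check=True)

# In EDL mode the SoC enumerates over USB as "Qualcomm HS-USB QD-Loader 9008",
# i.e. USB vendor:product ID 05c6:9008. In practice one would poll for a few
# seconds while the device re-enumerates.
device = usb.core.find(idVendor=0x05C6, idProduct=0x9008)
print("EDL device detected" if device is not None else "no EDL device found")
```

Further actions in EDL mode (flashing, memory dumps) would then go through a tool such as QDL or QPST, as described above.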
Qualcomm EDL mode
[ "Technology" ]
510
[ "Mobile technology stubs" ]
68,700,040
https://en.wikipedia.org/wiki/Susan%20Humphris
Susan E. Humphris is a geologist known for her research on processes at mid-ocean ridges. She is an elected fellow of the American Geophysical Union. Education and career Humphris grew up in the United Kingdom, where she learned to sail and enjoyed hikes that incited her interest in the natural world. As a child, she was not fond of history classes, but enjoyed the other subjects. Humphris has an undergraduate degree from Lancaster University (1972), and earned her Ph.D. in 1976 from the Massachusetts Institute of Technology and Woods Hole Oceanographic Institution. Following her Ph.D., she spent time as a postdoc at Imperial College in London and a year at Lamont–Doherty Earth Observatory. For more than a decade, Humphris was a staff scientist at the Sea Education Association before she joined the staff at Woods Hole Oceanographic Institution in 1992. In 2013, Humphris was elected a fellow of the American Geophysical Union, which cited her "For sustained and exemplary contributions to our understanding of volcanic and hydrothermal processes at mid-oceanic ridges". Research Humphris' research started with investigations into high temperature alterations of oceanic basalts. She has examined the geochemistry of hydrothermal systems at multiple locations, including the Walvis Ridge in the South Atlantic, the mid-ocean ridges near Tristan da Cunha, the TAG vent field, and Lucky Strike. Humphris worked with the team that found evidence of life in ancient rocks in samples drilled from the Earth's mantle. She has participated in multiple dives to the seafloor in submersibles, including Nadir and Alvin, and was the lead scientist for the 2012 overhaul of the DSV Alvin. In 2018, she summarized scientists' progress in understanding the controls on hydrothermal vent fluid composition in an article published in Annual Review of Marine Science. Selected publications Awards and honors Nap J Buonaparte Service Award, Massachusetts Marine Educators (2003) Fellow, American Geophysical Union (2013) References External links Article on Susan Humphris, December 17, 2020 Fellows of the American Geophysical Union Woods Hole Oceanographic Institution Alumni of Lancaster University Massachusetts Institute of Technology alumni Living people American women geologists British geochemists British oceanographers Year of birth missing (living people)
Susan Humphris
[ "Chemistry" ]
464
[ "Geochemists", "British geochemists" ]
68,700,195
https://en.wikipedia.org/wiki/Robert%20Naeye
Robert Naeye is an American science journalist and former magazine editor. Early life He was born in Burlington, Vermont and raised in Hershey, Pennsylvania. He currently lives just outside Hershey. His ancestry can be traced to the Flanders region of Belgium and other places in Northern Europe. He is unrelated to the Belgian cyclist of the same name. Education He completed an undergraduate degree at Oberlin College and a master's degree in science journalism at Boston University. Career Publications and NASA He has worked as a Researcher/Reporter at Discover magazine, Senior Editor at Astronomy magazine, Editor in Chief of Mercury magazine, Senior Editor and later Editor in Chief of Sky & Telescope magazine, and Senior Science Writer for the Astrophysics Science Division at NASA Goddard Space Flight Center in Greenbelt, Maryland. He has also written numerous freelance articles for a variety of print and online publications. Bibliography He has contributed to four books and authored two other books: Through the Eyes of Hubble: The Birth, Life, and Violent Death of Stars Signals from Space: The Chandra X-ray Observatory Awards and honours In 2002, the Astronomical Association of Northern California honored him with its Professional Astronomer of the Year Award. In 2002, the American Astronomical Society's High-Energy Astrophysics Science Division honored him with the David N. Schramm Award for Science Journalism. In 2019, the Pennsylvania NewsMedia Association honored him with the Keystone Press Award for Investigative Journalism, Division VI (First Place). References External links Profile Official Biography Official Website Living people 20th-century American physicists 21st-century American physicists Gravitational-wave astronomy American particle physicists American relativity theorists Scientists from California Year of birth missing (living people)
Robert Naeye
[ "Physics", "Astronomy" ]
337
[ "Astronomical sub-disciplines", "Gravitational-wave astronomy", "Astrophysics" ]
68,700,713
https://en.wikipedia.org/wiki/LigD
LigD is a multifunctional ligase/polymerase/nuclease (3'-phosphoesterase) found in bacterial non-homologous end joining (NHEJ) DNA repair systems. It is much more error-prone than the more complex eukaryotic NHEJ system, which uses multiple enzymes to fill its role. The polymerase domain preferentially uses rNTPs (RNA nucleotides), which is possibly advantageous in dormant cells. The actual architecture of LigD is variable. The LigD homolog in Bacillus subtilis does not have the nuclease domain. LigD with its ligase domain artificially removed can still perform its function (with a loss of fidelity) when a separate protein, LigC, acts as the ligase. The LigD homolog in the archaeon Methanocella paludicola is split into three single-domain proteins sharing an operon. References DNA repair
LigD
[ "Biology" ]
201
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
68,701,106
https://en.wikipedia.org/wiki/SuperSU
SuperSU is a discontinued proprietary Android application that can keep track of the root permissions of apps after the Android device has been rooted. SuperSU is generally installed through a custom recovery such as TWRP. It includes an option to undo the rooting, but it cannot always reliably hide the fact that the device is rooted. The project includes a wrapper library written in Java called libsuperuser, which provides different ways of calling the su binary. History From 2012, the SuperSU app was maintained by its original author, Chainfire. In 2014, support for Android 5.0 was added. In September 2015, SuperSU was acquired by a Chinese company called Coding Code Mobile Technology LLC (CCMT), raising privacy concerns, although Chainfire promised to closely audit the changes that CCMT made. In 2018, the application was removed from the Google Play Store, and Chainfire announced his departure from SuperSU development, although others continued to maintain it. By 2018, many users had already switched to Magisk. References External links Android (operating system) Rooting (Android)
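The following is a minimal sketch of the underlying mechanism that wrapper libraries such as libsuperuser expose: spawning the su binary to run a command as root. libsuperuser itself is a Java library; this Python version only illustrates the general pattern and is not its actual API:

```python
import subprocess

def run_as_root(command: str) -> str:
    """Run a shell command as root through the su binary on a rooted device."""
    # "su -c <command>" is the conventional command-line interface exposed by
    # su binaries such as the one SuperSU installs.
    result = subprocess.run(["su", "-c", command],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Usage example: on a rooted device this should report uid=0(root).
print(run_as_root("id"))
```

On a device with SuperSU installed, the first such call from an app would trigger SuperSU's grant/deny prompt, which is how the app-level permission tracking described above works.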
SuperSU
[ "Technology" ]
228
[ "Mobile software stubs", "Mobile technology stubs" ]
68,702,013
https://en.wikipedia.org/wiki/Sandra%20Hirche
Sandra Hirche (born 1974) is a German control theorist and engineer. She is Liesel Beckmann Distinguished Professor of electrical and computer engineering at the Technical University of Munich, where she holds the chair of information-oriented control. Her research focuses on human–robot interaction, haptic technology, telepresence, and the control engineering and systems theory needed to make those technologies work. Education and career Hirche was born in 1974 in Freiberg. She became a student of aerospace engineering at Technische Universität Berlin, earning a diploma in 2002. She completed her doctorate (Dr.Ing.) at the Technical University of Munich in 2005. After postdoctoral research at the Tokyo Institute of Technology and University of Tokyo, she joined the Technical University of Munich as an associate professor in 2008. She was named Liesel Beckmann Distinguished Professor and given the chair of information-oriented control in 2013. Recognition Hirche was named an IEEE Fellow in 2020 "for contributions to human-machine interaction and networked control". References External links Home page Living people German electrical engineers German women engineers Control theorists Technische Universität Berlin alumni Academic staff of the Technical University of Munich Fellows of the IEEE 1974 births 21st-century German engineers 21st-century German women engineers
Sandra Hirche
[ "Engineering" ]
261
[ "Control engineering", "Control theorists" ]
68,703,246
https://en.wikipedia.org/wiki/Independence%20Theory%20in%20Combinatorics
Independence Theory in Combinatorics: An Introductory Account with Applications to Graphs and Transversals is an undergraduate-level mathematics textbook on the theory of matroids. It was written by Victor Bryant and Hazel Perfect, and published in 1980 by Chapman & Hall. Topics A major theme of Independence Theory in Combinatorics is the unifying nature of abstraction, and in particular the way that matroid theory can unify the concept of independence coming from different areas of mathematics. It has five chapters, the first of which provides basic definitions in graph theory, combinatorics, and linear algebra, and the second of which defines and introduces matroids, called in this book "independence spaces". As the name would suggest, these are defined primarily through their independent sets, but equivalences with definitions using circuits, matroid rank, and submodular set functions are also presented, as are sums, minors, truncations, and duals of matroids. Chapter three concerns graphic matroids, the matroids of spanning trees in graphs, and the greedy algorithm for minimum spanning trees. Chapter four covers transversal matroids, which can be described in terms of matchings of bipartite graphs, and includes additional material on matching theory and related topics including Hall's marriage theorem, Menger's theorem (an equivalence between minimum cuts and maximum sets of disjoint paths in graphs), Latin squares, and gammoids. The final chapter concerns matroid representations using linear independence in vector spaces; it is labeled as an appendix and presented with fewer proofs. Many exercises are included, of varied difficulty, with hints and solutions. Audience and reception The level of the text is appropriate for courses for advanced undergraduates or master's students, with only basic linear algebra as a prerequisite, and covers its material at a more accessible and general level than other texts on matroid theory. Although disagreeing with the book's choice to omit the related topic of geometric lattices, reviewer Dominic Welsh calls it "an ideal text for an undergraduate course on combinatorial theory". Michael J. Ganley similarly calls it "a very good introduction to quite a difficult subject". However, reviewer W. Dörfler complains that the book has inadequate coverage of practical applications, and is missing a proper bibliography. Another complaint, by Bernhard Korte, is that the book's title is misleading: "independence spaces" often refers more generally to abstract simplicial complexes, while the book concentrates much more specifically on matroids. Korte also echoes the other reviewers' complaints about the lack of coverage of applications in combinatorial optimization and of connections to lattice theory. References Mathematics textbooks 1980 non-fiction books Matroid theory
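As an illustrative sketch (not taken from the book) of the greedy algorithm that chapter three discusses, the following Python shows the graphic-matroid case, Kruskal's minimum spanning tree algorithm, where the matroid independence test (does adding this edge keep the edge set acyclic?) is performed with a union-find structure:

```python
def kruskal(n, edges):
    """Minimum spanning tree of a graph with vertices 0..n-1.

    edges: iterable of (weight, u, v). In matroid terms this is the greedy
    algorithm on the graphic matroid: scan elements in weight order and keep
    each one whose addition preserves independence (acyclicity).
    """
    parent = list(range(n))

    def find(x):                      # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # independence test: no cycle created
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Example: a square with one diagonal; the three cheapest acyclic edges win.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
```

The matroid framing explains why this greedy strategy is optimal: the greedy algorithm succeeds for every choice of weights exactly when the underlying independence system is a matroid, which is one of the unifying points of the theory the book presents.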
Independence Theory in Combinatorics
[ "Mathematics" ]
557
[ "Matroid theory", "Combinatorics" ]
68,703,788
https://en.wikipedia.org/wiki/Human%20Landing%20System
A Human Landing System (HLS) is a spacecraft in the U.S. National Aeronautics and Space Administration's (NASA) Artemis program that is expected to land humans on the Moon. These are being designed to convey astronauts from the Lunar Gateway space station in lunar orbit to the lunar surface, sustain them there, and then return them to the Gateway station. NASA intends to use Starship HLS for Artemis III, an enhanced Starship HLS for Artemis IV, and a Blue Origin HLS for Artemis V. Rather than leading the HLS development effort internally, NASA provided a reference design and asked commercial vendors to compete to design, develop, and deliver systems based on a NASA-produced set of requirements. Each selected vendor is required to deliver two landers: one for an uncrewed test lunar landing, and one to be used as the first Artemis crewed lander. NASA started the competition process in 2019, with Starship HLS selected as the winner in 2021. The original timeline called for an uncrewed test flight before a crewed flight in 2024 as part of the Artemis III mission, but the crewed flight has been delayed to at least 2025. In addition to the initial contract, NASA awarded two rounds of additional contracts, in May 2019 and September 2021, on aspects of the HLS to encourage alternative designs, separate from the initial HLS development effort. It announced in March 2022 that it was developing new sustainability rules and pursuing both a Starship HLS upgrade and a new competing alternative design that would comply with the rules. In May 2023, Blue Origin was selected as the second provider for lunar lander services. Reference design The Advanced Exploration Lander was a 2018 NASA concept for a three-stage lander, intended to serve as a design reference for the commercial HLS design proposals. After departing from the Lunar Gateway in its lunar near-rectilinear halo orbit (NRHO), a transfer module would take the lander and embarked crew to a low lunar orbit and then separate. The descent module would then land itself and the ascent module carrying the crew on the lunar surface. A crew of up to four could spend up to two weeks on the surface before using the ascent module to take them back to Gateway. Each of the three modules would have a mass of approximately 12 to 15 metric tons and would be delivered separately by commercial launchers for integration at Gateway. Both the ascent and transfer modules could be designed to be reusable, with the descent module intended to be left on the lunar surface. Preliminary HLS studies In December 2018 NASA announced that it was issuing a formal request for proposals as Appendix E of NextSTEP-2, inviting American companies to submit bids for the design and development of new reusable systems allowing astronauts to land on the lunar surface. On February 14, 2019, NASA hosted an Industry Forum at NASA HQ to provide an overview of the Human Landing System (HLS) Broad Agency Announcement. In April 2019 NASA announced a formal request for proposals closing on November 15, 2019, for Appendix H of NextSTEP-2, inviting American companies to submit bids for the design and development of the Ascent Element of the Human Landing System (HLS), including the cabin used during landings. This was extended to cover an option for an integrated lander—a single vehicle that performs transfer, descent, and ascent.
Design competition Five companies responded to NASA's request for proposals by the November 2019 deadline, and after evaluating the proposals, NASA selected three for further design work. In April 2020, NASA awarded separate contracts totaling US$967 million in design development funding to Blue Origin, Dynetics, and SpaceX to begin a 10-month-long design process. The companies/teams selected in the 2020 design awards were the "National Team" led by Blue Origin, with US$579 million in NASA design funding; Dynetics, including SNC and other unspecified companies, with US$253 million in NASA funding; and SpaceX, with a modified Starship spacecraft design called Starship HLS, with US$135 million in NASA design funding. Although the HLS initial design phase was planned to be a ten-month program ending in February 2021 with the selection of up to two contractors, NASA delayed the selection process and announcement by two months. The companies were bidding on a contract to provide design, development, build, test, and evaluation of an HLS, plus two lunar landings, one uncrewed and one crewed, for a fixed price. NASA evaluated the bids on three factors: technical merit, managerial ability, and price, in that order, and judged SpaceX's proposal the strongest. On April 16, 2021, NASA selected only a single lander—Starship HLS—to move on to a full development contract. NASA awarded a US$2.89 billion contract to SpaceX to develop the Starship HLS lander and to provide two operational lunar missions—one uncrewed demonstration mission, and one crewed lunar landing—as early as 2025. NASA had stated that they would have preferred to award two contracts, but that insufficient funds were appropriated by Congress to allow the awarding of a second contract. This had been stated as a possible outcome in the contract solicitation. Post-competition protests and litigation On April 30, 2021, both Blue Origin and Dynetics filed formal protests with the US Government Accountability Office claiming that NASA had improperly evaluated aspects of the proposals. On April 30, 2021, NASA suspended the Starship HLS contract and funding until such time as the GAO could issue a ruling on the protests. In May 2021, Sen. Cantwell, from Blue Origin's home state of Washington, introduced an amendment to the "Endless Frontier Act" that directed NASA to reopen the HLS competition and select a second lander proposal, and authorized spending of an additional US$10 billion. This funding would require a separate appropriations act. Sen. Sanders criticized the amendment as a "multibillion dollar Bezos bailout", as the money would likely go to Blue Origin, which was founded by Jeff Bezos. The act, including this amendment, was passed by the U.S. Senate on June 8, 2021. On July 30, 2021, the GAO rejected the protests and found that "NASA did not violate procurement law" in awarding the contract to SpaceX, which had bid a much lower cost and a more capable system. Nevertheless, CNBC reported on August 4 that "Jeff Bezos' space company remains on the offensive in criticizing NASA's decision to award Elon Musk's SpaceX with the sole contract to build a vehicle to land astronauts on the moon" and that the company had produced an infographic highlighting several Starship deficiencies compared to the Blue Origin proposal, while noting that the infographic avoided showing the Blue Origin bid price, which was roughly double the SpaceX bid price. Soon after the appeal was rejected, NASA made the contracted initial payment of US$300 million to SpaceX.
On August 13, 2021, Blue Origin filed a lawsuit in the US Court of Federal Claims challenging "NASA's unlawful and improper evaluation of proposals." Blue Origin asked the court for an injunction to halt further spending by NASA on the existing contract with SpaceX. According to space journalist Eric Berger, reaction to the lawsuit was mostly negative in the space community, at NASA, and among Blue Origin employees. The judge dismissed the suit on November 4, 2021, and NASA was allowed to resume working with SpaceX. Starship HLS The Starship Human Landing System (Starship HLS) was selected by NASA for long-duration crewed lunar landings as part of NASA's Artemis program. The Starship HLS is a modified configuration of SpaceX's Starship spacecraft, optimized to operate on and around the Moon. As a result, the heat shield and flight control surfaces — parts of the main Starship design needed for atmospheric re-entry — are not included in Starship HLS. The entire spacecraft will land on the Moon and will then launch from the Moon. If needed during the final "tens of meters" of the terminal lunar descent and landing, the variant will use high-thrust CH4/O2 RCS thrusters located mid-body; it will be powered by a solar array located on its nose below the docking port. Elon Musk stated that Starship HLS would be able to deliver "potentially up to 200 tons" to the lunar surface. Starship HLS would be launched to Earth orbit using the SpaceX Super Heavy booster, and would use a series of tanker spacecraft to refuel the Starship HLS vehicle in Earth orbit for lunar transit and lunar landing operations. Starship HLS would then act as its own transit vehicle to reach lunar orbit for rendezvous with Orion. In the mission concept, a NASA Orion spacecraft would carry a NASA crew to the lander, where they would depart and descend to the surface of the Moon. After lunar surface operations, Starship HLS would lift off from the lunar surface acting as a single-stage-to-orbit vehicle and return the crew to Orion. NASA highlighted two weaknesses with SpaceX's proposal. Starship's propulsion systems were described as "notably complex", and the report referred to prior delays under the Commercial Crew program and Falcon Heavy launch vehicle development as evidence of potential threats to their development schedule. Blue Origin selected as second provider In May 2023, Blue Origin was selected as a second provider for lunar lander services with a $3.4 billion contract. NASA stated that it decided to add another human landing system partner in order to "increase competition, reduce costs to taxpayers, support a regular cadence of lunar landings, further invest in the lunar economy." Unselected proposals Integrated Lander Vehicle The Integrated Lander Vehicle (ILV) or National Human Landing System (NHLS) was a lunar lander design concept proposed by the "National Team" led by Blue Origin, along with Lockheed Martin, Northrop Grumman, and Draper Laboratory as major partners. The main selling point of the lander was that all the components had been in development in one form or another for some time. The transfer stage was based on the Cygnus spacecraft, the Blue Moon lander was to be used as the descent stage, and the ascent stage was based on the Orion spacecraft. It was to be launched in three parts on either the New Glenn or Vulcan Centaur, but it could also be launched on a single SLS Block 1B.
In the April 2020 HLS source selection statement, NASA stated that the vehicle passed all requirements but faced risks with its power, propulsion, and communications systems, which posed a significant risk to the development timeline. Dynetics ALPACA HLS The Dynetics ALPACA (Autonomous Logistics Platform for All-Moon Cargo Access) Human Landing System design concept was proposed by Dynetics and Sierra Nevada Corporation with support from a number of subcontractors. The vehicle design consisted of a single-stage lander powered by methalox engines, although an earlier design used drop tanks. ALPACA was proposed to launch on a Vulcan Centaur or SLS Block 1B rocket, and to be refueled by up to three Vulcan Centaur tanker flights. Ultimately, NASA did not select the proposal, citing negative mass margins and an experimental thrust structure, which could pose a threat to the development schedule. Boeing HLS The Boeing Human Landing System proposal was submitted to NASA in early November 2019. The primary solution was a two-stage lander designed to launch on a single SLS Block 1B, with Intuitive Machines working with Boeing to provide engines and reusing technologies from Boeing's Starliner spacecraft. To cover the possibility that the SLS Block 1B would not be ready by 2024, Boeing proposed a solution in which the descent stage would be launched on an SLS Block 1 while the ascent stage would be launched by a commercial launcher and assembled with it in lunar orbit. The Boeing proposal was not selected for design funding by NASA in the April 2020 design funding announcements. Vivace HLS The Vivace Human Landing System was a lunar landing concept by the aerospace firm Vivace. Little is known about the vehicle other than its resemblance to NASA's Altair lunar lander from the Constellation program. Vivace's concept was not selected for full design funding. Alternative design studies In addition to the design and development RFP for Appendix H of NextSTEP-2, NASA announced 11 contracts worth US$45.5 million in total for Appendix E of NextSTEP-2 in May 2019. These were short-term studies and prototypes covering transfer vehicles, descent elements, and refueling elements. One of the requirements was that selected companies would contribute at least 20% of the total cost of the project "to reduce costs to taxpayers and encourage early private investments in the lunar economy". A second set of contracts totaling $146 million was awarded on September 14, 2021. These contracts were for studies of a second-generation HLS that is to be used for missions after Artemis III. As with the first set of contracts, NASA intends to award more than one HLS contract if there is sufficient funding. On March 23, 2022, NASA announced it intended to initiate a formal request for proposals for second-generation HLS designs, drafting new sustainability rules to support it, with a 2026–2027 delivery date for the design. NASA stated it would solicit designs from the broader aerospace industry out of a need for redundancy and competition. Under the current HLS contract, NASA also exercised an option calling for a second Starship HLS demonstration mission to the Moon, with the Starship design updated to meet the new sustainability rules. In addition, NASA announced a target date of April 2025 for Artemis III, likely using the first-generation Starship HLS design. Space.com journalist Mike Wall speculated that, based on statements from NASA Administrator Bill Nelson, NASA had gained enough congressional and presidential support to make the requests.
Follow-on programs In 2021, NASA began studies for the future Lunar Exploration Transportation Services (LETS), covering regular trips between the Gateway station, lunar orbits, and the lunar surface as part of sustainable HLS operations. Notes References Artemis program 2010s in the United States 2020s in the United States 2020s in spaceflight Human spaceflight programs Lunar modules NASA programs Public–private partnership projects in the United States
Human Landing System
[ "Engineering" ]
2,918
[ "Space programs", "Human spaceflight programs" ]
68,704,373
https://en.wikipedia.org/wiki/Primeval%20Forest%20National%20Park
Primeval Forest National Park is a small national park located on southwestern New Providence Island in the Bahamas. A patch of old-growth blackland coppice and karst with an area of 7.5 acres, it is considered a time capsule of the old evergreen tropical hardwood forests of the Bahamas. It was established in 2002. History From the 18th century into the 1970s, the logging industry led to the mass culling of hardwood forests that used to cover the islands and could be as tall as 50 ft. In the 1990s, then president of the Bahamas National Trust Pericles Maillis came across a remaining undisturbed patch of ancient forest, and led an initiative to protect the area. Nature The park can be viewed from wooden boardwalks, steps, and bridges. The main attraction of the park is arguably the limestone caverns and sinkholes. Flora includes pines, hemlocks, and mosses and fauna includes a number of bird species. References National parks of the Bahamas New Providence Old-growth forests
Primeval Forest National Park
[ "Biology" ]
204
[ "Old-growth forests", "Ecosystems" ]
68,704,501
https://en.wikipedia.org/wiki/GH%20Turbine%20GT-25000
The GT-25000 is an industrial and marine gas turbine produced by CSIC Longjiang GH Gas Turbine Corporation, Ltd, a subsidiary of China Shipbuilding Industry Company (CSIC). Development In 1993, China and Ukraine signed the UGT-25000 Gas Turbine Production License and Single Unit Sales Contract. Under the contract, Ukraine was to sell 10 units of the DA80 (export designation of the UGT25000) gas turbines to China as well as a transfer of related technologies and technical documentation. The funding from China in exchange for transfer of technology stopped the project from being cancelled by Ukraine. They were planned to be used on the PLA Navy's future warships such as the 052B and 052C destroyers. However, they had blade problems and the last two 052Cs, hulls 512 and 513 built at Jiangnan Shipyard sat pierside for more than two years without being accepted by the PLAN. In 1998, the localisation process of the gas turbine was started and three entities were involved: No. 703 research institute under the China Shipbuilding Industry Corporation (CSIC) as well as Xi'an Aero-Engine Corporation and Harbin Turbine Co. The project was overseen by No. 703 research institute and technical drawings procured by them were shared with Xi'an Aero-Engine Corporation. In 2004, the first locally produced model was completed and named GT-25000. It achieved 60% localisation and had equivalent performance to the DA80. By 2011, the localisation rate had reached 98.1%. After the completion of localisation, efforts were made to improve the reliability of the GT-25000 over the original DA80. QC-280/QD-280 Later on, Xi'an Aero-Engine Corporation, which had also participated in the localisation process of the GT-25000, wanted to take over the military market from CSIC's GT-25000 and unveiled the QC-280/QD-280 series, which has essentially identical performance with the GT-25000 but named differently for IP reasons. Eventually, the PLA Navy chose to stick with CSIC's GT-25000 for its warships. Design Rating power: 26.7 MW - 30 MW (estimated) Efficiency: 36.5% Fuel type: Gas Exhaust temp: 480 °C Exhaust Flow: 89 kg/s Output speed: 3270~5000 rpm Variants UGT-25000 / DA80: Ukrainian variant produced by Zorya-Mashproekt GT-25000: Localised model produced by CSIC CGT25-D: Variant of GT-25000 with 30MW power for industrial uses. Exported to Russia in 2021. GT-25000 S-S cycle: Upgraded to 33MW power, designed with assistance from Ukrainian engineers GT-25000IC: Under development, aim of 40MW power with intercooler process QC-280/QD-280: Variant produced by Xi'an Aero-Engine Corporation after original localisation process. Users Type 052C destroyer Type 052D destroyer Type 055 destroyer See also General Electric LM2500 Rolls-Royce WR-21 Rolls-Royce MT30 Rolls-Royce Marine Spey References Gas turbines
GH Turbine GT-25000
[ "Technology" ]
665
[ "Engines", "Gas turbines" ]
68,704,780
https://en.wikipedia.org/wiki/Barium%20bromate
Barium bromate is a chemical compound composed of the barium ion and the bromate ion, with the chemical formula of Ba(BrO3)2. Preparation Barium bromate can be prepared by reacting potassium bromate with barium chloride: References Barium compounds Bromates
Barium bromate
[ "Chemistry" ]
58
[ "Bromates", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs" ]
68,705,329
https://en.wikipedia.org/wiki/3-Chlorophenmetrazine
3-Chlorophenmetrazine (3-CPM; code name PAL-594) is a recreational designer drug with stimulant effects. It is a substituted phenylmorpholine derivative, closely related to better known drugs such as phenmetrazine and 3-fluorophenmetrazine (3-FPM; PAL-593). The drug has been shown to act as a norepinephrine–dopamine releasing agent (NDRA) with additional weak serotonin release. Its values for induction of monoamine release are 27nM for dopamine, 75nM for norepinephrine, and 301nM for serotonin in rat brain synaptosomes. Hence, it releases dopamine about 3-fold more potently than norepinephrine and about 11-fold more potently than serotonin. Similarly to cis-4-methylaminorex, the drug is notable in being one of the most selective dopamine releasing agents (DRAs) known, although it still has substantial capacity to release norepinephrine. See also 3-Bromomethylphenidate 3-Chloromethamphetamine 3-Chloromethcathinone 4-Methylphenmetrazine G-130 Methylenedioxyphenmetrazine Phendimetrazine PDM-35 Radafaxine References Beta-Hydroxyamphetamines Designer drugs Phenylmorpholines Serotonin-norepinephrine-dopamine releasing agents
3-Chlorophenmetrazine
[ "Chemistry" ]
336
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,705,358
https://en.wikipedia.org/wiki/Methylenedioxyphenmetrazine
3,4-Methylenedioxyphenmetrazine, also known as 3-MDPM, is a recreational designer drug with stimulant effects. It is a substituted phenylmorpholine derivative, closely related to better known drugs such as phenmetrazine and 3-fluorophenmetrazine. It has been identified as a synthetic impurity formed in certain routes of MDMA manufacture. See also 3-Chlorophenmetrazine MDMAR MDPV Methylone References Benzodioxoles Beta-Hydroxyamphetamines Designer drugs Phenylmorpholines
Methylenedioxyphenmetrazine
[ "Chemistry" ]
132
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,705,828
https://en.wikipedia.org/wiki/Catherine%20Jeandel
Catherine Jeandel is a French geochemical oceanographer known for her research on isotope geochemistry and trace elements in the ocean. Education and career Jeandel grew up in northern Brittany wanting to be an ocean scientist, despite a lack of interest in mathematics. She was a student at the École normale supérieure de Sèvres from 1977 to 1982. Jeandel earned a B.S. and her Ph.D. at the University of Paris VII. From 1982 until 1983, she was a research associate at the Institut de Physique du Globe de Paris. She joined the Centre national de la recherche scientifique (CNRS) in 1983. From 1988 until 1990, Jeandel was at the Lamont-Doherty Geological Observatory. She was promoted to research director at the CNRS in 2007. In 2018, Jeandel was elected a fellow of the American Geophysical Union who cited her "for fundamental research on the marine biogeochemical cycles of trace elements and for exploiting them as tracers in chemical and paleoceanography". Research Jeandel is known for her research on trace elements found in seawater and on marine particles, include investigations into vanadium, chromium, and neodymium. She has examined trace elements at multiple locations in the global ocean, include time-series sites such as KERFIX, in the Southern Ocean, and the EUMELI sites in the Atlantic Ocean. A portion of her research examines the role of particles from land that transport trace elements into marine systems Jeandel served on the Scientific Steering Committee of the GEOTRACES project, where she focused on the transport of trace elements into the global ocean. Selected publications Awards and honors Fellow, American Geophysical Union (2018) CNRS bronze medal (1992) Knight of the French Legion of Honor (2008) Officer of the French National Order of Merit (Ordre national du Mérite) (2015) Georges Milot Prize and Medal from the French Academy of Sciences (2018) Fellow, Geochemical Society and the European Association of Geochemistry (2018) References External links , interview with Jeandel Fellows of the American Geophysical Union French National Centre for Scientific Research scientists Living people University of Paris alumni Women geochemists French oceanographers 1957 births
Catherine Jeandel
[ "Chemistry" ]
461
[ "Geochemists", "Women geochemists" ]
64,350,197
https://en.wikipedia.org/wiki/Estradiol%2017%CE%B2-benzoate
Estradiol 17β-benzoate (E2-17B) is an estrogen and an estrogen ester—specifically, the C17β benzoate ester of estradiol—which was never marketed. It is the C17β positional isomer of the better-known and clinically used estradiol ester estradiol benzoate (estradiol 3-benzoate; Progynon-B). Estradiol 17β-benzoate was first described in the 1930s. See also List of estrogen esters § Estradiol esters References Abandoned drugs Benzoate esters Estradiol esters Secondary alcohols Synthetic estrogens
Estradiol 17β-benzoate
[ "Chemistry" ]
147
[ "Drug safety", "Abandoned drugs" ]
64,350,777
https://en.wikipedia.org/wiki/ISocket
iSocket is a smart device brand created by iSocket Systems in 2010. iSocket sends a text message to the user in case of a power outage or other events in a remote location, such as temperature changes, water or gas leaks, or break-ins. iSocket was created in 2010 by iSocket Systems CEO Denis Sokol. The company is based in Varkaus, Finland. Sokol claims that iSocket was the first smart plug for power outage alerts in the world. Considered a part of the Internet of things, iSocket was one of the two winners of the Thread Group Innovation Enabler Program for connected homes in the third quarter of 2015. iSocket uses a cellular radio and a SIM card and contains a small battery backup so that it can stay powered long enough to alert the user of a power interruption. The socket may include a temperature sensor to monitor temperature during the cold season to avoid frozen pipes. It also sends a message when the power is restored. iSocket can be controlled via SMS or a phone call. Motion, door, smoke, heat, and gas sensors can be connected to iSocket within Ceco Home, the company's home monitoring system. References External links iSocket World Internet of things Smart devices
ISocket
[ "Technology" ]
267
[ "Home automation", "Smart devices" ]
64,351,379
https://en.wikipedia.org/wiki/Wedge%20strategy%20%28diplomacy%29
Wedge strategies in diplomacy are used to prevent, divide, and weaken an adversary coalition. Wedge strategies can take the shape of reward-based or coercive-based. Alignment abnormalities can arise because of wedge strategies. Wedge strategies may be a subset or similar to Divide and rule strategies, however, there may be a slight optical difference. With the divide and rule strategy, there is a clear winner, whereas with the wedge strategy, attention is not focused on the winner but instead against the discredited coalition. US examples 1948: George Kennan argued that the United States should "wean a Chinese coalition government from the Soviets" 1952 CIA's national covert strategy objective "should be to drive a wedge between the Communist government of China and the Communist government of the USSR to the point where hostilities actually break out or are on the constant verge of breaking out...so that they are no longer a menace to the West and to their Asiatic neighbors." Great Britain examples 1930s: Great Britain's defensive attempts to accommodate Italy 1940–1941: Great Britain used a wedge strategy to keep Spain from entering World War II on the side of the Axis Soviet examples 1950: Moscow inciting Mao to actions guaranteed to sustain Sino-American friction Russian examples 2016: Interference by Russia in the UK Brexit referendum, driving wedges between the EU member states Contemporary Chinese examples Against the Australia-US alliance Against the EU Against the EU-US alliance Against the Japan-US alliance Against the Pakistan-US alliance Against the Philippines-US alliance Against the ROK-US alliance Against the ROK-Japan-US security trilateral Against the Taiwan-US alliance Against the Vietnam-US partnership References Geopolitical terminology Conflict (process) Political schisms Foreign intervention
Wedge strategy (diplomacy)
[ "Biology" ]
353
[ "Behavior", "Aggression", "Human behavior", "Conflict (process)" ]
64,351,483
https://en.wikipedia.org/wiki/Corn%20salad%20necrosis%20virus
The corn salad necrosis virus is a virus infecting corn salad. It is related to tobacco necrosis virus and is highly similar to TNVA and Satellite tobacco necrosis virus. Even though corn salad necrosis virus and tobacco necrosis virus are similar, only corn salad necrosis virus can systemically infect corn salad. Infection remains low at only 2%, or 20 plants per square metre. Viral particles of the virus are spherical and 30 nanometre in diameter. References Tombusviridae Viral plant pathogens and diseases
Corn salad necrosis virus
[ "Biology" ]
109
[ "Virus stubs", "Viruses" ]
64,352,031
https://en.wikipedia.org/wiki/Estradiol%2017%CE%B2-acetate
Estradiol 17β-acetate is an estrogen and an estrogen ester—specifically, the C17β acetate ester of estradiol—which was never marketed. It is the C17β positional isomer of the better-known and clinically used estradiol ester estradiol acetate (estradiol 3-acetate; Femtrace). See also List of estrogen esters § Estradiol esters References Abandoned drugs Acetate esters Estradiol esters Secondary alcohols Synthetic estrogens
Estradiol 17β-acetate
[ "Chemistry" ]
117
[ "Drug safety", "Abandoned drugs" ]
64,352,086
https://en.wikipedia.org/wiki/Estradiol%20diacetate
Estradiol diacetate (EDA), or estradiol 3,17β-diacetate, is an estrogen and an estrogen ester—specifically, the C3 and C17β diacetate ester of estradiol—which was never marketed. It is related to the estradiol monoesters estradiol acetate (estradiol 3-acetate; Femtrace) and estradiol 17β-acetate. See also List of estrogen esters § Estradiol esters References Abandoned drugs Acetate esters Estradiol esters Secondary alcohols Synthetic estrogens
Estradiol diacetate
[ "Chemistry" ]
133
[ "Drug safety", "Abandoned drugs" ]
64,352,340
https://en.wikipedia.org/wiki/Body%20theory
In the sociology of the body, body theory is a theory that analyses the human body as an ordered or "lived-in" entity, subject to the cultural and conceptual forces of a society. It is also described as a dynamic field that involves various conceptualizations and re-significations of the body as well as its formation or transformation that affect how bodies are constructed, perceived, evaluated, and experienced. Body theory is considered one of the traditional theories of personal identity. Noted thinkers who developed their respective body theories include Michel Foucault, Norbert Elias, Roland Barthes, and Yuasa Yasuo. Origin and development The Western conceptualization of the body has been associated with the theorizing about the self. René Descartes, for instance, distinguished the mind and the body through his notion of mind/body dualism. The developmental trajectory of this theory followed the shifts from the manners that are related to bodily function during the Middle Ages to the modern period with its social forms and the complementary understandings of acceptable bodily behavior. When those theories are evaluated through the idea of bodily abstraction, historical and cultural variations emerge. Scholars identified fundamentally different conceptions based on the qualities they exhibit from the tribal, traditional, modern to postmodern periods. Later developments focus on the growing interest in the materiality of the body - that it is not merely taken as a place to anchor the head. In the East, body theory is said to have emerged out of the Buddhist intellectual and spiritual history. For example, in the Buddhist notion of "personal cultivation" the body is trained to achieve true knowledge along with one's mind. It also includes the Eastern concept of the authentic self, which - in Japan - pertains to the creative, productive "function" or "field" of life energy. Contemporary theorists such as Ichikawa Hiroshi, Yuasa Yasuo, and Masachi Osawa drew from these traditions and modulated it with current phenomenological concepts of the "lived body". There is also the influence of the Hindu belief, which holds that everything has God-nature. It denies the spirit within body theory as it advocates for the freedom of the spirit from the body. This tradition has spawned modern interpretations and reactions. Peter Bertocci, for instance, maintained that the body is not part of Cosmic Mind but is a society of sub-human selves. Modern theorists have used the Eastern view of the body to destabilize the Western body theory with its focus on a form of dualism. These include Friedrich Nietzsche, Emmanuel Levinas, and Roland Barthes. Freud and early sociology Sigmund Freud explored the concept of the body through his notion of the "bounded body" in the essay Beyond the Pleasure Principle. He noted that a completely closed body is deprived of the means of ongoing life while an absolutely open one without borders would not be a body at all because it would have no ongoing identity. Freud maintained that what is required is a bounded body that has a border or membrane that enables it to have a communion with an outside. German sociologist Norbert Elias, one of the earliest body theorists, posited a related theory, which holds that the body is malleable since it evolves in and is shaped by social configurations. It is also interdependent with other bodies in a variety of ways. 
This approach to body theory views the body as continually in a flux, undergoing changes, which are many and largely unforeseen. Elias also identified the processes that made it possible for the modern self to emerge within a civilized or controlled body. Contemporary philosophy Michel Foucault's theory of the body focuses on how it serves as a site of discourse and power as well as an object of discipline and control. He argued that the materiality of power operates on the bodies of individuals to create the kind of body that the society needs. Recent theories have given rise to labels such as the naturalistic and materialistic body. The former, which sociologist Chris Shilling advocated, focuses on the idea that there is a biological explanation and basis for human behavior. This is demonstrated in the suggestion that human behavior is explained by and encoded within the gene. Foucault's theory of biopower posits that the modern era is defined by the progression from the deductive state power of being able to take away the life or livelihood of citizens, practiced by absolute monarchies through taxation and capital punishment, to the productive state power practiced by liberal democracies through healthcare and welfare to establish a state monopoly on the acts of granting life and health. Queer and feminist theory Foucault's analysis of the body is frequently cited in queer and feminist theory, which hold the othering of queer or female bodies as a cornerstone of many different types of social disenfranchisement. Interpretations include views of the female body as socially, culturally, and legally defined in terms of its sexual availability to men. Gender scholar Judith Butler is well known for her theory of the social construction of gender, which extends social constructionism and performativity towards male and female bodies as aspects of sex and gender. Butler also argued that the classification, or "citing", of certain types of bodies as queer constitutes a unique form of performance, demonstrated in subcultures such as drag queens as a conflict between the masculine or feminine identities performed in public and the underlying, "unperformable" concepts they represent. Postmodernism A post-modern interpretation of the body theory emerged for the purpose of overturning the universal conceptions of the body. This view, which is called "new-body theory", emphasizes the relationship between the body and the self. It produced the theories that explain the importance of the body in the contemporary social life. These could include different orientations such as those focusing on gender, ethnicity or other socially-constructed differences. For example, a feminist approach looks at domination and subversion as a way of examining the conditions and experiences of embodiment in society. There are also theorists who cite the role that media communications, globalization, international trade, consumerism, education, and political incursions, among others play in theorizing the modern embodied being. Another post-modern interpretation involves the approach to reading the body in the context of race, class, gender, and sexual orientation. In this view, the material body is understood in terms of social construction where it formed part of conceptualization of the body-as-text metaphor. In the feminist and queer views, for example, the body may be understood through the markings in it that result from violence. 
There are scholars who note that the post-modern conceptualizations of the body theory tend to be distanced from individuals’ everyday embodied experiences and practices. This is attributed to the focus on the reading of the body as metaphor and the ambivalence towards the material body. This is said to have shifted the attention away from structured categories of difference. Healthism Another area in body theory is called healthism. It approaches the concept of the body, particularly health and disease within the context of the individual. This theoretical strand, which emerged from the subdiscipline in sociology called "sociology of health and illness", addresses the so-called objectification or the reduction of bodily experiences to signifiers of disease and illness. Instead of theorizing the body based on researching external approach or on speculative writing from the psychoanalitic version of the "internal", healthism focuses on how the body is experienced as a way of getting a better theoretical hold of the concept. Some interpretations of this ideology have also drawn from Foucault's works (e.g. Foucault's notion of biopower) to describe the body as a material site where discursive formations are fleshed out. See also Gaze theory Body image Bodily autonomy Reproductive rights References General Lennon, Kathleen, "Feminist Perspectives on the Body", Stanford Encyclopedia of Philosophy (Fall 2019 Edition), Edward N. Zalta (ed.). Sociological theories Human body
Body theory
[ "Physics" ]
1,637
[ "Human body", "Physical objects", "Matter" ]
64,353,431
https://en.wikipedia.org/wiki/Thulium%28II%29%20chloride
Thulium(II) chloride is an inorganic compound with the chemical formula TmCl2. Production Thulium(II) chloride can be produced by reducing thulium(III) chloride by thulium metal: 2 TmCl3 + Tm → 3 TmCl2 Chemical properties Thulium(II) chloride reacts with water violently, producing hydrogen gas and thulium(III) hydroxide. When thulium(II) chloride first touches water, a light red solution is formed, which fades quickly. References Lanthanide halides Thulium compounds Chlorides
Thulium(II) chloride
[ "Chemistry" ]
124
[ "Chlorides", "Inorganic compounds", "Salts" ]
64,354,020
https://en.wikipedia.org/wiki/Curio%20%27Trident%20Blue%27
Curio 'Trident Blue', known commonly as Senecio 'Trident Blue', Trident Blue Chalk and Kleinia 'Trident Blue', is a spear-shaped succulent plant that is a hybrid of Curio repens and Curio talinoides. Description Bred by Australian gardener from Melbourne, Attila Kapitany, the plant features powdery blue-grey leaves with a lance-shaped tip, making them akin in appearance to the Greek God Poseidon's trident (hence the name). It is a groundcover that grows up to 30 cm tall and spreads to 1 meter wide. It is suited as a mass planting and as a weed suppressant. See also Curio × peregrinus, a similar looking hybrid References Curio (plant) Succulent plants House plants Ornamental plant cultivars Hybrid plants Drought-tolerant plants
Curio 'Trident Blue'
[ "Biology" ]
176
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
64,354,104
https://en.wikipedia.org/wiki/David%20Sherrill
Charles David Sherrill is a professor of chemistry and computational science and engineering at Georgia Tech working in the areas of theoretical chemistry, computational quantum chemistry, and scientific computing. His research focuses on the development and application of theoretical methods for non-covalent interactions between molecules. He is the lead principal investigator of the Psi open-source quantum chemistry program. Life and education Born in Chattanooga, Tennessee (April 5, 1970), Sherrill received his S.B. in chemistry from MIT. He received his Ph.D. in 1996 from the University of Georgia, working with Professor Henry F. Schefer, III on highly correlated configuration interaction methods. He was an NSF Postdoctoral Fellow in the laboratory of Martin Head-Gordon at the University of California, Berkeley. Career In 1999, Sherrill joined the faculty of the school of chemistry and biochemistry at Georgia Tech. He joined the school of computational science and engineering as a joint faculty member in 2006. He became associate director of Georgia Tech's Institute for Data Engineering and Science (IDEaS) in 2017. He has been an associate editor of The Journal of Chemical Physics since 2009. Research Sherrill develops methods, algorithms, and software for quantum chemistry. He has introduced efficient density-fitting techniques into several quantum chemistry methods, speeding up computations. His research group obtains highly-accurate results for important prototype chemical systems, and uses these results to develop computational protocols that are faster yet still accurate. Sherrill focuses on intermolecular interactions, and has published definitive studies of the strength, geometric dependence, and substituent effects in prototype interactions including π-π, CH/π, S/π, and cation-π interactions. He has developed extensions of symmetry-adapted perturbation theory (SAPT) to analyze these interactions in terms of their fundamental physical forces (electrostatics, exchange/steric repulsion, induction/polarization, and London dispersion forces). A fragment-based partitioning of SAPT allows analyses of which non-bonded contacts are most important for binding, and has been used to understand substituent effects in protein-drug binding. Sherrill has published over 200 peer-reviewed articles on these topics, and presented over 130 invited lectures, including the 2011 Robert S. Mulliken Lecture at the University of Georgia, the keynote talk for the 2015 Workshop on Control of London Dispersion Interactions in Molecular Chemistry in Göttingen, and keynote talks at the 2015 and 2016 meetings of the Southeast Theoretical Chemistry Association. Sherrill's methods and algorithms are made publicly available to the quantum chemistry community through the open-source quantum chemistry program Psi, developed by his group and collaborators worldwide. Awards Sherrill is a Fellow of the American Physical Society, the American Chemical Society, and the American Association for the Advancement of Science. Education Sherrill is active in promoting education in chemistry, quantum chemistry, and data science. He has published an extensive set of notes and lectures on fundamentals of quantum chemistry. His educational efforts have been recognized by his being named the Outreach Volunteer of the Year by the Georgia Section of the American Chemical Society in 2017, and the Class of 1940 W. Howard Ector Outstanding Teacher at Georgia Tech in 2006. 
References External links David Sherrill: Google Scholar Theoretical chemists Computational chemists American chemists Fellows of the American Chemical Society 1970 births Living people Fellows of the American Physical Society Massachusetts Institute of Technology alumni Georgia Tech faculty
David Sherrill
[ "Chemistry" ]
702
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
64,354,401
https://en.wikipedia.org/wiki/Hype%20%28marketing%29
Hype in marketing is a strategy of using extreme publicity. Hype as a modern marketing strategy is closely associated with social media. Marketing through hype often uses artificial scarcity to induce demand. Consumers of hyped products often participate as a form of conspicuous consumption to signify characteristics about themselves. Hype allows brands to promote their image above the actual quality of the product. Streetwear brands have collaborated with luxury fashion to justify charging premium prices for their goods. As an example, fashion label Vetements used social media channels to promote a limited-edition hoodie which sold 500 units in hours, recording sales of €445,000. When hype marketing is used to drive demand for limited-edition goods, consumers sometimes attempt resell those good on secondary markets for a profit (comparable to ticket scalping). The resale market is a $24 billion industry. Method Luxury brands may release products as a collaborate with ready-made garment brands as a way to build hype. Collaborations have been used by some luxury brands to circumvent fast fashion brands copying their designs. NYU Professor Adam Alter says that for an established brand to create a scarcity frenzy, they need to release a limited number of different products, frequently. Hype is often built via Pop-up retail. Comme des Garçons was one of the first to use this strategy, leasing a short-term vacant shop solved the storage problems of releasing product for quick sale. In popular culture The term ‘hypebeast’ has been coined to define consumers vulnerable to hype marketing. The origins of the term come from the Hong Kong-based company Hypebeast. The behaviours of the hypebeast define hype marketing; the purchase of popular goods they can't afford to impress others. Hype also manifests itself in queues with brands often retailing hyped products through pop-up stores. Many luxury brands release hyped products via their online shop. This has led to the creation of companies that allow consumers to use bots to guarantee or improve their chances of purchasing a limited-edition product. See also Content house Content marketing Creator economy Gartner hype cycle Influencer marketing Social commerce Social media marketing References Marketing strategy Social media
Hype (marketing)
[ "Technology" ]
459
[ "Computing and society", "Social media" ]
64,354,939
https://en.wikipedia.org/wiki/Solving%20the%20Riddle%20of%20Phyllotaxis
Solving the Riddle of Phyllotaxis: Why the Fibonacci Numbers and the Golden Ratio Occur in Plants is a book on the mathematics of plant structure, and in particular on phyllotaxis, the arrangement of leaves on plant stems. It was written by Irving Adler, and published in 2012 by World Scientific. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Background Irving Adler (1913–2012) was known as a peace protester, schoolteacher, and children's science book author before, in 1961, earning a doctorate in abstract algebra. Even later in his life, Adler began working on phyllotaxis, the mathematical structure of leaves on plant stems. This book, which collects several of his papers on the subject previously published in journals and edited volumes, is the last of his 85 books to be published before his death. Topics Different plants arrange their leaves differently, for instance on alternating sides of the plant stem, or rotated from each other by other fractions of a full rotation between consecutive leaves. In these patterns, rotations by 1/2 of an angle, 1/3 of an angle, 3/8 of an angle, or 5/8 of an angle are common, and it does not appear to be coincidental that the numerators and denominators of these fractions are all Fibonacci numbers. Higher Fibonacci numbers often appear in the number of spiral arms in the spiraling patterns of sunflower seed heads, or the helical patterns of pineapple cells. The theme of Adler's work in this area, in the papers reproduced in this volume, was to find a mathematical model for plant development that would explain these patterns and the occurrence of the Fibonacci numbers and the golden ratio within them. The papers are arranged chronologically; they include four journal papers from the 1970s, another from the late 1990s, and a preface and book chapter also from the 1990s. Among them, the first is the longest, and reviewer Adhemar Bultheel calls it "the most fundamental"; it uses the idea of "contact pressure" to cause plant parts to maximize their distance from each other and maintain a consistent angle of divergence from each other, and makes connections with the mathematical theories of circle packing and space-filling curves. Subsequent papers refine this theory, make additional connections for instance to the theory of continued fractions, and provide a more general overview. Interspersed with the theoretical results in this area are historical asides discussing, among others, the work on phyllotaxis of Theophrastus (the first to study phyllotaxis), Leonardo da Vinci (the first to apply mathematics to phyllotaxis), Johannes Kepler (the first to recognize the importance of the Fibonacci numbers to phyllotaxis), and later naturalists and mathematicians. Audience and reception Reviewer Peter Ruane found the book gripping, writing that it can be read by a mathematically inclined reader with no background knowledge in phyllotaxis. He suggests, however, that it might be easier to read the papers in the reverse of their chronological order, as the broader overview papers were written later in this sequence. And Yuri V. Rogovchenko calls its publication "a thoughtful tribute to Dr. Adler’s multi-faceted career as a researcher, educator, political activist, and author". References Plant morphology Fibonacci numbers Mathematics books 2012 non-fiction books Mathematical Association of America
Solving the Riddle of Phyllotaxis
[ "Mathematics", "Biology" ]
727
[ "Plants", "Recurrence relations", "Plant morphology", "Fibonacci numbers", "Golden ratio", "Mathematical relations" ]
64,355,017
https://en.wikipedia.org/wiki/Nature%20Environment%20and%20Pollution%20Technology
Nature Environment and Pollution Technology is an open access, peer-reviewed scientific journal of environmental science. It is published quarterly by Technoscience Publications and was established in 2002. The journal is indexed in Scopus, ProQuest, Chemical Abstracts (CAS), EBSCO, References External links Official Website English-language journals Open access journals Academic journals established in 2002 Environmental science journals Quarterly journals
Nature Environment and Pollution Technology
[ "Environmental_science" ]
79
[ "Environmental science journals" ]
64,355,593
https://en.wikipedia.org/wiki/Sherman%20function
The Sherman function describes the dependence of electron-atom scattering events on the spin of the scattered electrons. It was first evaluated theoretically by the physicist Noah Sherman and it allows the measurement of polarization of an electron beam by Mott scattering experiments. A correct evaluation of the Sherman function associated to a particular experimental setup is of vital importance in experiments of spin polarized photoemission spectroscopy, which is an experimental technique which allows to obtain information about the magnetic behaviour of a sample. Background Polarization and spin-orbit coupling When an electron beam is polarized, an unbalance between spin-up, , and spin-down electrons, , exists. The unbalance can be evaluated through the polarization defined as . It is known that, when an electron collides against a nucleus, the scattering event is governed by Coulomb interaction. This is the leading term in the Hamiltonian, but a correction due to spin-orbit coupling can be taken into account and the effect on the Hamiltonian can be evaluated with the perturbation theory. Spin orbit interaction can be evaluated, in the rest reference frame of the electron, as the result of the interaction of the spin magnetic moment of the electron with the magnetic field that the electron sees, due to its orbital motion around the nucleus, whose expression in the non-relativistic limit is: In these expressions is the spin angular-momentum, is the Bohr magneton, is the g-factor, is the reduced Planck constant, is the electron mass, is the elementary charge, is the speed of light, is the potential energy of the electron and is the angular momentum. Due to spin orbit coupling, a new term will appear in the Hamiltonian, whose expression is . Due to this effect, electrons will be scattered with different probabilities at different angles. Since the spin-orbit coupling is enhanced when the involved nuclei possess a high atomic number Z, the target is usually made of heavy metals, such as mercury, gold and thorium. Asymmetry If we place two detectors at the same angle from the target, one on the right and one on the left, they will generally measure a different number of electrons and . Consequently it is possible to define the asymmetry , as . The Sherman function is a measure of the probability of a spin-up electron to be scattered, at a specific angle , to the right or to the left of the target, due to spin-orbit coupling. It can assume values ranging from -1 (spin-up electron is scattered with 100% probability to the left of the target) to +1 (spin-up electron is scattered with 100% probability to the right of the target). The value of the Sherman function depends on the energy of the incoming electron, evaluated via the parameter . When , spin-up electrons will be scattered with the same probability to the right and to the left of the target. Then it is possible to write Plugging these formulas inside the definition of asymmetry, it is possible to obtain a simple expression for the evaluation of the asymmetry at a specific angle , i.e.: . Theoretical calculations are available for different atomic targets and for a specific target, as a function of the angle. Application To measure the polarization of an electron beam, a Mott detector is required. In order to maximize the spin-orbit coupling, it is necessary that the electrons arrive near to the nuclei of the target. 
To achieve this condition, a system of electron optics is usually present, in order to accelerate the beam up to keV or to MeV energies. Since standard electron detectors count electrons being insensitive to their spin, after the scattering with the target any information about the original polarization of the beam is lost. Nevertheless, by measuring the difference in the counts of the two detectors, the asymmetry can be evaluated and, if the Sherman function is known from previous calibration, the polarization can be calculated by inverting the last formula. In order to characterize completely the in-plane polarization, setups are available, with four channeltrons, two devoted to the left-right measure and two devoted to the up-right measure. Example In the panel it is shown an example of the working principle of a Mott detector, supposing a value for . If an electron beam with a 3:1 ratio of spin-up over spin-down electrons collide with the target, it will be splitted with a ratio 5:3, according to previous equation, with an asymmetry of 25%. See also Spin–orbit interaction Mott scattering Photoemission spectroscopy References Electron beam Foundational quantum physics Scattering
Sherman function
[ "Physics", "Chemistry", "Materials_science" ]
949
[ "Electron", "Electron beam", "Foundational quantum physics", "Quantum mechanics", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
64,355,912
https://en.wikipedia.org/wiki/Annonoideae
Annonoideae is a subfamily of plants in the family Annonaceae, with genera distributed in tropical areas world-wide. The family and this subfamily are based on the type genus Annona. Tribes and genera The following genera, subdivided into seven tribes are accepted: Annoneae Auth.: Endlicher 1839 Annona L. (synonym Rollinia A. St.-Hil.) Anonidium Engl. & Diels Asimina Adans. (synonym Deeringothamnus Small) Diclinanona Diels Disepalum Hook. f. Goniothalamus (Blume) Hook.f. & Thomson (synonym Richella A.Gray) Neostenanthera Exell (synonym Boutiquea Le Thomas) Bocageeae Auth.: Endlicher 1839 Bocagea A.St.-Hil. Cardiopetalum Schltdl. Cymbopetalum Benth. Froesiodendron R.E.Fr. Hornschuchia Nees Mkilua Verdc. Porcelia Ruiz & Pav. Trigynaea Schltdl. Duguetieae Auth.: Chatrou & Saunders 2012 Duckeanthus R.E.Fr. Duguetia A.St.-Hil. Fusaea (Baill.) Saff. Letestudoxa Pellegr. Pseudartabotrys Pellegr. Guatterieae Auth.: Hooker & Thomson 1855 Guatteria Ruiz & Pav. Monodoreae Auth.: Baill. 1868 Asteranthe Engl. & Diels Dennettia Hexalobus A.DC. Isolona Engl. Lukea Mischogyne Exell Monocyclanthus Keay Monodora Dunal Uvariastrum Engl. Uvariodendron (Engl. & Diels) R.E.Fr. Uvariopsis Engl. Ophrypetaleae Auth.: Dagallier & Couvreur 2023 Ophrypetalum Diels Sanrafaelia Verdc. Uvarieae Auth.: Hooker & Thomson 1855 Afroguatteria Boutique Cleistochlamys Oliv. Dasymaschalon (Hook.f. & Thomson) Dalla Torre & Harms Desmos Lour. Dielsiothamnus R.E.Fr. Fissistigma Griff. Friesodielsia Steenis (syn. Schefferomitra Diels) Monanthotaxis Baill. Pyramidanthe Miq. (syn. Mitrella Miq.) Sphaerocoryne (Boerl.) Ridl. Toussaintia Boutique Uvaria L. (synonym Balonga Le Thomas and Melodorum Lour.) Xylopieae Auth.: Endlicher 1839 Artabotrys R.Br. Xylopia L. References External links Annonaceae Plant subfamilies
Annonoideae
[ "Biology" ]
630
[ "Plant subfamilies", "Plants" ]
64,357,281
https://en.wikipedia.org/wiki/Translate%20%28Apple%29
Translate is a translation app developed by Apple for their iOS and iPadOS devices. Introduced on June 22, 2020, it functions as a service for translating text sentences or speech between several languages and was officially released on September 16, 2020, along with iOS 14. All translations are processed through the neural engine of the device, and as such can be used offline. History On June 7, 2021, Apple announced that the app would be available on iPad models running iPadOS 15, as well as Macs running macOS Monterey alongside other system-wide translation features. The app was officially released for iPad models on September 20, 2021, along with iPadOS 15. On June 6, 2022, Apple announced six new languages, Turkish, Indonesian, Polish, Dutch, Thai and Vietnamese. The six new languages work on iPhone 8 or later, iPhone 8 Plus or later, iPhone X or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, all iPad Pro models, iPad Mini (5th generation) or later and iPad (5th generation) or later. The Turkish, Indonesian, Polish, Dutch and Thai languages were added to the app on June 22, 2022, the second anniversary of the announcement of the app. The Vietnamese language was added to the app on July 27, 2022. iOS 16 introduced the ability to translate text through the camera, allowing users to translate text on objects or physical documents in real-time. On June 5, 2023, the new Ukrainian language was added to the app. The new language works on iPhone Xs or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, iPad Pro (2nd generation) or later, iPad Mini (5th generation) or later and iPad (6th generation) or later. On June 10, 2024, the new Hindi language was added to the app. The new language works on iPhone Xs or later, iPhone SE (2nd generation) or later, iPad Air (3rd generation) or later, iPad Pro (3rd generation) or later, iPad Mini (5th generation) or later and iPad (7th generation) or later. Languages Translate originally supported the translation between the UK (British) and US (American) dialects of English, Arabic, Mandarin Chinese, French, German, the European dialect of Spanish, Italian, Japanese, Korean, the Brazilian dialect of Portuguese and Russian. This grew to 17 languages as six new languages - Turkish, Indonesian, Polish, Dutch, Thai and Vietnamese, were added in 2022. Support for Ukrainian was added with iOS 17, bringing the number of supported languages to 18, and then Hindi with iOS 18, bringing the number of supported languages to 19. All languages support dictation and can be downloaded for offline use. Languages not yet supported These are all the languages that are not yet supported on Apple Translate but have been planned for future iOS versions. 
(It says "Is not currently supported for translation") Abkhaz Afrikaans Albanian Armenian Assamese Azerbaijani Bashkir Basque Belarusian Bengali Bosnian Bulgarian Burmese Catalan Cherokee Croatian Czech Danish Dari Dhivehi Dutch Dzongkha Esperanto Estonian Faroese Fijian Filipino (Tagalog) Finnish Frisian Galician Georgian Greek Greenlandic Gujarati Haitian Creole Hebrew Hmong Hungarian Icelandic Ilocano Inuktitut Irish Javanese Kannada Kashmiri Kazakh Khmer Kinyarwanda Kurdish Kyrgyz Lao Latin Latvian Lithuanian Luxemburgish Macedonian Malay Malayalam Manx Maltese Marathi Mongolian Nepali Norwegian (Bokmal) Oriya (Odia) Papiamento Pashto Persian Punjabi Quechua Romanian Sanskrit Serbian Sicilian Sindhi Sinhala Slovak Slovenian Sundanese Swedish Tajik Tamil Tatar Telugu Tibetan Turkmen Urdu Uyghur Uzbek Welsh References IOS-based software made by Apple Inc. iOS software iPadOS software MacOS software Machine translation software Natural language processing software Products introduced in 2020 2020 software
Translate (Apple)
[ "Technology" ]
792
[ "Mobile software stubs", "Mobile technology stubs" ]
64,358,331
https://en.wikipedia.org/wiki/Norna%20Robertson
Norna Robertson (FRSE, FInstP, FRAS, FAPS) is a lead scientist at LIGO at California Institute of Technology, and professor of experimental physics at the University of Glasgow. Her career has focused on experimental research into suspension systems and instrumentation to achieve the detection of gravitational waves. Education Robertson obtained a Ph.D. in experimental physics in 1981 from the University of Glasgow, researching gravitational wave detection and how seismic noise could be suppressed in sensitive measurements. Research and career Robertson began her postdoctoral career as a researcher at Imperial College London studying infrared astronomy. In 1983, she joined the University of Glasgow as a lecturer and returned to gravitational waves research, becoming a Professor in 1999. In 2003, Robertson moved to the Gintzon Laboratory at Stanford University as a visiting professor, where her work focused on suspension systems for Advanced LIGO. She became a lead scientist at the LIGO at California Institute of Technology in 2007, leading an international team of 20 scientists and engineers. Her research contributed to the design of detection instrumentation that ultimately led to the first observation of gravitational waves in 2015. Her work is now focused on the development of ultra-low noise suspensions systems for Advanced LIGO. Awards and honours Robertson was awarded the President's Medal from the Royal Society of Edinburgh in 2016 for her work on suspension systems for gravitational wave detection. She received the California Institute of Technology Staff Service and Impact Award in 2017. She is a Fellow of the Royal Society of Edinburgh, the American Physical Society, the Royal Astronomical Society, the Institute of Physics, and the International Society on General Relativity and Gravitation. References Living people Fellows of the Royal Society of Edinburgh Fellows of the American Physical Society Fellows of the Institute of Physics Fellows of the Royal Astronomical Society Scottish physicists Experimental physicists California Institute of Technology faculty Gravitational-wave astronomy British women scientists Alumni of the University of Glasgow Year of birth missing (living people)
Norna Robertson
[ "Physics", "Astronomy" ]
384
[ "Astrophysics", "Experimental physics", "Gravitational-wave astronomy", "Astronomical sub-disciplines", "Experimental physicists" ]
64,359,053
https://en.wikipedia.org/wiki/Juvenile%20polyp
Juvenile polyps are a type of polyp found in the colon. While juvenile polyps are typically found in children, they may be found in people of any age. Juvenile polyps are a type of hamartomatous polyps, which consist of a disorganized mass of tissue. They occur in about two percent of children. Juvenile polyps often do not cause symptoms (asymptomatic); when present, symptoms usually include gastrointestinal bleeding and prolapse through the rectum. Removal of the polyp (polypectomy) is warranted when symptoms are present, for treatment and definite histopathological diagnosis. In the absence of symptoms, removal is not necessary. Recurrence of polyps following removal is relatively common. Juvenile polyps are usually sporadic, occurring in isolation, although they may occur as a part of juvenile polyposis syndrome. Sporadic juvenile polyps may occur in any part of the colon, but are usually found in the distal colon (rectum and sigmoid). In contrast to other types of colon polyps, juvenile polyps are not premalignant and are not usually associated with a higher risk of cancer; however, individuals with juvenile polyposis syndrome are at increased risk of gastric and colorectal cancer. Unlike juvenile polyposis syndrome, solitary juvenile polyps do not require follow up with surveillance colonoscopy. Signs and symptoms Juvenile polyps often do not cause symptoms (asymptomatic); when present, symptoms usually include gastrointestinal bleeding and prolapse through the rectum. Juvenile polyps are usually sporadic, occurring in isolation, although they may occur as a part of juvenile polyposis syndrome. Sporadic juvenile polyps may occur in any part of the colon, but are usually found in the distal colon (rectum and sigmoid). Histopathology Under microscopy, juvenile polyps are characterized by cystic architecture, mucus-filled glands, and prominent lamina propria. Inflammatory cells may be present. Compared with sporadic polyps, polyps that occur in juvenile polyposis syndrome tend to have more of a frond-like (resembling a leaf) growth pattern with fewer stroma, fewer dilated glands and smaller glands with more proliferation. Syndrome-related juvenile polyps also demonstrate more neoplasia and increased COX-2 expression compared with sporadic juvenile polyps. Diagnosis Juvenile polyps are diagnosed by examination of their distinctive histopathology, generally after polypectomy via endoscopy. Juvenile polyps cause fecal calprotectin level to be elevated. Treatment If symptoms are present, then removal of the polyp (polypectomy) is warranted. Recurrence of polyps following removal is relatively common. Unlike juvenile polyposis syndrome, solitary juvenile polyps do not require follow up with surveillance colonoscopy. Epidemiology Juvenile polyps occur in about 2 percent of children. In contrast to other types of colon polyps, juvenile polyps are not premalignant and are not usually associated with a higher risk of cancer; however, individuals with juvenile polyposis syndrome are at increased risk of gastric and colorectal cancer. References Digestive system neoplasia Histopathology
Juvenile polyp
[ "Chemistry" ]
673
[ "Histopathology", "Microscopy" ]
64,359,579
https://en.wikipedia.org/wiki/Honeywell%20JetWave
Honeywell's JetWave is a piece of satellite communications hardware produced by Honeywell that enables global in-flight internet connectivity. Its connectivity is provided using Inmarsat’s GX Aviation network. The JetWave platform is used in business and general aviation, as well as defense and commercial airline users. History In 2012, Honeywell announced it would provide Inmarsat with the hardware for its GX Ka-band in-flight connectivity network. The Ka-band (pronounced either "kay-ay band" or "ka band") is a portion of the microwave part of the electromagnetic spectrum defined as frequencies in the range 27.5 to 31 gigahertz (GHz). In satellite communications, the Ka-band allows higher bandwidth communication. In 2017, after five years and more than 180 flight hours and testing, JetWave was launched as part of GX Aviation with Lufthansa Group. Honeywell’s JetWave was the exclusive terminal hardware option for the Inmarsat GX Aviation network; however, the exclusivity clause in that contract has expired. In July 2019, the United States Air Force selected Honeywell’s JetWave satcom system for 70 of its C-17 Globemaster III cargo planes. In December 2019, it was reported that six AirAsia aircraft had been fitted with Inmarsat’s GX Aviation Ka-band connectivity system and is slated to be implemented fleetwide across AirAsia’s Airbus A320 and A330 models in 2020, requiring installation of JetWave atop AirAsia’s fuselages. Today, Honeywell’s JetWave hardware is installed on over 1,000 aircraft worldwide. In August 2021, the Civil Aviation Administration of China approved a validation of Honeywell’s MCS-8420 JetWave satellite connectivity system for Airbus 320 aircraft. In December 2021, Honeywell, SES, and Hughes Network Systems demonstrated multi-orbit high-speed airborne connectivity for military customers using Honeywell’s JetWave MCX terminal with a Hughes HM-series modem, and SES satellites in both medium Earth orbit (MEO) and geostationary orbit (GEO). The tests achieved full duplex data rates of more than 40 megabits per second via a number of SES' (GEO) satellites including GovSat-1, and the high-throughput, low-latency O3b MEO satellite constellation, with connections moving between GEO/MEO links in under 30 sec. Uses Commercial aviation Honeywell’s JetWave enables air transport and regional aircraft to connect to Inmarsat’s GX Aviation network. The multichannel satellite (MSC) JetWave terminals share the same antenna controller, modem and router hardware with the business market, but have an MCS-8200 fuselage-mounted antenna. Business aviation Honeywell’s JetWave hardware allows users to connect to Inmarsat’s Jet ConneX, a business aviation broadband connectivity offering to provide Wi-Fi for connected devices. JetWave offers a tail-mount antenna for business jets. Defense Honeywell’s JetWave satellite communications system for defense allows users to connect to the Inmarsat GX network, offering global coverage for military airborne operators, including over water, over nontraditional flight paths and in remote areas. JetWave and the Inmarsat GX network enable mission-critical applications like real-time weather; videoconferencing; large file transfers; encryption capabilities; in-flight briefings; intelligence, surveillance, and reconnaissance video; and secure communications. JetWave is configurable for a variety of military platforms and offers antennas for large and small airframes. References Computer hardware Satellite Internet access
Honeywell JetWave
[ "Technology", "Engineering" ]
758
[ "Computer engineering", "Computer hardware", "Computer systems", "Computer science", "Computers" ]
44,402,565
https://en.wikipedia.org/wiki/Global%20cascades%20model
Global cascades models are a class of models aiming to model large and rare cascades that are triggered by exogenous perturbations which are relatively small compared with the size of the system. The phenomenon occurs ubiquitously in various systems, like information cascades in social systems, stock market crashes in economic systems, and cascading failure in physics infrastructure networks. The models capture some essential properties of such phenomenon. Model description To describe and understand global cascades, a network-based threshold model has been proposed by Duncan J. Watts in 2002. The model is motivated by considering a population of individuals who must make a decision between two alternatives, and their choices depend explicitly on other people's states or choices. The model assumes that an individual will adopt a new particular opinion (product or state) if a threshold fraction of his/her neighbors have adopted the new one, else he would keep his original state. To initiate the model, a new opinion will be randomly distributed among a small fraction of individuals in the network. If the fraction satisfies a particular condition, a large cascades can be triggered.(see Global Cascades Condition) A phase transition phenomenon has been observed: when the network of interpersonal influences is sparse, the size of the cascades exhibits a power law distribution, the most highly connected nodes are critical in triggering cascades, and if the network is relatively dense, the distribution shows a bimodal form, in which nodes with average degree show more importance by serving as triggers. Several generalizations of the Watt's threshold model have been proposed and analyzed in the following years. For example, the original model has been combined with independent interaction models to provide a generalized model of social contagion, which classifies the behavior of the system into three universal classes. It has also been generalized on modular networks degree-correlated networks and to networks with tunable clustering. The role of the initiators has also been studied recently, shows that different initiator would influence the size of the cascades. Watt's threshold model is one of the few models that shows qualitative differences on multiplex networks and single layer networks. It can furthermore exhibit broad and multi-modal cascade size distributions on finite networks. Global cascades condition To derive the precise cascade condition in the original model, a generating function method could be applied. The generating function for vulnerable nodes in the network is: where pk is the probability a node has degree k, and and f is the distribution of the threshold fraction of individuals. The average vulnerable cluster size can be derived as: where z is the average degree of the network. The Global cascades occur when the average vulnerable cluster size diverges The equation could be interpreted as: When , the clusters in the network is small and global cascades will not happen since the early adopters are isolated in the system, thus no enough momentum could be generated. When , the typical size of the vulnerable cluster is infinite, which implies presence of global cascades. Relations with other contagion models The Model considers a change of state of individuals in different systems which belongs to a larger class of contagion problems. 
However, it differs from other models in several aspects. Compared with (1) epidemic models, where contagion events between individual pairs are independent, in the proposed model the effect a single infected node has on an individual depends on the individual's other neighbors. Unlike (2) percolation or self-organized criticality models, the threshold is not expressed as an absolute number of "infected" neighbors around an individual; instead, a corresponding fraction of neighbors is used. It is also different from (3) the random-field Ising model and the majority voter model, which are frequently analyzed on regular lattices; here, however, the heterogeneity of the network plays a significant role. See also Threshold model Information cascade Stock market crash Cascading failure Epidemic model Percolation theory Self-organized criticality Ising model Voter model Complex contagion Sociological theory of diffusion Global cascade References Mathematical modeling Network theory
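A minimal numerical sketch of the cascade condition above, assuming a Poisson (Erdős–Rényi-style) degree distribution and a single uniform threshold φ* shared by all nodes; the function names and the degree cutoff k_max are illustrative choices, not part of Watts' original formulation.

import math

def poisson_pk(k, z):
    # Probability that a node has degree k in a Poisson random graph with mean degree z.
    return math.exp(-z) * z ** k / math.factorial(k)

def rho_k(k, phi_star):
    # With a uniform threshold phi*, a node of degree k is vulnerable iff phi* <= 1/k.
    if k == 0:
        return 0.0
    return 1.0 if phi_star <= 1.0 / k else 0.0

def cascades_possible(z, phi_star, k_max=100):
    # Watts' condition: global cascades can occur when sum_k k(k-1) rho_k p_k > z,
    # i.e. when the average vulnerable cluster size diverges. The sum is truncated
    # at k_max, which is ample for the small mean degrees used here.
    s = sum(k * (k - 1) * rho_k(k, phi_star) * poisson_pk(k, z) for k in range(k_max + 1))
    return s > z

# With threshold 0.18 the cascade window covers intermediate mean degrees only:
for z in (0.5, 3.0, 6.0):
    print(z, cascades_possible(z, 0.18))   # False, True, False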
Global cascades model
[ "Mathematics" ]
822
[ "Mathematical modeling", "Applied mathematics", "Graph theory", "Network theory", "Mathematical relations" ]
44,402,859
https://en.wikipedia.org/wiki/SIP%20extensions%20for%20the%20IP%20Multimedia%20Subsystem
The Session Initiation Protocol (SIP) is the signaling protocol selected by the 3rd Generation Partnership Project (3GPP) to create and control multimedia sessions with multiple participants in the IP Multimedia Subsystem (IMS). It is therefore a key element in the IMS framework. SIP was developed by the Internet Engineering Task Force (IETF) as a standard for controlling multimedia communication sessions in Internet Protocol (IP) networks. It is characterized by its position in the application layer of the Internet Protocol Suite. Several SIP extensions, published in Request for Comments (RFC) protocol recommendations, have been added to the basic protocol for extending its functionality. The 3GPP, which is a collaboration between groups of telecommunications associations aimed at developing and maintaining the IMS, stated a series of requirements for SIP to be successfully used in the IMS. Some of them could be addressed by using existing capabilities and extensions in SIP while, in other cases, the 3GPP had to collaborate with the IETF to standardize new SIP extensions to meet the new requirements. The IETF develops SIP on a generic basis, so that the use of extensions is not restricted to the IMS framework. 3GPP requirements for SIP The 3GPP has stated several general requirements for operation of the IMS. These include an efficient use of the radio interface by minimizing the exchange of signaling messages between the mobile terminal and the network, a minimum session setup time by performing tasks prior to session establishment instead of during session establishment, a minimum support required in the terminal, the support for roaming and non-roaming scenarios with terminal mobility management (supported by the access network, not SIP), and support for IPv6 addressing. Other requirements involve protocol extensions, such as SIP header fields to exchange user or server information, and SIP methods to support new network functionality: requirements for registration, re-registration, de-registration, event notifications, instant messaging or call control primitives with additional capabilities such as call transfer. Other specific requirements are: Quality of service support with policy and charging control, as well as resource negotiation and allocation before alerting the destination user. Identification of users for authentication, authorization and accounting purposes. Security between users and the network and among network nodes is a major issue to be addressed by using mutual authentication mechanisms such as private and public keys and digests, as well as media authorization extensions. It must also be possible to present to both the caller and the called party the identities of their counterparts, with the ability to hide this information if required. Anonymity in session establishment and privacy are also important. Protection of SIP signaling with integrity and confidentiality support based on initial authentication and symmetric cryptographic keys; error recovery and verification are also needed. Session release initiated by the network (e.g. in case the user terminal leaves coverage or runs out of credit). Source-routing mechanisms. The routing of SIP messages has its own requirements in the IMS, as all terminal-originated session setup attempts must transit both the P-CSCF and the S-CSCF so that these call session control function (CSCF) servers can properly provide their services. There can be special path requirements for certain messages as well. 
Interoperation between IMS and the public switched telephone network (PSTN). Finally, it is also necessary that other protocols and network services such as DHCP or DNS are adapted to work with SIP, for instance for outbound proxy (P-CSCF) location and SIP Uniform Resource Identifier (URI) to IP address resolution, respectively. Extension negotiation mechanism There is a mechanism in SIP for extension negotiation between user agents (UA) or servers, consisting of three header fields: Supported, Require and Unsupported, which UAs or servers (i.e. user terminals or call session control functions (CSCFs) in the IMS) may use to specify the extensions they understand. When a client initiates a SIP dialog with a server, it states the extensions it requires to be used and also other extensions that are understood (supported), and the server will then send a response with a list of extensions that it requires. If these extensions are not listed in the client's message, the response from the server will be an error response. Likewise, if the server does not support one or more of the client's required extensions, it will send an error response with a list of its unsupported extensions (a minimal sketch of this check appears at the end of this article). Extensions of this kind are called option tags, but SIP can also be extended with new methods. In that case, user agents or servers use the Allow header to state which methods they support. To require the use of a particular method in a particular dialog, they must use an option tag associated with that method. SIP extensions Caller preferences and user agent capabilities These two extensions allow users to specify their preferences about the service the IMS provides. With the caller preferences extension, the calling party is able to indicate the kind of user agent they want to reach (e.g. whether it is fixed or mobile, a voicemail or a human, personal or for business, which services it is capable of providing, or which methods it supports) and how to search for it, with three header fields: Accept-Contact to describe the desired destination user agents, Reject-Contact to state the user agents to avoid, and Request-Disposition to specify how the request should be handled by servers in the network (i.e. whether or not to redirect and how to search for the user: sequentially or in parallel). By using the user agent capabilities extension, user agents (terminals) can describe themselves when they register so that others can search for them according to their caller preferences extension headers. For this purpose, they list their capabilities in the Contact header field of the REGISTER message. Event notification The aim of event notification is to obtain the status of a given resource (e.g. a user, one's voicemail service) and to receive updates of that status when it changes. Event notification is necessary in the IMS framework to inform others who may be waiting to contact a user about that user's presence (i.e. "online" or "offline"), or to notify a user and its P-CSCF of the user's own registration state, so that they know if they are reachable and what public identities they have registered. Moreover, event notification can be used to provide additional services such as voicemail (i.e. to notify users that they have new voice messages in their inbox). To this end, the specific event notification extension defines a framework for event notification in SIP, with two new methods, SUBSCRIBE and NOTIFY, new header fields and response codes, and two roles: the subscriber and the notifier. 
The entity interested in the state information of a resource (the subscriber) sends a SUBSCRIBE message with the Uniform Resource Identifier (URI) of the resource in the request initial line, and the type of event in the Event header. Then the entity in charge of keeping track of the state of the resource (the notifier) receives the SUBSCRIBE request and sends back a NOTIFY message with a Subscription-State header as well as the information about the status of the resource in the message body. Whenever the resource state changes, the notifier sends a new NOTIFY message to the subscriber. Each kind of event a subscriber can subscribe to is defined in a new event package. An event package describes a new value for the SUBSCRIBE Event header, as well as a MIME type to carry the event state information in the NOTIFY message. There is also an Allow-Events header to indicate event notification capabilities, and the 202 Accepted and 489 Bad Event response codes to indicate whether a subscription request has been preliminarily accepted or has been turned down because the notifier does not understand the kind of event requested. In order to make an efficient use of the signaling messages, it is also possible to establish a limited notification rate (not real-time notifications) through a mechanism called event throttling. Moreover, there is also a mechanism for conditional event notification that allows the notifier to decide whether or not to send the complete NOTIFY message, depending on whether there is something new to notify since the last notification. State publication The event notification framework defines how a user agent can subscribe to events about the state of a resource, but it does not specify how that state can be published. The SIP extension for event state publication was defined to allow user agents to publish the state of an event to the entity (notifier) that is responsible for composing the event state and distributing it to the subscribers. The state publication framework defines a new method, PUBLISH, which is used to request the publication of the state of the resource specified in the request-URI, with reference to the event stated in the Event header, and with the information carried in the message body. Instant messaging The functionality of sending instant messages to provide a service similar to text messaging is defined in the instant messaging extension. These messages are unrelated to each other (i.e. they do not originate a SIP dialog) and are sent through the SIP signaling network, sharing resources with control messages. This functionality is supported by the new MESSAGE method, which can be used to send an instant message to the resource stated in the request-URI, with the content carried in the message body. This content is defined as a MIME type, with text/plain being the most common one. In order to have an instant messaging session with related messages, the Message Session Relay Protocol (MSRP) is available. Call transfer The REFER method extension defines a mechanism to request a user agent to contact a resource which is identified by a URI in the Refer-To header field of the request message. A typical use of this mechanism is call transfer: during a call, the participant who sends the REFER message tells the recipient to contact the user agent identified by the URI in the corresponding header field. 
The REFER message also implies an event subscription to the result of the operation, so that the sender will know whether or not the recipient could contact the third person. However, this mechanism is not restricted to call transfer, since the Refer-To header field can carry any kind of URI, for instance an HTTP URI, to require the recipient to visit a web page. Reliability of provisional responses In the basic SIP specification, only requests and final responses (i.e. 2XX response codes) are transmitted reliably, that is, they are retransmitted by the sender until the acknowledgement message arrives (i.e. the corresponding response code to a request, or the ACK request corresponding to a 2XX response code). This mechanism is necessary since SIP can run not only over reliable transport protocols (TCP) that assure that the message is delivered, but also over unreliable ones (UDP) that offer no delivery guarantees, and it is even possible that both kinds of protocols are present in different parts of the transport network. However, in such a scenario as the IMS framework, it is necessary to extend this reliability to provisional responses to INVITE requests (for session establishment, that is, to start a call). The reliability of provisional responses extension provides a mechanism to confirm that provisional responses, such as the 180 Ringing response code that lets the caller know that the callee is being alerted, are successfully received. To do so, this extension defines a new method, PRACK, which is the request message used to tell the sender of a provisional response that his or her message has been received. This message includes a RAck header field, a sequence number that matches the RSeq header field of the provisional response that is being acknowledged, and also contains the CSeq number that identifies the corresponding INVITE request. To indicate that the user agent requests or supports reliable provisional responses, the 100rel option tag is used. Session description updating The aim of the UPDATE method extension is to allow user agents to provide updated session description information within a dialog, before the final response to the initial INVITE request is generated. This can be used to negotiate and allocate the call resources before the called party is alerted. Preconditions In the IMS framework, it is required that once the callee is alerted, the chances of a session failure are minimal. An important source of failure is the inability to reserve network resources to support the session, so these resources should be allocated before the phone rings. However, in the IMS, to reserve resources the network needs to know the callee's IP address, port and session parameters, and therefore it is necessary that the initial offer/answer exchange to establish a session has started (INVITE request). In basic SIP, this exchange eventually causes the callee to be alerted. To solve this problem, the concept of preconditions was introduced. In this concept the caller states a set of constraints about the session (i.e. codecs and QoS requirements) in the offer, and the callee responds to the offer without establishing the session or alerting the user. This establishment will occur if and only if both the caller and the callee agree that the preconditions are met. 
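As a toy illustration of the agreement logic just described, the sketch below models each side's resource reservation with two flags; the class and function names are invented for the example and are far simpler than the real SDP current-status/desired-status attributes.

from dataclasses import dataclass

@dataclass
class Precondition:
    # Simplified stand-ins for the SDP current-status and desired-status attributes.
    reserved: bool   # resources actually allocated on this side
    required: bool   # this side requires reservation before alerting

def may_alert_called_party(caller: Precondition, callee: Precondition) -> bool:
    # The called party is alerted only once every required reservation is in place.
    return all(not side.required or side.reserved for side in (caller, callee))

# Caller has reserved, callee is still reserving: keep the phone silent.
print(may_alert_called_party(Precondition(True, True), Precondition(False, True)))  # False
print(may_alert_called_party(Precondition(True, True), Precondition(True, True)))   # True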
The preconditions SIP extension affects both SIP, with a new option tag (precondition) and defined offer/answer exchanges, and the Session Description Protocol (SDP), a format used to describe streaming media initialization parameters, carried in the body of SIP messages. The new SDP attributes are meant to describe the current status of the resource reservation, the desired status of the reservation to proceed with session establishment, and the confirmation status, to indicate when the reservation status should be confirmed. The SDP offer/answer model using PRACK and UPDATE requests In the IMS, the initial session parameter negotiation can be done by using the provisional responses and session description updating extensions, along with SDP in the body of the messages. The first offer, described by means of SDP, can be carried by the INVITE request and will deal with the caller's supported codecs. This request will be answered by the provisional reliable response code 183 Session Progress, which carries the SDP list of the codecs supported by both the caller and the callee. The corresponding PRACK to this provisional answer will be used to select a codec and initiate the QoS negotiation. The QoS negotiation is supported by the PRACK request, which starts resource reservation in the calling party's network, and it is answered by a 2XX response code. Once this response has been sent, the called party has selected the codec too, and starts resource reservation on its side. Subsequent UPDATE requests are sent to inform about the reservation progress, and they are answered by 2XX response codes. In a typical offer/answer exchange, one UPDATE will be sent by the calling party when its reservation is completed; the called party will then respond and eventually finish allocating the resources. It is then, when all the resources for the call are in place, that the called party is alerted. Identification and charging In the IMS framework it is fundamental to handle user identities for authentication, authorization and accounting purposes. The IMS is meant to provide multimedia services over IP networks, but it also needs a mechanism to charge users for them. All this functionality is supported by new special header fields. P-headers The Private Header Extensions to SIP, also known as P-headers, are special header fields whose applicability is limited to private networks with a certain topology and certain characteristics of the lower layers' protocols. They were designed specifically to meet the 3GPP requirements because a more general solution was not available. These header fields are used for a variety of purposes, including charging and information about the networks a call traverses: P-Charging-Vector: A collection of charging information, such as the IMS Charging Identity (ICID) value, the address of the SIP proxy that creates the ICID value, and the Inter Operator Identifier (IOI). It may be filled during the establishment of a session or as a standalone transaction outside a dialog. P-Charging-Function-Address: The addresses of the charging functions (functional entities that receive the charging records or events) in the user's home network. It also may be filled during the establishment of a dialog or as a standalone transaction, and informs each proxy involved in a transaction. P-Visited-Network-ID: Identification string of the visited network. 
It is used during registrations to indicate to the user's home network which network is providing services to a roaming user, so that the home network is able to accept the registration according to their roaming agreements. P-Access-Network-Info: Information about the access technology (the network providing the connectivity), such as the radio access technology and cell identity. It is used to inform service proxies and the home network, so that they can optimize services or simply so that they can locate the user in a wireless network. P-Called-Party-ID: The URI originally indicated in the request-URI of a request generated by the calling user agent. When the request reaches the registrar (S-CSCF) of the called user, the registrar re-writes the request-URI on the first line of the request with the registered contact address (i.e. IP address) of the called user, and stores the replaced request-URI in this header field. In the IMS, a user may be identified by several SIP URIs (addresses-of-record), for instance a SIP URI for work and another SIP URI for personal use, and when the registrar replaces the request-URI with the effective contact address, the original request-URI must be stored so that the called party knows to which address-of-record the invitation was sent. P-Associated-URI: Additional URIs that are associated with a user that is registering. It is included in the 200 OK response to a REGISTER request to inform a user which other URIs the service provider has associated with an address-of-record (AOR) URI. More private headers have been defined for user database access: P-User-Database: The address of the user database, that is, the Home Subscriber Server (HSS), which contains the profile of the user that generated a particular request. Although the HSS is a unique master database, it can be distributed over different nodes for reliability and scalability reasons. In this case, a Subscriber Location Function (SLF) is needed to find the HSS that handles a particular user. When a user request reaches the I-CSCF at the edge of the administrative domain, this entity queries the SLF for the corresponding HSS and then, to prevent the S-CSCF from having to query the SLF again, sends the HSS address to the S-CSCF in the P-User-Database header. The S-CSCF will then be able to query the HSS directly to get information about the user (e.g. authentication information during a registration). P-Profile-Key: The key to be used to query the user database (HSS) for a profile corresponding to the destination SIP URI of a particular SIP request. It is transmitted among proxies to perform faster database queries: the first proxy finds the key and the others query the database by directly using the key. This is useful when Wildcarded Service Identities are used, that is, Public Service Identities that match a regular expression, because the first query has to resolve the regular expression to find the key. Asserted identity The private extensions for asserted identity within trusted networks are designed to enable a network of trusted SIP servers to assert the identity of authenticated users, only within an administrative domain with previously agreed policies for generation, transport and usage of this identification information. These extensions also allow users to request privacy so that their identities are not spread outside the trust domain. To indicate so, they must insert the privacy token id into the Privacy header field. 
The main functionality is supported by the P-Asserted-Identity extension header. When a proxy server receives a request from an untrusted entity and authenticates the user (i.e. verifies that the user is who he or she claims to be), it inserts this header with the identity that has been authenticated, and then forwards the request as usual. This way, other proxy servers that receive this SIP request within the Trust Domain (i.e. the network of trusted entities with previously agreed security policies) can safely rely on the identity information carried in the P-Asserted-Identity header without the necessity of re-authenticating the user. The P-Preferred-Identity extension header is also defined, so that a user with several public identities is able to tell the proxy which public identity should be included in the P-Asserted-Identity header when the user is authenticated. Finally, when privacy is requested, proxies must withhold asserted identity information outside the trusted domain by removing P-Asserted-Identity headers before forwarding user requests to untrusted entities (outside the Trust Domain). There exist analogous extension headers for handling the identification of users' services, instead of the users themselves. In this case, Uniform Resource Names are used to identify a service (e.g. a voice call, an instant messaging session, an IPTV stream). Security mechanisms Access security in the IMS consists of first authenticating and authorizing the user, which is done by the S-CSCF, and then establishing secure connections between the P-CSCF and the user. There are several mechanisms to achieve this, such as: HTTP digest access authentication, which is part of the basic SIP specification and leads to a Transport Layer Security connection between the user and the proxy. HTTP digest access authentication using AKA, a more secure version of the previous mechanism for cellular networks that uses the information from the user's smart card and commonly creates two IPsec security associations between the P-CSCF and the terminal. The security mechanisms agreement extension for SIP was then introduced to provide a secure mechanism for negotiating the security algorithms and parameters to be used by the P-CSCF and the terminal. This extension uses three new header fields to support the negotiation process. First, the terminal adds a Security-Client header field, containing the mechanisms and the authentication and encryption algorithms it supports, to the REGISTER request. Then, the P-CSCF adds a Security-Server header field to the response that contains the same information as the client's but with reference to the P-CSCF. If there is more than one mechanism, each is associated with a priority value. Finally, the user agent sends a new REGISTER request over the newly created secure connection with the negotiated parameters, including a Security-Verify header field that carries the same contents as the previously received Security-Server header field. This procedure protects the negotiation mechanism from man-in-the-middle attacks: if an attacker removed the strongest security mechanisms from the Security-Server header field in order to force the terminal to choose weaker security algorithms, then the Security-Verify and Security-Server header fields would not match. The contents of the Security-Verify header field cannot be altered, as they are sent through the newly established secure association, as long as this association is not breakable by the attacker in real time (i.e. 
before the P-CSCF discovers the man-in-the-middle attack in progress). Media authorization The necessity in the IMS of reserving resources to provide quality of service (QoS) leads to another security issue: admission control and protection against denial-of-service attacks. To obtain transmission resources, the user agent must present an authorization token to the network (i.e. the policy enforcement point, or PEP). This token will be obtained from its P-CSCF, which may be in charge of QoS policy control or have an interface with the policy control entity in the network (i.e. the policy decision function, or PDF) which originally provides the authorization token. The private extensions for media authorization link session signaling to the QoS mechanisms applied to media in the network, by defining the mechanisms for obtaining authorization tokens and the P-Media-Authorization header field to carry these tokens from the P-CSCF to the user agent. This extension is only applicable within administrative domains with trust relationships. It was particularly designed for specialized SIP networks like the IMS, and not for the general Internet. Source-routing mechanisms Source routing is the mechanism that allows the sender of a message to specify, partially or completely, the route the message traverses. In SIP, the Route header field, filled by the sender, supports this functionality by listing a set of proxies the message will visit. In the IMS context, there are certain network entities (i.e. certain CSCFs) that must be traversed by requests from or to a user, so they are to be listed in the Route header field. To allow the sender to discover such entities and populate the Route header field, there are mainly two extension header fields: Path and Service-Route. Path The extension header field for registering non-adjacent contacts provides a Path header field which accumulates and transmits the SIP URIs of the proxies that are situated between a user agent and its registrar as the REGISTER message traverses them. This way, the registrar is able to discover and record the sequence of proxies that must be transited to get back to the user agent. In the IMS every user agent is served by its P-CSCF, which is discovered by using the Dynamic Host Configuration Protocol or an equivalent mechanism when the user enters the IMS network, and all requests and responses from or to the user agent must traverse this proxy. When the user registers with the home registrar (S-CSCF), the P-CSCF adds its own SIP URI in a Path header field in the REGISTER message, so that the S-CSCF receives and stores this information associated with the contact information of the user. This way, the S-CSCF will forward every request addressed to that user through the corresponding P-CSCF by listing its URI in the Route header field. Service route The extension for service route discovery during registration consists of a Service-Route header field that is used by the registrar, in a 2XX response to a REGISTER request, to inform the registering user of the entity that must forward every request originated by him or her. In the IMS, the registrar is the home network's S-CSCF and it is also required that all requests are handled by this entity, so it will include its own SIP URI in the Service-Route header field. The user will then include this SIP URI in the Route header field of all his or her requests, so that they are forwarded through the home S-CSCF. 
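A small sketch of how the Path entries stored at registration might be turned into a preloaded Route header for a request addressed to a registered user; the URIs are hypothetical examples and a real S-CSCF keeps far more state than this.

def build_route_set(path_uris, contact_uri):
    # The proxies recorded in Path during registration (e.g. the user's P-CSCF)
    # must be traversed on the way back to the terminal.
    route_header = "Route: " + ", ".join(f"<{uri}>" for uri in path_uris)
    request_line = f"INVITE {contact_uri} SIP/2.0"
    return request_line, route_header

# Hypothetical example values, not taken from any real deployment:
print(build_route_set(["sip:pcscf.visited.example.net;lr"], "sip:user@192.0.2.4:5060"))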
Globally routable user agent URIs In the IMS it is possible for a user to have multiple terminals (e.g. a mobile phone, a computer) or application instances (e.g. video telephony, instant messaging, voice mail) that are identified with the same public identity (i.e. SIP URI). Therefore, a mechanism is needed in order to route requests to the desired device or application. That is what a Globally Routable User Agent URI (GRUU) is: a URI that identifies a specific user agent instance (i.e. terminal or application instance) and does so globally (i.e. it is valid to route messages to that user agent from any other user agent on the Internet). These URIs are constructed by adding the gr parameter to a SIP URI, either to the public SIP URI with a value that identifies the user agent instance, or to a specially created URI that does not reveal the relationship between the GRUU and the user's identity, for privacy purposes. They are commonly obtained during the registration process: the registering user agent sends a Uniform Resource Name (URN) that uniquely identifies that SIP instance, and the registrar (i.e. S-CSCF) builds the GRUU, associates it with the registered identity and SIP instance, and sends it back to the user agent in the response. When the S-CSCF receives a request for that GRUU, it will be able to route the request to the registered SIP instance. Signaling compression The efficient use of network resources, which may include a radio interface or other low-bandwidth access, is essential in the IMS in order to provide the user with an acceptable experience in terms of latency. To achieve this goal, SIP messages can be compressed using the mechanism known as SigComp (signaling compression). Compression algorithms perform this operation by substituting repeated words in the message with their positions in a dictionary where all these words appear only once. In a first approach, this dictionary may be built for each message by the compressor and sent to the decompressor along with the message itself. However, as many words are repeated in different messages, the extended operations for SigComp define a way to use a shared dictionary among subsequent messages. Moreover, in order to speed up the process of building a dictionary along subsequent messages and provide high compression ratios from the first INVITE message onwards, SIP provides a static SIP/SDP dictionary which is already built with common SIP and SDP terms. There is also a mechanism to indicate that compression of a SIP message is desired. This mechanism defines the comp=sigcomp parameter for SIP URIs, which signals that the SIP entity identified by the URI supports SigComp and is willing to receive compressed messages. When used in request-URIs, it indicates that the request is to be compressed, while in Via header fields it signals that the subsequent response is to be compressed. Content indirection In order to obtain even shorter SIP messages and make a very efficient use of the resources, the content indirection extension makes it possible to replace a MIME body part of the message with an external reference, typically an HTTP URI. This way the recipient of the message can decide whether or not to follow the reference to fetch the resource, depending on the bandwidth available. NAT traversal Network address translation (NAT) makes it impossible for a terminal to be reached from outside its private network, since it uses a private address that is mapped to a public one when packets originated by the terminal cross the NAT. 
Therefore, NAT traversal mechanisms are needed for both the signaling plane and the media plane. The Internet Engineering Task Force's RFC 6314 summarizes and unifies different methods to achieve this, such as symmetric response routing and client-initiated connections for SIP signaling, and the use of STUN, TURN and ICE, which combines the two previous ones, for media streams. Internet Protocol version 6 compatibility The Internet Engineering Task Force's RFC 6157 describes the necessary mechanisms to guarantee that SIP works successfully between both Internet Protocol versions during the transition to IPv6. While SIP signaling messages can be transmitted through heterogeneous IPv4/IPv6 networks as long as proxy servers and DNS entries are properly configured to relay messages across both networks according to these recommendations, user agents will need to implement extensions so that they can directly exchange media streams. These extensions are related to the initial Session Description Protocol offer/answer exchange, which is used to gather the IPv4 and IPv6 addresses of both ends so that they can establish direct communication. Interworking with other technologies Apart from all the extensions to SIP explained above that make it possible for the IMS to work successfully, it is also necessary that the IMS framework interworks and exchanges services with existing network infrastructures, mainly the public switched telephone network (PSTN). There are several standards that address these requirements, such as the following two for services interworking between the PSTN and the Internet (i.e. the IMS network): PSTN Interworking Service Protocol (PINT), which extends SIP and SDP for accessing classic telephone call services in the PSTN (e.g. basic telephone calls, fax service, receiving content over the telephone). Services in PSTN requesting Internet Services (SPIRITS), which provides the opposite functionality to PINT, that is, supporting access to Internet services from the PSTN. And also for PSTN–SIP gateways to support calls with one end in each network: Session Initiation Protocol for Telephones (SIP-T), which describes the practices and uses of these gateways. ISDN User Part (ISUP) to Session Initiation Protocol (SIP) Mapping, which makes it possible to translate SIP signaling messages into ISUP messages of Signaling System No. 7 (SS7), which is used in the PSTN, and vice versa. Moreover, the SIP INFO method extension is designed to carry user information between terminals without affecting the signaling dialog, and can be used to transport dual-tone multi-frequency (DTMF) signaling to provide a telephone keypad function for users. See also Next-generation network Voice over IP TISPAN References Books External links 3rd Generation Partnership Project's page about the IP Multimedia Subsystem IP Multimedia Subsystem call flows IMS services Telecommunications infrastructure Network architecture Mobile telecommunications standards 3GPP standards Multimedia Telephony Videotelephony Audio network protocols VoIP protocols Application layer protocols
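Returning to the extension negotiation mechanism described earlier, a minimal sketch of the server-side check, modelling option tags as plain strings; 420 Bad Extension with an Unsupported header is the error response basic SIP defines for this case.

def screen_request(required_by_client, supported_by_server):
    # Reject a request whose Require option tags the server does not support,
    # listing the offending tags in the Unsupported header field.
    unsupported = set(required_by_client) - set(supported_by_server)
    if unsupported:
        return 420, {"Unsupported": ", ".join(sorted(unsupported))}
    return 200, {}

print(screen_request({"100rel", "precondition"}, {"100rel"}))
# (420, {'Unsupported': 'precondition'})
print(screen_request({"100rel"}, {"100rel", "precondition"}))
# (200, {})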
SIP extensions for the IP Multimedia Subsystem
[ "Technology", "Engineering" ]
6,895
[ "Network architecture", "Computer networks engineering", "Mobile telecommunications standards", "Mobile telecommunications", "Multimedia", "IMS services" ]
44,402,891
https://en.wikipedia.org/wiki/Masque%20Attack
Masque Attack is an iOS vulnerability identified and named by computer security company FireEye in July 2014. FireEye privately informed Apple Inc. of the issue on July 26, 2014 and disclosed the vulnerability to the public on November 10, 2014 through a blog post on their website. The vulnerability was identified as existing in iOS 7.1.1, 7.1.2, 8.0, 8.1 and 8.1.1 beta, on both jailbroken and non-jailbroken iOS devices. The vulnerability consists of getting users to download and install apps that have been deceptively created with the same bundle identifier as an existing legitimate app. The deceptive app can then replace and pose as the legitimate app, as long as the app is not one pre-installed along with iOS (i.e., one of the default Apple apps) – hence the name "Masque Attack" given by FireEye. Once the deceptive app is installed, the malicious parties can access any data entered by the user, such as account credentials. On November 13, 2014, the United States Computer Emergency Readiness Team (US-CERT, part of the Department of Homeland Security) released Alert bulletin TA14-317A regarding the Masque Attack. Apple stated on November 14 that it was not aware of any incidents in which one of its customers had been affected by the attack. References IOS
Masque Attack
[ "Technology" ]
288
[ "Computer security stubs", "Computing stubs" ]
44,403,472
https://en.wikipedia.org/wiki/Leucoagaricus%20erythrophaeus
Leucoagaricus erythrophaeus is a species of agaric fungus. Described as new to science in 2010, it is found in California, where it grows in mixed forest. The specific epithet erythrophaeus originates from the Greek words ερυθρος ("red" or "bloody") and φαιος ("dark"), and refers to the mushroom's characteristic bruising reaction. The species was formerly known under the misapplied name Lepiota roseifolia. Description Its cap is 18–60 mm across. Its shape is initially hemispherical, then expanding to be convex or conical, ending up flat, or even slightly concave. It has dark brown scales arranged circularly around the purple to red to brown centre. When touched, the cap turns red-orange, which fades to dark brown. The gills are free from the stipe, and often attached to a collar-like structure called a collarium. They are moderately crowded, and yellowish white, turning orange when touched. The stipe measures 55–70 × 4–5 mm, and is cylindrical near the top, though it widens at the base, up to 15 mm wide. It is hollow and hairy, and has pale yellow to cream-coloured flesh. It has a white annulus with fringed edges that flares upwards or downwards. The flesh of Leucoagaricus erythrophaeus is white, though it turns orange when cut. It has a white spore print. The odour ranges from indistinct to astringent. Microscopic features The basidiospores measure 5.9–8.8 × 3.5–4.9 μm, are ellipsoid and have relatively thick walls. They do not have any germ pores. They turn reddish-brown when mounted with an iodine-based reagent (i.e. they are dextrinoid) and they stain readily with Congo red. They are metachromatic in cresyl blue. The basidia measure 15–29 × 6.5–9.0 μm, and have 4 sterigmata each. Pleurocystidia are absent. The cheilocystidia measure 30–93 × 8–14 μm, and are narrowly club-shaped to cylindrical, and sometimes have a forked apex. They are brown and have dark granules when in ammonia. Clamp connections are absent in all tissues. Habitat and distribution Leucoagaricus erythrophaeus has been found north of Mendocino County in California, for example in Picea sitchensis and Tsuga heterophylla forests or Alnus rubra and Sequoia sempervirens forests in the north, and in Pseudotsuga menziesii and Sequoia sempervirens forests in central coastal California. It grows from late October through early December in small groups. The true distribution is unknown. Similar species Leucoagaricus badhamii exhibits similar red staining. Leucoagaricus erythrophaeus differs from L. flammeotincta by its pseudocollarium, orange-staining gills and trichodermal elements on the cap. List of Leucoagaricus species References External links erythrophaeus Fungi of North America Fungi described in 2010 Fungus species
Leucoagaricus erythrophaeus
[ "Biology" ]
691
[ "Fungi", "Fungus species" ]
44,403,623
https://en.wikipedia.org/wiki/Meshedness%20coefficient
In graph theory, the meshedness coefficient is a graph invariant of planar graphs that measures the number of bounded faces of the graph, as a fraction of the possible number of faces for other planar graphs with the same number of vertices. It ranges from 0 for trees to 1 for maximal planar graphs. Definition The meshedness coefficient is used to compare the general cycle structure of a connected planar graph to two relevant extreme references. At one end there are trees, planar graphs with no cycle. The other extreme is represented by maximal planar graphs, planar graphs with the highest possible number of edges and faces for a given number of vertices. The normalized meshedness coefficient is the ratio of available face cycles to the maximum possible number of face cycles in the graph. This ratio is 0 for a tree and 1 for any maximal planar graph. More generally, it can be shown using the Euler characteristic that all n-vertex planar graphs have at most 2n − 5 bounded faces (not counting the one unbounded face) and that if there are m edges then the number of bounded faces is m − n + 1 (the same as the circuit rank of the graph). Therefore, a normalized meshedness coefficient can be defined as the ratio of these two numbers: α = (m − n + 1) / (2n − 5). It varies from 0 for trees to 1 for maximal planar graphs. Applications The meshedness coefficient can be used to estimate the redundancy of a network. This parameter, along with the algebraic connectivity which measures the robustness of the network, may be used to quantify the topological aspect of network resilience in water distribution networks. It has also been used to characterize the network structure of streets in urban areas. Limitations Using the definition of the average degree ⟨k⟩ = 2m/n, one can see that in the limit of large graphs (numbers of vertices and edges growing large) the meshedness tends to α ≈ (⟨k⟩ − 2)/4. Thus, for large graphs, the meshedness does not carry more information than the average degree. References Graph invariants Planar graphs
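A short sketch of the definition above for a connected planar graph, given its vertex and edge counts; the function name is arbitrary.

def meshedness(n, m):
    # Normalized meshedness: bounded faces (m - n + 1, the circuit rank)
    # divided by the maximum possible number of bounded faces (2n - 5).
    if n < 3:
        raise ValueError("2n - 5 is only positive for n >= 3")
    return (m - n + 1) / (2 * n - 5)

# A tree on 10 vertices (9 edges) gives 0.0;
# a maximal planar graph on 10 vertices (3n - 6 = 24 edges) gives 1.0.
print(meshedness(10, 9), meshedness(10, 24))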
Meshedness coefficient
[ "Mathematics" ]
409
[ "Planar graphs", "Graph theory", "Graph invariants", "Mathematical relations", "Planes (geometry)" ]
44,403,744
https://en.wikipedia.org/wiki/Pollination%20network
A pollination network is a bipartite mutualistic network in which plants and pollinators are the nodes, and the pollination interactions form the links between these nodes. The pollination network is bipartite as interactions only exist between two distinct, non-overlapping sets of species, but not within a set: a pollinator can never be pollinated, unlike in a predator-prey network where a predator can be depredated. A pollination network is two-modal, i.e., it includes only links connecting the plant and animal communities. Nested structure of pollination networks A key feature of pollination networks is their nested design. A study of 52 mutualist networks (including plant-pollinator interactions and plant-seed disperser interactions) found that most of the networks were nested. This means that the core of the network is made up of highly connected generalists (a pollinator that visits many different species of plant), while specialized species interact with a subset of the species that the generalists interact with (a pollinator that visits few species of plant, which are also visited by generalist pollinators). As the number of interactions in a network increases, the degree of nestedness increases as well. One property that results from the nested structure of pollination networks is an asymmetry in specialization, where specialist species often interact with some of the most generalized species. This is in contrast to the idea of reciprocal specialization, where specialist pollinators interact with specialist plants. Similar to the relationship between network complexity and network nestedness, the amount of asymmetry in specialization increases as the number of interactions increases. Modularity of networks Another feature that is common in pollination networks is modularity. Modularity occurs when certain groups of species within a network are much more highly connected to each other than they are to the rest of the network, with weak interactions connecting different modules. Within modules it has been shown that individual species play certain roles. Highly specialized species often only interact with individuals within their own module and are known as ‘peripheral species’; more generalized species can be thought of as ‘hubs’ within their own module, with interactions between many different species; there are also very generalized species which can act as ‘connectors’ between their own module and other modules. A study of three separate networks, all of which showed modularity, revealed that hub species were always plants and not the insect pollinators. Previous work has found that networks become nested at a smaller size (number of species) than the size at which they typically become modular. Species loss and robustness to collapse There is substantial interest in the robustness of pollination networks to species loss and collapse, especially due to anthropogenic factors such as habitat destruction. The structure of a network is thought to affect how long it is able to persist after species decline begins. In particular, the nested structure of networks has been shown to protect against complete destruction of the network, because the core group of generalists is the most robust to extinction by habitat loss. Models specifically focused on the effects of habitat loss have shown that specialist species tend to go extinct first, while the last species to go extinct are the most generalized of the network. 
Other studies focusing specifically on the removal of different types of species showed that species decline is the fastest when removing the most generalized species. However, there have been contrasting results on how rapidly decline occurs with removal of these species. One study showed that even at the fastest rate, the decline was still linear. Another study revealed that with the removal of the most common pollinator species, the network showed a drastic collapse. In addition to focusing on the removal of species themselves, other work has emphasized the importance of studying the loss of interactions, as this will often precede species loss and may well accelerate the rate at which extinction occurs. See also Aeroplankton Biological network References Further reading Application-specific graphs Mutualism (biology) Network theory Pollination Systems biology
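A toy illustration of the nestedness concept discussed above, using a plant–pollinator incidence matrix with rows as pollinators and columns as plants; this strict subset check is a simplification of the graded nestedness metrics, such as NODF, actually used in the literature.

def perfectly_nested(incidence):
    # After sorting pollinators by how many plants they visit, each specialist's
    # partner set must be contained in that of every more generalized pollinator.
    partner_sets = sorted(
        (frozenset(j for j, v in enumerate(row) if v) for row in incidence),
        key=len,
    )
    return all(a <= b for a, b in zip(partner_sets, partner_sets[1:]))

# The generalist visits all three plants; each specialist visits a subset of those.
print(perfectly_nested([[1, 1, 1],
                        [1, 1, 0],
                        [1, 0, 0]]))  # True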
Pollination network
[ "Mathematics", "Biology" ]
816
[ "Behavior", "Symbiosis", "Biological interactions", "Graph theory", "Network theory", "Mathematical relations", "Mutualism (biology)", "Systems biology" ]
44,403,747
https://en.wikipedia.org/wiki/Dennis%20F.%20Evans
Dennis Frederick Evans (7 March 1928 – 6 November 1990) was an English chemist who made important contributions to nuclear magnetic resonance, magnetochemistry and other aspects of chemistry. Early life Evans was born in Nottingham, England on 7 March 1928. His father George Frederick Evans was a master carpenter and his mother (née Gladys Martha Taylor) was a dressmaker. He was educated at Huntingdon Street Junior School and then won a scholarship to Nottingham High School. In 1946 he entered Oxford with a scholarship to Lincoln College, where his tutor was Rex Richards (later Sir Rex Richards FRS). He won the university Gibbs Prize in Chemistry in 1949, and in that year started DPhil work with Richards on calorimetry and the magnetic properties of clathrates containing nitric oxide or oxygen; nine papers resulted from this work. He was an ICI Research Fellow from 1952 to 1955. In 1953–4 he was a postdoctoral research associate at the University of Chicago with Robert S. Mulliken, working on the electronic spectra of halogens in organic solvents and producing four papers under his own name. Career and research In October 1955 Geoffrey Wilkinson appointed him as a lecturer in inorganic chemistry at Imperial College London. He was subsequently promoted to Senior Lecturer in 1963, Reader in 1964 and Professor in 1981. He was elected a Fellow of the Royal Society (FRS) in 1981. His research interests ranged widely over a number of topics in inorganic, organic and physical chemistry. Three areas in particular made a lasting contribution to chemistry. I. Measurement of magnetic susceptibility. For paramagnetic inorganic materials in particular, such measurements are often useful. In 1959, he devised a procedure, now called the Evans method, in which the paramagnetic species is dissolved in a water–tert-butanol mixture in an NMR tube, in the presence of a capillary of pure tert-butanol. From the difference in position between the 1H NMR hydroxyl peak of the pure tert-butanol and the same peak shifted by the paramagnetic substrate, the susceptibility of the sample can be calculated (a numerical sketch of the underlying arithmetic appears at the end of this entry). In 1967 he devised an ingenious modification of the classical Gouy balance in which, instead of weighing the sample in a magnetic field, a small but powerful magnet was weighed against the static sample. This was further refined in 1974 by using two strong 6 g magnets, each mounted on a torsion strip. The force that the static paramagnetic sample exerted on one magnet was balanced out by a current passed through a coil placed between the poles of the second magnet; by measurement of this current the magnetic susceptibility of the sample can be calculated. II. Nuclear magnetic resonance spectroscopy (NMR). In addition to his use of NMR to determine magnetic susceptibilities of species in solution (see above), he made wide use of the technique in the study of organometallic and coordination complexes. He also used the technique of double irradiation of organic compounds to establish the relative signs of coupling constants. III. Inorganic chemistry. He made a number of studies on organometallic and coordination complexes. An example of his ingenuity in this area was his demonstration that divalent lanthanides might show Grignard-like behaviour: he found that samarium, europium and ytterbium formed such species and that they showed Grignard-type reactions. Personal life Evans was a much-loved person, giving freely of his time and his immense knowledge (which was not confined to chemistry) to all who asked for help. 
He kept a range of exotic pets which he looked after well, e.g. a Cayman Islands alligator and a five-foot sand snake called George, which was fed with live toads obtained from his local Chelsea pub. George escaped into the King’s Road and, after recapture, was given by Evans to the London Zoo. He also kept locusts (some of these escaped too), bird-eating lizards and giant scorpions. He became a celebrated member of the Chelsea Arts Club. References 1928 births 1990 deaths Academics of Imperial College London English chemists Inorganic chemists Fellows of the Royal Society People educated at Nottingham High School
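A sketch of the arithmetic behind the Evans method in its commonly quoted simplified form for modern superconducting-magnet spectrometers, χ_M ≈ 3Δf/(4πfc), neglecting the solvent diamagnetism and density correction terms of the full expression; the numbers in the example are hypothetical.

import math

def evans_molar_susceptibility(delta_f_hz, spectrometer_hz, conc_mol_per_l):
    # Simplified Evans method: chi_M = 3 * delta_f / (4 * pi * f * c),
    # with the concentration converted from mol/L to mol/cm^3.
    c = conc_mol_per_l / 1000.0
    return 3.0 * delta_f_hz / (4.0 * math.pi * spectrometer_hz * c)  # cm^3/mol

def effective_moment(chi_m, temperature_k):
    # Effective magnetic moment in Bohr magnetons: mu_eff = 2.828 * sqrt(chi_M * T).
    return 2.828 * math.sqrt(chi_m * temperature_k)

# Hypothetical data: a 35 Hz shift at 400 MHz for a 0.05 M solution at 298 K.
chi_m = evans_molar_susceptibility(35.0, 400e6, 0.05)
print(f"chi_M = {chi_m:.2e} cm^3/mol, mu_eff = {effective_moment(chi_m, 298.0):.2f} BM")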
Dennis F. Evans
[ "Chemistry" ]
858
[ "British inorganic chemists", "Inorganic chemists" ]
44,404,283
https://en.wikipedia.org/wiki/Jeevan%20Pramaan
Jeevan Pramaan is an Indian life certificate program, affiliated with Aadhaar, for people with pensions. It was started by Prime Minister Narendra Modi on 10 November 2014. The certificate was made for people who receive pensions from central or state governments or other government organisations. Jeevan Pramaan was made by the Department of Electronics and IT, Government of India. The Jeevan Pramaan software can be downloaded for PC from https://jeevanpramaan.gov.in/ and for Android devices from the Google Play Store. This procedure can also be completed at one of the several Jeevan Pramaan Centres. A pension recipient can receive an electronic Jeevan Pramaan certificate by using this software and a fingerprint or iris scan, as well as the Aadhaar platform for identification. The certificate can then be made available electronically to the Pension Disbursing Agency. References E-government in India Pensions in India Public-key cryptography Modi administration initiatives Digital India initiatives
Jeevan Pramaan
[ "Technology" ]
220
[ "Computing stubs", "Software stubs" ]
44,405,111
https://en.wikipedia.org/wiki/Nanoinjection
Nanoinjection is the process of using a microscopic lance and electrical forces to deliver DNA to a cell. It is claimed to be more effective than microinjection because the lance used is ten times smaller than a micropipette and the method uses no fluid. The nanoinjector mechanism is operated while submerged in a pH-buffered solution. Then, a positive electrical charge is applied to the lance, which accumulates negatively charged DNA on its surface. The nanoinjector mechanism then penetrates the zygotic membranes, and a negative charge is applied to the lance, releasing the accumulated DNA within the cell. The lance is required to maintain a constant elevation on both entry into and exit from the cell. Nanoinjection results in a long-term cell viability of 92% following the electrophoretic injection process with a 100 nm diameter nanopipette, the typical diameter of a nanoinjection pipette. Single-cell transfection can be used to deliver DNA into virtually any type of mammalian cell, using a syringe which creates an entry point through which the DNA is released. A nanoneedle is used as a mechanical vector for plasmid DNA. The method can be improved further with atomic force microscopy (AFM). In order to avoid causing permanent damage to the cell or provoking leakage of intracellular fluid, AFM is a tool of choice, as it allows for precise positioning of the DNA, allowing for tip penetration into the cytosol, which is critical for viable DNA transfer into the cell. Reasons to use nanoinjection include the insertion of genetic material into the genome of a zygote. This method is a critical step in understanding and developing gene functions. Nanoinjection is also used to genetically modify animals to aid in the research of cancer, Alzheimer’s disease, and diabetes. Fabrication The lance is made using the polyMUMPs fabrication technology. The process creates a gold layer and two structural layers that are 2.0 and 1.5 μm thick respectively. It is a simple process, which makes it a good platform for prototyping polysilicon MEMS devices at a low commercial cost of fabrication. The lance has a solid, tapered body that is 2 μm thick, with a tip width of 150 nm. The taper is set at 7.9°, coming to a maximum width of 11 μm. Two highly folded electrical connections provide an electrical path between the lance and two equivalent bond pads, with a gold wire connecting one of the bond pads to an integrated circuit chip carrier’s pin. The carrier is then placed into a custom-built electrical socket. For the injection of fertilized eggs, the lance is incorporated into a kinematic mechanism consisting of a change-point parallel-guiding six-bar mechanism and a compliant parallel-guiding folded-beam suspension. Techniques Electrophoretic injection Electrophoretic injection remains the most common form of nanoinjection. Just as with the other methods, a lance ten times smaller than that of microinjection is used. To prepare the lance for injection, a positive charge is applied, attracting the negatively charged DNA to its tip. After the lance has reached a desired depth within the cell, the charge is reversed, repelling the DNA into the cell. The typical injection voltages are ±20 V, but can be as low as 50–100 mV. Diffusion A manual force is applied to a center fixture of the injection device, moving the lances through cell membranes and into the cytoplasm or nucleus of adhered cells. The magnitude of the force is measured using a force plate on a small number of injections to obtain an estimate of the manual force. 
The force plate is arranged to measure the force actually applied to the injection chip (that is, not including the stiffness of the support spring). After holding the force for five seconds, the force is released and the injection device is removed from the cell. The diffusion protocol presented data for comparison against other variations in the injection process. Applications By delivering certain particles into cells, diseases can be treated or even cured. Gene therapy is possibly the most common field of foreign material delivery into cells and has great implications for curing human genetic diseases. For example, two monkeys colorblind from birth were given gene therapy treatment in a recent experiment. As a result of gene therapy, both animals had their color vision restored with no apparent side effects. Traditionally, gene therapy has been divided into two categories: biological (viral) vectors and chemical or physical (nonviral) approaches. Although viral vectors are currently the most effective approach to delivering DNA into cells, they have certain limitations, including immunogenicity, toxicity, and limited capacity to carry DNA. One factor critical to successful gene therapy is the development of efficient delivery systems. Although advances in gene transfer technology, including viral and non-viral vectors, have been made, an ideal vector system has not yet been constructed. Alternatives Microinjection is the predecessor to nanoinjection. Still used in biological research, microinjection is useful in the examination of non-living cells or in cases where cell viability does not matter. Using a glass pipette 0.5-1.0 micrometers in diameter, the cell has its membrane damaged upon puncture. As opposed to nanoinjection, microinjection uses DNA-filled liquid driven into the cell under pressure. Depending on factors such as the skill of the operator, survival rates of cells undergoing this procedure can be as high as 56% or as low as 9%. Other methods exist that target groups of cells, such as electroporation. These methods are incapable of targeting specific cells, and are therefore not usable where efficiency and cell viability are a concern. References Cell biology
Nanoinjection
[ "Biology" ]
1,180
[ "Cell biology" ]
44,406,953
https://en.wikipedia.org/wiki/Liquid%20apogee%20engine
A liquid apogee engine (LAE), or apogee engine, refers to a type of chemical rocket engine typically used as the main engine in a spacecraft. The name apogee engine derives from the type of manoeuvre for which the engine is typically used, i.e. an in-space delta-v change made at the apogee of an elliptical orbit in order to circularise it. For geostationary satellites, this type of orbital manoeuvre is performed to transition from a geostationary transfer orbit and place the satellite on station in a circular geostationary orbit. Despite the name, an apogee engine can be used for a range of other manoeuvres, such as end-of-life deorbit, Earth orbit escape, planetary orbit insertion and planetary descent/ascent. In some parts of the space industry an LAE is also referred to as a liquid apogee motor (LAM), a liquid apogee thruster (LAT) and, depending on the propellant, a dual-mode liquid apogee thruster (DMLAT). Despite the ambiguity with respect to the use of engine and motor in these names, all use liquid propellant. An apogee kick motor (AKM) or apogee boost motor (ABM) such as the Waxwing, however, uses solid propellant. These solid-propellant versions are not used on new-generation satellites. History The apogee engine traces its origin to the early 1960s, when companies such as Aerojet, Rocketdyne, Reaction Motors, Bell Aerosystems, TRW Inc. and The Marquardt Company were all participants in developing engines for various satellites and spacecraft. Derivatives of these original engines are still used today and are continually being evolved and adapted for new applications. Layout A typical liquid apogee engine scheme could be defined as an engine with: pressure-regulated hypergolic liquid bipropellant feed, thermally isolated solenoid or torque motor valves, an injector assembly containing (though dependent on the injector) a central oxidant gallery and an outer fuel gallery, a radiative and film-cooled combustion chamber, characteristic velocity limited by the thermal capability of the combustion chamber material, and thrust coefficient limited by the supersonic area ratio of the expansion nozzle. To protect the spacecraft from the radiant heat of the combustion chamber, these engines are generally installed together with a heat shield. Propellant Apogee engines typically use one fuel and one oxidizer. This propellant is usually, but not restricted to, a hypergolic combination such as N₂H₄/N₂O₄, MMH/N₂O₄ or UDMH/N₂O₄. Hypergolic propellant combinations ignite upon contact within the engine combustion chamber and offer very high ignition reliability, as well as the ability for reignition. In many instances mixed oxides of nitrogen (MON), such as MON-3 (N₂O₄ with 3 wt% NO), are used as a substitute for pure N₂O₄. The use of hydrazine is under threat in Europe due to REACH regulations. In 2011 the REACH framework legislation added hydrazine to its candidate list of substances of very high concern. This step increases the risk that the use of hydrazine will be prohibited or restricted in the near- to mid-term. Exemptions are being sought to allow hydrazine to be used for space applications; however, to mitigate this risk, companies are investigating alternative propellants and engine designs. A change over to these alternative propellants is not straightforward, and issues such as performance, reliability and compatibility (e.g. satellite propulsion system and launch-site infrastructure) require investigation. Performance The performance of an apogee engine is usually quoted in terms of vacuum specific impulse and vacuum thrust.
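These two figures are tied together by the standard ideal-rocket relations: the effective exhaust velocity is the product of the characteristic velocity c* and the thrust coefficient C_F, so that Isp = c*·C_F/g0 and F = ṁ·c*·C_F. A minimal Python sketch of these relations, using illustrative assumed values rather than data for any particular engine:

    G0 = 9.80665  # standard gravity, m/s^2

    def specific_impulse(c_star, c_f):
        # Vacuum Isp in seconds from characteristic velocity c* (m/s)
        # and dimensionless thrust coefficient C_F.
        return c_star * c_f / G0

    def vacuum_thrust(mdot, c_star, c_f):
        # Vacuum thrust (N) from propellant mass flow rate (kg/s).
        return mdot * c_star * c_f

    # Assumed values roughly consistent with a 500 N-class engine:
    c_star, c_f = 1720.0, 1.83
    isp = specific_impulse(c_star, c_f)   # about 321 s
    mdot = 500.0 / (isp * G0)             # about 0.16 kg/s for 500 N of thrust
    print(f"Isp = {isp:.0f} s, mass flow = {mdot:.3f} kg/s")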
However, there are many other details which influence performance: the characteristic velocity is influenced by design details such as propellant combination, propellant feed pressure, propellant temperature, and propellant mixture ratio, while the thrust coefficient is influenced primarily by the nozzle supersonic area ratio. A typical 500 N-class hypergolic liquid apogee engine has a vacuum specific impulse in the region of 320 s, with the practical limit estimated to be near 335 s. Though marketed to deliver a particular nominal thrust and nominal specific impulse at nominal propellant feed conditions, these engines actually undergo rigorous testing in which performance is mapped over a range of operating conditions before being deemed flight-qualified. This means that a flight-qualified production engine can be tuned (within reason) by the manufacturer to meet particular mission requirements, such as higher thrust. Operation Most apogee engines are operated in an on–off manner at a fixed thrust level. This is because the valves used have only two positions: open or closed. The duration for which the engine is on, sometimes referred to as the burn duration, depends both on the manoeuvre and on the capability of the engine. Engines are qualified for a certain minimal and maximal single-burn duration. Engines are also qualified to deliver a maximal cumulative burn duration, sometimes referred to as cumulative propellant throughput. The useful life of an engine at a particular performance level is dictated by the useful life of the materials of construction, primarily those used for the combustion chamber. Applications A simplified division can be made between apogee engines used for telecommunications and those used for exploration missions: Present-day telecommunication spacecraft platforms tend to benefit more from high specific impulse than from high thrust. The less propellant consumed getting into orbit, the more remains available for station keeping once on station. This increase in the remaining propellant can be directly translated into an increase in the service lifetime of the satellite, increasing the financial return on these missions. Planetary exploration spacecraft, especially the larger ones, tend to benefit more from high thrust than from high specific impulse. The quicker a high delta-v manoeuvre can be executed, the higher the efficiency of the manoeuvre, and the less propellant is required. This reduction in the propellant required can be directly translated into an increase in the bus and payload mass (at the design stage), enabling better science return on these missions. The actual engine chosen for a mission depends on the technical details of the mission. More practical considerations such as cost, lead time and export restrictions (e.g. ITAR) also play a part in the decision. See also Rocket engine References Rocket engines Spacecraft propulsion
Liquid apogee engine
[ "Technology" ]
1,288
[ "Rocket engines", "Engines" ]
44,408,374
https://en.wikipedia.org/wiki/Human%20disease%20network
A human disease network is a network of human disorders and diseases with reference to their genetic origins or other features. More specifically, it is a map of human disease associations, referring mostly to disease genes. For example, in a human disease network, two diseases are linked if they share at least one associated gene. A typical human disease network usually derives from a bipartite network which consists of both disease and gene information. Additionally, some human disease networks use other features, such as symptoms and proteins, to associate diseases. History In 2007, Goh et al. constructed a disease-gene bipartite graph using information from the OMIM database and termed it the human disease network. In 2009, Barrenas et al. derived a complex disease-gene network using GWAS (genome-wide association studies). In the same year, Hidalgo et al. published a novel way of building human phenotypic disease networks in which diseases were connected according to their calculated distance. In 2011, Cusick et al. summarized studies on genotype-phenotype associations in a cellular context. In 2014, Zhou et al. built a symptom-based human disease network by mining a biomedical literature database. Properties A large-scale human disease network shows the scale-free property. The degree distribution follows a power law, suggesting that only a few diseases connect to a large number of diseases, whereas most diseases have few links to others. Such networks also show a clustering tendency by disease class. In a symptom-based disease network, diseases are likewise clustered according to their categories. Moreover, diseases sharing the same symptoms are more likely to share the same genes and protein interactions. See also Bioinformatics Genome Network theory Network medicine References External links https://web.archive.org/web/20080625034729/http://hudine.neu.edu/ http://www.barabasilab.com/pubs/CCNR-ALB_Publications/200705-14_PNAS-HumanDisease/200705-14_PNAS-HumanDisease-poster.pdf https://www.nytimes.com/2008/05/06/health/research/06dise.html Network theory
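A minimal sketch of the shared-gene construction described above, assuming the networkx library and hypothetical disease–gene pairs rather than real OMIM associations:

    import networkx as nx
    from networkx.algorithms import bipartite

    # Hypothetical (disease, gene) associations; placeholders only.
    disease_gene = [
        ("disease_A", "gene_1"), ("disease_A", "gene_2"),
        ("disease_B", "gene_2"), ("disease_C", "gene_3"),
    ]

    B = nx.Graph()
    diseases = {d for d, _ in disease_gene}
    genes = {g for _, g in disease_gene}
    B.add_nodes_from(diseases, bipartite=0)
    B.add_nodes_from(genes, bipartite=1)
    B.add_edges_from(disease_gene)

    # One-mode projection onto diseases: an edge appears whenever two
    # diseases share at least one gene; the weight counts shared genes.
    hdn = bipartite.weighted_projected_graph(B, diseases)
    print(sorted(hdn.edges(data="weight")))
    # disease_A and disease_B share gene_2, so they are linked with weight 1;
    # disease_C shares no gene, so it remains isolated.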
Human disease network
[ "Mathematics" ]
460
[ "Network theory", "Mathematical relations", "Graph theory" ]
44,408,590
https://en.wikipedia.org/wiki/Climate-adaptive%20building%20shell
In building engineering, a climate-adaptive building shell (CABS) is a façade or roof that interacts with the variability of its environment in a dynamic way. Conventional structures have static building envelopes and therefore cannot act in response to changing weather conditions and occupant requirements. Well-designed CABS have two main functions: they contribute to energy-saving for heating, cooling, ventilation, and lighting, and they induce a positive impact on the indoor environmental quality of buildings. Definition The description of CABS made by Loonen et al. says that:A climate adaptive building shell has the ability to repeatedly and reversibly change some of its functions, features or behavior over time in response to changing performance requirements and variable boundary conditions, and does this with the aim of improving overall building performance. This definition shows several components that make up CABS, and these are addressed in this article. The first part of the definition relates to their fundamental characteristic: being adaptive envelopes, or in other words, having skins that can adjust to new circumstances. This means that envelopes should be able to "alter slightly as to achieve the desired result", "become used to a new situation", and even return to their original state if needed. Although occupants’ desired conditions are indoors, they are affected by the outdoor surroundings. While these outcomes can be broadly defined, there is a consensus that the purpose of CABS is to provide shelter, protection, and a comfortable indoor environmental quality by consuming the minimum amount of energy needed. Therefore, the objective is to improve the well-being and productivity of people inside the building by making it sensitive to its surroundings. CABS must satisfy different demands that compete or even conflict with each other. For example, they must find the compromise between daylight and glare, fresh air and draft, ventilation and excessive humidity, shutters and luminaires, and heat gains and overheating, among others. The dynamism of the envelope required to manage these compromises can be accomplished in various ways, for example by moving components, by the introduction of airflows, or by a chemical change in a material. However, it is not sufficient to simply add adaptive features to the design or the existing building; they must be integrated into it as a whole system. Therefore, by using CABS technologies, a variety of opportunities are available for a transformation from "manufactured" to "mediated" indoor spaces. Related concepts CABS is only one designation for an envelope concept that can be described by a range of different terms. Several variations on the term 'adaptive' can be used, including: active, advanced, dynamic, interactive, kinetic, responsive, intelligent and switchable. In addition, the concepts of responsive architecture, kinetic architecture and intelligent building are closely related. The main difference with CABS is that the adaptation takes place at the building shell level, whereas the other concepts consider a whole-building approach. Categorization of CABS Like any other system, CABS have several independent characteristics by which they can be categorized. Therefore, the same CABS may well fit into several of these categories at once. What differs from one CABS to another is the subcategorization, which discriminates based on the attributes of each one.
The following are some of the possible categorizations that may be found in the literature. Climate responsive systems As the name says, these are categorized based on the climatic factors they tackle. Their behavior is based on producing a change in heat, light, air, water and/or other types of energy. Thus, they are subcategorized into three types: solar-responsive systems, air-flow-responsive systems, and other natural-source responsive systems. Emerging technologies of Climate Adaptive Curtain Wall A climate-adaptive building curtain wall possesses the ability to repeatedly and reversibly modify its heat transfer characteristics (U-value and SHGC) in response to evolving performance demands and variable environmental conditions. This adaptation aims to enhance the overall efficiency of the building. This capability entails the continuous, autonomous adjustment of the envelope's parameters without relying on external power sources. The primary objective is to elevate the comfort and productivity of individuals within the building by enabling the structure to react sensitively to its surroundings. Additionally, an adaptive shell offers energy-saving benefits: the technology demonstrates a potential 30% reduction in total energy consumption. However, it is not enough to merely advance the technology; it is equally crucial for the new technology to integrate seamlessly into existing infrastructure. To achieve this, the system perpetually alters the building shell's heat transfer properties by circulating air within the hermetically sealed curtain wall panel, achieving the desired effects. Consequently, this pioneering technology will significantly diminish the carbon footprint of tall buildings while enhancing the well-being of their occupants. Solar responsive systems These are based on managing solar energy in different forms. Usually, they use one of the following five types of solar control devices: external, integrated, internal, double skin, and ventilated cavity. The first type of solar energy is solar heat. CABS related to this type of energy are intended to maximize solar heat gains in winter and minimize them in summer. Some examples of this technology are the solar barrel wall (water-filled oil barrels), water bags on the roof, dynamic insulation, and thermochromic materials (which change color with temperature) on walls to obtain an appropriate color and reflectance in response to the outside temperature. Another type of solar energy is solar light. CABS linked with this energy source are based on the control of indoor illuminance levels, distributions, window views, and glare. To accomplish these tasks, there are three main ways: traditional mechanical systems (a wide range of options from venetian blinds up to complex motorized systems), innovative mechanical systems (rotational, retractable, sliding, active daylighting and self-adjusting fenestration schemes), and smart glass or translucent materials (thermochromic, photochromic, electrochromic materials). This last one is used in windows and can achieve its goal in four ways: change in optical properties, lighting direction, visual appearance, and thermophysical properties. Among these smart materials, electrically activated glazing for building façades has gained commercial viability and remains the most visible indicator for smart materials in a building. The third kind of solar energy is solar electricity, which mostly relies on installing integrated photovoltaic systems.
To be considered CABS, such systems must be kinetic, rather than static arrays of fixed panels. Normally this is achieved through the use of heliotropic sun-tracking systems to maximize solar energy capture. Air-flow responsive systems These are the systems related to natural ventilation and wind electricity. The first have the goal of exhausting the excess carbon dioxide, water vapor, odors and pollutants that tend to accumulate in an indoor space, while at the same time replacing it with fresh air, usually drawn from the outside. Some examples of this type of technology are kinetic roof structures and double-skin façades. Other, less common types of CABS are the ones generating wind electricity. These convert wind energy into electrical energy via small-scale wind turbines integrated into buildings, for example wind turbines fitted horizontally between floors. Other examples may be found in buildings such as the Dynamic Tower, the COR Building in Miami and the Greenway Self-park Garage in Chicago. Other natural sources systems These may account for the use of rain, snow and additional natural supplies. Documented examples of such systems are scarce. Based on the time frame scale As dynamic technologies, CABS can show different configurations over time, extending from seconds up to changes appreciable during the lifetime of the building. Thus, the four types of adaptations based on time frame scale are seconds, minutes, hours, and seasons. Variations that take place in just seconds are found randomly in nature; examples are short-term variations in wind speed and direction that may cause shifts in wind-based skins. An example of a shift that occurs within minutes is cloud cover, which has an impact on daylight availability; CABS that use this kind of energy therefore also fall into this category. Some changes that adjust on the order of hours are fluctuations in air temperature and the track of the sun through the sky (although the sun's movement is a continuous process, its track unfolds on this time scale). Finally, some CABS can adapt across seasons, and are therefore expected to offer extensive performance benefits. Based on the scale of change The adaptive behavior of CABS is related to how their mechanisms work. Therefore, they are based either on a change in behavior (macro-scale) or in properties (micro-scale). Macro-scale changes These are often also referred to as “kinetic envelopes”, which implies that a certain kind of observable motion is present, usually resulting in changes in the building shell's configuration and its energy exchange. This is commonly achieved via moving parts that can perform at least one of the following actions: folding, sliding, expanding, creasing, hinging, rolling, inflating, fanning, rotating, curling, etc. Based on their adaptive level, the macro-scale mechanisms can be divided into two types of systems: intelligent building skins and responsive façade systems. The first use a centralized building system and sensing equipment to adjust to weather conditions. They should be capable of learning from the occupants’ reactions and considering future weather fluctuations to respond accordingly. Some examples of this kind of feature are building automation and physically adaptive components such as louvers, sunshades, operable windows or smart material assemblies.
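A minimal sketch of the kind of rule-based sensing-and-actuation loop such a skin might run for a motorized louver; the thresholds are assumed for illustration and are not taken from the literature:

    def louver_angle(solar_irradiance, indoor_temp):
        # Return a louver tilt in degrees: 0 = fully open, 90 = fully closed.
        # Irradiance in W/m^2, temperature in degrees C; thresholds assumed.
        if solar_irradiance > 600 and indoor_temp > 24:
            return 90   # block solar heat gain on hot, sunny afternoons
        if solar_irradiance > 300:
            return 45   # partial shading, still admitting daylight
        return 0        # maximize daylight intake

    print(louver_angle(750, 26))  # 90 -> shading mode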
A responsive façade system has the same functions and performance characteristics as an intelligent building skin, but goes even further by having an interactive aspect. This means it incorporates components such as computational algorithms which enable the building system to regulate itself and learn over time. Therefore, a responsive building skin not only includes mechanisms for satisfying occupants’ desires and learning from their feedback, but also encourages a dual educational path in which both the building and its residents take part in a constant and growing conversation. Micro-scale changes These kinds of changes directly affect the internal structure of a material, either via thermophysical or optical properties, or through the exchange of energy from one form to another. When considering the adaptive level, they usually fall into the smart material category. They are characterized by being altered by outside stimuli such as temperature, heat, moisture, light, and electric or magnetic fields. An important consideration in the use of this type of material is whether its changes are reversible or irreversible. The most attractive property, the one that catches designers’ attention, is immediacy or real-time response, which in turn improves functionality and performance while decreasing energy use. Some examples are: aerogel (a synthetic low-density translucent substance applied in window glazing), phase-change materials (like micro-encapsulated wax), salt hydrates, thermochromic polymer films, shape-memory alloys, temperature-responsive polymers, structure-integrated photovoltaics, and smart thermobimetal self-ventilating skins. Based on the control type There are two different control types: intrinsic and extrinsic regulators. Intrinsic controls These are characterized by being self-adjusting systems, which means that their adaptive capacity is an integral feature. They are stimulated by environmental conditions such as temperature, relative humidity, precipitation, and wind speed and direction. This self-sufficient control is sometimes referred to as “direct control”, since the main drivers are the environmental impacts, without the need for external decision-making devices. Therefore, the need for fewer components may be seen as an advantage, as is the fact that the change is immediate and requires no fuel or electricity. However, a downside is that such a system can only perform under the environmental conditions and variations it was designed for. Extrinsic controls These kinds of controls can take advantage of feedback by changing their behavior based on comparisons of the current state with the desired one. Their structure has three main components: sensors, processors and actuators. Wrapping them up with a logic controller gives them the ability to make changes at two levels: distributed (regulated by local processors) or centralized (via a superior control unit). As an advantage, they offer high levels of control, allowing for manual intervention for occupant satisfaction and well-being. A disadvantage is the need for various components. Based on the spatial scale The spatial scale of CABS refers to the physical size of a system. Therefore, the adaptation can take place at the level of an envelope, a façade, a façade component or a façade subcomponent. Based on the inspirational scale One of the fundamental characteristics of human beings is the ability to create new things. As a starting point, inspiration is needed, which can come from nature or other sources such as one's own ideas.
The use of organisms’ morphological or physiological properties or natural behaviors in non-biological sciences is known as biomimetics and is commonly applied in building science. CABS that draw on this source of inspiration are known as biomimetic adaptive building skins (Bio-ABS). Thus, variations in properties and behaviors are transferred from biological models that provide environmentally, mechanically, structurally or material-wise efficient strategies to buildings. Within biomimetic adaptive building skins, there are two ways of categorization. The first is based on the biomimetic approach, and discriminates according to the order in which the problem is solved. There are two possibilities: initiated through the identification of a technical problem to be solved by a biological solution (top-down), or through the examination of a biological solution to solve a technical problem (bottom-up). The second categorization of Bio-ABS is based on the adaptation level, which offers three types: morphological (based on form, structure and texture), physiological, or behavioral. Based on the development stage This categorization embraces any analysis that measures the performance of a given CABS project. The developmental stages can be labeled as preliminary model (PM), simulated model (SM), pilot-scale prototype (PSP) and full-scale application (FSA). Based on the number of functions This classification relates to the number of environmental factors that a given CABS adjusts to independently when activated by stimuli. Some of these functions are: ventilating, heating/cooling, improving air quality, regulating humidity levels, changing color, and regulating energy demand. In this way, CABS can be monofunctional or multifunctional. Based on the performance task This last differentiation accounts for the purpose, and for the evaluation of how effectively the adaptation is being achieved, and is therefore divided into two subcategories. The first is the performance target, which relates to the building aspect that is being assessed. Some examples are: indoor air quality, thermal comfort, visual comfort and energy demand. The second covers the measures and metrics of improvement. Some usual parameters measured are: displacement, daylight intake, humidification/dehumidification, heat dissipation, airflow, permeability and cooling. Motivations for the implementation of CABS Buildings are exposed to a wide variety of changing conditions during their life cycle. Weather conditions vary not only throughout the year but also throughout the day. The occupants’ load, activities, and preferences also vary constantly. Responding to this dynamism from an energy and comfort point of view, CABS offer the ability to actively moderate the exchange of energy across a building's skin over time. By doing this in response to predominant meteorological conditions and comfort needs, they introduce good energy-saving opportunities. While any building, merely by being constructed, generates changes in its environment (such as in solar patterns and wind variations), having the ability to maximize the use of exterior resources mitigates its environmental consequences. Thus, CABS use the “existing natural energies to light, heat and ventilate the spaces”, obtaining maximum thermal comfort conditions. As an example, by incorporating photovoltaic principles into the glass intended to be used in façades, the new skins will generate local, non-polluting electricity to supply the building's energy needs.
CABS also promote the use of daylight, which, when it comes from a window with an exterior view, “results in increased productivity, mental function, and memory recall”. The building envelope is one of the most important design parameters determining the indoor physical environment as it relates to thermal comfort, visual comfort, and even occupant working efficiency. To promote the creation of healthier and more productive spaces, not only daylight but also natural ventilation and other external resources must be considered. These are tasks currently performed by CABS as environment-based technologies. Thus, CABS not only have better performance than static envelopes, but also “provide an exciting aesthetic, the aesthetic of change”. The fact that CABS respond to changing conditions in a flexible way gives them the opportunity to maintain a high level of performance during real-time changes. This is achieved through anticipation and reaction; the systems can therefore handle environmental uncertainty, which is greatly valued. This flexibility appears in CABS in three ways: adaptability (acting as climate mediators between indoors and outdoors), multi-ability (taking on multiple and new roles over time), and evolvability (the ability to handle changes over a longer time horizon). The use of dynamic and sustainable technologies offers the possibility of better environmental and economic performance of building envelopes. For example, by having heat-avoidance and passive-cooling features, buildings can be less expensive because of lower cooling energy needs and, therefore, reduced mechanical equipment requirements. Even though the demand for satisfying working environments and economic performance has increased, CABS have the potential to meet this goal. Drawbacks for the implementation of CABS As Mols et al. claim, CABS is an immature concept, needing more research due to the lack of successful applications in practice. Likewise, as a consequence of being an unexplored concept, “the true value of making building shells adaptive is yet an unknown, and we can only guess how much of this potential is accessible with existing concepts and technologies”. At its current stage, the concept is more theoretical than practical, being backed up by simulation technologies rather than constructed projects. Kuru et al. add to this point by noting that, in their research, academic projects were more frequent than real-world industrial ones. Since the concept of CABS relies on change, it is sometimes associated with devices and technologies that require more operational and maintenance activity than static envelopes. This has several implications, such as greater attention to possible failures, the need for repairs, and on some occasions higher operational and maintenance costs. The need for a centralized control center may sometimes compound this issue. Therefore, the choice of technology is a decision that must be taken with care. However, Lechner states that the current reliability of cars demonstrates that movable systems can be made that require few if any repairs over long periods. He finishes this idea by saying that “with good design and materials, exposed building systems have become extremely reliable even with exposure to saltwater and ice in the winter”. Therefore, although there is concern over the operation and maintenance of these types of technologies, a solution seems to lie in the careful choice of the type, materials and design of such devices.
As dynamic mechanisms, CABS may depend on energy availability. By contrast, passive technologies do not present this problem because they do not act actively, which gives the system a higher robustness towards change. Their independence from any external input (electricity, thermal energy or data) enables continuing functionality, even in the case of power failure. Therefore, to permit continuous operation, backup alternatives such as a secondary energy source are likely to be suggested for some CABS. Finally, the lack of control over several CABS may be seen as a flaw. There are some CABS, like the ones relying on smart materials, that cannot be controlled by the occupant. In these cases, if they do not satisfy the occupants’ desires, they generate an unfortunate outcome. Thus, the possibility of controlling a given technology may be seen as a strength or a weakness depending on the device, the intention and the task that needs to be achieved. Current status and use of these technologies Historically, the façade has been the main load-bearing structural element of buildings, restricting its functionality and materiality. In the contemporary period, the façade is often liberated from its structural task, allowing more flexibility to fit diverse demands such as saving/generating energy, providing thermal properties for comfort, and adaptability to changing conditions. Modern construction methods, developments in material sciences, dropping prices of electronic devices, and the availability of controllable kinetic façade components now offer rich possibilities for innovative building envelope solutions that respond better to the environmental context, thereby allowing the façade to “behave” as a living organism. However, much of the current work on CABS is focused on trying to better understand the concepts behind these technologies so that they can be transferred and implemented in practical ways in buildings. Kuru et al. identify three major limitations in biomimetic adaptive building skins (Bio-ABS): level of development, regulation of diverse environmental factors, and performance evaluation. They suggest that, as is normal for any immature concept, the majority of the intended projects are conceptual. One of the main reasons is the challenge of combining multiple disciplines like architecture, biomimetics and engineering in order to develop, analyze and measure performance. Moreover, procedures to identify and transfer biological solutions into architectural systems are limited. Current software has limitations in terms of specific tools and methods that can mimic the performance of Bio-ABS. Adding to this issue, the transition from digital models to physical application requires the teamwork of experts from different fields, which can sometimes be hard to achieve. Another current deficiency is the focus on monofunctional CABS, which amounts to a missed opportunity for improvement: the idea behind CABS is to have envelopes that can respond to various internal and external factors, not just one per building skin. Moreover, the support for and development rate of different CABS tasks has been uneven. For example, the research of Kuru et al. shows that light-management CABS are the most comprehensively developed, while energy-regulating ones are the least studied. Thus, while a boost in the implementation of lighting-management CABS is likely, those related to energy regulation may lag behind.
Similarly, the research currently conducted is characterized by fragmented developments, some of it going in the direction of materials science (e.g. switchable glazing, adaptable thermal mass, and variable insulation), and some in creative design processes. As a consequence of the drawbacks presented above, the most common way of pursuing energy efficiency in buildings at present is a whole-building (not envelope-only) approach. There are few examples of façades that incorporate passive or smart technologies to create a comfortable indoor space, aside from shading technologies such as blinds or louvers and operable windows for ventilation. Therefore, future improvements in this field may be required to overcome these issues. Future improvements on CABS Several challenges must be faced to improve the growth of CABS. The first is the creation of custom-made software that can analyze dynamic systems based on a climatic pattern. Moreover, if the software can anticipate and examine the future consequences of present actions, more accurate results can be obtained. This could be improved by introducing logic controls into CABS software. Finally, more user-friendly interfaces could ease the usage of these tools. Following this idea, not only the software but also the scope of topics that CABS currently cover may be extended. Therefore, the creation of new ways to manage and control energy, water and heat must be explored. One way to do this is by engineering ways to mimic biological methods and translate them into practical applications for buildings; inspiration from nature seems to have great potential. A common characteristic of developing ideas is that to grow and prosper, risks must be taken, thereby opening the possibility of failure. CABS are not the exception, and to be successful developers must take the risks, for example those related to long payback periods and high operating costs. Mols et al. mention that “If the developer chooses to take the risks, the outcomes are claimed to be beneficiary”. Some of these risks stem from the uncertainty behind CABS. A way to mitigate them is by monitoring operational performance and by conducting post-occupancy evaluations, growing the body of data on the actual performance of current CABS, which is at present lacking in the literature. In conclusion, the idea of CABS needs the support and commitment of all building stakeholders in order to flourish. Notable examples Although the concept of CABS is still relatively new, several hundred concepts can be found in buildings all over the world. The following list shows an overview of notable examples. Built examples Al Bahar Towers, Aedas, Abu Dhabi Arab World Institute, Jean Nouvel, Paris, France Heliotrope, Rolf Disch, Freiburg, Germany Burke Brise Soleil – Quadracci Pavilion, Milwaukee Art Museum, Milwaukee, Wisconsin, United States Surry Hills Library, Francis-Jones Morehen Thorp, Sydney, Australia Bengt Sjostrom Theatre, Studio Gang Architects, Rockford, Illinois, United States Kuggen movable sunscreen, Wingårdh arkitektkontor, Gothenburg, Sweden The Barcelona Media-ICT Building, Barcelona, Spain Terrence Donnelly Centre for Cellular and Biomolecular Research, Toronto, Canada Devonshire Building, University of Newcastle New San Francisco Federal Building, San Francisco, United States References Architectural design Building engineering Sustainable building
Climate-adaptive building shell
[ "Engineering" ]
5,313
[ "Sustainable building", "Building engineering", "Construction", "Civil engineering", "Architectural design", "Design", "Architecture" ]
44,409,131
https://en.wikipedia.org/wiki/Technology%20transfer%20in%20computer%20science
Technology transfer in computer science refers to the transfer of technology developed in computer science or applied computing research, from universities and governments to the private sector. These technologies may be abstract, such as algorithms and data structures, or concrete, such as open source software packages. Examples Notable examples of technology transfer in computer science include: References Computer science Computer science Computing-related lists
Technology transfer in computer science
[ "Technology" ]
74
[ "Computing-related lists", "Computer science" ]
44,409,150
https://en.wikipedia.org/wiki/International%20Building%20Performance%20Simulation%20Association
The International Building Performance Simulation Association (IBPSA) is a non-profit international society of building performance simulation researchers, developers and practitioners, dedicated to improving the built environment. IBPSA aims to provide a forum for researchers, developers and practitioners to review building model developments, encourage the use of software programs, address standardization, and accelerate integration and technology transfer, via the exchange of knowledge and the organization of national and international conferences. Organization IBPSA is an international organization with regional affiliate organizations around the world. IBPSA is governed by a board of directors elected by the membership of all the regional affiliates. In addition to the president, vice-president, secretary, and treasurer, the board is made up of members-at-large and representatives sent by the regional affiliates. Publications Newsletter ibpsaNEWS, IBPSA's online newsletter, is published twice per year. The current edition and past issues are available at the IBPSA website. Conference proceedings IBPSA is the organizer of the biennial international IBPSA Building Simulation Conference and Exhibition. Building Simulation is the premier international event in the field of building performance simulation. In addition to the international conferences, some regional affiliates organize local conferences as well. All papers presented in the proceedings of these conferences are available at IBPSA's website. Journal The Journal of Building Performance Simulation (JBPS) is the official peer-reviewed scientific journal of the International Building Performance Simulation Association. JBPS publishes articles of the highest quality that are original, cutting-edge, well-researched and of significance to the international community. The journal also offers a forum for original review papers and researched case studies. JBPS is published by Taylor & Francis Group, and co-edited by Dr. Jan Hensen (Eindhoven University of Technology) and Prof. Ian Beausoleil-Morrison (Carleton University). Regional affiliates Membership of IBPSA is organized through regional affiliates. These affiliates plan and coordinate different types of activities, such as conferences, software workshops, symposia, etc. There are currently 31 regional IBPSA affiliates, spanning 5 continents: Argentina, Australasia, Brazil, Canada, China, Chile, Czechia, Danube, Egypt, England, France, Germany, India, Indonesia, Ireland, Italy, Japan, Korea, Mexico, Netherlands + Flanders, Nordic, Poland, Russia, Scotland, Singapore, Slovakia, Spain, Switzerland, Turkey, USA, and Vietnam. Support for building performance simulation users IBPSA has partnered with ASHRAE and IESNA to develop the Building Energy Modeling Professional Certification scheme. In addition, IBPSA supports mailing lists and question-and-answer websites that stimulate knowledge exchange and discussion among users of building performance simulation in research and practice. Awards IBPSA has three awards and also recognizes Fellows. See also Air Infiltration and Ventilation Centre ASHRAE Building performance simulation Building performance CIBSE References Architecture organizations Building engineering organizations Energy conservation Energy organizations Low-energy building
International Building Performance Simulation Association
[ "Engineering" ]
606
[ "Architecture organizations", "Building engineering", "Building engineering organizations", "Energy organizations", "Architecture" ]
44,409,262
https://en.wikipedia.org/wiki/Lanierone
Lanierone is a pheromone emitted by the pine engraver and an odorous volatile component of saffron. References Enols Pheromones Saffron
Lanierone
[ "Chemistry" ]
37
[ "Enols", "Pheromones", "Chemical ecology", "Functional groups" ]
44,409,729
https://en.wikipedia.org/wiki/Individual%20mobility
Individual human mobility is the study that describes how individual humans move within a network or system. The concept has been studied in a number of fields originating in the study of demographics. Understanding human mobility has many applications in diverse areas, including the spread of diseases, mobile viruses, city planning, traffic engineering, financial market forecasting, and nowcasting of economic well-being. Data In recent years, there has been a surge in large data sets available on human movements. These data sets are usually obtained from cell phone or GPS data, with varying degrees of accuracy. For example, cell phone data is usually recorded whenever a call or a text message has been made or received by the user, and contains the location of the tower that the phone has connected to as well as the time stamp. In urban areas, the user and the telecommunication tower might be only a few hundred meters away from each other, while in rural areas this distance might well be in the region of a few kilometers. Therefore, there is a varying degree of accuracy when it comes to locating a person using cell phone data. These datasets are anonymized by the phone companies so as to hide and protect the identity of actual users. As an example of their usage, researchers analyzed the trajectories of 100,000 cell phone users over a period of six months, while on a much larger scale the trajectories of three million cell phone users were analyzed. GPS data are usually much more accurate, even though, because of privacy concerns, they are usually much harder to acquire. Massive amounts of GPS data describing human mobility are produced, for example, by on-board GPS devices on private vehicles. The GPS device automatically turns on when the vehicle starts, and the sequence of GPS points the device produces every few seconds forms a detailed mobility trajectory of the vehicle. Some recent scientific studies compared the mobility patterns emerging from mobile phone data with those emerging from GPS data. Researchers have been able to extract very detailed information about the people whose data are made available to the public. This has sparked a great amount of concern about privacy issues. As an example of liabilities that might arise, New York City released 173 million individual taxi trips. City officials used a very weak cryptographic algorithm to anonymize the license number and medallion number, which is an alphanumeric code assigned to each taxi cab. This made it possible for hackers to completely de-anonymize the dataset, and some were even able to extract detailed information about specific passengers and celebrities, including their origin and destination and how much they tipped. Characteristics At the large scale, when the behaviour is modelled over a period of relatively long duration (e.g. more than one day), human mobility can be described by three major components: the trip distance distribution, the radius of gyration, and the number of visited locations. Brockmann, by analysing banknotes, found that the probability of travel distance follows a scale-free random walk known as a Lévy flight, of the form $P(\Delta r) \sim \Delta r^{-(1+\beta)}$ with $0 < \beta < 2$. This was later confirmed by two studies that used cell phone data and GPS data to track users. The implication of this model is that, as opposed to other more traditional forms of random walks such as Brownian motion, human trips tend to be of mostly short distances with a few long-distance ones.
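A minimal sketch of drawing trip distances from this heavy-tailed form by inverse-transform sampling; the exponent and minimum trip length are assumed for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    beta, r_min = 0.6, 1.0            # assumed exponent and cutoff (km)

    def levy_trips(n):
        # Inverse-transform sampling of P(dr) ~ dr^-(1+beta), dr >= r_min.
        u = rng.random(n)
        return r_min * u ** (-1.0 / beta)

    trips = levy_trips(10_000)
    # The median stays small while the maximum is orders of magnitude larger,
    # i.e. mostly short trips punctuated by rare very long ones.
    print(np.median(trips), trips.max())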
In Brownian motion, the distribution of trip distances is governed by a bell-shaped curve, which means that the next trip is of a roughly predictable size (the average), whereas in a Lévy flight it might be an order of magnitude larger than the average. Some people are inherently inclined to travel longer distances than the average, and the same is true for people with a lesser urge for movement. The radius of gyration is used to capture just that, and it indicates the characteristic distance travelled by a person during a time period t. Each user, within his radius of gyration $r_g$, will choose his trip distances according to the distribution $P(\Delta r)$ above. The third component models the fact that humans tend to visit some locations more often than would happen under a random scenario. For example, home, the workplace or favorite restaurants are visited much more than many other places within a user's radius of gyration. It has been discovered that the number of distinct locations $S(t)$ visited by an individual grows as $S(t) \sim t^{\mu}$ with $\mu \approx 0.6$, which indicates a sublinear growth in the number of places visited. These three measures capture the fact that most trips happen between a limited number of places, with less frequent travel to places outside of an individual's radius of gyration. Predictability Although human mobility is modeled as a random process, it is surprisingly predictable. By measuring the entropy of each person's movement, it has been shown that there is a 93% potential predictability. This means that although there is great variance in the type of users and the distances that each of them travels, their overall characteristics are highly predictable. The implication is that, in principle, it is possible to accurately model the processes that depend on human mobility patterns, such as disease or mobile virus spreading patterns. On the individual scale, daily human mobility can be explained by only 17 network motifs. Each individual characteristically shows one of these motifs over a period of several months. This opens up the possibility of reproducing daily individual mobility using a tractable analytical model. Applications Infectious diseases spread across the globe usually because of the long-distance travel of carriers of the disease. These long-distance travels are made using air transportation systems, and it has been shown that "network topology, traffic structure, and individual mobility patterns are all essential for accurate predictions of disease spreading". On a smaller spatial scale, the regularity of human movement patterns and its temporal structure should be taken into account in models of infectious disease spread. Cellphone viruses that are transmitted via Bluetooth are greatly dependent on human interaction and movement. With more people using similar operating systems for their cellphones, it is becoming much easier for a virus epidemic to occur. In transportation planning, leveraging the characteristics of human movement, such as the tendency to travel short distances with few but regular bursts of long-distance trips, novel improvements have been made to trip distribution models, specifically to the gravity model of migration. See also Mobilities Private transport Network theory Personal transporter Personal air vehicle Personal rapid transit Automobile Mass automobility Car dependence Bicycle Lévy flight Scale-free network Public transport Transportation Geography and Network Science (wikibook) References Networks Network analysis Social systems Self-organization Information economy
Individual mobility
[ "Mathematics" ]
1,278
[ "Self-organization", "Dynamical systems" ]
44,409,977
https://en.wikipedia.org/wiki/Structural%20cut-off
The structural cut-off is a concept in network science which imposes a degree cut-off in the degree distribution of a finite-size network due to structural limitations (such as the simple graph property). Networks with vertices of degree higher than the structural cut-off will display structural disassortativity. Definition The structural cut-off is a maximum degree cut-off that arises from the structure of a finite-size network. Let $E_{kk'}$ be the number of edges between all vertices of degree $k$ and $k'$ if $k \neq k'$, and twice that number if $k = k'$. Given that multiple edges between two vertices are not allowed, $E_{kk'}$ is bounded by the maximum number of edges between the two degree classes, $m_{kk'} = \min\{k N_k,\, k' N_{k'},\, N_k N_{k'}\}$, where $N_k = N P(k)$ is the number of vertices of degree $k$. Then, the ratio can be written $r_{kk'} \equiv \frac{E_{kk'}}{m_{kk'}} = \frac{\langle k \rangle P(k,k')}{\min\{k P(k),\, k' P(k'),\, N P(k) P(k')\}}$, where $\langle k \rangle$ is the average degree of the network, $N$ is the total number of vertices, $P(k)$ is the probability that a randomly chosen vertex will have degree $k$, and $P(k,k')$ is the probability that a randomly picked edge will connect on one side a vertex with degree $k$ with a vertex of degree $k'$. To be in the physical region, $r_{kk'} \leq 1$ must be satisfied. The structural cut-off $k_s(N)$ is then defined by $r_{k_s k_s} = 1$. Structural cut-off for neutral networks The structural cut-off plays an important role in neutral (or uncorrelated) networks, which do not display any assortativity. The cut-off takes the form $k_s(N) \sim (\langle k \rangle N)^{1/2}$, which is finite in any real network. Thus, if vertices of degree $k > k_s(N)$ exist, it is physically impossible to attach enough edges between them to maintain the neutrality of the network. Structural disassortativity in scale-free networks In a scale-free network the degree distribution is described by a power law with characteristic exponent $\gamma$, $P(k) \sim k^{-\gamma}$. In a finite scale-free network, the maximum degree of any vertex (also called the natural cut-off) scales as $k_{\max}(N) \sim N^{1/(\gamma - 1)}$. Then, networks with $\gamma < 3$, which is the regime of most real networks, will have $k_{\max}(N)$ diverging faster than the structural cut-off $k_s(N)$ of a neutral network. This has the important implication that an otherwise neutral network may show disassortative degree correlations if $k_{\max} > k_s$. This disassortativity is not a result of any microscopic property of the network, but is purely due to the structural limitations of the network. In the analysis of networks, for a degree correlation to be meaningful, it must be checked that the correlations are not of structural origin. Impact of the structural cut-off Generated networks A network generated randomly by a network generation algorithm is in general not free of structural disassortativity. If a neutral network is required, then structural disassortativity must be avoided. There are a few methods by which this can be done: Allow multiple edges between the same two vertices. While this means that the network is no longer a simple network, it allows for sufficient edges to maintain neutrality. Simply remove all vertices with degree $k > k_s$. This guarantees that no vertex is subject to structural limitations in its edges, and the network is free of structural disassortativity. Real networks In some real networks, the same methods as for generated networks can also be used. In many cases, however, it may not make sense to consider multiple edges between two vertices, or such information is not available. The high-degree vertices (hubs) may also be an important part of the network that cannot be removed without changing other fundamental properties. To determine whether the assortativity or disassortativity of a network is of structural origin, the network can be compared with a degree-preserving randomized version of itself (without multiple edges).
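A minimal sketch of that comparison, assuming the networkx library and an arbitrary scale-free example graph in place of real data:

    import networkx as nx

    G = nx.barabasi_albert_graph(1000, 3, seed=42)   # stand-in "real" network

    # Degree-preserving randomization: double-edge swaps keep every vertex's
    # degree fixed while rewiring who connects to whom (no multi-edges).
    R = G.copy()
    nx.double_edge_swap(R, nswap=10 * R.number_of_edges(), max_tries=10**7)

    print("real:      ", nx.degree_assortativity_coefficient(G))
    print("randomized:", nx.degree_assortativity_coefficient(R))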
Then any assortativity measure of the randomized version will be a result of the structural cut-off. If the real network displays any additional assortativity or disassortativity beyond the structural disassortativity, then it is a meaningful property of the real network. Other quantities that depend on the degree correlations, such as some definitions of the rich-club coefficient, will also be impacted by the structural cut-off. See also Assortativity Degree distribution Complex network Rich-club coefficient References Network theory
Structural cut-off
[ "Mathematics" ]
802
[ "Network theory", "Mathematical relations", "Graph theory" ]
44,410,336
https://en.wikipedia.org/wiki/Evolution%20of%20snake%20venom
Venom in snakes and some lizards is a form of saliva that has been modified over evolutionary history into a toxic secretion. In snakes, venom has evolved to kill or subdue prey, as well as to perform other diet-related functions. While snakes occasionally use their venom in self defense, this is not believed to have had a strong effect on venom evolution. The evolution of venom is thought to be responsible for the enormous expansion of snakes across the globe. The evolutionary history of snake venom is a matter of debate. Historically, snake venom was believed to have evolved once, at the base of the Caenophidia, or derived snakes. Molecular studies published beginning in 2006 suggested that venom originated just once among a putative clade of reptiles, called Toxicofera, approximately 170 million years ago. Under this hypothesis, the original toxicoferan venom was a very simple set of proteins that were assembled in a pair of glands. Subsequently, this set of proteins diversified in the various lineages of toxicoferans, including Serpentes, Anguimorpha, and Iguania; several snake lineages also lost the ability to produce venom. The Toxicoferan hypothesis was challenged by studies in the mid-2010s, including a 2015 study which found that venom proteins had homologs in many other tissues in the Burmese python. The study therefore suggested that venom had evolved independently in different reptile lineages, including once in the caenophidian snakes. Venom containing most extant toxin families is believed to have been present in the last common ancestor of the Caenophidia: these toxins subsequently underwent tremendous diversification, accompanied by changes in the morphology of venom glands and delivery systems. Snake venom evolution is thought to be driven by an evolutionary arms race between venom proteins and prey physiology. The common mechanism of evolution is thought to be gene duplication followed by natural selection for adaptive traits. The adaptations produced by this process include venom more toxic to specific prey in several lineages, proteins that pre-digest prey, and a method to track down prey after a bite. These various adaptations of venom have also led to considerable debate about the definition of venom and of venomous snakes. Changes in the diet of a lineage have been linked to atrophy of the venom. Evolutionary history The origin of venom is thought to have provided the catalyst for the rapid diversification of snakes in the Cenozoic period, particularly for the Colubridae and their colonization of the Americas. Scholars suggest that the reason for this huge expansion was the shift from a mechanical to a biochemical method of subduing prey. Snake venoms attack biological pathways and processes that are also targeted by the venoms of other taxa; for instance, calcium channel blockers have been found in snakes, spiders, and cone snails, suggesting that venom exhibits convergent evolution. Venom is common among derived snake families. Venom containing most extant toxin families is believed to have been present in the last common ancestor of the Caenophidia, also called Colubroidea. These toxins subsequently underwent tremendous diversification, accompanied by changes in the morphology of venom glands and delivery systems. This diversification is linked to the rapid global radiation of the advanced snakes. The tubular or grooved fangs snakes use to deliver their venom to their target have evolved multiple times, and are an example of convergent evolution.
The tubular fangs common to front-fanged snakes are believed to have evolved independently in Viperidae, Elapidae, and Atractaspidinae. Until the use of gene sequencing to create phylogenetic trees became practical, phylogenies were created on the basis of morphology. Such traditional phylogenies suggested that venom originated along multiple branches among Squamata approximately 100 million years ago: in the Caenophidia, or derived snakes, and in the lizard genus Heloderma. Studies using nuclear gene sequences in the mid-2000s and early 2010s found the presence of venom proteins in the lizard clades Anguimorpha and Iguania similar to those of snakes, and suggested that together with Serpentes, these formed a clade, which they named "Toxicofera". This led to the theory that venom originated only once within the entire lineage approximately 170 million years ago. This ancestral venom was described as consisting of a very simple set of proteins, assembled in a pair of glands. The venoms of the different lineages then diversified and evolved independently, along with their means of injecting venom into prey. This diversification included the independent evolution of front-fanged venom delivery from the ancestral rear-fanged venom delivery system. The single origin hypothesis also suggests that venom systems subsequently atrophied, or were completely lost, independently in a number of lineages. The phylogenetic position of Iguania within Toxicofera is supported by most molecular studies, but not by morphological ones. The "Toxicoferan hypothesis" was subsequently challenged. A study performed in 2014 found that homologs of 16 venom proteins, which had been used to support the single origin hypothesis, were all expressed at high levels in a number of body tissues. The authors therefore suggested that previous research, which had found venom proteins to be conserved across the supposed Toxicoferan lineage, might have misinterpreted the presence of more generic "housekeeping" genes across this lineage, as a result of only sampling certain tissues within the reptiles' bodies. Therefore, the authors suggested that instead of evolving just once in an ancestral reptile, venom evolved independently in multiple lineages, including once prior to the radiation of the "advanced" snakes. A 2015 study found that homologs of the so-called "toxic" genes were present in numerous tissues of a non-venomous snake, the Burmese python. One of the authors stated that they had found homologs to the venom genes in many tissues outside the oral glands, where venom genes might be expected. This demonstrated the weaknesses of only analyzing transcriptomes (the total messenger RNA in a cell). The team suggested that pythons represented a period in snake evolution before major venom development. The researchers also found that the expansion of venom gene families occurred mostly in highly venomous caenophidian snakes (also referred to as "colubrid snakes"), thus suggesting that most venom evolution took place after this lineage diverged from other snakes. The debate over the Toxicoferan hypothesis is driven in part by disagreement over the definition of a venom. As of 2022, the Toxicoferan hypothesis remains a prevalent view. Mechanisms of evolution The primary mechanism for the diversification of venom is thought to be the duplication of gene coding for other tissues, followed by their expression in the venom glands. The proteins then evolved into various venom proteins through natural selection. 
This process, known as the birth-and-death model, is responsible for several of the protein recruitment events in snake venom. These duplications occurred in a variety of tissue types with a number of ancestral functions. Notable examples include 3FTx, ancestrally a neurotransmitter found in the brain, which has adapted into a neurotoxin that binds and blocks acetylcholine receptors. Another example is phospholipase A2 (PLA2) type IIA, ancestrally involved with inflammatory processes in normal tissue, which has evolved into venom capable of triggering lipase activity and tissue destruction. The change in function of PLA2, in particular, has been well documented; there is evidence of several separate gene duplication events, often associated with the origin of new snake species. Non-allelic homologous recombination induced by transposon invasion (or recombination between DNA sequences that are similar, but not alleles) has been proposed as the mechanism of duplication of PLA2 genes in rattlesnakes, as an explanation for its rapid evolution. These venom proteins have also occasionally been recruited back into tissue genes. Gene duplication is not the only way that venom has become more diverse. There have been instances of new venom proteins generated by alternative splicing. The Elapid snake Bungarus fasciatus, for example, possesses a gene that is alternatively spliced to yield both a venom component and a physiological protein. Further diversification may have occurred by gene loss of specific venom components. For instance, the rattlesnake ancestor is believed to have had the PLA2 genes for a heterodimeric neurotoxin now found in Crotalus scutulatus, but those genes are absent in modern non-neurotoxic Crotalus species; the PLA2 genes for the Lys49-myotoxin supposedly existing in the common ancestor of rattlesnakes were also lost several times on recent lineages to extant species Domain loss has also been implicated in venom neofunctionalization. Investigation of the evolutionary history of viperid SVMP venom genes revealed repeated occasions of domain loss, coupled with significant positive selection in most of the phylogenetic branches where domain loss was thought to have occurred. Venom toxins have also evolved via the gene "hijacking" or "co-opting", or the change in function of unrelated genes. A 2021 study suggested that co-opting explained the evolution of most types of toxins, but not that of the toxins that are most abundant in snake venom. Protein recruitment events have occurred at different points in the evolutionary history of snakes. For example, the 3FTX protein family is absent in the viperid lineage, suggesting that it was recruited into snake venom after the viperid snakes branched off from the remaining colubroidae. PLA2 is thought to have been recruited at least two separate times into snake venom, once in elapids and once in viperids, displaying convergent evolution of this protein into a toxin. A 2019 study suggested that gene duplication could have allowed different toxins to evolve independently, allowing snakes to experiment with their venom profiles and explore new and effective venom formulations. This was proposed as one of the ways snakes have diversified their venom formulations through millions of years of evolution. The various recruitment events had resulted in snake venom evolving into a very complex mixture of proteins. 
The venom of rattlesnakes, for example, includes nearly 40 different proteins from different protein families, and other snake venoms have been found to contain more than 100 distinct proteins. The composition of this mixture has been shown to vary geographically, and to be related to the prey species available in a certain region. Snake venom has generally evolved very quickly, with changes occurring faster in the venom than in the rest of the organism. Selection pressure Long-standing hypotheses of snake venom evolution have assumed that most snakes inject far more venom into their prey than is required to kill them; thus, venom composition would not be subject to natural selection. This is known as the "overkill" hypothesis. However, recent studies of the molecular history of snake venom have contradicted this, instead finding evidence of rapid adaptive evolution in many different clades, including the carpet vipers, Echis, the ground rattlesnakes, Sistrurus, and the Malayan pit viper, as well as generally in the diversification of PLA2 proteins. There is phylogenetic evidence of positive selection and rapid rates of gene gain and loss in venom genes of Sistrurus taxa feeding on different prey. As of 2019, evidence existed both of "overkill" occurring in some lineages, and rapid adaptive evolution, and an evolutionary arms race with prey physiology, in many others. The genes that code for venom proteins in some snake genera have a proportion of synonymous mutations that is lower than would be expected if venom were evolving through neutral evolutionary processes; the non-synonymous mutation rate, however, was found higher in many cases, indicating directional selection. In addition, snake venom is metabolically costly for a snake to produce, which scientists have suggested as further evidence that a selection pressure exists on snake venom (in this case, to minimize the volume of venom required). The use of model organisms, rather than snakes' natural prey, to study prey toxicity, has been suggested as a reason why the "overkill" hypothesis may have been overemphasized. However, the pitviper genus Agkistrodon has been found to be an exception to this; the composition of venom in Agkistrodon has been found to be related to the position of the species within the phylogeny, suggesting that those venoms have evolved mostly through neutral processes (mutation and genetic drift), and that there may be significant variation in the selection pressure upon various snake venoms. Several studies have found evidence that venom and resistance to venom in prey species have evolved in a coevolutionary arms race. For example, wood rats of the genus Neotoma have a high degree of resistance to the venom of rattlesnakes, suggesting that the rats have evolved in response to the snake venom, thus renewing selection pressure upon the snakes. Resistance to venoms of sympatric predatory snake species has been found in eels, ground squirrels, rock squirrels, and Eastern gray squirrels. All these studies suggested a co-evolutionary arms race between prey and predator, indicating another potential selection pressure on snake venom to increase or innovate toxicity. The selection pressure on snake venom is thought to be selecting for functional diversity within the proteins in venom, both within a given species, and across species. In addition to prey physiology, evidence exists that snake venom has evolved in response to the physiology of predators. 
Besides diet, there are other possible pressures on snake venom composition. A 2019 study found that larger body mass and smaller ecological habitats were correlated with increased venom yield. Another study found that weather and temperature had stronger correlations with snake venom than diets or types of prey. While venomous snakes use their venoms in defence (hence the problem of snakebite in humans), it is not well known to what extent natural selection for defence has driven venom evolution. The venoms of the Texas coral snake, Micrurus tener, and other species of Micrurus have been found to contain toxins with specific pain-inducing activity, suggesting a defensive function. However, a questionnaire survey of snakebite patients bitten by a wide variety of venomous species showed that pain after most snakebites is of slow onset, arguing against widespread selection for defence. The spitting of venom displayed by some species of spitting cobra is solely a defensive adaptation. A 2021 study showed that the venoms of all three lineages of spitting cobra convergently evolved higher levels of sensory neuron activation (i.e., cause more pain) than the venoms of non-spitting cobras, through the synergistic action of cytotoxins and Phospholipase A2 toxins, indicating selection for a defensive function. In contrast to venom composition and toxicity to specific lineages, venom yield, or the quantity of venom produced by an individual snake, has not been found to vary with the body-mass of prey animals, and instead to vary with the body-mass of snakes producing it. Yield increases with snake body-mass in a consistent with the hypothesis that snakes invest a constant proportion of metabolic output into producing venom, which is metabolically costly. Functional adaptations Snakes use their venom to kill or subdue prey, as well as for other diet-related functions, such as digestion. Current scientific theory suggests that snake venom is not used for defense or for competition between members of the same species, unlike in other taxa. Thus adaptive evolution in snake venom has resulted in several adaptations with respect to these diet-related functions that increase the fitness of the snakes that carry them. This is also reflected in variation in venom composition within a species; venom is known to vary geographically, and by age and size, likely reflecting variation in the prey consumed by different groups within a species. Geographic variation is also present at the species level; island snakes tend to have less complex venoms, while those living in highly productive habitats have more complex venoms, suggesting a biogeographic pattern. Prey-specific venom toxicity Venom that is toxic only to a certain taxon, or strongly toxic only to a certain taxon, has been found in a number of snakes, suggesting that these venoms have evolved via natural selection to subdue preferred prey species. Examples of this phenomenon have been found in the Mangrove snake Boiga dendrophila, which has a venom specifically toxic to birds, as well as in the genera Echis and Sistrurus, and in sea snakes. The venom of Spilotes sulphureus which has two components, one of which is toxic to lizards but non-toxic in mammals, while the other is toxic in mammals and non-toxic in lizards. However, while several snakes possess venom that is highly toxic to their preferred prey species, the reverse correlation is not necessarily true: the venoms of several snakes are toxic to taxa that they do not consume in high proportions. 
Most snake venom, for instance, is highly toxic to lizards, although the proportion of lizard prey varies among snake species. This has led researchers to suggest that toxicity to a certain taxon is nearly independent of toxicity to another distantly related taxon. The natural diets of snakes in the widespread viper genus Echis are highly varied, and include arthropods, such as scorpions, as well as vertebrates. Various Echis species consume different quantities of arthropods in their diet. A 2009 study injected scorpions with the venom of various Echis species, and found a high correlation between the proportion of arthropods that the snakes consumed in their natural habitat, and the toxicity of their venom to scorpions. The researchers also found evidence that the evolution of venom more toxic to arthropods was related to an increase in the proportion of arthropods in the snakes' diet, and that diet and venom may have evolved by a process of coevolution. A phylogeny of the genus constructed using mitochondrial DNA showed that one instance of a change in venom composition in the species ancestral to all Echis snakes was correlated with a shift to an arthropod based diet, whereas another shift in a more recent lineage was correlated with a shift to a diet of vertebrates. Despite the higher toxicity of the venom of arthropod-consuming species, it was not found to incapacitate or kill prey any faster than that of species with fewer arthropods in their diet. Thus, the venom is thought to have evolved to minimize the volume required, as the production of venom carries a significant metabolic cost, thus providing a fitness benefit. This pattern is also found in other lineages. Similar results were obtained by a 2012 study which found that the venom of arthropod-consuming Echis species was more toxic to locusts than that of vertebrate-consuming species. A 2009 study of the venom of four Sistrurus pit viper species found significant variation in the toxicity to mice. This variation was related to the proportion of small mammals in the diet of those species. The idea that Sistrurus venom had evolved to accommodate a mammal-based diet was supported by phylogenetic analysis. The researchers suggested that the basis for the difference in toxicity was the difference in muscle physiology in the various prey animals. Two lineages of elapid snakes, common sea snakes and Laticauda sea kraits, have independently colonized marine environments, and shifted to a very simple diet based on teleosts, or ray-finned fish. A 2005 study found that both these lineages have a much simpler set of venom proteins than their terrestrial relatives on the Australian continent, which have a more varied and complex diet. These findings were confirmed by a 2012 study, which compared the venoms of Toxicocalamus longissimus, a terrestrial species, and Hydrophis cyanocinctus, a marine species, both within the subfamily Hydrophiinae (which is also within the Elapid family). Despite being closely related to one another, the marine species had a significantly simpler set of venom proteins. The venoms of the sea snakes are nonetheless among the most toxic venoms known. It has been argued that since sea snakes are typically unable to prevent the escape of bitten prey, their venoms have evolved to act very rapidly. Pre-digestion of prey The various subspecies of the rattlesnake genus Crotalus, produce venoms that carry out two conflicting functions. 
The venom immobilizes prey after a bite, and also helps digestion by breaking down tissues before the snake eats its prey. As with other members of the family Viperidae, the venoms of Crotalus disrupt the homeostatic processes of prey animals. However, there is a wide variety of venom compositions among the species of Crotalus. A 2010 study found a 100-fold difference in the amount of metalloproteinase activity among the various snakes, with Crotalus cerberus having the highest activity and Crotalus oreganus concolor having the lowest. There was also a 15-fold variation in the amount of protease activity, with C. o. concolor and C. cerberus having the highest and lowest activities, respectively. Metalloproteinase activity causes hemorrhage and necrosis following a snake bite, a process which aids digestion. The activity of proteases, on the other hand, disrupts platelet and muscle function and damages cell membranes, and thus contributes to a quick death for the prey animal. The study found that the venoms of Crotalus fell into two categories; those that favored metalloproteinases (Type I) and those that favored proteases (Type II). The study stated that these functions were essentially mutually exclusive; venoms had been selected for based on either their toxicity or their tenderizing potential. The researchers also hypothesized that the reason for this dichotomy was that a venom high in neurotoxicity, such as a type II venom, kills an animal quickly, preventing the relatively slower acting metalloproteinase from digesting tissue. Tracking bitten prey Another example of an adaptive function other than prey immobilization is the role of viperid venom in allowing the snake to track a prey animal it has bitten, a process known as "prey relocalization." This important adaptation allowed rattlesnakes to evolve the strike-and-release bite mechanism, which provided a huge benefit to snakes by minimizing contact with potentially dangerous prey animals. However, this adaptation then requires the snake to track down the bitten animal in order to eat it, in an environment full of other animals of the same species. A 2013 study found that western diamondback rattlesnakes (Crotalus atrox) responded more actively to mouse carcasses that had been injected with crude rattlesnake venom. When the various components of the venom were separated out, the snakes responded to mice injected with two kinds of disintegrins. The study concluded that these disintegrin proteins were responsible for allowing the snakes to track their prey, by changing the odor of the bitten animal. Diet-based atrophication Venom in a number of lineages of snakes is thought to have atrophied in response to dietary shifts. A 2005 study in the marbled sea snake, Aipysurus eydouxii found that the gene for a three-fingered protein found in the venom had undergone a deletion of two nucleotide bases which made the venom 50–100 times less toxic than it had been previously. This change was correlated with a change in diet from fish to a diet consisting almost entirely of fish eggs, suggesting that the adaptation to an egg diet had removed the selection pressure needed to maintain a highly toxic venom, allowing the venom genes to accumulate deleterious mutations. A similar venom degradation following a shift to an egg-based diet has been found in the Common egg-eater Dasypeltis scabra, whose diet consists entirely of birds' eggs, meaning that the snake had no use for its venom. 
This has led biologists to suggest that if venom is not used by a species, it is rapidly lost. References Citations Cited sources External links Evolution Snake venom Snake venom Evolutionary biology
Evolution of snake venom
[ "Biology" ]
4,958
[ "Evolutionary biology", "Phylogenetics", "Evolution of tetrapods" ]
44,413,506
https://en.wikipedia.org/wiki/High-level%20language%20computer%20architecture
A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed language-directed computer design, coined in and primarily used in the 1960s and 1970s. HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s. This followed the dramatic failure of the Intel 432 (1981) and the emergence of optimizing compilers and reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in . HLLCAs date almost to the beginning of HLLs, in the Burroughs large systems (1961), which were designed for ALGOL 60 (1960), one of the first HLLs. The best known HLLCAs may be the Lisp machines of the 1970s and 1980s, for the language Lisp (1959). At present the most popular HLLCAs are Java processors, for the language Java (1995), and these are a qualified success, being used for certain applications. A recent architecture in this vein is the Heterogeneous System Architecture (2012), which HSA Intermediate Layer (HSAIL) provides instruction set support for HLL features such as exceptions and virtual functions; this uses JIT to ensure performance. Definition There are a wide variety of systems under this heading. The most extreme example is a Directly Executed Language (DEL), where the instruction set architecture (ISA) of the computer equals the instructions of the HLL, and the source code is directly executable with minimal processing. In extreme cases, the only compiling needed is tokenizing the source code and feeding the tokens directly to the processor; this is found in stack-oriented programming languages running on a stack machine. For more conventional languages, the HLL statements are grouped into instruction + arguments, and infix order is transformed to prefix or postfix order. DELs are typically only hypothetical, though they were advocated in the 1970s. In less extreme examples, the source code is first parsed to bytecode, which is then the machine code that is passed to the processor. In these cases, the system typically lacks an assembler, as the compiler is deemed sufficient, though in some cases (such as Java), assemblers are used to produce legal bytecode which would not be output by the compiler. This approach was found in the Pascal MicroEngine (1979), and is currently used by Java processors. More loosely, a HLLCA may simply be a general-purpose computer architecture with some features specifically to support a given HLL or several HLLs. This was found in Lisp machines from the 1970s onward, which augmented general-purpose processors with operations specifically designed to support Lisp. Examples The Burroughs Large Systems (1961) were the first HLLCA, designed to support ALGOL (1959), one of the earliest HLLs. This was referred to at the time as "language-directed design." The Burroughs Medium Systems (1966) were designed to support COBOL for business applications. The Burroughs Small Systems (mid-1970s, designed from late 1960s) were designed to support multiple HLLs by a writable control store. These were all mainframes. The Wang 2200 (1973) series were designed with a BASIC interpreter in micro-code. 
The Pascal MicroEngine (1979) was designed for the UCSD Pascal form of Pascal, and used p-code (Pascal compiler bytecode) as its machine code. This was influential on the later development of Java and Java machines. Lisp machines (1970s and 1980s) were a well-known and influential group of HLLCAs. Intel iAPX 432 (1981) was designed to support Ada. This was Intel's first 32-bit processor design, and was intended to be Intel's main processor family for the 1980s, but failed commercially. Rekursiv (mid-1980s) was a minor system, designed to support object-oriented programming and the Lingo programming language in hardware, and supported recursion at the instruction set level, hence the name. A number of processors and coprocessors intended to implement Prolog more directly were designed in the late 1980s and early 1990s, including the Berkeley VLSI-PLM, its successor (the PLUM), and a related microcode implementation. There were also a number of simulated designs that were not produced as hardware A VHDL-based methodology for designing a Prolog processor, A Prolog coprocessor for superconductors. Like Lisp, Prolog's basic model of computation is radically different from standard imperative designs, and computer scientists and electrical engineers were eager to escape the bottlenecks caused by emulating their underlying models. Niklaus Wirth's Lilith project included a custom CPU geared toward the Modula-2 language. The INMOS Transputer was designed to support concurrent programming, using occam. The AT&T Hobbit processor, stemming from a design called CRISP (C-language Reduced Instruction Set Processor), was optimized to run C code. In the late 1990s, there were plans by Sun Microsystems and other companies to build CPUs that directly (or closely) implemented the stack-based Java virtual machine. As a result, several Java processors have been built and used. Ericsson developed ECOMP, a processor designed to run Erlang. It was never commercially produced. The HSA Intermediate Layer (HSAIL) of the Heterogeneous System Architecture (2012) provides a virtual instruction set to abstract away from the underlying ISAs, and has support for HLL features such as exceptions and virtual functions, and include debugging support. Implementation HLLCA are frequently implemented via a stack machine (as in the Burroughs Large Systems and Intel 432), and implemented the HLL via microcode in the processor (as in Burroughs Small Systems and Pascal MicroEngine). Tagged architectures are frequently used to support types (as in the Burroughs Large Systems and Lisp machines). More radical examples use a non-von Neumann architecture, though these are typically only hypothetical proposals, not actual implementations. Application Some HLLCs have been particularly popular as developer machines (workstations), due to fast compiles and low-level control of the system with a high-level language. Pascal MicroEngine and Lisp machines are good examples of this. HLLCAs have often been advocated when a HLL has a radically different model of computation than imperative programming (which is a relatively good match for typical processors), notably for functional programming (Lisp) and logic programming (Prolog). Motivation A detailed list of putative advantages is given in . HLLCAs are intuitively appealing, as the computer can in principle be customized for a language, allowing optimal support for the language, and simplifying compiler writing. It can further natively support multiple languages by simply changing the microcode. 
Key advantages are to developers: fast compilation and detailed symbolic debugging from the machine. A further advantage is that a language implementation can be updated by updating the microcode (firmware), without requiring recompilation of an entire system. This is analogous to updating an interpreter for an interpreted language. An advantage that's reappearing post-2000 is safety or security. Mainstream IT has largely moved to languages with type and/or memory safety for most applications. The software those depend on, from OS to virtual machines, leverage native code with no protection. Many vulnerabilities have been found in such code. One solution is to use a processor custom built to execute a safe high level language or at least understand types. Protections at the processor word level make attackers' job difficult compared to low level machines that see no distinction between scalar data, arrays, pointers, or code. Academics are also developing languages with similar properties that might integrate with high level processors in the future. An example of both of these trends is the SAFE project. Compare language-based systems, where the software (especially operating system) is based around a safe, high-level language, though the hardware need not be: the "trusted base" may still be in a lower level language. Disadvantages A detailed critique is given in . The simplest reason for the lack of success of HLLCAs is that from 1980 optimizing compilers resulted in much faster code and were easier to develop than implementing a language in microcode. Many compiler optimizations require complex analysis and rearrangement of the code, so the machine code is very different from the original source code. These optimizations are either impossible or impractical to implement in microcode, due to the complexity and the overhead. Analogous performance problems have a long history with interpreted languages (dating to Lisp (1958)), only being resolved adequately for practical use by just-in-time compilation, pioneered in Self and commercialized in the HotSpot Java virtual machine (1999). The fundamental problem is that HLLCAs only simplify the code generation step of compilers, which is typically a relatively small part of compilation, and a questionable use of computing power (transistors and microcode). At the minimum tokenization is needed, and typically syntactic analysis and basic semantic checks (unbound variables) will still be performed – so there is no benefit to the front end – and optimization requires ahead-of-time analysis – so there is no benefit to the middle end. A deeper problem, still an active area of development , is that providing HLL debugging information from machine code is quite difficult, basically because of the overhead of debugging information, and more subtly because compilation (particularly optimization) makes determining the original source for a machine instruction quite involved. Thus the debugging information provided as an essential part of HLLCAs either severely limits implementation or adds significant overhead in ordinary use. Further, HLLCAs are typically optimized for one language, supporting other languages more poorly. Similar issues arise in multi-language virtual machines, notably the Java virtual machine (designed for Java) and the .NET Common Language Runtime (designed for C#), where other languages are second-class citizens, and often must hew closely to the main language in semantics. 
For this reason lower-level ISAs allow multiple languages to be well-supported, given compiler support. However, a similar issue arises even for many apparently language-neutral processors, which are well-supported by the language C, and where transpiling to C (rather than directly targeting the hardware) yields efficient programs and simple compilers. The advantages of HLLCAs can be alternatively achieved in HLL Computer Systems (language-based systems) in alternative ways, primarily via compilers or interpreters: the system is still written in a HLL, but there is a trusted base in software running on a lower-level architecture. This has been the approach followed since circa 1980: for example, a Java system where the runtime environment itself is written in C, but the operating system and applications written in Java. Alternatives Since the 1980s the focus of research and implementation in general-purpose computer architectures has primarily been in RISC-like architectures, typically internally register-rich load–store architectures, with rather stable, non-language-specific ISAs, featuring multiple registers, pipelining, and more recently multicore systems, rather than language-specific ISAs. Language support has focused on compilers and their runtimes, and interpreters and their virtual machines (particularly JIT'ing ones), with little direct hardware support. For example, the current Objective-C runtime for iOS implements tagged pointers, which it uses for type-checking and garbage collection, despite the hardware not being a tagged architecture. In computer architecture, the RISC approach has proven very popular and successful instead, and is opposite from HLLCAs, emphasizing a very simple instruction set architecture. However, the speed advantages of RISC computers in the 1980s was primarily due to early adoption of on-chip cache and room for large registers, rather than intrinsic advantages of RISC.. See also ASIC Java processor Language-based system Lisp machine Prolog#Implementation in hardware Silicon compiler References – review A Baker’s Dozen: Fallacies and Pitfalls in Processor Design Grant Martin & Steve Leibson, Tensilica (early 2000s), slides 6–9 Further reading Programming language topics
High-level language computer architecture
[ "Engineering" ]
2,593
[ "Software engineering", "Programming language topics" ]
54,438,034
https://en.wikipedia.org/wiki/Katie%20Mack%20%28astrophysicist%29
Katherine J. Mack (born 1 May 1981) is a theoretical cosmologist who holds the Hawking Chair in Cosmology and Science Communication at the Perimeter Institute. Her academic research investigates dark matter, vacuum decay, and the Epoch of Reionization. Mack is also a popular science communicator who participates in social media and regularly writes for Scientific American, Slate, Sky & Telescope, Time, and Cosmos. Early life and education Mack became interested in science as a child and built solar-powered cars out of Lego blocks. Her mother is a fan of science fiction, and encouraged Mack to watch Star Trek and Star Wars. Her grandfather was a student at Caltech and worked on the Apollo 11 mission. She became more interested in spacetime and the Big Bang after attending talks by scientists such as Stephen Hawking. Mack attended California Institute of Technology, and appeared as an extra in the opening credits of the 2001 American comedy film Legally Blonde when they filmed on campus. She received her undergraduate degree in physics in 2003. Mack obtained her PhD in astrophysics from Princeton University in 2009. Her thesis on the early universe was supervised by Paul Steinhardt. Research and career After earning her doctorate, Mack joined the University of Cambridge as a Science and Technology Facilities Council (STFC) postdoctoral research fellow at the Kavli Institute for Cosmology. Later in 2012, Mack was a Discovery Early Career Researcher Award (DECRA) Fellow at the University of Melbourne. Mack was involved with the construction of the dark matter detector SABRE. In January 2018, Mack became an assistant professor in the Department of Physics at North Carolina State University and a member of the university's Leadership in Public Science Cluster. She joined the Perimeter Institute for Theoretical Physics in June 2022 as the inaugural Hawking Chair in Cosmology and Science Communication. The Canadian multidisciplinary research organization CIFAR named her one of the CIFAR Azrieli Global Scholars in 2022. Mack works at the intersection between fundamental physics and astrophysics. Her research considers dark matter, vacuum decay, the formation of galaxies, observable tracers of cosmic evolution, and the Epoch of Reionization. Mack has described dark matter as "one of science's most pressing enigmas". She has worked on dark matter self-annihilation and whether the accretion of dark matter could result in the growth of primordial black holes (PBHs). She has worked on the impact of PBHs on the cosmic microwave background. She has become increasingly interested, too, in the end of the universe. Public engagement and advocacy Mack maintains a strong science outreach presence on both social and traditional media. In this wise she has been described by Motherboard and Creative Cultivate as a "social media celebrity". Mack is a popular science writer and has contributed to The Guardian, Scientific American, Slate, The Conversation, Sky & Telescope, Gizmodo, Time, and Cosmos, as well as providing expert information to the BBC. Mack's Twitter account has over 300,000 followers; her response to a climate change denier on that platform gained mainstream coverage, as did her "Chirp for LIGO" upon the first detection of gravitational waves. She was the 2017 Australian Institute of Physics Women in Physics lecturer, in which capacity she spent three weeks delivering talks at schools and universities across Australia. 
In 2018, Mack was chosen to be one of the judges for Nature magazine's newly founded Nature Research Awards for Inspiring Science and Innovating Science. In February 2019, she appeared in an episode of The Jodcast, talking about her work and science communication. Mack was a member of the jury for the Alfred P. Sloan Prize in the 2019 Sundance Film Festival. In 2019, she was referenced on the Hozier track "No Plan" from his album Wasteland, Baby!: "As Mack explained, there will be darkness again". She is a member of the Sloan Science & Film community, where she works on science fiction. Her first book, The End of Everything (Astrophysically Speaking), was published by Simon & Schuster in August 2020, the firm having won the rights in an eight-way bidding battle. It considers the five scenarios for the end of the universe (both theoretically and practically), and has received positive reviews both for its science outreach accuracy and its wit. The book was also a New York Times Notable Book and featured on the best books of the year lists of The Washington Post, The Economist, New Scientist, Publishers Weekly, and The Guardian. Mack hosted a podcast with author John Green called Crash Course Pods: The Universe in 2024. Personal life Mack is interested in the intersection of art, poetry and science. She and the musician Hozier became friends after getting to know one another on Twitter. She is bisexual. Mack is also a pilot, having earned her private pilot license during the COVID-19 pandemic. References External links Katie Mack at NC State University 21st-century American physicists 21st-century American women scientists 21st-century American women writers American astrophysicists Bisexual scientists Bisexual women writers American cosmologists LGBTQ astronomers LGBTQ physicists North Carolina State University faculty 21st-century science writers American women astrophysicists American women science writers California Institute of Technology alumni Princeton University alumni 1981 births Living people 21st-century American LGBTQ people
Katie Mack (astrophysicist)
[ "Astronomy" ]
1,087
[ "Astronomers", "LGBTQ astronomers" ]
54,438,878
https://en.wikipedia.org/wiki/Omega1%20Tauri
{{DISPLAYTITLE:Omega1 Tauri}} Omega1 Tauri is a solitary, orange hued star in the zodiac constellation of Taurus. It is faintly visible to the naked eye with an apparent visual magnitude of +5.51. Based upon an annual parallax shift of 11.22 mas as seen from Earth, it is located about 290 light years from the Sun. This is an evolved K-type giant star with a stellar classification of K2 III. At the estimated age of 4.2 billion years, it is a red clump star that is generating energy by helium fusion at its core. Omega1 Tauri has about 1.5 times the mass of the Sun and has expanded to around 12 times the Sun's radius. It is radiating 57.5 times the Sun's luminosity from its photosphere at an effective temperature of 4,737 K. The radial velocity of this star shows no appreciable variation, and for this reason it is used as a radial velocity standard. References K-type giants Horizontal-branch stars Tauri, Omega Taurus (constellation) BD+19 0672 Tauri, 43 026162 019388 1283
Omega1 Tauri
[ "Astronomy" ]
252
[ "Taurus (constellation)", "Constellations" ]
54,439,518
https://en.wikipedia.org/wiki/Junior%20Solar%20Sprint
Junior Solar Sprint (JSS) is a competitive program for 5th- to 8th-grade students to create a small solar-powered vehicle. JSS competitions are sponsored by the Army Educational Outreach Program (AEOP), and administered by the Technology Student Association (TSA). Objectives of JSS are to create the fastest, most interesting, or best crafted vehicle. Skills in science, technology, engineering, and mathematics (STEM) are fostered when designing and constructing the vehicles, as well as principles of alternative fuels, engineering design, and aerodynamics. History Junior Solar Sprint was created in the 1980s by the National Renewable Energy Laboratory (NREL) to teach younger children about the importance and challenges of using renewable energy. The project also teaches students how the engineering process is applied, and how solar panels, transmission, and aerodynamics can be used in practice. Since 2001, the AEOP has funded JSS events. TSA began hosting competitions in 2011, and it became a middle school-level event in 2014. In association with TSA, Pitsco Education has sold recommended materials for the project. Competition Since Junior Solar Sprint became a TSA event, the rules for creating the vehicle have been defined in the TSA rulebook. At the conference, the total cost of creating each car must be less than US$50. The team must also document their process in a notebook. During the time trials section, each car is raced three times down a lane long, on a hard surface like a tennis court. To keep the vehicle pointed straight ahead, a guide wire is run across every lane attached by an eyelet. When the cars are racing, they must remain attached to the wire with no external control. During the race, no modifications are allowed, though anyone may watch. The fastest time of the three trials is used for qualification to the semifinal round. In the next stage, a single- or double-elimination tournament, cars are raced against each other at the same time, until one of the 16 semifinalists is determined the fastest. Junior Solar Sprint competitions are held at the national-, state-, and some regional-level TSA conferences, as well as other AEOP-hosted locations. In the event that the site is overcast, and the solar panels won't work, two 1.5 volt AA batteries will be given to each team. Judges determine the three best vehicles for each category: speed, craftsmanship, and appearance. The 2017 national TSA conference was held June 21–25, in Orlando, Florida, and many middle school students across the country traveled to compete with their vehicles. The team from Joan MacQueen Middle School in Alpine, California, won first place. References External links Junior Solar Sprint from the Army Educational Outreach Program Junior Solar Sprint from the Technology Student Association Engineering competitions Engineering education in the United States Technology Student Association American military youth groups
Junior Solar Sprint
[ "Technology" ]
585
[ "Science and technology awards", "Engineering competitions" ]
54,439,547
https://en.wikipedia.org/wiki/Administration%20of%20Radioactive%20Substances%20Advisory%20Committee
The Administration of Radioactive Substances Advisory Committee (ARSAC) is an advisory non-departmental public body of the government of the United Kingdom. It is sponsored by the Department of Health. The committee advises government on the certification of doctors and dentists who want to use radioactive medicinal products on people. Doctors and dentists who use radioactive medicinal products (radiopharmaceuticals) on people must get a certificate from health ministers. This certificate allows them to use radioactive medicinal products in diagnosis, therapy and research. ARSAC was set up to advise health ministers with respect to the grant, renewal, suspension, revocation and variation of certificates and generally in connection with the system of prior authorisation required by Article 5(a) of Council Directive 76/579/Euratom. The majority of ARSAC's members are medical doctors who are appointed to the committee as independent experts in their field (for example nuclear medicine). The committee comments on applications in confidence to the ARSAC Support Unit, Public Health England. No individual committee member approves any single application. An official from the Department of Health authorises successful applications on behalf of the Secretary of State. See also Centre for Radiation, Chemical and Environmental Hazards in Oxfordshire References External links Nuclear medicine organizations Non-departmental public bodies of the United Kingdom government
Administration of Radioactive Substances Advisory Committee
[ "Engineering" ]
264
[ "Nuclear medicine organizations", "Nuclear organizations" ]
54,440,098
https://en.wikipedia.org/wiki/5%CE%B1-Dihydronorethisterone
5α-Dihydronorethisterone (5α-DHNET, dihydronorethisterone, 17α-ethynyl-5α-dihydro-19-nortestosterone, or 17α-ethynyl-5α-estran-17β-ol-3-one) is a major active metabolite of norethisterone (norethindrone). Norethisterone is a progestin with additional weak androgenic and estrogenic activity. 5α-DHNET is formed from norethisterone by 5α-reductase in the liver and other tissues. Pharmacology Unlike norethisterone which is purely progestogenic, 5α-DHNET has been found to possess both progestogenic and marked antiprogestogenic activity, showing a profile of progestogenic activity like that of a selective progesterone receptor modulator (SPRM). Moreover, the affinity of 5α-DHNET for the progesterone receptor (PR) is greatly reduced relative to that of norethisterone at only 25% of that of progesterone (versus 150% for norethisterone). 5α-DHNET shows higher affinity for the androgen receptor (AR) compared to norethisterone with approximately 27% of the affinity of the potent androgen metribolone (versus 15% for norethisterone). However, although 5α-DHNET has higher affinity for the AR than does norethisterone, it has significantly diminished and in fact almost abolished androgenic activity in comparison to norethisterone in rodent bioassays. Similar findings were observed for ethisterone (17α-ethynyltestosterone) and its 5α-reduced metabolite, whereas 5α-reduction enhanced both the AR affinity and androgenic potency of testosterone and nandrolone (19-nortestosterone) in rodent bioassays. As such, it appears that the C17α ethynyl group of norethisterone is responsible for its loss of androgenicity upon 5α-reduction. Instead of androgenic activity, 5α-DHNET has been reported to possess some antiandrogenic activity. Norethisterone and 5α-DHNET have been found to act as weak irreversible aromatase inhibitors (Ki = 1.7 μM and 9.0 μM, respectively). However, the concentrations required are probably too high to be clinically relevant at typical dosages of norethisterone. 5α-DHNET specifically has been assessed and found to be selective in its inhibition of aromatase, and does not affect other steroidogenesis enzymes such as cholesterol side-chain cleavage enzyme (P450scc), 17α-hydroxylase/17,20-lyase, 21-hydroxylase, or 11β-hydroxylase. Since it is not aromatized (and hence cannot be transformed into an estrogenic metabolite), unlike norethisterone, 5α-DHNET has been proposed as a potential therapeutic agent in the treatment of estrogen receptor (ER)-positive breast cancer. See also 5α-Dihydroethisterone 5α-Dihydronandrolone 5α-Dihydronormethandrone 5α-Dihydrolevonorgestrel References 5α-Reduced steroid metabolites Ethynyl compounds Anabolic–androgenic steroids Aromatase inhibitors Estranes Human drug metabolites Ketones Selective progesterone receptor modulators
5α-Dihydronorethisterone
[ "Chemistry" ]
769
[ "Ketones", "Chemicals in medicine", "Functional groups", "Human drug metabolites" ]
54,440,174
https://en.wikipedia.org/wiki/NGC%207051
NGC 7051 is a barred spiral galaxy located about 100 million light-years away in the constellation of Aquarius. It was discovered by astronomer John Herschel on July 30, 1827. On June 18, 2002 a type II supernova designated as SN 2002dq was discovered in NGC 7051. See also NGC 7042 References External links Barred spiral galaxies Aquarius (constellation) 7051 66566 Astronomical objects discovered in 1827
NGC 7051
[ "Astronomy" ]
89
[ "Constellations", "Aquarius (constellation)" ]
54,440,318
https://en.wikipedia.org/wiki/19-Noretiocholanolone
19-Noretiocholanolone, also known as 5β-estran-3α-ol-17-one, is a metabolite of nandrolone (19-nortestosterone) and bolandione (19-norandrostenedione) that is formed by 5α-reductase. It is on the list of substances prohibited by the World Anti-Doping Agency since it is a detectable metabolite of nandrolone, an anabolic-androgenic steroid (AAS). Consumption of boar meat, liver, kidneys and heart have been found to increase urinary 19-noretiocholanolone output. See also Etiocholanolone 19-Norandrosterone 5α-Dihydronandrolone 5α-Dihydronorethisterone References 5α-Reduced steroid metabolites Secondary alcohols Estranes Human drug metabolites Ketones World Anti-Doping Agency prohibited substances
19-Noretiocholanolone
[ "Chemistry" ]
209
[ "Pharmacology", "Ketones", "Functional groups", "Medicinal chemistry stubs", "Chemicals in medicine", "Human drug metabolites", "Pharmacology stubs" ]
54,440,878
https://en.wikipedia.org/wiki/Amebucort
Amebucort (developmental code name ZK-90999) is a synthetic glucocorticoid corticosteroid which was never marketed. References Acetate esters Triketones Glucocorticoids Pregnanes Diols Abandoned drugs
Amebucort
[ "Chemistry" ]
58
[ "Drug safety", "Abandoned drugs" ]
54,440,879
https://en.wikipedia.org/wiki/Butixocort
Butixocort, also known as tixocortol butyrate, is a synthetic glucocorticoid corticosteroid. References Butyrate esters Corticosteroid esters Diketones Glucocorticoids Pregnanes Thiols
Butixocort
[ "Chemistry" ]
62
[ "Organic compounds", "Thiols" ]
54,440,881
https://en.wikipedia.org/wiki/Deprodone
Deprodone, also known as desolone, is a synthetic glucocorticoid corticosteroid. References Diketones Diols Glucocorticoids Pregnanes Abandoned drugs
Deprodone
[ "Chemistry" ]
46
[ "Drug safety", "Abandoned drugs" ]
54,440,882
https://en.wikipedia.org/wiki/Dichlorisone
Dichlorisone is a synthetic glucocorticoid corticosteroid which was never marketed. References Organochlorides Diols Diketones Glucocorticoids Pregnanes Abandoned drugs
Dichlorisone
[ "Chemistry" ]
47
[ "Drug safety", "Abandoned drugs" ]
54,440,886
https://en.wikipedia.org/wiki/Isoflupredone
Isoflupredone, also known as deltafludrocortisone and 9α-fluoroprednisolone, is a synthetic glucocorticoid corticosteroid which was never marketed. Its acetate ester, isoflupredone acetate, is used in veterinary medicine. References Further reading Diketones Organofluorides Glucocorticoids Pregnanes Triols Abandoned drugs Hydroxyketones Primary alcohols Secondary alcohols Tertiary alcohols
Isoflupredone
[ "Chemistry" ]
106
[ "Drug safety", "Abandoned drugs" ]
54,440,891
https://en.wikipedia.org/wiki/Prednazoline
Prednazoline, a compound of prednisolone phosphate with fenoxazoline, is a synthetic corticosteroid as well as vasoconstrictor and α-adrenergic sympathomimetic. See also Prednazate Prednimustine References Alpha-adrenergic agonists Corticosteroid esters Corticosteroids Diketones Glucocorticoids Imidazolines Mineralocorticoids Phosphates Pregnanes Sympathomimetics Triols Vasoconstrictors
Prednazoline
[ "Chemistry" ]
124
[ "Phosphates", "Salts" ]
54,440,893
https://en.wikipedia.org/wiki/Resocortol
Resocortol is a synthetic glucocorticoid corticosteroid which was never marketed. References Diketones Diols Glucocorticoids Pregnanes Abandoned drugs
Resocortol
[ "Chemistry" ]
42
[ "Drug safety", "Abandoned drugs" ]
54,440,894
https://en.wikipedia.org/wiki/Tipredane
Tipredane (developmental code name SQ-27239) is a synthetic glucocorticoid corticosteroid which was never marketed. References Secondary alcohols Organofluorides Glucocorticoids Ketones Organosulfur compounds Pregnanes Abandoned drugs
Tipredane
[ "Chemistry" ]
60
[ "Ketones", "Organosulfur compounds", "Functional groups", "Drug safety", "Organic compounds", "Abandoned drugs" ]
54,440,992
https://en.wikipedia.org/wiki/Cortifen
Cortifen, also known as cortiphen or kortifen, as well as fencoron, is a synthetic glucocorticoid corticosteroid and cytostatic antineoplastic agent which was developed in Russia for potential treatment of tumors. It is a hydrophobic chlorphenacyl nitrogen mustard ester of 11-deoxycortisol (cortodoxone). See also List of hormonal cytostatic antineoplastic agents List of corticosteroid esters List of Russian drugs References Acetate esters Tertiary alcohols Amines Corticosteroid esters Glucocorticoids Ketones Mineralocorticoids Nitrogen mustards Organochlorides Prodrugs Russian drugs Chloroethyl compounds
Cortifen
[ "Chemistry" ]
170
[ "Ketones", "Functional groups", "Prodrugs", "Chemicals in medicine", "Amines", "Bases (chemistry)" ]
54,441,483
https://en.wikipedia.org/wiki/Ciclometasone
Ciclometasone (brand names Cycloderm, Telocort) is a synthetic glucocorticoid corticosteroid which is marketed in Italy. References Amines Carboxylic acids Organochlorides Corticosteroid esters Diketones Glucocorticoids Pregnanes Diols
Ciclometasone
[ "Chemistry" ]
72
[ "Amines", "Carboxylic acids", "Bases (chemistry)", "Functional groups" ]
54,441,495
https://en.wikipedia.org/wiki/Cloticasone
Cloticasone is a synthetic glucocorticoid corticosteroid which was never marketed. References Ketones Diols Organofluorides Glucocorticoids Organochlorides Pregnanes Thioesters Enones
Cloticasone
[ "Chemistry" ]
54
[ "Ketones", "Thioesters", "Functional groups" ]
54,441,499
https://en.wikipedia.org/wiki/Cormetasone
Cormetasone, or cormethasone, is a synthetic glucocorticoid corticosteroid which was never marketed. References Diketones Organofluorides Glucocorticoids Pregnanes Triols Abandoned drugs
Cormetasone
[ "Chemistry" ]
56
[ "Drug safety", "Abandoned drugs" ]
54,441,564
https://en.wikipedia.org/wiki/Fluocortin%20butyl
Fluocortin butyl (brand names Lenen, Novoderm, Varlane, Vaspit), or fluocortin 21-butylate, is a synthetic glucocorticoid corticosteroid which is marketed in Germany, Belgium, Luxembourg, Spain, and Italy. Chemically, it is the butyl ester derivative of fluocortin. It was patented in 1971 and approved for medical use in 1977. References Corticosteroid esters Esters Organofluorides Glucocorticoids Pregnanes
Fluocortin butyl
[ "Chemistry" ]
120
[ "Organic compounds", "Esters", "Functional groups" ]
54,441,571
https://en.wikipedia.org/wiki/Icometasone
Icometasone is a synthetic glucocorticoid corticosteroid which was never marketed. References Organochlorides Diketones Glucocorticoids Pregnanes Triols Abandoned drugs Hydroxyketones Primary alcohols Secondary alcohols Tertiary alcohols
Icometasone
[ "Chemistry" ]
59
[ "Drug safety", "Abandoned drugs" ]
54,441,577
https://en.wikipedia.org/wiki/Isoprednidene
Isoprednidene (developmental code name StC 407) is a synthetic glucocorticoid corticosteroid which was never marketed. References Diketones Glucocorticoids Pregnanes Triols Abandoned drugs Vinylidene compounds Hydroxyketones Primary alcohols Secondary alcohols Tertiary alcohols
Isoprednidene
[ "Chemistry" ]
69
[ "Drug safety", "Abandoned drugs" ]
54,441,584
https://en.wikipedia.org/wiki/Locicortolone
Locicortolone (developmental code name RU-24476), or locicortone, is a synthetic glucocorticoid corticosteroid which was never marketed. References Primary alcohols Organochlorides Diketones Glucocorticoids Pregnanes Abandoned drugs
Locicortolone
[ "Chemistry" ]
65
[ "Drug safety", "Abandoned drugs" ]
54,441,591
https://en.wikipedia.org/wiki/Meclorisone
Meclorisone (developmental code name NSC-92353) is a synthetic glucocorticoid corticosteroid which was never marketed. References Organochlorides Diketones Diols Glucocorticoids Pregnanes Abandoned drugs
Meclorisone
[ "Chemistry" ]
57
[ "Drug safety", "Abandoned drugs" ]
54,441,600
https://en.wikipedia.org/wiki/Ticabesone
Ticabesone is a synthetic glucocorticoid corticosteroid which was never marketed. References Enones Diols Organofluorides Glucocorticoids Pregnanes Abandoned drugs Thioesters
Ticabesone
[ "Chemistry" ]
50
[ "Thioesters", "Drug safety", "Functional groups", "Abandoned drugs" ]
54,441,604
https://en.wikipedia.org/wiki/Timobesone
Timobesone is a synthetic glucocorticoid corticosteroid which was never marketed. References Enones Diols Organofluorides Glucocorticoids Pregnanes Abandoned drugs Thioesters
Timobesone
[ "Chemistry" ]
50
[ "Functional groups", "Thioesters", "Drug safety", "Abandoned drugs" ]
54,441,937
https://en.wikipedia.org/wiki/Langgan
Langgan () is the ancient Chinese name of a gemstone which remains an enigma in the history of mineralogy; it has been identified, variously, as blue-green malachite, blue coral, white coral, whitish chalcedony, red spinel, and red jade. It is also the name of a mythological langgan tree of immortality found in the western paradise of Kunlun Mountain, and the name of the classic waidan alchemical elixir of immortality langgan huadan 琅玕華丹 "Elixir Efflorescence of Langgan". Word The Chinese characters 琅 and 玕 used to write the gemstone name lánggān are classified as radical-phonetic characters that combine the semantically significant "jade radical" 玉 or 王 (commonly used to write names of jades or gemstones) and phonetic elements hinting at pronunciation. Láng 琅 combines the "jade radical" with liáng 良 "good; fine" (interpreted to denote "fine jade") and gān 玕 combines it with the phonetic gān 干 "stem; trunk". The Chinese word yù 玉 is usually translated as "jade" but in some contexts translates as "fine ornamental stone; gemstone; precious stone", and can refer to a variety of rocks that carve and polish well, including jadeite, nephrite, agalmatolite, bowenite, and serpentine. Modern written Chinese láng 琅 and gān 玕 have variant Chinese characters. Láng 琅 is occasionally transcribed as láng 瑯 (with láng 郞 "gentleman") or lán 瓓 (lán 闌 "railing"); and gān 玕 is rarely written as gān 玵 (with a gān 甘 "sweet" phonetic). Guwen "ancient script" variants were láng 𤨜 or 𤦴 and gān 𤥚. Berthold Laufer proposed that langgan was an onomatopoetic word "descriptive of the sound yielded by the sonorous stone when struck". Lang occurs in several imitative words meaning "tinkling of jade pendants/ornaments": lángláng 琅琅 "tinkling/jingling sound", língláng 玲琅 "tinkling/jangling of jade", línláng 琳琅 "beautiful jade; sound of jade", and lángdāng 琅璫 "tinkling sound". Laufer further suggests this etymology would explain the transference of the name langgan from a stone to a coral; Du Wan's 杜綰 Yunlin shipu 雲林石譜 "Stone Catalogue of the Cloudy Forest" (below) expressly states that the coral langgan "when struck develops resonant properties". Classical descriptions The name langgan has undergone remarkable semantic change. The first references to langgan are found in Chinese classics from the Warring States period (475-221 BCE) and Han dynasty (206 BCE-220 CE), which describe it as a valuable gemstone and mineral drug, as well as the mythological fruit of the langgan tree of immortality on Kunlun Mountain. Texts from the turbulent Six Dynasties period (220-589) and Sui dynasty (581-618) used langgan gemstone as a literary metaphor, and an ingredient in alchemical elixirs of immortality, many of which were poisonous. During the Tang dynasty (618-907), langgan was reinterpreted as a type of coral. Several early texts (including the Shujing, Guanzi, and Erya below) recorded langgan in context with the obscure gemstone(s) qiúlín 璆琳. In Classical Chinese syntax, 璆琳 can be parsed as two qiu and lin types of jade or as one qiulin type. A recent dictionary of Classical Chinese says qiú 璆 "fine jade, jade lithophone" is cognate with qiú 球 "precious gem, fine jade; jade chime or lithophone" (which later came to mean "ball; sphere"), and lín 琳 "blue-gem; sapphire". In what may be the earliest record, the c. 
5th-3rd centuries BCE Yu Gong "Tribute of Yu the Great" chapter of the Shujing "Classic of Documents" says the tributary products from Yong Province (located in the Wei River plain, one of the ancient Nine Provinces) included qiulin and langgan jade-like gemstones: "Its articles of tribute were the k'ew and lin gem-stones, and the lang-kan precious stones". Legge quotes Kong Anguo's commentary that langgan is "a stone, but like a pearl", and suggests it was possibly lazulite or lapis lazuli, which Laufer calls "purely conjectural". The c. 4th-3rd centuries BCE Guanzi encyclopedic text, named for and attributed to the 7th century BCE philosopher Guan Zhong, who served as Prime Minister to Duke Huan of Qi (r. 685-643 BCE), uses bi 璧 "a flat jade disc with a hole in the center", qiulin 璆琳 "lapis lazuli", and langgan 琅玕 as examples of how establishing diverse local commodities as fiat currencies will encourage foreign economic cooperation. When Duke Huan asks Guanzi about how to politically control the "Four Yi" (meaning "all foreigners" on China's borders), he replies: Since the Yuzhi [i.e., Yuezhi/Kushans in Central Asia] have not paid court, I request our use of white jade discs [白璧] as money. Since those in the Kunlun desert (modern-day Xinjiang and Tibet) have not paid court, I request our use of lapis lazuli and langgan gems as money. … Since a white jade held tight unseen against one's chest or under one's armpit will be used as a thousand pieces of gold, we can obtain the Yuezhi eight thousand li away and make them pay court. Since a lapis lazuli and langgan gem (fashioned in) a hair clasp and earring will be used as a thousand gold pieces, we can obtain [i.e., defeat] [the inhabitants] of the Kunlun deserts eight thousand li away and make them pay court. Therefore if resources are not commandeered, economies will not connect, those distant from each other will have nothing to use for their common interest and the four yi will not be obtained and come to court. Xun Kuang's 3rd century BCE Confucian classic Xunzi has a context criticizing elaborate burials that uses dan'gan 丹矸 (with dān 丹 "cinnabar" and gān 矸 "waste rock", with the "stone radical" and same gān 干 phonetic) and langgan 琅玕: "In these ancient times, the body was covered with pearls and jades, the inner coffin was filled with beautifully ornamented embroideries, and the outer coffin was filled with yellow gold and decorated with cinnabar [丹矸] with added layers of laminar verdite. [In the outer tomb chamber were] rhinoceros and elephant ivory fashioned into trees, with precious rubies [琅玕], magnetite lodestones, and flowering aconite for their fruit." (18.7) John Knoblock translates langgan as "rubies", noting that perhaps the genuine ruby, or balas spinel, was connected with the cult of immortality, and cites the Shanhaijing saying they grow on Mount Kunlun's Fuchang trees, and the Zhen'gao saying that adepts swallow "ruby blossoms" to feign death and become transcendents. Early Chinese dictionaries define langgan. The c. 4th-3rd century BCE Erya geography section (9 Shidi 釋地) lists valuable products from the various regions of ancient China: "The beautiful things of the northwest are the qiulin [璆琳] and langgan gemstones from the wastelands [虛] of Kunlun Mountain". The 121 CE Shuowen jiezi (Jade Radical section 玉部) has two consecutive definitions for lang 琅 and gan 玕.
Lang is [used in] langgan, which "resembles a pearl [似珠者]"; Gan is [used in] langgan, paraphrasing the Yu Gong, "Yong Province [using the ancient yōng 雝 character for yōng 雍] [produces] qiulin and langgan [gems] [球琳琅玕]". Three sections about western Chinese mountains in the c. 4th-2nd centuries BCE Shanhaijing "Classic of Mountains and Seas" record early geographic legends associating langgan with Xi Wang Mu "Queen Mother of the West" who lives on Jade Mountain in the mythological axis mundi Kunlun Mountain paradise. Two mention langgan gems and one mentions langganshu 琅玕樹 trees. The Shanhaijing translator Anne Birrell exemplifies the difficulties of translating the word langgan in three ways: "pearl-like gems", "red jade", and "precious gem [tree]". First, the "Classic of the Mountains: West" section says Huaijiang 槐江 (lit. "pagoda-tree river") Mountain, located 400 li northeast of Kunlun Mountain, has abundant langgan and other valuable minerals. "On the summit of Mount Carobriver are quantities of green male-yellow 多青雄黃, precious pearl-like gems [藏琅玕], and yellow gold and jade. Granular cinnabar is abundant on its south face and there are quantities of speckled yellow gold and silver on its north face." (2) "Male-yellow" overliterally translates xiónghuáng 雄黃 "realgar; red orpiment". Compare Richard Strassberg's translation: "On the mountain’s heights is much green realgar, the finest quality of Langgan-Stone, yellow gold, and jade. On its southern slope are many grains of cinnabar, while on its northern slope are much glittering yellow gold and silver." Guo Pu's 4th century CE Shanhaijing commentary says langgan shi 石 "stone/gem" (cf. zi 子 "seeds" in the third section) resembles a pearl, and cáng 藏 "store; conceal, hide" means yǐn 隱 "conceal; hide". However, Hao Yixing's 郝懿行 1822 commentary says cáng 藏 was originally written zāng 臧 "good", that is, Huaijiang Mountain has the "best" quality langgan. Second, the "Classic of the Great Wilderness: West" section records that on [Xi] Wang Mu 王母 "Queen Mother [of the West]" Mountain: "Here are the sweet-bloom tree, sweet quince, white weeping willow, the look-flesh creature, the triply-grey horse, precious jade [琁瑰], dark green jade gemstone [瑤碧], the white tree, red jade [琅玕], white cinnabar, green cinnabar, and quantities of silver and iron." (16) Third, the "Classic of Regions Within the Seas: West" section refers to a mythical tricephalic creature dwelling in a fuchangshu 服常樹 (lit. "serve constant tree") who guards a langganshu tree south of Kunlun: "The wears-ever fruit tree—on its crown there is a three-headed person who is in charge of the precious gem tree [琅玕樹]." (11) Interpreters disagree about whether the langgan tree grows alongside the fuchang tree or grows on it. Guo Pu's commentary admits unfamiliarity with the fuchang 服常 tree; Wu Renchen's 17th-century commentary notes the similarity with the shachang 沙棠 "sand-plum tree" that the Huainanzi lists with langgan, but doubts they are the same. Guo's commentary says langgan zi 子 "seeds" or "fruits" resemble pearls (cf. the Shuowen definition) and quotes the Erya that it is found on Kunlun Mountain. The c. 120 BCE Huainanzi "Terrestrial Forms" chapter (4 墬形) describes langgan trees and langgan jade both found on Mt. Kunlun. The first context describes how Yu the Great controlled the Great Flood and "excavated the wastelands of Kunlun [昆侖之球] to make level ground". "Atop the heights of Kunlun are treelike cereal plants [木禾] thirty-five feet tall.
Growing to the west of these are pearl trees [珠樹], jade trees [玉樹], carnelian trees [琁樹], and no-death trees [不死樹]. To the east are found sand-plum trees [沙棠] and malachite trees [琅玕]. To the south are crimson trees [絳樹]. To the north are bi jade trees [碧樹] and yao jade trees [瑤樹]." (4.3, translating with Schafer's "malachite" instead of "coral"). The second context paraphrases the Erya definition (above) of langgan: "The beautiful things of the northwest are the qiu, lin, and langgan jades [球琳琅玕] of the Kunlun Mountains [昆侖]" (4.7), noting that qiu, lin, and langgan are "types of jade, mostly not identifiable with certainty". Medicine Several early classics of traditional Chinese medicine mention langgan. The c. 1st century BCE Huangdi Neijing's Suwen 素問 "Basic Questions" section uses langgan beads to describe a healthy pulse: "When man is serene and healthy the pulse of the heart flows and connects, just as pearls are joined together or like a string of red jade [如循琅玕]—then one can speak of a healthy heart". The c. 2nd century CE Nan Jing explains this langgan bead simile: "[If the qi in] the vessels comes tied together like rings, or as if they were following [in their movement a chain of] lang gan stones [如循琅玕], that implies a normal state." Commentaries elaborate that langgan stones "resemble pearls" and their movement is like a "string of jade- or pearl-like beads". The c. 3rd century CE Shennong Bencaojing lists qīng lánggān 青琅玕 "blue-green langgan" or shízhū 石珠 (lit. "rock pearl") as a mineral drug used to treat ailments such as itchy skin, carbuncle, and ALS. This is one of the rare early references to langgan that treats it as a real substance, while many others make it a feature of the divine world. Alchemy The langgan huadan 琅玕華丹 "Elixir Efflorescence of Langgan" name of the waidan "external alchemy" elixir of immortality is the best-known usage of the word langgan. Some other translations are "Elixir of Langgan Efflorescence", "Lang-Kan (Gem) Radiant Elixir", and "Elixir Flower of Langgan". The earliest method of compounding the elixir is found in the Taiwei lingshu ziwen langgan huadan shenzhen shangjing 太微靈書紫文琅玕華丹神真上經 "Supreme Scripture on the Elixir of Langgan Efflorescence, from the Purple Texts Inscribed by the Spirits of Grand Tenuity". This text was originally part of the Daoist Shangqing School scriptural corpus supposedly revealed to Yang Xi (330-c. 386 CE) between 364 and 370. The Purple Texts alchemical recipe for preparing the Elixir of Langgan Efflorescence involves nine steps in four stages carried out over thirteen years. The first stage produces the Langgan Efflorescence proper, which when ingested is said to make "one's complexion similar to gold and jade and enables one to summon divine beings". The next three stages further refine and transform the Langgan Elixir, repeatedly plant it in the earth, and eventually generate a tree whose fruits confer immortality when eaten, just like those of the legendary langgan tree on Mount Kunlun. Upon completing any of the nine successive steps in producing the elixir, the alchemist (or the adept, in the neidan interpretation) can either ingest the products and obtain immortality by ascending into the realm of the Shangqing heavens, or continue on to the next step with the promise of ever-increasing rewards. The first stage has one complex waidan step of compounding the primary Langgan Efflorescence.
After performing ritual zhāi 齋 "purification practices" for 40 days, the adept spends 60 days acquiring and preparing the elixir's fourteen ingredients, placing them in a crucible, adding mercury on top of them, luting the crucible with several layers of mud, and, after sacrificing wine to the divinities, heating the crucible for 100 days. The elixir's fourteen reagents, given in exalted code names such as "White-Silk Flying Dragon" for quartz, are: cinnabar, realgar, milky quartz, azurite, amethyst, graphite, saltpeter, sulfur, asbestos, mica, iron pyrite, lead carbonate, Turkestan salt (desert lake precipitates containing gypsum, anhydrite, and halite), and orpiment. Based upon these ingredients, Schafer says the end product was probably bluish flint glass with a high lead content. The alchemist can either leave the crucible closed and proceed to the next stage or break it open and consume the langgan elixir that is said to yield marvelous results. The efflorescence should have thirty-seven hues. It is a volatile liquid both brilliant and mottled, a purple aurora darkly flashing. This is called the Elixir of Langgan Efflorescence. If, just at dawn on the first day of the eleventh, fourth, or eighth month, you bow repeatedly and ingest one ounce of this elixir with the water from an east-flowing stream, seven-colored pneumas will rise from your head and your face will have the jadelike glow of metallic efflorescence. If you hold your breath, immediately a chariot from the eight shrouded extents of the universe will arrive. When you spit on the ground, your saliva will transform into a flying dragon. When you whistle to your left, divine Transcendents will pay court to you; when you point to the right, the vapors of Three Elementals will join with the wind. Then, in thousands of conveyances, with myriad outriders, you will fly up to Upper Clarity. The second stage comprises two iterative 100-day waidan alchemical steps transforming the elixir. Firing the unopened stage-one crucible of Langgan Efflorescence for another 100 days will produce the Lunar Efflorescence of the Yellow Solution [黄水月華], which when consumed will make you "change forms ten thousand times, your eyes will become luminous moons, and you will float above in the Grand Void to fly off to the Palace of Purple Tenuity". The next step of firing the closed crucible for an additional 100 days will produce three giant pearls called the Jade Essence of the Swirling Solution [徊水玉精]. Ingesting one alchemical pearl supposedly causes you to immediately give off liquid and fire, form gems with your breath, and your body "will become a sun, and the Thearchs of Heaven will descend to greet you. You will rise as a glowing orb to Upper Clarity." The third stage involves four 3-year steps utilizing the elixirs produced in the first two stages to create fantastic seeds that are replanted and grow into increasingly perfected "spirit trees" with fruits of immortality. This stage falls between conventional waidan alchemy and the horticultural art of growing marvelous zhi 芝 "plants of longevity; fungi" such as the lingzhi mushroom. Initially, the adept mixes the Elixir of Langgan Efflorescence with the Jade Essence of the Swirling Solution, transforming the jīng 精 "essence; sperm; seed" in the latter name into an actual seed that is planted in an irrigated field. After three years it grows into the Tree of Ringed Adamant [環剛樹子] or Hidden Polypore of the Grand Bourne [太極隱芝], which has a ring-shaped fruit like a red jujube.
Next, the adept plants one of the ringed fruits and waters it with the Yellow Solution, and after three years a plant called the Phoenix-Brain Polypore [fengnao zhi 鳳腦芝] will grow like a calabash, with pits like five-colored peaches. Then, a phoenix-brain fruit is planted and watered with Yellow Solution, which after three years will grow into a red tree, like a pine, five or six feet in height, with a jade-white fruit like a pear [赤樹白子]. Lastly, the adept plants the seed of the red tree, waters it with Swirling Solution, and waits another three years for the growth of a vermilion tree like a plum, six or seven feet in height, with a halcyon-blue fruit like the jujube [絳樹青實]. Upon eating this fruit, the adept will ascend to the heaven of Purple Tenuity. The fourth stage involves two comparatively quicker waidan steps. The adept repeatedly boils equal parts of the Yellow Solution and the Swirling Solution, and transforms them into the Blue Florets of Aqueous Yang [水陽青映]. If you drink this at dawn, your body will issue a blue and gemmy light, your mouth will spew forth purple vapors, and you will rise above to Upper Clarity [Shangqing]. But before departing earth, the adept's last step is to mix the remaining Elixir of Langgan Efflorescence with liquefied lead and mercury to produce 50-pound ingots of alchemical silver and purple gold, make incantations to the water spirits, and throw both oblatory ingots into a stream; the thirteen-year schedule implied by these four stages is tallied in the sketch below. Despite the carefully detailed Purple Texts' waidan recipe for preparing langgan elixirs, scholars have doubted that the authors actually meant for it to be produced and consumed. Some interpret the impractical 13-year elixir recipe as symbolic instructions for what later came to be known as neidan meditative visualization: more a "product of religious imagination", drawing on the respected metaphors of alchemical language, than a laboratory manual drawing on the metaphors of meditation. Others believe this "extravagantly impractical recipe" is an attempt to assimilate into conventional waidan alchemy the ancient legends about langgan gems that grow on trees in the paradise of Kunlun. The Shangqing Daoist patriarch Tao Hongjing compiled and edited both the c. 370 Taiwei lingshu ziwen langgan huadan shenzhen shangjing and the c. 499 Zhen'gao 真誥 "Declarations of the Perfected", which also mentions langgan elixirs in some of the same terminology. One context records that the early Daoist masters Yan Menzi 衍門子, Gao Qiuzi 高丘子, and Master Hongyai 洪涯先生 swallowed langgan hua 琅玕華 "langgan blossoms" to feign death and become xian transcendents and enter the "dark region" beyond the world. Needham and Lu proposed that this langgan hua probably refers to a red or green poisonous mushroom, and Knoblock surmised that these "ruby blossoms" were a species of hallucinogenic mushroom connected with the elixir of immortality. Another Zhen'gao context describes how in the Shangqing latter days before the apocalypse (predicted to be in 507) people will practice alchemy to create immortality drugs, including the Langgan Elixir that "will flow and flower in thick billows" and Cloud Langgan. If the adept takes one spatula full of elixir, "their spiritual feathers will spread forth like pinions. Then will they (be able to) peruse the pattern figured on the Vault of Space, and glow forth in the Chamber of Primal Commencement". Several ingredients in the Elixir of Langgan Efflorescence are toxic heavy metals including mercury, lead, and arsenic, and alchemical elixir poisoning was common knowledge in China.
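The thirteen-year figure for the recipe can be checked against the stage durations given above. The following minimal Python sketch is an illustrative tally only, not anything drawn from the source texts: the day counts come from the stages as summarized above, a 365-day year is assumed, and the length of the "comparatively quicker" fourth stage, which the scripture leaves unstated, is treated as negligible.

```python
# Illustrative tally of the stage durations in the Purple Texts' recipe,
# as summarized above. Assumptions: 365-day years; the unstated duration
# of the "comparatively quicker" fourth stage is treated as zero.
DAYS_PER_YEAR = 365

stage_days = {
    # 40 days of purification + 60 days preparing ingredients + 100 days firing
    "stage 1 (compounding the Langgan Efflorescence)": 40 + 60 + 100,
    # two further 100-day firings (Yellow Solution, then Swirling Solution)
    "stage 2 (refining the elixir)": 2 * 100,
    # four plantings of three years each (the spirit trees)
    "stage 3 (growing the spirit trees)": 4 * 3 * DAYS_PER_YEAR,
    # duration unspecified in the text; assumed negligible here
    "stage 4 (Blue Florets and parting oblations)": 0,
}

total_days = sum(stage_days.values())
for stage, days in stage_days.items():
    print(f"{stage}: {days} days")
print(f"total: {total_days} days = about {total_days / DAYS_PER_YEAR:.1f} years")
# Prints a total of 4780 days, about 13.1 years, consistent with the
# thirteen years stated for the nine steps.
```

On this tally, nearly all of the thirteen years is spent in the third, horticultural stage; the purification, preparation, and firing steps account for only about 400 days.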
Academics have puzzled over why Daoist adepts would knowingly consume a compound of mineral poisons, and Michel Strickmann, a scholar of Daoist and Buddhist studies, proposes that langgan elixir was believed to be an agent of self-liberation that guaranteed immortality to the faithful through a kind of ritual suicide. Since early Daoist literature thoroughly, "even rapturously", described the deadly toxic qualities of many elixirs, Strickmann concluded that scholars need to reexamine the Western stereotype of "accidental elixir poisoning" that supposedly applied to "misguided alchemists and their unwitting imperial patrons". Literature Chinese authors extended the classical descriptions of langgan meaning "a highly valued gem from western China; a mythical tree of immortality on Kunlun Mountain" into a literary and poetic metaphor for the exotic beauties of an idealized natural world. Several early writers described langgan jewelry, both real and fictional. The 2nd-century scholar and scientist Zhang Heng described a party for the Han nobility at which guests were delighted with the presentation of bowls overflowing with zhēnxiū 珍羞 "delicacies; exotic foods" including langgan fruits of paradise. The 3rd-century poet Cao Zhi described hanging "halcyon blue" (cuì 翠) langgan from the waist of his "beautiful person", and the 5th-century poet Jiang Yan adorned a goddess with gems of langgan. Some other authors reinforced use of its name to refer to divine fruits on heavenly trees. Ruan Ji, one of the Seven Sages of the Bamboo Grove, wrote a 3rd-century poem titled "Dining at Sunrise on Langgan Fruit". The 8th-century poet Li Bai wrote about a famished but proud fenghuang that would not deign to peck at bird food, but like a Daoist adept, would scorn all but a diet of langgan. This represents a literary transition from glittering fruit of distant Kunlun, to aristocratic fare in golden bowls, and eventually to an elixir of immortality. A further extension of the langgan metaphor was to describe natural images of beautiful crystals and lush vegetation. For example, Ban Zhao's poem on "The Arrival of Winter" says, "The long [Yellow River] forms (crystalline) langgan [written langan 瓓玕] / Layered ice is like banked-up jade". Two of Du Fu's poems figuratively used the word langgan in reference to the vegetation around the forest home of a Daoist recluse, and to the splendid grass that provided seating for guests at a royal picnic near a mysterious grotto. Bamboo was the most typical representative of blue-green langgan in the plant world; compare láng 筤 ("bamboo radical" and the liáng phonetic in láng 琅) "young bamboo; blue". Liu Yuxi wrote that the famous spotted bamboo of South China was "langgan colored". Geographic sources Chinese texts list many diverse locations where langgan occurred. Several classical works associate mythical langgan trees with Kunlun Mountain (far west or northwest China), and two give sources of actual langgan gemstones: the Shujing says it was tribute from Yong Province (present-day Gansu and Shaanxi), and the Guanzi names the Kunlun desert (Xinjiang and Tibet). Official Chinese histories record langgan coming from different sources. The 3rd-century Weilüe, 5th-century Hou Hanshu, 6th-century Wei shu, and 7th-century Liang shu list langgan among the products of Daqin, which depending on context meant the Near East or the Eastern Roman Empire, especially Syria.
The Liang shu also says it was found in Kucha (modern Aksu Prefecture, Xinjiang), the 7th-century Jinshu says in Shaanxi, and the 10th-century Tangshu says in India. The Jiangnan Bielu history of the Southern Tang (937–976) says langgan was mined at Pingze 平澤 in Shu (Sichuan Province). The Daoist scholar and alchemist Tao Hongjing (456-536) notes that the langgan gemstone was traditionally associated with Sichuan. The Tang pharmacologist Su Jing 蘇敬 (d. 674) reports that it came from the distant Man tribes of the Yunnan–Guizhou Plateau and Hotan/Khotan. Accurately identifying geographic sources may be complicated by langgan referring to more than one mineral, as discussed next. Identifications The precise referent of the Chinese name langgan 琅玕 is uncertain in the present day. Scholars have described it as an "enigmatic archaism of politely pleasant or poetic usage", and "one of the most elusive terms in Chinese mineralogy". Identifications of langgan fall into at least three categories: blue-green langgan, first recorded circa the 4th century BCE; coral langgan, from the 8th century CE; and red langgan, from an uncertain date. Edward H. Schafer, an eminent scholar of Tang dynasty literature and history, discussed langgan in several books and articles. His proposed identifications gradually changed from Mediterranean red coral, to coral or a glass-like gem, to chrysoprase or demantoid, to coral or red spinel, and ultimately to malachite. Blue-green langgan Langgan was a qīng 青 "green; blue; greenish black" (see Blue–green distinction in language) gemstone of lustrous appearance mentioned in numerous classical texts. They listed it among historical imperial tribute products presented from the far western regions of China, and as the mineral-fruit of the legendary langgan trees of immortality on Mount Kunlun. Schafer's 1978 monograph on langgan sought to identify the treasured blue-green gemstone, if it ever had a unique identity, and concluded that the most plausible identification is malachite, a bright green mineral that was anciently used as a copper ore and an ornamental stone. Two early Chinese mineralogical authorities identified langgan as malachite, commonly called kǒngquèshí 孔雀石 (lit. "peacock stone") or shílǜ 石綠 (lit. "stone green"). Comparing blue-green stones that were known in early East Asia, Schafer disqualified several conceivable identities; demantoid garnet and green tourmaline are rarely of gem quality, while neither apple-green chrysoprase nor light greenish-blue turquoise typically has dark hues. This leaves malachite: "This handsome green carbonate of copper has important credentials. It is often found in copper mines, and is therefore regularly at the disposal of copper- and bronze-producing peoples. It has, in certain varieties, a lovely silky luster, caused by its fibrous structure. It is soft and easily cut. It takes a good polish. It was commonly made into beads both in the western and eastern worlds. Above all, even uncut malachite often has a nodular or botryoidal structure, like little clumps of bright green beads, one of the classical forms attributed to lang-kan. Sometimes, too, it is stalactitic, like little stone trees." Furthermore, archeology confirms that malachite was an important gemstone of pre-Han China. Inlays of malachite and turquoise decorated many early Chinese bronze weapons and ritual vessels. Tang sources continued to record blue-green langgan.
Su Jing's 652 Xinxiu bencao 新修本草 said it was a glassy substance similar to liúli 琉璃 "colored glaze; glass; glossy gem" that was imported from the Man tribes in the Southwest and from Khotan. In 762, Emperor Daizong of Tang proclaimed a new era name of Baoying 寶應 "Treasure Response" in honor of the discovery of thirteen auspicious treasures in Jiangsu, one of which was glassy langgan beads. Coral langgan Tang dynasty herbalists and pharmacists changed the denotation of langgan from the traditional blue-green gemstone to a kind of coral. Chen Cangqi's c. 720 Bencao shiyi 本草拾遺 "Collected Addenda to the Pharmacopoeia" described it as a pale red coral, growing like a branched tree on the bottom of the sea and fished by means of nets, which after coming out of the water gradually darkens and turns blue. Langgan already had an established connection with coral. Chinese mythology matches two antipodean paradises of Mount Kunlun in the far west and Mount Penglai located on an island in the far eastern Bohai Sea. Both mountains had mythic plants and trees of immortality that attracted Daoist xian transcendents; Kunlun's red langgan trees with blue-green fruits were paralleled by Penglai's shanhu shu 珊瑚樹 "red coral trees". As for what variety of blue or green branching coral was identified as this "mineralized subaqueous shrub" langgan: since it must have been a coral attractive enough to be comparable with the extravagant myths of Kunlun, Schafer suggests considering the blue coral Heliopora coerulea. It is the only living species in the family Helioporidae, the only octocoral known to produce a massive skeleton, and is found throughout the Pacific and Indian Oceans, although the IUCN currently considers it a vulnerable species. Du Wan's c. 1124 Yunlin shipu mineralogy book has a section (100) on langgan shi 琅玕石 that mentions shanhu "coral": "A coral-like stone found in shallow water along the coast of Ningbo, Zhejiang. Some specimens are two or three feet high. They must be pulled up by ropes let down from rafts. Though white when first taken from the water, they turn a dull purple after a while. They are patterned everywhere with circles, like ginger branches, and are rather brittle. Though the natives hold …" Li Shizhen's 1578 Bencao Gangmu classic pharmacopeia objects to applying the term langgan to these marine invertebrates, which should properly be called shanhu, while langgan should only be applied to the stone occurring in the mountains. Li's commentary suggests that the terminological confusion arose from the Shuowen jiezi definition of shanhu 珊瑚: 色赤生於海或生於山 "coral is red colored and grows in the ocean or in the mountains". This puzzling description of mountain corals was more likely a textual misunderstanding than a reference to coral fossils. Red langgan The most recent, and least historically documented, identification of langgan is a red gemstone. The Chinese geologist Chang Hung-Chao (Zhang Hongzhao) propagated this explanation when his book about geological terms in Chinese literature identified langgan as malachite, and noted an alternative construal of reddish spinel or balas ruby from the famous mines at Badakhshan. Some authors have cited Chang's balas ruby identification of langgan; others have used it for, or even confused it with, ruby in translations (e.g., "precious rubies"). However, Schafer demonstrates that Chang's "supposed" textual evidence for red langgan is tenuous and suggests that Guo Pu's Shanhai jing commentary created this mineralogical confusion.
Guo glosses the langgan tree as red, but it is unclear whether this refers to the tree itself or its gem-like fruit. Compare Birrell's and Bokenkamp's Shanhai jing translations of "red jade" and "green kernels from scarlet gem trees". Chang misquotes dan'gan 丹矸 "cinnabar rock" from the Xunzi as dan'gan 丹玕 "cinnabar gan", and cites one textual occurrence of the term. The Shangqing Daoist Dadong zhenjing 大洞真經 Authentic Scripture of the Great Cavern records a heavenly palace named Dan'gan dian 丹玕殿 Basilica of the Cinnabar Gan. Admitting the possibility of interpreting gan 玕 as a monosyllabic truncation for langgan 琅玕, comparable with reading hongpo 红珀 for honghupo 红琥珀 "red amber", Schafer concludes there is insufficient dan'gan evidence for an explicit red variety of langgan. The lyrical term langgan occurs 87 times in the huge Complete Tang Poems collection of Tang poetry, with only two hong langgan 紅琅玕 "red langgan" usages by the Buddhist monk-poets Guanxiu (831-912) and Ji Qi 齊己 (863-937). Both poems use langgan to describe "red coral"; the latter (贈念法華經) uses shanhu in the same line: 珊瑚捶打紅琅玕 "coral beating on red langgan" in cold waters. Dictionary translations Chinese-English dictionaries illustrate the multifaceted difficulties of identifying langgan. Most of these bilingual Chinese dictionaries cross-reference lang and gan to langgan, but a few translate lang and gan independently. In terms of Chinese word morphology, láng 琅 is a free morpheme that can appear alone (for instance, as a surname) or in other compound words (such as fàláng 琺琅 "enamel" and Lángyá shān 琅琊山 "Mount Langya (Anhui)"), while gān 玕 is a bound morpheme that only occurs in the compound lánggān and does not have independent meaning. The origin of Giles' lang translation "a kind of white carnelian" is unknown, unless it derives from Williams' "a whitish stone". It was copied in Mathews' and various other Chinese dictionaries up to the online standard Unihan Database "a variety of white carnelian; pure". "White carnelian" is a marketing name for "white or whitish chalcedony of faint carnelian color". Carnelian is usually reddish-brown while common chalcedony colors are white, grey, brown, and blue. References Footnotes External links Taiwei lingshu ziwen langgan huadan shenzhen shangjing 太微靈書紫文琅玕華丹神真上經, 1445 Ming Dynasty edition Zhengtong daozang 正統道藏 Alchemical substances Chinese alchemy Chinese mythology Gemstones Mythological objects
Langgan
[ "Physics", "Chemistry" ]
8,102
[ "Materials", "Alchemical substances", "Gemstones", "Matter" ]
54,442,169
https://en.wikipedia.org/wiki/Amcinafide
Amcinafide (developmental code name SQ-15112), also known as triamcinolone acetophenide, is a synthetic glucocorticoid corticosteroid which was never marketed. References Acetophenides Corticosteroid cyclic ketals Diols Organofluorides Glucocorticoids Pregnanes Abandoned drugs
Amcinafide
[ "Chemistry" ]
78
[ "Drug safety", "Abandoned drugs" ]
54,442,170
https://en.wikipedia.org/wiki/Cicortonide
Cicortonide is a synthetic glucocorticoid corticosteroid which was never marketed. References Acetate esters Acetonides Secondary alcohols Corticosteroid cyclic ketals Diketones Organofluorides Glucocorticoids Nitriles Organochlorides Pregnanes
Cicortonide
[ "Chemistry" ]
69
[ "Nitriles", "Functional groups" ]
54,442,181
https://en.wikipedia.org/wiki/Procinonide
Procinonide (developmental code RS-2362; also known as fluocinolone acetonide propionate) is a synthetic glucocorticoid corticosteroid which was never marketed. References Acetonides Secondary alcohols Corticosteroid cyclic ketals Corticosteroid esters Organofluorides Glucocorticoids Pregnanes Propionate esters Diketones Abandoned drugs
Procinonide
[ "Chemistry" ]
94
[ "Drug safety", "Abandoned drugs" ]